content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
What is Loss-Versus-Rebalancing (LVR)? - CoW DAO
What is Loss-Versus-Rebalancing (LVR)?
Providing liquidity to automated market makers (AMMs) like Uniswap has become one of the most popular ways to earn yield in crypto.
While other yield-bearing methods like staking and lending depend on protocol economics and aggregate market demand, there is always alpha in providing liquidity. Between new token launches,
incentive programs from ecosystem grants, and ever-evolving AMM models, liquidity providing is one of the most mature sectors for earning returns on crypto assets.
However, liquidity providing has one major downside: it is subject to a pernicious form of price exploitation known as loss-versus-rebalancing (LVR).
LVR is a form of maximal extractable value (MEV) responsible for more price exploitation than all other forms of MEV combined. Many liquidity providers haven’t even heard of LVR, but it costs them
5–7% of their liquidity, resulting in hundreds of millions lost each year. In fact, when accounting for LVR, many of the largest liquidity pools are not profitable for LPs at all.
How can this be? Let’s examine how liquidity providing works to learn what loss-versus-rebalancing is and where it comes from.
What is LVR
First coined by a team of researchers from Columbia University, LVR is a form of arbitrage that occurs whenever an AMM has an outdated (stale) price in comparison to some other trading venue.
Arbitrageurs exploit this difference by trading from the AMM to the more liquid exchange (usually a centralized exchange like Binance), correcting the arbitrage and extracting value from LPs in the
To better understand LVR as a process, however, we first need to understand the nature of automated market makers, as well as the pre-cursor to LVR — impermanent loss.
The CF-AMM Design
Most AMMs (such as Uniswap) are examples of constant function automated market makers—CF-AMMs.
These AMMs take in two assets and automatically re-calculate prices after each trade to ensure ample liquidity at all times. As one asset gets depleted, the other must trade at a higher price in
order to maintain a constant ratio where the value of all “A” tokens in a pool is equal to the value of all “B” tokens.
CF-AMMs use the constant product function x*y=k to calculate the ratio of assets, where x and y are the two tokens and k is the constant maintained. Thus, all trades on a CF-AMM can be mapped on the
constant product function curve:
Liquidity Positions
Let’s look at an example of providing liquidity which will help illustrate the design (and shortcomings) of CF-AMMs.
Let’s say that the price of ETH is $1,000 and we have 2 ETH worth $2,000 that we would like to earn yield on. We can start by providing liquidity to, say, an ETH-USDC pool. In order to do this, we
must first split the value of our holdings into 50% ETH and 50% USDC (remember, most AMMs require LP positions to be split 50/50 between your two assets.)
LP Position:
1 ETH $1,000 USDC
Total value: $2,000
Over time, we earn LP fees on this position for our service of providing liquidity to traders. So our $2,000 is slowly growing by just sitting in a liquidity pool… sounds great, right?
Loss-Versus-Holding (Impermanent Loss)
In our example, we’re making free money off LP fees up until the moment ETH starts to move in price. This is where we risk incurring “impermanent loss” which can be thought of as the opportunity cost
of our LP position compared to what we would have had if we just held the assets (without putting them in a liquidity pool). That’s right—LPing is not automatically better.
Let’s take a look at how this shakes out for our particular example:
In an extreme scenario, let’s say the price of ETH jumps from $1,000 per ETH to $4,000 per ETH. The liquidity pool will adjust so that our assets are still 50% ETH and 50% USDC.
To calculate the new position, we use the formula x=√(p1ETH∗p2ETH), where p1ETH represents the original price of ETH and p2ETH represents the new price of ETH. Solving for x will give us the updated
(dollar-denominated) value of each asset after the AMM re-calculates prices back to 50/50 in this new price environment.
The sqrt(1000*4000) = 2,000, meaning that the dollar value of each asset in our liquidity pool will be $2,000 after the recalculation. This leaves us with $2,000 USDC, and 0.5 ETH in our liquidity
pool, for a total of $4,000. We started out with $2,000 and we end up with $4,000 — doubling our money.
Original LP Position:
1 ETH $1,000 USDC
Total value: $2,000
New LP Position:
0.5 ETH $2,000 USDC
Total value: $4,000
That’s great! …until we look at what we could have made if we had just held our 1 ETH and $1,000 USDC without ever putting them in a liquidity pool in the first place.
If we had held, our 1 ETH would be worth $4,000, which in addition to our original $1,000 USDC would leave us with $5,000 net. That’s $1,000 more than what we end up with through the liquidity pool!
Original LP Position:
1 ETH $1,000 USDC
Total value: $2,000
New LP Position:
0.5 ETH $2,000 USDC
Total value: $4,000
Holding Position:
1 ETH $1,000 USDC
Total value: $5,000
This difference is known as “divergence loss,” or, more colloquially, “impermanent loss.”
It’s “impermanent” because if the price of the assets were to return to their original value ($1,000 per ETH in our example) the loss would be reversed and the value of our LP position (excluding
yield earned) would be equal to the value of a portfolio where we had just held. So impermanent loss can also be conceptualized as “loss-versus-holding.”
If we begin thinking about counterfactual scenarios where rather than LPing, we had taken some other course of action with our money, a world of possibilities opens up. The first,
“loss-versus-holding,” we just covered. The second, “loss-versus-rebalancing,” is the topic at hand that we now turn to.
Loss-Versus-Rebalancing (LVR)
In the case of impermanent loss, if an asset begins and ends a time period at the same price, there is no loss incurred by the LPs. However, it turns out that there is a portfolio scenario which
would have made money even in the case of an asset beginning at and returning to the same price. This would be a “rebalancing” portfolio.
Rather than leaving your money in an AMM or simply holding, in a rebalancing portfolio you trade your way up the price curve when it’s going up and trade down when it’s going down, effectively
“rebalancing” your portfolio in real time.
“Loss-versus-rebalancing” is the difference in value between an LP portfolio and a rebalancing portfolio.
Let’s look at another example.
LVR in numbers
Say we start with an LP position of 1 ETH and $1,000 USDC.
Over time, the price of ETH jumps from $1,000 to $2,000. Using the x=√(p1ETH∗p2ETH) formula, we can calculate that our new LP holdings would consist of 0.71 ETH and $1,414 USDC, meaning we sold 0.29
ETH for $414 during the AMM recalculation. This gives us a total portfolio value of $2,828 and a loss-versus-holding value of $172.
After some more time, the price of ETH goes back down to $1,000 where it started. Here, the values of both our LP portfolio (minus fees) and our holding portfolio are equal at $2,000, with a
loss-versus-holding of $0.
Original LP Position:
1 ETH $1,000 USDC
Total value: $2,000
LP Position After Price Move:
0.71 ETH $1,414 USDC
Total value: $2,828
LP Position After Price Reversion:
1 ETH $1,000 USDC
Total value: $2,000
Holding Position: 1 ETH $1,000 USDC
Total value: $2,000
But what about the loss-versus-rebalancing?
When compared to a rebalancing portfolio executed on a liquid exchange such as Binance, our LP and holding portfolios underperform — even in the case where prices return to their original value.
The rebalancing portfolio starts at the same 1 ETH and 1,000 USDC. When the price of ETH goes to $2,000, we copy the LP trade (selling 0.29 ETH), but we do so at the current market price of $2,000
through a liquid non-AMM exchange, earning $580 for the trade instead of the $414 we got on the AMM, for a net LVR of $166. Copying the same trade on the way down, we buy back our 0.29 ETH for $290,
instead of the $414 we would have to pay through the AMM, netting another $124 in LVR.
Original Rebalancing Portfolio Position:
1 ETH $1,000 USDC
Total value: $2,000
Rebalancing Portfolio Position After Price Move:
0.71 ETH $1,580 USDC
Total value: $3,160
Rebalancing Portfolio Position After Price Reversion:
1 ETH $1,290 USDC
Total value: $2,290
So the performance of our three portfolios (ignoring LP fees) looks like this:
The LVR math
For any given price movement, LVR can be calculated using the formula “a(p-q)” where a is the quantity of the asset being sold, p is the “real” market price, and q is the “stale” AMM price. (Note:
“a” is a positive number when selling and a negative number when buying.)
Unlike impermanent loss, LVR depends on price trajectory and volatility. This intuitively makes sense… the more price moves there are, the more opportunities for arbitrage. The volatility of a given
market can be derived using the famous “Black-Scholes” options-pricing model, and is represented by σ — a lowercase sigma.
For CF-AMMs, instantaneous LVR can be calculated as σ²/8. There are also more complex LVR formulas which sum instantaneous LVR to find the overall LVR of any given AMM. Find these formulas, along
with a technical explanation of how the numbers were derived, in this video and this blog post by Anthony Lee Zhang, a co-author of the original LVR paper.
LP Returns in Practice
So far, we have the concept of LVR but we’ve ignored one very important factor: LP fees. After all, LPs don’t deposit money into AMMs out of charity, they do it to earn yield.
Now that we have a reliable formula for calculating loss-versus-rebalancing, we can look at historical data to answer the question “How often do LP returns exceed LVR losses?”
Using the LVR formula, the authors of the original paper concluded that, in an environment of 5% asset volatility, a Uniswap pool would need to turnover 10% of its entire volume each day in order for
LP fees of 30 basis points to offset LVR losses.
However, we can get even more specific by retroactively looking at historical data. DeFi researcher Atis Elsts, wrote up a retrospective of 2023 LP returns that shows real data on loss-versus-holding
and loss-versus-rebalancing for ETH throughout the year.
2023 Loss-Versus-Holding
ETH began the year at around $1,200 and closed around $2,300. LPs outperformed a holding portfolio of 50/50 ETH/USDC by an aggregate 1–2% after fees.
Images: 2023 LP versus holding strategies & 2023 LP PnL over holding strategy
2023 Loss-Versus-Rebalancing
Mapped alongside volatility, we can see that, as expected, loss-versus-rebalancing is highly correlated.
Elsts mapped LVR (red line) versus expected fees on WETH/USDC Uniswap v2 positions and WETH/USDC v3 positions across three fee tiers. We can see that v3 LP positions closely tracked LVR, while v2
positions largely outperformed it.
According to Elsts, the reason for the large discrepancy between v3 and v2 positions is that v2 sees primarily “core” (non-arbitrage) trading volume while a majority of v3 trading volume is
Images: Uniswap v2 volume makeup & Uniswap v3 volume makeup
So here we finally get to the $500 million question: How can LPs protect themselves from LVR?
Solutions to LVR
There are multiple proposed solutions to the LVR problem, each with its own set of tradeoffs.
Oracle designs: One of the most popular suggestions for solving the LVR problem involves using price oracles to feed AMMs the most up-to-date prices for assets. The oracle model is well-tested in
DeFi, but an oracle-informed AMM would require significant design changes to current AMM models and introduce new problems around ensuring that the oracle always provides the most current price data.
Decreasing block times: Some proposals argue that increasing trade frequency by decreasing block times will result in even more granular arbitrage opportunities. This would theoretically increase the
number of trades that arbitrageurs have to make, with the larger LP returns hopefully outpacing LVR.
Batch auction designs: While decreasing block times might mitigate the price lag AMMs experience, going in the opposite direction sounds even more promising. First proposed by Andrea Canidio and
Robin Fritsch in the paper “Arbitrageurs’ profits, LVR, and sandwich attacks: batch trading as an AMM design response,” a new category of “Function-Maximizing” AMMs forces traders to trade against
AMMs in batches, creating a competition between arbitrageurs which eliminates LVR.
CoW AMM
The CoW AMM is the first production-ready implementation of an FM-AMM and it’s designed to capture LVR on behalf of LPs, now live on Balancer.
Backtesting research conducted for a 6 month period during 2023 shows that CoW AMM returns would have equaled or outperformed CF-AMM returns for 10 of the 11 most liquid, non-stablecoin pairs.
The CoW AMM works by having solvers compete with each other for the right to trade against the AMM.
Whenever there is an arbitrage opportunity solvers bid to rebalance the CoW AMM pools. The winning solver is the one that moves the AMM curve higher.
Learn more about the CoW AMM here, and contact us if you are interested in setting up a liquidity pool.
Share article | {"url":"https://cow.fi/learn/what-is-loss-versus-rebalancing-lvr","timestamp":"2024-11-13T19:17:54Z","content_type":"text/html","content_length":"758489","record_id":"<urn:uuid:06ac8f6e-0762-4ab5-86bc-32137150798f>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00182.warc.gz"} |
python-igraph API reference
class documentation
Class representing a sequence of vertices in the graph.
This class is most easily accessed by the vs field of the Graph object, which returns an ordered sequence of all vertices in the graph. The vertex sequence can be refined by invoking the
VertexSeq.select() method. VertexSeq.select() can also be accessed by simply calling the VertexSeq object.
An alternative way to create a vertex sequence referring to a given graph is to use the constructor directly:
>>> g = Graph.Full(3)
>>> vs = VertexSeq(g)
>>> restricted_vs = VertexSeq(g, [0, 1])
The individual vertices can be accessed by indexing the vertex sequence object. It can be used as an iterable as well, or even in a list comprehension:
>>> g=Graph.Full(3)
>>> for v in g.vs:
... v["value"] = v.index ** 2
>>> [v["value"] ** 0.5 for v in g.vs]
[0.0, 1.0, 2.0]
The vertex set can also be used as a dictionary where the keys are the attribute names. The values corresponding to the keys are the values of the given attribute for every vertex selected by the
>>> g=Graph.Full(3)
>>> for idx, v in enumerate(g.vs):
... v["weight"] = idx*(idx+1)
>>> g.vs["weight"]
[0, 2, 6]
>>> g.vs.select(1,2)["weight"] = [10, 20]
>>> g.vs["weight"]
[0, 10, 20]
If you specify a sequence that is shorter than the number of vertices in the VertexSeq, the sequence is reused:
>>> g = Graph.Tree(7, 2)
>>> g.vs["color"] = ["red", "green"]
>>> g.vs["color"]
['red', 'green', 'red', 'green', 'red', 'green', 'red']
You can even pass a single string or integer, it will be considered as a sequence of length 1:
>>> g.vs["color"] = "red"
>>> g.vs["color"]
['red', 'red', 'red', 'red', 'red', 'red', 'red']
Some methods of the vertex sequences are simply proxy methods to the corresponding methods in the Graph object. One such example is VertexSeq.degree():
>>> g=Graph.Tree(7, 2)
>>> g.vs.degree()
[2, 3, 3, 1, 1, 1, 1]
>>> g.vs.degree() == g.degree()
Method __call__ Shorthand notation to select()
Method attributes Returns the list of all the vertex attributes in the graph associated to this vertex sequence.
Method find Returns the first vertex of the vertex sequence that matches some criteria.
Method select Selects a subset of the vertex sequence based on some criteria
Inherited from VertexSeq:
Method __new__ Create and return a new object. See help(type) for accurate signature.
Method attribute_names Returns the attribute name list of the graph's vertices
Method get_attribute_values Returns the value of a given vertex attribute for all vertices in a list.
Method set_attribute_values Sets the value of a given vertex attribute for all vertices
Method _reindex_names Re-creates the dictionary that maps vertex names to IDs.
def __call__
(self, *args, **kwds):
Shorthand notation to select()
This method simply passes all its arguments to VertexSeq.select().
Returns the list of all the vertex attributes in the graph associated to this vertex sequence.
def find
(self, *args, **kwds):
Returns the first vertex of the vertex sequence that matches some criteria.
The selection criteria are equal to the ones allowed by VertexSeq.select. See VertexSeq.select for more details.
For instance, to find the first vertex with name foo in graph g:
>>> g.vs.find(name="foo") #doctest:+SKIP
To find an arbitrary isolated vertex:
>>> g.vs.find(_degree=0) #doctest:+SKIP
def select
(self, *args, **kwds):
Selects a subset of the vertex sequence based on some criteria
The selection criteria can be specified by the positional and the keyword arguments. Positional arguments are always processed before keyword arguments.
• If the first positional argument is None, an empty sequence is returned.
• If the first positional argument is a callable object, the object will be called for every vertex in the sequence. If it returns True, the vertex will be included, otherwise it will be excluded.
• If the first positional argument is an iterable, it must return integers and they will be considered as indices of the current vertex set (NOT the whole vertex set of the graph -- the difference
matters when one filters a vertex set that has already been filtered by a previous invocation of VertexSeq.select(). In this case, the indices do not refer directly to the vertices of the graph
but to the elements of the filtered vertex sequence.
• If the first positional argument is an integer, all remaining arguments are expected to be integers. They are considered as indices of the current vertex set again.
Keyword arguments can be used to filter the vertices based on their attributes. The name of the keyword specifies the name of the attribute and the filtering operator, they should be concatenated by
an underscore (_) character. Attribute names can also contain underscores, but operator names don't, so the operator is always the largest trailing substring of the keyword name that does not contain
an underscore. Possible operators are:
• eq: equal to
• ne: not equal to
• lt: less than
• gt: greater than
• le: less than or equal to
• ge: greater than or equal to
• in: checks if the value of an attribute is in a given list
• notin: checks if the value of an attribute is not in a given list
For instance, if you want to filter vertices with a numeric age property larger than 200, you have to write:
>>> g.vs.select(age_gt=200) #doctest: +SKIP
Similarly, to filter vertices whose type is in a list of predefined types:
>>> list_of_types = ["HR", "Finance", "Management"]
>>> g.vs.select(type_in=list_of_types) #doctest: +SKIP
If the operator is omitted, it defaults to eq. For instance, the following selector selects vertices whose cluster property equals to 2:
>>> g.vs.select(cluster=2) #doctest: +SKIP
In the case of an unknown operator, it is assumed that the recognized operator is part of the attribute name and the actual operator is eq.
Attribute names inferred from keyword arguments are treated specially if they start with an underscore (_). These are not real attributes but refer to specific properties of the vertices, e.g., its
degree. The rule is as follows: if an attribute name starts with an underscore, the rest of the name is interpreted as a method of the Graph object. This method is called with the vertex sequence as
its first argument (all others left at default values) and vertices are filtered according to the value returned by the method. For instance, if you want to exclude isolated vertices:
>>> g = Graph.Famous("zachary")
>>> non_isolated = g.vs.select(_degree_gt=0)
For properties that take a long time to be computed (e.g., betweenness centrality for large graphs), it is advised to calculate the values in advance and store it in a graph attribute. The same
applies when you are selecting based on the same property more than once in the same select() call to avoid calculating it twice unnecessarily. For instance, the following would calculate betweenness
centralities twice:
>>> edges = g.vs.select(_betweenness_gt=10, _betweenness_lt=30)
It is advised to use this instead:
>>> g.vs["bs"] = g.betweenness()
>>> edges = g.vs.select(bs_gt=10, bs_lt=30)
the new, filtered vertex sequence | {"url":"https://igraph.org/python/api/0.9.6/igraph.VertexSeq.html","timestamp":"2024-11-04T04:46:52Z","content_type":"text/html","content_length":"40914","record_id":"<urn:uuid:fddebef7-c346-4e94-a589-535a015375af>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00871.warc.gz"} |
JEE Main Physics Syllabus 2024 - Download Syllabus PDF for FREE
JEE Main Physics Syllabus 2024
On this page, you can get the JEE Main Physics syllabus for 2024, which may be seen and downloaded. Students are recommended to look over the JEE Main Physics syllabus in order to construct a strong
and effective preparation strategy for the upcoming session. JEE aspirants can also read the syllabus to learn vital facts such as the course objectives, important chapters and themes, reference
materials, and more. Furthermore, candidates who have a thorough understanding of the curriculum will have more control over their learning.
JEE Main 2024 Physics Syllabus PDF
The NTA released the syllabus for JEE Main 2024. Thus, students can start their preparation by referring to the syllabus provided here.
JEE Mains Physics Syllabus: Section A
The syllabus contains two Sections – A and B. Section A contains the theory part having 80% weightage, and Section B contains the practical component (experimental skills), having 20 % Weightage.
Unit 1: Physics and Measurement
Physics, technology, and society, S I Units, fundamental and derived units, least count, accuracy and precision of measuring instruments, Errors in measurement, Dimensions of Physics quantities,
dimensional analysis, and its applications.
Unit 2: Kinematics
The frame of reference, motion in a straight line, Position- time graph, speed and velocity; Uniform and non-uniform motion, average speed and instantaneous velocity, uniformly accelerated motion,
velocity-time, position-time graph, relations for uniformly accelerated motion, Scalars and Vectors, Vector. Addition and subtraction, zero vector, scalar and vector products, Unit Vector, Resolution
of a Vector. Relative Velocity, Motion in a plane, Projectile Motion, Uniform Circular Motion.
Unit 3: Laws of Motion
Force and inertia, Newton’s First law of motion; Momentum, Newton’s Second Law of motion, Impulses; Newton’s Third Law of motion. Law of conservation of linear momentum and its applications.
Equilibrium of concurrent forces. Static and Kinetic friction, laws of friction, rolling friction. Dynamics of uniform circular motion: centripetal force and its applications.
Unit 4: Work, Energy and Power
Work done by a content force and a variable force; Kinetic and potential energies, Work-energy theorem, Power.
The potential energy of spring conservation of mechanical energy, conservative and neoconservative forces; Elastic and inelastic collisions in one and two dimensions.
Unit 5: Rotational Motion
Centre of the mass of a two-particle system, Centre of the mass of a rigid body; Basic concepts of rotational motion; A moment of a force; Torque, Angular momentum, Conservation of angular momentum
and its applications; The moment of inertia, The radius of gyration.
Values of moments of inertia for simple geometrical objects, Parallel and perpendicular axes theorems and their applications. Rigid body rotation equations of rotational motion.
Unit 6: Gravitation
The universal law of gravitation. Acceleration due to gravity and its variation with altitude and depth. Kepler’s law of planetary motion. Gravitational potential energy; gravitational potential.
Escape velocity, Orbital velocity of a satellite. Geo stationary satellites.
Unit 7: Properties of Solids and Liquids
Elastic behaviour, Stress-strain relationship, Hooke’s Law. Young’s modulus, bulk modulus, modulus of rigidity. Pressure due to a fluid column; Pascal’s law and its applications. Viscosity. Stokes’
law. Terminal velocity, streamline and turbulent flow. Reynolds number. Bernoulli’s principle and its applications. Surface energy and surface tension, angle of contact, application of surface
tension – drops, bubbles and capillary rise. Heat, temperature, thermal expansion; Specific heat capacity, calorimetry; Change of state, latent heat. Heat transfer-conduction, convection and
radiation. Newton’s law of cooling.
Unit 8: Thermodynamics
Thermal equilibrium, Zeroth law of thermodynamics, The concept of temperature. Heat, work and internal energy. The first law of thermodynamics. The second law of thermodynamics: reversible and
irreversible processes. Carnot engine and its efficiency.
Unit 9: Kinetic Theory of Gases
Equation of state of a perfect gas, work done on compressing a gas, Kinetic theory of gases – assumptions, the concept of pressure. Kinetic energy and temperature: RMS speed of gas molecules: Degrees
of freedom. Law of equipartition of energy, Applications to specific heat capacities of gases; Mean free path. Avogadro’s number.
Unit 10: Oscillations and Waves
Periodic motion – period, frequency, displacement as a function of time. Periodic functions. Simple harmonic motion (S.H.M.) and its equation; Phase: oscillations of a spring -restoring force and
force constant: energy in S.H.M. – Kinetic and potential energies; Simple pendulum – derivation of expression for its time period: Free, forced and damped oscillations, resonance.
Wave motion. Longitudinal and transverse waves, speed of a wave. Displacement relation for a progressive wave. Principle of superposition of waves, a reflection of waves. Standing waves in strings
and organ pipes, Fundamental mode and harmonics. Beats. Doppler Effect in sound.
Unit 11: Electrostatics
Electric charges: Conservation of charge. Coulomb’s law-forces between two point charges, Forces between multiple charges: superposition principle and continuous charge distribution.
Electric field: Electric field due to a point charge, Electric field lines. Electric dipole, Electric field due to a dipole. Torque on a dipole in a uniform electric field.
Electric flux. Gauss’s law and its applications to find field due to infinitely long uniformly charged straight wire, uniformly charged infinite plane sheet and uniformly charged thin spherical
shell. Electric potential and its calculation for a point charge, electric dipole and system of charges; Equipotential surfaces, Electrical potential energy of a system of two point charges in an
electrostatic field.
Conductors and insulators. Dielectrics and electric polarization, Capacitor, The combination of capacitors in series and parallel, Capacitance of a parallel plate capacitor with and without
dielectric medium between the plates. Energy stored in a capacitor.
Unit 12: Current Electricity
Electric current. Drift velocity. Ohm’s law. Electrical resistance. Resistances of different materials. V-l characteristics of Ohmic and non-ohmic conductors. Electrical energy and power. Electrical
resistivity. Colour code for resistors; Series and parallel combinations of resistors; Temperature dependence of resistance.
Electric cell and its Internal resistance, Potential difference and emf of a cell, A combination of cells in series and parallel. Kirchhoff’s laws and their applications. Wheatstone bridge. Metre
Bridge. Potentiometer – principle and its applications.
Unit 13: Magnetic Effects of Current and Magnetism
Biot – Savart law and its application to current carrying circular loop. Ampere’s law and its applications to infinitely long current carrying straight wire and solenoid. Force on a moving charge in
uniform magnetic and electric fields. Cyclotron.
Force on a current-carrying conductor in a uniform magnetic field. The force between two parallel current carrying conductors- definition of ampere. Torque experienced by a current loop in a uniform
magnetic field: Moving coil galvanometer, its current sensitivity and conversion to ammeter and voltmeter.
Current loop as a magnetic dipole and its magnetic dipole moment. Bar magnet as an equivalent solenoid, Magnetic field lines; Earth’s magnetic field and magnetic elements. Para-, dia- and
ferromagnetic substances. Magnetic susceptibility and permeability. Hysteresis. Electromagnets and permanent magnets.
Unit 14: Electromagnetic Induction and Alternating Currents
Electromagnetic induction: Faraday’s law. Induced emf and current: Lenz’s Law, Eddy currents. Self and mutual inductance. Alternating currents, peak and RMS value of alternating current/voltage:
Reactance and impedance: LCR series circuit, resonance: Quality factor, power in AC circuits, wattless current. AC generator and transformer.
Unit 15: Electromagnetic Waves
Electromagnetic waves and their characteristics, Transverse nature of electromagnetic waves, Electromagnetic spectrum (radio waves, microwaves, infrared, visible, ultraviolet. X-rays. Gamma rays),
Applications of e.m. waves.
Unit 16: Optics
Reflection and refraction of light at plane and spherical surfaces, mirror formula. Total internal reflection and its applications. Deviation and dispersion of light by a; prism; Lens formula.
Magnification. Power of a lens. Combination of thin lenses in contact. Microscope and astronomical telescope (reflecting and refracting ) and their magnifying powers.
Wave optics: Wavefront and Huygens’ principle. Laws of reflection and refraction using Huygens principle. Interference, Young’s double-slit experiment and expression for fringe width, Coherent
sources and sustained interference of light. Diffraction due to a single slit, width of central maximum. Resolving power of microscopes and astronomical telescopes. Polarisation, plane-polarised
light: Brewster’s law, uses of plane-polarized light and polaroid.
Unit 17: Dual Nature of Matter and Radiation
Dual nature of radiation. Photoelectric effect. Hertz and Lenard’s observations; Einstein’s photoelectric equation: particle nature of light. Matter waves-wave nature of particle, de Broglie
relation. Davisson-Germer experiment.
Unit 18: Atoms and Nuclei
Alpha-particle scattering experiment; Rutherford’s model of atom; Bohr model, energy levels, hydrogen spectrum. Composition and size of nucleus, atomic masses, isotopes, isobars: isotones.
Radioactivity- alpha beta and gamma particles/rays and their properties; Radioactive decay law. Mass-energy relation, mass defect; Binding energy per nucleon and its variation with mass number,
Nuclear fission and fusion.
Unit 19: Electronic Devices
Semiconductors; semiconductor diode: 1- V characteristics in forward and reverse bias; Diode as a rectifier; I-V characteristics of LED. The photodiode, solar cell and Zener diode; Zener diode as a
voltage regulator. Junction transistor, transistor action, characteristics of a transistor: transistor as an amplifier (common emitter configuration) and oscillator. Logic gates (OR. AND. NOT. NAND
and NOR). Transistor as a switch.
Unit 20: Communication Systems
Propagation of electromagnetic waves in the atmosphere; Sky and space wave propagation. Need for modulation. Amplitude and frequency modulation, Bandwidth of signals. The bandwidth of transmission
medium, Basic elements of a communication system (Block diagram only).
JEE Mains Physics Syllabus Section B
Sometimes questions from experimental skills (which is Section-B) can also be asked in the JEE, so, here at BYJU’S, we have provided the syllabus for the same.
UNIT 21: Experimental Skills
Familiarity with the basic approach and observations of the experiments and activities:
1. Vernier callipers – its use to measure the internal and external diameter and depth of a vessel.
2. Screw gauge – its use to determine the thickness/diameter of a thin sheet/wire.
3. Simple Pendulum – Dissipation of energy by plotting a graph between the square of amplitude and time.
4. Metre Scale – The mass of a given object by the principle of moments.
5. Young’s modulus of elasticity of the material of a metallic wire.
6. Surface tension of water by capillary rise and effect of detergents.
7. Co-efficient of viscosity of a given viscous liquid by measuring the terminal velocity of a given spherical body.
8. Plotting a cooling curve for the relationship between the temperature of a hot body and time.
9. Speed of sound in air at room temperature using a resonance tube.
10. Specific heat capacity of a given (i) solid and (ii) liquid by method of mixtures.
11. The resistivity of the material of a given wire using a metre bridge.
12. The resistance of a given wire using Ohm’s law.
13. Potentiometer-
• Comparison of emf of two primary cells.
• Determination of internal resistance of a cell.
14. Resistance and figure of merit of a galvanometer by half deflection method.
15. The focal length of
• Convex mirror
• Concave mirror, and
• Convex lens, using the parallax method.
16. The plot of the angle of deviation vs angle of incidence for a triangular prism.
17. Refractive index of a glass slab using a travelling microscope.
18. Characteristic curves of a p-n junction diode in forward and reverse bias.
19. Characteristic curves of a Zener diode and finding reverse breakdown voltage.
20. Characteristic curves of a transistor and finding current gain and voltage gain.
21. Identification of Diode. LED, Transistor. IC. Resistor. A capacitor from a mixed collection of such items.
22. Using a multimeter to:
• Identify the base of a transistor
• Distinguish between NPN and PNP type transistor
• See the unidirectional current in the case of a diode and an LED.
• Check the correctness or otherwise of a given electronic component (diode, transistor or IC). | {"url":"https://primerankers.com/jee/jee-main-physics-syllabus/","timestamp":"2024-11-09T06:33:00Z","content_type":"text/html","content_length":"71674","record_id":"<urn:uuid:d8fbd409-5566-4fda-9d38-3ade3960b52a>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00081.warc.gz"} |
KeyPack grin-wallet enhancement, please take a look
KeyPack: A Transaction Protocol for Grin
Wallet-Layer Enhancement for MimbleWimble Transactions with Advanced Privacy Through Transaction Unlinking
This paper presents KeyPack, a wallet-layer protocol that enhances Grin’s transaction model without modifying the underlying MimbleWimble protocol. The proposed method splits transactions into two
atomic parts while maintaining security through encrypted keys, creating unlinked transactions that improve privacy by breaking transaction graph analysis. The protocol reduces the standard
three-step interactive process to a single step, enables offline receiving, and enhances MimbleWimble privacy properties through transaction unlinking. The approach creates distinct, temporally
separated transactions that appear unrelated on the blockchain, making transaction graph analysis substantially more difficult. The paper provides comprehensive mathematical proofs of security,
enhanced privacy preservation through unlinking, and proper fee handling.
Table of Contents
1. [Introduction]
2. [Background]
3. [The KeyPack Protocol]
4. [Security Analysis]
5. [Privacy Considerations]
6. [Fee Handling]
7. [Compression Techniques]
8. [Edge Cases and Mitigations]
9. [Conclusions]
1. Introduction
1.1 Problem Statement
Grin’s current transaction model presents several significant challenges:
1. Complex Transaction Flow: The current three-step interactive process requires:
Sender (slate_v1) → Receiver (slate_v2) → Sender (slate_v3)
This necessitates simultaneous online presence and creates high coordination complexity.
2. Payment Proof Limitations: While supported, current payment proofs suffer from:
□ Complex exchange processes
□ Non-standardized implementation
□ Integration difficulties
3. User Experience Barriers: Current implementation creates friction through:
□ Unfamiliar transaction models
□ High cognitive load
□ Poor mobile experience
1.2 Key Innovations
KeyPack protocol provides:
• Enhanced privacy through transaction unlinking, making transaction graph analysis more difficult
• Temporally separated, seemingly unrelated transactions that mask transaction flow
• Reduced transaction steps with maintained security
• Integrated robust payment proofs
• Enhanced MimbleWimble privacy properties
• No consensus changes required
The protocol’s primary innovation lies in creating two distinct, apparently unrelated transactions on the blockchain. This unlinking property improves privacy by:
1. Breaking direct transaction graph connections
2. Creating temporal separation between transaction components
3. Increasing difficulty of correlation attacks
4. Preventing formation of clear transaction patterns
5. Masking sender-receiver relationships
2. Background
2.1 MimbleWimble Fundamentals
MimbleWimble transactions use Pedersen Commitments:
• C(r, v) = r·H + v·G
□ r is the blinding factor
□ v is the value
□ H, G are generator points
□ · denotes scalar multiplication
2.2 Standard Grin Transaction Structure
A typical Grin transaction consists of:
1. Inputs: C(r_in, v) = r_in·H + v·G
2. Outputs: C(r_out, v) = r_out·H + v·G
3. Kernel: K = (r_in - r_out)·H
4. Kernel Signature: s = k + e*r where:
□ k is the nonce
□ e is the challenge
□ r is the kernel excess (r_in - r_out)
3. The KeyPack Protocol
3.1 Core Concept
KeyPack decomposes a standard Grin transaction into two cryptographically linked but temporally separated transactions. This decomposition maintains atomic operation through encrypted key management
while breaking transaction graph linkability.
3.1.1 Transaction Decomposition
Standard Grin transaction A → B is decomposed into:
TX1: A → T (Temporary output with encrypted key for B)
TX2: T → B (Automatic execution when B receives key)
3.1.2 Detailed Transaction Structure
First Transaction (A → T):
1. Input:
- C(r_a, v) = r_a·H + v·G
- Commitment from A's UTXO set
- Standard Pedersen commitment
2. Temporary Output:
- C(r_t, v) = r_t·H + v·G
- r_t is encrypted for B
- Uses temporary blinding factor
3. Kernel K1:
- K1 = (r_a - r_t)·H
- Standard kernel excess
- Includes encrypted payload
4. Transaction Data:
- Encrypted r_t for B
- Payment proof components
- Version information
- Metadata (timestamp, network)
Second Transaction (T → B):
1. Input:
- C(r_t, v) = r_t·H + v·G
- Uses decrypted r_t
- Spends temporary output
2. Final Output:
- C(r_b, v-f) = r_b·H + (v-f)·G
- B's receiving address
- Includes fee deduction
3. Kernel K2:
- K2 = (r_t - r_b)·H + f·G
- Commits to transaction fee
- Links to payment proof
4. Transaction Data:
- Complete payment proof
- Fee commitment
- Standard metadata
3.1.10 Non-repudiation and Dispute Prevention
KeyPack’s encrypted mode provides cryptographic proof of transaction intent and execution, preventing disputes about coin transmission.
Cryptographic Evidence Chain:
1. Sender Proof Generation:
- Transaction kernel K1 signed by sender
- Encrypted r_t bound to recipient's key
- Payment proof σ₁ with amount commitment
- Timestamped blockchain record
2. Recipient Validation Chain:
recipient_access = ECDH(recipient_priv, sender_pub)
if decrypt(r_t, recipient_access) succeeds:
- Proves sender intended recipient
- Proves correct amount
- Proves sender authorization
- Provides temporal proof
Dispute Resolution Properties:
1. Sender Cannot Deny:
- Kernel signature proves sending intent
- Encrypted key proves intended recipient
- Blockchain timestamp proves timing
- Amount commitment proves value
2. Recipient Cannot Deny:
- Key decryption proves receipt ability
- Second transaction proves access
- Blockchain records prove timing
- Kernel linkage proves completeness
3. Third Party Verification:
- All proofs publicly verifiable
- Temporal sequence confirmed
- Amount commitments validated
- Cryptographic linkage verified
Protocol Evidence Trail:
Evidence Components:
1. First Transaction:
- Kernel commitment K1
- Sender signature σ₁
- Encrypted key data
- Amount commitment
2. Recipient Access:
- Successful key decryption
- Second transaction K2
- Linked payment proof
- Blockchain inclusion
Dispute Prevention Mechanisms:
1. Transaction Intent:
if (verify_kernel_signature(K1, sender_pub) &&
verify_key_encryption(r_t, recipient_pub)) {
// Proves sender intended this recipient
// Proves specific amount
UNDENIABLE_INTENT = true
2. Receipt Capability:
if (decrypt_temp_key(r_t, recipient_priv) &&
verify_second_tx(K2, r_t)) {
// Proves recipient could access funds
// Proves transaction completion
UNDENIABLE_RECEIPT = true
Comparison with Unencrypted Mode:
Encrypted Mode:
1. Proof Properties:
- Recipient-specific intent provable
- Access capability provable
- Temporal sequence verifiable
- Complete evidence chain
2. Dispute Resolution:
- Clear responsibility assignment
- Cryptographic evidence
- Third-party verifiable
- Temporal proof available
Unencrypted Mode:
1. Proof Properties:
- General sending intent only
- No specific recipient binding
- Temporal sequence only
- Incomplete evidence chain
2. Dispute Scenarios:
- Recipient uncertainty
- Intent ambiguity
- Claim race conditions
- Limited evidence
Forensic Analysis Capabilities:
1. Transaction Validation:
kernel1 = verify_first_transaction()
key_encryption = verify_recipient_binding()
kernel2 = verify_second_transaction()
return {
sender_intent: kernel1.valid,
recipient_bound: key_encryption.valid,
completion_proof: kernel2.valid,
temporal_proof: blockchain_timestamps
2. Evidence Collection:
return {
sender_proofs: {
kernel_signature: K1.signature,
encrypted_key: r_t_encrypted,
payment_proof: σ₁,
blockchain_record: block_data
recipient_proofs: {
key_decryption: decrypt_proof,
second_tx: K2,
temporal_data: timestamps
This cryptographic evidence chain makes encrypted mode particularly suitable for:
• Commercial transactions requiring proof
• Exchange integrations needing audit
• Regulated environments
• Dispute-prone scenarios
• High-value transfers
The undeniable cryptographic proof of sending intent to a specific recipient, combined with proof of recipient access capability, creates a complete evidence chain that prevents disputes about
whether coins were actually sent and to whom.
In contrast, unencrypted mode lacks these dispute resolution properties as it cannot prove specific recipient intent, making it unsuitable for scenarios where transaction non-repudiation is required.
Implementation Differences:
Encrypted Mode:
1. Key Distribution:
encrypted_r_t = ChaCha20Poly1305(
2. Access Control:
- Only intended recipient can decrypt
- Transaction bound to recipient
- Failed attempts traceable
Unencrypted Mode:
1. Key Distribution:
clear_r_t = r_t // Direct publication
2. Access Control:
- Any party can use r_t
- First-to-claim model
- Racing condition possible
Use Case Analysis:
Encrypted Mode Optimal For:
1. Direct payments
2. Exchange transactions
3. Guaranteed delivery
4. Privacy-critical transfers
5. Known recipient scenarios
Unencrypted Mode Enables:
1. Atomic swaps
2. Payment channels
3. Lightning-style networks
4. Prize/reward distributions
5. First-to-claim games
Security Implications:
Encrypted Mode:
1. Privacy Properties:
- Recipient privacy preserved
- Transaction graph obscured
- Full unlinkability maintained
2. Security Guarantees:
- Guaranteed recipient
- No front-running possible
- Replay protection built-in
Unencrypted Mode:
1. Privacy Properties:
- Recipient privacy optional
- Transaction graph partially visible
- Conditional unlinkability
2. Security Considerations:
- Front-running possible
- Race conditions exist
- Requires additional protections
Protocol Modifications:
Transaction Header:
struct KeyPackHeader {
version: u8,
flags: u8,
encryption_mode: EncryptionMode,
enum EncryptionMode {
Encrypted {
recipient_pubkey: PublicKey,
encrypted_key: Vec<u8>
Unencrypted {
clear_key: Vec<u8>
Mode Selection Logic:
fn determine_encryption_mode(
transaction_type: TransactionType,
recipient_known: bool,
privacy_level: PrivacyLevel
) -> EncryptionMode {
match (transaction_type, recipient_known, privacy_level) {
// Standard payment to known recipient
(TransactionType::Direct, true, _) =>
// Atomic swap or similar
(TransactionType::Swap, _, _) =>
// High privacy requirement
(_, _, PrivacyLevel::Maximum) =>
// Default to encrypted for safety
_ => EncryptionMode::Encrypted
Usage Considerations:
1. Default Behavior:
- Encrypted mode default
- Explicit opt-out required
- Warning for unencrypted use
2. Security Recommendations:
- Use encrypted mode for direct payments
- Unencrypted only with additional protections
- Consider privacy implications
3. Implementation Requirements:
Encrypted Mode:
- Full KeyPack implementation
- Key management system
- Recipient public key
Unencrypted Mode:
- Simplified KeyPack
- Additional safety checks
- Race condition handling
The choice between encrypted and unencrypted modes represents a trade-off between security guarantees and protocol flexibility. While encrypted mode provides the strongest security and privacy
properties, unencrypted mode enables advanced features and novel use cases.
For typical transactions, encrypted mode remains the recommended default. Unencrypted mode should be used only in specific scenarios where its unique properties are required, and additional security
measures can be implemented.
3.1.4 Transaction Flow Detail
Sending Process:
1. Wallet Preparation:
- Select inputs
- Calculate change
- Generate r_t
- Create payment proof
2. First Transaction Creation:
- Build A → T transaction
- Encrypt r_t for B
- Sign transaction
- Include metadata
3. First Transaction Broadcast:
- Submit to network
- Wait for confirmation
- Monitor temporary output
Receiving Process:
1. Key Recovery:
- Detect transaction
- Derive shared secret
- Decrypt r_t
- Verify payment proof
2. Second Transaction Creation:
- Build T → B transaction
- Calculate fees
- Sign transaction
- Link payment proof
3. Second Transaction Broadcast:
- Submit to network
- Monitor confirmation
- Update wallet state
3.1.5 Cut-Through and Blockchain State
Final Blockchain State:
1. Visible Components:
- Input: C(r_a, v)
- Output: C(r_b, v-f)
- Kernels: K1, K2
2. Removed by Cut-Through:
- Temporary output C(r_t, v)
- All intermediate states
3. Net Effect:
((r_a - r_b)·H) + f·G // Final kernel with fee
3.1.6 Payment Proof Integration
Proof Structure:
1. First Transaction Proof:
E1 = r_excess1·H
σ1 = k1 + e1·r_excess1
msg1 = hash(
amount ||
kernel_features ||
2. Second Transaction Proof:
E2 = r_excess2·H
σ2 = k2 + e2·r_excess2
msg2 = hash(
amount ||
kernel_features ||
3. Combined Proof:
P = {
E1, σ1, msg1, // First transaction
E2, σ2, msg2, // Second transaction
linking_data // Cross-transaction validation
3.1.7 Timelock Mechanism
1. Temporary Output Timelock:
- Minimum lock: current_height + min_conf
- Maximum lock: current_height + max_lock
- Default: current_height + 1440 (≈24 hours)
2. Recovery Conditions:
- Automatic return if not spent
- Sender can reclaim after timeout
- Configurable timeouts
3.1.8 Fee Handling Detail
1. First Transaction:
- No fee required
- Standard weight calculation
- Zero fee commitment
2. Second Transaction:
fee = weight * fee_base_rate
weight = kernel_weight + // Two kernels
input_weight + // Single input
output_weight // Final output
3. Fee Commitment in Kernel:
K2 = (r_t - r_b)·H + f·G // Explicit fee term
This expanded core concept provides a comprehensive technical foundation for understanding the complete KeyPack protocol implementation.
3.2 Temporary Key Encryption
The temporary key (r_t) is encrypted using:
1. Shared Secret Generation:
s = ECDH(Alice_priv, Bob_pub)
2. Key Derivation:
k = KDF(s || transaction_data)
3. Encryption:
c = ChaCha20(k, r_t)
4. Security Analysis
4.1 Value Conservation Proof
Theorem 1: KeyPack preserves value conservation.
1. Initial state before cut-through:
(r_a·H + v·G) + // Input from Alice
(r_t·H + v·G) - // Output to Temp
(r_t·H + v·G) + // Input from Temp
(r_b·H + v·G) // Output to Bob
2. After combining terms:
(r_a - r_b)·H + 0·G
3. This proves:
□ Value is conserved (0·G term)
□ No coins created/destroyed
□ Blinding factors balance
4.2 Enhanced Transaction Privacy Through Unlinking
Theorem 2: KeyPack enhances transaction privacy through temporal and structural unlinking.
1. For any outputs A, B in a standard transaction:
P(link(A,B)) = 1/N
where N is the total number of outputs
2. For KeyPack’s split transactions A₁, A₂:
P(link(A₁,A₂)) = 1/N²
This quadratic improvement occurs because:
□ Transactions appear at different times
□ No structural links exist between A₁ and A₂
□ Independent kernel commitments
3. The unlinking property holds because:
□ Cut-through removes C(r_t, v)
□ Kernels K1, K2 are temporally separated
□ Kernels are independently random and unlinkable
□ Each transaction appears as a standard Grin transaction
□ No observable correlation between parts exists
4. Transaction Graph Analysis Resistance:
For an adversary attempting to link transactions:
P(identify_true_flow) = P(link(A₁,A₂)) * P(temporal_correlation)
□ P(temporal_correlation) ≤ 1/T for time window T
□ Total linkability probability becomes 1/(N²T)
This demonstrates that KeyPack provides substantially stronger privacy than standard transactions by:
• Creating temporal separation
• Breaking structural links
• Generating independent kernels
• Masking transaction flow
• Preventing graph analysis
5. Privacy Considerations
5.1 Cryptographic Assumptions
Security relies on:
1. Discrete Logarithm Problem:
Given P = x·G, finding x is hard
2. ECDH Security:
s = ECDH(a, B) = ECDH(b, A)
A = a·G, B = b·G
3. ChaCha20-Poly1305 Security:
□ IND-CCA2 secure
□ Authenticated encryption
5.2 Privacy Guarantees
1. Transaction Unlinkability:
∀ tx1, tx2: P(linked(tx1, tx2)) = P(random_guess)
2. Amount Confidentiality:
□ Hidden by Pedersen commitments
□ No leakage in proofs
6. Fee Handling
6.1 Mathematical Structure
Transaction 1 (Alice → Temp):
Inputs: C(r_a, v) = r_a·H + v·G
Outputs: C(r_t, v) = r_t·H + v·G
Kernel1: K1 = (r_a - r_t)·H
Fee1: 0
Transaction 2 (Temp → Bob):
Inputs: C(r_t, v) = r_t·H + v·G
Outputs: C(r_b, v-f) = r_b·H + (v-f)·G
Kernel2: K2 = (r_t - r_b)·H + f·G
Fee2: f
6.2 Fee Conservation Proof
Theorem 3: KeyPack preserves proper fee handling.
1. Initial state (all components):
(r_a·H + v·G) + // Input from Alice
(r_t·H + v·G) - // Output to Temp
(r_t·H + v·G) + // Input from Temp
(r_b·H + (v-f)·G) - // Output to Bob
(r_a - r_t)·H + // Kernel1
(r_t - r_b)·H + f·G // Kernel2 with fee
2. After combining terms:
((r_a - r_b)·H) + f·G
This proves:
• Value conservation including fee
• Single fee payment for entire operation
• Proper fee commitment in kernel
7. Compression Techniques
7.1 Key Compression
For public key P(x,y):
compressed = sign_bit(y) || x_coordinate
Compression ratio: ~50%
7.2 Nonce Derivation
nonce = blake2b(
kernel_commitment ||
amount ||
7.3 Size Analysis
Final compressed format:
struct CompressedKeyPack {
enc_key: [u8; 33], // Compressed key
nonce_data: [u8; 8], // Derived nonce
amount: u64, // Amount
height_delta: u16, // Block height difference
proof: [u8; 32], // Compact proof
Total size: ~83 bytes (39% reduction)
8. Edge Cases and Mitigations
8.1 Network Partitions
Protected by:
1. Timelock prevents hanging
2. Funds return to sender after timeout
3. No permanent state changes
8.2 Front-running Protection
Secured through:
1. Recipient-specific encryption
2. Unique transaction data
3. Timelock enforcement
9. Conclusions
KeyPack implements:
1. Mathematically proven security
2. Full MimbleWimble compatibility
3. Strong payment proofs
4. Enhanced privacy preservation
1. No consensus changes
2. No protocol modifications
3. Wallet-layer updates only
All proofs rely on standard cryptographic assumptions and protocol design principles.
[1] MimbleWimble Protocol Specification
[2] Pedersen Commitments in Cryptography
[3] ChaCha20-Poly1305 AEAD Specification
[4] Schnorr Signature Scheme
[5] ECDH Key Exchange Protocol
3 Likes
This looks like the old familiar 1-of-2 output. Alice creates an output that both she and Bob can spend once she shares its blinding key with Bob. Which fails when that output gets spent, but both
Alice and Bob deny spending it, claiming the other party must have spent it. With 1-of-2 outputs, there’s no way to determine who spent it…
No simultaneity required. Just 2 messages, one in each direction.
Actually, it’s r·G + v·H.
Btw, Grin has a graph obfuscation protocol [1].
Grin has no maximum-time locks.
[1] https://forum.grin.mw/t/mimblewimble-coinswap-proposal
1 Like
For convenience, a link to the original 1-of-2 output proposal by David B to eliminate the finalize step.
It is stated in this new KeyPack proposal that there are
Strong payment proofs
That is something that David had not solved in his proposal. Whats new in this proposal to my understanding is the use of a
1. A time-lock
2. Claimed strong payment proofs.
3. Use of cut-through to eliminate a temporary output (nifty, I also used this in my fun but poor transaction buddy proposal)
The payment proofs is something to really pay attention to, also if the proof works to avoid dispute in case a transaction expires (can the sender not falsely claim the funds were send)? I really
wonder how the peoof can work, is the proof included in the transaction output that is dropped “C(r_t, v)”, or no probabbly the kernel.
Another part I do not understand is the “Single fee payment for entire operation”. Even if cut-through is used the transaction does create two kernels, and to my understanding the fees for the
eliminated temporary output would still have to be paid for for the first kernel to be valid.
1 Like
I think I see a first major problem. The proposed solutions wants to let Alice add a first transaction with a temporary output to the mempool that does not include a fee. Then Bob comes in to claim
funds from that temporary output and he pays all the fee, or he does not… This would allow Alice to spam the mempool without limits with fake transactions since there is no fee as spam protection,
rather surprising @ebkme would have missed this rather obvious problem. At the moment the mempool would reject the first transaction since it does not include a fee making the transaction invalid. In
theory this could simply be fixed in the proposal by adding a fee to the first transaction with the only downside of increasing link-ability of the two transaction.
I leave it to others to check if payment proofs can theoretically work with this proposed solution.
Intuitively I would expect that after the timelock transpires there will be a race condition where you cannot prove who claimed the funds.
Just for fun I threw the first part of this proposal in an AI writing checker and the result is:
AI-generated 87%
Human written 13%
2 Likes
That would explain the lack of follow-ups?!
1 Like
@ebkme thanks for taking the time to put this together and post. Sounds like the concept was already considered under another name but the the effort is appreciated.
@tromp @Anynomous thanks for being knowledgeable community members with the history to prevent repeat effort. | {"url":"https://forum.grin.mw/t/keypack-grin-wallet-enhancement-please-take-a-look/11347","timestamp":"2024-11-07T20:36:53Z","content_type":"text/html","content_length":"61552","record_id":"<urn:uuid:0afaa048-ac54-4b7a-9837-da3758893661>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00333.warc.gz"} |
Dear All,
I am analyzing a SMART survey dataset of a district A where standard deviation is 1.23 (got maximum penalty point). I am considering reporting the calculated GAM prevalence with a SD of 1. Is a
possibility of getting the prevalence breakdown by sex and CI? I know to get to that analysis (calculating the prevalence with SD of 1) it uses the probity function. I have the raw dataset, can
someone shed light how I can proceed in SPSS so that I can have the calculated prevalence distributed by sex and with CI?
I know plausibility reports provides the calculated prevalence with SD 1, but it doesn’t provide by sex, neither the CI.
Thank you.
I am unsure why you would want to assume SD = 1 when you observe SD = 1.23. Doing so is likely to underestimate true prevalence.
What you need to use a PROBIT estimator is the mean and the SD that you observe from your data. For your problem you need to have the mean and SD for boy and girls separately. You then use the Normal
cumulative density to estimate prevalence as the probability of finding a child with a (e.g.) a WHZ below your case-finding threshold. You would do this for boys and girls separately. You can find
how to do this in SPSS here.
It has been a long time since I used SPSS but I recall that you can acces the Normal cumulative density using 'compute'. Something like:
Transform -> Compute variable -> CDF -> CDF.Normal(theshold, mean SD)
This is described here.
If e.g. you have mean WH = -0.48, SE WHZ = 1.08, and a case-defining threshold of WHZ = -2 then then the probability of a child having WHZ < -2 is 0.07965331 which corresponds to a prevalence of
about 7.97%. You want to estimate p using the lower (left-hand) tail of the normal CDF. If you wanted to force SD=1 this is where you would do it using something like:
CDF.Normal(-2, -0.48, 1)
If you want confidence intervals then you would need to calculate 95% confidence limits for the means and use these, in place of the mean, with Normal.CDF. You can get 95% confidence limits for the
mean using:
Analyse -> Descriptive Statistics -> Explore
is SPSS.
I hope this is of some help.
Mark Myatt
Technical Expert
Dear Mark,
Thanks a lot for this detailed feedback. I would like to react to your first paragraph by sharing the SMART plausibility report you can find here. From there you will see that SD is problematic,
assuming the maximum score. That is why I want to use the calculated prevalence with SD of 1.
Thanks for this useful discussion,
Regards. Tomás
Dear All,
If am not wrong the SMART Methodology recomends, if the plausibiltiy check of a survey is problematic, like this survey which has a penalty of 29 (over 25), it suggests to use calculated SD of 1,
exclusion from reference mean (WHO flags). But you have to mention this in your report.
I see. Your SD is outside of the range used for the SMART data-quality report. This could be due to your survey sampling from a number of different populations (e.g. different livelihood zones)
resulting in yoUr WHZ being a "mixture of Gaussians".
The SD looks problematic but that does not mean that it is legitimate for you to assume SD = 1. Your observed SD is the best information you have for the value of the SD and you should use that. With
the example data (above) we have:
mean WHZ = -0.48, SD WHZ = 1.08
The PROBIT estimator gives p = 0.07965331 (7.97%) If we assumed SD WHZ = 1 we would see the estimated prevalence drop to 0.06425549 (6.43%), Narrowing the SD reduces the prevalence estimate. A wider
SD increases the prevalence estimate. I would use the wider SD as it is the best information I have about the SD and because any error will lead to a false-positive increase in the IPC classification
which is likely more benign than a false-negative error. Note that a wider SD will lead to a wider confidence intervale about the means and a wider confidence interval about the prevalence estimate.
Not also that the wider SD may be due to outliers and, if this is the case, the mean may be biased and the PROBIT estimate be way out. Your best option might to to censor outliers using a rule that
censores observations more than 3 interquartile ranges above or below the upper or lower quartiles ... other ways of identifying outliers are available directly in SPSS. Once you have removed
outliers you should calculate means and SDs again and apply the PROBIT estimator.
I hope this is some use.
Mark Myatt
Technical Expert
Dear Mark,
It is always a good pleasure to read from you.
I have gone through as per the description provided by you and I got the following:
• For the overall GAM prevalence, I used the following details from my dataset:
□ GAM by WHZ defined as anything <-2 z-scores
□ My mean was of: -0.30
□ SD = 1.23.
Please note my test for outliers was excellent (as per the plausibility report I shared before).
With this, the CDF.Normal (-2, -0.30, 1.23) = 0.083468 = 8.35%
For prevalence by sex, I calculated the mean z-score for boys and for girls separately, the SD for each and using the same case-definition of GAM by WHZ. I got the following:
CDF.Normal (-2, -0.30, 1.31) = 0.09719 = 9.72%
CDF.Normal (-2, -0.42, 1.29) = 0.1132 = 11.3%
Overall comments:
I must confess that this was indeed of some use, but at the same time a need to confess that method has the limitation of not taking in consideration bilateral edema. Also, acknowledging the fact
that the of of calculated prevalence with a SD of 1 underestimates prevalence, perhaps the SMART methodology team should consider adopting this (CDF) approach and, consequently IPC protocols too as
it IPC recommends use of calculated prevalence with SD of 1 when SD is beyond the acceptable ranges.
Million of thanks.
Dear Tomas:
In your latest post, it appears that you are calculating the prevalence of GAM by calculating the area under a normal curve defined by the mean and standard deviation of WHZ. I do not understand why
you would do this. Let's go back to basics: the prevalence of a given condition is the proportion of a defined population which has that condition. Prevalence is measured by assessing individuals,
defining each individual as having or not having that condition, then dividing the number of individuals with the condition by the total number of individuals assessed. GAM is defined in individuals
as having a WHZ less than -2 and/or bilateral pedal edema. Just apply this definition to each individual to calculate prevalence. Using statistical manipulation to calculate prevalence from an
idealized distribution of WHZ may produce inaccurate estimates of prevalence because such manipulation forces normality on a curve which may not be normal.
Bradley A. Woodruff
Technical Expert
Dear Bradley,
Thanks for your attention.
Allow me to take you back to my initial post where I stated a problem and in my second post I shared how the problem looks like based on the SMART plausibility report. This problem makes the use of
basics calculation of a ‘prevalence’ not realistic as it’s likely to overestimate the GAM prevalence measured by Weight-for-height because SD is way out from the acceptable ranges as per the SMART
methodology (0.8 – 1.2) AND, as recommendation from the SMART when this happens (when SD is >1.2) then a calculated prevalence with a SD of 1 should be used and by doing this way it underestimates
the prevalence as it goes all the way down.
I fully agree with you that “Using statistical manipulation to calculate prevalence from an idealized distribution of WHZ may produce inaccurate estimates of prevalence because such manipulation
forces normality on a curve which may not be normal.” That’s why Mark was showing here a way how to calculate the prevalence (when SD is beyond the upper limit) but using techniques that do not
underestimate the prevalence, by using observed mean, SD from my survey sample.
Thanks again for your feedback about this post. | {"url":"https://www.en-net.org/forum/question/3844","timestamp":"2024-11-04T11:20:25Z","content_type":"text/html","content_length":"36403","record_id":"<urn:uuid:bef14e55-b1e1-4572-a5f7-3314b2b6993d>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00619.warc.gz"} |
Method of Lagrange
We use the Method of Lagrange to find extremums of multivariate functions with respect to a constraint. To do so, we first find its critical point (with respect to a constraint), and afterwards,
evaluating those points using , we conclude
• Biggest values = global max
• smallest values = global min
To find the critical points of subject to a constraint where ( is a constant), find the values of and for which
is known as the Lagrange Multiplier.
For inequalities, find critical points of , then only consider the points inside the boundary condition.
Then, use lagrange to check at the edge of the boundary. | {"url":"https://stevengong.co/notes/Method-of-Lagrange","timestamp":"2024-11-09T04:40:51Z","content_type":"text/html","content_length":"22649","record_id":"<urn:uuid:e1c1d919-8ddd-4103-b4bf-e2a4b0f8a1ab>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00780.warc.gz"} |
What is the Boltzmann Constant?
There are actually two Boltzmann constants, the Boltzmann constant and the Stefan-Boltzmann constant; both play key roles in astrophysics … the first bridges the macroscopic and microscopic worlds,
and provides the basis for the zero-th law of thermodynamics; the second is in the equation for blackbody radiation.
The zero-th law of thermodynamics is, in essence, what allows us to define temperature; if you could ‘look inside’ an isolated system (in equilibrium), the proportion of constituents making up the
system with energy E is a function of E, and the Boltzmann constant (k or k[B]). Specifically, the probability is proportional to:
where T is the temperature. In SI units, k is 1.38 x 10^-23 J/K (that’s joules per Kelvin). How Boltzmann’s constant links the macroscopic and microscopic worlds may perhaps be easiest seen like
this: k is the gas constant R (remember the ideal gas law, pV = nRT) divided by Avogadro’s number.
Among the many places k appears in physics is in the Maxwell-Boltzmann distribution, which describes the distribution of speeds of molecules in a gas … and thus why the Earth’s (and Venus’)
atmosphere has lost all its hydrogen (and only keeps its helium because what is lost gets replaced by helium from radioactive decay, in rocks), and why the gas giants (and stars) can keep theirs.
The Stefan-Boltzmann constant (?), ties the amount of energy radiated by a black body (per unit of area of its surface) to the blackbody temperature (this is the Stefan-Boltzmann law). ? is made up
of other constants: pi, a couple of integers, the speed of light, Planck’s constant, … and the Boltzmann constant! As astronomers rely almost entirely on detection of photons (electromagnetic
radiation) to observe the universe, it will surely come as no surprise to learn that astrophysics students become very familiar with the Stefan-Boltzmann law, very early in their studies! After all,
absolute luminosity (energy radiated per unit of time) is one of the key things astronomers try to estimate.
Why does the Boltzmann constant pop up so often? Because the large-scale behavior of systems follows from what’s happening to the individual components of those systems, and the study of how to get
from the small to the big (in classical physics) is statistical mechanics … which Boltzmann did most of the original heavy lifting in (along with Maxwell, Planck, and others); indeed, it was Planck
who gave k its name, after Boltzmann’s death (and Planck who had Boltzmann’s entropy equation – with k – engraved on his tombstone).
Want to learn more? Here are some resources, at different levels: Ideal Gas Law (from Hyperphysics), Radiation Laws (from an introductory astronomy course), and University of Texas (Austin)’s Richard
Fitzpatrick’s course (intended for upper level undergrad students) Thermodynamics & Statistical Mechanics. | {"url":"https://www.universetoday.com/51383/boltzmann-constant/","timestamp":"2024-11-06T12:28:18Z","content_type":"text/html","content_length":"174960","record_id":"<urn:uuid:38dcfaa8-9022-4d58-83c0-7f2df038a642>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00839.warc.gz"} |
Applied Mechanics of Solids (A.F. Bower) Chapter 5: Elastic solutions -
5.5 Anisotropic elasticity
Chapter 5
Analytical techniques and solutions for linear elastic solids
5.5 Solutions to generalized plane problems for anisotropic linear elastic solids
Materials such as wood, laminated composites, and single crystal metals are stiffer when loaded along some material directions than others.  Such materials are said to be anisotropic, and cannot
be modeled using the procedures described in the preceding sections.  In this chapter, we describe briefly the most widely used method for calculating elastic deformation and stress in two
dimensional anisotropic solids. As you might expect, these calculations are difficult, and while the solutions can be expressed in a surprisingly compact form, the resulting expressions can usually
only be evaluated using a computer. In many practical situations it is simplest to calculate solutions for anisotropic materials using direct numerical computations (e.g. using the finite element
method, discussed in Chapters 7 and 8).  Nevertheless, analytical solutions are useful: for example, the finite element method cannot easily be applied to problems involving cracks, dislocations,
or point forces, because they contain singularities; in addition exact calculations can show how the solutions vary parametrically with elastic constants and material orientation.  Â
5.5.1 Governing Equations of elasticity for anisotropic solids
A typical plane elasticity problem is illustrated in the picture. The solid is two dimensional: in this case we are concerned with plane strain solutions, which means that the solid is very long in
the  direction, and every cross section is loaded identically and only in the  plane.  The material is an anisotropic, linear elastic solid, whose properties can be characterized by the
elasticity tensor  (or an equivalent matrix) as discussed in Chapter 3.
To simplify calculations, we shall assume that (i) The solid is free of body forces; (ii) thermal strains can be neglected.  Under these conditions the general equations of elasticity listed in
Section 5.1.2 reduce to
subject to the usual boundary conditions. In subsequent discussions, it will be convenient to write the  equilibrium equations in matrix form as
Conditions necessary for strict plane strain deformation of anisotropic solids. For Plane strain deformations the displacement field has the form . Under these conditions the equilibrium equations
reduce to
In this case,  can be chosen to satisfy two, but not all three, of the three equations. The elastic constants must satisfy . Consequently, the third equation can only be satisfied by setting
Strict plane deformations therefore only exist in a material with elastic constants and orientation satisfying
The most common class of crystals  cubic materials  satisfies these conditions for appropriate orientations.Â
Generalized plane strain deformations. A generalized plane strain displacement field can exist in any general anisotropic crystal. In this case the displacement field has the form
i.e. the displacement is independent of position along the length of the cylindrical solid, but points may move out of their original plane when the solid is loaded.
5.5.2 Stroh representation for fields in anisotropic solids
The Stroh solution is a compact, complex variable, representation for generalized plane strain solutions to elastically anisotropic solids. To write the solution, we need to define several new
1. We define three new 3x3 matrices of elastic constants, as follows
2. We introduce three complex valued eigenvalues  (i=1…3) and eigenvectors  which satisfy
The eigenvalues can be computed by solving the equation
Since Q, R and T are 3x3 matrices, this is a sextic equation for p, with 6 roots. It is possible to show that for a material with physically admissible elastic constants p is always complex, so the
6 roots are pairs of complex conjugates . Each pair of complex roots has a corresponding pair of complex valued eigenvectors ,   We define  to be the roots with positive imaginary part, and  to
be the corresponding eigenvector.
3. To calculate the stresses, it is helpful to introduce three further vectors  defined as
4. It is often convenient to collect the eigenvectors  and  and the eigenvalues  into matrices  as follows
Note also that, as always, while the eigenvalues  are uniquely defined for a particular set of elastic constants, the eigenvectors  (and consequently the vectors  ) are not unique, since they may
be multiplied by any arbitrary complex number and will remain eigenvectors. It is helpful to normalize the eigenvectors so that the matrices A and B satisfy
where I is the identity matrix.
General representation of displacements: The displacement  at a point  in the solid is
where  are the three pairs of complex roots of the characteristic equation;  are the corresponding eigenvalues,  and  are analytic functions, which are analogous to the complex potentials  for
isotropic solids.
General representation of stresses: The stresses can be expressed in terms of a vector valued stress function  (you can think of this as a generalized Airy function) defined as
The stresses can be calculated from the three components of  as
Combined matrix representation for displacement and stresses:Â The solution for the displacement field and stress function can be expressed in the form
Simpler representation for stresses and displacements: The solutions given above are the most general form of the generalized plane strain solution to the governing equations of linear elasticity.Â
However, not all the solutions of this form are of practical interest, since the displacements and stresses must be real valued.  In practice most solutions can be expressed in a much simpler the
form as
where Re(z) denotes the real part of z,
and .
5.5.3 Demonstration that the Stroh representation satisfies the governing equations
Our first objective is to show that a displacement field of the form , with , and (p, ) are any one of the eigenvalues  and eigenvectors  defined in the preceding section, satisfy the governing
To see this,
1. Note that , where  is the Kronecker delta. Therefore, it follows that
2. Substituting this result into the governing equation shows that
3. This can be re-written as
or in matrix form as
where  are the matrices defined in Section 5.5.2. The eigenvalue/eigenvector pairs (p, ) satisfy this equation by definition, which shows that the governing equation is indeed satisfied.
Our next objective is to show that stresses can be computed from the formulas given in Section 5.5.2. To see this,
1. Note that the stresses can be obtained from the constitutive equation
2. Recall that for each of the six characteristic solutions we may obtain displacements as , so that
where Q, R and T are the matrices defined in the preceding section.
3. To simplify this result, define
and note that the governing equations require that
4. Combining the results of (2) and (3) shows that stresses can be computed from
5. Finally, recall that the stress function  has components , and . Consequently, the stresses are related to the stress function by  as required.
5.5.4 Stroh eigenvalues and anisotropy matrices for cubic materials
Since the eigenvalues p for a general anisotropic material involve the solution to a sextic equation, an explicit general solution cannot be found. Even monoclinic materials (which have a single
symmetry plane) give solutions that are so cumbersome that many symbolic manipulation programs cannot handle them. The solution for cubic materials is manageable, as long as one of the coordinate
axes is parallel to the  direction. If the cube axes coincide with the coordinate directions, the elasticity matrix reduces to
The characteristic equation therefore has the form
For  the eigenvalues are purely imaginary. The special case  corresponds to an isotropic material.
The matrices A and B can be expressed as
5.5.5 Degenerate Materials
There are some materials for which the general procedure outlined in the preceding sections breaks down.  We can illustrate this by attempting to apply it to an isotropic material. In this case
we find that , and there only two independent eigenvectors a associated with the repeated eigenvalue . In addition, if you attempt to substitute material constants representing an isotropic material
into the formulas for A and B given in the preceding section you will find that the terms in the matrices are infinite.
The physical significance of this degeneracy is not known. Although isotropic materials are degenerate, isotropy does not appear to be a necessary condition for degeneracy, as fully anisotropic
materials may exhibit the same degeneracy for appropriate values of their stiffnesses.
S. T. Choi, H. Shin and Y. Y. Earmme, Int J. Solids Structures 40, (6) 1411-1431 (2003) have found a way to re-write the complex variable formulation for isotropic materials into a form that is
identical in structure to the Stroh formulation. This approach is very useful, because it enables us to solve problems involving interfaces between isotropic and anisotropic materials, but it does
not provide any fundamental insight into the cause of degeneracy, nor does it provide a general fix for the problem.
In many practical situations the problems associated with degeneracy can be avoided by re-writing the solution in terms of special tensors (to be defined below) which can be computed directly from
the elastic constants, without needing to determine A and B.
5.5.6 Fundamental Elasticity Matrix
The vector  and corresponding eigenvector  can be shown to be the right eigenvectors and eigenvalues of a real, unsymmetric matrix known as the fundamental elasticity matrix, defined as
where the matrices  are the elasticity matrices defined in Section 5.5.2. Similarly,  can be shown to be the left eigenvector of N.
To see this, note that the expressions relating vectors a and b
can be expressed as
Since T is positive definite and symmetric its inverse can always be computed. Therefore we may write
and therefore
This is an eigenvalue equation, and multiplying out the matrices gives the required result.
The second identity may be proved in exactly the same way.  Note that
again, giving the required answer.
For non-degenerate materials N has six distinct eigenvectors. A matrix of this kind is called simple. For some materials N has repeated eigenvalues, but still has six distinct eigenvectors. A
matrix of this kind is called semi-simple. For degenerate materials N does not have six distinct eigenvectors. A matrix of this kind is called non semi-simple.
5.5.7 Orthogonal properties of Stroh matrices A and B
The observation that  and  are right and left eigenvectors of N has an important consequence. If the eigenvalues are distinct (i.e. the material is not degenerate), the left and right
eigenvectors of a matrix are orthogonal. This implies that
In addition. the vectors can always be normalized so that
If this is done, we see that the matrices A and B must satisfy
Clearly the two matrices are inverses of each other, and therefore we also have that
These results give the following relations between A and B
5.5.8 Barnett-Lothe tensors and the Impedance Tensor.
In this section we define four important tensors that can be calculated from the Stroh matrices A and B. Specifically, we introduce:
The Barnett-Lothe tensors
The Impedance Tensor with properties  (  )
The following relations between the Barnett-Lothe tensors and the impedance tensor are also useful
Many solutions can be expressed in terms of S, H and L directly, rather than in terms of A and B. In addition, Barnett and Lothe devised a procedure for computing S, H and L without needing to
calculate A and B (See Sect. 5.5.11). Consequently, these tensors can be calculated even for degenerate materials.
As an example, for cubic materials, with coordinate axes aligned with coordinate directions,
5.5.9 Useful properties of matrices in anisotropic elasticity
We collect below various useful algebraic relations between the various matrices that were introduced in the preceding sections.
By definition, a matrix  satisfying  is Hermitian. A matrix satisfying  is skew-Hermitian.
·  is skew Hermitian. To see this, note that the orthogonality relations for A and B require that
·  is Hermitian. This follows trivially from the preceding expression.
·  and  are both Hermitian. To see this, note  and use the preceding result.
· The matrices  are Hermitian. To show the first expression, note that  and recall that L is real. A similar technique shows the second.
·  are both orthogonal matrices. To see this for the first matrix, note that , where we have used the orthogonality properties of B. A similar procedure shows the second result.
· The Barnett-Lothe tensors are real (i.e. they have zero imaginary part). To see this, note that the orthogonality of A and B (see sect. 5.5.7) implies that
Therefore  and  are pure imaginary, while the real part of . Â
· The impedance tensor can be expressed in terms of the Barnett Lothe tensors as
To see the first result, note that  and use the definitions of H and S. The second result follows in the same way. Note that H, L and S are all real, so this gives a decomposition of M and its
inverse into real and imaginary parts. In addition, since we can compute the Barnett-Lothe tensors for degenerate materials, M can also be determined without needing to compute A and B explicitly.
· . To see these, note that M and its inverse are Hermitian, note that the imaginary part of a Hermitian matrix is skew symmetric, and use the preceding result.
· , where . To see this, recall that the fundamental elasticity tensor satisfies
The second row of this equation is .
5.5.10 Basis Change formulas for matrices used in anisotropic elasticity
The various tensors and matrices defined in the preceding sections are all functions of the elastic constants for the material. Since the elastic constants depend on the orientation of the material
with respect to the coordinate axes, the matrices are functions of the direction of the coordinate system. Â
To this end:
1. Let  and  be two Cartesian bases, as indicated in the figure.
2. Let  denote the components of  in , i.e.   Â
3. Let  be the components of the elasticity tensor in , and let matrices Q, R and T be matrices of elastic constants defined in Section 5.5.2.
4. Let   denote any one of the three Stroh eigenvalues and the matrices of Stroh eigenvectors, computed for the coordinate system ;
5. Let  denote the Barnett-Lothe tensors and impedance tensor in the  basis;
6. Similarly, let , , etc denote the various matrices and tensors in the  basis.
In addition, define rotation matrices   as follows
   Â
The following alternative expressions for  are also useful
The basis change formulas can then be expressed as
Derivation: These results can be derived as follow:
1. Note that the displacements transform as vectors, so that . Consequently,
which shows that  anddirectly gives the basis change formula for A.
2. To find the expression for p, we note that
Therefore, we may write  with  and
as required.
3. The basis change formulas for Q, R and T follow directly from the definitions of these matrices.Â
4. The basis change formula for B is a bit more cumbersome. By definition
Substituting for  gives
and finally recalling that  we obtain the required result.
5. The basis change formulas for the Barnett-Lothe tensors and impedance tensor follow trivially from their definitions. The basis change formulas justify our earlier assertion that these
quantities are tensors.
5.5.11 Barnett-Lothe integrals
The basis change formulas in the preceding section lead to a remarkable direct procedure for computing the Barnett-Lothe tensors, without needing to calculate A and B. The significance of this
result is that, while A and B break down for degenerate materials, S, H, and L are well-behaved. Consequently, if a solution can be expressed in terms of these tensors, it can be computed for any
combination of material parameters.
Specifically, we shall show that S, H, and L can be computed by integrating the sub-matrices of the fundamental elasticity matrix over orientation space, as follows. Let
and define
Derivation: To see this, we show first that  can be diagonalized as
and  was defined earlier. From the preceding section, we note that
which can be expressed as
as before, we can arrange this into an Eigenvalue problem by writing
This shows that [a,b] are eigenvectors of the rotated elasticity matrix. Following standard procedure, we obtain the diagonalization stated.
Now, we examine  more closely. Recall that
Integrating gives
(the sign of the integral is determined by Im(p) because the branch cut for  is taken to lie along the negative real axis). Thus,
5.5.12 Stroh representation for a state of uniform stress
A uniform state of stress (with generalized plane strain deformation) provides a very simple example of the Stroh representation. The solution can be expressed in several different forms. Note
that for a uniform state of stress  and corresponding strain  we may write
In terms of these vectors the Stroh representation is given by
or, in matrix form
Derivation: To see this, recall that a and b form eigenvectors of the fundamental elasticity matrix N as
therefore we can write (for each pair of eigenvectors/values)
Recall that
and finally, defining
gives the required result.
5.5.13 Line load and Dislocation in an Infinite Anisotropic Solid
The figure illustrates the problem to be solved. We consider an infinite, anisotropic, linear elastic solid, whose elastic properties will be characterized using the Stroh matrices A and B.
The solid contains a straight dislocation, with line direction , perpendicular to the plane of the figure. The dislocation has Burger’s vector .
At the same time, the solid is subjected to a line of force (with line direction extending out of the plane of the figure). The force per unit length acting on the solid will be denoted by .
The displacement and stress function can be expressed in terms of the Stroh matrices as
where , where diag denotes a diagonal matrix, and
 The solution can also be expressed as
Derivation: Â We must show that the solution satisfies the following conditions:
1. The displacement field for a dislocation with burgers vector b must satisfy  (this corresponds to taking a counterclockwise Burgers circuit around the dislocation, as described in Section 5.3.4).
2. The resultant force exerted by the stresses acting on any contour surrounding the point force must balance the external force F. For example, taking a circular contour with radius r centered at
the origin, we see that
3. We can create the required solution using properties of log(z). We try a solution of the form
where  and q is a vector to be determined. Recall that we may write , whence . This, in turn, implies that . Therefore
4. Recalling the orthogonality properties of A and B
we can solve for q
5.5.14 Line load and dislocation below the surface of an anisotropic half-space
The figure shows an anisotropic, linear elastic half-space. The elastic properties of the solid are characterized by the Stroh matrices A, B and P defined in Section 5.5.2. The solid contains a
dislocation with Burgers vector b and is also subjected to a linear load with force per unit length F at a point , while the surface of the solid is traction free.
The solution can be computed from the simplified Stroh representation
The first term in the expression for f will be recognized as the solution for a dislocation and point force in an infinite solid; the second term corrects this solution for the presence of the free | {"url":"http://solidmechanics.org/text/Chapter5_5/Chapter5_5.htm","timestamp":"2024-11-15T00:55:18Z","content_type":"text/html","content_length":"584041","record_id":"<urn:uuid:5e005f0f-24e3-48ce-95c7-d2872d1bec45>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00244.warc.gz"} |
Kcal/g to BTU/lb
Units of measurement use the International System of Units, better known as SI units, which provide a standard for measuring the physical properties of matter. Measurement like latent heat finds its
use in a number of places right from education to industrial usage. Be it buying grocery or cooking, units play a vital role in our daily life; and hence their conversions. unitsconverters.com helps
in the conversion of different units of measurement like Kcal/g to BTU/lb through multiplicative conversion factors. When you are converting latent heat, you need a Kilocalorie per Gram to BTU per
Pound converter that is elaborate and still easy to use. Converting Kcal/g to BTU per Pound is easy, for you only have to select the units first and the value you want to convert. If you encounter
any issues to convert Kilocalorie per Gram to BTU/lb, this tool is the answer that gives you the exact conversion of units. You can also get the formula used in Kcal/g to BTU/lb conversion along with
a table representing the entire conversion. | {"url":"https://www.unitsconverters.com/en/Kcal/G-To-Btu/Lb/Utu-8806-4763","timestamp":"2024-11-08T09:17:38Z","content_type":"application/xhtml+xml","content_length":"111287","record_id":"<urn:uuid:38f9e76a-a7da-4e3a-9ab1-ef5f4004c35b>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00524.warc.gz"} |
The Calibration-Accuracy Plot: Introduction and Examples
Last updated:
Table of Contents
View all these examples on this example notebook
Calibration vs Accuracy
For more information on model calibration, see Introduction to AUC and Calibrated Models with Examples using Scikit-Learn
Calibration^1 is a term that represents how well your model's scores can be interpreted as probabilities.
For example: in a well-calibrated model, a score of 0.8 for an instance actually means that this instance has 80% of chance of being TRUE.
Training models that output calibrated probabilities is very useful for whoever uses your model's outputs, since they will have a better idea of how likely the events are. This is especially relevant
in fields such as:
• Likelihood of credit default for a client
• Likelihood of fraud in a transaction
Calibration Accuracy
Measures how well your model's Measures how often your model
scores can be interpreted as probabilities produces correct answers
The calibration-accuracy plot
The Python version of the calibration-accuracy plot can be found on library ds-util. You can install it using pip: pip install dsutil
The calibration-accuracy plot is a way to visualize how well a model's scores correlate with the average accuracy in that confidence region.
In short, it answers the following question: what is the average accuracy for the model at each score bucket?. The closer those values, the better calibrated your model is.
If the line that plots the accuracies is a perfect diagonal, it means your model is perfectly calibrated.
Simplest possible example with default options
You can install the dsutil library via pip install dsutil
import numpy as np
from dsutil.plotting import calibration_accuracy_plot
# binary target variable (0 or 1)
y_true = np.random.choice([0,1.0],500)
y_pred = np.random.normal(size=500)
calibration_accuracy_plot(y_true, y_pred)
Simplest possible example, using default options. Your
results may vary because data points have been
randomly generated.
Example: model with good calibration but low accuracy
For more information how to generate dummy data with sklearn, see Scikit-Learn examples: Making Dummy Datasets
import numpy as np
from dsutil.plotting import calibration_accuracy_plot
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
# generate a dummy dataset
X, y = make_classification(n_samples=2000, n_features=5)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)
# train a simple logistic regression model
clf = LogisticRegression()
y_pred = clf.predict_proba(X_test)[:,1]
calibration_accuracy_plot(y_test, y_pred)
Logistic Regression models are calibrated by default.
In this case, the model's accuracy isn't very good (see how many
predictions are in the lowest bucket), but you can have a reasonably good
expectation that scores are welll-calibrated.
Example: a badly calibrated model with high accuracy
from dsutil.plotting import calibration_accuracy_plot
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
X, y = make_classification(n_samples=2000, n_features=5, class_sep=0.01)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)
clf = SVC(kernel='rbf')
y_pred_raw = clf.decision_function(X_test)
# need to normalize the scores because decision_function outputs absolute values
# and reshape so it's a column vector
y_pred = MinMaxScaler().fit_transform(y_pred_raw.reshape(-1,1)).ravel()
calibration_accuracy_plot(y_test, y_pred)
In this case, we have a badly calibrated
model. This model is almost equally bad if the score
is anywhere from 0.0 - 0.7, so the line indicating the
average accuracy per bucket doesn't look very much like a
diagonal. However, a large percentage of the scores are in the 0.7 - 1.0 range,
so overall the accuracy is not bad at all.
Example: a very badly calibrated model, with low accuracy
from dsutil.plotting import calibration_accuracy_plot
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
X, y = make_classification(n_samples=2000, n_features=5, class_sep=0.01)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)
clf = SVC(kernel='linear')
y_pred_raw = clf.decision_function(X_test)
# need to normalize the scores because decision_function outputs absolute values
# and reshape so it's a column vector
y_pred = MinMaxScaler().fit_transform(y_pred_raw.reshape(-1,1)).ravel()
calibration_accuracy_plot(y_test, y_pred)
In this case, a badly calibrated model (the
orange line is nowhere near a diagonal) and the model's
accuracy is also not good, since most of the scores are in the
lower buckets.
Notice that the only difference between this and the previous example
is the kernel used in the SVM model.
1: One way to measure how well-calibrated your model is is using the Brier Score | {"url":"https://queirozf.com/entries/the-calibration-accuracy-plot-introduction-and-examples","timestamp":"2024-11-11T06:44:49Z","content_type":"text/html","content_length":"36974","record_id":"<urn:uuid:2c752970-48da-4995-b39c-cc599f48492b>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00889.warc.gz"} |
Name: The Pleiades - THE PLEIADES - ot l ; rw Al X 4 : diffwe learned in a3 prev.zous lab, the HR Diagram contains a wealth of in eljent type§ of stars in existence, and how they evolve HR Diagram
with data solely from stars that formed toget 7 We formation about the ' th . ; : Rt eathor s : : e Pleiades. Using the Stellarium are and how old they are., REQUIRED MATERIALS This lab requires use
of the following programs ( ® Microsoft Excel e Stellarium © You should make sure you have all of the extra star catalogs downloaded before starting this lab. Instructions are included in the
Appendix. see Appendix for program details): IMPORTANT TERMS In this lab we will be investigating the Color-Magnitude Diagram (CMD) for the Pleiades. If you recall, the color of a star tells you its
temperature, and the magnitude of a star is related to its brightness (or luminosity), so in essence, a CMD is nothing more than an HR Diagram. In this case, though, all of the stars plotted on the
diagram will belong to the same stellar cluster. Because of this, we can assume they all reside the same distance away. In the latter part of the lab, once we have plotted the stellar data onto an HR
diagram, we will compare the points to a “standard main sequence”, or Zero Age Main Sequence (ZAMS). This representsghe “starting poivnt”_ro_rfhs“tars once thgy become main sequence stars Stella.»r
properties such as temperature and [uminosity will change slightly as a star ages (typically getting cooler yet more luminous over time), so we can use the ZAMS as a lower boundary indicator of where
a standard main sequence would be located. 67 | {"url":"http://aprivateaffair.biz/519026.html","timestamp":"2024-11-12T22:43:50Z","content_type":"text/html","content_length":"333804","record_id":"<urn:uuid:c76a5f6c-31a0-4b20-aa60-c3660e78be8d>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00196.warc.gz"} |
Determining fair valuation – Boost Your Income.ca
Originally posted on September 13th, 2015
This post will show how I calculate valuation, but more importantly, why it’s paramount to buy a business when fairly valued. Buying a high quality business is important, but you don’t want it at any
price. A high quality business can be a bad investment if one pays too much for it. Valuation can be calculated based on a company’s intrinsic value.
However, calculating the present intrinsic value is just half of the equation. One must consider estimated earnings as well, to have an idea of what the intrinsic value will be. Since forecasting
earnings involve the future, it can’t be precise, since no one knows what will happen for sure. It doesn’t mean it’s a random guess either. The key is that it doesn’t have to be precise. Market
consensus and corporate guidance can give a good idea of what earnings are estimated to be – and therefore, how it should affect total return. One can also measure a degree of certainty by looking
how accurate analysts have been with estimates. A company with a good score of meeting analysts estimates gives a good comfort for estimates to remain accurate, as it indicates that the business is
predictable and analysts understand them well.
Stock prices in the short run can be driven by strong emotions such as fear and greed. The intrinsic value of a business is driven by fundamentals and can be calculated within a reasonable degree of
certainty. Once this calculation is made, sound investing decisions can be made and implemented.
Continuing the exercise with BlackRock, we can see that stock price (line in black) follows earnings (line in orange). That also shows how price can be volatile. The blue line shows the historical P/
E that the company has been trading for the analyzed period – which is normally considered as the fairly valued P/E to be used for this company.
For example, BlackRock was overvalued at the end of 2009. And that can be calculated. See how the stock price was trading above its intrinsic value. Had one bought in 2009, the annualized ROR would
be 6.89%, which is an ugly total return for that period (the index alone was almost doubled, annualized).
However, had one purchased when it was fairly valued (it can be calculated), in this case just a few months later, the annualized ROR would be much higher, at 12%, almost double annualized by simply
buying it when fairly valued. It would also give a better cushion if fundamentals suddenly deteriorate and one has to sell it eventually, because this would have been purchased at a lower price when
bought when it was fairly valued. Future earnings is not in our control, but valuation is – that’s a risk that we can mitigate. So buying when fairly valued serves 2 purposes.
Since any company has different growth stages during its lifetime, I like to plot a 5-year timeframe, made of the last 3 years + the estimated 2 years. That gives a good idea of how BlackRock is
presently undervalued:
BlackRock’s normalized P/E for that period is 16.9, and since earnings growth rate for that period is 13.4%, the intrinsic value is calculated through an extrapolation of Graham and Lynch’s formula.
Estimated growth for this year (4%) and next year (11%) is lower than the earnings growth rate for the last 3 years (13.3%), so I’d consider the P/E 15 (Graham’s P/E for fair valuation) the fair
valuation for this stock, simply going by the principle that there’s no justification for this company to trade at a premium P/E (16.7), if growth is slowing down. It made sense a few years ago, when
growth was higher, but it makes less sense now. Therefore, considering this year’s estimated earnings of $20.08, my buy price for this stock is $301.18 – so now is a good time to buy it.
From an earnings estimated perspective, I antecipate an annualized ROR over 14% by buying it today, by end of 2017.
This graph shows that the historical P/E for the period that I used (last 3 years) is actually the lowest P/E from all past periods. That means, this company typically trades at a higher P/E – which
represents the potential higher return, the current calculation is actually very conservative.
Lastly, I just want to finish this post giving 2 examples of how buying an overvalued company can be detrimental to total return. See Wal-Mart below, if one purchased it 15 years ago, when it was
overvalued (this was not even the peak price, one could have done worse). In 15 years, annualized total return was a dismal 2.1%. Many might think Wal-Mart is a lousy business to invest, but the
culprit here was actually high valuation. Price follows earnings, so it was a matter of time for that to happen.
A more dramatic example is Cisco. Terrific business, very strong earnings growth. But see the damage when one doesn’t take valuation into consideration. Cisco highest price was $77. After it dropped
50% from the peak, one must think this is a great bargain. So I consider the purchase at $38.25, done 15-years ago. Still today, one wouldn’t have recovered their money, regardless of being a
fantastic business with an annualized earnings growth of 11.4 for 15-years. This is the best example of how a great business can be a terrible investment if one doesn’t take valuation into
consideration – regardless of how great the business earnings go.
Earnings and valuation are the main drivers of total return, and both can be calculated with a certain degree of accuracy. Buying an undervalued company means buying a company that is not being loved
by the market. If one understands that this is a long term investing and stay put, monitoring earnings, these will turn out to be great opportunities. | {"url":"https://boostyourincome.ca/determining-fair-valuation/","timestamp":"2024-11-12T10:39:05Z","content_type":"text/html","content_length":"298793","record_id":"<urn:uuid:4cd66c33-64ea-4e90-bd57-f6e4ec106eb1>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00236.warc.gz"} |
AdS description of induced higher-spin gauge theory
We study deformations of three-dimensional large N CFTs by double-trace operators constructed from spin s single-trace operators of dimension Δ. These theories possess UV fixed points, and we
calculate the change of the 3-sphere free energy δF = FUV FIR. To describe the UV fixed point using the dual AdS4 space we modify the boundary conditions on the spin s field in the bulk; this
approach produces δF in agreement with the field theory calculations. If the spin s operator is a conserved current, then the fixed point is described by an induced parity invariant conformal spin s
gauge theory. The low spin examples are QED3 (s = 1) and the 3-d induced conformal gravity (s = 2). When the original CFT is that of N conformal complex scalar or fermion fields, the U(N) singlet
sector of the induced 3-d gauge theory is dual to Vasiliev's theory in AdS4 with alternate boundary conditions on the spin s massless gauge field. We test this correspondence by calculating the
leading term in δF for large N. We show that the coefficient of logN in δF is equal to the number of spin s 1 gauge parameters that act trivially on the spin s gauge field. We discuss generalizations
of these results to 3-d gauge theories including Chern-Simons terms and to theories where s is half-integer. We also argue that the Weyl anomaly a-coefficients of conformal spin s theories in even
dimensions d, such as that of the Weyl-squared gravity in d = 4, can be efficiently calculated using massless spin s fields in AdSd+1 with alternate boundary conditions. Using this method we derive a
simple formula for the Weyl anomaly a-coefficients of the d = 4 Fradkin-Tseytlin conformal higher-spin gauge fields. Similarly, using alternate boundary conditions in AdS3 we reproduce the well-known
central charge c = 26 of the bc ghosts in 2-d gravity, as well as its higherspin generalizations.
All Science Journal Classification (ASJC) codes
• Nuclear and High Energy Physics
• AdS-CFT correspondence
• Conformal and W symmetry
• Field theories in lower dimensions
• Renormalization group
Dive into the research topics of 'AdS description of induced higher-spin gauge theory'. Together they form a unique fingerprint. | {"url":"https://collaborate.princeton.edu/en/publications/ads-description-of-induced-higher-spin-gauge-theory","timestamp":"2024-11-05T11:08:02Z","content_type":"text/html","content_length":"53112","record_id":"<urn:uuid:f001276a-1a03-48ac-8de7-be269ec0e849>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00672.warc.gz"} |
UVA Online Judge solution - 10171 - Meeting Prof Miguel - UVA Online Judge solution in C,C++,Java
problem Link : https://onlinejudge.org/index.php?option=onlinejudge&Itemid=8&page=show_problem&problem=1112
Shahriar Manzoor and Miguel A. Revilla (Shanghai, 2005). First meeting after five years of collaboration I have always thought that someday I will meet Professor Miguel, who has allowed me to arrange
so many contests. But I have managed to miss all the opportunities in reality. At last with the help of a magician I have managed to meet him in the magical City of Hope. The city of hope has many
roads. Some of them are bi-directional and others are unidirectional. Another important property of these streets are that some of the streets are for people whose age is less than thirty and rest
are for the others. This is to give the minors freedom in their activities. Each street has a certain length. Given the description of such a city and our initial positions, you will have to find the
most suitable place where we can meet. The most suitable place is the place where our combined effort of reaching is minimum. You can assume that I am 25 years old and Prof. Miguel is 40+. Input The
input contains several descriptions of cities. Each description of city is started by a integer N, which indicates how many streets are there. The next N lines contain the description of N streets.
The description of each street consists of four uppercase alphabets and an integer. The first alphabet is either ‘Y’ (indicates that the street is for young) or ‘M’ (indicates that the road is for
people aged 30 or more) and the second character is either ‘U’ (indicates that the street is unidirectional) or ‘B’ (indicates that the street is bi-directional). The third and fourth characters, X
and Y can be any uppercase alphabet and they indicate that place named X and Y of the city are connected (in case of unidirectional it means that there is a one-way street from X to Y ) and the last
non-negative integer C indicates the energy required to walk through the street. If we are in the same place we can meet each other in zero cost anyhow. Every energy value is less than 500. After the
description of the city the last line of each input contains two place names, which are the initial position of me and Prof. Miguel respectively. A value zero for N indicates end of input. Output For
each set of input, print the minimum energy cost and the place, which is most suitable for us to meet. If there is more than one place to meet print all of them in lexicographical order in the same
line, separated by a single space. If there is no such places where we can meet then print the line ‘You will never meet.’
Sample Input 4 Y U A B 4 Y U C A 1 M U D B 6 M B C D 2 A D 2 Y U A B 10 M U C D 20 A D 0
Sample Output 10 B You will never meet.
Code Examples
#1 Code Example with C Programming
Code - C Programming
#include <bits/stdc++.h>
using namespace std;
int main()
int n;
char type,direction,a,b;
int val;
while(scanf("%d",&n), n!=0){
int youngGraph[26][26] = {};
int elderGraph[26][26] = {};
for(int i=0;i < 26;i++)
for(int j=0;j < 26;j++)
youngGraph[i][j] = elderGraph[i][j] = i==j ? 0 : 1e7;
for(int i=0;i < n;i++){
cin >> type >> direction >> a >> b >> val;
a -= 'A';
b -= 'A';
if(type == 'Y'){
youngGraph[a][b] = min(val,youngGraph[a][b]);
if(direction == 'B'){
youngGraph[b][a] = min(val,youngGraph[b][a]);
} else {
elderGraph[a][b] = min(val,elderGraph[a][b]);
if(direction == 'B'){
elderGraph[b][a] = min(val,elderGraph[b][a]);
// floyd warshall
for(int k=0;k<26;k++)
for(int i=0;i < 26;i++)
for(int j=0;j < 26;j++){
elderGraph[i][j] = min(elderGraph[i][j], elderGraph[i][k]+elderGraph[k][j]);
youngGraph[i][j] = min(youngGraph[i][j], youngGraph[i][k]+youngGraph[k][j]);
int best = INT_MAX;
vector<int> meetings;
cin >> a >> b;
a-='A'; b-='A';
for(int i=0;i < 26;i++)
if(youngGraph[a][i] < 1e7 && elderGraph[b][i] < 1e7){
int cost = youngGraph[a][i]+elderGraph[b][i];
if(cost < best){
best = cost;
if(cost == best) meetings.push_back(i+'A');
printf("You will never meet.\n");
for(auto& v : meetings)
printf(" %c",v);
Copy The Code &
Try With Live Editor
Y U A B 4
Y U C A 1
M U D B 6
M B C D 2
A D
Y U A B 10
M U C D 20
A D
10 B
You will never meet.
UVA Online Judge solution - 10171 - Meeting Prof Miguel - UVA Online Judge solution in C,C++,Java | {"url":"https://devsenv.com/example/uva-online-judge-solution-10171-meeting-prof-miguel-uva-online-judge-solution-in-c,c++,java","timestamp":"2024-11-14T13:48:05Z","content_type":"text/html","content_length":"259507","record_id":"<urn:uuid:f9f33e29-68a6-4a0a-b4f3-4a87b8af8cdf>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00889.warc.gz"} |
1951 Proof Set | United States Mint Proof Sets
The 1951 Proof Set (Buy on eBay) was the second Proof Set issued by the United States Mint following the gap in production for World War II. Proof coins were sold in complete sets of five coins with
no individual Proof coins offered. The sets were priced at $2.10 each for a face value of 91 cents in Proof coins.
1951 Proof Washington Quarter
The Philadelphia Mint produced 57,500 of the 1951 Proof Sets. This number topped the previous year by about six thousand. The continually increasing mintages for the Proof Sets would become recurring
theme for many years as the popularity of the offering grew with collectors.
Each 1951 Proof Set includes a Proof version of the Franklin Half Dollar, Washington Quarter, Roosevelt Dime, Jefferson Nickel, and Lincoln Cent. Most Proof coins from 1951 are found with brilliant
finishes, as opposed to cameo or deep cameo finishes which are rare. For this year, deep cameo coins are exceedingly rare and command large premiums.
The original packaging for the Proof Sets remained unchanged from the prior year. Each of the five coins was placed in an individual cellophane sleeve with the group stapled at the top. This was
wrapped in tissue paper, placed within a small square cardboard box, and sealed shut with tape.
Not all 1951 Proof Sets survive in the original state, as many were transferred to more durable plastic holders over the years.
1951 Proof Set Coins
1951 Proof Set Information
• Coins per Set: 5
• Face Value: $0.91
• Original Issue Price: $2.10
• Mintage: 57,500
< 1950 Proof Set | 1952 Proof Set > | {"url":"https://proofsetguide.com/1951-proof-set/","timestamp":"2024-11-04T19:51:04Z","content_type":"text/html","content_length":"84513","record_id":"<urn:uuid:94f93948-ae55-42e5-bbe2-f8da78886b6d>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00177.warc.gz"} |
Test Bank for College Algebra 11th Edition by Lial Hornsby Schneider and Daniels ISBN 0321671791 9780321671790
This is completed downloadable of for College Algebra 11th Edition by Margaret L. Lial, John Hornsby, David I. Schneider and Callie Daniels Test Bank
Test Bank for College Algebra 11th Edition by Lial Hornsby Schneider and Daniels ISBN 0321671791 9780321671790
Instant download College Algebra 11th Edition by Margaret L. Lial, John Hornsby, David I. Schneider and Callie Daniels Test Bank pdf docx epub after payment.
Product Details:
Language: English
ISBN-10: 0321671791
ISBN-13: 978-0321671790
ISBN-13: 9780321671790
Author: Margaret L. Lial, John Hornsby, David I. Schneider and Callie Daniels
Relate Product:
Solution Manual for College Algebra 11th Edition by Lial Hornsby Schneider and Daniels ISBN 0321671791 9780321671790
Table of Content:
1. Equations and Inequalities
1.1 Linear Equations
1.2 Applications and Modeling with Linear Equations
1.3 Complex Numbers
1.4 Quadratic Equations
1.5 Applications and Modeling with Quadratic Equations
1.6 Other Types of Equations and Applications
1.7 Inequalities
1.8 Absolute Value Equations and Inequalities
2. Graphs and Functions
2.1 Rectangular Coordinates and Graphs
2.2 Circles
2.3 Functions
2.4 Linear Functions
2.5 Equations of Lines and Linear Models
2.6 Graphs of Basic Functions
2.7 Graphing Techniques
2.8 Function Operations and Composition
3. Polynomial and Rational Functions.
3.1 Quadratic Functions and Models
3.2 Synthetic Division
3.3 Zeros of Polynomial Functions
3.4 Polynomial Functions: Graphs, Applications, and Models
3.5 Rational Functions: Graphs, Applications, and Models
3.6 Variation
4. Inverse, Exponential, and Logarithmic Functions.
4.1 Inverse Functions
4.2 Exponential Functions
4.3 Logarithmic Functions
4.4 Evaluating Logarithms and the Change-of-Base Theorem
4.5 Exponential and Logarithmic Equations
4.6 Applications and Models of Exponential Growth and Decay
5. Systems and Matrices
5.1 Systems of Linear Equations
5.2 Matrix Solution of Linear Systems
5.3 Determinant Solution of Linear Systems
5.4 Partial Fractions
5.5 Nonlinear Systems of Equations
5.6 Systems of Inequalities and Linear Programming
5.7 Properties of Matrices
5.8 Matrix Inverses
6. Analytic Geometry
6.1 Parabolas
6.2 Ellipses
6.3 Hyperbolas
6.4 Summary of the Conic Sections
7. Further Topics in Algebra
7.1 Sequences and Series
7.2 Arithmetic Sequences and Series
7.3 Geometric Sequences and Series
7.4 The Binomial Theorem
7.5 Mathematical Induction
7.6 Counting Theory
7.7 Basics of Probability
People Also Search:
college algebra 11th edition sullivan pdf free download
college algebra 11th edition by sullivan
college algebra 11th edition pdf
college algebra 11th edition pdf free download
college algebra 11th edition answers | {"url":"https://testbankpack.com/p/test-bank-for-college-algebra-11th-edition-by-lial-hornsby-schneider-and-daniels-isbn-0321671791-9780321671790/","timestamp":"2024-11-07T22:37:13Z","content_type":"text/html","content_length":"136179","record_id":"<urn:uuid:c0a23fc6-ae1f-434a-8e6b-c2dab0783086>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00635.warc.gz"} |
Showing infinite sets are countable using a proper subset
• Thread starter k3k3
• Start date
In summary: Is this supposed to be a contradiction proof? If I am showing that a set is infinite if one of its proper subsets are infinite, then there cannot be a 1-1 correspondence from either of
the sets to any set {1,2,3,...n} for some positive integer...
Homework Statement
Show if a set is infinite, then it can be put in a 1-1 correspondence with one of its proper subsets.
Homework Equations
This was included with the problem, but I am sure most already know this.
A is a proper subset of B if A is a subset of B and A≠B
The Attempt at a Solution
Did I do this correctly? Here is my work:
Showing the forward implication first:
Show if a set is infinite, then it can be put in a 1-1 correspondence with one of its proper subsets.
Let A be an infinite set.
Let B be an infinite set that is a proper subset of A.
Since A is infinite, there exists a bijective mapping f:X→A, where X is either the set of natural numbers or real numbers depending if the sets are countable or not, be defined by f(x_n)=a_n where
x_n is in X and a_n is in A.
Similarly for B, there exists a bijective mapping g:X→B be defined by g(x_n)=b_n where x_n is in X and b_n is in B.
Since g is a bijection, its inverse exists and is a bijection.
Let h=f([itex]g^{-1}[/itex])
Since [itex]g^{-1}[/itex](b_n)=x_n, then f([itex]g^{-1}[/itex])=a_n
Therefore if a set is infinite, it can be put into a 1-1 correspondence with one of its proper subsets.
Reverse implication:
If a set can be put into a 1-1 correspondence with one of its proper subsets, then it is infinite.
Suppose a finite set can be put into a 1-1 correspondence with one of its proper subsets.
Let A={a,b,c,d}
Let B={c,d}
Let f:B→A
Where f(c)=a and f(d)=b
It is a 1-1 function, but not onto.
Therefore, if a a set can be put into a 1-1 correspondence with one of its proper subsets, it must be infinite.
k3k3 said:
Homework Statement
Show if a set is infinite, then it can be put in a 1-1 correspondence with one of its proper subsets.
What's your definition of an infinite set? Reason I ask is that the property you're trying to prove is very often taken as the definition of an infinite set. So if you're using some other definition,
it will be helpful to state it.
SteveL27 said:
What's your definition of an infinite set? Reason I ask is that the property you're trying to prove is very often taken as the definition of an infinite set. So if you're using some other
definition, it will be helpful to state it.
The book defines an infinite set as:
Let A be an arbitrary set.
The set A is finite if it is empty or if its elements can be put in a 1-1 correspondence with the natural numbers.
The set A is infinite if it is not finite.
I included the finite definition too since the infinite definition is not very descriptive.
k3k3 said:
The book defines an infinite set as:
Let A be an arbitrary set.
The set A is finite if it is empty or if its elements can be put in a 1-1 correspondence with the natural numbers.
How can a finite set be put into 1-1 correspondence with the natural numbers? Are you sure you wrote that down correctly?
SteveL27 said:
How can a finite set be put into 1-1 correspondence with the natural numbers? Are you sure you wrote that down correctly?
Sorry, with a finite subset of natural nunbers I meant to write. That was not typed properly.
k3k3 said:
Sorry, with a finite subset of natural nunbers I meant to write. That was not typed properly.
Well, what's a finite subset? Isn't that the word we're trying to define?
Don't mean to be picking at you with details ... but the overall steps here are going to involve
1) Nailing down the definition of "infinite set."
2) USING that definition to prove what's required.
So we're stuck on step 1 here.
SteveL27 said:
Well, what's a finite subset? Isn't that the word we're trying to define?
Don't mean to be picking at you with details ... but the overall steps here are going to involve
1) Nailing down the definition of "infinite set."
2) USING that definition to prove what's required.
So we're stuck on step 1 here.
If it is finite, it can be put into a 1-1 correspondence with the set {1,2,3,...,n} for some positive integer n
k3k3 said:
If it is finite, it can be put into a 1-1 correspondence with the set {1,2,3,...,n} for some positive integer n
Now, can you redo your proof, using that definition?
SteveL27 said:
Now, can you redo your proof, using that definition?
Is this supposed to be a contradiction proof? If I am showing that a set is infinite if one of its proper subsets are infinite, then there cannot be a 1-1 correspondence from either of the sets to
any set {1,2,3,...n} for some positive integer n.
k3k3 said:
Is this supposed to be a contradiction proof? If I am showing that a set is infinite if one of its proper subsets are infinite, then there cannot be a 1-1 correspondence from either of the sets
to any set {1,2,3,...n} for some positive integer n.
Helpful to go directly back to what's required by the problem. You started out " If I am showing that a set is infinite if one of its proper subsets are infinite ..." which is a little confusing.
By the way, what class is this for? The fact that these two definitions are equivalent is non-trivial and involves some set-theoretical subtleties. However you're only being asked here to prove one
direction, so perhaps that's more straightforward.
SteveL27 said:
Helpful to go directly back to what's required by the problem. You started out " If I am showing that a set is infinite if one of its proper subsets are infinite ..." which is a little confusing.
By the way, what class is this for? The fact that these two definitions are equivalent is non-trivial and involves some set-theoretical subtleties. However you're only being asked here to prove
one direction, so perhaps that's more straightforward.
It's for a real analysis class. It is a if and only if problem, but I found the forward implication to be difficult. I should have paid attention to what I was typing instead of just transcribing
everything I wrote on my paper and not include the reverse.
Here is the question verbatim from the book: Prove that a set is infinite if and only if it can be put into a 1-1 correspondence with one of its proper subsets. (A set A is a proper subset of B if A
[itex]\subseteq[/itex]B and A≠B)
k3k3 said:
It's for a real analysis class. It is a if and only if problem, but I found the forward implication to be difficult. I should have paid attention to what I was typing instead of just transcribing
everything I wrote on my paper and not include the reverse.
Here is the question verbatim from the book: Prove that a set is infinite if and only if it can be put into a 1-1 correspondence with one of its proper subsets. (A set A is a proper subset of B
if A[itex]\subseteq[/itex]B and A≠B)
I don't have time to work this out at the moment ... but here's a link showing why this is a harder problem than it looks. The link doesn't have a proof or even a hint, just some interesting context
for the question.
Try posting what you've got so far and perhaps others can jump in.
FAQ: Showing infinite sets are countable using a proper subset
1. How can you show that an infinite set is countable using a proper subset?
To show that an infinite set is countable using a proper subset, you can use the method of bijection. This means finding a one-to-one correspondence between the elements of the infinite set and the
elements of a known countable set, such as the set of natural numbers.
2. What is the importance of proving that an infinite set is countable?
Proving that an infinite set is countable is important because it helps us understand the size and structure of the set. It also allows us to make comparisons and draw conclusions about other sets
that may be infinite, but not countable.
3. Can all infinite sets be counted using a proper subset?
No, not all infinite sets can be counted using a proper subset. Only sets that have a one-to-one correspondence with a known countable set can be counted using this method. Sets that are uncountably
infinite, such as the real numbers, cannot be counted using a proper subset.
4. How does the concept of cardinality relate to showing infinite sets are countable using a proper subset?
The concept of cardinality, which refers to the size or quantity of a set, is essential in showing that an infinite set is countable using a proper subset. By establishing a bijection, we are
essentially proving that the two sets have the same cardinality, even though one may be infinite and the other may be finite.
5. Are there any limitations to using a proper subset to show countability of an infinite set?
Yes, there are limitations to using a proper subset to show the countability of an infinite set. This method only works for sets that have a one-to-one correspondence with a known countable set. It
cannot be used for sets that are uncountably infinite, such as the real numbers. | {"url":"https://www.physicsforums.com/threads/showing-infinite-sets-are-countable-using-a-proper-subset.590196/","timestamp":"2024-11-14T00:37:49Z","content_type":"text/html","content_length":"129376","record_id":"<urn:uuid:c4f63d9d-8c9c-44fc-884f-3c8ecd9e1adb>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00581.warc.gz"} |
Key (cryptography)
741 VIEWS
Everipedia is now
- Join the
IQ Brainlist
and our
for early access to editing on the new platform and to participate in the beta testing.
Key (cryptography)
In cryptography, a key is a piece of information (a parameter) that determines the functional output of a cryptographic algorithm. For encryption algorithms, a key specifies the transformation of
plaintext into ciphertext, and vice versa for decryption algorithms. Keys also specify transformations in other cryptographic algorithms, such as digital signature schemes and message authentication
In designing security systems, it is wise to assume that the details of the cryptographic algorithm are already available to the attacker. This is known as Kerckhoffs' principle — "only secrecy of
the key provides security", or, reformulated as Shannon's maxim, "the enemy knows the system". The history of cryptography provides evidence that it can be difficult to keep the details of a widely
used algorithm secret (see security through obscurity). A key is often easier to protect (it's typically a small piece of information) than an encryption algorithm, and easier to change if
compromised. Thus, the security of an encryption system in most cases relies on some key being kept secret.^[2]
Trying to keep keys secret is one of the most difficult problems in practical cryptography; see key management. An attacker who obtains the key (by, for example, theft, extortion, dumpster diving,
assault, torture, or social engineering) can recover the original message from the encrypted data, and issue signatures.
Keys are generated to be used with a given suite of algorithms, called a cryptosystem. Encryption algorithms which use the same key for both encryption and decryption are known as symmetric key
algorithms. A newer class of "public key" cryptographic algorithms was invented in the 1970s. These asymmetric key algorithms use a pair of keys—or keypair—a public key and a private one. Public keys
are used for encryption or signature verification; private ones decrypt and sign. The design is such that finding out the private key is extremely difficult, even if the corresponding public key is
known. As that design involves lengthy computations, a keypair is often used to exchange an on-the-fly symmetric key, which will only be used for the current session. RSA and DSA are two popular
public-key cryptosystems; DSA keys can only be used for signing and verifying, not for encryption.
Part of the security brought about by cryptography concerns confidence about who signed a given document, or who replies at the other side of a connection. Assuming that keys are not compromised,
that question consists of determining the owner of the relevant public key. To be able to tell a key's owner, public keys are often enriched with attributes such as names, addresses, and similar
identifiers. The packed collection of a public key and its attributes can be digitally signed by one or more supporters. In the PKI model, the resulting object is called a certificate and is signed
by a certificate authority (CA). In the PGP model, it is still called a "key", and is signed by various people who personally verified that the attributes match the subject.^[3]
In both PKI and PGP models, compromised keys can be revoked. Revocation has the side effect of disrupting the relationship between a key's attributes and the subject, which may still be valid. In
order to have a possibility to recover from such disruption, signers often use different keys for everyday tasks: Signing with an intermediate certificate (for PKI) or a subkey (for PGP) facilitates
keeping the principal private key in an offline safe.
Deleting a key on purpose to make the data inaccessible is called crypto-shredding.
For the one-time pad system the key must be at least as long as the message. In encryption systems that use a cipher algorithm, messages can be much longer than the key. The key must, however, be
long enough so that an attacker cannot try all possible combinations.
A key length of 80 bits is generally considered the minimum for strong security with symmetric encryption algorithms. 128-bit keys are commonly used and considered very strong. See the key size
article for a more complete discussion.
The keys used in public key cryptography have some mathematical structure. For example, public keys used in the RSA system are the product of two prime numbers. Thus public key systems require longer
key lengths than symmetric systems for an equivalent level of security. 3072 bits is the suggested key length for systems based on factoring and integer discrete logarithms which aim to have security
equivalent to a 128 bit symmetric cipher. Elliptic curve cryptography may allow smaller-size keys for equivalent security, but these algorithms have only been known for a relatively short time and
current estimates of the difficulty of searching for their keys may not survive. As early as 2004, a message encrypted using a 109-bit key elliptic curve algorithm had been broken by brute force.^[4]
The current rule of thumb is to use an ECC key twice as long as the symmetric key security level desired. Except for the random one-time pad, the security of these systems has not been proven
mathematically as of 2018, so a theoretical breakthrough could make everything one has encrypted an open book (see P versus NP problem). This is another reason to err on the side of choosing longer
To prevent a key from being guessed, keys need to be generated truly randomly and contain sufficient entropy. The problem of how to safely generate truly random keys is difficult, and has been
addressed in many ways by various cryptographic systems. There is a RFC on generating randomness (RFC 4086 ^[5] , Randomness Requirements for Security). Some operating systems include tools for
"collecting" entropy from the timing of unpredictable operations such as disk drive head movements. For the production of small amounts of keying material, ordinary dice provide a good source of high
quality randomness.
For most computer security purposes and for most users, "key" is not synonymous with "password" (or "passphrase"), although a password can in fact be used as a key. The primary practical difference
between keys and passwords is that the latter are intended to be generated, read, remembered, and reproduced by a human user (though the user may delegate those tasks to password management
software). A key, by contrast, is intended for use by the software that is implementing the cryptographic algorithm, and so human readability etc. is not required. In fact, most users will, in most
cases, be unaware of even the existence of the keys being used on their behalf by the security components of their everyday software applications.
If a password is used as an encryption key, then in a well-designed crypto system it would not be used as such on its own. This is because passwords tend to be human-readable and, hence, may not be
particularly strong. To compensate, a good crypto system will use the password-acting-as-key not to perform the primary encryption task itself, but rather to act as an input to a key derivation
function (KDF). That KDF uses the password as a starting point from which it will then generate the actual secure encryption key itself. Various methods such as adding a salt and key stretching may
be used in the generation.
• Cryptographic key types classification according to their usage
• Diceware describes a method of generating fairly easy-to-remember, yet fairly secure, passphrases, using only dice and a pencil.
• EKMS
• Group key
• Keyed hash algorithm
• Key authentication
• Key derivation function
• Key distribution center
• Key escrow
• Key exchange
• Key generation
• Key management
• Key schedule
• Key server
• Key signature (cryptography)
• Key signing party
• Key stretching
• Key-agreement protocol
• glossary of concepts related to keys
• Password psychology
• Public key fingerprint
• Random number generator
• Session key
• Tripcode
• Machine-readable paper key
• Weak key
Citation Linksearchsecurity.techtarget.com"What is cryptography? - Definition from WhatIs.com". SearchSecurity. Retrieved 2019-07-20.
Sep 29, 2019, 6:01 AM
Citation Linkwww.idquantique.com"Quantum Key Generation from ID Quantique". ID Quantique. Retrieved 2019-07-20.
Sep 29, 2019, 6:01 AM
Citation Linkdigitalmonki.comMatthew Copeland; Joergen Grahn; David A. Wheeler (1999). Mike Ashley (ed.). "The GNU Privacy Handbook". GnuPG. Archived from the original on 12 April 2015. Retrieved 14
December 2013.
Sep 29, 2019, 6:01 AM
Citation Linkbooks.google.comBidgoli, Hossein (2004). The Internet Encyclopedia. John Wiley. p. 567. ISBN 0-471-22201-1 – via Google Books.
Sep 29, 2019, 6:01 AM
Citation Linksearchsecurity.techtarget.com"What is cryptography? - Definition from WhatIs.com"
Sep 29, 2019, 6:01 AM
Citation Linkwww.idquantique.com"Quantum Key Generation from ID Quantique"
Sep 29, 2019, 6:01 AM
Citation Linkweb.archive.org"The GNU Privacy Handbook"
Sep 29, 2019, 6:01 AM
Citation Linkbooks.google.comThe Internet Encyclopedia
Sep 29, 2019, 6:01 AM
Citation Linken.wikipedia.orgThe original version of this page is from Wikipedia, you can edit the page right here on Everipedia.Text is available under the Creative Commons Attribution-ShareAlike
License.Additional terms may apply.See everipedia.org/everipedia-termsfor further details.Images/media credited individually (click the icon for details).
Sep 29, 2019, 6:01 AM | {"url":"https://everipedia.org/wiki/lang_en/Key_%2528cryptography%2529","timestamp":"2024-11-12T00:50:15Z","content_type":"text/html","content_length":"84263","record_id":"<urn:uuid:d9ae23c2-08d0-40fb-b7df-678c76392be7>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00804.warc.gz"} |
Prof. Chris Koen
• Modelling the rotation period distribution of M dwarfs in the Kepler field
(Springer Verlag, 2018) Koen, Chris
McQuillan et al. (Mon. Not. R. Astron. Soc.432:1203, 2013) presented 1570 periods P of M dwarf stars in the field of view of the Kepler telescope. It is expected that most of these reflect
rotation periods, due to starspots. It is shown here that the data can be modelled as a mixture of four subpopulations, three of which are overlapping log-normal distributions. The fourth
subpopulation has a power law distribution, with P −1/2. It is also demonstrated that the bulk of the longer periods, representing the two major sub-populations, could be drawn from a single
subpopulation, but with a period-dependent probability of observing half the true period.
• The analysis of indexed astronomical time series – XII. The statistics of oversampled Fourier spectra of noise plus a single sinusoid
(Oxford University Press, 2015) Koen, Chris
With few exceptions, theoretical studies of periodogram properties focus on pure noise time series. This paper considers the case in which the time series consists of noise together with a single
sinusoid, observed at regularly spaced time points. The distribution of the periodogram ordinates in this case is shown to be of exponentially modified Gaussian form. Simulations are used to
demonstrate that if the periodogram is substantially oversampled (i.e. calculated in a dense grid of frequencies), then the distribution of the periodogram maxima can be accurately approximated
by a simple form (at least at moderate signal-to-noise ratios). This result can be used to derive a calculation formula for the probability of correct signal frequency identification at given
values of the time series length and (true) signal-to-noise ratio. A set of curves is presented which can be used to apply the theory to, for example, asteroseismic data. An illustrative
application to Kepler data is given.
• The analysis of indexed astronomical time series – XI. The statistics of oversampled white noise periodograms
(Oxford University Press, 2015) Koen, Chris
The distribution of the maxima of periodograms is considered in the case where the time series is made up of regularly sampled, uncorrelated Gaussians. It is pointed out that if there is no
oversampling, then for large data sets, the known distribution of maxima tends to a oneparameter Gumbel distribution. Simulations are used to demonstrate that for oversampling by large factors, a
two-parameter Gumbel distribution provides a highly accurate representation of the simulation results. As the oversampling approaches the continuous limit, the twoparameter Gumbel distribution
takes on a simple form which depends only on the logarithm of the number of data. Subsidiary results are the autocorrelation function of the oversampled periodogram; expressions for the accuracy
of simulated percentiles; and the relation between percentiles of the periodogram and the amplitude spectrum.
• A search for p-mode pulsations in white dwarf stars using the Berkeley Visible Imaging Tube detector
(Oxford University Press, 2014) Kilkenny, David; Welsh, B.Y.; Koen, Chris; Gulbis, A.A.S.; Kotze, M.M.
We present high-speed photometry (resolution 0.1 s) obtained during the commissioning of the Berkely Visible Imaging Tube system on the Southern African Large Telescope (SALT). The observations
were an attempt to search for very rapid p-mode oscillations in white dwarf stars and included three DA stars known to be g-mode pulsators (ZZ Cet, HK Cet and AF Pic), one other DA star (WD
1056-384) not known to be variable and one AM Cvn star (HP Lib). No evidence was found for any variations greater than about 1 mmag in amplitude (∼0.1 per cent) at frequencies in excess of 60 mHz
(periods <17 s) in any of the target stars, though several previously known g-mode frequencies were recovered.
• An O−C (and light travel time) method suitable for application to large photometric databases
(Oxford University Press, 2014) Koen, Chris
The standard method of studying period changes in variable stars is to study the timing residuals or O−C values of light-curve maxima or minima. The advent of photometric surveys for variability,
covering large parts of the sky and stretching over years, has made available measurements of probably hundreds of thousands of variable stars, observed at random phases. Simple methodology is
described which can be used to quickly check such measurements of a star for indications of period changes. Effectively, the low-frequency periodogram of a first-order estimate of the O−C
function is calculated. In the case of light travel time (LTT) effects, the projected orbital amplitude follows by robust regression of a sinusoid on the O−C. The results can be used as input
into a full non-linear least-squares regression directly on the observations. Extensive simulations of LTT configurations are used to explore the sensitivity of results to various parameter
values (period of the variable star and signal to noise of measurements; orbital period and amplitude; number and time baseline of
• Multicolour time series photometry of the T Tauri star CVSO 30
(Oxford University Press, 2015) Koen, Chris
Five consecutive runs of at least five hours, and two shorter runs, of V(RI)C time series photometry of CVSO 30 are presented. No evidence could be seen for planetary transits, previously claimed
in the literature for this star. The photometry described in this paper, as well as earlier observations, can be modelled by two non-sinusoidal periodicities of 8 and 10.8 h (frequencies 3 and
2.23 d−1) or their 1 d−1 aliases. The possibility is discussed that star-spots at different latitudes of a differentially rotating star is responsible for the brightness modulations. The steep
wavelength dependence of the variability amplitudes is best described by hot star-spots.
• Multicolour time series photometry of the variable star 1SWASP J234401.81−212229.1
(Oxford University Press, 2014) Koen, Chris
1SWASP J234401.81-212229.1 may be one of a handful of contact binaries comprising two Mdwarfs. Modelling of the available observations is complicated by the fact that the radiation of the
eclipsing system is dominated by a third star, a K dwarf. New photometry, presented in this paper, strengthens this interpretation of the data. The existence of such systems will have
implications for the statistical distributions of masses in hierarchical multiple star systems.
• Multicolour time series photometry of four short-period weak-lined T Tauri stars
(Oxford University Press, 2015) Koen, Chris
The paper describes continuous photometric monitoring of four pre-main-sequence stars, probable members of young stellar associations. Measurements, covering at least four nights per star, were
obtained by cycling through several filters. The data could be used to choose between aliases of rotation periods quoted in the literature. As expected, the amplitudes of sinusoidal variations
decline with increasing wavelength, mildly enough to indicate the presence of coolspots on the stellar surfaces. Variability amplitudes can dwindle from a 0.1mag level to virtually zero on a
time-scale of one or two days. A flare observed in CD-36 3202 is discussed in some detail, and a useful mathematical model for its shape is introduced. It is demonstrated that accurate colour
indices (σ <5–6 mmag, typically) can be derived from the photometry. The magnitude variations as measured through different filters are linearly related. This is exploited to calculate spot
temperatures (800–1150K below photospheric for the different stars) and the ranges of variation of the spot filling factors (roughly 10– 20 per cent). The available All Sky Automated Survey
measurements of the stars are analysed, and it is concluded that there is good evidence for differential rotation in all four stars.
• A detection threshold in the amplitude spectra calculated from Kepler data obtained during K2 mission.
(Oxford University Press, 2015) Baran, A.C.; Koen, Chris; Pokrzywka, B.
We present our analysis of simulated data in order to derive a detection threshold which can be used in the pre-whitening process of amplitude spectra. In case of ground-based data of pulsating
stars, this threshold is conventionally taken to be four times the mean noise level in an amplitude spectrum. This threshold is questionable when space-based data are analysed. Our effort is
aimed at revising this threshold in the case of continuous 90-d Kepler K2 phase observations. Our result clearly shows that a 95 per cent confidence level, common for ground observations, can be
reached at 5.4 times the mean noise level and is coverage dependent. In addition, this threshold varies between 4.8 and 5.7, if the number of cadences is changed. This conclusion should secure
further pre-whitening and helps to avoid over-interpretation of spectra of pulsating stars observed with the Kepler spacecraft during K2 phase. We compare our results with the standard approach
widely used in the literature.
• Multivariate comparisons of the period–light-curve shape distributions of Cepheids in five galaxies
(Oxford University Press, 2007) Koen, Chris; Siluyele, I.
A number of published tests suitable for the comparison of multivariate distributions are described. The results of a small power study, based on realistic Cepheid log period – Fourier
coefficient data, are presented. It is found that a statistic due to Henze has good general performance. The tests are applied to Cepheid observations in the Milky Way galaxy, Large Magellanic
Cloud, Small Magellanic Cloud, IC 1613 and NGC 6822. The null hypothesis of equal populations is rejected for all pairs compared, except IC 1613 – NGC 6822.
• Multicolour time series photometry of three periodic ultracool dwarfs
(Oxford University Press, 2013) Koen, Chris
Photometry in I, or contemporaneously in I and R, of the known variable ultracool dwarfs Kelu-1 and 2MASS J11553952−3727350 is presented. The nature of the variability of Kelu-1 appears to evolve
on time-scales of a day or less. Both the period and amplitude of the variability of 2MASS J11553952−3727350 have changed substantially since publication of earlier observations of the object.
DENIS 1454−6604 is a new variable ultracool dwarf, with persistent and prominent brightness modulations at a period of 2.6 h.
• HE0230–4323: an unusual pulsating hot subdwarf star
(Oxford University Press, 2007) Koen, Chris
HE 0230−4323 is a known binary, consisting of a subdwarf star and a companion which is not observable in the optical. Photometric measurements reported in this paper have shown it to be both a
reflection-effect and a pulsating variable. The dominant pulsation frequencies changed over the course of several nights of observing, from ∼32–39 d−1 to ∼8–16 d−1. Observations were obtained
through B and V filters, and the variations in the two wavebands appear to be approximately 180◦ out of phase.
• The Nyquist frequency for time series with slight deviations from regular spacing
(Oxford University Press, 2009) Koen, Chris
The paper is based on the notion that the Nyquist frequency νN is a symmetry point of the periodogram of a time series: the power spectrum at frequencies above νN is a mirror image of that below
νN. Koen showed that the sum SS(ν) = k, [sin 2πν(tk −t )]2 (where tk and t range over the time points of observation) is zero when the frequency ν = νN. This property is used to investigate the
Nyquist frequency for data which are almost regularly spaced in time. For some configurations, there are deep minima of SS at frequencies νP νN; such νP are dubbed ‘pseudo-Nyquist’ frequencies:
the implication is that most of the information about the frequency content of the data is available in the spectrum over (0, νP). Systematic simulation results are presented for two
configurations – small random variations in respectively the time points of observation and the lengths of the intervals between successive observations. A few real examples of CCD time series
photometry obtained over several hours are also discussed.
• Fitting power-law distributions to data with measurement errors
(Oxford University Press, 2009) Koen, Chris; Kondlo, L
If X, which follows a power-law distribution, is observed subject to Gaussian measurement error e, thenX+e is distributed as the convolution of the power-lawand Gaussian distributions.
Maximum-likelihood estimation of the parameters of the two distributions is considered. Large-sample formulae are given for the covariance matrix of the estimated parameters, and implementation
of a small-sample method (the jackknife) is also described. Other topics dealt with are tests for goodness of fit of the posited distribution, and tests whether special cases (no measurement
errors or an infinite upper limit to the power-law distribution) may be preferred. The application of the methodology is illustrated by fitting convolved distributions to masses of giant
molecular clouds in M33 and the Large Magellanic Cloud (LMC), and to HI cloud masses in the LMC.
• The analysis of indexed astronomical time series – X. Significance testing of O − C data
(Oxford University Press, 2006) Koen, Chris
It is assumed that O − C (‘observed minus calculated’) values of periodic variable stars are determined by three processes, namely measurement errors, random cycle-to-cycle jitter in the period,
and possibly long-term changes in the mean period. By modelling the latter as a random walk, the covariances of all O − C values can be calculated. The covariances can then be used to estimate
unknown model parameters, and to choose between alternative models. Pseudo-residuals which could be used in model fit assessment are also defined. The theory is illustrated by four applications
to spotted stars in eclipsing binaries.
• Estimation of the coherence time of stochastic oscillations from modest samples
(Oxford University Press, 2012) Koen, Chris
‘Quasi-periodic’ or ‘solar-like’ oscillations can be described by three parameters – a characteristic frequency, a coherence time (or ‘quality factor’) and the variance of the random driving
process. This paper is concerned with the estimation of these quantities, particularly the coherence time, from modest sample sizes (observations covering of the order of a hundred or fewer
oscillation periods). Under these circumstances, finite sample properties of the periodogram (bias and covariance) formally invalidate the commonly used maximum-likelihood procedure. It is shown
that it none the less gives reasonable results, although an appropriate covariance matrix should be used for the standard errors of the estimates. Tailoring the frequency interval used, and
oversampling the periodogram, can substantially improve parameter estimation. Maximum-likelihood estimation in the time-domain has simpler statistical properties, and generally performs better
for the parameter values considered in this paper. The effects of added measurement errors are also studied. An example analysis of pulsating star data is given.
• Detection of an increasing orbital period in the subdwarf B eclipsing system NSVS 14256825
(Oxford University Press, 2012) Kilkenny, David; Koen, Chris
New timings of eclipses made in 2010 and 2011 are presented for the hot subdwarf B (sdB) eclipsing binary NSVS 14256825. Composed of an sdB star and a much cooler companion, with a period near
0.1104 days, this system is very similar to the prototype sdB eclipsing binary HW Vir. The new observations show that the binary period of NSVS 14256825 is rapidly increasing at a rate of about
12 × 10−12 days orbit−1.
• Photometry of the magnetic white dwarf SDSS 121209.31+013627.7
(Oxford University Press, 2006) Koen, Chris; Maxted, P.F.L.
The results of 27 h of time series photometry of SDSS 121209.31+013627.7 are presented. The binary period established from spectroscopy is confirmed and refined to 0.061 412 d (88.43 min). The
photometric variations are dominated by a brightening of about 16 mmag, lasting a little less than half a binary cycle. The amplitude is approximately the same in V, R and white light. A
secondary small brightness increase during each cycle may also be present. We speculate that SDSS 121209.31+013627.7 may be a polar in a low state.
• The analysis of irregularly observed stochastic astronomical time-series – I. Basics of linear stochastic differential equations
(Oxford University Press, 2005) Koen, Chris
The theory of low-order linear stochastic differential equations is reviewed. Solutions to these equations give the continuous time analogues of discrete time autoregressive time-series. Explicit
forms for the power spectra and covariance functions of first- and second-order forms are given. A conceptually simple method is described for fitting continuous time autoregressive models to
data. Formulae giving the standard errors of the parameter estimates are derived. Simulated data are used to verify the performance of the methods. Irregularly spaced observations of the two
hydrogen-deficient stars FQ Aqr and NO Ser are analysed. In the case of FQ Aqr the best-fitting model is of second order, and describes a quasi-periodicity of about 20 d with an e-folding time of
3.7 d. The NO Ser data are best fitted by a first-order model with an e-folding time of 7.2 d.
• Improved SAAO–2MASS photometry transformations
(Oxford University Press, 2007) Koen, Chris; Marang, F.; Kilkenny, David; Jacobs, C.
Near-infrared photometry of 599 stars is used to calculate transformations from the South African Astronomical Observatory (SAAO) JHK system to the Two-Micron All-Sky Survey (2MASS) JHKS system.
Both several-term formal regression relations and simplified transformations are presented. Inverse transformations (i.e. 2MASS to SAAO) are also given. The presence of non-linearities in some
colour terms is highlighted. | {"url":"https://uwcscholar.uwc.ac.za/collections/bd000481-a7b1-47e7-93b2-7ff310157a6b","timestamp":"2024-11-03T04:41:19Z","content_type":"text/html","content_length":"792091","record_id":"<urn:uuid:02742635-71ff-4a8d-bab2-e99c24e56d6e>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00398.warc.gz"} |
Tree Height - DeriveIt
1. Recursion
To solve this problem recursively, we have to write the height of the tree height(root) using the heights of other nodes.
Whenever you look for a recursion, you should always think locally near the original problem you're breaking down. Here, this means you should look near the root node. We should try to find a
recursion that involves the root.left and root.right:
From this image, you can see that the height of the tree is equal to the height of the tallest child (the blue tree), plus 1. So this is our recursive equation. The same way that 1 + 1 = 2, here's
what the recursion is equal to:
2. Base case
The recursion calls itself on lower and lower nodes in the tree, until it eventually gets to a node that's null. This is where the recursion breaks. Since we can't use the recursion to get what
height is equal to in this case, we need to manually find what it's equal to. The height of no node is just equal to 0.
3. Code
To code the solution, you should write a function that always returns whatever height is equal to. That way it's guaranteed to be correct. It's equal to the recursion, unless we're in the base case
in which case it's equal to 0. Here's the full code:
Time Complexity $O(n)$. We visit all $n$ nodes, which takes $O(n)$ time.
Space Complexity $O(n)$. For now, you don't need to know where this comes from - we'll go over the full details in the next lesson "Call Stack". Just know the computer needs some memory to keep track
of the function calls it's making. | {"url":"https://deriveit.org/coding/tree-height-95","timestamp":"2024-11-12T01:58:04Z","content_type":"text/html","content_length":"93443","record_id":"<urn:uuid:4ac0e134-b478-4e38-8751-e5c1ad101f4a>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00653.warc.gz"} |
Math 212: Several Complex Variables, Fall 2021
TuTh 12:30pm-2:00pm, 2 Evans
Prerequisites: 185, 202AB or equivalent.
1. Volker Scheidemann, Introduction to Complex Analysis in Several Variables, (Chapters 1,2,3,5)
2. Lars Hörmander, The Analysis of Linear Partial Differential Operators II (Section 15.1)
3. Dmitry Khavinson, Holomorphic Partial Differential Equations and Classical Potential Theory (Chapters 1-8)
4. Lars Hörmander, The Analysis of Linear Partial Differential Operators I (Section 9.4)
5. Johannes Sjöstrand, Analytic microlocal analysis using holomorphic functions with exponential weights
The course will concentrate on PDE aspects of the theory and will be largely independendent from 212 given in the Fall of 2019.
1. Holomorphic functions, Cauchy formula, power series
2. Biholomorphic maps
3. The inhomogenous Cauchy–Riemann Differential Equations, Hartogs phenomenon
4. Solving non-homogeneous Cauchy--Riemann equations in spaces defined using global strictly plurisubharmonic weights (Hörmander's L2 estimates)
5. Solving partial differential equations with holomorphic coefficients: theorems of Cauchy--Kovalevskaya, Zerner and Bony--Shapira
6. Pseudodifferential operators in complex domains (no background in standard theory is expected or needed)
7. Fourier integral operators, Egorov's theorem and applications
Based on biweekly homework assignments. | {"url":"https://math.berkeley.edu/~zworski/212_21.html","timestamp":"2024-11-05T02:29:05Z","content_type":"text/html","content_length":"4191","record_id":"<urn:uuid:1e1e5e85-df3c-4ccb-a298-abe65d20e404>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00116.warc.gz"} |
[Solved] The coefficient of x2 in the expansion of \(\rm (4-5x^2
The coefficient of x^2 in the expansion of \(\rm (4-5x^2)^{-1/2}\) is
Answer (Detailed Solution Below)
Option 1 : \(\frac {5}{16}\)
Noun & Pronoun: Fill In The Blanks (Most Important Rules)
1.2 Lakh Users
15 Questions 15 Marks 12 Mins
General term: General term in the expansion of (x + y)^n is given by
\(\rm {T_{\left( {r\; + \;1} \right)}} = \;{\;^n}{C_r} \times {x^{n - r}} \times {y^r}\)
Expansion of (1 + x)^n:
\(\rm (1+x)^n = 1+nx+\frac{n(n-1)}{2!}x^2+\frac{n(n-1)(n-2)}{3!}x^3 +....\)
To Find: coefficient of x^2 in the expansion of \(\rm (4-5x^2)^{-1/2}\)
\(\rm (4-5x^2)^{-1/2} = 4^{-1/2}\left(1-\frac{5}{4}x^2 \right )^{-1/2}\\ \text{As we know}\;\rm (1+x)^n = 1+nx+\frac{n(n-1)}{2!}x^2+\frac{n(n-1)(n-2)}{3!}x^3 +....\\\therefore 4^{-1/2}\left(1-\frac
{5}{4}x^2 \right )^{-1/2} = 2^{-1}\left[1 + \left(-\frac{5}{4}x^2 \right ) \times (\frac{-1}{2}) + ... \right ]\)
Now, the coefficient of x^2 in the expansion = \(2^{-1} \times \frac{-5}{4} \times \frac{-1}{2} = \frac{5}{16}\)
Latest Indian Coast Guard Navik GD Updates
Last updated on Oct 16, 2024
-> The Coast Guard Navik GD Result for the 02/2024 batch CGEPT has been announced.
-> The Indian Coast Guard Navik GD Notification has been released for 260 vacancies through the Coast Guard Enrolled Personnel Test (CGEPT) for the 01/2025 batch.
-> Candidates who have completed their 10+2 with Maths and Physics are eligible for this post.
-> Candidates must go through the Indian Coast Guard Navik GD previous year papers. | {"url":"https://testbook.com/question-answer/the-coefficient-of-x2-in-the-expansion-of--5f8460a1bb1c4e44a97f1739","timestamp":"2024-11-02T20:07:36Z","content_type":"text/html","content_length":"207314","record_id":"<urn:uuid:0cf0f612-6652-40f4-aee7-348cf7c7cb9a>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00634.warc.gz"} |
Ms excel calculate interest rate
Here's how to use Excel to calculate any of the five key unknowns for any “I know the payment, interest rate, and current balance of a loan, and I need to calculate works with loans and investments
in Excel knows about the PMT function. We are calling: «FORMULAS»-«Function Library»-«Financial» finding the function EFFECT. The arguments: «Nominal rate» - is the annual rate of interest on the
Excel is the spreadsheet application component of the Microsoft Office suite of Type "Balance" in cell A1, "Interest rate" in cell A2 and "Periods" in cell A3.
One use of the RATE function is to calculate the periodic interest rate when the amount, number of payment periods, and payment amount are known. For this 21 Jan 2015 In the function, the first 3
arguments are obligatory and the last 2 are optional. rate - an interest rate per period, exactly as the argument's name This article describes the formula syntax and usage of the RATE function in
Microsoft Excel. Description. Returns the interest rate per period of an annuity. RATE The RATE function is an Excel Financial function that is used to calculate the interest rate charged on a loan
or the rate of return needed to reach a specified Rate Function Examples. Example 1. In the following spreadsheet, the Excel Rate function is used to calculate the interest rate, with fixed payments
of $1,000 per 23 Sep 2010 Use Excel to Figure Out an Effective Interest Rate from a Nominal Interest Rate and annual percentage yield (APY), Excel makes it easy to calculate interest rate (APY),
click on the cell at B3, click on the Insert Function of Excel PMT Function. Loans consist of 4 basic parts. The Loan amount, Rate of Interest, the loan duration (number of regular payments), and
The Rate function calculates the interest rate implicit in a set of loan or investment terms given the number of periods (months, quarters, years or whatever), the payment per period, the present
value, the future value, and, optionally, the type-of-annuity switch, and also optionally, an interest-rate guess.
22 Nov 2019 The arguments of the function are: Rate The interest rate on the loan. Nper The total number of payment periods for the loan. Pv The Present On the basis that you receive $150,000 now in
respect of 300 payments at the rate of $566.67 per month the parameters for RATE are: For the PMT function to calculate the entire loan to be repaid three bits of information are required: rate - The
interest rate of the loan expressed as a decimal. 14 Feb 2013 In other words, if you want to compute an annual loan payment, then you should express this as an annual interest rate and nper should
be 17 Nov 2019 The Excel PMT() function is used in cell C7 to calculate the monthly repayment. It takes the form: PMT(InterestRate, NumberOfPeriods, Principal
How To: Calculate APR, EAR & period rates in Microsoft Excel 2010 How To: Calculate simple and compound interest in MS Excel How To: Calculate average and marginal tax rates in Microsoft Excel 2010
How To: Calculate incentive rates by formula in MS Excel
Excel is the spreadsheet application component of the Microsoft Office suite of Type "Balance" in cell A1, "Interest rate" in cell A2 and "Periods" in cell A3. For more accurate tracking of the loan,
the periodic interest rate is needed, To make it easier, Excel includes the RATE function with which you can figure out the Excel uses iteration to determine the periodic rate, so it will run its
calculation 29 Jan 2018 RATE is an Excel function that calculates the interest rate that applies to a system of present value, periodic equidistant equal cash flows and/or 1 Apr 2019 Once you get
the effective rate, you can use it in the formula cited earlier to calculate the maturity value of your investment. MS Excel also has
In Excel, you use the PMT function to calculate the periodic payment for a the payment for a loan based on constant payments and a constant interest rate. FV.
For the PMT function to calculate the entire loan to be repaid three bits of information are required: rate - The interest rate of the loan expressed as a decimal. 14 Feb 2013 In other words, if you
want to compute an annual loan payment, then you should express this as an annual interest rate and nper should be 17 Nov 2019 The Excel PMT() function is used in cell C7 to calculate the monthly
repayment. It takes the form: PMT(InterestRate, NumberOfPeriods, Principal 27 Dec 2018 Those three numbers are your principal, or the amount of money you're borrowing; your interest rate; and the
number of months in your loan 24 Feb 2010 Although technical, interest rate calculations are really at the heart of of loan payments, and use the IRR function to calculate the interest rate. 1 May
2016 These Excel functions will take some of the pain out of calculating your I have used the PMT function to calculate the monthly repayment on a loan. so the rate in B3 is divided by 12 to
determine the monthly interest rate. 13 Nov 2014 The basic annuity formula in Excel for present value is =PV(RATE,NPER,PMT). Let's break it down: • RATE is the discount rate or interest rate,
To calculate simple interest in Excel (i.e. interest that is not compounded), you can use a formula that multiples principal, rate, and term. This example assumes that $1000 is invested for 10 years
at an annual interest rate of 5%.
13 Nov 2014 The basic annuity formula in Excel for present value is =PV(RATE,NPER,PMT). Let's break it down: • RATE is the discount rate or interest rate, 15 Dec 2014 The steps for calculating your
monthly payment in Excel Interest rate (the interest rate divided by the number of accrual periods per year – for instance, a 6 % Insert the correction function in the cell next to Monthly Payment.
example: Excels function Rate(nper,pmt,pv,fv) there is a way to calculate interest, however it basically sometime fails and gives errors. 11 Feb 2011 Using Excel's PMT function, you can easily figure
out what the yearly The By Changing Cell variable is the interest rate you want Excel to find for you you can explore the different variables in your calculations more clearly. If you make monthly
payments on a four-year loan at 12 percent annual interest, use 12%/12 for guess and 4*12 for nper. If you make annual payments on the same loan, use 12% for guess and 4 for nper. Example. Copy the
example data in the following table, and paste it in cell A1 of a new Excel worksheet. How to Calculate an Interest Payment Using Microsoft Excel - Steps Open Microsoft Excel. Click Blank Workbook.
Set up your rows. Enter the payment's total value. Enter the current interest rate. Enter the number of payments you have left. Select cell B4. Enter the interest payment formula. The Excel RATE
function is a financial function that returns the interest rate per period of an annuity. You can use RATE to calculate the periodic interest rate, then multiply as required to derive the annual
interest rate. The RATE function calculates by iteration.
To calculate simple interest in Excel (i.e. interest that is not compounded), you can use a formula that multiples principal, rate, and term. This example assumes that $1000 is invested for 10 years
at an annual interest rate of 5%. How to Create a Mortgage Calculator With Microsoft Excel. This wikiHow teaches you how to calculate your mortgage-related expenses like interest, monthly payments,
and total loan amount using a Microsoft Excel spreadsheet. Once you've done Microsoft Excel Mortgage Calculator with Amortization Schedule. interest rate - the loan's stated APR. loan term in years -
most fixed-rate home loans across the United States are scheduled to amortize over 30 years. Other common domestic loan periods include 10, 15 & 20 years. Some foreign countries like Canada or the
United Kingdom have | {"url":"https://tradingkzolv.netlify.app/winge88301jig/ms-excel-calculate-interest-rate-zih.html","timestamp":"2024-11-10T15:57:16Z","content_type":"text/html","content_length":"32458","record_id":"<urn:uuid:01203f45-6382-47b5-97d0-f2a5cbee76d2>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00285.warc.gz"} |
Unit 4
Lesson 3
Addition and Subtraction with Tens
Warm-up: How Many Do You See: Groups of 10 (10 minutes)
The purpose of this How Many Do You See is for students to subitize or use grouping strategies to describe the images they see. This leads into the next activity, in which students add or subtract 10
from multiples of 10. The third image introduces a base-ten drawing, which students will see throughout the rest of the unit.
• Groups of 2
• “How many do you see? How do you see them?”
• Flash the image.
• 30 seconds: quiet think time
• Display the image.
• “Discuss your thinking with your partner.”
• 1 minute: partner discussion
• Record responses.
• Repeat for each image.
Student Facing
How many do you see?
How do you see them?
Activity Synthesis
• “How are the cube towers of 10 the same as the drawing in the last image? How are they different?” (They both show 4 tens in different ways. One looks like cubes and one looks like a drawing.)
Activity 1: How Many Tens Now? (10 minutes)
The purpose of this activity is for students to use towers of 10 to physically add or subtract a ten from a multiple of 10. The structure of this task encourages students to notice patterns in the
count of tens and the numbers used to represent the count. Students connect adding and subtracting a ten to skip-counting forward or backward by ten and what they've learned about counting groups of
tens from previous lessons (MP7, MP8).
• Groups of 3
• Give each group 9 towers of 10 connecting cubes.
• “Show 1 ten.”
• “Add a ten. How many do you have now?”
• 30 seconds: partner discussion
• Share responses.
• “Now that we have answered the first one together, you and your partner will keep adding 1 ten and record how many you have each time. As you work, talk to your partner about what you notice
about the numbers.”
• 5 minutes: partner work time
• Monitor for students who represent the value with:
□ base-ten drawings
□ __ tens
□ two-digit number
□ addition equations
Student Facing
1. Show 1 ten.
Add a ten.
How many do you have now?
2. Add another ten.
How many do you have now?
Show your thinking using drawings, numbers, or words.
3. Add another ten.
How many do you have now?
Show your thinking using drawings, numbers, or words.
4. Add another ten.
How many do you have now?
Show your thinking using drawings, numbers, or words.
5. Add another ten.
How many do you have now?
Show your thinking using drawings, numbers, or words.
6. Add another ten.
How many do you have now?
Show your thinking using drawings, numbers, or words.
7. Add another ten.
How many do you have now?
Show your thinking using drawings, numbers, or words.
8. Add another ten.
How many do you have now?
Show your thinking using drawings, numbers, or words.
1. Show 9 tens
Take away a ten.
How many do you have now?
Show your thinking using drawings, numbers, or words.
2. Take away another ten.
How many do you have now?
Show your thinking using drawings, numbers, or words.
3. Take away another ten.
How many do you have now?
Show your thinking using drawings, numbers, or words.
4. Take away another ten.
How many do you have now?
Show your thinking using drawings, numbers, or words.
5. Take away another ten.
How many do you have now?
Show your thinking using drawings, numbers, or words.
6. Take away another ten.
How many do you have now?
Show your thinking using drawings, numbers, or words.
7. Take away another ten.
How many do you have now?
Show your thinking using drawings, numbers, or words.
8. Take away another ten.
How many do you have now?
Show your thinking using drawings, numbers, or words.
Activity Synthesis
• Display 7 towers of 10.
• “I have 7 tens, which is 70 cubes. How many will I have if I add one more ten? How can I represent it?”
• Invite selected students to share their representations.
• “How are these representations related?” (They all show the same value.)
• “What did you notice each time you added a ten?” (The number was the next number you say when you count by ten. The number of tens was 1 more. One of the digits changed. It went up by 1.)
• “What did you notice each time you subtracted a ten?” (The number was the number you say when you count back by 10. The number of tens was 1 less. One of the digits changed. It was 1 less.)
Activity 2: Introduce Five in a Row, Add or Subtract 10 (15 minutes)
The purpose of this activity is for students to learn stage 4 of the Five in a Row center. Students choose a card that shows a multiple of 10. They choose whether to add or subtract 10 from the
number on the card to cover a number on their gameboard.
MLR8 Discussion Supports. Display sentence frames to encourage partner discussion during the game: “I will add 10. The sum is _____.” and “I will subtract 10. The difference is _____.”
Advances: Speaking, Conversing, Representing
Action and Expression: Internalize Executive Functions. To support working memory, provide students with access to sticky notes or mini whiteboards to keep track of solutions for adding 10 or
subtracting 10 before making a choice of where to place the counter on the gameboard.
Supports accessibility for: Memory, Organization
Required Materials
Materials to Gather
Materials to Copy
• Five in a Row Addition and Subtraction Stage 4 Gameboard
• Number Cards, Multiples of 10 (0-90)
Required Preparation
• Create a set of cards from the blackline master for each group of 2.
• Groups of 2
• Give each group a set of cards and a gameboard. Give students access to connecting cubes in towers of 10 and two-color counters.
• “We are going to learn a new way to play Five in a Row. It is called Five in a Row, Add or Subtract 10.”
• Display the gameboard and pile of cards.
• “I am going to flip one card over. Now I will decide if I want to add 10 to the number or subtract 10 from the number. I am going to choose to add 10. What is the sum? How do you know?”
• 30 seconds: quiet think time
• Share responses.
• “Now, I put a counter on the sum on my gameboard. Now it’s my partner’s turn.”
• “Start by deciding who will use the yellow side and who will use the red side of the counters. Then, take turns flipping one card over, choosing to add 10 to or subtract 10 from the number, and
placing your counter on the gameboard to cover the sum or difference. The first person to cover five numbers in a row is the winner.”
• 8 minutes: partner work time
• “Now choose your favorite round. Record how you added or subtracted 10.”
• 2 minutes: independent work time
• Monitor for students who:
□ draw towers of 10
□ count on or back by 10
□ say or write “2 tens and 1 ten is 3 tens”
□ record with expressions or equations
Student Facing
Record your favorite round.
Show your thinking using drawings, numbers, or words.
Activity Synthesis
• Invite 2-3 previously identified students to share how they added or subtracted 10.
• Display gameboard with some numbers covered by counters.
• Display a card.
• “Would you add or subtract 10 from this number? Why?” (I would add 10 because then I could cover _____. I would subtract 10 because then I could cover _____.)
• Repeat as time allows.
Activity 3: Centers: Choice Time (15 minutes)
The purpose of this activity is for students to choose from activities that offer practice adding and subtracting. Students choose from any stage of previously introduced centers.
• Five in a Row
• How Close?
• Number Puzzles
Required Preparation
• Gather materials from previous centers:
□ Five in a Row, Stages 1–4
□ How Close, Stages 1 and 2
□ Number Puzzles, Stages 1 and 2
• Groups of 2
• “Now you are going to choose from centers we have already learned.”
• Display the center choices in the student book.
• “Think about what you would like to do.”
• 30 seconds: quiet think time
• Invite students to work at the center of their choice.
• 10 minutes: center work time
Student Facing
Choose a center.
Five in a Row
How Close?
Number Puzzles
Activity Synthesis
• “Han is playing Five in a Row, Add or Subtract 10. He picks a card with the number 60. What are the two numbers he could cover on his gameboard? How do you know?”
Lesson Synthesis
“Today we added and subtracted 10. Tell your partner how you add 10 to a number.” (I count by ten until I get to the number and then I count one more number. I look at the number to see how many tens
are in it and I add one more.)
Cool-down: Unit 4, Section A Checkpoint (0 minutes) | {"url":"https://im.kendallhunt.com/K5/teachers/grade-1/unit-4/lesson-3/lesson.html","timestamp":"2024-11-07T06:22:21Z","content_type":"text/html","content_length":"106566","record_id":"<urn:uuid:afd203d3-943b-4007-9278-a99a182a636e>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00517.warc.gz"} |
Gregor Papa
δ-perturbation of bilevel optimization problems: An error bound analysis, Oper. Res. Perspect. (2024)
Models for forecasting the traffic flow within the city of Ljubljana, ETRE (2023)
Electric-bus routes in hilly urban areas: overview and challenges, RENEW SUST ENERG REV (2022)
Data Multiplexed and Hardware Reused Architecture for Deep Neural Network Accelerator, NEUROCOMPUTING (2022)
Four algorithms to solve symmetric multi-type non-negative matrix tri-factorization problem, J GLOBAL OPTIM (2022)
An open-source approach to solving the problem of accurate food-intake monitoring, IEEE ACCESS (2021)
Differential Evolution with Estimation of Distribution for Worst-case Scenario Optimization, MATHEMATICS (2021)
The effect of colour on reading performance in children, measured by a sensor hub: From the perspective of gender, PLOS ONE (2021)
The Relation Between Physiological Parameters and Colour Modifications in text Background and Overlay During Reading in Children With and Without Dyslexia, BRAIN SCI (2021)
Low power contactless bioimpedance sensor for monitoring breathing activity, SENSORS (2021)
Multi-objective approaches to ground station scheduling for optimization of communication with satellites, OPTIM ENG (2021)
The sensor hub for detecting the developmental characteristics in reading in children on a white vs. coloured background/coloured overlays, SENSORS (2021)
Simulation of Variational Gaussian Process NARX models with GPGPU, ISA T (2021)
Improving the Maintenance of Railway Switches Through Proactive Approach, ELECTRONICS (2020)
Advanced sensor-based maintenance in real-world exemplary cases, TAUT (2020)
Towards Managerial Support for Data Analytics Results, IND ENG MANAG (2020)
Multi-level information fusion for learning a blood pressure predictive model using sensor data, INFORM FUSION (2020)
Thermal phenomena in LTCC sensor structures, SENSOR ACTUAT A-PHYS (2019)
A comparison of models for forecasting the residential natural gas demand of an urban area, EGY (2019)
Innovative pocket-size Bluetooth kitchen scale, Agro Food Ind Hi Tech (2018)
Multi-hop communication in Bluetooth Low Energy ad-hoc wireless sensor network, Inform. MIDEM (2018)
A formal framework of human machine interaction in proactive maintenance – MANTIS experience, TAUT (2018)
Sensors in Proactive Maintenance - A case of LTCC pressure sensors, MAINT RELIAB (2018)
The Concept of an Ecosystem Model to Support the Transformation to Sustainable Energy Systems, APPL ENERG (2016)
Design of an Axial Flux Permanent Magnet Synchronous Machine Using Analytical Method and Evolutionary Optimization, IEEE T ENERGY CONVER (2016)
Big-Data Analytics: a Critical Review and Some Future Directions, IJBIDM (2015)
Using a Genetic Algorithm to Produce Slogans, Informatica (2015)
A Case Analysis of Embryonic Data Mining Success, INT J INFORM MANAGE (2015)
A multi-objective approach to the application of real-world production scheduling, EXPERT SYST APPL (2013)
Parameter-less algorithm for evolutionary-based optimization, COMPUT OPTIM APPL (2013)
Metaheuristic Approach to Transportation Scheduling in Emergency Situations, Transport (2013)
Guided restarting local search for production planning, Eng. Appl. Artif. Intell. (2012)
Who are the Likeliest Customers: Direct Mail Optimization with Data Mining, Contemp. eng. sci. (2011)
Temperature Simulations in Cooling Appliances, Elektroteh. vest. (2011)
MatPort - online mathematics learning with bioinspired decision-making system, Int. J. Innov. Comput. Appl. (2011)
Simulator hladilnega aparata za Gorenje d.o.o., Novice IJS (2010)
Production scheduling with a memetic algorithm, Int. J. Innov. Comput. Appl. (2010)
Genetic Algorithm for Test Pattern Generator Design, Appl. Intell. (2010)
Simuliranje in optimiranje pri razvoju hladilnih aparatov, GIB (2009)
Visual control of an industrial robot manipulator: Accuracy Estimation, Stroj. vestn. (2009)
Robot Vision Accuracy Estimation, Elektroteh. vest. (2009)
A new approach to optimization of test pattern generator structure, Inform. MIDEM (2008)
Test Pattern Generator Design Optimization Based on Genetic Algorithm, Lect. Notes Comp. Sc. (2008)
A quadtree-based progressive lossless compression technique for volumetric data sets, J. Inf. Sci. Eng. (2008)
Deterministic test pattern generator design with genetic algorithm approach, J. Electr. Eng. (2007)
Evaluation of accuracy in a 3D reconstruction system, WSEAS Trans. Syst. Control (2007)
Trust Modeling for Networked Organizations using Reputation and Collaboration Estimates, IEEE T. Syst. Man Cy. C (2007)
A comparative study of stochastic optimization methods in electric motor design, Appl. Intell. (2007)
Optimization algorithms inspired by electromagnetism and stigmergy in electro-technical engineering, WSEAS Trans. Inf. Sc. Appl. (2005)
An artificial intelligence approach to the efficiency improvement of a universal motor, Eng. Appl. Artif. Intel. (2005)
The parameters tuning for evolutionary synthesis algorithm, Informatica (Ljublj.) (2004)
An evolutionary technique for scheduling and allocation concurrency, WSEAS Trans. Syst. (2004)
Universal Motor Efficiency Improvement using Evolutionary Optimization, IEEE T. Ind. Electron. (2003)
An evolutionary approach to chip design: an empirical evaluation, Inform. MIDEM (2003)
An evolutionary algorithm for concurrent scheduling and allocation in the process of integrated-circuit design, Elektroteh. vest. (2003)
Chip design based on genetic approach, J. Electr. Eng. (2002)
Automatic large-scale integrated circuit synthesis using allocation-based scheduling algorithm, Microprocess. Microsy. (2002)
Visokonivojska sinteza vezij z genetskimi algoritmi, Inform. MIDEM (2001)
Empirical evaluation of heuristic scheduling algorithms used in parallel systems design, Eng. Simul. (2001)
Linear algebra in one-dimensional systolic arrays, Informatica (Ljublj.) (2000)
Empirical evaluation of heuristic scheduling algorithms used in parallel systems design, Electron. Model. (2000)
Scheduling Algorithms in High-Level Synthesis - Overview and Evaluation, Elektroteh. vest. (1998)
Traffic Forecasting With Uncertainty: A Case for Conformalized Quantile Regression, ECCOMAS 2024
Fleet and Traffic Management Systems for Conducting Future Cooperative Mobility, ECCOMAS 2024
Fleet and traffic management systems for conducting future cooperative mobility, TRA 2024
An evolutionary approach to pessimistic bilevel optimization problems,
Users’ cognitive processes in a user interface for predicting football match results, IS 2023
An evolutionary approach to pessimistic bilevel optimization problems, BILEVEL 2023
Evaluation of Parallel Hierarchical Differential Evolution for Min-Max Optimization Problems Using SciPy, BIOMA 2022
On Suitability of the Customized Measuring Device for Electric Motor, IECON 2022
Real‐world Applications of Dynamic Parameter Control in Evolutionary Computation - tutorial , PPSN 2022
Dynamic Computational Resource Allocation for CFD Simulations Based on Pareto Front Optimization, GECCO '22
GPU-based Accelerated Computation of Coalescence and Breakup Frequencies for Polydisperse Bubbly Flows, NENE 2021
Worst-Case Scenario Optimisation: Bilevel Evolutionary Approaches, IPSSC 2021
Solving pessimistic bilevel optimisation problems with evolutionary algorithms, EUROGEN 2021
Detecting Network Intrusion Using Binarized Neural Networks, WF-IoT2021
Preferred Solutions of the Ground Station Scheduling Problem using NSGA-III with Weighted Reference Points Selection, CEC 2021
Applications of Dynamic Parameter Control in Evolutionary Computation -tutorial, GECCO 2021
Dynamic Parameter Changing During the Run to Speed Up Evolutionary Algorithms: PPSN 2020 tutorial, PPSN 2020
On Formulating the Ground Scheduling Problem as a Multi-objective Bilevel Problem, BIOMA 2020
Refining the CC-RDG3 Algorithm with Increasing Population Scheme and Persistent Covariance Matrix, BIOMA 2020
Dynamic Parameter Choices in Evolutionary Computation: WCCI 2020 tutorial, WCCI 2020
Dynamic control parameter choices in evolutionary computation: GECCO 2020 tutorial, GECCO 2020
Solving min-max optimisation problems by means of bilevel evolutionary algorithms: a preliminary study, GECCO 2020
Colors and colored overlays in dyslexia treatment, IPSSC 2020
Optimisation platform for remote collaboration of different teams, OSE5
Experimental evaluation of deep-learning applied on pendulum balancing, ERK 2019
An adaptive evolutionary surrogate-based approach for single-objective bilevel optimisation, UQOP 2019
Parameter Control in Evolutionary Bilevel Optimisation, IPSSC 2019
The sensor hub for detecting the influence of colors on reading in children with dyslexia, IPSSC 2019
Comparing different settings of parameters needed for pre-processing of ECG signals used for blood pressure classification, BIOSIGNALS 2019
The role of colour sensing and digitalization on the life quality and health tourism, QLife 2018
Evolutionary operators in memetic algorithm for matrix tri-factorization problem, META 2018
From a Production Scheduling Simulation to a Digital Twin, HPOI 2018
Evolution of Electric Motor Design Approaches: The Domel Case, HPOI 2018
Bluetooth based sensor networks for wireless EEG monitoring, ERK 2018
Construction of Heuristic for Protein Structure Optimization Using Deep Reinforcement Learning, BIOMA 2018
Proactive Maintenance of Railway Switches, CoDIT 2018
Sensors: The Enablers for Proactive Maintenance in the Real World, CoDIT 2018
The role of physiological sensors in dyslexia treatment, ERK 2017
Transportation problems and their potential solutions in smart cities, SST 2017
Prediction of natural gas consumption using empirical models, SST 2017
Cyber Physical System Based Proactive Collaborative Maintenance, SST 2016
Optimizacija geometrije profila letalskega krila, ERK 2016
Pametno urejanje prometa in prostorsko načrtovanje, IS 2015
Pobuda PaMetSkup, IS 2015
Concept of an Ecosystem model to support transformation towards sustainable energy systems, SDEWES 2015
Napredne metode za optimizacijo izdelkov in procesov, Dnevi energetikov 2015
Case-based slogan production, ICCBR 2015
Modularni sistem za upravljanje Li-Ion baterije, AIG 2015
Upgrade of the MovSim for Easy Traffic Network Modification, SIMUL 2015
Suitability of MASA Algorithm for Traveling Thief Problem, SOR'15
Empirical Convergence Analysis of GA for Unit Commitment Problem, BIOMA 2014
Automated Slogan Production Using a Genetic Algorithm, BIOMA 2014
Comparison Between Single and Multi Objective Genetic Algorithm Approach for Optimal Stock Portfolio Selection, BIOMA 2014
Implementation of a slogan generator, ICCC 2014
The Parameter-Less Evolutionary Search for Real-Parameter Single Objective Optimization, CEC 2013
Optimization in Organizations: Things We Tend to Forget, BIOMA 2012
Combinatorial implementation of a parameter-less evolutionary algorithm, IJCCI 2011
Optimal on-line built-in self-test structure for system-reliability improvement, CEC 2011
Hybrid Parameter-Less Evolutionary Algorithm in Production Planning, ICEC2010
Vpliv časovnih omejitev na učinkovitost memetskega algoritma pri planiranju proizvodnje, ERK 2010
Organizacijski vidik uvajanja napovedne analitike v organizacije, ERK 2010
Influence of Fixed-Deadline Orders on Memetic Algorithm Performance in Production Planning, MCPL 2010
Optimization of cooling appliance control parameters, EngOpt 2010
Bioinspired Online Mathematics Learning, BIOMA 2010
Application of Memetic Algorithm in Production Planning, BIOMA 2010
Thermal Simulation for Development Speed-up, SIMUL 2010
Metaheuristic Approach to Loading Schedule Problem, MISTA 2009
MATPORT - Web application to support enhancement in elementary mathematics pedagogy, CSEDU 2009
Constrained transportation scheduling, BIOMA 2008
Web interface for Progressive zoom-in algorithm, ERK 2008
Parameter-less evolutionary search, GECCO-08
Deterministic Test Pattern Generator Design, EvoHOT 2008
Robot TCP positioning with vision : accuracy estimation of a robot visual control system, ICINCO 2007
Estimation of accuracy for robot vision control, SHR2007
Accuracy of a 3D reconstruction system, ISPRA 2007
Non-parametric genetic algorithm, BIOMA 2006
Evolutionary approach to deterministic test pattern generator design, Euromicro 2006
Algoritem postopnega približevanja, ERK 2006
On design of a low-area deterministic test pattern generator by the use of genetic algorithm, CADSM2005
Optimization algorithms inspired by electromagnetism and stigmergy in electro-technical engineering, EC 2005
Towards automated trust modeling in networked organizations, ICAS 2005
A decision support approach to modeling trust in networked organizations, IEA/AIE 2005
Učinkovitost brezparametrskega genetskega algoritma, ERK 2005
Test pattern generator structure design by genetic algorithm, BIOMA 2004
Electrical engineering design with an evolutionary approach, BIOMA 2004
An evolutionary technique for scheduling and allocation concurrency, MATH 2004
Ovrednotenje sočasnega razvrščanja operacij in dodeljevanja enot v postopku načrtovanja integriranih vezij, ERK 2003
Reševanje soodvisnih korakov načrtovanja integriranih vezij, IS 2003
Concurrent operation scheduling and unit allocation with an evolutionary technique, DSD 2003
Evolutionary approach to scheduling and allocation concurrency in integrated circuits design, ECCTD-03
Sočasno razvrščanje operacij in dodeljevane enot v postopku načrtovanja integriranih vezij, ERK 2002
Evolutionary synthesis algorithm - genetic operators tuning, EC-02
Evolutionary method for a universal motor geometry optimization: a new automated design approach, IUTAM 2002
Evolutionary method for a universal motor geometry optimization, IUTAM
Optimizacija geometrije lamele univerzalnega motorja, ERK 2001
Optimization of the geometry of the rotor and the stator, SOR-01
Performance optimization of a universal motor using a genetic algorithm, INES 2001
Improving the technical quality of a universal motor using and evolutionary approach, Euromicro-01
Evolutionary performance optimization of a universal motor, MIPRO 2001
Evolutionary optimization of a universal motor, IECON-01
Večkriterijsko genetsko načrtovanje integriranih vezij, ERK 2000
Multi-objective genetic scheduling algorithm with respect to allocation in high-level synthesis, Euromicro2000
The use of genetic algorithm in the integrated circuit design, WDTA-99
Optimization of the parallel matrix multiplication, CSCC-99
Transformation of the systolic arrays from two-dimensional to linear form, ICECS-99
Genetic algorithm as a method for finite element mesh smoothing, PPAM-99
Scheduling algorithms based on genetic approach, NNTA-99
Reševanje linearnega sistema enačb v enodimenzionalnih sistoličnih poljih, ERK-98
Evolutionary scheduling algorithms in high-level synthesis, WDTA-98
Using simulated annealing and genetic algorithm in the automated synthesis of digital systems, IMACS-CSC-98
On-line testing of a discrete PID regulator: a case study, Euromicro97
A Framework for Applying Data-Driven AI/ML Models in Reliability, Recent Advances in Microelectronics Reliability, Springer (2024)
Reliability Improvements for In-Wheel Motor, Recent Advances in Microelectronics Reliability, Springer (2024) | {"url":"https://cs.ijs.si/papa/?show=publications&id=192&journals=all&conferences=all","timestamp":"2024-11-10T06:33:47Z","content_type":"text/html","content_length":"38633","record_id":"<urn:uuid:f29cd6f3-e27b-496e-9d48-cb7489b847ea>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00243.warc.gz"} |
Show that the three points (−1,6),(−10,12),(−16,16) are colline... | Filo
Show that the three points are collinear.
Not the question you're searching for?
+ Ask your question
Area of the triangle formed by the three points , by using the formula for area.
Was this solution helpful?
Video solutions (3)
Learn from their 1-to-1 discussion with Filo tutors.
5 mins
Uploaded on: 12/16/2023
Was this solution helpful?
6 mins
Uploaded on: 1/17/2023
Was this solution helpful?
Found 7 tutors discussing this question
Discuss this question LIVE for FREE
6 mins ago
One destination to cover all your homework and assignment needs
Learn Practice Revision Succeed
Instant 1:1 help, 24x7
60, 000+ Expert tutors
Textbook solutions
Big idea maths, McGraw-Hill Education etc
Essay review
Get expert feedback on your essay
Schedule classes
High dosage tutoring from Dedicated 3 experts
Practice questions from IIT-JEE Super Course in Mathematics -Algebra I (Pearson)
Practice more questions from Straight Lines
View more
Practice questions on similar concepts asked by Filo students
View more
Stuck on the question or explanation?
Connect with our Mathematics tutors online and get step by step solution of this question.
231 students are taking LIVE classes
Question Text Show that the three points are collinear.
Updated On Dec 16, 2023
Topic Straight Lines
Subject Mathematics
Class Class 11
Answer Type Text solution:1 Video solution: 3
Upvotes 298
Avg. Video Duration 6 min | {"url":"https://askfilo.com/math-question-answers/show-that-the-three-points-16-1012-1616-are-collinear","timestamp":"2024-11-09T17:06:58Z","content_type":"text/html","content_length":"331222","record_id":"<urn:uuid:31adf721-61bd-47e8-8ef7-8f3c9863ade8>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00042.warc.gz"} |
Imputation in R: Top 3 Ways for Imputing Missing Data – R-Craft
Imputation in R: Top 3 Ways for Imputing Missing Data
This article is originally published at https://appsilon.com
Real-world data is often messy and full of missing values. As a result, data scientists spend the majority of their time cleaning and preparing the data, and have less time to focus on predictive
modeling and machine learning. If there’s one thing all data preparation steps share, then it’s dealing with missing data. Today we’ll make this process a bit easier for you by introducing 3 ways for
data imputation in R.
After reading this article, you’ll know several approaches for imputation in R and tackling missing values in general. Choosing an optimal approach oftentimes boils down to experimentation and domain
knowledge, but we can only take you so far.
Interested in Deep Learning? Learn how to visualize PyTorch neural network models.
Table of contents:
Introduction to Imputation in R
In the simplest words, imputation represents a process of replacing missing or NA values of your dataset with values that can be processed, analyzed, or passed into a machine learning model. There
are numerous ways to perform imputation in R programming language, and choosing the best one usually boils down to domain knowledge.
Picture this – there’s a column in your dataset that stands for the amount the user spends on a phone service X. Values are missing for some clients, but what’s the reason? Can you impute them with a
simple mean? Well, you can’t, at least not without asking a business question first – Why are these values missing?
Most likely, the user isn’t using that phone service, so imputing missing values with mean would be a terrible, terrible idea.
Let’s examine our data for today. We’ll use the training portion of the Titanic dataset and try to impute missing values for the Age column:
You can see some of the possible values below:
There’s a fair amount of NA values, and it’s our job to impute them. They’re most likely missing because the creator of the dataset had no information on the person’s age. If you were to build a
machine learning model on this dataset, the best way to evaluate the imputation technique would be to measure classification metrics (accuracy, precision, recall, f1) after training the model.
But before diving into the imputation, let’s visualize the distribution of our variable:
ggplot(titanic_train, aes(Age)) +
geom_histogram(color = "#000000", fill = "#0099F8") +
ggtitle("Variable distribution") +
theme_classic() +
theme(plot.title = element_text(size = 18))
The histogram is displayed in the figure below:
So, why is this important? It’s a good idea to compare variable distribution before and after imputation. You don’t want the distribution to change significantly, and a histogram is a good way to
check that.
Don’t know a first thing about histograms? Our detailed guide with ggplot2 has you covered.
We’ll now explore a suite of basic techniques for imputation in R.
Simple Value Imputation in R with Built-in Functions
You don’t actually need an R package to impute missing values. You can do the whole thing manually, provided the imputation techniques are simple. We’ll cover constant, mean, and median imputations
in this section and compare the results.
The value_imputed variable will store a data.frame of the imputed ages. The imputation itself boils down to replacing a column subset that has a value of NA with the value of our choice. This will
• Zero: constant imputation, feel free to change the value.
• Mean (average): average age after when all NA‘s are removed.
• Median: median age after when all NA‘s are removed.
Here’s the code:
value_imputed <- data.frame(
original = titanic_train$Age,
imputed_zero = replace(titanic_train$Age, is.na(titanic_train$Age), 0),
imputed_mean = replace(titanic_train$Age, is.na(titanic_train$Age), mean(titanic_train$Age, na.rm = TRUE)),
imputed_median = replace(titanic_train$Age, is.na(titanic_train$Age), median(titanic_train$Age, na.rm = TRUE))
We now have a dataset with four columns representing the age:
Let’s take a look at the variable distribution changes introduced by imputation on a 2×2 grid of histograms:
h1 <- ggplot(value_imputed, aes(x = original)) +
geom_histogram(fill = "#ad1538", color = "#000000", position = "identity") +
ggtitle("Original distribution") +
h2 <- ggplot(value_imputed, aes(x = imputed_zero)) +
geom_histogram(fill = "#15ad4f", color = "#000000", position = "identity") +
ggtitle("Zero-imputed distribution") +
h3 <- ggplot(value_imputed, aes(x = imputed_mean)) +
geom_histogram(fill = "#1543ad", color = "#000000", position = "identity") +
ggtitle("Mean-imputed distribution") +
h4 <- ggplot(value_imputed, aes(x = imputed_median)) +
geom_histogram(fill = "#ad8415", color = "#000000", position = "identity") +
ggtitle("Median-imputed distribution") +
plot_grid(h1, h2, h3, h4, nrow = 2, ncol = 2)
Here’s the output:
All imputation methods severely impact the distribution. There are a lot of missing values, so setting a single constant value doesn’t make much sense. Zero imputation is the worst, as it’s highly
unlikely for close to 200 passengers to have the age of zero.
Maybe mode imputation would provide better results, but we’ll leave that up to you.
Impute Missing Values in R with MICE
MICE stands for Multivariate Imputation via Chained Equations, and it’s one of the most common packages for R users. It assumes the missing values are missing at random (MAR).
The basic idea behind the algorithm is to treat each variable that has missing values as a dependent variable in regression and treat the others as independent (predictors). You can learn more about
MICE in this paper.
The R mice packages provide many univariate imputation methods, but we’ll use only a handful. First, let’s import the package and subset only the numerical columns to keep things simple. Only the Age
attribute contains missing values:
titanic_numeric <- titanic_train %>%
select(Survived, Pclass, SibSp, Parch, Age)
The md.pattern() function gives us a visual representation of missing values:
Onto the imputation now. We’ll use the following MICE imputation methods:
• pmm: Predictive mean matching.
• cart: Classification and regression trees.
• laso.norm: Lasso linear regression.
Once again, the results will be stored in a data.frame:
mice_imputed <- data.frame(
original = titanic_train$Age,
imputed_pmm = complete(mice(titanic_numeric, method = "pmm"))$Age,
imputed_cart = complete(mice(titanic_numeric, method = "cart"))$Age,
imputed_lasso = complete(mice(titanic_numeric, method = "lasso.norm"))$Age
Let’s take a look at the results:
It’s hard to judge from the table data alone, so we’ll draw a grid of histograms once again (copy and modify the code from the previous section):
The imputed distributions overall look much closer to the original one. The CART-imputed age distribution probably looks the closest. Also, take a look at the last histogram – the age values go below
zero. This doesn’t make sense for a variable such as age, so you will need to correct the negative values manually if you opt for this imputation technique.
That covers MICE, so let’s take a look at another R imputation approach – Miss Forest.
Imputation with R missForest Package
The Miss Forest imputation technique is based on the Random Forest algorithm. It’s a non-parametric imputation method, which means it doesn’t make explicit assumptions about the function form, but
instead tries to estimate the function in a way that’s closest to the data points.
In other words, it builds a random forest model for each variable and then uses the model to predict missing values. You can learn more about it by reading the article by Oxford Academic.
Let’s see how it works for imputation in R. We’ll apply it to the entire numerical dataset and only extract the age:
missForest_imputed <- data.frame(
original = titanic_numeric$Age,
imputed_missForest = missForest(titanic_numeric)$ximp$Age
There’s no option for different imputation techniques with Miss Forest, as it always uses the random forests algorithm:
Finally, let’s visualize the distributions:
It looks like Miss Forest gravitated towards a constant value imputation since a large portion of values is around 35. The distribution is quite different from the original one, which means Miss
Forest isn’t the best imputation technique we’ve seen today.
Summary of Imputation in R
And that does it for three ways to impute missing values in R. You now have several new techniques under your toolbelt, and these should simplify any data preparation and cleaning process. The
imputation approach is almost always tied to domain knowledge of the problem you’re trying to solve, so make sure to ask the right business questions when needed.
For a homework assignment, we would love to see you build a classification machine learning model on the Titanic dataset, and use one of the discussed imputation techniques in the process. Which one
yields the most accurate model? Which one makes the most sense? Feel free to share your insights in the comment section below and to reach us on Twitter – @appsilon. We’d love to hear from you.
Looking for more guidance on Data Cleaning in R? Start with these two packages.
The post appeared first on appsilon.com/blog/.
Thanks for visiting r-craft.org
This article is originally published at https://appsilon.com
Please visit source website for post related comments. | {"url":"https://r-craft.org/imputation-in-r-top-3-ways-for-imputing-missing-data/","timestamp":"2024-11-05T01:21:18Z","content_type":"text/html","content_length":"126382","record_id":"<urn:uuid:e787e7b6-1a99-4eb2-b8b7-3748b56b12b8>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00619.warc.gz"} |
Multiple published clinical studies have demonstrated that the Kane formula is more accurate than all currently available IOL formulas (including Hill-RBF 2.0, Barrett Universal 2, Olsen, Haigis,
Hoffer Q, Holladay 1, SRK/T, EVO and Holladay 2).
The Kane formula maintains its accuracy at the extremes of axial length, resulting in a 25.1% reduction in absolute error in long eyes (≥26.0 mm), compared with the SRK/T; and a 25.5% reduction in
absolute error in short eyes (≤22.0 mm), compared with the Hoffer Q formula.
In a recent study of extreme axial hyperopia, published in the Journal of Cataract and Refractive Surgery, the Kane formula resulted in an additional 22.0% of patients within ±0.50 D of the
refractive aim compared with the Barrett Universal 2 and 23.1% more compared to the Hoffer Q. This study is available at the following link: https://journals.lww.com/jcrs/Abstract/9000/
The Kane formula was developed in September 2017 using ~30,000 highly accurate cases. The formula is based on theoretical optics and incorporates both regression and artificial intelligence
components to further refine its predictions. The formula was created using high-performance cloud-based computing which is a way to leverage the power of the cloud to create a virtual supercomputer
capable of performing many decades worth of calculations in a few days. A focus of the formula was to reduce the errors seen at the extremes of the various ocular dimensions which is where the
current formulas display larger errors. Variables used in the formula are axial length, keratometry, anterior chamber depth, lens thickness, central corneal thickness and patient biological sex.
The Kane toric formula uses an algorithm incorporating regression, theoretical optics and artificial intelligence techniques to calculate the total corneal astigmatism. It then applies an ELP based
approach to calculate the residual astigmatism for a particular eye and IOL power combination. It is recommended to use an SIA of zero with the Kane toric formula when performing surgery with a
temporal incision size of ≤2.75 mm.
In the largest study on toric IOL formula accuracy, the Kane toric formula has been shown to be more accurate than all currently available toric formulas (Barrett, Abulafia-Koch, Holladay 2 with
total SIA, EVO 2.0 and Næser-Savini). This study is available in Ophthalmology (https://www.aaojournal.org/article/S0161-6420(20)30416-4/fulltext).
The Kane keratoconus formula is a purely theoretical modification of the original Kane formula. It uses a modified corneal power, derived from anterior corneal radii of curvature, that better
represents the true anterior/posterior ratio in keratoconic eyes. The formula also minimizes the effect of corneal power on the ELP calculation to enable more accurate predictions. There is no
requirement for additional variables for the keratoconus formula and the formula works with the same A-constant used for non-keratoconic patients. A myopic target refraction is recommended in
patients with an average corneal power >48 D. Between 48 D and 53 D, a target of -0.50 DS is recommended; between 53 D and 59 D, a target of -1.00 DS is recommended; and above 59 D, a target of -1.50
to -2.50 DS is recommended. These targets are designed to avoid an undesirable hyperopic outcome.
The Kane keratoconus formula was shown to be significantly more accurate than all currently available formulas in patients with keratoconus in a study recently published in Ophthalmology (https://
Compulsory fields for the calculator are the A-constant, biological sex, axial length, corneal power and anterior chamber depth. Although adding lens thickness and central corneal thickness improves
the prediction, they are optional variables. This allows owners of older biometers to use the formula.
Index relates to the K-index of the instrument used to measure the corneal curvature. A default value of 1.3375 has been selected. If your device uses a different K-index, please select the
appropriate option from the drop-down menu.
The formula has been developed to have an A-constant very similar to the SRK/T A-constant. If the surgeon has an optimised A-constant, then that is recommended for use. Otherwise, we recommend the
ULIB SRK/T A-constant for any particular IOL. The “Constants” page has information about the appropriate constant for different IOLs. | {"url":"https://www.iolformula.com/about/","timestamp":"2024-11-06T09:09:30Z","content_type":"text/html","content_length":"18946","record_id":"<urn:uuid:7d68ce21-13e1-4b9f-be5f-56e789cbb2ca>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00327.warc.gz"} |
Re: cellBasisVector x,y,z values
Re: cellBasisVector x,y,z values
Hi Shirley,
I don't think your interpretation is correct.
This is what they are:
cellBasisVector1 = |min(x)-max(x)|,0,0
cellBasisVector2 = 0,|min(y)-max(y)|,0
cellBasisVector3 = 0,0,|min(z)-max(z)|
The cell basis is just the length of the side of the cube, cell origin
defines where the cube is located.
On Thu, 8 Jul 2004, Shirley Hui wrote:
> I was wondering if someone could confirm if my understanding is right.
> When setting the cellBasisVector1, cellBasisVector2, cellBasisVector3 for
> Periodic Boundary conditions in the NAMD config file:
> cellBasisVector1 = x,0,0
> cellBasisVector2 = 0,y,0
> cellBasisVector3 = 0,0,z
> The x,y,z are the minimum OR maximum values obtained by running the command
> in VMD:
> > set everyone [atomselect top all]
> > measure minmax $everyone
> Can someone please confirm if this is correct?
> The tutorial Statistical Mechanics of Proteins doesn't explicitly indicate
> that the x,y,z values should be min or max - infact it doesn't tell you how
> to obtain the x,y,z values.
> The tutorial infact says:
> "Three periodic cell basis vectors are to be specified to give the periodic
> cell its shape and size. They are cellBasisVector1 , cellBasisVector2, and
> cellBasisVector3 . In this file, each vector is perpendicular to the other
> two, as indicated by a single x, y, or z value being specified by each. For
> instance, cellBasisVector1 is x = 42Å, y = 0Å, z = 0Å. With each vector
> perpendicular, a rectangular 3-D box is formed."
> http://www.ks.uiuc.edu/Training/SumSchool/materials/sources/tutorials/02-namd-tutorial/namd-tutorial-html/node9.html#SECTION00025100000000000000
> Where they came up with the value for x,y,z is not explicitly indicated.
> But if my understanding is correct I believe it should be the min max
> values.
> Thanks,
> shirley
This archive was generated by hypermail 2.1.6 : Wed Feb 29 2012 - 15:37:44 CST | {"url":"https://www.ks.uiuc.edu/Research/namd/mailing_list/namd-l.2003-2004/1160.html","timestamp":"2024-11-13T19:13:48Z","content_type":"text/html","content_length":"7229","record_id":"<urn:uuid:fc775e1f-698d-4531-844a-a21068c66b8d>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00875.warc.gz"} |
Getting Started¶
Get Started in 4 Steps¶
Finally, if you’re not already familiar with NetworkX and GeoPandas, make sure you read their user guides as OSMnx uses their data structures and functionality.
Introducing OSMnx¶
This quick introduction explains key concepts and the basic functionality of OSMnx.
OSMnx is pronounced as the initialism: “oh-ess-em-en-ex”. It is built on top of NetworkX and GeoPandas, and interacts with OpenStreetMap APIs to:
• Download and model street networks or other infrastructure anywhere in the world with a single line of code
• Download geospatial features (e.g., political boundaries, building footprints, grocery stores, transit stops) as a GeoDataFrame
• Query by city name, polygon, bounding box, or point/address + distance
• Model driving, walking, biking, and other travel modes
• Attach node elevations from a local raster file or web service and calculate edge grades
• Impute missing speeds and calculate graph edge travel times
• Simplify and correct the network’s topology to clean-up nodes and consolidate complex intersections
• Fast map-matching of points, routes, or trajectories to nearest graph edges or nodes
• Save/load network to/from disk as GraphML, GeoPackage, or OSM XML file
• Conduct topological and spatial analyses to automatically calculate dozens of indicators
• Calculate and visualize street bearings and orientations
• Calculate and visualize shortest-path routes that minimize distance, travel time, elevation, etc
• Explore street networks and geospatial features as a static map or interactive web map
• Visualize travel distance and travel time with isoline and isochrone maps
• Plot figure-ground diagrams of street networks and building footprints
The OSMnx Examples Gallery contains tutorials and demonstrations of all these features, and package usage is detailed in the User Reference.
You can configure OSMnx using the settings module. Here you can adjust logging behavior, caching, server endpoints, and more. You can also configure OSMnx to retrieve historical snapshots of
OpenStreetMap data as of a certain date. Refer to the FAQ below for server usage limitations.
Geocoding and Querying¶
OSMnx geocodes place names and addresses with the OpenStreetMap Nominatim API. You can use the geocoder module to geocode place names or addresses to lat-lon coordinates. Or, you can retrieve place
boundaries or any other OpenStreetMap elements by name or ID.
Using the features and graph modules, as described below, you can download data by lat-lon point, address, bounding box, bounding polygon, or place name (e.g., neighborhood, city, county, etc).
Urban Amenities¶
Using OSMnx’s features module, you can search for and download any geospatial features (such as building footprints, grocery stores, schools, public parks, transit stops, etc) from the OpenStreetMap
Overpass API as a GeoPandas GeoDataFrame. This uses OpenStreetMap tags to search for matching elements.
Modeling a Network¶
Using OSMnx’s graph module, you can retrieve any spatial network data (such as streets, paths, rail, canals, etc) from the Overpass API and model them as NetworkX MultiDiGraphs. See the official
reference paper at the Further Reading page for complete modeling details.
In short, MultiDiGraphs are nonplanar directed graphs with possible self-loops and parallel edges. Thus, a one-way street will be represented with a single directed edge from node u to node v, but a
bidirectional street will be represented with two reciprocal directed edges (with identical geometries): one from node u to node v and another from v to u, to represent both possible directions of
flow. Because these graphs are nonplanar, they correctly model the topology of interchanges, bridges, and tunnels. That is, edge crossings in a two-dimensional plane are not intersections in an OSMnx
model unless they represent true junctions in the three-dimensional real world.
The graph module uses filters to query the Overpass API: you can either specify a built-in network type or provide your own custom filter with Overpass QL. Refer to the graph module’s documentation
for more details. Under the hood, OSMnx does several things to generate the best possible model. It initially creates a 500m-buffered graph before truncating it to your desired query area, to ensure
accurate streets-per-node stats and to attenuate graph perimeter effects. It also simplifies the graph topology as discussed below.
Topology Clean-Up¶
The simplification module automatically processes the network’s topology from the original raw OpenStreetMap data, such that nodes represent intersections/dead-ends and edges represent the street
segments that link them. This takes two primary forms: graph simplification and intersection consolidation.
Graph simplification cleans up the graph’s topology so that nodes represent intersections or dead-ends and edges represent street segments. This is important because in OpenStreetMap raw data, ways
comprise sets of straight-line segments between nodes: that is, nodes are vertices for streets’ curving line geometries, not just intersections and dead-ends. By default, OSMnx simplifies this
topology by discarding non-intersection/dead-end nodes while retaining the complete true edge geometry as an edge attribute. When multiple OpenStreetMap ways are merged into a single graph edge, the
ways’ attribute values can be aggregated into a single value.
Intersection consolidation is important because many real-world street networks feature complex intersections and traffic circles, resulting in a cluster of graph nodes where there is really just one
true intersection as we would think of it in transportation or urban design. Similarly, divided roads are often represented by separate centerline edges: the intersection of two divided roads thus
creates 4 nodes, representing where each edge intersects a perpendicular edge, but these 4 nodes represent a single intersection in the real world. OSMnx can consolidate such complex intersections
into a single node and optionally rebuild the graph’s edge topology accordingly. When multiple OpenStreetMap nodes are merged into a single graph node, the nodes’ attribute values can be aggregated
into a single value.
Model Attributes¶
An OSMnx model has some standard required attributes, plus some optional attributes. The latter are sometimes present based on the source OSM data’s tagging, the settings module configuration, and
any processing you may have done to add additional attributes (as noted in various functions’ documentation).
As a NetworkX MultiDiGraph object, it has top-level graph, nodes, and edges attributes. The graph attribute dictionary must contain a “crs” key defining its coordinate reference system. The nodes are
identified by OSM ID and each must contain a data attribute dictionary that must have “x” and “y” keys defining its coordinates and a “street_count” key defining how many physical streets are
incident to it. The edges are identified by a 3-tuple of “u” (source node ID), “v” (target node ID), and “key” (to differentiate parallel edges), and each must contain a data attribute dictionary
that must have an “osmid” key defining its OSM ID and a “length” key defining its length in meters.
The OSMnx graph module automatically creates MultiDiGraphs with these required attributes, plus additional optional attributes based on the settings module configuration. If you instead manually
create your own graph model, make sure it has these required attributes at a minimum.
Convert, Project, Save¶
OSMnx’s convert module can convert a MultiDiGraph to a MultiGraph if you prefer an undirected representation of the network, or to a DiGraph if you prefer a directed representation without any
parallel edges. It can also convert a MultiDiGraph to/from GeoPandas node and edge GeoDataFrames. The nodes GeoDataFrame is indexed by OSM ID and the edges GeoDataFrame is multi-indexed by u, v, key
just like a NetworkX edge. This allows you to load arbitrary node/edge ShapeFiles or GeoPackage layers as GeoDataFrames then model them as a MultiDiGraph for graph analysis.
You can easily project your graph to different coordinate reference systems using the projection module. If you’re unsure which CRS you want to project to, OSMnx can automatically determine an
appropriate UTM CRS for you.
Using the io module, you can save your graph to disk as a GraphML file (to load into other network analysis software), a GeoPackage (to load into other GIS software), or an OSM XML file. Use the
GraphML format whenever saving a graph for later work with OSMnx.
Network Measures¶
You can use the stats module to calculate a variety of geometric and topological measures as well as street network bearing and orientation statistics. These measures define streets as the edges in
an undirected representation of the graph to prevent double-counting bidirectional edges of a two-way street. You can easily generate common stats in transportation studies, urban design, and network
science, including intersection density, circuity, average node degree (connectedness), betweenness centrality, and much more.
You can also use NetworkX directly to calculate additional topological network measures.
Working with Elevation¶
The elevation module lets you automatically attach elevations to the graph’s nodes from a local raster file or the Google Maps Elevation API (or equivalent web API with a compatible interface). You
can also calculate edge grades (i.e., rise-over-run) and analyze the steepness of certain streets or routes.
The distance module can find the nearest node(s) or edge(s) to coordinates using a fast spatial index. The routing module can solve shortest paths for network routing, parallelized with
multiprocessing, using different weights (e.g., distance, travel time, elevation change, etc). It can also impute missing speeds to the graph edges. This imputation can obviously be imprecise, so the
user can override it by passing in arguments that define local speed limits. It can also calculate free-flow travel times for each edge.
You can plot graphs, routes, network figure-ground diagrams, building footprints, and street network orientation rose diagrams (aka, polar histograms) with the plot module. You can also explore
street networks, routes, or geospatial features as interactive Folium web maps.
Usage Limits¶
Refer to the Nominatim Usage Policy and Overpass Commons documentation for API usage limits and restrictions to which you must adhere. If you configure OSMnx to use an alternative API instance,
ensure you understand and follow their policies. If you feel you need to exceed these limits, consider installing your own hosted instance and setting OSMnx to use it.
More Info¶
All of this functionality is demonstrated step-by-step in the OSMnx Examples Gallery, and usage is detailed in the User Reference. More feature development details are in the Changelog. Consult the
Further Reading resources for additional technical details and research.
Frequently Asked Questions¶
How do I install OSMnx? Follow the Installation guide.
How do I use OSMnx? Check out the step-by-step tutorials in the OSMnx Examples Gallery.
How does this or that function work? Consult the User Reference.
What can I do with OSMnx? Check out recent projects that use OSMnx.
I have a usage question. Please ask it on StackOverflow. | {"url":"https://osmnx.readthedocs.io/en/latest/getting-started.html","timestamp":"2024-11-12T21:54:20Z","content_type":"text/html","content_length":"38204","record_id":"<urn:uuid:7c6036eb-6bb7-4baa-9e5d-0875b4f3ef8f>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00797.warc.gz"} |
Einstein’s gravity and quantum mechanics united at last?
EarthHuman WorldSpace
Einstein’s gravity and quantum mechanics united at last?
This week, physicists at University College London announced – in 2 papers published simultaneously – a radical new theory that consistently unifies Einstein’s gravity and quantum mechanics while
preserving Einstein’s classical concept of spacetime. Image via Isaac Young/ UCL. Used with permission.
The University College London published this article on December 4, 2023. Reprinted here with permission. Edits by EarthSky.
Einstein’s gravity and quantum mechanics
Modern physics is founded upon two pillars. One is quantum theory, which governs the smallest particles in the universe. The other is Einstein’s theory of general relativity, which explains gravity
through the bending of spacetime. But these two theories are in contradiction with each other, and a reconciliation has remained elusive for over a century.
The prevailing assumption has been to modify Einstein’s theory of gravity, or “quantized” to fit within quantum theory. This is the approach of two leading candidates for a quantum theory of gravity,
string theory and loop quantum gravity.
But Jonathan Oppenheim at University College London Physics & Astronomy has developed a new theory. In a new paper in the peer-reviewed open-access journal Physical Review X (PRX), he challenges that
consensus and takes an alternative approach by suggesting that spacetime may be classical. That is, not governed by quantum theory at all.
The 2024 lunar calendars are here! Best Christmas gifts in the universe! Check ’em out here.
Here’s how Einstein’s gravity and quantum mechanics works
Instead of modifying spacetime, the theory – dubbed a “postquantum theory of classical gravity” – modifies quantum theory. It predicts an intrinsic breakdown in predictability that is mediated by
spacetime itself. This results in random and violent fluctuations in spacetime that are larger than envisaged under quantum theory, rendering the apparent weight of objects unpredictable if measured
precisely enough.
A second paper, published simultaneously in the peer-reviewed, open-access journal Nature Communications and led by Oppenheim’s former Ph.D. students, looks at some of the consequences of the theory.
It also proposes an experiment to test it: to measure a mass very precisely to see if its weight appears to fluctuate over time.
For example, the International Bureau of Weights and Measures in France routinely weighs a 1 kilogram mass, which used to be the 1kg standard. If the fluctuations in measurements of this 1kg mass are
smaller than required for mathematical consistency, they can rule out that theory.
Jonathan Oppenheim of University College London. Image via UCL. He is the author of the new theoretical paper on Einstein’s gravity and quantum mechanics.
A 5,000:1 odds bet
The outcome of the experiment, or other evidence emerging that would confirm the quantum versus classical nature of spacetime, is the subject of a 5,000:1 odds bet between Professor Oppenheim and
theoretical physicists Carlo Rovelli and Geoff Penington. Rovelli and Penington are leading proponents of quantum loop gravity and string theory, respectively.
For the past five years, the UCL research group has been stress-testing the theory and exploring its consequences.
Professor Oppenheim said:
Quantum theory and Einstein’s theory of general relativity are mathematically incompatible with each other. So it’s important to understand how this contradiction is resolved. Should spacetime be
quantized, or should we modify quantum theory, or is it something else entirely? Now that we have a consistent fundamental theory in which spacetime does not get quantized, it’s anybody’s guess.
The experimental proposal
Co-author Zach Weller-Davies, who, as a Ph.D. student at UCL, helped develop the experimental proposal and made key contributions to the theory itself, said:
This discovery challenges our understanding of the fundamental nature of gravity but also offers avenues to probe its potential quantum nature.
We have shown that if spacetime doesn’t have a quantum nature, then there must be random fluctuations in the curvature of spacetime which have a particular signature that can be verified
In both quantum gravity and classical gravity, spacetime must be undergoing violent and random fluctuations all around us, but on a scale which we haven’t yet been able to detect. But if
spacetime is classical, the fluctuations have to be larger than a certain scale, and this scale can be determined by another experiment where we test how long we can put a heavy atom in
superposition* of being in two different locations.
The analytical and numerical calculations of co-authors Carlo Sparaciari and Barbara Šoda helped guide the project. They expressed hope that these experiments could determine whether the pursuit of a
quantum theory of gravity is the right approach.
More about the proposal
Šoda (formerly UCL Physics & Astronomy, now at the Perimeter Institute of Theoretical Physics, Canada) said:
Because gravity is made manifest through the bending of space and time, we can think of the question in terms of whether the rate at which time flows has a quantum nature, or classical nature.
And testing this is almost as simple as testing whether the weight of a mass is constant, or appears to fluctuate in a particular way.
Sparaciari (UCL Physics & Astronomy) said:
While the experimental concept is simple, the weighing of the object needs to be carried out with extreme precision.
But what I find exciting is that starting from very general assumptions, we can prove a clear relationship between two measurable quantities, the scale of the spacetime fluctuations, and how long
objects like atoms or apples can be put in quantum superposition of two different locations. We can then determine these two quantities experimentally.
Weller-Davies added:
A delicate interplay must exist if quantum particles such as atoms are able to bend classical spacetime. There must be a fundamental trade-off between the wave nature of atoms, and how large the
random fluctuations in spacetime need to be.
Einstein’s gravity and quantum mechanics background
Quantum mechanics. All the matter in the universe obeys the laws of quantum theory, but we only really observe quantum behavior at the scale of atoms and molecules. Quantum theory tells us that
particles obey Heisenberg’s uncertainty principle, and we can never know their position or velocity at the same time. In fact, they don’t even have a definite position or velocity until we measure
them. Particles like electrons can behave more like waves and act almost as if they can be in many places at once (more precisely, physicists describe particles as being in a “superposition” of
different locations).
Quantum theory governs everything from the semiconductors that are ubiquitous in computer chips, to lasers, superconductivity and radioactive decay. In contrast, we say that a system behaves
classically if it has definite underlying properties. A cat appears to behave classically: it is either dead or alive, not both, nor in a superposition of being dead and alive. Why do cats behave
classically, and small particles quantumly? We don’t know, but the postquantum theory doesn’t require the measurement postulate, because the classicality of spacetime infects quantum systems and
causes them to localize.
About gravity
Einstein’s gravity. Newton’s theory of gravity gave way to Einstein’s theory of general relativity (GR), which holds that gravity is not a force in the usual sense. Instead, heavy objects such as the
sun bend the fabric of spacetime in such a way that causes Earth to revolve around it. Spacetime is just a mathematical object consisting of the three dimensions of space, and time considered as a
fourth dimension. General relativity predicted the formation of black holes and the Big Bang. It holds that time flows at different rates at different points in space, and the GPS in your smartphone
needs to account for this to properly determine your location.
Illustration at top
At the top of this article is an artistic version of Figure 1 in the PRX paper. It depicts an experiment in which heavy particles (illustrated as the moon) cause an interference pattern (a quantum
effect), while also bending spacetime. The hanging pendulums depict the measurement of spacetime. The actual experiment typically uses Carbon-60, one of the largest known molecules. The UCL
calculation indicates that the experiment should also use higher density atoms such as gold. Image via Isaac Young/ University College London. Used with permission.
Physical Review X paper
Nature Communications paper
Public lecture by Professor Jonathan Oppenheim in January 2024
Professor Oppenheim’s academic profile
UCL Physics & Astronomy
UCL Mathematical & Physical Sciences
Bottom line: Einstein’s gravity and quantum mechanics are the two bases for modern physics. But these two theories contradict each other. Have we reached a reconciliation?
December 6, 2023
Like what you read?
Subscribe and receive daily news delivered to your inbox.
Thank you! Your submission has been received!
Oops! Something went wrong while submitting the form. | {"url":"https://earthsky.org/space/einsteins-gravity-and-quantum-mechanics-oppenheim-ucl/","timestamp":"2024-11-02T08:52:45Z","content_type":"text/html","content_length":"127437","record_id":"<urn:uuid:960c2764-a353-49f0-a148-fd65725dfa2d>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00534.warc.gz"} |
At an amusement park, a swimmer uses a water slide to enter the
main pool. If...
At an amusement park, a swimmer uses a water slide to enter the main pool. If...
At an amusement park, a swimmer uses a water slide to enter the main pool. If the swimmer starts at rest, slides without friction, and descends through vertical height of 2.31 m up to the bottom of
the slide, what is her speed (a) at the middle of the slide? and (b) at the bottom of the slide?
As the swimmer slides down, potential energy is being converted to kinetic energy
(a)At the middle of the slide, the swimmer will have covered a vertical height of 1.155m
(b)At the bottom of the slide, the swimmer will have covered a vertical height of 2.31m | {"url":"https://justaaa.com/physics/1103773-at-an-amusement-park-a-swimmer-uses-a-water-slide","timestamp":"2024-11-04T01:19:20Z","content_type":"text/html","content_length":"43225","record_id":"<urn:uuid:516127f9-d21e-41af-b437-6990b6d5cc0d>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00222.warc.gz"} |
PPT - Unit Two: Dynamics PowerPoint Presentation, free download - ID:3653162
1. Unit Two: Dynamics Section 1: Forces
2. Look in glossary of book … • What is the difference between dynamics and kinematics? • What is a force? What can a force do? What causes a force? • Key Terms: • Dynamics Kinematics • Force
Gravitational Force • Strong Nuclear Force • Inertia Net Force • Normal Force Weight Mass
3. What is dynamics??? • Kinematics: The study of how objects move (velocity, acceleration) • Galileo performed experiments that allowed him to describe motion but not explain motion. • Dynamics:
The study of why objects move. • The connection between acceleration and its cause can be summarized by Newton’s 3 Laws of Motion (published in 1687) • The cause of acceleration is FORCE.
4. Forces • What is a force? • A push or a pull • Some forces cause acceleration • Example: gravity • Some forces cause stretching, bending, squeezing • Example: spring force
5. The 2 Main Types of Forces • Contact Forces: are forces that result when two objects are physically in contact with one another Example: push/pull, normal force, friction, spring force, tension,
air resistance Non-contact Forces: forces that result when two objects are not in physical contact Example: gravitational force, nuclear force, magnetic force, electrostatic force (electric
6. Newton’s First Law of Motion- Newton’s Law of Inertia • An object at rest or in uniform motion (ie, constant velocity) will remain at rest or in uniform motion unless acted on by an external
force. • Section 5.1 in text (pages 154 to 159) • Reworded: An object at rest will remain at rest until a force is applied. An object moving at a constant velocity will continue to move at a
constant velocity if no force is applied (ie, no acceleration).
7. Inertia • the natural tendency of an object to remain in its current state of motion (either moving or at rest)
8. Where did this come from? • Galileo performed many experiments and speculated that if a perfectly smooth object were on a perfectly smooth horizontal surface it would travel forever in a straight
line. • Newton developed this idea.
9. Newton’s First Law Example • If an apple is sitting on Mrs. Evans’ desk, it will remain there until the desk is removed (so gravity acts on it) or someone lifts it up (applied force). • If a car
is driving along a straight road at 100km/h, it will continue to do so (given the car still has gas!) until the brakes are applied (applied force), there is a turn or the road surface changes
(more or less friction).
10. Net Force • The sum of all vector forces acting on an object. • Example: What are the forces acting on a stopped car? Draw a labeled diagram. • Example: What are the forces acting on a car moving
at 100km/h [N]?
12. Normal Force • A force that acts in a direction perpendicular to the common contact surface between two objects • Example Diagram:
13. Quick Experiment • Materials – cup, card, penny or coin • What to do: • Set up the card on top of the cup and the penny on the card in the middle. • Flick the card. What happens to the card? The
penny? Why?
14. Questions 1. To which object was a force applied by the flick and which object was not acted upon by the flick? • 2. Why did the penny fall into the cup and not fly off with the card? • 3. What
force held the penny in place while the card was flicked out? What force brought the penny down into the cup? • 4. Would the penny move in the same way if sandpaper was used instead of the card?
15. Summary • The inertia of every object resists the change in motion. In this case, the inertia of the penny held it in place while the card was flicked out from under it. The force acting on the
card was not applied to the penny. After the card was moved from under the coin, gravity supplied the force to bring the penny down into the cup. If a force had been applied to both the card and
the penny, then both would have moved and the penny would not have fallen into the cup.
16. Check Your Learning • 1. Why does a package on the seat of a bus slide backward when the bus accelerates quickly from rest? Why does it slide forward when the driver applies the brakes? • Use as
many physics terms as possible and describe in detail.
17. The bus is initially at rest, as is the package. In the absence of any force, the natural state of the package is to remain at rest. When the bus pulls forward, the package remains at rest
because of its inertia (until the back of the seat applies a forward force to make it move with the bus). • From the point of view of someone on the bus, it appears that the package is moving
backward; however, someone watching from outside the bus would see the bus move forward and the package trying to stay in its original position. • Once the package is moving with the bus, its
inertia has now changed. It now has a natural tendency to be moving forward with a constant speed. When the bus slows down, the package continues to move forward with the same constant speed that
it had until some force stops it.
18. Force • Symbol: F • Formula: F=ma • Force = mass x acceleration • Units: kg x m/s2 = Newtons (N)
19. Gravitational Forces • Example: Consider the following information and then compare the gravitational force on the SAME OBJECT in each case. • A man standing near the equator (distance from
Earth’s centre = 6378 km) • A man standing near the North pole (distance from Earth’s centre = 6357 km) • A man standing in the International Space Station (distance = 6628 km) • A man in a space
ship past Pluto
20. Gravitational Forces • Gravitational force decreases as we increase how far we are from the centre of the Earth • It is a non-contact force
21. Weight Vs. Mass • Weight and mass are NOT THE SAME. • Weight = the force of gravity acting on a mass. Weight can change. It is measured in Newtons (force). • Weight = mass x gravitational force •
Fg = mg • Mass = the quantity of matter an object contains. Mass for the same object is constant. It is measured in kg.
23. Examples of Weight Problems • Mrs. Evans’ dog Pi has a mass of 17kg. What would Pi’s weight be: • A) On Earth? • B) On Jupiter (where g = 25.9 m/s2) • C) On the Moon (where g = 1.64 m/s2)
24. Examples of Weight Problems • A student standing on a scientific spring scale on Earth finds that he weighs 825N. Find his mass.
25. Practice • Page 137, #1, 2, 3, 4
26. Friction • A contact force • Electromagnetic Force (between surface atoms of objects touching)
27. Friction • There are 2 types of friction: • Static Frictional Force • When you start to move an object from rest • Larger than Kinetic Frictional Force due to Inertia • ųs • Kinetic Frictional
Force • Exists when the object is moving • ųK
28. Friction • The strength of friction depends on… • Surface materials • Magnitude of forces pressing surfaces together • The strength of friction DOES NOT depend on… • Surface area • Velocity of
object moving • See page 140, table 4.5 for a list!
29. Coefficient of Friction • “Stickiness value” • ų (symbol mu) • ų has no units • Page 140, table 4.5 • Formula: Ff = ųFN • Remember: FN = - Fg
30. Friction Example • During the winter, owners of pickup trucks often place sandbags in the rear of their vehicles. Calculate the increased static force of friction between the rubber tires and wet
concrete resulting from the addition of 200. kg of sandbags in the back of the truck. • Use the table of coefficients of friction on page 140.
31. Friction Example 2 • A horizontal force of 85N is required to pull a child in a sled at constant speed over dry snow to overcome the force of friction. The child and sled have a combined mass of
52 kg. Calculate the coefficient of kinetic friction between the sled and the snow.
32. Practice Friction Problems • Page 144 • Questions 5, 6, 7, 8 • Weight Problems • Page 137, #1, 2, 3, 4
33. Tug of War • Sometimes we have more than 1 force acting on an object (like in a tug of war). • What are the forces at work in a tug of war? • What direction are the forces? • If your team wins,
what does that mean about the forces? • If your team loses, what does that mean about the forces? • What other forces are there on the players?
34. Free Body Diagrams • We usually use a box or small circle to represent the object. • The size of the arrow is reflective of the magnitude (SIZE) of the force. • The direction of the arrow reveals
the direction in which the force acts. • Each force arrow in the diagram is labelled to indicate the type of force. • Use math symbols to show equality if needed.
36. A free body diagram will be used in most dynamics problems in order to simplify the situation In a FBD, the object is reduced to a point and forces are drawn starting from the point Free Body
Diagrams FN Fa Ff Fg
37. Free Body Diagram Examples • 1. A book is at rest on a table top. Diagram the forces acting on the book. • Refer to sheet in class with 10 examples!
38. The Net Force • The net force is a vector sum which means that both the magnitude and direction of the forces must be considered • In most situations we consider in Physics 11, the forces will be
parallel (ie, up and down, etc) and perpendicular
39. The Net Force • In most situations, there is more than one force acting on an object at any given time • When we draw the FBD we should label all forces that are acting on an object and also
determine which would cancel each other out • Ones that do not completely cancel out will be used to determine the net force
43. Newton’s Second Law • Newton’s first law states that an object does not accelerate unless a net force is applied to the object. • But how much will an object accelerate when there is a net force?
• The larger the force the larger the acceleration. • Therefore acceleration is directly proportional to mass. • Acceleration also depends on mass. • The larger the mass, the smaller the
acceleration. • Therefore acceleration is inversely proportional to mass. • We say that a massive body has more INERTIA than a less massive body.
44. Newton’s Second Law- Newton’s Law of Motion • Force = mass x acceleration • Fnet = ma • The acceleration is in the same direction as the force.
45. Newton’s Second Law Examples • Ex. 1: What net force is required to accelerate a 1500. kg race car at +3.00m/s2? • Draw a FBD to show the net force.
46. Practice Problems • Page 163, Questions 1, 2, 3
47. Putting it All Together • Now that we have considered Newton’s Second Law, you can use that to analyze kinematics problems with less information than we have used previously • We can either use
dynamics information to then apply to a kinematic situation or vice versa
48. Newton’s Second Law Examples • Ex. 2: An artillery shell has a mass of 55 kg. The shell is fired from a gun leaving the barrel with a velocity of +770 m/s. The gun barrel is 1.5m long. Assume
that the force, and the acceleration, of the shell is constant while the shell is in the gun barrel. What is the force on the shell while it is in the gun barrel?
49. Practice Problems • Page 168, questions 4 to 8
50. An Example • A 25kg crate is slid from rest across a floor with an applied force 72N applied force. If the coefficient of static friction is 0.27, determine: • The free body diagram. Include as
many of the forces (including numbers) as possible. • The acceleration of the crate. • The time it would take to slide the crate 5.0m across the floor. | {"url":"https://fr.slideserve.com/risa/unit-two-dynamics","timestamp":"2024-11-15T02:46:26Z","content_type":"text/html","content_length":"103209","record_id":"<urn:uuid:8d0a9b15-390c-4a52-971d-d2f012b489f6>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00784.warc.gz"} |
Current Search: Vibration
BHAT, RAJENDRA AGHUT., Florida Atlantic University, Stevens, Karl K.
An experimental investigation to determine the effectiveness of partial constrained layer damping treatments for a clamped rectangular plate is described. The impulse testing technique was used
with a Hewlett Packard '5423A structural Dynamics Analyzer' to determine modal parameters of the first five flexural modes. The results obtained are compared with theoretical results and they are
in agreement. The results indicate that partial constrained layer damping treatments, if properly used, can...
Show moreAn experimental investigation to determine the effectiveness of partial constrained layer damping treatments for a clamped rectangular plate is described. The impulse testing technique
was used with a Hewlett Packard '5423A structural Dynamics Analyzer' to determine modal parameters of the first five flexural modes. The results obtained are compared with theoretical results and
they are in agreement. The results indicate that partial constrained layer damping treatments, if properly used, can be more effective than complete treatments.
Show less
Date Issued
Subject Headings
Plates (Engineering)--Vibration, Damping (Mechanics)
Document (PDF)
Dynamic stall and three-dimensional wake effects on trim, stability and loads of hingeless rotors with fast Floquet theory.
Chunduru, Srinivas Jaya., Florida Atlantic University, Gaonkar, Gopal H., College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
This dissertation investigates the effects of dynamic stall and three-dimensional wake on isolated-rotor trim, stability and loads. Trim analysis of predicting the pilot's control inputs and the
corresponding periodic responses is based on periodic shooting with the fast Floquet theory and damped Newton iteration. Stability analysis, also based on the fast Floquet theory, predicts
damping levels and frequencies. Loads analysis uses a force-integration approach to predict the rotating-blade...
Show moreThis dissertation investigates the effects of dynamic stall and three-dimensional wake on isolated-rotor trim, stability and loads. Trim analysis of predicting the pilot's control inputs
and the corresponding periodic responses is based on periodic shooting with the fast Floquet theory and damped Newton iteration. Stability analysis, also based on the fast Floquet theory,
predicts damping levels and frequencies. Loads analysis uses a force-integration approach to predict the rotating-blade root shears and moments as well as the hub forces and moments. The blades
have flap bending, lag bending and torsion degrees of freedom. Dynamic stall is represented by the ONERA stall models of lift, drag and pitching moment, and the unsteady, nonuniform downwash is
represented by a three-dimensional, finite-state wake model. Throughout, full blade-stall-wake dynamics is used in that all states are included from trim to stability to loads predictions.
Moreover, these predictions are based on four aerodynamic theories--quasisteady linear theory, quasisteady stall theory, dynamic stall theory and dynamic stall and wake theory--and cover a broad
range of system parameters such as thrust level, advance ratio, number of blades and blade torsional frequency. The investigation is conducted in three phases. In phase one, the elastic
flap-lag-torsion equations are coupled with a finite-state wake model and with linear quasisteady airfoil aerodynamics. The investigation presents convergence characteristics of trim and
stability with respect to the number of spatial azimuthal harmonics and radial shape functions in the wake representation. It includes a comprehensive parametric study over a broad range of
system parameters. The investigation also includes correlation with the measured lag-damping data of a three-bladed isolated rotor operated untrimmed. In the correlation, three structural models
of the root-flexure-blade assembly are used to demonstrate the strengths and the weaknesses of lag-damping predictions. Phase two includes dynamic stall in addition to three-dimensional wake to
generate trim and stability results over a comprehensive range of system parameters. It addresses the degree of sophistication necessary in blade discretization and wake representation under
dynamically stalled conditions. The convergence and parametric studies isolate the effects of wake, quasisteady stall and dynamic stall on trim and stability. Finally, phase three predicts the
rotating blade loads and nonrotating hub loads; the predictions are based on the blade, wake and stall models used in the preceding trim and stability investigations. Although an accurate
evaluation of loads requires a more refined blade description, the results isolate and demonstrate the principal dynamic stall and wake effects on the loads.
Show less
Date Issued
Subject Headings
Floquet theory, Helicopters, Rotors (Helicopters), Vibration (Aeronautics)
Document (PDF)
MAILLET, PHILIPPE LOUIS., Florida Atlantic University, Dunn, Stanley E., Cuschieri, Joseph M., College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
The energy flow and the acoustic radiation of fluid-loaded panels are investigated using the Energy Accountancy Concept. The various energy components of the systems are defined and studied. Each
component is a function of the excitation, the structure, the medium and their coupling. An energy balance equation is written for the system. This method is applied to study the acoustic
radiation from a point-excited clamped plate placed on the free surface of a water tank. The radiation efficiency...
Show moreThe energy flow and the acoustic radiation of fluid-loaded panels are investigated using the Energy Accountancy Concept. The various energy components of the systems are defined and
studied. Each component is a function of the excitation, the structure, the medium and their coupling. An energy balance equation is written for the system. This method is applied to study the
acoustic radiation from a point-excited clamped plate placed on the free surface of a water tank. The radiation efficiency of the plate is measured and compared to previous works. The energy
balance equation gives very good results at frequencies between 50 Hz and 12 kHz. An undefined source of energy dissipation is observed in one experiment. The results of this study have shown
that the Energy Accountancy Concept can be used to describe the energy flow in a vibrating structure under water-loading.
Show less
Date Issued
Subject Headings
Acoustic radiation pressure, Vibration--Measurement
Document (PDF)
Power flow analysis of a structure subjected to distributed excitation.
Cimerman, Benjamin Pierre., Florida Atlantic University, Cuschieri, Joseph M.
An analytical investigation based on the Power Flow Method is presented for the prediction of vibrational Power Flow in simple connected structures subjected to various forms of distributed
excitations. The principle of the power flow method consists of dividing the global structure into a series of substructures which can be analyzed independently and then coupled through the
boundary conditions. Power flow expressions are derived for an L-shaped plate structure, subjected to any form of...
Show moreAn analytical investigation based on the Power Flow Method is presented for the prediction of vibrational Power Flow in simple connected structures subjected to various forms of
distributed excitations. The principle of the power flow method consists of dividing the global structure into a series of substructures which can be analyzed independently and then coupled
through the boundary conditions. Power flow expressions are derived for an L-shaped plate structure, subjected to any form of distributed mechanical excitation or excited by an acoustic plane
wave. In the latter case air loading is considered to have a significant effect on the power input to the structure. Fluid-structure interaction considerations lead to the derivation of a
corrected mode shape for the normal velocity, and the determination of the scattered pressure components in the expressions for the Power Flow.
Show less
Date Issued
Subject Headings
Structural dynamics, Plates (Engineering)--Vibration
Document (PDF)
Vibrational analysis of a journal bearing.
Journeau, Franck Daniel., Florida Atlantic University, Cuschieri, Joseph M.
A Statistical Energy Analysis (SEA) approach is used to investigate the vibrational behavior of a journal bearing. In developing the SEA model, consideration is given to the determination of
coupling loss factors between non-conservatively coupled substructures. In the case of the journal bearing, the oil film between the rotating shaft and the bearing liner represents
non-conservative coupling. The coupling loss factors are estimated using experimentally measured point mobility functions....
Show moreA Statistical Energy Analysis (SEA) approach is used to investigate the vibrational behavior of a journal bearing. In developing the SEA model, consideration is given to the
determination of coupling loss factors between non-conservatively coupled substructures. In the case of the journal bearing, the oil film between the rotating shaft and the bearing liner
represents non-conservative coupling. The coupling loss factors are estimated using experimentally measured point mobility functions. The internal loss factors are directly measured with the
bearing structure disassembled. Additionally, estimates for the coupling and internal loss factors are obtained in-situ using an energy ratio approach. Using the determined coupling and internal
loss factors in an SEA model, estimates for the average mean square velocities on the surface of the bearing subcomponents are obtained for both static and dynamics conditions. The SEA estimates
match well with directly measured results for the spatial average surface velocities at medium to high frequencies.
Show less
Date Issued
Subject Headings
Journal bearings, Machinery--Noise, Couplings, Vibration
Document (PDF)
An experimental study of the response of circular plates subjected to fluid loading.
Coulson, Robert Kenneth., Florida Atlantic University, Glegg, Stewart A. L.
The interaction between vibrating structures and fluids can have a profound influence upon the natural frequencies of the structure's vibration. This study examines one specific structure; a thin
circular plate with the rarely studied free edge condition. It starts by considering a completely free plate in a vacuum and then, using receptance matching, utilises this result to determine the
effects, on the natural frequencies, of a centrally located driving rod. Then, using the same technique,...
Show moreThe interaction between vibrating structures and fluids can have a profound influence upon the natural frequencies of the structure's vibration. This study examines one specific
structure; a thin circular plate with the rarely studied free edge condition. It starts by considering a completely free plate in a vacuum and then, using receptance matching, utilises this
result to determine the effects, on the natural frequencies, of a centrally located driving rod. Then, using the same technique, a result for the drive admittance of the fluid loaded plate is
adapted to predict the natural frequencies of the same structure when subjected to significant fluid loading. All these results are then compared to those obtained from experiments.
Show less
Date Issued
Subject Headings
Plates (Engineering)--Vibration, Acoustical engineering
Document (PDF)
Free and random vibrations of shells of revolution with interior supports.
Xia, Zhiyong., Florida Atlantic University, Yong, Yan, College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
A new analytical method based on the wave propagation scheme has been developed for the dynamic analysis of axially symmetric shells with arbitrary boundary conditions and interior supports. In
this approach, a shell structure is considered as a waveguide and the response to external excitations is treated as a superposition of wave motions. To segregate the effect of the interior
supports, the waveguide is first divided into several sub-waveguides. Upon analyzing these sub-waveguides...
Show moreA new analytical method based on the wave propagation scheme has been developed for the dynamic analysis of axially symmetric shells with arbitrary boundary conditions and interior
supports. In this approach, a shell structure is considered as a waveguide and the response to external excitations is treated as a superposition of wave motions. To segregate the effect of the
interior supports, the waveguide is first divided into several sub-waveguides. Upon analyzing these sub-waveguides separately, a composition scheme is adopted to relate them by connecting the
wave components according to the continuity conditions for the state variables at each interior supports. Closed form solutions for free and random vibration are derived. The proposed method is
presented in a general fashion and numerical examples are given to illustrate the application of the theory.
Show less
Date Issued
Subject Headings
Shells (Engineering)--Vibration, Wave guides
Document (PDF)
KUNG, CHUN-HUA., Florida Atlantic University, Stevens, Karl K., College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
An energy method for predicting the natural frequency and loss factor for square plates with partial and complete coatings is developed. Both simply-supported and edge-fixed bonndary conditions
are considered. An impulse testing technique is used to provide an experimental verification of the analysis for the case of an edge-fixed square plate. The analytical and experimental results
are in close agreement, and indicate that partial coatings can provide effective damping treatments.
Date Issued
Subject Headings
Plates (Engineering)--Vibration, Damping (Mechanics)
Document (PDF)
The vacuum ultraviolet magnetic circular dichroism of propylene: Elucidation of the electronic structure of propylene.
Atanasova, Sylvia T., Florida Atlantic University, Snyder, Patricia Ann
The purpose of this research is to investigate the electronic structure of propylene. The vacuum-ultraviolet absorption and magnetic circular dichroism (MCD) spectra of propylene were obtained by
Professor Snyder at the National Synchrotron Radiation Center. The absorption and MCD spectra are presented in the region 52 000--77 000 cm-1. The spectra were examined in detail in the pi → pi*
region from 52 000 cm-1 to 58766 cm-1. The MCD provides unique information about the electronic structure...
Show moreThe purpose of this research is to investigate the electronic structure of propylene. The vacuum-ultraviolet absorption and magnetic circular dichroism (MCD) spectra of propylene were
obtained by Professor Snyder at the National Synchrotron Radiation Center. The absorption and MCD spectra are presented in the region 52 000--77 000 cm-1. The spectra were examined in detail in
the pi → pi* region from 52 000 cm-1 to 58766 cm-1. The MCD provides unique information about the electronic structure of a molecule. The MCD spectrum clearly showed that there are at least three
electronic transitions in the pi → pi* region of the propylene spectrum. The presently proposed assignments for these transitions are pi → 3s, pi → pi*, and pi → 3p. The first step in the
vibrational analysis, which may aid in the assignments of the electronic transitions, was carried out. A theoretical calculation of the normal vibrational modes of ethylene and propylene has been
done using Hyperchem software.
Show less
Date Issued
Subject Headings
Magnetic circular dichroism, Propene, Vibrational spectra
Document (PDF)
Deterministic, stochastic and convex analyses of one- and two-dimensional periodic structures.
Zhu, Liping., Florida Atlantic University, Lin, Y. K., Elishakoff, Isaac, College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
The periodic structures considered in the dissertation are one-dimensional periodic multi-span beams, and two-dimensional periodic grillages with elastic interior supports. The following specific
topics are included: (1) Deterministic Vibration--Exact solutions are obtained for free vibrations of both multi-span beams and grillages, by utilizing the wave propagation concept. The wave
motions at the periodic supports/nodes are investigated and the dispersion equations are derived from which...
Show moreThe periodic structures considered in the dissertation are one-dimensional periodic multi-span beams, and two-dimensional periodic grillages with elastic interior supports. The following
specific topics are included: (1) Deterministic Vibration--Exact solutions are obtained for free vibrations of both multi-span beams and grillages, by utilizing the wave propagation concept. The
wave motions at the periodic supports/nodes are investigated and the dispersion equations are derived from which the natural frequencies of the periodic structures are determined. The emphasis is
placed on the calculation of mode shapes of both types of periodic structures. The general expressions for mode shapes with various boundary conditions are obtained. These mode shapes are used to
evaluate the exact dynamic response to a convected harmonic loading. (2) Stochastic Vibration--A multi-span beam under stochastic acoustic loading is considered. The exact analytical expressions
for the spectral densities are derived for both displacement and bending moment by using the normal mode approach. Nonlinear vibration of a multi-span beam with axial restraint and initial
imperfection are also investigated. In the latter case, the external excitation is idealized as a Gaussian white nose. An expression for the joint probability density function in the generalized
coordinates is obtained and used to evaluate the mean square response of a multi-span beam system. (3) Convex Modeling of Uncertain Excitation Field--It is assumed that the parameters of the
stochastic excitation field are uncertain and belong to a multi-dimensional convex set. A new approach is developed to determine the multi-dimensional ellipsoidal convex set with a minimum
volume. The most and least favorable responses of a multi-span beam are then determined for such a convex set, corresponding to a stochastic acoustic field. The procedure is illustrated in
several examples.
Show less
Date Issued
Subject Headings
Grillages (Structural engineering), Girders--Vibration, Wave-motion, Theory of, Vibration
Document (PDF)
Mobility power flow (MPF) approach applied to fluid-loaded shells with ring discontinuities.
McCain, Thomas Scott., Florida Atlantic University, Cuschieri, Joseph M., College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
The vibrational and acoustic characteristics of fluid-loaded, cylindrical shells with single or multiple, aperiodically-spaced ring discontinuities are studied using an approach based on the
mobility power flow (MPF) method and a hybrid numerical/analytical method for the evaluation of the velocity Green's function of the shell. The discontinuities are associated with internal
structures coupled to the shell via ring junctions. The approach is a framework allowing alternative shell and/or...
Show moreThe vibrational and acoustic characteristics of fluid-loaded, cylindrical shells with single or multiple, aperiodically-spaced ring discontinuities are studied using an approach based on
the mobility power flow (MPF) method and a hybrid numerical/analytical method for the evaluation of the velocity Green's function of the shell. The discontinuities are associated with internal
structures coupled to the shell via ring junctions. The approach is a framework allowing alternative shell and/or internal structure models to be used. The solution consists of the net
vibrational power flow between the shell and internal structure(s) at the junction(s), the shell's velocity Green's function, and the far-field acoustic pressure. Use of the MPF method is
advantageous because the net power flow solution can be used as a diagnostic tool in ascertaining the proper coupling between the shell and internal structure(s) at the junction(s). Results are
presented for two canonical problems: an infinite, thin cylindrical shell, externally fluid-loaded by a heavy fluid, coupled internally to: (1) a single damped circular plate bulkhead, and (2) a
double bulkhead consisting of two identical damped circular plates spaced a shell diameter apart. Two excitation mechanisms are considered for each model: (1) insonification of the shell by an
obliquely-incident, acoustic plane wave, and (2) a radial ring load applied to the shell away from the junction(s). The shell's radial velocity Green's function and far-field acoustic pressure
results are presented and analyzed to study the behavior of each model. In addition, a comparison of these results accentuates the qualitative difference in the behavior between the single and
multiple junction models. When multiple internal structures are present, the results are strongly influenced by inter-junction coupling communicated through the shell and the fluid. Results are
presented for circumferential modes n = 0 & 2. The qualitative differences in the results for modes n = 0 and n = 2 (indicative of all modes n > 0ified in the far-field acoustic pressure and
velocity Green's function response with the characteristics of the shell and internal plate bulkhead. The results for the single junction model demonstrate the significance of the shell's
membrane waves on the reradiation of acoustic energy from the shell; however, when multiple junctions are present, inter-junction coupling results in a significant broad acoustic scattering
pattern. Using the results and analysis presented here, a better understanding can be obtained of fluid-loaded shells, which can be used to reduce the strength of the acoustic pressure field
produced by the shell.
Show less
Date Issued
Subject Headings
Shells (Engineering)--Vibration, Cylinders--Vibration, Fluid dynamics, Sound--Transmission
Document (PDF)
A perturbation method for the vibration analysis of beams and plates with free-layer damping treatments.
Shen, Sueming, Florida Atlantic University, Stevens, Karl K., College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
The feasibility of using structural modification techniques to determine the effect of added viscoelastic damping treatments on the modal properties of a distinct eigenvalue system and a
degenerate system is investigated. Linear perturbation equations for the changes introduced into the system eigenproperties are derived and applied to several examples involving the flexural
vibration of beams and square plates with varying degrees of damping treatment. Both large and small perturbations are...
Show moreThe feasibility of using structural modification techniques to determine the effect of added viscoelastic damping treatments on the modal properties of a distinct eigenvalue system and a
degenerate system is investigated. Linear perturbation equations for the changes introduced into the system eigenproperties are derived and applied to several examples involving the flexural
vibration of beams and square plates with varying degrees of damping treatment. Both large and small perturbations are considered. An FEM code has been developed to compute the dynamic system
parameters which are subsequently used in an iterative method to determine the modal properties. The perturbation approach described can accommodate temperature and frequency-dependent material
properties, and the procedures involved are illustrated in the examples considered. Results obtained for these examples are compared with those available from closed form or finite element
solutions, or from experiments. Excellent agreement of the results of the present method with those of other contemporary methods demonstrates the validity, overall accuracy, efficiency and
convergence rate of this technique. The perturbation approach appears to be particularly well suited for systems with temperature and frequency dependent material properties, and for design
situations where a number of damping configurations must be investigated.
Show less
Date Issued
Subject Headings
Girders--Vibration, Plates (Engineering)--Vibration, Perturbation (Mathematics), Damping (Mechanics)
Document (PDF)
Detection, localization, and identification of bearings with raceway defect for a dynamometer using high frequency modal analysis of vibration across an array of accelerometers.
Waters, Nicholas., College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
This thesis describes a method to detect, localize and identify a faulty bearing in a rotating machine using narrow band envelope analysis across an array of accelerometers. This technique is
developed as part of the machine monitoring system of an ocean turbine. A rudimentary mathematical model is introduced to provide an understanding of the physics governing the vibrations caused
by a bearing with a raceway defect. This method is then used to detect a faulty bearing in two setups : on a...
Show moreThis thesis describes a method to detect, localize and identify a faulty bearing in a rotating machine using narrow band envelope analysis across an array of accelerometers. This
technique is developed as part of the machine monitoring system of an ocean turbine. A rudimentary mathematical model is introduced to provide an understanding of the physics governing the
vibrations caused by a bearing with a raceway defect. This method is then used to detect a faulty bearing in two setups : on a lathe and in a dynamometer.
Show less
Date Issued
Subject Headings
Marine turbines, Mathematical models, Vibration, Measurement, Fluid dynamics, Dynamic testing
Document (PDF)
Numerical models to simulate underwater turbine noise levels.
Lippert, Renee'., College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
This work incorporates previous work done by Guerra and the application of fluid dynamics. The structure attached to the turbine will cause unsteady fluctuations in the flow, and ultimately
affect the acoustic pressure. The work of Guerra is based on a lot of assumptions and simplifications to the geometry of the turbine and structure. This work takes the geometry of the actual
turbine, and uses computational fluid dynamic software to numerically model the flow around the turbine structure....
Show moreThis work incorporates previous work done by Guerra and the application of fluid dynamics. The structure attached to the turbine will cause unsteady fluctuations in the flow, and
ultimately affect the acoustic pressure. The work of Guerra is based on a lot of assumptions and simplifications to the geometry of the turbine and structure. This work takes the geometry of the
actual turbine, and uses computational fluid dynamic software to numerically model the flow around the turbine structure. Varying the angle of the attack altered the results, and as the angle
increased the noise levels along with the sound pulse, and unsteady loading increased. Increasing the number of blades and reducing the chord length both reduced the unsteady loading.
Show less
Date Issued
Subject Headings
Underwater acoustics, Mathematical models, Turbines, Vibration, Mathematical models, Fluid dynamics
Document (PDF)
Design and implementation of an adaptive control system for active noise control.
Duprez, Adrien Eric., Florida Atlantic University, Cuschieri, Joseph M.
This thesis describes the design and implementation of an adaptive control system for active noise control. The main approaches available for implementing an active noise controller are presented
and discussed. A Least Mean Squares (LMS) based algorithm, the Filtered-X LMS (FXLMS) algorithm, is selected for implementation. The significance of factors, such as delays, system output noise,
system complexity, type and size of adaptive filter, frequency bandwidth, etc..., which can limit the...
Show moreThis thesis describes the design and implementation of an adaptive control system for active noise control. The main approaches available for implementing an active noise controller are
presented and discussed. A Least Mean Squares (LMS) based algorithm, the Filtered-X LMS (FXLMS) algorithm, is selected for implementation. The significance of factors, such as delays, system
output noise, system complexity, type and size of adaptive filter, frequency bandwidth, etc..., which can limit the performance of the adaptive control, is investigated in simulations. For
hardware implementation, a floating-point DSP is selected to implement the adaptive controller. The control program and its implementation on the DSP are discussed. The program is first tested
with a hardware-in-the-loop set-up and then implemented on a physical system. Active Noise Control in a duct is finally successfully demonstrated. The hardware and the results are discussed.
Show less
Date Issued
Subject Headings
Adaptive control systems, Active noise and vibration control
Document (PDF)
Dynamic response of plate structures to external excitations.
Mani, George., Florida Atlantic University, Yong, Yan, College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
The dynamic response of plate structures composed of rigidly connected thin plates subjected to point loads is studied. The finite strip method combined with a new approach for analyzing periodic
structures is utilized to obtain substantial reduction in computational efforts. Each strip with various boundary conditions is treated as a waveguide capable of transmitting different wave
motions. Wave scattering matrices are defined to characterize wave motions at boundaries, intersection of...
Show moreThe dynamic response of plate structures composed of rigidly connected thin plates subjected to point loads is studied. The finite strip method combined with a new approach for analyzing
periodic structures is utilized to obtain substantial reduction in computational efforts. Each strip with various boundary conditions is treated as a waveguide capable of transmitting different
wave motions. Wave scattering matrices are defined to characterize wave motions at boundaries, intersection of plates and where type of wave guides are changed. The results obtained from the
application of the approach on various plate configurations are presented and discussed.
Show less
Date Issued
Subject Headings
Plates (Engineering)--Vibration, Finite strip method, Structural analysis (Engineering)
Document (PDF)
Computer-aided design of speed humps.
Joseph, Philip Puthooppallil., Florida Atlantic University, Wong, Tin-Lup
A six-degree-of-freedom model of a vehicle was simulated over different hump profiles with a computer program and the results were verified. The resulting vibration characteristics were analyzed
to calculate a discomfort index. The discomfort index considered is the equivalent root mean square acceleration specified by the proposal for the revision of ISO 2631. A parametric study was
conducted to find the sensitivity of different hump and vehicular parameters on the ride comfort. The optimal...
Show moreA six-degree-of-freedom model of a vehicle was simulated over different hump profiles with a computer program and the results were verified. The resulting vibration characteristics were
analyzed to calculate a discomfort index. The discomfort index considered is the equivalent root mean square acceleration specified by the proposal for the revision of ISO 2631. A parametric
study was conducted to find the sensitivity of different hump and vehicular parameters on the ride comfort. The optimal hump parameters were obtained for different limiting speeds. Two field
humps were simulated and modification of the humps is suggested for optimum performance.
Show less
Date Issued
Subject Headings
Speed reducers--Data processing, Traffic engineering, Automobiles--Vibration
Document (PDF)
Dynamic stability of fluid-conveying pipes on uniform or non-uniform elastic foundations.
Vittori, Pablo J., Florida Atlantic University, Elishakoff, Isaac, College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
The dynamic behavior of straight cantilever pipes conveying fluid is studied, establishing the conditions of stability for systems, which are only limited to move in a 2D-plane. Internal friction
of pipe and the effect of the surrounding fluid are neglected. A universal stability curve showing boundary between the stable and unstable behaviors is constructed by finding solution to
equation of motion by exact and high-dimensional approximate methods. Based on the Boobnov-Galerkin method, the...
Show moreThe dynamic behavior of straight cantilever pipes conveying fluid is studied, establishing the conditions of stability for systems, which are only limited to move in a 2D-plane. Internal
friction of pipe and the effect of the surrounding fluid are neglected. A universal stability curve showing boundary between the stable and unstable behaviors is constructed by finding solution
to equation of motion by exact and high-dimensional approximate methods. Based on the Boobnov-Galerkin method, the critical velocities for the fluid are obtained by using both the eigenfunctions
of a cantilever beam (beam functions), as well as the utilization of Duncan's functions. Stability of cantilever pipes with uniform and non-uniform elastic foundations of two types are considered
and discussed. Special emphasis is placed on the investigation of the paradoxical behavior previously reported in the literature.
Show less
Date Issued
Subject Headings
Strains and stresses, Structural dynamics, Structural stability, Fluid dynamics, Vibration
Document (PDF)
Impact analysis of a piezo-transducer-vibrator.
Karabiyik, Necati., Florida Atlantic University, Tsai, Chi-Tay, College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
Piezo-Transducer-Vibrators are miniature devices that emit both audio and silent signals and are currently targeted for use as an integral part of wristwatch technology. Utilizing nonlinear
finite element analysis is essential for obtaining a greater understanding of the system response under varying conditions. Dyna3D nonlinear finite element code is applied in this analysis with
the focus on the mechanical aspects of the vibrator. Four impact variables, the velocity, the plate gap, the...
Show morePiezo-Transducer-Vibrators are miniature devices that emit both audio and silent signals and are currently targeted for use as an integral part of wristwatch technology. Utilizing
nonlinear finite element analysis is essential for obtaining a greater understanding of the system response under varying conditions. Dyna3D nonlinear finite element code is applied in this
analysis with the focus on the mechanical aspects of the vibrator. Four impact variables, the velocity, the plate gap, the weight and the velocity angle are studied to determine the effects on
the system response. Each impact variable is assigned three separate values, creating twelve programs for analysis. For each program, responses to impact conditions are studied demonstrating the
deformed mode shapes, maximum principal stresses and maximum displacements using state database plots and time-history plots.
Show less
Date Issued
Subject Headings
Piezoelectric transducers, Finite element method, Wrist watches, Vibrators
Document (PDF)
Li, Qiang, Florida Atlantic University, Lin, Y. K., College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
The phenomenon of flow-induced vibration is found in many engineering systems. The fluid flow generates forces on the structure that cause motion of the structure. In turn, the structural motion
changes the angle of attack between the flow and the structure, hence the forces on the structure. Furthermore, turbulence generally exists in a natural fluid flow; namely, the fluid velocity
contains a random part. Thus, the problem is formulated as a nonlinear system under random excitations. This...
Show moreThe phenomenon of flow-induced vibration is found in many engineering systems. The fluid flow generates forces on the structure that cause motion of the structure. In turn, the
structural motion changes the angle of attack between the flow and the structure, hence the forces on the structure. Furthermore, turbulence generally exists in a natural fluid flow; namely, the
fluid velocity contains a random part. Thus, the problem is formulated as a nonlinear system under random excitations. This thesis is focused on one type of motion known as galloping. A
mathematical model for the motion of an elastically supported square cylinder in turbulent flow is developed. The physical nonlinear equation is converted to ideal stochastic differential
equations of the Ito type using the stochastic averaging method. The probability density for the motion amplitude and the values for the most probable amplitudes are obtained for various mean
flow velocities and turbulence levels.
Show less
Date Issued
Subject Headings
Random vibration--Mathematical models, Turbulence, Fluid dynamics
Document (PDF) | {"url":"https://fau.digital.flvc.org/islandora/search/catch_all_subjects_mt%3A%28Vibration%29?page=1","timestamp":"2024-11-11T23:44:54Z","content_type":"text/html","content_length":"176123","record_id":"<urn:uuid:521870a8-f497-452e-81a5-8f55212ef68a>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00192.warc.gz"} |
Total force of the fluid on the sphere in a creeping flow around a sphere
R: Radius $(\mathrm{m})$
$\rho$: Density $\left(\mathrm{kg} / \mathrm{m}^{3}\right)$
$\boldsymbol{\mu}$: Viscosity $(\mathrm{kg} /(\mathrm{ms}))$
g: Gravitational Acceleration $\left(\mathrm{m} / \mathrm{s}^{2}\right)$
vs: Apparent Velocity $(\mathrm{m} / \mathrm{s})$
F: Total Force of Fluid (N)
$F=\frac{4}{3} * \pi * R^{3} * \rho * g+6 * \pi * \mu * R * v s$
Bird, R.B., Stewart, W.E. and Lightfoot, E.N. (2002). Transport Phenomena (Second Ed.). John Wiley & Sons, Page: 60 . | {"url":"https://petroleumoffice.com/formula/total-force-of-the-fluid-on-the-sphere-in-a-creeping-flow-around-a-sphere/","timestamp":"2024-11-10T01:37:27Z","content_type":"text/html","content_length":"26933","record_id":"<urn:uuid:37b4d5b5-d38d-4154-a165-6519cb95d05f>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00333.warc.gz"} |
2.4.3 fitcmpmodel(Pro)
Menu Information
Compare Models
Brief Information
Compare two fitting models for a given dataset
Additional Information
It is not accessible from script. This feature is for OriginPro only.
X-Function Execution Options
Please refer to the page for additional option switches when accessing the x-function from script
Display Variable I/O Default
Name Name and Value Description
Fit Result1 result1 Specifies fit report sheets fitting with same datasets and different model.
Fit Result2 result2 Specifies fit report sheets fitting with same datasets and different model.
Akaike's Information aic 1 Decide whether to output result of Akaike's Information Creiteria (AIC) for comparison. The method has less limitation for model comparison.
Criteria (AIC) int
Bayesian Information bic 0 Select to decide whether to output result of Bayesian Information Criteria (BIC) for comparison. BIC introduces a larger penalty term than AIC
Criteria (BIC) int to resolve overfitting problem in data fitting.
F-test ftest 0 Decide whether to output result of F-test for comparison. Please note that F-test only makes sense for nested models.
Significance Level sl 0.05 Values between 0 and 1 are supported.
Fit Parameters param 1 Decide whether to output Fit Parameter table for comparison.
Fit Statistics statics 1 Decide whether to output Fit Statistics table for comparison.
1st Model Name name1 Model1 Specify the display name for the first model in the report sheet.
2nd Model Name name2 Model2 Specify the display name for the second model in the report sheet.
Results rt <new> Specify where to put the output report.
This tool helps to find out which model is the best fit for the same dataset.
Usually we have learned to compare values of Reduced Chi-Square to select the best fit model. It is a useful measure of goodness-of-fit. The more it is close to 1.0, the better model describes our
data. But, since variance of each point which enters in the calculation of Chi-Square is not sufficiently known, Chi-Square criteria is not significant in a statistical sense.
So, we adopt the following 2 methods in model comparison.
F-test takes advantage of difference of the sum square of residuals of each fit to find out which model is the best. F-test is to compare the sum of square of residuals into a component removed by
the simpler model and into a component additionally removed by the more complex model. So, it only makes sense when two models are nested. We recommend users to use this method in following situation
1. Equation of 2 models should be in similar structure , such as:
2. Model with some parameter fixed vs. Model with no parameter fixed
Akaike's Information Criteria (AIC)
Akaike's Information Criteria is to find which model would best approximate reality given the data we have recorded. It can simultaneously compare nested or non-nested models. Not relying on concept
of significance, AIC is founded on maximum likelihood to rank models. So robust and precise estimates can be obtained by incorporating model uncertainty based on AIC.
To use this tool, please pay attention to following
• Input for this tool is fit report sheets (Linear Fit, Polynomial, Nonlinear Curve Fit, etc). So fit tools need to be run before you use this tool
• Only the 1^st result in report sheet can be found. So we need to ensure that results are in separate sheet when fitting multiple datasets.
Bayesian Information Criteria (BIC)
Bayesian information criterion is a model selection criterion that was derived by Schwarz (1978) from a Bayesian modification of the AIC criterion. The penalty term for BIC is similar to AIC
equation, but uses a multiplier of ln(n) for k instead of a constant 2 by incorporating the sample size n. That can resolve so called over fitting problem in data fitting.
For two models compared, the model with lower value of BIC is preferred by data.
This example compares the two models as described below.
fname$=system.path.program$ + "Samples\Curve Fitting\Exponential Decay.dat"; // prepare the data
nlbegin 1!2 ExpDec1 tt; // nonlinear fitting on column 2
nlend 1 2; ;
nlbegin 1!2 ExpDec2 tt; // nonlinear fitting on column 3
nlend 1 2;
fitcmpmodel -r 2 result1:=2! result2:=4!; // compare the two models
Suppose we have a dataset and want to see which model is the best fit model for it.
Candidate models are:
1.Import Exponential Growth.dat on \Samples\Curve Fitting folder
2.Highlight Col(B), select Analysis: Fitting: Nonlinear Curve Fit to open dialog. Set Function as ExpDec1. Click OK to get result sheet
3.Open Nonlinear Curve fit dialog again, Set Function as ExpDec2 this time. Click OK to get result sheet
4.Select Analysis: Fitting: Compare Models to open dialog
5.Click browse button to open Report Tree Browser and select 1 item for Fit Result1
6.Repeat same operation to select another item for Fit Result2
7. Select all options in GUI and click OK
8.From F-test table and AIC result table, we can draw conclusion that ExpDec1 function is the best fit model
1. F-test
F Statistic:
RSS1 is residual sum of square of fit for the simpler model, RSS2 is residual sum of square of fit for the other model.
2. Akaike's Information Criteria (AIC)
where N is number is data points, K is number of parameters plus 1, RSS is residual sum of square of fit.
3. Schwarz Bayesian Information Criterion (BIC)
$BIC=N\ln \left(\frac{RSS}{N} \right)+K\ln \left(N \right)$
where N is number is data points, K is number of parameters plus 1, RSS is residual sum of square of fit.
1. Akaike, Hirotsugu (1974). "A new look at the statistical model identification". IEEE Transactions on Automatic Control19 (6): 716-723
2. Burnham, K. R. and D. R. Anderson. 2002. Model Selection and Multimodel Inference. Springer, New York.
Related X-Functions | {"url":"http://cloud.originlab.com/doc/en/X-Function/ref/fitcmpmodelPro","timestamp":"2024-11-05T12:40:56Z","content_type":"text/html","content_length":"144411","record_id":"<urn:uuid:926602a7-8911-4f94-a9af-9e22e1bc83c3>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00715.warc.gz"} |
Bob needs to make a total of 60 deliveries this week. So far he has completed 18 of them. What percentage of his total deliveries has Bob completed?
Find an answer to your question ✅ “Bob needs to make a total of 60 deliveries this week. So far he has completed 18 of them. What percentage of his total deliveries has Bob ...” in 📘 Mathematics if
you're in doubt about the correctness of the answers or there's no answer, then try to use the smart search and find answers to the similar questions.
Search for Other Answers | {"url":"https://edustrings.com/mathematics/1574534.html","timestamp":"2024-11-10T19:03:10Z","content_type":"text/html","content_length":"24472","record_id":"<urn:uuid:dab070e4-e4b3-43c7-9938-c7e396319b37>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00761.warc.gz"} |
Notes from the Primer, with examples.
Electrochemistry Notes
The Origin of Electrode Potentials
An equilibrium of reduction and oxidation is set up in a solution of Mn+ and Mm+ when a source of electrons (e.g. metal wire) is introduced.
e.g. Fe3+(aq) + e-(metal) ⇌ Fe2+(aq)
Regardless of the favoured direction, it is expected that at equilibrium there will exist a charge separation between the electrode and solution, and hence a potential difference between the two will
Using Le Chatelier’s Principle, we can expect that the concentration of metal ions will influence the equilibrium, and hence the potential. In fact, that ratio of the concentrations of e.g. Fe3+ and
Fe2+ is crucial:
φM - φS = constant – RT/F ln ( [Fe2+]/[Fe3+] )
Where φ denotes the electrical potential, and F is the Faraday Constant.
This is the Nernst Equation.
Thermodynamic Description of Equilibrium
Consider the gas phase reaction:
A(g) ⇌ B(g)
The simplest way of keeping track of this system is to note that at equilibrium the reactants and products must have identical chemical potentials:
μA = μB
These can in turn be related to partial pressures, as:
μA = μAo + RT ln pA
At equilibrium:
Kp = pB/pA = exp [ (μAo - μBo)/RT ]
Applying this now to the case where A and B are in solution:
μA = μAo + RT ln xA = μAo + RT ln [A]
where the solutions are assumed to be ideal. Also,
xA = nA/(nA+nB) and [A] = nA/V
This gives rise to two alternative standard states:
1. when mole fractions are used, μo is the chemical potential when x = 1 and so relates to a pure liquid, and
2. when considering concentrations, μo is the chemical potential of a solution of A of unit concentration.
Thermodynamic Description of Electrochemical Equilibrium
The above must be modified for two phases (solution and electrode). Also, it involves the transfer of a charge particle (electrical energy must also be considered). The latter adds a factor of zAFφ
to the chemical potential, where zA is the charge on the molecule A.
Deriving the Nernst Equation for the Fe2+/Fe3+ system above, the starting point for this is that at equilibrium, total electrochemical potential of reactants = total electrochemical potential of
products. This gives us:
(μFe3+ + 3Fφs) + (μe- - FφM) = (μFe2+ + 2Fφs)
From the revised equation for chemical potential. Rearranging, and applying:
μFe3+ = μoFe3+ + RT ln [Fe3+] (and similar for Fe2+)
we obtain:
φM - φs = Δφo – RT/F ln ( [Fe2+]/[Fe3+] )
Δφo is a constant, containing the two standard chemical potentials + the chemical potential of an electrode in the electrode.
The Nernst Equation and some other electrode/solution Interfaces
The Hydrogen Electrode
H+(aq) + e-(metal) ⇌ ½H2(g) (1)
We can derive the Nernst Equation as above:
(μH+ + Fφs) + (μe- - FφM) = ½ μH2
Rearrangement as above gives:
φM - φS = Δφo + RT/F ln ([H+]/pH21/2)
where pH21/2 is equivalent to the “concentration” of H2 gas in terms of chemical potentials.
The Nernst Equation thus predicts that increasing [H+] should make the electrode more positive relative to the solution. This is exactly what we would predict on the basis of the potential
determining equilibrium written for this electrode above (i.e. equation (1) is the right way round). Applying Le Chatelier’s Principle, adding H+ shifts the equilibrium to the right. This removes
electrons from the electrode, so makes it more positively charged.
The Chlorine Electrode
½ Cl2(g) + e-(metal) ⇌ Cl-(aq)
Deriving the Nernst Equation exactly as above gives:
½ μCl2 + (μe- - FφM) = (μCl- - Fφs)
φM - φS = Δφo + RT/F ln (pCl21/2/[H+])
This predicts that the quantity φM - φs becomes more positive if the partial pressure of chlorine gas is increased, or if the concentration of Cl- is decreased (i.e. the opposite way round to the
Hydrogen Electrode).
The Silver/Silver Chloride Electrode
AgCl(s) + e-(metal) ⇌ Ag(metal) + Cl-(aq)
The equilibrium is established at the silver/silver chloride boundary. It is therefore important that the silver chloride coat is porous so that the aqueous solution containing the chloride ions
penetrates to the boundary to establish equilibrium. We obtain:
(μAgCl) + (μe- - FφM) = (μAg) + (μCl- - Fφs)
Both AgCl and Ag are present as pure solids, hence no terms of the form RT ln [A] appear in their chemical potential equations (only for solutions / gases). This gives:
φM - φS = Δφo – RT/F ln ([Cl-])
Concentrations or Activities?
Activities are introduced to account for non-ideality. More on this below. For now, we can say that activities approximate to concentrations in chemical potential terms, i.e.
φM - φS = Δφo – RT/F ln ([aCl-])
is the most accurate way to express the above. The activity terminology will be used from now on.
Generalising the Nernst Equation for arbitrary potential determining equilibria
For any electrode process:
vAA + vBB + … + e-(metal) ⇌ vCC + vDD + …
The vJ terms are the stoichiometric amounts of each component.
Using the same methodology as for the Fe2+/Fe3+ (and subsequent examples) we find that the Nernst Equation generalises to:
φM - φS = Δφo – RT/F ln (aAvAaBvB…/aCvCaDvD…)
For gaseous components, the activity is replaced by the partial pressures, pJ as above.
Measurement of Electrode Potentials: The Need For A Reference Electrode
Impossible to measure an absolute value for the potential drop at a single electrode solution interface. Need a test electrode and a reference electrode to compare to (to complete the circuit). This
is depicted in shorthand notation as:
Reference Electrode | Solution | Test Electrode
The vertical line notates a boundary between two separate phases. The measured potential by the cell is then given by (φtest - φs) – (φref - φs). This works best when (φref - φs) is constant. We can
then measure any changes in measured potential directly.
A successful reference electrode must display the following properties:
• The chemical composition of the electrode and the solution to which the electrode is directly exposed must be held fixed. This is because the reference electrode potential will be established by
some potential-determining equilibrium and the value of this potential will depend on the relative concentrations of the chemical species involved. If these concentrations change the electrode
potential also changes.
• One very important consequence of the requirement for a fixed chemical composition is that it would be disastrously unwise to pass a large electric current through the reference electrode since
the passage of this current would induce electrolysis to take place and this would inevitable perturb the concentrations of the species involved in the potential-determining equilibrium.
• It is also experimentally desirable that potential term (φref - φs) attains its thermodynamic equilibrium value rapidly. In other words the potential determining equilibrium should display fast
electrode kinetics.
The Standard Hydrogen Electrode
The standard Hydrogen Electrode (SHE) fits the above well. For example, to measure the Fe2+/Fe3+ potential we would use:
The Salt Bridge – tube containing KCl which places the two half cells in electrical contact. One purpose of this is to stop the two different solutions required for the two half cells from mixing.
Otherwise, for example, the Pt electrode forming part of the SHE would be exposed to Fe2+/Fe3+ and its potential accordingly disrupted.
• In the SHE the pressure of hydrogen gas is fixed at one atmosphere and the concentration of protons in the aqueous HCl is exactly 1.18M (unity activity of H+). The temperature is fixed at 298K.
• The digital voltmeter draws negligible current so that no electrolysis of the solution occurs during measurement.
• The reference electrode is fabricated from platinised platinum rather than bright platinum metal to ensure fast electrode kinetics. The purpose of depositing a layer of fine platinum black onto
the surface is to provide catalytic sites which ensure that the potential determining equilibrium is rapidly established.
Using the Nernst Equation we obtain:
φPt - φS = ΔφoFe2+/Fe3+ – RT/F ln (aFe2+/aFe3+)
φref - φS = ΔφoH2/H+ – RT/F ln (aH+/pH21/2)
The measured potential is:
Δφ = φPt - φref
And can be obtained from the above by simple subtraction:
Δφ = (φPt - φs) - (φPt - φs)
Δφ =ΔφoFe2+/Fe3+ - ΔφoH2/H+ + RT/F ln (aFe3+pH21/2/aFe2+aH+)
Which is often written as:
E = Eo + RT/F ln (aFe3+pH21/2/aFe2+aH+)
The value of Eo is known as the “standard electrode potential” of the Fe2+/Fe3+ couple. This is the measured potential with the SHE (aH+ = 1, pH2 = 1atm) when all the chemical species contributing to
the potential determining equilibrium are present at unity activity.
Standard Electrode Potentials (SEP)
A further example is the Cu/Cu2+ couple.
Pt | H2(g) (p = 1atm) | H+(aq) (a=1) || Cu2+(aq) (a=1) | Cu
(The symbol || denotes the salt bridge).
The SEP of the Cu/Cu2+ couple is given by:
EoCu/Cu2+ = φCu - φPt
When measured the potential difference between the copper and platinum electrodes is found to be 0.34V with the copper electrode positively charged and the platinum negatively charged. In writing
down potentials a convention is essential so that the correct polarity is assigned to the cell. This is done as follows: with reference to a cell diagram, the potential is that of the right hand
electrode relative to that of the left hand electrode, as the diagram is written down. Thus for:
Cu | Cu2+(aq) (a=1) || H+(aq) (a=1) | H2(g) (p=1atm) | Pt
Ecell = -0.34V = φPt - φCu
Pt | H2(g) (p = 1atm) | H+(aq) (a=1) || Cu2+(aq) (a=1) | Cu
Ecell = +0.34V = φCu - φPt
Tabulated values allow the prediction of the potential of any cell formed from any pair of half cells. For example, the cell,
Cu | Cu2+(aq) (a=1) || Zn2+(aq) (a=1) | Zn
Ecell = φZn - φCu
Eocell = EoZn/Zn2+ - EoCu/Cu2+
Note also that it is possible to calculate cell potentials when the activity is non-unity. For example,
Cu | Cu2+(aq) (aCu2+) || Zn2+(aq) (aZn2+) | Zn
Ecell = φZn - φCu = (φZn - φs) – (φCu - φs)
Ecell = (ΔφoZn/Zn2+ + RT/F ln aZn2+1/2) - (ΔφoCu/Cu2+ + RT/F ln aCu2+1/2)
Ecell = (ΔφoZn/Zn2+ - ΔφoCu/Cu2+) + RT/F ln (aZn2+1/2/aCu2+1/2)
Ecell = EoZn/Zn2+ - EoCu/Cu2+ + RT/2F ln (aZn2+/aCu2+)
This is the Nernst Equation for the cell.
The Nernst Equation Applied to General Cell
Following the procedure below (generalised strategy of preceding section):
1. Write down the cell in shorthand notation.
2. Write the reactions at the two electrodes as reductions involving one electron only. Writing the reaction as a reduction means that the electron appears in the left hand side of the equation in
each case.
3. Subtract the reaction at the left hand electrode (in the cell as written down) from the reaction at the right hand electrode to find a “formal cell reaction”:
where Ri represents the reactants, Pj the products, and vi and vj are their stoichiometries.
4. The resulting Nernst Equation is given by:
Where Eocell = Eoright + Eoleft (the SEPs of the half-cell reactions as drawn).
Next we illustrate the above procedure for the cell:
Cd | Cd2+(aq) (aCd2+) || Pb2+(aq) (aPb2+) | Pb
Step (ii) gives the reaction at the right hand electrode as:
½Pb2+(aq) + e- → ½Pb(s)
At the left hand electrode the reaction is:
½Cd2+(aq) + e- → ½Cd(s)
Step (iii) and subtracting gives:
½Pb2+(aq) + ½Cd(s) → ½Pb(s) + ½Cd2+(aq)
This is the formal cell reaction.
Step (iv) gives:
Where Eocell = EoPb/Pb2+ + EoCd/Cd2+.
Tables give EoPb/Pb2+ = -0.126V and EoCd/Cd2+ = -0.403V:
Eocell = (-0.126) – (-0.403) = +0.277V
It should be noted that the formal cell reaction, as introduced in step (iii), depends on how the cell is written down in step (i). For example, the cell:
Cd | Cd2+(aq) (aCd2+) || Pb2+(aq) (aPb2+) | Pb
½Pb2+(aq) + ½Cd(s) → ½Pb(s) + ½Cd2+(aq)
Eocell = +0.277V
In contrast the cell:
Pb | Pb2+(aq) (aPb2+) || Cd2+(aq) (aCd2+) | Cd
½Cd2+(aq) + ½Pb(s) → ½Cd(s) + ½Pb2+(aq)
Eocell = -0.277V
It is thus helpful to distinguish the formal cell reaction from the spontaneous cell reaction. The latter is the reaction that would occur if the cell were shortcircuited. That is, the two electrode
were directly connected to each other.
The nature of the spontaneous reaction can be readily deduced since in reality, electrons will flow from a negative electrode to a positive electrode through an external circuit as is illustrated:
Notice that this predicts the Cadmium electrode to be negatively charged and the Lead to carry a positive charge. Electrons therefore pass from the Cd to the Pb. This implies that oxidation occurs at
the right hand electrode:
½ Cd → ½ Cd2+ + e-
And reduction at the left hand electrode:
½ Pb2+ + e- → ½ Pb
It follows that the spontaneous reaction is:
½Pb2+(aq) + ½Cd(s) → ½Pb(s) + ½Cd2+(aq)
In general, the spontaneous cell reaction that occurs when the two electrodes are shortcircuited may be established using the protocol set out earlier, and tables of electrode potentials decide which
electrode is positive charged and which negatively charged. Electron flow in the external current will always be from the negative to the positive electrode so that an oxidation process will occur at
the former and a reduction at the latter.
The relation of Electrode Potentials to the Thermodynamics of the Cell Reaction
Due to the negligible current, electrons are transferred under essentially thermodynamically reversible conditions.
If dn moles of electrons flow from the negative electrode, then dn moles of the following reaction will occur:
½Pb2+(aq) + ½Cd(s) → ½Pb(s) + ½Cd2+(aq)
Associated with this will be a change dG in the Gibbs Free Energy of the cell.
dG = dwadditional
where dwadd corresponds to the work done (other than pdV work) in the process. In the above scheme the only contribution to this quantity is the work done in transferring the charge (-Fdn Coulombs)
through the external circuit across a potential difference of Ecellvolts. It follows that:
dG = dwadd = (-Fdn)Ecell
For each mole transferred:
ΔG = - FEcell
Where ΔG refers to the reaction:
½Pb2+(aq) + ½Cd(s) → ½Pb(s) + ½Cd2+(aq)
If the cell components are at unit activity, then:
ΔGo = - FEo
It can therefore be seen that the measurement of cell potentials provides information about free energy changes. Furthermore, since:
dG = Vdp – SdT
It can be concluded that:
Combining this with ΔHo with ΔGo + TΔSo gives:
ΔHo = - FEo + TF
i.e. the entropy and enthalpy of a cell reaction can be obtained from the cell potential and its variation with temperature.
Standard Electrode Potentials and the Direction of Chemical Reactions
Eocell = Eoright + Eoleft
ΔGo = - FEo = - RT ln K
Hence we obtain,
Eo = (RT/F) ln K
We can thus conclude that if Eo is greater than zero K will be greater than one, and if Eo is negative K will be less than unity for the cell reaction. For example, we consider the SEP of a metal/
metal ion couple as noted:
Pt | H2(g) (p=1atm) | H+(aq) (a=1), Mn+(aq) (a=1) | M
The formal cell reaction will be:
1/n Mn+(aq) + ½H2(g) → 1/n M(s) + H+(aq)
and so the SEP of the metal/metal ion couple indicates whether or not the metal will react with H+(aq) to give hydrogen gas. Thus for example in the case of gold, we consider have a SEP for the above
of +1.83V and so ΔGo = - 1.83F. It follows that gold will not react with acid under standard conditions to form H2.
Conversely, considering the reaction with Li+, the SEP is -3.04V so that for the above reaction ΔGo = +3.04F, showing reaction of Li with acid is strongly favoured in thermodynamic terms.
The inertness of gold and the reactivity of lithium in aqueous acid predicted in this way is well known.
Generalising the above, it can be seen that if a metallic element M has a SEP for M/Mn+ couple which is negative, then it is possible for M to react with acid under standard conditions to evolve
hydrogen. If the SEP is positive then it will be impossible thermodynamically.
A further useful example is seen when taking Cu+ and Cu2+ in aqueous solution. If we consider the disproportionation reaction of Cu(I):
2Cu+(aq) ⇌ Cu(s) + Cu2+(aq)
This can be broken down into two separate reactions:
Cu+(aq) + e- → Cu(s)
Cu2+(aq) + e- → Cu+(aq)
We find that Eo for the former reaction is +0.52V and for the latter is +0.16V. It follows that for the reaction:
Cu+(aq) + ½ H2(g) ⇌ Cu(s) + H+(aq)
ΔGo = -0.52F
Cu2+(aq) + ½ H2(g) ⇌ Cu+(aq) + H+(aq)
ΔGo = -0.16F
The two reactions may be subtracted to give the disproportionation reaction:
Cu+(aq) + ½ H2(g) ⇌ Cu(s) + H+(aq)
minus Cu2+(aq) + ½ H2(g) ⇌ Cu+(aq) + H+(aq)
gives 2Cu+(aq) ⇌ Cu(s) + Cu2+(aq)
For which ΔGo = (-0.52F) – (-0.16F) = -0.36F
And so:
K = aCu2+/aCuaCu+ = 1.2 x 106 mol dm-3
We conclude that the disproportionation reaction is very likely to occur. Indeed Cu(I) disproportionates very rapidly in water, with a lifetime typically of less than one second, forming metallic
copper and copper(II) ions.
Standard Electrode Potentials and Disproportionation
Extending the above Copper example to generalise for the reaction:
(a+b) Mx+(aq) ⇌ aM(x+b)+(aq) + bM(x-a)+(aq)
If we consider the separate reactions:
1/b M(x+b)+(aq) + ½ H2(g) ⇌ 1/b Mx+(aq) + H+(aq) ΔGo = - FEo(Mx+/M(x+b)+)
1/a Mx+(aq) + ½ H2(g) ⇌ 1/a M(x-a)+(aq) + H+(aq) ΔGo = - FEo(M(x-a)+/Mx+)
(ab) times: 1/a Mx+(aq) + ½ H2(g) ⇌ 1/a M(x-a)+(aq) + H+(aq)
minus (ab) times: 1/b M(x+b)+(aq) + ½ H2(g) ⇌ 1/b Mx+(aq) + H+(aq)
gives: (a+b) Mx+(aq) ⇌ aM(x+b)+(aq) + bM(x-a)+(aq)
So that:
ΔGo = -abF [ Eo(M(x-a)+/Mx+) – Eo(Mx+/M(x+b)+) ]
It follows that the ΔGo will be negative and the disproportionation favourable if:
Eo(Mx+/M(x+b)+) < Eo(M(x-a)+/Mx+)
In the case of copper, Eo(Cu+/Cu2+) < Eo(Cu/Cu+), and thus disproportionation is favourable.
Standard Electrode Potentials and pH
pH = -log10aH3O+
Consider the disproportionation of bromine:
2Br2(aq) + 3H2O(l) → BrO3-(aq) + 6H+(aq) + 5Br-(aq)
Since Eo(Br2/BrO3-) = +1.48V and Eo(Br-/Br2) = +1.06V, it follows that ΔGo = +2.10F kJ mol-1.
Thus, at pH = 0, where aH+ = 1 the disproportionation is unfavourable. However, at pH = 9, we can deduce that:
So that in weakly basic solution the disproportionation becomes thermodynamically possible. Whenever protons or hydroxide ions appear for particular redox couples, the equilibria involved in these
couples will be sensitive to the solution pH and by varying this quantity the equilibrium may be shifted in favour of products or reactants.
Thermodynamics vs. Kinetics
We have illustrated the use of electrode potentials in predicting the position of chemical equilibria. The predictions are subject, however, to kinetic limitations.
A classic example of this is Mg(s) dipped in water. Calculations predict this is favourable by -1.53F kJ mol-1, but in practice little or no reaction is observed since a thin film of MgO on the metal
surface prevents the reaction taking place. This is also seen for Titanium and Aluminium in water.
Electrode Potentials tell us nothing about the likely rate of the reaction.
Allowing for Non-Ideality – Activity Coefficients
Non-ideal solutions have a chemical potential given by μA = μAo + RT ln aA, where aA is the effective concentration of A in the solution or the activity of A. It is related to the concentration of
the solution by the coefficient γ, such that aA = γA[A].
Clearly, if γA is unity, then the solution is ideal. Deviations from unity by γA are a measure of non-ideality. For dilute electrolytic solutions it is possible to calculate γA, involving the use of
the Debye-Huckel Theory.
In approaching this topic, it is useful to have some grounding in the Thermodynamics of Solutions, found in the Thermodynamics Notes.
Debye Huckel Theory
For a dilute electrolytic solution the activity coefficient, γ, is usually less than 1. This implies that the solution is more stable, by an amount RT ln γI, as compared to the hypothetical situation
where the ionic charge is “off”. The physical origin of this stabilisation arises from the fact that the anion “sees” more oppositely charged ions than like-charged ions as it moves about in
solution. Let us consider the distribution of charge around an ion.
On a time average this must be spherically symmetrical and reflect the fact that there will be a build up of opposite charge around the ion. The magnitude of the opposite charge decreases in a radial
direction away form the ion in question. Far away from the ion the charge becomes zero corresponding to bulk solution, sufficiently remote that the ion’s electroneutrality is unperturbed. These are
referred to as ionic atmospheres.
We can calculate the charge distribution in the ionic atmosphere around a particular ion, j, and then use this to quantify the stabilisation of the ion. When scaled up for one mole of ions this
should be equal to RT ln γj.
This turns out to be a straightforward but tedious exercise in electrostatics, provided some assumptions are made. The result is quite simple.
The deviation from ideality depends on a quantity known as the ionic strength, I, of the solution. This is defined as:
where the sum is over all the ions, i, in solution, ci is the concentration of ion i and zi its charge.
For example, consider a 0.1M solution of MgCl2:
I = ½ [ 0.1 x (+2)2 + 0.2 x (-1)2 ] = 0.3M
The ionic strength here is greater than the concentration.
As a second example consider a 0.1M solution of NaCl:
I = ½ [ 0.1 x (+1)2 + 0.1 x (-1)2 ] = 0.1M
In this case the ionic strength equals the concentration. This is a general result for species of the formula M+X-.
The basic equation of Debye-Huckel Theory is:
log10 γj = - Azj2 √I
where zj is the charge on the ion and A is a temperature and solvent dependent parameter. For water at 25oC, A ≈ 0.5.
In calculating the electrostatic stabilisation conferred on an ion by its atmosphere so as to establish this equation, several assumptions are made:
1. The cause of the solution non-ideality resides exclusively in coulombic interactions between the ions, and not at all, for example, in ion-solvent interactions.
2. The ionic interactions are quantitatively described by Coulomb’s Law for point charge. This presumes that the effects of the solvent is solely to reduce inter-ionic forces by means of the
dielectric constant.
3. The electrolyte is fully dissociated and no significant numbers of ion pairs exist. This implies that the external forces between the ions are weaker than the thermal forces in the solution
moving ions around, together and apart.
These assumptions work well in dilute solutions, so that for ionic concentrations below ~ 10-2 M the Debye-Huckel Limiting Law works quantitatively.
The Debye-Huckel Limiting Law predicts that the deviation from ideality increases with the square root of the ionic strength, I. It is interesting to consider why this should be, and to focus on the
size of the ionic atmosphere. The effective size is the latter is measured by its Debye length, which gives an indication of the distance between any ion and the average location of the charge in its
ionic atmosphere. The higher the concentration, the shorter the Debye length, i.e. Debye length 1/√I.
It follows that as the ionic strength increases the distance between the central ion and the charge in the ionic atmosphere shrinks. Accordingly Coulomb’s Law leads us to expect that the
electrostatic stabilisation of the ion conferred by the ionic atmosphere increases so that γj becomes smaller and the solution more non-ideal.
Limits of Debye-Huckel Theory
Works well in dilute solution – up to concentrations around 10-2M, but overestimates the deviation from ideality at higher concentrations.
The Extended Debye-Huckel Law is given by the equation:
The constant B is, like A, a solvent and temperature specific parameter, whilst a is the radius of the ion, j. Hence, it is derived without the need to assume point charges (uses spheres of radius a
instead). This works at higher concentrations.
Neither of these laws can predict the upturn seen at higher concentrations however. Physically this is because both equations attribute the deviation from ideality to electrostatic forces stabilising
each ion and these increase – and the ionic atmospheres shrink – as the ionic strength increases. Some new factor must become important at higher concentrations.
This is because the y-axis no longer relates to the activity coefficient of a single ion, but is the mean activity coefficient, γ±, defined as γ± = (γ+γ-)1/2 for an electrolyte of stoichiometry MX.
It can be seen that the deviation of experiment from the Debye-Huckel Limiting Law is faster for e.g. LiCl when compared to KCl. This implies that the new factor influencing deviation is greatest for
Li+. This can be attributed to charge density, which is higher on Li+. As a consequence, Li+ is more strongly hydrated in solution, i.e. deviation from the Debye-Huckel Limiting Law at higher
concentrations is due to ion-solvent effects.
Can actually add a cI term to the Extended Debye-Huckel Law to account for this, where c is a solute and solvent specific parameter characterising the salvation of the ions.
Applications of the Debye-Huckel Limiting Law
The solubility of sparingly soluble salts can be slightly enhanced by an increase in ionic strength. For example the solubility product of silver chloride is Ksp = aAg+aCl- = 1.7x10-10 mol2 dm-6 so
that in pure water the solubility is approximately 1.3x10-5 mol dm-3. The solubility of AgCl is promoted, a little, by the addition of KNO3.
Mz+Mz-(s) ⇌ Mz+(aq) + Mz-(aq)
Ksp = aMaX = γM[Mz+].γx[Xz-]
= γ±2[Mz+][Xz-]
The cation and anion concentrations will equal one another (=c), so that:
log10 Ksp = 2 log10 γ± + 2 log10c.
Applying the Debye-Huckel Limiting Law then gives:
log10c = ½ log10Ksp + z2A√I.
This shows that as the ionic strength is increased the solubility of MX is promoted. It is helpful to ask the physical reason for this, and to focus on the specific case of silver chloride. When AgCl
is dissolved into a solution to which KNO3 is progressively added, the Ag+ and Cl- ions will develop ionic atmospheres around themselves which will serve to stabilise the ions. Considering the
equilibrium, stabilisation will pull it to the right hand side, so promoting solubility of the silver halide. As the ionic strength of the solute rises, the Debye length of the ionic atmosphere will
shrink so that each ion will become closer to the (opposite) charge surrounding it and so the stabilisation is enhanced. Remember though that it will only apply quantitatively to dilute solutions.
The Kinetic Salt Effect
Consider the reaction between two charged species, M and X:
M + X ⇌ { M,X } → products
Where { M,X } is an activated complex or transition state denoted ‡ below. Assuming a pre-equilibrium with reactants M and X prior to reaction to the former with rate constant k, then:
K = a‡ / aMaX = (γ‡/γMγX)([‡]/[M][X])
The rate of reaction will be given by rate = k[‡]
Combining gives:
rate = kK(γMγX/γ‡)
log10kapp = log10kK + log10γM + log10γX – log10γ‡
When expanding taking into account the Debye-Huckel Limiting law this gives:
log10kapp = log10kK - AzM2√I – AzX2√I + A(zM+zX)2√I
log10kapp = log10kK + 2AzMzX√I
where k0→1 is the measured second order rate constant at infinite dilution.
This equation predicts that if we change the ionic strength, I, of a solution by adding an inert electrolyte containing no M or X, and which plays no part in the reaction other than to change the
ionic strength, nevertheless the rate of the reaction between M and X can be altered. Bizarrely, if M and X have the same charge, increasing the ionic strength is predicted to increase the rate,
while opposite charges gives an anticipated decrease in rate!
The clue to this behaviour lies in the effect of ionic atmospheres, not just on the reactants M and X, but also now on the transition state.
Considering a pair of divalent anions and cations reacting, adding an inert salt supplies ions which provide the reactants with an ionic atmosphere. This will stabilise the ions. In contrast, the
transition state, which is neutral, will have no ionic atmosphere and hence its energy will be essentially unchanged. The barrier to the reaction is thus increased.
Two divalent cations reacting, however, means that the transition state carries charge. In this case both the reactants and transition state are stabilised by the ionic atmosphere, and the reaction
barrier is now lowered.
More on Electrode Potentials
Suppose we wanted to obtain the value of the standard electrode potential of the cell:
Pt | H2(g) (p=1atm) | H+(aq) (a=1) Cl-(aq) (a=1) | AgCl | Ag
This cell is know as the Harned Cell.
That is equivalent to investigating the standard electrode potential of the Ag/AgCl couple:
EoAg/AgCl = φAg - φPt
In order to proceed with measurement we need to know how to obtain the concentration for unit solution activity. It is necessary to proceed as follows:
The potential of the cell is given by:
E = EoAg/AgCl – (RT/F) ln [ aH+aCl- ]
Where the activities are related to the concentrations by means of activity coefficients γ. If the solution is suitably dilute the Debye-Huckel Limiting Law can be used to predict the ionic activity
log10 γ+ = - A z+2 √I
In our example z+ = z- = 1 and A = 0.509, which gives:
E = EoAg/AgCl – (RT/F) ln [HCl]2 – (RT/F) ln γ±2
On relating the activity coefficient to the ionic strength we finally obtain:
E + (2RT/F) ln [HCl] = EoAg/AgCl + (2.34RT/F) √[HCl]
Consequently if we measure the cell potential, E, as a function of the HCl concentration in the region where the Debye-Huckel limiting law applies, a plot of E + (2RT/F) ln [HCl] against √[HCl]
should give a straight line with an intercept equal to the standard electrode potential of the Ag/AgCl couple.
The above experiment is a little special in the sense that both electrodes dip into the same electrolyte solution. The latter contains H+ ions which participate (with the hydrogen gas) in
establishing the potential at the platinum electrode and also Cl- ions which help produce the potential on the silver/silver chloride electrode. In other cases it is simply not possible to use a
single solution. For example, suppose we wished to measure the standard electrode potential of the Fe2+/Fe3+ couple. A single solution cell,
Pt | H2(g) (p=1) | H+(aq) (a=1), Fe2+(aq) (a=1), Fe3+(aq) (a=1) | Pt
is inappropriate since both platinum electrodes are exposed to the Fe2+ and Fe3+ ions. Thus at one of the two electrodes both the Fe2+/Fe3+ and the H+/H2 couples will try to establish their
potentials. This is why salt bridges are used.
Worked Examples
Equilibrium Constants
Calculate the equilibrium constants for the following reactions at 25oC in aqueous solutions,
1. Sn(s) + CuSO4(aq) ⇌ Cu(s) + SnSO4(aq)
2. 2H2(g) + O2(g) ⇌ 2H2O(l)
Given the following standard electrode potentials:
½ Sn2+(aq) + e- ⇌ ½ Sn(s) -0.136V
½ Cu2+(aq) + e- ⇌ ½ Cu(s) +0.337V
¼ O2(g) + H+(aq) + e- ⇌ ½ H2O +1.229V
We consider first equilibrium (a) and begin by noting that definition of the standard electrode potential of the Sn/Sn2+ couple implies for the following cell:
Pt | H2(g) (p=1atm) |H+(aq) (a=1) || Sn2+(aq) (a=1) | Sn
the cell potential is EoSn/Sn2+ = -0.136V = φSn - φPt
The strategy from earlier allows us to associate a formal cell reaction with the above cell as follows. The potential determining equilibrium at the right hand electrode is:
½ Sn2+ + e- ⇌ ½ Sn(s)
And at the left hand electrode:
H+(aq) + e- ⇌ ½ H2(g)
Subtracting gives:
½ Sn2+(aq) + ½ H2(g) ⇌ ½ Sn(s) + H+(aq)
For this last reaction:
ΔGo = - FEoSn/Sn2+ = +0.136F
Likewise for the Cu cell,
Pt | H2(g) (p=1atm) |H+(aq) (a=1) || Cu2+(aq) (a=1) | Cu
EoCu/Cu2+ = +0.337V
The potential determining equilibria at each electrode are:
Right Hand Electrode: ½ Cu2+(aq) + e- ⇌ ½ Cu(s)
Left Hand Electrode: H+(aq) + e- ⇌ ½ H2(g)
This enables the formal cell reaction to be deduced:
½ Cu2+(aq) + ½ H2(g) ⇌ ½ Cu(s) + H+(aq)
For which:
ΔGo = - FEoCu/Cu2+ = -0.337F
From these two reactions, subtracting:
½ Cu2+(aq) + ½ Sn(s) ⇌ ½ Cu(s) + ½ Sn2+(aq)
For which,
ΔGo = (-0.337F) – (+0.136F) = -0.463F
and ΔGo = -RT ln Kc
Kc here = [Sn2+]/[Cu2+] = 1x108, so for the original question (1 mol, not ½), the equilibrium constant = 1x1016.
Next we turn to equilibrium (b).
Pt | H2(g) (p=1atm) | H+(aq) (a=1), H2O (a=1) | O2(g) (p=1atm) | Pt
The formal cell reaction can be deduced by subtracting the potential determining equilibria at each electrode, as follows:
Right Hand Electrode: ¼ O2(g) + H+(aq) + e- ⇌ ½ H2O(l)
Left Hand Electrode: H+(aq) + e- ⇌ ½ H2(g)
To Give: ¼ O2(g) + ½ H2(g) ⇌ ½ H2O(l)
For this reaction, ΔGo = -1.229F
And the associated equilibrium constant:
= 6 x 1020 atm-3/4
Note that the activity of water is absent from the definition of K.
The Nernst Equation
For the following cell:
Al | Al3+(aq) || Sn4+(aq), Sn2+(aq) | Pt
State or calculate at 25oC:
1. the cell reaction
2. the cell EMF when all concentrations are 0.1M and 1.0M (ignore activity coefficients).
3. ΔGo for the cell reaction in (a)
4. K for the cell reaction in (a)
5. The positive electrode and the direction of electron flow in an external circuit connecting the two electrodes.
The standard potentials are EoSn2+/Sn4+ = 0.15V and EoAl/Al3+ = -1.61V
The potential determining equilibria are:
Right Hand Electrode: ½ Sn4+(aq) + e- ⇌ ½ Sn2+(aq)
Left Hand Electrode: ⅓ Al3+(aq) + e- ⇌ ⅓ Al(s)
Formal cell reaction: ½ Sn4+(aq) + ⅓ Al(s) ⇌ ½ Sn2+(aq) + ⅓ Al3+(aq)
When all the potential determining species in the cell are present at unit activity the cell potential is:
Eocell = EoSn2+/Sn4+ - EoAl/Al3+ = (0.15) – (-1.66) = 1.81V
So that for the reaction above: ΔGo = -1.81F = -175kJ mol-1
It follows that the reaction is thermodynamically downhill and is the process which would occur if the cell was short-circuited.
The cell EMF will be given by the appropriate Nernst Equation:
So that when all the concentrations are 1.0M the cell EMF is 1.81V.
When the concentrations are 0.1M,
= 1.81 + 0.02 = 1.83V
The equilibrium constant for the reaction is:
K = [Al3+]⅓[Sn2+]½/[Sn4+]½ = exp (1.81F/RT) = (4x1030 mol dm-3)1/2
Last we note the cell polarity will always be:
(-) Al | Al3+(aq) || Sn4+(aq), Sn2+(aq) | Pt (+)
unless tremendous extremes of concentration ratios occur ([Al3+] >> [Sn4+] and [Sn2+] >> [Sn4+]). It follows that if the cell is short-circuited then electrons would leave the aluminium, which would
oxidise, and that at the platinum electrode Sn4+ ions would be reduced.
Concentration Cells
Consider the following cell:
Pt | H2(g) (p1) | HCl(aq) (m1) || HCl(aq) (m2) | H2(g) (p2) | Pt
Where the hydrogen gas pressures are p1 and p2 atmospheres respectively and the two hydrochloric acid concentrations are m1 and m2 in mol dm-3. At 25oC calculate or state:
1. an expression for the cell EMF in terms of m1, m2, p1 and p2 (ignoring activity coefficients).
2. The cell EMF when m1 = 0.1M, m2 = 0.2M and p1 = p2 = 1 atm.
3. The cell EMF when the hydrogen pressure p2 is increased to 10atm, all other concentrations remaining the same
4. The cell reaction
As in the previous examples the strategy is first to identify the potential determining equilibria which in this case are:
Right Hand Electrode: H+(aq, m2) + e- ⇌ ½ H2(g, p2)
Left Hand Electrode: H+(aq, m1) + e- ⇌ ½ H2(g, p1)
Formal Cell Reaction: H+(aq, m2) + ½ H2(g, p1) ⇌ H+(aq, m1) + ½ H2(g, p2)
The Nernst Equation is therefore:
When p1 = p2 = 1, and m1 = 0.1 but m2 = 0.2:
E = (-RT/F) ln (0.1/0.2) = 0.018V
It p2 is changed to 10 atm we have:
E = (-RT/F) ln (0.1x101/2/0.2) = -0.012V
The formal cell reaction in this example was established above. The spontaneous cell reaction – that occurring when the cell is short-circuited – can be seem from the above to depend on the cell
concentrations. When p1 = p2 = 1 then the spontaneous reaction is the same as the formal cell reaction since:
ΔG = -0.018F < 0
When p2 is increased to 10 atm, ΔG = +0.012F > 0, so that the direction of the spontaneous cell reaction is reversed.
Solubility Products
Given the standard electrode potentials:
EoAg/Ag+ = +0.799V and EoAg/AgI = -0.152V
Calculate the solubility product, Ksp, and solubility of silver iodide at 25oC.
The first SEP quoted relates to the cell:
Pt | H2(g) (p=1atm) | H+(aq) (a=1) || Ag+(aq) (a=1) | Ag(s)
For which the formal cell reaction is:
Ag+(aq) + ½ H2(g) ⇌ Ag(s) + H+(aq)
ΔGo = -0.799F
Likewise the second SEP is that of the cell:
Pt | H2(g) (p=1atm) | H+(aq) (a=1), I-(aq) (a=1) | AgI | Ag(s)
The potential determining equilibria at the two electrodes are:
RHS: AgI(s) + e- ⇌ Ag(s) + I-(aq)
LHS: H+(aq) + e- ⇌ ½ H2(g)
Formal: AgI(s) + ½ H2(g) ⇌ Ag(s) + I-(aq) + H+(aq)
Which has ΔGo = +0.152F
Subtracting the two reactions gives:
AgI(s) ⇌ Ag+(aq) + I-(aq)
For which:
ΔGo = (+0.152F) – (-0.799F) = 0.951F = -RT ln Ksp
Ksp = 8.5x10-17 mol2 dm-6.
Weak Acids
The EMF of each of the following Harned cells is measured at two temperatures:
Pt | H2(g) (p=1atm) | HCl (10-5M) | AgCl | Ag (E1)
Pt | H2(g) (p=1atm) | HA (10-2M), KA (10-2M), KCl (10-5M) | AgCl | Ag (E2)
Where HA is a weak acid and KA is its potassium salt. The results are as follows:
293K 303K
E1 / V 0.820 0.806
E2 / V 0.878 0.866
Calculate Ka and ΔHo for the dissociation of the weak acid, pointing out any assumptions you make. Do not ignore activity coefficients, but assume in the second cell that [HA] >> [H+].
We start by identifying the potential determining equilibria in the two cells. In both cases these are:
RHS: AgCl(s) + e- ⇌ Ag(s) + Cl-(aq)
LHS: H+(aq) + e- ⇌ ½ H2(g)
Formal: AgCl(s) + ½ H2(g) ⇌ Ag(s) + Cl-(aq) + H+(aq)
The corresponding Nernst Equation is:
E = Eo – (RT/F) ln [aH+aCl-]
This applies to both cells although the a-values will differ.
aH+ = γH+[H+] and aCl- = γCl-[Cl-]
Consider the first cell and apply the Nernst Equation at the lower temperature:
0.820 = EoAg/AgCl – (293R/F) ln [ γH+γCl-10-5.10-5 ]
At concentrations as low as 10-5M the solutions are effectively ideal to a high degree of approximation. Physically this arises since the ions are so far apart that the ion-ion interactions are
negligible. We can therefore put: γH+ = γCl- ≈ 1, and so deduce that:
EoAg/AgCl = 0.240V (at 293K)
At the higher temperature, but the same method:
EoAg/AgCl = 0.206V (at 303K)
We next turn to the second cell and note that the hydrogen ion activity, aH+, “seen” by the hydrogen electrode will be governed by the dissociation of the weak acid,
HA(aq) ⇌ H+(aq) + A-(aq)
For which we can write the acid dissociation constant:
Now HA is uncharged so that we can safely assume γHA ≈ 1 is a very good approximation.
However, the ionic strength, I, of the solution is in excess of 10-2 mol dm-3 so we expect that γCl- < 1 and γA- < 1.
Returning to the Nernst Equation:
E2 = EoAg/AgCl – (RT/F) ln [aH+γCl-10-5]
At the lower temperature of 293K:
0.878 = 0.240 – 0.058 log [aH+γCl-10-5]
So that:
log [aH+γCl-] = -6.00 (at 293K)
At the higher temperature,
log [aH+γCl-] = -6.00 (at 303K)
it follows that at both temperatures:
aH+γCl- = 10-6
Ka = aH+aCl-(γA-/γCl-)([A-]/[HA])
= 10-6(γA-/γCl-)(10-2/10-2) = 10-6M
(if γA- = γCl-)
The last assumption is a good one since the Debye-Huckel Limiting Law predicts the same value for the activity coefficients of ions with the same charge experiencing the same ionic strength.
We have shown that Ka has the same value at both 293 and 303K. We can find ΔH for the acid dissociation by using the Van’t Hoff Ischore:
d ln K / dT = ΔHo/RT2
Which shows that ΔHo ≈ 0.
Thermodynamic Quantities
The EMF of the cell
Ag | AgCl | HCl (10-5M) | Hg2Cl2 | Hg
is 0.0421V at 288K and 0.0489V at 308K. Use this information to calculate the enthalpy, free energy and entropy changes that accompany the cell reaction at 298K.
The potential determining equilibria at the electrodes are:
RHS: ½ Hg2Cl2(s) + e- ⇌ Hg(l) + Cl-(aq)
LHS: AgCl(s) + e- ⇌ Ag(s) + Cl-(aq)
Formal: ½ Hg2Cl2(s) + Ag(s) ⇌ Hg(l) + AgCl(s)
ΔGo = - FEo
ΔGo288 = -0.421F = -4.062 kj mol-1 at 288K.
ΔGo308 = -0.0489F = -4.719 kJ mol-1 at 308K
Linearly interpolating between 288K and 308K we find:
ΔGo298 = -4.390 kJ mol-1 at 298K
Note that the reaction involves the pure solid metal chlorides and pure elements in their standard states so that the free energies evaluated above are standard free energies, regardless of the
concentration of HCl in the cell – the latter does not enter the net formal cell reaction, or influence the cell EMF. It does however play the vital role of establishing the potentials on the two
electrodes through the potential determining equilibria given above.
The entropy change be found from:
ΔSo298 = F
= F [ (-0.0489 – 0.0421)/20 ]
= 32.8 J K-1 mol-1 at 298K
The enthalpy change may be readily estimated from ΔHo = ΔGo + T ΔSo
ΔHo298 = -4.390 + [298x32.8/103] = +5.387kJ mol-1 at 298K.
It is apparent that the cell reaction is thermodynamically downhill, but that it is entropy drive, the process being enthalpically unfavourable (ΔH > 0). The positive ΔS value reflects the increase
in disorder in converting the solids Hg2Cl2 and Ag into solid AgCl and liquid Hg.
These notes are copyright Alex Moss, © 2003-present.
I am happy for them to be reproduced, but please include credit to this website if you're putting them somewhere public please! | {"url":"https://alchemyst.co.uk/note/electrochemistry","timestamp":"2024-11-11T13:00:15Z","content_type":"text/html","content_length":"190183","record_id":"<urn:uuid:e6d63a87-edca-4e89-b441-4a025f7f5342>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00762.warc.gz"} |
ML Aggarwal Class 6 Solutions for ICSE Maths Chapter 13 Practical Geometry Ex 13.2
ML Aggarwal Class 6 Solutions Chapter 13 Practical Geometry Ex 13.2 for ICSE Understanding Mathematics acts as the best resource during your learning and helps you score well in your exams.
ML Aggarwal Class 6 Solutions for ICSE Maths Chapter 13 Practical Geometry Ex 13.2
Question 1.
Draw a line segment \(\overline{\mathrm{PQ}}\) =5.6 cm. Draw a perpendicular to it from a point A outside \(\overline{\mathrm{PQ}}\) by using ruler and compass.
Question 2.
Draw a line segment \(\overline{\mathrm{AB}}\) = 6.2 cm. Draw a perpendicular to it at a point M on \(\overline{\mathrm{AB}}\) by using ruler and compass.
Question 3.
Draw a line l and take a point P on it. Through P, draw a line segment \(\overline{\mathrm{PQ}}\) perpendicular to l. Now draw a perpendicular to \(\overline{\mathrm{PQ}}\) at Q (use ruler and
Question 4.
Draw a line segment \(\overline{\mathrm{AB}}\) of length 6.4 cm and construct its axis of symmetry (use ruler and compass).
Question 5.
Draw the perpendicular bisector of \(\overline{\mathrm{XY}}\) whose length is 8.3 cm.
(i) Take any point P on the bisector drawn. Examine whether PX = PY.
(ii) If M is the mid-point of \(\overline{\mathrm{XY}}\), what can you say about the lengths MX and MY?
Question 6.
Draw a line segment of length 8.8 cm. Using ruler and compass, divide it into four equal parts. Verify by actual measurement.
Question 7.
With \(\overline{\mathrm{PQ}}\) of length 5.6 cm as diameter, draw a circle.
Question 8.
Draw a circle with centre C and radius 4.2 cm. Draw any chord AB. Construct the perpendicular bisector of AB and examine if it passes through C.
Question 9.
Draw a circle of radius 3.5 cm. Draw any two of its (non-parallel) chords. Construct the perpendicular bisectors of these chords. Where do they meet? | {"url":"https://ncertmcq.com/ml-aggarwal-class-6-solutions-for-icse-maths-chapter-13-ex-13-2/","timestamp":"2024-11-02T15:22:43Z","content_type":"text/html","content_length":"61772","record_id":"<urn:uuid:126d06f7-dd17-4e68-89c0-216f98e369b8>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00215.warc.gz"} |
4 Digit By 2 Digit Multiplication Worksheets Pdf
Math, specifically multiplication, forms the foundation of countless scholastic self-controls and real-world applications. Yet, for lots of learners, grasping multiplication can present an obstacle.
To address this hurdle, educators and moms and dads have actually embraced an effective tool: 4 Digit By 2 Digit Multiplication Worksheets Pdf.
Intro to 4 Digit By 2 Digit Multiplication Worksheets Pdf
4 Digit By 2 Digit Multiplication Worksheets Pdf
4 Digit By 2 Digit Multiplication Worksheets Pdf -
Long Multiplication Worksheet Multiplying 4 Digit by 2 Digit Numbers Author Math Drills Free Math Worksheets Subject Long Multiplication Keywords math multiplication long multiply product 4 digit 2
digit Created Date 8 31 2016 8 39 03 AM
Below are six versions of our grade 5 math worksheet on multiplying 4 digit by 2 digit numbers These worksheets are pdf files Worksheet 1 Worksheet 2 Worksheet 3 Worksheet 4 Worksheet 5 Worksheet 6 5
Relevance of Multiplication Technique Recognizing multiplication is crucial, laying a strong foundation for sophisticated mathematical concepts. 4 Digit By 2 Digit Multiplication Worksheets Pdf
supply structured and targeted practice, cultivating a deeper understanding of this basic arithmetic procedure.
Advancement of 4 Digit By 2 Digit Multiplication Worksheets Pdf
Math Multiplication Worksheets 4th Grade
Math Multiplication Worksheets 4th Grade
We ve got you covered Multiplying 4 digit by 2 digit is pretty much the same as 4 digit by 1 digit Further you will have no difficulty making these CCSS aligned pdf worksheets fit into your existing
curriculum Packed with fifteen problems in each of its worksheets this bunch of resources not only works as a wonderful new study tool for
Here you will find a range of Free Printable 4th Grade Multiplication Worksheets The following worksheets involve using the Fourth Grade Math skills of multiplying and solving multiplication problems
use their multiplication table knowledge to multiply by 10s and 100s mentally multiply a two or three digit number by a two digit number
From typical pen-and-paper workouts to digitized interactive layouts, 4 Digit By 2 Digit Multiplication Worksheets Pdf have actually developed, dealing with varied discovering designs and
Sorts Of 4 Digit By 2 Digit Multiplication Worksheets Pdf
Fundamental Multiplication Sheets Straightforward workouts focusing on multiplication tables, aiding students build a strong math base.
Word Issue Worksheets
Real-life situations incorporated into issues, enhancing crucial thinking and application abilities.
Timed Multiplication Drills Tests made to enhance speed and accuracy, helping in fast mental mathematics.
Benefits of Using 4 Digit By 2 Digit Multiplication Worksheets Pdf
3 Digit X 2 Digit Multiplication
3 Digit X 2 Digit Multiplication
Multi Digit box method multiplication worksheets PDF are giving for students learning or revision These Partial product multiplication worksheets and Area model multiplication examples and test are
gives to make kids more successful in complex multiplication Here there are 2 digits 3 digits and 4 digits printable multiplication exercises
Vertical Format This Multiplication worksheet may be configured for 2 3 or 4 digit multiplicands being multiplied by 1 2 or 3 digit multipliers You may vary the numbers of problems on each worksheet
from 12 to 25 This multiplication worksheet is appropriate for Kindergarten 1st Grade 2nd Grade 3rd Grade 4th Grade and 5th Grade
Enhanced Mathematical Abilities
Consistent method develops multiplication efficiency, enhancing total math abilities.
Improved Problem-Solving Abilities
Word issues in worksheets establish logical reasoning and technique application.
Self-Paced Knowing Advantages
Worksheets fit private learning speeds, promoting a comfy and versatile learning environment.
Just How to Create Engaging 4 Digit By 2 Digit Multiplication Worksheets Pdf
Including Visuals and Colors Vibrant visuals and colors catch focus, making worksheets visually appealing and engaging.
Including Real-Life Circumstances
Relating multiplication to everyday scenarios adds significance and practicality to exercises.
Tailoring Worksheets to Various Ability Degrees Tailoring worksheets based on varying efficiency degrees guarantees inclusive understanding. Interactive and Online Multiplication Resources Digital
Multiplication Devices and Gamings Technology-based sources provide interactive discovering experiences, making multiplication interesting and satisfying. Interactive Sites and Applications On-line
systems give varied and obtainable multiplication method, supplementing conventional worksheets. Tailoring Worksheets for Various Understanding Styles Aesthetic Students Aesthetic aids and
representations help comprehension for learners inclined toward aesthetic understanding. Auditory Learners Spoken multiplication issues or mnemonics satisfy students who comprehend principles via
acoustic methods. Kinesthetic Students Hands-on activities and manipulatives support kinesthetic students in comprehending multiplication. Tips for Effective Execution in Discovering Uniformity in
Practice Normal technique enhances multiplication abilities, advertising retention and fluency. Stabilizing Repetition and Variety A mix of repeated exercises and varied trouble layouts maintains
passion and understanding. Giving Positive Responses Comments help in recognizing areas of enhancement, urging continued development. Obstacles in Multiplication Technique and Solutions Motivation
and Engagement Obstacles Dull drills can cause uninterest; innovative strategies can reignite inspiration. Getting Rid Of Worry of Math Negative assumptions around mathematics can impede development;
developing a favorable understanding environment is essential. Impact of 4 Digit By 2 Digit Multiplication Worksheets Pdf on Academic Performance Research Studies and Research Study Searchings For
Research suggests a favorable connection between regular worksheet use and boosted mathematics performance.
Final thought
4 Digit By 2 Digit Multiplication Worksheets Pdf become flexible tools, cultivating mathematical proficiency in students while accommodating diverse learning styles. From basic drills to interactive
on the internet sources, these worksheets not only improve multiplication skills but also advertise vital reasoning and analytic abilities.
7 Multiplication Worksheets Examples In PDF Examples
Math Multiplication Worksheet
Check more of 4 Digit By 2 Digit Multiplication Worksheets Pdf below
Three digit Multiplication Practice Worksheet 03
3 Digit By 2 Digit Multiplication Word Problems Worksheets Pdf Free Printable
Multiplying 2 Digit By 2 Digit Worksheets
Math Worksheets 2 Digit Multiplication
Two digit multiplication Worksheet 5 Stuff To Buy Pinterest Multiplication worksheets
Multi Digit Multiplication by 2 Digit 2 Digit Multiplicand EdBoost
Grade 5 Math Worksheets Multiplication in columns 4 by 2 digit K5
Below are six versions of our grade 5 math worksheet on multiplying 4 digit by 2 digit numbers These worksheets are pdf files Worksheet 1 Worksheet 2 Worksheet 3 Worksheet 4 Worksheet 5 Worksheet 6 5
span class result type
Independent Worksheet 4 Using the Standard Algorithm for Two Digit by One Digit Multiplication A5 111 Independent Worksheet 5 Choose Your Strategy A5 113 H represent multiplication of two digit by
two digit numbers H multiply by 10 and 100 H multiply 2 and 3 digit by 1 and 2 digit numbers using efficient methods including the standard
Below are six versions of our grade 5 math worksheet on multiplying 4 digit by 2 digit numbers These worksheets are pdf files Worksheet 1 Worksheet 2 Worksheet 3 Worksheet 4 Worksheet 5 Worksheet 6 5
Independent Worksheet 4 Using the Standard Algorithm for Two Digit by One Digit Multiplication A5 111 Independent Worksheet 5 Choose Your Strategy A5 113 H represent multiplication of two digit by
two digit numbers H multiply by 10 and 100 H multiply 2 and 3 digit by 1 and 2 digit numbers using efficient methods including the standard
Math Worksheets 2 Digit Multiplication
3 Digit By 2 Digit Multiplication Word Problems Worksheets Pdf Free Printable
Two digit multiplication Worksheet 5 Stuff To Buy Pinterest Multiplication worksheets
Multi Digit Multiplication by 2 Digit 2 Digit Multiplicand EdBoost
Double Digit Multiplication Worksheets Pdf WorksSheet List
4 digit By 1 digit Multiplication Worksheets
4 digit By 1 digit Multiplication Worksheets
multiplication 3 digit by 2 digit worksheets
Frequently Asked Questions (Frequently Asked Questions).
Are 4 Digit By 2 Digit Multiplication Worksheets Pdf appropriate for any age groups?
Yes, worksheets can be tailored to various age and skill degrees, making them versatile for various students.
How often should pupils exercise utilizing 4 Digit By 2 Digit Multiplication Worksheets Pdf?
Regular practice is essential. Normal sessions, preferably a few times a week, can produce considerable enhancement.
Can worksheets alone improve math abilities?
Worksheets are a valuable tool yet must be supplemented with varied knowing approaches for detailed ability growth.
Exist online platforms offering cost-free 4 Digit By 2 Digit Multiplication Worksheets Pdf?
Yes, several instructional websites offer open door to a variety of 4 Digit By 2 Digit Multiplication Worksheets Pdf.
Exactly how can parents support their youngsters's multiplication practice in the house?
Motivating regular practice, supplying support, and creating a favorable understanding setting are useful actions. | {"url":"https://crown-darts.com/en/4-digit-by-2-digit-multiplication-worksheets-pdf.html","timestamp":"2024-11-13T21:36:02Z","content_type":"text/html","content_length":"28343","record_id":"<urn:uuid:e1c3ec82-4f1b-44eb-927f-f356e46b5298>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00556.warc.gz"} |
Journal of Modern Physics
Vol.5 No.12(2014), Article ID:48047,3 pages DOI:10.4236/jmp.2014.512111
The Dynamic Gravitation of Photons from the Perspective of Maxwell’s Wave Equations
Guido Zbiral
Private Retired Scientist, Klosterneuburg, Austria
Email: guido@zbiral.at
Copyright © 2014 by author and Scientific Research Publishing Inc.
This work is licensed under the Creative Commons Attribution International License (CC BY).
Received 14 May 2014; revised 8 June 2014; accepted 2 July 2014
Although the gravitational constant (G) does not explicitly occur in the Maxwell Wave Equations, this paper will show that G is indeed implicitly contained in them. The logical consequence hereby is
that electromagnetic radiation is associated with dynamic gravitation and not—as assumed in Einstein’s Special Theory of Relativity—with “static” gravitation, dynamic gravitation being at the time
unknown. According to the Maxwell Wave Equations, gravitation experiences the same dynamic (speed of light c) as electromagnetic radiation and must therefore also be of a quantum nature. There must
exist an equal number of gravitational quanta as there are photons. Since photons do not possess a baryonic rest mass but only a relativistic mass, this mass must be nonbaryonic in nature—precisely
as their dynamic gravitation.
Keywords:Photon, Dynamic Gravitation, Gravitational Quanta, Maxwell’s Wave Equations
1. Introduction
For more detailed information on the nature of dynamic gravitation of photons, please refer to the paper by Guido Zbiral: The “Dynamic Gravitation of Photons: A Hitherto Unknown Physical Quantity”.
New Aspects on the Physics of Photons in Journal of Modern Physics, 5, 198-204. http://dx.doi.org/10.4236/jmp.2014.55030 (March 2014).
The following short paper is intended as a supplement to the paper cited above.
2. The Dynamic Gravitation of Photons from the Perspective of Maxwell’s Wave Equations
The Maxwell’s Wave Equations for the x-axis are: [1] [2]
The vectors for the electrical component E and the magnetic component B of the electromagnetic wave are perpendicular to the direction of propagation (the x-axis) and are thus transversal waves;
furthermore, the Efield and the B-field are themselves mutually perpendicular (along the yand z-axes, respectively); both fields are completely symmetrical and also—in regard to their
energy—completely equivalent. The dynamic E-field and the dynamic B-field are inseparably linked to each other in the electromagnetic wave and are mutually dependent.
Since the property c^2 [m^2∙s^−^2] is included in the dimensions of the Gravitational Constant G = 6.67 × 10^−^11 [m^3∙kg^−^1∙s^−^2], and according to the so-called Maxwell’s 5th Equation:
the following relationship then applies:
When the relationship (4) is inserted into the two Maxwell Wave Equations (1), (2), they then become:
Both of these converted wave equations now contain the Gravitational Constant G and can be interpreted as follows:
Since the two dynamic vector fields E and B transport electrical and magnetic energy along their respective axes with them, each of these fields are associated with G, i.e., both components of the
electromagnetic wave are subject to gravitation.
Gravitation (denoted by the constant physical value G) is not able to create a gravitational wave (i.e. radiation) of its own accord. However, as G is inseparably associated with both the dynamic
vector fields E and B^1 gravitation is—so to speak—“carried along” by the two vector fields to each other. Due to its coupling with E and B, gravitation must of necessity assume all the dynamic
properties possessed by E and B^2. Gravitation is thus propagated at the speed of light along the x-axis with the same frequency n as the electromagnetic wave. For this reason, gravitation must
possess a wave property—i.e. radiation of a quantum nature.
Gravitational waves are—as is the case with electromagnetic waves—transverse waves and resonate synchronously in the plane of their respective field components E and B. Since G is expressed
completely symmetrically in both of Maxwell’s Wave Equations (5), (6), the gravitational waves also obey Maxwellian theory in this regard. It therefore follows that the equation for the energy of the
electromagnetic radiation (of the photons)
also applies to the gravitational radiation (of the gravitational quanta) associated with the photons. Gravitational waves are therefore completely equivalent to the Eund B-fields of electromagnetic
waves, albeit acting in an opposite manner. At the constant speed of light, a stable state of equilibrium exists within each photon between the expansive force of electrodynamics and the equal but
opposing (braking) force of its dynamic gravitation. Therefore at the constant speed of light, the resulting total energy of every photon is always zero! This is an absolute necessity for the
constancy of the speed of light.
This derivation represents a confirmation that photons as dynamic electromagnetic quanta are inseparably linked to dynamic gravitation (in the form of gravitational quanta).
3. Conclusion
The existence of dynamic gravitation, proposed in the aforementioned paper, is verified as a result of the theoretical derivation from the Maxwell Wave Equations.
My warmest thanks go to my translator Kris Szwaja (M.A. Physics, Oxon), both for translating my manuscript from German into English and his valuable suggestions on the text itself.
1. Leisen, J. Research Seminar on “Die Maxwell-Gleichungen Verstehen”, University of Mainz.
2. (2004) Die Maxwell-Gleichungen und ihre Bedeutung fur die SRT. http://www.mahag.com/srt/maxwell.php
3. Wikipedia (Google) (2014) Article on the Higgs Mechanism.
In the review comments of this paper, the opinion was expressed that the dynamic gravitation of photons possesses an internal connection with the source of the (relativistic) mass of these
fundamental particles, which could possibly be interpreted by the Higgs Mechanism (or Higgs field).
In this regard the following should be noted: All elementary particles with baryonic mass obtain their mass by means of interaction with the Higgs Field, whereby a “non-relativistic environment” is
supposed. The Higgs Field is massive but not directly measurable; the associated Higgs boson has a mass of approx. 125 GeV and acts purely “longitudinally”.
In contrast, electromagnetism is not of massive nature and the Higgs Mechanism does not couple with it. For this reason the gauge boson relevant for this theory, the photon, possesses no rest mass,
it does, however, possess a relativistic mass and displays a purely “transversal” behaviour. While the dynamic gravitation of photons is associated with its relativistic mass and exists only within a
relativistic environment, for the reasons described above it is not possible to establish a relationship of the relativistic mass with the Higgs Mechanism. This means that from the present-day
perspective, the relativistic mass of photons cannot be explained by means of the Higgs Mechanism—there must be a different cause at work.
It has not yet been definitively proven whether the new particle discovered in July 2012 is indeed the Higgsboson predicted in the Standard Model [3] .
^1E and B are energy fields; each form of energy is inseparably associated with gravitation.
^2The properties of the dynamic vector fields E and B are—as it were—transferred to the gravitation. | {"url":"https://file.scirp.org/Html/6-7501857_48047.htm","timestamp":"2024-11-12T17:38:04Z","content_type":"application/xhtml+xml","content_length":"41063","record_id":"<urn:uuid:a76df564-4833-4ebd-b8d1-26d750fe69a8>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00783.warc.gz"} |
Do we have a quantum field theory of monopoles?
Recently, I read a review on magnetic monopole published in late 1970s, wherein some conjectures of properties possibly possessed by a longingly desired quantum field theory of monopoles are stated.
My question is what our contemporary understanding of the quantum field theory of monopoles is. Do we have a fully developed one? Any useful ref. is also helpful.
This post imported from StackExchange Physics at 2014-08-22 05:08 (UCT), posted by SE-user huotuichang
This is almost, but not quite, a duplicate of What tree-level Feynman diagrams are added to QED if magnetic monopoles exist?.
In principle quantum electrodynamics includes magnetic monopoles as well as electrons, so yes we do have a theory to describe them. However we expect monopoles to be many orders of magnitude heavier
than electrons, and that causes problems trying to describe both with a perturbative calculation.
This post imported from StackExchange Physics at 2014-08-22 05:08 (UCT), posted by SE-user John Rennie
This answer is based on David Tong's lectures on solitons - Chapter 2 - Monopoles.
The general answer to the question is that it is known how to construct a quantum mechanical theory of magnetic monopoles acting as individual particles among themselves and also perturbatively in
the background of the standard model fields.
t' Hooft - Polyakov monopoles appear as solitons in non-Abelian gauge theories, i.e. as stable static solutions of the classical Yang-Mills-Higgs equations. These solutions depend on some free
parameters called moduli. For exmple the center of mass vector of the monopole is a modulus, since monopoles centered around any point in space are solutions since the basic theory is translation
invariant. The full moduli space in this case is:
$\mathcal{M_1} = \mathbb{R}^3 \times S^1$.
The first factor is the monopole center of mass, the second factor $S^1$ will provide after quantization an electric charge to the monopole by means of its winding number.
A two monopole solution will have apart of its geometric coordinates an and charge another compact manifold giving it more internal dynamics. This part is called the Atiyah-Hitchin manifold after
Atiyah and Hitchin who were the first to investigate the monopole moduli spaces and compute many of their characteristics:
$\mathcal{M_2} = \mathbb{R}^3 \times \frac{S^1 \times \mathcal{M_{AH}}}{\mathbb{Z}_2}$.
The knowledge about the arbitrary Atiyah-Hitchin manifolds is not complete. We can compute its metric and its symplectic structure. It is known thta they are HyperKaehler, which suggests that they
can be quantized in a supersymmetric theory. Also, some topological invariants are also known.
These moduli spaces can be quantized (i.e., associated with Hilbert spaces on which the relevant operators can act), and the resulting theory will be a quantum mechanical theory of the monopoles. For
example the for the charge 2 monopole one can in principle find the solutions representing the scattering of the two monopoles. It should be emphasized that this is a quantum mechanical theory and
not a quantum field theory.
One way to understand that is to let the moduli vary very slowly (although strictly speaking the solutions are only for constant moduli). Then the resulting solutions will correspond to the classical
scattering of the monopoles.
Basically, one can find the interaction of the monopoles with the usual fields of the theory by expanding the Yang-Mills theory around the monopole solution, then quantize the moduli space. In
particular, the Dirac equation in the monopole background has zero modes which can be viewed as particles in the infrared limit.
This post imported from StackExchange Physics at 2014-08-22 05:08 (UCT), posted by SE-user David Bar Moshe | {"url":"https://www.physicsoverflow.org/22700/do-we-have-a-quantum-field-theory-of-monopoles","timestamp":"2024-11-07T23:20:05Z","content_type":"text/html","content_length":"131301","record_id":"<urn:uuid:5d02ae20-c457-4928-9fa7-140726ea5c11>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00580.warc.gz"} |
Fibonacci Extensions
Introduction to Fibonacci Mathematics
Fibonacci mathematics can help traders to reveal the hidden proportionality of market behavior. Fibonacci extension analysis studies the extends of prime trends and countertrends in order to
identify key reversal zones, or else levels where a trending market may lose momentum and reverse.
Calculating the Basic Ratios using the Fibonacci Sequence
The Fibonacci sequence of numbers begins as follows: 0, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233 and etc.
The above sequence can then be broken down into ratios. The Fibo ratios can be found by dividing the Fibonacci numbers into each other.
If we do the math by excluding the first numbers, we realize that:
• Every number is approximately 1.618 times the preceding number
• Every number is 0.618 of the number to the right of it
Note that 1.618 is the golden ratio, and its inverse is 0.618.
Key Ratios for Financial Trading
In financial trading, the key ratios are 0.236, 0.382, 0.618, 1.618, 2.618, and 4.236. Many traders also use 0.5, 1.0.
Table: Key Fibonacci ratios for Financial trading
│0.236 │1.000 │
│0.382 │1.618 │
│0.500 │2.618 │
│0.618 │3.618 │
│0.786 │4.236 │
Fibonacci Extensions
The Fibonacci sequence of numbers produces useful trading tools such as the Fibonacci retracement and the Fibonacci extensions.
Fibonacci extensions can identify the extent of prime trends and countertrends to spot potential price reversal zones. Let's start by calculating the Fibo extensions above 100.
Calculating Fibonacci Extensions above 100%
These are some basic calculations, as presented in the following table:
• A, the Fibonacci sequence
• A1, dividing each Fibonacci number by the prior number (1 position), and the ratio approaches 1.618
• A2, dividing each Fibonacci number by two places to the left (2 positions), and the ratio approaches 2.618
• A3, dividing each Fibonacci number by three places to the left (3 positions), and the ratio approaches 4.236
Table: Calculating all Fibonacci Ratios
│A │A1 │A2 │A3 │
│1 │ │ │ │
│2 │2.000 (2/1) │ │ │
│3 │1.500 (3/2) │3.000 (3/1) │ │
│5 │1.667 (5/3) │2.500 (5/2) │5.000 (5/1) │
│8 │1.600 (8/5) │2.667 (8/3) │4.000 (8/2) │
│13 │1.625 (13/8) │2.600 (13/5) │4.333 (13/3) │
│21 │1.615 │2.625 │4.200 │
│34 │1.619 │2.615 │4.250 │
│55 │1.618 │2.619 │4.231 │
│89 │1.618 │2.618 │4.238 │
│144│1.618 │2.618 │4.235 │
│233│1.618 │2.618 │4.236 │
│377│1.618 │2.618 │4.236 │
By adding 3.618 (2.618+1), the key Fibonacci extensions (above 100) are 161.8%, 261.8%, 361.8%, and 423.6%.
Extension levels are areas where the price is expected to reverse. The Trend-Based Fibonacci Extensions (TBFEs) are drawn on any chart and work through the use of the Fibonacci ratios. In bull
markets, the Fibonacci Extensions tool is particularly useful to determine strong resistance when the price of an asset is found at price discovery. However, you can apply the tool on a bearish
market as well. The Fibonacci Extensions tool can be used for multiple purposes:
• Evaluating how far primary uptrends and downtrends can go.
• Creating useful targets for our orders (take-profit and stop-loss), especially when the price of an asset is found at price discovery.
• Analyzing price corrections and distinguishing between temporary price pullbacks and key trend reversals.
• Analyzing crowd behavior during extremely bullish market movements, when other TA tools fail.
How to Draw the Fibonacci Levels
Fibonacci Extensions are drawn by joining three (3) points, in contrast to the Fibonacci Retracement which has only (2) two points. The first thing is to spot a trend that will be used as a base.
To draw the Trend-Based Fibonacci Extensions in a bullish trend, you need to click on three separate price levels: the start and the end of the prime trend, plus the end of the secondary end. In
a bearish trend, the logic remains the same, but in reverse.
• Click-1: Start by clicking on the beginning of the price movement
• Click-2: Click on the point of completion of the price movement
• Click-2: Click on the point of completion of the secondary trend (end of the retracement against that move)
In the following chart, the Trend-Based Fibonacci Extension (TBFE) is applied on Ethereum/USD.
Image: Trend-Based Fibonacci Extension Tool on Ethereum (TradingView)
Trading with Fibonacci Extensions
Fibonacci Extensions use the same logic as the Fibonacci Retracement. Both tools indicate levels of a potential trend reversal. The main difference is that Fibonacci extensions can analyze trends
that extend beyond the base trend, and that means the price extends above the 100% level. That is a common situation for recently listed financial assets lacking historical trading data or for
financial assets that move in price discovery after a significant fundamental shift.
Fibonacci extensions can signal entries when the price bounces from an extension level, or indicate take-profit price levels:
• Use the Fibonacci extension levels like any other support and resistance levels.
• Trade in the direction of the trend when there is a breakout of a Fibonacci extension level. Take profits near the next extension level.
• Trade a trend reversal, after price bounces from an extension level.
• Create very useful take-profit zones, close to Fibonacci extension levels. In addition, indicate price targets when the price of an asset is at price discovery.
• Enter a Stop-Loss order near the next Fibonacci extension level.
Key Takeaways
• Fibonacci Extensions apply the same logic as the Fibonacci Retracement, however, Fibonacci extensions can analyze trends that extend beyond the base trend, and that means the price extends above
the 100% level.
• The key extensions above 100 are 161.8%, 261.8%, 361.8%, and 423.6%.
• In bullish trends, the Fibonacci Extensions tool is particularly useful to determine strong resistance when the price of an asset is found at price discovery.
• The Trend-Based Fibonacci Extensions (TBFEs) are drawn on any chart and work through the use of the Fibonacci ratios.
• To draw the Fib Extensions, you need a trend and two swing points.
• Extension levels are price zones where the trend is likely to reverse.
• You can use the Fibonacci extensions as price targets. Profit-taking can include various sell orders spread across different Fibonacci extensions.
• You can use multiple orders based on Fibonacci Extension levels, but you should remember that these levels indicate a zone of support/resistance, not exact points.
• False signals can always appear.
• Prefer to apply the Fibonacci Extensions tool in higher timeframes, and wait for an official closing price.
• Traders should use Fibonacci Extensions in combination with another indicator or a continuation/reversal pattern in a higher timeframe.
■ Introduction to Fibonacci Extensions
G.P. for TradingFibonacci.com (c)
More: » Trading Using Phi and the Fibonacci Numbers | » Fibonacci Retracement Tool | » Combining Fibonacci with Support & Resistance | » Combining Fibonacci with Major Technical Analysis Tools | »
MT4 / MT5 Fibonacci Indicators | {"url":"https://tradingfibonacci.com/index.php/fibonacci-trading/introduction-to-fibonacci-extensions","timestamp":"2024-11-04T13:24:59Z","content_type":"text/html","content_length":"66959","record_id":"<urn:uuid:1bbad214-e040-4a46-8f5b-9a5961b1ba36>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00365.warc.gz"} |
$60,000 A Year is How Much an Hour? - Aimingthedreams
How much is $60,000 a year per hour? Let me tell you, and it’s a lot! But, it’s so much money that most people can live comfortably. So, in this blog post, we will discuss how much it converts into
an hour before and after tax.
Disclaimer: This post contains affiliate links, meaning if you sign up through my link, I may get compensated at no extra cost. For full disclosure, read here.
How Much Is $60,000 A Year Per Hour?
As we know, there are 52 weeks and 260 working days a year, giving us an hourly rate of $28.85 per hour. Assuming your work day is 8 hours, the working hours per year come out to be 2080. Here is the
breakdown of hours for $60000 per year income.
60000 ÷ 2080 hours = $28.85 hourly
28.85 × 8 = $230.8 per day
$60000 ÷ 52 = $1153.84 weekly
$1153.84 × 2 = $2307.69 biweekly
$60000 ÷ 12 = $5000 monthly
How Much Is $60,000 A Year Per Hour, including paid time off?
Including paid time off, $60,000 annually comes to $31.25 an hour. It is based on 52 weeks of work, including two weeks of vacation. If you get ten days of federal days paid and two weeks of PTO,
your working weeks will reduce to 48 weeks. To calculate the hours worked now, t will be
48 × 5= 240 actual working days
240 days × 8 = 1920 hours a year
$60000 ÷ 1920 = 31.25
After calculating the federal days off and PTO, the actual wage will be $31.25/ hour. It depends on the PTO you get where you work. Subtract the PTO from working days and calculate the actual pay per
$60000 is how much biweekly?
$60,000 a year is $1153.84 a week. This comes out to $230.77 per day.
$60,000 ÷ 52 = $1153.84 weekly
$1153.84 ÷ = $230.77 per day
How Much Is $60,000 monthly?
$60,000 a year is $5000 per month.
$60,000 ÷ 12 = $5000 monthly
If you calculate the after-tax or take-home income, it can come out to be 45600 to 48500, depending on the taxes in your state.
How much is $60000 a year after taxes?
The answer to this question will depend on the tax bracket that you are in. Different states have different tax rates.
For example, if you are in the 22% tax bracket, $60,000 a year will be $46,800 after taxes.
$60,000 × 0.22 = $13,200 in taxes
$60,000 – $13,200 = $46,800 after taxes
If you are in the 24% tax bracket, $60,000 yearly will be $45,600 after taxes.
$60,000 × 0.24 = $14,400 in taxes
$60,000 – $14,400 = $45,600 after taxes
As you can see, $60,000 a year is a significant amount. However, you could easily make $60,000 per year with the right skills and knowledge.
Create a budget for $60000 a month
To live comfortably on $60,000 a year, you need to have a clear budget. This budget should include your income, expenses, and savings goals. Here is an example of what your budget might look like:
Income: $60,000 per year ($5000 per month). If we consider after-tax income to be $46,800 per year and $3900 per month, The monthly Expenses should be:
• Housing: $ 1300 per month
• Savings: 20% of $3900= $870 per month
• Utilities: $ 150
• Food: $500 per month
• Transportation: $200 per month
• Car payment: $200
• Car insurance: $100
• Entertainment: $100 per month
• Internet : $50
• Cellphone : $100
Total monthly expenses: $3570
Total monthly income: $3900
Monthly surplus: $330
With a monthly surplus of $330, you would have $3960 annually to save or invest. This is a great way to build your wealth to make $60,000 an hour.
You can create a tighter budget to suit your needs. Or you can also look for ways to make more money.
What can you afford with $60000 a year?
If you make $60,000 a year, you can afford a lot! You can easily afford a comfortable lifestyle, including:
A nice home:
With $60,000 a year, you can afford a mortgage of $1200 – $1300 per month. This will leave you with plenty of money for other expenses.
A new car:
With $60,000 a year, you can afford a monthly car payment of $200.
A vacation:
With $60,000 a year, you can easily afford the vacation of your dreams. You can even save money for future holidays.
It shows that this amount is a good salary that can let you live comfortably and happily. But, of course, you must pay attention to your budget and spending habits.
How to earn $60,000 per year?
You can earn $60000 annually by working hard and being smart with your money. Here are some tips to help you make $60,000 per year
• Get a good job that pays $60000 per year
• Work hard at your job and aim for promotions
• Be frugal with your spending and save as much money as possible
• Invest your money wisely so that it can grow over time
• Look for ways to earn extra income through side hustles or investments
If you follow these tips, you can easily earn $60,000 per year. Just remember to be disciplined with your money and always aim to improve your financial situation.
With a little hard work and thoughtful financial planning, earning $60,000 per year is achievable.
Moreover, with the correct mindset and financial decisions, you can make a good living and invest or save money. | {"url":"https://aimingthedreams.com/2022/08/12/60000-a-year-is-how-much-an-hour/","timestamp":"2024-11-07T13:54:58Z","content_type":"text/html","content_length":"92412","record_id":"<urn:uuid:443fbdbd-370d-49d2-8d1d-ea0fc1d4f56f>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00476.warc.gz"} |
Representative Curves
curveRep {Hmisc} R Documentation
Representative Curves
curveRep finds representative curves from a relatively large collection of curves. The curves usually represent time-response profiles as in serial (longitudinal or repeated) data with possibly
unequal time points and greatly varying sample sizes per subject. After excluding records containing missing x or y, records are first stratified into kn groups having similar sample sizes per curve
(subject). Within these strata, curves are next stratified according to the distribution of x points per curve (typically measurement times per subject). The clara clustering/partitioning function is
used to do this, clustering on one, two, or three x characteristics depending on the minimum sample size in the current interval of sample size. If the interval has a minimum number of unique values
of one, clustering is done on the single x values. If the minimum number of unique x values is two, clustering is done to create groups that are similar on both min(x) and max(x). For groups
containing no fewer than three unique x values, clustering is done on the trio of values min(x), max(x), and the longest gap between any successive x. Then within sample size and x distribution
strata, clustering of time-response profiles is based on p values of y all evaluated at the same p equally-spaced x's within the stratum. An option allows per-curve data to be smoothed with lowess
before proceeding. Outer x values are taken as extremes of x across all curves within the stratum. Linear interpolation within curves is used to estimate y at the grid of x's. For curves within the
stratum that do not extend to the most extreme x values in that stratum, extrapolation uses flat lines from the observed extremes in the curve unless extrap=TRUE. The p y values are clustered using
print and plot methods show results. By specifying an auxiliary idcol variable to plot, other variables such as treatment may be depicted to allow the analyst to determine for example whether
subjects on different treatments are assigned to different time-response profiles. To write the frequencies of a variable such as treatment in the upper left corner of each panel (instead of the
grand total number of clusters in that panel), specify freq.
curveSmooth takes a set of curves and smooths them using lowess. If the number of unique x points in a curve is less than p, the smooth is evaluated at the unique x values. Otherwise it is evaluated
at an equally spaced set of x points over the observed range. If fewer than 3 unique x values are in a curve, those points are used and smoothing is not done.
curveRep(x, y, id, kn = 5, kxdist = 5, k = 5, p = 5,
force1 = TRUE, metric = c("euclidean", "manhattan"),
smooth=FALSE, extrap=FALSE, pr=FALSE)
## S3 method for class 'curveRep'
print(x, ...)
## S3 method for class 'curveRep'
plot(x, which=1:length(res),
m=NULL, probs=c(.5, .25, .75), nx=NULL, fill=TRUE,
idcol=NULL, freq=NULL, plotfreq=FALSE,
xlim=range(x), ylim=range(y),
xlab='x', ylab='y', colorfreq=FALSE, ...)
curveSmooth(x, y, id, p=NULL, pr=TRUE)
x a numeric vector, typically measurement times. For plot.curveRep is an object created by curveRep.
y a numeric vector of response values
id a vector of curve (subject) identifiers, the same length as x and y
kn number of curve sample size groups to construct. curveRep tries to divide the data into equal numbers of curves across sample size intervals.
kxdist maximum number of x-distribution clusters to derive using clara
k maximum number of x-y profile clusters to derive using clara
number of x points at which to interpolate y for profile clustering. For curveSmooth is the number of equally spaced points at which to evaluate the lowess smooth, and if p is omitted
p the smooth is evaluated at the original x values (which will allow curveRep to still know the x distribution
force1 By default if any curves have only one point, all curves consisting of one point will be placed in a separate stratum. To prevent this separation, set force1 = FALSE.
metric see clara
By default, linear interpolation is used on raw data to obtain y values to cluster to determine x-y profiles. Specify smooth = TRUE to replace observed points with lowess before
smooth computing y points on the grid. Also, when smooth is used, it may be desirable to use extrap=TRUE.
extrap set to TRUE to use linear extrapolation to evaluate y points for x-y clustering. Not recommended unless smoothing has been or is being done.
pr set to TRUE to print progress notes
which an integer vector specifying which sample size intervals to plot. Must be specified if method='lattice' and must be a single number in that case.
The default makes individual plots of possibly all x-distribution by sample size by cluster combinations. Fewer may be plotted by specifying which. Specify method='lattice' to show a
method lattice xyplot of a single sample size interval, with x distributions going across and clusters going down. To not plot but instead return a data frame for a single sample size interval,
specify method='data'
the number of curves in a cluster to randomly sample if there are more than m in a cluster. Default is to draw all curves in a cluster. For method = "lattice" you can specify m =
m "quantiles" to use the xYplot function to show quantiles of y as a function of x, with the quantiles specified by the probs argument. This cannot be used to draw a group containing n =
nx applies if m = "quantiles". See xYplot.
probs 3-vector of probabilities with the central quantile first. Default uses quartiles.
for method = "all", by default if a sample size x-distribution stratum did not have enough curves to stratify into k x-y profiles, empty graphs are drawn so that a matrix of graphs will
fill have the next row starting with a different sample size range or x-distribution. See the example below.
idcol a named vector to be used as a table lookup for color assignments (does not apply when m = "quantile"). The names of this vector are curve ids and the values are color names or numbers.
freq a named vector to be used as a table lookup for a grouping variable such as treatment. The names are curve ids and values are any values useful for grouping in a frequency tabulation.
set to TRUE to plot the frequencies from the freq variable as horizontal bars instead of printing them. Applies only to method = "lattice". By default the largest bar is 0.1 times the
plotfreq length of a panel's x-axis. Specify plotfreq = 0.5 for example to make the longest bar half this long.
colorfreq set to TRUE to color the frequencies printed by plotfreq using the colors provided by idcol.
xlim, ylim, plotting parameters. Default ranges are the ranges in the entire set of raw data given to curveRep.
xlab, ylab
... arguments passed to other functions.
In the graph titles for the default graphic output, n refers to the minimum sample size, x refers to the sequential x-distribution cluster, and c refers to the sequential x-y profile cluster. Graphs
from method = "lattice" are produced by xyplot and in the panel titles distribution refers to the x-distribution stratum and cluster refers to the x-y profile cluster.
a list of class "curveRep" with the following elements
res a hierarchical list first split by sample size intervals, then by x distribution clusters, then containing a vector of cluster numbers with id values as a names attribute
ns a table of frequencies of sample sizes per curve after removing NAs
nomit total number of records excluded due to NAs
missfreq a table of frequencies of number of NAs excluded per curve
ncuts cut points for sample size intervals
kn number of sample size intervals
kxdist number of clusters on x distribution
k number of clusters of curves within sample size and distribution groups
p number of points at which to evaluate each curve for clustering
id input data after removing NAs
curveSmooth returns a list with elements x,y,id.
The references describe other methods for deriving representative curves, but those methods were not used here. The last reference which used a cluster analysis on principal components motivated
curveRep however. The kml package does k-means clustering of longitudinal data with imputation.
Frank Harrell
Department of Biostatistics
Vanderbilt University
Segal M. (1994): Representative curves for longitudinal data via regression trees. J Comp Graph Stat 3:214-233.
Jones MC, Rice JA (1992): Displaying the important features of large collections of similar curves. Am Statistician 46:140-145.
Zheng X, Simpson JA, et al (2005): Data from a study of effectiveness suggested potential prognostic factors related to the patterns of shoulder pain. J Clin Epi 58:823-830.
See Also
## Not run:
# Simulate 200 curves with per-curve sample sizes ranging from 1 to 10
# Make curves with odd-numbered IDs have an x-distribution that is random
# uniform [0,1] and those with even-numbered IDs have an x-dist. that is
# half as wide but still centered at 0.5. Shift y values higher with
# increasing IDs
N <- 200
nc <- sample(1:10, N, TRUE)
id <- rep(1:N, nc)
x <- y <- id
for(i in 1:N) {
x[id==i] <- if(i %% 2) runif(nc[i]) else runif(nc[i], c(.25, .75))
y[id==i] <- i + 10*(x[id==i] - .5) + runif(nc[i], -10, 10)
w <- curveRep(x, y, id, kxdist=2, p=10)
par(ask=TRUE, mfrow=c(4,5))
plot(w) # show everything, profiles going across
plot(w,1) # show n=1 results
# Use a color assignment table, assigning low curves to green and
# high to red. Unique curve (subject) IDs are the names of the vector.
cols <- c(rep('green', N/2), rep('red', N/2))
names(cols) <- as.character(1:N)
plot(w, 3, idcol=cols)
par(ask=FALSE, mfrow=c(1,1))
plot(w, 1, 'lattice') # show n=1 results
plot(w, 3, 'lattice') # show n=4-5 results
plot(w, 3, 'lattice', idcol=cols) # same but different color mapping
plot(w, 3, 'lattice', m=1) # show a single "representative" curve
# Show median, 10th, and 90th percentiles of supposedly representative curves
plot(w, 3, 'lattice', m='quantiles', probs=c(.5,.1,.9))
# Same plot but with much less grouping of x variable
plot(w, 3, 'lattice', m='quantiles', probs=c(.5,.1,.9), nx=2)
# Use ggplot2 for one sample size interval
z <- plot(w, 2, 'data')
ggplot(z, aes(x, y, color=curve)) + geom_line() +
facet_grid(distribution ~ cluster) +
theme(legend.position='none') +
# Smooth data before profiling. This allows later plotting to plot
# smoothed representative curves rather than raw curves (which
# specifying smooth=TRUE to curveRep would do, if curveSmooth was not used)
d <- curveSmooth(x, y, id)
w <- with(d, curveRep(x, y, id))
# Example to show that curveRep can cluster profiles correctly when
# there is no noise. In the data there are four profiles - flat, flat
# at a higher mean y, linearly increasing then flat, and flat at the
# first height except for a sharp triangular peak
x <- 0:100
m <- length(x)
profile <- matrix(NA, nrow=m, ncol=4)
profile[,1] <- rep(0, m)
profile[,2] <- rep(3, m)
profile[,3] <- c(0:3, rep(3, m-4))
profile[,4] <- c(0,1,3,1,rep(0,m-4))
col <- c('black','blue','green','red')
matplot(x, profile, type='l', col=col)
xeval <- seq(0, 100, length.out=5)
s <- x
matplot(x[s], profile[s,], type='l', col=col)
id <- rep(1:100, each=m)
X <- Y <- id
cols <- character(100)
names(cols) <- as.character(1:100)
for(i in 1:100) {
s <- id==i
X[s] <- x
j <- sample(1:4,1)
Y[s] <- profile[,j]
cols[i] <- col[j]
yl <- c(-1,4)
w <- curveRep(X, Y, id, kn=1, kxdist=1, k=4)
plot(w, 1, 'lattice', idcol=cols, ylim=yl)
# Found 4 clusters but two have same profile
w <- curveRep(X, Y, id, kn=1, kxdist=1, k=3)
plot(w, 1, 'lattice', idcol=cols, freq=cols, plotfreq=TRUE, ylim=yl)
# Incorrectly combined black and red because default value p=5 did
# not result in different profiles at x=xeval
w <- curveRep(X, Y, id, kn=1, kxdist=1, k=4, p=40)
plot(w, 1, 'lattice', idcol=cols, ylim=yl)
# Found correct clusters because evaluated curves at 40 equally
# spaced points and could find the sharp triangular peak in profile 4
## End(Not run)
version 5.1-3 | {"url":"https://search.r-project.org/CRAN/refmans/Hmisc/html/curveRep.html","timestamp":"2024-11-13T14:23:00Z","content_type":"text/html","content_length":"18466","record_id":"<urn:uuid:b65d5d63-ac9c-476e-ad24-6a831b6702ac>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00335.warc.gz"} |
Category:Specular Reflections
This is a read-only mirror of pymolwiki.org
Category:Specular Reflections
Jump to navigation Jump to search
Specular reflection is the perfect, mirror-like reflection of light (or sometimes other kinds of wave) from a surface, in which light from a single incoming direction (a ray) is reflected into a
single outgoing direction^[1].
Here you can find settings and information about controlling specular reflections in PyMOL. Specular reflections can make a surface look smooth like thing plastic or metal/glass.
Pages in category "Specular Reflections"
The following 6 pages are in this category, out of 6 total. | {"url":"https://wiki.pymol.org/index.php/Category:Specular_Reflections","timestamp":"2024-11-05T12:03:44Z","content_type":"text/html","content_length":"18719","record_id":"<urn:uuid:481061b3-5a14-4fdb-97fd-9dc63534d386>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00635.warc.gz"} |
Is the number 12.38 X 10^2 in scientific notation and how do you know? | Socratic
Is the number #12.38 X 10^2# in scientific notation and how do you know?
2 Answers
Technically $12.38 \times {10}^{2}$ is in scientific notation ; any number expressed in the form $a \times {10}^{b}$ is in scientific notation.
However, it is not in normalized scientific notation . In normalized scientific notation, there is exactly 1 non-zero digit to the left of the decimal point in the $a$ term (with an exception being
made for zero which in scientific notation is written as $0.0 \times {10}^{1}$).
In normalized scientific notation, your original value would be written as $1.238 \times {10}^{3}$
It's not in correct scientic notation.
The correct notation should be $1.238 \cdot {10}^{3}$
The rule is that the number before the $10$power must be from $1$ up to (but not including) $10$, so you allways have just one (non-zero) digit before the decimal point.
The exception is when you have a table of values. In that case you use the same $10$power throughout (example: a table of specific masses for different materials).
Impact of this question
3345 views around the world | {"url":"https://socratic.org/questions/is-the-number-12-38-x-10-2-in-scientific-notation-and-how-do-you-know#133549","timestamp":"2024-11-14T11:24:17Z","content_type":"text/html","content_length":"35361","record_id":"<urn:uuid:cab26ec6-7230-451a-b3c4-5fbabff1a4d7>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00847.warc.gz"} |
Statistics/Summary/Variance - Wikibooks, open books for an open world
Variance and Standard Deviation
Probability density function for the normal distribution. The red line is the standard normal distribution.
When describing data it is helpful (and in some cases necessary) to determine the spread of a distribution. One way of measuring this spread is by calculating the variance or the standard deviation
of the data.
In describing a complete population, the data represents all the elements of the population. As a measure of the "spread" in the population one wants to know a measure of the possible distances
between the data and the population mean. There are several options to do so. One is to measure the average absolute value of the deviations. Another, called the variance, measures the average square
of these deviations.
A clear distinction should be made between dealing with the population or with a sample from it. When dealing with the complete population the (population) variance is a constant, a parameter which
helps to describe the population. When dealing with a sample from the population the (sample) variance is actually a random variable, whose value differs from sample to sample. Its value is only of
interest as an estimate for the population variance.
Population variance and standard deviation
Let the population consist of the N elements x[1],...,x[N]. The (population) mean is:
${\displaystyle \mu ={\frac {1}{N}}\sum _{i=1}^{N}x_{i}}$ .
The (population) variance σ^2 is the average of the squared deviations from the mean or (x[i] - μ)^2 - the square of the value's distance from the distribution's mean.
${\displaystyle \sigma ^{2}={\frac {1}{N}}\sum _{i=1}^{N}(x_{i}-\mu )^{2}}$ .
Because of the squaring the variance is not directly comparable with the mean and the data themselves. The square root of the variance is called the Standard Deviation σ. Note that σ is the root mean
squared of differences between the data points and the average.
Sample variance and standard deviation
Let the sample consist of the n elements x[1],...,x[n], taken from the population. The (sample) mean is:
${\displaystyle {\bar {x}}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}}$ .
The sample mean serves as an estimate for the population mean μ.
The (sample) variance s^2 is a kind of average of the squared deviations from the (sample) mean:
${\displaystyle s^{2}={\frac {1}{n-1}}\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}}$ .
Also for the sample we take the square root to obtain the (sample) standard deviation s
A common question at this point is "why do we square the numerator?" One answer is: to get rid of the negative signs. Numbers are going to fall above and below the mean and, since the variance is
looking for distance, it would be counterproductive if those distances factored each other out.
When rolling a fair die, the population consists of the 6 possible outcomes 1 to 6. A sample may consist instead of the outcomes of 1000 rolls of the die.
The population mean is:
${\displaystyle \mu ={\frac {1}{6}}(1+2+3+4+5+6)=3.5}$ ,
and the population variance:
${\displaystyle \sigma ^{2}={\frac {1}{6}}\sum _{i=1}^{n}(i-3.5)^{2}={\frac {1}{6}}(6.25+2.25+0.25+0.25+2.25+6.25)={\frac {35}{12}}\approx 2.917}$
The population standard deviation is:
${\displaystyle \sigma ={\sqrt {\frac {35}{12}}}\approx 1.708}$ .
Notice how this standard deviation is somewhere in between the possible deviations.
So if we were working with one six-sided die: X = {1, 2, 3, 4, 5, 6}, then σ^2 = 2.917. We will talk more about why this is different later on, but for the moment assume that you should use the
equation for the sample variance unless you see something that would indicate otherwise.
Note that none of the above formulae are ideal when calculating the estimate and they all introduce rounding errors. Specialized statistical software packages use more complicated logarithms that
take a second pass of the data in order to correct for these errors. Therefore, if it matters that your estimate of standard deviation is accurate, specialized software should be used. If you are
using non-specialized software, such as some popular spreadsheet packages, you should find out how the software does the calculations and not just assume that a sophisticated algorithm has been
For Normal Distributions
The empirical rule states that approximately 68 percent of the data in a normally distributed dataset is contained within one standard deviation of the mean, approximately 95 percent of the data is
contained within 2 standard deviations, and approximately 99.7 percent of the data falls within 3 standard deviations.
As an example, the verbal or math portion of the SAT has a mean of 500 and a standard deviation of 100. This means that 68% of test-takers scored between 400 and 600, 95% of test takers scored
between 300 and 700, and 99.7% of test-takers scored between 200 and 800 assuming a completely normal distribution (which isn't quite the case, but it makes a good approximation).
For a normal distribution the relationship between the standard deviation and the interquartile range is roughly: SD = IQR/1.35.
For data that are non-normal, the standard deviation can be a terrible estimator of scale. For example, in the presence of a single outlier, the standard deviation can grossly overestimate the
variability of the data. The result is that confidence intervals are too wide and hypothesis tests lack power. In some (or most) fields, it is uncommon for data to be normally distributed and
outliers are common.
One robust estimator of scale is the "average absolute deviation", or aad. As the name implies, the mean of the absolute deviations about some estimate of location is used. This method of estimation
of scale has the advantage that the contribution of outliers is not squared, as it is in the standard deviation, and therefore outliers contribute less to the estimate. This method has the
disadvantage that a single large outlier can completely overwhelm the estimate of scale and give a misleading description of the spread of the data.
Another robust estimator of scale is the "median absolute deviation", or mad. As the name implies, the estimate is calculated as the median of the absolute deviation from an estimate of location.
Often, the median of the data is used as the estimate of location, but it is not necessary that this be so. Note that if the data are non-normal, the mean is unlikely to be a good estimate of
It is necessary to scale both of these estimators in order for them to be comparable with the standard deviation when the data are normally distributed. It is typical for the terms aad and mad to be
used to refer to the scaled version. The unscaled versions are rarely used. | {"url":"https://en.m.wikibooks.org/wiki/Statistics/Summary/Variance","timestamp":"2024-11-15T03:38:08Z","content_type":"text/html","content_length":"52151","record_id":"<urn:uuid:486a1fa1-bc9d-4599-91f5-9422af8910ae>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00084.warc.gz"} |
DEPARTMENT OF PHYSICS / HIP JOINT COLLOQUIA / SEMINARS 2015
• Tuesday 20 January 2015 at 10.15 in A315: Alfonso Ramallo (Santiago de Compostela)
Cold holographic matter
Abstract: After a brief review of the Landau Fermi liquid theory, we will discuss a holographic modeling of cold matter in terms of D-brane intersections. We will analyze the different regimes of
these systems as a function of the temperature and, in particular, their zero-sound and diffusion modes.
• Tuesday 3 February 2015 at 10.15 in A315: Sven Heinemeyer (Santander)
Higgs and Supersymmetry
Abstract: The particle discovered in the Higgs boson searches at the LHC in 2012 can be interpreted as the lightest Higgs boson of the Minimal Supersymmetric Standard Model (MSSM), in perfect
agreement with predictions {\em before} the discovery. We briefly review the relevant phenomenology of the Higgs sector of the MSSM and the implication of the Higgs discovery for the model. We
discuss possibilities for searches for the manifestation of Supersymmetry in the Higgs sector, including deviations from the Standard Model predictions as well as the search for additional Higgs
• Wednesday 4 February 2015 at 10.15 in A315: Katherine Freese (Nordita, Stockholm)
The dark side of the Universe
Abstract: What is the Universe made of? This question is the longest outstanding problem in all of modern physics, and it is the most important research topic in cosmology and particle physics
today. The reason for the excitement is clear: the bulk of the mass in the Universe consists of a new kind of dark matter particle, and most of us believe its discovery is imminent. I’ll start by
discussing the evidence for the existence of dark matter in galaxies, and then show how it fits into a big picture of the Universe containing 5% atoms, 25% dark matter, and 70% dark energy.
Probably the best dark matter candidates are WIMPs (Weakly Interacting Massive Particles). There are three approaches to experimental searches for WIMPS: at the Large Hadron Collider at CERN in
Geneva; in underground laboratory experiments; and with astrophysical searches for dark matter annihilation products. Currently there are claimed detections in multiple experiments — but they
cannot possibly all be right. Excitement is building but the answer is still unclear. At the end of the talk I’ll turn to dark energy and its effect on the fate of the Universe.
• Thursday 5 February 2015 at 10.15 in A315: James Pinfold (Edmonton)
The MoEDAL Experiment at the LHC – a New Light on the High Energy Frontier
Abstract: In 2010 the MoEDAL experiment at the Large Hadron Collider (LHC) was unanimously approved by CERN’s Research Board to start data taking in 2015. MoEDAL is a pioneering experiment
designed to search for highly ionizing avatars of new physics such as magnetic monopoles or massive (pseudo-)stable charged particles. Its groundbreaking physics program defines over 30 scenarios
that yield potentially revolutionary insights into such foundational questions as: are there extra dimensions or new symmetries; what is the mechanism for the generation of mass; does magnetic
charge exist; what is the nature of dark matter; and, how did the big-bang develop. MoEDAL’s purpose is to meet such far-reaching challenges at the frontier of the field.The innovative MoEDAL
detector employs unconventional methodologies tuned to the prospect of discovery physics. The largely passive MoEDAL detector, deployed at Point 8 on the LHC ring, has a dual nature. First, it
acts like a giant camera, comprised of nuclear track detectors – analyzed offline by ultra fast scanning microscopes – sensitive only to new physics. Second, it is uniquely able to trap the
particle messengers of physics beyond the Standard Model for further study. MoEDAL’s radiation environment is monitored by a state-of-the-art real-time TimePix pixel detector array. I shall also
briefly discuss a new proposal to include a new active MoEDAL sub-detector to search for millicharged particles.
• Tuesday 10 February 2015 at 10.15 in A315: K. Kajantie (Helsinki)
Phases and phase transitions of hot QCD with lots of massless quarks
Abstract: For this talk QCD has Nc=3 colors and we want to study what happens when the number of quarks, assumed to be massless, grows, Nf=0,3,6,… We pretend that these numbers are so big that we
can use holography, solve the equation of state from a 5-dimensional gravity model. The chirally symmetric quark-gluon plasma phase then is straightforward: temperature and entropy are those of a
classical gravity black hole solution. However, at low T there are hadrons, but there is no classical solution giving their hadron gas thermodynamics – there is a T=0 solution from which masses
of low-lying states can be computed. So we approach the problem phenomenologically and put in an ansatz for the hadron spectrum. Because of chiral symmetry breaking there at low T are at least Nf
^2 massless Goldstone bosons, further there is a partly calculable Hagedorn spectrum of massive states. We try to see how the properties of the chiral phase transition constrain the hadron gas
model and show explicitly how the equation of state looks like for a third order phase transition. These phenomenological constructions can be interpreted as 1-loop or even stringy corrections to
the classical gravity dual. The final word on phase transition properties lies with lattice Monte Carlo, but imposing zero quark masses is very difficult on the lattice.
• Tuesday 17 February 2015 at 10.15 in A315: Jürgen Reuter (Desy)
Electroweak Vector Boson Scattering at the LHC after the Higgs discovery
Abstract: The Large Hadron Collider (LHC) has greatly enlarged our understanding of electroweak symmetry breaking by the discovery of a Higgs boson compatible with the Standard Model (SM) of
Particle Physics. The LHC was guaranteed to contribute to this understanding before it was switched on because the process of scattering of the longitudinal modes of the electroweak gauge bosons
would have entered a strong interaction regime in the LHC energy reach without the presence of a light Higgs particle. There was for the first time evidence of this scattering process in the 2012
LHC data. Delicate cancellations in its otherwise rising cross section makes this process an ideal telescope for searches of physics beyond the SM (BSM). Some prime examples will be shown as well
as methods to theoretically consistently describe these searches in a model-independent way.
• Wednesday 18 February 2015 at 10.15 in A315: Timo Alho (Reykjavik)
Finite temperature monopole correlations in holographically flavored liquids
Abstract: We study the phase structure of a (2 + 1) -dimensional many-body system with a global U(1) -current via top-down holography, at finite temperature. The non-perturbative infrared
behaviour of such a system can be probed by inserting a monopole operator coupled to the U(1) -current to the field theory. Holographically, the operator is dual to a bulk magnetic monopole. The
specific model is obtained by adding a probe D3 brane to the D3D5 intersection, such that the probe D3 ends on the D5, sourcing a magnetic charge. At finite temperature we find a series of
transitions between various stable and metastable phases.
• Wednesday 4 March 2015 at 16.00 (note time!) in A315: Matti Kalliokoski (CERN)
Beam Loss Monitoring and Machine Protection of the LHC for Run 2
Abstract: During Long Shutdown 1 (LS1) a series of modifications and updates to the LHC was made. These were done to allow the accelerator to reach 7 TeV and to improve the machine safety. One
main part of the machine protection of the LHC is the Beam Loss Monitoring system (BLM). It consists of about 4000 monitors that have the task to prevent the superconducting magnets from
quenching and protect the machine components from damage, as a result of critical beam losses. In this talk, modifications to the LHC Beam Instrumentation during the LS1, especially to the BLM
system, are discussed.
• Tuesday 10 March 2015 at 10.15 in A315: Olli Taanila (Nikhef)
Analytical models of holographic thermalization
Abstract: AdS/CFT is one of the very few tools we have to study the out-of-equilibrium dynamics of strongly coupled systems. I present several different analytical models of gravitational
collapse which are dual to the thermalization of a strongly coupled field theory. The time evolution of several different observables are computed in these backgrounds, such as two point
functions and entaglement entropy.
• Friday 20 March 2015 at 10.15 in A315: Miklos Långvik (Marseille)
Applications of conformal SU(2,2) transformations to spinfoams and spin-networks
Abstract: We attempt the construction of the cotangent bundle of SU(2,2) as a symplectic manifold using an analogy to previous approaches. However, the manifold ends up being a Lagrangian
submanifold. We then study and discuss different applications of it to spinfoam models. Most notably, the concept of a particle in a Poincaré invariant spacetime is discussed and the geometrical
interpretation of our construction for full SU(2,2) is clarified. The talk will include a brief introduction to spinfoam models and especially a short review of how fermions enter spinfoams.
• Thursday 26 March 2015 at 10.15 in A315: Laszlo Jenkovszky (Bogolyubov Institute for Theoretical Physics, National Ac. Sc. of Ukraine)
Regge factorization and diffraction dissociation at the LHC
Abstract: At the LHC, for the first time the nearly forward elastic scattering amplitude is completely determined by vacuum (Pomeron) exchange in the t-channel, making possible the full use of
Regge factorization. Based on a simple Regge (Pomeron) pole exchange, different diffractive processes, including single, double proton dissociation as well as central production are related and
compared with the existing LHC measurements.
• Tuesday 7 April 2015 at 10.15 in A315: A. Rebhan (Wien)
Top-down holographic glueballs and their decay patterns
Abstract: I report recent results on the spectrum and the decay patterns of scalar and tensor glueballs in the top-down holographic Witten-Sakai Sugimoto model. This model, which has only one
free dimensionless parameter, gives semi-quantitative predictions for the vector meson spectrum, their decay widths, and also a gluon condensate in agreement with SVZ sum rules. The predictions
for glueball decay are compared with experimental data for some of the widely discussed gluon candidates in the meson spectrum.
• Thursday 16 April 2015 at 10.15 in A315: Giacomo Cacciapaglia (Lyon, IPN)
Unveiling the dynamics behind a composite Higgs
Abstract: I will present a novel approach to composite Higgs model building, based on a simple fundamental dynamics. It allows to describe in a unified way the Higgs as a pNGB and as a
Technicolor-like bound state. The status after the LHC Run I will be discussed, together with the interplay with Lattice calculations and possible avenues for model building.
• Thursday 23 April 2015 at 10.15 in A315: Jorma Louko (Nottingham)
Did the chicken survive the firewall?
Abstract: The forty-year debate about the final state of an evaporating black hole was recently reinvigorated by the suggestion that if unitarity is preserved, the black hole horizon develops a
“firewall” which a) affects strongly any matter that crosses it and b) erases any information that such matter may carry about the outside world.
We present evidence that neither a) nor b) is true for a firewall within flat spacetime quantum field theory, arguing that the same holds for sufficiently young gravitational firewalls. A
firewall’s prospective capability to resolve the information loss paradox must hinge on its detailed gravitational structure, presently poorly understood.
• Tuesday 28 April 2015 at 10.15 in A315: Dorota Sokolowska (Warsaw)
Dark matter in multi-scalar extensions of the Standard Model with discrete symmetries
Abstract: Multi-scalar extensions of the Standard Model can accommodate a viable Dark Matter candidate and modification of Higgs decay rates, particularly into two photons. One of the simplest
choices for the extended scalar sector is the Inert Doublet Model, i.e. the Standard Model with an additional inert scalar doublet. The IDM can be further extended by extra doublets or singlets,
which may modify both DM- and collider phenomenology.
In this talk I will discuss the the interplay between the LHC results for the decay of the SM-like Higgs into two photons and the properties of Dark Matter from Planck measurements. Constraints
for multi-scalar models obtained in this way are stronger or comparable to the constraints provided by direct and indirect detection experiments for low, medium and high Dark Matter mass regions.
• Tuesday 12 May 2015 at 10.15 in A315: Hans-Peter Nilles (Bonn)
Unification of Fundamental Interactions
Abstract: Symmetries have played a crucial role in the development of the standard model of particle physics. Moreover, they are believed to provide the key ingredients for a unified description
of all fundamental interactions. We review the arguments that favour the investigation of these mathematical structures and explain possible consequences for particle physics and cosmology.
• Tuesday 19 May 2015 at 10.15 in A315: Yago Bea (Santiago de Compostela)
Massive study of magnetized unquenched ABJM
Abstract: We study the physics of a magnetized probe D6-brane in the ABJM theory with unquenched massive flavor. We analyze the effect of the mass of the background branes in the magnetic
catalysis of chiral symmetry breaking, and also the effect of the magnetic field on the meson spectrum. Besides, we obtain a non-commutative version of the background via a TsT rotation, which
has an NSNS B2 field turned on, as an intermediate step towards a fully backreacted solution. In addition, we include new analytical solutions, perturbative in flavor, for the background
• Tuesday 2 June 2015 at 10.15 in A315: Stefan Pokorski (Warsaw)
Looking for hidden supersymmetry
Abstract: After a brief review of the state of supersymmetry after RUN I of the LHC, it will be argued that the supersymmetric electroweak sector may play the leading role in its discovery.
Experimental signatures of electroweakinos and the prospects for their discovery will be discussed.
• Tuesday 9 June 2015 at 10.15 in A315: Harvey Meyer (Mainz)
Real-time phenomena in finite-temperature QCD
Abstract: I present two topics in finite-temperature QCD: one concerns the pion quasiparticle and its dispersion relation in the low-temperature phase; the second concerns non-static screening
masses in the high-temperature phase, the corresponding perturbative predictions and the relation of these masses to transport properties and the dilepton rate.
• Thursday 11 June 2015 at 10.15 in A315: Carlos Hoyos (Oviedo)
Ward identities and transport in 2+1 dimensions
Abstract: The Hall viscosity of chiral superfluids is determined by the angular momentum density. One can understand and generalize this relation using Ward identities of the energy-momentum
tensor. Furthermore, the same identities connect viscosities and conductivities. A preliminary analysis in AdS/CFT reveals that the identities seem to be non-trivial in the sense that they cannot
be derived simply from the asymptotic expansion of the solutions.
• Tuesday 18 August 2015 at 13.15 in A315; Note time!: Jacob Bekenstein (Hebrew University of Jerusalem)
Playing with quantum gravity on the tabletop
Abstract: Many would class the quantum theory of gravity as the number-one open problem in theoretical physics. The subject is at least seventy years old, but all along it has been almost
exclusively a theoretical undertaking. After describing some currently planned experiments, I will focus on the quantum foam idea, and delineate the idea for a relatively simple tabletop
experiment to expose it. This experiment relies on instrumentation which is routine in quantum optics. I will also examine sources of noise and strategies for their control.
• Tuesday 8 September 2015 at 10.15 in A315: Robert Fleischer (Nikhef, Amsterdam)
In Pursuit of New Physics with B Decays: Theoretical Status and Prospects
Abstract: The exploration of B-meson decays has reached an unprecedented level of sophistication, with a phase of even much higher precision ahead of us thanks to the next run of the LHC and the
future era of Belle II and the LHCb upgrade. For many processes, the theoretical challenge in the quest to reveal possible footprints of physics beyond the Standard Model will be the control of
uncertainties from strong interactions. After a brief discussion of the global picture emerging from the LHC data, I will focus on the theoretical prospects and challenges for benchmark B decays
to search for new sources of CP violation, and highlight future opportunities to probe the Standard Model with strongly suppressed rare B decays.
• Thursday 10 September 2015 at 10.15 in A315: Aleksi Kurkela (CERN)
Hydrodynamisation in high energy nuclear collisions from QCD Lagrangian?
Abstract: Thermalisation, isotropisation and hydrodynamisation in high energy collisions of large nuclei has been studied in numerous models. I present a computation which is as close as possible
to a first principle one starting from the QCD Lagrangian.
• Thursday 10 September 2015 at 14.15 in E206: Yen Chin Ong (Nordita, Stockholm) Note place and time!
Hawking evaporation time scale of black holes in Anti-de Sitter spacetime
Abstract: If an absorbing boundary condition is imposed at infinity, an asymptotically Anti-de Sitter Schwarzschild black hole with a spherical horizon takes only a finite amount of time to
evaporate away even if its initial mass is arbitrarily large. In fact this is a rather generic property for an asymptotically locally AdS spacetime: regardless of their horizon topologies,
neutral AdS black holes in general relativity take about the same amount of time to evaporate down to the same size. We explain this surprising property in this talk.
• Tuesday 29 September 2015 at 10.15 in A315: Jose Miguel No (Sussex)
Probing the Electroweak Phase Transition at LHC and Beyond
Abstract: Uncovering the nature of the electroweak (EW) phase transition in the early Universe would be key to shed light on the possible origin of the cosmic matter-antimatter asymmetry. We
discuss various ways in which searches for new physics beyond the Standard model (SM) at LHC can be used to probe the nature of the EW phase transition, and their implications for the generation
of the baryon asymmetry of the Universe at the EW scale.
• Tuesday 6 October 2015 at 10.15 in A315: Jarno Rantaharju (Odense)
Lattice Four-Fermion Interactions for Beyond Standard Model Physics
Abstract: We present a study of a lattice model of chirally symmetric four-fermion interactions, the Nambu Jona-Lasinio model, with Wilson fermions. Four fermion operators are a necessary part of
many models of beyond Standard Model physics. In particular we are interested in technicolor models, where effective four-fermion operators are used to generate the standard model fermion masses.
In the ideal walking scenario, the same interaction is responsible for breaking the chiral symmetry in an otherwise conformal model. As a first step, we study the restoration and spontaneous
breaking of chiral symmetry in the lattice NJL model before adding a gauge interaction. We map the phase structure of the model and establish chiral symmetry breaking.
• Tuesday 13 October 2015 at 10.15 in A315: Alejandro Cabo (La Habana, Cuba)
Is a generalized NJL model the effective action of massless QCD?
Abstract: A local and gauge invariant alternative version of QCD for massive fermions introduced in previous works, is considered here to
propose a model which includes Nambu-Jona-Lasinio (NJL) terms in the action. The Lagrangian includes new vertices which at first sight look as breaking power counting renormalizability. However,
these terms also modify the quark propagators, to become more convergent at large momenta, thus indicating that the theory is renormalizable. Therefore, it follows the surprising conclusion that
the added NJL four fermion terms do not break renormalizability. The approach can also be interpreted as a slightly generalized renormalization procedure for massless QCD, which seems able to
incorporate the mass generating properties for the quarks of the NJL model, in a renormalizable way. It also seems to have opportunity to implement Fritzsch’s Democratic Symmetry description of
the quark mass hierarchy.
• Thursday 15 October 2015 at 10.15 in A315: Tommi Alanne (CP3-Origins, Odense)
Elementary Goldstone Higgs and raising the fundamental scale
Abstract: We study an extension of the scalar sector of the Standard Model (SM) where the observed Higgs is a pseudo-Goldstone boson (pGB) associated with the global symmetry breaking pattern SU
(4) to Sp(4). This particular breaking pattern is interesting, because depending on the embedding of the electroweak (EW) symmetry, the breaking of the global symmetry can either leave the full
EW sector intact, break the EW completely to electromagnetism, or more interestingly lead to something between these two extreme cases. In the unbroken case, the entire Higgs doublet can be
identified with four of the five Goldstone bosons of the global symmetry breaking.All of the different alignments of the EW symmetry are equivalent at the tree-level, but since the gauging of the
electroweak sector and introducing Yukawa terms for the SM fermions break the global symmetry of the scalar sector, quantum effects determine a preferred vacuum alignment.
In this talk, I will present the main results of our study and show that very slightly broken EW symmetry is preferred. Therefore, the observed Higgs boson is dominantly a pGB, and interestingly
the electroweak scale emerges due to the alignment of the EW sector in the global symmetry, whereas the fundamental scale of the spontaneous symmetry breaking is significantly higher.
• Tuesday 20 October 2015 at 10.15 in A315: Timo Alho (Reykjavik)
Geometric Algebra: a coordinate free formalism for inner product spaces
Abstract: Geometric Algebra (GA) is a mathematical system defined by imbuing the vectors of an inner product space directly with the Clifford algebra generated by the inner product, without
considering a separate representation of the algebra operating on the vectors. This simple change in point of view gives a system which generalizes exterior algebra, quaternions, spinors, and
many other algebras used in theoretical physics, while simplifying both concepts and calculations. This talk will open a brief course on the algebra and its extension to geometric calculus.
During the talk we will primarily give elementary details of the formalism, with more advanced topics to be handled later during the course. Some applications are pointed out for motivation.
• Thursday 22 October 2015 at 10.15 in A315: Timo Alho (Reykjavik)
Geometric Algebra: a coordinate free formalism for inner product spaces
Abstract: Continuation of the lecture on Tuesday.
• Tuesday 27 October 2015 at 10.15 in A315: Alexander Merle (MPI, Munich)
Sterile Neutrino Dark Matter: From Particle to Astrophysics and back
Abstract: In the absence of a clear WIMP signal, we should think about alternative candidates for Dark Matter. A very well motivated example is a (up to now hypothetical) sterile neutrino with a
mass of a few keV. In this talk, I will give an overview over the topic of keV sterile neutrino Dark Matter, thereby exploring all corners from Dark Matter production in the early Universe over
astrophysical bounds and neutrino phenomenology to particle physics model building aspects. While WIMPs are not dead yet, they have to learn how to live in the neighbourhood of serious
• Tuesday 3 November 2015 at 10.15 in A315: Lene Bryngemark (Lund)
Search for physics beyong the Standard Model using dijet distributions in ATLAS
Abstract: The LHC gives us access to the highest collider energies, at the highest intensities, providing a unique opportunity to thoroughly examine the constituents of matter and their
interactions at ever smaller distances and higher mass scales. Produced in the strong — or a new, previously unseen — interaction, jets probe the very energy frontier. With the recent increase in
LHC beam energy, ATLAS makes use of this sensitivity to make its first statements of what physics looks like in a new energy regime. In this presentation I show the results from both the 8 and
the more recent 13 TeV analysis of dijet mass and angular distributions.
• Thursday 5 November 2015 at 10.15 in A315: David Daverio (Geneva)
Large scale structure formation within a general relativistic framework
Abstract: One century after the conception of General Relativity (GR), there is still a large pool of predictions we still have difficulties to get. Indeed, to give only one example, large scale
structure formation is still simulated within a Newtonian framework. This framework, even well suited for cosmology as it can be understood as a week field quasi static approximation of GR, does
not take into account the propagating degree of freedom of GR and is not well suited to include relativistic sources such as neutrinos. Recently, the first code aiming to simulate large scale
structure within a general relativistic framework has been developed. This code, gevolution, solves GR in the weak field limit using the approximation scheme proposed in Adamek et al. 2014 and is
constructed on top of the framework LATfield2 which manages the particles and provides a scalable parallelisation allowing to run with lattice of 4096^3 cells with one particle per cell on 16k
processes. In this talk we will discuss the method used to develop this code and first results of cold dark matter simulation will be presented.
• Tuesday 10 November 2015 at 10.15 in A315: David Salek (Amsterdam)
Dark matter (and dark mediators) at the LHC
Abstract: The LHC results on dark matter from Run-1 were mostly interpreted in the framework of effective field theories. Simplified models involve new mediators between the Standard Model and
the Dark Sector and allow for richer phenomenology and more complex interpretations. Possible dark matter search strategies at the LHC in Run-2 will be discussed.
• Thursday 12 November 2015 at 14.15 in E207 (note place): Jaeyoung Park (Energy Matter Conversion Corporation, San Diego)
Polywell Fusion – Electric Fusion in a Magnetic Cusp
Abstract: Nuclear fusion power is considered the ultimate energy source because of its nearly inexhaustible supply of cheap fuels,
intrinsic safety, zero carbon emissions and lack of long-lived radioactive waste. In this talk, I will introduce the Polywell fusion concept that may offer a low cost and rapid development path
to power the world economically and sustainably. As conceived by Dr. Robert Bussard at Energy Matter Conversion Corporation (EMC2) in 1985, the Polywell fusion concept combines electric fusion
with magnetic cusp confinement. This allows the Polywell reactor to be small, stable, and highly efficient. Recently, EMC2 carried out an experiment that demonstrated dramatically improved
high-energy electron confinement in a magnetic cusp system operating at beta (=plasma pressure/magnetic field pressure) near 1. This result has significant implications for cusp related schemes
for producing controlled nuclear fusion power.
• Tuesday 17 November 2015 at 10.15 in A315: Georgios Itsios (Santiago de Compostela)
Exploring cold holographic matter
Abstract: In this talk we discuss aspects of cold matter using holographic techniques. Our holographic description is realized through a top-down approach, in which we consider D-brane
intersections of different dimensionalities. We will analyze several properties of these systems such as the speed of first sound, the diffusion constant and the speed of zero sound. We also
discuss the specific case of the D3-D5 intersection with a non-zero flux across the internal part of its worldvolume.
• Thursday 19 November 2015 at 10.15 in A315: Joonas Nättilä (Tuorla, Turku)
Equation of state for the dense matter inside neutron stars using thermonuclear explosions
Abstract: In my talk I will describe how observations of thermonuclear explosions on top of neutron stars ends up constraining the size of these ultra-compact objects. I will also show how we can
model these explosions and how the atmosphere of the star modifies the emerging spectrum. As it turns out, good understanding of the physics behind these powerful bursts is also crucial for
accurate mass and radius measurements. From the size measurements we can then obtain a parametrized equation of state of the cold dense matter by using Bayesian methods. This allows us to set
limits on some nuclear parameters and to constrain an empirical pressure-density relation for the dense neutron matter.
• Tuesday 24 November 2015 at 10.15 in A315: Jacopo Ghiglieri (Bern)
Gravitational wave background from Standard Model physics
Abstract: Any plasma in thermal equilibrium emits gravitational waves, caused by physical processes such as macroscopic hydrodynamic fluctuations and microscopic particle collisions. We will show
that, for the largest wavelengths, the emission rate is due to the former process and is proportional to the shear viscosity of the plasma. In the Standard Model at T > 160 GeV, the shear
viscosity is dominated by the most weakly interacting particles, right-handed leptons, and is relatively large. We estimate the order of magnitude of the corresponding spectrum of gravitational
waves. At smaller wavelengths the leading contribution is given by particle collisions, which we also estimate at leading logarithmic order. Even though at small frequencies (corresponding to the
sub-Hz range relevant for planned observatories such as eLISA) this SM background is tiny compared with that from non-equilibrium sources, we conclude that the total energy carried by the
high-frequency part of the spectrum is non-negligible if the production continues for a long time. Finally, we suggest that this may constrain (weakly) the highest temperature of the radiation
epoch. Observing the high-frequency part directly sets a very ambitious goal for future generations of GHz-range detectors.
• Thursday 26 November 2015 at 14.15 in E207 (note place!): Albert de Roeck (CERN)
The Large Hadron Collider: The Present and the Future.
• Tuesday 1 December 2015 at 10.15 in A315: Heribert Weigert (Cape Town)
QCD at small high energies: Wilson line correlators in the Color Glass Condensate and beyond
Abstract: Modern collider experiments use ever higher energies for many reasons, be it to study the QCD phase transition, to study the newest confirmed addition to the Standard Model or to search
for physics beyond the standard model. In all of his QCD plays a central role, be it the immediate area of study (as with the quark gluon plasma) or as the main limiting factor on precision (as
with standard model particle physics). Over the last decade it has been established that at small x the machinery and phenomenology of the Color Glass Condensate provide powerful tools to study
the energy dependence of cross sections for a wide range of observables. Its evolution equation, the JIMWLK equation has a structural analogue in jet evolution equations (the BMS equation and its
finite Nc generalizations) with a conformal map establishing the connection. The key ingredient that drives cross sections in all these cases are Wilson line correlators which also appear in GPDs
and in energy loss calculations aiming at studying the QCD as a medium. This is clearly too big a canvas to fill with paint completely with one talk, instead, I will content myself with the first
steps needed to expose these connections, starting from JIMWLK and the CGC, and show some ideas of how to study and analyze the underlying structures.
• Thursday 10 December 2015 at 14.15 in E206 (note place!): David Milstead (Stockholm)
A new high precision search for neutron-antineutron oscillations at the ESS
Abstract: In this talk I shall discuss how a search for neutron-antineutron oscillations can be carried out at the European Spallation Source. It will be shown how such oscillations can provide a
unique probe of some of the central questions in particle physics and cosmology: the energy scale and mechanism for baryon number violation, the origin of the baryon-antibaryon asymmetry of the
universe, and the mechanism for neutrino mass generation. An overview of the proposed experiment and its capability to a sensitivity to the oscillation probability, which is three orders of
magnitude greater than previously obtained, will be given. The international collaboration which has been formed to carry out the proposed work will also be described.
• Thursday 17 December 2015 at 10.15 in A315: Martin Krššák (Sao Paulo)
Teleparallel gravity and the role of inertia in theories of gravity
Abstract: The central role in general relativity is played by the principle of equivalence, which suggests that gravity and inertia are locally indistinguishable, and it is hardcoded into general
relativity through the use of the Levi-Civita geometry. In this talk, I will present an interesting work-around to this problem known as teleparallel gravity. It can be understood as a dual
theory to general relativity, where gravity is attributed to torsion of spacetime, rather than curvature. As it turns out, in this theory it is possible to separate (non-locally) the problematic
inertial contributions from gravity, and define the finite, purely gravitational, action. I will show that this subtraction of the inertial effects closely resembles holographic renormalization,
and discuss its applications in holography and modified gravity.
• Friday 18 December 2015 at 10.15 in A315: Ville Keränen (Oxford, Note day!)
Thermalization in the AdS/CFT duality
Abstract: We study examples of out of equilibrium systems in the context of the AdS/CFT duality. We attempt to draw general conclusions from these studies and in particular will highlight the
role of black hole quasinormal frequencies in determining the rate at which these systems approach thermal equilibrium. | {"url":"https://www.hip.fi/seminars/archive/2015-2/","timestamp":"2024-11-11T23:17:48Z","content_type":"text/html","content_length":"75173","record_id":"<urn:uuid:756be536-11c0-47ec-bed5-4a91749a001b>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00192.warc.gz"} |
Monday November 23, 2015
Chenqi Mou (School of Mathematics and Systems Science, Beihang University, China)
Sparse FGLM algorithms for solving polynomial systems
B013, 14:00
Groebner basis is an important tool in computational ideal theory, and the term ordering plays an important role in the theory of Groebner bases. In particular, the common strategy to solve a
polynomial system is to first compute the basis of the ideal defined by the system w.r.t. DRL, change its ordering to LEX, and perhaps further convert the LEX Groebner basis to triangular sets.
Given a zero-dimensional ideal I ⊂ 𝕂[x[1],...,x[n]] of degree D, the transformation of the ordering of its Groebner basis from DRL to LEX turns out to be the bottleneck of the whole solving process.
Thus it is of crucial importance to design efficient algorithms to perform the change of ordering.
In this talk we present several efficient methods for the change of ordering which take advantage of the sparsity of multiplication matrices in the classical FGLM algorithm. Combining all these
methods, we propose a deterministic top-level algorithm that automatically detects which method to use depending on the input. As a by-product, we have a fast implementation that is able to handle
ideals of degree over 60000. Such an implementation outperforms the Magma and Singular ones, as shown by our experiments.
First for the shape position case, two methods are designed based on the Wiedemann algorithm: the first is probabilistic and its complexity to complete the change of ordering is O(D(N[1]+n log(D)^2))
, where N[1] is the number of nonzero entries of a multiplication matrix; the other is deterministic and computes the LEX Groebner basis of $√I$ via Chinese Remainder Theorem. Then for the general
case, the designed method is characterized by the Berlekamp-Massey-Sakata algorithm from Coding Theory to handle the multi-dimensional linearly recurring relations. Complexity analyses of all
proposed methods are also provided.
Furthermore, for generic polynomial systems, we present an explicit formula for the estimation of the sparsity of one main multiplication matrix, and prove that its construction is free. With the
asymptotic analysis of such sparsity, we are able to show that for generic systems the complexity above becomes O(√(6/n π)D^2+(n-1)/n).
This talk is based on joint work with Jean-Charles Faugere.
Monday November 16, 2015
Frédéric Bihan (Université de Savoie Mont Blanc)
Polynomial systems with many positive solutions from bipartite triangulations
A006, 10:30
We use a version of Viro's method to construct polynomial systems with many positive solutions. We show that if a polytope admits an unimodular regular triangulation whose dual graph is bipartite,
then there exists an unmixed polynomial system with this polytope as Newton polytope and which is maximally positive in that all its toric complex solutions are in fact real positive solutions. We
present classical families of polytopes which admit such triangulations. These examples give evidence in favor of a conjecture due to Bihan which characterizes affine relations inside the support of
a maximally polynomial system. We also use our construction to get polynomial systems with many positive solutions considering a simplicial complex contained in a regular triangulation of the cyclic
polytope. This is joint work with Pierre-Jean Spaenlehauer (INRIA Nancy).
Wednesday October 14, 2015
Jan Tuitman (Department of Mathematics, KU Leuven, Belgium) Counting points on curves: the general case B013, 10:30
pdf (208 kb)
Kedlaya's algorithm computes the zeta function of a hyperelliptic curve over a finite field using the theory of p-adic cohomology. We have recently dev eloped and implemented a generalisation of this
algorithm that works for (almost) any curve. First, we will outline the theory involved. Then we will describe our algorithm and illustrate the main ideas by giving some examples. Finally, if time
permits, we will talk about some current and future work of ours with various coauthors on improving the algorithm and applying it in other settings.
Wednesday September 9, 2015
Roland Wen (Univ. of New South Wales, Sydney)
Engineering Cryptographic Applications: Leveraging Recent E-Voting Experiences in Australia to Build Failure-Critical Systems
A006, 10:30
Advanced, bespoke cryptographic applications are emerging for large-scale use by the general population. A good example is cryptographic electronic voting systems, which make extensive use of
sophisticated cryptographic techniques to help attain strong security properties (such as secrecy and verifiability) that are required due to the failure-critical nature of public elections. Recently
in Australia, cryptographic e-voting systems were used in two state elections: iVote, an Internet voting system, was used in New South Wales, and vVote, a polling place voting system, was used in
However developing and deploying such complex and critical cryptographic applications involves a range of engineering challenges that have yet to be addressed in practice by industry and the research
community. As with any complex, large-scale system, there were barriers to applying appropriately rigorous engineering practices in the two Australian e-voting systems. But since these e-voting
systems are critical national infrastructure, such engineering practices are needed to provide high assurance of the systems and their required properties.
In this talk I will discuss some of the engineering challenges, practical barriers and issues, and what can be learned from the two recent Australian e-voting experiences.
Thursday February 5, 2015
Matthieu Rambaud (Télécom ParisTech)
Comment trouver de bons algorithmes de multiplication par interpolation ?
A006, 10:30
Thursday November 20, 2014
Sebastian Kochinke (Universität Leipzig)
The Discrete Logarithm Problem on non-hyperelliptic Curves of Genus g > 3.
A006, 14:00
We consider the discrete logarithm problem in the degree-0 Picard group of non-hyperelliptic curves of genus g > 3. A well known method to tackle this problem is by index calculus. In this talk we
will present two algorithms based on the index calculus method. In both of them linear systems of small degree are generated and then used to produce relations. Analyzing these algorithms leads to
several geometric questions. We will discuss some of them in more detail and state further open problems. At the end we will show some experimental results.
Thursday November 20, 2014
Enea Milio (LFANT team, Institut de Mathématiques de Bordeaux)
Calcul des polynômes modulaires en genre 2
A006, 10:30
Monday November 17, 2014
Thomas Richard (CARAMEL team, LORIA)
Cofactorization strategies for NFS
B013, 14:00
Nowadays, when we want to factor large numbers, we use sieve algorithms like the number field sieve or the quadratic sieve. In these algorithms, there is an expensive part called the relation
collection step. In this step, one searches a wide set of numbers to identify those which are smooth, i.e. integers where all prime divisors are less than a particular bound. In this search, a first
step finds the smallest prime divisors with a sieving procedure and a second step tries to factor the remaining integers which are no longer divisible by small primes, using factoring methods like
P-1, P+1 and ECM. This talk will introduce a procedure, following Kleinjung, to optimize the second step!
Thursday October 23, 2014
Andrea Miele (LACAL, EPFL, Lausanne) Post-sieving on GPUs A006, 10:30
pdf (930 kb)
The number field sieve (NFS) is the fastest publicly known algorithm for factoring RSA moduli. We show how the post sieving step, a compute-intensive part of the relation collection phase of NFS, can
be farmed out to a graphics processing unit. Our implementation on a GTX 580 GPU, which is integrated with a state-of-the-art NFS implementation, can serve as a cryptanalytic co-processor for several
Intel i7-3770K quad-core CPUs simultaneously. This allows those processors to focus on the memory-intensive sieving and results in more useful NFS-relations found in less time.
Thursday September 11, 2014
Christian Eder (Department of Mathematics, University of Kaiserslautern) Computing Groebner Bases B013, 10:30
pdf (7607 kb)
In 1965 Buchberger introduced a first algorithmic approach to the computation of Groebner Bases. Over the last decades optimizations to this basic attempt were found. In this talk we discuss two main
aspects of the computation of Groebner Bases: Predicting zero reductions is essential to keep the computational overhead and memory usage at a low. We show how Faugère's idea, initially presented in
the F5 algorithm, can be generalized and further improved. The 2nd part of this talk is dedicated to the exploitation of the algebraic structures of a Groebner Basis. Thus we are not only able to
replace polynomial reduction by linear algebra (Macaulay matrices, Faugère's F4 algorithm), but we can also specialize the Gaussian Elimination process for our purposes.
Thursday July 3, 2014
Nicholas Coxon (CARAMEL team, LORIA) Nonlinear polynomials for NFS factorisation B013, 10:30
pdf (596 kb)
To help minimise the running time of the number field sieve, it is desirable to select polynomials with balanced degrees in the polynomial selection phase. I will discuss algorithms for generating
polynomial pairs with this property: those based on Montgomery's approach, which reduce the problem to the construction of small modular geometric progressions; and an algorithm which draws on ideas
from coding theory.
Thursday June 19, 2014
Irene Márquez-Corbella (GRACE team, LIX, École Polytechnique) Une attaque polynomiale du schéma de McEliece basé sur les codes géométriques A006, 10:30
pdf (1036 kb)
Thursday April 24, 2014
Guillaume Moroz (Vegas team, LORIA)
Évaluation et composition rapide de polynômes
A006, 10:30
Thursday March 13, 2014
Pierre-Jean Spaenlehauer (CARAMEL team, LORIA)
A Newton-like iteration and algebraic methods for Structured Low-Rank Approximation
A006, 10:30
Given an linear/affine space of matrices E with real entries, a data matrix U ∈ E and a target rank r, the Structured Low-Rank Approximation Problem consists in computing a matrix M ∈ E which is
close to U (with respect to the Frobenius norm) and has rank at most r. This problem appears with different flavors in a wide range of applications in Engineering Sciences and symbolic/numeric
We propose an SVD-based numerical iterative method which converges locally towards such a matrix M. This iteration combines features of the alternating projections algorithm and of Newton's method,
leading to a proven local quadratic rate of convergence under mild tranversality assumptions. We also present experimental results which indicate that, for some range of parameters, this general
algorithm is competitive with numerical methods for approximate univariate GCDs and low-rank matrix completion (which are instances of Structured Low-Rank Approximation).
In a second part of the talk, we focus on the algebraic structure and on exact methods to compute symbolically the nearest structured low-rank matrix M to a given matrix U ∈ E with rational entries.
We propose several ways to compute the algebraic degree of the problem and to recast it as a system of polynomial equations in order to solve it with algebraic methods.
The first part of the talk is a joint work with Eric Schost, the second part is a joint work with Giorgio Ottaviani and Bernd Sturmfels.
Wednesday February 26, 2014
Armand Lachand (Équipe Théorie des Nombres, IECL)
Quelques perspectives mathématiques sur la sélection polynomiale dans le crible algébrique NFS
A006, 10:30
Thursday December 19, 2013
Luca De Feo (CRYPTO, PRiSM, Univ. Versailles Saint-Quentin)
Algorithms for F[p]
A006, 10:30
Realizing in software the algebraic closure of a finite field F[p] is equivalent to construct so called “compatible lattices of finite fields”, i.e. a collection of finite extensions of F[p] together
with embeddings F[p^m] ⊂ F[p^n] whenever m | n.
There are known algorithms to construct compatible lattices in deterministic polynomial time, but the status of the most practically efficient algorithms is still unclear. This talk will review the
classical tools available, then present some new ideas towards the efficient construction of compatible lattices, possibly in quasi-optimal time.
Friday December 6, 2013
Cécile Pierrot (PolSys team, LIP6)
Crible Spécial sur Corps de Nombres (SNFS) – Application aux courbes elliptiques bien couplées.
A006, 13:00
Thursday November 28, 2013
Clément Pernet (AriC team, LIP, ENS Lyon)
Calcul de formes echelonnées et des profils de rang
A006, 10:30
Thursday November 7, 2013
Julia Pieltant (GRACE team, LIX, École Polytechnique)
Algorithme de type Chudnovsky pour la multiplication dans les extensions finies de F[q].
A006, 10:30
Friday July 19, 2013
Bastien Vialla (LIRMM)
Un peu d'algèbre linéaire
A006, 14:00
Wednesday July 17, 2013
Alice Pellet-Mary (CARAMEL team, LORIA)
Test rapide de cubicité modulaire
B013, 14:00
Tuesday July 16, 2013
Svyatoslav Covanov (CARAMEL team, LORIA)
Implémentation efficace d'un algorithme de multiplication de grands nombres
A006, 15:00
Monday July 15, 2013
Hamza Jeljeli (CARAMEL team, LORIA)
RNS Arithmetic for Linear Algebra of FFS and NFS-DL algorithms
B013, 15:30
Computing discrete logarithms in large cyclic groups using index-calculus-based methods, such as the number field sieve or the function field sieve, requires solving large sparse systems of linear
equations modulo the group order. In this talk, we present how we use the Residue Number System (RNS) arithmetic to accelerate modular operations. The first part deals with the FFS case, where the
matrix contains only small values. The second part discusses how we treat for NFS-DL, the particular dense columns corresponding to Schirokauer's maps, where the values are large.
Tuesday May 28, 2013
Mourad Gouicem (PEQUAN team, LIP6) Fractions continues et systèmes de numérations : applications à l'implémentation de fonctions élémentaires et à l'arithmétique modulaire A006, 10:30
pdf (1269 kb)
Friday April 5, 2013
François Morain (GRACE team, LIX)
ECM using number fields
B200, 13:30
Thursday March 28, 2013
Antoine Joux (CryptoExperts / CRYPTO, PRiSM, Univ. Versailles Saint-Quentin)
Logarithmes discrets dans les corps finis. Application en caractéristique "moyenne".
A006, 10:30
Friday March 15, 2013
Adeline Langlois (AriC, LIP, ENS Lyon)
Classical Hardness of Learning with Errors
A006, 14:00
The decision Learning With Errors (LWE) problem, introduced by Regev in 2005 has proven an invaluable tool for designing provably secure cryptographic protocols. We show that LWE is classically at
least as hard as standard worst-case lattice problems, even with polynomial modulus. Previously this was only known under quantum reductions.
Our techniques capture the tradeoff between the dimension and the modulus of LWE instances, leading to a much better understanding of the landscape of the problem. The proof is inspired by techniques
from several recent cryptographic constructions, most notably fully homomorphic encryption schemes.
This work has been done with Z. Brakerski, C. Peikert, O. Regev and D. Stehlé.
Friday February 22, 2013
Jérémy Parriaux (CRAN, Univ. de Lorraine)
Contrôle, synchronisation et chiffrement
A006, 13:30
Wednesday February 13, 2013
Emmanuel Jeandel (Équipe CARTE, LORIA)
Quête du plus petit jeu de tuiles apériodique
A006, 10:30
Thursday January 31, 2013
Maike Massierer (Mathematisches Institut, Universität Basel)
An Efficient Representation for the Trace Zero Variety
A006, 10:30
The hardness of the (hyper)elliptic curve discrete logarithm problem over extension fields lies in the trace zero variety. A compact representation of the points of this abelian variety is needed in
order to accurately assess the hardness of the discrete logarithm problem there. Such representations have been proposed by Lange for genus 2 curves and by Silverberg for elliptic curves. We present
a new approach for elliptic curves. It is made possible by a new equation for the variety derived from Semaev's summation polynomials. The new representation is optimal in the sense that it reflects
the size of the group, it is compatible with the structure of the variety, and it can be computed efficiently.
Thursday January 17, 2013
Pierre-Jean Spaenlehauer (ORCCA, University of Western Ontario) Résolution de systèmes polynomiaux structurés et applications en Cryptologie B013, 13:00
pdf (853 kb)
Thursday December 20, 2012
Alin Bostan (Projet SpecFun, INRIA Saclay)
Computer algebra for the enumeration of lattice walks
A008, 14:00
Classifying lattice walks in restricted lattices is an important problem in enumerative combinatorics. Recently, computer algebra methods have been used to explore and solve a number of difficult
questions related to lattice walks. In this talk, we will give an overview of recent results on structural properties and explicit formulas for generating functions of walks in the quarter plane,
with an emphasis on the algorithmic methodology.
Thursday December 20, 2012
Mohab Safey El Din (Projet PolSys, LIP6, UPMC-IUF-INRIA)
Computer Algebra Algorithms for Real Solving Polynomial Systems: the Role of Structures
A008, 10:30
Real solving polynomial systems is a topical issue for many applications. Exact algorithms, using computer algebra techniques, have been deployed to answer various specifications such that deciding
the existence of solutions, answer connectivity queries or one block-real quantifier elimination. In this talk, we will review some recent on-going works whose aims are to exploit algebraic and
geometric properties in order to provide faster algorithms in theory and in practice.
Thursday November 29, 2012
Paul Zimmermann (Projet CARAMEL, LORIA)
Sélection polynomiale dans CADO-NFS
C005, 10:30
Friday November 23, 2012
Alexandre Benoit (Lycée Alexandre Dumas, Saint Cloud) Multiplication quasi-optimale d'opérateurs différentiels A006, 10:30
pdf (422 kb)
Thursday November 15, 2012
Christophe Petit (Crypto Group, Université Catholique de Louvain) On polynomial systems arising from a Weil descent A006, 14:00
pdf (713 kb)
Polynomial systems of equations appearing in cryptography tend to have special structures that simplify their resolution. In this talk, we discuss a class of polynomial systems arising after
deploying a multivariate polynomial equation over an extension field into a system of polynomial equations over the ground prime field (a technique commonly called Weil descent).
We provide theoretical and experimental evidence that the degrees of regularity of these systems are very low, in fact only slightly larger than the maximal degrees of the equations.
We then discuss cryptographic applications of (particular instances of) these systems to the hidden field equation (HFE) cryptosystem, to the factorization problem in SL(2, 2^n) and to the elliptic
curve discrete logarithm over binary fields. In particular, we show (under a classical heuristic assumption in algebraic cryptanalysis) that an elliptic curve index calculus algorithm due to Claus
Diem has subexponential time complexity O(2^c n^2/3 log n) over the binary field GF(2^n), where c is a constant smaller than 2.
Based on joint work with Jean-Charles Faugère, Ludovic Perret, Jean-Jacques Quisquater and Guénaël Renault.
Tuesday November 6, 2012
Hugo Labrande (ENS de Lyon)
Accélération de l'arithmétique des corps CM quartiques
A006, 14:00
Thursday September 27, 2012
Laura Grigori (Projet Grand Large, INRIA Saclay-Île de France, LRI)
Recent advances in numerical linear algebra and communication avoiding algorithms
B013, 10:30
Numerical linear algebra operations are ubiquitous in many challenging academic and industrial applications. This talk will give an overview of the evolution of numerical linear algebra algorithms
and software, and the major changes such algorithms have undergone following the breakthroughs in the hardware of high performance computers. A specific focus of this talk will be on communication
avoiding algorithms. This is a new and particularly promising class of algorithms, which has been introduced in the late 2008 as an attempt to address the exponentially increasing gap between
computation time and communication time - one of the major challenges faced today by the high performance computing community. I will also discuss novel preconditioning techniques for accelerating
the convergence of iterative methods.
Friday July 20, 2012
Francisco Rodríguez-Henríquez (CINVESTAV, IPN, México)
Computing square roots over prime extension fields
A006, 10:30
Taking square roots over finite fields is a classical number theoretical problem that has capture the attention of researchers across the centuries. Nowadays, the computation of square roots is
especially relevant for elliptic curve cryptosystems, where hashing an arbitrary message to a random point that belongs to a given elliptic curve, point compression and point counting over elliptic
curves, are among its most relevant cryptographic applications.
In this talk, we present two new algorithms for computing square roots over finite fields of the form F[q], with q = p^m and where p is a large odd prime and m an even integer. The first algorithm is
devoted to the case when q ≡ 3 (mod 4), whereas the second handles the complementary case when q ≡ 1 (mod 4). We include numerical comparisons showing the efficiency of our algorithms over the ones
previously published in the open literature.
Friday June 29, 2012
Răzvan Bărbulescu (Projet CARAMEL, LORIA)
Finding Optimal Formulae for Bilinear Maps
B013, 14:00
We describe a unified framework to search for optimal formulae evaluating bilinear or quadratic maps. This framework applies to polynomial multiplication and squaring, finite field arithmetic, matrix
multiplication, etc. We then propose a new algorithm to solve problems in this unified framework. With an implementation of this algorithm, we prove the optimality of various published upper bounds,
and find improved upper bounds.
Friday June 29, 2012
Cyril Bouvier (Projet CARAMEL, LORIA)
Finding ECM-Friendly Curves through a Study of Galois Properties
A006, 10:00
In this paper we prove some divisibility properties of the cardinality of elliptic curves modulo primes. These proofs explain the good behavior of certain parameters when using Montgomery or Edwards
curves in the setting of the elliptic curve method (ECM) for integer factorization. The ideas of the proofs help us to find new families of elliptic curves with good division properties which
increase the success probability of ECM.
Wednesday June 20, 2012
Aurore Guillevic (Équipe Crypto, ENS / Laboratoire Chiffre, Thales) Familles de courbes hyperelliptiques de genre 2, calcul explicite de l'ordre de la jacobienne et constructions pour les couplages
A006, 10:30
pdf (656 kb)
Wednesday April 25, 2012
Olivier Levillain (ANSSI)
SSL/TLS: état des lieux et recommandations
A008, 10:30
Tuesday April 3, 2012
Luc Sanselme (Lycée Henri Poincaré, Nancy)
Algorithmique des groupes en calcul quantique
A006, 14:00
Friday March 30, 2012
Jean-Charles Faugère (Projet PolSys, LIP6)
Gröbner Bases and Linear Algebra
A006, 10:30
There is a strong interplay between computing efficiently Gröbner bases and linear algebra. In this talk, we focus on several aspects of this convergence:
• Algorithmic point of view: algorithms for computing efficiently Gröbner bases (F4, F5, FGLM, ...) rely heavily on efficient linear algebra.
□ The matrices generated by these algorithms have unusual properties: sparse, almost block triangular. We present a dedicated algorithm for computing Gaussian elimination of Gröbner bases
□ By taking advantage of the sparsity of multiplication matrices in the classical FGLM algorithm we can design an efficient algorithm to change the ordering of a Gröbner basis. The algorithm is
a related to multivariate generalization of the Wiedemann algorithm. When the matrices are not sparse, for generic systems, the complexity is Õ(D^ω) where D is the number of solutions and ω ≤
2.3727 is the linear algebra constant.
□ Mixing Gröbner bases methods and linear algebra technique for solving sparse linear systems leads to an efficient algorithm to solve Boolean quadratic equations over F[2]; this algorithm is
faster than exhaustive search by an exponential factor
• Application point of view: for instance, a generalization of the eigenvalue problem to several matrices – the MinRank problem – is at the heart of the security of many multivariate public key
• Design of C library: we present a multi core implementation of these new algorithms; the library contains specific algorithms to compute Gaussian elimination as well as specific internal
representation of matrices.The efficiency of the new software is demonstrated by showing computational results for well known benchmarks as well as some crypto applications.
Joint works with S. Lachartre, C. Mou, P. Gaudry, L. Huot, G. Renault, M. Bardet, B. Salvy, P.J. Spaenlehauer and M. Safey El Din.
Tuesday March 20, 2012
Peter Schwabe (Academia Sinica, Taiwan) EdDSA signatures and Ed25519 A217, 15:00
pdf (1050 kb)
One of the most widely used applications of elliptic-curve cryptography is digital signatures. In this talk I will present the EdDSA signature scheme that improves upon previous elliptic-curve-based
signature schemes such as ECDSA or Schnorr signatures in several ways. In particular it makes use of fast and secure arithmetic on Edwards curves, it is resilient against hash-function collisions and
it supports fast batch verification of signatures. I will furthermore present performance results of Ed25519, EdDSA with a particular set of parameters for the 128-bit security level.
Wednesday March 14, 2012
Karim Khalfallah (ANSSI)
Les canaux auxiliaires, approche sous l'angle du rapport signal-à-bruit
A006, 10:30
Wednesday March 7, 2012
Jérémie Detrey (Projet CARAMEL, LORIA)
Implémentation efficace de la recherche de formules pour applications bilinéaires
A006, 10:30
Wednesday February 29, 2012
Marion Videau (Projets CARAMEL, LORIA)
Codes d'authentification de message (suite et fin)
A006, 10:30
Thursday February 2, 2012
Stéphane Glondu (Projets CASSIS/CARAMEL, LORIA)
Tutoriel Coq (suite et fin)
B013, 10:00
Friday January 20, 2012
Charles Bouillaguet (CRYPTO, PRiSM, Univ. Versailles Saint-Quentin) Presumably hard problems in multivariate cryptography A006, 10:30
pdf (2437 kb)
Public-key cryptography relies on the existence of computationaly hard problems. It is not widely accepted that a public-key scheme is worthless without a "security proof", i.e., a proof that if an
attacker breaks the scheme, then she solves in passing an instance of an intractable computational problem. As long as the hard problem is intractable, then the scheme is secure. The most well-known
hardness assumptions of this kind are probably the hardness of integer factoring, or that of taking logarithms in certain groups.
In this talk we focus on multivariate cryptography, a label covering all the (mostly public-key) schemes explicitly relying on the hardness of solving systems of polynomial equations in several
variables over a finite field. The problem, even when restricted to quadratic polynomials, is well-known to be NP-complete. In the quadratic case, it is called MQ. Interestingly, most schemes in this
area are not "provably secure", and a lot of them have been broken because they relied on another, less well-known, computational assumption, the hardness of Polynomial Linear Equivalence (PLE),
which is a higher-degree generalization the problem testing whether two matrices are equivalent.
In this talk I will present the algorithms I designed to tackle these two hard problems. I will show that 80 bits of security are not enough for MQ to be practically intractable, and I will present
faster-than-before, sometimes practical algorithms for various flavors of PLE.
Wednesday November 30, 2011
Cyril Bouvier (Projet CARAMEL, LORIA)
ECM sur GPU
A006, 10:30
In this talk, I will present a new implementation of the Elliptic Curve Method algorithm (ECM) for graphic cards (GPU). This implementation uses Montgomery's curve, like GMP-ECM, but uses a different
implementation and a different algorithm for the scalar multiplication. As there is no modular arithmetic library (like GMP) available for GPU, it was necessary to implement a modular arithmetic
using array of unsigned integers from scratch, while taking into account constraints of GPU programming. The code, written for NVIDIA GPUs using CUDA, was optimized for factoring 1024 bits integers.
Friday October 28, 2011
Sorina Ionica (Projet CARAMEL, LORIA)
Pairing-based algorithms for Jacobians of genus 2 curves with maximal endomorphism ring
A006, 11:00
Using Galois cohomology, Schmoyer characterizes cryptographic non-trivial self-pairings of the ℓ-Tate pairing in terms of the action of the Frobenius on the ℓ-torsion of the Jacobian of a genus 2
curve. We apply similar techniques to study the non-degeneracy of the ℓ-Tate pairing restrained to subgroups of the ℓ-torsion which are maximal isotropic with respect to the Weil pairing. First, we
deduce a criterion to verify whether the jacobian of a genus 2 curve has maximal endomorphism ring. Secondly, we derive a method to construct horizontal (ℓ, ℓ)-isogenies starting from a jacobian with
maximal endomorphism ring. This is joint work with Ben Smith.
Thursday October 6, 2011
Paul Zimmermann (Projet CARAMEL, LORIA) Short Division of Long Integers (with David Harvey) A006, 10:30
pdf (1764 kb)
We consider the problem of short division — i.e., approximate quotient — of multiple-precision integers. We present ready-to-implement algorithms that yield an approximation of the quotient, with
tight and rigorous error bounds. We exhibit speedups of up to 30% with respect to GMP division with remainder, and up to 10% with respect to GMP short division, with room for further improvements.
This work enables one to implement fast correctly rounded division routines in multiple-precision software tools.
Thursday September 29, 2011
Diego F. Aranha (Universidade de Brasília) Efficient Software Implementation of Binary Field Arithmetic Using Vector Instruction Sets B011, 10:30
pdf (402 kb)
In this talk, we will describe an efficient software implementation of characteristic 2 fields making extensive use of vector instruction sets commonly found in desktop processors. Field elements are
represented in a split form so performance-critical field operations can be formulated in terms of simple operations over 4-bit sets. In particular, we detail techniques for implementing field
multiplication, squaring, square root extraction, half-trace and inversion and present a constant-memory lookup-based multiplication strategy. We illustrate performance with timings for scalar
multiplication on a 251-bit curve and compare our results with publicly available benchmarking data.
Wednesday July 13, 2011
Benoît Gaudel (Projet CARAMEL, LORIA) Étude de stratégies de cofactorisation pour l'algorithme Function Field Sieve A006, 10:30
pdf (657 kb)
Thursday June 30, 2011
Cyril Bouvier (Projet CARAMEL, LORIA) ECM sur GPU B013, 10:00
pdf (371 kb)
Thursday June 9, 2011
Alain Couvreur (Projet COCQ, Institut de Mathématiques de Bordeaux)
Une nouvelle construction géométrique de codes sur de petits corps
B013, 10:30
Tuesday June 7, 2011
Christophe Mouilleron (Projet Arénaire, LIP, ENS Lyon) Génération de schémas d'évaluation avec contraintes pour des expressions arithmétiques A006, 10:30
pdf (445 kb)
Monday May 30, 2011
Marion Videau (ANSSI)
Cryptanalyse d'ARMADILLO2
A217, 14:00
Monday May 30, 2011
Hamza Jeljeli (LACAL, EPFL, Lausanne)
RNS on Graphic Processing Units
A213, 09:30
Wednesday May 18, 2011
Benjamin Smith (Projet TANC, LIX, École Polytechnique)
Middlebrow Methods for Low-Degree Isogenies in Genus 2
A006, 10:30
In 2008, Dolgachev and Lehavi published a method for constructing (ℓ,ℓ)-isogenies of Jacobians of genus 2 curves. The heart of their construction requires only elementary projective geometry and some
basic facts about abelian varieties. In this talk, we put their method into practice, and consider what needs to be done to transform their method into a practical algorithm for curves over finite
Thursday May 5, 2011
Xavier Pujol (Projet Arénaire, LIP, ENS Lyon) Analyse de BKZ B013, 10:30
pdf (870 kb)
Strong lattice reduction is the key element for most attacks against lattice-based cryptosystems. Between the strongest but impractical HKZ reduction and the weak but fast LLL reduction, there have
been several attempts to find efficient trade-offs. Among them, the BKZ algorithm introduced by Schnorr and Euchner in 1991 seems to achieve the best time/quality compromise in practice. However, no
reasonable time complexity upper bound was known so far for BKZ. We give a proof that after Õ(n^3/k^2) calls to a k-dimensional HKZ reduction subroutine, BKZ[k] returns a basis such that the norm of
the first vector is at most ≈ γ[k]^n/2(k-1) × det(L)^1/n. The main ingredient of the proof is the analysis of a linear dynamic system related to the algorithm.
Thursday April 21, 2011
Răzvan Bărbulescu (Projet CARAMEL, LORIA) Améliorations au problème du logarithme discret dans F[p]^* B013, 10:30
pdf (306 kb)
Thursday April 14, 2011
Alin Bostan (Projet ALGORITHMS, INRIA Paris-Rocquencourt) Algébricité de la série génératrice complète des chemins de Gessel B013, 10:30
pdf (249 kb)
Wednesday March 30, 2011
Martin Albrecht (Projet SALSA, LIP6) The M4RI & M4RIE libraries for linear algebra over GF(2) and small extensions A006, 10:30
pdf (1881 kb)
In this talk we will give an overview of the M4RI and M4RIE libraries. These open-source libraries are dedicated to efficient linear algebra over small finite fields with characteristic two. We will
present and discuss implemented algorithms, implementation issues that arise for these fields and also some ongoing and future work. We will also demonstrate the viability of our approach by
comparing the performance of our libraries with the implementation in Magma.
Thursday February 17, 2011
Bogdan Pasca (Projet Arénaire, LIP, ENS Lyon) FPGA-specific arithmetic pipeline design using FloPoCo B013, 10:30
pdf (869 kb)
In the last years the capacity of modern FPGAs devices has increased to the point that they can be used with success for accelerating various floating-point computations. However, ease of
programmability has never rhymed with FPGAs. Obtaining good accelerations was most of the time a laborious and error-prone process.
This talk is addressing the programmability of arithmetic circuits on FPGAs. We present FloPoCo, a framework facilitating the design of custom arithmetic datapaths for FPGAs. Some of the features
provided by FloPoCo are: an important basis of highly optimized arithmetic operators, a unique methodology for separating arithmetic operator design from frequency-directed pipelining the designed
circuits and a flexible test-bench generation suite for numerically validating the designs.
The framework is reaching maturity, so far being tested with success for designing several complex arithmetic operators including the floating-point square-root, exponential and logarithm functions.
Synthesis results capture the designed operator's flexibility: automatically optimized for several Altera and Xilinx FPGAs, wide range of target frequencies and several precisions ranging from single
to quadruple precision.
Thursday January 27, 2011
Christophe Arène (Institut de Mathématiques de Luminy)
Complétude des lois d'addition sur une variété abélienne
B013, 10:30
Friday January 21, 2011
Frederik Vercauteren (ESAT/COSIC, KU Leuven)
Fully homomorphic encryption via ideals in number rings
A006, 10:30
In this talk, I will review the concept of fully homomorphic encryption, describe its possibilities and then give a construction based on principal ideals in number rings. This is joint work with
Nigel Smart.
Thursday January 20, 2011
Junfeng Fan (ESAT/COSIC, KU Leuven)
ECC on small devices
B013, 10:30
The embedded security community has been looking at the ECC ever since it was introduced. Hardware designers are now challenged by limited area (<15 kGates), low power budget (<100 μW) and
sophisticated physical attacks. This talk will report the state-of-the-art ECC implementations for ultra-constrained devices. We take a passive RFID tag as our potential target. We will discuss the
known techniques to realize ECC on such kind of devices, and what are the challenges we face now and in the near future.
Thursday December 9, 2010
Sylvain Collange (Arénaire, LIP, ENS Lyon) Enjeux de conception des architectures GPGPU : unités arithmétiques spécialisées et exploitation de la régularité A006, 10:30
pdf (1216 kb)
Thursday December 2, 2010
Guillaume Batog (VEGAS, LORIA)
Sur le type d'intersection de deux quadriques de P^3(R)
A006, 10:30
Thursday November 25, 2010
Vanessa Vitse (CRYPTO, PRiSM, Univ. Versailles Saint-Quentin) Calcul de traces de l'algorithme F4 et application aux attaques par décomposition sur courbes elliptiques A006, 10:30
pdf (1243 kb)
Monday November 8, 2010
Mehdi Tibouchi (Équipe Cryptographie, Laboratoire d'Informatique de l'ENS) Hachage vers les courbes elliptiques et hyperelliptiques A006, 10:30
pdf (396 kb)
Thursday November 4, 2010
Marcelo Kaihara (LACAL, EPFL, Lausanne) Implementation of RSA 2048 on GPUs A006, 10:30
pdf (2765 kb)
Following the NIST recommendations for Key Management (SP800-57) and the DRAFT recommendation for the Transitioning of Cryptographic Algorithms and Key Sizes (SP 800-131), the use of RSA-1024 will be
deprecated from January 1, 2011. Major vendors and enterprises will start the transition to the next minimum RSA key size of 2048 bits, which is computationally much more expensive. This talk
presents an implementation of RSA-2048 on GPUs and explores the possibility to alleviate the computationally overhead by offloading the operations on GPUs.
Thursday October 14, 2010
Louise Huot (Équipe SALSA, LIP6)
Étude des systèmes polynomiaux intervenant dans le calcul d'indice pour la résolution du problème du logarithme discret sur les courbes
B200, 14:00
Friday July 30, 2010
Thomas Prest (Équipe CARAMEL, LORIA)
Sélection polynomiale pour le crible NFS
B013, 11:00
Thursday July 15, 2010
Peter Montgomery (Microsoft Research & CWI)
Attempting to Run NFS with Many Linear Homogeneous Polynomials
A213, 16:00
Thursday July 15, 2010
Julie Feltin (Équipe CARAMEL, LORIA)
Implantation de l'algorithme ECM sur GPU
A213, 14:00
Thursday June 17, 2010
Fabrice Rouillier (Projet SALSA, LIP6)
Quelques astuces pour résoudre les systèmes polynomiaux dépendant de 2 variables
A008, 10:30
Thursday June 3, 2010
Xavier Goaoc (Projet VEGAS, LORIA) Influence du bruit sur le nombre de points extrêmes C005, 10:30
pdf (365 kb)
Friday May 28, 2010
Paul Zimmermann (Projet CARAMEL, LORIA) 1,82 ? A006, 10:30
pdf (308 kb)
Wednesday April 28, 2010
Francesco Sica (Department of Mathematics and Statistics, University of Calgary) Une approche analytique au problème de la factorisation d'entiers A006, 10:30
pdf (471 kb)
Monday April 26, 2010
Jean-François Biasse (Projet TANC, LIX, École Polytechnique)
Calcul de groupe de classes d'idéaux de corps de nombres.
A006, 10:30
Friday April 9, 2010
Mioara Joldeş (Projet Arénaire, LIP, ENS Lyon) Chebyshev Interpolation Polynomial-based Tools for Rigorous Computing A006, 10:30
pdf (690 kb)
Performing numerical computations, yet being able to provide rigorous mathematical statements about the obtained result, is required in many domains like global optimization, ODE solving or
integration. Taylor models, which associate to a function a pair made of a Taylor approximation polynomial and a rigorous remainder bound, are a widely used rigorous computation tool. This approach
benefits from the advantages of numerical methods, but also gives the ability to make reliable statements about the approximated function.
A natural idea is to try to replace Taylor polynomials with better approximations such as minimax approximation, Chebyshev truncated series or interpolation polynomials. Despite their features, an
analogous to Taylor models, based on such polynomials, has not been yet well-established in the field of validated numerics. In this talk we propose two approaches for computing such models based on
interpolation polynomials at Chebyshev nodes.
We compare the quality of the obtained remainders and the performance of the approaches to the ones provided by Taylor models. We also present two practical examples where this tool can be used:
supremum norm computation of approximation errors and rigorous quadrature.
This talk is based on a joint work with N. Brisebarre.
Wednesday March 31, 2010
Peter Schwabe (Department of Mathematics and Computer Science, TU Eindhoven) Breaking ECC2K-130 A006, 14:00
pdf (280 kb)
In order to "increase the cryptographic community's understanding and appreciation of the difficulty of the ECDLP", Certicom issued several elliptic-curve discrete-logarithm challenges. After these
challenges were published in 1997 the easy ones of less than 100 bits were soon solved — the last one in 1999.
In 2004 also all challenges of size 109 bits were solved but the 131-bit challenges have so far all not been successfully targeted.
Since end of 2009 a group of several institutions is trying to solve the challenge ECC2K-130, a discrete-logarithm problem on a Koblitz curve over the field F[2^131]. In my talk I will describe the
approach taken to solve this challenge and give details of the Pollard rho iteration function. Furthermore I will give implementation details on two different platforms, namely the Cell Broadband
Engine and NVIDIA GPUs.
Friday March 26, 2010
Romain Cosset (Projet CARAMEL, LORIA) Formules de Thomae et isogénies A006, 10:00
pdf (392 kb)
Friday March 19, 2010
Luca De Feo (Projet TANC, LIX, École Polytechnique)
Isogeny computation in small characteristic
A006, 10:30
Isogenies are an important tool in the study of elliptic curves. As such their applications in Elliptic Curve Cryptography are numerous, ranging from point counting to new cryptographic schemes.
The problem of finding explicit formulae expressing an isogeny between two elliptic curves has been studied by many. Vélu gave formulae for the case where the curves are defined over C; these
formulae have been extended in works by Morain, Atkin and Charlap, Coley & Robbins to compute isogenies in the case where the characteristic of the field is larger than the degree of the isogeny.
The small characteristic case requires another treatment. Algorithms by Couveignes, Lercier, Joux & Lercier, Lercier & Sirvent give solutions to different instances of the problem. We review these
strategies, then we present an improved algorithm based over Couveignes' ideas and we compare its performance to the other ones.
Friday March 5, 2010
Wouter Castryck (Department of Mathematics, KU Leuven) The probability that a genus 2 curve has a Jacobian of prime order A006, 10:00
pdf (744 kb)
To generate a genus 2 curve that is suitable for use in cryptography, one approach is to repeatedly pick a curve at random until its Jacobian has prime (or almost prime) order. Naively, one would
expect that the probability of success is comparable to the probability that a randomly chosen integer in the according Weil interval is prime (or almost prime). However, in the elliptic curve case
it was observed by Galbraith and McKee that large prime factors are disfavoured. They gave a conjectural formula that quantifies this effect, along with a heuristic proof, based on the
Hurwitz–Kronecker class number formula. In this talk, I will provide alternative heuristics in favour of the Galbraith–McKee formula, that seem better-suited for generalizations to curves of higher
genus. I will then elaborate this for genus 2. This is joint (and ongoing) research with Hendrik Hubrechts and Alessandra Rigato.
Tuesday March 2, 2010
Osmanbey Uzunkol (KANT Group, Institute of Mathematics, TU Berlin)
Shimura's reciprocity law, Thetanullwerte and class invariants
A006, 10:30
In the first part of my talk I am going to introduce the classical class invariants of Weber, and their generalizations, as quotients of values of "Thetanullwerte", which enables to compute them more
efficiently than as quotients of values of the Dedekind η-function. Moreover, the proof that most of the invariants introduced by Weber are actually units in the corresponding ring class fields will
be given, which allows to obtain better class invariants in some cases, and to give an algorithm that computes the unit group of corresponding ring class fields.
In the second part, using higher degree reciprocity law I am going to introduce the possibility of generalizing the algorithmic approach of determining class invariants for elliptic curves with CM,
to determining alternative class invariant systems for principally polarized simple abelian surfaces with CM.
Thursday February 11, 2010
Fabien Laguillaumie (Algorithmique, GREYC, Univ. Caen Basse-Normandie) Factorisation des entiers N = pq^2 et applications cryptographiques A006, 10:30
pdf (537 kb)
Monday February 1, 2010
Pascal Molin (Institut de Mathématiques de Bordeaux)
Intégration numérique rapide et prouvée — Application au calcul des périodes de courbes hyperelliptiques
A006, 10:30
Thursday December 10, 2009
Éric Brier (Ingenico)
Familles de courbes pour factorisation par ECM des nombres de Cunningham
A208, 10:30
Friday September 25, 2009
Iram Chelli (CACAO) Fully Deterministic ECM A006, 10:30
pdf (589 kb)
We present a FDECM algorithm allowing to remove — if they exist — all prime factors less than 2^32 from a composite input number n. Trying to remove those factors naively either by trial-division or
by multiplying together all primes less than 2^32, then doing a GCD with this product both prove extremely slow and are unpractical. We will show in this article that with FDECM it costs about a
hundred well-chosen elliptic curves, which can be very fast in an optimized ECM implementation with optimized B[1] and B[2] smoothness bounds. The speed varies with the size of the input number n.
Special attention has also been paid so that our FDECM be the most implementation-independent possible by choosing a widespread elliptic-curve parametrization and carefully checking all results for
smoothness with Magma. Finally, we have considered possible optimizations to FDECM first by using a rational family of parameters for ECM and then by determining when it is best to switch from ECM to
GCD depending on the size of the input number n. To the best of our knowledge, this is the first detailed description of a fully deterministic ECM algorithm.
Wednesday September 23, 2009
Răzvan Bărbulescu (CACAO)
Familles de courbes elliptiques adaptées à la factorisation des entiers
B100, 10:30
Thursday September 17, 2009
Tadanori Teruya (LCIS, University of Tsukuba, Japan)
Generating elliptic curves with endomorphisms suitable for fast pairing computation
A006, 10:00
This presentation is about a kind of ordinary elliptic curves introduced by Scott at INDOCRYPT 2005. These curves' CM discriminant is -3 or -1, then they have endomorphism for reducing Miller's loop
length to half. These curves are also restricted in terms of the form of group order. Therefore, these are generated by Cocks-Pinch method. Cocks-Pinch method is a general method to obtain elliptic
curve parameters with rho-value approximately 2. This method enables to fix group order, CM disciminant and embedding degree in advance as long as they meet the requirements. Elliptic curves
introduced by Scott with CM discriminant -3, they were investigated by Scott and Takashima but CM discriminant -1 are not. In this presentation, we show the result of generating curve parameters with
CM discriminant -1 and what amount of parameters meet the requirements.
Tuesday June 23, 2009
Andy Novocin (ANR LareDa, LIRMM, Montpellier)
Gradual Sub-Lattice Reduction and Applications
B011, 10:30
One of the primary uses of lattice reduction algorithms is to approximate short vectors in a lattice. I present a new algorithm which produces approximations of short vectors in certain lattices. It
does this by generating a reduced basis of a sub-lattice which is guaranteed to contain all short vectors in the given lattice. This algorithm has a complexity which is less dependent on the size of
the input basis vectors and more dependent on the size of the output vectors.
To illustrate the usefulness of the new algorithm I will show how it can be used to give new complexity bounds for factoring polynomials in Z[x] and reconstructing algebraic numbers from
Monday May 18, 2009
Nicolas Guillermin (Centre d'électronique de l'armement (CELAR), DGA) Architecture matérielle pour la cryptographie sur courbes elliptiques et RNS C103, 16:00
pdf (1027 kb)
Thursday April 30, 2009
Judy-anne Osborn (Australian National University) On Hadamard's Maximal Determinant Problem A006, 11:00
pdf (3142 kb)
The Maximal Determinant Problem was first posed around 1898. It asks for a square matrix of largest possible determinant, with the entries of the matrix restricted to be drawn from the set {0, 1}, or
equivalently {+1, -1}.
Emperical investigations show an intriguing amount of structure in this problem, both in the numerical sequence of maximal determinants, and in the corresponding maximal determinant matrices
themselves. But naive brute force search becomes infeasible beyond very small orders, due to the exponential nature of the search space.
High and maximal determinant matrices are useful in applications, particularly in statistics, which is one reason why it is desirable to have at hand a means of constructing these matrices. For
certain sparse infinite subsequences of orders, constructive algorithms have been found - some relating to finite fields. However progress over the last one hundred years has been distinctly patchy,
depending on elementary number theoretic properties of the matrix order: particularly its remainder upon division by four.
We discuss ways of setting up computations which may be feasible with current computing power and yet still yield new maximal determinant matrices that would not be accessible to a naive search.
Thursday March 26, 2009
Jérémie Detrey (CACAO) Hardware Operators for Pairing-Based Cryptography – Part II: Because speed also matters – A006, 10:30
pdf (684 kb)
Originally introduced in cryptography by Menezes, Okamoto and Vanstone (1993) then Frey and Rück (1994) to attack the discrete logarithm problem over a particular class of elliptic curves, pairings
have since then been put to a constructive use in various useful cryptographic protocols such as short digital signature or identity-based encryption. However, evaluating these pairings relies
heavily on finite field arithmetic, and their computation in software is still expensive. Developing hardware accelerators is therefore crucial.
In the second part of this double-talk, we will focus on the other end of the hardware design spectrum. While the first part (given by Jean-Luc Beuchat) presented a co-processor which, although quite
slow, would strive to minimize the amount of hardware resources required to compute the Tate pairing, in this second part we will describe another co-processor architecture, designed to achieve much
lower computation times, at the expense of hardware resources.
Thursday February 26, 2009
Jean-Luc Beuchat (Tsukuba) Hardware Operators for Pairing-Based Cryptography – Part I: Because size matters – A208, 10:00
pdf (676 kb)
Originally introduced in cryptography by Menezes, Okamoto and Vanstone (1993) then Frey and Rück (1994) to attack the discrete logarithm problem over a particular class of elliptic curves, pairings
have since then been put to a constructive use in various useful cryptographic protocols such as short digital signature or identity-based encryption. However, evaluating these pairings relies
heavily on finite field arithmetic, and their computation in software is still expensive. Developing hardware accelerators is therefore crucial.
In this talk, we will then present a hardware co-processor designed to accelerate the computation of the Tate pairing in characteristics 2 and 3. As the title suggests, this talk will emphasize on
reducing the silicon footprint (or in our case the usage of FPGA resources) of the circuit to ensure scalability, while trying to minimize the impact on the overall performances.
Thursday November 6, 2008
Nicolas Estibals (ENS Lyon) Multiplieurs parallèles et pipelinés pour le calcul de couplage en caractéristiques 2 et 3 A006, 11:00
pdf (872 kb)
Thursday October 16, 2008
Marc Mezzarobba (Projet Algo)
Suites et fonctions holonomes : évaluation numérique et calcul automatique de bornes
A006, 10:30
Thursday June 26, 2008
Éric Schost (University of Western Ontario.)
Deformation techniques for triangular arithmetic
B200, 14:00
Triangular representations are a versatile data structure; however, even basic arithmetic operations raise difficult questions with such objects. I will present an algorithm for multiplication modulo
a triangular set that relies on deformation techniques and ultimately evaluation and interpolation. It features a quasi-linear running time (without hidden exponential factor), at least in some nice
cases. More or less successful applications include polynomial multiplication, operations on algebraic numbers and arithmetic in Artin-Schreier extensions.
Friday May 23, 2008
Joerg Arndt (Australian National University) arctan relations for computing pi. A006, 10:00
pdf (89 kb)
Wednesday May 14, 2008
Joerg Arndt (Australian National University) Binary polynomial irreducibility tests avoiding GCDs. A006, 14:00
Thursday April 10, 2008
Mathieu Cluzeau (INRIA Rocquencourt, équipe SECRET)
Reconnaissance d'un code linéaire en bloc
A006, 10:30
Tuesday March 25, 2008
Nicolas Meloni (Université de Toulon)
Chaines d'additions différentielles et multiplication de point sur les courbes elliptiques
A006, 10:30
Thursday February 14, 2008
Laurent Imbert (LIRMM)
Quelques systèmes de numération exotiques (et applications)
A006, 10:30
Thursday February 7, 2008
Guillaume Melquiond (MSR-INRIA)
L'arithmétique flottante comme outil de preuve formelle
A006, 10:30
Thursday January 31, 2008
Aurélie Bauer (Université de Versailles Saint-Quentin-en-Yvelines, Laboratoire PRISM,)
Toward a Rigorous Variation of Coppersmith's Algorithm on Three Variables
A006, 10:30
In 1996, Coppersmith introduced two lattice reduction based techniques to find small roots in polynomial equations. One technique works for modular univariate polynomials, the other for bivariate
polynomials over the integers. Since then, these methods have been used in a huge variety of cryptanalytic applications. Some applications also use extensions of Coppersmith's techniques on more
variables. However, these extensions are heuristic methods.
In this presentation, we present and analyze a new variation of Coppersmith's algorithm on three variables over the integers. We also study the applicability of our method to short RSA exponents
attacks. In addition to lattice reduction techniques, our method also uses Gröbner bases computations. Moreover, at least in principle, it can be generalized to four or more variables.
Thursday January 17, 2008
Wednesday December 5, 2007
Christophe Doche
DBNS et cryptographie sur courbes elliptiques
A006, 10:30
Thursday November 29, 2007
Clément Pernet
Algèbre linéaire dense dans des petits corps finis: théorie et pratique.
B13, 10:30
Thursday November 8, 2007
Thomas Sirvent
Schéma de diffusion efficace basé sur des attributs
A006, 10:30
Monday June 18, 2007
Jean-Luc Beuchat Arithmetic Operators for Pairing-Based Cryptography B13, 10:30
pdf (625 kb)
Since their introduction in constructive cryptographic applications, pairings over (hyper)elliptic curves are at the heart of an ever increasing number of protocols. Software implementations being
rather slow, the study of hardware architectures became an active research area. In this talk, we first describe an accelerator for the $\eta_T$ pairing over $\mathbb{F}_3[x]/(x^{97}+x^{12}+2)$. Our
architecture is based on a unified arithmetic operator which performs addition, multiplication, and cubing over $\mathbb{F}_{3^{97}}$. This design methodology allows us to design a compact
coprocessor ($1888$ slices on a Virtex-II Pro~$4$ FPGA) which compares favorably with other solutions described in the open literature. We then describe ways to extend our approach to any
characteristic and any extension field.
The talk will be based on the following research reports:
Thursday June 14, 2007
Ley Wilson
Quaternion Algebras and Q-curves
B13, 10:30
Let K be an imaginary quadratic field with Hilbert class field H and maximal order OK. We consider elliptic curves E defined over H with the properties that the endomorphism ring of E is isomorphic
to OK and E is isogenous to E over H for all \sigma\in Gal(H/K). Taking the Weil restriction W_{H/K} of such an E from H to K, one obtains an abelian variety whose endomorphism ring will be either a
field or a quaternion algebra. The question of which quaternion algebras may be obtained in this way is one of our motivations.
For quaternion algebras to occur, the class group of K must have non-cyclic 2-Sylow subgroup, the simplest possible examples occuring when K has class number 4. In this case, investigating when W_{H/
K}(E) has a non-abelian endomorphism algebra is closely related to finding extensions L/H such that Gal(L/K) is either the dihedral or quaternion group of order 8.
Thursday June 7, 2007
Jeremie Detrey
Évaluation en virgule flottante de la fonction exponentielle sur FPGA
B13, 10:30
Tuesday June 5, 2007
David Kohel (Université de Sydney et UHP-Nancy 1)
Complex multiplication and canonical lifts
B200, 14:00
The $j$-invariant of an elliptic curve with complex multiplication by $K$ is well-known to generate the Hilbert class field of $K$. Such $j$-invariants, or rather their minimal polynomials in $\ZZ[x]
$, can be determined by means of complex analytic methods from a given CM lattice in $\CC$. A construction of CM moduli by $p$-adic lifting techniques was introduced by Couveignes and Henocq.
Efficient versions of one-dimensional $p$-adic lifting were developed by Br\"oker. These methods provide an alternative application of $p$-adic canonical lifts, as introduced by Satoh for determining
the zeta function of an elliptic curves $E/\FF_{p^n}$.
Construction of such defining polynomials for CM curves is an area of active interest for use in cryptographic constructions. Together with Gaudry, Houtmann, Ritzenthaler, and Weng, we generalised
the elliptic curve CM construction to genus 2 curves using $2$-adic canonical lifts. The output of this algorithm is data specifying a defining ideal for the CM Igusa invariants $(j_1,j_2,j_3)$ in $\
ZZ[x_1,x_2,x_3]$. In contrast to Mestre's AGM algorithm for determining zeta functions of genus 2 curves $C/\FF_{2^n}$, this construction pursues the alternative application of canonical lifts to CM
constructions. With Carls and Lubicz, I developed an analogous $3$-adic CM construction using theta functions. In this talk I will report on recent progress and challenges in extending and improving
these algorithms.
Thursday April 26, 2007
David Lubicz
Relèvement canonique en caractéristique impaire.
B200, 10:30
Tuesday April 17, 2007
Schönhage proposed in the paper "Schnelle Multiplikation von Polynomen über Körpern der Charakteristik 2" (Acta Informatica, 1977) an O(n log(n) log(log(n))) algorithm to multiply polynomials over GF
(2)[x]. We describe that algorithm and report on its implementation in NTL.
Tuesday April 17, 2007
Richard Brent A Multi-level Blocking Algorithm for Distinct-Degree Factorization of Polynomials over GF(2). B200, 10:30
Abstract: We describe a new multi-level blocking algorithm for distinct-degree factorization of polynomials over GF(2). The idea of the algorithm is to use one level of blocking to replace most GCD
computations by multiplications, and a finer level of blocking to replace most multiplications by squarings (which are much faster than multiplications over GF(2)). As an application we give an
algorithm that searches for all irreducible trinomials of given degree. Under plausible assumptions, the expected running time of this algorithm is much less than that of the classical algorithm. For
example, our implementation gives a speedup of more than 60 over the classical algorithm for trinomials of degree 6972593 (a Mersenne exponent). [Joint work with Paul Zimmermann.]
Thursday March 15, 2007
Ben Smith Explicit isogenies of hyperelliptic Jacobians B011, 10:30
Isogenies — surjective homomorphisms of algebraic groups with finite kernel — are of great interest in number theory and cryptography. Algorithms for computing with isogenies of elliptic curves are
well-known, but in higher dimensions, the situation is more complicated, and few explicit examples of non-trivial isogenies are known. We will discuss some of the computational issues, and describe
some examples and applications of isogenies of Jacobians of hyperelliptic curves.
Thursday March 8, 2007
Guillaume Hanrot Problème du vecteur le plus court dans un réseau : analyse de l'algorithme de Kannan (travail commun avec D. Stehlé). B011, 11:00
Thursday February 8, 2007
Tuesday January 16, 2007
Sylvain Chevillard (speaker), Christoph Lauter (Project-team Arénaire, INRIA Rhône-alpes.) Une norme infinie certifiée pour la validation d'algorithmes numériques B200, 14:45
Tuesday January 16, 2007
Christoph Lauter (Project-team Arénaire, INRIA Rhône-alpes.) Automatisation du contrôle de précision et de la preuve pour les formats double-double et triple-double B200, 14:00
Wednesday November 29, 2006
Shift registers are very common hardware devices. They are always associated with a combinatorial/sequential feedback. Linear Feedback Shift Registers (LFSRs) are certainly the most famous setup
amongst those circuits. LFSRs are used everywhere in communication systems: scramblers, stream-ciphers, spread spectrum, Built-in Self Test (BIST)... Despite their popularity, the impact of LFSRs
characteristics has never been clearly studied. I have studied the LFSRs synthesis on Xilinx Spartan2E FPGA with several goals (area, critical path, throughput). Studying high throughtput synthesis
is particularly interesting since it is a circuitous way to study software synthesis. I will describe which properties can be observed for both synthesis. In conclusion, other shift registers setups
will be considered like Feedback with Carry Shift Registers (FCSRs) or Non Linear Feedback Shift Registers (NLFSRs).
Tuesday October 10, 2006
Using Deurings theory of correspondences we are able to construct homomorphisms between the degree zero classgroups of function fields. Correspondences are divisors of function fields with
transcendent constant fields of degree one. They form an entire ring which is for example in the case of elliptic function fields isomorphic to an order of an imaginary quadratic number field. In
this talk we show how to compute endomporphisms of elliptic and hyperelliptic curves using correspondences.
Friday April 14, 2006
Michael Quisquater
Cryptanalyse lineaire des algorithmes de chiffrement par bloc.
B200, 10:00
Thursday April 6, 2006
Stef Graillat
Évaluation précise de polynômes en précision finie
B200, 10:00
Monday March 27, 2006
Christopher Wolf
Division without Multiplication in Factor Rings
B200, 11:00
In a factor ring, i.e., in a polynomial ring F[z]/(m) or the integer ring Z_n, the conventional way of performing division a/b consists of two steps: first, the inverse b^{-1} is computed and then
the product ab^{-1}. In this talk we describe a technique called "direct division" which computes the division a/b for given a,b directly, only using addition and multiplication in the underlying
structure, i.e., finite field operations in F for the polynomial ring F[z]/(m) and addition and multiplication by 2 in the integer ring Z_n. This technique requires that the module m is not divisible
by z, and the module n is odd.
Thursday March 23, 2006
Benoît Daireaux Analyse dynamique des algorithmes euclidiens B200, 14:00
pdf (522 kb)
Thursday March 16, 2006
Frederik Vercauteren
The Number Field Sieve in the Medium Prime Case
B200, 14:00
Thursday February 16, 2006
Thomas Plantard Arithmétique modulaire pour la cryptographie. A208, 10:00
pdf (966 kb)
Thursday January 5, 2006
Marion Videau (Project-team Codes, INRIA Rocquencourt.) Propriétés cryptographiques des fonctions booléennes symétriques. A208, 11:00
Thursday November 24, 2005
Alexander Kruppa (Technische Universität München)
Optimising the enhanced standard continuation of P-1, P+1 and ECM
B200, 10:00
The enhanced standard continuation of the P-1, P+1 and ECM factoring methods chooses pairs (a,b), where a is a multiple of a suitably chosen d, so that every prime in the desired stage 2 interval can
be written as a-b. Montgomery [1] showed how to include more than one prime per (a,b) pair by instead evaluating f(a)-f(b) so that this bivariate polynomial has algebraic factors. However, he
restricts his analysis to the algebraic factors a-b and a+b, and considers only prime values taken by these factors. We present a framework for generalising Montgomery's ideas by choosing (a,b) pairs
as nodes in a partial cover of a bipartite graph, which allows utilising large prime factors of composite values, and algebraic factors of higher degree.
[1] P. L. Montgomery, Speeding the Pollard and elliptic Curve Methods of factorization, Math. Comp. 48 (177), 1987.
Wednesday November 23, 2005, informal workgroup
Let K be a p-adic field. We give an explicit characterization of the abelian extensions of K of degree p by relating the coefficients of the generating polynomials of extensions L/K of degree p to
the norm group N_{L/K}(L^*). This is applied in the construction of class fields of degree p^m.
Tuesday November 8, 2005, informal workgroup
Damien to be announced B200, 16:00
to be announced
Thursday October 13, 2005
Thursday October 6, 2005
The talk surveys various algorithms for computing in the divisor class groups of general non singular curves and gives a running time discussion.
Thursday September 29, 2005, informal workgroup
Pierrick On finit l'exposé d'il y a deux semaines. B200, 11:00
Thursday September 29, 2005, informal workgroup
Paul On finit l'exposé de la semaine dernière. B200, 10:00
Thursday September 22, 2005, informal workgroup
Paul Calcul de fonctions holonomes en O(M(n) log(n)^3) A006, 10:00
Thursday September 15, 2005, informal workgroup
Pierrick Fonctions thétas et formules efficaces pour loi de groupe en genre 2. B200, 10:30
Friday June 3, 2005, informal workgroup
Julien Cochet
à préciser
B200, 11:00
Thursday April 14, 2005
Damien Vergnaud (LMNO, CNRS / Université de Caen.) On the decisional xyz-Diffie Hellman problem. A006, 16:00
Digital signatures have the sometimes unwanted property of being universally verifiable by anybody having access to the signer's public key. In recent work with F. Laguillaumie and P. Pailler, we
have proposed a signature scheme where the verification requires interaction with the signer. Its security relies on the « xyz » variant of the classical Diffie-Hellman problem. We present in this
talk the underlying algorithmical problem within its cryptographical context, and give some assessment of its difficulty
Thursday April 7, 2005, informal workgroup
Jean-Yves Degos
Study of Basiri-Enge-Faugère-Gurel paper on arithmetic of C[3,4] curves.
B200, 14:00
Wednesday March 23, 2005, informal workgroup
Paul Study of P. L. Montgomery's paper: "Five, Six, and Seven-Term Karatsuba-Like Formulae". B200, 14:00
Thursday March 10, 2005, informal workgroup
Emmanuel Study of the security proofs of OAEP and OAEP+ B200, 14:00
Thursday March 3, 2005
A curve of genus g defined over the complex field C is isomorphic to a torus with g holes, or equivalently to a quotient of the form C^g/(Z^g.1⊕Z^g.τ), τ being a g×g matrix called a Riemann matrix.
When the genus g equals one, the computation of τ from the equation of an elliptic curve is one of the classical applications of the arithmetico-geometric mean (AGM). The AGM can be interpreted using
functions called theta constants.
We show how this special case extends to higher genus, using a generalization of the AGM known as the Borchardt mean.
In particular, we develop an algorithm for computing genus 2 Riemann matrices in almost linear time. This algorithm can be implemented easily.
As we also show, this technique allows for rapid computation of modular forms and functions, and we discuss the applications thereof (construction of CM curves, explicit computation of isogenies, …). | {"url":"https://caramel.loria.fr/seminars.en.html","timestamp":"2024-11-10T02:13:06Z","content_type":"application/xhtml+xml","content_length":"136446","record_id":"<urn:uuid:4ae57286-d265-470b-85ba-1e85aaf2645b>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00169.warc.gz"} |
In situ Visualization with Ascent
In situ Visualization with Ascent
Ascent is a system designed to meet the in-situ visualization and analysis needs of simulation code teams running multi-physics calculations on many-core HPC architectures. It provides rendering
runtimes that can leverage many-core CPUs and GPUs to render images of simulation meshes.
Compiling with GNU Make
After building and installing Ascent according to the instructions at Building Ascent, you can enable support for it in WarpX by changing the line
in GNUmakefile to
Furthermore, you must ensure that either the ASCENT_DIR shell environment variable contains the directory where Ascent is installed or you must specify this location when invoking make, i.e.,
make -j 8 USE_ASCENT_INSITU=TRUE ASCENT_DIR=/path/to/ascent/install
Inputs File Configuration
Once WarpX has been compiled with Ascent support, it will need to be enabled and configured at runtime. This is done using our usual inputs file (read with amrex::ParmParse). The supported parameters
are part of the FullDiagnostics with <diag_name>.format parameter set to ascent.
Visualization/Analysis Pipeline Configuration
Ascent uses the file ascent_actions.yaml to configure analysis and visualization pipelines. Ascent looks for the ascent_actions.yaml file in the current working directory.
For example, the following ascent_actions.yaml file extracts an isosurface of the field Ex for 15 levels and saves the resulting images to levels_<nnnn>.png. Ascent Actions provides an overview over
all available analysis and visualization actions.
action: "add_pipelines"
type: "contour"
field: "Ex"
levels: 15
action: "add_scenes"
image_prefix: "levels_%04d"
type: "pseudocolor"
pipeline: "p1"
field: "Ex"
Here is another ascent_actions.yaml example that renders isosurfaces and particles:
action: "add_pipelines"
type: "contour"
field: "Bx"
levels: 3
action: "add_scenes"
type: "pseudocolor"
pipeline: "p1"
field: "Bx"
type: "pseudocolor"
field: "particle_electrons_Bx"
radius: 0.0000005
azimuth: 100
elevation: 10
image_prefix: "out_render_3d_%06d"
Finally, here is a more complex ascent_actions.yaml example that creates the same images as the prior example, but adds a trigger that creates a Cinema Database at cycle 300:
action: "add_triggers"
condition: "cycle() == 300"
actions_file: "trigger.yaml"
action: "add_pipelines"
type: "contour"
field: "jy"
iso_values: [ 1000000000000.0, -1000000000000.0]
action: "add_scenes"
type: "pseudocolor"
pipeline: "p1"
field: "jy"
type: "pseudocolor"
field: "particle_electrons_w"
radius: 0.0000002
azimuth: 100
elevation: 10
image_prefix: "out_render_jy_part_w_3d_%06d"
When the trigger condition is meet, cycle() == 300, the actions in trigger.yaml are also executed:
action: "add_pipelines"
type: "contour"
field: "jy"
iso_values: [ 1000000000000.0, -1000000000000.0]
action: "add_scenes"
type: "pseudocolor"
pipeline: "p1"
field: "jy"
type: "pseudocolor"
field: "particle_electrons_w"
radius: 0.0000001
type: "cinema"
phi: 10
theta: 10
db_name: "cinema_out"
You can view the Cinema Database result by opening cinema_databases/cinema_out/index.html.
With Ascent/Conduit, one can store the intermediate data files before the rendering step is applied to custom files. These so-called Conduit Blueprint HDF5 files can be “replayed”, i.e. rendered
without running the simulation again. VisIt 3.0+ also supports those files.
Replay is a utility that allows the user to replay a simulation from aforementioned files and rendering them with Ascent. Replay enables the user or developer to pick specific time steps and load
them for Ascent visualization, without running the simulation again.
We will guide you through the replay procedure.
Get Blueprint Files
To use replay, you first need Conduit Blueprint HDF5 files. The following block can be used in an ascent action to extract Conduit Blueprint HDF5 files from a simulation run.
action: "add_extracts"
type: "relay"
path: "conduit_blueprint"
protocol: "blueprint/mesh/hdf5"
The output in the WarpX run directory will look as in the following listing. The .root file is a metadata file and the corresponding directory contains the conduit blueprint data in an internal
format that is based on HDF5.
In order to select a few time steps after the fact, a so-called cycles file can be created. A cycles file is a simple text file that lists one root file per line, e.g.:
Run Replay
For Ascent Replay, two command line tools are provided in the utilities/replay directory of the Ascent installation. There are two version of replay: the MPI-parallel version replay_mpi and a serial
version, replay_ser. Use an MPI-parallel replay with data sets created with MPI-parallel builds of WarpX. Here we use replay_mpi as an example.
The options for replay are:
• --root: specifies Blueprint root file to load
• --cycles: specifies a text file containing a list of Blueprint root files to load
• --actions: specifies the name of the actions file to use (default: ascent_actions.yaml)
Instead of starting a simulation that generates data for Ascent, we now execute replay_ser/replay_mpi. Replay will loop over the files listed in cycles in the order in which they appear in the cycles
For example, for a small data example that fits on a single computer:
./replay_ser --root=conduit_blueprint.cycle_000400.root --actions=ascent_actions.yaml
Will replay the data of WarpX step 400 (“cycle” 400). A whole set of steps can be replayed with the above mentioned cycles file:
./replay_ser --cycles=warpx_list.txt --actions=ascent_actions.yaml
For larger examples, e.g. on a cluster with Slurm batch system, a parallel launch could look like this:
# one step
srun -n 8 ./replay_mpi --root=conduit_blueprint.cycle_000400.root --actions=ascent_actions.yaml
# multiple steps
srun -n 8 ./replay_mpi --cycles=warpx_list.txt --actions=ascent_actions.yaml
Example Actions
A visualization of the electric field component \(E_x\) (variable: Ex) with a contour plot and with added particles can be obtained with the following Ascent Action. This action can be used both in
replay as well as in situ runs.
action: "add_pipelines"
type: "contour"
field: "Ex"
levels: 16
type: "clip"
topology: topo # name of the amr mesh
x: 0.0
y: 0.0
z: 0.0
x: 0.0
y: -1.0
z: 0.0
x: 0.0
y: 0.0
z: 0.0
x: -0.7
y: -0.7
z: 0.0
type: histsampling
field: particle_electrons_uz
bins: 64
sample_rate: 0.90
type: "clip"
topology: particle_electrons # particle data
x: 0.0
y: 0.0
z: 0.0
x: 0.0
y: -1.0
z: 0.0
x: 0.0
y: 0.0
z: 0.0
x: -0.7
y: -0.7
z: 0.0
# Uncomment this block if you want to create "Conduit Blueprint files" that can
# be used with Ascent "replay" after the simulation run.
# Replay is a workflow to visualize individual steps without running the simulation again.
# action: "add_extracts"
# extracts:
# e1:
# type: "relay"
# params:
# path: "./conduit_blueprint"
# protocol: "blueprint/mesh/hdf5"
action: "add_scenes"
type: "pseudocolor"
field: "particle_electrons_uz"
pipeline: "sampled_particles"
type: "pseudocolor"
field: "Ex"
pipeline: "clipped_volume"
bg_color: [1.0, 1.0, 1.0]
fg_color: [0.0, 0.0, 0.0]
image_prefix: "lwfa_Ex_e-uz_%06d"
azimuth: 20
elevation: 30
zoom: 2.5
There are more Ascent Actions examples available for you to play.
This section is in-progress. TODOs: finalize acceptance testing; update 3D LWFA example
In the preparation of simulations, it is generally useful to run small, under-resolved versions of the planned simulation layout first. Ascent replay is helpful in the setup of an in situ
visualization pipeline during this process. In the following, a Jupyter-based workflow is shown that can be used to quickly iterate on the design of a ascent_actions.yaml file, repeatedly rendering
the same (small) data.
First, run a small simulation, e.g. on a local computer, and create conduit blueprint files (see above). Second, copy the Jupyter Notebook file ascent_replay_warpx.ipynb into the simulation output
directory. Third, download and start a Docker container with a prepared Jupyter installation and Ascent Python bindings from the simulation output directory:
docker pull alpinedav/ascent-jupyter:latest
docker run -v$PWD:/home/user/ascent/install-debug/examples/ascent/tutorial/ascent_intro/notebooks/replay -p 8000:8000 -p 8888:8888 -p 9000:9000 -p 10000:10000 -t -i alpinedav/ascent-jupyter:latest
Now, access Jupyter Lab via: http://localhost:8888/lab (password: learn).
Inside the Jupyter Lab is a replay/ directory, which mounts the outer working directory. You can now open ascent_replay_warpx.ipynb and execute all cells. The last two cells are the replay action
that can be quickly iterated: change replay_actions.yaml cell and execute both.
• Keep an eye on the terminal, if a replay action is erroneous it will show up on the terminal that started the docker container. (TODO: We might want to catch that inside python and print it in
Jupyter instead.)
• If you remove a “key” from the replay action, you might see an error in the AscentViewer. Restart and execute all cells in that case. | {"url":"https://warpx.readthedocs.io/en/latest/dataanalysis/ascent.html","timestamp":"2024-11-05T10:19:04Z","content_type":"text/html","content_length":"54751","record_id":"<urn:uuid:9f5b5368-ba2c-4717-8632-538cbec5f17f>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00803.warc.gz"} |
UK House Sales – More Seasonality in Time Series with R
So the average sale price of houses in the UK is seasonal. Does that mean it’s sensible to advise house buyers to only purchase in the winter months? Let’s try to see.
I’m going to have a look and see if the data we have implies that the change in average sale price of a house with the month is actually just a function of some other monthly variation. I don’t
really know how to go about doing this but it’s probably best to not let things like that stop me – I’m thinking the first port of call is likely calculate the correlation between the month each of
the other factors (excluding price). If there’s a decent correlation (positive or negative) then we might be in trouble and will have to investigate that variable with a bit more seriousness.
Again, that’d be a delightfully easy task if I could hold the entire dataset in memory. Unfortunately I’m not that lucky and so I’ll have to do a bit of aggregation before importing the data to R/
So my independent variables:
1.) Region
2.) Type of house
3.) New or old house
4.) Freehold or leasehold
I’m thinking of the work we did in the last blog post in Python and that that might be the best way to proceed; to generate vectors containing the average proportion of sales due to each ‘test group’
(the factors of the independent variable in question) in each of the relevant years. Once I’ve got that, I’m initially thinking of a twelve variant paired t-test. We’ve got 12 different months – in
each month we’ve got a year for which each of the other test groups have a corresponding year, hence the choice of paired t-test. However, previously when I grievously abused the normality assumption
required to run a t-test I had a whole bunch of data (800,000 points) and so I was sort of O.K with it. Now, I’ve got 18. We may have to investigate other options – Kruskal-Wallis being at the
forefront of those. Anyway – let’s worry about that when we have to.
First things first, let’s get this data into a format we can load into memory:
awk -F, '{print $4"-"$(NF-1)}' pp-all.csv | cut -d '-' -f1,2,4 | tr -d '"' | tr '-' ' ' | sed -e 's/s+/-/' | sed -e 's/s+/,/' | sort | uniq -c | sort -s -nk2 | sed 's/^ *//' | sed -e 's/s+/,/' | awk
-F, '{if ($3 != "2014-01") print $0}' > number_of_sales_by_region.txt
Again, a horrible one-liner that I’ll have to apologise for. All it does is give me an output file with the format: Count | Month | Region – off of the back of that I can now use R:
myData <- read.csv('number_of_sales_by_region.txt', header=F, sep=',', col.names=c("Sales", "Datey", "Region"))
## To store as a date object we need a day - let's assume the first of the month
myData$Datey <- as.Date(paste(myData$Datey, 1, sep="-"), format="%Y-%m-%d")
## I'm not too worried about January 2014 - it makes the lengths of the 'month vectors' uneven and ruins the below graphs
myData <- myData[format(myData$Datey, "%Y") < 2014,]
byYear <- data.frame(aggregate(Sales ~ format(Datey, "%Y"), data = myData, FUN=sum))
colnames(byYear) <- c("Year", "houseSales")
ggplot(byYear, aes(x=Year, y=houseSales)) + geom_bar(stat="identity") + ggtitle("Number of UK House Sales") + theme(axis.text.x = element_text(angle=90, hjust=1)) + scale_y_continuous(name="Houses Sold", labels=comma)
byMonth <- data.frame(aggregate(Sales ~ format(Datey, "%m"), data = myData, FUN=sum))
colnames(byMonth) <- c("Month", "houseSales")
byMonth$Month <- factor(month.name, levels=month.name)
ggplot(byMonth, aes(x=Month, y=houseSales)) + geom_bar(stat="identity") + ggtitle("Number of UK House Sales") + theme(axis.text.x = element_text(angle=90, hjust=1)) + scale_y_continuous(name="Houses Sold", labels=comma)
Giving us what’d I’d class as a couple of very interesting graphs:
In terms of the housing crash, we saw it a bit in the average house sale price but we can see the main impact was a complete slow-down on the number of houses sold. There are potentially hints of a
re-awakening in 2013 but I guess we’ll have to see how this year ends up panning out. The monthly variation is interesting and at first glance, counter-intuitive when viewed alongside the average
house price data. Naively, you’d expect the average house price to be highest when fewer houses were being sold (what with number of houses being the denominator and all). I’m not too bothered in
digging into the relationship between number of houses sold and average house sale price (I’ve got the feeling that it’s the sort of thing economists would concern themselves with) so won’t really be
looking at that. I am however now at least a bit interested in the most sold houses in the UK – I don’t know what I’ll uncover but I’m marking it down as something to look at in the future.
Anyway, now we’ve had a first look at our data let’s see if we can track the proportion of UK house sales made by each region. There are likely a few ways to do this in R; I’ll be picking the
SQL-esque way because I use SQL a lot more than I use R and so am more familiar with the ideas behind it. I’d be glad to be shown a more paradigmically R way to do it (in the comments):
myData$Year <- format(myData$Datey, "%Y")
myData <- merge(x=myData, y=byYear, by = "Year")
myData$Percent <- 100*(myData$Sales/myData$houseSales)
## I'm not very London-centric but given that they're the biggest house sellers in the UK...
londontimeseries <- ts(myData[myData$Region == 'GREATER LONDON',]$Percent, frequency=12, start=c(1995, 1))
london_decomposed <- decompose(londontimeseries)
seasonality <- data.frame(london_decomposed$seasonal[c(1:12)])
colnames(seasonality) <- c("Sales", "Month")
ggplot(seasonality, aes(x=Month, y=Sales)) + geom_bar(stat="identity") + ggtitle("Seasonal variations in London's proportion of UK House Sales") + theme(axis.text.x = element_text(angle=90, hjust=1)) + scale_y_continuous(name="$ of London's % of total UK house sales", labels=percent)
and the bit we were after:
Well, I don’t really know if that’s good news or bad. It’s good in the fact that we thought to check the factors behind seasonal variations in house price. It’s bad because I can no longer advise
people to buy houses in the winter (I’ve checked and there’s a seasonal variation for every region I tried). In all honesty, I think the two graphs above are really interesting. I’m saying that the
housing crash effected London more strongly than the rest of the country, but that the market in London bounced back within a year and is now above pre-crash levels. The size of the seasonal
variations is pretty marked as well, with 20% swings either way from London’s mean value of percent of total house sales (sorry if the language seems verbose – I’m being careful to be precise).
What does this mean for our investigation into the seasonality of the average house price? Well, I’m confident that the average house price is seasonal but I’m also confident that we can’t use that
to advise people when they should be selling their house (just yet).
There are a couple of pieces of analysis I’d now like to do on this data. I think it’d be really interesting to get an idea of the ‘most-sold’ house in the UK since 1995. I also think there may be
surprises around the correlation between the number of times a house is sold and its selling price. However, this seasonality by region is also really interesting and I think I’d like to try to
cluster regions based on the seasonality of their housing market. It’d be interesting to graph the clusters and see if the divide is North/South, City/Country or something else entirely.
Additionally, the (G.C.S.E) economist in me is screaming out for the same investigation as above but with total sale price instead of number sold. | {"url":"https://dogdogfish.com/r/uk-house-sales-more-seasonality-in-time-series-with-r/","timestamp":"2024-11-04T01:41:17Z","content_type":"text/html","content_length":"79384","record_id":"<urn:uuid:578e5041-11fd-4df4-bac5-b05c760f8193>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00751.warc.gz"} |
Finding the Base Side Length of a Right Square Pyramid given Its Volume and Its Height
Question Video: Finding the Base Side Length of a Right Square Pyramid given Its Volume and Its Height Mathematics • Second Year of Secondary School
Determine the base side length of a right square pyramid whose height is 12 cm and volume is 1,296 cm³.
Video Transcript
Determine the base side length of a right square pyramid whose height is 12 centimeters and volume is 1,296 cubic centimeters.
It can be helpful to draw a sketch of the information. So here we have a pyramid. And because it’s a square pyramid, that means that the base is a square. The fact that this is a right pyramid means
that the apex, or the top of the pyramid, lies above the centroid of the base. The only length measurement information we’re given here is that the height is 12 centimeters. And so in order to work
out the base side length, we’ll need to use the fact that the volume is 1,296 cubic centimeters.
We can remember that there is a formula regarding the volume of a pyramid. This formula tells us that the volume is equal to one-third times the area of the base multiplied by ℎ, which is the height
of the pyramid. In this question, we are given the volume and the height of the pyramid. What this formula would allow us to do then is simply work out the area of the base. However, if we did use
this formula to work out the area of the base, that would allow us to work out the side length of the base. That’s because we know that the base is a square.
Now we can apply this formula and fill in the values that we know. The volume is 1,296, and the height is 12, so we have 1,296 is equal to one-third times the area of the base times 12. On the
right-hand side, we can simplify one-third times 12 as four. Dividing both sides by four, we have 1,296 over four is equal to the area of the base. Simplifying on the left-hand side, we have now
worked out that the area of the base is 324 square centimeters.
At the start of this question, we recognized that finding the area of the base would allow us to find the side length. This is because we know that the base is a square. To find the area of a square,
if we say that the side length of the square is 𝑙, then the area is equal to 𝑙 squared. We have already worked out that the area of the base, that’s the area of the square, is 324 square centimeters.
And so if we take the side length on this square to be 𝑙, then 𝑙 squared must be equal to 324. To find the value of 𝑙, we would take the square root of both sides. And the square root of 324 is 18.
And we know that that’s going to be a positive value because it’s a length.
So we can give the answer that the base side length of this square pyramid is 18 centimeters. | {"url":"https://www.nagwa.com/en/videos/919128609814/","timestamp":"2024-11-12T05:56:13Z","content_type":"text/html","content_length":"250941","record_id":"<urn:uuid:e1358b76-9d9b-4e7f-a005-c43e0bc8b60c>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00319.warc.gz"} |
Re: st: Problem with proportions as explanatory variables in panel data
Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: st: Problem with proportions as explanatory variables in panel data regression
From Maarten buis <[email protected]>
To [email protected]
Subject Re: st: Problem with proportions as explanatory variables in panel data regression
Date Tue, 14 Dec 2010 10:04:45 +0000 (GMT)
--- On Tue, 14/12/10, F. Javier Sese wrote:
> I am modeling the dependent variable (Y) as a function of three main
> explanatory variables (X1-X3) and a vector of control variables (Z).
> X1-X3 are proportions: they range between zero and one and add up to
> one for each observation (X1 + X2 + X3 = 1). Given the nature of
> X1-X3, there is a high negative correlation between them (an increase
> in one variable leads to a decrease in the other two), which gives
> rise to a potential collinearity problem that may be causing some
> unexpected results in the signs and statistical significance of the
> coefficients. In my dataset, X1 and X2 have a correlation coefficient
> of -0.81; X1 and X3 of -0.42; X2 and X3 of -0.19.
> Given that the main focus of my research is on understanding the
> impact of these three variables on Y, I would really appreciate it if
> someone can provide me with some guidance on how to obtain reliable
> parameter estimates for the coefficients b1-b3.
Multicolinearity is in it self never a problem: it leads to a reduction
in the power of our tests, but that is just an accurate representation
of the amount of information available in the data.
The real problem with your data is conceptual. We usually interpret
coefficients as a change in y for a unit change in x while keeping all
else constant. How can you change one proportion while keeping the
others constant? You can't. You can find a discussion of this problem
and possible solutions in chapter 12 of J. Aitchison (2003 [1986])
"The Statistical Analysis of Compositional Data". Caldwell, NJ: The
Blackburn Press.
Hope this helps,
Maarten L. Buis
Institut fuer Soziologie
Universitaet Tuebingen
Wilhelmstrasse 36
72074 Tuebingen
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"https://www.stata.com/statalist/archive/2010-12/msg00506.html","timestamp":"2024-11-09T08:13:26Z","content_type":"text/html","content_length":"11791","record_id":"<urn:uuid:5d5c3e6a-5cae-4212-a9af-6f4657bebadb>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00434.warc.gz"} |
Lower Bound on the Accuracy of Parameter Estimation Methods for Linear Sensorimotor Synchronization Models
The mechanisms that support sensorimotor synchronization-that is, the temporal coordination of movement with an external rhythm-are often investigated using linear computational models. The main
method used for estimating the parameters of this type of model was established in the seminal work of Vorberg and Schulze (2002), and is based on fitting the model to the observed auto-covariance
function of asynchronies between movements and pacing events. Vorberg and Schulze also identified the problem of parameter interdependence, namely, that different sets of parameters might yield
almost identical fits, and therefore the estimation method cannot determine the parameters uniquely. This problem results in a large estimation error and bias, thereby limiting the explanatory power
of existing linear models of sensorimotor synchronization. We present a mathematical analysis of the parameter interdependence problem. By applying the Cramér-Rao lower bound, a general lower bound
limiting the accuracy of any parameter estimation procedure, we prove that the mathematical structure of the linear models used in the literature determines that this problem cannot be resolved by
any unbiased estimation method without adopting further assumptions. We then show that adding a simple and empirically justified constraint on the parameter space-assuming a relationship between the
variances of the noise terms in the model-resolves the problem. In a follow-up paper in this volume, we present a novel estimation technique that uses this constraint in conjunction with matrix
algebra to reliably estimate the parameters of almost all linear models used in the literature.
Bibliographical note
Publisher Copyright:
© 2015 by Koninklijke Brill NV, Leiden, The Netherlands.
• Cramér-Rao lower bound
• Sensorimotor synchronization
• linear models
• period correction
• phase correction
Dive into the research topics of 'Lower Bound on the Accuracy of Parameter Estimation Methods for Linear Sensorimotor Synchronization Models'. Together they form a unique fingerprint. | {"url":"https://cris.huji.ac.il/en/publications/lower-bound-on-the-accuracy-of-parameter-estimation-methods-for-l","timestamp":"2024-11-10T15:36:22Z","content_type":"text/html","content_length":"52861","record_id":"<urn:uuid:647ec2fe-1778-40e6-a953-afe50532604b>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00853.warc.gz"} |
Highwoods Properties Depreciation And Amortization from 2010 to 2024 | Macroaxis
HIW Stock USD 33.27 0.23 0.70%
Highwoods Properties Depreciation And Amortization yearly trend continues to be fairly stable with very little volatility. Depreciation And Amortization is likely to outpace its year average in 2024.
Depreciation And Amortization is the systematic reduction in the recorded value of an intangible asset. This includes the allocation of the cost of tangible assets to periods in which the assets are
used, representing the expense related to the wear and tear, deterioration, or obsolescence of physical assets and intangible assets over their useful lives.
View All Fundamentals
First Reported Previous Quarter Current Value Quarterly Volatility
Depreciation And Amortization
1994-09-30 75.1 M 79.1 M 20.3 M
Dot-com Bubble Housing Crash Credit Downgrade Yuan Drop Covid
Highwoods Depreciation And Amortization
Check out the analysis of
Highwoods Properties Correlation
against competitors. For more information on how to buy Highwoods Stock please use our
How to Invest in Highwoods Properties
Latest Highwoods Properties' Depreciation And Amortization Growth Pattern
Below is the plot of the Depreciation And Amortization of Highwoods Properties over the last few years. It is the systematic reduction in the recorded value of an intangible asset. This includes the
allocation of the cost of tangible assets to periods in which the assets are used, representing the expense related to the wear and tear, deterioration, or obsolescence of physical assets and
intangible assets over their useful lives. Highwoods Properties' Depreciation And Amortization historical data analysis aims to capture in quantitative terms the overall pattern of either growth or
decline in Highwoods Properties' overall financial position and show how it may be relating to other accounts over time.
Depreciation And Amortization 10 Years Trend
Depreciation And Amortization
Highwoods Depreciation And Amortization Regression Statistics
Arithmetic Mean 316,356,813
Geometric Mean 216,938,743
Coefficient Of Variation 65.06
Mean Deviation 178,141,618
Median 226,660,000
Standard Deviation 205,836,770
Sample Variance 42368.8T
Range 644.8M
R-Value 0.93
Mean Square Error 5959.1T
R-Squared 0.87
Slope 42,915,826
Total Sum of Squares 593162.9T
Highwoods Depreciation And Amortization History
About Highwoods Properties Financial Statements
Highwoods Properties investors use historical fundamental indicators, such as Highwoods Properties' Depreciation And Amortization, to determine how well the company is positioned to perform in the
future. Understanding over-time patterns can help investors decide on long-term investments in Highwoods Properties. Please read more on our
technical analysis
fundamental analysis
Last Reported Projected for Next Year
Depreciation And Amortization 617.2 M 648.1 M
Also Currently Popular
Analyzing currently trending equities could be an opportunity to develop a better portfolio based on different market momentums that they can trigger. Utilizing the top trending stocks is also useful
when creating a market-neutral strategy or pair trading technique involving a short or a long position in a currently trending equity.
Additional Tools for Highwoods Stock Analysis
When running Highwoods Properties' price analysis, check to
measure Highwoods Properties' market volatility
, profitability, liquidity, solvency, efficiency, growth potential, financial leverage, and other vital indicators. We have many different tools that can be utilized to determine how healthy
Highwoods Properties is operating at the current time. Most of Highwoods Properties' value examination focuses on studying past and present price action to
predict the probability of Highwoods Properties' future price movements
. You can analyze the entity against its peers and the financial market as a whole to determine factors that move Highwoods Properties' price. Additionally, you may evaluate how the addition of
Highwoods Properties to your portfolios can decrease your overall portfolio volatility. | {"url":"https://widgets.macroaxis.com/financial-statements/HIW/Depreciation-And-Amortization","timestamp":"2024-11-06T11:49:00Z","content_type":"text/html","content_length":"322187","record_id":"<urn:uuid:5fa7904c-9aa2-4b30-8d3a-46b4c04f0884>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00039.warc.gz"} |
Edward's early work left an immediate impression on experts. He discovered a new class of instanton solutions to the classical Yang-Mills equations, very much a central subject at the time. He
pioneered work on field theories with N components and the associated "large-N limit" as N tends to infinity. Three years later as a Junior Fellow at Harvard he had already established a solid
international reputation - both in research and as a spell-binding lecturer. That year several major physics departments took the unusual step, at the time an extraordinary one, to attempt to recruit
a young post doctoral fellow to join their faculty as a full professor! At that point Edward returned to Princeton with Chiara Nappi, my post-doctoral fellow and Edward's new wife. Edward has been in
great demand ever since.
Edward already became well-known in his early work for having keen mathematical insights. He re-interpreted Morse theory in an original way and related the Atiyah-Singer index theorem to the concept
of super-symmetry in physics. These ideas revolved about the classical formula expressing the Laplace-Beltrami operator in terms of the de Rham exterior derivative, ? = (d+d*)2. This insight was
interesting in its own right. But it inspired his applying the same ideas to study the index of infinite-dimensional Dirac operators D and the self-adjoint operator Q = D+D*, known in physics as
super-charges, related to the energy by the representation H = Q^2 analogous to the formula for ?. This led to the name "Witten index" for the index of D, a terminology that many physicists still
In 1981 Witten also discovered an elegant approach to the positive energy theorem in classical relativity, proved in 1979 by Schoen and Yau. What developed as Witten's hallmark is the insight to
relate a set of ideas in one field to an apparently unrelated set of ideas in a different field. In the case of the positive energy theorem, Witten again took inspiration from super-symmetry to
relate the geometry of space-time to the theory of spin structures and to an identity due to Lichnerowicz. The paper by Witten framed the new proof in a conceptual structure that related it to old
ideas and made the result immediately accessible to a wide variety of physicists and mathematicians.
In 1986 Witten's had a spectacular insight by giving a quantum-field theory interpretation to Vaughan Jones' recently-discovered knot invariant. Witten showed that the Jones polynomial for a knot can
be interpreted as the expectation of the parallel transport operator around the knot in a theory of quantum fields with a Chern-Simons action. This work set the stage for many other geometric
invariants, including the Donaldson invariants, being regarded as partition functions or expectations in quantum field theory. In most of these cases, the mathematical foundations of the functional
integral representations can still not be justified, but the insights and understanding of the picture will motivate work for many years in the future.
With the resurgence of "super-string theory" in 1984, Witten quickly became one of its leading exponents and one of its most original contributors. His 1987 monograph with Green and Schwarz became
the standard reference in that subject. Later Witten unified the approach to string theory by showing that many alternative string theories could be regarded as different aspects of one grand theory.
Witten also pioneered the interpretation of symmetries related to the electromagnetic duality of Maxwell's equations, and its generalization in field theory, gauge theory, and string theory. He
pioneered the discovery of SL(2,Z) symmetry in physics, and brought concepts from number theory, as well as geometry, algebra, and representation theory centrally into physics.
In understanding Donaldson theory in 1995 Seiberg and Witten formulated the equations named after them which have provided so much insight into modern geometry. With the advent of this point of view
and fueled by its rapid dissemination over the internet, many geometers saw progress in their field proceed so rapidly that they could not hope to keep up.
Not only is Witten's own work in the field of super-symmetry, string theory, M-theory, dualities and other symmetries of physics legend, but he has trained numerous students and postdoctoral
coworkers who have come to play leading roles in string theory and other aspects of theoretical physics.
I could continue on and on about other insights and advances made or suggested by Edward Witten. But perhaps it is just as effective to mention that for all his mentioned and unmentioned work, Witten
has already received many national and international honors and awards. These include the Alan Waterman award in 1986, the Field's Medal in 1990, the CMI Research Award in 2001, the U.S. National
Medal of Science in 2002, and an honorary degree from Harvard University in 2005. Witten is a member of many honorary organizations, including the American Philosophical Society and the Royal
Society. While Witten may not need any additional recognition, it is an especially great personal pleasure and honor, as one of the original founders of IAMP, to present Edward Witten to receive the
Poincaré prize in 2006.
Arthur Jaffe
Harvard University | {"url":"https://www.iamp.org/poincare/ew06-laud.html","timestamp":"2024-11-07T23:35:28Z","content_type":"text/html","content_length":"7057","record_id":"<urn:uuid:6480e3d9-9e76-44a6-a38d-d813fbef3180>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00695.warc.gz"} |
My Favorite Balance Sheet RatioMy Favorite Balance Sheet Ratio - Portfolio123 Blog
My Favorite Balance Sheet Ratio
• Offers a number of reasons to favor companies with low net operating assets (NOA).
• Examines NOA from eight different angles.
• Gives examples of companies with very low NOA.
(This article was first published on December 20, 2017; it has here been revised for Portfolio123’s blog.)
In this piece, I want to drill down deep into a ratio that I find useful in assessing potential investments: the ratio of net operating assets (NOA) to total assets (the lower, the better). It’s my
favorite balance sheet ratio.
In a 2004 paper entitled “Do Investors Overvalue Firms with Bloated Balance Sheets,” David Hirshleifer, Kewei Hou, Siew Hong Teoh, and Yinglei Zhang came to the following conclusion: “In our
1964-2002 sample, net operating assets scaled by beginning total assets is a strong negative predictor of long-run stock returns. Predictability is robust with respect to an extensive set of controls
and testing methods.” Their data seems convincing to me.
The ratio has produced similar results for the last 20 years too. I divided companies in the Russell 1000 into deciles depending on the ratio of NOA to total assets, with a monthly rebalance since
January 1999, and looked at their price performance, using data available from Portfolio123. Here are the results, with the right-most decile having the lowest ratio. (The left-most bar is the return
of the S&P 500.)
Hirshleifer et al. explain their conclusions by invoking investor behavior: “The financial position of a firm with high net operating assets superficially looks attractive, but is deteriorating, like
an overripe fruit ready to drop from the tree.”
But there are other ways to explain this phenomenon. Like Wallace Stevens did in “Thirteen Ways of Looking at a Blackbird,” I’m going to look at eight different ways to define or calculate NOA.
1. This is the basic definition: subtract operating liabilities from operating assets, and you get NOA. Operating assets are the total assets minus cash and equivalents; operating liabilities are the
total liabilities minus total debt.
Let’s look at an imaginary company that has 10 widgets in its inventory valued at $10 apiece, a factory valued at $100, $50 in cash, and $100 in debt. Its NOA, then, is $200 ($100 in inventory plus
$100 in fixed assets), compared to $250 in total assets, for an NOA ratio of 0.8. Now let’s say it sells five of its widgets for $20 in cash apiece, and it creates five more widgets at $10 apiece,
deferring the cost of doing so (on the balance sheet, it has $50 in payables). Its NOA is now $150, but it has $350 in total assets (including $150 in cash), for a ratio of 0.43. As you can see, as
you do business wisely, you reduce your NOA.
Now, let’s look at another company that starts from an identical position but does business a bit differently. It sells its five widgets but doesn’t get paid up front (on the balance sheet, it now
has $100 in receivables) and creates five more widgets using its cash reserve of $50. Both companies have a net income of $50, $200 in equity, and $100 in debt. But company B’s net operating assets
will be $300, with an NOA ratio of 1.
Now, which company is in a better financial position? Company A has $150 in cash and company B has none. Company A can pay off its payables, pay down its debt, and/or invest in more widget
production; company B can’t do anything until it gets paid, borrows more money, or issues more shares.
2. The current NOA is the NOA at the firm’s inception plus the accumulated balance-sheet accruals since then.
The CFA Institute defines accruals in two different ways, one based on the cash flow statement and the other based on the balance sheet. The balance sheet accruals number is simply the increase in
NOA from one period to the next. It therefore follows that NOA is the sum of all the accruals over the history of the firm in addition to whatever operating assets the firm started with. A company
with low accruals is going to have lower NOA than it used to, so a company with low or negative NOA is going to be one with historically low accruals.
3. NOA is the sum of all debt, all equity, and all non-controlling interest, minus cash and equivalents.
(If you’re a Portfolio123 user, this is one way you can calculate NOA: DbtTotQ + IsNA (NonControlIntQ, 0) + IsNA (PfdEquityQ, 0) + ComEqQ – IsNA (CashEquivQ, 0).)
What is the ideal way to finance the ongoing operations of your company? Clearly, the answer is with cash. Cash has no costs. Debt has interest costs and risk; equity dilutes the ownership of your
company and has high costs due to investor expectations. Therefore, the more of your assets are in cash and equivalents and the less are in debt and equity, the lower your cost of continuing
operations will be.
4. NOA can be calculated by subtracting cash, equivalents, and non-debt liabilities from total assets.
You can rank companies, as Hirshleifer et al. and I did above, on the ratio of net operating assets to total assets, with the lower values getting higher ranks. But you could alternatively rank
companies on the ratio of cash and equivalents and non-debt liabilities to total assets, with higher values getting higher ranks, and come up with the exactly the same rankings.
I’ve just explained why a company with plenty of cash is primed to grow, but what about non-debt liabilities? These consist primarily of accounts payable, deferred liability charges, and minority
interest; also included might be accrued expenses, deferred revenue, deferred taxes, and commercial paper. These liabilities are often investable in the short term, and they cost nothing. They’re
very roughly equivalent to what Warren Buffett calls, in insurance companies, “float.” To quote from the Berkshire Hathaway (NYSE:BRK.A) Owner’s Manual,
Berkshire has access to two low-cost, non-perilous sources of leverage that allow us to safely own far more assets than our equity capital alone would permit: deferred taxes and ‘float,’ the
funds of others that our insurance business holds because it receives premiums before needing to pay out losses. Better yet, this funding to date has been cost-free. Deferred tax liabilities bear
no interest. Neither item, of course, is equity; these are real liabilities. But they are liabilities without covenants or due dates attached to them. In effect, they give us the benefit of debt
– an ability to have more assets working for us – but saddle us with none of its drawbacks.
5. For a company with a price-to-book ratio of 1, NOA is exactly the same as enterprise value (EV). EV is defined as market capitalization plus preferred equity plus non-controlling interest minus
cash and equivalents plus total debt; NOA can be defined in exactly the same way except by substituting book value, or common equity, for market cap.
In every EV-based valuation ratio – EV to EBITDA, EV to gross profit, EV to sales, or the classic of discounted cash flow analysis, unlevered free cash flow to EV – the lower the enterprise value,
the better. So, it stands to reason that if a company’s price-to-book ratio is more or less 1, the lower the NOA, the better.
6. Since common equity is the sum of net operating and net financial assets, NOA is equivalent to common equity minus net financial assets. Financial assets (as opposed to operating assets) are cash
and equivalents; financial liabilities are debt, preferred equity, and non-controlling interest; and net financial assets is the difference between them.
Of all a company’s sources of capital, common equity has the highest cost. So a company with a low ratio of common equity to net financial assets is probably going to be healthier than a company with
a high ratio. Remember that in DuPont analysis, return on equity is profit margin times asset turnover times the ratio of assets to equity. Clearly, having a lot more assets than common equity is a
good thing.
7. Today’s NOA is last year’s (or quarter’s or decade’s) NOA plus accumulated net operating profit after taxes (NOPAT) since then, minus accumulated free cash flow (defined as the sum of cash flow
from operations and cash flow from investments – the latter is usually a negative number). Using the same logic as I used in definition 2, NOA is therefore the accumulated NOPAT since the company’s
inception minus its accumulated free cash flow. These equivalences come from Stephen Penman’s book Financial Statement Analysis and Security Valuation; his accounting reformulations depart from GAAP
in many instances, so if you stick to GAAP numbers, your calculation of NOA based on NOPAT and free cash flow will be a bit off. However, in 90% of the Russell 1000, the difference between this
definition and the previous definitions is less than 10%.
This is the crux of Hirshleifer et al.’s argument:
“When cumulative net operating income (accounting value added) outstrips cumulative free cash flow (cash value added), subsequent earnings growth is weak. In this circumstance, we argue that
investors with limited attention overvalue the firm, because naïve earnings-based valuation disregards the firm’s relative lack of success in generating cash flows in excess of investment needs.”
They put it another way too:
“Net operating assets are a cumulative measure of the discrepancy between accounting value added and cash value added – ‘balance sheet bloat.’ An accumulation of accounting earnings without a
commensurate accumulation of free cash flows raises doubts about future profitability.”
Companies with higher NOA ratios tend to have higher earnings at the present time, simply because of their accumulated profit. But their growth potential is significantly lower than companies with
low NOA ratios.
From this understanding Hirshleifer et al. derive an eighth equivalence:
8. NOA is the accumulated accruals since the company’s inception (as measured by the difference between NOPAT and cash flow from operations) plus the accumulated investment in operating assets (the
negative cash flow from investments).
There’s nothing wrong with having deep investment in operating assets, but a company with a history of negative accruals is able to convert cash into earnings much faster than one with a history of
positive accruals.
I’ll close by naming sixteen companies with market caps over $1 billion with very low or negative net operating assets (under 20% of total assets). These are all stable companies with signs of
healthy growth and are, in my opinion, currently undervalued; they’re all pretty cash-rich too. They are: AmeriSource Bergen (ABC), Autodesk (ADSK), Bristol-Myers Squibb (BMY), Cornerstone OnDemand (
CSOD), CommVault Systems (CVLT), NIC (EGOV), Fortinet (FTNT), Hailiang Education Group (HLG), Humana (HUM), Incyte (INCY), Kinaxis (KXSCF), Microsoft (MSFT), QAD (QADA), Qualys (QLYS), Sony (SNE),
and Trend Micro (TMICY) .
Disclosure: I am long TMICY.
6 Replies to “My Favorite Balance Sheet Ratio”
1. great stuff… thanks! | {"url":"https://blog.portfolio123.com/my-favorite-balance-sheet-ratio/","timestamp":"2024-11-05T13:47:52Z","content_type":"text/html","content_length":"70545","record_id":"<urn:uuid:9aee71b0-aa71-4590-9164-639100e9ece1>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00793.warc.gz"} |
Unit 3 assignment: chapter 3 connect homework
Homework Help
Unit 3 assignment: chapter 3 connect homework
Unit 3 Assignment: Chapter 3 Connect Homework
LO 3-1) 3-29. Basic Decision Analysis Using CVP
Derby Phones is considering the introduction of a new model of headphones with the following price and cost characteristics.
Sales price $ 270 per unit
Variable costs 120 per unit
Fixed costs 300,000 per month
a. What number must Derby sell per month to break even?
b. What number must Derby sell to make an operating profit of $180,000 for the month?
(LO 3-1) 3-30. Basic Decision Analysis Using CVP
Refer to the data for Derby Phones in Exercise 3-29. Assume that the projected number of units sold for the month is 5,000. Consider requirements (b), (c), and (d) independently of each other.
a. What will the operating profit be?
b. What is the impact on operating profit if the sales price decreases by 10 percent? Increases by 20 percent?
c. What is the impact on operating profit if variable costs per unit decrease by 10 percent? Increase by 20 percent?
d. Suppose that fixed costs for the year are 20 percent lower than projected, and variable costs per unit are 10 percent higher than projected. What impact will these cost changes have on operating
profit for the year? Will profit go up? Down? By how much?
(LO 3-4) 3-63. Extensions of the CVP Model—Multiple Products
On-the-Go, Inc., produces two models of traveling cases for laptop computers—the Programmer and the Executive. The bags have the following characteristics.
Programmer Executive
Selling price per bag $70 $100
Variable cost per bag $30 $40
Expected sales (bags) per year 8,000 12,000
The total fixed costs per year for the company are $819,000.
a. What is the anticipated level of profits for the expected sales volumes?
b. Assuming that the product mix is the same at the break-even point, compute the break-even point.
c. If the product sales mix were to change to nine Programmer-style bags for each Executive-style bag, what would be the new break-even volume for On-the-Go? | {"url":"https://homeworkpaper.help/2024/01/31/unit-3-assignment-chapter-3-connect-homework/","timestamp":"2024-11-06T08:14:18Z","content_type":"text/html","content_length":"54447","record_id":"<urn:uuid:65e46fd1-5b7f-405f-8486-a91f7d905dbc>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00203.warc.gz"} |
This theoretical work studies the role of detuning in the dynamics of Rabi vortices, i.e., vortices oscillating in the photonic and excitonic fields due to light-matter coupling, as introduced in
Ref. [^[1]] (note that the report itself was published after this analysis, in which it appears as arXiv:1801.02580).
Besides focusing on detuning, the Authors also describe the dynamics of the core through the effect of a Magnus force, acting on an effective particle (the latter idea also being a concept previously
introduced by our broader collaboration). They find the mass (negative) to also depend on time. The spatial trajectories are simple (circle) but their temporal profile is more complex (featuring
accelerations) and all this is accounted for by their description. They also introduce a "Rabi energy" (Eq. (12)) obtained through the overlap of the excitonic and photonic fields, and that depends
on the position of the cores, and whose gradient, for vortices-positions as variables, defines another relevant force.
They solve the dynamics using a spectral method, meaning they do so through the introduction of a basis of special functions (Eqs. (5,6)), which involves a truncation that was checked numerically. | {"url":"http://laussy.org/index.php?title=Detuning_control_of_Rabi_vortex_oscillations_in_light-matter_coupling&oldid=32993","timestamp":"2024-11-12T19:42:47Z","content_type":"text/html","content_length":"13453","record_id":"<urn:uuid:9aa1b4a5-21a0-4ce3-a075-d0e640ba3d07>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00672.warc.gz"} |
FIC-UAI Publication Database -- Query Results
Bravo, M., Cominetti, R., & Pavez-Signe, M. (2019). Rates of convergence for inexact Krasnosel'skii-Mann iterations in Banach spaces. Math. Program., 175(1-2), 241–262.
Barrera, J., Homem-De-Mello, T., Moreno, E., Pagnoncelli, B. K., & Canessa, G. (2016). Chance-constrained problems and rare events: an importance sampling approach. Math. Program., 157(1), 153–189.
Ramirez-Pico, C., & Moreno, E. (2022). Generalized Adaptive Partition-based Method for Two-Stage Stochastic Linear Programs with Fixed Recourse. Math. Program., to appear.
Cominetti, R., Dose, V., & Scarsini, M. (2022). The price of anarchy in routing games as a function of the demand. Math. Program., Early Access.
Contreras, J. P., & Cominetti, R. (2022). Optimal error bounds for non-expansive fixed-point iterations in normed spaces. Math. Program., Early Access. | {"url":"https://ficpubs.uai.cl/search.php?sqlQuery=SELECT%20author%2C%20title%2C%20type%2C%20year%2C%20publication%2C%20abbrev_journal%2C%20volume%2C%20issue%2C%20pages%2C%20keywords%2C%20abstract%2C%20thesis%2C%20editor%2C%20publisher%2C%20place%2C%20abbrev_series_title%2C%20series_title%2C%20series_editor%2C%20series_volume%2C%20series_issue%2C%20edition%2C%20language%2C%20author_count%2C%20online_publication%2C%20online_citation%2C%20doi%2C%20serial%2C%20area%20FROM%20refs%20WHERE%20abbrev_journal%20RLIKE%20%22Math%5C%5C.%20Program%5C%5C.%22%20ORDER%20BY%20volume_numeric%20DESC&submit=Cite&citeStyle=APA&citeOrder=&orderBy=volume_numeric%20DESC&headerMsg=&showQuery=0&showLinks=0&formType=sqlSearch&showRows=20&rowOffset=0&client=&viewType=Print","timestamp":"2024-11-07T07:34:30Z","content_type":"text/html","content_length":"18732","record_id":"<urn:uuid:6493895f-5940-4ddf-8dd8-a745e3567b39>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00509.warc.gz"} |
Python Data Science Handbook
Views: ^92370
Kernel: Python 3
The text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book!
In the previous sections, we saw how to access and modify portions of arrays using simple indices (e.g., arr[0]), slices (e.g., arr[:5]), and Boolean masks (e.g., arr[arr > 0]). In this section,
we'll look at another style of array indexing, known as fancy indexing. Fancy indexing is like the simple indexing we've already seen, but we pass arrays of indices in place of single scalars. This
allows us to very quickly access and modify complicated subsets of an array's values.
Exploring Fancy Indexing
Fancy indexing is conceptually simple: it means passing an array of indices to access multiple array elements at once. For example, consider the following array:
[51 92 14 71 60 20 82 86 74 74]
Suppose we want to access three different elements. We could do it like this:
Alternatively, we can pass a single list or array of indices to obtain the same result:
When using fancy indexing, the shape of the result reflects the shape of the index arrays rather than the shape of the array being indexed:
array([[71, 86], [60, 20]])
Fancy indexing also works in multiple dimensions. Consider the following array:
array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]])
Like with standard indexing, the first index refers to the row, and the second to the column:
Notice that the first value in the result is X[0, 2], the second is X[1, 1], and the third is X[2, 3]. The pairing of indices in fancy indexing follows all the broadcasting rules that were mentioned
in Computation on Arrays: Broadcasting. So, for example, if we combine a column vector and a row vector within the indices, we get a two-dimensional result:
array([[ 2, 1, 3], [ 6, 5, 7], [10, 9, 11]])
Here, each row value is matched with each column vector, exactly as we saw in broadcasting of arithmetic operations. For example:
array([[0, 0, 0], [2, 1, 3], [4, 2, 6]])
It is always important to remember with fancy indexing that the return value reflects the broadcasted shape of the indices, rather than the shape of the array being indexed.
Combined Indexing
For even more powerful operations, fancy indexing can be combined with the other indexing schemes we've seen:
[[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11]]
We can combine fancy and simple indices:
We can also combine fancy indexing with slicing:
array([[ 6, 4, 5], [10, 8, 9]])
And we can combine fancy indexing with masking:
array([[ 0, 2], [ 4, 6], [ 8, 10]])
All of these indexing options combined lead to a very flexible set of operations for accessing and modifying array values.
Example: Selecting Random Points
One common use of fancy indexing is the selection of subsets of rows from a matrix. For example, we might have an $N$ by $D$ matrix representing $N$ points in $D$ dimensions, such as the following
points drawn from a two-dimensional normal distribution:
Let's use fancy indexing to select 20 random points. We'll do this by first choosing 20 random indices with no repeats, and use these indices to select a portion of the original array:
array([93, 45, 73, 81, 50, 10, 98, 94, 4, 64, 65, 89, 47, 84, 82, 80, 25, 90, 63, 20])
Now to see which points were selected, let's over-plot large circles at the locations of the selected points:
This sort of strategy is often used to quickly partition datasets, as is often needed in train/test splitting for validation of statistical models (see Hyperparameters and Model Validation), and in
sampling approaches to answering statistical questions.
Modifying Values with Fancy Indexing
Just as fancy indexing can be used to access parts of an array, it can also be used to modify parts of an array. For example, imagine we have an array of indices and we'd like to set the
corresponding items in an array to some value:
[ 0 99 99 3 99 5 6 7 99 9]
We can use any assignment-type operator for this. For example:
[ 0 89 89 3 89 5 6 7 89 9]
Notice, though, that repeated indices with these operations can cause some potentially unexpected results. Consider the following:
[ 6. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
Where did the 4 go? The result of this operation is to first assign x[0] = 4, followed by x[0] = 6. The result, of course, is that x[0] contains the value 6.
Fair enough, but consider this operation:
array([ 6., 0., 1., 1., 1., 0., 0., 0., 0., 0.])
You might expect that x[3] would contain the value 2, and x[4] would contain the value 3, as this is how many times each index is repeated. Why is this not the case? Conceptually, this is because x
[i] += 1 is meant as a shorthand of x[i] = x[i] + 1. x[i] + 1 is evaluated, and then the result is assigned to the indices in x. With this in mind, it is not the augmentation that happens multiple
times, but the assignment, which leads to the rather nonintuitive results.
So what if you want the other behavior where the operation is repeated? For this, you can use the at() method of ufuncs (available since NumPy 1.8), and do the following:
[ 0. 0. 1. 2. 3. 0. 0. 0. 0. 0.]
The at() method does an in-place application of the given operator at the specified indices (here, i) with the specified value (here, 1). Another method that is similar in spirit is the reduceat()
method of ufuncs, which you can read about in the NumPy documentation.
Example: Binning Data
You can use these ideas to efficiently bin data to create a histogram by hand. For example, imagine we have 1,000 values and would like to quickly find where they fall within an array of bins. We
could compute it using ufunc.at like this:
The counts now reflect the number of points within each bin–in other words, a histogram:
Of course, it would be silly to have to do this each time you want to plot a histogram. This is why Matplotlib provides the plt.hist() routine, which does the same in a single line:
plt.hist(x, bins, histtype='step');
This function will create a nearly identical plot to the one seen here. To compute the binning, matplotlib uses the np.histogram function, which does a very similar computation to what we did before.
Let's compare the two here:
NumPy routine: 10000 loops, best of 3: 97.6 µs per loop Custom routine: 10000 loops, best of 3: 19.5 µs per loop
Our own one-line algorithm is several times faster than the optimized algorithm in NumPy! How can this be? If you dig into the np.histogram source code (you can do this in IPython by typing
np.histogram??), you'll see that it's quite a bit more involved than the simple search-and-count that we've done; this is because NumPy's algorithm is more flexible, and particularly is designed for
better performance when the number of data points becomes large:
NumPy routine: 10 loops, best of 3: 68.7 ms per loop Custom routine: 10 loops, best of 3: 135 ms per loop
What this comparison shows is that algorithmic efficiency is almost never a simple question. An algorithm efficient for large datasets will not always be the best choice for small datasets, and vice
versa (see Big-O Notation). But the advantage of coding this algorithm yourself is that with an understanding of these basic methods, you could use these building blocks to extend this to do some
very interesting custom behaviors. The key to efficiently using Python in data-intensive applications is knowing about general convenience routines like np.histogram and when they're appropriate, but
also knowing how to make use of lower-level functionality when you need more pointed behavior. | {"url":"https://cocalc.com/share/public_paths/8b892baf91f98d0cf6172b872c8ad6694d0f7204/notebooks%2F02.07-Fancy-Indexing.ipynb","timestamp":"2024-11-11T20:53:39Z","content_type":"text/html","content_length":"370448","record_id":"<urn:uuid:d66641d5-ffe0-48e5-a91b-8ec8c19fd6e5>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00223.warc.gz"} |
nL-n-P6: nL-Least Squared Distances PointnL-n-P6 is the unique point in an n-Line with the Least Sum of Squared Distances to its n Lines.
ConstructionnL-n-P6 can be constructed in a recursive way:
The nL-n-P6 point can be constructed because in an n-Line the points with an equal sum of squared distances lie on an ellipse. See Ref-34, QFG #1617, #1622.This ellipse with a Fixed Sum of Squared
Distances is called here an FSD-ellipse and when passing through P it is called the P-FSD-ellipse. The point with Least Sum of Squared Distances is the center of any FSD-ellipse.A P-FSD-ellipse can
be constructed by drawing lines in an n-Line through P parallel to the n Lines. Now on each of these parallel lines there will be a second point next to P with the same fixed sum of squared distances
also lying on the P-FSD-ellipse. When we find 5 of these second points we have defined a conic and this conic should be the P-FSD-ellipse. Actually 4 points will be enough for construction because P
is per definition also on the conic.Per level a P-FSD-ellipse is constructed that will be transferred to the next level. The center of the P-FSD-ellipse is nL-n-P6 for that level.
Construction in a 3-Line (triangle):
This is the lowest level which differs from the general case.
1. Let P be some arbitrary point and K be the Symmedian Point X(6) in a triangle. X(6) is the nL-Least Squares Point of a triangle.
2. Draw lines Lp1,Lp2,Lp3 through P parallel to the sidelines L1,L2,L3,
3. Let K1,K2,K3 be the intersection points of Lp1,Lp2,Lp3 and the resp. symmedians through the triangle vertices L2^L3, L3^L1, L1^L2.
4. Let S1,S2,S3 be the reflections of P in K1,K2,K3.
5. Let Pr be the reflection of P in K.
6. The conic through P, Pr, S1, S2, S3 will be the P-FSD-ellipse.
In a similar way we can construct a P-FSD-ellipse in a 4-Line (quadrilateral).1. Let L1,L2,L3,L4 be the 4 defining lines of the 4-Line.2. Draw lines Lp1,Lp2,Lp3,Lp4 through arbitrary point P parallel
to the sidelines L1,L2,L3,L4.3. We are searching for the second point on Lp1 with same sum of squared distances to L1,L2,L3,L4 as P has. When we vary P on Lp1 at least the distance to L1 is fixed. So
we have to find the point with fixed sum of squared distances to L2,L3,L4. This is the FSD-triangle problem like described here before. So construct the P-FSD-ellipse wrt triangle L2.L3.L4. Let S1 be
the 2nd intersection point of this P-FSD-ellipse with Lp1. S1 has the same fixed sum of squared distances to L1,L2,L3,L4 as P.4. Accordingly we can construct S2, S3, S4.5. The conic through P, S1,
S2, S3, S4 will be the P-FSD-ellipse in a 4-Line (quadrilateral).6. The center of this ellipse is QL-P26 indeed. See EQF.
In a similar way we can construct a P-FSD-ellipse in a 5-Line (pentalateral).
1. Let L1,L2,L3,L4,L5 be the 5 defining lines of the 5-Line (pentalateral).
2. Draw lines Lp1,Lp2,Lp3,Lp4,Lp5 through arbitrary point P parallel to the sidelines L1,L2,L3,L4,L5.
3. We are searching for the second point on Lp1 with same sum of squared distances to L1,L2,L3,L4,L5 as P has. When we vary P on Lp1 at least the distance to L1 is fixed. So we have to find the point
with fixed sum of squared distances to L2,L3,L4,L5. This is the FSD-triangle-problem for a 4-Line like described here before. So construct the P-FSD-ellipse wrt quadrilateral L2.L3.L4.L5. Let S1 be
the 2nd intersection point of this P-FSD-ellipse with Lp1. S1 has the same fixed sum of squared distances to L1,L2,L3,L4,L5 as P.
4. Accordingly we can construct S2, S3, S4, S5.
5. The conic through P, S1, S2, S3, S4 will be the P-FSD-ellipse in a 4-Line (quadrilateral). It will appear that S5 is also on the conic.
6. The center of this ellipse will be the LSD-point of a 5-Line.
In a recursive way P-FSD-ellipses can be constructed in every n-Line (n>2) and the center of this P-FSD-ellipse will be the LSD-points of the n-Line.
Another Construction:
Coolidge describes in Ref-25 a general method for constructing this point in an n-Line.In this picture an example is given in a 4-Line, where nL-n-P6 = QL-P26.
This construction is a modified version of the construction of Coolidge.Let O (origin), A and B be random non-collinear points.Go = Quadrangle Centroid of the projection points of O on the n basic
lines of the Reference n-Line.Ga = Quadrangle Centroid of the projection points of O on the n lines through point A parallel to the n basic lines of the Reference Quadrilateral.Gb = Quadrangle
Centroid of the projection points of O on the n lines through point B parallel to the n basic lines of the Reference Quadrilateral.Let Sa = Ga.Go ^ O.Gb and Sb = Gb.Go ^ O.Ga.Construct A1 on line O.A
such that Sa.Ga : Ga.Go = O.A : A.A1.Construct B1 on line O.B such that Sb.Gb : Gb.Go = O.B : B.B1.Construct P such that O.A1.P.B1 is a parallelogram and where O and P are opposite vertices. P is the
Least Squares Point nL-n-P6. | {"url":"https://chrisvantienhoven.nl/nl-items/nl-obj/nl-pts/nl-n-p6","timestamp":"2024-11-11T11:27:49Z","content_type":"text/html","content_length":"115015","record_id":"<urn:uuid:dfb3a77b-931b-47d3-84db-17518842c42f>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00235.warc.gz"} |
Semi-definite Programming
We present a unifying framework to establish a lower-bound on the number of semidefinite programming based, lift-and-project iterations (rank) for computing the convex hull of the feasible solutions
of various combinatorial optimization problems. This framework is based on the maps which are commutative with the lift-and-project operators. Some special commutative maps were originally observed
by … Read more
Solving nonconvex SDP problems of structural optimization with stability control
The goal of this paper is to formulate and solve structural optimization problems with constraints on the global stability of the structure. The stability constraint is based on the linear buckling
phenomenon. We formulate the problem as a nonconvex semidefinite programming problem and introduce an algorithm based on the Augmented Lagrangian method combined with the … Read more
Interior Point and Semidefinite Approaches in Combinatorial Optimization
Interior-point methods (IPMs), originally conceived in the context of linear programming have found a variety of applications in integer programming, and combinatorial optimization. This survey
presents an up to date account of IPMs in solving NP-hard combinatorial optimization problems to optimality, and also in developing approximation algorithms for some of them. The surveyed approaches
include … Read more
A New Computational Approach to Density Estimation with Semidefinite Programming
Density estimation is a classical and important problem in statistics. The aim of this paper is to develop a new computational approach to density estimation based on semidefinite programming (SDP),
a new technology developed in optimization in the last decade. We express a density as the product of a nonnegative polynomial and a base density … Read more
A Parallel Primal-Dual Interior-Point Method for Semidefinite Programs Using Positive Definite Matrix Completion
A parallel computational method SDPARA-C is presented for SDPs (semidefinite programs). It combines two methods SDPARA and SDPA-C proposed by the authors who developed a software package SDPA. SDPARA
is a parallel implementation of SDPA and it features parallel computation of the elements of the Schur complement equation system and a parallel Cholesky factorization of … Read more
Sums of Squares Relaxations of Polynomial Semidefinite Programs
A polynomial SDP (semidefinite program) minimizes a polynomial objective function over a feasible region described by a positive semidefinite constraint of a symmetric matrix whose components are
multivariate polynomials. Sums of squares relaxations developed for polynomial optimization problems are extended to propose sums of squares relaxations for polynomial SDPs with an additional
constraint for the … Read more
The Reduced Density Matrix Method for Electronic Structure Calculations and the Role of Three-Index Representability Conditions
The variational approach for electronic structure based on the two-body reduced density matrix is studied, incorporating two representability conditions beyond the previously used $P$, $Q$ and $G$
conditions. The additional conditions (called $T1$ and $T2$ here) are implicit in work of R.~M.~Erdahl [Int.\ J.\ Quantum Chem.\ {\bf13}, 697–718 (1978)] and extend the well-known three-index
diagonal … Read more
On Extracting Maximum Stable Sets in Perfect Graphs Using Lovasz’s Theta Function
We study the maximum stable set problem. For a given graph, we establish several transformations among feasible solutions of different formulations of Lov{\’a}sz’s theta function. We propose
reductions from feasible solutions corresponding to a graph to those corresponding to its subgraphs. We develop an efficient, polynomial-time algorithm to extract a maximum stable set in a … Read
Local Minima and Convergence in Low-Rank Semidefinite Programming
The low-rank semidefinite programming problem (LRSDP_r) is a restriction of the semidefinite programming problem (SDP) in which a bound r is imposed on the rank of X, and it is well known that
LRSDP_r is equivalent to SDP if r is not too small. In this paper, we classify the local minima of LRSDP_r and … Read more
Generalized Lagrangian Duals and Sums of Squares Relaxations of Sparse Polynomial Optimization Problems
Sequences of generalized Lagrangian duals and their SOS (sums of squares of polynomials) relaxations for a POP (polynomial optimization problem) are introduced. Sparsity of polynomials in the POP is
used to reduce the sizes of the Lagrangian duals and their SOS relaxations. It is proved that the optimal values of the Lagrangian duals in the … Read more | {"url":"https://optimization-online.org/category/linear-cone-semidefinite/semi-definite-programming/page/55/","timestamp":"2024-11-03T23:19:31Z","content_type":"text/html","content_length":"111753","record_id":"<urn:uuid:7f208ecf-99e5-46f5-88df-52e783201c08>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00251.warc.gz"} |
A binary communication system transmits a signal X in the following way: 1 is transmitted if a 0 bit is to be communicated, +1 is transmitted if a 1...
Waiting for answer This question has not been answered yet. You can hire a professional tutor to get the answer.
A binary communication system transmits a signal X in the following way: 1 is transmitted if a 0 bit is to be communicated, +1 is transmitted if a 1...
Please help and solve the problem attached. Thanks!
A binary communication system transmits a signal X in the following way: —1 istransmitted if a 0 bit is to be communicated, +1 is transmitted if a 1 bit is to be communicated. The received signal is
Y = X + N. N is noise with zero—mean Normaldistribution with variance 02. Assume that the 0 bits are 4 times as likely as 1 bits. (a) Find the conditional PDF of Y given the input value:1.1%le = +1)&
amp;11d ffiylx = —1Jl (b) The receiver decides a 0 bit was sent if the received value of y hasfYCUlX = —1)P[X = —1l> 1’14le : +1)P[X = +1], and decides the bit was 1 otherwise. Using the results
from the previous part,show that this rule is equivalent to: If 1” <2 T1 decide 0, if Y 3 T, decide 1, whereT is some threshold. (c) If 02 = 16, what is the probability that the receiver makes an
error given a +1was transmitted? What about if a —1 was transmitted? [d] What is the overall error probability when 02 = 16?
Show more
Homework Categories
Ask a Question | {"url":"https://studydaddy.com/question/a-binary-communication-system-transmits-a-signal-x-in-the-following-way-1-is-tra","timestamp":"2024-11-02T02:56:35Z","content_type":"text/html","content_length":"26834","record_id":"<urn:uuid:64fe4074-28d1-4812-9553-e4188e45956b>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00081.warc.gz"} |
The Power of Mathematical Visualization
World-renowned math educator Dr. James Tanton shows you how to think visually in mathematics, solving problems in arithmetic, algebra, geometry, probability, and other fields with the help of
imaginative graphics that he designed. Also featured are his fun do-it-yourself projects using poker chips, marbles, strips of paper, and other props, designed to give you many eureka moments of
mathematical insight. | {"url":"https://www.thegreatcoursesplus.com/the-power-of-mathematical-visualization","timestamp":"2024-11-11T09:44:37Z","content_type":"text/html","content_length":"221290","record_id":"<urn:uuid:b89896ae-55ae-4b39-8645-ac55d89d5eda>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00464.warc.gz"} |
What processes drive soil moisture dynamics?
The primary processes driving soil moisture dynamics in the Vacant to Vibrant parcels are rainfall events that act as sources (both direct rainfall and runoff) and losses due to infiltration,
evaporation, and plant mediated evapotranspiration.
In the previous blog post (
), I presented evidence that the patterns of diurnal variation in soil moisture were driven by variation in soil temperature and did not reflect the effects of the primary drivers.
The regularity of the diurnal variability and its low amplitude allows use of smoothing functions to characterizing longer term declines in soil moisture associated with the loss processes
(infiltration, evaporation, and evapotranspiration).
Figure 1 shows the application a simple linear regression to a time series between rainfall events.
Using the linear fit to the time series, Figure 2 shows the that the removal of the longer-term trend emphasizes the diurnal variability, and Figure 3 shows that the variation in the residuals are
correlated with observed soil temperature.
Figure 1. Semi logarithmic plot of decline of soil moisture over the period May 25 to June 6, 2016 for the Gary E1 parcel at 3 cm. The blue line is a linear regression.
Figure 2. Plot of the residuals for the regression in Figure 1.
Figure 3. Plot of the residuals from Figure 2 showing association with measured soil temperature at 3 cm depth. The blue line is a regression between the two variables (r= 0.80, p <0.0001).
The advantage of the linear regression in Figure 1 is that the slope is a first order (i.e. an exponential decay rate) estimate of the rate of change of soil moisture between rainfall events. The
estimated rate for the Gary E1 parcel at 3 cm soil depth is -1.68e-07 (1/s). This method can be applied to trends at other depths or for the weighted average soil moisture and provides data for
comparison of experimental and control parcels in the different soil types of the neighborhoods of the three cities in the project.
The remaining decline pattern (presented in the previous blog post) is the transient associated with a rainfall event. Figure 4, presents the observed relation between increment of soil moisture and
the subsequent first order rate of decline.
Figure 4. Relation between soil moisture increment and subsequent rate of decline in soil moisture at 3 cm depth in the Gary E1 rain garden.
Figure 5. Relation between soil moisture increment and subsequent rate of decline in soil moisture at 3 cm depth in the Gary E1 rain garden for soil moisture increments greater than 0.012. The blue
line is a regression line and the shaded area represents the standard error bounds.
The slope of the relation in Figure 5 is -1.488e-05 (r=-0.5408624, p=0.1663, NS). The hint of an inverse relation between the increment of soil moisture following a rainfall event exists, but is not
statistically significant. The average rate of decline, however, is 1.02e-6 (1/s). An outlier occurs on April 30, 2016. On 4/29 and 4/30 (see Figure 6), there was a double increase. The
identification of extrema near this interval is a problem and it is reasonable to regard the April 30 point as anomalous. Figure 7 shows the result of eliminating this point. The estimate of the
decline rate is a statistically significant -1.90e-5, which is nearly two orders of magnitude greater than the rate of decline of the inter-event interval in Figure 1. Clearly, more data are needed
to explore this relationship and its drivers, but it seems reasonable to expect that the rate of decline following a rainfall event is a function of the soil moisture gradient and the permeability of
the soil.
Figure 6. Pattern of variation in soil moisture (m3/m3) at 3 cm soil depth in the Gary E1 rain garden.
Figure 7. Relation between soil moisture increment and subsequent rate of decline in soil moisture at 3 cm depth in the Gary E1 rain garden for soil moisture increments greater than 0.012 and
eliminating. The blue line is a regression line (r= -0.96, p<0.001) and the shaded area represents the standard error bounds. | {"url":"https://www.josephkoonce.org/2016/06/what-processes-drive-soil-moisture.html","timestamp":"2024-11-13T04:30:47Z","content_type":"text/html","content_length":"108657","record_id":"<urn:uuid:681e0d95-2e8e-4349-b1cd-325e2de0c1bf>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00548.warc.gz"} |
The search for systems of diagonal Latin squares using the SAT@home project
The Search for Systems of Diagonal Latin Squares Using the SAT@home Project
Oleg Zaikin and Stepan Kochemazov
Abstract—In this paper we consider one approach to searching for systems of orthogonal diagonal Latin squares. It is based on the reduction of an original problem to the Boolean Satisfiability
problem. We describe two different propositional encodings that we use. The first encoding is constructed for finding pairs of orthogonal diagonal Latin squares of order 10. Using this encoding we
managed to find 17 previously unknown pairs of such squares using the volunteer computing project SAT@home. The second encoding is constructed for finding pseudotriples of orthogonal diagonal Latin
squares of order 10. Using the pairs found with the help of SAT@home and the second encoding we successfully constructed several new pseudotriples of diagonal Latin squares of order 10.
Keywords—Latin squares, Boolean satisfiability problem, volunteer computing, SAT@home.
The combinatorial problems related to Latin squares, which are a form of combinatorial design [1], attract the attention of mathematicians for the last several centuries. In recent years a number of
new computational approaches to solving these problems have appeared. For example in [2] it was shown that there is no finite projective plane of order 10. It was done using special algorithms based
on constructions and results from the theory of error correcting codes [3]. Corresponding experiment took several years, and on its final stage employed quite a powerful (at that moment) computing
cluster. More recent example is the proof of hypothesis about the minimal number of clues in Sudoku [4] where special algorithms were used to enumerate and check all possible Sudoku variants. To
solve this problem a modern computing cluster had been working for almost a year. In [5] to search for some sets of Latin squares a special program system based on the algorithms of search for
maximal clique in a graph was used.
Also, in application to the problems of search for combinatorial designs, the SAT approach shows high effectiveness [6]. It is based on reducing the original problem to the Boolean satisfiability
problem (SAT) [7]. All known SAT solving algorithms are exponential in the worst
Manuscript received October 8, 2015. This work was supported in part by the Russian Foundation for Basic Research (grants 14-07-00403-a and 15-07-07891-a) and by the Council on grants of the
President of Russian Federation (grants SP-1184.2015.5 and NSH-5007.2014.9).
Oleg Zaikin is a researcher at Matrosov Institute for System Dynamics and Control Theory of Siberian Branch of Russian Academy of Sciences, e-mail: zaikin.icc@gmail.com.
Stepan Kochemazov is a programmer at Matrosov Institute for System Dynamics and Control Theory of Siberian Branch of Russian Academy of Sciences; e-mail: veinamond@gmail.com.
case since SAT itself is NP-hard. Nevertheless, modern SAT solvers successfully cope with many classes of instances from different areas, such as verification, cryptanalysis, bioinformatics, analysis
of collective behavior, etc.
For solving hard SAT instances it is necessary to involve significant amounts of computational resources. That is why the improvement of the effectiveness of SAT solving algorithms, including the
development of algorithms that are able to work in parallel and distributed computing environments is a very important direction of research. In 2011 for the purpose of solving hard SAT instances
there was launched the volunteer computing project SAT@home [8]. One of the aims of the project is to find new combinatorial designs based on the systems of orthogonal Latin squares.
The paper is organized as follows. In the second section we discuss relevant problems regarding the search for systems of orthogonal Latin squares. In the third section we describe the technique we
use to construct the propositional encodings of the considered problems. The fourth section discusses the computational experiment on the search for pairs of orthogonal diagonal Latin squares of
order 10 that was held in SAT@home. Later in the same section we show the results obtained for the search of pseudotriples of orthogonal diagonal Latin squares of order 10, using the computing
II. Some Relevant Problems of Search for Systems of Latin Squares
The Latin square [1] of order n is the square table n x n that is filled with the elements from some set M,\M\ = n in such a way that in each row and each column every element from M appears exactly
once. Leonard Euler in his works considered as M the set of Latin letters, and that is how the Latin squares got their name. Hereinafter, M denotes the set {0,..., n- 1}.
Two Latin squares A and B of the same order n are called orthogonal if all ordered pairs of the kind (a^, btj), i, j £ {0, n - 1}, are different. If there is a set of k different Latin squares among
which each two squares are orthogonal, then this set is called a system of k mutually orthogonal Latin squares (MOLS). The question if there exist 3 MOLS of order 10 is of particular interest since
this problem remains unanswered for many years. From the computational point of view the problem is very difficult, therefore it is interesting to search for such triples of Latin squares of order 10
for which the orthogonality condition is somehow weakened. For example, we can demand that it should hold in full only for one (two) pairs of squares out of three and only partly for the remaining
two (one). There can be other
variants of weakening this condition. In the remainder of the paper we will refer to such systems of squares as
In this paper we consider the following weakened variant of the orthogonality condition: we fix the number of ordered pairs of elements for which the orthogonality condition should hold
simultaneously for all three pairs of squares (A and B, A and C, B and C), comprising the pseudotriple A, B, C. The corresponding number of pairs of elements we call the characteristics of the
pseudotriple considered. Currently the record pseudotriple in this notation is the one published in [9] (see Fig. 1). In this pseudotriple square A is fully orthogonal to squares B and C, but squares
B and C are orthogonal only over 91 pairs of elements out of 100. It means that in our notation the characteristics of this pseudotriple is 91.
.4 =
0897564231 914G273805 7425138690 8 6 5 3 9 2 I (147 6 218 4 0 9 5 7 3 4932750168 5371086924 3509842716 I 760395482 ¿08 401 7 3 .I y
B =
078912345 61 [0 789
9 0 G183254" 0 428
7204391865 I 95 3
3617485029 1)5 4 2
239574618 oj |_8 G I 4
Fig. 1. Record pseudotriple of order 10 from
4 5 S 3 70 0 1 8 9 0 2 084
A ~
B =
0851734692 5 1 72980346 1729.1 08034 9 6 4 3 0 2 7 I 58 30864 15927 4308659271 7 2 9 5 1 10 8 0 3 643089 2 7 15 2964371580
L80576234 9J [85 1 72 03169, Fig. 2. First pair of MODLS of order 10 from [10]
A -
13 -
0419827356 3168294 5 07 6524903871 18 5 3 7 1 9 I) 6 2 9205478613 8637150924 407 "J 5361 9 8 2 9 4 1 6 8 5 7 3 0 7396012485 5 7 8 0 3 6 1249
Fig. 3. Second pair of MODLS
of order 10 from [10]
A -
In this paper we develop the SAT approach for solving the problem described above. To apply the SAT approach one has to reduce the original problem to the Boolean equation in the form "CNF=1" (here
CNF stands for conjunctive normal form). Corresponding transition process is usually referred to as encoding the original problem to SAT. First attempts on the application of the SAT approach to
finding systems of orthogonal Latin squares started in the 90-ies years of XX century. A lot of useful information in this area can be found in [6]. In particular, the author of [6] have been trying
to find three mutually orthogonal Latin square of order 10 for more than 10 years using a specially constructed grid system of 40 PCs (however, without any success).
In our opinion, it is also interesting to search for systems of orthogonal diagonal Latin squares. The Latin square is called diagonal if both its primary and secondary diagonals contain all numbers
from 0 to n — 1, where n is the order of the Latin square. In other words, the constraint on the uniqueness is extended from rows and columns to two diagonals. The existence of a pair of mutually
orthogonal diagonal Latin squares (MODLS) of order 10 was proved only in 1992 - in the paper [10] three such pairs were presented.
Similar to the problem of search for pseudotriples of Latin squares we can consider the problem of search for pseudotriples of diagonal Latin squares of order 10. In available sources we have not
found if the problem in such formulation have been studied. Implicitly, however, in [10] one of squares in the first and the second pairs is the same. Figures 2 and 3 depict the corresponding pairs.
Based on the pairs shown in Fig. 2 and Fig. 3 it is easy to construct the pseudotriple of diagonal Latin squares of order 10 (see Fig. 4). The characteristics of this pseudotriple is equal to 60.
In the next section we describe the propositional encodings that we used in our experiments.
5 17 3 4 6 9 2 7298034G 29568034 43 0 2 7158 8 G4 1 5927 0805 92 7 1 95146803 3 0892715 G 4 3 T 1 5 8 0 I 7 2 0 3 4 6 a
O -
Fig. 4.
U9461758 2 3 0419827356
25308 9 G471 4072536198
180576234 aj |.5780361p49. Pseudotriple of diagonal Latin squares of order 10 from [10]
Fig. 5 presents the corresponding 60 ordered pairs of elements for the pseudotriple from Fig. 4.
00 01 02 -- 05 0G - 08 -
111 11 - 13 15 16 - 18
- 21 22 - 24 25 - 27 - 29 -- 32 33 -- 36 37 - 39
- 41 - - 44 45 46 - 48 4a 50 - 52 53 - 55 - 57 58 59 60 61 - - - 65 66 - - 6a
-- 72 73 74 75 - 77 - 70
--- 83 84 85 - 87 88 -
90 a 1 - 93 94 - 96 97 98 98
Fig. 5. The set formed by 60 ordered pairs of elements, over which all three pairs of diagonal Latin squares from Fig. 4 are orthogonal
III. Encoding Problems of Search for Systems of Latin Squares to SAT
It is a widely known fact that the system of mutually orthogonal Latin squares as a combinatorial design is equivalent to a number of other combinatorial designs. For example, the pair of MOLS is
equivalent to a special set of transversals, to an orthogonal array with some special properties, etc. It means that if we want to construct a system of mutually orthogonal Latin squares, we can do
it in a number of various ways using equivalent objects. That is why it is possible to construct vastly different propositional encodings for the same problem. Generally speaking, even when we use
one particular representation of a system of orthogonal Latin squares, the predicates involved in the encoding can be transformed to the form "CNF=1" [11] in different ways, thus producing
essentially different encodings. Actually, we believe that the impact of the
representation method and techniques used to produce the SAT encodings of the problem on the effectiveness of SAT solvers on corresponding instances is very interesting and we intend to study this
question in the nearest future.
In our computational experiments on the search of pairs of orthogonal diagonal Latin squares we used the propositional encoding based on the so called "naive" scheme. It was described, for example,
in [12].
Let us briefly describe this encoding. We consider two matrices A = ||ay|| and B = \\bij\\, i,j = 1,...,n. The contents of each matrix cell are encoded via n Boolean variables. It means that one
matrix is encoded using n3 Boolean variables. By x(i, j, k) and y(i, j, k) we denote the variables corresponding to matrices A and B, respectively. Here the variable x(i, j, k), i, j, k £ {1,..., n}
has the value of "Truth" if and only if in the corresponding cell in the i-th row and y'-th column of the matrix A there is the number k - 1. For matrices A and B to represent Latin squares they
should satisfy the following constraints on the corresponding variables. Without the loss of generality let us consider these constraints on the example of matrix A.
Each matrix cell contains exactly one number from 0 to n- 1:
A7l=iA7l=iyr^=ix(i, j, k);
A^A^A^A^! (—x(i, j, k) V —x(i, j, r)). Each number from 0 to n- 1 appears in each row exactly once:
A7^=iA1i=iV7}=ix(i, j, k);
A= iAi=i A=1 (—x(i, j, k) V —X(r, j, k)).
Each number from 0 to n- 1 appears in each column exactly once:
A7l=iA1i=iV7^=ix(i, j, k);
A^Al^A^A^! x(i, j, k) V —x(i, r, k)). In a similar way we write the constraints on the variables forming the matrix B. After this, we need to write the orthogonality condition. For example, we can do
it in the following manner:
Af=1AjI=1Al=1Al=1Al=1Al=1 (—x(i, j, k) V —y(i, j, k) V
—x(p, q, r) V —y(p, q, r)). Since in the paper we consider not just Latin squares but diagonal Latin squares, we need to augment the described encoding with the constraint, specifying that the
primary and secondary diagonals contain all numbers from 0 to n- 1, where n is the order of the Latin square.
Al=1Vl=1x(i, j, k);
Al=1 A=1 Aj=i+1 (—x(i, i, k) V —x(j, j, k)); Al=1Vll=1 x(i, n -i + 1, k);
Al=1Al=1Al=i+1 (—x(i, n -i + 1, k) V —x(j, j, k)) Also we consider the optimization variant of the problem of search for three MODLS. Since at the present moment it is unknown if there even exist
three MODLS we believe that it is natural to weaken this problem and to evaluate the effectiveness of our methods in application to the weakened variant. Among all the constraints that form the
corresponding encoding it is most natural to weaken the orthogonality condition. It can be done via different means. For example, one can demand that the orthogonality condition holds only for fixed
cells, for fixed ordered pairs, or for the fixed number of cells or for the fixed number of different pairs. In our experiments we weakened the orthogonality condition in the following way. We first
the parameter K, K < n2, called the characteristics of the pseudotriple. Then we demand that each two squares in the pseudotriple are orthogonal over the same set of ordered pairs of elements
(a1,..., b1),..., (aK,..., bK).
To consider the corresponding problem in the SAT form, it is necessary to significantly modify the propositional encoding described above. In particular, we have to replace the "old" orthogonality
condition with the "new" one. For this purpose we introduce an additional construct: the special matrix M = {m^}, my £ {0,1}, i, j £ {1,..., n}. We will refer to this matrix as markings matrix. We
assume that if mtj = 1, then for the corresponding pair (i - 1, j - 1) the orthogonality condition must hold. In the propositional form this constraint is written in the following manner:
M=1 Aj=1(—mij V v;=1 ynq=1(x(v, q, i) A y(p, q, j)) ).
Additionally, if we search for the pseudotriple with the value of characteristics not less than K (it corresponds to the situation when the markings matrix M contains at least K ones) we need to
encode the corresponding constraint. For example, it can be done via the following natural manner. First we sort the bits in the Boolean vector (m11,..., m1n,..., mnn) in the descending order, just
as we would if it were simply integer numbers. Assume that as the result we obtain the Boolean vector (a1,..., an2). Then it is clear that the constraint we need would be satisfied if and only if aK
= 1. To sort the bits in the Boolean vector it is possible to use various encoding techniques. In our computational experiments we used the CNFs in which it was done using the Batcher sorting
networks [13].
The problems of search for orthogonal Latin squares using the SAT approach are good candidates for organization of large-scale computational experiments in distributed computing environments. In
particular, they suit well for volunteer computing projects [14]. It is explained by the fact that SAT instances on their own allow one to use natural large scale parallelization strategies. In 2011
the authors of the paper in collaboration with colleagues from IITP RAS developed and launched the volunteer computing project SAT@home [8]. This project is designed to solve hard SAT instances from
various areas. It is based on the open BOINC platform [15]. As of October, 7, 2015 the project involves 2426 active PCs from participants all over the world. The average performance of SAT@home is
about 6 TFLOPs. In subsection 4.1 we describe the experiment performed in SAT@home on the search for new pairs of MODLS of order 10. In subsection 4.2 we use the pairs found on the previous step to
search for pseudotriples of diagonal Latin squares of order 10.
4.1. Finding Pairs of Orthogonal Diagonal Latin Squares
of Order 10
In 2012 we launched the experiment in SAT@home aimed at finding new pairs of orthogonal diagonal Latin squares of order 10. In this experiment we used the propositional encoding described in the
previous section. The client application (the part that works on participants PCs) was based on the CDCL SAT solver MiniSat 2.2 [16] with slight modifications, that made it possible to reduce the
amount of RAM consumed.
In the SAT instances to be solved we fixed the first row of the first Latin square to 0 1 2 3 4 5 6 7 8 9 (by assigning
values to corresponding Boolean variables). It is possible because every pair of MODLS can be transformed to such kind by means of simple manipulations that do not violate orthogonality and
diagonality conditions. The decomposition of this SAT instance was performed in the following manner. By varying the values in the first 8 cells of the second and the third rows of the first Latin
square we produced about 230 billions of possible variants of corresponding assignments, that do not violate any condition. We decided to process in SAT@home only first 20 million subproblems out of
230 billions (i.e. about 0.0087% of the search space). As a result, each subproblem was formed by assigning values to variables corresponding to the first 8 cells of the second and the third rows
(with the fixed first row) in the SAT instance considered. The values of remaining 74 cells of the first Latin square and of all cells of the second Latin square were unknown, so the SAT solver had
to find it.
To solve each subproblem the SAT solver MINISAT 2.2 had the limit of 2600 restarts that is approximately equal to 5 minutes of work on one core of Intel Core 2 Duo E8400 processor. After reaching the
limit the computations were interrupted. In one job batch downloaded by project participant there were 20 of such subproblems. This number was chosen so that one job batch can be processed in about 2
hours on one CPU core (because it suits well for BOINC projects). To process 20 million subproblems (in the form of 1 million job batches) it took SAT@home about 9 months (from September 2012 to May
2013). During this experiment the CluBORun tool [17] was used for increasing performance of SAT@home by engaging idle resources of a computing cluster. The computations for the majority of
subproblems were interrupted, but 17 subproblems were solved and resulted in 17 previously unknown pairs of MODLS of order 10 (we compared them with the pairs from [10]). All the pairs found are
published on the site of the SAT@home project in the "Found solutions" section. Fig. 6 presents the first pair of MODLS of order 10 found in the SAT@home project.
7 3 5 3 5 g 49 7 ,is4
8 4 0 6 7 3 <> 0 1
2 g 8
4 3 7 y 04 8 90 2 g 8 fi 7 !
1 2 3 (j 1 2
5 mg 7 59
g 78 9 8 5 8 0 2 1 i 7
0 1 2 3 4 5 6 7 8 fj
7 5 19280463 1034675298 9847521036 6790832154 4 6 5 i 0 9 8 3 2 7 2385164970 527834 9 601 3 462907815
8 9 0 g 7 1 35 4 2
CNFs (where each CNF corresponds to one known pair of MODLS of order 10) and to make these SAT instances solvable in reasonable time we can weaken them by assigning correct values to Boolean
variables corresponding to several rows of the first Latin square of the pair. This series of experiments will make it possible to choose the most effective combination SAT solver + SAT encoding for
this particular problem.
4.2. Finding Pseudotriples of Diagonal Latin Squares of
Order 10
We considered the following formulation of the problem: to find the pseudotriple of diagonal Latin squares of order 10 with the characteristics value larger than that of the pseudotriple from [10]
(see section 2).
On the first stage of the experiment using the encodings described above we constructed the CNFs, in which there was encoded the constraint that the value of characteristics K (see section 3) is
greater or equal to the number varied from 63 to 66 with step 1 (i.e. we considered 4 such CNFs). In computational experiments we used the parallel SAT solvers Plingeling and Treengeling [18]. Our
choice is motivated by the fact that on the SAT competition 2014 these solvers rated in top 3 in parallel categories "Parallel, Application SAT+UNSAT" and "Parallel, Hard-combinatorial SAT+UNSAT".
The experiments were carried out within the computing cluster "Academician V.M. Matrosov" of Irkutsk supercomputer center of SB RAS. Each computing node of this cluster has two 16-core AMD Opteron
6276 processors. Thus, each of the SAT solvers mentioned was launched in multithreaded mode on one computing node, i.e. it employed 32 threads. We used time limit 1 day for per instance. Table 1
shows the results obtained using these solvers in application to 4 SAT instances considered with time limit of 1 day per instance.
K 63 64 65 66
Plingeling 1 h 10 m 1 h 29 m 1 h 21 m > 1 day
Treengeling 2h 2 m 4h 8 m > 1 day > 1 day
Fig. 6. The first pair of MODLS of order 10 found in the SAT@home project.
As we noted in the previous section, one can construct many different propositional encodings for the problem of search for pairs of orthogonal Latin squares. However, in this case the question of
comparison of the effectiveness of corresponding encodings becomes highly relevant. The practice showed that the number of variables, clauses and literals usually does not make it possible to
adequately evaluate the effectiveness of SAT solvers on corresponding SAT instances. In the nearest future we plan to use the pairs found in the SAT@home project to estimate the effectiveness of
different encodings of this particular problem. For each encoding we can construct the set of
Table 1. The runtime of SAT solvers applied to CNFs encoding the search for pseudotriples with different constraints on the value of characteristics (K).
As a result of the experiments of the first stage we found the pseudotriple with the characteristics value 65 that is better than that of the pseudotriple from [10] (its characteristics value is 60).
Note that the runtime of all considered solvers on the CNF encoding the search for a pseudotriple with characteristics value 66 was greater than 1 day (that is why the computations were interrupted
and no results were obtained), and by this time each of the solvers consumed all available RAM of the computing node (64 Gb) and started using swap.
On the second stage of our experiments on the search for pseudotriples we used the previously found pairs of MODLS of order 10 (3 pairs from [10] and 17 pairs found in SAT@home). For a fixed value of
characteristics K we formed 20 CNFs by assigning values to the Boolean variables corresponding to first two squares (i.e. for each pair of MODLS and for each value of K we constructed one such CNF).
Thus each of the CNFs encoded the following problem: for two fixed orthogonal diagonal Latin squares to
find a diagonal Latin square such that in total they form the pseudotriple with characteristics value > K.
We considered 6 values of parameter K - from 65 to 70. It means that in total we constructed 120 SAT instances (20 for each value of K). Each of two considered solvers was launched on all these CNFs
on one computing node of the cluster with time limit of 1 hour per instance. Table 2 shows how many SAT instances out of the family of 20 could the SAT solver solve within the time limit. As a result
we managed to find the pseudotriple with the characteristics value 70.
K 65 66 67 68 69 70
Plingeling 15 11 11 1 3 0
Treengeling 20 19 15 3 11 1
Table 2. The number of SAT instances, encoding the search for pseudotriples of diagonal Latin squares with two known
squares, that the solver managed to solve within the time limit of one hour (out of 20 SAT instances)
The experiments of this stage required substantial computational resources since the amount of SAT instances was quite large. To search for pseudotriples with characteristics greater than 70 we chose
the solver that performed best on the second stage. As it is easy to see from the Table 2 it was Treengeling. On the third stage we launched this SAT solver on 80 SAT instances encoding the search
for pseudotriples with two known squares and K varying from 71 to 74. The time limit was increased to 10 hours. As a result we found pseudotriple with characteristics 73. On all 20 SAT instances with
K= 74 the solution was not found before the time limit. Fig. 7 presents the record pseudotriple with characteristics value 73.
A =
2345g789 04387956 9 0 734812 7564012 3 5 !) s i 2 <> 1 i) 18 2 6 9 3 0 4 87193065 610 2 5 437 42 9 01578 3 g578 2!) 1
n =
0 1 23456739 7480592361 4259378106 604 79 13 852 0 (i 1 8 2 4 0 5 3 7 1396784025 « 9 3 5 (j 2 ! 17 0 5702139648 3871065234 2564807913
(7 =
The predecessor of the SAT@home was the BNB-Grid system [19], [20]. Apparently, [21] became the first paper about the use of a desktop grid based on the BOINC platform for solving SAT. It did not
evolve into a publicly available volunteer computing project (like SAT@home did). The volunteer computing project with the most similar to SAT@home problem area is Sudoku@vtaiwan [22]. It was used to
confirm the solution of the problem regarding the minimal number of clues in Sudoku, previously solved on a computing cluster [4]. In [6] there was described the unsuccessful attempt to solve the
problem of search for three MOLS of order 10 using the psato SAT solver in a grid system.
VI. Conclusion
In the paper we describe the results obtained by applying the resources of the volunteer computing project SAT@home to searching for systems of diagonal Latin squares of order 10. We reduce the
original combinatorial problem to the Boolean satisfiability problem, then decompose it and launch solving of corresponding subproblems in SAT@home. Using this approach we found 17 new pairs of
orthogonal diagonal Latin squares of order 10. Based on these pairs, we have found new systems of three partly orthogonal diagonal Latin squares of order 10. In the future we plan to develop new SAT
encodings for the considered combinatorial problems and also to find new orthogonal (or partly orthogonal) systems of Latin squares.
We thank Alexander Semenov for valuable comments, Mikhail Posypkin and Nickolay Khrapov for their help in maintaining the SAT@home project, and all the SAT@home volunteers for their participation.
Fig. 7. New pseudotriple of diagonal Latin squares of order 10 with characteristics value 73
Fig. 8 shows the corresponding 73 ordered pairs of elements over which the orthogonality condition holds for all pairs of Latin squares from the triple.
" - 01 02 03 04 or, 0(i - 08 09" 10 - 12 13 14 15 16 - 18 19 20 ■ 23 24 25 26 27 28 30 - 32 33 34 35 36 37 38 -40 41 - 43 44 - 46 47 - 49 50 51 - 53 - 55 56 57 58 -60 - 02 03 64 65 00 - 08 09 70 71 -
73 - 75 - 77 78 79
- 81 82 83 --- 87 88 89
00 91 02 - 94 95 - 97 - 99_ Fig. 8. The set formed by 73 ordered pairs of elements, over which all three pairs of squares from Fig. 7 are orthogonal
Note that this pseudotriple is based on one of the 17 pairs of MODLS of order 10 found in the SAT@home project (in the figure the first two squares correspond to the pair found in SAT@home).
C. J. Colbourn and J. H. Dinitz, Handbook of Combinatorial Designs, 2nd ed. (Discrete Mathematics and Its Applications). Chapman & Hall/CRC. 2006.
C. W. H. Lam, L. Thiel, and S. Swiercz, "The non-existence of finite projective planes of order 10," Canad J. Math., vol. 41, pp. 11171123, 1989.
F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes, North-Holland Mathematical Library, North Holland Publishing Co, 1988.
G. McGuire, B. Tugemann and G. Civario, "There is no 16-clue Sudoku: solving the Sudoku minimum number of clues problem via hitting set enumeration," Experimental Mathematics, vol. 23, no. 2, pp.
190-217, 2014.
B. D. Mckay, A. Meynert and W. Myrvold, "Small Latin squares, quasigroups and loops," J. Combin. Designs, vol. 15(2), pp. 98-119, 2007.
H. Zhang, "Combinatorial Designs by SAT Solvers,". in: Handbook of Satisfiability, Frontiers in Artificial Intelligence and Applications, vol. 185. IOS Press, 2009.
Handbook of Satisfiability, Frontiers in Artificial Intelligence and Applications, vol. 185. IOS Press, 2009.
M. A. Posypkin, A. A. Semenov and O. S. Zaikin, "Using BOINC desktop grid to solve large scale SAT problems," Computer Science Journal, vol. 13, no. 1, pp. 25-34, 2012.
J. Egan and I. M. Wanless, "Enumeration of MOLS of small order," CoRR abs/1406.3681v2. [10] Brown et al, "Completion of the spectrum of orthogonal diagonal Latin squares," Lect. Notes Pure Appl.
Math., vol. 139, pp. 43-49 (1993)
[11] S. D. Prestwich, "CNF encodings," in: Handbook of Satisfiability, Frontiers in Artificial Intelligence and Applications, vol. 185. IOS Press, 2009, pp. 75-97.
[12] I. Lynce and J. Ouaknine, "Sudoku as a SAT problem," in: Proc. Ninth International Symposium on Artificial Intelligence and Mathematics, Fort Lauderdale, FL., 2006.
[13] K. E. Batcher, "Sorting networks and their applications," in Proc. of Spring Joint Computer Conference, New York, USA, 1968, pp. 307314.
[14] M. Nouman Durrani and J. A. Shamsi, "Review: Volunteer computing: Requirements, challenges, and solutions," J. Netw. Comput. Appl, vol. 39, pp. 369-380, 2014.
[15] D. P. Anderson and G. Fedak, "The computational and storage potential of volunteer computing," in 6th IEEE International Symposium on Cluster Computing and the Grid, Singapore, 2006, pp. 73-80.
[16] N. E'en and N. S'orensson, "An extensible SAT-solver," in 6th International Conference on Theory and Applications of Satisfiability Testing, Santa Margherita Ligure, Italy, 2003, pp. 502-518.
[17] A. P. Afanasiev, I. V. Bychkov, M. O. Manzyuk, M. A. Posypkin, A. A. Semenov and O. S. Zaikin, "Technology for Integrating Idle Computing Cluster Resources into Volunteer Computing Projects," in
Proc. of The 5th International Workshop on Computer Science and Engineering, Moscow, Russia, 2015, pp. 109-114.
[18] A. Biere, "Lingeling essentials, A tutorial on design and implementation aspects of the the SAT solver lingeling," in Proc. Fifth Pragmatics of SAT workshop, Vienna, Austria, 2014, p. 88.
[19] Y. Evtushenko, M. Posypkin and I. Sigal. "A framework for parallel large-scale global optimization,". Computer Science - Research and Development, vol. 23(3-4), pp. 211-215, 2009.
[20] A. Semenov, O. Zaikin, D. Bespalov and M. Posypkin, "Parallel Logical Cryptanalysis of the Generator A5/1 in BNB-Grid System," in Parallel Computational Technologies, Kazan, Russia, 2011, pp.
[21] M. Black and G. Bard, "SAT Over BOINC: An Application-Independent Volunteer Grid Project," in 12th IEEE/ACM International Conference on Grid Computing, Lyon, France, 2011, pp. 226-227.
[22] H.-H. Lin and I-C. Wu, "An Efficient Approach to Solving the Minimum Sudoku Problem," in. Proc. International Conference on Technologies and Applications of Artificial Intelligence, Hsinchu,
Taiwan. 2010, pp. 456-461. | {"url":"https://cyberleninka.ru/article/n/the-search-for-systems-of-diagonal-latin-squares-using-the-sat-home-project","timestamp":"2024-11-12T06:06:56Z","content_type":"application/xhtml+xml","content_length":"93191","record_id":"<urn:uuid:edd0bc9d-4a21-4c03-aad2-59bfdf31c531>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00520.warc.gz"} |
3d graphics not displayed
3d graphics not displayed
I am new to Sage. I use SageMath 8.6 on Windows 10. 2D plotting works, but not 3d. For example this produced no graphics but just the: π Άπ π π π π π π π Ήπ π Ύπ π π π π
sage: x, y = var('x,y')
sage: g = plot3d(x^2 + y^2, (x,-2,2), (y,-2,2))
sage: show(g)
π Άπ π π π π π π π Ήπ π Ύπ π π π π
I use Chrome.
With the suggested option:
sage: show(g,figsize=8, viewer="threejs", frame=False,axes=True)
it works now!
It works also with:
sage: show(g,figsize=8, viewer="threejs", axes=True)
And with:
sage: show(g,figsize=8, viewer="threejs")
It looks like viewer option should be there for 3D to work on windows 10 with Chrome.
SageMath is great! With Python it is much more friendlier than Mathematica.
Could you precise which WEB browser do you use ? because for me the code does not work in Edge but works in Chrome. Strangely, there was a time or in Edge it worked when I specified the viewer . like
show(g, figsize=8, viewer="threejs", frame=False,axes=True)
But not anymore !.
I Just ask a question on MS forum: question
it is working now for me with the Edge browser, but I made 2 changes, and I do not know if either 1 is enough or if both are needed! modification 1: allow in the firewall "3D display" change 2:
download as Windows Insider the latest version of Edge: Version 74.1.96.24 it is ok using JMOLL default or ThreeJs. look the 2 images on MS forum (click on question in above comment.
What version of Java did you install ? On Linux, our version of jmol works with openjdk 8, but win't work with openjdk 9, 10 or 11... An excruciating pain in th a$$, only partially relieved by
threejs, which has its share of problems (shading and transparency, among others).
@Emmanuel Charpentier, I see that I have Oracle Java version 8
same version as mine : Java 8 upgrade 191 (build 1.8.0_191-b12)
1 Answer
Sort by Β» oldest newest most voted
The default "jmol" viewer has basically never worked on Windows: see https://github.com/sagemath/sage-wind...
In a future release we hope to make threejs the default but there are still some unresolved issues with threejs: https://trac.sagemath.org/ticket/26410
edit flag offensive delete link more
Sorry Iguananaut , but JMOL works now in W10 with the Windows Insider latest version of Edge: Version 74.1.96.24 . see the image (click on the link question in my comment above) .But it is true that
jmol did not work with older version of Edge. with the simple command show(g), besides, I can see JSMOL colored writing displaying just before displaying the 3D plot, using simple show(g)
ortollj ( 2019-04-15 15:32:10 +0100 )edit
Oops ! I mean it works in notebook !, I did not try in console, sorry
ortollj ( 2019-04-15 15:48:05 +0100 )edit | {"url":"https://ask.sagemath.org/question/46147/3d-graphics-not-displayed/","timestamp":"2024-11-13T19:43:08Z","content_type":"application/xhtml+xml","content_length":"68459","record_id":"<urn:uuid:e6d75507-9d19-4589-bc8a-a58e501f09be>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00524.warc.gz"} |
Particle-Based Fluid Simulation with SPH | Lucas V. Schuermann
Particle-Based Fluid Simulation with SPH
Published 05/23/2016
## Introduction
Fluid simulation is a popular topic in computer graphics that uses computational fluid dynamics to generate realistic animations of fluids such as water and smoke. Most of these simulation techniques
are simply numerical approximations of the Navier-Stokes equations, which describe the motion of fluids.
There are two primary approaches when numerically approximating the Navier-Stokes equations (or any problem involving flow fields): Lagrangian, or so-called "particle-based" methods, and Eulerian, or
"grid-based" methods. Taking a Lagrangian perspective, we observe an individual discretized fluid parcel as it moves through space and time. Contrastingly, Eulerian solvers focus on the motion of a
fluid through specific locations in space as time passes. One can think of modeling these Lagrangian parcels as particles and these Eulerian fixed locations in space as grid cells.
Though both Lagrangian and Eulerian solvers are widely used in computer graphics and computational fluid dynamics to great effect, a notable property of particle-based solvers is that they are
generally not as spatially restricted since they do not rely entirely on an underlying Eulerian simulation grid. This, as well as their comparative ease of implementation, makes Lagrangian solvers
uniquely well suited for applications such as video games, which often involve large, dynamic environments.
One of the most popular methods for the simulation of particle-based fluids is Smoothed Particle Hydrodynamics (SPH), first developed by Gingold and Monaghan in 1977 for astrophysics simulations.
Müller first applied this same technique to real-time fluid simulation for games, and SPH has since grown into the method of choice for simulators with applications in engineering, disaster
prevention, high-fidelity computer animation, and, of course, video games and other real-time effects.
## The Equations of Fluid Motion
Before discussing the Smoothed Particle Dynamics formulation of the Navier-Stokes equations in depth, it is necessary to build some amount of intuition by means of a derivation from more basic
physics. I suggest a strong knowledge of multi-variable calculus, and a working familiarity with differential equations.
The Navier-Stokes equations are given by:
ρ(∂u∂t+u⋅∇u)=−∇p+η∇2u+ρg\rho(\frac{\partial \mathbf{u}}{\partial t}+\mathbf{u}\cdot\nabla\mathbf{u}) = -\nabla p + \eta\nabla^2\mathbf{u}+\rho\mathbf{g}ρ(∂t∂u+u⋅∇u)=−∇p+η∇2u+ρg
subject to the incompressibility constraint
where ρ\rhoρ is the density of the fluid, u\mathbf{u}u is the velocity vector field, ppp is the pressure, and g\mathbf{g}g is a vector of external forces (e.g. gravity).
### A Brief Derivation…
This can easily be understood from a more reasonable foundation of Newtonian physics using the continuity of mass, momentum, and energy. First, recall the definition of the material or Lagrangian
derivative, which, as opposed to the Eulerian derivative with respect to a fixed position in space, quantifies the change in a field following a moving parcel. The material derivative is defined as
the nonlinear operator
DDt≡∂∂t+u⋅∇\frac{D}{Dt}\equiv\frac{\partial}{\partial t}+\mathbf{u}\cdot\nablaDtD≡∂t∂+u⋅∇
where u\mathbf{u}u is the flow velocity.
Recall the time-dependent classical relation between force and acceleration F=ma\mathbf{F}=m\mathbf{a}F=ma. Applying the material derivative, we have
Which we can then expand to the forces acting on a particle
noting that the discretization of density limits to mass as
ρ=lim∇V→L3ΔmΔV\rho=\lim_{\nabla V\rightarrow L^3}\frac{\Delta m}{\Delta V}ρ=∇V→L3limΔVΔm
where the volume is approaching the size of the domain.
The contribution of the fluid to the total force on each particle is modeled in two parts: pressure and viscosity, with g\mathbf{g}g again representing any external force such as gravity. Modeling
the force contribution of pressure as the gradient of the pressures, and the force contribution of viscosity proportional to the divergence of the gradient of the velocity field, we have
ρDuDt=ρ(∂u∂t+u⋅∇u)=−∇p+η∇⋅∇u+ρg\rho\frac{D\mathbf{u}}{Dt} = \rho(\frac{\partial \mathbf{u}}{\partial t}+\mathbf{u}\cdot\nabla\mathbf{u}) = -\nabla p + \eta\nabla\cdot\nabla\mathbf{u}+\rho\mathbf{g}ρD
We wish to model incompressible fluids, or fluids in which the material density is constant within each infinitesimal volume that moves with the flow velocity. Examples of incompressible fluids
include liquids such as water, which are of great interest in computer graphics and physics simulation.
Following from this statement, we constrain the volume change of any arbitrary "chunk" of fluid Ω\OmegaΩ, motivated by the conservation of mass
This volume change can be measured by integrating the normal component of the fluid velocity field along the boundary of our "chunk":
Gauss' Theorem gives
∭Ω∇⋅u=0∴∇⋅u=0\iiint_{\Omega}\nabla\cdot\mathbf{u}=0\\ \therefore\nabla\cdot\mathbf{u}=0∭Ω∇⋅u=0∴∇⋅u=0
While a formal derivation of the incompressible Navier-Stokes equations is much more involved, this should help to motivate some physical understanding of their origins and validity. Further reading
could help show the fundamentals of pressure and viscosity forces, which can be better understood from the stress in the system (see the Stokes hypothesis or Cauchy's equation of motion).
## Smoothed Particle Hydrodynamics
The basic idea of SPH is derived from the integral interpolation, similar to Kernel Density Estimation. In essence, SPH is a discretization of the fluid into discrete elements, particles, over which
properties are "smoothed" by a kernel function. This means that neighboring particles within the smoothing radius affect the properties of a given particle, such as pressure and density
contributions—a surprisingly intuitive way of thinking about fluid dynamics simulation.
This cornerstone kernel function gives the approximation of any quantity AAA at a point r\mathbf{r}r
As(r)=∫A(r′)W(r−r′,h)dr′≈∑jAjmjρjW(r−rj,h)A_s(\mathbf{r}) = \int A(\mathbf{r}')W(\mathbf{r}-\mathbf{r}',h)d\mathbf{r}' \approx \sum_{j}A_j\frac{m_j}{\rho_j}W(\mathbf{r}-\mathbf{r}_j,h)As(r)=∫A(r′)W(
where mjm_jmj and ρj\rho_jρj are the mass and density of the j^th particle, respectively, and W(r,h)W(\mathbf{r},h)W(r,h) is a radially symmetric smoothing kernel with length hhh having the
following properties:
W(−r,h)=W(r,h))∫W(r)dr=1W(-\mathbf{r},h)=W(\mathbf{r},h))\\ \int W(\mathbf{r})d\mathbf{r}=1W(−r,h)=W(r,h))∫W(r)dr=1
Note that we are summing over all other particles jjj. Applying this to the above discussed Navier-Stokes equations, we first must determine a way to define the fluid density based upon neighboring
particles, though this follows trivially from the previous approximation, substituting ρ\rhoρ for the arbitrary quantity AAA
ρi=ρ(ri)=∑jmjρjρjW(ri−rj,h)=∑jmjW(ri−rj,h)\rho_i = \rho(\mathbf{r}_i)=\sum_j m_j\frac{\rho_j}{\rho_j}W(\mathbf{r}_i-\mathbf{r}_j,h) = \sum_j m_jW(\mathbf{r}_i-\mathbf{r}_j,h)ρi=ρ(ri)=j∑mjρjρjW
Due to the linearity of the derivative ∇\nabla∇, the spatial derivtive (gradient) of any quantity can be easily obtained as follows
∇A(r)=∇∑jmjAjρjW(r−rj,h)=∑jmjAjρj∇W(r−rj,h)\nabla A(\mathbf{r})=\nabla\sum_jm_j\frac{A_j}{\rho_j}W(\mathbf{r}-\mathbf{r}_j,h)=\sum_jm_j\frac{A_j}{\rho_j}\nabla W(\mathbf{r}-\mathbf{r}_j,h)∇A(r)=∇j∑
The same is true for the Laplacian
∇2A(r)=∑jmjAjρj∇2W(r−rj,h)\nabla^2 A(\mathbf{r})=\sum_jm_j\frac{A_j}{\rho_j}\nabla^2 W(\mathbf{r}-\mathbf{r}_j,h)∇2A(r)=j∑mjρjAj∇2W(r−rj,h)
### SPH and Navier-Stokes
Following from the previous discussion of the Navier-Stokes equations, it is evident that the sum of the force density fields on the right hand side of the equation (pressure, viscosity, and
external) give the change in momentum ρDuDt\rho\frac{D\mathbf{u}}{Dt}ρDtDu of the particles on the left hand side. From this change in momentum, we can compute the acceleration of particle iii
We are therefore interested in approximating the force terms of the Navier-Stokes equations for pressure and viscosity with SPH as follows
Fipressure=−∇p(ri)=−∑jmimj(piρi2+pjρj2)∇W(ri−rj,h)Fiviscosity=η∇2u(ri)=η∑jmjuj−uiρj∇2W(ri−rj,h)\mathbf{F}^{\text{pressure}}_i=-\nabla p(\mathbf{r}_i)=-\sum_jm_im_j(\frac{p_i}{\rho_i^2}+\frac{p_j}{\
rho_j^2})\nabla W(\mathbf{r}_i-\mathbf{r}_j,h)\\ \mathbf{F}^{\text{viscosity}}_i=\eta\nabla^2\mathbf{u}(\mathbf{r}_i)=\eta\sum_jm_j\frac{\mathbf{u}_j-\mathbf{u}_i}{\mathbf{\rho}_j}\nabla^2W(\mathbf
The pressure ppp at a particle is calculated using some equation of state relating density to a defined rest density, usually the Tait equation or the ideal gas equation, such as
where ρ0\rho_0ρ0 is the rest density and kkk is some defined gas constant dependent on the temperature of the system.
Note that these formulations can change between different methodologies of implementation and more advanced techniques. For example, the direct application of SPH to the pressure term −∇p-\nabla p−∇p
Fipressure=−∇p(ri)=−∑jmjpjρj∇W(ri−rj,h)\mathbf{F}^{\text{pressure}}_i=-\nabla p(\mathbf{r}_i)=-\sum_jm_j\frac{p_j}{\rho_j}\nabla W(\mathbf{r}_i-\mathbf{r}_j,h)Fipressure=−∇p(ri)=−j∑mjρjpj∇W(ri
which is not symmetric. Consider the case of two particles, wherein particle iii would use the pressure only of particle jjj to compute its pressure force and vice versa. The pressures at two
particle locations are not equal in general, therefore the force is not symmetric. The above SPH pressure force equation uses what has become a canonical symmetrization (weighted sum). In Müller's
original paper, for reference, the symmetrization is performed using an arithmetic mean
Fipressure=−∇p(ri)=−∑jmjpi+pj2ρj∇W(ri−rj,h)\mathbf{F}^{\text{pressure}}_i=-\nabla p(\mathbf{r}_i)=-\sum_jm_j\frac{p_i+p_j}{2\rho_j}\nabla W(\mathbf{r}_i-\mathbf{r}_j,h)Fipressure=−∇p(ri)=−j∑mj2ρj
This again has to be addressed in the viscosity term, which naively yields the asymmetric relation
This is addressed using velocity differences as a natural approach due to the viscosity force's dependence only on velocity differences and not absolute velocity which can be thought of as looking at
the neighbors of particle iii from iii's own moving frame of reference.
### Surface Tension
So far, we have given a brief justification for the Navier-Stokes from basic physical principles and have discussed the use of SPH as a Lagrangian discretization scheme, allowing us to model pressure
and viscosity forces. A crucial missing component for a believable simulation, however, is surface tension. Surface tension is the result of attractive forces between neighboring molecules in a
fluid. On the interior of a fluid, these attractive forces are balanced, and cancel out, whereas on the surface of a fluid, they are unbalanced, creating a net force acting in the direction of the
surface normal towards the fluid, usually minimizing the curvature of the fluid surface. The magnitude of this force depends on the magnitude of the current curvature of the surface as well as a
constant σ\sigmaσ depending on the two fluids that form the boundary.
Surface tension forces, while not present in the incompressible Naiver-Stokes equations, can be explicitly modeled in a number of different ways. Again, the most common and perhaps easiest to grasp
approach is presented by Müller et al., which revises the Navier-Stokes equations by adding another term based upon the above physical principles for surface tension
ρ(ri)uidt=−∇p(ri)+η∇2u(ri)+ρ(ri)g(ri)−σ∇2cs(ri)∇cs(ri)∣∇cs(ri)∣\rho(\mathbf{r}_i)\frac{\mathbf{u}_i}{dt}=-\nabla p(\mathbf{r}_i)+\eta\nabla^2\mathbf{u}(\mathbf{r}_i)+\rho(\mathbf{r}_i)\mathbf{g}(\
mathbf{r}_i)\boxed{-\sigma\nabla^2 c_s(\mathbf{r}_i)\frac{\nabla c_s(\mathbf{r}_i)}{|\nabla c_s(\mathbf{r}_i)|}}ρ(ri)dtui=−∇p(ri)+η∇2u(ri)+ρ(ri)g(ri)−σ∇2cs(ri)∣∇cs(ri)∣∇cs(ri)
where cs(r)c_s(\mathbf{r})cs(r) is a color field 111 at particle locations and 000 everywhere else, given in a smoothed form by
note that the gradient of the smoothed color field n=∇cs\mathbf{n}=\nabla c_sn=∇cs yields the surface normal field pointing into the fluid and the divergence of nnn measures the curvature of the
κ=−∇2csn\mathbf{\kappa}=\frac{-\nabla^2 c_s}{\mathbf{n}}κ=n−∇2cs
Thus, we have the following force density
Fsurface=σκn=−σ∇2csn∣n∣\mathbf{F}^{\text{surface}}=\sigma\kappa\mathbf{n}=-\sigma\nabla^2 c_s\frac{\mathbf{n}}{|\mathbf{n}|}Fsurface=σκn=−σ∇2cs∣n∣n
formed by distributing the surface traction among particles near the surface by multiplying a normalized scalar field which is non-zero only near the surface.
### Kernel Functions
Müller states "(s)tability, accuracy and speed of the SPH method highly depend on the choice of the smoothing kernels." The design of kernels for use in SPH is an area of active research and varies
widely by implementation, but there are a number of core principles which are generally adhered to, well exemplified by Müller's original proposed kernels: poly6, spiky, and viscosity. The figure
above shows these kernels (in order), with the thick line corresponding to their value, the thin line to their gradient towards the center, and the dashed line to their Laplacian.
Kernels with second-order interpolation errors can be constructed as even and normalized. Kernels that are 000 with vanishing derivatives at the boundary are conductive to stability. Justification of
this is left as an exercise to the reader.
Müller's poly6 kernel
Wpoly6(r,h)=31564πh9{(h2−r2)30≤r≤h0otherwiseW_{\text{poly6}}(\mathbf{r},h)=\frac{315}{64\pi h^9}\begin{cases} (h^2-r^2)^3 &\quad 0\leq r\leq h\\ 0 &\quad \text{otherwise} \end{cases}Wpoly6(r,h)=64π
satisfies the design constraints and can be evaluated without computing square roots in distance computations since rrr only appears squared.
The spiky kernel
Wspiky(r,h)=15πh6{(h−r)30≤r≤h0otherwiseW_{\text{spiky}}(\mathbf{r},h)=\frac{15}{\pi h^6}\begin{cases} (h-r)^3 &\quad 0\leq r\leq h\\ 0 &\quad \text{otherwise} \end{cases}Wspiky(r,h)=πh615{(h−r)300
originally proposed by Desburn for use in SPH deformable body simulation solves the problem of particle clumping when poly6 is used for pressure force computation by necessitating a non-zero gradient
near the center.
Finally, the viscosity kernel
Wviscosity(r,h)=152πh3{−r32h3+r2h2+h2r−10≤r≤h0otherwiseW_{\text{viscosity}}(\mathbf{r},h)=\frac{15}{2\pi h^3}\begin{cases} -\frac{r^3}{2h^3}+\frac{r^2}{h^2}+\frac{h}{2r}-1 &\quad 0\leq r\leq h\\ 0 &\
quad \text{otherwise} \end{cases}Wviscosity(r,h)=2πh315{−2h3r3+h2r2+2rh−100≤r≤hotherwise
has the following properties
W(∣r∣=h,h)=0∇W(∣r∣=h,h)=0∇2W(r,h)=45πh6(h−r)\begin{aligned} W(|\mathbf{r}|=h,h)&=0\\ \nabla W(|\mathbf{r}|=h,h)&=\mathbf{0}\\ \nabla^2W(\mathbf{r},h)&=\frac{45}{\pi h^6}(h-r) \end{aligned}W(∣r∣=h,h)∇
We desire a viscosity kernel with a smoothing effect only on the velocity field. For other kernels, there can be a non-zero negative Laplacian of the smoothed velocity field for two close particles,
creating forces that increase particle relative velocities, creating artifacts and instability especially when the number of particles is low. The given viscosity kernel has a positive Laplacian
everywhere and satisfies the desired constraints.
## Extensions
As discussed, vanilla (i.e. Müller-like) SPH formulations have a number of limitations, notably in the calculation of pressure and viscosity, artificial surface tension, and the relatively arbitrary
choice of kernel functions. Most importantly, but beyond the scope of this brief introduction, are the problems with enforcing incompressibility and maintaining simulation stability with large
timesteps with techniques such as PCISPH. Further extensions include using different equations of state for pressure calculation (see WCSPH), a discussion of applicable numerical integration methods,
adaptive timestepping, and a myriad of performance optimizations (the astute reader might already see a need for sorting particles into a grid based upon the kernel support radius).
We plan to release a number of tutorials in the near future detailing both basic and advanced SPH implementations, as well as discussions of the mathematics behind some of the techniques used in
bleeding-edge particle-based fluid solvers.
## Citation
Cited as:
Schuermann, Lucas V. (May 2016). Particle-Based Fluid Simulation with SPH. Writing. https://lucasschuermann.com/writing/particle-based-fluid-simulation
title = {Particle-Based Fluid Simulation with SPH},
author = {Schuermann, Lucas V.},
year = {2016},
month = {May},
url = "https://lucasschuermann.com/writing/particle-based-fluid-simulation" | {"url":"https://lucasschuermann.com/writing/particle-based-fluid-simulation","timestamp":"2024-11-03T10:59:25Z","content_type":"text/html","content_length":"230608","record_id":"<urn:uuid:248c713f-4d12-476e-9d04-d04b135ce13e>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00486.warc.gz"} |
Gauge Symmetry
A gauge symmetry is analogous to how we can describe something within one language through different words (synonyms). A description of the same thing in different languages is called a Duality.
When we describe things in physics, we have always some freedom in our description. For example, it doesn't matter what coordinate system we choose. It makes no difference where we choose the origin
of the coordinate system or how it is oriented.
The computations can be different in different coordinate systems and usually, one picks a coordinate system where the computation is especially simple. However, the physics that we are describing,
of course, doesn't care about how we describe it. It stays the same, no matter how we choose our coordinate system.
In modern physics, we no longer describe what is happening merely through the position of objects at a given time, as we do it in classical mechanics. Instead, we use abstract objects called fields.
The best theory of what is happening in nature at the most fundamental level is quantum field theory. Like the electromagnetic field, these fields can get excited (think: we can produce a wave or
ripple of the field). For example, when we excite the electron field we "produce" an electron.
The fields themselves are abstract mathematical entities that are introduced as convenient mathematical tools. With these new mathematical entities comes a new kind of freedom. Completely analogous
to how we have the freedom to choose the orientation and the location of the origin of our coordinate system, we now have freedom in how we define our fields.
The freedom to "shift" or "rotate" our fields is called gauge symmetry. It is important to note that this symmetry is completely independent from the rotational and translational symmetry of our
coordinate systems. When we "shift" or "rotate" a field we do not refer to anything in spacetime, but instead we "shift" and "rotate" merely our description of a given field.
A particular nice system to understand Yang-Mills theory in a non-technical context are financial markets.
Say, we represent different countries by different points on a lattice. Each country has its own currency. However, the absolute values of the different currencies has no meaning of all on the global
money market. Instead, all that counts are the relationships between the different currencies. The exchange rates between different currencies is encoded in something we call connection.
Now, how can we make money on the money market?
Well, let's say we are a banker in London. We have a budget of 100 pounds. Our goal is to trade our currency against other currencies in such a way that we have in the end more than 100 pounds. This
means, we need to look for profitable loops through the global money lattice. We are interested in loops, because to actually determine if have earned money, we need to compare our final amount of
money with the 100 pounds we started with. This is only possible if the final amount of money is also given in pounds.
We start by trading our 100 pound against 150 dollars.
Thus, by trading in a loop, we have gained 50 pounds.
The loops we considered here are exactly analogous to the Wilson loops used in quantum field theories. The gauge freedom corresponds here to the freedom to rescale the local currencies. For examples,
England could introduce a new currency called "new-pound" and determine that 1 new-pound is worth 10 pounds. This wouldn't change the situation at the global money market at all because all banks
would simply adjust their exchange rates:
Absolute value has no meaning and this is what we call gauge freedom. We can't make more money simply because we change the local value of our currency.
The thing we used in the situation above to earn money is called arbitrage. An arbitrage means a possibility to earn money without any risk. The arbitrage possibilities are completely encoded in
closed loops.
In physics, we usually don't use Wilson loops, but instead, work with gauge field strengths (think electric or magnetic field strength). These gauge field strengths $F_{\mu\nu}$ correspond to
infinitesimal Wilson loops around a given point $x$.
So, imagine the universe as a big chessboard. I could change every white square on a chessboard to a black square and every black square to a white square and the game would be exactly the same.
That’s the simple kind of symmetry. Now I can turn it into a gauge symmetry by making it much trickier. I can say, “Let me just change locally, whenever I want, a white square to a black square or a
black square to a white square. Not everywhere but place to place." Now the chessboard doesn’t look the same at all, so the game can’t be the same unless I also have a rule book—a coordinate system
for what happens at every point—containing rules for the pieces of the chessboard to follow to keep the game the same, rules that account for everywhere I have changed the color of a square. That
becomes a very weird symmetry. As I explain in the book, this sort of symmetry tells you how to go from the conservation of charge to the theory of electromagnetism. It says, “I could change the sign
of each electric charge in nature locally. But I have to have a rule book.” What's the rule book? In this case, it’s the electromagnetic field. Even though gauge symmetry is something that most
people find obscure, it’s the most visible thing in the world—and if you don’t have it, things fall apart in surprising ways. Whenever you look at a lightbulb, you're able to see light because nature
has this weird symmetry. https://www.scientificamerican.com/article/q-a-lawrence-krauss-on-the-greatest-story-ever-told/
So, imagine the universe as a big chessboard. I could change every white square on a chessboard to a black square and every black square to a white square and the game would be exactly the same.
That’s the simple kind of symmetry. Now I can turn it into a gauge symmetry by making it much trickier. I can say, “Let me just change locally, whenever I want, a white square to a black square or a
black square to a white square. Not everywhere but place to place." Now the chessboard doesn’t look the same at all, so the game can’t be the same unless I also have a rule book—a coordinate system
for what happens at every point—containing rules for the pieces of the chessboard to follow to keep the game the same, rules that account for everywhere I have changed the color of a square. That
becomes a very weird symmetry.
As I explain in the book, this sort of symmetry tells you how to go from the conservation of charge to the theory of electromagnetism. It says, “I could change the sign of each electric charge in
nature locally. But I have to have a rule book.” What's the rule book? In this case, it’s the electromagnetic field. Even though gauge symmetry is something that most people find obscure, it’s the
most visible thing in the world—and if you don’t have it, things fall apart in surprising ways. Whenever you look at a lightbulb, you're able to see light because nature has this weird symmetry.
Fields in physics are something which associate with each point in space and with each instance in time a quantity. In case of electromagnetism this is a quantity describing the electric and magnetic
properties at this point. Each of these two properties turn out to have a strength and a direction. Thus the electric and magnetic fields associate with each point in space and time an electric and a
magnetic magnitude and a direction. For a magnetic field this is well known from daily experience. Go around with a compass. As you move, the magnetic needle will arrange itself in response to the
geomagnetic field. Thus, this demonstrates that there is a direction involved with magnetism. That there is also a strength involved you can see when moving two magnets closer and closer together.
How much they pull at each other depends on where they are relative to each other. Thus there is also a magnitude associated with each point. The same actually applies to electric fields, but this is
not as directly testable with common elements. Ok, so it is now clear that electric and magnetic fields have a direction and a magnitude. Thus, at each point in space and time six numbers are needed
to describe them: two magnitudes and two angles each to determine a direction. When in the 19th century people tried to understand how electromagnetism works they also figured this out. However, they
made also another intriguing discovery. When writing down the laws which govern electromagnetism, it turns out that electric and magnetic fields are intimately linked, and that they are just two
sides of the same coin. That is the reason to call it electromagnetism. In the early 20th century it then became clear that both phenomena can be associated with a single particle, the photon. But
then it was found that to characterize a photon only two numbers at each point in space and time are necessary. This implies that between the six numbers characterizing electric and magnetic fields
relations exist. These are known as Maxwell equations in classical physics, or as quantum Maxwell dynamics in the quantum theory. If you would add, e. g., electrons to this theory, you would end up
with quantum electro dynamics - QED. So, this appeared as a big step forward in describing numerically electromagnetism. However, when looking deeper into the mathematical concepts, it turned out to
be technically rather complicated to describe all electric and magnetic phenomena with just these two properties of the photon. It was then that people noticed that including a certain redundancy
things became much simpler. An ideal solution was found to describe electromagnetism with four numbers at each space-time point, instead of two. These can then not be independent, of course. And it
is here where the symmetry comes into play: It is a symmetry concept which connects these numbers. First, here is a simple example of how it works. Take someone walking only along the circumference
of a circle. Then you can either describe her position by the height and width from the center of the circle. Or you can use the angle around the circle's circumference. Both is equally valid. Hence,
the two numbers of the first choice are uniquely connected to the second choice: Changing the angle will change both height and width simultaneously! And because this connection comes from the fact
that the circle is rotationally symmetric, it is this symmetry. And the symmetry of a circle is called U(1). Now, the relation between the four convenient numbers and the two important ones is quite
in analogy to this case, and is therefore also a U(1) symmetry. That is how the symmetry becomes associated with electromagnetism. This tells us that if we change the four numbers by, so to say,
moving them around on the circle, we do not change the two numbers describing the photon (or the six describing the electric and magnetic field). Only when we move away from the circumference, the
two (and six) numbers change. In this way the symmetry is only helping us in a mathematical description, but is not influencing what we can measure. It is therefore also called a gauge symmetry.
Fields in physics are something which associate with each point in space and with each instance in time a quantity. In case of electromagnetism this is a quantity describing the electric and magnetic
properties at this point. Each of these two properties turn out to have a strength and a direction. Thus the electric and magnetic fields associate with each point in space and time an electric and a
magnetic magnitude and a direction. For a magnetic field this is well known from daily experience. Go around with a compass. As you move, the magnetic needle will arrange itself in response to the
geomagnetic field. Thus, this demonstrates that there is a direction involved with magnetism. That there is also a strength involved you can see when moving two magnets closer and closer together.
How much they pull at each other depends on where they are relative to each other. Thus there is also a magnitude associated with each point. The same actually applies to electric fields, but this is
not as directly testable with common elements. Ok, so it is now clear that electric and magnetic fields have a direction and a magnitude. Thus, at each point in space and time six numbers are needed
to describe them: two magnitudes and two angles each to determine a direction.
When in the 19th century people tried to understand how electromagnetism works they also figured this out. However, they made also another intriguing discovery. When writing down the laws which
govern electromagnetism, it turns out that electric and magnetic fields are intimately linked, and that they are just two sides of the same coin. That is the reason to call it electromagnetism.
In the early 20th century it then became clear that both phenomena can be associated with a single particle, the photon. But then it was found that to characterize a photon only two numbers at each
point in space and time are necessary. This implies that between the six numbers characterizing electric and magnetic fields relations exist. These are known as Maxwell equations in classical
physics, or as quantum Maxwell dynamics in the quantum theory. If you would add, e. g., electrons to this theory, you would end up with quantum electro dynamics - QED.
So, this appeared as a big step forward in describing numerically electromagnetism. However, when looking deeper into the mathematical concepts, it turned out to be technically rather complicated to
describe all electric and magnetic phenomena with just these two properties of the photon. It was then that people noticed that including a certain redundancy things became much simpler. An ideal
solution was found to describe electromagnetism with four numbers at each space-time point, instead of two. These can then not be independent, of course. And it is here where the symmetry comes into
play: It is a symmetry concept which connects these numbers.
First, here is a simple example of how it works. Take someone walking only along the circumference of a circle. Then you can either describe her position by the height and width from the center of
the circle. Or you can use the angle around the circle's circumference. Both is equally valid. Hence, the two numbers of the first choice are uniquely connected to the second choice: Changing the
angle will change both height and width simultaneously! And because this connection comes from the fact that the circle is rotationally symmetric, it is this symmetry. And the symmetry of a circle is
called U(1). Now, the relation between the four convenient numbers and the two important ones is quite in analogy to this case, and is therefore also a U(1) symmetry. That is how the symmetry becomes
associated with electromagnetism. This tells us that if we change the four numbers by, so to say, moving them around on the circle, we do not change the two numbers describing the photon (or the six
describing the electric and magnetic field). Only when we move away from the circumference, the two (and six) numbers change. In this way the symmetry is only helping us in a mathematical
description, but is not influencing what we can measure. It is therefore also called a gauge symmetry. http://axelmaas.blogspot.de/2010/10/electromagnetism-photons-and-symmetry.html
$$ \mathcal{L}_{EM} = -{1\over 4} F_{\mu \nu}F^{\mu \nu} - J^{\mu}A_{\mu} $$ where $F^{\mu \nu} \equiv \partial^{\mu}A^{\nu} - \partial^{\nu}A^{\mu} $
The crucial observation is now that $A^{\mu}$ does not uniquely specify the action. Instead, we can take an arbitrary function $\chi(x^{\mu})$, and the action will be unchanged under the
transformation \begin{eqnarray} A^{\mu} \rightarrow A'^{\mu} = A^{\mu} + \partial^{\mu}\chi . \end{eqnarray} To see this explictly we first calculate \begin{eqnarray} F'^{\mu \nu} &=& \partial^{\mu}
A'^{\nu} - \partial^{\nu}A'^{\mu} = \partial^{\mu}(A^{\nu}+\partial^{\nu}\chi) - \partial^{\nu}(A^{\mu}+ \partial^{\mu}\chi) \\ &=& \partial^{\mu}A^{\nu} - \partial^{\nu}A^{\mu} + \partial^{\mu} \
partial^{\nu}\chi -\partial^{\mu}\partial^{\nu}\chi \\ &=& F^{\mu \nu} \label{eq:invarianceoffmn} \end{eqnarray} So the field strength tensor $ F^{\mu \nu}$ is indeed unchanged by this
transformation: $F'^{\mu \nu} = F^{\mu \nu}$.
In addition we can calculate that $J^{\mu}A_{\mu} \rightarrow J^{\mu}A_{\mu} + J^{\mu}\partial_{\mu}\chi$. To see this, we integrate the second term by parts with the usual boundary conditions, \
begin{eqnarray} \int d^4x J^{\mu}\partial_{\mu}\chi = -\int d^4x (\partial_{\mu}J^{\mu})\chi \end{eqnarray} Now we use that according to Maxwell's equations, $\partial_{\mu}J^{\mu} = \partial_{\mu}\
partial_{\nu}F^{\mu \nu} \equiv 0$ because $F^{\mu \nu}$ is totally antisymmetric.
Therefore, both $F^{\mu \nu}$ and $J^{\mu}\partial_{\mu}\chi$ are invariant under the transformation
\begin{eqnarray} A^{\mu} \rightarrow A'^{\mu} = A^{\mu} + \partial^{\mu}\chi , \end{eqnarray}
which is called a gauge transformation. This means immediately that the action is unchanged and that this transformation is a symmetry of the system.
The group of gauge transformations $G$ means the bundles automorphisms which preserve the Lagrangian. (Source)
The gauge group is simply one fiber of the bundle, i.e. for example, $SU(2)$.
We denote the space of all connections by $A$. Now, to get physically sensible results we must be careful with these different notions:
Integration should therefore be carried out on the quotient space $\mathcal{G}=A/G$. Now $A$ is a linear space but $\mathcal{G}$ is only a manifold and has to be treated with more respect. Thus for
integration purposes a Jacobian term arises which, in perturbation theory, gives rise to the well-known Faddeev-Popov "ghost" particles. Nonperturbatively it seems reasonable that global topological
features of $\mathcal{G}$ will be relevant. Geometrical Aspects of Gauge Theories by M. F. Atiyah
Integration should therefore be carried out on the quotient space $\mathcal{G}=A/G$. Now $A$ is a linear space but $\mathcal{G}$ is only a manifold and has to be treated with more respect. Thus for
integration purposes a Jacobian term arises which, in perturbation theory, gives rise to the well-known Faddeev-Popov "ghost" particles. Nonperturbatively it seems reasonable that global topological
features of $\mathcal{G}$ will be relevant.
There are different "high-level" descriptions of gauge theories. The most famous one makes use of the fibre bundle formalism. Another possibility is the "loop formulation".
Gauge potentials take their values in the Lie algebra $\mathfrak{g}$ of the gauge group $\mathcal G$.
It is important to note that there is the difference between a group $G$ and the corresponding gauge group $\mathcal G$.
$G$ is simply a set of symmetry transformations. In contrast, $\mathcal G$ is a group of smooth functions on spacetime that take values in $G$. More mathematically, the group of gauge transformations
$ \mathcal G$ means the bundles automorphisms which preserve the Lagrangian. (Source). The group $G$ is simply one fibre of the bundle, i.e. for example, $SU(2)$.
Moreover, one can argue that the "true" gauge symmetry is given by a subset of $\mathcal G$ called $\mathcal G_\star$:
\mathcal{G}_\star &= \big \{ \text{ set of all } g(x) \text{ such that } g \to 1 \text{ as } |x| \to \infty \big \} \notag \\ \mathcal{G} &= \big \{ \text{ set of all } g(x) \text{ such that } g \to
\text{ constant element of $G$, not necessarily $1$ as } |x| \to \infty \big \}
This comes about when one considers Gauss law to identify physical states. Such physical states are invariant under $\mathcal{G}_\star$ and thus this subgroup connects physically redundant variables
in the theory.
Since the elements of $\mathcal G$ go only to a constant, which is not necessarily $1$ at spatial infinity, we have
\mathcal{G} / \mathcal{G}_\star \sim \text{ set of constant g's } \sim G \notag \\ \mathcal{G} / \mathcal{G}_\star \sim G \text{ is the Noether symmetry of the theory defined by the charges}
All transformations $g(x)$ which go to a constant at spatial infinity that is not $1$ act as a Noether symmetry of the theory.
For more on this see, section 10.3 and chapter 16 in Quantum Field Theory - A Modern Perspective by V. P. Nair.
We denote the space of all connections by $\mathcal A$ (= the space of all gauge potentials $A_i$). This space is a an affine space, which simply means that any potential $A_i$ can be written as $A_i
^{(0)} + h_i$, where $A_i^{(0)}$ is a given fixed potential and $h_i$ is an arbitrary vector field that takes values in the Lie algebra. Geometrically this means that, any two points in $\mathcal A$
can be connected by a straight line.
For two potentials $A_i^{(1)}$ and $A_i^{(2)}$, we can define the following sequence of gauge potential configurations
where $0 \leq \tau \geq 1$ parametrizes the straight line between the two configurations. (Topologically this space is rather trivial).
The configuration space of the theory is $\mathcal C = \mathcal A / \mathcal G_\star$.
Now, to get physically sensible results we must be careful with these different notions:
Integration should therefore be carried out on the quotient space $G=\mathcal A/\mathcal{G}_\star$. Now $\mathcal A$ is a linear space but $\mathcal{G}_\star$ is only a manifold and has to be treated
with more respect. Thus for integration purposes a Jacobian term arises which, in perturbation theory, gives rise to the well-known Faddeev-Popov "ghost" particles. Nonperturbatively it seems
reasonable that global topological features of $\mathcal{G}_\star$ will be relevant. Geometrical Aspects of Gauge Theories by M. F. Atiyah
Integration should therefore be carried out on the quotient space $G=\mathcal A/\mathcal{G}_\star$. Now $\mathcal A$ is a linear space but $\mathcal{G}_\star$ is only a manifold and has to be treated
with more respect. Thus for integration purposes a Jacobian term arises which, in perturbation theory, gives rise to the well-known Faddeev-Popov "ghost" particles. Nonperturbatively it seems
reasonable that global topological features of $\mathcal{G}_\star$ will be relevant.
Gauge symmetry sometimes appears to be a curious shell game. One starts with some initial global symmetry algebra and makes it “local” via the introduction of new degrees of freedom, enlarging the
symmetry algebra enormously; then, states that differ by gauge transformations are identified as the same physical state, effectively reducing the symmetry algebra. It is typically expected that the
reduced symmetry algebra relating physical observables is the same as the initial algebra. In which case, the net effect of the gauge procedure, is to introduce new dynamical degrees of freedom (the
gauge bosons). In the end, the advantage of the redundant description over a description involving only physical degrees is that the physical description is nonlocal. […] It has long been known that
for gravity in asymptotically flat space [1, 2] or asymptotically AdS3 [3], the final physical symmetry algebra is an infinite-dimensional enhancement of the “global part” of the gauge group. Only
recently, however, has it been realized that the enhancement also occurs for higher dimensional gravity, Maxwell theory, Yang–Mills theory, and string theory, and moreover, that the symmetry
constrains the IR structure via nontrivial Ward identities [4–15].https://arxiv.org/abs/1510.07038
Gauge symmetries are at the heart of the best theory of fundamental interactions, the standard model of particle physics. Theories that make use of gauge symmetry are commonly called gauge theories.
In addition to this application, gauge symmetry can also be useful to understand finance. This is shown, for example, in
Gauge symmetry principles are regularly invoked in the context of justification, as deep physical principles, fundamental starting points in thinking about why physical theories are the way they are,
so to speak. “On continuous symmetries and the foundations of modern physics” by CHRISTOPHER A. MARTIN
Gauge symmetry principles are regularly invoked in the context of justification, as deep physical principles, fundamental starting points in thinking about why physical theories are the way they are,
so to speak.
“On continuous symmetries and the foundations of modern physics” by CHRISTOPHER A. MARTIN
We do not yet have a full picture of how nature overcomes the dichotomy between simple fundamental laws and complex emergent phenomena. But particle physics has made huge progress in this direction
and the key words are gauge theory. Gauge theory is the essential concept out of which the Standard Model is built: a concept that has all the features of a fundamental principle of nature. It is
elegant (based on symmetry considerations), robust (no continuous deformations of the theory are generally allowed), and predictive (given the field content, all processes are described by a single
coupling constant). In short, it has all the requirements for a physicist to see simplicity in it. The magic about gauge theory lies in the richness of its structure and its ability to produce, out
of a simple conceptual principle, a great variety of different manifestations. Long-range forces, short-range forces, confinement, dynamical symmetry breaking are all phenomena described by the same
principle. The vacuum structure of gauge theory is unbe- lievably rich, with θ-vacua, instantons, chiral and gluon condensates, all being expressions of the same theory. The phase diagram at finite
temperature and density exhibits a variety of new phenomena and states of matter. In short, gauge theory is an exquisite tool to make complexity out of simplicity.
Especially: "local gauge invariance in quantum theory does not imply the existence of an external electromagnetic field"!
For example, there are thirteen groups with the same Lie algebra as the famous $SU(3) \times SU(2) \times U(1)$ gauge symmetry of the standard model. In addition, there are good reasons to believe
that the correct gauge group of the standard model is not $SU(3) \times SU(2) \times U(1)$ , but rather $S(U(3) \times U(2) )$.
This is done, for example, in Vol. 1 of Weinberg's Quantum Field Theory book in section 5.9.
Weinberg shows that a massless spin 1 vector field $A_\mu$ cannot be a four-vector under Lorentz transformations. Instead, he derives that under a Lorentz transformation $\Lambda$ a massless spin 1
vector field $A_\mu$ transforms as follows:
$$ U(\Lambda) A_\mu(x) U^{-1}(\Lambda) = \Lambda^\nu_\mu A_\nu(\Lambda x) + \partial_\mu \Omega(x,\Lambda), $$
where $\Omega(x,\Lambda)$ is some function of the creation and annihilation operators. Therefore, he concludes that in order to get a Lorentz invariant theory it is not enough to write down terms in
the Lagrangian that are invariant under the "naive" transformation $A_\mu \to \Lambda^\nu_\mu A_\nu$, but additionally the terms must be invariant under $A_\mu \to A_\mu + \partial_\mu \Omega $. This
second part of the transformation is the well known gauge transformation of $A_\mu$. In this sense, the gauge symmetry follows from the Lorentz symmetry.
A summary of Weinberg's argument with an easier notation can be found in this article.
This emergence of gauge symmetry was also discussed nicely from a bit different perspective in this recent paper by Nima Arkani-Hamed, Laurentiu Rodina, Jaroslav Trnka.
“Maxwell’s theory and Einstein’s theory are essentially the unique Lorentz invariant theories of massless particles with spin $j =1$ and $j =2$”. Photons and Gravitons in Perturbation Theory:
Derivation of Maxwell's and Einstein's Equations by Steven Weinberg
“Maxwell’s theory and Einstein’s theory are essentially the unique Lorentz invariant theories of massless particles with spin $j =1$ and $j =2$”.
Photons and Gravitons in Perturbation Theory: Derivation of Maxwell's and Einstein's Equations by Steven Weinberg
Take note that there is a close connection between this kind of argument and the famous Weinberg-Witten Theorem.
This point of view is formulated by Wilczek in his article "What QCD tells us about nature - and why we should listen":
Summarizing the argument, only those relativistic field theories which are asymptotically free can be argued in a straightforward way to exist. And the only asymptotically free theories in four
space-time dimensions involve nonabelian gauge symmetry, with highly restricted matter content. So the axioms of gauge symmetry and renormalizability are, in a sense, gratuitous. They are implicit in
the mere existence of non-trivial interacting quantum field theories. What QCD tells us about nature - and why we should listen by F. Wilczek
Summarizing the argument, only those relativistic field theories which are asymptotically free can be argued in a straightforward way to exist. And the only asymptotically free theories in four
space-time dimensions involve nonabelian gauge symmetry, with highly restricted matter content. So the axioms of gauge symmetry and renormalizability are, in a sense, gratuitous. They are implicit in
the mere existence of non-trivial interacting quantum field theories.
What QCD tells us about nature - and why we should listen by F. Wilczek
This modification is not uniquely dictated by the demand of local gauge invariance. There are infinitely many other gauge-invariant terms that might be added to the Lagrangian if gauge invariance
were the only input to the argument. In order to pick out the minimal modification uniquely, we must bring in, besides gauge invariance and knowledge of field theories generally, the requirements of
Lorentz invariance, simplicity, and, importantly, renormalizability. (For example, a Pauli term is Lorentz invariant and gauge invariant but not renormalizable.) The minimal modification is then the
simplest, renormalizable, Lorentz and gauge-invariant Lagrangian yielding second-order equations of motion for the coupled system (O’Raifeartaigh, 1979). The point is simply that, in the context of
the gauge argument, the requirement of local gauge invariance gets a lot of its formal muscle in combination with other important considerations and requirements. “On continuous symmetries and the
foundations of modern physics” by CHRISTOPHER A. MARTIN
This modification is not uniquely dictated by the demand of local gauge invariance. There are infinitely many other gauge-invariant terms that might be added to the Lagrangian if gauge invariance
were the only input to the argument. In order to pick out the minimal modification uniquely, we must bring in, besides gauge invariance and knowledge of field theories generally, the requirements of
Lorentz invariance, simplicity, and, importantly, renormalizability. (For example, a Pauli term is Lorentz invariant and gauge invariant but not renormalizable.) The minimal modification is then the
simplest, renormalizable, Lorentz and gauge-invariant Lagrangian yielding second-order equations of motion for the coupled system (O’Raifeartaigh, 1979). The point is simply that, in the context of
the gauge argument, the requirement of local gauge invariance gets a lot of its formal muscle in combination with other important considerations and requirements.
The Pauli Term is $\frac{m_0}{\Lambda_0^2}\bar\Psi \gamma^{\mu \nu} F_{\mu \nu} \Psi$, where $\gamma^{\mu \nu}$ is $[\gamma^\mu,\gamma^\nu]$. It is non-renormalizable, because the factors coming from
$\frac{m_0}{\Lambda_0^2}$ in higher order of perturbation theory, have to be compensated by more and more divergent integrals.
Moreover, any theory can be made gauge invariant by the "Stueckelberg trick":
While many older textbooks rhapsodize about the beauty of gauge symmetry, and wax eloquent on how “it fully determines interactions from symmetry principles”, from a modern point of view gauge
invariance can also be thought of as by itself an empty statement. Indeed any theory can be made gauge invariant by the “Stuckelberg trick”–elevating gauge transformation parameters to fields–with
the “special” gauge invariant theories distinguished only by realizing the gauge symmetry with the fewest number of degrees of freedom.
This is similar to the discussion about the role of "general covariance" in general relativity. According to Einstein, "general covariance" is the symmetry principle at the heart of general
relativity. However, it was quickly noted by Kretschmann that any theory can be formulated in a general covariant way.
See Anandan (1983) who argues that from both the physical and mathematical points of view, the holonomy contains all the relevant (gauge-invariant) information. Specifically, the connection can be
constructed (up to gauge transformation) from a knowledge of the holonomies. Formalizing gauge theories in terms of holonomies associated with (non-local) loops in space appears, though, to require a
revamped conception of the notion of a physical field (see Belot, 1998). “On continuous symmetries and the foundations of modern physics” by CHRISTOPHER A. MARTIN
See Anandan (1983) who argues that from both the physical and mathematical points of view, the holonomy contains all the relevant (gauge-invariant) information. Specifically, the connection can be
constructed (up to gauge transformation) from a knowledge of the holonomies. Formalizing gauge theories in terms of holonomies associated with (non-local) loops in space appears, though, to require a
revamped conception of the notion of a physical field (see Belot, 1998).
For more on the loop space formulation of quantum field theory, have a look at the small book "Some Elementary Gauge Theory Concepts" by Sheung Tsun Tsou, Hong-Mo Chan
See, for example, "Tracking down gauge: an ode to the constrained Hamiltonian formalism" by JOHN EARMAN
What Nielsen imagines is that the whole cosmos is just at the point of a phase transition between two phases. He and his colleagues, such as Don Bennett, try to demonstrate that many of the observed
properties of the elementary particles arise simply from this fact, independently of whatever the fundamental laws of physics are. They want to say that, just as bubbles are universally found in
liquids that are boiling, the fundamental particles we observe may be simply universal consequences of the universe being balanced at the point of a transition between phases. If so, their properties
may to a large extent be independent of whatever fundamental law governs the world. […] In fact, Nielsen and his colleagues do claim some successes for the hypothesis of random fundamental dynamics.
Among them is the fact that all the fundamental interactions must be gauge interactions, of the type described by Yang-Mills theory and general relativity. This means that the world would appear at
large scales to be governed by these interactions, whether or not they are part of the fundamental description of the world at the Planck scale. This last claim is, in fact, rather well accepted
among particle theorists. It has been independently confirmed by Steven Shenker and others. The Life of the Cosmos by Lee Smolin
What Nielsen imagines is that the whole cosmos is just at the point of a phase transition between two phases. He and his colleagues, such as Don Bennett, try to demonstrate that many of the observed
properties of the elementary particles arise simply from this fact, independently of whatever the fundamental laws of physics are. They want to say that, just as bubbles are universally found in
liquids that are boiling, the fundamental particles we observe may be simply universal consequences of the universe being balanced at the point of a transition between phases. If so, their properties
may to a large extent be independent of whatever fundamental law governs the world.
In fact, Nielsen and his colleagues do claim some successes for the hypothesis of random fundamental dynamics. Among them is the fact that all the fundamental interactions must be gauge interactions,
of the type described by Yang-Mills theory and general relativity. This means that the world would appear at large scales to be governed by these interactions, whether or not they are part of the
fundamental description of the world at the Planck scale. This last claim is, in fact, rather well accepted among particle theorists. It has been independently confirmed by Steven Shenker and others.
Such a point of view is supported, for example, by observations in condensed matter physics:
Well, all the asymptotic behavior and renormalization group fixed points that we look at in condensed matter theory seem to grow symmetries not necessarily reflecting those of the basic, underlying
theory. In particular, I will show some experiments tomorrow, where, in fact, one knows for certain that the observed symmetry grows from a totally unsymmetric underlying physics. Although as a
research strategy I think what you say about postulating symmetry is totally unarguable, one can remark, in opposition, that it is only the desperate man who seeks after symmetry! If we truly
understand a theory, we should see symmetry coming out or, on the other hand, failing to appear. So I am certainly not criticizing you on strategy. But you recognize - you put it very nicely, and I
was relieved to hear it - that the renormalization group principle works in a large space, there are many fixed points, and there are many model field theories. So I am still unclear as to the origin
of your faith that string theory should give us the standard model rather than some other type of local universe. Michael Fisher in Conceptual Foundations of Quantum Field Theory, Edited by Cao
Well, all the asymptotic behavior and renormalization group fixed points that we look at in condensed matter theory seem to grow symmetries not necessarily reflecting those of the basic, underlying
theory. In particular, I will show some experiments tomorrow, where, in fact, one knows for certain that the observed symmetry grows from a totally unsymmetric underlying physics. Although as a
research strategy I think what you say about postulating symmetry is totally unarguable, one can remark, in opposition, that it is only the desperate man who seeks after symmetry! If we truly
understand a theory, we should see symmetry coming out or, on the other hand, failing to appear. So I am certainly not criticizing you on strategy. But you recognize - you put it very nicely, and I
was relieved to hear it - that the renormalization group principle works in a large space, there are many fixed points, and there are many model field theories. So I am still unclear as to the origin
of your faith that string theory should give us the standard model rather than some other type of local universe.
Michael Fisher in Conceptual Foundations of Quantum Field Theory, Edited by Cao
Is gauge symmetry an autonomous concept, logically independent of other leading principles of physics? On the contrary, it appears to be mandatory, in the theory of vector particles, to insure
consistency with special relativity and quantum mechanics. For if the transverse parts of the vector field produce excitations that have a normal probabilistic interpretation (i.e., the square of
their amplitude is the probability for their presence), then Lorentz invariance implies that the longitudinal parts produce excitations that are, in the jargon of quantum theory, ghosts. That is to
say, the square of their amplitudes is minus the probability for their presence, so that when we contemplate their production we are confronted with the specter of negative probabilities, which on
the face of it are senseless. Gauge invariance saves the day by insuring that the longitudinal modes decouple, i.e. that transition amplitudes to excite such modes actually vanish. Thus gauge
invariance is required, in order to insure that no physical process is assigned a negative probability. Yang-Mills Theory In, Beyond, and Behind Observed Reality by Frank Wilczek
So what does this mean? What’s the point of having a local symmetry if we can just choose a gauge (in fact, we have to choose a gauge to do any computations) and the physics is the same? There are
two answers to this question. First, it is fair to say that gauge symmetries are a total fake. They are just redundancies of description and really do have no physically observable consequences. In
contrast, global symmetries are real features of nature with observable consequences. For example, global symmetries imply the existence of conserved charges, which we can test. So the first answer
is that we technically don’t need gauge symmetries at all. The second answer is that local symmetries make it much easier to do computations. You might wonder why we even bother introducing this
field Aµ which has this huge redundancy to it. Instead, why not just quantize the electric and magnetic fields, that is Fµν, itself? Well you could do that, but it turns out to be more of a pain than
using Aµ. To see that, first note that Fµν as a field does not propagate with the Lagrangian L = − 1 4 Fµν 2 . All the dynamics will be moved to the interactions. Moreover, if we include
interactions, either with a simple current AµJµ or with a scalar field φ ⋆Aµ∂µφ or with a fermion ψ¯γµAµψ, we see that they naturally involve Aµ. If we want to write these in terms of Fµν we have to
solve for Aµ in terms of Fµν and we will get some crazy non-local thing like Aν = ∂ν Fµν. Then we’d have to spend all our time showing that the theory is actually local and causal. It turns out to be
much easier to deal with a little redundancy so that we don’t have to check locality all the time. Another reason is that all of the physics of the electromagnetic field is not entirely contained in
Fµν. In fact there are global properties of Aµ that are not contained in Fµν but that can be measured. This is the Aharanov-Bohm effect, that you might remember from quantum mechanics. Thus we are
going to accept that using the field Aµ instead of Fµν is a necessary complication. So there’s no physics in gauge invariance but it makes it a lot easier to do field theory. The physical content is
what we saw in the previous section with the Lorentz transformation properties of spin 1 fields. http://isites.harvard.edu/fs/docs/icb.topic473482.files/08-gaugeinvariance.pdf
So what does this mean? What’s the point of having a local symmetry if we can just choose a gauge (in fact, we have to choose a gauge to do any computations) and the physics is the same? There are
two answers to this question. First, it is fair to say that gauge symmetries are a total fake. They are just redundancies of description and really do have no physically observable consequences. In
contrast, global symmetries are real features of nature with observable consequences. For example, global symmetries imply the existence of conserved charges, which we can test. So the first answer
is that we technically don’t need gauge symmetries at all. The second answer is that local symmetries make it much easier to do computations. You might wonder why we even bother introducing this
field Aµ which has this huge redundancy to it. Instead, why not just quantize the electric and magnetic fields, that is Fµν, itself? Well you could do that, but it turns out to be more of a pain than
using Aµ. To see that, first note that Fµν as a field does not propagate with the Lagrangian L = − 1 4 Fµν 2 . All the dynamics will be moved to the interactions. Moreover, if we include
interactions, either with a simple current AµJµ or with a scalar field φ ⋆Aµ∂µφ or with a fermion ψ¯γµAµψ, we see that they naturally involve Aµ. If we want to write these in terms of Fµν we have to
solve for Aµ in terms of Fµν and we will get some crazy non-local thing like Aν = ∂ν Fµν. Then we’d have to spend all our time showing that the theory is actually local and causal. It turns out to be
much easier to deal with a little redundancy so that we don’t have to check locality all the time. Another reason is that all of the physics of the electromagnetic field is not entirely contained in
Fµν. In fact there are global properties of Aµ that are not contained in Fµν but that can be measured. This is the Aharanov-Bohm effect, that you might remember from quantum mechanics. Thus we are
going to accept that using the field Aµ instead of Fµν is a necessary complication. So there’s no physics in gauge invariance but it makes it a lot easier to do field theory. The physical content is
what we saw in the previous section with the Lorentz transformation properties of spin 1 fields.
We might instead give a gauge-invariant interpretation, taking the physical state as specified completely by the gauge-invariant electric and magnetic field strengths. In this case, electromagnetism
is deterministic since the gauge invariance that threatened determinism is in effect washed away from the beginning. However, in the case of non-trivial spatial topologies, the gauge-invariant
interpretation runs into potential complications. The issue is that in this case there are other gauge invariants. So-called holonomies (or their traces, Wilson loops) – the line integral of the
gauge potential around closed loops in space – encode physically significant information about the global features of the gauge field. The problem is that these gauge invariants, being ascribed to
loops in space, are apparently non-local. But, coming full circle, providing a local description requires appeal to non-gauge-invariant entities such as the electromagnetic potential, whose very
reality is in question according to the received understanding. The context for this discussion is the interpretation of the well-known Aharonov–Bohm (A–B) effect “On continuous symmetries and the
foundations of modern physics” by CHRISTOPHER A. MARTIN
We might instead give a gauge-invariant interpretation, taking the physical state as specified completely by the gauge-invariant electric and magnetic field strengths. In this case, electromagnetism
is deterministic since the gauge invariance that threatened determinism is in effect washed away from the beginning. However, in the case of non-trivial spatial topologies, the gauge-invariant
interpretation runs into potential complications. The issue is that in this case there are other gauge invariants. So-called holonomies (or their traces, Wilson loops) – the line integral of the
gauge potential around closed loops in space – encode physically significant information about the global features of the gauge field. The problem is that these gauge invariants, being ascribed to
loops in space, are apparently non-local. But, coming full circle, providing a local description requires appeal to non-gauge-invariant entities such as the electromagnetic potential, whose very
reality is in question according to the received understanding. The context for this discussion is the interpretation of the well-known Aharonov–Bohm (A–B) effect
Since $\Lambda$ is a constant, however, this gauge transformation must be the same at all points in space-time; it is a global gauge transformation. So when we perform a rotation in the internal
space of $\phi$ at one point, through an angle $\Lambda$, we must perform the same rotation at all other points at the same time. If we take this physical interpretation seriously, we see that it is
impossible to fulfil, since it contradicts the letter and spirit of relativity, according to which there must be a minimum time delay equal to the time of light travel. To get round this problem we
simply abandon the requirement that $\Lambda$ is a constant, and write it as an arbitrary function of space-time, $\Lambda(x^\mu)$. This is called a local gauge transformation, since it clearly
differs from point to point. page 93 in Quantum Field Theory by Ryder
Since $\Lambda$ is a constant, however, this gauge transformation must be the same at all points in space-time; it is a global gauge transformation. So when we perform a rotation in the internal
space of $\phi$ at one point, through an angle $\Lambda$, we must perform the same rotation at all other points at the same time. If we take this physical interpretation seriously, we see that it is
impossible to fulfil, since it contradicts the letter and spirit of relativity, according to which there must be a minimum time delay equal to the time of light travel. To get round this problem we
simply abandon the requirement that $\Lambda$ is a constant, and write it as an arbitrary function of space-time, $\Lambda(x^\mu)$. This is called a local gauge transformation, since it clearly
differs from point to point.
First, the initial and all-important demand of local as opposed to global gauge invariance is anything but self-evident, and presumably it must be argued for on some basis. Historically, the
arguments surrounding the ‘demand’ as such are quite thin. The most prevalent form goes back to Yang and Mills’ remarks to the effect that ‘local’ symmetries are more in line with the idea of ‘local’
field theories. Arguments from a sort of locality, and especially those predicated specifically on the demands of STR (i.e. no communication-at-a-distance) (See for example Ryder (1996, p. 93)). ,
are somewhat suspect, however, and careful treading is needed. Most immediately, the requirement of locality in the STR sense – say, as given by the lightcone structure – does not map cleanly onto to
the global/local distinction figuring into the gauge argument – i.e. $G_r$ vs. $G_{\infty r}$ . Overall, the question of how ‘natural’, physically, this demand is, is not uncontentious. This is
especially so in light of the received view of gauge transformations which maintains that they have no physical significance or counterpart (more below). I will return briefly in the next section to
considering possible “On continuous symmetries and the foundations of modern physics” by CHRISTOPHER A. MARTIN
First, the initial and all-important demand of local as opposed to global gauge invariance is anything but self-evident, and presumably it must be argued for on some basis. Historically, the
arguments surrounding the ‘demand’ as such are quite thin. The most prevalent form goes back to Yang and Mills’ remarks to the effect that ‘local’ symmetries are more in line with the idea of ‘local’
field theories. Arguments from a sort of locality, and especially those predicated specifically on the demands of STR (i.e. no communication-at-a-distance) (See for example Ryder (1996, p. 93)). ,
are somewhat suspect, however, and careful treading is needed. Most immediately, the requirement of locality in the STR sense – say, as given by the lightcone structure – does not map cleanly onto to
the global/local distinction figuring into the gauge argument – i.e. $G_r$ vs. $G_{\infty r}$ . Overall, the question of how ‘natural’, physically, this demand is, is not uncontentious. This is
especially so in light of the received view of gauge transformations which maintains that they have no physical significance or counterpart (more below). I will return briefly in the next section to
considering possible
However, using Noether's second theorem we can derive relations between our equations of motion, that are known as Bianchi identities.
gauge symmetries aren’t real symmetries: they are merely redundancies in our description of the system.David Tong
[Gauge symmetry], thinking about it as a symmetry is a bad idea, thinking about it as being broken is a bad idea.
The problem with gauge symmetry is that it is not a symmetry in the sense of quantum mechanics. A symmetry is the invariance of the Hamiltonian under transformations of quantum states, which are
elements of a Hilbert space. Gauge symmetry is not a symmetry because the corresponding transformation does not change the quantum states. Gauge symmetry acts trivially on the Hilbert space and does
not relate physically distinct states. A gauge transformation is like a book by James Joyce: it seems that something is going on, but nothing really happens. Gauge symmetry is the statement that
certain degrees of freedom do not exist in the theory. This is why gauge symmetry corresponds only to as a redundancy of the theory description. The non-symmetry nature of gauge symmetry explains why
gauge symmetry, unlike global symmetry, cannot be broken by adding local operators to the action: gauge symmetry is exact at all scales. The only way to “break” gauge symmetry is adding to the theory
the missing degrees of freedom, but this operation is not simply a deformation of the theory (as the case of adding local operators to an action with global symmetry) but corresponds to considering
an altogether different theory. The non-symmetry nature of gauge symmetry also explains trivially the physical content of the Higgs theorem. For a spontaneously-broken global symmetry, an infinite
number of vacuum states are related by the symmetry transformation. This leads to the massless modes dictated by the Goldstone theorem. In a spontaneously-broken gauge symmetry, there is a single
physical vacuum and thus there are no massless Goldstones. Gauge symmetry does not provide an exception to the Goldstone theorem, simply because there is no symmetry to start with. For gauge
symmetry, the word ‘symmetry’ is a misnomer, much as ‘broken’ is a misnomer for spontaneously broken symmetry. But as long as the physical meaning is clear, any terminology is acceptable in human
language. The important aspect is that the mathematical language of gauge symmetry (both in the linear and non-linear versions) is extremely pow- erful in physics and permeates the Standard Model,
general relativity, and many systems in condensed matter. As the redundancy of degrees of freedom is mathematically described by the same group theory used for quantum symmetries, the use of the word
‘symmetry’ seems particularly forgivable. Does this necessarily make gauge symmetry a fundamental element in the UV? The property of gauge symmetry of being – by construction – valid at all energy
scales may naively suggest that gauge symmetry must be an ingredient of any UV theory from which the Standard Model and general relativity are derived. On the contrary, many examples have been
constructed – from duality to condensed-matter systems – where gauge symmetry is not fundamental, but only an emergent property of the effective theory [41]. Gauge symmetry could emerge in the IR,
without being present in the UV theory. If this is the case, gauge symmetry is not the key that will unlock the mysteries of nature at the most fundamental level. The concept of symmetry has given
much to particle physics, but it could be that it is running out of fuel and that, in the post-naturalness era, new concepts will replace symmetry as guiding principles.
But there are several reasons not to accept this view. First of all terminology. When we say gauge symmetry, this is really a misnomer. It's a misnomer because in physics gauge symmetry is not a
symmetry. It is not a symmetry of anything. Symmetry is a set of transformations that act on physical observables. They act on the Hilbert space. The Hilbert space is always gauge invariant. So the
gauge symmetry doesn't even act on the Hilbert space. So it's not a symmetry of anything. […] Second, gauge symmetry can be made to look trivial. So, I'll give one trivial example and then I'll make
it more elaborate… [explains the Stückelberg mechanism, where one introduces a Stückelberg field to make a non U(1) gauge invariant Lagrangian, gauge invariant] This is almost like a fake… This gauge
symmetry is what we would call emergent, except that in this case it is completely trivial. The second thing which is wrong about gauge symmetry, which suggests that it's not fundamental is that, it
started in condensed matter physics, people talked about spontaneous symmetry breaking. That was crucial in the context of superconductivity and superfluidity and so forth. And the recent Nobel price
in physics was also associated with spontaneous gauge symmetry breaking. That of Higgs, and Englert. This is all very nice and physicists love to talk about spontaneous symmetry breaking, but this is
a bit too naive. First of all I've already emphasized that a gauge symmetry is not a symmetry. And since it is not a symmetry, how could it possibly be broken. You can break a symmetry that exists,
but you cannot break a symmetry that does not exist. Second, the phenomenon of spontaneous symmetry breaking is often associated with the fact that the system goes to infinity. Concretely in quantum
mechanics, you never have symmetry breaking. It is only in quantum field theory or statistical mechanics, where we have volume going to infinity we have an infinite number of degrees of freedom and
there we have this phenomenon of spontaneous symmetry breaking. That's not true for gauge theories. For gauge theories, we have a lot of symmetry. At every point of space we have a separate symmetry.
But the number of degrees of freedom that transform under a given symmetry transformation is always finite. Nothing goes off to infinity. So the gauge symmetry cannot be spontaneously broken. The
ground state is always unique. Or if you wish, all these would-be separate ground states are all related to each other by a gauge transformation. […] I said that gauge symmetry cannot be ultimate
symmetry because it's so big, there is a separate transformation at every point in space. So the breaking of a gauge theory cannot happen, I can use a phrase from the financial crisis in 2008 that a
gauge symmetry is so big, it's too big to fail. Duality and emergent gauge symmetry - Nathan Seiberg
But there are several reasons not to accept this view. First of all terminology. When we say gauge symmetry, this is really a misnomer. It's a misnomer because in physics gauge symmetry is not a
symmetry. It is not a symmetry of anything. Symmetry is a set of transformations that act on physical observables. They act on the Hilbert space. The Hilbert space is always gauge invariant. So the
gauge symmetry doesn't even act on the Hilbert space. So it's not a symmetry of anything. […] Second, gauge symmetry can be made to look trivial. So, I'll give one trivial example and then I'll make
it more elaborate… [explains the Stückelberg mechanism, where one introduces a Stückelberg field to make a non U(1) gauge invariant Lagrangian, gauge invariant] This is almost like a fake… This gauge
symmetry is what we would call emergent, except that in this case it is completely trivial. The second thing which is wrong about gauge symmetry, which suggests that it's not fundamental is that, it
started in condensed matter physics, people talked about spontaneous symmetry breaking. That was crucial in the context of superconductivity and superfluidity and so forth. And the recent Nobel price
in physics was also associated with spontaneous gauge symmetry breaking. That of Higgs, and Englert. This is all very nice and physicists love to talk about spontaneous symmetry breaking, but this is
a bit too naive. First of all I've already emphasized that a gauge symmetry is not a symmetry. And since it is not a symmetry, how could it possibly be broken. You can break a symmetry that exists,
but you cannot break a symmetry that does not exist. Second, the phenomenon of spontaneous symmetry breaking is often associated with the fact that the system goes to infinity. Concretely in quantum
mechanics, you never have symmetry breaking. It is only in quantum field theory or statistical mechanics, where we have volume going to infinity we have an infinite number of degrees of freedom and
there we have this phenomenon of spontaneous symmetry breaking. That's not true for gauge theories. For gauge theories, we have a lot of symmetry. At every point of space we have a separate symmetry.
But the number of degrees of freedom that transform under a given symmetry transformation is always finite. Nothing goes off to infinity. So the gauge symmetry cannot be spontaneously broken. The
ground state is always unique. Or if you wish, all these would-be separate ground states are all related to each other by a gauge transformation. […] I said that gauge symmetry cannot be ultimate
symmetry because it's so big, there is a separate transformation at every point in space. So the breaking of a gauge theory cannot happen, I can use a phrase from the financial crisis in 2008 that a
gauge symmetry is so big, it's too big to fail.
See also Seiberg's slides starting at page 30 here http://research.ipmu.jp/seminar/sysimg/seminar/1607.pdf:
Gauge symmetry is deep •Largest symmetry (a group for each point in spacetime) •Useful in making the theory manifestly Lorentz invariant, unitary and local (and hence causal) But •Because of Gauss
law the Hilbert space is gauge invariant.( More precisely, it is invariant under small gauge transformation; large gauge transformations are central.) •Hence:gauge symmetry is not asymmetry. • It
does not act on anything. • A better phrase is gauge redundancy. Gauge symmetries cannot break •Not a symmetry and hence cannot break •For spontaneous symmetry breaking we need an infinite number of
degrees of freedom transforming under the symmetry. Not here. •This is the deep reason there is no massless Nambu-Goldstone boson when gauge symmetries are “broken.” Gauge symmetries cannot break For
weakly coupled systems (e.g. Landau-Ginsburg theory of superconductivity, or the weak interactions) the language of spontaneous gauge symmetry breaking is appropriate and extremely useful
[Stueckelberg,Anderson,Brout, Englert,Higgs]. Global symmetries can emerge as accidental symmetries at long distance. Then they are approximate. Exact gauge symmetries can be emergent. Examples of
emergent gauge symmetry…
•Useful in making the theory manifestly Lorentz invariant, unitary and local (and hence causal)
•Because of Gauss law the Hilbert space is gauge invariant.( More precisely, it is invariant under small gauge transformation; large gauge transformations are central.)
•For spontaneous symmetry breaking we need an infinite number of degrees of freedom transforming under the symmetry. Not here.
•This is the deep reason there is no massless Nambu-Goldstone boson when gauge symmetries are “broken.”
Gauge symmetries cannot break For weakly coupled systems (e.g. Landau-Ginsburg theory of superconductivity, or the weak interactions) the language of spontaneous gauge symmetry breaking is
appropriate and extremely useful[Stueckelberg,Anderson,Brout, Englert,Higgs].
Global symmetries can emerge as accidental symmetries at long distance. Then they are approximate. Exact gauge symmetries can be emergent.
Gauge symmetries are properly to be thought of as not being symmetries at all, but rather redundancies in our description of the system 1. The true configuration space of a (3 + 1)- dimensional gauge
theory is the quotient $\mathcal{A}^3/\mathcal{G}^3$ of gauge potentials in $A_0=0$ gauge modulo three-dimensional gauge transformations. When gauge degrees of freedom become anomalous, we find that
they are not redundant after all. Hamiltonian Interpretation of Anomalies by Philip Nelson and Luis Alvarez-Gaume
Gauge symmetries are properly to be thought of as not being symmetries at all, but rather redundancies in our description of the system 1. The true configuration space of a (3 + 1)- dimensional gauge
theory is the quotient $\mathcal{A}^3/\mathcal{G}^3$ of gauge potentials in $A_0=0$ gauge modulo three-dimensional gauge transformations. When gauge degrees of freedom become anomalous, we find that
they are not redundant after all.
Hamiltonian Interpretation of Anomalies by Philip Nelson and Luis Alvarez-Gaume
From the modern point of view, then, gauge symmetry is merely a useful redundancy for describing the physics of interacting massless particle of spin 1 or 2, tied to the specific formalism of Feynman
diagrams, that makes locality and unitarity as manifest as possible.
Gauge invariance is not physical. It is not observable and is not a symmetry of nature. Global symmetries are physical, since they have physical consequences, namely conservation of charge. That is,
we measure the total charge in a region, and if nothing leaves that region, whenever we measure it again the total charge will be exactly the same. There is no such thing that you can actually
measure associated with gauge invariance. We introduce gauge invariance to have a local description of massless spin-1 particles. The existence of these particles, with only two polarizations, is
physical, but the gauge invariance is merely a redundancy of description we introduce to be able to describe the theory with a local Lagrangian. A few examples may help drive this point home. First
of all, an easy way to see that gauge invariance is not physical is that we can choose any gauge, and the physics is going to be exactly the same. In fact, we have to choose a gauge to do any
computations. Therefore, there cannot be any physics associated with this artificial symmetry. Quantum Field Theory and the Standard Model by Matthew Schwartz
Gauge invariance is not physical. It is not observable and is not a symmetry of nature. Global symmetries are physical, since they have physical consequences, namely conservation of charge. That is,
we measure the total charge in a region, and if nothing leaves that region, whenever we measure it again the total charge will be exactly the same. There is no such thing that you can actually
measure associated with gauge invariance. We introduce gauge invariance to have a local description of massless spin-1 particles. The existence of these particles, with only two polarizations, is
physical, but the gauge invariance is merely a redundancy of description we introduce to be able to describe the theory with a local Lagrangian. A few examples may help drive this point home. First
of all, an easy way to see that gauge invariance is not physical is that we can choose any gauge, and the physics is going to be exactly the same. In fact, we have to choose a gauge to do any
computations. Therefore, there cannot be any physics associated with this artificial symmetry.
Quantum Field Theory and the Standard Model by Matthew Schwartz
Herrmann Weyl was one of the first who took symmetry ideas to the next level. He believed in the power of symmetry constraints and tried to derive electromagnetism in 1918 from invariance under local
changes of length scale [1]. Nowadays, this is called scale invariance. For the invariance under changes of length scale, the name gauge symmetry is certainly appropriate. If one would change the
object one used to define the length of a meter, at this time the Prototype Metre Bar, this would be a change of gauge and hence a change of length scale.
Weyl's attempt failed, but he was on the right track. Soon after he discovered the correct symmetry that enables the derivation of the correct theory of electromagnetism and the name stuck, despite
making little sense in the new context.
The original paper is: Hermann Weyl, Raum, Zeit, Materie: Vorlesungen über die Allgemeine Relativitätstheorie, Springer Berlin Heidelberg 1923
A great summary of the history of gauge symmetry can be found in "On continuous symmetries and the foundations of modern physics" by Christopher Martin
Lochlainn O'Raifeartaigh, The Dawning of Gauge Theory, Princeton University Press (1997)
Lochlainn O'Raifeartaigh, Norbert Straumann, Gauge Theory: Historical Origins and Some Modern Developments Rev. Mod. Phys. 72, 1-23 (2000).
Norbert Straumann, Early History of Gauge Theories and Weak Interactions (arXiv:hep-ph/9609230)
Norbert Straumann, Gauge principle and QED, talk at PHOTON2005, Warsaw (2005) (arXiv:hep-ph/0509116)
In 1932, Werner Heisenberg suggested the possibility that the known nucle- ons (the proton and the neutron) were, in fact, just two different “states” of the same particle and proposed a mathematical
device for modeling this so-called isotopic spin state of a nucleon. Just as the phase of a charged particle is represented by a complex number of modulus 1 and phase changes are accomplished by the
action of U (1) on S 1 (rotation) so the isotopic spin of a nucleon is represented by a pair of complex numbers whose squared moduli sum to 1 and changes in the isotopic spin state are accomplished
by an action of SU (2) on S 3 . In 1954, C. N. Yang and R. L. Mills set about constructing a theory of isotopic spin that was strictly analogous to classical electromagnetic theory. They were led to
consider matrix-valued potential functions (denoted B μ in [YM]) and corresponding fields (F μν in [YM]) constructed from the derivatives of the potential functions. The underlying physical
assumption of the theory (gauge invariance) was that, when electromagnetic effects can be neglected, interactions between nucleons should be invariant under arbitrary and independent “rotation” of
the isotopic spin state at each spacetime point. This is entirely analogous to the invariance of classical electromagnetic interactions under arbitrary phase changes (see Chapter 0) and has the
effect of dictating the transformation properties of the potential functions B μ under a change of gauge and suggesting the appropriate combination of the B μ and their derivatives to act as the
field F μν .Topology, Geometry and Gauge Fields: Foundations by Naber | {"url":"https://physicstravelguide.com/advanced_tools/gauge_symmetry","timestamp":"2024-11-12T00:31:16Z","content_type":"text/html","content_length":"145621","record_id":"<urn:uuid:324df2ca-59a4-499d-90cf-37f6341bd8e7>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00510.warc.gz"} |
Algebraic Statistics
Algebraic statistics brings together ideas from algebraic geometry, commutative algebra, and combinatorics to address problems in statistics and its applications. Computer algebra provides powerful
tools for the study of algorithms and software. However, these tools are rarely prepared to address statistical challenges and therefore new algebraic results need often be developed. This way of
interplay between algebra and statistics fertilizes both disciplines. Algebraic statistics is a relatively new branch of mathematics that developed and changed rapidly over the last ten years. The
seminal work in this field was the paper of Diaconis and Sturmfels (1998) introducing the notion of Markov bases for toric statistical models and showing the connection to commutative algebra. Later
on, the connection between algebra and statistics spread to a number of different areas including parametric inference, phylogenetic invariants, and algebraic tools for maximum likelihood estimation.
These connection were highlighted in the celebrated book "Algebraic Statistics for Computational Biology" of Pachter and Sturmfels (2005) and subsequent publications. In this report, statistical
models for discrete data are viewed as solutions of systems of polynomial equations. This allows to treat statistical models for sequence alignment, hidden Markov models, and phylogenetic tree
models. These models are connected in the sense that if they are interpreted in the tropical algebra, the famous dynamic programming algorithms (Needleman-Wunsch, Viterbi, and Felsenstein) occur in a
natural manner. More generally, if the models are interpreted in a higher dimensional analogue of the tropical algebra, the polytope algebra, parametric versions of these dynamic programming
algorithms can be established. Markov bases allow to sample data in a given fibre using Markov chain Monte Carlo algorithms. In this way, Markov bases provide a means to increase the sample size and
make statistical tests in inferential statistics more reliable.We will calculate Markov bases using Groebner bases in commutative polynomial rings. The manuscript grew out of lectures on algebraic
statistics held for Master students of Computer Science at the Hamburg University of Technology. It appears that the first lecture held in the summer term 2008 was the first course of this kind in
Germany. The current manuscript is the basis of a four-hour introductory course. The use of computer algebra systems is at the heart of the course. Maple is employed for symbolic computations,
Singular for algebraic computations, and R for statistical computations. The monograph "Statistical Computing with R" from Maria L. Rizzi (2007) was an excellent source for implementing the R code in
this book. The second and third editions are just streamlined versions of the first one.
Sequence alignment, hidden Markov model, tree models, Gröbner bases, Markov bases, inferential statistics, computational statistics | {"url":"https://tore.tuhh.de/entities/publication/e03c7c2b-b4cb-422f-be72-9ef7dfe7b6c0","timestamp":"2024-11-09T19:46:57Z","content_type":"text/html","content_length":"962989","record_id":"<urn:uuid:34edfd33-dd55-4142-beb9-6c03c4ec49e7>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00796.warc.gz"} |
WILLBES BANDA - MATLAB Central
Last seen: 1 year ago |  Active since 2020
Followers: 0 Following: 0
of 295,098
19 Questions
0 Answers
of 20,174
0 Files
of 153,199
0 Problems
0 Solutions
Double interpolation using lookup tables in matlab
hi, i want to create a code that interpolates 2 dimensional data using the method shown in the attached pictures. To clarify, th...
4 years ago | 1 answer | 0
populating a structure array
Hi, i have three functions of which are working well. Now i want to populate my answers in a structure array using the spilt fun...
4 years ago | 0 answers | 0
Taking 1d and 2d values from structure then interpolate
Hi, i have to create a code from 2 structure arrays and the code must first check if my data is 1 dimensional or 2 dimensional. ...
4 years ago | 1 answer | 0
Extracting values from a cell then converting to a matrix
Hi, i have a cell array A, i want to remove column 1, row 1 and row 2 so that i am left with the other points to analyse. A...
4 years ago | 1 answer | 0
calling a variable in another function
Hi, i have 2 functions where i calculated a variable in function 1, now i want to use the variable in function 2 but it says var...
4 years ago | 2 answers | 0
creating a function that reads into the directory
Hi, i want to create a function that uses the file prefix, directory name and file type as inputs to pick up files that i want ...
4 years ago | 1 answer | 0
using linear interpolation to find coefficients
Hi, i want to create a function that interpolates linearly to give me the coefficients. As an example, the function must take in...
4 years ago | 2 answers | 0
converting a string to a matrix
hi, i have a string that i would like all its values to be stored in a matrix but when i use square brackets([ ]) to store it as...
4 years ago | 1 answer | 0
converting cell to struct with fields
Hi, i have 2 cell arrays that i want to convert to structure with fields. The first cell is A (10×1 cell array) DeliArg= {[-3...
4 years ago | 1 answer | 0
using the while loop to get the number of divisions
Hi, I am trying to add the while loop to my code such that the code should run while the difference between the present and prev...
4 years ago | 1 answer | 0
determining the number of divisions in riemann sums
Hi, since riemann sum is all about adding smaller divided rectangles below the graph. I developed a code which calculates the di...
4 years ago | 1 answer | 0
Typing a long and complicated equation
Hi, i am trying to integrate my function but when i type out my equation it gives me the wrong answer and i suspect that i may n...
4 years ago | 2 answers | 0
Integrating using sums(riemann sums)
Hi, i want to find the integral of function f without using the int function with the limits 5 and 17 . I want to find the riema...
4 years ago | 1 answer | 0
How to add a matrix vertically and nest an if loop
Hi, i have a vector OxygenT = [0 5] and i want to add [0 5] to the next row so that i get 0 5 ...
4 years ago | 1 answer | 0
Calculating Time Intervals Between measurements
Hi, i have a set of values stored in vector Humidity. I have to first determine frequency of Humidity then use the frequency to ...
4 years ago | 1 answer | 0
Deleting NaN`s from a set of values using if loop
I want to delete the NaN`s from A so that am left with numbers only and store them in the Absorption vector . When i execute/run...
4 years ago | 1 answer | 0
Extracting values that are greater than the threshold
the Alert vector below is a combination of time in hours and minutes(column 1 and 2 respectively) and corresponding oxygen value...
4 years ago | 2 answers | 0 | {"url":"https://ch.mathworks.com/matlabcentral/profile/authors/17717119","timestamp":"2024-11-07T09:44:44Z","content_type":"text/html","content_length":"87929","record_id":"<urn:uuid:af1b1371-3e55-4446-8963-b02f953c2b7f>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00429.warc.gz"} |
USACO 2015 December Contest, Silver
Problem 1. Switching on the Lights
Contest has ended.
Log in to allow submissions in analysis mode
Farmer John has recently built an enormous barn consisting of an $N \times N$ grid of rooms ($2 \leq N \leq 100$), numbered from $(1,1)$ up to $(N,N)$. Being somewhat afraid of the dark, Bessie the
cow wants to turn on the lights in as many rooms as possible.
Bessie starts in room $(1,1)$, the only room that is initially lit. In some rooms, she will find light switches that she can use to toggle the lights in other rooms; for example there might be a
switch in room $(1,1)$ that toggles the lights in room $(1,2)$. Bessie can only travel through lit rooms, and she can only move from a room $(x,y)$ to its four adjacent neighbors $(x-1,y)$, $(x+1,y)
$, $(x,y-1)$ and $(x,y+1)$ (or possibly fewer neighbors if this room is on the boundary of the grid).
Please determine the maximum number of rooms Bessie can illuminate.
INPUT FORMAT (file lightson.in):
The first line of input contains integers $N$ and $M$ ($1 \leq M \leq 20,000$).
The next $M$ lines each describe a single light switch with four integers $x$, $y$, $a$, $b$, that a switch in room $(x,y)$ can be used to toggle the lights in room $(a,b)$. Multiple switches may
exist in any room, and multiple switches may toggle the lights of any room.
OUTPUT FORMAT (file lightson.out):
A single line giving the maximum number of rooms Bessie can illuminate.
Here, Bessie can use the switch in $(1,1)$ to turn on lights in $(1,2)$ and $(1,3)$. She can then walk to $(1,3)$ and turn on the lights in $(2,1)$, from which she can turn on the lights in $(2,2)$.
The switch in $(2,3)$ is inaccessible to her, being in an unlit room. She can therefore illuminate at most 5 rooms.
Problem credits: Austin Bannister and Brian Dean
Contest has ended. No further submissions allowed. | {"url":"https://usaco.org/index.php?page=viewproblem2&cpid=570","timestamp":"2024-11-07T09:04:38Z","content_type":"text/html","content_length":"9006","record_id":"<urn:uuid:6eab2bc0-ad33-48b1-b1dd-3eb30fbc1ba9>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00282.warc.gz"} |
Teacher access
Request a demo account. We will help you get started with our digital learning environment.
Student access
Is your university not a partner? Get access to our courses via
Pass Your Math
independent of your university. See pricing and more.
Or visit
if jou are taking an OMPT exam. | {"url":"https://app.passyourmath.com/courses/theory/38/340/3906/en","timestamp":"2024-11-09T06:06:30Z","content_type":"text/html","content_length":"75716","record_id":"<urn:uuid:09b831d0-b9f4-4001-8abd-58d6362edfe4>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00118.warc.gz"} |
Free Group Study Rooms with Timer & Music | FiveableAP Calc Unit 3 Notes: Composite, Implicit, Inverse Functions | Fiveable
Calculus AB: 9β 13%Β
Calculus BC: 4β 7%
Unit 3 builds upon understanding from Unit 1 and Unit 2 of AP Calculus. The two main components of Calculus are differentiating and integrating. Of these two, differentiation is the focal point of
the first few units in Calculus.Β Β
This unit, in particular, emphasizes the analysis of functions, continued correct application and understanding of function notation, and the recognition of β innerβ and β outerβ functions
within composites.Β This unit makes up 9-13% of the Calculus AB exam and 4-7% of the Calculus BC exam.
This list will be a good place to start in terms of self assessing what you need to study or learn.Β As you are looking through this list write down what topics on this list you donβ t remember or
still need to learn.Β Go to the full study guide page then for this topic to get some more information!
Use when you are deriving a function that is a composition of functions:
Use when you have an equation that looks like this:
3xy + 2y = 50x
*The first step would be to differentiate with respect to x*
Use this formula to help find the derivative of the inverse function:
Memorize these to help you solve problems quickly and efficiently!
When you are selecting procedures for calculating derivatives decide if the function you are looking at is:
• A composition of functions
• Two functions multiplied together
• Two functions divided
• A function with exponents
A function with trig in it
When you get into the next few units you will need to take the next derivate; especially when you get into describing functions and finding where it is increasing, decreasing, concave up and concave
The notation for the derivatives are:
• fβ (x): First derivative
• fβ β (x): Second derivative
• fβ β β (x): Third derivativeΒ | {"url":"https://hours-zltil9zhf-thinkfiveable.vercel.app/ap-calc/unit-3/unit-3-overview-differentiation-composite-implicit-inverse-functions/study-guide/3LtjBRddXSB4fWE76cWh","timestamp":"2024-11-02T20:43:27Z","content_type":"text/html","content_length":"262418","record_id":"<urn:uuid:c203ea57-6a6b-4dff-b934-b4958538c560>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00015.warc.gz"} |
Lesson 3: Data Structures
Lesson 3: Data Structures
Students will learn that data can be represented in rectangular format.
1. DS journals (must be available during every lesson)
2. Stick Figures cutouts (see lesson 2)
variables numerical variables categorical variables rows columns rectangular or spreadsheet format variability
Essential Concepts:
Essential Concepts:
Variables record values that vary. By organizing data into rectangular format, we can easily see the characteristics of observations by reading across a row, or we can see the variability in a
variable by reading down the column. Computers can easily process data when it is rectangular format.
1. Remind students that they briefly learned what variables are during the previous lesson. Have students create their own definitions of the term “variables” and share their responses with their
teams. Select a few students in the class to share out their definitions and discuss what could be modified (if anything) to create a more complete definition.
2. Using the Stick Figure information from Lesson 2, allow the class to come up with a set of variable names that describe the different categories of information. Note that it is best when variable
names are short (one to three words). The variable names for the Stick Figures data could possibly be:
1. Name
2. Height
3. GPA
4. Shoe or Shoe Type
5. Sport
6. Friends or Number of Friends
3. Next, have a class discussion about how the values from “Shoe” are different than the values from “Height.”
1. The values from “Shoe” are either “sneakers” or “sandals”.
Note: Other terms for these shoes are acceptable – e.g., tennis shoes, flip flops, closed-toe, open-toe, etc.
2. The values from “Height” are 72, 68, 61, 66, 65, 61, 67, and 64.
4. Students should notice that the “Shoe” variable consists of categories or groupings, and the “Height” variable consists of numbers. Therefore, we can classify variables into two types:
categorical variables and numerical variables. Typically, categorical variables represent values that have words, while numerical variables represent values that have numbers.
Note: Categorical variables can sometimes be coded as numbers (e.g., “Gender” could have values 0 and 1, where 0=Male and 1=Female).
5. As a class, determine which variables from the Stick Figures data are numerical, and which variables are categorical. The students should create two lists in their DS journals similar to the ones
below (the correct classifications are in grey):
Numerical Categorical
1. Height 1. Name
2. GPA 2. Shoe
3. Friends 3. Sport
6. Explain that although we can understand many different representations of data (as evidenced by the posters from Lesson 2), computers are not as capable. Instead, we need to organize data in a
structured way so that a computer can read and interpret them.
7. One way to organize the data is to create a data table that consists of rows and columns. We can define this type of organization as rectangular format, or spreadsheet format.
8. Display a generic table on the board (see example below) and explain that the columns are the vertical portions of the table, while the rows are the horizontal portions. Another way to think of
it is that columns go from top to bottom, and rows go from left to right.
9. Ask students:
1. What should each row represent? Each row should represent one observation, or one stick figure person in this case.
2. What should each column represent? Each column should represent one variable. As you go down a column, all the values represent the same characteristic (e.g., Height).
10. On the board, draw the following table and have the students copy it into their DS journals (be sure to use variable names agreed upon by the class):
11. In teams, students should complete the data table using all 8 of the Stick Figures cards. Each row of the table should represent one person on a card.
12. Engage the class in a discussion with the following questions:
1. Do any of the people in the data have the same value for a given variable? In other words, does a value appear more than once in a column? Give two examples. Answers will vary. One example
could be that Dakota, Kamryn, Emerson, and London all wear sneakers. Another example could be that Charlie and Jessie are both 61 inches tall.
2. Do any of the people in the data have different values for a given variable? Absolutely. There are many instances of this in the data table.
13. Discuss the term variability. As in question (b) above, the values for each variable vary depending on which person we are observing. This shows that the data has variability, and the first step
in any investigation is to notice variability. We can see the relationship between the terms variable and variability. The word “variable” indicates that values vary.
Class Scribes:
One team of students will give a brief talk to discuss what they think the 3 most important topics of the day were. | {"url":"https://curriculum.idsucla.org/unit1/lesson3/","timestamp":"2024-11-11T08:23:07Z","content_type":"text/html","content_length":"78345","record_id":"<urn:uuid:627231b4-f4f2-4988-ae27-22b7b225a6a2>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00311.warc.gz"} |
ncl_gevtm: Constructs a GKS segment - Linux Manuals (3)
ncl_gevtm (3) - Linux Manuals
ncl_gevtm: Constructs a GKS segment
GEVTM (Evaluate transformation matrix) - Constructs a GKS segment transformation matrix starting from a given point, a shift vector, a rotation angle, and X and Y scale factors.
CALL GEVTM(X0,Y0,DX,DY,PHI,FX,FY,SW,MOUT)
#include <ncarg/gks.h>
void geval_tran_matrix(const Gpoint *point, const Gvec *shift, Gdouble angle, const Gvec *scale, Gcoord_switch coord_switch, Gtran_matrix tran_matrix);
X0 (Real, Input) - An X coordinate value for a fixed point to be used for the scaling and rotation parts of the output transformation. X is either in world coordinates or normalized device
coordinates depending on the setting of the argument SW described below.
Y0 (Real, Input) - A Y coordinate value for a fixed point to be used for the scaling and rotation parts of the output transformation. Y is either in world coordinates or normalized device
coordinates depending on the setting of the argument SW described below.
DX (Real, Input) - The X component of a shift vector to be used for the scaling part of the output transformation. DX is either in world coordinates or normalized device coordinates depending on the
setting of the argument SW described below.
DY (Real, Input) - The Y component of a shift vector to be used for the scaling part of the output transformation. DY is either in world coordinates or normalized device coordinates depending on the
setting of the argument SW described below.
PHI (Real, Input) - The rotation angle, in radians, to be used for the rotation part of the output transformation.
FX (Real, Input) - An X coordinate scale factor to be used in the scaling part of the output transformation.
FY (Real, Input) - A Y coordinate scale factor to be used in the scaling part of the output transformation.
SW (Integer, Input) - A coordinate switch to indicate whether the values for the arguments X0, Y0, DX, and DY (described above) are in world coordinates or normalized device coordinates. SW=0
indicates world coordinates and SW=1 indicates normalized device coordinates.
MOUT (Real, Output) - A 2x3 array that contains the GKS transformation matrix in a form that can be used as input to other GKS functions such as GSSGT.
If world coordinates are used, the shift vector and the fixed point are transformed by the current normalization transformation.
The order of the transformation operations as built into the output matrix is: scale (relative to the fixed point); rotate (relative to the fixed point; shift.
Elements MOUT(1,3) and MOUT(2,3) are in normalized device coordinates and the other elements of MOUT are unitless.
The following code
PI = 3.1415926
CALL GEVTM(.5,.5,.25,0.,45.*PI/180.,.5,1.5,0,TM)
would produce a transformation matrix in TM that would: scale the X coordinates by .5, scale the Y coordinates by 1.5 (relative to the fixed point of (.5,.5) ); rotate by 45 degrees (relative to the
fixed point (.5,.5) ); and shift by .25 in X and 0. in Y. The input values for the fixed point and shift vector are in world coordintes.
To use GKS routines, load the NCAR GKS-0A library ncarg_gks.
Copyright (C) 1987-2009
University Corporation for Atmospheric Research
The use of this Software is governed by a License Agreement.
Online: gactm, gclsg, gcrsg, gcsgwk, gdsg, gqopsg, gqsgus, gssgt., geval_tran_matrix | {"url":"https://www.systutorials.com/docs/linux/man/3-ncl_gevtm/","timestamp":"2024-11-09T00:39:06Z","content_type":"text/html","content_length":"10596","record_id":"<urn:uuid:e72937c3-aee3-42ea-9b08-f6923c1876b2>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00527.warc.gz"} |
TF = islocalmax(A) returns a logical array whose elements are 1 (true) when a local maximum is detected in the corresponding element of A.
You can use islocalmax functionality interactively by adding the Find Local Extrema task to a live script.
TF = islocalmax(A,dim) specifies the dimension of A to operate along. For example, islocalmax(A,2) finds local maximum of each row of a matrix A.
TF = islocalmax(___,Name,Value) specifies parameters in addition to any of the input argument combinations in previous syntaxes for finding local maxima using one or more name-value arguments. For
example, islocalmax(A,'SamplePoints',t) finds local maxima of A with respect to the time stamps contained in the time vector t.
[TF,P] = islocalmax(___) also returns the prominence corresponding to each element of A.
Local Maxima in Vector
Compute and plot the local maxima of a vector of data.
x = 1:100;
A = (1-cos(2*pi*0.01*x)).*sin(2*pi*0.15*x);
TF = islocalmax(A);
Maxima in Matrix Rows
Create a matrix of data, and compute the local maxima for each row.
A = 25*diag(ones(5,1)) + rand(5,5);
TF = islocalmax(A,2)
TF = 5x5 logical array
Separated Maxima
Compute the local maxima of a vector of data relative to the time stamps in the vector t. Use the MinSeparation parameter to compute maxima that are at least 45 minutes apart.
t = hours(linspace(0,3,15));
A = [2 4 6 4 3 7 5 6 5 10 4 -1 -3 -2 0];
TF = islocalmax(A,'MinSeparation',minutes(45),'SamplePoints',t);
Flat Maxima Regions
Specify a method for indicating consecutive maxima values.
Compute the local maxima of data that contains consecutive maxima values. Indicate the maximum of each flat region based on the first occurrence of that value.
x = 0:0.1:5;
A = min(0.75, sin(pi*x));
TF1 = islocalmax(A,'FlatSelection','first');
Indicate the maximum of each flat region with all occurrences of that value.
TF2 = islocalmax(A,'FlatSelection','all');
Prominent Maxima
Select maxima based on their prominence.
Compute the local maxima of a vector of data and their prominence, and then plot them with the data.
x = 1:100;
A = peaks(100);
A = A(50,:);
[TF1,P] = islocalmax(A);
axis tight
Compute only the most prominent maximum in the data by specifying a minimum prominence requirement.
TF2 = islocalmax(A,'MinProminence',2);
axis tight
Input Arguments
A — Input data
vector | matrix | multidimensional array | table | timetable
Input data, specified as a vector, matrix, multidimensional array, table, or timetable.
dim — Operating dimension
positive integer scalar
Operating dimension, specified as a positive integer scalar. If no value is specified, then the default is the first array dimension whose size does not equal 1.
Consider an m-by-n input matrix, A:
• islocalmax(A,1) computes local maxima according to the data in each column of A and returns an m-by-n matrix.
• islocalmax(A,2) computes local maxima according to the data in each row of A and returns an m-by-n matrix.
For table or timetable input data, dim is not supported and operation is along each table or timetable variable separately.
Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but
the order of the pairs does not matter.
Before R2021a, use commas to separate each name and value, and enclose Name in quotes.
Example: TF = islocalmax(A,'MinProminence',2)
Data Options
SamplePoints — Sample points
vector | table variable name | scalar | function handle | table vartype subscript
Sample points, specified as a vector of sample point values or one of the options in the following table when the input data is a table. The sample points represent the x-axis locations of the data,
and must be sorted and contain unique elements. Sample points do not need to be uniformly sampled. The vector [1 2 3 ...] is the default.
When the input data is a table, you can specify the sample points as a table variable using one of these options:
Indexing Scheme Examples
Variable name:
• "A" or 'A' — A variable named A
• A string scalar or character vector
Variable index:
• 3 — The third variable from the table
• An index number that refers to the location of a variable in the table
• [false false true] — The third variable
• A logical vector. Typically, this vector is the same length as the number of variables, but you can omit trailing 0 or false values
Function handle:
• @isnumeric — One variable containing numeric values
• A function handle that takes a table variable as input and returns a logical scalar
Variable type:
• vartype("numeric") — One variable containing numeric values
• A vartype subscript that selects one variable of a specified type
This name-value argument is not supported when the input data is a timetable. Timetables use the vector of row times as the sample points. To use different sample points, you must edit the timetable
so that the row times contain the desired sample points.
Example: islocalmax(A,'SamplePoints',0:0.1:10)
Example: islocalmax(T,'SamplePoints',"Var1")
DataVariables — Table variables to operate on
table variable name | scalar | vector | cell array | pattern | function handle | table vartype subscript
Table variables to operate on, specified as one of the options in this table. The DataVariables value indicates which variables of the input table to examine for local maxima. The data type
associated with the indicated variables must be numeric or logical.
The first output TF contains false for variables not specified by DataVariables unless the value of OutputFormat is 'tabular'.
Indexing Values to Specify Examples
• A string scalar or character vector • "A" or 'A' — A variable named A
Variable • A string array or cell array of character vectors • ["A" "B"] or {'A','B'} — Two variables named A and B
• A pattern object • "Var"+digitsPattern(1) — Variables named "Var" followed by a
single digit
• An index number that refers to the location of a variable in the table • 3 — The third variable from the table
Variable • A vector of numbers • [2 3] — The second and third variables from the table
• A logical vector. Typically, this vector is the same length as the number of variables, but you can omit trailing • [false false true] — The third variable
0 (false) values.
Function • A function handle that takes a table variable as input and returns a logical scalar • @isnumeric — All the variables containing numeric values
Variable type • A vartype subscript that selects variables of a specified type • vartype("numeric") — All the variables containing numeric
Example: islocalmax(T,'DataVariables',["Var1" "Var2" "Var4"])
OutputFormat — Output data type
'logical' (default) | 'tabular'
Output data type, specified as one of these values:
• 'logical' — For table or timetable input data, return the output TF as a logical array.
• 'tabular' — For table input data, return the output TF as a table. For timetable input data, return the output TF as a timetable.
For vector, matrix, or multidimensional array input data, OutputFormat is not supported.
Example: islocalmax(T,'OutputFormat','tabular')
Extrema Detection Options
MinProminence — Minimum prominence
0 (default) | nonnegative scalar
Minimum prominence, specified as a nonnegative scalar. islocalmax returns only local maxima whose prominence is at least the value specified.
ProminenceWindow — Prominence window
positive integer scalar | two-element vector of positive integers | positive duration scalar | two-element vector of positive durations
Prominence window, specified as a positive integer scalar, a two-element vector of positive integers, a positive duration scalar, or a two-element vector of positive durations. The value defines a
window of neighboring points for which to compute the prominence for each local maximum.
When the window value is a positive integer scalar k, then the window is centered about each local maximum and contains k-1 neighboring elements. If k is even, then the window is centered about the
current and previous elements. If a local maximum is within a flat region, then islocalmax treats the entire flat region as the center point of the window.
When the value is a two-element vector of positive integers[b f], then the window contains the local maximum, b elements backward, and f elements forward. If a local maximum is within a flat region,
then the window starts b elements before the first point of the region and ends f elements after the last point of the region.
When the input data is a timetable or SamplePoints is specified as a datetime or duration vector, the window value must be of type duration, and the window is computed relative to the sample points.
FlatSelection — Flat region indicator
'center' (default) | 'first' | 'last' | 'all'
Flat region indicator for when a local maximum value is repeated consecutively, specified as one of these values:
• 'center' — Indicate only the center element of a flat region as the local maximum. The element of TF corresponding to the center of the flat is 1, and is 0 for the remaining flat elements.
• 'first' — Indicate only the first element of a flat region as the local maximum. The element of TF corresponding to the start of the flat is 1, and is 0 for the remaining flat elements.
• 'last' — Indicate only the last element of a flat region as the local maximum. The element of TF corresponding to the end of the flat is 1, and is 0 for the remaining flat elements.
• 'all' — Indicate all the elements of a flat region as the local maxima. The elements of TF corresponding to all parts of the flat are 1.
When using the MinSeparation or MaxNumExtrema name-value arguments, flat region points are jointly considered a single maximum point.
MinSeparation — Minimum separation
0 (default) | nonnegative scalar
Minimum separation between local maxima, specified as a nonnegative scalar. The separation value is defined in the same units as the sample points vector, which is [1 2 3 ...] by default. When the
separation value is greater than 0, islocalmax selects the largest local maximum and ignores all other local maxima within the specified separation. This process is repeated until there are no more
local maxima detected.
When the sample points vector has type datetime, the separation value must have type duration.
MaxNumExtrema — Maximum number of maxima
positive integer scalar
Maximum number of maxima, specified as a positive integer scalar. islocalmax finds no more than the specified number of the most prominent maxima, which is the length of the operating dimension by
Output Arguments
TF — Local maxima indicator
vector | matrix | multidimensional array | table | timetable
Local maxima indicator, returned as a vector, matrix, multidimensional array, table, or timetable.
TF is the same size as A unless the value of OutputFormat is 'tabular'. If the value of OutputFormat is 'tabular', then TF only has variables corresponding to the DataVariables specified.
Data Types: logical
P — Prominence
vector | matrix | multidimensional array | table | timetable
Prominence, returned as a vector, matrix, multidimensional array, table, or timetable.
• If P is a vector, matrix, or multidimensional array, P is the same size as A.
• If P is a table or timetable, P is the same height as A and only has variables corresponding to the DataVariables specified.
If the input data has a signed or unsigned integer type, then P is an unsigned integer.
More About
Prominence of Local Maximum
The prominence of a local maximum (or peak) measures how the peak stands out with respect to its height and location relative to other peaks.
To measure the prominence of a peak, first extend a horizontal line from the peak. Find where the line intersects the data on the left and on the right, which will either be another peak or the end
of the data. Mark these locations as the outer endpoints of the left and right intervals. Next, find the lowest valley in both intervals. Take the larger of these two valleys, and measure the
vertical distance from that valley to the peak. This distance is the prominence.
For a vector x, the largest prominence is at most max(x)-min(x).
Alternative Functionality
Live Editor Task
You can use islocalmax functionality interactively by adding the Find Local Extrema task to a live script.
Extended Capabilities
Tall Arrays
Calculate with arrays that have more rows than fit in memory.
The islocalmax function supports tall arrays with the following usage notes and limitations:
• Tall timetables are not supported.
• You must specify a value for the ProminenceWindow name-value argument.
• The MaxNumExtrema, MinSeparation, and SamplePoints name-value arguments are not supported.
• The value of DataVariables cannot be a function handle.
For more information, see Tall Arrays.
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
Usage notes and limitations:
• You must enable support for variable-size arrays.
• The ProminenceWindow name-value argument is not supported.
Thread-Based Environment
Run code in the background using MATLAB® backgroundPool or accelerate code with Parallel Computing Toolbox™ ThreadPool.
This function fully supports thread-based environments. For more information, see Run MATLAB Functions in Thread-Based Environment.
GPU Arrays
Accelerate code by running on a graphics processing unit (GPU) using Parallel Computing Toolbox™.
The islocalmax function fully supports GPU arrays. To run the function on a GPU, specify the input data as a gpuArray (Parallel Computing Toolbox). For more information, see Run MATLAB Functions on a
GPU (Parallel Computing Toolbox).
Version History
Introduced in R2017b
R2022a: Return table or timetable containing logical output
For table or timetable input data, return a tabular output TF instead of a logical array by setting the OutputFormat name-value argument to 'tabular'.
R2021b: Specify sample points as table variable
For table input data, specify the sample points as a table variable using the SamplePoints name-value argument.
See Also
Live Editor Tasks | {"url":"https://nl.mathworks.com/help/matlab/ref/islocalmax.html","timestamp":"2024-11-06T11:23:50Z","content_type":"text/html","content_length":"142998","record_id":"<urn:uuid:72853d7c-d004-4421-ad9d-72238bf556a5>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00044.warc.gz"} |
Statistics - (Residual|Error Term|Prediction error|Deviation) (e| )
<math>e = Y - \hat{Y}</math>
where in a regression
Variance and bias
The ingredients of prediction error are actually:
• bias
: the bias is how far off on the average the model is from the truth.
• and
. The variance is how much that the estimate varies around its average.
Bias and variance together gives us prediction error.
This difference can be expressed in term of variance and bias:
<math>e^2 = var(model) + var(chance) + bias</math>
• <math>var(model)</math> is the variance due to the training data set selected. (Reducible)
• <math>var(chance)</math> is the variance due to
(Not reducible)
• bias
is the average of all <math>\hat{Y}</math> over all training data set minus the true Y (Reducible)
As the flexibility (order in complexity) of f increases, its variance increases, and its bias decreases. So choosing the flexibility based on average test error amounts to a bias-variance trade-off.
See Statistics - Bias-variance trade-off (between overfitting and underfitting) | {"url":"https://datacadamia.com/data_mining/residual","timestamp":"2024-11-05T10:07:59Z","content_type":"text/html","content_length":"319219","record_id":"<urn:uuid:19abeddc-93ca-4677-b6a5-25fb9e87ea25>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00779.warc.gz"} |
General: Calculating Hyperfocal Distance and Depth of Field
On this page, I present how the hyperfocal distance (HFD) and depth of field (DOF) can be calculated based on formulae that I found on the DOFMaster Website. I am using Microsoft Excel for performing
the actual calculations.
On other pages, I present the results my calculations of the hyperfocal distance for the Sony RX100 M1, the Ricoh GR and the Leica X Vario as well as for depth of field for the Ricoh GR and the Leica
X Vario in various tabular formats.
See the notes on page A Small Glossary of Photography Terms regarding the validity of depth of field and hyperfocal distance calculations as well as of the circle of confusion, on which these
calculations are based.
The DOFMaster Website not only offers a depth of field (DOF) calculator but also lists formulae for calculating depth of field tables on your own. Below, I list the formulae for DOF and hyperfocal
distances that I use for my own calculations that are presented as tables in various places on this site.
Such calculations are, however, tedious, and error prone process. Therefore, please take the tables presented on this site and elsewhere with a grain of salt. Moreover, particularly for zoom lenses,
such tables are not too useful and practical...
The Circle of Confusion (CoC)
... might as well been regarded as a source of confusion...
The original definition of the circle of confusion was based o the resolving power of the human eye, which was assumed to be no smaller than one quarter of a millimeter in diameter on a piece of 8 x
10 paper 250 millimeters from the eye. Merklinger writes:
• The human eye is said to be capable of resolving a spot no smaller than one quarter of a millimeter in diameter on a piece of paper 250 millimeters from the eye. If this spot were on an 8 by 10
inch photograph made from a 35 mm negative, the enlargement factor used in making the print would have been about eight. Thus if spots smaller than one-quarter millimeter are unimportant in the
print, then spots smaller than one-thirty-second of a millimeter in diameter are unimportant in the negative. The usual standard used in depth-of-field calculations is to permit a
circle-of-confusion on the negative no larger than one-thirtieth of a millimeter in diameter. (From Merklinger)
As a result, most lenses show a DOF scale, provided they have one, that is based on a CoC of 1/30 mm (0.03 to 0.035 mm). Since this is based on a full-format film format, the CoC size has been
adapted to different sensor sizes (based on the magnification factor that is much higher than 8, or on the sensor size).
Note: I hit on a discusion in the l-camera-forum debating that the DOF is different for digital and for analog (film) cameras, but for digital cameras still the same calculation is used as for film
cameras: It is uses a circle of confusion (coc) that is based on the sensor size (or film size for analog cameras). The argument put forth in an article, entitled Profondeur de champ et capteurs
numériques (English version: Depth of field and digital sensors), that was cited there is that not sensor size but photosite size should be the criterion for calculting the coc. According to this
article, the coc is simply the photosite size multiplied by a factor of 1.5 (this factor 1.5 can be adapted to specific needs). All in all, this leads to a much smaller coc and consequently also to a
considerably smaller DOF.
For the APS-C format (23.6 x 15.8 mm) and an image size of 4928×3264 pixels, we calculate the coc as follows:
• 23.6/4928 = 0.0048 mm => 0.0072 mm (15.8/3264 => 0.0073 mm)
Thus, for the DOF tables that I calculated, I should have used a coc of 0.0072 mm instead of 0.019 or 0.02 mm.
In the above-mentioned thread, FlashGordonPhotography wrote:
• "... DOF is directly related to the circle of confusion. On digital you need nine pixels to create a circle. There are some visual differences because of this."
Nine pixels for representing a circle means three pixels in one dimension. Thus, we would have to use the size of three photosites x 1.5 to calculate the circle of confusion. This way, we would
arrive at a coc of about 0.022 mm, that is, at about the middle between Leica's 0.02 mm and the usual 0.025 mm for APS-C sensors. Or should we use just the factor 3 instead of 1.5? I do not know at
the moment...
In short, using the second argument, we would arrive at where we already were at the beginning, and I'll therefore leave everything as it is for the moment... So, before I create new tables, I will
first wait how this issue is further discussed in the above-mentioned thread...
Considerations Before You Do Any Calculations
Before you do any calculations, you have to consider which focal length and aperture values are to be used in the calculations.
Focal Length
Please note that for focal length, you have to use the exact focal length, not the equivalent length. For example, use 18 mm instead of 28 mm, which is the "equivalent" focal length for 18 mm for an
APS-C camera.
Exact Aperture Values
Aperture values are a bit more tricky. When I compared my initial results wit those of DOFMaster, there were differences for some focal lengths. I found out that there were differences, when the
nominal f-numbers were rounded and did not correspond to the "exact" values, which are powers of roots of 2. DOFMaster points out: f-number is calculated by the definition N = 2^i/2, where i = 1, 2,
3, ... for f/1.4, f/2, f/2.8, ... - which is true for full f-numbers only... For one-half or one-third aperture steps, it gets even more complicated:
• Full f-numbers: 2 power 1/2 = sqrt(2) = 1.414213562
• One-half f-numbers: 2 power 1/4 = fourth root of 2 = 1.189207115
• One-third f-numbers: 2 power 1/6 = sixth root of 2 = 1.122462048
Here is a table of rounded "exact" apertures in full, one-half, and one-third steps:
Step Full One-Half One-Third
0 1.00 1.00 1.00
1 1.41 1.19 1.12
2 2.00 1.41 1.26
3 2.83 1.68 1.41
4 4.00 2.00 1.59
5 5.66 2.38 1.78
6 8.00 2.83 2.00
7 11.31 3.36 2.24
8 16.00 4.00 2.52
9 22.63 4.76 2.83
10 32.00 5.66 3.17
11 -- 6.73 3.56
12 -- 8.00 4.00
13 -- 9.51 4.49
14 -- 11.31 5.04
15 -- 13.45 5.66
16 -- 16.00 6.35
17 -- 19.03 7.13
18 -- 22.63 8.00
19 -- 26.91 8.98
20 -- 32.00 10.08
21 -- -- 11.31
22 -- -- 12.70
23 -- -- 14.25
24 -- -- 16.00
25 -- -- 17.96
26 -- -- 20.16
27 -- -- 22.63
28 -- -- 25.40
29 -- -- 28.51
30 -- -- 32.00
Table: Exact aperture values in full, one-half, and one-third steps rounded to two decimals
You get correct values for the hyperfocal distance (HFD) and depth of field (DOF) only if you use the exact f-numbers. In my tables (see elsewhere), I list both the rounded exact and the "nominal"
f-numbers for your convenience.
Note: I noticed that some hyperfocal distance and depth of field calculators use "standard" aperture values. A a consequence, these deliver values that differ from my calculations, which are based on
"exact" aperture values.
Calculating the Hyperfocal Distance
According to the DOFMaster Website and Wikipedia (definition 1 of the hyperfocal distance), hyperfocal distance H is calculated using to the following formula:
where f = focal length [mm], N = f-number, c = circle of confusion [mm]; H will also be in mm and has to be divided by 1,000 to deliver results in meters.
This formula can be approximated to:
The approximation corresponds to definition 2 of the hyperfocal distance on Wikipedia (it is also mentioned on the DOFMaster Website).
See also my glossary of terms for the two different definitions of the hyperfocal distance. In the following, I use the first formula (or definition 1) for calculating the depth of field.
Calculating the Depth of Field
The DOFMaster Website bases its formulae for the depth of field (DOF) limits on the hyperfocal distance H and calculates the near and the far DOF limits (near limit Dn, far limit Df) using the
following formulae:
• Dn = s*(H-f)/(H+s-2f)
• Df = s*(H-f)/(H-s)
where H = f*f/(N*c) + f = hyperfocal distance, according to definition 1 in Wikipedia; f = focal length [mm], N = f-number, c = circle of confusion [mm], s = distance set at the lens [mm]; Dn/f
values will be in mm and have to be divided by 1,000 to be in meters.
You can calculate the hyperfocal distance according to one of the two formulae above and then enter the value into the DOF formulae above. The DOF formulae based on the hyperfocal distance H are also
useful for calculating the near and far limits (= the DOF) for the extreme case that distance is set to infinity. This is helpful when adopting Merklinger's approach to estimating depth-of-field in
landscape photography. See below for the derivation of the results.
As with the hyperfocal distance, the circle of confusion plays (CoC) an important role in the DOF calculations. However, I transformed the formulae so that both, the formula for the near limit and
the one for the far limit look, very much the same*. This similarity helped me when using Excel for calculating the DOF tables, particularly when searching for errors.
However, I prefer to arrive at formulae, in which the near and far limits depend only focal length, f-number, circle of confusion, and distance. Therefore, the hyperfocal distance needs to be
substituted by its definition.
Substituting the hyperfocal distance H in the near/far limit formulae with its definition leads to:
• Dn = s*f*f/(N*c*(f*f/N*c)-f+s)
• Df = s*f*f/(N*c*(F*F+f-s)
After some transformations, I arrive as the following formulae for the near limit Dn and the far limit Df:
• Dn = s/(1+(s-f)*N*c/(f*f))
• Df = s/(1-(s-f)*N*c/(f*f))
In these formulae, the near and far limits Df/n depend only on three parameters (just like the hyperfocal distance...) plus the distance s set at the lens:
• f-number N (aperture),
• focal length f, and
• circle of confusion c (which can be regarded as an indicator of the sensor size)
• distance s set at the lens.
By the way, using the approximation for H (or definition 2), actually leads to more complex formulae and minor (or major) differences when the different calculations are compared.
Note: Like for the hyperfocal distance, exact f-numbers and real focal lengths have to be used in the calculations.
Approximation (Wikipedia)
Wikipedia uses approximations of the formulae for DOF and hyperfocal distance to calculate Dn and Df:
• Dn = s*H/(H+s)
• Df = s*H/(H-s) for s<H
For H, it uses the approximation:
Substituting H leads to the following approximate formulae:
• Dn = f*f*N*c*s/(f*f+N*c*s)
• Df = f*f*N*c*s/(f*f-N*c*s)
The above or these formulae lead to minor (or major) differences when the different calculations are compared.
Special Case: Lens Focused at Infinity
The DOFMaster formulae for the near and the far depth of field limits Dn and Df are useful for inspecting a special case, namely, distance set to infinity:
• Dn = s*(H-f)/(H+s-2f) - approximation (from Merklinger): s*H/(H+s)
• Df = s*(H-f)/(H-s) - approximation (from Merklinger): s*H/(H-s)
where f = focal length [mm], H = hyperfocal distance [mm], s = distance set at the lens [mm]; Dn/f values will be in mm and have to be divided by 1,000 to be in meters. Now we let s go towards
• Dn = s*H/(H+s-2f) - s*f/(H+s-2f) = H/(H/s+1-2f/s) - f/(H/s+1-2f/s) => s=>inf => Dn = H/(0+1-0) - f/(0+1-0) = H - f = H
• Df = s*H/(H-s) - s*f/(H-s) = H/(H/s-1) - f/(H/s-1) => s=>inf =>Df = H/(0-1) - f/(0-1) = -H + f = -H
The approximations lead to just Dn = H, Df = -H. A negative HFD means a distance beyond infinity, that is, sharpness is at infinity.
Thus, as Merklinger does (and Boone as well), we can state that, when we focus any lens at infinity, the near limit (based on the CoC criterion!) is the hyperfocal distance and the far limit is
If you do not believe my calculations, you can simply look at the photos below:
Set the distance to infinity and read the near distance or hyperfocal distance from the "near" mark Move the infinity mark to the "far" marker for the chosen f-number - here f/16 - to set the
- here f/16 hyperfocal distance
Set the distance to infinity and read the near distance or hyperfocal distance from the "near" mark Move the infinity mark to the "far" marker for the chosen f-number - here f/8 - to set the
- here f/8 hyperfocal distance
Photos: When distance is set to infinity, the "near limit" is just the hyperfocal distance, as the photos in the right column prove...
Why Do I Get/Find Different Values from the Ones Published on this Site?
You may have found a Website that offers a DOF/hyper focal distance calculator, or you bought an app that performs these and other calculations. Then you stumble across this Website to find out that
the DOF and hyper focal distance values published here differ from the ones that you calculate or read. This is, of course, confusing, and you may rightfully ask who is wrong. In the following, I
would like to propose a few answers to this puzzle:
• Hyperfocal distance can be calculated according to two formulae/definitions (see Willie). These differ in only one term - the focal length. Thus, the difference between both variants is small,
but it exists (it's just the focal length).
• When using formulae for DOF, you can calculate the hyperfocal distance first and then enter this value into the respective formulae for the near and far limit (DOF). Depending on which of the two
formulae you use, whether you round the value before entering it into the formulae for the near and far limits, you can arrive at slightly different values.
• Wikipedia lists simplified, approximate formulae for calculating DOF. If you use these you will also get slightly different DOF values…
• DOFMaster points out that you not only have to use the actual focal length but also the exact f-values in the calculations, which is a power of 2 (the exponent depends on the size of the steps).
I found a number of sites/apps that use the "nominal" f-numbers (those that are written on the lenses) for the calculations (you may nevertheless specify the nominal values…). For f-numbers that
are not plain powers of 2 (these are f/2, f/4, f/8, f/16, f/32), you get values that differ from the DOFMaster values. Therefore, when I find that a calculator more less replicated the results of
the DOFMaster formulae, I check this not only for an aperture value of f/4, but also for f/2.8 (which is 2*sqrt(2) = 2.828…).
All these differences are minor, but they exist and they can puzzle you. They can also add up…
In my opinion, every application or Website that wants to be trusted should disclose the formulae according to which it calculates its values. This allows its users/visitors to check the validity of
the results and to understand why there are differences. Actually, after fiddling around a lot with hyperfocal distance and DOF data, I feel a bit as in the saying "trust only those statistical data
that you faked yourself" (or so…). After looking at so many DOF/hyperfocal distance apps/sites, I tend to trust only my own calculations — and those that they are based on, that is, the DOFMaster
Website formulae and calculations/calculator.
ExifTool lists "Composite" tags at the end of the Exif data that it supplies (the same does the long-time Mac imaging tool GraphicConverter), among others, the hyperfocal distance (HFD). I therefore
assumed that such a thing as a "composite Exif tag" does exist and that some cameras write the hyperfocal distance into the Exif data. I was, however, wrong, as I found out on the ExifTool Website.
Author Phil Harvey writes:
• The values of the composite tags are Derived From the values of other tags. These are convenience tags which are calculated after all other information is extracted.
For calculating the hyperfocal distance, Harvey uses the FocalLength, Aperture, and CircleOfConfusion tags in the Exif data. This may explain, why I detected differences between the value in the
composite HFD tag and my own calculations based on the DOFMaster formulae. I assume, that Harvey does not use the exact aperture value, which is based on powers of two, but directly the one given in
the Exif data. Since the deviations can be in either direction, the calculated values may be sometimes too short and sometimes too wide (they should not be taken too strictly anyway...). I checked my
assumption but did not arrive at a conclusion yet...
The formulae for the near and far distance that I have derived may look even more complex to you than the original ones, but they offer the advantage that they differ only in one sign, which is very
helpful when using Excel. The formulae based on the approximations presented in Wikipedia do not look much simpler either, after the hyperfocal distance was substituted by its definition...
Note that you can also calculate the hyperfocal distance first (either according to definition 1 or 2) and then enter this value into the DOF formulae (the ones given here from DOFMaster or the
approximations listed in Wikipedia). I prepared a HFD/DOF calculator ins Excel using all these approaches in order to compare the approximations with the formulae that I use. Overall, the differences
are minor...
Alternative: Merklinger's Approach
In his article Hyperfocal distances and Merklinger's method in landscape photography, Kevin Boone discusses hyperfocal distance versus Harold M. Merklinger's approach to estimating depth-of-field in
landscape photography. He also discusses simpler approaches, such as using f/8 and setting distance to infinity.
I prepared a page, entitled Merklinger's Approach to Estimating Depth of Field, that covers Merklinger's approach in more detail. In addition, I calculated tables for using a special and simple case
of the method for most of my cameras. You will find these pages in the respective sections.
Hyperfocal Distance, DOF
• Kevin Boone: Hyperfocal distances and Merklinger's method in landscape photography (www.kevinboone.net/hyperfocal.html)
• Harold M. Merklinger: The INs and OUTs of Focus - An Alternative Way to Estimate Depth-of-Field and Sharpness in the Photographic Image (Internet edition; www.trenholm.org/hmmerk/TIAOOFe.pdf) | {"url":"http://www.waloszek.de/gen_dof_e.php","timestamp":"2024-11-14T17:11:31Z","content_type":"text/html","content_length":"36630","record_id":"<urn:uuid:a3f8c526-535a-4f77-a6a6-c564af520787>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00774.warc.gz"} |
Vision guidance system using dynamic edge detection
Vision guidance system using dynamic edge detection
A row vision system modifies automated operation of a vehicle based on edges detected between surfaces in the environment in which the vehicle travels. The vehicle may be a farming vehicle (e.g., a
tractor) that operates using automated steering to perform farming operations that track an edge formed by a row of field work completed next to the unworked field area. A row vision system may
access images of the field ahead of the tractor and apply models that identify surface types and detect edges between the identified surfaces (e.g., between worked and unworked ground). Using the
detected edges, the system determines navigation instructions that modify the automated steering (e.g., direction) to minimize the error between current and desired headings of the vehicle, enabling
the tractor to track the row of crops, edge of field, or edge of field work completed.
Latest DEERE & COMPANY Patents:
This disclosure relates generally to a detection system for vehicles, and more specifically to detecting an edge between surfaces to control the operation (e.g., automated steering) of a farming
Automated operation of vehicles may depend on information that is not always reliably provided to the vehicles. For example, automated navigation often depends on Global Positioning System (GPS)
information that is wirelessly provided to an autonomous or semi-autonomous vehicle. The communication channel providing information for automated operation, however, is subject to influence by
factors inherent to the medium, such as fading or shadowing, or external factors, such as interference from transmitters on or near the same radio frequencies used by the communication channel.
Additionally, automated operation of a vehicle is dependent upon the accuracy with which the environment around the vehicle is understood (e.g., by a computer vision system managing the automated
operation). For example, navigation of the vehicle is improved as the accuracy with which objects or boundaries surrounding the vehicle, affecting where the vehicle can or should travel, is improved.
Systems in the art may depend fully on computer-determined classification of objects or boundaries in the environment, neglecting the increase in accuracy afforded by external input such as manually
identified objects or boundaries.
Automated operation may further require tremendous processing and power resources, which may be limited in certain devices such as on-vehicle embedded controllers. This limitation may cause delays in
the automated operation. While there is increasing incentive to leverage mobile devices for automated operation due to their commercial prevalence and convenience, mobile devices may similarly,
despite a potential improvement over the limitations of an embedded controller, cause delays in the automated operation due to their processing and power constraints. These delays may render the
automated operation insufficient or even detrimental to its operator.
A system for detecting edges between surfaces in an environment is described herein. An edge, or “row edge,” between two surfaces may be detected by a row vision system to modify the operation of a
vehicle (e.g., steering direction or speed). The row vision system may be used in a farming environment where various operations depend on the identification of an edge between surfaces such as soil
and crops. For example, a tractor may perform mowing using the row vision system that detects an edge between previously cut crop and uncut crop. The row vision system may provide an operator images
of the field ahead of the tractor, enabling the operator to identify a location within the images where there is a target edge of a field crop that the tractor should follow. Using the operator's
input, the row vision system may identify a set of candidate edges that are likely to include the target edge. In particular, the row vision system may limit the identification of candidate edges to
an area of the images around the operator-provided location. The row vision system may use a model (e.g., a machine learning model) to select an edge of the candidate edges and modifies the route of
the tractor based on the selected edge.
Accordingly, the row vision system may be reliant upon information such as images taken from a camera located at the vehicle, where this information is more reliably available than information
subject to wireless communication conditions (e.g., GPS signals). By using operator input to supplement the operation of the computer vision classification of the environment, the row vision system
may increase the accuracy of the classification (e.g., more accurately identifying an edge between surfaces). The row vision system improves the operation of devices with limited processing or power
resources (e.g., mobile devices) by decreasing the amount of processing required by the mobile devices. For example, by limiting the identification of candidate edges to a particular area within the
images captured by the vehicle's camera, the row vision system avoids performing unnecessary image processing on portions of the images that are unlikely to include the target edge indicated by an
In one embodiment, a row vision system accesses a set of images captured by a vehicle while navigating via automated steering through an area of different surface types. The set of images may include
images of a ground surface in front of the vehicle. The images may be displayed to an operator that is collocated with the vehicle or located remotely. The row vision system receives an input from
the operator, where the input represents a location within the set of images (e.g., where a target edge is between two surface types). The system identifies a set of candidate edges within an image
portion corresponding to the location within the images. For example, the image portion may be a region or bounding box centered at the location. Each candidate edge identified may correspond to a
candidate boundary between two different surface types. For example, one candidate edge may be a boundary between uncut crop and cut crop in front of the mowing tractor. The row vision system applies
an edge selection model to the set of candidate edges. The edge selection model may be configured to select an edge of the set of candidate edges based on the location within the set of images
represented by the received input. For example, the edge selection model may include a machine learning model trained to identify whether the candidate edge identified within the image is an actual
edge between two different surface types and a confidence score representing the accuracy level of the identification. The row vision system modifies a route being navigated by the vehicle based on
the selected candidate edge. For example, the steering wheel direction is changed to guide the vehicle towards the row of uncut crop for mowing to maintain a lateral offset distance as needed for the
mowing implement.
The set of candidate edges may be identified using an edge detection model corresponding to one or both of the two different surface types adjacent to a candidate edge. For example, the edge
detection model may be a machine-learned model trained on images of manually tagged boundaries between two surface types such as soil and one of a crop, grass, and pavement. The edge selection model
configured to select an edge of the set of candidate edges may be configured to weight the candidate edges at least in part based on a distance between each candidate and the user-input location
within the set of images. A first candidate edge may be weighted lower than a second candidate edge that is closer to the location represented by the user-input location. The edge selection model may
include a machine-learned model that is trained on images each with a set of candidate edges and a manually-selected edge of the set of candidate edges.
The vehicle may identify each of the two different surface types between which a candidate edge is located. The vehicle may select an edge selection model from a set of edge selection model based on
the identified surface types. The identified surface may include a type of crop, where images of the type of crop may be used to train the edge selection model. The vehicle may identify the set of
candidate edges, apply the edge selection model, or perform a combination thereof. Alternatively or additionally, a remote computing system communicatively coupled to the vehicle may identify the set
of candidate edges, apply the edge selection model, or perform a combination thereof. To modify the route being navigated by the vehicle, the row vision system may move the vehicle such that an edge
of a tool or instrument being pulled by the vehicle is aligned with the selected edge. Additionally sets of candidate edges may be iteratively identified as the vehicle captures additional images.
The edges may be iteratively selected from among the sets of candidate edges.
FIG. 1 is a block diagram of a system environment in which a vehicle operates, in accordance with at least one embodiment.
FIG. 2 is a block diagram of logical architecture for the edge detection for the vehicle of FIG. 1, in accordance with at least one embodiment.
FIG. 3 depicts a front-facing view from within a vehicle using edge detection, in accordance with at least one embodiment.
FIG. 4 depicts a graphical user interface (GUI) for edge detection, in accordance with at least one embodiment.
FIG. 5 depicts edge candidates provided within a GUI for edge detection, in accordance with at least one embodiment.
FIG. 6A shows a top view of a configuration for calibration of a camera for edge detection on a vehicle, in accordance with at least one embodiment.
FIG. 6B shows a side view of the configuration for calibration of FIG. 6A, in accordance with at least one embodiment.
FIG. 7 is a flowchart illustrating a process for modifying, based on edge detection, the operations of a vehicle, in accordance with at least one embodiment.
The figures depict various embodiments for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures
and methods illustrated herein may be employed without departing from the principles described herein.
System Architecture
FIG. 1 is a block diagram of a system environment 100 in which a vehicle 110 operates, in accordance with at least one embodiment. The system environment 100 includes a vehicle 110, a remote server
140, a database 150, and a network 160. The system environment 100 may have alternative configurations than shown in FIG. 1, including for example different, fewer, or additional components. For
example, the system environment 100 may include not include the network 160 when the vehicle 110 operates offline.
The vehicle 110 includes hardware and software modules configured to enable the vehicle 110 to perform tasks autonomously or semi-autonomously. The vehicle 110 may be a farming vehicle such as a
tractor, or any vehicle suitable for performing farming operations along an edge between two types of surfaces or an edge between variations within one surface. Farming operations may include mowing,
harvesting, spraying, tilling, etc. An example of a variation within one surface includes a soil surface with a type of crop planted and without crops planted. The vehicle may be a land-based or
aerial vehicle (e.g., a drone). The vehicle 110 includes hardware such as an embedded controller 111, a display 112, a steering motor controller 113, and a camera 114. The vehicle 110 includes
software such as a row vision system 120. As referred to herein, a “row” may be a portion of a surface adjacent to another surface or a variation within the surface. For example, a first row may be a
row of soil without crops planted that is adjacent to a second row of soil with crops planted.
The vehicle 110 may have alternative configurations than shown in FIG. 1, including for example different, fewer, or additional components. For example, the vehicle 110 may include a user device such
as a tablet or smartphone including the display 112 and the camera 114 and configured to execute the row vision system 120 software. The vehicle 110 may include a mount for the user device to be
installed or removed from the vehicle 110 when the vehicle 110 is not in operation. In another example, the vehicle 110 may not include the display 112, which may instead be located remotely for
remote control of an automated farming vehicle. Although not depicted, the vehicle 110 may include radio frequency (RF) hardware to enable the vehicle 110 to communicatively couple to the network 160
. The RF hardware may be included within the user device. Although the embedded controller 111, the display 112, the steering motor controller 113, and the camera 114 are shown as component separate
from the row vision system 120, one or more of these components may be included within the row vision system 120.
The embedded controller 111 enables communication between a processing device executing the row vision system 120 and the steering motor controller 113. The embedded controller 111 may enable this
communication using a Controller Area Network (CAN) bus, optical transceivers (connected from display) or digital pulse width modulation (PWM) electrical signals. The embedded controller 111 may
receive data generated by the row vision system 120 and generate corresponding instructions to the steering motor controller 113. For example, the row vision system 120 determines a distance between
a target edge and a reference point on the vehicle (e.g., the location of a GPS receiver on the vehicle or center point of rear axle). This distance may be referred to as a “desired guidance line
lateral error.” Additionally, the row vision system 120 determines a distance between a detected row edge 408 and a tracking target 406. This distance may be referred to as a “target lateral error.”
The embedded controller 111 may receive lateral error offset values (e.g., offset values of the desired guidance line lateral error and/or the target lateral error) for modifying the navigation of
the vehicle 110 as determined by the row vision system 120. The embedded controller 111 may be configured to generate, responsive to receiving the lateral error values, analog signal instructions to
transmit to the steering motor controller 113, which then modifies the movement of the vehicle 110.
In addition or as an alternative to receiving lateral error values, the embedded controller 111 may receive heading error values, geographic location information (e.g., GPS information), a steering
wheel speed value, or a steering wheel direction value. The heading error may be an angle made between a desired heading of the vehicle and the actual heading of the vehicle. In some embodiments, the
vehicle 110 may be configured to allow the row vision system 120 to modify the vehicle's motor operation without the embedded controller 111. For example, the row vision system 120 may be integrated
into a computing device that is fixed onboard the vehicle 110, where the computing device includes hardware and software functionality to implement the operations of the row vision system 120 and the
steering motor controller 113.
The display 112 provides an output for a graphical user interface (GUI) for the row vision system 120 to be displayed and an input for the operator of the vehicle 110 to control the row vision system
120. Although an operator is described herein, the described edge detection may be performed using a fully autonomous vehicle (i.e., without an operator). The user interface may be any suitable
interface, such as a keypad, keyboard, touch screen, touchpad, stylus input, voice recognition interface, or other interfaces for receiving user input. The display 112 may be provided as a
stand-alone device or integrated with other elements of the vehicle 110. The display 112 may be a display of a mobile device (e.g., a tablet). Although not shown, a speaker and/or a microphone may be
integrated with the vehicle 110 or as a component of a mobile device to further facilitate input and output for the row vision system 120.
The steering motor controller 113 regulates the steering motor of the vehicle 110 based on values (e.g., lateral error) determined by the row vision system 120. The steering motor controller 113 may
include a control loop mechanism that employs feedback to automate steering of the vehicle 110. For example, the steering motor controller 113 may include a proportional-integral-derivative (PID)
controller or any suitable control loop mechanism for automated steering. The steering motor controller 113 may receive instructions from the embedded controller 111, where the instructions may be in
the form of analog signals used to specify a particular direction or used to increase or decrease the speed at which the vehicle 110 steers in that particular direction.
The camera 114 captures images and/or video for the row vision system 120 to perform edge detection. The images may be taken before or during operation of the vehicle. Images captured by the camera
114 before operation may be used for calibration of the camera for edge detection. Calibration is described further in the description of the calibration module 121 and FIGS. 6 and 7. Images captured
by the camera 114 during operation may be used for edge detection and modifying the operation of the vehicle 110 based on the detected edges within the images. The camera 114 may be an RGB camera, 3D
stereo camera, mono camera, video camera, thermal camera, light detection and ranging (LiDAR) camera, any other suitable camera for detecting edges within captured images, or a combination thereof.
In some embodiments, the vehicle includes a LiDAR camera for detecting variations in surfaces (e.g., mounds within a surface) or other height differences among objects within the vehicle's
environment, which may be accounted for when detecting an edge within the surface. The camera 114 may be a standalone camera, integrated within the vehicle 110, or integrated within a processing
device executing the row vision system 120. The camera 114 is communicatively coupled to enable the row vision system 120 to access images captured. The camera 114 may be in a front-facing
configuration such that the images capture surfaces ahead of the vehicle. While reference is made to captured images for clarity throughout the specification, the operations described with respect to
images may be likewise applicable to videos captured by the camera 114.
The row vision system 120 detects edges between surfaces, which may be referred to herein as “row edges,” as depicted within images (e.g., of fields). The row vision system 120 may determine
instructions to change the operation of the vehicle 110. For example, the system may determine a direction and/or speed at which a steering wheel of the vehicle 110 is to be turned. The row vision
system 120 includes various software modules configured to detect edges within images or videos captured by the camera 114 and modify the operation of the vehicle 110 based on the detected edges. The
software modules include a calibration module 121, a model training engine 122, a user interface module 123, a navigation module 124, a hood detection module 125, and a row edge detection module 130.
The row vision system 120 may have alternative configurations than shown in FIG. 1, including for example different, fewer, or additional components. For example, the row vision system 120 may not
include the model training engine 122. In such an example, the training for machine learning models used in edge detection may be performed remotely rather than at the vehicle 110 to conserve
processing or power resources of the vehicle 110 or a user device executing the row vision system 120.
The calibration module 121 determines a position (e.g., a 3D coordinate position) of the camera 114 relative to a reference point in the vehicle 110 and/or the position relative to a reference point
on the ground to the camera 114. This determined position relative to the point in the vehicle 110 may be referred to as a “camera-3D pose estimation to vehicle.” The determined position relative to
the point on the ground may be referred to as a “camera 3D pose estimate to ground plane.” A “camera lateral offset” may be defined as the shortest distance from the camera 114 to a line running from
the front center to the rear center of the vehicle 110. The line may be the center of the vehicle or any line used for calibration. For example, a line may be applied (e.g., painted) to the hood of
the vehicle 110 for calibration and/or hood detection. The hood may be selected as a marker for continuous calibration when determining ground truth camera 3D pose, as vehicles (e.g., tractors) may
have a fixed axel on the chassis frame and the hood may move with the main, solid, rear axle even if isolators of a cab of the vehicle result in camera movement. The calibration module 121 may
perform calibration to check environmental factors that contribute to edge detection performed by the row vision system 120. The environmental factors may include lighting states (e.g., off and in
field), engine states (e.g., off and running), camera 3D pose (e.g., including camera lateral offset and camera pitch), and vibration allowances.
In some embodiments, the calibration module 121 may begin a calibration process by receiving confirmation that the vehicle 110 is in a proper state or location for calibration. For example, an
operator may manually steer the vehicle to a predetermined location and provide user input (e.g., via the display 112) that the vehicle 110 is properly located and/or to begin calibration. The
calibration module 121 receives images from the camera 114 depicting calibration markers in predetermined calibration configurations. Calibration markers may include tape, painted shapes, mats, or
any suitable object for marking one or more points and/or predetermined distances between the marked points. Calibration configurations are depicted in FIGS. 6 and 7.
The calibration module 121 may receive user-specified expected distances between calibration markers within images of calibration configurations. For example, the calibration module 121 may access
user-provided images of calibration configurations corresponding to respective camera 3D pose. The calibration module 121 may determine locations of calibration markers within images received from
the camera 114 and compare the determined locations to expected locations from the user-provided images. Based on the comparison, the calibration module 121 may determine the camera 3D pose and
recommend camera mounting adjustments required by the navigation module 124 for proper operation. In some embodiments, the calibration module 121 may continuously process received images
corresponding to a discrete range of camera 3D poses and apply interpolation to the image processing to determine a camera 3D pose estimate to the ground plane while estimating change in ground
objects to determine a candidate edge 3D position. The calibration module 121 may use the vehicle 110 as a calibration marker to continuously estimate camera 3D pose changes that may result in world
coordinate systems (e.g., ground plane) offsets. The calibration module 121 may provide the determined camera 3D pose for display at the display 112 via the user interface module 123.
In some embodiments, the calibration module 121 may determine instructions to adjust the camera 3D pose to an expected position. For example, a target camera pitch of twenty degrees may be used for
capturing an appropriate portion of the hood of the vehicle 110 in images for edge detection. The calibration module 121 may determine a difference between a current camera pitch and the target
camera pitch of twenty degrees. The calibration module 121 may generate a notification provided at the display 112 that includes the determined difference and instructions to move the camera 114 a
particular direction to minimize the difference. In some embodiments, the row vision system 120 may provide instructions to a controller that may automatically adjust the positioning of the camera
114. For example, the camera 114 may be part of a tablet installed within a motorized mount on the vehicle 110, the motorized mount communicatively coupled to the row vision system 120 via a
controller. The determined camera pitch difference determined by the calibration module 121 may be provided to the controller to operate the motorized mount and adjust the camera's position in the
vehicle 110. The motorized mount may also allow for improved calibration module 121 processing of inertial measurement unit (IMU) and camera data to determine camera 3D pose estimation to vehicle
while the vehicle 110 is not moving.
The model training engine 122 trains machine learning models for use in detecting edges (e.g., during farming operations). Detected edges include edges between surfaces, between variations within a
surface, or the edge of a hood of the vehicle. The model training engine 122 may train one or more of the machine learning models of the row edge detection module 130: the surface detection model 131
and the edge detection models 132 and 133. The model training engine 122 may use images depicting one or more surfaces or variations within a surface to train the models. For example, the model
training engine 122 may use an image depicting a roadside and a farming field to train the models.
The images may be labeled with one or more surface types or crop types as depicted in the image and/or the presence of an edge. The labeling may be manually labeled or determined automatically by the
model training engine 122 (e.g., using computer vision). Surfaces may include a ground surface (e.g., the surface on which the vehicle travels on or over), surfaces of objects such as the surface of
the hood of the vehicle 110. Ground surfaces may have various types such as “roadside” and “field,” may be characterized by objects on the surface such as whether crops are present, and/or
characterized by the state of the surface (e.g., dry or wet). In the previous example, the image depicting the roadside and the farming field may be labeled with labels for the corresponding surface
types of “roadside” and “field.” The image may be further or alternatively labeled to indicate an edge is present in the image. This label may be associated with the surface types that the edge
separates. The model training engine 122 may access the images used for training from the database 150.
The model training engine 122 may train a machine learning model of the row edge detection module 130 using images each with a set of candidate edges and a manually-selected edge of the set of
candidate edges. In some embodiments, the row vision system 120 determines a set of candidate edges within an image and provides the image with the candidate edges to a user for manual selection of a
target edge. The determination of the set of candidate edges is described further with respect to the row edge detection module 130. The row vision system 120 may receive the user's selection of the
target edge and use the image with the set of candidate edges and the manually selected edge to train a model (e.g., the edge detection models 132 or 133).
In some embodiments, the model training engine 122 may train a machine learning model based on a type of crop depicted within a surface identified within images captured by the camera 114. The model
training engine 122 may access images depicting one or more crops. The images may be manually labeled with a type of crop or the model training engine 122 may determine crop types with which to label
respective images. The model training engine 122 uses the labeled images to train a model. For example, the model training engine 122 uses images depicting soil without a crop and soil with lettuce
planted, where the image may be labeled with a surface type of “field” and a crop type of “lettuce.”
In some embodiments, the model training engine 122 trains a machine learning model in multiple stages. In a first stage, the model training engine 122 may use a first set of image data collected
across various farming environments (e.g., surface or crop types as depicted from various farms) to train the machine learning model. This generalized data may be labeled (e.g., by the model training
engine 122) with the corresponding surface type, crop type, or edge presence. In a second stage of training, the model training engine 122 may use data collected by the camera 114 to optimize the
models trained in the first stage to the environmental conditions associated with the vehicle 110. The model training engine 122 may re-train a machine learning model using the second training set
such that the machine learning model is customized to the vehicle 110 or the environment in which the vehicle 110 operates.
In addition or alternatively, the model training engine 122 may use user feedback to train the models in the second stage. For example, the user interface module 123 receives feedback provided by the
operator of the vehicle 110 using the display 112 that a machine learning model correctly or incorrectly identified an edge within the image captured by the camera 114. In some embodiments, the first
training set used to train that model may also be included in the second training set to further strengthen a relationship or association between data and identified objects during the second stage
of training. For example, if the received feedback indicated that the machine learning model correctly identified the edge, the first training set may be included within the second training set.
In some embodiments, the model training engine 122 uses metadata related to the feedback to re-train a machine learning model. For example, the model training engine 122 may determine the frequency
at which a user provides a user input instructing the row vision system 120 to detect an edge, and use the determined frequency to re-train the machine learning model. The model training engine 122
may use a threshold feedback frequency to determine a likelihood that the detected edge is accurate. That is, if an operator is frequently requesting an edge to be detected (e.g., over five times in
under a minute), the edge detected by the row edge detection module 130 is likely inaccurate and the model training engine 122 may adjust the training data such that data associated with the
inaccurate edge detected is given less weight or is not used in the future.
The user interface module 123 enables user input and system output for modifying the operation of the vehicle 110 using the row vision system 120. In some embodiments, the user interface module 123
provides a GUI for display at the display 112. Examples of GUIs that the user interface module 123 can provide are depicted in FIGS. 4 and 5. The user interface module 123 may provide images or
videos of the view ahead of the vehicle 110 during operation and receive a user input representative of a location within the images as displayed to an operator of the vehicle 110. In some
embodiments, the user interface module 123 may provide an operation status of the vehicle 110 at the display 112. For example, a GUI may display that the vehicle is currently engaged in automated
steering. The operation status may also include the current heading of the vehicle and/or the desired guidance line to which the vehicle is tracking.
The user interface module 123 may provide inputs for an operator to select a type of surface that the vehicle 110 is currently traveling over. In some embodiments, the surface may be automatically
determined by the row vision system 120. For example, the user interface module 123 may receive, from the surface detection model 131, a determined surface depicted within an image captured by the
camera 114 of the environment ahead of the vehicle 110. The user interface module 123 may then display the determined surface on a GUI at the display 112. Although the action of displaying is
referenced with respect to the output of the user interface module 123, the user interface module 123 may provide additional or alternative input/output mechanisms such as sound (e.g., using natural
language processing or predetermined utterances related to the vehicle's operations) or haptics (e.g., vibrating the steering wheel to confirm that a row edge has been identified and will cause the
steering to change).
The navigation module 124 may determine information describing the position and/or orientation of the vehicle 110 and generate instructions for modifying the operation of the vehicle 110 based on the
determined position and/or orientation information. This information may include a lateral and/or heading error of the vehicle 110. The navigation module 124 may provide a determined error to the
embedded controller 111 to modify a route being navigated by the vehicle 110. Modifying the route may include moving the vehicle 110 such that an edge of a tool or instrument being pulled by the
vehicle is aligned with an edge (e.g., between a row of crops and soil) detected by the row vision system 120.
The navigation module 124 may determine a lateral error of the vehicle 110 with respect to a desired guidance line. The navigation module 124 may receive a desired guidance line (e.g., the desired
guidance line 411 as shown in FIG. 4) as specified by a user. The desired guidance line may be a path on an automated routine or a target edge (e.g., the target edge 406 in FIG. 4) that an operator
specifies that the vehicle 110 (e.g., via the automated steering) should follow. The target lateral offset with respect to a target edge may be calculated based on the target edge and the hood of the
vehicle 110 as depicted within images captured by the camera 114. The target lateral offset may also depend on the camera lateral offset. In one example, the navigation module 124 determines, within
an image, lines formed by the target edge and a reference line associated with the hood of the vehicle 110 (e.g., painted on the hood). The navigation module 124 determines corresponding x-intercepts
of the determined lines. For example, with reference to FIG. 4, the navigation module 124 may determine the corresponding x-intercept of the detected row edge 408 inside the bounding box 407
projected coordinate system with tracking target 406 as origin or center point (0,0). The navigational module 124 may calculate target lateral error using Equation 1.
$Err lat = [ X edge 0 - X target ] * ( mm pixel ) Eqn ( 1 )$
Where Err[lat ]is the target lateral error, X[edge][0 ]is the x-intercept of the line formed by the target edge, X[target ]is the x-intercept of the line formed by the reference line associated with
a tracking target, and
$mm pixel$
is a ratio of millimeters captured by each pixel within the photo.
The navigation module 124 may determine a heading error of the vehicle 110 with respect to a desired heading. The navigation module 124 may receive a desired heading as specified by a user. The
desired heading may correspond to the desired guidance line specified by a user. To determine the heading error of the vehicle 110, the navigation module 124 may calculate the angle between the
desired heading and the current heading of the vehicle 110. The navigational module 124 may calculate heading error using Equation 2.
$Err Heading = [ ( M edge 0 - M target ) ] * ( mm pixel ) Eqn ( 2 )$
Where Err[Heading ]is the heading error, M[edge][0 ]is the slope of the line formed by the target edge, M[target ]is the slope of the line formed by the reference line associated with the tracking
target (e.g., target tracking heading 409 of FIG. 4), and
$mm pixel$
is a ratio of millimeters captured by each pixel within the photo.
The navigation module 124 may determine a steering wheel speed based on a determined error or determined distance to obstacles or end of rows as detected with object recognition computer vision. The
determined speed may be proportional to the determined error or obstacle distance. For example, the navigation module 124 may determine a first steering wheel speed corresponding to a first lateral
error and a second steering wheel speed corresponding to a subsequent, second lateral error, where the second speed is smaller than the first speed because the second lateral error is smaller than
the first lateral error. The navigation module 124 may access a mapping table of speeds to heading and/or lateral errors. The navigation module 124 may update the values of the mapping table based on
user feedback. For example, an operator of the vehicle 110 may begin manually steering the vehicle 110 after the steering motor controller 113 has modified the direction in which the vehicle is
traveling based on the speed determined by the navigation module 124. The vehicle 110 may provide an indication to the navigation module 124 that the automated steering was manually overridden, and
the navigation module 124 may modify an algorithm for selecting a speed (e.g., modifying a confidence score or weight associated with the selected speed).
The navigation module 124 may determine a steering wheel direction based on a determined error. For example, the navigation module 124 determines (e.g., using Equation 1) a positive or negative
offset and determines a corresponding direction (e.g., left or right) in which to direct the steering wheel.
The hood detection module 125 may detect the hood of the vehicle 110 within images captured by the camera 114. The hood detection module 125 may use an edge detection operation such as Canny edge
detection, Roberts filter, Sobel filter, Prewitt filter, or any suitable digital image processing technique for detecting edges. The hood detection module 125 may calculate a current heading line of
the vehicle 110 within the images using the detected hoods. The hood of the vehicle 110 may include a marker. In one example, the marker is a line painted across the center of the hood in the
direction that the vehicle is facing. The hood detection module 125 may detect the marker in the images subsequent to detecting the hood. This order of detection may save processing resources by
first determining if an image depicts the hood rather than detect the marker in an image that does not depict the hood. The hood detection module 125 may use the marker detected within an image to
determine a line that is represents current vehicle heading within the image. This detected line may be used by the navigation module 124 for lateral and/or heading error calculations.
The row edge detection module 130 detects row edges as depicted in images (e.g., of farming fields) and provides the detected edges within images to the navigation module for modification of the
operation of the vehicle 110. The row edge detection module 130 may implement models such as the surface detection model 131 and the edge detection models 132 and 133. The models may be machine
learning models. Machine learning models of the row edge detection module 130 may use various machine learning techniques such as linear support vector machine (linear SVM), boosting for other
algorithms (e.g., AdaBoost), neural networks, logistic regression, naïve Bayes, memory-based learning, random forests, bagged trees, decision trees, boosted trees, boosted stumps, or any suitable
supervised or unsupervised learning algorithm.
The row edge detection module 130 may implement additional, fewer, or different models than depicted in FIG. 1. For example, the row edge detection model 130 may be specialized to detect edges
between soil and one particular type of crop, which may render the use of more than one edge detection model as unnecessary. In this example, the operator may intend the vehicle 110 to be limited to
following edges between soil and lettuce; thus, one edge detection model corresponding to an edge between surface type “field” and crop type “lettuce” may be implemented by the row edge detection
module 130.
The row edge detection module 130 may access a set of images captured by the vehicle 110 while the vehicle is navigating via automated steering through an area of different surface types. The set of
images may include images of a ground surface in front of the vehicle 110. In some embodiments, the camera 114 may capture the images and provide the images to the row edge detection module 130. The
row edge detection module 130 may determine an edge within the set of images between the ground surface and a surface adjacent to the ground surface.
The row edge detection module 130 may identify a set of candidate edges with an image portion corresponding to the location within the set of images. Each candidate edge may correspond to a candidate
boundary between two different surface types. The row edge detection module 130 may determine the image portion based on user input provided through the user interface module 123. For example, a user
may select a point on an image (e.g., displayed on the display 112) including the ground surface, where the point is part of a line for the target edge. The row edge detection module 130 may
determine a bounding box (e.g., having dimension of pixels) centered on the user-selected point within the image. The row edge detection module 130 may then perform edge detection on the image
portion within the bounding box rather than the original image. This may conserve processing and/or power resources of the computing device executing the row vision system 120.
To identify the set of candidate edges within the image portion, the row edge detection module 130 may use an edge or line detection operation such as Canny edge detection or a Hough Transform. In
some embodiments, a combination of operations (e.g., Canny edge detection followed by a Hough Transform) may increase the accuracy of the edge detection. From the identified set of candidate edges,
the row edge detection module 130 may select a candidate edge that most accurately aligns with the edge between surfaces as depicted in an image captured by the camera 114. This selected candidate
edge may be referred to herein as a “best fit edge.”
In some embodiments, the set of candidate edges may be identified using an edge detection model corresponding to one or more surface types (e.g., soil and crop). The edge detection model may be used
in combination with an edge detection operator. For example, the row edge detection module 130 may apply a Hough Transform followed by the edge detection model to the image portion centered at the
operator-selected location within an image. The edge detection model applied may depend on the types of surfaces depicted within the image portion, which may be identified by a surface detection
model described herein. The result of the application may be a set of candidate edges, where each edge may have a likelihood of being a best fit edge above a predetermined threshold (e.g., a
confidence score of at least 80 out of 100). The row edge detection module 130 may apply one or more additional models to the resulting candidate edges to determine the best fit edge. Such models may
include a machine-learned model, statistical algorithms (e.g., linear regression), or some combination thereof.
The row edge detection module 130 may include, although not depicted, an edge selection model to select the best fit edge within the image portion. While the operations for selecting a best fit edge
may be described herein with reference to the models depicted in FIG. 1, an edge selection model may perform similar operations. The edge selection model may include one or more of the models
depicted. For example, the edge selection model may include the surface detection model 131 and one of the edge detection models 132 or 133. The vehicle 110 or a mobile device coupled to the vehicle
110 may identify the set of candidate edges and apply the edge selection model. The row vision system 120 may include a set of edge selection models. Each of the edge selection models may be
configured for a respective pair of surface types to select an edge among candidate edges (i.e., between the respective pair of surface types).
The surface detection model 131 detects surfaces present in images. Surface types include soil without crops, soil with crops, grass, pavement, sand, gravel, or any suitable material covering an
area. The surface types may be further characterized by states of the surface such as dry, wet, flat, sloped, etc. The surface detection model 131 may receive, as input, image data corresponding to
an image depicting at least one surface and identify one or more surface types within the image. The surface detection model 131 may be trained by the model training engine 122 using images of
surfaces that are labeled with the corresponding surface. The surface detection model 131 may be re-trained by the model training engine 122 using user feedback indicating the identification of a
surface was correct or incorrect and/or additional labeled images of surfaces (e.g., of the farming environment in which vehicle 110 operates). The row edge detection module 130 may use the surfaces
identified by the surface detection model 131 within an image to select an edge detection model. For example, the surface detection model 131 determines that soil and pavement type surfaces are
present in an image and the row edge detection module 130 selects an edge detection model trained to identify edges between soil and pavement type surfaces. The selected edge detection model may then
determine a best fit edge within the image.
An edge detection model (e.g., edge detection models 132 and 133) identifies an edge within an image. In particular, the identified edge may be a best fit edge among a set of candidate edges. The row
edge detection module 130 may apply one or more images and candidate edges associated with the images (e.g., the images may be annotated with candidate edges as shown in FIG. 5) to an edge detection
model. The edge detection model may determine confidence scores associated with respective identified edges. In some embodiments, an edge detection model may be trained by the model training engine
122 on images of manually tagged boundaries between two surfaces. For example, the training images may be manually tagged with boundaries between soil and one of a crop, grass, and pavement. The
boundaries may further be tagged with an indication of best fit or lack of best fit (e.g., positive and negative sample training).
In one example, the row edge detection module 130 applies a set of images and candidate edges to the edge detection model 132 to determine a best fit edge among the candidate edges between surface
types of soil and pavement. The edge detection model 132 may be trained by the model training engine 122 using training images depicting edges between soil and pavement. The edges depicted within the
training images may be labeled to indicate that the edge is the best fit edge or is not the best fit edge. The edge detection model 132 can determine, for each image applied by the row edge detection
module 130 depicting an edge of the candidate edges, that the image depicts a best fit edge or does not depict a best fit edge. In this example, the edge detection model 133 may be configured to
identify best fit edges between two different types of surfaces than soil and pavement (e.g., crops and soil, crops and pavement, a first type of crop and a second type of crop, etc.).
The row edge detection module 130 may determine confidence scores for one or more of the candidate edges using one or more models. The surface detection model 131 may determine a confidence score for
the identified surface types. In some embodiments, confidence scores associated with the identified surfaces may be used to determine which edge detection model to apply to an image or image portion
with candidate edges. For example, the surface detection model 131 may determine two surfaces depicted in an image portion are soil and pavement with 40% confidence and soil and a crop with 90%
confidence. The row edge detection module 130 may use the determined confidence scores to select an edge detection model for detecting edges between soil and a crop. In some embodiments, the row edge
detection module 130 may use one or more of the confidence scores determined by the surface detection model and edge detection model to determine a confidence score for a candidate edge. For example,
the confidence score determined by the surface detection model 131 is 90% for a pair of surface types and the confidence score determined by the edge detection model 132 is 95%. The confidence score
for the candidate edge may be a combination of the two (e.g., a weighted average of the two based where the weights correspond to the accuracy of the respective models).
In addition or as an alternative to the application of the models depicted in FIG. 1 to identify a best fit edge in the set of candidate edges, the row edge detection module 130 may apply one or more
algorithms to identify the best fit edge. A first algorithm may include determining, among the set of candidate edges, the candidate edge having the shortest distance from a user-selected point
within the image. In some embodiments, the edge detection model may be configured to weight the candidate edges at least in part based on a distance between each candidate edge and a user-selected
location, represented by input received (e.g., via a selection of the location at the display 112), within the set of images. With respect to the first algorithm, the largest weight may be applied to
the candidate edge with the shortest distance from the user-selected location. For example, a first candidate edge is weighted lower than a second candidate edge that is closer to the user-selected
location, represented by the input, than the first candidate edge. A second algorithm may include calculating the average slope of the candidate edges and selecting a candidate edge having the slope
closest to the average slope. A third algorithm may include linear regression based on points within the candidate edges. A fourth algorithm may include randomly selecting an edge.
The row edge detection module 130 may provide the candidate edges to an operator (e.g., via the user interface module 123) for display at the display 112. The operator may select the best fit edge
and the row edge detection module 130 may receive the user-selected best fit edge. The row edge detection module 130 may perform optional confidence score determinations associated with one or more
of the candidate edges including the user-selected edge. The row edge detection module 130 may prompt the user to select another edge if the confidence score is below a threshold. The navigation
module 124 may use the user-selected edge to determine a heading and/or lateral errors to modify the operation of the vehicle 110.
As the vehicle 110 performs a farming operation, additional images may be captured by the camera 114. The row edge detection module 130 may iteratively identify candidate edges as the additional
images are captured. The edges may be iteratively selected from among the sets of candidate edges. Iterative identification of candidate edges may comprise identifying, for images captured during the
farming operation and in chronological order of the images, a best fit edge within each image and determining a confidence score associated with the best fit edge. In response to a higher confidence
score associated with a subsequently identified best fit edge, the navigation module 124 may determine navigational errors (e.g., heading and/or lateral errors) to be used to modify the operation
(e.g., automated steering) of the vehicle 110 using the subsequently identified best fit edge.
The remote server 140 may store and execute the row vision system 120 for remote use by the vehicle 110. For example, the remote server 140 may train machine learning models using the model training
engine for access over the network 160. The remote server 140 may train and provide the models to be stored and used by the vehicle 110. The remote server 140 may identify the set of candidate edges
and apply an edge selection model as an alternative to the vehicle 110 or a mobile device coupled to the vehicle 110 performing the operations.
The database 150 is a storage for data collected by the vehicle 110 (e.g., using the camera 114) or provided by an operator. Data collected by the vehicle 110 may include images or videos of the
environment through which the vehicle 110 travels. For example, images depicting one or more surfaces and/or edges that are captured by the camera 114 may be stored in the database 150. Data provided
by the operator may include training images or training videos for the model training engine 122 to access and train machine learning models of the row vision system 120. The training data may be
labeled by the operator when provided to the database 150 or the training data may be labeled by the model training engine 122 using computer vision. Data provided by the operator may include user
input and/or feedback provided during the operation of the vehicle 110. For example, the operator may provide user input selecting a location on an image corresponding to a target edge between
surfaces that the operator wants the vehicle 110 to track. This input and the corresponding image may be stored at the database 150. In another example, the operator may provide feedback regarding
the accuracy of the edge detection, which may be stored at the database 150 to retrain a machine learning model and/or adjust algorithms used by the row vision system 120 to detect edges, detect
hoods, determine steering wheel direction, determine steering wheel speed, etc.
The network 160 may serve to communicatively couple the vehicle 110, the remote server 140, and the database 150. In some embodiments, the network 160 includes any combination of local area and/or
wide area networks, using wired and/or wireless communication systems. The network 160 may use standard communications technologies and/or protocols. For example, the network 160 includes
communication links using technologies such as Ethernet, 802.11, worldwide interoperability for microwave access (WiMAX), 3G, 4G, 5G, code division multiple access (CDMA), digital subscriber line
(DSL), etc. Examples of networking protocols used for communicating via the network 160 include multiprotocol label switching (MPLS), transmission control protocol/Internet protocol (TCP/IP),
hypertext transport protocol (HTTP), simple mail transfer protocol (SMTP), and file transfer protocol (FTP). Data exchanged over the network may be represented using any suitable format, such as
hypertext markup language (HTML), extensible markup language (XML), JavaScript Object Notation (JSON), or Protocol Buffers (Protobuf). In some embodiments, all or some of the communication links of
the network 160 may be encrypted using any suitable technique or techniques.
FIG. 2 is a block diagram of a logical architecture 200 for the edge detection for the vehicle 110 of FIG. 1, in accordance with at least one embodiment. The logical architecture 200 depicts
interactions and information communicated between an operator 201, the vehicle 110, and the vision system 202. The operator 201 may be collocated with the vehicle 110 (e.g., seated in the vehicle) or
remotely located from the vehicle 110. In one example of remote location, the operator 201 may be seated in a control room using a processing device that is communicatively coupled to the vehicle 110
. The vision system 202 may be software executed by a processing device integrated with the vehicle 110 or remotely located from the vehicle 110. The vision system 202 includes a guidance system 210
and the row vision system 120.
The guidance system 210 may enable the vehicle 110 to follow a route specified by the operator 201 for automated travel through a field, where the route may be independent of a desired row edge to
which the operator 201 desires the vehicle 110 to follow. The guidance system 210 may determine steering instructions for automated steering using location information (e.g., GPS coordinates) and a
user-specified route. The guidance system 210 may provide the determined steering instructions to the steering motor controller 113 of the vehicle 110 to modify the operations of the vehicle 110.
The guidance system 210, as depicted in FIG. 2, may be separate from the row vision system 120 and provide data such as lateral error, heading error, or geographic location (e.g., position or
heading) to the row vision system 120. In some embodiments, the functionality of the guidance system 210 is incorporated into the row vision system 120. For example, the row vision system 120 may
perform the lateral and heading error calculations. The guidance system 210 may receive user-specified route guidance instructions and determine lateral and/or heading error relative to a desired
heading or desired guidance line derived from or defined by the route guidance instructions. This lateral and heading error determined by the guidance system with respect to the route guidance
instructions may be distinct from a lateral and heading error determined for a desired row edge and navigation instructions for automated steering.
The operator 201 may provide field setup to the vision system 202. The field setup may include a map of the environment in which the vehicle 110 is planned to operate. For example, the map may
include the farming field, roads around the field, the location of crops within the field, any suitable feature for representing an area of land, or a combination thereof. The field setup may include
information about the environment such as the types of crops, the weather of the environment, time of day, humidity, etc. The operator may interact with the vehicle 110 by engaging the transmission,
the manually steering, or automated steering of the vehicle 110.
The vehicle 110 may provide information to the vision system 202 such as the vehicle identification information, transmission state, or operator presence state. As referred to here, an operation
performed by the vision system 202 may be performed by one or more of the guidance system 210 or the row vision system 120. The vehicle identification information may be used modify the automated
steering as determined by the vision system 202, customizing the automated steering to the vehicle. For example, a first vehicle's steering motor may react more sensitively than a second vehicle's
steering motor responsive to the same steering instructions. In another example, a first vehicle may be used for baling and a second vehicle may be used for spraying. The second vehicle may not need
to track a row edge as precisely as the first vehicle to accomplish its farming operation. The vision system 202 may factor in the vehicle identification when determining navigation instructions for
modifying the operation of the steering motor to account for the vehicle's particular hardware behavior or operation purpose.
The vision system 202 may use the transmission state of the vehicle to determine whether to engage in automated steering or modify navigation instructions based on the transmission state. For
example, if the transmission state indicates the vehicle is stationary, the systems may not engage in automated steering or edge detection. In another example, the navigation instructions related to
steering speed determined by the vision system 202 may varying depending on a gear or belt of the corresponding transmission state. The vision system 202 may use the operator presence state to
determine whether to engage in automated steering. For example, if the operator presence state indicates no operator is present (e.g., the vehicle 110 has not been manually operated for more than a
threshold amount of time), the vision system 202 may automatically engage in automated steering (e.g., until manually overridden by an operator).
The vision system 202 may provide the vehicle 110 with instructions or data to modify the operation of the vehicle (e.g., modify the steering motor operations). The instructions may include a
steering wheel speed or a steering wheel direction. The data may include a lateral error, heading error, or geographic location. The vehicle 110 (e.g., the controller 111) may receive the
instructions or data and generate corresponding instructions (e.g., processed by the steering motor controller 113) to modify the automated steering of the vehicle 110. The vision system 202 may
provide one or more of a current heading or a guidance line for display to the operator 201. For example, the vision system 202 may provide both the current heading and the desired guidance line of
the vehicle 110 for display at the display 112. Although FIG. 2 depicts the heading and line tracking information as being transmitted to the operator 201, the vision system 202 may transmit the
information to the vehicle 110 to be displayed for the operator 201.
Edge Detection Onboard a Vehicle
FIG. 3 depicts a front-facing view 300 from within a vehicle 310 using edge detection, in accordance with at least one embodiment. The view 300 may be from the perspective of an operator seated
within the vehicle 310. The view 300 may have alternative configurations than shown in FIG. 3, including for example different, fewer, or additional components within the vehicle 310.
The operator may engage with the row vision system 120 to modify the operation of the vehicle 310. For example, the operator may interact with the user interface provided via the display 312, which
may be a standalone display that is communicatively coupled to the row vision system 120 (e.g., remotely located at the remote server 140 or located at a separate processing device within the vehicle
310) or a display of a processing device executing the row vision system 120.
The vehicle 310 may detect the presence status of the operator (e.g., through interactions with the display 312 or the lack thereof) and engage or disengage with automated steering based on the
operator presence status. The display 312 may provide images or videos of the environment around (e.g., in front of) the vehicle 310 to the operator. The operator may interact with the display 312 by
selecting a location on an image. The selected location may be used to perform edge detection and steer the vehicle 310 to track the detected edge. The GUI provided by the row vision system 120 on
the display 312 is further described in the descriptions of FIGS. 4 and 5.
FIG. 4 depicts a graphical user interface (GUI) 400 for edge detection, in accordance with at least one embodiment. The GUI 400 may be displayed on a display collocated with a vehicle performing
farming operations with edge detection or on a display remotely located from the vehicle. A row vision system may provide the GUI 400 for display. For example, the row vision system 120 provides the
GUI 400 for display at the display 112 using the user interface module 123. The GUI 400 includes a camera frame view 401, a field type selection menu 402, calculated lateral error 403, a vehicle
operation control button 404, a tractor heading indicator 405, an operator-selected tracking target 406, a bounding box 407, a detected row edge 408, a target indicator 409, and a desired guidance
line 411. The camera frame view 401 depicts a vehicle 410, the hood 415 of the vehicle 410, and surface types 420 and 421. The GUI 400 may have alternative configurations than shown in FIG. 4,
including for example different, fewer, or additional user interface elements.
An operator may interact with the GUI 400 to request that the row vision system 120 detect an edge between two surfaces in the environment of the vehicle 410. For example, the operator may select a
location in the camera frame view 401 using a tap, click, voice command, or any suitable user input mechanism for selecting a location on the camera frame view 401. The user interface module 123 may
receive the user-selected location within the image or video corresponding to the camera frame view 401. The row edge detection module 130 may additionally receive the image displayed in the camera
frame view 401. Using the user-selected location, the row vision system 120 may determine an image portion of the received image, where the image portion is centered at the user-selected location and
is bounded by the bounding box 407. The dimensions of the bounding box 407 may be user-specified or dynamically determined by the row vision system 120. In some embodiments, the row edge detection
module 130 may determine different dimensions of the bounding box. For example, the row edge detection module 130 determines, based on user feedback, the success rate of detecting a target edge or
best fit edge has decreased over time and increases the dimension of the bounding box to increase the likelihood of detecting the target edge. The bounding box 407 may be referred to herein as a
“tracking region.”
The row edge detection module 130 may determine the best fit edge depicted within the received image. For example, the row edge detection module 130 may perform a Hough transform to identify
candidate edges within the bounding box 407 corresponding to the user-selected location, identify the surface types 420 and 421 within the bounding box 407, and select a corresponding edge detection
model configured to identify a best fit edge between the identified surfaces. The best fit edge may be presented as the detected row edge 408 on the GUI 400. The detected row edge 408 may indicate
the edge used to modify the automated steering of the vehicle 410. The navigation module 124 may determine the calculated lateral error 403 using the detected row edge 408. The lateral error 403 may
indicate a relative distance between the operator-selected tracking target 406 and the detected row edge 408. The navigation module 124 may determine a calculated heading error using the tractor
heading, as shown on the GUI 400 by the indicator 409, and a desired guidance line 411 for the automated steering of the vehicle 410.
In some embodiments, the operator may provide a user input specifying a surface on which the vehicle 410 is traveling using the field type selection menu 402. For example, the operator provides user
input that the vehicle 410 is traveling on a surface of “roadside.” The row edge detection module 130 may use this user input to determine a confidence score for the surfaces identified by the
surface detection model 131. In some embodiments, the row vision system 120 uses the hood detection module 125 to detect the hood 415 depicted within the camera frame view 401 to detect an edge
between surfaces, calculate the tractor heading as shown by the indicator 405, or determine confidence scores related to the detected edge. For example, if the hood is not detected within the camera
frame view 401, the row vision system 120 may determine that the camera of the vehicle 410 is likely dislodged from a calibrated position and subsequent calculations of lateral or heading error may
be inaccurate. The row vision system 120 may pause edge detection until the hood 415 is detected within the camera frame view 401. In some embodiments, the user interface module 123 may provide a
notification via the GUI 400 that the hood is not detected within the camera frame view 401 and prompt the operator to check the configuration of the camera.
The operator may control the operation of the vehicle 410 using the vehicle operation control button 404. For example, the operator may select the button 404 to stop the operation of the vehicle 410
and prevent the vehicle 410 from traveling further. The button 404 may also be used to resume operation.
FIG. 5 depicts edge candidates provided within a GUI for edge detection, in accordance with at least one embodiment. Like the GUI 400, the GUI 500 may be displayed on a display collocated with a
vehicle performing farming operations with edge detection or on a display remotely located from the vehicle. A row vision system may provide the GUI 500 for display. For example, the row vision
system 120 provides the GUI 500 for display at the display 112 using the user interface module 123. The GUI 500 includes a camera frame view 501, an operator-selected tracking target 506, a bounding
box 507, a detected row edge 508, and edge candidate groups 530 and 531. The GUI 500 may have alternative configurations than shown in FIG. 5, including for example different, fewer, or additional
user interface elements.
Similar to the operator interactions with the GUI 400, the operator may select a location within the camera frame view 501 at or near which the operator identifies an edge that the vehicle should
track to perform a farming operation. The selected location may be the operator-selected tracking target 506. The user interface module 123 provides the bounding box 507 in response to the operator's
selection of the target 506. The row vision system 120 detects edge candidate groups 530 and 531 within the bounding box 507. For example, the row edge detection module 130 may use a Hough transform
on the image portion defined by the bounding box. The result of the Hough transform may be several candidate edges, including the edges groups 530 and 531. In some embodiments, for each of the
candidate edges, the row edge detection module 130 may apply the image portion with a respective candidate edge to an edge detection model to determine a confidence score corresponding to the
likelihood that the respective candidate edge is the best fit edge.
In some embodiments, the row edge detection module 130 may filter and eliminate candidate edges from consideration as the best fit edge before applying an edge detection model. For example, the row
edge detection module 130 may determine that group 530 is closer to the operator-selected tracking target 506 than the group 531 and eliminate the candidate edges in the group 531 from consideration.
After processing one or more of the candidate edges, the row edge detection module 130 may determine that the detected row edge 508 is the best fit edge (e.g., the module 130 calculates the highest
confidence score for the detected row edge 508).
The candidate edges in the edge candidate groups 530 and 531 may be selectable by the operator. For example, the user interface module 123 may provide the candidate edges as determined by the row
edge detection module 130 for display at the GUI 500, prompt the user to select the target edge, and receive a user selection of one of the displayed edges. Although the GUI 500 depicts the candidate
edges as being presented to an operator, the row vision system 120 may not provide the candidate edges for display in some embodiments. For example, after receiving the user input of the
operator-selected tracking target 506, the row vision system 120 may proceed to determine candidate edges and determine that the detected row edge 508 is the best fit edge without presenting the
determined candidate edges in the GUI 500.
Calibration for Edge Detection
FIGS. 6A and 6B show top and side views, respectively, of a configuration 600 for calibration of a camera for edge detection on a vehicle 610, in accordance with at least one embodiment. The
configuration 600 includes calibration markers 601a-f, 602, 603, and 604. The calibration markers 601a-f may include lines located at predetermined distances apart from one another. The calibration
markers 601a-f may be arranged to calibrate a desired camera pitch, desired camera lateral offset, hood detection, or a combination thereof. For example, the desired camera pitch may be a 20 degree
angle formed between the ground and camera as shown in FIG. 6B. In another example, the desired camera lateral offset may be a distance from a vertical line through the center of the vehicle 610 that
allows for a target offset ranging between one edge of a calibration mat to an opposing edge of the calibration mat.
Target offset min and max may be selected based on a type of operation that the vehicle is performing and a typical bounding box location for the type of operation. For example, if the vehicle is
pulling implements with narrow working widths, the target offset min and max may be small (e.g., two to five feet). In another example, if the vehicle is pulling wide tillage implements, calibration
may be done with target offset min and max that are larger (e.g., fifteen to seventeen feet). A large target offset min and max may be used to calibrate a vehicle equipped with a camera having a
large field of view (e.g., greater than 130 degrees) or a unique camera 3D pose. The calibration may be used to adjust camera 3D pose to prevent occlusion issues that may exist with attachments to
the vehicle (e.g., loaders, tanks, or toolboxes).
As shown in FIG. 6A, the calibration markers 601b-d may be located at 30, 40, and 60 meters vertically away from the calibration marker 601a. The calibration marker 601a may be aligned with a line at
the vehicle 610 that includes the location of the camera at (e.g., within) the vehicle 610. The calibration markers 601a-d oriented horizontally may be a different color than the calibration markers
601e-f. The color distinguishing may be used to distinguish the different orientations during image processing of images or videos capturing the configuration 600 during calibration. The calibration
markers 602 and 603 may be associated with a calibration mat. For example, the calibration mat may be located between the calibration markers 602 and 603 and between the calibration markers 601e and
Hood detection may be included within the calibration process in addition to or alternatively to the calibration of the camera lateral offset or pitch. The hood of the vehicle 610 may include a
calibration marker 604 such as a painted line along the length (i.e., vertically oriented over the hood of the vehicle 610 in FIG. 6A). The calibration marker 604 may be a color and dimension to
facilitate image processing to identify the calibration marker 604 and hence, the hood within images or videos. For example, the calibration marker 604 may be a yellow line against green paint of the
vehicle 610. In another example, the calibration marker 604 may be half an inch thick. The detected hood may be used for self-diagnostics while the vehicle 610 is in operation (e.g., performing
farming operations). The self-diagnostics may be periodically performed to determine whether the row vision system 120 is detecting the environment correctly. For example, self-diagnostics determine
that the camera is detecting the environment correctly based at least on the camera pitch being approximately 20 degrees and thus, able to capture the hood in images.
The configuration 600 shown in FIGS. 6A and 6B may be dependent upon the location of the camera within the vehicle 610 or the dimensions of the vehicle 610. The configuration 600 may be modified
accordingly. For example, the vertical distance between the camera to the calibration mat 602 may be 30 meters rather than 60 meters away from the camera, depending on the height at which the camera
is located at the vehicle 610, to maintain a 20 degree camera pitch as shown in FIG. 6B. In some embodiments, there may be additional or fewer calibration markers than shown in the configuration 600.
For example, the calibration may have at least the calibration markers 602 and 602. Minimizing the number of calibration markers may reduce processing or power resources needed to calibrate the
camera or calibrate for hood detection.
Process for Detecting Edges Using the Row Vision System
FIG. 7 is a flowchart illustrating a process 700 for modifying, based on edge detection, the operations of a vehicle, in accordance with at least one embodiment. In some embodiments, the row vision
system 120 performs operations of the process 700 in parallel or in different orders, or may perform different steps. For example, the process 700 may include calibrating the camera lateral offset
using a configuration as described in the description of FIGS. 6A and 6B. In another example, the process 700 may include training a machine learning model to be used in identifying a best fit edge
of candidate edges.
In one example, the process 700 may involve a vehicle (e.g., a tractor) performing a farming operation (e.g., mowing). The tractor may be autonomously steered through a farm (e.g., using the row
vision system 120), tracking an edge of a crop surface for mowing. The autonomous steering may track one or more of a predetermined route through the farm or an edge between surfaces (e.g., between
soil and the crop). An operator may be located within the tractor, interacting with a user input for the row vision system 120 to specify the target edge of the crop surface to which the tractor is
to track. Alternatively or additionally, an operator may be located remotely and may use a network (e.g., the network 160) to control the tractor. For example, the remote operator may transmit
instructions (e.g., a target edge to track) to a row vision system 120 located onboard the vehicle or operate a remotely located row vision system 120 that transmits instructions to a controller
onboard the vehicle to modify the autonomous steering.
The row vision system 120 accesses 701 a set of images captured by a vehicle while navigating via autonomous steering through an area of different surface types. For example, the row vision system
120 located remotely from the tractor of the previous example accesses 701 images of the farm that depict soil and crop ahead of the tractor during the mowing operation. The tractor autonomously
steers through the farm that includes the different surface types such as soil and a crop type (e.g., grass). The images may be captured by a camera integrated with the tractor or integrated with an
operator device (e.g., a tablet) that may be installed and removed from the tractor once the farming operation is completed. The camera positioning (e.g., offset or pitch) may be calibrated by the
row vision system 120 before the mowing begins via the calibration configuration described in FIGS. 6A and 6B. In one example in which the operator is located remotely from the tractor, the row
vision system 120 may be located onboard the tractor. The row vision system 120 may transmit the images to the remote operator (e.g., to a tablet communicatively coupled to the tractor using the
network 160). In another example in which the operator is located remotely from the tractor, the row vision system 120 is also located remotely from the tractor (e.g., executed on the remote server
140). The images may be transmitted by the tractor to the remote server 140. The images may be displayed to an operator (e.g., at the tablet) by the row vision system 120 for the operator to provide
user input selecting an edge that the tractor should track.
The row vision system 120 receives 702, from a remote operator, an input representative of a location within the set of images displayed to the remote operator. The remote operator may provide an
input (i.e., user input) selecting the edge that the tractor should track. The selection may be a tap of a display (e.g., the display 112) that presents the accessed 701 images. The selection may
correspond to one or more pixels of an image, where the one or more pixels depict an operator selected tracking target, such as the operator-selected tracking target 406 shown in FIG. 4. In some
embodiments, the location may correspond to a first pixel of a first image and to a second pixel of a second, subsequent image. That is, the location of the user-selected target may change across the
images accessed 701 yet correspond to the same target in the environment (i.e., the edge between surface types).
The row vision system 120 identifies 703 a set of candidate edges within an image portion corresponding to the location within the set of images. The row vision system 120 may use the input received
702 to determine a bounding box centered at the location within the set of images. The row vision system 120 performs an operation such as a Hough transform to detect one or more candidate edges
within the bounding box. The row vision system 120 may detect several candidate edges between the soil and crop surfaces that the tractor may follow to perform the mowing operation.
The row vision system 120 applies 704 an edge selection model to the set of candidate edges, the edge selection model configured to select an edge of the set of candidate edges based at least in part
on the location within the set of images represented by the received input. The edge selection model may include one or more machine learning models. For example, the application 704 of the edge
selection model may involve an application of the surface detection model 131 followed by one of the edge detection models 132 or 133. The row vision system 120 uses the edge selection model to
select an edge (e.g., a best fit edge) of the identified 703 set of candidate edges within the bounding box centered on the operator-selected location within the set of images. The edge selection
model may first identify the two surfaces within the bounding box are soil and crop. The edge selection model or the surface detection model 131 may be configured to determine that the crop surface
is of crop type “grass.” Based on the identified surfaces, the edge selection model may use a corresponding edge detection model trained on images of edges between soil and crop (e.g., grass) to
determine a best fit edge within the bounding box. The edge selection model may assign confidence scores to one or more of the candidate edges, where the best fit edge is selected based on the
confidence score (e.g., the edge having the highest confidence score).
The row vision system 120 modifies 705 a route being navigated by the vehicle based at least in part on the selected candidate edge. In some embodiments, the row vision system 120 determines one or
more of a heading or lateral error of the vehicle to modify 705 the route being navigated. For example, the row vision system 120 uses the location of the selected candidate edge (e.g., the best fit
edge) within the accessed 701 images and one or more of a guidance line or heading of the vehicle to determine the lateral or heading error, respectively. The determined error may be used to
determine a modification to the steering direction or speed of the autonomous steering used to control the vehicle. In some embodiments, the lateral offset indicates the relative distance between the
operator-selected target and best fit edge line. For example, the row vision system 120 determines, for the tractor performing mowing, a lateral offset between the operator-selected target and the
best fit edge between soil and crop. Using the determined lateral offset, the row vision system 120 modifies 705 the route navigated by the tractor. Modification 705 of the route navigated by the
vehicle may include modifying a direction in which the vehicle is traveling, a speed at which the vehicle travels in that direction, or a combination thereof. For example, the row vision system 120
modifies 705 the steering direction of the tractor in response to determining that the lateral offset between the operator-selected target and the best fit edge indicates that the tractor is
traveling away from the crop.
Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These operations,
while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like. Furthermore, it has also
proven convenient at times, to refer to these arrangements of operations as modules, without loss of generality. The described operations and their associated modules may be embodied in software,
firmware, hardware, or any combinations thereof. The software modules described herein may be embodied as program code (e.g., software comprised of instructions stored on non-transitory computer
readable storage medium and executable by at least one processor) and/or hardware (e.g., application specific integrated circuit (ASIC) chips or field programmable gate arrays (FPGA) with firmware).
The modules correspond to at least having the functionality described herein when executed/operated.
The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the intended purposes, or it may include a computer
selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type
of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type
of media suitable for storing electronic instructions, each coupled to a computer system bus.
A processing device as described herein may include one or more processors such as a microprocessor, a central processing unit, or the like. More particularly, the processing device may be complex
instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other
instruction sets, or processors implementing a combination of instruction sets. The processing device may also be one or more special-purpose processing devices such as an application specific
integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device may be configured to execute instructions
for performing the operations and steps described herein. A controller or microcontroller as described herein may include one or more processors, such as a central processing unit, internal memory,
and input/output components. The controller may communicatively couple processing devices such that one processing device may manage the operation of another processing device through the controller.
In one example, a controller may be a JOHN DEERE AUT300. A controller may be communicatively coupled to a processing device through a CAN bus.
As used herein any reference to “one embodiment” or “an embodiment” means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in
at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment. In addition, use of the “a”
or “an” are employed to describe elements and components of the embodiments herein. This description should be read to include one or at least one and the singular also includes the plural unless it
is obvious that it is meant otherwise.
As used herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having” or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process,
method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process,
method, article, or apparatus. Further, unless expressly stated to the contrary, “or” refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of
the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
Upon reading this disclosure, those of skill in the art will appreciate still additional alternative structural and functional designs for a system and a process for automated quality check and
diagnosis for production model refresh through the disclosed principles herein. Thus, while particular embodiments and applications have been illustrated and described, it is to be understood that
the disclosed embodiments are not limited to the precise construction and components disclosed herein. Various modifications, changes and variations, which will be apparent to those skilled in the
art, may be made in the arrangement, operation and details of the method and apparatus disclosed herein without departing from the spirit and scope defined in the appended claims.
1. A method comprising:
accessing a set of images captured by a vehicle while navigating via autonomous steering through an area of different surface types, the set of images comprising images of a ground surface in
front the vehicle;
receiving, from a remote operator, an input representative of a location within the set of images displayed to the remote operator;
identifying a set of candidate edges within an image portion corresponding to the location within the set of images, each candidate edge corresponding to a candidate boundary between two
different surface types;
determining, for each of the set of candidate edges, a distance between the candidate edge and the location within the set of images represented by the input received from the remote operator;
applying an edge selection model to the set of candidate edges, the edge selection model configured to select an edge of the set of candidate edges based at least in part on the determined
distance for each candidate edge; and
modifying a route being navigated by the vehicle based at least in part on the selected candidate edge.
2. The method of claim 1, wherein the image portion within the set of images comprises a bounding box centered on the location represented by the received input.
3. The method of claim 1, wherein the set of candidate edges are identified using an edge detection model corresponding to one or both of the two different surface types.
4. The method of claim 3, wherein the two different surface types comprise soil and one of a crop, grass, and pavement, and wherein the edge detection model comprises a machine-learned model that is
trained on images of manually tagged boundaries between soil and the one of a crop, grass, and pavement.
5. The method of claim 1, wherein the edge selection model is configured to weight the candidate edges at least in part based on the distance between each candidate edge and the location represented
by the received input within the set of images, wherein a first candidate edge is weighted lower than a second candidate edge that is closer to the location represented by the received input than the
first candidate edge.
6. The method of claim 1, wherein the edge selection model comprises a machine-learned model that is trained on images each with a set of candidate edges and a manually-selected edge of the set of
candidate edges.
7. The method of claim 1, further comprising:
identifying, by the vehicle, each of the two different surface types; and
selecting, by the vehicle, the edge selection model from a set of edge selection models based on the identified surface types.
8. The method of claim 7, wherein an identified surface comprises a type of crop, and wherein the selected edge selection model is trained based on images of the type of crop.
9. The method of claim 1, wherein the set of candidate edges are identified by and the edge selection model is applied by the vehicle.
10. The method of claim 1, wherein the set of candidate edges are identified by and the edge selection model is applied by a remote computing system communicatively coupled to the vehicle.
11. The method of claim 1, wherein modifying the route being navigated by the vehicle comprises moving the vehicle such that an edge of a tool or instrument being pulled by the vehicle is aligned
with the selected edge.
12. The method of claim 1, wherein additional sets of candidate edges are iteratively identified as the vehicle captures additional images, and wherein edges are iteratively selected from among the
sets of candidate edges.
13. A system comprising a hardware processor and a non-transitory computer-readable storage medium storing executable instructions that, when executed by the processor, are configured to cause the
system to perform steps comprising:
accessing a set of images captured by a vehicle while navigating via autonomous steering through an area of different surface types, the set of images comprising images of a ground surface in
front the vehicle;
receiving, from a remote operator, an input representative of a location within the set of images displayed to the remote operator;
identifying a set of candidate edges within an image portion corresponding to the location within the set of images, each candidate edge corresponding to a candidate boundary between two
different surface types;
determining, for each of the set of candidate edges, a distance between the candidate edge and the location within the set of images represented by the input received from the remote operator;
applying an edge selection model to the set of candidate edges, the edge selection model configured to select an edge of the set of candidate edges based at least in part on the determined
distance for each candidate edge; and
modifying a route being navigated by the vehicle based at least in part on the selected candidate edge.
14. The system of claim 13, wherein the image portion within the set of images comprises a bounding box centered on the location represented by the received input.
15. The system of claim 13, wherein the set of candidate edges are identified using an edge selection model corresponding to one or both of the two different surface types, wherein the two different
surface types comprise soil and one of a crop, grass, and pavement, and wherein the edge selection model comprises a machine-learned model that is trained on images of manually tagged boundaries
between soil and the one of a crop, grass, and pavement.
16. The system of claim 13, wherein the edge selection model is configured to weight the candidate edges at least in part based on the distance between each candidate edge and the location
represented by the received input within the set of images, wherein a first candidate edge is weighted lower than a second candidate edge that is closer to the location represented by the received
input than the first candidate edge.
17. The system of claim 13, wherein the edge selection model comprises a machine-learned model that is trained on images each with a set of candidate edges and a manually-selected edge of the set of
candidate edges, and wherein the edge selection model is selected among a set of edge selection models based on an identified surface type of the two different surface types.
18. The system of claim 13, wherein the hardware processor and the non-transitory computer-readable storage medium are implemented within one of the vehicle and a remote computing system coupled to
the vehicle.
19. The system of claim 13, wherein modifying the route being navigated by the vehicle comprises moving the vehicle such that an edge of a tool or instrument being pulled by the vehicle is aligned
with the selected edge.
20. The system of claim 13, wherein additional sets of candidate edges are iteratively identified as the vehicle captures additional images, and wherein edges are iteratively selected from among the
sets of candidate edges.
21. A method comprising:
accessing a set of images captured by a vehicle while navigating via autonomous steering through an area of different surface types, the set of images comprising images of a ground surface in
front the vehicle;
identifying a set of candidate edges within the set of images, each candidate edge corresponding to a candidate boundary between two different surface types;
displaying, to a remote operator via a device of the remote operator, the set of images overlayed with the identified set of candidate edges;
receiving, from the remote operator, a selection of a candidate edge from the set of candidate edges displayed to the remote operator to be overlayed on the set of images; and
modifying a route being navigated by the vehicle based at least in part on the selected candidate edge.
Referenced Cited
U.S. Patent Documents
9446791 September 20, 2016 Nelson, Jr. et al.
9817396 November 14, 2017 Takayama
20170248946 August 31, 2017 Ogura et al.
20190228224 July 25, 2019 Guo
20190266418 August 29, 2019 Xu
20200341461 October 29, 2020 Yokoyama
20210000006 January 7, 2021 Ellaboudy
Foreign Patent Documents
102019204245 October 2020 DE
102021203109 November 2021 DE
3970466 March 2022 EP
2021016377 February 2021 JP
WO2020182564 September 2020 WO
Other references
• Extended European Search Report and Written Opinion issued in European Patent Application No. 22167320.5, dated Oct. 10, 2022, in 11 pages.
• Deere & Company, “Auto Trac™ RowSense™—Sprayer,” Date Unknown, three pages, [Online] [Retrieved on May 3, 2021] Retrieved from the Internet <URL: https://www.deere.com/en/technology-products/
• Deere & Company, “Auto Trac™,” Date Unknown, three pages, [Online] [Retrieved on May 3, 2021] Retrieved from the Internet <URL: https://www.deere.com/en/technology-products/
• Google AI Blog, “Real-Time 3D Object Detection on Mobile Devices with MediaPipe,” Mar. 11, 2020, six pages, [Online] [Retrieved on Apr. 26, 2021] Retrieved from the Internet <URL: https://
• OpenCV, “OpenCV demonstrator (GUI),” Dec. 16, 2015, seven pages, [Online] [Retrieved on Apr. 30, 2021] Retrieved from the Internet <URL: https://opencv.org/opencv-demonstrator-gui/>.
• Riba, E., “Real Time pose estimation of a textured object,” Date Unknown, 11 pages, [Online] [Retrieved on Apr. 26, 2021] Retrieved from the Internet <URL: https://docs.opencv.org/master/dc/d2c/
• Sovrasov, V., “Interactive camera calibration application,” Date Unknown, five pages, [Online] [Retrieved on Apr. 26, 2021] Retrieved from the Internet <URL: https://docs.opencv.org/master/d7/d21
International Classification: G06V 20/56 (20220101); A01B 69/04 (20060101); B62D 15/02 (20060101); G05D 1/00 (20240101); G06T 7/40 (20170101); G06V 10/22 (20220101); G06V 10/44 (20220101); G06V 20/10 | {"url":"https://patents.justia.com/patent/12112546","timestamp":"2024-11-13T09:55:25Z","content_type":"text/html","content_length":"183712","record_id":"<urn:uuid:7d6dc85f-be64-477b-b513-8833c29bbd85>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00364.warc.gz"} |
Epic Fail? The Polls and the 2016 Presidential Election
Donald Trump’s victory in the 2016 presidential election caused widespread shock, in large part because political polls seemed to predict an easy victory for Hillary Clinton. New York magazine was so
confident of a Clinton victory that, the week of the election, its cover featured the word “loser” stamped across Donald Trump’s face.
The day after the election, Politico asked, “How Did Everyone Get It So Wrong?” The next month, a columnist for the New Republic asked, “[A]fter 2016, can we ever trust the polls again?” given that
they “failed catastrophically” in the presidential race. A Wall Street Journal columnist doubtless referred to the 2016 outcome when he noted that recent election polls had been “spectacularly
Was the primary failure among the polls or, instead, among Americans who, for whatever reason, were unable to recognize what the polls were actually saying? We explore the issue by considering two
primary consolidators of presidential polls: FiveThirtyEight.com, the Nate Silver website that was previously considered the gold standard among consolidators, and Real Clear Politics, the immensely
popular political website that had 32 million unique visitors in the month before the 2016 election.
It is of interest to know what these consolidators reported on the eve of the election, and then consider what their forecasts implied about polling accuracy in key states that were credited or
blamed for Hillary Clinton’s “reversal of fortune.”
Properly understood, the publicly available polls suggested that the Clinton/Trump race was what the media, in comparable circumstances, would typically call a “statistical dead heat.” The belief
that the Democrat was all but certain to win was inconsistent with both a superficial awareness of what the polls said and a deeper understanding of what they implied about the outcome.
To the question “Can we ever trust the polls again?” after what happened in 2016, the answer is an emphatic yes.
On Election Day
Even a cursory look at FiveThirtyEight or Real Clear Politics on the morning of Election Day 2016 would have suggested that a Clinton victory was far from inevitable. In its poll-based forecast,
FiveThirtyEight estimated then that Trump had a 28.6% chance of winning the presidency. Real Clear Politics predicted that the Electoral College breakdown of the results would be 272 for Clinton and
266 for Trump.
That prognosis implied that, if only one state switched from Clinton to Trump, Trump would be president. Even if the switching state had only three electoral votes (the minimum number), the result
would be a 269–269 tie, in which case, Trump would prevail when the winner was chosen by the Republican House of Representatives.
To be sure, on Election Day itself, many people were no longer looking at polls, but polling outcomes a few days before the election were, if anything, more favorable to Trump: On November 4, 2016,
for example, FiveThirtyEight estimated Trump’s chances of winning four days hence at 35.4%. Thus, evidence of a close race cannot be said to have arisen too late to be widely noticed.
Paradoxically, respect for the two consolidators and the state-specific polls they summarized actually grows if one looks further at what they supposedly did wrong.
FiveThirtyEight’s “Errors” in Three Key States
The great surprise in the 2016 election was that Donald Trump carried Pennsylvania, Michigan, and Wisconsin, three states that were considered part of the Democratic “firewall.” FiveThirtyEight did
not contradict the general impression: Its polls-only forecast the morning of the election was that Trump had a 21.1% chance of winning Michigan, a 23.0 chance of winning Pennsylvania, and a 16.5%
chance of carrying Wisconsin. (Those “polls-plus” forecasts that also considered economic and historical factors were essentially the same.)
A simple calculation with the FiveThirtyEight estimates might suggest that Trump’s chances of achieving a “trifecta” in these states was essentially .211 ∗ .23 ∗ .165 = 1 in 125, but such a
calculation would be unfair to FiveThirtyEight.
Advancing the statistic “1 in 125” is misleading for two reasons. The first is that judging a forecasting model by what emerged as its weakest forecasts is systematically biased against it. The
second is that the calculation treats the three outcomes as independent, when FiveThirtyEight explicitly assumes that they are not. Although its specific procedures are not transparent,
FiveThirtyEight recognized that the outcomes in these three similar states are correlated. Indeed, on election night, TV commentators readily grasped that Trump’s strong performance in Pennsylvania
increased the chance that he would also carry Michigan and Wisconsin.
A simple analysis indicates the strength of the pattern of correlation. Assuming that FiveThirtyEight’s probability assessments were accurate in Pennsylvania and Wisconsin (at 23.0% and 16.5% for
Trump, respectively), how might one estimate the chance that he would win in both states?
One reflection of the correlation is that, in the 12 elections before 2016, Pennsylvania and Wisconsin went for the same candidate 10 times (whether Republican or Democrat, and whether the national
winner or the national loser). Using this statistic, we can set up some simple equations that approximate the chance that Trump would carry the two states.
Some definitions:
P[CC] = probability that Clinton would win both Pennsylvania and Wisconsin
P[TT] = probability that Trump would win both Pennsylvania and Wisconsin
P[CT] = probability that Clinton would win Pennsylvania and Trump would win Wisconsin
P[TC] = probability that Trump would win Pennsylvania and Clinton would win Wisconsin
On the morning of the election, we could write:
P[CC] + P[TT] = 10/12 (based on the historical chance both states go the same way)
P[TT] + P[TC] = .230 (based on FiveThirtyEight’s forecast of the chance Trump would win Pennsylvania)
P[TT] + P[CT] = .165 (based on FiveThirtyEight’s forecast of the chance Trump would win Wisconsin)
P[CC] + P[TT] + P[CT] + P[TC] = 1 (because something has to happen)
Solving this system of linear equations provides:
P[TT] = .114 P[TC] = .116 P[CT] = .051 P[CC] = .719
That P[TT] = .114 (as opposed to .23 ∗ .165 = .038) implies that the tendency of Pennsylvania and Michigan to “go the same way” yields a much greater chance of a double victory than would arise under
the FiveThirtyEight projections and an independence assumption. The chance that Trump would carry at least one of the states would be estimated by 1 – .719 = .2.
What about Michigan, as well as Pennsylvania and Wisconsin? The historical pattern is that, over the 12 presidential elections from 1960 to 2012, Michigan went along with Pennsylvania and Wisconsin
every time the latter two states voted for the same candidate (i.e., 10 times out of 10). It therefore could be approximated:
P[TTT] = P(Trump would win PA, MI, and WI) ≈ P(win PA and MI) ∗ (10/10) = .114
P(Trump win at least one of PA, MI, and WI) ≈ 1 – P(Clinton wins all three).
or P(Trump win at least one of PA, MI, and WI) ≈ 1 – .719 ∗ (10/10) = .281
These calculations, however, do not actually use FiveThirtyEight’s estimate that Trump’s chance of carrying Michigan was .211. One could perform the analogous calculations based initially on
Pennsylvania plus Michigan, and on Michigan plus Wisconsin, as shown in Table 1.
The key probabilities exhibit some variation over the three approximations, but the collective results suggest roughly a 1 in 9 chance that Trump would win all three states, and about a 1 in 3 chance
that he would carry at least one. (The use of an omnibus linear model that considers all three of FiveThirtyEight’s projections at once—with eight equations and eight unknowns—does not yield more
reliable probability estimates than the three-part approximation above.)
This 1 in 3 probability is especially noteworthy because, outside PA, MI, and WI, Trump amassed 260 electoral votes. That outcome was very much consistent with the state-by-state projections by both
FiveThirtyEight and Real Clear Politics, but, once Trump had 260 electoral votes, he only had to win one of PA/MI/WI to achieve a national victory. If the chance of that outcome was something like
33%, it is no surprise that FiveThirtyEight saw a probability near 3 in 10 that Trump would triumph.
The Democratic “firewall” was far from fireproof.
These probability calculations are obviously approximate, and weigh historical patterns almost as much as the individual FiveThirtyEight projections, but the general point of the exercise survives
its imperfections. There was reason to believe that, if Trump performed better than expected in one of the three states, he was more likely to do so in the others. Because FiveThirtyEight gave Trump
an appreciable chance of winning each of the three states and recognized that the outcomes were not independent, it saw a Clinton victory as far from a sure thing—and it did so based largely on
in-state polling results, which by no means deserved to be described as “spectacularly wrong.”
Real Clear Politics
As noted, Real Clear Politics (RCP) predicted on Election Day that Trump would get 266 electoral votes, compared to Clinton’s 272. It correctly identified the winner in 47 of the 51 states,
erring—like FiveThirtyEight—in PA, MI, and WI, and also in Nevada. (FiveThirtyEight got Nevada right, but it tilted toward Clinton in Florida and North Carolina, which she lost.) However, RCP listed
all four of the states it got wrong in its “tossup” category.
Still, the specific procedure by which RCP assigned a state’s winner would cause statisticians to tear their hair out. Roughly speaking, it considered all major polls in the state within two weeks of
the election, and took a simple arithmetical average of their results. If Trump was ahead of Clinton in this average, he was awarded the state in RCP’s “no tossup” assessment.
The procedure obliterated the distinction between:
• A poll one day before the election and another 13 days before
• A poll with sample size 1,500 and another with sample size 400
• A poll among all registered voters and another restricted to “likely” voters
Furthermore, RCP assigned no margin of error to its state-specific forecasts.
It is desirable to place the RCP results on a stronger statistical foundation, especially as an indicator of the caliber of state-level polls that it summarized. A simple way to do so is to accept
RCP’s assumption that the various polls in its average are essentially interchangeable. A “megapoll” could be created by combining all the individual polls RCP used for a given state in its Election
Day forecast. Using that megapoll, one can estimate a margin of sampling error for the statewide result and—more importantly—estimate the probability that Trump would carry the state.
For example, the final 2016 RCP poll listings for the state of Ohio are shown in Table 2.
(It was not RCP that rounded-off polling results to the nearest percentage point; rather, that convention was followed by all four polls in their announcements of the results.)
Trump’s average lead over Clinton in the four Ohio polls (weighed equally) was 14/4 = 3.5 percentage points, and that was the statistic featured by RCP in its summary about Ohio.
Although an average of 12% of those canvassed in the four polls were either undecided or supportive of a third-party candidate, the tacit assumption was that such voters could effectively be ignored
in the race between Clinton and Trump.
The Emerson poll in Ohio generated 900 ∗ .39 = 351 supporters of Clinton and 900 ∗ .46 = 414 of Trump. For CBS/YouGov, the corresponding numbers were 546 and 535. For the four merged polls, Clinton
had 2,252 supporters while Trump had 2,382. Obviously, the large Remington poll had a disproportionate effect on the merged result. If no relevance is accorded to the identity of the pollster,
though, the number of days until the election, and the likelihood a polled voter will actually vote, then one can presumably construe the Clinton/Trump result in Ohio as arising from a random sample
of 2,252 + 2,382 = 4,634 voters.
Among such voters, Trump’s share of the Clinton/Trump vote was 51.4% and Clinton’s was 48.6%.
Taking sampling error into account, what is the probability that Trump was actually ahead in Ohio, given the result of the “megapoll?” Starting with a uniform before Trump’s vote share versus Clinton
and revising it in Bayesian manner with the findings for the 4,634 voters sampled, the chance that Trump would defeat Clinton in Ohio would be estimated as .972. (This result arises from a beta
distribution with α = 2,383 and β = 2,253.)
Proceeding similarly, one can get a poll-based estimate of the chance that Trump would win in each of the 51 states.
An RCP Projection
If V[i] is state i’s number of electoral votes (i = 1, .., 51) and the indicator variable X[i] is defined by:
then T—Trump’s total number of electoral votes—follows:
Then, if P[i] = P(Trump will win state i | recent polling results there), the mean of T is given by:
RCP never mentions possible correlations among the X[i]‘s, and tacitly treats the X[i]‘s as independent (implying that similarities across states are already appropriately reflected in the point
estimates of the X[i]‘s). Under independence, the variance of T would approximately follow:
To estimate P(Trump wins) = P(T ≥ 269 out of FiveThirtyEight Electoral Votes), we might be tempted to assume that T is normally distributed as the sum of 51 independent random variables. However, the
heavy majority of P[i]‘s (calculated with the method shown above for Ohio) are very close to either zero or one. As a practical matter, therefore, the uncertainty in T depends on a small subset of
the state-specific variables.
Thus, to approximate P(Trump wins), we turn to simulation rather than treat T as normal, using the calculated P[i]‘s = 258.8, with a standard deviation of about 17.3.
More than 2,000 simulation runs using the state-specific P[i]‘s, the fraction in which Trump was victorious was .290. This estimate of P(Trump wins) might seem smaller than expected given RCP’s
estimated 266–272 split. But, as the value of E(T) suggests, Trump was only modestly ahead in several states assigned to him (i.e., several of his P[i]‘s only moderately exceeded 1/2), while
Clinton’s leads were more substantial in the states where she led.
What should we make of the estimate that P(Trump wins) ≈.290 (which, coincidentally, is very similar to the estimate made by FiveThirtyEight)? Consider a pre-election poll that suggests that
candidate A leads candidate B by 51% to 49%, with a margin of error of 3 percentage points. The media would routinely describe the race as “a statistical dead heat” or at least “too close to call.”
If, on Election Day, candidate B were to win with 50.3% of the vote, few people would cite the result as evidence that the poll was wrong.
This point is relevant because, under the standard formula for a 95% confidence interval, the sample size in this hypothetical poll with its 3-percentage-point error margin would be 1,068, and the
split corresponding to 51%/49% would be 544 for A and 524 for B. Under a Bayesian revision of a uniform prior, B’s true share of the vote would follow a beta distribution with α = 525 and β = 545 ),
and the chance that B was actually ahead would be estimated as 25.0%.
An obvious question arises: If a poll is not seen as inaccurate when it had assigned the actual winner a 25.0% chance of victory, how can one say that RCP was “spectacularly wrong” when it implied a
29.0% probability that Trump would win the presidency?
Indeed, in a two-candidate race with no undecided voters and a poll with a 3-percentage-point margin of error, the outcome would be treated as “too close to call” if B’s estimated vote share was
anything above 47%. But that means that the race would be viewed as undecided even when B’s approximate chance of actually winning was as low as .03. The observed split corresponding to a 29% chance
that B would win would be 50.8% for A and 49.2% for B, and the split—which obviously suggests a close race—is statistically equivalent to the result implied by Real Clear Politics in 2016.
The good performance of RCP—despite ignoring the correlation among outcomes in PA, MI, and WI—is quiet testimony to the high accuracy of the individual polls it synthesized.
If the conventional wisdom is that the polls failed in 2016 systematically, then the conventional wisdom is somewhat more in error than the polls were.
In Conclusion
The day after the election, the New York Times ran the headline “Donald Trump’s Victory Is Met with Shock Across a Wide Political Divide.” That assessment was clearly accurate but, as we have
suggested, this “shock” should be viewed as arising less because of the polls than despite them. Ironically, the real lesson from 2016 could be that not that we should pay less attention to the
polls, but that we should pay more attention.
Further Reading
Caldwell, C. 2017. France’s Choice: Le Divorce? Wall Street Journal.
FiveThirtyEight. 2016. Who will win the presidency?.
Healy, P., and Peters, J. 2016. Donald Trump’s Victory Is Met with Shock Across a Wide Political Divide. New York Times.
Narea, N. 2016. After 2016, Can We Ever Trust the Polls Again? New Republic.
Real Clear Politics, 2016 Presidential Election, No Toss-Up States.
Vogel, K., and Isenstad, A. 2016. How Did Everyone Get It So Wrong? Politico.
About the Author
Arnold Barnett is the George Eastman Professor of Management Science and professor of statistics at MIT’s Sloan School of Management. He has received the President’s Award for “outstanding
contributions to the betterment of society” from the Institute for Operations Research and the Management Sciences (INFORMS), and the Blackett Lectureship from the Operational Research Society of
the UK. He has also received the Expository Writing Award from INFORMS and is a fellow of that organization. He has been honored 15 times for outstanding teaching at MIT.
1 Comment
1. It was interesting to read at the bottom of the polls I looked at that Democrats were consistently over-polled by from 5% to 15%. Two things went through my mind at the time: Where’d they get
those baloney numbers from that there were 15% more registered Democrats than Republicans, and why would they tell us that they were cooking the books by using inflated populations?
My guess to the first is that they were so in bed with Hillary – perish the thought, not even Bill is in bed with Hillary – that they couldn’t ‘bare’ the thought that the queen has no clothes, so
they fudged the data enough to make it look like she was dressed in purple and awaiting her due at coronation.
To the second, I have NO idea. | {"url":"https://chance.amstat.org/2018/11/epic-fail/","timestamp":"2024-11-11T20:57:37Z","content_type":"application/xhtml+xml","content_length":"61121","record_id":"<urn:uuid:c7670477-faf9-4de8-b57b-c771b4d1c7a7>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00553.warc.gz"} |
Experimental Determination of Order of a Reaction - Chemical Kinetics, Chemistry, Class 12
Experimental Determination of Order of a Reaction
(1) Graphical method
This method is used when there is only one reactant. It involves the following steps:
1) The concentrations of the reactants are measured by some suitable method.
2) A graph is plotted between concentration and time.
3) The instantaneous rates of the reaction at different times are calculated by finding out the slopes of the tangents corresponding to different times.
4) The rate of reaction is plotted versus concentration,[A] or (concentration)^2, [A]^2 and so on.
(a) If rate of reaction remains constant in rate versus concentration graph, it means that the rate is independent of the concentration of the reactant, i.e.
Rate = k [A]^0 = k
Therefore, the reaction is of zero order.
(b) If a straight line is obtained in rate versus concentration graph, it means that the rate is directly proportional to concentration of the reactant i.e.
Rate = k [A]
Therefore, the reaction is of first order.
(c) If a straight line is obtained in rate versus (concentration)^2 graph, it means that
Rate = k [A]^2
Therefore, the order of the reaction is two.
(d) Similarly, if we get straight line in rate versus (concentration)^3 graph, then
Rate = k[A]^3
and the order of reaction is 3
If we get straight line by plotting graph of rate versus (concentration)^n, where n =1, 2, 3…. so on, then
Rate = k[A]^n
and the order of reaction is n.
Example : Decomposition of nitrogen pentoxide.
2N[2]05(g) ——-> 4NO[2](g) + O[2](g)
This reaction involves the gaseous reactants and products. Therefore, the reaction can be easily studied by measuring the increase in pressure of the gaseous mixture at different intervals of time.
From the measured values of total pressure, the partial pressure of N[2]O[5] at different times can be calculated.
From this, the concentration of N[2]O[5] in moles per litre can be calculated. The molar concentrations of N[2]O[5] obtained are plotted against time. The rates at different times are obtained by
measuring the slopes of the tangents corresponding to these times.
Rate = k [N[2]O[5]]^n
[N[2]O[5]] (mol L^-1) Rate (mol L^-1 min)
1.13 × 10^-2 3.4 × 10-4
0.84 × 10^-2 2.5 × 10-4
0.62 × 10^-2 1.1 × 10-4
0.46 × 10^-2 1.3 × 10-4
0.35 × 10^-2 1.0 × 10-4
0.26 × 10^-2 0.8 × 10-4
The plot of rate versus [N[2]O[5]] is a straight line.This means that the rate of the reaction is directly proportional to the [N[2]O[5]]. Therefore, the rate law is:
Rate = k [N[2]O[5]]
However, we do not get straight line by plotting rate of reaction against [N[2]O[5]]^2. This means that the reaction is not of second order.
Rate = -dx/dt = k [N[2]O[5]]
and order of the reaction is 1.
(2) Use of integrated rate equation
The kinetic data is fitted to different integrated rate equation. Wherever the data fit with the equation for the correct order of the reaction, it will give constant value of rate constant for all
data points (concentrations at different times).
For a general reaction:
A ———> Products
the integrated rate equation for zero, first and second order reactions are below given:
The straight lines are obtained for a plot of [A] versus t for a zero order reaction.
(3) Initial Rate Method
The graphical methods cannot be applied for the reaction which involve more than one reactant.The rates of such reactions can be determined by initial rate method.
a) The initial rate of the reaction i.e. the rate at the beginning of the reaction is measured.The rate over an initial time interval that is short enough so that concentration of the reactants do
not change appreciably from their initial values.This corresponds to slope of the tangent to the concentration versus time graph at t=0.
b) The initial concentration of only one reactant is changed and the rate is determined again.From this order with respect to that particular reactant is calculated.
c) The procedure is repeated with respect to each reactant until the overall rate law is fully determined.
d) The sum of the individual orders with respect to each reactant gives the order of the reaction.
Consider a reaction
aA + bB + c C ——-> Products
The general form of the rate law may be written as :
Rate = k[A]^p [B]^q [C]^r
Then initial rate of the reaction may be given as :
r[0] = Rate = k [A][0]^p [B][o]^q [C][o]^r
If [B] and [C] are kept constant, then
r[o]= k[o][A][o]^p where k[o]= k [B][o]^q [C][o]^r
The value of p can be determined by inspecting the rate at different values of [A].If we know the initial rates at two different concentration of A.
(r[0])[2] = k[o] [A[o]][2]^p
where (r[0])[1 ]and (r[0])[2 ]are initial rate of reactions when the initial concentration of A are [A[o]][1 ]and [A[o]][2]
n= p+ q+ c
2A + 2B ———> products
Rate = k[A]^p[B]^q
(Rate)[0]= k [A][o]^p[B][o]^q
(4) Ostwald Isolation Method
The total order of the reaction is then equal to the sum of the orders of reaction for individual reactants.This method is based on the principle that if the concentration of all but one reactant are
taken in excess , then during the course of the reaction, the concentration of those reactants taken in excess will remain almost constant and hence variation in rate will correspond to the
concentration of that reactant whose concentration is small.This process is repeated one by one and order with respect to each reactant is determined.
The overall order will be the sum of all these orders.
aA + bB + c C ———> Products
Suppose we isolate A by taking B and C in large excess and get order of reaction with respect to A.Similarly , we isolate B by taking A and C in B and and C.
Overall order of reaction n= p+ q+ r
1. Williams says
complement of the season,
you’ve been a helping hand to me, God bless you and keep it up. HAPPY NEW YEAR
2. Timi says
This was really helpful. | {"url":"https://classnotes.org.in/class12/chemistry12/chemical-kinetics/experimental-determination-order-reaction/","timestamp":"2024-11-05T12:49:56Z","content_type":"text/html","content_length":"87304","record_id":"<urn:uuid:065bd16f-dc6f-466f-ba02-d6cf1eb7bd5f>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00833.warc.gz"} |
Infrared effects in de Sitter spacetime: nonperturbative treatment of secular terms
The study of interacting quantum fields in de Sitter geometry reveals
peculiarities that are of conceptual and phenomenological interest. In
this geometry, the exponential expansion of the metric produces an
effective growth in the self-interaction of light fields, breaking down
the standard perturbative expansion. Furthermore, in the massless limit
the free propagators do not respect the symmetries of the classical
theory, and neither do they decay at large distances.
One way to avoid the problems of the standard perturbative calculations
is to go to Euclidean de Sitter space, where the zero mode responsible
for IR divergences can be treated exactly, giving an effective coupling
sqrt{lambda} for the perturbative corrections coming from the
nonzero modes. The Lorentzian counterpart is then obtained by analytical
continuation. However, we point out that a further partial resummation
of the leading secular terms (which necessarily involves nonzero modes)
is required to obtain a decay of the two-point functions at large
distances for massless fields. We implement this resummation along with
a systematic double expansion in sqrt{lambda} and in 1/N in the
O(N) model. These results improve on those known in the leading
infrared approximation obtained directly in Lorentzian de Sitter
spacetime, while reducing to them in the appropriate limits.
Mardi, 18 octobre, 2016 - 14:00
Nom/Prénom // Last name/First name:
Centro Atómico Bariloche, Argentina
Equipe(s) organisatrice(s) / Organizing team(s): | {"url":"https://apc.u-paris.fr/APC_CS/fr/node/365","timestamp":"2024-11-05T03:11:34Z","content_type":"text/html","content_length":"32949","record_id":"<urn:uuid:c18c2056-8ccc-47d0-a45d-a04f44bedd29>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00030.warc.gz"} |
Arithmetic Poem Questions & Answers - WittyChimpArithmetic Poem Questions & Answers
Arithmetic Poem Questions & Answers
Hi Everyone!! This article will share Arithmetic Poem Questions & Answers.
This poem is written by Carl Sandburg. In this poem, the poet describes a subject which is the bugbear of many. In my previous posts, I have shared the questions and answers of Street Cries and
Ozymandias so, you can check these posts as well.
Arithmetic Poem Questions & Answers
Question 1: Pick out phrases or lines from the poem which convey the following ideas.
1. Arithmetic is about predicting profit and loss:
Answer: ‘Arithmetic tells you how many you lose or win if you know how many you had before you lost or won.’
2. Arithmetic makes demands on the brain:
Answer: ‘Arithmetic is numbers you squeeze from your head to your hand to your pencil to your paper till you get the answer.’
3. Arithmetic deals with impossibly long calculations:
Answer: ‘the number gets bigger and bigger and goes higher and higher and only arithmetic can tell you what the number is when you decide to quit doubling.’
4. Memorizing tables is very difficult:
Answer: ‘and you carry the multiplication table in your head and hope you won’t lose it.’
5. When you get the answer right it makes you very happy:
Answer: ‘Arithmetic is where the answer is right and everything is nice and you can look out of the window and see the blue sky.’
Question 2: To what does the poet compare numbers in arithmetic with and how?
Answer: The poet compares numbers in arithmetic with pigeons that fly in and out of your head. It represents that how during problem solving various figures and calculations occupy our thoughts.
Question 3: Does the poet think that arithmetic is an easy and enjoyable subject? Give reasons for your answer.
Answer: No, the poet doesn’t think that arithmetic is an easy and enjoyable subject. Rather, he describes it as a bugbear and finds being unable to arrive at the right answer extremely frustrating.
He finds it hard to remember the multiplication tables and believes that questions in arithmetic can be quite tricky.
Question 4: Does the poet think it is useful to study this subject?
Answer: The poet who is not fond of this subject doesn’t think that it is useful to study this subject.
Question 5: What according to the poem, arithmetic tells you?
Answer: According to the poem, arithmetic serves as a means of calculating gain or loss based on the data that you already had.
So, these were the Questions & Answers. | {"url":"https://www.wittychimp.com/arithmetic-poem-questions-answers/","timestamp":"2024-11-13T15:45:19Z","content_type":"text/html","content_length":"186936","record_id":"<urn:uuid:f01d29fe-afe0-4f7a-a57c-cc3c516f2871>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00270.warc.gz"} |
Chapter B. Intersections
1. Solving point positions
a. Connecting an unknown position
A point whose absolute position is to be determined has two unknowns: North and East. To solve both unknowns requires two measurements connecting the point to a set of known coordinates. In the case
of a point-to-point forward computation, the unknown point is connected with a length and direction to a single known point, Equations A-1 and A-2.
The two measurements don't have to be from a single point. In Figure B-1, the coordinates of points L and M are known, point P's are not.
Figure B-8
Connected to two points
The forward computation Equations A-1 and A-2 can be written for point P from points L and M:
Equation B-1
Equation B-2
Equations B-2 and B-8 have four unknowns: L[LP], Dir[LP], L[MP], and Dir[MP].
b. Intersections
To solve the position of point P, it must be connected to points L and M using one of the following combinations:
• L[LP] and Dir[MP]
• Dir[LP] and L[MP]
• L[LP] and L[MP]
• Dir[LP] and Dir[MP]
These are the standard COGO intersections since point P is at the intersection of two measurements. There are three intersections based on the type of measurements:
• Distance-direction (or Direction-distance, we'll use these interchangeably)
• Distance-distance
• Direction-direction
Fixing two of the measurements allows solution of the other two. For example, given the coordinates of points L and M, and the azimuths from both to point P:
Point North (ft) (East) Azimuth to P
L 614.80 2255.90 117°22'40"
M 791.53 2517.03 198°10'30"
Equations B-1 and B-2 can be written as:
Equation B-3
Equation B-4
These can be solved simultaneously for L[LP] and L[MP]. Either distance can then be substituted back into its respective side of Equations B-3 and B-4 to perform a forward computation to point P.
The problem is that while a direction-direction intersection is relatively easy to solve simultaneously, the other two types are not. The two equations for a direction-distance intersection will
contain the sine and cosine of one unknown direction; distance-distance intersection equations will contain the sine and cosine of two unknown directions. Because sine and cosine functions are
non-linear, these are not trivial solutions.
So how do we go about solving direction-distance and distance-distance intersections?
d. Solution Methods
For manual solutions, there are two general ways to solve COGO intersections:
For both, we'll look first at its underlying geometry then how it's applied for the different types of intersections.
2. Intersection limitations
In basic COGO intersections there are just enough measurements made to solve for the unknown location: two measurements to solve two unknowns. An error in one or more measurement will not be apparent
in the computations unless the error results in an impossible geometry condition. While COGO provides powerful computational tools, they can't make up for bad or erroneous data. The surveyor should
always include additional measurements in order to provide an independent check. | {"url":"https://jerrymahun.com/index.php/home/open-access/12-iv-cogo/22-cogo-chap-b?showall=1","timestamp":"2024-11-09T10:19:23Z","content_type":"application/xhtml+xml","content_length":"18583","record_id":"<urn:uuid:4d6ebf48-23c0-4d6c-936d-e46067e613a0>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00485.warc.gz"} |
in Year 5?
What maths is taught in Year 5?
In Year 5, the national curriculum says that children will learn to: add and subtract whole numbers with more than 4 digits, including using formal written methods (columnar addition and subtraction)
add and subtract numbers mentally with increasingly large numbers.
What should a Year 3 know in maths?
Children will learn to: count from 0 in multiples of 4, 8, 50 and 100; find 10 or 100 more or less than a given number. recognise the place value of each digit in a three-digit number (hundreds,
tens, ones) compare and order numbers up to 1000. identify, represent and estimate numbers using different representations.
What do you expect in Year 5?
Children in Year 5 will be expected to be confident enough with addition, subtraction, multiplication and division to know which one to use in what situation. They need to be confident in their
methods for using all four operations with larger numbers (three digits and then four digits).
What maths is covered in KS2?
The KS2 maths curriculum is broken down into the main topics that pupils learn across their years at school.
• Number & place value.
• Addition & subtraction.
• Multiplication & division.
• Fractions, decimals and percentages.
• Measurement.
• Geometry (properties of shapes)
• Geometry (position and direction)
• Statistics.
What level is KS3 maths?
KS3 covers Years 7, 8 and 9: the first three years of secondary school. Children in KS3 have to study 12 (or 13) compulsory subjects: English. Maths.
What do you learn in y8 maths?
Year 8 children cover 6 different areas in year 8 maths: Number, Algebra, Ratio and proportion, Geometry, Probability and Statistics. They will build on the work that they have done in year 7 as well
as being introduced to some new concepts.
What should I teach in Year 5?
Children will be learning about fractions, decimals and percentages. They will need to calculate the area and perimeter of different shapes. Children will need to solve measurement problems that
involve converting between units of measurement.
What times tables should Year 3 learn?
With lots of multiplications to learn in Year 3, learning them in a specific order can really help. The 4 times table is a great place to begin, as the number rules your child will have picked up
from the 2 times table will come into play.
What level should Year 3 be working at?
C means that a child is working at the lower end of the level. B means that he’s working comfortably at that level….Each National Curriculum level was divided into sub-levels:
Year 1 Level 1b
Year 2 Level 2a-c
Year 3 Level 2a-3b
Year 4 Level 3
Year 5 Level 3b-4c
What kind of math do they teach in 3rd grade?
In third grade, multiplication and division are introduced. A majority of the year is spent focusing on the understanding of these two operations and the relationship between them. By the end of
third grade, your child should have all their multiplication and division facts (up to 100) memorized.
What is mathematics curriculum?
1. Mathematics curriculum is the “ plan for the experiences that learners will encounter, as well as the actual experiences they do encounter, that are designed to help them reach specified
mathematics objectives” ( Remillard & Heck, 2014 , p. 707).
What are the Year 5 primary maths worksheets?
These worksheets cover every part of the Year 5 primary maths curriculum, to help your children practise and gain confidence in their understanding ahead of Year 6 and the KS2 SATs. Their focus is on
retrieval practice – going over topics that children should already have covered and helping them strengthen their knowledge and understanding.
What topics are covered in primary 3 (P3) Maths?
The Primary 3 (P3) topics that are covered under Numbers are Whole Numbers, Addition, Subtraction, Multiplication, Division, Fractions and Money. Here’s the breakdown of the skills in each topic:
What math should be taught in primary schools?
Pupils should be taught to: use place value, known and derived facts to multiply and divide mentally, including: multiplying by 0 and 1; dividing by 1; multiplying together 3 numbers multiply
two-digit and three-digit numbers by a one-digit number using formal written layout
What is the National Curriculum for maths?
The national curriculum for mathematics aims to ensure that all pupils: become fluent in the fundamentals of mathematics, including through varied and frequent practice with increasingly complex
problems over time, so that pupils develop conceptual understanding and the ability to recall and apply knowledge rapidly and accurately | {"url":"https://www.replicadb4.com/what-maths-is-taught-in-year-5/","timestamp":"2024-11-03T09:15:04Z","content_type":"text/html","content_length":"42580","record_id":"<urn:uuid:b2c437e6-b38e-4c30-ae99-05db4cb4da50>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00609.warc.gz"} |
PDE in Control — Applications in Finance and Learning
Vienna Probability Seminar
Date: Monday, April 22, 2024 15:45 - 16:45
Speaker: Xin Zhang (University of Vienna)
Location: Mondi 2 (I01.01.008), Central Building
Series: Mathematics and CS Seminar
Host: M. Beiglböck, N. Berestycki, L. Erdös, J. Maas, F. Toninelli, E. Schertzer
The theory of stochastic control offers a framework for understanding, analyzing, and designing random systems with the goal of achieving desired outcomes. It finds wide-ranging applications in
finance, engineering, and data science. Stochastic control problems are known to be related to nonlinear parabolic partial differential equations (PDEs), which are powerful tools in problem solving.
In this talk, we will review the viscosity theory of finite dimensional nonlinear parabolic PDEs and discuss their applications in adversarial prediction problems. Subsequently, we will introduce the
mean field control problem, which models the decision-making in large populations of interacting agents. This corresponds to a class of nonlinear parabolic PDEs on Wasserstein space. As a main
result, we will present a comparison principle for such equations and characterize the value function of a filtering problem as the unique viscosity solution. Based on the joint work with Erhan
Bayraktar and Ibrahim Ekren. | {"url":"https://talks-calendar.ista.ac.at/events/4932","timestamp":"2024-11-06T20:17:36Z","content_type":"text/html","content_length":"7523","record_id":"<urn:uuid:ebfbdb62-6397-4626-853c-d847ec6bce42>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00857.warc.gz"} |
Check Your Ability by Seeing If You Can Solve This Math Question
Math equations that involve addition, subtraction, division, and multiplication are perhaps considered some of the easiest to solve. However, that is not usually the case, as many people have always
found themselves sweating over these ‘simple’ math equations.
Are you one of those who sweat? Or are you among the smart lot?
There’s only one way to find out. If you can solve this equation below easily and fast enough, then you certainly need a pat on the back and an addition to the list of smart people.
So, you’re ready?
Here’s the equation. Can you solve it?
It’s a no-brainer.
Well, get your brain to action. Is your brain wondering too much? Hopefully, you have the right answer by now. Write it down but don’t peep to the correct answer down there just yet. The time to view
it will come sooner.
This equation aims to tease your brain a little and test your speed as well. While you’re still wondering whether you got the correct answer or not, let’s have a look at some of the benefits of a
Why You Should Have More Brainteasers
Your brain can inevitably get dull and tired for several reasons. However, when you engage in brainteasers like the equation above, you give your brain more reasons to be alive and active.
More benefits of brainteasers to your brain and body include:
You should fully enjoy all of these benefits every day by making it a habit to solve puzzles.
Now that you know the benefits of a brainteaser let’s find out if you solved it correctly.
The Solution
The correct answer is 61.
But how did it come by; you may ask?
Beginning with the multiplication, 60 x 0 = 0
This leaves the equation reading: 60 + 0 + 1 =?
Making the additions: 60 + 0 + 1 = 61
Hence the correct answer is 61.
Congratulations if that was your answer too, you did well. You’ve passed this math test and need to challenge your friends to try it too. Please share it with your Facebook friends and let them have
fun too.
Leave a Comment | {"url":"https://getbeasts.com/check-your-ability-by-seeing-if-you-can-solve-this-math-question/","timestamp":"2024-11-13T21:10:42Z","content_type":"text/html","content_length":"138206","record_id":"<urn:uuid:cedf4120-e267-49dd-abb9-9fcd5e906e94>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00840.warc.gz"} |
30.1: The Process of Statistical Modeling
Last updated
Page ID
\( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \)
\( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\)
( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\)
\( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\)
\( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\)
\( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\)
\( \newcommand{\Span}{\mathrm{span}}\)
\( \newcommand{\id}{\mathrm{id}}\)
\( \newcommand{\Span}{\mathrm{span}}\)
\( \newcommand{\kernel}{\mathrm{null}\,}\)
\( \newcommand{\range}{\mathrm{range}\,}\)
\( \newcommand{\RealPart}{\mathrm{Re}}\)
\( \newcommand{\ImaginaryPart}{\mathrm{Im}}\)
\( \newcommand{\Argument}{\mathrm{Arg}}\)
\( \newcommand{\norm}[1]{\| #1 \|}\)
\( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\)
\( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\AA}{\unicode[.8,0]{x212B}}\)
\( \newcommand{\vectorA}[1]{\vec{#1}} % arrow\)
\( \newcommand{\vectorAt}[1]{\vec{\text{#1}}} % arrow\)
\( \newcommand{\vectorB}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vectorC}[1]{\textbf{#1}} \)
\( \newcommand{\vectorD}[1]{\overrightarrow{#1}} \)
\( \newcommand{\vectorDt}[1]{\overrightarrow{\text{#1}}} \)
\( \newcommand{\vectE}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{\mathbf {#1}}}} \)
\( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \)
\(\newcommand{\avec}{\mathbf a}\) \(\newcommand{\bvec}{\mathbf b}\) \(\newcommand{\cvec}{\mathbf c}\) \(\newcommand{\dvec}{\mathbf d}\) \(\newcommand{\dtil}{\widetilde{\mathbf d}}\) \(\newcommand{\
evec}{\mathbf e}\) \(\newcommand{\fvec}{\mathbf f}\) \(\newcommand{\nvec}{\mathbf n}\) \(\newcommand{\pvec}{\mathbf p}\) \(\newcommand{\qvec}{\mathbf q}\) \(\newcommand{\svec}{\mathbf s}\) \(\
newcommand{\tvec}{\mathbf t}\) \(\newcommand{\uvec}{\mathbf u}\) \(\newcommand{\vvec}{\mathbf v}\) \(\newcommand{\wvec}{\mathbf w}\) \(\newcommand{\xvec}{\mathbf x}\) \(\newcommand{\yvec}{\mathbf y}
\) \(\newcommand{\zvec}{\mathbf z}\) \(\newcommand{\rvec}{\mathbf r}\) \(\newcommand{\mvec}{\mathbf m}\) \(\newcommand{\zerovec}{\mathbf 0}\) \(\newcommand{\onevec}{\mathbf 1}\) \(\newcommand{\real}
{\mathbb R}\) \(\newcommand{\twovec}[2]{\left[\begin{array}{r}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\ctwovec}[2]{\left[\begin{array}{c}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\threevec}
[3]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\cthreevec}[3]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\fourvec}[4]{\left[\begin{array}
{r}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\cfourvec}[4]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\fivevec}[5]{\left[\begin{array}{r}#1 \\ #2 \\
#3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\cfivevec}[5]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\mattwo}[4]{\left[\begin{array}{rr}#1 \amp
#2 \\ #3 \amp #4 \\ \end{array}\right]}\) \(\newcommand{\laspan}[1]{\text{Span}\{#1\}}\) \(\newcommand{\bcal}{\cal B}\) \(\newcommand{\ccal}{\cal C}\) \(\newcommand{\scal}{\cal S}\) \(\newcommand{\
wcal}{\cal W}\) \(\newcommand{\ecal}{\cal E}\) \(\newcommand{\coords}[2]{\left\{#1\right\}_{#2}}\) \(\newcommand{\gray}[1]{\color{gray}{#1}}\) \(\newcommand{\lgray}[1]{\color{lightgray}{#1}}\) \(\
newcommand{\rank}{\operatorname{rank}}\) \(\newcommand{\row}{\text{Row}}\) \(\newcommand{\col}{\text{Col}}\) \(\renewcommand{\row}{\text{Row}}\) \(\newcommand{\nul}{\text{Nul}}\) \(\newcommand{\var}
{\text{Var}}\) \(\newcommand{\corr}{\text{corr}}\) \(\newcommand{\len}[1]{\left|#1\right|}\) \(\newcommand{\bbar}{\overline{\bvec}}\) \(\newcommand{\bhat}{\widehat{\bvec}}\) \(\newcommand{\bperp}{\
bvec^\perp}\) \(\newcommand{\xhat}{\widehat{\xvec}}\) \(\newcommand{\vhat}{\widehat{\vvec}}\) \(\newcommand{\uhat}{\widehat{\uvec}}\) \(\newcommand{\what}{\widehat{\wvec}}\) \(\newcommand{\Sighat}{\
widehat{\Sigma}}\) \(\newcommand{\lt}{<}\) \(\newcommand{\gt}{>}\) \(\newcommand{\amp}{&}\) \(\definecolor{fillinmathshade}{gray}{0.9}\)
There is a set of steps that we generally go through when we want to use our statistical model to test a scientific hypothesis:
1. Specify your question of interest
2. Identify or collect the appropriate data
3. Prepare the data for analysis
4. Determine the appropriate model
5. Fit the model to the data
6. Criticize the model to make sure it fits properly
7. Test hypothesis and quantify effect size
Let’s look at a real example. In 2007, Christopher Gardner and colleagues from Stanford published a study in the Journal of the American Medical Association titled “Comparison of the Atkins, Zone,
Ornish, and LEARN Diets for Change in Weight and Related Risk Factors Among Overweight Premenopausal Women The A TO Z Weight Loss Study: A Randomized Trial” (Gardner et al. 2007).
30.1.1 1: Specify your question of interest
According to the authors, the goal of their study was:
To compare 4 weight-loss diets representing a spectrum of low to high carbohydrate intake for effects on weight loss and related metabolic variables.
30.1.2 2: Identify or collect the appropriate data
To answer their question, the investigators randomly assigned each of 311 overweight/obese women to one of four different diets (Atkins, Zone, Ornish, or LEARN), and measured their weight and other
measures of health over time.
The authors recorded a large number of variables, but for the main question of interest let’s focus on a single variable: Body Mass Index (BMI). Further, since our goal is to measure lasting changes
in BMI, we will only look at the measurement taken at 12 months after onset of the diet.
30.1.3 3: Prepare the data for analysis
Figure 30.1: Violin plots for each condition, with the 50th percentile (i.e the median) shown as a black line for each group.
The actual data from the A to Z study are not publicly available, so we will use the summary data reported in their paper to generate some synthetic data that roughly match the data obtained in their
study. Once we have the data, we can visualize them to make sure that there are no outliers. Violin plots are useful to see the shape of the distributions, as shown in Figure 30.1. Those data look
fairly reasonable - in particular, there don’t seem to be any serious outliers. However, we can see that the distributions seem to differ a bit in their variance, with Atkins and Ornish showing
greater variability than the others.
This means that any analyses that assume the variances are equal across groups might be inappropriate. Fortunately, the ANOVA model that we plan to use is fairly robust to this.
30.1.4 4. Determine the appropriate model
There are several questions that we need to ask in order to determine the appropriate statistical model for our analysis.
• What kind of dependent variable?
□ BMI : continuous, roughly normally distributed
• What are we comparing?
□ mean BMI across four diet groups
□ ANOVA is appropriate
• Are observations independent?
□ Random assignment and use of difference scores should ensure that the assumption of independence is appropriate
30.1.5 5. Fit the model to the data
Let’s run an ANOVA on BMI change to compare it across the four diets. It turns out that we don’t actually need to generate the dummy-coded variables ourselves; if we pass lm() a categorical variable,
it will automatically generate them for us.
## Call:
## lm(formula = BMIChange12Months ~ diet, data = dietDf)
## Residuals:
## Min 1Q Median 3Q Max
## -8.14 -1.37 0.07 1.50 6.33
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -1.622 0.251 -6.47 3.8e-10 ***
## dietLEARN 0.772 0.352 2.19 0.0292 *
## dietOrnish 0.932 0.356 2.62 0.0092 **
## dietZone 1.050 0.352 2.98 0.0031 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## Residual standard error: 2.2 on 307 degrees of freedom
## Multiple R-squared: 0.0338, Adjusted R-squared: 0.0243
## F-statistic: 3.58 on 3 and 307 DF, p-value: 0.0143
Note that lm automatically generated dummy variables that correspond to three of the four diets, leaving the Atkins diet without a dummy variable. This means that the intercept models the Atkins
diet, and the other three variables model the difference between each of those diets and the Atkins diet. By default, lm() treats the first value (in alphabetical order) as the baseline.
30.1.6 6. Criticize the model to make sure it fits properly
The first thing we want to do is to critique the model to make sure that it is appropriate. One thing we can do is to look at the residuals from the model. In the left panel of Figure ??, we plot the
residuals for each individual grouped by diet, which are positioned by the mean for each diet. There are no obvious differences in the residuals across conditions, although there are a couple of
datapoints (#34 and #304) that seem to be slight outliers.
Another important assumption of the statistical tests that we apply to linear models is that the residuals from the model are normally distributed. The right panel of Figure ?? shows a Q-Q
(quantile-quantile) plot, which plots the residuals against their expected values based on their quantiles in the normal distribution. If the residuals are normally distributed then the data points
should fall along the dashed line — in this case it looks pretty good, except for those two outliers that are once again apparent here.
30.1.7 7. Test hypothesis and quantify effect size
First let’s look back at the summary of results from the ANOVA, shown in Step 5 above. The significant F test shows us that there is a significant difference between diets, but we should also note
that the model doesn’t actually account for much variance in the data; the R-squared value is only 0.03, showing that the model is only accounting for a few percent of the variance in weight loss.
Thus, we would not want to overinterpret this result.
The significant result also doesn’t tell us which diets differ from which others. We can find out more by comparing means across conditions using the emmeans() (“estimated marginal means”) function:
## diet emmean SE df lower.CL upper.CL .group
## Atkins -1.62 0.251 307 -2.11 -1.13 a
## LEARN -0.85 0.247 307 -1.34 -0.36 ab
## Ornish -0.69 0.252 307 -1.19 -0.19 b
## Zone -0.57 0.247 307 -1.06 -0.08 b
## Confidence level used: 0.95
## P value adjustment: tukey method for comparing a family of 4 estimates
## significance level used: alpha = 0.05
The letters in the rightmost column show us which of the groups differ from one another, using a method that adjusts for the number of comparisons being performed. This shows that Atkins and LEARN
diets don’t differ from one another (since they share the letter a), and the LEARN, Ornish, and Zone diets don’t differ from one another (since they share the letter b), but the Atkins diet differs
from the Ornish and Zone diets (since they share no letters).
30.1.7.1 Bayes factor
Let’s say that we want to have a better way to describe the amount of evidence provided by the data. One way we can do this is to compute a Bayes factor, which we can do by fitting the full model
(including diet) and the reduced model (without diet) and then comparing their fit. For the reduced model, we just include a 1, which tells the fitting program to only fit an intercept. Note that
this will take a few minutes to run.
This shows us that there is very strong evidence (Bayes factor of nearly 100) for differences between the diets.
30.1.8 What about possible confounds?
If we look more closely at the Garder paper, we will see that they also report statistics on how many individuals in each group had been diagnosed with metabolic syndrome, which is a syndrome
characterized by high blood pressure, high blood glucose, excess body fat around the waist, and abnormal cholesterol levels and is associated with increased risk for cardiovascular problems. Let’s
first add those data into the summary data frame:
Table 30.1: Presence of metabolic
syndrome in each group in the
AtoZ study.
Diet N P(metabolic syndrome)
Atkins 77 0.29
LEARN 79 0.25
Ornish 76 0.38
Zone 79 0.34
Looking at the data it seems that the rates are slightly different across groups, with more metabolic syndrome cases in the Ornish and Zone diets – which were exactly the diets with poorer outcomes.
Let’s say that we are interested in testing whether the rate of metabolic syndrome was significantly different between the groups, since this might make us concerned that these differences could have
affected the results of the diet outcomes.
30.1.8.1 Determine the appropriate model
• What kind of dependent variable?
• What are we comparing?
□ proportion with metabolic syndrome across four diet groups
□ chi-squared test for goodness of fit is appropriate against null hypothesis of no difference
Let’s compute that statistic using the chisq.test() function. Here we will use the simulate.p.value option, which will help deal with the relatively small
## Pearson's Chi-squared test
## data: contTable
## X-squared = 4, df = 3, p-value = 0.3
This test shows that there is not a significant difference between means. However, it doesn’t tell us how certain we are that there is no difference; remember that under NHST, we are always working
under the assumption that the null is true unless the data show us enough evidence to cause us to reject this null hypothesis.
What if we want to quantify the evidence for or against the null? We can do this using the Bayes factor.
## Bayes factor analysis
## --------------
## [1] Non-indep. (a=1) : 0.058 ±0%
## Against denominator:
## Null, independence, a = 1
## ---
## Bayes factor type: BFcontingencyTable, independent multinomial
This shows us that the alternative hypothesis is 0.058 times more likely than the null hypothesis, which means that the null hypothesis is 1/0.058 ~ 17 times more likely than the alternative
hypothesis given these data. This is fairly strong, if not completely overwhelming, evidence. | {"url":"https://stats.libretexts.org/Bookshelves/Introductory_Statistics/Statistical_Thinking_for_the_21st_Century_(Poldrack)/30%3A_Practical_statistical_modeling/30.01%3A_The_Process_of_Statistical_Modeling","timestamp":"2024-11-02T19:03:04Z","content_type":"text/html","content_length":"144895","record_id":"<urn:uuid:b05b1a60-d342-4668-afbf-b05c99f505f6>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00468.warc.gz"} |
The area of an equilateral triangle ABC is 17320.5 cm². With each vertex of the triangle as centre, a circle is drawn with radius equal to half the length of the side of the triangle (see Figure).
Lost your password? Please enter your email address. You will receive a link and will create a new password via email.
E-Mail* [ ]
Captcha*[ ] Click on image to update the captcha. | {"url":"https://discussion.tiwariacademy.com/question/the-area-of-an-equilateral-triangle-abc-is-17320-5-cm%C2%B2-with-each-vertex-of-the-triangle-as-centre-a-circle-is-drawn-with-radius-equal-to-half-the-length-of-the-side-of-the-triangle-see-figure/","timestamp":"2024-11-07T20:28:03Z","content_type":"text/html","content_length":"155494","record_id":"<urn:uuid:f232405e-0407-4de7-9c2a-f6034b73d152>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00013.warc.gz"} |
Symmetry (from Greek συμμετρία symmetria "agreement in dimensions, due proportion, arrangement") in everyday language refers to a sense of harmonious and beautiful proportion and balance. In
mathematics, "symmetry" has a more precise definition, that an object is invariant to any of various transformations; including reflection, rotation or scaling. Although these two meanings of
"symmetry" can sometimes be told apart, they are related, so they are here discussed together.
Mathematical symmetry may be observed with respect to the passage of time; as a spatial relationship; through geometric transformations; through other kinds of functional transformations; and as an
aspect of abstract objects, theoretic models, language, music and even knowledge itself.
This article describes symmetry from three perspectives: in mathematics, including geometry, the most familiar type of symmetry for many people; in science and nature; and in the arts, covering
architecture, art and music.
The opposite of symmetry is asymmetry.
A geometric shape or object is symmetric if it can be divided into two or more identical pieces that are arranged in an organized fashion. This means that an object is symmetric if there is a
transformation that moves individual pieces of the object but doesn't change the overall shape. The type of symmetry is determined by the way the pieces are organized, or by the type of
A dyadic relation R is symmetric if and only if, whenever it's true that Rab, it's true that Rba. Thus, "is the same age as" is symmetrical, for if Paul is the same age as Mary, then Mary is the same
age as Paul. | {"url":"http://piglix.com/piglix/Symmetry","timestamp":"2024-11-07T20:35:20Z","content_type":"text/html","content_length":"12970","record_id":"<urn:uuid:d4feb899-4676-433b-90b4-2d97b6cc5854>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00652.warc.gz"} |
Chromatic Number - (Ramsey Theory) - Vocab, Definition, Explanations | Fiveable
Chromatic Number
from class:
Ramsey Theory
The chromatic number of a graph is the smallest number of colors needed to color the vertices of the graph such that no two adjacent vertices share the same color. This concept is crucial in
understanding various properties of graphs and their coloring, connecting to broader themes like Rado's theorem, edge coloring, and Ramsey numbers.
congrats on reading the definition of Chromatic Number. now let's actually learn it.
5 Must Know Facts For Your Next Test
1. The chromatic number is denoted as $$ ext{ฯ }(G)$$ for a graph G.
2. Finding the chromatic number is NP-hard for general graphs, meaning there's no known efficient way to compute it for all cases.
3. A complete graph with n vertices has a chromatic number of n since every vertex is connected to every other vertex.
4. For bipartite graphs, the chromatic number is always 2, as you can color one set with one color and the other set with another color.
5. The relationship between chromatic numbers and Ramsey numbers helps in establishing bounds for edge coloring problems.
Review Questions
• How does the concept of chromatic number apply to understanding the complexities in Rado's Theorem?
□ The chromatic number plays a significant role in Rado's Theorem by illustrating how certain structures can be formed in large enough graphs. Rado's Theorem relates to the existence of
monochromatic subsets in colored graphs, where the chromatic number helps determine how many colors can be used before certain configurations must appear. Thus, analyzing the chromatic number
offers insights into when we can guarantee specific patterns or structures exist within a graph.
• Discuss the implications of chromatic numbers in edge coloring and how they relate to multicolor Ramsey numbers.
□ Chromatic numbers have direct implications in edge coloring because they define how edges can be colored without creating adjacent edges of the same color. In multicolor Ramsey theory,
chromatic numbers can indicate thresholds at which certain properties must hold. For example, if a graph's edge coloring requires k colors, this could relate to Ramsey numbers that ensure
there are cliques or independent sets of a certain size when colored in that manner.
• Evaluate the significance of chromatic numbers in real-world applications, especially considering its role in network design and scheduling.
□ Chromatic numbers are crucial in real-world scenarios such as network design and scheduling, where resources must be allocated without conflict. For example, when scheduling exams for
students from different classes, each exam can represent a vertex, and conflicts between exams would represent edges. The minimum number of time slots needed corresponds to the chromatic
number, ensuring that no student has overlapping exams. This demonstrates how understanding chromatic numbers extends beyond theory into practical applications, influencing efficient resource
ยฉ 2024 Fiveable Inc. All rights reserved.
APยฎ and SATยฎ are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website. | {"url":"https://library.fiveable.me/key-terms/ramsey-theory/chromatic-number","timestamp":"2024-11-11T17:51:42Z","content_type":"text/html","content_length":"157981","record_id":"<urn:uuid:b5dd0537-26a1-44d0-bbb6-90abd3e96f84>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00054.warc.gz"} |
Tight Space Complexity of the Coin Problem
In the coin problem we are asked to distinguish, with probability at least 2/3, between n i.i.d. coins which are heads with probability 12}+β from ones which are heads with probability 12}-β. We are
interested in the space complexity of the coin problem, corresponding to the width of a read-once branching program solving the problem. The coin problem becomes more difficult as β becomes smaller.
Statistically, it can be solved whenever β= Ω(n-1/2}), using counting. It has been previously shown that for β=O(n-1/2}), counting is essentially optimal (equivalently, width poly (n) is necessary
[Braverman-Garg-Woodruff FOCS'20]). On the other hand, the coin problem only requires O(log n) width for β > n-c for any constant c > log2}(√5}-1)\approx 0.306 (following low-width simulation of
AND-OR tree of [Valiant Journal of Algorithms'84]). In this paper, we close the gap between the bounds, showing a tight threshold between the values of β=n-c where O(log n) width suffices and the
regime where poly (n) width is needed, with a transition at c=1/3. This gives a complete characterization (up to constant factors) of the memory complexity of solving the coin problem, for all values
of bias β. We introduce new techniques in both bounds. For the upper bound, we give a construction based on recursive majority that does not require a memory stack of size log n bits. For the lower
bound, we introduce new combinatorial techniques for analyzing progression of the success probabilities in read-once branching programs.
Publication series
Name Proceedings - Annual IEEE Symposium on Foundations of Computer Science, FOCS
Volume 2022-February
ISSN (Print) 0272-5428
Conference 62nd IEEE Annual Symposium on Foundations of Computer Science, FOCS 2021
Country/Territory United States
City Virtual, Online
Period 2/7/22 → 2/10/22
All Science Journal Classification (ASJC) codes
• amplification
• branching program
• coin problem
• complexity
• lower bounds
Dive into the research topics of 'Tight Space Complexity of the Coin Problem'. Together they form a unique fingerprint. | {"url":"https://collaborate.princeton.edu/en/publications/tight-space-complexity-of-the-coin-problem","timestamp":"2024-11-02T14:57:16Z","content_type":"text/html","content_length":"55021","record_id":"<urn:uuid:bb29ff6e-f655-4965-a582-244e13f1c3c6>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00307.warc.gz"} |
Search-based Program Synthesis
Writing programs that are both correct and efficient is challenging. A potential solution lies in program synthesis aimed at automatic derivation of an executable implementation (the “how”) from a
high-level logical specification of the desired input-to-output behavior (the “what”). A mature synthesis technology can have a transformative impact on programmer productivity by liberating the
programmer from low-level coding details. For instance, for the classical computational problem of sorting a list of numbers, the programmer has to simply specify that given an input array A of n
numbers, compute an output array B consisting of exactly the same numbers as A such that B[i] ≤ B[i + 1] for 1 ≤ i < n, leaving it to th synthesizer to figure out the sequence of steps needed for the
desired computation. Traditionally, program synthesis is formalized as a problem in deductive theorem proving:^17 A program is derived from the constructive proof of the theorem that states that for
all inputs, there exists an output, such that the desired correctness specification holds. Building automated and scalable tools to solve this problem has proved to be difficult. A recent alternative
to formalizing synthesis allows the programmer to supplement the logical specification with a syntactic template that constrains the space of allowed implementations and the solution strategies focus
on search algorithms for efficiently exploring this space. The resulting search-based program synthesis paradigm is emerging as an enabling technology for both designing more intuitive programming
notations and aggressive program optimizations.
ASJC Scopus subject areas
Dive into the research topics of 'Search-based Program Synthesis'. Together they form a unique fingerprint. | {"url":"https://cris.bgu.ac.il/en/publications/search-based-program-synthesis","timestamp":"2024-11-06T20:30:11Z","content_type":"text/html","content_length":"57593","record_id":"<urn:uuid:ad52436b-d433-4189-91f2-817f171ea513>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00337.warc.gz"} |
Embedded Lab
A pyranometer or solar irradiation tester is measurement tool that is a must-have for every professional in renewable energy sector. However, owing one is not easy because it is both expensive and
rare. It is expensive because it uses highly calibrated components and it is rare because it is not an ordinary multi-meter that is available in common hardware shops.
Personally, I have a long professional career in the field of LED lighting, renewable energy (mainly solar), Lithium and industrial battery systems, electronics and embedded-systems. I have been at
the core of designing, testing, commissioning and analyzing some of the largest solar power projects of my country – a privilege that only a few enjoyed. In many of my solar-energy endeavors, I
encountered several occasions that required me to estimate solar isolation or solar irradiation. Such situations included testing solar PV module efficiencies, tracking solar system performance with
seasons, weather, sky conditions, dust, Maximum Power Point Tracking (MPPT) algorithms of solar inverters, chargers, etc.
I wanted a simply way with which I can measure solar irradiance with some degree of accuracy. Having the advantage of working with raw solar cells, tools that test cells and complete PV modules
combined with theoretical studies, I found out a rudimentary method of measuring solar irradiation.
In simple terms, solar insolation is defined as the amount of solar radiation incident on the surface of the earth in a day and is measured in kilowatt-hour per square meter per day (kWh/m²/day).
Solar irradiation, on the other hand, is the power per unit area received from the Sun in the form of electromagnetic radiation as measured in the wavelength range of the measuring instrument. It is
measured in watt per square meter (W/m²). The solar insolation is, therefore, an aggregate of solar irradiance over a day. Sun energy collection by solar photovoltaic (PV) modules is depended on
solar irradiance and this in turn has impact on devices using this energy. For example, in a cloudy day a solar battery charger may not be able to fully charge a battery attached to it. Likewise, in
a sunny day the opposite may come true.
Solar PV Cells
For making a solar irradiation meter we would obviously need a solar cell. In terms of technology, there are three kinds of cells commonly available in the market and these are shown below.
Basic Solar Math
Typically, Earth-bound solar irradiation is about 1350 W/m². About 70 – 80% of this irradiation makes to Earth’s surface and rest is absorbed and reflected. Thus, the average irradiation at surface
is about 1000 W/m². Typical cell surface area of a poly crystalline cell having dimensions 156 mm x 156 mm x 1mm is:
The theoretical or ideal wattage of such a cell should be:
However, the present-day cell power with current technology somewhere between 4 – 4.7 Wp. It is safe to assume an average wattage of a typical poly crystalline cell to be 4.3 Wp. Therefore, cell
efficiency is calculated to be:
72 cells of 4.3 Wp capacity will form a complete solar PV module of 310Wp.
Likewise, the approximate area of a 310 Wp solar panel having dimensions (1960 mm x 991 mm x 40 mm) is:
The theoretical wattage that we should be getting with 1.942 m² area panel is found to be:
Therefore, module/panel efficiency is calculated to be:
Cell efficiency is always higher than module efficiency because in a complete PV module there are many areas where energy is not harvested. These areas include bus-bar links, cell-to-cell gaps, guard
spaces, aluminum frame, etc. As technology improves, these efficiencies also improve.
Monocrystalline cells have same surface are but more wattage, typically between 5.0 to 5.5 Wp. We can assume an average cell wattage of 5.2 Wp.
Therefore, cell efficiency is calculated to be:
Typically, monocrystalline PV modules consist of 60 cells and so 60 such cells are arranged then the power of a complete PV module is calculated to be:
The area of that PV module having dimensions 1650 mm x 991 mm x 38 mm would be:
The theoretical wattage that we should be getting with 1.635 m² area panel is found to be:
Therefore, module/panel efficiency is calculated to be:
These calculations prove the following points:
• Monocrystalline PV modules can harvest the same amount of energy as their polycrystalline counterparts with lesser area.
• Monocrystalline cells are more efficient than poly crystalline cells.
• Monocrystalline PV modules are more efficient than polycrystalline PV modules.
• 156 mm x 156 mm cell dimension is typical but there are also cells of other dimensions like 125 mm x 125 mm x 1 mm.
• Though I compared monocrystalline and polycrystalline cells and modules here the same points are true for thin film or amorphous cells.
• Currently, there are more efficient cells and advanced technologies, and thus higher power PV modules with relatively smaller area. For instance, at present there are PV modules from Tier 1
manufacturers that have powers well above 600W but their physical dimensions are similar to 300Wp modules.
Solar PV modules can be generally categorized in four categories based on the number of solar cells in series and they usually have the following characteristics:
36 Cells
• 36 x 0.6 V = 21.6V (i.e., 12V Panel)
• VOC = 21.6V
• VMPP = 16 – 18V
• For modules from 10 – 120 Wp
60 Cells
• 60 x 0.6 V = 36V
• VOC = 36V
• VMPP = 25 – 29V
• For modules from 50 – 300+ Wp
72 Cells
• 72 x 0.6 V = 43.2V (i.e., 24V Panel)
• VOC = 43.2V
• VMPP = 34 – 38V
• For modules from 150 – 300+ Wp
96 Cells
• 96 x 0.6 V = 57.6V (i.e., 24V Panel)
• VOC = 57.6V
• VMPP = 42 – 48V
• For modules from 250 – 300+ Wp
• Rare
One cell may not be fully used and could be cut with laser scribing machines to meet voltage and power requirements. Usually, the above-mentioned series connections are chosen because they are the
standard ones. Once a cell series arrangement is chosen, the cells are cut accordingly to match required power. A by-cut cell has lesser area and therefore lesser power. Cutting a cell reduces its
current and not its voltage. Thus, power is reduced. 36 cell modules are also sometimes referred as 12V panels although there is no 12V rating in them and this is because such modules can directly
charge a 12V lead-acid battery but cannot charge 24V battery systems without the aid of battery chargers. Similarly, 72 cell modules are often referred as 24V panels.
Important Parameters of a Panel/Cell
Open Circuit Voltage (Voc) –no load/open-circuit voltage of cell/panel.
Short Circuit Current (Isc) – current that will flow when the panel/cell is shorted in full sunlight.
Maximum Power Point Voltage (VMPP) – output voltage of a panel at maximum power point or at rated power point of the cell’s/panel’s characteristics curve.
Maximum Power Point Current (IMPP) – output current of a panel at maximum power point or at rated power point of the cell’s/panel’s characteristics curve.
Peak Power (Pmax) – the product of VMPP and IMPP. It is the rated power of a cell’s/PV module at STC. Its unit is watt-peak (WP) and not watts because this is the maximum or peak power of a cell/
Power Tolerance – percentage deviation of power.
STC stands for Standard Temperature Condition or simply lab conditions (25°C).
AM stands for Air Mass – This can be used to help characterize the solar spectrum after solar radiation has traveled through the atmosphere.
E stands for Standard Irradiation of 1000 W/m².
Shown below is the technical specification sticker of a 320 Wp JA Solar PV Module:
According to the sticker the PV module has the following specs at STC and irradiation, E = 1000 W/m²:
VOC = 46.12 V
VMPP = 37.28 V
Isc = 9.09 A
IMPP = 8.58 A
Pmax = 320 WP
From these specs we can estimate and deduce the followings:
VMPP-by-VOC ratio:
This value is always between 76 – 86%.
Similarly, IMPP-by-ISC ratio:
This value is always between 90 – 96%.
The product of these ratios should be as high as possible. Typical values range between 0.74 – 0.79. This figure represents fill-factor (FF) of a PV module or a PV cell.
An ideal PV cell/module should have a characteristics curve represented by the green line. However, the actual line is represented by the blue one. Fill-Factor is best described as the percentage of
ideality. Thus, the greater it is or the closer it is to 100 % the better is the cell/module.
In this case, the FF is calculated to be 0.94 x 0.81 = 0.76 or 76%.
Number of cells is calculated to be:
Therefore, each cell has an open circuit voltage (VOC) of:
IMPP is 8.58 A and ISC is 9.09 A.
Cell power is calculated as follows:
Efficiencies are deduced as follows:
The I-V curves of the PV module shown below show the effect of solar irradiation on performance. It is clear that with variations in solar irradiation power production varies. The same is shown by
the power curve. Voltage and current characteristics follow a similar shape.
The I-V curve shown below demonstrates the effect of temperature on performance. We can see that power production performance deteriorates with rise in temperature.
The characteristics curves of the 320 Wp JA Solar PV module are shown above. From the curves above we can clearly see the following points:
• Current changes with irradiation but voltage changes slightly and so power generation depends on irradiation and current.
• Current changes slightly with temperature but voltage changes significantly. At high temperatures, PV voltage decreases and this leads to change in Maximum Power Point (MPP) point. Thus, at high
temperatures power generation decreases and vice-versa.
• Vmpp-by-Voc ratio remains pretty much the same at all irradiation levels.
• The curves are similar to that of a silicon diode and this is so because a cell is essentially a light sensitive silicon diode.
Impacts of Environment on PV
Effect of Temperature
Let us now see how temperature affects solar module performance. Remember that a solar cell is just like a silicon diode and we know that a silicon diode’s voltage changes with temperature. This
change is about -2 mV/°C. Thus, the formula below demonstrates the effect of temperature:
Suppose the ambient temperature is 35°C and let us consider the same 320 Wp PV module having:
VOC at STC = 46.12V
VMPP at STC = 37.28V
Therefore, the temperature difference from STC is:
Under such circumstance the VMPP should be about 35V but it gets decreased more and fill-factor is ultimately affected.
So clearly there is a shift in VMPP which ultimately affects output power. This is also shown in the following graph:
Effect of Shading, Pollution and Dust
Power generation is also affected by shades, i.e., obstruction of sunlight. Obstacles can be anything from a thin stick to a large tree. Dust and pollution also contribute to shading on microscopic
level. However, broadly speaking, shading can be categorized in two basic categories – Uniform Shading and Non-uniform Shading.
Uniform shading is best described as complete shading of all PV modules or cells of a solar system. This form of shading is mainly caused by dust, clouds and pollution. In such shading conditions, PV
current is affected proportionately with the amount of shading, yielding to decreased power collection. However, fill-factor and efficiencies are literally unaffected.
Non-uniform shading, as its name suggests, is partial shading of some PV modules or cells of a solar system. Causes of such shading include buildings, trees, electric poles, etc. Non-uniform shading
must be avoided because cells that receive less radiation than the others behave like loads, leading to formation of hotspots in long-run. Shadings as such result in decreased efficiency, current and
power generation because of multiple wrong MPP points.
Effect of Other Variables
There are several other variables that have effect on solar energy collection. Variables and weather conditions like rain, humidity, snow, clouds, etc. affect energy harvest while cool, windy and
sunny conditions have positive impact on energy harvest. These factors are region and weather dependent and cannot be generalized.
Basic Solar Geometry
Some basics of solar geometry is needed to be realized to efficiently design solar systems and how sun rays are reach earth. Nature follows a mathematical pattern that it never suddenly or abruptly
alters. The movement of the sun and its position on the sky at any instance can be described by some simple trigonometric functions. The Earth not just orbits the Sun but it tilts and revolves about
its axis. The daily rotation of the Earth about the axis through its North and South poles (also known as celestial poles) is perpendicular to the equator point, however it is not perpendicular to
the plane of the orbit of the Earth. In fact, the measure of tilt or obliquity of the axis of the Earth to a line perpendicular to the plane of its orbit is currently about 23.5°.
The plane of the Sun is the plane parallel to the Earth’s celestial equator and through the center of the sun. The Earth passes alternately above and below this plane making one complete elliptic
cycle every year.
In summer solstice, the Sun shines down most directly on the Tropic of Cancer in the northern hemisphere, making an angle δ = 23.5° with the equatorial plane. Likewise, in winter solstice, it shines
on the Tropic of Capricorn, making an angle δ = -23.5° with the equatorial plane. In equinoxes, this angle δ is 0°. Here δ is called the angle of declination. In simple terms the angle of declination
represents the amount of Earth’s tilt or obliquity.
The angle of declination, δ, for Nth day of a year can be deduced using the following formula:
Here, N = 1 represents 1^st January.
The angle of declination has effect on day duration, sun travel path and thus it dictates how we should tilt PV modules with respect to ground with seasonal changes.
The next most important thing to note is the geographical location in terms of GPS coordinates (longitude, λ East and latitude, ϕ North). This is because geo location also has impact on the
aforementioned. Northern and Southern Hemispheres have several differences.
Shown below is a graphical representation of some other important solar angles. Here:
• The angle of solar elevation, ά, is the angular measure of the Sun’s rays above the horizon.
• The solar zenith angle, Z, is the angle between an imaginary point directly above a given location, on the imaginary celestial sphere. and the center of the Sun‘s disc. This is similar to tilt
The azimuth angle, A, is a local angle between the direction of due North and that of the perpendicular projection of the Sun down onto the horizon line measured clockwise.
The angle of solar elevation, ά, at noon for location on Northern Hemisphere can be deduced as follows:
Similarly, for location on Southern Hemisphere, the angle of solar elevation at noon can be deduced as follows:
Here ϕ represents latitude and δ represents angle of declination. Zenith/Tilt angle is found to be:
Determining azimuth angle is not simple as additional info are needed. The first thing that we will need is to the sunrise equation. This is used to determine the local time of sunrise and sunset at
a given latitude, ϕ at a given solar declination angle, δ. The sunrise equation is given by the formula below:
Here ω is the hour angle. ω is between -180° to 0° at sunrise and between 0° to 180° at sunset.
If [tan(ϕ).tan(δ)] ≥ 1, there is no sunset on that day. Likewise, if [tan(ϕ).tan(δ)] ≤ -1, there is no sunrise on that day.
The hour angle, ω is the angular distance between the meridian of the observer and the meridian whose
plane contains the sun. When the sun reaches its highest point in the sky at noon, the hour angle is zero. At this time the Sun is said to be ‘due south’ (or ‘due north’, in the Southern Hemisphere)
since the meridian plane of the observer contains the Sun. On every hour the hour angle increases by 15°.
From hour angle, we can determine the local sunrise and sunset times as follows:
For a given location, hour angle and date the angle of solar elevation can be expressed as follows:
Since we have the hour angle info along with sunrise and sunset times, we can now determine the azimuth angle. It is expressed as follows:
Solving the above equation for A will yield in azimuth angle.
Knowing azimuth and solar elevation angles help us determine the length and location of the shadow of an object. These are important for solar installations.
The length of the shadow is as in the figure above is found to be:
From the geometric analysis, it can be shown that solar energy harvest will increase if PV modules can be arranged as such to follow the Sun. This is the concept of solar tracking system.
If the Sun can be tracked according to these math models, the maximum possible harvest can be obtained. However, some form of solar-tracking structures will be needed. An example of sun tracking
system is shown above. This was designed by a friend of mine and myself back in 2012. He designed the mechanical section while I added the intelligence in form of embedded-system coding and tracking
controller design.
Here’s a photo of the sun tracking controller that I designed. It used a sophisticate algorithm to track the sun.
Building the Device
In order to build the solar irradiance meter, we will need a solar cell or a PV module of known characteristics. Between a single cell and module, I would recommend for a cell because:
• Cells have lower power than complete modules.
• Cells have almost no framing structure.
• Cells are small and light-weight.
• Cells are individual and so there is no need to take account of series-parallel combination.
We would see why these are important as we move forward.
For the project, I used a cheap amorphous solar cell as shown in the photo below. It looks like a crystalline cell under my table lamp but actually it is an amorphous one. It can be purchased from
AliExpress, Amazon or similar online platform.
With a cell test machine, it gave a characteristic curve and data as shown below:
From this I-V characteristics curve, we can see its electrical parameters. The cell has a physical dimension of 86mm x 56mm and so it has an area of 0.004816m².
From the calculation which I already discussed, irradiation measurement needs two known components – the total cell area and the maximum power it can generate. We know both of these data. Now we just
have to formulate how we can use these data to measure irradiation.
Going back to a theory taught in the first year of electrical engineering, the maximum power transfer theorem, we know that to obtain maximum external power from a source with a finite internal
resistance, the resistance of the load must equal the resistance of the source as viewed from its output terminals. This is the theory behind my irradiation measurement.
From the cell electrical characteristics data, we can find the ideal resistance that is needed to make this theory work.
It is important to note that we would just be focusing on maximum power point data only and that is because this is the maximum possible output that the cell will provide at the maximum irradiation
level of 1000W/m². The ideal resistance is calculated as follows:
and so does the electric current. The voltage that would be induced across RMPP is proportional to this current because according to Ohms law:
The equivalent circuit is as shown below:
The boxed region is the electrical equivalent of a solar cell. At this point it may look complicated but we are not digging inside the box and so it can be considered as a mystery black box.
We know that:
We know the value of resistance and all we have to do is to measure the voltage (V) across the cell. We also know the area of the cell and so we can deduce the value of irradiation, E incident on the
cell according to the formula:
#include "N76E003.h"
#include "SFR_Macro.h"
#include "Function_define.h"
#include "Common.h"
#include "Delay.h"
#include "soft_delay.h"
#include "LCD_2_Wire.h"
#define Vmpp 5.162F
#define Impp 0.056F
#define R_actual (Vmpp / Impp)
#define R_fixed 100.0F //
#define R_Error (R_fixed / R_actual)
#define ADC_Max 4095.0F
#define VDD 3.3F
#define scale_factor 2.0F
#define cell_efficiency 0.065F // 6.5% (Typical Amorphous Cell Efficiency)
#define cell_length 0.0854F // 85.4mm as per inscription on the cell
#define cell_width 0.0563F // 56.3mm as per inscription on the cell
#define effective_area_factor 0.90F // Ignoring areas without cell, i.e. boundaries, frames, links, etc
#define cell_area (cell_length * cell_width) // 0.004816 sq.m
#define effective_cell_area (cell_area * effective_area_factor * cell_efficiency) // 0.000281736 sq.m
void setup(void);
unsigned int ADC_read(void);
unsigned int ADC_average(void);
void lcd_print(unsigned char x_pos, unsigned char y_pos, unsigned int value);
void main(void)
unsigned int ADC = 0;
float v = 0;
float V = 0;
float P = 0;
float E = 0;
ADC = ADC_average();
v = ((VDD * ADC) / ADC_Max);
V = (v * scale_factor);
P = (((V * V) / R_fixed) * R_Error);
E = (P / effective_cell_area);
lcd_print(12, 0, (unsigned int)(P * 1000.0));
lcd_print(12, 1, (unsigned int)E);
void setup(void)
LCD_goto(0, 0);
LCD_putstr("PV PWR mW:");
LCD_goto(0, 1);
LCD_putstr("E. W/sq.m:");
unsigned int ADC_read(void)
register unsigned int value = 0x0000;
while(ADCF == 0);
value = ADCRH;
value <<= 4;
value |= ADCRL;
return value;
unsigned int ADC_average(void)
signed char samples = 16;
unsigned long value = 0;
while(samples > 0)
value += ((unsigned long)ADC_read());
value >>= 4;
return ((unsigned int)value);
void lcd_print(unsigned char x_pos, unsigned char y_pos, unsigned int value)
unsigned char ch = 0;
if((value > 999) && (value <= 9999))
ch = (((value % 10000) / 1000) + 0x30);
LCD_goto(x_pos, y_pos);
ch = (((value % 1000) / 100) + 0x30);
LCD_goto((x_pos + 1), y_pos);
ch = (((value % 100) / 10) + 0x30);
LCD_goto((x_pos + 2), y_pos);
ch = ((value % 10) + 0x30);
LCD_goto((x_pos + 3), y_pos);
else if((value > 99) && (value <= 999))
ch = 0x20;
LCD_goto(x_pos, y_pos);
ch = (((value % 1000) / 100) + 0x30);
LCD_goto((x_pos + 1), y_pos);
ch = (((value % 100) / 10) + 0x30);
LCD_goto((x_pos + 2), y_pos);
ch = ((value % 10) + 0x30);
LCD_goto((x_pos + 3), y_pos);
else if((value > 9) && (value <= 99))
ch = 0x20;
LCD_goto(x_pos, y_pos);
ch = 0x20;
LCD_goto((x_pos + 1), y_pos);
ch = (((value % 100) / 10) + 0x30);
LCD_goto((x_pos + 2), y_pos);
ch = ((value % 10) + 0x30);
LCD_goto((x_pos + 3), y_pos);
ch = 0x20;
LCD_goto(x_pos, y_pos);
ch = 0x20;
LCD_goto((x_pos + 1), y_pos);
ch = 0x20;
LCD_goto((x_pos + 2), y_pos);
ch = ((value % 10) + 0x30);
LCD_goto((x_pos + 3), y_pos);
This project is completed with a Nuvoton N76E003 microcontroller. I chose this microcontroller because it is cheap and features a 12-bit ADC. The high-resolution ADC is the main reason for using it
because we are dealing with low power and thus low voltage and current. The solar cell that I used is only about 300mW in terms of power. If the reader is new to Nuvoton N76E003 microcontroller, I
strongly suggest going through my tutorials on this microcontroller here.
Let us first see what definitions have been used in the code. Most are self-explanatory.
#define Vmpp 5.162F
#define Impp 0.056F
#define R_actual (Vmpp / Impp)
#define R_fixed 100.0F //
#define R_Error (R_fixed / R_actual)
#define ADC_Max 4095.0F
#define VDD 3.3F
#define scale_factor 2.0F
#define cell_efficiency 0.065F // 6.5% (Typical Amorphous Cell Efficiency)
#define cell_length 0.0854F // 85.4mm as per inscription on the cell
#define cell_width 0.0563F // 56.3mm as per inscription on the cell
#define effective_area_factor 0.90F // Ignoring areas without cell, i.e. boundaries, frames, links, etc
#define cell_area (cell_length * cell_width) // 0.004816 sq.m
#define effective_cell_area (cell_area * effective_area_factor * cell_efficiency) // 0.000281736 sq.m
Obviously VMPP and IMPP are needed to calculate RMPP and this is called here as R_actual. R_fixed is the load resistor that is put in parallel to the solar cell and this is the load resistor needed
to fulfill maximum power transfer theorem. Practically, it is not possible to get 92.18Ω easily and so instead of 92.18Ω ,a 100Ω (1% tolerance) resistor is placed in its place. The values are close
enough and the difference between these values is about 8%. This difference is also taken into account in the calculations via the definition R_Error.
Thus, during calculation this amount of difference should be compensated for.
ADC_Max and VDD are maximum ADC count and supply voltage values respectively. These would be needed to find the voltage resolution that the ADC can measure.
Since 806µV is a pretty small figure, we can rest assure that very minute changes in solar irradiance will be taken into account during measurement.
scale_factor is the voltage divider ratio. The cell gives a maximum voltage output of 5.691V but N76E003 can measure up to 3.3V. Thus, a voltage divider is needed.
This is the maximum voltage that N76E003 will see when the cell reaches open-circuit voltage. Thus, the ADC input voltage needs to be scaled by a factor of 2 in order to back-calculate the cell
voltage, hence the name scale_factor.
The next definitions are related to the cell that is used in this project. Cell length and width are defined and with these the area of the cell is calculated.
The physical cell area does not represent the total area that is sensitive to solar irradiation. There are spaces within this physical area that contain electrical links, bezels or frame, etc. Thus,
a good estimate of effective solar sensitive cell area is about 90% of the physical cell area. This is defined as the effective_area_factor.
Lastly, we have to take account of cell efficiency because we have seen that in a given area not all solar irradiation is absorbed and so the cell_efficiency is also defined. Typically, a thin film
cell like the one I used in this project has an efficiency of 6 – 7% and so a good guess is 6.5%. All of the above parameters lead to effective cell area and it is calculated as follows:
So now the aforementioned equation becomes as follows:
The equations suggest that only by reading ADC input voltage we can compute both the power and solar irradiance.
The code initializes by initializing the I2C LCD, printing some fixed messages and enabling ADC pin 0.
void setup(void)
LCD_goto(0, 0);
LCD_putstr("PV PWR mW:");
LCD_goto(0, 1);
LCD_putstr("E. W/sq.m:");
The core components of the code are the functions related to ADC reading and averaging. The code below is responsible for ADC reading.
unsigned int ADC_read(void)
register unsigned int value = 0x0000;
while(ADCF == 0);
value = ADCRH;
value <<= 4;
value |= ADCRL;
return value;
The following code does the ADC averaging part. Sixteen samples of ADC reading are taken and averaged. Signal averaging allows for the elimination of false and noisy data. The larger the number of
samples, the more is the accuracy but having a larger sample collection lead to slower performance.
unsigned int ADC_average(void)
signed char samples = 16;
unsigned long value = 0;
while(samples > 0)
value += ((unsigned long)ADC_read());
value >>= 4;
return ((unsigned int)value);
In the main, the ADC average is read and this is converted to voltage. The converted voltage is then upscaled by applying the scale_factor to get the actual cell voltage. With the actual voltage,
power and irradiation are computed basing on the math aforementioned. The power and irradiation values are displayed on the I2C LCD and the process is repeated every 100ms.
ADC = ADC_average();
v = ((VDD * ADC) / ADC_Max);
V = (v * scale_factor);
P = (((V * V) / R_fixed) * R_Error);
E = (P / effective_cell_area);
lcd_print(12, 0, (unsigned int)(P * 1000.0));
lcd_print(12, 1, (unsigned int)E);
I have tested the device against a SM206-Solar solar irradiation meter and the results are fairly accurate enough for simple measurements.
Demo video links:
Improvements and Suggestions
Though this device is not a professional solar irradiation measurement meter, it is fair enough for simple estimations. In fact, SMA – a German manufacturer of solar inverter, solar charger and other
smart solar solution uses a similar method with their SMA Sunny Weather Station. This integration allows them to monitor their power generation devices such as on-grid inverters, multi-cluster
systems, etc. and optimize performances.
There are rooms for lot of improvements and these are as follows:
• A temperature sensor can be mounted on the backside of the solar cell. This sensor can take account of temperature variations and thereby compensate readings. We have seen that temperature
affects solar cell performance and so temperature is a major influencing factor.
• A smaller cell would have performed better but it would have needed additional amplifiers and higher resolution ADCs. A smaller cell will have lesser useless areas and weight. This would make the
device more portable.
• Using filters to filter out unnecessary components of solar spectrum.
• Adding a tracking system to scan and align with the sun would result in determining maximum irradiation.
Project Code can be found here.
Happy coding.
Author: Shawon M. Shahryiar | {"url":"https://embedded-lab.com/blog/tag/irradiation/","timestamp":"2024-11-05T01:13:29Z","content_type":"text/html","content_length":"106863","record_id":"<urn:uuid:0ac84f44-1216-41ba-866d-5e6fe7426a79>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00095.warc.gz"} |
1. When you roll a fair dice, what is the probability that you obtain: (a) an odd number, (b) a 2, (c) a multiple of 3, (d) a number less than 5, (e) a prime number, (f) a 3 or a number less than 3?
1. When you roll a fair dice, what is the probability that you obtain:
(a) an odd number,
(b) a 2,
(c) a multiple of 3,
(d) a number less than 5,
(e) a prime number,
(f) a 3 or a number less than 3?
Find an answer to your question 👍 “1. When you roll a fair dice, what is the probability that you obtain: (a) an odd number, (b) a 2, (c) a multiple of 3, (d) a number less ...” in 📗 Mathematics if
the answers seem to be not correct or there’s no answer. Try a smart search to find answers to similar questions.
Search for Other Answers | {"url":"https://cpep.org/mathematics/1500801-1-when-you-roll-a-fair-dice-what-is-the-probability-that-you-obtain-a-.html","timestamp":"2024-11-12T04:05:31Z","content_type":"text/html","content_length":"23892","record_id":"<urn:uuid:c178aba3-98fc-4d29-944b-90c2ab9e9ce8>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00261.warc.gz"} |
Five Minute Lessons
Comments on this lesson
Sales Rep Item Sold Units
John Fridge 4
Balu Washing Machine 5
Muneer Microven 6
John Washing Machine 7
Balu Microven 8 Data
Muneer Microven 2
John Washing Machine 3
Balu Washing Machine 4
Muneer Fridge 3
Sales Rep Fridge Washing Machine Microven
John 4 10 o
Balu 0 9 8 Sample
Muneer 3 0 8
Sales Rep Fridge Washing Machine Microven
John Report
Please solve this problem
abdul ali
I'd recommend you solve this using a PivotTable
You could solve this problem using SUMIF and SUMIFS, but I would recommend you use a PivotTable instead. You can learn how to use a PivotTable in this lesson. It's a lot more flexible and powerful
and will do exactly what you need.
Column A=Sales Rep, Column
Column A=Sales Rep, Column B=Items, Column C=Sold Qty
I need individual record (each Sales Rep, each item ,total sold qty)
Clear and concise
Great tutorial on the use of SUMIFS . Excel is really a powerful tool and it is very helpful when one can get what they need online in the way of help. Most books that cover these topics (yes , there
are still books) go into too much over the top things and don't get into the real meat of the subject - like this does, covering what I need to know ! Thank you,
excel function sto solve
Can any one please help me to use the similar formula in excel to find out
the sum the Row 3 IF row 1 is equal to "SecSch" AND row 2 is equal to "Current"
Say in excel Row 1 (contains a list of "Sec", "Pry", "RCs"), Row 2 (contains "Current" and "Capital"), and Row 3 (contains list of budget figures 135.45)
Thank you in advance
I have a column with with
I have a column with with categories and next to that are their numbers/amounts. I want to add/sum certain categories together? I have the categories identified by text ...thanks
The SUMIF function should do this for you
Hi E Dot
Sounds like you need the ordinary SUMIF function rather than the SUMIFS function. This will allow you to add up numbers that match the categories you choose. If you need to add up more than one
category in a single formula, simply put multiple SUMIF functions together in the same formula. In this scenario each SUMIF function would add up all the values for one category.
Check out the SUMIF lesson here.
SUMIFS with multiple criteria
I beleive I understand the material, but cannot undertsand why this produces a #VALUE! error.
Different sized ranges are the reason
Hi Tom
I think your problem is that the first sum range and the second are different sizes. If you change $G$74:$G$84 to $G$2:$G$51 then the formula no longer generates an error.
Given your example, I would create a formula that uses the SUMIF function twice rather than trying to use SUMIFS, e.g.
SUMIFS with multiple criteria
Hi David,
Thank you very much for the reply. I didn't know about using two SUMIFs.
I really appreciate your help. I'm pretty good at using arrays to explore massive amounts of data at Intel, but really ignorant of the Excel syntax to do other things.
If you have the time and interest, I try to explain better. If you don't, Thanks again.
I'm not sure if were able to look at the file and see what/where the ranges are.The error is gone, but I'm not getting the correct answer. (I didn't explain what I was trying to do, so it wasn't very
clear.) After two hours of trial and error, I was crabby and desparate.
I'm trying to sum the amounts in col E by month (Col F) and by catagory 1 or 2 (Col G). Col E is month # reference and D81
is the target Category number. I'm sure there are better ways to do it, but I only know what I've figured out on my own.
The results are reported farther down the sheet.
The first part worked fine in summing by month, but I wanted to separate Totals by Catagory 1 and two. The correct answers for Month 8 are $355.00 for Cat 1 and $278.87 for Cat 2. The formula as is
produces $8 and $38, so something isn't playing right. I've tried using * and + to combine the formula sections, but that doesn't work either.
Thanks again for the tip about separate SUMIFs. I need to take a class....................
Try a Pivot Table instead
Hi Tom
Looking at your spreadsheet, I'd say that you could do this with formulas but a Pivot Table will summarise your data far more quickly and easily. Why don't you check out our lessons on Pivot Tables
and see if you can find a solution there. If you can't, post a comment again and I'll see what we can do to solve the problem for you.
Our Pivot Table lessons are summarized here.
Try a Pivot Table instead
Thanks, David. I haven't used Pivot tables for quite a while, so I'll take your advice and do the class.
Thank you for taking the time!
I appreciate it.
Take care,
Sumifs function
In your example, if I wanted to sum the sales for Monday, Wednesday and Friday, would I need three criteria and repeat the crteria range for each criterion? Thanks
using excel as a time sheet
I have different levels of PTO for my employees, some get paid holidays at 7 hours, 7.5 hours, 4.5 hours and 3 hours. It is a challenge to remember who gets paid what; I am not the book keeper, I
just do the time sheets for the book keeper. I would like to use a function that would automatically return the specific rate for each person. I use an excel time sheet that automatically adds up
hours for each pay period, I was just hoping to simplify even more. Any help would be appreciated!
Example: if Sally gets PTO for the 4th of July, I would like to place an "X" in the cell (labeled PTO) and have it automatically give her 4.5 hours of pay; and Fred will automatically get 7.5 hours
PTO when I place an X in the 4th of July cell.
This way I only have to "x" the holiday or vacation day, not remember how many hours each of them are compensated.
If you were to add another column that said what they got paid for PTO. then you could do a sumif statement that would check that column when you add an X to all of the spots. So if x was the input
for the fourth of july, it could check whatever column the variable for PTO is in, and assign a amount based on that. Im not sure what the syntax line would look like, but that is one way to solve
the problem.
More information please?
Hi Laurie
You're on the right track but it's a bit hard to figure out how you've structured your spreadsheet. Is it set out as a timesheet, i.e. each person gets a row for each day they work, which has a
column for the date and the hours worked? That's how I'd set out a timesheet in Excel, but it sounds like you have something different? If I know more about the way your data is structured, it will
be easier to provide a definite solution for you.
Sumifs adding together all the months of a fiscal year
I need to sum all of the sales to date in a fiscal year. I have all the criteria set except I can only get it to pull one months data. =SUMIFS(Data!$H:$H,Data!$C:$C,$AZ$1,Data!$B:$B,Bacardi!
$A$4,Data!$A:$A,Bacardi!$D5,Data!$E:$E,Bacardi!$A$16) this is the formula.
Data!$C:$C,$AZ$1 is the range and criteria that pulls the date
I need a forumla to add all values up for a customer, but only when this has been invoiced
In column I I have the order values
In column J I have the customer
In column S I have the invoice number
So, unless there is a value in column S, I do not want it to be in the figures, but each customer needs to have a total figure. There is probably an easy solution, and I am trying to overcomplicate
it with trying to make it work with SUMIFS!!
How to sumif for a give range of variable
I need to sumif for a given range of variables. for eg, in the attached sheet I want sum of revenue of 100 odd batches. There are around 1000's of batches in total. Could you pls help.
Can you please help, I need
Can you please help, I need to total the contents of D,E,F & G in column C - but only having one total in column C per person - i.e. for Lenon, James I would expect to see blank cells in C2 and C3
and the total 17 in cell C4 - this has been driving me insane!
Can you please help, I need
Can you please help, I need to total the contents of D,E,F & G in column C - but only having one total in column C per person - i.e. for Lenon, James I would expect to see blank cells in C2 and C3
and the total 17 in cell C4 - this has been driving me insane!
Validating across a column and a row?
I found you articles very useful recently when tasked with an excel project at work. I had a bit of experience, but having to combining IF and SUMIFS formulas etc. in one was a new level for me.
One thing I tried after looking into you articles, and the very useful solutions in your comments was to try and calculate the total of a number of cells after validating the criteria against a
Column and a Row. The SUM area then became more of a grid then a column. I've provided an example formula below:
this resulted in a #VALUE error.
is there a different, or combination, of formulas that would allow me to do such analysis?
SUMIFS with <= or <> conditions
my SUMIFS() function fails to consider <= or <> conditions. How can i do it??
I'm trying to sum up my hours based on client and task. How can I do that?
Dear Sir/Madam,
Dear Sir/Madam,
I have a problem to add three cells having different conditions. For example a student get 78 marks out of 100 in paper A, 60 marks out of 75 in paper B and 14 marks out of 50 in paper C. Now I want
to add these marks only if the paper is passed if not then those marks would not be included to the total sum using different condition for each one. Like, " add if 100>33, 75>25 and 50>17. How
should I do it. Please guide me
Find the SUM of a range of cells containing numbers & text
I am trying to figure out how to find the SUM of a column whose values are auto-generated by a different function. The column values are determined by an IF function based on date values. The date
values are Review Date(A:A), Deadline Date(B:B), 2nd Review Date(C:C), and the column (D:D) whose values I'm looking to SUM operates as follows:
If A1 and B1 have date values, and C1 is blank, then D1 will result in the number of days remaining based on B1-TODAY().
If A1 and B1 have date values, and C1 also contains a date value, D1 will result in the number of days from the date in A1 to the date in C1, and will be followed by the text string &" "&"Days to
A simplified example of the formula I'm using for D1 is: =IF(C1>0,C1-A1&" "&"Days to Edit",IF(C1=0,B1-TODAY(),"")).
I'm assuming that the "Days to Edit" text string is preventing me from being able to SUM the numeric values that precede the "Days to Edit" text string, because when C1 is blank and D1 just returns a
numeric value (B1-TODAY), I can highlight the cells and select the "SUM" from the drop down menu at the bottom of the Excel window and get the total without any problems. However, when I just select
cells that contain a number followed by "Days to Edit", the SUM = 0.
Is it possible to just calculate the numeric values that precede the text string, or do I have to remove the text string?
Also, the function column (D:D) is formatted as a number value.
I'd appreciate any guidance on this issue as soon as possible.
rolling sums with a common amount
I need to figure out how to do a sum if for every £1000 sold they would earn £4.50.
It needs to be a rolling total in a cell.
I have set up already the sums for the revenue input. ie - =sum(a2:a30) so this calculates the total revenue in a31 cell. But then i need cell a32 to give me the earnings for the staff as mentioned
above. :)
Just what I was trying to do and really easy to understand. Solved my problem quickly, thanks so much.
Adding rows in a column that match an equal value in another col
Column "A" is the social security number of a debtor. Column "B" is an account number. And finally Column "C" is the amount due on that account. A debtor with the social security number in Column A
could have multiple rows of accounts and totals (Column B & C). I need to sum column "C" whenever the value in Column A changes. So for each social security number in column A, I need to have only
one total of all the account totals in column C. How can I easily get this done? THANKS!!!
add particular elements
i want to add numbers in "C" column where text in "B" column is "abc". Help me please.
That is a remarkable submit
That is a remarkable submit We seen because of present it. It's absolutely just what I necessary to notice find in foreseeable future you might progress after providing this kind of fantastic
article. http://fl-studio-11-crack.blogspot.fr
I have understand several of
I have understand several of the content articles on your own web page now. and I like look regarding blogs. I added this in order to the favorites web log list and will also be checking out back
again shortly. Please look at my personal website at the same time along with allow me to understand what anyone this particular <a href="http://fl-studio-11-crack.blogspot.com">fl 12 crack</a>
multiple sum issue
Below mention is my different account number & each account number made multiple times transaction of different amount.
therefore, i need to calculate each account's total or sum of transaction amount that they made.
please help me out,.,
Account Amount
multiple (or) conditions in multiple columns
thanks dear,
could you please help me with multiple (or) conditions in multiple columns
is this can work in somehow, I try but its give unreal result (Less than the actual) “it seems like the excel is calculating the second (or) condition based on the filtered data in the first (or)
thanks in advanced :),,
Try wrapping your formula in another SUM function
Hi Yasir
Try modifying your formula like this:
Placing the whole SUMIFS function inside the SUM function should result in the correct result being returned.
Please reply to this comment if you still have any questions.
still less than the actual
hi David,
thanks for prompt response, but it still gives wrong result.
here as sample with formula result:
Revenu $ Emploee Product
55 Mike Apple
60 ELSE Orange
100 John Orange
50 Mike ELSE
70 John Apple
85 ELSE Apple
105 John ELSE
90 Mike Apple
95 John Apple
80 Mike Orange
correct result is (490) while formula returned to (245)
Solution to your problem
Hi Yaser
Sorry - I guess I didn't test my original formula.
Try this one instead, which returns the right result:
Note that the first criteria set (John, Mike) uses a comma to separate them. The second (Apple, Orange) uses a semi-colon. Not entirely sure what that would make the difference to Excel, but it does.
This example would also work:
Again, note the use of commas in the first criteria set, but semi-colons in the second.
big thanks man
Hi David,
I really appreciate you it works finally, and also, I'm very interested to know the difference between commas and semi-colons in excel.
SUMIFS - year is in a cell
This formula works for me however I want to have the year placed in a cell so I dont have to retype my formulas.
For Example: T1 would have 2017 in the cell. When I try this for whatever reason the formula does not work. How could I make this work?
SOLVED - see T1
T1 equals the cell that contains the year (a variable that I need to change throughout many formulas in my case).
I am trying to create a formula which will add up figures in columns M and N if there is specific text in Column E. I have tried =SUMIF(E4:E550,"SOLUTIONS"&D1&"*",M4:N549) and it works but doesn't
include column N.
I have three columns of data. The first column is a list of amounts I want to add up. The second column is the various types, example, 101, 102, 103, ect., the third column lists numbers that I would
like to have in a range. They range from 850 down to 350.
For example, I want the formula to add amounts in column 1 if column 2 equals 101 and column 3 is between 790 and 799. I then want to take this down by adding amounts in column 1 if column 2 equals
101 and column 3 is between 780 and 789 and so on down to the end to include all column 2 types.
Basically I want a breakdown of the totals of each type that fall within the range. Is that possible? I know I can do this if I say column 3 is > a certain number but I want it to be between a range
of numbers.
Sir, I have a question which i was unable to solve through this function.
Sample: i have a numeric list which contain multiple values, how we can sum the highest 4 value from the list.
Need sum of highest 4 value
2 worksheets, find and sum if
Dear Ahthor,
In the intro you did mention SUMIF is good to use with multiple criteria, such as sum up all sales for microwave made hy John. This is exactly what I need but I could not see in the example. Have
tried various version like matching text bit does not work.
I'm currently having a spreadsheet with all sales history of various products where I'm looking to find and add up all sales of one product but only if made by certain customers. I have 5-6 types of
customers and sometimes I need one type of customer, bit other types I need multiple types of customers. (Have seen so.ething like giving a criteria for multiple customers)
Do you have a solution for this please?
Much appreciated.
sum of different row
I want to sum row cells. each cell has a different condition. just as I want to add a number of different subjects each subject has a different pass condition
sum of different row
I want to sum row cells. each cell has a different condition. just as I want to add a number of different subjects each subject has a different pass condition
I want to add or subtract 2 cells if they meet the criteria in a third cell. Eg. If a cell says sales then add sale amount from 1 cell & taxes (calculated on sale amount) from the 2 cell. Similarly
if it says refund then taxes should be subtracted from sale amount.
Add comment | {"url":"https://fiveminutelessons.com/comment/1681","timestamp":"2024-11-11T10:18:39Z","content_type":"text/html","content_length":"159900","record_id":"<urn:uuid:17d1de49-15cc-4180-8864-485bc8901a2e>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00317.warc.gz"} |
How many triangles are in this picture? - Daily Quiz and Riddles
How many triangles are in this picture?
(An image is provided after the explanation)
Welcome to the mesmerizing world of geometric puzzles! Brace yourself for a challenge that will have you scrutinizing every nook and cranny of the image above. It may look simple at first, but
beware, there’s more to it than meets the eye!
Your task is to count the number of triangles hidden within this seemingly innocent picture. As you start your quest, you’ll encounter various types of triangles—right-angles, isosceles,
equilateral—all artfully entwined to keep you on your toes.
Proceed with caution, for it’s easy to underestimate the true number of triangles lurking in plain sight. Take your time, go cross-eyed if you must, and carefully spot those elusive shapes.
Now, here comes the moment of truth. How many triangles did you manage to find? If you’ve reached a number you’re satisfied with, let’s compare it to our answer. Did you come up with 44? If you did,
then congratulations, you’ve conquered the puzzle!
For those who love a good equation, we have a mathematical expression to arrive at the same answer: 16 + (8 × 2) + (4 × 2) + (1 × 2) + (1 × 2). But fear not, even without this equation, you can still
triumph in this challenge.
To break it down for you, let’s consider the following:
• Each square formed by four triangles contains eight smaller triangles. Multiply 8 by 4 squares, and you get 32 triangles.
• But that’s not all! The whole puzzle comprises four large triangles, adding 4 more triangles to our count, bringing the total to 36.
• And wait, there’s more! Don’t forget the eight medium triangles within the puzzle, adding another 8 to our count, resulting in a grand total of 44!
To make it crystal clear, we’ve provided an image below, highlighting all the triangles concealed in this cunning puzzle.
Now, take a moment to appreciate the intricate design and revel in your triumph as a true triangle detective. Happy puzzling!
Leave a Comment | {"url":"https://quizandriddles.com/how-many-triangles-are-in-this-picture/","timestamp":"2024-11-13T16:25:41Z","content_type":"text/html","content_length":"169383","record_id":"<urn:uuid:d9deb211-ff9a-4d4f-93fc-ebf1f5dd5d60>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00301.warc.gz"} |