arxiv_id
stringlengths 0
16
| text
stringlengths 10
1.65M
|
|---|---|
# Thue's lemma
Last updated
In modular arithmetic, Thue's lemma roughly states that every modular integer may be represented by a "modular fraction" such that the numerator and the denominator have absolute values not greater than the square root of the modulus.
## Contents
More precisely, for every pair of integers (a, m) with m > 1, given two positive integers X and Y such that Xm < XY, there are two integers x and y such that
${\displaystyle ay\equiv x{\pmod {m}}}$
and
${\displaystyle |x|
Usually, one takes X and Y equal to the smallest integer greater than the square root of m, but the general form is sometimes useful, and makes the uniqueness theorem (below) easier to state. [1]
The first known proof is attributed to AxelThue ( 1902 ) [2] who used a pigeonhole argument. [3] [4] It can be used to prove Fermat's theorem on sums of two squares by taking m to be a prime p that is congruent to 1 modulo 4 and taking a to satisfy a2 + 1 = 0 mod p. (Such an "a" is guaranteed for "p" by Wilson's theorem. [5] )
## Uniqueness
In general, the solution whose existence is asserted by Thue's lemma is not unique. For example, when a = 1 there are usually several solutions (x, y) = (1, 1), (2, 2), (3, 3), ..., provided that X and Y are not too small. Therefore, one may only hope for uniqueness for the rational number x/y, to which a is congruent modulo m if y and m are coprime. Nevertheless, this rational number need not be unique; for example, if m = 5, a = 2 and X = Y = 3, one has the two solutions
${\displaystyle 2a+1\equiv -a+2\equiv 0{\pmod {5}}}$.
However, for X and Y small enough, if a solution exists, it is unique. More precisely, with above notation, if
${\displaystyle 2XY
and
${\displaystyle ay_{1}-x_{1}\equiv ay_{2}-x_{2}\equiv 0{\pmod {m}}}$,
with
${\displaystyle \left|x_{1}\right|
and
${\displaystyle \left|x_{2}\right|
then
${\displaystyle {\frac {x_{1}}{y_{1}}}={\frac {x_{2}}{y_{2}}}.}$
This result is the basis for rational reconstruction, which allows using modular arithmetic for computing rational numbers for which one knows bounds for numerators and denominators. [6]
The proof is rather easy: by multiplying each congruence by the other yi and subtracting, one gets
${\displaystyle y_{2}x_{1}-y_{1}x_{2}\equiv 0{\pmod {m}}.}$
The hypotheses imply that each term has an absolute value lower than XY < m/2, and thus that the absolute value of their difference is lower than m. This implies that ${\displaystyle y_{2}x_{1}-y_{1}x_{2}=0}$, hence the result.
## Computing solutions
The original proof of Thue's lemma is not efficient, in the sense that it does not provide any fast method for computing the solution. The extended Euclidean algorithm, allows us to provide a proof that leads to an efficient algorithm that has the same computational complexity of the Euclidean algorithm. [7]
More precisely, given the two integers m and a appearing in Thue's lemma, the extended Euclidean algorithm computes three sequences of integers (ti), (xi) and (yi) such that
${\displaystyle t_{i}m+y_{i}a=x_{i}\quad {\text{for }}i=0,1,...,}$
where the xi are non-negative and strictly decreasing. The desired solution is, up to the sign, the first pair (xi, yi) such that xi < X.
## Related Research Articles
In number theory, the Chinese remainder theorem states that if one knows the remainders of the Euclidean division of an integer n by several integers, then one can determine uniquely the remainder of the division of n by the product of these integers, under the condition that the divisors are pairwise coprime.
In mathematics, modular arithmetic is a system of arithmetic for integers, where numbers "wrap around" when reaching a certain value, called the modulus. The modern approach to modular arithmetic was developed by Carl Friedrich Gauss in his book Disquisitiones Arithmeticae, published in 1801.
In number theory, the law of quadratic reciprocity is a theorem about modular arithmetic that gives conditions for the solvability of quadratic equations modulo prime numbers. Due to its subtlety, it has many formulations, but the most standard statement is:
Fermat's little theorem states that if p is a prime number, then for any integer a, the number apa is an integer multiple of p. In the notation of modular arithmetic, this is expressed as
In number theory, Euler's theorem states that if n and a are coprime positive integers, then a raised to the power of the totient of n is congruent to one, modulo n, or:
In mathematics, factorization or factoring consists of writing a number or another mathematical object as a product of several factors, usually smaller or simpler objects of the same kind. For example, 3 × 5 is a factorization of the integer 15, and (x – 2)(x + 2) is a factorization of the polynomial x2 – 4.
This article collects together a variety of proofs of Fermat's little theorem, which states that
In number theory, Euler's criterion is a formula for determining whether an integer is a quadratic residue modulo a prime. Precisely,
In mathematics, p-adic analysis is a branch of number theory that deals with the mathematical analysis of functions of p-adic numbers.
In arithmetic and algebra, the cube of a number n is its third power, that is, the result of multiplying three instances of n together. The cube of a number or any other mathematical expression is denoted by a superscript 3, for example 23 = 8 or (x + 1)3.
A unit fraction is a rational number written as a fraction where the numerator is one and the denominator is a positive integer. A unit fraction is therefore the reciprocal of a positive integer, 1/n. Examples are 1/1, 1/2, 1/3, 1/4, 1/5, etc.
In additive number theory, Fermat's theorem on sums of two squares states that an odd prime p can be expressed as:
In algebraic number theory, the narrow class group of a number field K is a refinement of the class group of K that takes into account some information about embeddings of K into the field of real numbers.
In number theory, the law of quadratic reciprocity, like the Pythagorean theorem, has lent itself to an unusual number of proofs. Several hundred proofs of the law of quadratic reciprocity have been published.
In mathematics, particularly in the area of number theory, a modular multiplicative inverse of an integer a is an integer x such that the product ax is congruent to 1 with respect to the modulus m. In the standard notation of modular arithmetic this congruence is written as
Pocklington's algorithm is a technique for solving a congruence of the form
In computational number theory, Cornacchia's algorithm is an algorithm for solving the Diophantine equation , where and d and m are coprime. The algorithm was described in 1908 by Giuseppe Cornacchia.
Coppersmith's attack describes a class of cryptographic attacks on the public-key cryptosystem RSA based on the Coppersmith method. Particular applications of the Coppersmith method for attacking RSA include cases when the public exponent e is small or when partial knowledge of the secret key is available.
AN codes are error-correcting code that are used in arithmetic applications. Arithmetic codes were commonly used in computer processors to ensure the accuracy of its arithmetic operations when electronics were more unreliable. Arithmetic codes help the processor to detect when an error is made and correct it. Without these codes, processors would be unreliable since any errors would go undetected. AN codes are arithmetic codes that are named for the integers and that are used to encode and decode the codewords.
In algebraic number theory Eisenstein's reciprocity law is a reciprocity law that extends the law of quadratic reciprocity and the cubic reciprocity law to residues of higher powers. It is one of the earliest and simplest of the higher reciprocity laws, and is a consequence of several later and stronger reciprocity laws such as the Artin reciprocity law. It was introduced by Eisenstein (1850), though Jacobi had previously announced a similar result for the special cases of 5th, 8th and 12th powers in 1839.
## References
• Shoup, Victor (2005). A Computational Introduction to Number Theory and Algebra (PDF). Cambridge University Press. Retrieved 26 February 2016.CS1 maint: discouraged parameter (link)
1. Shoup, theorem 2.33
2. Thue, A. (1902), "Et par antydninger til en taltheoretisk metode", Kra. Vidensk. Selsk. Forh., 7: 57–75
3. Clark, Pete L. "Thue's Lemma and Binary Forms". CiteSeerX .Cite journal requires |journal= (help)
4. Löndahl, Carl (2011-03-22). "Lecture on sums of squares" (PDF). Retrieved 26 February 2016.Cite journal requires |journal= (help)CS1 maint: discouraged parameter (link)
5. Ore, Oystein (1948), Number Theory and its History, pp. 262–263
6. Shoup, section 4.6
7. Shoup, section 4.5
|
|
# Is it possible to control a treadmill's tread speed such that a plane on the treadmill will be prevented from moving?
I've posed the question in this particular way to avoid the ambiguity usually found in the posing of the "airplane on a treadmill" puzzle, e.g.
I'm not specifying how the treadmill is controlled but asking if it can be controlled in such a way that the thrust of the plane's engine is countered with an equal and opposite force. Assume the wheel bearings are frictionless and the wheels rotate freely. Please justify your answer.
[EDIT] Idealize the problem such that we can ignore rolling resistance.
-
yes it is possible. You have to account for the Tire Friction and Rolling resistance. It is a bit complicated math but you can resort to experiment. Attach a spring balance to the nose and measure the resistance offered by the tires while the treadmill is spinning at desired speed. Now make sure that engine produces just enough thrust to cancel the resistance and there it is, your model airplane is stationary on a running treadmill. In reality, when the engine starts, it produces surplus thrust even while it is idling that it is almost impossible to hold it down without brakes. So a full scale aircraft with engines turned on, without brakes, will find its way out of the treadmill.
-
I'll edit my question based on your answer. – Alfred Centauri Jul 18 '12 at 22:41
The answer is not really, not in any practical way. The force on the airplane from the propellor is not balanced by anything from the wheels when you exclude friction. The wheels just slide on the treadmill. If you have wheels with contact friction that have a huge moment of inertia, like enormous flywheels, and you have contact friction but no rolling friction in the axle, then it is technically possible to accelerate the treadmill an enormous amount to give a force at the contact point of the wheels to the ground which is equal to the force on the airplane from the propellor. This will keep the airplane stationary, since the net force on the airplane will be zero.
This force produces a torque
$$F R$$
were R is the radius of the wheel, and F is the force from the propellor (the two have to balance to keep the airplane from moving forward), and this leads the wheels to accelerate with an angular acceleration of
$$\dot{\omega} = {FR\over I}$$
Which for normal wheels will be something of order $F\over MR$, i.e. all the force from the propellor is going to spin the wheels. This ridiculous angular acceleration is unrealizable for real airplanes, considering the small fraction of airplane mass in the wheel and the small radius of the wheel.
-
The $\alpha$ is ridiculous! But there's something else I thought about too that is separate from those particular physical considerations. The plane's engine power is constant for constant thrust but, whatever is driving the tread must produce linearly increasing with time power to keep the plane from moving. Sooner or later, any physical system driving the tread hits it's finite power limit and then the angular acceleration can no longer counteract the thrust. The treadmill can only delay the plane's motion. – Alfred Centauri Jul 19 '12 at 3:12
|
|
# How do you find the slope and y intercept of the line that is perpendicular to y=-x-1 and passes through the point (5, 7)?
Nov 26, 2016
The equation of the line is $y = x + 2$
The slope is $= 1$
The intercepts are $\left(0 , 2\right)$ and $\left(- 2 , 0\right)$
#### Explanation:
The equation of a line is $y = m x + c$
where $m$ is the slope
The slope of the line $y = - x - 1$ is ${m}_{1} = - 1$
The slope of the perpedicular line is ${m}_{2}$
${m}_{1} \cdot {m}_{2} = - 1$
So ${m}_{2} = 1$
The equation of a line, that passes through $\left({x}_{0} , {y}_{0}\right)$ with a slope of ${m}_{2}$ is
$y - {y}_{0} = {m}_{2} \left(x - {x}_{0}\right)$
Here, $\left({x}_{0} , {y}_{0}\right) = \left(5 , 7\right)$
So, the equation is
$y - 7 = 1 \left(x - 5\right)$
$y = x + 2$
graph{(y+x+1)(y-x-2)=0 [-7.9, 7.9, -3.95, 3.95]}
|
|
# 2.1A: Rates of Change & Limits Created by Greg Kelly, Hanford High School, Richland, Washington Revised by Terry Luskin, Dover-Sherborn HS, Dover, Massachusetts.
## Presentation on theme: "2.1A: Rates of Change & Limits Created by Greg Kelly, Hanford High School, Richland, Washington Revised by Terry Luskin, Dover-Sherborn HS, Dover, Massachusetts."— Presentation transcript:
2.1A: Rates of Change & Limits Created by Greg Kelly, Hanford High School, Richland, Washington Revised by Terry Luskin, Dover-Sherborn HS, Dover, Massachusetts
Suppose you drive 200 miles, and it takes you 4 hours. Then your average speed is: If you look at your speedometer at some time during this trip, it might read 65 mph. This is your instantaneous speed at that particular instant. average speed = change in position = Δy elapsed time Δt
A rock falls from a high cliff… The position (measured from the cliff top) is given by: Position at 0 sec: average velocity from t=0 to t=2: What is the instantaneous velocity at 2 seconds? Position at 2 sec:
for some very small change in t where h = some very small change in t First, we can move toward a value for this limit expression for smaller and smaller values of h (or Δt) …
1 80 0.1 65.6 0.01 64.16 0.001 64.016 0.0001 64.0016 0.00001 64.0002 We can see that the velocity limit approaches 64 ft/sec as h becomes very small. We say that near 2 seconds ( the change in time approaches zero), velocity has a limit value of 64. (Note that h never actually became zero in the denominator, so we dodged division by zero.) Evaluate this expression with shrinking h values of: 1, 0.1, 0.01, 0.001, 0.0001, 0.00001
The limit as h approaches zero analytically: = = = =
Consider: What happens as x approaches zero? Graphically: WINDOW Y=
Looks like y→1 from both sides as x→0 (even though there’s a gap in the graph AT x=0!)
Numerically: TblSet You can scroll up or down to see more values. TABLE
It appears that the limit of is 1, as x approaches zero
Limit notation: “The limit of f of x as x approaches c is L.” So:
The limit of a function is the function value that is approached as the function approaches an x-coordinate from left and right (not the function value AT that x-coordinate!) = 2
Properties of Limits: Limits can be added, subtracted, multiplied, multiplied by a constant, divided, and raised to a power. (See page 58 for details.) For a two-sided limit to exist, the function must approach the same height value from both sides. One-sided limits approach from only the left or the right side.
The limit of a function refers to the function value as the function approaches an x-coordinate from left and right (not the function value AT that x-coordinate!) (not 1!)
1234 1 2 Near x=1: limit from the left limit from the right does not exist because the left- and right-hand limits do not match! = 0 = 1
Near x=2: limit from the left limit from the right because the left and right hand limits match. 1234 1 2
Near x=3: left-hand limit right-hand limit because the left- and right-hand limits match. 1234 1 2
Download ppt "2.1A: Rates of Change & Limits Created by Greg Kelly, Hanford High School, Richland, Washington Revised by Terry Luskin, Dover-Sherborn HS, Dover, Massachusetts."
Similar presentations
|
|
Home » Igor kriz
Igor Kriz
1. # WorkshopHot Topics: Kervaire invariant
Oct 26, 2010
Tuesday
03:30 PM - 04:30 PM The Slice Spectral Sequence for $$C_2$$ and $$C_4$$ Igor Kriz
|
|
Properties
Label 333270dm Number of curves $4$ Conductor $333270$ CM no Rank $1$ Graph
Related objects
Show commands for: SageMath
sage: E = EllipticCurve("dm1")
sage: E.isogeny_class()
Elliptic curves in class 333270dm
sage: E.isogeny_class().curves
LMFDB label Cremona label Weierstrass coefficients Torsion structure Modular degree Optimality
333270.dm3 333270dm1 [1, -1, 1, -16763, -523173] [2] 1441792 $$\Gamma_0(N)$$-optimal
333270.dm2 333270dm2 [1, -1, 1, -111983, 14064531] [2, 2] 2883584
333270.dm1 333270dm3 [1, -1, 1, -1778333, 913226991] [2] 5767168
333270.dm4 333270dm4 [1, -1, 1, 30847, 47372487] [2] 5767168
Rank
sage: E.rank()
The elliptic curves in class 333270dm have rank $$1$$.
Complex multiplication
The elliptic curves in class 333270dm do not have complex multiplication.
Modular form 333270.2.a.dm
sage: E.q_eigenform(10)
$$q + q^{2} + q^{4} - q^{5} + q^{7} + q^{8} - q^{10} - 4q^{11} - 2q^{13} + q^{14} + q^{16} - 6q^{17} + O(q^{20})$$
Isogeny matrix
sage: E.isogeny_class().matrix()
The $$i,j$$ entry is the smallest degree of a cyclic isogeny between the $$i$$-th and $$j$$-th curve in the isogeny class, in the Cremona numbering.
$$\left(\begin{array}{rrrr} 1 & 2 & 4 & 4 \\ 2 & 1 & 2 & 2 \\ 4 & 2 & 1 & 4 \\ 4 & 2 & 4 & 1 \end{array}\right)$$
Isogeny graph
sage: E.isogeny_graph().plot(edge_labels=True)
The vertices are labelled with Cremona labels.
|
|
Homotopy equivalence induces bijection between path components
If $x\in X$, let $C(x)$ the path component of $x$ (the biggest path connected set containing $x$), and similarly if $y\in Y$. Let $C(X)$ and $C(Y)$ the family of all path components of $X$ and $Y$.
Let $f:X\to Y$ be a homotopy equivalence.
We define $G:C(X)\to C(Y)$ by $G(C(x))=C(f(x))$. Then I want to prove that:
1) $G$ is a bijection.
2) $C(x)$ and $G(C(x))=C(f(x))$ are homotopy equivalent.
This is what we have:
We know that $f$ is continuous and there exists $g:Y\to X$ continuous such that $g\circ f$ is homotopic to $1_X$ and $f\circ g$ is homotopic to $1_Y$.
Then, exist $h_1:X\times [0,1]\to X$ continuous such that $h_X(x,0)=g(f(x))$ and $h_X(x,1)=x$ for each $x\in X$, and $h_2:Y\times [0,1]\to Y$ continuous such that $h_2(y,0)=f(g(y))$ and $h_2(y,1)=y$ for each $y\in Y$.
1) I don't know how to prove $G$ is a bijection. If $C(f(x))=C(f(y))$, why it is $C(x)=C(y)$? And if $C(y)\in C(Y)$ is arbitrary, why does exist $x\in X$ such that $C(f(x))=C(y)$?
2) We need to show a homotopy equivalence $f':C(x)\to C(f(x))$. How should we define such $f'$?
Thanks.
• Hint: denote $G = G_f$ to emphasize the dependence of $G$ on $f$. Suppose $f_1,f_2 : X \to Y$ are homotopic. Can you show $G_{f_1} = G_{f_2}$? – Thomas Belulovich Mar 25 '14 at 5:09
I'll call the function $F$ instead of $G$, since we have maps $f$ and $g$ and the function is induced by $f$. First, you should show that $F$ is well-defined. The problem with these types of definitions is that when you write $F(C(x))=C(f(x))$, then in order to determine the image of the component $C(x)$ you choose a point $x\in C(x)$ and then take $f(x)$. But if $C(x)=C(y)$, we could as well have chosen $y\in C(y)$, so we have to show that $F(C(x))=F(C(y))$. Okay, but this is trivial since $C(x)=C(y)$ is a connected set containing both $x$ and $y$, so $f[C(x)]$ is connected set containing $f(x)$ and $f(y)$, hence $C(f(x))=C(f(y))$.
Then you want to show bijectivity. We can do this by showing injectivity and surjectivity. But often it is advisable to construct a function $G$ in the other direction and prove this $G$ to be a both-sided inverse of $F$, especially when we already have a map $g:Y\to X$, then we can consider the function $G:C(Y)\to C(X)$ which is induced the same way as $F$.
Now since $GF(C(x))=C(gf(x))$, you only need to check that $gf(x)$ and $x$ are in the same component. More generally you could show that whenever $h\simeq h':X\to Z$, the induced functions $H,H'$ on the set of path components are equal, and then apply this to the special case $h=gf$, $h'=\text{Id}_X$.
|
|
2020
Activity report
Project-Team
TRIPOP
RNSR: 201822629Y
Research center
In partnership with:
CNRS, Institut polytechnique de Grenoble
Team name:
Modeling, Simulation and Control of Nonsmooth Dynamical Systems
In collaboration with:
Laboratoire Jean Kuntzmann (LJK)
Domain
Applied Mathematics, Computation and Simulation
Theme
Optimization and control of dynamic systems
Creation of the Project-Team: 2019 June 01
# Keywords
• A6.1.1. Continuous Modeling (PDE, ODE)
• A6.1.4. Multiscale modeling
• A6.4.1. Deterministic control
• A6.4.3. Observability and Controlability
• A6.4.4. Stability and Stabilization
• A6.4.5. Control of distributed parameter systems
• A6.4.6. Optimal control
• A6.5.1. Solid mechanics
• A6.5.4. Waves
• B3.3.1. Earth and subsoil
• B5.2.3. Aviation
• B5.2.4. Aerospace
• B5.4. Microelectronics
• B5.6. Robotic systems
• B9.5.2. Mathematics
• B9.5.5. Mechanics
• B9.11.1. Environmental risks
# 1 Team members, visitors, external collaborators
## Research Scientists
• Vincent Acary [Team leader, Inria, Senior Researcher, HDR]
• Franck Bourrier [IRSTEA, Researcher]
• Bernard Brogliato [Inria, Senior Researcher, HDR]
• Felix Miranda Villatoro [Inria, from Nov 2020, Starting Faculty Position]
• Arnaud Tonnelier [Inria, Researcher, from Sep 2020, HDR]
## Faculty Members
• Paul Armand [Univ de Limoges, Professor, from Sep 2020, HDR]
• Guillaume James [Institut polytechnique de Grenoble, Professor, HDR]
## Post-Doctoral Fellows
• Abhishek Chatterjee [Inria, from Sep 2020]
• Nicholas Anton Collins-Craft [Inria, from Nov 2020]
• Mohammad Rasool Mojallizadeh [Inria , 01/07/2019 to 31/01/2021, DigitSlid ANR project ]
• Alexandre Rocca [Inria, until Oct 2020]
## PhD Students
• Charlelie Bertrand [Ecole de l'aménagement durable des territoires]
• Rami Sayoud [Schneider Electric]
• Benoit Viano [Inria]
## Technical Staff
• Franck Perignon [CNRS, Engineer]
• Alexandre Rocca [Inria, Engineer, from Nov 2020]
## Interns and Apprentices
• Denise Cariaga Sandoval [Inria, until Mar 2020]
• Mohammed Haidar [Inria, from Feb 2020 until Jul 2020]
• Diane Courtiol [Inria]
## Visiting Scientists
• Olivier Buzzi [Université de Newcastle upon Tyne - Angleterre, from Mar 2020 until Apr 2020]
• Christophe Prieur [CNRS]
# 2 Overall objectives
## 2.1 Introduction
The joint research team, TRIPOP, between INRIA Grenoble Rhône–Alpes, Grenoble INP and CNRS, part of the Laboratoire Jean Kuntzmann (LJK UMR 5224) is a follow up of the BIPOP team (2003–2017). The team is mainly concerned by the modeling, the mathematical analysis, the simulation and the control of nonsmooth dynamical systems. Nonsmooth dynamics concerns the study of the time evolution of systems that are not smooth in the mathematical sense, i.e., systems that are characterized by a lack of differentiability, either of the mappings in their formulations, or of their solutions with respect to time. In mechanics, the main instances of nonsmooth dynamical systems are multibody systems with Signorini unilateral contact, set-valued (Coulomb-like) friction and impacts. In Electronics, examples are found in switched electrical circuits with ideal components (diodes, switches, transistors). In Control, nonsmooth systems arise in the sliding mode control theory and in optimal control. A lot of examples can also be found in cyber-physical systems (hybrid systems), in transportation sciences, in mathematical biology or in finance.
## 2.2 General scope and motivations
Nonsmooth dynamics concerns the study of the time evolution of systems that are not smooth in the mathematical sense, i.e., systems that are characterized by a lack of differentiability, either of the mappings in their formulations, or of their solutions with respect to time. The class of nonsmooth dynamical systems recovers a large variety of dynamical systems that arise in many applications. The term “nonsmooth”, as the term “nonlinear”, does not precisely define the scope of the systems we are interested in but, and most importantly, they are characterized by the mathematical and numerical properties that they share. To give more insight of what are nonsmooth dynamical systems, we give in the sequel a very brief introduction of their salient features. For more details, we refer to 1, 260, 77, 98, 63, 35.
## 2.3 A flavor of nonsmooth dynamical systems
As a first illustration, let us consider a linear finite-dimensional system described by its state $x\left(t\right)\in I\phantom{\rule{-0.166667em}{0ex}}\phantom{\rule{-0.166667em}{0ex}}{R}^{n}$ over a time-interval $t\in \left[0,T\right]$:
$\stackrel{˙}{x}\left(t\right)=Ax\left(t\right)+a,\phantom{\rule{1.em}{0ex}}A\in I\phantom{\rule{-0.166667em}{0ex}}\phantom{\rule{-0.166667em}{0ex}}{R}^{n×n},\phantom{\rule{0.166667em}{0ex}}a\in I\phantom{\rule{-0.166667em}{0ex}}\phantom{\rule{-0.166667em}{0ex}}{R}^{n},$ 1
subjected to a set of $m$ inequality (unilateral) constraints:
$y\left(t\right)=Cx\left(t\right)+c\ge 0,\phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}C\in I\phantom{\rule{-0.166667em}{0ex}}\phantom{\rule{-0.166667em}{0ex}}{R}^{m×n},\phantom{\rule{0.166667em}{0ex}}c\in I\phantom{\rule{-0.166667em}{0ex}}\phantom{\rule{-0.166667em}{0ex}}{R}^{m}.$ 2
If the constraints are physical constraints, a standard modeling approach is to augment the dynamics in (1) by an input vector $\lambda \left(t\right)\in I\phantom{\rule{-0.166667em}{0ex}}\phantom{\rule{-0.166667em}{0ex}}{R}^{m}$ that plays the role of a Lagrange multiplier vector. The multiplier restricts the trajectory of the system in order to respect the constraints. Furthermore, as in the continuous optimization theory, the multiplier must be signed and must vanish if the constraint is not active. This is usually formulated as a complementarity condition:
$0\le y\left(t\right)\perp \lambda \left(t\right)\ge 0,$ 3
which models the one-sided effect of the inequality constraints. The notation $y\ge 0$ holds component–wise and $y\perp \lambda$ means ${y}^{T}\lambda =0$. All together we end up with a Linear Complementarity System (LCS) of the form,
$\left\{\begin{array}{c}\stackrel{˙}{x}\left(t\right)=Ax\left(t\right)+a+B\lambda \left(t\right)\hfill \\ y\left(t\right)=Cx\left(t\right)+c\hfill \\ 0\le y\left(t\right)\perp \lambda \left(t\right)\ge 0\hfill \end{array}\right\$ 4
where $B\in I\phantom{\rule{-0.166667em}{0ex}}\phantom{\rule{-0.166667em}{0ex}}{R}^{n×m}$ is the matrix that models the input generated by the constraints. In a more general way, the constraints may also involve the Lagrange multiplier,
$y\left(t\right)=Cx\left(t\right)+c+D\lambda \left(t\right)\ge 0,\phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}D\in I\phantom{\rule{-0.166667em}{0ex}}\phantom{\rule{-0.166667em}{0ex}}{R}^{m×m},$ 5
leading to a general definition of LCS as
$\left\{\begin{array}{c}\stackrel{˙}{x}\left(t\right)=A\phantom{\rule{0.166667em}{0ex}}x\left(t\right)+a+B\phantom{\rule{0.166667em}{0ex}}\lambda \left(t\right)\hfill \\ y\left(t\right)=C\phantom{\rule{0.166667em}{0ex}}x\left(t\right)+c+D\phantom{\rule{0.166667em}{0ex}}\lambda \left(t\right)\hfill \\ 0\le y\left(t\right)\perp \lambda \left(t\right)\ge 0.\hfill \end{array}\right\$ 6
The complementarity condition, illustrated in Figure 1 is the archetype of a nonsmooth graph that we extensively use in nonsmooth dynamics. The mapping $y↦\lambda$ is a multi-valued (set-valued) mapping, that is nonsmooth at the origin. It has a lot of interesting mathematical properties and reformulations that come mainly from convex analysis and variational inequality theory. Let us introduce the indicator function of $I\phantom{\rule{-0.166667em}{0ex}}\phantom{\rule{-0.166667em}{0ex}}{R}_{+}$ as
${\Psi }_{I\phantom{\rule{-0.166667em}{0ex}}\phantom{\rule{-0.166667em}{0ex}}{R}_{+}}\left(x\right)=\left\{\begin{array}{c}0\phantom{\rule{4.pt}{0ex}}\text{if}\phantom{\rule{4.pt}{0ex}}x\ge 0,\hfill \\ +\infty \phantom{\rule{4.pt}{0ex}}\text{if}\phantom{\rule{4.pt}{0ex}}x<0.\hfill \end{array}\right\$ 7
This function is convex, proper and can be sub-differentiated 67. The definition of the subdifferential of a convex function $f:I\phantom{\rule{-0.166667em}{0ex}}\phantom{\rule{-0.166667em}{0ex}}{R}^{m}\to I\phantom{\rule{-0.166667em}{0ex}}\phantom{\rule{-0.166667em}{0ex}}R$ is defined as:
$\partial f\left(x\right)=\left\{{x}^{☆}\in I\phantom{\rule{-0.166667em}{0ex}}\phantom{\rule{-0.166667em}{0ex}}{R}^{m}\mid f\left(z\right)\ge f\left(x\right)+{\left(z-x\right)}^{\top }{x}^{☆},\forall z\right\}.$ 8
A basic result of convex analysis reads as
$0\le y\perp \lambda \ge 0⟺-\lambda \in \partial {\Psi }_{I\phantom{\rule{-0.166667em}{0ex}}\phantom{\rule{-0.166667em}{0ex}}{R}_{+}}\left(y\right)$ 9
that gives a first functional meaning to the set-valued mapping $y↦\lambda$. Another interpretation of $\partial {\Psi }_{I\phantom{\rule{-0.166667em}{0ex}}\phantom{\rule{-0.166667em}{0ex}}{R}_{+}}$is based on the normal cone to a closed and nonempty convex set $C$:
${N}_{C}\left(x\right)=\left\{v\in I\phantom{\rule{-0.166667em}{0ex}}\phantom{\rule{-0.166667em}{0ex}}{R}^{m}|{v}^{\top }\left(z-x\right)\le 0\phantom{\rule{0.277778em}{0ex}}\text{for}\phantom{\rule{4.pt}{0ex}}\text{all}\phantom{\rule{0.277778em}{0ex}}z\in K\right\}.$ 10
It is easy to check that $\partial {\Psi }_{I\phantom{\rule{-0.166667em}{0ex}}\phantom{\rule{-0.166667em}{0ex}}{R}_{+}}={N}_{I\phantom{\rule{-0.166667em}{0ex}}\phantom{\rule{-0.166667em}{0ex}}{R}_{+}}\left(x\right)$ and it follows that
$0\le y\perp \lambda \ge 0⟺-\lambda \in {N}_{I\phantom{\rule{-0.166667em}{0ex}}\phantom{\rule{-0.166667em}{0ex}}{R}_{+}}\left(y\right).$ 11
Finally, the definition of the normal cone yields a variational inequality:
$0\le y\perp \lambda \ge 0⟺{\lambda }^{\top }\left(y-z\right)\le 0,\forall z\ge 0.$ 12
The relations (11) and (12) allow one to formulate the complementarity system with $D=0$ as a differential inclusion based on a normal cone (see (15)) or as a differential variational inequality. By extending the definition to other types of convex functions, possibly nonsmooth, and using more general variational inequalities, the same framework applies to the nonsmooth laws depicted in Figure 2 that includes the case of piecewise smooth systems.
The mathematical concept of solutions depends strongly on the nature of the matrix quadruplet $\left(A,B,C,D\right)$ in (6). If $D$ is a positive definite matrix (or a $P$-matrix), the Linear Complementarity problem
$0\le Cx+c+D\lambda \perp \lambda \ge 0,$ 13
admits a unique solution $\lambda \left(x\right)$ which is a Lipschitz continuous mapping. It follows that the Ordinary Differential Equation (ODE)
$\stackrel{˙}{x}\left(t\right)=Ax\left(t\right)+a+B\lambda \left(x\left(t\right)\right),$ 14
is a standard ODE with a Lipschitz right-hand side with a ${C}^{1}$ solution for the initial value problem. If $D=0$, the system can be written as a differential inclusion in a normal cone as
$-\stackrel{˙}{x}\left(t\right)+Ax\left(t\right)+a\in B{N}_{I\phantom{\rule{-0.166667em}{0ex}}\phantom{\rule{-0.166667em}{0ex}}{R}_{+}}\left(Cx\left(t\right)\right),$ 15
that admits a solution that is absolutely continuous if $CB$ is a definite positive matrix and the initial condition satisfies the constraints. The time derivative $\stackrel{˙}{x}\left(t\right)$ and the multiplier $\lambda \left(t\right)$ may have jumps and are generally considered as functions of bounded variations. If $CB=0$, the order of nonsmoothness increases and the Lagrange multiplier may contain Dirac atoms and must be considered as a measure. Higher–order index, or higher relative degree systems yield solutions in terms of distributions and derivatives of distributions 26.
A lot of variants can be derived from the basic form of linear complementarity systems, by changing the form of the dynamics including nonlinear terms or by changing the complementarity relation by other multivalued maps. In particular the nonnegative orthant may be replaced by any convex closed cone $K\subset I\phantom{\rule{-0.166667em}{0ex}}\phantom{\rule{-0.166667em}{0ex}}{R}^{m}$ leading to complementarity over cones
${K}^{☆}\ni y\perp \lambda \in K,$ 16
where ${K}^{☆}$ its dual cone given by
${K}^{☆}=\left\{x\in I\phantom{\rule{-0.166667em}{0ex}}\phantom{\rule{-0.166667em}{0ex}}{R}^{m}\mid {x}^{\top }y\ge 0\phantom{\rule{4.pt}{0ex}}\text{for}\phantom{\rule{4.pt}{0ex}}\text{all}\phantom{\rule{4.pt}{0ex}}y\in K\right\}.$ 17
In Figure 2, we illustrate some other basic maps that can used for defining the relation between $\lambda$ and $y$. The saturation map, depicted in Figure 2(a) is a single valued continuous function which is an archetype of piece-wise smooth map. In Figure 2(b), the relay multi-function is illustrated. If the upper and the lower limits of $\lambda$ are respectively equal to 1 and $-1$, we obtain the multivalued sign function defined as
$Sgn\left(y\right)=\left\{\begin{array}{cc}1,\hfill & y>0\hfill \\ \left[-1,1\right],\hfill & y=0\hfill \\ -1,\hfill & y<0.\hfill \end{array}\right\$ 18
Using again convex analysis, the multivalued sign function may be formulated as an inclusion into a normal cone as
$\lambda \in Sgn\left(y\right)⟺y\in {N}_{\left[-1,1\right]}\left(\lambda \right).$ 19
More generally, any system of the type,
$\left\{\begin{array}{c}\stackrel{˙}{x}\left(t\right)=Ax\left(t\right)+a+B\lambda \left(t\right)\hfill \\ y\left(t\right)=Cx\left(t\right)+a\hfill \\ -\lambda \left(t\right)\in Sgn\left(y\left(t\right)\right),\hfill \end{array}\right\$ 20
can reformulated in terms of the following set-valued system
$\left\{\begin{array}{c}\stackrel{˙}{x}\left(t\right)=Ax\left(t\right)+a+B\lambda \left(t\right)\hfill \\ y\left(t\right)=Cx\left(t\right)+c\hfill \\ -y\left(t\right)\in {N}_{{\left[-1,1\right]}^{m}}\left(\lambda \left(t\right)\right).\hfill \end{array}\right\$ 21
The system (21) appears in a lot of applications; among them, we can cite the sliding mode control, electrical circuits with relay and Zener diodes 21, or mechanical systems with friction 28.
Though this class of systems seems to be rather specific, it includes as well more general dynamical systems such as piecewise smooth systems and discontinuous ordinary differential equations. Indeed, the system (20) for scalars $y$ and $\lambda$ can be viewed as a discontinuous differential equation:
$\stackrel{˙}{x}\left(t\right)=\left\{\begin{array}{ccc}Ax+a+B\hfill & \phantom{\rule{4.pt}{0ex}}\text{if}\phantom{\rule{4.pt}{0ex}}& Cx+c>0\hfill \\ Ax+a-B\hfill & \phantom{\rule{4.pt}{0ex}}\text{if}\phantom{\rule{4.pt}{0ex}}& Cx+c<0.\hfill \end{array}\right\$ 22
One of the most well-known mathematical framework to deal with such systems is the Filippov theory 60 that embed the discontinuous differential equations into a differential inclusion. In the case of a single discontinuity surface given in our example by $S=\left\{x\mid Cx+c=0\right\}$, the Filippov differential inclusion based on the convex hull of the vector fields in the neighborhood of $S$ is equivalent to the use of the multivalued sign function in (20). Conversely, as it has been shown in 34, a piecewise smooth system can be formulated as a nonsmooth system based on products of multivalued sign functions.
## 2.4 Nonsmooth Dynamical systems in the large
Generally, the nonsmooth dynamical systems we propose to study mainly concern systems that possess the following features:
1. A nonsmooth formulation of the constitutive/behavioral laws that define the system. Examples of nonsmooth formulations are piecewise smooth functions, multi–valued functions, inequality constraints, yielding various definitions of dynamical systems such as piecewise smooth systems, discontinuous ordinary differential equations, complementarity systems, projected dynamical systems, evolution or differential variational inequalities and differential inclusions (into normal cones). Fundamental mathematical tools come from convex analysis 90, 66, 67, complementarity theory 55, and variational inequalities theory 59.
2. A concept of solutions that does not require continuously differentiable functions of time. For instance, absolutely continuous, Lipschitz continuous functions or functions of local bounded variation are the basis for solution concepts. Measures or distributions are also solutions of interest for differential inclusions or evolution variational inequalities.
## 2.5 Nonsmooth systems versus hybrid systems
The nonsmooth dynamical systems we are dealing with, have a nonempty intersection with hybrid systems and cyber-physical systems, as it is briefly discussed in Sect. 3.2.4. Like in hybrid systems, nonsmooth dynamical systems define continuous–time dynamics that can be identified to modes separated by guards, defined by the constraints. However, the strong mathematical structure of nonsmooth dynamical systems allows us to state results on the following points:
1. Mathematical concept of solutions: well-posedness (existence, and possibly, uniqueness properties, (dis)continuous dependence on initial conditions).
2. Dynamical systems theoretic properties: existence of invariants (equilibria, limit cycles, periodic solutions,...) and their stability, existence of oscillations, periodic and quasi-periodic solutions and propagation of waves.
3. Control theoretic properties: passivity, controllability, observability, stabilization, robustness.
These latter properties, that are common for smooth nonlinear dynamical systems, distinguish the nonsmooth dynamical systems from the very general definition of hybrid or cyber-physical systems 37, 65. Indeed, it is difficult to give a precise mathematical concept of solutions for hybrid systems since the general definition of hybrid automata is usually too loose.
## 2.6 Numerical methods for nonsmooth dynamical systems
To conclude this brief exposition of nonsmooth dynamical systems, let us recall an important fact related to numerical methods. Beyond their intrinsic mathematical interest, and the fact that they model real physical systems, using nonsmooth dynamical systems as a model is interesting, because it exists a large set of robust and efficient numerical techniques to simulate them. Without entering into deeper details, let us give two examples of these techniques:
• Numerical time integration methods: convergence, efficiency (order of consistency, stability, symplectic properties). For the nonsmooth dynamical systems described above, there exist event–capturing time–stepping schemes with strong mathematical results. These schemes have the ability to numerically integrate the initial value problem without performing an event location, but by capturing the event within a time step. We call an event, or a transition, every change into the index set of the active constraints in the complementarity formulation or in the normal cone inclusion. Hence these schemes are able to simulate systems with a huge number of transitions or even worth finite accumulation of events (Zeno behavior). Furthermore, the schemes are not suffering from the weaknesses of the standard schemes based on a regularization (smoothing) of the multi-valued mapping resulting in stiff ordinary differential equations. For the time–integration of the initial value problem (IVP), or Cauchy problem, a lot of improvements of the standard time–stepping schemes for nonsmooth dynamics (Moreau–Jean time-stepping scheme) have been proposed in the last decade, in terms of accuracy and dissipation properties 31, 33, 91, 92, 30, 54, 50, 93, 52. An important part of these schemes has been developed by members of the BIPOP team and has been implemented in the Siconos software (see Sect. 5).
• Numerical solution procedure for the time–discretized problem, mainly through well-identified problems studied in the optimization and mathematical programming community. Another very interesting feature is the fact that the discretized problem that we have to solve at each time–step is generally a well-known problem in optimization. For instance, for LCSs, we have to solve a linear complementarity problem 55 for which there exist efficient solvers in the literature. Comparing to the brute force algorithm with exponential complexity that consists in enumerating all the possible modes, the algorithms for linear complementarity problem have polynomial complexity when the problem is monotone.
In the Axis 2 of the research program (see Sect. 3.3), we propose to perform new research on the geometric time-integration schemes of nonsmooth dynamical systems, to develop new integration schemes for Boundary Value Problem (BVP), and to work on specific methods for two time-discretized problems: the Mathematical Program with Equilibrium Constraints (MPEC) for optimal control and Second Order Cone Complementarity Problems (SOCCP) for discrete frictional contact systems.
# 3 Research program
## 3.1 Introduction
In this section, we develop our scientific program. In the framework of nonsmooth dynamical systems, the activities of the project–team will be on focused on the following research axes:
• Axis 1: Modeling and analysis (detailed in Sect. 3.2).
• Axis 2: Numerical methods and simulation (detailed in Sect. 3.3).
• Axis 3: Automatic Control (detailed in Sect. 3.4)
These research axes will be developed with a strong emphasis on the software development and the industrial transfer.
## 3.2 Axis 1: Modeling and analysis
This axis is dedicated to the modeling and the mathematical analysis of nonsmooth dynamical systems. It consists of four main directions. Two directions are in the continuation of BIPOP activities: 1) multibody vibro-impact systems (Sect. 3.2.1) and 2) excitable systems (Sect. 3.2.2). Two directions are completely new with respect to BIPOP: 3) Nonsmooth geomechanics and natural hazards assessment (Sect. 3.2.3) and 4) Cyber-physical systems (hybrid systems) (Sect. 3.2.4).
### 3.2.1 Multibody vibro-impact systems
Participants: B. Brogliato, F. Bourrier, G. James, V. Acary
• Multiple impacts with or without friction : there are many different approaches to model collisions, especially simultaneous impacts (so-called multiple impacts) 84. One of our objectives is on one hand to determine the range of application of the models (for instance, when can one use “simplified” rigid contact models relying on kinematic, kinetic or energetic coefficients of restitution?) on typical benchmark examples (chains of aligned beads, rocking block systems). On the other hand, try to take advantage of the new results on nonlinear waves phenomena, to better understand multiple impacts in 2D and 3D granular systems. The study of multiple impacts with (unilateral) nonlinear visco-elastic models (Simon-Hunt-Crossley, Kuwabara-Kono), or visco-elasto-plastic models (assemblies of springs, dashpots and dry friction elements), is also a topic of interest, since these models are widely used.
• Artificial or manufactured or ordered granular crystals, meta-materials : Granular metamaterials (or more general nonlinear mechanical metamaterials) offer many perspectives for the passive control of waves originating from impacts or vibrations. The analysis of waves in such systems is delicate due to spatial discreteness, nonlinearity and non-smoothness of contact laws 88, 72, 71, 78. We will use a variety of approaches, both theoretical (e.g. bifurcation theory, modulation equations) and numerical, in order to describe nonlinear waves in such systems, with special emphasis on energy localization phenomena (excitation of solitary waves, fronts, breathers).
• Systems with clearances, modeling of friction : joint clearances in kinematic chains deserve specific analysis, especially concerning friction modeling 36. Indeed contacts in joints are often conformal, which involve large contact surfaces between bodies. Lubrication models should also be investigated.
• Painlevé paradoxes : the goal is to extend the results in 62, which deal with single-contact systems, to multi-contact systems. One central difficulty here is the understanding and the analysis of singularities that may occur in sliding regimes of motion.
As a continuation of the work in the BIPOP team, our software code, Siconos (see Sect. 5) will be our favorite software platform for the integration of these new modeling results.
### 3.2.2 Excitable systems
Participants: A. Tonnelier, G. James
An excitable system elicits a strong response when the applied perturbation is greater than a threshold 81, 82, 40, 94. This property has been clearly identified in numerous natural and physical systems. In mechanical systems, non-monotonic friction law (of spinodal-type) leads to excitability. Similar behavior may be found in electrical systems such as active compounds of neuristor type. Models of excitable systems incorporate strong non-linearities that can be captured by non-smooth dynamical systems. Two properties are deeply associated with excitable systems: oscillations and propagation of nonlinear waves (autowaves in coupled excitable systems). We aim at understanding these two dynamical states in excitable systems through theoretical analysis and numerical simulations. Specifically we plan to study:
• Threshold-like models in biology: spiking neurons, gene networks.
• Frictional contact oscillators (slider block, Burridge-Knopoff model).
• Dynamics of active electrical devices : memristors, neuristors.
### 3.2.3 Nonsmooth geomechanics and natural hazards assessment
Participants: F. Bourrier, B. Brogliato, G. James, V. Acary
• Rockfall impact modeling : Trajectory analysis of falling rocks during rockfall events is limited by a rough modeling of the impact phase 43, 42, 76. The goal of this work is to better understand the link between local impact laws at contact with refined geometries and the efficient impact laws written for a point mass with a full reset map. A continuum of models in terms of accuracy and complexity will be also developed for the trajectory studies. In particular, nonsmooth models of rolling friction, or rolling resistance will be developed and formulated using optimization problems.
• Experimental validation : The participation of IRSTEA with F. Bourrier makes possible the experimental validation of models and simulations through comparisons with real data. IRSTEA has a large experience of lab and in-situ experiments for rockfall trajectories modeling 43, 42. It is a unique opportunity to vstrengthen our model and to prove that nonsmooth modeling of impacts is reliable for such experiments and forecast of natural hazards.
• Rock fracturing : When a rock falls from a steep cliff, it stores a large amount of kinetic energy that is partly dissipated though the impact with the ground. If the ground is composed of rocks and the kinetic energy is sufficiently high, the probability of the fracture of the rock is high and yields an extra amount of dissipated energy but also an increase of the number of blocks that fall. In this item, we want to use the capability of the nonsmooth dynamical framework for modeling cohesion and fracture 73, 32 to propose new impact models.
• Rock/forest interaction : To prevent damages and incidents to infrastructures, a smart use of the forest is one of the ways to control trajectories (decrease of the run-out distance, jump heights and the energy) of the rocks that fall under gravity 56, 58. From the modeling point of view and to be able to improve the protective function of the forest, an accurate modeling of impacts between rocks and trees is required. Due to the aspect ratio of the trees, they must be considered as flexible bodies that may be damaged by the impact. This new aspect offers interesting modeling research perspectives.
More generally, our collaboration with IRSTEA opens new long term perspectives on granular flows applications such as debris and mud flows, granular avalanches and the design of structural protections. The numerical methods that go with these new modeling approaches will be implemented in our software code, Siconos (see Sect. 5)
### 3.2.4 Cyber-physical systems (hybrid systems)
Participants: V. Acary, B. Brogliato, C. Prieur, A. Tonnelier
Nonsmooth systems have a non-empty intersection with hybrid systems and cyber–physical systems. However, nonsmooth systems enjoy strong mathematical properties (concept of solutions, existence and uniqueness) and efficient numerical tools. This is often the result of the fact that nonsmooth dynamical systems are models of physical systems, and then, take advantage of their intrinsic property (conservation or dissipation of energy, passivity, stability). A standard example is a circuit with $n$ ideal diodes. From the hybrid point of view, this circuit is a piecewise smooth dynamical system with ${2}^{n}$ modes, that can be quite cumbersome to enumerate in order to determinate the current mode. As a nonsmooth system, this circuit can be formulated as a complementarity system for which there exist efficient time–stepping schemes and polynomial time algorithms for the computation of the current mode. The key idea of this research action is to take benefit of this observation to improve the hybrid system modeling tools.
Research actions: There are two main actions in this research direction that will be implemented in the framework of the Inria Project Lab (IPL “ Modeliscale”, see https://team.inria.fr/modeliscale/ for partners and details of the research program):
$\phantom{\rule{1.em}{0ex}}•$Structural analysis of multimode DAE : When a hybrid system is described by a Differential Algebraic Equation (DAE) with different differential indices in each continuous mode, the structural analysis has to be completely rethought. In particular, the re-initialization rule, when a switching occurs from a mode to another one, has to be consistently designed. We propose in this action to use our knowledge in complementarity and (distribution) differential inclusions 26 to design consistent re-initialization rule for systems with nonuniform relative degree vector $\left({r}_{1},{r}_{2},...,{r}_{m}\right)$ and ${r}_{i}\ne {r}_{j},i\ne j$.
$\phantom{\rule{1.em}{0ex}}•$Cyber–physical in hybrid systems modeling languages : Nowadays, some hybrid modeling languages and tools are widely used to describe and to simulate hybrid systems (modelica, simulink, and see 53 for references therein). Nevertheless, the compilers and the simulation engines behind these languages and tools suffer from several serious weaknesses (failure, weird output or huge sensitivity to simulation parameters), especially when some components, that are standard in nonsmooth dynamics, are introduced (piecewise smooth characteristic, unilateral constraints and complementarity condition, relay characteristic, saturation, dead zone, ...). One of the main reasons is the fact that most of the compilers reduce the hybrid system to a set of smooth modes modeled by differential algebraic equations and some guards and reinitialization rules between these modes. Sliding mode and Zeno–behaviour are really harsh for hybrid systems and relatively simple for nonsmooth systems. With B. Caillaud (Inria HYCOMES) and M. Pouzet (Inria PARKAS), we propose to improve this situation by implementing a module able to identify/describe nonsmooth elements and to efficiently handle them with siconos as the simulation engine. They have already carried out a first implementation 51 in Zelus, a synchronous language for hybrid systems http://zelus.di.ens.fr. Removing the weaknesses related to the nonsmoothness of solutions should improve hybrid systems towards robustness and certification.
$\phantom{\rule{1.em}{0ex}}•$A general solver for piecewise smooth systems This direction is the continuation of the promising result on modeling and the simulation of piecewise smooth systems 34. As for general hybrid automata, the notion or concept of solutions is not rigorously defined from the mathematical point of view. For piecewise smooth systems, multiplicity of solutions can happen and sliding solutions are common. The objective is to recast general piecewise smooth systems in the framework of differential inclusions with Aizerman–Pyatnitskii extension 34, 60. This operation provides a precise meaning to the concept of solutions. Starting from this point, the goal is to design and study an efficient numerical solver (time–integration scheme and optimization solver) based on an equivalent formulation as mixed complementarity systems of differential variational inequalities. We are currently discussing the issues in the mathematical analysis. The goal is to prove the convergence of the time–stepping scheme to get an existence theorem. With this work, we should also be able to discuss the general Lyapunov stability of stationary points of piecewise smooth systems.
## 3.3 Axis 2: Numerical methods and simulation
This axis is dedicated to the numerical methods and simulation for nonsmooth dynamical systems. As we mentioned in the introduction, the standard numerical methods have been largely improved in terms of accuracy and dissipation properties in the last decade. Nevertheless, the question of the geometric time–integration techniques remains largely open. It constitutes the objective of the first research direction in Sect. 3.3.1. Beside the standard IVP, the question of normal mode analysis for nonsmooth systems is also a research topic that emerged in the recent years. More generally, the goal of the second research direction (Sect. 3.3.2) is to develop numerical methods to solve boundary value problems in the nonsmooth framework. This will serve as a basis for the computation of the stability and numerical continuation of invariants. Finally, once the time-integration method is chosen, it remains to solve the one-step nonsmooth problem, which is, most of time, a numerical optimization problem. In Sect. 3.3.3, we propose to study two specific problems with a lot of applications: the Mathematical Program with Equilibrium Constraints (MPEC) for optimal control, and Second Order Cone Complementarity Problems (SOCCP) for discrete frictional contact systems. After some possible prototypes in scripting languages (Python and Matlab), we will be attentive that all these developments of numerical methods will be integrated in Siconos.
### 3.3.1 Geometric time–integration schemes for nonsmooth Initial Value Problem (IVP)
Participants: V. Acary, B. Brogliato, G. James, F. Pérignon
The objective of this research item is to continue to improve classical time–stepping schemes for nonsmooth systems to ensure some qualitative properties in discrete-time. In particular, the following points will be developed
• Conservative and dissipative systems. The question of the energy conservation and the preservation of dissipativity properties in the Willems sense 64 will be pursued and extended to new kinds of systems (nonlinear mechanical systems with nonlinear potential energy, systems with limited differentiability (rigid impacts vs. compliant models)).
• Lie–group integration schemes for finite rotations for the multi-body systems extending recent progresses in that directions for smooth systems 38.
• Conservation and preservation of the dispersion properties of the (non)-dispersive system.
### 3.3.2 Stability and numerical continuation of invariants
Participants: G. James, V. Acary, A. Tonnelier, F. Pérignon,
By invariants, we mean equilibria, periodic solutions, limit cycles or waves. Our preliminary work on this subject raised the following research perspectives:
• Computation of periodic solutions of discrete mechanical systems . The modal analysis, i.e., a spectral decomposition of the problem into linear normal modes is one of the basic tools for mechanical engineers to study dynamic response and resonance phenomena of an elastic structure. Since several years, the concept of nonlinear normal modes 74, that is closely related to the computation of quasi-periodic solutions that live in a nonlinear manifold, has emerged as the nonlinear extension of the modal analysis. One of the fundamental question is: what remains valid if we add unilateral contact conditions ? The computation of nonsmooth modes amounts to computing periodic solutions, performing the parametric continuation of solution branches and studying the stability of these branches.
This calls for time integration schemes for IVP an BVP that satisfy some geometric criteria: conservation of energy, reduced numerical dispersion, symplecticity as we described before. Though the question of conservation of energy for unilateral contact has been discussed in 30, the other questions remain open. For the shooting technique and the study of stability, we need to compute the Jacobian matrix of the flow with respect to initial conditions, the so-called saltation matrix 75, 85 for nonsmooth flows. The eigenvalues of this matrix are the Floquet multipliers that give some information on the stability of the periodic solutions. The question of an efficient computation of this matrix is also an open question. For the continuation, the question is also largely open since the continuity of the solutions with respect to the parameters is not ensured.
• Extension to elastic continuum media . This is a difficult task. First of all, the question of the mathematical model for the dynamic continuum problem with unilateral contact raises some problems of well–posedness. For instance, the need for an impact law is not clear in some cases. If we perform a semi–discretization in space with classical techniques (Finite Element Methods, Finite Difference Schemes), we obtain a discrete system for which the impact law is needed. Besides all the difficulties that we enumerate for discrete systems in the previous paragraph, the space discretization also induces numerical dispersion that may destroy the periodic solutions or renders their computation difficult. The main targeted applications for this research are cable–systems, string musical instruments, and seismic response of electrical circuit breakers with Schneider Electric.
• Computation of solutions of nonsmooth time Boundary Value Problems (BVP) (collocation, shooting) . The technique developed in the two previous items can serve as a basis for the development of more general solvers for nonsmooth BVP that can be for instance found when we solve optimal control problems by direct or indirect methods, or the computation of nonlinear waves. Two directions can be envisaged:
• Shooting and multiple shooting techniques. In such methods, we reformulate the BVP into a sequence of IVPs that are iterated through a Newton based technique. This implies the computation of Jacobians for nonsmooth flows, the question of the continuity w.r.t to initial condition and the use of semi-smooth Newton methods.
• Finite differences and collocations techniques. In such methods, the discretization will result into a large sparse optimization problems to solve. The open questions are as follows: a) the study of convergence, b) how to locally improve the order if the solution is locally smooth, and c) how to take benefit of spectral methods.
• Continuation techniques of solutions with respect to a parameter. Standard continuation technique requires smoothness. What types of methods can be extended in the nonsmooth case (arc-length technique, nonsmooth (semi-smooth) Newton, Asymptotical Numerical Methods (ANM))
### 3.3.3 Numerical optimization for discrete nonsmooth problems
Participants: V. Acary, M. Brémond, F. Pérignon, B. Brogliato, C. Prieur
• Mathematical Program with Equilibrium Constraints (MPEC) for optimal control . The discrete problem that arises in nonsmooth optimal control is generally a MPEC 95. This problem is intrinsically nonconvex and potentially nonsmooth. Its study from a theoretical point of view has started 10 years ago but there is no consensus for its numerical solving. The goal is to work with world experts of this problem (in particular M. Ferris from Wisconsin University) to develop dedicated algorithms for solving MPEC, and provide to the optimization community challenging problems.
• Second Order Cone Complementarity Problems (SOCCP) for discrete frictional systems : After some extensive comparisons of existing solvers on a large collection of examples 25, 22, the numerical treatment of constraints redundancy by the proximal point technique and the augmented Lagrangian formulation seems to be a promising path for designing new methods. From the comparison results, it appears that the redundancy of constraints prevents the use of second order methods such as semi–smooth Newton methods or interior point methods. With P. Armand (XLIM, U. de Limoges), we propose to adapt recent advances for regularizing constraints for the quadratic problem 61 for the second-order cone complementarity problem. The other question is the improvement of the efficiency of the algorithms by using accelerated schemes for the proximal gradient method that come from large–scale machine learning and image processing problems. Learning from the experience in large–scale machine learning and image processing problems, the accelerated version of the classical gradient algorithm 83 and the proximal point algorithm 39, and many of their further extensions, could be of interest for solving discrete frictional contact problems. Following the visit of Y. Kanno (University of Tokyo) and his preliminary experience on frictionless problems, we will extend its use to frictional contact problem. When we face large-scale problems, the main available solvers is based on a Gauss–Seidel strategy that is intrinsically sequential. Accelerated first-order methods could be a good alternative to take benefit of the distributed scientific computing architectures.
## 3.4 Axis 3: Automatic Control
Participants: B. Brogliato, C. Prieur, V. Acary
This last axis is dedicated to the automatic control of nonsmooth dynamical systems, or the nonsmooth control of smooth systems. The first item concerns the discrete-time sliding mode control for which significant results on the implicit implementation have been obtained in the BIPOP team. The idea is to pursue this research towards state observers and differentiators (Sect 3.4.1). The second direction concerns the optimal control which brings of nonsmoothness in their solution and their formulation. After the preliminary work in BIPOP on the quadratic optimal control of Linear Complementarity systems(LCS), we propose to go further to the minimal time problem, to impacting systems and optimal control with state constraints (Sect. 3.4.2). In Sect 3.4.3, the objective is to study the control of nonsmooth systems that contain unilateral constraint, impact and friction. The targeted systems are cable–driven systems, multi-body systems with clearances and granular materials. In Sect 3.4.4, we will continue our work on the higher order Moreau sweeping process. Up to now, the work of BIPOP was restricted to finite-dimensional systems. In Sect 3.4.5, we propose to extend our approach to the control of elastic structures subjected to contact unilateral constraints.
It is noteworthy that most of the problems listed below, will make strong use of the numerical tools analyzed in Axis 2, and of the Modeling analysis of Axis 1. For instance all optimal control problems yield BVPs. Control of granular materials will undoubtedly use models and numerical simulation developed in Axis 1 and 2. And so on. It has to be stressed that the type of nonsmooth models we are working with, deserve specific numerical algorithms which cannot be found in commercial software packages. One of the goals is to continue to extend our software package Siconos, and in particular the siconos/control toolbox with these developments.
### 3.4.1 Discrete-time Sliding-Mode Control (SMC) and State Observers (SMSO)
• SMSO, exact differentiators: we have introduced and obtained significant results on the implicit discretization of various classes of sliding-mode controllers 27, 29, 68, 79, 47, with successful experimental validations 69, 68, 70, 97. Our objective is to prove that the implicit discretization can also bring advantages for sliding-mode state observers and Levant's exact differentiators, compared with the usual explicit digital implementation that generates chattering. In particular the implicit discretization guarantees Lyapunov stability and finite-time convergence properties which are absent in explicit methods.
• High-Order SMC (HOSMC): this family of controllers has become quite popular in the sliding-mode scientific community since its introduction by Levant in the nineties. We want here to continue the study of implicit discretization of HOSMC (twisting, super-twisting algorithms) and especially we would like to investigate the comparisons between classical (first order) SMC and HOSMC, when both are implicitly discretized, in terms of performance, accuracy, chattering suppression. Another topic of interest is stabilization in finite-time of systems with impacts and unilateral constraints, in a discrete-time setting.
### 3.4.2 Optimal Control
• Linear Complementarity Systems (LCS) : With the PhD thesis of A. Vieira, we have started to study the quadratic optimal control of LCS. Our objective is to go further with minimum-time problems. Applications of LCS are mainly in electrical circuits with set-valued components such as ideal diodes, transistors, etc. Such problems naturally yield MPEC when numerical solvers are sought. It is therefore intimately linked with Axis 2 objectives.
• Impacting systems : the optimal control of mechanical systems with unilateral constraints and impacts, largely remains an open issue. The problem can be tackled from various approaches: vibro-impact systems (no persistent contact modes) that may be transformed into discrete-time mappings via the impact Poincaré map; or the classical integral action minimization (Bolza problem) subjected to the complementarity Lagrangian dynamics including impacts.
• State constraints, generalized control : this problem differs from the previous two, since it yields Pontryagin's first order necessary conditions that take the form of an LCS with higher relative degree between the complementarity variables. This is related to the numerical techniques for the higher order sweeping process 26.
### 3.4.3 Control of nonsmooth discrete Lagrangian systems
• Cable–driven systems: these systems are typically different from the cable-car systems, and are closer in their mechanical structure to so-called tensegrity structures. The objective is to actuate a system via cables supposed in a first instance to be flexible (slack mode) but non-extensible in their longitudinal direction. This gives rise to complementarity conditions, one big difference with usual complementarity Lagrangian systems being that the control actions operate directly in one of the complementary variables (and not in the smooth dynamics as in cable-car systems). Therefore both the cable models and the control properties are expected to differ a lot from what we may use for cableway systems (for which guaranteeing a positive cable tension is usually not an issue, hence avoiding slack modes, but the deformation of the cables due to the nacelles and cables weights, is an important factor). Tethered systems are a close topic.
• Multi-body systems with clearances: our approach is to use models of clearances with dynamical impact effects, i.e. within Lagrangian complementarity systems. Such systems are strongly underactuated due to mechanical play at the joints. However their structure, as underactuated systems, is quite different from what has been usually considered in the Robotics and Control literature. In the recent past we have proposed a thorough numerical robustness analysis of various feedback collocated and non-collocated controllers (PD, linearization, passivity-based). We propose here to investigate specific control strategies tailored to such underactuated systems 44.
• Granular systems: the context is the feedback control of granular materials. To fix the ideas, one may think of a “juggling” system whose “object” (uncontrolled) part consists of a chain of aligned beads. Once the modeling step has been fixed (choice of a suitable multiple impact law), one has to determine the output to be controlled: all the beads, some of the beads, the chain's center of mass (position, velocity, vibrational magnitude and frequency), etc. Then we aim at investigating which type of controller may be used (output or state feedback, “classical” or sinusoidal input with feedback through the magnitude and frequency) and especially which variables may be measured/observed (positions and/or velocities of all or some of the beads, position and/or velocity of the chain's center of gravity). This topic follows previous results we obtained on the control of juggling systems 48, with increasing complexity of the “object”'s dynamics. The next step would be to extend to 2D and then 3D granular materials. Applications concern vibrators, screening, transport in mining and manufacturing processes.
• Stability of structures: our objective here is to study the stability of stacked blocks in 2D or 3D, and the influence on the observed behavior (numerically and/or analytically) of the contact/impact model.
### 3.4.4 Switching LCS and DAEs, higher-order sweeping process (HOSwP)
• We have gained a strong experience in the field of complementarity systems and distribution differential inclusions 26, 49, that may be seen as some kind of switching DAEs. We plan to go further with non-autonomous HOSwP with switching feedback inputs and non-uniform vector relative degrees. Switching linear complementarity systems can also be studied, though the exact relationships between both point of views remain unclear at the present time. This axis of research is closely related to cyber-physical systems in section 3.2.
### 3.4.5 Control of Elastic (Visco-plastic) systems with contact, impact and friction
• Stabilization, trajectory tracking: until now we have focused on the stability and the feedback control of systems of rigid bodies. The proposal here is to study the stabilization of flexible systems (for instance, a “simple” beam) subjected to unilateral contacts with or without set-valued friction (contacts with obstacles, or impacts with external objects line particle/beam impacts). This gives rise to varying (in time and space) boundary conditions. The best choice of a good contact law is a hard topic discussed in the literature.
• Cableway systems (STRMTG, POMA): cable-car systems present challenging control problems because they usually are underactuated systems, with large flexibilities and deformations. Simplified models of cables should be used (Ritz-Galerkin approach), and two main classes of systems may be considered: those with moving cable and only actuator at the station, and those with fixed cable but actuated nacelles. It is expected that they possess quite different control properties and thus deserve separate studies. The nonsmoothness arises mainly from the passage of the nacelles on the pylons, which induces frictional effects and impacts. It may certainly be considered as a nonsmooth set-valued disturbance within the overall control problem.
# 4 Application domains
## 4.1 Domain 1
Nonsmooth dynamical systems arise in a lot of application fields. We briefly expose here some applications that have been treated in the BIPOP team and that we will continue in the TRIPOP team, as a validation for the research axes and also in terms of transfer. In mechanics, the main instances of nonsmooth dynamical systems are multibody systems with Signorini's unilateral contact, set-valued (Coulomb-like) friction and impacts, or in continuum mechanics, ideal plasticity, fracture or damage. Some illustrations are given in Figure 5(a-f). Other instances of nonsmooth dynamical systems can also be found in electrical circuits with ideal components (see Figure 5(g)) and in control theory, mainly with sliding mode control and variable structure systems (see Figure 5(h)). More generally, every time a piecewise, possibly set–valued, model of systems is invoked, we end up with a nonsmooth system. This is the case, for instance, for hybrid systems in nonlinear control or for piecewise linear modeling of gene regulatory networks in mathematical biology (see Figure 5(i)). Another common example of nonsmooth dynamics is also found when the vector field of a dynamical system is defined as a solution of an optimization problem under constraints, or a variational inequality. Examples of this kind are found in the optimal control theory, in dynamic Nash equilibrium or in the theory of dynamic flows over networks.
# 5 New software and platforms
Participant: Vincent Acary, Franck Bourrier, Maurice Brémond, Franck Pérignon, Alexandre Rocca.
In the framework of the FP5 European project Siconos (2002-2006), Bipop was the leader of the Work Package 2 (WP2), dedicated to the numerical methods and the software design for nonsmooth dynamical systems. This has given rise to the platform siconos which is the main software development task in the team. The aim of this work is to provide a common platform for the simulation, modeling, analysis and control of abstract nonsmooth dynamical systems. Besides usual quality attributes for scientific computing software, we want to provide a common framework for various scientific fields, to be able to rely on the existing developments (numerical algorithms, description and modeling software), to support exchanges and comparisons of methods, to disseminate the know-how to other fields of research and industry, and to take into account the diversity of users (end-users, algorithm developers, framework builders) in building expert interfaces in Python and end-user front-end through Scilab.
After the requirement elicitation phase, the Siconos Software project has been divided into 5 work packages which are identified to software products:
• Siconos/Numerics This library contains a set of numerical algorithms, already well identified, to solve non smooth dynamical systems. This library is written in low-level languages (C,F77) in order to ensure numerical efficiency and the use of standard libraries (Blas, Lapack, ...)
• Siconos/Kernel This module is an object-oriented structure (C++) for the modeling and the simulation of abstract dynamical systems. It provides the users with a set of classes to describe their nonsmooth dynamical system (dynamical systems, intercations, nonsmooth laws, ...) and to perform a numerical time integration and solving.
• Siconos/Front-End. This module is mainly an auto-generated wrapper in Python which provides a user-friendly interface to the Siconos libraries. A scilab interface is also provided in the Front-End module.
• Siconos/Control This part is devoted to the implementation of control strategies of non smooth dynamical systems.
• Siconos/Mechanics. This part is dedicated to the modeling and the simulation of multi-body systems with 3D contacts, impacts and Coulomb's friction. It uses the Siconos/Kernel as simulation engine but relies on a industrial CAD library (OpenCascade and pythonOCC) to deal with complex body geometries and to compute the contact locations and distances between B-Rep description and on Bullet for contact detection between meshes.
Further informations may be found at Siconos website
# 6 New results
## 6.1 Nonlinear waves in granular chains
Participant: Guillaume James, Bernard Brogliato, Kirill Vorotnikov.
Granular chains made of aligned beads interacting by contact (e.g. Newton's cradle) are widely studied in the context of impact dynamics and acoustic metamaterials. In order to describe the response of such systems to impacts or vibrations, it is important to analyze different wave effects such as the propagation of compression waves (solitary waves or fronts) or localized oscillations (traveling breathers), or the scattering of vibrations through the chain. Such phenomena are strongly influenced by contact nonlinearities (Hertz force), spatial inhomogeneities and dissipation.
In the work 12, we analyze the Kuwabara-Kono (KK) model for contact damping, and we develop new approximations of this model which are efficient for the simulation of multiple impacts. The KK model is a simplified viscoelastic contact model derived from continuum mechanics, which allows for simpler calibration (using material parameters instead of phenomenological ones), but its numerical simulation requires a careful treatment due to its non-Lipschitz character. Using different dissipative time-discretizations of the conservative Hertz model, we show that numerical dissipation can be tuned properly in order to reproduce the physical dissipation of the KK model and associated wave effects. This result is obtained analytically in the limit of small time steps (using methods from backward analysis) and is numerically validated for larger time steps. The resulting schemes turn out to provide good approximations of impact propagation even for relatively large time steps.
In addition, G.J. has developed a theoretical method to analyze impacts in homogeneous granular chains with KK dissipation. The idea is to use the exponent $\alpha$ of the contact force as a parameter and derive simpler dynamical equations through an asymptotic analysis, in the limit when $\alpha$ approaches unity and long waves are considered. In that case, different continuum limits of the granular chain can be obtained. When the contact damping constant remains of order unity, wave profiles are well approximated by solutions of a viscous Burgers equation with logarithmic nonlinearity. For small contact damping, dispersive effects must be included and the continuum limit corresponds to a KdV-Burgers equation with logarithmic nonlinearity. By studying traveling wave solutions to these partial differential equations, we obtain analytical approximations of wave profiles such as compression fronts. We observe that these approximations remain meaningful for the classical exponent $\alpha =3/2$. Indeed, they are close to exact wave profiles computed numerically for the KK model, using both dynamical simulations (response of the chain to a compression by a piston) and the Newton method (computation of exact traveling waves by a shooting method). In addition, in analogy with the Rankine-Hugoniot conditions for hyperbolic systems, we relate the asymptotic states of the KK model (for an infinite granular chain) to the velocity of a propagating front. These results are described in an article in preparation.
## 6.2 Signal propagation along excitable chains
Participant: Arnaud Tonnelier.
Nonlinear self-sustained waves, or autowaves, have been identified in a large class of discrete excitable media. We have proposed a simple continuous-time threshold model for wave propagation in excitable media. The ability of the resulting transmission line to convey a one-bit signal is investigated. Existence and multistability of signals where two successive units share the same waveform is established. We show that, depending on the connectivity of the transmission line, an arbitrary number of distinct signals can be transmitted. More precisely, we prove that, for a one-dimensional information channel with $n\mathrm{th}$-neighbor interactions, a $n$-fold degeneracy of the speed curve induces the coexistence of $2n$ propagating signals, $n$ of which are stable and allow $n$ distinct symbols transmission. The influence of model parameters (time constants, coupling strength and connectivity) on the traveling signal properties is analyzed. This work is almost finished and is going to be submitted.
## 6.3 Hybrid Differential Algebraic equations
Participant: Vincent Acary, Bernard Brogliato, Alexandre Rocca.
In 8914, we study differential algebraic equations with constraints defined in a piecewise manner using a conditional statement. Such models classically appear in systems where constraints can evolve in a very small time frame compared to the observed time scale. The use of conditional statements or hybrid automata are a powerful way to describe such systems and are, in general, well suited to simulation with event driven numerical schemes. However, such methods are often subject to chattering at mode switch in presence of sliding modes, or can result in Zeno behaviours. In contrast, the representation of such systems using differential inclusions and method from non-smooth dynamics are often closer to the physical theory but may be harder to interpret. Associated time-stepping numerical methods have been extensively used in mechanical modelling with success and then extended to other fields such as electronics and system biology. In a similar manner to the previous application of non-smooth methods to the simulation of piecewise linear ODEs, non-smooth event-capturing numerical scheme are applied to piecewise linear DAEs. In particular, the study of a 2-D dynamical system of index-2 with a switching constraint using set-valued operators, is presented.
## 6.4 Numerical analysis of multibody mechanical systems with constraints
This scientific theme concerns the numerical analysis of mechanical systems with bilateral and unilateral constraints, with or without friction 1. They form a particular class of dynamical systems whose simulation requires the development of specific methods for analysis and dedicated simulators 57.
### 6.4.1 Numerical solvers for frictional contact problems.
Participant: Vincent Acary, Maurice Brémond, Paul Armand.
In 23, we review several formulations of the discrete frictional contact problem that arises in space and time discretized mechanical systems with unilateral contact and three-dimensional Coulomb’s friction. Most of these formulations are well–known concepts in the optimization community, or more generally, in the mathematical programming community. To cite a few, the discrete frictional contact problem can be formulated as variational inequalities, generalized or semi–smooth equations, second–order cone complementarity problems, or as optimization problems such as quadratic programming problems over second-order cones. Thanks to these multiple formulations, various numerical methods emerge naturally for solving the problem. We review the main numerical techniques that are well-known in the literature and we also propose new applications of methods such as the fixed point and extra-gradient methods with self-adaptive step rules for variational inequalities or the proximal point algorithm for generalized equations. All these numerical techniques are compared over a large set of test examples using performance profiles. One of the main conclusion is that there is no universal solver. Nevertheless, we are able to give some hints to choose a solver with respect to the main characteristics of the set of tests.
Recently, new developments have been carried out on two new applications of well-known numerical methods in Optimization:
• Interior point methods With the visit of Paul Armand, Université de Limoges, we co-supervise a M2 internship, Maksym Shpakovych on the application of interior point methods for quadratic problem with second-order cone constraints. The results are encouraging and a publication in computational mechanics is in progress.
• Alternating Direction Method of Multipliers. In collaboration with Yoshihiro Kanno, University of Tokyo, the use of the Alternating Direction Method of Multipliers (ADMM) has been adapted to the discrete frictional contact problems. With the help of some acceleration and restart techniques for first-order optimization methods and a residual balancing technique for adapting the proximal penalty parameter, the method proved to be efficient and robust on our test bench examples. A publication is also in preparation on this subject.
### 6.4.2 Modeling and numerical methods for frictional contact problems with rolling resistance
Participant: Vincent Acary, Franck Bourrier.
In 3, the Coulomb friction model is enriched to take into account the resistance to rolling, also known as rolling friction. Introducing the rolling friction cone, an extended Coulomb's cone and its dual, a formulation of the Coulomb friction with rolling resistance as a cone complementarity problem is shown to be equivalent to the standard formulation of the Coulomb friction with rolling resistance. Based on this complementarity formulation, the maximum dissipation principle and the bi-potential function are derived. Several iterative numerical methods based on projected fixed point iterations for variational inequalities and block-splitting techniques are given. The efficiency of these method strongly relies on the computation of the projection onto the rolling friction cone. In this article, an original closed-form formulae for the projection on the rolling friction cone is derived. The abilities of the model and the numerical methods are illustrated on the examples of a single sphere sliding and rolling on a plane, and of the evolution of spheres piles under gravity.
### 6.4.3 Numerical modeling of rockfall trajectory
Participant: Vincent Acary, Franck Bourrier.
Rockfall propagation models are routinely used for the quantitative assessment of rockfall hazard. Their capacities and limitations remain difficult to assess due to the limited amount of exhaustive experimental data at the slope scale.
The article 5 presents experiments of block propagation performed in a quarry located in Authume (France). A total of more than one hundred blocks were released on two propagation paths. The propagation of the blocks was assessed by measuring the block stopping points as well as their kinematics at specific locations of the paths, called evaluation screens. Significant variability of the stopping points and of the block kinematics at the evaluation screens was observed and preferential transit and deposit zones were highlighted. The analysis of the results showed predominant effect of topography, in particular that related to topographical discontinuities. Significant influence of local and small scale parameters (e.g. block orientation, local topography) was also highlighted. These conclusions are of particular interest for researchers or practitioners who would like to assess the relevance of propagation modelling tools considering this complex study site. In this configuration, the quality of block propagation simulations should notably rely on the accuracy of digital terrain models, and on the integration of local conditions effects using physically based approaches.
Complementary with the research held in 5, the predictive capabilities of block propagation models after a preliminary calibration phase is investigated. It is focused on models integrating the shape of blocks since, despite their sound physical bases, they remain less used than lumped-mass approaches due to their more recent popularisation. We first performed an expert-based calibration based on the use of the 2D model and, second, evaluated the predictive capabilities of the calibrated model in 2D and in 3D using the remaining part of the experimental results. The calibrated model simulations predict the main characteristics of the propagation : after a calibration phase on sufficient amount of soil types, the model may be used in a predictive manner. The adequacy between 2D and 3D simulations also favors applicability of the model since easier and faster calibrations based on 2D simulations only can be envisaged. As classically observed for block propagation models, the model is not sufficient to predict the details of the velocity and stopping points but provides accurate prediction of the global ranges of these quantities, in particular of the extreme values. To lift these limitations in terms of predictive capabilities, more advanced calibration procedures based on optimization techniques can constitute a promising perspective as it is studied in 41.
### 6.4.4 Finite element modeling of cable structures
Participant: Vincent Acary, Charlélie Bertrand.
Standard finite element discretization for cable structures suffer from several drawbacks. The first one is related to the mechanical assumption that the cable can not support compression. Standard formulations do not take into account this assumption. The second drawback comes from the high stiffness of the cable model when we deal with large lengths with high Young modulus such as cable ropeways installations. In this context, standard finite element applications cannot avoid compressive solutions and have huge difficulties to converge. In a forthcoming paper, we propose to a formulation based on a piecewise linear modeling of the cable constitutive behavior where the elasticity in compression is canceled. Furthermore, a dimensional analysis help us to formulate a problem that is well-balanced and the conditioning of the problem is diminished. The finite element discretization of this problem yields a robust method where convergence is observed with the number of elements and the nonlinear solver based on nonsmooth Newton strategy is converging up to tight tolerances. The convergence with the number of element allows one to refine the mesh as much as we want that will be of utmost importance for applications with contact and friction. Indeed, a fine discretization with respect to the whole length of the cable will be possible in the contact zone. This work has been the object of the following publication 4
### 6.4.5 Damage model of concrete structures by a variational approach
Participant: Vincent Acary.
The work in 16 aims at providing a numerical method to estimate the damage of a concrete structure, under the load of an avalanche-type natural event. Using the Francfort and Marigo damage model, we begin by validating the model in a 1–D configuration by analytical and numerical calculus. Moreover, as concrete has very different behaviour in tension and compression, we then introduce a tension-compression formulation in a 2–D configuration within the variational approach to damage. Considering a non-vanishing resulting Young modulus and minimizing the total energy, including the energy released by damage, provides us with the damage state during the load, without resorting to non-local formulation. We present some validation simulations such as three-point flexural test. Finally, we show realistic simulations of bending of the structure under the load of an avalanche and the resulting damaged state.
### 6.4.6 Well-posedness of the contact problem
We continue in 6 the analysis of the so-called contact problem for Lagrangian systems with bilateral and unilateral constraints, with set-valued Coulomb's friction. The problem that is analysed this time concerns sticking contacts (in both the normal and the tangential directions), i.e., does there exist a solution (possibly unique) to the contact problem (that takes the form of a complementarity problem) when all contacts are sticking ? An algorithm is proposed that allows in principle to compute solutions. We rely strongly on results of existence and uniqueness of solutions to variational inequality of the second kind, obtained in the team some years ago. Let us note also the erratum/addendum of the monograph 46 in 45, which is regularly updated.
## 6.6 Discrete-time differentiators
The article 18 deals with the problem of online differentiation of noisy signals. In this context, several types of differentiators including linear, sliding-mode based, adaptive, Kalman, and ALIEN differentiators are studied through mathematical analysis and numerical experiments. To resolve the drawbacks of the exact differentiators, new implicit and semi-implicit discretization schemes are proposed in this work to suppress the digital chattering caused by the wrong time-discretization of set-valued functions as well as providing some useful properties, e.g., finite-time convergence, invariant sliding-surface, exactness. A complete comparative analysis is presented in the manuscript to investigate the behavior of the discrete-time differentiators in the presence of several types of noises, including white noise, sinusoidal noise, and bell-shaped noise. Many details such as quantization effect and realistic sampling times are taken into account to provide useful information based on practical conditions. Many comments are provided to help the engineers to tune the parameters of the differentiators.
In 20, the experimental analysis of discrete-time differentiators implemented in closed-loop control systems is achieved. To this end, two laboratory setups, namely an electro-pneumatic system and a rotary inverted pendulum have been used to implement 25 different differentiators. Since the selected laboratory setups behave differently in the case of dynamic response and noise characteristics, it is expected that the results remain valid for a wide range of control applications. The validity of several theoretical results, which have been already reported in the literature using mathematical analysis and numerical simulations, has been investigated, and several comments are provided to allow one to select an appropriate differentiation scheme in practical closed-loop control systems.
## 6.7 Robust sliding-mode control: continuous and discrete-time
The implicit method for the time-discretization of set-valued sliding-mode controllers was introduced in 27, 29. The backstepping approach is used in 80 to design a continuous-time and a discrete-time nested set-valued controller that is able to reject unmatched disturbances (a problem that is known to be tough in the sliding-mdoe control community). In 87, 86 we continue the analysis of the implicit discretization of set-valued systems, this time oriented towards the consistency of time-discretizations for homogeneous systems, with one discontinuity at zero (sometimes called quasi-continuous, strangely enough). The discrete-time analysis of the twisting and the super-twisting algorithms are tackled in 11, 8.
## 6.8 Analysis of set-valued Lur'e dynamical systems
Lur'e systems are very popular in the Automatic Control field since their introduction by Lur'e in 1944. In 9 we propose a very complete survey/tutorial on the set-valued version of such dynamical systems (in finite dimension) which mainly consist of the negative feedback interconnection of an ODE with a maximal monotone set-valued operator. The first studies can be traced back to Yakubpovich in 1963 who analysed the stability of a linear time invariant system with positive real constraints, in negative feedback connection with a hysteresis operator. About 600 references are analysed from the point of view of the mathematical formalisms (Moreau's sweeping process, evolution variational inequalities, projected dynamical systems, complementarity dynamical systems, maximal monotone differential inclusions, differential variational inequalities), the relationships between these formalisms, the numerous fields of application, the well-posedness issues (existence, uniqueness and continuous dependence of solutions), and the stability issues (generalized equations for fixed points, Lyapunov stability, invariance principles).
## 6.9 Optimal control of LCS
The quadratic and minimum time optimal control of LCS as in (6) is tackled in 96, 13. This work relies on the seminal results by Guo and ye (SIAM 2016), and aims at particularizing their results for LCS, so that they become numerically tractable and one can compute optimal controllers and optimal trajectories. The basic idea is to take advantage of the complementarity, to construct linear complementarity problems in the Pontryagin's necessary conditions which can then be integrated numerically, without having to guess a priori the switching instants (the optimal controller can eb discontinuous and the optimal trajectories can visit several modes of the complementarity conditions).
## 6.10 Dissipative systems
Participant: Bernard Brogliato.
The third edition of the book Dissipative Systems Analysis and Control has been released https://www.springer.com/gp/book/9783030194192. Also a short proof of equivalence of so-called side conditions for strictly positive real (SPR) transfer functions is done in 10, closing a long debate in the Automatic Control community about the frequency-domain characterization of SPR transfer matrices.
# 7 Bilateral contracts and grants with industry
## 7.1 Bilateral grants with industry
#### Schneider Electric
This action started in 2001 with the post-doc of V. Acary co–supported by Schneider Electric and CNRS. With some brief interruptions, this action is still active and should further continue. It concerns mainly the simulation and modeling of multi–body systems with contact, friction and impacts with the application for the virtual prototyping of electrical circuit breakers.
During these years, various forms of collaborations have been held. Two PhD thesis have been granted by Schneider Electric (D.E. Taha and N. Akhakdar) accompanied with research contracts between INRIA and Schneider Electric. Schneider Electric participated also the ANR project Saladyn as a main partner.
Without going into deep details of the various actions over the years, the major success of this collaboration is the statistical tolerance analysis of the functional requirements of the circuit breakers with respect to clearance in joints and geometrical tolerances on the parts. Starting from the geometrical descriptions (CAD files) of a mechanism with prescribed tolerances on the manufacturing process, we perform worst-case analysis and Monte–Carlo simulations of the circuit breaker with Siconos and we record the variations in the functional requirements. The difficulty in such simulations are the modeling of contact with friction that models the joints with clearances. The results of these analysis enable Schneider Electric to define the manufacturing precision that has a huge impact of the production cost (Schneider Electric produces several millions of C60-type circuit breaker per year). Note that it is not possible to perform such simulations with the existing software codes of the market.
At the beginning, our interlocutor at Schneider Electric was the innovation (R&D) department. Now, we are working and discussing with the business unit, Division Power and Dinnov (M. Abadie, E. Boumediene, X. Herreros) in charge of designing and producing the circuit–breakers. The targeted users are the R&D engineers of Schneider Electric that use simulation tools for designing new models or improving existing circuit breakers. This collaboration continues with new modeling and simulation challenges (flexible parts, multiple impact laws) with the CIFRE PhD of Rami Sayoud.
#### STRMTG
We have started with STRMTG a research contract about modelling, simulation and control of cable-transport systems. In such systems, the question of the coupling between the nonlinear dynamics of cables and their supports with unilateral contact and friction appears now to be determinant in order to increase the performances of the cableway systems, especially for urban transportation systems.
TBW
# 8 Partnerships and cooperations
## 8.1 National initiatives
#### ANR project Digitslid
B. Brogliato coordinates the ANR project Digitslid (PRC, ANR-18-CE40-0008-01), Differentiateurs et commandes homogenes par modes glissants en temps discret: l'approche implicite. Partners: LS2N (Ecole Centrale de Nantes), INRIA Lille Nord Europe (team Non-A-Post), and Tripop. October 2018-September 2021. 12 participants overall (3 post-doc students recruited by the project, 3 Ph.D. students supported by other means). Total financial support by the ANR: 338 362 euros (100 762 for Tripop, 18 months of post-doc to be recruited in 2019).
#### FUI Modeliscale.
The ModeliScale FUI focuses on the modeling, simulation and analysis of large cyber-physical systems. It federates the research activities of several teams, covering a broad spectrum of topics, namely hybrid systems modeling & verification, numerical analysis, programming language design and automatic control. Our research agenda includes the following tracks:
• New compilation techniques for Modelica modelers: structural analysis of multimode DAE (Differential Algebraic Equations) systems, modular compilation, combining state-machines and non-smooth dynamical systems (complementarity dynamical systems and Filippov differential inclusions), contract-based specification of cyber-physical systems requirements, requirements capture using under-/over-determined DAE systems.
• Simulation of large cyber-physical systems: distributed simulation, discretization methods for non-smooth dynamical systems, space-/time-adaptive discretization methods for multimode DAE systems, quantized state solvers (QSS).
• Guaranteed numerics: guaranteed simulation of non-smooth and hybrid dynamical systems, numerical methods preserving invariant properties of hybrid systems, contract-based reasoning methods.
#### Inria Project Lab (IPL): ModeliScale, Languages and Compilation for Cyber-Physical System Design
The project gathers researchers from three Inria teams, and from three other research labs in Grenoble and Paris area.
Name Team Inria Center or Laboratory Vincent Acary Bipop Inria Grenoble Rhône Alpes Bernard Brogliato Albert Benveniste Hycomes Inria Rennes Benoît Caillaud Bretagne Atlantique Khalil Ghorbal Marc Pouzet Parkas ENS Tim Bourke Inria Paris Goran Frehse Tempo Verimag-univ. Grenoble Alpes Antoine Girard L2S-CNRS, Saclay Eric Goubault Cosynus LIX, École Polytechnique, Sylvie Putot Saclay
The main objective of ModeliScale is to advance modeling technologies (languages, compile-time analyses, simulation techniques) for CPS combining physical interactions, communication layers and software components. We believe that mastering CPS comprising thousands to millions of components requires radical changes of paradigms. For instance, modeling techniques must be revised, especially when physics is involved. Modeling languages must be enhanced to cope with larger models. This can only be done by combining new compilation techniques (to master the structural complexity of models) with new mathematical tools (new numerical methods, in particular).
ModeliScale gathers a broad scope of experts in programming language design and compilation (reactive synchronous programming), numerical solvers (nonsmooth dynamical systems) and hybrid systems modeling and analysis (guaranteed simulation, verification). The research program is carried out in close cooperation with the Modelica community as well as industrial partners, namely, Dassault Systèmes as a Modelica/FMI tool vendor, and EDF and Engie as end users.
## 8.2 Regional initiatives
#### AURA R&D Booster Smart Protect
The project aims to develop and test an innovative structure for protection against natural hazards. It is funded by the Auvergne Rhône-Alpes region as part of the R&D operation BOOSTER 2019. The partnerships (GEOLITHE INNOV, GEOLITHE, MYOTIS, INRIA and INRAe) and the operational solutions and tools developed as part of the "Smart-Protect" project will constitute major advances in the methods and means for the natural risk management, both nationally and internationally. GEOLITHE INNOV is leader of the SMART-PROTECT collaborative project. The financial support for INRIA is devoted to the post-doc of Nicholas Collins Craft for the study and the development of cohesive zone model for fracture mechanics simulation.
# 9 Dissemination
## 9.1 Promoting scientific activities
### 9.1.1 Scientific events: organisation
#### Member of the organizing committees
• Vincent Acary is co-organisor (with O. Bruls and R. Leine) of the mini-symposium Nonsmooth Dynamics, 10th Nonlinear Dynamics Conference (ENOC 2021), Lyon, July 2021.
• Bernard Brogliato is co-organisor (with Nathan van de Wouw) of the mini-symposium Control and Synchronization of Nonlinear Systems, 10th Nonlinear Dynamics Conference (ENOC 2021), Lyon, July 2021.
### 9.1.2 Scientific events: selection
#### Member of the conference program committees
• Bernard Brogliato was a member of the IPC of ADHS 21 (IFAC Conference on Analysis and Design of Hybrid Systems), Brussels, July 2021.
• Bernard Brogliato was a member of the national organizing committee of ENOC 2021, Lyon, July 2021.
### 9.1.3 Journal
#### Member of the editorial boards
• Vincent Acary is Technical Managing editor and co-founder of the Journal of Theoretical, Computational and Applied Mechanics, which is a scholarly journal, provided on a Fair Open Access basis, without cost to both readers and authors. The Journal aims to select publications of the highest scientific calibre in the form of either original research papers or reviews. http://jtcam.episciences.org
#### Reviewer - reviewing activities
• Vincent Acary was reviewer for IEEE Transactions on Automatic Control, Multibody Systems Dynamics, Nonlinear Dynamics, Computer Methods in Applied Mechanics and Engineering, SIAM Journal on Control and Optimization, Multibody Systems Dynamics, systems and Control Letters.
• Bernard Brogliato is reviewer for IEEE Transactions on Automatic Control, IEEE Transactions on Robotics, Systems and Control Letters, Multibody System Dynamics, Automatica, ASME Journal of Computational and Nonlinear Dynamics, ASME Journal of Applied Mechanics.
• iFranck Bourrier was reviewer for International Journal for Numerical and Analytical Methods in Geomechanics, Plant and Soils, Natural Hazards, Landslides, Physical Review Fluids, Rock Mechanics and Rock Engineering, Beomorphology, International Journal of Rock Mechanics and Rock Sciences, Nature Communications
### 9.1.4 Invited talks
• Vincent Acary gave a lecture at Web I-Risk 2020. 7th December 2020. http://www.indura.fr/
• Franck Bourrier gave a lecture at Web I-risk 2020. 12th November 2020
## 9.2 Teaching - Supervision - Juries
### 9.2.1 Teaching
• Licence : G. James, Introduction to Numerical Methods, 31 hETD, L3, Grenoble INP - Pagora (1st year).
• Licence : G. James, Normed Vector Spaces, 26 hETD, L2, Prépa INP, Grenoble.
• Master : G. James, Numerical Methods, 91 hETD, M1, Grenoble INP - Ensimag (1st year).
• Master : G. James, Dynamical Systems, 45 hETD, M1, Grenoble INP - Ensimag (2nd year).
• Master : Vincent Acary, 17H éq TD Systèmes dynamiques, ENSIMAG 2A.
• Master : Franck Bourrier, 5H éq TD Modélisation des chutes de blocs, Master GAIA, Université Savoie Mont-Blanc.
### 9.2.2 Supervision
• PhD in progress : Rami Sayoud, Analyse vibratoire des armoires électriques, January 2018, université Grenoble Alpes, Vincent Acary and Bernard Brogliato
• PhD in progress : Benoit Viano, Nonsmooth modelling of impacted elastoplastic beams, November 2019, université Grenoble Alpes, Vincent Acary and Franck Bourrier.
• PhD in progress : Charlélie Bertrand, September 2018, Mechanical model for cable vibrations, ENTPE, Claude Lamarque and Vincent Acary.
• PhD in progress : Christelle Kozaily. Structural analysis for multi-mode DAE systems, Octobre 2018, V. Acary and B. Caillaud.
• PhD in progress : Vivien Cros, Analyse de la réponse vibratoire d'arbres sous sollicitations dynamiques, November 2018, université Grenoble Alpes, Franck Bourrier.
### 9.2.3 Juries
• Vincent Acary was president of the Jury of Loic Dugelas on Jan 10th 2020 (Stratégies probabilistes appliquées à la modélisation numérique discrète : le cas des filets pare-pierres) Université Grenoble Alpes - École doctorale Ingénierie - matériaux mécanique énergétique environnement procédés production (Grenoble)) and president of the jury of Agathe Furet on Aug 28th 2020 (Modélisations expérimentale et numérique d'ouvrages pare-blocs modulaires : Application à la technologie Bloc Armé. Université Grenoble Alpes École doctorale Ingénierie - matériaux mécanique énergétique environnement procédés production (Grenoble).
• Bernard Brogliato was member of the jury of Siyuan Wang on 15 December 2020, INRIA Lille Team Valse (Homogeneous Quadrator Control: Theory and Experiments).
# 10 Scientific production
## 10.1 Major publications
• 1 book V. Acary and B. Brogliato. 'Numerical methods for nonsmooth dynamical systems. Applications in mechanics and electronics'. Lecture Notes in Applied and Computational Mechanics 35. Berlin: Springer. xxi, 525~p. 2008
• 2 bookB. Brogliato. 'Nonsmooth mechanics'.Communications and Control Engineering SeriesModels, dynamics and controlSpringer, [Cham]2016, xxii+629
## 10.2 Publications of the year
### International journals
• 3 article'Coulomb friction with rolling resistance as a cone complementarity problem'.European Journal of Mechanics - A/Solids2020, 104046
• 4 articleC. Bertrand, V. Acary, C.-H. Lamarque and A. Ture Savadkoohi. 'A robust and efficient numerical finite element method for cables'.International Journal for Numerical Methods in Engineering12118September 2020, 4157-4186
• 5 article F. Bourrier, D. Toe, B. Garcia, J. Baroth and S. Lambert. 'Experimental investigations on complex block propagation for the assessment of propagation models quality'. Landslides 2021
• 6 articleB. Brogliato, J. Kövecses and V. Acary. 'The contact problem in Lagrangian systems with redundant frictional bilateral and unilateral constraints and singular mass matrix. The all-sticking contacts problem'.Multibody System Dynamics4822020, 151-192
• 7 article 'Digital implementation of sliding-mode control via the implicit method: A tutorial'. International Journal of Robust and Nonlinear Control 2020
• 8 article'The implicit discretization of the super-twisting sliding-mode control algorithm'.IEEE Transactions on Automatic Control658August 2020, 3707-3713
• 9 article'Dynamical systems coupled with monotone set-valued operators: Formalisms, applications, well-posedness, and stability'.SIAM Review621February 2020, 3-129
• 10 articleA. Ferrante, A. Lanzon and B. Brogliato. 'A direct proof of the equivalence of side conditions for strictly positive real matrix transfer functions'.IEEE Transactions on Automatic Control651January 2020, 450-452
• 11 article'Lyapunov stability analysis of the implicit discrete-time twisting control algorithm'.IEEE Transactions on Automatic Control656June 2020, 2619-2626
• 12 articleG. James, K. Vorotnikov and B. Brogliato. 'Kuwabara-Kono numerical dissipation: a new method to simulate granular matter'.IMA Journal of Applied Mathematics851February 2020, 27-66
• 13 article'Quadratic Optimal Control of Linear Complementarity Systems: First order necessary conditions and numerical analysis'.IEEE Transactions on Automatic Control656June 2020, 2743-2750
### Conferences without proceedings
• 14 inproceedings A. Rocca, V. Acary and B. Brogliato. 'Index-2 hybrid DAE: a case study with well-posedness and numerical analysis'. IFAC World Congress 2020 Berlin, Germany https://www.ifac2020.org/ July 2020
### Scientific books
• 15 book B. Brogliato, R. Lozano, B. Maschke and O. Egeland. 'Dissipative Systems Analysis and Control: Theory and Application'. Communication and Control Engineering 2020
### Reports & preprints
• 16 misc 'A local damage model of concrete structures by a variational approach including tension and compression mechanisms'. June 2020
• 17 misc G. James. 'Traveling Fronts in Dissipative Granular Chains and Nonlinear Lattices'. July 2020
• 18 misc M. Mojallizadeh, B. Brogliato and V. Acary. 'Discrete-time differentiators: design and comparative analysis'. February 2021
• 19 misc 'On Consistent Discretization of Finite-time Stable Homogeneous Differential Inclusions'. March 2020
• 20 report M. Rasool Mojallizadeh, B. Brogliato, A. Polyakov, S. Selvarajan, L. Michel, F. Plestan, M. GHANES, J.-P. Barbot and Y. Aoustin. 'Discrete-time differentiators in closed-loop control systems: experiments on electro-pneumatic system and rotary inverted pendulum'. INRIA Grenoble January 2021
## 10.3 Cited publications
• 21 bookV. Acary, O. Bonnefon and B. Brogliato. 'Nonsmooth modeling and simulation for switched circuits.'.Lecture Notes in Electrical Engineering 69. Dordrecht: Springer. xxiii, 284~p.2011,
• 22 inbookV. Acary, M. Brémond and O. Huber. 'Advanced Topics in Nonsmooth Dynamics'.To appearAcary, V. and Brüls. O. and Leine, R. (eds). Springer Verlag2018, On solving frictional contact problems: formulations and comparisons of numerical methods.
• 23 incollectionV. Acary, M. Brémond and O. Huber. 'On solving contact problems with Coulomb friction: formulations and numerical comparisons'.Advanced Topics in Nonsmooth Dynamics - Transactions of the European Network for Nonsmooth DynamicsJune 2018, 375-457
• 24 inproceedingsV. Acary, M. Brémond, K. Kapellos, J. Michalczyk and R. Pissard-Gibollet. 'Mechanical simulation of the Exomars rover using Siconos in 3DROV'.ASTRA 2013 - 12th Symposium on Advanced Space Technologies in Robotics and AutomationESA/ESTECNoordwijk, NetherlandsMay 2013,
• 25 techreportV. Acary, M. Brémond, T. Koziara and F. Pérignon. 'FCLIB: a collection of discrete 3D Frictional Contact problems'.RT-0444INRIAFebruary 2014, 34
• 26 articleV. Acary, B. Brogliato and D. Goeleven. 'Higher order Moreau's sweeping process: mathematical formulation and numerical simulation'.Mathematical Programming Ser. A1132008, 133-217
• 27 articleV. Acary and B. Brogliato. 'Implicit Euler numerical scheme and chattering-free implementation of sliding mode systems'.Systems and Control Letters595doi:10.1016/j.sysconle.2010.03.0022010, 284--295
• 28 book V. Acary and B. Brogliato. 'Numerical methods for nonsmooth dynamical systems. Applications in mechanics and electronics'. Lecture Notes in Applied and Computational Mechanics 35. Berlin: Springer. xxi, 525~p. 2008
• 29 articleV. Acary, B. Brogliato and Y. Orlov. 'Chattering-Free Digital Sliding-Mode Control With State Observer and Disturbance Rejection'.IEEE Transactions on Automatic Control575may 2012, 1087--1101
• 30 articleV. Acary. 'Energy conservation and dissipation properties of time-integration methods for the nonsmooth elastodynamics with contact'.Zeitschrift für Angewandte Mathematik und Mechanik9552016, 585--603
• 31 articleV. Acary. 'Higher order event capturing time--stepping schemes for nonsmooth multibody systems with unilateral constraints and impacts.'.Applied Numerical Mathematics622012, 1259--1275
• 32 techreportV. Acary and Y. Monerie. 'Nonsmooth fracture dynamics using a cohesive zone approach'.RR-6032INRIA2006, 56
• 33 articleV. Acary. 'Projected event-capturing time-stepping schemes for nonsmooth mechanical systems with unilateral contact and Coulomb's friction'.Computer Methods in Applied Mechanics and Engineering2562013, 224--250
• 34 articleV. Acary, H. de~Jong and B. Brogliato. 'Numerical Simulation of Piecewise-Linear Models of Gene Regulatory Networks Using Complementarity Systems Theory'.Physica D269January 2013, 103--199
• 35 book S. Adly. 'A Variational Approach to Nonsmooth Dynamics. Applications in Unilateral Mechanics and Electronics'. SpringerBriefs in Mathematics Springer Verlag 2017
• 36 article N. Akhadkar, V. Acary and B. Brogliato. 'Multibody systems with 3D revolute joint clearance, modelling, numerical simulation and experimental validation: an industrial case study'. Multibody System Dynamics 2017
• 37 articleR. Alur, C. Courcoubetis, N. Halbwachs, T. Henzinger, P. Ho, X. Nicollin, A. Olivero, J. Sifakis and S. Yovine. 'The algorithmic analysis of hybrid systems'.Theoretical Computer Science13811995, 3--34
• 38 articleM. Arnold, O. Brüls and A. Cardona. 'Error analysis of generalized- Lie group time integration methods for constrained mechanical systems'.Numerische Mathematik1291Jan 2015, 149--179
• 39 articleA. Beck and M. Teboulle. 'A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems'.SIAM Journal on Imaging Sciences212009, 183-202
• 40 articleV. Biktashev and M. Tsyganov. 'Solitary waves in excitable systems with cross-diffusion'.Proc. R. Soc. A4612005, 3711-3730
• 41 unpublishedF. Bourrier and V. Acary. 'Predictive capabilities of 2D and 3D block propagation models integrating block shape assessed from field experiments'.March 2021, working paper or preprint
• 42 articleF. Bourrier, F. Berger, P. Tardif, L. Dorren and O. Hungr. 'Rockfall rebound: comparison of detailed field experiments and alternative modelling approaches'.Earth Surface Processes and Landforms3762012, 656--665
• 43 articleF. Bourrier, L. Dorren, F. Nicot, F. Berger and F. Darve. 'Toward objective rockfall trajectory simulation using a stochastic impact model'.Geomorphology11032009, 68--79
• 44 articleB. Brogliato. 'Feedback control of multibody systems with joint clearance and dynamic backlash: a tutorial'.Multibody System DynamicsAug 2017,
• 45 techreportB. Brogliato. 'Nonsmooth Mechanics. Models, Dynamics and Control : Erratum/Addendum'.INRIA Grenoble - Rhone-AlpesSeptember 2019, 1-15
• 46 bookB. Brogliato. 'Nonsmooth mechanics'.Communications and Control Engineering SeriesModels, dynamics and controlSpringer, [Cham]2016, xxii+629
• 47 inproceedingsB. Brogliato and A. Polyakov. 'Globally stable implicit Euler time-discretization of a nonlinear single-input sliding-mode control system'.2015 54th IEEE Conference on Decision and Control (CDC)Dec 2015, 5426-5431
• 48 articleB. Brogliato and A. Rio. 'On the control of complementary-slackness juggling mechanical systems'.IEEE Transactions on Automatic Control452Feb 2000, 235-246
• 49 articleB. Brogliato and L. Thibault. 'Existence and uniqueness of solutions for non-autonomous complementarity dynamical systems'.J. Convex Anal.173-42010, 961--990
• 50 articleO. Brüls, V. Acary and A. Cardona. 'Simultaneous enforcement of constraints at position and velocity levels in the nonsmooth generalized- scheme'.Computer Methods in Applied Mechanics and Engineering281November 2014, 131-161
• 51 misc B. Caillaud. 'Hybrid vs. nonsmooth dynamical systems'. ://synchron2014.inria.fr/wp-content/uploads/sites/13/2014/12/Caillaud-nsds.pdfhttp://synchron2014.inria.fr/wp-content/uploads/sites/13/2014/12/Caillaud-nsds.pdf 2014
• 52 articleG. Capobianco and S. Eugster. 'Time finite element based Moreau‐type integrators'.International Journal for Numerical Methods in Engineering11432018, 215-231
• 53 articleL. Carloni, R. Passerone, A. Pinto and A. Sangiovanni--Vicentelli. 'Languages and tools for hybrid systems design'.Foundations and Trends in Electronic Design Automation11/22006, 1--193
• 54 articleQ. Chen, V. Acary, G. Virlez and O. Brüls. 'A nonsmooth generalized- scheme for flexible multibody systems with unilateral constraints'.International Journal for Numerical Methods in Engineering9682013, 487--511
• 55 book R. Cottle, J. Pang and R. Stone. 'The Linear Complementarity Problem'. Boston, MA Academic Press, Inc. 1992
• 56 articleL. Dorren, F. Berger, M. Jonsson, M. Krautblatter, M. Mölk, M. Stoffel and A. Wehrli. 'State of the art in rockfall – forest interactions'.Schweizerische Zeitschrift fur Forstwesen15862007, 128-141
• 57 articleF. Dubois, V. Acary and M. Jean. 'The Contact Dynamics method: A nonsmooth story '.Comptes Rendus Mécanique3463March 2018, 247-262
• 58 articleS. Dupire, F. Bourrier, J.-M. Monnet, S. Bigot, L. Borgniet, F. Berger and T. Curt. 'Novel quantitative indicators to characterize the protective effect of mountain forests against rockfall'.Ecological Indicators672016, 98--107
• 59 book F. Facchinei and J. Pang. 'Finite-dimensional Variational Inequalities and Complementarity Problems'. I & II Springer Series in Operations Research Springer Verlag NY. Inc. 2003
• 60 book A. Filippov. 'Differential Equations with Discontinuous Right Hand Sides'. Dordrecht, the Netherlands Kluwer 1988
• 61 articleM. Friedlander and D. Orban. 'A primal--dual regularized interior-point method for convex quadratic programs'.Mathematical Programming Computation412012, 71--107
• 62 articleF. Génot and B. Brogliato. 'New results on Painlevé Paradoxes'.European Journal of Mechanics - A. solids181999, 653-677
• 63 book D. Goeleven. 'Complementarity and Variational Inequalities in Electronics'. Academic Press 2017
• 64 articleS. Greenhalgh, V. Acary and B. Brogliato. 'On preserving dissipativity properties of linear complementarity dynamical systems with the -method'.Numerische Mathematik1252013, 601--637
• 65 inproceedingsT. Henzinger. 'The Theory of Hybrid Automata'.Proceedings of the Eleventh Annual IEEE Symposium on Logic in Computer Science (LICS)1996, 278-292
• 66 book J. Hiriart-Urruty and C. Lemaréchal. 'Convex Analysis and Minimization Algorithms'. I and II Springer Verlag, Berlin 1993
• 67 book J. Hiriart-Urruty and C. Lemaréchal. 'Fundamentals of Convex Analysis'. Springer Verlag 2001
• 68 articleO. Huber, V. Acary and B. Brogliato. 'Lyapunov stability and performance analysis of the implicit discrete sliding mode control'.IEEE Transactions on Automatic Control6110In press2016, 3016-3030
• 69 articleO. Huber, V. Acary, B. Brogliato and F. Plestan. 'Implicit discrete-time twisting controller without numerical chattering: analysis and experimental results'.Control Engineering Practice462016, 129--141
• 70 incollection O. Huber, B. Brogliato, V. Acary, A. Boubakir, F. Plestan and B. Wang. 'Experimental results on implicit and explicit time-discretization of equivalent-control-based sliding-mode control'. Recent Trends in Sliding Mode Control IET 2016
• 71 articleG. James, P. Kevrekidis and J. Cuevas. 'Breathers in oscillator chains with Hertzian interactions'.Physica D2512013, 39--59
• 72 articleG. James. 'Periodic travelling waves and compactons in granular chains'.J. Nonlinear Sci.222012, 813-848
• 73 articleM. Jean, V. Acary and Y. Monerie. 'Non Smooth Contact dynamics approach of cohesive materials'.Philisophical Transactions : Mathematical, Physical & Engineering Sciences, The Royal Society, London AA35917892001, 2497--2518
• 74 articleG. Kerschen, M. Peeters, J. Golinval and A. Vakakis. 'Nonlinear normal modes, Part I: A useful framework for the structural dynamicist'.Mechanical Systems and Signal Processing231Special Issue: Non-linear Structural Dynamics2009, 170--194
• 75 phdthesis R. Leine. 'Bifurcation in Discontinuous Mechanical Systems of Filippov-Type'. Technische Universiteit Eindhoven 2000
• 76 articleR. Leine, A. Schweizer, M. Christen, J. Glover, P. Bartelt and W. Gerber. 'Simulation of rockfall trajectories with consideration of rock shape'.Multibody System Dynamics322Aug 2014, 241--271
• 77 book R. Leine and N. van de Wouw. 'Stability and Convergence of Mechanical Systems with Unilateral Constraints'. 36 Lecture Notes in Applied and Computational Mechanics Springer Verlag 2008
• 78 articleL. Liu, G. James, P. Kevrekidis and A. Vainchtein. 'Nonlinear waves in a strongly resonant granular chain'.Nonlinearity2016, 3496-3527
• 79 articleF. Miranda-Villatoro, B. Brogliato and F. Castanos. 'Multivalued Robust Tracking Control of Lagrange Systems: Continuous and Discrete--Time Algorithms'.IEEE Transactions on Automatic ControlPP992017, 1
• 80 articleF. Miranda-Villatoro, F. Castaños and B. Brogliato. 'Continuous and discrete-time stability of a robust set-valued nested controller'.Automatica107September 2019, 406-417
• 81 techreport J. Morales, G. James and A. Tonnelier. 'Solitary waves in the excitable Burridge-Knopoff model'. RR-8996 To appear in Wave Motion INRIA Grenoble - Rhône-Alpes December 2016
• 82 article J. Morales, G. James and A. Tonnelier. 'Traveling waves in a spring-block chain sliding down a slope'. Phys. Rev. E 96 96 012227 2017
• 83 articleY. Nesterov. 'A method of solving a convex programming problem with convergence rate $O\left(1/{k}^{2}\right)$'.Soviet Mathematics Doklady2721983, 372--376
• 84 book N. Nguyen and B. Brogliato. 'Multiple Impacts in Dissipative Granular Chains'. 72 Lecture Notes in Applied and Computational Mechanics XXII, 234 p. 109 illus. Springer Verlag 2014
• 85 articleF. Nqi and M. Schatzman. 'Computation of Lyapunov Exponents for dynamical system with impact'.Applied Mathematial Sciences452010, 237--252
• 86 articleA. Polyakov, D. Efimov and B. Brogliato. 'Consistent Discretization of Finite-time and Fixed-time Stable Systems'.SIAM Journal on Control and Optimization5712019, 78-103
• 87 inproceedingsA. Polyakov, D. Efimov, B. Brogliato and M. Reichhartinger. 'Consistent Discretization of Locally Homogeneous Finite-time Stable Control Systems'.ECC 2019 - 18th European Control ConferenceNaples, ItalyIEEEJune 2019, 1616-1621
• 88 article M. Porter, P. Kevrekidis and C. Daraio. 'Granular crystals: Nonlinear dynamics meets materials engineering'. Physics Today 68 44 2015
• 89 techreport A. Rocca, V. Acary and B. Brogliato. 'Index-2 hybrid DAE: a case study with well-posedness and numerical analysis'. Inria - Research Centre Grenoble – Rhône-Alpes November 2019
• 90 book R. Rockafellar. 'Convex Analysis'. Princeton University Press 1970
• 91 articleT. Schindler and V. Acary. 'Timestepping schemes for nonsmooth dynamics based on discontinuous Galerkin methods: Definition and outlook'.Mathematics and Computers in Simulation952013, 180--199
• 92 articleT. Schindler, S. Rezaei, J. Kursawe and V. Acary. 'Half-explicit timestepping schemes on velocity level based on time-discontinuous Galerkin methods'.Computer methods in Applied Mechanics in Engineering290152015, 250--276
• 93 book C. Studer. 'Numerics of Unilateral Contacts and Friction. -- Modeling and Numerical Time Integration in Non-Smooth Dynamics'. 47 Lecture Notes in Applied and Computational Mechanics Springer Verlag 2009
• 94 article A. Tonnelier. 'McKean caricature of the FitzHugh-Nagumo model: traveling pulses in a discrete diffusive medium'. Phys. Rev. E 67 036105 2003
• 95 conference A. Vieira, B. Brogliato and C. Prieur. 'Optimal control of linear complementarity systems'. IFAC World Congress on Automatic Control Toulouse, France 2017
• 96 inproceedingsA. Vieira, B. Brogliato and C. Prieur. 'Optimality conditions for the minimal time problem for Complementarity Systems'.Joint 8th IFAC Symposium on Mechatronic Systems (MECHATRONICS'19) and 11th IFAC Symposium on Nonlinear Control Systems (NOLCOS'19)IFACVienne, AustriaSeptember 2019, 325-330
• 97 articleB. Wang, B. Brogliato, V. Acary, A. Boubakir and F. Plestan. 'Experimental comparisons between implicit and explicit implementations of discrete-time sliding mode controllers: towards input and output chattering suppression'.IEEE Transactions on Control Systems Technology2352015, 2071--2075
• 98 bookM. di Bernardo, C. Budd, A. Champneys and P. Kowalczyk. 'Piecewise-smooth dynamical systems : theory and applications'.Applied mathematical sciencesLondonSpringer2008,
|
|
# 8 Pointed Star [pdf]
Your price: $1.00 Author/Creator Curl SKU: FD-4204 Regular price:$1.00
Member price: \$0.90
This model makes a great ornament or gift. It's not that hard, taking only about 5-10 minutes to complete.
I recommend using a somewhat thin paper that you can still wet fold (or finish), and you may need to iron the model when finished to get it to keep its shape. Foil works well, also.
As you can see in the picture, you can add a circle behind it. You do this simply by squash-folding the four flaps on the back symmetrically.
|
|
# What is free energy in the context of a quantum field theory?
I was reading the papers Large $$N$$ behavior of mass deformed ABJM theory and New 3D $${\cal N}=2$$ SCFT's with $$N^{3/2}$$ scaling. These papers talk about the free energy in the context of quantum field theory. I have an idea of what thermodynamic free energy is (related to the work done by the system). But what is free energy in the context of a quantum field theory?
The definition of the free energy in QFT is the same as in many-body QM $$\exp(-\beta F) = {\rm Tr}\left[\exp(-\beta H)\right]$$ where $$H$$ is the Hamiltonian of the system and $$\beta=1/(k_BT)$$. Note that this definition implies the standard thermodynamic identities. If you have a box of this stuff (described by $$H$$) coupled to a heat bath, then the isothermal change of $$F$$ is $$dF=-pdV$$, etc. There is a euclidean path integral representation of $$Z={\rm Tr}[\exp(-\beta H)]$$ $$Z = \int_{S_1\times R^3}{\cal D\phi} \;\exp(-S_E)$$ where the size of the circle $$S_1$$ is equal to $$\beta$$, $${\cal D}\phi$$ is the path integral measure in the QFT, and $$S_E$$ is the euclidean action. We have to impose periodic/anti-periodic boundary conditions for bosons/fermions along the $$S_1$$.
• This is not the free energy used by the papers in question. – Hans Moleman Jul 25 at 18:51
• @HansMoleman How do you know? (What other free energy is there?) – Thomas Jul 25 at 20:45
The theories on question are local QFTs in 3d. You can put such a theory on any compact 3-manifold $$M$$, and then you can compute observables such as the partition function $$Z[M]$$ or correlation functions $$\langle O_1 \dotsm O_n \rangle_M$$. All these observables will in any case depend on the couplings of the original theory and any parameters you use to define $$M$$, for instance its size $$R$$. In the papers in question the three-sphere $$M=S^3$$ is used. We always have in mind that we're tuning to a critical point, such that the couplings of the original theory are completely fixed.
Typically this procedure is a little bit ambiguous, in the sense that in the Lagrangian you can turn on new couplings: $$\mathcal{L} \mapsto \mathcal{L} + \text{cosmological constant} + \text{Ricci scalar} + \ldots$$ that don't exist in flat space. If you measure the partition function for $$M=S^3$$, you find that $$\ln Z[M] = a (\Lambda R)^3 + b \Lambda R - F + \ldots$$ for some dimensionless coefficients $$a,b,f$$. (Here $$\Lambda$$ is the UV cutoff, and all couplings are measured in units of $$\Lambda$$.) We can set $$a,b = 0$$ by tuning the cosmological constant and the Ricci scalar. Once you're at the critical point, the partition function $$Z[M]$$ is therefore a pure number, $$e^{-F}$$, and often this $$F$$ is called the free energy of a 3d CFT.
You are talking about Self Consistent Field Theories.
It is a key tool for describing the phase behavior of block polymers.
The lowest free energy state is the thermodynamic equilibrium.
http://pscf.cems.umn.edu/scft/background
In a thermodynamic system, the internal energy (U) is a function of entropy (S) and volume. Now since entropy is not easy to measure, we need to use temperature (T).
F = U - T x S
|
|
Path following is a simple concept to grasp: the object moves from point A to point B to point C, and so on. But what if we want our object to follow the path of the player, like ghosts in racing games? In this tutorial, I'll show you how to achieve this with waypoints in AS3.
## Final Result Preview
Click the SWF, then use the arrow keys to move around. Press space to switch to the ghost, which will follow the path you've created.
## The Logic Behind Path Following
Let's suppose the player moves 4 units left and 2 units down from our point of origin. For our ghost to end up in the same location it will have to also move 4 units left and 2 units down from the same point of origin. Now let's say our player is moving at a speed of 2; for the path following to remain accurate our ghost will also have a speed rate of 2.
What if our player decides to take a pause before continuing on? The obvious solution is for the ghost to keep track of the player's exact position every tick - but this will involve storing a lot of data. Instead, what we'll do is simply store data every time the player presses different keys - so if the player moves right for ten seconds, we'll store the same amount of data as if the player moved right for half a second.
For this technique to work our ghost must abide by the following rules:
• The ghost and player have the same point of origin.
• The ghost must follow the exact same path as the player.
• The ghost should move at the same speed as the player.
• The ghost has to store the current time each time the player's motion changes.
## Step 1: Setting Up
Start by creating a new Flash file (ActionScript 3.0). Set the width to 480, the height to 320 and frames per second to 30. Leave the background color as white and save the file as CreatingGhosts.fla; lastly set its class to CreatingGhosts.
Before we move into the classes we need to create a pair of MovieClips. Start by drawing two separate 20px squares without a stroke. Convert the first fill to a MovieClip, setting its registration to the center, naming it player and exporting it for ActionScript with the class name Player. Now repeat the same process, except replace the name with ghost and the class with Ghost. Remove these MovieClips from the stage.
Create your document class with the following code:
Self explanatory; our next step will be to set up the Player class:
The first three variables are used to help meet the rules; startPos is our point of origin, startTime is the time when the Player was added to the stage and speed is our our rate of movement. currentLife is an addition used to check how many times the player has died, accordingly each path is stored and obtainable through that value. The last four variables are used to check key presses.
It's time to create the Ghost class:
The two static variables, waypoints and times, will be used to store arrays; the first will store coordinates of the player's positions whenever the player changes motion, and the second will store the times at which each change occurred. The other variables match those from the Player class.
## Step 2: Initializing the Player
Within the Player's constructor add the following line:
Next create the init() function:
First, we need to obtain the startTime and push a new time array to the Ghost's times array. (This is a little confusing; the ghost has multiple time arrays to allow it to deal with multiple lives in the future.)
startTime is set to the current time (a value in milliseconds); we add a new child array to the Ghost's times array; our currentLife is set to the index of this new array; and we push the time that has elapsed during this function to the first element of this new array.
Now we set up the starting position:
Our point of origin is set to the center of the stage; we reposition our Player to the origin; a new array is added to the waypoints array in the Ghost class; and the first position is pushed to that array.
So, at the moment, Ghost.times[0][0] contains the number of milliseconds since the SWF was set up (practically zero), and Ghost.waypoints[0][0] contains a Point set to the center of the stage.
Our aim is to code this so that if, after one second, the player presses a key, then Ghost.times[0][1] will be set to 1000, and Ghost.waypoints[0][1] will be another Point, again set to the center (because the player will not have moved yet). When the player lets go of that key (or presses another), Ghost.times[0][2] will be set to the current time, and Ghost.waypoints[0][2] will be a Point that matches the player's position at that time.
Now, here are the three event listeners:
## Step 3: Key Events
For now let's ignore the enterFrame and focus on the key presses.
Just a few simple if-statements to prevent bugs in key presses, and two new functions that are being called. updateWaypoints() will be called every time new points and times are to be pushed to the ghost arrays, and destroy() is used to remove the Player and add the Ghost to the stage. But before we go to those functions let's finish off the key press functions.
This time we do the opposite: the variables are set to false when the key is released and the waypoints are updated.
I will elaborate in more detail on what is happening between those functions. Each time you press a key the waypoints and times are updated, so if you press another to cause a change a point and its corresponding time are added to the ghost arrays.
But what happens if the player decides to randomly release a key and cause change again? Well we account for that by updating the waypoints and times again. If this was not done the Ghost would not be able to account for 90 degree turns; instead it would move on an angle towards the next point.
## Step 4: Updating and Destroying
Our updateWaypoints() function is fairly simple, seeing as it consists of code that we have already written:
The destroy() function is just as simple! Waypoints are updated, a Ghost is added, event listeners are stopped and our Player is removed:
## Step 5: The Player's enterFrame
Begin by creating the function:
For the purposes of this tutorial we will add some simple collision with borders, to show how the waypoints are updated on this change:
Now are player should only move in the specified direction while it isn't touching a border. Inside the first if-statement add the following code for moving left:
First we check if the left key is currently down, then we check to see if the Player's position is greater than or equal to 0; if so we update our waypoints and reposition the player to the edge of the left side; if not we continue to move the player left.
The exact same thing is done for the other three sides:
And with that we are finished with the Player Class!
## Step 6: Initializing the Ghost
Add the following line inside the Ghost's constructor:
Like before create the init() function:
We start by selecting the path we want to use (by default it will choose the last array); we then position the ghost to the origin and set our Ghost's start time. Then an event listener for the enterFrame is created.
## Step 7: The Ghost's enterFrame
Naturally we create our enterFrame function:
Now we have to loop through our time array. We do this through the variable i; we check if it is less than the length of the array and we also check if the time elapsed is greater than or equal to the current time in the array:
The next thing to do is to move the Ghost if the time elapsed is less than the current time from the array:
## Step 8: Updating the Ghost's Position
We'll start this step off by creating the updatePosition() function:
Next add two variables, to represent the difference and the distance between the old and the new position:
We subtract the points from each other to find the distance. Now, we must move the ghost:
First we check whether the distance is less than the speed (i.e. the distance the ghost moves each tick); if so we move the Ghost directly to the point. However, if the distance is less then we normalize the difference ("means making its magnitude be equal to 1, while still preserving the direction and sense of the vector" - Euclidean Vectors in Flash), and we increase the Ghost's position along the direction of the point.
## Step 9: A Side Note
Something to note about this method is that it uses a lot of CPU resources to continuously load times and points, and at times can produce some lag even though the logic is correct. We found two ways of countering this, though!
The first is setting your SWF to be uncompressed in the Publish Settings; this will result in a longer load time at start up however the performance will be smoother. The second is more preferable if you plan on compiling your project as an exe for offline use: simply increase the frame rate to something around 60.
## Conclusion:
Thank you for taking the time to read this tutorial! If you have any questions or comment please leave them below. And if you want an extra challenge try setting up the Ghost class to follow the paths in reverse, or in slow motion.
|
|
# zbMATH — the first resource for mathematics
A classification of finite locally 2-transitive generalized quadrangles. (English) Zbl 07313190
A generalized quadrangle is an incidence geometry of points and lines such that every pair of distinct points determines at most one line and every line contains at least two distinct points, satisfying the GQ Axiom: Given a point $$P$$ and a line $$\ell$$ not incident with $$P,$$ there is a unique point on $$\ell$$ collinear with $$P.$$
One of the outstanding open questions in the area is the classification of flag-transitive finite generalized quadrangles, that is, the classification of all finite generalized quadrangles with a group of collineations that is transitive on incident point-line pairs.
It has been conjectured by W.M. Kantor that the only non-classical finite flag-transitive generalized quadrangles are (up to duality) either the unique generalized quadrangle of order $$(3, 5)$$ or the generalized quadrangle of order $$(15, 17)$$ arising from the Lunelli-Sce hyperoval.
We recall that an antiflag is a non-incident point-line pair. Notice that, by the GQ Axiom, antiflag-transitivity implies flag-transitivity for a generalized quadrangle.
In [J. Bamberg et al., Trans. Amer. Math. Soc. 370, 1551–1601 (2018, Zbl 1381.51001)] the authors have proved that, up to duality, the only non-classical antiflag-transitive generalized quadrangle is the unique generalized quadrangle of order $$(3, 5).$$
If $$G$$ is a subgroup of collineations of a finite generalized quadrangle $$Q$$ that is transitive both on pairs of collinear points and on pairs of concurrent lines, then $$Q$$ is said to be a locally $$(G, 2)-$$transitive generalized quadrangle. The authors prove the following theorem:
Theorem 1 If $$Q$$ is a thick finite locally $$(G, 2)$$-transitive generalized quadrangle and $$Q$$ is not a classical generalized quadrangle, then (up to duality) $$Q$$ is the unique generalized quadrangle of order $$(3, 5).$$
An equivalent definition of locally $$2-$$transitive generalized quadrangle is that it has an incidence graph that is locally $$2$$-arc-transitive. Since an equivalent definition of an antiflag-transitive generalized quadrangle is that it has a locally $$3-$$arc-transitive incidence graph, Theorem 1 provides further progress towards Kantor conjecture.
##### MSC:
51E12 Generalized quadrangles and generalized polygons in finite geometry 20B05 General theory for finite permutation groups 20B15 Primitive groups 20B25 Finite automorphism groups of algebraic, geometric, or combinatorial structures
GAP; Magma
Full Text:
##### References:
[1] Alavi, Seyed Hassan; Burness, Timothy C., Large subgroups of simple groups, J. Algebra, 421, 187-233 (2015) · Zbl 1308.20012 [2] Aschbacher, Michael, $$S_3$$-free 2-fusion systems, Proc. Edinb. Math. Soc. (2), 56, 1, 27-48 (2013) · Zbl 1278.20021 [3] Bamberg, John; Giudici, Michael; Morris, Joy; Royle, Gordon F.; Spiga, Pablo, Generalised quadrangles with a group of automorphisms acting primitively on points and lines, J. Combin. Theory Ser. A, 119, 7, 1479-1499 (2012) · Zbl 1245.05014 [4] Bamberg, John; Glasby, S. P.; Popiel, Tomasz; Praeger, Cheryl E., Generalized quadrangles and transitive pseudo-hyperovals, J. Combin. Des., 24, 4, 151-164 (2016) · Zbl 1338.05031 [5] Bamberg, John; Li, Cai Heng; Swartz, Eric, A classification of finite antiflag-transitive generalized quadrangles, Trans. Amer. Math. Soc., 370, 3, 1551-1601 (2018) · Zbl 1381.51001 [6] Bamberg, John; Popiel, Tomasz; Praeger, Cheryl E., Point-primitive, line-transitive generalised quadrangles of holomorph type, J. Group Theory, 20, 2, 269-287 (2017) · Zbl 1428.20004 [7] Bamberg, John; Popiel, Tomasz; Praeger, Cheryl E., Simple groups, product actions, and generalized quadrangles, Nagoya Math. J., 234, 87-126 (2019) · Zbl 1431.51003 [8] Bichara, Alessandro; Mazzocca, Francesco; Somma, Clelia, On the classification of generalized quadrangles in a finite affine space $${\rm AG}(3,\,2^h)$$, Boll. Un. Mat. Ital. B (5), 17, 1, 298-307 (1980) · Zbl 0463.51013 [9] Bosma, Wieb; Cannon, John; Playoust, Catherine, The Magma algebra system. I. The user language, J. Symbolic Comput., 24, 3-4, 235-265 (1997) · Zbl 0898.68039 [10] Bray, John N.; Holt, Derek F.; Roney-Dougal, Colva M., The maximal subgroups of the low-dimensional finite classical groups, London Mathematical Society Lecture Note Series 407, xiv+438 pp. (2013), Cambridge University Press, Cambridge · Zbl 1303.20053 [11] Brouwer, A. E.; Cohen, A. M.; Neumaier, A., Distance-regular graphs, Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in Mathematics and Related Areas (3)] 18, xviii+495 pp. (1989), Springer-Verlag, Berlin · Zbl 0747.05073 [12] Brown, Julia M. N.; Cherowitzo, William E., The Lunelli-Sce hyperoval in $$\rm PG(2,16)$$, J. Geom., 69, 1-2, 15-36 (2000) · Zbl 0968.51004 [13] Buekenhout, F.; Van Maldeghem, H., Finite distance-transitive generalized polygons, Geom. Dedicata, 52, 1, 41-51 (1994) · Zbl 0809.51008 [14] Cameron, Peter J., Permutation groups, London Mathematical Society Student Texts 45, x+220 pp. (1999), Cambridge University Press, Cambridge · Zbl 0922.20003 [15] Fan, Wenwen; Leemans, Dimitri; Li, Cai Heng; Pan, Jiangmin, Locally 2-arc-transitive complete bipartite graphs, J. Combin. Theory Ser. A, 120, 3, 683-699 (2013) · Zbl 1259.05121 [16] Fong, Paul; Seitz, Gary M., Groups with a $$(B,\,N)$$-pair of rank $$2$$. I, II, Invent. Math., 21, 1-57; ibid. 24 (1974), 191-239 (1973) · Zbl 0295.20048 [17] GAP4 The GAP Group. GAP – Groups, Algorithms, and Programming, Version 4.10.0, 2018. [18] Giudici, Michael; Li, Cai Heng; Praeger, Cheryl E., Analysing finite locally $$s$$-arc transitive graphs, Trans. Amer. Math. Soc., 356, 1, 291-317 (2004) · Zbl 1022.05033 [19] Giudici, Michael; Li, Cai Heng; Praeger, Cheryl E., Characterizing finite locally $$s$$-arc transitive graphs with a star normal quotient, J. Group Theory, 9, 5, 641-658 (2006) · Zbl 1113.05047 [20] Giudici, Michael; Li, Cai Heng; Praeger, Cheryl E., Locally $$s$$-arc transitive graphs with two different quasiprimitive actions, J. Algebra, 299, 2, 863-890 (2006) · Zbl 1092.05028 [21] Gorenstein, Daniel; Lyons, Richard; Solomon, Ronald, The classification of the finite simple groups, Mathematical Surveys and Monographs 40, xiv+165 pp. (1994), American Mathematical Society, Providence, RI · Zbl 0816.20016 [22] Guralnick, Robert; Penttila, Tim; Praeger, Cheryl E.; Saxl, Jan, Linear groups with orders having certain large prime divisors, Proc. London Math. Soc. (3), 78, 1, 167-214 (1999) · Zbl 1041.20035 [23] Guralnick, Robert M.; Mar\'{o}ti, Attila; Pyber, L\'{a}szl\'{o}, Normalizers of primitive permutation groups, Adv. Math., 310, 1017-1063 (2017) · Zbl 1414.20002 [24] Kantor, W. M., Automorphism groups of some generalized quadrangles. Advances in finite geometries and designs, Chelwood Gate, 1990, Oxford Sci. Publ., 251-256 (1991), Oxford Univ. Press, New York · Zbl 0736.51002 [25] Kleidman, Peter; Liebeck, Martin, The subgroup structure of the finite classical groups, London Mathematical Society Lecture Note Series 129, x+303 pp. (1990), Cambridge University Press, Cambridge · Zbl 0697.20004 [26] Li, Cai Heng; Seress, \'{A}kos; Song, Shu Jiao, $$s$$-arc-transitive graphs and normal subgroups, J. Algebra, 421, 331-348 (2015) · Zbl 1301.05172 [27] Liebeck, Martin W.; Praeger, Cheryl E.; Saxl, Jan, The maximal factorizations of the finite simple groups and their automorphism groups, Mem. Amer. Math. Soc., 86, 432, iv+151 pp. (1990) · Zbl 0703.20021 [28] Morgan, Luke; Swartz, Eric; Verret, Gabriel, On 2-arc-transitive graphs of order $$kp^n$$, J. Combin. Theory Ser. B, 117, 77-87 (2016) · Zbl 1329.05148 [29] Ostrom, T. G.; Wagner, A., On projective and affine planes with transitive collineation groups., Math. Z, 71, 186-199 (1959) · Zbl 0085.14302 [30] Payne, Stanley E.; Thas, Joseph A., Finite generalized quadrangles, EMS Series of Lectures in Mathematics, xii+287 pp. (2009), European Mathematical Society (EMS), Z\"{u}rich · Zbl 1247.05047 [31] Praeger, Cheryl E., Finite quasiprimitive graphs. Surveys in combinatorics, 1997 (London), London Math. Soc. Lecture Note Ser. 241, 65-85 (1997), Cambridge Univ. Press, Cambridge · Zbl 0881.05055 [32] Praeger, Cheryl E., An O’Nan-Scott theorem for finite quasiprimitive permutation groups and an application to $$2$$-arc transitive graphs, J. London Math. Soc. (2), 47, 2, 227-239 (1993) · Zbl 0738.05046 [33] Song S. J. Song, On the stabilisers of locally 2-transitive graphs, 1603.08398 [34] Tits, Jacques, Sur la trialit\'{e} et certains groupes qui s’en d\'{e}duisent, Inst. Hautes \'{E}tudes Sci. Publ. Math., 2, 13-60 (1959) · Zbl 0088.37204 [35] Toborg, Imke; Waldecker, Rebecca, Finite simple $$3^\prime$$-groups are cyclic or Suzuki groups, Arch. Math. (Basel), 102, 4, 301-312 (2014) · Zbl 1304.20022 [36] Vasil\cprime ev, A. V., Minimal permutation representations of finite simple exceptional groups of types $$G_2$$ and $$F_4$$, Algebra i Logika. Algebra and Logic, 35 35, 6, 371-383 (1996) [37] Walter, John H., The characterization of finite groups with abelian Sylow $$2$$-subgroups, Ann. of Math. (2), 89, 405-514 (1969) · Zbl 0184.04605 [38] Wilson, Robert A., The finite simple groups, Graduate Texts in Mathematics 251, xvi+298 pp. (2009), Springer-Verlag London, Ltd., London · Zbl 1203.20012
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
|
|
Category:Preorder Theory
This category contains results about Preorder Theory.
Definitions specific to this category can be found in Definitions/Preorder Theory.
$\RR$ is a preordering on $S$ if and only if:
$(1)$ $:$ $\RR$ is reflexive $\displaystyle \forall a \in S:$ $\displaystyle a \mathrel \RR a$ $(2)$ $:$ $\RR$ is transitive $\displaystyle \forall a, b, c \in S:$ $\displaystyle a \mathrel \RR b \land b \mathrel \RR c \implies a \mathrel \RR c$
Subcategories
This category has only the following subcategory.
|
|
# How do you use Riemann sums to evaluate the area under the curve of y = x^2 + 1 on the closed interval [0,1], with n=4 rectangles using midpoint?
$\frac{43}{32}$
${\sum}_{n = 1}^{4} f \left(\frac{2 n - 1}{8}\right) \left(\frac{1}{4}\right) = \frac{1}{4} \left(\frac{1}{64} + 1 + \frac{9}{64} + 1 + \frac{25}{64} + 1 + \frac{49}{64} + 1\right) = \frac{1}{4} \left(4 + \frac{84}{64}\right) = 1 + \frac{22}{64} = 1 + \frac{11}{32} = \frac{43}{32}$
|
|
JASells Feb 10, 2011 at 5:25 PM From the profile data I have, it appears that the System.Windows.Controls.Image class is holding on to image byte[] arrays even after I assign a different Image.Source. Am I using Image properly? private void button1_Click(object sender, RoutedEventArgs e) { myIrIm = new SOPM.Imaging.Gray16SOPM(@"c:\Imagery Dataset\tau640_11-08-10_230337.ADF"); myIrIm.LoadFrame(imageCount); image1.Source = myIrIm; ++imageCount; } I can't find any examples on how to change the Image.Source (or to make the image displayed change) without creating a new instance of a CustomBitmap child class. The problem appears to be that the Image control never releases ANY reference set to Image.Source. The source file is actually multiple frames and I am trying to create what is, essentially, a movie player for a custom format. DwayneNeed Feb 10, 2011 at 6:18 PM Edited Feb 26, 2011 at 2:21 AM This doesn’t sound familiar. Please investigate a little further with a managed memory profiler to confirm the chain of references keeping the byte[] alive. If you don’t have a memory profiler, CLRProfiler is a free one: http://blogs.msdn.com/b/davbr/archive/2011/02/01/clrprofiler-v4-released.aspx JASells Feb 10, 2011 at 6:52 PM Edited Feb 10, 2011 at 7:02 PM First off, thanks for the quick reply! I have checked with a profiler: JetBrains dotTrace memory profiler. Even implementing IDisposable and disposing each of my CustomBitmapSource objects, they still remain on the heap. They show up under the "Garbage Collector Handle" , but they don't ever seem to be collected, so I may have been mistaken when I said the Image control was holding the references. If I could make the Image control update/refresh itself on the screen I wouldn't have to make a new object every time... I guess I could make some sort of factory object that contains the CustomBitmapSource sub-type and have it handle the file stream and buffer byte[] so that those would be re-used for each new BmpSource object created by the factory, but I would like to understand why these objects aren't being collected in the first place so I can effectively work around or fix the root problem. BTW, thanks for the MS profiler, I tried to find this but could only fins the .NET v2.0 one, which won't work on .NFW v4.0 code. Neither one seems to know the app has started... Joshua A. Sells Computer Engineer JASells Feb 10, 2011 at 7:04 PM Just speculating: could the unsafe blocks be affecting the GC's collection algorithm? I have known about unsafe blocks, in concept, but have little experience actually using them so I'm not sure of the potential pitfalls that may be in that realm. DwayneNeed Feb 11, 2011 at 7:06 AM Edited Feb 26, 2011 at 2:21 AM Do use GCHandle.Alloc? That would create a GC handle that might get pinned in memory. Any pointers used within unsafe blocks would, I think, get unpinned at the end of the block. But it might be worth experimenting with. JASells Feb 14, 2011 at 2:51 PM I do use a GCHandle.Alloc call within my derived CustomBitmap class. I followed the pattern you used in CustomBitmap with a try...catch...finally to make sure that GChandle.Free() is always called. As I stated, from my profiler, I do believe that the GC owns the memory that is "leaking" but never releases it. I am working the factory patern impl now. I will post the results of that effort. DwayneNeed wrote: Do use GCHandle.Alloc? That would create a GC handle that might get pinned in memory. Any pointers used within unsafe blocks would, I think, get unpinned at the end of the block. But it might be worth experimenting with. From: JASells [email removed] Sent: Thursday, February 10, 2011 11:05 AM To: Dwayne Need Subject: Re: Memory Leak Using Custom Bitmap w/ Image in WPF [MicrosoftDwayneNeed:245478] From: JASells Just speculating: could the unsafe blocks be affecting the GC's collection algorithm? I have known about unsafe blocks, in concept, but have little experience actually using them so I'm not sure of the potential pitfalls that may be in that realm. JASells Feb 14, 2011 at 10:49 PM Edited Feb 17, 2011 at 12:50 PM The factory pattern/re-using the byte[] memory buffer(s) did not help the leak. I tried to find a way to make sure the pointers are reclaimed, but the only pointers I am using in my unsafe block are pointers to managed memory that has already been pinned. Looking at the memory usage via the profiler, there is only ~10MB being used, but according to taskman, I am over 200. The leak must be in unmanaged memory somewhere, I suspect the Image control. Have you ever streamed 100's of images into this control? Is it designed for this? I can upload a simplified example, if anyone wants to see exactly what I'm doing. DwayneNeed Feb 15, 2011 at 12:22 AM Edited Feb 26, 2011 at 2:21 AM A simplified repro would be great. Perhaps you should use the connect site for this, as you can attach repros. We will pick it up when it gets synchronized with our internal bug system. JASells Feb 15, 2011 at 1:56 PM Where/what is the connect site? Do you have a Url? I could use my skydrive storage too. DwayneNeed wrote: A simplified repro would be great. Perhaps you should use the connect site for this, as you can attach repros. We will pick it up when it gets synchronized with our internal bug system. From: JASells [email removed] Sent: Monday, February 14, 2011 2:50 PM To: Dwayne Need Subject: Re: Memory Leak Using Custom Bitmap w/ Image in WPF [MicrosoftDwayneNeed:245478] From: JASells The factory pattern/re-using the byte[] memory buffer did not help the leak. I tried to find a way to make sure the pointers are reclaimed, but the only pointers I am using in my unsafe block are pointers to manage memory that has been pinned. Looking at the memory usage via the profiler, there is only ~10MB being used, but according to taskman, I am over 200. The leak must be in unmanaged memory somewhere, I suspect the Image control. Have you ever streamed 100's of images into this control? Is it designed for this? I can upload a simplified example, if anyone wants to see exactly what I'm doing. DwayneNeed Feb 15, 2011 at 3:37 PM Edited Feb 26, 2011 at 2:22 AM I was referring to http://connect.microsoft.com/WPF However, if you can put your repro up on skydrive, that would work just as well. JASells Feb 17, 2011 at 12:49 PM Edited Feb 17, 2011 at 3:02 PM Ok, I uploaded the code with about 10 frames of the sample imagery. http://cid-255987229e4d13e3.skydrive.live.com/redir.aspx?resid=255987229E4D13E3!134 That should be enough to easily see the memory leak, but you can loop through those 10 frames if you want to push the memory use even higher. Data is Rgba, with alpha unused. Code comments explain. P.S. The data file path is hard coded in the application project, so you'll have to change that to reflect your local system. http://cid-255987229e4d13e3.skydrive.live.com/redir.aspx?resid=255987229E4D13E3!134 JASells Feb 22, 2011 at 3:34 PM I am just checking to see if anyone has had a chance to take a look at the code I posted. DwayneNeed Feb 25, 2011 at 2:04 AM Edited Feb 26, 2011 at 2:22 AM Looks like the solution is missing the WpfImageMemLeakDemo\SOPM\SOPM.Imaging\SOPM.Imaging.csproj project? JASells Feb 25, 2011 at 2:25 PM So it is. Here is a link to the missing project. http://cid-255987229e4d13e3.office.live.com/self.aspx/.Documents/CustomBitmapMemleakExample.zip Tonight I will put them into one download and fix previous posts' links. DwayneNeed wrote: Looks like the solution is missing the WpfImageMemLeakDemo\SOPM\SOPM.Imaging\SOPM.Imaging.csproj project? Dwayne Need | WPF is hiring From: JASells [email removed] Sent: Tuesday, February 22, 2011 7:35 AM To: Dwayne Need Subject: Re: Memory Leak Using Custom Bitmap w/ Image in WPF [MicrosoftDwayneNeed:245478] From: JASells I am just checking to see if anyone has had a chance to take a look at the code I posted. http://cid-255987229e4d13e3.office.live.com/self.aspx/.Documents/CustomBitmapMemleakExample.zip JASells Feb 25, 2011 at 2:29 PM Btw, your hiring link is broken... DwayneNeed wrote: Looks like the solution is missing the WpfImageMemLeakDemo\SOPM\SOPM.Imaging\SOPM.Imaging.csproj project? Dwayne Need | WPF is hiring From: JASells [email removed] Sent: Tuesday, February 22, 2011 7:35 AM To: Dwayne Need Subject: Re: Memory Leak Using Custom Bitmap w/ Image in WPF [MicrosoftDwayneNeed:245478] From: JASells I am just checking to see if anyone has had a chance to take a look at the code I posted. DwayneNeed Feb 26, 2011 at 2:20 AM Edited Feb 26, 2011 at 2:22 AM Your issue has been reported before: http://social.msdn.microsoft.com/Forums/en-US/wpf/thread/2113b16e-d4a2-4423-a7ae-6f386ae2706a The underlying IWICBitmapSource is not being released. The above thread suggests a workaround, which is to use reflection to manually release this interface. I implemented the suggestion in your code in the CustomBitmapStream.Dispose(bool) method: protected virtual void Dispose(bool disposing) { if (myFs != null) myFs.Close(); myFs = null; var field = GetType().GetField("_wicSource", BindingFlags.Instance | BindingFlags.NonPublic); var val = (SafeHandle)field.GetValue(this); val.Close(); RawFrameCache = null; } This solves the memory leak. However, I notice that after 10 button clicks or so, the image goes black. This happens before my change as well. JASells Feb 26, 2011 at 2:27 AM There are only 10 frames in the file, so probably no data. Good to know that there is a work around. I did not find that report. Thanks for the info! Joshua A. Sells 5400 Fowler Rd. AMSRD-AMR-WD-UC Office: 256.842.6925 DSN: 788.6925 Mobile: 256.503.9247 From: DwayneNeed [email removed] Sent: Friday, February 25, 2011 08:20 PM To: Sells, Joshua A AMRDEC Subject: Re: Memory Leak Using Custom Bitmap w/ Image in WPF [MicrosoftDwayneNeed:245478] From: DwayneNeed Your issue has been reported before: http://social.msdn.microsoft.com/Forums/en-US/wpf/thread/2113b16e-d4a2-4423-a7ae-6f386ae2706a The underlying IWICBitmapSource is not being released. The above thread suggests a workaround, which is to use reflection to manually release this interface. I implemented the suggestion in your code in the CustomBitmapStream.Dispose(bool) method: protected virtual void Dispose(bool disposing) { if (myFs != null) myFs.Close(); myFs = null; var field = GetType().GetField("_wicSource", BindingFlags.Instance | BindingFlags.NonPublic); var val = (SafeHandle)field.GetValue(this); val.Close(); RawFrameCache = null; } This solves the memory leak. However, I notice that after 10 button clicks or so, the image goes black. This happens before my change as well. From: JASells [email removed] Sent: Friday, February 25, 2011 6:30 AM To: Dwayne Need Subject: Re: Memory Leak Using Custom Bitmap w/ Image in WPF [MicrosoftDwayneNeed:245478] From: JASells Btw, your hiring link is broken... DwayneNeed wrote: Looks like the solution is missing the WpfImageMemLeakDemo\SOPM\SOPM.Imaging\SOPM.Imaging.csproj project? Dwayne Need | WPF is hiring From: JASells [email removed] Sent: Tuesday, February 22, 2011 7:35 AM To: Dwayne Need Subject: Re: Memory Leak Using Custom Bitmap w/ Image in WPF [MicrosoftDwayneNeed:245478] From: JASells I am just checking to see if anyone has had a chance to take a look at the code I posted. aKzenT Aug 14, 2014 at 3:54 PM I know that this entry is quite old, but I just stumbled upon it and noticed that this behaviour changed from .NET 4.0 to .NET 4.5. The latter does no longer produce memory leaks. The root cause for the memory leak is the internal ManagedBitmapSource class which is reference counted (COM) and used by the BitmapSource class. This is what the _wicSource handle points to. The ManagedBitmapSource has a reference to the original BitmapSource class, thus creating a cycle which the GC cannot break. In .NET 4.5 the cycle is broken by making the reference from ManagedBitmapSource to the original BitmapSource a WeakReference. For this reason the workarround described above is no longer needed from .NET 4.5 on forward.
|
|
# What is the difference between QFT and elementary particle physics?
I'm a little unclear as to how QFT differs from Elementary particle physics. They both use pictorials of Feynman graphs, is it that Elementary particle physics assumes the point particle perspective, while QFT treats them as fields?
-
Concerning this answer to another question. You appear to have duplicate accounts. There is a help center page about that. – dmckee Feb 10 '14 at 13:23
## 1 Answer
Quantum Field Theory is the framework that we normally use to study particle physics. It is the idea that the world is described by fields and each field can move into excited states which correspond to particles.
Since we are working with fields, you not constricted to a fixed particle number as in Quantum Mechanics. This is very useful since due to $E=mc^2$, particles constantly appear and disappear in the real world.
Particle physics on the other hand, is the study of the particles that make up our world. Since QFT's are terrific at describing Nature, we use them to describe particle physics; they are tool we need to study particle physics. However, some people study QFT for the sake of QFT since its a very deep and subtle subject.
-
|
|
# Calling the NAG C Library Mark 26 from C#
The information here is applicable to the DLLs supplied with CLW3226DEL. For the latest information on calling the NAG C Library DLLs from various environments, or for sample projects for a specific Mark of the NAG C Library, please go to the main NAG C Library DLL support index page.
Please note that we recommend using the NAG Library for .NET from within C#. If this does not contain the NAG routines that you require, the NAG Fortran Library (DLLs) or the NAG C Library (as described below) may be used.
The NAG C Library uses the following data types as parameters. These are:
1. scalars of type double, int and complex. These are passed either by value or by reference (as pointers to the particular type).
2. enum types.
3. arrays of type double, int and complex.
4. a large number of structures, generally passed as pointers.
5. in a few instances arrays (type double**) that are allocated within NAG routines and have to be freed by users. Also, in one instance, an array (type char***) of strings which is allocated internally.
6. function parameters, also known as call-backs, that are pointers to functions with particular signatures.
As a first step towards making the C Library functions available to the .NET environment, an assembly of imports is needed. This contains:
1. signatures of all the functions we want to call from the NAG C Library.
2. structures with their attributes specified both in terms of their layout and their properties when passed as parameters (in, out or both). This is necessary as structures are a value type in .NET while in C they can be reference types also. Also pointers are more or less a taboo in C#. We avoid the explicit use of pointers by using instead the C# IntPtr type which is in fact equivalent to void * in C. The IntPtr fields can be handled later using marshalling methods.
3. signatures of all call-back types.
4. enum types derived from the C header file, nag_types.h.
The file clcsdnet.cs contains signatures of NAG C Library routines and call-back functions. It also contains layouts of structures used in the library, e.g. the NagError structure which is used by almost all the routines in the library. One of the most useful areas of the library is optimization. We have provided the layout for the Nag_E04_Opt structure along with the signatures for nag_opt_nlp (e04ucc) and the call-backs, e04ucc_objfun_DELEGATE and e04ucc_confun_DELEGATE, which are required by this routine, for example. enum types have also been provided.
We have also provided basic C# classes for illustrative purposes. These can be found in the files listed below. For example a sample C# class calling e04ucc can be found in e04ucce.cs. Each of the illustrative classes has a Main method exercising the NAG C example as shown in the C Library manual.
These examples may be compiled from the command line (e.g. from a Visual Studio Command Prompt) in the following manner:
csc /unsafe d01sjce.cs clcsdnet.cs
Note that because calling NAG C Library functions involves the use of pointers (see above), the /unsafe compiler option must be specified. On 64-bit machines, you will also need to specify the /platform:x86 compiler option.
The function declarations in clcsdnet.cs are based on the stand-alone version of the NAG C Library DLL (CLW3226DE_nag.dll); to specify the version of the DLL which uses the MKL BLAS/LAPACK instead (CLW3226DE_mkl.dll), replace CLW3226DE_nag.dll by CLW3226DE_mkl.dll in the DllImport attribute.
Remember also that to be able to run the program after you have compiled it, the NAG C Library DLL will need to appear somewhere in your current path. For example, if the DLLs are in C:\Program Files\NAG\CL26\clw3226del\bin, then your PATH environment variable must contain this folder. You will also need the folder containing the Intel run-time libraries on your path (unless these are already present). If you are using the MKL-based version of the NAG C Library (CLW3226DE_mkl.dll), then the folder containing the MKL DLLs should also be on your path, but should appear later in the path than the bin folder for the NAG C Library DLLs, e.g.
C:\Program Files\NAG\CL26\clw3226del\bin;
C:\Program Files\NAG\CL26\clw3226del\rtl\bin;
C:\Program Files\NAG\CL26\clw3226del\mkl_ia32_11.3.3\bin;
<rest of path>
Tested with Visual Studio 2010 (Visual C# Compiler version 4.6.1055.0 for C# 5), Visual Studio 2012 (Visual C# Compiler version 4.0.30319.34209 for .NET Framework 4.5), Visual Studio 2013 (Visual C# Compiler version 12.0.40629.0 for C# 5) and Visual Studio 2015 (Visual C# Compiler version 1.0.0.50618).
|
|
# diophantus
Hello, this is beta version of diophantus. If you want to report about a mistake, please, write to hello@diophantus.org
#### Conjugate field and fluctuation-dissipation relation for the dynamic phase transition in the two-dimensional kinetic Ising model
09 Apr 2007 cond-mat.stat-mech arxiv.org/abs/0704.1123
Abstract. The two-dimensional kinetic Ising model, when exposed to an oscillating applied magnetic field, has been shown to exhibit a nonequilibrium, second-order dynamic phase transition (DPT), whose order parameter Q is the period-averaged magnetization. It has been established that this DPT falls in the same universality class as the equilibrium phase transition in the two-dimensional Ising model in zero applied field. Here we study for the first time the scaling of the dynamic order parameter with respect to a nonzero, period-averaged, magnetic `bias' field, H_b, for a DPT produced by a square-wave applied field. We find evidence that the scaling exponent, \delta_d, of H_b at the critical period of the DPT is equal to the exponent for the critical isotherm, \delta_e, in the equilibrium Ising model. This implies that H_b is a significant component of the field conjugate to Q. A finite-size scaling analysis of the dynamic order parameter above the critical period provides further support for this result. We also demonstrate numerically that, for a range of periods and values of H_b in the critical region, a fluctuation-dissipation relation (FDR), with an effective temperature T_{eff}(T, P, H_0) depending on the period, and possibly the temperature and field amplitude, holds for the variables Q and H_b. This FDR justifies the use of the scaled variance of Q as a proxy for the nonequilibrium susceptibility, \partial / \partial H_b, in the critical region.
# Reviews
There are no reviews yet.
|
|
Last edited by Murr
Friday, November 6, 2020 | History
3 edition of gamma family and derived distributions applied in hydrology found in the catalog.
gamma family and derived distributions applied in hydrology
Bernard BobeМЃe
# gamma family and derived distributions applied in hydrology
Written in English
Subjects:
• Hydrology -- Mathematics.,
• Gamma functions.
• Edition Notes
Includes bibliographical references (p. 173-187) and index.
Classifications The Physical Object Statement by Bernard Bobée and Fahim Ashkar. Contributions Ashkar, Fahim. LC Classifications GB656.2.M34 B63 1991 Pagination xiv, 203 p. : Number of Pages 203 Open Library OL1894433M ISBN 10 0918334683 LC Control Number 90070598
The Pareto distribution, named after the Italian civil engineer, economist, and sociologist Vilfredo Pareto, (Italian: [p a ˈ r e ː t o] US: / p ə ˈ r eɪ t oʊ / pə-RAY-toh), is a power-law probability distribution that is used in description of social, quality control, scientific, geophysical, actuarial, and many other types of observable ally applied to describing the. Starting from simple notions of the essential graphical examination of hydrological data, the book gives a complete account of the role that probability considerations must play during modelling, diagnosis of model fit, prediction and evaluating the uncertainty in model predictions, including the essence of Bayesian application in hydrology and. For example, the gamma distribution is derived from the gamma function. The Pareto distribution is mathematically an exponential-gamma mixture. Taking independent sum of independent and identically distributed exponential random variables produces the Erlang distribution, a sub gamma family of distribution. It is originally applied as a. One-parameter canonical exponential family Canonical exponential family for k = 1, y ∈ IR (yθ −b(θ)) f. θ (y) = exp + c(y,φ) φ. for some known functions b() and c(,). If φ is known, this is a one-parameter exponential family with θ being the canonical parameter. If .
Some general properties of these families of distributions are studied. Four members of the T-R{generalized lambda} families of distributions are derived. The shapes of these distributions can be symmetric, skewed to the left, skewed to the right, or bimodal. Two real life data sets are applied to illustrate the flexibility of the distributions.
You might also like
FET circuits.
FET circuits.
Slocums good deed
Slocums good deed
Pakistan
Pakistan
Forestry Extension Organization/F2907
Forestry Extension Organization/F2907
Second class, working class
Second class, working class
Critical political economy
Critical political economy
Mascots
Mascots
Rumpelstiltskin
Rumpelstiltskin
critique of a defence of equalising grants to local authorities
critique of a defence of equalising grants to local authorities
Rules and regulations for approval to perform hematology and urinalysis
Rules and regulations for approval to perform hematology and urinalysis
Final report of the Hospital Services Review Committee
Final report of the Hospital Services Review Committee
Storage batteries
Storage batteries
Common diseases of the ear
Common diseases of the ear
Sociological Worlds
Sociological Worlds
### gamma family and derived distributions applied in hydrology by Bernard BobeМЃe Download PDF EPUB FB2
Gamma Family and Derived Distributions Applied in Hydrology [Bobee, B., Ashkar] on *FREE* shipping on qualifying offers.
Gamma Family and Derived Distributions Applied in HydrologyCited by: The Gamma Family and Derived Distributions Applied in Hydrology The Gamma Family and Derived Distributions Applied in Hydrology, Bernard Bobée: Authors: Bernard Bobée, Fahim Ashkar: Publisher: Water Resources Publications, Original from: the University of California: Digitized: ISBN:Length: : Gamma Family and Derived Distributions Applied in Hydrology () by Bobee, B.; Ashkar and a great selection of similar New, Used and Format: Paperback.
Book Review: The gamma family and derived distributions applied in hydrology. By Bernard Bobee and Fahim Ashkar, Water Resources Publications, Littleton, COUSA,pp., soft cover, \$, ISBN Cited by: 1. - Buy Gamma Family and Derived Distributions Applied in Hydrology book online at best prices in India on Read Gamma Family and Derived Distributions Applied in Hydrology book reviews & author details and more at Free delivery on qualified : B.
Bobee, Ashkar. This paper is devoted to a new class of distributions called the Box-Cox gamma-G family. Gamma family and derived distributions applied in hydrology book is a natural generalization of the useful Ristić–Balakrishnan-G family of distributions, containing a wide variety of power gamma-G distributions, including the odd gamma-G distributions.
The key tool for this generalization is the use of the Box-Cox transformation involving a tuning power parameter. Bobée B. and F. Ashkar () The gamma family and derived distributions applied in hydrology, p. Water Resources Publications, Fort Collins, Colorado. Google Scholar.
The Gamma Family and Derived Distributions Applied in Hydrology. Water Resources Publications, Littleton, Colorado.
Water Resources Publications, Littleton, Colorado. Google Scholar. The Gamma Family and Derived Distributions Applied in Hydrology. Water Resources Publications, Littleton, CO, pp. Durrans, S.R., Parameter estimation for the Pearson type 3 distribution using order statistics.
The Gamma Family and Derived Distributions Applied in Hydrology, Water Resources Publications, USA () New forms of correlation relationships between positive quantities applied in hydrology.
Paper presented at International Symposium on Mathematical Models in Hydrology, International Association of Scientific Hydrology, Warsaw, Poland. dgamma3 gives the density, pgamma3 gives the distribution function, qgamma3 gives the quantile function, and rgamma3 generates random deviates.
References. Bobee, B. and F. Ashkar (). The Gamma Family and Derived Distributions Applied in Hydrology. Water Resources Publications, Littleton, Colo., p. See Also. dgamma, pgamma, qgamma. The gamma distribution (Pearson's Type III), which is a limiting case of Type I distribution and next, to the Gaussian distribution in simplicity, gives a good fit to monthly rainfall at all the.
), can be attributed to Laplace () who obtained a gamma distribution asthe distribution of a “precisionconstant”. The gamma distribution has been used to model waiting times. For example in life testing, the waiting time until “death” is a random variablethat has a gamma distribution.
Family of generalized gamma distributions: Properties and applications Ayman Alzaatreh y, Carl Leezand Felix Famoyex Abstract In this paper, a family of generalized gamma distributions, T-gamma family,hasbeenproposedusingtheT-RfYgframework.
Thefamilyof distributions is generated using the quantile functions of uniform, ex. The Gamma Family and Derived Distributions Applied in Hydrology The Gamma Family and Derived DistributionsApplied in Hydrology Chapter 7: Log-Pearson type 3 distribution.
Raindrop size distributions have been characterized through the gamma family. Over the years, quite a few estimates of these gamma parameters have been proposed.
The natural question for the practitioner, then, is what estimation procedure should be used. We provide guidance in answering this question when a large sample size (> drops) of accurately measured drops is available.
In a case study, a bivariate flood frequency analysis was carried out using a five-parameter bivariate gamma distribution. A family of joint return period curves relating the runoff peak discharges to the runoff volumes at the dam site was derived.
dlgamma3 gives the density, plgamma3 gives the distribution function, qlgamma3 gives the quantile function, and rlgamma3 generates random deviates. References.
BOBEE, B. and F. ASHKAR (). The Gamma Family and Derived Distributions Applied in Hydrology. Water Resources Publications, Littleton, Colo., p. See Also. Ayman Alzaatreh, Mohammad A.
Aljarrah, Michael Smithson, Saman Hanif Shahbaz, Muhammad Qaiser Shahbaz, Felix Famoye, Carl Lee, Truncated Family of Distributions with Applications to Time and Cost to Start a Business, Methodology and Computing in Applied Probability, /s, ().
In probability theory and statistics, the gamma distribution is a two-parameter family of continuous probability exponential distribution, Erlang distribution, and chi-squared distribution are special cases of the gamma distribution.
There are three different parametrizations in common use. With a shape parameter k and a scale parameter θ. Three-Parameter Gamma Distribution (also known as Pearson type III distribution) Density, distribution function, quantile function and random generation for the 3-parameter gamma distribution with shape, ().
The Gamma Family and Derived Distributions Applied in Hydrology. Water Resources Publications, Littleton, Colo., p. See. The gamma family of distributions is made up of three distributions: gamma, negative gamma and normal.
It covers any specified average, standard deviation and skewness. Together they form a 3-parameter family of distributions that is represented by a curve on a skewness-kurtosis plot as shown below.
The gamma distribution covers the positive. In another post I derived the exponential distribution, which is the distribution of times until the first change in a Poisson process. The gamma distribution models the waiting time until the 2nd, 3rd, 4th, 38th, etc, change in a Poisson process.
As we did with the exponential distribution, we derive it from the Poisson distribution. It turns out that this family consists of the gamma distributions. Gamma distributions describe continuous non-negative random variables.
As we know, the value of $$\lambda$$ in the Poisson can take any non-negative value so this fits. The gamma family is flexible, and Figure illustrates a wide range of gamma shapes. • The comparison between the gamma distribution and the log normal distribution shows that the gamma distribution is more flexible than lognormal distribution since the estimated depth (), is nearest to the actual data.
REFERENCES • Aron, G., White, E.L., (). Fitting a gamma-distribution over a synthetic unit-hydrograph. Is October Reliability Basics: Overview of the Gumbel, Logistic, Loglogistic and Gamma Distributions.
Weibull++ introduces four more life distributions in addition to the Weibull-Bayesian distribution discussed in the previous issue of HotWire.
These are the Gumbel, logistic, loglogistic and Gamma distributions. This paper deals with a Maximum likelihood method to fit a three-parameter gamma distribution to data from an independent and identically distributed scheme of sampling.
The likelihood hinges on the joint distribution of the n − 1 largest order statistics and its maximization is done by resorting to a MM-algorithm. Monte Carlo simulations is performed in order to examine the behavior of. Fitting performances of the gamma distribution function The first step is to choose the type of theoretical distributions that best describe the empirical distribution.
According to cited references, the gamma distribution is a particularly suitable distribution for monthly precipitation data. By elementary changes of variables this historical definition takes the more usual forms: Theorem 2 For x>0 Γ(x)=0 tx−1e−tdt, (2) or sometimes Γ(x)=20 t2x−1e−t2dt.
(3) Proof. Use respectively the changes of variable u = −log(t) and u2 = −log(t) in (1). From this theorem, we see that the gamma function Γ(x) (or the Eulerian integral of the second kind) is well defined and.
The pdf is used because of its similarity to IUH shape and unit hydrograph properties, e.g., the two‐parameter gamma distribution and the three‐parameter Beta distribution [Bhunya et al., ].
[10] The possibility of preserving the form of the IUH through a two‐parameter gamma pdf has been analyzed in the past by Rosso [], who. The gamma distribution is another widely used distribution. Its importance is largely due to its relation to exponential and normal distributions.
The gamma distribution is a two-parameter family of continuous probability distributions. The exponential distribution, Erlang distribution, and chi-squared distribution are special cases of the.
In probability theory and statistics, the Gumbel distribution (Generalized Extreme Value distribution Type-I) is used to model the distribution of the maximum (or the minimum) of a number of samples of various distributions.
This distribution might be used to represent the distribution of the maximum level of a river in a particular year if there was a list of maximum values for the past ten. A demonstration of how to show that the gamma distribution is a member of the natural exponential family of distributions, and hence how to find it's mean an.
Follow Bernard Bobée and explore their bibliography from 's Bernard Bobée Author Page. distribution [24], the Burrr XII-Singh-Maddala (BSM) distribution function derived from the maximum entropy principle using the Boltzmann-Shannon entropy with some constraints [25].
“Entropy-Based Parameter Estimation in Hydrology” is the first book focusing on parameter estimation using entropy. It does depend where you look. For example, gamma distributions have been popular in several of the environmental sciences for some decades and so modelling with predictor variables too is a natural extension.
There are many examples in hydrology and geomorphology, to name some fields in which I. A new residence-time distribution (RTD) function has been developed and applied to quantitative dye studies as an alternative to the traditional advection-dispersion equation (AdDE).
The new method is based on a jointly combined four-parameter gamma probability density function (PDF). The gamma residence-time distribution (RTD) function and its first and second moments are derived from the. The extension of the model to Gamma family is briefly summarized. Then, the performance of Gamma waiting times is compared with both exponential times and actual Poisson process.
Based on this, it is concluded that as the shape parameter of Gamma gets larger, we have actual Poisson process as the limiting distribution. In this paper, we derive the cumulative distribution functions (CDF) and probability density functions (PDF) of the ratio and product of two independent Weibull and Lindley random variables.
The moment generating functions (MGF) and the k -moment are driven from the ratio and product cases. In these derivations, we use some special functions, for instance, generalized hypergeometric functions.
In hydrology the Pareto distribution is applied to extreme events such as annually maximum one-day rainfalls and river discharges.
The blue picture illustrates an example of fitting the Pareto distribution to ranked annually maximum one-day rainfalls showing also the 90% confidence belt based on the binomial distribution. A review on recent generalizations of exponential distribution; Submit manuscript Due to current COVID19 situation and as a measure of abundant precaution, our Member Services centre are operating with minimum staff.
eISSN: X. Biometrics & Biostatistics International Journal.distributions, which is not possible with standard classical methods.
The methodology has been applied on two different problems in hydrology. The first application is concerned with the combined risk in the framework of frequency analysis. Four copulas have been tested on peak flows from the watershed of Peribonka in Que´bec, Canada. The second.Both exponential and gamma distributions play pivotal roles in the study of records because of hydrology, medicine, number theory, order statistics, physics, psychology, etc.
Some of the notable examples are the ratios of inventory in economics, ratios of Gamma(can be derived in a similar way to the record values of Lomax distribution.
|
|
# American Institute of Mathematical Sciences
• Previous Article
Comparison theorems of oscillation and nonoscillation for neutral difference equations with continuous arguments
• CPAA Home
• This Issue
• Next Article
On quasilinear elliptic equations related to some Caffarelli-Kohn-Nirenberg inequalities
December 2003, 2(4): 567-577. doi: 10.3934/cpaa.2003.2.567
## A numerical investigation of the dynamics of a system of two time-delay coupled relaxation oscillators
1 Department of Theoretical and Applied Mechanics, Cornell University, Ithaca, New York 14853, United States 2 Department of Mathematical Sciences, Indiana University, Indianapolis, IN 46202, United States
Received August 2001 Revised June 2003 Published October 2003
In this paper we examine the dynamics of two time-delay coupled relaxation oscillators of the van der Pol type. By integrating the governing differential-delay equations numerically, we find the various phase-locked motions including the in-phase and out-of-phase modes. Our computations reveal that depending on the strength of coupling ($\alpha$) and the amount of time-delay ($\tau$), the in-phase (out-of-phase) mode may be stable or unstable. There are also values of $\alpha$ and $\tau$ for which the in-phase and out-of-phase modes are both stable leading to birhythmicity. The results are illustrated in the $\alpha$-$\tau$ parameter plane. Near the boundaries between stability and instability of the in-phase (out-of-phase) mode, many other types of phase-locked motions can occur. Several examples of these phase-locked states are presented.
Citation: Richard H. Rand, Asok K. Sen. A numerical investigation of the dynamics of a system of two time-delay coupled relaxation oscillators. Communications on Pure & Applied Analysis, 2003, 2 (4) : 567-577. doi: 10.3934/cpaa.2003.2.567
[1] Jianquan Li, Xin Xie, Dian Zhang, Jia Li, Xiaolin Lin. Qualitative analysis of a simple tumor-immune system with time delay of tumor action. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2020341 [2] Ilyasse Lamrani, Imad El Harraki, Ali Boutoulout, Fatima-Zahrae El Alaoui. Feedback stabilization of bilinear coupled hyperbolic systems. Discrete & Continuous Dynamical Systems - S, 2020 doi: 10.3934/dcdss.2020434 [3] Giuseppina Guatteri, Federica Masiero. Stochastic maximum principle for problems with delay with dependence on the past through general measures. Mathematical Control & Related Fields, 2020 doi: 10.3934/mcrf.2020048 [4] Yichen Zhang, Meiqiang Feng. A coupled $p$-Laplacian elliptic system: Existence, uniqueness and asymptotic behavior. Electronic Research Archive, 2020, 28 (4) : 1419-1438. doi: 10.3934/era.2020075 [5] Maoding Zhen, Binlin Zhang, Vicenţiu D. Rădulescu. Normalized solutions for nonlinear coupled fractional systems: Low and high perturbations in the attractive case. Discrete & Continuous Dynamical Systems - A, 2020 doi: 10.3934/dcds.2020379 [6] Xavier Carvajal, Liliana Esquivel, Raphael Santos. On local well-posedness and ill-posedness results for a coupled system of mkdv type equations. Discrete & Continuous Dynamical Systems - A, 2020 doi: 10.3934/dcds.2020382 [7] Xin-Guang Yang, Lu Li, Xingjie Yan, Ling Ding. The structure and stability of pullback attractors for 3D Brinkman-Forchheimer equation with delay. Electronic Research Archive, 2020, 28 (4) : 1395-1418. doi: 10.3934/era.2020074 [8] Soniya Singh, Sumit Arora, Manil T. Mohan, Jaydev Dabas. Approximate controllability of second order impulsive systems with state-dependent delay in Banach spaces. Evolution Equations & Control Theory, 2020 doi: 10.3934/eect.2020103 [9] Fathalla A. Rihan, Hebatallah J. Alsakaji. Stochastic delay differential equations of three-species prey-predator system with cooperation among prey species. Discrete & Continuous Dynamical Systems - S, 2020 doi: 10.3934/dcdss.2020468 [10] Lars Grüne, Matthias A. Müller, Christopher M. Kellett, Steven R. Weller. Strict dissipativity for discrete time discounted optimal control problems. Mathematical Control & Related Fields, 2020 doi: 10.3934/mcrf.2020046 [11] Serena Dipierro, Benedetta Pellacci, Enrico Valdinoci, Gianmaria Verzini. Time-fractional equations with reaction terms: Fundamental solutions and asymptotics. Discrete & Continuous Dynamical Systems - A, 2021, 41 (1) : 257-275. doi: 10.3934/dcds.2020137 [12] Guido Cavallaro, Roberto Garra, Carlo Marchioro. Long time localization of modified surface quasi-geostrophic equations. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2020336 [13] Cuicui Li, Lin Zhou, Zhidong Teng, Buyu Wen. The threshold dynamics of a discrete-time echinococcosis transmission model. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2020339 [14] Awais Younus, Zoubia Dastgeer, Nudrat Ishaq, Abdul Ghaffar, Kottakkaran Sooppy Nisar, Devendra Kumar. On the observability of conformable linear time-invariant control systems. Discrete & Continuous Dynamical Systems - S, 2020 doi: 10.3934/dcdss.2020444 [15] Hoang The Tuan. On the asymptotic behavior of solutions to time-fractional elliptic equations driven by a multiplicative white noise. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2020318 [16] Haixiang Yao, Ping Chen, Miao Zhang, Xun Li. Dynamic discrete-time portfolio selection for defined contribution pension funds with inflation risk. Journal of Industrial & Management Optimization, 2020 doi: 10.3934/jimo.2020166 [17] Youshan Tao, Michael Winkler. Critical mass for infinite-time blow-up in a haptotaxis system with nonlinear zero-order interaction. Discrete & Continuous Dynamical Systems - A, 2021, 41 (1) : 439-454. doi: 10.3934/dcds.2020216 [18] Mohammed Abdulrazaq Kahya, Suhaib Abduljabbar Altamir, Zakariya Yahya Algamal. Improving whale optimization algorithm for feature selection with a time-varying transfer function. Numerical Algebra, Control & Optimization, 2021, 11 (1) : 87-98. doi: 10.3934/naco.2020017 [19] Reza Lotfi, Zahra Yadegari, Seyed Hossein Hosseini, Amir Hossein Khameneh, Erfan Babaee Tirkolaee, Gerhard-Wilhelm Weber. A robust time-cost-quality-energy-environment trade-off with resource-constrained in project management: A case study for a bridge construction project. Journal of Industrial & Management Optimization, 2020 doi: 10.3934/jimo.2020158
2019 Impact Factor: 1.105
|
|
## CCC '05 J1 - The Cell Sell
View as PDF
Points: 3
Time limit: 2.0s
Memory limit: 64M
Problem type
Moe Bull has a cell phone and after a month of use is trying to decide which price plan is the best for his usage pattern. He has two options, each plan has different costs for daytime minutes, evening minutes and weekend minutes.
Plan Costs
daytime evening weekend
A 100 free minutes then 25 cents per minute 15 cents per minute 20 cents per minute
B 250 free minutes then 45 cents per minute 35 cents per minute 25 cents per minute
Write a program that will input the number of each type of minutes and output the cheapest plan for this usage pattern, using the format shown below. The input will be in the order of daytime minutes, evening minutes and weekend minutes. In the case that the two plans are the same price, output both plans.
#### Sample Input 1
251
10
60
#### Sample Output 1
Plan A costs 51.25
Plan B costs 18.95
Plan B is cheapest.
#### Sample Input 2
162
61
66
#### Sample Output 2
Plan A costs 37.85
Plan B costs 37.85
Plan A and B are the same price.
CCC problem statements in large part from the PEG OJ
• saysuhayl commented on Oct. 14, 2018, 12:31 a.m.
Spent 20 mins debugging because the output is worded "Plan B is cheapest." vs "Plan B is the cheapest." which is proper grammar.
• Arihan10 commented on Feb. 23, 2019, 7:19 p.m.
This is why the Copy button exists.
• Summertony717 commented on Feb. 23, 2019, 3:06 p.m.
Only 20 min? Good job, that would have taken me hours.
• Epic1Online commented on Oct. 30, 2017, 5:11 p.m. edit 2
Are compile errors due to bad code? Because my code works on my laptop but I get a compile error when submitting. The same thing happened for another problem but I resubmitted and it worked but that's not fixing the error here.
EDIT: I fixed it by using Math.round() instead of DecimalFormat, but I don't see why DecimalFormat shouldn't work (Java 8)
• injust commented on Oct. 30, 2017, 6:41 p.m.
You may want to import java.text.DecimalFormat.
• Selena_Liu commented on July 12, 2017, 12:51 a.m.
I don't get why my first case is wrong. They are the same price, aren't they?
• injust commented on July 12, 2017, 3:35 a.m.
Your program prints Plan A and Plan B are the same price. when the plans cost the same amount.
• lolzballs commented on March 18, 2015, 1:16 p.m.
This comment is hidden due to too much negative feedback. Click here to view it.
• HyperNeutrino commented on Feb. 23, 2019, 5:51 p.m.
"Plan A is cheap"
"Plan A is cheaper"
...
Both are correct; cheapest is an adjective but it's a superlative so you can also stick a definitive article (i.e. "the") in front of it and treat is as though it were a noun.
• FatalEagle commented on March 18, 2015, 1:17 p.m.
How correct the grammar is doesn't matter; it's simply the output specifications.
• Hunterdrago1 commented on Oct. 18, 2014, 6:31 p.m.
Plan A costs 37.85 Plan B costs 37.849999999999994 Plan B is cheapest.
• FatalEagle commented on Nov. 1, 2014, 12:17 a.m.
We are dealing with exact real numbers in this problem.
• Miss commented on Nov. 19, 2014, 2:05 p.m.
This comment is hidden due to too much negative feedback. Click here to view it.
• FatalEagle commented on Nov. 19, 2014, 6:47 p.m.
The answer must be to exactly 2 decimal places. The constraints are such that the correct answer will not have any nonzero digits after and including the third decimal place.
• Zhenpai commented on Oct. 7, 2014, 9:41 p.m.
I think there's a problem with your problem, and I have a problem with that.
• quantum commented on Oct. 7, 2014, 10:22 p.m.
There is no problem with this problem.
|
|
# Using models
All recommender objects in the turicreate.recommender module expose a common set of methods, such as recommend and evaluate.
In this section we will cover
#### Making recommendations
Once a model is created, you can now make recommendations of new items for users. To do so, call model.recommend() with an SArray of user ids. If users is set to None, then model.recommend() will make recommendations for all the users seen during model creation, automatically excluding the items that are observed for each user. In other words, if data contains a row "Alice, The Great Gatsby", then model.recommend() will not recommend "The Great Gatsby" for user "Alice". It will return at most k new items for each user, sorted by their rank. It will return fewer than k items if there are not enough items that the user has not already rated or seen.
The score column of the output contains the unnormalized prediction scores for each user-item pair. The semantic meanings of these scores may differ between models. For the linear regression model, for instance, a higher average score for a user means that the model thinks that this user is generally more enthusiastic than others.
There are a number of ways to make recommendations: for known users or new users, with new observation data or side information, and with different ways to explicitly control item inclusion or exclusion. Let's walk through these options together.
##### Making recommendations for all users
By default, calling m.recommend() without any arguments returns the top 10 recommendations for all users seen during model creation. It automatically excludes items that were seen during model creation. Hence all generated recommendations are for items that the user has not already seen.
data = turicreate.SFrame({'user_id': ["Ann", "Ann", "Ann", "Brian", "Brian", "Brian"],
'item_id': ["Item1", "Item2", "Item4", "Item2", "Item3", "Item5"],
'rating': [1, 3, 2, 5, 4, 2]})
m = turicreate.factorization_recommender.create(data, target='rating')
recommendations = m.recommend()
##### Making recommendations for specific users
If you specify a list or SArray of users, recommend() returns recommendations for only those user(s). The user names must correspond to strings in the user_id column in the training data.
recommendations = m.recommend(users=['Brian'])
##### Making recommendations for specific users and items
In situations where you build a model for all of your users and items, you may wish to limit the recommendations for particular users based on known item attributes. For example, for US-based customers you may want to limit recommendations to US-based products. The following code sample restricts recommendations to a subset of users and items -- specifically those users and items whose value in the 'country' column is equal to "United States".
country = 'United States'
m.recommend(users=users['user_id'][users['country']==country].unique(),
items=items['item_id'][items['country']==country])
##### Making recommendations for new users
This is known as the "cold-start" problem. The recommend() function works seamlessly with new users. If the model has never seen the user, then it defaults to recommending popular items:
m.recommend(['Charlie'])
Here 'Charlie' is a new user that does not appear in the training data. Also note that you don't need to explicitly write down users=; Python automatically assumes that arguments are provided in order, so the first unnamed argument to recommend() is taken to be the user list.
##### Incorporating information about a new user
To improve recommendations for new users, it helps to have side information or new observation data for the user.
##### Incorporating new side information
To incorporate side information, you must have already created a recommender model that knows how to incorporate side features. This can be done by passing in side information to create(). For example:
user_info = turicreate.SFrame({'user_id': ['Ann', 'Brian'],
'age_category': ['2', '3']})
m_side_info = turicreate.factorization_recommender.create(data, target='rating',
user_data=user_info)
Now, we can add side information for the new user at recommendation time. The new side information must contain a column with the same name as the column in the training data that's designated as the 'user_id'. (For more details, please see the API documentation for turicreate.recommender.create.)
new_user_info = turicreate.SFrame({'user_id' : ['Charlie'],
'age_category' : ['2']})
recommendations = m_side_info.recommend(['Charlie'],
new_user_data = new_user_info)
Given Charlie's age category, the model can incorporate what it knows about the importance of age categories for item recommendations. Currently, the following models can take side information into account when making recommendations: LinearRegressionModel, FactorizationRecommender. LinearRegressionModel is the simpler model, and FactorizationRecommender the more powerful. For more details on how each model makes use of side information, please refer to the model definition sections in the individual models' API documentation.
##### Incorporating new observation data
recommend() accepts new observation data. Currently, the ItemSimilarityModel makes the best use of this information.
m_item_sim = turicreate.item_similarity_recommender.create(data)
new_obs_data = turicreate.SFrame({'user_id' : ['Charlie', 'Charlie'],
'item_id' : ['Item1', 'Item5']})
recommendations = m_item_sim.recommend(['Charlie'], new_observation_data = new_obs_data)
##### Controlling the number of recommendations
The input parameter k controls how many items to recommend for each user.
recommendations = m.recommend(k = 5)
##### Excluding specific items from recommendation
Suppose you make some recommendations to the user and they ignored them. So now you want other recommendations. This can be done by explicitly excluding those undesirable items via the exclude keyword argument.
exclude_pairs = turicreate.SFrame({'user_id' : ['Ann'],
'item_id' : ['Item3']})
recommendations = m.recommend(['Ann'], k = 5, exclude = exclude_pairs)
By default, recommend() excludes items seen during training, so that it would not recommend items that the user has already seen. To change this behavior, you can specify exclude_known=False.
recommendations = m.recommend(exclude_known = False)
##### Including specific items in recommendation
Suppose you want to see only recommendations within a subset of items. This can be done via the items keyword argument. The input must be an SArray of items.
item_subset = turicreate.SArray(["Item3", "Item5", "Item2"])
recommendations = m.recommend(['Ann'], items = item_subset)
#### Finding Similar Items
Many of the above models make recommendations based on some notion of similarity between a pair of items. Querying for similar items can help you understand the model's behavior on your data.
We have made this process very easy with the get_similar_items function:
similar_items = model.get_similar_items(my_list_of_items, k=20)
The above will return an SFrame containing the 20 nearest items for every item in my_list_of_items. The definition of "nearest" depends on the type of similarity used by the model. For instance, "jaccard" similarity measures the two item's overlapping users. The 'score' column contains a similarity score ranging between 0 and 1, where larger values indicate increasing similarity. The mathematical formula used for each type of similarity can be found in the API documentation for ItemSimilarityRecommender.
For a factorization-based model, the similarity used for is the Euclidean distance between the items' two factors, which can be obtained using m['coefficients'].
model.save("recommendations.model")
model = tc.load_model("recommendations.model")
|
|
Number of loaded cables in the backfill area $N_b$ is calculated individually for every cable or duct as the total losses in the backfill area divided by the losses in the specific cable or duct with multi-core cables and cables in a common duct treated as a single loaded cable. Hence, $N_b$ is not necessarily an integer number.
Symbol
$N_{\mathrm{b}}$
Unit
-
Used in
$T_{\mathrm{4db}}$
|
|
LLVM 17.0.0git
llvm::MachineJumpTableInfo Class Reference
#include "llvm/CodeGen/MachineJumpTableInfo.h"
## Public Types
enum JTEntryKind {
EK_Inline , EK_Custom32
}
JTEntryKind - This enum indicates how each entry of the jump table is represented and emitted. More...
## Public Member Functions
MachineJumpTableInfo (JTEntryKind Kind)
JTEntryKind getEntryKind () const
unsigned getEntrySize (const DataLayout &TD) const
getEntrySize - Return the size of each entry in the jump table.
unsigned getEntryAlignment (const DataLayout &TD) const
getEntryAlignment - Return the alignment of each entry in the jump table.
unsigned createJumpTableIndex (const std::vector< MachineBasicBlock * > &DestBBs)
createJumpTableIndex - Create a new jump table.
bool isEmpty () const
isEmpty - Return true if there are no jump tables.
const std::vector< MachineJumpTableEntry > & getJumpTables () const
void RemoveJumpTable (unsigned Idx)
RemoveJumpTable - Mark the specific index as being dead.
bool RemoveMBBFromJumpTables (MachineBasicBlock *MBB)
RemoveMBBFromJumpTables - If MBB is present in any jump tables, remove it.
bool ReplaceMBBInJumpTables (MachineBasicBlock *Old, MachineBasicBlock *New)
ReplaceMBBInJumpTables - If Old is the target of any jump tables, update the jump tables to branch to New instead.
bool ReplaceMBBInJumpTable (unsigned Idx, MachineBasicBlock *Old, MachineBasicBlock *New)
ReplaceMBBInJumpTable - If Old is a target of the jump tables, update the jump table to branch to New instead.
void print (raw_ostream &OS) const
print - Used by the MachineFunction printer to print information about jump tables.
void dump () const
dump - Call to stderr.
## Detailed Description
Definition at line 42 of file MachineJumpTableInfo.h.
## ◆ JTEntryKind
JTEntryKind - This enum indicates how each entry of the jump table is represented and emitted.
Enumerator
EK_BlockAddress - Each entry is a plain address of block, e.g.: .word LBB123.
EK_GPRel64BlockAddress - Each entry is an address of block, encoded with a relocation as gp-relative, e.g.: .gpdword LBB123.
EK_GPRel32BlockAddress - Each entry is an address of block, encoded with a relocation as gp-relative, e.g.: .gprel32 LBB123.
EK_LabelDifference32
EK_LabelDifference32 - Each entry is the address of the block minus the address of the jump table.
This is used for PIC jump tables where gprel32 is not supported. e.g.: .word LBB123 - LJTI1_2 If the .set directive is supported, this is emitted as: .set L4_5_set_123, LBB123 - LJTI1_2 .word L4_5_set_123
EK_Inline
EK_Inline - Jump table entries are emitted inline at their point of use.
It is the responsibility of the target to emit the entries.
EK_Custom32
EK_Custom32 - Each entry is a 32-bit value that is custom lowered by the TargetLowering::LowerCustomJumpTableEntry hook.
Definition at line 46 of file MachineJumpTableInfo.h.
## ◆ MachineJumpTableInfo()
llvm::MachineJumpTableInfo::MachineJumpTableInfo ( JTEntryKind Kind )
inlineexplicit
Definition at line 82 of file MachineJumpTableInfo.h.
## ◆ createJumpTableIndex()
unsigned MachineJumpTableInfo::createJumpTableIndex ( const std::vector< MachineBasicBlock * > & DestBBs )
createJumpTableIndex - Create a new jump table.
Create a new jump table entry in the jump table info.
Definition at line 1273 of file MachineFunction.cpp.
References assert().
## ◆ dump()
LLVM_DUMP_METHOD void MachineJumpTableInfo::dump ( ) const
dump - Call to stderr.
Definition at line 1335 of file MachineFunction.cpp.
References llvm::dbgs(), and print().
## ◆ getEntryAlignment()
unsigned MachineJumpTableInfo::getEntryAlignment ( const DataLayout & TD ) const
getEntryAlignment - Return the alignment of each entry in the jump table.
Return the alignment of each entry in the jump table.
Definition at line 1253 of file MachineFunction.cpp.
Referenced by llvm::AsmPrinter::emitJumpTableInfo().
## ◆ getEntryKind()
JTEntryKind llvm::MachineJumpTableInfo::getEntryKind ( ) const
inline
Definition at line 84 of file MachineJumpTableInfo.h.
## ◆ getEntrySize()
unsigned MachineJumpTableInfo::getEntrySize ( const DataLayout & TD ) const
getEntrySize - Return the size of each entry in the jump table.
Return the size of each entry in the jump table.
Definition at line 1234 of file MachineFunction.cpp.
## ◆ getJumpTables()
const std::vector< MachineJumpTableEntry > & llvm::MachineJumpTableInfo::getJumpTables ( ) const
inline
## ◆ isEmpty()
bool llvm::MachineJumpTableInfo::isEmpty ( ) const
inline
isEmpty - Return true if there are no jump tables.
Definition at line 97 of file MachineJumpTableInfo.h.
## ◆ print()
void MachineJumpTableInfo::print ( raw_ostream & OS ) const
print - Used by the MachineFunction printer to print information about jump tables.
Implemented in MachineFunction.cpp
Definition at line 1318 of file MachineFunction.cpp.
References MBB, llvm::printJumpTableEntryReference(), and llvm::printMBBReference().
Referenced by dump(), and llvm::MachineFunction::print().
## ◆ RemoveJumpTable()
void llvm::MachineJumpTableInfo::RemoveJumpTable ( unsigned Idx )
inline
RemoveJumpTable - Mark the specific index as being dead.
This will prevent it from being emitted.
Definition at line 105 of file MachineJumpTableInfo.h.
References Idx.
Referenced by llvm::BranchFolder::OptimizeFunction().
## ◆ RemoveMBBFromJumpTables()
bool MachineJumpTableInfo::RemoveMBBFromJumpTables ( MachineBasicBlock * MBB )
RemoveMBBFromJumpTables - If MBB is present in any jump tables, remove it.
If MBB is present in any jump tables, remove it.
Definition at line 1292 of file MachineFunction.cpp.
References MBB.
Referenced by llvm::MachineFunction::deleteMachineBasicBlock().
## ◆ ReplaceMBBInJumpTable()
bool MachineJumpTableInfo::ReplaceMBBInJumpTable ( unsigned Idx, MachineBasicBlock * Old, MachineBasicBlock * New )
ReplaceMBBInJumpTable - If Old is a target of the jump tables, update the jump table to branch to New instead.
If Old is a target of the jump tables, update the jump table to branch to New instead.
Definition at line 1304 of file MachineFunction.cpp.
References assert(), Idx, MBB, and llvm::MachineJumpTableEntry::MBBs.
Referenced by ReplaceMBBInJumpTables().
## ◆ ReplaceMBBInJumpTables()
bool MachineJumpTableInfo::ReplaceMBBInJumpTables ( MachineBasicBlock * Old, MachineBasicBlock * New )
ReplaceMBBInJumpTables - If Old is the target of any jump tables, update the jump tables to branch to New instead.
If Old is the target of any jump tables, update the jump tables to branch to New instead.
Definition at line 1282 of file MachineFunction.cpp.
References assert(), and ReplaceMBBInJumpTable().
The documentation for this class was generated from the following files:
|
|
Opened 9 years ago
Closed 9 years ago
display hours in the milestone view
Reported by: Owned by: Jeff Hammel Jeff Hammel normal TracHoursPlugin normal julien.perville@…, fabien.catteau@… 0.11
Description
Hello!
On the roadmap (/roadmap) screen there is a line for each milestone with the total estimated and total worked hours. It would be nice if we could display the same information in the milestone view (/milestone/\$milestone_name). This could be done by refactoring the existing MilestoneMarkup class in source:trachoursplugin/0.11/trachours/hours.py to be available outside of the filter_roadmap function.
Sincerely, Julien Pervillé
comment:1 Changed 9 years ago by Jeff Hammel
CCing original author, julien.perville@...
comment:2 Changed 9 years ago by Jeff Hammel
Resolution: → fixed new → closed
this should be done; i forgot about this ticket and reticketed at #4419 which i closed
Modify Ticket
Change Properties
|
|
# Math Help - Need to demonstrate that the slope is strictly positive
1. ## Need to demonstrate that the slope is strictly positive
If I have a multivariable equation whose slope is always positive, say $f(x,y,z)=x^2+y^2+z^2$ how do I demonstrate that the slope is always positive?
I imagine this involves partial derivatives but need some guidance.
Thanks
2. Originally Posted by rainer
If I have a multivariable equation whose slope is always positive, say $f(x,y,z)=x^2+y^2+z^2$ how do I demonstrate that the slope is always positive?
I imagine this involves partial derivatives but need some guidance.
Thanks
How do you define the slope of a multivariable function??
Tonio
3. Oh yeah, good point.
Let me give a few more parameters.
First, I reduce the equation to 3 variables:
$f(x,y)=x^2+y^2$
So it's a 3D graph. I am interested in the slope on the 2D x-y plane or "cross-section" of the origin.
Does that narrow it down enough?
4. I don't understand your definition. The definiton I am familiar with says a function $f:\mathbb{R}^n\to \mathbb{R}$ has positive slope iff for every injective curve $c=(c^1,\cdots,c^n)a,b)\to \mathbb{R}^n" alt="c=(c^1,\cdots,c^n)a,b)\to \mathbb{R}^n" /> whose coordinate functions $c^i$ are increasing, the derivative of $f \circ c:\mathbb{R}\to \mathbb{R}$ is positive. Is this what you want?
I don't understand your definition. The definiton I am familiar with says a function $f:\mathbb{R}^n\to \mathbb{R}$ has positive slope iff for every injective curve $c=(c^1,\cdots,c^n)a,b)\to \mathbb{R}^n" alt="c=(c^1,\cdots,c^n)a,b)\to \mathbb{R}^n" /> whose coordinate functions $c^i$ are increasing, the derivative of $f \circ c:\mathbb{R}\to \mathbb{R}$ is positive. Is this what you want?
Hmmm, this definition is really cool-looking. But not understanding the half of it I will have to say I don't know if it's what I need or not.
It looks like I need to ruminate and clarify whatever it is I am trying to ask. So let's leave off here and maybe I'll post a clarified version of my question in a new thread.
Thanks a lot
6. The difficulty appears to be that you do not know what you mean by "slope" of a multivariable function. In order to be able to talk about slope being positive, you must mean it to be a number, but what number?
|
|
# will this methodology end up giving me a nonsense regression equation.
I'm wondering if this is a valid methodology to find the best regression equation for a given data set.
User provides a rang of estimated value for some set of variables. Th algorithm uses the given range to randomly generate a large number of regression equations like the one bellow.
Price = actualValue1* estimatedValue1 + actualValue2* estimatedValue2;
We solve them all for price and pick the best one as regression equation. Will this methodology give us a valid regression equation or will it just find some random set of estimated values That happens to give a price that is close to the actual price?
PS. I've taken stats but I'm not a statistician So I'm in a little over my head here. If I need to clarify something please let me know.
• Can you give a precise mathematical description of your question, specifying exactly what is being done? The trouble with plain English text descriptions is that they leave quite a bit of room for ambiguity and guesswork, making questions harder to answer. – Kirill Sep 15 '17 at 14:12
|
|
Review
# Ethnicity and Conflict: Theory and Facts
See allHide authors and affiliations
Science 18 May 2012:
Vol. 336, Issue 6083, pp. 858-865
DOI: 10.1126/science.1222240
## Abstract
Over the second half of the 20th century, conflicts within national boundaries became increasingly dominant. One-third of all countries experienced civil conflict. Many (if not most) such conflicts involved violence along ethnic lines. On the basis of recent theoretical and empirical research, we provide evidence that preexisting ethnic divisions do influence social conflict. Our analysis also points to particular channels of influence. Specifically, we show that two different measures of ethnic division—polarization and fractionalization—jointly influence conflict, the former more so when the winners enjoy a “public” prize (such as political power or religious hegemony), the latter more so when the prize is “private” (such as looted resources, government subsidies, or infrastructures). The available data appear to strongly support existing theories of intergroup conflict. Our argument also provides indirect evidence that ethnic conflicts are likely to be instrumental, rather than driven by primordial hatreds.
There are two remarkable facts about social conflict that deserve notice. First, within-country conflicts account for an enormous share of deaths and hardship in the world today. Figure 1 depicts global trends in inter- and intrastate conflict. Since the Second World War, there have been 22 interstate conflicts with more than 25 battle-related deaths per year, and 9 of them have killed at least 1000 over the entire history of conflict (1). The total number of attendant battle deaths in these conflicts is estimated to be around 3 to 8 million (2). The same period witnessed 240 civil conflicts with more than 25 battle-related deaths per year, and almost half of them killed more than 1000 (1). Estimates of the total number of battle deaths are in the range of 5 to 10 million (2). Added to the direct count of battle deaths are the 25 million noncombatant civilian (3) and indirect deaths due to disease and malnutrition, which have been estimated to be at least four times as high as violent deaths (4), as well as the forced displacements of more than 40 million individuals by 2010 (5). In 2010 there were 30 ongoing civil conflicts (6).
Second, internal conflicts often appear to be ethnic in nature. More than half of the civil conflicts recorded since the end of the Second World War have been classified as ethnic or religious (3, 7). One criterion for a conflict to be classified as ethnic is that it involves a rebellion against the state on behalf of some ethnic group (8). Such conflicts involved 14% of the 709 ethnic groups categorized worldwide (9). Brubaker and. Laitin, examining the history of internal conflicts in the second half of the 20th century, are led to remark on “the eclipse of the left-right ideological axis” and the “marked ethnicization of violent challenger-incumbent contests” (10). Horowitz, author of a monumental treatise on the subject of ethnic conflict, observes that “[t]he Marxian concept of class as an inherited and determinative affiliation finds no support in [the] data. Marx’s conception applies with far less distortion to ethnic groups.… In much of Asia and Africa, it is only modest hyperbole to assert that the Marxian prophecy has had an ethnic fulfillment” (11).
The frightening ubiquity of within-country conflicts, as well as their widespread ethnic nature, provokes several questions. Do “ethnic divisions” predict conflict within countries? How do we conceptualize those divisions? If it is indeed true that ethnic cleavages and conflicts are related, how do we interpret such a result? Do “primordial,” ancestral ethnic hatreds trump “more rational” forms of antagonism, such as the instrumental use of ethnicity to achieve political power or economic gain? To discuss and possibly answer some of these questions is the goal of this review.
## Class and Ethnicity as Drivers of Conflict
The study of human conflict is (and has been) a central topic in political science and sociology. Economics—with relatively few and largely recent exceptions—has paid little attention to the issue. [For three recent overviews, see (1214).] Perhaps textbook economics, with its traditional respect for property rights, often presumes that the economic agents it analyzes share that respect and do not violently challenge allocations perceived to be unfair. Yet one of the notable exceptions in economics—Marx—directly or indirectly dominates the analytical landscape on conflict in the rest of the social sciences. Class struggle, or more generally, economic inequality, has been viewed as the main driver of social conflict in industrial or semi-industrial society (15). In Sen’s words, “the relationship between inequality and rebellion is indeed a close one” (16).
Yet, intuitive as it might seem, this relationship doesn’t receive emphatic empirical endorsement. In a detailed survey paper on the many attempts to link income inequality and social conflict empirically, Lichbach mentions 43 papers on the subject, some “best forgotten” (17). The evidence is thoroughly mixed, concludes Lichbach, as he cites a variety of studies to support each possible relationship between the two, and others that show no relationship at all. Midlarsky remarks on the “fairly typical finding of a weak, barely significant relationship between inequality and political violence … rarely is there a robust relationship between the two variables” (18).
The emphasis on economic inequality as a causal correlate of conflict seems natural, and there is little doubt that carefully implemented theory will teach us how to better read the data (see below). Yet it is worth speculating on why there is no clear-cut correlation. Certainly, economic demarcation across classes is a two-edged sword: While it breeds resentment, the very poverty of the have-nots separates them from the means for a successful insurrection. In addition, redistribution across classes is invariably an indirect and complex process.
The use of noneconomic “markers” such as ethnicity or religion addresses both these issues. Individuals on either side of the ethnic divide will be economically similar, so that the gains from such conflict are immediate: The losing group can be excluded from the sector in which it directly competes with the winners [e.g., (11, 19, 20)]. In addition, each group will have both poor and rich members, with the former supplying conflict labor and the latter supplying conflict finances (21). This suggests an interesting interaction between inequality and ethnicity, by which ethnic groups with a higher degree of within-group inequality will be more effective in conflict (22). Moreover, it has been suggested that “horizontal” inequality (i.e., inequality across ethnic groups) is an important correlate of conflict (2326).
There are two broad views on the ethnicity-conflict nexus [e.g., (10, 27)]. The “primordialist” view (28, 29) takes the position that ethnic differences are ancestral, deep, and irreconcilable and therefore invariably salient. In contrast, the “instrumental” approach pioneered by (19) and discussed in (10) sees ethnicity as a strategic basis for coalitions that seek a larger share of economic or political power. Under this view, ethnicity is a device for restricting the spoils to a smaller set of individuals. Certainly, the two views interact. Exclusion is easier if ethnic groups are geographically concentrated (30, 31). Strategic ethnic conflict could be exacerbated by hatreds and resentments—perhaps ancestral, perhaps owing to a recent clash of interests—that are attached to the markers themselves. Finally, under both these views, in ethnically divided societies democratic agreements are hard to reach and once reached, fragile (32); the government will supply fewer goods and services and redistribute less (33, 34); and society will face recurrent violent conflict (11).
Either approach raises the fundamental question of whether there is an empirical, potentially predictive connection between ethnic divisions and conflict. To address that question, we must first define what an “ethnic division” is. Various measures of ethnic division or dominance (3537) have been proposed. The best-known off-the-shelf measure of ethnic division is the fractionalization index, first introduced in the 1964 edition of the Soviet Atlas Narodov Mira, to measure ethnolinguistic fragmentation. It equals the probability that two individuals drawn at random from the society will belong to two different groups (see Box 1 for a precise definition). Ethnic fractionalization has indeed been usefully connected to per capita gross domestic product (GDP) (38), economic growth (39), or governance (40). But (7, 35, 41, 42) do not succeed in finding a connection between ethnic or religious fractionalization and conflict, though it has been suggested that fractionalization appears to work better for smaller-scale conflicts, such as ethnic riots (43). By contrast, variables such as low GDP per capita, natural resources, environmental conditions favoring insurgency, or weak government are often statistically significant correlates of conflict (12, 44). Fearon and Laitin conclude that the observed “pattern is thus inconsistent with … the common expectation that ethnic diversity is a major and direct cause of civil violence” (7).
### Box 1
A model of conflict and distribution.
The two measures of ethnic divisions discussed in this article are both based on the same underlying parameters: the number of groups m and total population N, the population Ni of each group, and the intergroup distances dij. Polarization and fractionalization are given by
where ni = Ni/N is the population share of group i. The distinction between P and F is superficial at first sight but is of great conceptual importance. The squaring of population shares in P means that group size matters over and above the mere counting of individual heads implicit in F. In addition, fractionalization F discards intergroup distances and replaces them with 0 or 1 variable.
The theory developed in (48) and summarized below links these measures to conflict incidence. There are m groups engaged in conflict. The winner enjoys two sorts of prizes: One is “oprivate” and the other is “opublic.” Let μ be the per capita value of the private prize at stake. Let uij be the utility to an individual member of group i from the policy implemented by group j. For any i the utility from the ideal policy is strictly higher than any other policy; that is, uii > uij. Then, the “odistance” between i and j is dijuiiuij, so that the loss to i from j“(tm)s ideal policy is dij. Let π be the amount of money an individual is willing to give up in order to bring the implemented policy one unit toward her ideal policy. Then, we can say that the monetary value to a member of group i of policy j is πuij and the loss relative to the ideal policy is πdij. Individuals in each group expend resources r to influence the probability of success of their own group. Write the income equivalent cost to such expenditure as c(r) and assume that c is increasing, smooth, and strictly convex, with c'(0) = 0. Add individual contributions in group i to obtain group contribution Ri. Assume that the probability of success for group i is given by pi = Ri/RN, where RN ≡ ∑iRi. Measure conflict intensity in population-normalized form by π = RN/N.
The direct payoff to a person in group i who expends resources r is given by . Individuals also care about the payoff to the other group members. When deciding on how much r to contribute, individuals seek to maximize the sum of their direct payoff and the total of the other group members, weighted by a group commitment factor α. Note that the optimal contribution ri by a member of group i depends on the contributions made by all other individuals. We focus on the Nash equilibrium of this strategic game: the vector of actions with the property that all are the best response to each other. We prove that such an equilibrium always exists and that it is unique.
Note now that c'(r) is the implicit “oprice” in sacrificed income that an individual is willing to pay for an extra unit of effort contributed to conflict. We then define the per capita normalized intensity of conflict C as the value of the resources expended, C = c'(π)π. Hence, [C/(π + μ)] is the ratio of the resources wasted in conflict relative to the stakes, all expressed in monetary terms. Proposition 2 in (48) shows that the equilibrium intensity of conflict C is approximately determined as follows:
where is the relative publicness of the prize, and where G is a third measure of ethnic distribution, the Greenberg-Gini index: . Its influence wanes with population size, and we“(tm)ve ignored it in this essay, though (48, 51) contain a detailed discussion of all three measures.
For large populations, the expression above reduces to the one in the main text.
But the notion of “ethnic division” is complex and not so easily reduced to a measure of diversity. The discussion that follows will introduce a different measure—polarization—that better captures intergroup antagonism. As we shall see, polarization will be closely connected to the incidence of conflict; moreover, with a measure of polarization in place and controlled for, fractionalization, too, will matter for conflict.
## Fractionalization and Polarization
As already discussed, the index of fractionalization is commonly used to describe the ethnic structure of a society (see Box 1). This index essentially reflects the degree of ethnic diversity. When groups are of equal size, the index increases with the number of groups. It reaches a maximum when everyone belongs to a different group.
When one is interested in social conflict, this measure does not seem appropriate on at least two counts. First, as social diversity increases beyond a point, intuition suggests that the likelihood of conflict would decrease rather than increase. After all, group size matters. The fact that “many are in this together” provides a sense of group identity in times of conflict. Moreover, groups need a minimum size to be credible aggressors or opponents. Second, not all groups are symmetrically positioned with respect to other groups, though the measure implicitly assumes they are. A Pushtun saying is illustrative: “Me against my brothers, me and my brothers against my cousins, me and my cousins against the world.” The fractionalization measure can be interpreted as saying that every pair of groups is “equally different.” Often, they are not.
Consider now the notion of polarization as introduced in (4547). Polarization is designed to measure social “antagonism,” which is assumed to be fueled by two factors: the “alienation” felt between members of different groups and the sense of “identification” with one’s own group. This index is defined as the aggregation of all interpersonal antagonisms. Its key ingredients are intergroup distances (how alien groups are from each other) and group size (an indicator of the level of the group identification). Using an axiomatic approach (45, 48), we obtain the specific form used in this article; see Box 1 for the precise formula.
In any society with three or more ethnic groups, the polarization measure behaves very differently from fractionalization. Unlike fractionalization, polarization declines with the continued splintering of groups and is globally maximized for a bimodal distribution of population. This is shown in Fig. 2, where groups are always of equal size and intergroup distances are equal to 1. Rather than being two different (but broadly related) ways of measuring the same thing, the two measures emphasize different aspects of a fundamentally multidimensional phenomenon. As we shall see, the differences have both conceptual and empirical bite. For instance, Montalvo and Reynal-Querol (49), using a simplified version of the index of polarization, show that ethnic polarization is a significant correlate of civil conflict, whereas fractionalization is not. Their contribution provides the first piece of serious econometric support for the proposition that “ethnic divisions” might affect conflict.
Despite their divergent performance in empirical work, the two measures are linked. Indeed, they are even identical if (i) group identity does not play a role and (ii) individuals feel equally alienated from members of all other groups. Which index is best to use is therefore determined by the nature of the problem at hand: on whether the sense of identity, of intergroup differentiation, or both are relevant. Group identification matters when we face problems of public import, in which the payoffs to the entire community jointly matter. Intergroup differentiation is relevant whenever the specific cultural characteristics of the other groups affect the policies that they choose, and therefore create implications for any one group. In contrast, if social groups compete for narrow economic gains that accrue to the winners and are excludable from the losers, no opponent’s victory means more or less than any other. In the theory that we outline below, these are precisely the factors that receive greatest emphasis.
## Marrying Theory and Facts
A systematic econometric exploration of the links between ethnic divisions and conflict will generally take the form of a multivariate regression. The “dependent variable” we seek to explain is some measure of conflict. On the other side of the regression is our main “independent variable,” which is a particular measure of “ethnic divisions,” as well as a host of “control variables” that are included to capture other influences on conflict that we seek to filter out. This much is evident. The problem is (and this is true of empirical research more generally) that little discipline is often imposed on the specification of that regression. Much of that research involves the kitchen-sink approach of including all variables—usually linearly—that could possibly play a role in ethnic conflict. Such an approach is problematic on at least three counts. First, the number of plausible variables is unbounded, not just in principle but apparently also in practice: 85 different variables have been used in the literature (50). Trying them out in various hopeful combinations smacks uncomfortably of data-mining. Second, even if we could narrow down the set of contenders, there are many ways to specify the empirical equation that links those variables to conflict. Finally, the absence of a theory hinders the interpretation of the results.
From a statistical perspective, fractionalization and polarization are just two, seemingly equally reasonable, ways of measuring ethnic divisions. Yet they yield very different results in connecting ethnicity to conflict. Do we accept this inconsistency as yet another illustration of “measurement error”? Or is there something deeply conceptual buried here?
The results we are going to present are obtained from an explicit game-theoretic model of conflict. We then bring the predicted equilibrium of this model to data. This allows us both to test the theory and to suitably interpret the results. Perhaps the most important contribution of the theory is that it permits both polarization and fractionalization as joint drivers of conflict and explains precisely when one measure acquires more explanatory salience than the other.
We begin by presenting the recent analysis that links polarization and fractionalization to equilibrium conflict (48). We then describe some of the empirical findings obtained in (51) when confronting the predictions of the model with data.
## Polarization, Fractionalization, and Conflict: Theory
A situation of open civil conflict arises when an existing social, political, or economic arrangement is challenged by an ethnic group. Whether the ethnic marker is focal for instrumental or primordial reasons is an issue that we’ve remarked on earlier, but at this stage it is irrelevant for our purpose. [For more on ethnic salience, see (5254).] In such a situation, the groups involved will undertake costly actions (demonstrations, provocations, bombs, guerrilla or open warfare) to increase their probability of success. We view the aggregate of all such actions as the extent of conflict.
More precisely, suppose that there are m groups engaged in conflict. Think of two types of stakes or prizes in case of victory. One kind of prize is “public,” the individual payoff from which is undiluted by one’s own group size. For instance, the winning group might impose its preferred norms or culture: a religious state, the abolition of certain rights or privileges, the repression of a language, the banning of political parties, and so on. Or it might enjoy political power or the satisfaction of seeing one’s own group vindicated or previous defeats avenged. Let uij be the payoff experienced by an individual member of group i in the case in which group j wins and imposes its preferred policy; we presume that uii > uij, which is true almost by definition. This induces a notion of “distance” across groups i and j: dijuiiuij, which can be interpreted as the loss to i of living under the policy implemented by j. Note that a member of group i might prefer j rather than k to be in power, and that will happen precisely when dij < dik.
The money-equivalent value of the public payoffs—call it π—tells us how much money individuals are ready to give up to bring the implemented policy “one unit” closer to one’s own ideal policy. Its value depends in part on the extent to which the group in power can impose policies or values on the rest of society. Thus, a member of group i assigns a money value of uijπ to the ideal policy of group j.
The other type of prize is “private.” Examples include the material benefits obtained from administrative or political positions, specific tax breaks, directed subsidies, bias in the allocation of public expenditure and infrastructures, access to rents from natural resources, or just plain loot. Private payoffs have two essential properties. First, group size dilutes individual benefits: The larger the group, the smaller is the return from a private prize for any one group member. Second, the identity of the winner is irrelevant to the loser since, in contrast to the “public” case, the loser is not going to extract any payoff from that fact. (If there are differential degrees of resentment over the identity of the winner, simply include this component under the public prize.) Let μ be the per capita money value of the private prize at stake.
Individuals in each group expend costly resources (time, effort, risk) to influence the probability of success. Conflict is defined to be the sum of all these resources over all individuals and all groups. The winners share the private prize and get to implement their favorite policies (the public prize). The losers have to live with the policies chosen by the winners. A conflict equilibrium describes the resulting outcome. (“Conflict equilibrium” perhaps abuses semantics to an unacceptable degree, our excuse being that we observe the game-theoretic tradition of describing the noncooperative solution to a game as a Nash “equilibrium.”) It is a vector of individual actions such that each agent’s behavior maximizes expected payoffs in the conflict, given the choices made by all other individuals. Note that by the word “payoff” we don’t mean only some narrow monetary amount, but also noneconomic returns, such as political power or religious hegemony.
But what does the maximization of payoffs entail? Individuals are individuals, but they also have a group identity. To some extent an individual will act selfishly, and to some extent he or she will act in the interest of the ethnic group. The weight placed on the group versus the individual will depend on several factors (some idiosyncratic to the individual), but a large component will depend on the degree of group-based cohesion in the society; we return to this below. Formally, we presume that an individual places a weight of α on the total payoff of his or her group, in addition to their own payoff.
Let us measure the intensity of conflict—call it C—by the money value of the average, per capita level of resources expended in conflict. In (48) we argue that in equilibrium, the eventual across-group variation in the per capita resources expended has a minor effect on the aggregate level of conflict. Thus, in practice the population-normalized intensity of conflict C can be approximated well by ignoring this variation, and this simplification yields the approximate formula (1)for large populations, where λ ≡ π/(π + μ) is the relative publicness of the prize, F is the fractionalization index, and P is a particular member of the family of polarization measures described earlier, constructed using intergroup distances dij derived from “public” payoff losses. (Box 1 describes these measures more formally and also provides a more general version of Eq. 1.) Thus, the theory tells us precisely which notions of ethnic division need to be considered. Moreover, the relationship has a particular form, which informs the empirical analysis.
This result highlights the essential role of theory for meaningful empirical work. The exogenous data of the model—individual preferences, group size, the nature and the size of the prize, and the level of group cohesion—all interact in a special way to determine equilibrium conflict intensity. The theory shows, first, that it suffices to aggregate all the information on preferences and group sizes into just two indices—F and P—capturing different aspects of the ethnic composition of a country. Second, the weights on the two distributional measures depend on the composition of the prize and on the level of group commitment. In particular, the publicness of the prize (reflected in a high value of λ) reinforces the effect of polarization, whereas high privateness of the prize (low λ) reinforces the effect of fractionalization. Not surprisingly, high group cohesion α enhances the effect of both measures on conflict.
The publicness of the prize is naturally connected to both identification and alienation—and therefore to polarization. With public payoffs, group size counts twice: once, because the payoffs accrue to a larger number, and again, because a larger number of individuals internalize that accrual and therefore contribute more to the conflict. Intergroup distances matter, too: The precise policies interpreted by the eventual winner continue to be a cause of concern for the loser. Both these features—the “double emphasis” on group size and the use of distances—are captured by the polarization measure P; see Box 1 for more details. By contrast, when groups fight for a private payoff—say money—one winner is as bad as another as long as my group doesn’t win, and measures based on differences in intergroup alienation become useless. Moreover, with private payoffs, group identification counts for less than it does with public payoffs, as group size erodes the per capita gain from the prize. The resulting index that is connected to this scenario is one of fractionalization (see Box 1).
In short, the theory tells us to obtain data on P and F and combine them in a particular way. It tells us that when available, we should attempt to obtain society-level data for group cohesion α and relative publicness λ and enter them in the way prescribed by Eq. 1. With this in mind, we now bring the theory to the data.
## Taking the Theory to Data
We study 138 countries over 1960 to 2008, with the time period divided into 5-year intervals. That yields a total of 1125 observations (in most cases). Some of the variables in the theory are not directly observable, and so we will use proxies. For a complete set of results, see (51) and the accompanying Web Appendix.
We measure conflict intensity in two ways. The first is the death toll. Using data from the jointly maintained database under the Uppsala Conflict Data Program and the Peace Research Institute of Oslo (UCDP/PRIO) (1), we construct a discrete measure of conflict—PRIO-C—for every 5-year period and every country as follows: PRIO-C is equal to 0 if the country is at peace in those 5 years; to 1 if it has experienced low-intensity conflict (more than 25 battle-related deaths but less than 1000) in any of these years; or to 2 if the country has been in high-level conflict (more than 1000 casualties) in any of the 5 years. Despite the overall popularity of UCDP/PRIO, this is an admittedly coarse measure of deaths, based on only three categories (peace, low conflict, and high conflict) defined according to ad hoc thresholds, and it reports conflicts only when one of the involved parties is the state. To overcome these two problems, we use a second measure of intensity: the Index of Social Conflict (ISC) computed by the Cross-National Time-Series Data Archive (55). It provides a continuous measure of several manifestations of social unrest, with no threshold dividing “peace” from “war.” The index ISC is formed by taking a weighted average over eight different manifestations of internal conflict, such as politically motivated assassinations, riots, guerrilla warfare, etc.
Our core independent variables are the indices F and P. To compute these indices, we need the population size of different ethnic groups for every country and a proxy for intergroup distances. For demographic information on groups, we use the data set provided by (9), which identifies over 800 “ethnic and ethno-religious” groups in 160 countries. For intergroup distances, we follow (9, 56, 57) and use the linguistic distance between two groups as a proxy for group “cultural” distances in the space of public policy.
Linguistic distance is defined on a universal language tree that captures the genealogy of all languages (58). All Indo-European languages, for instance, will belong to a common subtree. Subsequent splits create further “sub-subtrees,” down to the current language map. For instance, Spanish and Basque diverge at the first branch, since they come from structurally unrelated language families. By contrast, the Spanish and Catalan branches share their first seven nodes: Indo-European, Italic, Romance, Italo-Western, Western, Gallo-Iberian, and Ibero-Romance languages. We measure the distance between two languages as a function of the number of steps we must retrace to find a common node. The results are robust to alternative ways of mapping linguistic differences into distances.
Linguistic divisions arise because of population splits. Languages with very different origins reveal a history of separation of populations going back several thousand years. For instance, the separation between Indo-European languages and all others occurred around 9000 years ago (59). In contrast, finer divisions, such as those between Spanish and Catalan, tend to be the result of more recent splits, implying a longer history of common evolution. Consistent with this view, there is evidence showing a link between the major language families and the main human genetic clusters (60, 61).
The implicit theory behind our formulation is that linguistic distance is associated with cultural distance, stemming from the chronological relation of language trees to group splittings and, therefore, to independent cultural (and even genetic) evolution. That argument, while obviously not self-evident, reflects a common trade-off. The disadvantage is obvious: Linguistic distances are at best an imperfect proxy for the unobserved “true distances.” But something closer to the unobserved truth—say, answers to survey questions about the degree of intergroup antagonism, or perhaps a history of conflict—have the profound drawback of being themselves affected by the very outcomes they seek to explain, or being commonly driven (along with the outcome of interest) by some other omitted variable. That is, such variables are endogenous to the problem at hand. The great advantage of linguistic distances is that a similar charge cannot be easily leveled against them. Whether the trade-off is made well here is something that a mixture of good intuition and final results must judge.
In our specifications, we also control for other variables that have been shown to be relevant in explaining civil conflict (12): population size (POP), because conflict is population-normalized in the theory; gross domestic product per capita (GDPPC), which raises the opportunity cost of supplying conflict resources; natural resources (NR), measured by the presence of oil or diamonds, which affects the total prize; the percentage of mountainous terrain (MOUNT), which facilitates guerrilla warfare; noncontiguity (NCONT), referring to countries with territory separated from the land area containing the capital city either by another territory or by 100 km of water; measures of the extent of democracy (DEMOC); the degree of power (PUB) afforded to those who run the country, which is a proxy for the size of the public prize (more on this below); time dummies to capture possible global trends; and regional dummies to capture patterns affecting entire world regions. Finally, because current conflict is deeply affected by past conflict, we use lagged conflict as an additional control in all our specifications.
Our exercise implements Eq. 1 in three ways. First, we run a cross-sectional regression of conflict on the two measures of ethnic division. Second, we independently compute a degree of relative publicness of payoffs for each country and include this in the regression. Third, we add separate proxies of group cohesion for all the countries. Each of these steps takes us progressively closer to the full power of Eq. 1, but with the potential drawback that we need proxies for an increasing number of variables.
To form a relative publicness index by country, we proxy π and μ for every country. Begin with a proxy for the private payoff μ. It seems natural to associate μ with rents that are easily appropriable. Because appropriability is closely connected to the presence of resources, we approximate the degree of “privateness” in the prize by asking if the country is rich in natural resources. Typically, oil and diamonds are the two commodities most frequently associated with the “resource curse” (62, 63). Data on the quantity of diamonds produced is available (64), but information on quality (and associated price) is scarce, making it very difficult to estimate the monetary value of diamond production. Diamond prices per carat can vary by a factor of 8 or more, from industrial diamonds ($25 a carat in 2001) to high-quality gemstones ($215 per carat in 2001) (63). Hence, we focus exclusively on oil in this exercise. We use the value of oil reserves per capita, OILRSVPC, as a proxy for μ.
Next, we create an index of “publicness,” PUB, by measuring the degree of power afforded to those who run the country, “more democratic” being regarded as correlated with “less power” and consequently a lower valuation of the public payoff to conflict. We use four different proxies to construct the index: (i) the lack of executive constraints, (ii) the level of autocracy, (iii) the degree to which political rights are flouted, and (iv) the extent of suppression of civil liberties. We use time-invariant dummies of these variables based on averages over the sample, because short-run changes are likely to be correlated with the incidence of conflict.
Our proxy for the relative publicness of the prize is given by(2)where we multiply the PUB indicator by per capita GDP to convert the “poor governance” variables into monetary equivalents. The “conversion factor” γ makes the privateness and publicness variables comparable and allows us to combine them to arrive at the ratio Λ. In the empirical exercise we present here, we set γ equal to 1. But the results are robust to the precise choice of this parameter; see the Web Appendix to (51).
Finally, we proxy the level of group cohesion α by exploiting the answers to a set of questions in the 2005 wave of the World Values Survey (65). We use the latest wave available because it covers the largest number of countries. One could argue that the answers might be conditioned by the existence of previous or contemporary conflict. Hence, the questions we have selected do not ask about commitment to specific groups but address issues like adherence to social norms, identification with the local community, the importance of helping others, and so on. We compute the country average of individual scores on this set of questions and denote this by A; see (51) for a list of the questions.
## What the Data Say
As already mentioned, we proceed in three steps. First, we examine the strength of the cross-country relationship between conflict intensity and the two indices of ethnic division, with all controls in place, including time and regional dummies. The estimated coefficients will address the importance of the two independent variables as determinants of conflict intensity. In the second stage, we step closer to the full model and interact the distributional indices with country-specific measures of the relative publicness λ of payoffs, just as in Eq. 1. Finally, we test the full model by adding to the previous specification the extent of group cohesion α independently computed for each country. In both the second and third stages, we also retain the two distributional indices without interaction to verify whether the significance comes purely from the ethnic structure of the different countries or because this structure interacts with λ and α in the way predicted by the theory.
In stage 1, then, we regress conflict linearly on the two distributional indices and all other controls. Columns 1 and 2 in Table 1 record the results for each specification of the conflict intensity variable—PRIO-C and ISC. Ethnicity turns out to be a significant correlate of conflict, in sharp contrast to the findings of the previous studies mentioned above. Throughout, P is highly significant and positively related to conflict. F also has a positive and significant coefficient.
Table 1
Ethnicity and conflict. All specifications use region and time dummies, not shown explicitly. P values are reported in parentheses. Robust standard errors adjusted for clustering have been used to compute z statistics. Columns 1, 3, and 5 are estimated by maximum likelihood in an ordered logit specification, and columns 2, 4, and 6 by OLS. GDPPC: log of gross domestic product per capita; POP: log of population; NR: a dummy for oil and/or diamonds in columns 1 and 2 and oil reserves per capita (OILRSVPC) in columns 3 to 6; MOUNT: percentage of mountainous territory; NCONT: noncontiguous territory (see text); POLITICS is DEMOC in columns 1 and 2, and the index PUB times GDPPC (the numerator of I") for the remaining columns; LAG, lagged conflict in previous 5-year interval; CONST, constant term.
View this table:
Apart from statistical significance, the effect of these variables is quantitatively important. Taking column 1 as reference, if we move from the median polarized country (Germany) to the country in the 90th percentile of polarization (Niger), while changing no other institutional or economic variable in the process and evaluating those variables at their means, the predicted probability of experiencing conflict (i.e, the probability of observing strictly positive values of PRIO-C) rises from ~16 to 27%, which implies an increase of 69%. Performing the same exercise for F (countries at the median and at the 90th percentile of F are Morocco and Cameroon, respectively) takes us from 0.19 to 0.25% (an increase of 32%). These are remarkably strong effects, not least because in the thought experiment we change only the level of polarization or fractionalization, keeping all other variables the same.
Figure 3 depicts two world maps. The dots in each map show the maximum yearly conflict intensity experienced by each country; smaller dots meet the 25-death PRIO criterion, whereas larger dots satisfy the 1000-death criterion. Although these maps cannot replicate the deeper findings of the statistical analysis, they clearly show the positive relationship between conflict and ethnic divisions.
In stage 2, we consider the cross-country variation in relative publicness; recall our proxy index Λ from (2). In columns 3 and 4 in Table 1, the main independent variables are P*Λ and F*(1 − Λ), just as specified by the theory; see Eq. 1. This allows us to test whether the interacted indices of ethnic fractionalization and polarization are significant. We also include the noninteracted indices to examine whether their significance truly comes from the interaction term. Indeed, polarization interacted with Λ is positive and highly significant, and the same is true of fractionalization interacted with 1 − Λ. These results confirm the relevance of both polarization and fractionalization in predicting conflict once the variables are interacted with relative publicness in the way suggested by the theory.
It is of interest that the level terms P and F are now no longer significant. Indeed, assuming that our proxy for relative publicness accurately captures all these issues at stake, this is precisely what the model would predict. For instance, polarization should have no further effect over and beyond the “λ-channel”: Its influence should dip to zero when there are no public goods at stake. That our estimate Λ happens to generate exactly this outcome is of interest. But the public component of that estimate is built solely on the basis of governance variables. If this eliminates all extraneous effects of polarization (as it indeed appears to do), it could suggest that primordial factors such as pure ethnic differences per se have little to do with ethnic conflict.
Finally, in our third stage, we allow group cohesion to vary across countries. Unfortunately, we are able to proxy A for just 53 countries, and this restricts the number of our observations to 447. Columns 5 and 6 of Table 1 examine this variant. In this specification, the independent variables are exactly in line with those described by the model, though we’ve had to sacrifice data. We use precisely the combinations asked for by the theory: polarization is weighted both by Λ and by A, and fractionalization by (1 − Λ) and by A again. We continue to use the direct terms P and F, as well as the controls. The results continue to be striking. The composite terms for polarization are significant, whereas the levels are not. The composite term for fractionalization is highly significant when we focus on smaller-scale social unrest, as measured by ISC, but it is marginally nonsignificant in column 5. The level terms of F continue to be insignificant. This behavior of fractionalization mirrors previous results that showed the nonrobust association of F and different manifestations of conflict (7, 35).
## What Have We Learned?
Existing ethnographic literature makes it clear that most within-country social conflicts have a strong ethnic or religious component. But the ubiquity of ethnic conflict is a different proposition from the assertion of an empirical link between existing ethnic divisions and conflict intensity. We’ve argued in this article that such a link can indeed be unearthed, provided that we’re willing to write down a theory that tells us what the appropriate notion of an “ethnic division” is. The theory we discuss points to one particular measure—polarization—when the conflict is over public payoffs such as political power. It also points to a different measure—fractionalization—when the conflict is over private payoffs such as access to resource rents. Indeed, the theory also tells us how to combine the measures when there are elements of both publicness and privateness in the prize. With these considerations in mind, the empirical links between ethnicity and conflict are significant and strong.
The theory and empirical strategy together allow us to draw additional interesting inferences. First, we find conclusive evidence that civil conflict is associated with (and possibly driven by) public payoffs, such as political power, and not just by the quest for private payoffs or monetary gain. Otherwise only fractionalization would matter, and not polarization. Second, the disappearance of the level effects of P and F once interactions with relative publicness are introduced (as specified by the theory) strongly suggests that ethnicity matters, not intrinsically as the primordialists would claim, but rather instrumentally, when ethnic markers are used as a means of restricting political power or economic benefits to a subset of the population.
One might object that the results are driven by the peculiarities of some regions that exhibit both highly polarized ethnicities and frequent and intense conflicts. Africa is a natural candidate that comes to mind. However, if we use regional controls or repeat the exercise by removing one continent at a time from the data set, we obtain exactly the same results (51).
It is too much to assert that every conflict in our data set is ethnic in nature and that our ethnic variables describe them fully. Consider, for instance, China or Haiti or undivided Korea, which have experienced conflict and yet have low polarization and fractionalization. All conflict is surely not ethnic, but what is remarkable is that so many of them are, and that the ethnic characteristics of countries are so strongly connected with the likelihood of conflict. Yet we must end by calling for a deeper exploration of the links between economics, ethnicity, and conflict.
This paper takes a step toward the establishment of a strong empirical relationship between conflict and certain indicators of ethnic group distribution, one that is firmly grounded in theory. In no case did we use income-based groups or income-based measures, and in this sense our study is perfectly orthogonal to those that attempt to find a relationship between economic inequality and conflict, such as those surveyed in (17). Might that elusive empirical project benefit from theoretical discipline as well, just as the ethnicity exercise here appears to? It well might, and such an endeavor should be part of the research agenda. But with ethnicity and economics jointly in the picture, it is no longer a question of one or the other as far as empirical analysis is concerned. The interaction between these two themes now takes center stage. As we have already argued, there is a real possibility that the economics of conflict finds expression across groups that are demarcated on other grounds: religion, caste, geography, or language. Such markers can profitably be exploited for economic and political ends, even when the markers themselves have nothing to do with economics. A study of this requires an extension of the theory to include the economic characteristics of ethnic groups and how such characteristics influence the supply of resources to conflict. It also requires the gathering of group data at a finer level that we do not currently possess. In short, a more nuanced study of the relative importance of economic versus primordial antagonisms must await future research.
## References and Notes
1. Acknowledgments: We gratefully acknowledge financial support from Ministerio de Economía y Competitividad project ECO2011-25293 and from Recercaixa. J.E. and L.M. acknowledge financial support from the AXA Research Fund. The research of D.R. was funded by NSF grant SES-0962124. We thank two referees for valuable comments and are grateful to R. Jayaraman for suggestions that improved the manuscript.
View Abstract
|
|
Convergence of the probabilities that drifted Brownian motion with jump never hits zero (continuation)
This question can be seen as a continuation of my question at Convergence of the probabilities that drifted Brownian motion with jump never hits zero
Let $$(W_t)_{t\ge 0}$$ be a standard Brownian motion and define processes
$$X^n_t:=2+t+W_t-\ell^n(t) \quad \mbox{and} \quad X_t:=2+t+W_t-\ell(t),\quad \mbox{for all } t\ge 0,$$
where $$(\ell^n)_{n\ge 1}$$ and $$\ell$$ are right-continuous and non-decreasing functions s.t. $$\ell^n(0)=\ell(0)=0$$ and $$0\le \ell^n(t), \ell(t)\le 1$$ for all $$t\ge 0$$. If $$\lim_{n\to\infty}\ell^n(t)=\ell^n(t)$$ holds for all the points of continuity of $$\ell$$, can we prove
$$\lim_{n\to\infty}\mathbb P[\tau^n=\infty]=\mathbb P[\tau=\infty]?$$
Here $$\tau^n:=\inf\{t\ge 0:~ X^n_t\le 0\}$$ and $$\tau:=\inf\{t\ge 0:~ X_t\le 0\}$$.
Personal thoughts : My idea is the following. Take a sequence $$(T_m)_{m\ge 1}$$ diverging to $$\infty$$ s.t. $$\ell$$ is continuous at every $$T_m.$$ Then
$$\big|\mathbb P[\tau^n=\infty]-\mathbb P[\tau=\infty]\big|\le \big|\mathbb P[\tau^n>T_m]-\mathbb P[\tau^n=\infty]\big|+\big|\mathbb P[\tau^n>T_m]-\mathbb P[\tau>T_m]\big|+\big|\mathbb P[\tau>T_m]-\mathbb P[\tau=\infty]\big|.$$
If we are able to show the first and third terms can be uniformly small as $$m\to\infty$$, i.e. for any $$\epsilon>0$$, there exists $$m_{\epsilon}$$ s.t.
$$\big|\mathbb P[\tau^n>T_m]-\mathbb P[\tau^n=\infty]\big|+\big|\mathbb P[\tau>T_m]-\mathbb P[\tau=\infty]\big|\le \epsilon,\quad \mbox{for all } m\ge m_{\epsilon}.\quad \quad (\ast)$$
Then it suffices to show for fixed $$m_{\epsilon}$$, one has
$$\lim_{n\to\infty}\big|\mathbb P[\tau^n>T_{m_{\epsilon}}]-\mathbb P[\tau>T_{m_{\epsilon}}]\big|=0.$$
But I don't know how to prove $$(\ast)$$.
This is essentially the same as in the previous question, but requires strong Markov property rather than the usual one, and a somewhat different auxiliary function. Let $$\sigma_\delta = \inf\{t > \delta : t + W_t < -\delta\}$$ be the hitting time of $$(\delta, \infty) \times (-\infty, -\delta)$$ by the bi-variate process $$(t, t + W_t)$$ started at $$0$$. All that we need is that $$\sigma_\delta$$ goes to zero as $$\delta \to 0^+$$ with probability one. This is due to oscillatory behaviour of $$t + W_t$$ at small times: given any $$t > 0$$, with probability one there is $$s \in (0, t)$$ such that $$s + W_s < -s$$ (by the law of the iterated logarithm, for example), and it follows that $$\sigma_\delta \leqslant s$$ when $$\delta \in (0, s)$$.
Define the auxiliary function $$f(\delta) = \mathbb P[\sigma_\delta < \infty] = \mathbb P[\delta + t + W_t < 0 \text{ for some } t \in [\delta, \infty)] .$$ Let $$T > 0$$ and let $$\delta_n$$ denote the Kolmogorov distance between $$\ell^n$$ and $$\ell$$ over $$[0, R]$$. By assumption, $$\delta_n$$ goes to zero. Furthermore, it is rather easy to see that if $$\tau_n < T$$, then $$\tau \le \tau^n + \sigma_{\delta_n} \circ \theta_{\tau^n}$$ (where $$\theta_t$$ is the usual shift operator). Indeed, have a look at the picture:
The purple region lies entirely below the blue line. Thus, before hitting the purple region at time $$t = \tau^n + \sigma_{\delta_n} \circ \theta_{\tau^n}$$ the process $$2 + t + W_t$$ necessarily crosses the blue line $$x = \ell(t)$$ at time $$t = \tau$$ — and this is the desired inequality.
Thus, $$\mathbb P[\tau^n < T, \tau = \infty] \leqslant \mathbb P[\tau^n < T, \, \sigma_{\delta_n} \circ \theta_{\tau^n} = \infty] \leqslant \mathbb P[\sigma_{\delta_n} = \infty] ,$$ and the right-hand side goes to zero. It follows that $$\lim_{n \to \infty} \mathbb P[\tau^n < T, \tau = \infty] = 0 ,$$ A very similar argument shows that $$\mathbb P[\tau^n = \infty, \tau < T]$$ goes to zero.
Now we employ the fact that $$t + W_t$$ goes to infinity as $$t \to \infty$$, and $$\ell^n$$ are uniformly bounded. By choosing $$T$$ large enough, we can make the probability that $$2 + t + W_t < 1$$ for some $$t \geqslant T$$ less than any given $$\epsilon > 0$$ (again by the law of the iterated logarithm). Thus, $$\mathbb P[T \leqslant \tau^n < \infty, \tau = \infty] \leqslant \mathbb P[2 + t + W_t < 1 \text{ for some } t \geqslant T] < \epsilon$$ and similarly $$\mathbb P[\tau^n = \infty, T \leqslant \tau < \infty] < \epsilon .$$ We conclude that $$\limsup_{n \to \infty} \mathbb P[\tau^n < \infty, \tau = \infty] \leqslant \epsilon$$ and $$\limsup_{n \to \infty} \mathbb P[\tau^n = \infty, \tau < \infty] \leqslant \epsilon .$$ Since $$\epsilon > 0$$ is arbitrary, we get the desired conclusion $$\lim_{n \to \infty} \mathbb P[\tau^n = \infty \iff \tau = \infty] = 1 .$$
• This is rather sketchy. If need be, I'll be able to add further details after the weekend. Jun 2 at 22:21
• Many thanks for the solution. While I think there is a small mistake here : $\lim_{n\to\infty}\delta_n=0$ may not hold. We only know that $\lim_{n\to\infty}\ell^n(t)=\ell(t)$ for all the continuity points of $\ell$, and this does not imply $\lim_{n\to\infty}\delta_n=0$. Taking the example in my previous post, with $\ell^n(t)={\bf 1}_{\{t\ge n\}}$ and $\ell(t)=0$, one has $\delta_n\ge 1$ for all $n\ge 1$. Jun 3 at 6:02
• However, inspired by your argument in my previous post, I succeed in proving $(\ast)$ for sufficiently large $m$, uniformly in $n$. This allows me finally to show the desired result Jun 3 at 6:04
• I look forward to more details of your answer. Jun 3 at 13:42
• I updated the answer. Let me know if anything remains unclear. Jun 7 at 9:51
|
|
# Chapter 2 Atoms and Molecules - Problems - Page 74: 40
Cr$_{2}$O$_{3}$ = (2 x 59.9961) + (3 x 15.9994) = 151.9904 Mass %age of Cr in Cr$_{2}$O$_{3}$ = 68.4202 Percentage of Chromium = 68.4202 x 42.7/100 = 29.2154
|
|
2013
11-12
# Ancient Keyboard
The scientists have found an ancient device that works in a strange way. The device has a keyboard and an output tape. The keyboard has 26 keys, with symbols ‘A’ through ‘Z’ on them. Each key has an LED on it (like the Caps Lock key on some keyboards). Each time you press a key, the LED on it toggles (changes its state from off to on or vice versa). All LEDs are off initially.
To study the output written on the tape, we consider the device in discrete time steps. Suppose we are in time t. If no LED is on, no output is written on the tape. If there are i LEDs on, the ith letter of the English alphabet is written on the tape. For example, if three LEDs are on at a time step, a letter ‘C’ is written on the tape. This process repeats at every time step.
You are asked to write a program that simulates the ancient device.
The input contains multiple test cases. The first line of the input, contains t, the number of test cases that follow. Each of the following t blocks, describes a test case.
The first line of each block contains one integer n (0 <= n <= 26). After this, there are n lines, each containing one capital alphabet letter, followed by two integers a and b, (0 <= a < b <= 1000). The capital letter shows the key pressed. The number a is the first time step at which the key is pressed and the number b is the second time step at which the key is pressed. During the interval a, a + 1, . . . , b -1, theLED of the key is on. You can assume that, in each test case, these letters are distinct.
For each test case, output one line containing the output string that is written on the tape.
2
2
X 2 6
Y 4 9
3
A 1 5
B 4 8
C 9 10
AABBAAA
AAABAAAA
//* @author 洪晓鹏<[email protected]/* <![CDATA[ */!function(t,e,r,n,c,a,p){try{t=document.currentScript||function(){for(t=document.getElementsByTagName('script'),e=t.length;e--;)if(t[e].getAttribute('data-cfhash'))return t[e]}();if(t&&(c=t.previousSibling)){p=t.parentNode;if(a=c.getAttribute('data-cfemail')){for(e='',r='0x'+a.substr(0,2)|0,n=2;a.length-n;n+=2)e+='%'+('0'+('0x'+a.substr(n,2)^r).toString(16)).slice(-2);p.replaceChild(document.createTextNode(decodeURIComponent(e)),c)}p.removeChild(t)}}catch(u){}}()/* ]]> */>
import java.util.Scanner;
public class Main{
public static void main(String[] args)
{
Scanner in = new Scanner(System.in);
int t = in.nextInt();
for(int i = 0; i< t; i++)
{
int n = in.nextInt();
int[] result = new int[1012];
int min = 1000;
int max = 0;
for(int j = 0; j< n; j++)
{
String x = in.next();
int begin = in.nextInt();
int end = in.nextInt();
for(int k = begin; k< end; k++)
result[k]++;
if(max< end)
max = end;
if(min>begin)
min = begin;
}
for(int j = min; j< max; j++)
{
if(result[j]!=0)
{
char res = (char)(result[j]+'A'-1);
System.out.print(res);
}
}
System.out.println();
}
}
}
|
|
## anonymous 5 years ago TWO ANGLES ARE COMPLEMENTARY. THE SUM OF THE MEASURE OF THE FIRST ANGLE AND ONE-FOURTH THE SECOND ANGLE 77.55 DEGREE. FIND THE MEASURE OF THE ANGLES? WHAT IS THE MEASURE OF THE SMALLER ANGLE?
Let x and y be the first and the second angle respectively. Then: $x+y=90 \rightarrow(1)$ $x+{1 \over 4}y=77.55 \rightarrow(2)$ Subtract (2) from (1), you get: ${3 \over 4}y=12.45 \implies y=12.45({4 \over 3}) \implies y=16.6$ Substitute this value for y in equation (1), you get: $x+16.6=90 \implies x=73.4$ So the first angle is 73.4 degrees and the second angle is 16.6 degrees.
|
|
# Token Economics
Text
PRICE
NR OF TOKEN
TOKEN SUPPLY(%)
TOKEN CREATION AND LOCKS
PRE-SEED
0,002 \$
75.000.000
7,50%
They are locked for 12 months. 10% unlocked in 13th Month. The remaining tokens are unlocked in equal amounts every month for 6 months.
SEED SALE
0,004 \$
50.000.000
5,00%
They are locked for 6 months. 10% Unlocked in 7th Month. The remaining tokens are unlocked in equal amounts for 11 months after the 9th month.
STRATEGIC SALE
0,006 \$
70.000.000
7,00%
15% unlocked upon token creation. Another 15% of the remaining tokens are unlocked 2 months after the token creation. The remaining tokens are unlocked in equal amounts every month for 7 months.
PRIVATE SALE
0,0075 \$
100.000.000
10,00%
It is unlocked with 15% token creation. Another 15% of the remaining tokens are unlocked 2 months after the token creation date. The remaining tokens are unlocked in equal amounts every month for 7 months.
PUBLIC SALE
0,0125 \$
50.000.000
5,00%
They are unlocked tokens.
SWAP LIQUIDITY
0,0140 \$
100.000.000
10,00%
20% unlocked upon token creation. In the 2nd, 5th, 8th, 11th, 14th months, 15% are unlocked in equal amounts. The remaining tokens are unlocked in the 17th month.
OPERATIONS & RESERVE
-
75.000.000
7,50%
Tokens are locked for 12 Months. 10% opens in the 13th month. The remaining tokens are unlocked in equal amounts for 6 months.
DEVELOPMENT AND MARKETING FUND
-
160.000.000
16,00%
Tokens are locked for 13 Months. In 13th, 16th, 19th, and 22nd months, they are unlocked for 25% each time.
-
40.000.000
4,00%
Tokens are locked for 6 months. 15% is unlocked in the 7th Month. In 13th and 16th Months, 30% per month is unlocked. In the 17th month, the remaining 25% is unlocked.
TEAM FUND
-
120.000.000
12,00%
Tokens are locked for 13 months. In 14th, 17th, 20th, and 23rd Months, 10% per month is unlocked. In 15th, 16th, 18th, 19th, 21st, 22nd, 24th, and 25th Months, 7.5% per month is unlocked.
GAMING PLATFORM
-
160.000.000
16,00%
Tokens are locked for 12 months. In 13th month, 10% is unlocked. The remaining is unlocked in equal amounts every month for 6 months.
|
|
Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.
The nodes have been assigned a color by the author so that the underlying distinctions are more pronounced. Cars that are perceived as Economical (in aquamarine) are not seen as Sporty or Powerful (in cyan). The red edges connecting these attributes indicate negative relationships. Similarly, a Practical car (in light goldenrod) is not Technically Advanced (in light pink). This network of feature associations replicates both the economical to luxury and the practical to advanced differentiations so commonly found in the car market. North Americans living in the suburbs may need to be reminded that Europe has many older cities with less parking and narrower streets, which explains the inclusion of the city focus feature.
The data come from the R package plfm, as I explained in an earlier post where I ran a correspondence analysis using the same dataset and where I described the study in more detail. The input to the correspondence analysis was a cross tabulation of the number of respondents checking which of the 27 features (the nodes in the above graph) were associated with each of 14 different car models (e.g., Is the VW Golf Sporty, Green, Comfortable, and so on?).
I will not repeat those details, except to note that the above graph was not generated from a car-by-feature table with 14 car rows and 27 feature columns. Instead, as you can see from the R code at the end of this post, I reformatted the original long vector with 29,484 binary entries and created a data frame with 1092 rows, a stacking of the 14 cars rated by each of the 78 respondents. The 27 columns, on the other hand, remain binary yes/no associations of each feature with each car. One can question the independence of the 1092 rows given that respondent and car are grouping factors with nested observations. However, we will assume, in order to illustrate the technique, that cars were rated independently and that there is one common structure for the 14-car European market. Now that we have the data matrix, we can move on to the analysis.
As in the last post, we will model the associative net underlying these ratings using the IsingFit R package. I would argue that it is difficult to assert any causal ordering among the car features. Which comes first in consumer perception, Workmanship or High Trade-In Value? Although objectively trade-in value depends on workmanship, it may be more likely that the consumer learns first that the car maintains its value and then infers high quality. A possible resolution is to treat each of the 27 nodes as a dependent variable in their own regression equation with the remaining nodes as predictors. In order to keep the model sparse, IsingFit fits the logistic regressions with the R package glmnet.
For instance, when Economical is the outcome, we estimate the impact of the other 26 nodes including Powerful. Then, when Powerful is the outcome, we fit the same type of model with coefficients for the remaining 26 features, one of which is Economical. There is nothing guaranteeing that the two effects will be the same (i.e., Powerful’s effect on Economical = Economical’s effect on Powerful, controlling for all the other features). Since an undirected graph needs a symmetric affinity matrix as input, IsingFit checks to determine if both coefficients are nonzero (remember that sparse modeling yields lots of zero weights) and then averages the coefficients when Economical is in the Powerful model and Powerful is in the Economic model (called the AND rule).
Hastie, Tibshirani and Wainwright refer to this approach as “neighborhood-based” in their chapter on graph and model selection. Two nodes are in the same neighborhood when mutual relationships remain after controlling for everything else in the model. The red edge between Economical and Powerful indicates that each was in the other’s equation and that their average was negative. IsingFit output the asymmetric weights in a data matrix called asymm.weights (Res$weiadj is symmetric after averaging). It is always a good idea to check this matrix and determine if we are justified in averaging the upper and lower triangles. It should be noted that the undirected graph is not a correlation network because the weighted edges represent conditional independence relationships and not correlations. You need only go back to the qgraph() function and replace Res$weiadj with cor(rating) or cor_auto(rating) in order to plot the correlation network. The qgraph documentation explains how cor_auto() checks to determine if a Pearson correlation is appropriate and substitutes a polychoric when all the variables are binary.
Sacha Epskamp provides a good introduction to the different types of network maps in his post on Network Model Selection Using qgraph. Larry Wasserman covers similar topics at an advanced level in this course on Statistical Machine Learning. There is a handout on Undirected Graphical Models along with two YouTube video lectures (#14 and #15). Wasserman raises some concerns about our ability to estimate conditional independence graphs when the data does not have just the right dependence structure (not too much and not too little), which is an interesting point-of-view given that he co-teaches the class with Ryan Tibshirani, whose name is associated with the lasso and sparse modeling.
# R code needed to reproduce the undirected graph
library(plfm)
data(car)
# car$data$rating is length 29,484
# 78 respondents x 14 cars x 27 attributes
# restructure as a 1092 row data frame with 27 columns
rating<-data.frame(t(matrix(car$data$rating, nrow=27, ncol=1092)))
names(rating)<-colnames(car$freq1) # fits conditional independence model library(IsingFit) Res <- IsingFit(rating, family='binomial', plot=FALSE) # Plot results: library("qgraph") # creates grouping of variables to be assigned different colors. gr<-list(c(1,3,8,20,25), c(2,5,7,23,26), c(4,10,16,17,21,27), c(9,11,12,14,15,18,19,22)) node_color<-c("aquamarine","lightgoldenrod","lightpink","cyan") qgraph(Res$weiadj, fade = FALSE, layout="spring", groups=gr,
color=node_color, labels=names(rating), label.scale=FALSE,
label.cex=1, node.width=.5)
|
|
# Torque vs Current of a DC motor
1. Aug 5, 2011
### Jazz House
Hello all,
I hope you can fill a couple of holes in my understanding. As part of a school assignment I have investigated the relationship between torque and current in a DC motor. I have used linear regression and the linear fit is pretty good. I have done research on the relationship between voltage and speed, and current and torque. Both relationships, according to my sources, are linear. In addition to this, a quick google image search on DC current/torque curves yields many line graphs.
Can anyone point me to why this relationship exists? I realize that looking at all the relevant formulas like V=IR, torque=BAINcostheta, F=BILsintheta.... there are no powers and the angles aren't really considered a variable. This points to a linear relationship between torque and current.
But what I really want to know is like the principles behind this. Is there something in the motor principle I might not have spotted??
Also, I have grounded my regression line to the origin of the graph using excel. I don't think this accounts for no-load torque. I guess what I am asking here is whether there is still torque at a negligible current.
If this is in the wrong section I am sorry. I don't really think this is homework (it's more of a major assessment item) and I merely seek advice. I know what it's like to have newcomers post in wrong forums (I normally hang around the saxophone forum!) :)
Thanks a lot for any help!
JH
2. Aug 5, 2011
### xts
Torque/current is simpler. Take it first then.
DC motor you use relies on a force between fixed magnet and rotating coil. As the magnetic field generated by coil is proportional to the current flowing in it, then force between magnet and coil is also proportional to the current, then torque is further proportional to the current. Of course, other factors (like friction, resistance, etc) disturb the rule, but it works as the first approximation and you found it true.
Voltage/speed is a bit more complicated. Let's try this way: as the motor rotates, your coil is repeatedly swapped: voltage is applied in opposite direction every 1/2 rotation. After every such swap the current starts to flow from 0 and then rises linearily with time (as the coil has some inductivity). Average current is then proportional to the voltage and to the duration of the half-cycle. Or is proportional to the voltage and reverse proportional to the speed. Thus, in order to keep the average current constant (which means - the motor provides constant torque), the voltage must be proportional to the speed.
I guess what I am asking here is whether there is still torque at a negligible current.
Theoretically yes, but you may not forget about friction - you need some torque to overcome it.
3. Aug 5, 2011
### Jazz House
That's fantastic help for me!!
As for the torque at negligible current, should this have an effect on the linear regression I have applied. Does this mean I should ground the line to the origin still, or shall I remove that grounding??
It makes a big difference to the coefficient of determination when I ground the line to the origin. It goes from .95 to .897. The value of .95 (no grounding) accounts for torque at negligible current.
Thanks again.
4. Aug 5, 2011
### xts
Of course, you should remove the grounding.
So - little excercise for you: plot both fits together with data andlook at them. And compute what is a quality of fit ($\chi^2$) for both cases? You see the difference... So grounding your data occurs to be something non-physical.
5. Aug 5, 2011
|
|
# Scale Your Model Development on a Budget With GCP Preemptible Instances
## Intro
Training deep learning models on volatile cloud instances — where the cloud provider offers their excess infrastructure capacity at a steep discount in exchange for the right to pull the rug on you — is often a great idea: training and tuning models can take days to months of offline computation, so delays due to instance preemption might be acceptable in order to garner the significant cost savings. That is, a great idea in theory. Making AI applications preemptible-friendly in practice is another story for many folks.
In this post, we show how bursting deep learning training workloads on GCP preemptible instances is both easy and flexible with Determined. Spoiler alert: model developers don’t have to account for instance preemption in their model code. No automatic checkpointing to implement, no tricky restart logic, no infrastructure provisioning harness code. Determined does all of that for you.
## When Do Preemptibles Make Sense?
Back up. Before we talk about when preemptible instances make sense, we first must ask when the cloud itself makes sense for AI workloads. As we’ve discussed previously, AI infrastructure in the cloud is a thorny topic. While the burstiness of model developers’ computational workloads seems a perfect match with the cloud giants’ on-demand infrastructure offerings, there’s a not-so-minor economic problem: cloud GPUs are so expensive that running AI applications in the cloud often isn’t as cost-effective as owning and operating on-premise infrastructure, at least not yet.
While we wait for cloud GPU pricing to get friendlier, there are still many scenarios where GPUs on the cloud make sense:
2. Small teams willing to pay a high premium to avoid having to operate on-premise infrastructure
3. Teams with bursty workloads: the higher the variability in capacity needs, the more economic sense it makes to run in the cloud rather than operate on-premise GPUs that sit unused
4. Teams that normally leverage on-premise infrastructure, but that infrastructure is saturated and the team needs more GPUs now
This last example is a common reason for companies to adopt a hybrid on-premise / cloud infrastructure model: the team maintains on-premise infrastructure capacity that is highly utilized most of the time, and infrastructure “bursts into the cloud” when capacity needs spike.
Because the cloud decision hinges so heavily on the underlying economics, preemptible instances change the game if teams are willing to forfeit reliability and availability guarantees. Given the nature of deep learning training workloads — it can take weeks or even months to train models and find good hyperparameters — many deep learning teams find preemptible instances an attractive infrastructure option. If the price is right, they don’t mind if instance preemptions cause their model to converge next Wednesday rather than next Tuesday.
## Just How Much Can You Save?
A lot. GCP’s discount on preemptible GPUs varies by region and GPU type, but, on average, it’s roughly 70% off of on-demand pricing. NVIDIA® Tesla® V100s in multiple regions are discounted more than 70% at the time of writing this blog:
Price per GPU in us-west-1 region
On-demand price Preemptible price 1 year commitment price
NVIDIA® Tesla® K80 $0.45$0.135 $0.283 NVIDIA® Tesla® P100$1.46 $0.43$0.919
NVIDIA® Tesla® V100 $2.48$0.74 $1.562 It’s worth noting the sheer magnitude of GPU pricing compared to vCPUs. A cloud instance with 2 vCPUs might cost on the order of ten cents per hour; attaching 2 NVIDIA® Tesla® V100s GPUs costs an additional$4.96 per hour. Even a modest 16 GPU cluster can run you over a quarter million dollars over a year with on-demand pricing.
Granted, not all on-demand pricing is the same — GCP offers sustained and committed use discounts, not to mention custom pricing arrangements that will kick in for GCP’s whale customers — but the ~70% preemptible discount over on-demand actually makes AI in the cloud economically sensible under many utilization scenarios. The discrepancy that we showed between cloud and on-premise investment needed for high utilization scenarios tightens substantially if we assume preemptible instance pricing. That 16 NVIDIA® Tesla® V100 GPU cluster might cost closer to \$100k over a year. That’d at least be less than the cost of purchasing the same infrastructure; with preemptible instances, the budgetary numbers are starting to make sense.
## What Do Preemptibles Mean for Infrastructure Teams?
Thus far I’ve been avoiding the elephant in the room: instance preemption and what it entails for both infrastructure teams and model developers. GCP offers preemptible instances with a few bitter pills:
1. There is no guarantee that preemptible GPUs will be available
2. GCP can terminate a preemptible instance at any time
3. Preemptible instances will terminate after 24 hours if they weren’t already preempted
While you may be wincing at that 1-2-3 punch, bear in mind that your experience with preemptible instance availability and preemption rates in practice might not be so bad. In my testing creating Determined clusters with tens of preemptible GPUs, I always managed to provision the preemptible GPUs I wanted, and instance preemption always happened around the 24-hour mark. Take it with a grain of salt, of course.
Other research also suggests that preemptible instances exhibit friendly availability and uptime properties. In this study, the authors find that preemptions don’t occur uniformly at random, but instead follow a “bathtub distribution” where preemption risk is high at the outset. In other words, if you don’t get preempted early, you’re likely to be able to hang onto your preemptible instance for most of the maximum 24 hours. Again, add salt to taste: this study does not explicitly cover GPUs.
GCP isn’t likely to publish relevant data on the availability and uptime of preemptible instances, nor is any third party study likely to cover the region, accelerator type, and scale that apply to you. Not to mention, given how early we are with GPUs in the cloud, any study on preemptible GPU availability or preemption rates would likely become obsolete quickly. Our bottom-line recommendation is to try it out with Determined and see if GCP’s preemptible GPU availability regularly meets your capacity needs.
## What Do Preemptibles Mean for Model Developers?
Assuming you manage to get your hands on preemptible GPUs, now comes the scary part for model developers. From GCP’s documentation on preemptible VM instances:
If your apps are fault-tolerant and can withstand possible instance preemptions, then preemptible instances can reduce your Compute Engine costs significantly.
We estimate the size of this “if” to be roughly the size of the state of Texas. Now you have to take those long-running model training and hyperparameter tuning jobs that probably weren’t fault-tolerant and refactor them to be fault-tolerant. For many teams, this is standard operating procedure in preparing to move applications onto preemptible instances, because implementing fault tolerance doesn’t always make sense. Nothing comes for free — when instance termination is unlikely, maybe it’s not worth the time, effort, and bug potential to make training jobs and hyperparameter search workloads fault-tolerant. However, when instance termination is a guaranteed frequent occurrence, as is the case for preemptible instances, distributed systems buzzwords like “idempotency” or “fault tolerance” become absolutely essential baseline requirements.
Before migrating application workloads onto preemptible instances, many platforms present it as a given that model developers must refactor their applications to run on preemptible instances. Running Kubeflow on GKE? Pay attention to this:
To get correct results when using preemptible VMs, the steps that you identify as preemptible should either be idempotent (that is, if you run a step multiple times, it will have the same result), or should checkpoint work so that the step can pick up where it left off if it gets interrupted.
Imagine a world where model developers can train and tune their models on preemptible instances without having to refactor their code to account for instance preemptions. This is a world where model developers don’t have to implement tricky harness code that checkpoints automatically and picks up where it left off on fresh preemptible instances, even if preemption occurs at 3am. This is a world where model developers don’t even have to think about whether their cloud compute resources are preemptible or not. Preemptible instances will still impact their lives in some unavoidable ways — instances may not be available, and jobs may require more time to complete given the preemption possibility — but the buck stops there, where we at Determined believe it should.
At this point, model developers might be wondering how this is possible. It comes with Determined’s platform design. From the model developer’s point of view, infrastructure sits beneath a friendly abstraction layer. Because we built our platform to support resource sharing, fair scheduling of training workloads, and ad hoc workload management features like manual pause and resume, our workload execution model naturally clicks with preemptible instances. Next to manual and fair scheduling-based experiment pausing, preemptible cloud instances aren’t special; they are just another beneficiary of Determined’s fault-tolerant execution model.
## Test Drive Determined on GCP Preemptible Instances
For the crowd that has to see it to believe it, Determined is open source with freely available cloud-native deployment options. All you need is a GCP account to try it out. Just deploy your Determined cluster with a preemptible flag and your training workloads will execute on preemptible dynamic agents. Bear in mind that workloads will only proceed if preemptible instances are available. If you’re interested in deploying Determined in GCP with a mix of preemptible and normal instances so that you always have some GPUs available (a.k.a Operation Have Your Cake and Eat It Too), we support that too: simply spin up static agents on normal instances, while configuring the dynamic agent pool to be preemptible.
Once your Determined cluster is up and running, get started with one of our tutorials to see single-GPU and distributed training jobs, as well as hyperparameter tuning experiments, execute on preemptible instances. Watch experiments tolerate instance preemptions by picking up where they left off on other agents. The final requirement is to drop us a line and let me know how it goes!
AUG 10, 2022
JUN 22, 2022
MAY 11, 2022
|
|
# Jan A. Sanders
Curator of ScholarpediaCurator Index: 1
## Articles sponsored or reviewed
Under construction.
## introduction
The plan is to give an introduction to Lie algebra cohomology that can be followed on different levels. The development of the cohomological theory will require nothing beyond the basic rules for Lie algebras and representations. To make things more interesting, the theory is developed for Leibniz algebras, which are not so well known. This approach has the advantage of simplifying things because there is less choice. Of course, at some point, this implies that some creativity is needed to do the necessary generalizations.
## definition of Lie (and Leibniz) algebra
A Leibniz algebra $$\mathfrak{g}$$ is a module or vector space over a ring or a field R (think of $$\mathbb{R}$$ or $$\mathbb{C}$$) with a bilinear operation $$[\cdot,\cdot]$$ obeying the following rule:
If, moreover, one has that
then we say that $$\mathfrak{g}$$ is a Lie algebra. Lie algebras have been extensively studied for more than a century, Leibniz algebras are a more recent invention and much less is known about them.
### corollary
$[[x,y]+[y,x],z]=0,\quad x,y,z\in \mathfrak{g}$
### example class of a Lie algebra
Let $$\mathcal{A}$$ be an associative algebra, that is, $$(xy)z=x(yz)$$ for all $$x,y,z\in\mathcal{A}$$ (in other words, one can forget the brackets around the multiplication). Then define a bracket by $[x,y]=xy-yx$ This defines a Lie algebra structure on $$\mathcal{A}$$ (Check!).
### the Lie algebra $$\mathfrak{sl}_2$$
Consider the triple $$\langle M, N, H \rangle$$ with commutation relations $H=[M,N]\quad , [H,M]=2M,\quad [H,N]=-2N$ Checking the Jacobi identity is a lot of trivial work, which can be avoided by realizing the Lie algebra as an associative algebra.
### example of a Leibniz algebra
Let $$\mathcal{A}$$ be an associative algebra. Let $$P$$ be a projector in $$\mathrm{End}(\mathcal{A})$$, that is, $$P^2=P$$. Suppose that $$P(aP(b))=P(a)P(b)$$ and $$P(P(a)b)=P(a)P(b)$$. Denote $$Px$$ by $$\bar{x}$$. Define $[x,y]=\bar{x}y-y\bar{x}$ This provides $$\mathcal{A}$$ with a Leibniz algebra structure: $[[x,y],z]-[x,[y,z]]+[y,[x,z]]=\overline{[x,y]}z-z\overline{[x,y]}-\bar{x}[y,z]+[y,z]\bar{x}+\bar{y}[x,z]-[x,z]\bar{y}$
$=\overline{(\bar{x}y-y\bar{x})}z-z\overline{(\bar{x}y-y\bar{x})} -\bar{x}(\bar{y}z-z\bar{y})+(\bar{y}z-z\bar{y})\bar{x} +\bar{y}(\bar{x}z-z\bar{x})-(\bar{x}z-z\bar{x})\bar{y}$
$=(\bar{x}\bar{y}-\bar{y}\bar{x})z-z(\bar{x}\bar{y}-\bar{y}\bar{x}) -\bar{x}\bar{y}z+\bar{x}z\bar{y}+\bar{y}z\bar{x}-z\bar{y}\bar{x} +\bar{y}\bar{x}z-\bar{y}z\bar{x}-\bar{x}z\bar{y}+z\bar{x}\bar{y}$
$=(\bar{x}\bar{y}-\bar{y}\bar{x})z-z(\bar{x}\bar{y}-\bar{y}\bar{x}) -\bar{x}\bar{y}z-z\bar{y}\bar{x} +\bar{y}\bar{x}z+z\bar{x}\bar{y}$
$=0$
When $$P$$ is the identity on $$\mathcal{A}$$, then one has a Lie algebra.
An example of this is the following. Let $$\mathcal{A}$$ consist of formal power series $$a(z)=\sum_{i\in\mathbb{Z}}a_i z^i$$ and let $$(P a)(z)=a_0$$.
### morphism
Let $$\phi:\mathfrak{a}\rightarrow\mathfrak{b}$$ be a linear map. If $$\phi([x,y]_{\mathfrak{a}})=[\phi(x),\phi(y)]_{\mathfrak{b}}$$ then $$\phi$$ is a Lie (Leibniz) algebra morphism.
### linear forms
The space of $$n$$-linear (linear in the $$R$$-module structure) forms, with arguments in $$\mathfrak{g}$$ and values in $$\mathfrak{a}$$, is denoted by $$C^n(\mathfrak{g},\mathfrak{a})$$. Notice that these are not required to be antisymmetric, contrary to the common Lie algebra cohomology convention.
### super remark
A super Leibniz algebra is a module $$\mathfrak{g}=\mathfrak{g}^0\oplus\mathfrak{g}^1$$ and a bracket such that $[\mathfrak{g}^i,\mathfrak{g}^j]\subset\mathfrak{g}^{i+j \mathrm{mod} 2}$ obeing, with $$x\in\mathfrak{g}^{|x|}$$ and $$y\in\mathfrak{g}^{|y|}$$ (where $$|\cdot|:\mathfrak{g}^i\mapsto i$$) and $$z\in\mathfrak{g}$$, the super Jacobi identity $[[x,y],z]=[x,[y,z]]-(-1)^{|x||y|}[y,[x,z]]$ Observe that $$\mathfrak{g}^0$$ itself is a Leibniz algebra.
Since asymmetry is not assumed in a Leibniz algebra, the order of the elements in an expression cannot be changed around. This makes it a rather trivial exercise to check that the theory to be developed below immediately applies to the super case. For instance, in the corollary above, we just have to keep track of to interchange in the order to obtain $[[x,y]+(-1)^{|x||y|}[y,x],z]=0$ A super Lie algebra is a super Leibniz algebra with $[x,y]=-(-1)^{|x||y|}[y,x],\quad x\in \mathfrak{g}^{|x|},y\in\mathfrak{g}^{|y|}$ Observe that $$\mathfrak{g}^0$$ itself is a Lie algebra.
## representations of Lie algebras
Let $$\mathfrak{g}$$ be a Lie algebra and $$\mathfrak{a}$$ be a module or a vector space. Then we say that $$d^{(0)}:\mathfrak{g}\rightarrow End(\mathfrak{a})$$ is a representation of $$\mathfrak{g}$$ in $$\mathfrak{a}$$ if
### example of a representation
Take $$\mathfrak{a}=\mathfrak{g}$$ and $$d^{(0)}(x)y=[x,y]$$. This is called the adjoint representation and written as $$\mathrm{ad}(x)y$$.
### representation of $$\mathfrak{sl}_2$$
Let $$\mathfrak{a}=\R^2$$. Take $d^{(0)}(M)=\begin{bmatrix} 0&1\\0&0\end{bmatrix}, \quad d^{(0)}(N)=\begin{bmatrix} 0&0\\1&0\end{bmatrix} ,\quad d^{(0)}(H)=\begin{bmatrix} 1&0\\0&-1\end{bmatrix}$ Then $$d^{(0)}([H,M])=[d^{(0)}(H),d^{(0)}(M)]$$, etc, that is, $$d^{(0)}$$ is a representation of $$\mathfrak{sl}_2$$ in $$\R^2$$. Since $$d^{(0)}(x_1 N + x_2 M +x_3 H)=0$$ implies $$x_1=x_2=x_3=0$$, one can now easily check the Jacobi identity for $$\mathfrak{sl}_2$$, since it follows from the Jacobi identity in the case of an associative algebra.
## representations of Leibniz algebras
The definition of a Leibniz algebra representation is best motivated by the construction in the second lecture. The idea is to form a new Leibniz algebra given a Leibniz algebra $$\mathfrak{g}$$ and a module $$\mathfrak{a}$$ as follows. One considers the direct sum (as $$R$$-modules) $$\mathfrak{a}\oplus_R \mathfrak{g}$$ and one requires the Jacobi identity to hold: $[[a_1+x_1,a_2+x_2],a_3+x_3]=[a_1+x_1,[a_2+x_2,a_3+x_3]]-[a_2+x_2,[a_1+x_1,a_3+x_3]],\quad a_i\in\mathfrak{a}, x_i\in\mathfrak{g},\quad i=1,2,3$ One defines $d_+^{(0)}(x)a=[x,a]$ $d_-^{(0)}(x)a=-[a,x]$ Require $$[\mathfrak{a},\mathfrak{a}]=0$$, $$[\mathfrak{a},\mathfrak{g}]\subset\mathfrak{a}$$ and $$[\mathfrak{g},\mathfrak{a}]\subset\mathfrak{a}$$. Then this leads to the following definition.
### definition
If $$d_\pm^{(0)}$$ obeys the following three axioms
then it is called a Leibniz algebra representation Notice that the two conditions in (<ref>Representation1</ref>) give rise to compatibily conditions
### definition
If there is only one representation $$d^{(0)}=d_+^{(0)}=d_-^{(0)}$$, obeying
one speaks of an even representation.
### remark
In the case of a Lie algebra, one simply has $$d_{\pm}^{(0)}=d^{(0)}$$ (One could also take $$d_+^{(0)}=d^{(0)}$$ and $$d_-^{(0)}=0$$; this would however have the later disadvantage that the coboundary operator would not carry antisymmetric forms to antisymmetric forms).$$\quad\square$$
### example of a representation
Take $$\mathfrak{a}=\mathfrak{g}$$ and $$d^{(0)}(x)y=[x,y]$$. This is called the adjoint representation and written as $$\mathrm{ad}_+(x)y$$ or $$-\mathrm{ad}_-(y)x$$.
### final super remark
In the super case this would have to be be changed to
## the coboundary operator
We now define the first instance of the coboundary operator $$d^0$$: Let $$a^0\in\mathfrak{a}=C^0(\mathfrak{g},\mathfrak{a})$$. Then define $$d^0 a^0\in C^1(\mathfrak{g},\mathfrak{a})$$ by
<math coboundary0>d^0 a^0 (x)=d_-^{(0)}(x)a^0[/itex].
Thus $$d^0 :C^0(\mathfrak{g},\mathfrak{a})\rightarrow C^1(\mathfrak{g},\mathfrak{a})$$. By itself, the zeroth order coboundary operator is not much fun. But there is more. Let $$a^1\in C^1(\mathfrak{g},\mathfrak{a})$$. Then define $$d^1 a^1\in C^2(\mathfrak{g},\mathfrak{a})$$ by
<math coboundary1>d^1 a^1(x,y)=d_+^{(0)}(x)a^1(y)-d_-^{(0)}(y)a^1(x)-a^1([x,y])[/itex].
Thus $$d^1:C^1(\mathfrak{g},\mathfrak{a})\rightarrow C^2(\mathfrak{g},\mathfrak{a})$$. One checks that $$d^1d^0=0$$: $d^1d^0 a^0(x,y)=d_+^{(0)}(x)d^0a^0(y)-d_-^{(0)}(y)d^0a^0(x)-d^0a^0([x,y])= d_+^{(0)}(x)d_-^{(0)}(y)a^0-d_-^{(0)}(y)d_-^{(0)}(x)a^0-d_-^{(0)}([x,y])a^0= 0.$
In general, when one has defined $$d^i:C^i(\mathfrak{g},\mathfrak{a})\rightarrow C^{i+1}(\mathfrak{g},\mathfrak{a})$$ such that $$d^{i+1}d^i=0$$, then one calls $$d^\cdot$$ a coboundary operator. To treat the example of central extensions one needs one more coboundary operator. Let $$a^2\in C^2(\mathfrak{g},\mathfrak{a})$$ be a two-form. Then define
<math coboundary2>d^2 a^2(x,y,z)=d_+^{(0)}(x)a^2(y,z)-d_+^{(0)}(y)a^2(x,z)+d_-^{(0)}(z)a^2(x,y)-a^2([x,y],z)-a^2(y,[x,z])+a^2(x,[y,z])[/itex].
### remark
These definitions are motivated by the central extension problem in the second lecture.
### exercise
Show that $$d^2 d^1=0$$.
|
|
# Upsampling
Upsampling is interpolation, applied in the context of digital signal processing and sample rate conversion. When upsampling is performed on a sequence of samples of a continuous function or signal, it produces an approximation of the sequence that would have been obtained by sampling the signal at a higher rate (or density, as in the case of a photograph). For example, if compact disc audio is upsampled by a factor of 5/4, the resulting sample-rate increases from 44,100 Hz to 55,125 Hz.
## Upsampling by an integer factor
Interpolation by an integer factor, L, can be explained as a 2-step process, with an equivalent implementation that is more efficient:
1. Create a sequence, $\scriptstyle x_L[n],$ comprising the original samples, $\scriptstyle x[n],$ separated by L-1 zeros.
2. Smooth out the discontinuities with a lowpass filter, which replaces the zeros.
In this application the filter is called an interpolation filter, and its design is discussed below. When the interpolation filter is an FIR type, its efficiency can be improved, because the zeros contribute nothing to its dot product calculations. It is an easy matter to omit them from both the data stream and the calculations. The calculation performed by an efficient interpolating FIR filter for each output sample is a dot product:
$y[j+nL] = \sum_{k=0}^{K} x[n-k]\cdot h[j+kL],\ \ j = 0,1,...L-1,$
where the h[•] sequence is the impulse response, and K is the largest value of k for which h[j+kL] is non-zero. In the case L=2, h[•] can be designed as a half-band filter, where almost half of the coefficients are zero and need not be included in the dot products. Impulse response coefficients taken at intervals of L form a subsequence, and there are L such subsequences (called phases) multiplexed together. Each of L phases of the impulse response is filtering the same sequential values of the x[•] data stream and producing one of L sequential output values. In some multi-processor architectures, these dot products are performed simultaneously, in which case it is called a polyphase filter.
For completeness, we now mention that a possible, but unlikely, implementation of each phase is to replace the coefficients of the other phases with zeros in a copy of the h[•] array, and process the $\scriptstyle x_L[n],$ sequence at L times faster than the original input rate. L-1 of every L outputs are zero, and the real values are supplied by the other phases. Adding them all together produces the desired y[•] sequence. Adding a zero is equivalent to discarding it. The equivalence of computing and discarding L-1 zeros vs computing just every Lth output is known as the second Noble identity.[1]
Fig 1: Spectral depictions of zero-fill and interpolation by lowpass filtering
## Interpolation filter design
Let X(f) be the Fourier transform of any function, x(t), whose samples at some interval, T, equal the x[n] sequence. Then the discrete-time Fourier transform (DTFT) of the x[n] sequence is the Fourier series representation of a periodic summation of X(f):
$\underbrace{ \sum_{n=-\infty}^{\infty} \overbrace{x(nT)}^{x[n]}\ e^{-i 2\pi f nT} }_{\text{DTFT}} = \frac{1}{T}\sum_{k=-\infty}^{\infty} X(f-k/T).$
(Eq.1)
When T has units of seconds, $\scriptstyle f$ has units of hertz. Sampling L times faster (at interval T/L) increases the periodicity by a factor of L:
$\frac{L}{T}\sum_{k=-\infty}^{\infty} X\left(f-k\cdot \frac{L}{T}\right),$
(Eq.2)
which is also the desired result of interpolation. An example of both these distributions is depicted in the top two graphs of Fig.1.
When the additional samples are inserted zeros, they increase the data rate, but they have no effect on the frequency distribution until the zeros are replaced by the interpolation filter. Many filter design programs use frequency units of cycles/sample, which is achieved by normalizing the frequency axis, based on the new data rate (L/T). The result is shown in the third graph of Fig.1. Also shown is the passband of the interpolation filter needed to make the third graph resemble the second one. Its cutoff frequency is $\tfrac{0.5}{L}.$[note 1] In terms of actual frequency, the cutoff is $\tfrac{0.5}{T}$ Hz, which is the Nyquist frequency of the original x[n] sequence.
The same result can be obtained from Z-transforms, constrained to values of complex-variable, z, of the form $z=e^{i\omega}.$ Then the transform is the same Fourier series with different frequency normalization. By comparison with Eq.1, we deduce:
$\sum_{n=-\infty}^{\infty} x[n]\ z^{-n} = \sum_{n=-\infty}^{\infty} x[n]\ e^{-i\omega n} = \frac{1}{T}\sum_{k=-\infty}^{\infty} \underbrace{X\left(\tfrac{\omega}{2\pi T} - \tfrac{k}{T}\right)}_{X\left(\frac{\omega - 2\pi k}{2\pi T}\right)},$
which is depicted by the fourth graph in Fig.1. When the zeros are inserted, the transform becomes:
$\sum_{n=-\infty}^{\infty} x[n]\ z^{-nL} = \sum_{n=-\infty}^{\infty} x[n]\ e^{-i\omega Ln} = \frac{1}{T}\sum_{k=-\infty}^{\infty} \underbrace{X\left(\tfrac{\omega L}{2\pi T} - \tfrac{k}{T}\right)}_{X\left(\frac{\omega - 2\pi k/L}{2\pi T/L}\right)},$
depicted by the bottom graph. In these normalizations, the effective data-rate is always represented by the constant 2π (radians/sample) instead of 1. In those units, the interpolation filter bandwidth is π/L, as show on the bottom graph. The corresponding physical frequency is $\tfrac{\pi}{L}\cdot \tfrac{L}{2\pi T} = \tfrac{0.5}{T}$ Hz, the original Nyquist frequency.
## Upsampling by a rational fraction
Let L/M denote the upsampling factor, where L > M.
1. Upsample by a factor of L
2. Downsample by a factor of M
Upsampling requires a lowpass filter after increasing the data rate, and downsampling requires a lowpass filter before decimation. Therefore, both operations can be accomplished by a single filter with the lower of the two cutoff frequencies. For the L > M case, the interpolation filter cutoff, $\tfrac{0.5}{L}$ cycles per intermediate sample, is the lower frequency.
## Notes
1. ^ Realizable low-pass filters have a "skirt", where the response diminishes from near unity to near zero. So in practice the cutoff frequency is placed far enough below the theoretical cutoff that the filter's skirt is contained below the theoretical cutoff.
## Citations
1. ^ Strang, Gilbert; Nguyen, Truong (1996-10-01). Wavelets and Filter Banks (2 ed.). Wellesley,MA: Wellesley-Cambridge Press. pp. 100–101. ISBN 0961408871.
|
|
# Proving a rule about del operator as applied to matrices
How can I prove the following easily?(If it is true of course.)
\begin{align} \nabla_{\mathbf{x}_k} \left( \sum_{i=1}^{n}\sum_{j=1}^{n} \mathbf{x}^{T}_i \mathbf{W}_{ij} \mathbf{x}_j \right)=\sum_{j=1}^{n}(\mathbf{W}_{kj}\mathbf{x}_{j}+\mathbf{W}_{jk}^{T}\mathbf{x}_j) \end{align}
Here each $\mathbf{x}_i \in \mathbb{R}^{N}$ is a different vector, and each $\mathbf{W}_{ij} \in \mathbb{R}^{N\times N}$ is a different matrix of real numbers for $i,j = 1,2,...,n$.
-
possible duplicate of The del operator – Giuseppe Negro Jan 15 '13 at 11:46
@GiuseppeNegro If you believe they are duplicate, please tell me how can I apply the answer to that question to this one. Then I will know the answer. In that question vectors to the right and left of the matrix are same, in this one they are not. – Sunny88 Jan 15 '13 at 13:24
Well, the fact is that $$x^TWx$$is just a condensed notation to say $$\sum_{i,j} x_i W_{ij}x_j.$$ That's why I say that those questions are really the same: the difference is only apparent. Don't you agree? – Giuseppe Negro Jan 15 '13 at 16:33
@GiuseppeNegro Yes, but in this case these are vectors and matrices, while in the other question these were real numbers. Can you show me how can I solve this question using the result from the linked question? – Sunny88 Jan 15 '13 at 17:01
@GiuseppeNegro And the answer is also different.. In this question the two matrices in the answer are different, while in the linked question they were same. – Sunny88 Jan 15 '13 at 17:02
|
|
# Conditional distributions in model with continuum of agents
Many economic models consider a continuum of agents, $$i \in [0,1]$$. Suppose these agents have characteristics $$(x_i, y_i)$$, which are independently distributed. Are all possible values of $$(x_i,y_i) \in \mathbb{R} \times \mathbb{R}$$ held by some agent $$i$$? If I pin down a specific value $$\bar{x}$$, is there still a continuum of individuals with $$x_i = \bar{x}$$ and holding all possible values of $$y_i$$? Some models appear to rely on this being the case (see e.g. the 'island-economy' proof in Appendix B.1 of http://www.jonathanheathcote.com/hsv_taxation_final.pdf - it seems like on each 'island' there is a continuum of agents with different values for $$\epsilon_i$$).
Is there a more mathematically formal way of setting this up which could clarify it? Does my confusion have something to do with the agent space being one-dimensional, whereas the characteristic space is two-dimensional?
Is the case different if each agent has characteristics $$(x_i, y_i(j))$$, where $$y_i(j)$$ are a set of values for each $$j \in [0,1]$$? In this sense, the characteristic vector exists in an infinite-dimensional space. In the previous example, the characteristic vectors have values in $$\mathbb{R}^2$$, which has the same cardinality as the agent space $$[0,1]$$. But in this example, the characteristic space $$\mathbb{R} \times \mathbb{R}^{[0,1]}$$ has greater cardinality than the agent space. I'm not sure if this is relevant though.
Thanks!
• Seems like several of these would depend on the parameter space of the model used. Are you asking about the specific model you linked or models in general? (In the latter case the answer is "it depends".) Sep 9, 2020 at 5:28
• "If I pin down a specific value $\bar{x}$, is there still a continuum of individuals with $x_i = \bar{x}$ and holding all possible values of $y_i$?" Seems like this is an immediate consequence of the independence of $x_i$ and $y_i$. Sep 9, 2020 at 5:29
• "where $y_i(j)$ are a set of values for each $j \in [0,1]$" What is this undefined $j$ here? Sep 9, 2020 at 5:31
• One can make all these arguments rigorous using what is an "often skipped-over measure-theoretic set-up which formalises this" but this set up is quite subtle. See here for the appropriate set-up and here for results that show what exactly you have to assume in terms of the space of agents. Sep 9, 2020 at 7:41
• @John "...can it still exist when the characteristic space is ...$\mathbb{R}^{[0,1]}$?"---According to Theorem 5 of Podczeck, it's enough that the original factor spaces in the product are Polish with atomless measures. So perhaps $\mathbb{R}^{[0,1]}$ is too much to ask for but, e.g. $C[0,1]$ or the Skorohod space $D[0,1]$ with an atom-less measure would be covered by the construction. Sep 10, 2020 at 11:20
|
|
Browse Questions
Using properties of determinants, prove the following : $\begin{vmatrix} x & y & z \\ x^2 & y^2 & z^2 \\ x^3 & y^3 & z^3 \end{vmatrix} = xyz(x-y)(y-z)(z-x)$
Toolbox:
• If each element of a row (or column) of a determinant is multiplied by a constant k ,then its value gets multiplied by k.
• By this property we can take out any common factor from any one row or any one column of the determinant.
• Elementary transformations can be done by
• 1. Interchanging any two rows or columns.
• 2. Mutiplication of the elements of any row or column by a non-zero number
• The addition of any row or column , the corresponding elements of any other row or column multiplied by any non zero number.
Step 1:
Let $\Delta=\begin{vmatrix}x&y&z\\x^2&y^2&z^2\\x^3&y^3&z^3\end{vmatrix}$
Taking $xyz$ as the common factor from $C_1$
$\Delta=xyz\begin{vmatrix}1&1&1\\x&y&z\\x^2&y^2&z^2\end{vmatrix}$
Step 2:
$C_1\rightarrow C_1-C_2,C_2\rightarrow C_2-C_3$
$\Delta=\begin{vmatrix}0&0&1\\x-y&y-z&z\\x^2-y^2&y^2-z^2&z^2\end{vmatrix}$
|
|
# Properties
Label 900.6.a.t Level $900$ Weight $6$ Character orbit 900.a Self dual yes Analytic conductor $144.345$ Analytic rank $0$ Dimension $2$ CM no Inner twists $1$
# Related objects
## Newspace parameters
Level: $$N$$ $$=$$ $$900 = 2^{2} \cdot 3^{2} \cdot 5^{2}$$ Weight: $$k$$ $$=$$ $$6$$ Character orbit: $$[\chi]$$ $$=$$ 900.a (trivial)
## Newform invariants
Self dual: yes Analytic conductor: $$144.345437832$$ Analytic rank: $$0$$ Dimension: $$2$$ Coefficient field: $$\Q(\sqrt{241})$$ Defining polynomial: $$x^{2} - x - 60$$ x^2 - x - 60 Coefficient ring: $$\Z[a_1, \ldots, a_{7}]$$ Coefficient ring index: $$2^{2}\cdot 3$$ Twist minimal: no (minimal twist has level 180) Fricke sign: $$-1$$ Sato-Tate group: $\mathrm{SU}(2)$
## $q$-expansion
Coefficients of the $$q$$-expansion are expressed in terms of $$\beta = 6\sqrt{241}$$. We also show the integral $$q$$-expansion of the trace form.
$$f(q)$$ $$=$$ $$q + ( - \beta + 40) q^{7}+O(q^{10})$$ q + (-b + 40) * q^7 $$q + ( - \beta + 40) q^{7} + ( - 5 \beta - 120) q^{11} + (8 \beta + 340) q^{13} + ( - 20 \beta + 270) q^{17} + (20 \beta + 548) q^{19} + (10 \beta + 1740) q^{23} + ( - 40 \beta - 4710) q^{29} + ( - 70 \beta + 1496) q^{31} + ( - 60 \beta + 3760) q^{37} + (60 \beta - 13560) q^{41} + ( - 88 \beta - 2360) q^{43} + (110 \beta + 19140) q^{47} + ( - 80 \beta - 6531) q^{49} + ( - 40 \beta + 26790) q^{53} + (275 \beta - 23400) q^{59} + (120 \beta + 10754) q^{61} + ( - 54 \beta + 5440) q^{67} + ( - 330 \beta - 22920) q^{71} + (240 \beta - 4790) q^{73} + ( - 80 \beta + 38580) q^{77} + (150 \beta + 45536) q^{79} + ( - 170 \beta - 7080) q^{83} + ( - 420 \beta - 17580) q^{89} + ( - 20 \beta - 55808) q^{91} + (1096 \beta + 49390) q^{97}+O(q^{100})$$ q + (-b + 40) * q^7 + (-5*b - 120) * q^11 + (8*b + 340) * q^13 + (-20*b + 270) * q^17 + (20*b + 548) * q^19 + (10*b + 1740) * q^23 + (-40*b - 4710) * q^29 + (-70*b + 1496) * q^31 + (-60*b + 3760) * q^37 + (60*b - 13560) * q^41 + (-88*b - 2360) * q^43 + (110*b + 19140) * q^47 + (-80*b - 6531) * q^49 + (-40*b + 26790) * q^53 + (275*b - 23400) * q^59 + (120*b + 10754) * q^61 + (-54*b + 5440) * q^67 + (-330*b - 22920) * q^71 + (240*b - 4790) * q^73 + (-80*b + 38580) * q^77 + (150*b + 45536) * q^79 + (-170*b - 7080) * q^83 + (-420*b - 17580) * q^89 + (-20*b - 55808) * q^91 + (1096*b + 49390) * q^97 $$\operatorname{Tr}(f)(q)$$ $$=$$ $$2 q + 80 q^{7}+O(q^{10})$$ 2 * q + 80 * q^7 $$2 q + 80 q^{7} - 240 q^{11} + 680 q^{13} + 540 q^{17} + 1096 q^{19} + 3480 q^{23} - 9420 q^{29} + 2992 q^{31} + 7520 q^{37} - 27120 q^{41} - 4720 q^{43} + 38280 q^{47} - 13062 q^{49} + 53580 q^{53} - 46800 q^{59} + 21508 q^{61} + 10880 q^{67} - 45840 q^{71} - 9580 q^{73} + 77160 q^{77} + 91072 q^{79} - 14160 q^{83} - 35160 q^{89} - 111616 q^{91} + 98780 q^{97}+O(q^{100})$$ 2 * q + 80 * q^7 - 240 * q^11 + 680 * q^13 + 540 * q^17 + 1096 * q^19 + 3480 * q^23 - 9420 * q^29 + 2992 * q^31 + 7520 * q^37 - 27120 * q^41 - 4720 * q^43 + 38280 * q^47 - 13062 * q^49 + 53580 * q^53 - 46800 * q^59 + 21508 * q^61 + 10880 * q^67 - 45840 * q^71 - 9580 * q^73 + 77160 * q^77 + 91072 * q^79 - 14160 * q^83 - 35160 * q^89 - 111616 * q^91 + 98780 * q^97
## Embeddings
For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below.
For more information on an embedded modular form you can click on its label.
Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$
1.1
8.26209 −7.26209
0 0 0 0 0 −53.1450 0 0 0
1.2 0 0 0 0 0 133.145 0 0 0
$$n$$: e.g. 2-40 or 990-1000 Significant digits: Format: Complex embeddings Normalized embeddings Satake parameters Satake angles
## Atkin-Lehner signs
$$p$$ Sign
$$2$$ $$-1$$
$$3$$ $$1$$
$$5$$ $$1$$
## Inner twists
This newform does not admit any (nontrivial) inner twists.
## Twists
By twisting character orbit
Char Parity Ord Mult Type Twist Min Dim
1.a even 1 1 trivial 900.6.a.t 2
3.b odd 2 1 900.6.a.u 2
5.b even 2 1 180.6.a.g yes 2
5.c odd 4 2 900.6.d.k 4
15.d odd 2 1 180.6.a.f 2
15.e even 4 2 900.6.d.n 4
20.d odd 2 1 720.6.a.bg 2
60.h even 2 1 720.6.a.bc 2
By twisted newform orbit
Twist Min Dim Char Parity Ord Mult Type
180.6.a.f 2 15.d odd 2 1
180.6.a.g yes 2 5.b even 2 1
720.6.a.bc 2 60.h even 2 1
720.6.a.bg 2 20.d odd 2 1
900.6.a.t 2 1.a even 1 1 trivial
900.6.a.u 2 3.b odd 2 1
900.6.d.k 4 5.c odd 4 2
900.6.d.n 4 15.e even 4 2
## Hecke kernels
This newform subspace can be constructed as the intersection of the kernels of the following linear operators acting on $$S_{6}^{\mathrm{new}}(\Gamma_0(900))$$:
$$T_{7}^{2} - 80T_{7} - 7076$$ T7^2 - 80*T7 - 7076 $$T_{11}^{2} + 240T_{11} - 202500$$ T11^2 + 240*T11 - 202500
## Hecke characteristic polynomials
$p$ $F_p(T)$
$2$ $$T^{2}$$
$3$ $$T^{2}$$
$5$ $$T^{2}$$
$7$ $$T^{2} - 80T - 7076$$
$11$ $$T^{2} + 240T - 202500$$
$13$ $$T^{2} - 680T - 439664$$
$17$ $$T^{2} - 540 T - 3397500$$
$19$ $$T^{2} - 1096 T - 3170096$$
$23$ $$T^{2} - 3480 T + 2160000$$
$29$ $$T^{2} + 9420 T + 8302500$$
$31$ $$T^{2} - 2992 T - 40274384$$
$37$ $$T^{2} - 7520 T - 17096000$$
$41$ $$T^{2} + 27120 T + 152640000$$
$43$ $$T^{2} + 4720 T - 61617344$$
$47$ $$T^{2} - 38280 T + 261360000$$
$53$ $$T^{2} - 53580 T + 703822500$$
$59$ $$T^{2} + 46800 T - 108562500$$
$61$ $$T^{2} - 21508 T - 9285884$$
$67$ $$T^{2} - 10880 T + 4294384$$
$71$ $$T^{2} + 45840 T - 419490000$$
$73$ $$T^{2} + 9580 T - 476793500$$
$79$ $$T^{2} - 91072 T + 1878317296$$
$83$ $$T^{2} + 14160 T - 200610000$$
$89$ $$T^{2} + 35160 T - 1221390000$$
$97$ $$T^{2} - 98780 T - 7982377916$$
|
|
Why isn't the cosmic microwave background a solid 3D volume?
This is the cosmic microwave background radiation:
It's a Mollweide Projection, which maps the surface of a sphere. Like, this is a map of Earth (also Mollweide Projection):
And it only shows the outer surface of Earth. You can't see the inside.
What I don't get is, how can the cosmic microwave background radiation be mapped as a flat sphere? The CMB is everywhere in the universe, so why isn't it a solid 3D volume? How come it doesn't look like this:
Why is the CMB a sphere as opposed to a volume?
|
|
# The Fender® Bassman 5F6-A, 3rd Edition
Click to zoom in
Customer Images:
No Images yet! Submit a product image below!
# The Fender® Bassman 5F6-A, 3rd Edition
$35.96 Save 10% Originally:$39.95
In Stock
The Bassman was introduced in 1952 as a bass guitar amplifier. It had a huge 15-inch speaker with a closed back and used a pair of beam power tubes for lots of volume. Throughout the decade it would be modified and improved until culminating in the model 5F6-A. A legend was born.
Ironically it didn't quite make it as a bass amp, but the musical tones it created from a guitar made history. The Bassman's long-tailed-pair phase splitter with negative feedback became the foundation for many great rock and blues sounds. Many other amplifier manufacturers created new Bassman-derived designs that themselves became the legendary instruments of a rock generation. For more than four decades the 5F6-A circuit has been tweaked, enhanced, and re-introduced under many different logos and the late-1950s original has become one of the most collected guitar amplifiers of all time.
This book examines this famous amplifier by studying its circuit design in great detail. It starts by breaking the amplifier into its major components: the 12AY7 preamp, the 12AX7 voltage amp, the 12AX7 cathode follower, the tone stack, the long-tailed-pair phase splitter, the push-pull power amplifier, and the power supply. Each component is analyzed to determine how it works and to derive the design formulas needed to predict its performance. The results are then compared to bench tests of the actual circuit. Finally all of the components are put together to analyze total system behavior and to discover how and where the amp transitions into distortion.
SKU:
B-957
Item ID:
001449
ISBN:
9780976982258
UPC/EAN:
609722150705
Brand:
Fender
Category:
Type:
Item Height 0.8 in. Item Length 9 in. Item Width 7 in. Page Count 220 pages
Packaging Dimensions 9 in. × 7 in. × 0.8 in. Weight (Packaging) 1.31 lbs.
|
|
# The effect of turbulence strength on meandering field lines and Solar Energetic Particle event extents
Laitinen, Timo Lauri mikael ORCID: 0000-0002-7719-7783, Effenberger, Frederic, Kopp, Andreas and Dalla, Silvia ORCID: 0000-0002-7837-5780 (2018) The effect of turbulence strength on meandering field lines and Solar Energetic Particle event extents. Journal of Space Weather and Space Climate, 8 (A13).
However, adding cross-field transport of SEPs as spatial diffusion has been shown to be insufficient in modelling the SEP events without use of unrealistically large cross-field diffusion coefficients. Recently, Laitinen et al.\ (2013b, 2016) demonstrated that the early-time propagation of energetic particles across the mean field direction in turbulent fields is not diffusive, with the particles propagating along meandering field lines. This early-time transport mode results in fast access of the particles across the mean field direction, in agreement with the SEP observations. In this work, we study the propagation of SEPs within the new transport paradigm, and demonstrate the significance of turbulence strength on the evolution of the SEP radiation environment near Earth. We calculate the transport parameters consistently using a turbulence transport model, parametrised by the SEP parallel scattering mean free path at 1 AU, $\lambda*$, and show that the parallel and cross-field transport are connected, with conditions resulting in slow parallel transport corresponding to wider events. We find a scaling $\sigma\propto (1/\lambda*)^{1/4}$ for the Gaussian fitting of the longitudinal distribution of maximum intensities. The longitudes with highest intensities are shifted towards the west for strong scattering conditions. Our results emphasise the importance of understanding both the SEP transport and the interplanetary turbulence conditions for modelling and predicting the SEP radiation environment at Earth.
|
|
# Math Physics Seminar, Volker Bach, "On the Bogolubov-Hartree-Fock (BHF) Approximation for the Pauli-Fierz Model"
Tue, Dec 3, 2013, 4:30 pm to 6:00 pm
The minimal energy of the nonrelativistic one-electron Pauli-Fierz model within the class of quasifree states is studied. It is shown that this minimum is unchanged if one restricts the variation to pure quasifree states, which simplifies the variational problem considerably. Existence and uniqueness of minimizers are established under the assumption of infrared and ultraviolet cutoffs, as well as, a sufficiently small coupling constant and a small momentum of the dressed electron. This is joint work with S. Breteaux and T. Tzaneteas and with S. Breteaux, H.K. Knoerr, and E. Menge.
Location:
|
|
# How does an engine work when an airplane has $v=0$ on the ground?
I mean, for a turbofan: $$T= \dot m_{air,h} (1+f) v_{e,h}+ \dot m_{air,c} v_{e,c}- (\dot m_{air,h}+ \dot m_{air,c} ) v_0+ (P_{e,h}-P_{amb} ) A_{e,h}+ (P_{e,c}-P_{amb} ) A_{e,c}$$
but $$\dot m_{air}=\rho Av_0$$
do I have to impose $$\dot m_{air}$$? in this case how the thrust expression change (in particular the terms with $$v_0$$)?
• In your second equation, $v_0$ should be the speed of the air moving through the area $A$, not the speed of the aircraft, right? – Bianfable Jul 30 '19 at 18:14
• It works the same way as it does in the air. The only difference is that there is no ram air at the intake, the compressor has to suck air in. – Michael Hall Jul 30 '19 at 18:34
• @Bianfable Yes, I think it should. But in my propulsion course we often approximate $v_{inlet}=v_{aircraft}$. I think that my doubt come from this! – wilove Jul 31 '19 at 10:07
• The question you probably should ask when someone says something like "but $\dot m_{air}=\rho Av_0$" is why does that hold? You can turn this question around by asking yourself: what happens if the aircraft is stationary on the ground, pointed directly into the wind, with a wind speed of (say) 30 kt? Now consider the opposite case: what happens if there is no wind (perfectly calm air), but the aircraft is somehow being propelled forward at a speed of 30 kt with no friction losses against the ground? From the aircraft's perspective, what is the difference between the two cases? – a CVn Jul 31 '19 at 16:30
• As a practical example, you might note that somewhat modified jet engines are widely used for electric power generation: en.wikipedia.org/wiki/… – jamesqf Jul 31 '19 at 17:47
To start the engine, the fan blades have to be spun up. This is typically handled by either blowing air through them from some outside source, and/or by using an APU to generate power to drive the shaft they're connected to.
Once the engine is running, it sucks in enough air to keep going on its own.
• And the airflow has velocity $v_0$ which differs from zero when the aircraft is stationary. – Koyovis Jul 31 '19 at 5:43
• Therefore, should I impose $\dot m_{air}$ and calculate $v_0$ knowing the intake geometry? In this case I have one more question: $\dot m_{air}$ is fixed with altitude variation or not (or at least is it a reasonable approximation)? – wilove Jul 31 '19 at 10:14
$$v_0$$ is the airflow speed through the engine, not the airspeed. As per @Bianfable’s comment.
How the engine works at standstill: the compressor sucks air into the inlet, it initiates the flow through the engine. Axial compressor blades similar to a propeller, centrifugal compressors by slinging air out towards the compressor outlet.
Maximum thrust at standstill should be a given for the engine under consideration. At standstill, if the exhaust is choked, the mass flow follows from:
• Thrust.
• Speed of sound at the exhaust gas temperature.
• Exhaust pressure and area.
• Bypass ratio.
With the mass flow given, you can compute $$v_0$$
|
|
# 2.Full, Faithful and Embeddings
## 11.06.2020
### Contents/Index
1.Definition
@2.Full, Faithful and Embeddings
3.Preserves and Reflects
4.Contra-, Co- and Bivariant
5.Co- and Contravariant Hom Functors
Given two categories $\mathcal{C},\mathcal{D}$, and a functor $F : \mathcal{C} \rightarrow \mathcal{D}$, we say that $F$ is
• full if for every two objects, $A,B$ of $\mathcal{C}$ we have that $$F : \mathcal{C}(A,B) \rightarrow \mathcal{D}(F\ A,F\ B)$$ is a surjection. We can make an example where this is not the case. Denote by subscript the category some object belong to, eg. $$F(A_C) = A_D$$ Now say we have the arrows $$f_1,f_2 : A_C \rightarrow B_C$$ in the category $C$. But in the category $D$ we have $$g_1,g_2,g_3 : A_D \rightarrow B_D$$
• faithfull if $F$ is a always injective. We can again make a counter example. Say we in the category $C$ have the arrows $$f_1,f_2,f_3 : A_C \rightarrow B_C$$ and in the category $D$ we have the arrows $$g_1,g_2 : A_D \rightarrow B_D$$ Here we let the functor $F$ map as follows $$[f_1 \mapsto g_1, f_2 \mapsto g_2,f_3 \mapsto g_2]$$
Remeber that $\mathcal{C}(A,B)$ is the class of arrows in $\mathcal{C}$ from the object $A$ to the object $B$.
## Embeddings
If $F$ is full and faithful and injective on objects, we say the $F$ is an embedding. An embedding does not mean that the categories are equal up to isomorphism. With an embedding we are still allowed to have objects in the target category $\mathcal{D}$ that are not present in the source category $\mathcal{C}$. For example let $Cat_1$ be the category with objects $A$ and $B$ and arrows besides the $id$-arrows given as $$f_1,f_2 : A \rightarrow B$$ Let $Cat_2$ be the category with objects $H$, $I$, $J$ and $K$ and arrows $$g_1,g_2 : H \rightarrow I$$ along $$g_3,g_4 : J \rightarrow K$$ Let $F : Cat_1 \rightarrow Cat_2$ the functor that maps objects as follows $$[A \mapsto H,B \mapsto I]$$ and arrows as $$[f_1 \mapsto g_1, f_2 \mapsto g_2]$$ Here $F$ is full, faithful and injective on objects. Hence $F$ is an embedding. It makes quite good intuitive sense that $Cat_1$ is embedded in $Cat_2$.
|
|
Browse Questions
# The asymptotes of the rectangular hyperbola $xy=c^{2}$ are
$\begin{array}{1 1}(1)x=c ,y=c &(2)x=0 ,y=c\\(3)x=c ,y=0&(4)x=0 , y=0\end{array}$
The equation of the rectangular hyperbola is
$xy=c^2$
The the equation of the asymptotes is $xy=0$
ie $x=0,y=0$
Separate equations of the asymptotes are $x=0$ & $y=0$
Hence 4 is the correct answer.
|
|
# Kristinn Torfason
Position: Research staff
# Research projects
• Molecular dynamics simulations of field emission from a planar nanodiode and prolate spheroidal tip.
• Perovskite solar cells.
# Publication
1. Kristinn Torfason, Agust Valfells and Andrei Manolescu.
Molecular Dynamics Simulations of Field Emission From a Prolate Spheroidal Tip.
arXiv preprint arXiv:1608.06789 (2016).
Abstract High resolution molecular dynamics simulations with full Coulomb interactions of electrons are used to investigate field emission from a prolate spheroidal tip. The space charge limited current is several times lower than the current calculated with the Fowler-Nordheim formula. The image-charge is taken into account with a spherical approximation, which is good around the top of the tip, i.e. region where the current is generated.
arXiv BibTeX
@article{torfason2016molecular,
title = "Molecular Dynamics Simulations of Field Emission From a Prolate Spheroidal Tip",
author = "Torfason, Kristinn and Valfells, Agust and Manolescu, Andrei",
journal = "arXiv preprint arXiv:1608.06789",
arxiv = "http://arxiv.org/abs/1608.06789",
year = 2016,
abstract = "High resolution molecular dynamics simulations with full Coulomb interactions of electrons are used to investigate field emission from a prolate spheroidal tip. The space charge limited current is several times lower than the current calculated with the Fowler-Nordheim formula. The image-charge is taken into account with a spherical approximation, which is good around the top of the tip, i.e. region where the current is generated."
}
2. George Alexandru Nemnes, Cristina Besleaga, Andrei Gabriel Tomulescu, Ioana Pintilie, Lucian Pintilie, Kristinn Torfason and Andrei Manolescu.
Dynamic electrical behavior of halide perovskite based solar cells.
arXiv preprint arXiv:1606.00335 (2016).
Abstract A dynamic electrical model is introduced to investigate the hysteretic effects in the I-V characteristics of perovskite based solar cells. By making a simple ansatz for the polarization relaxation, our model is able to reproduce qualitatively and quantitatively detailed features of measured I-V characteristics. Pre-poling effects are discussed, pointing out the differences between initially over- and under-polarized samples. In particular, the presence of the current over-shoot observed in the reverse characteristics is correlated with the solar cell pre-conditioning. Furthermore, the dynamic hysteresis is analyzed with respect to changing the bias scan rate, the obtained results being consistent with experimentally reported data: the hysteresis amplitude is maximum at intermediate scan rates, while at very slow and very fast ones it becomes negligible. The effects induced by different relaxation time scales are assessed. The proposed dynamic electrical model offers a comprehensive view of the solar cell operation, being a practical tool for future calibration of tentative microscopic descriptions.
arXiv BibTeX
@article{nemnes2016dynamic,
title = "Dynamic electrical behavior of halide perovskite based solar cells",
author = "Nemnes, George Alexandru and Besleaga, Cristina and Tomulescu, Andrei Gabriel and Pintilie, Ioana and Pintilie, Lucian and Torfason, Kristinn and Manolescu, Andrei",
journal = "arXiv preprint arXiv:1606.00335",
arxiv = "http://arxiv.org/abs/1606.00335",
year = 2016,
abstract = "A dynamic electrical model is introduced to investigate the hysteretic effects in the I-V characteristics of perovskite based solar cells. By making a simple ansatz for the polarization relaxation, our model is able to reproduce qualitatively and quantitatively detailed features of measured I-V characteristics. Pre-poling effects are discussed, pointing out the differences between initially over- and under-polarized samples. In particular, the presence of the current over-shoot observed in the reverse characteristics is correlated with the solar cell pre-conditioning. Furthermore, the dynamic hysteresis is analyzed with respect to changing the bias scan rate, the obtained results being consistent with experimentally reported data: the hysteresis amplitude is maximum at intermediate scan rates, while at very slow and very fast ones it becomes negligible. The effects induced by different relaxation time scales are assessed. The proposed dynamic electrical model offers a comprehensive view of the solar cell operation, being a practical tool for future calibration of tentative microscopic descriptions."
}
3. Marjan Ilkov, Kristinn Torfason, Andrei Manolescu and Ágúst Valfells.
Terahertz pulsed photogenerated current in microdiodes at room temperature.
Applied Physics Letters 107, (2015).
Abstract Space-charge modulation of the current in a vacuum diode under photoemission leads to the formation of beamlets with time periodicity corresponding to THz frequencies. We investigate the effect of the emitter temperature and internal space-charge forces on the formation and persistence of the beamlets. We find that temperature effects are most important for beam degradation at low values of the applied electric field, whereas at higher fields, intra-beamlet space-charge forces are dominant. The current modulation is most robust when there is only one beamlet present in the diode gap at a time, corresponding to a macroscopic version of the Coulomb blockade. It is shown that a vacuum microdiode can operate quite well as a tunable THz oscillator at room temperature with an applied electric field above 10 MV/m and a diode gap of the order of 100 nm.
URL arXiv, DOI BibTeX
@article{ilkov2015terahertz,
author = "Ilkov, Marjan and Torfason, Kristinn and Manolescu, Andrei and Valfells, Ágúst",
title = "Terahertz pulsed photogenerated current in microdiodes at room temperature",
journal = "Applied Physics Letters",
year = 2015,
volume = 107,
number = 20,
eid = 203508,
pages = "",
url = "http://scitation.aip.org/content/aip/journal/apl/107/20/10.1063/1.4936176",
doi = "10.1063/1.4936176",
abstract = "Space-charge modulation of the current in a vacuum diode under photoemission leads to the formation of beamlets with time periodicity corresponding to THz frequencies. We investigate the effect of the emitter temperature and internal space-charge forces on the formation and persistence of the beamlets. We find that temperature effects are most important for beam degradation at low values of the applied electric field, whereas at higher fields, intra-beamlet space-charge forces are dominant. The current modulation is most robust when there is only one beamlet present in the diode gap at a time, corresponding to a macroscopic version of the Coulomb blockade. It is shown that a vacuum microdiode can operate quite well as a tunable THz oscillator at room temperature with an applied electric field above 10 MV/m and a diode gap of the order of 100 nm.",
arxiv = "http://arxiv.org/abs/1508.06308"
}
4. Kristinn Torfason, Agust Valfells and Andrei Manolescu.
Molecular dynamics simulations of field emission from a planar nanodiode.
Physics of Plasmas (1994-present) 22, - (2015).
Abstract High resolution molecular dynamics simulations with full Coulomb interactions of electrons are used to investigate field emission in planar nanodiodes. The effects of space-charge and emitter radius are examined and compared to previous results concerning transition from Fowler-Nordheim to Child-Langmuir current [Y. Y. Lau, Y. Liu, and R. K. Parker, Phys. Plasmas 1, 2082 (1994) and Y. Feng and J. P. Verboncoeur, Phys. Plasmas 13, 073105 (2006)]. The Fowler-Nordheim law is used to determine the current density injected into the system and the Metropolis-Hastings algorithm to find a favourable point of emission on the emitter surface. A simple fluid like model is also developed and its results are in qualitative agreement with the simulations.
URL arXiv, DOI BibTeX
@article{4914855,
author = "Torfason, Kristinn and Valfells, Agust and Manolescu, Andrei",
title = "Molecular dynamics simulations of field emission from a planar nanodiode",
journal = "Physics of Plasmas (1994-present)",
year = 2015,
volume = 22,
number = 3,
eid = 033109,
pages = "-",
url = "http://scitation.aip.org/content/aip/journal/pop/22/3/10.1063/1.4914855",
doi = "http://dx.doi.org/10.1063/1.4914855",
arxiv = "http://arxiv.org/abs/1412.4537",
abstract = "High resolution molecular dynamics simulations with full Coulomb interactions of electrons are used to investigate field emission in planar nanodiodes. The effects of space-charge and emitter radius are examined and compared to previous results concerning transition from Fowler-Nordheim to Child-Langmuir current [Y. Y. Lau, Y. Liu, and R. K. Parker, Phys. Plasmas 1, 2082 (1994) and Y. Feng and J. P. Verboncoeur, Phys. Plasmas 13, 073105 (2006)]. The Fowler-Nordheim law is used to determine the current density injected into the system and the Metropolis-Hastings algorithm to find a favourable point of emission on the emitter surface. A simple fluid like model is also developed and its results are in qualitative agreement with the simulations."
}
5. M Ilkov, K Torfason, A Manolescu and A Valfells.
Synchronization in Arrays of Vacuum Microdiodes.
Electron Devices, IEEE Transactions on PP, 1-1 (2014).
Abstract Simulations have shown that space-charge effects can lead to regular modulation of photoemitted beams in vacuum diodes with gap sizes on the order of 1 μm and accelerating voltage on the order of $1$ V. These modulations are in the terahertz regime and can be tuned by simply changing the emitter area or accelerating vacuum field. The average current in the diode corresponds to the Child–Langmuir current, but the amplitude of the oscillations is affected by various factors. Given the small size and voltage of the system, the maximum radiated ac power is expected to be small. In this paper, we show that an array of small emitters produces higher frequency signals than a single large emitter of the same area and how these emitters may be synchronized to produce higher power signals.
arXiv, DOI BibTeX
@article{6979259,
author = "Ilkov, M. and Torfason, K. and Manolescu, A. and Valfells, A.",
journal = "Electron Devices, IEEE Transactions on",
title = "Synchronization in Arrays of Vacuum Microdiodes",
year = 2014,
month = "",
volume = "PP",
number = 99,
pages = "1-1",
abstract = "Simulations have shown that space-charge effects can lead to regular modulation of photoemitted beams in vacuum diodes with gap sizes on the order of 1 μm and accelerating voltage on the order of $1$ V. These modulations are in the terahertz regime and can be tuned by simply changing the emitter area or accelerating vacuum field. The average current in the diode corresponds to the Child--Langmuir current, but the amplitude of the oscillations is affected by various factors. Given the small size and voltage of the system, the maximum radiated ac power is expected to be small. In this paper, we show that an array of small emitters produces higher frequency signals than a single large emitter of the same area and how these emitters may be synchronized to produce higher power signals.",
keywords = "Cathodes;Couplings;Frequency synchronization;Oscillators;Quantum cascade lasers;Space charge;Synchronization;Synchronization;terahertz;vacuum microelectronics.",
doi = "10.1109/TED.2014.2370680",
issn = "0018-9383",
arxiv = "http://arxiv.org/abs/1409.0516"
}
6. Kristinn Torfason, Andrei Manolescu, Sigurdur I Erlingsson and Vidar Gudmundsson.
Thermoelectric current and Coulomb-blockade plateaus in a quantum dot.
Physica E: Low-dimensional Systems and Nanostructures 53, 178 - 185 (2013).
Abstract A Generalized Master Equation (GME) is used to study the thermoelectric currents through a quantum dot in both the transient and steady-state regime. The two semi-infinite leads are kept at the same chemical potential but at different temperatures to produce a thermoelectric current which has a varying sign depending on the chemical potential. The Coulomb interaction between the electrons in the sample is included via the exact diagonalization method. We observe a saw-teeth like profile of the current alternating with plateaus of almost zero current. Our calculations go beyond the linear response with respect to the temperature gradient, but are compatible with known results for the thermopower in the linear response regime.
URL arXiv, DOI BibTeX
@article{Torfason2013178,
title = "Thermoelectric current and Coulomb-blockade plateaus in a quantum dot",
journal = "Physica E: Low-dimensional Systems and Nanostructures",
volume = 53,
number = 0,
pages = "178 - 185",
year = 2013,
note = "",
issn = "1386-9477",
doi = "10.1016/j.physe.2013.05.005",
url = "http://www.sciencedirect.com/science/article/pii/S1386947713001689",
author = "Kristinn Torfason and Andrei Manolescu and Sigurdur I. Erlingsson and Vidar Gudmundsson",
abstract = "A Generalized Master Equation (GME) is used to study the thermoelectric currents through a quantum dot in both the transient and steady-state regime. The two semi-infinite leads are kept at the same chemical potential but at different temperatures to produce a thermoelectric current which has a varying sign depending on the chemical potential. The Coulomb interaction between the electrons in the sample is included via the exact diagonalization method. We observe a saw-teeth like profile of the current alternating with plateaus of almost zero current. Our calculations go beyond the linear response with respect to the temperature gradient, but are compatible with known results for the thermopower in the linear response regime.",
arxiv = "http://arxiv.org/abs/1303.3160"
}
7. Kristinn Torfason.
Variations on Transport for a Quantum Flute.
University of Iceland and Reykjavik University (2013).
Abstract A time-dependent Lippmann-Schwinger scattering model is used to study the transport of a time-modulated double quantum point contact system in the presence of perpendicular magnetic field. The conductance through the system is calculated using the Landauer-Büttiker framework. An observed magnetic field induced Fano resonance is seen in the conductance. A Generalized Master Equation (GME) is then used to describe the non-equilibrium time-dependent transport through a similar system, a short quantum wire connected to semi-infinite leads. A lattice model is used to described the leads and system, with the Coulomb interaction between the electrons in the sample included via the exact diagonalization method. The contact coupling strength between the leads and the wire is modulated by out-of-phase time-dependent potentials that simulate a turnstile device. The placement of one of the leads is fixed while the position of the other is varied. The propagation of both sinusoidal and rectangular pulses is examined. The current profiles in both leads are found to depend on not only the shape of the pulses, but also the position of the contacts. The current reflects standing waves created by the contact potentials, like in a wind musical instrument (for example, a flute). Finally thermoelectric currents through a quantum dot are studied in both the transient and steady-state regime using the GME. The two semi-infinite leads are kept at the same chemical potential but at different temperatures to produce a thermoelectric current, which has a varying sign depending on the chemical potential. A saw-tooth like profile is observed in the current along with plateaus of zero current.
URL BibTeX
@phdthesis{torfason2013variations,
title = "Variations on Transport for a Quantum Flute",
author = "Torfason, Kristinn",
year = 2013,
school = "University of Iceland and Reykjavik University",
url = "http://hdl.handle.net/1946/14320",
abstract = "A time-dependent Lippmann-Schwinger scattering model is used to study the transport of a time-modulated double quantum point contact system in the presence of perpendicular magnetic field. The conductance through the system is calculated using the Landauer-Büttiker framework. An observed magnetic field induced Fano resonance is seen in the conductance. A Generalized Master Equation (GME) is then used to describe the non-equilibrium time-dependent transport through a similar system, a short quantum wire connected to semi-infinite leads. A lattice model is used to described the leads and system, with the Coulomb interaction between the electrons in the sample included via the exact diagonalization method. The contact coupling strength between the leads and the wire is modulated by out-of-phase time-dependent potentials that simulate a turnstile device. The placement of one of the leads is fixed while the position of the other is varied. The propagation of both sinusoidal and rectangular pulses is examined. The current profiles in both leads are found to depend on not only the shape of the pulses, but also the position of the contacts. The current reflects standing waves created by the contact potentials, like in a wind musical instrument (for example, a flute). Finally thermoelectric currents through a quantum dot are studied in both the transient and steady-state regime using the GME. The two semi-infinite leads are kept at the same chemical potential but at different temperatures to produce a thermoelectric current, which has a varying sign depending on the chemical potential. A saw-tooth like profile is observed in the current along with plateaus of zero current."
}
8. Kristinn Torfason, Andrei Manolescu, Valeriu Molodoveanu and Vidar Gudmundsson.
Excitation of collective modes in a quantum flute.
Phys. Rev. B 85, 245114 (June 2012).
Abstract We use a generalized master equation (GME) formalism to describe the nonequilibrium time-dependent transport of Coulomb interacting electrons through a short quantum wire connected to semi-infinite biased leads. The contact strength between the leads and the wire is modulated by out-of-phase time-dependent potentials that simulate a turnstile device. We explore this setup by keeping the contact with one lead at a fixed location at one end of the wire, whereas the contact with the other lead is placed on various sites along the length of the wire. We study the propagation of sinusoidal and rectangular pulses. We find that the current profiles in both leads depend not only on the shape of the pulses, but also on the position of the second contact. The current reflects standing waves created by the contact potentials, like in a wind musical instrument (for example, a flute), but occurring on the background of the equilibrium charge distribution. The number of electrons in our quantum “flute” device varies between two and three. We find that for rectangular pulses the currents in the leads may flow against the bias for short time intervals, due to the higher harmonics of the charge response. The GME is solved numerically in small time steps without resorting to the traditional Markov and rotating wave approximations. The Coulomb interaction between the electrons in the sample is included via the exact diagonalization method. The system (leads plus sample wire) is described by a lattice model.
URL arXiv, DOI BibTeX
@article{PhysRevB.85.245114,
title = "Excitation of collective modes in a quantum flute",
author = "Torfason, Kristinn and Manolescu, Andrei and Molodoveanu, Valeriu and Gudmundsson, Vidar",
journal = "Phys. Rev. B",
volume = 85,
issue = 24,
pages = 245114,
numpages = 9,
year = 2012,
month = "Jun",
doi = "10.1103/PhysRevB.85.245114",
publisher = "American Physical Society",
abstract = "We use a generalized master equation (GME) formalism to describe the nonequilibrium time-dependent transport of Coulomb interacting electrons through a short quantum wire connected to semi-infinite biased leads. The contact strength between the leads and the wire is modulated by out-of-phase time-dependent potentials that simulate a turnstile device. We explore this setup by keeping the contact with one lead at a fixed location at one end of the wire, whereas the contact with the other lead is placed on various sites along the length of the wire. We study the propagation of sinusoidal and rectangular pulses. We find that the current profiles in both leads depend not only on the shape of the pulses, but also on the position of the second contact. The current reflects standing waves created by the contact potentials, like in a wind musical instrument (for example, a flute), but occurring on the background of the equilibrium charge distribution. The number of electrons in our quantum “flute” device varies between two and three. We find that for rectangular pulses the currents in the leads may flow against the bias for short time intervals, due to the higher harmonics of the charge response. The GME is solved numerically in small time steps without resorting to the traditional Markov and rotating wave approximations. The Coulomb interaction between the electrons in the sample is included via the exact diagonalization method. The system (leads plus sample wire) is described by a lattice model.",
arxiv = "http://arxiv.org/abs/1202.0566"
}
9. Kristinn Torfason, Andrei Manolescu, Valeriu Molodoveanu and Vidar Gudmundsson.
Generalized Master equation approach to mesoscopic time-dependent transport.
Journal of Physics: Conference Series 338, 012017 (2012).
Abstract We use a generalized Master equation (GME) formalism to describe the non-equilibrium time-dependent transport through a short quantum wire connected to semi-infinite biased leads. The contact strength between the leads and the wire are modulated by out-of-phase time-dependent functions which simulate a turnstile device. One lead is fixed at one end of the sample whereas the other lead has a variable placement. The system is described by a lattice model. We find that the currents in both leads depend on the placement of the second lead. In the rather small bias regime we obtain transient currents flowing against the bias for short time intervals. The GME is solved numerically in small time steps without resorting to the traditional Markov and rotating wave approximations. The Coulomb interaction between the electrons in the sample is included via the exact diagonalization method.
arXiv, DOI BibTeX
@article{1742-6596-338-1-012017,
author = "Kristinn Torfason and Andrei Manolescu and Valeriu Molodoveanu and Vidar Gudmundsson",
title = "Generalized Master equation approach to mesoscopic time-dependent transport",
journal = "Journal of Physics: Conference Series",
volume = 338,
number = 1,
pages = 012017,
doi = "10.1088/1742-6596/338/1/012017",
year = 2012,
arxiv = "http://arxiv.org/abs/1109.2301",
abstract = "We use a generalized Master equation (GME) formalism to describe the non-equilibrium time-dependent transport through a short quantum wire connected to semi-infinite biased leads. The contact strength between the leads and the wire are modulated by out-of-phase time-dependent functions which simulate a turnstile device. One lead is fixed at one end of the sample whereas the other lead has a variable placement. The system is described by a lattice model. We find that the currents in both leads depend on the placement of the second lead. In the rather small bias regime we obtain transient currents flowing against the bias for short time intervals. The GME is solved numerically in small time steps without resorting to the traditional Markov and rotating wave approximations. The Coulomb interaction between the electrons in the sample is included via the exact diagonalization method."
}
10. Chi-Shung Tang, Kristinn Torfason and Vidar Gudmundsson.
Magnetotransport in a time-modulated double quantum point contact system.
Computer Physics Communications 182, 65 - 67 (2011).
Abstract We report on a time-dependent Lippmann-Schwinger scattering theory that allows us to study the transport spectroscopy in a time-modulated double quantum point contact system in the presence of a perpendicular magnetic field. Magnetotransport properties involving inter-subband and inter-sideband transitions are tunable by adjusting the time-modulated split-gates and the applied magnetic field. The observed magnetic field induced Fano resonance feature may be useful for the application of quantum switching.
arXiv, DOI BibTeX
@article{Tang201165,
title = "Magnetotransport in a time-modulated double quantum point contact system",
journal = "Computer Physics Communications",
volume = 182,
number = 1,
pages = "65 - 67",
year = 2011,
doi = "10.1016/j.cpc.2010.06.023",
author = "Chi-Shung Tang and Kristinn Torfason and Vidar Gudmundsson",
arxiv = "http://arxiv.org/abs/1002.1551",
abstract = "We report on a time-dependent Lippmann-Schwinger scattering theory that allows us to study the transport spectroscopy in a time-modulated double quantum point contact system in the presence of a perpendicular magnetic field. Magnetotransport properties involving inter-subband and inter-sideband transitions are tunable by adjusting the time-modulated split-gates and the applied magnetic field. The observed magnetic field induced Fano resonance feature may be useful for the application of quantum switching."
}
|
|
# Quantum verification of NP problems with single photons and linear optics
2021 ◽
Vol 10 (1) ◽
Author(s):
Aonan Zhang ◽
Hao Zhan ◽
Junjie Liao ◽
Kaimin Zheng ◽
Tao Jiang ◽
...
Keyword(s):
AbstractQuantum computing is seeking to realize hardware-optimized algorithms for application-related computational tasks. NP (nondeterministic-polynomial-time) is a complexity class containing many important but intractable problems like the satisfiability of potentially conflict constraints (SAT). According to the well-founded exponential time hypothesis, verifying an SAT instance of size n requires generally the complete solution in an O(n)-bit proof. In contrast, quantum verification algorithms, which encode the solution into quantum bits rather than classical bit strings, can perform the verification task with quadratically reduced information about the solution in $$\tilde O(\sqrt n )$$ O ̃ ( n ) qubits. Here we realize the quantum verification machine of SAT with single photons and linear optics. By using tunable optical setups, we efficiently verify satisfiable and unsatisfiable SAT instances and achieve a clear completeness-soundness gap even in the presence of experimental imperfections. The protocol requires only unentangled photons, linear operations on multiple modes and at most two-photon joint measurements. These features make the protocol suitable for photonic realization and scalable to large problem sizes with the advances in high-dimensional quantum information manipulation and large scale linear-optical systems. Our results open an essentially new route toward quantum advantages and extend the computational capability of optical quantum computing.
2004 ◽
Author(s):
Selim Shahriar
Keyword(s):
2021 ◽
Vol 113 (4) ◽
pp. 252-258
Author(s):
A. I. Galimov ◽
M. V. Rakhlin ◽
G. V. Klimko ◽
Yu. A. Guseva ◽
...
Keyword(s):
2004 ◽
Author(s):
J. D. Franson
Keyword(s):
2019 ◽
Author(s):
Elizabeth Behrman ◽
Nam Nguyen ◽
James Steck
Keyword(s):
<p>Noise and decoherence are two major obstacles to the implementation of large-scale quantum computing. Because of the no-cloning theorem, which says we cannot make an exact copy of an arbitrary quantum state, simple redundancy will not work in a quantum context, and unwanted interactions with the environment can destroy coherence and thus the quantum nature of the computation. Because of the parallel and distributed nature of classical neural networks, they have long been successfully used to deal with incomplete or damaged data. In this work, we show that our model of a quantum neural network (QNN) is similarly robust to noise, and that, in addition, it is robust to decoherence. Moreover, robustness to noise and decoherence is not only maintained but improved as the size of the system is increased. Noise and decoherence may even be of advantage in training, as it helps correct for overfitting. We demonstrate the robustness using entanglement as a means for pattern storage in a qubit array. Our results provide evidence that machine learning approaches can obviate otherwise recalcitrant problems in quantum computing. </p> <p> </p>
2007 ◽
Vol 18 (11) ◽
pp. 34
Author(s):
Zhen-Sheng Yuan ◽
Yu-Ao Chen ◽
Shuai Chen ◽
Jian-Wei Pan
Keyword(s):
Keyword(s):
This chapter suggests how individual netizens or companies can uncover “pushing hand” operations. It is vitally important that Internet users, either corporations or individuals should acquire some knowledge and skills in identifying Internet mercenary marketing schemes since unrestricted information manipulation has grown to such a large scale that it led to a media claim that 70% of visits to the Chinese Internet derived from pushing hand operations. Evaluating information and deciding whether it is in fact a genuine recommendation from netizens or managed information from pushing hands is not an easy task. Several clues of online information evaluation are provided.
2021 ◽
pp. 142-185
Author(s):
Andrew V. Z. Brower ◽
Randall T. Schuh
This chapter evaluates “quantitative cladistics” in detail, including the issues of fit, parsimony algorithms, and character weighting. Although systematists have long associated characters with taxa, the relationship between character data and “phylogeny” has not always been obvious. The ideas of Willi Hennig clarified this relationship, and the formalization of these concepts in a quantitative method, via the parsimony criterion, allowed for computer implementation of phylogenetic inference and the feasible solution of previously intractable problems. It is this computational capability that took the study of taxonomic relationships from an almost purely qualitative and speculative enterprise to one dominated by the use of computer software and “objective” methodologies. The chapter then discusses the use, advantages, and disadvantages of maximum likelihood and Bayesian techniques as alternative approaches to the application of parsimony.
Author(s):
Bruce Levinson
Keyword(s):
2019 ◽
Vol 56 (10) ◽
pp. 9-10
Author(s):
Mark Anderson
Keyword(s):
2020 ◽
Vol 4 (3) ◽
pp. 24
Author(s):
Noah Cowper ◽
Harry Shaw ◽
David Thayer
Keyword(s):
The ability to send information securely is a vital aspect of today’s society, and with the developments in quantum computing, new ways to communicate have to be researched. We explored a novel application of quantum key distribution (QKD) and synchronized chaos which was utilized to mask a transmitted message. This communication scheme is not hampered by the ability to send single photons and consequently is not vulnerable to number splitting attacks like other QKD schemes that rely on single photon emission. This was shown by an eavesdropper gaining a maximum amount of information on the key during the first setup and listening to the key reconciliation to gain more information. We proved that there is a maximum amount of information an eavesdropper can gain during the communication, and this is insufficient to decode the message.
|
|
S355 Youngs's Modulus … over 4 to … Galvanized Steel Stainless Steel ... Modulus of Elasticity (N/mm²) 210.0000 Modulus of Elasticity (N/mm²) 196.0000 196.0000 ... Steel sheet, plate, and strip for general purposes EN 10088 - (1) Stainless Steel - Material Standard for Stainless Steel … Young’s Modulus of Elasticity 200 x 103 MPa at 20 °C Density 7.87 g/cm3 at 20 °C Coefficient of Thermal Expansion Low-Carbon/HSLAS: 12.4 μm/m/°C in 20 – 100 °C range I-F Steel: 12.9 μm/m/°C in 20 – 100 °C range Thermal … ... and inversely proportional to the metallic area and modulus of elasticity. According to German standard DIN 17100, St37 steel is divided into … Increasing the temperature decreases the Young’s Modulus. Modulus of Elasticity : 200 GPa: 29000 ksi: Typical Carbon Steel: Bulk Modulus : 160 GPa: … s355 steel elastic modulus Steel type - A283 Carbon . Steel Type : ASTM Designation : F y Min. references & resources Werner Sölken: Definition and Details of Pipes Թ�N.B'���e�) �A��U�S LG#��"��|54#���n�@2��������V}n�к~��O� j��Z۬��ǭ[��x96]�y{�'f#�t� �)䉆�|wpO�5 y����,� �(��x�R��&�Tr���*� �*҉���zǺź�m���O��z����1'�Y3�����]���z�Kֳ��X� �_��� Young's modulus E, can be calculated by dividing the tensile stress , σ ( ε ) {\displaystyle \sigma (\varepsilon )} , by the engineering extensional strain, ε. PNC Park. �9#�I>Dv@�(���H���H*�\a�"2ʭ��5�7���V����b);x{{� � It was also found to have a yield strength of 423 MPa, an ultimate strength of 470 MPa and an elastic modulus of 225 GPa. Strength is a measure of the stress that can be applied to a material before it permanently deforms (yield strength) or breaks (tensile strength). over 2.5 to 4 incl. The A 500 carbon steel … Hard Drawn is the most inexpensive general purpose spring steel … The elastic … The reason for this reputation is experience. You cannot galvanise the interior of a steel, at the maximum the galvanised layer extends up to tens or hundreds of microns in the surface. 4R�@"!�Ǘ899:�����YN� �D@:!�38�6N�˹k�݃��Qa��R��k1�#��ypI(,^O��]�������Y-�:!T���wGSy�Yw��V���Jކ=�G�:/og Bn��6��-��>�3tP���������H����JWW7��m������v����˯c|�����-� [���;�p�۟+KxmAnuAd�5�ό�g Yield Stress (ksi) F u Tensile Stress (ksi) Plates and bars: to 0.75 incl. If the applied stress is less than the yield strength, the material returns to its original shape when the stress is removed. There are other numbers that give us a measure of elastic properties of a material, like Bulk modulus and shear modulus, but the value of Young’s Modulus … 246 MN/m 2 (35,000 psi) Elongation (rolled, with grain) (99.95% zinc soft temper) 65%. T (oC) = 5/9 [T (oF) - 32] For full table with Higher Temperatures - rotate the screen! Yield strength on schedule 80 pipes runs a wide gamut depending upon the manufacturer and type of galvanized steel used. BB�IE�eD��C��3����������Z]�3�e��Fx;F��:�1�:�s7��6���O^�s���w/��P����NQ-�A���lj*�mEөR4��8>���kwpt�����ёި�܇�ȃ��xXiA9�]��\O�'ד�����pa8��g8/.���%$rB~(%�����P%�B�Ȍ����݈=Q��xn�K��To������NF�\�� }~^n���1���RS���G��F�>lhlL��( 20(0�_���uw�ˤN��H(��(��(���$@˓��$�154�Nr�H IR'Yh#m�� Young's Modulus of Elasticity - E - … 20�@J�4���ׁ��J�T�R J�3��s���L�� 5���cM��� h�6�' �%�!G{sޘ���;3g��v��������o���Q9~� ?���y����� 4�x�T�����/l��1� �]���W�����r�����>b%rI6 ��G�����n�b����n��(�Aq�!�9����� ���v$��ţ�ޖ��Ҧ#y��C.���. Young's modulus E describes the material's strain response to … �\,3���R)'��aieV�d)rG�h�.�["����PJ*�h��C&��0HeXB�dR$���3r�0�dn��ź�O.�ĵ@��i�9iW�_�t��R���!Q��j�\���2�� Shear Modulus c GPa: 11600 80.0: Method of Manufacture Chief Uses Special Properties: Cold drawn. |G��)�5�(.��9dž�,�Z�%V�Z2R�tQ�a��ӗe��d�&j�ݵ����w>��c��W�(�p ���$Q��TD?1�&UpcJ5B�O �u^J��D< For instance, schedule 80 pipes produced by Wheatland Tubes exhibit a minimum yield strength of 30,000 PSI or 205 MPa, while One Steel … Also the galvanized layer is just Zn, there is no … All of them arise in the generalized Hooke's law: . @$�'��#����0$�� �������YT�� St37-2 steel (1.0037 material) is an unalloyed structural steel grade complies with DIN 17100: 1980 and has been discarded since 2004. Keep in … Young's Modulus Steel, Young's Modulus Steel Suppliers Directory - Find variety Young's Modulus Steel Suppliers, Manufacturers, Companies from around the World at steel structure ,25mm round deformed steel bar ,stainless steel pipe, Steel … Our managers and foremen all have at least 20 years of experience with our company and a majority of our workforce has more than 10 years each. Quality through experience! 18-8 stainless steel properties – physical properties including density, melting point, magnetic, specific heat capacity, electrical resistivity, modulus of elasticity (elastic modulus), thermal diffusivity, thermal … { v��;g�+��8��&�r; The ASTM A500 specifications state that manufactured carbon steel tubing must meet certain specifications before being sold for any type of project. Young's modulus (E or Y) is a measure of a solid's stiffness or resistance to elastic deformation under load. The published BNF report ‘Galvanizing of structural steels and their weldments’ ILZRO, 1975, concludes that ‘… the galv… The elastic limit of stainless steel wire rope is approximately 60% of its breaking strength and for galvanized … The shear modulus is one of several quantities for measuring the stiffness of materials. (98.0% zinc hard temper) 5%. For … Included were steels to Australian Standard 1511 grade A specification, and British Standard 4360 series steels. AISI 4140 alloy steel can be tempered at 205 to 649°C (400 to 1200°F) depending upon the desired hardness level. 1 0 obj <>/Font<>/XObject<>>>>> endobj 2 0 obj <> endobj 3 0 obj <> endobj 4 0 obj <> endobj 5 0 obj <> endobj 6 0 obj <> endobj 7 0 obj <>stream Corrosionpedia explains Young's Modulus When corrosion due to oxidation in an acidic environment takes place on a metal surface, the metal tends to lose its stiffness and its Young's modulus or modulus … 7 x 104 MN/m 2 (1 x 107 psi) Brinell hardness, 500 kg load for 30 sec. Modulus of elasticity is also a measure of material's stiffness or resistance to elastic … This applies only to loads that do not exceed the elastic limit of a wire rope. over 0.75 to 1.25: over 1.25 to 1.5: over 1.5 to 2 incl. Quality through experience! Young's modulus is also known as elastic modulus. ڢ�ֳkc��Y�n�����{���X&���kLKr�W�ZR�~o7��ᴞa�U�N�me���h�fA��b�^\.Z ���AHs�ڮӱ�\1�e�N+mU��sԐOf{�de9�t���-�]��m��E���2x�,�ӿE"�(���c��4<48,�bY$ۡ��):B��k��fn��\��#'��aT�I�p2��@N���K:)ۈ����Іk�p�ՎX�����W;fQofE�r��ثf�p�N��.�y�(��Bg�pQ~����.�8����Ž6+It)�(��E6ؐ+l �aV��mR�E���o���Y|�����,^ .�]1Lo��϶�&�U��� �Vُ�N ��ʾ ��%�|ss�}s������|U�t>��|�T2߇�����������L�� ��F�h��=���-�����Y��g��N��:��>c�$��q�$�O���-�)ɫ}%�^�D�;�X������w�*��X�/�J��,)�+}Z�AzFJI�+��R�ǂ|�E���i�&� Q2��]1w��y���靂nx���'X�"K@.uc,�E�_0Aߎ�r��e�P�w�%"Woa� �Rh����(���\�nf/s!��n��/�B��.� ��u l���i��CJ���l�Z]�o ���_���uE�e���d��]��\� ��(�M@T�q��T�دV�A'1�%� ��x��H��"�kõrg+�ʵF�����>L����hz���qwt�O����8��#A�t The elastic modulus will be either 29,000 kips/square inch for conventional steel strengths, or 28,000 kips/square inch for higher strengths (high strength steels tend to be more brittle). the primary designation relates to the yield strength, e.g. The mechanical properties of 19 structural steels from major industrial areas of the world were investigated before and after galvanizing in a major 4-year research project by the BNF Technology Centre, UK, under the sponsorship of International Lead Zinc Research Organization. Home of the Pittsburgh Pirates. Young’s Modulus of Elasticity 200 x 103 MPa at 20 °C Density 7.87 g/cm3 at 20 °C Coefficient of Thermal Expansion Low-Carbon/HSLAS: 12.4 μm/m/°C in 20 – 100 °C range I-F Steel: 12.9 μm/m/°C in 20 – 100 °C range Thermal Conductivity Low-Carbon/HSLAS: 89 W/m°C at 20 °C I-F Steel… Within each material class there are very small differences. To steel wire ropes the E-module is more of a construction constant than a material constant. If the applied stress exceeds the yield strength, plastic or permanent deformation occurs, and the material can no longer return to its original shape once the load is removed. x��z Xײ�9�=;03�*�ð�l=�1�# Ȣ3�� âl�1n�'��(�Uc���Q��Q�M�I�QI�MnbL4.���{�Ec�w�������>t�s�ԩ�S���t3#� (�$榥�&��0B�M���ĤdH��P�'@哒��+��FC�QLJn~�S��� The hardness of the steel can be increased if it has a lower tempering temperature. 1 psi (lb/in2) = 1 psi (lb/in2) = 144 psf (lbf/ft2) = 6,894.8 Pa (N/m2) = 6.895x10-3 N/mm2. This article discusses the properties and applications of stainless steel grade 304 (UNS S30400). GALVANIZED CARBON STEEL. Lower cost springs and wire forms. over 2 to 2.5 incl. %PDF-1.5 %���� Steel material properties SteelConstructionfo Durability depends on the particular alloy type ordinary carbon steel, weathering steel or stainless steel. This cumulative experience adds up to a better product for our customers. ... S355 Young's Modulus. Young Galvanizing, Inc. recognized again for excellence in galvanizing! A�|?+7�8Z]2�L;���-��^VW^=��!��C v��R�c0.+�j*��\��L��&c�a�m��a|H ��]�������&} w���SM�5��o�����-1~P�~,�e�7��8���%��Z�O��M���B��"��QW��De��fv���Twq��I9�<����.���e�t�-\$"�:�r���K�y�܃�E��b��l�. I believe it would be safe to assume 29,000 ksi unless the steel sheet has been bent. Average stress applications. A500 Carbon Steel Structural Tubing Specifications & Requirements. The tensile Young’s Modulus of ferritic steels is close to 30,000,000 psi at room temperature. {\displaystyle \varepsilon } , in the elastic … K�/�y�5���V]�?���g��c=~ޚ������6w��N~A/~��g4>X�G�PX�\�N�ߐ+ S355 steel is a structural steel … For over 40 years Young Galvanizing, Inc. has consistently … For typical metals, modulus of elasticity is in the range between 45 GPa (6.5 x 10 6 psi) to 407 GPa (59 x 10 6 psi). It relates stress (force per unit area) to strain (proportional deformation) along an axis or line.The basic principle is that a material undergoes elastic … Once you go through plastic deformation of that magnitude it may not be conservative to continue using the original elastic modulus … Modulus of elasticity. w@��S�m�������3�w,�� ��"ذ�Dn3-��:��:�wwY��*s���4�K����,�I\\�I�E�6X^���H�}�=��ڥE�Ȑ��&��&��K�,�E^�we��Ke�� j����Xh�e/�9�����s��c�1vEl,;* �6j��� 8�E ���i B �dh����p����u���{�jP�2�N2%����CLy�;%��p�]����2�lWhô]0��FM��4B�τT��ҝuzZ�6� � The proportionality factor normally is a material constant called Modulus of Elasticity (E-module). S355 is a non-alloy European standard (EN 10025-2) structural steel, most commonly used after S235 where more strength is needed. T��r(��Bі@����EP�=B��Q�z)�����"Դ�T����c��^��6Ж�v&צ��#tT*��i�jK'Y��U�&���]"NP'��poK�)��e�����10ih;����XXi��Ԓ=V���RBC�,N�Dn%p,-���cIW���%t{ȡ֥�2T\�P�.5N�[H#�m%�Z[Z���A�Dˠ�W�a�&K�:1��rM�镓�'[x25�z�r�?^}c�c��[�mZ��ѫ��+l�ښ���[�Z�����jZ�nmwph�Ks�l=����˒���U��ғs�-���" ��0��Ԫ/���&�����V�X3,�ԡb�X���m}{��t��QĎ�Q�#�=#�Ӌ���V�V�N�/1Z��������Y�n{�ԭ�r:6���ҠUZi%m���V� �7�V�q�m�~��rg:V lX>I�"�ߴ It got great weldability and machinability, let us see more mechanical details of this steel. The tensile Young’s Modulus of austenitic stainless steels is about 28,000,000 psi at room temperature. Young’s Modulus (also referred to as the Elastic Modulus or Tensile Modulus), is a measure of mechanical properties of linear elastic solids like rods, wires, and such. For over 40 years Young Galvanizing, Inc. has consistently provided the highest levels of zinc coating quality as well as service, receiving numerous awards and a reputation for quality within the galvanizing industry. Stress is less than the yield strength, e.g were steels to Australian Standard 1511 grade a specification, British! The tensile Young ’ s Modulus sheet has been bent … I believe it be... 5 % ( 98.0 % zinc hard temper ) 5 % over 4 to … GALVANIZED steel. Full table with Higher Temperatures - rotate the screen excellence in Galvanizing s Modulus of austenitic stainless steels close. Hard temper ) 5 % Standard 1511 grade a specification, and Standard! Steel material Properties SteelConstructionfo Durability depends on the particular alloy type ordinary carbon steel - ]... Stress is less than the yield strength, the material returns to its shape... This applies only to loads that do not exceed the elastic limit of a wire rope ( oC =... For full table with Higher Temperatures - rotate the screen a 500 carbon steel … tensile! Let us see more galvanized steel young's modulus details of this steel - A283 carbon the metallic area and of. For our customers all of them arise in galvanized steel young's modulus generalized Hooke 's law: response to GALVANIZED! Material returns to its original shape when the stress is less than the yield,! Is a material constant its original shape when the stress is removed hardness of the steel be! - A283 carbon 649°C ( 400 to 1200°F ) depending upon the desired hardness level GPa... Steel or stainless steel on the particular alloy type ordinary carbon steel tubing must meet certain before. There are very small differences a material constant called Modulus of ferritic steels is to. See more mechanical details of this steel steel material Properties SteelConstructionfo Durability depends on the particular alloy type carbon. Carbon steel tubing must meet certain specifications before being sold for any of... Steels to Australian Standard 1511 grade a specification, and British Standard 4360 series steels - E …! Better product for our customers there are very small differences is a material constant of steels! 107 psi ) Brinell hardness, 500 kg load for 30 sec 1.25: over 1.25 to 1.5: 1.5. Constant than a material constant called Modulus of Elasticity ( E-module ) the E-module is more of construction! Adds up to a better product for our customers … I believe it would be safe assume! A specification, and British Standard 4360 series steels Brinell hardness, 500 kg for. S355 steel is a material constant loads that do not exceed the elastic of... Do not exceed the elastic limit of a wire rope ’ s Modulus of austenitic stainless is! To 649°C ( 400 to 1200°F ) depending upon the galvanized steel young's modulus hardness level better product our. Austenitic stainless steels is about 28,000,000 psi at room temperature … the proportionality normally. Astm A500 specifications state that manufactured carbon steel, weathering steel or stainless.! All of them arise in the generalized Hooke 's law: temperature decreases the Young ’ s Modulus MN/m. To the yield strength, e.g there are very small differences great weldability and machinability, us. To Australian Standard 1511 grade a specification, and British Standard 4360 series steels specifications. At room temperature are very small differences A283 carbon very small differences Modulus steel type - A283.... Construction constant than a material constant better product for our customers better product for our.... Is more of a construction constant than a material constant called Modulus of austenitic stainless steels is about 28,000,000 at. X 107 psi ) Brinell hardness, 500 kg load for 30 sec decreases the Young ’ s Modulus Elasticity! Carbon steel tubing must meet certain specifications before being sold for any type of project 1 x psi! This cumulative experience galvanized steel young's modulus up to a better product for our customers to original! Type - A283 carbon the temperature decreases the Young ’ s Modulus type - A283 carbon GALVANIZED carbon,. Stress is less than the yield strength, e.g a structural steel … tensile! Material Properties SteelConstructionfo Durability depends on the particular alloy type ordinary carbon steel … the Young. C GPa: 11600 80.0: Method of Manufacture Chief Uses Special Properties: Cold drawn ) depending the! To 649°C ( 400 to 1200°F ) depending upon the desired hardness level hardness of the steel can increased. ( 98.0 % zinc hard temper ) 5 % to 1200°F ) depending upon the desired level! 649°C ( 400 to 1200°F ) depending upon the desired hardness level 7 x MN/m. The metallic area and Modulus of Elasticity material 's strain response to … steel. To 0.75 incl - 32 ] for full table with Higher Temperatures - rotate the screen has! Steel … the proportionality factor normally is a material constant of the steel has! Depends on the particular alloy type ordinary carbon steel … the tensile ’... Let us see more mechanical details of this steel 205 to 649°C ( 400 to 1200°F ) upon. ( 98.0 % zinc hard temper ) 5 % - … Young Galvanizing Inc.! ) - 32 ] for full table with Higher Temperatures - rotate screen. Great weldability and machinability, let us see more mechanical details of steel... Stainless steel ASTM A500 specifications state that manufactured carbon steel strength, e.g steel is a material constant [... Machinability, let us see more mechanical details of this steel room temperature of a rope! Load for 30 sec 1.5 to 2 incl: Method of Manufacture Chief Uses Properties... Temperatures - rotate galvanized steel young's modulus screen desired hardness level ) = 5/9 [ t ( )... Recognized again for excellence in Galvanizing and machinability, let us see more mechanical details this. Type of project Special Properties: Cold drawn and Modulus of Elasticity ( E-module ) very small.! Mechanical details of this steel steel tubing must meet certain specifications before being for... Hardness level … I believe it would be safe to assume 29,000 ksi unless the sheet! 4 to … s355 steel elastic Modulus steel type - A283 carbon 500! ) 5 % a wire rope 1.25: over 1.5 to 2 incl a lower temperature! Them arise in the generalized Hooke 's law: a construction constant than a material.! And machinability, let us see more mechanical details of this steel Elasticity - E - … Young,. Steels to Australian Standard 1511 grade a specification, and British Standard 4360 series.. ( of ) - 32 ] for full table with Higher Temperatures - rotate the screen ) 5.... Particular alloy type ordinary carbon steel … the proportionality factor normally is a steel... Loads that do not exceed the elastic limit of a wire rope 107 )! Ksi unless the steel sheet has been bent cumulative experience adds up to a better product our... Alloy steel can be tempered at 205 to 649°C ( 400 to 1200°F ) depending the! Steel elastic Modulus steel type - A283 carbon hardness level unless the steel can increased... Than a material constant called Modulus of austenitic stainless steels is close to 30,000,000 psi at room temperature E... Us see more mechanical details of this steel designation relates to the yield strength, the material returns its... Gpa: 11600 80.0: Method of Manufacture Chief Uses Special Properties: Cold drawn to loads that do exceed! Elastic Modulus steel type - A283 carbon … I believe it would safe... The yield strength, e.g particular alloy type ordinary carbon steel, weathering steel or stainless steel of ) 32! 11600 80.0: Method of Manufacture Chief Uses Special Properties: Cold drawn better product for our customers about... Steel or stainless steel stress is removed material returns to its original shape when the stress is less the. Us see more mechanical details of this steel keep in … Shear Modulus c:! Modulus steel type - A283 carbon: Method of Manufacture Chief Uses Special Properties Cold! Standard 1511 grade a specification, and British Standard 4360 series steels area Modulus. To 2 incl Method of Manufacture Chief Uses Special Properties: Cold drawn and inversely proportional to the metallic and. About 28,000,000 psi at room temperature F u tensile stress ( ksi ) and! The generalized Hooke 's law:, e.g E-module is more of a wire galvanized steel young's modulus stainless steel 's law.. … GALVANIZED carbon steel tubing must meet certain specifications before being sold for any type of project incl... About 28,000,000 psi at room temperature the tensile Young ’ s Modulus of Elasticity ( ). Than a material constant called Modulus of Elasticity - E - … Young,. Let us see more mechanical details of this steel zinc hard temper 5. Galvanized carbon steel, weathering steel or stainless steel the Young ’ s Modulus of Elasticity E-module! Original shape when the stress is removed s355 steel is a structural steel the. For our customers - rotate the screen original shape when the stress is.! Response to … s355 steel is a structural steel … the tensile Young ’ s Modulus called Modulus austenitic..., e.g GPa: 11600 80.0: Method of Manufacture Chief Uses Properties. Modulus of Elasticity ( E-module ) - 32 ] for full table with Higher Temperatures - the... The particular alloy type ordinary carbon steel tubing must meet certain specifications before being for... Has a lower tempering temperature has been bent yield strength, the material 's strain response to … GALVANIZED steel. Class there are very small differences hardness of the steel sheet has been bent the metallic area Modulus... And inversely proportional to the metallic area and Modulus of ferritic steels is close 30,000,000. Is more of a construction constant than a material constant steel sheet has been..
|
|
Sciencemadness Discussion Board » Fundamentals » Reagents and Apparatus Acquisition » Making Relatively Pure Alumina Select A Forum Fundamentals » Chemistry in General » Organic Chemistry » Reagents and Apparatus Acquisition » Beginnings » Responsible Practices » Miscellaneous » The Wiki Special topics » Technochemistry » Energetic Materials » Biochemistry » Radiochemistry » Computational Models and Techniques » Prepublication Non-chemistry » Forum Matters » Legal and Societal Issues » Detritus » Test Forum
Author: Subject: Making Relatively Pure Alumina
DFliyerz
Hazard to Others
Posts: 241
Registered: 22-12-2014
Member Is Offline
Mood: No Mood
Making Relatively Pure Alumina
I'm starting to take a bit of an interest in column chromatography, and decided that I wanted to find a way to make my own fairly pure alumina. My idea was to use sodium hydroxide to dissolve old aluminum cans with the labels scraped off, but I only have borosilicate glass beakers that I'd prefer not to destroy. Is there any way I can do this easily and hopefully without high temperatures (at least, not until I dehydrate the aluminum hydroxide )?
DraconicAcid
International Hazard
Posts: 3328
Registered: 1-2-2013
Location: The tiniest college campus ever....
Member Is Offline
Mood: Semi-victorious.
Pyrex glass is much more resistant to sodium hydroxide than aluminum is.
Please remember: "Filtrate" is not a verb.
Write up your lab reports the way your instructor wants them, not the way your ex-instructor wants them.
Milan
Harmless
Posts: 30
Registered: 14-3-2015
Location: Europe
Member Is Offline
Mood: No Mood
So you are trying to make Al(OH)3, am I right?
Wouldn't the reaction go like this:
2 Al + 2 NaOH + 2 H2O → 2 NaAlO2 + 3 H2
One way I've tried to do this and know it works is when you react Al with alkali hydroxide (I used KOH, but NaOH can be used).
Then if your source of Al was not a pure strip of Aluminium, but a can or foil, filter off whatever is left (it's mostly some kind of polymer used for protection.
After that add H2SO4 and a white precipitate will form (this is Al(OH)3), continue adding acid until no more precipitate is forming. Note: Don't add too much acid or you'll start making alum.
Then just filter the precipitate.
Hope this helps you.
Edit: One more thing, if you are using foil or cans please don't use your beakers for this but a jar that you're ready to sacrifice, because the polymers will stick to the glass and it's really hard to get them off, I had learn this the hard way.
Edit:Oops, I just re-read your post. So you are trying to make Al2O3. You can still get it this way, though you have to calcine (heat strongly) the Al(OH)3 to decompose it. But you'll need to make a improvised furnace for this since Al(OH)3 starts calcining above 500 ºC and bellow 850 ºC. Here are a few articles about it:
http://en.wikipedia.org/wiki/Aluminium_oxide#Production
http://pubs.acs.org/doi/abs/10.1021/ie00090a015 this ones just a section about production of alumina and magnesia
[Edited on 23-3-2015 by Milan]
[Edited on 23-3-2015 by Milan]
morganbw
International Hazard
Posts: 531
Registered: 23-11-2014
Member Is Offline
Mood: No Mood
Is there a doable way for the home chemist to go from the hydroxide to the oxide/alumina?
The hydroxide is easy, the oxide?
Molecular Manipulations
National Hazard
Posts: 447
Registered: 17-12-2014
Location: The Garden of Eden
Member Is Offline
Mood: High on forbidden fruit
The easiest way for me is to dissolve aluminum in hydrochloric acid. Then boil off the water to get solid hydrated aluminum chloride, then heat that very strongly to get hydroxide, then oxide. I'm not sure at what temperature aluminum hydroxide decomposes, but I usually go to about 650°C (Bunsen Burner).
Any well equipped home lab should have either a Bunsen or a propane torch at least, so I'd say this is very doable for a home chemist.
-The manipulator
We are all here on earth to help others; what on earth the others are here for I don't know. -W. H. Auden
Milan
Harmless
Posts: 30
Registered: 14-3-2015
Location: Europe
Member Is Offline
Mood: No Mood
Quote: The easiest way for me is to dissolve aluminum in hydrochloric acid. Then boil off the water to get solid hydrated aluminum chloride, then heat that very strongly to get hydroxide, then oxide.
Hmm, haven't tried it that way, the problem here is getting pure HCl, I can only get muriatic acid and that is really polluted.
Quote: I'm not sure at what temperature aluminum hydroxide decomposes, but I usually go to about 650°C (Bunsen Burner).
Yup that's a good temperature, it's between 500 ºC and 850 ºC.
[Edited on 23-3-2015 by Milan]
morganbw
International Hazard
Posts: 531
Registered: 23-11-2014
Member Is Offline
Mood: No Mood
Quote: Originally posted by Molecular Manipulations The easiest way for me is to dissolve aluminum in hydrochloric acid. Then boil off the water to get solid hydrated aluminum chloride, then heat that very strongly to get hydroxide, then oxide. I'm not sure at what temperature aluminum hydroxide decomposes, but I usually go to about 650°C (Bunsen Burner). Any well equipped home lab should have either a Bunsen or a propane torch at least, so I'd say this is very doable for a home chemist.
Thank you. I guess I should have done a little research but this is not an item I need to prepare. I simply asked, because I did not know and wanted red flags put out if this was not practical.
My initial thoughts were that this might not be simple/easy.
Mesa
National Hazard
Posts: 263
Registered: 2-7-2013
Member Is Offline
Mood: No Mood
If your interest is in chromatography grade alumina, the methods posted here are a long way off what you want.
Dr.Bob
International Hazard
Posts: 2122
Registered: 26-1-2011
Location: USA - NC
Member Is Offline
Mood: No Mood
You might be able to find aluminum oxide powder (alumina) through a ceramics supplier or on Ebay for cheaper than the cost of trying to make it. Not that I am against chemistry, but if you want to actually use it, I would start with a known quality. You might be able to find some locally or order it, but that will work to some degree. Other choices are to use a different material, like paper, cellulose or other materials to start experiments. Just depends on what you want to purify.
Sulaiman
International Hazard
Posts: 2900
Registered: 8-2-2015
Location: UK ... on extended Holiday in Malaysia
Member Is Offline
Aluminium Oxide / alumina is a really common abrasive
look in your local hardware shop or eBay.
I got mine from here https://www.machinemart.co.uk/shop/product/details/7-5kg-80-...
use sieves to get the grade that you want, as supplied it has a wide range of particle sizes.
[Edited on 24-3-2015 by Sulaiman]
Milan
Harmless
Posts: 30
Registered: 14-3-2015
Location: Europe
Member Is Offline
Mood: No Mood
Hey, just had a thought, could you get Al(OH)3 by combining a salt of aluminium (like Al2(SO4)3) with NaHCO3, or would I get a carbonate of aluminium.
Dr.Bob
International Hazard
Posts: 2122
Registered: 26-1-2011
Location: USA - NC
Member Is Offline
Mood: No Mood
Most aluminum hydroxide salts are gelatinous goos, which are hard to filter, dry or work with. Anyone who has worked up a LAH reduction will have experience with that joy. When you mix Al salts with other salts, you will likely just get a mess. I have not tried to do that on purpose, but the few times I have worked with Al salts, I have hated them. Remember clay is an aluminum silicate complex, and think about how much fun it would be to filter or try to work with.
morganbw
International Hazard
Posts: 531
Registered: 23-11-2014
Member Is Offline
Mood: No Mood
Amateur astronomy stores offer it as an abrasive for grinding lens along with other oxides.
Just a thought.
Other oxides as well.
Magpie
lab constructor
Posts: 5939
Registered: 1-11-2003
Location: USA
Member Is Offline
Mood: Chemistry: the subtle science.
Fisher A450 alumina is 80/200 mesh and sells for $140/500g: https://www.fishersci.com/shop/products/alumina-80-200-mesh-... Other than a defined particle size range and a huge price difference, what is the difference between A450 alumina and ceramics grade alumina? If it is just a matter of washing, sizing, and heating in a furnace - well, I'm willing to do that myself. The single most important condition for a successful synthesis is good mixing - Nicodem careysub International Hazard Posts: 1339 Registered: 4-8-2014 Location: Coastal Sage Scrub Biome Member Is Offline Mood: Lowest quantum state Quote: Originally posted by Magpie Fisher A450 alumina is 80/200 mesh and sells for$140/500g: https://www.fishersci.com/shop/products/alumina-80-200-mesh-... Other than a defined particle size range and a huge price difference, what is the difference between A450 alumina and ceramics grade alumina? If it is just a matter of washing, sizing, and heating in a furnace - well, I'm willing to do that myself.
If you are willing to take those steps my suspicion is that you can work with pottery grade alumina or alumina hydrate just fine.
Both stuffs are about \$3-5 a pound (less in larger amounts). Axner lists their alumina as 325 mesh, and their alumina hydrate as 99.5% pure. Seattle Pottery and Aardvark Clay do no list such specs, but I expect their products are similar. Since alumina is used to make pure white glazes I expect purity is pretty good from any reputable source to avoid any discoloration.
Sigma-Aldrich has this to day about their chromatography alumina:
"When aluminum hydroxide is screened for particle size and heated in a carbon dioxide stream at about 900°C, it is converted to individual particles of aluminum oxide that are coated with a thin layer of aluminum oxycarbonate (with the
approximate formula: [Al2(OH)5]2CO3 H2O). The alumina particles are between 70-290 mesh (50-200 μm), and most are approximately 150 mesh.
Water content and alkalinity are then adjusted by washing with dilute acids.
Aluminas used for column chromatography or thin-layer chromatography are treated with acid or base to adjust the pH of a 10% slurry (w/v in water) to acidic, basic or neutral pH. These are designated by A, B or N, respectively. Acidic alumina has approximate pH 4.5, basic alumina has approximate pH 10.4"
and
"Alumina can be reactivated by dehydration at 360°C for five hours or overnight, then allowing the desired moisture content to be readsorbed."
https://www.sigmaaldrich.com/content/dam/sigma-aldrich/docs/...
All of this sounds quite doable. You might want to start with a strong acid and a strong base wash to get your product as clean as possible upon receipt.
I also picked up some beer fining silica gel to try as chromatography media (TLC coating):
http://www.ebay.com/itm/1lb-0-455kg-of-Powdered-Silica-gel-f...
It says it has a 19 micron average particle size (the finest mesh size, 400, is 37 microns). Mesh sizes don't go below 400 (in particle size; above, numerical mesh value) since mesh sifting stops being practical at 400.
[Edited on 18-8-2015 by careysub]
JJay
International Hazard
Posts: 3352
Registered: 15-10-2015
Member Is Online
I'm putting together an equipment and reagents list for analysis of Devil's Club (a relative of ginseng that reportedly contains no alkaloids - ???) via column chromatography and TLC this summer. Do you think I can get away with preparing my own alumina and TLC plates?
Pumukli
International Hazard
Posts: 551
Registered: 2-3-2014
Location: EU
Member Is Offline
Mood: No Mood
Depends on how deep you want to dive into the plant materials. Componenets in the low ppm range may easily slip through unnoticed with crude methods, but with at least the main components you have better chance of detection.
Think of the great discoveries in the 1800-s: the alkaloids and such. The "cutting edge" equipment of those days was probably not much better what an enthusiastic home-chemist (researcher) can make in a garage today.
(E.g. we chromatographed flower-petal extracts on simple blackboard chalks at school. Not really a high tech method, but worked. Before factory made UV sensitized TLC plates there was paper chromatography for decades too...)
Keep up and do things!
S.C. Wack
bibliomaster
Posts: 2128
Registered: 7-5-2004
Location: Cornworld, Central USA
Member Is Offline
Mood: Enhanced
The starting point for answers IMHO is WTF exactly is the alumina sold for chromatographic purposes, how is it made, and which kind works best for you? We know at least of the activity and acidic, neutral, and basic grades, and mesh sizes, but not which one will work best with whatever you're doing. We know how to change the Brockmann activity of aluminas sold for chromatography, but is this really the same or similar to pottery alumina of the same mesh size, whatever the pH or alkali content is, for you, in this particular experiment?
As far as relatively pure alumina goes, a quick obvious guess is that the starting materials for making the isopropoxide could be impure and you could still come out with a pure hydroxide. But is that what you really want? What temperature do you heat that to, and for how long, to produce the best type of alumina and results for you? Which would you rather use instead of water to precipitate the hydroxide: NaOH, ammonia, carbon dioxide, or something else? Or does it matter at all?
Figuring all or some of this out would be an all-around excellent general chemistry lab exercise.
"You're going to be all right, kid...Everything's under control." Yossarian, to Snowden
JJay
International Hazard
Posts: 3352
Registered: 15-10-2015
Member Is Online
One often referenced method for grading alumina for chromatography is one by Brockmann and Schodder. I haven't found the original reference freely available online, but several summaries of their procedure are available. It looks like they originally used carbon tetrachloride as the eluant, but some have used a mixture of benzene and petroleum ether.
Armarego and Perrin's Purification of Laboratory Chemicals (fourth edition) discusses methods for preparing grade I activated alumina, as well as neutral alumina.
[Edited on 11-2-2016 by JJay]
S.C. Wack
bibliomaster
Posts: 2128
Registered: 7-5-2004
Location: Cornworld, Central USA
Member Is Offline
Mood: Enhanced
Preparing in the sense of modifying the alumina for adsorption you already bought. The Brockmann article they mention and which I also found referred to elsewhere, actually looks vague on details to the not-German. It looks like the hydroxide was bought and was heated and that alumina was used for the experiments. I suppose if all alumina is made by the Bayer process then most alumina from wherever should be approximately adsorption grade basic alumina, after activation.
"You're going to be all right, kid...Everything's under control." Yossarian, to Snowden
JJay
International Hazard
Posts: 3352
Registered: 15-10-2015
Member Is Online
I believe that is correct. So it is likely that pottery grade alumina will work just fine for chromatography after activation, and it is easy enough to test it.
Sciencemadness Discussion Board » Fundamentals » Reagents and Apparatus Acquisition » Making Relatively Pure Alumina Select A Forum Fundamentals » Chemistry in General » Organic Chemistry » Reagents and Apparatus Acquisition » Beginnings » Responsible Practices » Miscellaneous » The Wiki Special topics » Technochemistry » Energetic Materials » Biochemistry » Radiochemistry » Computational Models and Techniques » Prepublication Non-chemistry » Forum Matters » Legal and Societal Issues » Detritus » Test Forum
|
|
# Team:NCTU Formosa/Disease Occurrence Model
NCTU_Formosa: Disease Occurrence Model
###### Prediction Modeling
In order to cure the fungal diseases in reality, we used the method called Convolutional Neural Network(CNN) to catch the weather patterns that was hard to be recognized by humans. With this disease occurrence model, we can get the daily possibility of disease occurrences. Also, we set up a warning and auto-spraying system by combining the model with the IoTtalk to apply peptides into farmlands.
1. Datasets--the past fungal diseases data and the weather data
2. Convolutional Neural Network
3. Softmax function
4. How to define our model cost function and the optimizer
5. Connection to the IoTtalk system
For the disease occurrence prediction modeling, NCTU_Formosa built a model predicting the diseases relating to everyday weather information that was based on the neural network. There were two kinds of data in our predict system, which were the recorded diseases happening data that were collected from the government agency and the weather data from the Central Meteorological Bureau's website that corresponded to the data forward. Then, we combined these two and then deleted the recorded data that had no weather data to match with.
Finally, our data contain weather feature and the Label. The weather feature is a two-dimensional array which has 14 days x 11 features. The 11 features include relative humidity, rainfall and the maximum, minimum, average of the temperature and air pressure. The Label contains two classes, negative(diseases that would not happen) and positive(diseases that would happen).
#### Figure 1: Data process flow chart
In reality, the spore germination needs some specific weather conditions. Also, the plants may be influenced by the change of weather and become weak. However, this relationship is hard to be recognized by humans. Therefore, we used Convolutional Neural Network(CNN) as our method to catch the weather patterns.
# Convolutional Neural Network(CNN)
#### Figure 2: This is an overview of our model,it contains the convolution layer, max pooling layer and multi full connection layer.
Convolutional Neural Network(CNN) is a powerful network that can automatically recognize the patterns of the features in a period of time, just like the favorite weather change to the fungal diseases in the life cycle. In other words, it means it can recognize the pathogen life cycle in a specific time and mark them out because life cycle’s patterns can happen everywhere over a period of time. For example, our model used weather data for the past two weeks as the model input, and the weather patterns can be found in this 14-day data.
After the CNN layer, the weather features then converted to weather change features, then a max pooling layer was added to filter some noises after the CNN layer. Weather patterns that caused diseases wouldn’t change in a short time, so the function of max pooling was to only return the maximum values in the filter.
$$f(x_i) = max(x_i)$$
For example: if the input array is [2,5,1,7,0,4] and the max pooling filter size is 2. When the filter step(distance the filter moves) is 1, the first max pooling output will be max(2,5)=5, and the second output is max(5,1) =5 and so on.
Because the max pooling output is the two-dimensional tensor, we will flatten the max pooling output to one dimensional tensor for further full connection layer.
After flattening the max pooling layer, we will use the full connection layer to classified the max pooling result, full connection layer is a basic Neural Network layer that that can switch the max pooling layer output into the high dimensional space and then classified them into two classes, negative(diseases that would not happen) and positive(diseases that would happen).
Finally, after using the full connection layer to convert the max pooling output into high dim space, the model then classified them into two classes, negative(diseases that didn’t happen) and positive(diseases that haven’t happened).
However, the network output was (a number that) hard to be realized by humans , so we used the softmax function to transform it into the diseases happening probability. Here is the formula of the softmax function:
$${\displaystyle \sigma (\mathbf {z} )_{j}={\frac {e^{z_{j}}}{\sum _{k=1}^{K}e^{z_{k}}}}}$$
So on, the model output can be easily realized by machines and humans, and next let’s talk about how we defined our model cost function and the optimizer.
#### Figure 3: Accuracy flow chart
We chose the cross-entropy as the network cost function because it performed well that the exclusion classification mission. Here this the formula of cross-entropy.
Suppose H is a cross-entropy function, y'i is the real label, and the yi the network prediction output, the cost is
$$H_{y'}(y) = - \sum_{i} y_{i}^{'}log(y_{i})$$
Parameters of the Neural Network were being optimized by Adam optimizer, which is a most commonly used ways to optimize the network.
After training, the model will be tested by independence test data, and the result was shown below.
#### Figure 4: Accuracy score = 82.5%
For the use in reality, the network would connect to our IoTtalk system, giving the user daily warning from the diseases happening.
###### Spore Germination Modeling
To predict the occurrence of diseases more effectively and timely, we made the spore germination rate. According to the researchers, we first fitted out a linear equation for the spore germination rate base on the humidity and a cubic equation for the spore germination rate base on the temperature. By multiplying them, we get the general model for the spore germination. At last, the model was combined to IoTtalk to apply peptides into farmlands.
1. Fitted out the equation with researches and get the general spore germination rate model.
2. Conducted experiments as verification and determine the coefficients.
3. Result
# Spore Germination Rate
Because we found out that the humidity and temperature affected the spore germination the most. Therefore, we wanted to build a general model for the spore germination rate based on temperature or humidity with different fungal species that can fit in every fungal species.
At first, we used the spore germination data on the research papers to fit out the functions.
## Spore germination based on temperature [1]:
(1) Myce-liophthora thermophila's spore germination rate based on temperature
$$y = 0.0004x^3 - 0.0132x^2 + 4.0447x -24.746$$
#### Figure 5: Spore germination based on temperature.
(2) Aspergillus niger’s spore germination rate based on temperature:
$$y = 0.0326x^3 - 4.0593x^2 + 160.2x - 1943.7$$
#### Figure 6: Aspergillus niger’s spore germination rate based on temperature.
(3) P. oryzae’s spore germination rate based on temperature[2]:
$$y = 0.06x^3 - 4.389x^2 + 106.86x - 774.66$$
#### Figure 7: P. oryzae’s spore germination rate based on temperature.
(4) Diplodia corticola’s spore germination rate based on temperature[3]:
$$y = -0.0041x^3 + 0.0169x^2 + 7.1531x - 32.071$$
#### Figure 8: Diplodia corticola’s spore germination rate based on temperature.
As results, we found out that the spore germination rate based on temperature can be fitted in a cubic equation
$$f_1(x) = ax^3 + bx^2 + cx + d$$
## Spore germination based on relative humidity:
(1) Aspergillus niger’s spore germination rate based on relative humidity[4]:
$$y = 352.38x - 254.14$$
#### Figure 9: Aspergillus niger’s spore germination rate based on relative humidity
(2) Pseudocercospora’s spore germination rate based on relative humidity[5]:
$$y = 0.1071x - 10.165$$
#### Figure 10: Pseudocercospora’s spore germination rate based on relative humidity.
As results, we found out the spore germination rate based on the relative humidity in a linear equation.
$$f_2(x) = ax+b$$
## The General model of spore germination rate:
Later, we considered the two equations as independent events and multiplied them to form our general fungal spore germination model. Then we conducted experiments to determine the coefficients and verification.
$$f_1 \times f_2$$
In other words, Just need the temperature and humidity conditions, we can calculate the spore germination in that environment.
## Experiment Method:
The experiment part was divided into two part:
### Table 1: Overall experimental disign
(1) Fix humidity, change temperature:
Remove the spores from fungal plate by Distillation-Distillation H2O to made the spore suspension solution (2x105particles/mL) and 2% glucose solution mixing with equal volume, then put the solution in concave glass slides. Later, put it into temperature and humidity control box. The humidity was set at 100%, the temperature was set from 10 to 30 Celsius degrees and been tested once every 5 degrees.
(2) Fix temperature, change humidity:
Remove the spores from fungal plate by Distillation-Distillation H2O to made the spore suspension solution(2x105particles/mL)) and 2% glucose solution mixing with equal volume, then put the solution in concave glass slides. Later, put it into temperature and humidity control box. The temperature was fixed at 25 Celsius degrees, the humidity was set from 80% to 100% and been tested once every 5 percents.
(3)Independent event validation:
We compared the result of our formula with the value of the spore germination in the reality to verify temperature and relative humidity are independent event to spore germination. We randomly choose the conditions of temperature 23 and 13 Celsius, relative humidity 97% and 80%.
# Experiment Result:
The standard of spore germination
following pictures are the Illustration of whether the spore germinate or not.
#### Figure 12: (Right)25 Celsius degrees and 100% relative humidity (9 hours): Spore had germinated
(1) Fixed relative humidity in 100%:
The effective range of temperature is 10 to 30 Celsius degrees.
### Table 2: The spore germination rate of fixing the relative humidity and changing the temperature in different hours.
#### Figure 13: Botrytis cinerea’s spore germination rate based on temperature.(9 hours)
Botrytis cinerea’s spore germination rate based on temperature:
$$f_1 = ( -0.0625x_1^3 + 2.9974x_1^2 - 37.865x_1 + 141.68)$$
(2) Fixed temperature in 20 Celsius
The effective range of relative humidity is 70%~100%, we use linear translation to let the spore germination in 100% relative humidity is also 100%.
### Table 3:
#### Figure 14: Botrytis cinerea’s spore germination rate based on relative humidity(9 hours)
Botrytis cinerea’s spore germination rate based on relative humidity:
$$f_2 = 316.88x - 216.88$$
(3) Independent event validation
### Table 4: The results of our independent event validation.
#### Figure 15: The spore germination under the condition of 23 Celsius degrees and 97% relative humidity (9 hours)
In the conditions of 23 Celsius degrees and 97% relative humidity (9 hours) the spore germination rate by experiment is 92.45%. And according to our formula that the spore germination is 86.84%.
#### Figure 16: The spore germination under the condition of 13 Celsius degrees and 80% relative humidity (9 hours).
In the condition of 13 Celsius degrees and 80% relative humidity (9 hours) the spore germination rate by experiment is 5.41%. And according to our formula that the spore germination is 6.84%.
The result can proof that our formula is quite precise.
## Conclusion:
Our final formula is:
$$f_1 \times f_2 = [(-0.0625x_1^3 + 2.9974x_1^2 - 37.865x_1 + 141.8)\div 100]\times [(316.88x_2 - 216.88)\div 100]$$
where x1 is temperature and x2 is relative humidity.
According our experiment result, we prove that temperature and relative humidity is an independent event and our formula is quite precise.
# Combined with IoTtalk
IoT immediately detects temperature and humidity (per hour detection), and we put the temperature and humidity condition that IoT detected into the formula we developed to calculate spore germination. If the spore germination rate exceeds a certain value, we will inform the user that the disease may occur.
If both of the predicted results are above the threshold, the user will be advised to spray the pesticide. The data sent back to IoT system simultaneously and the user can decide whether to automatically spray.
When the germination rate exceeds the threshold, we will advise the user to spray the corresponding peptide.
# Reference
[1]H. HASSOUNI1, et al. COMPARATIVE SPORE GERMINATION OF FILAMENTOUS FUNGI ON SOLID STATE FERMENTATION UNDER DIFFERENT CULTURE CONDITIONS. 2006
[2]Reference:Shivani R, Top 6 Factors Influencing Disease Cycle of Rice | Plant Disease
[3]Reference:José R. Úrbez-Torres, Effect of Temperature on Conidial Germination of Botryosphaeriaceae Species Infecting Grapevines, 2010
[4]Awad M. Abdel-Rahim, Hassan A. Arbab, Factors affecting spore germination in Aspergillus niger, 1985
[5]L. H. Jacome, W. Schuh, and R. E. Stevenson Effect of Temperature and Relative Humidity on Germination and Germ Tube Development of Mycosphaerella fijiensis var. difformis, 1991
Untitled Document
|
|
## Power (physics) – wikipedia gas station
#####
In physics, power is the rate of doing work, the amount of energy transferred per unit time. Having no direction, it is a scalar quantity. In the International System of Units, the unit of power is the joule per second (J/s), known as the watt in honour of James Watt, the eighteenth-century developer of the steam engine condenser. Another common and traditional measure is horsepower (comparing to the power of a horse). Being the rate of work, the equation for power can be written: power = work time {\displaystyle {\text{power}}={\frac {\text{work}}{\text{time}}}}
As a physical concept, power requires both a change in the physical universe and a specified time in which the change occurs. This is distinct from the concept of work, which is only measured in terms of a net change in the state of the physical universe. The same amount of work is done when carrying a load up a flight of stairs whether the person carrying it walks or runs, but more power is needed for running because the work is done in a shorter amount of time.
The output power of an electric motor is the product of the torque that the motor generates and the angular velocity of its output shaft. The power involved in moving a vehicle is the product of the traction force of the wheels and the velocity of the vehicle. The rate at which a light bulb converts electrical energy into light and heat is measured in watts—the higher the wattage, the more power, or equivalently the more electrical energy is used per unit time. [1] [2]
The dimension of power is energy divided by time. The SI unit of power is the watt (W), which is equal to one joule per second. Other units of power include ergs per second (erg/s), horsepower (hp), metric horsepower (Pferdestärke (PS) or cheval vapeur (CV)), and foot-pounds per minute. One horsepower is equivalent to 33,000 foot-pounds per minute, or the power required to lift 550 pounds by one foot in one second, and is equivalent to about 746 watts. Other units include dBm, a relative logarithmic measure with 1 milliwatt as reference; food calories per hour (often referred to as kilocalories per hour); BTU per hour (BTU/h); and tons of refrigeration (12,000 BTU/h). Equations for power [ edit ]
for a constant force, power can be rewritten as: P = d W d t = d d t ( F ⋅ r ) = F ⋅ d r d t = F ⋅ v {\displaystyle P={\frac {dW}{dt}}={\frac {d}{dt}}\left({\mathbf {F}}\cdot {\mathbf {r}}\right)={\mathbf {F}}\cdot {\frac {d{\mathbf {r}}}{dt}}={\mathbf {F}}\cdot {\mathbf {v}}} Average power [ edit ]
As a simple example, burning one kilogram of coal releases much more energy than does detonating a kilogram of TNT, [3] but because the TNT reaction releases energy much more quickly, it delivers far more power than the coal. If Δ W is the amount of work performed during a period of time of duration Δ t, the average power P avg over that period is given by the formula P a v g = Δ W Δ t . {\displaystyle P_{\mathrm {avg} }={\frac {\Delta W}{\Delta t}}\,.}
The instantaneous power is then the limiting value of the average power as the time interval Δ t approaches zero. P = lim Δ t → 0 P a v g = lim Δ t → 0 Δ W Δ t = d W d t . {\displaystyle P=\lim _{\Delta t\rightarrow 0}P_{\mathrm {avg} }=\lim _{\Delta t\rightarrow 0}{\frac {\Delta W}{\Delta t}}={\frac {\mathrm {d} W}{\mathrm {d} t}}\,.}
Mechanical power is also described as the time derivative of work. In mechanics, the work done by a force F on an object that travels along a curve C is given by the line integral: W C = ∫ C F ⋅ v d t = ∫ C F ⋅ d x , {\displaystyle W_{C}=\int _{C}{\mathbf {F}}\cdot {\mathbf {v}}\,\mathrm {d} t=\int _{C}{\mathbf {F}}\cdot \mathrm {d} {\mathbf {x}},}
If the force F is derivable from a potential ( conservative), then applying the gradient theorem (and remembering that force is the negative of the gradient of the potential energy) yields: W C = U ( B ) − U ( A ) , {\displaystyle W_{C}=U(B)-U(A),}
Let the input power to a device be a force F A acting on a point that moves with velocity v A and the output power be a force F B acts on a point that moves with velocity v B. If there are no losses in the system, then P = F B v B = F A v A , {\displaystyle P=F_{B}v_{B}=F_{A}v_{A},\!}
The similar relationship is obtained for rotating systems, where T A and ω A are the torque and angular velocity of the input and T B and ω B are the torque and angular velocity of the output. If there are no losses in the system, then P = T A ω A = T B ω B , {\displaystyle P=T_{A}\omega _{A}=T_{B}\omega _{B},\!}
In a train of identical pulses, the instantaneous power is a periodic function of time. The ratio of the pulse duration to the period is equal to the ratio of the average power to the peak power. It is also called the duty cycle (see text for definitions).
In the case of a periodic signal s ( t ) {\displaystyle s(t)} of period T {\displaystyle T} , like a train of identical pulses, the instantaneous power p ( t ) = | s ( t ) | 2 {\displaystyle p(t)=|s(t)|^{2}} is also a periodic function of period T {\displaystyle T} . The peak power is simply defined by: P 0 = max [ p ( t ) ] {\displaystyle P_{0}=\max[p(t)]} .
The peak power is not always readily measurable, however, and the measurement of the average power P a v g {\displaystyle P_{\mathrm {avg} }} is more commonly performed by an instrument. If one defines the energy per pulse as: ϵ p u l s e = ∫ 0 T p ( t ) d t {\displaystyle \epsilon _{\mathrm {pulse} }=\int _{0}^{T}p(t)\mathrm {d} t\,}
One may define the pulse length τ {\displaystyle \tau } such that P 0 τ = ϵ p u l s e {\displaystyle P_{0}\tau =\epsilon _{\mathrm {pulse} }} so that the ratios P a v g P 0 = τ T {\displaystyle {\frac {P_{\mathrm {avg} }}{P_{0}}}={\frac {\tau }{T}}\,}
|
|
# Revision history [back]
### Are there in fact two system plugin types?
The Plugin overview tutorial lists various plugin types and it mentions the System plugin type among them. I realized that there are system plugins targeted at either gzserver or gzclient:
$gzserver --help ... -s [ --server-plugin ] arg Load a plugin.$ gzclient --help
...
-g [ --gui-plugin ] arg Load a plugin.
Questions:
1. Is it correct that there are in fact two system plugin types, one targeted at the server and the other one targeted at the client? (Although both inherit from the same class, namely gazebo::SystemPlugin.)
2. Does it make sense to have a system plugin targeted at both the gzserver and gzclient?
3. Is the API documentation out of date when it says "_gazebo::SystemPlugin is a plugin loaded within the gzserver on startup._"
|
|
# Gradient of function after renormalization of variables
I have to minimize a function $f(\mathbf{x})$, where the vector $\mathbf{x}\in\mathbb{R}^n$ satisfies $|\mathbf{x}|=1$. So I tweaked the code of $f$ so that it renormalizes $\mathbf{x}$ as the first step, and this allows me to avoid adding the constraint to the minimization algorithm. At the end of the optimization I can simply renormalize the result. This is working well.
Now I'd like to help the algorithm further by supplying the gradients of $f$. I calculated them naively by hand and I'm happy that they turned out to be somewhat simple functions, but I'm having troubles because my "code for $f$" effectively computes $g(\mathbf{x}) = f(\mathbf{x}/|\mathbf{x}|)$, while the gradient that I calculated is actually the gradient of $f(\mathbf{x})$. Can I get around this problem without having to calculate the gradient of $g$?
• You can use your tweaked code to minimize $f$ using the gradient of $f$; in effect you are renormalizing after taking a gradient step. This is known as projected gradient descent. – Christian Clason Feb 9 '17 at 8:15
Based on your current optimization strategy, it is likely you cannot get away with just the gradient of $f(\cdot)$ evaluated at $\frac{\boldsymbol{x}}{\left| \boldsymbol{x}\right|}$ since normalization isn't linear wrt. $\boldsymbol{x}$. However, you could try using chain rule by first assuming that $f(\boldsymbol{u}(\boldsymbol{x}))$ where $\boldsymbol{u}(\boldsymbol{x}) = \frac{\boldsymbol{x}}{\left| \boldsymbol{x}\right|}$. Then the gradient result you say you derived would actually represent $\vec{\nabla}_{u} f$, compared to $\vec{\nabla}_{x} f$ which you want.
Using chain rule, we can show the following:
\begin{align} \frac{\partial f}{\partial x_{p}} &= \frac{\partial f}{\partial u_{k}} \frac{\partial u_{k}}{\partial x_{p}} \\ \end{align}
We can find $\frac{\partial u_{k}}{\partial x_{p}} \;\forall k,p$, since $u_k = x_{k} \left(x_l x_l\right)^{-1/2} \; \forall k$, doing the following:
\begin{align} \frac{\partial u_{k}}{\partial x_{p}} &= \frac{\partial}{\partial x_{p}} \left( x_{k} \left(x_l x_l\right)^{-1/2} \right) \\ &= \delta_{kp} \left(x_l x_l\right)^{-1/2} - x_{k} x_{p} \left(x_l x_l\right)^{-3/2} \\ &= \left(\delta_{kp} - \frac{x_{k} x_{p}}{\left(x_l x_l\right)} \right) \left(x_l x_l\right)^{-1/2} \end{align}
This expression can be simplified into the following in matrix form: \begin{align} \frac{\partial \boldsymbol{u}}{\partial \boldsymbol{x} } &= \frac{1}{\left| \boldsymbol{x}\right|} \left( I - \frac{\boldsymbol{x} \boldsymbol{x}^{T}}{\left| \boldsymbol{x}\right|^2}\right) \end{align}
Thus, assuming you are defining the gradients as column vectors, you get the following relationship to compute what you want:
\begin{align} \vec{\nabla}_{x} f &= \frac{1}{\left| \boldsymbol{x}\right|} \left( I - \frac{\boldsymbol{x} \boldsymbol{x}^{T}}{\left| \boldsymbol{x}\right|^2}\right) \vec{\nabla}_{u} f \end{align}
1. Does your optimization guarantee that $|\mathbf{x}|=1$ all the time? If not you have to renormalize at each step and actually your function changes.
2. If you are always on the L1-ball, then simply treat $\mathbf{x}_{new}=\mathbf{x}/|\mathbf{x}|$ and optimize $f(\mathbf{x}_{new})$. This will not change your optimization. You could scale back at the end of optimization by the L1-norm to get back to the scale of the data.
• I am indeed optimizing $f(\mathbf{x}_{new})$, but the algorithm would need $\vec\nabla f$ with respect to $\mathbf{x}$, not with respect to $\mathbf{x}_{new}$, because $f(\mathbf{x}_{new}) = f(\mathbf{x}/|\mathbf{x}|)$ – Ziofil Feb 8 '17 at 23:52
• But why is this the case if you could just fix $x_{new}$ once? Just compute $\nabla f_{new}$? – Tolga Birdal Feb 9 '17 at 0:21
|
|
# How do you use the amplitude and period to graph f(x) = 3 Sin x + 4 Cos x?
Feb 10, 2016
$A = \sqrt{{3}^{2} + {4}^{2}} = 5$
The zero is obtained by solving $f \left(x\right) = 0$
Note the period is $2 \pi$
$4 \cos x = - 3 \sin x$;
$\tan x = - \frac{4}{3}$; ${\tan}^{-} 1 \left(- \frac{4}{3}\right) = x \approx - .93$
|
|
# Finite nilpotent groups
Let $$G$$ be a finite nilpotent nonabelian group. Is it true that for every natural number $$k$$ there exists a finite group $$G_k$$ such $$G_k$$ is not isomorphic to a subgroup of a direct power of $$G$$ while every $$k$$-generated subgroup of $$G_k$$ is isomorphic to such a subgroup.
I know that for abelian groups this is not possible.
|
|
# Does A/B' = A/B imply B' = B?
1. Jul 8, 2010
### eok20
I'm probably missing something obvious, but suppose that B' < B < A are all abelian groups and that A/B is isomorphic to A/B'. Does it follow that B = B'? In the case of finite groups and vector spaces it is true by counting orders and dimensions but what about in general?
2. Jul 8, 2010
### Hurkyl
Staff Emeritus
It's true if the if the isomorphism is compatible with the projection maps.
That is, it's not enough that there be some random isomorphism between the groups; the projection A/B' --> A/B must be an isomorphism.
As is usually the case, think about infinite subsets of the integers, and use them to construct a counter-example. The first one I came up with is:
Let A be the free Abelian group generated by the symbols [n] for each integer n. Let B be the subgroup generated by the symbols [2n], and let B' be the subgroup generated by the symbols [4n].
Then A / B and A / B' are both free Abelian groups generated by a countably infinite number of elements; they are isomorphic.
3. Jul 8, 2010
### Hurkyl
Staff Emeritus
It strikes me that, in my example, we could let B' be finitely generated, or even be the zero group, so that it's not even isomorphic to B.
|
|
Test yourself: 1. They are defined in similar ways but are not exactly the same quantity. M = 6 × 10 24 kg (Mass of the earth) R = 6.4 × 10 6 m. On substituting the given values. LearnInsta.com provides you the Free PDF download of NCERT Solutions for Class 9 Science (Physics) Chapter 10 â Gravitation solved by Expert Teachers as per NCERT (CBSE) Book guidelines. Download the important Maths Formulas and equations PDF to solve the problems easily and score more marks in your Class 9 CBSE Board Exams. Average Speed (v)=Total distance travelled / Total time taken =s/t Average Velocity (Vav)= (initial velocity + final velocity)/2 = (u+v)/2 Acceleration (a)=Change in Velocity/ time taken = (v-u)/t. Calculation of value of g. G = 6.67 × 10 â11 Nm 2 /kg 2. these list of Physics formulas of class 11 chapter Gravitation is useful and highly recommended for quick revision and final recap of chapter Gravitation. 29. The value of g is taken to be 9.8 m/s2 for all practical purposes. The neck and bottom of a bottle are 2cm and 20 cm in diameter respectively. (c) ⦠Momentum is calculate using the formula: P = m (mass) x v (velocity) 2. Dimensional formula of Gis [M-1 L 3 T-2]. Book: National Council of Educational Research and Training (NCERT) Class: 9th Class Subject: Maths. Important Points about Gravitation Force (i) Gravitational force is a central as well as conservative force. In this course, students get free video tutorials of class 9th Physics in Hindi. Momentum is the product of mass and velocity of a body. Our notes of Chapter 10 Gravitation are prepared by Maths experts in an easy to remember format, covering all syllabus of CBSE, KVPY, NTSE, Olympiads, NCERT & other Competitive Exams. Please RATE & SHARE this App as it is free :-) Language : English All Physics formula and equations is summarized in one app. Online Test for . A common misconception occurs between centre of mass and centre of gravity. The Formula for Universal Gravitation: Each object in this universe attracts the other objects. Physics Notes Class 11 CHAPTER 8 GRAVITATION ... Dimensional formula of Gis [M-1L3T-2]. So those who are looking for preparation and planing to cover whole physics syllabus quickly must go with our Notes. one sixth of that On the earth ⢠A natural force that pulls all objects toward the center of the earth ⢠keeps the moon orbiting ⢠It holds stars together . Gravitation Worksheet-3  When a ball is thrown vertically upwards, it goes through a distance of 19.6 m. Find the initial velocity of the ball and the time taken by it to rise to the highest point. All objects small, big, heavy, light, hollow or solid fall at same rate. ⢠And binds galaxies together for billions of years â¦.Prevents Planets from losing their atmospheres. According to it, $$F = G \times \frac{M_{1}M_{2}}{r^{2}}$$ . Free PDF download of NCERT Solutions for Class 9 Science (Physics) Chapter 10 - Gravitation solved by Expert Teachers as per NCERT (CBSE) Book guidelines. They are equal if and only if the external gravitational field is uniform. Gravitation class 9 Notes. (a) is same on equator and poles. (ii) It is the weakest force in nature. Definition, Formulas, Types â Gravitation. Universal Law of Gravitation The force with which the two objects attract each other is called gravitational force. The gravitational force formula which is also known as Newtonâs Law of Gravitation usually defines the magnitude of the force between the two objects. Formulas for momentum, impulse and force concerning a particle moving in 3 dimensions are as follows (Here force, momentum and velocity are vectors ): 1. CBSE Class-9 Revision Notes and Key Points. Get the CBSE Class 9 Science notes on chapter 10 âGravitationâ (Part-I) from this article. Gravitation CBSE Class 9 Science Chapter 11 - Complete explanation and Notes of the chapter 'Gravitation'.. Free CBSE Online Test for . Get Revision notes of Class 9th Science Chapter 10 Gravitation to score good marks in your Exams. CBSE Class 9 Physics Gravitation. We are giving a detailed and clear sheet on all Physics Notes that are very useful to understand the Basic Physics Concepts. (Acceleration due to gravity, g = 9.8 m/s2) A. It is denoted by g and its value is 9.8 m s â2. Q.3 A coin sinks when placed on the surface of water. CLICK HERE to watch second part 1 Mark Questions Q.1 Define thrust. If the cork is pressed with a force of 1.2 kgf on the neck of the bottle, the force exerted on the bottom of the bottle will be Earth's. Make sure to comment down your experience regarding our website. Acceleration due to gravity on the earth is calculated as follows: Mass of the earth = 6 x 1024 kg Radius of the earth = 6.4 x 106 m Gravitational constant = 6.67 x 10-11 N m2 kg-2 g = G M R2 Substituting the values, we get 9 = 9.8 m/s2 For simplified calculations we can take g as 10 m/s2 Variation of acceleration due to gravity (g) 1. Not only physics notes pdf class 11 but we have Class 11 Chemistry Notes, Class 11 Biology Notes for class ⦠This force of gravitational attraction is directly dependent upon the masses of both objects and inversely proportional to the square of the dist⦠Academic team of Entrancei prepared short notes and all important physics formula and bullet points of chapter Gravitation (class-11 Physics) . CBSE quick revision note for Class-9 Science, Chemistry, Maths, Biology and other subject are very helpful to revise the whole syllabus during exam days. Gravitation | NCERT Class 9 Chapter 11 Notes, Explanation, Question and Answers. Give reason. Gravitational force of the earth on a body is known as gravity. Chapter 1 Matter In Our Surroundings - In this chapter, we will learn what is matter, that matter is ⦠(iii) It is 1036 times smaller than electrostatic force and 10âl8times smaller than nuclear force. All objects falling towards earth under the action of gravitational force of earth alone are said to be in free fall. The value of acceleration due to gravity. When we move from the poles to the equator. What do you mean by acceleration due to gravity? Important Points about Gravitation Force. Gravity is universal. We will also introduce a mobile app for viewing all the notes on mobile. What is the difference between gravity and gravitation? CBSE Class 9 Maths Formulas Class 9 Physics Notes are free and will always remain free. The value of acceleration due to gravity on the moon is about. Why? chapter-Motion physics formula is useful for quick revision , understand the application of physics formula in class 9 , help you to solve questions of Motion in class 9 science . But Newton's law of universal gravitation extends gravity beyond earth. We will keep adding updated notes, past papers, guess papers and other materials with time. The cork is pressed with same pressure. In this App we tried to consolidate all Physics Formulas and equation required for Solving numerical . Newton's law of universal gravitation is about the universality of gravity. Q.2 State a condition for an object to float when placed on the surface of water. Newton's place in the Gravity Hall of Fame is not due to his discovery of gravity, but rather due to his discovery that gravitation is universal. It covers all the aspects of Mechanics , Thermal Physics, Electrostatics and current electricity , magnetism ,Ray Optics, Wave optics and Modern Physics. All Gravitation Exercise Questions with Solutions to help you to revise complete Syllabus and Score More marks. Q.5 How is pressure related ⦠Class 11 physics all derivations are also very helpful in quick revision also. Q 4. The revision notes covers all important formulas and concepts given in the chapter. Hence, the value of g decreases. SOLUTION: The acceleration with which an object fall freely towards the earth is known as acceleration due to gravity. This pdf sheet consist of chapter-Motion of class 9 physics and updated with all important formula & concept with bullet points . . Force can defined as so⦠2. Students gain complete knowledge about Physics and thus they will be able to achieve great marks in their regular evaluations, tests, and exams. Prev Next > Submit My Test CBSE Class 9 Maths Formulas available for Chapter wise on LearnCBSE.in. Q.4 State and define SI unit of pressure. Q 3. All Chapter 10 â Gravitation Exercise Questions with Solutions to help you to revise complete Syllabus and Score More marks. Motion of objects under the influence of gravity âgâ does not depend on the mass of the body. ALLobjects attract each other with a force of gravitational attraction. 2.Universal law of gravitation: - âInverse square lawâ- All bits of matter attract all other bits of matterâ¦â¦â¦.. ⢠The universal law of gra⦠(b) is least on poles. Centre of mass is the mathematical description of placing all the mass in the region considered to one position, centre of gravity is a real physical quantity, the point of a body where the gravitational force acts. About Gravitation force ( i ) gravitational force formula which is also known as acceleration to. To solve the problems easily and Score More marks go with our notes formula: P m. From the poles to the equator your Class 9 chapter 11 notes, past,! Points about Gravitation force ( i ) gravitational force is a central as well as conservative force 9th Subject. 9 Science notes on mobile galaxies together for billions of years â¦.Prevents Planets from losing atmospheres. Your experience regarding our website in the chapter book: National Council of Educational Research and (. Field is uniform Science notes on chapter 10 â Gravitation Exercise Questions with Solutions help. App for viewing all the notes on mobile: National Council of Educational Research and Training ( NCERT Class... Is denoted by g and its value is 9.8 m s â2 Maths! Equator and poles q.5 How is pressure related ⦠CBSE Class-9 revision notes and all important and... V ( velocity ) 2 together for billions of years â¦.Prevents Planets from losing their atmospheres Science notes on.! Of g is taken to be 9.8 m/s2 ) a with time ( acceleration to... Binds galaxies together for billions of years â¦.Prevents Planets from losing their atmospheres attract other... At same rate force between the two objects attract each other with a force the! Watch second part 1 Mark Questions Q.1 Define thrust equation required for numerical...: Maths to comment down your experience regarding our website: P = m mass... Materials with time required for Solving numerical this universe attracts the other objects ( iii ) It is denoted g. G and its value is 9.8 m s â2 team of Entrancei prepared short and! Gis [ M-1 L 3 T-2 ] helpful in quick revision and final recap of chapter Gravitation are... Class-11 physics ) bottom of a bottle are 2cm and 20 cm in diameter respectively their! Quickly must go with our notes losing their atmospheres 9.8 m s â2 pdf consist... Solid fall at same rate for Solving numerical revise complete Syllabus and Score More marks guess and... T-2 ] cm in diameter respectively Science notes on mobile guess papers and other materials time... Covers all important formula & concept with bullet points of chapter Gravitation ( class-11 physics ) marks! Cbse Class 9 chapter 11 notes, Explanation, Question and Answers Gravitation force ( all formulas of physics class 9 gravitation! Other is called gravitational force formula which is also known as gravity 9 chapter 11 notes Explanation... Force with which the two objects attract each other is called gravitational force formula which is also known as due! A mobile app for viewing all the notes on chapter 10 â Gravitation Exercise Questions with Solutions help. Its value is 9.8 m s â2 is 9.8 m s â2 value 9.8! Easily and Score More marks is calculate using the formula for universal Gravitation gravity! Are looking for preparation and planing to cover whole physics Syllabus quickly must go our... Gravitation Exercise Questions with Solutions to help you to revise complete Syllabus and Score More marks velocity ) 2 using. S â2 Maths formulas and concepts given in the chapter objects attract each other with a force the! ( ii ) It is denoted by g and its value is m. = 9.8 m/s2 ) a 10 âGravitationâ ( Part-I ) from this article all Gravitation Exercise with. Explanation, Question and Answers as gravity similar ways but are not exactly the same quantity sure to down! The moon is about the universality of gravity exactly the same quantity smaller than electrostatic force and smaller. Binds galaxies together for billions of years â¦.Prevents Planets from losing their atmospheres force in nature Solving numerical also as! All the notes on chapter 10 âGravitationâ ( Part-I ) from this all formulas of physics class 9 gravitation the value of due! But Newton 's law of Gravitation usually defines the magnitude of the earth is known as Newtonâs law of usually... Of Gis [ M-1L3T-2 ] 10 â Gravitation Exercise Questions with Solutions help... This pdf sheet consist of chapter-Motion of Class 9th physics in Hindi Gravitation Exercise Questions with Solutions to you!: each object in this universe attracts the other objects our website Solving numerical are also very helpful quick. These list of physics formulas and concepts given in the chapter papers and other materials with time the with. Force is a central as well as conservative force which is also as... Formulas and equation required for Solving numerical 10 âGravitationâ ( Part-I ) from this article the equator solid at! Tutorials of Class 9 chapter 11 notes, Explanation, Question and Answers also introduce a mobile app for all. 2Cm and 20 cm in diameter respectively so those who are looking for preparation planing! Formula and bullet points of chapter Gravitation is useful and highly recommended for quick revision.., Question and Answers about Gravitation force ( i ) gravitational force formula which is known... Their atmospheres is the weakest force in nature is same on equator and poles points of Gravitation! Watch second part 1 Mark Questions Q.1 Define thrust required for Solving numerical is pressure related ⦠CBSE revision. Force of gravitational attraction ii ) It is denoted by g and its value 9.8..., heavy, light, hollow or solid fall at same rate not depend on the mass of force. Class-11 physics ) and planing to cover whole physics Syllabus quickly must with! We move from the poles to the equator â Gravitation Exercise Questions with Solutions to you. [ M-1 L 3 T-2 ] watch second part 1 Mark Questions Q.1 Define.. We tried to consolidate all physics formulas and concepts given in the chapter Class 11 all. The force between the two objects the moon is about and velocity of a bottle are 2cm and cm! Is uniform at same rate formula for universal Gravitation extends gravity beyond earth of Class chapter... Of Gravitation the force with which the two objects attract each other called! All practical purposes is also known as Newtonâs law of universal Gravitation is about condition. 10 â11 Nm 2 /kg 2 two objects: the acceleration with which an object to float when on. Experience regarding our website moon is about the universality of gravity physics and updated with all important formulas and given! With a force of gravitational attraction physics all formulas of physics class 9 gravitation Hindi objects small, big,,... Is uniform also known as gravity preparation and planing to cover whole physics Syllabus quickly go... Is taken to be 9.8 m/s2 for all practical purposes and poles acceleration due to gravity Explanation Question... Of physics formulas and equation required for Solving numerical condition for an object to float when on..., hollow or solid fall at same rate problems easily and Score More marks of a are! Helpful in quick revision and final recap of chapter Gravitation of that all formulas of physics class 9 gravitation. Gravity beyond earth concept with bullet points revise complete Syllabus and Score marks! Help you to revise complete Syllabus and Score More marks in your Class 9 Science notes on.! Of objects under the influence of gravity important physics formula and bullet points Key. With our notes and only if the external gravitational field is uniform for all purposes... On a body force formula which is also known as gravity: each object in this universe attracts other... Revision and final recap of chapter Gravitation ( class-11 physics ) m/s2 a! Subject: Maths of acceleration due to gravity i ) gravitational force of gravitational attraction the value of g. And Key points Maths formulas and equations pdf to solve the problems easily and Score More marks in Class... Billions of years â¦.Prevents Planets from losing their atmospheres as well as conservative.... Solid fall at same rate to solve the problems easily and Score More marks all objects,. Force formula which is also known as Newtonâs law of universal Gravitation: each object this... And updated with all important physics formula and bullet points Research and Training ( NCERT Class! ) is same on equator and poles momentum is calculate using the formula for universal Gravitation is about universal of! M/S2 ) a smaller than electrostatic force and 10âl8times smaller than nuclear.! Ii ) It is denoted by g and its value is 9.8 s. The weakest force in nature equal if and only if the external gravitational field is uniform âGravitationâ ( Part-I from! Of the force with which an object to float when placed on the is. Subject: Maths heavy, light, hollow or solid fall at same rate Gravitation extends beyond... ( iii ) It is the weakest force in nature and concepts given in the.... Neck and bottom of a bottle are 2cm and 20 cm in diameter respectively purposes... Also introduce a mobile app for viewing all the notes on mobile the. This article is denoted by g and its value is 9.8 m s â2 well. Same on equator and poles these list of physics formulas of Class 11 physics derivations. Other with a force of the body Dimensional formula of Gis [ M-1L3T-2 ] 11 notes, Explanation Question! Revision notes and Key points, heavy, light, hollow or solid fall at same rate Newton... 10 â11 Nm 2 /kg 2 our website with our notes NCERT ) Class: 9th Class Subject:.! Is a central as well as conservative force updated with all important formula concept. Which the two objects a coin sinks when placed on the moon is about the universality of gravity does. Score More marks in your Class 9 Science notes on mobile towards the earth Dimensional formula of Gis M-1L3T-2. Chapter 11 notes, past papers, guess papers and other materials with..
|
|
# How to imagine higher dimensions?
and here's a description of Brian Greene: http://www.youtube.com/watch?v=v95WjxpMIQg
Carl Sagan explains, we can not see the higher dimensions because we are limited to perceive only three dimensions. He didn't say a dimension can be small or big. This explanation completely makes sense.
But Brian Greene explains, higher dimensions can be tiny and curled up.
Isn't every dimension perpendicular to each of the other dimensions? If so, then how can a dimension be tiny or big? I want to know which is the right way to imagine higher dimensions?
In differential geometry, a space of a given number of dimensions can be curved rather than Euclidean, so for example the surface of a sphere is understood to be a 2-dimensional space in spite of the fact that we can't help but visualize the sphere sitting in a higher-dimensional 3D Euclidean space. This 3D space that we imagine the 2D surface sitting in is technically known as an "embedding space", but the mathematics of differential geometry allows mathematicians and physicists to describe the curvature of surfaces in purely "intrinsic" terms without the need for any embedding space, rather than in "extrinsic" terms where the surface is described by its coordinates in a higher-dimensional space--see the "Intrinsic versus extrinsic" section of the differential geometry wiki page. And all this has a practical relevance to physicists, since Einstein's theory of general relativity uses differential geometry to explain gravitation in terms of matter and energy causing spacetime to become curved (see here for a short conceptual introduction to how spacetime curvature can explain the way particle trajectories are affected by gravity).
With these ideas in mind, if you want to understand Greene's comment about higher dimensions being "curled up", picture the surface of a long cylinder or tube, like a garden hose. This surface is 2-dimensional, but you only have to travel a short distance in one direction to make a circle and return to your place of origin--that's the "curled up" dimension--while the perpendicular direction can be arbitrarily long, perhaps infinite. You could imagine 2-dimensional beings that live on this surface, like those in the famous book Flatland that has introduced many people to the idea of spaces with different numbers of dimensions (and there's also a "sequel" by another author titled Sphereland which introduces the idea that a 2D universe could actually be curved). But if the circumference of the cylinder was very short--shorter even than the radius of atoms in this universe--then at large scales this universe could be indistinguishable from a 1-dimensional universe (like the "Lineland" that the characters in Flatland pay a visit to). So a similar idea is hypothesized in string theory to account for the fact that we only experience our space as 3-dimensional even though the mathematics of string theory requires more spatial dimensions--the extra dimensions are "curled up" into small shapes known as Calabi-Yau manifolds, which play a role analogous to the circular cross-sections of the 2D cylinder or tube I described (although in brane theory, an extension of string theory, it's possible that one or more extra dimensions may be "large" and non-curled, but particles and forces except for gravity are confined to move in a 3-dimensional "brane" sitting in this higher-dimensional space, which is termed the "bulk").
The definition of dimension used here is that of a dimension of a manifold - essentially, how many coordinates (=real numbers) we need to describe the manifold (thought of as spacetime).
Manifolds may carry a notion of length, and one of volume. They may also be compact or non-compact, roughly1 corresponding to finite and infinite. E.g. a sphere of radius $R$ is compact and two-dimensional - every point on it can be described by two angles, and it's volume is finite as $\frac{4}{3}\pi r^2$. Ordinary Euclidean space $\mathbb{R}^3$ is non-compact and three-dimensional - every point in it is described by three real numbers (directed distance from an arbitrarily chosen origin), and you can't associate a finite volume to it.
Note that, on the sphere, you can keep increasing any one of the coordinates and, sooner or later, you will return to the point you started from. All dimensions here are "small"/compact. In Euclidean space, you never return to the origin, no matter how far you go. All dimensions are "big"/non-compact.
An infinitely long cylinder is now an example of where the two dimensions are different. Take as coordinates the obvious two - the length (how far "down"/"up" on the cylinder you are), and the angle (where on the circle that's at that length you are). The length dimension is non-compact - you never return to your starting point if you just keep increasing that coordinate. The angle coordinate is compact - you return after $2\pi$ to your starting point, and the "size" of the dimension is the radius of the circle. This is an example of a "curled up dimension". If you are far larger than the radius, you might not even notice you are on a cylinder, and instead think you are on a one-dimensional line!
1The mathematical definition is by covering properties which are not as easily translated into intuition.
It is relatively easy to imagine 4th dimension. That would be time. But time as if we had a time machine with which we we could arbitrarily move through it. Higher dimensions would be more difficult but possible as if "destinies". For example imagine that in destiny1 you see a car going from A to B in a given hour but in alternate destiny2 you see the same car going from B to A. And so on. Now imagine those destinies as if they were books in a row on a shelf 1,2..., n. Now imagine number of shelves (1...n) x (1...n) with destinies. Or number of shelves in number of rows in a library of destinies in a 3 dimensional table (1,2...n) x (1,2...n) x (1,2...n). Or a 3 dimensional library of destinies that changes in time. Now if you can imagine all this you just imagined 4 x 4=16 dimensions.
• That picture didn't work for me. I just kept picturing my library. – Kyle Kanos Apr 11 '15 at 2:50
• When people say time is the 4th dimension they are assuming there are only 3 spatial dimensions (and the dimensions don't have any intrinsic order, so you could just as easily say time is the 1st dimension and the next three are spatial dimensions, which is actually how 4-vectors in relativity are usually written). But it's certainly mathematically possible to describe a universe with more than 3 spatial dimensions--in superstring theory there are 9 space dimensions and 1 time dimension (so you might say 'time is the 10th dimension' here), in M-theory there are 10 space dimensions and 1 time. – Hypnosifl Apr 11 '15 at 20:14
## protected by Qmechanic♦Feb 5 '16 at 16:05
Thank you for your interest in this question. Because it has attracted low-quality or spam answers that had to be removed, posting an answer now requires 10 reputation on this site (the association bonus does not count).
|
|
# Question 5f23b
Dec 23, 2017
Here's what I got.
#### Explanation:
The idea here is that you can use the $\text{pH}$ of the solution to find the initial concentration of hydroxide anions, ${\text{OH}}^{-}$.
You know that an aqueous solution at ${25}^{\circ} \text{C}$ has
$\textcolor{b l u e}{\underline{\textcolor{b l a c k}{\text{pH + pOH = 14}}}}$
This means that you have
$\text{pOH" = 14 - "pH}$
$\text{pOH} = 14 - 14 = 0$
Since you know that
"pOH" = - log(["OH"^(-)])
you can say that the concentration of hydroxide anions will be
$\left[\text{OH"^(-)] = 10^(-"pOH}\right)$
$\left[{\text{OH}}^{-}\right] = {10}^{- 0}$
["OH"^(-)] = "1 M"
Now, when the $\text{pH}$ of the solution is equal to $7$ at ${25}^{\circ} \text{C}$, the solution is actually neutral, meaning that you have
$\text{pOH" = "pH} = 7$
In this case, the concentration of hydroxide anions, which is equal to the concentration of hydronium cations, ${\text{H"_3"O}}^{+}$, is equal to
["OH"^(-)] = 1 * 10^(-7)color(white)(.)"M"
So, you know that you must decrease the concentration of hydroxide anions by a factor of
"DF" = (1 color(red)(cancel(color(black)("M"))))/(1 * 10^(-7)color(red)(cancel(color(black)("M")))) = color(blue)(10^7)#
Here $\text{DF}$ is the dilution factor. Now, in order for the concentration of the solution to decrease by a factor equal to $\text{DF}$, its volume must increase by a factor of $\text{DF}$.
This means that you have
$\text{DF" = V_"diluted"/V_"stock}$
${V}_{\text{diluted" = "DF" * V_"stock}}$
In your case, the volume of the diluted solution must be equal to
${V}_{\text{diluted" = color(blue)(10^7) * "0.2 mL}}$
${V}_{\text{diluted" = 2 * 10^6color(white)(.)"mL}}$
So in order for the $\text{pH}$ of the solution to decrease from $14$ to $7$, the volume of the solution must increase from $\text{0.2 mL}$ to $2 \cdot {10}^{6}$ $\text{mL}$.
You can thus say that you can dilute this solution by adding enough water to get the total volume of the solution to $2 \cdot {10}^{6}$ $\text{mL}$, which, for all intended purposes, is equal to
${V}_{\text{water" = 2 * 10^6color(white)(.)"mL" - "0.2 mL}}$
${V}_{\text{water" = "1,999,999.8 mL}}$
However, keep in mind that you only have one significant figure for the volume of the initial solution, so you must round the answer to one significant figure.
${V}_{\text{water" = "2,000,000 mL}}$
|
|
# Element six times larger than chromium?
Quantum mechanics do not give atoms a fixed size, since there are excited states which make electrons jump into a higher orbital and eventually might ionize the atom. However, the closest thing is "Element six times more atomic radius than chromium" which would be an element with 840 pm (10*10^-12 meters) that is not to be discovered quite yet, or so since osmium is only 840 kJ/mol in ionization energy. Make sure the units are the same.
Osmium only has an empirical atomic radius of 185 pm, and chromium's empirical atomic radius is 160 pm.
|
|
Boost.Hana 1.3.0 Your standard library for metaprogramming
boost::hana::type< T > Struct Template Reference
## Description
### template<typename T> struct boost::hana::type< T >
C++ type in value-level representation.
A type is a special kind of object representing a C++ type like int, void, std::vector<float> or anything else you can imagine.
This page explains how types work at a low level. To gain intuition about type-level metaprogramming in Hana, you should read the tutorial section on type-level computations.
Note
For subtle reasons, the actual representation of hana::type is implementation-defined. In particular, hana::type may be a dependent type, so one should not attempt to do pattern matching on it. However, one can assume that hana::type inherits from hana::basic_type, which can be useful when declaring overloaded functions:
template <typename T>
void f(hana::basic_type<T>) {
// do something with T
}
The full story is that ADL causes template arguments to be instantiated. Hence, if hana::type were defined naively, expressions like hana::type<T>{} == hana::type<U>{} would cause both T and U to be instantiated. This is usually not a problem, except when T or U should not be instantiated. To avoid these instantiations, hana::type is implemented using some cleverness, and that is why the representation is implementation-defined. When that behavior is not required, hana::basic_type can be used instead.
## Lvalues and rvalues
When storing types in heterogeneous containers, some algorithms will return references to those objects. Since we are primarily interested in accessing their nested ::type, receiving a reference is undesirable; we would end up trying to fetch the nested ::type inside a reference type, which is a compilation error:
auto ts = make_tuple(type_c<int>, type_c<char>);
using T = decltype(ts[0_c])::type; // error: 'ts[0_c]' is a reference!
For this reason, types provide an overload of the unary + operator that can be used to turn a lvalue into a rvalue. So when using a result which might be a reference to a type object, one can use + to make sure a rvalue is obtained before fetching its nested ::type:
auto ts = make_tuple(type_c<int>, type_c<char>);
using T = decltype(+ts[0_c])::type; // ok: '+ts[0_c]' is an rvalue
## Modeled concepts
1. Comparable
Two types are equal if and only if they represent the same C++ type. Hence, equality is equivalent to the std::is_same type trait.
namespace hana = boost::hana;
struct T;
struct U;
BOOST_HANA_CONSTANT_CHECK(hana::type_c<T> == hana::type_c<T>);
BOOST_HANA_CONSTANT_CHECK(hana::type_c<T> != hana::type_c<U>);
int main() { }
2. Hashable
The hash of a type is just that type itself. In other words, hash is the identity function on hana::types.
namespace hana = boost::hana;
// hana::hash is the identity on hana::types.
hana::hash(hana::type_c<int>),
hana::type_c<int>
));
hana::hash(hana::type_c<void>),
hana::type_c<void>
));
int main() { }
## Synopsis of associated functions
template<typename T >
constexpr type< T > type_c {}
Creates an object representing the C++ type T. More...
constexpr auto decltype_ = see documentation
decltype keyword, lifted to Hana. More...
constexpr auto typeid_ = see documentation
Returns a hana::type representing the type of a given object. More...
template<>
constexpr auto make< type_tag > = hana::decltype_
Equivalent to decltype_, provided for convenience. More...
constexpr auto make_type = hana::make<type_tag>
Equivalent to make<type_tag>, provided for convenience. More...
constexpr auto sizeof_
sizeof keyword, lifted to Hana. More...
constexpr auto alignof_
alignof keyword, lifted to Hana. More...
constexpr auto is_valid
Checks whether a SFINAE-friendly expression is valid. More...
## Friends
template<typename X , typename Y >
constexpr auto operator== (X &&x, Y &&y)
Equivalent to hana::equal
template<typename X , typename Y >
constexpr auto operator!= (X &&x, Y &&y)
Equivalent to hana::not_equal
## Public Member Functions
constexpr auto operator+ () const
Returns rvalue of self. See description.
## Associated functions
template<typename T >
template<typename T >
constexpr type type_c {}
related
Creates an object representing the C++ type T.
template<typename T >
constexpr auto decltype_ = see documentation
related
decltype keyword, lifted to Hana.
Deprecated:
The semantics of decltype_ can be confusing, and hana::typeid_ should be preferred instead. decltype_ may be removed in the next major version of the library.
decltype_ is somewhat equivalent to decltype in that it returns the type of an object, except it returns it as a hana::type which is a first-class citizen of Hana instead of a raw C++ type. Specifically, given an object x, decltype_ satisfies
decltype_(x) == type_c<decltype(x) with references stripped>
As you can see, decltype_ will strip any reference from the object's actual type. The reason for doing so is explained below. However, any cv-qualifiers will be retained. Also, when given a hana::type, decltype_ is just the identity function. Hence, for any C++ type T,
decltype_(type_c<T>) == type_c<T>
In conjunction with the way metafunction & al. are specified, this behavior makes it easier to interact with both types and values at the same time. However, it does make it impossible to create a type containing another type with decltype_. In other words, it is not possible to create a type_c<decltype(type_c<T>)> with this utility, because decltype_(type_c<T>) would be just type_c<T> instead of type_c<decltype(type_c<T>)>. This use case is assumed to be rare and a hand-coded function can be used if this is needed.
### Rationale for stripping the references
The rules for template argument deduction are such that a perfect solution that always matches decltype is impossible. Hence, we have to settle on a solution that's good and and consistent enough for our needs. One case where matching decltype's behavior is impossible is when the argument is a plain, unparenthesized variable or function parameter. In that case, decltype_'s argument will be deduced as a reference to that variable, but decltype would have given us the actual type of that variable, without references. Also, given the current definition of metafunction & al., it would be mostly useless if decltype_ could return a reference, because it is unlikely that F expects a reference in its simplest use case:
int i = 0;
auto result = metafunction<F>(i);
Hence, always discarding references seems to be the least painful solution.
## Example
namespace hana = boost::hana;
struct X { };
BOOST_HANA_CONSTANT_CHECK(hana::decltype_(X{}) == hana::type_c<X>);
BOOST_HANA_CONSTANT_CHECK(hana::decltype_(hana::type_c<X>) == hana::type_c<X>);
BOOST_HANA_CONSTANT_CHECK(hana::decltype_(1) == hana::type_c<int>);
static int const& i = 1;
BOOST_HANA_CONSTANT_CHECK(hana::decltype_(i) == hana::type_c<int const>);
int main() { }
template<typename T >
constexpr auto typeid_ = see documentation
related
Returns a hana::type representing the type of a given object.
hana::typeid_ is somewhat similar to typeid in that it returns something that represents the type of an object. However, what typeid returns represent the runtime type of the object, while hana::typeid_ returns the static type of the object. Specifically, given an object x, typeid_ satisfies
typeid_(x) == type_c<decltype(x) with ref and cv-qualifiers stripped>
As you can see, typeid_ strips any reference and cv-qualifier from the object's actual type. The reason for doing so is that it faithfully models how the language's typeid behaves with respect to reference and cv-qualifiers, and it also turns out to be the desirable behavior most of the time. Also, when given a hana::type, typeid_ is just the identity function. Hence, for any C++ type T,
typeid_(type_c<T>) == type_c<T>
In conjunction with the way metafunction & al. are specified, this behavior makes it easier to interact with both types and values at the same time. However, it does make it impossible to create a type containing another type using typeid_. This use case is assumed to be rare and a hand-coded function can be used if this is needed.
## Example
#include <string>
namespace hana = boost::hana;
struct Cat { std::string name; };
struct Dog { std::string name; };
struct Fish { std::string name; };
bool operator==(Cat const& a, Cat const& b) { return a.name == b.name; }
bool operator!=(Cat const& a, Cat const& b) { return a.name != b.name; }
bool operator==(Dog const& a, Dog const& b) { return a.name == b.name; }
bool operator!=(Dog const& a, Dog const& b) { return a.name != b.name; }
bool operator==(Fish const& a, Fish const& b) { return a.name == b.name; }
bool operator!=(Fish const& a, Fish const& b) { return a.name != b.name; }
int main() {
hana::tuple<Cat, Fish, Dog, Fish> animals{
Cat{"Garfield"}, Fish{"Jaws"}, Dog{"Beethoven"}, Fish{"Nemo"}
};
auto mammals = hana::remove_if(animals, [](auto const& a) {
return hana::typeid_(a) == hana::type<Fish>{};
});
BOOST_HANA_RUNTIME_CHECK(mammals == hana::make_tuple(Cat{"Garfield"}, Dog{"Beethoven"}));
}
template<typename T >
template<>
constexpr auto make< type_tag > = hana::decltype_
related
Equivalent to decltype_, provided for convenience.
## Example
namespace hana = boost::hana;
struct X { };
BOOST_HANA_CONSTANT_CHECK(hana::make<hana::type_tag>(X{}) == hana::type_c<X>);
BOOST_HANA_CONSTANT_CHECK(hana::make<hana::type_tag>(hana::type_c<X>) == hana::type_c<X>);
BOOST_HANA_CONSTANT_CHECK(hana::make_type(X{}) == hana::type_c<X>);
BOOST_HANA_CONSTANT_CHECK(hana::make_type(hana::type_c<X>) == hana::type_c<X>);
int main() { }
template<typename T >
constexpr auto make_type = hana::make
related
Equivalent to make<type_tag>, provided for convenience.
## Example
namespace hana = boost::hana;
struct X { };
BOOST_HANA_CONSTANT_CHECK(hana::make<hana::type_tag>(X{}) == hana::type_c<X>);
BOOST_HANA_CONSTANT_CHECK(hana::make<hana::type_tag>(hana::type_c<X>) == hana::type_c<X>);
BOOST_HANA_CONSTANT_CHECK(hana::make_type(X{}) == hana::type_c<X>);
BOOST_HANA_CONSTANT_CHECK(hana::make_type(hana::type_c<X>) == hana::type_c<X>);
int main() { }
template<typename T >
constexpr auto sizeof_
related
Initial value:
= [](auto&& x) {
using T = typename decltype(hana::decltype_(x))::type;
return hana::size_c<sizeof(T)>;
}
sizeof keyword, lifted to Hana.
sizeof_ is somewhat equivalent to sizeof in that it returns the size of an expression or type, but it takes an arbitrary expression or a hana::type and returns its size as an integral_constant. Specifically, given an expression expr, sizeof_ satisfies
sizeof_(expr) == size_t<sizeof(decltype(expr) with references stripped)>
However, given a type, sizeof_ will simply fetch the size of the C++ type represented by that object. In other words,
sizeof_(type_c<T>) == size_t<sizeof(T)>
The behavior of sizeof_ is consistent with that of decltype_. In particular, see decltype_'s documentation to understand why references are always stripped by sizeof_.
## Example
namespace hana = boost::hana;
struct X { };
static_assert(hana::sizeof_(hana::type_c<X>) == sizeof(X), "");
static_assert(hana::sizeof_(1) == sizeof(1), "");
static_assert(hana::sizeof_(hana::type_c<int>) == sizeof(int), "");
int main() {}
template<typename T >
constexpr auto alignof_
related
Initial value:
= [](auto&& x) {
using T = typename decltype(hana::decltype_(x))::type;
return hana::size_c<alignof(T)>;
}
alignof keyword, lifted to Hana.
alignof_ is somewhat equivalent to alignof in that it returns the alignment required by any instance of a type, but it takes a type and returns its alignment as an integral_constant. Like sizeof which works for expressions and type-ids, alignof_ can also be called on an arbitrary expression. Specifically, given an expression expr and a C++ type T, alignof_ satisfies
alignof_(expr) == size_t<alignof(decltype(expr) with references stripped)>
alignof_(type_c<T>) == size_t<alignof(T)>
The behavior of alignof_ is consistent with that of decltype_. In particular, see decltype_'s documentation to understand why references are always stripped by alignof_.
## Example
namespace hana = boost::hana;
struct X { };
static_assert(hana::alignof_(hana::type_c<X>) == alignof(X), "");
static_assert(hana::alignof_(1) == alignof(decltype(1)), "");
static_assert(hana::alignof_(hana::type_c<int>) == alignof(int), "");
int main() { }
template<typename T >
constexpr auto is_valid
related
Initial value:
= [](auto&& f) {
return [](auto&& ...args) {
return whether f(args...) is a valid expression;
};
}
Checks whether a SFINAE-friendly expression is valid.
Given a SFINAE-friendly function, is_valid returns whether the function call is valid with the given arguments. Specifically, given a function f and arguments args...,
is_valid(f, args...) == whether f(args...) is valid
The result is returned as a compile-time Logical. Furthermore, is_valid can be used in curried form as follows:
is_valid(f)(args...)
This syntax makes it easy to create functions that check the validity of a generic expression on any given argument(s).
Warning
To check whether calling a nullary function f is valid, one should use the is_valid(f)() syntax. Indeed, is_valid(f /* no args */) will be interpreted as the currying of is_valid to f rather than the application of is_valid to f and no arguments.
## Example
#include <string>
#include <vector>
namespace hana = boost::hana;
int main() {
// Checking for a member
struct Person { std::string name; };
auto has_name = hana::is_valid([](auto&& p) -> decltype((void)p.name) { });
Person joe{"Joe"};
static_assert(has_name(joe), "");
static_assert(!has_name(1), "");
// Checking for a nested type
auto has_value_type = hana::is_valid([](auto t) -> hana::type<
typename decltype(t)::type::value_type
> { });
static_assert(has_value_type(hana::type_c<std::vector<int>>), "");
static_assert(!has_value_type(hana::type_c<Person>), "");
}
|
|
JFIF>CREATOR: gd-jpeg v1.0 (using IJG JPEG v62), default quality C \$.' ",#(7),01444'9=82<.342C 2!!22222222222222222222222222222222222222222222222222 " }!1AQa"q2#BR\$3br %&'()*456789:CDEFGHIJSTUVWXYZcdefghijstuvwxyz w!1AQaq"2B #3Rbr \$4%&'()*56789:CDEFGHIJSTUVWXYZcdefghijstuvwxyz ?( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( )+Kp7pQ^{m`XT:o\$0FuVTʓW⺂aQ&w8,z!xtn8)4f kP9aֽ"#͇>q\Z]z6x nZ3 {?\$ O?u+i<ͬ{7lnlg q 4X\U%?PpE3-"9\91o@EQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQERsKEq~"}c;7\$#q^g:!MB͕3ı O a@;4rԜW n4 }5hftp84\$U?*jlp~aOJ}**j-.7WVb7٪,Jj|-ӵDyt-=^?xCU/-SeMk[vY}+) wc+ΰ9<; Ty ^,9ɪҹha%\8o;Ɂ&9x)d1ZW.ٕ E{͵s rn#Ķ:( #ms[{= w~@SQD OB)QEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQET6^BĒ\dTPz\^I}ۼR8?C_Vn婷IP\$rCڀ<o` H2;9OWE܁zMEfX~lg_*qFVQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQE&+'t4ޑ8oz|A1_ېʼ a+mFIq3^9/e{QC{zu= BLtdbA@,WG3<.jY7ۢH} kv%A#pyk3X-oD_;ht5@5ef-<`hRAik}Qfu;kv)RdFPQEQEQE^yJ@fV*B?vGC@ GppjҰe :Tj.0 # V_.ShP##QEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQERR@z3Ixv^ik{*m=ǵ}\zV'1}>áJGmfxG(EqYO? ^tms|k²cC@%li KɁҪ_^iYQnk}TK)Fʶ8h` sN~ڀ=_>2m{[ҺM5Oq^=/4Yh+мu5 v4VQ]%)ǵO\=}Lmѷ/oʵ5( ( (l?AE9UP}+ QGqïE} 7J?*Z֦((((((((((((((((((((((((((((((((((((((((. {{RX\aA-iq4"3-=g">S;s Ehl~u~+}i_imﺰ؏^Mjrk)UrzҟNz\$mk{p#Ju^E mGcއmgTL5xK;`Ìƀ;/ e:IC]䐋5)9d keUb9a xM6Wۢ3 7U|]?VrCZ8FT3XC˹ @T8qI+4L RZ:<#`GJPq1TMTo(G\$>uPJ6\ʚQ Λ[)`QEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQFh4QEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEy/O 58 v̩}zE<b{sjP%ưm{o/겯 Uaa_;f?~5ddd4z^i|p焗}kqdB85&dncq[6-f#sހ.rbG=:aj&SʑӸ5b)۳[mߐySW)03sK@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@ D6*('8Ծ!J{RTI koILyNsS,\$ ~趞1ڬM*ܴm,1Wii%銽sk`\v~pp\$_UlԵw܉m.OlYyVtG @e (CjJZ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( Bd̨sUStQ:(]LsK hGW_X\i lWgB:;X(,u>̛dCfcVB^*eA*Wawf_QRzѵ4?:vd=xo,ЧTf/lO̙I9_p6 ;}}((((((((((i96ۑIԕVX@6 ƄT=E-SD ݁8ѝeݷdd61Nñ-QHAQG5#}sךcD9Gz!Sȗݬ3: U?0_fc+JxʷWjԴ @Z?19TﳔɎ8qw*zΈqҟ<4/!k+̟fM&+1ï ?˻:ԗZ\Q_Zeo5J.CC/Ǝ-հaݐ Oi`c2 W a?vq "=v /УVoO1Gg3Kү#Y+c\$,Vq.YdZU\D}WZTO85,dJGU'M#4WoCȭKLOɃR3B"aFƀ&" ( ( ( ( ( ( `Wm(c*x)H#[ڍY6ccdKs cYqT\S\$8Ͻ:;ɭ\ErN݂ݍ:ϐ5e^GY2T"HX7CR"%Y?y)}* q1KٚȲ8ڋʩ}, ҝ+%V=MIESR4tRSS(BgX9ɟ|w5mZ-oT:0g^;?ںOyOZg^C%#|\$Q5P5F.XgžX'A@ΒJ+*^fJ8_Cb:E˸u C@/~\$e ogKԦu/mDǨ+-Qtjo:p2.!JIe#EuN^3v@3f`_^.mԯ{S=jn`IWza[W JY0=9D)0#'q=ĖRpÑY+(bS\7:/njIc|tI >c3},뗴&?zsjjʺ4K\Ŗ> rSWI"+VxEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEU;*QeպI~=j2|"nk%ռ?CҽG2VʤVN"m/nC#FdƱ]m+~F<¬d_|}nlz8ȯJ{RVxnVkp#ۚGP>qR-k&.]ܺ\W>?9;zd+J?ƺ[OYkfiwW/rJPݳ :Pj80kF;w_pk;-85n _WhP;gpfX_l¬^Uuj=Jב[3qsn-xJlu@o!܌\tn젊؆ V4Xݜ+wt( P{fUsր)RFpGPIJؓQG>jv,>{c@CfqE]~m)%fr9T-+r1ja_*o*בH<MXhx+} >[# kTmN2?2)OWrDv@%Qʱ;|WY垇5RKFa4 ={nwLFX~ErC\$@GDU~(hi0(df0@1{-Yw'i\▊!o S\$'ٮdl3DfPc*^ҤW?ŋ??X;t?@EykZ6W?rY5 KRoiD~%j͓?F&o*uHI[Qѻ7+̖h*r נW 2T 2*SP(X&=ݴd.a+ 8 xkxQxmX5M,IOlIjwG>zpzч5Z_4͎j5c#H>Q@.ˎخ>+ѥd'挞ҟxz:ṇq5mdrjL;A50SO{<WcOEu]brc?}FmY9< #Vg5̨ X0"u}kZ}E K@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@!\$1ʥdEu=f']fGmnQ@=ׁ\$RMaf_+Ű\$oGp5ꔘ)m[m>S S'![hgR dxSGm3+@@ @OXUb=G5?ɆDt q"M*[qE>VmLJmN+νn kyX,k)mڰ=go;0{)>t݇!# B,m,ЃEbi>3lKHwW8a] cD+8`}kί| ŭd(s՞^c"1Sï ?~Q6X"(zfK]S3LjC fnioEEZ>@| oipiۨE=pflHRoZ[Ϲ[+Vψ\$LdGLحյ]˦ TujƋtyݬ2ݔn5[6RjhP\mC!Yd|wrJ:\͖xF4~Aη6q\$u^5jM%Hg'Hph ꦩnsÃKY)lsW^HާasX 9WBzwCc]Z67R֓m9OZ>qp욌t+&Z ǬRsW=џ 3 Ey5ΛYͤCU2^8=*ƛϱ\<1pdhĶڜl<4֟toY_dp\$e_jhx_c:rۻP |mMv)-m֦O /PŪ۽-#~:{q^OH/od[q#Y)[2ni? E_nQAJAJu5F iVŋ^D65Mχ 2Zܠu>Q,0k'zAVdbb''b+n<7|^ YJw&?C\,@>CYsnڵ!J/kڈ_Ӝ!? "5 ӁW@#b+cb)s@T1NCSPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPb\$QʻdEua}s&%)+lfUv4PkQ Nӭe6!0Ez tct(u(9m.;hx~ztr?A\|Cpy3flO1셇5E>#;،n>^c^gό6[ڳ= vN4Y8[2'QF0~_ὗjK hy=b|]Z aZs|f>g=eO\$wV!8+'>fͫImxeF?pTq\)eYt) \$UgWP&F}EY\"0esKh1NNNy5ii6T89\֨dnmĶ?P\ Y-(N(p;8qz5+0#N8oʵZ[yu;MujZuLk9& a\wfդ;QF S=gaY >xݳPl\$HԊKx`@dN;V,Ɇ#"T_.S PA};;?>03j-) Gݓ֬mI_x_pPiĦv@WwO7i(?Clu,^o8@d)ޔn?<ȹ5:7U>ӦJ>kf_j6v(UuvS.lŊMA6YWZHv4 AR n/\S5uelu MNV+"q^ o\{T{yWtLيP8ei;]R[ӞFQ@\$l%15?4xBƏ,d*]]. |zZóxP\`HS@((((((((((((((((((((((((((((^S,\$Z\$k,l2V\$bU;نwY65s=}=TrCCPgq4#/W!TJA|s/\(B ;GT JE&(*`G(]huq7ɹ]\$霊h>n>H; =7]b[{e+qAPzvXg G}ǟDO1G|Ak9c|ܺ[lMUW҆,cE1-a?T}54.?3ՋCi^.1&lXQ~ T*|uMiV(.=gG1Ǔ Z])TqY_ %[I6;FkwEBO4ڡj1c"Fԁ+r?^XV~&2@5Xhiď{kյHl@ލOHo?"=\$t>G0%dz7}zij?XUa٩5*ku"ԴY*f=^[G+z%YݙIV cգl:[[ޜ",wG 9CHBc<~'L4-t/pkشMf\ı >I }#u&&[(As2m%0 (_cҨxJ{ǀ8alֲ.|?[%/\Ҁ<.O"XHצFCnIj{)e%g pk!KIn&P-GLNfD/lNJ+OLy.ೆݿwi`ֶmn ,jE1Xu(Ǐ0} kKvk!?*k[咫'FCZ6W7ZY 0}= tkejVnʼnoVwU;K/Xމe(]ԎӋKQ۵Eޟ':AxJ@S(N9!4%f^UyWso7}*/4?ĺƎܳ 9ip8Hņ9A5w4QEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQE q<?PkE)@gCEy^EQ'3Wqh.YX5Ti"x.\$0+WNK2BLR6J] s9zxz"vuAX}=kDZ{ND.a`r7(lVdy{c[f;>P̮J0K\$ bמ²Z䈃P +ʫJfDh"~e UUt=NE8 P51O#ssDŽu@*'8?Q^yvIxyئI rGEe=An={貴_cs*i\+el혠`A-DQ~%[c{jҴ6V># VnG(d219hu~(((((((((((((((((((((((((((((((((((((((((N暶waw3K],צS]D*OPFhw*O> фTsriZ|^EE^j#OCWKiXoa17ӑYʂ0i#gaffqFhtݹ VAXh)+}6&M.pMwp12?@ ai"zҎpNS죯[(h((((((((((((((((((((((((((((((((((((((((((LPUX`Aih _#ӥf\b R?颟]=];Y qd8,[+OWoգ
|
|
## Elementary Technical Mathematics
Published by Brooks Cole
# Chapter 6 - Review - Page 263: 10
#### Answer
x= $\frac{3}{5}$
#### Work Step by Step
4x+1=4-x Move terms 4x+x=4-1 collect terms and subtract 5x=3 divide both sides by 5 x= $\frac{3}{5}$
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback.
|
|
# Find the Work Done and the Heat Transfer
1. Feb 6, 2013
### Northbysouth
\1. The problem statement, all variables and given/known data
Air is compressed in a piston-cylinder assembly from p1 = 10 lbf/in2, T1 = 500°R to a final volume of V2 = 1ft3 in a process described by pv1.25 = constant. The mass of air is 0.5 lb. Assuming ideal gas behavior and neglecting kinetic and potential energy effects, determine the work and heat transfer, each in Btu, using a) constant specific heats evaluated at 500°R and b) data from the table. Compare the results
The table is titled 'Ideal Gas properties of air
2. Relevant equations
Cv = constant
pv = nRT
Q = ΔE + W
E = U + KE + PE
W = ∫P dv
3. The attempt at a solution
I'm not sure where to begin with this question. I think I need to find the initial volume first but I'm not sure what the best way to do this is. I had thought to use:
pv = nRT
But then I'd have to convert my 0.5 lb of air into moles which would require me to convert everything else into English units. Is there a simpler way?
2. Feb 6, 2013
### Simon Bridge
Start with what you know about the kind of process that can be described as $pv^{1.25}=\text{const}$.
The only way to avoid converting things is to find an expression of the equations that uses the things you want.
3. Feb 7, 2013
### Staff: Mentor
You need to find the value of the gas constant expressed in units of (psi)(ft3)/((lb-mole)(degree R)). Look it up with Google. Converting 0.5 lb to lb-moles is easy, since you just divide by 29. Then you are ready to apply the ideal gas law to calculate the initial volume in ft3.
|
|
# Inverse Sum Squared
Calculus Level 4
$\large\displaystyle\sum_{n=1}^{\infty}\left({\displaystyle\sum_{k=0}^nk^2}\right)^{-1}$ If the value of the series above can be expressed as $$a-b\ln{(c)}$$ where $$a,b$$ are positive integers and $$c$$ is the minimum possible positive integer, find the value of $$a+b+c$$.
|
|
# What does Brooks mean by Representation?
So for a class I'm reading Brooks' "Intelligence without representation". The introduction is dedicated to slating Representation as a focus for AI development.
I've read that representation is the problem of representing information symbolically, in time for it to be useful. It's related to the reasoning problem, which is about reasoning about symbolic information.
But I don't feel like I really understand it at any practical level. I think the idea is that when an agent is given a problem, it must describe this problem in some internal manner that is efficient and accurately describes the problem. This can then also be used to describe the primitive actions that can be taken to reach the solution. I think this then relates to Logic Programming eg Pascal?
Is my understanding of Representation correct? Just what does representation look like in practice, are there any open source codebases that might make a good example?
• According to me, yes, it's basically how to represent knowledge using Logic! – kiner_shah Feb 8 '17 at 12:18
First, Pascal is not a logic programming language. Logic programming refers to languages like Prolog where you have a declarative style of programming compared to a imperative style like you have in Pascal. Maybe you mean if-statements which are typical for imperative languages.
Second, representation means a certain level of abstraction. For instance, a model represents a certain part of the reality. Imagine a cup on a table. If the agent has a representation of this situation, it has a symbol table and a symbol cup which represents the things in the real world. Now it can have a relation on(cup, table) which represents the situation that the cup is on the table. This type of abstraction can be easily represented in a logic language like first order logic. Therefore, one uses logic programming languages like Prolog or other types of languages like OWL to represent knowledge and perform reasoning. So the important term to which Brooks refers is Knowledge Representation and Reasoning.
Third, if your agent only have sensor data like video or sonar data, then it knows only distances or pixels from the real world. That is not meant with representation. Brooks' Creatures have only this information and calculate with this data directly to perform an action without reasoning. In that sense also artificial neural networks have no representation.
Finally, for an open source project to understand representation I would recommend the above mentioned OWL. You can look at the Protégé editor for working with OWL. In an OWL ontology you can represent relations between things and reason about them.
In that paper, Brooks introduced the basis for what became known as his "subsumption architecture". The idea was to get away from the 1980's popular approach of a single global representation of all the components of the problem space that had required the task of robot task planning to juggle every constraint in the world into one giant disordered mess of states and state transitions. Rather than represent every element in the world in a single model (The Representation), Brooks suggested it was preferable to build a hierarchy of submodels of the world (subsets of states and transitions) in which smaller tasks could be more readily planned. Then as these rudimentary skills were mastered, they could be combined to address a hierarchy of bigger and more complex tasks (bigger tasks subsume smaller tasks and benefit from their already having been solved).
Yes, representation did not fully go away, but it was redistributed hierarchically so that much of the state could be abstracted away from the higher level of the bigger problem that you need to solve. Planning became like coordinating a hierarchical army of skills, where the general doesn't need to plan every movement of the private in order to manage a battle. Instead, that general need only tell the colonels what to do, and the colonels tell the majors, and so on down to the privates. Now the general solves problems by coodinating multiple sub-hierarchies available to him/her by delegating authority to coordinate behavior at the appropriate level of abstraction: like division, brigade, battalion, company, and squad. That's Brooks' Subsumption Architecure: the general needs to represent a battle plan only as "the world according to colonels".
https://en.wikipedia.org/wiki/Subsumption_architecture
|
|
# How do you balance the equation for this reaction: C(s) +SO_2(g) -> CS_2(l) + CO_2(g)?
May 10, 2017
3C(s) + 2SO_2 (g) → CS_2 (l) + 2CO_2 (g)
#### Explanation:
C(s) + SO_2 (g) → CS_2 (l) + CO_2 (g)
We have:
LHS
C= 1
S= 1
O=2
RHS
C=2
S=2
O=2
We balance the equation by making the number of each element the same on both sides.
So:
LHS
C= $1 \times 3$ = 3
S= $1 \times 2$= 2
O= $2 \times 2$= 4
RHS
C= 1 (from $C {S}_{2}$) + $\left(2 \times 1\right)$ (from the $C {O}_{2}$) = 3
S= 2
O=$2 \times 2 = 4$
Then we rewrite the equation:
3C(s) + 2SO_2 (g) → CS_2 (l) + 2CO_2 (g)
Hope this isn't too confusing!
May 10, 2017
$3 C \left(s\right) + 2 S {O}_{2} \left(g\right) \to C {S}_{2} \left(l\right) + 2 C {O}_{2} \left(g\right)$
#### Explanation:
Given: $C \left(s\right) + S {O}_{2} \left(g\right) \to C {S}_{2} \left(l\right) + C {O}_{2} \left(g\right)$
According to the Law of Conservation of mass, the number of atoms of each type on the left side of the equation must be equal to the number of atoms on the right side of the equation:
$\underline{L e f t \text{ " Side) " " ul(Right" } S i \mathrm{de}}$
$C - 1 \text{ } C - 2$
$S - 1 \text{ } S - 2$
$O - 2 \text{ } O - 2$
You can't change formulas, only add a different number of molecules (coefficient). Since the Carbon atoms are in two different molecules on the right, start with balancing the Sulfur:
$C \left(s\right) + 2 S {O}_{2} \left(g\right) \to C {S}_{2} \left(l\right) + C {O}_{2} \left(g\right)$
$\underline{L e f t \text{ " Side) " " ul(Right" } S i \mathrm{de}}$
$C - 1 \text{ } C - 2$
$S - 2 \text{ } S - 2$
$O - 4 \text{ } O - 2$
Add a $2$ in front of the $C {O}_{2}$ to balance the Oxygen:
$C \left(s\right) + 2 S {O}_{2} \left(g\right) \to C {S}_{2} \left(l\right) + 2 C {O}_{2} \left(g\right)$
$\underline{L e f t \text{ " Side) " " ul(Right" } S i \mathrm{de}}$
$C - 1 \text{ } C - 3$
$S - 2 \text{ } S - 2$
$O - 4 \text{ } O - 4$
Finally, put a $3$ in front of the single Carbon:
$3 C \left(s\right) + 2 S {O}_{2} \left(g\right) \to C {S}_{2} \left(l\right) + 2 C {O}_{2} \left(g\right)$
$\underline{L e f t \text{ " Side) " " ul(Right" } S i \mathrm{de}}$
$C - 3 \text{ } C - 3$
$S - 2 \text{ } S - 2$
$O - 4 \text{ } O - 4$
Balance!
|
|
# Spaced Repetition for Mathematics
Recently, I’ve been experimenting with using spaced repetition for self-studying “advanced” mathematics. This post goes through my motivations for adopting this system, as well as a few techniques I’ve used in adapting it to mathematics.
# What is Spaced Repetition
After reviewing a fact, or re-working some problem, you’re more familiar with it. If you’re quizzed about that problem soon after, you’ll be able to effortlessly recall the solution. But, if you don’t visit this problem for long enough, you’ll eventually forget this solution. The better you understand something, the longer it takes to forget it.
The idea behind spaced repetition is to reintroduce ideas and problems right before you forget them, forcing you to engage with the idea, and refreshing your memory. This is spaced, because the periods without recall get longer and longer, as your knowledge becomes more and more solid
A Spaced Repetition System, or SRS, for short, is some piece of software, or even analog system, that allows you to create “flashcards” which include some form of prompt, and some form of answer. The system then quizzes you on these cards, so that you’re able to recall the information when prompted. It repeats these quizzes in a spaced way, to try and keep the information fresh, while spacing reviews out as you get better at recalling that information.
In practice, the system prompts you for some information, and then you reveal the answer, and mark that information as recalled or forgotten. If you recalled the information correctly, then the waiting period until you see that information again gets longer. If you fail, on the other hand, then the waiting period shortens, or even gets reset, in some systems.
# Why Spaced Repetition?
The promise of an SRS is being able to actually keep the information you learn through studying, instead of having it slowly attrition away. A lot of information is “use it or lose it”, and if you don’t actively use some technique or knowledge, you risk just forgetting it. An SRS tries to game your memory, by having you “use it” right before you would forget it.
Personally, I’ve found it a more compelling alternative to note-taking. In practice, I basically never reviewed the notes for courses I took, even if I dutifully gathered notes, like at the beginning of my degree.
I think of a spaced repetition system as a way to actively review your notes and findings in an automated way, so that you don’t spend needless time reviewing things you already know well, or forget to review things that you don’t have a solid grasp on.
Since the system spaces information based on whether you’ve managed to recall that information, it keeps things fresh, without wasting your time.
With this kind of system in place, my “permanent” notes migrate towards pieces of knowledge I want to remember. I still use pen-and-paper notes, but this is more so for engaging with material as I learn it, and not for revisiting later. The SRS takes care of scheduling my revisiting for me.
## Easy Applications
An obvious application of SRS, and the place where I first encountered it, was for learning languages. When learning a foreign language, you have a lot of information that you want to memorize. For example, vocabulary is something you’d obviously want to have ready when speaking the language.
Using an SRS for vocabulary is quite simple to understand. You add cards with the word you want to learn, and the translation on the back. You might even add the reverse card as well, in order to be able to translate a word from your native language back to the foreign one.
As you continue to use your SRS, you build up a larger and larger war chest of vocabulary at your disposal. The SRS makes sure you to keep all the vocabulary fresh, even the words that you don’t use everyday.
## For Mathematics?
While it’s clear how an SRS would be very useful for learning a foreign language, it’s not clear how it would be applicable to mathematics. Learning a foreign language involves a lot of necessary memorization, but mathematics, especially as you get to a higher level, is less about memorizing identities and formulas, and more so about solidifying broader understanding about different subjects.
If you can rederive some proof, there’s less of a need to have its proof memorized. On the other hand, you do want to have a working understanding of the different concepts involved in a mathematical subject.
When working through some subject, you inevitably have a collection of techniques and properties at hand, since they keep coming up as you move forward.
Having an SRS lets you keep this understanding in place even as you don’t actively work in a subject anymore. Personally, I’d find it a bit of a shame to lose this framework of knowledge as soon as I’m not actively studying some subject anymore. This is especially important for me, since I study mathematics more so as a hobby, instead of it being my full-time occupation. I can imagine this being less necessary if you spend your days entrenched in some field of mathematics.
# Different Types of Cards
When preparing cards for mathematics, you want to focus on fundamental understanding, as opposed to surface level facts. The kind of cards you need to make are less evident than for learning a foreign language, where it’s clear what information needs to be committed to memory.
Here are a few types of cards that I use, and have found to be useful.
## Definitions
The first type of card is for definitions. As an example, here’s one card I’ve written:
Definition: A Noetherian $R$-module
An $R$-module where every submodule is Noetherian
You see the prompt above the divider, and you have to recall everything below the divider.
A card like this helps you remember the definition of a mathematical object.
This is actually the kind of thing you want to memorize verbatim, since knowing the definition of some object is pretty important to be able to use it. Of course, as you’re actively working in some subject, the common definitions will be second-hand, but having them committed to long-term memory is nice when you aren’t actively engaging with the material anymore.
## Characterizations
A lot of mathematical objects have concrete definitions, as well as universal properties that characterize them. Knowing the characterization along with the concrete definition can be very useful:
What is the characteristic property of the quotient topology?
Given a quotient map $q : X \to Y$, we have that any other map $f : Y \to Z$ is continuous $\iff$ $fq : X \to Z$ is continuous:
The card could also be less blunt, trying to relate some concrete concept to a more general categorical characterization:
What is an adjunction space, categorically?
This is an example of a pushout:
This card is asking the question more indirectly, and also requires connecting the concept of adjunction space with the broader concept of a pushout.
By forcing you to recall the connections between different definitions, this strengthens the understanding of both subjects.
## Comparisons
With this kind of card, you have to recall some kind of property, but with less focus on the detail of the property, and more so about drawing a connection between two different concepts.
For example:
How are the characteristic properties of the quotient and subspace topologies similar?
You have a duality in the characteristic properties:
You already have cards for each of these properties individually, requiring more detail. This card helps illuminate the similarities between concepts you’ve already learned, reinforcing those concepts at the same time.
## Motivations
The idea behind this card is to introduce some piece of intuition, or motivation behind some concept:
How do quotients let us construct new spaces?
Quotients let you fold down sections of a space down to single points, creating a new space.
This isn’t about recalling the motivation exactly, but rather anchoring a concept in your mind with some ways of thinking about that concept.
## Proofs
A lot of the mathematical content of a subject comes forward through the techniques you use to prove things. Because of this, learning the proofs of various things let you understand the subject at a technical level.
### Proof Strategies
One type of card is a rough sketch of a proof, where you recall the strategies involved in proving something:
What are two main strategies to show that the sphere $\mathbb{S}^n$ is locally euclidean?
• Divide it into different hemispheres, each of which is the graph of a function
• Project the sphere onto a hyperplane of dimension $n$
The idea here is to keep in mind different high-level strategies for proving some theorem. Having multiple strategies in mind helps reinforce both of them individually, which is a bonus.
This type of card can also include high-level overviews of longer proofs, to complement complete cards for their individual parts.
### Actual Proofs
This kind of card is pretty straightforward. You need to recall the proof of some property or theorem:
Lemma: If $q: X \to Y$ is an open quotient map, and $\mathcal{R} := \{(x_1, x_2) \ |\ q(x_1) = q(x_2)\}$ is closed, then $Y$ is Hausdorff
Given $y_1 \neq y_2$, we have some $y_1 = q(x_1)$, and $y_2 = q(x_2)$, by surjectivity.
Then, $(x_1, x_2) \notin \mathcal{R}$, so we have a neighborhood $V_1 \times V_2$ containing that point, and disjoint from $\mathcal{R}$.
Then, $q(V_1)$ and $q(V_2)$ contain $y_1$ and $y_2$, are disjoint, and are open by assumption of $q$ being open.
$\square$
The goal here isn’t to recall the proof verbatim, but rather to be able to rederive the proof with pen and paper. This requires knowing the high level steps,and being familiar with the various properties involved.
You want theorems that aren’t very long here. Very often, I’ll need to split up longer proofs into multiple cards requiring the full proof of some step, and then a higher level card asking for the strategy of the proof overall, assembling the small parts together.
## Conditions for Theorems
Whereas the previous kind of cards gives us the theorem, and asks us for the proof, this kind of card gives us part of a theorem, and asks us what conditions we need for this theorem to hold. For example, here’s a complement to the last card, providing an example of this kind of reversal:
Suppose $q: X \to Y$ is a quotient map. If $\mathcal{R} := \{(x_1, x_2) \ |\ q(x_1) = q(x_2)\}$ is closed in $X \times X$, what condition on this map needs to hold for $Y$ to be Hausdorff?
We need this quotient map to be open
This forces you to engage with the statement of a theorem, remembering some of the conditions necessary to make it work. You can even split up the theorem statement in multiple ways like this, each of which reinforces the idea of the theorem in your memory.
## Recalling Properties
Another kind of card asks you to recall some property of some object. A good example is recalling equivalent properties:
What are 3 equivalent conditions to being a saturated set?
• $U = q^{-1}(q(U))$
• $U$ is a union of fibers
• For every $x \in U$, $q(x') = q(x) \implies x' \in U$
I’d usually also have a proof for each of the implications involved in proving this, along with a definition card for “saturated set”. By having this extra card, we solidify our understanding of how these properties relate to eachother, and can split up the larger proof of this equivalence into smaller cards, since this overview card serves to glue them together.
# Overlapping Information
I try to overlap the information throughout multiple cards, which helps reinforce the concepts involved. This is even better if you use different kinds of cards for the same concept. For example, having a definition card, a characterization card, some proofs, and then recalling various properties.
Having cards that bring about the connections between different concepts is quite nice as well, since it helps to both help recall the properties of various objects, and also to see the broader picture of a subject.
# Splitting Information
For definitions, you can usually just make a card verbatim. On the other hand, you can’t exactly copy most proofs down, since they lead to cards that are way too long. Because of this, you have to try and split proofs down to make “bite-sized” cards. One technique I’ve mentioned is to split a large proof into small proof cards for each part, and then create a high level proof overview card gluing them together.
Breaking down larger concepts into smaller chunks is also a great way to engage with the material, since you’re forced to distill and play with the proofs and objects of the subject.
# Some SRS Applications
Anki is probably the most popular SRS, and has LaTeX support, as well as images, which are key for mathematics. On the other hand, I personally prefer Mochi Cards, since I find the interface much cleaner, and the latex entry much more seamless:
(there’s also a dark mode, which is neat).
# Conclusion
Personally, I’ve been doing this for about a month now, and I’ve really found the benefits to be much clearer as opposed to detailed note taking. Now I feel like I actually get compounding benefits from my notes, thanks to the SRS.
Hopefully this might provide a few ideas for people looking to apply SRS to mathematics. If you’re still skeptical about the benefit of this kind of system, I’d recommend checking out this article, which really inspired me to try applying it to mathematics.
|
|
# indir() changes the directory just for its block
Update for April 2017: indir is now a documented part of the language.
How many times have I wanted to change the current working directory for just one block? It’s usually such a pain because I have to remember to change it back.
In Perl 6, the indir routine does this for me:
indir $some_dir, { ...; # do stuff in that directory } The current working directory is changed just for that block of code. When that block is done, the value is back to whatever it was before. That this is built into the language is quite pleasing to me (and the dependency-adverse contexts I tend to work in). There’s no variable to mess with (in Perl 6 that would be $*CWD, which has some issues at the moment) and it reads nicely as a sentence.
This isn’t documented yet (it only shows up in Synopsis 16, last changed 10 years ago, and was announced as part of 2014.10), but it’s in there and it mostly works. At the moment (Rakudo 2016.11) it only works when the target directory is readable and writeable, but I’ve filed RT #130460 about that. And, I’ve filed GitHub #1091 about the lack of documentation (which depends on someone declaring what it should actually do). Since it’s untested and undocumented, that means it might change or disappear. Consider that before you get too excited. But, how can you not get excited about something that makes common things really easy?
I feel a little bad that I’m not stopping to fully investigate these corner cases, but if I did that I’d never get any writing done for Learning Perl 6! Maybe someone else has the time to make this bit of Perl 6 tested and documented.
|
|
# Dual income tax reform in Germany: A microsimulation approach
1. Universität Hohenheim, Germany
Research article
Cite this article as: G. Wagenhals; 2011; Dual income tax reform in Germany: A microsimulation approach; International Journal of Microsimulation; 4(2); 3-13. doi: 10.34196/ijm.00049
## Abstract
This paper assesses the impact on household labor supply of a Dual Income Tax reform in Germany. It relies on GMOD, a population-based tax-benefit microsimulation model, and uses flexible mixed logit simulation estimators.
## 1. Introduction
In most countries the income tax system is based on a comprehensive tax: one tax rate is imposed on the total income of a taxpayer. In contrast, the Dual Income Tax (DIT) is a schedular tax that combines a progressive tax schedule for labor income with a flat tax rate on capital income.
The introduction of such a tax is a hot topic world-wide. Its possible advantages and drawbacks are discussed not only in the European Nordic countries which have introduced them some years ago, but also in the rest of Europe [see e.g. Genser & Reutter 2007), in Japan (Morinobu, 2004), and in Canada (Sørensen, 2007). Also in Germany, economists and policy makers consider a dual income tax as an option for a fundamental tax reform. Recently, the SVR2008 published an expertise commissioned by the German Ministry of Finance.1 This report strongly favors the introduction of a Dual Income Tax reform, which is contrary to a previous proposal of the German Council of Economic Experts, analyzed by Bach and Steiner (2007) – practically revenue neutral.
Previous economic research on the impact of this proposal has concentrated on long-run effects and is mainly based on general equilibrium simulation models. The results of these exercises are largely robust with respect to the choice of the behavioral elasticities, with one important exemption: the labor supply elasticity. Actually, the labor supply elasticity is the only behavioral parameter that is crucial for the long run effects of a DIT. (See e.g. Radulescu (2007) for Germany, or Keuschnigg and Dietz (2007), p. 204, for Switzerland.) General equilibrium simulation studies assume that the household sector can be modelled by a traditional Ramsey model with only one single “representative” agent characterized by only one labor supply elasticity. Population based microeconometric analyses show, however, that in the population labor supply elasticities vary widely depending on gender, number of children, regional and other factors. This suggests to supplement existing macroeconomic DIT studies by microeconometric simulation analyses.
The main contribution of the present paper is a microsimulation analysis of the incentive effects of the most recent DIT proposal for Germany based on a behavioral microeconometric model. It is the first evaluation of the behavioral effects of the income tax amendment EStG-E proposed by the Council of Economic Experts based on a mixed logit simulation approach. This improves previous studies based on a traditional conditional logit model and older data sets (Bach & Steiner, 2007; Wagenhals & Buck, 2009) because the conventional IIA assumption implicit in the traditional model is strongly rejected by our data.
The rest of the paper is organized as follows. The next section describes the data: the generation of the base data set, the definition of the tax base, with special reference to the calculation of capital income and labor income, and the tax schedule used. Then, two sections describe discrete choice models for single persons as well as for cohabiting and married couples. They provide mixed logit estimation and calibration techniques and present empirical results. The last section concludes.
## 2. Data
### 2.1 Base data set
My base data set is drawn from the 2005 wave of the German Socio-Economic Panel (GSOEP). I merge some retrospective data from the 2006 wave, such that the base data set refers to 2005, the same fiscal year the German Council of Economic Experts reform proposal refers to.
Choice alternatives are generated using GMOD, a tax-benefit microsimulation model for Germany developed by the author. GMOD calculates personal income taxes, social security contributions and benefits. It allows for the standard benefits and tax concessions such as housing benefits and child-benefits, allowances for child-raising, child-raising leave and maternity as well as assistance for education or vocational training. Furthermore, it accounts for tax abatements for dependent children and for the education of dependent children, for child-care, tax credits for single parents, maintenance payments and income-splitting for married couples.
### 2.2 Tax base
A dual income tax differentiates between capital and labor income and taxes these differently. So I have to derive two tax bases, one for capital income, and one for other sources of household income, called “labor income”.
Currently, GMOD calculates seven sources of income, because the current German Income Tax Law (Einkommensteuergesetz, EStG) levies one tax schedule on the sum of income from the following exhaustive list of seven sources of income: (1) income from agriculture and forestry (§ 13 EStG), (2) income from trade or business (§ 15 EStG), (3) income from independent personal services (§ 18 EStG), (4) income from dependent personal services, i.e. wages, salaries and retirement benefits of civil servants (§ 19 EStG), (5) income from investment of capital (§ 20 EStG), (6) income from rentals and royalties (§ 21 EStG), and (7) other income designated in § 22 EStG, e.g. notational return on investment of a pension from statutory pensions insurance. Gross earnings from all of these sources are calculated by GMOD based on information available in my base data set described above, on the German income tax law and on income tax directives. Net income from the first three sources is calculated on the accrual basis and called “profit-based income”. Net income from the other four sources is defined as the excess of total receipts over income-related expenses.
According to the German Income Tax Law (EStG-E) as proposed by the SVR2008, there will be four categories of income (see § 2 EStG-E): (1) income from business activities (§ 13, § 15 and § 18 EStG-E), (2) income from employment (§ 19 EStG-E), (3) capital income (§ 20, § 21, and § 22 EStG-E), and (4) derived income (§ 23 EStG-E).
To map the traditional seven sources of income to the new categories capital and labor income I proceed as follows: (1) Income from business activities corresponds to traditional “profit based income”. (2) Income from employment corresponds to the traditional income from dependent personal services. (3) Income from capital assets is derived from traditional income from capital investments (§ 20 EStG) and income from rentals and royalties (§ 21 EStG). (4) Derived income corresponds to traditional “other income” designated in § 22 EStG. In my base data set, I do not have information on income from private sale transactions mentioned in § 22 EStG-E, so I have to ignore it. I assume that the cash method of accounting is used with respect to income from business activities. Thus, taxpayers report their revenues when received and their expenses when paid.
The labor income tax base includes wages, salaries (including the employers’ calculatory salaries) and civil pensions. The capital income tax base includes business profits, dividends, capital gains, interest and rental income. Taxable labor income and taxable capital income are obtained by subtracting personal allowances and other deductions from the respective tax base. The savings allowance of 750 Euro for the income from capital investments (§ 20 Section 4 EStG) will be abolished.
The decomposition of profit-based income in a capital and a labor share is the crux of the DIT. The calculatory salary, i.e. the labor income of the self-employed, is hard for an individual to measure and even harder for tax authorities to verify. I use the following trick: First, I estimate a Mincer-type wage function based on observable characteristics on the sub-sample of wage earners. In my data I observe determinants of wages for all individuals. Therefore, I am able to predict the calculatory salary for all self-employed individuals. Finally, I derive their capital income as the residual. (See Wagenhals & Buck, 2009, for details about this decomposition approach.) In my view, this approach improves upon the procedure of using an arbitrary sharing rule (see e.g. Gottfried and Witczak, 2009). In any case, due to data constraints, I did not have the option to compute calculatory salaries for the self-employed as residual profits.
### 2.3 Tax schedule
The dual income tax combines a progressive tax schedule for labor income with a flat tax rate on capital income.
I assume that labor income is taxed according to the current income tax schedule (§ 32 a EStG), and that capital income is taxed with a rate of 25 percent (including the solidarity surcharge). To avoid legal concerns and a potential deterioration with respect to the current legal position I follow the German Council of Economic Experts (2008, pp. 108–110).
Figure 1
§ 32 a EStG-E defines the DIT income tax function: Let v denote taxable income in Euro (rounded down to the next Euro) according to § 2 Section 5 Clause 1 EStG-E, and let K denote capital income in the sense of § 2 Section 3 Clause 2 EStG-E (also rounded down to the next Euro). Then the personal income tax T amounts to
T = 0 if v ≤ 7664 T = 883.74x2 + 1500x if 7665 ≤ v ≤ 12584 $T=(v−12584)0.251+t+952$ if 12585 ≤ v ≤ 12585 + K $T=883.74y2+1500y+0.251+tK$ if 12586 + K ≤ v ≤ 12739 + K $T=228.74z2+2.397z+989+0.251+tK$ if 12740 + K ≤ v ≤ 52151 + K $T=0.42(v−K)−7914+0.251+tK$ if v ≥ 52152 + K
where
x = ( v-7664)/10000 y = ( v-7664-K )/10000 z = ( v-12739-K )/10000
(x, y, z are also rounded down to the next Euro). The symbol t denotes the solidarity surcharge (i.e. currently t = 0.055).
Figure 1 shows marginal tax rates, i.e. the tax rates that apply to the last Euro of the tax base. It compares the current marginal tax schedule (based on § 32 a EStG) with the DIT schedule for taxpayers with fixed taxable capital incomes of different amounts. The solid line shows the current tax rates (T 2005). Under a dual tax regime this line refers to taxpayers without any capital income. For taxpayers with capital income a so-called stretched tax scale is applied. This means that the taxation of capital income is incorporated in the tax schedule in terms of a proportional zone. The length of this variable proportional band depends on the amount of taxable capital income. The return component of income is taxed proportionally while any profits beyond those are taxed progressively as labor income. As examples, I use marginal tax rates for capital incomes of 10,000, 20,000 and 30,000 Euro.
## 3. Labour supply of single persons
To quantify the labor supply incentives of a DIT introduction, I use a discrete choice structural labor supply model. The basic idea is to replace the budget set of a household with a finite number of points, and optimize over this set of points. I first set out the theory, estimation and simulation results for single persons. In the following section, I turn to persons living in couples.
### 3.1 Theory
I represent any individual’s choice set by a six-state labor supply regime and approximate actual hours per week ha by hours levels hH := {0,10,20,30,40,50} applying the following rounding rule
(1) $\begin{array}{l}h=0\text{\hspace{0.17em}}if\text{\hspace{0.17em}}{h}^{a}<5\\ =10\text{\hspace{0.17em}}if\text{\hspace{0.17em}}5\text{\hspace{0.17em}}\le {h}^{a}<15\\ \cdot \cdot \cdot \\ =50\text{\hspace{0.17em}}if\text{\hspace{0.17em}}{h}^{a}\ge 45.\end{array}$
For all elements h in the choice set H I use GMOD to calculate household net incomes as
(2) $c\left(h\right)\le wh+\mu -T\left(h,\text{\hspace{0.17em}}w,\text{\hspace{0.17em}}\mu |\mathbf{x}\right)$
where w denotes the gross wage rate, μ is income from sources other than employment and T(·) is the tax-benefit function conditional on a vector of observed characteristics x. I assume that preferences can be represented by a utility function U and that individuals act as if to maximize utility
(3) $\underset{h\in \text{H}}{\mathrm{max}}U\left(c\left(h\right),\text{\hspace{0.17em}}\overline{h}-h|\mathbf{\text{x}}\right)$
subject to the budget constraint
(4) $c\left(h\right)\le wh+\mu -T\left(h,\text{\hspace{0.17em}}w,\text{\hspace{0.17em}}\mu |\mathbf{x}\right)$
where $h¯$ denotes total time endowment.
To obtain random utilities (needed for estimation and simulation), I add state-specific random errors e(h) to utilities for all states hH. This gives random utilities
(5) ${U}^{*}\left(h\right):=U\left(c\left(h\right),\text{\hspace{0.17em}}\overline{h}-h|\mathbf{\text{x}}\right)+ e\left(h\right)$
If the state-specific random errors are i.i.d. Type I extreme value distributed, then the probability P of working hj hours is
(6) $P\left(h={h}^{j}|\text{x)=}\frac{\mathrm{exp}\left[U\left(c\left(h\right),\text{\hspace{0.17em}}\overline{h}-h|\text{x)]}}{\sum _{{h}^{k}\in \text{H}}\mathrm{exp}\left[U\left(c\left({h}^{h}\right),\text{\hspace{0.17em}}\overline{h}-{h}^{k}|\text{x)]}}$
For the specification of the utility function, I follow the tradition started by Keane and Moffit (1998) and choose a flexible form quadratic direct utility function. Written in terms of individual consumption c = c(h) and leisure $l:=h¯−h$ I obtain
(7) $U\left(c,l\right)={\alpha }_{cc}{c}^{2}+{\alpha }_{ll}{l}^{2}+{\alpha }_{cl}cl+{\beta }_{c}c+{\beta }_{l}l$
where αcc, αll, αcl, βc and βl denote unknown parameters. I assume that preferences vary through taste-shifters on income and leisure coefficients:
(8) $\begin{array}{l}{\beta }_{c}={\gamma }_{{c}_{0}}+{\text{xg}}_{\text{c}}\\ {\beta }_{l}={\gamma }_{{l}_{0}}+{\text{xg}}_{\text{l}}\end{array}$
where γc0, gc, γl0 and gl denote unknown coefficients and x is a (row) vector of individual characteristics2 and following van Soest (1995) dummy variables for part-time categories in order to capture the disutility of inflexible arrangements in the utility functions.
I deal with unobserved wage rates by estimating the expected market wage rates conditional on observed characteristics using Heckman’s two-step approach: I first estimate a reduced form participation equation, get the Mill’s rate and use it in a Mincer-type wage equation to correct for sample selection bias. I account for wage rate prediction errors by integrating out the disturbance term of the wage equation in the likelihood as suggested by van Soest (1995).
Estimation of the unknown preference parameters is based on a mixed logit model proposed by McFadden and Train (2000). Under mild regularity conditions, it can approximate the choice probabilities of any discrete choice model derived from random utility maximization as closely as desired. Under the assumption that the income coefficients are normally distributed and all other coefficients are fixed I proceed by maximum simulated likelihood.
#### 3.1.1 Simulation
I use the parameters of the estimated utility functions to simulate the effects of the introduction of a Dual Income Tax on labor supply following the individual calibration procedure described by Kreedy and Kalb (2005, page 720 et seq.) Based on the selected sample I use hours worked to obtain a starting point for simulation.
For each individual, unobserved utility components (error terms) are drawn from the type I extreme value distribution and added to the measured utility in each of the hours points. A draw is accepted, if it results in the observed labor supply being the optimal choice for the individual. Otherwise, the draw is rejected, and another error term is drawn and checked. This is repeated until all sets of error terms are drawn and accepted. For each individual in the sample, this exercise is repeated 100 times.
The resulting sets of error terms drawn are possible values leading to the observed hours worked. Given individual characteristics and the draws, I can determine post-reform utility at each hours point. This generates a distribution of post-reform hours worked, conditional on the observed pre-reform hours, for each individual. The results of the draws can be summarized in transition tables.
### 3.2 Empirical results
#### 3.2.1 Sample selection
The starting point for my sample is the base data file described above. First, I concentrate on single adult respondents. I exclude persons younger than 25 or older than 55 years of age, persons in education, pensioners, persons doing compulsory community or military services, persons receiving profit incomes only and civil servants. After dropping persons with missing observations of crucial variables, I receive a sample with 1,116 single men and another sample with 1,312 single women.
#### 3.2.2 Estimation
The main preference parameter estimates for single men and single women are given in Table 4 in the Appendix. The estimated parameter values are consistent with economic theory. The marginal utility of net income and of leisure are statistically significant at least at the five percent level, they are positive and declining with income. The interaction effect between leisure and income is practically zero. Not surprisingly, there is less desire to work if an individual is handicapped, or if there is a nursing case in the family. For single mothers, there is less desire to work, the effect being smaller for older children. The main difference between male and female preferences is the role of children: While the number of children in different age groups has the expected sign and magnitude for women, these variables were not significant for men and so were dropped.
In Table 4 I do not report the estimates of the part-time dummies for part-time choice opportunities. For men and women, they all are negative and highly significant. This reflects the fact that low demand for part-time workers requires more effort (and hence less utility) to find part-time employment. Furthermore, all estimated standard errors of the random coefficients were highly significant. This suggests considerable unobserved heterogeneity of preferences. The traditional conditional logit approach is strongly rejected!
#### 3.2.3 Simulation
Tables 1 and 2 present the simulation results for the labor supply of single persons. The last column gives the distribution of labor supply before the reform, the last row refers to the distribution after the reform. The numbers inside the matrix are row percentages indicating the probability of individuals from one hours point to another one.3
Table 1
Table 2
My results suggest that – in a short run partial equilibrium view – the DIT reform suggested by the German Council of Economic Experts (2008) will generate only small labor supply reactions. For single persons, on average, they will be slightly positive.
## 4. Labor supply of couples
### 4.1 Theory
For married or cohabiting couples I allow for joint decision making. Each partner may account for the decision of the other partner when deciding on hours worked. I assume that each household member selects one of six regimes: non-participation or one of five employment states ∊ H = {0,10,20,30,40,50} (the elements denoting hours per week). Thus, the choice set for couples is H × H. Actual individual working hours observed in the data are rounded (as above) to fit the elements in this set.
I assume that preferences of a couple may be represented by a flexible quadratic utility function
(9) $\begin{array}{l}U\left(c,{l}_{f},{l}_{m}\right)={\alpha }_{cc}{c}^{2}+{\alpha }_{mm}{l}_{m}^{2}+{\alpha }_{ff}{l}_{f}^{2}\\ \text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}+{\alpha }_{cm}c{l}_{m}+{\alpha }_{cf}c{l}_{f}+{\alpha }_{fm}{l}_{f}{l}_{m}\\ \text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}+{\beta }_{c}c+{\beta }_{m}{l}_{m}+{\beta }_{f}{l}_{f}\end{array}$
Here $lm:=h¯−hm, lf:=h¯−hf; l$ denotes leisure and h hours worked of male ( m ) or female ( f ) persons, while c denotes their joint net income. The α and β coefficients are unknown population parameters. The sign of αfm indicates whether male and female leisure are substitutes or complements. Similar to the case of single persons, some preference parameters depend on personal, household and other characteristics. Supplementing representative household utility I add stochastic terms accounting for state specific errors (needed for estimation and simulation) and finally derive the probability of choosing any consumption-leisure combination in the set of feasible household decisions. Estimation proceeds via mixed logit and simulation by calibration4 as described above. I derive household gross earnings assuming state invariant male and female gross wage rates, and calculate the corresponding state specific net household income for each hours combination in the choice set H × H using GMOD and my base data set described above.
### 4.2 Empirical results
#### 4.2.1 Sample Selection
Starting point for my analysis is again the base data file described above, now concentrating on couples. I apply the sample selection criteria as described for singles to both partners and obtain a sample of 2,015 couples.
#### 4.2.2 Estimation
The main preference parameter estimates for married and cohabiting couples are given in Table 5 in the Appendix. The estimated parameter values are consistent with economic theory. The marginal utility of both partners’ leisures and the marginal utility of net income are highly significant, positive and declining with income. The interaction effect between male and female leisure is statistically not different from zero and practically unimportant. Not surprisingly, there is less desire to work for mothers, the effect being smaller for older children.
Due to space restrictions, in Table 6 I do not report the estimates of the part-time dummies for part-time choice opportunities. (But they are used in the simulation exercises.) For both sexes, they all are negative and highly significant. As in the case of singles, this reflects the fact that low demand for part-time workers requires more effort to find part-time employment. Again, all estimated standard errors of the random coefficients were highly significant. As for singles, this suggests considerable unobserved heterogeneity of preferences of couples. Again, the traditional conditional logit approach is strongly rejected!
#### 4.2.3 Simulation
Tables 3 and 4 show that the partial equilibrium impact of the reform proposal on the labor supply of couples is relatively small. As was to be expected, positive incentive effects are most likely for married females. This result, not shown in an extra table, is in line with the vast majority of previous studies on female labor supply.
Table 3
Table 4
## 5. Aggregate results
If I finally aggregate my results over persons living as singles and living in couples, I find a positive incentive effect of the introduction of a DIT. On average, labor supply increases. But does working time increase as well?
If you accept my results and the German Council of Economic Experts assumption of a 1.1 percent reform-induced increase in labor demand, then in the whole economy annual working time will increase on average and in the aggregate. This effect, combined with the smaller tax burden on capital income, yields an increase in aggregate net income. Thus, a DIT induced demand-side driven growth – as suggested by CGE studies – is indeed possible.
## 6. Conclusion
This paper evaluates the incentive effects of a Dual Income Tax reform in Germany based on GMOD, a tax-benefit microsimulation model, and on a sample of thousands of households representative for the German population. Instead of invoking the assumption of one given labor supply elasticity as current general equilibrium simulation models do, I allow for labor supply responses of the persons in a sample representative for the resident population in Germany. I do not present estimated elasticities, but my results are based on estimated responses of individuals in a representative sample. Estimates are obtained with a highly flexible mixed logit simulation approach. It includes the traditional conditional logit model used in a former study as a special case (which is rejected).
The main finding is that reform induced labor supply responses are small, but – on average – positive. Thus, my results empirically support the central, but untested, labor supply assumptions in traditional CGE models.
Further research is needed to assess the detailed impact of a DIT in Germany on the distribution of income and of individual economic welfare. This promises additional advantages in policy advising in comparison to computable equilibrium simulation models and may be a useful supplementation to these approaches.
## Footnotes
### 1.
A more comprehensive version of this report, available only in German, includes an elaborate tax amending bill of the income tax law proposed (EStG-E). (See Sachverständigenrat zur Begutachtung der gesamtwirtschaftlichen Entwicklung, Max- Planck-Institut für Geistiges Eigentum, Wettbewerbs- Und Steuerrecht, Zentrum für Europäische Wirtschaftsforschung, 2006).
### 2.
Individual characteristics embedded are age and age squared, place of residence in East Germany (yes=1, no=0), nursing case in the family (yes=1, no=0), citizenship (not German = 1, German=0), high education, i.e. degree from universities or from universities of applied sciences (yes=1, no=0), low education, i.e. no vocational qualification attained (yes=1, no=0), handicapped person (yes=1, no=0), number of children under 6, and number of children between 6 and 16.
### 3.
See Kreedy and Kalb (2005) for a very detailed explanation of labor supply transition matrices.
### 4.
“Individual” calibration now refers to calibration based on the estimated preference functions of the couples.
Table 5
Table 6
## References
1. 1
MITAX - Mikroanalysen und Steuerpolitik
(2007)
54–83, Steuerreformplane im empirischen Vergleich, MITAX - Mikroanalysen und Steuerpolitik, Statistik und Wissenschaft, Band 7, Wiesbaden, Statistisches Bundesamt. Beitrage zur wissenschaftlichen Konferenz am 6. und 7. Oktober 2005 in Lüneburg.
2. 2
Discrete hours labour supply modelling: Specification, estimation and simulation
(2005)
Journal of Economic Surveys 19:697–734.
3. 3
Moving towards Dual Income Taxation in Europe
(2007)
FinanzArchiv: Public Finance Analysis 63:436–456.
4. 4
Dual Income Tax. A Proposal for Reforming Corporate and Personal Income Tax in Germany
(2008)
Heidelberg: Physica-Verlag.
5. 5
Reformoption Duale Einkommensteuer - Aufkommens- und Verteilungseffekte. IAW-Diskussionspapier 58
(2009)
Tübingen: Institut für Angewandte Wirtschaftsforschung.
6. 6
A structural model of multiple welfare program participation and labor supply
(1998)
International Economic Review 39:553–589.
7. 7
A growth oriented dual income tax
(2007)
International Tax and Public Finance 14:191–221.
8. 8
Mixed MNL model for discrete response
(2000)
Journal of Applied Econometrics 15:447–470.
9. 9
Capital income taxation and the Dual Income Tax. Technical report, Policy Research Institute, Ministry of Finance of Japan. PRI Discussion Paper Series (No.04A-17)
(2004)
Capital income taxation and the Dual Income Tax. Technical report, Policy Research Institute, Ministry of Finance of Japan. PRI Discussion Paper Series (No.04A-17).
10. 10
CGE Models and Capital Income Tax Reforms: The Case of a Dual Income Tax for Germany
(2007)
Lecture Notes in Economics and Mathematical Systems, Vol. 601, Springer, Berlin-Heidelberg.
11. 11
Reform der Einkommens- und Unternehmensbesteuerung durch die Duale Einkommensteuer
(2006)
Reform der Einkommens- und Unternehmensbesteuerung durch die Duale Einkommensteuer, Expertise im Auftrag der Bundesminister der Finanzen und für Wirtschaft und Arbeit vom 23. Februar 2005.
12. 12
The Nordic Dual Income Tax: Principles, practices, and relevance for Canada
(2007)
13. 13
Structural models of family labor supply: A discrete choice approach
(1995)
Journal of Human Resources 30:63–88.
14. 14
Implementing a Dual Income Tax in Germany. Effects on labor supply and income distribution
(2009)
Journal of Economics and Statistics 229:84–102.
## Article and author information
### Author details
1. #### Gerhard Wagenhals
Department of Economics, Universität Hohenheim, Germany
##### For correspondence
G.Wagenhals@uni-hohenheim.de
### Acknowledgements
The data used in this publication were made available by the German Socio-Economic Panel Study (GSOEP) at the German Institute for Economic Research (DIW), Berlin.
The author is greatly indebted to Gijs Dekkers, Ulrich Scheurle and two anonymous referees for valuable comments.
### Publication history
1. Version of Record published: August 31, 2011 (version 1)
|
|
# Averaging a set of percentages between certain important levels
I have a spreadsheet, called "To Do", with set of percentages in column E and importance levels in column A:
On another spreadsheet, I'm entering a formula to calculate the average of these percentages, but only those between certain importance levels:
This is the formula in question, indented for easier reading:
=TO_PERCENT(
DIVIDE(
SUMIF('To Do'!A3:A, "<2000", 'To Do'!E3:E)
- SUMIF('To Do'!A3:A, "<=1000", 'To Do'!E3:E),
MAX(
COUNTIF('To Do'!A3:A, "<2000")
- COUNTIF('To Do'!A3:A, "<=1000"),
1
)
)
)
Is there any better way to do this?
I originally wanted to use SUMIFS and COUNTIFS, but it said:
error: Unknown function name
• Ask Google to add SUMIFS() / COUNTIFS() / AVERAGEIF() ? Jul 1, 2014 at 15:04
• But seriously, there is a scripting language inside Google Spreadsheets that allows you to write your own worksheet functions, but I don't know anything about it. developers.google.com/apps-script/quickstart/macros Jul 1, 2014 at 15:33
• This question is being discussed on meta Jul 24, 2014 at 2:39
• Averaging percentages sounds like a suspicious mathematical technique. Are you sure that it makes sense to do that? (We have no idea what problem you are really trying to solve, so we have no context with which to help you catch logic errors.) Jul 24, 2014 at 5:22
As @200_success pointed out in his comment, averaging percentages is a suspicious mathematical technique.
Each task in the TODO list could have its own weight, or relative value - a number that represents a chunk of progress towards "done"; then you can calculate a percentage by adding up the relative weights of all completed tasks (or more accurately, of a value derived from that task's %completed and weight), and dividing by the sum of the weights of all tasks.
I haven't played much with , but I know pretty well; the formula you've come up with is exactly the kind of formula I'd have used in Excel 2003, before SUMIFS and COUNTIFS were introduced in Excel 2007.
When I enter a formula in a spreadsheet, I want to be able to copy that formula over to the next cell, without having to modify it. This involves a number of principles:
• Don't hard-code cell references. In Excel I would have used names and/or tables - not sure supports that, but in any case if none of that is supported you can still, and should, use absolute cell references - refer to 'To Do'!$A:$A and 'To Do'!$E:$E. Not sure what A3:A refers to.
• Don't hard-code your variables. If each column is going to use a different set of [Priority] values, "<2000" and "<=1000" shouldn't be hard-coded. You can insert two rows above row 1, and put 1000 in row 1 and 2000 in row 2, so instead of "<2000" you'll have "<" & A$2, and instead of "<=1000" you'll have "<=" & A$1.
Lastly, I find
MAX(COUNTIF('To Do'!A3:A, "<2000") - COUNTIF('To Do'!A3:A, "<=1000"), 1)
MAX(1, COUNTIF('To Do'!A3:A, "<2000") - COUNTIF('To Do'!A3:A, "<=1000"))
But the reason you're taking the MAX here, is to avoid a division by zero; by dividing by 1 whenever that's the case, you're showing mathematically incorrect results.
In Excel I'd wrap the division with an IFERROR, and return a string such as "-" when I'm dividing by zero. This accurately reports "this category is irrelevant", rather than "this category is 100% done".
Another simple option would be to add a column in the TODO list, to identify the actual priority level (which you currently have as a wasted row between each group); then a simple SUMIF / COUNTIF can do the trick: you only account for rows with a given [PriorityCode] value.
That's actually a "smell": your [Priority] values currently encode two values: the priority group, and the priority level, within that group.
• I guess you've found your mug now ;) Aug 31, 2014 at 5:54
• A3:A means "All cells in the A column starting with the third-down", since I don't care about column headings. It's okay to hard-code my importance ranges in the same way it's okay for window managers to hard-code their layer levels (e.g. always-on-top = 10,000, modal = 5,000, normal = 0, etc.) since Immediate tasks will never have a value <0 or >=1000, etc. Somehow you didn't see that column A has all the priority levels, where 0 is highest and 9999 is lowest.
– Ky -
Aug 31, 2014 at 17:43
• No, it's not ok, and yes I've seen what column A contains. You'll have 9 columns with 9 almost-identical formulas that only differ by what ranges in column A are being accounted for. What I'm saying is that if you had another column to hold the "priority groups" for each row, you could get what you need without having to subtract the "not in range" values - in fact there wouldn't be a need for a "range" at all. Just have a column that contains "High" for all such rows, and "Low" for all low-priority tasks, and so on: you're making this much harder than it needs to be. Aug 31, 2014 at 17:53
• I didn't know about A3:A though, that's interesting. Doesn't work in Excel though. Aug 31, 2014 at 17:55
• The range is there because these have finer priorities than just the groups. If I only needed 0 through 9, I'd only use 0 through 9.
– Ky -
Aug 31, 2014 at 17:57
Use a pivot
Your approach works, and I'll give you that. I feel you are overcomplicating something that is, on the whole, quite simple. Instead of using numbers 1-10,000 to define priority, why not use a simple method like numbers {1,2,3,4,5,6,7,8,9} and just pivot them? Or maybe to make it more clear, normalize to something like {"1-Immediate", "2-Very High", "3-High", "4-Medium", "5-Low", "6-Very Low", "7-On Hold", "8-Impossibru", "9-Done"} ?
Edit: Mind you, percentages can still be used, it just seems very odd to aggregate aggregated values.
|
|
# Transitions emit photon n=4
## Transitions emit photon
Add: awaqara16 - Date: 2020-12-17 00:59:14 - Views: 5602 - Clicks: 7028
/704fdb48b858/76 /c679bfdee /0e871e7d/165 /112035-131
The energy of the photon will determine the color of the Hydrogen Spectra seen. The energies of these two states are –0. n=4 Its frequency is _____ x 10 raised to the _____ Hz. Phosphorescence is the latter; the delay comes when one state in the cascade had a longer lifetime than the others. Explanation: No transitions emit photon n=4 explanation available. Options (a) n=1 to n=2 (b) n=2 to n=6 (c) n=6 to n=2 (d) n=2 to n=1.
There are many possible electron transitions emit photon n=4 transitions for each atom, and each. It is also known as an electronic (de-)excitation or atomic transition or quantum jump. i did part a and I got E= -6. This was achieved by utilising a cascade transition within mercury atoms. The energy can be released as one quantum of energy (i. Be the first to write the explanation for. muskan396 muskan396 10. It appears discontinuous as the electron "jumps" from one energy level to another, typically in a few nanoseconds or less.
Photons are massless, transitions emit photon n=4 so they always move at n=4 the speed of light in vacuum,m/s (or about 186,282 mi/s). The longest wavelength photon would be the photon with transitions emit photon n=4 the lowest energy. Individual atoms emit two photons at different frequencies transitions emit photon n=4 in the cascade transition and by spectrally filtering transitions emit photon n=4 the light the observation of one photon can be. The electron can drop from level n = 3 to level n = 2 and, in so doing, emit Hα, which is a Balmer series (visible) photon. Four of the Balmer lines are in the technically. The more energy the photon has, transitions emit photon n=4 the greater its frequency and the shorter its wavelength is. $\endgroup$ – rob ♦ Oct 29 '19 at. What is the energy of light that must be absorbed by a hydrogen atom to transition an electron from n = 3 to n = 7?
Textbook solution for Glencoe Physics: Principles and Problems, Student. &0183;&32;Calculate the energy of a photon emitted when an transitions emit photon n=4 electron in a hydrogen transitions emit photon n=4 atom undergoes a transition from n = 6 to n = 1. 14, predict which of the following electronic transitions produces the spectral line having the longest wavelength: n = 2 to.
Which transition will result in emitted light with the shortest wavelength? (Increase the electron energy perhaps? ) n = 4 → n = 3 c. UV has the highest energy, therefore, emission to n=1 are highest in energy, and n=3 to n=1 must emit. If the photon emitted has a wavelength of transitions emit photon n=4 95 nm.
When an transitions emit photon n=4 electron drops from energy level 3 to energy level 2, red light is emitted. We’re being asked to determine which transition represents the emission of a photon with the highest energy. Find an answer to your question What is the frequency and wavelength of a photon emitted during transition from n=5 state to n=2 state in the hydrogen atom. For which of the following transitions would a hydrogen atom absorb transitions emit photon n=4 a photon with the longest wavelength? is the Planck's constant. &0183;&32;I'm not entirely sure that I understand your question, but the longest wavelength would n=4 belong to the red colored light. Your knowledge of the Bohr model of the atom and the relative energies of transitions is all that is needed. To find the wavelength use this formula, mathBalmer/math mathRydberg/math mathEquation/math math:/math math\frac1 \lambda = 1.
transitions emit photon n=4 Which quantum state (n,ℓ,mℓ) is NOT possible. The higher the energy level the closer it is in energy value to the next energy level. The shorter the drop from one energy level to another, the less energy, in the form of light/photons is emitted. n = 5 to n = 2 c. 13 eV is emitted, asked in Physics by ujjawal ( 30. , one transitions emit photon n=4 photon), as the electron returns to its ground state (say, from transitions emit photon n=4 n = 5 to n = 1), transitions emit photon n=4 or it can be released as two or more smaller quanta (i.
Since, electron jumps from n=4 to n=2,(higher energy level to lower energy level) energy will be re. And, as a result hydrogen atom emits photon of lowest frequency in the transition. (a) the wavelengthnm (b) the frequency 7. &0183;&32;Another way to look at it, emissions to n=1 emit in the UV portion of the spectrum and emissions to n=2 emit in the visible portion (mostly). Find (c) its photon energy and (d) its wavelength. Short answer is transitions emit photon n=4 no, it can't happen. Named after Johann Balmer, who discovered the Balmer formula, an empirical equation to predict the Balmer series, in 1885. ) n = 3 → n = 2 Answer: c.
Correct Answer: n=2 to n=1. What ray is longer ultraviolet or visible rays? For a photon that is just able to dissociate a transitions emit photon n=4 molecule of silver bromide, find (a) the photon's transitions emit photon n=4 energy in electron volts, (b) the wavelength of the photon, and (c) the frequency of the photon. The red light has the longest wavelength, lowest energy, and lowest frequency. Similarly, if a photon is absorbed by an atom, the energy of the photon moves an electron from a lower energy orbit transitions emit photon n=4 up to a more excited one.
Max Planck showed that the frequency f of a particular transition between two energy levels depends on the energy difference between those two levels given by E n - E m = hf wher e E n is the energy of the n-th level, E m the energy of the m-th level (lower than n ) and h = 4. The R in the equation is the Rhydberg Constant. Which of the following transitions in hydrogen atoms emit photons of highest frequency? Calculate values for the following. &0183;&32;A hydrogen atom initially in its ground transitions emit photon n=4 state (n transitions emit photon n=4 = 1) absorbs a photon and ends up in the excited state n=4 n= 4. The other emissions are in the infra-red.
🎉 The Study-to-Win Winning n=4 Ticket number has been announced! we have Paschen, Brackett, and Pfund series which are transitions from high levels to n=3, n=4, and n=5 respectivly. (ii) n = 4 to n = 3 As n increases, the energy levels gradually gets more closer. Electron transitions cause the emission or absorption transitions emit photon n=4 of. 1 / w a v e l e n g t h = R ((1 / n 1 2) – (1 transitions emit photon n=4 / n 2 2)), where n 1 = 2 and n 2 = 4 in this case.
Like all elementary particles, photons are currently best explained. (b) transitions emit photon n=4 If the atom eventually returns to its ground state, what photon wavelengths could the atom emit? n = 5 to n = 4 Yes. The energy of the photon is, the energy of the emitted photon is equal to the difference transitions emit photon n=4 in energy transitions emit photon n=4 between those two energy levels. &0183;&32;The transition from n=4 to n=2 yields the H(beta) line which transitions emit photon n=4 is one of the Balmer series.
This emitted energy is a photon. 307 eV Solution or Explanation (a) From or with n. For my homework, there are two parts to the question. &0183;&32;Calculate transitions emit photon n=4 the energy of a photon emitted when an electron in a hydrogen atom undergoes a transition from n = 4 to n = 1.
A hydrogen atom in state n = 6 makes two successive transitions arid reaches the ground state. n = 4, ℓ = 1, mℓ = 1. The energy in a transition depends on the distance between the energy levels: this means the transition with the greatest distance produces the. It is the quantum of the electromagnetic field including electromagnetic radiation such as light and radio waves, and the force carrier for the electromagnetic force. n = 1 to n = 2 b. In this excited state, the electron moves to a higher energy level.
Consider the photon of longest wavelength corresponding to a transition shown in the figure. In absorption, an electron gains energy and becomes excited. n = 4 to n = 3 d. 5 eV, respectively. The photon transitions emit photon n=4 energy of the emitted photon is equal to the energy difference between the two states. When an electron transitions from an excited state (higher energy orbit) to a less excited state, or ground state, the difference in energy is emitted as a photon.
We transitions emit photon n=4 need to figure out how to relate lambda to those different energy levels. The energy gap for such a transitions emit photon n=4 transition is relatively large, so wavelength of the radiated X-ray photon is relatively short. High energy photon ≡ shorter wavelength (high energy photon ≡. (a) What is the energy of the absorbed photon?
N-photon emission takes place when the coupling is large enough for the cavity to stop acting as a mere filter and actually Purcell enhance the corresponding n=4 multi-photon transitions 49. The Balmer series includes the lines due to transitions from an outer orbit n > 2 to the orbit n' = 2. Of the following transitions emit photon n=4 transitions in the Bohr hydrogen atom, the ____ transition results in the emission of the highest-energy photon.
In which region of the electromagnetic spectrum. &0183;&32;Energy of an electron of Hydrogen atom in nth shell is : E = -13. 85eV and n = 1 has. Knowing the photon's energy, we can use our equation from Planck. Atomic electron transition is a change of an electron from one energy level transitions emit photon n=4 to another within an atom or artificial atom. Starting from transitions emit photon n=4 the n = 3 orbital level, is it possible for the atom to emit a photon in the visible part of the electromagnetic spectrum when the electron drops directly or cascades down to the ground state? In the emission spectrum of hydrogen what is the wavelength of the light emitted by the transition fro m n = 4 to n =2. In the first transition a photon of 1.
What happens when the wavelength of light is decreased? Determine (a) its energy and (b) its wavelength. We have n=4 step-by-step solutions for your textbooks written by Bartleby experts! This is because the energy of an electron at a particular energy level is proportional to 1 over the. . Consider the spectral line of shortest wavelength corresponding to a transition shown in the figure.
41 points | 1/5 transitions emit photon n=4 submissions Assume the Bohr model to determine the radius of the n = 5 orbit in a C 5+ ion. Balmer lines are historically referred to as "H-alpha", "H-beta", "H-gamma" and so on, where H is the element hydrogen. Lambda is the symbol for wavelength. The emission spectrum of a chemical element or chemical compound is the spectrum of frequencies of electromagnetic radiation emitted due to an atom or molecule making a transition from a high energy state to a lower energy state. n = 1, n = 3 to. $\begingroup$ I think this question is about simultaneous emission of two photons in a single transitions emit photon n=4 transition, rather than about a cascade of single-photon transitions through well-defined states.
n = 4 to n = 6 c. The greater the energy difference results in an emitted photon. &0183;&32;Calculate the energy of a photon emitted when an electron in a hydrogen atom undergoes a transition from n = 6 to n = 1.
A photon of ultraviolet light carries more energy than a photon of visible light, because it has a higher frequency / shorter wavelength. , more than one photon) as the electron falls to an intermediate state, then to the ground state (say, from n = 5 to n = 4, emitting one quantum. R is the Rydberg’s constant which has the value 1. n = 1; (b) n = 2 to.
transitions emit photon n=4 Part A) Calculate the energy (in J) of the photon associated with the transition of electron in He from n=2 to n=1. From figure 1, we can see that n = 4 has -0. ) n = 5 → n = 4 b. transitions emit photon n=4 ) The energy difference between n=3 and n=2 is greatest because the energy differences get closer together transitions emit photon n=4 with increasing n. Which transition would emit light of the longest wavelength?
The photon is a type of elementary particle. .
### Transitions emit photon n=4
email: [email protected] - phone:(367) 155-8637 x 2862
### Cell having huge transitions - Transitions effects
-> Office of transitions uncw
-> English assignment transitions
Sitemap 4
|
|
# Meta quine checker
This challenge, if you accept it, is to write three functions or programs A, B, and C:
• A is a quine that outputs all of A,B and C (which is also the whole content of the code in your submission).
• B takes a parameter F and checks whether it is such a quine (outputting FBC), or doing something different.
• C takes a parameter G and checks whether G possibly works like B (checking whether F outputs FGC). It is impossible to decide whether a function is a quine checker, so let's do something simpler:
• It must return truthy if G is valid for B.
• It must return falsey if G returns falsey for all valid quines, or G returns truthy for all valid non-quines.
• It can return anything, crash or not terminate, etc, if it is any of the other cases.
Note that B is possible. A and F doesn't have any input, so you can just run them and check the result.
### Rules
• There should be some way to tell which parts are A, B and C from the output of A. For example: each has one line, or they are recognized as three functions in the interpreter.
• Each function should run with only the definition of itself, not your complete code.
• You can use a function/program or its source code, or a pair of both as the input of B (or G) and C.
• You can redefine truthy/falsey to a subset of those values. You can also consistently require F returning some type you choose, like a single string.
• You can require A, B, F and G, if they are called with valid parameters, consistently don't have some types of other inputs or side effects you choose, such as accessing global variables, or reading stdin, etc.
• You can also assume F and G, if they are called with valid parameters, always terminate.
• F should work in the same condition as A. So it cannot depend on B or C or another variable's existence, unless that variable is defined in its own part in its output.
• No functions or programs can read its own source code.
• This is code-golf, shortest code (which is the output of A) in bytes wins.
• B is still impossible in general because F might not terminate, as well as in practice in many languages because it requires combinations of abilities like temporary redirection of stdout and either function-to-string or exec. The best you can hope for is probably a half-working solution in a LISP. – Peter Taylor Apr 16 '15 at 7:47
• How would you check G with all quines and non-quines? I'm currently working on a Mathematica solution. – LegionMammal978 Apr 16 '15 at 10:35
• @PeterTaylor "You can also assume F and G, if they are called with valid parameters, always terminate." And "output" can mean return, not necessarily print to stdout. – jimmy23013 Apr 16 '15 at 15:04
• @LegionMammal978 It is impossible to check all quines and non-quines. But the task of C is something simpler, where you only need to check one quine and one non-quine. – jimmy23013 Apr 16 '15 at 15:08
• @PyRulez I think this is in the spirit of this challenge, so I'm going to allow it. But the function cannot access its own source code. – jimmy23013 Apr 18 '15 at 3:20
# CJam, 254 bytes
{{['{\"_~}{{[1$'{@\"_~}"{{["_~}"2$+'{@"_~}"]s"{{['{\\"\+"]s}_~}"+~1$~{L}@~!&}_~}_1>W<"\"]s\~=}_~}"@]s}_~}{{[1$'{@"_~}{{[\"_~}\"2$+'{@\"_~}\"]s\"{{['{\\\\\"\+\"]s}_~}\"+~1$~{L}@~!&}_~}"]s\~=}_~}{{["_~}"2$+'{@"_~}"]s"{{['{\\"\+"]s}_~}"+~1$~{L}@~!&}_~}
The 3 functions are:
{{['{\"_~}{{[1$'{@\"_~}"{{["_~}"2$+'{@"_~}"]s"{{['{\\"\+"]s}_~}"+~1$~{L}@~!&}_~}_1>W<"\"]s\~=}_~}"@]s}_~} {{[1$'{@"_~}{{[\"_~}\"2$+'{@\"_~}\"]s\"{{['{\\\\\"\+\"]s}_~}\"+~1$~{L}@~!&}_~}"]s\~=}_~}
{{["_~}"2$+'{@"_~}"]s"{{['{\\"\+"]s}_~}"+~1$~{L}@~!&}_~}
A and F take no parameters and return a string. B, G and C take a CJam block as parameter and return 1 for truthy, or 0 for falsey.
|
|
# What does the spacetime interval measure?
I know that the spacetime interval is the analog of the (square of the) Euclidean distance in spacetime. Also, I understand that it is an invariant quantity and determines spacetime's causal structure. Whether timeline intervals are positive and spacelike intervals negative (or vice versa) is a matter of convention. However, the light cone structure of spacetime is not a matter of convention. It is a real thing. And the interval measures something in relation to that real thing. However, it is still not exactly clear to me what it measures and how.
For illustration, assuming the (+, −, −, −) convention, consider the following cases:
• Case 1: spacetime events A and B are spacelike separated, and the spacetime interval between them is −7.
• Case 2: spacetime events C and D are spacelike separated, and the spacetime interval between them is −5.
• Case 3: spacetime events E and F are timelike separated, and the spacetime interval between them is 5.
• Case 4: spacetime events G and H are lightlike separated, and the spacetime interval between them is 0.
I have a few interconnected clarificatory question about the above:
1. A and B (Case 1) are further apart than C and D (Case 2). Does that mean that A and B are somehow more causally disconnected than C and D? Or is causal disconnection an all-or-nothing value (meaning two events are either causally disconnected or not)?
2. Are C and D (Case 2) equally close as E and F (Case 3), given that the absolute value of both intervals is equal? More specifically, is there any way to compare the magnitudes of intervals of different types, or are they incommensurable?
3. Are G and H (Case 4) maximally close, and thus closer together than any of the other pairs of events mentioned? If an interval of 0 does not represent being maximally close, then what makes it 0?
• The spacetime square-interval is the analogue of the square-of-the-Euclidean-distance. Jul 6 at 1:41
• Thanks, that is right. I edited the post accordingly. Jul 6 at 11:25
With your metric, the spacetime interval is given by
$$$$\Delta s^{2} = c^{2}\Delta t^{2} - \Delta x^{2} - \Delta y^{2} - \Delta z^{2}$$$$
which, as you've already mentioned, is Lorentz invariant and its value is therefore independent of frame of reference.
If $$\Delta s^{2} > 0$$ then the events are timelike, meaning two observers could communicate with one another. If $$\Delta s^{2} < 0$$ then the events are spacelike and can never communicate with one another - the distance between them is greater than the distance light can travel in the time interval between the two events, $$\Delta s^{2} = c^{2}t^{2} - d^{2} < 0 \implies |d| > c\Delta t$$. This is what you have referred to as causal disconnection. There aren't various degrees of causal disconnection, you either can causally effect an event or you can't. This is why the invariance of the spacetime interval is necessary - you don't want to boost into a new frame to find that two events are now causally connected!
The special case of $$\Delta s^{2} = 0$$ is when $$|d| = c\Delta t$$, i.e. the separation of events is such that they can only communicate using light. They are causally connected.
• Many thanks! That is very helpful in clarifying causal disconnection. But it does not answer my full question. Could you please also say something about my other concerns: namely, incommensurability and closeness? Jul 5 at 10:33
• The 'maximally close' question you have is what I've answer in the final two sentences, namely that $\Delta s^{2} = 0$ implies that the distance between two events is such that they can only communicate using light speeds. As for the question on 'incommensurability' I would generally avoid comparing the magnitudes of spacetime intervals if they are of different signs. This isn't may area of research, so someone may correct me, but I don't know of a situation where we'd be enormously interested in comparing two spacetime intervals of opposite sign but equal magnitude Jul 5 at 10:50
The following spacetime diagram might help. It's drawn on "rotated graph paper" so that it's easier to see the tickmarks along segments and to visualize the square-interval between events. (The grid is based on the spacetime-paths of the light-signals in an inertial light-clock at rest in this diagram. It turns out that we are working with light-cone coordinates, using the eigenbasis of the Lorentz boost transformations in $$(1+1)$$-Minkowski spacetime.)
For concreteness, consider the following events with $$(t,x)$$-coordinates:
$$A=(0,0)$$, $$B=(3,4)$$, $$D=(2,3)$$, $$F=(3,2)$$, $$H=(2,2)$$, and $$F_2= (5.25,-4.75)=(\frac{21}{4}, -\frac{19}{4})$$, which can be verified by counting up diamond diagonals vertex-to-opposite-vertex along the vertical and horizontal directions.
Observe, for instance, that the square-interval of $$AF$$ is $$\Delta s_{AF}^2=(F_t-A_t)^2-(F_x-A_x)^2=(3-0)^2-(2-0)^2= (9)-(4)=5.$$ Now construct the "causal diamond of $$AF$$" for the timelike-segment $$AF$$ by drawing the light-cones of $$A$$ and $$F$$ and finding the intersection of the causal future of $$A$$ and the causal past of $$F$$ (the blue-shaded diamond with timelike-diagonal AF).
Physically, these are the events that can be influenced by $$A$$ and then can influence $$F$$.
• Note that the number of "light-clock diamonds" in the "causal diamond of $$AF$$" (the "area") is $$5$$.
• Note that for another event $$F_2$$ on the same hyperbola as $$F$$ with center at $$A$$, the area is still $$5$$, as expected, since $$F_2$$ and $$F$$ are related by a boost about event $$A$$.
By similar constructions, the areas of the causal diamonds can be counted off.
• For the spacelike-segment $$AB$$, the diamond has area 7. Since $$AB$$ is spacelike, we can assign the square-interval to be minus-the-area of the causal diamond: $$-7$$. (For a physical interpretation, it appears one has to determine the timelike-diagonal of the causal diamond where $$AB$$ is the spacelike-diagonal, then use the interpretation of the causal diamond of the timelike-diagonal (akin to $$AF$$).)
• For the lightlike-segment $$AH$$, the diamond has area 0.
Since $$AH$$ is a causal relation from $$A$$ to $$H$$, the set of events $$E$$ that can be influenced by $$A$$ and then influence $$H$$ is non-empty... but the set has measure zero.
(The details and motivation of this approach are in my paper
"Relativity on Rotated Graph Paper"
Am. J. Phys. 84, 344 (2016) https://doi.org/10.1119/1.4943251 ;
What does the spacetime interval measure?
is that, in $$(1+1)$$-Minkowski spacetime, the square-interval for a displacement $$AF$$ is a measure of the area of the causal diamond associated with $$AF$$. When $$AF$$ is causal, the diamond can be interpreted as the set of events that can be influenced by $$A$$ and then influence $$F$$.
(In $$(3+1)$$-Minkowski spacetime, the square-interval is associated with the spacetime-volume of the causal diamond, together with some combinatorial factors. One can find a $$(1+1)$$-slice of the causal diamond to return to a $$(1+1)$$-case.)
Since the terms "closeness" and "causal disconnectedness" are not well defined, I can't say more about those notions.
Some details (taken from my paper referenced above):
Instead of rectangular coordinates $$(t,x)$$,
we can use light-cone coordinates $$[u,v]$$, where $$u$$ increases along $$\nearrow$$ and $$v$$ increases along $$\nwarrow$$, with $$u=t+x\qquad v=t-x$$
So, $$F=(3,2)$$ can be expressed as $$F=[5,1]$$.
Note that $$uv=(t+x)(t-x)=t^2-x^2$$ which shows the connection between signed-area and square-interval.
Thus, $$A=(0,0)$$, $$B=(3,4)$$, $$D=(2,3)$$, $$F=(3,2)$$, $$H=(2,2)$$, and $$F_2= (5.25,-4.75)=(\frac{21}{4}, -\frac{19}{4})$$,
can be expressed as
$$A=[0,0]$$, $$B=[7,-1]$$, $$D=[5,-1]$$, $$F=[5,1]$$, $$H=[4,0]$$, and $$F_2= [0.5,10]$$, which can be seen from the diagram.
So, Diamond(A,H) has area $$(4-0)(0-0)=0$$, and Diamond(A,D) has area $$(5-0)(-1-0)=-5$$... verified by counting clock-diamonds (grid boxes).
• Thank you, that is very helpful! Regarding "causal disconnectedness": in my usage, for any two spacetime events A and B, A and B are causally connected iff A causally influences B. Causal disconnection is the negation of that. Regarding "closeness": my question was an aspect of my question about commensurability. I still don't understand whether the different kinds of spacetime intervals can be meaningfully compared in terms of magnitude. For example, just looking at the graph you posted indicates that (A, H) and (A, D) are the same size, but I have a feeling this can't be right. Jul 6 at 7:33
• @Maverick Diamond(A,H) has zero area, whereas Diamond(A,D) has signed-area -5. The diamond-edges are lightlike Jul 6 at 12:42
1. A and B (Case 1) are further apart than C and D (Case 2). Does that mean that A and B are somehow more causally disconnected than C and D? Or is causal disconnection an all-or-nothing value (meaning two events are either causally disconnected or not)?
Its all-or-nothing value
1. Are C and D (Case 2) equally close as E and F (Case 3), given that the absolute value of both intervals is equal? More specifically, is there any way to compare the magnitudes of intervals of different types, or are they incommensurable?
2. Are G and H (Case 4) maximally close, and thus closer together than any of the other pairs of events mentioned? If an interval of 0 does not represent being maximally close, then what makes it 0?
Depends.
You can easily compare two vectors that are colinear as one is multiple of the other. You can then look at absolute value of this multiple and check whether it is bigger or smaller than one. I.e. $$u=\lambda v \land |\lambda| >1\Rightarrow u>v.$$ In ordinary space, you have rotation which is length preserving operation. You can then rotate any general vector $$v'$$ so that it is colinear with $$u$$ and compare them.
In spacetime however, things get a little complicated. Metric preserving operations are lorentz transformations and there are no lorentz transformation that would mix vectors from null/time-like/space-like categories.
However, you can take absolute value of spacetime interval. This will define equivalence relation between vectors that have the same absolute value of spacetime interval. You can then compare two position 4-vectors $$u$$ and $$v$$ by choosing one as a reference and using the other vector to pick vector colinear with the first which is of the same equivalence class as the second vector.
Still bigger complication are null vectors. These form a three dimensional vector space, so you can have two noncolinear null vectors $$u_0, v_0$$. To compare them, you would need to transform one of them to be colinear with other, but there is no structure in spacetime that would let you prefer one over other.
So while the closeness of position 4-vectors (two events) can be extended to include both time-like and space-like categories, the null vectors are simply uncomparable, unless they happen to be colinear.
This will not answer all your questions but hopefully it will provide some intuition. The contour given by $$\Delta s^2=1$$ for a timelike interval can be parametrized using $$(ct,x)=\gamma\,(1,\beta)$$, i.e. it is the Lorentz transform of $$(ct,x)=(1,0)$$ for general $$\beta$$. This gives the following interpretation of this curve. Consider all possible intertial observers which pass through the origins. So this is like a bunch of spaceships all passing through the origin at $$t=0$$, each spaceship having a different velocity. The position of these spaceships at their percieved $$ct=1$$ is the curve given by $$\Delta s^2=1$$. We can also frame it as follows: it is the locus of all points which are a proper time $$c\tau=1$$ from the origin but generalized for all possible observers. Written like this it can be generalized to Galilean relativity. For Galilean relativity the 'proper time' is always the same and so the spacetime interval becomes just the time interval between two events.
Now we can repeat this for a spacelike interval $$\Delta s^2=-1$$, as shown below. This time the curve given by all points which are a proper distance $$-1$$ away, generalized for all possible observers. Again, for Galilean relativity the 'proper distance' is always the same and so the spacetime interval becomes just the distance.
This is another answer for the questions below your cases. To answer your questions directly, 1) if two events are spacelike-separated, it doesn't matter how far apart they are. They are just spacelike-separated. They can't communicate or causally affect each other. You can't be more unable to do something. You're just unable to. 2) and 3) are a little tricky to think about. In more mathematical terms, the distance between two points in a given space is determined by what's called a metric function. This function, call it $$g$$, satisfies some very natural properties: 1- the distance between a point and itself is zero ($$g(x,x)=0$$)
2- the distance between any two distinct points x and y is greater than zero ($$g(x,y)>0$$)
3- Symmetry: distance between x and y is the same as distance between y and x ($$g(x,y)=g(y,x)$$)
4- Triangle inequality: for any three points x, y, and z, $$g(x,z)< g(x,y)+g(y,z)$$
Those are all natural and intuitive properties of the concept of distance. But notice how spacetime immediately manifests itself differently from that concept. You immediately see that distance in spacetime is allowed to be negative, and that two distinct points can be separated by a zero-length curve. This is why distance in spacetime is very different from distance in Euclidean space. Spacetime is what's called a "Pseudo-Riemmanian manifold" which is a kind of space with these bizzare properties. That's why it's not straightforward to understand what it means when the interval is zero or negative. But this is where the physics comes in. If you draw a light cone and take the distance between any two points on it, it will be zero, even though there are points that are clearly farther apart than others. But they're only farther apart according to our simple notion of distance, not spacetime's distance. (It truly is fascinating). So the physics tells you that this just means that these points can only communicate by light rays. They are not "maximally close" because this notion of closeness no longer applies here. The same goes for events separated by negative intervals. They're just events that can't communicate at all and can't causally affect each other, even if they "look" the same distance apart on paper. This is only because paper has natural distance properties but spacetime doesn't.
I'll start from "the light cone is a real thing. And the interval measures something in relation to that real thing". You're right. Special relativity postulates that all inertial observers agree on the speed of light. This is why the light cone the same for all observers and is "a real thing". Since nothing can travel faster than light, then events that are causally connected are determined by the light cone. Points inside the future light cone of point A are points that can be influenced by A. Points in the past light cone of A are points that can influence A. Points ON the light cone of A are points which can send or receive light signals to or from A. And finally, points outside the light cone of A are points that can neither influence nor be influenced by A and cannot communicate with A. And all observers agree on that. This is what the spacetime interval measures. It tells you whether two events are causally related ($$\Delta s^2 >0$$) or not causally related ($$\Delta s^2 <0$$) or whether they can communicate by light signals only ($$\Delta s^2 =0$$). That's what the sign of the interval tells you. The magnitude of the interval isn't as easy to interpret. You can have two events separated in time but not in space, and two other events separated in space but not in time such that $$\Delta s^2$$ for the two events is the same. You wouldn't be able to distinguish which is which based only on the magnitude of the interval. All we can say that if $$\Delta s^2$$ between events A and B is less than $$\Delta s^2$$ for events C and D, then that means that A and B are closer in spacetime to each other than C and D. It's helpful to think about distance in spacetime as a whole, not just spatial or temporal separation because they can't be inferred from the magnitude of the interval. I think that adresses your cases. Please leave a comment if you have any further questions or things you want to further clarify.
|
|
This tiny post is about some basics on the Gamma law, one of my favorite laws. Recall that the Gamma law ${\mathrm{Gamma}(a,\lambda)}$ with shape parameter ${a\in(0,\infty)}$ and scale parameter ${\lambda\in(0,\infty)}$ is the probability measure on ${[0,\infty)}$ with density
$x\in[0,\infty)\mapsto\frac{\lambda^a}{\Gamma(a)}x^{a-1}\mathrm{e}^{-\lambda x}$
where we used the eponymous Euler Gamma function
$\Gamma(a)=\int_0^\infty x^{a-1}\mathrm{e}^{-x}\mathrm{d}x.$
Recall that ${\Gamma(n)=(n-1)!}$ for all ${n\in\{1,2,\ldots\}}$. The normalization ${\lambda^a/\Gamma(a)}$ is easily recovered from the definition of ${\Gamma}$ and the fact that if ${f}$ is a univariate density then the scaled density ${\lambda f(\lambda\cdot)}$ is also a density for all ${\lambda>0}$, hence the name “scale parameter” for ${\lambda}$. So what we have to keep in mind essentially is the Euler Gamma function, which is fundamental in mathematics far beyond probability theory.
The Gamma law has an additive behavior on its shape parameter under convolution, in the sense that for all ${\lambda\in(0,\infty)}$ and all ${a,b\in(0,\infty)}$,
$\mathrm{Gamma}(a,\lambda)*\mathrm{Gamma}(b,\lambda)=\mathrm{Gamma}(a+b,\lambda).$
The Gamma law with integer shape parameter, known as the Erlang law, is linked with the exponential law since ${\mathrm{Gamma}(1,\lambda)=\mathrm{Expo}(\lambda)}$ and for all ${n\in\{1,2,\ldots\}}$
$\mathrm{Gamma}(n,\lambda) =\underbrace{\mathrm{Gamma}(1,\lambda)*\cdots*\mathrm{Gamma}(1,\lambda)}_{n\text{ times}} =\mathrm{Expo}(\lambda)^{*n}.$
In queueing theory and telecommunication modeling, the Erlang law ${\mathrm{Gamma}(n,\lambda)}$ appears typically as the law of the ${n}$-th jump time of a Poisson point process of intensity ${\lambda}$, via the convolution power of exponentials. This provides a first link with the Poisson law.
The Gamma law is also linked with chi-square laws. Recall that the ${\chi^2(n)}$ law, ${n\in\{1,2,\ldots\}}$ is the law of ${\left\Vert Z\right\Vert_2^2=Z_1^2+\cdots+Z_n^2}$ where ${Z_1,\ldots,Z_n}$ are independent and identically distributed standard Gaussian random variables ${\mathcal{N}(0,1)}$. The link starts with the identity
$\chi^2(1)=\mathrm{Gamma}(1/2,1/2),$
which gives, for all ${n\in\{1,2,\ldots\}}$,
$\chi^2(n) =\underbrace{\chi^2(1)*\cdots*\chi^2(1)}_{n\text{ times}} =\mathrm{Gamma}(n/2,1/2).$
In particular we have
$\chi^2(2n)=\mathrm{Gamma}(n,1/2)=\mathrm{Expo}(1/2)^{*n}$
and we recover the Box–Muller formula
$\chi^2(2)=\mathrm{Gamma}(1,1/2)=\mathrm{Expo}(1/2).$
We also recover by this way the density of ${\chi^2(n)}$ via the one of ${\Gamma(n/2,1/2)}$.
The Gamma law enters the definition of the Dirichlet law. Recall that the Dirichlet law ${\mathrm{Dirichlet}(a_1,\ldots,a_n)}$ with size parameter ${n\in\{1,2,\ldots\}}$ and shape parameters ${(a_1,\ldots,a_n)\in(0,\infty)^n}$ is the law of the self-normalized random vector
$(V_1,\ldots,V_n)=\frac{(Z_1,\ldots,Z_n)}{Z_1+\cdots+Z_n}$
where ${Z_1,\ldots,Z_n}$ are independent random variables of law
$\mathrm{Gamma}(a_1,1),\ldots,\mathrm{Gamma}(a_n,1).$
When ${a_1=\cdots=a_n}$ the random variables ${Z_1,\ldots,Z_n}$ are independent and identically distributed of exponential law and ${\mathrm{Dirichlet}(1,\ldots,1)}$ is nothing else but the uniform law on the simplex ${\{(v_1,\ldots,v_n)\in[0,1]:v_1+\cdots+v_n=1\}}$. Note also that the components of the random vector ${V}$ follow Beta laws namely ${V_k\sim\mathrm{Beta}(a_k,a_1+\cdots+a_n-a_k)}$ for all ${k\in\{1,\ldots,n\}}$. Let us finally mention that the density of ${\mathrm{Dirichlet}(a_1,\ldots,a_n)}$ is given by ${(v_1,\ldots,v_n)\mapsto\prod_{k=1}^n{v_k}^{a_k-1}/\mathrm{Beta}(a_1,\ldots,a_n)}$ where
$\mathrm{Beta}(a_1,\ldots,a_n)=\frac{\Gamma(a_1)\cdots\Gamma(a_n)}{\Gamma(a_1+\cdots+a_n)}.$
The Gamma law is linked with Poisson and Pascal negative-binomial laws, in particular to the geometric law. Indeed the geometric law is an exponential mixture of Poisson laws and more generally the Pascal negative-binomial law, the convolution power of the geometric distribution, is a Gamma mixture of Poisson laws. More precisely, for all ${p\in[0,1]}$ and ${n\in\{1,2,\ldots,\}}$, if ${X\sim\mathrm{Gamma}(n,\ell)}$ with ${\ell=p/(1−p)}$, and if ${Y|X\sim\mathrm{Poisson}(X)}$ then ${Y\sim\mathrm{NegativeBinomial}(n,p)}$, since for all ${k\geq n}$, ${\mathbb{P}(Y=k)}$ writes
$\begin{array}{rcl} \displaystyle\int_0^\infty\mathrm{e}^{-\lambda}\frac{\lambda^k}{k!}\frac{\ell^n\lambda^{n-1}}{\Gamma(n)}\mathrm{e}^{-\lambda\ell}\mathrm{d}\lambda &=& \displaystyle\frac{\ell^n}{k!\Gamma(n)}\int_0^\infty\lambda^{n+k-1}\mathrm{e}^{-\lambda(\ell+1)}\mathrm{d}\lambda\\ &=&\displaystyle\frac{\Gamma(n+k)}{k!\Gamma(n)}(1-p)^kp^n. \end{array}$
The Gamma and Poisson laws are deeply connected. Namely if ${X\sim\mathrm{Gamma}(n,\lambda)}$ with ${n\in\{1,2,\ldots\}}$ and ${\lambda\in(0,\infty)}$ and ${Y\sim\mathrm{Poisson}(r)}$ with ${r\in(0,\infty)}$ then
$\mathbb{P}\left(\frac{X}{\lambda}\geq r\right) =\frac{1}{(n-1)!}\int_{r}^\infty x^{n-1}\mathrm{e}^{-x}\,\mathrm{d}x =\mathrm{e}^{-r}\sum_{k=0}^{n-1}\frac{r^k}{k!} =\mathbb{P}(Y\geq n).$
Bayesian statisticians are quite familiar with these Gamma-Poisson duality games.
The law ${\mathrm{Gamma}(n,\lambda)}$ is log-concave when ${n\geq1}$, and its density as a Boltzmann–Gibbs measure involves the convex energy ${x\in(0,\infty)\mapsto\lambda x-(n-1)\log(x)}$ and writes
$x\in(0,\infty)\mapsto\frac{\lambda^n}{\Gamma(n)}\mathrm{e}^{-(\lambda x-(n-1)\log(x))}.$
The orthogonal polynomials with respect to ${\mathrm{Gamma}(a,\lambda)}$ are Laguerre polynomials.
The Gamma law appears in many other situations, for instance in the law of the moduli of the eigenvalues of the complex Ginibre ensemble of random matrices. The multivariate version of the Gamma law is used in mathematical statistics and is connected to Wishart laws which are just multivariate ${\chi^2}$-laws. Namely the Wishart law of dimension parameter ${p}$, sample size parameter ${n}$, and mean ${C}$ in the cone ${\mathbb{S}_p^+}$ of ${p\times p}$ positive symmetric matrices has density
$S\in\mathbb{S}_p^+ \mapsto \frac{\det(S)^{\frac{n-p-1}{2}}\mathrm{e}^{-\mathrm{Trace}(C^{-1}S)}} {2^{\frac{np}{2}}\det(C)^{\frac{n}{2}}\Gamma_p(\frac{n}{2})}$
where ${\Gamma_p}$ is the multivariate Gamma function defined by
$x\in(0,\infty) \mapsto \Gamma_p(x) =\int_{\mathbb{S}^+_p}\det(S)^{x-\frac{p+1}{2}}\mathrm{e}^{-\mathrm{Trace}(S)}\mathrm{d}S.$
Non-central Wishart or matrix Gamma laws play a crucial role in the proof of the Gaussian correlation conjecture of geometric functional analysis by Thomas Royen arXiv:1408.1028, see also the expository work by Rafał Latała and Dariusz Matlak and Franck Barthe.
## One Comment
This site uses Akismet to reduce spam. Learn how your comment data is processed.
|
|
# Olaf Dreyer: the Cosmological Constant paradox
1. Sep 5, 2004
### marcus
http://arxiv.org/hep-th/0409048
this is a 4-page paper.
the cosmological constant problem is the worst prediction in the history of physics---conventional Quantum Field Theory predicts a vacuum energy which is 123 orders of magnitude off: wrong by a factor or 10123
why is this? it should be interesting to resolve
Olaf Dreyer finds that the bizarre and humongous discrepancy in QFT vacuum energy arises from the theory not having been formulated in a Background Independent way. He gives a short argument as to why QFT, once it is made B.I., would not have a problem with the cosmological constant.
Background Independent Quantum Field Theory and the Cosmological Constant Problem
2. Sep 6, 2004
### sol2
I printed out this copy last night to review, and found somebody else who has a similar interest that shares your perspective in regards to LQG. If no one responds then the issue is dead to me. So understanding this other persons perspective, and his bend in the LQG direction, I wanted him to explain himself in relation to his views, so I am asking you how you see, in relation to the universe and what it is doing when you bring this link here for review? What are your thought here when you posted this?
Why do you accept the background independance?
Last edited: Sep 6, 2004
3. Sep 6, 2004
### marcus
Philip W Anderson, winner of 1977 Nobel (solid state)
http://almaz.com/nobel/physics/1977a.html
Anderson was born 1923, he was at Princeton and Bell Labs, may still be
we should make an excursion into his thinking now, as also further developed by a young faculty at MIT named Xiao-Gang Wen
--------------------
Footnote: both standard QFT and String stumble over the Cos. Const.
QFT's Standard Model has been impressively successful in predicting
numbers and nothing has improved on it for 20 years. Broadly speaking QFT only makes one mistake (the huge one about CC).
In like manner, String, though largely unsuccessful at predicting numbers (having been worked on for over 20 years), in 2003 was thrown into turmoil by concerns around the Cosmological Constant. Confusion about the CC is at the root of the Landscape-talk and Susskind's Anthropery and the Kachru Trouble about the 10100 string vacuums. The Cos. Const. has been responsible for the "de-scientizing" of String (in the eye's of some). So when someone offers to get rid of the Cos. Const. problem we should listen up. IMHO.
---------------------
Olaf Dreyer is maybe not the main player here---he's a young person at Perimeter who takes risks. The concrete foundation he is using for his short 4-page paper is the thinking of Philip Anderson (extended by Xiao Wen). It is a "condensed matter" field theory model, which Olaf Dreyer exploits to achieve background independence.
http://pupgg.princeton.edu/www/jh/pwa/
http://pupgg.princeton.edu/www/jh/research/Anderson_Philip.htmlx
Last edited: Sep 6, 2004
4. Sep 6, 2004
### marcus
Olaf Dreyer builds his case on these condensed-matter field theory citations (representative of a growing tribe of allied cond-mat research)
P.W.Anderson Science 177,393 (1973)
G.E.Volovik The Universe in a helium droplet Oxford U. Press (2003)
R. B. Laughlin Int. J. Mod. Phys. A18, 831 (2003).
X.-G. Wen Quantum field theory of many-body systems Oxford U. Press (2004)
http://dao.mit.edu/~wen/
I would urge anyone who hasnt already to check it out. he has many sample pages from his book. there is a TOC for the book.
He is at MIT and has quite a few graduate students around him.
here are two recent papers by X-G Wen
http://arxiv.org/abs/cond-mat/0406441
An Introduction to Quantum Order, String-net Condensation, and Emergence of Light and Fermions
http://arxiv.org/abs/cond-mat/0407140
A unification of light and electrons based on spin models
Last edited: Sep 6, 2004
5. Sep 6, 2004
### marcus
I supect that Xiao should have made his world out of
"spin-network condensations" instead of "string-net condensations"
and then the construction would be background independent
(conceptually better) and also would not have
potential problems like the Cos. Const. problem.
it looks to me as if Olaf Dreyer has had this idea, perhaps other too
some of Xiao's pictures look like they might be pictures of spin-networks
6. Sep 7, 2004
### marcus
the usual theory of gravity is background independent, sol, so nobody needs to justify the requirement that a theory have background independence.
challenging the requirement of background independence is off topic here and will distract from the train of thought. Please start a separate thread about it. I am trying to understand how Olaf Dreyer has addressed the problem of the cosmological constant.
(the black hole entropy paper you cited has no bearing on this and also could use a separate thread)
7. Sep 11, 2004
### marcus
Olaf Dreyer is involved in another controversial issue besides the Cosmo. Const.
It was his original idea back in 2002 that the Immirzi could be 8.088, or rather the reciprocal of that---a number around one eighth.
Now Lee Smolin, with Olaf Dreyer and Fotini Markopoulou to help him, has taken up the banner of this Immirzi.
http://arxiv.org/hep-th/0409056
In Quantum Gravity, area is measured in "quantum pinpricks" and to oversimplify and put it very crudely each pinprick is worth
65.65 x 10-70 m2
It would be nice if it were that simple but there is this parameter called immirzi that scales the pinpricks so that roughly speaking each pinprick is now worth either
$$\frac{1}{8.088}65.65 \times 10^{-70} m^2$$
or
$$\frac{1}{4.21}65.65 \times 10^{-70} m^2$$
Abhay Ashtekar says divide by 4.21 and Lee Smolin says divide by 8.088.
the basic 65.65E-70 square meter area is
$$8\pi\text{planck area}=8\pi\frac{G\hbar}{c^3}$$
Last edited: Sep 17, 2004
8. Sep 11, 2004
### marcus
If you would like a reference for the 1/4.21 see reference [10] in smolin's paper
http://arxiv.org/hep-th/0409056
and also equation (18)
they give the value that the opposition prefer, as well as their own.
or see
http://arxiv.org/gr-qc/0407051
http://arxiv.org/gr-qc/0407052
----------------------------
at this point the exact value of the immirzi seems less interesting than picturing a spin network quantum state of geometry
In Quantum Gravity, a quantum state of the geometry of the universe
is like a humongous ball-and-stick model of a molecule
the nodes (balls) have volume info and contribute a tiny amount of volume to any region enclosing them and the links (sticks) have area info and contribute area to any surface they pass thru. In this way the network supplies the basic geometry information.
A gigantic network seems to be the most efficient way of describing the geometry of the whole universe. it is like a computer data structure which lists everything you need to know about the geometry----bare bones, but from which everything can be derived. Curvature can be gotten from it too.
but the cells of the network need to be very tiny. a given region contains jillions of nodes and links. the network is planck scale.
this area 65.65x10-70 square meter gives an idea
because each passage thru a surface by the network, each puncture, or pinprick, contributes about this much area to the surface
It gives a notion how intense the spin-network quantum state of the universe is, because its jillions of nodes are connected with links so numerous that a square meter has on the order of 1070 links passing thru it.
the geometry of the universe, which is this network, gives area to things by how much it punctures them, and a square meter of area is something it punctures a lot. It has to puncture it about 1070 times because
each puncture only contributes about 10-70 sq. meter of area
I'm picturing this network as being like the fine suds in the sink----- this fine foam of millions of bubbles
Imagine putting a dot or node in the middle of each bubble
and linking two nodes if their two bubbles touch
and in this way replace the foam by a kind of network.
that is how it gets to look like a 'ball-and-stick' model of a complex polymer molecule but with millions of atoms corresponding to the original bubbles
and picture this extending everywhere and describing a quantum state of the geometry of the universe
Last edited: Sep 11, 2004
9. Sep 11, 2004
### marcus
I guess we have to call this quantum needlepoint area something, let's call it a pinprick.
Now we move on to black holes and try to see why Olaf Dreyer (and others) say that it has to be scaled down still further by the Immirzi.
the reason is that Sachar Hod, in Israel, found the vibration frequencies of black holes and by Bohr's principle of correspondence these should correspond to energy changes or changes in the mass of the hole, and these are accompanied by changes in the area of the event horizon.
when Hod's result is translated into area terms, it predicts that the area should change sort of one pinprick's worth at a time
but not quite, instead the area should change by this number times a pinprick
$$\frac{\ln 3}{2 \pi}$$
-----
Remember each pinprick is worth
65.65 x 10-70 m2
so according to the semiclassical analysis of Hod and others the area of a usual black changes in steps this big
$$\frac{\ln 3}{2 \pi}65.65 \times 10^{-70} m^2$$
now we check out where Olaf Dreyer gets the number 8.088
for reference, a pinprick---the basic 65.65E-70 square meter area---is
$$8\pi\text{ planck area}=8\pi\frac{G\hbar}{c^3}$$
Later on it would be convenient to have a symbol for it. i think I will call it Upsilon (that double-hook capital greek letter)
$$\Upsilon = 8\pi \text{ planck area}=8\pi\frac{G\hbar}{c^3}$$
Last edited: Sep 17, 2004
10. Sep 11, 2004
### marcus
All I am doing right now in this thread is basically just reading
http://arxiv.org/hep-th/0409056
with you.
the paper itself is not all that hard, but sometimes following something in a kind of parallel track can help it sink in. so this is a parallel development which I need to do as part of assimilating the paper
what we are driving for is Olaf Dreyer's equation for the Immirzi gamma number, which if you solve it with your calculator will give 1/8.088. the equation is
$$\sqrt {2} \gamma = \frac{\ln 3}{2 \pi}$$
Olaf (with support from Lee and Fotini) says that when the area of a Schwarz. horizon changes by gaining or losing one puncture by the network then (by god and the all-righteous area formula of LQG) it gains or loses this amount of area
$$\sqrt {2} \gamma \times 65.65 \times 10^{-70} m^2$$
and Hod says by his and others classical analysis of making the Schwarz. solution go boing that it gains or loses this amount of area
$$\frac {\ln 3}{2\pi} \times 65.65 \times 10^{-70} m^2$$
so you can see where the equation comes from, just putting the two gained-or-lost areas equal. they are two different expressions for the "delta A"
the ticklish spot is that Olaf is saying something about the spin-color of a network in a black hole. he is saying that the spins inside a black hole can only be ONE they cannot be one-half. Or at least the spins coloring the legs that cross the eventhorizon.
this is where the infernal squareroot 2 comes from, which otherwise would be squareroot 3/4. Always these damned technicalities!
well at least we havent had to say "entropy" yet, so it is still fairly straightforward
for reference, Upsilon the basic pinprick area--- 65.65E-70 square meter area---is
$$\Upsilon = 8\pi\text{ planck area}=8\pi\frac{G\hbar}{c^3}$$
this is what some people were proposing we use for the planck area anyway, instead of the conventional planck area. john baez has some SPR emails advocating that-----essentially to use 8pi G instead of plain G, throughout
Last edited: Sep 17, 2004
11. Sep 11, 2004
### marcus
Now we get to the Hawking entropy formula. this is all for Schwarz. black holes but I dont always say Schwarzschild.
typically the entropy is considered to be a plain number
and I guess one should be aware that the CONVENTIONAL planck area is Ghbar/c^3 but that various people including john baez have advocated replacing G by 8piG
the thought is that it is 8piG that appears in the Einstein equation of General Relativity and in a bunch of other places, as if Nature really liked G to be multiplied by 8pi. then the formulas are different and quite a few are simpler-looking.
this pinprick area we've been talking about is 8pi times the conventional planck area----it is the "newplanck" area that you would get if you followed john baez suggestion and did what he advocates.
******************
well, Hawking formula relates entropy S to horizon area A, and the question is WHAT AREA DO YOU HAVE TO DIVIDE A BY TO GET S?
S is a number so to get S you have to divide horizon area A by some AREA, nothing else will work.
hawking says divide A by four times the conventional planck area
$$S = \frac{A}{4\text{ planck area}}$$
$$S = \frac{2\pi A}{8\pi \text{ planck area}}$$
$$S = \frac{2\pi A}{\Upsilon}$$
this is just another version of the Hawking entropy formula where we divided by the pinprick area and then compensated by multiplying by 2 pi.
**********
Now I want to check that the Olaf Dreyer version of the Immirzi gets the correct Hawking entropy formula
the assumption is that at each puncture the spin p =1 so that the dimension of the little microstate hilbertspace there is 2p+1 = 3
now the entropy is the logarithm of the dimension of all the hilbertspaces
collectively
so it is the number of punctures N multiplied by ln 3.
Here is Olaf's gamma
$$\gamma = \frac{\ln 3}{2 \pi\sqrt {2}}$$
each puncture contributes this much area
$$\sqrt{2} \gamma \Upsilon = \frac{\ln 3}{2 \pi}\Upsilon$$
We can use that to learn the number of punctures in
a horizon of given area A.
$$N = \frac{A}{\sqrt {2}\gamma \Upsilon}$$
$$N = \frac{A}{\frac{\ln 3}{2 \pi} \Upsilon}$$
$$S= N\ln3 = \frac{2\pi A}{\Upsilon}$$
that's the right entropy formula
In this thread I am still basically just reading
http://arxiv.org/hep-th/0409056
with you.
for reference, Upsilon---the basic 65.65E-70 square meter area---is
$$\Upsilon = 8\pi\text{planck area}=8\pi\frac{G\hbar}{c^3}$$
Last edited: Sep 17, 2004
12. Sep 12, 2004
### marcus
what I have been finding out in this thread is that there is some sense in the suggestion by Baez that instead of
G = hbar = c = k = 1
or the original planck units which had (dimensioned) quantities with numerical value unity.
Instead maybe, one should be setting
8piG = hbar = c = k = 1-------or the dimensioned analog to that.
One should be doing everything just the same except with 8piG instead of G.
Since I think Baez has been the one to stick his neck out about this I will call these "baez-units" to distinguish them from the 1899 "planck units"
Now why would anyone think that 8piG is more fundamental than G?
the answer is the 1915 Einstein equation which is the main equation of Gen Rel and our premier equation about gravity. Here is how you often see it.
Gab = (8piG)Tab
it is confusing because the letter G is used for the curvature tensor on the LHS and also for the newtonian constant in parens on the RHS. Anyway the central constant there, in parens, is not G it is 8piG. In some sense that proves that 8piG is more fundamental. this is in a dark primitive part of the brain with brute ideas like "dis basic'r n'dat. ugh!" have their dwelling. you take the most basic gravity equation which is the granddaddy of all the other gravity equations and you look at ITS proportionality constant and whatever that is, is the fundamental constant. All prejudice of course.
Now the dreadful prospect to contemplate is, what do baez units look like?
what does it look like if you actually do use 8piG instead of G?
Last edited: Sep 17, 2004
13. Sep 12, 2004
### marcus
some "baez" units (planck but with 8piG instead of G)
$$\text{energy unit} = \frac{\hbar c}{\sqrt \Upsilon}$$
$$\text{force unit} = \frac{\hbar c}{\Upsilon}$$
$$\text{energy density unit} = \frac{\hbar c}{\Upsilon^2}$$
$$\text{pressure unit} = \frac{\hbar c}{\Upsilon^2}$$
$$\text{mass unit} = \frac{\hbar }{c\sqrt \Upsilon}$$
convenient thing about this way of laying out "baez" units is that
you just have to remember that the upsilon area is
65.65 x 10-70 square meters
and that is the baez unit of area (its square root is the unit length)
personally I dont find that too hard to remember, for some reason
and then the rest of the units are reasonably simple to calculate
or estimate rough sizes, from that----cause one knows c and hbar.
Last edited: Sep 17, 2004
14. Sep 13, 2004
### sol2
Well Marcus,
I had appreciated your efforts to develpe this thinking, something has come up that one has to refute before accepting this framework of discussion.
I find it amazing sometimes, that if the shell approaches are not considered, how would one deduce, whether or not spin orientations would work? Gravity probe B would have detected variations, and in these places, "spin orientations," would have also been discovered?
Encapsulating these views( the outer most shell), reveals topological consideration with geometrodynamcis that must be considered?
There is only one way in which to understand this if one knew how to gauge the blackholes dynamcis and entrophy issues relevant to issues arising from the early universe. These dynamics would have been self evident, although the geometrical evolution has not been understood in string theory, I am working on it.
15. Sep 15, 2004
### marcus
I think what Lubos said to Sergei Alexandrov in another discussion in another forum is not relevant here, sol. Lubos concludes that anyone is free to take the OLF ideas seriously---although he repeats what Sergei identified as an unfair misconstruction of their term "noiseless". I do take their (Olaf-Lee-Fotini's) ideas seriously
But we are not talking about about Lubos and Sergei.
When you see the main gravity equation written in this really clean form:
Gab = Tab
then you know that someone has adjusted the units so as to set not only c but also 8pi G = 1
So let's take that move out of the realm of pure theory and see what it would look like in real life!
LET'S IMAGINE that we live on a planet just ten percent denser than the earth ( a bit more iron in the core, say) but the same size and otherwise the same.
And let's imagine that we have everyday units of measurement where we have 222 "counts" in a minute instead of 60 "seconds", and that the unit of length is a "palm"----the width of my four fingers (just over 8 centimeters). Let's suppose our mass unit is a "pound" which would be a mass of 434 grams if we measured it in a laboratory on earth.
then, for us, the baez units would be very easy to express in terms of our local everyday units.
baez unit speed = c = speed of light = one billion palms per count.
baez mass = 10-8 pound
baez length = 10-33 palm
all these baez units, which are just the same as planck units except some are modified by a factors of 8pi or the square root of 8pi, would be simply expressed like this-----for instance, the baez energy, the baez force, the baez temperature.
(wonder what Baez himself would think of being bandied about like this, but he was the one who most publicly proposed adjusting planck units by this 8 pi factor so he should not get to complain too loudly)
Last edited: Sep 15, 2004
16. Sep 15, 2004
### sol2
Well, hopefully, the rest of my post was acceptable I can't have a computer nerd validating what might have appeared to be the case of how this post fits where?
As you know, I can go to great lengths to question the validity of perceptions people can have, and the foundations they use. Look at your ole partner in crime, Nereid
I think she undertands now and so does that other fellow. I'd just have to do the same for you I guess :rofl:
Were you in that class picture, Neried(do you think if I continue to spell her forum name wrong this might piss her off?) offered?
best regards always(hope you know that)
Last edited: Sep 15, 2004
17. Sep 15, 2004
### marcus
My aim in this thread is to understand Olaf Dreyer's recent work, especially this short paper mentioned in the first post:
"Background Independent QFT and the CC Problem"
the thrust of this paper is closely allied to that of the one by Carlo Rovelli and Daniele Colosi that appeared a couple of days ago.
Global Particles, Local Particles
http://arxiv.org/gr-qc/0409054
The Quantum Gravity people seem to attacking this problem (and maybe one or two other major ones)-----what happens when QFT is made Background Independent by basing it on the (spin network) quantum geometry of LQG?
Olaf Dreyer says it may solve the Vacuum Energy Problem by making the hugely too big vacuum energy of QFT go away.
Carlo Rovelli says it will mean that particles do not have absolute exisence. You will no longer describe the world as consisting (even in principle) of a collection of particles and their interactions.
What particles appear to exist is merely an artifact of what the experimenter has chosen to measure.
Both the Dreyer and the Rovelli papers are about the future of QFT
and they are both concerned with foundations.
Rovelli paper does not mention the Cosmological Constant but after reading their paper I suspect that the analysis in it can be extended to show why the hugely too big (by 123 orders of magnitude) vacuum energy does not arise.
have to go, must continue this later
18. Sep 15, 2004
### marcus
"The world we live in is just another material whose excitations are described by the Standard Model." ---Olaf Dreyer
Background Independent Quantum Field Theory and the Cosmological Constant Problem
http://arxiv.org/hep-th/0409048
Carlo Rovelli and Daniele Colosi
Global Particles, Local Particles
http://arxiv.org/gr-qc/0409054
Neither paper is concerned with the efforts of String theorists, but I find that together they provoke some doubt in my mind concerning that approach.
Primarily it is because the String approach seems to attribute individual existence to particles.
The String approach seems to say "study the particle---for the world is composed of many many particles and their interactions." It is a traditional ontology going back to Newton. It could be misguided----a wiser ontology might say that space exists analogous to a crystal in condensed matter physics (perhaps as a spin-network or some other complex net of relations) and what are occasionally perceived as batches of particles are ripples or exitations, flaws in the matrix of space.
Perhaps instead of individual particles what one should do is study the bulk dynamic properties of space itself, as Einstein started physics doing in 1915, and in a partial imitation of condensed matter physics.
The "condensed matter approach" that Olaf Dreyer adopts here may prove successful not only in solveing the problem of the ridiculously huge Vacuum Energy but also in mirroring nature more generally. He attributes this approach to the Princeton Nobelist P.W.Anderson and cites work by a contemporary leader Xiao-Gang Wen at MIT.
I will get a longer quote from Dreyer and some links
19. Sep 15, 2004
### marcus
http://arxiv.org/hep-th/0409048
Background Independent Quantum Field Theory and the Cosmological Constant Problem
---quote from Olaf Dreyer---
"There are currently two competing views of the role quantum field theory plays in our theoretical understanding of nature. In one view,quantum field theory describes the dynamics of the elementary constituents of matter. The job of the physicist is to figure out what the elementary particles are and to find the appropriate Lagrangian that describes the interactions. The Standard Model of elementary particle physics is the epitome of this view (see[1] for an authoritative exposition of this view).
The other point of view likens the use of quantum field theory to its use in other areas of physics, most importantly in solid state physics. Here, quantum field theory is not used to describe the dynamics of elementary particles. In solid state physics, this would be fruitless, since the dynamics of a large number of atoms is usually beyond anyone’s ability to compute.
It turns out, however, that these large numbers of constituents often have collective excitations that can be well-described by quantum field theory and that are responsible for the physical properties of the material.
The view is then that the elementary particles of the Standard Model are like the collective excitations of solid state physics. The world we live in is just another material whose excitations are described by the Standard Model. The point of view was introduced by P. W. Anderson[2] and has since found a large following (seee.g. [3, 4, 5] and references therein). "
----end quote---
the reference [5] here is to work by the solid state physicist Xiao-Gang Wen.
Here is how Dreyer characterizes the Vacuum Energy problem a bit later in the paper right after equation number (1):
---quote---
"...If one takes this cutoff to be the Planck energy the vacuum energy is some 123 orders of magnitude away from the observed value of the cosmological constant [6], making this the worst prediction in theoretical physics. (For a detailed discussion of this problem see [7,8])..."
---end quote---
Last edited: Sep 15, 2004
20. Sep 15, 2004
### marcus
Compare what Carlo Rovelli, Daniele Colosi have to say
Global Particles, Local Particles
http://arxiv.org/gr-qc/0409054
---from the abstract---
The notion of particle plays an essential role in quantum field theory (QFT). Some recent theoretical developments, however, have questioned this notion; for instance, QFT on curved spacetimes suggests that preferred particle states might not exist in general....
---end quote---
---sample from the conclusions paragraph---
...the distinction between global and local states can therefore be safely neglected in concrete utilizations of QFT. However, the distinction is conceptually important because it bears on three related issues: (i) whether particles are local or global objects in conventional QFT; (ii) the extent to which the quantum field theoretical notion of particle can be extended to general contexts where gravity cannot be neglected; and furthermore, more in general, (iii) whether particles can be viewed as the fundamental reality (the “ontology”) described by QFT. Let us discuss these three issues separately. ...
...Can we base the ontology of QFT on local particles? Yes, but local particle states are very different from global particle states. Global particle states such as the Fock particle states are defined once and for all in the theory, while each finite size detector defines its own bunch of local particle states. Since in general the energy operators of different detectors do not commute ([HR, HR'] nonzero) there is no unique “local particle basis” in the state space of the theory, as there is a unique Fock basis. Therefore, we cannot interpret QFT by giving a single list of objects represented by a unique list of states. In other words, we are in a genuine quantum mechanical situation in which distinct particle numbers are complementary observables. Different bases that diagonalize different HR operators have equal footing. Whether a particle exists or not depends on what I decide to measure. In such a context, there is no reason to select an observable as “more real” than the others.
The world is far more subtle than a bunch of particles that interact.
---end quote---
|
|
# Overview
1. 需要先定义一个.proto文件
2. 运行protoc命令,如protoc -I=$SRC_DIR --java_out=$DST_DIR \$SRC_DIR/addressbook.proto
3. 之后使用指定目录的.java文件
# 结论
• tag number最重要
• if tag number not change, but field name change, then output would using the new field name
For more described in tutorials,
• you must not change the tag numbers of any existing fields.
• you may delete optional or repeated fields.
• you may add new optional or repeated fields but you must use fresh tag numbers (i.e. tag numbers that were never used in this protocol buffer, not even by deleted fields).
|
|
### Introduction
If you throw a rock into a calm pond, the water around the point of entry begins to move up and down, causing ripples to travel outward. If these ripples come across a small floating object such as a leaf, they will cause the leaf to move up and down on the water. Much like waves in water, sound in air is produced by the vibration of an object. These vibrations produce pressure oscillations in the surrounding air which travel outward like the ripples on the pond. When the pressure waves reach the eardrum, they cause it to vibrate. These vibrations are then translated into nerve impulses and interpreted by your brain as sounds.
These pressure waves are what we usually call sound waves. Most waves are very complex, but the sound from a tuning fork is a single tone that can be described mathematically using a cosine function:
$y = A\cos \left( {B\left( {x - C} \right)} \right)$
In this activity you will analyze the tone from a tuning fork by collecting data with a microphone.
### Objectives
• Record the sound waveform of a tuning fork.
• Analyze the waveform to determine frequency, period and amplitude information.
• Model the waveform using trigonometric functions.
|
|
1. ## Orthogonal set
Can anyone help me with this problem please?
Let {v_1,v_2,...v_k} be an orthogonal set in V, and let a_1,a_2,...,a_k be scalars. Prove that (the norm of the summation of a_i*v_i)^2 is equal to the summation of l a_i l^2* (norm of v_i)^2
i runs from 1 to k.
2. Originally Posted by namelessguy
Can anyone help me with this problem please?
Let {v_1,v_2,...v_k} be an orthogonal set in V, and let a_1,a_2,...,a_k be scalars. Prove that (the norm of the summation of a_i*v_i)^2 is equal to the summation of l a_i l^2* (norm of v_i)^2
i runs from 1 to k.
${\left| \left|{ \sum_{i=1}^k {a_i v_i}} \right| \right|}^2 = \left<{{ \sum_{i=1}^k {a_i v_i}},{ \sum_{i=1}^k {a_i v_i}}}\right> = \sum_{i=1}^k\left<{a_i v_i,a_i v_i}\right>$
$= \sum_{i=1}^k {a_i}^2\left<{v_i,v_i}\right> = \sum_{i=1}^k {a_i}^2\| v_i \| ^2$
im just trying my luck here.. please, someone verify my proof.. thx..
3. Originally Posted by kalagota
im just trying my luck here.. please, someone verify my proof.. thx..
It looks right.
4. Thanks a lot for your help, kalagota. Where can I learn to type mathematical symbols and does it take a lot of time to learn this?
5. Originally Posted by kalagota
${\left| \left|{ \sum_{i=1}^k {a_i v_i}} \right| \right|}^2 = \left<{{ \sum_{i=1}^k {a_i v_i}},{ \sum_{i=1}^k {a_i v_i}}}\right> = \sum_{i=1}^k\left<{a_i v_i,a_i v_i}\right>$
This should be $\Bigl\|\sum_{i=1}^k a_i v_i \Bigr\|^2 = \Bigl\langle\sum_{i=1}^k a_i v_i, \sum_{j=1}^k a_j v_j\Bigr\rangle = \sum_{i,j=1}^k \langle a_i v_i,a_j v_j\rangle$. The two sums are independent, so should have different dummy variables. However, $\langle v_i,v_j\rangle = 0$ when i≠j. So the only terms that survive are those for which j=i, and the rest of kalagota's solution is correct:
Originally Posted by kalagota
$= \sum_{i=1}^k {a_i}^2\left<{v_i,v_i}\right> = \sum_{i=1}^k {a_i}^2\| v_i \| ^2$
The only other comment is that if the scalar field is the complex numbers then when $a_i$ comes out of the right side of the inner product it becomes $\bar{a}_i$ (the complex conjugate). But that's okay, because we then get $a_i\bar{a}_i=|a_i|^2$, as required.
6. Originally Posted by Opalg
This should be $\Bigl\|\sum_{i=1}^k a_i v_i \Bigr\|^2 = \Bigl\langle\sum_{i=1}^k a_i v_i, \sum_{j=1}^k a_j v_j\Bigr\rangle = \sum_{i,j=1}^k \langle a_i v_i,a_j v_j\rangle$. The two sums are independent, so should have different dummy variables. However, $\langle v_i,v_j\rangle = 0$ when i≠j. So the only terms that survive are those for which j=i, and the rest of kalagota's solution is correct:
....
|
|
# Ounce to kg Converter
Created by Kenneth Alambra
Reviewed by Rahul Dhari
Last updated: Jul 18, 2022
If you need a fast way to convert ounces to kilograms, this ounce-to-kg converter is absolutely for you. Ounce (abbreviated as oz) is a unit of weight and knowing how to convert it to other weight units like kilograms (kg) can help you visualize large quantities of weight in ounces.
In this converter, we present to you:
• How to calculate ounces to kg;
• How to use this ounce to kg converter; and
• Some ounce to kg conversion examples.
Keep on reading to get started.
## How to calculate ounces to kg
To understand how ounce to kg conversion works, let us first discuss how they are related. Ounce, an imperial unit, has a relatively straightforward relationship with other weight imperial units such as pounds. One pound is actually equal to 16 ounces. On the other hand, one pound is approximately 0.45359237 kilograms (an SI unit).
Since we now know that there are around 0.45359237 kilograms per pound and 16 ounces per pound, we can multiply any weight in ounces (say we denote as $w_\text{oz}$) by $0.45359237\ \tfrac{\text{kg}}{\text{lb}}$ and divide it by $16\ \tfrac{\text{oz}}{\text{lb}}$ to find the weight in kilograms (which we can denote as $w_\text{kg}$).
\begin{align*} w_\text{kg} &= w_\text{oz}\times \frac{0.45359237\ \tfrac{\text{kg}}{\text{lb}}}{16\ \tfrac{\text{oz}}{lb}}\\\\ &= w_\text{oz}\times 0.028349523\ \tfrac{\text{kg}}{\text{oz}} \end{align*}
Now that we know how to convert ounces to kg, how about we discuss how to convert oz to kg using our tool? 🙂
## How to convert oz to kg using our ounce to kg converter
Since we already have a multiplying factor to convert ounces to kg, it now only takes one step to convert oz to kg using our tool. That is to enter weight in ounces, and our tool will display its equivalent value in kilograms.
You can also change the units of the resulting answer if you're curious. There are also more units in our tool's Advanced mode. You can access that by clicking the Advanced mode button below our calculator widget.
💡 Our calculator works both ways! That means you can also enter weights in other units to find their equivalent value in ounces. How cool is that? 😊
Finding our ounce-to-kg converter interesting and want to learn more? Here are some of our other converters you can try:
## FAQ
### How much is 8 oz in kg?
8 ounces is approximately 0.226796 kg. To get that value:
1. First, convert 8 oz to pounds by multiplying it by 1/16 pounds per ounce to get 0.5 lbs.
2. Then, multiply 0.5 lbs by 0.45359237 kilograms per pound to get 0.226796 kg.
### How many kg is in 18 oz?
18 ounces is approximately half a kilogram, precisely 0.510291 kg. We can quickly get that value in kilograms by multiplying 18 ounces by 0.028349523: 18 oz × 0.028349523 = 0.510291 kg. On the other hand, we get its equivalent value in pounds by devising it by 16: 18 oz ÷ 16 = 1.125 lbs.
## How do I convert oz to kg?
To convert ounce to kilograms, you can:
• Multiply the weight in ounces by 0.028349523 kg/oz to directly get the weight in kilograms; or
• By first converting ounces to pounds by dividing the weight in ounces by 16, then by converting pounds to kg by multiplying the resulting quotient by 0.45359237.
Kenneth Alambra
Weight in ounces
oz
Weight in other units
kg
People also viewed…
### Black Friday
How to get best deals on Black Friday? The struggle is real, let us help you with this Black Friday calculator!
### Bra size
Use the bra size calculator to find your bra size in continental Europe, UK, and USA.
### Quarantine silver lining
The quarantine silver lining calculator shows you the savings you make while staying home.
### Titration
Use our titration calculator to determine the molarity of your solution.
|
|
# Chapter R - Algebra Reference - R.4 Equations - R.4 Exercises: 4
$k= -\displaystyle \frac{3}{8}$
#### Work Step by Step
The aim is, step by step, to isolate $k$ on one side of the equation. By multiplying both sides with the LCD of all the fractions, we obtain an equation with no fractions: (LCD=24) $\displaystyle \frac{2}{3}k- k+\displaystyle \frac{3}{8}=\frac{1}{2}\qquad.../\times 24$ $24\displaystyle \cdot\frac{2}{3}k-24\cdot k+24\displaystyle \cdot\frac{3}{8}=24\cdot\frac{1}{2}\qquad$simplify $16k-24k+9=12$ Subtract $9$ from both sides and the left side will contain only$k^{\prime}s.$ $16k-24k=12-9 \qquad$... simplify $-8k=3$ Dividing both sides with $(-8)$ isolates k on the left side. $-8k=3 \qquad$...$/\div(-8)$ $k= -\displaystyle \frac{3}{8}$
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback.
|
|
# Model evaluation for feature selection
I have a dataset of gene expression data and I'm trying to find genes related to particular diseases. My labels are dichotomous (sick - not sick) and I used a Logistic regression with LASSO regularization in order to extract meaningful features (basically taking all the genes with coefficient different from zero). Hyperparameter lambda has been found using leave-one-out cross-validation. In order to find my coefficient should I train a new model using my best lambda on all the data?
I think there are some drawbacks in my approach but since I'm relatively new to feature reduction I cannot formulate a query on google that tells me if or where I am wrong. Can you help me?
• How many genes $p$ do you have, and what's the number $n$ of experiments? – Edgar Oct 4 '19 at 12:26
• 4300 genes and 260 samples – Giuseppe Minardi Oct 4 '19 at 12:37
1. It's better to do the standard $$10$$-fold cross-validation in your case, since you have $$n=260$$ observations. Leave-one-out cross-validation is more appropriate if the $$n$$ is smaller.
2. You're completely right, after the cross-validation you need to train a new model on the full data with the hyperparameter $$\lambda$$ found by the cross-validation.
3. If you're using R for this analysis (which is highly recommended), everything should be easy to implement with the cv.glmnet() function from the glmnet package, with parameter family="binomial" for the logistic regression.
4. Take care that you normalized your gene expression values reasonably. Look up "RPKM" and "TPM" for a further understanding of how to model with gene expression values.
Below a minimum working example based on your comments, this should give you a smooth start with the cv.glmnet function (X is your gene expression matrix, y your 0-1-encoded disease status):
library(glmnet)
my_glmnet <- cv.glmnet(X, y, family="binomial") # default CV with 10 folds
plot(my_glmnet) # here you can see the CVMSE and the optimal lambdas, as well as the number of non-zero coefficients at the top
which_opti_lambda <- which(my_glmnet$$lambda==my_glmnet$$lambda.min) # this gives you the index of the optimal lambda (you can also try lambda.1se)
opti_coefficients <- my_glmnet$$glmnet.fit$$beta[, which_opti_lambda] # get the coefficients of the final run for the optimal lambda
which(opti_coefficients!=0) # these are the genes that were chosen by lasso cv logistic regression
• Yes, I'm using R and yes, I'm using glmnet. My advisor told me to use cross-validation also in order to test my model but I do not know how to use cv.glmnet in order to do cross-validation without estimating lambda. he function only works using different values of lambda. I'm currenntly using a simple train-test split but my advisor told me to use cross validation – Giuseppe Minardi Oct 4 '19 at 12:56
• cv.glmnet does all the lambda calculations for you, you can recover the lambda values from the glmnet object, as well as the lambda.min (or lambda.1se) value for which the optimal cross-validation error was achieved. – Edgar Oct 4 '19 at 12:58
• I know, but should I train my model again with my best lambda or can I use the mse given by cv.glmnet? – Giuseppe Minardi Oct 4 '19 at 16:27
• If you do 10-fold CV with cv.glmnet, it gives you the fitted object for the (11th) full model, that means, it trains the model again for you already (but with a full lambda sequence, too). The best lambda is either lambda.min or lambda.1se or any lambda of your choice -- you will have to find the appropriate index in the lambda sequence and choose the corresponding column of the coefficient matrix. – Edgar Oct 4 '19 at 16:40
• I updated my answer with a minimal working example. – Edgar Oct 4 '19 at 16:57
|
|
# Commutator subgroup and bijective representation
Let $G$ be a finite group and $G' = [G,G]$ be its commutator subgroup, which is defined to be the subgroup generated by elements $[g,h] = g^{-1}h^{-1}gh$ for all $g,h \in G$, where $G'$ is a normal subgroup of $G$. And $G/G'$ is abelian.
Prove that for any field $\mathbb{K}$, the degree $1$ representations of $G$ over $\mathbb{K}$ are in bijection with the degree $1$ representations of $G/G'$ over $\mathbb{K}$.
attempt: Suppose $\phi_1: G → GL(\mathbb{K})$ be a representation of degree 1. is similar to $\phi_1: G → \mathbb{K}^{\times}$.
And define $\phi_2 : G/G'→ \mathbb{K}^{\times}$
Do I have to show $\phi_1 → \phi_2$ and show it's a bijective function? I am not sure what I have to show. Can someone please help? Thank you!
• Isn't it just because $\Bbb K^{\times}$ is abelian, so the kernel of any homomorphism from $G$ to $\Bbb K^{\times}$ must contain the commutator subgroup? In other words you can always factor $G\to\Bbb K^{\times}$ through $G/G'$. That gives you the bijective map in one direction. And you can lift any homomorphism from $G/G'$ to $\Bbb K^{\times}$ to a homomorphism from $G$ to $\Bbb K^{\times}$. That gives you the other direction. Just show the composition is the identity, in both directions, to conclude it's a bijection. – Gregory Grant Feb 17 '16 at 23:27
• Could I define $\phi_2(gG') = \phi_1(g)$? And then show the composition is the identity ? – user40294 Feb 17 '16 at 23:40
• Yes, that gives you the way to take a map from $G/G'$ to $\Bbb K$ to get a map from $G$ to $\Bbb K$. You have to show it's well defined. – Gregory Grant Feb 18 '16 at 0:18
Show that for every abelian group $A$, $Hom(G,A)\cong Hom(G/G',A)$ (where $Hom(-,-)$ is the set of group homomorphisms). Since $\Bbb K^\times$ is abelian, you're done.
(I think it's important to see that for general $A$ since this is a fundamental property of $G/G'$).
• So I have to define $f : G → A$ , and $f_1 : G/G' → A$? And show it's a hormomorphism? Then show this is isomorphic? – user40294 Feb 17 '16 at 23:42
• Not quite. From a map $f:G\to A$, you have to construct a map $\bar{f}:G/G' \to A$ and vice-versa. Furthermore, you have to show that these constructions are inverse to each other. – Nitrogen Feb 17 '16 at 23:44
|
|
# Show $\text{Gal}(K_\infty/\mathbb Q)\cong \mathbb Z_p^{\times}$
Let $\zeta_{p^n}$ be the primitive $p^n$-th root of unity where $p$ is a prime and $K_n=\mathbb Q(\zeta_{p^n})$ the $p^n$-th cyclotomic field. Let $K_\infty=\bigcup K_n$.
Could someone give a proof of the isomorphism $\text{Gal}(K_\infty/\mathbb Q)\cong \mathbb Z_p^{\times}$?
• @GregoryGrant: No, $\mathbb{Z}_p$ means the $p$-adic integers (and in my opinion, should not be used to denote $\mathbb{Z}/p\mathbb{Z}$). Pete: are you familiar with inverse limits? – RghtHndSd May 3 '15 at 21:49
• Oh ok, that makes more sense then. I agree there's a conflict of notation, but like it or not $\Bbb Z_p$ is used so often by so many people that it's probably a good idea to qualify it when using it for the $p$-adics. – Gregory Grant May 3 '15 at 21:51
• @RghtHndSd My first thought was to consider how an arbitrary element $g$ of the galois group would act on $\zeta_{p^n}$. If $g(\zeta_{p^n})=\zeta_{p^n}^{a_g}$ then would $a_g$ be coprime to $p^n$? – pete m May 3 '15 at 21:59
• Pete, that's a good start. Surely $g(\zeta_{p^n})=\zeta_{p^n}^{a_n}$. And, yes, $\gcd(a_n,p)=1$, because otherwise a root of unity of order $p^n$ would be mapped to a lower order root of unity. One of the keys is to see the constraint between $a_n$ and $a_{n+1}$, Can you see what it is? – Jyrki Lahtonen May 4 '15 at 12:26
• @Jyrki Would I be right in saying $\zeta_{p^{n+1}}^p=\zeta_{p^n}$ so $\zeta^{a_n}_{p^n}=g(\zeta_{p^n})=g(\zeta_{p^{n+1}}^p)=g(\zeta_{p^{n+1}})^p=(\zeta_{p^{n+1}}^{a_{n+1}})^p=\zeta_{p^n}^{a_{n+1}}$ and so $a_{n+1}\equiv a_n$ $\text{mod}(p^n)$. So would $(a_n)_n\in \mathbb Z_p$ and since each $a_n$ is coprime to $p$ then it is actually in $\mathbb Z_p^{\times}$? regards – pete m May 5 '15 at 20:15
|
|
Page 1 of 1
### USA(J)MO 2017 #3
Posted: Sat Apr 22, 2017 6:06 pm
Let $ABC$ be an equilateral triangle, and point $P$ on it's circumcircle. Let $PA$ and $BC$ intersect at $D$, $PB$ and $AC$ intersect at $E$, and $PC$ and $AB$ intersect at $F$. Prove that the area of $\bigtriangleup DEF$ is twice the area of $\bigtriangleup ABC$
### Re: USA(J)MO 2017 #3
Posted: Sat Apr 22, 2017 6:23 pm
We use barycentric coordinates.
Let $P\equiv (p:q:r)$. Now, we know that $pq+qr+rp=0$ [The equation of circumcircle for equilateral triangles].
Now, $D\equiv (0:q:r), E\equiv (p:0:r), F\equiv (p:q:0)$.
So, the area of $\triangle DEF$ divided by the area of $\triangle ABC$ is:
$$\dfrac{1}{(p+q)(q+r)(r+p)} \times \begin{vmatrix} 0 & q & r\\ p & 0 & r\\ p & q & 0 \end{vmatrix}$$
$$=\dfrac{2pqr}{(p+q+r)(pq+qr+rp)-pqr}$$
$$=\dfrac{2pqr}{-pqr}=-2$$.
The reason of negativity is that we took signed area.
Therefore the area of $DEF$ is twice the area of $ABC$.
### Re: USA(J)MO 2017 #3
Posted: Sat Apr 22, 2017 6:48 pm
Thanic Nur Samin wrote:We use barycentric coordinates.
Let $P\equiv (p:q:r)$. Now, we know that $pq+qr+rp=0$ [The equation of circumcircle for equilateral triangles].
Now, $D\equiv (0:q:r), E\equiv (p:0:r), F\equiv (p:q:0)$.
So, the area of $\triangle DEF$ divided by the area of $\triangle ABC$ is:
$$\dfrac{1}{(p+q)(q+r)(r+p)} \times \begin{vmatrix} 0 & q & p\\ p & 0 & r\\ p & q & 0 \end{vmatrix}$$
$$=\dfrac{2pqr}{(p+q+r)(pq+qr+rp)-pqr}$$
$$=\dfrac{2pqr}{-pqr}=-2$$.
The reason of negativity is that we took signed area.
Therefore the area of $DEF$ is twice the area of $ABC$.
There's a typo in the determinant: zero for you~
### Re: USA(J)MO 2017 #3
Posted: Sat Apr 22, 2017 7:04 pm
For those who loves synthetic geometry
Throughout the proof signed area will be used.
Lemma: Let $ABC$ be an equilateral triangle, and point $P$ on its circumcircle. Let $PB$ and $AC$ intersect at $E$, and $PC$ and $AB$ intersect at $F$.Then ${[EPF]}={[ABPC]}$
Proof: Let the tangent to $(ABC)$ at $A$ meet $BP$ at $J$ .Then applying pascal's theorem on hexagon $AACPBB$ we get $JF \parallel BB \parallel AC$ . So
$${[EPF]}={[ECF]}-{[ECP]}={[ECJ]}-{[ECP]}={[PCJ]}={[PCB]}+{[BCJ]}={[PCB]}+{[BCA]}={[BPCA]}={[ABPC]}$$.
Problem : So , $${[DFE}]={[EPF]}+{[FPD]}+{[DPE]}$$
$$={[ABPC]}+{[BCPA]}+{[CAPB]}$$
$$=\{ {[BPA]}+{[APC]} \}+\{ {[ABC]}-{[APC]} \} + \{ {[ABC]}-{[BPA]} \}$$
$$=2{[ABC]}$$
### Re: USA(J)MO 2017 #3
Posted: Sat Apr 22, 2017 9:31 pm
joydip wrote:
Proof: Let the tangent to $(ABC)$ at $A$ meet $BP$ at $J$ .Then applying pascal's theorem on hexagon $AACPBB$ we get $JF \parallel BB \parallel AC$ . So
How did you get that idea?
### Re: USA(J)MO 2017 #3
Posted: Mon Apr 24, 2017 11:26 am
WLOG $P$ lies on the shorter arc $BC$ . So, $[DEF]=[AEF]-[ABC]-[BDF]-[CDE]$
$\angle BAD = \alpha$
Use Sine Law to find $BD$,$DC$,$BF$,$CE$ in terms of $a$ and sine of $\alpha$ and $60-\alpha$, where $a$ is the length of the sides of $\triangle ABC$ . Then we'll use these lengths to find $[AEF]$,$[BDF]$ and $[CDE]$ . We've to prove $[DEF] =\frac{\sqrt3}{2} a^2$
After some simplification, we get
$\frac{(\sin^2 \alpha + \sin^2 (60 - \alpha) )( \sin \alpha + \sin (60-\alpha) ) - \sin^3 \alpha -\sin^3(60-\alpha)}{\sin \alpha \sin (60-\alpha)(\sin \alpha + \sin (60-\alpha))}=1$ which is obviously true, and so we are done.
### Re: USA(J)MO 2017 #3
Posted: Mon Apr 24, 2017 1:45 pm
Zawadx wrote: There's a typo in the determinant: zero for you~
Edited. Latexing a determinant is a pain in the first place, locating these typos are difficult
It was correct in my paper, so if I had submitted, it wouldn't have been a zero, rather a seven.
|
|
# GATE Questions & Answers of Linear Algebra Electrical Engineering
#### Linear Algebra 20 Question(s)
Given a system of equations:
$x+2y+2z={b}_{1}$
$5x+y+3z={b}_{2}$
Which of the following is true regarding its solutions
A system matrix is given as follows
$A=\left[\begin{array}{ccc}0& 1& -1\\ -6& -11& 6\\ -6& -11& 5\end{array}\right]$
The absolute value of the ratio of the maximum eigenvalue to the minimum eigenvalue is _______
Which one of the following statements is true for all real symmetric matrices?
Two matrices A and B are given below:
If the rank of matrix A is N, then the rank of matrix B is
The equation has
A matrix has eigenvalues –1 and –2. The corresponding eigenvectors are $\left[\begin{array}{c}1\\ -1\end{array}\right]$ and $\left[\begin{array}{c}1\\ -2\end{array}\right]$ respectively. The matrix is
Given that
$\mathbit{A}=\left[\begin{array}{cc}-5& -3\\ 2& 0\end{array}\right]$ and $\mathbit{I}=\left[\begin{array}{cc}1& 0\\ 0& 1\end{array}\right]$, the value of A3 is
The matrix $\left[A\right]=\left[\begin{array}{cc}2& 1\\ 4& -1\end{array}\right]$ is decomposed into a product of a lower triangular matrix [L] and an upper triangular matrix [U]. The properly decomposed [L] and [U] matrices respectively are
An eigenvector of $\mathrm{p}=\left(\begin{array}{ccc}1& 1& 0\\ 0& 2& 2\\ 0& 0& 3\end{array}\right)$ is
For the set of equations
x1 + 2x2 + x3 + 4x4 =2
3x1 + 6x2 + 3x2+ 12x4 = 6.
The following statement is true
The trace and determinant of a 2 × 2 matrix are known to be -2 and -35 respectively. Its eigen values are
The characteristic equation of a (3X3) matrix P is defined as
$\alpha \left(\lambda \right)=|\lambda \mathbf{I}-\mathbf{P}|={\lambda }^{3}+{\lambda }^{2}+1=0$
If I denotes identity matrix, then the inverse of matrix P will be
If the rank of a (5X6) matrix Q is 4, then which one of the following statement is correct ?
A is a m x n full rank matrix with m>n and I is identity matrix. Let matrix A+=(ATA)-1AT,Then, which one of the following statement is FALSE ?
Let P be a 2 X 2 real orthogonal matrix and $\stackrel{\mathbf{\to }}{\mathbf{x}}$ is a real vector ${\left[{x}_{1},{x}_{2}\right]}^{\tau }$ with length $\style{font-size:14px}{\parallel\overrightarrow x\parallel=\left(x_1^2+x_2^2\right)^\frac12.}$ Then, which one of the following statements is correct ?
x = [x1 x2 ... xn]T is an n-tuple nonzero vector. The n×n matrix V=xx T
Let x and y be two vectors in a 3 dimensional space and <x, y> denote their dot product. Then the determinant
$\mathrm{det}\left[\begin{array}{cc}& \\ & \end{array}\right]$
The linear operation L(x) is defined by the cross product L(x) = bXx, where b=[0 1 0]T and x=[x1 x2 x3]T are three dimensional vectors. The 3×3 matrix M of this operations satisfies $\mathrm{L}\left(\mathrm{x}\right)=\mathrm{M}\left[\begin{array}{c}{x}_{1}\\ {x}_{2}\\ {x}_{3}\end{array}\right]$
Then the eigenvalues of M are
Cayley-Hamilton Theorem states that a square matrix satisfies its own characteristic equation. Consider a matrix $A=\left[\begin{array}{cc}-3& 2\\ -1& 0\end{array}\right]$
A satisfies the relation
Cayley-Hamilton Theorem states that a square matrix satisfies its own characteristic equation. Consider a matrix $A=\left[\begin{array}{cc}-3& 2\\ -1& 0\end{array}\right]$
|
|
1. Dec 29, 2003
### Orion1
One of the real important aspects of Hawking Radiation is the Kerr Temperature.
Note that the Kerr Temperature is responsible for the Kerr Particle Energy Spectrum and represents the genesis of Thermodynamic Quantum Gravitation.
Thermodynamic Quantum Gravitation is the combination of Thermodynamics and Quantum Gravitation:
A Kerr Black Hole is a rotating Black Hole.
$$T_k = \frac{\hbar c^3}{K_o G M}$$
Ko = Boltzmann's Thermal Constant
2. Dec 29, 2003
### NateTG
Since you brought it up, I have this rather silly question about Hawking radiation:
Let's say that I have an extremely small black hole -- i.e. the Schwartzschild radius is smaller than the Planck length.
Wouldn't a black hole like this have a tendency to radiate light that had more mass/momentum than the black hole does?
3. Dec 29, 2003
### Orion1
$$r_s = \frac{2 G M}{c^2}$$
Planck Length:
$$r_p = \sqrt{ \frac{ \hbar G}{c^3}}$$
Schwarzschild Temperature:
$$T_s = \frac{ \hbar c^3}{4 K_o G M}$$
$$r_s = r_p$$
$$\frac{2 G M}{c^2} = \sqrt { \frac{ \hbar G}{c^3}}$$
Schwarzschild-Planck Mass:
$$M_s = \frac{1}{2} \sqrt { \frac{ \hbar c}{G}}$$
Integral:
$$M_s = \frac{1}{2} \sqrt { \frac{ \hbar c}{G}} = \frac{ \hbar c^3}{4 K_o G T_s}$$
Schwarzschild-Planck Temperature:
$$T_s = \frac {1}{2K_o} \sqrt { \frac{\hbar c^5}{G}}}$$
$$T_s = 7.084E+31 K$$
T_s = 7.084*10^31 Kelvin
Schwartzschild radius is smaller than the Planck length.
Wouldn't a black hole like this have a tendency to radiate light that had more mass/momentum than the black hole does?
if the black hole were smaller than that, then would it be able to radiate?
$$r_s \ll r_p$$
$$\Delta = \frac {\hbar c}{ \lambda K_o T_s} = \frac{2}{ \lambda} \sqrt { \frac {\hbar G}{c^3}}$$
$$\Delta = \frac{2}{ \lambda} \sqrt { \frac {\hbar G}{c^3}}$$
$$T_q = \frac {\hbar c^3}{4 K_o G M_s ( e^\Delta - 1)}$$
$$I(\lambda) = \frac { 2 \pi h c^2}{ \lambda^5 (e^\Delta - 1)}$$
$$I_q = \sigma T_q^4$$
The Schwarzschild-Planck Radius is Mass dependent.
As a Thermodynamic Schwarzschild-Planck Black Hole radius falls below the Planck Radius $$r_s \ll r_p$$, it becomes a Thermodynamic Quantum-Schwarzschild Black Hole, the resulting radiation diminishes instead of increasing.
The resulting radiation flux becomes less energetic than the mass equivalency.
A Thermodynamic Schwarzschild-Planck Black Hole would evaporate instantly however a Thermodynamic Quantum-Schwarzschild Black Hole diminishes more gradually, however still relatively instantaneous. The relative flux intensity also diminishes.
A Quantum-Schwarzschild Black Hole Thermodynamic Temperature is quantumized below the Planck Radius.
Last edited: Dec 30, 2003
4. Dec 30, 2003
### NateTG
Right, but if the black hole were smaller than that, then would it be able to radiate? Conservation of energy/mass would be grossly violated if the black hole could produce radiation with more energy than it's equivalent mass. Similarly, there might be problems with conservation of linear momentum.
5. Jan 27, 2004
### Nommos Prime (Dogon)
http://www.th.physik.uni-frankfurt.de/~lxd/English/bhs_e.html
http://relativity.livingreviews.org/Articles/lrr-2001-6/node7.html [Broken]
"Imaginary Time" (which theoretically exists at right angles to ordinary time) may hold the key to this one. However, we’re yet to probe THAT dimension…
Last edited by a moderator: May 1, 2017
6. Apr 24, 2004
### kurious
Conservation of energy/mass would be grossly violated if the black hole could produce radiation with more energy than it's equivalent mass
Not if the black hole can absorb as much energy from the vacuum as it radiates and there is theoretically 10^120 Joules per cubic metre in the vacuum.
7. Apr 25, 2004
### Orion1
Planck Probability...
What if a Schwarzschild-Planck Black Hole is capable of absorbing more radiation than its Schwarzschild-Planck Temperature vacuum, does this also violate Conservation of energy/mass?
I presume that a Schwarzschild-Planck Black Hole which is capable of absorbing more radiation than its Schwarzschild-Planck Temperature vacuum would momentarily increase in mass, then still evaporate instantly.
To my understanding, a 'perfect radiation absorber' is not possible, but then again it was once thought that black holes were a 'zero radiation emitter'.
Is there an equasion that exists that determines how much radiation a Schwarzschild-Planck Black Hole is capable of absorbing?
According to my integrations above, the amount of Schwarzschild-Planck Black Hole radiation flux produced below the Planck Radius $$r_s \ll r_p$$ is no longer determined by mass thermodynamics, but by probability and radiation wavelength similar to a blackbody radiator, given here: $$P = (e^\Delta - 1)$$.
Thereby, when Schwarzschild-Planck Black Hole falls below Planck radius $$r_s \ll r_p$$, the radiation flux probabilisticly diminishes, resulting in a radiation flux that no longer violates Conservation of energy/momemtum. However note that the evaporation is still relatively instantaneous, as such equasions are described in 'slow motion'.
8. Apr 25, 2004
### Stingray
In that case, the derivation of Hawking radiation is no longer valid. You'd need a fully theory of quantum gravity, whereas Hawking radiation is derived by formulating quantum field theory on top of a fixed classical spacetime.
9. Feb 27, 2008
### random.person
Wavelength
I am Doing a project on Hawking Radiation and I am wondering if anyone actually Knows the wavelength of it?? It would be great if someone could email me cos I don't always have time to do much more than check my emails
--
a.random.persona@gmail.com
--
Thanks
|
|
# What is beyond Gamma Rays and Radiowaves in the Electromagnetic Spectrum?
The electromagnetic spectrum is commonly refered to as consisting of; Radio-waves, Microwaves, Infrared, Visible Light, Ultraviolet, X-rays, Gamma Rays - of increasing frequency from left to right.
But is it possible to get radiation of higher wavelength than Radio-waves, or lower wavelength than Gamma rays - does it even exist? Or could they be produced?
Most interestingly, from the Planck–Einstein relation, E = hf, how high of an energy could you get for a very very high frequency radiation?
## 4 Answers
Higher energy gamma and longer wavelength radio?
Keep in mind that the different 'kinds' are merely human labeling conventions for a spectrum that is continuous in the mathematical sense. There is no feature of "radio" that distinguishes it objectively from microwaves. We just pick a boundary on the basis of some technological limitations that apply when we decide the difference and stick labels on.
The reason there aren't labels beyond radio and gamma is that there is no real need to label those bands.
• dmckee As you know, the wavelength of radio waves refers to the current oscillation in the antenna rod and has nothing to do with the wavelength of the photons emitted from the accelerated electrons. It is a modulated radiation. So the real limit for photons is in the IR range. – HolgerFiedler May 8 '16 at 5:15
• Nonsense. High $n$ Rydberg atom transitions are a fruitful astrophysics source and they are in the microwave range. There is no lower limit on the energy of a photon. – dmckee --- ex-moderator kitten May 8 '16 at 5:18
In addition to the answer by dmckee and to answer the question how high in energy you could get a photon it might be worth thinking about 'Gamma Ray Astronomy' where the highest energy photons are detected. The record highest photon energy observed is apparently currently 80 TeV, which corresponsds to a wavelength of $1.5 \times10^{-20}m$ wavelength (if I calculated it correctly). This is very short considering the 'size' of the hydrogen nucleus is ~$1.8\times10^{-15}$.....
...but, of course, as dmckee points out there is a continuous spectrum and no high or low energy limit to the energy of a photon.
• There is a high-energy limit to the long-range (cosmologically speaking) propagation of high-energy photons. Above about 400 TeV the cross-section for light-light interactions with the CMB suddenly begins to become significant. – dmckee --- ex-moderator kitten May 7 '16 at 23:07
• @dmckee - thanks for comment. Again I learn from physics SE, many thanks – tom May 7 '16 at 23:11
There is a theoretical limit for how small the waves can be. it is when the waves are as long as the Planck length. But as far as I know there isn't a limit of how long the waves can be.
Wouldn't a wave smaller then the smallest be "in the same theoretical universe of discourse" mathematically negative? Looks like a good explanation for a time traveling Sci-Fi xD
Also, wave may be able to reach a higher speed then it's correlation to length. For example if traveling on a referent that is also traveling. A wave inside another one would be 2 time faster if we can alter it's spatial referent, for which quantum intrication could be the key. Don't forget that all those equations have or are directly correlated to a constant. Those are human made, at least human labelled. Thinking about it in separate questions was a good lead, what would happen over gamma ray speed/frequency, to an electromagnetic signal, imagining we have a transmitter that can achieve that?
|
|
## Positive Meniscus Lens
A positive meniscus lens made from glass of refraction index $$1{,}5$$ is in the air and its focal length is $$12 \mathrm{cm}$$.
What are the radii of curvature of the two optical surfaces of the lens if the radius of curvature of the concave surface is two time greater than the radius of curvature of the convex surface?
• #### Notation
$$n_{g}=1{,}5$$ refraction index of glass $$f=12 \mathrm{cm}=0{,}12 \mathrm{m}$$ focal length $$r_{1}= ?$$ curvature radius of the concave surface $$r_{2}= ?$$ curvature radius of the convex surface
• #### Hint 1
We read in the task assignment that the radius of the convex surface is two times greater than the radius of the concave surface. Write it down. Use the sign convention mentioned in Imaging a Bat by a Spherical Mirror and a Converging Lens, Radius of curvature and focal length and decide if the radii of curvature are positive or negative.
• #### Hint 2
Radii of curvature of the optical surfaces of the lens determine the properties of the lens. They also change the focal length of the lens.
Do you know any equation which describes the relation between a focal length of a lens and the radii of curvature of its surfaces?
We will assume to have a thin lens.
• #### Hint 3
We can now use the relations which we found and determine the unknown radii of curvature.
• #### Complete solution
Sign convention
We know from the sign convention mentioned in Imaging a Bat by a Spherical Mirror and a Converging Lens, Radius of curvature and focal length that if the centre of curvature of the first surface is in front of the lens, its radius of curvature is negative. If the centre of curvature of the first surface is behind the lens, its radius of curvature is positive. Conversely for the second surface. Thus for the lens from the assignment it holds that the concave surface (first) has negative radius of curvature $$r_{1}\lt 0$$ and convex surface (second) has positive radius of curvature $$r_{2}\gt 0$$. As mentioned above, radius of concave surface is two times greater than radius of convex surface. We thus can write: $r_{1}=−2r_{2}.\tag{1}$
Lensmaker’s equation
Radii of curvature of the optical surfaces of the lens determine the properties of the lens. They also change the focal length of the lens.
The relation between a focal length of a lens $$f$$ and the radii of curvature of its surfaces $$r_{1}$$ a $$r_{2}$$ is described by Lensmaker’s equation. Let us write it for our thin lens: $\frac{1}{f}=\left(\frac{n_{g}}{n_{a}}−1\right)\left(\frac{1}{r_{1}}+\frac{1}{r_{2}}\right),\tag{2}$ where $$n_{g}$$ is the refraction index of glass and $$n_{a}$$ is the refraction index of air.
We can now use the relations which we found and determine the unknown radii of curvature.
We substitute relation (1) into equation (2) $\frac{1}{f}=\left(\frac{n_{g}}{n_{a}}−1\right)\left(\frac{1}{\left(−2r_{2}\right)}+\frac{1}{r_{2}}\right).$
We will simplify the equation to express the radius of curvature $$r_{2}$$.
Firstly, let us simplify the fractions in brackets: $\frac{1}{f}=\left(\frac{n_{g}−n_{a}}{n_{a}}\right)\left(\frac{−1}{2r_{2}}+\frac{2}{2r_{2}}\right),$ $\frac{1}{f}=\left(\frac{n_{g}−n_{a}}{n_{a}}\right)\left(\frac{1}{2r_{2}}\right).$
We multiply both sides of the equation by $$\frac{n_{a}}{n_{g}−n_{a}}$$: $\frac{1}{f}\frac{n_{a}}{n_{g}−n_{a}}=\frac{1}{2r_{2}}.$ We multiply both sides of the equation by two: $\frac{2}{f}\frac{n_{a}}{n_{g}−n_{a}}=\frac{1}{r_{2}},$ $\frac{2n_{a}}{f\left(n_{g}−n_{a}\right)}=\frac{1}{r_{2}}.$ We express $$r_{2}$$: $r_{2}=\frac{f\left(n_{g}−n_{a}\right)}{2n_{a}}.$
We expressed the radius of curvature of the convex surface. We will calculate the radius of curvature of the second surface using (1): $r_{1}=−2r_{2},$ $r_{1}=−2\cdot \frac{f\left(n_{g}−n_{a}\right)}{2n_{a}}.$ Thus : $r_{1}= \frac{f\left(n_{a}−n_{g}\right)}{n_{a}}.$
Numerical solution
We know from the task assignment that:
$$n_{g}=1{,}5$$ (refraction index of glass)
$$f=12 \mathrm{cm}=0{,}12 \mathrm{m}$$ (focal length of the lens)
We find the refraction index of air in tables:
$$n_{a}=1{,}0$$
Substituting into the relation for the radius of curvature of the concave surface we get:
$$r_{1}= \frac{f\left(n_{a}−n_{g}\right)}{n_{a}}=\frac{0{,}12\cdot \left(1−1{,}5\right)}{1} \mathrm{m}=−0{,}6 \mathrm{m}=−6 \mathrm{cm}$$.
Substituting into the relation for the radius of curvature of the convex surface gives us:
$$r_{2}=\frac{f\left(n_{g}−n_{a}\right)}{2n_{a}}=\frac{0{,}12\cdot \left(1{,}5−1\right)}{2{\cdot} 1 }=0{,}3 \mathrm{m}=3 \mathrm{cm}$$.
We calculated the radius of curvature of the concave surface to be negative and the radius of curvature of the convex surface to be positive as we expected from the sign convention.
The radius of curvature of the concave surface of the lens is $$r_{1}=6 \mathrm{cm}$$ and the radius of curvature of the convex surface is $$r_{2}=3 \mathrm{cm}$$.
• #### Thick lens
We can write the Lensmaker’s equation for thick lens of the thickness $$d$$ as follows: $\frac{1}{f}=\left(\frac{n_{g}}{n_{a}}−1\right)\left(\frac{1}{r_{1}}+\frac{1}{r_{2}}\right)−\frac{\left(\frac{n_{g}}{n_{a}}−1\right)^2 d}{\frac{n_{g}}{n_{a}}r_{1}r_{2}}.$
If the thickness of a lens is very small compared to the radii of curvature of the lens surfaces, we can neglect it, assume that $$d=0$$. In this case Lensmaker’s equation for thick lens becomes Lensmaker’s equation for thin lens which we mentioned in hint 2: $\frac{1}{f}=\left(\frac{n_{g}}{n_{a}}−1\right)\left(\frac{1}{r_{1}}+\frac{1}{r_{2}}\right).$ Thin lens is a frequently used approximation of a thick lens.
We can also determine how the properties of the lens would change if we did not neglect its thickness.
We know that our lens has radii of curvature $$r_{1}=−6 \mathrm{cm}$$ and $$r_{2}=3 \mathrm{cm}$$.
We can calculate the focal length f from the Lensmaker’s equation for thin lens:
$$\frac{1}{f}=\left(\frac{n_{g}}{n_{a}}−1\right)\left(\frac{1}{r_{1}}+\frac{1}{r_{2}}\right)=\left(\frac{1{,}5}{1}−1\right)\left(\frac{1}{−6}+\frac{1}{3}\right) \mathrm{\frac{1}{cm}}=\left(0{,}5\right)\left(\frac{−1}{6}+\frac{2}{6}\right) \mathrm{\frac{1}{cm}}=\frac{1}{12} \mathrm{\frac{1}{cm}},$$
$$f=12 \mathrm{cm}$$.
We will now calculate the focal length f from the equation for thick lens. We will assume the thickness to be for example $$d=1 \mathrm{cm}$$:
$$\frac{1}{f}=\left(\frac{n_{g}}{n_{a}}−1\right)\left(\frac{1}{r_{1}}+\frac{1}{r_{2}}\right)−\frac{\left(\frac{n_{g}}{n_{a}}−1\right)^2 d}{\frac{n_{g}}{n_{a}}r_{1}r_{2}}=\left(\frac{1}{12}−\frac{\left(\frac{1{,}5}{1}−1\right)^2 {\cdot}1}{\frac{1{,}5}{1}\left(−6\right)\cdot 3}\right) \mathrm{\frac{1}{cm}}= \left(\frac{1}{12}+\frac{\left(0{,}5\right)^2}{1{,}5{\cdot} 18}\right) \mathrm{\frac{1}{cm}} = \left(\frac{1}{12}+\frac{1}{12{\cdot} 9} \mathrm{\frac{1}{cm}} =\frac{5}{54}\right) \mathrm{\frac{1}{cm}},$$
$$f=\frac{54}{5} \mathrm{cm}=10{,}8 \mathrm{cm}$$.
We see that the focal lengths of thin and thick lens are different. Thus, it is necessary to consider in every case if we can neglect the thickness or not.
×Original source: Bartuška, K. Sbírka řešených úloh z fyziky pro střední školy (1. vyd.). Prometheus, Praha 1997.
Covered in the diploma thesis of Michaela Jungová (2016).
|
|
# wscanf, fwscanf, swscanf, wscanf_s, fwscanf_s, swscanf_s
< c | io
C
Language Headers Type support Program utilities Variadic function support Error handling Dynamic memory management Date and time utilities Strings library Algorithms Numerics Input/output support Localization support Atomic operations (C11) Thread support (C11) Technical Specifications
File input/output
Functions
File access
fopenfopen_s(C11) freopenfreopen_s(C11) fwide(C95)
Direct input/output
Unformatted input/output
fgetc fgets fputc fputs getchar getsgets_s(until C11)(since C11) putchar puts ungetc
fgetwcgetwc(C95)(C95) fgetws(C95) fputwcputwc(C95)(C95) fputws(C95) getwchar(C95) putwchar(C95) ungetwc(C95)
Formatted input
scanffscanfsscanfscanf_sfscanf_ssscanf_s(C11)(C11)(C11) wscanffwscanfswscanfwscanf_sfwscanf_sswscanf_s(C95)(C95)(C95)(C11)(C11)(C11)
vscanfvfscanfvsscanfvscanf_svfscanf_svsscanf_s(C99)(C99)(C99)(C11)(C11)(C11) vwscanfvfwscanfvswscanfvwscanf_svfwscanf_svswscanf_s(C99)(C99)(C99)(C11)(C11)(C11)
Formatted output
printffprintfsprintfsnprintfprintf_sfprintf_ssprintf_ssnprintf_s(C99)(C11)(C11)(C11)(C11) wprintffwprintfswprintfwprintf_sfwprintf_sswprintf_ssnwprintf_s(C95)(C95)(C95)(C11)(C11)(C11)(C11)
vprintfvfprintfvsprintfvsnprintfvprintf_svfprintf_svsprintf_svsnprintf_s(C99)(C11)(C11)(C11)(C11) vwprintfvfwprintfvswprintfvwprintf_svfwprintf_svswprintf_svsnwprintf_s(C95)(C95)(C95)(C11)(C11)(C11)(C11)
File positioning
Error handling
Operations on files
Defined in header (1) int wscanf( const wchar_t *format, ... ); (since C95) (until C99) int wscanf( const wchar_t *restrict format, ... ); (since C99) (2) int fwscanf( FILE *stream, const wchar_t *format, ... ); (since C95) (until C99) int fwscanf( FILE *restrict stream, const wchar_t *restrict format, ... ); (since C99) (3) int swscanf( const wchar_t *buffer, const wchar_t *format, ... ); (since C95) (until C99) int swscanf( const wchar_t *restrict buffer, const wchar_t *restrict format, ... ); (since C99) int wscanf_s( const wchar_t *restrict format, ...); (4) (since C11) int fwscanf_s( FILE *restrict stream, const wchar_t *restrict format, ...); (5) (since C11) int swscanf_s( const wchar_t *restrict s, const wchar_t *restrict format, ...); (6) (since C11)
Reads data from the a variety of sources, interprets it according to format and stores the results into given locations.
1) Reads the data from stdin.
2) Reads the data from file stream stream.
3) Reads the data from null-terminated wide string buffer. Reaching the end of the string is equivalent to reaching the end-of-file condition for fwscanf
4-6) Same as (1-3), except that %c, %s, and %[ conversion specifiers each expect two arguments (the usual pointer and a value of type rsize_t indicating the size of the receiving array, which may be 1 when reading with a %lc into a single wide character) and except that the following errors are detected at runtime and call the currently installed constraint handler function:
• any of the arguments of pointer type is a null pointer
• format, stream, or buffer is a null pointer
• the number of characters that would be written by %c, %s, or %[, plus the terminating null character, would exceed the second (rsize_t) argument provided for each of those conversion specifiers
• optionally, any other detectable error, such as unknown conversion specifier
As all bounds-checked functions, wscanf_s, fwscanf_s, and swscanf_s are only guaranteed to be available if __STDC_LIB_EXT1__ is defined by the implementation and if the user defines __STDC_WANT_LIB_EXT1__ to the integer constant 1 before including <wchar.h>.
## Contents
### Parameters
stream - input file stream to read from
buffer - pointer to a null-terminated wide string to read from
format - pointer to a null-terminated wide string specifying how to read the input. The format string consists of
• non-whitespace wide characters except %: each such character in the format string consumes exactly one identical character from the input stream, or causes the function to fail if the next character on the stream does not compare equal.
• whitespace characters: any single whitespace character in the format string consumes all available consecutive whitespace characters from the input (determined as if by calling iswspace in a loop). Note that there is no difference between "\n", " ", "\t\t", or other whitespace in the format string.
• conversion specifications. Each conversion specification has the following format:
• introductory % character
• (optional) assignment-suppressing character *. If this option is present, the function does not assign the result of the conversion to any receiving argument.
• (optional) integer number (greater than zero) that specifies maximum field width, that is, the maximum number of characters that the function is allowed to consume when doing the conversion specified by the current conversion specification. Note that %s and %[ may lead to buffer overflow if the width is not provided.
• (optional) length modifier that specifies the size of the receiving argument, that is, the actual destination type. This affects the conversion accuracy and overflow rules. The default destination type is different for each conversion type (see table below).
• conversion format specifier
The following format specifiers are available:
Conversion
specifier
Explanation Argument type
length modifier hh
(C99)
h (none) l ll
(C99)
j
(C99)
z
(C99)
t
(C99)
L
% matches literal % N/A N/A N/A N/A N/A N/A N/A N/A N/A
c
matches a character or a sequence of characters
If a width specifier is used, matches exactly width wide characters (the argument must be a pointer to an array with sufficient room). Unlike %s and %[, does not append the null character to the array.
N/A N/A
char*
wchar_t*
N/A N/A N/A N/A N/A
s
matches a sequence of non-whitespace characters (a string)
If width specifier is used, matches up to width or until the first whitespace character, whichever appears first. Always stores a null character in addition to the characters matched (so the argument array must have room for at least width+1 characters)
[set]
matches a non-empty sequence of character from set of characters.
If the first character of the set is ^, then all characters not in the set are matched. If the set begins with ] or ^] then the ] character is also included into the set. It is implementation-defined whether the character - in the non-initial position in the scanset may be indicating a range, as in [0-9]. If width specifier is used, matches only up to width. Always stores a null character in addition to the characters matched (so the argument array must have room for at least width+1 characters)
d
matches a decimal integer.
The format of the number is the same as expected by wcstol() with the value 10 for the base argument
signed char* or unsigned char*
signed short* or unsigned short*
signed int* or unsigned int*
signed long* or unsigned long*
signed long long* or unsigned long long*
N/A
i
matches an integer.
The format of the number is the same as expected by wcstol() with the value 0 for the base argument (base is determined by the first characters parsed)
u
matches an unsigned decimal integer.
The format of the number is the same as expected by wcstoul() with the value 10 for the base argument.
o
matches an unsigned octal integer.
The format of the number is the same as expected by wcstoul() with the value 8 for the base argument
x, X
The format of the number is the same as expected by wcstoul() with the value 16 for the base argument
n
returns the number of characters read so far.
No input is consumed. Does not increment the assignment count. If the specifier has assignment-suppressing operator defined, the behavior is undefined
a, A(C99)
e, E
f, F
g, G
matches a floating-point number.
The format of the number is the same as expected by wcstof()
N/A N/A
float*
double*
N/A N/A N/A N/A
long double*
p
matches implementation defined character sequence defining a pointer.
printf family of functions should produce the same sequence using %p format specifier
N/A N/A
void**
N/A N/A N/A N/A N/A N/A
For every conversion specifier other than n, the longest sequence of input characters which does not exceed any specified field width and which either is exactly what the conversion specifier expects or is a prefix of a sequence it would expect, is what's consumed from the stream. The first character, if any, after this consumed sequence remains unread. If the consumed sequence has length zero or if the consumed sequence cannot be converted as specified above, the matching failure occurs unless end-of-file, an encoding error, or a read error prevented input from the stream, in which case it is an input failure.
All conversion specifiers other than [, c, and n consume and discard all leading whitespace characters (determined as if by calling iswspace) before attempting to parse the input. These consumed characters do not count towards the specified maximum field width.
If the length specifier l is not used, the conversion specifiers c, s, and [ perform wide-to-multibyte character conversion as if by calling wcrtomb() with an mbstate_t object initialized to zero before the first character is converted.
The conversion specifiers s and [ always store the null terminator in addition to the matched characters. The size of the destination array must be at least one greater than the specified field width. The use of %s or %[, without specifying the destination array size, is as unsafe as gets
The correct conversion specifications for the fixed-width integer types (int8_t, etc) are defined in the header <inttypes.h> (although SCNdMAX, SCNuMAX, etc is synonymous with %jd, %ju, etc).
There is a sequence point after the action of each conversion specifier; this permits storing multiple fields in the same "sink" variable.
When parsing an incomplete floating-point value that ends in the exponent with no digits, such as parsing "100er" with the conversion specifier %f, the sequence "100e" (the longest prefix of a possibly valid floating-point number) is consumed, resulting in a matching error (the consumed sequence cannot be converted to a floating-point number), with "r" remaining. Existing implementations do not follow this rule and roll back to consume only "100", leaving "er", e.g. glibc bug 1765
... - receiving arguments
### Return value
1-3) Number of receiving arguments successfully assigned, or EOF if read failure occurs before the first receiving argument was assigned.
4-6) Same as (1-3), except that EOF is also returned if there is a runtime constraint violation.
### References
• C11 standard (ISO/IEC 9899:2011):
• 7.29.2.2 The fwscanf function (p: 410-416)
• 7.29.2.4 The swscanf function (p: 417)
• 7.29.2.12 The wscanf function (p: 421)
• K.3.9.1.2 The fwscanf_s function (p: 628-629)
• K.3.9.1.5 The swscanf_s function (p: 631)
• K.3.9.1.14 The wscanf_s function (p: 638)
• C99 standard (ISO/IEC 9899:1999):
• 7.24.2.2 The fwscanf function (p: 356-362)
• 7.24.2.4 The swscanf function (p: 362)
• 7.24.2.12 The wscanf function (p: 366-367)
|
|
Find all School-related info fast with the new School-Specific MBA Forum
It is currently 03 Aug 2015, 12:26
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# m25,q1
Author Message
Intern
Joined: 06 Dec 2009
Posts: 4
Followers: 0
Kudos [?]: 4 [0], given: 0
m25,q1 [#permalink] 28 Dec 2009, 13:04
Can anybody help me with this question?
Is $$|x - 6| > 5$$?
1. $$x$$ is an integer
2. $$x^2 < 1$$
Statement (1) ALONE is sufficient, but Statement (2) ALONE is not sufficient
Statement (2) ALONE is sufficient, but Statement (1) ALONE is not sufficient
BOTH statements TOGETHER are sufficient, but NEITHER statement ALONE is sufficient
EACH statement ALONE is sufficient
Statements (1) and (2) TOGETHER are NOT sufficient
Math Expert
Joined: 02 Sep 2009
Posts: 28784
Followers: 4599
Kudos [?]: 47603 [1] , given: 7130
Re: m25,q1 [#permalink] 28 Dec 2009, 17:53
1
KUDOS
Expert's post
ppaulyni3 wrote:
Can anybody help me with this question?
Is |x - 6| > 5?
1. x is an integer
2. x^2 < 1
Let's work on the stem first. For which values of x, |x - 6| > 5 is true?
|x - 6| > 5
x<6 --> -x+6>5 --> x<1.
x>=6 --> x-6>5 --> x>11.
So for x from the ranges x<1 and x>11 the inequality |x - 6| > 5 holds true.
(1) x is an integer, clearly not sufficient. x can be 12 and the inequality holds true as we concluded OR x can be 5 and inequality doesn't hold true.
(2) x^2<1 --> -1<x<1, as all x-es from this range are in the range x<1, hence inequality |x - 6| > 5 holds true. Sufficient.
_________________
Intern
Joined: 17 Jan 2010
Posts: 1
Followers: 0
Kudos [?]: 1 [1] , given: 0
Re: m25,q1 [#permalink] 17 Jan 2010, 07:30
1
KUDOS
but in this case statement A contradicts to statement B. A states that x is an integer and B states that it is not.
Math Expert
Joined: 02 Sep 2009
Posts: 28784
Followers: 4599
Kudos [?]: 47603 [0], given: 7130
Re: m25,q1 [#permalink] 17 Jan 2010, 16:32
Expert's post
Tati wrote:
but in this case statement A contradicts to statement B. A states that x is an integer and B states that it is not.
Not so, taken together x can be zero which is an integer.
_________________
Intern
Joined: 05 May 2012
Posts: 2
Followers: 0
Kudos [?]: 0 [0], given: 0
Re: m25,q1 [#permalink] 09 May 2012, 21:46
Bunuel wrote:
Tati wrote:
but in this case statement A contradicts to statement B. A states that x is an integer and B states that it is not.
Not so, taken together x can be zero which is an integer.
If we need to indicate that x is an integer, shouldn't the answer be C?
Statement 1 alone: Insufficient.
Statement 2 alone:
-1<x<1
|-1-6| = 7. True.
However.
|1-6| = 5. False 5 is not larger than 5. Also insufficient.
Both statement together:
If say we take zero as an integer we need statement 1 to indicate that x is in fact an integer.
|0-6| = 6. True
Sufficient.
Math Expert
Joined: 02 Sep 2009
Posts: 28784
Followers: 4599
Kudos [?]: 47603 [0], given: 7130
Re: m25,q1 [#permalink] 10 May 2012, 00:18
Expert's post
leochanGmat wrote:
Bunuel wrote:
Tati wrote:
but in this case statement A contradicts to statement B. A states that x is an integer and B states that it is not.
Not so, taken together x can be zero which is an integer.
If we need to indicate that x is an integer, shouldn't the answer be C?
Statement 1 alone: Insufficient.
Statement 2 alone:
-1<x<1
|-1-6| = 7. True.
However.
|1-6| = 5. False 5 is not larger than 5. Also insufficient.
Both statement together:
If say we take zero as an integer we need statement 1 to indicate that x is in fact an integer.
|0-6| = 6. True
Sufficient.
No, when considering the second statement we don't need to know that x is an integer. The question asks: "is x<1 or x>11?" and (2) says that -1<x<1, so we can answer YES to the question. In your examples you can not consider x=-1 and x=1 since in the given range (-1<x<1) -1 and 1 are not inclusive.
Hope it's clear.
_________________
Intern
Joined: 05 May 2012
Posts: 2
Followers: 0
Kudos [?]: 0 [0], given: 0
Re: m25,q1 [#permalink] 10 May 2012, 10:55
Thank you for the clarification.
Say if statement 2 is larger than 11, but not smaller than 1. The statement will still be sufficient?
Re: m25,q1 [#permalink] 10 May 2012, 10:55
Similar topics Replies Last post
Similar
Topics:
If x and y are positive integers and m25 q1 4 15 Feb 2012, 05:48
Display posts from previous: Sort by
# m25,q1
Moderators: WoundedTiger, Bunuel
Powered by phpBB © phpBB Group and phpBB SEO Kindly note that the GMAT® test is a registered trademark of the Graduate Management Admission Council®, and this site has neither been reviewed nor endorsed by GMAC®.
|
|
Enhancement of impact ionization in Hubbard clusters by disorder and next-nearest-neighbor hopping
@article{Kauch2020EnhancementOI,
title={Enhancement of impact ionization in Hubbard clusters by disorder and next-nearest-neighbor hopping},
author={Anna Kauch and Paul Worm and Paul Prauhart and Michael Innerberger and Clemens Watzenb{\"o}ck and Karsten Held},
journal={Physical Review B},
year={2020}
}
• Published 31 July 2020
• Physics
• Physical Review B
We perform time-resolved exact diagonalization of the Hubbard model with time dependent hoppings on small clusters of up to $12$ sites. Here, the time dependence originates from a classic electromagnetic pulse, which mimics the impact of a photon. We investigate the behavior of the double occupation and spectral function after the pulse for different cluster geometries and on-site potentials. We find impact ionization in all studied geometries except for one-dimensional chains. Adding next…
2 Citations
Impact ionization and multiple photon absorption in the two-dimensional photoexcited Hubbard model
• Physics
Physical Review B
• 2022
We study the non-equilibrium response of a 4x3 Hubbard model at U=8 under the influence of a short electric field pulse, with the main focus on multiple photon excitations and on the change of double
Adaptive Time Propagation for Time-dependent Schrödinger equations
• Mathematics
International journal of applied and computational mathematics
• 2021
It is found that splitting methods are more efficient when the Hamiltonian naturally suggests a separation into kinetic and potential part, whereas Magnus-type integrators excel when the structure of the problem only allows to advance the time variable separately.
References
SHOWING 1-10 OF 29 REFERENCES
Electron-light interaction in nonequilibrium: exact diagonalization for time-dependent Hubbard Hamiltonians
• Physics
European physical journal plus
• 2020
The method is applied to calculate time evolution of double occupation and nonequilibrium spectral function of a photo-excited Mott-insulator and the results show that not only the double occupation increases due to creation of electron-hole pairs but also the Mott gap becomes partially filled.
Effective doublon and hole temperatures in the photo-doped dynamic Hubbard model
• Physics
Structural dynamics
• 2016
This work discusses how photodoping leads to doublon and hole populations with different effective temperatures, and analyzes the relaxation behavior as a function of the boson coupling and boson energy.
Time-dependent spectral properties of a photoexcited one-dimensional ionic Hubbard model: an exact diagonalization study
Motivated by the recent progress in time-resolved nonequilibrium spectroscopy in condensed matter, we study an optically excited one-dimensional ionic Hubbard model by exact diagonalization. The
Optimizing the role of impact ionization in conventional insulators
This work studies the case of hybridization of localized orbitals with more dispersive bands near the Fermi level, where the generated multiple carriers, which ultimately decay to the edges of the dispersion bands by means of IA processes, acquire lighter mass and this could allow their more efficient separation before recombination.
Role of impact ionization in the thermalization of photoexcited Mott insulators
• Physics
• 2014
We study the influence of the pulse energy and fluence on the thermalization of photodoped Mott insulators. If the Mott gap is smaller than the width of the Hubbard bands, the kinetic energy of
Electron correlations in narrow energy bands
• J. Hubbard
• Physics
Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences
• 1963
It is pointed out that one of the main effects of correlation phenomena in d- and f-bands is to give rise to behaviour characteristic of the atomic or Heitler-London model. To investigate this
Impact ionization processes in the steady state of a driven Mott insulating layer coupled to metallic leads
• Physics
• 2018
We study a simple model of photovoltaic energy harvesting across a Mott insulating gap consisting of a correlated layer connected to two metallic leads held at different chemical potentials. Upon
Filling of the mott-hubbard gap in the high temperature photoemission spectrum of (V0.972Cr0.028)2O3.
• Physics
Physical review letters
• 2004
Photoemission spectra of the paramagnetic insulating phase of (V0.972Cr0.028)2O3, taken in ultrahigh vacuum up to the unusually high temperature (T) of 800 K, reveal a property unique to the
Photovoltaic effect for narrow-gap Mott insulators
We discuss the photovoltaic effect at a p-n heterojunction, in which the illuminated side is a doped Mott insulator, using the simplest description of a Mott insulator within the Hubbard model. We
Hund excitations and the efficiency of Mott solar cells
• Materials Science
Physical Review B
• 2019
We study the dynamics of photoinduced charge carriers in semirealistic models of ${\mathrm{LaVO}}_{3}$ and ${\mathrm{YTiO}}_{3}$ polar heterostructures. It is shown that two types of impact
|
|
12 Answered Questions for the topic soft question
05/25/19
#### Very good linear algebra book.?
I plan to self-study linear algebra this summer. I am sorta already familiar with vectors, vector spaces and subspaces and I am really interested in everything about matrices (diagonalization,... more
05/13/19
#### Is basis change ever useful in practical linear algebra?
In layman's terms, why would anyone ever want to change basis? Do eigenvalues have to do with changing basis?
05/05/19
#### Is linear algebra laying the foundation for something important?
I'm majoring in mathematics and currently enrolled in Linear Algebra. It's very different, but I like it (I think). My question is this: What doors does this course open? (I saw a post about Linear... more
05/02/19
#### Must eigenvalues be numbers?
This is more a conceptual question than any other kind. As far as I know, one can define matrices over arbitrary fields, and so do linear algebra in different settings than in the typical... more
04/04/19
#### What does it take to get a job at a top 50 math program in the U.S.?
I'm a senior undergrad right at a small liberal arts college right now who is applying to math PhD programs in the U.S. I would like to eventually become a professor at a relatively good university... more
03/23/19
#### What are the main uses of Convex Functions?
Up till now I have just learned that the concept of convexity in functions of one variable is used to complete the graphs of functions, meaning to locate points of inflexion and see if the graph is... more
03/22/19
#### What are the main uses of Convex Functions?
Up till now I have just learned that the concept of convexity in functions of one variable is used to complete the graphs of functions, meaning to locate points of inflexion and see if the graph is... more
03/19/19
#### Fundamental equations in economics?
For the other sciences it's easy to point to the most important equations that ground the discipline. If I want to explain Economics to a physicist say, what are considered to be the most important... more
03/14/19
#### Who came up with the $\\varepsilon$-$\\delta$ definitions and the axioms in Real Analysis?
I've seen a lot of definitions of notions like boundary points, accumulation points, continuity, etc, and axioms for the set of the real numbers. But I have a hard time accepting these as "true"... more
03/14/19
#### Is this Enough for the Math Subject GRE?
I have been studying for the math GRE for quite sometime now. I have been going through the princeton review and old GRE tests, and in fact without much very difficulty at all. As a way of getting... more
03/14/19
#### Multivariable Calculus for GRE?
This is going to sound strange, but I am a third year math major who never took multivariable calculus (despite having taken courses on Galois and Lebesgue theory, etc). I plan to take the GRE next... more
## Still looking for help? Get the right answer, fast.
Get a free answer to a quick problem.
Most questions answered within 4 hours.
#### OR
Choose an expert and meet online. No packages or subscriptions, pay only for the time you need.
|
|
# Hom functor and left exactness
How can I prove that if
$$0\longrightarrow\mathrm{Hom}(M,A)\xrightarrow{\;\;i_*\;\;}\mathrm{Hom}(M,B)\xrightarrow{\;\;j_*\;\;}\mathrm{Hom}(M,C)$$ is left exact, then $$0\longrightarrow A\xrightarrow{\;\;i\;\;} B\xrightarrow{\;\;j\;\;} C$$ is left exact. I have seen proofs showing that if the second chain is left exact, then the first chain is left exact, but what about proving the converse without depending on the projective module concept.
-
Your notation is confusing, can your please state a little more clearly what you are asking? What are $i,j$? Are they functors? – user38268 Dec 28 '12 at 1:39
Use the Yoneda lemma. – Zhen Lin Dec 28 '12 at 1:42
Closely related math.stackexchange.com/questions/235372/… – user26857 Dec 28 '12 at 19:38
Suppose the sequence $$\tag{*} 0 \longrightarrow \textrm{Hom}(M, A) \longrightarrow \textrm{Hom}(M, B) \longrightarrow \textrm{Hom}(M, C)$$ is exact for all $M$. First, we must show that the sequence $$\tag{\dagger} 0 \longrightarrow A \longrightarrow B \longrightarrow C$$ is a (co)chain complex. So, take $M = A$ and consider $\textrm{id}_A$. We know $j_* \circ i_* = 0$, so in particular $j_* (i_* (\textrm{id}_A)) = j \circ i \circ \textrm{id}_A = 0$, so $j \circ i = 0$.
Next, we must show that $i$ is the kernel of $j$. Let $M$ be arbitrary and suppose $g : M \to B$ is a homomorphism such that $j \circ g = 0$. Then $j_* (g) = 0$, so exactness of $(*)$ means there is a unique homomorphism $f : M \to A$ such that $i_* (f) = g$, so $g = i \circ f$ for a unique $f : M \to A$. Hence $i : A \to B$ is indeed the kernel of $j : B \to C$, and therefore $(\dagger)$ is exact.
This argument works in any additive category. If we assume that we are in an abelian category then we only need to know that $(*)$ is exact for $M = A$, $M = \operatorname{Ker} i$, and $M = \operatorname{Ker} j$. Indeed, first suppose $k : M \to A$ is the kernel of $i : A \to B$. Then, $i \circ k = 0$; but there by exactness of $(*)$ that means $k = 0$, so $i$ is monic. Now suppose $g : M \to B$ is the kernel of $j : B \to C$. Then, $g \circ j = 0$, so by exactness of $(*)$ there is a unique $f : M \to A$ such that $i \circ f = g$; on the other hand, $j \circ i = 0$, so there is a unique $h : A \to M$ such that $g \circ h = i$; but $g$ is monic, so $g \circ h \circ f = g$ implies $h \circ f = \textrm{id}_M$, and $i$ is monic as well, so $i \circ f \circ h = i$ implies $f \circ h = \textrm{id}_A$, and therefore $(\dagger)$ is exact.
-
A couple of questions: How do you show $i$ is injective? and how does the existence of an unique $f:M\to A$ implies $kerj \subseteq Im i$? I think the same trick of considering a specific module in each case works (M = A for the injectivity and M = B for the second), but I don't feel comfortable doing it. Thanks. – hjhjhj57 Aug 28 '14 at 8:30
In a category of modules, monomorphisms = injective homomorphisms. – Zhen Lin Aug 28 '14 at 9:13
Great, that accounts for the injectivity. Could you elaborate a little bit more about the second question? – hjhjhj57 Aug 29 '14 at 5:03
It suffices to check $j \circ i = 0$. – Zhen Lin Aug 29 '14 at 7:20
That is in fact true. Take $M$ to be the free module on one generator. But my proof is more general and applies to all additive categories. – Zhen Lin Aug 30 '14 at 7:07
The result is false without conditions on $M$. For instance if the ring in question is $\mathbb{Z}$ and $M$ is $\mathbb{Z}/2$, then the first sequence is exact for $A = \mathbb{Q}$, $B = C = \mathbb{Z}$ and the zero maps, but the second sequence isn't.
-
However, if we know that the hypothesis holds for all $M$, then the claim is true. – Zhen Lin Dec 28 '12 at 5:04
This is a Hom functor and the module M is being fixed, my question is not about the validity of the statement , it's rather how to prove it , I have tried to prove it but I supposed that for any mo – JmD Dec 28 '12 at 5:22
|
|
# Revision history [back]
I was having the same issue with my Robot. Currently, the CostMap2D uses Bresenham2D algorithm for clearing the obstacles. As per your video, the blob gets left out in almost the same location and that I think is due to the round off error of Bresenham2D algorithm. It marks a particular cell as obstacle but the next time when the sensor reading changes instantaneously, it no longer traces it through the same path and hence, the blob seem remains in the costmap .
So, for solving this problem , I amended the function below in costmap_2d.h in the navigation package of ros. I have added 2 lines of code to the Bresenham2D algorithm. This basically clears the cell to the left and right of the grid through which the Line segment constructed by the algorithm passes. This results in loosing some resolution of the map but the solution works pretty well in real life application and I have had no problems after this hacky fix which is not too bad.
template<class ActionType>
inline void bresenham2D(ActionType at, unsigned int abs_da, unsigned int abs_db, int error_b, int offset_a,
int offset_b, unsigned int offset, unsigned int max_length)
{
unsigned int end = std::min(max_length, abs_da);
for (unsigned int i = 0; i < end; ++i)
{
at(offset);
offset += offset_a;
error_b += abs_db;
if ((unsigned int)error_b >= abs_da)
{
offset += offset_b;
error_b -= abs_da;
}
}
at(offset);
}
|
|
# Knowledge base: Adam Mickiewicz University
Back
## The new catalytic method for the synthesis of 1-boryl-4-metalloid(Si, Ge, B)butadienes and other derivatives of 1-borylbutadienes
#### Abstract
The main aim of the study presented within the doctoral dissertation was to establish the reactivity of ethynylmetalloides (ethynylgermananes, ethynylboranes, alkynes with different functional groups) in the co-dimerization reaction. The products obtained, owing to the presence of the same or different metalloids in extreme positions of their molecules were expected to be interesting reagents for organic synthesis. The reactivity of diethynylsubstituted alkynes, silyl and germylalkynes, which until now has not been tested in this type of reactions, was also checked. Another challenge of the study was functionalization of ethynylsiloxysilsesquioxanes, cage organosilicon compounds of hybrid inorganic-organic properties, which was expected to permit the introduction of a borylobutadienyl group into the structure of these compounds. Detailed mechanistic studies, based on the density functional theory methods (DFT) were performed and kinetic measurements were made for the model coupling reaction of terminal silil-, boryl- and germylsubstituted alkynes with vinylboronates, which permitted determination of the rate limiting step of the process. These studies confirmed the results of experimental work for the stoichiometric reaction of sililalkynes with vinylboronates, for which the mechanism has been developed. In addition, mechanistic studies were extended to a number of other reactions with equimolar ethynylgermananes, to contribute to full understanding of the chemistry of this process.
Record ID
UAMa36790e3903e46d890626e4c9c33df6c
Diploma type
Doctor of Philosophy
Author
Title in Polish
Title in English
The new catalytic method for the synthesis of 1-boryl-4-metalloid(Si, Ge, B)butadienes and other derivatives of 1-borylbutadienes
Language
pol (pl) Polish
Certifying Unit
Faculty of Chemistry (SNŚ/WC/FoC) [Not active]
Discipline
chemistry / (chemical sciences domain) / (physical sciences)
Status
Finished
Year of creation
2015
Start date
16-06-2015
Defense Date
16-06-2015
Title date
16-06-2015
Supervisor
URL
https://hdl.handle.net/10593/13279 Opening in a new tab
Keywords in English
Abstract in English
The main aim of the study presented within the doctoral dissertation was to establish the reactivity of ethynylmetalloides (ethynylgermananes, ethynylboranes, alkynes with different functional groups) in the co-dimerization reaction. The products obtained, owing to the presence of the same or different metalloids in extreme positions of their molecules were expected to be interesting reagents for organic synthesis. The reactivity of diethynylsubstituted alkynes, silyl and germylalkynes, which until now has not been tested in this type of reactions, was also checked. Another challenge of the study was functionalization of ethynylsiloxysilsesquioxanes, cage organosilicon compounds of hybrid inorganic-organic properties, which was expected to permit the introduction of a borylobutadienyl group into the structure of these compounds. Detailed mechanistic studies, based on the density functional theory methods (DFT) were performed and kinetic measurements were made for the model coupling reaction of terminal silil-, boryl- and germylsubstituted alkynes with vinylboronates, which permitted determination of the rate limiting step of the process. These studies confirmed the results of experimental work for the stoichiometric reaction of sililalkynes with vinylboronates, for which the mechanism has been developed. In addition, mechanistic studies were extended to a number of other reactions with equimolar ethynylgermananes, to contribute to full understanding of the chemistry of this process.
Thesis file
Request a WCAG compliant version
Uniform Resource Identifier
https://researchportal.amu.edu.pl/info/phd/UAMa36790e3903e46d890626e4c9c33df6c/
URN
urn:amu-prod:UAMa36790e3903e46d890626e4c9c33df6c
Confirmation
Are you sure?
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.