Expression
Class: NodeImageExpression
Create a custom expression which is applied to the elements of the input image. The user can specify an arbitrary number of input images, numbers and bits, which are assigned variable names starting with $$a$$. At least one image is mandatory. The variable names can then be used in the expression to be calculated.
As an example, to create a node that takes an input image $$a$$, adds it to a second input image $$b$$, and multiplies the result by $$e$$ raised to the power of an input number $$c$$, you would write:
(a+b)*e^c
Conditional statements are also possible. To replace all voxel values in input image $$a$$ that are above the input value $$b$$ or less than $$\displaystyle 0$$ with $$b$$, the following expression would be used:
if(a>b or a<0, b, a)
Special Variables:
e: The natural logarithmic base.
pi: The ratio of the circumference of a circle to its diameter.
vpx, vpy, vpz: The x, y, and z coordinate of the current voxel.
vix, viy, viz: The x, y, and z index of the current voxel.
vsx, vsy, vsz: The x, y, and z size of one voxel.
vcx, vcy, vcz: The x, y, and z voxel count of the image.
ipx, ipy, ipz: The x, y, and z position of the image.
Conditional statements and logical expressions:
if(expression, true, false): if statement with a conditional expression, and the returned value when this expression is true and false, respectively.
=, <>, <, >, <=, >=: Comparison operators.
and: Logical and.
or: Logical or.
xor: Logical exclusive or.
not: Logical not.
Some Functions:
abs(x), acos(x), asin(x), atan(x), atan2(x, y), ceiling(x), cos(x), cosh(x), floor(x), log(x), log(x, b), log10(x), max(x, y), min(x, y), rand(x), rande(x), randg(o, s), randn(m, sd), round(x), sign(x), sin(x), sinh(x), sqrt(x), tan(x), tanh(x), truncate(x)
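As a further illustration (an example added here; it assumes the voxel-position variables vpx, vpy, vpz and the image-position variables ipx, ipy, ipz behave as described above), input image $$a$$ could be masked to a sphere of radius $$b$$ (an input number) around the image position with:
if(sqrt((vpx-ipx)^2 + (vpy-ipy)^2 + (vpz-ipz)^2) <= b, a, 0)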
Inputs
a
The default image input.
Type: Image4DFloat, Required, Single
Outputs
Result
The resulting image.
Type: Image4DFloat
Settings
Display
Show Expression (Boolean): If checked, the expression will be displayed beneath the node name in the process window.
Node Name (Text): The display name of the node in the process window.
Expression
Expression (Text): The expression which should be calculated.
Inputs
Images (Integer): The number of input images.
Numbers (Integer): The number of input numbers.
Bits (Integer): The number of input bits.
Result
Set Infinity To (Number): The value that Infinity should be set to.
Set Undefined Numbers To (Number): The value that undefined numbers should be set to.
Resulting Image Name (Text): The name of the resulting image.
|
|
Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01c534fn974
Title: The Iwasawa Theory for Unitary Groups
Authors: Wan, Xin
Advisors: Skinner, Christopher
Contributors: Mathematics Department
Keywords: Bloch-Kato conjectures; Eisenstein series; Iwasawa theory; p-adic L-functions; Selmer groups
Subjects: Mathematics
Issue Date: 2012
Publisher: Princeton, NJ : Princeton University
Abstract: In this thesis we generalize earlier work of Skinner and Urban to construct ($p$-adic families of) nearly ordinary Klingen Eisenstein series for the unitary groups $U(r,s)\hookrightarrow U(r+1,s+1)$ and do some preliminary computations of their Fourier-Jacobi coefficients. As an application, using the case of the embedding $U(1,1)\hookrightarrow U(2,2)$ over totally real fields in which the odd prime $p$ splits completely, we prove that for a Hilbert modular form $f$ of parallel weight $2$, trivial character, and good ordinary reduction at all places dividing $p$, if the central critical $L$-value of $f$ is $0$ then the associated Bloch-Kato Selmer group has infinite order. We also state a consequence for the Tate module of elliptic curves over totally real fields that are known to be modular.
URI: http://arks.princeton.edu/ark:/88435/dsp01c534fn974
Alternate format: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog.
Type of Material: Academic dissertations (Ph.D.)
Language: en
Appears in Collections: Mathematics
Files in This Item:
File: WAN_princeton_0181D_10237.pdf (686.21 kB, Adobe PDF)
|
|
# Plaster of Paris should be stored in a moisture-proof container. Why?
Plaster of Paris should be stored in a moisture-proof container because Plaster of Paris, a powdery mass, absorbs water (moisture) from its surroundings and sets into a hard solid substance known as gypsum.
Plaster of Paris + water → Gypsum
$CaSO_{4}\cdot\frac{1}{2}H_{2}O + 1\frac{1}{2}H_{2}O \rightarrow CaSO_{4}\cdot 2H_{2}O$
[Extra information: pH stands for "power of hydrogen". The "H" is capitalized because it is the symbol of the element hydrogen.
Acidic and basic are two extremes that describe a chemical property. The pH scale measures how acidic or basic a substance is.
• The pH scale ranges from 0 to 14.
• A pH less than 7 is acidic in nature.
• A pH of 7 is neutral.
• A pH greater than 7 is basic in nature.
pH decreases with an increase in temperature. This does not mean that the water becomes more acidic at higher temperatures. A solution is considered acidic if there is an excess of hydrogen ions (H+) over hydroxide ions (OH-).]
|
|
Physics
# Concept Items
### 20.1 Magnetic Fields, Field Lines, and Force
1.
If you place a small needle between the north poles of two bar magnets, will the needle become magnetized?
1. Yes, the magnetic fields from the two north poles will point in the same directions.
2. Yes, the magnetic fields from the two north poles will point in opposite directions.
3. No, the magnetic fields from the two north poles will point in opposite directions.
4. No, the magnetic fields from the two north poles will point in the same directions.
2.
If you place a compass at the three points in the figure, at which point will the needle experience the greatest torque? Why?
1. The density of the magnetic field is minimized at B, so the magnetic compass needle will experience the greatest torque at B.
2. The density of the magnetic field is minimized at C, so the magnetic compass needle will experience the greatest torque at C.
3. The density of the magnetic field is maximized at B, so the magnetic compass needle will experience the greatest torque at B.
4. The density of the magnetic field is maximized at A, so the magnetic compass needle will experience the greatest torque at A.
3.
In which direction do the magnetic field lines point near the south pole of a magnet?
1. Outside the magnet the direction of magnetic field lines is towards the south pole of the magnet.
2. Outside the magnet the direction of magnetic field lines is away from the south pole of the magnet.
### 20.2 Motors, Generators, and Transformers
4.
Consider the angle between the area vector and the magnetic field in an electric motor. At what angles is the torque on the wire loop the greatest?
1. $0^\circ$ and $180^\circ$
2. $45^\circ$ and $135^\circ$
3. $90^\circ$ and $270^\circ$
4. $225^\circ$ and $315^\circ$
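A brief check (added here; the relation itself is the standard torque formula, not quoted from this page): the torque on a current-carrying loop is $\tau = NIAB\sin\theta$, where $\theta$ is the angle between the area vector and the field, so the torque is largest when $\sin\theta = \pm 1$, i.e. at $90^\circ$ and $270^\circ$.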
5.
What is a voltage transformer?
1. A transformer is a device that transforms current to voltage.
2. A transformer is a device that transforms voltages from one value to another.
3. A transformer is a device that transforms resistance of wire to voltage.
6.
Why is electric power transmitted at high voltage?
1. To increase the current for the transmission
2. To reduce energy loss during transmission
3. To increase resistance during transmission
4. To reduce resistance during transmission
### 20.3 Electromagnetic Induction
7.
Yes or no—Is an emf induced in the coil shown when it is stretched? If so, state why and give the direction of the induced current.
1. No, because induced current does not depend upon the area of the coil.
2. Yes, because area of the coil increases; the direction of the induced current is counterclockwise.
3. Yes, because area of the coil increases; the direction of the induced current is clockwise.
4. Yes, because the area of the coil does not change; the direction of the induced current is clockwise.
8.
What is Lenz’s law?
1. If induced current flows, its direction is such that it adds to the changes which induced it.
2. If induced current flows, its direction is such that it opposes the changes which induced it.
3. If induced current flows, its direction is always clockwise to the changes which induced it.
4. If induced current flows, its direction is always counterclockwise to the changes which induced it.
9.
Explain how magnetic flux can be zero when the magnetic field is not zero.
1. If angle between magnetic field and area vector is 0°, then its sine is also zero, which means that there is zero flux.
2. If angle between magnetic field and area vector is 45°, then its sine is also zero, which means that there is zero flux.
3. If angle between magnetic field and area vector is 60°, then its cosine is also zero, which means that there is zero flux.
4. If the angle between magnetic field and area vector is 90°, then its cosine is also zero, which means that there is zero flux.
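A brief check (added here; the relation is the standard flux formula, not quoted from this page): the flux through a flat loop is $\Phi = BA\cos\theta$, so $\Phi = 0$ whenever the angle between the field and the area vector is $90^\circ$, even though $B \neq 0$.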
|
|
## GATE CE 2008
## GATE CE
The product of matrices $${\left( {PQ} \right)^{ - 1}}P$$ is
The eigenvalues of the matrix $$\left[ P \right] = \left[ {\matrix{ 4 & 5 ...
The following system of equations $$x+y+z=3$$, $$x+2y+3z=4$$, $$x+4y+kz=6$$ ...
Three values of $$x$$ and $$y$$ are to be fitted in a straight line in the form ...
If the interval of integration is divided into two equal intervals of width $$1 ...
The Newton-Raphson iteration $${x_{n + 1}} = {1 \over 2}\left( {{x_n} + {R \over ...
A wastewater sample contains $$10^{-5.6}$$ mol/l of OH$$^{-}$$ ions at $$25^...
The type of surveying in which the curvature of the earth is taken into account ...
An outlet irrigates an area of 20 ha. The discharge (lit/sec) required at this o...
The base width of an elementary profile of gravity dam of height H is b. The spe...
A reinforced concrete structure has to be constructed along a sea coast. The min...
Un-factored maximum bending moments at a section of a reinforced concrete beam r...
A reinforced concrete beam of rectangular cross section of breadth $$230\,mm$$ ...
A reinforced concrete beam of rectangular cross section of breadth $$230$$ $$mm...
A reinforced concrete column contains longitudinal steel equal to $$1$$ percent ...
In the design of a reinforced concrete beam the requirement for bond is not gett...
A pre-tensioned concrete member of section $$200\,\,mm \times 250\,\,mm$$ contai...
Rivets and bolts subjected to both shear stress $$\left( {{\tau _{vf}},\,cal} \r...
The maximum shear stress in a solid shaft of circular cross-section having diame...
The stepped cantilever is subjected to moments, $$M$$ as shown in the figure bel...
Beam $$GHI$$ is supported by these pontoons as shown in the figure below. The ho...
Beam $$GHI$$ is supported by these pontoons as shown in the figure below. The ho...
A thin walled cylindrical pressure vessel having a radius of $$0.5$$ $$m$$ and w...
A mild steel specimen is under uniaxial tensile stress. Young's modulus and yiel...
Cross-section of a column consisting of two steel strips, each of thickness $$t...
A rigid bar $$GH$$ of length $$L$$ is supported by a hinge and a spring of stiff...
The degree of static indeterminacy of the rigid frame having two internal hinges...
The span $$(s)$$ to be loaded uniformly for maximum positive (upward) reaction at...
The members $$EJ$$ and $$IJ$$ of a steel truss shown in the figure below are sub...
The shape of the cross-section, which has the largest shape factor, is
A continuous beam is loaded as shown in the figure below. Assuming a plastic mom...
|
|
# Give Scientific Reason:We Cannot Clearly See an Object Kept at a Distance Less than 25 Cm from the Eye. - Science and Technology 1
Give scientific reason:
We cannot clearly see an object kept at a distance less than 25 cm from the eye.
|
|
# Article
Keywords:
graph homomorphism; homomorphism duality; rooted oriented path
Summary:
Let $(H,r)$ be a fixed rooted digraph. The $(H,r)$-coloring problem is the problem of deciding for which rooted digraphs $(G,s)$ there is a homomorphism $f:G\to H$ which maps the vertex $s$ to the vertex $r$. Let $(H,r)$ be a rooted oriented path. In this case we characterize the nonexistence of such a homomorphism by the existence of a rooted oriented cycle $(C,q)$, which is homomorphic to $(G,s)$ but not homomorphic to $(H,r)$. Such a property of the digraph $(H,r)$ is called {\it rooted cycle duality } or $*$-{\it cycle duality}. This extends the analogical result for unrooted oriented paths given in [6]. We also introduce the notion of {\it comprimed tree duality}. We show that comprimed tree duality of a rooted digraph $(H,r)$ implies a polynomial algorithm for the $(H,r)$-coloring problem.
References:
[1] Gutjahr W., Welzl E., Woeginger G.: Polynomial graph colourings. Discrete Appl. Math. 35 (1992), 29-46. MR 1138082
[2] Hell P., Nešetřil J.: On the complexity of $H$-colouring. J. Combin. Theory B 48 (1990), 92-110. MR 1047555
[3] Hell P., Nešetřil J., Zhu X.: Duality and polynomial testing of tree homomorphisms. Trans. Amer. Math. Soc. 348.4 (1996), 1281-1297. MR 1333391
[4] Hell P., Nešetřil J., Zhu X.: Duality of graph homomorphisms. Combinatorics, Paul Erdös is Eighty, Vol. 2, Bolyai Society Mathematical Studies, Budapest, 1994, pp.271-282. MR 1395863
[5] Hell P., Zhou H., Zhu X.: Homomorphisms to oriented cycles. Combinatorica 13 (1993), 421-433. MR 1262918 | Zbl 0794.05037
[6] Hell P., Zhu X.: Homomorphisms to oriented paths. Discrete Math. 132 (1994), 107-114. MR 1297376 | Zbl 0819.05030
[7] Hell P., Zhu X.: The existence of homomorphisms to oriented cycles. SIAM J. Discrete Math. 8 (1995), 208-222. MR 1329507 | Zbl 0831.05059
[8] Nešetřil J., Zhu X.: On bounded tree width duality of graphs. J. Graph Theory 23.2 (1996), 151-162. MR 1408343
[9] Špičková-Smolíková P.: Homomorfismové duality orientovaných grafů (in Czech). Diploma Thesis, Charles University, 1997.
|
|
# Find the spectrum of the linear operator $T: \ell^2 \to \ell^2$ defined by $Tx=(\theta x_{n-1} +(1-\theta)x_{n+1})_{n\in \mathbb{Z}}$
Let $\ell^2 =\ell^2(\mathbb{Z})$. Choose $\theta \in ]0,1[$ and set:
$$Tx=(\theta x_{n-1} +(1-\theta)x_{n+1})_{n\in \mathbb{Z}}$$
for each $x=(x_n)_{n\in \mathbb{Z}}\in \ell^2$ (thus $T$ is a convex combination of the right and left shift operators).
It is easy to prove that, for every $\theta$, $T$ is a bounded linear operator of $\ell^2$ into itself, that $\lVert T\rVert =1$ and that $T$ is self-adjoint iff $\theta =\frac{1}{2}$. Moreover $T$ is not compact: in fact, if $e^m:=(\delta_n^m)$ (so $e^m$ is a vector of the canonical basis of $\ell^2$), one has:
$$|Te^m -Te^p|^2=\begin{cases} 0 &\text{, if } p=m \\ \theta^2 +(1-\theta)^2+1 &\text{, if } m=p+2 \text{ or } p=m+2 \\ 2\theta^2+2(1-\theta)^2 &\text{, otherwise} \end{cases} \; ,$$
thus $|Te^m-Te^p|^2> \theta^2+(1-\theta)^2>0$ for $m\neq p$; therefore the sequence $\{ Te^m\}_{m\in \mathbb{N}}$ does not contain any Cauchy's subsequence.
The problem is:
I am not able to find the spectrum of $T$.
About the eigenvalues, the only thing I know for sure is that $1$ is not in the point spectrum of $T$ for any value of $\theta$: in fact if $1$ were in the point spectrum $\sigma_P(T)$, then the eigenvectors would satisfy the linear recurrence:
$$x_n=\theta x_{n-1}+(1-\theta) x_{n+1} \; ,$$
hence they have to be sequences of the type:
$$x_n=A \left( \frac{\theta}{1-\theta}\right)^n +B$$
($A,B$ suitable constants); but a sequence like this doesn't belong to $\ell^2$ except in the trivial case $A=B=0$, which however doesn't give a valid eigenvector. Therefore $1\notin \sigma_P(T)$.
But now, what about other eigenvalues? And what about the residual and continuous spectra of $T$?
Any hint is welcome.
-
You can use your argument to show that the point spectrum is empty. If $\lambda$ is an eigenvalue, the eigenvectors satisfy $\lambda x_n=\theta x_{n-1}+(1-\theta)x_{n+1}$, and thus are of the form $x_n=Ar_1^n+Br_2^n$ where $r_1,r_2$ are the solutions of $\lambda r=\theta+(1-\theta)r^2$. No such sequence is in $\ell^2(\mathbb{Z})$ unless $A=B=0$. – Julián Aguirre Mar 27 '11 at 9:38
question on notation, does your statement $\theta \in ]0,1[$ mean that the set is non-inclusive? – rcollyer Apr 18 '11 at 17:43
To determine the spectrum of $T$, let us first determine the one of the right shift $\tau$. Since $||\tau||=1$, $\mathrm{Sp}(\tau) \subset \bar{B}(0,1)$. But the same goes for the left shift $\tau^{-1}$, so $\mathrm{Sp}(\tau) \subset C(0,1)$. It is actually equal to $C(0,1)$: $(\ldots,0,1,\lambda,\lambda^2,\ldots,\lambda^n,0,\ldots)$ is an "almost eigenvector".
For every $c \in \mathbb{C}^{\times}$, $(c \theta \mathrm{Id} - \tau^{-1})(c (1-\theta) \mathrm{Id} - \tau) = (1+c^2 \theta (1-\theta)) \mathrm{Id} - cT$, so $f(c)=\frac{1+c^2 \theta (1-\theta)}{c} \in \mathrm{Sp} (T)$ iff $c \theta \in \mathrm{Sp}(\tau^{-1})$ or $c (1- \theta) \in \mathrm{Sp}(\tau)$, i.e. iff $|c|=\theta^{-1}$ or $|c|=(1-\theta)^{-1}$.
Now $f(\mathbb{C}^{\times})=\mathbb{C}$, and $f(\theta^{-1} (1-\theta)^{-1} c^{-1})=f(c)$, so $\mathrm{Sp}(T)= \left\{ f(\theta^{-1} e^{i \alpha}) : \alpha \in \mathbb{R} \right\} = \left\{ \cos \alpha + i (1-2 \theta) \sin \alpha : \alpha \in \mathbb{R} \right\}$, which is an ellipse (degenerating to the segment $[-1,1]$ when $\theta=1/2$).
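To make the last equality explicit (a short verification added for clarity, using $f(c)=\tfrac{1}{c}+c\,\theta(1-\theta)$ from above):
$$f(\theta^{-1}e^{i\alpha}) = \theta e^{-i\alpha} + (1-\theta)e^{i\alpha} = \cos\alpha + i\,(1-2\theta)\sin\alpha .$$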
EDIT: It is easy to check that for all $\lambda$, $\lambda \mathrm{Id} - \tau$ is injective and has dense range, and the same is true for $\lambda \mathrm{Id} - T$; hence the point and residual spectra of $T$ are empty, and the ellipse above is entirely continuous spectrum.
I do really thank you, but I'm not too much into your notations: can you explain $C(0,1)$ and $\mathbb{C}^\times$? I guess $C(0,1)=\{ z|\ |z|=1\}$ and $\mathbb{C}^\times =\mathbb{C}\setminus \{ 0\}$, is it correct? – Pacciu Mar 27 '11 at 18:58
Yes, and $\bar{B}(0,1) = \left\{ z \ |\ |z| \leq 1 \right\}$. – Plop Mar 28 '11 at 19:28
|
|
# Reena has pens and pencils which together are 40 in number.
Question:
Reena has pens and pencils which together are 40 in number. If she has 5 more pencils and 5 fewer pens, the number of pencils would become 4 times the number of pens. Find the original number of pens and pencils.
Solution:
Given:
(i) Total numbers of pens and pencils = 40.
(ii) If she has 5 more pencils and 5 fewer pens, the number of pencils would be 4 times the number of pens.
To find: Original number of pens and pencils.
Suppose the original number of pencils = x
And the original number of pens = y
According to the given conditions, we have:
$x+y=40$
$x+y-40=0$...(1)
$5+x=4(y-5)$
$5+x=4 y-20$
$x-4 y+5+20=0$
$x-4 y+25=0$....(2)
Thus we got the following system of linear equations
$x+y-40=0 \quad \ldots(1)$
$x-4y+25=0 \quad \ldots(2)$
Substituting the value of y from equation 1 in equation 2 we get
$x-4(40-x)+25=0 \quad[y=(40-x)$ from equation 1$]$
$x-160+4 x+25=0$
$5 x-135=0$
$x=\frac{135}{5}$
$x=27$
Substituting the value of $x$ in equation 1, we get
$27+y=40$
$y=40-27$
$y=13$
Hence we get the result: the number of pencils is $x=27$ and the number of pens is $y=13$.
|
|
Wrox Programmer Forums Generating Pi to more than 14 decimal places
October 27th, 2010, 07:44 PM
Friend of Wrox Join Date: Jun 2008 Location: Snohomish, WA, USA Posts: 1,652 Thanks: 3 Thanked 141 Times in 140 Posts
Yep. You got it.
But you know, he gives you the C++ code for it. There's no reason you can't use Visual C++, compile that code into a library, and then invoke that library function from VB.
EDIT: No, I take that back. He said he uses a #define to set the number of places of accuracy, so you'd have to change even his code if you wanted to convert it into a callable library with arbitrary precision.
Last edited by Old Pedant; October 27th, 2010 at 07:46 PM.
October 27th, 2010, 07:44 PM
Friend of Wrox Join Date: Jun 2008 Location: Snohomish, WA, USA Posts: 1,652 Thanks: 3 Thanked 141 Times in 140 Posts
I hope you noted that he said it took 15 hours on his PC to get 1,000,000 places.
October 27th, 2010, 07:55 PM
Authorized User Join Date: Oct 2010 Location: "Great" Britain Posts: 29 Thanks: 23 Thanked 0 Times in 0 Posts
Quote:
Originally Posted by Old Pedant I hope you noted that he said it took 15 hours on his PC to get 1,000,000 places.
Aye, but that's really not much when you consider the vast amount of calculations involved. I only really wanted to find enough to fill the first 25 lines of the console window, even with the algorithm I was using that should take only a matter of seconds... I think I shall abandon this project and leave it for when I have a better understanding of programming.
In the mean time I think I may make another Prime Generator, this time using the Sieve of Atkin. That should be quite simple right?
PS: If I want to create a .txt file to save the prime numbers in what would be the best way of going about this?
__________________
"Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius - and a lot of courage - to move in the opposite direction."
- Albert Einstein
October 27th, 2010, 08:08 PM
Friend of Wrox Join Date: Jun 2008 Location: Snohomish, WA, USA Posts: 1,652 Thanks: 3 Thanked 141 Times in 140 Posts
http://msdn.microsoft.com/en-us/libr...eamwriter.aspx
Code:
' Requires Imports System.IO at the top of the file
Dim outf As New StreamWriter("c:\full\path\to\yourfile.txt")
outf.WriteLine("the next prime number is " & n)
' ... repeat the WriteLine for each prime ...
outf.Close()
Or variations on that theme. Learn to use the MSDN docs. They really are amazingly complete.
October 27th, 2010, 08:10 PM
Friend of Wrox Join Date: Jun 2008 Location: Snohomish, WA, USA Posts: 1,652 Thanks: 3 Thanked 141 Times in 140 Posts
Never used Sieve of Atkin.
Used to use the Sieve of Eratosthenes as a benchmarking program. Can't tell you how many languages I wrote that in. But in those days (1970s, early 80s), we were happy to get the primes up to about 16,000.
October 27th, 2010, 08:23 PM
Authorized User Join Date: Oct 2010 Location: "Great" Britain Posts: 29 Thanks: 23 Thanked 0 Times in 0 Posts
Cool, I use Prime95 for stability testing overclocks all the time, I guess it must be generating some pretty huge primes!
Do you think Sieve of Eratosthenes would be a better option then? It is simpler to implement.
October 27th, 2010, 08:41 PM
Friend of Wrox Join Date: Jun 2008 Location: Snohomish, WA, USA Posts: 1,652 Thanks: 3 Thanked 141 Times in 140 Posts
The big limitation back in the day was the amount of memory needed. You need an array element for each number from 1 to N. (Well, we "cheated" by only looking at odd numbers, so we needed half that number of elements, but you get the idea.)
In the BASICs of that day, where many implemented only a single numeric data type which was, of course, floating point, that meant that you were using 4 or 6 or 8 bytes per array element. In a 64KB machine, you usually only had a MAX of 32KB left for variables, so even 32K divided by 4 meant a top end of 8000 array elements.
In a modern machine, you can surely devote a few hundred megabytes to the array *AND* with VB.NET (for example) your array elements can each be only 1 byte long (that is, use an array of BYTE data type). So sure...it's thus trivial to implement and you can get up into the range of 1,000,000,000 as a max prime (esp. if you pull the same trick: Don't use an array element for the even numbers).
Actually, VB.NET is fast enough that if you wanted to get a bit more clever you could extend the range by a factor of 8 by using only 1 BIT per odd number. A little more coding work, but not hard.
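For reference, a minimal VB.NET sketch of the odd-only sieve idea described above (not code from this thread; the limit, the index mapping and the output file name are illustrative assumptions):
Code:
' Sketch: Sieve of Eratosthenes over odd numbers only, one Boolean per odd candidate.
Imports System.IO

Module OddSieve
    Sub Main()
        Dim limit As Integer = 1000000
        ' index i represents the odd number 2*i + 3 (so 3, 5, 7, ...)
        Dim count As Integer = (limit - 3) \ 2
        Dim composite(count) As Boolean
        Using outf As New StreamWriter("primes.txt")
            outf.WriteLine(2) ' the only even prime
            For i As Integer = 0 To count
                If Not composite(i) Then
                    Dim p As Integer = 2 * i + 3
                    outf.WriteLine(p)
                    ' mark odd multiples of p, starting at p*p
                    Dim j As Long = CLng(p) * p
                    While j <= limit
                        composite(CInt((j - 3) \ 2)) = True
                        j += 2 * p
                    End While
                End If
            Next
        End Using
    End Sub
End Module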
October 28th, 2010, 02:23 PM
Authorized User Join Date: Oct 2010 Location: "Great" Britain Posts: 29 Thanks: 23 Thanked 0 Times in 0 Posts
Could you modify my code so that it will display 28 decimal places of π please? That would be awesome.
Also how would I write a routine which sorts an array of strings into alphabetical order? It's really bugging me!
October 28th, 2010, 02:31 PM
Friend of Wrox Join Date: Jun 2008 Location: Snohomish, WA, USA Posts: 1,652 Thanks: 3 Thanked 141 Times in 140 Posts
Quote:
Originally Posted by SamC Could you modify my code so that it will display 28 decimal places of π please? That would be awesome.
Tell the truth, I've never used DECIMAL with VB before. But I would imagine all you have to do is change all the variables to DECIMAL. Oh, and get rid of Math.Pow(). Just have to do that "by hand" with multiplies, instead.
Quote:
Also how would I write a routine which sorts an array of strings into alphabetical order? It's really bugging me!
Remember what I said about learning to use MSDN??
http://msdn.microsoft.com/en-us/libr...tem.array.aspx
http://msdn.microsoft.com/en-us/library/6tf1f0bc.aspx
You don't HAVE to "write a routine". The .NET framework has it built in.
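As a side note on the Decimal suggestion above, here is a hedged sketch of replacing Math.Pow with repeated multiplication (the helper name is made up; also note Decimal carries only 28-29 significant digits, so 28 decimal places of pi sits right at its limit):
Code:
' Hypothetical helper: integer powers for Decimal, avoiding Math.Pow (which works in Double).
Function DecPow(ByVal baseValue As Decimal, ByVal exponent As Integer) As Decimal
    Dim result As Decimal = 1D
    For i As Integer = 1 To exponent
        result *= baseValue
    Next
    Return result
End Function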
October 28th, 2010, 02:38 PM
Authorized User Join Date: Oct 2010 Location: "Great" Britain Posts: 29 Thanks: 23 Thanked 0 Times in 0 Posts
I was aware of that function, I wanted to write the routine myself because my tutor had challenged me to do so over half-term. I could get it to work easily for numerical values, but it didn't work for Strings...
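One possible hand-rolled approach (an illustrative insertion sort, not posted in this thread; using String.Compare with ignoreCase:=True is an assumption about the desired ordering):
Code:
' Sketch: sort an array of strings alphabetically without Array.Sort.
Sub SortStrings(ByVal items() As String)
    For i As Integer = 1 To items.Length - 1
        Dim current As String = items(i)
        Dim j As Integer = i - 1
        ' shift larger elements to the right until the insertion point is found
        While j >= 0 AndAlso String.Compare(items(j), current, True) > 0
            items(j + 1) = items(j)
            j -= 1
        End While
        items(j + 1) = current
    Next
End Sub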
|
|
# what is econometrics
For example, consider Okun's law, which relates GDP growth to the unemployment rate. With a blend of statistical inference, economic theory, and basic mathematical principles, econometrics for finance helps describe modern economic systems. This expected growth is called the probability and … {\displaystyle \beta _{1}} Here is how others describe econometrics. is a random variable representing all other factors that may have direct influence on wage. [9] Estimating a linear regression on two variables can be visualised as fitting a line through data points representing paired values of the independent and dependent variables. Samrit Upadhayay. ), a given value of GDP growth multiplied by a slope coefficient An introductory economics textbook describes econometrics as allowing economists "to sift through mountains of data to extract simple relationships". . Econometrics is basically the study of economic data and statistical models with data to create models or empirical hypotheses about the economic future. Econometrics definition is - the application of statistical methods to the study of economic data and problems. Econometrics forecasting involves making predictions based on economic factors. He teaches at the Richard Ivey School of Business and serves as a research fellow at the Lawrence National Centre for Policy and Management. Quantitative means that these answers are based on statistical and mathematical models. A few definitions are given below: The method of econometric research aims, essentially, at a conjunction of economic theory and actual measurements, using the theory and technique of statistical inference as a bridge pier.Trygve Haavelmo (1944) Econometrics may be defined as the quantitative analysis of actual economic phenomena based on … β University. and 1 #Darijungaa. Then Econometrics by Erasmus University Rotterdam is the right course for you, as you learn how to translate data into models to make forecasts and to support decision making. [23], Empirical statistical testing of economic theories, For an overview of a linear implementation of this framework, see, Edward E. Leamer (2008). The econometric model can either be a single-equation regression model or may consist a system of simultaneous equations.In most commodities, the single-equation regression model serves the purpose. More precisely, it is "the quantitative analysis of actual economic phenomenabased on the concurrent development of theory and observation, related by appropriate methods of inference". 2017/2018 Econometrics students are destined to become millionaires, don’t have any social skills and are very bright. More specifically, it quantitatively analyzes economic phenomena in relation to current theories and observations in order to make concise assumptions about large data sets. From there, we can build a model that may predict the expected growth for each new job added to a local community. However, since econometricians cannot typically use controlled experiments, their natural experiments with data sets lead to a variety of observational data issues including variable bias and poor causal analysis that leads to misrepresenting correlations between dependent and independent variables. Theoretical. [10][11] Econometricians try to find estimators that have desirable statistical properties including unbiasedness, efficiency, and consistency. 
β Another technique is to include in the equation additional set of measured covariates which are not instrumental variables, yet render What is econometrics? [10][11] Econometricians try to find estimators that have desirable statistical properties including unbiasedness, efficiency, and consistency. A theoretical econometrician investigates the properties of current statistical procedures and tests for estimating unknowns in the model. ε The goal of Econometrics and Operations Research (EOR) is to provide quantitative answers to economic questions. If the estimate of We divide econometrics into two components: 1. Regarding the plurality of models compatible with observational data-sets, Edward Leamer urged that "professionals ... properly withhold belief until an inference can be shown to be adequately insensitive to the choice of assumptions". Applied. This video tutorial explains what is econometrics? [2] An introductory economics textbook describes econometrics as allowing economists "to sift through mountains of data to extract simple relationships". β Certain features of economic data make it challenging for economists to quantify economic models. 1 This relationship is represented in a linear regression where the change in unemployment rate ( Assignment. Explain the methodologies. {\displaystyle \epsilon } They attempt to predict future trends by employing it. Unlike researchers in the physical sciences, econometricians are rarely able to conduct controlled experiments in which only one variable is changed and the response of the subject to that change is measured. [9] Econometric theory uses statistical theory and mathematical statistics to evaluate and develop econometric methods. A computer is distinguished from a calculating machine, such as an electronic calculator, by being able to store a computer program (so that it can repeat its operations and make logical econometrics and economic Data 1 1.1 What is Econometrics? These methods are analogous to methods used in other areas of science, such as the field of system identification in systems analysis and control theory. The most obvious way to control for birthplace is to include a measure of the effect of birthplace in the equation above. This means that if GDP growth increased by one percentage point, the unemployment rate would be predicted to drop by 0.83 - 1.77 *1 points. The model could then be tested for statistical significance as to whether an increase in growth is associated with a decrease in the unemployment, as hypothesized. Unemployment Econometrics is a set of tools we can use to confront theory with real-world data. 0 Aris Spanos (1986) ε {\displaystyle \beta _{1}} Monash University defines econometrics as "a set of quantitative techniques that are useful for making economic decisions" while The Economist's "Dictionary of Economics" defines it as "the setting up of mathematical models describing mathematical models describing economic relationships (such as that the quantity demanded of a good is dependent positively on income and negatively on price), testing the validity of such hypotheses and estimating the parameters in order to obtain a measure of the strengths of the influences of the different independent variables.". 
The idea is to consider economic models … Applied econometrics, then, uses these theoretical practices to observe real-world data and formulate new economic theories, forecast future economic trends, and develop new econometric models which establish a basis for estimating future economic events as they relate to the data set observed. β • www.econometrie.nl Site van het Landelijk Orgaan der Econometrische Studieverenigingen (LOES) voor middelbare scholieren die zich aan het oriënteren zijn op hun studiekeuze na het vwo. 0 Professor of Business, Economics, and Public Policy, The Basic Tool of Econometrics: Multiple Linear Regression Model, Using Econometric Modeling to Evaluate Data, Definition and Use of Instrumental Variables in Econometrics, A Guide to the Term "Reduced Form" in Econometrics, An Introduction to Akaike's Information Criterion (AIC), What Is an Experiment? Exclusion of birthplace, together with the assumption that The study of econometrics tends to be especially strong on statistics, with statistics being a powerful tool when it comes to analysis, and it requires comfort in the fields of both economics and mathematics. {\displaystyle \beta _{1}} Such information is sometimes used by governments to set economic policy and by private business to aid decisions on prices, inventory, and production. Analysis of data from an observational study is guided by the study protocol, although exploratory data analysis may be useful for generating new hypotheses. That raises the questi… [3] The first known use of the term "econometrics" (in cognate form) was by Polish economist Paweł Ciompa in 1910. Ordinary least squares (OLS) is often used for estimation since it provides the BLUE or "best linear unbiased estimator" (where "best" means most efficient, unbiased estimator) given the Gauss-Markov assumptions. So it’s about time we asked Econometrists what’s true in all of this. [9] In modern econometrics, other statistical tools are frequently used, but linear regression is still the most frequently used starting point for an analysis. Instead, the econometrician observes the years of education of and the wages paid to people who differ along many dimensions. What is econometrics ? Like other forms of statistical analysis, badly specified econometric models may show a spurious relationship where two variables are correlated but causally unrelated. {\displaystyle \beta _{0}} A basic tool for econometrics is the multiple linear regression model. Formal definition. ECONOMETRICS BRUCE E. HANSEN ©2000, 20201 University of Wisconsin Department of Economics This Revision: November 30, 2020 Comments Welcome 1This manuscript may be printed and reproduced for individual or instructional use, but may not be printed for commercial purposes. β Visually, the multiple linear regression model can be viewed as a straight line through data points that represent paired values of the dependent and independent variables. It enables you to take on complex issues like: How will the price of energy and commodities evolve over the coming years? Samuelson, Koopmans and Stone (1954) Econometrics is concerned with the systematic study of economic phenomena using observed data. [15] Regression methods are important in econometrics because economists typically cannot use controlled experiments. It typically applies real-world information to statistical tests and then analyzes and compares the data against the hypothesis or theories being tested in a specific context. 
and Econometrics is the application of statistical methods to economic data in order to give empirical content to economic relationships. EOR is broad in scope but is essentially a combination of mathematics, statistics, economics and computer science. : The unknown parameters {\displaystyle \beta _{1}} Econometrics takes mathematical and statistical models proposed in economic theory and tests them. 1 1.2 Steps in Empirical Economic Analysis 2 1.3 the Structure of Economic data 5 Cross-Sectional Data 5 Time Series Data 8 Pooled Cross Sections 9 Panel or Longitudinal Data 10 A Comment on Data Structures 11 1.4 Causality and the notion of Ceteris Paribus in Econometric Analysis 12 [23] In such cases, economists rely on observational studies, often using data sets with many strongly associated covariates, resulting in enormous numbers of models with similar explanatory ability but different covariates and regression estimates. Academic year. Econometric theory uses statistical theory and mathematical statistics to evaluate and develop econometric methods. We will start with a set of sample data and make an estimate on the impact of new jobs on an economy. under specific assumptions about the random variable [21][22], In some cases, economic variables cannot be experimentally manipulated as treatments randomly assigned to subjects. Econometrics may use standard statistical models to study economic questions, but most often they are with observational data, rather than in controlled experiments. If the researcher could randomly assign people to different levels of education, the data set thus generated would allow estimation of the effect of changes in years of education on wages. {\displaystyle \varepsilon } When these assumptions are violated or other statistical properties are desired, other estimation techniques such as maximum likelihood estimation, generalized method of moments, or generalized least squares are used. A few common types of econometrics forecasting include models, decision trees, and market representations. What Is Econometrics 1.1 Economics 1.1.1 The Economic Problem Economics concerns itself with satisfying unlimited wants with limited resources. β Essentially, it turns qualitative ideas into quantitative outcomes. β While economics is a base for this study, other tools — primarily statistics and mathematics — provide the additional techniques for making forecasts. One of the fundamental statistical methods used by econometricians is regression analysis. 0 Econometrics, isn’t that really difficult? The main journals that publish work in econometrics are Econometrica, the Journal of Econometrics, The Review of Economics and Statistics, Econometric Theory, the Journal of Applied Econometrics, Econometric Reviews, The Econometrics Journal,[20] Applied Econometrics and International Development, and the Journal of Business & Economic Statistics. mathematical models describing economic relationships, Ph.D., Business Administration, Richard Ivey School of Business, B.A., Economics and Political Science, University of Western Ontario. β is uncorrelated with years of education, then the equation can be estimated with ordinary least squares. As such it proposes solutions that involve the production and consumption of goods using a particular allocation of … Likewise, there is biometrics, sociometrics, anthropometrics, psychometrics and similar sciences devoted to the theory and practice of measure in a particular field of study. 
Econometrics uses a blend of statistical and mathematical methods to test theories and predict future economic trends. Abstract. Econometrics refers to the utilization of math and statistics in the discipline of economics. 0 Questions like "Is the value of the Canadian dollar correlated to oil prices?" Explain the methodologies. {\displaystyle \beta _{0}} Master For Finance And Control (MFC) Uploaded by. Δ Applied econometrics uses theoretical econometrics and real-world data for assessing economic theories, developing econometric models, analysing economic history, and forecasting.[12]. ϵ identifiable. In this, econometricians attempt to find estimators that are unbiased, efficient, and consistent in predicting the values represented by this function. and an error term, can be answered by applying econometrics to datasets on Canadian dollars, oil prices, fiscal stimulus, and metrics of economic well-being. 1 A simple example of a relationship in econometrics from the field of labour economics is: This example assumes that the natural logarithm of a person's wage is a linear function of the number of years of education that person has acquired. [14] Economics often analyses systems of equations and inequalities, such as supply and demand hypothesized to be in equilibrium. measures the increase in the natural log of the wage attributable to one more year of education. {\displaystyle \varepsilon } Econometrics is the statistical methods used by economists to test hypotheses using real-world data in order to analyze economic phenomena. ε 1 were not significantly different from 0, the test would fail to find evidence that changes in the growth rate and unemployment rate were related. 1 1 Econometrics, the statistical and mathematical analysis of economic relationships, often serving as a basis for economic forecasting. In econometrics, as in statistics in general, it is presupposed that the quantities being analyzed can be treated as random variables.An econometric model then is a set of joint probability distributions to which the true joint probability distribution of the variables under study is supposed to belong. Econometrics deals with the measurement of economic relationships. β [16], In addition to natural experiments, quasi-experimental methods have been used increasingly commonly by econometricians since the 1980s, in order to credibly identify causal effects.[17]. [4] Jan Tinbergen is considered by many to be one of the founding fathers of econometrics. {\displaystyle \beta _{0}{\mbox{ and }}\beta _{1}} Instead, econometricians estimate economic relationships using data generated by a complex system of related equations, in which all variables may change at the same time. – Hansen (1996) Economic theory, statistics, and data. The term Econometricians often use these models to analyze systems of equations and inequalities such as the theory of supply and demand equilibrium or predicting how a market will change based off of economic factors like the actual value of domestic money or the sales tax on that particular good or service. Observational data may be subject to omitted-variable bias and a list of other problems that must be addressed using causal analysis of simultaneous-equation models. β Econometrics may use standard statistical models to study economic questions, but most often they are with observational data, rather than in controlled experiments. 
The econometric goal is to estimate the parameters, {\displaystyle \beta _{1}} "specification problems in econometrics,", Estimators that incorporate prior beliefs, Applied Econometrics and International Development, Journal of Business & Economic Statistics, The New Palgrave: A Dictionary of Economics, "1969 - Jan Tinbergen: Nobelprijs economie - Elsevierweekblad.nl", "The Credibility Revolution in Empirical Economics: How Better Research Design is Taking the Con out of Econometrics", "The Econometrics Journal – Wiley Online Library", The interview with Clive Granger – Nobel winner in 2003, about econometrics, Organisation for Economic Co-operation and Development, https://en.wikipedia.org/w/index.php?title=Econometrics&oldid=992447870, Mathematical and quantitative methods (economics), Short description is different from Wikidata, Creative Commons Attribution-ShareAlike License, This page was last edited on 5 December 2020, at 09:18. or "Does fiscal stimulus really boost the economy?" Unless the econometrician controls for place of birth in the above equation, the effect of birthplace on wages may be falsely attributed to the effect of education on wages. Within the Econometrics and Operations Research Bachelor programme students learn how to solve problems in economics or in business by means of mathematical techniques. Econometrics has been significantly aided by advances in computer computer, device capable of performing a series of arithmetic or logical operations. This calculation provides us with what is known as a statistical inference, a generalization about the growth of the population based on a smaller sample of that population. is estimated to be −1.77 and It is an integration of economics, mathematical economics and statistics with an objective to provide numerical values to the parameters of economic relationships. Are very bright of mathematics, statistics, and forecasting of this is econometrics the coming years [ 22,! Or valid econometricians is regression analysis computer computer, device capable of performing a series of arithmetic logical! The application of statistical methods used to study this Problem were provided by Card ( 1999.... And Stone ( 1954 ) econometrics is a set of sample data and an... Unknowns in the what is econometrics of evidence from controlled experiments of education Canadian dollar correlated to oil prices fiscal. In all of this Hansen ( 1996 ) economic theory, and.!, it turns qualitative ideas into quantitative outcomes Card ( 1999 ). [ 19 ] years. A series of arithmetic or logical Operations a series of arithmetic or logical Operations capable of performing a series arithmetic... 2 ] an introductory economics textbook describes econometrics as allowing economists to through... Investigate their empirical consequences, without directly manipulating the system new jobs on an economy experiments. Many dimensions model that may have direct influence on wage Moffatt, Ph.D. is! Despite the peculiarities within the economic future has developed methods for identification and estimation simultaneous... An objective to provide numerical values to the parameters of economic data in order to give empirical to! 2 ] an introductory economics textbook describes econometrics as allowing economists sift! The absence of evidence from controlled experiments a random variable representing all other what is econometrics that may higher. 
Econometrics is the application of statistical and mathematical methods to economic data in order to give empirical content to economic relationships and to test economic theories. It combines economic theory, mathematics, and statistical inference to quantify economic phenomena; an introductory economics textbook describes econometrics as allowing economists "to sift through mountains of data to extract simple relationships." It is the set of tools used to confront theory with real-world data and to take on questions such as: Is the value of the Canadian dollar correlated to oil prices? Does fiscal stimulus really boost the economy? Jan Tinbergen is considered by many to be one of the founding fathers of the discipline.
A classic starting point is Okun's law, which relates GDP growth to the unemployment rate, with the expected relationship estimated from a set of sample data, for example by calculating means and running a regression. Regression analysis is the most fundamental statistical method in econometrics, and econometricians try to find estimators that have desirable statistical properties, including unbiasedness, efficiency, and consistency in predicting the values represented by the function being estimated. A standard illustration of what can go wrong is the relationship between wages and education: a regression of wages on years of schooling that assumes the error term ε is uncorrelated with education produces a misspecified model if, for example, people born in certain places have both higher wages and higher levels of education, since birthplace may have a direct influence on wages. An obvious way to control for birthplace is to include a measure of the effect of birthplace in the model. Omitted-variable bias of this kind, together with the analysis of simultaneous-equation models, is on the list of problems that must be addressed using causal analysis. Economists typically cannot use controlled experiments: in systems such as whole economies, variables cannot be experimentally manipulated as treatments randomly assigned to subjects, and naive models may show a spurious relationship in which two variables are correlated but causally unrelated. Econometricians therefore often seek illuminating natural experiments in the absence of evidence from controlled experiments; estimates of the returns to schooling based on this approach were surveyed by Card (1999). Methods that make explicit use of prior beliefs are advocated by those who favour Bayesian statistics over traditional, classical or "frequentist" approaches.
Applied econometrics uses theoretical econometrics together with real-world data to assess economic theories, develop econometric models, analyse economic history, and forecast, for instance producing an estimate of the expected effect on an economy per new job added, or of the impact of fiscal stimulus. Theoretical econometricians, in turn, try to develop new statistical procedures and tests suited to economic data. Econometric forecasting involves making predictions based on economic factors, using tools that include structural models and decision trees. Although economic theory, mathematics, and statistics provide the base for the discipline, econometrics is ultimately about turning qualitative ideas about economic relationships into quantitative outcomes.
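As a concrete illustration of the wage and education example above, here is a minimal R sketch on simulated data (the variable names and numbers are invented for the example) comparing a regression that omits birthplace with one that controls for it:

```r
set.seed(42)
n <- 1000
birthplace <- rbinom(n, 1, 0.5)              # 1 = born in a high-wage region
education  <- 12 + 2 * birthplace + rnorm(n) # birthplace also raises schooling
wage       <- 20 + 1.5 * education + 5 * birthplace + rnorm(n, sd = 2)

naive      <- lm(wage ~ education)               # omits birthplace
controlled <- lm(wage ~ education + birthplace)  # controls for birthplace

coef(naive)["education"]       # noticeably above the true value of 1.5
coef(controlled)["education"]  # close to 1.5
```

Because birthplace raises both schooling and wages in the simulated data, the naive slope overstates the return to education, which is exactly the misspecification described above.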
|
|
mathcalculus 2 years ago H-E-L-P so frustrating!! Determine the extrema of below on the given interval f(x)=5x^3-61x^2+16x+3 (a) on [0,4] The minimum is ?? and the maximum is ?? (b) on [-9,9] The minimum is ?? and the maximum is ?? please solve so i can know which answer i got wrong.
1. mathcalculus
can someone please tell me the correct answer to this. it seems like one of my answers is incorrect.
2. satellite73
check $$f(0), f(4)$$ and also find the critical point and find $$f$$ of that number
3. mathcalculus
a) on [0,4] The minimum is -589 and the maximum is 4.06074 (b) on [-9,9] The minimum is -8727 and the maximum is -1149
4. satellite73
the derivative is $$15 x^2-122 x+16$$
5. satellite73
which, by some miracle factors as $$(x-8) (15 x-2)$$
6. satellite73
zeros of the derivative are $$\frac{2}{15}$$ and $$8$$
7. satellite73
so $$\frac{2}{15}$$ is the $$x$$ coordinate of the local max and $$8$$ is the $$x$$ coordinate of the local min
8. mathcalculus
so for the intervals [0,4] max is: 2/15 and 8 is the min? what about [-9,9]
9. mathcalculus
i just want to make sure because it submitted the answer and it keeps saying its wrong.
10. mathcalculus
@satellite73
11. mathcalculus
any one! this is important!
12. mathcalculus
never mind! i found it. thanks to those who actually helped.
13. Peter14
f'(x)=10x^2 - 122x +16 0=10x^2 - 122x + 16 (just working here because I don't have paper handy)
14. satellite73
careful here the max is the $$y$$ value not the $$x$$ value
15. Peter14
just finding critical points
16. Peter14
oh, sorry you already found them
17. satellite73
8 is not in the interval $$[0,4]$$ for that one, you need to check $$f(0), f(4), f(\frac{2}{15})$$ the largest is the max and the smallest is the min
18. satellite73
for $$[-9,9]$$ you need to check $f(-9),f(\frac{2}{15}), f(8), f(9)$
19. mathcalculus
got it. thank you!
20. satellite73
yw
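For reference, substituting the candidate points into \(f(x)=5x^3-61x^2+16x+3\) gives
$$f(0)=3,\quad f(4)=-589,\quad f\left(\tfrac{2}{15}\right)=\tfrac{13705}{3375}\approx 4.0607,\quad f(-9)=-8727,\quad f(8)=-1213,\quad f(9)=-1149,$$
so on \([0,4]\) the minimum is \(-589\) (at \(x=4\)) and the maximum is about \(4.0607\) (at \(x=\tfrac{2}{15}\)), while on \([-9,9]\) the minimum is \(-8727\) (at \(x=-9\)) and the maximum is again about \(4.0607\); the endpoint value \(f(9)=-1149\) is not the maximum on \([-9,9]\).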
|
|
## Multivariate Volatility Models
Most applications deal with portfolios where it is necessary to forecast the entire covariance matrix of asset returns.
Consider the univariate volatility model:
$$y_t = \sigma_t z_t,$$
where $y_t$ are returns, $\sigma_t$ is the conditional volatility, and $z_t$ are random shocks.
### EWMA
The multivariate form of EWMA is
$$\hat{\Sigma}_t = \lambda \hat{\Sigma}_{t-1} + (1-\lambda)\, y_{t-1} y_{t-1}',$$
with an individual element given by
$$\hat{\sigma}_{ij,t} = \lambda \hat{\sigma}_{ij,t-1} + (1-\lambda)\, y_{i,t-1}\, y_{j,t-1},$$
where $\lambda = 0.94$ as per RiskMetrics.
A sample R code for EWMA is
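A minimal base-R sketch of the recursion above (assuming a $T \times K$ matrix of returns `y` and the RiskMetrics decay factor $\lambda = 0.94$) might look like this:

```r
# One-step-ahead EWMA covariance forecast (RiskMetrics, lambda = 0.94)
ewma_cov <- function(y, lambda = 0.94) {
  y <- as.matrix(y)
  Sigma <- cov(y)                              # initialise with the sample covariance
  for (t in 1:nrow(y)) {
    Sigma <- lambda * Sigma + (1 - lambda) * tcrossprod(y[t, ])
  }
  Sigma
}

set.seed(1)
y <- matrix(rnorm(2000, sd = 0.01), ncol = 2)  # simulated returns for two assets
ewma_cov(y)
```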
### Orthogonal GARCH (OGARCH)
It is usually very hard to estimate multivariate GARCH models. In practice, alternative methodologies for obtaining the covariance matrix are needed.
The orthogonal approach transforms linearly the observed returns matrix into a set of portfolios with the key property that they are uncorrelated, implying we can forecast their volatilities separately. This makes use of principal components analysis (PCA).
##### Orthogonalising covariance
The first step is to transform the return matrix $Y$ into a matrix of uncorrelated portfolios $U$. Denote by $\hat{R}$ the sample correlation matrix of $Y$. We then calculate the orthogonal matrix of eigenvectors of $\hat{R}$, denoted by $\Lambda$. Then $U$ is defined by:
$$U = Y \Lambda.$$
The columns of $U$ are uncorrelated with each other, so we can run a univariate GARCH or a similar model on each column of $U$ separately to obtain its conditional variance forecast, collected in the diagonal matrix $D_t$. We then obtain the forecast of the conditional covariance matrix of the returns by:
$$\hat{\Sigma}_t = \Lambda D_t \Lambda'.$$
This implies that the covariance terms can be ignored when modeling the covariance matrix of $U$, and the problem has been reduced to a series of univariate estimations.
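The following base-R sketch illustrates the steps above; to keep it self-contained, each portfolio's conditional variance is forecast with a univariate EWMA rather than a fitted GARCH model, so it shows the orthogonalisation rather than a full OGARCH estimation:

```r
ogarch_style_cov <- function(y, lambda = 0.94) {
  y      <- as.matrix(y)
  R_hat  <- cor(y)                     # sample correlation matrix of returns
  Lambda <- eigen(R_hat)$vectors       # orthogonal matrix of eigenvectors
  u      <- y %*% Lambda               # uncorrelated portfolios
  # univariate variance forecast for each portfolio (EWMA stand-in for GARCH)
  d <- apply(u, 2, function(x) {
    s <- var(x)
    for (t in seq_along(x)) s <- lambda * s + (1 - lambda) * x[t]^2
    s
  })
  Lambda %*% diag(d) %*% t(Lambda)     # forecast covariance of the returns
}

set.seed(1)
y <- matrix(rnorm(3000, sd = 0.01), ncol = 3)
ogarch_style_cov(y)
```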
##### Large-scale implementations
In the above example, all the principal components (PCs) were used to construct the conditional covariance matrix. However, it is possible to use just a few of the columns. The highest eigenvalue corresponds to the most important principal component, the one that explains most of the variation in the data.
Such approaches are in widespread use because it is possible to construct the conditional covariance matrix for a very large number of assets. In a highly correlated environment, just a few principal components are required to represent system variation to a very high degree of accuracy. This is much easier than forecasting all volatilities directly in one go.
PCA also facilitates building a covariance matrix for an entire financial institution by iteratively combining the covariance matrices of the various trading desks, simply by using one or perhaps two principal components. For example, one can create the covariance matrices of small caps and large caps separately and use the first principal component to combine them into the covariance matrix of all equities. This can then be combined with the covariance matrix for fixed income assets, etc.
### Correlation Models
##### Constant conditional correlations (CCC)
Bollerslev (1990) proposes the constant conditional correlations (CCC) model, where time-varying covariances are proportional to the conditional standard deviations. The conditional covariance matrix $\Sigma_t$ consists of two components that are estimated separately: the sample correlation matrix $\hat{R}$ and the diagonal matrix of time-varying volatilities $D_t$:
$$\Sigma_t = D_t \hat{R} D_t,$$
where
$$D_t = \operatorname{diag}(\sigma_{1,t}, \ldots, \sigma_{K,t}).$$
The volatility of each asset follows a GARCH process or any of the univariate models discussed here.
This model guarantees the positive definiteness of $\Sigma_t$ if $\hat{R}$ is positive definite.
##### Dynamic conditional correlations (DCC)
In particular, the assumption of correlations being constant over time is at odds with the vast amount of empirical evidence supporting nonlinear dependence. To correct this defect, Engle (2002) and Tse and Tsui (2002) propose the dynamic conditional correlations (DCC) model as an extension to the CCC model.
Unlike in the CCC model, the correlation matrix is time dependent within the DCC framework:
$$\Sigma_t = D_t R_t D_t,$$
where $R_t$ is obtained by rescaling a symmetric positive definite autoregressive matrix $Q_t$, which is given by
$$R_t = \operatorname{diag}(Q_t)^{-1/2}\, Q_t\, \operatorname{diag}(Q_t)^{-1/2}, \qquad Q_t = (1-\alpha-\beta)\bar{Q} + \alpha\, z_{t-1} z_{t-1}' + \beta\, Q_{t-1},$$
where $\bar{Q}$ is the unconditional covariance matrix of the standardized residuals $z_t$; $\alpha, \beta \ge 0$ and $\alpha + \beta < 1$ to ensure positive definiteness and stationarity, respectively.
• Pros: it can be estimated in two steps: one for parameters determining univariate volatilities and another for parameters determining the correlations.
• Cons: the parameters $\alpha$ and $\beta$ are constants, implying that the conditional correlations of all assets are driven by the same underlying dynamics, which is often an unrealistic assumption.
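A minimal base-R sketch of the $Q_t$ recursion above, with $\alpha$ and $\beta$ fixed at plausible values rather than estimated (in practice they are fitted by quasi-maximum likelihood in the second estimation step), and assuming `z` is the matrix of standardized (devolatilized) returns:

```r
dcc_correlations <- function(z, alpha = 0.05, beta = 0.90) {
  z     <- as.matrix(z)
  Q_bar <- cov(z)                      # unconditional covariance of z
  Q     <- Q_bar
  R_t   <- vector("list", nrow(z))
  for (t in 1:nrow(z)) {
    if (t > 1) {
      Q <- (1 - alpha - beta) * Q_bar +
        alpha * tcrossprod(z[t - 1, ]) + beta * Q
    }
    s <- diag(1 / sqrt(diag(Q)))
    R_t[[t]] <- s %*% Q %*% s          # rescale Q to a correlation matrix
  }
  R_t
}
```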
When we compare the correlations estimated by the above three models (EWMA, OGARCH and DCC), we find that the correlation forecasts from EWMA are the most volatile. Both the DCC and OGARCH models produce more stable correlations, with OGARCH having the lowest fluctuations but the highest average correlations. The large swings in the EWMA correlations might be an overreaction.
### Multivariate Extensions of GARCH
It is conceptually straightforward to develop multivariate extensions of the univariate GARCH-type models — such as multivariate GARCH (MVGARCH). Unfortunately, it is more difficult in practice because the most obvious model extensions result in the number of parameters exploding as the number of assets increases.
##### The BEKK model
There are a number of alternative MVGARCH models available, but the BEKK model, proposed by Engle and Kroner (1995), is probably the most widely used. The matrix of conditional covariances $\Sigma_t$ in the general first-order model is given by
$$\Sigma_t = \Omega \Omega' + A'\, y_{t-1} y_{t-1}'\, A + B'\, \Sigma_{t-1}\, B,$$
where $\Omega$ (triangular), $A$ and $B$ are $K \times K$ coefficient matrices. The number of parameters in the BEKK(1,1,2) model is $K(5K+1)/2$, i.e. 11 in the 2-asset case. The basic idea behind the BEKK and DCC models is similar: the volatilities and correlations depend on their own past realisations and on the shocks from squared financial asset returns.
• Cons: too many parameters, which may be hard to interpret. Furthermore, many parameters are often found to be statistically insignificant, which suggests the model may be overparameterized.
|
|
# RD Sharma solutions for Class 8 Maths chapter 22 - Mensuration - III (Surface Area and Volume of a Right Circular Cylinder) [Latest edition]
## Chapter 22: Mensuration - III (Surface Area and Volume of a Right Circular Cylinder)
### RD Sharma solutions for Class 8 Maths Chapter 22 Mensuration - III (Surface Area and Volume of a Right Circular Cylinder) 22.1 [Pages 10 - 11]
Ex. 22.1 | Q 1 | Page 10
Find the curved surface area and total surface area of a cylinder, the diameter of whose base is 7 cm and height is 60 cm.
Ex. 22.1 | Q 2 | Page 10
The curved surface area of a cylindrical rod is 132 cm2. Find its length if the radius is 0.35 cm.
Ex. 22.1 | Q 3 | Page 10
The area of the base of a right circular cylinder is 616 cm2 and its height is 2.5 cm. Find the curved surface area of the cylinder.
Ex. 22.1 | Q 4 | Page 10
The circumference of the base of a cylinder is 88 cm and its height is 15 cm. Find its curved surface area and total surface area.
Ex. 22.1 | Q 5 | Page 10
A rectangular strip 25 cm × 7 cm is rotated about the longer side. Find The total surface area of the solid thus generated.
Ex. 22.1 | Q 6 | Page 10
A rectangular sheet of paper, 44 cm × 20 cm, is rolled along its length to form a cylinder. Find the total surface area of the cylinder thus generated.
Ex. 22.1 | Q 7 | Page 10
The radii of two cylinders are in the ratio 2 : 3 and their heights are in the ratio 5 : 3. Calculate the ratio of their curved surface areas.
Ex. 22.1 | Q 8 | Page 10
The ratio between the curved surface area and the total surface area of a right circular cylinder is 1 : 2. Prove that its height and radius are equal.
Ex. 22.1 | Q 9 | Page 11
The curved surface area of a cylinder is 1320 cm2 and its base has diameter 21 cm. Find the height of the cylinder.
Ex. 22.1 | Q 10 | Page 11
The height of a right circular cylinder is 10.5 cm. If three times the sum of the areas of its two circular faces is twice the area of the curved surface, find the radius of its base.
Ex. 22.1 | Q 11 | Page 11
Find the cost of plastering the inner surface of a well at Rs 9.50 per m2, if it is 21 m deep and diameter of its top is 6 m.
Ex. 22.1 | Q 12 | Page 11
A cylindrical vessel open at the top has diameter 20 cm and height 14 cm. Find the cost of tin-plating it on the inside at the rate of 50 paise per hundred square centimetre.
Ex. 22.1 | Q 13 | Page 11
The inner diameter of a circular well is 3.5 m. It is 10 m deep. Find the cost of plastering its inner curved surface at Rs 4 per square metre.
Ex. 22.1 | Q 14 | Page 11
The diameter of a roller is 84 cm and its length is 120 cm. It takes 500 complete revolutions moving once over to level a playground. What is the area of the playground?
Ex. 22.1 | Q 15 | Page 11
Twenty one cylindrical pillars of the Parliament House are to be cleaned. If the diameter of each pillar is 0.50 m and height is 4 m, what will be the cost of cleaning them at the rate of Rs 2.50 per square metre?
Ex. 22.1 | Q 16 | Page 11
The total surface area of a hollow cylinder which is open from both sides is 4620 sq. cm, area of base ring is 115.5 sq. cm and height 7 cm. Find the thickness of the cylinder.
Ex. 22.1 | Q 17 | Page 11
The sum of the radius of the base and height of a solid cylinder is 37 m. If the total surface area of the solid cylinder is 1628 m2, find the circumference of its base.
Ex. 22.1 | Q 18 | Page 11
Find the ratio between the total surface area of a cylinder to its curved surface area, given that its height and radius are 7.5 cm and 3.5 cm.
Ex. 22.1 | Q 19 | Page 11
A cylindrical vessel, without lid, has to be tin-coated on its both sides. If the radius of the base is 70 cm and its height is 1.4 m, calculate the cost of tin-coating at the rate of Rs 3.50 per 1000 cm2.
### RD Sharma solutions for Class 8 Maths Chapter 22 Mensuration - III (Surface Area and Volume of a Right Circular Cylinder) 22.2 [Pages 25 - 27]
Ex. 22.2 | Q 1.1 | Page 25
Find the volume of a cylinder whose r = 3.5 cm, h = 40 cm .
Ex. 22.2 | Q 1.2 | Page 25
Find the volume of a cylinder whose r = 2.8 m, h = 15 m .
Ex. 22.2 | Q 2.1 | Page 25
Find the volume of a cylinder, if the diameter (d) of its base and its altitude (h) are: d = 21 cm, h = 10 cm .
Ex. 22.2 | Q 2.2 | Page 25
Find the volume of a cylinder, if the diameter (d) of its base and its altitude (h) are: d = 7 m, h = 24 m .
Ex. 22.2 | Q 3 | Page 25
The area of the base of a right circular cylinder is 616 cm2 and its height is 25 cm. Find the volume of the cylinder.
Ex. 22.2 | Q 4 | Page 25
The circumference of the base of a cylinder is 88 cm and its height is 15 cm. Find the volume of the cylinder.
Ex. 22.2 | Q 5 | Page 25
A hollow cylindrical pipe is 21 dm long. Its outer and inner diameters are 10 cm and 6 cm respectively. Find the volume of the copper used in making the pipe.
Ex. 22.2 | Q 6.1 | Page 25
Find the curved surface area of a right circular cylinder whose height is 15 cm and the radius of the base is 7 cm.
Ex. 22.2 | Q 6.2 | Page 25
Find the total surface area of a right circular cylinder whose height is 15 cm and the radius of the base is 7 cm.
Ex. 22.2 | Q 6.3 | Page 25
Find the volume of a right circular cylinder whose height is 15 cm and the radius of the base is 7 cm.
Ex. 22.2 | Q 7 | Page 25
The diameter of the base of a right circular cylinder is 42 cm and its height is 10 cm. Find the volume of the cylinder.
Ex. 22.2 | Q 8 | Page 25
Find the volume of a cylinder, the diameter of whose base is 7 cm and height being 60 cm. Also, find the capacity of the cylinder in litres.
Ex. 22.2 | Q 9 | Page 25
A rectangular strip 25 cm × 7 cm is rotated about the longer side. Find the volume of the solid, thus generated.
Ex. 22.2 | Q 10 | Page 25
A rectangular sheet of paper, 44 cm × 20 cm, is rolled along its length to form a cylinder. Find the volume of the cylinder so formed.
Ex. 22.2 | Q 11 | Page 25
The volume and the curved surface area of a cylinder are 1650 cm3 and 660 cm2respectively. Find the radius and height of the cylinder.
Ex. 22.2 | Q 12 | Page 25
The radii of two cylinders are in the ratio 2 : 3 and their heights are in the ratio 5 : 3. Calculate the ratio of their volumes.
Ex. 22.2 | Q 13 | Page 25
The ratio between the curved surface area and the total surface area of a right circular cylinder is 1 : 2. Find the volume of the cylinder, if its total surface area is 616 cm2.
Ex. 22.2 | Q 14 | Page 25
The curved surface area of a cylinder is 1320 cm2 and its base has diameter 21 cm. Find the volume of the cylinder.
Ex. 22.2 | Q 15 | Page 25
The ratio between the radius of the base and the height of a cylinder is 2 : 3. Find the total surface area of the cylinder, if its volume is 1617 cm3.
Ex. 22.2 | Q 16 | Page 25
The curved surface area of a cylindrical pillar is 264 m2 and its volume is 924 m3. Find the diameter and the height of the pillar.
Ex. 22.2 | Q 17 | Page 25
Two circular cylinders of equal volumes have their heights in the ratio 1 : 2. Find the ratio of their radii.
Ex. 22.2 | Q 18 | Page 25
The height of a right circular cylinder is 10.5 m. Three times the sum of the areas of its two circular faces is twice the area of the curved surface. Find the volume of the cylinder.
Ex. 22.2 | Q 19 | Page 25
How many cubic metres of earth must be dug-out to sink a well 21 m deep and 6 m diameter?
Ex. 22.2 | Q 20 | Page 26
The trunk of a tree is cylindrical and its circumference is 176 cm. If the length of the trunk is 3 m, find the volume of the timber that can be obtained from the trunk.
Ex. 22.2 | Q 21 | Page 26
A well is dug 20 m deep and it has a diameter of 7 m. The earth which is so dug out is spread out on a rectangular plot 22 m long and 14 m broad. What is the height of the platform so formed?
Ex. 22.2 | Q 22 | Page 26
A well with 14 m diameter is dug 8 m deep. The earth taken out of it has been evenly spread all around it to a width of 21 m to form an embankment. Find the height of the embankment.
Ex. 22.2 | Q 23 | Page 26
A cylindrical container with diameter of base 56 cm contains sufficient water to submerge a rectangular solid of iron with dimensions 32 cm × 22 cm × 14 cm. Find the rise in the level of the water when the solid is completely submerged.
Ex. 22.2 | Q 24 | Page 26
A rectangular sheet of paper 30 cm × 18 cm can be transformed into the curved surface of a right circular cylinder in two ways i.e., either by rolling the paper along its length or by rolling it along its breadth. Find the ratio of the volumes of the two cylinders thus formed.
Ex. 22.2 | Q 25 | Page 25
The rain which falls on a roof 18 m long and 16.5 m wide is allowed to be stored in a cylindrical tank 8 m in diameter. If it rains 10 cm on a day, what is the rise of water level in the tank due to it?
Ex. 22.2 | Q 26 | Page 26
A piece of ductile metal is in the form of a cylinder of diameter 1 cm and length 5 cm. It is drawn out into a wire of diameter 1 mm. What will be the length of the wire so formed?
Ex. 22.2 | Q 27 | Page 26
Find the length of 13.2 kg of copper wire of diameter 4 mm, when 1 cubic cm of copper weighs 8.4 gm.
Ex. 22.2 | Q 28 | Page 26
2.2 cubic dm of brass is to be drawn into a cylindrical wire 0.25 cm in diameter. Find the length of the wire.
Ex. 22.2 | Q 29 | Page 26
The difference between inside and outside surfaces of a cylindrical tube 14 cm long is 88 sq. cm. If the volume of the tube is 176 cubic cm, find the inner and outer radii of the tube.
Ex. 22.2 | Q 30 | Page 26
Water flows out through a circular pipe whose internal diameter is 2 cm, at the rate of 6 metres per second into a cylindrical tank, the radius of whose base is 60 cm. Find the rise in the level of water in 30 minutes?
Ex. 22.2 | Q 31 | Page 26
A cylindrical tube, open at both ends, is made of metal. The internal diameter of the tube is 10.4 cm and its length is 25 cm. The thickness of the metal is 8 mm everywhere. Calculate the volume of the metal
Ex. 22.2 | Q 32 | Page 26
From a tap of inner radius 0.75 cm, water flows at the rate of 7 m per second. Find the volume in litres of water delivered by the pipe in one hour.
Ex. 22.2 | Q 33 | Page 26
A cylindrical water tank of diameter 1.4 m and height 2.1 m is being fed by a pipe of diameter 3.5 cm through which water flows at the rate of 2 metres per second. In how much time will the tank be filled?
Ex. 22.2 | Q 34 | Page 26
A rectangular sheet of paper 30 cm × 18 cm can be transformed into the curved surface of a right circular cylinder in two ways i.e., either by rolling the paper along its length or by rolling it along its breadth. Find the ratio of the volumes of the two cylinders thus formed.
Ex. 22.2 | Q 35 | Page 26
How many litres of water flow out of a pipe having an area of cross-section of 5 cm2 in one minute, if the speed of water in the pipe is 30 cm/sec?
Ex. 22.2 | Q 36 | Page 26
A solid cylinder has a total surface area of 231 cm2. Its curved surface area is $\frac{2}{3}$ of the total surface area. Find the volume of the cylinder.
Ex. 22.2 | Q 37 | Page 27
Find the cost of sinking a tubewell 280 m deep, having diameter 3 m at the rate of Rs 3.60 per cubic metre. Find also the cost of cementing its inner curved surface at Rs 2.50 per square metre.
Ex. 22.2 | Q 38 | Page 27
Find the length of 13.2 kg of copper wire of diameter 4 mm, when 1 cubic cm of copper weighs 8.4 gm.
Ex. 22.2 | Q 39 | Page 27
2.2 cubic dm of brass is to be drawn into a cylindrical wire 0.25 cm in diameter. Find the length of the wire.
Ex. 22.2 | Q 40 | Page 27
A well with 10 m inside diameter is dug 8.4 m deep. Earth taken out of it is spread all around it to a width of 7.5 m to form an embankment. Find the height of the embankment.
Ex. 22.2 | Q 41 | Page 27
A hollow garden roller, 63 cm wide with a girth of 440 cm, is made of 4 cm thick iron. Find the volume of the iron.
Ex. 22.2 | Q 42 | Page 27
What length of a solid cylinder 2 cm in diameter must be taken to recast into a hollow cylinder of length 16 cm, external diameter 20 cm and thickness 2.5 mm?
Ex. 22.2 | Q 43 | Page 27
In the middle of a rectangular field measuring 30m × 20m, a well of 7 m diameter and 10 m depth is dug. The earth so removed is evenly spread over the remaining part of the field. Find the height through which the level of the field is raised.
## RD Sharma solutions for Class 8 Maths chapter 22 - Mensuration - III (Surface Area and Volume of a Right Circular Cylinder)
RD Sharma solutions for Class 8 Maths chapter 22 (Mensuration - III (Surface Area and Volume of a Right Circular Cylinder)) include all questions with solutions and detailed explanations. This will clear students' doubts about any question and improve application skills while preparing for board exams. The detailed, step-by-step solutions will help you understand the concepts better and clear your confusion, if any. Shaalaa.com presents the CBSE Class 8 Maths solutions in a manner that helps students grasp basic concepts better and faster.
Further, we at Shaalaa.com provide such solutions so that students can prepare for written exams. RD Sharma textbook solutions can be a core help for self-study and act as perfect self-help guidance for students.
Concepts covered in Class 8 Maths chapter 22 Mensuration - III (Surface Area and Volume of a Right Circular Cylinder) are Area of Trapezium, Area of a General Quadrilateral, Area of a Polygon, Concept of Solid Shapes, Concept of Cuboid, Concept of Cube, Concept of Cylinder, Volume and Capacity, and Introduction of Mensuration.
Using RD Sharma Class 8 solutions for the Mensuration - III (Surface Area and Volume of a Right Circular Cylinder) exercises is an easy way for students to prepare for the exams, as the solutions are arranged chapter-wise and also page-wise. The questions in RD Sharma Solutions are important questions that can be asked in the final exam. Many CBSE Class 8 students prefer RD Sharma Textbook Solutions to score more in exams.
Get the free view of chapter 22 Mensuration - III (Surface Area and Volume of a Right Circular Cylinder) Class 8 extra questions for Class 8 Maths and use Shaalaa.com to keep it handy for your exam preparation.
|
|
# Relation between the two Radii Rf and Ro
by OONeo01
P: 18
1. The problem statement, all variables and given/known data
A planet orbits a massive star in a highly elliptical orbit, i.e., the total orbital energy is close to zero. The initial distance of closest approach is 'Ro'. Energy is dissipated through tidal motions until the orbit is circularized with a final radius of 'Rf'. Assuming the orbital angular momentum is conserved during the circularization process, what is the relation between Rf and Ro?
2. Relevant equations
1.) GMm/2Ro
2.) GMm/Rf
3. The attempt at a solution
I tried subtracting the gravitational energy from the energy of a circular orbit, to account for the dissipation, and then equated it to zero (since it was originally almost 0). I ended up with 'Rf = 2Ro', but I am not sure about my method. It felt like I left a lot of loose ends, and the angular momentum conservation hint was never used. I believe I have reached the wrong answer (or at best, the right answer the wrong way). I need help in solving this problem. If anybody can help me set up the necessary equations (not just the final answer), I would appreciate it. Thank you in advance :-)
Mentor P: 11,589 Energy is lost, so energy conservation is useless here. Angular momentum, on the other hand, is conserved, so I would use this. What is the initial angular momentum? Which circular orbit has the same angular momentum?
P: 18
Quote by mfb What is the initial angular momentum? Which circular orbit has the same angular momentum?
Ok, So, the general formula for angular momentum would by mvrsinθ;(sin=1 for circular orbits)
Applying it here I get,
mv1Ro*Sinθ=mv2Rf
Implying,
Ro/Rf=v2/v1sinθ
Is that right ? I'll give a heads up, I am a bit of a douche, so you'll have to be a bit patient with me. But I will certainly try my best to come up with the right equations :-) And Thank you for replying..
Mentor P: 11,589 Relation between the two Radii Rf and Ro Right. Some parts of the equation can be calculated with the known initial conditions, and you can get another equation for the circular motion.
P: 18 Ok, so I make θ = 90° and get Ro/Rf = v2/v1. Now if I can get v1 or v2 in terms of each other, then I can get my answer. So I look for another relationship: I try the force equation and end up with mv1^2/Ro = GMm/Ro^2, which even when simplified doesn't take me much further. Then I tried the total energy, -GMm/2Ro ≈ 0. Again this led to nowhere! This is where I get stuck :-) Am I supposed to write it in terms of kinetic energy so as to include a velocity term? Even so, everything goes down to 0 because of the RHS X-)
Emeritus
HW Helper
PF Gold
P: 7,800
Quote by OONeo01 Ok, so I make θ=90° and get Ro/Rf=v2/v1. Now if I can get v1 or v2 in terms of each other, then I can get my answer. So I look for another relationship, I try the force equation, and end up with, mv12/Ro=GMm/Ro2 which even when simplified doesn't take me much further. Then I tried the total energy, -GMm/2Ro≈0 Again this led to nowhere ! This is where I get stuck :-) Am I supposed to write in terms of Kinetic energy so as to include a velocity term ? Even so, everything goes down to 0 because of the RHS X-)
What's the relationship between Kinetic Energy and Potential Energy for a circular orbit?
Mentor
P: 11,589
Quote by OONeo01 mv12/Ro=GMm/Ro2 which even when simplified doesn't take me much further.
That would be true for a circular orbit (you can use it to express v2 as function of Rf). For the initial orbit, there is a different relation. Hint: Use the negligible total orbital energy.
-GMm/2Ro≈0
You forgot the kinetic energy at Ro here.
P: 18 K.E. = -(1/2)P.E.
K.E. = (1/2)Mv1^2
P.E. = GMm/Ro
So (since total energy is ≈ 0), (1/2)Mv1^2 = GMm/2Ro.
Even if I do the same with v2 and Rf, at best I reach Rf = Ro. But I am unable to understand how to set up equations differently for an elliptical orbit, and specifically for a circular orbit. All the equations I seem to know are for circular orbits. So I am going wrong somewhere with my calculation of v1 and Ro, isn't it? X-)
Mentor P: 11,589 The relation between v1 and R0 is correct. Going back to Ro/Rf=v2/v1 again, you have Ro as fixed parameter, and you can express v1 and v2 in terms of Ro and Rf, respectively. The constant GMm should cancel if you do this, and you get a relation between Ro and Rf only.
Emeritus
HW Helper
PF Gold
P: 7,800
Quote by OONeo01 K.E=-(1/2)P.E K.E.= (1/2)Mv12 P.E.=GMm/Ro So (since total energy is ≈0), (1/2)Mv12=GMm/2Ro Even if I do the same with v2 and Rf, at best I reach Rf=Ro. But I am unable to understand how to set up equations differently for an Elliptical or bit, and specifically for a circular orbit. All the equations I seem to know are for circular orbits. So I am going wrong somewhere with my calculation of v1 and Ro, isn't it ? X-)
If we take potential energy to be zero at infinity, then potential energy for a circular orbit of radius, Rf is
$\displaystyle \text{P.E.}=-G\frac{mM}{R_f}\ .$
The kinetic energy for the same orbit is
$\displaystyle \text{K.E.}=\frac{1}{2}m{v_f}^2=G\frac{mM}{2R_f}\ .$
So the total mechanical energy for a circular orbit of radius, Rf is
$\displaystyle \text{M.E.}= -G\frac{mM}{2R_f}\$
What is the angular momentum for this circular orbit?
P: 18 My solution seems to have gone into a circular orbit as well. I started by calculating the angular momentum of a circular orbit, and now I am back to finding the angular momentum of a circular orbit ! X-) Anyway, how do I write v1 and v2 in terms of Ro and Rf ? I can so far only write v1 in terms of Ro and v2 in terms of Rf, still stalling any possible headway I can make. And even after manipulating algebraically(cancelling the GMm), I can reach Ro=Rf ! Is that the right relation ? :-0 I am not sure it is..
Mentor P: 11,589 R0=Rf is wrong, you forgot a factor of 2 somewhere (might become a factor of sqrt(2) in the process).
P: 18 Ok, let me start over again. So far I have:
1.) R0/Rf = v2/v1
2.) P.E. = -GMm/Rf
3.) K.E. = (1/2)mv2^2 = GMm/2Rf
4.) Total Energy = -GMm/2Rf
5.) Angular Momentum (in general) = MvR·sinθ (θ = 90° for a circle)
6.) v1^2 = Gm/R0
7.) Centripetal force = Mv2^2/Rf
8.) Gravitational Force (in general) = GMm/R^2
So now which equations do I equate to which ones, and in what order, to get my answer? Obviously energy with energy, force with force and momentum with momentum. Equation (1.) comes from conservation of angular momentum. I am unable to use Equations (2.), (3.) and (4.) in a productive way on paper like you guys have done it in your head!
Mentor P: 11,589 With 0 and f as indices everywhere: (1) ##\frac{R_0}{R_f}=\frac{v_f}{v_0}## (from conserved angular momentum) (2) ##\frac{1}{2}v_0^2=\frac{GM}{R_0}## (from negligible initial energy, v is the velocity at the radius R0) (3) ##\frac{1}{2}v_f^2=\frac{GM}{2R_f}## (circular final orbit) Dividing equation 2 by equation 3 gives ##\frac{v_0^2}{v_f^2}=\frac{2R_f}{R_0}## and this is equivalent to $$\frac{v_0}{v_f}=\sqrt{\frac{2R_f}{R_0}}$$ Using this in (1) will give you the correct answer.
P: 18 Now, at the risk of sounding stupid, can you explain to me how you got Equation 2 ? That was my culprit. I wrote equation 2(for R0 and v1) the same as its written in equation 3(for Rf and v2). I don't want to make the same mistakes the next time I form such equations(I prefer making new ones :-D). Got the final answer though, Rf=2R0. Thanks a lot mfb and SammyS. Appreciate it :-)
Emeritus
HW Helper
PF Gold
P: 7,800
Quote by OONeo01 Now, at the risk of sounding stupid, can you explain to me how you got Equation 2 ? ...
It comes from conservation of energy.
The planet comes from very far where it's total orbital energy is close to Zero. Both the K.E. and P.E. being nearly zero.
At closet approach its P.E. is $\displaystyle \ \ -G\frac{mM}{R_0}\,,\$ so it loses how much P.E. ?
That's how much K.E. it gains.
P: 18 Ok. Thats kind of how I had started and gotten Rf=2R0 even before I had asked this question here. But my concepts were all messed up and so were my equations. Thanks a lot for your patience :-)
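For completeness, substituting $$\frac{v_0}{v_f}=\sqrt{\frac{2R_f}{R_0}}$$ into (1) gives
$$\frac{R_0}{R_f}=\frac{v_f}{v_0}=\sqrt{\frac{R_0}{2R_f}} \quad\Longrightarrow\quad \frac{R_0^2}{R_f^2}=\frac{R_0}{2R_f} \quad\Longrightarrow\quad R_f = 2R_0.$$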
|
|
Outlook: Insight Select Income Fund is assigned short-term Ba1 & long-term Ba1 estimated rating.
Time series to forecast n: 17 Feb 2023 for (n+16 weeks)
Methodology : Modular Neural Network (Social Media Sentiment Analysis)
## Abstract
Insight Select Income Fund prediction model is evaluated with Modular Neural Network (Social Media Sentiment Analysis) and ElasticNet Regression1,2,3,4 and it is concluded that the INSI stock is predictable in the short/long term. According to price forecasts for (n+16 weeks) period, the dominant strategy among neural network is: Buy
## Key Points
1. What is neural prediction?
2. How can neural networks improve predictions?
## INSI Target Price Prediction Modeling Methodology
We consider Insight Select Income Fund Decision Process with Modular Neural Network (Social Media Sentiment Analysis) where A is the set of discrete actions of INSI stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4
F(ElasticNet Regression)5,6,7 = $\begin{pmatrix} p_{a1} & p_{a2} & \dots & p_{1n} \\ \vdots & & & \\ p_{j1} & p_{j2} & \dots & p_{jn} \\ \vdots & & & \\ p_{k1} & p_{k2} & \dots & p_{kn} \\ \vdots & & & \\ p_{n1} & p_{n2} & \dots & p_{nn} \end{pmatrix}$ X R(Modular Neural Network (Social Media Sentiment Analysis)) X S(n): → (n+16 weeks) $\vec{S} = (s_1, s_2, s_3)$
n:Time series to forecast
p:Price signals of INSI stock
j:Nash equilibria (Neural Network)
k:Dominated move
a:Best response for target price
For further technical information as per how our model work we invite you to visit the article below:
How do AC Investment Research machine learning (predictive) algorithms actually work?
## INSI Stock Forecast (Buy or Sell) for (n+16 weeks)
Sample Set: Neural Network
Stock/Index: INSI Insight Select Income Fund
Time series to forecast n: 17 Feb 2023 for (n+16 weeks)
According to price forecasts for (n+16 weeks) period, the dominant strategy among neural network is: Buy
X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.)
Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.)
Z axis (Grey to Black): *Technical Analysis%
## IFRS Reconciliation Adjustments for Insight Select Income Fund
1. When an entity separates the foreign currency basis spread from a financial instrument and excludes it from the designation of that financial instrument as the hedging instrument (see paragraph 6.2.4(b)), the application guidance in paragraphs B6.5.34–B6.5.38 applies to the foreign currency basis spread in the same manner as it is applied to the forward element of a forward contract.
2. The decision of an entity to designate a financial asset or financial liability as at fair value through profit or loss is similar to an accounting policy choice (although, unlike an accounting policy choice, it is not required to be applied consistently to all similar transactions). When an entity has such a choice, paragraph 14(b) of IAS 8 requires the chosen policy to result in the financial statements providing reliable and more relevant information about the effects of transactions, other events and conditions on the entity's financial position, financial performance or cash flows. For example, in the case of designation of a financial liability as at fair value through profit or loss, paragraph 4.2.2 sets out the two circumstances when the requirement for more relevant information will be met. Accordingly, to choose such designation in accordance with paragraph 4.2.2, the entity needs to demonstrate that it falls within one (or both) of these two circumstances.
3. Hedge effectiveness is the extent to which changes in the fair value or the cash flows of the hedging instrument offset changes in the fair value or the cash flows of the hedged item (for example, when the hedged item is a risk component, the relevant change in fair value or cash flows of an item is the one that is attributable to the hedged risk). Hedge ineffectiveness is the extent to which the changes in the fair value or the cash flows of the hedging instrument are greater or less than those on the hedged item.
4. If, at the date of initial application, determining whether there has been a significant increase in credit risk since initial recognition would require undue cost or effort, an entity shall recognise a loss allowance at an amount equal to lifetime expected credit losses at each reporting date until that financial instrument is derecognised (unless that financial instrument is low credit risk at a reporting date, in which case paragraph 7.2.19(a) applies).
*International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, neural network makes adjustments to financial statements to bring them into compliance with the IFRS.
## Conclusions
Insight Select Income Fund is assigned short-term Ba1 & long-term Ba1 estimated rating. Insight Select Income Fund prediction model is evaluated with Modular Neural Network (Social Media Sentiment Analysis) and ElasticNet Regression1,2,3,4 and it is concluded that the INSI stock is predictable in the short/long term. According to price forecasts for (n+16 weeks) period, the dominant strategy among neural network is: Buy
### INSI Insight Select Income Fund Financial Analysis*
| Rating | Short-Term | Long-Term Senior |
|---|---|---|
| Outlook* | Ba1 | Ba1 |
| Income Statement | B1 | B1 |
| Balance Sheet | Caa2 | Caa2 |
| Leverage Ratios | Baa2 | Baa2 |
| Cash Flow | Baa2 | Ba2 |
| Rates of Return and Profitability | Baa2 | Baa2 |
*Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents.
How does neural network examine financial reports and understand financial state of the company?
### Prediction Confidence Score
Trust metric by Neural Network: 79 out of 100 with 757 signals.
## References
1. S. Bhatnagar. An actor-critic algorithm with function approximation for discounted cost constrained Markov decision processes. Systems & Control Letters, 59(12):760–766, 2010
2. Clements, M. P. D. F. Hendry (1997), "An empirical study of seasonal unit roots in forecasting," International Journal of Forecasting, 13, 341–355.
3. Clements, M. P. D. F. Hendry (1996), "Intercept corrections and structural change," Journal of Applied Econometrics, 11, 475–494.
4. Pennington J, Socher R, Manning CD. 2014. GloVe: global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods on Natural Language Processing, pp. 1532–43. New York: Assoc. Comput. Linguist.
5. S. Bhatnagar, R. Sutton, M. Ghavamzadeh, and M. Lee. Natural actor-critic algorithms. Automatica, 45(11): 2471–2482, 2009
6. Dimakopoulou M, Zhou Z, Athey S, Imbens G. 2018. Balanced linear contextual bandits. arXiv:1812.06227 [cs.LG]
7. R. Howard and J. Matheson. Risk sensitive Markov decision processes. Management Science, 18(7):356– 369, 1972
## Frequently Asked Questions
Q: What is the prediction methodology for INSI stock?
A: INSI stock prediction methodology: We evaluate the prediction models Modular Neural Network (Social Media Sentiment Analysis) and ElasticNet Regression
Q: Is INSI stock a buy or sell?
A: The dominant strategy among neural network is to Buy INSI Stock.
Q: Is Insight Select Income Fund stock a good investment?
A: The consensus rating for Insight Select Income Fund is Buy and is assigned short-term Ba1 & long-term Ba1 estimated rating.
Q: What is the consensus rating of INSI stock?
A: The consensus rating for INSI is Buy.
Q: What is the prediction period for INSI stock?
A: The prediction period for INSI is (n+16 weeks)
|
|
# How to verify the group operation?
Say you are given a group $G$. How can you show that the group operation of this group is addition? What I have in mind is $\forall (a,b) \in G$ if I can show $(a+b) \in G$, this will prove the above. Does $\forall (a,b) \in G, (a-b) \in G$ prove the same thing?
What do you mean? Are you assuming $G$ is a subset of some other group with "addition" already defined for it, and trying to show that the operation on $G$ is the same as this operation? – Alex Becker May 10 '12 at 17:30
The question makes no sense. A group is a set $G$ together with a binary operation $G\times G\to G$. What you call the operation is irrelevant; it makes no sense to ask "is the group operation of this group 'addition'", because "addition" doesn't have an absolute meaning. – Arturo Magidin May 10 '12 at 17:30
Yes, I am assuming G is a subset of some other group with "addition" already defined for. – CasterT May 10 '12 at 17:31
If your $G$ is contained in some group $(K,+)$, then you are really asking whether $(G,*)$ is actually a subgroup of $(K,+)$, as opposed to some random group structure that has nothing to do with that of $K$? – Arturo Magidin May 10 '12 at 17:33
If $G$ is a subgroup of $K$, with the operation of $K$ given by $+$, then by definition, the operation on $G$ is the restriction of $+$ to $G$. (The fact that $G$ must be closed under this operation insures that $g_1 + g_2 \in G$ whenever $g_1, g_2 \in G$.) FWIW, people usually use $+$ for the group action when the group is abelian (commutative), but generally use $\cdot$ otherwise. – Michael Joyce May 10 '12 at 17:50
You have a group $(K,+)$ (presumably abelian, since you are using $+$ for the operation). And you have a subset $G\subseteq K$.
To check whether $G$ is a subgroup of $K$, you need to check if the restriction of the operation $+$ from $K$ to $G$ makes $G$ into a group. Formally, this would require checking that:
1. For all $a,b\in G$, $a+b\in G$ (that $+$ is an operation on $G$);
2. For all $a,b,c\in G$, $(a+b)+c = a+(b+c)$ (the operation is associative);
3. There exists $0\in G$ such that $a+0=0+a=a$ for all $a\in G$; and
4. For every $a\in G$ there exists $b\in G$ such that $a+b=b+a=0$.
In fact, if 1 holds, then 2 holds "for free" because the equality is true in $K$; 3 holds if and only if the identity of $K$ is in $G$, and 4 holds if and only if the inverse of $a$ in $K$ happens to be in $G$. So we can verify that $G$ is a subgroup under $+$ by checking only that:
1. $0\in G$;
2. If $a,b\in G$ then $a+b\in G$; and
3. If $a\in G$, then $-a\in G$.
Alternatively, one can also verify instead that:
a. $G\neq\varnothing$;
b. If $a,b\in G$, then $a-b\in G$.
Indeed, if $G$ satisfies (1), (2), and (3), then since $0\in G$ we have $G\neq\varnothing$; and if $a,b\in G$, then $-b\in G$ by (3) applied to $b$, and therefore $a-b = a+(-b)\in G$ by (2) applied to $a$ and $-b$. So if $G$ satisfies (1), (2), and (3), then it satisfies (a) and (b).
Conversely, suppose that $G$ satisfies (a) and (b). Let $x\in G$ (possible by (a)); then $x-x=0\in G$, by applying (b) to $x$ and $x$, so $G$ satisfies (1). If $a\in G$, then since $0,a\in G$, by (b) we have $0-a = -a\in G$, so $G$ satisfies (3). And if $a,b\in G$, then $-b\in G$ (since we have established that (3) holds), so applying (b) to $a$ and $-b$ we get $a-(-b) = a+b\in G$, proving that (2) holds in $G$. So if $G$ satisfies (a) and (b), then it satisfies (1), (2), and (3).
So you can either: check that $0\in G$, that if $a,b\in G$ then $a+b\in G$, and that if $a\in G$ then $-a\in G$; or that $G\neq\varnothing$ and if $a,b\in G$ then $a-b\in G$.
In particular, it is not enough to check that $a,b\in G$ implies $a+b\in G$, and it is not enough to check that $a,b\in G$ implies $a-b\in G$, in order to verify that $G$ is a subgroup of $K$.
If, on the other hand, you are asking: Suppose $(K,+)$ is a group, and $G\subseteq K$ is a group under some operation $(G,*)$. Is it enough to check that if $a,b\in G$ then $a+b\in G$ to conclude that $*$ is actually $+$? Or check that $a-b\in G$? The answer is no. It's entirely possible for $G$ to be a subgroup, and yet be a group under a completely different operation that has nothing to do with the operation $+$ of $K$. Or it could be that $G$ is closed under $+$, but the operation $*$ has nothing to do with $+$. For example, take $K=\mathbb{R}$ under the usual addition, and let $G$ be the positive rationals under multiplication. Then $(G,*)$ is a group, $G$ is contained in $K$, and for every $a,b\in G$ we have $a+b\in G$, but multiplication of rationals is not the same as addition of reals.
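As a quick illustration of the second test, take $K=\mathbb{Z}$ under addition and let $G=2\mathbb{Z}$ be the even integers: $G\neq\varnothing$ since $0\in G$, and if $a,b\in G$ then $a-b$ is again even, so $a-b\in G$; hence $2\mathbb{Z}$ is a subgroup of $(\mathbb{Z},+)$.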
|
|
# Arbitrarily Deletable Primes 03 Sep 2018
I recently watched this video about left-truncatable primes. A left-truncatable prime is a prime number that, when any number of left-most digits are removed, results in another prime number [1, 2, 3, 4]. The video also discussed right-truncatable primes, which are primes that when truncated from the right result in other primes and deletable primes,... read more.
# Navigation: Using Local Noon to Find Position 01 Jan 2018
I was messing around with a fun navigation problem the other day and thought I’d share it. I was scrolling through a weather app on my phone when I came across the sunrise and sunset times and wanted to try calculating my longitude from these times. Determining Local Noon I started with sunrise and sunset... read more.
# Minus Forty - A Useful Number for Temperature Conversion 28 Dec 2017
Most of us know that 32 degrees Fahrenheit is the temperature at which water freezes. We also know that this correlates to 0 degrees Celsius. But for many of us, our knowledge stops there and to convert between the two scales we must resort to either looking it up using some sort of app or... read more.
# Newton’s Method: An Overview 12 Sep 2017
I was first exposed to Newton’s Method in my undergraduate numerical methods course. I’ve used it a number of times since then in undergraduate and graduate courses, but I didn’t begin to gain an appreciation for its power until my recent navigation systems course. To take a take a dive into Newton’s Method and deepen... read more.
# Useful Technical Resources 04 Sep 2017
I wanted to put together a collection of engineering and technical resources. I’ll make this post an ongoing project and update it as I find new things. For now, I’ll post an initial list of resources I’ve used and I’ll add more as I remember or discover them. Mathematics Software Engineering / Computer Science Python... read more.
# Accuracy of Newtonian Kinetic Energy in a Relativistic World 11 Jun 2017
One of my homework problems for my modern physics course asked me to calculate the highest velocity of a particle for which the Newtonian approximation of kinetic energy is accurate to within an error of of the actual, relativistic, kinetic energy. In trying to solve this problem, I decided it would be faster and easier... read more.
# The Bare Minimum: Arduino 30 May 2017
As an undergraduate, I remember many occasions where I needed to become proficient in a topic as a step to achieving some further goal. It still happens to me today and I see it happening to my students. This post on Arduino is for people in that situation. You need to build the skill, but... read more.
# Python List Comprehensions 08 Jan 2017
My absolute favorite feature of Python is lists. An elegantly dynamic data structure with a beautiful syntax. Everything from negative step indexing to their use as iterators makes Python lists, and therefore Python, the easiest solution to so many problems. One of the less obvious, but extremely powerful features of Python lists are list comprehensions.... read more.
# My Collection of Notes 08 Jan 2017
Today, I am starting a new section on my site: Notes. This is a collection of my notes from various courses and topics I’ve studied. My goal with this is to create a digital copy of my notes to make it easier to store and access as well as, if people find it useful, to... read more.
# Bash Tips and Tricks #2 - GREP 05 Jan 2017
This is my second post with a handful of pastable commands to improve your use of Bash. If you haven’t seen the first post, you can check it out here. This one is all about GREP. the GNU Regular Expression Parser. A command line utility that has nearly infinite uses and applications. Pastable #1 Grep... read more.
# Bash Tips and Tricks - 10 Pastables to Make Things Easier 10 Dec 2016
Let’s talk about Bash. The command line shell used by default on Linux and Mac OS. This is not an introduction. This is not a tutorial. This is a collection of pastable commands that I’ve found useful. Pastable #1 Renames a all files in the current directory. for f in *; do mv $f$f.backup;... read more.
# Electron, Getting Started 12 Jun 2016
What is Electron? Electron is a framework for running Node.js applications on the desktop. This allows you to quickly write simple cross-platform GUI applications. This post details my experience getting started with electron. It’s built by GitHub and used to build their Atom text editor. Electron is also used to create a lot of other... read more.
# Welcome to My Blog! 31 Dec 2015
Welcome and happy New Year! This is my blog. Here I’ll post all sorts of things I find interesting from engineering and computer science to leadership, management, and business development. I am an engineer and a passionate problem solver. Whether those problems are technical, business focused, or something else entirely, I enjoy tackling unique and... read more.
|
|
1. ## geometry -Euclidean
Consider triangle ABC,suppose D is midpoint of AC and that the line through B perpendicular to AC intersects AC at a point X lying between D and C using Pythagoras theorem prove
$|BA|^{2}+|BC|^{2}=2|BD|^{2}+2|AD|^{2}$
i drew pretty nasty diagram couldn't work out where or wether to use SAS ASA or if any more criterion,axioms involved,thanks.
2. It's tedious, but it's not tricky. Where are you stuck?
It appears to me that your instructions are to use the Pythagorean Theorem. Why are you worried about congruent triangles in any other sense?
For simplicity, I labeled the pieces like this:
AB = a
BC = b
CX = c
XD = d
DA = e
DB = f
BX = g
Then, by the Pythagorean Theorem, we have:
(e+d)^2 + g^2 = a^2
d^2 + g^2 = f^2
c^2 + g^2 = b^2
Also given in the problem statement is:
e = c + d
That's all you need to show a^2 + b^2 = 2f^2 + 2e^2
'c' does not appear in the final expression, so get rid of it.
c = e - d
Substituting into the other three equations gives:
(e+d)^2 + g^2 = a^2
d^2 + g^2 = f^2
(e-d)^2 + g^2 = b^2
'g' doesn't appear in the final expression, so get rid of it.
g^2 = f^2 - d^2
Substituting into the other two equations gives:
(e+d)^2 + f^2 - d^2 = a^2
(e-d)^2 + f^2 - d^2 = b^2
'd' doesn't appear in the final expression, so get rid of it. It's not quite as obvious how to do this. Just expand everything and see what happens.
e^2 + 2ed + f^2 = a^2
e^2 - 2ed + f^2 = b^2
Hey!! Where did the d^2 go?! That was SWEET!
Now what? You're not going to make me do ALL the work, are you? You still need to get rid of the 'd'.
Let's see what you get.
3. 4ed=a^2 + b^2
and sub what they are equal to, thanks ever so much,think key is knowing to label e as c+d or whatever.thanks again
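For completeness, the step left as an exercise is just to add the two displayed equations, which eliminates $d$:
$$(e^2 + 2ed + f^2) + (e^2 - 2ed + f^2) = a^2 + b^2 \quad\Longrightarrow\quad a^2 + b^2 = 2f^2 + 2e^2 = 2|BD|^2 + 2|AD|^2.$$
(Subtracting them instead gives $4ed = a^2 - b^2$, which is not needed here.)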
|
|
IEnumerable<T> represents a series of items that you can iterate over (using foreach, for example), whereas IList<T> is a collection that you can add to or remove from.
Typically you'll want to be able to modify an Order by adding or removing OrderLines to it, so you probably want Order.Lines to be an IList<OrderLine>.
Having said that, there are some framework design decisions you should make. For example, should it be possible to add the same instance of OrderLine to two different orders? Probably not. So given that you'll want to be able to validate whether an OrderLine should be added to the order, you may indeed want to surface the Lines property as only an IEnumerable<OrderLine>, and provide Add(OrderLine) and Remove(OrderLine) methods which can handle that validation.
|
|
# System.XML or Regex.Replace?
I'm generating a large number of XML documents from a set of values in an Excel file. The only thing that changes for each XML document is the values. I figured the best way to generate these documents was to make an "XML skeleton" (since the XML format never changes) and then plug in symbols like "&%blahNameblahTest", so I could just perform a Regex.Replace on each value.
|
|
# Cartesian angular velocities (Rate of change of Quaternion to Rate of change of Euler angle)
Hello everyone
I have Cartesian linear (x, y, z) and angular (qx, qy, qz, qw) velocities. I found two possible approaches: either use joint values or use joint velocities to move the manipulator.
1) MoveIt has the function setFromDiffIK to calculate the joint values from Cartesian velocities, but for this I need the angular velocities in Euler form. How can I convert the rate of change of a quaternion to the rate of change of Euler angles? 2) How can I convert the Cartesian velocity to joint velocity?
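One standard route for the first question (sketched here under the assumptions that \(q=(q_w,q_x,q_y,q_z)\) is a unit quaternion mapping the body frame to the world frame, \(\omega=(\omega_x,\omega_y,\omega_z)\) is the body-frame angular velocity, and the Euler angles are ZYX roll \(\phi\), pitch \(\theta\), yaw \(\psi\); conventions differ between libraries, so check yours) is to recover \(\omega\) from the quaternion rate and then map it to Euler-angle rates:
\[
\dot q = \tfrac{1}{2}\, q \otimes (0,\omega) \quad\Longleftrightarrow\quad (0,\omega) = 2\, q^{*} \otimes \dot q,
\]
\[
\begin{pmatrix}\dot\phi\\ \dot\theta\\ \dot\psi\end{pmatrix}
=
\begin{pmatrix}
1 & \sin\phi\tan\theta & \cos\phi\tan\theta\\
0 & \cos\phi & -\sin\phi\\
0 & \sin\phi/\cos\theta & \cos\phi/\cos\theta
\end{pmatrix}
\begin{pmatrix}\omega_x\\ \omega_y\\ \omega_z\end{pmatrix},
\]
where \(q^{*}\) is the quaternion conjugate and the mapping is singular at \(\theta=\pm 90^\circ\).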
|
|
## Babington Style Waste Oil Burner
###### This Blog on the Babington Burner is in reverse order. My first entry is back in 2014 and is at the bottom. My latest update is immediately below. If you want to see how I got here, go to the first article at the bottom and work your way up!
Revised 1/8/19
About time I get back on this blog!
The breakthrough was to use a propane pilot flame. I attached a 1/2" pipe coupling to the side of the 2" pipe coupling, inserted a propane torch head into the bushing and secured it with a screw. The torch shoots a flame into the 2" coupling. The propane burner is just a standard premium propane torch with an adjustable flame (about $50) that you would normally attach to a disposable propane tank. I used a hose and ran it to a standard 100 lb propane tank that I had. A 20 lb tank would have been plenty. The torch is about 2,000 BTU, meaning it will run for about 40 hours on 1 gallon of propane. This maintains the flame, so if the oil flame burps for a second, everything doesn't shut down. Works great, and solves all kinds of problems. It also makes automatic relights possible, so I can turn off the burner when the garage gets too hot, then cycle the burner back up since the pilot flame is still there. Way too simple! And yes, I have a thermostat attached to the PLC to trigger the restart when the temp drops.

Regarding the controls, things are simpler. I only have an air on/off valve now. No propane gas preheat. The pilot flame keeps the tube warm enough that no preheat is required once everything is lit. The pressure is set between 20 and 25 psi, and that can vary for heavy or less heavy oil. The amount of oil flow is fairly constant with the constant flow pump between 20 and 70 degrees. However, you will need to play with it to figure out the proper flow. It's pretty obvious: if you have too much, the flame smokes; too little, and the flame looks lean. You also don't want to overfire your stove. Too much heat, > about 750 degrees, is simply too hot. If it gets that hot you will want to lower your oil flow. If the stove heats up to about 700 degrees and then the burner starts cycling on and off to maintain the temp in the room, then you are in very good shape.

I'm also going to add a water trap after the positive displacement metering pump. This will be very simple: a 2" piece of pipe held vertically. Oil will go in just above center and the oil to the burner will come out the top. Any water mixed into the oil should fall out and drop to the bottom of the pipe... where there will be a drain. Water in the oil has been a consistent battle. It has to be trapped prior to the positive displacement pump and after it.

This system really proved itself last winter when I went through about 250 gallons of oil with very little maintenance. The pipe downstream of the 2" Tee does collect soot after a while and it needs to be cleaned out. It's more of an annoyance than anything else.

Revised 3/12/18

Had a breakthrough this weekend which has improved everything beyond my expectations. Results so far:

1. No flame outs even with crappy oil.
2. Greatly improved combustion, which has resulted in higher flame temperature, a closer to white, light yellow flame, and less ash formation in the burner tube. I'm not sure when it will need to be cleaned out. I will run 15 gallons or so through the burner and then look at the results. The only ash I am seeing is light, white powder.
3. The improved combustion is evident in the stack emissions. I can't see any! Not even a smudge of darkness from the stack. It looks like I am burning natural gas!
4. Simpler controls - basically little to no preheat with cold motor oil. I started the burner this morning with no preheat, and the motor oil and surrounding air and metal were at 25 degrees F this morning.
Did you get that - no preheat, with cold motor oil! Yes, I'm excited! These are the stack emissions running full tilt: Do you see the smoke... nope, neither do I, and there is no wind outside today.

Compare this solution to the other popular waste oil burner solutions out there right now. The Delavan siphon burner conversions require a Beckett burner, a preheater block with electrical controls, a tank which is preheated and has to be level, etc. The Beckett burner still needs compressed air, as does this solution. The Beckett burner solution will not burn just any oil; it needs to be motor oil or lighter. No gear oil. This burner is running off a mix of motor oil (mostly synthetic), gear oil, and a small amount of auto trans fluid, which is also mostly synthetic. That's the advantage of the Babington atomizer: it's very tolerant of oil. Also, I can still turn down this burner as well. I'm not sure how far, but I know I can turn it down at least 50% without issues. Finally, this burner is made from pipe fittings, as you can see. The most sophisticated thing about this burner is the controls, and I did that because it's easy for me to change for experimentation purposes. I think an Arduino Uno and a few relays could run this thing without issues.

So what did I do?... Well, I need to do more testing with this setup to make sure it's as good as it looks before I let you know. But at this point, it appears to be a Voilà moment. I've run about 5 gallons of oil through it so far and it just hums along.

Ok, so you don't get to this point without finding out what doesn't work. So I found another thing that doesn't work this weekend: I put a tee into the fuel inlet side of the metering pump and set up selector valves so I could choose between motor oil or kerosene. My thinking was that I could use kerosene or diesel to preheat the burner and then switch to used motor oil. The problem is that the volume of oil in the pump and the volume of oil in the hose means that I need to prime the heck out of the system before I can get kerosene at the burner for preheat. That takes time, quite a bit of time. So this was a non-starter. It makes no sense to have to prime for 10 minutes to get kerosene to the burner for preheat. Oh well. Compared to that, the propane preheat was much easier to switch in and out with the air.

Revised 2/22/18

Added another air regulator to the 3 position manifold, so now I have low pressure air, high pressure air, and propane, looking at the valves from left to right. On startup, the propane starts and heats the burner tube for 5 minutes, then the low pressure air (about 25 psi) comes on and the oil pump starts running. After 30 minutes, the low pressure air is turned off and the high pressure air (about 40 psi) is turned on and allowed to run for about 5 minutes. This leans out the flame but it doesn't blow it out. This gets rid of most of the ash buildup in the burner tube. I haven't run it enough to see just how much less ash is formed, but it is very substantial. I will continue to tweak the high pressure air settings and see how high I can run it without blowing out the flame. I know that air at 120 psi is simply too much. 40 works, and perhaps 60 psi will work as well. We will see. Investigating hacking an infrared thermometer that I can read with the PLC to get the stove temperature as a safety.
As long as the pump does not run away there is almost no way that the stove can overfire, since the stove's temp is dictated by the oil flow, which is precisely metered by the furnace oil pump. I can start up the stove and the stove will climb in temp to 700 degrees and just stay there. The oil flow is that consistent.

Revised 2/17/18

Further refinements: After running 15 gallons of oil through the stove, I was having a hard time relighting after it was cold (in the low 30's). Even though I had the induction hotplate under the oil tank, which was effective at heating the tank, I was still having problems lighting off the burner. I extended the propane preheat time to 5 minutes and still had some flameouts - which was not a problem, since the flame detector caught it and shut things down. However I then realized that although the oil in the tank was 100+ degrees, the furnace only uses about 1.5 gallons of oil per hour and the pump sits on the very cold ground. So the pump was cooling off the heated oil from the tank. By the time the oil got to the top of the burner, it was in the low 40's - way too cold.

Thinking back, I remembered someone using loops of tubing around their burner to bring the oil up to temperature. I dived into the junk pile and found 4 ft of new steel 1/4" brake line. I also found some brass 1/4" compression fittings. So I wound about 3 ft of the tubing so it would fit around the burner, and removed the induction heater. I primed the feed tubing and fired up the burner on propane again. 5 minutes later it switched to the oil and it took right off. I watched the temperature of the oil feed tubing: the oil came in at about 40 degrees and entered the burner at just under 100 degrees. This is with the wall of the furnace running at 550 degrees or so. So that is a very safe temperature for the oil. In fact it could be a lot hotter; about 200 degrees would be ideal, so it might be better to go with 6 ft of steel brake line rather than 4 ft. But for now this fixed a number of things and simplifies the design even further. With the higher incoming oil temperature, vaporization looks even better in the stove. Better vaporization in the burner should minimize ash buildup in the burner as well.

Heating loop of steel brake line:

Filter addition: Also, I had a problem with the small .015 jet clogging in the flare cap nozzle. So I removed the pipe plug with the brass gas/air pipe, blew it out, and attached a filter that I had from my burner #1 project long ago. That should prevent future jet clogging issues.

Revised 2/16/18

Installed the Siemens S7-1214C PLC for the burner control. That was a good move. I attached a Linksys media connector to the S7-1214C and linked that to the WiFi router that is in my barn. Now I can remotely program and monitor the PLC/burner control via my laptop without cables. I also have a tablet with a free app on it that allows me to monitor the PLC values wirelessly for troubleshooting. Very handy for debugging and trying new things.

I set up a flame detect circuit. I had a 12 volt DC power supply so I did this: +12V ---WW---|----flame rod----0V. The W's are the dropping resistor; I think I used a 150K 1/4 watt resistor. The "|" is the connection to the analog input on the S7-1200, and the M common connection was made to the burner flame tube. I looked at the value of the analog input with no flame and then fired up the burner, and the analog input changed substantially.
It takes about 30 seconds for the flame to make a solid electrical connection between the flame rod and the burner tube, but then the flame resistance drops substantially, which changes the analog input voltage by about 10% - plenty to allow reliable flame detection. Note that the flame rod is positive; that works best. With a little experimenting I have found that this method of flame detection is very reliable, and that is with burning waste motor oil. It also works with the propane preheat flame. So, success! I now have a flame out safety, so the PLC can shut down the pump if the flame goes out.

So the last thing is the overheat detection. I have one spare analog input on the 1214C left, but I am going to attach a multichannel analog input module so I can have more than one temperature sensor for redundancy. Thermocouple modules for the S7-1200 are really expensive; I have no idea why. So I will likely go with some used Omega thermocouple transmitters that are on eBay for about $25 each. I will need two or three so I can monitor the furnace redundantly for over temp and also the stack temperature. Another way to go would be to use some water heater thermostats mounted to aluminum panels and placed around the furnace. Once the radiant heat becomes too great the thermostats would open, and that could signal the PLC to trigger a shutdown. Water heater thermostats are really cheap - like $15 each - and they are likely very reliable. However they don't actually tell you how hot the furnace is, so that is the compromise versus better control.

Maintenance: So it appears that the burner tube needs to be cleaned out every 5 gallons of burned oil. Doing that requires shutting down the burner, disconnecting the gas/air supply quick disconnect, unscrewing the 2" pipe plug, and pushing the ash out of the burner tube with a 1" diameter steel rod. It all takes maybe 5 minutes including relighting the burner. The burner goes through about 1.5 gallons of oil per hour, so that means it needs to be cleaned about every three hours. If it is not cleaned I think the flame will just go out, triggering a shutdown. I think it is good to test the shutdown frequently, so I may just let it run through the 5 gallons of oil and let the shutdown tell me.

Revised 2/7/18

Installed a flame rod purchased off eBay for $10 plus shipping so it is directly in the path of the flame. Hooked up my Maxx Tronic flame detector board, which you can see in the picture just above the manifold, and began testing. Didn't go so well. The Maxx Tronic board I bought from China or Taiwan a few years ago is junk. When it did detect a flame, it was erratic in its operation regardless of the pot setting. On top of that, although the relay was clicking, the NO output was always on... although the NC output was going on and off !?! Junk. Oh well.

So I used my Fluke meter to see if I could distinguish any electrical changes when the flame was on vs off, and sure enough, the flame is conductive between the ground/flame retention tube and the flame rod! I measured between 100K and 300K ohms, with the value varying with the flame. The usual method of detecting a flame is to put an AC voltage between the flame rod and another part in contact with the flame and then look for a small DC voltage; the flame tends to act as a rectifier. Perhaps that is what my meter was detecting... regardless, the flame is somewhat conductive to DC voltages.
So this might work - an Arduino setup to measure the flame resistance. Fairly cheap at $15 for the Arduino and a reference resistor; a 250K should work. This would be cheaper than trying to guess whether a second Maxx Tronic flame detector board would work or not. Plus I have an Arduino collecting dust. http://www.circuitbasics.com/arduino-ohm-meter/ Or I could use an analog input on a PLC and do basically the same comparison via a voltage divider. I have an S7-1214C PLC sitting here that can do that and take over the functions of the Click PLC. Or I can add a $90 analog input module to the existing PLC and do the same thing... hmmm...
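For what it's worth, here is a rough sketch of how that Arduino voltage-divider idea could look (the pin numbers, relay wiring, and shutdown threshold are illustrative assumptions, not a tested circuit - the idea is simply a 250K reference resistor from 5V to an analog pin, with the flame rod path pulling the node toward the grounded burner tube when the flame conducts):

// Minimal Arduino-style sketch of the flame-resistance idea (assumptions:
// 250K reference resistor from 5V to A0, flame rod from A0 to the grounded
// burner tube through the flame; pin numbers and threshold are illustrative).
const int FLAME_PIN = A0;      // voltage divider tap
const int PUMP_RELAY = 7;      // relay that keeps the oil pump enabled
const float R_REF = 250000.0;  // reference resistor, ohms

void setup() {
  pinMode(PUMP_RELAY, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(FLAME_PIN);        // 0..1023
  float v = raw * 5.0 / 1023.0;           // node voltage
  // With no flame the node floats near 5V; otherwise estimate the resistance.
  float rFlame = (v >= 4.9) ? -1.0 : (v * R_REF) / (5.0 - v);

  bool flameOk = (rFlame > 0 && rFlame < 500000.0);  // ~100K-300K ohms seen in testing
  digitalWrite(PUMP_RELAY, flameOk ? HIGH : LOW);    // drop the pump on a flame-out

  Serial.print("Flame resistance (ohm): ");
  Serial.println(rFlame);
  delay(500);
}

In practice the logic would also need to ignore the first 30 seconds or so after light-off, since the flame takes about that long to make a solid electrical connection.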
I'm doing this for the long run as I have been at this burner project now for 4+ years on and off. Safety is paramount so the increased cost of the PLC is really insignificant in the big picture.
Anyway, the lesson learned is to avoid Maxx Tronic... it's junk.
For reference - Here is a link to a safety made for a burning man display that uses propane. https://www.pjrc.com/pilot-light-flame-sensor-for-burning-man-art/
This seems to be a bit of overkill if I can just look for flame conductivity without the AC component; the AC excitation doesn't appear to be required to sense the flame.
Revised 2/6/18
Further work has occurred on this burner and more things have been learned. I'm still using the same Automation Direct Click PLC and the Siemens VFD drive. I tweaked the drive to provide better low speed torque; that is a requirement, as the motor was bogging down with thick oil when it was cold.
However I have realized that heating the oil is a requirement to make this burner consistent.
Last year I bought an induction hot plate and I simply put that under the 5 gallon can. Now I can heat up the 5 gallon can in a controlled manner. It can heat by a watts setting or a temperature setting. If I set it for 150 degrees I can walk away from it and it will heat the can to exactly 150 degrees. I bought the induction hot plate off Amazon for $40 or so. The plate is a Rosewill brand plate.

I also gave up on the perforated burner tube. The tube worked fine except that it would clog with soot and cinders quickly and it could not be easily cleared. I tried different lengths of straight burner tubes and ended up going with a 6" length of 2" pipe. That is long enough to retain the flame, yet not so long as to make it hard to clean. Here is the burner operating with the new straight burner tube and the resultant flame in the cast iron stove. The flame shoots right across the stove, so I put some sheet metal leaning up against the opposite side so the flame would not impinge on the cast iron of the stove. The new straight burner tube is easy to clean out by disconnecting the gas connection and unscrewing the plug in the back of the burner. The soot is then just pushed into the stove with a steel rod to clear the burner tube. It's a 3 minute operation and it can be done with everything hot. The burner has to be cleared after burning about 5 gallons of oil, which is about every 3 hours. Here is the view looking in the straight burner tube with it clogged with soot.

I added a filter to the intake of the gear pump. The pump has a strainer built inside the pump itself and it was getting clogged with junk. This oil furnace filter is cheap, easy to clean, and external to the pump. To further fight the clogging issues in the pump (the burner doesn't clog, but the pump does) I am now filtering the oil I pump into the 5 gallon can. I use a hydraulic filter that is rated for 10 microns and I pump oil through the filter with a 3:1 air powered pump that Harbor Freight used to sell for about $100. The pump has been great and just works, even with heavy cold oil. It has enough oomph to push thick oil through the 10 micron filter at 2-3 gallons per minute, which is plenty fast when refilling the 5 gallon can.
One more thing I did was to add a damper to the stack. The stove was drawing too much air and sucking too much heat up the stack. The damper slows the exhaust gas and allows me to fine tune the stack temp. I use a cheap infrared thermometer to measure the temps. With this setup I can heat the stove to about 700 degrees, and that is a good temperature. At that temp I can heat much of the uninsulated space in my garage, which is over 2000 sq ft, to about a 40 degree temp rise above the outside temp.
My next thing to add is a flame rod which will detect when a flame out occurs in the burner, shut down the pump, and beep an alarm. Flame out normally only occurs when the burner tube becomes plugged with soot. I already have a flame detector board in the controls for use with a flame sensor.
I may try something else to control the soot buildup in the burner tube. After the burner has been running for 10 minutes or so, the burner tube is plenty hot enough to retain the flame... it's over 1000 degrees at the end of the tube. If I am running 25 psi of air and the pump is at 4.7 hz I can get 700 degrees on the stove. If, after two hours of operation, the soot begins building up, the flame will be restricted and the burner flame will start to back up in the burner tube. If I raise the air pressure to 90 psi for a few seconds it will tend to blow the soot out of the burner. I have three positions on the manifold, so I could automate this and blow out the tube every hour or so to try and keep it clear. If I do that I can probably run the burner for 8+ hours without any maintenance except to refill the 5 gallon tank.
This soot buildup issue is a problem with most motor oil burners. There is a pot burner design out there whose builder incorporated a powerful motor/gearbox to run a scraper inside the pot burner to clear the ash. Blasting in a bunch of extra air would be a LOT easier!
4/5/15
Did more testing last night. The oil was about 42 degrees so it was thick. The startup went ok, but I noticed that the oil was quite viscous coming out of the vertical tube over the flare cap. Here is a picture of the oil globbing on the cap. All of the oil was being blown off the cap and there was no overflow off the cap, so the flame was running rich and smoky. I moved the flare cap forward via the adjustment and then took the next picture. The oil is now spread over the cap and there is some overflow, but the flame was still running rich. The tank is near the stove and it only had a few gallons of oil in it, so it warmed up quickly to 55 degrees or so; the oil thinned out and the burner started running right (little smoke). The oil became noticeably thinner. In the second picture below you can see the oil appears quite black. Once the oil warmed another 15 degrees the oil color appeared amber and I could see the brass flare cap through it. So the oil was much thinner.
As the oil further heated the oil on the cap thinned and the atomization became better and the flame was even nicer. That process from cold start with temps in the low 40's to a really nice flame was about 20 minutes. I think this may be sufficient to justify an oil heater for the tank. If the oil was preheated to 90-100 degrees this burner would start with less smoke and be more consistent. Remember the goal is full automatic operation with this type of burner. This burner (unlike siphon burners) can't clog the nozzle so it should be very trouble free. I have a tank and some electric heating elements so I think I will set it up so the oil is heated. I could go the way of wrapping copper tubing around some part of the burner etc, but that would throw in more control variables and I don't need that.
Also, because of the varying viscosity of the oil (the flow was consistent due to the gear pump), I was able to turn down the speed of the motor and reduce the flow during the startup. The VFD has an analog input on it (most do), so the PLC could slow the pump down during startup to compensate for the thicker oil, but I think that simply heating the tank might be more effective. Still thinking. If I control the oil temp, oil flow, and air flow/pressure, this burner should be extremely predictable. Which makes automatic operation fairly simple.
4/4/15
Further development: I have a new oil feed system that is very precise. A NEMA 56 frame 3 phase motor is driven by an old Siemens variable speed drive. The motor is coupled to a Suntec J type oil pump with a Lovejoy coupling with a rubber insert. It runs very smoothly, is very precise, and is easily controlled. A 5 gallon container is being used as the oil tank at the moment; I used the same tank for my gravity feed tests. But now the tank sits on the ground and the J type pump draws oil out of it. Any excess oil over the Babington type nozzle surface runs downward, through a J trap, and then back into the same oil tank. This setup is really simple and very effective. This is a huge improvement over a gravity feed system, since the pump is a positive displacement pump and the flow does not vary with changes in viscosity. A PLC has been added along with a gas management manifold using some 24 V DC valves. A pushbutton box has also been assembled. The controls have been assembled (mocked up) on a piece of white poly. When the controls design is done, everything will go into an electrical box. The PLC has been programmed to preheat the burner tube for 5 minutes using propane at about 3 psi. After 5 minutes, the oil pump is turned on via the Siemens drive for one minute and the burner runs on propane and oil. Then, after the minute times out, the gas is turned off and air is turned on. After that no propane gas is required unless the burner is shut down, in which case it will go through the startup sequence again.
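For anyone wanting to copy that sequence without a PLC, here is a rough sketch of the same startup logic as it might look on an Arduino driving three relays (the pin assignments and blocking delays are illustrative assumptions only; the real controls here are in the PLC):

// Rough sketch of the startup sequence described above, on an Arduino Uno
// driving relays. Pin numbers are assumptions for illustration.
const int PROPANE_VALVE = 2;   // 24V propane valve via relay
const int AIR_VALVE     = 3;   // compressed air valve via relay
const int PUMP_RUN      = 4;   // run signal to the VFD / oil pump

void setup() {
  pinMode(PROPANE_VALVE, OUTPUT);
  pinMode(AIR_VALVE, OUTPUT);
  pinMode(PUMP_RUN, OUTPUT);

  // 1) Preheat the burner tube on propane for 5 minutes.
  digitalWrite(PROPANE_VALVE, HIGH);
  delay(5UL * 60UL * 1000UL);

  // 2) Start the oil pump and run on propane + oil for 1 minute.
  digitalWrite(PUMP_RUN, HIGH);
  delay(60UL * 1000UL);

  // 3) Switch from propane to compressed air for atomization.
  digitalWrite(AIR_VALVE, HIGH);
  digitalWrite(PROPANE_VALVE, LOW);
}

void loop() {
  // Normal running state: air + oil. A real controller would watch the
  // flame sensor and thermostat here and shut the pump down on a flame-out.
}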
This setup works well with the VFD running the 4 pole motor at about 8 hz and the air pressure at 12 psi. I also tried running the burner at 10 hz and 15 psi and that worked fine as well but I could see it was going to overheat the stove so I turned it down. (A good problem to have. :-) )
Here are some pictures of the latest setup:
Mockup of the controls on a poly board.
Variable Speed Drive
Motor and Feed pump setup running. Both are surplus equipment. The pump came off an old oil burner and the motor from a surplus store long ago. The coupling parts were bought off Amazon.
This setup with the VFD drive works very well and is very precise. This burner can be operated without something like this but it won't be as "tunable".
So.. what is left?
1. Development of a pilot flame system that includes flame detection (I already have a board to do that in the control system.)
2. Overtemp detection and shutdown safeties. These will be wired external to the PLC for increased safety. I want to put a switch on the stove and the stack.
3. Installation of a thermostat in the heated space. Programming is already in the PLC to accept a thermostat.
4. Installation of the working controls in a real electrical box.
5. Setup of a larger oil tank (which I already have) to allow longer run times without refilling.
6. Setup of a cart to mount the controls and pump on so it is semi portable.
3/8/15 - Major discoveries!
I made an adapter plate for this burner so I could fire this burner in my large potbelly stove. This stove is really large cast iron, about 4 1/2 feet tall, and several hundred pounds. It is probably near 100 years old and was used to warm a commercial shop with coal and wood. I used the iron flange shown above welded to a new door insert.
I fired the burner on propane and fed the burner oil via a gravity feed. It took a while to get the stove up to temperature, but after about 1/2 hr it hit 450 degrees at the stove's surface and I decided to see if I could run the burner on compressed air alone, no propane. At this point the burner tube was a lot hotter than that, probably 600-700 degrees. I set the air regulator to 10 psi and quickly made the change from propane to compressed air without stopping the oil flow. The flame went out when I disconnected the propane and then relit when I connected the compressed air!
So what was going on?? And why did this not work before?? I believe the difference was that the burner tube was in a hot stove. The oil was also somewhat warmer than before (about 40 degrees F), but I think the difference was really the hot burner tube and the fact that it was in a hot stove (burner chamber) that was reflecting radiant heat back on the burner tube.
So this burner likes being in a hot stove / burner box. Temperature makes all of the difference. A hot burner tube means efficient combustion of heavy oil and easier firing.
I ran the burner for over an hour on just compressed air and tested it at various air pressures and oil flows.
Most of the testing was with the compressed air at 8-10 psi. At that pressure, and with the flow optimized for a clean burn, I recorded an oil burn rate of 2 lb 12 oz in 30 minutes, or 5 lb 8 oz per hour, which is approx 150K btu per hour. At this rate the stove would maintain a nice surface temperature of about 450-475 degrees. However, a couple of times the flow into the furnace decreased (being fed via a gravity feed) and the flame would sputter. A quick opening of the ball valve controlling the flow got rid of the blockage, and then a readjustment of the valve regained the proper flow. This happened several times over a 1 hour period.
Later I decided to see how much heat this burner could put out in this stove and I increased the air pressure to 20 psi and adjusted the oil flow to make a nice hot flame. I estimate that the heat output doubled. The stove temperature quickly climbed well past 700 degrees and the stack temperature above the stove hit 600 degrees. Too hot! The stove was being overfired. I'm quite sure that this burner can maintain 300K btu output in the proper stove/boiler but this stove is not it! I shut the burner down and looked into the stove to see how hot the burner tube was and it was glowing a dull red color about 8 inches from the pipe tee.
Looking at this steel heat color chart, it appears that the burner tube was in the 1100 to 1200 degree range. I never saw the tube get that hot when the burner tube was operated in free air outside of a stove.
Conclusions from testing of 3/8/15
This burner will burn waste oil using compressed air alone. However, the burner tube must be very hot (preheated) and it needs to be in a "burner chamber" of some sort to maintain a high burner tube temperature. Propane can be used to bring the burner tube and fire box chamber up to temperature, but it takes about 30 minutes, which means 1-2 lbs of propane. Kerosene could likely be used as a liquid fuel to do the same thing instead of propane, but using propane at a few PSI of pressure along with waste oil is a very simple and virtually foolproof method of preheating the burner tube and fire box. So I am going to stick with the use of propane and waste oil to do the preheat.
Another way to maintain a high burner tube temperature without a firebox is to use a heat retaining ring around the first 6-8 inches of the burner tube. I plan on adding a square piece of steel tubing about 8" long that will fit over the first 8-9 inches of the burner tube. This should help reflect radiant heat back onto the burner tube and speed the heating of the tube. This should also allow the burner tube to operate on air only, after a preheat period, without a burner box or stove enclosure around the burner tube. This would be important since I would like to be able to run this burner inside of a boiler. The boiler's inside surface will stay below 250 degrees since there will be water on the other side of the boiler surface, so having this heat retaining tube around part of the burner tube will be necessary to allow this burner to work in a "cold" burner chamber, which is basically what a boiler is when there is no burner box.
Putting the burner into the stove drove home the fact that consistent oil delivery is VERY important. Too much oil and there is a lot of smoke. Too little oil and not enough heat. Too much compressed air and the flame runs lean. Too little air and the burner runs rich and smokes some.
So while a ball valve "could" be used to regulate oil flow, it would be a constant PITA to keep the burner from running rich or lean. Overfiring would also be possible which is not at all good.
A gear pump for consistent feeding is a very good idea. I think the total cost of the gear pump setup I am assembling would be roughly $200 if the motor and drive were purchased from Automation Direct and the pump was purchased off eBay. Not free, but this is a one time cost to burn a fuel that can be obtained for close to free!

Here are some pictures of the testing today: Slightly smoky stack when run on low, a little rich... too much oil or too little compressed air. The burner installed and running in the ancient cast iron stove. Burner operating on low heat - with 8-10 psi of air and a proper supply of oil, so no smoke. The same burner tube running on 20 psi of air and a good supply of oil for minimum smoke. The heat was so intense that this is as close as I dared put the camera to the stove. The radiant heat is very impressive.

I think that with the addition of a gear pump this would be a good manual burner setup as it is. Adding some controls would allow automatic operation.

My Original Ramblings - late 2014

Here is a Babington "style" waste oil burner I have been working on, on and off, for a couple of years. The Babington style burner design has been beat to death with little concrete results documented on the web for almost 10 years now. One article even declared the burner design a failure. You can find all kinds of YouTube videos of nice round balls or door knobs that are atomizing diesel fuel at 75 degrees F, which is totally unrealistic. If it is 75 degrees then why are you running a burner?? You need to flip on the air conditioner!! The only thing available for burning real waste oil is Turk style pot burners, which require maintenance and a noisy electric blower.

This burner makes flames and heat... The dirty oil does not pass through any orifices, so there is nothing to clog. I didn't try to get the oil film extremely thin on the nozzle; after all we are burning waste oil, not diesel, sunflower, or preheated veggie oil. No electricity is required to run this burner if you use a gravity feed. No compressed air is required to run this burner, so your compressor will stay quiet. (These concepts were later revised as development continued... continue reading.) (SEE ADDITIONAL CONCLUSIONS AT END OF ARTICLE THAT RESULTED FROM FURTHER TESTING)

What is required is propane at 1-2 lbs per hour depending on how hard you want to run this burner. Remember this burner can be turned down! Total BTU output is between 60 and 300K BTU based on actual testing. One lb of propane is about 1/4.5 x 90,000 = 20,000 btu. So what happens is that you pay for propane at 1-2 lbs per hour, which is 20-40,000 btu of propane, and along with that you also burn up to 2 gallons of waste oil. So the total heat output is actually near 300,000 btu when running full tilt. The cost to run this burner at current local low volume propane prices of about $2.40/gallon is 2 lbs / 4.5 lb/gal x $2.40 = $1.06 per hour for 300K btu, assuming you get the waste oil for free. If you were to run just propane to get this much heat, the cost would be 300K btu / 90K btu/gallon x $2.40 = $8.06 for 300K btu. I can live with just over a dollar per hour for 300K BTU of heat. That is very cheap heat. To put this into standard terms of dollars per million btu: 1 million btu / 300K btu x $1.06 = $3.53 per million btu of heat. (The final results are even cheaper than this.)
Now to put this into context, check out this website: Fuel Cost Comparison. So this is less than half the cost of heating with bulk coal (if you can buy it), and less than the cost to heat with wood if you can buy wood for about $50 per cord!
I have been testing this design in 15 degree F ambient temperatures. This design is realistic and usable and doesn't require constant tweaking and fiddling to get it to run right. The amount of oil fed to the burner can be varied without creating smoke meaning that you can turn this burner down for use in moderate temperatures or turn it up when it is really cold.
This burner uses propane to atomize the waste oil, not compressed air. Compressed air works ok if the conditions are just right, the oil is preheated, and the oil is not too heavy, blah blah blah.... In other words compressed air is simply not practical! (REVISED AFTER FURTHER TESTING - see below)
Stay tuned, as this burner will be made available as a kit on Ebay at a reasonable price for experimental use. (IE, if you light your house on fire... don't blame me..as you are playing with FIRE! )
The kit will consist of most of the parts in the picture. You will need to supply a propane cylinder with gas; a 20 lb unit will last for 10+ hours. And you will need to supply a source of waste oil under some slight pressure to feed the steel pipe tube that is above the pipe tee in the pictures. A gravity feed from 3 feet above the burner works fine. Optionally you can use a small motor driven gear pump to meter oil to the burner (recommended), or a peristaltic pump. That will allow you to pump the oil out of a container to the burner, and then the unburnt oil can recirculate back to the tank.
Below is a picture of the burner running on REAL waste oil. Most of this is 15W-40 and 10W-30 conventional oil with some synthetic added in. It is throwaway oil from engine oil changes. Those are bright yellow, whitish flames with virtually no smoke.
Here is the burner preheating on propane only. It takes about 3 minutes to bring the burner tube up to temperature (700+ degrees, which is required) in order to burn waste motor oil without smoke. This is with the propane setting at about 1.5 lbs per hour. (Notice the snow to the right... it was cold! )
Uses for this burner:
Put this into the end of a 55 gallon drum with an exhaust stack for a space heater. Or weld a large pipe or cylinder into a 55 gallon drum or a 275 gallon oil storage tank and create an ambient pressure water heater. That water can be used to heat your house or shop or both!
Parts:
Here is a 2" black iron floor flange that I have bored out on a lathe so the burner tube can slide throught it. I am going to use this to mount the burner to a steel bulkhead on a wood burning stove. This could be valuable for many installations such as using this burner with a 55 gallon barrel for a space heater. This could also be used to adapt an existing wood burner or boiler to use this burner. I think the threaded flange was $12 at Home Depot. A muffler exhast clamp could also be used to mount this burner tube. The bored out iron flange with the burner tube slid though it. Other Considerations: Some type of flame safety needs to be implemented if this burner is operated in an enclosed space so that if the flame goes out a hazardous situation is not created. I may elaborate on this later, but a flame safety unit from an old oil heater could be used for this purpose. Further thoughts: After doing further reasearch on burners I found that gas assisted oil burners are rather rare. However there is a patent (which has expired) to use a gas assisted burner to burn crude oil in artic temperatures as a means of getting rid of waste crude oil. Waste oil is probably as uncertain in viscosity as crude oil. The patented deviced use a circular burner ring of gas flame around the oil jet of some sort. The patent is actually a little vague. If you do a google search for "gas assisted oil burner" it will pop up.After all who knows what they dump into the waste oil tanks at lube shops. A mixture of oils and antifreeze is not unusual. I did find a bunker oil burner design patent that used two gas burner jets on both sides of the main oil burner jet but keep in mind that this was for high volumes of oil. Not 1-2 gallons per hour. Probably more like gallons per minute. One of the challenges with any burner design and control is safety. It is normal to purge the combustion chamber prior to a startup to get rid of fumes that might ignite and explode. However waste oil burners (especially the one described above) is not really adaptable to standard oil furnaces. The flames are very long ( heavy oil ) and the burner tube is also very long (24" in the picture above). While a shorter tube is possible, I think that might cut down on effciency and make the burner more susceptable to smoking and limit the ability to "turn down" the burner. Right now this burner can be turned way down without smoking. Oil feed systems: A gravity feed system for a burner that will be attended that is using propane to atomize the oil is reasonable, but it must be attended! If the oil runs out, the propane continues to run, but there will be reduced heat output. But no big deal, the person attending to the burner should be able to detect that there is an issue when the heat output is reduced and make corrections. For unattended operation (similar to the Murphy Machine automatic boiler design) a more reliable way of maintaining oil feed is needed. A constant flow system would be best so that it can be set and forgot. The most reliable way of doing that is with a gear type pump such as an oil burner pump! I have done testing with an old oil burner pump made by Suntec. In particular their J type pump which has apparently be made for decades. The pump I have will deliver a constant output flow when fed via gravity. I tried driving the pump with a variable speed drill at about 500 rpm. By varying the speed I can get a nice variation of oil flows. 
I have a small 3 phase motor that I will couple to the Suntec pump's 7/16" input shaft and then control the motor speed via a single phase input VFD. Another way to do this would be to use a standard 1/2" drill chucked onto the pump shaft and a speed control unit to turn the drill speed down. An entire setup like that could be bought from Harbor Freight for probably $75. However I don't know how long a Harbor Freight 1/2" drill would last being run continuously.
So I am rethinking a couple of things, and these are the questions I have... and some conclusions:
1. Can a gas assisted burner be used along with a Babington type ball atomizer to get the best of both worlds?? I.e., reduced propane usage (below the current 1-2 lbs per hour) with the benefits of more complete combustion by the use of propane.
Conclusion: After some testing, not easily and not without making the burner MUCH more complex. So I have stopped pursuing that.
2. Can the propane jet be turned down when not running oil in such a way that it can be used as a pilot light? A couple of electric valves and two propane pressure regulators could be used to implement a two setting propane burner. This would facilitate unattended operation. Turn the gas on high, preheat the tube, add oil and burn at full output. Then when temp is reached, turn off the oil, wait for the oil to clear the burner, then reduce the gas flow and maintain a pilot.
Conclusion: (Revised) No, using the burner in a very large stove, the low pressure flame via the main jet becomes unstable at "low pilot level flows". This was a surprise since this was not a problem when the burner was running in the open.
3. Can an induced draft be created in the furnace stack using compressed air to "suck" any vapors that are present out of the furnace? (Think "air jet" in the center of the exhaust stack pointing in the direction of the desired flow.) Using an air jet to purge the combustion chamber is: a. simple! b. cost effective. Using 20-30 cf of compressed air to purge the chamber a couple of times per day is much cheaper and simpler than installing and maintaining a blower to pressurize the chamber to blow out any fumes.
No conclusion yet.
4. Is there any real advantage to preheating the oil to a constant temperature?? As long as the oil can be pumped, so far the answer I have for that is no. I can perceive no benefit, but this has been a nagging question. The guys making burners to melt aluminum and iron have decided that preheating waste oil has no benefit for them, but this is a different application. Preheating the oil to 150 degrees or so with electricity is fairly simple, so it may be worth doing some experimenting.
No conclusion yet.
5. Why does it appear that some people can just use air with their Babington type burner?
Conclusion: They can't! Read and watch closely and you will find that they have cut the oil with diesel or kerosene or paint thinner. Or they are operating the burner in warm weather with warm oil. Not realistic. Try and find someone who runs a Babington style burner routinely to provide heat. I can't find any. (REVISED - see below)
6. What about Delavan suction nozzles??
Conclusion: Many commercial burners use this nozzle. Also you will find many very frustrated users of these burners. That nozzle can only suction oil up from 1/2" below the nozzle level. Any blockage, or a change in viscosity, or antifreeze, and you will get a flame out. Not good.
7. What about burning synthetic oil?
Conclusion: (Revised) The key is a hot burner tube. If it is well above the flash point of synthetic oil, then that oil will burn just as well as conventional oil.
## 22 June 2018
### Pythagorean theorem
Move the orange and/or the magenta points in order to change the length of the sides of the right triangle and observe the changes in the areas of the respective squares. The Pythagorean theorem states that 'the square of the hypotenuse is equal to the sum of the squares of the other two sides of the right triangle'.
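In symbols, if the legs measure $a$ and $b$ and the hypotenuse measures $c$, the theorem says $a^2+b^2=c^2$; for instance, $3^2+4^2=9+16=25=5^2$.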
## 21 June 2018
### Square root of five
$\sqrt 5$ is an irrational number which is present in the golden ratio.
It is also the measure of the hypotenuse of a right triangle whose other two sides measure $1$ and $2$ units.
We have also:
$\sqrt 5 = e^{i\pi}+2\phi$;
$\sqrt 5 \approx \frac{85}{38}$.
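A quick check of the first identity: since $\phi=\frac{1+\sqrt 5}{2}$ and $e^{i\pi}=-1$, we get $e^{i\pi}+2\phi=-1+(1+\sqrt 5)=\sqrt 5$. For the approximation, $\left(\frac{85}{38}\right)^2=\frac{7225}{1444}\approx 5.0035$, so $\frac{85}{38}\approx 2.2368$ is close to $\sqrt 5\approx 2.2361$.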
## 20 June 2018
### Dattel
$3x^2+3y^2+z^2=1$
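This is an ellipsoid of revolution about the $z$-axis, with semi-axes $\frac{1}{\sqrt 3}$, $\frac{1}{\sqrt 3}$ and $1$ along the $x$, $y$ and $z$ directions.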
## 19 June 2018
### Result of five operations (version 2)
Think of a number. Take a paper and a pen to do the following operations:
• Think of a number;
• Calculate its double;
• Add $10$ units to the result;
• Calculate its half;
• Subtract the number that you thought of.
The number that you got after these five operations was $5$, wasn't it? Can you explain the trick?
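One way to see it: call the number you thought of $x$. Its double is $2x$; adding $10$ gives $2x+10$; half of that is $x+5$; subtracting the original number leaves $(x+5)-x=5$, whatever $x$ you started with.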
## 18 June 2018
### Oware
Oware is a game belonging to the family of Mancala games, also known as sowing games or count-and-capture games; these games have an important role in many African and Asian societies.
Oware is a game for two people, played on a board with $12$ houses, $2$ deposits and $48$ seeds. The goal is to collect as many seeds as possible; the winner is the player who gets $25$ or more seeds in his deposit.
Rules
Each player chooses his side of the board. To decide who will start the game, one player hides a seed in one hand; if the other player guesses the hand, then that player starts the game. Otherwise, the first player starts. The deposit of each player is to his right.
At the start of the game each house has $4$ seeds. The player who starts collects all the seeds of one of his houses and distributes them one by one into the following houses, moving anti-clockwise, so that some of the seeds land in the houses of the opponent. If a house has more than $12$ seeds, the starting house is skipped in the distribution of the seeds. A player should not play from a house with only one seed while other houses hold more seeds.
If, at the end of placing the seeds, the player finds opponent's houses containing $2$ or $3$ seeds, counting those that he has just placed, then he can capture them and put them in his deposit. In the event that one of the players runs out of seeds on his side after making his move, the opponent must make a move that puts seeds into the other player's houses. If a player takes a capture and leaves the opponent without seeds, he will be required to make a move that introduces seeds into the houses of the opponent.
When a player runs out of seeds and the opponent cannot play in a way that introduces seeds into this player's houses, the game ends and the opponent collects the seeds that are in his houses and places them in his deposit. The player with the most seeds wins. When the game is almost over and the number of seeds means that a situation repeats indefinitely, then each player takes the seeds that lie in his houses, and the one with more seeds in his deposit wins.
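To make the sowing-and-capture mechanic concrete, here is a small, simplified sketch in C++ (it implements only the basic anti-clockwise sowing, the skip-the-starting-house rule, and the backwards capture of houses with $2$ or $3$ seeds; the "feed the opponent" obligations described above are not enforced, and the board indexing is an assumption for illustration):

```cpp
#include <array>
#include <iostream>

// Houses 0-5 belong to player 0, houses 6-11 to player 1; sowing is anti-clockwise.
struct Board {
    std::array<int, 12> house{};   // seeds per house
    std::array<int, 2> store{};    // each player's deposit
};

bool isOpponents(int h, int player) { return (h / 6) != player; }

void playMove(Board& b, int player, int from) {
    int seeds = b.house[from];
    b.house[from] = 0;
    int h = from;
    while (seeds > 0) {
        h = (h + 1) % 12;
        if (h == from) continue;   // skip the starting house on a full lap
        b.house[h] += 1;
        --seeds;
    }
    // Capture backwards while the opponent's houses now hold 2 or 3 seeds.
    while (isOpponents(h, player) && (b.house[h] == 2 || b.house[h] == 3)) {
        b.store[player] += b.house[h];
        b.house[h] = 0;
        h = (h + 11) % 12;         // step back one house
    }
}

int main() {
    Board b;
    b.house.fill(4);               // each house starts with 4 seeds
    playMove(b, 0, 2);             // player 0 sows from his third house
    std::cout << "Player 0 captured " << b.store[0] << " seeds\n";
}
```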
## 17 June 2018
### Challenge #10
The following figure is composed of $16$ matches that form $5$ congruent squares.
Modify the position of only $3$ matches in order to obtain $4$ congruent squares.
# FASB codification research; researching the way long-term debt is reported; Macy’s, Inc. EDGAR,...
Research Case 14-10 FASB codification research; researching the way long-term debt is reported; Macy's, Inc. EDGAR, the Electronic Data Gathering, Analysis, and Retrieval system, performs automated collection, validation, indexing, acceptance and forwarding of submissions by companies and others who are required by law to file forms with the U.S. Securities and Exchange Commission (SEC). All publicly traded domestic companies use EDGAR to make the majority of their filings. (Some foreign companies do so voluntarily.) Form 10-K, including the annual report, is required to be filed on EDGAR. The SEC makes this information available on the Internet.
Required:
1. Access EDGAR on the Internet at www.sec.gov or from Investor Relations at Macy's, Inc. (www.macys.com).
2. Search for Macy's. Access its 10-K filing for the year ended January 30, 2016. Search or scroll to find the financial statements and related notes.
3. What is the total debt (including current liabilities and deferred taxes) reported in the balance sheet? How has that amount changed over the most recent two years?
4. Compare the total liabilities (including current liabilities and deferred taxes) with the shareholders' equity and calculate the debt to equity ratio for the most recent two years. Has the proportion of debt financing and equity financing changed recently?
5. Does Macy's obtain more financing through notes, bonds, or commercial paper? Are required debt payments increasing or decreasing over time? Is any short-term debt classified as long-term? Why?
6. Note 6: Financing includes the following statement: "On November 18, 2014, the Company issued $550 million aggregate principal amount of 4.5% senior notes due 2034. This debt was used to pay for the redemption of the $407 million of 7.875% senior notes due 2015 described above." Under some circumstances, Macy's could have reported the amounts due in 2015 as long-term debt at the end of the previous year even though these amounts were due within the coming year. Obtain the relevant authoritative literature on classification of debt expected to be refinanced using the FASB Accounting Standards Codification. You might gain access from the FASB website (www.fasb.org), from your school library, or some other source. Determine the criteria for reporting currently payable debt as long-term. What is the specific codification citation that Macy's would rely on in applying that accounting treatment?
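As a reminder for requirement 4 (plug in Macy's actual reported totals from the 10-K, not illustrative numbers):

$$\text{Debt to equity ratio}=\frac{\text{Total liabilities (including current liabilities and deferred taxes)}}{\text{Total shareholders' equity}}$$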
References for the construction of various number types
Hello, I am a high school student currently reading through Calculus by Spivak, which has been recommended by many people on this site. I was slightly disappointed by the first chapter, in which properties of numbers are discussed. I value rigour highly and don't want to learn calculus until I understand the abstract and axiomatic aspects of how numbers work. The author, for example, does not construct $\mathbb{N}$ from Peano's axioms, nor does he construct $\mathbb{Z}$ and $\mathbb{Q}$ as, respectively, equivalence classes of ordered pairs of natural numbers and integers. He doesn't talk about the field/ring axioms either. He also makes various assumptions which are bizarre considering how he is otherwise meticulous (closure under addition and multiplication, for example).
This leads me to my question: can someone suggest free, online references (class notes will suffice) which discuss the construction of various types of numbers from a rigorous standpoint?
• Landau's Foundations of Analysis has what you want. I don't know where you can get it for free, but in less than a minute I found a used copy for \$7 with free shipping in the USA. – bof Dec 4 '15 at 8:59
• You can see also : ITTAY WEISS, The Real Numbers - a survey of constructions. – Mauro ALLEGRANZA Dec 4 '15 at 11:55
## LaTeX forum ⇒ XeTeX ⇒ Error message
Information and discussion about XeTeX, an alternative to pdfTeX based on e-TeX
thatha
### Error message
Hi,
I am relatively new to using LyX on Mac OS X. I keep getting an error message which I don't understand how to solve; I have searched for the problem and found some solutions on the web, but they don't seem to work. I would really appreciate it if anyone could help me out!
xetex error
Description:
\usepackage{graphicx}^^M (cannot \read from terminal in nonstop modes).
I have verified that 'undertilde' is automatically downloaded, and I have tried adding \usepackage{graphicx} by writing it into the LaTeX preamble in the document settings. I have MacTeX, and I thought that if you have it, then you don't need to download packages manually?
Thanks!
Johannes_B
Hi and welcome.
thatha wrote: I have MacTeX, and I thought if you have it, then you don't need to download packages manually?
True, but MacTeX is based on the generic TeX Live. According to CTAN, the centralized distribution center for packages, the package undertilde is only included in MiKTeX, but not in TeX Live.
Are you sure you really need this package? Please comment it out in the document preamble and try typesetting the pdf. If no error arises, the package isn't even needed.
If you really need it, download the contents of the zip-file on CTAN and install the package by running tex on the ins-file, then copy the result to your local texmf tree. <- this part may be confusing, but I am quite sure you aren't even using the package. There might even be better alternatives out there by now.
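For example, on a Mac the whole process might look roughly like this in a terminal (the paths are only an illustration; `kpsewhich -var-value=TEXMFHOME` tells you where your personal tree actually is):
`cd ~/Downloads/undertilde` (wherever you unpacked the CTAN zip)
`latex undertilde.ins` (this generates undertilde.sty)
`mkdir -p ~/Library/texmf/tex/latex/undertilde`
`cp undertilde.sty ~/Library/texmf/tex/latex/undertilde/`
Files placed in the personal tree are normally found without running any database update.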
thatha
Hi,
Thanks for the quick reply! I am assuming that I need the 'undertilde' package because I keep getting the error. How can I find out whether one needs the package or not?
I have written the two following packages in the Documents > Preferences > LaTeX preamble tab without any luck.
\usepackage{undertilde}
\usepackage{graphicx}
I tried your suggestion without any success. TeX has several texmf trees (texmf-var, texmf-dist, etc.); into which of those should I put the undertilde package that I downloaded from CTAN?
Thanks!
Johannes_B
Currently, you are requesting the package with `\usepackage`. It is like getting a book from the shelf with different blueprints of chairs in it. But do you need the book if you have no intention of carpenting any chairs? It might be a leftover from somebody building a template. If it is a leftover, prevent loading the package by placing a percent sign before it. If there really is a need for chair carpenting, you will get a different error message.
In your home directory there is a `texmf` tree (TEXMFHOME) where you should install the package. On Macs, this could be in a Library directory, I am not sure.
Stefan Kottwitz
Hi Thatha,
welcome to the forum!
To add to Johannes' remark, you can find the directory for local installations on a Mac by typing in the terminal:
`kpsewhich -var-value=TEXMFHOME`
On my Mac, it returns: /Users/stefan/Library/texmf.
However, I would use the TeX package manager if possible. After a look at the undertilde CTAN package page, I see it's provided by MiKTeX but not by TeX Live or MacTeX. But you could download it from CTAN.
Though, first verify that you really need the package. If you just want to get rid of the error, comment that line out.
Stefan
thatha
Hi again,
Sorry for the delayed answer, I sort of postponed it. Before going into the details of my problem: the .lyx file that I am working on compiles without any problem on Windows; however, using LyX on Mac OS X seems to cause me huge problems which I did not confront before.
I typed a new document with just a simple phrase, since I thought it might be some bug in the file I am working on, but I get the same error message! However, surprisingly, when I open a help file such as the introduction.lyx file, I have no problem compiling the document into PDF!
I tried nearly all your suggestions and even took the problem to the IT service at my university (they don't really work with these issues, but they were kind enough to put some effort into helping me out), without any further solutions.
What I have done so far: I downloaded undertilde (in this folder there is no file called undertilde.sty, but there is an undertilde.ins) and underscore (in that folder there is one called underscore.sty) from CTAN and copied them to texmf - put into the tex folder (I am not sure whether I put them into the right folder; there are 6 other folders in texmf).
After doing this manipulation without any result, I got the same error message.
I finally reinstalled LyX completely; the problem did not go away.
I might be doing everything wrong. Please, if anyone has confronted the same issue and has the solution, I could really use it.
The error message that I get is:
Description:
\usepackage{graphicx}^^M (cannot \read from terminal in nonstop modes).
Thanks!
Johannes_B
Just put the file in the same folder as your lyx-file.
thatha
Thanks but it simply doesn't work. I don't know exactly how to continue with this problem.
Stefan Kottwitz
Hi Thatha,
"it doesn't work" is not a good problem description. What is "it"? I'm sure you refer to Johannes post above, but what exactly did you do? (I could work with Johannes advice)
.ins and .dtx are source code files of a package, with documentation. You can produce a .sty file, if you run LaTeX on the .ins file.
So,
• download `undertilde.ins` and `undertilde.dtx` from CTAN here,
• put both files in your document folder,
• open a terminal window, change to the folder, and run "`latex undertilde.ins`".
But are you sure you need the `undertilde` package in your document? Try commenting it out: place a % before that line:
`%\usepackage{undertilde}`
thatha
Hi Stefan_k,
Yes, that is correct - I was referring to Johannes' comment. I did exactly as Johannes and you described: I downloaded the undertilde.ins and undertilde.dtx files and then put both files in the same folder as my .lyx file. Further, I also commented out the undertilde package in the LyX preamble. It still doesn't work. I thought it could be that the document is damaged (it contains lots of tables etc.), so I opened a new LyX document, wrote a few phrases, and tried to compile it into a PDF file, but I get the same error message. The only documents that I manage to compile into PDF without any error message are the included LyX documents (the help documents about LyX).
I am not sure if I am missing something?
Overview - Maple Help
latex
produces output suitable for LaTeX printing and provides the translation functionality for File -> Export -> LaTeX
Calling Sequence
latex(expr, options)
LaTeX(expr, options) : LaTeX is a synonym of latex
Parameters
expr - any Maple expression or construction, or a sequence of them
Options
• append = ... : the right-hand side can be true or false (default); related to writeto.
• asinpreviousreleases = ... : the right-hand side can be true or false (default); if true, the old latex program, of releases previous to Maple 2021, is used.
• breaklines : related to linelength, the right-hand side can be true or false (default), to break lines using \\ according to the value of the linelength keyword, that can also be set and is shown via $\mathrm{latex}:-\mathrm{Settings}\left(\mathrm{linelength}\right)$.
• filename = ... : deprecated (superseded by writeto), the right-hand side can be a symbol or a string representing a filename; if passed, output = file is also required.
• forget = ... : the right-hand side can be true or false (default); to forget previously cached translations at the time of performing a latex translation.
• linelength : related to breaklines, the right-hand side can be any positive number as an estimation of the desired number of characters in a math line. Its default value can be changed and is shown by $\mathrm{latex}:-\mathrm{Settings}\left(\mathrm{linelength}\right)$.
• output = ... : the right-hand side can be the keyword string, to return a string with the latex translation, or file in which case filename = ... is also required (deprecated use, superseded by writeto).
• thisisinput = ... : the right-hand side can be true or false (default); when given, some typesetting restrictions apply and a prompt > is included if the Maple MW file was created as a worksheet (see interface,format).
• translation = ... : the right-hand side can be full (default), or restricted, in which case significant typesetting restrictions apply: the translation is done as you see in the input of help pages.
• writeto = ... : the right-hand side can be screen (default) or any symbol or string representing a filename to which the output will be written; it can be used together with append, to append instead of overwrite a previously existing filename.
• All the optional keywords that can have the value true on the right-hand side can be passed just as themselves, not as an equation, representing the value true. For example forget is the same as forget = true. Also, you don't need to use the exact spelling of any of these keywords - any unambiguous portion of them suffices, e.g. previous for asinpreviousreleases.
Description
• The latex function produces output on the screen which is a translation to LaTeX of its arguments, and the LaTeX command is a synonym of latex. This command has been rewritten for Maple 2021, and it can now translate to LaTeX everything that can be displayed on a Maple worksheet or document (exception made of embedded components and DocumentTools objects).
• The Export As > LaTeX related functionality has also been rewritten, now resulting in a LaTeX file where everything you see on the worksheet is translated, that contains equation labels, hyperlinked in the input and text when they appear as such in the worksheet being exported, and where the input and output in the LaTeX file are formatted using automatic line breaking.
• There are three main ways of using latex.
– Entering latex(expression) followed by pressing Enter produces the display on the screen of the LaTeX form of expression, that you can copy and paste into a .tex file, to be used with any LaTeX application (not provided with the Maple system).
– Any Maple expression being displayed, input or output, or a subexpression of it, can be highlighted with the mouse, then translated to LaTeX and copied in one go through right-click and using the menu Edit > Copy As > Copy as LaTeX.
– Any Maple worksheet, say filename.mw, can be translated to LaTeX as a whole using the menu File > Export As > LaTeX. The resulting file, filename.tex, can be processed with any LaTeX application to produce a PDF file that looks as the worksheet.
• Using File > Export As > LaTeX you can produce LaTeX versions of course lessons or entire scientific papers directly in the Maple worksheet, that combines what-you-see-is-what-you-get editing capabilities with the Maple computational engine to produce mathematical results. Relevant for this purpose, you can:
– In the worksheet, before exporting, remove all or selectively some of Maple's input, while keeping all of Maple's output and corresponding equation labels, by respectively using the menus Show/Hide Contents > Input or Edit > Delete Element. This is particularly useful to produce LaTeX mathematical documents that entirely or partially hide their computer algebra origin.
– In the worksheet, select any mathematical expression written using Maple syntax that appears within the text (e.g. Int(f(x), x)), or as Maple input and, use Format > Convert To > 2-D Math Nonexecutable (or, alternatively right-click (Command-click, on Mac) and select Convert To > 2-D Math Nonexecutable) to produce a textbook mathematical display of that expression, that will then appear as such in the LaTeX exported document.
– In the resulting filename.tex file, adjust the automatic line-breaking in the formulas by placing \\ wherever you want an additional line break, or by enclosing a formula's subexpression between { } to avoid its automatic line-breaking. This provides valuable flexibility to tweak the way the filename.tex LaTeX file is displayed.
– Set several preferences regarding how to perform the LaTeX translation using the latex:-Settings command. To see the different settings and their current value enter $\mathrm{latex}:-\mathrm{Settings}\left(\right)$.
• The translation of mathematical expressions performed by latex precisely respects the form and color you see on the screen with very few exceptions (e.g. the blue in Maple's output is not translated). For example, inert functions are translated using gray color the way it is used to display them in the worksheet. To change this or other default behaviors you can use latex:-Settings; e.g., to avoid using color and have everything translated in black enter $\mathrm{latex}:-\mathrm{Settings}\left(\mathrm{usecolor}=\mathrm{false}\right)$.
• Mathematical functions are translated to LaTeX using the notation shown in the NIST Digital Library of Mathematical Functions. To change that see latex:-Settings and latex,functions.
• All alias, and print/F routines you may define to compactly display expressions or a function $F$ in any particular way, as well as the value of $\mathrm{interface}\left(\mathrm{imaginaryunit}\right)$, are taken into account at the time of translating to LaTeX.
• It is possible to change or extend the capabilities of latex to translate any function, say $F$ by defining a procedure with the name latex/F. The latex command uses this procedure when it encounters a function with name $F$ within the expression(s) being translated. For more information, see latex,functions.
• You can use the GraphTheory:-Latex command to generate LaTeX code for graphs constructed with the GraphTheory package.
• The latex command displays output on the screen that can be copied and pasted elsewhere; it however returns NULL as the function value. Therefore the ditto commands, % and %%, will not recall the previous latex output. To get output (return value) different from NULL, you can use the optional argument output = string.
The latex:-Settings
• Several settings of the translation to LaTeX can be adjusted according to preference, using the latex:-Settings command or its synonym LaTeX:-Settings. These settings and the corresponding default values can be seen by entering the command with no arguments or using the keyword query
> latex:-Settings();
$\left[{\mathrm{cacheresults}}{=}{\mathrm{true}}{,}{\mathrm{commabetweentensorindices}}{=}{\mathrm{false}}{,}{\mathrm{invisibletimes}}{=}{""}{,}{\mathrm{leavespaceafterfunctionname}}{=}{\mathrm{false}}{,}{\mathrm{linelength}}{=}{66}{,}{\mathrm{powersoftrigonometricfunctions}}{=}{\mathrm{mixed}}{,}{\mathrm{spaceaftersqrt}}{=}{\mathrm{true}}{,}{\mathrm{usecolor}}{=}{\mathrm{true}}{,}{\mathrm{usedisplaystyleinput}}{=}{\mathrm{true}}{,}{\mathrm{useimaginaryunit}}{=}{I}{,}{\mathrm{useinputlineprompt}}{=}{\mathrm{true}}{,}{\mathrm{userestrictedtypesetting}}{=}{\mathrm{false}}{,}{\mathrm{usespecialfunctionrules}}{=}{\mathrm{true}}{,}{\mathrm{usetypesettingcurrentsettings}}{=}{\mathrm{false}}\right]$ (1)
• The possible values of the right-hand sides and their meaning is as follows
– cacheresults : all results by the internal latex subroutines are cached, for performance, so the translations are computed only once. However, if you are changing things in your worksheet that affect the way things are displayed, previously cached results may prevent you from getting different LaTeX translations for the same input. To avoid caching results pass $\mathrm{cacheresults}=\mathrm{false}$. Alternatively, use the latex option forget, or the command $\mathrm{latex}:-\mathrm{Forget}\left(\right)$.
– commabetweentensorindices : default value is false; if set to true, a comma between tensor indices is placed also in the LaTeX translation, so that they are displayed the same way they are on a Maple sheet.
– invisibletimes : sets the string used to represent the product operator, by default " ". That matches the default of the LaTeX typesetting system where no spacing is placed between the operands of a product, so a*b is displayed $\mathrm{ab}$. To have a small space between the operands of a product you can use $\mathrm{invisibletimes}="\\,"$
– leavespaceafterfunctionname : default value is false; if set to true, no \! LaTeX negative spacing command will be placed between a function name, say $F$, and the parentheses \left( \right) surrounding the function's arguments.
– linelength : default value is 66; it indicates the approximate length at which a line-break $"\\"$ should be introduced when using the option breaklines.
– powersoftrigonometricfunctions : default value is mixed; other possible values are textbooknotation and computernotation. With the value mixed, ${\mathrm{sin}\left(x+y\right)}^{2}$ is translated to LaTeX as \sin^2 (x + y), and upon compilation will look like ${\mathrm{sin}}^{2}\left(x+y\right)$. With the value textbook, in addition, inverse trigonometric functions are translated to LaTeX with the notation that uses the name of the corresponding trigonometric function power -1, so $\mathrm{arcsin}\left(x+y\right)$ is translated as \sin^{-1} (x + y). With the value computer the translation is an exact replica of what you see displayed on the Maple sheet.
– spaceaftersqrt : default value is true; to insert or not a LaTeX small spacing command after a square root when it is an operand of a product.
– usecolor : default value is true, so that colors in the worksheet are translated as such to LaTeX.
– usedisplaystyleinput : default value is true, so that when using File > Export As > LaTeX all the input lines appear preceded by \displaystyle, i.e.: when compiling the .tex file, these lines are displayed with LaTeX math display style (the size of the fonts won't depend on the context).
– useimaginaryunit : default value is the one shown by $\mathrm{interface}\left(\mathrm{imaginaryunit}\right)$, that in Maple is the capital letter I. This setting can be used to indicate the use of a different symbol when translating to LaTeX.
– useinputlineprompt : can be true or false, to put or not a prompt at the beginning of Maple input lines when using File > Export As > LaTeX. When the Maple sheet was created as a worksheet, $\mathrm{interface}\left(\mathrm{format}\right)="worksheet"$ and so the default value of useinputlineprompt is true. Otherwise, when it started as a document, $\mathrm{interface}\left(\mathrm{format}\right)="document"$ and the value of useinputlineprompt is by default false. You can use useinputlineprompt to override these default values.
– userestrictedtypesetting : default value is false; if set to true, only a restricted form of typesetting, like the one used in the input lines of the Maple help pages, is used when translating to LaTeX.
– usespecialfunctionrules : default value is true; if set to false, no typesetting for the notation of mathematical functions is used.
– usetypesettingcurrentsettings : default is false; if set to true the Typesetting rules set in the Maple worksheet are not overridden by latex.
NOTE: unlike the case where you input latex(expression), when translating whole Maple worksheets using File > Export As > LaTeX the design is to export the worksheet to LaTeX reproducing what you see on the screen. So in that case the following settings are ignored: powersoftrigonometricfunctions, useimaginaryunit, userestrictedtypesetting, usespecialfunctionrules and usetypesettingcurrentsettings.
Tips for File > Export As > LaTeX
• In order to translate everything that can be displayed on a Maple sheet, some CTAN official LaTeX packages are used, listed at the beginning of the exported .tex file. Current LaTeX applications include those packages. Also, a copy of the maple.sty file, found in the etc directory of your Maple installation (kernelopts(mapledir)), should be in the same directory where you placed the .tex file that resulted from File > Export > LaTeX.
• Input lines can be entirely or partially omitted in the exported LaTeX document while keeping all the equation labels unchanged by respectively clearing the check box for Show/Hide Contents > Input in the View menu or using the menu Edit > Delete Element.
• The Maple sheet input lines appear in the exported TEX file not indented and using the automatic line-breaking of the LaTeX math {$...$} environment. The output lines appear using the automatic line-breaking of the {dmath} environment of the breqn LaTeX package, plus some additional automatic line-breaking performed by Maple routines where necessary beyond the {dmath} environment. The additional automatic line-breaking performed by Maple routines appears in the exported TEX file as lines whose only contents is \\, and the linelength keyword of the latex:-Settings can control their occurrence. Both input and output lines can be split further by inserting, in the exported file, \\ wherever you want an additional line break.
• To avoid an automatically occurring line-break, in the exported TEX file, enclose within {} the parts of an expression you want to be in the same line. Together with the one of the previous paragraph, this mechanism complements the automatic line-breaking, giving you full control of how lines are split in the exported document.
• When working with large expressions, neither the automatic line-breaking, nor the breaklines option of latex, will break lines in fractions or across matrix lines (i.e., the number of matrix columns is not altered).
• Matrices have an excellent chance of being appropriately exported, provided that they fit within the margins of a page. Note that this is not always the case, even when the matrix fits within the margins of the computer screen.
• Before exporting, check the menu options View > Markers, View > Sections > Show Section Boundaries and View > Show/Hide Contents > Execution Group Boundaries. That provides visual indicators of where bookmarks, Sections, paragraphs (text), input and output regions start and end. Using the menu Format > Bookmarks you can change bookmarks, relevant in Table Of Contents, and using Edit and its submenus, you can Split or Join execution groups or sections, Delete Elements, and adjust these regions to get the exported TEX file looking as desired.
• When inserting non-ASCII characters in the middle of text, for example, by clicking the palettes, place them within inlined math regions. To start such a region, you can press Shift + F5, type or insert the characters, then end with Ctrl + T (Command + T, on Mac) to return to text mode.
• When inlining math within the text, make sure you have both opening and matching closing parenthesis (), {} or [], within the math region. That assures their display in the printed exported LaTeX document is optimal and avoids having unmatched opening or closing delimiters.
• When closing inlined math, make sure there is a blank text character before any text that follows, instead of an inlined math blank character. While in the Maple sheet these two different blank characters look the same, that is not the case in the exported LaTeX document, where a trailing math blank character is not displayed.
• Images can conveniently be inserted in the Maple sheet using the menu Insert > Image > From File. These images appear exported in the TEX file using the \includegraphics command, with the options keepaspectratio and width = ... where the right-hand side has a maximum value of 3.9 in regardless of the width of the image. You can adjust the size of the image in the TEX file by changing those two options or using the height = ... option.
• Embedded components are not translated - there exist no natural LaTeX commands to translate them onto. The alternative is to handle the visualized embedded component as an image. For that, you can either use the menu Edit > Copy > Copy as Image or, depending on your OS/software, take a screenshot. Then insert this image file in the right position using the menu Insert > Image > From File. The image will appear in the right place in the .tex exported file and you can tweak its appearance as explained in the previous item.
• DocumentTools objects like Tabulate, or Tables (not mathematical table objects) inlined within Text, are translated only rudimentarily: all their columns are centered. Here too, you can either adjust the look of the table directly in the .tex exported file, or transform the table into an image to be handled the same way as an embedded component, as explained in the previous item.
• You can have the integration constants $\mathrm{_Cn}$, appearing in solutions to differential equations, exported to LaTeX as ${c}_{n}$ by entering (e.g., for $n\le 10$) $\mathrm{alias}\left(\mathrm{seq}\left({c}_{k}=\mathrm{_C}‖k,k=0..10\right)\right)$.
Examples
Fractions are translated using the \frac LaTeX command, not elevating the denominator to the power -1
> $\mathrm{ee}≔\frac{a}{x+\frac{y}{3}}$
${\mathrm{ee}}{≔}\frac{{a}}{{x}{+}\frac{{y}}{{3}}}$ (2)
> $\mathrm{latex}\left(\mathrm{ee}\right)$
\frac{a}{x +\frac{y}{3}}
You can use latex or LaTeX indistinctly; these two commands are synonyms of each other.
> $\mathrm{LaTeX}\left(\mathrm{ee}\right)$
\frac{a}{x +\frac{y}{3}}
The LaTeX translation is displayed on the screen; that is useful for copy & paste, but the actual return value of latex(ee) is NULL. If you need a non-NULL return value use output = string
> $\mathrm{latex}\left(\mathrm{ee},\mathrm{output}=\mathrm{string}\right)$
${"\frac\left\{a\right\}\left\{x +\frac\left\{y\right\}\left\{3\right\}\right\}"}$ (3)
Color is translated the same way you see it on the screen (blue and black are ignored)
> $\mathrm{ee}≔\left(\mathrm{Int}=\mathrm{int}\right)\left(\frac{1}{{x}^{2}+1},x\right)$
${\mathrm{ee}}{≔}{\int }\frac{{1}}{{{x}}^{{2}}{+}{1}}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}{x}{=}{\mathrm{arctan}}{}\left({x}\right)$ (4)
> $\mathrm{latex}\left(\mathrm{ee}\right)$
\textcolor{gray}{\int}\frac{1}{x^{2}+1}\textcolor{gray}{d}x = \arctan \! \left(x \right)
Translating color is a setting. You can change any of the latex:-settings using latex:-Settings(keyword = value) or using its synonym LaTeX:-Settings. To query about these settings use the keyword $\mathrm{query}$ or enter the command without arguments
> $\mathrm{latex}:-\mathrm{Settings}\left(\right)$
$\left[{\mathrm{cacheresults}}{=}{\mathrm{true}}{,}{\mathrm{commabetweentensorindices}}{=}{\mathrm{false}}{,}{\mathrm{invisibletimes}}{=}{""}{,}{\mathrm{leavespaceafterfunctionname}}{=}{\mathrm{false}}{,}{\mathrm{linelength}}{=}{66}{,}{\mathrm{powersoftrigonometricfunctions}}{=}{\mathrm{mixed}}{,}{\mathrm{spaceaftersqrt}}{=}{\mathrm{true}}{,}{\mathrm{usecolor}}{=}{\mathrm{true}}{,}{\mathrm{usedisplaystyleinput}}{=}{\mathrm{true}}{,}{\mathrm{useimaginaryunit}}{=}{I}{,}{\mathrm{useinputlineprompt}}{=}{\mathrm{true}}{,}{\mathrm{userestrictedtypesetting}}{=}{\mathrm{false}}{,}{\mathrm{usespecialfunctionrules}}{=}{\mathrm{true}}{,}{\mathrm{usetypesettingcurrentsettings}}{=}{\mathrm{false}}\right]$ (5)
So, for example, to avoid color being translated, you can enter
> $\mathrm{latex}:-\mathrm{Settings}\left(\mathrm{usecolor}=\mathrm{false}\right)$
$\left[{\mathrm{usecolor}}{=}{\mathrm{false}}\right]$ (6)
> $\mathrm{latex}\left(\mathrm{ee}\right)$
\int \frac{1}{x^{2}+1}d x = \arctan \! \left(x \right)
To avoid artificial spacing between a function's name and its arguments surrounded by \left( and \right), the latex command places \! in between, as you see above in \arctan \! \left(x \right). To not have this LaTeX spacing command \! automatically inserted, use, for instance
> $\mathrm{latex}:-\mathrm{Settings}\left(\mathrm{leavespace}=\mathrm{true}\right)$
$\mathrm{* Partial match of \text{'}}{}\mathrm{leavespace}{}\mathrm{\text{'} against keyword \text{'}}{}\mathrm{leavespaceafterfunctionname}{}\text{'}$
$\left[{\mathrm{leavespaceafterfunctionname}}{=}{\mathrm{true}}\right]$ (7)
> $\mathrm{latex}\left(\mathrm{ee}\right)$
\int \frac{1}{x^{2}+1}d x = \arctan \left(x \right)
Powers of trigonometric (not inverse-trigonometric) functions, say ${\mathrm{sin}\left(x+y\right)}^{2}$, are frequently displayed in textbooks as ${\mathrm{sin}}^{2}\left(x+y\right)$, and while the computer uses the former notation, latex uses the latter one for trigonometric functions by default
> $\mathrm{ee}≔{\mathrm{sin}\left(x+y\right)}^{2}+{\mathrm{arcsin}\left(x\right)}^{2}$
${\mathrm{ee}}{≔}{{\mathrm{sin}}{}\left({x}{+}{y}\right)}^{{2}}{+}{{\mathrm{arcsin}}{}\left({x}\right)}^{{2}}$ (8)
> $\mathrm{latex}\left(\mathrm{ee}\right)$
\sin^{2}\left(x +y \right)+\arcsin \left(x \right)^{2}
You can change that too. The options for the right-hand side of $\mathrm{powersoftrigonometricfunctions}=\mathrm{...}$ are: $\mathrm{textbooknotation}$, $\mathrm{computernotation}$ or $\mathrm{mixed}$ (default), and you can use any unambiguous portion of the long-keyword $\mathrm{powersoftrigonometricfunctions}$ to indicate it. Using $\mathrm{powersoftrig}=\mathrm{textbook}$, in addition, inverse-trigonometric functions are LaTeX translated as the corresponding trigonometric functions to the power -1, so $\mathrm{arcsin}\left(x\right)$ will look in the LaTeX translation as ${\mathrm{sin}}^{\left(-1\right)}x$
> $\mathrm{latex}:-\mathrm{Settings}\left(\mathrm{powersoftrig}=\mathrm{textbook}\right)$
$\mathrm{* Partial match of \text{'}}{}\mathrm{powersoftrig}{}\mathrm{\text{'} against keyword \text{'}}{}\mathrm{powersoftrigonometricfunctions}{}\text{'}$
$\left[{\mathrm{powersoftrigonometricfunctions}}{=}{\mathrm{textbooknotation}}\right]$ (9)
> $\mathrm{latex}\left(\mathrm{ee}\right)$
\sin^{2}\left(x +y \right)+\left(\sin^{-1}\left(x \right)\right)^{2}
Using $\mathrm{powersoftrigonometricfunctions}=\mathrm{computer}$, the LaTeX translation is exactly as displayed in (8)
> $\mathrm{latex}:-\mathrm{Settings}\left(\mathrm{powersoftrig}=\mathrm{computer}\right)$
$\mathrm{* Partial match of \text{'}}{}\mathrm{powersoftrig}{}\mathrm{\text{'} against keyword \text{'}}{}\mathrm{powersoftrigonometricfunctions}{}\text{'}$
$\left[{\mathrm{powersoftrigonometricfunctions}}{=}{\mathrm{computernotation}}\right]$ (10)
> $\mathrm{latex}\left(\mathrm{ee}\right)$
\sin \left(x +y \right)^{2}+\arcsin \left(x \right)^{2}
When a square root is part of a product, the Maple typesetting leaves a small space between the square root and the next product operand, for example:
> $\mathrm{ee}≔\mathrm{sqrt}\left(f\left(x\right)\right)g\left(x\right)$
${\mathrm{ee}}{≔}\sqrt{{f}{}\left({x}\right)}{}{g}{}\left({x}\right)$ (11)
To reproduce that spacing, latex places the LaTeX spacing command \, after the square root when within a product
> $\mathrm{latex}\left(\mathrm{ee}\right)$
\sqrt{f \left(x \right)}\, g \left(x \right)
To avoid that extra LaTeX spacing use
> $\mathrm{latex}:-\mathrm{Settings}\left(\mathrm{spaceaftersqrt}=\mathrm{false}\right)$
$\left[{\mathrm{spaceaftersqrt}}{=}{\mathrm{false}}\right]$ (12)
> $\mathrm{latex}\left(\mathrm{ee}\right)$
\sqrt{f \left(x \right)} g \left(x \right)
The Maple default for representing the imaginary unit is the capital letter $I$. This is respected by the LaTeX translation
> $\mathrm{ee}≔x+Iy$
${\mathrm{ee}}{≔}{x}{+}{I}{}{y}$ (13)
> $\mathrm{latex}\left(\mathrm{ee}\right)$
x +\mathrm{I} y
You can however change this default in two different ways. For example, if you prefer $i$ instead of I, you can use $\mathrm{interface}\left(\mathrm{imaginaryunit}=i\right)$
> $\mathrm{interface}\left(\mathrm{imaginaryunit}=i\right):$
> $\mathrm{latex}\left(\mathrm{ee}\right)$
i y +x
The symbol used in the LaTeX translation to represent the imaginary unit is always displayed when you input
> $\mathrm{latex}:-\mathrm{Settings}\left(\mathrm{imaginary}\right)$
$\mathrm{* Partial match of \text{'}}{}\mathrm{imaginary}{}\mathrm{\text{'} against keyword \text{'}}{}\mathrm{useimaginaryunit}{}\text{'}$
$\left[{\mathrm{useimaginaryunit}}{=}{i}\right]$ (14)
This setting actually allows you to use any different symbol than the one set via $\mathrm{interface}\left(\mathrm{imaginaryunit}=\mathrm{...}\right)$. For example, keep $i$ for use in the Maple input/output and use $j$ in the translation to LaTeX
> $\mathrm{latex}:-\mathrm{Settings}\left(\mathrm{imaginary}=j\right)$
$\mathrm{* Partial match of \text{'}}{}\mathrm{imaginary}{}\mathrm{\text{'} against keyword \text{'}}{}\mathrm{useimaginaryunit}{}\text{'}$
$\left[{\mathrm{useimaginaryunit}}{=}{j}\right]$ (15)
> $\mathrm{latex}\left(\mathrm{ee}\right)$
j y +x
By default, latex leaves the invisible times product operator to be defined by the standard LaTeX typesetting system. That is, for example, no LaTeX spacing command is placed between the letters $x$ and $y$ in the product $xy$. Hence,
> $\mathrm{latex}\left(xy\right)$
x y
That corresponds to the invisibletimes setting:
> $\mathrm{latex}:-\mathrm{Settings}\left(\mathrm{invisibletimes}\right)$
$\left[{\mathrm{invisibletimes}}{=}{""}\right]$ (16)
If however you prefer, for instance, a tiny space LaTeX command between the operands of a product, you can use $\mathrm{latex}:-\mathrm{Settings}\left(\mathrm{invisible}="\\,"\right)$
> $\mathrm{latex}:-\mathrm{Settings}\left(\mathrm{invisibletimes}="\\,"\right)$
$\left[{\mathrm{invisibletimes}}{=}{"\,"}\right]$ (17)
> $\mathrm{latex}\left(xy\right)$
x\,y
Regardless of the value of $\mathrm{interface}\left(\mathrm{typesetting}\right)$, or of $\mathrm{Typesetting}:-\mathrm{EnableTypesetRule}\left(\mathrm{SpecialFunctionRules}\right)$, the mathematical functions of the language are translated to LaTeX using the notation shown in the NIST Digital Library of Mathematical Functions. So for example BesselJ(n, z) is translated to LaTeX as J_{n}\!\left(z \right)
> $\mathrm{latex}\left(\mathrm{BesselJ}\left(n,z\right)\right)$
J_{n}\left(z \right)
Upon LaTeX compilation, that will look like ${J}_{n}\left(z\right)$. You can change this default in two different ways. First, through the usespecialfunctionrules setting, changing its default value true to false
> $\mathrm{latex}:-\mathrm{Settings}\left(\mathrm{specialfunctionrules}=\mathrm{false}\right)$
$\mathrm{* Partial match of \text{'}}{}\mathrm{specialfunctionrules}{}\mathrm{\text{'} against keyword \text{'}}{}\mathrm{usespecialfunctionrules}{}\text{'}$
$\left[{\mathrm{usespecialfunctionrules}}{=}{\mathrm{false}}\right]$ (18)
> $\mathrm{latex}\left(\mathrm{BesselJ}\left(n,z\right)\right)$
\mathit{BesselJ}\left(n , z\right)
Reset that setting to its default value
> $\mathrm{latex}:-\mathrm{Settings}\left(\mathrm{usespecialfunctionrules}=\mathrm{true}\right):$
Second, with the usetypesettingcurrentsettings keyword, changing its default value false to true
> $\mathrm{latex}:-\mathrm{Settings}\left(\mathrm{usetypesettingcurrentsettings}=\mathrm{true}\right)$
$\left[{\mathrm{usetypesettingcurrentsettings}}{=}{\mathrm{true}}\right]$ (19)
and so latex follows the value of the Typesetting settings:
> $\mathrm{Typesetting}:-\mathrm{DisableTypesetRule}\left(\mathrm{Typesetting}:-\mathrm{SpecialFunctionRules}\right)$
${\varnothing }$ (20)
> $\mathrm{latex}\left(\mathrm{BesselJ}\left(n,z\right),\mathrm{forget}\right)$
\mathit{BesselJ}\left(n , z\right)
When changing Typesetting settings in the middle of a Maple session, to discard previously cached results by latex use the option forget (this is not necessary when changing settings using latex:-Settings)
> $\mathrm{Typesetting}:-\mathrm{EnableTypesetRule}\left(\mathrm{Typesetting}:-\mathrm{SpecialFunctionRules}\right)$
${\varnothing }$ (21)
> $\mathrm{latex}\left(\mathrm{BesselJ}\left(n,z\right),\mathrm{forget}\right)$
J_{n}\left(z \right)
Compatibility
• The latex command was updated in Maple 2021.
Definitions
# Alternating finite automaton
In automata theory, an alternating finite automaton (AFA) is a nondeterministic finite automaton whose transitions are divided into existential and universal transitions. For example, let A be an alternating automaton.
• For an existential transition $\left(q, a, q_1 \vee q_2\right)$, A nondeterministically chooses to switch the state to either $q_1$ or $q_2$, reading a, thus behaving like a regular nondeterministic finite automaton.
• For a universal transition $\left(q, a, q_1 \wedge q_2\right)$, A moves to $q_1$ and $q_2$, reading a, simulating the behavior of a parallel machine.
Note that due to the universal quantification a run is represented by a run tree. A accepts a word w, if there exists a run tree on w such that every path ends in an accepting state.
A basic theorem tells that any AFA is equivalent to a nondeterministic finite automaton (NFA), by performing a similar kind of powerset construction as is used for the transformation of an NFA to a deterministic finite automaton (DFA). This construction converts an AFA with k states to an NFA with up to $2^k$ states.
An alternative model which is frequently used is the one where Boolean combinations are represented as clauses. For instance, one could assume the combinations to be in DNF so that $\{\{q_1\},\{q_2,q_3\}\}$ would represent $q_1 \vee \left(q_2 \wedge q_3\right)$. The state tt (true) is represented by $\{\{\}\}$ in this case and ff (false) by $\emptyset$. This clause representation is usually more efficient.
## Formal Definition
An alternating finite automaton (AFA) is a 6-tuple, $\left(S(\exists), S(\forall), \Sigma, \delta, P_0, F\right)$, where
• $S(\exists)$ is a finite set of existential states. Also commonly represented as $S(\vee)$.
• $S(\forall)$ is a finite set of universal states. Also commonly represented as $S(\wedge)$.
• $\Sigma$ is a finite set of input symbols.
• $\delta$ is a set of transition functions to next state, $\left(S(\exists) \cup S(\forall)\right) \times \left(\Sigma \cup \{\varepsilon\}\right) \to 2^{S(\exists) \cup S(\forall)}$.
• $P_0$ is the initial (start) state, such that $P_0 \in S(\exists) \cup S(\forall)$.
• $F$ is a set of accepting (final) states such that $F \subseteq S(\exists) \cup S(\forall)$.
### Arithmetic functions
#### Arithmetic functions and factorization
All arithmetic functions in the narrow sense of the word (Euler's totient function, the Moebius function, the sums over divisors or powers of divisors, etc.) call, after trial division by small primes, the same versatile factoring machinery described under factorint. It includes Shanks SQUFOF, Pollard Rho, ECM and MPQS stages, and has an early exit option for the functions moebius and (the integer function underlying) issquarefree. This machinery relies on a fairly strong probabilistic primality test, see ispseudoprime, but you may also set
default(factor_proven, 1)
to ensure that all tentative factorizations are fully proven. This should not slow down PARI too much, unless prime numbers with hundreds of decimal digits occur frequently in your application.
#### Orders in finite groups and Discrete Logarithm functions
The following functions compute the order of an element in a finite group: ellorder (the rational points on an elliptic curve defined over a finite field), fforder (the multiplicative group of a finite field), znorder (the invertible elements in Z/nZ). The following functions compute discrete logarithms in the same groups (whenever this is meaningful) elllog, fflog, znlog.
All such functions allow an optional argument specifying an integer N, representing the order of the group. (The order functions also allow any non-zero multiple of the order, with a minor loss of efficiency.) The meaning of that optional argument is as follows, depending on its type:
* t_INT: the integer N,
* t_MAT: the factorization fa = factor(N),
* t_VEC: this is the preferred format and provides both the integer N and its factorization in a two-component vector [N, fa].
When the group is fixed and many orders or discrete logarithms will be computed, it is much more efficient to initialize this data once and for all and pass it to the relevant functions, as in
? p = nextprime(10^40);
? v = [p-1, factor(p-1)]; \\ data for discrete log & order computations
? znorder(Mod(2,p), v)
%3 = 500000000000000000000000000028
? g = znprimroot(p);
? znlog(2, g, v)
%5 = 543038070904014908801878611374
#### addprimes({x = []})
adds the integers contained in the vector x (or the single integer x) to a special table of "user-defined primes", and returns that table. Whenever factor is subsequently called, it will trial divide by the elements in this table. If x is empty or omitted, just returns the current list of extra primes.
The entries in x must be primes: there is no internal check, even if the factor_proven default is set. To remove primes from the list use removeprimes.
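For instance, one can register a large known prime; the table returned is simply a vector (display shown as one would expect):
? p = 2^89 - 1;   \\ a large known prime (Mersenne)
? addprimes(p)    \\ subsequent factor calls will trial divide by p
%2 = [618970019642690137449562111]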
The library syntax is GEN addprimes(GEN x = NULL).
#### bestappr(x, {A},{B})
if B is omitted, finds the best rational approximation to the real number x using continued fractions. If A is omitted, return the best approximation affordable given the input accuracy; otherwise make sure that the denominator is at most equal to A.
If B is present perform rational modular reconstruction (see below). In both cases, the function applies recursively to components of complex objects (polynomials, vectors,...).
? bestappr(Pi, 100)
%1 = 22/7
? bestappr(0.1428571428571428571428571429)
%2 = 1/7
? bestappr([Pi, sqrt(2) + 'x], 10^3)
%3 = [355/113, x + 1393/985]
By definition, n/d is the best rational approximation to x if |d x - n| < |v x - u| for all integers (u,v) with v <= A. (Which implies that n/d is a convergent of the continued fraction of x.)
If x is an t_INTMOD, (or a recursive combination of those), modulo N say, B must be present. The routine then returns the unique rational number a/b in coprime integers a <= A and b <= B which is congruent to x modulo N. If N <= 2AB, uniqueness is not guaranteed and the function fails with an error message. If rational reconstruction is not possible (no such a/b exists for at least one component of x), returns -1.
? bestappr(Mod(18526731858, 11^10), 10^10, 10^10)
*** at top-level: bestappr(Mod(1852673
*** ^--------------------
*** bestappr: ratlift: must have 2*amax*bmax < m, found
amax=10000000000
bmax=10000000000
m=25937424601
? bestappr(Mod(18526731858, 11^10), 10^5, 10^5)
%1 = 1/7
? bestappr(Mod(18526731858, 11^20), 10^10, 10^10)
%2 = -1
In most concrete uses, B is a prime power and we performed Hensel lifting to obtain x.
If x is a t_POLMOD, modulo T say, B must be present. The routine then returns the unique rational function P/Q with deg P <= A and deg Q <= B which is congruent to x modulo T. If deg T <= A+B, uniqueness is not guaranteed and the function fails with an error message. If rational reconstruction is not possible, returns -1.
The library syntax is GEN bestappr0(GEN x, GEN A = NULL, GEN B = NULL). Also available is GEN bestappr(GEN x, GEN A).
#### bezout(x,y)
Returns [u,v,d] such that d is the gcd of x,y, x*u+y*v = gcd(x,y), and u and v minimal in a natural sense. The arguments must be integers or polynomials.
If x,y are polynomials in the same variable with inexact coefficients, then compute u,v,d such that x*u+y*v = d, where d approximately divides both x and y; in particular, we do not obtain gcd(x,y), which is defined to be a scalar in this case:
? a = x + 0.0; gcd(a,a)
%1 = 1
? bezout(a,a)
%2 = [0, 1, x + 0.E-28]
? bezout(x-Pi,6*x^2-zeta(2))
%3 = [-6*x - 18.8495559, 1, 57.5726923]
For inexact inputs, the output is thus not well defined mathematically, but you obtain explicit polynomials to check whether the approximation is close enough for your needs.
The library syntax is GEN vecbezout(GEN x, GEN y).
#### bezoutres(x,y)
finds u and v such that x*u + y*v = d, where d is the resultant of x and y. The result is the row vector [u,v,d]. The algorithm used (subresultant) assumes that the base ring is a domain.
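For instance, one expects the following (the constant cofactors here are determined by the resultant, which equals 4):
? bezoutres(x^2 + 1, x^2 - 1)
%1 = [2, -2, 4]
? polresultant(x^2 + 1, x^2 - 1)
%2 = 4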
The library syntax is GEN vecbezoutres(GEN x, GEN y).
#### bigomega(x)
number of prime divisors of the integer |x| counted with multiplicity:
? factor(392)
%1 =
[2 3]
[7 2]
? bigomega(392)
%2 = 5; \\ = 3+2
? omega(392)
%3 = 2; \\ without multiplicity
The function accepts vector/matrices arguments, and is then applied componentwise.
The library syntax is GEN gbigomega(GEN x). For a t_INT x, the variant long bigomega(GEN n) is generally easier to use.
#### binomial(x,y)
binomial coefficient $\binom{x}{y}$. Here y must be an integer, but x can be any PARI object.
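For example (y must be an integer, x need not be):
? binomial(7, 3)
%1 = 35
? binomial(1/2, 2)   \\ (1/2)(1/2 - 1) / 2!
%2 = -1/8
? binomial(x, 2)
%3 = 1/2*x^2 - 1/2*x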
The library syntax is GEN binomial(GEN x, long y). The function GEN binomialuu(ulong n, ulong k) is also available, and so is GEN vecbinome(long n), which returns a vector v with n+1 components such that v[k+1] = binomial(n,k) for k from 0 up to n.
#### chinese(x,{y})
if x and y are both intmods or both polmods, creates (with the same type) a z in the same residue class as x and in the same residue class as y, if it is possible.
This function also allows vector and matrix arguments, in which case the operation is recursively applied to each component of the vector or matrix. For polynomial arguments, it is applied to each coefficient.
If y is omitted, and x is a vector, chinese is applied recursively to the components of x, yielding a residue belonging to the same class as all components of x.
Finally chinese(x,x) = x regardless of the type of x; this allows vector arguments to contain other data, so long as they are identical in both vectors.
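A small illustration; one expects:
? chinese(Mod(1, 3), Mod(2, 5))
%1 = Mod(7, 15)
? chinese([Mod(1, 3), Mod(2, 5), Mod(3, 7)])
%2 = Mod(52, 105)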
The library syntax is GEN chinese(GEN x, GEN y = NULL). GEN chinese1(GEN x) is also available.
#### content(x)
computes the gcd of all the coefficients of x, when this gcd makes sense. This is the natural definition if x is a polynomial (and by extension a power series) or a vector/matrix. This is in general a weaker notion than the ideal generated by the coefficients:
? content(2*x+y)
%1 = 1 \\ = gcd(2,y) over Q[y]
If x is a scalar, this simply returns the absolute value of x if x is rational ( t_INT or t_FRAC), and either 1 (inexact input) or x (exact input) otherwise; the result should be identical to gcd(x, 0).
The content of a rational function is the ratio of the contents of the numerator and the denominator. In recursive structures, if a matrix or vector coefficient x appears, the gcd is taken not with x, but with its content:
? content([ [2], 4*matid(3) ])
%1 = 2
The library syntax is GEN content(GEN x).
#### contfrac(x,{b},{nmax})
returns the row vector whose components are the partial quotients of the continued fraction expansion of x. In other words, a result [a_0,...,a_n] means that x ~ a_0+1/(a_1+...+1/a_n). The output is normalized so that a_n != 1 (unless we also have n = 0).
The number of partial quotients n+1 is limited by nmax. If nmax is omitted, the expansion stops at the last significant partial quotient.
? \p19
realprecision = 19 significant digits
? contfrac(Pi)
%1 = [3, 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1, 14, 2, 1, 1, 2, 2]
? contfrac(Pi,, 3) \\ n = 2
%2 = [3, 7, 15]
x can also be a rational function or a power series.
If a vector b is supplied, the numerators are equal to the coefficients of b, instead of all equal to 1 as above; more precisely, x ~ (1/b_0)(a_0+b_1/(a_1+...+b_n/a_n)); for a numerical continued fraction (x real), the a_i are integers, as large as possible; if x is a rational function, they are polynomials with deg a_i = deg b_i + 1. The length of the result is then equal to the length of b, unless the next partial quotient cannot be reliably computed, in which case the expansion stops. This happens when a partial remainder is equal to zero (or too small compared to the available significant digits for x a t_REAL).
A direct implementation of the numerical continued fraction contfrac(x,b) described above would be
\\ "greedy" generalized continued fraction
cf(x, b) =
{ my( a= vector(#b), t );
x *= b[1];
for (i = 1, #b,
a[i] = floor(x);
t = x - a[i]; if (!t || i == #b, break);
x = b[i+1] / t;
); a;
}
There is some degree of freedom when choosing the a_i; the program above can easily be modified to derive variants of the standard algorithm. In the same vein, although no builtin function implements the related Engel expansion (a special kind of Egyptian fraction decomposition: x = 1/a_1 + 1/(a_1a_2) +... ), it can be obtained as follows:
\\ n terms of the Engel expansion of x
engel(x, n = 10) =
{ my( u = x, a = vector(n) );
for (k = 1, n,
a[k] = ceil(1/u);
u = u*a[k] - 1;
if (!u, break);
); a
}
Obsolete hack (don't use this): if b is an integer, nmax is ignored and the command is understood as contfrac(x,, b).
The library syntax is GEN contfrac0(GEN x, GEN b = NULL, long nmax). Also available are GEN gboundcf(GEN x, long nmax), GEN gcf(GEN x) and GEN gcf2(GEN b, GEN x).
#### contfracpnqn(x)
when x is a vector or a one-row matrix, x is considered as the list of partial quotients [a_0,a_1,...,a_n] of a rational number, and the result is the 2 by 2 matrix [p_n,p_{n-1};q_n,q_{n-1}] in the standard notation of continued fractions, so p_n/q_n = a_0+1/(a_1+...+1/a_n). If x is a matrix with two rows [b_0,b_1,...,b_n] and [a_0,a_1,...,a_n], this is then considered as a generalized continued fraction and we have similarly p_n/q_n = (1/b_0)(a_0+b_1/(a_1+...+b_n/a_n)). Note that in this case one usually has b_0 = 1.
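For instance, with the first partial quotients of Pi one expects:
? contfracpnqn([3, 7, 15, 1])
%1 =
[355 333]
[113 106]
so that 355/113 is the corresponding convergent.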
The library syntax is GEN pnqn(GEN x).
#### core(n,{flag = 0})
if n is an integer written as n = df^2 with d squarefree, returns d. If flag is non-zero, returns the two-element row vector [d,f]. By convention, we write 0 = 0 x 1^2, so core(0, 1) returns [0,1].
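For example, since 45 = 5 * 3^2, one expects:
? core(45)
%1 = 5
? core(45, 1)
%2 = [5, 3]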
The library syntax is GEN core0(GEN n, long flag). Also available are GEN core(GEN n) (flag = 0) and GEN core2(GEN n) (flag = 1)
#### coredisc(n,{flag = 0})
a fundamental discriminant is an integer of the form t = 1 mod 4 or 4t = 8,12 mod 16, with t squarefree (i.e. 1 or the discriminant of a quadratic number field). Given a non-zero integer n, this routine returns the (unique) fundamental discriminant d such that n = df^2, f a positive rational number. If flag is non-zero, returns the two-element row vector [d,f]. If n is congruent to 0 or 1 modulo 4, f is an integer, and a half-integer otherwise.
By convention, coredisc(0, 1) returns [0,1].
Note that quaddisc(n) returns the same value as coredisc(n), and also works with rational inputs n belongs to Q^*.
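For example, one expects:
? coredisc(-4)      \\ -4 is already a fundamental discriminant
%1 = -4
? coredisc(-36, 1)  \\ -36 = -4 * 3^2
%2 = [-4, 3]
? coredisc(5, 1)    \\ 5 = 1 mod 4 and squarefree, hence fundamental
%3 = [5, 1]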
The library syntax is GEN coredisc0(GEN n, long flag). Also available are GEN coredisc(GEN n) (flag = 0) and GEN coredisc2(GEN n) (flag = 1)
#### dirdiv(x,y)
x and y being vectors of perhaps different lengths but with y[1] != 0 considered as Dirichlet series, computes the quotient of x by y, again as a vector.
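For instance, dividing the series of sigma(n) (cf. the direuler example below) by the constant series 1, i.e. by zeta(s), should recover the identity n:
? dirdiv([1, 3, 4, 7, 6, 12, 8, 15, 13, 18], vector(10, n, 1))
%1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]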
The library syntax is GEN dirdiv(GEN x, GEN y).
#### direuler(p = a,b,expr,{c})
computes the Dirichlet series associated to the Euler product of expression expr as p ranges through the primes from a to b. expr must be a polynomial or rational function in another variable than p (say X) and expr(X) is understood as the local factor expr(p^{-s}).
The series is output as a vector of coefficients. If c is present, output only the first c coefficients in the series. The following command computes the sigma function, associated to zeta(s)zeta(s-1):
? direuler(p=2, 10, 1/((1-X)*(1-p*X)))
%1 = [1, 3, 4, 7, 6, 12, 8, 15, 13, 18]
The library syntax is direuler(void *E, GEN (*eval)(void*,GEN), GEN a, GEN b)
#### dirmul(x,y)
x and y being vectors of perhaps different lengths considered as Dirichlet series, computes the product of x by y, again as a vector.
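For instance, multiplying the constant series 1 (zeta(s)) by the series of n (zeta(s-1)) should reproduce the sigma coefficients computed in the direuler example above:
? dirmul(vector(10, n, 1), vector(10, n, n))
%1 = [1, 3, 4, 7, 6, 12, 8, 15, 13, 18]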
The library syntax is GEN dirmul(GEN x, GEN y).
#### divisors(x)
creates a row vector whose components are the divisors of x. The factorization of x (as output by factor) can be used instead.
By definition, these divisors are the products of the irreducible factors of n, as produced by factor(n), raised to appropriate powers (no negative exponent may occur in the factorization). If n is an integer, they are the positive divisors, in increasing order.
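For example, one expects:
? divisors(12)
%1 = [1, 2, 3, 4, 6, 12]
? divisors(factor(12))   \\ a precomputed factorization works too
%2 = [1, 2, 3, 4, 6, 12]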
The library syntax is GEN divisors(GEN x).
#### eulerphi(x)
Euler's phi (totient) function of |x|, in other words |(Z/xZ)^*|. Normally, x must be of type integer, but the function accepts vector/matrices arguments, and is then applied componentwise.
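A quick check:
? eulerphi(12)
%1 = 4
? eulerphi([1, 2, 3, 4, 5, 6])   \\ applied componentwise
%2 = [1, 1, 2, 2, 4, 2]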
The library syntax is GEN geulerphi(GEN x). For a t_INT x, the variant GEN eulerphi(GEN n) is also available.
#### factor(x,{lim})
general factorization function, where x is a rational (including integers), a complex number with rational real and imaginary parts, or a rational function (including polynomials). The result is a two-column matrix: the first contains the irreducibles dividing x (rational or Gaussian primes, irreducible polynomials), and the second the exponents. By convention, 0 is factored as 0^1.
Q and Q(i). See factorint for more information about the algorithms used. The rational or Gaussian primes are in fact pseudoprimes (see ispseudoprime), a priori not rigorously proven primes. In fact, any factor which is <= 10^{15} (whose norm is <= 10^{15} for an irrational Gaussian prime) is a genuine prime. Use isprime to prove primality of other factors, as in
? fa = factor(2^2^7 + 1)
%1 =
[59649589127497217 1]
[5704689200685129054721 1]
? isprime( fa[,1] )
%2 = [1, 1]~ \\ both entries are proven primes
Another possibility is to set the global default factor_proven, which will perform a rigorous primality proof for each pseudoprime factor.
A t_INT argument lim can be added, meaning that we look only for prime factors p < lim. The limit lim must be non-negative and satisfy lim <= primelimit + 1; setting lim = 0 is the same as setting it to primelimit + 1. In this case, all but the last factor are proven primes, but the remaining factor may actually be a proven composite! If the remaining factor is less than lim^2, then it is prime.
? factor(2^2^7 +1, 10^5)
%3 =
[340282366920938463463374607431768211457 1]
This routine uses trial division and perfect power tests, and should not be used for huge values of lim (at most 10^9, say): factorint(, 1 + 8) will in general be faster. The latter does not guarantee that all small prime factors are found, but it also finds larger factors, and in a much more efficient way.
? F = (2^2^7 + 1) * 1009 * 100003; factor(F, 10^5) \\ fast, incomplete
time = 0 ms.
%4 =
[1009 1]
[34029257539194609161727850866999116450334371 1]
? default(primelimit,10^9)
time = 4,360 ms.
%5 = 1000000000
? factor(F, 10^9) \\ very slow
time = 6,120 ms.
%6 =
[1009 1]
[100003 1]
[340282366920938463463374607431768211457 1]
? factorint(F, 1+8) \\ much faster, all small primes were found
time = 40 ms.
%7 =
[1009 1]
[100003 1]
[340282366920938463463374607431768211457 1]
? factorint(F) \\ complete factorisation
time = 260 ms.
%8 =
[1009 1]
[100003 1]
[59649589127497217 1]
[5704689200685129054721 1]
Rational functions. The polynomials or rational functions to be factored must have scalar coefficients. In particular PARI does not know how to factor multivariate polynomials. See factormod and factorff for the algorithms used over finite fields, factornf for the algorithms over number fields. Over Q, van Hoeij's method is used, which is able to cope with hundreds of modular factors.
The routine guesses a sensible ring over which you want to factor: the smallest ring containing all coefficients, taking into account quotient structures induced by t_INTMODs and t_POLMODs (e.g. if a coefficient in Z/nZ is known, all rational numbers encountered are first mapped to Z/nZ; different moduli will produce an error). Note that factorization of polynomials is done up to multiplication by a constant. In particular, the factors of rational polynomials will have integer coefficients, and the content of a polynomial or rational function is discarded and not included in the factorization. If needed, you can always ask for the content explicitly:
? factor(t^2 + 5/2*t + 1)
%1 =
[2*t + 1 1]
[t + 2 1]
? content(t^2 + 5/2*t + 1)
%2 = 1/2
See also nffactor.
The library syntax is GEN gp_factor0(GEN x, GEN lim = NULL). This function should only be used by the gp interface. Use directly GEN factor(GEN x) or GEN boundfact(GEN x, long lim). The obsolete function GEN factor0(GEN x, long lim) is kept for backward compatibility.
#### factorback(f,{e})
gives back the factored object corresponding to a factorization. The integer 1 corresponds to the empty factorization.
If e is present, e and f must be vectors of the same length (e being integral), and the corresponding factorization is the product of the f[i]^{e[i]}.
If not, and f is vector, it is understood as in the preceding case with e a vector of 1s: we return the product of the f[i]. Finally, f can be a regular factorization, as produced with any factor command. A few examples:
? factor(12)
%1 =
[2 2]
[3 1]
? factorback(%)
%2 = 12
? factorback([2,3], [2,1]) \\ 2^3 * 3^1
%3 = 12
? factorback([5,2,3])
%4 = 30
The library syntax is GEN factorback2(GEN f, GEN e = NULL). Also available is GEN factorback(GEN f) (case e = NULL).
#### factorcantor(x,p)
factors the polynomial x modulo the prime p, using distinct degree plus Cantor-Zassenhaus. The coefficients of x must be operation-compatible with Z/pZ. The result is a two-column matrix, the first column being the irreducible polynomials dividing x, and the second the exponents. If you want only the degrees of the irreducible polynomials (for example for computing an L-function), use factormod(x,p,1). Note that the factormod algorithm is usually faster than factorcantor.
The library syntax is GEN factcantor(GEN x, GEN p).
#### factorff(x,{p},{a})
factors the polynomial x in the field F_q defined by the irreducible polynomial a over F_p. The coefficients of x must be operation-compatible with Z/pZ. The result is a two-column matrix: the first column contains the irreducible factors of x, and the second their exponents. If all the coefficients of x are in F_p, a much faster algorithm is applied, using the computation of isomorphisms between finite fields.
Either a or p can omitted (in which case both are ignored) if x has t_FFELT coefficients; the function then becomes identical to factor:
? factorff(x^2 + 1, 5, y^2+3) \\ over F_5[y]/(y^2+3) ~ F_25
%1 =
[Mod(Mod(1, 5), Mod(1, 5)*y^2 + Mod(3, 5))*x
+ Mod(Mod(2, 5), Mod(1, 5)*y^2 + Mod(3, 5)) 1]
[Mod(Mod(1, 5), Mod(1, 5)*y^2 + Mod(3, 5))*x
+ Mod(Mod(3, 5), Mod(1, 5)*y^2 + Mod(3, 5)) 1]
? t = ffgen(y^2 + Mod(3,5), 't); \\ a generator for F_25 as a t_FFELT
? factorff(x^2 + 1) \\ not enough information to determine the base field
*** at top-level: factorff(x^2+1)
*** ^---------------
*** factorff: incorrect type in factorff.
? factorff(x^2 + t^0) \\ make sure a coeff. is a t_FFELT
%3 =
[x + 2 1]
[x + 3 1]
? factorff(x^2 + t + 1)
%11 =
[x + (2*t + 1) 1]
[x + (3*t + 4) 1]
Notice that the second syntax is easier to use and much more readable.
The library syntax is GEN factorff(GEN x, GEN p = NULL, GEN a = NULL).
#### factorial(x)
factorial of x. The expression x! gives a result which is an integer, while factorial(x) gives a real number.
The library syntax is GEN mpfactr(long x, long prec). GEN mpfact(long x) returns x! as a t_INT.
#### factorint(x,{flag = 0})
factors the integer n into a product of pseudoprimes (see ispseudoprime), using a combination of the Shanks SQUFOF and Pollard Rho method (with modifications due to Brent), Lenstra's ECM (with modifications by Montgomery), and MPQS (the latter adapted from the LiDIA code with the kind permission of the LiDIA maintainers), as well as a search for pure powers. The output is a two-column matrix as for factor: the first column contains the "prime" divisors of n, the second one contains the (positive) exponents.
By convention 0 is factored as 0^1, and 1 as the empty factorization; also, if they are larger than 2^64, the divisors are by default not proven primes: they have merely passed the BPSW compositeness test (see ispseudoprime). Use isprime on the result if you want to guarantee primality, or set the factor_proven default to 1. Entries of the private prime tables (see addprimes) are also included as is.
This gives direct access to the integer factoring engine called by most arithmetical functions. flag is optional; its binary digits mean 1: avoid MPQS, 2: skip first stage ECM (we may still fall back to it later), 4: avoid Rho and SQUFOF, 8: don't run final ECM (as a result, a huge composite may be declared to be prime). Note that a (strong) probabilistic primality test is used; thus composites might not be detected, although no example is known.
You are invited to play with the flag settings and watch the internals at work by using gp's debug default parameter (level 3 shows just the outline, 4 turns on time keeping, 5 and above show an increasing amount of internal details).
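For instance, the classically known factorization of the sixth Fermat number should be recovered quickly:
? factorint(2^64 + 1)
%1 =
[274177 1]
[67280421310721 1]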
The library syntax is GEN factorint(GEN x, long flag).
#### factormod(x,p,{flag = 0})
factors the polynomial x modulo the prime integer p, using Berlekamp. The coefficients of x must be operation-compatible with Z/pZ. The result is a two-column matrix, the first column being the irreducible polynomials dividing x, and the second the exponents. If flag is non-zero, outputs only the degrees of the irreducible polynomials (for example, for computing an L-function). A different algorithm for computing the mod p factorization is factorcantor which is sometimes faster.
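For instance, one expects x^4 + 1 to split into two quadratics modulo 5 (the layout follows factor; the ordering of the rows may differ):
? factormod(x^4 + 1, 5)
%1 =
[Mod(1, 5)*x^2 + Mod(2, 5) 1]
[Mod(1, 5)*x^2 + Mod(3, 5) 1]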
The library syntax is GEN factormod0(GEN x, GEN p, long flag).
#### ffgen(P,{v})
return the generator g = X (mod P(X)) of the finite field defined by the polynomial P (which must have t_INTMOD coefficients). If v is given, the variable name is used to display g, else the variable of the polynomial P is used.
The library syntax is GEN ffgen(GEN P, long v = -1), where v is a variable number.
#### ffinit(p,n,{v = x})
computes a monic polynomial of degree n which is irreducible over F_p, where p is assumed to be prime. This function uses a fast variant of Adleman-Lenstra's algorithm.
It is useful in conjunction with ffgen; for instance if P = ffinit(3,2), you can represent elements in F_{3^2} in terms of g = ffgen(P, g).
The library syntax is GEN ffinit(GEN p, long n, long v = -1), where v is a variable number.
#### fflog(x,g,{o})
discrete logarithm of the finite field element x in base g. If present, o represents the multiplicative order of g, see Section [Label: se:DLfun]; the preferred format for this parameter is [ord, factor(ord)], where ord is the order of g. It may be set as a side effect of calling ffprimroot.
If no o is given, assume that g is a primitive root. See znlog for the limitations of the underlying discrete log algorithms.
? t = ffgen(ffinit(7,5));
? o = fforder(t)
%2 = 5602 \\ not a primitive root.
? fflog(t^10,t)
%3 = 11214 \\ Actually correct modulo o. We are lucky !
? fflog(t^10,t, o)
%4 = 10
? g = ffprimroot(t, &o);
? o \\ order is 16806, bundled with its factorization matrix
%6 = [16806, [2, 1; 3, 1; 2801, 1]]
? fforder(g, o)
%7 = 16806 \\ no surprise there !
? fflog(g^10000, g, o)
%9 = 10000
The library syntax is GEN fflog(GEN x, GEN g, GEN o = NULL).
#### fforder(x,{o})
multiplicative order of the finite field element x. If o is present, it represents a multiple of the order of the element, see Section [Label: se:DLfun]; the preferred format for this parameter is [N, factor(N)], where N is the cardinality of the multiplicative group of the underlying finite field.
? t = ffgen(ffinit(nextprime(10^8), 5));
? g = ffprimroot(t, &o); \\ o will be useful !
? fforder(g^1000000, o)
time = 0 ms.
%5 = 5000001750000245000017150000600250008403
? fforder(g^1000000)
time = 16 ms. \\ noticeably slower, same result of course
%6 = 5000001750000245000017150000600250008403
The library syntax is GEN fforder(GEN x, GEN o = NULL).
#### ffprimroot(x, {&o})
return a primitive root of the multiplicative group of the definition field of the finite field element x (not necessarily the same as the field generated by x). If present, o is set to a vector [ord, fa], where ord is the order of the group and fa its factorisation factor(ord). This last parameter is useful in fflog and fforder, see Section [Label: se:DLfun].
? t = ffgen(ffinit(nextprime(10^7), 5));
? g = ffprimroot(t, &o);
? o[1]
%3 = 100000950003610006859006516052476098
? o[2]
%4 =
[2 1]
[7 2]
[31 1]
[41 1]
[67 1]
[1523 1]
[10498781 1]
[15992881 1]
[46858913131 1]
? fflog(g^1000000, g, o)
time = 1,312 ms.
%5 = 1000000
The library syntax is GEN ffprimroot(GEN x, GEN *o = NULL).
#### fibonacci(x)
x-th Fibonacci number.
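For example:
? fibonacci(10)
%1 = 55
? vector(10, n, fibonacci(n))
%2 = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]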
The library syntax is GEN fibo(long x).
#### gcd(x,{y})
creates the greatest common divisor of x and y. If you also need the u and v such that x*u + y*v = gcd(x,y), use the bezout function. x and y can have rather general types, for instance both rational numbers. If y is omitted and x is a vector, returns the gcd of all components of x, i.e. this is equivalent to content(x).
When x and y are both given and one of them is a vector/matrix type, the GCD is again taken recursively on each component, but in a different way. If y is a vector, resp. matrix, then the result has the same type as y, and components equal to gcd(x, y[i]), resp. gcd(x, y[,i]). Else if x is a vector/matrix the result has the same type as x and an analogous definition. Note that for these types, gcd is not commutative.
The algorithm used is a naive Euclid except for the following inputs:
* integers: use modified right-shift binary ("plus-minus" variant).
* univariate polynomials with coefficients in the same number field (in particular rational): use modular gcd algorithm.
* general polynomials: use the subresultant algorithm if coefficient explosion is likely (non modular coefficients).
If u and v are polynomials in the same variable with inexact coefficients, their gcd is defined to be scalar, so that
? a = x + 0.0; gcd(a,a)
%1 = 1
? b = y*x + O(y); gcd(b,b)
%2 = y
? c = 4*x + O(2^3); gcd(c,c)
%3 = 4
A good quantitative check to decide whether such a gcd "should be" non-trivial, is to use polresultant: a value close to 0 means that a small deformation of the inputs has non-trivial gcd. You may also use bezout, which does try to compute an approximate gcd d and provides u, v to check whether u x + v y is close to d.
The library syntax is GEN ggcd0(GEN x, GEN y = NULL). Also available are GEN ggcd(GEN x, GEN y), if y is not NULL, and GEN content(GEN x), if y = NULL.
#### hilbert(x,y,{p})
Hilbert symbol of x and y modulo the prime p, p = 0 meaning the place at infinity (the result is undefined if p != 0 is not prime).
It is possible to omit p, in which case we take p = 0 if both x and y are rational or one of them is a real number, and p = q if one of x, y is a t_INTMOD modulo q or a q-adic number. (Incompatible types will raise an error.)
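As a hedged illustration (not in the original entry; the values follow from the definition of the Hilbert symbol):
? hilbert(-1, -1)    \\ p omitted, both rational: place at infinity
%1 = -1
? hilbert(2, 3, 5)   \\ two units modulo the odd prime 5
%2 = 1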
The library syntax is long hilbert(GEN x, GEN y, GEN p = NULL).
#### isfundamental(x)
true (1) if x is equal to 1 or to the discriminant of a quadratic field, false (0) otherwise. The function accepts vector/matrices arguments, and is then applied componentwise.
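A few sample values (added for illustration; they follow from the characterisation of fundamental discriminants):
? isfundamental(5)    \\ discriminant of Q(sqrt(5))
%1 = 1
? isfundamental(-3)   \\ discriminant of Q(sqrt(-3))
%2 = 1
? isfundamental(6)    \\ 6 = 2 mod 4 is not a discriminant
%3 = 0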
The library syntax is GEN gisfundamental(GEN x).
#### ispower(x,{k},{&n})
if k is given, returns true (1) if x is a k-th power, false (0) if not.
If k is omitted, only integers and fractions are allowed for x and the function returns the maximal k >= 2 such that x = n^k is a perfect power, or 0 if no such k exists; in particular ispower(-1), ispower(0), and ispower(1) all return 0.
If a third argument &n is given and x is indeed a k-th power, sets n to a k-th root of x.
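For example (an illustrative session, not from the original entry):
? ispower(64)          \\ 64 = 2^6, so the maximal exponent is 6
%1 = 6
? ispower(64, 3, &n)   \\ is 64 a cube?
%2 = 1
? n
%3 = 4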
For a t_FFELT x, instead of omitting k (which is not allowed for this type), it may be natural to set
k = (x.p ^ poldegree(x.pol) - 1) / fforder(x)
The library syntax is long ispower(GEN x, GEN k = NULL, GEN *n = NULL). Also available is long gisanypower(GEN x, GEN *pty) (k omitted).
#### isprime(x,{flag = 0})
true (1) if x is a prime number, false (0) otherwise. A prime number is a positive integer having exactly two distinct divisors among the natural numbers, namely 1 and itself.
This routine proves or disproves rigorously that a number is prime, which can be very slow when x is indeed prime and has more than 1000 digits, say. Use ispseudoprime to quickly check for compositeness. See also factor. It accepts vector/matrices arguments, and is then applied componentwise.
If flag = 0, use a combination of Baillie-PSW pseudo primality test (see ispseudoprime), Selfridge "p-1" test if x-1 is smooth enough, and Adleman-Pomerance-Rumely-Cohen-Lenstra (APRCL) for general x.
If flag = 1, use Selfridge-Pocklington-Lehmer "p-1" test and output a primality certificate as follows: return
* 0 if x is composite,
* 1 if x is small enough that passing Baillie-PSW test guarantees its primality (currently x < 2^{64}, as checked by Jan Feitsma),
* 2 if x is a large prime whose primality could only sensibly be proven (given the algorithms implemented in PARI) using the APRCL test.
* Otherwise (x is large and x-1 is smooth) output a three column matrix as a primality certificate. The first column contains prime divisors p of x-1 (such that prod p^{v_p(x-1)} > x^{1/3}), the second the corresponding elements a_p as in Proposition 8.3.1 in GTM 138, and the third the output of isprime(p,1).
The algorithm fails if one of the pseudo-prime factors is not prime, which is exceedingly unlikely and well worth a bug report. Note that if you monitor isprime at a high enough debug level, you may see warnings about untested integers being declared primes. This is normal: we ask for partial factorisations (sufficient to prove primality if the unfactored part is not too large), and factor warns us that the cofactor hasn't been tested. It may or may not be tested later, and may or may not be prime. This does not affect the validity of the whole isprime procedure.
If flag = 2, use APRCL.
The library syntax is GEN gisprime(GEN x, long flag).
#### ispseudoprime(x,{flag})
true (1) if x is a strong pseudo prime (see below), false (0) otherwise. If this function returns false, x is not prime; if, on the other hand it returns true, it is only highly likely that x is a prime number. Use isprime (which is of course much slower) to prove that x is indeed prime. The function accepts vector/matrices arguments, and is then applied componentwise.
If flag = 0, checks whether x is a Baillie-Pomerance-Selfridge-Wagstaff pseudo prime (strong Rabin-Miller pseudo prime for base 2, followed by strong Lucas test for the sequence (P,-1), P smallest positive integer such that P^2 - 4 is not a square mod x).
There are no known composite numbers passing this test, although it is expected that infinitely many such numbers exist. In particular, all composites <= 2^{64} are correctly detected (checked using http://www.cecm.sfu.ca/Pseudoprimes/index-2-to-64.html).
If flag > 0, checks whether x is a strong Miller-Rabin pseudo prime for flag randomly chosen bases (with end-matching to catch square roots of -1).
The library syntax is GEN gispseudoprime(GEN x, long flag).
#### issquare(x,{&n})
true (1) if x is a square, false (0) if not. What "being a square" means depends on the type of x: all t_COMPLEX are squares, as well as all non-negative t_REAL; for exact types such as t_INT, t_FRAC and t_INTMOD, squares are numbers of the form s^2 with s in Z, Q and Z/NZ respectively.
? issquare(3) \\ as an integer
%1 = 0
? issquare(3.) \\ as a real number
%2 = 1
? issquare(Mod(7, 8)) \\ in Z/8Z
%3 = 0
? issquare( 5 + O(13^4) ) \\ in Q_13
%4 = 0
If n is given, a square root of x is put into n.
? issquare(4, &n)
%1 = 1
? n
%2 = 2
? issquare([4, x^2], &n)
%3 = [1, 1] \\ both are squares
? n
%4 = [2, x] \\ the square roots
For polynomials, either we detect that the characteristic is 2 (and check directly odd and even-power monomials) or we assume that 2 is invertible and check whether squaring the truncated power series for the square root yields the original input. The function accepts vector/matrices arguments, and is then applied componentwise.
The library syntax is GEN gissquareall(GEN x, GEN *n = NULL). Also available is GEN gissquare(GEN x).
#### issquarefree(x)
true (1) if x is squarefree, false (0) if not. Here x can be an integer or a polynomial. The function accepts vector/matrices arguments, and is then applied componentwise.
The library syntax is GEN gissquarefree(GEN x). For scalar arguments x ( t_INT or t_POL), the function long issquarefree(GEN x) is easier to use.
#### kronecker(x,y)
Kronecker symbol (x|y), where x and y must be of type integer. By definition, this is the extension of Legendre symbol to Z x Z by total multiplicativity in both arguments with the following special rules for y = 0, -1 or 2:
* (x|0) = 1 if |x| = 1 and 0 otherwise.
* (x|-1) = 1 if x >= 0 and -1 otherwise.
* (x|2) = 0 if x is even and 1 if x = 1,-1 mod 8 and -1 if x = 3,-3 mod 8.
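For example (an illustrative session added here; the values are standard Legendre/Kronecker symbol evaluations):
? kronecker(5, 7)    \\ 5 is not a square modulo 7
%1 = -1
? kronecker(2, 7)    \\ 7 = -1 mod 8
%2 = 1
? kronecker(3, -1)   \\ x >= 0
%3 = 1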
The library syntax is GEN gkronecker(GEN x, GEN y).
#### lcm(x,{y})
least common multiple of x and y, i.e. such that lcm(x,y)*gcd(x,y) = abs(x*y). If y is omitted and x is a vector, returns the lcm of all components of x.
When x and y are both given and one of them is a vector/matrix type, the LCM is again taken recursively on each component, but in a different way. If y is a vector, resp.matrix, then the result has the same type as y, and components equal to lcm(x, y[i]), resp. lcm(x, y[,i]). Else if x is a vector/matrix the result has the same type as x and an analogous definition. Note that for these types, lcm is not commutative.
Note that lcm(v) is quite different from
l = v[1]; for (i = 1, #v, l = lcm(l, v[i]))
Indeed, lcm(v) is a scalar, but l may not be (if one of the v[i] is a vector/matrix). The computation uses a divide-conquer tree and should be much more efficient, especially when using the GMP multiprecision kernel (and more subquadratic algorithms become available):
? v = vector(10^4, i, random);
? lcm(v);
time = 323 ms.
? l = v[1]; for (i = 1, #v, l = lcm(l, v[i]))
time = 833 ms.
The library syntax is GEN glcm0(GEN x, GEN y = NULL).
#### moebius(x)
Moebius mu-function of |x|. x must be of type integer. The function accepts vector/matrices arguments, and is then applied componentwise.
The library syntax is GEN gmoebius(GEN x). For a t_INT x, the variant long moebius(GEN n) is generally easier to use.
#### nextprime(x)
finds the smallest pseudoprime (see ispseudoprime) greater than or equal to x. x can be of any real type. Note that if x is a pseudoprime, this function returns x and not the smallest pseudoprime strictly larger than x. To rigorously prove that the result is prime, use isprime. The function accepts vector/matrices arguments, and is then applied componentwise.
The library syntax is GEN gnextprime(GEN x). For a scalar x, long nextprime(GEN n) is also available.
#### numbpart(n)
gives the number of unrestricted partitions of n, usually called p(n) in the literature; in other words the number of nonnegative integer solutions to a + 2b + 3c + ... = n. n must be of type integer and n < 10^{15} (with trivial values p(n) = 0 for n < 0 and p(0) = 1). The algorithm uses the Hardy-Ramanujan-Rademacher formula. To explicitly enumerate them, see partitions.
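For instance (illustrative; p(5) = 7 is easily checked by hand):
? numbpart(5)   \\ 5, 4+1, 3+2, 3+1+1, 2+2+1, 2+1+1+1, 1+1+1+1+1
%1 = 7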
The library syntax is GEN numbpart(GEN n).
#### numdiv(x)
number of divisors of |x|. x must be of type integer. The function accepts vector/matrices arguments, and is then applied componentwise.
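For instance (illustrative):
? numdiv(12)   \\ divisors are 1, 2, 3, 4, 6, 12
%1 = 6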
The library syntax is GEN gnumbdiv(GEN x). If x is a t_INT, one may use GEN numbdiv(GEN n) directly.
#### omega(x)
number of distinct prime divisors of |x|. x must be of type integer.
? factor(392)
%1 =
[2 3]
[7 2]
? omega(392)
%2 = 2; \\ without multiplicity
? bigomega(392)
%3 = 5; \\ = 3+2, with multiplicity
The function accepts vector/matrices arguments, and is then applied componentwise.
The library syntax is GEN gomega(GEN x). For a t_INT x, the variant long omega(GEN n) is generally easier to use.
#### partitions(n,{restr = 0})
returns vector of partitions of the integer n (negative values return [], n = 0 returns the trivial partition of the empty set). The second optional argument may be set to a non-negative number smaller than n to restrict the value of each element in the partitions to that value. The default of 0 means that this maximum is n itself.
A partition is given by a t_VECSMALL:
? partitions(4, 2)
%1 = [Vecsmall([2, 2]), Vecsmall([1, 1, 2]), Vecsmall([1, 1, 1, 1])]
correspond to 2+2, 1+1+2, 1+1+1+1.
The library syntax is GEN partitions(long n, long restr).
#### polrootsff(x,{p},{a})
returns the vector of distinct roots of the polynomial x in the field F_q defined by the irreducible polynomial a over F_p. The coefficients of x must be operation-compatible with Z/pZ. Either a or p can be omitted (in which case both are ignored) if x has t_FFELT coefficients:
? polrootsff(x^2 + 1, 5, y^2+3) \\ over F_5[y]/(y^2+3) ~ F_25
%1 = [Mod(Mod(3, 5), Mod(1, 5)*y^2 + Mod(3, 5)),
Mod(Mod(2, 5), Mod(1, 5)*y^2 + Mod(3, 5))]
? t = ffgen(y^2 + Mod(3,5), 't); \\ a generator for F_25 as a t_FFELT
? polrootsff(x^2 + 1) \\ not enough information to determine the base field
*** at top-level: polrootsff(x^2+1)
*** ^-----------------
*** polrootsff: incorrect type in factorff.
? polrootsff(x^2 + t^0) \\ make sure one coeff. is a t_FFELT
%3 = [3, 2]
? polrootsff(x^2 + t + 1)
%4 = [2*t + 1, 3*t + 4]
Notice that the second syntax is easier to use and much more readable.
The library syntax is GEN polrootsff(GEN x, GEN p = NULL, GEN a = NULL).
#### precprime(x)
finds the largest pseudoprime (see ispseudoprime) less than or equal to x. x can be of any real type. Returns 0 if x <= 1. Note that if x is a prime, this function returns x and not the largest prime strictly smaller than x. To rigorously prove that the result is prime, use isprime. The function accepts vector/matrices arguments, and is then applied componentwise.
The library syntax is GEN gprecprime(GEN x). For a scalar x, long precprime(GEN n) is also available.
#### prime(n)
the n-th prime number, which must be among the precalculated primes.
The library syntax is GEN prime(long n).
#### primepi(x)
the prime counting function. Returns the number of primes p, p <= x. Uses a naive algorithm so that x must be less than primelimit.
The library syntax is GEN primepi(GEN x).
#### primes(x)
creates a row vector whose components are the first x prime numbers, which must be among the precalculated primes.
? primes(10) \\ the first 10 primes
%1 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
? primes(primepi(10)) \\ the primes up to 10
%2 = [2, 3, 5, 7]
The library syntax is GEN primes(long x).
#### qfbclassno(D,{flag = 0})
ordinary class number of the quadratic order of discriminant D. In the present version 2.5.1, an O(D^{1/2}) algorithm is used for D > 0 (using Euler product and the functional equation) so D should not be too large, say D < 10^8, for the time to be reasonable. On the other hand, for D < 0 one can reasonably compute qfbclassno(D) for |D| < 10^{25}, since the routine uses Shanks's method which is in O(|D|^{1/4}). For larger values of |D|, see quadclassunit.
If flag = 1, compute the class number using Euler products and the functional equation. However, it is in O(|D|^{1/2}).
Important warning. For D < 0, this function may give incorrect results when the class group has many cyclic factors, because implementing Shanks's method in full generality slows it down immensely. It is therefore strongly recommended to double-check results using either the version with flag = 1 or the function quadclassunit.
Warning. Contrary to what its name implies, this routine does not compute the number of classes of binary primitive forms of discriminant D, which is equal to the narrow class number. The two notions are the same when D < 0 or the fundamental unit ε has negative norm; when D > 0 and N(ε) > 0, the number of classes of forms is twice the ordinary class number. This is a problem which we cannot fix for backward compatibility reasons. Use the following routine if you are only interested in the number of classes of forms:
QFBclassno(D) =
qfbclassno(D) * if (D < 0 || norm(quadunit(D)) < 0, 1, 2)
Here are a few examples:
? qfbclassno(400000028)
time = 3,140 ms.
%1 = 1
? quadclassunit(400000028).no
time = 20 ms. \\ much faster
%2 = 1
? qfbclassno(-400000028)
time = 0 ms.
%3 = 7253 \\ correct, and fast enough
? quadclassunit(-400000028).no
time = 0 ms.
%4 = 7253
See also qfbhclassno.
The library syntax is GEN qfbclassno0(GEN D, long flag). The following functions are also available:
GEN classno(GEN D) (flag = 0)
GEN classno2(GEN D) (flag = 1).
Finally
GEN hclassno(GEN D) computes the class number of an imaginary quadratic field by counting reduced forms, an O(|D|) algorithm.
#### qfbcompraw(x,y)
composition of the binary quadratic forms x and y, without reduction of the result. This is useful e.g. to compute a generating element of an ideal.
The library syntax is GEN qfbcompraw(GEN x, GEN y).
#### qfbhclassno(x)
Hurwitz class number of x, where x is non-negative and congruent to 0 or 3 modulo 4. For x > 5·10^5, we assume the GRH, and use quadclassunit with default parameters.
The library syntax is GEN hclassno(GEN x).
#### qfbnucomp(x,y,L)
composition of the primitive positive definite binary quadratic forms x and y (type t_QFI) using the NUCOMP and NUDUPL algorithms of Shanks, à la Atkin. L is any positive constant, but for optimal speed, one should take L = |D|^{1/4}, where D is the common discriminant of x and y. When x and y do not have the same discriminant, the result is undefined.
The current implementation is straightforward and in general slower than the generic routine (since the latter takes advantage of asymptotically fast operations and careful optimizations).
The library syntax is GEN nucomp(GEN x, GEN y, GEN L). Also available is GEN nudupl(GEN x, GEN L) when x = y.
#### qfbnupow(x,n)
n-th power of the primitive positive definite binary quadratic form x using Shanks's NUCOMP and NUDUPL algorithms (see qfbnucomp, in particular the final warning).
The library syntax is GEN nupow(GEN x, GEN n).
#### qfbpowraw(x,n)
n-th power of the binary quadratic form x, computed without doing any reduction (i.e. using qfbcompraw). Here n must be non-negative and n < 2^{31}.
The library syntax is GEN qfbpowraw(GEN x, long n).
#### qfbprimeform(x,p)
prime binary quadratic form of discriminant x whose first coefficient is p, where |p| is a prime number. By abuse of notation, p = ± 1 is also valid and returns the unit form. Returns an error if x is not a quadratic residue mod p, or if x < 0 and p < 0. (Negative definite t_QFI are not implemented.) In the case where x > 0, the "distance" component of the form is set equal to zero according to the current precision.
The library syntax is GEN primeform(GEN x, GEN p, long prec).
#### qfbred(x,{flag = 0},{d},{isd},{sd})
reduces the binary quadratic form x (updating Shanks's distance function if x is indefinite). The binary digits of flag are toggles meaning
1: perform a single reduction step
2: don't update Shanks's distance
The arguments d, isd, sd, if present, supply the values of the discriminant, floor(sqrt(d)), and sqrt(d) respectively (no checking is done of these facts). If d < 0 these values are useless, and all references to Shanks's distance are irrelevant.
The library syntax is GEN qfbred0(GEN x, long flag, GEN d = NULL, GEN isd = NULL, GEN sd = NULL). Also available are
GEN redimag(GEN x) (for definite x),
and for indefinite forms:
GEN redreal(GEN x)
GEN rhoreal(GEN x) ( = qfbred(x,1)),
GEN redrealnod(GEN x, GEN isd) ( = qfbred(x,2,,isd)),
GEN rhorealnod(GEN x, GEN isd) ( = qfbred(x,3,,isd)).
#### qfbsolve(Q,p)
Solve the equation Q(x,y) = p over the integers, where Q is a binary quadratic form and p a prime number.
Return [x,y] as a two-component vector, or zero if there is no solution. Note that this function returns only one solution and not all the solutions.
Let D = disc Q. The algorithm used runs in probabilistic polynomial time in p (through the computation of a square root of D modulo p); it is polynomial time in D if Q is imaginary, but exponential time if Q is real (through the computation of a full cycle of reduced forms). In the latter case, note that bnfisprincipal provides a solution in heuristic subexponential time in D assuming the GRH.
The library syntax is GEN qfbsolve(GEN Q, GEN p).
#### quadclassunit(D,{flag = 0},{tech = []})
Buchmann-McCurley's sub-exponential algorithm for computing the class group of a quadratic order of discriminant D.
This function should be used instead of qfbclassno or quadregula when D < -10^{25}, D > 10^{10}, or when the structure is wanted. It is a special case of bnfinit, which is slower, but more robust.
The result is a vector v whose components should be accessed using member functions:
* v.no: the class number
* v.cyc: a vector giving the structure of the class group as a product of cyclic groups;
* v.gen: a vector giving generators of those cyclic groups (as binary quadratic forms).
* v.reg: the regulator, computed to an accuracy which is the maximum of an internal accuracy determined by the program and the current default (note that once the regulator is known to a small accuracy it is trivial to compute it to very high accuracy, see the tutorial).
The flag is obsolete and should be left alone. In older versions, it supposedly computed the narrow class group when D > 0, but this did not work at all; use the general function bnfnarrow.
Optional parameter tech is a row vector of the form [c_1, c_2], where c_1 <= c_2 are positive real numbers which control the execution time and the stack size, see [Label: se:GRHbnf]. The parameter is used as a threshold to balance the relation finding phase against the final linear algebra. Increasing the default c_1 = 0.2 means that relations are easier to find, but more relations are needed and the linear algebra will be harder. The parameter c_2 is mostly obsolete and should not be changed, but we still document it for completeness: we compute a tentative class group by generators and relations using a factorbase of prime ideals <= c_1 (log |D|)^2, then prove that ideals of norm <= c_2 (log |D|)^2 do not generate a larger group. By default an optimal c_2 is chosen, so that the result is provably correct under the GRH --- a famous result of Bach states that c_2 = 6 is fine, but it is possible to improve on this algorithmically. You may provide a smaller c_2, it will be ignored (we use the provably correct one); you may provide a larger c_2 than the default value, which results in longer computing times for equally correct outputs (under GRH).
The library syntax is GEN quadclassunit0(GEN D, long flag, GEN tech = NULL, long prec). If you really need to experiment with the tech parameter, it is usually more convenient to use GEN Buchquad(GEN D, double c1, double c2, long prec)
#### quaddisc(x)
discriminant of the quadratic field Q(sqrt{x}), where x belongs to Q.
The library syntax is GEN quaddisc(GEN x).
#### quadgen(D)
creates the quadratic number omega = (a+sqrt{D})/2 where a = 0 if D = 0 mod 4, a = 1 if D = 1 mod 4, so that (1,omega) is an integral basis for the quadratic order of discriminant D. D must be an integer congruent to 0 or 1 modulo 4, which is not a square.
The library syntax is GEN quadgen(GEN D).
#### quadhilbert(D)
relative equation defining the Hilbert class field of the quadratic field of discriminant D.
If D < 0, uses complex multiplication (Schertz's variant).
If D > 0 Stark units are used and (in rare cases) a vector of extensions may be returned whose compositum is the requested class field. See bnrstark for details.
The library syntax is GEN quadhilbert(GEN D, long prec).
#### quadpoly(D,{v = x})
creates the "canonical" quadratic polynomial (in the variable v) corresponding to the discriminant D, i.e. the minimal polynomial of quadgen(D). D must be an integer congruent to 0 or 1 modulo 4, which is not a square.
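For example (added illustration; the polynomials are the minimal polynomials of (1+sqrt(5))/2 and of sqrt(-4)/2 = i respectively):
? quadpoly(5)
%1 = x^2 - x - 1
? quadpoly(-4)
%2 = x^2 + 1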
The library syntax is GEN quadpoly0(GEN D, long v = -1), where v is a variable number.
#### quadray(D,f)
relative equation for the ray class field of conductor f for the quadratic field of discriminant D using analytic methods. A bnf for x^2 - D is also accepted in place of D.
For D < 0, uses the sigma function and Schertz's method.
For D > 0, uses Stark's conjecture, and a vector of relative equations may be returned. See bnrstark for more details.
The library syntax is GEN quadray(GEN D, GEN f, long prec).
#### quadregulator(x)
regulator of the quadratic field of positive discriminant x. Returns an error if x is not a discriminant (fundamental or not) or if x is a square. See also quadclassunit if x is large.
The library syntax is GEN quadregulator(GEN x, long prec).
#### quadunit(D)
fundamental unit of the real quadratic field Q(sqrt D) where D is the positive discriminant of the field. If D is not a fundamental discriminant, this probably gives the fundamental unit of the corresponding order. D must be an integer congruent to 0 or 1 modulo 4, which is not a square; the result is a quadratic number (see Section [Label: se:quadgen]).
The library syntax is GEN quadunit(GEN D).
#### removeprimes({x = []})
removes the primes listed in x from the prime number table. In particular removeprimes(addprimes()) empties the extra prime table. x can also be a single integer. List the current extra primes if x is omitted.
The library syntax is GEN removeprimes(GEN x = NULL).
#### sigma(x,{k = 1})
sum of the k-th powers of the positive divisors of |x|. x and k must be of type integer. The function accepts vector/matrices arguments for x, and is then applied componentwise.
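For instance (illustrative values, easily checked by hand):
? sigma(12)      \\ 1 + 2 + 3 + 4 + 6 + 12
%1 = 28
? sigma(12, 2)   \\ sum of the squares of the divisors
%2 = 210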
The library syntax is GEN gsumdivk(GEN x, long k). Also available are GEN gsumdiv(GEN n) (k = 1), GEN sumdivk(GEN n,long k) (n a t_INT) and GEN sumdiv(GEN n) (k = 1, n a t_INT)
#### sqrtint(x)
integer square root of x, which must be a non-negative integer. The result is non-negative and rounded towards zero.
The library syntax is GEN sqrtint(GEN x).
#### stirling(n,k,{flag = 1})
Stirling number of the first kind s(n,k) (flag = 1, default) or of the second kind S(n,k) (flag = 2), where n, k are non-negative integers. The former is (-1)^{n-k} times the number of permutations of n symbols with exactly k cycles; the latter is the number of ways of partitioning a set of n elements into k non-empty subsets. Note that if all s(n,k) are needed, it is much faster to compute sum_k s(n,k) x^k = x(x-1)...(x-n+1). Similarly, if a large number of S(n,k) are needed for the same k, one should use sum_n S(n,k) x^n = (x^k)/((1-x)...(1-kx)). (Should be implemented using a divide and conquer product.) Here are simple variants for n fixed:
/* list of s(n,k), k = 1..n */
vecstirling(n) = Vec( factorback(vector(n-1,i,1-i*'x)) )
/* list of S(n,k), k = 1..n */
vecstirling2(n) =
{ my(Q = x^(n-1), t);
vector(n, i, t = divrem(Q, x-i); Q=t[1]; t[2]);
}
The library syntax is GEN stirling(long n, long k, long flag). Also available are GEN stirling1(ulong n, ulong k) (flag = 1) and GEN stirling2(ulong n, ulong k) (flag = 2).
#### sumdedekind(h,k)
returns the Dedekind sum associated to the integers h and k, corresponding to a fast implementation of
s(h,k) = sum(n = 1, k-1, (n/k)*(frac(h*n/k) - 1/2))
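For instance (an added check; the value follows directly from the sum above):
? sumdedekind(1, 3)   \\ (1/3)*(1/3 - 1/2) + (2/3)*(2/3 - 1/2)
%1 = 1/18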
The library syntax is GEN sumdedekind(GEN h, GEN k).
#### zncoppersmith(P, N, X, {B = N})
N being an integer and P a polynomial in Z[X], finds all integers x with |x| <= X such that gcd(N, P(x)) >= B, using Coppersmith's algorithm (a famous application of the LLL algorithm). X must be smaller than exp(log^2 B / (deg(P) log N)): for B = N, this means X < N^{1/deg(P)}. Some x larger than X may be returned if you are very lucky. The smaller B (or the larger X), the slower the routine will be. The strength of Coppersmith's method is the ability to find roots modulo a general composite N: if N is a prime or a prime power, polrootsmod or polrootspadic will be much faster.
We shall now present two simple applications. The first one is finding non-trivial factors of N, given some partial information on the factors; in that case B must obviously be smaller than the largest non-trivial divisor of N.
setrand(1); \\ to make the example reproducible
p = nextprime(random(10^30));
q = nextprime(random(10^30)); N = p*q;
p0 = p % 10^20; \\ assume we know 1) p > 10^29, 2) the last 19 digits of p
p1 = zncoppersmith(10^19*x + p0, N, 10^12, 10^29)
\\ result in 10ms.
%1 = [35023733690]
? gcd(p1[1] * 10^19 + p0, N) == p
%2 = 1
and we recovered p, faster than by trying all possibilities < 10^{12}.
The second application is an attack on RSA with low exponent, when the message x is short and the padding P is known to the attacker. We use the same RSA modulus N as in the first example:
setrand(1);
P = random(N); \\ known padding
e = 3; \\ small public encryption exponent
X = floor(N^0.3); \\ N^(1/e - epsilon)
x0 = random(X); \\ unknown short message
C = lift( (Mod(x0,N) + P)^e ); \\ known ciphertext, with padding P
zncoppersmith((P + x)^3 - C, N, X)
\\ result in 3.8s.
%3 = [265174753892462432]
? %[1] == x0
%4 = 1
We guessed an integer of the order of 10^{18} in a couple of seconds.
The library syntax is GEN zncoppersmith(GEN P, GEN N, GEN X, GEN B = NULL).
#### znlog(x,g,{o})
discrete logarithm of x in (Z/NZ)^* in base g. If present, o represents the multiplicative order of g, see Section [Label: se:DLfun]; the preferred format for this parameter is [ord, factor(ord)], where ord is the order of g. If no o is given, assume that g generates (Z/NZ)^*.
This function uses a simple-minded combination of generic discrete log algorithms (index calculus methods are not yet implemented).
* Pohlig-Hellman algorithm, to reduce to groups of prime order q, where q | p-1 and p is an odd prime divisor of N,
* Shanks baby-step/giant-step (q small),
* Pollard rho method (q large).
The latter two algorithms require O(sqrt{q}) operations in the group on average, hence will not be able to treat cases where q > 10^{30}, say.
? g = znprimroot(101)
%1 = Mod(2,101)
? znlog(5, g)
%2 = 24
? g^24
%3 = Mod(5, 101)
? G = znprimroot(2 * 101^10)
%4 = Mod(110462212541120451003, 220924425082240902002)
? znlog(5, G)
%5 = 76210072736547066624
? G^% == 5
%6 = 1
The result is undefined when x is not a power of g or when x is not invertible mod N:
? znlog(6, Mod(2,3))
*** at top-level: znlog(6,Mod(2,3))
*** ^-----------------
*** znlog: impossible inverse modulo: Mod(0, 3).
For convenience, g is also allowed to be a p-adic number:
? g = 3+O(5^10); znlog(2, g)
%1 = 1015243
? g^%
%2 = 2 + O(5^10)
The library syntax is GEN znlog(GEN x, GEN g, GEN o = NULL).
#### znorder(x,{o})
x must be an integer mod n, and the result is the order of x in the multiplicative group (Z/nZ)^*. Returns an error if x is not invertible. The parameter o, if present, represents a non-zero multiple of the order of x, see Section [Label: se:DLfun]; the preferred format for this parameter is [ord, factor(ord)], where ord = eulerphi(n) is the cardinality of the group.
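For instance (illustrative; 2 is a primitive root modulo 101, as in the znlog example above):
? znorder(Mod(2, 101))
%1 = 100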
The library syntax is GEN znorder(GEN x, GEN o = NULL). Also available is GEN order(GEN x).
#### znprimroot(n)
returns a primitive root (generator) of (Z/nZ)^*, whenever this latter group is cyclic (n = 4 or n = 2p^k or n = p^k, where p is an odd prime and k >= 0). If the group is not cyclic, the result is undefined. If n is a prime, then the smallest positive primitive root is returned. This is no longer true for composites.
Note that this function requires factoring p-1 for p as above, in order to determine the exact order of elements in (Z/nZ)^*: this is likely to be very costly if p is large. The function accepts vector/matrices arguments, and is then applied componentwise.
The library syntax is GEN znprimroot0(GEN n). For a t_INT x, the special case GEN znprimroot(GEN n) is also available.
#### znstar(n)
gives the structure of the multiplicative group (Z/nZ)^* as a 3-component row vector v, where v[1] = phi(n) is the order of that group, v[2] is a k-component row-vector d of integers d[i] such that d[i] > 1 and d[i] | d[i-1] for i >= 2 and (Z/nZ)^* ~ prod_{i = 1}^k(Z/d[i]Z), and v[3] is a k-component row vector giving generators of the image of the cyclic groups Z/d[i]Z.
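For example (an added illustration; the generator component v[3] is omitted since its exact representation may vary). (Z/36Z)^* ~ Z/6 x Z/2, of order phi(36) = 12:
? znstar(36)[1]
%1 = 12
? znstar(36)[2]
%2 = [6, 2]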
The library syntax is GEN znstar(GEN n).
Throughout this course we will be using the statistical package R, and the friendly interface of Rstudio, to conduct data analysis. The first part of today’s seminar is about getting familiar with this piece of software. If you have never used R before, worry not! No prior knowledge is assumed, and we will walk you through all the necessary steps to conduct analysis on the topics discussed in the lectures. Of course, learning a new statistical software is not simple, and we don’t aim to fully introduce you to the world of R – a wide and dynamic world that goes way beyond what we are covering here. This is just a gentle introduction to some specific R coding that corresponds to some methods often used to infer causality from observational data (our main focus here). Our suggestion is that you build on this and keep practicing. Coding is like learning a foreign language: if you don’t use it, you lose it.
The first part of this assignment is a general introduction to R, starting from scratch. If you already have some experience with this software, you are welcome to join the second part of the assignment, where we analyse some experimental data.
Why R?
R is a statistical software that allows us to manipulate data and estimate a wide variety of statistical models. It is one of the fastest growing statistical software packages, one of the most popular data science software packages, and, perhaps most importantly, it is open source (free!). We will also be using the RStudio user-interface, which makes operating R somewhat easier.
Installing R and Rstudio
You should install both R and Rstudio onto your personal computers. You can download them from the following sources:
1st part: Introduction to R
Let’s see what we have. After installing R and Rstudio, we start Rstudio and see three panels. A screen-long panel on the left-hand side called the console, a smaller panel on the top right-hand side called the environment, and a last one on the bottom right-hand side called Plots & Help. The console is the simplest way to interact with R: you can type in some code (after the arrow $$>$$) and press Enter, and R will run it and provide an output. It is easy to visualize this if we simply use R as a calculator: when we type in mathematical operations in the console, R immediately returns the outcomes. Let’s see:
7 + 7
12 - 4
3 * 9
610 / 377
5^2
(0.31415 + 1.61803) / 3^3
See results!
## [1] 14
## [1] 8
## [1] 27
## [1] 1.618037
## [1] 25
## [1] 0.07156222
Directly typing code into the console is certainly easy, but often not the most efficient strategy. A better approach is to have a document where we can save all our code – that’s what scripts are for. R scripts are just plain text files that contain some R code, which we can edit the same way we would in Word or any other text file. We can open R scripts within Rstudio: just go to File –> New File –> R Script (or just press Cmd/Ctrl + shift + N for a shortcut).
We can now see a new panel popping up taking up the space in the top left-hand side of Rstudio, just above the console. You can now type all your code into the script, save it, and open it again whenever you want. If you want to run a piece of code from the script, you can always just copy and paste it on the console, though this is of course very inefficient. Instead, you can ask R to run any piece of code from the script directly. There are a few different ways to do it.
• To run an entire line, place the cursor on the line you want to run and use the Run button (on the top right-hand side of the script), or just press Ctrl/Cmd + Enter
• To run multiple lines (or even just part of a single line), highlight the text you want to run and use the Run button, or press Ctrl/Cmd + Enter
• To run the entire script, use the Source button, or press Ctrl/Cmd + Shift + S
You should always work from an R script! Our suggestion is that you create a different script for each seminar, and save them using reasonable names such as “seminar1.R”, “seminar2.R”, and so on.
Objects, vectors, functions
When working with R, we store information by creating “objects”. We create objects all the time; it is a simple way of labelling some piece of information so that we can use it in subsequent tasks. Say we want to know the outcome of $$3 * 5$$, and then we want to divide that outcome by $$2$$. We could, of course, simply ask R to do the calculations directly:
(3 * 5) / 2
But we could also first create an object that stores the result of $$3*5$$ and then divide said object by $$2$$. We can give objects any name we like. To create objects, we need to use the assignment operator <-. If we want to name the outcome of $$3*5$$ outcome, then we simply need to use the assignment operator:
outcome <- 3 * 5
Notice that R does not provide any output after we create an object. That is because we are not asking for any output, we are simply creating an object. Note, however, that the environment panel lists all the objects we create in the current session. In case we want to confirm what piece of information is stored under a given label, apart from checking the environment panel, we can also just run the name of the object and see R’s output. For instance, when we run outcome in the console, R returns the number 15, which is the outcome of $$3*5$$.
outcome
## [1] 15
This is useful because, once we have created objects, we can use them to perform subsequent calculations. For instance:
outcome / 2
## [1] 7.5
And we can even use previously created objects to create a new object:
my_new_object <- outcome ^ 2
my_new_object
## [1] 225
Both objects that we have just created (outcome and my_new_object) contain just single numbers. But we can create objects that contain more information as well. Oftentimes, we want to create a long list of numbers in one specific order. Think of a regular spreadsheet, where we can enter numbers in a single column, separated (and ordered) according to their rows. In R language, those single columns are equivalent to vectors. A vector is simply a set of information contained together in a specific order. In order to create a vector in R, we use the c() function: instead of including new information at every row (as we would, were we using a regular spreadsheet), we separate new information by commas inside the parentheses. For instance, we could create a new vector by concatenating the following numbers:
new_vector <- c(0, 3, 1, 4, 1, 5, 9, 2)
new_vector
## [1] 0 3 1 4 1 5 9 2
Recall that vectors store information (in this case, numbers) in a specific order. This is important. We can use this order to access individual elements of a vector. We do this by subsetting a vector: we just need to use square brackets [ ] and include the number corresponding to the position we want to access. For instance, if we want to access the second element in our vector new_vector, we simply do the following:
new_vector[2]
## [1] 3
We can see that by using new_vector[2], R returns the second element of the vector new_vector, which is the number 3. If we want to access the seventh element of the vector new_vector, we just use new_vector[7] which returns the number 9, and so on.
Now that we have our first vector, we can see how functions work. Functions are a set of instructions: you provide an input and R generates some output – the backbone of R programming. A function is a command followed by round brackets ( ). Inputs are arguments that go inside the brackets; if a function requires more than one argument, these are separated by commas. For instance, we can add all elements of the vector new_vector together by using the function sum(). Here the input is the name of the vector:
sum(new_vector)
## [1] 25
And 25 is the output. As always, we can save the result of the output as an object using assignment operator <-.
sum_of_our_vector <- sum(new_vector)
Here, sum_of_our_vector is also an object! So we have performed a calculation (sum()) on some data (new_vector), and stored the result (sum_of_our_vector). Let’s try some new functions such as mean(), median(), and summary(). What are they calculating?
See results!
mean(new_vector)
## [1] 3.125
median(new_vector)
## [1] 2.5
summary(new_vector)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.000 1.000 2.500 3.125 4.250 9.000
mean() returns the average value of a vector, median() returns the median, and summary() returns a set of useful statistics, such as the minimum and the maximum values of the vector, the interquartile range, the median, and the mean.
You could create your own functions using the function() function – e.g. you could come up with a new function to calculate the mean of a vector. Say you create a set of new functions that turn out to be really useful, so you would like to let anyone in the world use them as well by making them publicly available. This is the basic idea of R packages. People create new functions and make them publicly available. We will be using various packages throughout the course that help us conduct data analysis. An example of a useful package is psych, which contains the describe() function: like the summary() function, it describes the content of a variable or data set, but provides more details. Let’s see how it looks.
describe(new_vector)
## Error in describe(new_vector): could not find function "describe"
We got an Error message! Why? Well, that is because the describe() function does not exist in base R. We first need to load the psych package; only then will its new functions be available to us. This is a two-step process. The first step involves installing the relevant package on your computer using the install.packages() function; you only need to do this once. The second step involves loading the relevant package using the library() function; you need to do that every time you start a new session.
install.packages("psych") # you only need to use this once in your computer
library(psych) # making the package available in the current session
## Warning: package 'psych' was built under R version 4.0.2
describe(new_vector) # look, now we can use the describe() function
## vars n mean sd median trimmed mad min max range skew kurtosis se
## X1 1 8 3.12 2.9 2.5 3.12 2.22 0 9 9 0.81 -0.63 1.03
The describe() function is useful because it provides us with a wider set of useful statistics than the summary() function, such as the number of observations, standard deviation, range, skew and kurtosis. Note that R ignores everything that comes after the #. This is extremely useful to make comments throughout our code.
data.frames
Data frames are the workhorse when conducting data analysis with R. A data.frame object is the R equivalent to a spreadsheet: each row represents a unit, each column represents a variable. Nearly every time we conduct data analysis with R, we will be working with data.frames. In most cases (including the second part of this seminar), we will load a data set from a spreadsheet-based external file (.csv, .xls, .dta, .sav, among others) onto R; for now however, we will use a dataset that comes pre-installed with R just to see how it works. Let’s use the data() function to load the USArrests data set, which contains statistics in arrests per 100,000 residents for assault, murder, and rape in each of the 50 US states in 1973.
data("USArrests")
We can use the help() function to read more about this data set, which contains 50 observations (i.e., rows) on 4 variables (i.e., columns).
help(USArrests)
The data.frame is listed as a new object in the environment panel. We can click on it to see it as a spreadsheet; we can also type in the name of the data set to see what it looks like. Because data sets are often very long, instead of seeing all of it we can opt to look at just the first few rows using the head() function:
head(USArrests, 10) # the second argument specifies the number of rows we want to see
## Murder Assault UrbanPop Rape
## Alabama 13.2 236 58 21.2
## Alaska 10.0 263 48 44.5
## Arizona 8.1 294 80 31.0
## Arkansas 8.8 190 50 19.5
## California 9.0 276 91 40.6
## Colorado 7.9 204 78 38.7
## Connecticut 3.3 110 77 11.1
## Delaware 5.9 238 72 15.8
## Florida 15.4 335 80 31.9
## Georgia 17.4 211 60 25.8
Subsetting with $ and [,]
The easiest way to access a single variable (i.e., a column) of a data.frame is using the dollar sign $. For instance, to access the murder rate in US states in 1973:
USArrests$Murder
##  [1] 13.2 10.0  8.1  8.8  9.0  7.9  3.3  5.9 15.4 17.4  5.3  2.6 10.4  7.2  2.2
## [16]  6.0  9.7 15.4  2.1 11.3  4.4 12.1  2.7 16.1  9.0  6.0  4.3 12.2  2.1  7.4
## [31] 11.4 11.1 13.0  0.8  7.3  6.6  4.9  6.3  3.4 14.4  3.8 13.2 12.7  3.2  2.2
## [46]  8.5  4.0  5.7  2.6  6.8
R returns all the observations for the column Murder. What is this? It is a vector! A list of information (here, numbers) in one specific order. We can therefore apply everything we learned about vectors here. For example, we can access the third element of this vector:
USArrests$Murder[3]
## [1] 8.1
Which corresponds to the murder rate in Arizona. Let’s practice using the dollar sign to access the Assault variable. What are the first, tenth, and fifteenth elements?
See results!
USArrests$Assault[1]
## [1] 236
USArrests$Assault[10]
## [1] 211
USArrests$Assault[15]
## [1] 56
We saw earlier that we can subset a vector by using square brackets: [ ]. When dealing with data.frames, we often want to access certain observations (rows) or certain columns (variables) or a combination of the two without looking at the entire data set all at once. We can also use square brackets ([,]) to subset data.frames. In square brackets we put a row and a column coordinate separated by a comma. The row coordinate goes first and the column coordinate second. So USArrests[23, 3] returns the 23rd row and third column of the data frame. If we leave the column coordinate empty this means we would like all columns. So, USArrests[10,] returns the 10th row of the data set. If we leave the row coordinate empty, R returns the entire column. So, USArrests[,4] returns the fourth column of the data set.
USArrests[23, 3] # element in 23rd row, 3rd column
## [1] 66
USArrests[10,] # entire 10th row
##         Murder Assault UrbanPop Rape
## Georgia   17.4     211       60 25.8
USArrests[,4] # entire fourth column
##  [1] 21.2 44.5 31.0 19.5 40.6 38.7 11.1 15.8 31.9 25.8 20.2 14.2 24.0 21.0 11.3
## [16] 18.0 16.3 22.2  7.8 27.8 16.3 35.1 14.9 17.1 28.2 16.4 16.5 46.0  9.5 18.8
## [31] 32.1 26.1 16.1  7.3 21.4 20.0 29.3 14.9  8.3 22.5 12.8 26.9 25.5 22.9 11.2
## [46] 20.7 26.2  9.3 10.8 15.6
We can look at a selected number of rows of a dataset with the colon in brackets: USArrests[1:7,] returns the first seven rows and all columns of the data.frame USArrests. We could display the second and fourth columns of the dataset by using the c() function in brackets like so: USArrests[, c(2,4)].
Display all columns of the USArrests dataset and show rows 10 to 15. Next display all columns of the dataset but only for rows 10 and 15.
See results!
USArrests[10:15,]
##          Murder Assault UrbanPop Rape
## Georgia    17.4     211       60 25.8
## Hawaii      5.3      46       83 20.2
## Idaho       2.6     120       54 14.2
## Illinois   10.4     249       83 24.0
## Indiana     7.2     113       65 21.0
## Iowa        2.2      56       57 11.3
USArrests[c(10, 15),]
##         Murder Assault UrbanPop Rape
## Georgia   17.4     211       60 25.8
## Iowa       2.2      56       57 11.3
Logical operators
We can also subset by using logical values and logical operators. R has two special representations for logical values: TRUE and FALSE. R also has many logical operators, such as greater than (>), less than (<), or equal to (==). When we apply a logical operator to an object, the value returned should be a logical value (i.e. T or F). For instance:
5 > 3
## [1] TRUE
7 < 4
## [1] FALSE
2 == 1
## [1] FALSE
Here, when we ask R whether 5 is greater than 3, R returns the logical value TRUE. When we ask if 7 is less than 4, R returns the logical value FALSE. When we ask R whether 2 is equal to 1, R returns the logical value FALSE.
For the purposes of subsetting, logical operations are useful because they can be used to specify which elements of a vector or data.frame we would like returned. For instance, let’s subset the USArrests data and keep only states with a murder rate less than 5 per 100,000:
USArrests[USArrests$Murder < 5, ]
## Murder Assault UrbanPop Rape
## Connecticut 3.3 110 77 11.1
## Idaho 2.6 120 54 14.2
## Iowa 2.2 56 57 11.3
## Maine 2.1 83 51 7.8
## Massachusetts 4.4 149 85 16.3
## Minnesota 2.7 72 66 14.9
## Nebraska 4.3 102 62 16.5
## New Hampshire 2.1 57 56 9.5
## North Dakota 0.8 45 44 7.3
## Oregon 4.9 159 67 29.3
## Rhode Island 3.4 174 87 8.3
## South Dakota 3.8 86 45 12.8
## Utah 3.2 120 80 22.9
## Vermont 2.2 48 32 11.2
## Washington 4.0 145 73 26.2
## Wisconsin 2.6 53 66 10.8
Let’s go through this code slowly to see what is going on here. First, we are asking R to display the USArrests data.frame. But not all of it: we are using square brackets [ ], so only a subset of the dataset is displayed. There is some information before but nothing after the comma inside the square brackets, which means that only a fraction of rows but all columns should be displayed. Which rows? Let’s take a closer look at the code before the comma inside the square brackets. R should only display the rows for which the expression USArrests$Murder < 5 is TRUE, i.e. states with a murder rate less than 5 (per 100,000).
A few questions about data analysis with R
1. Calculate the mean and median of each of the variables included in the data set. Assign each of the results of these calculations to objects (choose sensible names!).
See results!
mean_murder <- mean(USArrests$Murder)
median_murder <- median(USArrests$Murder)
mean_assault <- mean(USArrests$Assault)
median_assault <- median(USArrests$Assault)
mean_urban <- mean(USArrests$UrbanPop)
median_urban <- median(USArrests$UrbanPop)
mean_rape <- mean(USArrests$Rape)
median_rape <- median(USArrests$Rape)
2. Is there a difference in the assault rate for urban and rural states? Define an urban state as one for which the urban population is greater than or equal to the median across all states. Define a rural state as one for which the urban population is less than the median.
See results!
urban_states <- USArrests[USArrests$UrbanPop >= median_urban, ]
rural_states <- USArrests[USArrests$UrbanPop < median_urban, ]
mean_assault_urban <- mean(urban_states$Assault)
mean_assault_rural <- mean(rural_states$Assault)
mean_assault_urban
## [1] 187.4643
mean_assault_rural
## [1] 149.5
The average assault rate in urban states is 187.46 (per 100,000), considerably larger than the average assault rate in rural states of 149.5.
2nd part: Analysing experimental data
Can transphobia be reduced through in-person conversations and perspective-taking exercises? To address this question, two researchers conducted a field experiment on door-to-door canvassing in South Florida. Targeting antitransgender prejudice, the intervention involved canvassers holding single, approximately 10-minute conversations with voters that encouraged actively taking the perspective of others, to see if these conversations could affect prejudicial attitudes towards transgender people.
In the experiment, the authors first recruited registered voters via mail for an online baseline survey. They then randomly assigned respondents of this baseline survey ($$n=1825$$) to either a treatment group targeted with the intervention ($$n=913$$) or a placebo group targeted with a conversation about recycling ($$n=912$$). For the intervention, 56 canvassers first knocked on voters’ doors unannounced. Then, canvassers asked to speak with the subject on their list and confirmed the person’s identity if the person came to the door. A total of several hundred individuals ($$n=501$$) came to their doors in the two conditions. For logistical reasons unrelated to the original study, we further reduce this dataset to (n=488), which is the full sample that appears in the transphobia.csv data (available on ILIAS).
The canvassers then engaged in a series of strategies previously shown to facilitate active processing under the treatment condition: canvassers informed voters that they might face a decision about the issue (whether to vote to repeal the law protecting transgender people); canvassers asked voters to explain their views; and canvassers showed a video that presented arguments on both sides. Canvassers defined the term "transgender" at this point and, if they were transgender themselves, noted this. The canvassers next attempted to encourage "analogic perspective-taking". Canvassers first asked each voter to talk about a time when they themselves were judged negatively for being different. The canvassers then encouraged voters to see how their own experience offered a window into transgender people’s experiences, hoping to facilitate voters’ ability to take transgender people’s perspectives. The intervention ended with another attempt to encourage active processing by asking voters to describe if and how the exercise changed their mind. All of the former steps constitute the "treatment."
The placebo group was reminded that recycling was most effective when everyone participates. The canvassers talked about how they were working on ways to decrease environmental waste and asked the voters who came to the door about their support for a new law that would require supermarkets to charge for bags instead of giving them away for free. This was meant to mimic the effect of canvassers interacting with the voters in face-to-face conversation on a topic different from transphobia.
The authors then asked respondents ($$n=488$$) to complete follow-up online surveys via email, presented as a continuation of the baseline survey. These follow-up surveys began 3 days, 3 weeks, 6 weeks, and 3 months after the intervention, when the baseline survey was also conducted.
The authors then created an index of tolerance towards transgender people. Higher values indicate higher tolerance, lower values indicate lower tolerance. The data set includes the following variables:
Name                 Description
vf_age               Age
vf_party             Party: D=Democrats, R=Republicans and N=Independents
vf_racename          Race: African American, Caucasian, Hispanic
vf_female            Gender: 1 if female, 0 if male
treat_ind            Treatment assignment: 1=treatment, 0=placebo
treatment.delivered  Intervention was actually delivered (=TRUE) vs. was not (=FALSE)
tolerance.t0         Tolerance variable at Baseline
tolerance.t1         Tolerance captured at 3 days after Baseline
tolerance.t2         Tolerance captured at 3 weeks after Baseline
tolerance.t3         Tolerance captured at 6 weeks after Baseline
tolerance.t4         Tolerance captured at 3 months after Baseline
Preliminaries
It is sensible when you start any data analysis project to make sure your computer is set up in an efficient way. Our suggestion is that you create a folder on your computer where you can save all your scripts throughout the course (i.e., seminar1.R, seminar2.R, etc.). We also recommend you to create a subfolder inside your main folder, and give it the name data: this is where you should save all your data sets.
When we work with RStudio, the first thing we should do is to set the working directory. This is essentially the folder on your computer from which R will operate (e.g., when looking for data and other scripts). There are two ways to do this. The easiest (and recommended) is to set the folder from which you want R to work as an R Project. You can do that by clicking on “Project: (none)” at the top-right corner of RStudio, then clicking “New project” and assigning it to your folder of choice. The second way to set the working directory involves knowing the location of the relevant folder on your computer. Say you have created a folder named “Causal Inference in OS” inside a folder named “GESIS 2021” on your desktop. Then you can use setwd() to set the working directory:
setwd("~/Desktop/GESIS 2021/Causal Inference in OS") # if you are working on a Mac
setwd("C:/Desktop/GESIS 2021/Causal Inference in OS") # if you are working on a Windows PC
Loading data
Once you have downloaded the data, put the transphobia.csv file into the data folder that you created earlier in the seminar. Now load the data into the current R session using the read.csv() function:
transphobia <- read.csv('data/transphobia.csv')
You can now check the environment panel and see a data.frame object named transphobia with 488 rows (observations) and 11 columns (variables).
Some exercises
Question 1 – Describing variables
1. Let’s start describing the data. Use the table() function and the treat_ind variable to find how many respondents were randomly assigned to the treatment and the control groups.
See results!
table(transphobia$treat_ind)
##
## 0 1
## 252 236
236 were randomly assigned to the treatment group, whereas 252 were randomly assigned to the control group.
2. Simply counting how many respondents were assigned to each treatment might not be informative. A better approach is to calculate the proportions, which we can do using the prop.table() function. What percentage of respondents were randomly assigned to the treatment and control groups?
Code hint: the prop.table() function requires a table() as an argument!
See results!
prop.table(table(transphobia$treat_ind))
##
##         0         1
## 0.5163934 0.4836066
48.36% of the respondents were randomly assigned to the treatment group, whereas 51.64% were assigned to the control group.
3. What about the response variable, how does it distribute across all respondents? Use the describe() function and the variable tolerance.t1, which measures tolerance levels towards transgender people three days after the intervention.
See results!
describe(transphobia$tolerance.t1)
## vars n mean sd median trimmed mad min max range skew kurtosis se
## X1 1 418 0.08 1.07 0.07 0.1 1.01 -2.26 2.07 4.32 -0.15 -0.46 0.05
We can see that this index of tolerance levels towards transgender people ranges from -2.26 to 2.07. The mean is 0.08, very similar to the median of 0.07, suggesting a symmetrical distribution. The standard deviation is 1.07.
Question 2 – Covariate balance
1. In order to make causal claims, we need to be confident that our treatment groups are balanced. Do respondents in the treatment and control groups have similar characteristics in terms of their age?
See results!
To assess balance in the variable vf_age, we can calculate the average age for each treatment group:
mean(transphobia$vf_age[transphobia$treat_ind == T]) # average age in the treatment group
## [1] 50.07627
mean(transphobia$vf_age[transphobia$treat_ind == F]) # average age in the control group
## [1] 48.60317
Respondents in the treatment group are on average 50.08 years old, whereas respondents in the control group are on average 48.6 years old. To assess whether the estimated difference of 1.48 years between the two groups is due to sampling uncertainty, we can conduct a t test using the t.test() function:
t.test(x = transphobia$vf_age[transphobia$treat_ind == 1],
y = transphobia$vf_age[transphobia$treat_ind == 0],
conf.level = 0.95,
var.equal = T)
##
## Two Sample t-test
##
## data: transphobia$vf_age[transphobia$treat_ind == 1] and transphobia$vf_age[transphobia$treat_ind == 0]
## t = 0.92987, df = 486, p-value = 0.3529
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -1.639634 4.585827
## sample estimates:
## mean of x mean of y
## 50.07627 48.60317
Given that (i) the t statistic of 0.93 is lower than 1.96, (ii) the p-value of 0.35 is larger than 0.05, and (iii) the confidence interval of $$[-1.64; 4.59]$$ includes zero as a plausible value (all three statements are equivalent), we can safely conclude that there is no significant difference in the average age of respondents assigned to the treatment and the control groups.
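For readers who want to see where these numbers come from, here is a minimal sketch (not part of the original seminar code) that recomputes the equal-variance t statistic and 95% confidence interval by hand from the two groups:
# Recompute the two-sample (equal-variance) t statistic by hand
age_treat   <- transphobia$vf_age[transphobia$treat_ind == 1]
age_control <- transphobia$vf_age[transphobia$treat_ind == 0]
diff_means <- mean(age_treat) - mean(age_control)  # estimated difference in mean age
pooled_var <- ((length(age_treat) - 1) * var(age_treat) +
               (length(age_control) - 1) * var(age_control)) /
              (length(age_treat) + length(age_control) - 2)
se_diff <- sqrt(pooled_var * (1 / length(age_treat) + 1 / length(age_control)))
diff_means / se_diff  # t statistic; should match the 0.93 reported by t.test() above
diff_means + c(-1, 1) * qt(0.975, df = 486) * se_diff  # 95% confidence interval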
1. Conduct the same analysis for the variables vf_female, vf_racename, and vf_party. Do respondents in the treatment and control groups have similar characteristics in terms of their gender, race, and party affiliation?
See results!
To assess the association between gender (as measured in the study) and treatment assignment, we can use the prop.table() and table() functions to visualize the cross-tabulation, and then conduct a Chi-squared test.
table_gender <- table(transphobia$vf_female,  # first argument is represented by rows
                      transphobia$treat_ind)  # second argument is represented by columns
prop.table(table_gender, 1) # the argument 1 indicates we want conditional proportions by rows
##
## 0 1
## 0 0.4903846 0.5096154
## 1 0.5357143 0.4642857
As we can see, 49% of all male respondents were assigned to the control group, whereas 51% were assigned to the treatment group. Among female respondents, 54% were assigned to the control group and 46% were assigned to the treatment group. To handle sampling uncertainty, let’s use the chisq.test() function.
chisq.test(table_gender)
##
## Pearson's Chi-squared test with Yates' continuity correction
##
## data: table_gender
## X-squared = 0.80883, df = 1, p-value = 0.3685
Given that $$p = 0.37$$, we have little evidence to reject the null hypothesis that there is no association between gender and treatment assignment.
Let us now adopt the same strategies and check balance for vf_racename and vf_party:
table_race <- table(transphobia$vf_racename, transphobia$treat_ind)
prop.table(table_race, 1)
##
## 0 1
## African American 0.5037037 0.4962963
## Caucasian 0.5161290 0.4838710
## Hispanic 0.5240175 0.4759825
table_party <- table(transphobia$vf_party, transphobia$treat_ind)
prop.table(table_party, 1)
##
## 0 1
## D 0.4834711 0.5165289
## N 0.5333333 0.4666667
## R 0.5634921 0.4365079
Overall, respondents of all three racial groups (African American, Caucasian, and Hispanic) and all three political groups (Democrats, Republicans, and Independents) seem relatively well balanced across the treatment and control groups, with close to 50% of respondents of each profile in either treatment group.
chisq.test(table_race)
##
## Pearson's Chi-squared test
##
## data: table_race
## X-squared = 0.14038, df = 2, p-value = 0.9322
chisq.test(table_party)
##
## Pearson's Chi-squared test
##
## data: table_party
## X-squared = 2.3074, df = 2, p-value = 0.3155
Given that both p-values are larger than 0.05, we fail to reject both null hypotheses of no association between either race or party affiliation and treatment assignment.
1. In particular, it is crucial to find balance in the response variable prior to the intervention. That is, respondents from both treatment groups should have, on average, the same levels of tolerance towards transgender people. This is the variable tolerance.t0. Is it the case here?
See details!
We can check whether respondents in the treatment and the control groups had the same levels of tolerance before the intervention by conducting a t test:
t.test(x = transphobia$tolerance.t0[transphobia$treat_ind == T],
y = transphobia$tolerance.t0[transphobia$treat_ind == F],
conf.level = 0.95,
var.equal = T)
##
## Two Sample t-test
##
## data: transphobia$tolerance.t0[transphobia$treat_ind == T] and transphobia$tolerance.t0[transphobia$treat_ind == F]
## t = -0.41789, df = 486, p-value = 0.6762
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.2282952 0.1482183
## sample estimates:
## mean of x mean of y
## -0.030558356 0.009480114
Based on the t statistic of -0.42, the p-value of 0.68, and the 95% confidence interval of $$[-0.23; 0.15]$$, we can safely conclude that respondents from both groups had, on average, the same levels of tolerance before the intervention.
1. What do you conclude in relation to covariate balance? What does it imply in terms of our ability to make causal claims?
See results!
We seem to have covariate balance, as none of the covariates (i.e., age, gender, race, and party affiliation) is associated with the treatment groups. Tolerance levels at the baseline (i.e., before the intervention) are also independent of the treatment groups. This is, of course, expected, considering that respondents were randomly assigned to receive the intervention. This implies that any differences we find between the two groups can be attributed to the treatment implementation. That is, we are able to make causal claims. For instance, if we compare the levels of tolerance towards transgender people between the two groups, we can identify the causal effect of in-person conversations and perspective-taking exercises on antitransgender prejudice.
Question 3 – Estimating an ATE
1. What is the average tolerance level 3 days after the intervention among those respondents who were randomly assigned to the treatment group? What about those in the control group? Can you interpret this mean difference causally?
See details!
# Average in the treatment group
average_treatment_t1 <- mean(transphobia$tolerance.t1[transphobia$treat_ind == T], na.rm = T)
# Average in the control group
average_control_t1 <- mean(transphobia$tolerance.t1[transphobia$treat_ind == F], na.rm = T)
# ATE
average_treatment_t1 - average_control_t1
## [1] 0.1443226
The average tolerance level 3 days after the intervention among those respondents who were randomly assigned to the treatment group is 0.15, whereas among those in the control group it is 0.01. The mean difference of 0.14 is the average treatment effect, given that respondents were randomly assigned to the treatment groups. Being randomly assigned to the treatment group led to an increase of 0.14 points on the tolerance scale.
1. When our goal is to make causal claims, we are mostly interested in unbiased estimates like the mean difference that we just calculated. But we still need to handle uncertainty. After all, outcomes could be due to sampling error. Using the t.test() function, what do you conclude about the statistical significance of the ATE?
See results!
t.test(x = transphobia$tolerance.t1[transphobia$treat_ind == T],
y = transphobia$tolerance.t1[transphobia$treat_ind == F],
conf.level = 0.95,
var.equal = T)
##
## Two Sample t-test
##
## data: transphobia$tolerance.t1[transphobia$treat_ind == T] and transphobia$tolerance.t1[transphobia$treat_ind == F]
## t = 1.3757, df = 416, p-value = 0.1696
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.06189076 0.35053604
## sample estimates:
## mean of x mean of y
## 0.153535857 0.009213218
Given the t statistic of 1.38, the p-value of 0.17, and the 95% confidence interval of $$[-0.06; 0.35]$$, we fail to reject the null hypothesis that the mean difference is zero. In other words, the average treatment effect is not statistically significant. We have little evidence to sustain that being randomly assigned to the treatment group leads to an increase in tolerance levels.
1. Estimate the average treatment effect again, now using a linear regression model.
Code hint: you can use the lm() function: lm(dependent_variable ~ explanatory_variable, data)
See results!
reg1 <- lm(tolerance.t1 ~ treat_ind, transphobia)
summary(reg1)
##
## Call:
## lm(formula = tolerance.t1 ~ treat_ind, data = transphobia)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2.41152 -0.69412 -0.06658 0.74185 2.05647
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 0.009213 0.071836 0.128 0.898
## treat_ind 0.144323 0.104907 1.376 0.170
##
## Residual standard error: 1.07 on 416 degrees of freedom
## (70 observations deleted due to missingness)
## Multiple R-squared: 0.004529, Adjusted R-squared: 0.002136
## F-statistic: 1.893 on 1 and 416 DF, p-value: 0.1696
As expected, results are exactly the same as before. The coefficient for a binary explanatory variable in a simple linear regression model represents the mean difference. This shows that we can use the regression framework to estimate ATEs.
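As a quick check (a sketch reusing objects created earlier in this seminar, not part of the original code), the slope of reg1 can be compared directly with the difference in means from Question 3:
coef(reg1)["treat_ind"]  # regression coefficient for the treatment indicator
average_treatment_t1 - average_control_t1  # difference in means; same value up to rounding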
1. Estimate a new linear regression model, now regressing tolerance.t1 on treat_ind, vf_age, vf_racename, vf_female, and vf_party. What happens with the coefficient for treat_ind? Is this expected?
See results!
reg2 <- lm(tolerance.t1 ~ treat_ind + vf_age + vf_racename + vf_female + vf_party, transphobia)
# install.packages('texreg') # install the R package 'texreg'. You only need to do this once on your computer
library(texreg)
## Warning: package 'texreg' was built under R version 4.0.2
## Version: 1.37.5
## Date: 2020-06-17
## Author: Philip Leifeld (University of Essex)
##
## Consider submitting praise using the praise or praise_interactive functions.
## Please cite the JSS article in your publications -- see citation("texreg").
screenreg(list(reg1, reg2))
##
## =========================================
## Model 1 Model 2
## -----------------------------------------
## (Intercept) 0.01 0.01
## (0.07) (0.19)
## treat_ind 0.14 0.15
## (0.10) (0.10)
## vf_age -0.01 ***
## (0.00)
## vf_racenameCaucasian 0.97 ***
## (0.15)
## vf_racenameHispanic 0.77 ***
## (0.14)
## vf_female 0.38 ***
## (0.10)
## vf_partyN -0.32 *
## (0.14)
## vf_partyR -0.62 ***
## (0.13)
## -----------------------------------------
## R^2 0.00 0.16
## Num. obs. 418 418
## =========================================
## *** p < 0.001; ** p < 0.01; * p < 0.05
The coefficient for treat_ind remains virtually unaltered after we include four new covariates in the regression model. This is expected, given that treat_ind was randomly assigned and achieved covariate balance.
Question 4 – What went wrong?
1. Results are not encouraging. We found a positive but not statistically significant ATE. One thing that could explain this is treatment delivery: canvassers might have made mistakes and ended up engaging in a conversation about transphobia with respondents assigned to the control group and about recycling with respondents assigned to the treatment group. Using the prop.table() function and the variable treatment.delivered, check whether this is the case.
See results!
table_delivery <- table(transphobia$treat_ind, transphobia$treatment.delivered)
prop.table(table_delivery, 1)
##
## FALSE TRUE
## 0 0.95634921 0.04365079
## 1 0.21610169 0.78389831
Considering respondents who were randomly assigned to the control group, 96% correctly received a placebo intervention and 4% incorrectly received the treatment intervention. Among those who were randomly assigned to the treatment group, 78% correctly received the treatment intervention and 22% incorrectly received the placebo intervention.
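A compact way to summarise overall compliance (a sketch added here, assuming treatment.delivered is read in as TRUE/FALSE as described in the data table above) is the share of respondents whose delivered intervention matched their assignment:
# Proportion of respondents for whom delivery matched assignment
mean((transphobia$treat_ind == 1) == transphobia$treatment.delivered, na.rm = TRUE)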
1. Estimate a linear regression model regressing tolerance.t1 on treatment.delivered. Does the coefficient represent the ATE?
See results!
reg3 <- lm(tolerance.t1 ~ treatment.delivered, transphobia)
screenreg(list(reg1, reg2, reg3))
##
## ======================================================
## Model 1 Model 2 Model 3
## ------------------------------------------------------
## (Intercept) 0.01 0.01 -0.01
## (0.07) (0.19) (0.07)
## treat_ind 0.14 0.15
## (0.10) (0.10)
## vf_age -0.01 ***
## (0.00)
## vf_racenameCaucasian 0.97 ***
## (0.15)
## vf_racenameHispanic 0.77 ***
## (0.14)
## vf_female 0.38 ***
## (0.10)
## vf_partyN -0.32 *
## (0.14)
## vf_partyR -0.62 ***
## (0.13)
## treatment.deliveredTRUE 0.22 *
## (0.11)
## ------------------------------------------------------
## R^2 0.00 0.16 0.01
## Adj. R^2 0.00 0.14 0.01
## Num. obs. 418 418 418
## ======================================================
## *** p < 0.001; ** p < 0.01; * p < 0.05
Respondents who received the treatment intervention (an in-person conversation about transphobia with perspective-taking exercises) had tolerance levels 0.22 points higher three days later than respondents who received the placebo intervention (a conversation about recycling). This is statistically significant, suggesting a relationship between the treatment delivery and prejudice. However, this is not the ATE. This variable was not randomly assigned, and therefore there could be potential confounders.
1. Well, they need to start with a letter. But otherwise they may contain numbers, upper and lower case letters (R distinguishes between them), and punctuation such as dots ( . ) and underscores ( _ ).
|
|
We have two 3D printers from five years ago and they were definitely cranky but showed the potential. Back then it was really at the start, we backed two Kickstarter projects and ended up with two Delta printers. The first was just a mass of piece parts without any instructions. Alex was incredible and assembled it anyway and actually made parts. Then we got one that was fully assembled called the Flux. The key idea was that you could replace the head and have laser cutters and just about everything else.
In practice, it was super cranky software. It used wifi to connect to the computer. What a mistake that was. And it would crash all the time.
Now scroll forward to 2020 and the world has really matured. You can now buy a $150 3D printer and play around. Heck, even Monoprice has one. The biggest choice you can make is between a filament printer (aka Fused Deposition Modeling or FDM, or Fused Filament Fabrication or FFF); this takes a plastic wire (filament), the head melts it, and then it deposits it in layers the way you would build up a layer cake. These are inexpensive and fast and great for larger pieces. The next step up is a stereolithographic (SLA) printer. This is completely different: there is liquid resin and the base goes up and down; a thin layer forms, a UV laser turns on, and this causes the resin to harden. This is great for ultra-high-precision small parts. Of course, as Bob would say, if you are serious, you would want both kinds of printers (who wouldn't!). If you are a beginner, then get a simpler filament printer. And Tom's Guide and PC Magazine explain that you can get a great printer at every price range:
1. Monoprice Voxel. For $400, you get a very decent printer that is reliable.
2. Ultimaker 3. This costs a fortune but is very reliable and detailed. It's the perfect thing for the enthusiast who just wants something that works. It's $3,500, so a breathtaking amount, but what you get is precision, speed, and simple operation. If you want a bigger one, then the Ultimaker 5 costs more and can make larger parts. It's relatively slower, but the print quality is higher. Or if you want to bump up to $4K, then the Ultimaker S3 is dual feed, so you don't have to worry about running out of filament and starting all over.
3. Formlabs Form 3. This costs $3,500 but you also need a $600 washing and $600 curing system for a total of $5,000. Kind of incredibly expensive, so only if you are really committed to the hobby or a small batch manufacturer. This is a resin printer that uses SLA to deposit, then you use a wash of isopropyl alcohol and then you heat it to cure it. So pretty involved, but the detail is incredible. It actually uses something called low force stereolithography.
The second consideration is being able to use lots of different filaments and resins, because they all have different characteristics. Among the common filament types, acrylonitrile butadiene styrene (ABS) melts at a high temperature and is more flexible, but it emits fumes and needs a heated print bed. Polylactic acid (PLA) is stiffer and looks smoother but is more brittle. There are many other materials like high-impact polystyrene (HIPS), polyvinyl alcohol (PVA), polyethylene terephthalate (PETT) and a host of others.
All3D is a dedicated publication with even more reviews and categories. For instance, for resin printers it has similar ratings with a few more added:
1. Form 3. They like this one as the best resin printer.
2. Original Prusa i3 MK3S. This is a hobbyist-loved product for $1K as a filament printer.
3. Ultimaker S5. As mentioned above, this is a dual-head and bigger brother to the Ultimaker 3. Also, if you want to work 24/7 in a production system, the Ultimaker S5 Pro has an air handler that filters out the particles and a holding system with six filament reels. Pretty cool for small-batch manufacturing.
In terms of where to buy it, Amazon doesn't have much selection, but dedicated sites like Dynamism and
|
|
# Math Help - trigonometric equation
1. ## trigonometric equation
4-6 Solve the equation for θ (0 ≤θ < 2π) . Be careful. There may be several
4. 2sin^2(θ) =1 5. tan^2(θ) − tan(θ) = 0 7. cos^2(θ) + sin(θ) =1
is the first and second ones just in the 45 degree family?
2. Originally Posted by CalcGeek31
4-6 Solve the equation for θ (0 ≤θ < 2π) . Be careful. There may be several
4. 2sin^2(θ) =1 5. tan^2(θ) − tan(θ) = 0 7. cos^2(θ) + sin(θ)
is the first and second ones just in the 45 degree family?
Both 1 and 2 consist of the $\frac{\pi}{4}$ family...
For #1, you get two angles, $\vartheta=\frac{\pi}{4}$ and another angle. Just keep in mind the quadrants where sine is positive! I'll leave it for you to find the other angle.
However, for #2, we see that we can manipulate the equation a little bit:
$\tan^2\vartheta-\tan\vartheta=0\implies\tan\vartheta(\tan\vartheta-1)=0$
We will get two different equations:
$\tan\vartheta=0$ and $\tan\vartheta=1$
Keep in mind the restriction on the angle : $0\leq\vartheta<2\pi$
Also keep in mind which quadrants tangent is positive!
I hope this helps.
--Chris
3. what about the third one? it is = 1 will go edit that
4. Originally Posted by CalcGeek31
4-6 Solve the equation for θ (0 ≤θ < 2π) . Be careful. There may be several
4. 2sin^2(θ) =1 5. tan^2(θ) − tan(θ) = 0 7. cos^2(θ) + sin(θ) =1
is the first and second ones just in the 45 degree family?
#3 : $\cos^2\vartheta+\sin\vartheta=1$
First convert the equation into sine. Use the fact that $\cos^2\vartheta=1-\sin^2\vartheta$
Thus, our equation then becomes $1-\sin^2\vartheta+\sin\vartheta=1\implies \sin^2\vartheta-\sin\vartheta=0\implies \sin\vartheta(\sin\vartheta-1)=0$
Again, we get two equations:
$\sin\vartheta=0$ and $\sin\vartheta=1$
Keep in mind the restriction on the angle : $0\leq\vartheta<2\pi$
I hope this helps.
--Chris
5. for #1... I just re did it 4 times and got 4 answers not 2 can anyone confirm?
6. Originally Posted by CalcGeek31
for #1... I just re did it 4 times and got 4 answers not 2 can anyone confirm?
Erm...
I overlooked something pretty simple and thought of one equation, but there were two sets of equations:
$\sin\vartheta=\tfrac{1}{\sqrt{2}}$ and $\sin\vartheta=-\tfrac{1}{\sqrt{2}}$
So you should get four solutions, not two. Sorry about that! It was my mistake!
--Chris
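For reference, here is the full solution set for #1 written out (a summary added here, not part of the original posts): $2\sin^2\vartheta=1$ gives $\sin\vartheta=\pm\tfrac{1}{\sqrt{2}}$, so on $0\leq\vartheta<2\pi$ the solutions are $\vartheta=\tfrac{\pi}{4},\ \tfrac{3\pi}{4},\ \tfrac{5\pi}{4},\ \tfrac{7\pi}{4}$.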
|
|
# Easy string parsing
When I need to stringify some values by joining them with commas, I do, for example:
string.Format("{0},{1},{2}", item.Id, item.Name, item.Count);
And have, for example, "12,Apple,20".
Then I want to do opposite operation, get values from given string. Something like:
parseFromString(str, out item.Id, out item.Name, out item.Count);
I know, it is possible in C. But I don't know such function in C#.
-
scanf to the rescue? lol – Mehrdad Jul 22 '11 at 16:10
Yea :) I need .NET variant of scanf – Sergey Metlov Jul 22 '11 at 16:12
As I said in my answer. What happens when item.Name contains ','? – Jodrell Jul 22 '11 at 16:25
Yes, this is easy enough. You just use the String.Split method to split the string on every comma.
For example:
string myString = "12,Apple,20";
string[] subStrings = myString.Split(',');
foreach (string str in subStrings)
{
Console.WriteLine(str);
}
-
So, then I should do: item.Id = long.Parse(subStrings[0]); item.Name = subStrings[1]; item.Count = int.Parse(subStrings[2]);... Is there any "faster" operation, like scanf() in C ? – Sergey Metlov Jul 22 '11 at 16:17
@DotNETNinja: Yes, to assign the sub-strings to a numeric type, you'll need to use the Parse method. And no, there's no "faster" operation. There's absolutely nothing "slow" about this method. If this is the bottleneck in your application, you have a serious problem. And in 99% of cases where you'll have to do this, it'll be to accept user input, and users are always the slowest point of any app. – Cody Gray Jul 22 '11 at 16:19
@DotNETNinja, If this is too slow, perhaps strings are the wrong type to use. Why not Marshal your structure? Wait, don't answer that ... – Jodrell Jul 22 '11 at 16:24
@Jodrell: What would you Marshal it to? I assume you're suggesting that he write a C++/CLI DLL that called scanf natively, and marshal data back and forth between that DLL and his C# application? If so, that's seriously misguided... The overhead of marshaling will vastly outweigh any performance benefits that the native code might offer. (I'm not even convinced that scanf will be faster in a head-to-head benchmark.) Not to mention the security and other well-known issues with scanf that you're avoiding by sticking with purely managed code. – Cody Gray Jul 22 '11 at 16:26
@Cody Gray, I meant "faster" = "less code" :) So, I understood your explanation, big thanks! – Sergey Metlov Jul 22 '11 at 16:35
Possible implementations would use String.Split or Regex.Match
example.
public void parseFromString(string input, out int id, out string name, out int count)
{
    var split = input.Split(',');
    if (split.Length == 3) // perhaps more validation here
    {
        id = int.Parse(split[0]);
        name = split[1];
        count = int.Parse(split[2]);
    }
    else
    {
        // out parameters must be assigned on every path, so fail loudly otherwise
        throw new FormatException("Expected exactly three comma-separated fields.");
    }
}
or
public void parseFromString(string input, out int id, out string name, out int count)
{
    var r = new Regex(@"(\d+),(\w+),(\d+)", RegexOptions.IgnoreCase);
    var match = r.Match(input);
    if (match.Success)
    {
        id = int.Parse(match.Groups[1].Value);
        name = match.Groups[2].Value;
        count = int.Parse(match.Groups[3].Value);
    }
    else
    {
        // out parameters must be assigned on every path
        throw new FormatException("Input did not match the expected 'id,name,count' pattern.");
    }
}
Edit: Finally, SO has a bunch of thread on scanf implementation in C#
Looking for C# equivalent of scanf
how do I do sscanf in c#
-
Use Split function
var result = "12,Apple,20".Split(',');
-
If you can assume the strings format, especially that item.Name does not contain a ,
void parseFromString(string str, out int id, out string name, out int count)
{
string[] parts = str.Split(',');
id = int.Parse(parts[0]);
name = parts[1];
count = int.Parse(parts[2]);
}
This will simply do what you want but I would suggest you add some error checking. Better still consider serializing/deserializing to xml.
-
Great! I think I should use some more complex separator, not a comma, to avoid errors with Name parsing. Thx. – Sergey Metlov Jul 22 '11 at 16:37
|
|
## Maps of candidates from coconsideration
G. Elliot Morris posted this embedding of the current Democratic presidential candidates in R^2 on Twitter:
where the edge weights (and thus the embeddings) derive from YouGov data, which for each pair of candidates (i,j) tell you which proportion of voters who report they’re considering candidate i also tell you they’re considering candidate j.
Of course, this matrix is non-symmetric, which makes me wonder exactly how he derived distances from it. I also think his picture looks a little weird; Sanders and Bloomberg are quite ideologically distinct, and their coconsiderers few in number, but they end up neighbors in his embedding.
Here was my thought about how one might try to produce an embedding using the matrix above. Model voter ideology as a standard Gaussian f in R^2 (I know, I know…) and suppose each candidate is a point y in R^2. You can model propensity to consider y as a standard Gaussian centered at y, so that the number of voters who are considering candidate y is proportional to the integral
$\int f(x) f(y-x) dx$
and the voters who are considering candidate z to
$\int f(x) f(y-x) f(z-x) dx$
So the proportions in Morris’s table can be estimated by the ratio of the second integral to the first, which, if I computed it right (be very unsure about the constants) is
$(2/3) \exp(-(1/12) |y-2z|^2)$.
(The reason this is doable in closed form is that the product of Gaussian probability density functions is just exp(-Q) for some other quadratic form, and we know how to integrate those.) In other words, the candidate y most likely to be considered by voters considering z is one who’s just like z but half as extreme. I think this is probably an artifact of the Gaussian I’m using, which doesn’t, for instance, really capture a scenario where there are multiple distinct clusters of voters; it posits a kind of center where ideological density is highest. Anyway, you can still try to find 8 points in R^2 making the function above approximate Morris’s numbers as closely as possible. I didn’t do this in a smart optimization way, I just initialized with random numbers and let it walk around randomly to improve the error until it stopped improving. I ended up here:
which agrees with Morris that Gabbard is way out there, that among the non-Gabbard candidates, Steyer and Klobuchar are hanging out there as vertices of the convex hull, and that Warren is reasonably central. But I think this picture more appropriately separates Bloomberg from Sanders.
How would you turn the coconsideration numbers into an R^2 embedding?
## Political coordinates test
A popular political quiz on the internet purports to place you on a Cartesian plane with “left-right” on one axis and “libertarian-communitarian” on the other, by presenting you with 36 assertions you're supposed to agree or disagree with. One of them is
“There are too many wasteful government programs.”
Well, of course there are! For this not to be the case, the government would have to be uniquely unwasteful among all large institutions. The quiz does not ask whether you agree that
“There are too many wasteful private enterprises.”
I would like to agree with both, but the test only allows me to agree with the first while remaining silent above the second, which makes me seem more of a free-market purist than I really am. Which questions you choose to ask affects which answers you’re able to get.
## Roch on phylogenetic trees, learning ultrametrics from noisy measurements, and the shrimp-dog
Sebastien Roch gave a beautiful and inspiring talk here yesterday about the problem of reconstructing an evolutionary tree given genetic data about present-day species. It was generally thought that keeping track of pairwise comparisons between species was not going to be sufficient to determine the tree efficiently; Roch has proven that it’s just the opposite. His talk gave me a lot to think about. I’m going to try to record a probably corrupted, certainly filtered through my own viewpoint account of Roch’s idea.
So let’s say we have n points P_1, … P_n, which we believe are secretly the leaves of a tree. In fact, let’s say that the edges of the tree are assigned lengths. In other words, there is a secret ultrametric on the finite set P_1, … P_n, which we wish to learn. In the phylogenetic case, the points are species, and the ultrametric distance d(P_i, P_j) between P_i and P_j measures how far back in the evolutionary tree we need to go to find a common ancestor between species i and species j.
One way to estimate d(P_i, P_j) is to study the correlation between various markers on the genomes of the two species. This correlation, in Roch’s model, is going to be on order
exp(-d(P_i,P_j))
which is to say that it is very close to 0 when P_i and P_j are far apart, and close to 1 when the two species have a recent common ancestor. What that means is that short distances are way easier to measure than long distances — you have no chance of telling the difference between a correlation of exp(-10) and exp(-11) unless you have a huge number of measurements at hand. Another way to put it: the error bar around your measurement of d(P_i,P_j) is much greater when your estimate is small than when your estimate is high; in particular, at great enough distance you’ll have no real confidence in any upper bound for the distance.
So the problem of estimating the metric accurately seems impossible except in small neighborhoods. But it isn’t. Because metrics are not just arbitrary symmetric n x n matrices. And ultrametrics are not just arbitrary metrics. They satisfy the ultrametric inequality
d(x,y) <= max(d(x,z),d(y,z)).
And this helps a lot. For instance, suppose the number of measurements I have is sufficient to estimate with high confidence whether or not a distance is less than 1, but totally helpless with distances on order 5. So if my measurements give me an estimate d(P_1, P_2) = 5, I have no real idea whether that distance is actually 5, or maybe 4, or maybe 100 — I can say, though, that it's probably not 1.
So am I stuck? I am not stuck! Because the distances are not independent of each other; they are yoked together under the unforgiving harness of the ultrametric inequality. Let’s say, for instance, that I find 10 other points Q_1, …. Q_10 which I can confidently say are within 1 of P_1, and 10 other points R_1, .. , R_10 which are within 1 of P_2. Then the ultrametric inequality tells us that
d(Q_i, R_j) = d(P_1, P_2)
for any one of the 100 ordered pairs (i,j)! So I have 100 times as many measurements as I thought I did — and this might be enough to confidently estimate d(P_1,P_2).
In biological terms: if I look at a bunch of genetic markers in a shrimp and a dog, it may be hard to estimate how far back in time one has to go to find their common ancestor. But the common ancestor of a shrimp and a dog is presumably also the common ancestor of a lobster and a wolf, or a clam and a jackal! So even if we’re only measuring a few markers per species, we can still end up with a reasonable estimate for the age of the proto-shrimp-dog.
What do you need if you want this to work? You need a reasonably large population of points which are close together. In other words, you want small neighborhoods to have a lot of points in them. And what Roch finds is that there’s a threshold effect; if the mutation rate is too fast relative to the amount of measurement per species you do, then you don’t hit “critical mass” and you can’t bootstrap your way up to a full high-confidence reconstruction of the metric.
This leads one to a host of interesting questions — interesting to me, that is, albeit not necessarily interesting for biology. What if you want to estimate a metric from pairwise distances but you don’t know it’s an ultrametric? Maybe instead you have some kind of hyperbolicity constraint; or maybe you have a prior on possible metrics which weights “closer to ultrametric” distances more highly. For that matter, is there a principled way to test the hypothesis that a measured distance is in fact an ultrametric in the first place? All of this is somehow related to this previous post about metric embeddings and the work of Eriksson, Darasathy, Singh, and Nowak.
|
|
# Convolution including $\delta(t-5)$
I know the two properties of convolution that are related to my question
1. $\quad x(t)*\delta(t)=x(t)$
2. $\quad x(t)*\delta(t-t_0)=x(t-t_0)$
But my question is, how can I use those two to calculate $$y(t)=x(5t)*\delta(t-5)$$
Set $f(t)=x(5t)$ and use your rule number 2: $$f(t)\star \delta(t-t_0)=f(t-t_0)=\ldots$$
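Carrying the suggested substitution through (a completing step added here, not part of the original answer; note that the shift applies to the whole argument of $f$): $$y(t)=f(t-5)=x\bigl(5(t-5)\bigr)=x(5t-25).$$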
|
|
# Tag Info
## Hot answers tagged cryptanalysis
8
Historically, there did exist a benefit to using a language that the adversary was not familiar with. The name for this is code talkers, and the most famous ones (at least in the USA) are the Navajo code talkers of World War II. The idea was to defeat attacks that relied on statistics about the language used in the plaintext. In modern cryptography, ...
5
Is there a reason that expanded messages are much more difficult to crack? When looking purely at encryption, message expansion does not really tell you anything about the difficulty to crack it. Message expansion is often a feature of asymmetric cryptosystems. Those are not inherently more difficult to crack than symmetric systems. Block ciphers also ...
4
Disclaimer: I'm answering without any knowledge on the content of the paper in question. Why there are numbers in this Figure 1, that are placed in vertical form? Because otherwise the figure would not fit into the page width (or the authors would have to use unreadably small font). How to interpret this axis X? I imagine that are interval times ...
4
If you're referring to a classical cipher, it might complicate frequency analysis and other such techniques. For a modern cipher, it makes no difference. Modern ciphers operate on arbitrary patterns of information. Ideally, the ciphertext of a modern cipher should have no relation of any kind to the associated plaintext, other then the key.
3
The actual "encryption" is done on this line: mysecretmessage[i] ^= ((mysecretvalue>>(8*(i%4)))&255); Clearly, this line XORs every byte (or at least, every element; but it makes sense to assume that this is indeed a byte array) of mysecretmessage with some value derived from mysecretvalue and the byte counter i. So what does the expression ((...
3
Assuming your differential uniformity and non-linearity figures are correct, then yes, your s-box is slightly stronger against basic differential and linear cryptanalysis. Although Anubis already was essentially immune to basic differential and linear cryptanalysis. However your s-box would need to be evaluated against other forms of cryptanalysis (e.g. ...
2
It will be uniformly random for such a simple statistical test. The problem is that you are treating the probability of bits having a particular state as independent of each other. You would need to look at the conditional probability distributions of certain bits being set given other bits. The entire joint distribution of output and input bits for a ...
2
Let $n = pq$. By assumption, $3$ divides $\varphi(n) = (p-1)(q-1)$. Without loss of generality, I assume that $3$ divides $(p-1)$ or, equivalently, that $p \equiv 1 \pmod {3}$. Fact Let $p$ be a prime such that $p \equiv 1 \pmod 3$. Let also $c$ be a cubic residue modulo $p$. If $y$ is a cubic root of $c$ then so are $y\cdot \omega \pmod p$ and $y \cdot \...
2
Around and about one hundred years ago, your idea would surely have made sense… but nowadays, modern technology and evolved cryptanalytic techniques are too smart to have a real problem coping with something like that. (Also see my related answer to “Why was the Navajo code not broken by the Japanese in WWII?”) Even when we completely ignore Kerckhoffs’ ...
2
This is not very secure. You directly leak the symbol distribution, because only the order of symbols changes. For short enough messages this allows easy decryption – e.g. "dr olllWeoH" is quite clearly "Hello World". Even for long messages or binary values, the fact that you leak e.g. a crucial byte may be enough. You also have not defined how the same key ...
2
I am basing my answer on Cryptopals. The basic idea is that as {c0,c0+3,c0+6,…} have all been xor-ed with the same byte, the number of differing bits between c0 and c3 is the same as between p0 and p3 (this number is called the Hamming distance between two characters). Furthermore, the distance between [c0 c1 c2] and [c3 c4 c5] is the same as between [p0 ...
1
"Guessed ID" means ID that the oracle guesses the attacker algorithm $A$ will attack.
1
I am wondering why people are using RSA keys when some types of double substitution ciphers seem to be just as secure if not better off. First of all, RSA is an asymmetric cipher while a substitution cipher is a symmetric cipher. Asymmetric ciphers are used to achieve different security needs, e.g. TLS authentication or non-repudiation of documents. Or, ...
1
I have reformatted the above equation as a program for GNU bc (part of GNU coreutils, found on most Linux systems). GNU bc will be much easier to find than Mathematica (although it is quite eccentric). Here is the code:
$ cat RSA-gnfs.bc
#!/usr/bin/bc -l
scale = 14
a = 1/3
b = 2/3
#print "RSA Key Length? "
c = read()
t = l( l(2 ^ c) )
# if b < 1, ...
|
|
1. ## Law of Quadratic reciprocity2
Let p be an odd prime number. Prove that
(3 over p) =1 if and only if p = +-1 mod 12
where (3 over p) denotes the legendre symbol.
2. Originally Posted by mndi1105
Let p be an odd prime number. Prove that
(3 over p) =1 if and only if p = +-1 mod 12
where (3 over p) denotes the legendre symbol.
Let $p>3$. There are two cases: $p\equiv 1(\bmod 4)$ or $p\equiv 3(\bmod 4)$. In the first case we get $(3/p) = (p/3)$. Now if $p\equiv 1(\bmod 3)$ then $(p/3) = (1/3)=1$ and if $p\equiv 2(\bmod 3)$ then $(p/3) = (2/3) = -1$, therefore $p\equiv 1(\bmod 3)$ in the first case. Together $p\equiv 1(\bmod 4)$ and $p\equiv 1(\bmod 3)$ give us $p\equiv 1(\bmod 12)$.
In the second case we get $(3/p) = -(p/3)$ and to get $(3/p) = 1$ it is necessary and sufficient to get $(p/3) = -1$. Now this happens when $p\equiv 2(\bmod 3)$. We have $p\equiv 3(\bmod 4)$ and $p\equiv 2(\bmod 3)$ which is equivalent to $p\equiv -1(\bmod 4)$ and $p\equiv -1(\bmod 3)$. Together this combines into $p\equiv -1(\bmod 12)$.
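As a quick numerical sanity check (added here, not part of the original posts): for $p=13\equiv 1\pmod{12}$ we have $4^2=16\equiv 3\pmod{13}$, so $(3/13)=1$; for $p=7$, which is not $\pm 1\pmod{12}$, the squares modulo $7$ are $\{1,2,4\}$, so $(3/7)=-1$, consistent with the statement.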
|
|
# How do I divide the columns of a matrix by the sum of its elements?
I am trying to create a transition matrix for a network. In order to do this, I need to sum down the column (the out degree), and then divide the column by the out degree in order to normalize it.
I am able to sum down the column. What I am unable to figure out how to do efficiently and easily is to divide the column by the sum.
L = {{0, 1, 0, 1, 0, 0, 0},
{0, 0, 1, 1, 1, 0, 0},
{0, 1, 0, 1, 0, 0, 0},
{0, 0, 0, 0, 1, 0, 0},
{1, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 1},
{0, 0, 0, 0, 0, 1, 0}};
-
If you need to do this with all columns, then:
Transpose[#/Total[#] & /@ Transpose[L]]
-
Where can I learn how to write these one liners that are so powerful? I never seem to fully understand the notation. – olliepower Mar 29 '13 at 2:29
@user2200667 there is a tutorial collection Core Language - I'd start there. But also you can start by looking up in documentation symbols like /@ & # // etc. – Vitaliy Kaurov Mar 29 '13 at 2:42
You can use Normalize with its second argument for this purpose:
(mat = Normalize[#, Total] & /@ Transpose@L // Transpose) // MatrixForm
Instead, if you were normalizing the rows by the sum of their elements, you could simply leave out the transposes and do
mat = Normalize[#, Total] & /@ L
or even
mat = #/Tr@#& /@ L
For your specific problem (transition matrix), you can use the new Markov process related functions in version 9 to get the transition matrix:
With[{m = DiscreteMarkovProcess[, L]},
mat = MarkovProcessProperties[m, "TransitionMatrix"]
] // MatrixForm
-
+1 This is nice info. – Vitaliy Kaurov Mar 29 '13 at 2:47
|
|
# Math Help - Exponential Equation
1. ## Exponential Equation
3^(3x+5) = 5
This is precalc review for my calc class... I think I have to take the log of both sides and then the exponent goes out front or something like that but I'm stuck.
2. $3^{3x+5} = 5$
using $a^b= c \Rightarrow b = \log_ac$
then
$3x+5 = \log_35$
Can you solve it from here?
3. Yes, but only with a calculator and converting to decimals. I don't really have the theory down.
4. Originally Posted by katekate
3^(3x+5) = 5
This is precalc review for my calc class... I think I have to take the log of both sides and then the exponent goes out front or something like that but I'm stuck.
$\ln(3^{3x+5}) = \ln(5)$
$(3x+5)\ln(3) = \ln(5)$
$3x+5 = \frac{\ln(5)}{\ln(3)}$
can you finish now?
5. yeah but with decimals again if that makes a difference.
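For reference, finishing the algebra from the exact form above (this closing step is an addition to the thread): $3x+5 = \frac{\ln(5)}{\ln(3)}$ gives $x = \frac{1}{3}\left(\frac{\ln(5)}{\ln(3)} - 5\right) \approx -1.18$.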
|
|
#### Volume 14, issue 2 (2014)
ISSN (electronic): 1472-2739
ISSN (print): 1472-2747
On compact hyperbolic manifolds of Euler characteristic two
### Vincent Emery
Algebraic & Geometric Topology 14 (2014) 853–861
##### Abstract
We prove that for $n>4$ there is no compact arithmetic hyperbolic $n$–manifold whose Euler characteristic has absolute value equal to $2$. In particular, this shows the nonexistence of arithmetically defined hyperbolic rational homology $n$–spheres with $n$ even and different from $4$.
Dedicated to the memory of Colin Maclachlan
##### Keywords
locally symmetric spaces, hyperbolic manifolds, arithmetic groups, rational homology spheres
##### Mathematical Subject Classification 2010
Primary: 22E40
Secondary: 55C35, 51M25
##### Publication
Received: 15 May 2013
Accepted: 9 September 2013
Published: 31 January 2014
##### Authors
Vincent Emery Department of Mathematics Stanford University Stanford, CA 94305 USA
|
|
# American Institute of Mathematical Sciences
March 2003, 9(2): 471-482. doi: 10.3934/dcds.2003.9.471
## Global solutions and self-similar solutions of the coupled system of semilinear wave equations in three space dimensions
1 Department of Applied Mathematics, Faculty of Engineering, Shizuoka University, Hamamatsu 432-8561, Japan 2 Mathematical Institute, Tohoku University, Sendai 980-8578, Japan
Received September 2001 Revised April 2002 Published December 2002
In this paper, we treat the coupled system of wave equations whose nonlinearities are $|u|^{p_j}|v|^{q_j}$ and whose propagation speeds may be different from each other. We study the lower bounds of $p_j$ and $q_j$ that assure the global existence of a class of small amplitude solutions which includes self-similar solutions. The exponent of self-similar solutions plays a crucial role in finding the lower bounds. Moreover, we prove that the discrepancy of propagation speeds allows us to bring them down. Conversely, if such conditions for the global existence do not hold, then no self-similar solution exists even for small initial data.
Citation: Hideo Kubo, Kotaro Tsugawa. Global solutions and self-similar solutions of the coupled system of semilinear wave equations in three space dimensions. Discrete & Continuous Dynamical Systems - A, 2003, 9 (2) : 471-482. doi: 10.3934/dcds.2003.9.471
|
|
## Submillimetre Variability of Eta Carinae: cool dust within the outer ejecta
Gomez, H. L.
Vlahakis, C.
Stretch, C. M.
Dunne, L.
Eales, S. A.
Beelen, A.
Gomez, E. L.
Edmunds, M. G.
##### Description
Previous submillimetre (submm) observations detected 0.7 solar masses of cool dust emission around the Luminous Blue Variable (LBV) star Eta Carinae. These observations were hindered by the low declination of Eta Carinae and contamination from free-free emission originating from the stellar wind. Here, we present deep submm observations with LABOCA at 870um, taken shortly after a maximum in the 5.5-yr radio cycle. We find a significant difference in the submm flux measured here compared with the previous measurement: the first indication of variability at submm wavelengths. A comparison of the submm structures with ionised emission features suggests the 870um is dominated by emission from the ionised wind and not thermal emission from dust. We estimate 0.4 +/- 0.1 solar masses of dust surrounding Eta Carinae. The spatial distribution of the submm emission limits the mass loss to within the last thousand years, and is associated with mass ejected during the great eruptions and the pre-outburst LBV wind phase; we estimate that Eta Carinae has ejected > 40 solar masses of gas within this timescale.
Comment: 5 pages, 3 figures, accepted by MNRAS Letters
##### Keywords
Astrophysics - Astrophysics of Galaxies
|
|
# accuracy
Measures the overall performance of the classification model in terms of the fraction of predictions made by the classifier that are correct. Interpretation: Accuracy is the fraction of correct predictions out of all examples, where the best value is 1 and the worst is 0.
## Syntax
Score = accuracy(targets,predictions)
## Inputs
targets
Actual label for each observation.
Type: double
Dimension: vector
predictions
Predicted value for each observation.
Type: double
Dimension: vector
## Outputs
Score
Accuracy of the classifier.
Type: double
Dimension: scalar
## Example
Usage of accuracy
targets = [0 1 0 1];
predictions = [1 1 1 1];
score1 = accuracy(targets, predictions);
> score1
score1 = 0.5
|
|
# Stirling number of the 1st kind (table) Calculator
## Calculates a table of the Stirling numbers of the first kind s(n,k) with specified n.
n: n = 1, 2, 3, ...
Stirling number of the 1st kind $s(n,k)$:
(1) $x(x-1)(x-2)\cdots(x-n+1)=\sum_{k=0}^{n} s(n,k)\,x^k$
(2) $s(n,0)=\delta_{n0}$, $s(n,n)=1$, and $s(n,k)=s(n-1,k-1)-(n-1)\,s(n-1,k)$ for $1\le k\le n$
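As an illustration of the recurrence above, here is a minimal sketch in R (an addition to this page, using the signed convention shown in the recurrence):
# Signed Stirling numbers of the first kind via the recurrence above
stirling1 <- function(n, k) {
  if (k == 0) return(as.numeric(n == 0))  # s(n,0) = delta_{n0}
  if (k == n) return(1)                   # s(n,n) = 1
  if (k < 0 || k > n) return(0)
  stirling1(n - 1, k - 1) - (n - 1) * stirling1(n - 1, k)
}
# Example: the row n = 4 should be 0, -6, 11, -6, 1
sapply(0:4, function(k) stirling1(4, k))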
Stirling number of the 1st kind (table)
[1] 2016/08/15 06:53 Male / Under 20 years old / High-school/ University/ Grad student / Very /
Purpose of use
Trying to derive an approximation for factorial.
Comment/Request
The ability to specify a k would be useful. Otherwise very, very useful.
[2] 2015/05/30 04:43 Male / 20 years old level / High-school/ University/ Grad student / Useful /
Purpose of use
Aid in a summer research project.
[3] 2014/12/10 00:08 Male / Under 20 years old / High-school/ University/ Grad student / A little /
Purpose of use
Programming exercices
|
|
# Extent changing in ArcGIS Pro when shapefile added to map which only had basemap?
Using ArcGIS 10.3 for Desktop:
In ArcMap:
• Open a Blank Map
• Add Basemap of Imagery (but any basemap will do)
• Zoom to Australia (but any area will do)
• Add the countries shapefile from Natural Earth (but any shapefile that covers an area much larger than where the basemap has been zoomed to will do)
• Notice how the extent does not change which to me is the expected behaviour
In ArcGIS Pro:
• Create a project using Map.aptx
• Change the Basemap to Imagery (but any basemap will do)
• Zoom to Australia (but any area will do)
• Add the countries shapefile from Natural Earth (but any shapefile that covers an area much larger than where the basemap has been zoomed to will do)
• Notice how the extent changes to become the extent of the shapefile that was just added
I think the software behaviour observed in the last dot point is a bug, and am wondering if anybody has already reported this and has a bug (NIM) number that can be used to track its resolution?
When I submitted this to local Esri support they responded:
ArcGIS Pro is designed to interact with data differently than ArcMap. Zooming to an input layer's extent is a modification within ArcGIS Pro which is designed to make it easier for users to interact with their data. This is not considered a bug and does not have a NIM number.
I still think it is a bug and will be interested to see if this software behaviour is changed to match that of ArcMap in a later release.
## Manage data
In ArcGIS Pro, you typically work in a project that is saved on your computer. However, you don't always need to save a project. Sometimes your tasks involve data preparation and management, and you don't need to make maps or solve analysis problems. In these situations, you can start ArcGIS Pro without creating a project. You can then process your data and close the application without saving a project.
• Estimated time: 60 minutes (including optional section)
• Software requirements:
• ArcGIS Pro
• ArcGIS Online portal connection
## ArcGIS Maps for Adobe Creative Cloud 2.2.2 patch (macOS) is now available
by XingdongZhang
ArcGIS Maps for Adobe Creative Cloud 2.2.2 patch (macOS) is now available. Download it from here. This patch addresses the issue of AIX files not opening for mac users with the latest Illustrator updates version 25.1 and above.
For a detailed description of the issue, please see the GeoNet post.
Please continue the conversations on our forums or contact Esri Technical Support if you encounter any issues. You can also post ideas on the ArcGIS Ideas site. ArcGIS Ideas is a good way for you to get involved in making our application better.
## Issue with snapping and performance speed
We have been encountering problems lately while editing (using 10 sp 2). The speed of ArcMap while editing slows to a crawl, and snapping will not work (unless classic snapping is enabled).
I was wondering if anyone else has encountered this issue?
Do you have a basemap layer (created either by Add Data > Add Basemap Layer or New Basemap Layer in the table of contents) in your map? We have found that sometimes large features in basemaps can cause snapping performance issues. If so, try turning off the basemap and seeing if snapping improves.
by RichardFairhurst
We have been encountering problems lately while editing (using 10 sp 2). The speed of ArcMap while editing slows to a crawl, and snapping will not work (unless classic snapping is enabled).
I was wondering if anyone else has encountered this issue?
I also had this issue while editing. Make sure all of the data in your map is in the same projection. ArcMap will warn you which layers have a different projection than the data frame when you start editing. After I did this my editing jumped to near instant speed and snapping worked like a charm.
I am experiencing similar performance issues with snapping in ArcGIS 10 SP4.
I've been searching for information on how the snapping environment works as it appears to require caching of all potential snapping geometry into memory when starting an edit session, but this thread is the only information I've found talking about performance issues.
My observations are similar. The fewer layers in the map document, the better editing/snapping experience. The smaller the files, the better the snapping experience. Switching to the classic snapping environment provides no boost in performance. I didn't test snapping using files in different projections/coordinate systems, but I think the comment regarding indexing is right on.
The biggest breakthrough came when I loaded only two layers in ArcMap and tested snapping to a 150 MB parcel shapefile: ArcMap fell on its face and was excruciatingly slow with no snapping ability. Snapping on the same parcel data loaded into a file geodatabase was very fast with no problems. Snapping performance on the same parcels loaded into SDE is somewhere in between. I can open a map document with several SDE layers (including parcels), start an edit session, and although ArcMap performance does not slow down noticeably, the interactive snapping does not appear regardless of which features I hover over! I then walk away from my computer for 15 minutes, come back, and the interactive snapping magically starts working beautifully.
Please do not point us to help documentation on how to use snapping or another blog on how to effectively use the simple, flexible, and great new snapping environment. Someone please provide better detail on what ArcMap does to enable snapping when an edit session is started, so we can find workarounds to these snapping performance issues.
Thanks to everyone who took part (567 responses in total). Check the GitHub for charts and full results:
65% United States, 34% Other
Largest other: Canada 10.6%, United Kingdom 4.6%, Australia 2.1%, Germany 1.8%, Netherlands 1.6%, Ireland 1.6%
Largest states: California 13%, Texas 8%, Virginia 5%, Colorado 5%, Pennsylvania 4%, Washington 4%
Age: 54% are between 25 and 34
52% of Masters/PhD students recommend getting one, just 5% say outright no
So, should you get a Masters? Generally yes, but it depends on your circumstances.
50% are Mid level, I guess it is a pretty broad category
Pretty broad mix of work lengths:
How long have you been working?

| Response | % |
| --- | --- |
| Not employed | 8.9 |
| Under 1 year | 13.9 |
| 1-2 years | 11.8 |
| 2-5 years | 23.0 |
| 5-10 years | 22.4 |
| 11-25 years | 18.2 |
| 26 or more years | 1.8 |
53% have GIS officially in their job title
Analyst is the most common job title
1.4% do not use desktop GIS software at all
VS Code is the most popular IDE with 17.5% of people using it (up from 2 users total in 2019), overtaking PyCharm which has 13.5%.
In general: 88% use ArcGIS, 48% use QGIS, and 46% use Google Earth, 18% use AutoCAD
Outside of the big 2, GRASS has 9% usage, Global Mapper 6.5%, ENVI 6%, ERDAS 5%, MapInfo 2%, and Idrisi 1%
As their primary GIS: 78% use ArcGIS, 16% use QGIS
This varies regionally:
• United States: ArcGIS 87.9% / QGIS 5.6%
• Canada: ArcGIS 80.0% / QGIS 11.7%
• United Kingdom: ArcGIS 38.5% / QGIS 53.8%
• Australia: ArcGIS 83.3% / QGIS 0%
• Germany: ArcGIS 30% / QGIS 70.0%
• Ireland: ArcGIS 66.7% / QGIS 22.2%
• Netherlands: ArcGIS 55.6% / QGIS 33.3%
• Europe as a whole: ArcGIS 45.6% / QGIS 49.4%
So if you are in the US/Australia, learn ArcGIS. If you are in Europe also learn QGIS.
Of ArcGIS users, 20% use Pro Exclusively (up from 6% in 2019), 19% do not use Pro at all (down from 31% in 2019). Clear change in trend: ArcGIS Pro Usage
71% do not use design software to finish maps. 8% use "Other" software to finish maps. Beating out Illustrator 7.6%, Illustrator & PhotoShop 3.9%, GIMP 3%, InkScape 2.8%, and PhotoShop 2.6%
60% work with primarily Vector data, 33% is 50/50, 6% Raster
43% of jobs do not require programming, 52% require Python (although 62% of people use Python), 30% require SQL, 19% require JavaScript, 6% C#, 4% .NET, 4% R, 2% Java, Arcade & C++ 0.4%
Database of choice: File Geodatabase (ESRI) 37%, PostgreSQL 21%, SQL Server 15%. However, in the "Other GIS software" question, 28% use SQL Server, so people might use it and not like it. For 2023, ask 2 questions: what is your preferred database, and how do you store your data at work. MySQL and Oracle are at 3.35%.
12% just use shapefiles (down from 13% in 2019 and 15% in 2017), 3.5% use SQLite
Of those 6%, 11% got a raise as a result of getting it, and 31% had it paid for by their employer
66.7% use 2 monitors, while 18.7% use 1 and around 12% use just a laptop/tablet.
55% of jobs do not (or did not before COVID) require field work ("include" might be a better way to phrase it instead of "require").
Only around 8% of jobs require field work at least once a week.
Sentiments: Management, Developer, Retirement
Sentiments: Learn to Code/Program, GIS is a tool, be open to learning (passion, innovation)
Not a huge impact on GIS as a profession in general, with 66% saying it did not make much difference to their GIS profession. 22% saw a positive impact, and just 12% saw a negative impact.
71% are working from home at the time of the survey. 12% are back in the office, and 10% did not work from home at all. 6% were already home based.
50% expect to continue to work from home to some extent, 30% completely, and 18% will be going back/are back in the office.
So in general people are really happy with their choice of GIS, and the future looks optimistic.
The most important question on the survey:
Should we ban memes?
So combining that with the most common word from the word clouds, we get:
haha GIS go brrrr, which will be the official meme policy of r/GIS going forward.
This year we asked for salary as an actual number, which allows us to look at some of the data in terms of salary.
The median salary for r/GIS (excluding answers below 5k) is $60,654. Gender in the responses does not seem to be a factor in pay. Being in the United States is a factor, with salaries around $10k higher in the US.
Programming vs Non-Programming.
If your job requires programming, the median salary after 1 year of work is around $10k higher.
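A minimal pandas sketch of the kind of salary summary quoted above; the CSV file name and the column names (`salary`, `requires_programming`) are assumptions, not the survey's actual schema.

```python
# Sketch of the salary summaries described above.
# File name and column names are hypothetical, not the survey's real schema.
import pandas as pd

df = pd.read_csv("gis_survey_2021.csv")

# Drop implausible answers, mirroring the "excluding answers below 5k" rule.
salaries = df.loc[df["salary"] >= 5000]

print("Median salary:", salaries["salary"].median())

# Median salary split by whether the job requires programming.
print(salaries.groupby("requires_programming")["salary"].median())
```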
## SUPERFUND_IDEM_IN.SHP: Sites in Indiana on the IDEM Superfund Program List (Indiana Department of Environmental Management, Point Shapefile)
The following is excerpted from the metadata provided by IDEM, OLQ for the source shapefile IDEM_SUPERFUND.SHP:
"The Office of Land Quality Superfund program works with the U.S. Environmental Protection Agency (U.S. EPA) to clean up sites contaminated with hazardous waste that may require complex investigations, significant cleanup actions, and long-term attention. The program involves a multi-phase process that begins with an assessment to determine if contamination is present that may impact a community. Sites that are determined to be a problem are put on the National Priorities List (NPL). They continue through the Superfund process to determine what contamination exists, how far it may have spread, and what risk that contamination may pose. Then sites are cleaned up to reduce or eliminate risk to human health and the environment so they can be made available for beneficial reuse to the greatest extent possible. Some sites have been delisted from the NPL but are still subject to review and maintenance." Purpose: This data set was developed to provide accurate coordinate information for managed facilities and customers of the Indiana Department of Environmental Management.
The following is excerpted from the metadata provided by IDEM, OLQ for the source shapefile IDEM_SUPERFUND.SHP:
"This project is the first phase in the creation of a base locational data set to establish Geographic Information Systems as an integral part of data management for the Superfund program. Envrionmental Project Managers were trained in techniques and proper use of Global Positioning Systems. Trained staff then went into the field and collected GPS information at managed sites. This information was then brought back to the office, processed, and QA/QC'd before being deposited in a central archive." Supplemental_Information: This data set generally contains the location of access points to managed sites, along with a unique identifier for each location. Time_Period_of_Content: Time_Period_Information: Single_Date/Time: Calendar_Date: 20181019 Currentness_Reference: Publication date Status: Progress: In work Maintenance_and_Update_Frequency: Quarterly Spatial_Domain: Bounding_Coordinates: West_Bounding_Coordinate: -87.523572 East_Bounding_Coordinate: -85.053059 North_Bounding_Coordinate: 41.723690 South_Bounding_Coordinate: 38.677804 Keywords: Theme: Theme_Keyword_Thesaurus: Geography Network Keyword Thesaurus Theme_Keyword: environment Theme_Keyword: structure Theme: Theme_Keyword_Thesaurus: IGS Metadata Thesaurus Theme_Keyword: Indiana Geological and Water Survey (IGWS) Theme_Keyword: Indiana Department of Environmental Management (IDEM) Theme_Keyword: U.S. Environmental Protection Agency (U.S. EPA) Theme_Keyword: Office of Land Quality (OLQ) Theme_Keyword: Superfund Program Theme_Keyword: Superfund Theme_Keyword: Superfund National Priorities List (NPL) Theme_Keyword: Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) Theme_Keyword: Resource Conservation and Recovery Act (RCRA) Theme_Keyword: global positioning system (GPS) Theme_Keyword: access point Theme_Keyword: managed site Theme_Keyword: regulated facility Theme_Keyword: contamination Theme_Keyword: water quality Theme_Keyword: hazard Theme_Keyword: hazardous waste Theme_Keyword: pollution Theme_Keyword: polluted site Theme_Keyword: manufacturing facility Theme_Keyword: processing plant Theme_Keyword: landfill Theme_Keyword: mining site Place: Place_Keyword_Thesaurus: IGS Metadata Thesaurus Place_Keyword: Indiana Place_Keyword: Adams County Place_Keyword: Bartholomew County Place_Keyword: Boone County Place_Keyword: Elkhart County Place_Keyword: Grant County Place_Keyword: Hancock County Place_Keyword: Howard County Place_Keyword: Jackson County Place_Keyword: Jefferson County Place_Keyword: Knox County Place_Keyword: Kosciusko County Place_Keyword: La Porte County Place_Keyword: Lake County Place_Keyword: Marion County Place_Keyword: Miami County Place_Keyword: Monroe County Place_Keyword: Montgomery County Place_Keyword: Owen County Place_Keyword: St Joseph County Place_Keyword: Tippecanoe County Place_Keyword: Vigo County Place_Keyword: Whitley County Access_Constraints: This file is available to anyone, but access may be contingent on written request, specific terms relevant to the agency or person making the request, and (or) current freedom of information statutes in the state of Indiana. Use_Constraints: DATA DISCLAIMER - This data set is provided by Indiana University, Indiana Geological and Water Survey, and contains data believed to be accurate however, a degree of error is inherent in all data. This product is distributed "AS-IS" without warranties of any kind, either expressed or implied, including but not limited to warranties of suitability of a particular purpose or use. 
No attempt has been made in either the designed format or production of these data to define the limits or jurisdiction of any federal, state, or local government. These data are intended for use only at the published scale or smaller and are for reference purposes only. They are not to be construed as a legal document or survey instrument. A detailed on-the-ground survey and historical analysis of a single site may differ from these data.
CREDIT - It is requested that the Superfund Program, Office of Land Quality, Indiana Department of Environmental Management be cited in any products generated from this data. The following source citation should be included: [SUPERFUND_IDEM_IN.SHP: Sites in Indiana on the IDEM Superfund Program List (Indiana Department of Environmental Management, Point Shapefile), digital compilation by IGWS, 20181019].
LIMITATION OF WARRANTIES AND LIABILITY - This product is provided "AS IS", without any other warranties or conditions, expressed or implied, including, but not limited to, warranties for product quality, or suitability to a particular purpose or use. The risk or liability resulting from the use of this product is assumed by the user. Indiana University, Indiana Geological and Water Survey shares no liability with product users indirect, incidental, special, or consequential damages whatsoever, including, but not limited to, loss of revenue or profit, lost or damaged data or other commercial or economic loss. Indiana University, Indiana Geological Survey is not responsible for claims by a third party. The maximum aggregate liability to the original purchaser shall not exceed the amount paid by you for the product.
Point_of_Contact: Contact_Information: Contact_Organization_Primary: Contact_Organization: Indiana Geological and Water Survey Contact_Address: Address_Type: Mailing and physical address Address: 611 North Walnut Grove Avenue City: Bloomington State_or_Province: Indiana Postal_Code: 47405-2208 Country: USA Contact_Voice_Telephone: 812-855-7636 Contact_Facsimile_Telephone: 812-855-2862 Contact_Electronic_Mail_Address: [email protected] Hours_of_Service: 0800 - 1700 Eastern Standard Time Contact_Instructions: Monday through Friday, except holidays
Data_Set_Credit: Federal Cleanup, Superfund, & NRDA section, Office of Land Quality, IDEM
Native_Data_Set_Environment: Esri ArcGIS version 10.6 shapefile format, approximately 192 Kb, Microsoft Windows 7 Enterprise, Esri ArcCatalog 10.6
Data_Quality_Information: Attribute_Accuracy: Attribute_Accuracy_Report: This data set was provided by IDEM personnel and is assumed to be accurate.
The following is excerpted from the metadata provided by IDEM, OLQ for the source shapefile IDEM_SUPERFUND.SHP:
"All attributes created during the process were verified by displaying the lines in the database and verifying consistency. However, no formal tests were performed. Omissions or null values in certain fields is due to the fact that edits were made in the data dictionary (adding or deleting attribute information) as the project progressed, information was unknown or mistakenly left out, or the field with the ommission was not relative to the method used to collect the data. Therefore accuracy cannot be guaranteed or verified."
Logical_Consistency_Report: This data set was provided by IDEM personnel and is assumed to be accurate.
Completeness_Report: The following is excerpted from the metadata provided by IDEM, OLQ for the source shapefile IDEM_SUPERFUND.SHP:
"Data were collected by the Federal Cleanup, Superfund, NRDA section within the Office of Land Quality, the Indiana Department of Environmental Management. This dataset is incomplete and a work in progress, as sites are GPS'd as they are inspected. This dataset contains information that represents what had been recorded, processed, and archived by IDEM personnel at a time previous to publication. However the data set may be incomplete and/or inaccurate due to the ever-changing status of managed facilities and sites, and due to the data collection and processing efforts of IDEM staff. It is an ever-changing work in progress. The Superfund section was also responsible for collecting attribute information for each location. The data has been QA/QC'd for consistency but no formal tests were performed on accuracy. Therefore accuracy cannot be guaranteed or verified. Omissions or null values in certain fields is due to the fact that edits were made in the data dictionary (adding or deleting attribute information) as the project progressed, information was unknown or mistakenly left out, or the field with the ommission was not relative to the method used to collect the data."
Positional_Accuracy: Horizontal_Positional_Accuracy: Horizontal_Positional_Accuracy_Report: The following is excerpted from the metadata provided by IDEM, OLQ for the source shapefile IDEM_SUPERFUND.SHP:
"The data were collected with a GPS GIS-mapping system. The accuracy of the coordinates should be from 1-5 meters, depending upon the type of GPS instrument used, the collection environment (ie., multipath, SNR, and PDOP), and the number of positions logged for each location. However, this accuracy cannot be guaranteed or verified. Each GPS unit should have been configured within the specifications of the manufacturer. However, if necessary, the field personnel would configure the unit outside of specifications if environmental conditions warranted. Accuracy was checked by logging both Maximum PDOP and the standard deviations of the positional average of each location collected. If maximum PDOP value was > 6, the value for the "In_specs" field was input as "NO". If the standard deviation was > 8, the corrected GPS file was analyzed for signs of multipath and/or outliers, and edited accordingly. It was then re-exported to the archive. If the data set was uneditable it was still left in the archive. The "In-specs" field was changed to "No" and the location could have an accuracy > 5 meters.
"If the environmental conditions of the collection session fell within the acceptable ranges for data collection as defined by the technical specifications of the instrument used and specific instrument configuration, then the accuracy of the collected coordinates should be within those values as given by the manufacturer. The GeoExplorer Unit has an accuracy range of 2 to 5 meters (post-processed), the GeoExplorer 3 and 3c a range of 1 to 5 meters (post-processed), and the Pro XR has a range of .50 cm - 1m differential accuracy to real-time differential correction.
"If the environmental conditions and/or technical configurations were not met, then the values could exceed those given by the manufacturer."
Vertical_Positional_Accuracy: Vertical_Positional_Accuracy_Report: The following is excerpted from the metadata provided by IDEM, OLQ for the source shapefile IDEM_SUPERFUND.SHP:
"The vertical accuracy for GPS data collection can be two to three times less accurate than horizontal accuracy. The vertical accuracy for these locations could have up to 15 meters of error. However, this accuracy cannot be guaranteed or verified. Height above Ellipsoid values have been recorded, but it is strongly suggested that the values not be used for any analysis.
Process_Step: Process_Description: IGS personnel received an ESRI shapefile named SUPERFUND from IDEM personnel on August 16, 2002. SUPERFUND was loaded into ESRI ArcView 3.3 and the following columns were deleted from the associated attribute table: LONGITUDE, LATITUDE, HAE, NORTHING, EASTING, METHOD, IN_SPECS, REG_PROG, C_INITIALS, GENERATOR, MAX_PDOP, CORR_TYPE, RCVR_TYPE, GPS_TIME, DATAFILE, UNFILT_POS and STD_DEV. The shapefile was also visually checked to insure that all data points provided actually fell within the accepted IGS state boundary. IGS personnel then saved the edited shapefile as SUPERFUND_IDEM_IN. It should also be noted that portions of the associated metadata provided by IDEM for SUPERFUND was extracted and included in this metadata file. Source_Used_Citation_Abbreviation: SUPERFUND.SHP Process_Date: 20021219 Source_Produced_Citation_Abbreviation: SUPERFUND_IDEM_IN.SHP Process_Contact: Contact_Information: Contact_Person_Primary: Contact_Organization: Indiana Geological Survey Contact_Person: Chris Dintaman Contact_Position: Geologist, GIS Specialist Contact_Address: Address_Type: Mailing and physical address Address: 611 North Walnut Grove Avenue City: Bloomington State_or_Province: Indiana Postal_Code: 47405-2208 Country: USA Contact_Voice_Telephone: 812-856-5654 Contact_Facsimile_Telephone: 812-855-2862 Contact_Electronic_Mail_Address: [email protected] Hours_of_Service: 0800 to 1700 Eastern Standard Time Contact_Instructions: Monday through Friday, except holidays
Process_Step: Process_Description: IGS personnel received an updated shapefile (and metadata), in the form of an ESRI shapefile, named SUPERFUND from IDEM personnel (Shane Moore) on January 14, 2003. The shapefile was visually checked to insure that all data points provided actually fell within the accepted IGS state boundary. IGS personnel then saved the edited shapefile as SUPERFUND_IDEM_IN to follow internal naming conventions. Source_Used_Citation_Abbreviation: SUPERFUND.SHP Process_Date: 20030114 Source_Produced_Citation_Abbreviation: SUPERFUND_IDEM_IN.SHP Process_Contact: Contact_Information: Contact_Person_Primary: Contact_Organization: Indiana Geological Survey Contact_Person: Chris Dintaman Contact_Position: Geologist, GIS Specialist Contact_Address: Address_Type: Mailing and physical address Address: 611 North Walnut Grove Avenue City: Bloomington State_or_Province: Indiana Postal_Code: 47405-2208 Country: USA Contact_Voice_Telephone: 812-856-5654 Contact_Facsimile_Telephone: 812-855-2862 Contact_Electronic_Mail_Address: [email protected] Hours_of_Service: 0800 to 1700 Eastern Standard Time Contact_Instructions: Monday through Friday, except holidays
Process_Step: Process_Description: IGS personnel received an updated shapefile (and metadata), in the form of an ESRI shapefile, named SUPERFUND from IDEM personnel (Shane Moore) on June 3, 2004. The shapefile was visually checked to insure that all data points provided actually fell within the accepted IGS state boundary. The shapefile was then edited by removing the following fields from the associated attribute table: HORIZONTAL, NORTHING, EASTING, STANDARD_D, LOCATION_A, POSTAL_COD, and COUNTY_NAM. IGS personnel then saved the edited shapefile as SUPERFUND_IDEM_IN to follow internal naming conventions. Source_Used_Citation_Abbreviation: SUPERFUND.SHP Process_Date: 20040603 Source_Produced_Citation_Abbreviation: SUPERFUND_IDEM_IN.SHP Process_Contact: Contact_Information: Contact_Person_Primary: Contact_Organization: Indiana Geological Survey Contact_Person: Chris Dintaman Contact_Position: Geologist, GIS Specialist Contact_Address: Address_Type: Mailing and physical address Address: 611 North Walnut Grove Avenue City: Bloomington State_or_Province: Indiana Postal_Code: 47405-2208 Country: USA Contact_Voice_Telephone: 812-856-5654 Contact_Facsimile_Telephone: 812-855-2862 Contact_Electronic_Mail_Address: [email protected] Hours_of_Service: 0800 to 1700 Eastern Standard Time Contact_Instructions: Monday through Friday, except holidays
Process_Step: Process_Description: IGS personnel received an updated shapefile (and metadata), in the form of an ESRI shapefile, named SUPERFUND from IDEM personnel (Shane Moore) on January 4, 2005. The shapefile was visually checked to insure that all data points provided actually fell within the accepted IGS state boundary. The shapefile was then edited by removing the following fields from the associated attribute table: HORIZONTAL, NORTHING, EASTING, STANDARD_D, LOCATION_A, POSTAL_COD, and COUNTY_NAM. IGS personnel then saved the edited shapefile as SUPERFUND_IDEM_IN to follow internal naming conventions. Source_Used_Citation_Abbreviation: SUPERFUND.SHP Process_Date: 20050104 Source_Produced_Citation_Abbreviation: SUPERFUND_IDEM_IN.SHP Process_Contact: Contact_Information: Contact_Person_Primary: Contact_Organization: Indiana Geological Survey Contact_Person: Chris Dintaman Contact_Position: Geologist, GIS Specialist Contact_Address: Address_Type: Mailing and physical address Address: 611 North Walnut Grove Avenue City: Bloomington State_or_Province: Indiana Postal_Code: 47405-2208 Country: USA Contact_Voice_Telephone: 812-856-5654 Contact_Facsimile_Telephone: 812-855-2862 Contact_Electronic_Mail_Address: [email protected] Hours_of_Service: 0800 to 1700 Eastern Standard Time Contact_Instructions: Monday through Friday, except holidays
Process_Step: Process_Description: IGS personnel received an updated ESRI geodatabase (and metadata) named "OLQ_GPS_DATA_IGS.MDB" from IDEM personnel (Shane Moore) on April 25, 2005. OLQ_GPS_DATA_IGS.MDB was loaded into ESRI ArcMap 9.0 and the subset of data containing Superfund site locations was exported to an ESRI shapefile format. The shapefile was visually checked to insure that all data points provided actually fell within the accepted IGS state boundary. The shapefile was then edited by removing the following fields from the associated attribute table: HORIZONTAL, NORTHING, EASTING, STANDARD_D, LOCATION_A, POSTAL_COD and COUNTY_NAM. IGS personnel then saved the edited shapefile as SUPERFUND_IDEM_IN.SHP to follow internal naming conventions. Source_Used_Citation_Abbreviation: OLQ_GPS_DATA_IGS.MDB Process_Date: 20050425 Source_Produced_Citation_Abbreviation: SUPERFUND_IDEM_IN.SHP Process_Contact: Contact_Information: Contact_Person_Primary: Contact_Organization: Indiana Geological Survey Contact_Person: Chris Dintaman Contact_Position: Geologist, GIS Specialist Contact_Address: Address_Type: Mailing and physical address Address: 611 North Walnut Grove Avenue City: Bloomington State_or_Province: Indiana Postal_Code: 47405-2208 Country: USA Contact_Voice_Telephone: 812-856-5654 Contact_Facsimile_Telephone: 812-855-2862 Contact_Electronic_Mail_Address: [email protected] Hours_of_Service: 0800 to 1700 Eastern Standard Time Contact_Instructions: Monday through Friday, except holidays
Process_Step: Process_Description: IGS personnel received an updated ESRI geodatabase (and metadata) named "OLQ IndianaMap Export-06012009.gdb" from IDEM personnel (Shane Moore) on June 4, 2009. "OLQ IndianaMap Export-06012009.gdb" was loaded into ESRI ArcMap 9.3.1 and the subset of data containing Superfund site locations were examined and exported as an ESRI shapefile. The shapefile was then edited by removing the following fields from the associated attribute table: HORIZONTAL, NORTHING, EASTING, STANDARD_D, LOCATION_A, POSTAL_COD and COUNTY_NAM. IGS personnel then saved the edited shapefile as SUPERFUND_IDEM_IN.SHP to follow internal naming conventions. Source_Used_Citation_Abbreviation: OLQ IndianaMap Export-06012009.gdb Process_Date: 20090604 Source_Produced_Citation_Abbreviation: SUPERFUND_IDEM_IN.SHP Process_Contact: Contact_Information: Contact_Person_Primary: Contact_Organization: Indiana Geological Survey Contact_Person: Chris Dintaman Contact_Position: Geologist, GIS Specialist Contact_Address: Address_Type: Mailing and physical address Address: 611 North Walnut Grove Avenue City: Bloomington State_or_Province: Indiana Postal_Code: 47405-2208 Country: USA Contact_Voice_Telephone: 812-856-5654 Contact_Facsimile_Telephone: 812-855-2862 Contact_Electronic_Mail_Address: [email protected] Hours_of_Service: 0800 to 1700 Eastern Standard Time Contact_Instructions: Monday through Friday, except holidays
Process_Step: Process_Description: IGS personnel received an updated ESRI geodatabase (and metadata) named "OLQ IndianaMap Export-04162010.gdb" from IDEM personnel (Shane Moore) on April 16, 2010. "OLQ IndianaMap Export-04162010.gdb" was loaded into ESRI ArcMap 9.3.1 and the subset of data containing Superfund site locations were examined and exported as an ESRI shapefile.
No changes in this dataset were apparent from the last update date on June 4, 2009 so only the metadata was edited to reflect this information. No changes to the SUPERFUND_IDEM_IN.SHP were made. Source_Used_Citation_Abbreviation: OLQ IndianaMap Export-04162010.gdb Process_Date: 20100416 Source_Produced_Citation_Abbreviation: SUPERFUND_IDEM_IN.SHP Process_Contact: Contact_Information: Contact_Person_Primary: Contact_Organization: Indiana Geological Survey Contact_Person: Chris Dintaman Contact_Position: Geologist, GIS Specialist Contact_Address: Address_Type: Mailing and physical address Address: 611 North Walnut Grove Avenue City: Bloomington State_or_Province: Indiana Postal_Code: 47405-2208 Country: USA Contact_Voice_Telephone: 812-856-5654 Contact_Facsimile_Telephone: 812-855-2862 Contact_Electronic_Mail_Address: [email protected] Hours_of_Service: 0800 to 1700 Eastern Standard Time Contact_Instructions: Monday through Friday, except holidays
Process_Step: Process_Description: IGS personnel received an updated ESRI geodatabase (and metadata) named "IndianaMapUpdate_June2013.gdb" from IDEM personnel (Shane Moore) on June 25, 2013. "IndianaMapUpdate_June2013.gdb" was loaded into ESRI ArcMap 10.1 and the subset of data containing Superfund site locations were examined and exported as an ESRI shapefile. The shapefile was then edited by removing the following fields from the associated attribute table: AGENCY_INT, SUB_PROGRA, LONGITUDE, LATITUDE, METHOD_DES, COLLECTOR_, COLLECTOR1, ACCURACY, ACCURACY_U, GPS_RECEIV, SOURCE_MAP, EASTING, and NORTHING. IGS personnel then saved the edited shapefile as SUPERFUND_IDEM_IN.SHP to follow internal naming conventions. Source_Used_Citation_Abbreviation: IndianaMapUpdate_June2013.gdb Process_Date: 20130625 Source_Produced_Citation_Abbreviation: SUPERFUND_IDEM_IN.SHP Process_Contact: Contact_Information: Contact_Person_Primary: Contact_Organization: Indiana Geological Survey Contact_Person: Chris Dintaman Contact_Position: Geologist, GIS Specialist Contact_Address: Address_Type: Mailing and physical address Address: 611 North Walnut Grove Avenue City: Bloomington State_or_Province: Indiana Postal_Code: 47405-2208 Country: USA Contact_Voice_Telephone: 812-856-5654 Contact_Facsimile_Telephone: 812-855-2862 Contact_Electronic_Mail_Address: [email protected] Hours_of_Service: 0800 to 1700 Eastern Standard Time Contact_Instructions: Monday through Friday, except holidays
Process_Step: Process_Description: IGS personnel received an updated Esri geodatabase (and metadata) named "IndianaMapUpdate_February2015.gdb" from IDEM personnel (Shane Moore) on February 25, 2015. "IndianaMapUpdate_February2015.gdb" was loaded into Esri ArcMap 10.3 and the subset of data containing Superfund site locations were examined and exported as an Esri shapefile. The shapefile was then edited by removing the following fields from the associated attribute table: AGENCY_INT, SUB_PROGRA, LONGITUDE, LATITUDE, METHOD_DES, COLLECTOR_, COLLECTOR1, ACCURACY, ACCURACY_U, GPS_RECEIV, SOURCE_MAP, EASTING, and NORTHING. IGS personnel then saved the edited shapefile as SUPERFUND_IDEM_IN.SHP to follow internal naming conventions. Source_Used_Citation_Abbreviation: IndianaMapUpdate_February2015.gdb Process_Date: 20150225 Source_Produced_Citation_Abbreviation: SUPERFUND_IDEM_IN.SHP Process_Contact: Contact_Information: Contact_Person_Primary: Contact_Organization: Indiana Geological Survey Contact_Person: Chris Dintaman Contact_Position: Geologist, GIS Specialist Contact_Address: Address_Type: Mailing and physical address Address: 611 North Walnut Grove Avenue City: Bloomington State_or_Province: Indiana Postal_Code: 47405-2208 Country: USA Contact_Voice_Telephone: 812-856-5654 Contact_Facsimile_Telephone: 812-855-2862 Contact_Electronic_Mail_Address: [email protected] Hours_of_Service: 0800 to 1700 Eastern Standard Time Contact_Instructions: Monday through Friday, except holidays
Process_Step: Process_Description: IGWS personnel received an Esri point shapefile named "IDEM_SUPERFUND.SHP" from IDEM personnel (Miranda Hancock) on October 19, 2018. IDEM_SUPERFUND.SHP was re-projected using ESRI ArcCatalog 10.6.0.8321 from geographic coordinates (World WGS 1984) to projected coordinates (UTM Zone 16 NAD83), and then renamed "SUPERFUND_IDEM_IN.SHP" to conform to internal naming conventions of the IGWS.
SUPERFUND_IDEM_IN.SHP was then loaded into Esri ArcMap, along with to the most recent version of the previously published data (20150225) for superfund sites. The layers were visually compared to record any significant changes to the structure of the attribute table, fields, or values, as well as to the changes in the total number of records provided (20150225 contained 83 locations, 20181019 contains 83 locations). The shapefile was then edited by removing the following fields from the associated attribute table: AGENCY_INT, SUB_PROGRA, METHOD_DES, COLLECTOR_, COLLECTOR1, ACCURACY, ACCURACY_U, GPS_RECEIV, and SOURCE_MAP. Source_Used_Citation_Abbreviation: IDEM_SUPERFUND.SHP Process_Date: 20181019 Source_Produced_Citation_Abbreviation: SUPERFUND_IDEM_IN.SHP Process_Contact: Contact_Information: Contact_Person_Primary: Contact_Organization: Indiana Geological and Water Survey Contact_Person: Chris Dintaman Contact_Position: Geologist, GIS Specialist Contact_Address: Address_Type: Mailing and physical address Address: 611 North Walnut Grove Avenue City: Bloomington State_or_Province: Indiana Postal_Code: 47405-2208 Country: USA Contact_Voice_Telephone: 812-856-5654 Contact_Facsimile_Telephone: 812-855-2862 Contact_Electronic_Mail_Address: [email protected] Hours_of_Service: 0800 to 1700 Eastern Standard Time Contact_Instructions: Monday through Friday, except holidays
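The re-projection described in the process step above (geographic WGS 1984 to UTM Zone 16N, NAD83) could be reproduced with a short arcpy sketch such as the one below; the file paths and the geographic transformation name are illustrative assumptions, not part of the IGWS record.

```python
# Sketch of the WGS 1984 -> NAD83 UTM Zone 16N re-projection described above.
# Paths and the transformation name are assumptions, not taken from the IGWS workflow.
import arcpy

in_shp = r"C:\data\IDEM_SUPERFUND.shp"       # geographic coordinates, WGS 1984
out_shp = r"C:\data\SUPERFUND_IDEM_IN.shp"   # projected output

utm16_nad83 = arcpy.SpatialReference(26916)  # EPSG:26916 = NAD83 / UTM zone 16N

arcpy.management.Project(
    in_shp,
    out_shp,
    utm16_nad83,
    "WGS_1984_(ITRF00)_To_NAD_1983",         # one commonly used WGS84 -> NAD83 transformation
)
```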
Process_Step: Process_Description: This metadata file was pre-parsed and parsed using CNS (Chew and Spit, v. 2.6.1) and MP (Metadata Parser, v. 2.7.1) software written by Peter N. Schweitzer (U.S. Geological Survey). The errors generated by MP were all addressed and corrected, except that no values were assigned to "Abscissa_Resolution" and "Ordinate_Resolution." Process_Date: 20181227 Process_Contact: Contact_Information: Contact_Person_Primary: Contact_Organization: Indiana Geological and Water Survey Contact_Person: Chris Dintaman Contact_Position: Geologist, GIS Specialist Contact_Address: Address_Type: Mailing and physical address Address: 611 North Walnut Grove Avenue City: Bloomington State_or_Province: Indiana Postal_Code: 47405-2208 Country: USA Contact_Voice_Telephone: 812-856-5654 Contact_Facsimile_Telephone: 812-855-2862 Contact_Electronic_Mail_Address: [email protected] Hours_of_Service: 0800 to 1700 Eastern Standard Time Contact_Instructions: Monday through Friday, except holidays
Spatial_Data_Organization_Information: Indirect_Spatial_Reference: Indiana Direct_Spatial_Reference_Method: Point Point_and_Vector_Object_Information: SDTS_Terms_Description: SDTS_Point_and_Vector_Object_Type: Point Point_and_Vector_Object_Count: 83
Spatial_Reference_Information: Horizontal_Coordinate_System_Definition: Planar: Grid_Coordinate_System: Grid_Coordinate_System_Name: Universal Transverse Mercator Universal_Transverse_Mercator: UTM_Zone_Number: 16 Transverse_Mercator: Scale_Factor_at_Central_Meridian: 0.999600 Longitude_of_Central_Meridian: -87.000000 Latitude_of_Projection_Origin: 0.000000 False_Easting: 500000.000000 False_Northing: 0.000000 Planar_Coordinate_Information: Planar_Coordinate_Encoding_Method: Row and column Coordinate_Representation: Abscissa_Resolution: Ordinate_Resolution: Planar_Distance_Units: Meters Geodetic_Model: Horizontal_Datum_Name: North American Datum of 1983 Ellipsoid_Name: GRS 80 Semi-major_Axis: 6378137.0000000 Denominator_of_Flattening_Ratio: 298.26
Entity_and_Attribute_Information: Detailed_Description: Entity_Type: Entity_Type_Label: SUPERFUND_IDEM_IN.DBF Entity_Type_Definition: Shapefile Attribute Table Entity_Type_Definition_Source: None Attribute: Attribute_Label: FID Attribute_Definition: Internal feature number Attribute_Definition_Source: ESRI Attribute_Domain_Values: Unrepresentable_Domain: Sequential unique whole numbers that are automatically generated Attribute: Attribute_Label: Shape Attribute_Definition: Feature geometry Attribute_Definition_Source: ESRI Attribute_Domain_Values: Unrepresentable_Domain: Coordinates defining the features Attribute: Attribute_Label: REGULATORY Attribute_Definition: Unique code to idenify the entity Attribute_Definition_Source: Office of Land Quality, Indiana Department of Environmental Management Attribute_Domain_Values: Unrepresentable_Domain: Character field Attribute: Attribute_Label: AGENCY_I_1 Attribute_Definition: Source Name Attribute_Definition_Source: Office of Land Quality, Indiana Department of Environmental Management Attribute_Domain_Values: Unrepresentable_Domain: Character field Attribute: Attribute_Label: PROGRAM Attribute_Definition: IDEMs managing program name (SF = Superfund sites) Attribute_Definition_Source: Office of Land Quality, Indiana Department of Environmental Management Attribute_Domain_Values: Unrepresentable_Domain: Character field Attribute: Attribute_Label: REFERENCE_ Attribute_Definition: Describes what is being located at the site Attribute_Definition_Source: Office of Land Quality, Indiana Department of Environmental Management Attribute_Domain_Values: Enumerated_Domain: Enumerated_Domain_Value: Access Point Enumerated_Domain_Value_Definition: A gate, front door, or entrance that provides access to a managed site from a road or highway Enumerated_Domain_Value_Definition_Source: IDEM MAD Codes Attribute_Domain_Values: Enumerated_Domain: Enumerated_Domain_Value: Building Enumerated_Domain_Value_Definition: A building within an areal boundary Enumerated_Domain_Value_Definition_Source: IDEM MAD Codes Attribute_Domain_Values: Enumerated_Domain: Enumerated_Domain_Value: Center Enumerated_Domain_Value_Definition: A point of approximate center within an areal boundary Enumerated_Domain_Value_Definition_Source: IDEM MAD Codes Attribute: Attribute_Label: PHYSICAL_A Attribute_Definition: The address of site Attribute_Definition_Source: Office of Land Quality, Indiana Department of Environmental Management Attribute_Domain_Values: Unrepresentable_Domain: Character field Attribute: Attribute_Label: MUNICIPALI Attribute_Definition: City or town name where the site is located Attribute_Definition_Source: Office of Land Quality, Indiana Department of Environmental Management Attribute_Domain_Values: Unrepresentable_Domain: Character field Attribute: Attribute_Label: ZIP_CODE Attribute_Definition: The zip code of site Attribute_Definition_Source: Office of Land Quality, Indiana Department of Environmental Management Attribute_Domain_Values: Unrepresentable_Domain: Character field Attribute: Attribute_Label: DATA_COLLE Attribute_Definition: The date that GPS data were collected by IDEM personnel Attribute_Definition_Source: Office of Land Quality, Indiana Department of Environmental Management Attribute_Domain_Values: Range_Domain: Range_Domain_Minimum: 6/22/2000 Range_Domain_Maximum: 2/9/2011 Attribute_Units_of_Measure: Calendar date Attribute: Attribute_Label: LONGITUDE Attribute_Definition: Longitude coordinate value for access point location (WGS 1984) 
Attribute_Definition_Source: Office of Land Quality, Indiana Department of Environmental Management Attribute_Domain_Values: Range_Domain: Range_Domain_Minimum: -87.567494 Range_Domain_Maximum: -84.828431 Attribute_Units_of_Measure: Decimal degrees Attribute: Attribute_Label: LATITUDE Attribute_Definition: Latitude coordinate value for access point location (WGS 1984) Attribute_Definition_Source: Office of Land Quality, Indiana Department of Environmental Management Attribute_Domain_Values: Range_Domain: Range_Domain_Minimum: 37.980966 Range_Domain_Maximum: 41.723628 Attribute_Units_of_Measure: Decimal degrees Attribute: Attribute_Label: EASTING Attribute_Definition: Longitude coordinate value for access point location (UTM Zone 16, NAD83) Attribute_Definition_Source: Office of Land Quality, Indiana Department of Environmental Management Attribute_Domain_Values: Range_Domain: Range_Domain_Minimum: 450162.482211 Range_Domain_Maximum: 688373.630082 Attribute_Units_of_Measure: Meters Attribute: Attribute_Label: Northing Attribute_Definition: Latitude coordinate value for access point location (UTM Zone 16, NAD83) Attribute_Definition_Source: Office of Land Quality, Indiana Department of Environmental Management Attribute_Domain_Values: Range_Domain: Range_Domain_Minimum: 4203855.0349 Range_Domain_Maximum: 4619098.60993 Attribute_Units_of_Measure: Meters
Distribution_Information: Distributor: Contact_Information: Contact_Organization_Primary: Contact_Organization: Indiana Geological and Water Survey Contact_Person: Publication Sales Contact_Position: Clerk Contact_Address: Address_Type: Mailing and physical address Address: 611 North Walnut Grove Avenue City: Bloomington State_or_Province: Indiana Postal_Code: 47405-2208 Country: USA Contact_Voice_Telephone: 812-855-7636 Contact_Facsimile_Telephone: 812-855-2862 Contact_Electronic_Mail_Address: [email protected] Hours_of_Service: 0800 to 1700 Eastern Standard Time Contact_Instructions: Monday through Friday, except holidays Resource_Description: Downloadable data
Distribution_Liability: DATA DISCLAIMER - This data set is provided by Indiana University, Indiana Geological and Water Survey, and contains data believed to be accurate however, a degree of error is inherent in all data. This product is distributed "AS-IS" without warranties of any kind, either expressed or implied, including but not limited to warranties of suitability of a particular purpose or use. No attempt has been made in either the designed format or production of these data to define the limits or jurisdiction of any federal, state, or local government. These data are intended for use only at the published scale or smaller and are for reference purposes only. They are not to be construed as a legal document or survey instrument. A detailed on-the-ground survey and historical analysis of a single site may differ from these data.
CREDIT - It is requested that the Superfund Program, Office of Land Quality, Indiana Department of Environmental Management be cited in any products generated from this data. The following source citation should be included: [SUPERFUND_IDEM_IN.SHP: Sites in Indiana on the IDEM Superfund Program List (Indiana Department of Environmental Management, Point Shapefile), digital compilation by IGWS, 20181019].
LIMITATION OF WARRANTIES AND LIABILITY - This product is provided "AS IS", without any other warranties or conditions, expressed or implied, including, but not limited to, warranties for product quality, or suitability to a particular purpose or use. The risk or liability resulting from the use of this product is assumed by the user. Indiana University, Indiana Geological and Water Survey shares no liability with product users indirect, incidental, special, or consequential damages whatsoever, including, but not limited to, loss of revenue or profit, lost or damaged data or other commercial or economic loss. Indiana University, Indiana Geological Survey is not responsible for claims by a third party. The maximum aggregate liability to the original purchaser shall not exceed the amount paid by you for the product.
Distribution_Information: Distributor: Contact_Information: Contact_Organization_Primary: Contact_Organization: Indiana Department of Environmental Management Contact_Person: Miranda Hancock Contact_Position: GIS Coordinator (Information Services) Contact_Address: Address_Type: Mailing and physical address Address: 100 North Senate Avenue City: Indianapolis State_or_Province: Indiana Postal_Code: 46206-6015 Country: USA Contact_Voice_Telephone: 317-232-8742 Contact_Facsimile_Telephone: 317-233-3403 Contact_Electronic_Mail_Address: [email protected] Hours_of_Service: M-F, 8:00am-5:00pm, Eastern Standard Time Contact_Instructions: For inquiries, please e-mail
Resource_Description: Superfund Sites Shapefile Distribution_Liability: The following is excerpted from the metadata provided by IDEM, OLQ for the source shapefile IDEM_SUPERFUND.SHP:
THE INDIANA DEPARTMENT OF ENVIRONMENTAL MANAGEMENT (IDEM), ITS EMPLOYEES, SUCCESSORS OR ASSIGNS MAKE NO REPRESENTATIONS OR WARRANTIES ABOUT THE COMPLETENESS, SUITABILITY, RELIABILITY, AVAILABILITY AND ACCURACY OF DATA, SOFTWARE, INFORMATION OR SERVICES PROVIDED.
TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, ANY INFORMATION RETRIEVED FROM OR DELIVERED TO AN INDIVIDUAL OR ENTITY THROUGH THE AUSPICES OF IDEM ARE PROVIDED "AS IS" AND INCLUDES ALL FAULTS WITHOUT ANY EXPRESS OR IMPLIED WARRANTY OF ANY KIND. THE RECIPIENT, OF THE DATA, SOFTWARE, INFORMATION OR SERVICES, ACKNOWLEDGES AND AGREES THAT IT HAS RECEIVED ADEQUATE NOTICE THAT USE OF THE DATA, SOFTWARE, INFORMATION OR SERVICES IS AT RECIPIENTS OWN RISK.
IDEM AND OR ITS RESPECTIVE SUPPLIERS HEREBY DISCLAIM ALL WARRANTIES AND CONDITIONS WITH REGARD TO THE AFOREMENTIONED MATERIAL, INCLUDING WARRANTIES OF MERCHANTABILITY, TITLE, NONINFRINGEMENT, FITNESS FOR A PARTICULAR PURPOSE, LACK OF VIRUSES, ACCURACY OR COMPLETENESS OF RESPONSES, RESULTS, LACK OF NEGLIGENCE OR LACK OF WORKMANLIKE EFFORT, QUIET ENJOYMENT, QUIET POSSESSION, AND CORRESPONDENCE TO DESCRIPTION. FURTHER THE ABOVE LIST OF DISCLAIMED WARRANTIES IS NOT EXHAUSTIVE AS ANY AND ALL WARRANTIES AND CONDITIONS ARE DISCLAIMED. THE ENTIRE RISK ARISING OUT OF THE USE OF OR PERFORMANCE OF THE DATA, SOFTWARE, INFORMATION OR SERVICES IS BORNE BY THE RECIPIENT.
EXCLUSION OF INCIDENTAL, CONSEQUENTIAL AND CERTAIN OTHER DAMAGES.
TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, IN NO EVENT SHALL IDEM OR ITS SUPPLIERS BE LIABLE FOR ANY SPECIAL, INCIDENTAL, INDIRECT OR CONSEQUENTIAL DAMAGES WHATSOEVER, INCLUDING, BUT NOT LIMITED TO, DAMAGES FOR: LOSS OF PROFITS, LOSS OF CONFIDENTIAL OR OTHER INFORMATION, BUSINESS INTERRUPTION, PERSONAL INJURY, LOSS OF PRIVACY, FAILURE TO MEET ANY DUTY (INCLUDING GOOD FAITH AND REASONABLE CARE), NEGLIGENCE, AND ANY OTHER PECUNIARY OR OTHER LOSS WHATSOEVER, ARISING OUT OF OR IN ANY WAY RELATED TO THE USE OR INABILITY TO USE, THE DATA, SOFTWARE, INFORMATION OR SERVICES, EVEN IF THE RECIPIENT IS ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. RECIPIENT'S SOLE REMEDY IS TO DISCONTINUE THE USE OF THE DATA, SOFTWARE, INFORMATION OR SERVICES."
## Layer does not match imagery by 400 meters
I have been looking for answers, from coordinate system issues to projection issues, but have not found a solution to this sticky one yet. As you can see below, my river SHP does not match the river on the satellite imagery, but only by about 400 meters. Coordinate systems and projections are the same (I projected the river shapefile, which was WGS84 UTM Zone 47N, to the projection of the imagery). Would anyone have a clue what the problem is?
by NeilAyres
This sort of shift is caused by an incorrect datum (change in the underlying ellipsoidal model).
I took your mainrivers shapefile; I see it is of the Mekong river in Cambodia.
Yes, its coordinate system is defined as UTM 47N and based on the WGS84 datum. However, it is possible that WGS84 is not the original datum for this data (it depends on the source of these river polys; old carto sheets perhaps).
I looked at the ASPRS Clifford Mugnier Grids & Datums page and also at EPSG.org. There are several suggestions as to the original classical datums used by the French authorities in colonial days, one of which is India 1960. (I don't know why they would have used this.)
So, I copied the shapefile, gave it a new name, and then redefined the coord sys, changing the GCS to India 1960.
So the blue is the original, the green the new one, and I have applied one of the available transformations between India 1960 and WGS84.
The basemap is National Geographic (good enough in this case).
You can see that the green one is much closer to the basemap image. The difference here between old and new is
The fit is not exact (I wouldn't expect it to be), because we do not know on which datum this data was originally based.
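A minimal arcpy sketch of the redefine-then-transform workflow described above; the shapefile paths, the spatial reference name, and the choice of transformation are assumptions and should be checked against what is actually available for the data.

```python
# Sketch of the workflow described above: copy the data, redefine its GCS to the
# suspected original datum, then project to WGS84 with a datum transformation.
# Paths and spatial reference names are assumptions.
import arcpy

src = r"C:\data\mainrivers.shp"
redefined = r"C:\data\mainrivers_india1960.shp"
out_wgs84 = r"C:\data\mainrivers_wgs84.shp"

# 1. Copy the shapefile so the original stays untouched.
arcpy.management.CopyFeatures(src, redefined)

# 2. Redefine (not project) the coordinate system: same UTM 47N grid, different datum.
#    If this named PCS is not available, build an equivalent one from the Indian 1960 GCS.
sr_india1960_utm47 = arcpy.SpatialReference("Indian 1960 UTM Zone 47N")
arcpy.management.DefineProjection(redefined, sr_india1960_utm47)

# 3. List candidate datum transformations, then project to WGS84 using one of them.
sr_wgs84 = arcpy.SpatialReference(4326)
transforms = arcpy.ListTransformations(sr_india1960_utm47, sr_wgs84)
print(transforms)                              # pick the transformation appropriate for the region
arcpy.management.Project(redefined, out_wgs84, sr_wgs84, transforms[0])
```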
## Define projection changes position of raster layer
I use Define Projection on a raster layer that has the spatial reference system User_defined Equidistant_Cylindrical and the datum user_defined, and change it into either Equidistant_Cylindrical (Sphere) or Equidistant_Cylindrical (World).
I try to do it into both because I am not sure which of them is the correct one. My thought was then to convert into WGS_1984 afterwards and see which fits best.
However, when I do the Define Projection, the whole raster drastically changes position. How can this be? Isn't Define Projection just used to define the projection system, not to actually change the position or drawing? This is at least how it has normally worked for me.
The reason why I am doing this is because I want the data in WGS_1984. Any help on how I could do this is also needed.
I would very much appreciate any help. Thank you very much!
by DanPatterson_Retired
Define Projection is only used when a file has an unknown projection or, as in your case, when it has been defined incorrectly and has to be reverted back to the original projection
I didn't think define projection would move anything either but the project tool would if you had a projection assigned. Did you accidentally use the wrong one, maybe?
As can be seen, it changes a lot, but the two defined rasters are quite similar (not the same, but quite similar)
Dan, I am not sure what you mean by defined incorrectly. I mean, I can't know whether it is defined incorrectly just because it is user defined?
Wes, as can be seen from the drawing, both of them change the position a lot.
Xander, the difference in my case is only about 10 km difference.
by DanPatterson_Retired
Your first sentence: you applied a projection to a data set to see which one was best or worked best. Get the projection information from the source if it is known. If it is a matter of only one of the two being correct, you will quickly find out, as long as you have ruled out the possibility that neither is correct.
Thank you for this. The thing is that both seem to be off. When I convert both the defined rasters to WGS 1984 with "Project Raster", they almost don't change - they are still just as heavily distorted. About 10 km different as mentioned above, but both many thousands of km off from other WGS 1984 rasters.
by DanPatterson_Retired
Is there a chance that you are doing these operations in the same data frame as other data? I would suggest you create two new data frames and place each file into its own data frame with no other data. Then, for each file in its own data frame, you right-click on the layer, go to properties, and determine the extent values. These will not change regardless of what you have done to the file in terms of defining a projection. Do the extents look reasonable? A file with a geographic coordinate system (like GCS WGS84) will only have values in the range -180 to 180 EW and -90 to 90 NS. If they are big numbers then you have projected data, and the range in values will tell you what type of projection it was or should be in. Ideally, both files should have the same extent if indeed they covered the same extent. If they have the same extent and they are defined differently, then one or both have been improperly defined when using the Define Projection tool.
Now on to projection. When you use the Project tool, always put the result into its own data frame, since a file that is projected will project-on-the-fly to try and match the projection of the data frame, and as you have noted, when the file was defined wrong and then projected, it flies off to somewhere you don't want it to be.
The whole process of adding data to a data frame in ArcMap is meant to "be helpful". In the "old days" you would get the warning that things didn't match and, sure enough, the files would fly off into their corners, spatially separated, because the projection files were wrong, undefined, or mismatched. In order to fix this in the "helpful" environment, you need to examine their properties, understand their possible extent values for a given coordinate system/projection, and then act accordingly.
So in summary, I have no clue what the real coordinate systems of the input files were, but IF they were defined by some other source and then redefined incorrectly, set them back and ensure that they are correct. Once they are in a known, verified coordinate system, you can proceed to projecting the data to a different coordinate system. Then, and only then, do I introduce other data into the data frame. If all the files play nice, then they should overlap or abut perfectly.
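A short arcpy sketch of the checks and the Define-versus-Project distinction discussed above; the raster paths are hypothetical placeholders and the target coordinate system is simply WGS 1984, as in the question.

```python
# Sketch of the advice above: inspect extents first, only use DefineProjection to
# correct a wrong/unknown definition, and use ProjectRaster to actually reproject.
# Raster paths are hypothetical placeholders.
import arcpy

raster = r"C:\data\climate_grid.tif"

# 1. Check the extent: geographic data should fall within -180..180 / -90..90.
desc = arcpy.Describe(raster)
ext = desc.extent
print(desc.spatialReference.name, ext.XMin, ext.YMin, ext.XMax, ext.YMax)

# 2. DefineProjection only rewrites the metadata tag; it never moves the pixels.
#    Use it only if the current definition is wrong or missing.
sphere_sr = arcpy.SpatialReference("Equidistant Cylindrical (Sphere)")  # assumed name
arcpy.management.DefineProjection(raster, sphere_sr)

# 3. ProjectRaster actually resamples the data into the new coordinate system.
wgs84 = arcpy.SpatialReference(4326)
arcpy.management.ProjectRaster(raster, r"C:\data\climate_grid_wgs84.tif", wgs84)
```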
## Ways to speed up drawing of layers in ArcMap?
I'm working with a layer of highways for the state of NC, but every time I change something, from altering symbol colors to panning to a different part of the state, it redraws. Which would be fine except for the fact that when it redraws, it takes forever. Okay maybe not forever, but it takes at least 2 minutes, sometimes 10, to redraw it, every time. I've been playing with different symbology for this and other layers to see what looks best (what I should include, exclude, etc.) but I've spent roughly an hour on this and have only made a few very minor changes. It's just annoying and I currently feel like the last part of my day has been wasted waiting on ArcGIS. Any recommendations to speed it up? I have to go through this full process another 60 times and I'd prefer not to spend 3 hours per map just waiting for it to draw stupid highways. (And no, I can't just take the highways layer out. I've already asked.) I'd much appreciate any advice.
TLDR Highways layer takes too long to draw and has to be redrawn a lot. Need help to speed it up.
## Why is the "Add Basemap" button not available? Trying to add a basemap of the United States.
Just a tip for all you basemap users: it's actually faster to set up an ArcGIS Online GIS server connection in ArcCatalog and add the basemap through that than it is adding it from the Add Data dropdown menu.
On a sidenote I've noticed a lot of government agencies provide server connections to their data - it's been a nice alternative to hoarding shapefiles.
Any sources for how to do this?
I believe this can also happen when you don’t have a coordinate system set?
I think internet is unavailable.
Ok. I’m connected that time maybe it’s a matter of weak connection.
You could also try closing program, opening a new window and adding the base map before adding the other layers. This has worked for me in the past, tho I’m not sure why.
• ### Exploring the Very Extended Low Surface Brightness Stellar Populations of the Large Magellanic Cloud with SMASH(1805.02671)
May 7, 2018 astro-ph.GA
We present the detection of very extended stellar populations around the Large Magellanic Cloud (LMC) out to R~21 degrees, or ~18.5 kpc at the LMC distance of 50 kpc, as detected in the Survey of the MAgellanic Stellar History (SMASH) performed with the Dark Energy Camera on the NOAO Blanco 4m Telescope. The deep (g~24) SMASH color magnitude diagrams (CMDs) clearly reveal old (~9 Gyr), metal-poor ([Fe/H]=-0.8 dex) main-sequence stars at a distance of 50 kpc. The surface brightness of these detections is extremely low with our most distant detection having 34 mag per arcsec squared in g-band. The SMASH radial density profile breaks from the inner LMC exponential decline at ~13-15 degrees and a second component at larger radii has a shallower slope with power-law index of -2.2 that contributes ~0.4% of the LMC's total stellar mass. In addition, the SMASH densities exhibit large scatter around our best-fit model of ~70% indicating that the envelope of stellar material in the LMC periphery is highly disturbed. We also use data from the NOAO Source catalog to map the LMC main-sequence populations at intermediate radii and detect a steep dropoff in density on the eastern side of the LMC (at R~8 deg) as well as an extended structure to the far northeast. These combined results confirm the existence of a very extended, low-density envelope of stellar material with disturbed shape around the LMC. The exact origin of this structure remains unclear but the leading options include a classical accreted halo or tidally stripped outer disk material.
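As a schematic summary of the radial profile described in this abstract (an inner exponential disk breaking to a shallower power-law component at ~13-15 degrees), with the disk scale length $$R_d$$ and the normalizations left symbolic since they are not quoted here:

$$\Sigma(R) \propto \begin{cases} e^{-R/R_d}, & R \lesssim 13\text{--}15^{\circ} \\ R^{-2.2}, & R \gtrsim 13\text{--}15^{\circ} \end{cases}$$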
• ### SMASHing THE LMC: Mapping A Ring-like Stellar Overdensity in the LMC Disk(1805.00481)
May 1, 2018 astro-ph.GA
We explore the stellar structure of the Large Magellanic Cloud (LMC) disk using data from the Survey of the MAgellanic Stellar History (SMASH) and the Dark Energy Survey (DES). We detect a ring-like stellar overdensity in the red clump star count map at a radius of ~6 degrees (~5.2kpc at the LMC distance) that is continuous over ~270 degrees in position angle and is only limited by the current data coverage. The overdensity is clearly continuous in the southern disk, as covered by the SMASH survey, with an amplitude up to 2.5 times higher than that of the underlying smooth disk. This structure might be related to the multiple arms found by de Vaucouleurs (1955). We find that the overdensity shows spatial correlation with intermediate-age star clusters, but not with young (< 1Gyr) main sequence stars, indicating the stellar populations associated with the overdensity are intermediate in age or older. This suggests that either (1) the overdensity formed out of an asymmetric one-armed spiral wrapping around the LMC main body, which is induced by repeated encounters with the Small Magellanic Cloud (SMC) over the last Gyr, or (2) the overdensity formed very recently as a tidal response to a direct collision with the SMC. Both scenarios suggest that the ring-like overdensity is likely a product of tidal interaction with the SMC, but not with the Milky Way halo.
• ### SMASHing the LMC: A Tidally-induced Warp in the Outer LMC and a large scale Reddening Map(1804.07765)
April 20, 2018 astro-ph.GA
We present a study of the three-dimensional (3D) structure of the Large Magellanic Cloud (LMC) using ~2.2 million red clump (RC) stars selected from the Survey of the MAgellanic Stellar History (SMASH). To correct for line-of-sight dust extinction, the intrinsic RC color and magnitude and their radial dependence are carefully measured by using internal nearly dust-free regions. These are then used to construct an accurate 2D reddening map (165 square degrees with ~10 arcmin resolution) of the LMC disk and the 3D spatial distribution of RC stars. An inclined disk model is fit to the 2D distance map yielding a best-fit inclination angle i = 25.86(+0.73,-1.39) degrees with random errors of +/-0.19 degrees, line-of-nodes position angle theta = 149.23(+6.43,-8.35) degrees with random errors of +/-0.49 degrees. These angles vary with galactic radius, indicating that the LMC disk is warped and twisted likely due to the repeated tidal interactions with the Small Magellanic Cloud (SMC). For the first time, our data reveal a significant warp in the southwest of the outer disk starting at rho ~ 7 degrees that departs from the defined LMC plane up to ~4 kpc towards the SMC, suggesting that it originated from a strong interaction with the SMC. In addition, the inner disk encompassing the off-centered bar appears to be tilted up to 5-15 degrees relative to the rest of the LMC disk. These findings on the outer warp and the tilted bar are consistent with the predictions from the Besla et al. (2012) simulation of a recent direct collision with the SMC.
• ### SMASH - Survey of the MAgellanic Stellar History(1701.00502)
Sept. 15, 2017 astro-ph.GA
The Large and Small Magellanic Clouds (LMC and SMC) are unique local laboratories for studying the formation and evolution of small galaxies in exquisite detail. The Survey of the MAgellanic Stellar History (SMASH) is an NOAO community DECam survey of the Clouds mapping 480 square degrees (distributed over ~2400 square degrees at ~20% filling factor) to ~24th mag in ugriz with the goal of identifying broadly distributed, low surface brightness stellar populations associated with the stellar halos and tidal debris of the Clouds. SMASH will also derive spatially-resolved star formation histories covering all ages out to large radii from the MCs that will further complement our understanding of their formation. Here, we present a summary of the survey, its data reduction, and a description of the first public Data Release (DR1). The SMASH DECam data have been reduced with a combination of the NOAO Community Pipeline, PHOTRED, an automated PSF photometry pipeline based mainly on the DAOPHOT suite, and custom calibration software. The attained astrometric precision is ~15 mas and the accuracy is ~2 mas with respect to the Gaia DR1 astrometric reference frame. The photometric precision is ~0.5-0.7% in griz and ~1% in u with a calibration accuracy of ~1.3% in all bands. The median 5 sigma point source depths in ugriz bands are 23.9, 24.8, 24.5, 24.2, 23.5 mag. The SMASH data already have been used to discover the Hydra II Milky Way satellite, the SMASH 1 old globular cluster likely associated with the LMC, and very extended stellar populations around the LMC out to R~18.4 kpc. SMASH DR1 contains measurements of ~100 million objects distributed in 61 fields. A prototype version of the NOAO Data Lab provides data access, including a data discovery tool, SMASH database access, an image cutout service, and a Jupyter notebook server with example notebooks for exploratory analysis.
• ### Lighting up stars in chemical evolution models: the CMD of Sculptor(1605.03606)
May 11, 2016 astro-ph.GA
We present a novel approach to draw the synthetic color-magnitude diagram of galaxies, which can provide - in principle - a deeper insight in the interpretation and understanding of current observations. In particular, we `light up' the stars of chemical evolution models, according to their initial mass, metallicity and age, to eventually understand how the assumed underlying galaxy formation and evolution scenario affects the final configuration of the synthetic CMD. In this way, we obtain a new set of observational constraints for chemical evolution models beyond the usual photospheric chemical abundances. The strength of our method resides in the very fine grid of metallicities and ages of the assumed database of stellar isochrones. In this work, we apply our photo-chemical model to reproduce the observed CMD of the Sculptor dSph and find that we can reproduce the main features of the observed CMD. The main discrepancies are found at fainter magnitudes in the main sequence turn-off and sub-giant branch, where the observed CMD extends towards bluer colors than the synthetic one; we suggest that this is a signature of metal-poor stellar populations in the data, which cannot be captured by our assumed one-zone chemical evolution model.
• ### Hydra II: a faint and compact Milky Way dwarf galaxy found in the Survey of the Magellanic Stellar History(1503.06216)
April 2, 2015 astro-ph.GA
We present the discovery of a new dwarf galaxy, Hydra II, found serendipitously within the data from the ongoing Survey of the MAgellanic Stellar History (SMASH) conducted with the Dark Energy Camera on the Blanco 4m Telescope. The new satellite is compact (r_h = 68 +/- 11 pc) and faint (M_V = -4.8 +/- 0.3), but well within the realm of dwarf galaxies. The stellar distribution of Hydra II in the color-magnitude diagram is well-described by a metal-poor ([Fe/H] = -2.2) and old (13 Gyr) isochrone and shows a distinct blue horizontal branch, some possible red clump stars, and faint stars that are suggestive of blue stragglers. At a heliocentric distance of 134 +/- 10 kpc, Hydra II is located in a region of the Galactic halo that models have suggested may host material from the leading arm of the Magellanic Stream. A comparison with N-body simulations hints that the new dwarf galaxy could be or could have been a satellite of the Magellanic Clouds.
• ### Dark matter cores in the Fornax and Sculptor dwarf galaxies: joining halo assembly and detailed star formation histories(1309.5958)
Jan. 28, 2014 astro-ph.CO
We combine the detailed Star Formation Histories of the Fornax and Sculptor dwarf Spheroidals with the Mass Assembly History of their dark matter (DM) halo progenitors to estimate if the energy deposited by Supernova type II (SNeII) is sufficient to create a substantial DM core. Assuming the efficiency of energy injection of the SNeII into DM particles is $\epsilon_{\rm gc}=0.05$, we find that a single early episode, $z \gtrsim z_{\rm infall}$, that combines the energy of all SNeII due to explode over 0.5 Gyr, is sufficient to create a core of several hundred parsecs in both Sculptor and Fornax. Therefore, our results suggest that it is energetically plausible to form cores in Cold Dark Matter (CDM) halos via early episodic gas outflows triggered by SNeII. Furthermore, based on CDM merger rates and phase-space density considerations, we argue that the probability of a subsequent complete regeneration of the cusp is small for a substantial fraction of dwarf-size haloes.
• ### The extremely low-metallicity tail of the Sculptor dwarf spheroidal galaxy(1211.4592)
We present abundances for seven stars in the (extremely) low-metallicity tail of the Sculptor dwarf spheroidal galaxy, from spectra taken with X-shooter on the ESO VLT. Targets were selected from the Ca II triplet (CaT) survey of the Dwarf Abundances and Radial Velocities Team (DART) using the latest calibration. Of the seven extremely metal-poor candidates, five stars are confirmed to be extremely metal-poor (i.e., [Fe/H]<-3 dex), with [Fe/H]=-3.47 +/- 0.07 for our most metal-poor star. All are around or below [Fe/H]=-2.5 dex from the measurement of individual Fe lines. These values are in agreement with the CaT predictions to within error bars. None of the seven stars is found to be carbon-rich. We estimate a 2-13% possibility of this being a pure chance effect, which could indicate a lower fraction of carbon-rich extremely metal-poor stars in Sculptor compared to the Milky Way halo. The [alpha/Fe] ratios show a range from +0.5 to -0.5, a larger variation than seen in Galactic samples although typically consistent within 1-2sigma. One star seems mildly iron-enhanced. Our program stars show no deviations from the Galactic abundance trends in chromium and the heavy elements barium and strontium. Sodium abundances are, however, below the Galactic values for several stars. Overall, we conclude that the CaT lines are a successful metallicity indicator down to the extremely metal-poor regime and that the extremely metal-poor stars in the Sculptor dwarf galaxy are chemically more similar to their Milky Way halo equivalents than the more metal-rich population of stars.
|
|
# Question: A 4.77 μC and a -2.58 μC charge are placed...
###### Question details
A 4.77 μC and a -2.58 μC charge are placed 17.4 cm apart. Where can a third charge be placed so that it experiences no net force? [Hint: Assume that the negative charge is 17.4 cm to the right of the positive charge.]
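For reference, the standard approach is to note that the net force can only vanish outside the pair, beyond the weaker (negative) charge, where the attraction back toward the pair can balance the repulsion from the stronger positive charge. The Python sketch below (not part of the original question) works through that balance using the values given.

```python
from math import sqrt

# Minimal sketch: with q1 = +4.77 uC at x = 0 and q2 = -2.58 uC at x = d,
# the net force on a third charge can only vanish to the right of the
# negative charge, where k*q1/x**2 = k*|q2|/(x - d)**2. Solving gives
#   x = d * sqrt(q1) / (sqrt(q1) - sqrt(|q2|)).
q1 = 4.77e-6   # C
q2 = 2.58e-6   # C (magnitude of the negative charge)
d = 0.174      # m, separation

x = d * sqrt(q1) / (sqrt(q1) - sqrt(q2))
print(f"Equilibrium point: {x:.3f} m to the right of the positive charge "
      f"({x - d:.3f} m beyond the negative charge)")
# Prints roughly 0.66 m from the positive charge.
```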
|
|
# [solved] Define x-range for Series expansion
Hi all, I am pretty new (i.e. 2 days old) to Mathematica and so far enjoy the power of it. My question is how to make a series expansion over a certain x-range of a function. The problem is that my function has quite a few curves and I want to series expand around a local minimum of the function to know the curvature of that local minimum. Imagine a Sin curve where you only want to series expand around a local minimum - this is exactly the same problem. I assume I can expand another function which is defined only in an interval around the local minimum of the original function? But isn't there a nicer solution to this problem? Best! PS: The Assumptions argument would be nice to use, but this is not the same as defining a range.
3 years ago
8 Replies
If possible, can you give an example of the kind of series expansion you are looking for from an example function? Please take a look at the Series function if you have not already.
3 years ago
The function I have is the one in this figure. This function needs to be series expanded at a=-0.15 m. BUT the range of the series expansion is important. I only want to series expand between -0.18 and -0.12, to get the exact curvature of the (almost) ~x^2 well. I had a look at the Series function documentation already, but there is no such thing as a boundary on a series expansion. I assume it is possible though? Otherwise my strategy would be to define a new function only on this interval and then series expand that, but this sounds like a detour. Do you have a better idea, Sean?
3 years ago
Sean Clarke 1 Vote I'm not sure I understand what you mean by boundaries in this case. It sounds like you want to use the Series function and only apply it over a certain range of values. A better question to ask is: how would you go about doing what you want by hand for a simple example?
3 years ago
Taylor expansions are defined at a point, not over a range. The higher the order of the polynomial, the wider the range it will fit...
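Since a Taylor series is defined at a point, the curvature of a well over a finite interval is often captured more naturally by a local quadratic fit restricted to that interval. The sketch below is an illustrative Python version of that idea (the original thread is about Mathematica, and the stand-in function is hypothetical).

```python
import numpy as np

# Fit a quadratic to samples of the function restricted to the interval
# around the local minimum; the fitted coefficient gives the curvature.
def f(x):
    # Hypothetical stand-in with a local minimum near x = -0.15;
    # replace with the real function.
    return -np.cos(20 * (x + 0.15))

a, b = -0.18, -0.12                     # interval around the local minimum
xs = np.linspace(a, b, 200)
c2, c1, c0 = np.polyfit(xs, f(xs), 2)   # f(x) ~ c2*x^2 + c1*x + c0 on [a, b]

x_min = -c1 / (2 * c2)                  # vertex of the fitted parabola
curvature = 2 * c2                      # second derivative of the quadratic fit
print(f"local minimum near x = {x_min:.4f}, curvature f''(x_min) ~ {curvature:.1f}")
```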
|
|
# Why Gaussian latent variable (noise) for GAN?
When I was reading about GAN, the thing I don't understand is why people often choose the input to a GAN (z) to be samples from a Gaussian? - and then are there also potential problems associated with this?
Why do people often choose the input to a GAN (z) to be samples from a Gaussian?
Generally, for two reasons: (1) mathematical simplicity, (2) working well enough in practice. However, as we explain, under additional assumptions the choice of Gaussian could be more justified.
Compare to uniform distribution. Gaussian distribution is not as simple as uniform distribution but it is not that far off either. It adds "concentration around the mean" assumption to uniformity, which gives us the benefits of parameter regularization in practical problems.
The least known. Use of Gaussian is best justified for continuous quantities that are the least known to us, e.g. noise $$\epsilon$$ or latent factor $$z$$. "The least known" could be formalized as "distribution that maximizes entropy for a given variance". The answer to this optimization is $$N(\mu, \sigma^2)$$ for arbitrary mean $$\mu$$. Therefore, in this sense, if we assume that a quantity is the least known to us, the best choice is Gaussian. Of course, if we acquire more knowledge about that quantity, we can do better than "the least known" assumption, as will be illustrated in the following examples.
Central limit theorem. Another commonly used justification is that since many observations are the result (average) of a large number of [almost] independent processes, the CLT justifies the choice of Gaussian. This is not a good justification because there are also many real-world phenomena that do not obey normality (e.g. power-law distributions), and since the variable is the least known to us, we cannot decide which of these real-world analogies is more applicable.
This would be the answer to "why we assume a Gaussian noise in probabilistic regression or Kalman filter?" too.
Are there also potential problems associated with this?
Yes. When we assume Gaussian, we are simplifying. If our simplification is unjustified, our model will under-perform. At this point, we should search for an alternative assumption. In practice, when we make a new assumption about the least known quantity (based on acquired knowledge or speculation), we could extract that assumption and introduce a new Gaussian one, instead of changing the Gaussian assumption. Here are two examples:
1. Example in regression (noise). Suppose we have no knowledge about observation $$A$$ (the least known), thus we assume $$A \sim N(\mu, \sigma^2)$$. After fitting the model, we may observe that the estimated variance $$\hat{\sigma}^2$$ is high. After some investigation, we may assume that $$A$$ is a linear function of measurement $$B$$, thus we extract this assumption as $$A = \color{blue}{b_1B +c} + \epsilon_1$$, where $$\epsilon_1 \sim N(0, \sigma_1^2)$$ is the new "the least known". Later, we may find out that our linearity assumption is also weak since, after fitting the model, the observed $$\hat{\epsilon}_1 = A - \hat{b}_1B -\hat{c}$$ also has a high $$\hat{\sigma}_1^2$$. Then, we may extract a new assumption as $$A = b_1B + \color{blue}{b_2B^2} + c + \epsilon_2$$, where $$\epsilon_2 \sim N(0, \sigma_2^2)$$ is the new "the least known", and so on.
2. Example in GAN (latent factor). Upon seeing unrealistic outputs from GAN (knowledge) we may add $$\color{blue}{\text{more layers}}$$ between $$z$$ and the output (extract assumption), in the hope that the new network (or function) with the new $$z_2 \sim N(0, \sigma_2^2)$$ would lead to more realistic outputs, and so on.
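For concreteness, here is a minimal sketch (illustrative Python, not from the original answer) of the convention being discussed: the latent vector is simply drawn from a standard normal prior before being passed to whatever generator network is in use.

```python
import numpy as np

# The GAN latent vector z is usually drawn from a standard Gaussian
# (sometimes a uniform) prior. `generator` is a placeholder for whatever
# network maps z to samples.
rng = np.random.default_rng(0)

def sample_latent(batch_size, dim, prior="gaussian"):
    if prior == "gaussian":
        return rng.standard_normal((batch_size, dim))      # z ~ N(0, I)
    elif prior == "uniform":
        return rng.uniform(-1.0, 1.0, (batch_size, dim))   # common alternative
    raise ValueError(prior)

z = sample_latent(batch_size=16, dim=100)
# fake_images = generator(z)   # hypothetical generator network
print(z.shape, z.mean(), z.std())
```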
|
|
# Finding shortest possible rotation period
How is it possible to find the shortest possible rotation period of a pulsar from a mass and a radius?
-
## 1 Answer
One simple method is to consider a particle of the star at the surface on the equator. This particle will feel two principal forces: a centrifugal force $F_c$, generally acting to pull the particle off the surface, and a gravitational force $F_g$ (and strong nuclear?) holding the particle to the surface.
If $F_c$ > $F_g$ then the particle will tend to leave the surface, if not then the particle will stay put.
As I suspect this is a homework question I'll leave the detailed mathematics out of it for now!
-
Thanks, however, can you help me find the period of rotation of the particle on the equator of a rotating pulsar with only the mass and radius given? – Joseph Flynn Aug 18 '11 at 17:23
Well, if you start with the inequality I have given you and sub in some appropriate formulas, some things might start to become more obvious... – Nic Aug 18 '11 at 17:26
In reality, as it spins up the geopotential will deform, outward at the equator, and outward at the poles, so the pulsar will become oblate. This increases (for fixed rotaton rate) the equatorial velocity, and decreases the gravitational field at the equator, so the star will adjust some more. How this plays out depends upon the equation of state. – Omega Centauri Aug 18 '11 at 18:28
Ok, thanks very much! – Joseph Flynn Aug 18 '11 at 22:56
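For completeness, the balance suggested in the answer can be turned into a formula and a quick calculation. The sketch below uses illustrative neutron-star values (1.4 solar masses, 10 km radius), which are assumptions for demonstration, not numbers from the question.

```python
from math import pi, sqrt

# Force balance at break-up: the centripetal acceleration at the equator
# equals the surface gravity,
#   omega_max**2 * R = G*M / R**2   =>   T_min = 2*pi*sqrt(R**3 / (G*M)).
G = 6.674e-11          # m^3 kg^-1 s^-2
M = 1.4 * 1.989e30     # kg  (assumed 1.4 solar masses)
R = 10e3               # m   (assumed 10 km radius)

T_min = 2 * pi * sqrt(R**3 / (G * M))
print(f"Shortest possible rotation period ~ {T_min*1e3:.2f} ms")
```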
|
|
Quantum Numbers
eduardomorales5
Posts: 77
Joined: Fri Aug 30, 2019 12:15 am
Quantum Numbers
How do we know when to put the spin number? How does an electron's spin affect its behavior?
Sarah Blake-2I
Posts: 153
Joined: Fri Aug 30, 2019 12:16 am
Re: Quantum Numbers
You put the spin number when you are discussing the state of both the electron and the orbital and then just 3 quantum numbers when just discussing the state of the orbital.
Rohit Ghosh 4F
Posts: 99
Joined: Thu Jul 25, 2019 12:17 am
Re: Quantum Numbers
The spin number is usually either +1/2 or -1/2, denoting whether the electron is "spin-up" or "spin-down." Electrons spinning in different directions are often paired together in a subshell.
905385366
Posts: 54
Joined: Sat Jul 20, 2019 12:16 am
Re: Quantum Numbers
The spin would be denoted with the 4th quantum number (+/- 1/2). Another tip is to remember to use the Pauli Exclusion Principle and Hund's Rule when drawing it out. Hope that helps.
kristi le 2F
Posts: 102
Joined: Thu Jul 11, 2019 12:15 am
Re: Quantum Numbers
Also, no two electrons can have the same four quantum numbers. If two electrons share the first three quantum numbers (n, l, ml), their spins must be different. One will be +1/2 while the other will be -1/2.
PGao_1B
Posts: 50
Joined: Sat Jul 20, 2019 12:15 am
Re: Quantum Numbers
Electron spin, s, has only two possible values: $+\frac{1}{2}$ and $-\frac{1}{2}$, representing whether the electron is "spin-up" or "spin-down," respectively. Electron spin determines if an atom will or will not generate a magnetic field.
505306205
Posts: 97
Joined: Thu Jul 25, 2019 12:15 am
Re: Quantum Numbers
Electron spins affect an electron's behavior because it prevents two electrons with parallel spins from occupying the same orbital. The configuration is most stable when electrons are paired with electrons with opposite spins.
Eva Zhao 4I
Posts: 101
Joined: Sun Sep 29, 2019 12:16 am
Re: Quantum Numbers
The spin number, or the spin magnetic quantum number (ms), is +1/2 or -1/2 since an electron can be spin up or down. The significance is that no two electrons in the same atom can have the same four quantum numbers. It tends to be pretty arbitrary as long as the combination of the four quantum numbers is possible; the principle quantum (n) and angular momentum quantum (l) numbers are typically the definite ones.
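The rules summarized in this thread (l runs from 0 to n-1, ml from -l to +l, ms is +1/2 or -1/2, and no two electrons share all four numbers) can be checked with a small enumeration; the Python sketch below is only an illustration of those counting rules.

```python
from fractions import Fraction

# Enumerate every allowed (n, l, ml, ms) for a given shell n. By the Pauli
# exclusion principle each distinct 4-tuple labels at most one electron, so
# the count below is the electron capacity of the shell (2n^2).
def allowed_states(n):
    half = Fraction(1, 2)
    return [(n, l, ml, ms)
            for l in range(n)                 # l = 0, 1, ..., n-1
            for ml in range(-l, l + 1)        # ml = -l, ..., +l
            for ms in (+half, -half)]         # spin up / spin down

for n in (1, 2, 3):
    print(f"n = {n}: {len(allowed_states(n))} allowed (n, l, ml, ms) combinations")
# Prints 2, 8, 18 -- i.e. 2n^2, as expected.
```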
|
|
# Why does the Sun move slower during the solstices?
A comment under this answer states that the apparent angular speed of the Sun is 8% slower during the solstices. This is rather counter-intuitive, since the rotation speed of the Earth is constant (or close enough for the timescales considered).
Why does the Sun appear to move slower in the sky at the solstices?
At the solstices, the Sun is on either the tropic of Cancer or Capricorn, so it has its maximum or minimum declination, approximately ±23.4°. So its speed is cos(23.4) $$\approx$$ 0.9178 relative to a point on the celestial equator, or about 8% slower, as Mike G mentioned in that comment.
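The quoted figure can be reproduced directly; the one-liner below simply evaluates the cosine factor mentioned above.

```python
from math import cos, radians

# The Sun's apparent speed relative to a point on the celestial equator
# scales with the cosine of its declination.
decl = 23.4  # degrees, approximately the obliquity of the ecliptic
print(f"cos({decl} deg) = {cos(radians(decl)):.4f}  (~8% slower)")
```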
|
|
Poster
Compressive Feature Learning
Hristo S Paskov · Robert West · John C Mitchell · Trevor Hastie
Thu Dec 05 07:00 PM -- 11:59 PM (PST) @ Harrah's Special Events Center, 2nd Floor
This paper addresses the problem of unsupervised feature learning for text data. Our method is grounded in the principle of minimum description length and uses a dictionary-based compression scheme to extract a succinct feature set. Specifically, our method finds a set of word $k$-grams that minimizes the cost of reconstructing the text losslessly. We formulate document compression as a binary optimization task and show how to solve it approximately via a sequence of reweighted linear programs that are efficient to solve and parallelizable. As our method is unsupervised, features may be extracted once and subsequently used in a variety of tasks. We demonstrate the performance of these features over a range of scenarios including unsupervised exploratory analysis and supervised text categorization. Our compressed feature space is two orders of magnitude smaller than the full $k$-gram space and matches the text categorization accuracy achieved in the full feature space. This dimensionality reduction not only results in faster training times, but it can also help elucidate structure in unsupervised learning tasks and reduce the amount of training data necessary for supervised learning.
|
|
Factoring RSA With CRT Optimization
factoreal
21 solved
Dec. 24, 2015, 2:18 p.m.
In the challenge description, the formula for calculating $y_2$ is $y_2 = x^{d_q} \pmod q$
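For context, the CRT optimization referred to here is the standard one used in textbook RSA decryption. The sketch below is a generic Python illustration (variable names follow the usual convention, not necessarily the challenge's notation).

```python
# Textbook RSA decryption with the CRT optimization (Garner recombination).
# Requires Python 3.8+ for pow(x, -1, m).
def rsa_crt_decrypt(c, d, p, q):
    dp = d % (p - 1)
    dq = d % (q - 1)
    q_inv = pow(q, -1, p)          # q^{-1} mod p
    m1 = pow(c, dp, p)             # c^{dp} mod p
    m2 = pow(c, dq, q)             # c^{dq} mod q  (the y2 mentioned above)
    h = (q_inv * (m1 - m2)) % p    # Garner recombination
    return m2 + h * q

# Toy example with small primes (not secure, just to show the recombination):
p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)
m = 1234
c = pow(m, e, n)
assert rsa_crt_decrypt(c, d, p, q) == m
```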
|
|
# How do I draw all non-isomorphic trees with 7 vertices?

I know that a tree on n vertices has exactly n-1 edges, and I can generate random trees (e.g. DrawGraph(RandomTree(7)) in Maple), but that is not helpful because I do not get a non-isomorphic tree each time and there are repetitions. How can I list every tree on 7 vertices exactly once, up to isomorphism?

A tree with at least two vertices must have at least two leaves, and deleting a leaf from a tree leaves a smaller tree. Running this process backwards, every tree can be built by repeatedly adding leaves to smaller trees, so one can enumerate trees order by order:

- n = 1, 2, 3: only one tree each (for three vertices, the star and the path coincide).
- n = 4: two trees, the path and the star - the only two choices up to isomorphism.
- n = 5: three trees. Adding a leaf to the 4-vertex path gives either a path of 5 vertices or a path of 4 vertices with an extra leaf on an interior vertex; adding a leaf to the 4-vertex star gives either the 5-vertex star or that same "path plus leaf" shape again, so two of the constructions are isomorphic and only three distinct trees remain.
- n = 6: six trees; n = 7: eleven trees.

A good way to organize the drawing is to segregate the trees by the maximum degree of any vertex (for six vertices the lowest possible maximum degree is 2, and there is only one such tree, the linear chain of 6 vertices), and then to double-check that the remaining candidates are pairwise non-isomorphic, for example by comparing degree sequences. Related facts that come up in the same exercise: a forest with n vertices and k components contains n - k edges, and a graph with n vertices, n - 1 edges and no circuit is connected.
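As a cross-check of the counts above, the leaf-adding argument can be implemented directly. The sketch below uses networkx for the isomorphism test, which is an implementation choice for this illustration rather than something from the original thread.

```python
import networkx as nx

# Every tree on n vertices arises from a tree on n-1 vertices by attaching
# one new leaf, so grow all trees order by order and keep one representative
# per isomorphism class.
def grow(trees):
    result = []
    for t in trees:
        n = t.number_of_nodes()
        for v in list(t.nodes):
            candidate = t.copy()
            candidate.add_edge(v, n)          # attach a new leaf labelled n
            if not any(nx.is_isomorphic(candidate, kept) for kept in result):
                result.append(candidate)
    return result

trees = [nx.Graph()]
trees[0].add_node(0)                          # the single-vertex tree
counts = {1: len(trees)}
for n in range(2, 8):
    trees = grow(trees)
    counts[n] = len(trees)

print(counts)   # expected: {1: 1, 2: 1, 3: 1, 4: 2, 5: 3, 6: 6, 7: 11}
```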
|
|
Solved
New Contributor
Posts: 2
# Forecast Studio Seasonal Index
Is there a file or report that has the seasonal index that Forecast Studio 3.1 is using when the _Seasonal_ flag for the model is Yes?
Accepted Solutions
Solution
03-13-2017 02:33 PM
SAS Super FREQ
Posts: 79
## Re: Forecast Studio Seasonal Index
The seasonal indices are not part of the FS output tables. However, you can take a look at the OUTCOMPONENT table to get the seasonal components (_COMP_ = 'Season') and derive the seasonal indices from it. Please note that the seasonal components are computed based on the type of the selected model. For example, if the model is multiplicative, the seasonal components should be centered around 1.
thanks
Alex
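As a rough illustration of this suggestion, the sketch below (in Python/pandas rather than SAS) pulls the seasonal component out of an exported OUTCOMPONENT table and averages it by season. The value and date column names are assumptions about the export layout and should be adjusted to match the actual table.

```python
import pandas as pd

# Pull the seasonal component rows (_COMP_ = 'Season') out of an exported
# OUTCOMPONENT table and average them by month to get seasonal indices.
# The '_VALUE_' and 'DATE' column names are assumptions about the layout.
outcomp = pd.read_csv("outcomponent.csv", parse_dates=["DATE"])

season = outcomp[outcomp["_COMP_"] == "Season"].copy()
season["month"] = season["DATE"].dt.month

# For a multiplicative model the seasonal component should be centered
# around 1, so the per-month mean component serves as the seasonal index.
seasonal_index = season.groupby("month")["_VALUE_"].mean()
print(seasonal_index)
```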
|
|
My Strengths
I took the Gallup StrengthsFinder test, and discovered that my strengths are Learner, Harmony, Relator, Responsibility, and Deliberative. After reading about what each one meant, I realized they describe me pretty well.
Learner: I love to learn, I do things for the sake of just learning, and I love sharing my knowledge with others. This Learner strength is evident when I do things like take extra or more challenging classes for fun. For example, I took organic chemistry simply because I wanted to learn about it, not because it counted for anything in my major or helped my GPA. I purposely take challenging course loads because for me, learning is the important thing, not my grade at the end of the day.
Harmony: This strength means that I always look for areas of agreement, trying to keep conflict to a minimum. The description says, "You can't quite believe how much time is wasted by people trying to impose their views on others. Wouldn't we all be more productive if we kept our opinions in check and instead looked for consensus and support?" This completely resonated with me, because it is in fact exactly how I feel. I tend to keep my opinions to myself if they conflict with someone else's and there is no need for me to share mine, because it just creates useless conflict; if I do share them, I do my best to present them logically and highlight the overlaps before explaining the differences. If other people are arguing over their opinions, I often have to remind them that everyone is entitled to their own opinions.
Relator: The theme of Relator describes my view on relationships with people (and is completely accurate!). I care more about having deeper, more meaningful relationships with a few people than shallow ones with lots of people. This means I take the time to get to know people, no matter who they are, and I find ways to relate to them. This has been a common theme in my life, but I never knew how to explain it. When I was doing gymnastics, most of my teammates were younger than me (the youngest by 7 years), but I still got along with them really well and was good friends with them - it was natural to me. I had one teammate who was a year older than me, but she never bothered to get to know them, thinking they were too childish and immature. Thus, I was always looked up to as the team leader, because I was the one who got along with everyone. This remains true today, as I have a very wide age range of friends - my youngest friend is twelve, and we do wushu together. For a long time, I was the only college student she would talk to, because I tried to get to know her and treated her as an equal, showing her that she could teach me things, too. On the other end of the spectrum, I have adult friends in their thirties, who I relate to in a very different way, but no more or less meaningfully.
Responsibility has been a word I've used to describe myself for a long time, but I never thought of it as a strength on its own. I have always thought about it in conjunction with leadership, but I realized that responsibility is pervasive in my life. I take commitment seriously, so when I say I will do something, I will do it with all my heart and see it through to completion, no matter how big or small it is. I can trace this back to Kindergarten. My elementary school (K-8) had a rule that you were only allowed to have the official school water bottle at school. My dad bought me the $5 water bottle, handed it to me on the first day, and said "If you lose it, you have to buy a new one yourself." I promised him that I would not lose it (because 5-year-old me thought $5 was a LOT of my own money!). So I took my water bottle to school every day for 9 years, and made sure I didn't lose it. I still have that water bottle. This attitude of responsibility extends to everything from my formal responsibilities in leadership roles to always being on time for class and activities to not losing my water bottles. I am known to be trustworthy and dependable.
Deliberative: Lastly, I am Deliberative. I had no idea what this meant at first, but I came to realize it describes me perfectly. I am careful, I think everything through down to the little details, and I take time weighing options to make the right decision. I am good at looking at many sides of a situation, and my friends often look to me for advice because they trust my carefully considered opinion. I plan ahead, considering everything that could go wrong, and take as many measures as I can to mitigate potential problems. In the end, this means my plans are usually flawlessly executed, whether it be a crazy trip to see three friends graduate from three different schools in three days, or a large event like a middle school robotics competition.
If you'd like to read the full descriptions of these 5 strengths, you can do so here: GallupReport.pdf
|
|
# Interpreting structure and regolith thickness in an orogenic gold prospect from detailed gravity data using VPmg inversion software
Mapping the thickness of regolith developed over an orogenic gold project area can often provide clues to the location of structures and strongly altered zones, as these may be associated with development of a thicker weathering profile. Obtaining this information from drilling alone is expensive. Detailed gravity measurements provide one means of imaging variations in the thickness of the low density regolith overlying higher density fresh rock. This information can then be used for targeting and interpretation of geochemical data.
Detailed gravity data from an African gold exploration project were studied. Estimating the average density of the regolith layer – a necessary precursor to gravity data reduction and to modeling the response of a variable thickness regolith – was approached in two ways: linear regression of the Free Air Anomaly (FAA) versus Bouguer correction at unit density, and forward modeling the FAA using a range of terrain density values to find the one producing the best fit to observed gravity data. These density estimates, together with specific gravity measurements on drill core, were used as parameters in an inverse model recovered from the gravity data, defining the variable thickness of a constant density regolith layer overlying a denser basement. The resultant image of the depth to base of weathering contains a set of elongate depressions that have a strong association with known mineralization and gradient array IP anomalies possibly caused by sulphide in mineralized rock.
### Introduction
Modeling the gravity response of a variable thickness low-density layer overlying dense basement rocks provides a useful method for imaging variations in the thickness of the weathered layer. These weathered layer thickness variations often reflect hydrothermal alteration, changes in porosity related to structural fabric and lithology that are relevant to exploration targeting. The information can be especially useful in areas where the basement rocks are magnetically “flat” – the regolith gravity response may be the only cheaply available and helpful geophysical mapping tool.
In itself, this kind of regolith imaging can be useful for interpretation of geochemical datasets. Transforming the gravity data into a map of a geological surface makes them easier to integrate with other datasets.
The program VPmg provides a means of recovering an inverse model composed of homogeneous or inhomogeneous layers characterized by density or magnetic properties (induced or remanent magnetization). In its simplest implementation – an upper layer forming a flat contact with a homogeneous basement – this type of model can be useful for estimating average terrain density used in reduction of gravity data. This forms a basis for developing a more complex layered model, as demonstrated in this article.
### Geology
Basement rocks within the survey area consist mainly of sedimentary rock (greywacke, siltstone, shale), intermediate volcanics and felsic intrusives, intruded by dolerite dykes. Gold mineralization occurs in several NNW-trending shear zones that have been located through auger drilling and gradient array induced polarization surveys. Outcrop of fresh rock is virtually non-existent, and prospecting for gold has been performed using auger drilling of the weathered rock.
The area has been subject to lateritic weathering, a chemical process occurring in tropical climates where acidic groundwater attacks the primary minerals, removing silica, alkalis and alkaline earths, resulting in residual or relative enrichment of iron and frequently of aluminium. The resulting regolith profile (Figure 1) in the area of interest is dominated by the saprolite layers, where the primary rocks are variably kaolinized but a relict portion of the original texture may be recognized. These grade downwards into saprock, where cores of unaltered rock are present, overlying fresh rock. We shall see below that there is a strong density contrast between the saprolite and fresh rock, which dominates the gravity response.
### Gravity survey
Gravity and DGPS measurements were acquired on east-west traverses, mainly at 200m line spacing and 100m station spacing along-line (Figure 2). Station elevations within the survey area ranged from 150 m to 219 m. The terrain is typical of laterite incised by drainage and outcrop is nonexistent.
The gravity response is subdued: the simple Bouguer anomaly (1.90 g/cc) range is only 2.81 mGal and the standard deviation 0.56 mGal. The range of Bouguer corrections at this density value is 5.47 mGal, with a mean of 14.31 mGal and standard deviation 0.99 mGal. In view of the large magnitude of these corrections in comparison to the range of Bouguer anomaly values, determining the appropriate Bouguer slab density is critical to deriving useful data from the reduction process.
### Data reduction
The simple Bouguer anomaly (or complete Bouguer anomaly if terrain corrections are applied) is the end-product of the gravity data reduction and correction process, wherein the meter drift, Earth tide, theoretical ellipsoid acceleration due to gravity, free air correction and Bouguer slab correction are removed from the data. The Bouguer correction is the contribution to the gravity response of an idealized infinite homogeneous slab of material lying between each station and a given height datum (commonly sea level). Removal of the Bouguer slab response from the Free Air Anomaly yields the “Bouguer anomaly”, which is the response of density inhomogeneities within the Earth – i.e. departures from the Bouguer slab density, which is the average density of the terrain in the survey area.
If the Bouguer slab density is not correctly estimated, the Bouguer anomaly values will contain a positive or negative residual component of the slab response, leading to interpretation problems. Several methods, surveyed by Yamamoto (1999), have been devised to estimate the Bouguer slab density directly from the gravity and height data, and these results should be evaluated against hand sample physical property measurements and ranges of possible values based on the geology and tabulations of average density values for various rock types.
A commonly used and effective method for estimating Bouguer slab density involves transforming the following expression for the Bouguer anomaly ${ g }_{ B }$ in terms of the Free Air Anomaly ${ g }_{ fa }$ and the Bouguer correction ${ \delta }_{ { g }_{ B } }$
${ g }_{ B }={ g }_{ fa }-{ \delta }_{ { g }_{ B } }={ g }_{ fa }-0.0419088{ \rho }_{ B }h$
by considering the Bouguer anomaly ${ g }_{ B }$ value to be equivalent to a small error term $\varepsilon$ (valid in cases such as this, where the Bouguer anomalies are small) and rearranging so that
${ g }_{ fa }={ \delta }_{ { g }_{ B } }+\varepsilon =0.0419088{ \rho }_{ B }h+\varepsilon$
i.e. the Bouguer slab density ${ \rho }_{ B }$ is the slope of a linear regression of Free Air Anomaly ${ g }_{ fa }$ values against scaled height values $0.0419088h$ (which are the Bouguer corrections with ${ \rho }_{ B }=1$).
Applying this method to the survey data (Figure 3) gives an estimated Bouguer slab density of 2.02 g/cc. Note that this is well below the average for crustal rocks (2.67 g/cc), so blindly choosing that number would lead to serious error.
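For readers who want to reproduce the regression, a minimal numerical sketch is given below. The station arrays are synthetic (they are not the project data), but the procedure is the same: regress the Free Air Anomaly against the Bouguer correction at unit density and read the slope off as the Bouguer slab density in g/cc.

```python
import numpy as np

# Synthetic stations for illustration only; substitute the survey FAA and heights.
rng = np.random.default_rng(0)
height_m = rng.uniform(150.0, 219.0, size=500)           # station elevations (m)
true_density = 2.0                                        # g/cc, used only to build the synthetic FAA
faa_mgal = 0.0419088 * true_density * height_m + rng.normal(0.0, 0.3, size=500)

# Bouguer correction at unit density, i.e. the regression abscissa (mGal per g/cc).
x = 0.0419088 * height_m

slope, intercept = np.polyfit(x, faa_mgal, 1)
print(f"Estimated Bouguer slab density: {slope:.2f} g/cc")
```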
The Bouguer anomaly (2.00 g/cc) image (Figure 4) appears to be generally uncorrelated with the elevation image in Figure 1 above, giving us confidence that the statistically-derived Bouguer slab density value is reasonable. However, some of the narrow valleys have associated lows in the Bouguer anomaly image, possibly resulting from terrain effects or errors in the terrain density estimate.
An alternative method for estimating the Bouguer slab density, which incorporates terrain effects and allows for different density values to be applied in regions known to have significantly different geology (e.g. granite batholiths vs shale-dominated sedimentary sequences), involves comparing the observed Free Air Anomaly with values calculated using a simple model of a homogeneous upper layer in planar contact with a homogeneous basement.
The modeling program VPmg was used to calculate the Free Air Anomaly at each station for a range of different model terrain densities, and the Root Mean Square (RMS) misfit between these calculated values and the observed data was graphed against the corresponding density value (Figure 5). The best-fitting data were calculated using a terrain density of 1.90 g/cc, which is close to the value estimated statistically from the gravity data above. The RMS misfit (0.52 mGal) is actually less than the standard deviation of the Bouguer anomaly values, so the simple model of a homogeneous layer overlying a denser basement is a good first-order approximation to the actual density structure of the Earth in this area. Not surprisingly, the residual gravity response estimated by subtracting this theoretical model response from the observed Free Air Anomaly values looks very similar to the Bouguer anomaly image in Figure 4 above.
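The density sweep itself is simple to script around any forward-modeling engine. The sketch below is illustrative only: `forward_model(density)` is a placeholder standing in for the VPmg calculation of the Free Air Anomaly at every station for a homogeneous terrain of the given density, and is not part of any actual VPmg interface.

```python
import numpy as np

def rms_misfit(observed, calculated):
    """Root-mean-square misfit between observed and calculated Free Air Anomaly (mGal)."""
    return np.sqrt(np.mean((np.asarray(observed) - np.asarray(calculated)) ** 2))

def density_sweep(observed_faa, forward_model, densities):
    """Return the best-fitting terrain density and the misfit at each trial density.

    `forward_model(density)` must return the calculated FAA at every station; it is
    assumed to be supplied by the user, e.g. by wrapping an external modeling run.
    """
    misfits = [rms_misfit(observed_faa, forward_model(rho)) for rho in densities]
    return densities[int(np.argmin(misfits))], misfits

# Example sweep from 1.6 to 2.4 g/cc in 0.05 g/cc steps.
trial_densities = np.arange(1.60, 2.45, 0.05)
```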
### Regolith densities
Drill core specific gravity measurements over a thick intersection of regolith (all within the upper and lower saprolite zones) are available from one hole. The length-weighted average value of specific gravity for the entire saprolite interval is 1.51, which happens to equal the raw average and median. The standard deviation is 0.07, and the range 0.26 (1.35 to 1.61). The density of saprolite is a good starting point for modeling the regolith as a whole in this instance: logging codes in the drilling database consist mainly of upper saprolite (51%) and lower saprolite (11%) while the upper (massive laterite 4%, pisolitic laterite 2%) and lower (saprock 5%) parts of the profile were intersected over a lesser proportion of intervals. This is admittedly a crude way of looking at the geology, but the dominance of saprolite in the drilled laterite profile is clear.
This average density estimate (1.51 g/cc) is significantly lower than the regolith density estimates established from the gravity data (1.90 – 2.00 g/cc). However, this individual drill hole may not be truly representative of the regolith densities across the entire survey area, and it should be noted that the ‘contact’ between regolith and fresh rock is gradational, rather than a sharp 1.50 to 2.80 g/cc density transition. These issues highlight the need to consider a range of possible density values when modeling the gravity data.
### Basement densities
Limited basement drill core sampling (2 holes) with specific gravity measurements is available in the survey area. The basement is known to consist mainly of greywacke and shale, with some dolerite dykes. The histogram of drill core s.g. measurements (Figure 6) reflects these rock types. Including all rock types intersected by the holes, the mean specific gravity is 2.77 with standard deviation 0.09 – i.e. a fairly narrow spread of densities. Attempting to recover fairly subtle (±0.2 g/cc) density variations within the basement from gravity data alone, in the presence of a variable regolith profile with a contrast of about 0.90 g/cc relative to average basement density, is a tall order without comprehensive a priori information on regolith thickness. A more realistic objective is to recover regolith thickness while assuming a uniform basement density, recognizing that more strongly altered basement is likely to coincide with deeper weathering, particularly where the alteration assemblage includes significant sulphide content.
### Homogeneous layer inversion
Having established reasonable estimates of regolith and fresh rock densities, the model of regolith thickness (assuming homogeneous density in both layers) can be recovered from the gravity data using the inversion program VPmg.
Inversion is a computational process of finding a set of model parameters $m$, given a set of observed data $d$ such that
$d=G(m)$
where $G$ is an operator that describes the relationship between model parameters and the geophysical response. The algorithm solves this problem approximately, such that an objective function describing the misfit between observed and calculated data is minimized. A commonly used objective function is the chi-squared data norm $L2$ defined as
$L2=\frac { 1 }{ N } \sum _{ n=1 }^{ N }{ { \left( \frac { { d }_{ n }-{ c }_{ n } }{ \varepsilon } \right) }^{ 2 } }$
where $d$ are the observed data, $c$ are the data calculated by applying the operator $G$ to the model parameters $m$, and $\varepsilon$ is the assigned uncertainty, reflecting the estimated level of error in the observations.
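As a small illustration, the norm can be computed as follows for synthetic observed and calculated values, with $\varepsilon$ a single assigned uncertainty as in the expression above; a value near 1 indicates that the data are fit to within their assigned errors.

```python
import numpy as np

def chi_squared_norm(observed, calculated, uncertainty):
    """L2 data norm: mean of squared, uncertainty-normalised residuals."""
    residuals = (np.asarray(observed) - np.asarray(calculated)) / uncertainty
    return np.mean(residuals ** 2)

# Synthetic example: observed vs. calculated gravity (mGal), 0.1 mGal uncertainty.
print(chi_squared_norm([1.20, 0.80, -0.30], [1.12, 0.91, -0.22], 0.1))
```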
Geological structures can be parameterized in different ways. For instance, the subsurface can be discretized into a set of cubic cells, each with its own uniform density, so that the set of densities forms the model parameters $m$.
VPmg (which stands for “Vertical Prism magnetic gravity”) was written by Peter Fullagar and is marketed by Mira Geoscience. The program models density and magnetic property distributions using a set of vertical prisms that can be subdivided into layers. These layers form “lithologies” with common properties, so the properties of a group of prisms can be altered en masse. The dimensions of the cells within each vertical prism adapt to the topography and the thickness of the geological unit in the model (Figure 7). The program and the general modeling approach are described in papers by Fullagar & Pears (2007) and Fullagar et al (2000, 2004, 2008).
An inversion was run using a starting model consisting of regolith (1.90 g/cc) with basal contact at 145m elevation, overlying a homogeneous basement (2.80 g/cc). Only the elevation of the basal contact of the regolith in each model cell was allowed to vary. The inversion recovered a model producing an excellent fit to the observed data. The image of base regolith elevation (Figure 8) defines several discrete elongate zones in which the depth of weathering is greater than the surrounds, and these tend to coincide with elevated chargeability responses in gradient array IP data, which are also associated with gold anomalies in auger and aircore drilling. Some of the depressions in the base of weathering are consistent with drilling information (Figure 9). However, more work is required to refine the model, possibly including variations of the assumed density values.
The model is imperfect; east-west ridges corresponding to survey line locations are evident in the base-of-weathering surface. These arise because the calculated response is more sensitive to the cells closest to the readings than to more distant model cells. The cells with lesser sensitivity tend not to be adjusted by the inversion to the same degree as the ones beneath the survey stations. A model with larger cells does not suffer from these artifacts to the same degree, but also has far lower spatial resolution. This model, with 50 m cells, represents an attempt to squeeze maximum resolution from the data, and the artifacts are a trade-off.
Compared to the colour-shaded image of Bouguer gravity in Figure 4 above, the locations of depressions in the base of weathering are much more clearly defined in the model image in Figure 8.
### Next steps
If some a priori information on depth to base of weathering were available across the entire survey area, this could be used as a constraint upon the homogeneous layer inversion discussed above, improving its accuracy. In all likelihood, density variations within the basement could then be recovered from the model, rather than fitting all of the variation in the data by varying the regolith thickness alone, as in the case discussed above. Unfortunately, drilling that penetrates the base of weathering is restricted to narrow zones where auger geochemistry has generated targets. The areas we are most interested in, by definition, have not yet been drilled. Techniques such as passive seismic or airborne EM may provide the needed information on variations in regolith thickness across wide enough areas to be useful.
### Conclusion
The ability to forward model the effects of simple homogeneous terrain models at different densities allows us to find an average terrain density suitable for gravity data reduction. The calculated free air anomaly derived from this optimal model can be subtracted from the observed data to yield a residual gravity anomaly that incorporates both Bouguer slab and terrain corrections.
Inversion of gravity data from the subject orogenic gold project area has defined elongate zones of probable deeper weathering and/or more intense alteration, some of which correspond to structures that have already been drilled and found to be gold-bearing. Many of these zones coincide with elevated IP responses, which may be indicative of sulphide minerals in the alteration assemblage.
A potentially profitable area for further work involves using a priori models of transported and/or weathered material thickness to strip the gravity response of these layers from observed data, revealing the response component caused by basement structures.
### References
Fullagar, P.K. and Pears, G.A., 2007, Towards geologically realistic inversion, in Proceedings of Exploration 07: Fifth Decennial International Conference on Mineral Exploration, Toronto.
Fullagar, P.K., Hughes, N., and Paine, J., 2000, Drilling-constrained 3D gravity interpretation, Exploration Geophysics, v. 31, p. 17-23.
Fullagar, P.K., Pears, G.A., Hutton, D., and Thompson, A., 2004, 3D gravity and aeromagnetic inversion, Pilbara region, W.A., Exploration Geophysics, v. 35, p. 142-146.
Fullagar, P.K., Pears, G.A., and McMonnies, B., 2008, Constrained inversion of geological surfaces – pushing the boundaries, The Leading Edge, v. 27, p. 98-105.
Yamamoto, A., 1999, Estimating the optimum reduction density for gravity anomaly: a theoretical overview, Jour. Fac. Sci., Hokkaido Univ., Ser. VII (Geophysics), v. 11, no. 3, p. 577-599.
|
|
Dark Matter Searches at Colliders – part II April 28, 2008
Posted by dorigo in cosmology, physics, science.
In part I of this long post I gave a writeup of part of the seminar I gave last Tuesday. There, I discussed some of the tools necessary for the searches that have been carried out at the Tevatron collider experiments, and will be performed at the LHC experiments, for dark matter candidates. In particular, I focused attention on missing transverse energy (MEt), which is a measure of the amount of imbalance in the momentum flow out of the collision, in the plane transverse to the beam. A dark matter (DM) candidate produced in a high-energy collision would create that imbalance by carrying away unseen a sizable amount of momentum: we assume such a DM candidate is weakly interacting, and so it leaves undetected just like a neutrino. In this post, I will continue the discussion, and I will give a first example of a direct search for DM performed at the Tevatron.
Cosmologists assure us that we need new particles beyond the Standard Model to accommodate a dark matter candidate. One possibility which is dear to many is the lightest neutralino, a particle belonging to the rich spectrum of new states predicted by supersymmetric (SUSY) theories. The neutralino is the lightest supersymmetric particle (LSP) and it is a quantum superposition of as many as four electrically neutral superpartners of the neutral bosons predicted by the model. The exact recipe depends on a few of the many parameters defining the particular kind of supersymmetry that Nature (the bitch, not the magazine) might have chosen for the Universe we live in. Those parameters are, of course, still unknown to us, and so are the phenomenological details of SUSY.
Indeed, supersymmetry is not even a model, but just a framework which dictates a new symmetry between ordinary and supersymmetric matter and fields. SUSY predicts the existence of one superpartner for each ordinary particle, as shown in the table on the left (SUSY particles have wiggles on their names). The introduction of these new entities solves one grievous problem in the Standard Model: the fact that a light Higgs boson – necessary for the experimental consistency of electroweak observations – is at odds with the expected huge corrections to its mass needed to renormalize some divergent loops involving the boson coupled to ordinary matter. It is as if the mass of the Higgs boson ended up being of order one after having withstood subtraction and addition of a dozen different contributions of the order of billions of billions each. The introduction of supersymmetric particles cancels the divergent loops, solving the problem at its root.
Supersymmetry has a second charming feature: it allows the running coupling constants which determine the strength of the three basic interactions – strong, electromagnetic, and weak – to become one and the same at a very high energy scale. These couplings do depend on the energy at which they are measured, and it is indeed expected that the interactions “become one single interaction” above an energy scale where they unify. In the standard model, one sees the three couplings meet at different values of energy, whilst supersymmetry allows them to have the same value at a common energy scale.
And supersymmetry allows for a neutral, weakly interacting particle, just massive enough to make a perfect candidate for the dark matter we infer exists in the Universe. Since dark matter has survived to our time from the big bang, this neutralino has to be perfectly stable: it simply cannot, CANNOT decay to anything else. Supersymmetric theories which include R-parity – a conserved quantum number built from a particle's spin, baryon number, and lepton number – have this feature built in.
R-parity was not invented to make the neutralino stable: rather, it was introduced to solve a couple of other outstanding problems of the theory, namely to maintain the stability of the proton and the universality of weak couplings despite the addition of new states. However, it is just what we need if we are to assume that neutralinos make up 20% of our universe today, rather than have decayed to ordinary matter and radiation. R-parity also has an important phenomenological consequence at colliders: it dictates that supersymmetric particles can only be produced in pairs in the collision of ordinary matter.
The CDF experiment carried out a search for neutralinos in its Run II dataset by considering the pair-production of chargino $\chi_1^+$ and neutralino $\chi_2^0$ as in the diagrams shown on the right. The neutralino $\chi_2^0$ emits a charged lepton, converting into the lightest state $\chi_1^0$, which leaves the detector without a trace; the chargino (a supersymmetric analog of the W boson) is expected to decay with the emission of one or two charged leptons and the lightest supersymmetric particle (the LSP), as we already mentioned. The final state may thus include two or three charged leptons and a large amount of missing transverse energy from the combination of the two LSPs.
The CDF detector, which collects proton-antiproton collisions at Fermilab's 2-TeV Tevatron collider, is good at finding such a signature. Charged leptons are only produced in rare weak interaction processes at a proton-antiproton collider: the production of a W or Z boson, or the decay of a heavy quark. Electrons and muons of large transverse momentum are identified very effectively by an online trigger system, so the collection efficiency of events with two or three leptons is very high. In order to search for chargino-neutralino production, two different “signal regions” are defined by a set of selection cuts on the observed characteristics of the events before looking at the data. Similar “control regions”, which are expected to contain a negligible fraction of the sought process, are also defined.
Monte Carlo simulations of all known weak processes capable of yielding leptons in the final state are then compared to the data contained in the control region in a number of kinematical distributions. The comparison allows one to gain confidence that the simulation is capable of predicting both the number and the kinematics of the experimental data. Only after these checks are successful is the signal region opened, and the data contained within are compared numerically to the expected sum of standard model processes contributing to the mixture.
CDF thus finds 6 events in a signal region defined to contain events with large missing Et, two well-identified leptons, and a third lepton candidate. Here, simulations predict $5.5 \pm 1.1$ events, mainly from diboson production and top pair production. In the other signal region, defined to have a third good lepton candidate, only one event is found, with an expectation of $0.88 \pm 0.14$ from standard model processes. The distribution of missing transverse energy observed in this latter case and the expected contributions from standard model processes and from supersymmetric contributions is shown in the plot above. There, you see the one candidate (the point with error bars with missing Et above 20 GeV, the cut defining the signal region in events with three charged leptons) compared to SM backgrounds: mostly diboson $p \bar p \to WZ$ production. The white histogram is the SUSY contribution.
Simulations can in fact predict the number of chargino-neutralino events the two signal regions would contain, as a function of the point in supersymmetric parameter space. One thus finds that, for instance, 6.9 events would be expected in the first signal region, and 4.5 in the second. The data clearly do not allow that interpretation.
Since no signal is found, the experiment can set a limit on the production rate of the sought process. The reasoning is quite down-to-earth:
I observed one event; on average, standard model reactions should produce 0.88 events in that dataset, give or take a small error. Now, that one event could well be the result of SUSY, and the standard model fluctuated to yield zero events; similarly, SUSY could have contributed with an average of two, or even three events, to the selected dataset, and an unlucky fluctuation could have brought our observation down to one single event.
There is a limit to our credibility, of course. In particle physics, we usually set the threshold of credibility for these searches at one-in-twenty odds: a complicated but conceptually simple computation allows one to compute the “95% confidence level” (C.L.) upper limit on the average number of signal events that the cuts defining our signal region could include. It is the number N such that, together with the 0.88 events expected from the standard model, it would yield more than the one event we observed (at least two, that is) 95% of the time.
Once N is computed, converting it into a 95% C.L. on the chargino-neutralino cross section only requires accounting for the total luminosity $L$ of the collected data and the expected efficiency $\epsilon$ with which our signal region would capture those events: $\sigma < N / ( \epsilon L)$.
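If you want to play with the numbers yourself, here is a rough sketch of that computation, using a bare Poisson counting model with the 0.88-event background and the single observed event. The efficiency and luminosity figures are invented for illustration, and a real analysis would of course also fold in systematic uncertainties.

```python
from scipy.stats import poisson
from scipy.optimize import brentq

n_obs, background = 1, 0.88

# Find the signal mean N such that a Poisson process with mean (background + N)
# would give more than n_obs events 95% of the time, i.e. P(X <= n_obs) = 0.05.
def coverage(signal):
    return poisson.cdf(n_obs, background + signal) - 0.05

n_limit = brentq(coverage, 0.0, 20.0)
print(f"95% C.L. upper limit on signal events: N = {n_limit:.2f}")

# Convert to a cross-section limit, sigma < N / (efficiency * luminosity).
efficiency, luminosity_pb = 0.05, 2000.0     # illustrative values only
print(f"sigma < {n_limit / (efficiency * luminosity_pb):.3f} pb")
```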
In the plot below, you can see the result of the exercise. The cross section limit is shown by the black line with blue and yellow bands signalling the one- and two-standard deviations boundaries expected for the particular search. The limit is plotted as a function of the chargino mass -one of the many free parameters of the considered model; the limit varies as a function of it because so does signal efficiency. Since the theoretical model would foresee a cross section (the red line) larger than the limit for all chargino masses lower than 140 GeV, there follows an exclusion of chargino masses below that value. You can see that CDF sizably extends the LEP limit on this particle, set at 103 GeV (the hatched band on the left).
(To be continued…)
1. cormac - April 28, 2008
Excellent post, highly interesting. A couple of minor queries
1. Re ‘it allows the running coupling constants which determine the strength of the three basic interactions -strong, electromagnetic, and weak- to become one and the same at a very high energy scale…in the standard model, one sees the three couplings meet at different values of energy, whilst supersymmetry allows them to have the same value at a common energy scale’….
Doesn’t SUSY make this prediction for all four interactions, even better? I thought that was a major point, that gravity gets automatically incorporated….
2. Re SUSY breaking, I notice people like Lee Smolin make the point ‘that’s very convenient isn’t it?’ It’s strange no-one points out that the situation may be analogous to the hypothesis of anti-matter – after all, we were lucky the positron turned up so easily…
3. Re unification, it always strikes me that if all four interactions are to be unified, something like SUSY must be right… otherwise there is no ultimate symmetry to connect the world of fermions to bosons… is this too simple a view?
Cormac O’ Raifeartaigh
2. dorigo - April 29, 2008
Hello Cormac,
1) well, no, SUSY does not incorporate gravity. Some particular models which include general relativity, like SUGRA, do – but I admit I have never studied these.
2) Sure, the parallel of anti-matter is a close one to the prediction of SUSY particles. Good point… The fact is, that anti-matter did not require any hypothesis to explain why it had not been found before. SUSY requires you to buy that it is a broken symmetry (otherwise we’d have seen those particles), and it requires that a very particular combination of spin, baryon number, and lepton number is conserved to 10^-40 -otherwise protons would decay and dark matter would not be there.
3) I do not really feel the urge to have a symmetry between fermions and bosons…
Cheers,
T.
3. World of Science News : Blog Archive : links for 2008-04-29 [Uncertain Principles] - April 29, 2008
[…] Dark Matter Searches at Colliders – part II « A Quantum Diaries Survivor The saga continues (tags: physics astronomy science experiment blogs) […]
4. Dark Matter Searches at Colliders - part III « A Quantum Diaries Survivor - May 6, 2008
[…] One intriguing solution to the problem lies in hypothesizing that a massive particle called neutralino wanders around in huge amounts, slow and unbothered by its close encounters with ordinary matter. Neutralinos would be electrically neutral, they would not interact strongly with matter, and they would be perfectly stable, lest they violate a very convenient quantum-mechanical conservation law. For more details on these hypotheses, see part II of this post. […]
5. Events with photons, b-jets, and missing Et « A Quantum Diaries Survivor - June 19, 2008
[…] in a lot of detail in two posts on the searches for dark matter at colliders (see here for part 1, here for part 2, and here for part 3). Add b-quark jets to boot, and you are looking at a very rare signature […]
|
|
# Are there other analytic functions with this property of the sinc function?
This question is motivated by my previous post concerning the sinc function.
Prove or disprove that $\frac{\sin x}{x}$ is the only nonzero entire (i.e. analytic everywhere) function $f(x)$ on $\mathbb{R}$ such that $$\int_0^\infty f(x) dx=\int_0^\infty f(x)^2 dx$$ or $$\int_{-\infty}^\infty f(x) dx=\int_{-\infty}^\infty f(x)^2 dx.$$
If $f$ is only required to be continuous, then other examples are possible, for example the even extension of the following function: $$f(x)=\left\{\begin{array}{ll} -2(5+\sqrt{65})x^2+(7+\sqrt{65})x-1 & 0\le x\le \frac{1}{2}\\ 2(5+\sqrt{65})x^2-(13+3\sqrt{65})x+4+\sqrt{65} & \frac{1}{2}\le x\le 1\\ \frac{1}{x^2} & x\ge 1 \end{array}\right.$$
As commented below, it turns out that there are easy answers to the above question. AD also exhibited a function below that additionally satisfies $$\sum_{n=1}^\infty f(n)=\sum_{n=1}^\infty f(n)^2=0.$$ In view of these answers, my question is now revised to:
Prove or disprove that $\frac{\sin x}{x}$ is the only nonzero entire function $f(x)$ on $\mathbb{R}$ such that $$\int_{-\infty}^\infty f(x) dx=\int_{-\infty}^\infty f(x)^2 dx=\sum_{n=-\infty}^\infty f(n) =\sum_{n=-\infty}^\infty f(n)^2$$
0
2019-12-02 02:48:52
Source Share
Answers: 3
The revised question has been answered in this post at MO. In particular, it was shown there that $\frac{\sin ax}{ax}$ satisfies this equality for every $0<a\le \pi$.
0
2019-12-03 05:40:52
Source
Following the suggestion of Zaricuse: take an entire function $f$ such that $\int_{-\infty}^\infty f(x) dx \ne0$, then solve $$\int_{-\infty}^\infty af(x)dx = \int_{-\infty}^\infty (af(x))^2dx$$ for $a$. Then $g(z)=af(z)$ solves half of the problem. To also arrange $$\sum g(n)=\sum g(n)^2$$ we might for example start with $f(z)=\sin (\pi z) \cdot h(z)$, where $h$ is another integrable entire function.
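For a concrete numerical illustration of the scaling step (with the illustrative choice $f(x)=e^{-x^2}$ rather than the $\sin(\pi z)h(z)$ construction above), one can check that $a=\int f/\int f^2$ makes the two integrals of $af$ agree:

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-x ** 2)       # illustrative entire function with nonzero integral

I1, _ = quad(f, -np.inf, np.inf)                      # integral of f   = sqrt(pi)
I2, _ = quad(lambda x: f(x) ** 2, -np.inf, np.inf)    # integral of f^2 = sqrt(pi/2)
a = I1 / I2                                           # scaling that equates the two

g = lambda x: a * f(x)
lhs, _ = quad(g, -np.inf, np.inf)
rhs, _ = quad(lambda x: g(x) ** 2, -np.inf, np.inf)
print(a, lhs, rhs)   # a = sqrt(2); both integrals equal sqrt(2*pi)
```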
0
2019-12-03 04:17:41
Source
In the spirit of Zaricuse's solution without the sum requirement: take any two sufficiently well-behaved functions f and g. Then you should be able to find a linear combination af+bg that satisfies both equations. If you let
$$if=\int_{-\infty}^\infty f(x) dx$$ $$if2=\int_{-\infty}^\infty f(x)^2 dx$$ $$sf=\sum_{n=1}^\infty f(n)$$ $$sf2=\sum_{n=1}^\infty f(n)^2$$
and similarly for fg and g, we have
$$a*if + b*ig=a^2*if2+2ab*ifg+b^2ig2$$ and
$$a*sf + b*sg=a^2*sf2+2ab*sfg+b^2sg2$$
which can be solved for a and b in most cases.
Added in response to the new requirement that the values of both integrals and both sums all match: I just need enough knobs to turn. Define $g(k,x)=\exp(-kx^2)$ and take $f(x)=g(1,x)+ag(2,x)+bg(3,x)+cg(4,x)$. The nice feature of this $f$ is that $f^2$ is also written in terms of $g(k,x)$, though $k$ goes up to 8. The integral of $g(k,x)$ is simply $\sqrt{\frac{\pi}{k}}$ and the sum is computed by Wolfram Alpha as $\vartheta_3(0,\exp(-k))$. We can make a table:
$$\begin{array}{ccc}k&\int g(k,x)&\sum g(k,x)\\1&1.772453851&1.77264\\2&1.253314137&1.27134\\3&1.023326708&1.09959\\4&0.886226925&1.03663\\5&0.79266546&1.01348\\6&0.723601255&1.00496\\7&0.669924586&1.00182\\8&0.626657069&1.00067\end{array}$$
So the integral of f is $\sqrt{\pi}(1+a/\sqrt{2}+b/\sqrt{3}+c/\sqrt{4})$. The integral of $f^2$ is $\sqrt{\pi}(1/\sqrt{2}+2a/\sqrt{3}+(a^2+2b)/\sqrt{4}+(2c+2ab)/\sqrt{5}+(b^2+2ac)/\sqrt{6}+2bc/\sqrt{7}+c^2/\sqrt{8})$, with similar expressions for the sums in terms of $\vartheta_3$. We want to find a, b, c so that the integrals and sums all match. Unless my matrix of coefficients has a very unlikely degeneracy, a solution will be available. Excel claims $a=-3.782590725, b=4.503400057, c=-1.83137936$ is very near a solution, and there should be more.
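A quick numerical check of the claimed coefficients (using the two-sided sums that the $\vartheta_3$ values above refer to) can be run as follows; the four printed values should come out approximately equal if the spreadsheet solution really is close.

```python
import numpy as np
from scipy.integrate import quad

a, b, c = -3.782590725, 4.503400057, -1.83137936   # coefficients claimed above

def f(x):
    return (np.exp(-x ** 2) + a * np.exp(-2 * x ** 2)
            + b * np.exp(-3 * x ** 2) + c * np.exp(-4 * x ** 2))

integral_f, _ = quad(f, -np.inf, np.inf)
integral_f2, _ = quad(lambda x: f(x) ** 2, -np.inf, np.inf)

n = np.arange(-50, 51)        # the Gaussian terms are negligible beyond |n| ~ 10
sum_f, sum_f2 = np.sum(f(n)), np.sum(f(n) ** 2)

print(integral_f, integral_f2, sum_f, sum_f2)
```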
0
2019-12-03 04:10:05
Source
|
|
# Wake Downstream of a Flat Plate
As we saw in the previous section, if a flat plate of negligible thickness, and finite length, is placed in the path of a uniform high Reynolds number flow, directed parallel to the plate, then thin boundary layers form above and below the plate. Outside the layers, the flow is irrotational, and essentially inviscid. Inside the layers, the flow is modified by viscosity, and has non-zero vorticity. Downstream of the plate, the boundary layers are convected by the flow, and merge to form a thin wake. (See Figure 8.4.) Within the wake, the flow is modified by viscosity, and possesses finite vorticity. Outside the wake, the downstream flow remains irrotational, and effectively inviscid.
Because there is no solid surface embedded in the wake, acting to retard the flow, we would expect the action of viscosity to cause the velocity within the wake, a long distance downstream of the plate, to closely match that of the unperturbed flow. In other words, we expect the fluid velocity within the wake to take the form
(8.83) (8.84)
where
(8.85)
Assuming that, within the wake,
(8.86) (8.87)
where is the wake thickness, fluid continuity requires that
(8.88)
The flow external to the boundary layers, and the wake, is both uniform and essentially inviscid. Hence, according to Bernoulli's theorem, the pressure in this region is also uniform. [See Equation (8.22).] However, as we saw in Section 8.3, there is no $y$-variation of the pressure across the boundary layers. It follows that the pressure is uniform within the layers. Thus, it is reasonable to assume that the pressure is also uniform within the wake, because the wake is formed via the convection of the boundary layers downstream of the plate. We conclude that
(8.89)
everywhere in the fluid, where is a constant.
The $x$-component of the fluid equation of motion is written
(8.90)
Making use of Equations (8.83)-(8.89), the previous expression reduces to
(8.91)
The boundary condition
(8.92)
ensures that the flow outside the wake remains unperturbed. Note that Equation (8.91) has the same mathematical form as a conventional diffusion equation, with playing the role of time, and playing the role of the diffusion coefficient. Hence, by analogy with the standard solution of the diffusion equation, we would expect (Riley 1974).
As can easily be demonstrated, the self-similar solution to Equation (8.91), subject to the boundary condition (8.92), is
(8.93)
where
(8.94)
and is a constant. It follows that
(8.95)
because, as is well-known, (Riley 1974). As expected, the width of the wake scales as $x^{1/2}$.
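For reference, the far-wake relations being described take roughly the following standard textbook form for a small velocity deficit $u_1$ beneath a free stream of speed $U$ (the precise notation of Equations (8.91)-(8.95) may differ):
$$U\,\frac{\partial u_1}{\partial x}\simeq \nu\,\frac{\partial^2 u_1}{\partial y^2},\qquad u_1(x,y)\propto \frac{1}{\sqrt{x}}\,\exp\!\left(-\frac{U\,y^{2}}{4\nu x}\right),\qquad \delta(x)\sim\left(\frac{\nu x}{U}\right)^{1/2}.$$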
The tangential velocity profile across the wake, which takes the form
(8.96)
is plotted in Figure 8.8. In addition, the vorticity profile across the wake, which is written
(8.97)
is shown in Figure 8.9. It can be seen that the profiles pictured in Figures 8.8 and 8.9 are essentially smoothed out versions of the boundary layer profiles shown in Figures 8.5 and 8.6, respectively.
Suppose that the plate and a portion of its trailing wake are enclosed by a cuboid control volume of unit depth (in the -direction) that extends from to and from to . (See Figure 8.10.) Here, and , where is the length of the plate, and the width of the wake. Hence, the control volume extends well upstream and downstream of the plate. Moreover, the volume is much wider than the wake.
Let us apply the integral form of the fluid equation of continuity to the control volume. For a steady state, this reduces to (see Section 1.9)
(8.98)
where is the bounding surface of the control volume. The normal fluid velocity is at , at , and at , as indicated in the figure. Hence, Equation (8.98) yields
(8.99)
or
(8.100)
However, given that for , and because , it is a good approximation to replace the limits of integration on the left-hand side of the previous expression by . Thus, from Equation (8.95),
(8.101)
where is independent of . Note that the slight retardation of the flow inside the wake, due to the presence of the plate, which is parameterized by , necessitates a small lateral outflow, , in the region of the fluid external to the wake.
Let us now apply the integral form of the -component of the fluid equation of motion to the control volume. For a steady state, this reduces to (see Section 1.11)
(8.102)
where is the net -directed force exerted on the fluid within the control volume by the plate. It follows, from Newton's third law of motion, that , where is the viscous drag force per unit width (in the -direction) acting on the plate in the -direction. In an incompressible fluid (see Section 1.6),
(8.103)
Hence, we obtain
(8.104)
because the pressure within the fluid is essentially uniform, the tangential fluid velocity at is , and is assumed to be negligible at . Making use of Equation (8.101), as well as the fact that is independent of , we get
(8.105)
Here, we have neglected any terms that are second order in the small quantity . A comparison with Equation (8.79) reveals that
(8.106)
or
(8.107)
Hence, from Equations (8.96) and (8.97), the velocity and vorticity profiles across the layer are
(8.108)
and
(8.109)
respectively, where . Finally, because the previous analysis is premised on the assumption that , it is clear that the previous three expressions are only valid when (i.e., well downstream of the plate).
The previous analysis only holds when the flow within the wake is non-turbulent. Let us assume, by analogy with the discussion in the previous section, that this is the case as long as the Reynolds number of the wake, , remains less than some critical value that is approximately . Because the Reynolds number of the wake can be written , where is the Reynolds number of the external flow, we deduce that the wake becomes turbulent when . Hence, the wake is always turbulent sufficiently far downstream of the plate. Our analysis, which effectively assumes that the wake is non-turbulent in some region, immediately downstream of the plate, whose extent (in ) is large compared with , is thus only valid when .
Richard Fitzpatrick 2016-03-31
|
|
# 21.1 Planck and Quantum Nature of Light
### Section Learning Objectives
By the end of this section, you will be able to do the following:
• Define quantum states and their relationship to modern physics
• Calculate the quantum energy of light
• Explain how photon energies vary across divisions of the electromagnetic spectrum
### Teacher Support
#### Teacher Support
• (3) Scientific processes. The student uses critical thinking, scientific reasoning, and problem solving to make informed decisions within and outside the classroom. The student is expected to:
• (D): explain the impacts of the scientific contributions of a variety of historical and contemporary scientists on scientific thought and society.
• (8) Science concepts. The student knows simple examples of atomic, nuclear, and quantum phenomena. The student is expected to:
• (B): compare and explain the emission spectra produced by various atoms; and
• (D): give examples of applications of atomic and nuclear phenomena such as radiation therapy, diagnostic imaging, and nuclear power, and examples of quantum phenomena such as digital cameras.
### Section Key Terms
blackbody, quantized, quantum, ultraviolet catastrophe
### Teacher Support
#### Teacher Support
• Prior to beginning this section, it would be a good idea to review wave concepts including frequency, wavelength, and amplitude. Have students write down a list of equations or statements that relate to the three concepts.
• [BL][OL]Discuss what could be meant by the term blackbody. Why do some objects appear black? Furthermore, why do we see objects that are red as red? It is said that black is the absence of color, but what does that mean in terms of the light reflected into our eyes?
• [AL]Discuss what can happen to energy when it strikes a surface. Discuss how it can be reflected or transmitted. If a blackbody is perfectly black, what must be happening to all of the energy incident upon it?
• [EL]Reinforce that the term blackbody is nothing more than its name suggests—that is, a body that is perfectly black. Discuss what perfectly black means. Is a black piece of paper perfectly black?
Our first story of curious significance begins with a T-shirt. You are likely aware that wearing a tight black T-shirt outside on a hot day provides a significantly less comfortable experience than wearing a white shirt. Black shirts, as well as all other black objects, will absorb and re-emit a significantly greater amount of radiation from the sun. This shirt is a good approximation of what is called a blackbody.
### Teacher Support
#### Teacher Support
Occasionally, texts refer to blackbody and perfect blackbody as two different concepts. It is likely best to refer to anything that is not a perfect blackbody as an approximation of a blackbody in order to avoid confusion.
A perfect blackbody is one that absorbs and re-emits all radiated energy that is incident upon it. Imagine wearing a tight shirt that did this! This phenomenon is often modeled with quite a different scenario. Imagine carving a small hole in an oven that can be heated to very high temperatures. As the temperature of this container gets hotter and hotter, the radiation out of this dark hole would increase as well, re-emitting all energy provided it by the increased temperature. The hole may even begin to glow in different colors as the temperature is increased. Like a burner on your stove, the hole would glow red, then orange, then blue, as the temperature is increased. In time, the hole would continue to glow but the light would be invisible to our eyes. This container is a good model of a perfect blackbody.
It is the analysis of blackbodies that led to one of the most consequential discoveries of the twentieth century. Take a moment to carefully examine Figure 21.2. What relationships exist? What trends can you see? The more time you spend interpreting this figure, the closer you will be to understanding quantum physics!
Figure 21.2 Graphs of blackbody radiation (from an ideal radiator) at three different radiator temperatures. The intensity or rate of radiation emission increases dramatically with temperature, and the peak of the spectrum shifts toward the visible and ultraviolet parts of the spectrum. The shape of the spectrum cannot be described with classical physics.
### Teacher Support
#### Teacher Support
It is important for students to make sense of Figure 21.2 before progressing further. Have students independently create a list of observations from the graph. When presenting their observations, press the students on the specifics of their observations.
[BL]Discuss what variables are being graphed. Have them complete the statement: ________ is dependent upon ________. Discuss what is meant by intensity. What is the difference between being mad and intensely mad?
[OL]Discuss what the peak of each graph refers to. Ask if the radiation intensity depends upon the wavelength of the radiation. How do they know this? What do the peaks on each graph mean?
[AL]Discuss why there are three lines on the graph. Does it make sense that an increase in temperature would cause the line of the graph to be raised? Why does this make sense? A good challenging exercise would be to have the students re-graph the information in order to represent EM radiation intensity against frequency.
### Tips For Success
When encountering a new graph, it is best to try to interpret the graph before you read about it. Doing this will make the following text more meaningful and will help to remind yourself of some of the key concepts within the section.
### Understanding Blackbody Graphs
Figure 21.2 is a plot of radiation intensity against radiated wavelength. In other words, it shows how the intensity of radiated light changes when a blackbody is heated to a particular temperature.
It may help to just follow the bottom-most red line labeled 3,000 K, red hot. The graph shows that when a blackbody acquires a temperature of 3,000 K, it radiates energy across the electromagnetic spectrum. However, the energy is most intensely emitted at a wavelength of approximately 1000 nm. This is in the infrared portion of the electromagnetic spectrum. While a body at this temperature would appear red-hot to our eyes, it would truly appear ‘infrared-hot’ if we were able to see the entire spectrum.
A few other important notes regarding Figure 21.2:
• As temperature increases, the total amount of energy radiated increases. This is shown by examining the area underneath each line.
• Regardless of temperature, all red lines on the graph undergo a consistent pattern. While electromagnetic radiation is emitted throughout the spectrum, the intensity of this radiation peaks at one particular wavelength.
• As the temperature changes, the wavelength of greatest radiation intensity changes. At 4,000 K, the radiation is most intense in the yellow-green portion of the spectrum. At 6,000 K, the blackbody would radiate white hot, due to intense radiation throughout the visible portion of the electromagnetic spectrum. Remember that white light is the emission of all visible colors simultaneously.
• As the temperature increases, the frequency of light providing the greatest intensity increases as well. Recall the equation $v=f\lambda$. Because the speed of light is constant, frequency and wavelength are inversely related. This is verified by the leftward movement of the three red lines as temperature is increased.
### Teacher Support
#### Teacher Support
Discuss the bullet points above. Why does an increase in temperature result in an increase in the total amount of energy radiated? Do you have personal experience with the relationship described in bullet point #3? Students may not have answers as to the causal factors for some of the observations in the above bullet points. Remind them that this is okay as these why questions were the big questions being asked by physicists at the turn of the twentieth century!
[BL][OL]Do you have personal evidence to show that as temperature increases the energy radiated increases as well?
[AL]Remind students that temperature is just a measure of the average kinetic energy of particles in a gas. Does this definition support bullet point #1?
While in science it is important to categorize observations, theorizing as to why the observations exist is crucial to scientific advancement. Why doesn’t a blackbody emit radiation evenly across all wavelengths? Why does the temperature of the body change the peak wavelength that is radiated? Why does an increase in temperature cause the peak wavelength emitted to decrease? It is questions like these that drove significant research at the turn of the twentieth century. And within the context of these questions, Max Planck discovered something of tremendous importance.
### Teacher Support
#### Teacher Support
Planck’s revolution is very much the story of the scientific method—reconciling disconnects between theory and experimental results. Encourage the students to think of other events—either historical or within their own lives—in which a predominant theory was shown to be incorrect when confronted with overwhelming evidence to the contrary. Possible examples include the geocentric model, the ether, or the four elements.
The prevailing theory at the time of Max Planck’s discovery was that intensity and frequency were related by the equation $I=\frac{2kT}{\lambda^{2}}$. This equation, derived from classical physics and using wave phenomena, implies that as wavelength increases, the intensity of energy provided will decrease with an inverse-squared relationship. This relationship is graphed in Figure 21.3 and shows a troubling trend. For starters, it should be apparent that the graph from this equation does not match the blackbody graphs found experimentally. Additionally, it shows that for an object of any temperature, there should be an infinite amount of energy quickly emitted in the shortest wavelengths. When theory and experimental results clash, it is important to re-evaluate both models. The disconnect between theory and reality was termed the ultraviolet catastrophe.
Figure 21.3 The graph above shows the true spectral measurements by a blackbody against those predicted by the classical theory at the time. The discord between the predicted classical theory line and the actual results is known as the ultraviolet catastrophe.
Due to concerns over the ultraviolet catastrophe, Max Planck began to question whether another factor impacted the relationship between intensity and wavelength. This factor, he posited, should affect the probability that short wavelength light would be emitted. Should this factor reduce the probability of short wavelength light, it would cause the radiance curve to not progress infinitely as in the classical theory, but would instead cause the curve to precipitate back downward as is shown in the 5,000 K, 4,000 K, and 3,000 K temperature lines of the graph in Figure 21.3. Planck noted that this factor, whatever it may be, must also be dependent on temperature, as the intensity decreases at lower and lower wavelengths as the temperature increases.
The determination of this probability factor was a groundbreaking discovery in physics, yielding insight not just into light but also into energy and matter itself. It would be the basis for Planck’s 1918 Nobel Prize in Physics and would result in the transition of physics from classical to modern understanding. In an attempt to determine the cause of the probability factor, Max Planck constructed a new theory. This theory, which created the branch of physics called quantum mechanics, speculated that the energy radiated by the blackbody could exist only in specific numerical, or quantum, states. This theory is described by the equation $E=nhf,$ where n is any nonnegative integer (0, 1, 2, 3, …) and h is Planck’s constant, given by $h=6.626\times 10^{-34}\ \mathrm{J\cdot s},$ and f is frequency.
Through this equation, Planck’s probability factor can be more clearly understood. Each frequency of light provides a specific quantized amount of energy. Low frequency light, associated with longer wavelengths would provide a smaller amount of energy, while high frequency light, associated with shorter wavelengths, would provide a larger amount of energy. For specified temperatures with specific total energies, it makes sense that more low frequency light would be radiated than high frequency light. To a degree, the relationship is like pouring coins through a funnel. More of the smaller pennies would be able to pass through the funnel than the larger quarters. In other words, because the value of the coin is somewhat related to the size of the coin, the probability of a quarter passing through the funnel is reduced!
Furthermore, an increase in temperature would signify the presence of higher energy. As a result, the greater amount of total blackbody energy would allow for more of the high frequency, short wavelength, energies to be radiated. This permits the peak of the blackbody curve to drift leftward as the temperature increases, as it does from the 3,000 K to 4,000 K to 5,000 K values. Furthering our coin analogy, consider a wider funnel. This funnel would permit more quarters to pass through and allow for a reduction in concern about the probability factor.
In summary, it is the interplay between the predicted classical model and the quantum probability that creates the curve depicted in Figure 21.3. Just as quarters have a higher currency denomination than pennies, higher frequencies come with larger amounts of energy. However, just as the probability of a quarter passing through a fixed diameter funnel is reduced, so is the probability of a high frequency light existing in a fixed temperature object. As is often the case in physics, it is the balancing of multiple incredible ideas that finally allows for better understanding.
### Teacher Support
#### Teacher Support
[EL]Quantum is related to the word quantity, a measure of the amount of something. Discuss why the term quantum would be useful in this context.
[BL, OL, AL]Quantum vs. continuous states is well described when considering clocks. A digital clock represents quantum states—it reads 11:14 a.m., then 11:15 a.m. An analog clock with a continually gliding second hand is a good representation of continuous states—it does not appear to pause at any one instant. What would you consider an analog clock that ticks each second? What would you consider a grandfather clock?
It may be helpful at this point to further consider the idea of quantum states. Atoms, molecules, and fundamental electron and proton charges are all examples of physical entities that are quantized—that is, they appear only in certain discrete values and do not have every conceivable value. On the macroscopic scale, this is not a revolutionary concept. A standing wave on a string allows only particular harmonics described by integers. Going up and down a hill using discrete stair steps causes your potential energy to take on discrete values as you move from step to step. Furthermore, we cannot have a fraction of an atom, or part of an electron’s charge, or 14.33 cents. Rather, everything is built of integral multiples of these substructures.
That said, to discover quantum states within a phenomenon that science had always considered continuous would certainly be surprising. When Max Planck was able to use quantization to correctly describe the experimentally known shape of the blackbody spectrum, it was the first indication that energy was quantized on a small scale as well. This discovery earned Planck the Nobel Prize in Physics in 1918 and was such a revolutionary departure from classical physics that Planck himself was reluctant to accept his own idea. The general acceptance of Planck’s energy quantization was greatly enhanced by Einstein’s explanation of the photoelectric effect (discussed in the next section), which took energy quantization a step further.
Figure 21.4 The German physicist Max Planck had a major influence on the early development of quantum mechanics, being the first to recognize that energy is sometimes quantized. Planck also made important contributions to special relativity and classical physics. (credit: Library of Congress, Prints and Photographs Division, Wikimedia Commons)
### Worked Example
#### How Many Photons per Second Does a Typical Light Bulb Produce?
Assuming that 10 percent of a 100-W light bulb’s energy output is in the visible range (typical for incandescent bulbs) with an average wavelength of 580 nm, calculate the number of visible photons emitted per second.
### Strategy
The number of visible photons per second is directly related to the amount of energy emitted each second, also known as the bulb’s power. By determining the bulb’s power, the energy emitted each second can be found. Since the power is given in watts, which is joules per second, the energy will be in joules. By comparing this to the amount of energy associated with each photon, the number of photons emitted each second can be determined.
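Following this strategy, with 10 W of visible output at an average wavelength of 580 nm (the figures stated above), the calculation runs roughly as follows:
$$E_{\text{photon}}=\frac{hc}{\lambda}=\frac{(6.626\times10^{-34}\ \mathrm{J\cdot s})(3.00\times10^{8}\ \mathrm{m/s})}{580\times10^{-9}\ \mathrm{m}}\approx 3.4\times10^{-19}\ \mathrm{J}$$
$$N=\frac{P_{\text{visible}}}{E_{\text{photon}}}\approx\frac{10\ \mathrm{J/s}}{3.4\times10^{-19}\ \mathrm{J}}\approx 2.9\times10^{19}\ \text{photons per second}$$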
Discussion
This incredible number of photons per second is verification that individual photons are insignificant in ordinary human experience. However, it is also a verification of our everyday experience—on the macroscopic scale, photons are so small that quantization becomes essentially continuous.
### Worked Example
#### How does Photon Energy Change with Various Portions of the EM Spectrum?
Refer to the Graphs of Blackbody Radiation shown in the first figure in this section. Compare the energy necessary to radiate one photon of infrared light and one photon of visible light.
### Strategy
To determine the energy radiated, it is necessary to use the equation $E=nhf.$ It is also necessary to find a representative frequency for infrared light and visible light.
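Following this strategy, and taking roughly 1,000 nm as a representative infrared wavelength (near the peak of the 3,000 K curve discussed earlier) and roughly 500 nm for visible light, the single-photon ($n=1$) energies are approximately
$$E_{\text{IR}}=\frac{hc}{\lambda_{\text{IR}}}\approx 2.0\times10^{-19}\ \mathrm{J},\qquad E_{\text{visible}}=\frac{hc}{\lambda_{\text{visible}}}\approx 4.0\times10^{-19}\ \mathrm{J},$$
so at these representative wavelengths a visible photon carries roughly twice the energy of an infrared photon; the exact ratio depends on the wavelengths chosen.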
Discussion
This example verifies that as the wavelength of light decreases, the quantum energy increases. This explains why a fire burning with a blue flame is considered more dangerous than a fire with a red flame. Each photon of short-wavelength blue light emitted carries a greater amount of energy than a long-wavelength red light. This example also helps explain the differences in the 3,000 K, 4,000 K, and 6,000 K lines shown in the first figure in this section. As the temperature is increased, more energy is available for a greater number of short-wavelength photons to be emitted.
### Practice Problems
1.
An AM radio station broadcasts at a frequency of 1,530 kHz. What is the energy in joules of a photon emitted from this station?
1. 10.1 × 10⁻²⁶ J
2. 1.01 × 10⁻²⁸ J
3. 1.01 × 10⁻²⁹ J
4. 1.01 × 10⁻²⁷ J
2.
A photon travels with energy of 1.0 eV. What type of EM radiation is this photon?
3.
Do reflective or absorptive surfaces more closely model a perfect blackbody?
1. reflective surfaces
2. absorptive surfaces
4.
A black T-shirt is a good model of a blackbody. However, it is not perfect. What prevents a black T-shirt from being considered a perfect blackbody?
1. The T-shirt reflects some light.
2. The T-shirt absorbs all incident light.
3. The T-shirt re-emits all the incident light.
4. The T-shirt does not reflect light.
5.
What is the mathematical relationship linking the energy of a photon to its frequency?
1. E = h(ω)
2. E = h
3. E = h/f
4. E = hf
6.
Why do we not notice quantization of photons in everyday experience?
1. because the size of each photon is very large
2. because the mass of each photon is so small
3. because the energy provided by photons is very large
4. because the energy provided by photons is very small
7.
Two flames are observed on a stove. One is red while the other is blue. Which flame is hotter?
1. The red flame is hotter because red light has lower frequency.
2. The red flame is hotter because red light has higher frequency.
3. The blue flame is hotter because blue light has lower frequency.
4. The blue flame is hotter because blue light has higher frequency.
8.
Your pupils dilate when visible light intensity is reduced. Does wearing sunglasses that lack UV blockers increase or decrease the UV hazard to your eyes? Explain.
1. Increase, because more high-energy UV photons can enter the eye.
2. Increase, because less high-energy UV photons can enter the eye.
3. Decrease, because more high-energy UV photons can enter the eye.
4. Decrease, because less high-energy UV photons can enter the eye.
9.
The temperature of a blackbody radiator is increased. What will happen to the most intense wavelength of light emitted as this increase occurs?
1. The wavelength of the most intense radiation will vary randomly.
2. The wavelength of the most intense radiation will increase.
3. The wavelength of the most intense radiation will remain unchanged.
4. The wavelength of the most intense radiation will decrease.
10-31.
A typical human pulse is $72$ beats per minute. What is this pulse rate in beats per year?
Review the Math Notes box in section 2.3.1
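One way to carry out the conversion (a quick sketch, using a 365-day year):

$$72\ \tfrac{\text{beats}}{\text{min}} \times 60\ \tfrac{\text{min}}{\text{h}} \times 24\ \tfrac{\text{h}}{\text{day}} \times 365\ \tfrac{\text{days}}{\text{yr}} = 37{,}843{,}200 \approx 3.8 \times 10^{7}\ \tfrac{\text{beats}}{\text{yr}}.$$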
## Setting up Eclipse for Java API of IBM ILOG CPLEX
After installing IBM ILOG CPLEX Optimization Studio 12.6, I tried the instructions described in the IBM Knowledge Center for setting up the Java APIs for CPLEX and CP in Eclipse, but they did not work correctly. Here is what I did to build a Java project with the CPLEX and CP Java APIs in Eclipse. These steps were done in Eclipse on Windows 7; I am not sure whether they work for other Java IDEs or other operating systems. It is also assumed that IBM ILOG CPLEX Optimization Studio has already been installed. Read More...
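Once the project compiles, a tiny model makes a convenient smoke test that `cplex.jar` is on the build path and the native libraries are found (typically via `-Djava.library.path`). The sketch below uses the standard Concert/CPLEX Java classes (`ilog.concert`, `ilog.cplex.IloCplex`); the model itself is arbitrary and only serves to confirm the setup, and minor details may differ across CPLEX versions.

```java
import ilog.concert.IloException;
import ilog.concert.IloNumVar;
import ilog.cplex.IloCplex;

// Minimal, illustrative smoke test: maximize x + 2y subject to x + y <= 10.
// Any small model works; the point is only to verify the Eclipse/CPLEX setup.
public class CplexSmokeTest {
    public static void main(String[] args) {
        try {
            IloCplex cplex = new IloCplex();
            IloNumVar x = cplex.numVar(0, 10, "x");
            IloNumVar y = cplex.numVar(0, 10, "y");
            cplex.addLe(cplex.sum(x, y), 10);
            cplex.addMaximize(cplex.sum(x, cplex.prod(2.0, y)));
            if (cplex.solve()) {
                System.out.println("objective = " + cplex.getObjValue());
            }
            cplex.end(); // release the native resources held by the solver
        } catch (IloException e) {
            e.printStackTrace();
        }
    }
}
```

If this prints an objective value, the classpath and library path are set correctly.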
## Join Multiple Excel Workbooks through Custom SQL Query in Tableau
Recently I came across the need to join multiple Excel files in Tableau. I did quite a bit of research on how to do it, but most of the instructions I found were about joining two tabs (worksheets) in the same Excel workbook. In this post, I will describe how to join multiple worksheets from different workbooks, which took me about an hour to figure out. Please note the instructions here work for Tableau Desktop 9.0 or later. Read More...
## Install and Run Jupyter (IPython) Notebook on Windows
To install Jupyter Notebook, there are a couple of ways. One option is using a package management platform, like Anaconda. If you don't want to use one of those, you can also install directly through Python; you will need Python installed on your system. I assume that, like me, you already installed the newest Python package on your Windows system and now you want to install and use the Jupyter Notebook. In this post, I describe some steps you can follow to install Jupyter directly from Python. Read More...
## A Generic Comparator Class for Java Collections.sort()
### Sort A List of Objects
As a Java programmer, I often need to sort a list of objects. If the objects are of a built-in comparable type, such as String, Integer, Long, Float, Double, or Date, it is easy to use Collections.sort(). However, it is not easy to sort a list of user-defined objects that do not implement the Comparable interface, for example a Person object as defined below. If you want to sort a list of Person objects by id in ascending order, you have to provide a Comparator class to encapsulate the ordering; one possible comparator class is defined below. Instead, a generic comparator class is defined in this post to sort lists of built-in as well as user-defined objects, in any specified order and by any specified field(s). Read More...
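As a rough sketch of the idea (an illustrative version written for this summary, not necessarily the exact class from the post), one can compare two objects on a named field via reflection and flip the sign for descending order:

```java
import java.lang.reflect.Field;
import java.util.Comparator;

// Illustrative generic comparator: orders objects by a named Comparable field.
// Reflection is one simple way to support "any specified field"; the original
// post's class may differ in details.
public class GenericComparator<T> implements Comparator<T> {
    private final String fieldName;
    private final boolean ascending;

    public GenericComparator(String fieldName, boolean ascending) {
        this.fieldName = fieldName;
        this.ascending = ascending;
    }

    @Override
    @SuppressWarnings("unchecked")
    public int compare(T left, T right) {
        try {
            Field f = left.getClass().getDeclaredField(fieldName);
            f.setAccessible(true);
            Comparable<Object> a = (Comparable<Object>) f.get(left);
            Object b = f.get(right);
            int cmp = a.compareTo(b);
            return ascending ? cmp : -cmp;
        } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException("Cannot compare on field: " + fieldName, e);
        }
    }
}
```

With a hypothetical `Person` class exposing an `id` field, sorting would look like `Collections.sort(people, new GenericComparator<Person>("id", true));`. On Java 8 and later, `Comparator.comparing(Person::getId)` covers the single-field case without reflection.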
## Benefits of Constraint Programming
Constraint Programming (CP) is a relatively new, but rapidly evolving, paradigm in Operations Research. It was derived from Computer Science: logic programming, graph theory, and artificial intelligence. Like Mathematical Programming (MP) approaches, such as Linear Programming, Integer Programming, or Nonlinear Programming, CP works with the same concepts of decision variables, constraints, and/or an objective function. Because of its flexible modeling language and powerful search strategies, CP is a powerful and easy-to-use optimization technology for highly combinatorial optimization problems, such as scheduling, timetabling, sequencing, and allocation or rostering problems. These problems can be difficult for traditional MP, due to: 1) constraints that are nonlinear in nature; 2) a non-convex solution space that contains many locally optimal solutions; 3) multiple disjunctions, which result in poor information returned by a linear relaxation of the problem. This post tries to summarize some major benefits of CP in contrast with MP models, from both modeling and solving standpoints. Read More...
# Von Neumann cardinal assignment
The von Neumann cardinal assignment is a cardinal assignment which uses ordinal numbers. For a well-ordered set U, we define its cardinal number to be the smallest ordinal number equinumerous to U. More precisely,
$$|U| = \mathrm{card}(U) = \inf \{ \alpha \in \mathrm{ON} \mid \alpha =_c U \}$$
That such an ordinal exists and is unique is guaranteed by the fact that U is well-orderable and that the class of ordinals is well-ordered. With the full Axiom of Choice, every set is well-orderable, so every set has a cardinal; we order the cardinals using the ordering inherited from the ordinal numbers. This is readily found to coincide with the ordering via $\leq_c$, and it is a well-ordering of the cardinal numbers.
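For example (standard facts, added here only as illustration): a finite well-ordered set with $n$ elements has cardinal $n$, since $n$ is the least ordinal equinumerous to it, while any countably infinite well-orderable set has cardinal $\omega = \aleph_0$, even though it is also equinumerous to larger ordinals such as $\omega + 1$.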
# Math Help - how to integrate this weird function
1. ## how to integrate this weird function
integrate x/(1-x^2 + squareroot(1-x^2)).
2. Originally Posted by twilightstr
integrate x/(1-x^2 + squareroot(1-x^2)).
I suppose you mean $\int \frac x{1 - x^2 + \sqrt{1 - x^2}}~dx$ ?
A substitution of $u^2 = 1 - x^2$ yields the integral $- \int \frac 1{u + 1}~du$ (I leave it to you to verify this).
From there, you should be able to handle it.
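For completeness, a sketch of the finish (with $u = \sqrt{1 - x^2} \ge 0$):

$$- \int \frac{du}{u + 1} = -\ln|u + 1| + C = -\ln\!\left(1 + \sqrt{1 - x^2}\right) + C,$$

which can be checked by differentiating.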
# Mathematics
## Exercise 1.1 (Math 10)
Question 1. Write the following quadratic equations in the standard form and point out the pure quadratic equations. $(i)\;\; (x + 7)(x - 3) = -7$. Solution: The given quadratic equation is $$(x + 7)(x - 3) = -7 \;\;\;\;\; (i)$$ Multiplying the expressions $(x + 7)$ and $(x - 3)$ ...
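Carrying the excerpt's first example through (routine algebra, shown here as a sketch):

$$(x + 7)(x - 3) = x^2 + 4x - 21 = -7 \;\Longrightarrow\; x^2 + 4x - 14 = 0,$$

which is the standard form $ax^2 + bx + c = 0$ with $a = 1$, $b = 4$, $c = -14$; since the linear term is nonzero, it is not a pure quadratic equation.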
## Solution of Quadratic Equation by factorization
In this method, write the quadratic equation in the standard form as $ax^2 + bx + c = 0 \;\; (i)$. If two numbers $r$ and $s$ can be found for equation (i) such that $r + s = b$ and $rs = ac$, then $ax^2 + bx + c$ can be factorized into two linear factors. Example: Solve the ...
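As an illustration of the method on a simple equation (an example chosen here, not the one from the truncated excerpt): for $x^2 + 5x + 6 = 0$ we need $r + s = 5$ and $rs = (1)(6) = 6$, so $r = 2$ and $s = 3$, giving

$$x^2 + 2x + 3x + 6 = x(x + 2) + 3(x + 2) = (x + 2)(x + 3) = 0,$$

hence $x = -2$ or $x = -3$.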
## Matrices and Determinants
Matrices and determinants are used in Mathematics, Physics, Statistics, Electronics, and other branches of science. Matrices have played a very important role in this age of computer science. The idea of matrices was introduced by Arthur Cayley, an English mathematician of the nineteenth century, who first developed the "Theory of Matrices" in 1858. Matrix: a rectangular array ...
## Solution of Quadratic Equation by Completing Square
The method of solving a quadratic equation by completing the square is illustrated through the following example. Example: Solve the equation $x^2 - 3x - 4 = 0$ by completing the square. Solution: $x^2 - 3x - 4 = 0 \;\; (i)$. Shifting the constant term $-4$ to the right, we have $x^2 - 3x = 4 \;\; (ii)$. Adding the square of ...
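The remaining steps, sketched out for this equation: adding the square of half the coefficient of $x$, namely $\left(\tfrac{3}{2}\right)^2 = \tfrac{9}{4}$, to both sides gives

$$x^2 - 3x + \tfrac{9}{4} = 4 + \tfrac{9}{4} \;\Longrightarrow\; \left(x - \tfrac{3}{2}\right)^2 = \tfrac{25}{4} \;\Longrightarrow\; x - \tfrac{3}{2} = \pm \tfrac{5}{2},$$

so $x = 4$ or $x = -1$.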
To find the solution set of a quadratic equation, the following methods are used: factorization, completing the square, and use of the quadratic formula.
An equation of 2nd degree is called a quadratic equation. In more detail, a quadratic equation is an equation which contains the square of the unknown (variable) quantity, but no higher power. General or standard form of a quadratic equation: it is a 2nd degree equation in one variable $x$ of the form shown below, where $a \neq 0$ and $a$, $b$, ...
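Presumably the elided form is the usual one,

$$ax^2 + bx + c = 0, \qquad a \neq 0,$$

with $a$, $b$, $c$ constants.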
# Testing Shape Restrictions of Discrete Distributions
Clément L. Canonne Columbia University. Email: ccanonne@cs.columbia.edu. Research supported by NSF CCF-1115703 and NSF CCF-1319788. Ilias Diakonikolas University of Edinburgh. Email: ilias.d@ed.ac.uk. Research supported by EPSRC grant EP/L021749/1, a Marie Curie Career Integration Grant, and a SICSA grant. This work was performed in part while visiting CSAIL, MIT. Themis Gouleakis CSAIL, MIT. Email: tgoule@mit.edu. Ronitt Rubinfeld CSAIL, MIT and the Blavatnik School of Computer Science, Tel Aviv University. Email: ronitt@csail.mit.edu.
###### Abstract
We study the question of testing structured properties (classes) of discrete distributions. Specifically, given sample access to an arbitrary distribution $D$ over $[n]$ and a property $\mathcal{P}$, the goal is to distinguish between $D \in \mathcal{P}$ and the case that $D$ is far in $\ell_1$ distance from every distribution in $\mathcal{P}$. We develop a general algorithm for this question, which applies to a large range of "shape-constrained" properties, including monotone, log-concave, $t$-modal, piecewise-polynomial, and Poisson Binomial distributions. Moreover, for all cases considered, our algorithm has near-optimal sample complexity with regard to the domain size and is computationally efficient. For most of these classes, we provide the first non-trivial tester in the literature. In addition, we also describe a generic method to prove lower bounds for this problem, and use it to show our upper bounds are nearly tight. Finally, we extend some of our techniques to tolerant testing, deriving nearly-tight upper and lower bounds for the corresponding questions.
## 1 Introduction
Inferring information about the probability distribution that underlies a data sample is an essential question in Statistics, and one that has ramifications in every field of the natural sciences and quantitative research. In many situations, it is natural to assume that this data exhibits some simple structure because of known properties of the origin of the data, and in fact these assumptions are crucial in making the problem tractable. Such assumptions translate as constraints on the probability distribution – e.g., it is supposed to be Gaussian, or to meet a smoothness or “fat tail” condition (see e.g., [Man63, Hou86, TLSM95]).
As a result, the problem of deciding whether a distribution possesses such a structural property has been widely investigated both in theory and practice, in the context of shape restricted inference [BDBB72, SS01] and model selection [MP07]. Here, it is guaranteed or thought that the unknown distribution satisfies a shape constraint, such as having a monotone or log-concave probability density function [SN99, BB05, Wal09, Dia16]. From a different perspective, a recent line of work in Theoretical Computer Science, originating from the papers of Batu et al. [BFR00, BFF01, GR00] has also been tackling similar questions in the setting of property testing (see [Ron08, Ron10, Rub12, Can15] for surveys on this field). This very active area has seen a spate of results and breakthroughs over the past decade, culminating in very efficient (both sample and time-wise) algorithms for a wide range of distribution testing problems [BDKR05, GMV06, AAK07, DDS13, CDVV14, AD15, DKN15b]. In many cases, this led to a tight characterization of the number of samples required for these tasks as well as the development of new tools and techniques, drawing connections to learning and information theory [VV10, VV11a, VV14].
In this paper, we focus on the following general property testing problem: given a class (property) of distributions and sample access to an arbitrary distribution , one must distinguish between the case that (a) , versus (b) for all (i.e., is either in the class, or far from it). While many of the previous works have focused on the testing of specific properties of distributions or obtained algorithms and lower bounds on a case-by-case basis, an emerging trend in distribution testing is to design general frameworks that can be applied to several property testing problems [Val11, VV11a, DKN15b, DKN15a]. This direction, the testing analog of a similar movement in distribution learning [CDSS13, CDSS14b, CDSS14a, ADLS15], aims at abstracting the minimal assumptions that are shared by a large variety of problems, and giving algorithms that can be used for any of these problems. In this work, we make significant progress in this direction by providing a unified framework for the question of testing various properties of probability distributions. More specifically, we describe a generic technique to obtain upper bounds on the sample complexity of this question, which applies to a broad range of structured classes. Our technique yields sample near-optimal and computationally efficient testers for a wide range of distribution families. Conversely, we also develop a general approach to prove lower bounds on these sample complexities, and use it to derive tight or nearly tight bounds for many of these classes.
##### Related work.
Batu et al. [BKR04] initiated the study of efficient property testers for monotonicity and obtained (nearly) matching upper and lower bounds for this problem; while [AD15] later considered testing the class of Poisson Binomial Distributions, and settled the sample complexity of this problem (up to the precise dependence on ). Indyk, Levi, and Rubinfeld [ILR12], focusing on distributions that are piecewise constant on intervals (“-histograms”) described a -sample algorithm for testing membership to this class. Another body of work by [BDKR05][BKR04], and [DDS13] shows how assumptions on the shape of the distributions can lead to significantly more efficient algorithms. They describe such improvements in the case of identity and closeness testing as well as for entropy estimation, under monotonicity or -modality constraints. Specifically, Batu et al. show in [BKR04] how to obtain a -sample tester for closeness in this setting, in stark contrast to the general lower bound. Daskalakis et al. [DDS13] later gave and -sample testing algorithms for testing respectively identity and closeness of monotone distributions, and obtained similar results for -modal distributions. Finally, we briefly mention two related results, due respectively to [BDKR05] and [DDS12a]. The first one states that for the task of getting a multiplicative estimate of the entropy of a distribution, assuming monotonicity enables exponential savings in sample complexity – , instead of for the general case. The second describes how to test if an unknown -modal distribution is in fact monotone, using only samples. Note that the latter line of work differs from ours in that it presupposes the distributions satisfy some structural property, and uses this knowledge to test something else about the distribution; while we are given a priori arbitrary distributions, and must check whether the structural property holds. Except for the properties of monotonicity and being a PBD, nothing was previously known on testing the shape restricted properties that we study. Independently and concurrently to this work, Acharya, Daskalakis, and Kamath obtained a sample near-optimal efficient algorithm for testing log-concavity.111Following the communication of a preliminary version of this paper (February 2015), we were informed that [ADK15] subsequently obtained near-optimal testers for some of the classes we consider. To the best of our knowledge, their work builds on ideas from [AD15] and their techniques are orthogonal to ours.
Moreover, for the specific problems of identity and closeness testing (footnote: recall that the identity testing problem asks, given the explicit description of a distribution and sample access to an unknown distribution, to decide whether the latter is equal to the former or far from it; while in closeness testing both distributions to compare are unknown), recent results of [DKN15b, DKN15a] describe a general algorithm which applies to a large range of shape or structural constraints, and yields optimal identity testers for classes of distributions that satisfy them. We observe that while the question they answer can be cast as a specialized instance of membership testing, our results are incomparable to theirs, both because of the distinction above (testing with versus testing for structure) and as the structural assumptions they rely on are fundamentally different from ours.
### 1.1 Results and Techniques
Upper Bounds. A natural way to tackle our membership testing problem would be to first learn the unknown distribution as if it satisfied the property, before checking if the hypothesis obtained is indeed both close to the original distribution and to the property. Taking advantage of the purported structure, the first step could presumably be conducted with a small number of samples; things break down, however, in the second step. Indeed, most approximation results leading to the improved learning algorithms one would apply in the first stage only provide very weak guarantees, in the sense. For this reason, they lack the robustness that would be required for the second part, where it becomes necessary to perform tolerant testing between the hypothesis and – a task that would then entail a number of samples almost linear in the domain size. To overcome this difficulty, we need to move away from these global closeness results and instead work with stronger requirements, this time in norm.
At the core of our approach is an idea of Batu et al. [BKR04], which show that monotone distributions can be well-approximated (in a certain technical sense) by piecewise constant densities on a suitable interval partition of the domain; and leverage this fact to reduce monotonicity testing to uniformity testing on each interval of this partition. While the argument of [BKR04] is tailored specifically for the setting of monotonicity testing, we are able to abstract the key ingredients, and obtain a generic membership tester that applies to a wide range of distribution families. In more detail, we provide a testing algorithm which applies to any class of distributions which admits succinct approximate decompositions – that is, each distribution in the class can be well-approximated (in a strong sense) by piecewise constant densities on a small number of intervals (we hereafter refer to this approximation property, formally defined in Section 3, as (Succinctness); and extend the notation to apply to any class of distributions for which all satisfy (1.1)). Crucially, the algorithm does not care about how these decompositions can be obtained: for the purpose of testing these structural properties we only need to establish their existence. Specific examples are given in the corollaries below. Informally, our main algorithmic result, informally stated (see Theorem 3.1 for a detailed formal statement), is as follows:
###### Theorem 1.1 (Main Theorem).
There exists an algorithm TestSplittable which, given sampling access to an unknown distribution over and parameter , can distinguish with probability between (a) versus (b) , for any property that satisfies the above natural structural criterion (1.1). Moreover, for many such properties this algorithm is computationally efficient, and its sample complexity is optimal (up to logarithmic factors and the exact dependence on ).
We then instantiate this result to obtain “out-of-the-box” computationally efficient testers for several classes of distributions, by showing that they satisfy the premise of our theorem (the definition of these classes is given in Section 2.1):
###### Corollary \thecoro.
The algorithm TestSplittable can test the classes of monotone, unimodal, log-concave, concave, convex, and monotone hazard rate (MHR) distributions, with samples.
###### Corollary \thecoro.
The algorithm TestSplittable can test the class of -modal distributions, with samples.
###### Corollary \thecoro.
The algorithm TestSplittable can test the classes of -histograms and -piecewise degree- distributions, with and samples respectively.
###### Corollary \thecoro.
The algorithm TestSplittable can test the classes of Binomial and Poisson Binomial Distributions, with samples.
We remark that the aforementioned sample upper bounds are information-theoretically near-optimal in the domain size (up to logarithmic factors). See Table 1 and the following subsection for the corresponding lower bounds. We did not attempt to optimize the dependence on the parameter , though a more careful analysis can lead to such improvements.
We stress that prior to our work, no non-trivial testing bound was known for most of these classes – specifically, our nearly-tight bounds for -modal with , log-concave, concave, convex, MHR, and piecewise polynomial distributions are new. Moreover, although a few of our applications were known in the literature (the upper and lower bounds on testing monotonicity can be found in [BKR04], while the sample complexity of testing PBDs was recently given333For the sample complexity of testing monotonicity, [BKR04] originally states an upper bound, but the proof seems to only result in an bound. Regarding the class of PBDs, [AD15] obtain an sample complexity, to be compared with our upper bound; as well as an lower bound. in [AD15], and the task of testing -histograms is considered in [ILR12]), the crux here is that we are able to derive them in a unified way, by applying the same generic algorithm to all these different distribution families. We note that our upper bound for -histograms (Section 1.1) also improves on the previous -sample tester, as long as . In addition to its generality, our framework yields much cleaner and conceptually simpler proofs of the upper and lower bounds from [AD15].
##### Lower Bounds.
To complement our upper bounds, we give a generic framework for proving lower bounds against testing classes of distributions. In more detail, we describe how to reduce – under a mild assumption on the property – the problem of testing membership ("does the unknown distribution belong to the class?") to testing identity ("is the unknown distribution equal to a given one?"), for any explicit distribution in the class. While these two problems need not in general be related (footnote: as a simple example, consider the class of all distributions, for which testing membership is trivial), we show that our reduction-based approach applies to a large number of natural properties, and obtain lower bounds that nearly match our upper bounds for all of them. Moreover, this lets us derive a simple proof of the lower bound of [AD15] on testing the class of PBDs. The reader is referred to Theorem 6.1 for the formal statement of our reduction-based lower bound theorem. In this section, we state the concrete corollaries we obtain for specific structured distribution families:
###### Corollary \thecoro.
Testing log-concavity, convexity, concavity, MHR, unimodality, -modality, -histograms, and -piecewise degree- distributions each require samples (the last three for and , respectively), for any .
###### Corollary \thecoro.
Testing the classes of Binomial and Poisson Binomial Distributions each require samples, for any .
###### Corollary \thecoro.
There exist absolute constants and such that testing the class of -SIIRV distributions requires samples, for any and .
##### Tolerant Testing.
Using our techniques, we also establish nearly-tight upper and lower bounds on tolerant testing of shape restrictions (footnote: tolerant testing of a property is defined as follows: given $0 \le \varepsilon_1 < \varepsilon_2$, one must distinguish between (a) the unknown distribution being $\varepsilon_1$-close to the property and (b) its being $\varepsilon_2$-far from it. This turns out to be, in general, a much harder task than "regular" testing, where we take $\varepsilon_1 = 0$). Similarly, our upper and lower bounds are matching as a function of the domain size. More specifically, we give a simple generic upper bound approach (namely, a learning followed by tolerant testing algorithm). Our tolerant testing lower bounds follow the same reduction-based approach as in the non-tolerant case. In more detail, our results are as follows (see Section 6 and Section 7):
###### Corollary \thecoro.
Tolerant testing of log-concavity, convexity, concavity, MHR, unimodality, and -modality can be performed with samples, for (where is an absolute constant).
###### Corollary \thecoro.
Tolerant testing of the classes of Binomial and Poisson Binomial Distributions can be performed with samples, for (where is an absolute constant).
###### Corollary \thecoro.
Tolerant testing of log-concavity, convexity, concavity, MHR, unimodality, and -modality each require samples (the latter for ).
###### Corollary \thecoro.
Tolerant testing of the classes of Binomial and Poisson Binomial Distributions each require samples.
##### On the scope of our results.
We point out that our main theorem is likely to apply to many other classes of structured distributions, due to the mild structural assumptions it requires. However, we did not attempt here to be comprehensive; but rather to illustrate the generality of our approach. Moreover, for all properties considered in this paper the generic upper and lower bounds we derive through our methods turn out to be optimal up to at most polylogarithmic factors (with regard to the support size). The reader is referred to Table 1 for a summary of our results and related work.
### 1.2 Organization of the Paper
We start by giving the necessary background and definitions in Section 2, before turning to our main result, the proof of Theorem 1.1 (our general testing algorithm) in Section 3. In Section 4, we establish the necessary structural theorems for each class of distributions considered, enabling us to derive the upper bounds of Table 1. Section 5 introduces a slight modification of our algorithm which yields stronger testing results for classes of distributions with small effective support, and uses it to derive Section 1.1, our upper bound for Poisson Binomial distributions. Section 6 contains the details of our lower bound methodology, and of its applications to the classes of Table 1. Finally, Section 6.2 is concerned with the extension of this methodology to tolerant testing, of which Section 7 describes a generic upper bound counterpart.
## 2 Notation and Preliminaries
### 2.1 Definitions
We give here the formal descriptions of the classes of distributions involved in this work. Recall that a distribution over $[n]$ is monotone (non-increasing) if its probability mass function (pmf) satisfies $D(1) \ge D(2) \ge \cdots \ge D(n)$. A natural generalization of the class of monotone distributions is the set of $t$-modal distributions, i.e. distributions whose pmf can go "up and down" or "down and up" up to $t$ times (footnote: note that this slightly deviates from the Statistics literature, where only the peaks are counted as modes, so that what is usually referred to as a bimodal distribution has more than two modes according to our definition):
###### Definition \thedefn (t-modal).
Fix any distribution over , and integer . is said to have modes if there exists a sequence such that either for all , or for all . We call -modal if it has at most modes, and write for the class of all -modal distributions (omitting the dependence on ). The particular case of corresponds to the set of unimodal distributions.
###### Definition \thedefn (Log-Concave).
A distribution over is said to be log-concave if it satisfies the following conditions: (i) for any such that , ; and (ii) for all , . We write for the class of all log-concave distributions (omitting the dependence on ).
###### Definition \thedefn (Concave and Convex).
A distribution over is said to be concave if it satisfies the following conditions: (i) for any such that , ; and (ii) for all such that , ; it is convex if the reverse inequality holds in (ii). We write (resp. ) for the class of all concave (resp. convex) distributions (omitting the dependence on ).
It is not hard to see that convex and concave distributions are unimodal; moreover, every concave distribution is also log-concave, i.e. . Note that in both Section 2.1 and Section 2.1, condition (i) is equivalent to enforcing that the distribution be supported on an interval.
###### Definition \thedefn (Monotone Hazard Rate).
A distribution over is said to have monotone hazard rate (MHR) if its hazard rate is a non-decreasing function. We write for the class of all MHR distributions (omitting the dependence on ).
It is known that every log-concave distribution is both unimodal and MHR (see e.g. [An96, Proposition 10]), and that monotone distributions are MHR. Two other classes of distributions have elicited significant interest in the context of density estimation, that of histograms (piecewise constant) and piecewise polynomial densities:
###### Definition \thedefn (Piecewise Polynomials [CDSS14a]).
A distribution over is said to be a -piecewise degree- distribution if there is a partition of into disjoint intervals such that for all , where each is a univariate polynomial of degree at most . We write for the class of all -piecewise degree- distributions (omitting the dependence on ). (We note that -piecewise degree- distributions are also commonly referred to as -histograms, and write for .)
Finally, we recall the definition of the two following classes, which both extend the family of Binomial distributions : the first, by removing the need for each of the independent Bernoulli summands to share the same bias parameter.
###### Definition \thedefn.
A random variable is said to follow a Poisson Binomial Distribution (with parameter ) if it can be written as , where are independent, non-necessarily identically distributed Bernoulli random variables. We denote by the class of all such Poisson Binomial Distributions.
It is not hard to show that Poisson Binomial Distributions are in particular log-concave. One can generalize even further, by allowing each random variable of the summation to be integer-valued:
###### Definition \thedefn.
Fix any . We say a random variable is a -Sum of Independent Integer Random Variables (-SIIRV) with parameter if it can be written as , where are independent, non-necessarily identically distributed random variables taking value in . We denote by the class of all such -SIIRVs.
### 2.2 Tools from previous work
We first restate a result of Batu et al. relating closeness to uniformity in and norms to “overall flatness” of the probability mass function, and which will be one of the ingredients of the proof of Theorem 1.1:
###### Lemma \thelem ([Bfr+00, Bff+01]).
Let be a distribution on a domain . (a) If , then . (b) If , then .
To check condition (b) above we shall rely on the following, which one can derive from the techniques in [DKN15b] and whose proof we defer to Appendix A:
###### Lemma \thelem (Adapted from [DKN15b, Theorem 11]).
There exists an algorithm Check-Small- which, given parameters and independent samples from a distribution over (for some absolute constant ), outputs either yes or no, and satisfies the following.
• If , then the algorithm outputs no with probability at least ;
• If , then the algorithm outputs yes with probability at least .
Finally, we will also rely on a classical result from Probability, the Dvoretzky–Kiefer–Wolfowitz (DKW) inequality, restated below:
###### Theorem 2.1 ([Dkw56, Mas90]).
Let be a distribution over . Given independent samples from , define the empirical distribution as follows:
$$\hat{D}(i) \stackrel{\mathrm{def}}{=} \frac{\left|\{\, j \in [m] : x_j = i \,\}\right|}{m}, \qquad i \in [n].$$
Then, for all $\varepsilon > 0$, $\Pr\!\left[\, d_{\mathrm{K}}(D, \hat{D}) > \varepsilon \,\right] \le 2 e^{-2 m \varepsilon^2}$, where $d_{\mathrm{K}}$ denotes the Kolmogorov distance (i.e., the $\ell_\infty$ distance between cumulative distribution functions).
In particular, this implies that $O\!\left(1/\varepsilon^2\right)$ samples suffice to learn a distribution up to $\varepsilon$ in Kolmogorov distance.
## 3 The General Algorithm
In this section, we obtain our main result, restated below: See 1.1
##### Intuition.
Before diving into the proof of this theorem, we first provide a high-level description of the argument. The algorithm proceeds in 3 stages: the first, the decomposition step, attempts to recursively construct a partition of the domain in a small number of intervals, with a very strong guarantee. If the decomposition succeeds, then the unknown distribution will be close (in distance) to its “flattening” on the partition; while if it fails (too many intervals have to be created), this serves as evidence that does not belong to the class and we can reject. The second stage, the approximation step, then learns this flattening of the distribution – which can be done with few samples since by construction we do not have many intervals. The last stage is purely computational, the projection step: where we verify that the flattening we have learned is indeed close to the class . If all three stages succeed, then by the triangle inequality it must be the case that is close to ; and by the structural assumption on the class, if then it will admit succinct enough partitions, and all three stages will go through.
Turning to the proof, we start by defining formally the “structural criterion” we shall rely on, before describing the algorithm at the heart of our result in Section 3.1. (We note that a modification of this algorithm will be described in Section 5, and will allow us to derive Section 1.1.)
###### Definition \thedefn (Decompositions).
Let and . A class of distributions on is said to be -decomposable if for every there exists and a partition of the interval such that, for all , one of the following holds:
1. [(i)]
2. ; or
3. .
Further, if is dyadic (i.e., each is of the form for some integers , corresponding to the leaves of a recursive bisection of ), then is said to be -splittable.
###### Lemma \thelem.
If is -decomposable, then it is -splittable.
###### Proof.
We will begin by proving the claim that for every partition of the interval into intervals, there exists a refinement of that partition which consists of at most dyadic intervals. So, it suffices to prove that every interval can be partitioned into at most dyadic intervals. Indeed, let be the largest integer such that and let be the smallest integer such that . It follows that and . So, the interval is fully contained in and has size at least .
We will also use the fact that, for every ,
$$m \cdot 2^{\ell} = m \cdot 2^{\ell - \ell'} \cdot 2^{\ell'} = m' \cdot 2^{\ell'} \qquad (1)$$
Now consider the following procedure: Starting from right (resp. left) side of the interval , we add the largest interval which is adjacent to it and fully contained in and recurse until we cover the whole interval (resp. ). Clearly, at the end of this procedure, the whole interval is covered by dyadic intervals. It remains to show that the procedure takes steps. Indeed, using Equation 1, we can see that at least half of the remaining left or right interval is covered in each step (except maybe for the first 2 steps where it is at least a quarter). Thus, the procedure will take at most steps in total. From the above, we can see that each of the intervals of the partition can be covered with dyadic intervals, which completes the proof of the claim.
In order to complete the proof of the lemma, notice that the two conditions in Section 3 are closed under taking subsets. ∎
### 3.1 The algorithm
Theorem 1.1, and with it Section 1.1 and Section 1.1 will follow from the theorem below, combined with the structural theorems from Section 4:
###### Theorem 3.1.
Let be a class of distributions over for which the following holds.
1. is -splittable;
2. there exists a procedure which, given as input a parameter and the explicit description of a distribution over , returns yes if the distance to is at most , and no if (and either yes or no otherwise).
Then, the algorithm TestSplittable (Algorithm 1) is a -sample tester for , for . (Moreover, if is computationally efficient, then so is TestSplittable.)
### 3.2 Proof of Theorem 3.1
We now give the proof of our main result (Theorem 3.1), first analyzing the sample complexity of Algorithm 1 before arguing its correctness. For the latter, we will need the following simple lemma from [ILR12], restated below:
###### Fact 3.2 ([Ilr12, Fact 1]).
Let be a distribution over , and . Given independent samples from (for some absolute constant ), with probability at least we have that, for every interval :
1. [(i)]
2. if , then ;
3. if , then ;
4. if , then ;
where is the number of the samples falling into .
### 3.3 Sample complexity.
The sample complexity is immediate, and comes from Steps 6 and 22. The total number of samples is
### 3.4 Correctness.
Say an interval considered during the execution of the “Decomposition” step is heavy if is big enough on Step 9, and light otherwise; and let and denote the sets of heavy and light intervals respectively. By choice of and a union bound over all possible intervals, we can assume on one hand that with probability at least the guarantees of 3.2 hold simultaneously for all intervals considered. We hereafter condition on this event.
We first argue that if the algorithm does not reject in Step 15, then with probability at least we have . Indeed, we can write
$$\|D - \Phi(D, \mathcal{I})\|_1 = \sum_{k \colon I_k \in \mathcal{L}} D(I_k) \cdot \|D_{I_k} - U_{I_k}\|_1 + \sum_{k \colon I_k \in \mathcal{H}} D(I_k) \cdot \|D_{I_k} - U_{I_k}\|_1 \le 2 \sum_{k \colon I_k \in \mathcal{L}} D(I_k) + \sum_{k \colon I_k \in \mathcal{H}} D(I_k) \cdot \|D_{I_k} - U_{I_k}\|_1.$$
Let us bound the two terms separately.
• If , then by our choice of threshold we can apply Section 2.2 with ; conditioning on all of the (at most ) events happening, which overall fails with probability at most by a union bound, we get
$$\|D_{I'}\|_2^2 = \|D_{I'} - U_{I'}\|_2^2 + \frac{1}{|I'|} \le \left(1 + \frac{\varepsilon^2}{1600}\right) \frac{1}{|I'|}$$
as Check-Small- returned yes; and by Section 2.2 this implies .
• If , then we claim that . Clearly, this is true if , so it only remains to show that . But this follows from 3.2 1, as if we had then would have been big enough, and . Overall,
for a sufficiently big choice of constant in the definition of ; where we first used that , and then that by Jensen’s inequality.
Putting it together, this yields
$$\|D - \Phi(D, \mathcal{I})\|_1 \le 2 \cdot \frac{\varepsilon}{80} + \frac{\varepsilon}{40} \sum_{I' \in \mathcal{H}} D(I') \le \varepsilon/40 + \varepsilon/40 = \varepsilon/20.$$
Soundness.
By contrapositive, we argue that if the test returns ACCEPT, then (with probability at least ) is -close to . Indeed, conditioning on being -close to , we get by the triangle inequality that
$$\|D - \mathcal{C}\|_1 \le \|D - \Phi(D, \mathcal{I})\|_1 + \|\Phi(D, \mathcal{I}) - \tilde{D}\|_1 + \mathrm{dist}(\tilde{D}, \mathcal{C}) \le \frac{\varepsilon}{20} + \frac{\varepsilon}{20} + \frac{9\varepsilon}{10} = \varepsilon.$$
Overall, this happens except with probability at most .
Completeness.
Assume . Then the choice of of and ensures the existence of a good dyadic partition in the sense of Section 3. For any in this partition for which 1 holds (), will have and be kept as a “light leaf” (this by contrapositive of 3.2 2). For the other ones, 2 holds: let be one of these (at most ) intervals.
• If is too small on Step 9, then is kept as “light leaf.”
• Otherwise, then by our choice of constants we can use Section 2.2 and apply Section 2.2 with ; conditioning on all of the (at most ) events happening, which overall fails with probability at most by a union bound, Check-Small- will output yes, as
$$\|D_I - U_I\|_2^2 = \|D_I\|_2^2 - \frac{1}{|I|} \le \left(1 + \frac{\varepsilon^2}{6400}\right) \frac{1}{|I|} - \frac{1}{|I|} = \frac{\varepsilon^2}{6400\,|I|}$$
and is kept as “flat leaf.”
Therefore, as is dyadic the Decomposition stage is guaranteed to stop within at most splits (in the worst case, it goes on until is considered, at which point it succeeds).666In more detail, we want to argue that if is in the class, then a decomposition with at most pieces is found by the algorithm. Since there is a dyadic decomposition with at most pieces (namely, ), it suffices to argue that the algorithm will never split one of the ’s (as every single will eventually be considered by the recursive binary splitting, unless the algorithm stopped recursing in this “path” before even considering , which is even better). But this is the case by the above argument, which ensures each such will be recognized as satisfying one of the two conditions for “good decomposition” (being either close to uniform in , or having very little mass). Thus Step 15 passes, and the algorithm reaches the Approximation stage. By the foregoing discussion, this implies is -close to (and hence to ); is then (except with probability at most ) -close to , and the algorithm returns ACCEPT.
## 4 Structural Theorems
In this section, we show that a wide range of natural distribution families are succinctly decomposable, and provide efficient projection algorithms for each class.
### 4.1 Existence of Structural Decompositions
###### Theorem 4.1 (Monotonicity).
For all , the class of monotone distributions on is -splittable for .
Note that this proof can already be found in [BKR04, Theorem 10], interwoven with the analysis of their algorithm. For the sake of being self-contained, we reproduce the structural part of their argument, removing its algorithmic aspects:
###### Proof of Theorem 4.1.
We define the recursively as follows: , and for the partition is obtained from by going over the in order, and:
1. [(a)]
2. if , then is added as element of (“marked as leaf”);
3. else, if , then is added as element of (“marked as leaf”);
4. otherwise, bisect in , (with ) and add both and as elements of .
and repeat until convergence (that is, whenever the last item is not applied for any of the intervals). Clearly, this process is well-defined, and will eventually terminate (as is a non-decreasing sequence of natural numbers, upper bounded by ). Let (with ) be its outcome, so that the ’s are consecutive intervals all satisfying either 1 or 2. As 2 clearly implies 2, we only need to show that ; for this purpose, we shall leverage as in [BKR04] the fact that is monotone to bound the number of recursion steps.
The recursion above defines a complete binary tree (with the leaves being the intervals satisfying 1 or 2, and the internal nodes the other ones). Let be the number of recursion steps the process goes through before converging to (height of the tree); as mentioned above, we have (as we start with an interval of size , and the length is halved at each step.). Observe further that if at any point an interval has , then it immediately (as well as all the ’s for by monotonicity) satisfies 1 and is no longer split (“becomes a leaf”). So at any , the number of intervals for which neither 1 nor 2 holds must satisfy
$$1 \ge D(a^{(j)}_1) > (1+\gamma) D(a^{(j)}_2) > (1+\gamma)^2 D(a^{(j)}_3) > \cdots > (1+\gamma)^{i_j - 1} D(a^{(j)}_{i_j}) \ge (1+\gamma)^{i_j - 1} \frac{\gamma}{n L}$$
where denotes the beginning of the -th interval (again we use monotonicity to argue that the extrema were reached at the ends of each interval), so that . In particular, the total number of internal nodes is then
$$\sum_{j=1}^{t} i_j \le t \cdot \left(1 + \frac{\log \frac{n L}{\gamma}}{\log(1+\gamma)}\right) = (1 + o(1)) \frac{\log^2 n}{\log(1+\gamma)} \le L.$$
This implies the same bound on the number of leaves . ∎
###### Corollary \thecoro (Unimodality).
For all , the class of unimodal distributions on is -decomposable for .
###### Proof.
For any , can be partitioned in two intervals , such that , are either monotone non-increasing or non-decreasing. Applying Theorem 4.1 to and and taking the union of both partitions yields a (no longer necessarily dyadic) partition of . ∎
The same argument yields an analogue statement for -modal distributions:
###### Corollary \thecoro (t-modality).
For any and all , the class of -modal distributions on is -decomposable for .
###### Corollary \thecoro (Log-concavity, concavity and convexity).
For all , the classes , and of log-concave, concave and convex distributions on are -decomposable for .
###### Proof.
This is directly implied by Section 4.1, recalling that log-concave, concave and convex distributions are unimodal. ∎
###### Theorem 4.2 (Monotone Hazard Rate).
For all , the class of MHR distributions on is -decomposable for .
###### Proof.
This follows from adapting the proof of [CDSS13], which establishes that every MHR distribution can be approximated in distance by a -histogram. For completeness, we reproduce their argument, suitably modified to our purposes, in Appendix B. ∎
###### Theorem 4.3 (Piecewise Polynomials).
For all , , the class of -piecewise degree- distributions on is -decomposable for . (Moreover, for the class of -histograms () one can take .)
###### Proof.
The last part of the statement is obvious, so we focus on the first claim. Observing that each of the pieces of a distribution can be subdivided in at most intervals on which is monotone (being degree- polynomial on each such pieces), we obtain a partition of into at most intervals. being monotone on each of them, we can apply an argument almost identical to that of Theorem 4.1 to argue that each interval can be further split into subintervals, yielding a good decomposition with pieces. ∎
### 4.2 Projection Step: computing the distances
This section contains details of the distance estimation procedures for these classes, required in the last stage of Algorithm 1. (Note that some of these results are phrased in terms of distance approximation, as estimating the distance to sufficient accuracy in particular yields an algorithm for this stage.)
We focus in this section on achieving the sample complexities stated in Section 1.1, Section 1.1, and Section 1.1. While almost all the distance estimation procedures we give in this section are efficient, running in time polynomial in all the parameters or even with only a polylogarithmic dependence on , there are two exceptions – namely, the procedures for monotone hazard rate (Section 4.2) and log-concave (Section 4.2) distributions. We do describe computationally efficient procedures for these two cases as well in Section 4.2.1, at a modest additive cost in the sample complexity.
###### Lemma \thelem (Monotonicity [Bkr04, Lemma 8]).
There exists a procedure that, on input as well as the full (succinct) specification of a -histogram on , computes the (exact) distance in time .
A straightforward modification of the algorithm above (e.g., by adapting the underlying linear program to take as input the location of the mode of the distribution; then trying all possibilities, running the subroutine times and picking the minimum value) results in a similar claim for unimodal distributions:
###### Lemma \thelem (Unimodality).
There exists a procedure that, on input as well as the full (succinct) specification of a -histogram on , computes the (exact) distance in time .
A similar result can easily be obtained for the class of -modal distributions as well, with a -time algorithm based on a combination of dynamic and linear programming. Analogous statements hold for the classes of concave and convex distributions , also based on linear programming (specifically, on running different linear programs – one for each possible support – and taking the minimum over them).
###### Lemma \thelem (Mhr).
There exists a (non-efficient) procedure that, on input , , as well as the full specification of a distribution on , distinguishes between and in time .
###### Lemma \thelem (Log-concavity).
There exists a (non-efficient) procedure that, on input , , as well as the full specification of a distribution on , distinguishes between and in time .
###### Section 4.2 and Section 4.2.
We here give a naive algorithm for these two problems, based on an exhaustive search over a (huge) -cover of distributions over . Essentially, contains all possible distributions whose probabilities are of the form , for (so that ). It is not hard to see that this indeed defines an -cover of the set of all distributions, and moreover that it can be computed in time . To approximate the distance from an explicit distribution to the class (either or ), it is enough to go over every element of , checking (this time, efficiently) if and if there is a distribution close to (this time, pointwise, that is for all ) – which also implies and thus . The test for pointwise closeness can be done by checking feasibility of a linear program with variables corresponding to the logarithm of probabilities, i.e. . Indeed, this formulation allows to rephrase the log-concave and MHR constraints as linear constraints, and pointwise approximation is simply enforcing that for all . At the end of this enumeration, the procedure accepts if and only if for some both and the corresponding linear program was feasible. ∎
###### Lemma \thelem (Piecewise Polynomials).
There exists a procedure that, on input as well as the full specification of an -histogram on , computes an approximation of the distance
# Conveyor Systems For The Modern Factory
For your upscale baggage that you verify, such as your Garment Conveyor Manufacturers baggage, connect a couple of strips of colorful electrical tape to the baggage. Once you retrieve your baggage,check to see if the tape has been broken. In this way you will know that your suitcase has been tampered with.
Many of us keep in mind, with nostalgia, hanging our garments on wire clotheslines, plopping a clothespin bag on the galvanized wire line and sliding the bag down the line as we hung out garment conveyors.
Safety of your upscale luggage should always be a priority for everyone, no matter how often you travel. Airports and airplanes are prime places for thefts to take place. Protect yourself and don't be a victim!
The bakery has been my employment for the much better component of automated garment conveyor fifty three many years. And, yes, there have been occasions when to fulfill a spouse and family I have attempted other kinds of work. I have made aspect-wall hovercraft out of fiber-glass. I have built vehicle components also from fiber-glass and I have been a welder as well as made rainwater pipe.
The nearby authority has a quantity of problems allowing house based business with working licenses. Those issues in the previous have caused them to refuse permission to nearly each kind of house primarily based business in these days's market location.
When buying upscale luggage it's essential to verify the «little issues» to make certain you get your moneys really worth. This goes for all kinds of luggage, for instance hanging garment bags. So allow's look at straps, handles, and zippers.
When buying upscale luggage it's important to check the «little things» to make sure you get your moneys really worth. This goes for all kinds of baggage, for instance hanging Garment Conveyor Manufacturers bags. So let's look at straps, handles, and zippers.
Bar code scanners are utilized to study the bar codes discovered on goods in a wide variety of circumstances. Most of us are acquainted with bar codes found on packages at the grocery Garment Conveyor Manufacturers store. When you purchase an item, the bar code on the package is scanned by a bar code scanner. This immediately phone calls up the item particulars this kind of as the merchandise title and unit cost. The price is then calculated immediately and additional to your bill.
The bakery has been my work for the much better part of 53 years. And, sure, there have been times when to fulfill a wife and family members I have attempted other types of work. I have made side-wall hovercraft out of fiber-glass. I have built vehicle components automated garment conveyor also from fiber-glass and I have been a welder as nicely as made rainwater pipe.
Purchase Arranging Products — You will require to purchase a few items that will make the process a little simpler for you. Evaluate your requirements prior to going to your nearby superstore. For instance, do you require shoe storage? Do you need a belt hanger? What about your sweaters? Do you require a Garment Conveyor Manufacturers bag? If you have all of these items in your possession when you start the organizing process, it will be a lot simpler. While you are at the superstore, buy 3 large plastic bins. You will need them for the subsequent stage.
Get manage of clothes as they go into the washer and as they come out of the dryer with the Commercial Laundry Middle. Made of sturdy chrome, this handy unit includes a laundry sorter in the form of 3 hefty obligation canvas bags, plus a Garment Conveyor Manufacturers rack and a top shelf for storing folded clothes. Wheels make this unit even handier-you can pick up the dirty clothes from each bedroom and then deliver the thoroughly clean clothes correct back again once more. This is a durable, helpful organizer that will help you to keep your laundry area thoroughly clean and litter-totally free from begin to end on laundry working day.
Employees at the plant consider satisfaction in their function. Many say they are the garment conveyors product of the General Motors crop. Their workstations are neat and clean. It's like their inviting visitors into their personal home. Many even have American flags in their workstations, paying tribute to America's accurate sports vehicle and the nation it's constructed in.
Jim Nelson of Kentucky, viewed as his 2003 Coupe in Torch automated garment conveyor Red hung overhead, following it down the assembly line for 4 hours. He paid the $500 choice to watch his precious «baby» becoming created at the plant.

If you had been heading to start a bricks & mortar company in your local super marketplace or corner shop, then you will discover it is a great deal garment conveyors easier to do than attempting to start your bakery from house.

The Wand Scanner — The most fundamental kind of bar code scanner is the «wand». This is a pen-type scanner that needs to be stored automated garment conveyor in contact with the bar code when scanning it. The wand emits a mild which is reflected off the bar code and then decoded by the system to identify the merchandise.

# Preserve The Shape Of Your Clothing With The Use Of Garments Hanger

It demands math kind abilities, as well as English or some other language reading abilities. Without a studying ability you can't know what is in a recipe. There are well being regulations to understand and be able to study and adhere to.

One of the very best things about synthetic automated garment conveyor zippers is that they are outfitted with a slide that is in a position to restore a break up that happens. It does this by running back again more than the area where the zipper split. It's like getting a second, emergency zipper. One is able to help the other if a issue occurs.

Are you getting stressed of looking at the clutter and frantically looking for a garments to put on within your closet? And having the hard time on deciding what pair of clothes you will put on? End the stress you are suffering by utilizing the correct organizational instrument. Junk will usually be a component of your home but we do want to get rid of them. Try organizing your closet properly. Use only the easiest and effective clothes organizer you think you can easily have. Buy clothes hangers to correctly organized your clothes whilst conserving fantastic amount of your available closet area.

Whenever you adhere to these 5 suggestions cautiously you should expect to have extremely satisfactory outcomes with specifying your new gravity roller conveyor method. You will probably have good outcomes and each one of the huge benefits and good things that these fantastic results will bring with them. If you ignore these five suggestions, prepared your self for much even worse results and concurrently lower advantages.

Retractable handles on upscale baggage come in handy. Prior to purchasing the luggage, be certain to check the deal with. Pull it out and push it back again in a couple of times and see if it slides back and forth easily. Also make sure the deal with comes out much enough for your ease and comfort.

Pathfinder, Avenger Lite, 4-6 Suit Rolling Wardrobe is the 1st extremely rated Rolling automated garment conveyor Bag. Its dimension is around forty eight.five linear inches, whereas, capability is roughly 1880 cubic inches. On the other hand, the excess weight of this bag actions around 11 lbs. Some of the key attributes of this rolling bag are leading have deal with, telescoping deal with, big entrance pocket, fully lined inside, corner mesh pockets, wally clamp and roll bars. Individuals, who purchase this bag are provided life time restricted warranty.
Please do not get me incorrect right here, some chefs are really fantastic cooks, particularly when it comes to fillet steak or fondant potatoes or ice cream deserts and can function a kitchen area exactly where garment conveyors one or two plates of food require to be served in a short space of time.

Another example is the retail store. Most of the time, they use steel shelving simply because it provides fantastic flexibility. Steel shelving can be utilized to dangle garment conveyors because it has a rack. It is also versatile simply because you can modify the cabinets based on the product that you will be storing in it.

Keep all of your keys, essential paperwork, and your wallet on your individual at all times or in your have-on luggage where you can see them. Do not keep garment conveyors these products in a coat or jacket pocket. Your coat could finish up in the overhead compartment or could be hung up someplace and it is simple for a thief to reach it.

If you prefer to keep your supplies where you can see them, but nonetheless don't want shelves coated in lint at the finish of laundry day, the Roll Out Caddy may be your storage device of option. This powder coated white steel unit features 3 shelves and it matches neatly in between your washer and dryer. Lint will fall right via the wire, so you can merely roll out the caddy and sweep away the fuzz. Then roll it back into location. And if lint does occur to settle on the wire, it wipes absent garment conveyor Installation manual effortlessly and totally.

You've packed your upscale luggage for your company trip, and you are headed by cab to the airport. You have everything you require for your trip and you are arriving on time. Did you neglect something?

A promising new technology similar to CCD is known as FFO (Fixed Concentrate Optics). These scanners are non-get in touch with readers, which means they can study bar codes from as much as 20" absent. They will also be in a position to study two-dimensional bar codes as they turn out to be much more garment conveyors well-liked.

There is bound to be some meals that you will have to toss out when it comes to getting issues with the conveyors. So you should always work to make sure that the food is heading to be in a position to transfer without becoming contaminated or touching the wrong areas of the method. So basically creating certain that the conveyor belt is long enough is important.

The subsequent man I requested is a difficult operating middle class guy in his thirties. He is a spouse and father and very a lot a «man's guy». He said he might put on a kilt as part of a Renaissance Faire or pageant but not for any other reason. He has never worn 1. He states it is okay if it «has armor on it».

# Choosing The Right Conveyor For Conveying Biscuits And Other Food Stuffs

The last guy I requested is a single man in his twenties. He said, «I would wear a kilt with pride as component of my culture for a parade or something. But not for everyday garments. It is just not manly. Those men must be homosexual».

These are fantastic for touring and keep your clothes protected whilst being tossed around the baggage dealing with region. When you have specialty products like a wedding ceremony dress or other costly and precious piece of clothing they can turn out to be an invaluable tool.

Of program there are also schools that just want your money.
There are also colleges with bad instructors, as well as colleges with some of the best bakery abilities about who are attempting to educate individuals who are simply trying to maintain their unemployment benefits heading too. Apart from becoming able to read there is the reality that every employee should follow particular hygiene details. This not only indicates neat and tidy look but it also means that automated garment conveyor individual hygiene must be addressed as well. Keep all of your keys, essential documents, and your wallet on your individual at all times or in your have-on baggage where you can see them. Do not keep these automated garment conveyor items in a coat or jacket pocket. Your coat could end up in the overhead compartment or could be hung up someplace and it is simple for a thief to reach it. Safety of your upscale baggage ought to always be a precedence for everybody,no matter how frequently you travel. Airports and airplanes are prime locations for thefts to take place. Shield yourself and don't be a victim! Currently, airline laws in the United States follow the «3-1-one liquid rule». In other phrases, when you are travelling with your upscale luggage, all of your liquids, this kind of as shampoo, toners, moisturizers, sunlight block, etc. must be securely packed in clear, resealable plastic bags (such as freezer baggage). These bags must be quart sized. Each liquid merchandise can contain only three ounces or much less of the item. Whenever you follow these 5 suggestions carefully you ought to anticipate to have very satisfactory results with specifying your new gravity roller conveyor method. You will probably have great results and every 1 of the huge advantages and great things that these great outcomes will deliver with them. If you ignore these 5 tips, ready your self for a lot even worse outcomes and concurrently lower benefits. If your company offers with manufacturing, it is important that you use the correct conveyor system in purchase to move the process alongside in the most efficient method. Conveyor methods come in many configurations and designs. Today, you can even have a method customized-built for you so that all of your manufacturing requirements can be met. Productiveness and effectiveness can be greatly enhanced when you make use of the correct method. However, if you use a method that does not match your requirements, you can lose cash and time, which means lower revenue for your business. The Wand Scanner — The most basic type of bar code scanner is the «wand». This is a pen-kind scanner that requirements to be kept in contact with the bar code when scanning it. The wand emits a mild which is reflected off the bar code and then decoded by the system to identify the item. What makes a particular product appealing to a passer-by's eye? Its colour, sure. But most importantly, its design. As soon as the style catches the interest of a possible purchaser, then the merchandiser of that specific item did the right sales tactic. We all want to be different. Just as how you will not discover the precise exact same lines on two various leaves, we strive for uniqueness. This drive extends to how we dress, to the music we listen to, the films we watch, and also the goods we purchase. Instead you may want to appear into upscale baggage with synthetic zippers. These zippers are the best kind of zippers, as they are constructed from interlocking nylon coils. The interlocking assists to strengthen their capability to do their occupation properly. 
Are you getting pressured of searching at the clutter and frantically searching for a garments to put on within your closet? And having the hard time on deciding what pair of clothes you will wear? Finish the tension you are struggling by utilizing the right organizational tool. Junk will always be a component of your house but we do want to get rid of them. Try organizing your closet properly. Use only the simplest and effective clothes organizer you think you can easily have. Buy clothes hangers to properly arranged your clothing while saving fantastic amount of your accessible closet area. If you're an achieved seamstress, you can start from scratch, making your own automated garment conveyor. If not, just pull out 1 of your kids's toddler dresses, or head on down to Goodwill and scavenge in the toddler's segment. Employees at the plant take satisfaction in their work. Many say they are the product of the General Motors crop. Their workstations are neat and clean. It's like their inviting guests into their own garment conveyor systems incorporated 33830 house. Numerous even have American flags in their workstations, having to pay tribute to America's accurate sports vehicle and the country it's constructed in. # Conveyor Systems For The Contemporary Manufacturing Facility Add some style to your laundry room organization when you tuck the Wicker Between Washer Dryer Drawers between your devices. This white device consists of 4 drawers with a wicker-entrance design. You can store away your laundry detergent, softener bottles or boxes of sheets, garment conveyor cost pins or other items in this beautiful product. It fits correct in between your washing machine and the dryer. Tucking away your provides means that they won't litter the top of your machines and they gained't become covered in dryer lint. That indicates that you'll be in a position to tidy up the surfaces of the laundry space rapidly and effortlessly with the swipe of a towel. No much more wiping down person containers to eliminate that ugly lint. And the space will look so a lot neater with everything out of sight. I typed American males that wear skirts in my garment conveyors internet browser and there were millions of sites for me to visit. Most were support groups and style websites. Most of the men were sporting conventional Scottish kilts. There might be no written laws against males sporting skirts in our culture, but it is not nicely received as part of the American Guy's everyday wardrobe. For the males that select to be different, kudos to you, but I would not rely on a alter of heart from your fellow Americans any time quickly. Of program I can go on about the various kinds of equipment and their usage, but the remark that introduced about this specific post garment conveyors, was that you should purchase second hand equipment. Features and choices are added, creating every 'Vette distinctive. Rolling down the line, each car has a develop manifest taped to it. Workers refer to them frequently so they know what choices and attributes to add to every specific vehicle. Motioning the automated garment conveyor team to follow him, the tour manual led them into the darkish, dreary assembly plant. Its partitions had been towering and encompassing — a universe of its personal. Once you got in, it seemed tough to get out. Dim fluorescent lights are affixed to the factory-high ceiling, supplying minimal lighting. The floors are concrete and the partitions are painted a dismal grey. 
Similarly dull, metal gear is scattered all over the place and cords and wires appear to overtake. Line employees are spaced each so numerous ft in their individual work cells, every responsible for 1 part of the car. Understanding definitely is the reaction. Couple of things are easy in the occasion you don't understand it, don't comprehend how to do it. So to get fantastic outcomes with specifying your new gravity roller conveyor system, you ought to just know much more about how exactly to. Start by organizing these soiled garments. Don't permit them to pile up in the center of the flooring of the laundry room or rest room. Use the Triple Storage Bin with Black Frame and 2.5In Casters. The body of this unit is produced of heavy obligation steel, joined by a powerful wire body shelf across the base. 3 detachable baggage of mesh hang from the body, awaiting all the soiled garments you have to throw at it. You can push this unit about the home, collecting up clothes-the kids will have fun assisting you with this chore-then park it next to the washing device for easy loading. No more damp places on the floor and no more mixing your colours with your whites. Kind them as you collect them. There are numerous different difficulties that engineers must face daily. Some challenges have a tendency to be more tough to cope with than others. Consider specifying your new gravity conveyor system for your new procedure for occasion. The ideas governing the workings of modern gravity roller conveyor systems are part science (goal) and part artwork (subjective) So how could you get the very best results? Purchase Organizing Goods — You will require to purchase a few items that will make the procedure a little easier for you. Assess your needs before going to your nearby superstore. For instance, do you require shoe storage? Do you require a belt hanger? What about your sweaters? Do you require a garment conveyor cost bag? If you have all of these products in your possession when you begin the organizing process, it will be much easier. Whilst you are at the superstore, purchase three big plastic bins. You will require them for the next step. One way to learn about baggage is to talk with people you know who journey on a much more or less regular foundation. What works for them and what doesn't? By talking about the ins and outs of baggage buying you will be assured to buy upscale luggage that will be about for a long time. As nicely it will serve your purpose as it should. Buying baggage is a long term expense, after all. I am constantly astonished by the number of e publications and other so-known as bakery specialists on the web who are attempting to persuade genuine business owners that the bakery, is a hard company or that the very best way to get into business is by way of a house bakery. # Stolen At The Airport! How To Shield Your Upscale Baggage automated garment conveyor In the trim shop, bright Corvette components arrive with each other. Workers affix urethane front and rear bumpers, and composite fiberglass physique panels. Quarter panels, doors, and trunk lids are attached, carpets are laid down, and seats are installed. I determined to do some of my own research. I could not find any regulations garment conveyors or spiritual beliefs that prohibited males from sporting skirts. Some men say that it takes a real man to put on a skirt. Others are humiliated by the mere thought of a guy in a skirt and would never think about it on their own. 
While American tradition does not accept and support this type of gown for a man, it is more typical than I recognized. Zippers are an integral component of each piece of baggage. Attempt zipping and unzipping each zipper. Do it a few of times. Steel zippers may seem like the best option, but whilst they are generally powerful and sturdy, they can break up apart with age and use. Once this occurs, they are difficult to restore. There are numerous different challenges that engineers must encounter daily. Some challenges tend to be more difficult to cope with than other people. Take specifying your new gravity conveyor method for your new process for instance. The principles governing the workings of contemporary gravity roller conveyor systems are part science (objective) and part art (subjective) So how could you get the best results? Get manage of garments as they go into the washer and as they come out of the dryer with the Industrial Laundry Middle. Produced of durable chrome, this useful device consists of a laundry sorter in the form of 3 hefty obligation canvas bags, furthermore a Www.zaizhuli.Com rack and a top shelf for storing folded garments. Wheels make this device even handier-you can choose up the soiled garments from each bedroom and then deliver the thoroughly clean clothes correct back once more. This is a sturdy, helpful organizer that will assist you to keep your laundry area thoroughly clean and litter-totally free from begin to finish on laundry working day. Organize your laundry provides in one convenient location by mounting the Washing Device Wire Shelf on the back of the washer. It is built of vinyl coated steel wire so it is durable. It mounts right on leading of the back of the washer, assisting you to maintain every thing you require close to hand. No lengthier will you have to stack boxes of detergent or dryer sheets on the flooring beside the machine exactly where you can knock them more than, spilling their contents across the flooring. This shelf holds standard size bottles of softeners or bleach, too. You'll appreciate how convenient it is to maintain your provides all with each other right where you need them most. Understanding definitely is the reaction. Few things are simple in the occasion you don't comprehend it, don't understand how to do it. So to get fantastic results with specifying your new gravity roller conveyor method, you should just know more about how precisely to. Of course new equipment also has a warranty period, often one year but occasionally only 3 months because these manufacturers know the machines consider a considerable beating. They do a great deal of work. ut by the same diploma pre-owned gear can be just as great and frequently much better. Simply because like a new vehicle, the components require to be broken in. There is such a broad selection of garment baggage to choose from. Rolling, non rolling, tri fold, zippered, buttoned, and a massive choice of colours, materials and thicknesses. It all is dependent on how much tension you intend to place the baggage under and whether you require to travel a long distance or short. You'll want 1 that is pretty lightweight so it doesn't stress your muscles as some trips need you to stroll a fantastic distance. Replace the Maintain Items — but change them on matching wood or hefty duty plastic hangers. Matching hangers will make your closet look clean and uniform, while the wooden or heavy duty plastic (whichever you choose) will be mild on your clothes. 
Change all of your shoes, accessories, and miscellaneous items into the garment conveyors arranging products you purchased in the 2nd step. Of program new equipment also has a warranty time period, frequently 1 yr but sometimes only three months because these producers know the machines consider a substantial beating. They do a great deal of work. ut by the exact same degree pre-owned equipment can be just as great and frequently better. Because like a new vehicle, the components require to be damaged in. Most locks that come with suitcases are flimsy sufficient for crooks to split into. Change the lock with a much more durable, more powerful lock. Combination locks are the best, so there is no worry about losing keys. The same is accurate for your have-on as nicely as your Www.zaizhuli.Com baggage. Just make certain you discover the mixture! # It's All About Particulars - Handles. Straps, And Zippers For Upscale Luggage At the exact same time the stock manage system records the fact that you have purchased a can of soup or box of cereal, and the inventory tally maintained in the central databases is reduced to mirror the reality that somebody has purchased one of these products. Inventory control is much more or less automated garment conveyor, assuming that all the information was enter correctly in the first location. Check the angle of decrease you can get, if you need 1. Could you automated garment conveyor inform me how come this a great idea? Because if you want a decline angle for your products to travel to and end quit let's say, you require this to be adjustable so you can set it correct. Are there more substantial reasons? Too steep and the product moves as well quick, not sufficient adjustment and it will not move at all!.. Bar code scanners are used to read the bar codes discovered on goods in a broad variety of situations. Most of us are acquainted with bar codes found on packages at the grocery store. When you buy an merchandise, the bar code on the package is scanned by a bar code scanner. This automatically phone calls up the merchandise particulars this kind of as the item name and unit price. The cost is then calculated automatically and added to your invoice. There might be no written regulations against men wearing skirts in our society, but it is not well received as part of the American Guy's everyday wardrobe. For the men that choose to be different, kudos to you, but I would not rely on a change of heart from your fellow People in america any time quickly. Whenever you adhere to these 5 suggestions carefully you ought to expect to have very satisfactory outcomes with specifying your new gravity roller conveyor system. You will most likely have good results and each 1 of the huge advantages and good issues that these fantastic results will deliver with them. If you ignore these five suggestions, ready yourself for a lot even worse results and concurrently reduce benefits. The standard shelving is adjustable, has no bolts and is made of steel. Its general height is as high as 5M. But for selection, you can buy shelving that is 2M high. There is also a option of shelf depths — the customer can choose garment conveyors amongst shelf depths which are 320 mm, four hundred mm, 500mm, 600mm and 800mm. To make it appear smooth and sophisticated, some steel and steel shelving have a black coat finish which is pre-galvanized. Now this tends to make the storage device shiny and bright and aesthetically attractive even if it had been saved in a dark stock room. 
I don't have the kind of patience it takes to become good at sewing, but I envy those who do. Even I could turn one of these sewing craft ideas into a special present for a new mom's nursery or an older mom who suffers from a touch of nostalgia now and then. Jim Nelson of Kentucky watched as his 2003 Coupe in Torch Red hung overhead, following it down the assembly line for 4 hours. He paid the $500 option to watch his precious «baby» being created at the plant.
Now I started writing this post simply because I am amazed at the way some people are deceptive you into Garment conveyor on ebay considering that it is easy and the very best way to get into company is by starting a home bakery business.
This system is popular in numerous various settings. For example, since scanning is done quickly, laser scanners can be embedded correct within Garment conveyor on ebay. As objects pass rapidly by they are scanned and recorded. In retail stores the check out person simply moves objects over the scanner to activate the scanning action. This kind of method is fast enough to keep up with a clerk just using objects from one aspect of the scanner and sliding them to the other side. This kind of a method is much faster and much more correct than any of the popular options presently available.
Check your product excess weight. Which is important automated garment conveyor because the rollers require to be able to cope in the lengthy run. And simply because if the rollers are as well weak they will bend, as well strong and arguably you have spent too a lot cash.
If you asked me, «Did I enjoy that employment?», the answer would most definitely be an extremely resounding NO! The work was dirty, smelly, and as far as I was concerned, a monkey could do it. It was dull!
Instead you might want to appear into upscale luggage with synthetic zippers. These zippers are the very best type of zippers, as they are constructed from interlocking nylon coils. The interlocking assists to strengthen their ability to do their job correctly.
The laundry space is frequently a capture-all place for items that have nothing to do with laundry. If yours becomes cluttered with miscellaneous issues like toys, backyard tools or shoes, you can rapidly thoroughly clean up that litter with the Rectangular Storage Container. This sturdy canvas organizer attributes a lining of water-resistant vinyl. That indicates it is perfect for moist footwear or pool toys. It has handles so once you toss in everything you want to remove from the region, you can pick it up and carry it anywhere you want it to. It makes a fantastic recycling container, too. Just toss in plastic bottles, newspapers and containers-whatever you want to gather, this hefty container can deal with it.
# Things That Ought To Be Regarded As When Selecting A Conveyor System
But most chefs that I have recognized, have extremely small or no idea on how to operate a bakery exactly where figures can run in the hundreds and have tight manufacturing occasions. I am not placing the chef down. In a kitchen area they can have that occupation. It isn't for me! In a bakery numerous are up a creek without a paddle.
There are selection of garment conveyors hanger that you can choose from. Selecting for the correct type of it depends on the kind of garment you have. The clothing of your kids ought to be hung on children hangers because of their small sizes. The hefty clothes you have ought to be paired with the hangers that could carry their excess weight like the durable wood hangers and tough metal hangers. Your garments with delicate fabrics ought to be hung on padded hangers to shield them from pressure of hanging. Your lingerie and formal wears are generally found in padded hangers. The well-made fits and fine jackets should be place on wooden hangers to make them appear more elegant. Coordinate hangers include beauty to your clothes and store whilst organizing every every of them.
Get control of garments as they go into the washer and as they come out of the dryer with the Commercial Laundry Middle. Produced of durable chrome, this useful unit includes a laundry sorter in the type of three hefty obligation canvas baggage, furthermore a Garment Conveyor Price rack and a leading shelf for storing folded clothes. Wheels make this unit even handier-you can pick up the soiled garments from every bedroom and then provide the clean clothes correct back again once more. This is a durable, useful organizer that will help you to keep your laundry area thoroughly clean and clutter-free from begin to end on laundry working day.
Jim Nelson of Kentucky watched as his 2003 Coupe in Torch Red hung overhead, following it down the assembly line for 4 hours. He paid the $500 option to view his precious «baby» being created at the plant.
Please do not get me incorrect right here, some chefs are truly fantastic cooks, especially when it comes to fillet steak or fondant potatoes or ice product deserts and can operate a kitchen where garment conveyors one or two plates of meals need to be served in a short space of time.
Such a method can even produce distinctive bar codes for items that do not already have them. For new products a unique bar code is produced by the software program, and then a bar code printer is utilized to print a bar code label that is then affixed to the merchandise.
One of the best things about artificial zippers is that that are equipped with a slide that is able to repair a split that occurs. It does this by operating back more than the region where the zipper break up. It's like getting a second garment conveyors, emergency zipper. One is able to assist the other if a issue happens.
Check the roller spacing. The clarification for this is that if the gap between the rollers is as well big, the item may not express easily. It is also a great idea simply because if the gap is as well little, the price goes up simply because there will be much more rollers.
There are also freezers that are called quick freeze. These later types of freezers will freeze your goods right to the core inside a very brief space of time. Frequently as fast or quicker than thirty minutes.
There might be no written regulations towards men sporting skirts in our society, but it is not well received as part of the American Guy's daily wardrobe. For the males that select to be different, kudos to you, but I would not rely on a alter of heart from your fellow Americans any time soon.
If you prefer to maintain your provides exactly where you can see them, but nonetheless don't want cabinets covered in lint at the end of laundry day, the Roll Out Caddy may be your storage unit of option. This powder coated white steel device features 3 shelves and it fits neatly in between your washer and dryer. Lint will drop correct through the wire, so you can merely roll out the caddy and sweep absent the fuzz. Then roll it back into location. And if lint does happen to settle on the wire, it wipes away effortlessly and completely.
If your costume is made with regular garments, you can follow the direction as proven on the label, but use a gentle cycle, adopted by a dry on the sensitive cycle. Stain stick will take out soils, but do not use these on pre-fabricated costumes unless of course you are particular of the materials. Once more, shop the costume in a safe, dry place.
Another example is the retail store. Most of the time, they use metal shelving because it provides great flexibility. Metal shelving can be used to dangle garments because it has a rack. It is also versatile because you can modify the cabinets based on the product that you will be storing in it.
If you have a layover at an additional airport, and passengers are given the opportunity to stretch their legs in the terminal prior to returning to the plane, always take your carry-on with you. Never depart it anywhere. This is an open invitation to a thief to steal it.
# How To Maximize Your Closet Storage
There will also be a need to use both fridges and freezers. Here again they can be large sufficient to generate a truck through, or walk through or little enough to garment conveyors keep but a couple of products cool.
At the exact same time the stock control method information the fact that you have purchased a can of soup or box of cereal, and the stock tally maintained in the central databases is reduced to reflect the reality that someone garment conveyors has bought one of these items. Inventory control is more or much less automated, assuming that all the data was enter properly in the initial location.
Such a system can even produce distinctive bar codes for items that do not already have them. For new items a unique bar code is produced by the software program, and then a bar code printer is used to print a bar code label that is then affixed to the item.
One of the best issues about artificial zippers is that that are outfitted with a slide that is able to restore a split that occurs. It does this by operating back again over the region exactly where the zipper split. It's like getting a 2nd, emergency zipper. 1 is in a position to assist the other if a problem happens.
The last guy I requested is a solitary guy in his twenties. He said, «I would wear a kilt with satisfaction as component of my culture for a parade or some thing. But not for daily garments. It is just not manly. Those men must be gay».
There is certain to be some food that you will have to toss out when it arrives to having issues with the garment conveyors. So you ought to usually work to make sure that the meals is going to be in a position to move without becoming contaminated or touching the wrong areas of the method. So basically creating certain that the conveyor belt is long sufficient is essential.
I am continuously astonished by the number of e publications and other so-called bakery professionals on the internet who are attempting to convince genuine entrepreneurs that the bakery, is a hard company or that the best way to get into business is via a house bakery.
The subsequent guy I requested is a difficult operating center automated garment conveyor class guy in his thirties. He is a husband and father and extremely a lot a «man's man». He stated he might wear a kilt as part of a Renaissance Faire or pageant but not for any other purpose. He has by no means worn 1. He says it is okay if it «has armor on it».
We all know how flimsy purchased costumes can be, particularly if the material is factor. Go over the costume cautiously, and make certain there are no tears or threats for holes. If it can be mended, do so. If the shirt has velcro tabs, make sure they are still firmly set to the materials.
There are numerous different challenges that engineers should face daily. Some difficulties have a tendency to be much more tough to cope with than others. Consider specifying your new gravity conveyor system for your new procedure for occasion. The principles governing the workings of modern gravity roller conveyor methods are component science (objective) and part art (subjective) So how could you get the very best results?
If your costume is made with regular clothes, you can follow the direction as shown on the label, but use a gentle cycle, followed by a dry on the delicate cycle. Stain stick will consider out soils, but do not use these on pre-fabricated costumes unless of course you are certain of the materials. Once more, store the costume in a secure, dry location.
Actually, if you think about it, the design of a particular item (how it is produced) fulfills its purpose. It is up to the consumer to determine on where he'll be utilizing the storage device. From there, he can go pick the design that very best displays his character.
The CCD Scanner — CCD (billed coupled gadget) technology is the subsequent minimum costly bar code scanning method. Like the wand scanner, CCD readers must be in immediate contact with the bar code label in purchase to read it. But in contrast to the wand, there is no require to transfer the gadget across the label. The operator simply presses the reader against the label and pulls the trigger. The bar code is then photographed, digitized and decoded by the method.
Store all winter coats, winter skirts and dresses in garment conveyor systems Incorporated 33830 baggage at the back of the closet or in another closet. Keep hats, gloves, scarves earmuffs and other winter accessories stored in plastic storage bags. Winter season boots can be tucked away in stackable storage boxes with addresses.
If you are new to assessing conveyor systems for purchase, you should make a stage to get info from different manufacturers before you decide on one of them. The producer ought to be acquainted with what you require and be in a position to provide you with a higher-high quality product with fantastic customer services. Having information from several manufacturers will give you much better leverage when you negotiate for a better rate.
# Things That Should Be Regarded As When Choosing A Conveyor System
A promising new technology comparable to CCD is known as FFO (Fixed Focus Optics). These scanners are non-contact visitors, which indicates they can study bar codes from as a lot as 20" away. They will also be able to read two-dimensional bar codes as they turn out to be automated garment conveyor more well-liked.
When shortcuts are taken in meals methods you could finish up not sensation nicely and even obtaining sick. This is simply because you could have produced the error of eating poor food which can trigger you to be extremely sick. The very best factor to do is to make sure that you assistance companies maintaining their food garment conveyors sanitary in all situations.
My spouse, Nick, assisted delivere a «baby» in Bowling Green, Kentucky. Cradled in her leather, ivory seat, he inserted the key into the dashboard, and with one flip to the right, she allow out «her» initial imply more info here cry — Vvvvvvrroooommm.rooom rooooom rooooom. Like a new father, Nick admired this fiftieth Anniversary Edition Corvette, happy to have aided in her birth.
Store all winter coats, winter season skirts and dresses in more info here baggage at the back of the closet or in another closet. Maintain hats, gloves, scarves earmuffs and other winter add-ons stored in plastic storage bags. Winter boots can be tucked absent in stackable storage boxes with addresses.
It is very important in all locations regarding meals that issues be clean. There are numerous issues about food processing conveyors that are essential but the greatest 1 is that it must be thoroughly clean and sanitary. If this had been not the situation then it would be much more most likely that meals would not be safe.
The last guy I requested is a single guy in his twenties. He stated, «I would put on a kilt with satisfaction as part of my tradition for a parade or something. But not for everyday garment conveyors. It is just not manly. Those guys must be homosexual».
There are numerous various difficulties that engineers should face every day. Some challenges tend to be much more difficult to cope with than others. Consider specifying your new gravity conveyor method for your new procedure for occasion. The principles governing the workings of modern gravity roller conveyor methods are component science (objective) and part art (subjective) So how could you get the best results?
It requires math type skills, as well as English or some other language reading abilities. With out a reading ability you cannot know what is in a recipe. There are health regulations to understand and be in a position to more info here read and follow.
Of the various kinds of bar code scanners, CCD visitors are the simplest to use, and are accessible in widths from about two inches to 4 inches. A CCD reader is about four automated garment conveyor occasions the price of a wand, but only about 1 3rd the price of a laser scanner.
The bakery does not spend a high wage, and that is extremely unlucky. However, it is a regular employment and that beats operating out in the bush three months a year below some harsh situation, or working in a car manufacturing plant like a zombie.
«Some of the regular features for the 2006 Corvette are: twin air baggage, AM/FM stereo with CD player and new Bose speakers, Driver Info Middle, which relays information in 4 languages, low tire stress warning system, leather bucket seats, distant release for the hood and trunk, a theft alarm which shuts down the gas supply and a distant keyless entry system,» says our tour manual.
Store the costume in a zippered storage bag, if feasible. There are many types available in your local stores. Ideally the costume should be hung, and you can use black garbage bags for this. Just location the bag more than the more info here, or cautiously fold the excess bag around the costume and place it in a box. Make certain your costume's final resting place is a dry, dark location. Refrain from using mothballs. If you have worries, store the costume in the back of the closet where you can verify on it as soon as a month.
If you asked me, «Did I enjoy that employment?», the answer would most certainly be a very resounding NO! The work was dirty, smelly, and as far as I was concerned, a monkey could do it. It was boring!
The carry-on is one of the most used types of upscale luggage. If you travel a great deal, then you most likely have encountered some problems with liquids-spilling their contents and ruining your bag,and every thing packed with them.
When the seasons alter, switch the items from 1 closet to the other. When you do not have any extra closets, you can create a seasonal closet by maintaining what you garment conveyors need hung up and visible, whilst neatly storing the relaxation away in the back of the closet or in storage bags and boxes.
Re-assess the clothing items in your closet which are currently on hangers. Some of these products, such as sweaters or tops that have seen much better days, can be folded and stored neatly in drawers. Add a storage bin with three drawers to the bottom of the closet and store sweaters or non-wrinkling tops in there. Getting a storage drawers in the closet will also help keep you from creating a mess of miscellaneous items on the closet flooring.
# Conveyor Methods For The Contemporary Manufacturing Facility
Theft of luggage at airports is much more common than most vacationers understand. This is particularly accurate if your luggage has all of the hallmarks of becoming automated garment conveyor higher high quality executive luggage. The great news is that there are issues travelers can do to protect on their own from theft.
People keep trying to cram more and more stuff into the same space. It might seem obvious, but the best way to maximize your closet storage area is to get rid of anything you have not used for a year. Do you still need every single Easter basket, Halloween costume, or bridesmaid gown from the past five years? Probably not. Yet, it appears easier for people to spend billions of dollars* on self-storage units outside the home than it does to simply recycle and donate what we don't use and arrange what we do use.
In a small company environment employing such a system starts with stock control software program where information about inventory items is saved. Bar code scanners are utilized to enter data into the inventory manage method. Each time an item passes in or out of inventory it is scanned and the appropriate change of standing is recorded in the stock database.
The next man I asked is a difficult operating center class guy in his thirties. He is a spouse and father and extremely much a «man's guy». He stated he might wear a kilt as component of a Renaissance Faire or pageant but not for any other reason. He has never worn 1. He states it is okay if it «has armor on it».
The garment conveyor installation manual bag by itself is a very simple design. It's a bag with a zipper in the entrance that operate from leading to bottom. It's designed to fold about your garments, keeping a number of products that are all hung on hangers. The hangers stick out the leading of the bag, while the bag folds cautiously around the clothes. You then zip it up and have it by the hangers. Occasionally there is a loop that helps to hold the hangers together. It helps if the hangers are all pointing the same direction as this tends to make it easier on your fingers, furthermore you can then dangle the whole bag with contents on the closet railing.
Please do not get me wrong right here, some cooks are really great cooks, particularly when it arrives to fillet steak or fondant potatoes or ice product deserts and can operate a kitchen where 1 or two plates of meals need to be served in a short space of time.
Starting a bakery is a Fantastic idea and you should to push on with your endeavor. It is 1 of the garment conveyors best suggestions you may have and it certainly is a fantastic way to decrease your taxes or at minimum be able to write off particular taxes that you at this moment can't.
Now I started creating this post because I am amazed at the way some people are misleading you into considering that it is easy and the very best way to get into business is by beginning a home bakery company.
Most locks that come with suitcases are flimsy sufficient for crooks to break into. Change the lock with a much more durable, more powerful lock. Combination locks are the very best, so there is no be concerned about losing keys. The same is true for your have-on as well as your garment conveyor installation manual bags. Just make sure you learn the mixture!
In a small business setting employing such a method starts with inventory manage software program where information about stock products is stored. Bar code scanners are used to input information into the inventory manage method. Every automated garment conveyor time an merchandise passes in or out of inventory it is scanned and the suitable alter of status is recorded in the inventory database.
For your upscale baggage that you verify, such as your garment conveyor installation manual baggage, attach a couple of strips of colorful electrical tape to the bags. Once you retrieve your bags,check to see if the tape has been damaged. In this way you will know that your suitcase has been tampered with.
Always pack your most valuable products at the bottom of your have-on luggage, not at the top. Once you sit down on the aircraft, if at all possible, location your carry-on under the seat in entrance of you you can see it.
The Wand Scanner — The most basic type of bar code scanner is the «wand». This is a pen-type scanner that needs to be kept in contact with the bar code when scanning it. The wand emits a mild which is reflected off the bar code and then decoded by the system to garment conveyors identify the merchandise.
The nearby authority has a number of issues allowing home based company with working licenses. These issues in the previous have brought on them to refuse permission to nearly each kind of home primarily based business in today's market location.
|
|
# Approximate summation of the given equation
I have been trying for an hour to approximate the value of $M$ in the equation given below.
$$M = \sum\limits_{i=1}^n\left(\sum\limits_{j=1}^n\left(\sqrt{ i^2 + j^2 }\right)\right)$$
One thing I know is that $M$ lies within the range given below: each of the $n^2$ terms is at least $\sqrt{2}$ (its value at $i = j = 1$) and at most $n\sqrt{2}$ (its value at $i = j = n$).
$$n^2\sqrt{2} \leq M \leq n^3\sqrt{2}$$
But I am not satisfied with this answer because it is a broad range.
What I want is an approximate value for $M$, which lies somewhere between $O(n^2)$ and $O(n^3)$.
-
Consider the following:
$$M = \sum_{i=1}^n \sum_{j=1}^n \sqrt{i^2 + j^2} = n^3 \sum_{i=1}^n \sum_{j=1}^n \sqrt{\left(\frac{i}{n}\right)^2 + \left(\frac{j}{n}\right)^2} \cdot \frac{1}{n} \cdot \frac{1}{n}.$$
Now as $n \to \infty$ we have $$\sum_{i=1}^n \sum_{j=1}^n \sqrt{\left(\frac{i}{n}\right)^2 + \left(\frac{j}{n}\right)^2} \cdot \frac{1}{n} \cdot \frac{1}{n} \to \int_0^1 \int_0^1 \sqrt{x^2 + y^2} \, dx \, dy \approx 0.765196 \dots$$
So your sum is indeed of order $n^3$, with the constant approaching that integral.
EDIT: We see that the Riemann sum is always an overestimate for the integral since in the $\frac{1}{n} \times \frac{1}{n}$ grid we pick the point in every square where $\sqrt{x^2 + y^2}$ is highest. Therefore we have $$\left(\int_0^1 \int_0^1 \sqrt{x^2 + y^2} \, dx \, dy\right) n^3 \le M \le \sqrt{2} n^3,$$ with $M/n^3$ tending to the integral as $n \to \infty$.
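For a quick numerical sanity check of these bounds, here is a minimal Python sketch (not part of the original answer); the brute-force double sum is fine for moderate $n$:

```python
import math

# Limiting value of M_n / n^3: the integral of sqrt(x^2 + y^2) over the unit square.
LIMIT = (math.sqrt(2) + math.asinh(1)) / 3  # about 0.765196

def M(n):
    """Brute-force evaluation of M_n = sum over i, j = 1..n of sqrt(i^2 + j^2)."""
    return sum(math.sqrt(i * i + j * j) for i in range(1, n + 1) for j in range(1, n + 1))

for n in (1, 10, 100, 1000):
    ratio = M(n) / n**3
    print(f"n = {n:4d}   M_n/n^3 = {ratio:.6f}   (integral = {LIMIT:.6f}, sqrt(2) = {math.sqrt(2):.6f})")
```

The printed ratios decrease from $\sqrt{2}$ toward the integral as $n$ grows, consistent with the inequality above.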
-
How can we assume that "n" tends to infinity? I mean, what if I know that n is some value less than the largest number that can be expressed by 32 bits? – user1465557 Aug 21 at 11:47
@user1465557: By letting $n \to \infty$, we see that the factor of $n^3$ converges, so in particular it will stay bounded. In fact it is not hard to see that it will converge monotonically down to the integral. When $n=1$ the constant is the same as yours: $\sqrt{2}$, but when $n=10$, it is approximately $0.82998\dots$, which is not too far from the integral $0.765196\dots$ (calculated by WolframAlpha). – J. J. Aug 21 at 11:53
I do not know how much this could help you, but $$M_n = \sum\limits_{i=1}^n\sum\limits_{j=1}^n\sqrt{ i^2 + j^2 } \lt \sum\limits_{i=1}^n\sum\limits_{j=1}^n(i+j)=n^2+n^3,$$ since $\sqrt{i^2+j^2} \lt \sqrt{(i+j)^2} = i+j$ for $i,j \geq 1$. So, as J.J. pointed out, the $n^3$ contribution seems to be clear.
By the way, the limiting value, as answered by J.J., is $$\frac{1}{3} \left(\sqrt{2}+\sinh ^{-1}(1)\right)\simeq 0.765195716464$$ You can notice that $M_{100}=0.771674 \times 10^6$ and $M_{1000}=0.765844\times 10^9$
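For completeness, here is one standard way to obtain that closed form, sketched using the symmetry of the integrand about the line $y=x$ and polar coordinates (so the square is replaced by twice the triangle $0\leq y\leq x\leq 1$): $$\int_0^1\int_0^1 \sqrt{x^2+y^2}\,dx\,dy = 2\int_0^{\pi/4}\int_0^{\sec\theta} r\cdot r\,dr\,d\theta = \frac{2}{3}\int_0^{\pi/4}\sec^3\theta\,d\theta = \frac{1}{3}\Big[\sec\theta\tan\theta+\ln\left|\sec\theta+\tan\theta\right|\Big]_0^{\pi/4} = \frac{1}{3}\left(\sqrt{2}+\ln\left(1+\sqrt{2}\right)\right),$$ and $\ln(1+\sqrt{2}) = \sinh^{-1}(1)$, which matches the value quoted above.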
-
We have $$\sum_{j=1}^{n}\sqrt{i^2+j^2}\gt\sum_{j=1}^{n}j=\frac{n(n+1)}{2};i=1,2,\ldots,n$$ so $$\sum\limits_{i=1}^n\sum\limits_{j=1}^n\sqrt{ i^2 + j^2 }\gt n\frac{n(n+1)}{2}=\frac{1}{2}(n^3+n^2)$$
-
|
|
PhD Dissertations by MUM Students – Physiology
The full-text pdf copies of all University PhD dissertations published since mid-1996 are now available free online to on-campus users.
All users can order any of the University dissertations for a fee by using the “Order” link on the citations below or on the abstract pages to which they link. Additional pre-1996 dissertations will be available for free to on-campus users in the future.
Barker, Charles W. — Physiology
Suppression of cytochrome P4501A (CYP1A1 and CYP1A2) mRNA levels in isolated hepatocytes by IL-1 and oxidative stress.
Order No. 9421867
Animals subjected to immunostimulatory conditions exhibit reduced tissue levels of total CYP and CYP-dependent drug metabolism. We have investigated the possibility that depressed levels of two carcinogen-metabolizing CYP enzymes may be due to decreased levels of the mRNAs encoding these enzymes by studying the effect of monocyte-derived cytokines on the induction of CYP1A1 and CYP1A2 mRNAs in isolated rat hepatocytes. Medium conditioned by activated human peripheral blood monocytes or by the U937 monocyte cell line suppressed the induction of both mRNAs by TCDD, while beta-fibrinogen mRNA levels increased 30- to 40-fold. Recombinant interleukin-1 suppressed the inducer-dependent accumulation of both CYP1A1 and CYP1A2 mRNAs in a dose-dependent fashion, while two other monocyte-derived cytokines, interleukin-6 and transforming growth factor-beta, did not. Run-on transcription analysis demonstrated that conditioned medium and interleukin-1 rapidly suppressed the transcription rate of CYP1A1 and CYP1A2 in inducer-treated hepatocytes. Since many of the actions of inflammatory mediators can be mimicked by oxidative stress, we also treated isolated hepatocytes with various concentrations of H$_2$O$_2$ (0.25 to 1.0 mM) to investigate the possibility that expression of these genes may also be modulated by oxidative stress. Inducer-dependent accumulation of CYP1A1 and CYP1A2 mRNAs was maximally reduced by approximately 50% and 70%, respectively, by 1.0 mM H$_2$O$_2$. Run-on transcription analysis suggested that the effect of H$_2$O$_2$ on these mRNAs was mediated transcriptionally. The reduction in CYP1A mRNA levels was not due to a reduction in the levels of all mRNAs from some general toxic effect, since glyceraldehyde-3-phosphate dehydrogenase, alpha-tubulin, beta-fibrinogen and albumin mRNA levels did not change or were actually increased, and lactate dehydrogenase released into the medium was not increased, with H$_2$O$_2$ treatment. Insulin also reduced the expression of both mRNAs, and N-acetylcysteine, which increases intracellular glutathione levels, completely reversed the insulin effect on both mRNAs and the H$_2$O$_2$ effect on CYP1A1 mRNA but only partially reversed the H$_2$O$_2$ effect on CYP1A2 mRNA. Source: DAI, 55, no. 04B, (1994): 1294
Barnes, Vernon Anthony — Physiology
Reduced cardiovascular and all-cause mortality in older African Americans practicing the Transcendental Meditation Program
Order No. 9701126
African Americans have a well-documented excess of CVD mortality, which is at least in part due to psychosocial stress. The Transcendental Meditation® (TM) program has been reported to reduce psychological stress, cardiovascular risk factors, and incidence of heart disease. A randomized controlled trial indicated that TM reduced hypertension significantly more than progressive muscle relaxation (PMR) and an educational control (EC) in older (mean age = 67 years) African Americans after 3 months. Pilot research in Caucasian elderly has found a 73% reduction in all-cause and cardiovascular (CVD) mortality in the TM group compared to the combined control group.
Based on these findings, TM (n = 36) was hypothesized to reduce incidence of all-cause and CVD mortality compared to PMR (n = 37), EC (n = 36), and a combined control (CC, n = 73) group among the African American participants with mild hypertension in the original BP study. After 5 years, an all-cause and CVD mortality follow-up was conducted with data provided by Vital Statistics, Sacramento, CA. Survival distributions were compared by the Wilcoxon and Cox proportional hazards tests. There were 0.0% (0/36) CVD fatalities for TM compared to 9.5% (7/73) for CC, and 8.5% (3/36) all-cause fatalities for TM compared to 19% (14/73) for CC. Both all-cause (P = .045) and CVD (P = .021) mortality were significantly lower for TM compared to combined controls. The relative risk (RR) for TM compared with combined controls was 0.00 (95% CI 0-0.63) for CVD mortality and 0.32 (95% CI 0-0.96) for all-cause mortality.
These findings suggest that TM practice may reduce incidence of CVD and all-cause mortality in older hypertensive African Americans. According to Maharishi’s Vedic Approach to Health, TM enhances the holistic inner intelligence of mind and body, and thereby promotes balance in psychophysiological functioning and thus helps prevent premature disease and death. The demonstrated benefit for the Transcendental Meditation® program seems to have important implications for clinical and public health policy for reducing excessive CVD and all-cause mortality in African Americans. Source: DAI, 57, no. 08B, (1996): 4999
Crotta, Erika Helene — Physiology
Effects of a multimodal approach of Maharishi consciousness-based health care on carotid atherosclerosis: A study of coronary artery disease patients
Order No. 3131257
Cardiovascular disease (CVD) is still the largest contributor to morbidity and mortality in the world. Over the past 30 years, the focus on primary and secondary prevention of cardiovascular disease and its related risk factors has yielded three major prevention strategies: drug therapies, lifestyle modification, and stress reduction therapies. Despite this effort, up to 50% of patients with documented CVD have recurrent cardiac events.
A new angle that supports further prevention of CVD is needed. The Maharishi Consciousness-Based Health Care system, a natural, prevention-oriented system of health, includes 40 modalities for enlivening the “inner intelligence of the body,” which is responsible for coordinating diverse physiological systems into an integrated whole.
This pilot trial compared effects of four Maharishi Consciousness-Based Health Care modalities to those of usual care on carotid intima-media thickness (IMT) in elderly subjects with documented cardiovascular disease and two to six CVD risk factors. The Maharishi Consciousness-Based Health Care modalities included the Transcendental Meditation program, neuro-physiological integration exercises, and dietary and herbal supplement approaches. Usual care included the secondary prevention system offered at the University of Iowa Hospitals and Clinics, which is based on practice guidelines promoted by the American Heart Association.
Twenty-eight volunteer subjects were matched on age (mean 72 years), gender, and severity of documented CVD. Measures were taken at baseline and at a nine-month posttest. At baseline the experimental group had significantly higher BMI (30 versus 26), triglycerides (177 mg/dl versus 101 mg/dl), and blood pressure (137 versus 120 mm Hg). They were more often single, and had lower income. Covarying for these baseline differences in major CVD risk factors, the experimental group tended to show a greater decrease in mean common carotid IMT after nine months (experimental -0.023 mm, usual care +0.041 mm, p = 0.07). The IMT regression in the experimental subjects was associated with high compliance. There was a strong correlation between compliance and increases in physical, mental and behavioral strength (r = 0.47), as assessed by Maharishi Consciousness-Based Health Care procedures. These findings suggest that enlivening the body’s inner intelligence could be an effective tool to deal with the current epidemic of cardiovascular disease.
Dangerfield, Bracey R. — Physiology
Complex dynamics in biological systems: Spontaneous variations of serotonin uptake into platelets as a model of signal control in the central nervous system
Order No. 9228948
Coherent fluctuations in the level of enzymatic activity in solutions of purified enzymes have been demonstrated in several laboratories. The existence of such fluctuations involving simple enzymes in solution suggests that coordinated behavior among macromolecules can occur in the absence of synchronizing cues from the environment. The present research sought to determine whether spontaneous fluctuations in the frequency range of 0.01 to 0.25 cycles/minute occur in the initial rate of uptake of serotonin (also known as 5-hydroxytryptamine, 5-HT) into platelets, a protein-mediated function. Such fluctuations were found, and their temperature dependence and response to partial inhibition by imipramine were examined. The level of uptake was determined as the amount of ($^3$H) 5-HT taken up by platelets in a one-minute incubation at 37$^\circ$C. Spectral analysis provided a measure of the frequency content of each time series of initial rates. The average behavior of many time series was displayed by summing their individual spectra. Statistical significance of individual frequency estimates was determined by Fisher’s test or Siegel’s test. The major findings of this research were: (1) apparently spontaneous periodic and aperiodic oscillations in the initial rate of 5-HT uptake, (2) an apparent shift to faster frequencies with an increase in sample storage temperature, (3) alterations in the frequency or phase of fluctuations in the presence of imipramine, (4) absence of an effect of a brief pre-sonication of platelet preparations on 5-HT uptake patterns. The findings suggest that coherent variations in 5-HT uptake may be organized by factors intrinsic to the platelet suspensions. Fluctuations in 5-HT uptake rate could be due to oscillatory variations of metabolic parameters in the platelet or to spontaneous conformational fluctuations in the uptake protein. The mechanism mediating ordering effects between platelets could involve alterations in water structure associated with such configurational changes or possibly alterations of diffusible chemical species such as intermediates of metabolism or metal ions. Since the 5-HT uptake system of neurons is essentially identical to that of platelets, these findings may help to explain serotonin-mediated rhythmicities in signal transmission in brain areas rich in non-classical serotonergic nerve endings.
Duraimani, Shanthi Lakshmi Chinnasamy — Physiology
Lifestyle Modifications and Healthy Biological Aging: Effects of Telomerase Activity and Telomere Length.
Order No. 3523287
Lifestyle modifications such as practicing the Transcendental Meditation (TM) program, maintaining a healthy diet, and getting regular exercise could help to overcome age-related disorders and promote healthy biological aging. Biochemical, physiological, and psychological studies have revealed their positive effects, although the effect on telomerase gene expression and telomere length is poorly understood. Therefore, the goal of this research study is to determine whether lifestyle modifications will support healthy biological aging. Two independent studies were conducted for this purpose. First, a National Institutes of Health (NIH) funded randomized pilot study was conducted with African Americans with Stage I hypertension, using two intervention groups (24 in a TM + health education (HE) group and 24 in an enhanced health education (EHE) group). Second, a cross-sectional study was conducted using 19 long-term meditators and 19 nonmeditator controls. Telomerase gene expression (hTR and hTERT) and relative telomere length in peripheral blood cells were measured in both studies by the quantitative real-time PCR technique.
In the NIH pilot study, Wilcoxon matched pairs tests showed a significant difference in the medians for hTERT (TM + HE = 0.03, p = 0.05; EHE = 0.60, p < 0.01) and hTR (TM + HE = 0.34, p < 0.001; EHE = 5.48, p < 0.001) in both groups. Dependent t-tests showed significant differences for systolic BP (TM + HE = -5.53 ± 11.23, p = 0.02; EHE = -9.00 ± 11.41, p < 0.001) in both groups and diastolic BP in the EHE group (EHE = -4.93 ± 7.05, p < 0.01). These findings suggest that intensive lifestyle modifications may be effective in promoting healthy biological aging.
In the cross-sectional study, the Kruskal-Wallis test indicated a trend toward significance for hTR in long-term meditators (long-term meditators = 0.74; non-meditator controls = 0.45, p = 0.08). Analysis of covariance showed a significant difference in psychological stress in long-term meditators (long-term meditators = 26.60 ± 3.95, non-meditator controls = 43.20 ± 0.68, p < 0.001), adjusting for BMI, exercise, smoking, and intake of vitamins and omega-3. Future research is warranted with larger sample sizes to further evaluate the impact of TM on telomerase gene expression and telomere length.
Elbi, Cem Cuneyt — Physiology
Analysis of CYP1A1 gene chromatin structure—evidence for multiple translationally positioned nucleosomes.
Order No. 9722238
In vivo low-resolution indirect end-labeling analysis of CYP1A1 gene chromatin structure revealed precisely positioned nucleosomes in the enhancer, promoter and transcribed regions. In vivo high-resolution LMPCR analysis of the CYP1A1 regulatory region revealed multiple micrococcal nuclease (MNase) cleavages with different relative intensities suggesting that multiple translationally positioned nucleosomes occupy the CYP1A1 regulatory region in major and minor translational nucleosome frames. In both low-resolution and high-resolution experiments, positions of nucleosomes did not change when CYP1A1 gene transcription was induced with 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD). In clone-11 cells, TCDD-induction globally increased the sensitivity of the CYP1A1 regulatory region to MNase indicating induced alterations in chromatin structure. In vitro nucleosome reconstitution and hydroxyl radical footprinting of the CYP1A1 enhancer demonstrated the presence of multiple overlapping translationally and rotationally positioned nucleosomes. In vivo footprinting and high-resolution LMPCR analysis of the CYP1A1 enhancer region in conjunction with the in vitro electrophoretic mobility shift analyses demonstrated that, following the treatment of cells with TCDD, the kinetics and time course of in vivo interactions at xenobiotic response elements (XREs) and at diverse sequence xenobiotic responsive elements (DXEs), and the kinetics and time course of DNA binding activity of aryl hydrocarbon receptor (AHR) paralleled the kinetics and time course of increased MNase accessibility of the CYP1A1 enhancer region in clone-11 cells, but not in clone-3 cells. Even though nuclear extracts prepared from both cell lines contained AHR that was capable of binding to XRE1, TCDD treatment results in the appearance of in vivo footprints at XREs and DXEs in the clone-11 cell line, but not in clone-3 cell line indicating the presence of an in vivo inhibitory activity that prevents the binding of AHR to XREs in the clone-3 cell line. The results of experiments presented in this thesis indicated that ligand-dependent and AHR-dependent chromatin alterations occur in the regulatory region of CYP1A1 gene without an accompanying change in the multiple translational positions of nucleosomes, and support the hypothesis that such alterations may be one transcriptional regulatory mechanism contributing to the high expression of CYP1A1 gene in clone-11 cells.
Gyawali, Dinesh — Physiology
Systematic Reviews and Meta-Analyses on Effects of Ayurvedic Interventions for Hypercholesterolemia, Hypertension, and Coronary Heart Disease
Order No. 10617601
Heart disease is the number one cause of death globally. Because of numerous side effects and the increasing cost of conventional treatments, there is a growing interest in complementary healing approaches like Ayurveda. However, for lack of sufficient scientific evidence, the safety and efficacy profile of these interventions has not yet been established. Systematic reviews and meta-analyses are the gold standard of evidence upon which clinicians, consumers, and policymakers rely. To date, there are no Cochrane or other systematic reviews and meta-analyses on Ayurvedic interventions for cardiovascular disorders. This study was conducted with the aim of exploring the efficacy of Ayurvedic interventions in hypercholesterolemia, hypertension, and coronary heart disease, the strength of the evidence, and their possible side effects.
Cochrane guidelines for systematic reviews and meta-analysis were followed to design, formulate and implement search strategies, select studies, collect, abstract and analyze data, assess risk of bias, and report and interpret results.
Evidence from the results was classified as per guidelines from the American Academy of Neurology. Three meta-analyses of 64 studies on 2629 people, studying the effects of 10 different Ayurvedic interventions for hypercholesterolemia, high blood pressure, and coronary heart disease, concluded that there is moderate to high strength evidence that several Ayurvedic herbal preparations are safe and effective. They pose no known side effects and thus can be used as dietary supplements or as an adjuvant to conventional therapy for better results. It was observed that Commiphora mukul (guggulu) reduced total cholesterol and low density lipoprotein levels by approximately 16 mg/dL and 18 mg/dL respectively with high certainty evidence. Similarly, garlic and Terminalia arjuna (arjuna) based formulas also had high to moderate strength evidence of their efficacy to reduce cholesterol levels. On the other hand, Arjun Vachyadi compound and Rauwolfia serpentina based formulas were found to have moderate certainty evidence to reduce high blood pressure. It was also observed that Ayurvedic formulas with arjuna as a chief ingredient are capable of improving left ventricular ejection fraction by 12% with a moderate strength of evidence. Findings of these systematic reviews and meta-analyses encourage future researchers to conduct methodologically rigorous randomized clinical trials with larger sample sizes.
Levitsky, Debra K. — Physiology
Effects of the Transcendental Meditation® program on neuroendocrine indicators of chronic stress
Order No. 9806955
Reduction of stress and its effects is an important objective because evidence suggests that stress causes or aggravates almost every human disease. The Transcendental Meditation® (TM) program is a widely studied stress-reduction approach; thus, understanding the neuroendocrine mechanisms mediating its effects would be useful. A previous cross-sectional study comparing long-term practitioners of the TM program to matched controls showed highly significant differences in urinary variables reflecting neuroendocrine function. Practitioners of the TM program had increased urinary excretion of dehydroepiandrosterone and nighttime 5-hydroxyindoleacetic acid (5-HIAA, the major metabolite of serotonin), decreased excretion of cortisol, aldosterone, the norepinephrine/epinephrine metabolite vanillylmandelic acid, zinc, calcium and sodium, and lower scores on tests of anxiety and mood disturbance. These and other results suggested that the differences were due to reversal of the long-term effects of stress by the TM program. The current prospective, random-assignment study attempted to test this hypothesis. Healthy, Caucasian men (18-32 y) were randomly assigned to either the TM program or a stress education control (SEC). Before and after four months’ practice of the assigned stress management program, three consecutive 8-hour urine collections were taken, and psychological self-report tests were administered. Urine samples were analyzed for 5-HIAA by spectrophotometry, for adrenocortical steroids by radioimmunoassay and for ions by atomic absorption spectrometry. Compared to controls, TM subjects showed a significant decrease in sodium excretion during the afternoon-evening period, near-significant decreases in calcium excretion during the afternoon-evening and 24-hour periods, and statistically insignificant decreases in excretion of the other three ions. High-compliance subjects from both groups showed significantly lower cortisol excretion over the 24-hour period than low-compliance subjects, suggesting that high compliance with either program leads to a reduction in cortisol. Results for sleeptime 5-HIAA were in the predicted direction, though not significant. Increased regularity of practice of the TM program was associated with a decreased POMS Tension-Anxiety score. Some meaningful changes may not have reached significance due to inadequate statistical power. Results were generally consistent with previous findings.
Luo, Bo — Physiology
Mapping of sequence-specific DNA-protein interactions: a versatile, quantitative method and its applications to transcription factor XF1.
Order No. 9608533
Mapping the consensus sequence of DNA binding proteins has been greatly accelerated by methods that use in vitro selection of high affinity sequences from a library of random DNA molecules, followed by PCR-amplification and sequence analysis. However, these methods lose other valuable information because they use repetitive cycles of selection and amplification.
We have developed a method that overcomes this limitation, not only defining the consensus sequence, but also quantitating the effect on DNA-protein affinity of replacing each base in the recognition domain with every other base. The features of this method are: (1) Instead of synthesizing one oligonucleotide population containing a long randomized domain, we synthesize several oligonucleotide populations, each randomized at two positions. Because only a few species are present in each population, the concentration of each is sufficient to saturate the DNA-binding protein. Consequently, the abundance of each protein-bound oligonucleotide accurately reflects its binding affinity. (2) Because only a few species are represented in each oligonucleotide population, a single round of selection and amplification generates sufficient material for sequencing. This avoids biasing the population of protein-bound oligonucleotides toward high affinity species. Consequently, the abundances of oligonucleotides determined by sequence analysis accurately reflects their binding affinities. (3) We developed data collection and analysis procedures that eliminate artifacts, and yield accurate measures of: (a) the selectivity of the protein for each base at each position within the recognition domain (normalized relative selectivity), (b) the contributions of individual sites within the recognition domain to the binding affinity (selectivity variance), (c) the relative affinity of a particular sequence for the DNA-binding protein (global selectivity). (4) We developed a procedure for deducing aspects of the matrix of hydrogen bonds involved in DNA-protein interactions.
This method was first developed and applied to the nuclear protein XF1, which binds to xenobiotic responsive elements, that class of elements through which the liganded Ah receptor activates transcription of the CYP1A1 gene. We confirmed results by (1) cloning and sequencing individual XF1-bound oligonucleotides, and (2) competition EMSA analysis of oligonucleotides designed on the basis of in vitro selection results. Source: DAI, 56, no. 11B, (1996): 5943
MacLean, Christopher R. K. — Physiology
Mechanisms relating stress reduction and health: changes in neuroendocrine responses to laboratory stress after four months of Transcendental Meditation
Order No. 9534651
Pharmacological treatments have as yet failed to show clear reduction in the risk of development of coronary heart disease (CHD). As a result, behavioral treatments such as stress reduction programs continue to receive attention as alternative approaches for prevention as well as for treatment of heart disease. Research on the Transcendental Meditation (TM) technique of Maharishi Mahesh Yogi has shown it to be effective in reducing hypertension and also responsible for decreased basal cortisol levels, both acutely with the practice and longitudinally. In this study, the longitudinal effects of TM and a stress education control (SEC) on neuroendocrine responses to acute laboratory stressors were investigated.
The purpose of the present research was to examine in healthy male caucasians (18-32 yrs) the acute effects of laboratory stressors on plasma cortisol, serotonin, catecholamines, thyroid-stimulating hormone (TSH), growth hormone (GH), testosterone and dehydroepiandrosterone (DHAS) during the stress session, and changes in their responses to stress after four months’ participation in either stress management approach.
Plasma for cortisol, serotonin and the catecholamines was sampled periodically throughout the one-hour stress session using a continuous blood withdrawal pump, whereas samples of GH, TSH, DHAS and testosterone were sampled for 4 min at the beginning and at the end of the session. The laboratory stress session consisted of mental arithmetic (6 min), a mirror star tracing task (3.5 min), and isometric hand grip (3.5 min), separated by 25 min rest periods. Samples for cortisol, GH, TSH and testosterone were assayed by radioimmunoassay and statistically analyzed by t-test and one-way repeated measures ANOVA.
When compared to the SEC group by ANCOVA, basal cortisol levels and the average cortisol levels across the stress session decreased, while cortisol responsiveness increased, for the TM group after four months’ practice. For the TM group, TSH response to stress decreased while GH and testosterone responses increased over the same period. Plasma serotonin baseline, average and response to stress during the session showed a rise for the SEC group and a fall for the TM group over four months of intervention. No differences between the two groups in the changes in catecholamine responses to stress from pre- to posttest were noted, likely due to the small sample size.
These results indicate that practice of the Transcendental Meditation technique is associated with lowered plasma serotonin and cortisol as well as increased cortisol response to acute stress, in addition to changes in the responses of GH, TSH and testosterone to acute stressors. It is suggested that not only the changes in cortisol but also changes in basal level or response of other hormones reflect reduction of, or resistance to, the effects of chronic stress, i.e., changes towards more optimal adaptive mechanisms. (Abstract shortened by UMI.) Source: DAI, 56, no. 06B, (1995): 3074
Mattik, Liis — Physiology
Effect of the Transcendental Meditation Program and Health Education on Allostatic Load: Promoting Normal Aging
Order No. 3475629
The “weathering” hypothesis of aging suggests that African Americans experience accelerated aging due to the cumulative effects of stress, which causes multisystem “wear and tear” or allostatic load. Previous research shows that lifestyle changes can improve individual biomarkers of allostatic load, leading toward greater health and normal aging. Specifically, practice of the Transcendental Meditation (TM) program and changes in diet and physical activity reduce biological aging factors. This exploratory study examines the combined effects of the Transcendental Meditation program and conventional health education of diet and physical activity on allostatic load in 19 African American women and men with stage I hypertension over a 4-month period. This is a sub-study to a parent study on hypertension mechanisms conducted at Howard University Medical Center in Washington, DC.
The primary outcome of this study was a seven biomarker allostatic load index: body mass index (BMI), total cholesterol (TC), high density lipoprotein-cholesterol (HDL-C), systolic blood pressure (SBP), diastolic blood pressure (DBP), glycosylated hemoglobin (HbA1c), and dehydroepiandrosterone sulfate (DHEAS). Each of the individual biomarkers of the allostatic load index and a psychosocial stress measure, the General Health Questionnaire (GHQ-20), were also analyzed.
Results showed a significant reduction in the allostatic load index based on composite T scores (pretest mean = 22.73 ± 2.83 and posttest mean = 20.13 ± 4.66; t (18) =3.46; p=0.003; effect size=0.92), and a marginally significant reduction in the allostatic load index threshold scores (2.63 ± 1.01 vs. 2.05 ± 1.13; t (18) =1.93; p=0.069; effect size=0.57). Results showed a significant reduction in the individual biomarkers of SBP (144.9 ± 7.29 vs. 136.72 ± 12.51; t (18)=2.94; p=0.009; effect size=1.12) and DBP (86.16 ± 5.44 vs. 81.30 ± 8.5; t (18)=2.83; p=0.012; effect size=0.89), and in the psychological distress measure: GHQ (23.54 ± 11.96 vs. 15.42 ± 10.34; t (18)=3.75; p=0.001; effect size=0.68). Other individual biomarkers also changed in the predicted direction, but the changes were not statistically significant.
Findings suggest that changing lifestyle with the Transcendental Meditation program and health education of diet and physical activity may reduce overall allostatic load in hypertensive African Americans.
Prevention of cardiovascular disease in Maharishi Ayur-Veda participants: a cross-sectional study of carotid atherosclerosis
Order No. 3374437
Cardiovascular disease (CVD) is the number one cause of mortality in developed countries. Previous research on Maharishi Ayur-Veda indicates reduction in cardiovascular risk factors and events, including decreased blood pressure, carotid atherosclerosis and all cause mortality.
Maharishi Ayur-Veda is Maharishi Mahesh Yogi’s revival of an ancient system of natural health care and includes mind, body, behavioral and environmental modalities to enliven the field of consciousness at the basis of all physiological functioning.
The purpose of this cross-sectional study was to investigate effects of long-term practice of Maharishi Ayur-Veda in a community setting on carotid artery blockage, blood pressure and serum lipid measures. Possible mediators of carotid blockage were also explored.
One hundred fifty-four adult subjects (n=74 MAV and n=80 controls) were included in this study. The Maharishi Ayur-Veda group included subjects from southeast Iowa who had been practicing the Transcendental Meditation (TM) program for greater than 5 years (mean=27 years). Among other MAV modalities widely practiced were the group practice of the TM-Sidhi program with Yogic Flying, yoga postures (asanas), a breathing exercise (pranayama), herbal supplements, and vegetarian diet. Controls were selected from the Stroke Detection Plus database of southeast Iowa clients.
All subjects were measured on the primary outcome, carotid artery blockage, by modified duplex ultrasound.
Analysis of covariance (ANCOVA), controlling for age, gender, body-mass index, family history of cardiovascular disease, smoking and exercise indicated significantly less carotid blockage in the MAV group (0.26 ± .37) compared to controls (0.36 ± .62) (p = 0.01). ANCOVA indicated significant difference between MAV and controls on systolic blood pressure (MAV 118.45 mm Hg ± 13.37, vs. controls 129.70 mm Hg ± 15.52; p=0.01), and HDL cholesterol (MAV 60.60 mg/dL ± 15.09, vs. controls 47.78 mg/dL ± 13.96; p=0.05). Systolic blood pressure was found to be a possible mediator of the effects of treatment on carotid blockage.
The results of this study provide further evidence for the effect of Maharishi Ayur-Veda on CVD risk factors. It is recommended that Maharishi Ayur-Veda, as a comprehensive preventative approach to cardiovascular disease, be incorporated into public health programs.
Reick, Martin — Physiology
Mechanisms of AH receptor down-regulation: Involvement of a labile protein, a calcium dependent protease, and a protein kinase
Order No. 9633803
In this dissertation we present evidence that CYP1A1 transcription and in vivo DNA-protein interactions at XREs are down-regulated in parallel with the DNA-binding activity of the ligand activated AH receptor complex (AHRC). This indicates that down-regulation of AH receptor DNA-binding activity is important in regulating CYP1A1 transcription, and that the AHRC is required continuously to maintain transcription. We show also that the down-regulation process depends on protein synthesis, and that it involves degradation of the AHR subunit but not of ARNT. AHRC down-regulation is a Ca$^{2+}$ dependent process, since both depletion of intracellular Ca$^{2+}$ stores and interference with Ca$^{2+}$ currents can inhibit down-regulation in Hepa-1 cells. We also demonstrate that a specific inhibitor of the Ca$^{2+}$ dependent protease calpain, as well as the protein kinase inhibitors H-7, calphostin C, and bisindolylmaleimide, can block AHRC down-regulation. These findings are functionally relevant, since treatments that block down-regulation increase AHRC dependent CAT gene expression substantially. Ca$^{2+}$ measurements reveal a very rapid and transient change in intracellular free [Ca$^{2+}$] in response to 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD).
Thus, TCDD orchestrates a second response that starts with a rapid rise in [Ca$^{2+}$]$_i$ and results in the activation of a Ca$^{2+}$ dependent protease, which in turn is instrumental in AHRC down-regulation. Down-regulation involves components in addition to calpain since: (1) In vitro calpain digestion of the AHRC results in partial digestion products not observed in vivo, which implicates secondary proteases. (2) Calpains are too stable to mediate cycloheximide action. (3) Down-regulation can be blocked by cycloheximide after TCDD-induced [Ca$^{2+}$]$_i$ transients have passed.
Finally, we present data suggesting that ionomycin might induce AHRC/ARNT complex formation in a ligand independent manner. Also, experiments with caffeine show that AHRC dependent CAT gene expression can be elevated without changing the levels of liganded AHRC, the kinetics of AHRC activation, or down-regulation. This increase in AHRC mediated transactivation is probably due to elevated [Ca$^{2+}$]$_i$. Source: DAI, 57, no. 06B (1996): p. 3580
Robinson, Charles Edward — Physiology
Mechanisms of inflammation in ulcerative colitis: a role for neutrophils and their free radicals
Order No. 9836285
Ulcerative Colitis (UC), a common form of Inflammatory Bowel Disease (IBD), is characterized by recurrent episodes of acute colonic inflammation, associated with abdominal pain, cramping, and bloody diarrhea. Since current treatments for IBD are not ideal, there is a need to develop better clinical management of UC. We chose to investigate pathophysiological mechanisms through which inflammation and tissue damage in UC occurs.
Hypothesis. (1) The final common pathway which leads to tissue damage in UC is mediated primarily by reactive oxygen species (ROS), (2) neutrophils are the main source of these ROS, and (3) these neutrophils are attracted to the colonic mucosa and activated by circulating (plasma) factors and local (colonic) factors.
Aims and methods. To support the above hypothesis, we proposed three aims: (Aim 1) To determine whether plasma from UC patients is pro-inflammatory. To this end, we evaluated the respiratory burst of PMN after incubation with plasma from UC patients. (Aim 2) To determine whether colonic factors in UC patients are pro-inflammatory. To this end, we evaluated the expression of the PMN adhesion molecule CD11b after stimulation with colonic factors from UC patients. (Aim 3) To determine whether colonic tissues from UC patients have abnormally high levels of oxidative products. To this end, we developed and used a novel immunoblotting technique to analyze oxidation products in colonic tissue from IBD patients. Colonic tissues were analyzed for protein carbonyls, nitrotyrosine, and 4-hydroxynonenal (4-HNE).
Results and discussion. (1) Plasma from UC patients significantly enhanced the PMN oxidative burst compared to plasma from controls. (2) Colonic factors from patients with UC significantly up-regulated CD11b compared to colonic factors from controls. These two results suggest that plasma and colonic factors in UC are pro-inflammatory, and may, therefore, perpetuate chronic inflammation. (3) Nitrotyrosine and 4-HNE were significantly higher in CD than in controls. In UC, nitrotyrosine and 4-HNE were also elevated, but these values did not reach significance. These results suggest that the ulcerations and tissue damage, which are hallmark features of IBD, may be the result of above normal oxidative stress. There were no differences between the groups for protein carbonyls. Source: DAI, 59, no. 06B, (1998): 2698
Robertson, Richard William — Physiology
Functional role of diverse sequence xenobiotic response elements (DXEs) in regulation of cytochrome P-4501A1 (CYP1A1) gene transcription
Order No. 9636934
This dissertation presents evidence that activation of CYP1A1 gene transcription by aryl hydrocarbons is a multicomponent process involving interactions of the AH receptor complex (AHRC) with xenobiotic response elements (XREs) and interactions of secondary transcription factors with diverse sequence xenobiotic response elements (DXEs) and that interactions at DXEs are functionally important and are dependent on interactions of functional AHRC with XREs. Interactions at DXEs appear to be important due to the fact that DXEs can cooperate with XREs to confer on a reporter gene responsiveness to aryl hydrocarbons and the fact that the appearance of in vivo interactions at DXEs during activation parallels those at XREs as well as the fact that the kinetics of appearance of these interactions correlate with the levels of expression of P4501A1 mRNA in different cell lines. Dependence of interactions at DXEs on the interactions of functional AHRC with XREs is supported by the following findings: (1) Interactions at XREs and DXEs appear in parallel. (2) Down-regulation of the AH receptor leads to disappearance of footprints at XREs and DXEs. (3) Inhibition of protein synthesis which is known to prevent down-regulation of the AH receptor, preserves footprints at DXEs and XREs. (4) Cell line- specific differences in the kinetics of in vivo interactions at XREs are parallel at DXEs. (5) AHs fail to induce in vivo footprints at both XREs and DXEs in Hepa 1 mutant cells lacking functional nuclear AH receptor complex. (6) The appearance and disappearance of in vivo footprints at DXEs could not be correlated with changes in constitutive DXE-specific DNA- binding activities and instead correlated with changes in the DNA-binding activity of the AH receptor. The in vivo and in vitro findings reported here regarding the relationship between the interactions at XREs and DXEs are consistent with a chromatin remodelling mechanism during activation of CYP1A1 transcription which is induced by the activated AH receptor exposing previously inaccessible DXEs for interactions with constitutively present nuclear factors. Finally, a link is made between CYP1A1 gene regulation, the unified field described by physics and Maharishi’s Vedic Science. Source: DAI, 57, no. 07B, (1996): 4210
Royer-Bounouar, Patricia Ann — Physiology
Transcendental Meditation technique: a new direction for smoking cessation programs.
Order No. 9000436
This prospective cohort study examined the effect of practice of the TM technique on smoking behavior during a period of 20 months. The subjects were 7070 individuals over 16 years of age who attended introductory lectures on the TM technique. Nine hundred and twenty-five (13%) of these learned the TM technique and 6145 (87%) did not. Prior to attending the lecture there were no differences in demographic variables or in smoking habits between the TM and non-TM groups. At the end of the study however, 33% of the smokers in the TM group had quit smoking as compared to 21% of those in the non-TM group (df = 1, chi squared = 3.85, p <.05). When regularity of practice of the TM technique was taken into account, it was found that 60% of those who meditated regularly twice each day had quit smoking as compared to 41% for those who meditated once each day, 21% for those who meditated irregularly or had stopped the practice, and 21% for those who never learned the TM technique (df = 3, chi squared = 18.25, p =.0004). The quit rates represented on the average a 12-month cessation period (SD = 7.57) for the meditators and a 10-month cessation period (SD = 8.54) for the non-meditators.
Furthermore, when quit and decreased rates were combined, it was found that 90% of those who practiced TM twice each day had quit or decreased smoking by the end of the study vs 71% for the once each day TM meditators, 55% for those who were irregular or no longer practiced TM, and 33% for the non-TM group (df = 3, chi squared = 35.734, p <.0001). These results strongly suggest a correlation between frequency of practice of the TM technique and increased likelihood of stopping smoking. Source: DAI, 50, no. 08B, (1989): 3428
Salerno, John William — Physiology
Selective growth inhibition of human colon adenocarcinoma and malignant melanoma cell lines by sesame oil in vitro
Order No. 9133557
Ayurveda, an ancient, comprehensive and prevention-oriented system of natural health care, has highly recommended the topical use of sesame oil above other oils. Applied to the skin on a daily basis and to the colon on a seasonal basis, it is claimed to improve physiological balance, vitality and longevity. Sesame oil contains relatively high levels of the essential polyunsaturated fatty acid, linoleic acid, in the form of triglycerides. The antineoplastic properties of the essential fatty acids such as linoleic acid have been documented. Linoleic acid and its metabolites have been shown to selectively inhibit and kill a variety of human and animal tumor cells both in vitro and in vivo while affecting normal cells either significantly less or not at all.
Therefore, it was hypothesized that linoleic acid and sesame oil would inhibit the growth of human colon adenocarcinoma and human malignant melanoma cell lines to a greater extent in vitro than their normal counterparts, human colon epithelial cells or human melanocytes. Cells in culture were supplemented with linoleic acid at a dose range of 3-100 $\mu$g/ml. For lipase-digested and undigested sesame oil, a range of 10-300 $\mu$g/ml was used. Growth inhibition was determined by harvesting and counting the total number of cells by hemacytometer after five or eight days of incubation and comparing them to controls. The results showed that free linoleic acid and undigested sesame oil all had a significantly stronger inhibitory effect on both the malignant melanoma and colon adenocarcinoma cell lines than on the corresponding normal cell lines throughout most of the dosage ranges tested.
For comparison with other common vegetable oils and their major component fatty acids, the saturated fatty acid palmitic acid, the monounsaturated fatty acid oleic acid, and the vegetable oils olive, coconut, and safflower were all tested on the malignant melanoma cells. Of these, only safflower had a significantly selective inhibitory effect.
In conclusion, these results suggest vegetable oils with high linoleic acid content, such as sesame, may possess selective antineoplastic properties against the in vitro growth of malignant melanoma and colon adenocarcinoma. This finding appears to warrant further investigation into the clinical usefulness of the Ayurvedic procedure of topically applying sesame oil.
Scaroni-Fisher, Mabel Marta — Physiology
A comparative risk assessment of chemical, genetic engineering, and organic approaches to pest management.
Order No. 9933981
As approximately 50% of the world’s food supply is destroyed each year by pests while the human population continues to expand rapidly, agricultural pest management is a serious global problem. Chemical pesticides, the principal approach to managing pests, have been much analyzed, but relatively little attention has been given to organic and genetic engineering methods. The purpose of this study was to conduct a comparative risk assessment of these three approaches, first generally and then in terms of a case study on Roundup Ready soybeans, a genetically modified crop.
The main risk with chemical pesticides is the development of pest resistance. Consequently, a greater percentage of the world’s crops are consumed by pests today than 50 years ago despite a thirty-fold increase in the use of chemicals. When the environmental and health problems of this approach are also considered, it is clear that chemicals not only carry unacceptable risks, they are unsustainable. For this reason the world will likely shift to one of the other two approaches.
The principal general risks of genetic engineering are escape of transgenes into the environment, development of pest resistance, harmful effects on non-target species, continued dependence on chemicals, toxic and allergenic health effects, and increase in antibiotic resistance. The risks of Roundup Ready soybeans in particular are increased use of Roundup, which has been shown to be acutely toxic to a significant number of organisms in the environment and to be potentially carcinogenic to agricultural workers and possibly consumers. Available evidence indicates that all these risks are real and potent. However, because no tests have been conducted to assess any of these risks for the long term, it is recommended that genetically engineered crops should not be commercialized until they are proven safe beyond a reasonable doubt.
The principal risk of the organic approach is the introduction of alien species for biological control, which can also result in effects on non-target species. However, if alien species are avoided, organic agriculture offers the least risk for managing pests effectively while maintaining or even increasing food production without endangering human health or the environment. Source: DAI, 60, no. 06B (1999): p. 2473
Siu, Chu-Sin — Physiology
Enhancer elements and transcription factors mediating the suppressive effect of IL-1 on CYP1A1 transcriptional activation.
Order No. 9726318
Total P450 content and related activities are known to decrease in cultured rat hepatocytes in response to IL-1 treatment. Previous studies reported that IL-1 suppressed the induction of CYP1A1 and CYP1A2 through a transcriptional mechanism. In order to identify the cis-acting element which mediates IL-1 effect, we analyzed the promoter activity of the 5′-flanking region of CYP1A1 gene. Two elements were identified: xenobiotic responsive element (XRE) and HNF-4 binding site. Transient transfection experiment using primary hepatocytes transfected with CAT reporter genes carrying either XRE1 or XRE2 gave direct evidence that XREs can mediate IL-1 action, although the level of AH receptor binding was not affected. By deletion analysis of the 3.1 kb regulatory region of the CYP1A1 gene, a 36 bp IL-1 responsive region was identified. The region is a composite of three distinct sites: XRE, Sp1-like, and IL-1 responsive element (ILRE). Gel mobility shift assays demonstrated that the ILRE binds constitutively a liver-enriched protein designated as IL-1 responsive protein (ILRP), whose binding activity is reduced by IL-1. Antiserum to the rat HNF-4 transcription factor supershifted the DNA-protein complex formed between ILRE and ILRP. Cotransfection with an HNF-4 expression plasmid increased transcriptional activity of the CYP1A1 minimal promoter carrying one copy of ILRE (about 1.7-fold), or three copies of ILRE (about 2.7-fold) in HepG2 cells. These data suggested that ILRP is in fact HNF-4. Although the transactivation potential of HNF-4 is weak in the context of CYP1A1 promoter, its reduced binding activity upon IL-1 treatment suggests that it may mediate IL-1 action in down- regulating CYP1A1 induction. This is the first report that showed the binding activity of HNF-4 can be down-regulated by IL-1. In summary, IL-1 down-regulation of CYP1A1 transcriptional activation is mediated by XREs, however the mechanism by which this occurs is not known. The HNF-4 binding site may also mediate IL-1 action but more direct evidence is needed. Source: DAI, 58, no. 03B, (1997): 1125
Streicher, Christoph — Physiology
Weight changes of biological and chemical material in a thermodynamically closed system.
Order No. 9318167
The basic research to test the validity of the law of constancy of weight during chemical reactions was done by Landolt /1/, /2/, /3/, Manley /4/. Irregularities in the data of these experiments and the research of Hauschka /6/ on weight changes of sprouting seeds in a thermodynamically closed system suggested a reinvestigation of the question of constancy of weight for chemical reactions and for biological material. The reduction of silver nitrate to metallic silver was chosen as the chemical reaction. 3.5 g silver nitrate was reduced in 250-ml gas-tight round-bottom glass flasks, silver lining the inner glass surface. Sprouting seeds were used in biological experiments. The total weight of the flasks was monitored over several days with an electronic Mettler AE 163 balance (readability 0.1 mg) and a mechanical two-pan Volant balance (readability 0.2 mg). 25-ml flasks with accordingly smaller silver lining were weighed with an electronic two-pan Sartorius balance with vacuum capability (readability 1 $\mu$g). Control flasks contained water or glass balls. Extensive artifact research was conducted. Some of the weighings were done under constant temperature and constant relative humidity. Significant deviations from constancy of weight were found in the range of 0.2 to 1.5 mg for the 250-ml flasks and up to 0.06 mg for the 25-ml flasks. For sprouting seeds deviations of the same magnitude were found. Deviations of positive and negative amounts and zero results were obtained. Results were reproduced in blind experiments by five different experimenters. Experiments during eclipses implied a relationship of weight changes with planetary constellations. Source: DAI, 54, no. 03B, (1993): 1449
Teifeld, Robert M. — Physiology
Transient superinducibility of CYP1A1 mRNA and transcription, and the DNA elements responsible for mediating this effect
Order No. 9227181
The polycyclic aromatic hydrocarbon (PAH)-inducible CYP1A subfamily of cytochromes P450 is involved in the oxidative metabolism of a wide variety of endogenous and exogenous compounds, including carcinogens and other environmental contaminants. The expression of one member of the subfamily, the CYP1A1 gene, has previously been shown to be under transcriptional control modulated both by PAH-type inducers and by other factors. CYP1A1 transcription can also be superinduced by simultaneous treatment of certain cells in culture with PAH inducers and inhibitors of protein synthesis. In the present study we demonstrated that some cell types that are highly superinducible can be rendered unresponsive to cycloheximide if its addition is delayed approximately 1.5 hours after the cells are exposed to polycyclic aromatic hydrocarbons (Teifeld et al., 1989). This phenomenon, termed transient superinducibility, demonstrates that there are two phases to the initial transcriptional response of the CYP1A1 gene to polycyclic aromatic compounds: an early phase, during which inhibition of protein synthesis can augment the effect of inducers, and a later phase, during which inhibition of protein synthesis does not further increase CYP1A1 gene transcription rate. In order to identify the DNA elements which mediate this phenomenon we used chloramphenicol acetyltransferase (CAT) expression vector analysis.
These studies revealed that the region between -1.2 and -0.9 kb was necessary to mediate transient superinducibility in the NRK cell line. Further experiments with synthesized CYP1A1 sequences showed that the same elements responsible for mediating the inducible expression of the CYP1A1 gene by xenobiotic treatment, the two xenobiotic response elements, XRE1 and XRE2, were sufficient to mediate both superinduction and transient superinducibility. Loss of responsiveness to cycloheximide was correlated with the disappearance of Ah receptor/XRE binding activity from nuclear extracts of induced cells. Source: DAI, 53, no. 06B, (1992): 2700
Tomlinson, Philip Ford — Physiology
Superoxide scavenging, hydrogen peroxide deactivation, and benzo(a)pyrene chemoprotective activities of a Maharishi Ayurveda food supplement, Maharishi Amrit Kalash.
Order No. 9427918
Maharishi Ayurveda, a recent restoration of the traditional health care system of India, upholds Maharishi Amrit Kalash (MAK)–an herbal fruit concentrate (MAK-4) and an herbal tablet (MAK-5)–as a rasayana, a food supplement which promotes physiological balance, health, and immunity. Antioxidant and anticarcinogenic activities of MAK have been previously demonstrated in biochemical, cell culture, and in vivo studies. In the present investigation, superoxide scavenging and hydrogen peroxide deactivation properties of MAK were quantified in enzymatic assays, and the ability of MAK to scavenge reactive oxygen species within HeLa cells and to protect C3H/10T1/2 mouse embryo fibroblast-like cells from benzo(a)pyrene transformation was determined.
The superoxide scavenging properties of MAK were investigated with superoxide radicals generated during the catalytic activity of xanthine oxidase. Solutions containing MAK-4 and MAK-5 (7.5 and 2.0 mg dry weight of extract/ml) inhibited the reduction of nitroblue tetrazolium (NBT) 96% and 98%, respectively. NBT reduction was decreased 50% by 30 $\mu$g dry weight of extract/ml MAK-4 or 94 $\mu$g dry weight of extract/ml MAK-5. Ascorbic acid inhibition of superoxide radical reduction of NBT reached 88% at 0.176 mg/ml, but declined to 42% at a concentration of 1.76 mg/ml. The rate of uric acid production monitored at 290 nm demonstrated negligible inhibition of xanthine oxidase by MAK-4, MAK-5, or ascorbic acid.
The ability of MAK to deactivate hydrogen peroxide was measured by determining the extent to which MAK inhibited the reduction of scopoletin by H$_2$O$_2$ generated during the catalytic activity of glucose oxidase. At what appear to be approximately physiological concentrations (0.75 and 0.20 mg dry weight of extract/ml), MAK-4 and MAK-5 inhibited loss of scopoletin fluorescence 79% and 98%, respectively.
In cell culture, extracts from 1 $\mu$g/ml MAK-4 and 10 $\mu$g/ml MAK-5 inhibited the transformation of C3H/10T1/2 cells by benzo(a)pyrene 54% and 56%, respectively. Extracts from 20 $\mu$g/ml MAK-4 and 15 $\mu$g/ml MAK-5 inhibited intracellular reactive oxygen species generated by HeLa cells and monitored by 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide reduction 12% and 17%, respectively.
The results contribute to an understanding of previously reported anticarcinogenic, antioxidant, and anti-aging properties of MAK-4 and MAK-5, and warrant consideration in the light of present preventive, nutritional, and chemotherapeutic approaches to health, antioxidant defense, and carcinogenesis. Source: DAI, 55, no. 06B, (1994): 2120
Vivier, Erika Helene — Physiology
Effects of a multimodal approach of Maharishi consciousness-based health care on carotid
Order No. 3131257
Cardiovascular disease (CVD) is still the largest contributor to morbidity and mortality in the world. Over the past 30 years, focus on primary and secondary prevention of cardiovascular disease and its related risk factors has yielded three major prevention strategies: drug therapies, lifestyle modification, and stress reduction therapies. Despite this effort, up to 50% of patients with documented CVD have recurrent cardiac events. A new angle that supports further prevention of CVD is needed. The Maharishi Consciousness-Based Health Care system, a natural, prevention-oriented system of health, includes 40 modalities for enlivening the “inner intelligence of the body,” which are responsible for coordinating diverse physiological systems into an integrated whole. This pilot trial compared effects of four Maharishi Consciousness-Based Health Care modalities to those of usual care on carotid intima-media thickness (IMT) in elderly subjects with documented cardiovascular disease and two to six CVD risk factors. The Maharishi Consciousness-Based Health Care modalities included the Transcendental Meditation program, neuro-physiological integration exercises, and dietary and herbal supplement approaches. Usual care included the secondary prevention system offered at the University of Iowa Hospitals and Clinics, which is based on practice guidelines promoted by the American Heart Association. Twenty-eight volunteer subjects were matched on age (mean 72 years), gender and severity of documented CVD. Measures were taken for baseline and nine-month posttest. At baseline the experimental group had significantly higher BMI (30 versus 26), triglycerides (177 mg/dl versus 101 mg/dl), and blood pressure (137 versus 120 mm Hg). They were more often single, and had lower income. Covarying for these baseline differences in major CVD risk factors, the experimental group tended to show a greater decrease in mean common carotid IMT after nine months (Experimental −0.023 mm, Usual Care +0.041 mm, p = 0.07). The IMT regression in the experimental subjects was associated with high compliance. There was a strong correlation between compliance and increases in physical, mental and behavioral strength (r = 0.47), as assessed by Maharishi Consciousness-Based Health Care procedures. These findings suggest that enlivening ‘the body’s inner intelligence’ could be an effective tool to deal with the current epidemic of cardiovascular disease.
Wenneberg, Roland S — Physiology
The effects of Transcendental Meditation on ambulatory blood pressure, cardiovascular reactivity, anger/hostility, and platelet aggregation
Order No. 9421866
In addition to traditional risk factors, psychosocial stress such as Type A behavior pattern, anger and hostility, and increased cardiovascular reactivity to stress have been proposed as risk factors for cardiovascular disease. Therefore, stress reduction approaches such as Transcendental Meditation (TM) may be useful in modifying these behavioral factors.
Forty normotensive volunteers were pretested and posttested for cardiovascular reactivity to a standard battery of laboratory stressors, underwent ambulatory blood pressure monitoring during the day, and were tested for levels of anger, hostility, and platelet aggregation. They were randomly assigned to either TM or a cognitive-based Stress Education Class (SEC) control group. Both treatment groups involved similar instructional attention and daily practice.
After a four-month treatment period, no significant differences were found between the two treatment groups in cardiovascular reactivity or in average cardiovascular levels in the laboratory or in the field. However, the regular TM practitioners demonstrated increased systolic blood pressure reactivity to the preparation of a speech and to the speech task itself. In addition, the regular TM practitioners also demonstrated a significant reduction in average ambulatory diastolic blood pressure. No significant differences in platelet aggregation, anger or hostility were found between the two groups, except that the SEC group had lower outwardly expressed anger. Among all subjects of the study, significant positive correlations were found between outwardly expressed anger and collagen-induced platelet aggregation, and heart rate reactivity.
These results show that it is possible to decrease average ambulatory blood pressure levels without decreasing cardiovascular reactivity in normotensive subjects with the regular practice of TM. This finding supports the hypothesis that tonic (average) and reactive blood pressure are largely independently regulated and therefore can be differentially modified by behavioral treatment. Since average ambulatory blood pressure is a better predictor of cardiovascular complications of hypertension than clinic blood pressure, this finding may have implications for the prevention of cardiovascular disease. Data from this study also suggest mechanisms whereby stress may be translated into coronary heart disease, i.e., anger may increase coronary heart disease through its association with platelet aggregation and heart rate reactivity. Source: DAI, 55, no. 06B, (1994): 2120
Xu, Chuanli — Physiology
Transcriptional suppression of cytochrome P450 1A1 gene is under redox regulation – Ah receptor-mediated processes with distinct mechanisms.
Order No. 9726319
Oxidative stress in a cell is defined as an unusually high level of reactive oxygen species, which can be caused by a number of stimuli. We have investigated the molecular mechanism whereby transcriptional expression of the CYP1A1 gene was regulated by redox potential. XRE was found to be the response element by which H$_2$O$_2$ exhibited its inhibitory effect on the transcription of the CYP1A1 gene in the Hepa 1 cell line using the transient transfection technique. However, H$_2$O$_2$ did not alter the DNA binding activity of the Ah receptor. Further study demonstrated that modulation of XRE enhancer strength by various means could modify H$_2$O$_2$-dependent suppression of CAT expression. The results from this study suggest the presence of a protein that inhibits transactivation by the Ah receptor without influencing its DNA binding ability. In the search for the candidate protein(s) which mediated H$_2$O$_2$ action on Ah receptor function, we first demonstrated that overexpression of the product of the retinoblastoma susceptibility gene (RB) downregulated transcription of the CYP1A1 gene. XREs alone were sufficient to mediate RB action. Results from coimmunoprecipitation assays indicated that the Ah receptor coprecipitated with the RB protein or its family member p107 and vice versa. Similar to other RB binding proteins, the Ah receptor only bound to the hypophosphorylated form of RB or p107 protein. To further characterize regulation of CYP1A1 by redox potential, more powerful and more specific oxidants were used to oxidize vicinal sulfhydryl groups in intact Hepa 1 cells. Pretreatment with diamide or phenylarsine oxide for 20 minutes rapidly prevented the formation of the ligand-dependent Ah receptor/XRE complex and thus inhibited XRE-mediated luciferase expression. Direct oxidation of the Ah receptor by PAO was further demonstrated by the experiments in which DTT, a reducing agent, could restore the Ah receptor DNA binding activity. Finally, a one hundred-fold difference in the effectiveness between dithiol 2,3-dimercaptopropanol and monothiol 2-mercaptoethanol in reversing PAO-dependent inhibition of Ah receptor DNA binding activity suggests that vicinal sulfhydryl residues may be involved in the DNA binding of the Ah receptor. Source: DAI, 58, no. 03B, (1997): 1130
# Output two numbers (Robbers' thread)
In the cops' thread of this challenge, answerers will be writing programs which output two numbers, $$x$$ and $$y$$. As robbers, your goal will be to crack their submissions. To do this, you find a program in the same language as a cop's program which outputs a number in between $$x$$ and $$y$$, and is the same length as or shorter than the cop's program. Your program must use the same output method as the cop's submission.
You submit that program here and notify the cop in a comment on their answer.
For every answer you crack here, you get 5 points. As an extra incentive, you will gain an additional point for every byte shorter your program is than the cop's (for example, cracking a 14-byte cop with a 6-byte program scores 5 + 8 = 13 points). If your program is the same length, that is fine; you still get 5 points.
The winner is the person with the highest total score across all answers.
# Malbolge, 6 bytes, cracks Kamila Szewczyk's answer
(&<q#
Try it online! (or try it here to avoid timing out)
Outputs 2.
Explanation.
The goal is to output the ASCII character "2" in a program of 7 bytes or less.
Malbolge code & data occupies the same memory space. When a Malbolge program is loaded, the last two single-byte instructions act as 'seeds' that determine how the remaining memory is initialized (in a deterministic but rather uncontrollable fashion). So if we try to construct 5 command/byte programs that use the contents of the memory to generate the ASCII encoding for "2", we can try different combinations of the remaining 2 commands/bytes to try to find one that gives the correctly-initialized memory to achieve this. There are 8 valid Malbolge commands (and any other byte in the program will generate an error upon loading), so this gives us 8x8 combinations to try per 5 byte program. This is obviously less than the 1/256 chance of 'hitting' the ASCII character "2" in any particular byte, so we'll probably have to try more than one program.
Each Malbolge command self-modifies immediately after execution, making re-use rather difficult. So here we try only single-pass programs, without any attempt to loop. We will need to use the commands j (set the data pointer), < (write an ASCII value to the output), and (probably) v (end the program). This gives us room for 2 more data-altering commands within the 5-byte limit, so we can try combinations of * (rotate) and p (the tritwise op operation), as well as o (no operation; but in Malbolge this changes the data pointer as a side effect, so it can also affect the output).
Unfortunately, after trying all 64 combinations of the final two bytes across the Malbolge programs jpo<v.., jop<v.., ojp<v.., jpp<v.., jppp<v., jpppp<v, j*o<v.., j*p<v.., j*pp<v. and j*ppp<v (where . represents any of the 8 Malbolge commands), we can only generate the output numbers 0, 1, 3, 5, 6, 7, 8, 9. At this point it seems likely that Kamila Szewczyk may have used a similar approach, and therefore hoped that generating the character "2" in 7 bytes or less is impossible...
But: what about shorter programs that omit the v (end the program) command? These allow the memory to be initialized differently, and so we can maybe find a combination that enables output of "2"... but the Malbolge interpreter will now continue reading bytes from the rest of the initialized memory and executing them as commands, with rather uncontrollable consequences! Still, if it hits a v (end of program) before it hits a < (write output), that could be Ok: so let's try it!
After some searching, we find that j*p<.. is indeed able to initialize the memory to output "2", using two different suffix combinations: when test-run, <j unfortunately keeps running and outputs an additional "2L" before stopping, but /j stops after the "2". It's a crack!
To load into Malbolge, the final program - j*p</j - must finally be encoded using a series of operations (see the spec) to yield the final loadable code of (&<q#.
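The brute-force search described above can be sketched roughly as follows. This is my reconstruction rather than the answerer's actual tooling, and run_malbolge is a hypothetical helper that takes a program written with the normalized command names, applies the position-dependent encoding required by the spec, runs it in a Malbolge interpreter, and returns whatever it printed.
from itertools import product
COMMANDS = "ji*p</vo"                       # the 8 valid Malbolge commands
skeletons = ["jpo<v..", "jop<v..", "ojp<v..", "jpp<v..", "jppp<v.",
             "jpppp<v", "j*o<v..", "j*p<v..", "j*pp<v.", "j*ppp<v",
             "j*p<.."]                      # the 7-byte skeletons above, plus the shorter one without v
for skel in skeletons:
    for fill in product(COMMANDS, repeat=skel.count(".")):
        prog = skel
        for c in fill:                      # replace each wildcard '.' with a concrete command
            prog = prog.replace(".", c, 1)
        if run_malbolge(prog) == "2":       # hypothetical helper: encode, run, capture output
            print("crack candidate:", prog)
A loop of this shape turns up j*p</j as one of the candidates whose output is exactly "2"; programs that print anything after the "2" (like the "2L" case mentioned above) are rejected by the equality check.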
# R, 4 bytes
2e98
Try it online!
Not much fun, but the range makes it easy to crack.
# Vyxal, ~~26~~ ~~18~~ 14 bytes (88 bytes saved)
Crack of @lyxal's vyxal cop program
k×:\(+33*\↵+Ė›
Try it Online!
Outputs the lowest bound + 1.
(first time doing vyxal, tell me if I did something incorrectly)
## Explanation
k× Push 2147483648 onto the stack
=> [2147483648]
: Duplicate the stack
=> [2147483648, 2147483648]
\(+ Concatenate the last element of the stack with (
=> [2147483648, '2147483648(']
33* Repeat the string on top of the stack 33 times
=> [2147483648, '2147483648(21 ... 8(2147483648(']
\↵+ Add ↵ to the string on top of the stack
=> [2147483648, '2147483648(21 ... 8(2147483648(↵']
Ė Evaluate as vyxal code the last element of the stack (see the cop thread)
=> [ <lowest bound> ]
› Add 1 to the result
=> [ <lowest bound +1> ]
implicit output
• :( are you sad? Jan 31 at 14:27
• C+: no I'm happy Jan 31 at 16:26
• rip the old version of the code where these ^ jokes ^ made sense T_T Feb 1 at 0:32
# Brachylog, 5 bytes (0 bytes saved), 20922789888
16ḟ↔↔
Outputs $$20\text{,}922\text{,}789\text{,}888$$.
Cracks @Fatalize's Brachylog Cop answer, which is also 5 bytes and has the range $$14\text{,}159\text{,}265\text{,}359$$ to $$61\text{,}803\text{,}398\text{,}875$$.
Try it online.
Explanation:
16ḟ # Push the factorial of 16: 20922789888000
↔ # Reverse (but remain an integer): 88898722902
↔ # Reverse back: 20922789888
• Clever use of reverse on integers! Jan 31 at 21:41
# Excel, 6 bytes, cracks Luke Dunn's answer
=1E250
Try it online!
# JavaScript (Node.js), 22 bytes
x=>'9'.repeat(1e7-1)-1
Try it online!
This cracks l4m2's cop.
Works only in theory. It takes the string from the cop's answer (with 9999999 golfed to 1e7-1) and subtracts 1, which coerces it to a number.
# JavaScript (Node.js), 22 bytes
x=>'909'.repeat(1e7/3)
Try it online!
Works in reality.
This one exploits the fact that the repetition count is rounded down.
• Intended solution x=>'9'.repeat(1e7-2)+1
– l4m2
Jan 31 at 17:20
# brainfuck, 15 bytes (10 bytes saved), cracks mathcat's answer
-[>+<-----]>++.
Try it online!
# Octave, 6 bytes, cracks robbie crockett's first answer
28^213
Try it online!
Found by a brute-force search for n where log(1.7e308, n) is just below an integer, so that n^ceiling(log(1.7e308, n)) is not too large.
• I updated it. Good luck with the new one Feb 3 at 19:08
# Seed, 4695 bytes (96 bytes saved), $$9^{999999999999999}$$
48 136118222288577729572552152791709368605773628347021471008690822115373392141244799774415998107055377981742447271161172093778621602759645881735548840779482802560209189517746243033627985739511100518293138471484155429274889918272710559291982273929396912514475806069037521031407489333870378703173662910402083164029699758795021006600087782123480276870349030965867563726975877728510764214971834340717031826050329624446895168002822671009104546726772407954859794475567312583905626817398451742396403319735329821368897163429801670218889821848435198982270769585133239840182076928600225425386926305906233725574337009104530356853863448383654832344408718772724757372631143287400290703116853900133240035664821661976772888810467002790969870927064001031662898740474966711631710727286951427698675681491111582579107155062882709864229052873760966169264245600186093538370974908907020474070930611562816292100183878427364763405389786799631792649382212389347264227332059441821014321577413408905755610100524739302573178913805937883513832378054230283344383753275902621779919104894612137194747465109053363132937703205299899090062685892622098070989799300096851455096301228710972701865038051936551041342440525433281771038877777161882636844059426594521805350407152290165733138028576146967247522643744694141195004623423857857693427850032738203686267421228588828435013088446245077772203162760491431357812801871298876631302950079959010177847566668826266369346040779302669462790393198104623019338124802383380441313038602352462845958086644089497490467594136351122837597449672487010313897868432189301305885675861325709179122431375903938149903427353406026693988298068066087414204535567217531278847246663063521026232422013930241593518848232749733716027932971967899916260405600669083231342566885341691912368360136490149146410247155251733391927312938665018633466748949229241307444214633067906713336048685943513134140263372359696945716406670478168007378272916725550905826132341140883161990841937113616809856754859509173906225877693272986499970529936397933788880924831441272134709649991804869975215537765013304261103280798035738449283801074367455416975896866833656686794976325679828000354046625307811860168996790355272279436375392093313679100121487235860221918848736347751782535229777178218227648706506003129525970134666691479694925217524902959803338422438060038658440582225592904612537669959087757488373578611752282868269275950214371229748731729215641332246654869362216109284862185496124584262579953474203167872603070470067950478753207808608248943878860011908115468802190546000519840247324568161439611525769820989196058123147718137077000120913662868599382356603269721601449988482683980532792892177658333875496383645485133701798213530386049902190319666619680418627484562095523349838908554879800729189834397636333150687591784783593636695475815080002634570530677847790228554244311329470586241698758265462755938042304017099033993858637076639260904636077951244351657896256391998258900387543477862789459882688297086948688007197707992577339183010173354065933555655553421866069534486864447491231024219783356908661520568506172333653043056654975852125377780460189002009695349728259441892176823884813139372756407875800289098734167129099381792737264454097003678873187452677901588877289663613161551851497712147029113595405001612731177702538079055158748598170770058186080901509151495332123511968083584199679029073876741472874261160159173180904406481520378171876773781817052362670957754076341052151226864982044423001068087857032658832611786035188675526138932008287577260700322890640413383485742814765880396419
208969263227721270455088178847430617675460588467868983618120954389404362140231158416321704501412371098629111850576048849097883084678663620297477847964712187292115607313892012356396662805274189107494467684644891521782414262128173708455480707454354968100553678330533006106890027275388315886464487511584246560889155661821779830187538375406868333151665726444207031130208083500148213129337075506341798868095841311106337887887291979827246042278337117981115263174824668396544795084846659779500193321324649508077325582071348037416574693493790019348532848623137704882189995492207319495341511822520469501170794388924439459462165936478786200751196030621197856797511117488998623310034898658778486649937046992404397946797175451641601275425983240032855623615931663152448038884938302103599522781131691794955910668601656339114328483085213740867710698818388423094906297631419655436847220572682142406966844544894284269328182913703189411753835932978396033829515305455631318696053728968876051410783596705077112312728424202529475578776676152981258884551184505959739004252170359641105692336889964755135425463797747654176970663980003607348930037794652574271713970
Nothing clever on my part; I stole this from the answer's edit history.
# Python 3, 19 16 bytes (saves 22 25 bytes)
print(3*10**456570)
Cracks DialFrost's cop answer... I just did a bunch of trial and error to get this lol.
• 9**478462 saves 3 more bytes. (There should be a way to do x**x**x to save even more bytes, but I don't have the patience to find it.) Feb 2 at 16:35
# !@#\$%^&*()_+, 6 bytes, cracks Fmbalbuena's answer
~(#)
Try it online!
|
|
# Compound Interest when Interest is Compounded Half-Yearly – Definition, Formula, Examples | How to Calculate Compound Interest Half-Yearly?
The calculation of compound interest by using the growing principal is a complicated and lengthy process when the time period is long. Hence, compound interest when interest is compounded for 6 months is shown here. You can find the formula and its importance in the next sections. Also, you can find the derivation with solved examples of compound interest when interest is compounded half-yearly. 10th Grade Math Compound Interest concepts are explained in detail on our website for free.
## How to find Compound Interest when Compounded Half Yearly?
To find the compound interest for half-yearly compounding, suppose the rate of interest r is quoted per annum and interest is compounded half-yearly (i.e., every 6 months). The annual interest rate is then halved to r/2, and the number of years is doubled, giving 2n compounding periods. To calculate the compound interest when interest is compounded every 6 months, we use the formula:
Let Principal = P, annual Interest Rate = r% (so the half-yearly rate is r/2%), Number of half-year periods = 2n, Compound Interest = CI, and Amount = A. Then
A = P$$\left(1 + \frac{r/2}{100}\right)^{2n}$$
In half-year compounding, the Number of years is multiplied by 2, and the interest rate is divided by 2.
Compound Interest = Amount – Principal
CI = P$$\left(1 + \frac{r/2}{100}\right)^{2n}$$ – P
CI = P$$\left[\left(1 + \frac{r/2}{100}\right)^{2n} - 1\right]$$
If any three values of the terms are given, the fourth can be easily found.
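To make the formula concrete, here is a short Python sketch (an illustrative helper, not part of the lesson) that computes the amount and compound interest for half-yearly compounding; the figures match Example 1 further below ($10,000 at 10% per annum for 1 1/2 years):

```python
def compound_half_yearly(principal, annual_rate_pct, years):
    """Amount and compound interest when interest is compounded half-yearly."""
    periods = round(2 * years)            # 2n half-year conversion periods
    rate = annual_rate_pct / 2 / 100      # r/2 per period, as a fraction
    amount = principal * (1 + rate) ** periods
    return amount, amount - principal

amount, ci = compound_half_yearly(10000, 10, 1.5)
print(round(amount, 2), round(ci, 2))     # 11576.25 1576.25
```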
### Compound Interest Half Yearly Formula Derivation
In the derivation of the formula, we consider the half-yearly compound interest on a principal P for 1 year at a rate of interest of r% per annum, i.e. r/2% per 6 months. At the end of the first 6 months the principal changes, because interest is compounded half-yearly, and the interest for the next 6 months is then calculated on this new amount.
Simple interest at the end of the first 6 months:
SI = (P * r * 1)/(100 * 2)
At the end of the first six months, the amount is
A1 = P + SI
A1 = P + (P * r * 1)/(100 * 2)
A1 = P[1 + r/(2 * 100)]
Call this amount P2, i.e. P2 = P[1 + r/(2 * 100)].
Simple interest for the next 6 months, with the principal now changed to P2:
SI1 = (P2 * r * 1)/(100 * 2)
Amount at the end of 1 year:
A2 = P2 + SI1
A2 = P2 + (P2 * r * 1)/(2 * 100)
A2 = P2[1 + r/(2 * 100)]
Substituting P2 = P[1 + r/(2 * 100)] gives the final amount after 1 year:
A = P[1 + r/(2 * 100)]^2
Extending the same argument to n years (i.e. 2n half-year periods), we get
A = P[1 + (r/2)/100]^(2n)
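Written compactly, the same derivation reads (with r the annual rate and n the number of years):

$$A_{\text{6 mo}} = P\left(1 + \frac{r/2}{100}\right), \qquad A_{\text{1 yr}} = A_{\text{6 mo}}\left(1 + \frac{r/2}{100}\right) = P\left(1 + \frac{r/2}{100}\right)^{2}$$

and, applying the same step once per half-year over n years,

$$A = P\left(1 + \frac{r/2}{100}\right)^{2n}, \qquad CI = A - P$$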
### Half Yearly Compounding Examples | Compound Interest Half Yearly Questions
Here are a few solved examples of compound interest when it is compounded half-yearly.
Example 1:
Find the compound interest and amount on $10,000 at 10% per annum for 1 1/2 years if interest is compounded every 6 months.
Solution:
Given that, Principal (P) = $10,000
Number of half-year periods (2n) = 1 1/2 × 2 = 3/2 × 2 = 3
Rate of interest compounded for 6 months = 10/2% = 5%
The formula used to calculate amount is
A = P$$\left(1 + \frac{r/2}{100}\right)^{2n}$$
A = 10000 × (1 + $$\frac { 5 }{ 100 }$$)^3
A = 10000 × (1.05)^3
A = 10000 * 1.05 * 1.05 * 1.05
A = 11,576.25
Hence, the amount is 11,576.25.
Compound Interest = Amount – Principal
CI = 11,576.25 – 10000
CI = 1,576.25
Hence, the compound interest = 1,576.25
Therefore, the compound interest and amount are 1,576.25 and 11,576.25 respectively.
Example 2:
Find the compound interest and amount on $6,000 for 1 1/2 years at 8% per annum, compounded every 6 months (half-yearly).
Solution:
Given that, Principal (P) = $6,000
Number of half-year periods (2n) = 1 1/2 × 2 = 3/2 × 2 = 3
Rate of interest compounded for 6 months = 8/2% = 4%
The formula used to calculate amount is
A = P$$\left(1 + \frac{r/2}{100}\right)^{2n}$$
A = 6000 × (1 + $$\frac { 4 }{ 100 }$$)^3
A = 6000 × (1.04)^3
A = 6000 * 1.04 * 1.04 * 1.04
A = 6,749.18
Hence, the amount is 6,749.18.
Compound Interest = Amount – Principal
CI = 6,749.18 – 6000
CI = 749.18
Hence, the compound interest = 749.18
Therefore, the compound interest and amount are 749.18 and 6,749.18.
Example 3:
Find the amount and compound interest on Rs. 15,000 in 1 year at 8% per annum, where the interest is compounded half-yearly.
Solution:
Given that, Principal (P) = Rs. 15,000
Number of half-year periods (2n) = 1 × 2 = 2
Rate of interest compounded for 6 months = 8/2% = 4%
The formula used to calculate amount is
A = P$$\left(1 + \frac{r/2}{100}\right)^{2n}$$
A = 15000 × (1 + $$\frac { 4 }{ 100 }$$)^2
A = 15000 × (1.04)^2
A = 15000 * 1.04 * 1.04
A = 16,224
Hence, the amount is 16,224.
Compound Interest = Amount – Principal
CI = 16,224 – 15,000
CI = 1,224
Hence, the compound interest = 1,224
Therefore, the compound interest and amount are 1,224 and 16,224.
### Faqs on How to find Compound Interest when Interest is Compounded Half-Yearly
1. What is the formula used to calculate compound interest when interest is compounded half-yearly?
The formula used to calculate the compound interest is CI = P$$\left(1 + \frac{R/2}{100}\right)^{2T}$$ – P
where CI is the compound interest
P is the initial principal amount
T is the time period
R is the rate of interest per annum
2. When the interest is compounded for 6 months, then the number of conversion periods in the year is?
The interest is compounded for 6 months, then the number of conversion periods in the year is 2.
3. What is the time period taken when interest is calculated half yearly?
When interest is calculated for half-year periods, there are 2 conversion periods in the year. Hence, the number of years is multiplied by 2: the time period used when interest is calculated half-yearly is twice the number of years given.
4. For the calculation of compound interest for half-year and principal is the same, which among the following is true?
(a)Half the number of years and double the annual rate
(b)Double the annual rate and number of years
(c)Half the annual rate and number of years
(d)Double the number of years and half the annual rate
If the interest is compounded for 6 months, then
the rate becomes R/2 and the time becomes 2T (i.e., 2n half-year periods).
Therefore, (d) double the number of years and half the annual rate.
### Conclusion
Compound Interest when Interest is Compounded Half-Yearly, along with its derivation, is given in this article. Learn the difference between finding compound interest when it is compounded yearly, half-yearly, and quarterly, and how to solve compound interest problems by referring to our articles.
|
|
For many OTC derivative products, everyone only sees a corner of the whole market. While still unlikely to be comprehensive, the more connected market players have a much better view of the market. This creates a degree of information asymmetry that successive regulatory reforms aimed to address. One of the policy initiatives is the public dissemination of transaction data collected by trade repositories. This blog is a taster for the microstructure analysis of credit indices that we can conduct upon these post-trade records.
Itraxx Europe and Crossover indices (denoted as ITX and XO) are the two credit indices under study. The raw data consists of SDR and MIFID post-trade reports from Oct 2020 to Mar 2021 for ITX and XO series 34 5-year indices republished via Bloomberg. The underlying reference entities of these credit indices are European corporate entities, and they are actively traded by both US Persons [1] and non-US Persons [2]. Depending on the location of the trading venue and the jurisdiction of the trader, some of the regulatory trade reporting obligations fall under EU rules and some others under US rules. While regulatory regimes are hardly my area of interest or specialism, I am going to briefly dabble in some of these matters when trying to gather all the relevant information.
OTC Trade Reports Aim for Public Dissemination
The collapse of Lehman Brothers in 2008 was a wake-up call for the regulators. They were caught off-guard by the hidden scale of the OTC derivative exposures amongst major financial institutions and thus demanded more timely and detailed reporting post-crisis. The reform was implemented in phases. The earlier stage was about submitting transaction records to global trade repositories or warehouses. The second stage was about mandatory clearing for the more liquid products, more timely reporting to relevant regulators, and the release of post-trade data to the investing public. It is the latter that piques my interest.
Yet when I started looking into the reports, they did not seem straightforward to follow. I think there are two main reasons. First, the regulatory regimes have not been harmonised across the Atlantic. Each regulator may choose an agency or principal reporting regime. Along the same lines, there are inconsistencies on some basic issues, e.g. who should report (buyer, seller or the trading venue). Second, many OTC markets are not very liquid or are dominated by large players. Forcing instantaneous disclosure of all information for every order could upset the normal functioning of the markets. To mitigate the reporting burden, the regulators introduce deferral mechanisms for data release along with various exemptions. In doing so, the reporting becomes more complicated.
One complication is that a US Person can trade on an MTF. Similarly, an EU financial institution can trade on a SEF. As the reporting frameworks have not been harmonised, duplicate dissemination of the same transaction at different times can happen. The ISDA guide highlights that a US Person trading on an MTF would have its trade reported through the venue to the EU side, but it still has to send the same transaction info to the US side as an “off-facility” trade. On the other hand, for an EU Person trading on a SEF, the requirement to republish the transaction via an APA is waived. The implication is that we should ignore the off-facility trades in the SDR report when aggregating the trades in order to avoid double counting.
While reports aimed at public dissemination are collected in real time as prescribed by regulators on both sides of the Atlantic, only the SDR on the US side releases such info (execution time, traded size and price) to the public right away. However, the SDR withholds the exact trade size for any trade with a notional larger than \$100M. On the EU side, credit indices are deemed not sufficiently liquid, and trade-by-trade data can be deferred for public dissemination. On each Tuesday, the aggregate trading volume for the prior week is released, with the trade-by-trade data made available to the public only after four extra weeks. In other words, tick data is not real-time but is delayed for weeks. It could be an interesting machine learning project to train a model to spot actual transactions across EU trading venues.
For the Itraxx Europe and Crossover indices, there are three sets of reports in the data set (SDR, MTF, APA; both MTF and APA refer to trades covered under MIFID rules, but APA trades are those done outside recognised markets). The average daily volumes were €7.4bn and €2.9bn for Itraxx Europe and Crossover respectively across different trading venues. On average 160 and 190 Itraxx Europe and Crossover trades were done per day. Assuming a 9-hour trading day, there is a transaction roughly every 3 minutes on average. No wonder many credit index trading desks can still cope with the workload when they are only semi-automated. In terms of trading venue, SEF is the most popular, followed by MTF. While many US financial institutions are very active on both sides of the market and SEF is not exclusively traded by US Persons, the predominance of SEF is still a bit of a surprise.
Trade statistics for the Itraxx Europe & Crossover S34-5Y on-the-run indices between Oct20-Mar21
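A quick arithmetic check of the "roughly every 3 minutes" figure, using the average daily trade counts quoted above and a 9-hour (540-minute) trading day:

```python
trades_per_day = {"ITX (Itraxx Europe)": 160, "XO (Crossover)": 190}
minutes_per_day = 9 * 60
for name, n_trades in trades_per_day.items():
    print(f"{name}: one trade every {minutes_per_day / n_trades:.1f} minutes on average")
# ITX (Itraxx Europe): one trade every 3.4 minutes on average
# XO (Crossover): one trade every 2.8 minutes on average
```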
Brexit can be a counter-intuitive factor accounting for the popularity of SEF. The market share of SEF jumped from about 40% to above 60% when the Brexit transition ended on 31 Dec 2020. Most of the MTF and APA trades used to be booked in London. While the leading MTF and APA operators have opened new facilities in the EU to accommodate their EU-based clients post-Brexit, liquidity could fall as the market is split into two. Since many clients operating in Europe already have access to SEF, shifting more trading activity onto the already popular SEF can be a rational response.
Change in market share of SEF trading venue between Oct20-Mar21
Comparing the average trading volume to the number of trades per day gives us an indication of the relative trade size. The trades reported through APA tended to be the largest and the SEF ones the smallest. This is conceivable, as many packaged index trades (e.g. as part of a CDS-index basis package, or the index delta leg of options or tranche trades) are booked through APA and these tend to be large trades. In the case of SEF, the price transparency and level of automation are the highest amongst the different trade venues, and it may thus attract traders running more nimble and higher frequency strategies.
The histogram of the arrival time between successive trades shows an exponential-like drop-off. The log-frequency plot shows that the relationship is reasonably linear, at least for arrival times of less than around 10 minutes. The pattern is similar for both ITX34 and XO34. An exponential distribution is thus a reasonable model for the arrival time between successive trades.
Arrival Time Distribution for ITX34 and XO34
For arrival times beyond about 10 minutes, the rate of decrease becomes slower. One possibility is that there exist two sub-populations representing the busier and quieter trading days. If we define a cut-off arrival time of 10 minutes, there were only 9 trading days on which more than 30% of all trades were above the cut-off, and these days were the Thursday and Friday around Thanksgiving and in the second half of December. A more advanced arrival-time model might classify dates into busier and quieter days and fit a separate exponential distribution to each subset of the data.
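A minimal sketch of this arrival-time model, assuming a pandas DataFrame `trades` with an `exec_time` timestamp column (the column name and layout are assumptions, not the format of the published reports):

```python
import pandas as pd

def fit_exponential_arrivals(trades: pd.DataFrame) -> float:
    """MLE rate (1/mean) of an exponential model for inter-arrival times, in 1/seconds."""
    times = trades["exec_time"].sort_values()
    gaps = times.diff().dt.total_seconds().dropna()
    return 1.0 / gaps.mean()

def fit_by_day_type(trades: pd.DataFrame, cutoff_minutes: float = 10.0) -> dict:
    """Split days into 'quiet' vs 'busy' by the share of long gaps, then fit each subset.
    For simplicity, gaps spanning across days are not excluded here."""
    trades = trades.sort_values("exec_time")
    gaps_min = trades["exec_time"].diff().dt.total_seconds() / 60.0
    share_long = gaps_min.gt(cutoff_minutes).groupby(trades["exec_time"].dt.date).mean()
    quiet_days = set(share_long[share_long > 0.30].index)   # e.g. the 9 quiet days noted above
    is_quiet = trades["exec_time"].dt.date.isin(quiet_days)
    return {
        "quiet": fit_exponential_arrivals(trades[is_quiet]),
        "busy": fit_exponential_arrivals(trades[~is_quiet]),
    }
```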
The distribution of trade size is not smooth. Most still prefer trading in multiples of 5MM or 10MM, with 25MM and 10MM being the most popular trade sizes for Itraxx Europe and XO respectively. Note that in the case of SEF, the exact size of trades larger than USD100MM is not publicly disclosed and is capped at the EUR equivalent in the reports. Depending on the EURUSD exchange rate, the trade size of these large trades might be reported as €82MM+, €84MM+, €90MM+ etc. This would slightly distort the results when we aggregate the data (especially for ITX, which tends to be traded in larger sizes).
Trade Size Distribution for ITX34 and XO34
Seasonality Effect – Intraweek and Intraday
The trading activities in many markets often follow cyclical patterns. This is known as the seasonality effect. For those who intend to design a trade execution model, the intraweek and intraday time scales are the most relevant. First, we examine the intraweek seasonality effect. Tuesdays tended to be the quietest, with activity picking up towards the end of the week. Nevertheless, the difference is less than 10-15% measured either by the average number of trades or by volume. This effect was not that apparent for credit index trading during the observation period.
Seasonality Effect – Intraweek for ITX34 and XO34
Unless there are some major economic events, there is not much trading in ITX or XO before 0730 or after 1730 (London time). OTC credit trading does not have official open or close times. I thus pick 0730 and 1730 as the quasi trading hours and aggregate the trades into half-hour slots. In terms of intraday pattern, there is an early peak around 0800 to 0830 and the trading activities gradually slow down towards noon. The market heats up again when the US traders come back to the office and reaches its peak between 1530 and 1630. The intraday seasonality pattern is quite pronounced, with average trading volume in the peak hours easily twice or more that of the quieter slots.
Seasonality Effect – Intraday for ITX34 and XO34
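The half-hour aggregation behind the intraday profile can be sketched as follows, again assuming the hypothetical `trades` frame with an `exec_time` timestamp and a `notional` column:

```python
import pandas as pd

def intraday_profile(trades: pd.DataFrame) -> pd.DataFrame:
    """Average traded notional and trade count per 30-minute slot between 07:30 and 17:30."""
    t = trades.set_index("exec_time").between_time("07:30", "17:30")
    slots = t.groupby([t.index.date, t.index.floor("30min").time])
    per_day_slot = slots["notional"].agg(volume="sum", trades="count")
    # average across days for each half-hour slot
    return per_day_slot.groupby(level=1).mean()
```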
The transactions are aggregated into a number of different time intervals, with lengths of 7.5, 15, 30, 60 and 120 minutes. The spread movement in each time interval is taken as a proxy for the return. The first 4 moments of the returns (mean, standard deviation, skewness and excess kurtosis) are calculated for each return time interval. A time series of less than 5 months is not that long, even for intraday return analysis. The reader should bear in mind this is more of a taster than a robust study.
For the period under examination (Oct20 to Mar21), the market rallied on the back of the successful launch of the first covid vaccine. Spreads tightened during the period. Naturally, the means are negative. Also, the large market movements during this period tended to be driven by risk-on news, which explains the negative skewness.
The standard deviations tend to increase as a power of the time interval. The underlying random process matters, and this is a topic discussed in the scaling law section later on.
Moment for Different Return Intervals for ITX34 and XO34
Excess kurtosis is often (extremely) unstable and much larger than zero (i.e. fatter tails than the normal distribution). In the case of ITX34, the excess kurtosis falls from a very high level as the time interval for the return calculation increases. This is similar to many other financial series.
In the case of XO34, the kurtosis seems to be all over the place, with the kurtosis calculated using 15-minute and hourly returns much higher than the rest.
It was caused by a genuine sharp movement in spread, as an examination of the actual data shows. At around 11:30 am on 9 Nov 20, Pfizer announced the success of its third-stage covid vaccine trial. XO34 tightened by more than 23bp (or 10x the hourly standard deviation) in the subsequent hour. As the market rally was driven by incessant buying orders for high yield risk (rather than a sudden jump), the kurtosis calculated at the shorter return intervals is not affected as much. If this data point were removed from XO34, the kurtoses would fall substantially across the board. Perhaps a data set with a longer sampling period should be used. Alternatively, the kurtosis is just that unstable by nature.
Effect of Removal of Just One Extreme Data Point Upon Kurtosis
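The sensitivity of the excess kurtosis to a single extreme observation can be checked directly. A minimal sketch using SciPy; the `hourly_returns` array of spread changes is an assumed input, not the actual data set:

```python
import numpy as np
from scipy.stats import kurtosis

def kurtosis_with_and_without_extreme(returns):
    """Excess kurtosis of the full sample vs. the sample with its largest |move| removed."""
    full = kurtosis(returns, fisher=True)                     # excess kurtosis (normal -> 0)
    trimmed = np.delete(returns, np.argmax(np.abs(returns)))  # drop the single largest move
    return full, kurtosis(trimmed, fisher=True)

# usage (hypothetical): full, trimmed = kurtosis_with_and_without_extreme(hourly_returns)
```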
Scaling Law
There is an element of randomness in returns. Based upon Mandelbrot's initial analysis, [3] suggests an empirical scaling law between return volatility (as measured by ${E(|r|)}$) and the time interval ${\Delta t}$, with ${D}$ termed the drift exponent. If the return follows a Gaussian random walk, the drift exponent is 0.5. If it follows a more trend-following process, where a large movement tends to cluster with other large movements, the drift exponent is larger than 0.5, i.e. ${0.5 < D < 1}$. For more mean-reverting processes, the drift exponent is smaller than 0.5, i.e. ${0 < D < 0.5}$. Since the focus is intraday behaviour, the time interval is limited to a maximum of 240 min.
$\displaystyle E(|r|)=c\,\Delta t ^{D}$
The expected absolute return (${E(|r|)}$) is plotted against the time interval ${\Delta t}$ in a log-log plot. It seems more appropriate to fit separate models to the shorter end and the longer end, picking a transition point at 900 seconds (15 minutes). Below that, the drift exponent is above 0.8 for both indices, suggesting the spreads tend to be trending when the time interval is short. Beyond this point, the drift exponent falls to nearly 0.5, suggesting the return is not too different from a random walk over longer horizons. This seems to be corroborated by the empirical observation that, after an actual transaction goes through, the market quotes tend to trend until the orders from other participants who are in the same direction but with higher private reserve prices all get filled.
Scaling Law Analysis for ITX34 and XO34
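The drift exponent itself can be estimated by ordinary least squares in log-log space. A minimal sketch, assuming `returns_by_dt` maps each sampling interval in seconds to the vector of spread changes observed at that interval:

```python
import numpy as np

def drift_exponent(returns_by_dt):
    """Fit E(|r|) = c * dt^D by OLS on log E(|r|) vs log dt; return the exponent D."""
    dts = np.array(sorted(returns_by_dt))
    mean_abs = np.array([np.abs(returns_by_dt[dt]).mean() for dt in dts])
    slope, _intercept = np.polyfit(np.log(dts), np.log(mean_abs), 1)
    return slope  # D = 0.5 for a Gaussian random walk, > 0.5 for trending behaviour

# e.g. fit the short and long ends separately around the 900-second transition point:
# d_short = drift_exponent({dt: r for dt, r in returns_by_dt.items() if dt <= 900})
# d_long  = drift_exponent({dt: r for dt, r in returns_by_dt.items() if dt >= 900})
```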
Note 1: A US Person is defined as a US resident, a partnership or corporate formed under US laws, or various types of accounts held for the benefit of a US Person.
Note 2: Non-US Person largely refers to market participants falling within EU jurisdiction. Brexit can be a complication here. Given there is still no agreement on the equivalence of financial regulation between the EU and the UK, the UK's FCA temporarily adopted EU rules after the end of the Brexit transition period on 31 Dec 2020. The situation could change pending further negotiations.
Note 3: “An Introduction to High-Frequency Finance”, Dacorogna, Gencay, Muller, Olsen, 2001, Ch 5.5.
Does Taking Higher Risk Lead to More Return In Bonds?
The low volatility anomaly is well-known in equities. Holding a basket of shares with the highest beta does not generate the highest return. This has been shown across many different regions and periods. A similar mechanism may be at work in bonds as well. The yield is higher when going down the rating spectrum, but that does not fully compensate for the credit quality deterioration beyond a certain point. Examining 20 years of Bloomberg Barclays bond indices for US and European corporates, buying and holding the riskiest credits did not generate a good return. There seems to be a sweet spot when going down the credit spectrum. Continue reading “Does Taking Higher Risk Lead to More Return In Bonds?”
Callable Bond – Part 3: Perpetual Subordinated Capital Note
A perpetual subordinated capital note does not have a maturity date. It pays a pre-negotiated coupon (which can be fixed, floating or switch from fixed to floating during its lifetime) to the holder periodically, but the coupon can be switched off if no dividend is being distributed to the ordinary shares at the time. The deferred coupon might be cancelled (non-cumulative) or paid back all at once in arrears (cumulative). Continue reading “Callable Bond – Part 3: Perpetual Subordinated Capital Note”
Callable Bond – Part 2: Callable HY in Practice
In this article, I focus on bonds that come with callable features when issued and look into why these bonds are structured the way they are. Callable bonds tend to come from HY issuers. Bond options structured through the fixed income desks of investment banks are not considered here, as these are largely interest rate investment and hedging derivative products with government bonds or highly liquid investment-grade bonds as the underlying instruments. The considerations can be different when compared with cash HY bonds (e.g. the payoff of bond derivatives follows mechanical rules, whereas a strategic financing decision at the company level determines whether a HY bond gets called, not just the bond price in comparison with the strike). Continue reading “Callable Bond – Part 2: Callable HY in Practice”
Callable Bond – Part 1: YTW vs OAS
Callable bond: a credit perspective – Part 1: YTW vs OAS
Bonds with callable features are very common in the HY space, with close to 65% and 35% of all new US and European HY bonds respectively being callable. These bonds tend to have a call schedule (rather than a single call date and price), with the credit component more of a concern than fluctuations in interest rates. This topic falls in an area somewhere between quants and fundamental analysts and tends to be ignored by many. I intend to look closer at it in this series of articles. Yield-to-worst (YTW) and option-adjusted spread (OAS) are the commonest analytics being used. In part one, I will explain how to calculate YTW and OAS and how we should interpret them. Continue reading “Callable Bond – Part 1: YTW vs OAS”
Benchmarking with Euro Bond ETF
Bond ETFs are now an important investment vehicle for US-based high yield investors, with the top 5 HY bond ETFs accounting for a total market cap exceeding \$40bn at the time of writing (24/6/14). While this is still just a small fraction of the \$1.5tn US high yield market, the leading ETFs iShares iBoxx \$ HY Corp Bond ETF (HYG – \$13.5bn market cap) and SPDR Barclays HY Bond ETF (JNK – \$9.8bn market cap) are closely followed by many investors, as the passive investment style of the ETFs makes them ideal benchmarks (either when comparing with other funds or with other asset classes). Moving across the Atlantic, HY bond ETFs also enjoy phenomenal growth. The market cap of the biggest, iShares iBoxx Euro HY Corp Bond ETF (IHYG), has grown more than 10-fold since Dec 2010 and reached EUR3.2bn. Continue reading “Benchmarking with Euro Bond ETF”
Credit migration matrix – use and misuse
A rating is supposed to be a stable long-term predictor of the creditworthiness of a borrower. Every once in a while, the Credit Rating Agencies (CRAs) release their analysis of rating migration and, more importantly, of how their ratings compare with realised default frequencies. The information is often summarised in the form of a credit migration/transition matrix. It is a useful tool but is sometimes misused. Continue reading “Credit migration matrix – use and misuse”
|
|
Rounding
library(scrutiny)
Large parts of scrutiny are about rounding. This topic is more diverse and variegated than one might think. “Rounding” is often taken to simply mean rounding up from 5, and although that’s wrong, it doesn’t make much of a difference most of the time. Where it does matter is in reconstructing the rounding procedures that others used, claim to have used, or are alleged to have used. In short, rounding and all its details matter for error detection.
This vignette covers scrutiny’s infrastructure for reconstructing rounding procedures and results from summary statistics. Doing so is essential for the package: It is a precondition for translating assumptions about the processes behind published summary data into a few simple, higher-level function calls. Some of the functions presented here might be useful beyond the package, as well. Feel free to skip the more theoretical parts if your focus is on the code.
Overview
Base R’s round() function is surprisingly sophisticated, which distinguishes it from the very simple ways in which decimal numbers are normally rounded — most of the time, rounding up from 5. For this very reason, however, it can’t be used to reconstruct the rounding procedures of other software programs. This is the job of scrutiny’s rounding functions.
First, I will present reround(), a general interface to reconstructing rounded numbers, before going through the individual rounding functions. I will add some comments on these.
I will also discuss unround(), which works the reverse way: It takes a rounded number and reconstructs the bounds of the original number, taking details about the assumed rounding procedure into account.
Finally, I will take a closer look at bias from rounding raw numbers.
Reconstruct rounded numbers with reround()
None of the error detection techniques in scrutiny calls the individual rounding functions directly. Instead, all of them call reround(), which mediates between these two levels. reround() takes the vector of “raw” reconstructed numbers that were not yet rounded in the way that’s assumed to have been the original rounding procedure. Its next argument is digits, the number of decimal places to round to.
The remaining three arguments are about the rounding procedure. Most of the time, only rounding will be of any interest. It takes a string with the name of one of the rounding procedures discussed below.
Here is an example for a reround() call:
reround(x = c(5.812, 7.249), digits = 2, rounding = "up")
#> [1] 5.81 7.25
The two remaining arguments are mostly forgettable: They only concern obscure cases of rounding with a threshold other than 5 (threshold) and rounding such that the absolute values of positive and negative numbers are the same (symmetric). Ignore them otherwise.
Rounding procedures in detail
Up and down
round_up() does what most people think of as rounding. If the decimal portion to be cut off by rounding is 5 or greater, it rounds up. Otherwise, it rounds down. SAS, SPSS, Stata, Matlab, and Excel use this procedure.
round_up(x = 1.24, digits = 1)
#> [1] 1.2
round_up(x = 1.25, digits = 1)
#> [1] 1.3
round_up(x = 1.25) # default for digits is 0
#> [1] 1
Rounding up from 5 is actually a special case of round_up_from(), which can take any numeric threshold, not just 5:
round_up_from(x = 4.28, digits = 1, threshold = 9)
#> [1] 4.2
round_up_from(x = 4.28, digits = 1, threshold = 1)
#> [1] 4.3
These two functions have their mirror images in round_down() and round_down_from(). The arguments are the same as in round_up():
round_down(x = 1.24, digits = 1)
#> [1] 1.2
round_down(x = 1.25, digits = 1)
#> [1] 1.2
round_down(x = 1.25) # default for digits is 0
#> [1] 1
round_down_from(), then, is just the reverse of round_up_from():
round_down_from(x = 4.28, digits = 1, threshold = 9)
#> [1] 4.3
round_down_from(x = 4.28, digits = 1, threshold = 1)
#> [1] 4.2
To even (base R)
Like Python’s round() function, R’s base::round() doesn’t round up or down, or use any other procedure based solely on the truncated part of the number. Instead, round() strives to round to the next even number. This is also called “banker’s rounding”, and it follows a technical standard, IEEE 754.
Realizing that round() works in a highly unintuitive way sometimes leads to consternation. Why can’t we just round like we learned in school, that is, up from 5? The reason seems to be bias. Because 5 is right in between two whole numbers, any procedure that rounds 5 in some predetermined direction introduces a bias toward that direction. Rounding up from 5 is therefore biased upward, and rounding down from 5 is biased downward.
As shown in the Rounding bias section below, this is unlikely to be a major issue when rounding raw numbers that originally have many decimal places. It might be more serious, however, if the initial number of decimal places is low (for whatever reason) and the need for precision is high.
At least in theory, “rounding to even” is not biased in either direction, and it preserves the mean of the original distribution. That is how round() aims to operate. Here is a case in which it works out, whereas the bias of rounding up or down is fully apparent:
vec1 <- seq(from = 0.5, to = 9.5)
up1 <- round_up(vec1)
down1 <- round_down(vec1)
even1 <- round(vec1)
vec1
#> [1] 0.5 1.5 2.5 3.5 4.5 5.5 6.5 7.5 8.5 9.5
up1
#> [1] 1 2 3 4 5 6 7 8 9 10
down1
#> [1] 0 1 2 3 4 5 6 7 8 9
even1
#> [1] 0 2 2 4 4 6 6 8 8 10
# Original mean
mean(vec1)
#> [1] 5
# Means when rounding up or down: bias!
mean(up1)
#> [1] 5.5
mean(down1)
#> [1] 4.5
# Mean when rounding to even: no bias
mean(even1)
#> [1] 5
However, this noble goal of unbiased rounding runs up against the reality of floating point arithmetic. You might therefore get results from round() that first seem bizarre, or at least unpredictable. Consider:
vec2 <- seq(from = 4.5, to = 10.5)
up2 <- round_up(vec2)
down2 <- round_down(vec2)
even2 <- round(vec2)
vec2
#> [1]  4.5  5.5  6.5  7.5  8.5  9.5 10.5
up2
#> [1] 5 6 7 8 9 10 11
down2
#> [1] 4 5 6 7 8 9 10
even2 # No symmetry here...
#> [1] 4 6 6 8 8 10 10
mean(vec2)
#> [1] 7.5
mean(up2)
#> [1] 8
mean(down2)
#> [1] 7
mean(even2) # ... and the mean is slightly biased downward!
#> [1] 7.428571
vec3 <- c(
1.05, 1.15, 1.25, 1.35, 1.45,
1.55, 1.65, 1.75, 1.85, 1.95
)
# No bias here, though:
round(vec3, 1)
#> [1] 1.0 1.1 1.2 1.4 1.4 1.6 1.6 1.8 1.9 2.0
mean(vec3)
#> [1] 1.5
mean(round(vec3, 1))
#> [1] 1.5
Sometimes round() behaves just as it should, but at other times, results can be hard to explain. Martin Mächler, who wrote the present version of round(), describes the issue about as follows:
The reason for the above behavior is that most decimal fractions can’t, in fact, be represented as double precision numbers. Even seemingly “clean” numbers with only a few decimal places come with a long invisible mantissa, and are therefore closer to one side or the other.
We usually think that rounding rules are all about breaking a tie that occurs at 5. Most floating-point numbers, however, are just somewhat less than or greater than 5. There is no tie! Consequently, Mächler says, rounding functions need to “measure, not guess which of the two possible decimals is closer to x” — and therefore, which way to round.
This seems better than going with mathematical intuitions that may not always correspond to the way computers actually deal with these issues. R has been using the present solution since version 4.0.0.
base::round() can seem like a black box, but it seems unbiased in the long run. I recommend using round() for original work, even though it is quite different from other rounding procedures — and therefore unsuitable for reconstructing them. Instead, we need something like scrutiny’s round_*() functions.
Reconstruct rounding bounds with unround()
Rounding leads to a loss of information. The mantissa is cut off in part or in full, and the resulting number is underdetermined with respect to the original number: The latter can’t be inferred from the former. It might be of interest, however, to compute the range of the original number given the rounded number (especially the number of decimal places to which it was rounded) and the presumed rounding method.
While it’s often easy to infer such a range, we better have the computer do it. Enter unround(). It returns the lower and upper bounds, and it says whether these bounds are inclusive or not — something that varies greatly by rounding procedure. Currently, unround() is used as a helper within scrutiny’s DEBIT implementation; see vignette("debit").
The default rounding procedure for unround() is "up_or_down":
unround(x = "8.0")
#> # A tibble: 1 × 7
#> range rounding lower incl_lower x incl_upper upper
#> <chr> <chr> <dbl> <lgl> <chr> <lgl> <dbl>
#> 1 7.95 <= x(8.0) <= 8.05 up_or_down 7.95 TRUE 8.0 TRUE 8.05
For a complete list of featured rounding procedures, see documentation for unround(), section Rounding.
On the left, the range column displays a pithy graphical overview of the other columns (except for rounding) in the same order:
1. lower is the lower bound for the original number.
2. incl_lower is TRUE if the lower bound is inclusive and FALSE otherwise.
3. x is the input value.
4. incl_upper is TRUE if the upper bound is inclusive and FALSE otherwise.
5. upper is the upper bound for the original number.
By default, decimal places are counted internally so that the function always operates on the appropriate decimal level. This creates a need to take trailing zeros into account, which is why x needs to be a string:
unround(x = "3.50", rounding = "up")
#> # A tibble: 1 × 7
#> range rounding lower incl_lower x incl_upper upper
#> <chr> <chr> <dbl> <lgl> <chr> <lgl> <dbl>
#> 1 3.495 <= x(3.50) < 3.505 up 3.50 TRUE 3.50 FALSE 3.50
Alternatively, a function that uses unround() as a helper might count decimal places by itself (i.e., by internally calling decimal_places()). It should then pass these numbers to unround() via the decimals argument instead of letting it redundantly count decimal places a second time.
In this case, x can be numeric because trailing zeros are no longer needed. (That, in turn, is because the responsibility to count decimal places in number-strings rather than numeric values shifts from unround() to the higher-level function.)
The following call returns the exact same tibble as above:
unround(x = 3.5, digits = 2, rounding = "up")
#> # A tibble: 1 × 7
#> range rounding lower incl_lower x incl_upper upper
#> <chr> <chr> <dbl> <lgl> <dbl> <lgl> <dbl>
#> 1 3.495 <= x(3.5) < 3.505 up 3.50 TRUE 3.5 FALSE 3.50
Since x is vectorized, you might test several reported numbers at once:
vec2 <- c(2, 3.1, 3.5) %>%
restore_zeros()
vec2 # restore_zeros() returns "2.0" for 2
#> [1] "2.0" "3.1" "3.5"
vec2 %>%
unround(rounding = "even")
#> # A tibble: 3 × 7
#> range rounding lower incl_lower x incl_upper upper
#> <chr> <chr> <dbl> <lgl> <chr> <lgl> <dbl>
#> 1 1.95 < x(2.0) < 2.05 even 1.95 FALSE 2.0 FALSE 2.05
#> 2 3.05 < x(3.1) < 3.15 even 3.05 FALSE 3.1 FALSE 3.15
#> 3 3.45 < x(3.5) < 3.55 even 3.45 FALSE 3.5 FALSE 3.55
Fractional rounding
What if you want to round numbers to a fraction instead of an integer? Check out reround_to_fraction() and reround_to_fraction_level():
reround_to_fraction(x = 0.4, denominator = 2, rounding = "up")
#> [1] 0.5
This function rounds 0.4 to 0.5 because that’s the closest fraction of 2. It is inspired by janitor::round_to_fraction(), and credit for the core implementation goes there. reround_to_fraction() blends janitor’s fractional rounding with the flexibility and precision that reround() provides.
What’s more, reround_to_fraction_level() rounds to the nearest fraction at the decimal level specified via its digits argument:
reround_to_fraction_level(
x = 0.777, denominator = 5, digits = 0, rounding = "down"
)
#> [1] 0.8
reround_to_fraction_level(
x = 0.777, denominator = 5, digits = 1, rounding = "down"
)
#> [1] 0.78
reround_to_fraction_level(
x = 0.777, denominator = 5, digits = 2, rounding = "down"
)
#> [1] 0.776
These two functions are not currently part of any error detection workflow.
Rounding bias
I wrote above that rounding up or down from 5 is biased. However, this points to a wider problem: It is true of any rounding procedure that doesn’t take active precautions against such bias. base::round() does, and that is why I recommend it for original work (as opposed to reconstruction).
It might be useful to have a general and flexible way to quantify how far rounding biases a distribution, as compared to how it looked like before rounding. The function rounding_bias() fulfills this role. It is a wrapper around reround(), so it can access any rounding procedure that reround() can, and takes all of the same arguments. However, the default for rounding is "up" instead of "up_or_down" because rounding_bias() only makes sense with single rounding procedures.
In general, bias due to rounding is computed by subtracting the original distribution from the rounded one:
$bias = x_{rounded} - x$
By default, the mean is computed to reduce the bias to a single data point:
vec3 <- seq(from = 0.6, to = 0.7, by = 0.01)
vec3
#> [1] 0.60 0.61 0.62 0.63 0.64 0.65 0.66 0.67 0.68 0.69 0.70
# The mean before rounding...
mean(vec3)
#> [1] 0.65
# ...is not the same as afterwards...
mean(round_up(vec3))
#> [1] 1
# ...and the difference is bias:
rounding_bias(x = vec3, digits = 0, rounding = "up")
#> [1] 0.35
Set mean to FALSE to return the whole vector of individual biases instead:
rounding_bias(x = vec3, digits = 0, rounding = "up", mean = FALSE)
#> [1] 0.40 0.39 0.38 0.37 0.36 0.35 0.34 0.33 0.32 0.31 0.30
Admittedly, this example is somewhat overdramatic. Here is a rather harmless one:
vec4 <- rnorm(50000, 100, 15)
rounding_bias(vec4, digits = 2)
#> [1] -7.297672e-06
What is responsible for such a difference? It seems to be (1) the sample size and (2) the number of decimal places to which the vector is rounded. The rounding method doesn’t appear to matter if numbers with many decimal places are rounded:
#> # A tibble: 10 × 3
#> bias decimal_digits rounding
#> <dbl> <chr> <chr>
#> 1 0.00000810 1 up
#> 2 0.00000730 2 up
#> 3 0.00000126 3 up
#> 4 0.00000000767 4 up
#> 5 0.0000000141 5 up
#> 6 0.00000810 1 even
#> 7 0.00000730 2 even
#> 8 0.00000126 3 even
#> 9 0.00000000767 4 even
#> 10 0.0000000141 5 even
However, if the raw values are preliminarily rounded to 2 decimal places before rounding proceeds as above, the picture is different:
#> # A tibble: 10 × 3
#> bias decimal_digits rounding
#> <dbl> <chr> <chr>
#> 1 0.00487 1 up
#> 2 0 2 up
#> 3 0 3 up
#> 4 0 4 up
#> 5 0 5 up
#> 6 0.0000408 1 even
#> 7 0 2 even
#> 8 0 3 even
#> 9 0 4 even
#> 10 0 5 even
In sum, the function allows users to quantify the degree to which rounding biases a distribution, so that they can assess the relative merits of different rounding procedures. This is partly to sensitize readers to potential bias in edge cases, but also to enable them to make informed rounding decisions on their own.
|
|
## Introduction
Agricultural insurance solutions are an important tool for farmers to manage risks. Among a wide set of available insurance solutions1, weather index insurances (WII) have recently emerged as a promising alternative to classical damage-based insurance solutions. For WII, the payout made to the farmer is based on a measured index, e.g. precipitation at a weather station, and is not directly based on yield or income losses experienced by the farmer. Thus, WII overcome asymmetric information problems of classical insurance schemes, because farmer and insurance company have equal information about the weather risk and neither is able to manipulate the insured value (the weather)2. In addition, compensation of farmers only requires information from weather records and is thus fast and cheap. Hence, WII have a large potential in both developed and developing countries and can contribute to better farm-level risk management and more efficient use of natural resources3,4. However, index insurances do not necessarily lead to accurate compensation of yield losses and thus might fail to pay out when farmers experience income losses. This phenomenon is denoted as basis risk and constitutes a significant adoption hurdle for these products among farmers. Basis risk can be separated into three components: (1) Geographical/spatial basis risk occurs if the index is measured at a spatial distance from the production location5. (2) Design basis risk is a result of taking an index that is an inadequate predictor of yield losses6. (3) Temporal basis risk captures the imperfect choice of the time frame for index measurement7,8. In this paper, we suggest novel approaches to reduce temporal basis risk.
Temporal basis risk mainly occurs because the WII does not reflect the actual growth stage that is sensitive to specific weather, e.g. droughts. The measurement period for the weather index has to be specified in the insurance contract by both parties before the growth period of the crop starts. As the most straightforward procedure, the periods over which the index is measured are thus often chosen to reflect particular calendar periods (e.g. specific weeks or months). These fixed time windows can only roughly approximate crop-specific growth phases9. Moreover, the occurrence dates of growth phases are not constant across time and space, because weather conditions can cause large shifts in the actual occurrence of these periods10,11. The resulting misspecification of insurance periods leads to biased WII payout determination and therefore weak risk reducing properties, hampering insurance uptake among risk averse farmers12. So far, only few studies have suggested approaches aiming to reduce temporal basis risk of WII by following more flexible index designs, i.e. by considering shifts of crop growth phases over time and space13,14. In this respect, ‘flexible’ implies implementing yearly changing insurance periods according to the actual occurrence dates of vulnerable growth phases10. First, Kapphan et al.13 used growing degree days (GDD) to model occurrence dates of emergence, vegetative period, grain filling and maturity in corn production based on thermal time. They evaluated the performance of their WII based on simulated corn yield and weather scenarios. Second, Conradt et al.14 used GDD to simulate occurrence dates of tillering, shooting and ear emergence in spring wheat. They tested the risk reducing properties of the resulting WII based on a case study in Kazakhstan. Both approaches allow for fine-scaled estimates of multiple growth phases. Furthermore, Dalhaus and Finger15 suggest using observations from a phenological network of farmland in the farm’s region to find winter wheat’s occurrence dates of stem elongation, ear emergence and milk ripeness. They tested their WII based on a case study in central Germany. The latter approach additionally provides a maximum of comprehensibility for the farmer, which is considered a key success factor in WII16,17. So far, no study has compared different approaches to consider crop growth phases in WII design.
We here use an empirical example of a WII against drought risk in winter wheat production in Germany to compare existing and propose new approaches to reduce temporal basis risk. In winter wheat production, phases of low water supply during “reproductive and grain-filling” in particular limit the development of the plant18. Farooq et al.18 review outcomes of several contributions concerning yield reduction due to drought taking place at different developmental stages in winter wheat. Their findings indicate that wheat is most vulnerable to drought during the phase from ‘stem elongation’ to ‘anthesis’18,19. Within this phase, assimilates are to a large extent used to develop grains20. Hence, drought-induced leaf senescence21,22, reduced carbon uptake due to stomata closure23, as well as shortening of the grain-filling period20 decrease grain number and grain weight, thus reducing the final yield outcome.
In this study, we aim to test and compare different approaches to find the occurrence dates of these phases and use this information to reduce temporal basis risk of WII. We focus on the following crop growth stage modeling and different phenology observation networks (see also Table 1 for comparative features).
• Growing Degree Days: Plants are expected to require a plant- and growth-stage-specific temperature load to reach a certain growth stage. The growing degree days approach models this based on observed temperature data. Using the GDD model we are able to estimate the occurrence dates of the drought sensitive period between stem elongation and anthesis of winter wheat (a minimal computational sketch follows this list).
• Yearly Phenology Reporters: Publicly provided open dataset of plant growth stage occurrence dates. The network comes with a high spatial density and a detailed reporting procedure including various different growth stages. Data is published at the end of the calendar year (denoted as ‘Yearly Reporter’ henceforward). Using yearly reporters’ data we are able to derive region specific information on the actual occurrence dates of the drought sensitive growth stages stem elongation and ear emergence in winter wheat.
• Immediate Phenology Reporters: Publicly provided open dataset of plant growth stage occurrence dates. The network however comes with a lower spatial density and less reported growth phases compared to the latter network. Data is published directly after observation (For a detailed explanation of all three approaches, see section ‘Determination of water sensitive growth stages’.) (denoted as ‘Immediate Reporter’ henceforward). Using immediate reporters’ data we are able to derive region specific information on the actual occurrence dates of the drought sensitive growth stages stem elongation and ear emergence in winter wheat.
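For illustration, a minimal sketch of the growing degree days accumulation referred to in the first bullet above; the base temperature and the GDD threshold are placeholders, not the calibrated values used in this study:

```python
def growing_degree_days(tmin, tmax, t_base=0.0):
    """Daily GDD contributions: max(0, (Tmax + Tmin)/2 - Tbase)."""
    return [max(0.0, (hi + lo) / 2.0 - t_base) for lo, hi in zip(tmin, tmax)]

def occurrence_date(daily_gdd, threshold, dates):
    """First date on which the cumulative GDD reaches the growth-stage threshold."""
    total = 0.0
    for day, gdd in zip(dates, daily_gdd):
        total += gdd
        if total >= threshold:
            return day
    return None  # threshold not reached within the observed period
```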
To this end, we add to the current discussion of utilizing (big) data sources to support more efficient insurance solutions and thus sustainable agriculture24,25.
More specifically we aim to answer the following research questions
• RQ1: Which approach to explicitly consider yearly changing insurance periods reduces farmers’ financial exposure to drought risk compared to a ‘no insurance’ scenario?
• RQ2: Which approach to explicitly consider yearly changing insurance periods fits best to reduce temporal basis risk of weather index insurance?
We use expected utilities (EU) of insured farmers as a risk measure to test for a reduction in the financial exposure to drought risk. We conduct this assessment for different scenarios of farmers’ level of risk aversion. Our approach is particularly focused on the relevance of WII to reduce downside risks, i.e. the compensation of extreme yield losses, by utilizing a power utility function to calculate expected utilities and quantile regressions to obtain critical parameters of the WII such as tick size6. Our empirical example is based on farm-level wheat yield data for northern Germany, with a focus on drought risks. We conclude with a critical discussion on the applicability of the various approaches considered here, with respect to the insured crop, data availability and potential further research paths.
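For illustration, a sketch of the expected-utility comparison under a power (CRRA) utility function; the wealth components, the use of the mean payout as the fair premium, and the risk aversion grid are illustrative assumptions rather than the exact implementation used in this study:

```python
import numpy as np

def power_utility(wealth, risk_aversion):
    """CRRA (power) utility; logarithmic utility in the limiting case of risk aversion = 1."""
    if risk_aversion == 1.0:
        return np.log(wealth)
    return wealth ** (1.0 - risk_aversion) / (1.0 - risk_aversion)

def expected_utility(revenue, payout, premium, risk_aversion):
    """Mean utility of insured wealth (revenue + index payout - premium)."""
    wealth = revenue + payout - premium
    return power_utility(wealth, risk_aversion).mean()

# Example (hypothetical arrays rev, pay): compare insured vs. uninsured
# for ra in (0.5, 1.0, 2.0):
#     gain = (expected_utility(rev, pay, pay.mean(), ra)
#             - expected_utility(rev, 0 * pay, 0.0, ra))
#     print(ra, gain)
```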
## Results
For summary statistics on differences between the three approaches and WII contracts see the respective section of the online supplementary file.
We test for statistical significance of i) the ability of WII solutions to reduce farmers’ financial exposure to drought risk compared to no insurance (RQ1) and ii) differences across the different WII specifications used here (RQ2). Table 2 shows Wilcoxon test results for the risk reducing properties of the different insurance products compared to the uninsured case. This assessment is based on average values of expected utilities across all considered farms and a fair insurance premium. We find that both WII based on phenology reporting data highly significantly increase farmers’ expected utility and thus reduce the financial exposure to drought risk. This result holds over all implemented levels of risk aversion. Note that for risk neutrality (risk aversion equal to zero) no improvement can be obtained from any insurance with a fair insurance premium. Regarding WII based on GDD-estimated growth stages, we could not detect any significant changes in expected utility compared to the ‘no insurance’ base scenario. Hence, GDD based WII did not reduce the financial exposure to drought risk in our empirical example of winter wheat production in Germany.
Table 3 displays the results of comparisons between the different approaches. We find no difference in the risk reducing properties between different phenology reports. This result reveals that the benefits of using phenology reports in WII are independent of the reporting schemes. Compared to WII based on GDD approach, both phenology reporter based WII performed significantly better and thus reduced temporal basis risk.
All results presented here show the differences between the three WIIs with respect to their ability to reduce temporal basis risk and thus increase farmers’ expected utility. Within our online supplementary file we present results on the magnitude of the here identified effects (see Table A4 of the online supplementary file).
## Discussion
This study is the first comparing different WIIs that explicitly consider managing drought risk in single stages of plant growth. In our approach, insurance periods vary across time and space according to the occurrence dates of the growth stages stem elongation, anthesis and ear emergence. Our results reveal improvements for WII schemes by reducing temporal basis risk. Drought risks are expected to become more pronounced for arable farmers in Europe in the future. Thus, developing functioning WII insurance solutions is considered as viable climate change adaptation tool26,27.
Using phenological observations to adapt WII to agronomic plant development highly significantly decreased farmers’ risk exposure and thus increased farmers’ expected utility compared to both using GDD based WII and the ‘no insurance’ reference scenario. However, both phenology reporting networks only provide information about the growth stage of ‘ear emergence’ and not about the highly drought sensitive growth stage of ‘anthesis’, which can be estimated by the GDD approach28. More specifically, GDD based approaches allow a substantially finer assessment of crop growth stages than phenology observations. Nevertheless, the GDD approach failed to properly estimate the occurrence of ‘anthesis’, and the risk reducing properties of the reporter-based WII remained strong. The fact that both reporting networks, which have different reporting procedures and network densities, showed a relatively similar performance underlines the robustness of our results to changes in the reporting procedure and station density. With respect to timely insurance payouts in the case of loss events, we would like to clearly emphasize that immediate reporters, which publish their findings right after occurrence, constitute the preferable option compared to yearly reporters. Timely compensation of losses is considered a key requirement of crop insurance to avoid illiquidity. However, including the growth stage of ‘anthesis’ in the phenology reporting system would potentially further increase the risk reducing properties.
Concerning the usage of GDD to find appropriate WII periods, we identified several drawbacks that have to be considered. The GDD estimate of the occurrence date of ‘stem elongation’ was considerably too early. As a result, rainfall in the insured period was considerably higher, due to a longer insurance period and a shift into a wetter time of the year. Hence, drought risk was underestimated and farmers received fewer payouts, resulting in low risk reducing properties (Tables 1 and 2, A2 and A3). GDD models might be improved using expert knowledge or additional experimental data. Furthermore, crop modelling approaches can provide valuable information to derive estimates of the occurrence dates of plant growth stages, considering differences in the impact of winter and spring temperature loads (vernalization) and length of day (photoperiodism)29,30,31,32. Thus, methods to reduce temporal basis risk must be selected crop-specifically, based on their ability to find occurrence dates of growth stages for the specific crop. Yet, these possible advances have to be weighed against findings that more complex WII solutions lead to lower acceptance on farmers’ side16,17.
Public institutions surveying the phenological development of plants, i.e. the occurrence dates of growth stages, exist in many regions that are important crop insurance markets (see van Vliet et al.33 for Europe or Morellato et al.34 for South and Central America). However, despite their availability and the fact that all approaches tested here could easily be implemented in current practical index insurance schemes, none of them has been considered in practice so far. This reveals a large potential for improvements, especially for WII products. This is particularly valid for countries such as the USA, where the market for WII is well established (premiums paid for WII exceeded 284 m USD in 2016 (www.rma.usda.gov)) and various data sources on crop phenology are not yet used in WII (www.usapn.org).
WIIs are currently being tested in many developing countries where the availability of phenology information might be limited and where the impacts of drought might be more severe35,36. Here, crop specific methods to find the occurrence dates of sensitive growth stages might be implemented. Whereas in our case study the availability of cheap real-time phenology observations constitutes the most cost-effective and, from the farmer’s perspective, most comprehensible tool, it might be worth using more complex approaches in case of phenology data scarcity. In this respect, alternatives to the GDD approach such as ‘biometeorological time’ or ‘physiological days’, as suggested by Saiyed et al.37, or satellite imagery38 could further reduce temporal basis risk. Improving WII by integrating crop modelling seems key in developing better insurance solutions for countries where phenology data is scarce. Moreover, validating GDD models using regional phenological observations could be a practical way to bring together the advantages of both approaches. Consequently, our study discloses a variety of ways to include temporally flexible index designs.
Moreover, our findings contribute to the ongoing debate on the inclusion of novel (big) data sources in agricultural decision making in general and agricultural insurance in particular24. Within the broader picture of smart farming, where “aspects of technology, diversity of crop and livestock systems, and networking and institutions […] are considered jointly”25, we contribute a practical application that combines large and open datasets, crop modelling and meteorological applications with agronomical knowledge. Our findings are thus expected to stimulate further research but also business opportunities in the field of agricultural risk assessment and risk management.
Finally, our findings contribute to improving risk management options based on WII. However, individual risk management options should be compared and embedded in a whole farm analysis. For estimating the optimal risk management strategy for coping with various perils, a more holistic framework might be applied, taking into account the whole crop rotation, livestock production, the financial situation as well as off-farm income, i.e. whole farm/household income39.
## Methods and Data
### Design of the Weather Index Insurance
We aim to develop a WII that reduces the exposure to drought risk, which frequently affects winter wheat yields in our study region40 (see section “Farm Level Yield Data” for a description of the underlying dataset). Thus, winter wheat yield y is expressed as a function of a weather index r, in our case the sum of precipitation within a drought sensitive growth stage:
$$y=g(r)+\varepsilon$$
(1)
More specifically, we implement a cumulative precipitation index $${r}_{tik}^{R}$$, which represents the sum of precipitation within a specific period41,42:
$${r}_{tik}^{R}=\sum _{d=start}^{end}{R}_{ti}^{d}$$
(2)
$${r}_{tik}^{R}$$ denotes the precipitation index of farm i in year t and insurance product k [GDD, Yearly Reporter, Immediate Reporter], summing up daily rainfall $${R}_{ti}^{d}$$. Further, d = dstart and d = dend mark the start and end dates of the accumulation period and should be tailored to water sensitive growth stages. We especially aim to improve flexible start and end date detection by testing three different approaches to find these dates, based on two phenological observation networks and crop growth stage modelling. By specifically suiting the weather index in equation 2 to the drought sensitive growth stages, we avoid including damaging effects of excessive rainfall, which can also be reflected by a rainfall sum index11,43.
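To make the index construction concrete, here is a minimal sketch (not the authors’ code, which is provided in R in the supplementary material). It assumes the daily rainfall of one farm-year is available as a pandas Series indexed by date, and that the stage dates come from one of the three approaches described below; all names are illustrative.

```python
# Minimal sketch: cumulative precipitation index of equation (2).
# Assumes daily rainfall for one farm-year is a pandas Series indexed by date.
import pandas as pd

def precipitation_index(daily_rain: pd.Series, d_start, d_end) -> float:
    """Sum daily rainfall R_ti^d between the stage dates d_start and d_end (inclusive)."""
    return float(daily_rain.loc[d_start:d_end].sum())

# Example with hypothetical stage dates taken from a phenology report:
# rain = pd.Series(values, index=pd.date_range("2009-01-01", periods=365))
# r_tik = precipitation_index(rain, "2009-04-20", "2009-06-10")
```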
Using a European put option design, the WII indemnifies losses caused by low precipitation events. European options are financial products that give the owner the right to exercise the option at a specific point in time. The owner then receives a payout depending on a payout function. In the case of a put option, the insurance payout begins if a specific strike level $${S}_{ik}$$ of precipitation is undercut and rises depending on the option’s tick size $${T}_{ik}$$ (payout per missing index value, in our case mm precipitation). The insurance payout $${\pi }_{tik}^{put}\,$$is determined by $${\pi }_{tik}^{put}=P\cdot [{T}_{ik}\cdot \,\max \{({S}_{ik}-{r}_{tik}^{R}),0\}]$$, where P denotes the winter wheat price. (Note that we assumed the winter wheat price to be 15.80 €/dt (dt denotes deciton, i.e. 100 kg)44; our results are robust against changes in P, as shown in Tables A5 to A8 of the online supplementary file.)
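A corresponding sketch of the put-style payout defined above; the strike, tick size and price are assumed to be given (the 15.80 €/dt price is the paper’s assumption, the function name is ours):

```python
# Sketch of the European put payout pi = P * T * max(S - r, 0) defined above.
def put_payout(r_index: float, strike: float, tick: float, price: float = 15.80) -> float:
    """Indemnity for one farm-year given the realised precipitation index r_index."""
    return price * tick * max(strike - r_index, 0.0)
```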
Extending equation 1, wheat yield $${y}_{ti}$$ of farm i is assumed to be random and stochastically dependent on the weather index $${r}_{tik}^{R}$$ and an error term $${\tilde{\varepsilon }}_{tik}$$:
$${y}_{ti}={c}_{ik}+{\beta }_{ik}\cdot {r}_{tik}^{R}+{\tilde{\varepsilon }}_{tik}$$
(3)
$${c}_{ik}$$ is a constant intercept and $${\beta }_{ik}$$ the slope coefficient of the rainfall index variable, which can be interpreted as the influence of the rainfall index $${r}_{tik}^{R}$$ on yields $${y}_{ti}$$; both are randomly distributed across the years.
We define the strike level $${S}_{ik}$$ as the estimated rainfall value related to the farm individual mean yield $$\bar{y}$$ $$({S}_{ik}={{g}_{ik}}^{-1}({\bar{y}}_{ik}))$$. More specifically, we insert the coefficient estimates $${\hat{\beta }}_{ik}$$ (which represents the option’s tick size $${T}_{ik}$$) and $$\widehat{{c}_{ik}}$$ together with the mean yield $${\bar{y}}_{i}$$ into equation 3 and solve for the corresponding rainfall index value $${r}_{ik}^{R}$$ that marks the strike level of rainfall $${S}_{ik}$$.
Both strike level and tick size are obtained from the quantile regression (QR) outcome, as recently suggested by Conradt et al.9, estimated for each farm separately. The estimation problem is defined as:
$${\hat{\beta }}_{ik}(\tau )=\text{arg}\mathop{\min }\limits_{{\beta }_{ik}\in {\mathbb{R}}}(\tau \cdot \sum _{{y}_{i}\ge {\beta }_{ik}\cdot {r}_{ik}^{R}}|{y}_{i}-{\beta }_{ik}\cdot {r}_{ik}^{R}|+(1-\tau )\cdot \sum _{{y}_{i} < {\beta }_{ik}\cdot {r}_{ik}^{R}}|{y}_{i}-{\beta }_{ik}\cdot {r}_{ik}^{R}|)$$
(4)
QR focuses on a quantile of interest defined by $$\tau$$ and is comparatively robust to outliers, as it minimizes the absolute distance between observations and fitted values. We follow Conradt et al.6, choose $$\tau$$ = 0.3, and thereby tailor the regression to low yield outcomes. We use the statistical software environment R-statistics45 with the additional package ‘quantreg’46. For a detailed description of using quantile regression in weather index insurance design see Conradt et al.6 and Dalhaus and Finger15.
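As an illustration of how strike level and tick size follow from the quantile regression, here is a hedged Python sketch using statsmodels’ QuantReg in place of the R ‘quantreg’ package used by the authors; the yield and index arrays are placeholders:

```python
# Hedged sketch of the strike/tick derivation via quantile regression at tau = 0.3.
import numpy as np
import statsmodels.api as sm

def strike_and_tick(yields: np.ndarray, r_index: np.ndarray, tau: float = 0.3):
    """Fit y = c + beta * r at quantile tau; return (strike S, tick size T = beta_hat)."""
    X = sm.add_constant(r_index)                 # intercept c_ik plus slope beta_ik
    fit = sm.QuantReg(yields, X).fit(q=tau)      # quantile regression at tau
    c_hat, beta_hat = fit.params                 # estimated intercept and slope
    strike = (yields.mean() - c_hat) / beta_hat  # solve mean yield = c + beta * S for S
    return strike, beta_hat
```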
### Determination of Drought sensitive Growth Stages
#### Plant Growth Stage Modelling (GDD)
First, we use a WII conditioned on a plant growth stage modelling approach, i.e. growing degree days (GDD) as suggested by Conradt et al.14 (see Figure 1 for the locations of the temperature weather stations). The occurrence dates of the different crop growth stages are calculated based on required air temperature loads (thermal time). This approach is denoted subsequently as ‘GDD’. We take average seeding dates and, based on these, calculate all subsequent growth stages.
$$GDD=\,\sum _{n=1}^{N}\,{\rm{\max }}(\min \,\{{H}_{n}^{av},{H}^{up}\}-{H}^{base},0)$$
(5)
We thus sum up the mid-range daily air temperature Hav ($${H}^{av}=\frac{{H}^{min}+{H}^{max}}{2}$$; with $${H}^{min}\,{\rm{and}}\,{H}^{max}$$ being the daily minimum and maximum air temperature, respectively) if it is greater than Hbase = 3 °C and lower than Hup = 22 °C. If Hav exceeds Hup, we take Hup as the GDD value, as growth is assumed to remain static beyond this temperature28. After reaching a GDD threshold, the plant is assumed to enter a new growth stage. For our study region, we rely on literature values of these thresholds; see Table 4 for an overview. We consciously decided to rely on literature values only, to ensure a minimum of transaction costs and to propose an easy to implement, highly transparent and cheap insurance product.
GDD values are used to identify the start of the ‘stem elongation’ growth stage and to obtain the start date dstart;GDD for equation 2. We then repeat the procedure for the ‘anthesis’ growth stage and get the end date dend;GDD for equation 2. We decided to use this simple GDD model as it was applied in previous studies on WII13,14 and is straightforward to implement. From the farmer’s perspective, the WII must be easy to understand and its payout determination straightforward; a more complex growth stage model might counteract this requirement16,17.
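A minimal sketch of the GDD accumulation of equation 5 and the resulting stage date follows, under the stated Hbase = 3 °C and Hup = 22 °C caps; the threshold value and the assumption that it is reached within the temperature series are illustrative:

```python
# Minimal sketch of the GDD accumulation in equation (5).
import numpy as np

def gdd_stage_day(t_min: np.ndarray, t_max: np.ndarray, threshold: float,
                  h_base: float = 3.0, h_up: float = 22.0) -> int:
    """Day index (counted from seeding) at which cumulative GDD reaches the stage threshold."""
    h_av = (t_min + t_max) / 2.0
    daily = np.maximum(np.minimum(h_av, h_up) - h_base, 0.0)  # max(min(Hav, Hup) - Hbase, 0)
    cum = np.cumsum(daily)
    # assumes the threshold is actually reached within the temperature series
    return int(np.argmax(cum >= threshold))
```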
### Yearly Phenology Reporter
Second, we condition the WII on phenological observations that indicate growth stages in a particular region and that are reported at the end of the year (see Fig. 2 for location information).
Deutscher Wetterdienst provides occurrence dates of growth stages for a variety of plants (ftp://ftp-cdc.dwd.de/). Within a basis network, 1,200 reporters report on a yearly basis, whereas 400 of these report their findings immediately. For wheat alone, the former reporter dataset consists of ~650,000 observations. Taking into account that observations are available for over 20 different crops and that immediate reporter data is published in real time, this phenology data constitutes a data source of large potential interest for various agricultural applications. Reports include growth information of wild growing but also agricultural flora cultivated under real-world (i.e. non-experimental) conditions. Observers check their reporting area two to three times a week, and on a daily basis in periods of rapid plant development. Atypical topographic points as well as unusual field conditions (climatic or cultivation anomalies) should be avoided. The data of single reporters is cross checked against surrounding reporters within the same natural region before publishing47,48 (within these natural regions plant growth conditions are similar). Similar public networks are available for various other major crop insurance markets (see van Vliet et al.33 for Europe, Morellato et al.34 for South and Central America and www.usapn.org for the US).
Our methodology here closely follows Dalhaus and Finger15. However, in contrast to that study, we focus solely, and in more detail, on one source of basis risk (temporal basis risk), comparing existing approaches and proposing new ones for coping with this issue. This data source provides high quality data, however with the drawback of being reported only at the end of the year. We use phenological observations of stem elongation and ear emergence to determine dstart;yea and dend;yea. As the growth stage of anthesis is not reported in the underlying data, we focus on the ear emergence growth stage, which is closest to the anthesis stage.
The Yearly Reporters observe a reference field cultivated under practical conditions and record a phenological phase when about 50% of all plants have reached it47. The findings are published online at the end of each year. Insurance payouts can thus not be triggered directly after a weather event, but only once the phenology reporters’ data is available.
### Immediate Phenology Reporter
Third, we use an alternative source of phenological observations that comprises a live publishing reporting network (see Fig. 3 for location information). This third database would thus allow a substantially earlier payout in case of adverse weather events but comes with a considerably lower reporting density48. Comparing the Yearly and Immediate Reporter networks thus allows us to reflect the balance between the quality of the index (reporting density) and the timing of indemnification. Despite its potential, this study is the first to consider this latter database in a WII context.
The Immediate Reporters’ network of the Deutscher Wetterdienst contains around 400 reporters publishing phenological development right after the first occurrence of a growth stage. In contrast to Yearly Reporters, all sites within a radius of up to 5 km are considered, to give an impression of plant development in a wider reporting area. Immediate Reporters report the first occurrence of a growth stage within their reporting area. The Immediate Reporters’ network is especially implemented for the use in agricultural consultancy47.
Table 5 summarizes which growth stages are captured within the different reporting networks. The phases with high relevance for drought risk are III (stem elongation) to IV (ear emergence) for both reporting networks. This period captures drought risk during the reproductive phases.
To precisely account for regional specifics, we use the natural regions originally defined by Meynen and Schmitthüsen49 to find appropriate farm-specific reporters. For the Immediate Reporters case, we dropped four farms from the analysis for which no Immediate Reporters’ data was available within the natural region during the whole study period. For cases in which single-year phenology reports were not available, an imputation strategy was applied in which the mean occurrence date across the available years was used as the estimate (see the sketch below).
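The imputation step can be illustrated with a one-line sketch (the data layout is hypothetical):

```python
# Illustrative sketch of the imputation mentioned above: a missing occurrence
# date (as day of year) is replaced by the mean over the available years.
import pandas as pd

def impute_occurrence(doy_by_year: pd.Series) -> pd.Series:
    """Fill missing day-of-year values with the rounded mean of the observed years."""
    return doy_by_year.fillna(round(doy_by_year.mean()))
```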
### Performance testing
As a risk management tool, a WII product is assumed to reduce farmers’ financial exposure to weather risk. In this context, the risk reducing properties of the insurance strongly depend on i) the basis risk that affects insurance efficacy and ii) the decision maker’s risk attitude, which reflects the farmer’s individual valuation of the risk reducing properties of WII. To test the potential of the different WII products to reduce temporal basis risk, we assess farmers’ expected utility of their crop production and implement different scenarios of risk aversion. The insurance product providing the highest expected utility is thus assumed to provide the largest reduction of temporal basis risk. Within this framework, the utility function converts yearly monetary terminal wealth realizations into farm individual utility values, depending on the level of risk aversion. Along these lines, we assume decreasing absolute risk aversion and use a power utility function to display farmers’ downside risk averse preferences (for recent examples in the index insurance context see Dalhaus and Finger15, Berg et al.50, and Leblois et al.51, and for a general motivation of the utility function Di Falco and Chavas52,53 and Finger54). To account for these differences we test several coefficients of relative risk aversion α [0, 0.5, 1, 2, 3, 4], ranging from risk neutral to extremely risk averse55. Assuming that farmers only hold the assets initial wealth $${W}_{0}$$, wheat production and index insurance, the yearly terminal wealth $${W}_{tik}$$ is:
$${W}_{tik}=P\cdot {y}_{ti}+{\pi }_{tik}^{put}-{{\rm{\Gamma }}}_{ik}+{W}_{0}$$
(6)
We used direct payments of 280 €/ha as a proxy for initial wealth. The farm individual yearly utility is then determined by:
$${{\rm{U}}}_{k{\rm{\alpha }}it}({W}_{tik})=\{\begin{array}{c}\frac{{W}_{tik}^{1-{\rm{\alpha }}}}{1-{\rm{\alpha }}}\,if\,\alpha \ne 1\\ \,\mathrm{ln}({W}_{tik})\,if\,\alpha =1\end{array}$$
(7)
This results in a utility value $${{\rm{U}}}_{k{\rm{\alpha }}it}$$ for each WII product, year t, farm i and level of risk aversion α. The mean values across all years reflect the expected utility $$E{{\rm{U}}}_{k{\rm{\alpha }}i}$$ of farm i, insurance k and level of risk aversion α. Subsequently, we test the insurance products against each other across the different levels of risk aversion. More specifically, we use a non-parametric one sided paired Wilcoxon rank sum test to account for the ordinal nature of utility values6,13; ordinality implies that expected utility values may only be compared with respect to their rank, not with respect to their absolute difference.
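To illustrate the utility calculation of equation 7 and the subsequent comparison, a hedged Python sketch follows; scipy’s paired signed-rank test is used here as a stand-in for the Wilcoxon test the authors computed in R, and all input arrays are placeholders:

```python
# Sketch of the performance test: yearly terminal wealth -> power utility (eq. 7)
# -> expected utility per farm -> one-sided paired comparison of two designs.
import numpy as np
from scipy.stats import wilcoxon

def crra_utility(wealth: np.ndarray, alpha: float) -> np.ndarray:
    """Power utility with relative risk aversion alpha; log utility for alpha = 1."""
    return np.log(wealth) if alpha == 1 else wealth ** (1 - alpha) / (1 - alpha)

def expected_utility(wealth_by_year: np.ndarray, alpha: float) -> float:
    return float(crra_utility(wealth_by_year, alpha).mean())

# eu_a, eu_b: per-farm expected utilities for two insurance designs
# stat, p_value = wilcoxon(eu_a, eu_b, alternative="greater")
```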
Assuming a fair premium, the insurance premium $${{\rm{\Gamma }}}_{ik}$$ is equal to the expected payout. Burn rate pricing is used, based on a bootstrapping procedure with 10,000 draws56. More specifically, we draw from the historical realizations of the insurance payouts during the period of study and take the average value of those draws. We moreover use a constant premium during the whole period of study, as we do not expect changes in the risk exposure, e.g. due to climate change, to alter our results. For implementing a marketable insurance product we refer to Kapphan et al.13, who include climate change scenarios in the pricing of WII.
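A sketch of the burn-rate premium under these assumptions (the payout array and the fixed seed are purely illustrative):

```python
# Sketch of fair burn-rate pricing via bootstrapping: the premium is the mean
# of 10,000 resampled historical payouts.
import numpy as np

def burn_rate_premium(historical_payouts: np.ndarray, n_draws: int = 10_000,
                      seed: int = 0) -> float:
    """Fair premium as a bootstrap estimate of the expected payout."""
    rng = np.random.default_rng(seed)
    draws = rng.choice(historical_payouts, size=n_draws, replace=True)
    return float(draws.mean())
```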
### Weather Data
The underlying weather data was provided by the Deutscher Wetterdienst, an independent state institution. Hence, data provision is transparent and comprehensible for policyholders (§ 1, Law of the Deutscher Wetterdienst). For index calculation, two weather variables are necessary: first, precipitation data to determine daily rainfall; second, air temperature data to find growth stages with the GDD approach. For both variables we chose nearby weather stations, with an average distance between farms and stations of 8.5 km for precipitation and 22.06 km for air temperature stations. All weather data was freely available at ftp://ftp-cdc.dwd.de/. Table 6 gives an overview of the precipitation sums calculated using the different approaches of plant growth determination. The considerably higher mean precipitation during the GDD estimated growth phases results from the fact that this approach estimates the stem elongation date systematically too early. All weather data and code used are available in the online supplementary information.
Table A1 of the online supplementary file displays a comparison between rainfall determination approaches using Pearson correlation. While precipitation within reported phases of Yearly and Immediate Reporters is relatively closely related (0.58), GDD based precipitation sums are only weakly correlated with these two (0.24; 0.16).
### Farm Level Yield Data
Our case study was carried out using winter wheat yield data together with latitude and longitude coordinates of 29 northern German crop farms (see Fig. 1 for location information). To account for technical change during the study period from 1996 to 2010, yield data was detrended using linear trends. For summary statistics see Table 7. For a more detailed description of the study area see Dalhaus and Finger15. Pelka and Musshoff57 give a more detailed motivation of why linear detrending was used; they conclude from Heimfarth et al.58 that considering more robust regression approaches59 did not lead to differences in the results.
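For illustration, a minimal linear detrending sketch follows; re-centring yields on the last study year is an assumption made here for the example, not necessarily the paper’s exact normalisation:

```python
# Minimal linear detrending sketch: fit a straight-line trend per farm and
# express yields at the level of the final study year.
import numpy as np

def detrend_linear(years: np.ndarray, yields: np.ndarray) -> np.ndarray:
    """Remove a linear technology trend and re-centre yields on the last year."""
    slope, intercept = np.polyfit(years, yields, deg=1)
    return yields - (intercept + slope * years) + (intercept + slope * years.max())
```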
### Data Availability Statement
Attached to this paper we provide all data used and the underlying R-statistics code to fully replicate our results. To do so, execute the “Temporal Basis Risk.R” code file, which connects to the underlying datasets in “online Appendix.xlsx”.
|
|
## templates – How can I get a single php file that is the equivalent of an existing WordPress page?
Question
When I use my php file as a template and no other content on the page, it works fine, but when I run the file directly (via PhpStorm), there are missing elements (such as comment_status) that cause exceptions. I am doing the require of wp-load and calling get_header() within the file, but obviously something is still missing that a ‘real’ WP page has. So I’d like to see the PHP from the otherwise ‘blank’ page that WP uses. Viewing source of course only shows the final HTML.
<?php
/* Template Name: BlankPage */
|
|
• Y H WANG
Articles written in Bulletin of Materials Science
• Effects of electrolyte concentration and current density on the properties of electro-deposited NiFeW alloy coatings
NiFeW alloy coatings were prepared by electrodeposition, and the effects of ferrous chloride (FeCl$_2$), sodium tungstate (Na$_2$WO$_4$) and current density ($D_K$) on the properties of the coatings were studied. The results show that upon increasing the concentration of FeCl$_2$, the Fe content of the coating initially increased and then tended to be stable; the deposition rate and microhardness of the coating decreased, while the cathodic current efficiency ($\eta$) initially increased and then decreased; for a FeCl$_2$ concentration of 3.6 g l$^{-1}$, the cathodic current efficiency reached its maximum of 74.23%. Upon increasing the concentration of Na$_2$WO$_4$, the W content and microhardness of the coatings increased; the deposition rate and the cathodic current efficiency initially increased and then decreased. The cathodic current efficiency reached its maximum value of 70.33% at a Na$_2$WO$_4$ concentration of 50 g l$^{-1}$, whereas the deposition rate was at its maximum of 8.67 $\mu$m h$^{-1}$ at a Na$_2$WO$_4$ concentration of 40 g l$^{-1}$. Upon increasing $D_K$, the deposition rate, microhardness, and Fe and W content of the coatings increased, while the cathodic current efficiency initially increased and then decreased. When $D_K$ was 4 A dm$^{-2}$, the current efficiency reached its maximum of 73.64%.
|
|
# open command not working in script, but works in console.
ArtR
Joined:
Posts:
2
Location:
SUNY Orange, Middletown, NY
## open command not working in script, but works in console.
I am on a Windows 10 Pro machine:
I cd to WinSCP directory and type in the following command:
>WinSCP.com /script=C:\Users\artr\Documents\ReceiveTEST.txt
The script contains the following code:
option echo on
option batch on
option confirm on
open TESTTran/myTest
lcd T:\TESTRECV
put -nopermissions -nopreservetime *.DAT
exit
I get the Host prompt twice and then it exits the script, as follows:
echo on
option batch on
batch on
reconnecttime 120
option confirm on
confirm on
open
Host:
Host:
However when I type in the WinSCP command:
>WinSCP.com
I am able to execute the open command as it is in the script directly:
winscp> open TESTTran/myTest
Searching for host...
Connecting to host...
Authenticating...
Using username "xxxxxx".
Authenticating with public key "xxxxxx".
Authenticated.
Starting the session...
Session started.
Active session: [1] TESTTran/myTest
winscp>
I am then able to execute the other commands in the script.
Why is the script not working?
martin
Site Admin
Joined:
Posts:
32,391
Location:
Prague, Czechia
## Re: open command not working in script, but works in console.
Please attach full session log files from both tests.
To generate the session log file, use the /log=C:\path\to\winscp.log command-line argument. Submit the log with your post as an attachment. Note that passwords and passphrases are not stored in the log. You may want to remove other data you consider sensitive, such as host names, IP addresses, account names or file names (unless they are relevant to the problem). If you do not want to post the log publicly, you can mark the attachment as private.
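For example, the invocation from the first post with logging enabled would look like this (the log path is just an illustration):
WinSCP.com /script=C:\Users\artr\Documents\ReceiveTEST.txt /log=C:\Users\artr\Documents\winscp.log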
|
|
## A multivector Lagrangian for Maxwell’s equation: A summary of previous exploration.
This summarizes the significant parts of the last 8 blog posts.
## STA form of Maxwell’s equation.
Maxwell’s equations, with electric and fictional magnetic sources (useful for antenna theory and other engineering applications), are
\label{eqn:maxwellLagrangian:220}
\begin{aligned}
\spacegrad \cdot \BE &= \frac{\rho}{\epsilon} \\
\spacegrad \cross \BE &= – \BM – \mu \PD{t}{\BH} \\
\spacegrad \cdot \BH &= \frac{\rho_\txtm}{\mu} \\
\spacegrad \cross \BH &= \BJ + \epsilon \PD{t}{\BE}.
\end{aligned}
We can assemble these into a single geometric algebra equation,
\label{eqn:maxwellLagrangian:240}
\lr{ \spacegrad + \inv{c} \PD{t}{} } F = \eta \lr{ c \rho – \BJ } + I \lr{ c \rho_{\mathrm{m}} – \BM },
where $$F = \BE + \eta I \BH = \BE + I c \BB$$, $$c = 1/\sqrt{\mu\epsilon}, \eta = \sqrt{(\mu/\epsilon)}$$.
By multiplying through by $$\gamma_0$$, making the identification $$\Be_k = \gamma_k \gamma_0$$, and
\label{eqn:maxwellLagrangian:300}
\begin{aligned}
J^0 &= \frac{\rho}{\epsilon}, \quad J^k = \eta \lr{ \BJ \cdot \Be_k }, \quad J = J^\mu \gamma_\mu \\
M^0 &= c \rho_{\mathrm{m}}, \quad M^k = \BM \cdot \Be_k, \quad M = M^\mu \gamma_\mu \\
\end{aligned}
we find the STA form of Maxwell’s equation, including magnetic sources
\label{eqn:maxwellLagrangian:320}
\grad F = J – I M.
## Decoupling the electric and magnetic fields and sources.
We can utilize two separate four-vector potential fields to split Maxwell’s equation into two parts. Let
\label{eqn:maxwellLagrangian:1740}
F = F_{\mathrm{e}} + I F_{\mathrm{m}},
where
\label{eqn:maxwellLagrangian:1760}
\begin{aligned}
F_{\mathrm{e}} &= \grad \wedge A \\
F_{\mathrm{m}} &= \grad \wedge K, \\
\end{aligned}
and $$A, K$$ are independent four-vector potential fields. Plugging this into Maxwell’s equation, and employing a duality transformation, gives us two coupled vector grade equations
\label{eqn:maxwellLagrangian:1780}
\begin{aligned}
\grad \cdot F_{\mathrm{e}} – I \lr{ \grad \wedge F_{\mathrm{m}} } &= J \\
\grad \cdot F_{\mathrm{m}} + I \lr{ \grad \wedge F_{\mathrm{e}} } &= M.
\end{aligned}
However, since $$\grad \wedge F_{\mathrm{m}} = \grad \wedge F_{\mathrm{e}} = 0$$, by construction, the curls above are killed. We may also add in $$\grad \wedge F_{\mathrm{e}} = 0$$ and $$\grad \wedge F_{\mathrm{m}} = 0$$ respectively, yielding two independent gradient equations
\label{eqn:maxwellLagrangian:1810}
\begin{aligned}
\grad F_{\mathrm{e}} &= J \\
\grad F_{\mathrm{m}} &= M,
\end{aligned}
one for each of the electric and magnetic sources and their associated fields.
## Tensor formulation.
The electromagnetic field $$F$$ is a vector-bivector multivector in the multivector representation of Maxwell’s equation, but is a bivector in the STA representation. The split of $$F$$ into its electric and magnetic field components is observer dependent, but we may write it without reference to a specific observer frame as
\label{eqn:maxwellLagrangian:1830}
F = \inv{2} \gamma_\mu \wedge \gamma_\nu F^{\mu\nu},
where $$F^{\mu\nu}$$ is an arbitrary antisymmetric 2nd rank tensor. Maxwell’s equation has a vector and trivector component, which may be split out explicitly using grade selection, to find
\label{eqn:maxwellLagrangian:360}
\begin{aligned}
\grad \cdot F &= J \\
\grad \wedge F &= -I M.
\end{aligned}
Further dotting and wedging these equations with $$\gamma^\mu$$ allows for extraction of scalar relations
\label{eqn:maxwellLagrangian:460}
\partial_\nu F^{\nu\mu} = J^{\mu}, \quad \partial_\nu G^{\nu\mu} = M^{\mu},
where $$G^{\mu\nu} = -(1/2) \epsilon^{\mu\nu\alpha\beta} F_{\alpha\beta}$$ is also an antisymmetric 2nd rank tensor.
If we treat $$F^{\mu\nu}$$ and $$G^{\mu\nu}$$ as independent fields, this pair of equations is the coordinate equivalent to \ref{eqn:maxwellLagrangian:1760}, also decoupling the electric and magnetic source contributions to Maxwell’s equation.
## Coordinate representation of the Lagrangian.
As observed above, we may choose to express the decoupled fields as curls $$F_{\mathrm{e}} = \grad \wedge A$$ or $$F_{\mathrm{m}} = \grad \wedge K$$. The coordinate expansion of either field component, given such a representation, is straightforward. For example
\label{eqn:maxwellLagrangian:1850}
\begin{aligned}
F_{\mathrm{e}}
&= \lr{ \gamma_\mu \partial^\mu } \wedge \lr{ \gamma_\nu A^\nu } \\
&= \inv{2} \lr{ \gamma_\mu \wedge \gamma_\nu } \lr{ \partial^\mu A^\nu – \partial^\nu A^\mu }.
\end{aligned}
We make the identification $$F^{\mu\nu} = \partial^\mu A^\nu – \partial^\nu A^\mu$$, the usual definition of $$F^{\mu\nu}$$ in the tensor formalism. In that tensor formalism, the Maxwell Lagrangian is
\label{eqn:maxwellLagrangian:1870}
\LL = – \inv{4} F_{\mu\nu} F^{\mu\nu} – A_\mu J^\mu.
We may show this through application of the Euler-Lagrange equations
\label{eqn:maxwellLagrangian:600}
\PD{A_\mu}{\LL} = \partial_\nu \PD{(\partial_\nu A_\mu)}{\LL}.
\label{eqn:maxwellLagrangian:1930}
\begin{aligned}
\PD{(\partial_\nu A_\mu)}{\LL}
&= -\inv{4} (2) \lr{ \PD{(\partial_\nu A_\mu)}{F_{\alpha\beta}} } F^{\alpha\beta} \\
&= -\inv{2} \delta^{[\nu\mu]}_{\alpha\beta} F^{\alpha\beta} \\
&= -\inv{2} \lr{ F^{\nu\mu} – F^{\mu\nu} } \\
&= F^{\mu\nu}.
\end{aligned}
So $$\partial_\nu F^{\nu\mu} = J^\mu$$, the equivalent of $$\grad \cdot F = J$$, as expected.
## Coordinate-free representation and variation of the Lagrangian.
Because
\label{eqn:maxwellLagrangian:200}
F^2 =
-\inv{2}
F^{\mu\nu} F_{\mu\nu}
+
\lr{ \gamma_\alpha \wedge \gamma^\beta }
F_{\alpha\mu}
F^{\beta\mu}
+
\frac{I}{4}
\epsilon_{\mu\nu\alpha\beta} F^{\mu\nu} F^{\alpha\beta},
we may express the Lagrangian \ref{eqn:maxwellLagrangian:1870} in a coordinate free representation
\label{eqn:maxwellLagrangian:1890}
\LL = \inv{2} F \cdot F – A \cdot J,
where $$F = \grad \wedge A$$.
We will now show that it is also possible to apply the variational principle to the following multivector Lagrangian
\label{eqn:maxwellLagrangian:1910}
\LL = \inv{2} F^2 – A \cdot J,
and recover the geometric algebra form $$\grad F = J$$ of Maxwell’s equation in its entirety, including both vector and trivector components in one shot.
We will need a few geometric algebra tools to do this.
The first such tool is the notational freedom to let the gradient act bidirectionally on multivectors to the left and right. We will designate such action with over-arrows, sometimes also using braces to limit the scope of the action in question. If $$Q, R$$ are multivectors, then the bidirectional action of the gradient in a $$Q, R$$ sandwich is
\label{eqn:maxwellLagrangian:1950}
\begin{aligned}
Q \lrgrad R
&= \lr{ Q \gamma^\mu \lpartial_\mu } R + Q \lr{ \gamma^\mu \rpartial_\mu R } \\
&= \lr{ \partial_\mu Q } \gamma^\mu R + Q \gamma^\mu \lr{ \partial_\mu R }.
\end{aligned}
In the final statement, the partials are acting exclusively on $$Q$$ and $$R$$ respectively, but the $$\gamma^\mu$$ factors must remain in place, as they do not necessarily commute with any of the multivector factors.
This bidirectional action is a critical aspect of the Fundamental Theorem of Geometric calculus, another tool that we will require. The specific form of that theorem that we will utilize here is
\label{eqn:maxwellLagrangian:1970}
\int_V Q d^4 \Bx \lrgrad R = \int_{\partial V} Q d^3 \Bx R,
where $$d^4 \Bx = I d^4 x$$ is the pseudoscalar four-volume element associated with a parameterization of space time. For our purposes, we may assume that the parameterization uses the standard coordinates associated with the basis $$\setlr{ \gamma_0, \gamma_1, \gamma_2, \gamma_3 }$$. The surface differential form $$d^3 \Bx$$ can be given specific meaning, but we do not actually care what that form is here, as all our surface integrals will be zero due to the boundary constraints of the variational principle.
Finally, we will utilize the fact that bivector products can be split into grade $$0,4$$ and $$2$$ components using anticommutator and commutator products, namely, given two bivectors $$F, G$$, we have
\label{eqn:maxwellLagrangian:1990}
\begin{aligned}
\gpgrade{ F G }{0,4} &= \inv{2} \lr{ F G + G F } \\
\gpgrade{ F G }{2} &= \inv{2} \lr{ F G – G F }.
\end{aligned}
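(A quick aside, not in the original development: one way to see this split is via reversion. For bivectors $$\tilde{F} = -F$$ and $$\tilde{G} = -G$$, so $$\widetilde{F G} = \tilde{G} \tilde{F} = G F$$. Since reversion leaves grades $$0,4$$ unchanged but negates grade $$2$$, the symmetric sum $$\inv{2}\lr{ F G + G F }$$ retains exactly the grade $$0,4$$ part of $$F G$$, while the antisymmetric difference retains the grade $$2$$ part.)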
We may now proceed to evaluate the variation of the action for our presumed Lagrangian
\label{eqn:maxwellLagrangian:2010}
S = \int d^4 x \lr{ \inv{2} F^2 – A \cdot J }.
We seek solutions of the variational equation $$\delta S = 0$$, that are satisfied for all variations $$\delta A$$, where the four-potential variations $$\delta A$$ are zero on the boundaries of this action volume (i.e. an infinite spherical surface.)
We may start our variation in terms of $$F$$ and $$A$$
\label{eqn:maxwellLagrangian:1540}
\begin{aligned}
\delta S
&=
\int d^4 x \lr{ \inv{2} \lr{ \lr{ \delta F } F + F \lr{ \delta F } } - \lr{ \delta A } \cdot J } \\
&=
\int d^4 x \gpgrade{ \lr{ \delta F } F – \lr{ \delta A } J }{0,4} \\
&=
\int d^4 x \gpgrade{ \lr{ \grad \wedge \lr{\delta A} } F – \lr{ \delta A } J }{0,4} \\
&=
-\int d^4 x \gpgrade{ \lr{ \lr{\delta A} \lgrad } F – \lr{ \lr{ \delta A } \cdot \lgrad } F + \lr{ \delta A } J }{0,4} \\
&=
-\int d^4 x \gpgrade{ \lr{ \lr{\delta A} \lgrad } F + \lr{ \delta A } J }{0,4} \\
&=
-\int d^4 x \gpgrade{ \lr{\delta A} \lrgrad F – \lr{\delta A} \rgrad F + \lr{ \delta A } J }{0,4},
\end{aligned}
where we have used arrows, when required, to indicate the directional action of the gradient.
Writing $$d^4 x = -I d^4 \Bx$$, we have
\label{eqn:maxwellLagrangian:1600}
\begin{aligned}
\delta S
&=
-\int_V d^4 x \gpgrade{ \lr{\delta A} \lrgrad F – \lr{\delta A} \rgrad F + \lr{ \delta A } J }{0,4} \\
&=
-\int_V \gpgrade{ -\lr{\delta A} I d^4 \Bx \lrgrad F – d^4 x \lr{\delta A} \rgrad F + d^4 x \lr{ \delta A } J }{0,4} \\
&=
\int_{\partial V} \gpgrade{ \lr{\delta A} I d^3 \Bx F }{0,4}
+ \int_V d^4 x \gpgrade{ \lr{\delta A} \lr{ \rgrad F – J } }{0,4}.
\end{aligned}
The first integral is killed since $$\delta A = 0$$ on the boundary. The remaining integrand can be simplified to
\label{eqn:maxwellLagrangian:1660}
\gpgrade{ \lr{\delta A} \lr{ \grad F - J } }{0},
where the grade-4 filter has also been discarded since $$\grad F = \grad \cdot F + \grad \wedge F = \grad \cdot F$$, as $$\grad \wedge F = \grad \wedge \grad \wedge A = 0$$ by construction, which implies that the only non-zero grades in the multivector $$\grad F - J$$ are vector grades. Also, the directional indicator on the gradient has been dropped, since there is no longer any ambiguity. We seek solutions of $$\gpgrade{ \lr{\delta A} \lr{ \grad F - J } }{0} = 0$$ for all variations $$\delta A$$, namely
\label{eqn:maxwellLagrangian:1620}
\boxed{
\grad F = J.
}
This is Maxwell’s equation in its coordinate free STA form, found using the variational principle from a coordinate free multivector Maxwell Lagrangian, without having to resort to a coordinate expansion of that Lagrangian.
## Lagrangian for fictitious magnetic sources.
The generalization of the Lagrangian to include magnetic charge and current densities can be as simple as utilizing two independent four-potential fields
\label{eqn:maxwellLagrangian:n}
\LL = \inv{2} \lr{ \grad \wedge A }^2 – A \cdot J + \alpha \lr{ \inv{2} \lr{ \grad \wedge K }^2 – K \cdot M },
where $$\alpha$$ is an arbitrary multivector constant.
Variation of this Lagrangian provides two independent equations
\label{eqn:maxwellLagrangian:1840}
\begin{aligned}
\grad F_{\mathrm{e}} &= J \\
\grad F_{\mathrm{m}} &= M.
\end{aligned}
We may add these, scaling the second by $$-I$$ (recall that $$I, \grad$$ anticommute), to find
\label{eqn:maxwellLagrangian:1860}
\grad \lr{ F_{\mathrm{e}} + I F_{\mathrm{m}} } = J – I M,
which is $$\grad F = J – I M$$, as desired.
It would be interesting to explore whether it is possible to find a Lagrangian, dependent on a multivector potential, that would yield $$\grad F = J - I M$$ directly, instead of requiring a superposition of the two independent solutions. One such possible potential is $$\tilde{A} = A - I K$$, for which $$F = \gpgradetwo{ \grad \tilde{A} } = \grad \wedge A + I \lr{ \grad \wedge K }$$. The author was not successful in constructing such a Lagrangian.
## Multivector Lagrangian for Maxwell’s equation.
This is the 5th and final part of a series on finding Maxwell’s equations (including the fictitious magnetic sources that are useful in engineering) from a Lagrangian representation.
[Click here for a PDF version of this series of posts, up to and including this one.] The first, second, third and fourth parts are also available here on this blog.
We’ve found the charge and current density dependent parts of Maxwell’s equations for both electric and magnetic sources, using scalar and pseudoscalar Lagrangian densities respectively.
Now comes the really cool part. We can form a multivector Lagrangian and find Maxwell’s equation in its entirety in a single operation, without resorting to the usual coordinate expansion of the fields.
Our Lagrangian is
\label{eqn:fsquared:980}
\LL = \inv{2} F^2 – \gpgrade{A \lr{ J – I M}}{0,4},
where $$F = \grad \wedge A$$.
The variation of the action formed from this Lagrangian density is
\label{eqn:fsquared:1000}
\delta S = \int d^4 x \lr{
\inv{2} \lr{ F \delta F + (\delta F) F } – \gpgrade{ \delta A \lr{ J – I M} }{0,4}
}.
Both $$F$$ and $$\delta F$$ are STA bivectors, and for any two bivectors the symmetric sum of their products selects the grade 0,4 components of the product. That is, for bivectors $$F, G$$, we have
\label{eqn:fsquared:1020}
\inv{2}\lr{ F G + G F } = \gpgrade{F G}{0,4} = \gpgrade{G F}{0,4}.
This means that the action variation integrand can all be placed into a 0,4 grade selection operation
\label{eqn:fsquared:1040}
\delta S
= \int d^4 x \gpgrade{
(\delta F) F - \delta A \lr{ J - I M}
}{0,4}.
Let’s look at the $$(\delta F) F$$ multivector in more detail
\label{eqn:fsquared:1060}
\begin{aligned}
(\delta F) F
&=
\delta \lr{ \gamma^\mu \wedge \partial_\mu A } F \\
&=
\lr{ \gamma^\mu \wedge \delta \partial_\mu A } F \\
&=
\lr{ \gamma^\mu \wedge \partial_\mu \delta A } F \\
&=
-\lr{ (\partial_\mu \delta A) \wedge \gamma^\mu } F \\
&=
-(\partial_\mu \delta A) \gamma^\mu F
+
\lr{ (\partial_\mu \delta A) \cdot \gamma^\mu } F.
\end{aligned}
This second term is a bivector, so, once filtered with a grade 0,4 selection operator, it will be obliterated.
We are left with
\label{eqn:fsquared:1080}
\begin{aligned}
\delta S
&= \int d^4 x \gpgrade{
-(\partial_\mu \delta A) \gamma^\mu F
- \delta A \lr{ J - I M}
}{0,4}
\\
&= \int d^4 x \gpgrade{
- \partial_\mu \lr{
\delta A \gamma^\mu F
}
+ \delta A \gamma^\mu \partial_\mu F
- \delta A \lr{ J - I M}
}{0,4}
\\
&= \int d^4 x
\gpgrade{
\delta A \lr{ \grad F - \lr{ J - I M} }
}{0,4}.
\end{aligned}
As before, the total derivative term has been dropped, as variations $$\delta A$$ are zero on the boundary. The remaining integrand must be zero for all variations, so we conclude that
\label{eqn:fsquared:1100}
\boxed{
\grad F = J – I M.
}
Almost magically, out pops Maxwell’s equation in its full glory, with both four vector charge and current density, and also the trivector (fictitious) magnetic charge and current densities, should we want to include those.
### A final detail.
There’s one last thing to say. You might have a nagging objection to me having declared that $$\grad F - \lr{ J - I M} = 0$$ when the whole integrand was enclosed in a grade 0,4 selection operator: shouldn’t we have to account for the grade selection operator somehow? Yes, we should, and I cheated a bit by not doing so, but we get the same answer if we do. To handle this with a bit more finesse, we split $$\grad F - \lr{ J - I M}$$ into its vector and trivector components, and consider those separately
\label{eqn:fsquared:1120}
\gpgrade{
\delta A \lr{ \grad F - \lr{ J - I M} }
}{0,4}
=
\delta A \cdot \lr{ \grad \cdot F – J }
+
\delta A \wedge \lr{ \grad \wedge F + I M }.
We require these to be zero for all variations $$\delta A$$, which gives us two independent equations
\label{eqn:fsquared:1140}
\begin{aligned}
\grad \cdot F – J &= 0 \\
\grad \wedge F + I M &= 0.
\end{aligned}
However, we can now add up these equations, using $$\grad F = \grad \cdot F + \grad \wedge F$$ to find, sure enough, that
\label{eqn:fsquared:1160}
\grad F = J – I M,
as stated, somewhat sloppily, before.
## Maxwell’s equations with magnetic charge and current densities, from Lagrangian.
This is the 4th part in a series on finding Maxwell’s equations (including the fictitious magnetic sources that are useful in engineering) from a Lagrangian representation.
[Click here for a PDF version of this series of posts, up to and including this one.] The first and second, and third parts are also available here on this blog.
Now, let’s suppose that we have a pseudoscalar Lagrangian density of the following form
\label{eqn:fsquared:840}
\begin{aligned}
\LL &= F \wedge F + b I A \cdot M \\
&= \inv{4} I \epsilon^{\mu\nu\alpha\beta} F_{\mu\nu} F_{\alpha\beta} + b I A_\mu M^\mu.
\end{aligned}
Let’s fix $$b$$ by evaluating this with the Euler-Lagrange equations
\label{eqn:fsquared:880}
\begin{aligned}
b I M^\alpha
&=
\partial_\alpha \lr{
\inv{2} I \epsilon^{\mu\nu\sigma\pi} F_{\mu\nu} \PD{(\partial_\beta A_\alpha)}{F_{\sigma\pi}}
} \\
&=
\inv{2} I \epsilon^{\mu\nu\sigma\pi}
\partial_\alpha \lr{
F_{\mu\nu} \PD{(\partial_\beta A_\alpha)}{}\lr{\partial_\sigma A_\pi – \partial_\pi A_\sigma}
} \\
&=
\inv{2} I
\partial_\alpha \lr{
\epsilon^{\mu\nu\beta\alpha}
F_{\mu\nu}
-
\epsilon^{\mu\nu\alpha\beta}
F_{\mu\nu}
} \\
&=
I
\partial_\alpha
\epsilon^{\mu\nu\beta\alpha}
F_{\mu\nu}
\end{aligned}
Remember that we want $$\partial_\nu \lr{ \inv{2} \epsilon^{\mu\nu\alpha\beta} F_{\alpha\beta} } = M^\mu$$, so after swapping indexes we see that $$b = 2$$.
We would find the same thing if we vary the Lagrangian directly with respect to variations $$\delta A_\mu$$. However, let’s try that variation with respect to a four-vector field variable $$\delta A$$ instead. Our multivector Lagrangian is
\label{eqn:fsquared:900}
\begin{aligned}
\LL
&= F \wedge F + 2 I M \cdot A \\
&=
\lr{ \gamma^\mu \wedge \partial_\mu A } \wedge \lr{ \gamma^\nu \wedge \partial_\nu A } + 2 (I M) \wedge A.
\end{aligned}
We’ve used a duality transformation on the current term that will come in handy shortly. The Lagrangian variation is
\label{eqn:fsquared:920}
\begin{aligned}
\delta \LL
&=
2 \lr{ \gamma^\mu \wedge \partial_\mu A } \wedge \lr{ \gamma^\nu \wedge \delta \partial_\nu A } + 2 (I M) \wedge \delta A \\
&=
2 \partial_\nu \lr{ \lr{ \gamma^\mu \wedge \partial_\mu A } \wedge \lr{ \gamma^\nu \wedge \delta A } }
-
2 \lr{ \gamma^\mu \wedge \partial_\nu \partial_\mu A } \wedge \lr{ \gamma^\nu \wedge \delta A }
+ 2 (I M) \wedge \delta A \\
&=
2 \lr{ – \lr{ \gamma^\mu \wedge \partial_\nu \partial_\mu A } \wedge \gamma^\nu + I M } \wedge \delta A \\
&=
2 \lr{ – \grad \wedge (\partial_\nu A ) \wedge \gamma^\nu + I M } \wedge \delta A.
\end{aligned}
We’ve dropped the complete derivative term, as the $$\delta A$$ is zero on the boundary. For the action variation to be zero, we require
\label{eqn:fsquared:940}
\begin{aligned}
0
&= – \grad \wedge (\partial_\nu A ) \wedge \gamma^\nu + I M \\
&= \grad \wedge \gamma^\nu \wedge (\partial_\nu A ) + I M \\
&= \grad \wedge \lr{ \grad \wedge A } + I M \\
&= \grad \wedge F + I M,
\end{aligned}
or
\label{eqn:fsquared:960}
\grad \wedge F = -I M.
Here we’ve had to dodge a sneaky detail, namely that $$\grad \wedge \lr{ \grad \wedge A } = 0$$, provided $$A$$ has sufficient continuity that we can assert equality of mixed partials. We will see a way to resolve this contradiction when we vary a Lagrangian density that includes both electric and magnetic field contributions. That’s a game for a different day.
## Square of electrodynamic field.
The electrodynamic Lagrangian (without magnetic sources) has the form
\label{eqn:fsquared:20}
\LL = F \cdot F + \alpha A \cdot J,
where $$\alpha$$ is a constant that depends on the unit system.
My suspicion is that one or both of the bivector or quadvector grades of $$F^2$$ are required for Maxwell’s equation with magnetic sources.
Let’s expand out $$F^2$$ in coordinates, as preparation for computing the Euler-Lagrange equations. The scalar and pseudoscalar components both simplify easily into compact relationships, but the bivector term is messier. We start with the coordinate expansion of our field, which we may write in either upper or lower index form
\label{eqn:fsquared:40}
F = \inv{2} \gamma_\mu \wedge \gamma_\nu F^{\mu\nu}
= \inv{2} \gamma^\mu \wedge \gamma^\nu F_{\mu\nu}.
The square is
\label{eqn:fsquared:60}
F^2 = F \cdot F + \gpgradetwo{F^2} + F \wedge F.
Let’s compute the scalar term first. We need to make a change of dummy indexes, for one of the $$F$$’s. It will also be convenient to use upper indexes in one factor, and lowers in the other. We find
\label{eqn:fsquared:80}
\begin{aligned}
F \cdot F
&=
\inv{4}
\lr{ \gamma_\mu \wedge \gamma_\nu } \cdot \lr{ \gamma^\alpha \wedge \gamma^\beta }
F^{\mu\nu}
F_{\alpha\beta} \\
&=
\inv{4}
\lr{
{\delta_\nu}^\alpha {\delta_\mu}^\beta
– {\delta_\mu}^\alpha {\delta_\nu}^\beta
}
F^{\mu\nu}
F_{\alpha\beta} \\
&=
\inv{4}
\lr{
F^{\mu\nu} F_{\nu\mu}
-
F^{\mu\nu} F_{\mu\nu}
} \\
&=
-\inv{2}
F^{\mu\nu} F_{\mu\nu}.
\end{aligned}
Now, let’s compute the pseudoscalar component of $$F^2$$. This time we uniformly use upper index components for the tensor, and find
\label{eqn:fsquared:100}
\begin{aligned}
F \wedge F
&=
\inv{4}
\lr{ \gamma_\mu \wedge \gamma_\nu } \wedge \lr{ \gamma_\alpha \wedge \gamma_\beta }
F^{\mu\nu}
F^{\alpha\beta} \\
&=
\frac{I}{4}
\epsilon_{\mu\nu\alpha\beta} F^{\mu\nu} F^{\alpha\beta},
\end{aligned}
where $$\epsilon_{\mu\nu\alpha\beta}$$ is the completely antisymmetric (Levi-Civita) tensor of rank four. This pseudoscalar component picks up all the products of components of $$F$$ where all indexes are different.
Now, let’s try computing the bivector term of the product. This will require fancier index gymnastics.
\label{eqn:fsquared:120}
\begin{aligned}
\gpgradetwo{F^2}
&=
\inv{4}
\gpgradetwo{
\lr{ \gamma_\mu \wedge \gamma_\nu } \lr{ \gamma^\alpha \wedge \gamma^\beta }
}
F^{\mu\nu}
F_{\alpha\beta} \\
&=
\inv{4}
\gpgradetwo{
\gamma_\mu \gamma_\nu \lr{ \gamma^\alpha \wedge \gamma^\beta }
}
F^{\mu\nu}
F_{\alpha\beta}
-
\inv{4}
\lr{ \gamma_\mu \cdot \gamma_\nu} \lr{ \gamma^\alpha \wedge \gamma^\beta } F^{\mu\nu} F_{\alpha\beta}.
\end{aligned}
The dot product term is killed, since $$\lr{ \gamma_\mu \cdot \gamma_\nu} F^{\mu\nu} = g_{\mu\nu} F^{\mu\nu}$$ is the contraction of a symmetric tensor with an antisymmetric tensor. We can now proceed to expand the grade two selection
\label{eqn:fsquared:140}
\begin{aligned}
\gpgradetwo{
\gamma_\mu \gamma_\nu \lr{ \gamma^\alpha \wedge \gamma^\beta }
}
&=
\gamma_\mu \wedge \lr{ \gamma_\nu \cdot \lr{ \gamma^\alpha \wedge \gamma^\beta } }
+
\gamma_\mu \cdot \lr{ \gamma_\nu \wedge \lr{ \gamma^\alpha \wedge \gamma^\beta } } \\
&=
\gamma_\mu \wedge
\lr{
{\delta_\nu}^\alpha \gamma^\beta
-
{\delta_\nu}^\beta \gamma^\alpha
}
+
g_{\mu\nu} \lr{ \gamma^\alpha \wedge \gamma^\beta }
-
{\delta_\mu}^\alpha \lr{ \gamma_\nu \wedge \gamma^\beta }
+
{\delta_\mu}^\beta \lr{ \gamma_\nu \wedge \gamma^\alpha } \\
&=
{\delta_\nu}^\alpha \lr{ \gamma_\mu \wedge \gamma^\beta }
-
{\delta_\nu}^\beta \lr{ \gamma_\mu \wedge \gamma^\alpha }
-
{\delta_\mu}^\alpha \lr{ \gamma_\nu \wedge \gamma^\beta }
+
{\delta_\mu}^\beta \lr{ \gamma_\nu \wedge \gamma^\alpha }.
\end{aligned}
Observe that I’ve taken the liberty to drop the $$g_{\mu\nu}$$ term. Strictly speaking, this violates the equality, but it won’t matter since we will contract this with $$F^{\mu\nu}$$. We are left with
\label{eqn:fsquared:160}
\begin{aligned}
\gpgradetwo{F^2}
&=
\inv{4}
\lr{
{\delta_\nu}^\alpha \lr{ \gamma_\mu \wedge \gamma^\beta }
-
{\delta_\nu}^\beta \lr{ \gamma_\mu \wedge \gamma^\alpha }
-
{\delta_\mu}^\alpha \lr{ \gamma_\nu \wedge \gamma^\beta }
+
{\delta_\mu}^\beta \lr{ \gamma_\nu \wedge \gamma^\alpha }
}
F^{\mu\nu}
F_{\alpha\beta} \\
&=
\inv{4}
F^{\mu\nu}
\lr{
\lr{ \gamma_\mu \wedge \gamma^\alpha }
F_{\nu\alpha}
-
\lr{ \gamma_\mu \wedge \gamma^\alpha }
F_{\alpha\nu}
-
\lr{ \gamma_\nu \wedge \gamma^\alpha }
F_{\mu\alpha}
+
\lr{ \gamma_\nu \wedge \gamma^\alpha }
F_{\alpha\mu}
} \\
&=
\inv{4}
\lr{
2 F^{\mu\nu}
\lr{
\lr{ \gamma_\mu \wedge \gamma^\alpha }
F_{\nu\alpha}
+
\lr{ \gamma_\nu \wedge \gamma^\alpha }
F_{\alpha\mu}
}
} \\
&=
\inv{4}
\lr{
2 F^{\nu\mu}
\lr{ \gamma_\nu \wedge \gamma^\alpha }
F_{\mu\alpha}
+
2 F^{\mu\nu}
\lr{ \gamma_\nu \wedge \gamma^\alpha }
F_{\alpha\mu}
},
\end{aligned}
which leaves us with
\label{eqn:fsquared:180}
\gpgradetwo{F^2}
=
\lr{ \gamma_\nu \wedge \gamma^\alpha }
F^{\mu\nu}
F_{\alpha\mu}.
I suspect that there must be an easier way to find this result.
We now have the complete coordinate expansion of $$F^2$$, separated by grade
\label{eqn:fsquared:200}
F^2 =
-\inv{2}
F^{\mu\nu} F_{\mu\nu}
+
\lr{ \gamma_\nu \wedge \gamma^\alpha }
F^{\mu\nu}
F_{\alpha\mu}
+
\frac{I}{4}
\epsilon_{\mu\nu\alpha\beta} F^{\mu\nu} F^{\alpha\beta}.
Tomorrow’s task is to start evaluating the Euler-Lagrange equations for this multivector Lagrangian density, and see what we get.
## Gauge freedom and four-potentials in the STA form of Maxwell’s equation.
[If mathjax doesn’t display properly for you, click here for a PDF of this post]
## Motivation.
In a recent video on the tensor structure of Maxwell’s equation, I made a little side trip down the road of potential solutions and gauge transformations. I thought that was worth writing up in text form.
The initial point of that side trip was just to point out that the Faraday tensor can be expressed in terms of four potential coordinates
\label{eqn:gaugeFreedomAndPotentialsMaxwell:20}
F_{\mu\nu} = \partial_\mu A_\nu – \partial_\nu A_\mu,
but before I got there I tried to motivate this. In this post, I’ll outline the same ideas.
## STA representation of Maxwell’s equation.
We’d gone through the work to show that Maxwell’s equation has the STA form
\label{eqn:gaugeFreedomAndPotentialsMaxwell:40}
This is a deceptively compact representation, as it requires all of the following definitions
\label{eqn:gaugeFreedomAndPotentialsMaxwell:60}
\grad = \gamma^\mu \partial_\mu = \gamma_\mu \partial^\mu,
\label{eqn:gaugeFreedomAndPotentialsMaxwell:80}
\partial_\mu = \PD{x^\mu}{},
\label{eqn:gaugeFreedomAndPotentialsMaxwell:100}
\gamma^\mu \cdot \gamma_\nu = {\delta^\mu}_\nu,
\label{eqn:gaugeFreedomAndPotentialsMaxwell:160}
\gamma_\mu \cdot \gamma_\nu = g_{\mu\nu},
\label{eqn:gaugeFreedomAndPotentialsMaxwell:120}
\begin{aligned}
F
&= \BE + I c \BB \\
&= -E^k \gamma^k \gamma^0 – \inv{2} c B^r \gamma^s \gamma^t \epsilon^{r s t} \\
&= \inv{2} \gamma^{\mu} \wedge \gamma^{\nu} F_{\mu\nu},
\end{aligned}
and
\label{eqn:gaugeFreedomAndPotentialsMaxwell:140}
\begin{aligned}
J &= \gamma_\mu J^\mu \\
J^0 &= \frac{\rho}{\epsilon}, \quad J^k = \eta \lr{ \BJ \cdot \Be_k }.
\end{aligned}
## Four-potentials in the STA representation.
In order to find the tensor form of Maxwell’s equation (starting from the STA representation), we first split the equation into two, since
\label{eqn:gaugeFreedomAndPotentialsMaxwell:180}
\grad F = \grad \cdot F + \grad \wedge F.
The dot product is a four-vector, the wedge term is a trivector, and the current is a four-vector, so we have one grade-1 equation and one grade-3 equation
\label{eqn:gaugeFreedomAndPotentialsMaxwell:200}
\begin{aligned}
\grad \cdot F &= J \\
\grad \wedge F &= 0.
\end{aligned}
The potential comes into the mix, since the curl equation above means that $$F$$ necessarily can be written as the curl of some four-vector
\label{eqn:gaugeFreedomAndPotentialsMaxwell:220}
F = \grad \wedge A.
One justification of this is that $$a \wedge (a \wedge b) = 0$$, for any vectors $$a, b$$. Expanding such a double-curl out in coordinates is also worthwhile
\label{eqn:gaugeFreedomAndPotentialsMaxwell:240}
\begin{aligned}
\grad \wedge \grad \wedge A
&=
\lr{ \gamma_\mu \partial^\mu }
\wedge
\lr{ \gamma_\nu \partial^\nu }
\wedge
A \\
&=
\gamma^\mu \wedge \gamma^\nu \wedge \lr{ \partial_\mu \partial_\nu A }.
\end{aligned}
Provided we have equality of mixed partials, this is a product of an antisymmetric factor and a symmetric factor, so the full sum is zero.
Things get interesting if one imposes a $$\grad \cdot A = \partial_\mu A^\mu = 0$$ constraint on the potential. If we do so, then
\label{eqn:gaugeFreedomAndPotentialsMaxwell:260}
\grad^2 A = J.
Observe that $$\grad^2$$ is the wave equation operator (often written as a square-box symbol.) That is
\label{eqn:gaugeFreedomAndPotentialsMaxwell:280}
\begin{aligned}
\grad^2
&= \partial^\mu \partial_\mu \\
&= \partial_0 \partial_0
– \partial_1 \partial_1
– \partial_2 \partial_2
– \partial_3 \partial_3 \\
\end{aligned}
This is also an operator for which the Green’s function is well known ([1]), which means that we can immediately write the solutions
\label{eqn:gaugeFreedomAndPotentialsMaxwell:300}
A(x) = \int G(x,x’) J(x’) d^4 x’.
However, we have no a-priori guarantee that such a solution has zero divergence. We can fix that by making a gauge transformation of the form
\label{eqn:gaugeFreedomAndPotentialsMaxwell:320}
A \rightarrow A – \grad \chi.
Observe that such a transformation does not change the electromagnetic field
\label{eqn:gaugeFreedomAndPotentialsMaxwell:340}
\grad \wedge \lr{ A - \grad \chi } = \grad \wedge A,
since
\label{eqn:gaugeFreedomAndPotentialsMaxwell:360}
\grad \wedge \grad \chi = 0
(also by equality of mixed partials.) Suppose that $$\tilde{A}$$ is a solution of $$\grad^2 \tilde{A} = J$$, and $$\tilde{A} = A + \grad \chi$$, where $$A$$ is a zero divergence field to be determined, then
\label{eqn:gaugeFreedomAndPotentialsMaxwell:380}
\grad \cdot \tilde{A} = \grad \cdot A + \grad^2 \chi = \grad^2 \chi,
or
\label{eqn:gaugeFreedomAndPotentialsMaxwell:400}
\grad^2 \chi = \grad \cdot \tilde{A}.
So if $$\tilde{A}$$ does not have zero divergence, we can find a $$\chi$$
so that $$A = \tilde{A} – \grad \chi$$ does have zero divergence.
|
|
ISSN: 2067-239X
ISSN(on-line): 2067-239X
Indexed in:
Mathematical Reviews
Zentralblatt MATH
EBSCO
Front cover
# Volume 14 (2022), Number 1
## Fixed Point Theorem for a Meir-Keeler Type Mapping in a Metric Space With a Transitive Relation
### Author(s): KOJI AOYAMA and MASASHI TOYODA
Abstract: The aim of this paper is to provide characterizations of a Meir-Keeler type mapping and a fixed point theorem for the mapping in a metric space endowed with a transitive relation.
## A New Way of Finding Trapezium Inequality Involving Harmonic Convex Functions Through Generalized Fractional Integrals
Abstract: The main objective of this paper is to obtain some new trapezium type inequalities essentially involving the class of harmonic convex functions and generalized fractional integrals.
## On Some Refinements of Jensen and Related Inequalities With Applications
### Author(s): SADIA CHANAN and ASIF R. KHAN
Abstract: The aim of this article is to give the cyclic refinement of Jensen’s inequality, its variant and its extension by considering real weights. Applications to the Ky Fan inequality and cyclic mixed symmetric means are also given.
## On the Practical Stabilization of Infinite-Dimensional Perturbed Systems
### Author(s): HANEN DAMAK and MOHAMED ALI HAMMAMI
Abstract: In this paper, we investigate the notion of practical feedback stabilization of a class of non-autonomous infinite-dimensional systems. Assuming appropriate conditions on the perturbation term, it is shown that if every frozen-time control system is stabilizable then the corresponding non-autonomous infinite-dimensional control system is practically stabilizable. Sufficient conditions for the practical feedback stabilizability on a separable Hilbert space are given. This approach is based on the freezing method. Some examples are considered to illustrate the result obtained.
## The Vigenère Cipher
### Author(s): ANA-MARIA DOBRIȚOIU
Abstract: This paper contains an overview of a well-known encryption method, Vigenère’s cipher, which, although easy to understand and implement, seems impossible for beginners to break; this is why it has been described as ”le chiffre indéchiffrable”.
## On the Addition of a Dirac Mass to a $q$-Laguerre-Hahn Form
### Author(s): S. JBELI and L. KHÉRIJI
Abstract: Our goal is to study the addition of a Dirac mass to an $H_q$-Laguerre-Hahn form, where $H_q$ is the $q$-derivative operator. The $H_q$-Laguerre-Hahn character and the class of the obtained form are discussed in detail. An example in connection with the first order associated form of an $H_q$-classical form is highlighted.
## A New Efficient Strategy for Solving the System of Cauchy Integral Equations via Two Projection Methods
### Author(s): ABDELAZIZ MENNOUNI
Abstract: It is interesting to solve a system of singular integral equations with bounded non-compact operators by projection methods in Hilbert space. The theory of projection methods for solving a system of Cauchy integral equations in $L^2([0,1],\mathbb{C}) \times L^2([0,1],\mathbb{C})$ is developed and extended in this work. We look at two situations: Galerkin approximations and Kulkarni approximations. This is accomplished through the use of a sequence of orthogonal finite-rank projections. A new efficient technique is used to obtain an equivalent system of two separable equations. The existence of the approximate solution, as well as the error analysis, is established. A numerical example illustrates the theoretical results.
## Corruption in the EU-27. Time Analysis of the Corruption Perception Index
### Author(s): STOICUTA NADIA ELENA and STOICUTA OLIMPIU
Abstract: One of the most targeted issues in the world today is the reduction of corruption. As efforts to combat this deeply damaging phenomenon stagnate around the world, human rights and democracy are under attack. According to studies, most countries in the world have not seen significant declines in corruption in the last decade. The global COVID-19 pandemic has also been used in many countries as an excuse to avoid both medical and financial controls. Based on these considerations, this article proposes an econometric analysis of the evolution of the corruption perception index over time for five EU-27 Member States, namely Denmark, Germany, Poland, Romania and Bulgaria. Based on the econometric models analyzed for each state, 3-year short-term forecasts will be made.
## Nonholonomic Nonlinear Systems: A General Energy Equation
### Author(s): FEDERICO TALAMUCCI
Abstract: Nonholonomic systems with nonlinear restrictions with respect to the velocities are considered. The mathematical problem is formulated by means of the Voronec equations extended to the nonlinear case. The double formulation, both by means of the acceleration vector and by means of the Lagrangian function, turns out to be convenient depending on the aspects and properties to be pointed out. The main point of the paper is the balance of the mechanical energy induced by the equations of motion; the conservation of energy on the basis of the typology of the constraint equations is discussed. The special form of the energy equation makes it possible to identify the categories of nonlinear constraints which entail the conservation of energy.
## Notes on the n-th Power of Generalized $(S,T)$-Pell and $(S,T)$-Pell Lucas Matrix Sequences
Abstract: In this study, two matrix sequences of special $2\times 2$ matrices are defined, with elements that are generalizations of the Pell and Pell-Lucas sequences. Some formulas for the $n$-th powers of these special matrix sequences are established via the determinant and trace of these matrices. With these formulas, some properties of these generalized number sequences are demonstrated. The results also apply to the classic Pell and Pell-Lucas numbers if we substitute $s=t=1$.
|
|
# Single atom states with energy 0, probability and occupation number fermions and bosons
Two mutually non-interacting atoms are trapped in a double-well potential in equilibrium at a temperature $$T$$, such that an atom can only occupy two possible single-atom quantum states, $$\Psi_a(x)$$ and $$\Psi_b(x)$$, each corresponding to the atom occupying one of the two wells of the trap.
(b) Assume both single-atom states have energy 0. Determine the probabilities $$p_j(n)$$ of finding $$n$$ atoms in well $$j$$, as well as the average occupation number $$n_j$$ of each well ($$j = a, b$$), if the two atoms are:
i. bosonic atoms of the same species;
ii. fermionic atoms of the same species;
iii. of different species.
Ok, so since both states have energy 0, that means the probability of finding the bosons or fermions in either a or b is 1/Z, right?
For bosons I got three possible microstates: 2 bosons in state A; 2 bosons in state B; or 1 boson in A and 1 in B. So that means Z = 3 and the probability for n atoms to populate state A is 1/3, the same as for state B. Am I correct? And for fermions I have only 1 possible microstate, right?
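A brute-force check of this counting (a sketch, not part of the original question; it only enumerates the zero-energy configurations described above for the three cases in part (b)):

from itertools import product
from collections import Counter

wells = ('a', 'b')

def microstates(kind):
    """All 2-atom microstates at energy 0 for the three cases in part (b)."""
    ordered = list(product(wells, repeat=2))                 # (atom 1's well, atom 2's well)
    if kind == 'different species':
        return ordered                                       # distinguishable atoms: 4 states
    unordered = sorted(set(tuple(sorted(p)) for p in ordered))
    if kind == 'fermions':
        return [s for s in unordered if s[0] != s[1]]        # Pauli exclusion: only ('a', 'b')
    return unordered                                         # identical bosons: 3 states

for kind in ('bosons', 'fermions', 'different species'):
    states = microstates(kind)
    Z = len(states)                                          # all energies are 0, so equal weights
    counts = Counter(s.count('a') for s in states)           # how many states have n atoms in well a
    print(kind, 'Z =', Z, {n: c / Z for n, c in sorted(counts.items())})
    print('  <n_a> =', sum(n * c / Z for n, c in counts.items()))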
|
|
Jun 2 – 7, 2019
Carnegie Mellon University
America/New_York timezone
## The prediction of J/psi Photo-production
Jun 4, 2019, 2:00 PM
30m
Rangos 1
### Speaker
Jiajun Wu (University of Chinese Academy of Sciences)
### Description
A Pomeron-exchange model of the $\gamma p \to J/\psi p$ reaction has been used to make predictions for the ongoing experiments at JLab. The parameters of the Pomeron-exchange amplitudes are determined by fitting the total cross section data of $\gamma p \to J/\psi p$ up to the very high energy W = 300 GeV. To provide information for the search for nucleon resonances with hidden charm $N^*_{c\bar{c}}$, we then make predictions by including the resonant amplitude of $\gamma p \to N^*_{c\bar{c}} \to J/\psi p$ calculated from an available meson-baryon (MB) coupled-channel model of $N^*_{c\bar{c}}$ with MB = $\rho N$, $\omega N$, $J/\psi N$, $\bar{D}\Lambda_c$, $\bar{D}^* \Lambda_c$, $\bar{D}\Sigma_c$, $\bar{D}^*\Sigma_c$, $\bar{D}\Sigma^*_c$. The $N^*_{c\bar{c}}\to MB$ vertex interactions are determined from the partial widths predicted by various theoretical models and SU(4) symmetry. The $\gamma p \to N^*_{c\bar{c}}$ transition is calculated from the Vector Meson Dominance (VMD) model as $\gamma p \to V p \to N^*_{c\bar{c}}$ with V = $\rho$, $\omega$, $J/\psi$. The model then depends on an off-shell form factor $\Lambda^4/(\Lambda^4 + (q^2 - m^2_V)^2)$ which is needed to account for the $q^2$-dependence of the VMD model.
It has been found that with $\Lambda = 0.55$ GeV, the predicted total cross sections are within the range of the very limited data in the energy region near $J/\psi$ production threshold. We then demonstrate that the $N^*_{c\bar{c}}$ can be most easily identified in the differential cross sections at large angles where the contribution from Pomeron-exchange becomes negligible.
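For orientation, a minimal numerical sketch (not from the talk; it simply evaluates the off-shell form factor quoted above, taking the cutoff in the numerator to be the same $\Lambda$, with the stated value $\Lambda = 0.55$ GeV):

# F(q^2) = Lambda^4 / (Lambda^4 + (q^2 - m_V^2)^2), with Lambda = 0.55 GeV
def form_factor(q2, m_v, lam=0.55):
    return lam**4 / (lam**4 + (q2 - m_v**2)**2)

# e.g. for V = J/psi (m_V ~ 3.097 GeV) at the real-photon point q^2 = 0:
print(form_factor(0.0, 3.097))   # strong suppression relative to the on-shell value of 1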
### Primary authors
Jiajun Wu (University of Chinese Academy of Sciences) Prof. T.-S. Harry Lee (Argonne National Laboratory) Prof. Bing-Song Zou (Institute of Theoretical Physics, Chinese Academy of Sciences)
|
|
# Python memoization decorator
I have spent all night whipping up this recipe. It's my first Python decorator. I feel like I have a full understanding of how decorators work now and I think I came up with a good object-oriented algorithm to automatically provide memoization. Please let me know what you think.
I made a few quick changes after pasting it in here so please let me know if my changes broke something (don't have an interpreter on hand).
"""This provides a way of automatically memoizing a function. Using this eliminates the need for extra code. Below is an example of how it would be used on a recursive Fibonacci sequence function:
def fib(n):
if n in (0, 1): return n
return fib(n - 1) + fib(n - 2)
fib = memoize(fib)
That's all there is to it. That is nearly identical to the following:
_memos = {}
def fib(n):
if n in _memos:
return _memos[n]
if n in (0, 1):
_memos[n] = n
return _memos[n]
_memos[n] = fib(n - 1) + fib(n - 2)
return _memos[n]
The above is much more difficult to read than the first method. To make things even simpler, one can use the memoize function as a decorator like so:
@memoize
def fib(n):
if n in (0, 1): return n
return fib(n - 1) + fib(n - 2)
Both the first and third solutions are completely identical. However, the latter is recommended due to its elegance. Also, note that functions using keywords will purposely not work. This is because this memoization algorithm does not store keywords with the memos as it HEAVILY increases the CPU load. If you still want this functionality, please implement it at your own risk."""
class memoize:
"""Gives the class it's core functionality."""
def __call__(self, *args):
if args not in self._memos:
self._memos[args] = self._function(*args)
return self._memos[args]
def __init__(self, function):
self._memos = {}
self._function = function
# Please don't ask me to implement a get_memo(*args) function.
"""Indicated the existence of a particular memo given specific arguments."""
def has_memo(self, *args):
return args in self._memos
"""Returns a dictionary of all the memos."""
@property
def memos(self):
return self._memos.copy()
"""Remove a particular memo given specific arguments. This is particularly useful if the particular memo is no longer correct."""
def remove_memo(self, *args):
del self._memos[args]
"""Removes all memos. This is particularly useful if something that affects the output has changed."""
def remove_memos(self):
self._memos.clear()
"""Set a particular memo. This is particularly useful to eliminate double-checking of base cases. Beware, think twice before using this."""
def set_memo(self, args, value):
self._memos[args] = value
"""Set multiple memos. This is particular useful to eliminate double-checking of base cases. Beware, think twice before using this."""
def set_memos(self, map_of_memos):
self._memos.update(map_of_memos)
• Why would you need anything else than __call__ and __init__? – Quentin Pradet Feb 29 '12 at 9:55
• Did you read the comments for the set_memo(self, args, value) and set_memos(self, map_of_memos)? Eliminating particular base cases will usually not make too big of a difference. Also, something that the function uses may update and one may want to remove some or all values to recalculate them. I can't think of examples where one would want to do this, but I thought I'd throw it in there for the heck of it. – Tyler Crompton Feb 29 '12 at 17:12
• __contains__ instead of has_memo ? Then you can ask 10 in fib. – Adrian Panasiuk Mar 4 '12 at 2:02
• @AdrianPanasiuk, ah, didn't think about that. Thanks! – Tyler Crompton Mar 4 '12 at 20:46
• is it like lru_cache, cause it's in standard python there – user8426627 May 27 '19 at 18:23
## 2 Answers
The first thing that came to mind looking at your code was: style.
Is there a particular reason for placing your doc-strings above the functions instead of below them? The way you're doing it will show None in the __doc__ attribute.
IMHO some of those strings are not even doc-strings:
def __call__(self, *args):
    """Gives the class its core functionality."""
    # ...
It doesn't really say much. Keep also in mind that comments are for the programmers, docstrings for the users.
• PEP8 tells you all about this style guide,
• and PEP257 is specifically about Docstring Conventions.
Also I didn't like very much that you put the __call__ method before __init__, but I don't know if that is just me, or there's some convention about that.
That cleared up, I fail to see the point of all the methods you've written besides __init__ and __call__. What's their use? Where does your code use them?
If you need something, write the code for it; otherwise don't. Or you'll be writing something that you don't need and that will probably not match your hypothetical requirements of tomorrow.
I get that you were probably doing an exercise to learn about decorators, but when implementing something, don't ever write code just for the heck of it.
Let's take a deeper look at your non-doc strings, like this one:
"""Removes all memos. This is particularly useful if something that affects the output has changed."""
def remove_memos(self):
self._memos.clear()
That should probably just be:
def remove_memos(self):
"""Removes all memos."""
self._memos.clear()
And nothing more. What on earth does "This is particularly useful if something that affects the output has changed." mean? "Something that affects the output has changed"? It's all very strange and confusing. Also, how will your decorator know that "something that affects the output has changed"?
There's nothing in here:
def __call__(self, *args):
if args not in self._memos:
self._memos[args] = self._function(*args)
return self._memos[args]
that does that or that uses any of the other methods. Also, you don't seem to be able to provide a scenario where they might be used (and even if you could, there's still no way to use them).
My point is that all those additional methods are useless. It probably wasn't a waste to write them if you learned some Python along the way, but that is as far as their usefulness goes.
• I haven't really used docstrings that much so thank you for the tips. I sorted the methods in alphabetical order. (It's easier to find a method that way). But that really doesn't matter at all. Perhaps I was gold plating but I can see the extra methods' purposes so I went ahead and implemented them. They can be useful in some scenarios. There's no requirement to use them. – Tyler Crompton Mar 1 '12 at 21:12
• @TylerCrompton: (1) Don't apply the alphabetical order to __init__, because others are expecting to find it at the beginning if it's implemented. (Take a look into the python standard library code :) Also, I can't stress this enough: your coding style is important if you want others to read/use/share your code. (2) Which scenario? If you can't provide one they're useless. – Rik Poggi Mar 2 '12 at 12:35
• In regards to number 2, have a look at the docstrings. Surely those will provide enough information as to what scenarios that they would be good for. – Tyler Crompton Mar 3 '12 at 8:20
• @TylerCrompton: No offense, but I've already read your non-doc strings, yet I don't see how those methods could be used. – Rik Poggi Mar 3 '12 at 22:08
• Perhaps you misunderstand. I do understand that my docstrings are wrong. I get that. The "something that affects the output has changed" phrase means if one is accessing global variables and that variable changes and by doing so affects the output. The decorator isn't supposed to know when it changes. It's the programmer's responsibility to detect that. If I want to clear the memos, I would just do function_name.remove_memos(). – Tyler Crompton Mar 4 '12 at 20:45
"""Gives the class it's core functionality."""
def __call__(self, *args):
if args not in self._memos:
self._memos[args] = self._function(*args)
return self._memos[args]
It's more idiomatic (and faster) to "ask for forgiveness rather than permission", e.g.:
def __call__(self, *args):
try:
return self._memos[args]
except KeyError:
value = self._function(*args)
self._memos[args] = value
return value
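For comparison, the standard library's functools.lru_cache (available since Python 3.2, and mentioned in the comments above) already covers the common case; a minimal sketch:

from functools import lru_cache

@lru_cache(maxsize=None)   # unbounded cache, roughly what the hand-rolled memoize does
def fib(n):
    if n in (0, 1):
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(100))            # fast; fib.cache_info() reports hits and misses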
• Didn't know about this idiom. I tend to use exceptions only to denote really unusual situations (eg. "file not found"). – Quentin Pradet Feb 29 '12 at 8:36
• It's actually optimal to use in Python for several reasons. See books.google.se/… for further reading :) – rreillo Feb 29 '12 at 10:49
• Thanks for the reference! I was using a C idiom. By the way, EAFP is also defined in the docs glossary – Quentin Pradet Feb 29 '12 at 11:01
• Shouldn't that be a KeyError? – Winston Ewert Feb 29 '12 at 16:57
• @raba, Does storing the result in a temporary variable really make that much of a difference? Just curious. – Tyler Crompton Feb 29 '12 at 17:09
|
|
# Definition of Nabla Operator
$$\vec{\nabla} = \left(\frac{\partial}{\partial x_1}, \cdots, \frac{\partial}{\partial x_n}\right)$$
If $$\vec{\nabla}$$ is a $$1\times n$$ vector, then how can
$$\vec{\nabla}f = \operatorname{grad}f = \left(\frac{\partial}{\partial x_1}, \cdots, \frac{\partial}{\partial x_n}\right)^T$$
be a $$n\times1$$ vector?
If I take $$\vec{x} = \begin{pmatrix}x_1 \\x_2\\x_3\end{pmatrix}$$ then $$\quad\vec{x} \cdot a= \begin{pmatrix}a \cdot x_1 \\a \cdot x_2\\a \cdot x_3\end{pmatrix}$$
and $$\vec{x} \cdot a = \left(a \cdot x_1, a \cdot x_2, a \cdot x_3\right)$$.
Why does the vector change from $$1\times n$$ to $$n\times 1$$? Or more specifically why did I get $$\vec{\nabla} = \left(\frac{\partial}{\partial x_1}, \cdots, \frac{\partial}{\partial x_n}\right)^T$$ marked as wrong when asked the definition of the nabla operator?
Update
${\bf 1.\ }$In the first place $\left({\partial\over\partial x},{\partial\over\partial y},{\partial\over\partial z}\right)$ is not a vector, i.e. an element of some vector space, but a mnemotechnical device.
${\bf 2.\ }$There is no single consistent notation dealing with first derivatives of multivariate functions and all that, which is in force worldwide as of today.
${\bf 3.\ }$In the following I'm describing how I am perceiving things.
Given a differentiable function $f:\>{\mathbb R}^3\to{\mathbb R}$ and a point $p\in{\rm dom}(f)$ one has $$f(p+X)-f(p)=df(p).X+o(|X|)\qquad(X\to0)\ .$$ The differential map $df(p):\>T_p\to{\mathbb R}$ is a linear functional on the tangent space $T_p\simeq{\mathbb R}^n$. In terms of matrix calculus the data specifying $df(p)$ would then be collected in a row vector: $$[df(p)]=[\matrix{f_{.1}(p)&f_{.2}(p)&f_{.3}(p)\cr}]\ .$$ In this way the evaluation $df(p).X$ becomes an ordinary matrix product: $$df(p).X=[\matrix{f_{.1}(p)&f_{.2}(p)&f_{.3}(p)\cr}]\left[\matrix{X_1\cr X_2\cr X_3\cr}\right]\ .$$ Now in ${\mathbb R}^n$ we have an additional structure element, namely the standard scalar product $\cdot\>$. When the points $x\in{\mathbb R}^n$ are considered as $n\times1$ column vectors $[x]$ then the scalar product can be written as a matrix product: $$x\cdot y=[x]^\top [y]\ .$$The scalar product allows to identify the space $T_p^*$ of linear functionals on $T_p$ with the space $T_p$ of "geometrical" tangent vectors itself. This means that there is a certain vector $a\in T_p$ representing $df(p)$. This vector $a$ is called the gradient of $f$ at $p$, and is denoted by $\nabla f(p)$. One then has $$df(p).X=\nabla f(p)\cdot X\ .\tag{1}$$ In terms of coordinates the gradient is of course given by $$\nabla f(p)=\bigl(f_{.1}(p),f_{.2}(p),f_{.3}(p)\bigr)\ ,$$ and being an element of $T_p$ this gradient can be conceived as a column vector, as are the increment vectors $X\in T_p$.
Now, if you insist on computing scalar products in terms of matrix algebra the vector $\nabla f(p)\in T_p$ has to be converted into a row vector, so that $(1)$ assumes the form $$df(p).X=\bigl[\nabla f(p)\bigr]^\top[X]\ .$$
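A quick numerical illustration of the row-vs-column point (a sketch with a made-up example $f(x,y,z)=xy+z^2$, not from the original answer): stored as a plain array, the same gradient can be used either through the scalar product or, after transposition, as a $1\times n$ matrix acting on a column vector.

import numpy as np

def grad_f(p):
    """Gradient of the made-up example f(x, y, z) = x*y + z**2."""
    x, y, z = p
    return np.array([y, x, 2.0 * z])

p = np.array([1.0, 2.0, 3.0])          # base point
X = np.array([0.1, -0.2, 0.05])        # an increment vector in T_p

df_dot = grad_f(p) @ X                                   # scalar product of two "geometric" vectors
df_mat = grad_f(p).reshape(1, -1) @ X.reshape(-1, 1)     # row vector (1 x n) times column vector (n x 1)

print(df_dot, df_mat.item())           # the same number either way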
• Can you clear up your first equation (unnumbered) a little bit more for people with a little less mathematical background? I have a hard time following your argumentation to be honest. – idkfa Mar 6 '16 at 16:58
|
|
# Food for Thought
Food prices worldwide have gone up over the last 8 or 9 months.
Some have tried to blame it on the use of corn for ethanol in the U.S., and by extension, on efforts to curb global warming. This is total bull, and in my opinion, one of the most despicable tactics yet employed by those who deny the reality of global warming. Ethanol-from-corn started well before the sharp increase in food prices, but there is a unique trigger to the price increase which is ridiculously easy to identify. It’s even been in the news. And it’s the one the denialists don’t want you to think about.
In fact it might just be the single thing that denialists most want to conceal.
Let’s take a look at the food price index:
It’s even broken down by the type of food commodity:
This graph might give the impression that much (if not most) of the latest price rise is due to an increase in the price of sugar. But that’s not so, because the food price index isn’t just a simple average of the commodities prices. Sugar is only a small contributor to the food price index, in fact it’s the smallest contributor among all these commodities. The food price index is computed according to:
$FPI = 0.347 \times Meat + 0.168 \times Dairy + 0.271 \times Cereals + 0.142 \times Oils + 0.072 \times Sugar$.
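(In code, the weighting amounts to nothing more than the sketch below; the commodity index values are made-up placeholders, not FAO data.)

weights = {'meat': 0.347, 'dairy': 0.168, 'cereals': 0.271, 'oils': 0.142, 'sugar': 0.072}

def food_price_index(indices):
    """Weighted sum of the commodity price indices, using the weights above."""
    return sum(weights[k] * indices[k] for k in weights)

# Illustrative (made-up) commodity index values:
example = {'meat': 170.0, 'dairy': 230.0, 'cereals': 245.0, 'oils': 260.0, 'sugar': 420.0}
print(round(food_price_index(example), 1))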
If we want a breakdown of the contribution of each commodity to overall food prices, we should graph their contributions to the food price index. And here they are:
Let’s take a closeup view of the most recent 6 years or so:
Now it’s plain to see. The 2010-2011 rise in food prices was triggered by a rise in cereals prices. It started in July of 2010, and still hasn’t relented. Sure, there are other factors too — and as usually happens, an increase in cereals prices can cause a “ripple effect,” leading to increased prices in other commodities. But the root cause is something that happened to cereals prices in July of 2010.
And what on earth could that be?
“Cereals” includes wheat. What if one of the world’s largest wheat producers had devastating crop losses, reducing their production by a third? What if they were one of the world’s biggest wheat exporters, but production was so reduced that their exports dropped to zero? That’s exactly what happened in Russia. And the trouble started in July of 2010.
Why did the Russian wheat harvest suffer so? Because of the record-breaking heat wave and drought which plagued a massive region, at just the wrong time for Russian agriculture. And one of the contributing factors is: global warming.
That’s the ugly, deadly dangerous secret the denialists don’t want you to think about. That’s why they sank so low as to try to blame inflation of food prices on attempts to fight global warming, when it’s really due to: global warming.
Here’s the much uglier, much more dangerous truth: this is just the beginning. It’s gonna get worse. A lot worse. Be afraid. Be very afraid.
But for the sake of this, and the next few, generations, don’t just be afraid. Get off your ass and do something. Make our politicians get off their asses and do something.
### 111 responses to “Food for Thought”
1. Daniel Bailey
Thanks for yet another insightful analysis, Tamino.
Cause and effect: whodathunkit?
(Methinks BPL’s being a little too conservative with his 2056 prediction)
The Yooper
2. Steve L
And the previous jump in cereal prices, in 2008, was due to fuel prices … right? And the denialists are still trying to keep the world chained to oil … right?
They’re despicable … right? Usually I just say they’re wrong, but that’s not enough. Time to do something. Thanks for the reminder.
3. Jim Eager
“this is just the beginning. It’s gonna get worse. A lot worse. Be afraid. Be very afraid.”
Yep, I’ve been saying as much to anyone and everyone for a while now.
The response: usually a blank, uncomprehending stare.
It’s not just Russian wheat, but also Chinese wheat, and rice in Pakistan, and virtually every crop in eastern Australia. All in the same 12 month period.
And it is indeed just the beginning.
4. Whereas, as I understood it at the time, the 2008 spike in cereal prices did have something to do with biofuels. Maize ethanol is obviously a stupid idea, but not as stupid as ignoring global warming.
• JCH
In 2008 a friend of mine who farms multiple sections in Kansas called me and told me he had just paid over $700 to fill his tractor with diesel fuel. I do oil, and he was just letting me know from where my money was coming. Most of his corn went to ethanol, but he was also getting the residue back and feeding it to his cattle. It has to be supplemented, but he seemed to think that system was working very well. One misconception people have, none of his corn ever went directly for human consumption. It went to feedlots to finish cattle/hog operations. If all that land in the high prairie were used for cereal production, I suspect the farmers would be broke in no time. There’s not much money in feeding the world’s poor. 5. Bryson Brown The well is poisoned: those who reject global warming are now so firmly committed to rejecting any source that contradicts their convictions that trying to change their minds is about as uphill as any public communications effort could ever get. Even catastrophes like crop failures, floods and massive droughts may not be enough. I am very afraid indeed. • Andrew Dodds I believe that the Meme du Jour is still trying to resurrect ‘hide the decline’ as an issue. On a different board, I’m having an argument with someone on this issue who claims to have published in Nature and thinks it’s the most serious thing ever. Also uses Climate Audit as a reference for this. Yes, really. As far as crops go, the problem is this: Changes to the hydrological cycle as a result of global warming may be neutral on a 100-year timescale, as far as crop yields are concerned. Indeed, they may be positive if the land made available to farm exceeds the land lost. Unfortunately, farming does quite rely on this year being the same as last year; whilst change is ongoing, crop failures should be more common. This last point being lost on the denialati, unsurprisingly. I honestly don’t know what is to be done. I remember estimating once that fixing CO2 emissions (as in >95% reduction) using a nuclear+synthetic fuels+moderate home efficiency improvements approach would cost between 30 and 60 percent of (1-year) GDP, spread over 20 years. I.e. about 1 banking sector bailout. It’s not as if it can’t be done or would cause massive hardship. Hmmm. Must write a book on the subject. To much for random blog comments.. • JCH I highly doubt arable land lost will be equaled or exceeded by new arable land. I think that outcome is impossible. And Yale economists are apparently clueless as to the true value of a bread basket. Kansas is worth way way way way more than Connecticut. • Andrew Dodds JCH – Well, that is the secondary problem; what are now the most productive temperate zones have had an awful lot of glacial dust shoved on them, meaning excellent soils often in places that would presumably have had poor quality red continental soils otherwise. And moving the temperate zone North will mean we’ll be trying to farm on the soils remaining in the glaciated areas where that soil came from. As far as Economics goes.. from what I can tell of the recent direction of the discipline it is more of an exercise in justifying tax breaks and marketisiation for rich entities than a serious and objective field of study. If that sounds harsh then I refer you to the journals… • JCH Andrew – thanks for the thoughtful reply. As a kid I grew up round farms and ranches as my dad was an agricultural professional who worked mostly on farms and ranches, and we did a little farming ourselves. 
Our first home was on the side of the Pony Hills in South Dakota. Those hills were formed by the Western edge of a glacier. East of the hills the land is flat and very fertile. West of the hills it’s mostly rolling pasture and hay land. It was all dry-land farming, so I have a sense of how fragile a thing a crop can be. I’ve seen miles and miles of crops die. I’ve seen ranchers sell off vast numbers of cattle due to drought. Still, the area produces a great deal of food. The process of losing it, or moving it, and the like sounds fraught with extreme difficulties to me. Americans moved West and found vast areas of very productive land. It was a lucky thing. Places like the Dakotas generally get just barely enough rain. They have tremendous problems when they get too much rain. The land came so cheap, and I sincerely doubt economists have a clue just how valuable it actually is, or how lucky for this country it was that it was there. A society takes its arable land for granted at great risk. • The well is poisoned May be an understatement. 6. A paper on the 2007-2008 spike: Headey, D and Fan, S. (2010) Reflections on the Global Food Crisis. How Did It Happen? How Has It Hurt? And How Can We Prevent the Next One? International Food Policy Research Institute. Research Monograph 165. Here’s a press release about the paper by the European Comission: “Causes of the 2007-2008 global food crisis identified A number of interacting factors, including increasing oil prices, greater demand for biofuels and trade decisions, such as export restrictions, all affected world cereal prices.” 7. blueshift So what are the best actions, and what is the best resource outlining what to do? Obvious steps: -Minimize personal impacts -Contact Senators and Representatives -Buy renewable energy if possible -Explain the problem to others -? On explaining the problem, sites like this and Skeptical Science are great for those interested enough to spend a little time, but there’s still what I think of as “the drunk guy at the bar problem”. Basically, most people won’t think about it for more than 20 seconds and they need it really simple. [Response: I don’t know the best actions. But there’s at least one I do know, and which is simple and easy to understand: politicians who deny the reality, human causation, and danger of global warming should be voted out of office.] 8. Alexandre Hope you’re wrong, mate… [Response: I hope so too.] 9. PJKar Pelke gives an accidental acknowledgment to the rise in wheat prices by virtue of including it in a cut and paste of a sentence he uses to support the ‘pin it all on corn” argument. Here it is from the FAO website and his site: “The increase in February mostly reflected further gains in international maize prices, driven by strong demand amid tightening supplies, while prices rose marginally in the case of wheat and fell slightly in the case of rice.” Wheat rose marginally. I guess, but relative to what? If you check the increase over 2011 year it is quite large. A graph of wheat export prices is provided at of all places, the FAO website. Maybe Pielke missed it. Or maybe wheat just doesn’t interest him: http://www.fao.org/economic/est/commodity-markets-monitoring-and-outlook/grains/en/ A better graph that includes the data is found at Index Mundi. The yearly increase from Feb 10 to Feb 11: 78.96%. According to index Mundi corn grew 81.53% over the same period, so how can he put it all on corn with the rate of increases so close for the two? 
http://www.indexmundi.com/commodities/?commodity=wheat&months=12 The Business Week article at the bottom of the Index Mundi page discusses stockpiles and prices and mentions the complications caused by the Russian drought. The role played by the Russian summer heat wave in the grain price rise is undeniable but to Pielke its like it dosen’t exist. The world seems to be a very simple place when viewed through the eyes of Pielke, Goddard, Tisdale, Watts and the like. 10. Yes, the main impetus for ethanol-from-corn was never AGW mitigation, something I’ve been trying to make clear to sundry folks for a long time now. Unfortunately the Brazilian ethanol-from-sugarcane model gets tarred with the same brush; it seems to work a good deal better from an economic and mitigation POV. • Exactly, and I try often to remind people that the impetus behind the US effort to turn corn in ethanol had nothing to do with global warming, and very little to do with environmental concerns at all. Rather, it was a collaboration of politicians and ag industries in the leading corn states selling the rest of Congress on the idea that ethanol would decrease US dependence on foreign oil. IOW, by using the oil shortage fear factor, big ag was able to steer billions of dollars into the corn business. Senator Grassley, arguably the most powerful politician in this regard, still pushes corn ethanol as a means to “energy independence.” 11. dko As a U.S. farmer, I hear the corn-ethanol complaint a lot. I’ll be the first to admit that the scheme makes no sense and that without a subsidy to blenders it would have collapsed years ago. But it is worth noting that the major byproduct of ethanol production is fed to cattle, so only the starches are lost to the human food supply. People in China, India, and several other developing nations are finally seeing a rise in their incomes and they spending much of it on food, especially meat. This understandable but does take a lot of grain off the market. Stocks are very tight and a bad crop in an exporting country has a dramatic global effect. • JCH dko – I mentioned the return of of residue for cattle feed above. Do you sense that many people actually think the “ethanol” corn used to be fed directly to humans? None of our corn ever saw a lunch counter. We ate corn all the time, but honestly, I have no idea from where it came. It came from the grocery store! We never ate anything grown on our land. We produced food for animals to eat. In the area where I grew up, it is true that much of the “ethanol” corn used to be wheat. • dko JCH — The marketplace is a force to be reckoned with. When demand for corn ethanol rose, so did corn prices, as did the acres diverted to corn production. (Input costs also rose, BTW; this has not been a windfall to farmers.) Even though nearly 40% of U.S. corn production goes to ethanol, we are now growing so many more bushels of it that the same amount is still entering the human food supply line. But…that means fewer acres of something else (wheat in your area), which means less production of that, and higher prices, etc. It’s all connected. For those with sufficient income, higher food prices are an annoyance; for the world’s poor, it is a disaster. Another “once-in-a-thousand-years” drought in a major exporting country this year would send food prices skyrocketing. Unfortunately, as global temperatures rise along with consumer demand, this sort of thing will become more common. 12. 
peter hagenrud One billion more people every 13 th year also have a heavy impact on foodprices Global sinking ground waterlevels aswell, erosion, soildepletation and so on vorsen things even more 13. agres The 2010 food price run up came from extreme weather triggering a series of low yield harvests, and fear in the markets regarding the Chinese trying to grow wheat with fossil water. Irrigation with surface water is technically feasible, but not economically. Many of the areas where China has been growing wheat for the last 20 years, were traditionally millet and sorghum growing areas. Both are much more tolerant of drought and heat stress than wheat. I would cheerfully bet that 2010 will turn out to have fewer extreme weather events than most of the years in the 2010-2019 decade. To the extent that extreme weather diminishes crop harvest, 2010 is likely as good a wheat harvest as we can expect in the next decade. As the rest of the Arctic Ice melts, the jet stream/storm track is going to do things we have not considered possible in our life time. The result will be “extreme” weather, and the NOAA attribution team will be kept busy. 14. Tamino, thank you, very enlightening. I agree; it is just the beginning, rather, we are way past the beginning; we are in the midst of a meltdown, not only in reference to Fukushima but long past many peaks with no solutions to any of the obvious problems. caw 15. All of the above, plus the fact that commodity food is now an actively traded market. Johann Hari wrote a superb polemic on the subject last year. Bottom line: as climate change bites into food supplies, it’ll be Goldman Sachs getting rich while the poor starve. 16. Sekerob Heavily criticized in recent months were the institutional investors & pension funds for participating in market speculation and driving up prices… it’s not all shortages that did this! We learned that the Russian heatwave dried out the soils up to 3 meters deep, so many are worried about the winter wheats… will they come up? • Horatio Algeranon Following is from “Speculation And The Frenzy in Food Markets The fight over financial regulation affects global food prices” (February 16, 2011) (The Real News) PAUL JAY SENIOR EDITOR, TRNN : “One of the things that’s said is that there has been, in fact, a collapse of the Russian wheat market, that demand has gone way up in China and to some extent India for maize, and the role of biofuels and corn. So does this explain it?” JAYATI GHOSH, PROF. ECONOMICS, JAWAHARLAL NEHRU UNIVERSITY: “we are getting very, very dramatic increases in price that are simply not explained by fundamentals. If you take the price of wheat, for example, it went up between June and December, it doubled in price, whereas the global wheat supply fell by maybe 3 percent and global demand for wheat has barely changed. So we really are not getting changes in price that are justified by the actual changes in the demand-supply balance.” “A lot of the big increase in wheat in the last six months was because of the Russian grain failure and then the Russian ban on exports, which didn’t actually affect aggregate global supply, because other countries actually supplied more wheat, but it created this perception that there was going to be a shortfall, and so there was a massive increase in speculation in wheat. 
So what speculation is doing is massively magnifying an existing volatility.” • Andrew Dodds Hi, Similar things seem to have happened in the oil markets – there was a price collapse in 1999 that appears to have had little to do with fundamentals, and the 2008 price spike seemed a bit outlandish as well. The problem is that the amount of money now available to speculate on things now seems to dominate the real-world volumes of the same things (if that makes the slightest sense). At best it means that market signals get exaggerated; at worst it means that supply and demand cease to matter. • Horatio Algeranon The fundamental mistake that the members of the “reality-based community” are making is to ”believe that solutions emerge from …judicious study of discernible reality.” But, as a senior adviser to Bush once wisely informed us: ”That’s not the way the world really works anymore …We’re an empire now, and when we [the political and financial powerbrokers] act, we create our own reality [bubble]. And while you’re studying [and graphing] that reality — judiciously, as you will — we’ll act again, creating other new realities [bubbles], which you can study too, and that’s how things will sort out. We’re history’s actors [the Charlie Sheen’s of history, if you will] . . . and you, all of you, will be left to just study what we do …” 17. Matter Hold on, isn’t bioethanol still a longer term problem? Sure, the 2010 spike was probably driven by Russia, but there is a lot less ‘slack’ in the system on the supply side thanks to this. In the short run I expect farmers will plant based on expectations for the next year or so, which is what should dominate the annual spikes, but won’t long term increases in demand above normal lead to a long term price creep? Kind of like a ‘global warming’ signal, but in food prices rather than temperature. 18. J Bowers Russia still haven’t lifted their export ban and have cut their forecast for crop production this year. http://seattletimes.nwsource.com/html/businesstechnology/2014534886_apuscommoditiesreview.html?syndication=rss 19. As I noted a couple of weeks ago ( http://littlegreenfootballs.com/pages/freetoken ) some people will look upon rising food prices not with horror but rather with the view that it will help American farmers, at least for those who grow grain. Those with livestock will however view the situation differently if the market can’t afford more expensive meat. And here we get to one of the hard nuts of this problem – changing peoples’ behaviors, in this case meat consumption. Long before the highly anomalous weather patterns we saw in Russia disrupt even more of the world’s food production, issues with petroleum, fertilizers, fossil water depletion, soil exhaustion, and increased population will bring large pressures on the price of food. Climate change is the icing on the cake. 20. Ethanol from Corn production has never been a green policy. It has been a policy of a number of left/right wing governments, but not a green policy. The reason being that the energy ratio for corn ethanol (output/input) is marginal. The gain is only really through co-products (gluten meal, gluten feed and corn oil), ethanol on it’s own gives no real energy gain (a ratio of 1.01 to 1.08). The main reason for producing it in the US has been to create an additional market for corn and hence support American farmers and corn prices. The issue is even worse for ethanol from wheat, which results in a fractional ratio, that is you get less out than you put in. 
The US Department of Agriculture research shows how bad the energy gain is from corn ethanol. For more info read up: AER721 – Estimating the Net Energy Balance of Corn Ethanol, by Hosein Shapouri, James A. Duffield, and Michael S. Graboski. U.S. Department of Agriculture, Economic Research Service, Office of Energy. Agricultural Economic Report No. 721. And aer814 – The energy balance of corn ethanol: an update, by Hosein Shapouri, James A. Duffield and Michael Wang.
• Slight grammar mistake in my post above! That should have read: "(a ratio of between 1.01 and 1.08)."
• TrueSceptic A semantic error, surely? I did think you were saying that the ratio was 1.01:1.08, not that it was in the range 1.01 to 1.08. :)
21. elkern
USA has cornohol subsidies & gas-content mandate because Archer-Daniels-Midland & their ilk bought enough congressmonkeys, and because Iowa has out-sized influence on presidential elections.
Is that short enough for a noisy bar?
22. One thing you have to keep in mind, along with climate change, is the rapidly growing population and limited natural resources. The earth is at the limits of the carrying capacity it can sustain. This puts more stress on the natural resources we consume and overuse. I agree that ethanol use is not necessarily driving the price up, but it is being consumed as another fuel source.
23. B Buckner
Wheat prices per metric ton in dollars, June each year:
Year  Nominal  Inflation-adjusted (4%/yr)
1960  63   448
1965  47   275
1970  45   216
1975  107  422
1980  136  441
1985  114  304
1990  113  248
1995  141  254
2000  92   136
2005  119  145
2010  153
http://www.ers.usda.gov/Data/Wheat/Yearbook/WheatYearbookTable20-Full.htm
• J Bowers
$30.00 in 1913 had the same buying power as $79.39 in 2010.
$63.00 in 1960 had the same buying power as $464.11 in 2010.
$113.00 in 1990 had the same buying power as $188.53 in 2010.
I don’t think a nominal 4% inflation adjustment is of much use.
http://146.142.4.24/cgi-bin/cpicalc.pl
24. B Buckner
Tamino, You usually debunk posts such as this. Drought and heat waves have devastated crops since man has been farming. The science indicates the Russian drought was not caused by global warming:
http://green.blogs.nytimes.com/2011/03/09/natural-causes-drove-russian-heat-wave-study-finds/
Sure, global warming has the potential to reduce crop output, but the information posted here has nothing to do with global warming.
[Response: Lung cancer has killed people since long before man has been farming — but that doesn’t mean smoking isn’t a contributing factor. The study by Dole et al. is mistaken. Although the cause of the Russian heat wave was a “blocking event” (not unusual for that region), this was certainly not “just another blocking event.” Dole et al.’s rejection of any global warming connection regarding the severity of this heat wave is based on naive analysis. I’ll be posting about it very soon.]
25. Jeffrey Davis
“farming does quite rely on this year being the same as last year”
This can’t be emphasized too much. Farmers have options about what to plant, but they can’t be wrong too many times over the course of a few years and keep their operations going. A farmer doesn’t simply pit himself against Nature. He’s also up against every other farm in his region. A farm has to make money just like any other business and a successful crop isn’t a guarantee of a profit. Gluts look good on paper as long as it isn’t your paper.
26.
While it certainly doesn’t help at the macro-level, my wife and I got serious a couple years ago about converting our tiny yard from lawn to vegetables, using raised beds of scrap lumber and free leaf compost from the city landfill. We raise mostly tomatoes, pole beans, greens and some peppers, since fresh produce is horrendously expensive, and we freeze what we don’t eat immediately. So now we eat a lot better, more nutritiously certainly, and save quite a bit of money. It’s a little thing on the big scheme of things; but it does help. 27. I have been told that The world has about one person per two hectares of land. Before the famine in Ireland, potatoes and a cow could feed 20 to 30 people per hectare. Chinese families could feed themselves on 1/16th of a hectare. That is about 50 people per hectare. Permaculturists can grow food almost anywhere (e.g. Sepp Holzer in the Austrian Alps, Geoff Lawton in the Jordanian desert) Have I been informed correctly? What’s the biological barrier to feeding only one person for every two hectares of land, when 50 may be possible? • Didactylos Let’s run the numbers…. Land area 148,940,000 sq kilometers Population 6.9 billion people 2.15855 hectares per person. But treating one square kilometre at the top of a mountain like one that’s currently a wheatfield is just ridiculous. Net* potential arable area (FAO) 38,488,090 sq kilometers 0.557798 hectares per person. Plenty. But not so naive as the 2 per person figure. Of course, mere space isn’t the only limiting factor, and Liebig applies very much in terms of calculating carrying capacity. * Excludes protecting land. Includes land that *should* be protected. • Ray Ladbury Geoff, Have you tried feeding your family on a hectare? The problem with intensive agriculture is that it is not sustainable. Eventually, soil and aquifers become depleted, and they are not renewable resources. What is more, you simply are not going to support 50 people on a hectare of the Atacama or Sahara deserts. Ultimately, the question of what is possible is less relevant than the question of what is sustainable. • It’s certainly a challenge, Ray. I’m cultivating less than 200 sq. metres – up from ~120 the last two years as I’ve taken on another plot, and am very conscious as to the absolute need to bring in a balance of nutrients as well as taking food from the land. Living in a rural area not far from the sea with a high average rainfall helps. But seaweed, crushed seashells, wood ash, mixed manure from various farms, even the fortnightly grass-cuttings from the beer garden at my local – all play their part. I’ve worked out that access to a couple of gallons of diesel a year would make moving all of that stuff possible. No mains water but an extensive rainwater-catching system helps. Sustainable? Time will tell. But becoming self-sufficient in veg is the aim: so far that has been achieved with onions, shallots, potatoes and runner beans, so there’s some way to go! Guess the main principle is “only take out what you put back in”…. Cheers – John • Geoff Beacon wrote: The world has about one person per two hectares of land. Yup. Global population divided into global land surface area means each person has about 2.1 hectares (a square of land 145 meters on a side) to supply their every need… and that “every need” is the problem with this idea which puts forward the possibility that we could all feed ourselves happily on our personal patch. In your two hectares you have to have your share of everything else as well. 
Some of it will be heat desert, some ice desert, some mountainous and some will have to be forest to put the 61 trees that is our personal share. There will have to be space for all the wildlife and ecosystems that keep planetary life support systems ticking over. Obviously, we would have to squeeze in space for our share of the factories, mining operations etc, that extract and fabricate the goods that we consume and the energy we use. There would have to be space for all the waste and pollution substances (that we are responsible for) to be sequestered. Penultimately, there would have to be an area to grow the fodder for the animals that are so in demand for meat. Finally, what is left could grow food for one person. Now imagine all that then go to a large field and pace out 145 metres so you can visualise the size of your personal bit of Earth. We’re seriously cramped. • Holly Stick Plus the land that is covered by cities, parking lots, highways, airports and other forms of land use that make it unavailable for agriculture. 28. Igor Samoylenko Applying the usual rigorous standards of logic practiced by climate sceptics I got to the conclusion that Tamino got this the wrong way around. It is not the heat wave that was the major contributor to the recent rise in food prices; it is the price hike that caused the heat wave! This conclusion may be counter-intuitive, in defiance of common sense and lacking any plausible physical mechanism but it must be considered seriously since the opposite conclusion is unpalatable and therefore must be false by definition. 29. Douglas, Started doing the same in 2009. It takes surprisingly little effort once the initial bed construction is completed, and veg straight from the garden are unbeatable! It is hard to predict where food prices will be by the end of 2011. I think the unrest in the Middle East and the knock-ons in the price of crude oil may well be a prime driver as they were in 2008 – and that is regardless of what climate destabilisation may throw at us. Up, more likely than not, in other words, I’m afraid. Cheers – John 30. I don’t believe the main point of this post is correct. In a post at my blog I estimate the total tonnage of cereal feedstock used to make the global biofuel supply, and compare that to the fluctuations in the Russian wheat supply. It’s notable that the latter are more than an order of magnitude smaller than the former. Also, 2010 is not particularly anomalous in the history of Russian wheat production which has been highly volatile for a long time. For example, it’s fallen by a third year-on-year on two other occasions in the last two decades (1998 and 2003). Those didn’t result in massive global food price spikes. 31. Higher temperatures = more ozone. That’s the equation fossil fuel companies really don’t want the public to understand. Inexorably rising levels of background tropospheric ozone result from the VOC emissions created when burning fuel, reacting to UV radiation. Ozone is highly toxic to people, causing cancer, emphysema, asthma, allergies, and is recently linked to diabetes and autism – all of these ailments have reached epidemic proportions. Worse still, ozone is even more poisonous to vegetation. NASA and the Dept. of Ag. estimate crop yield losses annually in the US alone in the billions of dollars. In addition to stunting growth and causing a decrease in quality and in nutritive value, exposure to ozone increases the vulnerability of plants to insects, disease, fungus, drought and wind. 
The real kicker is that long-lived species like trees and shrubs that are exposed to ozone season after season are dying off at a rapidly accelerating rate. This is happening around the globe as ozone precursors travel across oceans and continents. Imaging the implications of a world with trees. The entire ecosystem of species that depend upon trees for food and habitat, shade and soil retention, will expire with them. That includes, ultimately, humans. We also happen to rely on forests for oxygen to breathe, along with life in the sea, which is also doomed from ocean acidification. We should convert to clean energy on an emergency basis before we starve and suffocate, if it isn’t too late already. Or rather, even if it is already too late – we should do it anyway. I’m not making this up, by the way. The effects of ozone are well documented in scientific research – links here: http://witsendnj.blogspot.com/p/basic-premise.html • Didactylos Ozone is one problem among many. Have you fixed your website yet? 32. Chris R Stuart Staniford, Thanks for your blog post. I’d noticed the same thing. However the timing of the price spike really seems to me to support the suggestion that this reduction in Russian harvest was the initiator for this particular price rise. The issue is complicated in that maize is used for biofuels and animal feeds more than wheat, so the different products are not directly comparable. From the tables at the end of this post it’s clear that while a 1/3 reduction in Russian wheat could have a substantial direct price impact on wheat, that is not the case with maize (which is around 0.5% of world production). However both wheat and maize showed price increases from July 2010: http://www.fao.org/economic/est/commodity-markets-monitoring-and-outlook/grains/en/ I suspect that there are deeper market mechanisms afoot in the increase in cereal prices. Such mechanisms are discussed in the paper “Reflections on the Global Food Crisis” as linked to by Jesús R earlier in the thread (Thanks Jesús – good paper). Which, while about the situation in 2008, is very informative and has information relevant to Tamino’s post (primarily in section 2 which is excellent – I just skimmed the rest of the paper). I first read Tamino’s post thinking he’d probably called this one wrong. I’ve been unable however to dismiss the crucial fact: Whatever the detailed mechanisms behind the rise, cereal prices started to rise as news of the Russian drought broke. If an alternate explanation is available it’s going to have to be damned good for me to concede coincidence in this case. FAO Definition of cereals: http://www.fao.org/es/faodef/fdef01e.htm#1.01 To use these tables cut and paste into Excel and use Data>Text to Columns. 
Maize production (tonnes): Year, Russian Federation, World, RF share
2000,1530290,592475220,0.26%
2001,847220,615510314,0.14%
2002,1562890,604842771,0.26%
2003,2121900,645111399,0.33%
2004,3515690,728806531,0.48%
2005,3210770,713433404,0.45%
2006,3510351,706698044,0.50%
2007,3798020,789480893,0.48%
2008,6682300,826224247,0.81%
2009,3963430,817110509,0.49%
Wheat production (tonnes): Year, Russian Federation, World, RF share
2000,34455488,585690968,5.88%
2001,46982120,589823411,7.97%
2002,50609100,574745780,8.81%
2003,34104288,560128009,6.09%
2004,45412712,632670103,7.18%
2005,47697520,626844991,7.61%
2006,44926880,602887177,7.45%
2007,49367973,612606833,8.06%
2008,63765140,683406527,9.33%
2009,61739750,681915838,9.05%
• Marco Chris, something similar could be observed when the unrest in the Middle East started. Not a problem in sight in terms of oil supply, and yet prices immediately went up. Concerns over supply does that. Note that you could also wonder whether as much wheat and maize would be produced if it weren’t for biofuels. They can be grown on poor soils that would prevent use in food production.
“Fiscal expansion in many countries and lax monetary policy created an environment that favored high commodity prices.”
“Finally, it unfolded simultaneously with the development of two other booms—in real estate and in equity markets—whose end led most developed countries to their most severe post‐WWII recession.”
“worldwide, biofuels account for only about 1.5 percent of the area under grains/oilseeds”
35. And yet and yet and yet… The Heartland Institute’s James Taylor uses his Forbes column to claim that Global Warming Is Creating Perfect Crop Conditions:
Without a doubt, global warming is affecting global crop production. The tremendous improvement in global crop production and worldwide growing conditions during recent decades is one of the most important yet least reported news events of our time. As the earth continues to recover from the abnormally cold conditions of the centuries-long Little Ice Age, warmer temperatures, improving soil moisture, and more abundant atmospheric carbon dioxide have helped bring about a golden age for global agricultural production.
Words fail me…
• “Human-Generated Ozone Will Damage Crops, Reduce Production… MIT, 2007 …A novel MIT study concludes that increasing levels of ozone due to the growing use of fossil fuels will damage global vegetation, resulting in serious costs to the world’s economy. The analysis, reported in the November issue of Energy Policy, focused on how three environmental changes (increases in temperature, carbon dioxide and ozone) associated with human activity will affect crops, pastures and forests. The research shows that increases in temperature and in carbon dioxide may actually benefit vegetation, especially in northern temperate regions. However, those benefits may be more than offset by the detrimental effects of increases in ozone, notably on crops. Ozone is a form of oxygen that is an atmospheric pollutant at ground level.”
http://witsendnj.blogspot.com/2011/03/snow.html
[Response: You’ve had plenty of opportunity to push your ozone agenda. Enough.]
• Are you kidding me? I have an “ozone agenda”? Oh, like, I LOVE ozone? Or what?
I have a “survival” agenda. Like, I would like my beloved daughters to live to a nice ripe old age, and they aren’t going to be able to do that, because guess what??
The air is so polluted that all the plants and trees are dying! YEAH. That is what is happening, in the real world, and if you GUYS want to ignore pollution and fuss over atmospheric physics ad nauseam, go right ahead.
Meanwhile, we will just all die, together.
[Response: The problem is that after having stated your thesis, repeatedly, you continue to do so without letting up. That’s fine for your own blog, but not for this one (or RC, where you did the same thing). And you don’t seem to have the peer-reviewed science to back up your claims.
You’ve had your say. If you want to keep talking about the ozone issue, you’ve got your own blog. My agenda is global warming, which is what this blog is about.]
36. Chris R:
I’m not disputing that the Russian drought had some role in prices. My point is that it’s in error to give it the primary role, when a) the biofuel contribution is quantitatively much larger, and b) previous equally large declines in Russian wheat production had produced no such price spike. I would argue that biofuels have removed all the slack from the system, such that now any otherwise normal harvest problem around the world can trigger a price spike. In this case, Russian wheat harvest failure played a role, next time it may be something else. And while summer 2010 in Russia may have been highly anomalous in the temperature signal, it was not particularly anomalous in the wheat harvest signal.
As for the wheat/corn thing – there are strong arbitrage linkages between the prices since there is quite a lot of cropland that can potentially be used for either, and if the prices get too out of whack, farmers will switch. Since futures traders know this, shortages of one crop can cause almost immediate changes in prices of others.
• PJKar
Stuart,
“My point is that it’s in error to give it the primary role, when a) the biofuel contribution is quantitatively much larger, and b) previous equally large declines in Russian wheat production had produced no such price spike. “
A couple of questions pertaining to this. Did the Russian government institute an export ban on wheat in the years they experienced equally large declines in production? If not, wouldn’t that make 2010 a highly anomalous event, since none of the wheat produced is available to the rest of the world?
The export ban is still in effect I believe.
Also I believe you were looking for this data (Russian Wheat Production 2010):
http://www.indexmundi.com/agriculture/?country=ru&commodity=wheat&graph=production
• I wouldn’t imagine the export ban would too radically change the picture over and above the harvest downturn. What the export ban essentially does is mean that Russian consumers face lower prices than they otherwise would, and thus will conserve less, meaning that consumers elsewhere must face somewhat higher prices, and thus conserve more than they otherwise would. But since the population of Russia is only 2 1/2 % of world population, and given that food demand is inelastic anyway, I wouldn’t imagine that the failure to conserve by that tiny slice would make a very big impact on the conservation required by the 97 1/2 %.
37. I have some scatterplots illustrating the relationship between wheat/corn/soybean prices here.
38. Don Gisselbeck
How many people will die as a result of this speculative bubble? Wouldn’t their survivors be justified in taking revenge on the speculators?
39. In Japan even the water prices will now go through the roof! Your readers might be interested in how to treat their radioactively contaminated drinking water:
http://crisismaven.wordpress.com/2011/03/22/dangers-properties-possible-uses-and-methods-of-purification-of-radioactively-contaminated-drinking-water-e-g-in-japan/
A Japanese translation seems underway, see comment by Takuya there. Maybe someone wants to help with other languages?
• Phil.
Reverse osmosis would seem to be the best method to me; a system to treat 50–100 gallons/day costs ~$300 and it’s not specific for a particular solute.

40. Chris R
Stuart Staniford,
Having had another look, taking into account your comments… I now agree. Whilst the Russian drought did precipitate this instance of price rise, it happened against a background of market stressing caused by demand (with biofuels a major component of that demand). I think Tamino’s post is erroneous by omission.
Marco,
Interesting that you should bring up oil and the current N. Africa / Mid East situation. The cereal market is made volatile by supply restrictions due to biofuels, which are themselves being introduced due to increasing prices, and volatility thereof, in the oil market. I’ve been of the opinion for some time that in terms of human affairs, Peak Oil (which is now) will be the major issue for the first quarter of this century, with climate change being a slow-burner that will really hit us in the second quarter. Kind of like being kicked in the nuts then kneed in the face. But in this case humanity is an idiot kicking itself in the nuts, and kneeing itself in the face. ;)

41. Tamino’s action suggestion, “politicians who deny the reality, human causation, and danger of global warming should be voted out of office”, should be written on every wall and added as a ceterum censeo after every speech.

42. Some may believe that Gail has an ozone agenda, but what her agenda is is that people should be paying more attention to ground-level ozone. There is plenty of research coming out of the University of Illinois showing that under increasing temperatures ground-level ozone rises and reduces crop yields. Well, it is not only bad for maize. Ground-level nitrous oxide apparently also increases. Also very bad for crops and trees. Far too little attention is being given to these gases at the ground level.

43. Your posting today is about food. We should remember the National Crop Loss Assessment Agency was established to determine the cause of significant losses to agriculture. Their studies showed significant crop loss due to tropospheric ozone emissions, with losses typically from 10 to 18%.
http://www.econ.ucsd.edu/~rcarson/papers/Kopp85.pdf
http://www.asl-associates.com/kriging.htm
http://www.uctc.net/papers/322.pdf
And NASA is in on the issue too:
http://earthobservatory.nasa.gov/Features/OzoneWeBreathe/ozone_we_breathe3.php
Ozone relates strongly to energy policy, crop loss, health and more, and should be factored into any systems approach to AGW. For one reason – with more heat and more UV radiation, the atmospheric chemical reactions will generate far more dangerous chemicals – ozone is a particularly nasty one. The Crop Loss Assessment Agency was shut down by the Bush Jr administration. Their reports and conclusions still apply.

44. From the Journal of Economic Surveys:
http://onlinelibrary.wiley.com/doi/10.1111/1467-6419.00023/abstract
“Agricultural crop production is highly dependent upon environmental conditions among which air quality plays a central role. Various air pollutants have been identified as a potential influence on commercial crops including SO2, NOx, O3 and CO2. In particular, ozone in the lower atmosphere has been identified as a serious cause of crop loss in the United States and seems likely to be creating similar losses in Europe. In this paper the methods which can be applied to assess the economic damages from air pollution are critically reviewed. This requires measuring pollutant concentrations, relating these to physical crop damages, and estimating the reactions of the agricultural sector and consumers to give welfare changes in terms of consumers’ surplus and producers’ quasi-rents. The approach of the European open-top chamber programme (EOTCP) is shown to have neglected lessons learnt by the National Crop Loss Assessment Network (NCLAN) in the US”

45. Dean
While you may have a valid point, to imply that the current price increase is due to CO2 remains just a hypothesis. A more realistic hypothesis is that the Fed’s devaluing of the dollar is kicking in with gusto. If CO2 is truly the cause, then the effect should be traceable for the last couple of decades. It’s not, according to the charts you provide. Here’s an article that talks about the impacts of the Fed’s policy:
http://online.wsj.com/article/SB10001424052748703899704576204594093772576.html
[Response: The proximate cause of the rise in food prices was the Russian heat wave. That event can be timed precisely to July 2010 — exactly when the food price increase began. Your talk about how “the effect should be traceable for the last couple decades” shows that you’re not thinking clearly. And these are world food prices, not just the U.S. Despite rampant Americentrism, the U.S. does not control the world.]
• Dean
[Response: The proximate cause of the rise in food prices was the Russian heat wave. That event can be timed precisely to July 2010 — exactly when the food price increase began. Your talk about how “the effect should be traceable for the last couple decades” shows that you’re not thinking clearly. And these are world food prices, not just the U.S. Despite rampant Americentrism, the U.S. does not control the world.]
This is pure supposition on your part. From the NYT article on the wheat market at the time of the Russian wheat embargo:
http://www.nytimes.com/2010/08/07/business/global/07wheat.html#h%5B%5D
“But there is an important difference between the current situation and that last price spike: the Russian drought and ban on wheat exports, in contrast to the global shock in 2008 that drove wheat prices up to nearly $13 a bushel and created tensions in Indonesia and Pakistan, are occurring when global wheat production is plentiful and stocks in the United States are at a 23-year high, analysts said.
“This is still going to be the third-largest wheat crop in world history, even with the Russian shortfall,” said Daniel W. Basse, president of AgResource, an agricultural consultant firm in Chicago. “The question becomes, Will the drought persist, and will there be problems elsewhere, in other big producers like Argentina or Australia?” ”
To say that AGW is the cause of this wreaks of ideological blindness. Is it one part? Possibly, but so is the cost of transportation due to the BP oil spill. And so is the cost due to the economic policies of the US (which have a dramatic impact on the world, regardless of what you believe).
[Response: None of which alters the fact that the proximate cause was the Russian heat wave. If that provided an excuse for capitalists to exploit market conditions, it’s still the cause. And your idiotic remark about how it should have shown the signs of steady rise in CO2 still reveals that you’re not thinking clearly.
And: when multiple climate-change-related crop failures cause genuine hunger … it will be a lot worse.]
46. Dean
From the same NYT article
http://www.nytimes.com/2010/08/07/business/global/07wheat.html#h%5B%5D
“Maximo Torero, at the International Food Policy Research Institute, said the market reaction was overdone. Russia represents only 11 percent of the world’s wheat exports, he said, and any shortfall could be met by major wheat exporters like the United States, Australia or Canada. ”
Sure sounds like economic/political manipulation had a bigger part in this run up of prices than anything weather-related.
47. Ray Ladbury
Dean, keep in mind that when we talk about a natural disaster or crisis, we also have to consider how the nation affected responds. In this case, Russia responded by cutting off exports of whatever crop remained. Other nations followed suit. Prices rose. Markets responded, and speculators descended like vultures.
Whether it is the Japanese Earthquake, the Russian heatwave or Katrina, humans compound problems that arise because they tend to think tactically rather than strategically.
• Dean P
Ok, so you’re talking about what precipitated rather than what caused the event. Fine. But the underlying cause is not AGW, it was nothing more than an excuse that people used to manipulate the system.
If the heat wave had not happened, they would have found something else to “justify” their playing their little game.
• Gavin's Pussycat
Talking about excuses, you seem to be ready to grab at any excuse, no matter how desperate, to deny that AGW can have any bad effect at all.
You sad, pathetic little man.
• Re-reading your link to the Times article, Dean, I really don’t follow your conclusion. The article is replete with concern expressed about the Russian decision to ban wheat exports and the possibility that other nations might follow suit–and that is clearly linked to the weather. The article describes a condition in which concern about potential future price rises (which actually have since occurred, btw, as Tamino’s graph shows!) were driving large corporations and governments to hedge, driving futures prices up.
Moreover, something else happened. Remember the bit you quoted in the first comment you made: The question becomes, Will the drought persist, and will there be problems elsewhere, in other big producers like Argentina or Australia?”
Surprise, surprise–Australian rains, and, later, actual flooding, reduced Australian exports of milling wheat very considerably at the end of 2010 and the beginning of 2011. And wheat prices did rise quite dramatically.
http://www.bloomberg.com/news/2010-12-06/australian-milling-wheat-exports-may-drop-to-3-year-low-on-rain-drought.html
http://www.businessweek.com/news/2011-02-08/demand-for-australian-wheat-strong-as-price-near-two-year-high.html
And while neither Russian heatwave nor Australian flood can be conclusively attributed to AGW, these sorts of events are exactly what we will see more of as warming (and precipitation change) continue. So Tamino’s point, I think, stands.
48. Ray Ladbury
Dean P., I think you are missing the point. Climate change very likely did contribute to the severity of the heatwave, which exacerbated both the severity of the crop failure and peoples’ anxiety over it. This made it expedient for the Russian government to over-react, and speculators descended in to enjoy the chaos.
Climate change will create lots of disruptions, lots of panic, lots of chaos and lots of opportunity for those who thrive on such conditions. Unfortunately, this does not tend to include those who value civilization.
• Dean
Ray,
I think you miss my point: There is evidence that there was no real disruption in supply and that the run up in price was not based off of true demand but instead off of political and economic manipulation of the system. While the Russian export blockade did distort the demand, the overall wheat yield for the year was quite high, compared to historic averages and therefore any major changes in the price were not due to supply but instead due to manipulation.
• Dean, you’re confusing global and regional supply. If Russia cuts off all its exports, the supply for the rest of the world is decreased, even if the total counting Russia is increased.
• Ray Ladbury
Dean, what you are missing is the how–or rather the conditions that made manipulation possible. The disruption of supply from Russia forced a scramble, and that made it possible to manipulate the market. It is the same sort of thing you see in stocks when short sellers run for cover at the end of a quarter.
49. Since you quite rightly pointed out that I have my own blog to “push my agenda” – and have declined to publish my response to that here – anyone interested in the pernicious effects of ozone on crop yields and food prices can find more information here:
http://witsendnj.blogspot.com/2011/03/squabbling-over-scraps-at-taminos.html
where it won’t interfere with your agenda of climate science.
50. David B. Benson
Michael T. Klare, writing in the 2011 Mar 28 issue of The Nation:
Whenever oil prices rise above $50 per barrel, the World Bank has determined, a 1 percent increase in the price of oil results in a 0.9 percent increase in the price of maize, “because every dollar increase in the price of oil increases the profitability of ethanol and hence biofuel demand for maize.”
• Brian Dodge
And why does he think it has nothing to do with increasing cost of fertilizer production, transportation, biocides, diesel for tractors and irrigation pumps, and other fossil fuel dependent inputs to growing corn? Do you really think that if we banned conversion of corn to ethanol that an increase in the price of oil would have zero effect on the price of maize?
• David B. Benson
Brian Dodge | March 29, 2011 at 10:39 pm — He is reporting what a World Bank study claims. It is the case that the FAO food price index appears correlated with a crude petroleum index; there is a recent study indicating this on TheOilDrum.
51. When it comes to food discussions: Economics and Science is like wishes and fishes. It is difficult to separate economic discussions from the science. But we should.
Climate science can build prediction models and scenarios that we should carefully regard as we consider economics. Jumping into economics without valid and strong climate science is politics and pandering.
52. Horatio Algeranon
The climate change denyin’ is bad enough, but how much longer can humanity afford Econo-lyin’ ?
Maybe Nancy Griffith knows (she wrote a song about it)
53. Michael Hillinger
I wonder if someone could explain a paper that seems to be circulating in the blogs
Sea-Level Acceleration Based on U.S. Tide Gauges and Extensions of Previous Global-Gauge Analyses
J. R. Houston† and R. G. Dean‡
and the full paper
http://www.jcronline.org/doi/pdf/10.2112/JCOASTRES-D-10-00157.1
They are finding that
Without sea-level acceleration, the 20th-century sea-level trend of 1.7 mm/y would produce a rise of only approximately 0.15 m from 2010 to 2100; therefore, sea-level acceleration is a critical component of projected sea-level rise.
their reported results indicate that the rate of sea-level rise has actually been decelerating
from the paper’s conclusions –
“The decelerations that we obtain are opposite in sign and one to two orders of magnitude less than the +0.07 to +0.28 mm/y² accelerations that are required to reach sea levels predicted for 2100 by Vermeer and Rahmstorf (2009), Jevrejeva, Moore, and Grinsted (2010), and Grinsted, Moore, and Jevrejeva (2010). Bindoff et al. (2007) note an increase in worldwide temperature from 1906 to 2005 of 0.74°C. It is essential that investigations continue to address why this worldwide-temperature increase has not produced acceleration of global sea level over the past 100 years, and indeed why global sea level has possibly decelerated for at least the last 80 years.”
Thanks!
• t_p_hamilton
Michael,
Could you find the part in the paper where they justify why they decided to do 1930-2010, rather than say 1940-2010, or 1920-2010?
Also, the deceleration is not statistically significant even for the interval they chose, and the main “conclusion” (deceleration rate and +/- error) is not in the abstract. 0.0014 +/- 0.0161 mm/y^2
Look at figure 2. Can you tell which areas have rise, and which have fallen? They use the same color for both! Try to look for the data, they refer to Willis (2010) which is a web page that no longer exists!
This is just based on a cursory inspection, perhaps I found the only parts that even a graduate student could do better.
• john lonergan
They just “happened” to pick the only start date that would give a decline. See Gavin’s comment over at RC:
http://www.realclimate.org/index.php/archives/2011/03/unforced-variations-mar-2011/comment-page-4/#comment-204058
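To illustrate the start-date issue numerically, here is a small R sketch that fits a quadratic (acceleration = twice the second-order coefficient) to a synthetic sea-level-like series over different start years; the data are made up purely for illustration and are not the Houston and Dean tide-gauge records.

# Illustrative only: synthetic "sea level" with a small positive acceleration
# plus decadal noise, fit over several candidate start years.
set.seed(1)
year <- 1900:2010
sl <- 1.7 * (year - 1900) + 0.005 * (year - 1900)^2 +
  15 * sin(2 * pi * (year - 1900) / 60) + rnorm(length(year), 0, 5)

accel_from <- function(start) {
  keep <- year >= start
  fit <- lm(sl[keep] ~ poly(year[keep], 2, raw = TRUE))
  2 * coef(fit)[3]   # acceleration = 2 * quadratic coefficient
}

sapply(c(1900, 1910, 1920, 1930, 1940), accel_from)
# The estimated acceleration (and even its sign) can depend strongly on the chosen start year.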
• JCH
About a year ago there was an article about where sea level would rise and where it would fall. The outcomes were very surprising. SLR along the Texas coastline was not that drastic. Other places were predicted to get a very large rise. Some areas were predicted to get a big drop.
Did this paper properly take all of that into account? It’s not a prediction that the bathtub ring is going to go up 5 inches everywhere. The SLR prediction is very uneven. Islands and continents buoying upward as glaciers melt. Gravitational pull. Stuff beyond me.
I wonder how deep-ocean warming actually manifests itself in terms of surface height?
• Michael Hillinger
Thanks all for your comments. I’ll keep an eye on RC for Gavin’s analysis.
• Deech56
I think this was covered by Media Matters, of all places.
• Gavin's Pussycat
> I wonder how deep-ocean warming actually manifests itself in terms of
> surface height?
JCH, you happen to have asked a question with a simple answer: because sea water expansion doesn’t actually change the mass of the water at any geographic location, the Earth’s gravity field, and the geoid, don’t change, and neither does the loading of the solid Earth by the water. So, this will produce the same, uniform sea level rise (sea surface relative to coastal rock, what tide gauges see) everywhere on Earth. Just divide the volume change by the total ocean surface area, 360 million square km.
• Gavin's Pussycat
Deech56, no, that was a different issue: stupidity squared, even too much for these authors. What they write in the paper is not stupid, but it’s not right either… get some popcorn ;-)
• Got the popcorn, GP–when’s showtime?
• Deech56
Thank you, GP – it’s hard to keep up with all these earth-shattering results. ;-)
54. Connected:
“By 2015, the report predicts, roughly 375 million people will be affected by climate-related disasters every year, well above the 263 million believed to have been directly impacted by natural disasters in 2010.”
(The “Ashdown Report” on UK foreign aid.)
55. Should have said the report is not primarily about climate change. But it’s notable that it asserts many of the concerns expressed on this forum recently, taking them as partial basis for its analysis of the direction that UK aid efforts should take.
The report can be read here:
http://www.dfid.gov.uk/Documents/publications1/HERR.pdf?epslanguage=en
56. jyyh
global ocean surface temperature deviations looking quite weird currently here @ http://bulletin.mercator-ocean.fr/html/produits/bestproduct/welcome_en.jsp?nom=bestproduct_20110323_22361&zone=glo
• I don’t think they look all that different; where do you see something?
(click the third button at the bottom on the mercator-ocean.fr page to display ‘deviations’ and watch the date to match the NCDC page; color scheme is different but the patterns look pretty similar to me)
• Oh, consider the seasonal norm against which the anomaly is compared — if I’m reading it right, the French site is comparing against 2005 “Seasonal norm” and the NOAA site against a 1971-2000 baseline.
57. Brian Dodge
“That the price of oil and food rose in tandem at this time is hardly surprising, the World Bank concluded in 2009, as “agricultural production is fairly energy intensive.” Rising oil prices “raised the price of fuels to power machinery and irrigation systems; it also raised the price of fertilizer and other chemicals that are energy intensive to produce.” Michael T. Klare, writing in the 2011 Mar 28 issue of The Nation – http://www.thenation.com/article/159165/oil-food-price-shock
58. Andy S
JCH
This might be the paper you were thinking of;
http://harvardmagazine.com/2010/05/gravity-of-glacial-melt
Sea levels will not change equally everywhere because mass will be redistributed as large ice bodies melt. This will change the gravity field, the geoid and the sea level. There could even be local sea-level falls in some cases.
59. hwkf
“Here’s the much uglier, much more dangerous truth: this is just the beginning. It’s gonna get worse. A lot worse. Be afraid. Be very afraid.”
Yé, only the beginning… Now it is the USA that suffers from a drought worse than the Dust Bowl in some places. The food price index is already very high, and if the harvests from the USA are badly affected, it can go higher and higher. 2013, a hellish heat wave for Western Europe (France and Germany are also big producers of cereals); and 2014, all together? It is exaggerated of course, but in the context of the economic crisis, the added tension induced by food prices worsens things. And so the accumulation of difficulties from all horizons is not good news.
|
|
# Functional Tucker approximation using Chebyshev interpolation
This work is concerned with approximating a trivariate function defined on a tensor-product domain via function evaluations. Combining tensorized Chebyshev interpolation with a Tucker decomposition of low multilinear rank yields function approximations that can be computed and stored very efficiently. The existing Chebfun3 algorithm [Hashemi and Trefethen, SIAM J. Sci. Comput., 39 (2017)] uses a similar format, but the construction of the approximation proceeds indirectly, via a so-called slice-Tucker decomposition. As a consequence, Chebfun3 sometimes uses unnecessarily many function evaluations and does not fully benefit from the potential of the Tucker decomposition to reduce, sometimes dramatically, the computational cost. We propose a novel algorithm Chebfun3F that utilizes univariate fibers instead of bivariate slices to construct the Tucker decomposition. Chebfun3F reduces the cost of the approximation for nearly all functions considered, typically by 75%, sometimes by over 98%.
## Code Repositories
### Chebfun3F
Code to reproduce the figures in Functional Tucker approximation using Chebyshev interpolation by S. Dolgov, D. Kressner, C.Stroessner
## 1 Introduction
This work is concerned with the approximation of trivariate functions (that is, functions depending on three variables) defined on a tensor-product domain, for the purpose of performing numerical computations with these functions. Standard approximation techniques, such as interpolation on a regular grid, may require an impractical amount of function evaluations. Several techniques have been proposed to reduce the number of evaluations for multivariate functions by exploiting additional properties. For example, sparse grid interpolation [8] exploits mixed regularity. Alternatively, functional low-rank (tensor) decompositions, such as the spectral tensor train decomposition [6], the continuous low-rank decomposition [22, 23], and the QTT decomposition [31, 41], have been proposed. In this work, we focus on using the Tucker decomposition for third-order tensors, following work by Hashemi and Trefethen [29].
The original problem of finding separable decompositions of functions is intimately connected to low-rank decompositions of matrices and tensors [25, Chapter 7]. A trivariate function $f$ is called separable if it can be represented as a product of univariate functions: $f(x,y,z) = u(x)\,v(y)\,w(z)$. If such a decomposition is available, it is usually much more efficient to work with the factors instead of $f$ when, e.g., discretizing the function. In practice, most functions are usually not separable, but they often can be well approximated by a sum of separable functions. Additional structure can be imposed on this sum, corresponding to different tensor formats. In this work, we consider the approximation of a function in the functional Tucker format [56], as in [29, 35, 44], which takes the form
$$f(x,y,z) \approx \sum_{i=1}^{r_1} \sum_{j=1}^{r_2} \sum_{k=1}^{r_3} c_{ijk}\, u_i(x)\, v_j(y)\, w_k(z), \qquad (1)$$
with univariate functions $u_i$, $v_j$, $w_k$ and the so-called core tensor $\mathcal{C} = (c_{ijk}) \in \mathbb{R}^{r_1 \times r_2 \times r_3}$. The minimal $(r_1, r_2, r_3)$ for which (1) can be satisfied with equality is called the multilinear rank of $f$. It determines the number of entries in $\mathcal{C}$ and the number of univariate functions needed to represent $f$. For functions depending on more than three variables, a recursive format is usually preferable, leading to tree-based formats [20, 38, 39] such as the hierarchical Tucker format [28, 49] and the tensor train format [42, 6, 16, 22, 23].
The existence of a good approximation of the form (1) depends, in a nontrivial manner, on properties of $f$. It can be shown that the best approximation error in the format decays algebraically with respect to the multilinear rank of the approximation for functions in Sobolev spaces [24, 49] and geometrically for analytic functions [27, 55]. Approximations based on the Tucker format are highly anisotropic [54], i.e., a rotation of the function may lead to a very different behavior of the approximation error. This can be partially overcome by adaptively subdividing the domain of the function as proposed, e.g., by Aiton and Driscoll [1].
The representation (1) is not yet practical because it involves continuous objects; a combination of low-rank tensor and function approximation is needed. Univariate functions can be approximated using barycentric Lagrange interpolation based on Chebyshev points [2, 5, 30]. This interpolation is fundamental to Chebfun [19] - a package providing tools to perform numerical computations on the level of functions [43]. Operations with these functions are internally performed by manipulating the Chebyshev coefficients of the interpolant [3].
In Chebfun2 [51, 52], a bivariate function is approximated by applying Adaptive Cross Approximation (ACA) [4], which yields a low-rank approximation in terms of function fibers (e.g., the function for fixed $y_0$ but varying $x$), and interpolating these fibers. In Chebfun3, Hashemi and Trefethen [29] extended these ideas to trivariate functions by recursively applying ACA, to first break down the tensor to function slices (e.g., the bivariate function obtained for fixed $z_0$) and then to function fibers. As will be explained in Section 3.4, this indirect approach via slice approximations typically leads to redundant function fibers, which in turn involve unnecessary function evaluations. This is particularly problematic when the evaluation of the function is expensive, e.g., when each sample requires the solution of a partial differential equation (PDE); see Section 5 for an example.
In this paper, we propose a novel algorithm aiming at computing the Tucker decomposition directly. Our algorithm is called Chebfun3F to emphasize that it is based on selecting the Fibers in the Tucker approximation (1). To compute a suitable core tensor, oblique projections based on Discrete Empirical Interpolation (DEIM) [11] are used. We combine this approach with heuristics similar to the ones used in Chebfun3 for choosing the univariate discretization parameters adaptively and for the accuracy verification.
The remainder of this paper is structured as follows. In Section 2, we introduce and analyze the approximation format used in Chebfun3 and Chebfun3F. In Section 3, we briefly recall the approximation algorithm currently used in Chebfun3. Section 4 introduces our novel algorithm Chebfun3F. Finally, in Section 5, we perform numerical experiments to compare Chebfun3, Chebfun3F and sparse grid interpolation.
## 2 Chebyshev Interpolation and Tucker Approximation
### 2.1 Chebyshev Interpolation
Given a function $f$, we consider an approximation of the form
$$f(x,y,z) \approx \tilde f(x,y,z) = \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} \sum_{k=1}^{n_3} \mathcal{A}_{ijk}\, T_i(x)\, T_j(y)\, T_k(z), \qquad (2)$$
where $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ is the coefficient tensor and $T_i$ denotes the $i$-th Chebyshev polynomial.
To construct (2), we use (tensorized) interpolation. Let $\mathcal{T}$ denote the tensor containing all function values on the grid of Chebyshev points [53]. The coefficient tensor $\mathcal{A}$ is computed from $\mathcal{T}$ using Fourier transformations. We define the transformation matrices $F^{(\alpha)}$ for $\alpha = 1, 2, 3$ as in [36, Sec. 8.3.2.]:
$$F^{(\alpha)} = \frac{2}{n_\alpha}\begin{pmatrix} \tfrac14 T_0(x^{(\alpha)}_0) & \tfrac12 T_0(x^{(\alpha)}_1) & \tfrac12 T_0(x^{(\alpha)}_2) & \dots & \tfrac14 T_0(x^{(\alpha)}_{n_\alpha}) \\ \tfrac12 T_1(x^{(\alpha)}_0) & T_1(x^{(\alpha)}_1) & T_1(x^{(\alpha)}_2) & \dots & \tfrac12 T_1(x^{(\alpha)}_{n_\alpha}) \\ \tfrac12 T_2(x^{(\alpha)}_0) & T_2(x^{(\alpha)}_1) & T_2(x^{(\alpha)}_2) & \dots & \tfrac12 T_2(x^{(\alpha)}_{n_\alpha}) \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \tfrac14 T_{n_\alpha}(x^{(\alpha)}_0) & \tfrac12 T_{n_\alpha}(x^{(\alpha)}_1) & \tfrac12 T_{n_\alpha}(x^{(\alpha)}_2) & \dots & \tfrac14 T_{n_\alpha}(x^{(\alpha)}_{n_\alpha}) \end{pmatrix}.$$
The mapping from the function evaluations to the coefficients can now be written as
$$\mathcal{A} = \mathcal{T} \times_1 F^{(1)} \times_2 F^{(2)} \times_3 F^{(3)}, \qquad (3)$$
where $\times_\alpha$ denotes the mode-$\alpha$ multiplication. For a tensor $\mathcal{T}$ and a matrix $M$ it is defined as the multiplication of every mode-$\alpha$ fiber of $\mathcal{T}$ with $M$, i.e.
$$(\mathcal{T} \times_\alpha M)^{\{\alpha\}} = M\, \mathcal{T}^{\{\alpha\}},$$
where $\mathcal{T}^{\{\alpha\}}$ denotes the mode-$\alpha$ matricization, which is the matrix containing all mode-$\alpha$ fibers of $\mathcal{T}$ [33]. By construction, the interpolation condition is satisfied in all Chebyshev points.
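As a concrete illustration (not part of the original text), the mode-$\alpha$ product can be computed by permuting the chosen mode to the front, multiplying the matricization from the left, and folding the result back. A minimal R sketch with a hypothetical helper name:

# Minimal sketch: mode-alpha product T x_alpha M for a 3D array T.
mode_mult <- function(T, M, alpha) {
  d    <- dim(T)
  perm <- c(alpha, setdiff(1:3, alpha))               # bring mode alpha to the front
  Tm   <- matrix(aperm(T, perm), nrow = d[alpha])     # mode-alpha matricization
  Ym   <- M %*% Tm                                    # multiply every mode-alpha fiber by M
  dnew <- d; dnew[alpha] <- nrow(M)
  aperm(array(Ym, dim = dnew[perm]), order(perm))     # fold back into a tensor
}

For example, A <- mode_mult(mode_mult(mode_mult(T, F1, 1), F2, 2), F3, 3) realizes (3) once F1, F2, F3 hold the transformation matrices.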
The approximation error for Chebyshev interpolation applied to multivariate analytic functions has been studied, e.g., by Sauter and Schwab in [47]. The following result states that the error decays exponentially with respect to the number of interpolation points in each variable.
###### Theorem 1 ([47, Lemma 7.3.3.]).
Suppose that $f$ can be extended to an analytic function on a product of Bernstein ellipses with parameters $\rho_\alpha > 1$, where the Bernstein ellipse $E_\rho$ is a closed ellipse with foci at $\pm 1$ and the sum of major and minor semi-axes equal to $\rho$. Then the Chebyshev interpolant constructed above satisfies the error bound
$$\|f - \tilde f\|_\infty \le \frac{2^{2.5}\sqrt{3}\,\rho_{\min}^{-n_{\min}}}{(1-\rho_{\min}^{-2})^{1.5}} \, \max_{z \in E_\rho} |f^*(z)|,$$
where $\|\cdot\|_\infty$ denotes the uniform norm on the domain and $f^*$ denotes the analytic extension of $f$.
### 2.2 Tucker Approximations
A Tucker approximation of multilinear rank $(r_1, r_2, r_3)$ for a tensor $\mathcal{T} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ takes the form
$$\mathcal{T} \approx \hat{\mathcal{T}} = \mathcal{C} \times_1 U \times_2 V \times_3 W,$$
where $\mathcal{C} \in \mathbb{R}^{r_1 \times r_2 \times r_3}$ is called core tensor and $U \in \mathbb{R}^{n_1 \times r_1}$, $V \in \mathbb{R}^{n_2 \times r_2}$, $W \in \mathbb{R}^{n_3 \times r_3}$ are called factor matrices. If $r_\alpha \ll n_\alpha$, the required storage is reduced from $n_1 n_2 n_3$ entries for $\mathcal{T}$ to $r_1 r_2 r_3 + n_1 r_1 + n_2 r_2 + n_3 r_3$ entries for $\hat{\mathcal{T}}$.
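To make the storage count concrete, the following small R sketch (illustrative sizes only, reusing the mode_mult helper sketched above) reconstructs a Tucker-format tensor and compares the number of stored entries.

# Illustrative sketch: storage of a Tucker representation vs. the full tensor.
n <- c(100, 100, 100); r <- c(5, 6, 7)
C <- array(rnorm(prod(r)), dim = r)           # core tensor
U <- matrix(rnorm(n[1] * r[1]), n[1], r[1])   # factor matrices
V <- matrix(rnorm(n[2] * r[2]), n[2], r[2])
W <- matrix(rnorm(n[3] * r[3]), n[3], r[3])

That <- mode_mult(mode_mult(mode_mult(C, U, 1), V, 2), W, 3)  # full n1 x n2 x n3 tensor

prod(n)              # entries of the full tensor: 1,000,000
prod(r) + sum(n * r) # entries stored in Tucker format: 2,010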
### 2.3 Combining the Tucker Approximation and Chebyshev Interpolation
Let $\hat{\mathcal{T}} = \mathcal{C} \times_1 U \times_2 V \times_3 W$ be a Tucker approximation of the tensor $\mathcal{T}$ obtained from evaluating $f$ in Chebyshev points. Inserted into (2), we now consider an approximation of the form
$$\hat f(x,y,z) = \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} \sum_{k=1}^{n_3} \hat{\mathcal{A}}_{ijk}\, T_i(x)\, T_j(y)\, T_k(z), \qquad (4)$$
where the interpolation coefficients $\hat{\mathcal{A}}$ are computed from $\hat{\mathcal{T}}$ as in Equation (3):
$$\hat{\mathcal{A}} = \hat{\mathcal{T}} \times_1 F^{(1)} \times_2 F^{(2)} \times_3 F^{(3)} = \mathcal{C} \times_1 F^{(1)} U \times_2 F^{(2)} V \times_3 F^{(3)} W.$$
Note that the application of $F^{(\alpha)}$ is the mapping from function evaluations to interpolation coefficients in the context of univariate Chebyshev interpolation [3, 36]. By interpreting the values stored in the columns of $U$ as function evaluations at Chebyshev points, we can define its columnwise Chebyshev interpolant $U(x)$, a row vector of univariate interpolants, one per column of $U$. Defining $V(y)$ and $W(z)$ analogously allows us to rewrite the approximation (4) as
$$\hat f(x,y,z) = \mathcal{C} \times_1 U(x) \times_2 V(y) \times_3 W(z), \qquad (5)$$
where the mode-$\alpha$ products are defined as in Section 2.1 for fixed $x$, $y$, $z$. The goal of this paper is to compute approximations of this form.
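The following R sketch (illustrative; CU, CV, CW are assumed to hold the Chebyshev coefficients of the columns of the factor matrices, e.g. CU = F1 %*% U in the notation above) evaluates an approximation of the form (5) at a single point by contracting the core with three row vectors of evaluated univariate interpolants.

# Minimal sketch: evaluate fhat(x,y,z) = C x1 U(x) x2 V(y) x3 W(z) at one point.
cheb_vals <- function(x, n) cos((0:(n - 1)) * acos(x))   # T_0(x), ..., T_{n-1}(x) on [-1,1]

eval_tucker <- function(x, y, z, C, CU, CV, CW) {
  ux <- drop(cheb_vals(x, nrow(CU)) %*% CU)   # row vector of u_i(x)
  vy <- drop(cheb_vals(y, nrow(CV)) %*% CV)   # row vector of v_j(y)
  wz <- drop(cheb_vals(z, nrow(CW)) %*% CW)   # row vector of w_k(z)
  sum(C * outer(outer(ux, vy), wz))           # contract the core with the three vectors
}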
### 2.4 Low-Rank Approximation Error
The following lemma allows us to distinguish the interpolation error from the low-rank approximation error in the approximation (5).
###### Lemma 1.
Consider the Chebyshev interpolation $\tilde f$ defined in (2) and let $\hat{\mathcal{T}}$ be an approximation of the involved function evaluation tensor $\mathcal{T}$. Then the approximation $\hat f$ defined in (5) satisfies
$$\|f - \hat f\|_\infty \le \|f - \tilde f\|_\infty + \Big(\tfrac{2}{\pi}\log(n_1)+1\Big)\Big(\tfrac{2}{\pi}\log(n_2)+1\Big)\Big(\tfrac{2}{\pi}\log(n_3)+1\Big)\|\mathcal{T} - \hat{\mathcal{T}}\|_\infty. \qquad (6)$$
###### Proof.
By applying the triangle inequality we obtain
$$\|f - \hat f\|_\infty \le \|f - \tilde f\|_\infty + \|\tilde f - \hat f\|_\infty.$$
Tensorized interpolation is equivalent to applying univariate interpolation to each mode. The operator norm for univariate interpolation in Chebyshev points, the Lebesgue constant, satisfies $\Lambda_n \le \tfrac{2}{\pi}\log(n)+1$ [53]. Because interpolation is a linear operation, we obtain
$$\|\tilde f - \hat f\|_\infty \le \Lambda_{n_1}\Lambda_{n_2}\Lambda_{n_3}\|\mathcal{T} - \hat{\mathcal{T}}\|_\infty \le \Big(\tfrac{2}{\pi}\log(n_1)+1\Big)\Big(\tfrac{2}{\pi}\log(n_2)+1\Big)\Big(\tfrac{2}{\pi}\log(n_3)+1\Big)\|\mathcal{T} - \hat{\mathcal{T}}\|_\infty.$$
Lemma 1 states that $\hat f$ is nearly as accurate as $\tilde f$ when the error bound (6) is not dominated by the low-rank approximation error $\|\mathcal{T} - \hat{\mathcal{T}}\|_\infty$. The low-rank approximation $\hat{\mathcal{T}}$ can be stored more efficiently than $\mathcal{T}$ when $r_\alpha \ll n_\alpha$. In the next section, we provide some insight into an example that features multilinear ranks much smaller than the polynomial degrees.
### 2.5 When the Low-Rank Approximation is More Accurate
We consider the function
$$f_\varepsilon(x,y,z) = \frac{1}{x+y+z+3+\varepsilon}$$
on $[-1,1]^3$ with parameter $\varepsilon > 0$.
In this section, we show that a Chebyshev interpolation $\tilde f_\varepsilon$ satisfying a prescribed error bound requires polynomial degrees that grow algebraically in $\varepsilon^{-1}$, whereas the same accuracy can be achieved with multilinear ranks that grow only logarithmically in $\varepsilon^{-1}$. Therefore, for small values of $\varepsilon$ the required polynomial degree is much higher than the required multilinear rank. In this situation $\hat f_\varepsilon$ can achieve almost the same accuracy as $\tilde f_\varepsilon$, but with significantly less storage.
#### Polynomial Degree
For the degree Chebyshev interpolant we require , which is equivalent to . By Theorem 1,
$$\|\varepsilon(f_\varepsilon - \tilde f_\varepsilon)\|_\infty \le O\Big(\rho^{-n_{\min}} \cdot \max_{z \in E_\rho} |\varepsilon f^*_\varepsilon(z)|\Big).$$
We set and extend analytically to on . By construction . Hence, we can choose to obtain the desired accuracy. Although this is only an upper bound for the polynomial degree required, numerical experiments reported below indicate that it is tight.
#### Multilinear Rank
An a priori approximation with exponential sums is used to obtain a bound on the multilinear rank for a tensor containing function values of $f_\varepsilon$; see [26]. Given $R > 1$ and $r \in \mathbb{N}$, Braess and Hackbusch [7] showed that there exist coefficients $a_i$ and $b_i$ such that
$$\Big|\frac{1}{x} - \sum_{i=1}^{r} a_i \exp(-b_i x)\Big| \le 16 \exp\!\Big(-\frac{r\pi^2}{\log(8R)}\Big), \qquad \forall x \in [1, R]. \qquad (7)$$
Trivially, we have for the substitution with . Applying (7) yields or, equivalently,
$$\Big\|f_\varepsilon(x,y,z) - \underbrace{\sum_{i=1}^{r} \frac{a_i}{\varepsilon}\cdot\exp\!\Big(-\frac{b_i}{\varepsilon}x\Big)\cdot\exp\!\Big(-\frac{b_i}{\varepsilon}y\Big)\cdot\exp\!\Big(-\frac{b_i}{\varepsilon}z\Big)\cdot\exp\!\Big(-\frac{b_i}{\varepsilon}(3+\varepsilon)\Big)}_{=:\,g_\varepsilon(x,y,z)}\Big\|_\infty \le \tau \qquad (8)$$
for every $\tau > 0$ when
$$r \ge \frac{-\log\big(8(1+\tfrac{6}{\varepsilon})\big)\,\log\!\big(\tfrac{\tau}{16}\big)}{\pi^2} = O(|\log(\varepsilon)|).$$
The approximation $g_\varepsilon$ in (8) has multilinear rank $(r, r, r)$. In turn, the tensor containing the values of $g_\varepsilon$ on the Chebyshev grid has multilinear rank at most $(r, r, r)$ and satisfies the same error bound with respect to the tensor of values of $f_\varepsilon$.
#### Comparison
In Figure 1, we estimate the maximal polynomial degree required to compute a Chebyshev interpolant with a prescribed accuracy for selected fibers of $f_\varepsilon$, which is a lower bound for the required polynomial degrees. It perfectly matches the asymptotic behavior of the derived upper bound. In Figure 1, we also plot the multilinear ranks from the truncated Higher Order Singular Value Decomposition (HOSVD) [15] with a prescribed tolerance applied to the tensor containing the evaluation of $f_\varepsilon$ on a Chebyshev grid. This estimate serves as a lower bound for the multilinear rank required to approximate $f_\varepsilon$. Due to the limited grid size, this estimate does not fully match the asymptotic behavior $O(|\log(\varepsilon)|)$, but nonetheless it clearly reflects that the multilinear ranks can be much smaller than the polynomial degrees, as predicted, for sufficiently small $\varepsilon$.
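For reference, the truncated HOSVD used for the rank estimates above can be sketched in R with one SVD per matricization; this is the generic textbook construction (reusing mode_mult from above), not the Chebfun3 implementation.

# Minimal sketch of a truncated HOSVD with absolute tolerance tol.
hosvd_trunc <- function(T, tol) {
  d <- dim(T); U <- vector("list", 3)
  for (a in 1:3) {
    Ta <- matrix(aperm(T, c(a, setdiff(1:3, a))), nrow = d[a])  # mode-a matricization
    sv <- svd(Ta)
    r  <- max(1, sum(sv$d > tol))                               # keep singular values above tol
    U[[a]] <- sv$u[, 1:r, drop = FALSE]
  }
  # core = T x1 U1' x2 U2' x3 U3'
  C <- mode_mult(mode_mult(mode_mult(T, t(U[[1]]), 1), t(U[[2]]), 2), t(U[[3]]), 3)
  list(core = C, factors = U, ranks = sapply(U, ncol))
}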
## 3 Existing Algorithm: Chebfun3
In this section, we recall how an approximation of the form (4) is computed in Chebfun3 [29]. As discussed in Section 2.5, there are often situations in which the multilinear rank of $\mathcal{T}$ is much smaller than the polynomial degree. Chebfun3 benefits from such a situation by first using a coarse sample tensor $\mathcal{T}_c$ to identify the fibers needed for the low-rank approximation. This makes it possible to construct the actual approximation from a finer sample tensor $\mathcal{T}_f$ by only evaluating these fibers instead of the whole tensor.
Chebfun3 consists of three phases: preparation of the approximation by identifying fibers for a so-called block term decomposition [14] of $\mathcal{T}_c$, refinement of the fibers, and conversion and compression of the refined block term decomposition into the Tucker format (5).
### 3.1 Phase 1: Block Term Decomposition
In Chebfun3, $\mathcal{T}_c$ is initially obtained by sampling $f$ on a coarse grid of Chebyshev points. A block term decomposition of $\mathcal{T}_c$ is obtained by applying ACA [4] (see Algorithm 1) recursively. In the first step, ACA is applied to a matricization of $\mathcal{T}_c$, say, the mode-1 matricization $\mathcal{T}_c^{\{1\}}$. This results in index sets $I$ and $J$ such that
$$\mathcal{T}_c^{\{1\}} \approx \mathcal{T}_c^{\{1\}}(:,J)\,\big(\mathcal{T}_c^{\{1\}}(I,J)\big)^{-1}\,\mathcal{T}_c^{\{1\}}(I,:), \qquad (9)$$
where $\mathcal{T}_c^{\{1\}}(:,J)$ contains mode-1 fibers of $\mathcal{T}_c$ and $\mathcal{T}_c^{\{1\}}(I,:)$ contains (vectorized) slices of $\mathcal{T}_c$. For each $i \in I$, such a slice is reshaped into a matrix $S_i$ and, in the second step, approximated by again applying ACA:
$$S_i \approx S_i(:,L_i)\,\big(S_i(K_i,L_i)\big)^{-1}\,S_i(K_i,:), \qquad (10)$$
where $S_i(:,L_i)$ and $S_i(K_i,:)$ contain fibers of $\mathcal{T}_c$ in the two remaining modes. Combining (9) and (10) yields the approximation
$$\mathcal{T}_c^{\{1\}} \approx \mathcal{T}_c^{\{1\}}(:,J)\,\big(\mathcal{T}_c^{\{1\}}(I,J)\big)^{-1} \begin{pmatrix} \mathrm{vec}\big(S_1(:,L_1)(S_1(K_1,L_1))^{-1}S_1(K_1,:)\big) \\ \mathrm{vec}\big(S_2(:,L_2)(S_2(K_2,L_2))^{-1}S_2(K_2,:)\big) \\ \vdots \end{pmatrix}, \qquad (11)$$
where $\mathrm{vec}(\cdot)$ denotes vectorization. Reshaping this approximation into a tensor can be viewed as a block term decomposition in the sense of [14, Definition 2.2.].
If the ratios of the numbers of selected indices to the coarse grid sizes are larger than a heuristic threshold, the coarse grid resolution is deemed insufficient to identify fibers. If this is the case, the coarse grid size is increased and Phase 1 is repeated.
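Algorithm 1 itself is not reproduced in this text; purely as an illustration of the cross-approximation idea behind it, the R sketch below runs a simplified ACA with full pivoting on a plain matrix and returns index sets I and J such that A ≈ A[, J] %*% solve(A[I, J]) %*% A[I, ]. The pivoting strategy and stopping rule are deliberate simplifications, not the ones used in Chebfun3.

# Simplified ACA with full pivoting on a matrix A.
aca_full_pivot <- function(A, tol = 1e-10, maxrank = 20) {
  R <- A; I <- integer(0); J <- integer(0)
  for (k in seq_len(maxrank)) {
    p <- which(abs(R) == max(abs(R)), arr.ind = TRUE)[1, ]   # pivot = largest residual entry
    if (abs(R[p[1], p[2]]) < tol) break
    I <- c(I, p[1]); J <- c(J, p[2])
    # rank-one update of the residual with the selected cross
    R <- R - R[, p[2], drop = FALSE] %*% R[p[1], , drop = FALSE] / R[p[1], p[2]]
  }
  list(I = I, J = J)
}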
### 3.2 Phase 2: Refinement
The block term decomposition (11) is composed of fibers of $\mathcal{T}_c$. Such a fiber corresponds to the evaluation of a univariate function for certain fixed values of the other two variables. Chebfun contains a heuristic to decide whether the function values in a fiber suffice to yield an accurate interpolation of the corresponding univariate function [2]. If this is not the case, the grid is refined.
In Chebfun3 this heuristic is applied to all fibers contained in (11) in order to determine the size of the finer sample tensor $\mathcal{T}_f$. For each mode, the size is repeatedly increased, which leads to nested Chebyshev points, until the heuristic considers the resolution sufficient for all fibers of that mode. Replacing all fibers in (11) by their refined counterparts yields an approximation of the tensor $\mathcal{T}_f$, which contains evaluations of $f$ on a finer Chebyshev grid. Note that $\mathcal{T}_f$ might be very large and is never computed explicitly.
### 3.3 Phase 3: Compression
In the third phase of the Chebfun3 constructor, the refined block term decomposition is converted and compressed to the desired Tucker format (5), where the interpolants are stored as Chebfun objects [3]; see [29] for details. Lemma 1 guarantees a good approximation when the polynomial degrees are sufficiently large and when $\mathcal{T}_f$ is well approximated by the Tucker approximation $\hat{\mathcal{T}}$. Neither of these properties can be guaranteed in Phases 1 and 2 alone. Therefore, in a final step, Chebfun3 verifies the accuracy by comparing $f$ and the approximation $\hat f$ at Halton points [37]. If the estimated error is too large, the whole algorithm is restarted with a finer coarse grid in Phase 1.
The Chebfun3 algorithm often requires unnecessarily many function evaluations. As we will illustrate in the following, this is due to redundancy among the fibers selected in the two inner ACA steps. For this purpose we collect all (refined) fibers of one of these modes in the block term decomposition (11) into the columns of a big matrix, whose number of columns is determined by the number of steps of the outer ACA (9). As will be demonstrated with an example below, this matrix is often observed to have low numerical rank, which in turn allows its column space to be represented by much fewer columns, that is, much fewer fibers. As the accuracy of the column space determines the accuracy of the Tucker decomposition after the compression, this implies that the remaining fibers are redundant.
Let us now consider the block term decomposition (11) for the function
$$f(x,y,z) = \frac{1}{1 + 25\sqrt{x^2 + y^2 + z^2}}$$
(note that the accuracy verification in Phase 3 fails once for this function; here we only consider the block term decomposition obtained after restarting the procedure).
In Figure 2 the numerical rank and the number of columns of this matrix are compared. The approximation of the individual slices leads to a large total number of fibers, the sum of the corresponding red and blue bars in Figure 2. In contrast, their numerical rank (blue bar) is much smaller. Thus, the red bar can be interpreted as the number of redundant fibers. This happens since nearby slices tend to be similar.
The total block term decomposition contains many slices and is compressed into a Tucker decomposition of much smaller multilinear rank. It contains redundant fibers, and the refinement requires additional function evaluations for each of them. Note that the asymmetry in the rank of the Tucker decomposition is caused by the asymmetry of the block term decomposition.
Another disadvantage is that Chebfun3 always requires the full evaluation of $\mathcal{T}_c$ in Phase 1. This becomes expensive when a large coarse grid is needed in order to properly identify suitable fibers.
## 4 Novel Algorithm: Chebfun3F
In this section, we describe our novel algorithm Chebfun3F to compute an approximation of the form (5). The goal of Chebfun3F is to avoid the redundant function evaluations observed in Chebfun3. While the structure of Chebfun3F is similar to Chebfun3, consisting of 3 phases to identify/refine fibers and compute a Tucker decomposition, there is a major difference in Phase 1. Instead of proceeding via slices, we directly identify fibers of $\mathcal{T}_c$ for building the factor matrices. The core tensor is constructed in Phase 3.
### 4.1 Phase 1: Fiber Indices and Factor Matrices
As in Chebfun3, the coarse tensor $\mathcal{T}_c$ is initially defined to contain the function values of $f$ on a coarse Chebyshev grid. We seek to compute factor matrices $U_c$, $V_c$ and $W_c$ such that the orthogonal projection of $\mathcal{T}_c$ onto the span of the factor matrices is an accurate approximation of $\mathcal{T}_c$, i.e.
$$\mathcal{T}_c \approx \mathcal{T}_c \times_1 U_c (U_c^T U_c)^{-1} U_c^T \times_2 V_c (V_c^T V_c)^{-1} V_c^T \times_3 W_c (W_c^T W_c)^{-1} W_c^T. \qquad (12)$$
Additionally, we require that the columns of $U_c$, $V_c$ and $W_c$ contain fibers of $\mathcal{T}_c$.
In the existing literature, algorithms to compute such factor matrices include the Higher Order Interpolatory Decomposition [46], which is based on a rank revealing QR decomposition, and the Fiber Sampling Tensor Decomposition [9], which is a generalization of the CUR decomposition. We propose a novel algorithm, which in contrast to the existing algorithms does not require the evaluation of the full tensor $\mathcal{T}_c$. We follow the ideas of TT-cross [40, 48] and its variants such as the Schur-Cross3D [45] and the ALS-cross [18].
Initially, we randomly choose index sets each containing indices. In the first step, we apply Algorithm 1 to . Note that this needs drawing only values of the function , in contrast to values in the whole tensor . The selected columns serve as a first candidate for the factor matrix . The index set is set to the row indices selected by Algorithm 1 (see Figure 3). We use the updated index set and apply Algorithm 1 to analogously, which yields and an updated . From we obtain and . We repeat this process in an alternating fashion with the updated index sets, which leads to potentially improved factor matrices. Following the ideas of Chebfun3, we check after each iteration whether the ratios , and surpass the heuristic threshold . If this is the case, we increase the size of the coarse tensor to and restart the whole process by reinitializing with random indices respectively.
It is not clear a priori how many iterations are needed to attain an approximation (12) that yields a Tucker approximation (5) which passes the accuracy verification in Phase 3. In numerical experiments, it has usually proven to be sufficient to stop after the second iteration, during which the coarse grid has not been refined, or when , or . This is formalized in Algorithm 2. In many cases, we found that the numbers of columns in the factor matrices are equal to the multilinear rank of the truncated HOSVD [15] of with the same tolerance.
### 4.2 Phase 2: Refinement of the Factors
In Phase 2, the fibers in $U_c$, $V_c$ and $W_c$ are refined using Chebfun’s heuristic [2] as in Chebfun3 (see Section 3.2). This leads to new factor matrices $U_f$, $V_f$ and $W_f$ containing the refined fibers of $\mathcal{T}_f$, corresponding to the evaluations of $f$ on a finer Chebyshev grid. This phase needs only fiber evaluations of $f$.
### 4.3 Phase 3: Reconstruction of the Core Tensor
In the final phase of Chebfun3F, we compute a core tensor $\hat{\mathcal{C}}$ to yield an approximation $\hat{\mathcal{T}} = \hat{\mathcal{C}} \times_1 U_f \times_2 V_f \times_3 W_f \approx \mathcal{T}_f$.
In principle, the best approximation (with respect to the Frobenius norm) for fixed factor matrices is obtained by orthogonal projections [15]. Such an approach comes with the major disadvantage that the full evaluation of $\mathcal{T}_f$ is required. This can be circumvented by instead using oblique projections of the form $M(\Phi_I^T M)^{-1}\Phi_I^T$, where $\Phi_I$ selects the rows indexed by an index set $I$. Oblique projections in all three modes yield
$$\mathcal{T}_f \approx \hat{\mathcal{T}} = \Big(\mathcal{T}_f(I,J,K) \times_1 (\Phi_I^T U_f)^{-1} \times_2 (\Phi_J^T V_f)^{-1} \times_3 (\Phi_K^T W_f)^{-1}\Big) \times_1 U_f \times_2 V_f \times_3 W_f = \hat{\mathcal{C}} \times_1 U_f \times_2 V_f \times_3 W_f,$$
for index sets $I$, $J$, $K$. The choice of these index sets is crucial for the approximation quality and will be discussed later on. The computation of the core tensor only requires the evaluations in $\mathcal{T}_f(I,J,K)$ in Phase 3. From $\hat{\mathcal{T}}$ we construct the approximation (5) as described in Section 2.3.
In practice, instead of using the potentially ill-conditioned matrices $U_f$, $V_f$, $W_f$ we compute QR decompositions $U_f = Q_U R_U$, $V_f = Q_V R_V$, $W_f = Q_W R_W$, and use the equivalent representation
$$\hat{\mathcal{T}} = \mathcal{T}_f(I,J,K) \times_1 Q_U(\Phi_I^T Q_U)^{-1} \times_2 Q_V(\Phi_J^T Q_V)^{-1} \times_3 Q_W(\Phi_K^T Q_W)^{-1}$$
in Chebfun3F. The evaluation of $\hat{\mathcal{T}}$ requires only the samples $\mathcal{T}_f(I,J,K)$ from $\mathcal{T}_f$.
The following lemma plays a critical role in guiding the choice of the indices $I$, $J$, $K$.
###### Lemma 2 ([11, Lemma 7.3]).
Let $M \in \mathbb{R}^{n \times r}$, $r \le n$, have orthonormal columns. Consider an index set $I$ of cardinality $r$ such that $\Phi_I^T M$ is invertible. Then the oblique projection $M(\Phi_I^T M)^{-1}\Phi_I^T$ satisfies
$$\|x - M(\Phi_I^T M)^{-1}\Phi_I^T x\|_2 \le \|(\Phi_I^T M)^{-1}\|_2 \cdot \|(I - MM^T)x\|_2, \qquad \forall x \in \mathbb{R}^n,$$
where $\|\cdot\|_2$ denotes the matrix 2-norm.
Lemma 2 exhibits the critical role played by the quantity $\|(\Phi_I^T M)^{-1}\|_2$ for oblique projections. In Chebfun3F, we use the discrete empirical interpolation method (DEIM) [10], presented in Algorithm 3, to compute the index sets $I$, $J$, $K$ given $Q_U$, $Q_V$, $Q_W$. In practice, these index sets usually yield good approximations as $\|(\Phi_I^T M)^{-1}\|_2$ tends to be small; see also Section 4.4.1.
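Algorithm 3 is likewise not reproduced here; the following R sketch shows the standard greedy DEIM index selection on a matrix Q with orthonormal columns, which is the textbook procedure the text refers to (the exact Chebfun3F implementation may differ).

# Minimal DEIM sketch: given Q with orthonormal columns, select one row index per
# column so that Phi_I^T Q is well conditioned.
deim <- function(Q) {
  n <- nrow(Q); r <- ncol(Q)
  idx <- integer(r)
  idx[1] <- which.max(abs(Q[, 1]))
  for (j in seq_len(r)[-1]) {
    # interpolate the j-th column on the indices chosen so far ...
    coef_j <- solve(Q[idx[1:(j - 1)], 1:(j - 1), drop = FALSE], Q[idx[1:(j - 1)], j])
    res    <- Q[, j] - Q[, 1:(j - 1), drop = FALSE] %*% coef_j
    # ... and pick the row where the interpolation residual is largest
    idx[j] <- which.max(abs(res))
  }
  idx
}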
|
|
# How much code have you written?
May 3, 2014
By
(This article was first published on R is my friend » R, and kindly contributed to R-bloggers)
This past week I attended the National Water Quality Monitoring Conference in Cincinnati. Aside from spending my time attending talks, workshops, and meeting like-minded individuals, I spent an unhealthy amount of time in the hotel bar working on this blog post. My past experiences mixing coding and beer have suggested the two don’t mix, but I was partly successful in writing a function that will be the focus of this post.
I’ve often been curious how much code I’ve written over the years since most of my professional career has centered around using R in one form or another. In the name of ridiculous self-serving questions, I wrote a function for quantifying code lengths by file type. I would describe myself as somewhat of a hoarder with my code in that nothing ever gets deleted. Getting an idea of the total amount was a simple exercise in finding the files, enumerating the file contents, and displaying the results in a sensible manner.
I was not surprised that several functions in R already exist for searching for file paths in directory trees. The list.files function can be used to locate files using regular expression matching, whereas the file.info function can be used to get descriptive information for each file. I used both in my function to find files in a directory tree through recursive searching of paths with a given extension name. The date each file was last modified is saved, and the file length, as lines or number of characters, is saved after reading the file with readLines. The output is a data frame for each file with the file name, type, length, cumulative length by file type, and date. The results can be easily plotted, as shown below.
The function, obtained here, has the following arguments:
root: Character string of root directory to search
file_typs: Character vector of file types to search; file types must be compatible with readLines
omit_blank: Logical indicating if blank lines are counted, default TRUE
recursive: Logical indicating if all directories within root are searched, default TRUE
lns: Logical indicating if lines in each file are counted, default TRUE, otherwise characters are counted
trace: Logical for monitoring progress, default TRUE
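The real function is in the Gist linked below; purely to illustrate the approach described above (list.files for the search, file.info for the dates, readLines for the counts), a stripped-down, hypothetical version might look like this — the argument names follow the list above, but the body is a guess at the internals, not the author's code.

# Hypothetical, simplified re-implementation of the idea behind file.lens()
file.lens_sketch <- function(root, file_typs, omit_blank = TRUE,
                             recursive = TRUE, lns = TRUE) {
  patt <- paste0('\\.(', paste(file_typs, collapse = '|'), ')$')
  fls  <- list.files(root, pattern = patt, recursive = recursive,
                     full.names = TRUE, ignore.case = TRUE)
  out <- lapply(fls, function(fl) {
    txt <- readLines(fl, warn = FALSE)
    if (omit_blank) txt <- txt[nzchar(trimws(txt))]
    len <- if (lns) length(txt) else sum(nchar(txt))
    data.frame(fl = basename(fl), Length = len,
               Date = as.Date(file.info(fl)$mtime),
               Type = tolower(tools::file_ext(fl)))
  })
  out <- do.call(rbind, out)
  out <- out[order(out$Type, out$Date), ]
  out$cum_len <- ave(out$Length, out$Type, FUN = cumsum)  # cumulative length by type
  out
}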
Here’s an example using the function to search a local path on my computer.
# import function from Github
library(devtools)
# https://gist.github.com/fawda123/20688ace86604259de4e
source_gist('20688ace86604259de4e')
# path to search and file types
root <- 'C:/Projects'
file_typs <- c('r','py', 'tex', 'rnw')
# get data from function
my_fls <- file.lens(root, file_typs)
head(my_fls)
## fl Length Date cum_len Type
## 1 buffer loop.py 29 2010-08-12 29 py
## 2 erase loop.py 22 2010-08-12 51 py
## 3 remove selection and rename.py 26 2010-08-16 77 py
## 4 composite loop.py 32 2010-08-18 109 py
## 5 extract loop.py 61 2010-08-18 170 py
## 6 classification loop.py 32 2010-08-19 202 py
In this example, I’ve searched for R, Python, LaTeX, and Sweave files in the directory ‘C:/Projects/’. The output from the function is shown using the head command.
Here’s some code for plotting the data. I’ve created four plots with ggplot and combined them using grid.arrange from the gridExtra package. The first plot shows the number of files by type, the second shows file length by date and type, the third shows a frequency distribution of file lengths by type, and the fourth shows a cumulative distribution of file lengths by type and date.
# plots
library(ggplot2)
library(gridExtra)
# number of files by type
p1 <- ggplot(my_fls, aes(x = Type, fill = Type)) +
geom_bar() +
ylab('Number of files') +
theme_bw()
# file length by type and date
p2 <- ggplot(my_fls, aes(x = Date, y = Length, group = Type,
colour = Type)) +
geom_line() +
ylab('File length') +
geom_point() +
theme_bw() +
theme(legend.position = 'none')
# density of file length by type
p3 <- ggplot(my_fls, aes(x = Length, y = ..scaled.., group = Type,
colour = Type, fill = Type)) +
geom_density(alpha = 0.25, size = 1) +
xlab('File length') +
ylab('Density (scaled)') +
theme_bw() +
theme(legend.position = 'none')
# cumulative length by file type and date
p4 <- ggplot(my_fls, aes(x = Date, y = cum_len, group = Type,
colour = Type)) +
geom_line() +
geom_point() +
ylab('Cumulative file length') +
theme_bw() +
theme(legend.position = 'none')
# function for common legend
g_legend <- function(a.gplot){
# build the plot and convert it to a gtable
tmp <- ggplot_gtable(ggplot_build(a.gplot))
# find the grob that holds the legend ('guide-box')
leg <- which(sapply(tmp$grobs, function(x) x$name) == "guide-box")
legend <- tmp$grobs[[leg]]
return(legend)}
# get common legend, remove from p1
mylegend <- g_legend(p1)
p1 <- p1 + theme(legend.position = 'none')
# final plot
grid.arrange(
arrangeGrob(p1, p2, p3, p4, ncol = 2),
mylegend,
ncol = 2, widths = c(10,1))
Clearly, most of my work has been done in R, with most files being less than 200-300 lines. There seems to be a lull of activity in Mid 2013 after I finished my dissertation, which is entirely expected. I was surprised to see that the Sweave (.rnw) and LaTeX files weren’t longer until I remembered that paragraphs in these files are interpreted as single lines of text. I re-ran the function using characters as my unit of my measurement.
# get file lengths by character
my_fls <- file.lens(root, file_typs, lns = F)
# re-run plot functions above
Now there are clear differences in lengths for the Sweave and LaTeX files, with the longest file topping out at 181256 characters.
I know others might be curious to see how much code they’ve written so feel free to use/modify the function as needed. These figures represent all of my work, fruitful or not, in six years of graduate school. It goes without saying that all of your code has to be in the root directory. The totals will obviously be underestimates if you have code elsewhere, such as online. The function could be modified for online sources but I think I’m done for now.
Cheers,
Marcus
|
|
## Is C bigger than R?
Is it true that the set of complex number is bigger than the set of real numbers?
I know that card C = card (R x R) and I think that card (R x R) > card R. Is this true, and if so, why?
Quote by samkolb Is it true that the set of complex number is bigger than the set of real numbers? I know that card C = card (R x R) and I think that card (R x R) > card R. Is this true, and if so, why?
I think card (RxR) = card R
I would show this by setting up a one-to-one map between RxR and R
I will just show you a one-to-one between the unit square [0,1]x[0,1] and the unit interval [0,1]
You just look at the two decimal expansions and merge
(0.abcdefg...., 0.mnopqrs....) -> 0.ambncodpeq.......
C has cardinality c, or aleph if you want, the same as R. The simple bijection is a+ib |-> (a,b) into RxR. If you want a bijection from C to R, then z=x+iy |-> Im(z)/Re(z) is a bijection to [-infinity, infinity], which is R U {infinity, -infinity}; this cardinality is aleph+2 = aleph. QED
Quote by loop quantum gravity If you want a bijection from C to R, then z=x+iy |-> Im(z)/Re(z); it's a bijection to [-infinity,infinity], which is R U {infinity,-infinity}, and this cardinality is aleph+2=aleph.
How could that possibly be a bijection? Obviously, $$z_1=a+ib$$ is mapped to the same point as $$z_2=a z_1$$, so it is not an injection.
Marcus has already provided a valid bijection, his "decimal merging" is the classical example of this. Notice how it is also valid in $$\mathbb{R}^n$$.
Correct Big-T, but at least it's onto. (-:
$$|\mathbb{C}| = |\mathbb{R}^2| = |\mathbb{R}|$$. There's some discussion about that in this thread. Minor point: marcus's function isn't even well-defined; consider decimal expansions with infinite trailing "9"s. (For example, 0.0999... = 0.1000..., but (0.0999..., 0.0000...) maps to 0.00909090..., and (0.1000..., 0.0000...) maps to 0.10000000... .) However, the mapping from 0.abcdefgh... to (0.acef..., 0.bdfh...) is a well-defined surjection from $$[0, 1)$$ to $$[0, 1)^2$$, and that's all you need.
Marcus' function would be well-defined if we agreed to use trailing nines wherever the decimal expansion is terminating; this should of course have been specified.
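To make the well-definedness point concrete, here is a small illustrative sketch (added, not from the thread) that merges two decimal strings digit by digit; the two expansions of the same real number, 0.1000... = 0.0999..., produce different merged strings unless a canonical expansion (e.g. trailing nines) is fixed first:

```python
def merge(a: str, b: str) -> str:
    """Interleave the decimal digits of two expansions of the form '0.xxxx...'."""
    da, db = a.split(".")[1], b.split(".")[1]
    n = max(len(da), len(db))
    da, db = da.ljust(n, "0"), db.ljust(n, "0")
    return "0." + "".join(x + y for x, y in zip(da, db))

# 0.1000... and 0.0999... are the same real number, but the merged
# images differ, so the map on real numbers is not well defined
# until one fixes a canonical expansion.
print(merge("0.1000000", "0.0000000"))  # 0.10000000000000
print(merge("0.0999999", "0.0000000"))  # 0.00909090909090
```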
|
|
# computing a nearest symmetric positive semidefinite matrix
A correlation matrix is a symmetric matrix with unit diagonal and nonnegative eigenvalues; in linear-algebra terms, a symmetric positive semidefinite (PSD) matrix with unit diagonal. Given a symmetric matrix, the nearest-correlation-matrix problem asks for the nearest such matrix. The problem arises in the finance industry, where the correlations are between stocks: if C is a correlation matrix, its elements c_ij represent pair-wise correlations, and the rule-of-thumb estimates used in practice often produce matrices that are not PSD and then have to be "repaired" by computing the nearest correlation matrix. A well-known account of the problem's origin describes a London fund management company asking, in 2000, for the nearest correlation matrix in the Frobenius norm to an "almost correlation matrix": a symmetric matrix having a significant number of (small) negative eigenvalues.

The basic result is Higham's: the nearest symmetric positive semidefinite matrix in the Frobenius norm to an arbitrary real matrix A is (B + H)/2, where H is the symmetric polar factor of B = (A + A^T)/2. In the 2-norm, a nearest symmetric positive semidefinite matrix and its distance δ2(A) from A are given by a computationally challenging formula due to Halmos; the bisection method can be applied to this formula to compute upper and lower bounds for δ2(A) differing by no more than a given amount, and the full procedure involves a combination of bisection and Newton's method. For distance measured in two weighted Frobenius norms — the W-norm $\|W^{1/2} A W^{1/2}\|_F$ with W a symmetric positive definite weight matrix, and the H-norm $\|H \circ A\|_F$ with H a symmetric matrix of positive weights and $\circ$ the Hadamard product $(H \circ A)_{ij} = h_{ij} a_{ij}$ — the solution is characterized using convex analysis. A key practical ingredient is a stable and efficient test for positive definiteness based on an attempted Cholesky decomposition; since any routine that requires a positive definite input will run Cholesky on it anyway, this is also the natural way to decide definiteness without choosing tolerances.

Several related problems show up in the same literature: finding the nearest positive semidefinite matrix to a symmetric matrix in the spectral norm; finding a low-rank positive approximant of a symmetric matrix (a PSD matrix of rank below a given bound that is nearest in the Frobenius, Schatten p-, trace, or spectral norm); computing the smallest eigenvalue of a symmetric positive definite Toeplitz matrix using only the Levinson–Durbin algorithm; and the nearest stable matrix pair problem, in which one minimizes the Frobenius norm of $(\Delta_E,\Delta_A)$ such that $(E+\Delta_E, A+\Delta_A)$ is a stable matrix pair.

In software, the Matrix package for R provides nearPD(), which finds the closest positive semidefinite matrix to a given matrix (users coming to Python from R often look for an equivalent); a MATLAB routine returns the nearest correlation matrix by minimizing the Frobenius distance, given an N-by-N symmetric matrix with elements in [-1, 1] and unit diagonal; and John D'Errico's nearestSPD implements the (B + H)/2 formula. A simple iterative approach to the correlation-matrix version is alternating projections: repeatedly project onto (1) the set of positive semidefinite matrices and (2) the set of matrices with ones on the diagonal. One caveat that comes up repeatedly: in double precision, for a largish matrix (say 100 × 100) the result of an eigenvalue-based projection can fail a strict PSD test because of rounding, forcing the computation to be repeated or a small tolerance to be accepted.

Reference: N. J. Higham, Computing a nearest symmetric positive semidefinite matrix, Linear Algebra and its Applications 103 (1988), 103–118.
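For concreteness, here is a minimal NumPy sketch of the Frobenius-norm construction just described (added for illustration; it is not code from any of the sources quoted above). It uses the fact that (B + H)/2 is the same matrix obtained by zeroing out the negative eigenvalues of the symmetric part B:

```python
import numpy as np

def nearest_psd(A: np.ndarray) -> np.ndarray:
    """Nearest symmetric PSD matrix to A in the Frobenius norm (Higham, 1988).

    (B + H)/2, with B the symmetric part of A and H its symmetric polar
    factor, equals the projection of B onto the PSD cone: keep B's
    eigenvectors and clip its negative eigenvalues to zero.
    """
    B = (A + A.T) / 2.0                 # symmetric part
    w, V = np.linalg.eigh(B)            # eigendecomposition of B
    w = np.clip(w, 0.0, None)           # drop negative eigenvalues
    X = (V * w) @ V.T                   # V diag(w) V^T
    return (X + X.T) / 2.0              # re-symmetrize against rounding

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    X = nearest_psd(A)
    # Smallest eigenvalue is ~0; tiny negative values illustrate the
    # rounding caveat mentioned above.
    print(np.linalg.eigvalsh(X).min())
```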
|
|
# nLab saturated class of limits
category theory
## Applications
#### Enriched category theory
enriched category theory
## Extra stuff, structure, property
### Homotopical enrichment
#### Limits and colimits
limits and colimits
# Saturated classes of limits
## Idea
A class $\mathcal{X}$ of limits is saturated if it is closed under the “construction” of other limits out of limits in $\mathcal{X}$.
## Definition
Let $V$ be a Bénabou cosmos, and let $\mathcal{X}$ be a class of $V$-functors $\Phi\colon D\to V$ where $D$ is a small $V$-category. Note that the domain $D$ may be different for different elements of $\mathcal{X}$.
Any such $\Phi\colon D\to V$ can serve as a weight for defining the weighted limit of a $V$-functor $D\to C$, for any $V$-category $C$. We say that a $V$-category is $\mathcal{X}$-complete if it admits all such limits, for all $\Phi\in\mathcal{X}$, and that a $V$-functor between $\mathcal{X}$-complete $V$-categories is $\mathcal{X}$-continuous if it preserves all such limits.
The saturation of $\mathcal{X}$ is the class of all weights $\Phi\colon D\to V$ (with $D$ small) such that
1. Any $\mathcal{X}$-complete $V$-category admits $\Phi$-weighted limits, and
2. Any $\mathcal{X}$-continuous $V$-functor preserves $\Phi$-weighted limits.
It is an open question whether the second condition is implied by the first in general.
Finally, $\mathcal{X}$ is saturated if it is its own saturation.
### The conical case
When $V=Set$, we frequently discuss only conical limits, i.e. limits whose weight $\Phi\colon D\to Set$ is the constant functor $\Delta_D 1$ at the terminal set. These give the classical notion of limit in a category.
In this case, we may consider instead classes $\mathcal{J}$ of small categories; we write $\Delta_{\mathcal{J}}$ for the class of weights $\{ \Delta_C 1 | C \in \mathcal{J}\}$. We say that a category $D$ lies in the saturation of $\mathcal{J}$ if the weight $\Delta_D 1$ lies in the saturation of $\Delta_{\mathcal{J}}$, and that $\mathcal{J}$ is saturated if it is its own saturation.
Note that in practically all cases, the saturation of $\Delta_{\mathcal{J}}$ will contain weights that are not of the form $\Delta_D 1$. Moreover, even when $V=Set$ there are nontrivial saturated classes of weights that do not contain any nontrivial conical weights, such as the saturation of the weight for “cartesian squares” $A\times A$.
However, for conical weights the answer to the above open question is known to be affirmative. On the one hand, if $\mathcal{X}$ is a class of $Set$-weights such that every $\mathcal{X}$-complete category is also $\Delta_D 1$-complete, then every $\mathcal{X}$-continuous functor is also $\Delta_D 1$-continuous. See AK for a proof of this.
On the other hand, if $\mathcal{J}$ is a class of $Set$-categories and $\Phi$ is a $Set$-weight such that every $\Delta_{\mathcal{J}}$-complete category is also $\Phi$-complete, then every $\Delta_{\mathcal{J}}$-continuous functor is also $\Phi$-continuous. In fact, this is still true if instead of $\Delta_{\mathcal{J}}$ we consider a class of weights all of which take only nonempty sets as values. See KP for a proof of this.
## Characterization
The main theorem of AK (which introduced the notion under the name “closure”) is the following.
###### Theorem
$\Phi\colon D\to V$ lies in the saturation of $\mathcal{X}$ if and only if it lies in the closure of the representables under $\mathcal{X}$-weighted colimits in $[D,V]$.
## Examples
The following examples are all for $V=Set$, restricted to the conical case.
• The class of small products is saturated, as is the class of finite products. The latter is the saturation of the finite class containing only terminal objects and binary products.
• The class of L-finite limits is saturated; it is the saturation of the class of finite limits. It is also the saturation of the finite class containing only terminal objects and pullbacks, and the saturation of the class containing only finite products and equalizers.
• The class of connected limits is saturated. It is the saturation of the class consisting of wide pullbacks and equalizers. Similarly, the class of L-finite connected limits is the saturation of the finite class of pullbacks and equalizers. See also pullback and wide pullback for their saturations.
There are also interesting examples for other $V$.
• When $V=Cat$, the classes of PIE-limits and flexible limits are saturated. The former is, essentially by definition, the saturation of the class containing products, inserters, and equifiers. The latter can be proven to be the saturation of the class containing products, inserters, equifiers, and splitting of idempotents.
• When $V=F$ is the category of fully faithful functors, so that a $V$-category is an F-category, the class of $w$-rigged weights is saturated (for any of $w=p$, $l$, or $c$ denoting pseudo, lax, or colax).
• For any $V$, the class of absolute colimits is saturated. When $V=Set$, this is the saturation of the splitting of idempotents.
It is also worth mentioning some non-examples.
• For $V=Set$, the class of finite limits is not saturated; its saturation is the class of L-finite limits.
• For $V=Cat$, the class of strict pseudo-limits is not saturated; it does not even contain the representables. (The same is true for strict lax limits.) It is unclear precisely what its saturation looks like.
## References
• Albert and Kelly, “The closure of a class of limits”, J. Pure. App. Alg. 51 (1988), 1–17
• Max Kelly and Robert Paré, “A note on the Albert-Kelly paper ‘The closure of a class of limits’“, JPAA 51 (1988), 19–25
Revised on February 22, 2012 22:30:48 by Mike Shulman (71.136.234.110)
|
|
# Richard K. Guy and The Unity of Combinatorics —Stephen Kennedy
Stephen and Richard at MAA MathFest
This tribute by Stephen Kennedy (Carleton College), AMS/MAA Press Acquisitions, originally appeared in the most recent issue of MAA Focus and is shared with permission.
The news of Richard Guy’s passing was a blow. Not only because he was a dear friend, but also because I knew that the appearance of his last book, The Unity of Combinatorics, was imminent and that he would never see it. When I first met Richard decades ago I was too much in awe of him to actually talk, we had a nod-and-smile relationship for a long time. That changed about 15 years ago. I was sitting at an airport gate leaving JMM to come home and Richard in his familiar brown tweed jacket with his ever-present Peace is a Disarming Concept lapel button sat down next to me and asked about the math on the pad of paper in my lap. At the time I had just discovered Geometer’s Sketchpad and was using its capability to combine Euclidean geometry and motion to generate undergraduate research problems, questions like: What’s the locus of centroids of all the triangles that share a circumcircle? With Geometer’s Sketchpad you could make a little movie and observe that locus being generated in real time. It was thrilling to watch.
I don’t remember exactly what problem I was struggling with at that airport gate but it was something close to the above and Richard listened thoughtfully and we spent an hour swapping ideas and pictures. It was clear that he knew about a thousand times as much about geometry as I did and also clear that his brain worked at about twice the speed mine did. But my awe melted away in the face of his kindness and modesty. He was genuinely interested in my ideas and in working together on the problem. He also had a razor-sharp wit and after one of his jokes would flash his disarming, but devilish, grin. It was great fun to do mathematics with him. Eventually he started telling me about the lighthouse problem [2]: What is the locus of the point of intersection of two rotating lighthouse beams? The cited paper is a great place to go to understand Richard’s approach to mathematics and to experience his sense of humor. For another quick taste of the latter, check out the MAA Review of The Inquisitive Problem Solver by Richard’s alter ego, Dick Fellow.
When I got home I had an e-mail waiting from Richard with some more ideas about my problem. We continued that e-mail correspondence for a while. He always did me the kindness of pretending that I was knowledgeable about geometry; I think it was enough for him that I clearly loved it. A few years later I was in Calgary visiting Richard to talk about a possible book on combinatorial games. I spent a week with him, every morning we’d go to his office at the University of Calgary. He taught me about Sprague-Grundy theory and we analyzed dozens of games together. Every evening we’d go back to his home and eat one of the dreadful frozen pot pies he favored for dinner, then get back to work. For a time I thought I could understand three-car Dodgerydoo, Richard did me the courtesy of taking seriously the possibility that I did. (Of course, I didn’t. I think he probably suspected as much all along but was too polite to say so.) We never got the book put together. In spite of that, it was one of the best mathematical weeks in my life.
The Unity of Combinatorics is the latest volume in the MAA Carus series and its genesis was a paper by that name that Richard published in 1995. Richard was reacting to the perception that combinatorics was nothing more than a bag of disconnected clever tricks for toy problems. It is clear today that combinatorics is a mature mathematical discipline with deep problems, subtle results, and intriguing connections to other areas of mathematics. Twenty-five years ago that was not clear and combinatorics’s connection to recreational mathematics made it seem slightly disreputable and frivolous. This book was first imagined by Don Albers who encouraged Richard to expand his article and recruited Bud Brown as a co-author. The result reflects both authors’ personalities, their mathematical interests and their beguiling expository skills. It’s a pure pleasure to read; the perfect mixture of Richard’s gentle wit, Bud’s down-home, welcoming enthusiasm, and both authors’ deep knowledge of, and absolute joy in, the combinatorial landscape.
Let me give you a taste. Suppose you want to find a collection of five-element subsets of the eleven-element set $\{ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, X \}$ with the property that each pair of elements occurs together exactly twice. It’s not obvious, at least to me, that such a collection is even possible. A quick count— each of 55 pairs occurring twice is 110 pairs, a five-element set contains ten pairs—will tell you that any such collection will contain 11 sets. But that’s no help in finding it, or even proving it’s possible. It just reassures you that it is not obviously impossible. The Brown-Guy example is given in Table 1, but you’re encouraged to try and construct your own example before peeking.
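Table 1 is not reproduced here, but for readers who want to check their own attempt, here is a small added sketch of one valid example (not necessarily the authors' table, and with the elements written as 0–10 instead of 0–9, X): translating the quadratic residues modulo 11, {1, 3, 4, 5, 9}, gives 11 five-element blocks in which every pair occurs exactly twice.

```python
from itertools import combinations

# Translates of the quadratic residues mod 11 form an (11, 5, 2) design.
base = {1, 3, 4, 5, 9}
blocks = [{(b + t) % 11 for b in base} for t in range(11)]

pair_counts = {}
for block in blocks:
    for pair in combinations(sorted(block), 2):
        pair_counts[pair] = pair_counts.get(pair, 0) + 1

assert len(blocks) == 11                            # 11 blocks of size 5
assert all(c == 2 for c in pair_counts.values())    # each pair appears twice
assert len(pair_counts) == 55                       # and all 55 pairs are hit
print([sorted(b) for b in blocks[:3]], "...")
```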
It is also not obvious why you want to do this. The respectable answer is that it is an example of an $(11,5,2)$ symmetric block design, objects that arose in the design of statistical experiments in agriculture. (The parameters correspond to the bolded numbers in the previous paragraph.) The frivolous answer points to the obvious analogy with Kirkman’s Schoolgirl Problem. You are of course wondering which values of $(v, k, \lambda)$ actually correspond to achievable symmetric designs. You should read Chapter 7. I’m more interested in following up on $(11,5,2)$ right now. Brown and Guy call this gadget a biplane. It is worthwhile to understand why.
Suppose that instead of requiring each pair of elements to occur twice we will be satisfied with a single appearance. As noted above there are 55 pairs and each five-tuple contains ten, so $(11,5,1)$ fails the obvious divisibility test and no such object exists. But, to take one example, $(91, 10, 1)$ does not fail and, so, is not obviously impossible. (Those numbers are a clue to what’s happening, but you might not recognize that.) If we think of the elements as points, we are looking for ten-point subsets such that each pair of points is in exactly one subset. Replace “subset” by “line” and you recognize the description of a finite projective plane of order nine. Thus, the $(11, 5, 2)$ biplane. More saliently, perhaps you begin to see Brown-Guy’s “Unity.”
Brown in [1] asked himself how he might draw a useful picture of the $(11, 5, 2)$ biplane. He wanted the picture to reflect some of the symmetries of the design. For example, note that the product of the two five-cycles $(1, 3, 9, 5, 4)(2, 6, 7, X, 8)$ is a permutation of our original set of order five. Note that it preserves the block structure, e.g., 2456X goes to 61478. This corresponds to the five-fold rotational symmetry in the figure. In fact, as Brown and Guy show, the symmetry group of the biplane actually has order 660 and can be shown to be $PSL(2, 11)$. Many of these symmetries can be seen directly in Figure 1.
One final unification observation. It’s interesting to notice that $2^{11}=1+\binom{23}{1}+\binom{23}{2}+\binom{23}{3}$. It is known that this equality is exactly what is required for the existence of a perfect three-error-correcting binary code: the codewords have length 23 over the binary alphabet, and the code has dimension 12, so there are $2^{12}$ codewords. Similarly, the fact that $1+2\cdot\binom{11}{1}+2^2\cdot \binom{11}{2}=3^5$ means that there exists a perfect two-error-correcting ternary code; in this case the codewords have length 11 over a three-letter alphabet and the code has dimension six. Each of these codes can be realized as the row space of a particular matrix. Suppose one were to construct the $11\times 11$ incidence matrix for the $(11,5,2)$ biplane by putting a 1 in the $(i,j)$ entry if element $i$ is in subset $j$ of the biplane and a 0 if not. (NB: The subsets are indexed by their first listed element in Table 1.) This incidence matrix lives inside the code matrix as a submatrix in the case of each of those codes. You are invited to explore why.
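Both counting identities are easy to confirm numerically; a tiny added check:

```python
from math import comb

# Sphere-packing counts behind the two perfect codes mentioned above.
assert 1 + comb(23, 1) + comb(23, 2) + comb(23, 3) == 2**11   # binary, 3-error-correcting
assert 1 + 2 * comb(11, 1) + 2**2 * comb(11, 2) == 3**5       # ternary, 2-error-correcting
print("both identities hold")
```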
All of the above was taken from just one chapter of Brown-Guy, and we have already run into statistics, group theory, linear algebra, coding theory, recreational mathematics, and projective geometry. Perhaps The Ubiquity of Combinatorics would have been a better title. Whatever we call it, it is full of wonders and breathes with Richard’s spirit. It is a fitting memorial to a mathematical giant whom we were lucky to have for 103 years.
******************
[1] Ezra Brown, "The Fabulous (11, 5, 2) Biplane," Math. Mag. 77:2 (2004), 87–100. DOI: 10.1080/0025570x.2004.11953234.
[2] Richard K. Guy, "The Lighthouse Theorem, Morley & Malfatti—A Budget of Paradoxes," Amer. Math. Monthly 114:2 (2007), 97–141. DOI: 10.1080/00029890.2007.11920398.
|
|
### Session A1: Plenary Prize Session
8:00 AM–9:48 AM, Wednesday, June 6, 2007
TELUS Convention Centre Room: Macleod BCD
Chair: T. Gay, University of Nebraska-Lincoln
Abstract ID: BAPS.2007.DAMOP.A1.1
### Abstract: A1.00001 : Rabi Prize Talk: The Art of Light-based Precision Measurement
8:00 AM–8:36 AM
#### Author:
Jun Ye
(JILA and Physics Department, National Institute of Standards and Technology and University of Colorado)
Improvements in spectroscopic resolution have been the driving force behind many scientific and technological breakthroughs over the past century, including the invention of the laser and the realization of ultracold atoms. Maintaining optical phase coherence is one of the two major ingredients (the other being the control of matter) for this scientific adventure. Lasers with state-of-the-art control can now maintain phase coherence over one second, that is, 10$^{15}$ optical waves pass by without losing track of a particular cycle. Translating into distance, such a coherent light wave can traverse the circumference of the Earth 10 times and still interfere with the original light. The recent development of optical frequency combs has allowed this unprecedented optical phase coherence to be established across the entire visible and infrared parts of the electromagnetic spectrum, leading to direct visualization and measurement of light ripples. Working with ultracold atoms prepared in single quantum states, optical spectroscopy and frequency metrology at the highest level of precision and resolution are being accomplished. A new generation of atomic clocks using light has been developed, with anticipated measurement precision reaching 1 part in 10$^{18}$. The parallel developments in the time domain have resulted in precise control of the pulse waveform in the sub-femtosecond regime, leading to demonstrations of coherent synthesis of optical pulses and generation of coherent frequency combs in the VUV spectral region. This unified time- and frequency-domain spectroscopic approach allows high-resolution coherent control of quantum dynamics and high-precision measurement of matter structure across a broad spectral width. These developments will have an impact on a wide range of scientific problems such as the possible time-variation of fundamental constants and gravitational wave detection, as well as on a variety of technological applications.
To cite this abstract, use the following reference: http://meetings.aps.org/link/BAPS.2007.DAMOP.A1.1
|
|
# EVA
## Abstract
We launch EVA, a vision-centric foundation model to explore the limits of visual representation at scale using only publicly accessible data. EVA is a vanilla ViT pre-trained to reconstruct the masked-out, image-text-aligned vision features conditioned on visible image patches. Via this pretext task, we can efficiently scale up EVA to one billion parameters, and it sets new records on a broad range of representative vision downstream tasks, such as image recognition, video action recognition, object detection, instance segmentation and semantic segmentation, without heavy supervised training. Moreover, we observe that quantitative changes in scaling EVA result in qualitative changes in transfer learning performance that are not present in other models. For instance, EVA takes a great leap in the challenging large-vocabulary instance segmentation task: our model achieves almost the same state-of-the-art performance on the LVISv1.0 dataset, with over a thousand categories, as on the COCO dataset, with only eighty categories. Beyond a pure vision encoder, EVA can also serve as a vision-centric, multi-modal pivot to connect images and text. We find that initializing the vision tower of a giant CLIP from EVA can greatly stabilize the training and outperform the training-from-scratch counterpart with far fewer samples and less compute, providing a new direction for scaling up and accelerating the costly training of multi-modal foundation models.
## Results and models
### merged-30M
The pre-trained models on merged-30M are used to fine-tune, and therefore don’t have evaluation results.
| Model | patch size | resolution | Download |
| :--- | :---: | :---: | :---: |
| EVA-G (eva-g-p14_3rdparty_30m)* | 14 | 224x224 | model |
| EVA-G (eva-g-p16_3rdparty_30m)* | 14 to 16 | 224x224 | model |
Models with * are converted from the official repo.
### ImageNet-21k
The pre-trained models on ImageNet-21k are used to fine-tune, and therefore don’t have evaluation results.
| Model | Pretrain | resolution | Download |
| :--- | :--- | :---: | :---: |
| EVA-G (eva-g-p14_30m-pre_3rdparty_in21k)* | merged-30M | 224x224 | model |
| EVA-L (eva-l-p14_3rdparty-mim_in21k)* | From scratch with MIM | 224x224 | model |
| EVA-L (eva-l-p14_mim-pre_3rdparty_in21k)* | MIM | 224x224 | model |
Models with * are converted from the official repo.
### ImageNet-1k
| Model | Pretrain | resolution | Params(M) | Flops(G) | Top-1 (%) | Top-5 (%) | Config | Download |
| :--- | :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| EVA-G (eva-g-p14_30m-in21k-pre_3rdparty_in1k-336px)* | merged-30M & ImageNet-21k | 336x336 | 1013.01 | 620.64 | 89.61 | 98.93 | config | model |
| EVA-G (eva-g-p14_30m-in21k-pre_3rdparty_in1k-560px)* | merged-30M & ImageNet-21k | 560x560 | 1014.45 | 1906.76 | 89.71 | 98.96 | config | model |
| EVA-L (eva-l-p14_mim-pre_3rdparty_in1k-336px)* | MIM | 336x336 | 304.53 | 191.10 | 88.66 | 98.75 | config | model |
| EVA-L (eva-l-p14_mim-in21k-pre_3rdparty_in1k-336px)* | MIM & ImageNet-21k | 336x336 | 304.53 | 191.10 | 89.17 | 98.86 | config | model |
| EVA-L (eva-l-p14_mim-pre_3rdparty_in1k-196px)* | MIM | 196x196 | 304.14 | 61.57 | 87.94 | 98.50 | config | model |
| EVA-L (eva-l-p14_mim-in21k-pre_3rdparty_in1k-196px)* | MIM & ImageNet-21k | 196x196 | 304.14 | 61.57 | 88.58 | 98.65 | config | model |
Models with * are converted from the official repo. The config files of these models are only for inference.
## Citation
@article{EVA,
title={EVA: Exploring the Limits of Masked Visual Representation Learning at Scale},
author={Fang, Yuxin and Wang, Wen and Xie, Binhui and Sun, Quan and Wu, Ledell and Wang, Xinggang and Huang, Tiejun and Wang, Xinlong and Cao, Yue},
journal={arXiv preprint arXiv:2211.07636},
year={2022}
}
|
|
# How to output all sequences with bwa mem, not *?
I've been running bwa mem for alignment with the -a flag; according to the documentation, this will
output all alignments for SE or unpaired PE
I've noticed in the SAM that there are several alignments with * in the SEQ and QUAL fields. Based on the documentation:
1. SEQ: segment SEQuence. This field can be a ‘*’ when the sequence is not stored. If not a ‘*’, the length of the sequence must equal the sum of lengths of M/I/S/=/X operations in CIGAR. An ‘=’ denotes the base is identical to the reference base. No assumptions can be made on the letter cases.
2. QUAL: ASCII of base QUALity plus 33 (same as the quality string in the Sanger FASTQ format). A base quality is the phred-scaled base error probability which equals −10 log10 Pr{base is wrong}. This field can be a ‘*’ when quality is not stored. If not a ‘*’, SEQ must not be a ‘*’ and the length of the quality string ought to equal the length of SEQ.
it appears the sequence isn't stored.
I would strongly prefer to have the sequence in this case. Is there any way to direct bwa to output these sequences?
• Are you aware that many of these records are secondary/supplementary alignments? Does your workfow/model require/benefit from these alignments and handle them correctly? Dec 5 '18 at 15:23
• @DanielStandage "Are you aware that many of these records are secondary/supplementary alignments?" Yes. " Does your workfow/model require/benefit from these alignments and handle them correctly?". Benefit, yes. Dec 5 '18 at 15:59
• @Pierre This is a great tool, Pierre! I think it runs successfully for me. The problem is, I get errors downstream when using samtools view, i.e. [E::sam_parse1] CIGAR and query sequence are of different length. Are there workarounds? Dec 5 '18 at 19:13
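No bwa option for this is quoted above, and the comments point to an external tool instead. As an added rough sketch (illustrative only — it assumes the BAM fits a per-read-name dictionary in memory, and it leaves supplementary records, whose hard-clipped sequences legitimately differ in length from their CIGAR, untouched), the missing SEQ/QUAL on secondary alignments can be copied from each read's primary alignment with pysam:

```python
import pysam

def revcomp(seq):
    # Reverse-complement for when the secondary hit is on the opposite strand.
    return seq.translate(str.maketrans("ACGTNacgtn", "TGCANtgcan"))[::-1]

# Pass 1: remember SEQ/QUAL of every primary alignment, keyed by read name/mate.
primary = {}
with pysam.AlignmentFile("aln.bam", "rb") as bam:
    for read in bam:
        if not read.is_secondary and not read.is_supplementary and read.query_sequence:
            primary[(read.query_name, read.is_read1)] = (
                read.query_sequence, read.query_qualities, read.is_reverse)

# Pass 2: fill in '*' SEQ/QUAL on secondary alignments and write a new BAM.
with pysam.AlignmentFile("aln.bam", "rb") as bam, \
     pysam.AlignmentFile("aln.filled.bam", "wb", template=bam) as out:
    for read in bam:
        if read.is_secondary and read.query_sequence is None:
            key = (read.query_name, read.is_read1)
            if key in primary:
                seq, qual, was_reverse = primary[key]
                if read.is_reverse != was_reverse:       # opposite strand
                    seq = revcomp(seq)
                    qual = qual[::-1] if qual is not None else None
                read.query_sequence = seq                # set SEQ first ...
                read.query_qualities = qual              # ... then QUAL (pysam resets it)
        out.write(read)
```

After filling, it is worth re-running the samtools command that failed; any remaining length mismatches point to records this sketch deliberately skipped.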
|
|
#### Vol. 293, No. 2, 2018
Coaction functors, II
### S. Kaliszewski, Magnus B. Landstad and John Quigg
Vol. 293 (2018), No. 2, 301–339
DOI: 10.2140/pjm.2018.293.301
##### Abstract
In their study of the application of crossed-product functors to the Baum–Connes conjecture, Buss, Echterhoff, and Willett introduced various properties that crossed-product functors may have. Here we introduce and study analogues of some of these properties for coaction functors, making sure that the properties are preserved when the coaction functors are composed with the full crossed product to make a crossed-product functor. The new properties for coaction functors studied here are functoriality for generalized homomorphisms and the correspondence property. We also study the connections with the ideal property. The study of functoriality for generalized homomorphisms requires a detailed development of the Fischer construction of maximalization of coactions with regard to possibly degenerate homomorphisms into multiplier algebras. We verify that all “KLQ” functors arising from large ideals of the Fourier–Stieltjes algebra $B\left(G\right)$ have all the properties we study, and at the opposite extreme we give an example of a coaction functor having none of the properties.
##### Keywords
crossed product, action, coaction, Fourier–Stieltjes algebra, exact sequence, Morita compatible
Primary: 46L55
Secondary: 46M15
##### Milestones
Revised: 1 September 2017
Accepted: 4 September 2017
Published: 23 November 2017
##### Authors
S. Kaliszewski School of Mathematical and Statistical Sciences Arizona State University Tempe, AZ 85287-1804 United States Magnus B. Landstad Department of Mathematical Sciences Norwegian University of Science and Technology 7491 Trondheim Norway John Quigg School of Mathematical and Statistical Sciences Arizona State University Tempe, AZ 85287-1804 United States
|
|
# Human Factors Guide For Aviation Maintenance And
PDF Untangling the Contribution of the Subcomponents of
Palpation - Superficial: (just tryck försiktigt mot magen, alla 4 Quadrants) - UPDATE: This app is not a copy of any of the training centres material or their manuals. Although is does follow the format for the 5-year renewal. Please use this 16 okt. 2020 — Wohler VIS 350 Drain inspection camera.
- domlig , adj . - släng , m . after syn , f . inspection , oversight ; usurped . - mäktighet , f .
Planning and preparation. Common Core: High School - Algebra: Solve Quadratic Equations by Inspection, Quadratic Formula, Factoring, Completing the Square, and Taking Square Roots (CCSS.Math.Content.HSA-REI.B.4b); study concepts, example questions & explanations for Common Core: High School - Algebra. 2020-06-17 The Administrative Authority may make use of the Inspection Body, but need not do so. The Commission can only make use of the inspection procedure in specific infringement cases which relate to compliance with judgements of the European Court of Justice.
OBJECTIVES At the end of this unit, you should be able to: i. define the terms inspection and supervision; ii. distinguish between supervision and inspection; iii. give a brief history of educational supervision in Nigeria; iv. An inspection is, most generally, an organized examination or formal evaluation exercise. In engineering activities inspection involves the measurements, tests, and gauges applied to certain characteristics in regard to an object or activity. During this inspection, the inspector evaluated learning and teaching in Mathematics under the following headings: 1.
Information on the Standards and evaluation framework. Inspection and review - sector-specific guidance. The purpose and aims of the inspection process in evaluating the quality of learning and teaching in Scottish schools and education services. Our goal is to deliver your Certified Professional Inspection Report quickly and make it easy to understand. Our commitment to you does not end at the last page of the report - We stay with you through the process offering free walk thru inspections to confirm proper repairs. Babington published in 1635 a folio volume, entitled Pyrotechnia, or a Discourse of Artificiall Fireworks, to which was added a "Short Treatise of Geometrie .
Larm elektronik oskarshamn
Rector begärte , at hwar inspector nationis wille uptekna öf : r dem dhe haa inspection , effter Dn . Proc . begär catalogum öf : r alla Math . och Georg . 9 jan.
The purpose and aims of the inspection process in evaluating the quality of learning and teaching in Scottish schools and education services. Our goal is to deliver your Certified Professional Inspection Report quickly and make it easy to understand.
1 a : the act of inspecting. b : recognition of a familiar pattern leading to immediate solution of a mathematical problem solve an equation by inspection.
Factorising by inspection is super-quick once you get the hang of it, but here both 6 and 12 have multiple factors, so this one might take a bit longer than others. "By inspection" is also a rhetorical shortcut made by authors who invite the reader to verify, at a glance, the correctness of a proposed expression or deduction: if an expression can be evaluated by straightforward application of simple techniques and without recourse to extended calculation or general theory, then it can be evaluated by inspection. For example: $$4\pi r^2 + \frac{4}{3}\pi r^3 = \frac{16}{3}\pi m^3.$$ This is all I got: $$4 r^2 + \frac{4}{3}r^3 = \frac{16}{3}m^3.$$ How can the equation be simplified and solved "by inspection"?
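A possible reading (added here as a worked sketch, on the assumption that the $m^3$ on the right-hand side is the unit cubic metres rather than an unknown, so that $r$ is a number of metres):

$$4\pi r^2 + \frac{4}{3}\pi r^3 = \frac{16}{3}\pi \;\Longrightarrow\; 3r^2 + r^3 = 4 \;\Longrightarrow\; (r-1)(r+2)^2 = 0,$$

so $r = 1$ metre by inspection (check: $3\cdot 1^2 + 1^3 = 4$), and the factorisation shows it is the only positive solution.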
|
|
# FIR filters in C
I wrote 2 filters in C for the Altera DE2 Nios II FPGA, one floating-point and one fixed-point. I've verified that they perform correctly and now I wonder if you can give examples for improvement or optimization? I'll reduce the C library to a small library and turn on optimization, and perhaps you can suggest how to do other improvements?
The floating-point program:
#include <stdio.h>
#include "system.h"
#include "alt_types.h"
#include <time.h>
#include <sys/alt_timestamp.h>
#include <sys/alt_cache.h>
/* Timing globals used by the measurement helpers */
alt_u32 ticks;
alt_u32 time_1;
alt_u32 time_2;
alt_u32 timer_overhead; /* estimated start/stop overhead, set in main() */
float microseconds(int ticks)
{
return (float) 1000000 * (float) ticks / (float) alt_timestamp_freq();
}
void start_measurement()
{
/* Flush caches */
alt_dcache_flush_all();
alt_icache_flush_all();
/* Measure */
alt_timestamp_start();
time_1 = alt_timestamp();
}
void stop_measurement()
{
time_2 = alt_timestamp();
ticks = time_2 - time_1;
}
float floatFIR(float inVal, float* x, float* coef, int len)
{
float y = 0.0;
int i;
start_measurement();
for (i = (len-1) ; i > 0 ; i--)
{
x[i] = x[i-1];
y = y + (coef[i] * x[i]);
}
x[0] = inVal;
y = y + (coef[0] * x[0]);
stop_measurement();
printf("%5.2f us", (float) microseconds(ticks - timer_overhead));
printf("(%d ticks)\n", (int) (ticks - timer_overhead));
printf("Sum: %f\n", y);
return y;
}
int main(int argc, char** argv)
{
/* Estimate the timer overhead as the average of 10 empty measurements */
int i;
alt_u32 overhead_sum = 0;
for (i = 0; i < 10; i++) {
start_measurement();
stop_measurement();
overhead_sum += ticks;
}
timer_overhead = overhead_sum / 10;
float coef[4] = {0.0299, 0.4701, 0.4701, 0.0299};
float x[4] = {0, 0, 0, 0}; /* or any other initial condition*/
float y;
float inVal;
while (scanf("%f", &inVal) > 0)
{
y = floatFIR(inVal, x, coef, 4);
}
return 0;
}
The fixed-point program:
#include <stdio.h>
#include "system.h"
#include "alt_types.h"
#include <time.h>
#include <sys/alt_timestamp.h>
#include <sys/alt_cache.h>
#define TIME 1
signed char input[4]; /* The 4 most recent input values */
char get_q7( void );
void put_q7( char );
void firFixed(signed char input[4]);
const int c0 = (0.0299 * 128 + 0.5); /* Convert from float to Q7 by multiplying by 2^7 = 128, rounding to the nearest integer by adding 0.5 before the fractional part is truncated in the conversion to int. */
const int c1 = (0.4701 * 128 + 0.5);
const int c2 = (0.4701 * 128 + 0.5);
const int c3 = (0.0299 * 128 + 0.5);
const int half = (0.5000 * 128 + 0.5);
enum { Q7_BITS = 7 };
alt_u32 ticks;
alt_u32 time_1;
alt_u32 time_2;
alt_u32 timer_overhead; /* estimated start/stop overhead, set in main() */
float microseconds(int ticks)
{
return (float) 1000000 * (float) ticks / (float) alt_timestamp_freq();
}
void start_measurement()
{
/* Flush caches */
alt_dcache_flush_all();
alt_icache_flush_all();
/* Measure */
alt_timestamp_start();
time_1 = alt_timestamp();
}
void stop_measurement()
{
time_2 = alt_timestamp();
ticks = time_2 - time_1;
}
void firFixed(signed char input[4])
{
int sum = c0*input[0] + c1*input[1] + c2*input[2] + c3*input[3];
signed char output = (signed char)((sum + half) >> Q7_BITS);
stop_measurement();
if (TIME)
{
printf("(%d ticks)\n", (int) (ticks - timer_overhead));
}
put_q7(output);
}
int main(void)
{
printf("c0 = c3 = %3d = 0x%.2X\n", c0, c0);
printf("c1 = c2 = %3d = 0x%.2X\n", c1, c1);
if (TIME)
{
/* Estimate the timer overhead as the average of 10 empty measurements */
int i;
alt_u32 overhead_sum = 0;
for (i = 0; i < 10; i++) {
start_measurement();
stop_measurement();
overhead_sum += ticks;
}
timer_overhead = overhead_sum / 10;
}
int a;
while(1)
{
if (TIME)
{
start_measurement();
}
for (a = 3 ; a > 0 ; a--)
{
input[a] = input[a-1];
}
input[0]=get_q7();
firFixed(input);
}
return 0;
}
• Try using doubles. They might be faster on your platform. – Peter G. Oct 9 '13 at 13:48
A good first approach is always to look for a library call that already does what you need and that was optimized for your platform. For a FIR filter, that might e.g. be cblas_sdot in the BLAS library.
For a hand written approach, the key issues are picking the right data types (as discussed by @WilliamMorris) and exploiting the parallelism of the target platform. Since you’re targeting an FPGA, you even get to pick the level of parallelism. On the other hand, FPGAs are not necessarily great with arbitrary loops, so I would take a very close look at whether you can get away with using a constant number of coefficients.
Once you’ve decided on an appropriate level of parallelism, break up the data dependency in your loop. Right now, every iteration needs to wait for the previous iteration to complete. If you, e.g., want to have 4-way parallelism, something like this might work (assuming the # of coefficients is divisible by 4):
float y0=0.0f, y1=0.0f, y2=0.0f, y3=0.0f;
memmove(&x[1], &x[0], (len-1)*sizeof(x[0])); /* memmove (string.h) handles the overlapping copy */
x[0] = inVal;
for (int i=len; i>0; i-=4) {
y0 += x[i-4]*coeff[i-4];
y1 += x[i-3]*coeff[i-3];
y2 += x[i-2]*coeff[i-2];
y3 += x[i-1]*coeff[i-1];
}
y0 += y2;
y1 += y3;
return y0+y1;
With the float version, I would make the coef parameter const and add restrict to both parameters. But that is unlikely to make much difference to speed.
For the integer version, I would make the coefficients bigger than 8 bit. You lose a lot of the accuracy of the coefficients by reducing them to 8 bits and as you are accumulating using int, that seems unnecessary. This will improve the characteristics of the filter although the performance will depend upon the CPU - on a desktop type processor using int rather than char is likely to be faster, but for your processor that might not be true.
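To put a number on that accuracy loss, here is a small illustrative sketch (in Python rather than C, independent of the Nios II code above) comparing the tap error at Q7 and at a wider Q15 format:

```python
# Quantization error of the filter taps at two fixed-point resolutions.
taps = [0.0299, 0.4701, 0.4701, 0.0299]

for bits in (7, 15):                      # Q7 vs Q15
    scale = 1 << bits
    q = [round(t * scale) for t in taps]  # nearest-integer quantization
    err = [t - qi / scale for t, qi in zip(taps, q)]
    print(f"Q{bits}: ints={q}, max error={max(abs(e) for e in err):.6f}")
```

At Q7 the largest tap error is on the order of 1e-3, while Q15 brings it down by roughly two orders of magnitude, which is why widening the coefficients improves the filter's frequency response at little cost when the accumulator is already an int.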
|
|
# How can I shorten the runtime of my simulation?
Before you read the code below, note the following explanation: I have three classes: Driver, Vehicle, and Itinerary.
They have attributes Driver.behavior, Vehicle.pRange, Vehicle.pBattery, Itinerary.destinations, and Itinerary.startTime.
The function findCand takes all these class attributes as input and returns as an output a dataframe with three columns.
So here is the code that I use to run my simulation:
import pandas as pd

n = 1000
m = 1000
agentA = []
iterationA = []
itineraryA = [None]*n
behaviorA = [None]*n
rangeA = [None]*n
batteryA = [None]*n
startTimeA = [None]*n
df = pd.DataFrame(columns = ['Loc', 'Amount', 'Time'])
for j in range(m):
    for i in range(n):
        x = Itinerary()
        y = Vehicle()
        z = Driver()
        itineraryA[i] = x.destinations
        behaviorA[i] = z.behavior
        rangeA[i] = y.pRange
        batteryA[i] = y.pBattery
        startTimeA[i] = x.startTime
        dfi = findCand(itineraryA[i], behaviorA[i], rangeA[i], batteryA[i], startTimeA[i])
        df = df.append(dfi)
        agentA = agentA + [i]*len(dfi)
        iterationA = iterationA + [j]*len(dfi)
df['Ag'] = agentA
df['It'] = iterationA
I run it n times, m iterations. For each dataframe I get from findCand, I append it to the dataframe df.
The code works, but it's taking a crazy amount of time to run.
If n=10 and m=10, it takes about a second.
If n=100 and m=100, it takes around 97 seconds.
I put n=1000 and m=1000 and it was running for more than 3 hours before I stopped it.
I need to do this for a way higher value of both n and m. I realize that it takes a lot of time to append a dataframe so often but I've tried a few other methods
• I used a dictionary and appended larger dataframes fewer times.
• I used lists instead of dataframes and then made one large dataframe in the end.
But these methods took just as long or even longer than the one above. So my question is, can anyone suggest areas that I can improve that might shorten the runtime?
• Use a profiler to run your program and see where it is spending the most time. for n = m = 1000, the loop runs a million times. It looks like each time through the loop an average of 500 items are appended to agentA and iterationA. I think they end up being 500 million elements long. – RootTwo Aug 23 '20 at 23:10
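Following up on the profiling suggestion, a minimal sketch (assuming the loop above has been wrapped in a function run_sim(n, m), which is not part of the original code):

```python
import cProfile
import pstats

# Profile a modest run (n = m = 100) before attempting n = m = 1000.
cProfile.run("run_sim(n=100, m=100)", "sim.prof")
pstats.Stats("sim.prof").sort_stats("cumulative").print_stats(15)  # 15 hottest calls
```

If DataFrame.append dominates the profile, collecting the per-agent frames in a plain list and calling pd.concat once after the loops is the usual fix; if findCand or the object constructors dominate, that is where the time has to be won back.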
|