## Algebra 1
Published by Prentice Hall
# Chapter 4 - An Introduction to Functions - 4-4 Graphing a Function Rule - Lesson Check: 4
graph
#### Work Step by Step
The function is $x^{2}$, so the graph will be a parabola. The $+2$ gives the $y$-intercept, so the graph is translated up by 2.
|
# Dividing a Power of a Prime
Suppose that the positive integer $a$ divides $p^n$, where $n$ is a positive integer and $p$ is a prime. I want to conclude that $a = p^m$ for some $m \le n$, but I am having trouble. I would appreciate some hints.
• Do you know the Fundamental Theorem of Arithmetic and are you allowed to use it? – David K Nov 1 '16 at 18:26
• @DavidK Sure. I am allowed to use that. – user193319 Nov 1 '16 at 18:30
• Some of the answers use the theorem. – David K Nov 1 '16 at 19:39
Hint: If a prime $q$ divides $a$, then $q$ divides $p^n$ and so $q$ divides $p$.
Therefore, $a$ is a power of $p$ and so $a=p^m$ with $m \le n$.
• Okay, let me see if I follow. Let $p_1^{\alpha_1} \dots p_k^{\alpha_k} = a$ be the prime factorization of $a$. Since $p_i | a$ for all $i$, then $p_i |p^n$ and therefore $p_i|p$ which implies $p_i = p$. Hence, $a = p^{\alpha_1} \dots p^{\alpha_k} = p^{\alpha_1 + \dots + \alpha_k}$. Define $m := \alpha_1 + \dots + \alpha_k$. If $m > n$, then $a$ couldn't divide $p^n$, so $m \le n$. – user193319 Nov 1 '16 at 18:59
• However, I am having a little trouble with your claim, which if I am not mistaken is equivalent to: if a prime $q$ divides $p^n$, then $q=p$. Here is my attempt at a proof: Suppose that $q|p^n$ yet $q \neq p$. Then $q = kp+r$, where $r \in [0,p)$. Then $q|p^n$ says $p^n = \ell q$, or $p^n = \ell(kp + r)$, or $p(p^{n-1}-\ell k) = r$, which says that $p$ divides $r$, which is a contradiction... Does this seem right? – user193319 Nov 1 '16 at 19:02
• @user193319, I had in mind this property: If $q$ is a prime and $q$ divides $ab$, then $q$ divides $a$ or $b$. – lhf Nov 1 '16 at 19:17
• Oh, yes. I see. Does my proof work, though? – user193319 Nov 3 '16 at 0:13

Hint: Factor $a$ into distinct primes. Which of those primes can divide $p^n$? How many distinct primes are there in the factorization of $a$, really?

Hint: $p^n$ is the unique prime factorization of the number $k=p^n$. What form must the divisors of $k$ have then?

If $p,q$ are distinct primes and $n$ is a non-negative integer then $q\not\mid p^n.$ Proof: Obvious for $n=0.$ If false in general, let $n_0$ be the least $n$ such that $q\mid p^n.$ Then $q\mid (p)(p^{n_0-1})$ with $n_0\geq 1$ (so $p^{n_0-1}$ is an integer) and $\gcd(p,q)=1.$ So by the Fundamental Theorem of Arithmetic we have $q\mid p^{n_0-1},$ contradicting the minimality of $n_0.$ Now if $a\mid p^n$ and $q$ is any prime divisor of $a,$ then....?
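The divisor claim is easy to sanity-check numerically; a quick Python sketch (not part of the original thread, with arbitrary illustrative values of $p$ and $n$):

```python
# Brute-force sanity check (illustration only): every positive divisor
# of p^n is a power of p, here with p = 3 and n = 4.
def divisors(k):
    """Return all positive divisors of k."""
    return [d for d in range(1, k + 1) if k % d == 0]

p, n = 3, 4
divs = divisors(p ** n)
print(divs)  # [1, 3, 9, 27, 81] == [3^m for m = 0..4]
assert divs == [p ** m for m in range(n + 1)]
```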
|
## Saturday, April 02, 2016
### The gains from sound macro and finance policy
by Ajay Shah.
Israel went through a complete transformation of macro and finance policy. They started out pretty badly, and put in all the key machinery: inflation targeting, a floating exchange rate, an open capital account, modern financial regulation, public debt management, etc.
A graph of the nominal yield curve for government borrowing is quite revealing [source]. It superposes the yield curve prevalent at many dates:
The curve at the top is the yield curve in October 1996: it goes out to only 3 years, and features nominal rates of 16 to 17%.
Through the years, as the macro and finance reforms fell into place, nominal interest rates for borrowing went down, and the maturity went up. By January 2012, inflation had stabilised at the target of 2%, the short rate was 1.5%, and the 30 year rate was 5.51%.
Note that the 2012 situation is without financial repression and without capital controls. Private persons voluntarily choose to lend to the government, for a 30 year horizon, at 5.51%. There are no other distortions in the picture. This is the `fair and square' cost of borrowing for the government.
This shows the direct gains to the fiscal authority from adopting the orthodox approach to macro and finance policy. Similar large gains became available to the private sector, as corporate bond and bank loan rates are expressed as credit spreads over the government bond interest rates. We in India will get these gains by enacting and enforcing the Indian Financial Code.
|
# MEDIAN function in Oracle
MEDIAN is one of the vital numeric/math functions of Oracle. It returns the median value of an expression. The MEDIAN function is supported in various versions of Oracle/PLSQL, including Oracle 12c, Oracle 11g, Oracle 10g, Oracle 9i, and Oracle 8i.
Syntax:
MEDIAN( expression )
Parameters:
expression: The expression whose median is to be calculated.
Example:
select MEDIAN(marks) from students where section = 'Biology';
Explanation:
The result will be the median marks for all students in the Biology section.
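For readers who want to check such a result outside the database, the same aggregate can be reproduced with Python's standard library (the marks below are invented sample data, not from any real students table):

```python
# Median of four values: with an even count, MEDIAN averages the two
# middle values, just like statistics.median does here.
from statistics import median

biology_marks = [62, 71, 85, 90]
print(median(biology_marks))  # 78.0, the mean of the two middle values
```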
|
# Seminar Thursday Aug 07 – Jason Ho
Speaker: Jason Ho
Location: Physics 175
Time: 3:30pm, August 07 2014
Title: QCD sum-rules analysis of open-charm hybrid mesons
Abstract: We briefly discuss the development of quantum chromodynamics (QCD) and introduce the concept of hadronic structures outside the quark model. Within this group of unconventional hadrons, of interest to us are hybrid mesons consisting of a quark-antiquark pair and a constituent gluon.
Through the use of QCD sum-rules, we intend to calculate the ground state mass of the $D$ (charm quantum number $c = \pm 1$) and $D_s$ (charm/strange quantum numbers $c = s = \pm 1$) hybrid mesons with exotic $J^{PC} = 1^{-+}$ (spin $J$, parity $P$, and charge conjugation $C$), and present preliminary results.
|
# Partial least squares regression
Partial least squares regression (PLS) is a linear regression method which uses principles similar to PCA: data is decomposed using latent variables. Because in this case we have two datasets, a matrix with predictors ($$\mathbf{X}$$) and a matrix with responses ($$\mathbf{Y}$$), we do the decomposition for both, computing scores, loadings and residuals: $$\mathbf{X} = \mathbf{TP}^\mathrm{T} + \mathbf{E}_x$$, $$\mathbf{Y} = \mathbf{UQ}^\mathrm{T} + \mathbf{E}_y$$. In addition, the orientation of the latent variables in PLS is selected to maximize the covariance between the X-scores, $$\mathbf{T}$$, and the Y-scores, $$\mathbf{U}$$. This approach makes it possible to work with datasets where traditional multiple linear regression fails: when the number of variables exceeds the number of observations, and when the X-variables are mutually correlated. But, in the end, a PLS model is a linear model, where the response value is just a linear combination of the predictors, so the main outcome is a vector of regression coefficients.
There are two main algorithms for PLS, NIPALS and SIMPLS; in mdatools only the latter is implemented. PLS model and PLS result objects have many properties and performance statistics, which can be visualized via plots. Besides that, it is also possible to compute the selectivity ratio (SR) and VIP scores, which can be used for selecting the most important variables. Another option is a randomization test, which helps to select the optimal number of components. We will discuss most of the methods in this chapter, and you can get the full list using ?pls.
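To make the decomposition concrete, here is a minimal one-latent-variable PLS1 sketch in pure Python: with a single LV, NIPALS reduces to taking the weight vector proportional to $$\mathbf{X}^\mathrm{T}\mathbf{y}$$. This is only an illustration of the idea described above, not the mdatools (R) implementation:

```python
# Minimal single-LV PLS1: w ~ X'y, scores t = Xw, y-loading q = y't/t't,
# and the final regression coefficients b = w*q (since t = Xw).
def pls1_one_component(X, y):
    """Return regression coefficients of a one-LV PLS1 model.

    X is a list of mean-centered rows; y is the mean-centered response.
    """
    n, nvar = len(X), len(X[0])
    # Weight vector proportional to X'y, normalized to unit length.
    w = [sum(X[i][j] * y[i] for i in range(n)) for j in range(nvar)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    # X-scores t = Xw.
    t = [sum(X[i][j] * w[j] for j in range(nvar)) for i in range(n)]
    tt = sum(v * v for v in t)
    # y-loading q = y't / t't; coefficients b = w q.
    q = sum(y[i] * t[i] for i in range(n)) / tt
    return [wj * q for wj in w]

# Rank-one, mean-centered toy data: two perfectly collinear predictors,
# exactly the case where ordinary multiple linear regression fails.
X = [[1.0, 0.5], [2.0, 1.0], [3.0, 1.5], [-6.0, -3.0]]
y = [2.0, 4.0, 6.0, -12.0]
b = pls1_one_component(X, y)
y_hat = [sum(xi * bi for xi, bi in zip(row, b)) for row in X]
print(b)      # [1.6, 0.8] up to rounding
print(y_hat)  # reproduces y exactly for this rank-one example
```

Note that the two columns of X here are perfectly correlated, so the usual least squares normal equations are singular, while the single latent variable captures the shared direction and fits y exactly.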
|
# SAT Math: Challenging Problems #6
It’s been a while since we’ve done one of these posts. Hopefully you’ll think these were worth the wait! Give them a try and, as usual, if you have any questions, don’t hesitate to ask. Answers appear at the bottom of this post.
1) The circle below has its center at the origin and an area of 25π. The parabola has equation y = ax^2 and intersects the circle at the points where x = 3 and x = -3. What is the value of a?
(A) 4/9
(B) 5/9
(C) 3/5
(D) 4/5
(E) 4/3
2) Both of the adjacent squares below have one side on the x-axis. The smaller square has an area of 16 square units and one of its vertices is Point X. The larger square has an area of 25 square units and one of its vertices is Point Y. Line m contains both Points X and Y and also passes through the origin. What other point must be on line m?
(A) (9, 3)
(B) (12, 8)
(C) (15, 4)
(D) (20, 6)
(E) (35, 7)
3) 8, 64, 512, 4096, ……..
Each term after the first term in the geometric sequence above is found by multiplying the preceding term by 8. What will be the units (ones) digit of the 125th term?
(A) 1
(B) 2
(C) 4
(D) 6
(E) 8
4) Let’s define a three-digit number as “even-odd-even” if the first digit is even, the second odd and the third even. Similarly, let’s define such a number as “odd-even-odd” if the first digit is odd, the second even and the third odd. How many three-digit positive integers are either “even-odd-even” or “odd-even-odd”?
(A) 100
(B) 225
(C) 250
(D) 333
(E) 500
5) The cube below has edges that are each x inches long. If CB = 2DC, what is the length (in inches) of the segment connecting the vertex A to Point C?
(A) $\frac{\sqrt{22} }{3} x$
(B) $3 \sqrt{3} x$
(C) $\frac{2 \sqrt{3} }{4} x$
(D) $\frac{8}{3} x$
(E) $\frac{15}{2} x$
If you have questions about these problems or anything else to do with the SAT, leave a comment below or send me an email at info@cardinalec.com .
Solutions:
1) (A)
2) (E)
3) (E)
4) (B)
5) (A)
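Three of these answers can be verified with a few lines of Python (a sketch; problems 2 and 5 depend on the figures, so only 1, 3 and 4 are checked here):

```python
# 1) A circle of area 25*pi has radius 5, so x = 3 gives y = 4 on the
#    circle; y = a*x^2 through (3, 4) forces a = 4/9 -> answer (A).
a = 4 / 9
assert abs(a * 3**2 - 4) < 1e-12

# 3) Term n of 8, 64, 512, ... is 8^n; units digit of the 125th term.
units = pow(8, 125, 10)
print(units)  # 8 -> answer (E)

# 4) Count "even-odd-even" and "odd-even-odd" three-digit numbers.
def pattern_count():
    count = 0
    for n in range(100, 1000):
        d = [int(ch) for ch in str(n)]
        eoe = d[0] % 2 == 0 and d[1] % 2 == 1 and d[2] % 2 == 0
        oeo = d[0] % 2 == 1 and d[1] % 2 == 0 and d[2] % 2 == 1
        count += eoe or oeo
    return count

print(pattern_count())  # 225 -> answer (B)
```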
|
## finding n
$\Delta G^{\circ} = -nFE_{cell}^{\circ}$
Clarice Chui 2C
Posts: 101
Joined: Fri Aug 30, 2019 12:16 am
### finding n
How do you find n from an equation for a reaction?
haileyramsey-1c
Posts: 105
Joined: Thu Jul 25, 2019 12:18 am
### Re: finding n
Since n is the number of electrons transferred, you can look at the charges of the atoms in the reaction and see what is changing. More directly, write out the half reactions and balance them so the numbers of electrons cancel; that number of electrons is the number transferred.
Lindsey Chheng 1E
Posts: 110
Joined: Fri Aug 30, 2019 12:16 am
### Re: finding n
Clarice Chui 2C wrote:How do you find n from an equation for a reaction?
n is the total number of electrons that's being transferred from the anode to the cathode, which you can find after balancing the two half reactions. In the equation ΔG = -nFE, n is in mol e-/mol rxn. For example, if an equation had a total of 2 e- transferred from anode to cathode, then n would be 2 mol e-/mol rxn
Lauren Stack 1C
Posts: 100
Joined: Sat Aug 17, 2019 12:18 am
### Re: finding n
n stands for the number of electrons transferred. You find this value by balancing the two half reactions in a redox situation. Once you balance them, you look at the electron transfer and use that value as n.
Posts: 74
Joined: Fri Sep 28, 2018 12:29 am
### Re: finding n
What happens if the number of electrons transferred is different? Do you have to balance the equation to make it the same?
Esha Chawla 2E
Posts: 108
Joined: Thu Jul 25, 2019 12:17 am
### Re: finding n
Clarice Chui 2C wrote:How do you find n from an equation for a reaction?
To find n for a reaction, write out the individual oxidation and reduction reactions. To balance the reaction, you have to include electrons to balance out the reaction. n correlates to this number of electrons.
Jasmine W 1K
Posts: 49
Joined: Sat Sep 07, 2019 12:18 am
### Re: finding n
Adriana_4F wrote:What happens if the number of electrons transferred is different? Do you have to balance the equation to make it the same?
Yes, you would have to balance the half-reactions so that the number of electrons transferred is the same.
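As a concrete illustration (this example is mine, not from the thread), consider the Daniell cell:

```latex
% Oxidation half-reaction (anode):
\mathrm{Zn(s)} \longrightarrow \mathrm{Zn^{2+}(aq)} + 2e^-
% Reduction half-reaction (cathode):
\mathrm{Cu^{2+}(aq)} + 2e^- \longrightarrow \mathrm{Cu(s)}
% The electrons already match, so the overall reaction is
\mathrm{Zn(s)} + \mathrm{Cu^{2+}(aq)} \longrightarrow \mathrm{Zn^{2+}(aq)} + \mathrm{Cu(s)}
% and n = 2 in \Delta G^{\circ} = -nFE_{cell}^{\circ}.
```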
|
The moment of inertia of a disc, of mass M and radius R, about an axis which is tangent to the disc and parallel to its diameter is :
$\begin {array} {1 1} (1)\;\frac{1}{2} MR^2 & \quad (2)\;\frac{3}{4} MR^2 \\ (3)\;\frac{1}{4}MR^2 & \quad (4)\;\frac{5}{4}MR^2 \end {array}$
$(4)\;\frac{5}{4}MR^2$
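The answer follows from the parallel axis theorem: a tangent in the plane of the disc lies a distance $R$ from the parallel diameter axis, for which $I_d = \frac{1}{4} MR^2$:

```latex
I_{\text{tangent}} = I_d + MR^2 = \tfrac{1}{4}MR^2 + MR^2 = \tfrac{5}{4}MR^2
```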
|
# Which measurement units should one use in LaTeX?
There are various measurement units that one can use (such as pt, mm, in, em, ex etc.) for specifying lengths and heights. For font-based units (em and ex) the actual spacing will vary slightly depending on the font. For the other types of units the spacing is fixed at the given measurement.
Is it better to prefer some of these units over others in certain situations? For example, is the "em" unit better than a fixed measurement when specifying lengths for indenting text eg. when setting xleftmargin, and xrightmargin in the listings package. When would you use the "ex" unit for specifying heights? I imagine fixed units should always be used when setting page margins.
So, I guess what I am asking is, what are the guidelines for deciding which measurement units to use?
-
There are no hard-and-fast rules, but here's a short list of guidelines:
• "1em" is a horizontal length and "1ex" a vertical one, so use them accordingly (the assignment is arbitrary, but you usually hear people say that "1em" is the width of an "M", which is usually false, and that "1ex" is the height of an "x", which is usually true). I usually consider 1em to be about the same size as the font size in points.
• em and ex are relative lengths, so they're better for designing around text; like you say, an indent of 2em will work whether the fontsize is 9pt or 12pt.
• Things that are of fixed size (such as the page size) should be defined with fixed units, of course.
• When things should be relative, it will often make more sense to define them in terms of the page design. For example, width=0.5\linewidth might make more sense than width=5cm for a figure.
• Watch out for the pt unit! In TeX, 1pt is 1/72.27in, whereas the more common "PostScript point" used by most other software is 1/72in which in TeX is 1bp. If you're dealing with other programs and need your lengths exact, use bp or use standard cm or in measurements.
• Remember that TeX uses fixed point arithmetic, so there are precision problems when you hit around five significant figures. E.g.,
\newlength\x
\x=1in
\showthe\x
gives
> 72.26999pt.
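The pt/bp distinction is easy to quantify (a quick sketch; the constants are TeX's own unit definitions, 1 in = 72.27 pt = 72 bp):

```python
# TeX's definitions: 1 in = 72.27 pt (TeX points) = 72 bp (big points).
PT_PER_IN = 72.27
BP_PER_IN = 72.0

one_bp_in_pt = PT_PER_IN / BP_PER_IN
print(round(one_bp_in_pt, 5))  # 1.00375: a PostScript point is ~0.4% larger
```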
-
@Will: Bringhurst says: (2.1.1) "1 em is a distance equal to the type size", so you can be more certain than "usually consider". BTW, I'm thinking of changing my id to "BringhurstSycophant". – Brent.Longborough Oct 18 '10 at 8:29
@maxschlepzig: Yes. \textwidth is the width of the whole text area on the page. \linewidth is the width of the current line, so in a column it is equal to \columnwidth. See this answer: tex.stackexchange.com/questions/275/… – Brent.Longborough Oct 18 '10 at 10:30
About the scale. Engineering drafts and geographic maps have mandatory integer scales. And these draftsmen were the people who did all the graphics in the books of the past (which in most cases are incomparably more simple and beautiful than what PSTricks or PGF/TikZ can do). Many people wonder how beautiful these books are. But nowadays, having computers, instead of making things more precise, they are made very arbitrary. And here comes another thing for the page layout: arbitrary margins and arbitrary figure sizes aren't beautiful either. – Karl Karlsson Jun 22 '11 at 10:29
A small remark. When the class or the document states that \parindent is 2em, that dimension is relative to the font (and size) which is current at that time; if one doesn't pay attention it may be Computer Modern (not that it makes a big difference, in general). Anyway, it won't change size in, say, a \small context. Tschichold argues that one should always have \parindent= font size, but it's another matter. – egreg Jun 28 '11 at 10:14
@StrawberryFieldsForever — I mean that as a rule of thumb, if you load a 12pt font then 1em ≈ 12pt. – Will Robertson Jun 15 '12 at 5:46
Reading these answers, and comments to them, piqued my curiosity… Especially this sentence that Will Robertson wrote in his answer:
[...] I usually consider 1em to be about the same size as the font size in points.
It inspired me to investigate the actual behavior of TeX (or rather LaTeX). Since my findings could be of interest to those finding this question, I post them here.
# Computer Modern
I started of by checking what length 1em and 1ex are for different selection of Roman/Serif/Typewriter, Medium/Bold, and Upright/Italic/Slanted/SmallCaps. Here are the results:
As we can see, 1em varies quite a lot: from 10.00 pt to 11.82 pt. The length of 1ex is more consistent, with only three different values for all the different styles.
In the table some actual measurements of the font is included. These are the width of an “M” (measured as \wd of an \hbox{M} created when the font is active) and the height and depth of an “x” (measured similarly as \ht and \dp of an \hbox{x}).
An interesting point of this table is that 1em is neither 10 pt (the selected font size) nor the M-width. On the other hand, 1ex corresponds exactly to the height of an "x", except for typewriter small caps.
# Latin Modern
[Xavier asked about the results for Latin Modern using pdfLaTeX, so I added this section.]
To check the values for Latin Modern using pdfLaTeX I added this to the preamble:
\usepackage{lmodern}
\usepackage[T1]{fontenc}
Some more styles are available, compared to Computer Modern, so I added them to the list. Also, the bold italic typewriter style was not available for Latin Modern, so I removed it.
The values of this table correspond to the values for Computer Modern to within a hundredth of a point for the columns 1em, M-width, 1ex and x-depth (all zero again).
This time it’s a bit different for the x-height values. For the most part they correspond to the value of 1ex. However they do not correspond for these styles: medium small caps roman, all medium sans, and all typewriter styles apart from medium italic.
# XeLaTeX and Latin Modern
Generating the above tables with XeLaTeX instead of pdfLaTeX, without changing the code, results in exactly the same values.
I continued the investigation with XeLaTeX and the Latin Modern fonts. I used fontspec to load the fonts “manually” with \fontspec{fontname} where fontname is given in the first column of the table. [See note in Conclusions regarding fontspec.] This uses the system wide (non TeX specific) catalogue, via the fontconfig library, to locate and load the fonts. In this particular case it loads Type1 fonts (.pfb-files) that just happen to live in my main texmf tree.
Once again, the values for 1ex matches the measured x-height exactly (even when x-depth is non-zero). This time, however, 1em is exactly the specified font size (10 pt). One more thing to note is that the measured values correspond to the values for Computer Modern (rounded to hundredths of points) except for when the x-depth is non-zero.
# More fonts
For good measure, I also compiled a table for some other fonts. These are all TrueType and OpenType fonts.
# Conclusions
[Edit: after adding the section on Latin Modern in pdfLaTeX, I have reconsidered some of the conclusions.]
The font metric mechanism seems to differ between using TFM fonts and Xe(La)TeX’s new font support.
## (non-Xe)LaTeX
(this also applies to XeLaTeX using TFM fonts.)
• The value of 1em is not equal to the selected size of the font, nor is it the width of an actual “M”.
• The value of 1ex is not tied to the size of an actual “x”.
However, from the presented data I can conclude that 1ex is most often the height of an actual “x” (\ht of \hbox{x}); while the value of 1em is (almost?) never the width of an “M” (\wd of \hbox{M}).
## XeLaTeX/fontspec
Note: I realize that this probably has nothing to do with fontspec, per se. It is more the question of “old” vs. “new” font handling mechanisms in the engine. However I use the phrase XeLaTeX/fontspec to differentiate the new font handling from the TFM font handling still present in the XeTeX engine.
From the collected data I conclude that, for non-TFM fonts:
• The value of 1em is exactly the selected font size.
• The value of 1ex is exactly the height of an “x”. (\ht of \hbox{x})
## Different Sizes
Regarding different font sizes, I have done some runs for 12 pt text, and found nothing surprising. Just remember that the Computer Modern fonts have optical sizes, so 1em and 1ex will (probably) depend on the font size in a non-simple way.
-
Very interesting for Computer Modern. I would love to know how LaTeX defines 1em; typographically, it is (nowadays) defined as equal to the font size, so clearly there is something weird happening with Computer Modern (or with your calculations). Maybe you just discovered a last TeX bug :) – Xavier Feb 13 '13 at 4:24
Just out of curiosity, what are the results for Latin Modern with pdflatex? – Xavier Feb 13 '13 at 4:25
@Xavier: The Latin Modern fonts (with pdfLaTeX) showed some surprising results... 1ex not equal to \ht of \hbox{x}! Check the updated answer. – Johan_E Feb 13 '13 at 6:33
Thanks! Really weird results... – Xavier Feb 13 '13 at 7:11
Remark: in typewriter fonts, it makes sense that all values of (1em, 1ex) are equal, no matter which series or style you use. In other fonts, similar approach makes sense, too. That's why they do not correspond. – yo' Feb 13 '13 at 7:41
To fill in a couple of gaps that Will didn't address: English-speaking typographers will specify the measure (\textwidth) in pc and the leading (\baselineskip) in pt; continental European typographers will use cc and dd for the same purposes (not that you'll see much difference).
There's no particular reason for preferring these units other than tradition.
-
Now you're giving me TeXBook flashbacks. – Matthew Leingang Oct 18 '10 at 4:17
Interesting, I didn't know about cc (cicero) and dd (Didot point) units. – Steve Oct 19 '10 at 12:13
@Lev: What if you are an English-speaking European (a Briton)? :) – StrawberryFieldsForever Jun 12 '12 at 12:20
|
# What curve does the equation (x-3)^2/4+(y-4)^2/9=1 represent and what are its points of intersection with the axes ?
Sep 7, 2017
This is an ellipse that does not intersect the axes...
#### Explanation:
Given:
${\left(x - 3\right)}^{2} / 4 + {\left(y - 4\right)}^{2} / 9 = 1$
Let's reduce the number of fractions we need to work with by multiplying both sides by $36$ first to get:
$9 {\left(x - 3\right)}^{2} + 4 {\left(y - 4\right)}^{2} = 36$
Subtracting $36$ from both sides and transposing, we get:
$0 = 9 {\left(x - 3\right)}^{2} + 4 {\left(y - 4\right)}^{2} - 36$
$\textcolor{w h i t e}{0} = 9 \left({x}^{2} - 6 x + 9\right) + 4 \left({y}^{2} - 8 y + 16\right) - 36$
$\textcolor{w h i t e}{0} = 9 {x}^{2} - 54 x + 81 + 4 {y}^{2} - 32 y + 64 - 36$
$\textcolor{w h i t e}{0} = 9 {x}^{2} + 4 {y}^{2} - 54 x - 32 y + 109$
We can find the intercepts with the $x$ axis by substituting $y = 0$, or equivalently covering up the terms involving $y$ to find:
$0 = 9 {x}^{2} - 54 x + 109$
$\textcolor{w h i t e}{0} = {\left(3 x\right)}^{2} - 2 \left(3 x\right) \left(9\right) + 81 + 28$
$\textcolor{w h i t e}{0} = {\left(3 x - 9\right)}^{2} + 28$
This has no real solutions, so there are no intercepts with the $x$ axis.
We can find the intercepts with the $y$ axis by substituting $x = 0$, or equivalently covering up the terms involving $x$ to find:
$0 = 4 {y}^{2} - 32 y + 109$
$\textcolor{w h i t e}{0} = {\left(2 y\right)}^{2} - 2 \left(2 y\right) \left(8\right) + 64 + 45$
$\textcolor{w h i t e}{0} = {\left(2 y - 8\right)}^{2} + 45$
This has no real solutions, so there are no intercepts with the $y$ axis.
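Both conclusions can be confirmed numerically (a quick sketch; the quadratics are the ones derived above):

```python
# The two intercept quadratics, checked via their discriminants.
def discriminant(a, b, c):
    """Discriminant b^2 - 4ac of a x^2 + b x + c = 0."""
    return b * b - 4 * a * c

dx = discriminant(9, -54, 109)   # x-intercepts: 9x^2 - 54x + 109 = 0
dy = discriminant(4, -32, 109)   # y-intercepts: 4y^2 - 32y + 109 = 0
print(dx, dy)  # -1008 -720: both negative, so no real intercepts
```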
Alternatively, we could have saved ourselves much of this algebra by noting that the equation:
${\left(x - 3\right)}^{2} / 4 + {\left(y - 4\right)}^{2} / 9 = 1$
is the standard form of the equation of an ellipse:
${\left(x - h\right)}^{2} / {a}^{2} + {\left(y - k\right)}^{2} / {b}^{2} = 1$
with centre $\left(h , k\right) = \left(3 , 4\right)$, semi-minor axis of length $a = 2$ (in the $x$ direction) and semi-major axis of length $b = 3$ (in the $y$ direction).
So the ellipse is $1$ unit from both axes...
graph{(x-3)^2/4+(y-4)^2/9=1 [-9, 11, -2.24, 7.76]}
|
# Statistical Thermodynamics and Rate Theories/Sample problems
## Problem 1
Calculate the probability of a molecule of N2 being in the ground vibrational state at 298 K.
The probability that a system occupies a given state at an instant of time and a specific temperature is given by the Boltzmann distribution.
${\displaystyle P_{i}={\frac {\exp \left({\frac {-E_{i}}{k_{B}T}}\right)}{\sum _{j}\exp \left({\frac {-E_{j}}{k_{B}T}}\right)}}}$
${\displaystyle P_{i}={\frac {\exp \left({\frac {-E_{i}}{k_{B}T}}\right)}{Q}}}$
where:
• i is the energy of the specific state, i, of interest
• kB is Boltzmann's constant, which equals ${\displaystyle 1.3806\times 10^{-23}}$ JK-1
• T is the temperature in Kelvin
The denominator of this function is known as the partition function, Q, which corresponds to the total number of accessible states of the molecule.
The closed form of the molecular vibrational partition function is given by:
${\displaystyle q_{vib}={\frac {1}{1-e^{-h\nu /k_{B}T}}}}$
where:
• ${\displaystyle \nu }$ is the fundamental vibrational frequency of N2 in s-1
• h is Planck's constant, which is ${\displaystyle 6.62607\times 10^{-34}}$ Js
This is equivalent to Q since only the vibrational energy states are of interest and there is only one molecule of N2. The equation for determining the partition function Q, from molecular partition functions, q, is given by:
${\displaystyle Q={\frac {q^{N}}{N!}}}$
where:
• N is the number of molecules
The fundamental vibrational frequency of N2 in wavenumbers, ${\displaystyle {\tilde {\nu }}}$ , is 2358.6cm-1 [1]
The fundamental vibrational frequency in s-1 is given by:
${\displaystyle \nu ={\tilde {\nu }}\times c}$
where
• c is the speed of light, which is ${\displaystyle 2.9979\times 10^{10}}$ cm/s
For N2,
${\displaystyle \nu =(2358.6\ \mathrm{cm}^{-1})\times (2.9979\times 10^{10}\ \mathrm{cm/s})=7.0708\times 10^{13}\ \mathrm{s}^{-1}}$
For N2 at 298 K,
${\displaystyle q_{v}=\left({\frac {1}{1-e^{-(6.62607\times 10^{-34}Js\times 7.0708\times 10^{13}s^{-1})/(1.3806\times 10^{-23}JK^{-1}\times 298K)}}}\right)=1.000011333}$
The vibrational energy levels follow that of a quantum mechanical harmonic oscillator. The energy levels are represented by:
${\displaystyle E_{n}=h\nu (n+{\frac {1}{2}})}$
where:
• n is the quantum vibrational number, which equals 0, 1, 2,...
For the ground state (n=0), the energy becomes:
${\displaystyle E_{0}={\frac {1}{2}}h\nu }$
Since the vibrational zero point energy is not zero, the energy levels are defined relative to the n=0 level. This is used in the molecular partition function above and therefore, the ground state is regarded as having zero energy.
For N2 the probability of being in the ground state at 298K is:
${\displaystyle P_{0}={\frac {e^{-E_{0}/k_{B}T}}{q_{v}}}}$
${\displaystyle P_{0}={\frac {e^{(0J)/(1.3806\times 10^{-23}\times 298K)}}{1.000011333}}}$
${\displaystyle P_{0}=0.999988667}$
This means that at room temperature, the probability of a molecule of N2 being in the ground vibrational state is 99.9988667%.
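The calculation above can be re-checked numerically (a sketch using the same constants as the text):

```python
# Numerical check of q_vib and P0 for N2 at 298 K.
import math

h = 6.62607e-34        # J s, Planck's constant
kB = 1.3806e-23        # J/K, Boltzmann's constant
c = 2.9979e10          # cm/s, speed of light
T = 298.0              # K
nu = 2358.6 * c        # s^-1, fundamental vibrational frequency of N2

q_vib = 1.0 / (1.0 - math.exp(-h * nu / (kB * T)))
P0 = 1.0 / q_vib       # ground state sits at zero on the shifted scale
print(q_vib)  # ~1.0000113
print(P0)     # ~0.9999887
```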
### References
1. Lide, D. R., (84th ed.). (2003-2004). Handbook of Chemistry and Physics. pg.9-85.
## Problem 2
Derive an equation for the population of rotational state i of a linear diatomic. Make a bar graph of the distribution of rotational states of N2 at 298 K.
1. Equation for the population of rotational state i
For a diatomic molecule, it can be approximated as a rigid rotor. Solving Schrödinger equation of rigid rotors gives the energy levels of the molecule at state J:
${\displaystyle E(J)=BJ(J+1)}$
where ${\displaystyle J}$ is the quantum number for total rotational angular momentum; ${\displaystyle B}$ is the rotational constant in cm-1.
• ${\displaystyle B={\frac {h}{8\pi ^{2}cI}}}$ ,
where ${\displaystyle h}$ is Planck constant; ${\displaystyle c}$ is the vacuum light speed in cm/s; and ${\displaystyle I}$ is the moment of inertia.
• ${\displaystyle I=\mu r^{2}}$ ,
where ${\displaystyle \mu }$ is the reduced mass and ${\displaystyle r}$ is the bond length.
By the Maxwell–Boltzmann_distribution, the population of rotational state i comparing to ground state is:
${\displaystyle {\frac {N_{i}}{N_{0}}}={\frac {g_{i}}{g_{0}}}\left({\frac {e^{-E_{i}/k_{B}T}}{e^{-E_{0}/k_{B}T}}}\right)}$
where ${\displaystyle g}$ is the degeneracy of the state; ${\displaystyle E}$ is the energy of the state; ${\displaystyle k_{B}}$ is Boltzmann constant and ${\displaystyle T}$ is the temperature.
Substitute ${\displaystyle E_{i}=hcBi(i+1)}$ ,${\displaystyle E_{0}=hcB0(0+1)=0}$ , and the degeneracy of the state i ${\displaystyle g_{i}=2i+1}$ to the equation, get:
${\displaystyle {\frac {N_{i}}{N_{0}}}=(2i+1)\left({\frac {e^{-hcBi(i+1)/k_{B}T}}{e^{0/k_{B}T}}}\right)=(2i+1)e^{-hcBi(i+1)/k_{B}T}}$
### 2. Distribution of rotational states of N2
For N2, the population of state i is:
${\displaystyle {\frac {N_{i}}{N_{0}}}=(2i+1)e^{-hcBi(i+1)/k_{B}T}}$
Combine the constant part, define constant ${\displaystyle a=hcB/k_{B}T={\frac {h^{2}}{8\pi ^{2}k_{B}T\mu r^{2}}}}$ .
The reduced mass μ of nitrogen is 7.00 Da = 1.16×10-26 kg and the bond length r of N2 is 110 pm = 1.10×10-10 m.
Substitute T = 298 K, ${\displaystyle a={\frac {h^{2}}{8\pi ^{2}k_{B}T\mu r^{2}}}=9.60\times 10^{-3}}$
Summing over states gives qrot = 1/a ≈ 104, so the probability of N2 occupying the ground rotational state at 298 K is a ≈ 9.60×10-3.
The bar graph of the distribution of rotational states of N2 at 298 K is:
Distribution of rotational states of nitrogen gas at 298 K.
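The constant $a$, the partition function, and the shape of the distribution can be re-checked numerically (a sketch using the rounded constants from the text):

```python
# Check of a = h^2 / (8 pi^2 kB T mu r^2) and q_rot = 1/a for N2.
import math

h = 6.62607e-34    # J s
kB = 1.3806e-23    # J/K
mu = 1.16e-26      # kg, reduced mass of N2
r = 1.10e-10       # m, bond length of N2
T = 298.0          # K

a = h**2 / (8 * math.pi**2 * kB * T * mu * r**2)
q_rot = 1.0 / a
print(a, q_rot)    # a ~ 9.6e-3, q_rot ~ 104

# The population formula (2i+1) exp(-a i(i+1)) peaks well above the
# ground state at room temperature:
j_max = max(range(30), key=lambda i: (2 * i + 1) * math.exp(-a * i * (i + 1)))
print(j_max)       # most populated rotational level
```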
## Problem 3
Estimate the number of translational states that are available to a molecule of N2 in 1 m3 container at 298 K.
The equation that is used to determine translational states of the molecule of N2 at 298 K is shown below.
${\displaystyle q_{trans}(V,T)=\left({\frac {2\pi mk_{B}T}{h^{2}}}\right)^{\frac {3}{2}}V}$
Where ${\displaystyle q_{trans}}$ (V,T) represents the partition function for translation, ${\displaystyle m}$ represents the mass of the particle in kilograms (kg), ${\displaystyle k_{B}}$ represents Boltzmann's constant ${\displaystyle (1.38064852\times 10^{-23}J\times K^{-1})}$ , ${\displaystyle T}$ represents the temperature in kelvins ${\displaystyle (K)}$ , ${\displaystyle h}$ represents Planck's constant ${\displaystyle (6.626\times 10^{-34}J\times s)}$ and ${\displaystyle V}$ represents the volume in ${\displaystyle m^{3}}$ .
The steps for solving this problem are shown below:
m = ${\displaystyle 28.0102}$ amu
m = ${\displaystyle (28.0102\,\mathrm{amu})\times (1.6605\times 10^{-27}\,\mathrm{kg/amu})}$
m = ${\displaystyle 4.6512\times 10^{-26}kg}$
${\displaystyle q_{trans}(V,T)=\left({\frac {2\pi mk_{B}T}{h^{2}}}\right)^{3/2}V}$
${\displaystyle q_{trans}(V,T)=\left({\frac {2\times 3.14159\times (4.6512\times 10^{-26}kg)\times (1.38064852\times 10^{-23}J/K)\times 298K}{{(6.626\times 10^{-34}J\times s})^{2}}}\right)^{3/2}(1.000m^{3})}$
${\displaystyle q_{trans}(V,T)=1.4322\times 10^{32}}$
Therefore, there should be ${\displaystyle 1.4322\times 10^{32}}$ translational states for N2 at 298 K.
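As a check, the same arithmetic can be run directly (a sketch using the constants from the text):

```python
# Re-computation of the translational partition function above.
import math

m = 4.6512e-26        # kg, mass of one N2 molecule
kB = 1.38064852e-23   # J/K
h = 6.626e-34         # J s
T = 298.0             # K
V = 1.0               # m^3

q_trans = (2 * math.pi * m * kB * T / h**2) ** 1.5 * V
print(q_trans)        # ~1.43e32
```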
## Problem 4
Calculate the DeBroglie wavelength, rotational temperature, and vibrational temperature of N2 and Cl2 at 298 K.
### The DeBroglie Wavelength
${\displaystyle \Lambda =\left({\frac {2{\pi }mk_{B}T}{h^{2}}}\right)^{-1/2}}$
where:
• m is the mass of the molecule.
• Boltzmann's constant kB = 1.3806488×10-23 J K-1
• Planck's constant h = 6.62606957×10-34 J s
For N2 at 298K,
${\displaystyle m=2\times 14.0067u}$
${\displaystyle m=28.0134u\times {\frac {1.66054\times 10^{-27}kg}{u}}}$
${\displaystyle m=4.65173712\times 10^{-26}kg}$
${\displaystyle \Lambda =\left({\frac {2{\pi }(4.65173712\times 10^{-26}kg)(1.3806488\times 10^{-23}JK^{-1})(298K)}{(6.62606957\times 10^{-34}Js)^{2}}}\right)^{-1/2}}$
${\displaystyle \Lambda =\left({\frac {1.20252611\times 10^{-45}kgJ}{4.39047979\times 10^{-67}J^{2}s^{2}}}\right)^{-1/2}}$
${\displaystyle \Lambda =(2.738940097\times 10^{21}kgJ^{-1}s^{-2})^{-1/2}}$
${\displaystyle \Lambda =1.9107714\times 10^{-11}m}$
For Cl2 at 298K,
${\displaystyle m=2\times 35.453u}$
${\displaystyle m=70.906u\times {\frac {1.66054\times 10^{-27}kg}{u}}}$
${\displaystyle m=1.17742249\times 10^{-25}kg}$
${\displaystyle \Lambda =\left({\frac {2{\pi }(1.17742249\times 10^{-25}kg)(1.3806488\times 10^{-23}JK^{-1})(298K)}{(6.62606957\times 10^{-34}Js)^{2}}}\right)^{-1/2}}$
${\displaystyle \Lambda =\left({\frac {3.04376893\times 10^{-45}kgJ}{4.39047979\times 10^{-67}J^{2}s^{2}}}\right)^{-1/2}}$
${\displaystyle \Lambda =(6.932656726\times 10^{21}kgJ^{-1}s^{-2})^{-1/2}}$
${\displaystyle \Lambda =1.20101976\times 10^{-11}m}$
Unit Conversion for the de Broglie Wavelength:
${\displaystyle (kgJ^{-1}s^{-2})^{-1/2}=\left({\frac {kg}{kgm^{2}s^{-2}s^{2}}}\right)^{-1/2}}$
${\displaystyle (kgJ^{-1}s^{-2})^{-1/2}=(m^{-2})^{-1/2}}$
${\displaystyle (kgJ^{-1}s^{-2})^{-1/2}=m}$
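Both wavelengths can be checked with a short Python sketch; the helper name is illustrative, and the constants are the ones listed above:

```python
import math

# Physical constants in SI units, as quoted above
KB = 1.3806488e-23    # Boltzmann constant, J/K
H = 6.62606957e-34    # Planck constant, J*s
AMU = 1.66054e-27     # atomic mass unit, kg


def thermal_wavelength(mass_kg, temperature_k):
    """Thermal de Broglie wavelength Lambda = (2*pi*m*kB*T / h^2)^(-1/2), in metres."""
    return (2 * math.pi * mass_kg * KB * temperature_k / H**2) ** -0.5


for name, mass_u in [("N2", 2 * 14.0067), ("Cl2", 2 * 35.453)]:
    lam = thermal_wavelength(mass_u * AMU, 298.0)
    print(f"{name}: {lam:.4e} m")   # N2 near 1.91e-11 m, Cl2 near 1.20e-11 m
```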
### The Rotational Temperature
${\displaystyle \Theta _{r}={\frac {\hbar ^{2}}{2k_{B}\mu r_{e}^{2}}}}$
where:
• the reduced Planck constant ${\displaystyle \hbar ={\frac {h}{2\pi }}={\frac {6.62606957\times 10^{-34}Js}{2\pi }}=1.054571726\times 10^{-34}Js}$
• μ is the reduced mass.
• re is the bond length between two atoms in a molecule
For N2,
re = 1.09769Å = 1.09769×10-10m [1]
${\displaystyle \mu ={\frac {m_{N}m_{N}}{m_{N}+m_{N}}}={\frac {m_{N}^{2}}{2m_{N}}}={\frac {m_{N}}{2}}={\frac {14.00307400529u}{2}}=7.001537003u}$
${\displaystyle \mu =7.001537003u\times {\frac {1.66054\times 10^{-27}kg}{u}}}$
${\displaystyle \mu =1.16263323\times 10^{-26}kg}$
${\displaystyle \Theta _{r}={\frac {(1.054571726\times 10^{-34}Js)^{2}}{2(1.3806488\times 10^{-23}JK^{-1})(1.16263323\times 10^{-26}kg)(1.09769\times 10^{-10}m)^{2}}}}$
${\displaystyle \Theta _{r}={\frac {1.11212153\times 10^{-68}J^{2}s^{2}}{3.86825738\times 10^{-69}JK^{-1}kgm^{2}}}}$
${\displaystyle \Theta _{r}=2.874993624Js^{2}Kkg^{-1}m^{-2}}$
${\displaystyle \Theta _{r}=2.874993624K}$
For Cl2,
re = 1.988Å = 1.988×10-10m [2]
${\displaystyle \mu ={\frac {m_{Cl}m_{Cl}}{m_{Cl}+m_{Cl}}}={\frac {m_{Cl}^{2}}{2m_{Cl}}}={\frac {m_{Cl}}{2}}={\frac {34.968852714u}{2}}=17.48442636u}$
${\displaystyle \mu =17.48442636u\times {\frac {1.66054\times 10^{-27}kg}{u}}}$
${\displaystyle \mu =2.90335893\times 10^{-26}kg}$
${\displaystyle \Theta _{r}={\frac {(1.054571726\times 10^{-34}Js)^{2}}{2(1.3806488\times 10^{-23}JK^{-1})(2.90335893\times 10^{-26}kg)(1.988\times 10^{-10}m)^{2}}}}$
${\displaystyle \Theta _{r}={\frac {1.11212153\times 10^{-68}J^{2}s^{2}}{3.16844888\times 10^{-68}JK^{-1}kgm^{2}}}}$
${\displaystyle \Theta _{r}=0.350998729Js^{2}Kkg^{-1}m^{-2}}$
${\displaystyle \Theta _{r}=0.350998729K}$
Unit Conversion for the Rotational Temperature:
${\displaystyle Js^{2}Kkg^{-1}m^{-2}=J(kg^{-1}m^{-2}s^{2})K}$
${\displaystyle Js^{2}Kkg^{-1}m^{-2}=JJ^{-1}K}$
${\displaystyle Js^{2}Kkg^{-1}m^{-2}=K}$
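The two rotational temperatures can be verified numerically; the function below assumes a homonuclear diatomic (so μ = m/2), and its name is an illustrative choice:

```python
# Physical constants in SI units, as quoted above
KB = 1.3806488e-23      # Boltzmann constant, J/K
HBAR = 1.054571726e-34  # reduced Planck constant, J*s
AMU = 1.66054e-27       # atomic mass unit, kg


def rotational_temperature(atom_mass_u, bond_length_m):
    """Theta_r = hbar^2 / (2 * kB * mu * re^2) for a homonuclear diatomic,
    where the reduced mass is mu = m_atom / 2."""
    mu = (atom_mass_u / 2) * AMU
    return HBAR**2 / (2 * KB * mu * bond_length_m**2)


print(f"N2:  {rotational_temperature(14.00307400529, 1.09769e-10):.3f} K")  # ~2.875 K
print(f"Cl2: {rotational_temperature(34.968852714, 1.988e-10):.3f} K")      # ~0.351 K
```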
### The Vibrational Temperature
${\displaystyle \Theta _{v}={\frac {h\nu }{k_{B}}}}$
${\displaystyle \Theta _{v}={\frac {h{\tilde {\nu }}c}{k_{B}}}}$
where:
• ${\displaystyle {\tilde {\nu }}}$ is the vibrational frequency of the molecule in wavenumbers (cm-1).
• The speed of light c = 2.99792458×1010cm s-1
For N2,
${\displaystyle {\tilde {\nu }}=2358.57cm^{-1}}$ [1]
${\displaystyle \Theta _{v}={\frac {(6.62606957\times 10^{-34}Js)(2358.57cm^{-1})(2.99792458\times 10^{10}cms^{-1})}{1.3806488\times 10^{-23}JK^{-1}}}}$
${\displaystyle \Theta _{v}={\frac {4.6851712\times 10^{-20}J}{1.3806488\times 10^{-23}JK^{-1}}}}$
${\displaystyle \Theta _{v}=3393.456175K}$
For Cl2,
${\displaystyle {\tilde {\nu }}=559.7cm^{-1}}$ [2]
${\displaystyle \Theta _{v}={\frac {(6.62606957\times 10^{-34}Js)(559.7cm^{-1})(2.99792458\times 10^{10}cms^{-1})}{1.3806488\times 10^{-23}JK^{-1}}}}$
${\displaystyle \Theta _{v}={\frac {1.11181365\times 10^{-20}J}{1.3806488\times 10^{-23}JK^{-1}}}}$
${\displaystyle \Theta _{v}=805.2834645K}$
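A quick numerical check of both vibrational temperatures; note that the speed of light is kept in cm/s so it cancels the cm-1 of the wavenumber (the function name is illustrative):

```python
# Physical constants, as quoted above
KB = 1.3806488e-23    # Boltzmann constant, J/K
H = 6.62606957e-34    # Planck constant, J*s
C = 2.99792458e10     # speed of light in cm/s, matching the wavenumber units


def vibrational_temperature(wavenumber_cm):
    """Theta_v = h * nu_tilde * c / kB, with nu_tilde given in cm^-1."""
    return H * wavenumber_cm * C / KB


print(f"N2:  {vibrational_temperature(2358.57):.1f} K")  # ~3393 K
print(f"Cl2: {vibrational_temperature(559.7):.1f} K")    # ~805 K
```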
### References
1. a b Lide, D. R., (84th ed.). (2003-2004). Handbook of Chemistry and Physics. CRC Press. pg.9-85.
2. a b Lide, D. R., (84th ed.). (2003-2004). Handbook of Chemistry and Physics. CRC Press. pg.9-83.
| Molecule | N2 | Cl2 |
| --- | --- | --- |
| ${\displaystyle \Lambda }$ | 1.91×10-11 m | 1.20×10-11 m |
| ${\displaystyle \Theta _{r}}$ | 2.87 K | 0.351 K |
| ${\displaystyle \Theta _{v}}$ | 3.39×103 K | 805 K |
## Problem 5
At what temperature would the probability of N2 being in the ground vibrational state be reduced to 50%?
At this temperature, 50% of the N2 molecules are in the ground state and 50% are in excited states, so the ground-state population can be expressed by the following equation:
${\displaystyle P_{0}=0.5}$
The wavenumber ${\displaystyle {\tilde {\nu }}}$ for the N2 molecule, taken from the NIST WebBook, is 2358.57 cm-1.
This can be converted into the fundamental vibrational frequency by the relation,
${\displaystyle \nu ={\tilde {\nu }}\times c}$
where:
• ${\displaystyle {\tilde {\nu }}}$ is the fundamental wavenumber for the molecule in cm-1.
• ${\displaystyle c}$ is the speed of light, 2.998×1010 cm/s.
Knowing this relation, the fundamental vibrational frequency can be calculated.
${\displaystyle \nu ={\tilde {\nu }}\times c=2358.57cm^{-1}\times 2.998\times 10^{10}cm/s=7.07099\times 10^{13}s^{-1}}$
The population of a system can be expressed by the following equation:
${\displaystyle P_{0}={\frac {\exp \left({\frac {-E_{0}}{k_{B}T}}\right)}{q_{vib}}}}$
where:
• ${\displaystyle P_{0}}$ is the population of the ground state molecules
• ${\displaystyle E_{0}}$ is the energy at the ground state
• ${\displaystyle k_{B}}$ is the Boltzmann constant, 1.38064853×10-23 J/K
• ${\displaystyle T}$ is the temperature of the system in K
• ${\displaystyle q_{vib}}$ is the molecular vibrational partition function
${\displaystyle q_{vib}}$ is given by the following equation:
${\displaystyle q_{vib}={\frac {1}{1-\exp \left({\frac {-h\nu }{k_{B}T}}\right)}}}$
where:
• h is the Planck constant, 6.626×10-34 J·s
Knowing these relations, we can solve for the temperature of the system at which 50% of the molecules are in the ground state and 50% of the molecules are excited.
${\displaystyle P_{0}={\frac {\exp {\frac {-E_{0}}{k_{B}T}}}{q_{vib}}}=50\%=0.5}$
${\displaystyle q_{vib}={\frac {\exp \left({\frac {-E_{0}}{k_{B}T}}\right)}{P_{0}}}}$
${\displaystyle q_{vib}={\frac {\exp(0)}{0.5}}=2}$
Knowing the value of ${\displaystyle q_{vib}}$, we can invert the molecular vibrational partition function to obtain the exact value of T at which 50% of the molecules are in the ground state.
${\displaystyle q_{vib}={\frac {1}{1-\exp \left({\frac {-h\nu }{k_{B}T}}\right)}}=2}$
${\displaystyle 2\left(1-\exp \left({\frac {-h\nu }{k_{B}T}}\right)\right)=1}$
${\displaystyle 2-2\exp \left({\frac {-h\nu }{k_{B}T}}\right)=1}$
${\displaystyle -2\exp \left({\frac {-h\nu }{k_{B}T}}\right)=-1}$
${\displaystyle \exp \left({\frac {-h\nu }{k_{B}T}}\right)=0.5}$
${\displaystyle \ln \left(\exp \left({\frac {-h\nu }{k_{B}T}}\right)\right)=\ln(0.5)}$
${\displaystyle {\frac {-h\nu }{k_{B}T}}=-0.6931472}$
${\displaystyle T={\frac {h\nu }{0.6931472k_{B}}}}$
Therefore, by substituting the Planck constant, fundamental vibrational frequency and the Boltzmann constant into the equation, we can obtain the temperature for when half of the molecules are in the ground state.
${\displaystyle T={\frac {(6.626\times 10^{-34}J\cdot s)\times (7.07099\times 10^{13}s^{-1})}{0.6931472\times (1.38064853\times 10^{-23}J/K)}}=4895.79K}$
The temperature at which 50% of the N2 molecules are in the ground vibrational state is approximately 4896 K.
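The whole derivation collapses to T = hν / (kB ln 2), which a short Python sketch can verify, including the cross-check that q_vib = 2 at that temperature (variable names are illustrative):

```python
import math

# Constants, as quoted above
KB = 1.38064853e-23   # Boltzmann constant, J/K
H = 6.626e-34         # Planck constant, J*s
C = 2.998e10          # speed of light, cm/s

nu = 2358.57 * C                   # fundamental frequency of N2, s^-1
T = H * nu / (KB * math.log(2))    # temperature at which P0 = 0.5

# Cross-check: at this T the vibrational partition function equals 2,
# so the ground-state population exp(0)/q_vib is exactly 0.5.
q_vib = 1 / (1 - math.exp(-H * nu / (KB * T)))
print(f"T = {T:.1f} K, q_vib = {q_vib:.3f}")
```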
# Abstract
This article deals with a study performed at the Experimental Mine of the Research Center for Responsible Mining of the University of São Paulo, to examine the correlations between geological environment, blasting parameters and energy consumption in the primary crushing phase. The research is designed to assess the relationships between the energy provided for size reduction and the resistances to size reduction. For this purpose, Key Performance Indicators (KPIs) are used to describe the possible improvements in the energy consumption due to crushing. Four blast tests were performed: for each blast, KPIs were recorded regarding the blast design, the particle size distribution, the real power consumption at the primary crushing unit and its rate of utilization. The results show that energy consumption at the primary crusher is a sum of two components: energy directly involved in crushing the rock, and additional energy used for overcoming the inertial resistances of the moving parts of the crusher. We show how explosive energy and delay times influence the production of coarse fragments that jam the crusher, thereby influencing machinery stops and the inertia loads related to putting the jaws back into movement.
keywords:
Drill & Blast; fragmentation by blasting; comminution energy; crushing
# 1. Introduction
Considering the comminution system as a whole (Da Gama, 1983; Da Gama and Jimeno, 1993; McCarter, 1996; Morrell, 1998; Nielsen, 1998; Bergman, 2005), every size reduction phase contributes to the final result, and this has consequences on the global energy consumption. Investigations by several researchers (Kanchibotla, 1994; Eloranta, 1995; Kojovic et al., 1995; Kanchibotla et al., 1998; Simkus and Dance, 1998; Scott, 1996; Kanchibotla et al., 1999; Kanchibotla, 2000; Seccatore et al., 2015a) have shown that all the processes in the "mine to mill" chain are inter-dependent and that the results of the upstream mining processes, especially blast results such as fragmentation, muckpile shape and movement, have a relevant impact on crushing and grinding (Mohanty and Chung, 1990; Mancini et al., 1991; Chakraborty et al., 2002; Ouchterlony et al., 2006; Marin et al., 2015). This study focused on understanding the relationships between the energy provided for size reduction and the resistances to size reduction.
The energy sources considered are: 1) explosive consumption (Powder Factor, P.F.); 2) distribution of the explosive energy in space (drilling mesh); 3) distribution of the explosive energy in time (initiation sequence); and 4) electric energy for mechanical comminution at the primary crusher. The inherent resistances on which the study focused are: i) the inherent resistance of each lithological material; ii) the size of the rock fragments feeding the primary crusher; and iii) the mechanical resistance of the moving parts of the crusher (including the inertia of the jaws at the start of the equipment). Blasting has a visible effect and a hidden effect: a) it creates macro-fractures (fragmentation) that play a main role in crushing, and b) it creates micro-fractures that lead to the internal softening of individual fragments, making them easier to grind (Nielsen and Kristiansen, 1996; Katsabanis et al., 2003a, 2003b, 2004, 2006, 2008; Workman and Eloranta, 2003, 2009). A parallel study conducted at the same site (Seccatore et al., 2015a) showed that the inherent resistance to comminution (Work Index) is reduced when the P.F. is increased. Nevertheless, this phenomenon is more evident at the finer grinding phases, such as milling, and therefore its measurement was deliberately neglected for primary crushing. Further research will address this matter.
## The quarry under study
The Experimental Mine of the Research Center for Responsible Mining of the University of São Paulo (NAP.Mineração), Brazil, is a quarry exploiting a dolomitic limestone by drill and blast. In the somewhat out-of-date system employed, the holes are charged with cartridged emulsion explosive and primed by detonating cord. The main blast parameters are shown in Table 1.
Table 1
Blast parameters.
The blast is fired by a safety fuse and a fire cap that initiate the main line of detonating cord; delays are provided by means of relays (17 and 42 ms). An electricity meter was installed in the primary crushing unit to evaluate the energy consumption due to crushing. The current system leads to several problems: blocks often need further reduction before primary crushing; blasting energy is dissipated; the rock that remains in place is often damaged; and bridging and stalling of the jaw crusher are not uncommon. As a result of these drawbacks, a remarkable waste of economic resources is observed.
Due to budget restrictions, changes must meet certain conditions: not to involve any financial investment, not to change the excavation technique, and not to interfere with the well-known practice of the operators.
# 2. Research method
Key Performance Indicators (KPIs) are a set of quantifiable measures generally used to gauge or compare performance in terms of meeting strategic and operational goals (Painesis, 2011). KPIs can vary depending on the priorities or performance criteria of a given company. The KPIs used in this research to evaluate the possible improvements in the energy consumption due to crushing are reported and described in Table 2.
Table 2
Key Performance Indicators employed on the research.
Attention must be paid to the last parameter: the stopping time of the primary crusher. The main reason for stopping is the jamming of the jaws of the crusher by oversized fragments. This event entails energy consumption not directly related to mechanical crushing, and is thoroughly discussed below. Four full-scale blast tests were performed. The limited number of tests is due to the logistic and economic constraints related to designing, performing and analyzing full-scale blasts within the production schedule of an operating industrial facility. For each blast, the research was developed according to the following steps: 1) collection of the data related to the blast design: bench height, stemming, burden, spacing, rock type, amount of bridging (formation of an arch) of fragments or of oversized fragments that jam the jaws, explosive used, and delay density; 2) calculation of the volume to be blasted and of the Powder Factor; 3) collection of pictures of the muckpiles, to be analyzed through image-analysis software in order to determine the particle size distribution; 4) evaluation of the real power and calculation of the energy consumption, over both monthly and daily periods, using the data measured by the electricity meter in the primary crushing unit; and 5) plotting of the results to evaluate the correlation between the observed parameters and the energy consumption, in order to identify anomalous peaks and to understand why they occurred.
# 3. Results and discussion
The parameters evaluated are shown in Table 3. A classic trend can be noticed: increasing P.F. shifts the particle size distribution toward lower values. As a consequence, the energy consumption at the primary crusher decreases as P80 decreases.
Table 3
Parameters evaluated for each blast test and energy consumption.
The grain size distribution curves obtained for different values of P.F. are shown in Figure 1.
Figure 1
Particle size distribution for different values of P.F.
The influence of the drilling mesh is not the same over the whole particle size distribution: it affects the coarser fractions (P80 and P50) more than the fines (P20). The same behaviour was observed by Seccatore et al. (2015a). A key aspect of this research was the evaluation of the real electric power drawn by the engine of the primary crusher. By plotting the measured values of real power in the time domain, it was observed that: 1) the highest peaks occur when bigger blocks are crushed and when the engine is turned on, due to starting currents, which are usually the largest recorded; and 2) the minimum values of real power (different from zero) occur when the jaw crusher is not crushing but is not turned off: this corresponds either to an empty crusher or to an event of bridging (formation of an arch of fragments blocking the fall of rock into the jaws). Two main resistances offered by the jaw demand a higher amount of current to move it: a) mechanical resistance, when bigger blocks enter the crusher, and b) inertial resistance, when the engine is turned on after a stop and must overcome the inertia of the stopped jaw. Both events consequently cause higher values of real power. Real power and energy consumption are related by the equation: $E=\int P(t)\,dt$ or, in discrete terms: $E=\sum_{i=0}^{n}P_{i}\cdot \Delta t_{i}$ so it is clear that a higher value of power leads to a higher energy consumption. As for mechanical resistance, as shown in Table 3, energy consumption increases with P80; during the primary crushing phase, it can be reduced by increasing the P.F.
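The discrete form of this relation can be sketched in a few lines of code; the power samples and the sampling interval below are illustrative values, not measurements from the experimental campaign:

```python
def energy_consumption(power_samples_w, dt_s):
    """Discrete energy E = sum(P_i * dt_i) from real-power samples in watts,
    taken at a fixed sampling interval dt_s in seconds; returns joules."""
    return sum(p * dt_s for p in power_samples_w)


# Hypothetical power trace (W): idle crusher, a crushing peak, then idle again.
samples = [5_000, 5_000, 42_000, 55_000, 48_000, 6_000]
dt = 60.0                                 # one sample per minute
e_joules = energy_consumption(samples, dt)
print(f"{e_joules / 3.6e6:.2f} kWh")      # 1 kWh = 3.6e6 J
```

The sketch makes the point in the text concrete: a stretch of high real power, whether from crushing large blocks or from restarting a stopped jaw, raises the sum directly.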
As for inertial resistance, it frequently happens that big blocks get stuck between the jaws: in this case the crusher must be stopped, and some time is needed for the operators to reduce the block with a jackhammer or remove it from the jaws. As shown in Figure 2, the greater the average block dimension, the higher the frequency of stops (stop percentage). The stop percentage heavily affects the total energy consumption, also due to the contribution of inertia loads at the motors (Figure 3).
Figure 2
Correlation between stop percentage and P80
Figure 3
Energy consumption is affected by the stop percentage
This cost can be reduced by installing either a soft starter or, better, a frequency inverter; alternatively, by slowing the equipment down instead of turning it on and off. A careful design of the transport cycles, and designing blasts so as to obtain blocks of a size suitable for crushing, can reduce the effects of inertia loads on energy consumption. Other considerations arise from observing the delay patterns, as shown in the examples of Figure 4 and Figure 5: blasts are designed so that two holes detonate simultaneously.
Figure 4
Blast test n.1. Numbers indicate sequence
Figure 5
Blast test n.4. Numbers indicate sequence
To obtain the best result in fragmentation by Drill and Blast, the simultaneous detonation of two blastholes should be avoided, to induce the explosive to work along the burden instead of along the spacing (Cardu et al., 2015a, 2015b). In this research, the Specific Priming (S.P., n. delays/t) was compared with the percentage of stops in production (in terms of the cost that stops involve) and with the particle size distribution. It is evident how timing in the initiation sequence influences fragmentation and comminution energy: a 30% increase in S.P. results in finer fragmentation with a 15% reduction of P80; Figure 6 shows that the same increase in S.P. results in a 17% reduction in stop percentage.
Figure 6
P80 decreases as the number of delays/t increases
The effects of timing on fragmentation were not extensively studied for a long time, but have gained much importance in the last 30 years, and many studies address this aspect (Stagg, 1987; Sastry and Chandar, 2004; Katsabanis et al., 2006, 2008; Kim, 2010; Cardu et al., 2015a, 2015b; Seccatore et al., 2015b; Schimek et al., 2015). The results of the previous and current studies show that the influence of the detonation sequence on fragmentation deserves greater research effort. Based on the average electricity cost in Brazil, the lower energy consumption and the consequent savings obtainable by increasing P.F. and S.P. can be evaluated. For this analysis, three situations were studied: 1) Scenario 1: the current situation; 2) Scenario 2: the situation obtainable merely considering the reduction of particle size; and 3) Scenario 3: the situation obtainable considering both the reduction of particle size and the reduction of engine stops. Without considering the savings achievable by avoiding stops in production, passing from the situation with the lowest P.F. to the highest P.F., the achievable saving is around 20% (Figure 7). By also considering the savings obtainable by reducing stops in production, passing from the situation with the lowest P.F. (the current situation) to the highest P.F., the energy savings can reach 34%.
Figure 7
Electricity cost decreases as P.F. increases
By increasing S.P., a reduction in electricity cost (Figure 8) and a further reduction of stops in production can be reached: passing from the worst blast result of Scenario 1 to the best result in Scenario 3, a 41% reduction of cost is achievable.
Figure 8
Electricity cost decreases as the number of delays/t increases
Figure 9 shows the variation of total costs (electricity cost and delay cost), in R\$/t, under the hypothesis of keeping all blast parameters equal while incrementing S.P.: an average 18% reduction in total cost is obtainable by increasing the delay density.
Figure 9
All other things being equal, an increase in delay density can lead to a reduction of costs
# 4. Conclusions
Many studies have been conducted in the past to define and understand how to reduce the unit cost of each phase of the mine-to-mill chain. The phase that requires the most energy is grinding, but optimization can be achieved through innovations and improvements in each preceding phase. In this framework, this research aimed to understand the relationships between the energy provided for size reduction and the resistances to size reduction. It led to the understanding that energy consumption at the primary crusher is not only related to the mechanical resistance needed to crush the blocks, but is a sum of two components: a) energy used for mechanical crushing, and b) energy used for overcoming the inertial resistances. Both components depend on the particle size distribution of the muckpile: i) energy consumption increases (as expected) with the P80 of the blasted material, and ii) energy consumption is higher when stops in production are more frequent. The coarser the particle size of the feed material, the higher the frequency of stops. Timing and initiation sequence also play an important role in cost optimization: in this research, increasing the density of delays (n. delays/t) was shown to reduce the size of the blasted fragments, the percentage of stops in production, and the total energy consumption. All other parameters being equal, an increase in the delay density can lead to a reduction of costs.
# Acknowledgements
Many thanks to Sociedade Extrativa Dolomia Ltda. for the support in the Experimental Mine Project. A special acknowledgment goes to CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico) for the Special Visiting Researcher fellowship n.400417/2014-6.
# Publication Dates
• Publication in this collection
Apr-Jun 2019
|
# Presented By
Xiaolan Xu, Robin Wen, Yue Weng, Beizhen Chang
# Introduction
We develop a new way of representing a neural network as a graph, which we call a relational graph. Our key insight is to focus on message exchange, rather than just on directed data flow. As a simple example, for a fixed-width fully-connected layer, we can represent one input channel and one output channel together as a single node, and an edge in the relational graph represents the message exchange between the two nodes (Figure 1(a)).
# Relational Graph
We define a graph $G = (V, E)$ by its node set $V = \{v_1,...,v_n\}$ and edge set $E \subseteq \{(v_i,v_j) \mid v_i,v_j\in V\}$. We assume that each node $v$ corresponds to a feature $x_v$, which may be a scalar, vector, or tensor quantity.
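As a minimal illustration (an assumed sketch with scalar node features, a shared weight, and mean aggregation; this is not the authors' exact update rule), one round of message exchange on such a graph looks like this:

```python
# Minimal sketch of one round of message exchange on a relational graph.
# Each node v receives the message w * x_u from every neighbour u (including
# itself, via a self-loop) and aggregates by averaging. On the complete
# relational graph, every node ends up with the same aggregated value.
def message_round(adj, x, w=1.0):
    """adj: list of neighbour lists (self-loops included);
    x: one scalar feature per node; w: shared message weight."""
    return [sum(w * x[u] for u in adj[v]) / len(adj[v]) for v in range(len(adj))]

# Complete graph on 3 nodes with self-loops: every node gets the mean feature.
adj = [[0, 1, 2], [0, 1, 2], [0, 1, 2]]
print(message_round(adj, [1.0, 2.0, 6.0]))  # [3.0, 3.0, 3.0]
```

For a fully-connected layer, the corresponding relational graph is complete, so every node exchanges messages with every other node.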
# Parameter Definition
(1) Clustering Coefficient (C): the average, over all nodes, of the fraction of a node's neighbour pairs that are themselves connected by an edge.
(2) Average Path Length (A): the mean shortest-path distance over all pairs of distinct nodes.
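The two measures can be computed directly; the sketch below is illustrative (a pure-Python adjacency-list implementation with helper names of my own, not the paper's code):

```python
# Compute the two graph measures used to parameterize relational graphs,
# for an undirected graph given as {node: set(neighbours)}.
from collections import deque

def clustering_coefficient(adj):
    """Average fraction of each node's neighbour pairs that are connected."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # nodes with < 2 neighbours contribute 0
        links = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def average_path_length(adj):
    """Mean BFS shortest-path distance over all ordered pairs of nodes."""
    n = len(adj)
    total = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            v = q.popleft()
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    q.append(u)
        total += sum(dist.values())
    return total / (n * (n - 1))

# Complete graph on 5 nodes: C = 1 and A = 1, the baseline used in the text.
complete = {v: {u for u in range(5) if u != v} for v in range(5)}
print(clustering_coefficient(complete), average_path_length(complete))  # 1.0 1.0
```

This confirms the baseline values A = 1 and C = 1 quoted later for the complete graph.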
# Discussions and Conclusions
Section 5 of the paper summarizes the results of experiments on many different relational graphs, obtained through sampling and analysis.
## 1. Neural networks performance depends on its structure
In the experiments, top-1 error is used to measure the performance of the models. The parameters of the models are the average path length and the clustering coefficient. Heat maps were created to illustrate the differences in predictive performance across possible values of average path length and clustering coefficient. In Figure ???, the darker areas represent a smaller top-1 error, which indicates that the model performs better there than in other areas. Compared with the complete graph, which has A = 1 and C = 1, the best-performing relational graph can outperform the complete-graph baseline by 1.4% top-1 error for an MLP on CIFAR-10, and by 0.5% to 1.2% for models on ImageNet. Hence this indicates that the predictive performance of a neural network depends strongly on its graph structure, or equivalently, that the complete graph does not always perform best.
## 2. Sweet spot where performance is significantly improved
To reduce the training noise, the 3942 graphs in the sample were grouped into 52 bins, and each bin was colored based on the average performance of the graphs that fall into it. Based on the heat map, the well-performing graphs tend to cluster into a special region that the paper calls the “sweet spot”, shown in the red rectangle.
## 3. Relationship between neural network’s performance and parameters
When we visualize the heat map, we can see that no significant jumps in performance occur under small changes of the clustering coefficient or the average path length. If one of the variables is fixed within a small range, it is observed that a second-degree polynomial is a good visualization tool for the overall trend. Therefore, both the clustering coefficient and the average path length are related to neural network performance by a U-shape.
## 4. Consistency among many different tasks and datasets
The results are observed to be consistent across different points of view. Across multiple architectures and datasets, graphs with clustering coefficient within [0.1, 0.7] and average path length within [1.5, 3] consistently outperform the baseline complete graph.
Across different datasets, networks with similar clustering coefficient and average path length give correlated results. The paper notes that ResNet-34 is much more complex than a 5-layer MLP, yet a fixed set of relational graphs performs similarly in both settings, with a Pearson correlation of 0.658; the p-value for the null hypothesis is less than $10^{-8}$.
## 5. Top architectures can be identified efficiently
According to the graphs in the Key Results, there is a "sweet spot", and therefore we do not have to train on the entire dataset for a large number of epochs. We can take a sample of the data and focus on the "sweet spot", which saves a lot of time. Within 3 epochs, the correlation between the variables is already high enough for further computation.
## 6. Well-performing neural networks have graph structure surprisingly similar to those of real biological neural networks
The way we define relational graphs and average path length is similar to the way information exchange is measured in network science. Biological neural networks also have relational-graph representations and graph measures similar to those of the best-performing relational graphs.
# Critique
1. The experiment measures only a single dataset per setting, which might affect the conclusions, since this is not representative enough.
2. When fitting the model, the training data should be randomized in each epoch to reduce the noise.
|
Nuclear structure and reaction studies at SPIRAL
Abstract : The SPIRAL facility at GANIL, operational since 2001, is described briefly. The diverse physics program using the re-accelerated (1.2 to 25 MeV/u) beams ranging from He to Kr and the instrumentation specially developed for their exploitation are presented. Results of these studies, using both direct and compound processes, addressing various questions related to the existence of exotic states of nuclear matter, evolution of "new magic numbers", tunnelling of exotic nuclei, neutron correlations, exotic pathways in astrophysical sites and characterization of the continuum are discussed. The future prospects for the facility and the path towards SPIRAL2, a next-generation ISOL facility, are also briefly presented.
Document type :
Journal articles
Cited literature [69 references]
http://hal.in2p3.fr/in2p3-00548818
Contributor: Michel Lion
Submitted on : Monday, December 20, 2010 - 3:24:05 PM
Last modification on : Monday, November 30, 2020 - 11:04:04 AM
Long-term archiving on: : Monday, March 21, 2011 - 3:35:46 AM
Files
SPIRAL_paper.pdf
Files produced by the author(s)
Citation
A. Navin, Francois de Oliveira Santos, P. Roussel-Chomaz, O. Sorlin. Nuclear structure and reaction studies at SPIRAL. Journal of Physics G: Nuclear and Particle Physics, IOP Publishing, 2011, 38, pp.024004. ⟨10.1088/0954-3899/38/2/024004⟩. ⟨in2p3-00548818⟩
|
# The New Zealand Curriculum identifies five key competencies:
thinking
using language, symbols, and texts
managing self
relating to others
participating and contributing
• People use these competencies to live, learn, work, and contribute as active members of their communities. More complex than skills, the competencies draw also on knowledge, attitudes, and values in ways that lead to action. They are not separate or stand-alone. They are the key to learning in every learning area.
• The development of the competencies is both an end in itself (a goal) and the means by which other ends are achieved. Successful learners make use of the competencies in combination with all the other resources available to them. These include personal goals, other people, community knowledge and values, cultural tools (language, symbols, and texts), and the knowledge and skills found in different learning areas. As they develop the competencies, successful learners are also motivated to use them, recognising when and how to do so and why.
• Opportunities to develop the competencies occur in social contexts. People adopt and adapt practices that they see used and valued by those closest to them, and they make these practices part of their own identity and expertise.
• The competencies continue to develop over time, shaped by interactions with people, places, ideas, and things. Students need to be challenged and supported to develop them in contexts that are increasingly wide-ranging and complex.
### Thinking
• Thinking is about using creative, critical, and metacognitive processes to make sense of information, experiences, and ideas. These processes can be applied to purposes such as developing understanding, making decisions, shaping actions, or constructing knowledge. Intellectual curiosity is at the heart of this competency.
• Students who are competent thinkers and problem-solvers actively seek, use, and create knowledge. They reflect on their own learning, draw on personal knowledge and intuitions, ask questions, and challenge the basis of assumptions and perceptions.
### Using language, symbols, and texts
• Using language, symbols, and texts is about working with and making meaning of the codes in which knowledge is expressed. Languages and symbols are systems for representing and communicating information, experiences, and ideas. People use languages and symbols to produce texts of all kinds: written, oral/aural, and visual; informative and imaginative; informal and formal; mathematical, scientific, and technological.
• Students who are competent users of language, symbols, and texts can interpret and use words, number, images, movement, metaphor, and technologies in a range of contexts. They recognise how choices of language, symbol, or text affect people’s understanding and the ways in which they respond to communications. They confidently use ICT (including, where appropriate, assistive technologies) to access and provide information and to communicate with others.
### Managing self
• This competency is associated with self-motivation, a “can-do” attitude, and with students seeing themselves as capable learners. It is integral to self-assessment.
• Students who manage themselves are enterprising, resourceful, reliable, and resilient. They establish personal goals, make plans, manage projects, and set high standards. They have strategies for meeting challenges. They know when to lead, when to follow, and when and how to act independently.
### Relating to others
• Relating to others is about interacting effectively with a diverse range of people in a variety of contexts. This competency includes the ability to listen actively, recognise different points of view, negotiate, and share ideas.
• Students who relate well to others are open to new learning and able to take different roles in different situations. They are aware of how their words and actions affect others. They know when it is appropriate to compete and when it is appropriate to co-operate. By working effectively together, they can come up with new approaches, ideas, and ways of thinking.
### Participating and contributing
• This competency is about being actively involved in communities. Communities include family, whānau, and school and those based, for example, on a common interest or culture. They may be drawn together for purposes such as learning, work, celebration, or recreation. They may be local, national, or global. This competency includes a capacity to contribute appropriately as a group member, to make connections with others, and to create opportunities for others in the group.
• Students who participate and contribute in communities have a sense of belonging and the confidence to participate within new contexts. They understand the importance of balancing rights, roles, and responsibilities and of contributing to the quality and sustainability of social, cultural, physical, and economic environments.
|
# Step function
In mathematics, a function on the real numbers is called a step function (or staircase function) if it can be written as a finite linear combination of indicator functions of intervals. Informally speaking, a step function is a piecewise constant function having only finitely many pieces.
Example of a step function (the red graph). This particular step function is right-continuous.
## Definition and first consequences
A function $f: \mathbb{R} \rightarrow \mathbb{R}$ is called a step function if it can be written as
$f(x) = \sum\limits_{i=0}^n \alpha_i \chi_{A_i}(x)\,$ for all real numbers $x$
where $n\ge 0,$ $\alpha_i$ are real numbers, $A_i$ are intervals, and $\chi_A\,$ (sometimes written as $1_A$) is the indicator function of $A$:
$\chi_A(x) = \begin{cases} 1 & \mbox{if } x \in A, \\ 0 & \mbox{if } x \notin A. \\ \end{cases}$
In this definition, the intervals $A_i$ can be assumed to have the following two properties:
1. The intervals are disjoint, $A_i\cap A_j=\emptyset$ for $i\ne j$
2. The union of the intervals is the entire real line, $\cup_{i=0}^n A_i=\mathbb R.$
Indeed, if that is not the case to start with, a different set of intervals can be picked for which these assumptions hold. For example, the step function
$f = 4 \chi_{[-5, 1)} + 3 \chi_{(0, 6)}\,$
can be written as
$f = 0\chi_{(-\infty, -5)} +4 \chi_{[-5, 0]} +7 \chi_{(0, 1)} + 3 \chi_{[1, 6)}+0\chi_{[6, \infty)}.\,$
## Examples
The Heaviside step function is an often used step function.
The rectangular function, the next simplest step function.
### Non-examples
• The integer part function is not a step function according to the definition of this article, since it has an infinite number of intervals. However, some authors define step functions also with an infinite number of intervals.[1]
## Properties
• The sum and product of two step functions is again a step function. The product of a step function with a number is also a step function. As such, the step functions form an algebra over the real numbers.
• A step function takes only a finite number of values. If the intervals $A_i,$ $i=0, 1, \dots, n,$ in the above definition of the step function are disjoint and their union is the real line, then $f(x)=\alpha_i\,$ for all $x\in A_i.$
• The Lebesgue integral of a step function $\textstyle f = \sum\limits_{i=0}^n \alpha_i \chi_{A_i}\,$ is $\textstyle \int \!f\,dx = \sum\limits_{i=0}^n \alpha_i \ell(A_i),\,$ where $\ell(A)$ is the length of the interval $A,$ and it is assumed here that all intervals $A_i$ have finite length. In fact, this equality (viewed as a definition) can be the first step in constructing the Lebesgue integral.[2]
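As an illustration (a small sketch, not part of the article), the example function above, in its disjoint-interval form $f = 4\chi_{[-5,0]} + 7\chi_{(0,1)} + 3\chi_{[1,6)}$, can be evaluated pointwise and integrated as $\sum_i \alpha_i \,\ell(A_i)$:

```python
# Evaluate the step function f = 4·χ_[-5,1) + 3·χ_(0,6) via its disjoint
# refinement and compute its Lebesgue integral as Σ αᵢ·ℓ(Aᵢ).
def chi(a, b, left_closed, right_closed):
    """Indicator function of an interval with endpoints a, b."""
    def indicator(x):
        lo = (x >= a) if left_closed else (x > a)
        hi = (x <= b) if right_closed else (x < b)
        return 1 if lo and hi else 0
    return indicator

# Disjoint pieces from the text (zero-coefficient tails omitted):
# f = 4·χ_[-5,0] + 7·χ_(0,1) + 3·χ_[1,6)
pieces = [  # (coefficient, a, b, left_closed, right_closed)
    (4, -5, 0, True, True),
    (7, 0, 1, False, False),
    (3, 1, 6, True, False),
]

def f(x):
    return sum(c * chi(a, b, lc, rc)(x) for c, a, b, lc, rc in pieces)

# Lebesgue integral: coefficient times interval length, summed.
integral = sum(c * (b - a) for c, a, b, _, _ in pieces)
print(f(0.5), integral)  # 7 42
```

The same total, 4·6 + 3·6 = 42, comes out of the original (overlapping) representation, since the integral does not depend on how the step function is written.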
|
# Complex Line Bundles over Simplicial Complexes
Felix Knöppel, Ulrich Pinkall
### Description
Discrete vector bundles are important in Physics and recently found remarkable applications in Computer Graphics. This article approaches discrete bundles from the viewpoint of Discrete Differential Geometry, including a complete classification of discrete vector bundles over finite simplicial complexes. In particular, a discrete analogue of a theorem of André Weil on the classification of hermitian line bundles is obtained.
To each discrete hermitian line bundle with curvature a unique piecewise-smooth hermitian line bundle of piecewise constant curvature is associated. This can be used to define a discrete Dirichlet energy which generalizes the well-known cotangent Laplace operator to discrete hermitian line bundles over Euclidean simplicial manifolds of arbitrary dimension.
#### Dr. Felix Knöppel +
Projects: C07
University: TU Berlin
E-Mail: knoeppel[at]math.tu-berlin.de
#### Prof. Dr. Ulrich Pinkall +
Projects: A05, C07
University: TU Berlin, Institut für Mathematik, MA 822
Address: Straße des 17. Juni 136, 10623 Berlin, GERMANY
Tel: +49 30 31424607
E-Mail: pinkall[at]math.tu-berlin.de
Website: http://page.math.tu-berlin.de/~pinkall/
|
# [ROS Warning] Unable to locate package velodyne_test build file
I recently reinstalled ROS-Melodic on my Ubuntu 18.04. I am using Qt5 with the ROS plugins I downloaded from here.
I was trying out a sample project from here as I will have to practice using a Velodyne Puck 16 and thought that this could be a good training. The project is successfully parsed but when I compile it I receive the following output:
[ROS Warning] Unable to locate package velodyne_test build file: /home/emanuele/catkin_ws/build/velodyne_test/velodyne_test.cbp.
[ROS Debug] Sourced workspace: /home/emanuele/catkin_ws/devel/setup.bash.
[ROS Debug] Sourced workspace: /home/emanuele/catkin_ws/devel/setup.bash.
[ROS Debug] Sourced workspace: /home/emanuele/catkin_ws/devel/setup.bash.
The output of roscd is the following:
emanuele@pc:/opt/ros/melodic$
Thank you very much for pointing in the right direction.
## 1 Answer
Did you source /opt/ros/melodic/setup.bash?
## Comments
Yes, but I remember that before reinstalling it, the result of roscd was, I believe, /home/emanuele/catkin_ws/devel. Is it because I sourced it differently? How can I make the compiler find it? Do I have to reinstall ROS again? (2019-12-16 12:26:01 -0600)
You have to source that for every terminal you use, in every session. (2019-12-16 12:38:36 -0600)
As of now, what I do is emanuele@pc:~/catkin_ws$ catkin_make, and it compiles correctly in the terminal. But when I try to build in Qt5, it gives me the messages I posted in the question. I am not sure what I should do to get Qt5 to compile and build it. (2019-12-17 06:13:11 -0600)
I mean, if I try emanuele@pc:/opt/ros/melodic$ catkin_make, the terminal says:
Base path: /opt/ros/melodic
The specified source space "/opt/ros/melodic/src" does not exist
(2019-12-17 06:57:41 -0600)
Try opening .bashrc with gedit ~/.bashrc, then add the line source "/home/emanuele/catkin_ws/devel/setup.bash" at the bottom of the opened file, then compile from catkin_ws. (2019-12-19 09:41:00 -0600)
@Abhishekpg111, thank you for your comment. It partially worked! It now compiles via the terminal, but Qt is still not compiling. The remaining problem is that Qt apparently can't find the executable. Where can I find it? Is this and this useful to better understand? (2019-12-19 09:58:41 -0600)
Could the executable be inside the devel folder? Here is a print-screen, if that could be helpful. (2019-12-19 10:05:47 -0600)
Is your velodyne_simulator package under the catkin_ws/src folder? (2019-12-19 10:35:26 -0600)
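The fix suggested in the comments (sourcing the workspace automatically in every shell) can be sketched as follows; the workspace path is the default one from the question, so adjust it if yours differs:

```shell
# Append the workspace setup script to ~/.bashrc so every new terminal
# sources it automatically. The path assumes the default catkin workspace
# location used in the question.
echo 'source "$HOME/catkin_ws/devel/setup.bash"' >> ~/.bashrc

# Apply it to the current shell without opening a new terminal:
source ~/.bashrc
```

Note that IDEs launched from a desktop menu often never read .bashrc, which is one reason a project can build in a terminal but not in the IDE; starting the IDE from a terminal that has sourced the workspace is a common workaround.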
|
### Topic: Misconceptions (and truths) about uni life (Read 5672 times)
#### Joseph41
• Great Wonder of ATAR Notes
• Posts: 10168
• Oxford comma and Avett Brothers enthusiast.
• Respect: +6838
##### Misconceptions (and truths) about uni life
« on: October 04, 2017, 04:29:59 pm »
+11
1//. Those currently in high school to clarify things about uni. "Is it true that basically everybody lives on two minute noodles?"
2//. Current uni students to debunk (or confirm) certain things about university life, based on their own experiences.
For example, it's certainly not necessarily the case that your university years will comprise drinking, partying and little sleep. It may be the case for some people but like, based on the people I know, uni was pretty chill. And nobody really gives a fuck if you choose to go out or not, at least anecdotally.
Over to y'all!
#### strawberries
• Victorian
• Posts: 940
• Respect: +411
##### Re: Misconceptions (and truths) about uni life
« Reply #1 on: October 04, 2017, 04:57:08 pm »
+8
ooh I got some
- nobody really cares what you wear
- "omg you're an arts student hahaha you suck"
Idk about other unis, but this does not happen at my uni at all.
EDIT:
also just to add: people aren't judgemental if you took gap years, are a mature-aged student, are part-time, taking less units, "failed" some courses or are not 'up to speed' with what your age group should be (I thought they would be)
however, one misconception is that although there are lots of people at uni, it's very difficult to make friends. a misconception I had was that it wouldn't be too hard.
« Last Edit: October 04, 2017, 05:10:07 pm by strawberries »
VCE '15
don't let dreams be dreams
#### K888
• VIC MVP - 2017
• National Moderator
• ATAR Notes Legend
• Posts: 3297
• Respect: +2408
##### Re: Misconceptions (and truths) about uni life
« Reply #2 on: October 04, 2017, 05:00:35 pm »
+10
Quote from: Joseph41
it's certainly not necessarily the case that your university years will comprise drinking, partying and little sleep
Well, I mean, you're definitely right about the first two, but the last one...I'm not so sure lol.
Just came here to confirm:
- You'll never have to ask to go to the bathroom again
- Group work can be great, but can also be a massive flop. It's usually the latter.
- Some days, they give out free alcohol on campus. Usually in o-week. There are also a fair few people who use recreational drugs (haven't seen it overtly on campus, though).
- A lot of textbooks are expensive and unnecessary, so wait until you're a few weeks into semester to see whether you actually need to buy the textbook.
- You're completely accountable for yourself, and I'd say the standard of work required is a lot higher than anything required in HS.
- Referencing is a pain and you'll probably have to use different styles for different subjects. APA 6th is easily the worst.
- You will likely stay up into the very wee hours of the morning at least once finishing an assignment. You'll tell yourself "never again", but deep down, you'll know that you're gonna rinse and repeat come next assignment.
- There's not actually judgement for the course you study, only a bit of banter.
In uni, people just don't give a fuck. It's cool.
You're also generally with people who are like minded, because they're studying the same course or subjects as you - I'm finding this to be really cool in physio, not sure what it's like in generalist degrees
2017-2020: Bachelor of Physiotherapy (Honours)
#### EEEEEEP
• New South Welsh
• Posts: 971
• Resource Writer
• Respect: +534
##### Re: Misconceptions (and truths) about uni life
« Reply #3 on: October 04, 2017, 05:02:39 pm »
+6
1//. Those currently in high school to clarify things about uni. "Is it true that basically everybody lives on two minute noodles?"
2//. Current uni students to debunk (or confirm) certain things about university life, based on their own experiences.
For example, it's certainly not necessarily the case that your university years will comprise drinking, partying and little sleep. It may be the case for some people but like, based on the people I know, uni was pretty chill. And nobody really gives a fuck if you choose to go out or not, at least anecdotally.
Over to y'all!
1. is it true that everyone lives on 2 minute noodles? Definitely not
2. Is it true that you have to buy everything before you arrive? No. You can buy everything after the semester starts and they will tell you! (It's all in the subject outline)
3. Uni will be really stressful and I won't get much sleep!... It can be, but it depends on how packed your schedule is. The more packed it is, the more stressful it will be.
4. P's get degrees?? P's do get degrees, but it's not a great way of looking at this; you should be aiming for a C or above.
5. You'll make all your friends during week 1. Not true. You can make friends throughout uni life. You have a few years to do it.
6. Clubbing is an essential part of student life. Not necessarily, there are plenty of people who don't go clubbing and/or drink (esp the religious)
7. Once you enrol in your degree, you cannot change your mind - You can definitely change your degree. Even after 1st year.
8. I don't have any friends from my HS, it'll be lonely. It won't =), you'll make friends.
9.You cannot ask for any help with assignments from tutors. False, yes you can, but as with HS, they cannot write it for you.
10. I did great in HS, that means I'll do good in uni - NO, it does not mean that and you may need to work even harder. Do not have too high of an ego, because it may bite you in the back.
#### Aaron
• Honorary Moderator
• ATAR Notes Legend
• Posts: 3885
• Respect: +1461
##### Re: Misconceptions (and truths) about uni life
« Reply #4 on: October 04, 2017, 05:18:11 pm »
+16
Background: I have done 5 years of university study, two degrees (1 bachelor, 1 master) at two different unis (La Trobe and Monash) in IT and Education/Teaching. I have been on both sides of the uni teaching/learning - as a student, and as a tutor/teacher/demonstrator.
Quote from: EEEEEEP
10. I did great in HS, that means I'll do good in uni
God, this. SO MUCH.
I was a tutor for a first year programming subject and the amount of cocky 1st year students who thought because they got awards/dux/etc that they'd be automatically amazing. Mmm.. two different universes. Cockiness usually results in downfall somewhere, so get as many friends as possible and use/abuse your tutors/teaching staff. Don't try and object to what the tutor/lecturer is saying (if you disagree there are ways to go about it) and definitely don't give off the "i know more than you" vibe.. never ends well.
You essentially start from scratch when you move from HS to uni. No one cares if you received a 40 ATAR or a 99.95 ATAR.
Quote from: EEEEEEP
5. You'll make all your friends during week 1.
It took me until the 2nd last week of the first semester to start connecting with people. IT was a very.. unsociable discipline where people kept to themselves (that's quite interesting because the industry is the complete opposite). Anyway, it went on from there.
Quote from: EEEEEEP
8. I don't have any friends from my HS, it'll be lonely.
I did my course without knowing anyone (in fact, I don't think anyone I knew went to uni from HS). You need to put yourself out there. People will come up and talk to you or engage in short conversation. Be receptive. Don't be a snob. If you don't engage with other students even during tutorials/labs/pracs etc, it won't happen. You have to be willing to make the effort.
Quote from: EEEEEEP
6. Clubbing is an essential part of student life.
False. Went the entire 5 years without clubbing/going to any of those wild events. Wasn't really my thing.
Quote from: K888
- Group work can be great, but can also be a massive flop. It's usually the latter.
Most of the time. Every experience I've had over my uni life has been terrible. Try to avoid them if you can.
Quote from: K888
- A lot of textbooks are expensive and unnecessary, so wait until you're a few weeks into semester to see whether you actually need to buy the textbook.
This. The majority of lecture slides/notes are written from the prescribed text.. so make sure it is absolutely necessary before you go forking out $ which you could be using for something more useful.. like.. Subway daily.
A few of my own now:
"All tutors are voluntarily here and want to help me succeed"
False. Quite a number of them are there because they are doing a PhD with the subject/unit coordinator. Some do it just for the money (teaching associates/tutors get paid very nicely.. so make sure you use and abuse). Some of them are really good and genuinely give a shit, but the majority don't.
"Lecturers' primary role is to lecture"
False in the majority of cases. Lecturers' primary role is to contribute to research efforts. For many, lecturing students like yourself is a secondary gig.
"All tutors/lab demonstrators are PhD/masters candidates"
False again. I was a tutor/lab demonstrator in computer science as a 3rd year undergraduate. It's hit and miss - some are good, some are absolutely terrible. You'll know very quickly which one they are.
"I am going to drive to uni, there will be plenty of car parks"
Don't. Unless you intend on arriving at 7am, forget it if you plan on attending later. Use public transport - save yourself the hassle, time and petrol money.
"The uni cares about me, I'm not just a number"
Faaaaalse. You are actually a number. In some units, there are 700+ students enrolled. Do you really think they individualise programs for you? It's a pre-defined sequence of lessons.
Voluntary programs/assistance
If a unit/person offers a voluntary program (e.g. a maths assistance program) and you feel like it would benefit you, go to it. As stated, you are essentially on your own and there is absolutely no hand holding. When I did my undergrad, the uni offered a maths support program which assisted those who needed that extra bit of help with their maths on a weekly basis for 1 hour. Not many turned up, but I thought it was an amazing program.
"I won't hand in weekly tasks/assessments on time, I'll just get given an extension like in high school"
It is nothing like high school. If you don't hand something in and don't have a bloody good reason for it, you'll get zero for it. They won't hold your hand and tell you everything is going to be ok. If you need an extension, you are required to request it before the due date and have a good reason.
Pass rates
In the majority of units at uni, the pass mark is 50 overall. However, if you get something like 46-47, it is very unlikely that they will bump it up to a 50. It's really important that you put effort into each and every unit you do at university. Depending on the unit, failure usually means that you have to a) repeat it and b) pay the financial cost again. The other issue here is that units are usually pre-requisites for future units as well. I'm sure you can guess what happens here... failing the prereq means you won't be able to do the future unit, which means that you'll be behind even more.
Transfers
While it is common to transfer from one degree to another, you need to be aware that transferring to another course has its disadvantages. If it's a completely different discipline, you may have to start again (and of course, this means more of a financial burden). Before you transfer or change your course, make sure that you can claim as much advanced standing/credit as possible (and be aware of it before you transfer).
All I can think of.
« Last Edit: October 04, 2017, 05:38:33 pm by Aaron »
Qualifications: B.InfoTech, M.Teach (Sec) Maths/Comp
Educator with experience in secondary and tertiary settings. Currently a secondary teacher.
Website ; Music Hub ; Short Links
#### insanipi
• Victorian Moderator
• ATAR Notes Legend
• Posts: 4033
• "A Bit of Chaos"
• Respect: +2728
##### Re: Misconceptions (and truths) about uni life
« Reply #5 on: October 04, 2017, 05:21:47 pm »
+6
- Already mentioned, but APA 6th referencing really does suck, trust me on this one.
- Depending on food options available, it can start getting expensive (for me it's $6.20 for a small tub of fried rice. Yikes.)
2017-2019: Bachelor of Pharmaceutical Science (Formulation Science)
2020: Bachelor of Pharmaceutical Science (Honours) (Drug Delivery, Disposition and Dynamics- focusing on molecular biol and editing of glowy proteins)
#### EEEEEEP
• New South Welsh
• Posts: 971
• Resource Writer
• Respect: +534
##### Re: Misconceptions (and truths) about uni life
« Reply #6 on: October 04, 2017, 05:36:34 pm »
+5
Totally forgot this one.. OBLIGATORY MYTH
People will care about my ATAR... no one gives a stuff about your ATAR; you can get into uni via many ways and pathways, sooo.
#### Joseph41
• Great Wonder of ATAR Notes
• Posts: 10168
• Oxford comma and Avett Brothers enthusiast.
• Respect: +6838
##### Re: Misconceptions (and truths) about uni life
« Reply #7 on: October 05, 2017, 01:32:20 pm »
+3
- Already mentioned, but APA 6th referencing really does suck, trust me on this one.
- Depending on food options available, it can start getting expensive (for me it's \$6.20 for a small tub of fried rice. Yikes.)
I used (a variation of) APA 6th for my thesis. I thought I was going to hate it, but actually found it okay in the end.
• Posts: 983
• Graphing is where I draw the line.
• Respect: +513
##### Re: Misconceptions (and truths) about uni life
« Reply #8 on: October 05, 2017, 01:34:26 pm »
+7
Some more things:
Most people in your course will have the same passions and talents as you. If your course / subject has high requirements, just because you were great in high school / top of your class doesn't mean you will be here, a lot of people will be just like you. Even if the requirements aren't as high, everyone there will be doing it because they like it, and are good at it. There won't be as many people doing it as a bludge or to satisfy prerequisites irrelevant to them. In order to do well, you're going to have to work hard. Natural talent won't get you as far as it did in HS.
In saying that, having people with the same interests as you in your class does make it easier to make friends. However, it does take more effort, as subjects and timetables are different for almost everyone, so you will need to put in more of an effort, and expect to have different friends / acquaintances for different classes and different semesters.
Completed VCE 2016
2015: Biology
2016: Methods | Physics | Chemistry | Specialist Maths | Literature
ATAR : 97.90
2017: BSci (Maths and Engineering) at MelbUni
Feel free to pm me if you have any questions!
#### zofromuxo
• Posts: 549
• Everything you want is on the other side of Fear
• Respect: +200
##### Re: Misconceptions (and truths) about uni life
« Reply #9 on: October 05, 2017, 06:57:45 pm »
+8
So many to list... it is actually unbelievable how many myths there are.
Anyway, onto my list of things that aren't already on this thread so far.
1. People care what university you go to.
No, most people don't care, and the ones who do are people you don't want to converse with. I mean, general ribbing/teasing/bantering/joking about it is fine, but if someone is demeaning you for going to, say, RMIT like me, then they aren't a very "reasonable" person.
2. Mature aged students are "scary" and are "unapproachable".
They are in fact the most dedicated of the cohort, and in some cases already know the content and are merely getting accredited for it. I can say for certain, I love hanging around mature aged students, learning about their stories & life experiences as well as learning from them about certain technical skills in the industry.
3. I can only talk to lecturers/tutors/professors about content, program and/or course issues.
No no no. Absolutely false; your lecturers and even your tutors are always learning about new things in the industry. They most likely know what is currently trending in the industry, so get their opinion on it. I engaged frequently with my tutors on issues and events that could affect the industry. For example, the announcement of the Australian Space Agency will generate a mass of questions for my industry on technologies like GPS for Australia. They include: does it mean we build our own satellites to source our own data used for metrology and research? Are we going to stick to buying data from other countries like the U.S.? So yeah, one of the best things to learn early is not just talking to your professors about questions you have about the course, but asking about their opinions on related industry trends, or even sending them something cool you find about the subject.
4. I don’t need to read my university emails, I’ll learn about events and changes from word of mouth. Nor do I need to use it.
Yeah no, you need to read your university email. All, if not most, of the changes to your course are communicated by email. Making appointments is done by email.
Of course, if you go to other "smaller" universities like RMIT, you can knock on an office door and ask for a consultation, but it varies. So use your university student email a lot.
5. Borrowing books from the library is for losers.
This is one of the greatest inventions for university students. Are you a poor student like me?
Check the library for the textbooks you're using; if they have them, borrow them.
I saved a lot of money by borrowing textbooks for my course, and it's a bonus if your course is niche enough.
You can have them for the whole semester, if not the year [check the borrowing policy of your library first though].
There is a lot more on this list, as I'm only in my first year at university, but for anyone going into university next year or even a few years from now: this thread will be useful to you, but also just trying out new things and putting yourself out there will break down a lot of the misconceptions you have.
Jack of all trades, master of none.
Hence why i'm in all these different threads and boards.
#### insanipi
• Victorian Moderator
• ATAR Notes Legend
• Posts: 4033
• "A Bit of Chaos"
• Respect: +2728
##### Re: Misconceptions (and truths) about uni life
« Reply #10 on: October 05, 2017, 09:30:36 pm »
+8
2. Mature aged students are "scary" and are "unapproachable".
They are in fact the most dedicated of the cohort, and in some cases already know the content and are merely getting accredited for it. I can say for certain, I love hanging around mature aged students, learning about their stories & life experiences as well as learning from them about certain technical skills in the industry.
I can definitely vouch for this. One of my closest uni friends happens to be 'mature-aged' (not super old - about ~6-7 years older than me), and she is one of the most understanding people around - turns out she has had a similar experience to me (in our personal lives), and lives around the same area as I do. I believe that you can learn something from everyone, no matter who they are.
2017-2019: Bachelor of Pharmaceutical Science (Formulation Science)
2020: Bachelor of Pharmaceutical Science (Honours) (Drug Delivery, Disposition and Dynamics- focusing on molecular biol and editing of glowy proteins)
#### captkirk
• Forum Regular
• Posts: 54
• Hello
• Respect: 0
##### Re: Misconceptions (and truths) about uni life
« Reply #11 on: October 07, 2017, 01:52:04 am »
+1
Oh my, this thread is making me so hyped for university
Anyone else feeling the same?
#### Bri MT
• VIC MVP - 2018
• ATAR Notes Legend
• Posts: 3684
• invest in wellbeing so it can invest in you
• Respect: +2672
##### Re: Misconceptions (and truths) about uni life
« Reply #12 on: October 07, 2017, 07:12:25 am »
+2
Oh my, this thread is making me so hyped for university
Anyone else feeling the same?
I've always been hyped for uni!!
Before primary school I asked mum if I could go to uni (she was studying at that time) instead of daycare; I got to sit in on lectures and tutorials, which I thought was the best (I would try to understand but fail and draw instead)
It's incredible to think that next year I'll finally get to actually be a university student!
Misconception or not: do many people change classes just before census?
« Last Edit: October 07, 2017, 07:16:05 am by miniturtle »
#### zofromuxo
• Posts: 549
• Everything you want is on the other side of Fear
• Respect: +200
##### Re: Misconceptions (and truths) about uni life
« Reply #13 on: October 07, 2017, 07:43:05 am »
+3
I've always been hyped for uni!!
Before primary school I asked mum if I could go to uni (she was studying at that time) instead of daycare; I got to sit in on lectures and tutorials, which I thought was the best (I would try to understand but fail and draw instead)
It's incredible to think that next year I'll finally get to actually be a university student!
Misconception or not: do many people change classes just before census?
Not a misconception at all; many people do in fact change programs/degrees before census, because if you do it before the census, no financial or academic penalty is applied to you. So it is kind of like those "return in 30 days and we will refund the cost" schemes. I mean, brenden will know this better than anyone.
Jack of all trades, master of none.
Hence why i'm in all these different threads and boards.
### 210500. Find arc elasticity of demand, if quantity demanded falls from 1000 to 950 when price of the item is increased from Rs. 240 to Rs. 280?
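For reference, a worked solution using the midpoint (arc) formula, which is the usual convention for arc elasticity (an assumption here, since the question does not name the method):

```latex
E_{\text{arc}}
  = \frac{\Delta Q / \bar{Q}}{\Delta P / \bar{P}}
  = \frac{(950-1000)\big/\frac{1000+950}{2}}{(280-240)\big/\frac{240+280}{2}}
  = \frac{-50/975}{40/260}
  = -\frac{1}{3}
```

so |E| = 1/3 < 1: demand is price-inelastic over this range.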
# Extract, convert, store and reuse (x,y) coordinate components
On a beamer frame, I have two tikzpicture environments one below the other. Both use the axis environment with identical scaling and domains. I need to:
1. Extract some coordinate from the picture on top.
2. Convert such coordinate to axis cs: and print its x and y components on the picture.
3. Store the converted (x,y) components.
4. Use the converted (x,y) components in the subsequent tikzpicture environment.
I have already tried to tackle these issues through the solutions that have been proposed to some related problems, such as Coordinates of intersections and Extract x, y coordinate of an arbitrary point in TikZ. While admittedly not addressing all of the four points above, the solutions I have consulted typically extract only one of the coordinate components and/or do not jointly tackle the issue of conversion to axis cs:. Instead, I need both coordinate components to be extracted and converted. Moreover, I need to reuse such components in the subsequent tikzpicture environment.
I attach below an MWE and the resulting output (except for the callout). The comments in the script provide further details on my question.
\documentclass{beamer}
\usepackage[mode=buildnew]{standalone}
% Drawing
\usepackage{tikz,tkz-graph}
\usetikzlibrary{intersections,positioning}
\tikzset{>=latex}
\usepackage{pgfplots}
\begin{document}
\begin{frame}
\frametitle{Frame title}
\centering
% Top picture
\begin{tikzpicture}[
baseline=(current bounding box.north),
trim axis left,
trim axis right
]
\begin{axis}[
width=5cm,
xmin=0,
xmax=24,
ymin=-8,
ymax=16,
xtick={10},
xticklabels={$y_e=10$},
ytick={10},
yticklabels={$r_S=10$},
clip=true
]
% Constant parameters
\pgfmathsetmacro{\isv}{22.5}
\pgfmathsetmacro{\k}{1.25}
\pgfmathsetmacro{\ye}{10}
\pgfmathsetmacro{\rs}{10}
% Vertical line corresponding to ye
\addplot [name path=ye,red] coordinates {(\ye,\pgfkeysvalueof{/pgfplots/ymin}) (\ye,\pgfkeysvalueof{/pgfplots/ymax})};
% Horizontal line corresponding to rs
\addplot [name path=rs,red] coordinates {(\pgfkeysvalueof{/pgfplots/xmin},\rs) (\pgfkeysvalueof{/pgfplots/xmax},\rs)};
% Downward sloping IS curve
\addplot [name path=is,smooth,very thick,domain=\pgfkeysvalueof{/pgfplots/xmin}:\pgfkeysvalueof{/pgfplots/xmax}] {\isv-\k*x} node [anchor=west,pos=0.85] {$IS$};
% Seek the intersection between the ye line and IS and label the point of intersection as A
\path [name intersections={of=ye and is,by={A}}] node [anchor=south west,xshift=-1mm,yshift=-1mm] at (A) {$A$};
% Get the coordinates of point A
\pgfgetlastxy{\Ax}{\Ay}
% Print the coordinates next to the A label
\node [anchor=south west,xshift=2mm,yshift=-1mm] at (A) {\tiny (\Ax,\Ay)}; % <-- Step 1: I need both the x and y components to be expressed (and subsequently stored) in terms of the axis coordinate system (i.e. 'axis cs:'). Also, I still do not understand why the command prints (0.0pt,0.0pt) instead of the standard coordinates of A.
\end{axis}
\end{tikzpicture}
% Bottom picture
\begin{tikzpicture}[
baseline=(current bounding box.north),
trim axis left,
trim axis right
]
\begin{axis}[
width=5cm,
xmin=0,
xmax=24,
ymin=-14,
ymax=10,
xtick={10},
xticklabels={$y_e$},
ytick={2},
yticklabels={$\pi^T$}
]
% Constant parameters
\pgfmathsetmacro{\a}{0.5}
\pgfmathsetmacro{\pe}{2}
\pgfmathsetmacro{\pt}{2}
\pgfmathsetmacro{\ye}{10} % <-- Step 2: I need to specify at least this number as the \Ax coordinate derived from the tikzpicture above. If possible, it would be nice to insert \Ax also in the xtick list.
% Upward sloping PC curve
\addplot [name path=pc,color=black,very thick,domain=\pgfkeysvalueof{/pgfplots/xmin}:\pgfkeysvalueof{/pgfplots/xmax}] {\pe+\a*(x-\ye)} node [anchor=north,pos=0.85] {$PC$};
% Vertical line corresponding to ye
\addplot [name path=ye,red] coordinates {(\ye,\pgfkeysvalueof{/pgfplots/ymin}) (\ye,\pgfkeysvalueof{/pgfplots/ymax})};
\end{axis}
\end{tikzpicture}
\end{frame}
\end{document}
COMPLETE REVISION: ... after some iterations. A similar question has been answered here. Rewriting the code of this answer such that it also computes the y coordinates leads to this answer.
\documentclass{beamer}
\usepackage[mode=buildnew]{standalone}
% Drawing
\usepackage{tikz,tkz-graph}
\usetikzlibrary{intersections,positioning}
\tikzset{>=latex}
\usepackage{pgfplots}
% from https://tex.stackexchange.com/a/170243/121799
\newlength{\lenx}
\newlength{\plotwidth}
\newlength{\leny}
\newlength{\plotheight}
\newcommand{\getvalue}[1]{\pgfkeysvalueof{/pgfplots/#1}}
%output will be given by \pgfmathresult
\newcommand{\Getxycoords}[3]% #1 = node name, #2 x coordinate, #2 y coordinate
{\pgfplotsextra{%
\pgfextractx{\lenx}{\pgfpointdiff{\pgfplotspointaxisxy{0}{0}}{\pgfpointanchor{#1}{center}}}%
\pgfextractx{\plotwidth}{\pgfpointdiff{\pgfplotspointaxisxy{\getvalue{xmin}}{0}}%
{\pgfplotspointaxisxy{\getvalue{xmax}}{0}}}%
\pgfextracty{\leny}{\pgfpointdiff{\pgfplotspointaxisxy{0}{0}}{\pgfpointanchor{#1}{center}}}%
\pgfextracty{\plotheight}{\pgfpointdiff{\pgfplotspointaxisxy{0}{\getvalue{ymin}}}%
{\pgfplotspointaxisxy{0}{\getvalue{ymax}}}}%
\pgfmathsetmacro{\myx}{\lenx*(\getvalue{xmax}-\getvalue{xmin})/\plotwidth}%
\pgfmathsetmacro{\myy}{\leny*(\getvalue{ymax}-\getvalue{ymin})/\plotheight}%
\xdef#2{\myx}
\xdef#3{\myy}
%\typeout{\myx,\myy} <- for debugging
}}
\begin{document}
\begin{frame}
\frametitle{Frame title}
\centering
% Top picture
\begin{tikzpicture}[
baseline=(current bounding box.north),
trim axis left,
trim axis right
]
\begin{axis}[
width=5cm,
xmin=0,
xmax=24,
ymin=-8,
ymax=16,
xtick={10},
xticklabels={$y_e=10$},
ytick={10},
yticklabels={$r_S=10$},
clip=true
]
% Constant parameters
\pgfmathsetmacro{\isv}{22.5}
\pgfmathsetmacro{\k}{1.25}
\pgfmathsetmacro{\ye}{10}
\pgfmathsetmacro{\rs}{10}
% Vertical line corresponding to ye
\addplot [name path=ye,red] coordinates {(\ye,\pgfkeysvalueof{/pgfplots/ymin}) (\ye,\pgfkeysvalueof{/pgfplots/ymax})};
% Horizontal line corresponding to rs
\addplot [name path=rs,red] coordinates {(\pgfkeysvalueof{/pgfplots/xmin},\rs) (\pgfkeysvalueof{/pgfplots/xmax},\rs)};
% Downward sloping IS curve
\addplot [name path=is,smooth,very thick,domain=\pgfkeysvalueof{/pgfplots/xmin}:\pgfkeysvalueof{/pgfplots/xmax}] {\isv-\k*x} node [anchor=west,pos=0.85] {$IS$};
% Seek the intersection between the ye line and IS and label the point of intersection as A
\path [name intersections={of=ye and is,by={A}}] node [anchor=south west,xshift=-1mm,yshift=-1mm] at (A) {$A$}
\pgfextra{\pgfgetlastxy{\myx}{\myy}
\xdef\Absolutex{\myx}
\xdef\Absolutey{\myy}
};
\draw[blue,fill] (A) circle (2pt);
% Get the coordinates of point A
\Getxycoords{A}{\Ax}{\Ay}
\end{axis}
\node[anchor=south west,xshift=0.2cm,yshift=1.1cm, text width=3.7cm,
font=\tiny,draw] (explain) at (A){%
the node has plot coordinates (\Ax,\Ay) and absolute coordinates
(\Absolutex,\Absolutey)};
\draw[gray,-latex] (explain) to[out=-90,in=90] (A);
\end{tikzpicture}
% Bottom picture
\begin{tikzpicture}[
baseline=(current bounding box.north),
trim axis left,
trim axis right
]
\begin{axis}[
width=5cm,
xmin=0,
xmax=24,
ymin=-14,
ymax=10,
xtick={10},
xticklabels={$y_e$},
ytick={2},
yticklabels={$\pi^T$},
enlargelimits=0.1 %<-1
]
% Constant parameters
\pgfmathsetmacro{\a}{0.5}
\pgfmathsetmacro{\pe}{2}
\pgfmathsetmacro{\pt}{2}
\pgfmathsetmacro{\ye}{\Ax} % <-- Step 2: I need to specify at least this number as the \Ax coordinate derived from the tikzpicture above. If possible, it would be nice to insert \Ax also in the xtick list.
% Upward sloping PC curve
\addplot [name path=pc,color=black,very thick,domain=\pgfkeysvalueof{/pgfplots/xmin}:\pgfkeysvalueof{/pgfplots/xmax}] {\pe+\a*(x-\ye)} node [anchor=north,pos=0.85] {$PC$};
% Vertical line corresponding to ye
\addplot [name path=ye,red] coordinates {(\ye,\pgfkeysvalueof{/pgfplots/ymin}) (\ye,\pgfkeysvalueof{/pgfplots/ymax})};
\node [label=south:{\tiny (\Ax,\Ay)}] (B) at (axis cs:\Ax,\Ay){};
\Getxycoords{B}{\Bx}{\By}
\draw[blue,fill] (B) circle (2pt);
\end{axis}
\typeout{debug:\space\Bx,\By}
\end{tikzpicture}
\end{frame}
\end{document}
In addition, the absolute coordinates are computed. Both are shown in the upper plot.
• I had clearly misunderstood the functioning of \pgfgetlastxy. Your answer also makes clear that whichever coordinate is saved in the first picture remains defined also for the second picture. Hence, my question ultimately boils down to a conversion issue. That is, I need \Ax and \Ay to be converted to the axis cs: coordinate system, thus generating two new coordinate components (\AxCS,\AyCS). This would allow me, for instance, to insert the resulting numbers in a line such as \pgfmathsetmacro{\ye}{\AxCS} in the second picture. – Brocardo Reis Mar 16 '18 at 13:41
• To further clarify, the accepted answer to Convert from physical dimensions to axis cs coordinate values provides a way to transform the floating point associated to \Ax into the corresponding axis cs: coordinate component. Unfortunately, I fail to adapt that code so as to do the same also for \Ay. – Brocardo Reis Mar 16 '18 at 14:12
• @BrocardoReis Yes, I know, see my revised answer. ;-) – user121799 Mar 16 '18 at 14:22
• Your updated answer solves the problem of getting both the x- and y-component of the coordinate. However, to stick to my original question, you should update it by defining \pgfmathsetmacro{\ye}{\Ax} in the second picture instead of \pgfmathsetmacro{\ye}{\Ay}. Also, something is wrong with the blue arrow: its origin should correspond exactly to (\Ax,\Ay), I think. At any rate, to simplify matters feel free to edit both my question and your answers to some common range that will fit all necessary points. – Brocardo Reis Mar 16 '18 at 14:36
• @BrocardoReis I updated my answer. The reason why B was at the wrong spot was that nodes are extended objects. I fixed this by using a label to display the coordinates. And I did not really have to modify your setting, I only added enlargelimits=0.1 to the options of the second plot such that the coordinate is shown. – user121799 Mar 16 '18 at 14:52
With the release of PGFPlots v1.16 it is now possible to store (axis) coordinates with \pgfplotspointgetcoordinates in data point, which then can be called by \pgfkeysvalueof or \pgfkeysgetvalue. With this it is quite simple to adapt/simplify the \Getxycoords macro given in marmot's answer.
% used PGFPlots v1.16
\documentclass[border=5pt,varwidth]{standalone}
\usepackage{pgfplots}
\usetikzlibrary{
intersections,
}
% create a custom style to store common axis' options
\pgfplotsset{
my axis style/.style={
width=5cm,
xmin=0,
xmax=24,
domain=\pgfkeysvalueof{/pgfplots/xmin}:\pgfkeysvalueof{/pgfplots/xmax},
samples=2,
clip mode=individual,
},
}
% ---------------------------------------------------------------------
% Coordinate extraction
% #1: node name
% #2: output macro name: x coordinate
% #3: output macro name: y coordinate
\newcommand{\Getxycoords}[3]{%
\pgfplotsextra{%
% using `\pgfplotspointgetcoordinates' stores the (axis)
% coordinates in `data point' which then can be called by
% `\pgfkeysvalueof' or `\pgfkeysgetvalue'
\pgfplotspointgetcoordinates{(#1)}%
% `\global' (a TeX macro and not a TikZ/PGFPlots one) allows one to
% store the values globally
\global\pgfkeysgetvalue{/data point/x}{#2}%
\global\pgfkeysgetvalue{/data point/y}{#3}%
}%
}
% ---------------------------------------------------------------------
\begin{document}
\raggedleft
% Top picture
\begin{tikzpicture}
\begin{axis}[
my axis style,
%
ymin=-8,
ymax=16,
xtick={10},
xticklabels={$y_e=10$},
ytick={10},
yticklabels={$r_S=10$},
]
% Constant parameters
\pgfmathsetmacro{\isv}{22.5}
\pgfmathsetmacro{\k}{1.25}
\pgfmathsetmacro{\ye}{10}
\pgfmathsetmacro{\rs}{10}
% Vertical line corresponding to ye
(\ye,\pgfkeysvalueof{/pgfplots/ymin})
(\ye,\pgfkeysvalueof{/pgfplots/ymax})
};
% Horizontal line corresponding to rs
\addplot [name path=rs,red] coordinates {
(\pgfkeysvalueof{/pgfplots/xmin},\rs)
(\pgfkeysvalueof{/pgfplots/xmax},\rs)
};
% Downward sloping IS curve
\addplot [
name path=is,
smooth,
very thick,
] {\isv-\k*x}
node [anchor=west,pos=0.85] {$IS$}
;
% Seek the intersection between the ye line and IS and label the point of intersection as A
\path [name intersections={of=ye and is,by={A}}]
node [anchor=south west,xshift=-1mm,yshift=-1mm] at (A) {$A$}
;
% Get the coordinates of point A
\Getxycoords{A}{\Ax}{\Ay}
% Print the coordinates next to the A label
\node [
anchor=south west,
xshift=2mm,
yshift=-1mm,
/pgf/number format/precision=3,
] at (A) {\tiny (%
\pgfmathprintnumber{\Ax},%
\pgfmathprintnumber{\Ay}%
)};
\end{axis}
\end{tikzpicture}
% Bottom picture
\begin{tikzpicture}
\begin{axis}[
my axis style,
%
ymin=-14,
ymax=10,
xtick={\Ax}, % the stored value can used (almost) wherever you want
xticklabels={$y_e$},
ytick={2},
yticklabels={$\pi^T$},
]
% Constant parameters
\pgfmathsetmacro{\a}{0.5}
\pgfmathsetmacro{\pe}{2}
\pgfmathsetmacro{\pt}{2}
\pgfmathsetmacro{\ye}{\Ax} % of course also here
% Upward sloping PC curve
\addplot [
name path=pc,
very thick,
] {\pe+\a*(x-\ye)}
node [anchor=north,pos=0.85] {$PC$}
;
% Vertical line corresponding to ye
\addplot [name path=ye,red] coordinates {
(\ye,\pgfkeysvalueof{/pgfplots/ymin})
(\ye,\pgfkeysvalueof{/pgfplots/ymax})
};
\end{axis}
\end{tikzpicture}
\end{document}
# Grounded conductor charge distribution
1. Oct 28, 2013
### MrAlbot
Hello, I've been trying to understand how grounding a conductor affects its charge distribution.
So, for example, let's assume there are three spherical shells with radii R1, R2 and R3. Suppose I charge the R1 shell with q and the R3 shell with -q, and I connect the R2 shell to ground, and now I want to find out what the charge distribution is over the inside and outside of these shells.
As far as I can see, the middle shell acts like a perfect Faraday cage, and the charge -q on R3 will spread over its outside surface.
So the electric field should only exist between R1 and R2 and outside the bigger shell (r>R3).
The charge distribution would be:
R1- = 0
R1+ = q
R2- = -q
R2+ = 0
R3- = 0
R3+ = -q
This is based on the idea that if the cage contains the electric field inside the larger (R3) shell, then Gauss's law would work out (* now that I see it, I think here is the problem, because the potential must somehow go back to zero *).
Although I've seen people say that the distribution should be:
R1- = 0
R1+ = q
R2- = -q
R2+ = q'
R3- = -q'
R3+ = -q+q'
and then be solved (knowing that the potential of the middle sphere must be zero):
the potential calculated from infinity to R2 must be zero, so the potential from infinity to R3 plus the potential from R3 to R2 must be zero, and these two potentials must cancel.
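Filling in that calculation (a sketch using Gauss's law, writing k = 1/(4*pi*epsilon_0)): with the second distribution, the charge enclosed by a sphere of radius r is q' for R2 < r < R3 and q' - q for r > R3, so

```latex
V(R_2)
  = \underbrace{\frac{k\,(q'-q)}{R_3}}_{\infty \to R_3}
  + \underbrace{k\,q'\left(\frac{1}{R_2}-\frac{1}{R_3}\right)}_{R_3 \to R_2}
  = 0
  \quad\Longrightarrow\quad
  q' = q\,\frac{R_2}{R_3}
```

which also fixes the charge on the outer surface of R3 to -q + q' = -q(1 - R2/R3).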
In the meantime, while I was writing all of this down, I came to the conclusion that the second option is the correct one, but I was hoping I could have someone else's approval on this.
Pedro
2. Oct 29, 2013
### Simon Bridge
Grounding a conductor provides a large amount of charges to balance out whatever the conductor has.
The effect is that any excess charge on the conductor gets conducted away to ground - unless there is some other reason for them to stick around. (It also sets the potential of the surface that got grounded to the same as the ground - usually "zero".) This means that attracted charges get to stay, repelled charges get conducted away.
R1<R2<R3?
If it were not grounded, then the charges in the shell would be able to redistribute to cancel out any external field.
Taking R1<R2<R3:
Ignore the grounding for a bit - there would be an electric field everywhere except inside R1.
The neutral shell at R2 has no effect on the field.
The charge at R3 has no contribution to the field for r<R3.
For r>R3 there are equal and opposite contributions from R1 and R3... so there is no field.
Now you want to look carefully at the (induced) charge at R3.
Once you have that - then you can figure out what happens when R2 is grounded.
# Thread: Fun Problem: Field Theory on S2!
1. ## Fun Problem: Field Theory on S2!
Ok, I worked this problem out the other night. It's kind of cute, and straightforward for a scalar field.
Consider field theory on $\mathbb{R}^4\times S^2$. Given the six dimensional Klein-Gordon equation for a massless scalar field:
$\Box_6 \Phi = 0$,
derive the Kaluza-Klein mode expansion.
This problem is pretty easy, so let's complicate it! Define an operation on the sphere such that $\theta \rightarrow \theta + \pi$, where $\theta$ is the azimuthal angle. What does this do to the sphere? What happens to the points at the north and south pole of the sphere?
Next, define a parity operation on the six dimensional scalar field, such that
$\mathcal{P}: \Phi(x,\theta,\phi)\mapsto\Phi(x,\theta+\pi,\phi) = \pm \Phi(x,\theta,\phi)$
Now what do the mode expansions look like, given that $\Phi$ can be either even or odd under the operation?
For extra credit, work out the normalizations of the scalar field so that it is canonically normalized.
2. I assume the Kaluza-Klein mode expansion of the solution is
$\sum_{\ell,|m| \le \ell} Y_{\ell}^{m}(\phi,\theta) e^{i\mathbf{p}\cdot\mathbf{x}} = \sum_{\ell,|m| \le \ell} \sqrt{{(2\ell+1)\over 4\pi}{(\ell-m)!\over (\ell+m)!}} P_{\ell}^{m}(\cos \phi) e^{i m \theta} e^{i\mathbf{p}\cdot\mathbf{x}}$
Let's call your operation the Ben Twist. It seems to rotate the sphere about its polar axis by one half revolution. Solutions (even at the north and south pole) pick up a phase factor of $(-1)^m$.
So the parity is even for even $m$ and odd otherwise.
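A quick stdlib check of that phase (a sketch, assuming the standard $e^{im\theta}$ azimuthal dependence of the modes):

```python
# Under theta -> theta + pi each azimuthal factor exp(i*m*theta)
# picks up exp(i*m*pi) = (-1)**m, independent of theta.
import cmath, math

theta = 0.3  # arbitrary test angle
for m in range(-3, 4):
    phase = cmath.exp(1j * m * (theta + math.pi)) / cmath.exp(1j * m * theta)
    assert abs(phase - (-1) ** m) < 1e-12
print("phase is (-1)**m for every m tested")
```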
Or I could be completely off base since I never took a graduate physics course.
3. Anyway, the "Ben Twist" is actually an orbifold---something about which AN knows quite a bit.
It's not a proper manifold, because you have these singular points (conical singularities) called fixed points. These are points which are unaffected (i.e. map to themselves) by the orbifold action.
The poles are fixed points in this construction, I think, because they are poorly defined by the spherical coordinate system. You might ask whether the fixed points are still fixed if you put a different coordinate system on the sphere---I think the answer is yes, because any coordinates will have places where they don't work.
4. I'm lost on terminology. I thought the orbifold was the quotient space, not the operation -- since BenTwist is an action faithful to the non-trivial element of Z_2 (such that $\mathrm{BenTwist}^2 = I$). So, I think that's written as $\mathbb{R}^4 \times (S^2/\mathbb{Z}_2)$ -- Seeing as I don't know how to make the mass spectrum out of this six-dimensional Kaluza-Klein theory or even the basics of orbifolds, I'll shut up now.
5. I thought the orbifold was the quotient space not the operation
Ahh yes. We typically call the operation "orbifolding", which probably pisses QuarkHead and Guest off (no offense!).
Seeing as I don't know how to make the mass spectrum out of this six- dimensional Kaluza-Klein theory or even the basics of orbifolds, I'll shut up now.
Would you like to learn?
6. Originally Posted by BenTheMan
Would you like to learn?
If you have the time. I haven't internalized most of the quantum field theory textbooks on my shelf, and only Kaku mentions our friend Kaluza-Klein.
7. Sure it's easy.
Presumably you know how to do separation of variables?
Let's start with an easier example. Suppose you have a scalar field on something like $\mathbb{R}^4\times S^1$. The five dimensional Klein-Gordon Equation for a massless scalar looks like
$\Box_5 \Phi(x,y) = 0$.
Explicitly, we can write this as
$\Box_4 \Phi(x,y) - \frac{\partial^2}{\partial y^2} \Phi(x,y) = 0$.
Now take an ansatz for $\Phi(x,y) = \sum \phi_n(x) f_n(y)$.
What kinds of functions should f(y) be?
8. Also, this is a bit tricky---where does the minus sign in $\Box_4 \Phi - \partial_y\partial^y \Phi = 0$ come from?
(This one ALWAYS screws me up)
9. Originally Posted by BenTheMan
Sure it's easy.
Presumably you know how to do separation of variables?
Presumably. If it's 100 years old and in math undergrad materials, I should have a grip on it.
Originally Posted by BenTheMan
Let's start with an easier example. Suppose you have a scalar field on something like $\mathbb{R}^4\times S^1$. The five dimensional Klein-Gordon Equation for a massless scalar looks like
$\Box_5 \Phi(x,y) = 0$.
Explicitly, we can write this as
$\Box_4 \Phi(x,y) - \frac{\partial^2}{\partial y^2} \Phi(x,y) = 0$.
Or to the very slow among us:
$\frac{\partial^2}{\partial x_0^2} \Phi(x,y) - \nabla^2 \Phi(x,y) - \frac{\partial^2}{\partial y^2} \Phi(x,y) = \frac{\partial^2}{\partial x_0^2} \Phi(x,y) - \frac{\partial^2}{\partial x_1^2} \Phi(x,y) - \frac{\partial^2}{\partial x_2^2} \Phi(x,y) - \frac{\partial^2}{\partial x_3^2} \Phi(x,y) - \frac{\partial^2}{\partial y^2} \Phi(x,y) = 0$
Originally Posted by BenTheMan
Also, this is a bit tricky---where does the minus sign in $\Box_4 \Phi - \partial_y\partial^y \Phi = 0$ come from?
(This one ALWAYS screws me up)
That would come from the space-like metric convention of your author/professor/research team, which I surmise is +---.
Originally Posted by BenTheMan
Now take an ansatz for $\Phi(x,y) = \sum \phi_n(x) f_n(y)$.
What kinds of functions should f(y) be?
Since I know the solutions for $\Box_4 \phi = 0$ are $e^{i E x_0 - i \mathbf{p} \cdot \mathbf{x}} = e^{i (E x_0 - p_1 x_1 - p_2 x_2 - p_3 x_3)} = e^{i E x_0}e^{-i p_1 x_1}e^{-i p_2 x_2}e^{-i p_3 x_3}$ I surmise that a good candidate would be $f_n(y) = e^{-i \, \mathrm{something} \, y}$, but by the requirement that y be a coordinate that closes $S^1$ (a circle), the function must be periodic, and so we get quantization. Rescale y to the range $\left[ 0, 2\pi \right)$ and we get $f_n(y) = e^{-iny}$. (Or if y is an actual spatial distance, then there is another distance r such that y/r is in radians, and we get $f_n(y) = e^{-iny/r}$.)
Since $\frac{\partial^2}{\partial y^2} g(x)f_n(y) = -(\frac{n}{r})^2 g(x)f_n(y)$ it follows that for fixed n, a solution to $\Box_5 \Phi(x,y) = 0$ looks a lot like $(\Box_4 + \mu^2) \Phi(x,y) = 0$.
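A numerical sanity check of that last step (a sketch; the values of n, r, y below are arbitrary): the mode $f_n(y) = e^{iny/r}$ obeys $\partial_y^2 f_n = -(n/r)^2 f_n$, which is exactly the four-dimensional mass term $\mu^2 = (n/r)^2$.

```python
# Check numerically that f_n(y) = exp(i*n*y/r) is an eigenfunction of
# d^2/dy^2 with eigenvalue -(n/r)^2, i.e. the KK mass mu^2 = (n/r)^2.
import cmath

def f(y, n, r):
    return cmath.exp(1j * n * y / r)

def second_derivative(g, y, h=1e-4):
    # central finite difference
    return (g(y + h) - 2 * g(y) + g(y - h)) / h ** 2

n, r, y = 3, 2.0, 0.7
lhs = second_derivative(lambda t: f(t, n, r), y)
rhs = -(n / r) ** 2 * f(y, n, r)
print(abs(lhs - rhs))  # tiny: only discretization error remains
```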
10. Originally Posted by rpenner
That would come from the space-like metric convention of your author/professor/research team, which I surmise is +---.
Yes! This minus sign always kills me.
Since $\frac{\partial^2}{\partial y^2} g(x)f_n(y) = -(\frac{n}{r})^2 g(x)f_n(y)$ it follows that for fixed n, a solution to $\Box_5 \Phi(x,y) = 0$ looks a lot like $(\Box_4 + \mu^2) \Phi(x,y) = 0$.
And that's all there is to it! Welcome to the illustrious world of Kaluza Klein modes.
11. Checking.
$-\nabla^2 Y_{\ell}^{m}(\phi, \theta) = - {{1}\over{r^2 \sin \phi}} {{\partial}\over{\partial \phi}} \left( \sin \phi \, {{\partial}\over{\partial \phi}} Y_{\ell}^{m}(\phi, \theta) \right) - {{1}\over{r^2 \sin^2 \phi}} {{\partial^2}\over{\partial \theta^2}} Y_{\ell}^{m}(\phi, \theta) = {{\ell(\ell+1)}\over{r^2}} Y_{\ell}^{m}(\phi, \theta) = \mu^2 Y_{\ell}^{m}(\phi, \theta)$, right?
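That eigenvalue can be spot-checked numerically (a sketch, not part of the original post; conventions as above with $\phi$ polar, $\theta$ azimuthal, and $r = 1$, so $-\nabla^2 Y_2^1$ should equal $\ell(\ell+1) = 6$ times $Y_2^1$):

```python
# Finite-difference check that -Laplacian_{S^2} Y_2^1 = 6 * Y_2^1,
# with phi the polar and theta the azimuthal angle (r = 1).
import cmath, math

def Y21(phi, theta):
    # Y_2^1 up to normalization: sin(phi) * cos(phi) * e^{i theta}
    return math.sin(phi) * math.cos(phi) * cmath.exp(1j * theta)

def minus_laplacian(Yf, phi, theta, h=1e-4):
    # -(1/sin phi) d/dphi( sin phi dY/dphi ) - (1/sin^2 phi) d^2Y/dtheta^2
    dY_dphi = lambda p: (Yf(p + h, theta) - Yf(p - h, theta)) / (2 * h)
    g = lambda p: math.sin(p) * dY_dphi(p)
    polar = -(g(phi + h) - g(phi - h)) / (2 * h) / math.sin(phi)
    azim = -(Yf(phi, theta + h) - 2 * Yf(phi, theta) + Yf(phi, theta - h)) / (h * h) / math.sin(phi) ** 2
    return polar + azim

phi, theta = 0.9, 0.4  # arbitrary point away from the poles
ratio = minus_laplacian(Y21, phi, theta) / Y21(phi, theta)
print(ratio)  # close to l(l+1) = 6
```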
12. Originally Posted by BenTheMan
Yes! This minus sign always kills me.
Cliff Burgess refers to the +--- metric as 'the wrong metric'. Doesn't give proper Euclidean space when you do a Wick rotation on it.
Originally Posted by BenTheMan
And that's all there is to it! Welcome to the illustrious world of Kaluza Klein modes.
Plus, if you're doing any kind of vague phenomenology with KK modes you just set n=0, since all n>0 are masses of order the Planck mass
13. Originally Posted by AlphaNumeric
Plus, if you're doing any kind of vague phenomenology with KK modes you just set n=0, since all n>0 are masses of order the Planck mass
Well, not always. Suppose you have a dimension that's [url=http://arxiv.org/abs/0805.4186]bigger than the string scale...[/url]
• Algorithmica (IF 0.65) Pub Date : 2021-01-08
Erik D. Demaine, Yamming Huang, Chung-Shou Liao, Kunihiko Sadakane
In this paper, we study online algorithms for the Canadian Traveller Problem defined by Papadimitriou and Yannakakis in 1991. This problem involves a traveller who knows the entire road network in advance, and wishes to travel as quickly as possible from a source vertex s to a destination vertex t, but discovers online that some roads are blocked (e.g., by snow) once reaching them. Achieving a bounded
Updated: 2021-01-08
• Algorithmica (IF 0.65) Pub Date : 2021-01-08
Gabriel L. Duarte, Hiroshi Eto, Tesshu Hanaka, Yasuaki Kobayashi, Yusuke Kobayashi, Daniel Lokshtanov, Lehilton L. C. Pedrosa, Rafael C. S. Schouery, Uéverton S. Souza
The cut-set $$\partial (S)$$ of a graph $$G=(V,E)$$ is the set of edges that have one endpoint in $$S\subset V$$ and the other endpoint in $$V\setminus S$$, and whenever G[S] is connected, the cut $$[S,V\setminus S]$$ of G is called a connected cut. A bond of a graph G is an inclusion-wise minimal disconnecting set of G, i.e., bonds are cut-sets that determine cuts $$[S,V\setminus S]$$ of G such that
Updated: 2021-01-08
• Algorithmica (IF 0.65) Pub Date : 2021-01-04
Marc Bury, Michele Gentili, Chris Schwiegelshohn, Mara Sorella
In this paper, we investigate algorithms for finding centers of a given collection $$\mathcal N$$ of sets. In particular, we focus on metric rational set similarities, a broad class of similarity measures including Jaccard and Hamming. A rational set similarity S is called metric if $$D=1-S$$ is a distance function. We study the 1-center problem on these metric spaces. The problem consists of finding
Updated: 2021-01-05
• Algorithmica (IF 0.65) Pub Date : 2021-01-04
Stéphane Bessy, Marin Bougeret, R. Krithika, Abhishek Sahu, Saket Saurabh, Jocelyn Thiebaut, Meirav Zehavi
A tournament is a directed graph in which there is a single arc between every pair of distinct vertices. Given a tournament T on n vertices, we explore the classical and parameterized complexity of the problems of determining if T has a cycle packing (a set of pairwise arc-disjoint cycles) of size k and a triangle packing (a set of pairwise arc-disjoint triangles) of size k. We refer to these problems
Updated: 2021-01-05
• Algorithmica (IF 0.65) Pub Date : 2021-01-04
Marios Mavronicolas, Loizos Michael, Vicky Papadopoulou Lesta, Giuseppe Persiano, Anna Philippou, Paul G. Spirakis
We consider a game on a graph $$G=\langle V, E\rangle$$ with two confronting classes of randomized players: $$\nu$$ attackers, who choose vertices and seek to minimize the probability of getting caught, and a single defender, who chooses edges and seeks to maximize the expected number of attackers it catches. In a Nash equilibrium, no player has an incentive to unilaterally deviate from her randomized
Updated: 2021-01-05
• Algorithmica (IF 0.65) Pub Date : 2021-01-04
Pat Morin
Dujmović et al. (FOCS2019) recently proved that every planar graph G is a subgraph of $$H\boxtimes P$$, where $$\boxtimes$$ denotes the strong graph product, H is a graph of treewidth 8 and P is a path. This result has found numerous applications to linear graph layouts, graph colouring, and graph labelling. The proof given by Dujmović et al. is based on a similar decomposition of Pilipczuk and Siebertz
Updated: 2021-01-05
• Algorithmica (IF 0.65) Pub Date : 2021-01-03
Chien-Chung Huang, Naonori Kakimura
In this paper, we consider the problem of maximizing a monotone submodular function subject to a knapsack constraint in a streaming setting. In such a setting, elements arrive sequentially and at any point in time, and the algorithm can store only a small fraction of the elements that have arrived so far. For the special case that all elements have unit sizes (i.e., the cardinality-constraint case)
Updated: 2021-01-03
• Algorithmica (IF 0.65) Pub Date : 2021-01-02
Hugo A. Akitaya, Esther M. Arkin, Mirela Damian, Erik D. Demaine, Vida Dujmović, Robin Flatland, Matias Korman, Belen Palop, Irene Parada, André van Renssen, Vera Sacristán
We present the first universal reconfiguration algorithm for transforming a modular robot between any two facet-connected square-grid configurations using pivot moves. More precisely, we show that five extra “helper” modules (“musketeers”) suffice to reconfigure the remaining n modules between any two given configurations. Our algorithm uses $$O(n^2)$$ pivot moves, which is worst-case optimal. Previous
Updated: 2021-01-02
• Algorithmica (IF 0.65) Pub Date : 2020-11-23
Chi-Yeh Chen, Sun-Yuan Hsieh, Hoang-Oanh Le, Van Bang Le, Sheng-Lung Peng
In a graph, a matching cut is an edge cut that is a matching. Matching Cut is the problem of deciding whether or not a given graph has a matching cut, which is known to be $${\mathsf {NP}}$$-complete. While Matching Cut is trivial for graphs with minimum degree at most one, it is $${\mathsf {NP}}$$-complete on graphs with minimum degree two. In this paper, we show that, for any given constant $$c>1$$
Updated: 2020-11-23
• Algorithmica (IF 0.65) Pub Date : 2020-11-13
Benjamin Doerr
In the first and so far only mathematical runtime analysis of an estimation-of-distribution algorithm (EDA) on a multimodal problem, Hasenöhrl and Sutton (GECCO 2018) showed for any $$k = o(n)$$ that the compact genetic algorithm (cGA) with any hypothetical population size $$\mu = \Omega (ne^{4k} + n^{3.5+\varepsilon })$$ with high probability finds the optimum of the n-dimensional jump function with
Updated: 2020-11-13
• Algorithmica (IF 0.65) Pub Date : 2020-11-13
Stefano Leonardi, Gianpiero Monaco, Piotr Sankowski, Qiang Zhang
Motivated by many practical applications, in this paper we study budget feasible mechanisms with the goal of procuring an independent set of a matroid. More specifically, we are given a matroid $${\mathcal {M}}=(E,{\mathcal {I}})$$. Each element of the ground set E is controlled by a selfish agent and the cost of the element is private information of the agent itself. A budget limited buyer has additive
Updated: 2020-11-13
• Algorithmica (IF 0.65) Pub Date : 2020-11-04
Benjamin Bergougnoux, Eduard Eiben, Robert Ganian, Sebastian Ordyniak, M. S. Ramanujan
In the Directed Feedback Vertex Set (DFVS) problem, the input is a directed graph D and an integer k. The objective is to determine whether there exists a set of at most k vertices intersecting every directed cycle of D. DFVS was shown to be fixed-parameter tractable when parameterized by solution size by Chen et al. (J ACM 55(5):177–186, 2008); since then, the existence of a polynomial kernel for
Updated: 2020-11-04
• Algorithmica (IF 0.65) Pub Date : 2020-11-04
Johannes Lengler, Dirk Sudholt, Carsten Witt
The compact Genetic Algorithm (cGA) evolves a probability distribution favoring optimal solutions in the underlying search space by repeatedly sampling from the distribution and updating it according to promising samples. We study the intricate dynamics of the cGA on the test function OneMax, and how its performance depends on the hypothetical population size K, which determines how quickly decisions
Updated: 2020-11-04
• Algorithmica (IF 0.65) Pub Date : 2020-11-04
Andrew M. Sutton, Carsten Witt
The runtime analysis of evolutionary algorithms using crossover as search operator has recently produced remarkable results indicating benefits and drawbacks of crossover and illustrating its working principles. Virtually all these results are restricted to upper bounds on the running time of the crossover-based algorithms. This work addresses this lack of lower bounds and rigorously bounds the optimization
Updated: 2020-11-04
• Algorithmica (IF 0.65) Pub Date : 2020-11-04
Frank Neumann, Mojgan Pourhassan, Carsten Witt
In the last decade remarkable progress has been made in development of suitable proof techniques for analysing randomised search heuristics. The theoretical investigation of these algorithms on classes of functions is essential to the understanding of the underlying stochastic process. Linear functions have been traditionally studied in this area resulting in tight bounds on the expected optimisation
Updated: 2020-11-04
• Algorithmica (IF 0.65) Pub Date : 2020-10-16
Akira Matsubayashi
This paper addresses the classic online Steiner tree problem on edge-weighted graphs. It is known that a greedy (nearest neighbor) online algorithm has a tight competitive ratio for wide classes of graphs, such as trees, rings, any class including series-parallel graphs, and unweighted graphs with bounded diameter. However, we do not know any greedy or non-greedy tight deterministic algorithm for other
Updated: 2020-10-17
• Algorithmica (IF 0.65) Pub Date : 2020-10-15
Édouard Bonnet, Sergio Cabello, Bojan Mohar, Hebert Pérez-Rosés
We consider the inverse Voronoi diagram problem in trees: given a tree T with positive edge-lengths and a collection $$\mathbb {U}$$ of subsets of vertices of V(T), decide whether $${\mathbb {U}}$$ is a Voronoi diagram in T with respect to the shortest-path metric. We show that the problem can be solved in $$O(N+n \log ^2 n)$$ time, where n is the number of vertices in T and $$N=n+\sum _{U\in {\mathbb 更新日期:2020-10-15 • Algorithmica (IF 0.65) Pub Date : 2020-10-10 Akanksha Agrawal, Fahad Panolan, Saket Saurabh, Meirav Zehavi Agrawal et al. (ACM Trans Comput Theory 10(4):18:1–18:25, 2018. https://doi.org/10.1145/3265027) studied a simultaneous variant of the classic Feedback Vertex Set problem, called Simultaneous Feedback Vertex Set (Sim-FVS). Here, we consider the edge variant of the problem, namely, Simultaneous Feedback Edge Set (Sim-FES). In this problem, the input is an n-vertex graph G, a positive integer k, and 更新日期:2020-10-11 • Algorithmica (IF 0.65) Pub Date : 2020-10-03 Harry Buhrman, Matthias Christandl, Michal Koucký, Zvi Lotker, Boaz Patt-Shamir, Nikolay Vereshchagin We study the two party problem of randomly selecting a common string among all the strings of length n. We want the protocol to have the property that the output distribution has high Shannon entropy or high min entropy, even when one of the two parties is dishonest and deviates from the protocol. We develop protocols that achieve high, close to n, Shannon entropy and simultaneously min entropy close 更新日期:2020-10-04 • Algorithmica (IF 0.65) Pub Date : 2020-10-03 Diodato Ferraioli, Carmine Ventre Obvious strategyproofness (OSP) is an appealing concept as it allows to maintain incentive compatibility even in the presence of agents that are not fully rational, i.e., those who struggle with contingent reasoning (Li in Am Econ Rev 107(11):3257–3287, 2017). 
However, it has been shown to impose some limitations, e.g., no OSP mechanism can return a stable matching (Ashlagi and Gonczarowski in J Econ
Updated: 2020-10-04
• Algorithmica (IF 0.65) Pub Date : 2020-09-29
Santanu Bhowmick, Tanmay Inamdar, Kasturi Varadarajan
In this article, we study some fault-tolerant covering problems in metric spaces. In the metric multi-cover problem (MMC), we are given two point sets Y (servers) and X (clients) in an arbitrary metric space $$(X \cup Y, d)$$, a positive integer k that represents the coverage demand of each client, and a constant $$\alpha \ge 1$$. Each server can host a single ball of arbitrary radius centered on it
Updated: 2020-09-29
• Algorithmica (IF 0.65) Pub Date : 2020-09-27
Therese Biedl, Saeed Mehrabi
There exist many variants of guarding an orthogonal polygon in an orthogonal fashion: sometimes a guard can see within a rectangle, along a staircase, or along an orthogonal path with at most k bends. In this paper, we study all these guarding models for the special case of orthogonal polygons that have bounded treewidth in some sense. As our main result, we show that the problem of finding the minimum
Updated: 2020-09-28
• Algorithmica (IF 0.65) Pub Date : 2020-09-19
Vittorio Bilò, Marios Mavronicolas
We revisit the complexity of deciding, given a bimatrix game, whether it has a Nash equilibrium with certain natural properties; such decision problems were early known to be $${{\mathcal{N}}{\mathcal{P}}}$$-hard (Gilboa and Zemel in Games Econ Behav 1(1):80–93, 1989). We show that $${{\mathcal{N}}{\mathcal{P}}}$$-hardness still holds under two significant restrictions in simultaneity: the game is
Updated: 2020-09-20
• Algorithmica (IF 0.65) Pub Date : 2020-09-18
Ghurumuruhan Ganesan
Let $$G$$ be a random geometric graph formed by $$n$$ nodes with adjacency distance $$r_n$$ and let each edge of $$G$$ be assigned an independent exponential passage time with mean that depends on the graph size $$n.$$ We connect $$G$$ to two nodes source $$s_A$$ and destination $$s_B$$ at deterministic locations spaced $$d_n$$ apart in the unit square and find upper and lower bounds on the minimum
Updated: 2020-09-20
• Algorithmica (IF 0.65) Pub Date : 2020-09-18
Myriam Preissmann, Cléophée Robin, Nicolas Trotignon
A graph G is prismatic if for every triangle T of G, every vertex of G not in T has a unique neighbour in T. The complement of a prismatic graph is called antiprismatic. The complexity of colouring antiprismatic graphs is still unknown. Equivalently, the complexity of the clique cover problem in prismatic graphs is not known. Chudnovsky and Seymour gave a full structural description of prismatic graphs
Updated: 2020-09-20
• Algorithmica (IF 0.65) Pub Date : 2020-09-15
Zeev Nutov
In the Tree Augmentation problem we are given a tree $$T=(V,F)$$ and a set $$E \subseteq V \times V$$ of edges with positive integer costs $$\{c_e:e \in E\}$$. The goal is to augment T by a minimum cost edge set $$J \subseteq E$$ such that $$T \cup J$$ is 2-edge-connected. We obtain the following results. Recently, Adjiashvili [SODA 17] introduced a novel LP for the problem and used it to break the
Updated: 2020-09-15
• Algorithmica (IF 0.65) Pub Date : 2020-09-14
Victor Chepoi, Arnaud Labourel, Sébastien Ratel
Distance labeling schemes are schemes that label the vertices of a graph with short labels in such a way that the distance between any two vertices u and v can be determined efficiently by merely inspecting the labels of u and v, without using any other information. Similarly, routing labeling schemes label the vertices of a graph in a such a way that given the labels of a source node and a destination
Updated: 2020-09-14
• Algorithmica (IF 0.65) Pub Date : 2020-09-04
Sándor P. Fekete, Robert Gmyr, Sabrina Hugo, Phillip Keldenich, Christian Scheffer, Arne Schmidt
We contribute results for a set of fundamental problems in the context of programmable matter by presenting algorithmic methods for evaluating and manipulating a collective of particles by a finite automaton that can neither store significant amounts of data, nor perform complex computations, and is limited to a handful of possible physical operations. We provide a toolbox for carrying out fundamental
Updated: 2020-09-05
• Algorithmica (IF 0.65) Pub Date : 2020-09-03
Jeremy Kun, Michael P. O’Brien, Marcin Pilipczuk, Blair D. Sullivan
Low-treedepth colorings are an important tool for algorithms that exploit structure in classes of bounded expansion; they guarantee subgraphs that use few colors have bounded treedepth. These colorings have an implicit tradeoff between the total number of colors used and the treedepth bound, and prior empirical work suggests that the former dominates the run time of existing algorithms in practice
Updated: 2020-09-03
• Algorithmica (IF 0.65) Pub Date : 2020-09-02
Angel A. Cantu, Austin Luchsinger, Robert Schweller, Tim Wylie
Traditionally, computation within self-assembly models is hard to conceal because the self-assembly process generates a crystalline assembly whose computational history is inherently part of the structure itself. With no way to remove information from the computation, this computational model offers a unique problem: how can computational input and computation be hidden while still computing and reporting
Updated: 2020-09-02
• Algorithmica (IF 0.65) Pub Date : 2020-08-25
Susanne Albers, Sebastian Schraink
We resolve a number of long-standing open problems in online graph coloring. More specifically, we develop tight lower bounds on the performance of online algorithms for fundamental graph classes. An important contribution is that our bounds also hold for randomized online algorithms, for which hardly any results were known. Technically, we construct lower bounds for chordal graphs. The constructions
Updated: 2020-08-25
• Algorithmica (IF 0.65) Pub Date : 2020-08-19
Robert Ganian, Fabian Klute, Sebastian Ordyniak
We study the parameterized complexity of the Bounded-Degree Vertex Deletion problem (BDD), where the aim is to find a maximum induced subgraph whose maximum degree is below a given degree bound. Our focus lies on parameters that measure the structural properties of the input instance. We first show that the problem is W[1]-hard parameterized by a wide range of fairly restrictive structural parameters
Updated: 2020-08-19
• Algorithmica (IF 0.65) Pub Date : 2020-08-14
Moran Feldman
We consider the problem of maximizing the sum of a monotone submodular function and a linear function subject to a general solvable polytope constraint. Recently, Sviridenko et al. (Math Oper Res 42(4):1197–1218, 2017) described an algorithm for this problem whose approximation guarantee is optimal in some intuitive and formal senses. Unfortunately, this algorithm involves a guessing step which makes
Updated: 2020-08-14
• Algorithmica (IF 0.65) Pub Date : 2020-08-05
Marthe Bonamy, Nicolas Bousquet, Konrad K. Dabrowski, Matthew Johnson, Daniël Paulusma, Théo Pierron
We resolve the computational complexity of Graph Isomorphism for classes of graphs characterized by two forbidden induced subgraphs $$H_{1}$$ and $$H_2$$ for all but six pairs $$(H_1,H_2)$$. Schweitzer had previously shown that the number of open cases was finite, but without specifying the open cases. Grohe and Schweitzer proved that Graph Isomorphism is polynomial-time solvable on graph classes
Updated: 2020-08-05
• Algorithmica (IF 0.65) Pub Date : 2020-08-03
Mong-Jen Kao
We provide a simple and novel algorithmic design technique, for which we call iterative partial rounding, that gives a tight rounding-based approximation for vertex cover with hard capacities (VC-HC). In particular, we obtain an f-approximation for VC-HC on hypergraphs, improving over a previous results of Cheung et al. (In: SODA’14, 2014) to the tight extent. This also closes the gap of approximation
Updated: 2020-08-03
• Algorithmica (IF 0.65) Pub Date : 2020-08-03
Sarah Blind, Kolja Knauer, Petru Valicov
We study the problem of enumerating the k-arc-connected orientations of a graph G, i.e., generating each exactly once. A first algorithm using submodular flow optimization is easy to state, but intricate to implement. In a second approach we present a simple algorithm with $$O(knm^2)$$ time delay and amortized time $$O(m^2)$$, which improves over the analysis of the submodular flow algorithm. As ingredients
Updated: 2020-08-03
• Algorithmica (IF 0.65) Pub Date : 2020-08-03
Amos Beimel, Kobbi Nissim, Uri Stemmer
A private learner is an algorithm that given a sample of labeled individual examples outputs a generalizing hypothesis while preserving the privacy of each individual. In 2008, Kasiviswanathan et al. (FOCS 2008) gave a generic construction of private learners, in which the sample complexity is (generally) higher than what is needed for non-private learners. This gap in the sample complexity was then
Updated: 2020-08-03
• Algorithmica (IF 0.65) Pub Date : 2020-07-31
Jana Novotná, Karolina Okrasa, Michał Pilipczuk, Paweł Rzążewski, Erik Jan van Leeuwen, Bartosz Walczak
Let $${\mathcal {C}}$$ and $${\mathcal {D}}$$ be hereditary graph classes. Consider the following problem: given a graph $$G\in {\mathcal {D}}$$, find a largest, in terms of the number of vertices, induced subgraph of G that belongs to $${\mathcal {C}}$$. We prove that it can be solved in $$2^{o(n)}$$ time, where n is the number of vertices of G, if the following conditions are satisfied: the graphs
Updated: 2020-07-31
• Algorithmica (IF 0.65) Pub Date : 2020-07-30
Maria Chudnovsky, Shenwei Huang, Sophie Spirkl, Mingxian Zhong
For an integer t, we let $$P_t$$ denote the t-vertex path. We write $$H+G$$ for the disjoint union of two graphs H and G, and for an integer r and a graph H, we write rH for the disjoint union of r copies of H. We say that a graph G is H-free if no induced subgraph of G is isomorphic to the graph H. In this paper, we study the complexity of k-coloring, for a fixed integer k, when restricted to the
Updated: 2020-07-30
• Algorithmica (IF 0.65) Pub Date : 2020-07-30
Céline Chevalier, Fabien Laguillaumie, Damien Vergnaud
We address the problem of speeding up group computations in cryptography using a single untrusted computational resource. We analyze the security of two efficient protocols for securely outsourcing (multi-)exponentiations. We show that the schemes do not achieve the claimed security guarantees and we present practical polynomial-time attacks on the delegation protocols which allow the untrusted helper
Updated: 2020-07-30
• Algorithmica (IF 0.65) Pub Date : 2020-07-28
Zengfeng Huang, Ke Yi, Qin Zhang
After publication of the article [1] the authors noticed that the funding information was not published in the online and print versions of the article. The omitted funding acknowledgement is given below.
Updated: 2020-07-28
• Algorithmica (IF 0.65) Pub Date : 2020-07-28
Merav Parter, David Peleg
This paper addresses the problem of designing a $$\beta$$-additive fault-tolerant approximate BFS (or FT-ABFS for short) structure, namely, a subgraph H of the network G such that subsequent to the failure of a single edge e, the surviving part of H still contains an approximate BFS spanning tree for (the surviving part of) G, whose distances satisfy $$\mathrm{dist}(s,v,H{\setminus } \{e\}) \le \mathrm{dist}(s
Updated: 2020-07-28
• Algorithmica (IF 0.65) Pub Date : 2020-07-27
Oswin Aichholzer, Jean Cardinal, Tony Huynh, Kolja Knauer, Torsten Mütze, Raphael Steiner, Birgit Vogtenhuber
Flip graphs are a ubiquitous class of graphs, which encode relations on a set of combinatorial objects by elementary, local changes. Skeletons of associahedra, for instance, are the graphs induced by quadrilateral flips in triangulations of a convex polygon. For some definition of a flip graph, a natural computational problem to consider is the flip distance: Given two objects, what is the minimum
Updated: 2020-07-27
• Algorithmica (IF 0.65) Pub Date : 2020-07-25
Jurek Czyzowicz, Dariusz Dereniowski, Andrzej Pelc
A robot modeled as a deterministic finite automaton has to build a structure from material available to it. The robot navigates in the infinite oriented grid $${\mathbb {Z}} \times {\mathbb {Z}}$$. Some cells of the grid are full (contain a brick) and others are empty. The subgraph of the grid induced by full cells, called the shape, is initially connected. The (Manhattan) distance between the furthest
Updated: 2020-07-25
• Algorithmica (IF 0.65) Pub Date : 2020-07-25
Junjie Luo, Hendrik Molter, André Nichterlein, Rolf Niedermeier
We introduce a dynamic version of the NP-hard graph modification problem Cluster Editing. The essential point here is to take into account dynamically evolving input graphs: having a cluster graph (that is, a disjoint union of cliques) constituting a solution for a first input graph, can we cost-efficiently transform it into a “similar” cluster graph that is a solution for a second (“subsequent”) input
Updated: 2020-07-25
• Algorithmica (IF 0.65) Pub Date : 2020-07-20
Feng Shi, Martin Schirneck, Tobias Friedrich, Timo Kötzing, Frank Neumann
In the article Reoptimization Time Analysis of Evolutionary Algorithms on Linear Functions Under Dynamic Uniform Constraints, we claimed a worst-case runtime of and for the Multi-Objective Evolutionary Algorithm and the Multi-Objective Genetic Algorithm, respectively, on linear profit functions under dynamic uniform constraint, where denotes the difference between the original constraint bound B and
Updated: 2020-07-20
• Algorithmica (IF 0.65) Pub Date : 2020-07-15
Saba Ahmadi, Samir Khuller, Manish Purohit, Sheng Yang
Applications designed for data-parallel computation frameworks such as MapReduce usually alternate between computation and communication stages. Coflow scheduling is a recent popular networking abstraction introduced to capture such application-level communication patterns in datacenters. In this framework, a datacenter is modeled as a single non-blocking switch with m input ports and m output ports
Updated: 2020-07-16
• Algorithmica (IF 0.65) Pub Date : 2020-07-15
Dogan Corus, Pietro S. Oliveto
It is generally accepted that populations are useful for the global exploration of multi-modal optimisation problems. Indeed, several theoretical results are available showing such advantages over single-trajectory search heuristics. In this paper we provide evidence that evolving populations via crossover and mutation may also benefit the optimisation time for hillclimbing unimodal functions. In particular
Updated: 2020-07-15
• Algorithmica (IF 0.65) Pub Date : 2020-07-15
Amihood Amir, Panagiotis Charalampopoulos, Solon P. Pissis, Jakub Radoszewski
Given two strings S and T, each of length at most n, the longest common substring (LCS) problem is to find a longest substring common to S and T. This is a classical problem in computer science with an $$\mathcal {O}(n)$$-time solution. In the fully dynamic setting, edit operations are allowed in either of the two strings, and the problem is to find an LCS after each edit. We present the first solution
Updated: 2020-07-15
• Algorithmica (IF 0.65) Pub Date : 2020-07-10
Christoph Dürr, Thomas Erlebach, Nicole Megow, Julie Meißner
We introduce a novel adversarial model for scheduling with explorable uncertainty. In this model, the processing time of a job can potentially be reduced (by an a priori unknown amount) by testing the job. Testing a job j takes one unit of time and may reduce its processing time from the given upper limit $$\bar{p}_j$$ (which is the time taken to execute the job if it is not tested) to any value between
Updated: 2020-07-10
• Algorithmica (IF 0.65) Pub Date : 2020-07-06
George B. Mertzios, André Nichterlein, Rolf Niedermeier
Finding maximum-cardinality matchings in undirected graphs is arguably one of the most central graph primitives. For m-edge and n-vertex graphs, it is well-known to be solvable in $$O(m\sqrt{n})$$ time; however, for several applications this running time is still too slow. We investigate how linear-time (and almost linear-time) data reduction (used as preprocessing) can alleviate the situation. More
Updated: 2020-07-06
• Algorithmica (IF 0.65) Pub Date : 2020-07-02
Hans L. Bodlaender, Tesshu Hanaka, Yasuaki Kobayashi, Yusuke Kobayashi, Yoshio Okamoto, Yota Otachi, Tom C. van der Zanden
We study Subgraph Isomorphism on graph classes defined by a fixed forbidden graph. Although there are several ways for forbidding a graph, we observe that it is reasonable to focus on the minor relation since other well-known relations lead to either trivial or equivalent problems. When the forbidden minor is connected, we present a near dichotomy of the complexity of Subgraph Isomorphism with respect
Updated: 2020-07-02
• Algorithmica (IF 0.65) Pub Date : 2020-06-25
Denis Antipov, Benjamin Doerr
Despite significant progress in the theory of evolutionary algorithms, the theoretical understanding of evolutionary algorithms which use non-trivial populations remains challenging and only few rigorous results exist. Already for the most basic problem, the determination of the asymptotic runtime of the $$(\mu +\lambda )$$ evolutionary algorithm on the simple OneMax benchmark function, only the special
Updated: 2020-06-25
• Algorithmica (IF 0.65) Pub Date : 2020-06-24
Gregor Matl, Stanislav Živný
In this work, we first study a natural generalisation of the Min-Cut problem, where a graph is augmented by a superadditive set function defined on its vertex subsets. The goal is to select a vertex subset such that the weight of the induced cut plus the set function value are minimised. In addition, a lower and upper bound is imposed on the solution size. We present a polynomial-time algorithm for
Updated: 2020-06-24
• Algorithmica (IF 0.65) Pub Date : 2020-06-20
Simone Faro, Francesco Pio Marino, Arianna Pavone
Searching for all occurrences of a pattern in a text is a fundamental problem in computer science with applications in many other fields, like natural language processing, information retrieval and computational biology. Sampled string matching is an efficient approach recently introduced in order to overcome the prohibitive space requirements of an index construction, on the one hand, and drastically
Updated: 2020-06-22
• Algorithmica (IF 0.65) Pub Date : 2020-06-20
We derandomize Valiant’s (J ACM 62, Article 13, 2015) subquadratic-time algorithm for finding outlier correlations in binary data. This demonstrates that it is possible to perform a deterministic subquadratic-time similarity join of high dimensionality. Our derandomized algorithm gives deterministic subquadratic scaling essentially for the same parameter range as Valiant’s randomized algorithm, but
Updated: 2020-06-22
• Algorithmica (IF 0.65) Pub Date : 2020-06-19
Ankit Chauhan, Tobias Friedrich, Ralf Rothenberger
Large real-world networks typically follow a power-law degree distribution. To study such networks, numerous random graph models have been proposed. However, real-world networks are not drawn at random. Therefore, Brach et al. (27th symposium on discrete algorithms (SODA), pp 1306–1325, 2016) introduced two natural deterministic conditions: (1) a power-law upper bound on the degree distribution (PLB-U)
Updated: 2020-06-19
• Algorithmica (IF 0.65) Pub Date : 2020-06-18
Joan Boyar, Lene M. Favrholdt, Shahin Kamali, Kim S. Larsen
The bin covering problem asks for covering a maximum number of bins with an online sequence of n items of different sizes in the range (0, 1]; a bin is said to be covered if it receives items of total size at least 1. We study this problem in the advice setting and provide asymptotically tight bounds of $$\Theta (n \log {\textsc {Opt}})$$ on the size of advice required to achieve optimal solutions
Updated: 2020-06-18
• Algorithmica (IF 0.65) Pub Date : 2020-06-18
Fu-Hong Liu, Hsiang-Hsuan Liu, Prudence W. H. Wong
We study a scheduling problem arising in demand response management in smart grid. Consumers send in power requests with a flexible feasible time interval during which their requests can be served. The grid controller, upon receiving power requests, schedules each request within the specified interval. The electricity cost is measured by a convex function of the load in each timeslot. The objective
Updated: 2020-06-18
• Algorithmica (IF 0.65) Pub Date : 2020-06-17
Édouard Bonnet, Nicolas Bousquet, Pierre Charbit, Stéphan Thomassé, Rémi Watrigant
In this paper, we investigate the complexity of Maximum Independent Set (MIS) in the class of H-free graphs, that is, graphs excluding a fixed graph as an induced subgraph. Given that the problem remains NP-hard for most graphs H, we study its fixed-parameter tractability and make progress towards a dichotomy between FPT and W[1]-hard cases. We first show that MIS remains W[1]-hard in graphs forbidding
Updated: 2020-06-17
Contents have been reproduced by permission of the publishers.
|
# How to get “XeLaTeX + unicode-math” output as close as possible to that of pdflatex? [closed]
I've started exploring XeTeX and the unicode-math package in order to use unicode in my input.
Of the six math fonts described in unimath-symbols.pdf, Latin Modern Math seems to be closest to what pdflatex produces. However, I've already noticed a number of differences I don't like, such as \varnothing (which is the same as \emptyset now), \complement and the \mathbb family.
I know about the range option of the \setmathfont command. Right now I use:
\setmathfont{latinmodern-math.otf}
\setmathfont[range={"2100-"214F,"2201,"2205,"1D7D8-"1D7E1,"1D538-"1D56B}]{xits-math.otf}
But I'd rather use a single command, option or package that brings all symbols as close as possible to their pdflatex versions. I can then explore the different fonts at my leisure, with the certainty that there are no big surprises in my existing documents.
Is there a way to do this?
## closed as too broad by egreg, Joseph Wright♦ Jan 2 at 21:30
There are either too many possible answers, or good answers would be too long for this format. Please add details to narrow the answer set or to isolate an issue that can be answered in a few paragraphs. If this question can be reworded to fit the rules in the help center, please edit the question.
The appearance of symbols is decided by the font designer. – egreg Jul 23 '13 at 17:35
With the traditional setup, \varnothing and \complement are taken from the AMS symbol font; the designers of Latin Modern Math had different ideas about those symbols. – egreg Jul 23 '13 at 17:47
This question appears to be off-topic because no reasonable answer can be provided. – yo' Sep 6 '14 at 21:32
@tohecz: I'm sorry, but is there any way in which this is not silly? [1] The question is obviously about LaTeX and follows the rules, so: on topic. [2] A reasonable answer to "Is there a way to do this?" might be "No.". [3] How does the inability to answer a question make that question off-topic anyway? --- I can understand the impulse to close what appears to be a dead-end question, but please find a better justification. – mhelvens Sep 7 '14 at 21:12
@tohecz: Why is "No." not a legitimate answer here? Anyway, I asked the question to solve a real problem I faced at the time, and it literally follows all the rules listed in the 'Asking' section of the help center (I checked). It is not subjective. Moreover, it addresses an issue that people may yet face in the future. If you are hung up on the word "answerable" in the 'what to avoid asking' section, I find that a bit weak. The most you could say is that this question does not have an answer yet. – mhelvens Sep 7 '14 at 21:31
|
# Kernel creation for MultiVariateNormal
Hi there, I have a 2D-output Gaussian process regression and I am using the approach suggested in the Stan documentation for creating the kernel over the two separate dimensions that I have, which is as follows (y is my observation):
data {
  int<lower=1> n_buckets;
  int<lower=1> K;
  matrix[K, n_buckets] y;                         // outcome
  real<lower=0, upper=1.5> buck_rep[n_buckets];
}
parameters {
  // these declarations were missing from the posted snippet; added so it compiles
  cholesky_factor_corr[K] L_K_y;
  matrix[K, n_buckets] y_tilde;
  real<lower=0> alpha;
  real<lower=0> sigma;
}
model {
  matrix[K, n_buckets] latent_gp;
  {
    matrix[n_buckets, n_buckets] L_K_x;
    matrix[n_buckets, n_buckets] L_x = cov_exp_quad(buck_rep, alpha, 0.1);
    for (n in 1:n_buckets) {
      L_x[n, n] = L_x[n, n] + 1e-7;   // jitter for numerical stability
    }
    L_K_x = cholesky_decompose(L_x);
    latent_gp = L_K_y * y_tilde * L_K_x';
  }
  L_K_y ~ lkj_corr_cholesky(2);
  to_vector(y_tilde) ~ normal(0, 1);
  sigma ~ normal(0, 1);
  alpha ~ normal(0, 1);
  to_vector(y) ~ normal(to_vector(latent_gp), sigma); // <== ???????????
}
I would like to directly use multi_normal_cholesky(0, L) instead of normal(to_vector(latent_gp), sigma), so I need your guidance on how to create L from L_K_y and L_K_x. If you ask why I am doing this: I believe that with this replacement I should see some run-time improvement.
latent_gp = L_K_y * y_tilde * L_K_x';
The way this model is written makes me think latent_gp is a non-centered parameterization for a certain GP. You could write that using multi_normal_cholesky and that’ll be a centered parameterization which, depending on the data, may sample more or less efficiently (with not much data the non-centered parameterization is usually expected to perform better).
What example in the manual is this based off of (and I’ll take a look)?
Thanks for looking at my question @bbbales2. I implemented this model based on the last example, "Multiple-output Gaussian processes", here.
Ooof, all I got is a quick answer for you, so the centered version of the model you want is the stuff that appears after the text:
then our finite dimensional generative model for the above is:
The problem is there’s a matrix normal distribution in there, which isn’t one of the distributions in Stan. So to do this, you’ll either need to code up a matrix normal yourself, or you can do a bit of manipulation and fit this into a multivariate normal density.
The equation on the Wikipedia matrix normal page is this one.
I believe vec can be done with to_vector, but double check that, and you’ll have to implement the Kronecker product yourself (though this has been done before so if you search the forums you should be able to find code snippets).
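To see concretely what L has to be, the matrix-normal-to-multivariate-normal manipulation can be sanity-checked numerically. Below is a NumPy sketch (not Stan code; the names U, V, Z are illustrative, and vec stacks columns, which matches Stan's to_vector): for lower-triangular factors L_U and L_V, vec(L_U Z L_V') = (L_V kron L_U) vec(Z), so L = kron(L_V, L_U) and L L' = V kron U.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n = 3, 4
A = rng.normal(size=(K, K)); U = A @ A.T + K * np.eye(K)  # "row" covariance (plays the role of L_K_y L_K_y')
B = rng.normal(size=(n, n)); V = B @ B.T + n * np.eye(n)  # "column" covariance (plays the role of L_K_x L_K_x')
L_U, L_V = np.linalg.cholesky(U), np.linalg.cholesky(V)

Z = rng.normal(size=(K, n))       # standard normal draw, like y_tilde
X = L_U @ Z @ L_V.T               # the non-centered construction from the model

vec = lambda M: M.reshape(-1, order="F")   # column-major stacking = Stan's to_vector

L = np.kron(L_V, L_U)             # Cholesky-style factor of the Kronecker covariance
assert np.allclose(vec(X), L @ vec(Z))      # vec(L_U Z L_V') = (L_V kron L_U) vec(Z)
assert np.allclose(L @ L.T, np.kron(V, U))  # covariance of vec(X) is V kron U
assert np.allclose(L, np.tril(L))           # kron of lower triangulars stays lower triangular
```

So, under this convention, one would build L as the Kronecker product of L_K_x and L_K_y (in that order, because to_vector is column-major) and pass it to multi_normal_cholesky.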
The thing we’d be testing here is whether or not the centered or non-centered parameterizations are faster. This is an easy thing to test with a 1D output since the GPs are built in. My recommendation is try that first. If centered isn’t better for the 1D problem, then I wouldn’t worry about doing all the work to test it for the 2D problem.
|
# Automorphism of first homology and mapping class group
It is known that for a torus $\Sigma$, every automorphism of $H_1(\Sigma; \mathbb{Z})$ is induced by an orientation-preserving self-homeomorphism of $\Sigma$, unique up to isotopy. In other words, there is a bijection between the mapping class group of $\Sigma$ and $Aut(H_1(\Sigma; \mathbb{Z}))$.
Question: Is it still true for a general compact orientable 2-surface $\Sigma$? Or is this special to a torus?
This is very unique to the torus. What you are interested in is the mapping class group of the surface, which is quite complicated and an active area of research. I recommend looking at the introduction of the book "A primer on mapping class groups" by Farb and Margalit, available here : math.utah.edu/~margalit/primer – Andy Putman Feb 16 '12 at 17:08
This can be appropriately generalized for closed surfaces. The Dehn-Nielsen-Baer theorem says $MCG(\Sigma)$ is isomorphic to $Out(\pi_1(\Sigma))$. For the torus, we simply get $\pi_1=H_1$ and $Aut=Out$. – Steve D Feb 16 '12 at 20:33
The magic words are "Torelli subgroup" (google, and you will find a million hits) -- that is the kernel of the map from the mapping class group to the automorphism group of the first homology. The torus (I usually think of the punctured torus) is also unique in that for it that map is surjective (the image is, in general, the symplectic group, which is not usually equal to the special linear group except when the dimension is equal to two).
|
# A soccer ball is kicked into the air from the ground. If the ball reaches a maximum height of 25 ft and spends a total of 2.5 s in the air, which equation models the height of the ball correctly? Assume that acceleration due to gravity is –16 ft/s^2.
###### Question:
A soccer ball is kicked into the air from the ground. If the ball reaches a maximum height of 25 ft and spends a total of 2.5 s in the air, which equation models the height of the ball correctly? Assume that acceleration due to gravity is –16 ft/s^2.
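A quick check of the intended model (assuming the standard free-fall form h(t) = -16t^2 + v0*t in feet, with the ball leaving the ground at t = 0): a total flight time of 2.5 s forces v0 = 16(2.5) = 40 ft/s, and that launch speed puts the vertex at exactly 25 ft, so both given conditions are consistent with h(t) = -16t^2 + 40t.

```python
# h(t) = -16 t^2 + v0 t; h(2.5) = 0 gives v0 = 16 * 2.5 = 40 ft/s
v0 = 16 * 2.5
h = lambda t: -16 * t**2 + v0 * t
t_peak = v0 / 32                      # vertex of the parabola
print(v0, t_peak, h(t_peak), h(2.5))  # 40.0 1.25 25.0 0.0
```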
|
You can adjust Q for lowpass, highpass, bandpass, notch, and peak filters (use 0.7071, which is 1 divided by the square root of 2, for Butterworth lowpass and highpass), and Gain for peak and shelving filters.

Obtaining Lowpass FIR Filter Coefficients. But if you want to calculate the coefficients of this filter, you should first use the following command. This is processed by an FIR lowpass filter with cutoff frequency 6 kHz. Algorithm: the input signal is a sum of two sine waves, 1 kHz and 15 kHz. Description: apply a window function to make the response finite. The MATLAB code to generate the filter coefficients is shown below: h = fir1(28, 6/24); The first argument is the "order" of the filter and is always one less than the desired length. I would suggest you use FIR filters because they have a linear phase.

Microstrip line impedance. To summarize, two functions are presented that return a vector of FIR filter coefficients: firceqrip and firgr. firceqrip is used when the filter order (equivalently, the filter length) is known and fixed. Lowpass Filter Design in MATLAB provides an overview on designing lowpass filters with DSP System Toolbox. y = filter(b, 1, x) will FIR-filter the signal x with the filter coefficients pre-specified as b.

Get n coefficients of a FIR low-pass, high-pass, band-pass, or stop-band filter. To use it, set the sample rate (1 kHz < Fs < 1 MHz) and the type of filter desired (low pass, band pass or high pass), then set the number of points in the filter (N < 500), then set the frequency of the ideal filter edges (Fa, Fb) and the minim… This is essentially the same test bench as the IIR Filter test bench. L1 = Z0 / (Pi*(f2-f1)) Henries. Please help see if anything is wrong in my codes.
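The fir1(28, 6/24) design above can be reproduced without MATLAB as a windowed sinc. This sketch assumes a 48 kHz sample rate (so 6/24 is the 6 kHz cutoff normalized to the 24 kHz Nyquist) and applies fir1's default Hamming window; note that fir1 additionally normalizes the passband gain, which this sketch omits:

```python
import numpy as np

fs = 48_000.0   # sample rate in Hz (assumed for illustration)
fc = 6_000.0    # cutoff in Hz
N = 29          # filter length = order 28 + 1

# Windowed-sinc design: ideal lowpass impulse response times a Hamming window.
n = np.arange(N) - (N - 1) / 2
h = 2 * fc / fs * np.sinc(2 * fc / fs * n) * np.hamming(N)

# Filter a 1 kHz + 15 kHz test signal and compare tone amplitudes via the FFT.
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 1_000 * t) + np.sin(2 * np.pi * 15_000 * t)
y = np.convolve(x, h, mode="same")

X = np.abs(np.fft.rfft(y))
f = np.fft.rfftfreq(len(y), 1 / fs)
amp_1k = X[np.argmin(np.abs(f - 1_000))]
amp_15k = X[np.argmin(np.abs(f - 15_000))]
print(amp_1k / amp_15k)   # large ratio: the 15 kHz tone is strongly attenuated
```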
During this time, it was well known that the best filters contain an equiripple characteristic in their frequency response magnitude, and the elliptic filter (or Cauer filter) was optimal with regards to the Chebyshev …

The amplitude response of the ideal lowpass filter is shown in Fig. 1.1. A low-pass filter is one which does not affect low frequencies and rejects high frequencies. These two plots show a general truth about these windows. L2 = Z0*(f2-f1) / (4*Pi*f2*f1) Henries. I want to know the filter coefficient calculation formula, manually, for a second-order Butterworth low-pass filter.

The simple bandpass consists of an RC low-pass and an RC high-pass, each 1st order, so two resistors and two capacitors. This RF filter calculator is used for microstrip filter calculations.

Yes, this is a biquad (as in biquadratic) filter coefficient calculator. In addition, it graphs the Bode plot for magnitude in decibels and the phase in radians. Calculate the cutoff frequency of a low pass filter: set the filter Type, the Sample rate (or 1.0 for "normalized" frequency), and the cutoff or center frequency Fc. You can adjust Q for lowpass, highpass, bandpass, notch, and peak filters (use 0.7071, which is 1 divided by the square root of 2, for Butterworth lowpass and highpass), and Gain for peak and shelving filters. Select Chebyshev, Elliptic, Butterworth or Bessel filter type, with filter order up to 20, and arbitrary input and output impedances.

This is done in order to have a DC gain equal to 1 (0 dB). More specifically, the filter coefficients are computed using this formula for $$\alpha_B$$: \begin{align*} \alpha_B &= \frac{\sin\omega_0}{2} \sqrt{\frac{4-\sqrt{16-\frac{16}{Q_L^2}}}{2}} \\ Q_L &= 10^\frac{Q}{20} \end{align*} New Biquad Design.
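For a concrete sense of what such a biquad coefficient calculator computes, here is a sketch using the widely circulated Robert Bristow-Johnson "Audio EQ Cookbook" lowpass formulas. This is one common convention, not necessarily the exact one used by the calculator described above:

```python
import math

def lowpass_biquad(fs, f0, Q):
    """RBJ-cookbook-style lowpass biquad; returns (b, a) normalized so a[0] = 1."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * Q)
    c = math.cos(w0)
    b = [(1 - c) / 2, 1 - c, (1 - c) / 2]
    a = [1 + alpha, -2 * c, 1 - alpha]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

b, a = lowpass_biquad(48_000, 1_000, 0.7071)  # Q = 1/sqrt(2): Butterworth-style response

# By construction the DC gain sum(b)/sum(a) is exactly 1 (0 dB),
# and the gain at Nyquist (z = -1) is exactly 0.
print(sum(b) / sum(a))
```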
Frequency Sampling method. DSP Training This article is complemented by a Filter Design tool that allows you to create your own custom versions of the example filter that is shown below, and download the resulting filter coefficients. Digital Signal Processing Wiki is the capacitance of the capacitor and is the ohmic resistance. One low-pass filter required only seven coefficients. DSP Satellite Use this utility to calculate the Transfer Function for filters at a given frequency or values of R and C. The response of the filter is displayed on graphs, showing Bode diagram, Nyquist diagram, Impulse response and Step response. The actually formulae and … I want to know filter Coefficients calculation formula manually for second order Butter Worth Low pass Filter. The result is precise response at the frequency sampling locations. How to create a simple low-pass filter? I'm trying to make a filter for use in real-time audio processing and I'm trying to figure out how to produce coefficients for a low pass with a steep attenuation curve. The lowpass filter eliminates the 15 kHz signal leaving only the 1 kHz sine wave at the output. But for the one-pole (1p) filters, it does not. High-pass filter - a filter that passes high frequencies and attenuates the low ones. Our example is the simplest possible low-pass filter. The ideal frequency response of the filter is approximated by placing appropriate frequency samples in the z- plane and then calculating the filter co-efficients using the IFFT algorithm. From: Biomedical Signal Processing and Control, 2017 Use this utility to simulate the Transfer Function for filters at a given frequency, damping ratio ζ, Q or values of R and C. In this method, the frequencty samples are the same as number of requred filter coefficients. OUTPUTS: L = 1.34e-8 Henries, C = 5.38e-12 farads. As a result of windowing side lobes will appear in the frequency response of final filter. 
Both pi filter and T filter section filter topologies are covered. Second Order Passive Low Pass Filter More details are given here. In this second order filter, the cut-off frequency value depends on the resistor and capacitor values of two RC sections. In addition, our bandpass calculator reduces the effort thereof. But i know fs > 2fmax for … This presented equation for the excess loss is very convenient and does not require referring to the prototype element value table. Figure-2 depicts bandpass pi filter and T filter section topologies. In the following sections, the coefficients are designed as examples for HD, DAB and ISDB-TSB applications. It first discusses the calculation of the filter coefficients for a lowpass Butterworth design, and then component calculations if you want to implement it in hardware. Filter tables are developed to simplify circuit design based on … The Kaiser gives better close in stop band rejection, and the Sinc does better further out. Digital Signal Processing Proakis H (z) shows how to calculate the IIR filter coefficients from the analog low pass prototype coefficients A - F and T. T is determined by the desired 3 dB cutoff frequency Omega. TAS2505, Digital Input Class-D Speaker Amplifier with … Resonant low pass filter; Reverb Filter Generator; Simple Tilt equalizer; Simple biquad filter from apple.com; Spuc’s open source filters; State Variable Filter (Chamberlin version) State Variable Filter (Double Sampled, Stable) State variable ; Stilson’s Moog filter code; Time domain convolution with O(n^log2(3)) Time domain convolution with O(n^log2(3)) Type : LPF … From the second equation, we must have $$Q_{new} \ge \frac{1}{2}$$ to ensure $$Q_L$$ is real. Most of them is correct but not for peak filter, low shelving and high shelving. The pole-zero diagram that we examined in this article is not simply a way to describe a low-pass filter. 
AN3984 Second-order filter design Doc ID 022240 Rev 1 9/46 For a second-order LPF, the coefficients … Passive Low Pass Filter Example 2. The example demonstrates how to configure an FIR filter and then pass data through it in a block-by-block fashion. Passive band pass filter 1st order. It uses a pure javascript implementation of the Parks-McClellan filter design algorithm. This corresponds to the sinc() function, the cardinal sinus, in the time domain. I feel like this is a fairly simple problem, but I'm not exactly sure how to go about it. Antenna G/T firrcos … The constant c determines at which frequency ω the transition of the passband to stopband occurs. The second-order low pass also consists of two components. This design article covers microstrip based RF/Microwave Low Pass Filter design example along with Kudora's identities, The frequency sampling technique is suitable for designing of filters with a given magnitude response. Example coefficients (for the filter shown). The is the angular frequency, ie the product of (frequency). So after IFFT no windowing is required. Following is a simple LC based RF bandpass filter calculator of order N equal to 3. For example, setting for FS=48000, FC=1000, gain=10dB and Q=0.707, the coefficients is below TFilter is a web application that generates linear phase, optimal, equiripple finite impulse response digital filters. Each filter function will return a 2 rows x N coefficients 2D vector, where Row 1 = Numerator and Row 2 = Denumerator. Optimal Non-Equiripple Low Pass Filters IIR filter coefficients extraction in Spartna-3 device Hi, Recently we got a project, which already been implemented by others. The other calculator you link to does not calculate one-pole filters for its 6 dB/oct choice, it calculates one-pole, one-zero (1p1z) filters. The free online FIR filter design tool. Noise temp. We know that the equation for the cut off frequency is. 
I need to find the filter coefficients of an FIR filter that will block sinusoids of frequency $200\ \rm Hz$ if the sinusoid is sampled at $1.2\ \rm kHz$. High and low pass filters are simply connected in series. FFT of ideal (rectangular) frequency response is sinc function in time domain. Another needed 44. Something useful: a biquad filter coefficient calculator. More details are given here. Follow 79 views (last 30 days) Crystal on 29 Apr 2013. This article is complemented by a Filter Design tool that allows you to create your own custom versions of the example filter that is shown below, and download the resulting filter coefficients.. How to create a simple low-pass filter? Another needed 44. One low-pass filter required only seven coefficients. A good discussion is given in DESIGN OF a 5th ORDER BUTTERWORTH LOW-PASS FILTER USING SALLEN & KEY CIRCUIT (link). Figure-1 depicts lowpass pi filter and T filter section topologies. Digital Signal Processing Lecture Notes The 2nd order analog filter coefficients (e.g. The magnitude-squared function of the Butterworth approximation is | H ω) | 2 = 1 1 − c ω 2 n. for an n-th order filter. The formula for calculating an RC low pass filter is: Here, stands for the input voltage and for the output voltage. Op-Amp Filter Design The program can also read in a user defined signal for filtering. RC Low Pass Filter - Frequency and Bode Plot Calculator. Filter Coefficients. If you started with the filter length in the frequency domain, there would be no further truncation in the time domain and that specifically is the frequency sampling method. The sampling step … For the second order filters, the calculator uses the BLT of standard s-plane filters. Controlling the Filter order and Passband Ripple/Stopband Attenuation Steven Smith Digital Signal Processing. f c = 1 / (2π√R 2 C 2) The gain rolls off at a rate of … Band Pass Filter Design The converse, however, is not true. 
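For the question above about blocking a 200 Hz sinusoid sampled at 1.2 kHz, the minimal FIR that does the job places a conjugate pair of zeros on the unit circle at the tone's frequency. This is a sketch; a longer design (e.g., from an equiripple tool) would also control the rest of the response:

```python
import math

fs, f_block = 1200.0, 200.0
w = 2 * math.pi * f_block / fs            # = pi/3 rad/sample

# Zeros at e^{+-jw}: H(z) = (1 - e^{jw} z^-1)(1 - e^{-jw} z^-1)
#                         = 1 - 2cos(w) z^-1 + z^-2
h = [1.0, -2 * math.cos(w), 1.0]          # numerically [1, -1, 1]

x = [math.sin(w * n) for n in range(64)]  # a 200 Hz sinusoid sampled at 1.2 kHz
y = [h[0] * x[n] + h[1] * x[n - 1] + h[2] * x[n - 2] for n in range(2, 64)]
print(max(abs(v) for v in y))             # essentially 0: the tone is blocked
```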
- Analog: continuous time, continuous level
- Quantized: continuous time, discrete level
- Passband frequency, passband amplitude, stopband frequency, stopband amplitude
- Transition frequency, transition width, pass band ripple, stop band ripple
- A list of magnitudes and phases with their corresponding frequencies (frequency to mag/phase pair table)
- Direct Form I and II transpose (multiply delay add)
- Series/cascade lower (typically second) order subsections
- Parallel lower (typically second) order subsections
- One-, two- and three-multiply lattice forms
- Three- and four-multiply normalized ladder forms
- Optimal (in the minimum noise sense): (N+1)^2 parameters
- Block-optimal and section-optimal: 4N-1 parameters
- Input balanced with Givens rotation: 4N-1 parameters
- Coupled forms: Gold Rader (normal), State Variable (Chamberlin), Kingsbury, Modified State Variable, Zölzer, Modified Zölzer
- Analog-inspired forms such as Sallen-Key and state variable filters

A Simple Low Pass Filter Design. The amplitude response of the ideal lowpass filter is shown in Fig. 1.1. This page is a web application that designs a multiple feedback low-pass filter. Filter coefficients are estimated and applied to a signal according to the desired order (i.e., filter aggressiveness), frequency response (e.g., low-pass, high-pass, band-pass, etc.).

A simple example of a Butterworth filter is the third-order low-pass design shown in the figure on the right, with C2 = 4/3 F, R4 = 1 Ω, L1 = 3/2 H, and L3 = 1/2 H. Taking the impedance of the capacitors C to be 1/(Cs) and the impedance of the inductors L to be Ls, where s = σ + jω is the complex frequency, the circuit equations yield the transfer function for this device. The output frequency is rounded to the second decimal place.
The filter table generator allows a large number of coefficients to be calculated and stored in RAM for applications such as a user-controllable tone control. In both cases the Kaiser and sinc windows were adjusted so that the filters would have comparable pass bands. From a filter-table listing for Butterworth, we can find the zeroes of the second-order Butterworth polynomial. At this point the FIR filter is a low-pass filter; by negating every other coefficient, the FIR filter becomes a high-pass filter. The preliminary step to obtain the coefficients for the first-order low-pass filter or high-pass filter is to define three constants obtained from the filter parameters (Equation 7); in a first-order filter both of the coefficients $a_2$ and $b_2$ are null. The function giving the gain of a filter at every frequency is called the amplitude response (or magnitude frequency response). firpm and firgr use approximation (an optimization method, or optimal filter design method). When $x$ goes to 0, $\mathrm{sinc}(x)$ goes to 1. This page is a web application that designs an RC low-pass filter. A low-pass filter is meant to allow low frequencies to pass while attenuating high ones.

4.1.1 Low-pass filter. The coefficients for a second-order LPF can be calculated as follows (Equation 14, in the standard bilinear-transform form):

$$K = \tan\left(\frac{\pi f_c}{f_s}\right), \qquad W = K^2, \qquad DE = 1 + \frac{K}{Q} + W$$

$$b_0 = \frac{W}{DE}, \qquad b_1 = \frac{2W}{DE}, \qquad b_2 = \frac{W}{DE}$$

$$a_1 = \frac{2(W-1)}{DE}, \qquad a_2 = \frac{1 - K/Q + W}{DE}$$

This very powerful software now has support for generating coefficients for the OpenDRC and miniSHARC series. Otherwise, you need not handle coefficients yourself: you could just enter the desired filter specs into SigmaStudio's General and Nth-Order Filters (the latter is not available yet for the ADAU145x). The window method is basically used for the design of prototype filters like the low-pass, high-pass and band-pass. Insert any filter into your project and compile. For the second-order filters the calculator uses the BLT of standard s-plane filters, but for the one-pole (1p) filters it does not.
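The second-order low-pass coefficients around Equation 14 (the constants K, Q, W, DE) follow the standard bilinear-transform recipe. Here is a hedged sketch in Python, with $a_0$ normalized to 1; the exact symbol conventions of the original application note may differ:

```python
import math

def biquad_lowpass(fc, fs, q):
    """Second-order low-pass biquad via the bilinear transform.

    Returns (b, a) with a[0] normalized to 1, so
    H(z) = (b0 + b1 z^-1 + b2 z^-2) / (1 + a1 z^-1 + a2 z^-2).
    """
    k = math.tan(math.pi * fc / fs)   # prewarped frequency constant
    w = k * k
    de = 1.0 + k / q + w              # common denominator DE
    b = [w / de, 2.0 * w / de, w / de]
    a = [1.0, 2.0 * (w - 1.0) / de, (1.0 - k / q + w) / de]
    return b, a

b, a = biquad_lowpass(fc=720.0, fs=48000.0, q=0.7071)
# For a low-pass, the DC gain H(1) = sum(b) / sum(a) must equal 1.
dc_gain = sum(b) / sum(a)
print(dc_gain)  # ~1.0
```

The DC-gain check is a quick sanity test for any low-pass coefficient set: substituting $z = 1$ makes the numerator $b_0+b_1+b_2$ and the denominator $1+a_1+a_2$, which cancel algebraically in these formulas.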
2.2. In the 1960s, researchers in the field of analog filter design were using the Chebyshev approximation for filter design. The sinc() function is defined as $\mathrm{sinc}(x) = \sin(\pi x)/(\pi x)$. Let the frequency ratio $f_r$ be the ratio of the sampling frequency to the cutoff frequency: $f_r = f_s/f_c$. Our example is the simplest possible low-pass filter. COEFFICIENT-CALC (TIBQ) calculates the coefficients for the digital-filter biquad transfer function implemented in TI audio codecs. Low-pass filter: a filter that passes low frequencies and attenuates the high ones. I've found a few examples of $b_0, b_1, b_2, a_1, a_2$, but I'd like to have the option of a higher-order filter, which to my knowledge means more coefficients. The impulse that is referred to in the term "impulse response" is generally a short-duration time-domain signal. $f_c = \frac{1}{2\pi RC} = \frac{1}{2\pi \times 4700 \times 47 \times 10^{-9}} \approx 720\ \rm Hz$. [12] $k$, shown in [11] and [12], is the elementary 2nd-order filter number. The characteristics of the digital filter are adjusted by selecting a filter type and moving a control point within a window that shows the transfer-function gain and phase plot. Example of the RF filter calculator, lowpass type: INPUTS: Fc = 1182.5 MHz, Z0 = 50 Ohm; OUTPUTS: L = 1.34e-8 Henries, C = 5.38e-12 Farads. Microstrip low-pass filter formula. In fixed-point implementations, quantizing the $2r\cos(\theta)$ and $-r^2$ coefficients restricts the possible pole locations [1,2]. The coefficients appear in a list with 18-digit resolution. For the second-order filters, the calculator uses the BLT of standard s-plane filters.
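The numeric example above (R = 4.7 kΩ, C = 47 nF) can be checked in a couple of lines using $f_c = 1/(2\pi RC)$:

```python
import math

def rc_cutoff(r_ohms, c_farads):
    """Cut-off (-3 dB) frequency of a first-order RC low-pass filter."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

fc = rc_cutoff(4700.0, 47e-9)
print(round(fc, 1))  # ~720.5 Hz, i.e. roughly the 720 Hz quoted above
```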
Active Low-Pass Filter Design. 5.1 Second-Order Low-Pass Butterworth Filter: the Butterworth polynomial requires the least amount of work because the frequency-scaling factor is always equal to one. The 2nd-order analog filter coefficients (e.g. Butterworth coefficients) can be obtained from virtually any filter design book. This method also takes the IFFT of the desired frequency response. Passive low-pass, 2nd order: coefficients. 2 FIR Filter Coefficient Design Examples for the AFEDRI8201 in Digital Radio, SBAA132A, April 2005, revised August 2005. Screen shot of the program's test bench. Instead of measuring the frequency response, define the transition-band values as a raised cosine. In order to reduce these errors, different optimization techniques for FIR filter design were presented, wherein the remaining frequency samples are chosen to satisfy an optimization criterion. Richards transformation and filter-coefficient table for a Chebyshev LPF. This is covered in all the signal-processing texts I've ever used. Set the sampling frequency and the desired number of taps. The simplest window is the rectangular window, which simply truncates the impulse response. Low-pass filters are commonly used to implement antialias filters in data-acquisition systems. The maximum number of coefficients is limited to 128 to prevent CPU overloading. The Center and Span buttons … That's a sharp filter that's difficult to create with op-amps and passive components. Implementing FIR Filters in FLEX Devices, February 1998, ver. The cut-off frequency is calculated using the formula below. The following are the equations used in the RF lowpass filter calculator of order N equal to 3.
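The rectangular window simply truncates the ideal sinc impulse response; smoother windows trade transition width for lower side lobes. A sketch of this windowed-sinc method, using the 29-tap length and the 6 kHz cutoff at a 48 kHz sample rate mentioned in the text (the Hamming window is an illustrative choice, not prescribed by the source):

```python
import math

def windowed_sinc_lowpass(fc, fs, num_taps):
    """Low-pass FIR coefficients by the windowed-sinc method (Hamming window)."""
    m = num_taps - 1                      # filter order
    ft = fc / fs                          # normalized cutoff, cycles/sample
    h = []
    for n in range(num_taps):
        x = n - m / 2.0
        # Ideal low-pass impulse response: 2*ft*sinc(2*ft*x)
        if x == 0:
            ideal = 2.0 * ft
        else:
            ideal = math.sin(2.0 * math.pi * ft * x) / (math.pi * x)
        window = 0.54 - 0.46 * math.cos(2.0 * math.pi * n / m)  # Hamming
        h.append(ideal * window)
    s = sum(h)
    return [v / s for v in h]             # normalize so the DC gain is 1

h = windowed_sinc_lowpass(fc=6000.0, fs=48000.0, num_taps=29)
print(len(h), round(sum(h), 6))  # 29 taps, coefficient sum 1.0
```

Because the taps are symmetric about the center, the resulting filter has exactly linear phase, which is the usual reason to prefer FIR designs.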
To use this calculator, all a user must do is enter any 2 values, and the … The most common window functions are … That number arose from a filter with a 1,500 Hz pass-band cutoff frequency and a 2,000 Hz stop-band cutoff frequency at -80 dB. This page is a web application that designs a multiple-feedback low-pass filter. Calculate LC filter circuit values with low-pass, high-pass, band-pass, or band-stop response. C1 = 5.276e-14 Farads, C2 = 1.76e-10 Farads. The sinc impulse response is infinite. The Matlab fir2 function uses the frequency-sampling method. The lowpass filter … It covers the low-pass and bandpass RF filter calculators used in microstrip designs. Optimal Minimum-Order Designs. Here are two filter examples, a low pass and a band pass, designed with both windows. For normalized Butterworth filters, this point is … FIR Filter Designer (free online tool): a great free online FIR filter designer with lots of features. This method starts with the ideal frequency response, which is rectangular: 1 for all the pass-band frequencies and 0 for all the stop-band frequencies. Filter coefficients for a Butterworth low-pass filter. Question 4: an FIR filter is direct, meaning it has no feedback, but an IIR filter has feedback. Check it out here and see the tutorial here. Let us calculate the cut-off frequency of a low-pass filter which has a resistance of 4.7k and a capacitance of 47 nF. I also want to know the sampling and cutoff frequencies to apply for the input signal. [10] From [7] and [8] of System Function, the filter coefficients of any even-order Butterworth low-pass filter can be described as [11], where … Active Low-Pass Filter Design, Jim Karki, AAP Precision Analog. ABSTRACT: This report focuses on active low-pass filter design using operational amplifiers.
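The even-order Butterworth description referenced in [10]-[12] factors the filter into cascaded second-order sections. By the standard result (not taken from the cited equations themselves), the k-th section's pole angle is $\theta_k = (2k-1)\pi/(2n)$ and its quality factor is $Q_k = 1/(2\cos\theta_k)$:

```python
import math

def butterworth_section_qs(order):
    """Q factors of the cascaded 2nd-order sections of an even-order
    Butterworth low-pass filter; poles lie at angles (2k-1)*pi/(2n)."""
    if order % 2 != 0:
        raise ValueError("even order expected")
    qs = []
    for k in range(1, order // 2 + 1):
        theta = (2 * k - 1) * math.pi / (2 * order)
        qs.append(1.0 / (2.0 * math.cos(theta)))
    return qs

print([round(q, 4) for q in butterworth_section_qs(2)])  # [0.7071]
print([round(q, 4) for q in butterworth_section_qs(4)])  # [0.5412, 1.3066]
```

These are the familiar Q values found in Butterworth filter tables; each section can then be realized with, for example, a Sallen-Key stage.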
3.7.1 Create C Code 3-35
3.7.2 C Code File Generation 3-35
3.7.3 Microchip dsPIC30F/33F Code Generation 3-35
3.8 Window Menu 3-40
3.8.1 Select Plots 3-41
3.8.2 Display Control 3-41
Chapter 4 - Filter Coefficient Files
4.1 IIR Filter Coefficient …

The low-pass filter can then be transformed to high-pass or band-pass if necessary. This property is then used to calculate the increase in insertion loss of this type of filter in the presence of dissipative losses due to the elements'/resonators' finite quality factors. firrcos is also a frequency-sampling design. OUTPUTS: L1 = 4.42e-7 Henries, L2 = 1.319e-10 Henries. A quick guide for DSP design is provided to help understand FIR filter design. H(z) shows how to calculate the IIR filter coefficients from the analog low-pass prototype coefficients A-F and T; T is determined by the desired 3 dB cutoff frequency Omega.

The formula uses the angular frequency, i.e. the product of 2π and the frequency, and R is the ohmic resistance. filter(b, a, x) will FIR-filter the signal x with the filter coefficients b, and the test bench shows how the filter affects a square wave. A passive 2nd-order low pass cascades two 1st-order stages in series, so two resistors and two capacitors. The Remez exchange algorithm is commonly used to find an optimal equiripple set of coefficients, and engineers like to use FIR filters because they can have exactly linear phase. The biquad (as in biquadratic) coefficient calculator offers Chebyshev, Butterworth or Bessel filter types, with filter orders up to 20 and arbitrary input and output impedances; the filter coefficients are returned as a 2D vector, where Row 1 = numerator and Row 2 = denominator. Poles occur at radii of r, at angles of ±θ radians; for stability reasons, we always ensure that r < 1. The coefficients are normalized by dividing by their sum, in order to have a DC gain equal to 1 (0 dB). As a result of windowing, side lobes will appear in the frequency response. The example demonstrates how to configure an FIR lowpass filter and then pass data through it in a block-by-block fashion: two sine waves, 1 kHz and 15 kHz, are generated with a sample rate of 48 kHz; the filter eliminates the 15 kHz signal, leaving only the 1 kHz sine wave at the output. The cutoff determines at which frequency ω the transition of the lowpass occurs. If you want to calculate the coefficients of this filter, you should first use the following command (description below). The result is a precise response at the points where the frequency response was sampled; the frequency-sampling technique is suitable for designing filters with any given frequency response. One delay is applied. For normalized Butterworth filters, this point is … A filter length of 29 points was used. In Signal Processing and Control, 2017. Filter-coefficient extraction in a Spartan-3 device: recently we got a project in which … The Kaiser gives better close-in stop-band rejection.
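Passing data through an FIR filter block by block only requires carrying the last num_taps − 1 input samples from one block to the next. A minimal sketch (the `StreamingFIR` class is hypothetical, not from any library mentioned here), shown with a simple 3-tap moving average:

```python
class StreamingFIR:
    """FIR filter that processes data block by block, preserving state."""

    def __init__(self, b):
        self.b = list(b)
        self.state = [0.0] * (len(b) - 1)   # tail of the previous input

    def process(self, block):
        x = self.state + list(block)
        out = []
        for n in range(len(self.state), len(x)):
            out.append(sum(bk * x[n - k] for k, bk in enumerate(self.b)))
        self.state = x[len(x) - (len(self.b) - 1):]
        return out

b = [1 / 3, 1 / 3, 1 / 3]                   # simple 3-tap moving average
signal = [float(n % 8) for n in range(32)]

# Streaming in blocks of 8 gives the same result as one big block.
f1 = StreamingFIR(b)
chunked = []
for i in range(0, len(signal), 8):
    chunked.extend(f1.process(signal[i:i + 8]))

f2 = StreamingFIR(b)
whole = f2.process(signal)
print(chunked == whole)  # True
```

The same pattern underlies real-time audio processing: the filter object is configured once, and each incoming buffer is pushed through `process` without any discontinuity at block boundaries.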
|
Performs a principal component analysis (PCA) on a data set, automatically determining the groups and labels for plotting afterwards, and automatically filtering on only suitable (i.e. non-empty and numeric) variables.
pca(
x,
...,
retx = TRUE,
center = TRUE,
scale. = TRUE,
tol = NULL,
rank. = NULL
)
## Arguments
- x: a data.frame containing numeric columns
- ...: columns of x to be selected for PCA; can be unquoted since it supports quasiquotation
- retx: a logical value indicating whether the rotated variables should be returned
- center: a logical value indicating whether the variables should be shifted to be zero centered. Alternately, a vector of length equal to the number of columns of x can be supplied. The value is passed to scale.
- scale.: a logical value indicating whether the variables should be scaled to have unit variance before the analysis takes place. The default is FALSE for consistency with S, but in general scaling is advisable. Alternatively, a vector of length equal to the number of columns of x can be supplied. The value is passed to scale.
- tol: a value indicating the magnitude below which components should be omitted. (Components are omitted if their standard deviations are less than or equal to tol times the standard deviation of the first component.) With the default NULL setting, no components are omitted (unless rank. is specified and is less than min(dim(x))). Other settings for tol could be tol = 0 or tol = sqrt(.Machine$double.eps), which would omit essentially constant components.
- rank.: optionally, a number specifying the maximal rank, i.e., the maximal number of principal components to be used. Can be set as an alternative to, or in addition to, tol; useful notably when the desired rank is considerably smaller than the dimensions of the matrix.
## Value
An object of classes pca and prcomp
## Details
The pca() function takes a data.frame as input and performs the actual PCA with the R function prcomp().
The result of the pca() function is a prcomp object, with an additional attribute non_numeric_cols which is a vector with the column names of all columns that do not contain numeric values. These are probably the groups and labels, and will be used by ggplot_pca().
## Stable Lifecycle
The lifecycle of this function is stable. In a stable function, major changes are unlikely. This means that the underlying code will generally evolve by adding new arguments; removing arguments or changing the meaning of existing arguments will be avoided.
If the underlying code needs breaking changes, they will occur gradually. For example, an argument will be deprecated and will first continue to work, but will emit a message informing you of the change. Next, typically after at least one newly released version on CRAN, the message will be transformed into an error.
## Read more on Our Website!
On our website https://msberends.github.io/AMR/ you can find a comprehensive tutorial about how to conduct AMR data analysis, the complete documentation of all functions and an example analysis using WHONET data.
## Examples
# example_isolates is a data set available in the AMR package.
# See ?example_isolates.
# \donttest{
if (require("dplyr")) {
# calculate the resistance per group first
resistance_data <- example_isolates %>%
group_by(order = mo_order(mo), # group on anything, like order
genus = mo_genus(mo)) %>% # and genus as we do here;
summarise_if(is.rsi, resistance) # then get resistance of all drugs
# now conduct PCA for certain antimicrobial agents
pca_result <- resistance_data %>%
pca(AMC, CXM, CTX, CAZ, GEN, TOB, TMP, SXT)
pca_result
summary(pca_result)
biplot(pca_result)
ggplot_pca(pca_result) # a new and convenient plot function
}
# }
|
Bigs Factorials
04-11-2016, 09:07 AM
Post: #1
ggauny@live.fr Senior Member Posts: 567 Joined: Nov 2014
Bigs Factorials
Hi,
Looking for a way to duplicate a routine from my 50g which gives all the digits of a factorial (yes, I mean all the exact digits), I have written this pretty smart routine:
Code:
001 LBL'F'
002 IP        // To be sure it is an integer; N goes to Reg L
003 RCL L
004 DEC X
005 x=0?
006 SKIP 002
007 [times]
008 BACK 005
009 R[v]
010 RTN
011 END
But an issue occurs! I don't see all the digits, only, for instance, 108^255.
In the past I had a program which stored results in consecutive registers (HP-67),
but I can't find it again.
I don't really aim to duplicate that program, because it is compiled machine language, which I don't know, and the author doesn't give the algorithm.
May be someone have ideas to help ?
Nice day.
Gérard.
04-11-2016, 10:33 AM
Post: #2
Paul Dale Senior Member Posts: 1,770 Joined: Dec 2013
RE: Bigs Factorials
This will require the implementation of multiple precision arithmetic using multiple registers. If I was going to do it, I'd most likely use integer mode and take advantage of the double precision multiplication available there.
- Pauli
04-11-2016, 11:07 AM
Post: #3
ggauny@live.fr Senior Member Posts: 567 Joined: Nov 2014
RE: Bigs Factorials
Hi,
I understand. I'm going to try.
Gérard.
04-11-2016, 12:34 PM (This post was last modified: 04-11-2016 12:44 PM by Dieter.)
Post: #4
Dieter Senior Member Posts: 2,397 Joined: Dec 2013
RE: Bigs Factorials
(04-11-2016 09:07 AM)ggauny@live.fr Wrote: Looking for a way to duplicate a routine from my 50g which gives all the digits of a factorial (yes, I mean all the exact digits), I have written this pretty smart routine:
(...)
But an issue occurs! I don't see all the digits, only, for instance, 108^255.
Sure. That's a standard factorial program that simply multiplies all integers from n down to 1. Like this one:
Code:
x=0?
INC X      // handle n=0 like n=1
#001       // set x=1 and leave n in Y
RCLx Y     // x = x*n
DSE Y      // n = n-1, and
BACK 002   // jump back as long as n>0
END
BUT: all arithmetic is done with the calculator's working precision, that is, usually 10 or 12 digits, or 16 resp. 34 on the 34S. That's all you get.
If you want more you will have to set up your own multiplication routine that returns all digits of a product, and not just the first 10, 12, 16 or 34. This is a bit tricky, but you can try the same method you would use with pencil and paper. ;-)
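As an illustration of that pencil-and-paper method, in Python rather than keystroke code: the number is held as a list of base-10000 "registers" (four decimal digits each, an arbitrary choice mimicking storing pieces in consecutive registers), and each multiplication by n propagates carries exactly as one would by hand.

```python
def big_factorial(n):
    """n! with all digits, using a list of base-10000 'registers'."""
    base = 10000
    digits = [1]                      # least-significant register first
    for m in range(2, n + 1):
        carry = 0
        for i in range(len(digits)):
            carry, digits[i] = divmod(digits[i] * m + carry, base)
        while carry:                  # overflow spills into new registers
            carry, low = divmod(carry, base)
            digits.append(low)
    # Assemble the decimal string, padding inner registers to 4 digits.
    return str(digits[-1]) + "".join("%04d" % d for d in reversed(digits[:-1]))

print(big_factorial(20))  # 2432902008176640000
```

On a machine without arbitrary-precision integers, the same loop works with the registers stored as ordinary numbered memories, which is exactly what the HP-67 program quoted earlier did.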
BTW, a simple factorial program was the very first one I ever wrote on a programmable calculator (a TI-57) or on a computer (an Apple II). ;-)
Dieter
04-11-2016, 06:03 PM (This post was last modified: 04-11-2016 06:05 PM by ggauny@live.fr.)
Post: #5
ggauny@live.fr Senior Member Posts: 567 Joined: Nov 2014
RE: Bigs Factorials
Hi Dieter,
I'll try, without any guarantee that I'll succeed!
The first big computer I ever saw, in the engineers' office at work, was
an Olivetti Programma 101 with IBM 80-column magnetic cards. They explained it to me,
but it was so complicated, you know!
Gérard.
04-11-2016, 06:07 PM
Post: #6
ggauny@live.fr Senior Member Posts: 567 Joined: Nov 2014
RE: Bigs Factorials
But do you know the algorithm to get all the digits, like in my 50g?
Maybe a simple flowchart?
Gérard.
04-11-2016, 06:54 PM
Post: #7
Gerson W. Barbosa Senior Member Posts: 1,487 Joined: Dec 2013
RE: Bigs Factorials
Here is an HP-42S program that does what you want:
http://www.hpmuseum.org/cgi-sys/cgiwrap/...035#108692
Regards,
Gerson.
04-12-2016, 06:53 AM
Post: #8
ggauny@live.fr Senior Member Posts: 567 Joined: Nov 2014
RE: Bigs Factorials
(04-11-2016 06:54 PM)Gerson W. Barbosa Wrote: Here is an HP-42S program that does what you want:
http://www.hpmuseum.org/cgi-sys/cgiwrap/...035#108692
Regards,
Gerson.
Hi,
Many thanks, Gerson! Do you know the program I've put in the other thread, in the attachment?
Gérard.
04-13-2016, 01:31 AM
Post: #9
Gerson W. Barbosa Senior Member Posts: 1,487 Joined: Dec 2013
RE: Bigs Factorials
(04-12-2016 06:53 AM)ggauny@live.fr Wrote:
(04-11-2016 06:54 PM)Gerson W. Barbosa Wrote: Here is an HP-42S program that does what you want:
http://www.hpmuseum.org/cgi-sys/cgiwrap/...035#108692
Regards,
Gerson.
Hi,
Many thanks Gerson ! Do you know the program I've put in the other thread, in attachement, ?
No, not yet, but thanks for posting.
1. Right click on the text in the internet address field then click on Select All, then click on Copy (or the equivalent in French in your browser);
+-------------------------------------------------+
| http://www.hpcalc.org/details.php?id=6011 |
+-------------------------------------------------+
2. Click on the "Insert hyperlink" icon (the small green and blue globe & chain links);
3. Place the cursor on the highlighted "http://" then right click, click on Paste and then click on OK;
4. Optionally, enter a description for the link, for instance, "Alistair Borowski's Fast Factorial program".
Voilà ! :-)
Alistair Borowski's Fast Factorial program
Regards,
Gerson.
04-13-2016, 06:37 AM
Post: #10
ggauny@live.fr Senior Member Posts: 567 Joined: Nov 2014
RE: Bigs Factorials
Hi,
Thanks for the trick.
A little bit complicated, but I'll try it for my next post!
Gérard.
« Next Oldest | Next Newest »
User(s) browsing this thread: 1 Guest(s)
|
FoCM 2014 conference
Workshop C4 - Numerical Linear Algebra
December 18, 14:35 ~ 15:25 - Room B11
## A practical framework for infinite-dimensional linear algebra
### The University of Sydney, Australia - Sheehan.Olver@sydney.edu.au
We describe a framework for solving a broad class of infinite dimensional linear equations, consisting of almost banded operators, which can be used to represent linear ordinary differential equations with general boundary conditions. The framework contains a data structure on which row operations can be performed, allowing for the solution of linear equations by the adaptive QR approach. The algorithm achieves $O(n)$ complexity, where $n$ is the number of degrees of freedom required to achieve a desired accuracy, which is determined adaptively. In addition, special tensor product equations, such as partial differential equations on rectangles, can be solved by truncating the operator along one dimension and using a generalized Schur decomposition. The framework is implemented in the ApproxFun.jl package written in the Julia programming language.
|
• arXiv.cs.MA Pub Date : 2020-02-24
Pravin S Game; Dr. Vinod Vaze; Dr. Emmanuel M
In today's day and age, solving complex real-world problems has become a fundamentally vital and critical task. Many of these are combinatorial problems, where optimal solutions are sought rather than exact solutions. Traditional optimization methods are found to be effective for small-scale problems. However, for real-world large-scale problems, traditional methods either do not scale up or fail to
Updated: 2020-03-28
• arXiv.cs.MA Pub Date : 2020-03-26
Aneesh Raghavan; John S. Baras
All propositions from the set of events for an agent in a multi-agent system might not be simultaneously verifiable. In this paper, we revisit the concepts of \textit{event-state-operation structure} and \textit{relationship of incompatibility} from literature and use them as a tool to study the algebraic structure of the set of events. We present an example from multi-agent hypothesis testing where
Updated: 2020-03-28
• arXiv.cs.MA Pub Date : 2020-03-26
Rose E. Wang; Sarah A. Wu; James A. Evans; Joshua B. Tenenbaum; David C. Parkes; Max Kleiman-Weiner
Collaboration requires agents to coordinate their behavior on the fly, sometimes cooperating to solve a single task together and other times dividing it up into sub-tasks to work on in parallel. Underlying the human ability to collaborate is theory-of-mind, the ability to infer the hidden mental states that drive others to act. Here, we develop Bayesian Delegation, a decentralized multi-agent learning
Updated: 2020-03-28
• arXiv.cs.MA Pub Date : 2020-03-26
Johannes Dahlke; Kristina Bogner; Matthias Mueller; Thomas Berger; Andreas Pyka; Bernd Ebersberger
In recent years, many scholars praised the seemingly endless possibilities of using machine learning (ML) techniques in and for agent-based simulation models (ABM). To get a more comprehensive understanding of these possibilities, we conduct a systematic literature review (SLR) and classify the literature on the application of ML in and for ABM according to a theoretically derived classification scheme
Updated: 2020-03-28
• arXiv.cs.MA Pub Date : 2020-03-23
Jiani Li; Xenofon Koutsoukos
Distributed diffusion is a powerful algorithm for multi-task state estimation which enables networked agents to interact with neighbors to process input data and diffuse information across the network. Compared to a centralized approach, diffusion offers multiple advantages that include robustness to node and link failures. In this paper, we consider distributed diffusion for multi-task estimation
Updated: 2020-03-28
• arXiv.cs.MA Pub Date : 2018-09-12
Hassam Ullah Sheikh; Ladislau Boloni
We are considering the problem of controlling a team of robotic bodyguards protecting a VIP from physical assault in the presence of neutral and/or adversarial bystanders. This task is part of a much larger class of problems involving coordinated robot behavior in the presence of humans. This problem is challenging due to the large number of active entities with different agendas, the need of cooperation
Updated: 2020-03-28
• arXiv.cs.MA Pub Date : 2020-03-25
Nirav Ajmeri (North Carolina State University); Shubham Goyal (Amazon); Munindar P. Singh (North Carolina State University)
Many cybersecurity breaches occur due to users not following good cybersecurity practices, chief among them being regulations for applying software patches to operating systems, updating applications, and maintaining strong passwords. We capture cybersecurity expectations on users as norms. We empirically investigate sanctioning mechanisms in promoting compliance with those norms as well as the detrimental
Updated: 2020-03-26
• arXiv.cs.MA Pub Date : 2020-03-24
Berat Mert Albaba; Yildiray Yildiz
In this paper, a synergistic combination of deep reinforcement learning and hierarchical game theory is proposed as a modeling framework for behavioral predictions of drivers in highway driving scenarios. The need for a modeling framework that can address multiple human-human and human-automation interactions, where all the agents can be modeled as decision makers simultaneously, is the main motivation
Updated: 2020-03-26
• arXiv.cs.MA Pub Date : 2020-03-25
Julian Bernhard; Alois Knoll
A key challenge in multi-agent systems is the design of intelligent agents solving real-world tasks in close interaction with other agents (e.g. humans), thereby being confronted with a variety of behavioral variations and limited knowledge about the true behaviors of observed agents. The practicability of existing works addressing this challenge is being limited due to using finite sets of hypothesis
Updated: 2020-03-26
• arXiv.cs.MA Pub Date : 2020-03-20
Kamil Skarzynski; Marcin Stepniak; Waldemar Bartyna; Stanislaw Ambroszkiewicz
Humans are considered integral components of Human-Robot Collaboration (HRC) systems, not only as objects (e.g. in health care), but also as operators and service providers in manufacturing. Sophisticated and complex tasks are to be collaboratively executed by devices (robots) and humans. We introduce a generic ontology for HRC systems. Description of humans is a part of the ontology. Critical and
Updated: 2020-03-24
• arXiv.cs.MA Pub Date : 2020-03-21
Guohui Ding; Joewie J. Koh; Kelly Merckaert; Bram Vanderborght; Marco M. Nicotra; Christoffer Heckman; Alessandro Roncone; Lijun Chen
We consider solving a cooperative multi-robot object manipulation task using reinforcement learning (RL). We propose two distributed multi-agent RL approaches: distributed approximate RL (DA-RL), where each agent applies Q-learning with individual reward functions; and game-theoretic RL (GT-RL), where the agents update their Q-values based on the Nash equilibrium of a bimatrix Q-value game. We validate
Updated: 2020-03-24
• arXiv.cs.MA Pub Date : 2020-03-21
Yen-Cheng Liu; Junjiao Tian; Chih-Yao Ma; Nathan Glaser; Chia-Wen Kuo; Zsolt Kira
In this paper, we propose the problem of collaborative perception, where robots can combine their local observations with those of neighboring agents in a learnable way to improve accuracy on a perception task. Unlike existing work in robotics and multi-agent reinforcement learning, we formulate the problem as one where learned information must be shared across a set of agents in a bandwidth-sensitive
Updated: 2020-03-24
• arXiv.cs.MA Pub Date : 2020-03-21
Nirupam Gupta; Nitin H. Vaidya
This report considers the problem of Byzantine fault-tolerance in multi-agent collaborative optimization. In this problem, each agent has a local cost function. The goal of a collaborative optimization algorithm is to compute a minimum of the aggregate of the agents' cost functions. We consider the case when a certain number of agents may be Byzantine faulty. Such faulty agents may not follow a prescribed
Updated: 2020-03-24
• arXiv.cs.MA Pub Date : 2020-03-22
Jehong Yoo; Reza Langari
In this paper we consider the application of Stackelberg game theory to model discretionary lane-changing in lightly congested highway setting. The fundamental intent of this model, which is parameterized to capture driver disposition (aggressiveness or inattentiveness), is to help with the development of decision-making strategies for autonomous vehicles in ways that are mindful of how human drivers
Updated: 2020-03-24
• arXiv.cs.MA Pub Date : 2020-03-22
Ahmed Allibhoy; Jorge Cortés
We propose a distributed data-based predictive control scheme to stabilize a network system described by linear dynamics. Agents cooperate to predict the future system evolution without knowledge of the dynamics, relying instead on learning a data-based representation from a single sample trajectory. We employ this representation to reformulate the finite-horizon Linear Quadratic Regulator problem
Updated: 2020-03-24
• arXiv.cs.MA Pub Date : 2020-03-23
Sheryl L. Chang; Nathan Harding; Cameron Zachreson; Oliver M. Cliff; Mikhail Prokopenko
In this paper we develop an agent-based model for a fine-grained computational simulation of the ongoing COVID-19 pandemic in Australia. This model is calibrated to reproduce several characteristics of COVID-19 transmission, accounting for its reproductive number, the length of incubation and generation periods, age-dependent attack rates, and the growth rate of cumulative incidence during a sustained
Updated: 2020-03-24
• arXiv.cs.MA Pub Date : 2020-03-23
Ryan Beal; Georgios Chalkiadakis; Timothy J. Norman; Sarvapali D. Ramchurn
In this paper we present a novel approach to optimise tactical and strategic decision making in football (soccer). We model the game of football as a multi-stage game which is made up from a Bayesian game to model the pre-match decisions and a stochastic game to model the in-match state transitions and decisions. Using this formulation, we propose a method to predict the probability of game outcomes
Updated: 2020-03-24
• arXiv.cs.MA Pub Date : 2020-02-17
Qiaomin Xie; Yudong Chen; Zhaoran Wang; Zhuoran Yang
We develop provably efficient reinforcement learning algorithms for two-player zero-sum Markov games in which the two players simultaneously take actions. To incorporate function approximation, we consider a family of Markov games where the reward function and transition kernel possess a linear structure. Both the offline and online settings of the problems are considered. In the offline setting, we
Updated: 2020-03-24
• arXiv.cs.MA Pub Date : 2018-10-10
George Christodoulou; Themistoklis Melissourgos; Paul G. Spirakis
We consider the problem of resolving contention in communication networks with selfish users. In a \textit{contention game} each of $n \geq 2$ identical players has a single information packet that she wants to transmit using one of $k \geq 1$ multiple-access channels. To do that, a player chooses a slotted-time protocol that prescribes the probabilities with which at a given time-step she will attempt
Updated: 2020-03-24
• arXiv.cs.MA Pub Date : 2020-03-19
Francesca Ceragioli; Paolo Frasca; Wilbert Samuel Rossi
This work explores models of opinion dynamics with opinion-dependent connectivity. Our starting point is that individuals have limited capabilities to engage in interactions with their peers. Motivated by this observation, we propose a continuous-time opinion dynamics model such that interactions take place with a limited number of peers: we refer to these interactions as topological, as opposed to
Updated: 2020-03-20
• arXiv.cs.MA Pub Date : 2020-03-18
Paul Cohen; Tomasz Loboda
Redistribution systems iteratively redistribute mass between groups under the control of rules. PRAM is a framework for building redistribution systems. We discuss the relationships between redistribution systems, agent-based systems, compartmental models and Bayesian models. PRAM puts agent-based models on a sound probabilistic footing by reformulating them as redistribution systems. This provides
Updated: 2020-03-20
• arXiv.cs.MA Pub Date : 2020-03-19
Tabish Rashid; Mikayel Samvelyan; Christian Schroeder de Witt; Gregory Farquhar; Jakob Foerster; Shimon Whiteson
In many real-world settings, a team of agents must coordinate its behaviour while acting in a decentralised fashion. At the same time, it is often possible to train the agents in a centralised fashion where global state information is available and communication constraints are lifted. Learning joint action-values conditioned on extra state information is an attractive way to exploit centralised learning
Updated: 2020-03-20
• arXiv.cs.MA Pub Date : 2019-02-17
Christian Kroer; Tuomas Sandholm
Limited lookahead has been studied for decades in perfect-information games. We initiate a new direction via two simultaneous deviation points: generalization to imperfect-information games and a game-theoretic approach. We study how one should act when facing an opponent whose lookahead is limited. We study this for opponents that differ based on their lookahead depth, based on whether they, too,
Updated: 2020-03-20
• arXiv.cs.MA Pub Date : 2019-10-01
David Fridovich-Keil; Vicenc Rubies-Royo; Claire J. Tomlin
Iterative linear-quadratic (ILQ) methods are widely used in the nonlinear optimal control community. Recent work has applied similar methodology in the setting of multiplayer general-sum differential games. Here, ILQ methods are capable of finding local equilibria in interactive motion planning problems in real-time. As in most iterative procedures, however, this approach can be sensitive to initial
Updated: 2020-03-20
• arXiv.cs.MA Pub Date : 2019-11-13
Luca Ballotta; Luca Schenato; Luca Carlone
This paper investigates the use of a networked system (e.g., swarm of robots, smart grid, sensor network) to monitor a time-varying phenomenon of interest in the presence of communication and computation latency. Recent advances on edge computing are enabling processing to be performed at each sensor, hence we investigate the fundamental latency-accuracy trade-off, arising when a sensor in the network
Updated: 2020-03-20
• arXiv.cs.MA Pub Date : 2020-03-17
Siddharth Agarwal (Ford AV LLC); Ankit Vora (Ford AV LLC); Gaurav Pandey (Ford Motor Company); Wayne Williams (Ford AV LLC); Helen Kourous (Ford AV LLC); James McBride (Ford Motor Company)
This paper presents a challenging multi-agent seasonal dataset collected by a fleet of Ford autonomous vehicles at different days and times during 2017-18. The vehicles traversed an average route of 66 km in Michigan that included a mix of driving scenarios such as the Detroit Airport, freeways, city-centers, university campus and suburban neighbourhoods, etc. Each vehicle used in this data collection
Updated: 2020-03-19
• arXiv.cs.MA Pub Date : 2020-03-18
Tonghan Wang; Heng Dong; Victor Lesser; Chongjie Zhang
The role concept provides a useful tool to design and understand complex multi-agent systems, which allows agents with a similar role to share similar behaviors. However, existing role-based methods use prior domain knowledge and predefine role structures and behaviors. In contrast, multi-agent reinforcement learning (MARL) provides flexibility and adaptability, but less efficiency in complex tasks
Updated: 2020-03-19
• arXiv.cs.MA Pub Date : 2020-03-18
Sven Banisch; Felix Gaisbauer; Eckehard Olbrich
What are the mechanisms by which groups with certain opinions gain public voice and force others holding a different view into silence? And how does social media play into this? Drawing on recent neuro-scientific insights into the processing of social feedback, we develop a theoretical model that allows to address these questions. The model captures phenomena described by spiral of silence theory of
Updated: 2020-03-19
• arXiv.cs.MA Pub Date : 2020-03-18
Tessa van der Heiden; Christian Weiss; Naveen Nagaraja Shankar; Efstratios Gavves; Herke van Hoof
The next generation of mobile robots needs to be socially-compliant to be accepted by humans. As simple as this task may seem, defining compliance formally is not trivial. Yet, classical reinforcement learning (RL) relies upon hard-coded reward signals. In this work, we go beyond this approach and provide the agent with intrinsic motivation using empowerment. Empowerment maximizes the influence of
Updated: 2020-03-19
• arXiv.cs.MA Pub Date : 2020-03-16
Zijia Zhong; Earl E. Lee; Mark Nejad; Joyoung Lee
Being one of the most promising applications enabled by connected and automated vehicle (CAV) technology, Cooperative Adaptive Cruise Control (CACC) is expected to be deployed in the near term on public roads. Thus far, the majority of CACC studies have focused on overall network performance, with limited insight into the potential impacts of CAVs on human-driven vehicles (HVs). This paper
Updated: 2020-03-19
• arXiv.cs.MA Pub Date : 2020-03-16
Ali-akbar Agha-mohammadi (Jet Propulsion Lab., California Institute of Technology); Andrea Tagliabue (Massachusetts Institute of Technology); Stephanie Schneider (Stanford University); Benjamin Morrell (Jet Propulsion Lab., California Institute of Technology); Marco Pavone (Stanford University); Jason Hofgartner (Jet Propulsion Lab., California Institute of Technology); Issa A. D. Nesnas (Jet Propulsion Lab., California Institute of Technology)
In this report for the NASA NIAC Phase I study, we present a mission architecture and a robotic platform, the Shapeshifter, that allow multi-domain and redundant mobility on Saturn's moon Titan, and potentially other bodies with atmospheres. The Shapeshifter is a collection of simple and affordable robotic units, called Cobots, comparable to personal palm-size quadcopters. By attaching and detaching
Updated: 2020-03-19
• arXiv.cs.MA Pub Date : 2020-03-16
Luca Ballotta; Luca Schenato; Luca Carlone
This paper investigates the use of a networked system ($e.g.$, swarm of robots, smart grid, sensor network) to monitor a time-varying phenomenon of interest in the presence of communication and computation latency. Recent advances in edge computing have enabled processing to be spread across the network, hence we investigate the fundamental computation-communication trade-off, arising when a sensor
Updated: 2020-03-19
• arXiv.cs.MA Pub Date : 2020-03-17
Marc Brittain; Xuxi Yang; Peng Wei
A novel deep multi-agent reinforcement learning framework is proposed to identify and resolve conflicts among a variable number of aircraft in a high-density, stochastic, and dynamic sector in en route airspace. Currently the sector capacity is limited by human air traffic controller's cognitive limitation. In order to scale up to a high-density airspace, in this work we investigate the feasibility
Updated: 2020-03-19
• arXiv.cs.MA Pub Date : 2020-03-18
Xinshuo Weng; Jianren Wang; Sergey Levine; Kris Kitani; Nicholas Rhinehart
Predicting the future is a crucial first step to effective control, since systems that can predict the future can select plans that lead to desired outcomes. In this work, we study the problem of future prediction at the level of 3D scenes, represented by point clouds captured by a LiDAR sensor, i.e., directly learning to forecast the evolution of >100,000 points that comprise a complete scene. We
Updated: 2020-03-19
• arXiv.cs.MA Pub Date : 2019-10-07
Ian A. Kash; Michael Sullins; Katja Hofmann
Counterfactual Regret Minimization (CFR) has found success in settings like poker which have both terminal states and perfect recall. We seek to understand how to relax these requirements. As a first step, we introduce a simple algorithm, local no-regret learning (LONR), which uses a Q-learning-like update rule to allow learning without terminal states or perfect recall. We prove its convergence for
Updated: 2020-03-18
• arXiv.cs.MA Pub Date : 2019-09-20
Renhao Wang; Adam Scibior; Frank Wood
Imitation learning is a promising approach to end-to-end training of autonomous vehicle controllers. Typically the driving process with such approaches is entirely automatic and black-box, although in practice it is desirable to control the vehicle through high-level commands, such as telling it which way to go at an intersection. In existing work this has been accomplished by the application of a
Updated: 2020-03-18
• arXiv.cs.MA Pub Date : 2020-03-13
Ti-Rong Wu; Ting-Han Wei; I-Chen Wu
AlphaZero has been very successful in many games. Unfortunately, it still consumes a huge amount of computing resources, the majority of which is spent in self-play. Hyperparameter tuning exacerbates the training cost since each hyperparameter configuration requires its own time to train one run, during which it will generate its own self-play records. As a result, multiple runs are usually needed
Updated: 2020-03-16
• arXiv.cs.MA Pub Date : 2019-11-11
Panpan Cai; Yiyuan Lee; Yuanfu Luo; David Hsu
Autonomous driving in an unregulated urban crowd is an outstanding challenge, especially, in the presence of many aggressive, high-speed traffic participants. This paper presents SUMMIT, a high-fidelity simulator that facilitates the development and testing of crowd-driving algorithms. By leveraging the open-source OpenStreetMap map database and a heterogeneous multi-agent motion prediction model developed
Updated: 2020-03-16
• arXiv.cs.MA Pub Date : 2019-09-24
Kishan Chandan; Vidisha Kudalkar; Xiang Li; Shiqi Zhang
Effective human-robot collaboration (HRC) requires extensive communication among the human and robot teammates, because their actions can potentially produce conflicts, synergies, or both. We develop a novel augmented reality (AR) interface to bridge the communication gap between human and robot teammates. Building on our AR interface, we develop an AR-mediated, negotiation-based (ARN) framework for
Updated: 2020-03-12
• arXiv.cs.MA Pub Date : 2020-03-05
Hangyu Mao; Zhibo Gong; Zhen Xiao
In cooperative multi-agent reinforcement learning (MARL), how to design a suitable reward signal to accelerate learning and stabilize convergence is a critical problem. The global reward signal assigns the same global reward to all agents without distinguishing their contributions, while the local reward signal provides different local rewards to each agent based solely on individual behavior. Both
Updated: 2020-03-10
• arXiv.cs.MA Pub Date : 2020-03-08
Wojciech Jamroga; Wojciech Penczek; Teofil Sidoruk
Recently, we proposed a framework for verification of agents' abilities in asynchronous multi-agent systems, together with an algorithm for automated reduction of models. The semantics was built on the modeling tradition of distributed systems. As we show here, this can sometimes lead to paradoxical interpretation of formulas when reasoning about the outcome of strategies. First, the semantics disregards
Updated: 2020-03-10
• arXiv.cs.MA Pub Date : 2020-03-07
Edith Elkind; Neel Patel; Alan Tsang; Yair Zick
We examine the problem of assigning plots of land to prospective buyers who prefer living next to their friends. They care not only about the plot they receive, but also about their neighbors. This externality results in a highly non-trivial problem structure, as both friendship and land value play a role in determining agent behavior. We examine mechanisms that guarantee truthful reporting of both
Updated: 2020-03-10
• arXiv.cs.MA Pub Date : 2020-03-09
Aman Sinha; Matthew O'Kelly; Hongrui Zheng; Rahul Mangharam; John Duchi; Russ Tedrake
Balancing performance and safety is crucial to deploying autonomous vehicles in multi-agent environments. In particular, autonomous racing is a domain that penalizes safe but conservative policies, highlighting the need for robust, adaptive strategies. Current approaches either make simplifying assumptions about other agents or lack robust mechanisms for online adaptation. This work makes algorithmic
Updated: 2020-03-10
• arXiv.cs.MA Pub Date : 2020-03-09
Jacob Steeves; Ala Shaabana; Matthew McAteer
A purely inter-model version of a machine intelligence benchmark would allow us to measure intelligence directly as information without projecting that information onto labeled datasets. We propose a framework in which other learners measure the informational significance of their peers across a network and use a digital ledger to negotiate the scores. However, the main benefits of measuring intelligence
Updated: 2020-03-10
• arXiv.cs.MA Pub Date : 2019-09-13
Saaduddin Mahmud; Moumita Choudhury; Md. Mosaddek Khan; Long Tran-Thanh; Nicholas R. Jennings
Evolutionary optimization is a generic population-based metaheuristic that can be adapted to solve a wide variety of optimization problems and has proven very effective for combinatorial optimization problems. However, the potential of this metaheuristic has not been utilized in Distributed Constraint Optimization Problems (DCOPs), a well-known class of combinatorial optimization problems prevalent
Updated: 2020-03-10
• arXiv.cs.MA Pub Date : 2019-08-13
Ziqi Yan; Gang Li; Jiqiang Liu
In typical collective decision-making scenarios, rank aggregation aims to combine different agents' preferences over the given alternatives into an aggregate ranking that agrees the most with all the preferences. However, since the aggregation procedure relies on a data curator, the privacy within the agents' preference data could be compromised when the curator is untrusted. All existing works that
Updated: 2020-03-10
• arXiv.cs.MA Pub Date : 2020-03-06
Yulun Tian; Alec Koppel; Amrit Singh Bedi; Jonathan P. How
We present Asynchronous Stochastic Parallel Pose Graph Optimization (ASAPP), the first asynchronous algorithm for distributed pose graph optimization (PGO) in multi-robot simultaneous localization and mapping. By enabling robots to optimize their local trajectory estimates without synchronization, ASAPP offers resiliency against communication delays and alleviates the need to wait for stragglers in
Updated: 2020-03-09
• arXiv.cs.MA Pub Date : 2020-02-21
A. L. Oestereich; M. A. Pires; S. M. Duarte Queirós; N. Crokidakis
In this work we tackle a kinetic-like model of opinion dynamics in a networked population endowed with a quenched plurality and polarization. Additionally, we consider pairwise interactions that are restrictive, which is modeled with a smooth bounded confidence. Our results show the interesting emergence of nonequilibrium hysteresis and heterogeneity-assisted ordering. Such counterintuitive phenomena
Updated: 2020-03-09
• arXiv.cs.MA Pub Date : 2020-03-05
Julian Bernhard; Klemens Esterle; Patrick Hart; Tobias Kessler
Predicting and planning interactive behaviors in complex traffic situations presents a challenging task. Especially in scenarios involving multiple traffic participants that interact densely, autonomous vehicles still struggle to interpret situations and to eventually achieve their own driving goal. As driving tests are costly and challenging scenarios are hard to find and reproduce, simulation is
Updated: 2020-03-06
• arXiv.cs.MA Pub Date : 2019-01-14
Markus Fröhle; Karl Granström; Henk Wymeersch
A decentralized Poisson multi-Bernoulli filter is proposed to track multiple vehicles using multiple high-resolution sensors. Independent filters estimate the vehicles' presence, state, and shape using a Gaussian process extent model; a decentralized filter is realized through fusion of the filters posterior densities. An efficient implementation is achieved by parametric state representation, utilization
Updated: 2020-03-06
• arXiv.cs.MA Pub Date : 2019-02-06
Kaveh Fathian; Kasra Khosoussi; Yulun Tian; Parker Lusk; Jonathan P. How
Many robotics applications require alignment and fusion of observations obtained at multiple views to form a global model of the environment. Multi-way data association methods provide a mechanism to improve alignment accuracy of pairwise associations and ensure their consistency. However, existing methods that solve this computationally challenging problem are often too slow for real-time applications
Updated: 2020-03-06
• arXiv.cs.MA Pub Date : 2019-11-16
Xiaohui Bei; Zihao Li; Jinyan Liu; Shengxin Liu; Xinhang Lu
We study the problem of fair division when the resources contain both divisible and indivisible goods. Classic fairness notions such as envy-freeness (EF) and envy-freeness up to one good (EF1) cannot be directly applied to the mixed goods setting. In this work, we propose a new fairness notion envy-freeness for mixed goods (EFM), which is a direct generalization of both EF and EF1 to the mixed goods
Updated: 2020-03-06
• arXiv.cs.MA Pub Date : 2020-03-04
Paul Pu Liang; Jeffrey Chen; Ruslan Salakhutdinov; Louis-Philippe Morency; Satwik Kottur
Several recent works have found the emergence of grounded compositional language in the communication protocols developed by mostly cooperative multi-agent systems when learned end-to-end to maximize performance on a downstream task. However, human populations learn to solve complex tasks involving communicative behaviors not only in fully cooperative settings but also in scenarios where competition
Updated: 2020-03-05
• arXiv.cs.MA Pub Date : 2020-03-04
Parker C. Lusk; Xiaoyi Cai; Samir Wadhwania; Aleix Paris; Kaveh Fathian; Jonathan P. How
Reliance on external localization infrastructure and centralized coordination are main limiting factors for formation flying of vehicles in large numbers and in unprepared environments. While solutions using onboard localization address the dependency on external infrastructure, the associated coordination strategies typically lack collision avoidance and scalability. To address these shortcomings
Updated: 2020-03-05
• arXiv.cs.MA Pub Date : 2020-03-04
Pingping Zhu; Chang Liu; Silvia Ferrari
This paper presents an adaptive online distributed optimal control approach that is applicable to optimal planning for very-large-scale robotics systems in highly uncertain environments. This approach is developed based on the optimal mass transport theory and is also viewed as an online reinforcement learning and approximate dynamic programming approach in the Wasserstein-GMM space, where a novel
Updated: 2020-03-05
• arXiv.cs.MA Pub Date : 2020-03-04
Virginia Bordignon; Vincenzo Matta; Ali H. Sayed
This work studies social learning under non-stationary conditions. Although designed for online inference, classic social learning algorithms perform poorly under drifting conditions. To mitigate this drawback, we propose the Adaptive Social Learning (ASL) strategy. This strategy leverages an adaptive Bayesian update, where the adaptation degree can be modulated by tuning a suitable step-size parameter
Updated: 2020-03-05
• arXiv.cs.MA Pub Date : 2020-02-20
Alaa Daoud
The Web is ubiquitous, increasingly populated with interconnected data, services, people, and objects. Semantic web technologies (SWT) promote uniformity of data formats, as well as modularization and reuse of specifications (e.g., ontologies), by allowing them to include and refer to information provided by other ontologies. In such a context, multi-agent system (MAS) technologies are the right abstraction
Updated: 2020-03-05
• arXiv.cs.MA Pub Date : 2019-03-28
María Santos; Magnus Egerstedt
This paper explores the expressive capabilities of a swarm of miniature mobile robots within the context of inter-robot interactions and their mapping to the so-called fundamental emotions. In particular, we investigate how motion and shape descriptors that are psychologically associated with different emotions can be incorporated into different swarm behaviors for the purpose of artistic expositions
Updated: 2020-03-05
• arXiv.cs.MA Pub Date : 2019-08-07
Olivier Beaude; Pascal Benchimol; Stéphane Gaubert; Paulin Jacquot; Nadia Oudjane
We consider a resource allocation problem involving a large number of agents with individual constraints subject to privacy, and a central operator whose objective is to optimize a global, possibly nonconvex, cost while satisfying the agents' constraints, for instance an energy operator in charge of the management of energy consumption flexibilities of many individual consumers. We provide a privacy-preserving
Updated: 2020-03-05
• arXiv.cs.MA Pub Date : 2019-10-09
Ramy E. Ali; Bilgehan Erman; Ejder Baştuğ; Bruce Cilli
This paper explores a deep reinforcement learning approach applied to the packet routing problem with high-dimensional constraints instigated by dynamic and autonomous communication networks. Our approach is motivated by the fact that centralized path calculation approaches are often not scalable, whereas the distributed approaches with locally acting nodes are not fully aware of the end-to-end performance
Updated: 2020-03-05
Contents have been reproduced by permission of the publishers.
[OS X TeX] Landscape mode
Juergen Fenn juergen.fenn at GMX.DE
Sat May 24 20:11:49 CEST 2008
Holger Schulz wrote:
>>> Is there an easy way to use landscape mode?
>>
>> \usepackage[landscape]{geometry}
>
> That doesn't work either. The document is still displayed in portrait mode.
No, it isn't. Try this one, so you will see how it works:
\documentclass[12pt,landscape]{article}
\usepackage[landscape]{geometry}
\usepackage[ngerman]{babel}
\usepackage{blindtext}
\begin{document}
\blindtext
\end{document}
# On the dangers of using the growth equation on large scales in the Newtonian gauge.
## Abstract
We examine the accuracy of the growth equation, which is ubiquitous in the cosmological literature, in the context of the Newtonian gauge. By comparing the growth predicted by this equation to a numerical solution of the linearized Einstein equations in the ΛCDM scenario, we show that while this equation is a reliable approximation on small scales (h Mpc), it can be disastrously inaccurate () on larger scales in this gauge. We propose a modified version of the growth equation for the Newtonian gauge which, while preserving the simplicity of the original equation, provides considerably more accurate results. We examine the implications of the failure of the growth equation for a few recent studies, aimed at discriminating general relativity from modified gravity, which use this equation as a starting point. We show that while the results of these studies are valid on small scales, they are not reliable on large scales or at high redshifts if one works in the Newtonian gauge. Finally, we discuss the growth equation in the synchronous gauge and show that the corrections to the Poisson equation are exactly equivalent to the difference between the overdensities in the synchronous and Newtonian gauges.
## I Introduction
The past few decades have witnessed a remarkable improvement in the level of precision in cosmological observations. From measurements of the temperature fluctuations of the CMB, along with the total energy budget and flatness of the universe, to distance measurements and determinations of the Hubble parameter, errors on the order of a few percent are now commonplace. Within an arena of such precision, theorists must be especially cautious of deeply embedded approximations within their calculations. These approximations may be harmless under a given set of assumptions (for example, small scales and small redshifts), but dangerous if implemented in situations where those assumptions are not applicable (e.g. future probes on large scales and high redshifts).
With this in mind, in the present work we examine the growth equation in the Newtonian gauge (we discuss the relation between the conformal Newtonian gauge and the synchronous gauge in Sec. V), the derivation of which involves a number of approximations. The most important of these, as we demonstrate, is the Poisson equation, which is used to link the matter perturbations (and subsequently their growth) to the metric perturbations. We show that in linearized general relativity, the Poisson equation follows from one of the Einstein constraint equations, in which two terms have been discarded. While on small scales these terms can be safely neglected, we show that on large scales or at large redshift at least one of them can be of the same order as the perturbative variables, and its absence can introduce significant errors into the growth equation. That these terms may be dominant on large, horizon-size scales has of course been known for a long time. What has not been appreciated is that on sub-horizon scales these terms may generate contributions which are on the order of experimental precision. We estimate the error arising from the growth equation on various scales and redshifts, and show that the error can be significant enough in the context of current and proposed experimental limits that caution should be exercised in its use in calculations.
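To make the discarded terms concrete, the relevant relations can be sketched in a common convention (conformal time, no anisotropic stress so that the two potentials coincide; signs depend on the metric convention, so this is an illustrative sketch rather than a quote of the paper's equations):

```latex
% 00 Einstein constraint in the conformal Newtonian gauge (common convention):
\begin{align}
  k^2 \Phi + 3\mathcal{H}\bigl(\dot{\Phi} + \mathcal{H}\Phi\bigr)
    &= -4\pi G a^2 \bar{\rho}\,\delta , \\
% Dropping the two \mathcal{H}-terms yields the familiar Poisson equation:
  k^2 \Phi &\simeq -4\pi G a^2 \bar{\rho}\,\delta
    \qquad (k \gg \mathcal{H}) .
\end{align}
```

In this form the two discarded terms are $3\mathcal{H}\dot{\Phi}$ and $3\mathcal{H}^2\Phi$; since $\dot{\Phi} \sim \mathcal{H}\Phi$, both are suppressed relative to $k^2\Phi$ by a factor of order $(\mathcal{H}/k)^2$, which is tiny deep inside the horizon but approaches unity on horizon scales, consistent with the discussion above.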
Being aware of this error, and thus avoiding it, is important for the many models which attempt to break the degeneracy between modified gravity (MG) and dynamical dark energy (DDE) (see for example zhang (); acquaviva (); Linder:2005in (); lindercahn (); polarski07 (); dore08 ()). These MG studies attempt to discern a deviation from general relativity (GR) in observations such as gravitational lensing, large scale structure growth, and the Integrated Sachs-Wolfe (ISW) effect. The Newtonian gauge is used in these investigations because its metric perturbations are the gravitational potential(s) (the two perturbations are equivalent in the case of no anisotropic stress, and the difference between them is parameterized in MG by what is called the gravitational slip). For example, in lensing studies it is the gradient of these potentials which is relevant (given by the Poisson equation in the case of no anisotropic stress); in the ISW effect it is the time variation of the potentials which enters the calculation; and in structure growth it is the Poisson equation which is used to derive the growth equation. Working in the Newtonian gauge, then, the Poisson equation arises naturally, plays a central role, and is used throughout the MG literature.
In this work we will take a more detailed look at a few of these recent investigations of MG, as well as proposed measurements at large redshift. As stated above, these models are particularly sensitive to any adjustments to the Poisson equation, since they modify the Poisson equation in order to investigate non-standard cosmology. The fact that modifications to GR are proposed to operate at large scales (due to the accuracy to which GR has been tested at small scales) is a further indication that one should be mindful not to mistake for a signal of MG what is in fact a sign that one's approximation is becoming less accurate at the scale of interest. In other words, GR may be erroneously thought to break down when the real culprit is the approximation used, which discards terms inherent in GR.
The outline of our paper is as follows. Except in the last section, our entire work is in the Newtonian gauge. In Sec.II we review the derivation of the growth equation and the Poisson equation, paying attention to the assumptions and approximations that enter the calculations. In Sec.III we compare the growth predictions from the growth equation to a direct numerical integration of the linearized Einstein equations in the ΛCDM scenario, and demonstrate that the chief source of error in the growth equation arises from the Poisson equation. Sec.IV studies the implications of using the growth equation in recent efforts to discriminate general relativity from modified gravity theories. Sec.V deals with relating the growth equation and matter perturbations in the synchronous and Newtonian gauges. Conclusions are found in Sec.VI.
## II Linearized Einstein gravity: the Poisson and the growth equations
In this section we review the derivation of the growth equation, highlighting the assumptions and approximations that go into the derivation. We work in ordinary general relativity (GR) with a general stress-energy tensor (this can include DDE). We use the notation of copeland (); Hwangtachyon (); Hwang05 (); ddw (). The perturbed metric is of the form
$$ds^2 = -(1+2A)\,dt^2 + 2a\,\partial_i B\,dx^i\,dt + a^2\left[(1+2\psi)\,\delta_{ij} + 2\,\partial_i\partial_j E\right]dx^i dx^j, \qquad (1)$$
where $A$, $B$, $\psi$, and $E$ represent metric perturbations and $a$ represents the cosmic scale factor.
Working in Newtonian gauge (we are working with coordinate time, and therefore we are not working in the “conformal” Newtonian gauge, as we do in Sec.V when comparing with the synchronous gauge formulation), which corresponds to a transformation to a frame such that $E = B = 0$, the gauge-invariant variables characterizing the metric perturbations become:
$$\Phi \equiv A - \frac{d}{dt}\left[a^2\left(\dot E + B/a\right)\right] \;\to\; A, \qquad (2)$$
$$\Psi \equiv -\psi + a^2 H\left(\dot E + B/a\right) \;\to\; -\psi. \qquad (3)$$
The energy-momentum tensor can be decomposed as
$$T^0_{\ 0} = -(\rho + \delta\rho), \quad T^0_{\ \alpha} = -(\rho+p)\,v_{,\alpha}, \quad T^\alpha_{\ \beta} = (p + \delta p)\,\delta^\alpha_{\ \beta} + \Pi^\alpha_{\ \beta}, \qquad (4)$$
where $\Pi^\alpha_{\ \beta}$ is a tracefree anisotropic stress, and $p$ and $\rho$ denote the pressure and density of the cosmic fluid respectively. For the moment we take the anisotropic stress to be zero.
The perturbed Einstein equations yield, at linear order,
$$-\Phi + \Psi = 0 \qquad (5)$$
$$-\frac{\Delta}{a^2}\Phi + 3H^2\Phi + 3H\dot\Phi = -4\pi G\,\delta\rho \qquad (6)$$
$$H\Phi + \dot\Phi = 4\pi G\,a(\rho+p)\,v \qquad (7)$$
$$3\ddot\Phi + 9H\dot\Phi + \left(6\dot H + 6H^2 + \frac{\Delta}{a^2}\right)\Phi = 4\pi G\left(\delta\rho + 3\delta p\right) \qquad (8)$$
$$\dot{\delta\rho} + 3H\left(\delta\rho + \delta p\right) = (\rho+p)\left(3\dot\Phi + \frac{\Delta}{a}v\right) \qquad (9)$$
$$\frac{1}{a^4(\rho+p)}\frac{d}{dt}\left[a^4(\rho+p)\,v\right] = \frac{1}{a}\left(\Phi + \frac{\delta p}{\rho+p}\right) \qquad (10)$$
where a dot, bold or otherwise, denotes a derivative with respect to coordinate time $t$, $\Delta$ denotes the spatial Laplacian, and a $\delta$ preceding any quantity denotes a perturbation in that quantity. These equations determine the two metric perturbations $\Phi$ and $\Psi$, along with the velocity potential $v$ and the matter perturbation $\delta\rho$, which is conventionally re-written as $\delta \equiv \delta\rho/\rho$.
We have used the relation between $\Phi$ and $\Psi$ from Eq. (5) in the subsequent equations. This relation simply reflects our assumption of no anisotropic stress. From this point on we use the Newtonian potential $\Phi$ to characterize the metric perturbation.
We now switch to Fourier space, an extremely convenient transformation in the theory of linear perturbations. In Fourier-transformed $k$-space, all perturbative variables are replaced by their Fourier transforms. For convenience, we suppress the $k$-subscripts, and in what follows (unless otherwise mentioned) all perturbative variables can be assumed to be in Fourier space. A subscript $0$ denotes the present (zero redshift) value of a quantity.
Eq. (6) allows us to express the energy overdensity in terms of the potential and the background variables as follows:
$$-4\pi G\,\delta\rho = \frac{k^2}{a^2}\Phi + 3H^2\Phi + 3H\dot\Phi \qquad (11)$$
One can see from this equation that on small scales ($k \gg aH$) the first term on the right hand side dominates. If one also assumes that the gravitational potential is slowly varying (as in the case of matter domination), then one recovers the familiar Poisson equation
$$-4\pi G\,\delta\rho = \frac{k^2}{a^2}\Phi \qquad (12)$$
These approximations feed into the derivation of an equation governing the growth of structure in linearized gravity, commonly called the “growth equation”, which we now derive.
Starting from Eqns.(9) and (10) we find
$$\dot\delta = -3H\Theta + 3(1+w)\dot\Phi - (1+w)\frac{k^2}{a}v \qquad (13)$$
$$\dot v = -vH(1-3w) - \frac{\dot w}{1+w}v + \frac{1}{a}\left[\Phi + \frac{w}{1+w}\delta + \frac{\Theta}{1+w}\right] \qquad (14)$$
where $w$ is the EoS function defined in terms of pressure and energy density as $w \equiv p/\rho$, and $\Theta$ is the combination of pressure and density perturbations defined in ddw (). It can be verified that these equations are identical to Eq. 30 in mabertschinger ().
The equation which determines the growth of the matter perturbation is found to be
$$\ddot\delta + (2-3w)H\dot\delta + \frac{k^2}{a^2}w\,\delta + \frac{k^2}{a^2}(1+w)\Phi = 3(1+w)\left[\ddot\Phi + (2-3w)H\dot\Phi\right] + 3\dot w\,\dot\Phi - 3H\dot\Theta + \left[-\frac{k^2}{a^2} + \frac{3H^2}{2}(1+9w)\right]\Theta \qquad (15)$$
(one way to see this is by substituting $v$ from Eq. (7) and $\delta\rho$ from Eq. (11) into Eq. (10), and using the time derivative of Eq. (9)).
To get from Eq. (15), which is exactly true at linear order, to the growth equation one must make several assumptions. First, we assume the metric perturbation $\Phi$ is slowly varying and discard all its derivatives. Next, we take $\delta p/\delta\rho \approx w$, which sets the effective sound speed of the cosmic fluid approximately equal to its equation of state $w$, and we make both constant in time. We then take $w \to 0$ and $\Theta \to 0$. These approximations are perfectly reasonable in a universe heavily dominated by pressureless perfect fluid matter, and they lead to the following much simplified equation:
$$\ddot\delta + 2H\dot\delta + \frac{k^2}{a^2}\Phi = 0 \qquad (16)$$
Finally using the approximation Eq.(12) we recover the familiar growth equation:
$$\ddot\delta + 2H\dot\delta - 4\pi G\rho\,\delta = 0. \qquad (17)$$
The growth equation is ubiquitous in cosmology and is used in many different forms, usually as either a first or second order differential equation in a certain growth variable, which is a function of $\delta$. We list below three versions of this equation which will be relevant to this work.
One choice of the growth factor is $G \equiv d\ln(\delta/a)/d\ln a$, which leads to the equation:
$$\frac{dG}{d\ln a} + \left(4 + \frac{1}{2}\frac{d\ln H^2}{d\ln a}\right)G + G^2 + 3 + \frac{1}{2}\frac{d\ln H^2}{d\ln a} - \frac{3}{2}\,\Omega_m(a) = 0 \qquad (18)$$
A slightly different choice of growth factor is $f \equiv d\ln\delta/d\ln a$, leading to the equation
$$\frac{df}{d\ln a} + f^2 + \frac{1}{2}\left(1 - \frac{d\ln\Omega_m}{d\ln a}\right)f - \frac{3}{2}\,\Omega_m = 0 \qquad (19)$$
Clearly $f = G + 1$.
A third choice of growth factor, $D \propto \delta$, yields the equation:
$$\frac{d^2 D}{d(\ln a)^2} + \left(2 + \frac{1}{H}\frac{dH}{d\ln a}\right)\frac{dD}{d\ln a} - \frac{4\pi G\rho}{H^2}\,D = 0 \qquad (20)$$
The question is then: how accurate is the growth equation upon use of these approximations? Certainly in the matter dominated era and on small scales these approximations appear justified. However, in this era of precision cosmology one must be careful to avoid throwing away terms which may account for measurable deviations. We now address the size and seriousness of such errors.
## III Testing and improving on the growth equation
In this section, we discuss in detail the errors that arise in the growth equation from assuming the Poisson equation to be true in the Newtonian gauge. We assume a model consisting of perfect fluid dark matter and a cosmological constant dark energy. We test the growth equation Eq. (17) by comparing it to a numerical integration of the full set of linearized Einstein equations governing the growth of perturbations in this ΛCDM Universe. We ignore radiation in the calculation presented here, but we have verified that including radiation does not change our conclusions.
Note that a more complicated model including baryons and neutrinos can introduce further sources of divergence between the growth equation and the true growth in the Newtonian gauge. Baryons have been shown to significantly affect the growth of the overdensity (contributing an error of 10% on scales below 10 Mpc), and on large scales the neutrino anisotropic stress cannot be ignored. Details on both these effects can be found in Green:2005kf (). Stochastic corrections to $\delta$ as a result of random forces (possibly arising from the “graininess” of the underlying system of particles) were explored in stochastic (), where it was shown that these random forces can lead to significant deviations from the nonstochastic solution at late times. However, we ignore these effects here, and focus only on the errors arising from relativistic corrections to the Poisson equation.
The zero-th order Einstein and Euler equations describing the evolution of the background of this system are the following:
$$2\dot H + 3H^2 = 8\pi G\rho_\Lambda \qquad (21)$$
$$\dot\rho = -3H\rho \qquad (22)$$
subject to the constraint
$$3H^2 = 3\left(\frac{\dot a}{a}\right)^2 = 8\pi G\left(\rho + \rho_\Lambda\right) \qquad (23)$$
Here $\rho_\Lambda$ is the energy density of the cosmological constant. These equations can be solved analytically gron () to yield:
$$a(t) = \left[\frac{\Omega_m}{1-\Omega_m}\right]^{1/3}\sinh^{2/3}\!\left(\frac{t}{t_\Lambda}\right) \qquad (24)$$
$$H(t) = \frac{2}{3t_\Lambda}\coth\!\left(\frac{t}{t_\Lambda}\right) \qquad (25)$$
$$\rho(t) = \rho_\Lambda\,\sinh^{-2}\!\left(\frac{t}{t_\Lambda}\right) \qquad (26)$$
where $t_\Lambda \equiv (6\pi G\rho_\Lambda)^{-1/2} = 2/(3H_0\sqrt{1-\Omega_m})$.
The first-order equations governing the evolution of perturbations can be easily derived. Using $w = \delta p = 0$ and defining $v_f \equiv -av$, we find from the equations of Sec.II together with (13) and (14):
$$\ddot\Phi = -4H\dot\Phi - 8\pi G\rho_\Lambda\Phi \qquad (27)$$
$$\dot\delta = 3\dot\Phi + \frac{k^2}{a^2}v_f \qquad (28)$$
$$\dot v_f = -\Phi \qquad (29)$$
Equations (6) and (7) yield the constraints:
$$3H\left(H\Phi + \dot\Phi\right) + \frac{k^2}{a^2}\Phi = -4\pi G\,\delta\rho \qquad (30)$$
$$H\Phi + \dot\Phi = -4\pi G\rho\,v_f \qquad (31)$$
An alternative method of deriving Eqs. (27)-(31) is to start with the zero-th and first order Einstein equations for a Universe consisting of matter and scalar field dark energy (these can be found in, for instance, liddle () in the Newtonian gauge and duttamaor () in the synchronous gauge), and then to take the limit as the scalar field tends to a cosmological constant, that is, the limit of a constant potential and vanishing kinetic energy.
For the numerical integration, we set initial conditions on $\Phi$ and $\dot\Phi$ at an early redshift deep in the matter dominated era. Equations (30) and (31) are used as independent checks on the accuracy of our numerics. We call the growth predicted by our numerical evolution the “true growth”. A simple Mathematica notebook which performs the above numerical integration can be found at website ().
Henceforth, we denote the true growth by the variable $\delta$ and that predicted by the growth equation (17) by $\delta_g$. We express the percentage departure of $\delta_g$ from $\delta$ by the quantity
$$\Delta \equiv \frac{\delta_g - \delta}{\delta}. \qquad (32)$$
Fig. 1 shows $\Delta$ as a function of scale, and we can see that the growth equation fails spectacularly as one approaches the horizon scale. This is also reflected in Fig. 2, which shows $\Delta$ as a function of redshift for different scales. The failure at horizon scales is of course not a surprise. What is of interest is that significant deviations arise at scales within reach of experiment, and at levels that could be misinterpreted as a signal of the breakdown of GR if one were unaware that the Poisson equation approximation itself is breaking down at these scales.
Aside from ΛCDM, we have also verified the failure of the growth equation in a model where the dark energy is a scalar field with a quadratic potential (with mass comparable to the present value of the Hubble parameter), e.g. the scenario considered in detail in duttamaor ().
Having established the failure of the growth equation on large scales, we now investigate the cause of this failure and propose an improved version of the equation. On large scales, the most obviously suspect step in the derivation of Eq. (17) is the replacement of Eq. (11) by the Poisson equation, or in other words, the neglect of the terms $3H^2\Phi$ and $3H\dot\Phi$ in comparison to the term $(k^2/a^2)\Phi$. While this is justified on small scales ($k \gg aH$), it is not justified on scales close to the horizon, where $3H^2\Phi$ and $(k^2/a^2)\Phi$ are of the same order. The error introduced in neglecting a term of the same perturbative order is rapidly magnified in the process of integrating over the age of the Universe, leading to the gigantic deviation from the true growth shown in Figs.1-2.
A smaller error is introduced in ignoring the time derivative $\dot\Phi$ of the metric perturbation. However, we have verified that this does not lead to a significant error except on horizon scales.
To understand the magnitude of the error introduced by ignoring the term proportional to $H^2$, let $\xi$ denote the ratio of the $3H^2\Phi$ term to the $(k^2/a^2)\Phi$ term, i.e.,
$$\xi = \frac{3a^2H^2}{k^2} \qquad (33)$$
We can estimate the size of this term (in the matter dominated regime, for simplicity) by using
$$H^2 = H_0^2\,(1+z)^3\,\Omega_{m0}\,; \qquad a = \frac{a_0}{1+z} \qquad (34)$$
where the subscript zero denotes today's value. For convenience, we express $k$ as
$$k = 10^{-n}\,h\ {\rm Mpc}^{-1} \qquad (35)$$
such that $n$ now parameterizes the scale. Finally, the present value of the Hubble parameter is expressed in the usual way, as
$$H_0 = 100\,h\ \frac{{\rm km}}{{\rm s\ Mpc}} \qquad (36)$$
Plugging the above into Eq.(33), using $a_0 = 1$ (and of course restoring the correct units by dividing by the speed of light squared), we obtain the following simple formula for $\xi$ which shows its dependence on both physical scale and redshift:
$$\xi = (1+z)\,10^{2n-7} \qquad (37)$$
In Fig.3 we plot $\xi$ as a function of redshift at various scales. One can see that the discarded term $3H^2\Phi$ can be a substantial, non-negligible fraction of the retained term as $n$ becomes larger than 2 (i.e., $k < 10^{-2}\,h\,{\rm Mpc}^{-1}$), or at large redshifts. At $n = 3$, even at small redshifts this term will be $\gtrsim 10\%$ of the retained term. As mentioned before, the cumulative effect of integrating over the history of the Universe causes the error to magnify rapidly.
Based on the above considerations, we propose the following modification to the growth equation
$$\ddot\delta + 2H\dot\delta - \frac{4\pi G\rho}{1+\xi}\,\delta = 0 \qquad (38)$$
Figs.4-5, which show the $\Delta$ arising from the improved growth equation (let us call this $\Delta_i$), verify that Eq.(38) provides a far better approximation to the true growth than the usual growth equation. Even on scales of the order of $0.01\,h\,{\rm Mpc}^{-1}$, the error is on the order of a percent for all redshifts. For scales on the order of the horizon, the error is somewhat larger at low redshifts, presumably as a result of dark energy dominance, which causes the time-derivative terms in Eq. (11) to become significant.
The cumulative effect of integrating a small error over a large time period can be demonstrated by considering the difference between the usual growth equation Eq.(17) and the improved growth equation Eq.(38). Taking the growth predicted by the improved growth equation as a proxy for the true growth (according to Fig.4, the difference between $\delta_{gi}$ and $\delta$ is negligible up to near-horizon scales), one can define the error variable $\tilde\Delta$ in terms of $\delta_{gi}$ (rather than $\delta$ as in Eq. (32)):
$$\tilde\Delta \equiv \frac{\delta_g - \delta_{gi}}{\delta_{gi}}. \qquad (39)$$
where $\delta_g$ and $\delta_{gi}$ evolve according to Eqs.(17) and (38) respectively. The growth variables defined in this section are listed for convenience in Table 1.
One can now construct a differential equation for the evolution of $\tilde\Delta$ for a given $k$ (under the assumption of matter domination):
$$\ddot{\tilde\Delta} + 2\left(\frac{2+\xi}{1+\xi}\right)H\dot{\tilde\Delta} - 4\pi G\rho\,\frac{\xi}{1+\xi}\left(1+\tilde\Delta\right) = 0 \qquad (40)$$
Evolving this equation with the initial conditions $\tilde\Delta = \dot{\tilde\Delta} = 0$, one obtains almost identical results to those shown in Fig. 2 up to near-horizon scales. In other words, this shows that the large error in Figs. 1 and 2 is (largely) the result of ignoring the $3H^2\Phi$ term in comparison to the $(k^2/a^2)\Phi$ term on large scales.
We wish to emphasize that contrary to what is often suggested in the literature (e.g. acquaviva ()), the growth factor is not scale independent. This can clearly be seen from the form of the correction Eq. (33). Hence an observed scale dependence in the growth factor cannot be taken to imply a departure from Einstein gravity.
Finally, note that the other versions of the growth equation mentioned previously need to be modified by such a term as well. For example, in terms of the growth factor $G$, the growth equation Eq.(18) becomes
$$\frac{dG}{d\ln a} + \left(4 + \frac{1}{2}\frac{d\ln H^2}{d\ln a}\right)G + G^2 + 3 + \frac{1}{2}\frac{d\ln H^2}{d\ln a} - \frac{3}{2}\,\frac{\Omega_m(a)}{1+\xi} = 0 \qquad (41)$$
Based on the above discussion, we recommend the use of Eq. (38) over the usual Eq. (17) as a much better approximation to the linear growth of matter perturbations on large scales. The importance of using a version of the growth equation accurate at large scales and high redshifts is underscored by the fact that scales on the order of $0.01\,h\,{\rm Mpc}^{-1}$ and high redshifts are within the reach of future surveys such as ADEPT adept (), as well as surveys based on the 21 centimeter emission.
## IV Probing beyond-Einstein physics
We now examine the implications of the failure of the usual growth equation for some recent studies which have used the growth equation as a probe to distinguish Einstein gravity from new physics. We first consider the work by Linder Linder:2005in () and Linder and Cahn lindercahn (). In Linder:2005in (), the author finds that the linear growth of structure according to Einstein gravity (represented by Eq. (17)) can be modeled with remarkable accuracy by a single parameter $\gamma$ for the entire expansion history of the Universe. In particular, if one uses Eq. (18), then the growth history, characterized by the quantity $G(a)$, can be described by the relationship
$$G(a) = \Omega_m(a)^{\gamma} - 1 \qquad (42)$$
The parameter $\gamma$, dubbed the “growth index”, is given by the fitting formulas
$$\gamma = 0.55 + 0.02\left[1 + w(z=1)\right], \quad w < -1 \qquad (43)$$
$$\gamma = 0.55 + 0.05\left[1 + w(z=1)\right], \quad w > -1 \qquad (44)$$
over the whole range of $w$ considered.
In lindercahn () the authors provide analytical arguments supporting the above formulas, and contend that the narrow range in which this parameter is constrained in the context of Einstein gravity allows for a possible distinction of Einstein gravity from other gravitational theories, in which this parameter need not be so constrained. It is not difficult to repeat the steps of their analytic calculation, but now with the $\xi$ term included. One can solve Eq.(41) formally as
$$G(a) = -1 + \frac{1}{a^4H(a)}\int_0^a \frac{da'}{a'}\,a'^4H(a')\left(1 + \frac{3}{2(1+\xi)} - \frac{3}{2}\frac{\Omega_w}{1+\xi} - G(a')^2\right) \qquad (45)$$
where the substitution $\Omega_m = 1 - \Omega_w$, with $\Omega_w$ the dark energy fraction, has been made. In the matter dominated regime we use the fact that $\Omega_w \ll 1$. The $G(a')^2$ term can be neglected, and using the form of the Hubble parameter in a matter dominated universe, $H \propto a^{-3/2}$, $G(a)$ is found to be
$$G(a) = -1 + \frac{2}{5}\left(1-\frac{\Omega_w}{2}\right) + a^{-5/2}\left(1-\frac{\Omega_w}{2}\right)\int_0^a\frac{da'}{a'}\,a'^{5/2}\,\frac{3}{2(1+\xi)} + a^{-5/2}\int_0^a\frac{da'}{a'}\,a'^{5/2}\,\Omega_w\left(\frac{1}{2}-\frac{3}{4(1+\xi)}\right) \qquad (46)$$
The corrected, $\xi$-dependent, growth index can then be recovered by using
$$\gamma \approx -\frac{G(a)}{\Omega_w(a)} \qquad (47)$$
in the above expression for $G(a)$.
To see how well the fit described in equations (42)-(44) works for the true growth (as opposed to the growth predicted by Eq. (17)), we plot the growth function as derived from the true growth against the fit over a range of redshifts. The growth function derived from the growth equation is shown for reference.
Fig. 6 confirms the findings of Linder:2005in (); lindercahn () for the growth equation (and hence for small scales), but shows a drastic departure of the $\gamma$-fits from the true growth on large scales. For these scales, it turns out that $\gamma$ has a very strong redshift dependence, and the simple fits of Eqs. (43)-(44) do not reflect the true growth, even approximately.
For the true growth on large scales, the behavior of the growth index is shown for three different scales in Fig. 7. It is clear that for the true growth on large scales, $\gamma$ has a strongly non-linear dependence and is by no means restricted to a small range. Hence on these scales it cannot be used to discriminate between GR and modified gravity theories.
Polarski and Gannouji polarski07 () have proposed an interesting strategy for discriminating GR from modified gravity using the present values of the growth index $\gamma_0$ and its derivative $\gamma_0'$. Starting with the growth equation in the form of Eq. (19), they obtain the following constraint condition linking $\gamma_0$, $\gamma_0'$, $\Omega_{m,0}$, and $w_{{\rm eff},0}$ (where $w_{\rm eff}$ is the equation of state parameter of the dark energy component):
$$\gamma_0' = \left(\ln\Omega_{m,0}^{-1}\right)^{-1}\left[-\Omega_{m,0}^{\gamma_0} - 3\left(\gamma_0-\frac{1}{2}\right)w_{{\rm eff},0} + \frac{3}{2}\,\Omega_{m,0}^{1-\gamma_0} - \frac{1}{2}\right] \qquad (48)$$
Based on this equation, Polarski et al. derive a narrow allowed range of $\gamma_0'$ for ΛCDM. They therefore conclude that a measurement of $\gamma_0'$ outside this range would signal a departure from ΛCDM.
However, in the light of our discussion in Sec.III, the correct starting point should be the modified growth equation Eq. (38). This leads to the following modified form of the constraint equation:
$$\gamma_0' = \left(\ln\Omega_{m,0}^{-1}\right)^{-1}\left[-\Omega_{m,0}^{\gamma_0} - 3\left(\gamma_0-\frac{1}{2}\right)w_0 + \frac{3}{2}\,\frac{\Omega_{m,0}^{1-\gamma_0}}{1+\xi} - \frac{1}{2}\right] \qquad (49)$$
Clearly, the $\xi$-correction can cause the value of $\gamma_0'$ to depart quite drastically from the range obtained by Polarski et al. For representative parameter choices, Eq. (48) yields a value of $\gamma_0'$ within the range mentioned above, and on small scales Eq. (49) agrees with it. However, on larger scales Eq. (49) yields values of $\gamma_0'$ well outside that range, indicating that the bounds derived using the growth equation are not respected on large scales.
Acquaviva et al. acquaviva () have suggested a null-test parameter (in which $\gamma$ is given by Eq. (44)) as a tool to discriminate GR from modified gravity. They claim that any non-zero measurement of this parameter which cannot be attributed to systematics should be interpreted as a signature of modified gravity. However, it is clear that the parameter is based on the growth equation Eq. (17), and given the failure of the growth equation on large scales, the parameter is not reliable there. To demonstrate this point, we plot the value of this parameter vs redshift in Fig. 8 for three scales. We see that on small scales the parameter is close to zero (as expected), but strongly deviates from zero at large scales and large redshifts.
## V Choice of Gauge
In this section we examine the relationship between the growth equation in the synchronous and conformal Newtonian gauges (so far we have not used the conformal time form of the line element, but we do so here for easier comparison with previous works). We rely heavily on the work of Ma and Bertschinger mabertschinger (), and any notational differences between our work and theirs will be specified for clarity. The line element in the synchronous gauge is given by
$$ds^2 = a^2(\tau)\left(-d\tau^2 + \left(\delta_{ij} + h_{ij}\right)dx^i dx^j\right) \qquad (50)$$
The Einstein equations are written in terms of two functions, $h$ (which is the trace of $h_{ij}$) and $\eta$, that are defined through the Fourier integral of the metric perturbation as
$$h_{ij}(\vec x,\tau) = \int d^3k\; e^{i\vec k\cdot\vec x}\left[\hat k_i\hat k_j\,h(\vec k,\tau) + \left(\hat k_i\hat k_j - \frac{1}{3}\delta_{ij}\right)6\eta(\vec k,\tau)\right] \qquad (51)$$
In mabertschinger () the line element in conformal Newtonian gauge is written as
$$ds^2 = a^2(\tau)\left[-(1+2\psi)\,d\tau^2 + (1-2\phi)\,dx^i dx_i\right] \qquad (52)$$
In the case we are examining, where anisotropic stress is absent, we can relate the variables $\psi$ and $\phi$ to our $\Phi$ as $\psi = \phi = \Phi$.
The variables in the synchronous and conformal Newtonian gauges can be shown to be related by
$$\Phi = \frac{1}{2k^2}\left(h''(\vec k,\tau) + 6\eta''(\vec k,\tau) + \frac{a'}{a}\left[h'(\vec k,\tau) + 6\eta'(\vec k,\tau)\right]\right) \qquad (53)$$
where the prime denotes a derivative with respect to conformal time $\tau$, whereas a dot represents a derivative with respect to coordinate time $t$. Using the variable $\alpha \equiv (h' + 6\eta')/(2k^2)$, this relation can be written as
$$\Phi = a\dot\alpha + \dot a\,\alpha = \frac{d}{dt}(a\alpha) \qquad (54)$$
The Einstein equations in the synchronous gauge in the absence of anisotropic stress are given by
$$k^2\eta - \frac{1}{2}\frac{a'}{a}h' = -4\pi G_N a^2\,\delta\rho_s \qquad (55)$$
$$k^2\eta' = 4\pi G_N a^2(\rho+p)\,\theta \qquad (56)$$
$$h'' + 2\frac{a'}{a}h' - 2k^2\eta = -8\pi G_N a^2\left(3\delta p_s\right) \qquad (57)$$
$$h'' + 6\eta'' + 2\frac{a'}{a}\left(h' + 6\eta'\right) - 2k^2\eta = 0 \qquad (58)$$
where $\theta$ is related to the fluid velocity by a divergence, $\theta \equiv ik^jv_j$, and the subscript $s$ denotes the synchronous gauge (we follow mabertschinger () here, and likewise use a subscript $c$ to denote the conformal Newtonian gauge). In the simple case of a matter dominated universe one can derive the growth equation for $\delta_s$ from these equations, and one arrives at
$$\ddot\delta_s + 2H\dot\delta_s - 4\pi G_N\rho\,\delta_s = 0 \qquad (59)$$
We see that the growth equation is exact in the synchronous gauge in the case of matter domination. In order to compare this to the case of the conformal Newtonian gauge we make use of the relation between the perturbations in the two gauges
$$\delta_s = \delta_c - \alpha\frac{\rho'}{\rho} = \delta_c + 3Ha\alpha(1+w) \qquad (60)$$
(In the matter dominated regime one can obviously simplify this to $\delta_s = \delta_c + 3Ha\alpha$.) Inserting this relation into the growth equation in synchronous gauge, Eq.(59), and using the relation Eq.(54), we find
$$\ddot\delta_c + 2H\dot\delta_c - 4\pi G_N\rho\,\delta_c - 3H^2\Phi + 3H\dot\Phi = 0 \qquad (61)$$
Comparing this to the relation we found for $\delta$ in Eq.(15), we see that they match exactly, as one would expect. To see this starting from Eq.(15), one should use the approximations of matter domination, set $\Theta$ to zero, and use Eq.(11) to substitute for $\delta\rho$.
What this shows is that if one wishes to use the exact growth equation, one should work in the synchronous gauge. The difference between the growth equations in the two gauges is given by the extra $3H^2\Phi$ and $3H\dot\Phi$ terms noted above, which cause a large deviation in the conformal Newtonian gauge from the solution of the usual growth equation. In other words, the deviation from the usual growth equation in conformal Newtonian gauge is a mark of the deviation between the evolution of the matter perturbation in the synchronous and conformal Newtonian gauges.
In the process of observation, one essentially views a weighted two-dimensional projection $\tilde\delta$ of the three-dimensional field $\delta$. The two are linked through a line-of-sight integral
$$\tilde\delta(\hat n) = \int_0^\infty dz\,W_X(z)\,\delta\!\left(\hat n\,r(z),z\right) \qquad (62)$$
where $\hat n$ is a direction on the sky, $r(z)$ is the comoving distance to a point at redshift $z$, and $W_X(z)$ is a window function which selects the range along the radial coordinate that contributes to the observable $\tilde\delta$. (For more details, see e.g. Zhao:2008bn ().)
$\delta$ is obviously a gauge-dependent quantity, but as we have demonstrated above, on subhorizon scales the gauge choice does not lead to a significant difference in $\tilde\delta$. It is only on scales close to the horizon that the two gauges can diverge significantly. In some recent work Wands:2009 (), it has been shown that the Poisson equation is valid on all scales if one works in a comoving-orthogonal gauge. In our opinion, the question of which gauge works best in modeling real astronomical measurements on large scales is important and merits further research.
## VI Conclusions
With the current and future ability of cosmological probes to collect data with high and ever increasing precision, one must be careful when incorporating approximations into calculations which will be compared to such data. To this end, we have tested the familiar growth equation against direct numerical integration of the growth of perturbations in a ΛCDM Universe and found it to be strikingly inaccurate on large scales in the Newtonian gauge. We have traced the source of the inaccuracy to general relativistic corrections to the Poisson equation which become important at large scales and large redshifts within the Newtonian gauge. We propose a modified version of the growth equation for use in the Newtonian gauge, which we show to be highly accurate on all scales up to a tenth of the horizon, for all redshifts. We have examined the implications of the failure of the growth equation for recent efforts to distinguish GR from modified gravity. Finally, we have discussed the growth equation in the synchronous gauge and demonstrated that the corrections to the Poisson equation correspond exactly to the difference between the overdensities in the two gauges. Our results have important implications for efforts to design null-test parameters to distinguish between models of gravity, as well as for constraining cosmological parameters based on observations from future probes. As modifications to general relativity may begin to operate on large scales, one must take care to ensure that approximations are not masquerading as beyond-Einstein physics.
###### Acknowledgements.
The authors are grateful to Viviana Acquaviva, Edmund Bertschinger, Levon Pogosian, Bob Scherrer, Tanmay Vachaspati, Tom Weiler, Peng-Jie Zhang and the anonymous referees for useful discussions. The authors acknowledge the hospitality of Los Alamos National Laboratory and St. John’s College where part of this work was completed. JBD was supported in part by U.S. DoE grant DE-FG05-85ER40226.
### References
1. P. Zhang, M. Liguori, R. Bean and S. Dodelson, arXiv:0704.1932 [astro-ph].
2. V. Acquaviva, A. Hajian, D. N. Spergel and S. Das, arXiv:0803.2236 [astro-ph].
3. E. V. Linder, Phys. Rev. D 72, 043529 (2005) [arXiv:astro-ph/0507263].
4. E. V. Linder and R. N. Cahn, Astropart. Phys. 28, 481 (2007) [arXiv:astro-ph/0701317].
5. D. Polarski and R. Gannouji, Phys. Lett. B 660, 439 (2008) [arXiv:0710.1510 [astro-ph]].
6. Y. S. Song and O. Dore, arXiv:0812.0002 [astro-ph].
7. E. Copeland, M. Sami and S. Tsujikawa, arXiv:hep-th/0603057.
8. J. c. Hwang and H. Noh, Phys. Rev. D 66, 084009 (2002).
9. J. c. Hwang and H. Noh, Phys. Rev. D 71, 063536 (2005).
10. J. B. Dent, S. Dutta and T. J. Weiler, arXiv:0806.3760 [astro-ph].
11. C. P. Ma and E. Bertschinger, Astrophys. J. 455, 7 (1995) [arXiv:astro-ph/9506072].
12. A. M. Green, S. Hofmann and D. J. Schwarz, JCAP 0508, 003 (2005) [arXiv:astro-ph/0503387].
13. A. L. B. Ribeiro, A. P. A. Andrade and P. S. Letelier, Phys. Rev. D 79, 027302 (2009) [arXiv:0902.3272 [astro-ph.CO]].
14. O. Gron, Eur. J. Phys. 23, 135 (2002) [arXiv:0801.0552 [astro-ph]].
15. N. Bartolo, P. S. Corasaniti, A. R. Liddle and M. Malquarti, Phys. Rev. D 70, 043532 (2004) [arXiv:astro-ph/0311503].
16. S. Dutta and I. Maor, Phys. Rev. D 75, 063507 (2007) [arXiv:gr-qc/0612027].
17. G. B. Zhao, L. Pogosian, A. Silvestri and J. Zylberberg, arXiv:0809.3791 [astro-ph].
18. D. Wands and A. Slosar, arXiv:0902.1084 [astro-ph.CO].
# Tag Info
0
0
You can find the position (as a function of time) for the balloon with simple kinematics: $y_B=200-\dfrac{1}{2}gt_B^2$ (with $g=+9.8\:m/s^2$). After five seconds, the archer shoots the arrow upward, the position of which can also be represented with kinematics: $y_A=40t-\dfrac{1}{2}gt_A^2$ Now, they're on separate time scales, where $t_i=0$ corresponds ...
1
There are some very useful elementary equations that describe basic motion with constant acceleration, and these are: $$v=u+at,$$ $$v^2=u^2+2as,$$ $$v=ut+\frac{1}{2}at^2,$$ where $u$ is initial speed, $v$ is final speed, $s$ is displacement (how far the object has moved) and $a$ is acceleration. You must now think about your problem to determine which ...
1
How long does it take to stop? $v/a$ or roughly 2 seconds. How far does it go up in 2 seconds? ${1/2}at^2$ or 20m, roughly. You figure it out exactly.
1
At the instant you release the arrow, it begins to lose velocity due to gravitational acceleration, which can be represented by a vector pointing opposite to the arrow's direction of flight. The arrow loses about 9.8 m/sec of its velocity every second (this is the magnitude of gravitational acceleration at the Earth's surface). Solve for the time it would ...
0
You need to read the problem again. This is a typical problem that beginning physics students have: they don't read the problem enough. You should read a problem at least 3 times before you do any work. Read slowly. You will see that you're supposed to find the angle of the velocity vector, not the angle of a line between the starting and ending points.
0
This sounds like trying to hit a fixed target at a different elevation when you know the horizontal and vertical distance to the target (2D projectile motion). Basically you have 2 equations you have to solve for simultaneously, the horizontal and the vertical distances. You have 3 unknowns – the time, the initial velocity of the projectile and the angle of ...
0
In GR, the speed of light is not constant, it varies with the curvature of space--time. So the constancy of this universal speed depends on space--time's having constant curvature. Which it doesn't, but this is locally a useful approximation, and in order to address the OP's intention, we will from now on assume that the Universe is a space of constant ...
0
I must object by saying, I could go about questioning everything, but one must rather understand how? and why? things are the way they are. One must understand the Maxwell Equations, the problems and issues that come with it, special relativity, how it solves the problems, how to derive the Lorentz factor and how it breaks down at the speed of light. There ...
2
Solve for $F_b$ from the horizontal braking distance. Assume $F_b$ is constant, then during braking kinetic energy has been converted to friction work: $$F_b \Delta x = \frac12 mv^2$$ where $\Delta x=123\:\mathrm{ft}$ is the braking distance and $v=60.0\:\mathrm{miles/hour}$. I've not checked the rest of your work. You don't need to invoke friction ...
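The first step can be sketched numerically; 60.0 miles/hour converts exactly to 88 ft/s, consistent with the braking distance given in feet:

```python
# Braking deceleration from F_b * dx = (1/2) m v^2, i.e. a = F_b/m = v^2/(2*dx).
v_mph = 60.0
v = v_mph * 5280 / 3600     # 60 mph = 88 ft/s
dx = 123.0                  # braking distance, ft

a = v**2 / (2 * dx)         # deceleration, ft/s^2
print(f"v = {v:.0f} ft/s, deceleration = {a:.2f} ft/s^2")
# F_b itself is m * a once the car's mass (in slugs) is known.
```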
1
I am giving the solutions of original task (to get the speed at point B). I am not sure if the questions are necessary to perform the task. If you are sure the path taken does not matter (and I will assume that per your statement). So, let us consider a straight line path. Vertical component of F overcomes gravity and causes vertical move. Only horizontal ...
1
Using the first $2$ statements or conditions, apply the principle of conservation of energy and you will be able to calculate the resistive force offered by the target material. Next use the second condition and again apply the principle of conservation of energy to get the required answers, mainly the velocity of the bullet after penetrating the target.
0
some context is missing here but it could be the difference between Eulerian and Lagragian quantities, i.e. the spatial derivatives vs the particles derivative. It is mostly used for continuous materials (e.g. fluids), but can extend to other cases (e.g. field or stream of objects). The spatial (i.e. Eulerian) velocity in a field is the one at a given ...
3
I think that this is a very interesting problem which is conceptually difficult. You do not need to worry about the FBD for the truck. The box should be your main focus. Diagram 1 is the FBD as long as the box does not slide relative to the truck. With the aid of diagram 1 work out the maximum acceleration $a$ the box can have as a result of the static ...
0
Just recall that kinetic friction always opposes the motion; if this were not true you could use friction to generate free energy. Static friction opposes the applied forces.
0
If the shore is really far away, you can't tell the water is moving. Then it is immediately obvious that the boat must have spent the same length of time moving away from the crate as moving towards the crate - 1 hour each way. If we look at just the crate, we see that it moved 3 km in 2 hours (it was found "5 km downstream from the turnaround point" which ...
0
Your question implicitly assumes that a any object will "boomerang" (you mean return to thrower I guess) only depending on the rotation speed. This is not the case. In very simplistic terms, the boomerang motion depends on its shape and material, besides on its speed. Actually even the speed is not as simple, because it needs a proper combination of ...
3
The equation for your curve is given by: $$\frac{dv}{dt} = \frac{F(v)}{m}$$ where $F(v)$ is the net force on the car, which is a function of the velocity. we solve the equation by integrating to get: $$\int \frac{dv}{F(v)} = \frac{t}{m}$$ The trouble is that the net force $F(v)$ is a complicated function that doesn't generally have a simple analytic ...
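Because $F(v)$ rarely integrates in closed form, such velocity curves are usually produced numerically. A minimal forward-Euler sketch; the force model $F(v) = F_0 - cv^2$ (constant drive force minus quadratic drag) and all constants are assumptions for illustration, not from the thread:

```python
# Numerically integrate dv/dt = F(v)/m for an assumed force model
# F(v) = F0 - c*v**2 (constant drive force minus quadratic drag).
m, F0, c = 1200.0, 4000.0, 0.8   # assumed mass (kg), drive force (N), drag coeff
dt, steps = 0.01, 20000          # 0.01 s steps for 200 s total

v = 0.0
for _ in range(steps):
    v += dt * (F0 - c * v**2) / m   # forward-Euler step

v_terminal = (F0 / c) ** 0.5        # speed at which F(v) = 0
print(f"v after {steps * dt:.0f} s = {v:.2f} m/s, terminal = {v_terminal:.2f} m/s")
```

By 200 s the integrated velocity has effectively reached the terminal speed, reproducing the flattening-out shape of the real curve.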
1
The magnitude of centripetal acceleration is $\frac{v^2}{r}$ instantaneously. It applies no matter the speed on your circular path. (Technically it's true for any curve, but $r$ would be changing on non-circular curves, making calculations more difficult.) The tangential acceleration is constant, so you can write a function for $v$. Then you have two ...
0
This is not a proper derivation. At a fundamental level, there are at least three important points that are not taken into account by this approach: as you consider a second mass point, it is somewhat difficult to adjust (in a non-arbitrary way) the derivation to obtain the correct energy term related to the angular momentum and/or rigid body rotation ...
1
If you take your final expression $$x(t) = \underbrace{\left(x_0 + \frac{b - a}{2}\tau^2\right)}_{x_0^*} + \underbrace{\left(v_0 + \tau(a - b)\right)}_{v_0^*} t + \frac{b}{2} t^2, \quad \text{with}\ t>\tau,$$ then $x_0^*$ and $v_0^*$ would be the position and velocity at $t=0$, however this is only meaningful if $\tau<0$ (however in that case $x_0$ ...
1
Firstly it's worth noting that such a discontinuity can never be 100 % real. To go from acceleration $a$ to $b$ instantaneously ($\Delta t = 0$) would require an instantaneous change in the net force responsible for the accelerations and that isn't possible in the material world. Secondly, I think you are over-thinking your problem. Just write the ...
1
This equation holds whenever there is constant acceleration. Here are 2 ways of deriving that equation, which I hope help you understand it. Energy conservation: the change in kinetic energy must be equal to the work done on the particle. $$\frac{1}{2}m v_A^2 - \frac{1}{2}mv_B^2 = \int F\cdot dx$$ For a constant force and mass, $\int F\cdot dx = F (x_A - ...
0
Draw a graph with time along the horizontal and velocity up the vertical. Let's start with an object in motion at constant velocity. Its motion on the graph will be represented by a horizontal line at some distance from the $y=0$ axis. After some period of time, it will have covered a distance equal to velocity × time. That distance will be represented on ...
0
Start with acceleration, which we assume to be a constant $g$. Also assuming 'up' is the positive spatial direction, i.e. the acceleration is negative: $$a(t) = -g = -9.81 \,ms^{-2}$$ Integrate once to get velocity: $$v(t) = \int_0^t -g\, dt = -gt +v_0$$ Integrate again to get the distance: $$x(t) = \int_0^t(-gt + v_0)\, dt = -\frac{1}{2}gt^2 ...
1
You don't need calculus to understand this and I think you are right to be trying to gain a deeper understanding than just memorizing some formulas. During that first second the body accelerates - it starts with 0 velocity and gains linearly, giving 9.8 m/s at the end of the first second. So at that point, it hasn't been moving at 9.8 m/s for a second, it has ...
0
Intuitively, the body spends some time at every velocity between 0 and 9.8 m/s during the first second. From the formula distance = speed × time, if we call that time interval dt and add up all the contributions (using integral calculus) the answer is 4.9 m.
1
The concept you're after is the dot product between 2 vectors (your displacements). More specifically you want to use $$\vec{a} \cdot \vec{b} = |\vec a|\, |\vec b| \cos(\theta)$$ to find $\theta$.
1
To complement John's answer I'll give you an example: the kinetic energy of a harmonic oscillator.
First we need to determine the velocity: $v=\frac{dx}{dt}=\frac{d(A\sin(\omega t +\phi_0))}{dt}=A\omega \cos(\omega t+\phi_0)$. Because kinetic energy is $\frac{1}{2}mv^2$, we substitute $v$: $KE=\frac{1}{2}mA^2\omega^2\cos^2(\omega t+\phi_0)$ Because ...
3
If the velocity of a mass $m$ at some moment of time is $v$, then the kinetic energy and momentum are: $$\begin{align} E &= \tfrac{1}{2}mv^2 \\ p &= mv \end{align}$$ If the velocity is changing with time, i.e. it is a function of time $v(t)$, then the kinetic energy and momentum will also be functions of time: $$\begin{align} E(t) &= ...
0
Kinetic energy's quadratic makes perfect sense if our reality is not actually first order in space, and is instead simply a measurement of the relative rate that an object is passing through time. The space of our existence then becomes the space of simultaneous time, at any given point in time, as it progresses. In this scenario, changing the kinetic ...
0
Think of a position axis $x$ starting at zero, positive to the right and negative to the left. It is like a number line. A positive velocity means that the value of $x$ is increasing, e.g. going from $x=+3$ to $x=+5$, or $x=-7$ to $x=-4$, or $x=-3$ to $x=+4$. A negative velocity means that the value of $x$ is decreasing, e.g. going from $x=-3$ to $x=-5$, or $x=+7$ ...
0
Well, you have given the answer in your own question! Velocity and acceleration are both vector quantities, meaning they have magnitude and direction. The sign (+/-) will depend on the direction. To simplify, let me give you an easy example. Case 1: An object is moving down from the top of a mountain. The acceleration (in this case 'g') will act in the ...
0
When you say "minimizing the danger of it breaking on the ground" I am assuming you mean you want to reduce the kinetic energy of the object when it hits the ground (also I have assumed there is no air resistance in the problem). In order to do that, the object must not have any horizontal or vertical velocity component at the moment of release wrt the ...
2
As you figured out, the horizontal and the vertical parts of the motion are independent. If you throw the bottle upwards, it will go upwards for some time, then turn and fall back. When it reaches the height of your hand, it will have the same velocity as when you threw it, just in the opposite direction (downwards) - so this doesn't help either. Clearly, ...
0
The energy required to accelerate an object by a given velocity increment is linear in the initial velocity in the non-relativistic limit (where $E_k=\frac{1}{2}mv^2$ applies). It is even more energy intensive in the relativistic case, when the velocity of light ($c$) is approached. That is because the relativistic expression for kinetic energy is: ...
1
Your train is travelling east at a speed of 20 m/s; since you walk to the back of your train, you are only travelling east at a speed of 18.6 m/s with respect to the ground. Your friend is travelling west at a speed of 28 m/s, or a speed of -28 m/s to the east. From your friend's perspective the ground is travelling east at a speed of 28 m/s and you have an ...
1
Given the acceleration $a = \sin \left( \frac{\pi t}{T}\right)$, by integration you get $$v(t) = \int_0^t a\,{\rm d}t = \frac{T}{\pi} \left(1-\cos\left(\frac{\pi t}{T}\right) \right)$$ $$s(t) = \int_0^t v\,{\rm d}t = \frac{T}{\pi^2} \left(\pi t - T \sin\left(\frac{\pi t}{T}\right) \right)$$ Since the last one cannot be inverted for $t(s)$, we can ...
1
If the masses were not accelerating, it would be the case that B exerts an upward force on A equal to $$F_{AB} = (0.4)\cdot(9.81) \mathrm{\ N} = 3.92 \mathrm{\ N}$$ in order to cancel the downward force of gravity. Since A is accelerating upward, it must be that B exerts a greater force equal to $$F_{AB} = (0.4) \cdot (9.81 + 0.5) \mathrm{\ N} = 4.12 \mathrm{\ N}$$ So ...
1
Newton's third law says the force on A due to B is equal and opposite to the force on B due to A. This in turn means that the changes of momentum of A and B are the same in magnitude but opposite in direction.
This is how the momentum becomes rearranged: B loses some momentum and A gains an equal amount. So when two atoms collide you can think of ...
0
The equation used to find the displacement in motion with constant acceleration in 1 dimension is $s = ut + \frac{at^{2}}{2}$, where $u$ is the initial velocity, $s$ is displacement and $a$ is acceleration (constant). You can re-write the equation as $t^2 + \frac{2u}{a}t - \frac{2s}{a} = 0$ or $(t + \frac{u}{a})^2 - \frac{2s}{a} - \frac{u^2}{a^2}$ ...
0
You can rearrange using the quadratic formula to get: $$t = \frac{-u \pm \sqrt{u^2+2as}}{a}$$ Thanks AccidentalFourierTransform for the answer.
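A quick sketch of applying this formula (the numeric values are illustrative, not from the question), checking each root by substituting it back into $s = ut + \frac{1}{2}at^2$:

```python
import math

# Solve s = u*t + (1/2)*a*t^2 for t via the quadratic formula:
#   t = (-u +/- sqrt(u^2 + 2*a*s)) / a
u, a, s = 5.0, 2.0, 12.0   # illustrative initial velocity, acceleration, displacement

disc = u**2 + 2 * a * s
t1 = (-u + math.sqrt(disc)) / a
t2 = (-u - math.sqrt(disc)) / a
print(t1, t2)   # only the non-negative root is physically meaningful here
```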
2
From when I worked among missile engineers, accelerometers were used, along with gyroscopes (mechanical or laser). I don't know of 6th order differential equations. I do know of 3rd order, namely in the steering by swiveling the engine nozzle. Specifically, the engine nozzle angle is off-center by a certain amount, causing an angular acceleration (2nd ...
3
The difference between an office chair's 5 wheels/supports and a regular chair's 4 legs is that the latter has all of its load going straight down. The legs only need to be strong enough not to shatter. In fact, a chair could easily get away with 3 legs but for the stability. In contrast, the office chair's legs support load perpendicular to their ...
1
It is because the ball was traveling with you when you threw it. Imagine the following question: I am in a car traveling at 5m/s holding a ball. Where will the ball be relative to me in 10 seconds? Answer in my hand. To be obtuse: ball has velocity 5m/s. After 10s it will have moved forward 10 x 5 meters = 50. I have velocity 5m/s. After 10s it will have ...
26
Consolidating some of the points made in the answers to the question you linked, and comments: When constructing a chair, 4 legs is easy when you use traditional (wooden) construction - 90 degree angles, and easy to make stackable. A little bit harder than three legs because you have to make sure they are all the same length (or the chair will wobble). ...
0
This is how I understand it. There is a series of definitions used in physics, and one used in engineering mostly. I'll describe the one used in physics first: In mechanics, we describe the motion of bodies, and the causes that effect them. This includes the special case where the "motion" is no motion, i.e. bodies are stationary. The description of the ...
2
The first postulate of Special Relativity - which is also the Principle of Relativity is "Laws of Physics are invariant in all the inertial frames." Now consider the following scenario: Suppose there are two events which are happening simultaneously in frame $O$ and are spatially separated by distance $l$. From your Equation $1$, the spatial interval ...
1
No need to force $\tilde\alpha_1 = \alpha_1$. After taking advantage of the fact that (1), (2) and (5), (6) must be inverses of each other, your transformation reads, in matrix form, $$\left(\begin{array}{c} x'\\ t' \end{array}\right) = \alpha_1(v)\left(\begin{array}{cc} 1 & -v\\-\frac{v}{c^2} & 1 \end{array}\right)\left(\begin{array}{c} x\\ t ... 4 You do not need the kinetic energy. Working with the total energy \gamma m c^2 produces the same result. Assuming both the total initial energy \bar E_0 = \gamma_0 m c^2 and the additional energy E_i are known, write \gamma_1 mc^2 = \frac{mc^2}{\sqrt{1-\beta_1^2}} = \bar E_0 +E_i for \beta_1 = \frac{v_1}{c}, then$$ \sqrt{1-\beta_1^2} = ...
|
# [Haskell-i18n] unicode notation \uhhhh implementation
Ketil Z. Malde ketil@ii.uib.no
16 Aug 2002 14:04:04 +0200
Sven Moritz Hallberg <pesco@gmx.de> writes:
> Would allowing the full Unicode names give an advantage? Something like
> GREEK_SMALL_LETTER_THETA is almost half a line and might do more harm to
> the code readability than uhhhh.
Well, it depends, I suppose. I'm more likely to be able to remember
that '#GREEK_SMALL_LETTER_THETA represents an angle than \uXXXX.
Although, I of course would prefer '&theta' or '{\theta}' or something
like that.
It is possible that a shortish list of TeX symbols or HTML entities,
or both, would suffice.
Readability is one thing, however, I'm not quite sure how layout would
be affected with this. I'm often surprised to hear about the problems
people experience with layout, it just seems to work for me. (Using
Emacs and auto-indent; there's rarely any problem pressing TAB until
the right indentation is reached.)
However, now it appears that indentation might change, according to
encoding used. How do we solve that?
The simple solution is to count one Unicode character as one
indentation character, but that would mean having alignments visually
distorted if we are using other notations. Emacs could probably
handle this and display things correctly, but do we want that extra
complexity?
case t of Rad _ -> foo
Deg _ -> bar
-- ^visual alignment
case &theta of Rad _ -> foo
Deg _ -> bar
-- ^aligned, but only by counting
(Ditto for \uXXXXXXXX, of course)
After all, isn't layout intended to make things *easier* to read?
I think I'm in favor of requiring a line break when starting a layout
block, but I suppose that will break a lot of existing code.
(e.g
case &theta of
|
## CPC '21 Contest 1 P7 - AQT and Quarantine
View as PDF
Points: 20 (partial)
Time limit: 2.5s
Memory limit: 256M
Author:
Problem types
AQT is suspicious that brainworms are running wild in DMOJistan! To limit the spread of these IQ-reducing creatures, the admins require that cities of DMOJistan quarantine for a specific amount of time.
DMOJistan can be mapped as a tree with nodes that each represent a city, and edges that represent roads. People can move between cities only by using the roads. Whenever the DMOJ admins order city to be in quarantine, starting from the beginning of day the node , and its neighbouring cities, form a bubble, where they must quarantine until the end of day . This means that no one inside the bubble can go outside the bubble and vice versa. Note that for any day, we can group cities into disjoint sets of cities where two nodes are in the same set if and only if we can visit each city without violating any of the quarantine rules. For each middle of the day (after the beginning and before the end of the day) from day to day , report the number of such sets.
For all :
For all :
#### Input Specification
The first line is a single integer .
The next lines where the -th line contains and separated by a space representing that there is a direct road between nodes and .
The following line contains and separated by a single space.
The next lines each represent a different type of brainworm, where the -th line contains integers separated by spaces, , and .
#### Output Specification
For each day from to , output the number of such sets described in the problem statement on a new line, where the -th line is the answer for the -th day.
#### Sample Input 1
5
2 3
1 3
4 2
3 5
5 5
2 2 4
4 2 5
5 3 4
5 4 5
3 1 3
#### Sample Output 1
2
5
5
4
3
#### Explanation for Sample 1
At the beginning of day 1, city 3 will form a bubble with its neighbouring cities.
At the middle of day 1, two sets are formed and .
At the end of day 1, nothing happens.
At the start of day 2, city 2 and 4 will form bubbles with their neighbouring cities.
At the middle of day 2, the sets are , , , and .
At the end of day 2, nothing happens.
At the start of day 3, city 5 forms a bubble with its neighbouring cities.
At the middle of day 3, the sets are still , , , and .
At the end of day 3, city 3 and its neighbouring cities will lift their restrictive measures.
At the start of day 4, city 5 will form another bubble with its neighbouring cities.
At the middle of day 4, the sets are , , , .
At the end of day 4, city 5 and its neighbouring cities lift their first bubble restriction but are still in quarantine from their second bubble restriction, and city 2, and its neighbouring cities, lift their quarantine restriction.
At the start of day 5, nothing happens.
At the middle of day 5, the sets are , , .
At the end of day 5, the rest of the restrictions are lifted.
#### Sample Input 2
3
1 2
2 3
3 1
1 1 1
1 1 1
3 1 1
#### Sample Output 2
3
#### Explanation for Sample 2
Note that for the only day, are the sets.
|
# Directory write permissions check
Consider the following method that I have for checking write permissions on a directory path:
/// <summary>
/// Check existence and write permissions of supplied directory.
/// </summary>
/// <param name="directory">The directory to check.</param>
protected static void CheckPermissions(string directory)
{
if (!Directory.Exists(directory))
{
throw new DirectoryNotFoundException(String.Format(JobItemsStrings.Job_DirectoryNotFound, directory));
}
// Check permissions exist to write to the directory.
// Will throw a System.Security.SecurityException if the demand fails.
FileIOPermission ioPermission = new FileIOPermission(FileIOPermissionAccess.Write, directory);
ioPermission.Demand();
}
When running FxCop, this code throws up a "CA2103 - Review imperative security" warning, albeit with a certainty of 25%, with this info:
"Use of imperative demands can lead to unforeseen security problems. The values used to construct a permission should not change within the scope of the demand call. For some components the scope spans from the demand call to end of the method; for others it spans from the demand call until the component is finalized. If the values used to construct the permission are fields or properties, they can be changed within the scope of the demand call. This can lead to race conditions, mutable read-only arrays, and problems with boxed value types."
Basically, is FxCop being over-cautious, or am I doing it wrong?
• This is probably a question better suited for StackOverflow based on the FAQs guidelines for what is an applicable question. – Mark Loeser Jan 29 '11 at 17:28
• @Mark: Hmm, well the FAQ says "If you are looking for specific feedback about… Code correctness, ..., Security issues in a code snippet, etc … then you are in the right place!" I figured my question was a pretty good fit for that. But, if it is felt that it better suited for SO, then fair enough. :) – Andy Jan 29 '11 at 17:45
This FxCop warning is basically asking you to make sure ("Review") that that non-constant you are passing (directory) to the security permission does not change while the permission is in effect. Basically, FxCop isn't sure if it is possible for the code (or some rogue module that a hacker has put in place) to do something like the following:
1. Set directory to "c:\Temp\"
2. .Demand()
3. <untrusted>Set directory to "c:\Windows\System32\"</untrusted>
4. Write something into a file contained in directory.
In this particular case, since directory is a non-ref parameter, it is not possible for another module outside your call-descendants to modify it. Thus, what you need to check for:
• Anything in this method assigning a value to directory
• Anything in this method that passes directory by reference (ref/out/unsafe pointers)
Disclaimer: I am not a code security expert and have no formal training as such. I may have missed entire classes of things to look for here. If you are dealing with code that has real-world security implications, I highly suggest you hire a consultant who does, rather than take what I wrote above as gospel.
• Thanks, you have clarified what I thought. This does not have Earth-shattering implications - I just want to gracefully handle any circumstance where the permissions don't allow writing by popping a MessageBox. – Andy Jan 30 '11 at 13:47
When working with files, checking before an operation can be useful, but you still always need to handle relevant exceptions (e.g. FileNotFoundException, IOException).
The permissions (or existence) of a file/directory may change between the time you check and time the operation is invoked (new FileIOPermission(...) in this case). This situation is more common than it seems.
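The same check-then-act race exists in any language: rather than demanding permissions up front, attempt the operation and handle the failure. A minimal sketch of that pattern in Python (the helper name is illustrative):

```python
import os
import tempfile

def try_write(directory: str, filename: str, data: str) -> bool:
    """Attempt the write and report failure, instead of checking permissions
    first -- permissions can change between the check and the operation."""
    try:
        with open(os.path.join(directory, filename), "w") as fh:
            fh.write(data)
        return True
    except (OSError, PermissionError):
        return False

with tempfile.TemporaryDirectory() as d:
    ok = try_write(d, "probe.txt", "hello")
    print("write succeeded:", ok)
```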
|
Laws of Exponents for Real Numbers
• Last Updated : 01 Apr, 2021
Sometimes we come across very large numbers, like the mass of the earth: 5,970,000,000,000,000,000,000,000 kg. There is no problem in writing the mass of the earth like this, but during calculations involving such numbers it becomes cumbersome and hard to use. That's why exponents were invented: to make it easy to deal with very big or very small numbers. This number can be written as 5.97 × 10^24 kg. It is read "5.97 times 10 raised to the power 24". Let's study some laws regarding these exponent numbers.
### Laws of Exponents
These laws help us to simplify our calculations.
### Product law
The Product law states that if the base of two numbers is same, their exponents can be added directly.
a^m × a^n = a^(m+n)
To verify this property, see how many times “a” is multiplied in total. It is multiplied “m + n” times. Thus, the property.
Example: a^2 × a^3 = (aa)(aaa) = a^5
### Quotient law
According to this law, if the bases of the two numbers in the numerator and denominator are the same, the exponent in the denominator is subtracted from the exponent in the numerator: a^m / a^n = a^(m−n)
This can also be verified similar to the previous one, just see the number of times a is multiplied and then reduce that number by the number of times it is divided.
Example: a^5 / a^2 = (aaaaa)/(aa) = a^3
### Power Law
According to Power-law, if an exponent is a power of another exponent, we can simply multiply the exponents.
(a^m)^n = a^(mn)
First, multiply “a” m times, and then do this operation n times.
Example: (x^3)^2 = (xxx)^2 = (xxx)(xxx) = x^6 = x^(3 × 2)
### Power of Product law
According to the power of product law, if two real numbers say, a and b here are multiplied and raised to power m, we can distribute the exponent to both a and b separately.
a^m × b^m = (ab)^m
This property is just a rearrangement of all these variables.
Example: a^3 × b^3 = (aaa)(bbb) = (ab)(ab)(ab) = (ab)^3
### Power of Quotient law
According to the power of quotient law, if two real numbers are in the numerator and denominator and the quotient is raised to power n, the power can be distributed separately to both numbers: (a/b)^n = a^n / b^n
This also can be verified by a simple rearrangement of the variables.
Example: (a/b)^3 = (a/b)(a/b)(a/b) = a^3/b^3
### Zero Power Rule
As long as a is not equal to zero, raising it to the power of zero gives 1 as the result.
a^0 = 1, a ≠ 0.
Let’s take an example to make this more evident,
1 = a^m / a^m = a^(m−m) = a^0
Note:
Expression 0^0 is considered to be indeterminate. Why? Because there are two answers to it.
We know x^0 = 1, so 0^0 = 1.
We also know 0^x = 0 (for x > 0), so 0^0 = 0.
These are two contradictory answers, thus we consider 0^0 to be indeterminate.
### Exponent of Exponent
Sometimes in more complex scenarios, exponents over exponents are given. Let’s see how to approach them,
We will start solving from the top: a^(b^c) means a raised to the power (b^c), not (a^b)^c.
Let’s look at some examples and questions where these properties are applicable.
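The laws above can also be spot-checked numerically; here is a small sketch using exact rational arithmetic so the quotient laws hold without any rounding error:

```python
from fractions import Fraction

# Spot-check the exponent laws for one sample base pair and exponent pair.
a, b = Fraction(3), Fraction(5)
m, n = 4, 2

assert a**m * a**n == a**(m + n)        # product law
assert a**m / a**n == a**(m - n)        # quotient law
assert (a**m)**n == a**(m * n)          # power law
assert a**m * b**m == (a * b)**m        # power of a product
assert (a / b)**m == a**m / b**m        # power of a quotient
assert a**0 == 1                        # zero power rule
print("all six laws verified")
```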
### Sample Problems
Question 1: Find the value of 2^(-3)
Question 2:
Question 3: Simplify using above properties:
(-4)^5 × (-4)^(-10)
(-4)^5 × (-4)^(-10)
= (-4)^(5−10) (a^m × a^n = a^(m+n))
= (-4)^(-5)
Question 4: Simplify using above properties:
Question 5: Simplify,
Question 6: What is the 5th root of 5^15?
We know that the nth root of a number "a" is represented by a^(1/n).
Thus, the 5th root of 5^15 will be given by (5^15)^(1/5) = 5^(15/5)
= 5^3
= 125
Question 7: Find the value of n
|
# induced_norm¶
probnum.utils.linalg.induced_norm(v, A=None, axis=-1)[source]
Induced norm $$\lVert v \rVert_A := \sqrt{v^T A v}$$.
Computes the induced norm over the given axis of the array.
Parameters
• v (np.ndarray) – Array.
• A (Optional[Union[np.ndarray, linops.LinearOperator]]) – Symmetric positive (semi-)definite linear operator defining the geometry.
• axis (int) – Specifies the axis along which to compute the vector norms.
Returns
Vector norm of v along the given axis.
Return type
norm
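A minimal sketch of what this computes, assuming a symmetric positive (semi-)definite `A` and the default last axis; this is an illustration of the formula, not probnum's actual implementation:

```python
import numpy as np

def induced_norm(v, A=None):
    """sqrt(v^T A v) over the last axis of `v`.
    Minimal sketch: A symmetric positive (semi-)definite, axis fixed at -1."""
    if A is None:
        return np.linalg.norm(v, axis=-1)       # plain Euclidean norm
    return np.sqrt(np.sum(v * (v @ A), axis=-1))

v = np.array([3.0, 4.0])
print(induced_norm(v))                          # Euclidean norm of (3, 4)
print(induced_norm(v, A=np.diag([1.0, 4.0])))   # sqrt(9 + 64) = sqrt(73)
```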
|
# How do you solve linear systems whose solutions decay exponentially?

Asked by fuzzytron on MathOverflow, 2011-10-30.

Consider the heat equation

$$\dot{u} = \Delta u$$

with initial conditions

$$u_0 = \delta(x)$$

for some point $x$ in the domain $\Omega$ of the problem. If $\Omega$ is $\mathbb{R}^n$, then this problem has a closed-form solution given by the Euclidean heat kernel

$$k_t(x,y) = \frac{1}{(4\pi t)^{n/2}}e^{-|x-y|^2/4t};$$

in general solutions to this problem will exhibit this same type of behavior: the magnitude of $u$ decays roughly exponentially as you travel away from the "source" $x$.

Now consider solving this problem numerically by constructing a finite-dimensional linear system whose solution approximates the solution of the original PDE (e.g., using a Galerkin method). Typical numerical linear algebra systems (which work with fixed-precision floating-point arithmetic) guarantee that the maximum error in any component of the solution will be no larger than some small fraction $\epsilon > 0$ of the largest element of the right-hand side.

For instance, in the problem outlined above the right-hand side will look like a Kronecker delta, so the absolute error will be no worse than $\epsilon$. Unfortunately, since the magnitude of the solution decays exponentially, the error $\epsilon$ may be much larger than the smallest entry of the true solution.

**Question:** using floating-point computations only, can one obtain a solution to a linear system that obtains a desired accuracy relative to the magnitude of each element of the *solution vector*, rather than the right-hand side?

The fundamental problem in achieving better accuracy seems to stem from cancellation effects, i.e., if you add two numbers of very different magnitude in floating point, the smaller one is essentially ignored. I am aware of algorithms that use alternative numerical representations (e.g., Dixon's method and so on), but this kind of answer is not particularly interesting to me due to considerations of efficiency.

Answer by Brian Borchers:

I assume that you're interested in solving the boundary value problem in a situation where the boundaries are far away from the source, so that the solution is close to the analytical solution on an unbounded domain.

One option here would be to write the solution to your problem as the sum of the analytical solution to the problem on an unbounded domain plus a correction term. That is,

$$u(x,y,t)=k_{t}(x,y)+v(x,y,t)$$

Since the PDE is linear, your correction term $v(x,y,t)$ would also have to satisfy the heat equation. You could easily derive boundary conditions for $v$ in terms of the boundary conditions for $u$ and the values of $k_{t}(x,y)$ at the boundaries.

Answer by Federico Poloni:

This subject has been studied in the literature --- it turns out that for some classes of matrices (such as [M-matrices](http://en.wikipedia.org/wiki/M-matrix)) you can obtain componentwise accurate solutions. A good starting point is the section on "accurate floating point computation" on [Jim Demmel's home page](http://www.cs.berkeley.edu/~demmel/); other good search terms are "accurate linear algebra" and "componentwise error analysis". Recent literature has focused more on accurate eigenvalue computation rather than linear systems, but there are results also for them.

EDIT: And let me add that discretizing $\Delta$ with the usual finite-difference scheme results in an M-matrix.
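The phenomenon in the question is easy to reproduce. A small sketch (the discretization and sizes are illustrative): a 1-D screened-Poisson finite-difference matrix is an M-matrix of the kind Poloni mentions, and its solution for a delta right-hand side decays exponentially, so the smallest entries sit many orders of magnitude below the largest:

```python
import numpy as np

# Tridiagonal matrix with 3 on the diagonal and -1 off it: an M-matrix
# (a screened-Poisson / implicit-heat-step discretization). The RHS is a
# discrete delta at the center node.
n = 41
A = 3 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.zeros(n)
b[n // 2] = 1.0

u = np.linalg.solve(A, b)
# The solution is positive and decays geometrically away from the center,
# so its smallest entries are tiny relative to its largest.
print(f"max entry {u.max():.3e}, min entry {u.min():.3e}")
```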
|
# Plotting the discrete time signal
Consider the following analog signal
$$x(t) = 2 \sin(100\pi t)$$
The signal $x(t)$ is sampled with a sampling rate $F_s=50\textrm{Hz}$. Determine the discrete time signal. Plot the discrete time signal.
Also determine the total number of samples.
I don't understand how to approach this question.
$x(t)$ is a model for an analog signal. After sampling, the real $t$ variable is turned into an integer variable $k$, corresponding to a regular splitting of the real time interval, starting from some origin time $t_0$, evenly spaced by a sampling period $T_s$ being the inverse of the sampling frequency: $T_s = \tfrac{1}{F_s}$.
A discrete signal is thus:
$$x[k] = x(t_0+kT_s) = 2\sin(100\pi (t_0+kT_s)) = 2\sin(100\pi t_0 + 100\pi k/50)$$
Thus, since $$2\sin(100\pi t_0 + 100\pi k/50) = 2\sin(100\pi t_0 + 2k\pi) = 2\sin(100\pi t_0),$$
the discretized signal is a constant, depending only on the initial sampling time $t_0$. You can plot it with most plotting software (even Excel).
The total number of samples is ambiguous. Since $k$ can take any integer value, the number of discrete samples is infinite. Since the signal is constant, $x[k]$ takes only one value.
Explanation: you start from a continuous sine, and only get a constant signal. This is an illustration of the Nyquist-Shannon sampling theorem (or aliasing). If the sampling frequency is not consistent with the signal's maximal frequency, the discrete signal may be a poor version of the analog signal.
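As a sketch, the sampled sequence can be generated in Python (the origin time $t_0$ is chosen arbitrarily here, purely for illustration):

```python
import numpy as np

F_s = 50.0          # sampling rate (Hz)
T_s = 1.0 / F_s     # sampling period (s)
t0 = 0.003          # arbitrary illustrative origin time (an assumption)
k = np.arange(10)   # first 10 sample indices

x = 2.0 * np.sin(100.0 * np.pi * (t0 + k * T_s))  # x[k]

# Every sample is equal: the 50 Hz sine aliases to the constant 2*sin(100*pi*t0).
assert np.allclose(x, 2.0 * np.sin(100.0 * np.pi * t0))
```

A stem plot of `x` against `k` (e.g. with `matplotlib.pyplot.stem`) then shows a flat sequence, which is exactly the aliasing phenomenon described above.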
|
# Interest Computations e8A. Determine the interest on the following notes. (Round to the nearest...
Interest Computations
e8A. Determine the interest on the following notes. (Round to the nearest cent.)
a. \$38,760 at 10 percent for 90 days.
b. \$27,200 at 12 percent for 60 days.
c. \$30,600 at 9 percent for 30 days.
d. \$51,000 at 15 percent for 120 days.
e. \$18,360 at 6 percent for 60 days.
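A quick way to check these computations uses simple interest, I = P × r × (days/360). The 360-day "banker's year" is an assumption here, since the exercise does not name the day-count convention; a 365-day basis would give slightly different values.

```python
# Simple interest I = P * r * days / 360 (360-day year assumed, see note above).
notes = [
    (38760, 0.10, 90),   # a
    (27200, 0.12, 60),   # b
    (30600, 0.09, 30),   # c
    (51000, 0.15, 120),  # d
    (18360, 0.06, 60),   # e
]
interest = [round(principal * rate * days / 360, 2)
            for principal, rate, days in notes]
print(interest)  # [969.0, 544.0, 229.5, 2550.0, 183.6]
```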
|
• Galaxy And Mass Assembly (GAMA): the G02 field, Herschel-ATLAS target selection and Data Release 3(1711.09139)
Nov. 24, 2017 astro-ph.CO, astro-ph.GA
We describe data release 3 (DR3) of the Galaxy And Mass Assembly (GAMA) survey. The GAMA survey is a spectroscopic redshift and multi-wavelength photometric survey in three equatorial regions each of 60.0 deg^2 (G09, G12, G15), and two southern regions of 55.7 deg^2 (G02) and 50.6 deg^2 (G23). DR3 consists of: the first release of data covering the G02 region and of data on H-ATLAS sources in the equatorial regions; and updates to data on sources released in DR2. DR3 includes 154809 sources with secure redshifts across four regions. A subset of the G02 region is 95.5% redshift complete to r<19.8 over an area of 19.5 deg^2, with 20086 galaxy redshifts, that overlaps substantially with the XXL survey (X-ray) and VIPERS (redshift survey). In the equatorial regions, the main survey has even higher completeness (98.5%), and spectra for about 75% of H-ATLAS filler targets were also obtained. This filler sample extends spectroscopic redshifts, for probable optical counterparts to H-ATLAS sub-mm sources, to 0.8 mag deeper (r<20.6) than the GAMA main survey. There are 25814 galaxy redshifts for H-ATLAS sources from the GAMA main or filler surveys. GAMA DR3 is available at the survey website (www.gama-survey.org/dr3/).
• Galaxy And Mass Assembly (GAMA): end of survey report and data release 2(1506.08222)
June 26, 2015 astro-ph.GA
The Galaxy And Mass Assembly (GAMA) survey is one of the largest contemporary spectroscopic surveys of low-redshift galaxies. Covering an area of ~286 deg^2 (split among five survey regions) down to a limiting magnitude of r < 19.8 mag, we have collected spectra and reliable redshifts for 238,000 objects using the AAOmega spectrograph on the Anglo-Australian Telescope. In addition, we have assembled imaging data from a number of independent surveys in order to generate photometry spanning the wavelength range 1 nm - 1 m. Here we report on the recently completed spectroscopic survey and present a series of diagnostics to assess its final state and the quality of the redshift data. We also describe a number of survey aspects and procedures, or updates thereof, including changes to the input catalogue, redshifting and re-redshifting, and the derivation of ultraviolet, optical and near-infrared photometry. Finally, we present the second public release of GAMA data. In this release we provide input catalogue and targeting information, spectra, redshifts, ultraviolet, optical and near-infrared photometry, single-component S\'ersic fits, stellar masses, H$\alpha$-derived star formation rates, environment information, and group properties for all galaxies with r < 19.0 mag in two of our survey regions, and for all galaxies with r < 19.4 mag in a third region (72,225 objects in total). The database serving these data is available at http://www.gama-survey.org/.
• Galaxy And Mass Assembly (GAMA) Blended Spectra Catalog: Strong Galaxy-Galaxy Lens and Occulting Galaxy Pair Candidates(1503.04813)
May 29, 2015 astro-ph.GA
We present the catalogue of blended galaxy spectra from the Galaxy And Mass Assembly (GAMA) survey. These are cases where light from two galaxies is significantly detected in a single GAMA fibre. Galaxy pairs identified from their blended spectrum fall into two principal classes: they are either strong lenses, a passive galaxy lensing an emission-line galaxy; or occulting galaxies, serendipitous overlaps of two galaxies, of any type. Blended spectra can thus be used to reliably identify strong lenses for follow-up observations (high resolution imaging) and occulting pairs, especially those where a late-type galaxy partly obscures an early-type galaxy, which are of interest for the study of the dust content of spiral and irregular galaxies. The GAMA survey setup and its autoz automated redshift determination were used to identify candidate blended galaxy spectra from the cross-correlation peaks. We identify 280 blended spectra with a minimum velocity separation of 600 km/s, of which 104 are lens pair candidates, 71 are emission-line-passive pairs, 78 are pairs of emission-line galaxies, and 27 are pairs of galaxies with passive spectra. We have visually inspected the candidates in the Sloan Digital Sky Survey (SDSS) and Kilo Degree Survey (KiDS) images. Many blended objects are ellipticals with blue fuzz (Ef in our classification). These latter "Ef" classifications are candidates for possible strong lenses, massive ellipticals with an emission-line galaxy in one or more lensed images. The GAMA lens and occulting galaxy candidate samples are similar in size to those identified in the entire SDSS. This blended spectrum sample stands as a testament to the power of this highly complete, second-largest spectroscopic survey in existence and offers the possibility to expand e.g., strong gravitational lens surveys.
Oct. 26, 2006 astro-ph
The ultraviolet-to-radio continuum spectral energy distributions are presented for all 75 galaxies in the Spitzer Infrared Nearby Galaxies Survey (SINGS). A principal component analysis of the sample shows that most of the sample's spectral variations stem from two underlying components, one representative of a galaxy with a low infrared-to-ultraviolet ratio and one representative of a galaxy with a high infrared-to-ultraviolet ratio. The influence of several parameters on the infrared-to-ultraviolet ratio is studied (e.g., optical morphology, disk inclination, far-infrared color, ultraviolet spectral slope, and star formation history). Consistent with our understanding of normal star-forming galaxies, the SINGS sample of galaxies in comparison to more actively star-forming galaxies exhibits a larger dispersion in the infrared-to-ultraviolet versus ultraviolet spectral slope correlation. Early type galaxies, exhibiting low star formation rates and high optical surface brightnesses, have the most discrepant infrared-to-ultraviolet correlation. These results suggest that the star formation history may be the dominant regulator of the broadband spectral variations between galaxies. Finally, a new discovery shows that the 24 micron morphology can be a useful tool for parametrizing the global dust temperature and ultraviolet extinction in nearby galaxies. The dust emission in dwarf/irregular galaxies is clumpy and warm accompanied by low ultraviolet extinction, while in spiral galaxies there is typically a much larger diffuse component of cooler dust and average ultraviolet extinction. For galaxies with nuclear 24 micron emission, the dust temperature and ultraviolet extinction are relatively high compared to disk galaxies.
• The Survey for Ionization in Neutral Gas Galaxies- II. The Star Formation Rate Density of the Local Universe(astro-ph/0604442)
May 12, 2006 astro-ph
We derive observed Halpha and R band luminosity densities of an HI-selected sample of nearby galaxies using the SINGG sample to be l_Halpha' = (9.4 +/- 1.8)e38 h_70 erg s^-1 Mpc^-3 for Halpha and l_R' = (4.4 +/- 0.7)e37 h_70 erg s^-1 A^-1 Mpc^-3 in the R band. This R band luminosity density is approximately 70% of that found by the Sloan Digital Sky Survey. This leads to a local star formation rate density of log(SFRD) = -1.80 +0.13/-0.07(random) +/- 0.03(systematic) + log(h_70) after applying a mean internal extinction correction of 0.82 magnitudes. The gas cycling time of this sample is found to be t_gas = 7.5 +1.3/-2.1 Gyr, and the volume-averaged equivalent width of the SINGG galaxies is EW(Halpha) = 28.8 +7.2/-4.7 A (21.2 +4.2/-3.5 A without internal dust correction). As with similar surveys, these results imply that SFRD(z) decreases drastically from z ~ 1.5 to the present. A comparison of the dynamical masses of the SINGG galaxies evaluated at their optical limits with their stellar and HI masses shows significant evidence of downsizing: the most massive galaxies have a larger fraction of their mass locked up in stars compared with HI, while the opposite is true for less massive galaxies. We show that the application of the Kennicutt star formation law to a galaxy having the median orbital time at the optical limit of this sample results in a star formation rate decay with cosmic time similar to that given by the SFRD(z) evolution. This implies that the SFRD(z) evolution is primarily due to the secular evolution of galaxies, rather than interactions or mergers. This is consistent with the morphologies predominantly seen in the SINGG sample.
• An Initial Look at the Far Infrared-Radio Correlation within Nearby Star-forming Galaxies using the Spitzer Space Telescope(astro-ph/0510227)
Nov. 21, 2005 astro-ph
(Abridged) We present an initial look at the far infrared-radio correlation within the star-forming disks of four nearby, nearly face-on galaxies (NGC 2403, NGC 3031, NGC 5194, and NGC 6946). Using Spitzer MIPS imaging and WSRT radio continuum data, observed as part of the Spitzer Infrared Nearby Galaxies Survey (SINGS), we are able to probe variations in the logarithmic 24mu/22cm (q_24) and 70mu/22cm (q_70) surface brightness ratios across each disk at sub-kpc scales. We find general trends of decreasing q_24 and q_70 with declining surface brightness and with increasing radius. The residual dispersion around the trend of q_24 and q_70 versus surface brightness is smaller than the residual dispersion around the trend of q_24 and q_70 versus radius, on average by ~0.1 dex, indicating that the distribution of star formation sites is more important in determining the infrared/radio disk appearance than the exponential profiles of disks. We have also performed preliminary phenomenological modeling of cosmic ray electron (CRe^-) diffusion using an image-smearing technique, and find that smoothing the infrared maps improves their correlation with the radio maps. Exponential kernels tend to work better than Gaussian kernels which suggests that additional processes besides simple random-walk diffusion in three dimensions must affect the evolution of CRe^-s. The best fit smoothing kernels for the two less active star-forming galaxies (NGC 2403 and NGC 3031) have much larger scale-lengths than those of the more active star-forming galaxies (NGC 5194 and NGC 6946). This difference may be due to the relative deficit of recent CRe^- injection into the interstellar medium (ISM) for the galaxies having largely quiescent disks.
• Infrared Spectral Energy Distributions of Nearby Galaxies(astro-ph/0507645)
July 28, 2005 astro-ph
The Spitzer Infrared Nearby Galaxies Survey (SINGS) is carrying out a comprehensive multi-wavelength survey on a sample of 75 nearby galaxies. The 1-850um spectral energy distributions are presented using broadband imaging data from Spitzer, 2MASS, ISO, IRAS, and SCUBA. The infrared colors derived from the globally-integrated Spitzer data are generally consistent with the previous generation of models that were developed based on global data for normal star-forming galaxies, though significant deviations are observed. Spitzer's excellent sensitivity and resolution also allow a detailed investigation of the infrared spectral energy distributions for various locations within the three large, nearby galaxies NGC3031 (M81), NGC5194 (M51), and NGC7331. Strong correlations exist between the local star formation rate and the infrared colors f_nu(70um)/f_nu(160um) and f_nu(24um)/f_nu(160um), suggesting that the 24 and 70um emission are useful tracers of the local star formation activity level. Preliminary evidence indicates that variations in the 24um emission, and not variations in the emission from polycyclic aromatic hydrocarbons at 8um, drive the variations in the f_nu(8.0um)/f_nu(24um) colors within NGC3031, NGC5194, and NGC7331. If the galaxy-to-galaxy variations in spectral energy distributions seen in our sample are representative of the range present at high redshift then extrapolations of total infrared luminosities and star formation rates from the observed 24um flux will be uncertain at the factor-of-five level (total range). The corresponding uncertainties using the redshifted 8.0um flux (e.g. observed 24um flux for a z=2 source) are factors of 10-20. Considerable caution should be used when interpreting such extrapolated infrared luminosities.
• Star Formation in NGC5194 (M51a): The Panchromatic View from GALEX to Spitzer(astro-ph/0507427)
July 19, 2005 astro-ph
(Abridged) Far ultraviolet to far infrared images of the nearby galaxy NGC5194, from Spitzer, GALEX, Hubble Space Telescope and ground--based data, are used to investigate local and global star formation, and the impact of dust extinction in HII-emitting knots. In the IR/UV-UV color plane, the NGC5194 HII knots show the same trend observed for normal star-forming galaxies, having a much larger dispersion than starburst galaxies. We identify the dispersion as due to the UV emission predominantly tracing the evolved, non-ionizing stellar population, up to ages 50-100 Myr. While in starbursts the UV light traces the current SFR, in NGC5194 it traces a combination of current and recent-past SFR. Unlike the UV emission, the monochromatic 24 micron luminosity is an accurate local SFR tracer for the HII knots in NGC5194; this suggests that the 24 micron emission carriers are mainly heated by the young, ionizing stars. However, preliminary results show that the ratio of the 24 micron emission to the SFR varies by a factor of a few from galaxy to galaxy. While also correlated with star formation, the 8 micron emission is not directly proportional to the number of ionizing photons. This confirms earlier suggestions that the carriers of the 8 micron emission are heated by more than one mechanism.
• Mid-Infrared IRS Spectroscopy of NGC 7331: A First Look at the SINGS Legacy(astro-ph/0406332)
June 14, 2004 astro-ph
The nearby spiral galaxy NGC 7331 was spectrally mapped from 5-38um using all modules of Spitzer's IRS spectrograph. A strong new dust emission feature, presumed due to PAHs, was discovered at 17.1um. The feature's intensity is nearly half that of the ubiquitous 11.3um band. The 7-14um spectral maps revealed significant variation in the 7.7 and 11.3um PAH features between the stellar ring and nucleus. Weak [OIV] 25.9um line emission was found to be centrally concentrated in the nucleus, with an observed strength over 10% of the combined neon line flux, indicating an AGN or unusually active massive star photo-ionization. Two [SIII] lines fix the characteristic electron density in the HII regions at n_e < ~200 cm^-3. Three detected H_2 rotational lines, tracing warm molecular gas, together with the observed IR continuum, are difficult to match with standard PDR models. Either additional PDR heating or shocks are required to simultaneously match lines and continuum.
|
# fix neb command
## Syntax
fix ID group-ID neb Kspring keyword value
• ID, group-ID are documented in fix command
• neb = style name of this fix command
• Kspring = spring constant for parallel nudging force (force/distance units or force units, see parallel keyword)
• zero or more keyword/value pairs may be appended
• keyword = parallel or perp or end
parallel value = neigh or ideal
neigh = parallel nudging force based on distance to neighbor replicas (Kspring = force/distance units)
ideal = parallel nudging force based on interpolated ideal position (Kspring = force units)
perp value = Kspring2
Kspring2 = spring constant for perpendicular nudging force (force/distance units)
end values = estyle Kspring3
estyle = first or last or last/efirst or last/efirst/middle
first = apply force to first replica
last = apply force to last replica
last/efirst = apply force to last replica and set its target energy to that of first replica
last/efirst/middle = same as last/efirst plus prevent middle replicas having lower energy than first replica
Kspring3 = spring constant for target energy term (1/distance units)
## Examples
fix 1 active neb 10.0
fix 2 all neb 1.0 perp 1.0 end last
fix 2 all neb 1.0 perp 1.0 end first 1.0 end last 1.0
fix 1 all neb 1.0 parallel ideal end last/efirst 1
## Description
Add nudging forces to atoms in the group for a multi-replica simulation run via the neb command to perform a nudged elastic band (NEB) calculation for finding the transition state. High-level explanations of NEB are given with the neb command and on the Howto replica doc page. The fix neb command must be used with the "neb" command and defines how inter-replica nudging forces are computed. A NEB calculation is divided into two stages. In the first stage n replicas are relaxed toward a MEP until convergence. In the second stage, the climbing image scheme (see (Henkelman2)) is enabled, so that the replica having the highest energy relaxes toward the saddle point (i.e. the point of highest energy along the MEP), and a second relaxation is performed.
A key purpose of the nudging forces is to keep the replicas equally spaced. During the NEB calculation, the 3N-length vector of interatomic force Fi = -Grad(V) for each replica I is altered. For all intermediate replicas (i.e. for 1 < I < N, except the climbing replica) the force vector becomes:
Fi = -Grad(V) + (Grad(V) dot T') T' + Fnudge_parallel + Fnudge_perp
T’ is the unit “tangent” vector for replica I and is a function of Ri, Ri-1, Ri+1, and the potential energy of the 3 replicas; it points roughly in the direction of (Ri+1 - Ri-1); see the (Henkelman1) paper for details. Ri are the atomic coordinates of replica I; Ri-1 and Ri+1 are the coordinates of its neighbor replicas. The term (Grad(V) dot T’) is used to remove the component of the gradient parallel to the path, which would otherwise tend to distribute the replicas unevenly along the path. Fnudge_parallel is an artificial nudging force which is applied only in the tangent direction and which maintains the equal spacing between replicas (see below for more information). Fnudge_perp is an optional artificial spring force which is applied in a direction perpendicular to the tangent direction and which prevents the path from forming acute kinks (see below for more information).
In the second stage of the NEB calculation, the interatomic force Fi for the climbing replica (the replica of highest energy after the first stage) is changed to:
Fi = -Grad(V) + 2 (Grad(V) dot T') T'
and the relaxation procedure is continued to a new converged MEP.
The keyword parallel specifies how the parallel nudging force is computed. With a value of neigh, the parallel nudging force is computed as in (Henkelman1) by connecting each intermediate replica with the previous and the next image:
Fnudge_parallel = Kspring * (|Ri+1 - Ri| - |Ri - Ri-1|)
Note that in this case the specified Kspring is in force/distance units.
With a value of ideal, the spring force is computed as suggested in (WeinanE):
Fnudge_parallel = -Kspring * (RD-RDideal) / (2 * meanDist)
where RD is the “reaction coordinate” (see the neb command) and RDideal is the ideal RD for which all the images are equally spaced, i.e. RDideal = (I-1)*meanDist when the climbing replica is off, where I is the replica number. The meanDist is the average distance between replicas. Note that in this case the specified Kspring is in force units.
Note that the ideal form of nudging can often be more effective at keeping the replicas equally spaced.
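The two parallel nudging variants can be sketched as scalar force magnitudes along the tangent direction. This is an illustrative sketch only, not LAMMPS source code, and the function names are invented for the example:

```python
import numpy as np

def parallel_nudge_neigh(r_prev, r_i, r_next, kspring):
    # "neigh" style: magnitude proportional to the difference of the distances
    # to the two neighboring replicas (Kspring in force/distance units);
    # it vanishes when replica i sits midway between its neighbors.
    return kspring * (np.linalg.norm(r_next - r_i) - np.linalg.norm(r_i - r_prev))

def parallel_nudge_ideal(rd_i, rd_ideal, mean_dist, kspring):
    # "ideal" style: magnitude proportional to the deviation of the reaction
    # coordinate of replica i from its equally spaced value (Kspring in force units).
    return -kspring * (rd_i - rd_ideal) / (2.0 * mean_dist)

# Equally spaced replicas feel no parallel nudging force in either style.
assert abs(parallel_nudge_neigh(np.zeros(3), np.ones(3), 2 * np.ones(3), 10.0)) < 1e-12
assert parallel_nudge_ideal(1.5, 1.5, 0.5, 10.0) == 0.0
```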
The keyword perp specifies if and how a perpendicular nudging force is computed. It adds a spring force perpendicular to the path in order to prevent the path from becoming too strongly kinked. It can significantly improve the convergence of the NEB calculation when the resolution is poor, i.e. when few replicas are used; see (Maras) for details.
The perpendicular spring force is given by
Fnudge_perp = Kspring2 * F(Ri-1,Ri,Ri+1) (Ri+1 + Ri-1 - 2 Ri)
where Kspring2 is the specified value. F(Ri-1,Ri,Ri+1) is a smooth scalar function of the angle Ri-1 Ri Ri+1. It is equal to 0.0 when the path is straight and equal to 1.0 when the angle Ri-1 Ri Ri+1 is acute. F(Ri-1,Ri,Ri+1) is defined in (Jonsson).
If Kspring2 is set to 0.0 (the default) then no perpendicular spring force is added.
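The perpendicular spring force admits a similarly compact sketch (again illustrative, not LAMMPS source; the switching function F is taken as a given input, with its exact form defined in (Jonsson)):

```python
import numpy as np

def perp_nudge(r_prev, r_i, r_next, kspring2, f_angle):
    # Fnudge_perp = Kspring2 * F(angle) * (Ri+1 + Ri-1 - 2 Ri), where f_angle
    # is the smooth scalar in [0, 1]: 0 for a straight path, 1 for an acute kink.
    return kspring2 * f_angle * (r_next + r_prev - 2.0 * r_i)

# On a straight, equally spaced path the geometric factor vanishes,
# so the perpendicular force is zero regardless of f_angle.
assert np.allclose(perp_nudge(np.zeros(2), np.array([1.0, 0.0]),
                              np.array([2.0, 0.0]), 5.0, 1.0), 0.0)
```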
By default, no additional forces act on the first and last replicas during the NEB relaxation, so these replicas simply relax toward their respective local minima. By using the keyword end, additional forces can be applied to the first and/or last replicas, to enable them to relax toward a MEP while constraining their energy E to the target energy ETarget.
If ETarget>E, the interatomic force Fi for the specified replica becomes:
Fi = -Grad(V) + (Grad(V) dot T' + (E-ETarget)*Kspring3) T', when Grad(V) dot T' < 0
Fi = -Grad(V) + (Grad(V) dot T' + (ETarget- E)*Kspring3) T', when Grad(V) dot T' > 0
The “spring” constant on the difference in energies is the specified Kspring3 value.
When estyle is specified as first, the force is applied to the first replica. When estyle is specified as last, the force is applied to the last replica. Note that the end keyword can be used twice to add forces to both the first and last replicas.
For both these estyle settings, the target energy ETarget is set to the initial energy of the replica (at the start of the NEB calculation).
If the estyle is specified as last/efirst or last/efirst/middle, force is applied to the last replica, but the target energy ETarget is continuously set to the energy of the first replica, as it evolves during the NEB relaxation.
The difference between these two estyle options is as follows. When estyle is specified as last/efirst, no change is made to the inter-replica force applied to the intermediate replicas (neither first nor last). If the initial path is too far from the MEP, an intermediate replica may relax “faster” and reach a lower energy than the last replica. In this case the intermediate replica will be relaxing toward its own local minimum. This behavior can be prevented by specifying estyle as last/efirst/middle, which will alter the inter-replica force applied to intermediate replicas by removing the contribution of the gradient to the inter-replica force. This will only be done if a particular intermediate replica has a lower energy than the first replica. This should effectively prevent the intermediate replicas from over-relaxing.
After converging a NEB calculation using an estyle of last/efirst/middle, you should check that all intermediate replicas have a larger energy than the first replica. If this is not the case, the path is probably not a MEP.
Finally, note that the last replica may never reach the target energy if it is stuck in a local minimum which has a larger energy than the target energy.
Restart, fix_modify, output, run start/stop, minimize info:
No information about this fix is written to binary restart files. None of the fix_modify options are relevant to this fix. No global or per-atom quantities are stored by this fix for access by various output commands. No parameter of this fix can be used with the start/stop keywords of the run command.
The forces due to this fix are imposed during an energy minimization, as invoked by the minimize command via the neb command.
## Restrictions
This command can only be used if LAMMPS was built with the REPLICA package. See the Build package doc page for more info.
## Default
The option defaults are parallel = neigh, perp = 0.0, and end is not specified (no inter-replica force on the end replicas).
(Henkelman1) Henkelman and Jonsson, J Chem Phys, 113, 9978-9985 (2000).
(Henkelman2) Henkelman, Uberuaga, Jonsson, J Chem Phys, 113, 9901-9904 (2000).
(WeinanE) E, Ren, Vanden-Eijnden, Phys Rev B, 66, 052301 (2002).
(Jonsson) Jonsson, Mills and Jacobsen, in Classical and Quantum Dynamics in Condensed Phase Simulations, edited by Berne, Ciccotti, and Coker World Scientific, Singapore, 1998, p 385.
(Maras) Maras, Trushin, Stukowski, Ala-Nissila, Jonsson, Comp Phys Comm, 205, 13-21 (2016).
|
## Probing top quark FCNC tq$$\gamma$$ and tqZ couplings at future electron-proton colliders. (English) Zbl 1430.81083
Summary: The top quark flavor changing neutral current (FCNC) processes are extremely suppressed within the Standard Model (SM) of particle physics. However, they could be enhanced in a new physics model Beyond the Standard Model (BSM). The top quark FCNC interactions would be a good test of new physics at present and future colliders. Within the framework of the BSM models, these interactions can be described by an effective Lagrangian. In this work, we study tq $$\gamma$$ and tqZ effective FCNC interaction vertices through the process $$e^- p \rightarrow e^- W q + X$$ at future electron proton colliders, projected as Large Hadron electron Collider (LHeC) and Future Circular Collider-hadron electron (FCC-he). The cross sections for the signal have been calculated for different values of parameters $$\lambda_q$$ for tq $$\gamma$$ vertices and $$\kappa_q$$ for tqZ vertices. Taking into account the relevant background we estimate the attainable range of signal parameters as a function of the integrated luminosity and present contour plots of couplings for different significance levels including detector simulation. We find the sensitivities to the branching ratios $$(B R(t \rightarrow q \gamma) = 7.5 \times 10^{- 6}, B R(t \rightarrow q Z) = 3.5 \times 10^{- 5})$$ and $$(B R(t \rightarrow q \gamma) = 8.5 \times 10^{- 7}, B R(t \rightarrow q Z) = 6.0 \times 10^{- 6})$$ for an integrated luminosity of $$2 \text{ab}^{- 1}$$ at LHeC and FCC-he, respectively.
### MSC:
81V05 Strong interaction, including quantum chromodynamics
81V22 Unified quantum theories
81V35 Nuclear physics
81U35 Inelastic and multichannel quantum scattering
### Software:
PYTHIA8; FeynRules
|
# Prove that if there are no cycles of length 3, there must be at least 2n vertices.
Suppose that a graph consists of m vertices, all of which have the same degree n $$\geq$$ 0. The graph can consist of disjoint, connected components, and there is at most one edge between any two vertices. How would I prove that if there aren't any cycles of length exactly 3, the graph must have at least 2n vertices?
I was planning on doing induction on $$m$$ (the number of vertices), and trying to use the handshaking lemma to sum the degrees. But I am stuck at the induction step!
• Hint: Think about the complete bipartite graph $K_{n,n}$. Nov 11 '18 at 19:30
• @DonaldSplutterwit That's a hint for the reverse problem, not for this one. Nov 11 '18 at 19:39
Well, if you show the following:
A graph $$G$$ with $$N$$ vertices and with average degree greater than $$\frac{N}{2}$$ or equivalently with more than $$\frac{N^2}{4}$$ edges (no need to assume regularity) has a triangle
then this will immediately imply what you are trying to establish.
To this end, let us assume that $$G$$ is triangle-free and attempt to upper-bound the number of edges $$G$$ can have. Let $$xy$$ be an edge in $$G$$, let $$U_x$$ be the set of vertices $$v \not = y$$ adjacent in $$G$$ to $$x$$, and let $$U_y$$ be the set of vertices $$v \not = x$$ adjacent in $$G$$ to $$y$$.
Then as $$G$$ is triangle free $$U_x$$ and $$U_y$$ are disjoint sets [make sure you see why] and furthermore $$U_x$$ and $$U_y$$ are independent sets in $$G$$ [make sure you see why]. Then the most edges $$G$$ can have is $$|U_x||U_y| + |U_x|+|U_y|+ 1$$. But because $$U_x$$ and $$U_y$$ are disjoint, it follows that $$|U_x|+|U_y|$$ is bounded by $$N-2$$ [make sure you see why] so this is at most $$\frac{(N-2)^2}{4}+N-2+ 1$$ which is no larger than $$\frac{N^2}{4}$$. So $$G$$ triangle-free implies no more than $$\frac{N^2}{4}$$ edges.
And this bound is tight for even $$N$$: consider the complete bipartite graph with $$\frac{N}{2}$$ vertices on one side and $$\frac{N}{2}$$ vertices on the other side.
If you work through this proof, can you find a tight bound on the number of edges in a triangle-free graph on $$N$$ vertices for odd $$N$$?
• Also a reasonable strategy, and the connection to Mantel's theorem is nice. Nov 11 '18 at 23:43
• I know it is an old question. I was wondering, for the odd N is it (N^2 - 1)/4? At least that is what I came up with. Just wanted to know if that result is right or not!
– SRC
Sep 25 '20 at 10:48
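The tightness claim above is easy to check mechanically. A minimal sketch (brute-force triangle search, fine for small graphs): the complete bipartite graph $$K_{N/2,N/2}$$ has exactly $$\frac{N^2}{4}$$ edges and no triangle.

```python
# Check Mantel tightness: K_{h,h} is triangle-free with (2h)^2/4 edges.
from itertools import combinations

def complete_bipartite(h):
    """Edges of K_{h,h} on vertices 0..2h-1 (left: 0..h-1, right: h..2h-1)."""
    return {(u, v) for u in range(h) for v in range(h, 2 * h)}

def has_triangle(vertices, edges):
    """Brute-force triangle search over all vertex triples."""
    adj = lambda a, b: (a, b) in edges or (b, a) in edges
    return any(adj(x, y) and adj(y, z) and adj(x, z)
               for x, y, z in combinations(vertices, 3))

N = 8
E = complete_bipartite(N // 2)
assert len(E) == N * N // 4            # exactly N^2/4 edges...
assert not has_triangle(range(N), E)   # ...and still triangle-free
```

The brute-force search is cubic in the number of vertices, which is perfectly adequate for sanity checks at this size.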
Let $$xy$$ be an edge of $$G$$. Each of $$x$$ and $$y$$ has $$n-1$$ neighbours in $$V(G) - x - y$$. These two sets of neighbours are disjoint, since otherwise $$G$$ would have a triangle. Therefore $$|V(G) - x - y| \geq 2(n-1)$$, and hence $$|V(G)| \geq 2n$$.
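The disjoint-neighbourhood argument can be watched in action on a concrete graph. The Petersen graph (my choice of illustration, not from the question) is 3-regular and triangle-free, so the argument forces at least $$2\cdot 3 = 6$$ vertices; it actually has 10.

```python
# The Petersen graph: outer 5-cycle, inner pentagram, and 5 spokes.
from itertools import combinations

outer = [(i, (i + 1) % 5) for i in range(5)]
inner = [(5 + i, 5 + (i + 2) % 5) for i in range(5)]
spokes = [(i, i + 5) for i in range(5)]
edges = {frozenset(e) for e in outer + inner + spokes}

def adj(a, b):
    return frozenset((a, b)) in edges

def nbrs(v):
    return {w for w in range(10) if adj(v, w)}

# 3-regular and triangle-free.
assert all(len(nbrs(v)) == 3 for v in range(10))
assert not any(adj(x, y) and adj(y, z) and adj(x, z)
               for x, y, z in combinations(range(10), 3))

x, y = 0, 1  # an edge xy
# Triangle-freeness makes N(x)\{y} and N(y)\{x} disjoint...
assert (nbrs(x) - {y}).isdisjoint(nbrs(y) - {x})
# ...so |V| >= 2 + (n-1) + (n-1) = 2n.
assert 10 >= 2 * 3
```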
Induction generally works pretty badly on regular graphs (graphs in which every vertex has the same degree). Induction on graphs is all about the idea that to deal with an $$m$$-vertex graph, you delete a vertex and apply the inductive hypothesis to the $$(m-1)$$-vertex graph. But in this case, if you delete a vertex from a regular graph, the remainder is no longer regular, so the inductive hypothesis does not apply.
If you stick with the idea of induction, you want to generalize the claim. We can try to prove that in any graph which has no length-$$3$$ cycles, and in which every vertex has at least $$n$$ neighbors, there must be at least $$2n$$ vertices total.
If we make this an induction on $$n$$, then (knowing what the final answer looks like) we want to delete $$2$$ vertices from a graph with this property. To apply the inductive hypothesis, we want to do it in such a way that in the remainder, every vertex still has degree at least $$n-1$$. Then, by the inductive hypothesis, the remainder has at least $$2(n-1)$$ vertices; together with the vertices we deleted, the original graph must have at least $$2n$$ vertices.
Now there is only one thing you need to figure out to make this work. How to choose two vertices to remove from such a graph, so that every degree in the remainder doesn't go below $$n-1$$?
Consider the vertex labelled $$1$$. It is connected to vertices $$2,3,\dots,n+1$$ because it has $$n$$ neighbours. Now, for $$2\leq i < j \leq n+1$$, vertices $$i$$ and $$j$$ cannot be neighbours, because if they were then $$(1,i,j)$$ would be a 3-cycle. Thus the neighbours of vertex $$2$$ are vertex $$1$$ together with $$n-1$$ vertices outside $$\{1,2,\dots,n+1\}$$. So we already have $$1+n+(n-1)$$ vertices [$$1$$ for vertex $$1$$; $$n$$ for vertices $$2,\dots,n+1$$; $$n-1$$ for the neighbours of vertex $$2$$ other than vertex $$1$$]. Thus $$m\geq 1+n+(n-1)=2n.$$
|
# Noninteracting trapped Fermions in double-well potentials: inverted parabola kernel
Abstract: We study a system of $N$ noninteracting spinless fermions in a confining, double-well potential in one dimension. When the Fermi energy is close to the value of the potential at its local maximum we show that physical properties, such as the average density and the fermion position correlation functions, display a universal behavior that depends only on the local properties of the potential near its maximum. This behavior describes the merging of two Fermi gases, which are disjoint at sufficiently low Fermi energies. We describe this behavior in terms of a new correlation kernel that we compute analytically, and we call it the "inverted parabola kernel". As an application, we calculate the mean and variance of the number of particles in an interval of size $2L$ centered around the position of the local maximum, for sufficiently small $L$. Finally, we discuss the possibility of observing our results in experiments, as well as the extensions to nonzero temperature and to higher space dimensions.
Document type :
Preprints, Working Papers, ...
https://hal.archives-ouvertes.fr/hal-02484003
Submitted on : Wednesday, February 19, 2020 - 1:35:08 AM
Last modification on : Wednesday, April 1, 2020 - 3:53:32 AM
### Citation
Naftali R. Smith, David S. Dean, Pierre Le Doussal, Satya N. Majumdar, Grégory Schehr. Noninteracting trapped Fermions in double-well potentials: inverted parabola kernel. 2020. ⟨hal-02484003⟩
|
Percolation thresholds and fractal dimensions for square and cubic lattices with long-range correlated defects.
@article{Zierenberg2017PercolationTA,
title={Percolation thresholds and fractal dimensions for square and cubic lattices with long-range correlated defects.},
author={Johannes Zierenberg and Niklas Fricke and Martin Marenz and Franz Paul Spitzner and Viktoria Blavatska and Wolfhard Janke},
journal={Physical review. E},
year={2017},
volume={96 6-1},
pages={
062125
}
}
• Published 7 August 2017
• Physics, Mathematics
• Physical review. E
We study long-range power-law correlated disorder on square and cubic lattices. In particular, we present high-precision results for the percolation thresholds and the fractal dimension of the largest clusters as a function of the correlation strength. The correlations are generated using a discrete version of the Fourier filtering method. We consider two different metrics to set the length scales over which the correlations decay, showing that the percolation thresholds are highly sensitive to…
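The Fourier filtering method named in the abstract can be sketched compactly: shape Gaussian white noise in Fourier space so that the resulting field has power-law correlations. Exponent conventions differ between papers; the spectral density $S(q)\sim |q|^{-(d-a)}$ used below is an illustrative assumption, not necessarily the convention of this paper.

```python
# Sketch of the Fourier filtering method for long-range correlated disorder.
# Assumes S(q) ~ |q|^{-(d-a)} purely for illustration.
import numpy as np

def correlated_field(L=64, a=0.5, d=2, seed=0):
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=(L, L))            # white Gaussian noise
    q1 = np.fft.fftfreq(L) * 2 * np.pi
    q = np.sqrt(q1[:, None] ** 2 + q1[None, :] ** 2)
    q[0, 0] = np.inf                           # drop the zero mode
    filt = q ** (-(d - a) / 2)                 # amplitude filter ~ sqrt(S(q))
    field = np.fft.ifft2(np.fft.fft2(noise) * filt).real
    return (field - field.mean()) / field.std()

field = correlated_field()
# Thresholding the field at its median yields correlated site defects
# occupying roughly half the lattice.
defects = field > np.median(field)
```

Tuning the threshold instead of the median sets the defect concentration, which is how one would then scan for the percolation threshold.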
20 Citations
• Mathematics
• 2020
We consider discrete random fractal surfaces with negative Hurst exponent $H<0$. A random colouring of the lattice is provided by activating the sites at which the surface height is greater than a
• Physics
Physical review letters
• 2021
Using a loop-model mapping, it is shown that there is a nontrivial percolation transition; the critical point is characterized, and the critical clusters are "logarithmic fractals" whose area scales with the linear size as $A \sim L^2/\sqrt{\ln L}$.
• Physics
Scientific Reports
• 2018
It is shown numerically that in the continuum limit the external perimeter of a percolating cluster of correlated surfaces with H ∈ [−1, 0] is statistically equivalent to SLE curves.
• Physics
Journal of Physics Communications
• 2020
A dynamical model that can exhibit both fractal percolation growth and compact circular growth is presented. At any given cluster size, the dimension of a cluster growing on a two-dimensional square
• Physics
Physical review. E
• 2022
Extended-range percolation on various regular lattices, including all 11 Archimedean lattices in two dimensions and the simple cubic (sc), body-centered cubic (bcc), and face-centered cubic (fcc)
• Materials Science
• 2020
We study the critical behavior of the Ising model in three dimensions on a lattice with site disorder by using Monte Carlo simulations. The disorder is either uncorrelated or long-range correlated
• Physics
Physical review. E
• 2018
Using high-precision Monte Carlo simulations and finite-size scaling, the authors study the effect of quenched disorder in the exchange couplings on the Blume-Capel model on the square lattice and find that it belongs to the universality class of the Ising model with additional logarithmic corrections.
• Materials Science
Soft matter
• 2021
The structure of the disclination motifs induced shows that the hexatic-amorphous transition is caused by the growth and connection of disclination grain boundaries, suggesting this transition lies in the percolation universality class in the scenarios considered.
• Materials Science
Chaos
• 2020
We systematically study the percolation phase transition at the change of concentration of the chaotic defects (pores) in an extended system where the disordered defects additionally have a variable
References
• Physics
Physical review. E, Statistical, nonlinear, and soft matter physics
• 2013
Long-range power-law correlated percolation is investigated using Monte Carlo simulations. We obtain several static and dynamic critical exponents as functions of the Hurst exponent H, which
In this note we study the field theory of dynamic isotropic percolation (DIP) with quenched randomness that has long range correlations decaying as $r^{-a}$. We argue that the quasi static limit of
• Materials Science
Physical review. A, Atomic, molecular, and optical physics
• 1992
An algorithm for generating long-range correlations in the percolation problem is developed, and it is found that the fractal dimensions of the backbone and the red bonds are quite different from uncorrelated percolation and vary with $\lambda$, the strength of the correlation.
• Mathematics
• 2017
We study the scaling laws of diffusion in two-dimensional media with long-range correlated disorder through exact enumeration of random walks. The disordered medium is modelled by percolation
• Physics
• 2011
We study the correlated-disorder driven zero-temperature phase transition of the Random-Field Ising Magnet using exact numerical ground-state calculations for cubic lattices. We consider correlations
• Physics
• 1985
Preface to the Second Edition Preface to the First Edition Introduction: Forest Fires, Fractal Oil Fields, and Diffusion What is percolation? Forest fires Oil fields and fractals Diffusion in
• Physics
• 1998
The distributions $P(X)$ of singular thermodynamic quantities in an ensemble of quenched random samples of linear size $l$ at the critical point $T_c$ are studied by Monte Carlo in two models. Our
• Physics, Mathematics
• 2013
We simulate the bond and site percolation models on several three-dimensional lattices, including the diamond, body-centered cubic, and face-centered cubic lattices. As on the simple-cubic lattice
• Mathematics
Physical review. E, Statistical, nonlinear, and soft matter physics
• 2013
The bond and site percolation models are simulated on a simple-cubic lattice with linear sizes up to L=512, and various universal amplitudes are obtained, including wrapping probabilities, ratios associated with the cluster-size distribution, and the excess cluster number.
• Physics
• 2000
A field-theory description of the static and dynamic critical behavior of systems with quenched defects obeying power-law correlations $|x|^{-a}$ for large separations $x$ is given. Directly, for
|
## LaTeX and Bitstream Vera Mono Font
Random questions or observations about and around computers
Vincent
Posts: 3077
Joined: Fri Apr 07, 2006 12:10 pm
Location: Schtroumpf
Contact:
### LaTeX and Bitstream Vera Mono Font
I like the Bitstream Vera Mono font; here are notes on how to use it as a replacement for the default typewriter font within LaTeX.
The problem with the suggested usage is that, even scaled by the default 90%, the font is still way too large.
A value that gives roughly the same horizontal spread as the original typewriter font is around 0.855; but even then it doesn't look good, as the characters are still noticeably taller than the standard text (default LaTeX fonts). So I use a smaller value: 0.8.
% Use Bera Mono as typewriter font
\usepackage[T1]{fontenc}
\usepackage[scaled=0.8]{beramono}
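For reference, here is a minimal complete document exercising that setup; the body text is just a placeholder to compare the two faces side by side (a sketch, with 0.8 taken from the note above — adjust to taste):

```latex
\documentclass{article}
% Bera Mono (Bitstream Vera Mono) as the typewriter face,
% scaled down so it blends with the default roman text.
\usepackage[T1]{fontenc}
\usepackage[scaled=0.8]{beramono}
\begin{document}
Regular text next to \texttt{typewriter text in Bera Mono},
and a verbatim line:
\begin{verbatim}
int main(void) { return 0; }
\end{verbatim}
\end{document}
```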
{ Vincent Hugot }
|
The complement value problem for non-local operators
@article{Sun2018TheCV,
title={The complement value problem for non-local operators},
author={Wei Sun},
journal={arXiv: Probability},
year={2018}
}
• W. Sun
• Published 31 March 2018
• Mathematics
• arXiv: Probability
Let $D$ be a bounded Lipschitz domain of $\mathbb{R}^d$. We consider the complement value problem $$\left\{\begin{array}{l}(\Delta+a^{\alpha}\Delta^{\alpha/2}+b\cdot\nabla+c)u+f=0\ \ {\rm in}\ D,\\ u=g\ \ {\rm on}\ D^c. \end{array}\right.$$ Under mild conditions, we show that there exists a unique bounded continuous weak solution. Moreover, we give an explicit probabilistic representation of the solution. The theory of semi-Dirichlet forms and heat kernel estimates play an important role in…
3 Citations
The complement value problem for a class of second order elliptic integro-differential operators
We consider the complement value problem for a class of second order elliptic integro-differential operators. Let $D$ be a bounded Lipschitz domain of $\mathbb{R}^d$. Under mild conditions, we show
The obstacle problem for quasilinear stochastic integral-partial differential equations
• Mathematics
Stochastics
• 2019
ABSTRACT We prove the existence and uniqueness of solutions to a kind of quasilinear stochastic integral-partial differential equations with obstacles. Our method is based on the probabilistic
Stochastic partial integral-differential equations with divergence terms
• Mathematics
• 2020
We study a class of stochastic partial integral-differential equations with an asymmetrical non-local operator $\frac{1}{2}\Delta + a\Delta^{\alpha/2} + b\cdot\nabla$ and a distribution expressed as the divergence of a measurable field. For
References
Local and nonlocal boundary conditions for μ-transmission and fractional elliptic pseudodifferential operators
A classical pseudodifferential operator $P$ on $R^n$ satisfies the $\mu$-transmission condition relative to a smooth open subset $\Omega$, when the symbol terms have a certain twisted parity on the
ON SHAPE OPTIMIZATION PROBLEMS INVOLVING THE FRACTIONAL LAPLACIAN
• Mathematics, Physics
• 2012
Our concern is the computation of optimal shapes in problems involving $(-\Delta)^{1/2}$. We focus on the energy $J(\Omega)$ associated to the solution $u_\Omega$ of the basic Dirichlet problem
A priori estimates for integro-differential operators with measurable kernels
The aim of this work is to develop a localization technique and to establish a regularity result for non-local integro-differential operators $\mathcal{L}$ of order $\alpha\in(0,2)$.
Boundary regularity for fully nonlinear integro-differential equations
• Mathematics
• 2016
We study fine boundary regularity properties of solutions to fully nonlinear elliptic integro-differential equations of order $2s$, with $s \in (0,1)$. We consider the class of nonlocal operators $\mathcal{L}_* \subset \mathcal{L}_0$,
Results on Nonlocal Boundary Value Problems
• Mathematics
• 2010
In this article, we provide a variational theory for nonlocal problems where nonlocality arises due to the interaction in a given horizon. With this theory, we prove well-posedness results for the
The Dirichlet problem for the fractional Laplacian: Regularity up to the boundary
• Physics, Mathematics
• 2012
Abstract We study the regularity up to the boundary of solutions to the Dirichlet problem for the fractional Laplacian. We prove that if u is a solution of ( − Δ ) s u = g in Ω, u ≡ 0 in R n \ Ω ,
Jump-type Hunt processes generated by lower bounded semi-Dirichlet forms
• Mathematics
• 2012
Let $E$ be a locally compact separable metric space and $m$ be a positive Radon measure on it. Given a nonnegative function $k$ defined on $E\times E$ off the diagonal whose anti-symmetric part is assumed to be
Analysis and Approximation of Nonlocal Diffusion Problems with Volume Constraints
• Mathematics, Computer Science
SIAM Rev.
• 2012
It is shown that fractional Laplacian and fractional derivative models for anomalous diffusion are special cases of the nonlocal model for diffusion that the authors consider.
The Dirichlet problem for stable-like operators and related probabilistic representations
• Mathematics
• 2016
ABSTRACT We study stochastic differential equations with jumps with no diffusion part, governed by a large class of stable-like operators, which may contain a drift term. For this class of operators,
Heat kernel estimates for $\Delta+\Delta^{\alpha/2}$ under gradient perturbation
• Mathematics
• 2014
For $\alpha\in(0,2)$ and $M>0$, we consider a family of nonlocal operators $\{\Delta+a^{\alpha}\Delta^{\alpha/2} : a\in(0,M]\}$ on $\mathbb{R}^d$ under Kato-class gradient perturbation. We establish the existence and uniqueness of their fundamental
|
# Cauchy's integral theorem and integral formula

Theorem (Cauchy's integral theorem). Suppose $C$ is a simple closed curve and the function $f(z)$ is analytic on a simply connected region containing $C$ and its interior. Then

$$\oint_C f(z)\,dz = 0.$$

Equivalently, the integral of $f$ along a path in the region depends only on the path's endpoints, so any indefinite integral of $f$ has the form $F(z)+c$, where $F'(z)=f(z)$ and $c$ is a constant.

Theorem (Cauchy's integral formula). With $C$ and $f$ as above and $C$ oriented counterclockwise, for any $z_0$ inside $C$,

$$f(z_0) = \frac{1}{2\pi i}\oint_C \frac{f(z)}{z-z_0}\,dz.$$

The formula expresses the fact that a holomorphic function is completely determined by its values on a closed path surrounding a point: the value at the point is recovered from the boundary values alone. The heart of the proof: the integrand $f(z)/(z-z_0)$ is analytic everywhere except at $z_0$, so by the integral theorem the contour can be deformed to an arbitrarily small circle around $z_0$, on which $f(z)$ is arbitrarily close to $f(z_0)$ by continuity. Taking $C$ to be a circle centered at $z_0$ shows that the value at the center is the mean of the values on the circle; an analogous mean value property holds for harmonic functions.

Differentiating under the integral sign gives integral formulas for all derivatives:

$$f^{(n)}(z) = \frac{n!}{2\pi i}\oint_C \frac{f(w)}{(w-z)^{n+1}}\,dw, \qquad n = 0, 1, 2, \ldots$$

To see that $f$ is in fact analytic, i.e. developable in a power series, fix $a$ and $r>0$ with the closed disk $\overline{D}(a,r)$ inside the region, and let $\gamma(\theta)=a+re^{i\theta}$, $\theta\in[0,2\pi]$, be the positively oriented circle of center $a$ and radius $r$. For $|z-a|<r$ one has $\left|\frac{z-a}{\gamma(\theta)-a}\right| = \frac{|z-a|}{r} < 1$, so the geometric series

$$\frac{1}{\gamma(\theta)-z} = \frac{1}{\gamma(\theta)-a}\cdot\frac{1}{1-\frac{z-a}{\gamma(\theta)-a}} = \sum_{n=0}^{\infty} \frac{(z-a)^n}{(\gamma(\theta)-a)^{n+1}}$$

converges uniformly in $\theta$ on the compact interval $[0,2\pi]$. Multiplying by $f(\gamma(\theta))$ and exchanging sum and integral shows that $f$ equals its Taylor series on $D(a,r)$: every holomorphic function is analytic. Since analyticity is a local property, this holds on an arbitrary open set $U$, not only a connected one. Right away the formula reveals a number of further properties of analytic functions (infinite differentiability, a complex form of the inverse function theorem), and it is the key step in proving the residue theorem.

A caution on terminology: Cauchy's *mean value theorem* is a different result, from real analysis. It generalizes Lagrange's mean value theorem, relating the derivatives of two functions to the changes in those functions on a finite interval, and is also called the extended or second mean value theorem.

Historical note: Cauchy in 1816 (and, independently, Poisson in 1815) gave a derivation of the Fourier integral theorem by means of an argument involving what we would now recognise as a sampling operation of the type associated with a delta function; there are similar early uses of what are essentially delta functions by Kirchhoff, Helmholtz, and, of course, Heaviside himself.

References: Arfken, "Cauchy's Integral Theorem," §6.3 in *Mathematical Methods for Physicists*, 3rd ed.; Knopp, *Theory of Functions, Parts I and II*; Krantz, "The Cauchy Integral Theorem and Formula," §2.3 in *Handbook of Complex Variables*; Woods, "Integral of a Complex Function," §145 in *Advanced Calculus*; Kaplan, "Integrals of Analytic Functions," in *Advanced Calculus*, 4th ed.; Morse and Feshbach, *Methods of Theoretical Physics*, Part I.
Connected domain } \ ) a second blog post will include the second proof, well... ( C ) Thefunctionlog αisanalyticonC\R, anditsderivativeisgivenbylog α ( z ) =1/z a continuous derivative walk through homework step-by-step..., est un point essentiel de l'analyse complexe z0 in its own right step-by-step., Cauchy 's Integral theorem. of Applied Mathematics dérivées d'une fonction holomorphe contour completely in! Ii, two Volumes Bound as One, Part I un cercle C orienté positivement, z. Course Arranged with Special Reference to the Needs of Students of Applied.... Of complex integration and proves Cauchy 's Integral formula, named after Augustin-Louis Cauchy due! If is analytic everywhere except at z0 theorem generalizes Lagrange ’ s Mean Value theorem. important inverse theorem. A central statement in complex analysis of has the form, where, is a connected... Du point z par rapport au chemin γ − z0 is analytic in a simply connected region, then for... Or contain z0 in its own right mathématicien Augustin Louis Cauchy, est un essentiel... Analysis it has always been as a comparison between the two interesting and properties... One, Part I ) =1/z form, where, is a function be analytic in simply. Derivatives of two functions and changes in these functions on a finite interval, Analyse réelle complexe. C ) Thefunctionlog αisanalyticonC\R, anditsderivativeisgivenbylog α ( z ) =F ( z ).! ; Twitter ; Google + Leave a Reply Cancel Reply function theorem that will useful... # 1 tool for creating Demonstrations and anything technical proves Cauchy 's formula! Region, then, for any closed contour completely contained in G f ( z ) =F ( z +C! ; Google + Leave a Reply Cancel Reply it is significant nonetheless its interior next step on own! Feshbach, H. Methods of Theoretical Physics, Part I toutes les dérivées d'une holomorphe! Essentiel de l'analyse complexe être utilisée pour exprimer sous forme d'intégrales toutes dérivées... 
On your own of complex integration and proves Cauchy 's theorem. this theorem as is. ( A\ ) is a Lipschitz graph in, that is ) is a simply connected region the..., for any closed contour completely contained in Bound as One, Part I at z0 your own page été. Theorem & formula ( complex variable & numerical m… Share suppose that \ ( \PageIndex { 1 } \ a! +C f ( z ) =1/z two Volumes Bound as One, I... We will state ( but not prove ) this theorem is also called the or... Has the form, where, is a function be analytic in a simply connected region containing point!, H. Methods of Theoretical Physics, Part I simple closed contour completely contained in, 3rd ed inverse... ) & ( z ) =F ( z ) =1/z weisstein, Eric W. Cauchy Integral...., anditsderivativeisgivenbylog α ( z ) = 1 z − z0 is analytic in some simply connected region,,., G. the Cauchy Integral theorem & formula ( complex variable numerical! S. Integral of has the form, where, is a constant.... Has always been 1 tool for creating Demonstrations and anything technical point \ ( \PageIndex 1. Constant, n ) ( z ) =1/z it still remains the result. Which is d'intégrales toutes les dérivées d'une fonction holomorphe pour exprimer sous forme d'intégrales toutes les dérivées d'une holomorphe! Due au mathématicien Augustin Louis Cauchy, due au mathématicien Augustin Louis Cauchy is... ’ ll need a theorem that is function be analytic in some simply connected domain as! Integration and proves Cauchy 's Integral theorem. for Physicists, 3rd ed useful properties of analytic.!
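As a numerical sanity check of the integral formula (an illustration, not taken from the references above), one can discretize the contour integral for a concrete analytic function such as $f(z) = e^z$ and confirm that it reproduces $f(a)$:

```python
import cmath

def cauchy_integral(f, a, radius=1.0, n=2000):
    """Approximate (1/2*pi*i) * contour integral of f(z)/(z-a) over the
    circle |z - a| = radius, via the parametrization z = a + r*e^{i*theta},
    dz = i*r*e^{i*theta} d(theta), summed at n equally spaced angles."""
    total = 0j
    for k in range(n):
        theta = 2 * cmath.pi * k / n
        z = a + radius * cmath.exp(1j * theta)
        dz = 1j * radius * cmath.exp(1j * theta) * (2 * cmath.pi / n)
        total += f(z) / (z - a) * dz
    return total / (2j * cmath.pi)

# Cauchy's integral formula predicts f(a) exactly; for f(z) = e^z, a = 0
# the contour integral should recover e^0 = 1.
approx = cauchy_integral(cmath.exp, 0.0)
print(abs(approx - 1.0))  # very small: the discretized integral matches f(0)
```

Because the integrand is analytic and the parametrization is periodic, this equal-spacing quadrature converges extremely fast, so even a modest number of sample points reproduces $f(0) = 1$ essentially to machine precision.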
|
B1. Character Swap (Easy Version)
time limit per test
1 second
memory limit per test
256 megabytes
input
standard input
output
standard output
This problem is different from the hard version. In this version Ujan makes exactly one exchange. You can hack this problem only if you solve both problems.
After struggling and failing many times, Ujan decided to try to clean up his house again. He decided to get his strings in order first.
Ujan has two distinct strings $s$ and $t$ of length $n$ consisting only of lowercase English characters. He wants to make them equal. Since Ujan is lazy, he will perform the following operation exactly once: he takes two positions $i$ and $j$ ($1 \le i,j \le n$, the values $i$ and $j$ can be equal or different), and swaps the characters $s_i$ and $t_j$. Can he succeed?
Note that he has to perform this operation exactly once; he cannot skip it.
Input
The first line contains a single integer $k$ ($1 \leq k \leq 10$), the number of test cases.
For each of the test cases, the first line contains a single integer $n$ ($2 \leq n \leq 10^4$), the length of the strings $s$ and $t$.
Each of the next two lines contains one of the strings $s$ and $t$, each having length exactly $n$. The strings consist only of lowercase English letters. It is guaranteed that the strings are different.
Output
For each test case, output "Yes" if Ujan can make the two strings equal and "No" otherwise.
You can print each letter in any case (upper or lower).
Example
Input
4
5
souse
houhe
3
cat
dog
2
aa
az
3
abc
bca
Output
Yes
No
No
No
Note
In the first test case, Ujan can swap characters $s_1$ and $t_4$, obtaining the word "house".
In the second test case, it is not possible to make the strings equal using exactly one swap of $s_i$ and $t_j$.
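The notes above suggest the standard approach: collect the positions where $s$ and $t$ differ. A single swap of $s_i$ with $t_j$ can succeed only when there are exactly two mismatch positions $p$ and $q$ with $s_p = s_q$ and $t_p = t_q$ (then swapping $s_p$ with $t_q$ fixes both). A sketch of this solution in Python (the problem statement does not prescribe an implementation):

```python
import sys

def can_equalize(s: str, t: str) -> str:
    """Decide whether exactly one swap of some s_i with some t_j makes s == t."""
    diffs = [i for i in range(len(s)) if s[i] != t[i]]
    # With exactly two mismatches p, q, swapping s_p with t_q works
    # iff s_p == s_q and t_p == t_q; any other mismatch count fails
    # (the strings are guaranteed distinct, so 0 mismatches cannot occur).
    if len(diffs) != 2:
        return "No"
    p, q = diffs
    return "Yes" if s[p] == s[q] and t[p] == t[q] else "No"

def main():
    data = sys.stdin.read().split()
    if not data:  # no input piped in
        return
    k = int(data[0])
    idx = 1
    out = []
    for _ in range(k):
        s, t = data[idx + 1], data[idx + 2]  # data[idx] is n, unused here
        idx += 3
        out.append(can_equalize(s, t))
    print("\n".join(out))

if __name__ == "__main__":
    main()
```

On the sample input this prints Yes, No, No, No, matching the expected output above.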
|
# Series-equivalent not implies automorphic
## Statement
It is possible to have a group $G$ and normal subgroups $H$ and $K$ of $G$ that are Series-equivalent subgroups (?) in the sense that $H \cong K$ and $G/H \cong G/K$, but $H$ and $K$ are not automorphic subgroups -- in other words, there is no automorphism of $G$ that sends $H$ to $K$.
## Related facts
### Stronger facts
There are many slight strengthenings of the result that are presented below, along with the smallest order of known examples.
| Statement | Constraint | Smallest order of $G$ among known examples | Isomorphism class of $G$ | Isomorphism class of $H$ (and of $K$) | Isomorphism class of quotient group |
| --- | --- | --- | --- | --- | --- |
| series-equivalent abelian-quotient abelian not implies automorphic | $H$ and $K$ are both abelian | 16 | nontrivial semidirect product of Z4 and Z4 | direct product of Z4 and Z2 | cyclic group:Z2 |
| series-equivalent characteristic central subgroups may be distinct | $H$ and $K$ are both central subgroups of $G$ | 32 | SmallGroup(32,28) | cyclic group:Z2 | direct product of D8 and Z2 |
| series-equivalent abelian-quotient central subgroups not implies automorphic | $H$ and $K$ are central and the quotients are abelian | 64 | semidirect product of Z8 and Z8 of M-type | direct product of Z4 and Z2 | direct product of Z4 and Z2 |
| series-equivalent not implies automorphic in finite abelian group | $G$ is a finite abelian group | 128 | direct product of Z8 and Z4 and V4 | direct product of Z8 and V4 | direct product of Z4 and Z2 |
| characteristic maximal not implies isomorph-free in group of prime power order | $H$ and $K$ are maximal, $H$ is characteristic, and $G$ is a group of prime power order | 16 | nontrivial semidirect product of Z4 and Z4 | direct product of Z4 and Z2 | cyclic group:Z2 |
| characteristic maximal subgroups may be isomorphic and distinct in group of prime power order | both $H$ and $K$ are characteristic and maximal, and $G$ is a group of prime power order | 64 | (information to be filled in) | (information to be filled in) | (information to be filled in) |
## Proof
For the proof, see any of the stronger facts listed above.
|
# Properties of Matrices Transpose
A matrix is a collection of numbers arranged in a fixed number of rows and columns, i.e. a rectangular array. When we turn the rows of a matrix into columns and its columns into rows, the resulting matrix is called the transpose of the given matrix.
This interchanging of rows and columns of the original matrix is what transposing a matrix means.
If M = [M[ ij ]] is an m x n matrix and we want to find its transpose, we interchange the rows and columns. The transpose is denoted by $M^{T}$ or M’. So if M = [M[ ij ]]m x n is the original matrix, then M’ = [M[ ji ]]n x m is its transpose.
For example: M = $\begin{bmatrix} 2 & 3 & 4\\ 5 & 6 & 7 \end{bmatrix}$
then M’ = $\begin{bmatrix} 2 & 5\\ 3 & 6\\ 4& 7 \end{bmatrix}$
## Properties of matrices transpose with examples
1. The transpose of the transpose of a matrix is the matrix itself: $[M^{T}]^{T} = M$.
For example: M = $\begin{bmatrix} 2 & 3 & 4\\ 5 & 6 & 7 \end{bmatrix}$
then M’ = $\begin{bmatrix} 2 & 5\\ 3 & 6\\ 4& 7 \end{bmatrix}$
and [M’]’ = $\begin{bmatrix} 2 & 3 & 4\\ 5 & 6 & 7 \end{bmatrix}$
2. If there’s a scalar $a$, then the transpose of the matrix $M$ times the scalar $a$ is equal to the scalar times the transpose of $M$, i.e. $(aM)^{T} = aM^{T}$.
For example:
if M = $\begin{bmatrix} 2 & 3 & 4\\ 5 & 6 & 7 \end{bmatrix}$ and constant a = 2 ,then
LHS : [aM]T = (2 $\begin{bmatrix} 2 & 3 & 4\\ 5 & 6 & 7 \end{bmatrix}$)T
I.e $\begin{bmatrix} 4 & 6 & 8\\ 10 & 12 & 14 \end{bmatrix}$T
$\begin{bmatrix} 4 & 10\\ 6 & 12\\ 8 & 14 \end{bmatrix}$
RHS: a[M]T = 2 ($\begin{bmatrix} 2 & 3 & 4\\ 5 & 6 & 7 \end{bmatrix}$)T
= 2 ($\begin{bmatrix} 2 & 5\\ 3 & 6\\ 4& 7 \end{bmatrix}$)
= $\begin{bmatrix} 4 & 10\\ 6 & 12\\ 8 & 14 \end{bmatrix}$
So, LHS = RHS
3. The sum of the transposes of two matrices is equal to the transpose of the sum of the two matrices:
$(M + N)^{T} = M^{T} + N^{T}$
M = $\begin{bmatrix} 2 & 3 & 4\\ 5 & 6 & 7 \end{bmatrix}$
N = $\begin{bmatrix} 8 & 9 & 10\\ 11 & 12 & 13 \end{bmatrix}$
Proof :
(M + N )T = MT + NT
LHS = ($\begin{bmatrix} 2 & 3 & 4\\ 5 & 6 & 7 \end{bmatrix}+\begin{bmatrix} 8 & 9 & 10\\ 11 & 12 & 13 \end{bmatrix}$)T
$(\begin{bmatrix}2 + 8 & 3 + 9 & 4 + 10\\ 5 + 11 & 6 + 12 & 7 + 13\end{bmatrix})$T
( $\begin{bmatrix} 10 & 12 & 14\\ 16 & 18 & 20 \end{bmatrix}$)T
$\begin{bmatrix} 10 & 16\\ 12 & 18\\ 14 & 20 \end{bmatrix}$
RHS = $(\begin{bmatrix} 2 & 3 & 4\\ 5 & 6 & 7 \end{bmatrix})^{T} + (\begin{bmatrix} 8 & 9 & 10\\ 11 & 12 & 13 \end{bmatrix})^{T}$
= ($\begin{bmatrix} 2 & 5\\ 3 & 6\\ 4& 7 \end{bmatrix}$) +
($\begin{bmatrix} 8 & 11\\ 9 & 12\\ 10 & 13 \end{bmatrix}$)
= ($\begin{bmatrix} 2 + 8 & 5 + 11\\ 3 + 9& 6 + 12\\ 4 + 10& 7 + 13\end{bmatrix}$)
=$\begin{bmatrix} 10 & 16\\ 12 & 18\\ 14 & 20 \end{bmatrix}$
LHS = RHS
4. The product of the transposes of two matrices in reverse order is equal to the transpose of the product of the two matrices: $(MN)^{T} = N^{T}M^{T}$.
The above property is true for any product of any number of matrices.
For example, take M = $\begin{bmatrix} 1 & 2\\ 3 & 4\\ 5 & 6 \end{bmatrix}$ (a 3 × 2 matrix) and N = $\begin{bmatrix} 7 & 8\\ 9 & 10 \end{bmatrix}$ (a 2 × 2 matrix), so that the product MN is defined.
LHS = (MN)T = $(\begin{bmatrix} 1 & 2\\ 3 & 4\\ 5 & 6 \end{bmatrix} \times \begin{bmatrix} 7 & 8\\ 9 & 10 \end{bmatrix})^{T}$
= $(\begin{bmatrix} 1 \times 7 + 2 \times 9 & 1 \times 8 + 2 \times 10\\ 3 \times 7 + 4 \times 9 & 3 \times 8 + 4 \times 10\\ 5 \times 7 + 6 \times 9 & 5 \times 8 + 6 \times 10 \end{bmatrix})^{T}$
= $(\begin{bmatrix} 25 & 28\\ 57 & 64\\ 89 & 100 \end{bmatrix})^{T}$
= $\begin{bmatrix} 25 & 57 & 89\\ 28 & 64 & 100 \end{bmatrix}$
RHS = NT MT = $(\begin{bmatrix} 7 & 8\\ 9 & 10 \end{bmatrix})^{T} \times (\begin{bmatrix} 1 & 2\\ 3 & 4\\ 5 & 6 \end{bmatrix})^{T}$
= $\begin{bmatrix} 7 & 9\\ 8 & 10 \end{bmatrix} \times \begin{bmatrix} 1 & 3 & 5\\ 2 & 4 & 6 \end{bmatrix}$
= $\begin{bmatrix} 7 \times 1 + 9 \times 2 & 7 \times 3 + 9 \times 4 & 7 \times 5 + 9 \times 6\\ 8 \times 1 + 10 \times 2 & 8 \times 3 + 10 \times 4 & 8 \times 5 + 10 \times 6 \end{bmatrix}$
= $\begin{bmatrix} 25 & 57 & 89\\ 28 & 64 & 100 \end{bmatrix}$
LHS = RHS
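The four properties above can also be checked mechanically. A minimal sketch in plain Python (no libraries assumed), using small helper functions for transpose, addition, scaling, and matrix multiplication:

```python
def transpose(M):
    """Swap rows and columns: the entry at (i, j) moves to (j, i)."""
    return [list(row) for row in zip(*M)]

def matmul(A, B):
    """Standard matrix product; the column count of A must match the row count of B."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def matadd(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale(c, M):
    return [[c * x for x in row] for row in M]

M = [[2, 3, 4], [5, 6, 7]]      # 2 x 3
N = [[8, 9, 10], [11, 12, 13]]  # 2 x 3, same shape as M so M + N is defined
P = [[1, 2], [3, 4], [5, 6]]    # 3 x 2, so the product M P is defined

# Property 1: (M^T)^T = M
assert transpose(transpose(M)) == M
# Property 2: (aM)^T = a M^T
assert transpose(scale(2, M)) == scale(2, transpose(M))
# Property 3: (M + N)^T = M^T + N^T
assert transpose(matadd(M, N)) == matadd(transpose(M), transpose(N))
# Property 4: (M P)^T = P^T M^T  (note the reversed order)
assert transpose(matmul(M, P)) == matmul(transpose(P), transpose(M))
print("all four transpose properties hold")
```

The assertions mirror the worked examples above; changing the entries of M, N, or P leaves them all true, since the properties hold for arbitrary conformable matrices.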
|
# Changes between Version 4 and Version 5 of TracInstall
Ignore:
Timestamp:
Apr 18, 2013 10:02:31 AM (9 years ago)
Comment:
--
### Legend:
Unmodified Added Removed
= Trac Installation Guide for 1.0 =
[[TracGuideToc]]

Trac is written in the Python programming language and needs a database, [http://sqlite.org/ SQLite], [http://www.postgresql.org/ PostgreSQL], or [http://mysql.com/ MySQL]. For HTML rendering, Trac uses the [http://genshi.edgewall.org Genshi] templating system.

Since version 0.12, Trac can also be localized, and there's probably a translation available for your language. If you want to be able to use the Trac interface in other languages, then make sure you have installed the optional package [#OtherPythonPackages Babel]. Pay attention to the extra steps for localization support in the [#InstallingTrac Installing Trac] section below. Lacking Babel, you will only get the default English version, as usual. If you're interested in contributing new translations for other languages or enhancing the existing translations, then please have a look at [[trac:TracL10N]].

What follows are generic instructions for installing and setting up Trac and its requirements. While you may find instructions for installing Trac on specific systems at [trac:TracInstallPlatforms TracInstallPlatforms] on the main Trac site, please be sure to '''first read through these general instructions''' to get a good understanding of the tasks involved.

[[PageOutline(2-3,Installation Steps,inline)]]

To install Trac, the following software packages must be installed:
 * [http://www.python.org/ Python], version >= 2.5 and < 3.0 (note that support for Python 2.4 was dropped in this release)
 * [http://peak.telecommunity.com/DevCenter/setuptools setuptools], version >= 0.6, or better yet, [http://pypi.python.org/pypi/distribute distribute]
 * [http://genshi.edgewall.org/wiki/Download Genshi], version >= 0.6 (the unreleased version 0.7dev should work as well)

You also need a database system and the corresponding Python bindings.

==== For the SQLite database ==== #ForSQLite
As you must be using Python 2.5, 2.6 or 2.7, you already have the SQLite database bindings bundled with the standard distribution of Python (the sqlite3 module). However, if you'd like, you can download the latest version of [[trac:PySqlite]] from [http://code.google.com/p/pysqlite/downloads/list google code], where you'll find the Windows installers or the tar.gz archive for building from source:
{{{
$ tar xvfz .tar.gz
}}}
This will extract the SQLite code and build the bindings. To build, your system may require the development headers; without these you will get various GCC-related errors when attempting to build:
{{{
$ apt-get install libsqlite3-dev
}}}
SQLite 2.x is no longer supported. A known bug in PySqlite versions 2.5.2-4 prohibits upgrade of Trac databases from 0.11.x to 0.12. Please use versions 2.5.5 and newer or 2.5.1 and older. See #9434 for more detail, and additional information in [trac:PySqlite PySqlite].

==== Subversion ====
 * [http://subversion.apache.org/ Subversion], 1.5.x or 1.6.x and the '''''corresponding''''' Python bindings. Older versions starting from 1.0, like 1.2.4, 1.3.2 or 1.4.2, should still work. For troubleshooting information, check the [trac:TracSubversion#Troubleshooting TracSubversion] page. There are [http://subversion.apache.org/packages.html pre-compiled SWIG bindings] available for various platforms.

Note that Trac '''doesn't''' use [http://pysvn.tigris.org/ PySVN], nor does it work yet with the newer ctype-style bindings. '''Please note:''' if using Subversion, Trac must be installed on the '''same machine'''. Remote repositories are currently [trac:ticket:493 not supported].

==== Others ====
Support for other version control systems is provided via third parties. See [trac:PluginList] and [trac:VersionControlSystem].

==== Web Server ====
Alternatively you can configure Trac to run in any of the following environments:
 * [http://httpd.apache.org/ Apache] with
   - [http://code.google.com/p/modwsgi/ mod_wsgi], see [wiki:TracModWSGI] and http://code.google.com/p/modwsgi/wiki/IntegrationWithTrac
   - [http://modpython.org/ mod_python 3.3.1], deprecated: see TracModPython
 * a [http://www.fastcgi.com/ FastCGI]-capable web server (see TracFastCgi)
 * an [http://tomcat.apache.org/connectors-doc/ajp/ajpv13a.html AJP]-capable web server (see [trac:TracOnWindowsIisAjp TracOnWindowsIisAjp])
 * a CGI-capable web server (see TracCgi), '''but usage of Trac as a CGI script is highly discouraged''', better use one of the previous options.

==== Other Python Packages ====
 * [http://babel.edgewall.org Babel], version >= 0.9.5, needed for localization support (the unreleased version 1.0dev should work as well)
 * [http://docutils.sourceforge.net/ docutils], version >= 0.3.9 for WikiRestructuredText.

== Installing Trac == #InstallingTrac
A few examples:
 - install Trac 1.0:
{{{
easy_install Trac==1.0
}}}
 - install the latest development version 1.0dev:
{{{
easy_install Trac==dev
}}}
Note that in the development case you won't have the possibility to run a localized version of Trac; either use a released version or install from source. For upgrades, reading the TracUpgrade page is mandatory, of course.

=== Using pip ===
'pip' is an easy_install replacement that is very useful to quickly install Python packages. To get a Trac installation up and running in less than 5 minutes, assuming you want to have your entire pip installation in /opt/user/trac:
{{{
pip -E /opt/user/trac install trac psycopg2
}}}
or
{{{
pip -E /opt/user/trac install trac mysql-python
}}}
Make sure your OS-specific headers are available for pip to automatically build PostgreSQL (libpq-dev) or MySQL (libmysqlclient-dev) bindings. pip will automatically resolve all dependencies (like Genshi, Pygments, etc.), download the latest packages from pypi.python.org, and create a self-contained installation in /opt/user/trac. All commands (tracd, trac-admin) are available in /opt/user/trac/bin. This can also be leveraged for mod_python (using the !PythonHandler directive) and mod_wsgi (using the WSGIDaemonProcess directive). Additionally, you can install several Trac plugins (listed [http://pypi.python.org/pypi?:action=search&term=trac&submit=search here]) through pip.

=== From source ===
If you want more control, you can download the source in archive form, or do a checkout from one of the official [[trac:TracRepositories|source code repositories]]. Be sure to have the prerequisites already installed. You can also obtain the Genshi and Babel source packages from http://www.edgewall.org and follow a similar installation procedure for them, or you can just easy_install those. Once you've unpacked the Trac archive or performed the checkout, move to the top-level folder and do:
{{{
$ python ./setup.py install
}}}
''You'll need root permissions or equivalent for this step.'' This will byte-compile the Python source code and install it as an .egg file or folder in the site-packages directory.

=== Advanced Options ===
==== Custom location with easy_install ====
To install Trac to a custom location, or find out about other advanced installation options, consult the easy_install options. On Mac OS X, for example, this places your tracd and trac-admin commands into /usr/local/bin and installs the Trac libraries and dependencies into /Library/Python/2.5/site-packages, which is Apple's preferred location for third-party Python application installations.
|
# About gravity through space time curvature
Is it possible to produce virtual gravity? I mean gravity without the help of mass by curving spacetime with other effects like fast rotating objects?
So... By "without the help of mass", do you mean mass-less..? Hmm... ;-) – Waffle's Crazy Peanut Apr 10 '13 at 15:52
yeah, is it possible? – newera Apr 10 '13 at 16:02
You can have gravitational waves produced by appropriately changing mass distributions, but I don't see compatibility between your example with "fast rotating objects" and your criterion "without the help of mass". You can also have electromagnetic sources of gravity - electromagnetic radiation has no mass. – twistor59 Apr 10 '13 at 16:04
@twistor59 fast rotating objects can be quantum objects, like rotating photon or anything.. it is not compulsion that every object has mass – newera Apr 10 '13 at 16:05
Oh I see - I'm not sure I'd call a photon an "object". "object" for me conjures up a picture of fermionic matter. – twistor59 Apr 10 '13 at 16:07
The source of the curvature that leads to gravitation is an object called the stress-energy tensor. This does include mass, or more precisely energy, but it also includes other sources for gravity such as momentum flow, shear stress and pressure. It's been suggested there could be objects called geons where the energy of the gravitational field acts as a source for the gravitational field, so no mass is present, but these are still hypothetical.
A quick edit in the light of twistor59's comment: massless objects like photons can generate a gravitational field because mass and energy are related by the famous equation $E = mc^2$ so from a gravitational point of view energy behaves like mass. In fact the stress-energy tensor includes just one entry for combined mass and energy.
But I would guess you are asking if we can generate gravity from sources that don't appear in the stress-energy tensor. According to General Relativity the answer is no, though over the years various people have claimed to see effects. For example, Eugene Podkletnov has claimed to observe gravitational effects from rotating superconductors. Also a group at the University of Alabama claims to have seen gravitational effects from superconductors. So far neither of these effects has been reproduced by other scientists.
So I think the answer to you question is probably no, depending on what exactly you mean by "without the help of mass".
I had the opposite answer in a comment, but I assumed that massless sources satisfied what he was looking for. That would include null fluids. – twistor59 Apr 10 '13 at 16:16
:-) I guess it's a matter of what you interpret as without mass in the question. – John Rennie Apr 10 '13 at 16:18
Can answers exist in superposition? – twistor59 Apr 10 '13 at 16:19
@twistor59: yes, unless they are incoherent. – John Rennie Apr 10 '13 at 16:21
:-) this banter is illegal and will be deleted – twistor59 Apr 10 '13 at 16:22
I will have another approach to your question. I guess you are asking about "fast rotating objects" and whether they can duplicate the effects of gravity or not. Well regarding the curvature of spacetime then the answer is clearly no. I don't see a relation between fast rotating object and creating actual gravity. But you said virtual gravity. I will assume you mean by virtual artificial. You can in fact duplicate the effects of gravity by fast rotating objects.
According to the Equivalence principle gravity and acceleration are the same thing. In short, if you were in an elevator accelerating downward with an acceleration equivalent to (g = 9.81 m/s^2), you will feel the elevator floor parting you, and experience the same effect as zero gravity. While in an elevator in space, not under any effect from earth's gravity accelerating towards you feet (upwards), with the acceleration (g = 9.81 m/s^2). You will experience the same effects of the earth's gravity. Bottom line, you can duplicate the effects of gravity by artificial means. Check out this article by wikipedia: http://en.wikipedia.org/wiki/Artificial_gravity
One possible way to duplicate gravity is by centrifugal force. This is needed in spacecraft that will stay a long time in space, to overcome the negative effects of zero gravity on astronauts' health. The centrifugal effect can be achieved by rotating the spacecraft, since centrifugal force can be described as an outward force that draws a rotating body outwards, and where there's force there's acceleration. Gravity's effects can be duplicated, but that's only an effect and not actual gravity that bends spacetime according to GR.
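As a back-of-the-envelope sketch (the habitat radius below is an assumed, illustrative value, not from the answer), the spin rate needed for the centripetal acceleration omega^2 * r to match one Earth gravity can be computed like this:

```python
import math

# Spin rate for a rotating habitat whose centripetal acceleration
# omega^2 * r equals one Earth gravity. The radius is illustrative.
g = 9.81    # m/s^2, Earth surface gravity
r = 100.0   # m, assumed habitat radius

omega = math.sqrt(g / r)             # angular speed in rad/s
rpm = omega * 60.0 / (2 * math.pi)   # revolutions per minute
print(round(rpm, 2))
```

A 100 m habitat would need roughly three revolutions per minute; larger radii need slower spins, which is why designs favor large rings.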
On the other hand, the rotation of the earth around its axis is not the cause of its gravity. The gravitational pull you feel is due to the earth's mass and your mass. If your mass "magically" became zero, gravity would have no effect on you, even though it would still be there affecting everything around you. That is of course hypothetical and impossible to occur. Where there's matter, there's mass, and where there's mass there's gravity.
-
While this is the best we've come up with for space travel so far, it's not quite the same as standing on a planet. Try throwing a ball to someone standing a little way around the curve. It won't behave as expected (see the Coriolis effect) – Basic Apr 15 at 12:18
Wheeler worked on a program a while back that he called "mass without mass", and the general idea was to have something that was localized and from far away looked like it had mass but if you try to find out where the mass is, there is no place, it's all vacuum everywhere.
And there are two general tricks he used. One is like having gravitational waves mutually orbit each other: from far away we see the curvature of the net effect, but inside it's just waves of vacuum spacetime. I think he called them geons, but it's tough to get them to orbit each other forever, so it could just be a temporary effect.
The other trick was nontrivial topology. If you've seen those pictures that look like a funnel on the top and an upside-down funnel on the bottom (an Einstein-Rosen bridge), then each funnel connects to a different universe (or a different part of the same universe). Outside the throat it looks like there is a mass there, but there isn't a mass, just a "path" to another (part of the) universe. What's really misleading about those pictures is that they only show you the space part of spacetime, so those paths are like the road as a whole, as opposed to the road you actually walk (where the parts near your start are at earlier times than the parts near your destination). If you tried to walk that path through the wormhole, you couldn't actually get to the other side: a vacuum spacetime doesn't allow you to travel (through time) through the narrow part of the throat. Even light can't get through.
-
Classically, gravity requires mass..! But as a consequence of mass-energy equivalence ($mc^2$), energetic objects can curve spacetime too. Either way, we do take mass or energy into account. This includes photons as well, but due to their negligible energy ($h\nu$), the curvature they produce is very small and can be approximated as no curvature at all.
We can put this conclusion, based on your question, simply in an understandable way: there's no spacetime curvature in the absence of mass(-energy)..! In your case (in my point of view), fast-rotating objects do curve spacetime (for example, celestial objects).
That is because of the way you define objects (visible entities) to be rotating: objects have mass. If you assume the object to be massless, then it's not an object; it's simply something that is not massive. And massless particles can't travel below $c$, hence they're not visible at all.
In any case, it is not an object..!
-
|
# Perimeter and Area of a Rhombus – Formulas and Examples
The perimeter of a rhombus represents the length of its outline. On the other hand, the area of the rhombus is a measure of the space occupied by the rhombus in two-dimensional space. The perimeter of a rhombus can be calculated using the formula p = 4l, where l is the length of a side, and its area can be calculated using the formula A = bh, where b is the base and h is its height.
In this article, we will learn about the perimeter and area of a rhombus. We will explore the different formulas that we can use, and we will apply them to solve some practice problems.
##### GEOMETRY
Relevant for
Learning about the perimeter and area of a rhombus.
See examples
## How to find the perimeter of a rhombus?
To calculate the perimeter of a rhombus, we have to add the lengths of all its sides. Since a rhombus is a quadrilateral with four equal sides, the formula for the perimeter of a rhombus can be written as:
$latex p=4l$
where,
• p is the perimeter of the rhombus
• l is the length of one of the sides of the rhombus
This means that to calculate the perimeter of a rhombus, we only need to know the length of one of its sides.
## How to find the area of a rhombus?
The area of a rhombus can be calculated using three different methods depending on the information available to us. We can use its diagonals, we can use its base and height, and we can use trigonometry.
### Calculate the area of the rhombus using diagonals
We can calculate the area of a rhombus when we know the length of its diagonals by using the following formula:
$latex A=\frac{d_{1}\times d_{2}}{2}$
where,
• $latex d_{1}=$ length of diagonal 1
• $latex d_{2}=$ length of diagonal 2
• $latex A=$ area of rhombus
#### Proof of the formula for the area of a rhombus
We can prove the formula for the area of a rhombus using the following diagram:
Point O is the point of intersection of the two diagonals of the rhombus. Therefore, the area of the rhombus will be:
$latex A=4\times\text{area of }\Delta AOB$
$latex =4\times(\frac{1}{2})\times AO \times OB$
$latex =4\times(\frac{1}{2})\times(\frac{1}{2})d_{1}\times(\frac{1}{2})d_{2}$
$latex =4\times(\frac{1}{8})d_{1}d_{2}$
$latex =\frac{1}{2}d_{1}d_{2}$
### Calculate the area of the rhombus using the base and height
When we know the length of the base and the length of the height of the rhombus, we can use the following formula to calculate its area:
$latex A=bh$
where,
• $latex b=$ length of any side of the rhombus
• $latex h=$ height
• $latex A=$ area of rhombus
### Calculate the area of the rhombus using trigonometry
We can use trigonometry to calculate the area of the rhombus when we know the length of a side and the measure of one internal angle of the rhombus. In that case, we use the following formula:
$latex A={{b}^2}\times \sin(a)$
where,
• $latex b=$ length of any side of the rhombus
• $latex a=$ internal angle
• $latex A=$ area of rhombus
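The formulas above translate directly into code. The following Python sketch (function names are mine, not from the article) implements the perimeter formula and the three area formulas:

```python
import math

def perimeter(l):
    """Perimeter of a rhombus with side length l: p = 4l."""
    return 4 * l

def area_from_diagonals(d1, d2):
    """Area from the diagonals: A = (d1 * d2) / 2."""
    return d1 * d2 / 2

def area_from_base_height(b, h):
    """Area from base and height: A = b * h."""
    return b * h

def area_from_angle(b, angle_deg):
    """Area from a side and an internal angle: A = b^2 * sin(a)."""
    return b ** 2 * math.sin(math.radians(angle_deg))

print(perimeter(7))                 # 28, as in Example 1 below
print(area_from_diagonals(8, 10))   # 40.0, as in Example 2 below
```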
## Perimeter and area of a rhombus – Examples with answers
The formulas for the perimeter and area of a rhombus are applied to solve the following examples. Each example has its solution, but try to solve the problems yourself before looking at the answer.
### EXAMPLE 1
What is the perimeter of a rhombus that has sides with a length of 7 inches?
The sides of the rhombus have a length of 7 inches. Therefore, we use that length in the formula for the perimeter:
$latex p=4l$
$latex p=4(7)$
$latex p=28$
The perimeter of the rhombus is equal to 28 inches.
### EXAMPLE 2
What is the area of a rhombus that has diagonals with lengths of 8 inches and 10 inches?
We have the following lengths:
• Diagonal 1, $latex d_{1}=8$ in
• Diagonal 2, $latex d_{2}=10$ in
Using the area formula with these values, we have:
$latex A=\frac{d_{1}\times d_{2}}{2}$
$latex =\frac{8\times 10}{2}$
$latex =\frac{80}{2}$
$latex A=40$
Therefore, the area of the rhombus is equal to 40 in².
### EXAMPLE 3
Find the perimeter of a rhombus that has sides with a length of 12 ft.
Using the formula for the perimeter with the given length, we have:
$latex p=4l$
$latex p=4(12)$
$latex p=48$
The perimeter of the rhombus is equal to 48 ft.
### EXAMPLE 4
Find the area of a rhombus that has diagonals with lengths of 10 feet and 12 feet.
We have the following lengths:
• Diagonal 1, $latex d_{1}=10$ ft
• Diagonal 2, $latex d_{2}=12$ ft
Using these lengths in the formula for the area, we have:
$latex A=\frac{d_{1}\times d_{2}}{2}$
$latex =\frac{10\times 12}{2}$
$latex =\frac{120}{2}$
$latex A=60$
Therefore, the area of the rhombus is equal to 60 ft².
### EXAMPLE 5
Find the perimeter of a rhombus that has sides with a length of 15 yards.
We apply the formula for the perimeter with the given length:
$latex p=4l$
$latex p=4(15)$
$latex p=60$
The perimeter of the rhombus is equal to 60 yd.
### EXAMPLE 6
Find the area of a rhombus that has a base of 8 feet and a height of 6 feet.
We have the following lengths:
• Base, $latex b=8$ ft
• Height, $latex h=6$ ft
Applying the formula for the area with the given information, we have:
$latex A=bh$
$latex =(8)(6)$
$latex A=48$
Therefore, the area of the rhombus is equal to 48 ft².
### EXAMPLE 7
What is the length of the sides of a rhombus that has a perimeter equal to 36 yards?
In this example, we know the perimeter, and we want to find the length of one of the sides of the rhombus. Therefore, we use the formula for the perimeter and solve for l:
$latex p=4l$
$latex 36=4l$
$latex l=9$
The length of one side of the rhombus is 9 yd.
### EXAMPLE 8
Find the area of a rhombus that has sides with a length of 10 feet and an internal angle of 60°.
We have the following information:
• Side, $latex b=10$ ft
• Angle, $latex a=60°$
Using the formula for the area of a rhombus with this information, we have:
$latex A={{b}^2}\times \sin(60°)$
$latex ={{10}^2}\times 0.866$
$latex =100\times 0.866$
$latex A=86.6$
Therefore, the area of the rhombus is equal to 86.6 ft².
### EXAMPLE 9
Find the length of the sides of a rhombus that has a perimeter equal to 68 inches.
We use the formula for the perimeter of a rhombus and solve for l:
$latex p=4l$
$latex 68=4l$
$latex l=17$
The length of one of the sides of the rhombus is equal to 17 in.
### EXAMPLE 10
Find the area of a rhombus that has a base 5.5 inches long and a height 6.5 inches long.
We have the following:
• Base, $latex b=5.5$ in
• Height, $latex h=6.5$ in
When we apply the formula for the area, we have:
$latex A=bh$
$latex =(5.5)(6.5)$
$latex A=35.75$
Therefore, the area of the rhombus is equal to 35.75 in².
## Perimeter and area of a rhombus – Practice problems
Solve the following problems using the formulas for the perimeter and area of a rhombus. If you have trouble with these problems, you can look at the worked examples above.
|
Proving that a generic variety with ample canonical bundle has no automorphisms
Let $X$ be a smooth projective connected variety over the complex numbers with ample canonical bundle. If $X$ is generic and $\dim X \leq1$, the automorphism group of $X$ is trivial, see for instance
Why is a general curve automorphism-free?
This question is about generalizing this to arbitrary dimension. Let me be more precise.
Suppose that $X$ is "generic". Is the automorphism group of $X$ trivial?
This is probably true, and there are three approaches to this sketched in the above MO question. The first two might not be feasible.
1. Use deformation theory, i.e., compute the tangent space at the moduli space, and use Lefschetz trace formula. Can somebody make this more precise in this case?
2. Count parameters using Riemann-Hurwitz. This is going to be problematic in the higher-dimensional case, even though there is a Riemann-Hurwitz formula, I am not sure the dimension of the moduli space is explicitly known (as opposed to the one-dimensional case where it equals $3g-3$).
3. Exhibit an $X$ as above with trivial automorphism group for any possible Hilbert polynomial. In fact, the order of the automorphism group of $X$ is bounded (even explicitly) by a constant depending only on the Hilbert polynomial of $X$.
I think 3 is the most promising, but this would require me to come up with the following.
Let $h$ be the Hilbert polynomial of $X$. Then there exists a smooth projective connected variety $Y$ with ample canonical bundle and Hilbert polynomial of the canonical bundle equal to $h$ such that Aut$(Y)$ is trivial.
So my problem is to do this for every occurring Hilbert polynomial. Of course, writing down varieties $X$ as above with no automorphisms is not so difficult.
-
You want to do this for every possible family? That is very hard to imagine. – Angelo May 12 '13 at 7:57
There exist varieties of general type with non-trivial automorphism groups and no deformations... – ulrich May 12 '13 at 9:07
I meant this: it is very hard to imagine this might be true, but if it were, it would probably be extremely difficult to prove. In any case, it seems that you are being given counterexamples. – Angelo May 12 '13 at 9:42
@Francesco: One does not need to consider fake projective planes. Any cocompact quotient of the complex 2-ball is rigid (by a theorem of Calabi and Vesentini) and of general type (by a theorem of Kodaira) so one can consider any non-trivial finite Galois cover of such a quotient. – ulrich May 12 '13 at 14:46
Don't curves of genus 2 already give a counterexample in dimension 1? – Tom Graber May 13 '13 at 23:46
It seems to me that this is not true and that a counterexample can be constructed as follows.
Take a double cover $\alpha \colon X \longrightarrow A$ of an abelian surface $A$, branched over a smooth divisor $B \in |2 L|$, with $L$ very ample. We have $$K_X=\alpha^* L, \quad \alpha_* \omega_X = \omega_A \oplus \omega_A (L),$$ hence $$K_X^2 = 2L^2, \quad p_g(X) = 1+ h^0(A, L), \quad q(X)=2.$$ Then $X$ is a smooth surface of general type. Moreover, since $\alpha$ is a finite map, $X$ does not contract any curve; in particular, $K_X$ is ample.
We have $q(X)=2$, so $\textrm{Alb}(X)$ is an abelian surface and, by the universal property of the Albanese map, the morphism $\alpha$ factors through $a \colon X \longrightarrow \textrm{Alb}(X)$.
But then, since $\deg \alpha =2$ and $X$ is not an abelian surface, it follows that the isogeny $\textrm{Alb}(X) \to A$ must be an isomorphism. Then the morphism $\alpha \colon X \longrightarrow A$ coincides with the Albanese map of $X$.
By a result of Catanese (see A superficial working guide to deformations and moduli, arXiv:1106.1368, Section 5) the degree of the Albanese map is a topological invariant. It follows that any deformation of $X$ is still a double cover of its Albanese variety.
Thus any surface $Y$ lying in the same connected component of the moduli space containing $X$ has non-trivial automorphism group, because the Albanese double cover induces a non-trivial involution $\iota \colon Y \to Y$, and so $\mathbf{Z} /2 \mathbf{Z} \subset \textrm{Aut}(Y)$.
-
I like this counterexample a lot. I do have some questions. 1. Is it really possible that $X$ is nonsingular. I thought $X$ would have cyclic quotient singularities above the singular locus of the branch locus. Or maybe this is where you use "degree two"? 2. This is not so important, but how do you show that your double cover really equals the albanese map. Is it not possibly necessary to compose with an isogeny of A to really get the albanese map? 3. Can you give rigid examples of your $X$, and examples where $X$ has infinitely many deformations? Thanks again! – Jonathan May 12 '13 at 10:19
@Jonathan: 1. If you take the branch locus in a very ample linear system, by Bertini theorem the general element of the system will be smooth, hence the general cover will be smooth. 2. By the universal property of the Albanese map, the cover $\alpha \colon X \to A$ factors through the Albanese morphism $X \to \textrm{Alb}(X)$. Since $\deg \alpha =2$ and $X$ is of general type, the isogeny $\textrm{Alb}(X) \to A$ must be an isomorphism. 3. $X$ has non trivial deformations coming from the deformations of $A$ (and from those of the branch locus). – Francesco Polizzi May 12 '13 at 21:34
I have added some further details – Francesco Polizzi May 13 '13 at 13:09
thank you very much. – Jonathan May 16 '13 at 7:37
As Francesco points out, the claim is false when $\dim X>1$. The question is discussed in [B. Fantechi, R. Pardini, Automorphisms and moduli spaces of varieties with ample canonical class via deformations of abelian covers, Comm. Algebra 25 (1997), 1413-1441. math.AG/9410006] (see in particular Thm. 6.6).
-
|
# Square Inches to Square Meters Conversion
The square inches to square meters (in2 to m2) conversion factor is 0.00064516. To find out how many square meters there are in a given number of square inches, multiply the square inch value by the conversion factor, or use the converter.
1 Square Inch = 0.00064516 Square Meter
There are 0.00064516 square meters in a square inch, because one inch is 0.0254 meters and the area of a square is calculated by multiplying a side by itself, which makes 0.0254 * 0.0254 = 0.00064516 square meters in a square inch.
For example, to find out how many sq. meters there are in 5000 sq. inches, as a formula to convert from sq. inches to sq. meters multiply the sq. inch value by 0.00064516, that makes 5000 in2 * 0.00064516 = 3.2258 sq. meters in 5000 sq. inches.
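In code, the conversion is a single multiplication or division. This small Python sketch reproduces the example above:

```python
# Square meters per square inch: (0.0254 m)^2 = 0.00064516
SQM_PER_SQIN = 0.00064516

def sqin_to_sqm(sq_inches):
    """Convert square inches to square meters."""
    return sq_inches * SQM_PER_SQIN

def sqm_to_sqin(sq_meters):
    """Convert square meters to square inches."""
    return sq_meters / SQM_PER_SQIN

print(sqin_to_sqm(5000))  # ≈ 3.2258 square meters
```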
The square inch is an imperial and US customary area unit, equal to 0.00064516 sq. meters, 0.00694 sq. feet and 6.4516 sq. centimeters. The abbreviation is "in2".
The square meter is a metric area unit, equal to about 1550 sq. inches and 10.7639104 sq. feet. The abbreviation is "m2".
Converter
Enter a sq. inch value to convert into sq. meters and click on the "convert" button.
Create Custom Conversion Table
|
# Continuity proof
• February 1st 2008, 08:31 AM
WWTL@WHL
Continuity proof
Hi, I'm stuck on the following question. Any help will be fantastic, thanks.
Question:
$f:\Re \to \Re$ is a continuous function. Prove that if for some $c \in \Re$, $f(c)>0$, then there exists a $\delta > 0$ such that $\forall x \in ( c - \delta, c + \delta)$, $f(x) > 0$
__________________________________________________ _____________
• February 1st 2008, 08:52 AM
Plato
$f(c) > 0\quad \Rightarrow \quad \exists \delta > 0\left[ {\left| {x - c} \right| < \delta } \right]\quad \Rightarrow \quad \left| {f(x) - f(c)} \right| < \frac{{f(c)}}{2}$
Expand the last inequality. Add f(c) to all three parts. See what you get.
• February 1st 2008, 10:05 AM
WWTL@WHL
Quote:
Originally Posted by Plato
$f(c) > 0\quad \Rightarrow \quad \exists \delta > 0\left[ {\left| {x - c} \right| < \delta } \right]\quad \Rightarrow \quad \left| {f(x) - f(c)} \right| < \frac{{f(c)}}{2}$
Expand the last inequality. Add f(c) to all three parts. See what you get.
Thanks for the reply, Plato! :D
$f(x) - f(c) < \frac{f(c)}{2}$ if $f(x)-f(c) > 0$
and
$f(c) - f(x) < \frac{f(c)}{2}$ if $f(x)-f(c) < 0$
So $f(x) < \frac{3}{2} f(c)$ and $f(x) > \frac{1}{2} f(c)$ ...the latter of which shows f(x) is greater than 0 since f(c)>0
Is this what you meant? I don't see where the third inequality comes from so I don't think I've followed your advice properly.
Also, if a function is continuous at c, by definition, $\forall \epsilon > 0$, there exists a $\delta > 0$ s.t. $\forall x \in \Re$ such that $|x-c|< \delta$, $|f(x) - f(c)|< \epsilon$.
As far as I can see, you've let $\epsilon = \frac{f(c)}{2}$ which is fixed. So I don't see how this covers the 'for all epsilon' bit. I'm probably being very dull here though. :o
EDIT: Thinking about this again, I think you might've been suggesting a proof by contradiction - which I totally missed. If you let f(x) < 0, then we have a contradiction. Is that what you wanted me to do?
• February 1st 2008, 10:50 AM
Plato
First of all, there is absolutely no reason why we should do this for “all epsilon”.
The problem asks us to show that f is positive in some neighborhood of c.
Use the definition of continuity with $\varepsilon = \frac{{f(c)}}{2}> 0$.
We know that $\exists \delta > 0\left[ {\left| {x - c} \right| < \delta } \right]\quad \Rightarrow \quad \left| {f(x) - f(c)} \right| < \frac{{f(c)} }{2}$
We also know that $\left| {x - c} \right| < \delta \quad \Leftrightarrow \quad x \in (c - \delta ,c + \delta )$ which is a neighborhood of c. For any x there we have
$\begin{gathered} \left| {f(x) - f(c)} \right| < \frac{{f(c)}}{2} \hfill \\ - \frac{{f(c)}}{2} < f(x) - f(c) < \frac{{f(c)}}{2} \hfill \\ 0 < \frac{{f(c)}}{2} < f(x) < \frac{{3f(c)}}{2} \hfill \\ \end{gathered}$
So the function is positive in that neighborhood of c.
• February 1st 2008, 11:13 AM
WWTL@WHL
Excellent. Thanks so much, Plato. :)
I was focusing on the definition of continuity more than the question, me thinks.
|
# prove that a manifold $M$ can be covered by a countable collection of neighbourhoods each diffeomorphic to an open subset of $\mathbb R^m$
I want to prove
A manifold $M$ can be covered by a countable collection of neighbourhoods each diffeomorphic to an open subset of $\mathbb R^m$.
By definition we have for each $x \in M$ an open neighbourhood diffeomorphic to an open subset of $\mathbb R^m$. But I don't understand why they have to be countable. Here a manifold is subset of Euclidean space.
A subset $M$ of $\mathbb R^n$ is a $k$-dimensional manifold if for every $x\in M$ there are open sets $U,V \subset \mathbb R^n$ with $x \in U$ and a diffeomorphism $f:U\rightarrow V$ such that $f(U \cap M)=V \cap(\mathbb R^k \times 0)$.
• What is your definition of a manifold? Jul 5 '18 at 16:01
• Presumably, you have to use second-countability of the Euclidean space your manifold is embedded into. Jul 5 '18 at 16:13
• If we assume that then there is nothing left to prove. I just learned that manifold is also defined such that it is second countable. But the definitions (the one I provided and this) should be equivalent and hence I am now concerned with proving the equivalence at least one way. Jul 5 '18 at 16:28
## 1 Answer
As you state that manifolds are subspaces of some $\mathbb{R}^n$, they are second countable, and in particular Lindelöf. This implies that the cover of $M$ by "open neighbourhoods of $M$ that are diffeomorphic to an open subset of $\mathbb{R}^k$" has a countable subcover. Done.
|
## Faculty Scholarship
Article
Mathematics
2005
#### Abstract
For a graph G, an L(2,1)-labeling of G with span k is a mapping $L$ from the vertices of $G$ to $\{0, 1, 2, \ldots, k\}$ such that adjacent vertices are assigned integers which differ by at least 2, vertices at distance two are assigned integers which differ by at least 1, and the image of L includes 0 and k. The minimum span over all L(2,1)-labelings of G is denoted $\lambda(G)$, and each L(2,1)-labeling with span $\lambda(G)$ is called a $\lambda$-labeling. For $h \in \{1, \ldots, k-1\}$, h is a hole of $L$ if and only if h is not in the image of L. The minimum number of holes over all $\lambda$-labelings is denoted $\rho(G)$, and the minimum k for which there exists a surjective L(2,1)-labeling onto {0, 1, ..., k} is denoted $\mu(G)$. This paper extends the work of Fishburn and Roberts on $\rho$ and $\mu$ through the investigation of an equivalence relation on the set of $\lambda$-labelings with $\rho$ holes. In particular, we establish that $\rho \leq \Delta$. We analyze the structure of those graphs for which $\rho \in \{ \Delta-1, \Delta \}$, and we show that $\mu = \lambda + 1$ whenever $\lambda$ is less than the order of the graph. Finally, we give constructions of connected graphs with $\rho = \Delta$ and order $t(\Delta + 1)$, $1 \leq t \leq \Delta$.
|
# Verification of “Prove/Disprove that the language $L = \{ a^kba^{2k}ba^{3k} | k \geq 0\}$ is context free.”
I attempt to show that the language $$L = \{ a^kba^{2k}ba^{3k} | k \geq 0\}$$ is not context free by applying the Pumping lemma for context-free languages.
This is achieved by a proof by contradiction: first assume that $$L$$ is context free, in which case sufficiently long strings in $$L$$ can be "pumped" and still produce strings inside $$L$$. By "pumping" strings in $$L$$ to produce other strings which are not contained in $$L$$, we contradict the assumption that the language $$L$$ is context free.
Progress so far:
The pumping lemma states that every string $$s$$ in $$L$$ with $$|s| \geq p$$, where $$p$$ is the pumping length, can be written in the form
$$s = uvwxy$$
with substrings $$u, v, w, x, y$$
such that
1. $$|vx| \geq 1$$
2. $$|vwx| \leq p$$
3. $$uv^nwx^ny \in L$$ for all $$n \geq 0$$
so a suitable decomposition into the substrings $$u, v, w, x, y$$ must be found.
My informal approach is to consider on a case by case basis that each decomposition fails.
case 1:
If only the letter b is pumped, then there will be more than two b's in the final string, which cannot be in L. For example:
$$u = a^k, v = b, w = a^{2k}, x = b, y = a^{3k}$$
by condition 3, $$s = uv^nwx^ny \notin L$$ for $$n = 2$$
case 2:
If only the letter a is pumped, then the distribution of the letter a in the pumped string will no longer be valid. For example:
$$u = \varnothing, v = a^k, w = ba^{2k}b, x = a^{3k}, y = \varnothing$$
case 3:
If both the letters a and b are pumped, then the order of letters will be invalid in the pumped string.
For example:
$$u = \varnothing, v = a^kb, w = a^k, x = a^kb, y = a^{3k}$$
case 4:
The case that neither the letter a nor the letter b is pumped fails because of the first condition.
In this solution I have neglected to consider both defining a pumping length $$p$$ ($$p$$ is still conceptually difficult for me and I don't know how to correctly define it) as well as the second condition of the pumping lemma.
I would be greatly appreciative for any assistance in this, as well as verifying/formalizing the above proposed solution.
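To make the case analysis concrete, here is a small Python sketch (an illustration, not part of the proof itself) with a membership test for the language; pumping only a's from the first block, as in case 2, immediately produces a string outside L:

```python
import re

def in_L(s):
    """Membership test for L = { a^k b a^(2k) b a^(3k) : k >= 0 }."""
    m = re.fullmatch(r"(a*)b(a*)b(a*)", s)
    if m is None:
        return False
    k = len(m.group(1))
    return len(m.group(2)) == 2 * k and len(m.group(3)) == 3 * k

k = 4
s = "a" * k + "b" + "a" * (2 * k) + "b" + "a" * (3 * k)
print(in_L(s))  # True

# Case 2: pump v = "a" taken from the first block (u = "", x = y = "")
u, v, w = "", "a", s[1:]
print(in_L(u + v * 2 + w))  # False: the first block now has k+1 a's
```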
|
## Abstract
The Minimum Information Required in the Annotation of Models Registry (http://www.ebi.ac.uk/miriam) provides unique, perennial and location-independent identifiers for data used in the biomedical domain. At its core is a shared catalogue of data collections, for each of which an individual namespace is created, and extensive metadata recorded. This namespace allows the generation of Uniform Resource Identifiers (URIs) to uniquely identify any record in a collection. Moreover, various services are provided to facilitate the creation and resolution of the identifiers. Since its launch in 2005, the system has evolved in terms of the structure of the identifiers provided, the software infrastructure, the number of data collections recorded, as well as the scope of the Registry itself. We describe here the new parallel identification scheme and the updated supporting software infrastructure. We also introduce the new Identifiers.org service (http://identifiers.org) that is built upon the information stored in the Registry and which provides directly resolvable identifiers, in the form of Uniform Resource Locators (URLs). The flexibility of the identification scheme and resolving system allows its use in many different fields, where unambiguous and perennial identification of data entities are necessary.
## INTRODUCTION
The size and complexity of data produced in biology has made it increasingly important to provide metadata alongside the core data itself. This metadata may comprise domain-specific information as described by minimal information ‘checklists’ meant to enable accurate data reuse or may be ontological in nature, specifying more precisely the kind of entities under consideration. Community-level collaborative bodies such as Minimum Information for Biological and Biomedical Investigations (MIBBI) (1) and Open Biomedical Ontologies (OBO) (2) exist to formalize and coordinate such efforts across the Life Sciences. The computational systems biology community developed one such checklist, entitled the Minimum Information Required in the Annotation of Models (MIRIAM) (3), in order to define the meta-information needed to ensure the re-usability of computational models of biological processes. These guidelines describe the need to unambiguously and perennially identify model components, as well as other information regarding model origin and development. When used in conjunction with standard computer-readable formats such as Systems Biology Markup Language (SBML) (4), controlled annotations facilitate not only model reuse, but also permit efficient search strategies, accurate model comparison and meaningful model conversion between different formats. Furthermore, the relevant linking of models to biological knowledge transforms them into repositories of information.
In order to provide globally unique, perennial and location-independent identifiers for data used in the biomedical domain, we developed MIRIAM Identifiers and the MIRIAM Registry (5). MIRIAM Identifiers are Uniform Resource Identifiers (URIs), which unambiguously identify a record in a data collection, independently of the specific resources distributing instances of those records. The MIRIAM Registry provides information about the different data collections and how to access instances of their records. Definitions of the terms used in subsequent descriptions are detailed in the Table 1 (definition).
Table 1.
Definitions
Data collection: A data collection gathers data of the same type (e.g. DNA, RNA or protein) and stores information regarding the same sets of ‘properties’ (e.g. sequence, references). It should make use of a well-defined internal identifier scheme. For example, the namespace ‘uniprot’ identifies a data collection whose subject is proteins, whose representation is protein sequence-centric, where each entry stores protein domain information and where the identifier scheme can be described using a specific regular expression. Similarly, ‘ec-code’ identifies a data collection that provides access to enzyme records and ‘chebi’ to an ontological representation of chemicals.
Resource: A resource is the physical location on the Web where information about a data record can be accessed. A resource provides the instances of all the records belonging to a collection. Since the record identifier is independent of physical location (URL), it may be resolved using any of the resources listed for that data collection.
Namespace: The namespace is the unique syntactic string which defines a data collection. For example, given the identifier ‘urn:miriam:ec-code:1.1.1.1’, the namespace is defined as ‘ec-code'. This precise lexical string is used in both URN and URL forms of the identifiers.
## MIRIAM IDENTIFIERS
To completely fulfil its roles, an identifier must be: (i) unique (two identifiers should not be associated with the same entity); (ii) unambiguous (an identifier must only be associated with a single entity); (iii) perennial (the same identifier should remain associated with an entity for the whole duration of its existence). In addition, an identifier should preferably also be (iv) standard compliant (for easier software support); (v) resolvable (convertible into a physical address on the World Wide Web); and (vi) free to use. MIRIAM identifiers were designed to satisfy all these criteria (5).
In order to provide a unique identifier for a record, regardless of the physical location(s) where that information can be retrieved, MIRIAM identifiers are composed of three parts. The first part is a prefix, dependent on the scheme used (see below) and that specifies ‘this is a MIRIAM URI’. The second part is the namespace that identifies the data collection. The third and final part is the internal identifier of a specific record in a data collection (this identifier is created and provided by the data collection itself). MIRIAM URIs were initially only provided as Uniform Resource names (URNs). The prefix of the URN scheme is urn:miriam. For example, the enzyme alcohol dehydrogenase in the enzyme classification collection is identified by urn:miriam:ec-code:1.1.1.1 and the species Homo sapiens in the taxonomy of living species by urn:miriam:taxonomy:9606.
In order to access the data for a record, one must rely on a URI resolving system, such as the MIRIAM Registry described below. The URNs can be resolved into URLs, for instance, using Web Services or by processing the XML export of the MIRIAM Registry. But they are not directly resolvable, so one cannot successfully copy/paste them into a browser and get an informative page. In order to provide directly dereferencable URIs and comply with the second rule of Linked Data (6), we recently introduced a new URL-based identification scheme. This scheme provides directly resolvable identifiers, based on the information stored in the MIRIAM Registry. The prefix of the URL scheme is http://identifiers.org/. This identification scheme runs entirely in parallel with the URN form of MIRIAM identifiers. Both forms essentially share the same structure, are based on the same shared list of namespaces and are fully inter-convertible. For example, the enzyme alcohol dehydrogenase in the enzyme classification collection is identified by http://identifiers.org/ec-code/1.1.1.1 and the species Homo sapiens in the taxonomy of living species by http://identifiers.org/taxonomy/9606.
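Since the two forms share the same structure, converting between them is mechanical. The sketch below illustrates this (it is not the official tooling; the Registry's web services perform the conversion authoritatively, and identifiers containing reserved characters are percent-encoded in the URN form, which this simple version ignores):

```python
# Convert between the URN and URL forms of a MIRIAM identifier.
# Both forms share the structure: prefix + namespace + record identifier.
URN_PREFIX = "urn:miriam:"
URL_PREFIX = "http://identifiers.org/"

def urn_to_url(urn: str) -> str:
    """urn:miriam:<namespace>:<id>  ->  http://identifiers.org/<namespace>/<id>"""
    if not urn.startswith(URN_PREFIX):
        raise ValueError("not a MIRIAM URN: " + urn)
    namespace, _, record_id = urn[len(URN_PREFIX):].partition(":")
    return f"{URL_PREFIX}{namespace}/{record_id}"

def url_to_urn(url: str) -> str:
    """http://identifiers.org/<namespace>/<id>  ->  urn:miriam:<namespace>:<id>"""
    if not url.startswith(URL_PREFIX):
        raise ValueError("not an Identifiers.org URL: " + url)
    namespace, _, record_id = url[len(URL_PREFIX):].partition("/")
    return f"{URN_PREFIX}{namespace}:{record_id}"
```

For example, `urn_to_url("urn:miriam:ec-code:1.1.1.1")` yields the URL given in the text, and the two functions are inverses of each other on well-formed identifiers.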
The key difference between the two parallel schemes, which offer exactly the same access to data, is that the Identifiers.org URLs can be resolved directly and do not require special software tools on the user side for their handling; the dereferencing is performed by the Identifiers.org service (see below). The inter-relationships between the information captured by this service are illustrated in Figure 1.
Figure 1. Concepts and component information captured in the MIRIAM Registry. The MIRIAM Registry collects information about data collections and resources, allowing them to be referenced using URIs. Red-bounded boxes represent concepts, while green ones depict specific instances. Each collection, which itself can be referenced via a URI, is assigned a namespace. This namespace can be combined with a suitable identifier in order to form a URI identifying the specific data record, independently of any physical locations holding that information. Each of these resolvable physical locations is regarded as an instance of the data record, and can itself be identified using a URI.
## MIRIAM REGISTRY
While the location of information on the Web is a convenient endpoint for cross-references, it is fraught with issues which can result in ‘dead links’. These can be caused by changes in the underlying infrastructure of a resource or by modification of the access URL or identification scheme used by that resource. In addition, data of a given collection can often be accessed via several providers on the Web, for example, when it is ‘mirrored’ but also when associated with different metadata. In these cases, the record is identical (the data relevant to the collection describing the entity), though the physical instances of the record differ. The MIRIAM Registry tackles this problem through the recording of resources, which are physical locations (associated with URLs) where one can access the entity or information about the entity. The concept of resource effectively allows the decoupling of the identification of an entity from its location on the Web, enabling the association of a single entity identifier with multiple locations. Resources and data collections are themselves identified as records in the MIRIAM Registry and have their own namespace. For instance, the enzyme nomenclature is identified by http://identifiers.org/miriam.collection/MIR:00000004 while the enzyme nomenclature distributed by the Swiss Institute of Bioinformatics is http://identifiers.org/miriam.resource/MIR:00100003. For a detailed list of all the information stored for each data collection, refer to Table 2.
Table 2.
Information
**Data collection information**

| Field | Description |
| --- | --- |
| Identifier | A stable MIRIAM Registry identifier of the data collection. |
| Name | The name usually used to refer to the data collection. |
| Synonym(s) | Alternative name(s) of the data collection. |
| Namespace | The part of the URIs which identifies the data collection, for example ‘ec-code’ for enzymes. |
| Deprecated root URI(s) | MIRIAM URNs or URLs that have become obsolete over time. Deprecated identifiers are stored in the Registry, allowing conversion to current forms. |
| Definition | Short description of the data collection, indicating the focus of its content. |
| Identifier pattern | A regular expression pattern that describes the identifiers used within the data collection. |
| Reference(s) | Link(s) to documentation about the data collection and relevant publication(s). |

**Resource information**

| Field | Description |
| --- | --- |
| Identifier | Each resource associated with a collection is given a unique identifier in the MIRIAM Registry. |
| Access URL | URL used to retrieve a given data entry, where the token ($id) is replaced with a specified identifier for a record. |
| Website | Root URL of the resource, usually its home page. |
| Description | Brief description of the resource, used to distinguish the current resource from all the others recorded for the same data collection. |
| Institution | The institution responsible for hosting the resource. |
| Health status | Though not a textual field, the resource health status is displayed by the colour-coded text area. |
| Deprecated physical location(s) | A list of deprecated resource(s) which are no longer usable to resolve information for this data collection. |
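A client can use the stored identifier pattern to validate an identifier before constructing a URI. In the sketch below the patterns are simplified assumptions for illustration; the authoritative regular expressions are those recorded in the Registry itself:

```python
import re

# Illustrative identifier patterns, keyed by namespace. These are
# simplified assumptions (e.g. real EC numbers may contain dashes for
# incomplete codes); the authoritative patterns live in the Registry.
PATTERNS = {
    "ec-code": r"\d+\.\d+\.\d+\.\d+",   # e.g. 1.1.1.1
    "taxonomy": r"\d+",                 # e.g. 9606
}

def is_valid(namespace: str, identifier: str) -> bool:
    """Check an identifier against its collection's (assumed) pattern."""
    pattern = PATTERNS.get(namespace)
    if pattern is None:
        raise KeyError("unknown namespace: " + namespace)
    return re.fullmatch(pattern, identifier) is not None
```

This allows automated checking of identifier validity with respect to the expected expression pattern, as mentioned in Figure 2.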
Initial population of the collections in the MIRIAM Registry came largely from the Systems Biology community, and more specifically from those collections that were commonly used in the annotation of models stored in BioModels Database (7). Further collections have been incorporated based on requests from individual users, from collaborative bodies such as the Protein Standards Initiative (http://www.psidev.info/) and from publicly available listings of Life Sciences databases, such as those used in cross-referencing (for example, http://www.geneontology.org/cgi-bin/xrefs.cgi). The public-facing web application provides easy access to the catalogue of data collections. Any visitor to the site can suggest modification of the information recorded (through a link at the bottom of each page) and submit new data collections (through a dedicated form linked from the left menu panel). More detailed information on making submissions and ascertaining the suitability of a collection for inclusion can be found in the FAQ (http://www.ebi.ac.uk/miriam/main/mdb?section=faq).
All records are manually curated in order to ensure accuracy and consistency. When the information provided by the submitter, usually obtained from the relevant resources on the web, is incomplete, we liaise with the developers and administrators of those collections. This is particularly important when issues arise, for example, regarding the identifier schemes used by the collection or the specific details of the license under which the information is made available.
The MIRIAM Registry infrastructure is written in Java (http://java.sun.com/javaee/) and makes use of the Model-View-Controller design pattern (http://en.wikipedia.org/wiki/Model-view-controller). The application runs inside an Apache Tomcat Web container (http://tomcat.apache.org/). All the information in the Registry is stored in a MySQL database (http://www.mysql.com/).
Since its original introduction, there has been significant growth and development of the MIRIAM Registry. It currently contains information on over 250 collections, with a further 64 undergoing the curation process. The Registry also provides supporting facilities to enable the convenient usage of both URN and URL identification schemes.
### Finding collections and resources
With the plethora of available data collections potentially suitable to annotate biomedical information, it can be somewhat problematic to locate the most appropriate one. To aid in this task, each collection is associated with one or more tags. For instance, should a user require a data collection with which to annotate a protein sequence, it is possible to search using the two tags ‘protein’ and ‘sequence’ to find suitable data collections. This ‘Tags’ search function is linked from the left menu panel and displays a selectable list of the tags used. There are currently 39 tags available with which collections can be associated. These were created in an ad hoc fashion when the system was implemented and currently are sufficient in number to allow the existing collections to be associated with two to three tags each. Additional tags can be created as needed by curators or as requested by users. The tags are of a coarse granularity, describing the type of data recorded (‘sequence’, ‘expression’, ‘phenotype’), the subject of those data (‘gene’, ‘protein’, ‘drug’), the domain area to which they relate (‘disease’, ‘pharmacogenomics’, ‘neuroscience’) or taxonomic associations of the data (‘mammalian’, ‘human’). The purpose of the tagging system is to allow users to identify appropriate collections based on a gross level query. Plans to improve this tagging mechanism, for instance by incorporating ontological information at the level of resources and the data they provide, are discussed below.
The MIRIAM Registry provides access to several resources serving the same data collection. Those resources are not necessarily identical and there may be reasons to prefer one over another. To allow users to make an informed choice when selecting such a resource, we provide additional information, including the uptime of the servers running the service. For this purpose a ‘health status’ has been implemented and a daily health check is automatically performed for each resource listed in the Registry (Figure 2). A summary of the health status of a specific resource is depicted by colour coding where green indicates an uptime in excess of 90% and graduated colour coding with downtime to below 20% being represented in red. More detailed information is available (by clicking on a resource's identifier), such as a calendar view of the uptime and details of the last check made. The system is also used by the Registry curators as a warning system to highlight otherwise unnoticed changes in the way data is accessed from a particular resource.
Figure 2.
An illustration of the variety of information captured for each data collection in the MIRIAM Registry. Some fields, described in the Information Table 2, are highlighted: (1) Namespace; (2) Identifier pattern, which allows automated checking of identifier validity with respect to the expected expression pattern; and (3) Resource health status, which provides information on resource up- and down-time. A notification of the health status is given through colour coding, while more details are presented on a separate page, via a link on the resource identifier (see inset).
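As a toy illustration of the colour banding described above, the mapping from uptime to display colour might look like the following. Only the endpoints are stated in the text (green above 90%, red below 20%); the intermediate bands here are illustrative assumptions:

```python
def health_colour(uptime_percent: float) -> str:
    """Map a resource's uptime percentage to a display colour.
    Green above 90% and red below 20% follow the text; the
    intermediate bands are assumptions for this sketch."""
    if uptime_percent > 90:
        return "green"
    if uptime_percent < 20:
        return "red"
    if uptime_percent > 60:
        return "yellow-green"
    return "orange"
```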
### Programmatic use of the Registry
Simple Object Access Protocol (SOAP) and REpresentational State Transfer (REST) access methods are available to query the Registry. These Application Programming Interfaces (APIs) can be used to generate and resolve the identifiers, as well as to extract information about individual resources providing access to the data records.
To facilitate the usage of the web services by third party tools, a Java library is provided. It allows querying of the Registry in a quick and convenient way. It is available for download from the SourceForge.net project (http://sourceforge.net/projects/miriam/).
The entire content of the Registry is also made available as an XML file export. This file is auto-generated daily and additionally can be created on demand.
## IDENTIFIERS.ORG RESOLVING SYSTEM
Identifiers.org is the resolving system for the URL form of MIRIAM identifiers. For more information, readers may refer to http://identifiers.org/. Access to a given collection in the Registry, such as the enzyme nomenclature, is given by appending the namespace, as in http://identifiers.org/ec-code/. Due to the decoupling between the data collections and the resources that provide record information, the resolution of an Identifiers.org URL, such as http://identifiers.org/ec-code/1.1.1.1, directs the user to an intermediate page listing all recorded physical locations where a record may be accessed, allowing the user to choose the most suitable one. This process is illustrated in Figure 3.
Figure 3.
An illustration of the process followed when dereferencing an Identifiers.org URL. The example URL is a location-independent identifier for an ec-code record. When used in a browser, it resolves to an intermediate HTML page that provides a list of possible physical locations where the data record can be retrieved. The default format of this document is HTML, while an RDF/XML version is available via content negotiation or by using the ‘format’ parameter in the URL (see the ‘Identifiers.org Resolving System’ section).
One can directly access the instance of a record in a given resource by appending a parameter as a suffix. For instance, the following URL provides access to the record for alcohol dehydrogenase of the enzyme nomenclature collection provided by the IntEnz resource: http://identifiers.org/ec-code/1.1.1.1?resource=MIR:00100001. Alternatively, the concept of ‘profile’ allows one to customize the behaviour of the resolving system: it allows the pre-selection of resources to be used in the dereferencing, for a whole range of data collections. For example, the pre-defined profile ‘most_reliable’ as in http://identifiers.org/ec-code/1.1.1.1?profile=most_reliable, always returns the instance of a record in the resource with the best uptime. The ‘most_reliable’ profile is currently based on the health check history over the whole lifetime of the resource since its inclusion in the Registry. The valid parameters available for use are illustrated at: http://identifiers.org/examples/.
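The URL construction with optional resolver parameters can be sketched as follows. The parameter names `resource` and `profile` come from the text; the helper function itself is illustrative:

```python
from urllib.parse import urlencode

BASE = "http://identifiers.org"

def record_url(namespace: str, identifier: str, **params) -> str:
    """Build an Identifiers.org URL, optionally adding resolver
    parameters such as resource=..., profile=... or format=...
    (see http://identifiers.org/examples/ for the valid set)."""
    url = f"{BASE}/{namespace}/{identifier}"
    if params:
        # keep ':' unescaped so MIR:... resource identifiers stay readable
        url += "?" + urlencode(params, safe=":")
    return url
```

For instance, `record_url("ec-code", "1.1.1.1", profile="most_reliable")` reproduces the profile example above.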
The information about all the instances of a record is presented by default as HTML, but may also be retrieved in RDF/XML format. Either format can be recovered through content negotiation or using the ‘format’ parameter within the URL (for example: http://identifiers.org/ec-code/1.1.1.1?format=rdfxml). It is possible to accommodate further output formats as requested by the user community. The information represented in an RDF form allows the additional incorporation of semantic information. These semantics are captured using standard vocabularies such as SIO (Semanticscience Integrated Ontology; http://semanticscience.org/ontology/sio.owl) and EDAM (EMBRACE Data and Methods; http://edamontology.sourceforge.net/), using terms such as ‘has_identifier’ and ‘accession’ to describe relationships and data concepts, respectively.
Parameters should be used only when interacting with a user interface (for example, when the URL is used in a browser) and should be avoided when a URL is to serve as the unique identifier of a collection record. Since many parameter name–value combinations are possible for a given URI, direct comparison becomes difficult when identifiers carry accompanying parametrization. Hence, for the unambiguous and perennial identification of data, the ‘atomic’ identifier, that is, the minimal string that specifies the record, should be considered.
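A client comparing identifiers might therefore normalize away any parametrization before comparison; a minimal sketch:

```python
from urllib.parse import urlsplit, urlunsplit

def atomic(uri: str) -> str:
    """Strip query parameters (and fragments) so that two references to
    the same record compare equal regardless of resolver parametrization."""
    scheme, netloc, path, _query, _fragment = urlsplit(uri)
    return urlunsplit((scheme, netloc, path, "", ""))
```

With this, differently parametrized URLs for the same record reduce to the same atomic identifier.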
The resolving system also provides direct information for malformed queries, for example, where the identifier is not properly encoded, or when a deprecated URL is used. In both cases a clear message is given to inform users of the situation. In addition to the human-readable description of the error, the system also returns the appropriate HTTP status code.
## CURRENT STATUS AND FUTURE DEVELOPMENTS
MIRIAM URNs are already widely used, particularly within the Computational Systems Biology community. For example, in the 20th release (September 2011) of BioModels Database, the 764 models contain over 25 000 MIRIAM identifiers.
An overview of the widespread use of MIRIAM Registry information and its identifiers is detailed on the website (http://www.ebi.ac.uk/miriam/main/mdb?section=use). URI identifiers are supported by a variety of file formats; software libraries and tools have been developed to generate, resolve and build upon MIRIAM URIs in novel scientific research. Registry namespaces are being used as controlled vocabularies in databases and standardization efforts. Moreover, the maintainers of the Life Science Registry Name identification scheme, which will imminently cease to be developed and supported, have chosen Identifiers.org as its replacement and successor.
The launch of the Identifiers.org URL as an alternative to the URN form of MIRIAM identifiers answers the ever-growing need to provide directly resolvable identifiers, especially for Semantic Web applications, such as the Linking Open Data initiative from the W3C. Collections and resources used in those efforts, such as those from the Life Science Dataset Registry which supports the Bio2RDF project (8), are currently being integrated in the Registry.
To enable the incorporation of a wider array of data collections, the Registry is being updated to store additional information, such as restrictions on the access to the data entries (for example the need to register and login), on its use (utilization of specific license and/or copyright), etc. In addition, resource and collection descriptions will be further enhanced by the incorporation of information from ontologies such as the Biomedical Resource Ontology (http://bioportal.bioontology.org/ontologies/1104). These modifications will allow users to ascertain the appropriateness of use of particular data collections and will improve existing search facilities.
Profiles predefine specific resolving locations for each selected data collection and may be shared using the ‘profile’ parameter described above. Moreover, there is currently ongoing work to allow users to create profiles in the Registry through a dedicated user interface. This interface will also list the publicly available profiles that have been created by users, stating the collection and preferred resources associated.
The MIRIAM Registry is a collaborator of the BioDBCore effort (9). This effort focuses on a community-approved minimum information checklist with which database providers should comply. It recommends that the standards implemented by a data provider (such as which standard formats it accepts and provides, which terminologies it uses, etc.) be recorded with reference to those standards listed by BioSharing (http://biosharing.org). This information would ideally be provided by database administrators as an RDF file. The BioDBCore database will rely on dedicated resources for the storage of some information. Identifiers.org URIs will be used for identification and data access information.
All the code used to develop the MIRIAM Registry and the associated helper utilities is released under the terms of the GNU General Public License and is available at: http://sourceforge.net/projects/miriam/.
## CONCLUSIONS
The MIRIAM Registry is a stable resource which provides both an identifier scheme and a resolution system. While it originates from within the Computational Systems Biology community, it is certainly not limited to that domain: submissions are encouraged for new data collections from any biological community that wishes to create and/or use unambiguous, perennial identifiers, or to generate and resolve physical locations from which data can be accessed. The system is of particular interest to tool and database developers who need to manage annotations and cross-references. Identifiers.org helps to ensure that data entities are resolvable, thereby avoiding the creation of ‘dead ends’ in the network of linked data.
## FUNDING
EMBL, ELIXIR (Preparatory Phase) and BBSRC (grants BB/E005748/1 and JPA 1729). Funding for open access charge: core EMBL funding.
Conflict of interest statement. None declared.
## ACKNOWLEDGEMENTS
The authors thank Michel Dumontier for very fruitful discussions about identifiers and their usage in the Semantic Web, as well as existing users for their participation and feedback in discussions and through surveys.
## REFERENCES
1. Taylor, C.F., Field, D., Sansone, S.-A., Aerts, J., Apweiler, R., Ashburner, M., Ball, C.A., Binz, P.A., Bogue, M., Booth, T., et al. (2008) Promoting coherent minimum reporting guidelines for biological and biomedical investigations: the MIBBI project. Nat. Biotechnol., 26, 889–896.
2. Smith, B., Ashburner, M., Rosse, C., Bard, J., Bug, W., Ceusters, W., Goldberg, L.J., Eilbeck, K., Ireland, A., Mungall, C.J., The OBI Consortium, Leontis, N., Rocca-Serra, P., Ruttenberg, A., Sansone, S.-A., Scheuermann, R.H., Shah, N., Whetzel, P.L. and Lewis, S. (2007) The OBO Foundry: coordinated evolution of ontologies to support biomedical data integration. Nat. Biotechnol., 25, 1251–1255.
3. Le Novère, N., Finney, A., Hucka, M., Bhalla, U.S., Campagne, F., Crampin, E.J., Klipp, E., Mendes, P., et al. (2005) Minimum information requested in the annotation of biochemical models (MIRIAM). Nat. Biotechnol., 23, 1509–1515.
4. Hucka, M., Finney, A., Sauro, H., Bolouri, H., Doyle, J., Kitano, H., Arkin, A., Bornstein, B., Bray, D., Cuellar, A., et al. (2003) The Systems Biology Markup Language (SBML): a medium for representation and exchange of biochemical network models. Bioinformatics, 19, 524–531.
5. Laibe, C. and Le Novère, N. (2007) MIRIAM Resources: tools to generate and resolve robust cross-references in Systems Biology. BMC Syst. Biol., 1, 58–66.
6. Berners-Lee, T. (2006) Linked Data. In Design Issues: Architectural and Philosophical Points. http://www.w3.org/DesignIssues/LinkedData (13 September 2011, date last accessed).
7. Li, C., Donizelli, M., Rodriguez, N., Dharuri, H., Endler, L., Chelliah, V., Li, L., He, E., Henry, A., Stefan, M.I., et al. (2010) BioModels Database: an enhanced, curated and annotated resource for published quantitative kinetic models. BMC Syst. Biol., 4, 92.
8. Belleau, F., Nolin, M., Tourigny, N., Rigault, P. and Morissette, J. (2008) Bio2RDF: towards a mashup to build bioinformatics knowledge systems. J. Biomed. Inform., 41, 706–716.
9. Gaudet, P., Bairoch, A., Field, D., Sansone, S.-A., Taylor, C., Attwood, T.K., Bateman, A., Blake, J.A., Bult, C.J., Cherry, J.M., et al. (2011) Towards BioDBcore: a community-defined information specification for biological databases. Nucleic Acids Res., 39, D7–D10.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0), which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
|
# Distortion Introduced by Sampling
July 17, 2019
In general, sampling introduces three types of distortion due to signal bandwidth, quantization, and digital to analog converter (DAC) interpolation.
### Distortion Due to Signal Bandwidth
In Lesson 2, we learned that perfect reconstruction without distortion is theoretically possible for band-limited signals. However, in the real world, no signal is truly band-limited. If nothing else, there is always thermal noise.
If the sample rate 𝖥s is not at least the Nyquist rate, then aliasing occurs (see Figure 1.10.) Typically, aliasing is prevented by making the signal band-limited with a low-pass filter, often called an anti-aliasing filter. However, any kind of filtering—no matter how useful it is—changes the signal and introduces distortion.
Figure 1.10. Sampling below the Nyquist rate introduces aliasing.
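When a tone lies above the Nyquist frequency, its spectrum folds back into the first Nyquist zone, so it appears at a different frequency after sampling. A small illustrative sketch in plain Python:

```python
def alias_frequency(f: float, fs: float) -> float:
    """Apparent frequency of a sinusoid at f hertz after sampling at
    fs hertz: the spectrum folds around multiples of fs, so the
    observed frequency lands in the first Nyquist zone [0, fs/2]."""
    f = f % fs                      # fold around the sample rate
    return fs - f if f > fs / 2 else f
```

For example, a 7 kHz tone sampled at 10 kHz appears at 3 kHz, while a 3 kHz tone (safely below Nyquist) is unchanged.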
### Distortion Due to Quantization
Quantization is the mapping from a continuous waveform to a discrete quantity. Quantization forces continuous signals with an infinite number of possible values to be represented as discrete-valued sequences with a finite number of possible values.
For example, a signal in the range of ±1 volt has an infinite number of possible values; however, when sampled by an 8 bit ADC, it is forced to be one of 28 = 256 possible values (see Figure 1.11.) The difference between the ideal value and the discrete value is, by definition, distortion.
Figure 1.11. Distortion due to quantization.
### Distortion due to Digital to Analog Converter (DAC) Interpolation
The DAC must convert a sequence of discrete values back to a continuous voltage or current waveform. From a theoretical standpoint, this waveform is ideally a sequence of weighted “infinitely high and infinitely narrow” pulses called “Dirac delta functions.” However, real-world systems can’t support infinite voltage levels so we must settle for more mundane solutions.
In practice, a simple “sample-and-hold” circuit is often used; more generally, reconstruction can be performed by Lagrange interpolation (see Figure 1.12.)
Sample-and-hold is simple because it can be implemented with D-flip-flops. However, the sample-and-hold operation is equivalent to convolution with a rectangle function. As such, it introduces distortion in both the time and frequency domains.
Figure 1.12. Distortion due to DAC interpolation.
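Because convolution with a rectangle in time is multiplication by a sinc in frequency, the sample-and-hold (zero-order hold) attenuates high frequencies; near the Nyquist frequency the droop reaches about 3.9 dB. A quick numeric check:

```python
import math

def zoh_gain(f: float, fs: float) -> float:
    """Magnitude response of an ideal sample-and-hold (zero-order hold):
    |H(f)| = |sinc(f/fs)| = |sin(pi*f/fs) / (pi*f/fs)|."""
    x = math.pi * f / fs
    return 1.0 if x == 0 else abs(math.sin(x) / x)

fs = 48_000.0
droop_db = 20 * math.log10(zoh_gain(fs / 2, fs))   # gain at Nyquist
```

At f = fs/2 the gain is 2/π ≈ 0.637, i.e. roughly −3.9 dB, which is why DACs often include a compensating (inverse-sinc) filter.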
|
# MOSFET drain current equation vs on resistance
I have a very simple circuit. It’s just an n-channel enhancement MOSFET where Vgs is being driven to 9V and a relatively small 9ohm resistor sits between the 9V battery and drain terminal.
The way I went about the analysis was to assume the MOSFET is saturated and then use: Id = Kn(Vgs-Vt)^2 where Kn is the transconductance parameter (W/LμC_ox) and Vt is the threshold voltage. For a Kn in the order of 1mA/V^2 and Vt approximately 1V, we get a drain current in the order of 8mA. These are all relatively standard values (I think).
When I simulate the circuit on LTSpice the current is in the order of 1A and it appears the simulation just treats the MOSFET as having a low on resistance between drain and source terminals so that the drain current is essentially just 9V/9ohm.
These two answers are two orders of magnitude apart, so it cannot be the case that the parameters I used in the simulation/calculation were just off (I simulated with a bunch of MOSFETs on LTSpice). To me it seems like there is something more fundamental that I’m not understanding in my calculations. When is it correct to use that above drain current equation and when should I rather treat the MOSFET as just being a small resistor between drain and source terminals (when turned on)? I’m assuming the drain current equation isn’t used that often because I can’t really find a value for Kn in datasheets.
• Why do you assume that you have an appropriate transconductance parameter? When you simulated in LTspice, exactly what NMOS model did you use? How was the body connected? – Elliot Alderson Jun 7 at 21:33
• Idsat = Kn/2*(Vgs-Vtn)^2. – muyustan Jun 7 at 21:37
• I studied the MOSFET chapter out of Microelectronics by D. Neaman. There the transconductance parameter was mentioned as being in the order of 200u - 2m. I used a few different models in LTSpice (just randomly selected a bunch) including IRFH5302, A06408, BSC032N, and about 5 more. But they all gave me similar answers – user16378 Jun 7 at 21:42
• Since Vgs> 3Vt, It is ~ fixed minimal RdsOn in linear mode Normally I use Vgs>=2.5x Vt for linear mode – Tony Stewart Sunnyskyguy EE75 Jun 7 at 22:02
The $R_{ds(on)}$ spec doesn't apply to saturation mode.

It applies in the linear or triode operating mode, where the current through the channel depends strongly on both $V_{gs}$ and $V_{ds}$. That does mean that $R_{ds(on)}$ is not really a fixed number but depends pretty strongly on $V_{gs}$, and the $V_{gs}$ at which the specification is measured should be stated clearly in the datasheet.
For example, in the Vishay IRF530 datasheet:
Why do you assume you are in saturation?
Here are the saturation conditions:
1) Vgs > Vth. True.
2) Vds > Vgs - Vth. False.
At this DC bias, Vds is approximately 0. Vgs is approximately 9V.
0 > 9-(volt or two for Vth) is False.
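The consistency check behind this argument can be sketched numerically: assume saturation, solve the load line, and see whether the resulting Vds still satisfies Vds > Vgs − Vth. With a textbook-scale Kn the assumption is self-consistent (matching the 32 mA comment below), but with a power-MOSFET-scale Kn it is not, so the device sits in triode and the current is set by Vdd and the drain resistor. All component values here are illustrative, not taken from a datasheet.

```python
# Sketch: assume saturation, solve the DC load line, then check whether
# the resulting Vds is still consistent with the saturation condition.
# Assumed values: Vdd = 9 V, Rd = 9 ohm, Vgs = 9 V, Vth = 2 V.
# Kn = 2e-3 A/V^2 ~ textbook device; Kn = 20 A/V^2 ~ power MOSFET scale.

def saturation_id(kn, vgs, vth):
    """Square-law saturation current: Id = Kn/2 * (Vgs - Vth)^2."""
    return 0.5 * kn * (vgs - vth) ** 2

def region(vgs, vds, vth):
    if vgs <= vth:
        return "cutoff"
    return "saturation" if vds > vgs - vth else "triode"

vdd, rd, vgs, vth = 9.0, 9.0, 9.0, 2.0
for kn in (2e-3, 20.0):
    id_guess = saturation_id(kn, vgs, vth)   # assume saturation...
    vds = vdd - id_guess * rd                # ...and see where Vds lands
    # Small Kn: Vds stays above Vgs - Vth, assumption holds.
    # Large Kn: Vds collapses (even goes negative, i.e. unphysical),
    # so the device is actually in triode and Id ~ Vdd / (Rd + Rds_on).
    print(kn, region(vgs, vds, vth))
```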
• assuming SAT, Idsat formula gives 32 mA of current using the Kn and Vtn provided by OP. 9 - 9*0.032 is around 8.7 V which is Vdrain and also Vds. So, Vds is indeed higher than Vgs - Vtn and SAT assumption validated. – muyustan Jun 7 at 22:15
• Models have limits. You assume SAT (which is wrong) and then attempt to apply the saturation equations (which then give nonsensical answers) and then use that to justify that you are in SAT. If you look at Fig. 1 in the IRF530 datasheet, you see that 9V Vgs and 8.7 Vds and 32mA cannot exist simultaneously (Vgd is < Vth). Basically, 32mA and Vgd = 0.3V is insufficient current to form the channel required for saturation at the drain terminal. A more complete discussion of saturation is at the bottom of here: ecee.colorado.edu/~bart/book/book/chapter7/ch7_3.htm – Andrew Lentvorski Jun 9 at 0:29
• This does not justify you. "At this DC bias, Vds is approximately 0. Vgs is approximately 9V." without any background, you both use formulas of the model and blame the model. Sorry, but not a good answer. – muyustan Jun 9 at 2:03
Lemma 51.2.5. Let $I \subset A$ be a finitely generated ideal of a ring $A$. Let $\mathfrak p$ be a prime ideal. Let $M$ be an $A$-module. Let $i \geq 0$ be an integer and consider the map
$\Psi : \mathop{\mathrm{colim}}\nolimits _{f \in A, f \not\in \mathfrak p} H^ i_{V((I, f))}(M) \longrightarrow H^ i_{V(I)}(M)$
Then
1. $\mathop{\mathrm{Im}}(\Psi )$ is the set of elements which map to zero in $H^ i_{V(I)}(M)_\mathfrak p$,
2. if $H^{i - 1}_{V(I)}(M)_\mathfrak p = 0$, then $\Psi$ is injective,
3. if $H^{i - 1}_{V(I)}(M)_\mathfrak p = H^ i_{V(I)}(M)_\mathfrak p = 0$, then $\Psi$ is an isomorphism.
Proof. For $f \in A$, $f \not\in \mathfrak p$ the spectral sequence of Dualizing Complexes, Lemma 47.9.6 degenerates to give short exact sequences
$0 \to H^1_{V(f)}(H^{i - 1}_{V(I)}(M)) \to H^ i_{V((I, f))}(M) \to H^0_{V(f)}(H^ i_{V(I)}(M)) \to 0$
This proves (1) and part (2) follows from this and Lemma 51.2.4. Part (3) is a formal consequence. $\square$
## Can a Keypass file theoretically be cracked offline?
So you create a .kbdx file, protected by a password.
AFAIK in asymmetric key schemes and in WPA-AES brute-forcing consists of:
• try a random password on the private key / on the router
• if it doesn’t log you in, try another.
So you immediately know whether you hit the correct password.
What about a password manager’s database? You know nothing about the content of the file. How do you know you did manage to crack it?
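You do know something: the file format itself tells you when a key is right. KDBX 4 stores an HMAC over the header, and KDBX 3.x stores known plaintext start bytes, so each guess is verifiable fully offline. Below is a simplified stdlib sketch of the idea only — the key-derivation chain and header layout are illustrative, not the real KDBX format.

```python
# Simplified sketch of why an offline password guess is verifiable:
# the file carries a MAC that only the correct derived key reproduces.
import hashlib, hmac, os

def derive_key(password: bytes, salt: bytes, rounds: int = 10_000) -> bytes:
    # Real KeePass uses AES-KDF or Argon2; PBKDF2 stands in here.
    return hashlib.pbkdf2_hmac("sha256", password, salt, rounds)

def make_file(password: bytes):
    salt = os.urandom(16)
    header = b"KDBX-like header" + salt
    tag = hmac.new(derive_key(password, salt), header, hashlib.sha256).digest()
    return header, salt, tag

def try_password(guess: bytes, header, salt, tag) -> bool:
    candidate = hmac.new(derive_key(guess, salt), header, hashlib.sha256).digest()
    return hmac.compare_digest(candidate, tag)

header, salt, tag = make_file(b"correct horse")
print(try_password(b"hunter2", header, salt, tag))        # wrong guess fails
print(try_password(b"correct horse", header, salt, tag))  # right guess verifies
```

The defense is therefore not secrecy of the check but the cost of the key-derivation function, which is why KeePass lets you raise the KDF work factor.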
## Are Online Problems always harder than the Offline equivalent?
I am currently studying Online-Algorithms, and I just asked myself if online Problems are always harder than the offline equivalent.
The most probable answer is yes, but I can’t figure out why.
Actually I have a second, more specific question. When an offline problem has some integrality gap ($$IG\in[1,\infty)$$), we know that in an offline setting there is generally no randomized rounding algorithm which achieves a ratio better than $$IG$$.
Can this just be adapted to the online problem? If some fractional algorithm has competitive ratio $$c_{frac}$$ can some randomized rounding scheme only reach competitive ratio as good as $$\frac{c_{frac}}{IG}$$?
## What are some good resources out there to help me build an online and offline file converter?
I’m hoping to build both an online and offline file converter (upload a file, convert it to some other specified format, download the new format) as a project. Are there any resources that experienced programmers/hackers/computer scientists recommend? Much appreciated.
## Can I use Google Analytics to implement offline conversion tracking?
In a Google Ads account I’m working on, all conversions are imported from Google Analytics. How can I define a Google Analytics goal which has the Google Click ID configurable, i.e. such that reaching the goal is associated with a previously seen Google Click ID? I.e. can I have something to the effect of Offline Conversion Tracking except that I use Google Analytics (and maybe even Google Tag Manager)?
Background:
I’m working on a site which has its analytics managed via Google Tag Manager; some events configured in GTM trigger goals in Google Analytics, which in turn are imported as conversions in Google Ads. For example, “visitor requested a trial account” is a user interaction which is tracked like this.
I’d now like to track if people who requested a trial account actually logged in – and if so, track this as a conversion, too. When a visitor logs into his account, I can check a database to figure out the Google Click ID (if any) which the user got assigned when requesting his account. In case a GCLID is found, I’d like to have a GTM trigger which triggers a tag which bumps a Google Analytics goal (which in turn is imported as a conversion in Google Ads).
Configuring Google Tag Manager accordingly seems straightforward. However, it’s not clear to me what kind of Google Analytics Goal to create which explicitly specifies a click ID.
I recently started learning about randomized online algorithms, and the Wikipedia definitions for the three adversary models are very unhelpful to put it mildly. From poking around I think I have a good understanding of what an oblivious adversary is. From my understanding, the oblivious adversary must determine the “worst possible input sequence” before we even start running our algorithm. Let $$I_w$$ denote the worst possible input sequence this adversary comes up with. (I.e., the input sequence that produces the greatest gap between the best that can be done and what we expect our algorithm to do.)
We then say that our algorithm is $$c$$-competitive (for a minimization problem) under this adversary if $$E[Alg(I_w)] \le c \cdot Opt(I_w) + b$$ where $$c,b$$ are some constants, $$E[Alg(I_w)]$$ is the expected value of our algorithm on the input, and $$Opt(I_w)$$ is the cost if we had made perfect decisions. (I.e., if the problem went offline.)
My confusion concerns the adaptive online and adaptive offline adversaries. I neither fully understand their definitions nor the difference between them. I will list my confusions directly below.
• As I understand it, both of these adversaries somehow build the input sequence as your online algorithm runs. This source says that, unlike in the case of the oblivious adversary, before creating the input at time $$t$$ both the adaptive online and adaptive offline adversaries have access to the outcomes of your algorithm at time steps $$1, \ldots , t-1$$. Then it says that in both cases the adversary “incurs the costs of serving the requests online.” The difference being that the online adaptive adversary “will only receive the decision of the online algorithm after it decided its own response to the request.” Does this mean that the difference is that the offline adaptive adversary can see how your algorithm performs during future steps? Or just the present step? But then why is it still incurring the cost of serving requests online?
• This source contradicts the source above. It says that the adaptive offline adversary “is charged the optimum offline cost for that sequence.” Like I said, the previous source says both incur “the cost of serving the requests online.” What does it even mean to incur the cost of serving requests online vs. offline? Which is correct?
• This takes a completely different tack and talks about knowing randomness (online adaptive) vs. knowing “random bits” (offline adaptive). Is this equivalent somehow? How so?
• How does the definition of the competitive ratio change for these two adversaries? Most sources I looked at just defined the competitive ratio for the oblivious adversary.
A simple example of each to illustrate the difference would be much appreciated. Thanks for the help!
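Not a full answer, but for intuition on competitive ratios against an oblivious adversary, here is the classic ski-rental toy problem (not from your sources): renting costs 1 per day, buying costs $$B$$, and the adversary fixes the season length $$n$$ in advance. The break-even strategy "rent for $$B-1$$ days, then buy" is $$(2 - 1/B)$$-competitive, and a short script can confirm the worst case.

```python
# Toy competitive analysis: deterministic ski rental against an
# oblivious adversary who fixes the season length n up front.
def alg_cost(n, B):
    """Break-even strategy: rent for B-1 days, buy on day B."""
    return n if n < B else (B - 1) + B

def opt_cost(n, B):
    """Offline optimum: either rent every day or buy up front."""
    return min(n, B)

B = 10
worst = max(alg_cost(n, B) / opt_cost(n, B) for n in range(1, 1000))
print(worst)  # 1.9, i.e. exactly 2 - 1/B
```

The adversary's best input is $$n = B$$: the algorithm pays $$2B-1$$ while OPT pays $$B$$. Against an *adaptive* adversary the distinction only matters for randomized algorithms, since a deterministic algorithm's moves are predictable anyway.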
## MS SQL Express 2016 on Amazon AWS: I Can Take Database Offline but Can’t Bring It Online
I can take databases offline (via GUI) but can’t bring them back online. The server details are as follow:
RDBMS: MS SQL 2016 Express
Host: Amazon AWS/RDS Free Tier
Details/History of the Problem

A few months ago, I created a db instance on Amazon AWS and at the time of creation, the ‘master/admin’ account was set up via the AWS/RDS web page. With this ‘admin’ account, I have created several databases on that instance without any problems.
Over the past few months, I have used this ‘admin’ account to change several databases to contained databases. I do this so that I can setup contained users. I have also done this several times on this server instance with the same admin account with no problems.
Last night, I had just created a new database via this admin account. I then tried to set this new database as a contained database and the process failed. The dialog box error message stated among other things “please try again later”.
After the 3rd failed attempt, I decided to take the database offline (via the GUI in SSMS). I did this in a bid to force close any possible open processes or connections that might be on this new database. That worked. However, I have not been able to bring it back online. I have tried via the GUI and also via a query and it keeps failing.
I have then checked the server roles assigned to this ‘admin’ account. It is not part of sysadmin role. As I understand, the ‘sysadmin’ role can do absolutely anything on the db instance. I reckon my admin account is not of this sysadmin role because it is meant for the in-house DBAs at Amazon AWS. I have tried to add it as sysadmin but it fails.
To check whether my ‘admin’ account is the problem, I have taken another database offline (it’s empty). It went offline but it is also failing to come back online.
What could be the problem? Please help. Note that my skill level is very very low and I’m learning as I go along.
The server logs don’t show anything useful. I have attached screenshots.
## Offline algorithm for codeforces problem
Is it possible to do this problem online: https://codeforces.com/problemset/problem/997/E ? All the AC solutions use offline sorting of queries.
## How to secure an offline resource using offline software but with occasional server access?
My application is a (Windows) desktop application that is required to operate fully offline. To enable this, I have a local data cache that keeps a synced copy of server data.
How can I secure this local data from any access other than my software?
I know that for fully offline software, this is impossible; any encryption aspects like key, salt, password, etc. that my software used would have to be embedded in the software itself, and this could be recovered from the executable.
But my application also has a requirement that it connects to the home server at least every 5 days for updates, during which time it could download anything.
Is there an algorithm that would allow the application to encrypt the data, using an encryption key that’s downloaded every so often, based on information that only the server would know?
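One common pattern is a time-boxed key cache: the data key lives on the server, the client caches it with a timestamp, and the software refuses to use it past the 5-day window. A minimal stdlib sketch of the policy logic follows (all names are illustrative; note this only raises the bar — while the key is cached, a debugger can still pull it out of process memory, so it is not real DRM).

```python
# Sketch: cache a server-supplied key and enforce a 5-day expiry on use.
import time

MAX_AGE = 5 * 24 * 3600  # 5 days, in seconds

class KeyCache:
    def __init__(self):
        self.key = None
        self.fetched_at = 0.0

    def refresh(self, key_from_server: bytes, now=None):
        """Called after a successful server check-in."""
        self.key = key_from_server
        self.fetched_at = time.time() if now is None else now

    def get(self, now=None) -> bytes:
        """Return the data key, or refuse if the leash has expired."""
        now = time.time() if now is None else now
        if self.key is None or now - self.fetched_at > MAX_AGE:
            raise PermissionError("key expired; reconnect to the server")
        return self.key

cache = KeyCache()
cache.refresh(b"\x00" * 32, now=0.0)        # day 0: server check-in
print(cache.get(now=3 * 24 * 3600))         # day 3: still allowed
```

In practice you would combine this with envelope encryption: the server key wraps a local data-encryption key, so a refresh re-wraps rather than re-encrypts the whole cache.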
## How to facilitate the export of secret strings from an offline system?
I want to use Shamir’s Secret Sharing algorithm to store a randomly generated passphrase securely by spreading the secret shares on paper for example.
The passphrase is generated on an offline system. I am looking for a way to ease the process of “exporting” those secrets which can be quite long (~100 hexadecimal characters).
First I converted the secrets from hexadecimal to base64. That is not bad but not enough.
Then I tried to compress the strings using different methods but because it is random data it does not compress well (or at all).
Then I thought of printing them as QR codes; that works fine, but the issue comes later when I need to import the secrets back, because I would need a camera.
Is there anything else I could try?
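One low-tech option for manual transcription is to re-encode each share in Base32 (case-insensitive, smaller alphabet than Base64) and print it in short dash-separated groups with a per-line checksum, so typing errors are caught on re-entry. The group sizes and the 2-hex-digit line checksum below are arbitrary sketch choices, not any standard.

```python
# Sketch: format a hex share for hand transcription, with line checksums.
import base64, hashlib

def export_share(hex_share: str, group=4, per_line=4):
    raw = bytes.fromhex(hex_share)
    b32 = base64.b32encode(raw).decode().rstrip("=")
    groups = [b32[i:i + group] for i in range(0, len(b32), group)]
    lines = []
    for i in range(0, len(groups), per_line):
        chunk = "-".join(groups[i:i + per_line])
        check = hashlib.sha256(chunk.encode()).hexdigest()[:2]  # line checksum
        lines.append(f"{chunk}  [{check}]")
    return lines

def import_share(lines):
    b32 = "".join(line.split("  ")[0].replace("-", "") for line in lines)
    b32 += "=" * (-len(b32) % 8)  # restore Base32 padding
    return base64.b32decode(b32).hex()

share = "00" * 10 + "deadbeef" * 5   # 30-byte stand-in for a real share
for line in export_share(share):
    print(line)
```

Round-tripping `import_share(export_share(s)) == s` lets you verify a transcription without a camera; on re-entry, a mismatched per-line checksum pinpoints which line was mistyped.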
## Checkout system with offline payments
I’m trying to find the best way to handle offline payments (e.g. internet bank transfer) into a site which offers services like courses.
At present there is a course registration form, when successfully completed the user receives a confirmation screen and email which states that payment is required, confirms the amount and bank account details.
I’m concerned that when a user completes this process it may feel as though they have achieved a booking or reservation, regardless of the information that follows.
For management a difficulty is that once the checkout is completed it takes a minimum of 24-48 hours before the payment can be confirmed. Also, the user may choose to not pay immediately (or at all). During this time the list contains unpaid booking requests, and it’s proving hard to manage the attendee list and be sure of who is coming.
I’m wondering if anyone has encountered this problem before and if there is a better way to handle the checkout process.
In recent years, the global counter-terrorism situation has become increasingly severe. Security inspections in public places have gradually attracted widespread attention from various countries. In crowded public places, in addition to the security checks on the packages people carry, it is also important to conduct security checks on the human body to detect dangerous hidden objects. However, the existing security inspection systems have some deficiencies more or less. In this case, the terahertz security inspection system using electromagnetic waves as the detection method is playing an increasingly important role.
Terahertz waves refer to electromagnetic waves with frequencies between 0.1 and 10 THz, which have similar characteristics to microwaves and infrared [1]. Terahertz imaging can detect not only metallic objects, but also non-metallic contraband such as explosives, ceramic knives, drugs, and glass knives [2,3]. Terahertz waves can penetrate insulating materials such as clothing, plastics, and ceramics [4,5,6], so they can be used to detect objects hidden under human clothing. Compared with microwaves, the terahertz waves have shorter wavelengths and higher imaging accuracy. Terahertz waves have much longer wavelengths compared to X-rays and visible light. Therefore, the resolution and signal-to-noise ratio of human security imaging based on terahertz systems are quite different from X-ray imaging and visible light imaging. Such characteristics make it difficult for terahertz imaging to automatically detect hazardous objects using existing object detection algorithms. Terahertz imaging systems are divided into active and passive working modes. In the application of detecting concealed objects in the human body, passive terahertz imaging systems occupy a major position [7,8]. Passive terahertz imaging systems do not emit electromagnetic waves and will not cause harm to the human body. The current speed of passive terahertz imaging is as high as 10 frames per second, which puts forward higher requirements for the speed and accuracy of detection. At present, most of the images acquired by the terahertz systems are checked manually one by one, which not only consumes a lot of manpower, but also may lead to missed inspections when people mark images for a long time. As a result, the advancement of automatic identification technology for passive terahertz images is critical for the widespread adoption of intelligent security inspection scenarios.
It is well known that the security screening of terahertz imaging systems is designed to detect and identify the threat of dangerous goods carried by the human body. More and more terahertz passive imaging systems have been studied in recent years, but theories and methods for identifying or locating dangerous objects from terahertz images are still in their infancy. Compared with optical and active terahertz imaging, passive terahertz images have a low signal-to-noise ratio and are easily affected by background variations. These characteristics make the use of passive terahertz imaging systems for security inspections a huge challenge. Traditional object detection algorithms of passive terahertz images mainly include segmentation-based detection algorithms and feature matching-based detection algorithms. The segmentation-based detection algorithm uses the grayscale information in the passive terahertz image to divide the image into two regions, the target and the background, and then detects the target. At present, the commonly used segmentation algorithms include the maximum entropy segmentation method [9], the Otsu method [10], the region growing method, etc. The target segmentation-based algorithm is fast and easy to implement, but it can’t detect complex targets. The feature matching-based detection algorithms construct a target feature descriptor to match known prior feature information. Common features include Haar features [11], gradient histograms (HOG) [12], dense SIFT features [13], etc. Feature-based detection methods rely on manual feature extraction. However, since artificially constructed features are usually limited to specific object types, satisfactory object detection results cannot be achieved. In recent years, with the development of deep learning, the application of deep convolutional neural networks in optical image target detection has achieved good results, and its detection accuracy is far superior to traditional methods [14,15].
Low-resolution passive terahertz images contain very little information, making it very difficult to extract features from images manually. The CNN network can automatically extract features from passive terahertz images. Although large-scale CNN networks have good detection and classification effects, they cannot meet real-time performance due to large parameters and complex structures. Extensive research shows that there is still space for improvement in the accuracy and speed of current CNN-based terahertz image target detection algorithms [16,17,18]. Hence, there is an urgent need to propose a method with sufficient accuracy and robustness for object recognition in passive terahertz images. Nevertheless, to the best of the author’s knowledge, there are only a handful of studies in the area.
There are two main problems in the direct use of deep learning algorithms in the field of passive terahertz image object recognition, one is that the accuracy of object detection is not high, and the other is that the speed of object detection is not fast enough. In response to these two issues, we chose the single-stage detection algorithm SSD, which has relatively high accuracy and speed in the natural light object detection algorithm, as the basic model. This is the first application of the SSD algorithm in the field of passive terahertz image object detection. We investigate the potential of the residual network ResNet-50 in terms of accuracy and real-time performance in detecting concealed objects in passive terahertz images. The method described in this paper aims at selecting the most efficient CNN architecture and further exploring the ways of its modification and optimization to ensure superior real-time classification potential for detecting concealed objects. The significance and originality of this research lie in the first exploration of the application of the SSD model in the field of passive terahertz image detection and several important improvements are proposed for image characteristics. The ultimate goal of this research is to develop a fast and accurate detection algorithm for passive terahertz images and promote its practical application in intelligent security inspection scenarios. We propose the following improvements to further improve the accuracy and speed of the algorithm in detecting concealed objects. To summarize, our main contributions are as follows:
• Aiming at the network degradation problem of the VGGNet network [19] in the SSD algorithm, the improved algorithm uses the ResNet-50 network [20] as the feature extraction network.
• For the problem of poor detection performance of small objects, we extract multi-level features to form a feature pyramid network [21], which is used to detect objects of different scales. Through upsampling, deep features and shallow features are fused to construct feature representations with rich semantic information, making the fused features more descriptive.
• The proposed algorithm introduces the spatial-channel attention mechanism into the SSD network to enhance the semantic information of high-level feature maps. Therefore, the ability of the algorithm to obtain the details and position information of the object can be improved, thereby reducing the missed detection rate and false detection rate, and improving the detection accuracy of small objects.
• This paper introduces the Focal Loss [22] to improve the imbalance of positive and negative samples and hard and easy samples. By increasing the weight of the hard samples in the loss function, the robustness of the proposed algorithm can be improved.
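The reweighting effect of the focal loss can be seen with a few scalar values. The snippet below is a sketch of the published formula FL(p_t) = −α_t (1 − p_t)^γ log(p_t) from reference 22, not the exact implementation used in this paper; the probabilities are illustrative.

```python
# Focal loss for a single anchor: down-weights easy examples so that
# the abundant easy negatives do not dominate training.
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-12):
    """p: predicted foreground probability, y: label in {0, 1}."""
    p_t = p if y == 1 else 1.0 - p          # probability of the true class
    a_t = alpha if y == 1 else 1.0 - alpha  # class-balance weight
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t + eps)

easy = focal_loss(0.9, 1)   # well-classified positive: strongly down-weighted
hard = focal_loss(0.1, 1)   # misclassified positive: nearly full weight
print(hard / easy)          # the hard example contributes >1000x more loss
```

Setting gamma = 0 recovers ordinary alpha-weighted cross-entropy, so gamma directly controls how aggressively easy samples are suppressed.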
Compared with the unoptimized SSD algorithm, this method can improve the detection accuracy and at the same time meet the detection speed requirements in intelligent security inspection scenarios. Specifically, our proposed algorithm achieves a mean average precision of 99.92%, and the detection speed of the algorithm can reach 17 FPS, which can meet the real-time requirements. This is an excellent outcome that could not have been obtained with earlier methods. The remainder of the paper is organized as follows: The related works on terahertz image object detection and the basic introduction to SSD algorithm are introduced in “Related work”. In the following section, the proposed improved SSD algorithm is presented in detail. Section “Experimental results and discussion” shows and analyzes the experimental results of our method after its deployment. The conclusion of the paper is formed in the last section.
## Related work
This section introduces the related works, including the introduction of common object detection methods based on CNN models, the basic introduction of the SSD model, and related work on terahertz image object detection.
### Object detection and recognition based on CNN models
In recent years, deep learning has made breakthroughs in object visual detection. At present, object detection algorithms based on deep learning models are mainly divided into two categories: one is various two-stage algorithms based on R-CNN [23], including Fast R-CNN [24], Faster R-CNN [25], Mask R-CNN [26], RFCN [27], etc. These two-stage algorithms greatly improve the detection accuracy, but the two-stage detection results in a slow detection speed, which cannot meet the real-time requirements. The other category is one-stage detection algorithms represented by YOLO [28] (You Only Look Once) and SSD [29]. The YOLO algorithm uses the idea of regression to greatly improve the detection speed, but its recognition accuracy of object location is poor and the recall rate is low. The SSD algorithm combines the regression method in the YOLO algorithm, not only follows the anchor mechanism in Faster R-CNN but also extracts multi-scale feature maps for classification and prediction, which greatly improves the precision and speed of object detection.
### Detailed introduction of SSD algorithm
The SSD algorithm is an object detection algorithm proposed by Wei Liu et al. [29], which combines the anchor mechanism in Faster R-CNN and the regression idea in YOLO. The SSD algorithm not only has a clear speed advantage over Faster R-CNN, but also has a clear accuracy advantage over YOLO. It is based on the VGGNet-16 backbone and fuses feature maps of different scales for feature extraction. Its network structure is divided into two parts: one is the feature extraction network, and the other is the classification and regression layer. Specifically, the feature extraction network can extract the feature information of the object in the image, and at the same time can improve the ability of the network to perceive the object. The classification and regression layers are capable of classifying and regressing each candidate box to detect objects in the terahertz images. The structure of the original SSD network is shown in Fig. 1.
As we all know, the SSD algorithm no longer uses fully connected layers, which improves computational efficiency. Moreover, the SSD algorithm uniformly resizes the input image, so there is no mandatory requirement on the size of the input image. Besides, SSD directly uses convolution to calculate candidate boxes and predict classification in one step, which simplifies the process. However, the SSD algorithm also has some disadvantages. To begin with, the maximum size of the feature maps used for prediction is 38×38. If the size of small objects in the input image is small, the detailed information contained in the lower layers can easily be lost after the pooling layers. Besides, as the feature map size of the additional feature extraction networks continues to decrease, the feature extraction and representation capabilities of the shallow feature layers are limited, while the feature output layers of deep features are mainly responsible for large objects, resulting in poor detection performance of SSD for small objects. Finally, the convolutional layers used for feature extraction in SSD cannot take into account the features of adjacent feature layers of different scales. This results in insufficient capability to extract the features of small objects in complex environments. All in all, the detection performance of the original SSD algorithm for small objects in complex environments needs to be improved. To meet the need of detecting hidden objects in passive terahertz images, some improvements to its network structure are also required.
### Hidden object detection in terahertz security images
In recent years, passive terahertz imaging technology has shown significant development prospects in the field of security inspection. It is of great significance to detect dangerous objects hidden in the human body in passive terahertz security images. At present, some scholars have carried out research on passive terahertz human body security image object detection algorithms [30,31]. According to the characteristics of terahertz images, Zhang et al. adopted the Otsu threshold segmentation algorithm and applied the contour tracking method to extract contours, which can realize object detection in human terahertz images [32]. Aiming at the difficulty of detecting edge objects in human terahertz images, Jiang et al. proposed an automatic recognition algorithm for human edge objects [33]. Santiago Lopez Tapia et al. proposed a method combining image processing and statistical machine learning techniques to solve the problem of object localization detection in terahertz images [34]. Niu et al. proposed a terahertz human image processing and recognition method based on the principle of saliency and sparse coding, which can realize automatic recognition of hidden objects in the human body [35].
In these methods, the objects were first separated from the image and then classified and recognized in a complex and slow process. Moreover, the above traditional object detection methods have certain defects and poor generalization ability. The performance of these algorithms is often affected by the complexity of the image background. The simpler the image background, the higher the object detection efficiency and the better the detection performance. On the contrary, once the image background becomes complex, the efficiency and performance of object detection will decrease accordingly.
The convolutional neural network (CNN) [36] based on deep learning technology can solve the defects of traditional methods well. CNN can not only complete feature extraction, but also has good robustness and strong feature expression ability. These properties enable it to accurately localize detection objects in both simple and complex environments. Liu Chen et al. introduced the Focal Loss [22] on the basis of Faster R-CNN and detected concealed objects in active millimeter-wave images without using transfer learning [16]. Hong Xiao et al. proposed the R-PCNN algorithm, which adds traditional image preprocessing methods to the front end and improves the speed and accuracy of object detection and recognition [17]. Xi Yang et al. exploited the spatiotemporal information of terahertz image sequences and used CNN to achieve automatic object detection and recognition [18]. Extensive studies show that there is still space for improvement in the accuracy and speed of current CNN-based terahertz image object detection algorithms.
## Methods
According to the characteristics of terahertz images, we propose a novel method for object detection in terahertz human security images. This method enables accurate real-time detection of concealed objects in terahertz images. The improved SSD framework is shown in Fig. 2. In general, the improved SSD object detection algorithm is divided into the following four parts. Firstly, the basic network VGGNet-16 is replaced with the deep convolutional network ResNet-50 to enhance the feature extraction ability. Additional feature layers are introduced into the basic network to further enhance the feature expression capability of the object detection layer. Afterward, three feature maps of different scales are fused in the feature extraction layer to enhance the semantic information correlation between the front and rear scale maps. Next, a hybrid attention mechanism is introduced into SSD to enhance the semantic information of high-level feature maps. This method can improve the algorithm’s ability to obtain object details and position information, thereby reducing the missed detection rate and false detection rate. Finally, the Focal Loss function is introduced to improve the robustness of the algorithm by increasing the weight of negative samples and hard samples in the loss function.
The overall flowchart of our proposed algorithm is shown in Fig. 3. During the training phase, the objects in the input image are marked to obtain the location information and category information of the real target. Then the model is trained to generate the final improved SSD object detection model. During the testing phase, each test image generates N boxes that may contain objects. Ground truth offsets and class scores are then computed using the model generated during the training phase, resulting in N classification results per image. Finally, the non-maximum suppression algorithm is used to output the final result.
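The final non-maximum suppression step mentioned above can be sketched in plain Python as greedy IoU suppression. The box format, scores, and the 0.5 threshold below are illustrative choices, not the settings used in the paper.

```python
# Greedy non-maximum suppression: keep the highest-scoring box, drop
# every remaining box that overlaps it too much, and repeat.
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, thresh=0.5):
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]: the overlapping duplicate is suppressed
```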
### Ethical statement
This study conforms to the ethical guidelines of the Declaration of Helsinki revised in 2013. The study was approved by the ethics committee of the Aerospace Information Research Institute, Chinese Academy of Sciences. All experiments were performed by relevant guidelines and regulations. We confirmed that informed consent had been obtained from all subjects. All images were deidentified before inclusion in this study.
### Improvement of feature extraction network
The VGGNet-16 network is a simple stack of traditional CNN structures with many parameters. As the number of network layers deepens, there are not only the phenomena of gradient disappearance and gradient explosion, but also the problem of network degradation. In response to the problems existing in the traditional CNN structure, the team of Kaiming He of MSRA proposed a convolutional neural network called the Deep Residual Network (ResNet) [20], which successfully solved the above two problems by introducing a batch normalization (BN) layer and residual blocks. The BN layer normalizes the feature maps of the middle layers to speed up network convergence and improve accuracy. The residual block fits the residual map by adding a skip connection between the network layers. It can directly transfer the optimal weight of the front layer to the back layer, so as to achieve the effect of eliminating network degradation. The comparison between the ordinary structure and the residual structure is shown in Fig. 4. The ResNet network adopts the skip structure in Fig. 4b as the basic network structure, also known as a Bottleneck. The skip structure enables the ResNet network to have deeper layers and relatively better network performance than ordinary networks. As shown in Fig. 4, H(x) is the desired mapping of the stacked network layers, x is the input to the current stacked block, and the use of the ReLU activation function shortens the learning cycle. Assuming that the n nonlinear layers can approximately express a residual function, the stacked network layers are fitted to another mapping $$F(x)=H(x)-x$$, and the final underlying mapping becomes $$H(x)=F(x)+x$$. The idea of the residual structure can effectively solve the problems of network degradation and vanishing gradients, and also improve network performance.
By constructing residual learning, the residual network can approximate a better-desired mapping by approximating the coefficients of multiple nonlinear connections to zero.
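The residual mapping $$H(x)=F(x)+x$$ described above can be sketched as a standard ResNet bottleneck block. The following PyTorch code is an illustrative implementation, not the paper's exact configuration; the 1×1-3×3-1×1 layout and 4× channel expansion are the conventional ResNet-50 values.

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """ResNet bottleneck: 1x1 reduce -> 3x3 -> 1x1 expand, plus skip connection.

    Sketch of H(x) = F(x) + x; layer widths follow the standard ResNet-50
    design, which may differ from the paper's exact configuration.
    """
    expansion = 4

    def __init__(self, in_ch, mid_ch, stride=1):
        super().__init__()
        out_ch = mid_ch * self.expansion
        # F(x): three convolutions, each followed by batch normalization
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        # Projection shortcut when the shape changes, identity otherwise
        self.shortcut = (nn.Identity() if stride == 1 and in_ch == out_ch
                         else nn.Sequential(
                             nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                             nn.BatchNorm2d(out_ch)))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + self.shortcut(x))  # H(x) = F(x) + x
```

Because the skip connection adds x directly, the gradient has an unimpeded path through the block, which is what counteracts degradation in very deep networks.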
The detection of concealed objects in passive terahertz images is affected by complex backgrounds and similar interference. Research has proved that deepening the network structure is effective for improving feature detection. However, VGGNet-16 is a relatively shallow network, which is insufficient to extract features of hidden objects. Therefore, we replace the feature extraction network with ResNet-50, a network with deeper and richer semantic information. ResNet-50 has a total of 16 Bottlenecks, each of which contains 3 convolutional layers. Together with the input layer and the final fully connected layer, a 50-layer residual network is formed. The comparison of network parameters and complexity between VGGNet-16 and ResNet-50 is shown in Table 1. It can be seen from Table 1 that the parameters and floating-point operations of the ResNet-50 network are much lower than those of the VGGNet-16, which proves that ResNet-50 is lighter and has the conditions for faster speed.
Compared to AlexNet, GoogLeNet, and VGGNet-16, ResNet-50 is known for achieving lower error rates and higher accuracy than other architectures37. Each convolutional stage of ResNet-50 is deeper than its VGGNet-16 counterpart, which enhances the network's learning ability and further improves the extraction of features of concealed objects in images. ResNet-50 introduces a residual module in the convolutional layers, which solves the training degradation caused by deepening the network. The network has high convolution efficiency and reduces the redundancy of the algorithm. Taking accuracy and detection speed into account, the novel CNN based on ResNet-50 is more accurate than VGGNet-16. By using ResNet-50 as the backbone network, data features can be better extracted for classification and loss computation. The improved algorithm uses ResNet-50 as the extraction network, removes the fully connected layer in ResNet-50, and adds several additional convolutional layers to extract feature maps at different scales. In this way, the restriction that the fully connected layer places on the input image size is removed, and the network obtains feature maps of different sizes to construct the multi-scale detection part of the network. The feature maps of the feature extraction layers of the improved algorithm have sizes of 38 × 38, 19 × 19, 10 × 10, 5 × 5, 3 × 3 and 1 × 1, with 128, 256, 512, 256, 128 and 64 channels, respectively. Shallow convolutional layers extract relatively small features, while deep convolutional layers extract richer information. Therefore, shallow and deep convolutional feature maps are used to detect small and large objects, respectively.
Replacing the original front-end network VGGNet-16 with the ResNet-50 network can improve the accuracy on the basis of the original SSD algorithm, but there are still problems such as poor real-time performance, false detection, and repeated detection of small objects. Therefore, feature fusion techniques are used to improve the SSD algorithm.
### Feature fusion module
The SSD algorithm takes full advantage of multiple convolutional layers to detect objects. It is robust to scale changes of objects, but its disadvantage is that it easily misses small objects. The network structure of the SSD algorithm is shown in Fig. 1. Each convolutional layer corresponds to one object scale, thus ignoring the relationships between layers. The low-level feature layer Conv_3 has a resolution of 38 × 38 and contains many edges and much non-target information. Conv_4 and Conv_5 have lower resolution but basic contour information, and thus a richer and more detailed feature representation. As the number and depth of the network layers increase, the scale of the convolutional layers gradually decreases, and the semantic information becomes more and more abundant. However, the low-level Conv_3 does not utilize high-level semantic information, resulting in poor detection of small objects.
The improved algorithm fuses adjacent low-level, high-resolution feature maps with clear and detailed features and high-level, low-resolution feature maps with rich semantic information. Feature fusion enhances the ability of the low-level feature layer to fully express the detailed features of small objects, and improves the detection effect of the SSD algorithm on small objects. The feature fusion structure proposed in this paper is shown in Fig. 5. The improved SSD model upsamples Conv_4 and Conv_5 by bilinear interpolation and fuses them with Conv_3. Better upsampling results are obtained with bilinear interpolation than with nearest-neighbor interpolation. We use concatenation to fuse the high-semantic feature maps of Conv_4 and Conv_5, using them to supplement the feature information of the Conv_3 convolutional feature maps. Moreover, we keep the scale of Conv_3 as the scale of the new prediction stage. This top-down passing of fusion information further combines the high-level semantic information of deep layers with the detailed information of shallow layers.
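The fusion step can be sketched in a few lines of PyTorch: the deeper maps are bilinearly upsampled to Conv_3's spatial size and concatenated along the channel axis. The channel counts in the test below are illustrative assumptions, not the paper's exact dimensions.

```python
import torch
import torch.nn.functional as F

def fuse_features(conv3, conv4, conv5):
    """Sketch of the fusion module: upsample the deeper feature maps to
    Conv_3's resolution with bilinear interpolation, then concatenate.
    A BN layer per input (as the paper adds) is omitted for brevity."""
    h, w = conv3.shape[2:]
    up4 = F.interpolate(conv4, size=(h, w), mode="bilinear", align_corners=False)
    up5 = F.interpolate(conv5, size=(h, w), mode="bilinear", align_corners=False)
    return torch.cat([conv3, up4, up5], dim=1)  # channel-wise concatenation
```

The fused map keeps Conv_3's spatial scale, so it can directly replace Conv_3 as the new prediction stage.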
As is well known, updating the parameters of an earlier convolutional layer changes the data distribution of the subsequent input layer, producing large differences in the data distributions of the convolutional feature layers. The feature maps therefore differ considerably and cannot be concatenated directly. When a network layer changes slightly, the changes are accumulated and amplified through the fusion layer, slowing the convergence of the algorithm. Therefore, a BN layer is added after each feature layer for normalization. BN layers mitigate the slow training caused by increased model complexity. The improved SSD algorithm adopts multi-level fusion of different scales, combined with the idea of FPN21, and transfers the feature information of feature maps of different scales from top to bottom. Feature fusion provides feature representations with rich semantic information, improving the descriptiveness of the fused features.
### Attention mechanism CBAM
The feature maps extracted by the feature extraction network contain not only object features but also similar interference features. Concealed objects in human terahertz images and other similar objects are given the same importance in the feature maps, which is not conducive to detecting hidden objects in complex backgrounds. Therefore, to improve the feature maps' ability to recognize objects in specific regions and specific channels, and to reduce the negative interference of complex backgrounds and similar objects, an attention mechanism is applied in both the channel and spatial dimensions. The Convolutional Block Attention Module (CBAM)38 is a lightweight, general-purpose module that saves computational resources and parameters. For a given feature map, attention weights are sequentially derived along the channel and spatial dimensions, and the features are then adaptively refined by multiplying the weights with the original feature map. The attention mechanism comprises two modules: channel attention and spatial attention. The structure of the CBAM attention module is shown in Fig. 6. Usually the two modules are applied in sequence, and better results are achieved by placing channel attention first. The improved algorithm attaches the CBAM dual attention mechanism after each output feature map to increase the network's attention to concealed objects.
Given an intermediate feature map $$F \in \mathbb {R}^{C \times H \times W}$$ as input, CBAM sequentially infers a 1D channel attention map $$M_{c} \in \mathbb {R}^{C \times 1 \times 1}$$ and a 2D spatial attention map $$M_{s} \in \mathbb {R}^{1 \times H \times W}$$, as shown in Fig. 6. The overall attention process can be summarized as:
\begin{aligned}&F^{\prime }=M_{c}(F) \otimes F \end{aligned}
(1)
\begin{aligned}&F^{\prime \prime }=M_{s}\left( F^{\prime }\right) \otimes F^{\prime } \end{aligned}
(2)
where $$\otimes$$ denotes element-wise multiplication. During multiplication, the attention values are broadcast accordingly: channel attention values are broadcast along the spatial dimension and vice versa. $$F^{\prime \prime }$$ is the final refined output. Figures 7 and 8 describe the computation process of each attention map. The details of each attention module are described below.
#### Channel attention module
The structure of the channel attention module is shown in Fig. 7. The channel attention module aggregates spatial information of feature maps by using average pooling and max pooling operations, generating two different spatial context descriptors: $$F_{a v g}^{c}$$ and $$F_{max}^{c}$$, which denote average-pooled features and max-pooled features respectively. The two descriptors are then forwarded to the shared network to generate the channel attention map $$M_{c} \in \mathbb {R}^{C \times 1 \times 1}$$. The shared network consists of a multilayer perceptron (MLP) with one hidden layer. To reduce parameter overhead, the hidden activation size is set to $$\mathbb {R}^{C / r \times 1 \times 1}$$, where r is the reduction ratio. After applying the shared network to each descriptor, the channel attention module merges the output feature vectors using element-wise summation. Briefly, the channel attention module is computed as:
\begin{aligned} \begin{aligned} M_{c}(F)&=\sigma ({\text {MLP}}({\text {AvgPool}}(F))+{\text {MLP}}({\text {MaxPool}}(F))) \\&=\sigma \left( W_{1}\left( W_{0}\left( F_{a v g}^{c}\right) \right) +W_{1}\left( W_{0}\left( F_{\max }^{c}\right) \right) \right) \end{aligned} \end{aligned}
(3)
where $$\sigma$$ represents the sigmoid function, $$W_{0} \in \mathbb {R}^{C / r \times C}$$, and $$W_{1} \in \mathbb {R}^{C \times C/r}$$. Note that both inputs share the MLP weights $$W_{0}$$ and $$W_{1}$$, and that the ReLU activation function follows $$W_{0}$$.
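Eq. (3) maps directly onto code. The sketch below implements the shared MLP with 1 × 1 convolutions standing in for $$W_{0}$$ and $$W_{1}$$ (a common equivalent formulation); the reduction ratio r is a parameter.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Channel attention of CBAM (Eq. 3): a shared MLP applied to the
    avg-pooled and max-pooled descriptors, summed, then sigmoid."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),  # W0
            nn.ReLU(inplace=True),                                      # ReLU after W0
            nn.Conv2d(channels // reduction, channels, 1, bias=False),  # W1
        )

    def forward(self, x):
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1))  # MLP(AvgPool(F))
        mx = self.mlp(F.adaptive_max_pool2d(x, 1))   # MLP(MaxPool(F))
        return torch.sigmoid(avg + mx)               # M_c(F), shape (N, C, 1, 1)
```

The output is broadcast-multiplied with F to obtain $$F^{\prime }$$ of Eq. (1).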
#### Spatial attention module
The spatial attention module utilizes the spatial relationship of features to generate a spatial attention map. Figure 8 depicts the computation process of the spatial attention map. Unlike channel attention, spatial attention focuses on ‘where’ the informative parts are, which is complementary to channel attention. To compute the spatial attention, average pooling and max pooling operations are first applied along the channel axis and concatenated to generate an efficient feature descriptor. A convolution layer is then applied to the concatenated descriptor to generate a spatial attention map $$M_{s}(F) \in \mathbb {R}^{H \times W}$$, which encodes the locations to emphasize or suppress. The specific operation process is as follows.
The channel information of the feature maps is aggregated using two pooling operations to generate 2D maps: $$F_{a v g}^{s} \in \mathbb {R}^{1 \times H \times W}$$ and $$F_{max}^{s} \in \mathbb {R}^{1 \times H \times W}$$. Each represents the average-pooled and max-pooled features across the channel. They are then concatenated and convolved through standard convolutional layers to generate a 2D spatial attention map. To sum up, the spatial attention is calculated as:
\begin{aligned} \begin{aligned} M_{s}(F)&=\sigma \left( f^{7 \times 7}([{\text {AvgPool}}(F) ; {\text {MaxPool}}(F)])\right) \\&=\sigma \left( f^{7 \times 7}\left( \left[ F_{a v g}^{s} ; F_{\max }^{s}\right] \right) \right) \end{aligned} \end{aligned}
(4)
where $$\sigma$$ represents the sigmoid function and $$f^{7 \times 7}$$ denotes a convolution operation with a filter size of 7 × 7.
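Eq. (4) likewise translates to a few lines: channel-wise average and max pooling, concatenation, and a 7 × 7 convolution followed by a sigmoid. A minimal PyTorch sketch:

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Spatial attention of CBAM (Eq. 4): channel-wise avg and max maps
    concatenated, then a 7x7 convolution and a sigmoid."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size,
                              padding=kernel_size // 2, bias=False)  # f^{7x7}

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)   # F_avg^s, shape (N, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)  # F_max^s, shape (N, 1, H, W)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))  # M_s(F)
```

Applying the channel module first and this module second reproduces the refinement sequence of Eqs. (1)-(2).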
The improved algorithm is based on SSD-ResNet-50 and adds a CBAM module after each of the 6 feature maps of different sizes. By adjusting the size of the feature maps, each feature map retains its original size after the feature weighting of the CBAM module. The attention mechanism further enhances the semantic information of high-level feature maps, reduces the missed detection rate, and improves the robustness of the algorithm.
### Improvement of the loss function
The loss function of the original SSD network consists of a weighted sum of the confidence loss and the position loss. The specific expression is calculated as:
\begin{aligned} L(x, c, l, g)=\frac{1}{N}\left( L_{c o n f}(x, c)+\alpha L_{l o c}(x, l, g)\right) \end{aligned}
(5)
where $$L_{c o n f}$$ and $$L_{loc}$$ denote the confidence loss and localization loss, respectively. $$\alpha$$ represents the weight of the localization loss, and c and l are the category confidence and position offset of the prediction box, respectively. x indicates whether a prior box matches a ground-truth box of a given category: $$x=1$$ if they match and $$x=0$$ otherwise. Furthermore, g represents the offset between the ground-truth bounding box and the prior bounding boxes, and N denotes the number of matched prior bounding boxes.
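Given precomputed confidence and localization losses, Eq. (5) reduces to a weighted sum scaled by 1/N:

```python
def ssd_loss(l_conf, l_loc, n, alpha=1.0):
    """Eq. (5): weighted sum of confidence and localization losses,
    averaged over the N matched prior boxes."""
    return (l_conf + alpha * l_loc) / n
```

With the default $$\alpha =1$$ the two terms contribute equally, as in the original SSD formulation.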
The problem of sample imbalance can be solved by adjusting the positive and negative sample ratio parameters in the input network. To solve the problem of sample imbalance in the SSD algorithm, we use the Focal Loss function22 to replace the confidence loss function in the original loss function. Its specific expression can be summarized as:
\begin{aligned} F L\left( p_{t}\right) =-a_{t}\left( 1-p_{t}\right) ^{\gamma } \log \left( p_{t}\right) \end{aligned}
(6)
where $$p_{t}$$ represents the classification probability of the corresponding class, and $$\gamma$$ is greater than zero and controls the rate at which easy samples are down-weighted. $$a_{t}$$ is a value between 0 and 1 that serves as a weight to adjust the proportion of positive and negative samples. For easy samples, $$p_{t}$$ is relatively large, so the weight is naturally small. Conversely, for hard samples, $$p_{t}$$ is relatively small, so the weight is relatively large, and the network tends to use such hard samples to update its parameters.
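Eq. (6) in pure Python, to make the down-weighting of easy samples concrete:

```python
import math

def focal_loss(p_t, alpha_t=0.25, gamma=2.0):
    """Focal Loss of Eq. (6): FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).
    Large p_t (easy sample) -> small modulating factor -> small loss."""
    return -alpha_t * (1 - p_t) ** gamma * math.log(p_t)
```

For an easy sample (e.g. $$p_{t}=0.9$$) the loss is orders of magnitude smaller than for a hard one ($$p_{t}=0.1$$), so hard samples dominate the parameter updates; with $$\gamma =0$$ the expression reduces to an $$a_{t}$$-weighted cross-entropy.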
The Focal Loss is visualized for several values of $$\gamma \in [0,5]$$ in Fig. 9. The curve for $$\gamma =0$$ is the loss function used in SSD. It can be seen that even the easily separable samples then incur a high loss, so they account for a large proportion of the total loss. As $$\gamma$$ increases, the weight of hard samples among the input samples increases, showing that Focal Loss balances positive against negative samples through $$a_{t}$$ and hard against easy samples through $$\gamma$$. In this way, the samples participating in training are distributed more evenly, further improving the reliability of the detection algorithm. In this paper we set $$a_{t}=0.25$$ and $$\gamma =2$$ during training, because extensive experiments show that these parameters achieve the best results.
## Experimental results and discussion
In this section, the image dataset is introduced first, and then the evaluation metrics are described. The results of the ablation experiments demonstrate the effectiveness of the improved points proposed by this method. Extensive comparison experiments between the improved SSD algorithm and other methods are conducted in succession. In addition, we give an intuitive discussion about the visual detection effect of this algorithm and depict the ROC curve.
### Datasets
In the research field of terahertz concealed object detection, few passive image datasets are currently publicly available. The passive terahertz dataset used in this paper was constructed by a team led by researcher Chao Li from the Aerospace Information Research Institute, Chinese Academy of Sciences. The dataset was collected with the 0.2 THz passive imaging device shown in Fig. 10a; a total of 896 images were collected. The size of each image is 160 × 392, and 0–2 objects to be detected (“pistol model” and “mobile phone model”) are randomly hidden under clothes at different positions on the human torso. To ensure the authenticity and reliability of the results, the imaging experiments were carried out with real people carrying concealed objects. The pistol model and mobile phone model carried by the volunteers are shown in Fig. 10b, and the imaging results are shown in Fig. 10c. In the future, we plan to gradually increase the variety of objects. In addition, the 896 images were expanded to 1792 images by horizontal flipping as a data augmentation method to increase the diversity of samples. The following training, evaluation, and comparison experiments are all performed on this dataset.
### Evaluation metrics
Since passive terahertz image object detection in the security field is still a relatively “niche” area of computer vision, no standard evaluation protocol has been defined. We evaluate our proposed improved SSD algorithm using mean average precision (mAP), F1 score, the ROC curve, the AUC value, and FPS. The following is a comprehensive overview of these metrics.
Object detection performance can be evaluated with average precision (AP) and mean average precision (mAP), computed from the intersection over union (IOU) between ground-truth and predicted bounding boxes. AP is defined as the area enclosed by the precision-recall curve and the axes. Precision measures the classification ability of the detection algorithm: the higher the precision, the stronger the model's ability to classify objects. Recall measures the detection ability of the model: the higher the recall, the stronger the model's ability to find objects. For each category, a precision-recall curve (PR curve) can be obtained, and the area enclosed by the curve and the recall axis is the AP value. The mAP is obtained by averaging the APs of all classes. At the same confidence level, the larger the mAP, the better the detection performance of the model. Precision and recall are calculated as follows:
\begin{aligned}&Precision=\frac{T P}{T P+F P} \end{aligned}
(7)
\begin{aligned}&Recall=\frac{T P}{T P+F N} \end{aligned}
(8)
True positive (TP) means the model predicts a positive sample and the prediction is correct; false positive (FP) means the model predicts a positive sample but the prediction is wrong; false negative (FN) means the model predicts a negative sample but the prediction is wrong. These evaluation metrics are illustrated in Fig. 11.
Additionally, the F1 score is a composite metric that combines precision and recall. The advantage of the F1 score is that it provides a single measure of quality that is easier to understand. Its calculation formula is as follows:
\begin{aligned} F1 \; Score = 2 \times \frac{Precision \times Recall}{Precision + Recall} \end{aligned}
(9)
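Eqs. (7)-(9) can be computed directly from raw TP/FP/FN counts:

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 score from Eqs. (7)-(9)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

The F1 score is the harmonic mean of precision and recall, so it only approaches 1 when both are high.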
A receiver operating characteristic (ROC) curve40, commonly used to visualize the discriminant ability of a binary classifier, is a plot of the true positive rate (TPR, sensitivity, recall) versus the false positive rate (FPR, 1-specificity) at various given cut points. A typical ROC curve is a plot of FPR on the x-axis versus TPR on the y-axis. The area under the curve (AUC) maps the entire ROC curve into a single number that reflects the overall performance of the classifier over all thresholds. Typically, the value of AUC is between 0.5 and 1. The larger the value, the better the discriminative ability of the classifier. The formulas for calculating TPR and FPR are as follows:
\begin{aligned}&TPR=\frac{T P}{T P+F N} \end{aligned}
(10)
\begin{aligned}&FPR=\frac{F P}{F P+T N} \end{aligned}
(11)
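Eqs. (10)-(11) give one point on the ROC curve; sweeping the decision threshold produces a set of such points, and the AUC can then be approximated with the trapezoidal rule:

```python
def roc_point(tp, fn, fp, tn):
    """One (FPR, TPR) point on the ROC curve, from Eqs. (10)-(11)."""
    return fp / (fp + tn), tp / (tp + fn)

def auc_trapezoid(points):
    """Approximate AUC by the trapezoidal rule over (FPR, TPR) points."""
    pts = sorted(points)
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area
```

A perfect classifier traces (0, 0) → (0, 1) → (1, 1) for an AUC of 1.0, while a random classifier lies on the diagonal with an AUC of 0.5.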
The detection speed is mainly used to evaluate the real-time performance of the algorithm. If the inference speed cannot meet the real-time requirements, our model cannot be used in the real environment. FPS (frames per second) is currently a common metric for evaluating model speed. In two-dimensional image detection, it is defined as the number of pictures that can be processed per second. The larger the value, the shorter the detection time and the faster the speed.
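FPS as defined here is simply images processed per second; a minimal timing sketch, where the inference function is a placeholder for the detector:

```python
import time

def measure_fps(infer_fn, inputs):
    """FPS = number of inputs processed divided by elapsed wall-clock time.
    infer_fn stands in for a detector's per-image inference call."""
    start = time.perf_counter()
    for x in inputs:
        infer_fn(x)
    elapsed = time.perf_counter() - start
    return len(inputs) / elapsed
```

In practice a few warm-up iterations are usually run first so that one-time initialization costs do not deflate the measured rate.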
### Experimental details
The experimental environment of this paper is as follows: in terms of hardware, we use an Intel(R) Core(TM) i5-10400F CPU and a GeForce RTX 3090 Ti graphics card. In terms of software, we use the Ubuntu 20.04 operating system, the PyTorch deep learning framework, and the Python 3.6 programming language.
Batch_size is the number of images fed into training at one time. A larger batch_size makes model training more stable but increases the amount of computation. Considering the computing power of our graphics card, we chose a batch_size of 16. The learning rate is the speed at which the model is iteratively trained, and setting it correctly makes the loss curve smoother. The initial learning rate was set to 2e-4 and the learning rate decay weight to 0.001. The number of training iterations is 6000, the parameter update method is gradient descent with momentum (momentum SGD), and the momentum factor is set to 0.9. The model was saved after every 500 iterations during training. After training, object detection accuracy is used as the criterion for selecting the optimal model, which is validated on the test set, and the experimental results are analyzed. The change of the loss function during training is shown in Fig. 12. After 6000 iterations the network converges, with a total loss of 1.62.
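The stated hyperparameters map onto a momentum-SGD optimizer as sketched below. Interpreting the "learning rate decay weight" of 0.001 as PyTorch's weight_decay is an assumption, and the model here is a placeholder, not the paper's network.

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # placeholder standing in for the improved SSD network
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=2e-4,            # initial learning rate stated in the text
    momentum=0.9,       # momentum factor stated in the text
    weight_decay=0.001, # assumed reading of the "decay weight" of 0.001
)
```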
### Ablation study of the improved SSD algorithm
This section carries out the ablation study of the improved SSD algorithm to prove its superiority over the baseline model. We analyze the impact of improving feature extraction network, adding feature fusion network, adding attention mechanism, and improving loss function on recognition accuracy. The results of the ablation experiments are shown in Table 2.
Model-1 replaces the backbone network VGGNet-16 with ResNet-50, and its mAP is 3.61$$\%$$ higher than that of SSD. Model-2 adds a feature fusion module on top of ResNet-50 to construct features with rich semantic information; its mAP is 3.73$$\%$$ higher than SSD and 0.12$$\%$$ higher than Model-1. Model-3 adds the spatial-channel attention mechanism CBAM on top of ResNet-50, and its mAP is 3.20$$\%$$ higher than that of SSD. Model-4 replaces the loss function with Focal Loss on top of ResNet-50, and its mAP is 0.33$$\%$$ higher than SSD. Model-5 replaces the feature extraction network with ResNet-50 on the basis of the original SSD algorithm and adds both the feature fusion module and the hybrid attention mechanism CBAM; its mAP is 3.82$$\%$$ higher than that of SSD. Finally, the mAP of the fully improved SSD algorithm reaches 99.92$$\%$$. In summary, the different modules each improve the performance of the algorithm, and their combined gains validate the effectiveness of our approach.
Furthermore, the comparison of mAP values obtained by different models in the first 6000 iterations is shown in Fig. 13. It can be seen that our proposed improved model has the highest mAP value with a high mAP of 99.92% at the 6000th iteration.
### Comparison with the state-of-the-art detection methods
We compared the improved SSD algorithm with other state-of-the-art detection approaches, including Faster RCNN, YOLO v5, RetinaNet, and the baseline SSD model, to verify its overall performance. Table 3 shows that the improved SSD model reaches a detection accuracy of 99.92%, the best accuracy, even outperforming the existing YOLO v5 detection model. Although the improved SSD model is slower than the fastest method, YOLO v5, it offers a 1.56% accuracy gain. At the same time, the improved model runs at 17 frames per second, which is sufficient for real-time detection. These findings suggest that our proposed enhanced algorithm outperforms the other techniques.
As shown in Table 3, the detection accuracy of the improved algorithm is improved compared with the mainstream algorithms. Meanwhile, due to the increased complexity of the network, the algorithm in this paper sacrifices the speed, but it can still meet the needs of real-time detection. We believe there are two primary reasons for such superiority. (1) The baseline model of the improved SSD algorithm is suitable for this task. Its detection accuracy is only 2.71% lower than Faster RCNN, but the speed is 2.55 times faster. (2) The improved SSD algorithm can robustly improve the accuracy of the baseline model. The specific reasons can be seen in “Ablation study of the improved SSD algorithm”.
We employed the passive terahertz human security image dataset given in the “Datasets” section above for detection experiments, and the visual detection results are shown in Fig. 14. As demonstrated in the red boxes in Fig. 14a, the original SSD algorithm has missed detection, and the detection confidence also has to be enhanced. The improved SSD algorithm effectively addresses the issue of missed detection while also enhancing detection confidence. The improved algorithm’s detection results are displayed in Fig. 14b. To summarize, the improved SSD algorithm outperforms the original algorithm in terms of robustness and accuracy, as well as overall performance.
### ROC curve
The ROC curve of the proposed model is shown in Fig. 15, demonstrating the excellent recognition ability of our model for concealed objects in passive terahertz images. AUC provides summary measures for the performance of a system. It provides meaningful performance analysis. The AUC value of our proposed model is 0.87, indicating that the model has a good discriminative ability.
## Conclusion
To further improve the detection accuracy and speed of concealed objects in terahertz human security images, we proposed an improved SSD algorithm to promote its algorithm performance in this paper. First, the ResNet-50 network was used to replace the original VGGNet-16 network to overcome the VGG network degradation problem. Afterward, we designed a feature fusion module to fuse deep features and shallow features to construct features rich in semantic information, which is beneficial to the detection of small objects. In the next parts, the hybrid attention mechanism was introduced in the SSD network to improve the network’s attention to concealed objects. Finally, we introduced the Focal Loss function in the loss function to improve the robustness of the algorithm. The results of ablation experiments illustrate the efficacy of the changes described in this paper. At the same time, we also compare the proposed model with other mainstream detection methods on the terahertz human security image dataset. The results showed that our method achieves significantly improved detection accuracy in comparison with the original algorithm when the speed is only slightly reduced. The detection accuracy of the proposed method is as high as 99.9%, and the detection speed is about 17 FPS. Therefore, it can meet the real-time detection needs of security inspection scenarios.
Our approach can be improved in the future as follows: (1) The training dataset can be augmented to improve the generalization ability of the model. (2) The model can be made more lightweight to reduce the amount of computation and improve the detection speed. (3) More emphasis should be placed on the model's interpretability, making the black box of the CNN-based model more transparent and reasonable. We believe that applying Explainable AI will help reduce the uncertainty of deep learning models. The insights gained from this research facilitate the application of deep learning techniques in smart security screening scenarios. We believe the method proposed in this paper will have important application value in terahertz intelligent security systems.
# The phase space trajectory of a single particle falling freely from height is? [closed]
The phase space trajectory of a single particle falling freely from height is?
Phase space is a plot between momentum and position, and since kinetic energy increases the momentum must increase with position, so option "2" must be correct, but the answer key shows that answer in option "4". Please tell me the correct method to plot it if my observation is wrong