https://brainmass.com/math/functional-analysis/implicit-function-theorem-theorem-lagrange-multipliers-596801
# Implicit Function Theorem and Theorem of Lagrange Multipliers
I don't understand how they can take the derivatives of g_1 and g_2 with respect to x_1 and x_2 when x_1 and x_2 are themselves defined as functions of the remaining variables:
h_1 = h_1(x_3, x_4, ..., x_n) = x_1 and h_2 = h_2(x_3, x_4, ..., x_n) = x_2.
I need a mathematical justification for why this can be written simply as (21) when x_1 and x_2 are defined as functions of the other variables x_3, x_4, ..., x_n.
Is this the chain rule, or something else? Please go through the detailed steps showing how this yields an ordinary square matrix as in the implicit function theorem, and thereby why (21) is a valid instance of the theorem's nonsingular-matrix hypothesis.
#### Solution Preview
Please look at the attached pdf file for a better formatted answer.
\section*{Introduction}
Let us suppose that we have a function $F: \mathbb{R}^2 \to \mathbb{R}$ and consider an equation of the form
$$F(x,y)=0.$$
It is natural to ask: is it possible to write $y$ as a function of $x$?
Let us suppose that it is possible, that is, that there exists a function $y(x)$ satisfying
$$F(x,y(x))=0,$$
and let us further suppose that $F$ and $y(x)$ are differentiable. Using the chain rule we can differentiate the previous equation with respect to $x$:
$$\frac{\partial F}{\partial x}+\frac{\partial F}{\partial y}\,y'(x)=0,$$
from which we obtain
\begin{equation}
\label{fyneq0}
y'(x)=-\frac{\partial F/\partial x}{\partial F/\partial y}.
\end{equation}
Note that in order to make this last calculation it is necessary that $\frac{\partial F}{\partial y}\neq 0$.
\begin{example}
Consider the equation
\begin{equation}
\label{circle}
x^2+y^2=1.
\end{equation}
Is it possible to write $y$ as a function of $x$? The answer is that it is \emph{not} possible to \emph{globally} write $y$ as a function of $x$ (or $x$ as a function of $y$), because the curve described by \eqref{circle} is not the graph of a function: it fails the so-called vertical line test. But if $(x_0,y_0)$ is a point that satisfies \eqref{circle} and $y_0\neq 0$, then it is possible to write $y$ as a function of $x$ in a neighborhood of $(x_0,y_0)$.
Moreover, differentiating the equation with respect to $x$ as we did before in \eqref{fyneq0}, we have
\begin{equation}
\label{derivative-circle}
y'=-\frac{x}{y}.
\end{equation}
If $y_0=0$, then it is not possible to write $y$ as a function of $x$ in a neighborhood of $(x_0,y_0)$. In this case we can write $x$ as a function of $y$ instead, since then $x_0=\pm 1\neq 0$.
For instance, if we are given the point $(1/2,\sqrt{3}/2)$, then $y(x)=\sqrt{1-x^2}$ satisfies \eqref{circle} for $x\in[-1,1]$ and $y(1/2)=\sqrt{3}/2$. If we are given the point $(1/2,-\sqrt{3}/2)$, then $y(x)=-\sqrt{1-x^2}$ satisfies \eqref{circle} for $x\in[-1,1]$ and $y(1/2)=-\sqrt{3}/2$. So the form of $y(x)$ depends on the point around which we want to write $y$ as a function of $x$.
Using \eqref{derivative-circle} we can calculate $y'(1/2)$ for $(x_0,y_0)=(1/2,\sqrt{3}/2)$:
$$y'(1/2)=-\frac{1/2}{y(1/2)}=-\frac{1/2}{\sqrt{3}/2}=-\frac{1}{\sqrt{3}}.$$
Of course, in this case we can take the explicit expression $y=\sqrt{1-x^2}$, differentiate it to get $y'=-x/\sqrt{1-x^2}$, and evaluate: $y'(1/2)=-1/\sqrt{3}$. The point here is that we can know the derivative at $(x_0,y_0)$ without the need for an explicit formula for $y(x)$.
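As a quick numerical sanity check (a Python sketch, not part of the original solution), we can compare the implicit formula $y'=-x/y$ with a finite-difference derivative of the explicit branch $y(x)=\sqrt{1-x^2}$ at $x_0=1/2$:

```python
import math

def y(x):
    """Explicit upper branch of the circle x^2 + y^2 = 1."""
    return math.sqrt(1.0 - x * x)

x0 = 0.5
implicit = -x0 / y(x0)  # y' = -x/y from implicit differentiation

h = 1e-6
central_diff = (y(x0 + h) - y(x0 - h)) / (2 * h)  # central difference

# Both agree with the closed form -1/sqrt(3)
assert abs(implicit - (-1 / math.sqrt(3))) < 1e-12
assert abs(implicit - central_diff) < 1e-8
```

The agreement illustrates the point above: the derivative at $(x_0,y_0)$ is available from $F$ alone, without ever solving for $y(x)$.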
\end{example}
\begin{example}
\label{example-system}
Let us consider a more general case: a system of equations. Let us suppose we have variables $x_1,\dots,x_n$ and $y_1,\dots,y_m$ related by $m$ equations
\begin{equation}
\label{gis}
g_i(x_1,\dots,x_n,y_1,\dots,y_m)=0 \quad\text{for } 1\le i\le m.
\end{equation}
We can write such a system in the following way. Define $F: \mathbb{R}^{n+m}\to \mathbb{R}^m$ componentwise by
$$F(x_1,\dots,x_n,y_1,\dots,y_m)=\begin{pmatrix} g_1(x_1,\dots,x_n,y_1,\dots,y_m) \\ \vdots \\ g_m(x_1,\dots,x_n,y_1,\dots,y_m) \end{pmatrix}.$$
...
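For context (this is the standard statement of the implicit function theorem, summarized here rather than quoted from the attachment): the hypothesis behind a nonsingular matrix like the one in (21) is that the Jacobian of the $g_i$ with respect to the $y$-variables is invertible at the point in question:

```latex
\det\left(\frac{\partial g_i}{\partial y_j}(x_0, y_0)\right)_{i,j=1}^{m} \neq 0
\quad\Longrightarrow\quad
\exists\, y(x) \text{ near } x_0 \text{ with } F\bigl(x, y(x)\bigr) = 0
\text{ and } y(x_0) = y_0 .
```

Differentiating $F(x,y(x))=0$ by the chain rule, exactly as in the scalar case above, then expresses the partial derivatives $\partial y_j/\partial x_k$ in terms of this invertible matrix.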
#### Solution Summary
We explain and give explicit examples that use the Implicit Function Theorem. Later we explain the use of the Implicit Function Theorem in the proof of the Theorem of Lagrange multipliers.
https://www.thejournal.club/c/paper/204231/
#### Convergence analysis of approximation formulas for analytic functions via duality for potential energy minimization
##### Satoshi Hayakawa, Ken'ichiro Tanaka
We investigate the approximation formulas proposed by Tanaka & Sugihara (2018) in weighted Hardy spaces, which are spaces of analytic functions with certain asymptotic decay. Under the criterion of minimum worst-case error of $n$-point approximation formulas, we demonstrate that the formulas are nearly optimal. We also obtain upper bounds on the approximation errors that coincide in asymptotic order with the existing heuristic bounds, via a duality theorem for the minimization problem of potential energy.
https://golem.ph.utexas.edu/category/2006/08/lectures_on_ncategories_and_co.html
## August 22, 2006
### Lectures on n-Categories and Cohomology
#### Posted by John Baez
Here’s a paper that you n-category fanatics might enjoy - especially since it mentions a centaur, a faun-like thing, and a kid falling out of the back of a bus:
Lectures on n-Categories and Cohomology
John Baez and Michael Shulman
The goal of these talks was to explain how cohomology and other tools of algebraic topology are seen through the lens of n-category theory. Special topics include nonabelian cohomology, Postnikov towers, the theory of “n-stuff”, and n-categories for n = -1 and -2.
These talks were extremely informal, glossing over the difficulties involved in making certain things precise, just trying to sketch the big picture in an elementary way. It seemed useful to keep this informal tone in the notes. I cover a lot of material that seems hard to find spelled out anywhere, but nothing new here is due to me: anything not already known by experts was invented by James Dolan, Toby Bartels or Mike Shulman (who took notes, fixed lots of mistakes, and wrote the Appendix).
The talks were very informal, and so are these notes. A lengthy appendix clarifies certain puzzles and ventures into deeper waters such as higher topos theory. For readers who want more details, we include an annotated bibliography.
Posted at August 22, 2006 8:05 AM UTC
### Re: Lectures on n-Categories and Cohomology
I worked a lot today and feel tired now. In order to relax a bit I opened the above lecture notes and learned about (-1)-categories and (-2)-categories. (I had wanted to do this a while ago already.)
An $n$-category is a $\mathrm{Hom}$-thing of an $(n+1)$-category.
Hence a (-1)-category is a $\mathrm{Hom}$-thing of a 0-category.
And a (-2)-category is a $\mathrm{Hom}$-thing of a (-1)-category.
We are being asked to figure out what a monoidal (-1)-category is.
That’s not hard. At least not if you feel sufficiently relaxed about the meaning of the word “is”.
For me, the really hard part is to figure out what the difference between a monoidal (-1)-category and a (-2)-category is.
They shouldn’t be exactly the same, should they?
Hey, this blog is also about philosophy. So I am still on topic!!
As John discusses, calling the two possible (-1)-categories “true” and “false” or “equal” and “not equal” both seem to be reasonable choices.
Had I been asked for a name, though, I would probably have suggested “discrete category on 1-object set” and “discrete category on empty set”, instead.
After all, if we regard a 0-category $S$ as an $\infty$-category with all $p$-morphisms identities for all $p$, then $\mathrm{Hom}_S(x,x)$ is the $\infty$-category with a single object (namely the identity 1-morphism on $x$) and all $p$-morphisms identities. So it's still an $\infty$-category!
Similarly $\mathrm{Hom}_S(x,y)$ for $x \neq y$. This is the $\infty$-category with no object.
That’s what my answer would have been.
However, with that answer I would have found that the unique (-2)-category is also an $\infty$-category with a single object and all $p$-morphisms identities.
Same for a monoidal (-1)-category. This is a 0-category with a single object, hence once again an $\infty$-category with a single object and all morphisms identities.
So it seems I am in trouble. I cannot distinguish between the unique monoidal 0-category, the unique (-2)-category and one of the two (-1)-categories.
However, from p.14 of the lecture notes mentioned above one finds that the experts distinguish the non-empty (-1)-category from the unique (-2)-category by using the distinction between TRUE and NECESSARILY TRUE.
Heh. It’s true that $e^{\pi i} = -1$.
Or so I thought.
Maybe instead it is necessarily true??
How can I tell?
Is there a (-1)-category of solutions to the equation $e^{\pi i} = -1$, or a (-2)-category? Or a monoidal (-1)-category?
OK, I go to bed now. Seems about time.
But before quitting, one more serious comment.
We use 1-functors to describe charged particles.
We use 2-functors to describe charged strings.
We use 3-functors to describe charged membranes.
We use 0-functors to describe charged instantons, aka $(-1)$-branes with a 0-dimensional worldvolume.
Really, we do. At least some people do ($\to$).
Do we need (-1)-functors for anything?
Let’s see. By my way of looking at this situation, a $(-1)$-functor is the same as a 0-functor on a monoidal 0-category.
OK, I can interpret that. That's a charged instanton which is constrained to appear at a prescribed point in spacetime.
Not any point. A fixed point. The point being the single object in our monoidal 0-category.
All right, now it's official. A $D(-2)$-brane is a D-instanton with a prescribed point of appearance.
Maybe I should say it's a D-instanton living in a 0-dimensional target space.
Yeah, that sounds right.
Posted by: urs on August 22, 2006 8:31 PM | Permalink | Reply to this
### Re: Lectures on n-Categories and Cohomology
You sound a bit tired, sort of joking around, and I’m not sure how much you want serious replies to your questions. But, I feel the need to explain this stuff to everyone else.
Everything you say sounds right, but you may not have gotten a feeling for how (-1)-categories and (-2)-categories connect to logic. So, I want to say a tiny bit about that.
To review for the nonexperts… let’s figure out what (-1)-categories and (-2)-categories are. We’ll start with a couple of principles:
• An n-category is a special sort of (n+1)-category.
• A 0-category is a set.
So, whatever a (-2)-category is, it’s a special sort of (-1)-category, which in turn is a special sort of set.
From this viewpoint, (-2)-categories and (-1)-categories can’t be anything new or weird. They’re just certain special sets. As we’ll see, they’re certain incredibly famous sets. But, it will turn out to be handy to think of them other ways too.
To see what these sets are, we need another handy principle:
• Given two objects x,y in an n-category, hom(x,y) is an (n-1)-category. Moreover, any (n-1)-category can arise this way.
So, given two objects x,y of a 0-category - that is, elements of a set! - hom(x,y) is a (-1)-category. Moreover, any (-1)-category can arise this way.
What can hom(x,y) be like in this case? We need another principle:
• If j > n, all j-morphisms in an n-category are identity morphisms.
So, if x and y are elements of a set, all the 1-morphisms in hom(x,y) are identity morphisms.
So, hom(x,y) has one morphism in it if x = y, and none in it otherwise.
So, viewed as a set, the (-1)-category hom(x,y) either has one element or none. Moreover, all (-1)-categories arise this way.
So, viewed as a set, a (-1)-category is just a set with 1 or 0 elements.
But, what’s nice is that we see Boolean logic showing up in n-category theory - tucked in at the bottom, down in the (-1)-categories. This “1” or “0” binary choice is really showing up because the elements x and y are either equal or not equal. So, we can - and should! - also think of the two (-1)-categories as being called true and false.
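A toy sketch of this bottom level (my own illustration in Python, with hypothetical names, not from the lectures): model a 0-category as a set, and hom(x, y) as the (-1)-category it determines, which is the one-element set when x = y and the empty set otherwise.

```python
# The two (-1)-categories, viewed as sets:
TRUE = frozenset({"*"})   # the one-element set: "true" / "equal"
FALSE = frozenset()       # the empty set: "false" / "not equal"

def hom(x, y):
    """hom(x, y) for elements x, y of a 0-category (a set):
    one identity morphism if x == y, nothing otherwise."""
    return TRUE if x == y else FALSE

# Boolean logic tucked in at the bottom of n-category theory:
assert hom(1, 1) == TRUE
assert hom(1, 2) == FALSE
```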
This may seem like a minor point, but Mike develops it into quite a grand one in the Appendix, where he considers the world of sets as a special case of the world of objects in a general topos - where instead of just two truth values, we have a whole Heyting algebra of truth values, as usual in intuitionistic logic.
The point is that just as topos theory lets us drastically generalize our notion of 0-categories (sets), it does the same for (-1)-categories (truth values). It also lets us generalize n-categories for higher values of n, which is even more exciting - but it’s fun, and important, to see how the simple roots of this tree of n-topos theory grow naturally into the fancy intricate branches.
Anyway, returning from topos theory to our classical logic of set theory, there are just two (-1)-categories in the universe: the empty set and the 1-element set. What about (-2)-categories?
Well, by one of our principles, we get these by choosing two objects x,y in a (-1)-category and looking at hom(x,y). We can’t choose two objects if our (-1)-category is the empty set, so let’s take the 1-element set. Then x = y and hom(x,y) has just the identity morphism in it… so hom(x,y) is again the 1-element set.
Moreover, all (-2)-categories arise this way.
So, there’s just one (-2)-category: the 1-element set.
We could call this equal or true. So you’re right, Urs: it’s just the same as the (-1)-category called “true” that we already saw. But it’s playing a somewhat different role now, because it expresses the fact that everything is necessarily equal to itself - this is necessarily true. We don’t have a binary choice at this level - we just have one choice. Instead of a bit of information, a (-2)-category gives no information at all.
As a little puzzle, I’ll ask: how many (-3)-categories are there, and what are they like?
You also seemed to feel funny about the fact that there’s just one monoidal (-1)-category, also named true. This might help you feel better:
• For a 0-category to be monoidal (for a set to be a monoid) is extra structure - there’s a set of ways to make a set into a monoid.
• For a 1-category to be monoidal is extra stuff - there's a category of ways to make a category into a monoidal category.
• For a 2-category to be monoidal is extra 2-stuff - there’s a whole 2-category of ways to make a 2-category into a monoidal 2-category.
And so on… here we are using the yoga of properties, structure and stuff, with its extension to n-stuff.
So, it's actually very nice that for a (-1)-category to be monoidal is a mere property - it either is or isn't! In other words, there's just a truth value of ways to make a (-1)-category monoidal!
Again, a small puzzle: what about monoidal (-2)-categories?
By the way, I never liked the use of the term p-brane to mean something whose extent in space is p-dimensional. It would be nicer to use p for the dimension of the entity in spacetime. Of course it’s too late to change it now, but anyway, then we’d have p-branes coupling to p-forms. 2-branes would be strings, 1-branes would be particles, and 0-branes would be… instantons!
Posted by: John Baez on August 23, 2006 8:29 AM | Permalink | Reply to this
### Re: Lectures on n-Categories and Cohomology
I’m not sure how much you want serious replies to your questions
Sorry. I was in a silly mood, having worked too much.
But I am seriously interested in this.
Mike develops it into quite a grand one in the Appendix, where he considers the world of sets as a special case of the world of objects in a general topos - where instead of just two truth values, we have a whole Heyting algebra of truth values, as usual in intuitionistic logic.
Ah, I should have a look at the appendix then. This sounds very interesting.
While internal to the topos $\mathrm{Set}$, $(-n)$-categories are on the verge of being trivial, pretty much because sets are so "trivial", this may change drastically as we look at $(-n)$-categories in another topos, where sets are replaced with more fancy things.
As a little puzzle, I’ll ask: how many (-3)-categories are there, and what are they like?
The periodic table here stabilizes in the horizontal direction.
A (-3)-category (internal to $\mathrm{Set}$) is a $\mathrm{Hom}$-thing of a (-2)-category. The latter is just the 1-element set $\{\mathrm{necTrue}\}$.
So the unique (-3)-category is
(1)$\mathrm{End}_{\{\mathrm{necTrue}\}}(\mathrm{necTrue}) = \{ \mathrm{Id}_{\mathrm{necTrue}} \} \,.$
Hence it is again a 1-element set.
Actually, I could write
(2)$\mathrm{necTrue} = \mathrm{Id}_{\mathrm{true}} = \mathrm{Id}_{\mathrm{Id}_x}$
And it continues this way. A (-4)-category is the 1-element set
(3)$\left\{ \mathrm{Id}_{\mathrm{Id}_{\mathrm{Id}_{\mathrm{Id}_{x}}}} \right\} \,.$
As one can see, one use of $(-4)$-categories internal to $\mathrm{Set}$ is as a test for the MathML capabilities of your browser. ;-)
I never liked the use of the term $p$-brane to mean something whose extent in space is $p$-dimensional.
True. But it harmonizes well with another awkward counting system - namely that of gerbes.
Since what they call a 1-gerbe is already a 2-structure, they run into the problem that bundles are $0$-gerbes, while functions are (-1)-gerbes.
Whenever you find yourself starting counting at (-1), you know you have chosen wrong conventions at some point.
But if you do it consistently, at least the indices match again.
So a 1-brane (string) couples to a 1-gerbe.
A 0-brane (point) couples to a 0-gerbe (bundle).
A (-1)-brane (instanton) couples to a (-1)-gerbe (function).
We should hence invent an alternative terminology for brane which fits into
$n$-bundles : ($n-1$)-gerbes :: ?? : $(n-1)$-branes.
I suggest $n$-particle.
$n$-bundles : ($n-1$)-gerbes :: $n$-particles : $(n-1)$-branes.
So a particle is a 1-particle is a point.
A string is a 2-particle.
A (-1)-brane is a 0-particle.
$n$-particles couple to $n$-bundles with connection. $\Leftrightarrow$ $(n-1)$-branes couple to $(n-1)$-gerbes with connection.
Posted by: urs on August 23, 2006 9:55 AM | Permalink | Reply to this
### Re: Lectures on n-Categories and Cohomology
I was looking again at the Lectures on $n$-Categories and Cohomology to see if I could find a hint towards the answer of the following question:
What’s an action $n$-groupoid and how is it characterized by morphisms to an $n$-group?
In week 249 John mentions that
Any groupoid with a faithful functor to $G$ is equivalent to the action groupoid $X//G$ for some action of $G$ on some set $X$.
I like to think of it this way: I write $\mathbf{B} G$ for the group $G$ regarded as a one-object groupoid and then notice that every action groupoid sits in a sequence of groupoids
$X \to X//G \to \mathbf{B} G \,,$
where the $X := Disc(X)$ on the left is the discrete groupoid over $X$ ($X$ as objects, no nontrivial morphisms).
In particular, if $X=G$ and we have the right action of $G$ on itself we get
$G \to G//G \to \mathbf{B} G \,,$
which is the groupoid version of the universal $G$-bundle, realizing $\mathbf{E}G := G//G$ as the action groupoid of $G$ action on itself.
What I want to understand is: How does this generalize to higher $n$?
Given an $n$-group $G$ and writing $\mathbf{B} G$ for its one-object $n$-groupoid incarnation, which sequences of $n$-groupoids
$X \to X//G \to \mathbf{B}G$
am I entitled to address as coming from action $n$-groupoids?
I was looking at the lecture notes by John and Mike since they talk about the generalization of “faithful” to higher $n$ (e.g. p. 18).
There the observation of course is that for 1-functors, faithful means surjective on 2-morphisms (identities).
So I suppose it would make sense to call an $n$-functor faithful if it is $(n+1)$-surjective, i.e. surjective on equations between $n$-morphisms. But does that lead to the right characterization of action $n$-groupoids?
It looks intuitively all right, but one problem is that one would hope there to be a notion of $\infty$-action groupoid, but what’s $(\infty+1)$-surjective?
In principle it should be straightforward to work it out:
for $G$ an $n$-group and $C$ an $n$-category with a forgetful (hm…) $n$-functor to $(n-1)Grpd$, we should look at actions of $G$ on objects in $C$, $\rho : \mathbf{B} G \to C$, and then send this forward to $n Grpd$
$\hat \rho : \mathbf{B} G \stackrel{\rho}{\to} C \to (n-1)Grpd \hookrightarrow n Grpd$
and then take a weak colimit there:
$X//G := colim_{\mathbf{B} G}\rho \,.$
That’s how it works for ordinary action 1-groupoids.
By the universal property of the colimit we canonically get the morphism
$X//G \to \mathbf{B} G$
and one should check what characterizes the morphisms obtained this way, such that one could say that for every $n$-functor
$V \to \mathbf{B} G$
with these and those properties, $V$ is equivalent to a colimit of the above kind.
Does anyone know what these properties are? It depends on what we take a “weak $n$-colimit” to be.
I’d be most happy if we could do this in $\omega Cat$ (strict $\infty$-categories). But whatever works, I’d be glad to see it.
Posted by: Urs Schreiber on March 25, 2008 11:48 PM | Permalink | Reply to this
### Re: Lectures on n-Categories and Cohomology
If the ordinary action groupoid can be derived as a pullback of the forgetful functor from Pointed set to Set, might we not be able to think of an action $n$-groupoid similarly?
I don’t think we ever took the step of properly working out action 2-groupoids, despite having in our hands the classifier, $(Pointed cat)^+ \to Cat$.
I tried to make a start here in a).
Posted by: David Corfield on March 26, 2008 10:05 AM | Permalink | Reply to this
### Re: Lectures on n-Categories and Cohomology
the ordinary action groupoid can be derived as a pullback of the forgetful functor from Pointed set to Set
Thanks, David, I had forgotten about that. Didn’t properly follow that discussion back then anyway. I’ll go back to that now.
I tried to make a start here in a).
Great, thanks for reminding me.
When we form the pullback $\array{ X//G &\to& (Pointed n-Cat) \\ \downarrow && \downarrow \\ \mathbf{B}G &\stackrel{\rho}{\to}& n-Cat }$
is it sufficient to talk about strict pullbacks? (Don't ask me what I mean by "sufficient".)
Do we know what $(Pointed \omega Cat)$ might mean?
Notice I am not after computing these higher action groupoids right this moment. I want to figure out their best description first.
Posted by: Urs Schreiber on March 26, 2008 4:48 PM | Permalink | Reply to this
### Re: Lectures on n-Categories and Cohomology
Well, given my comment in another thread I suppose I should answer this last question myself:
assuming all required pullbacks exist in $(\omega Cat,\otimes_{Gray})-Cat$, I should form there the pullback diagram
$\array{ T_{pt}\omega Cat &\to& Hom(2,\omega Cat) \\ \downarrow && \downarrow^{dom} \\ pt &\to& \omega Cat }$
as discussed in Tangent categories.
Then $\array{ T_{pt}\omega Cat &\to& \omega Cat \\ \searrow && \nearrow_{codom} \\ & Hom(2,\omega Cat) }$
gives the projection down to $\omega Cat$ and the “kernel” $s^{-1}pt$ that hopefully exists. Then
$\array{ s^{-1} pt \\ \downarrow \\ T_{pt}\omega Cat \\ \downarrow \\ \omega Cat }$
would be the universal $\omega Cat$-$\omega$-bundle.
Does that exist the way I am imagining here?
(Darn, I don't even know if $(\omega Cat, \otimes_{Gray})-Cat$ has all pullbacks.)
Posted by: Urs Schreiber on March 26, 2008 6:44 PM | Permalink | Reply to this
https://questioncove.com/updates/524c5a1ee4b06f86e821533b
Mathematics
OpenStudy (anonymous):
Solve the system. y = -1/3 x + 2 and x + 3y = 3
OpenStudy (austinl):
Plug in the first equation into the second one. Then solve for x.
OpenStudy (abb0t):
yes, I suggest you use the substitution method for fractions. It's much easier. So you have this system: $\left\{\begin{matrix} \sf \color{red}{y=-\frac{1}{3}x+2} & \\ \sf \color{blue}{y + 3}\sf \color{red}{y} \sf \color{blue}{=3} & \end{matrix}\right.$ plug the red equation into the blue one where the red y is.
OpenStudy (abb0t):
sorry, the first blue $$y$$ should be a $$\sf \color{blue}{x}$$
OpenStudy (goformit100):
@gabbie96 you can use Matrices to solve this equation.
OpenStudy (abb0t):
Why the heck would you want to use matrices tho.
OpenStudy (ankit042):
They are parallel lines, look closely.
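A quick check of the substitution (a Python sketch with a hypothetical helper, not from the thread): plugging $y=-\frac{1}{3}x+2$ into $x+3y=3$ gives $x - x + 6 = 6 \neq 3$ for every $x$, so the system is inconsistent and the lines are indeed parallel.

```python
from fractions import Fraction

def left_side(x):
    """Substitute y = -x/3 + 2 into the left side of x + 3y = 3."""
    x = Fraction(x)
    y = -x / 3 + 2
    return x + 3 * y

# The left side is always 6, never the required 3: no solution.
# Both lines have slope -1/3 but different intercepts (2 vs. 1).
assert all(left_side(x) == 6 for x in range(-10, 11))
```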
http://connection.ebscohost.com/c/articles/86420198/generalized-lower-bound-conjecture-polytopes-spheres
# On the generalized lower bound conjecture for polytopes and spheres
AUTHOR(S): Murai, Satoshi; Nevo, Eran
PUB. DATE: March 2013
SOURCE: Acta Mathematica; Mar 2013, Vol. 210, Issue 1, p185
DOC. TYPE: Article
ABSTRACT:
In 1971, McMullen and Walkup posed the following conjecture, which is called the generalized lower bound conjecture: If P is a simplicial d-polytope then its h-vector $(h_0, h_1, \ldots, h_d)$ satisfies $${h_0}\leq {h_1}\leq \ldots \leq {h_{\lfloor d/2 \rfloor}}$$. Moreover, if $h_{r-1} = h_r$ for some $r\leq \frac{1}{2}d$ then P can be triangulated without introducing simplices of dimension ≤ d − r. The first part of the conjecture was solved by Stanley in 1980 using the hard Lefschetz theorem for projective toric varieties. In this paper, we give a proof of the remaining part of the conjecture. In addition, we generalize this result to a certain class of simplicial spheres, namely those admitting the weak Lefschetz property.
ACCESSION #: 86420198
## Related Articles
• Mixed Lefschetz Theorems and Hodge-Riemann Bilinear Relations. Cattani, Eduardo // IMRN: International Mathematics Research Notices;Jan2008, Vol. 2008, p1
The Hard Lefschetz Theorem (HLT) and the Hodge–Riemann bilinear relations (HRR) hold in various contexts: they impose restrictions on the cohomology algebra of a smooth compact Kähler manifold; they restrict the local monodromy of a polarized variation of Hodge structure; they impose...
• Ehrhart Series, Unimodality, and Integrally Closed Reflexive Polytopes. Braun, Benjamin; Davis, Robert // Annals of Combinatorics;Dec2016, Vol. 20 Issue 4, p705
An interesting open problem in Ehrhart theory is to classify those lattice polytopes having a unimodal h*-vector. Although various sufficient conditions have been found, necessary conditions remain a challenge. In this paper, we consider integrally closed reflexive simplices and discuss an...
• Convex Polytopes and Quasilattices from the Symplectic Viewpoint. Battaglia, Fiammetta // Communications in Mathematical Physics;Jan2007, Vol. 269 Issue 2, p283
We construct, for each convex polytope, possibly nonrational and nonsimple, a family of compact spaces that are stratified by quasifolds, i.e. each of these spaces is a collection of quasifolds glued together in a suitable way. A quasifold is a space locally modelled on $${\mathbb{R}^{k}}$$...
• Tropical varieties for non-archimedean analytic spaces. Gubler, Walter // Inventiones Mathematicae;Jun2007, Vol. 169 Issue 2, p321
Generalizing the construction from tropical algebraic geometry, we associate to every (irreducible d-dimensional) closed analytic subvariety of $\mathbb{G}_{m}^{n}$ a tropical variety in R n with respect to a complete non-archimedean place. By methods of analytic and formal geometry, we prove...
• Toric cohomological rigidity of simple convex polytopes. Choi, S.; Panov, T.; Suh, D. Y. // Journal of the London Mathematical Society;Oct2010, Vol. 82 Issue 2, p343
A simple convex polytope P is cohomologically rigid if its combinatorial structure is determined by the cohomology ring of a quasitoric manifold over P. Not every P has this property, but some important polytopes such as simplices or cubes are known to be cohomologically rigid. In this paper we...
• The Coherent–Constructible Correspondence for Toric Deligne–Mumford Stacks. Fang, Bohan; Liu, Chiu-Chu Melissa; Treumann, David; Zaslow, Eric // IMRN: International Mathematics Research Notices;Feb2014, Vol. 2014 Issue 4, p914
We extend our previous work [8] on coherent–constructible correspondence for toric varieties to toric Deligne–Mumford (DM) stacks. Following Borisov et al. [3], a toric DM stack is described by a “stacky fan†Σ=(N,Σ,β), where N is a finitely generated abelian...
• Nonpolytopal Nonsimplicial Lattice Spheres with Nonnegative Toric g-Vector. Billera, Louis; Nevo, Eran // Discrete & Computational Geometry;Dec2012, Vol. 48 Issue 4, p1048
We construct many nonpolytopal nonsimplicial Gorenstein meet semi-lattices with nonnegative toric g-vector, supporting a conjecture of Stanley. These are formed as Bier spheres over the face posets of multiplexes, polytopes constructed by Bisztriczky as generalizations of simplices.
• A toric varieties approach to geometrical structure of multi partite states. Heydari, Hoshang // AIP Conference Proceedings;5/4/2010, Vol. 1232 Issue 1, p283
We investigate the geometrical structures of multipartite states based on construction of toric varieties. In particular, we describe pure quantum systems in terms of affine toric varieties and projective embedding of these varieties in complex projective spaces. We show that a quantum system...
• Separating Cycles in Doubly Toroidal Embeddings. Ellingham, M.N.; Xiaoya Zha // Graphs & Combinatorics;Jun2003, Vol. 19 Issue 2, p161
We show that every 4-representative graph embedding in the double torus contains a noncontractible cycle that separates the surface into two pieces. As a special case, every triangulation of the double torus in which every noncontractible cycle has length at least 4 has a noncontractible cycle...
https://lifeboat.com/blog/2020/03/higgs-boson-creation-in-laser-boosted-lepton-collisions
## Blog
Higgs boson creation in laser-boosted lepton collisions.
Electroweak processes in high-energy lepton collisions are considered in a situation where the incident center-of-mass energy lies below the reaction threshold, but is boosted to the required level by subsequent laser acceleration. Within the framework of laser-dressed quantum field theory, we study the laser-boosted process $\ell^+ \ell^- \to HZ^0$ in detail and specify the technical demands needed for its experimental realization. Further, we outline possible qualitative differences to field-free processes regarding the detection of the produced Higgs bosons.
https://www.picostat.com/dataset/r-dataset-package-stat2data-palmbeach
# R Dataset / Package Stat2Data / PalmBeach
Dataset Help
On this Picostat.com statistics page, you will find information about the PalmBeach data set which pertains to PalmBeach. The PalmBeach data set is found in the Stat2Data R package. You can load the PalmBeach data set in R by issuing the following command at the console data("PalmBeach"). This will load the data into a variable called PalmBeach. If R says the PalmBeach data set is not found, you can try installing the package by issuing this command install.packages("Stat2Data") and then attempt to reload the data. If you need to download R, you can go to the R project website. You can download a CSV (comma separated values) version of the PalmBeach R data set. The size of this file is about 1,316 bytes.
Documentation
## PalmBeach
### Description
Votes for George Bush and Pat Buchanan in Florida counties for the 2000 U.S. presidential election.
### Format
A dataset with 67 observations on the following 3 variables.
- `County`: name of the Florida county
- `Buchanan`: number of votes for Pat Buchanan
- `Bush`: number of votes for George Bush
### Details
The race for the presidency of the United States in the fall of 2000 was very close, with the electoral votes from Florida determining the outcome. In the disputed final tally in Florida, George W. Bush won by just 537 votes over Al Gore, out of almost 6 million votes cast. About 2.3 percent were awarded to other candidates. One of those other candidates was Pat Buchanan, who did much better in Palm Beach County than he did anywhere else. Palm Beach County used a unique "butterfly ballot" that had candidate names on either side of the page with "chads" to be punched in the middle. This non-standard ballot seemed to confuse some voters, who punched votes for Buchanan that may have been intended for a different candidate. This dataset shows the number of votes for Bush and Buchanan in each Florida county.
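The butterfly-ballot anomaly described above is the kind of thing this dataset lets you check. A minimal sketch in plain Python (the county counts below are illustrative stand-ins, not the actual PalmBeach data): compute Buchanan's share of the two-candidate vote per county and flag counties far above the median.

```python
# Hypothetical county counts for illustration only -- load the real
# PalmBeach CSV for actual analysis.
counties = {
    "Alachua":    {"Buchanan": 263,  "Bush": 34124},
    "Broward":    {"Buchanan": 789,  "Bush": 177902},
    "Palm Beach": {"Buchanan": 3407, "Bush": 152951},
}

# Buchanan's share of the two-candidate vote in each county.
shares = {
    name: v["Buchanan"] / (v["Buchanan"] + v["Bush"])
    for name, v in counties.items()
}
median = sorted(shares.values())[len(shares) // 2]

# Flag counties whose Buchanan share is more than twice the median.
outliers = [name for name, s in shares.items() if s > 2 * median]
print(outliers)
```

With the real data, the same check singles out Palm Beach County.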
### Source
Florida county data for the 2000 presidential election can be found at http://election.dos.state.fl.us/elections/resultsarchive/Index.asp?Elec…
--
Dataset imported from https://www.r-project.org.
https://freebasic.net/forum/viewtopic.php?f=17&t=28415&start=15
## Freebasic is not smooth in graphics
For other topics related to the FreeBASIC project or its community.
dodicat
Posts: 6493
Joined: Jan 10, 2006 20:30
Location: Scotland
### Re: Freebasic is not smooth in graphics
Hi basiccoder2.
Thanks for testing.
The indentations are an fbide option, tab size set at 4.
BasicCoder2
Posts: 3541
Joined: Jan 01, 2009 7:03
### Re: Freebasic is not smooth in graphics
@dodicat
I also use FBIDE but if you look at your posted code all the end if statements are bang up against the left border.
Below I have gone through your code and restored the indentations as it makes it easier for me to read.
Code:
`'dodicat https://www.freebasic.net/forum/viewtopic.php?f=17&t=28415#include "windows.bi"#include "GL\glu.bi"Dim Shared As Integer refresh_rateDim Shared As Integer w,hSub glcircle(x As Single,y As Single,rx As Single,ry As Single,clr As Ulong) Export Const pi2 = 8*Atn(1),st=pi2/(60) glend glBegin GL_TRIANGLE_FAN glcolor3ub(Cast(Ubyte Ptr,@clr)[2],Cast(Ubyte Ptr,@clr)[1],Cast(Ubyte Ptr,@clr)[0]) For a As Single=0 To pi2 Step st glVertex2f (x)+Cos(a)*(rx),(y)+Sin(a)*(ry) Next glEndEnd SubSub glline(x1 As Long,y1 As Long,x2 As Long,y2 As Long,clr As Ulong) glend glbegin gl_lines glcolor3ub(Cast(Ubyte Ptr,@clr)[2],Cast(Ubyte Ptr,@clr)[1],Cast(Ubyte Ptr,@clr)[0]) glvertex2f x1,y1 glvertex2f x2,y2 glendEnd SubSub LineByAngle(x As Long,y As Long,angle As Single,length As Single,col As Ulong,Byref x2 As Long=0,Byref y2 As Long=0) x2=x+length*Cos(angle) y2=y-length*Sin(angle) Line(x,y)-(x2,y2),col Circle(x2,y2),50,Rgb(200,0,0),,,,fEnd SubSub glLineByAngle(x As Long,y As Long,angle As Single,length As Single,col As Ulong,Byref x2 As Long=0,Byref y2 As Long=0) x2=x+length*Cos(angle) y2=y-length*Sin(angle) glline(x,y,x2,y2,col ) glcircle(x2,y2,50,50,Rgb(0,200,0))End SubFunction Regulate(Byval MyFps As Long,Byref fps As Long=0) As Long Static As Double timervalue,_lastsleeptime,t3,frames Var t=Timer frames+=1 If (t-t3)>=1 Then t3=t:fps=frames:frames=0 Var sleeptime=_lastsleeptime+((1/myfps)-T+timervalue)*1000 If sleeptime<1 Then sleeptime=1 _lastsleeptime=sleeptime timervalue=T Return sleeptimeEnd FunctionSub gfxpendulum(a As Single) Const pi=4*Atn(1) Dim As Long x,y LineByAngle(w/4,120,.3*Sin(a)-pi/2,.75*h,Rgb(200,0,0),x,y)End Sub Sub glpendulum(a As Single) Const pi=4*Atn(1) Dim As Long x,y glLineByAngle(3*w/4,h-120,.3*Sin(a)-3*pi/2,.75*h,Rgb(0,0,200),x,y) End Sub Sub GLinit glOrtho (0,w,h,0, -1, 1) glDisable (GL_DEPTH_TEST) glEnable (GL_LINE_SMOOTH) glLineWidth(1)End Sub'from glwin2Sub SetUpglTOfbscreen(Byref pPixels As Ubyte Ptr,x As Long,y As Long ) Dim As Any Ptr 
MemoryDC,ScreenDC ' HDC Dim As Any Ptr RenderContext ' HGLRC Dim As Any Ptr Bitmap,OldBitmap ' HBITMAP Dim As BITMAPINFO BI Dim As PIXELFORMATDESCRIPTOR PfD Dim As Integer PixelFormat ScreenDC=GetDC(0) 'CreateDC("DISPLAY",NULL,NULL,NULL) If ScreenDC Then MemoryDC=CreateCompatibleDC(ScreenDC) If MemoryDC Then With BI.bmiHeader .biSize = Sizeof(BITMAPINFOHEADER) .biWidth = x'800'512 .biHeight =-y'-600'-512 '.biSizeImage = 512*512*2 .biPlanes = 1 .biBitCount = 24 .biCompression = BI_RGB .biXPelsPerMeter = 0 .biYPelsPerMeter = 0 .biClrUsed = 0 .biClrImportant = 0 End With Bitmap=CreateDIBSection(MemoryDC,@BI,DIB_RGB_COLORS,@pPixels,NULL,0) If Bitmap Then OldBitmap=SelectObject(MemoryDC,Bitmap) If OldBitmap Then With PfD .nSize = Sizeof(PIXELFORMATDESCRIPTOR) .nVersion = 1 .dwFlags = PFD_DRAW_TO_BITMAP Or PFD_SUPPORT_OPENGL Or PFD_SUPPORT_GDI .iPixelType = PFD_TYPE_RGBA .iLayerType = PFD_MAIN_PLANE .cColorBits = 24 .cDepthBits = 24 '.cAlphaBits = 8 '.cAccumBits = 0 '.cStencilBits = 0 End With PixelFormat = ChoosePixelFormat(MemoryDC,@PfD) If PixelFormat Then If SetPixelFormat(MemoryDC,PixelFormat,@PfD) Then RenderContext=wglCreateContext(MemoryDC) If RenderContext=0 Then Dim As zstring Ptr pszMessage FormatMessage(FORMAT_MESSAGE_ALLOCATE_BUFFER Or _ FORMAT_MESSAGE_FROM_SYSTEM Or _ FORMAT_MESSAGE_IGNORE_INSERTS, _ NULL, GetLastError(), _ MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT), _ Cptr(Any Ptr,@pszMessage),0, NULL ) SelectObject(MemoryDC,OldBitmap) DeleteObject(Bitmap) DeleteDC(MemoryDC) DeleteDC(ScreenDC) Print "error create opengl render context: " & *pszMessage Beep:Sleep:End End If ' create render context If wglMakeCurrent(MemoryDC,RenderContext)=0 Then print "error: make current!" 
Beep:Sleep End If End If End If End If End If End If End IfEnd Sub'superimpose via screenptrSub Drawgl(p As Ubyte Ptr,pPixels As Ubyte Ptr,xx As Long,yy As Long) Dim As Long i For y As Long=0 To xx-1 For x As Long=0 To yy-1 p[i*4+0]= pPixels[i*3+0] p[i*4+1]= pPixels[i*3+1] p[i*4+2]= pPixels[i*3+2] i+=1 Next NextEnd SubSub Start() screenres 840,600,32 'Screen 20,32 Screeninfo w,h Screencontrol 8,refresh_rate Dim As Ubyte Ptr pPixels '==== opengl =========== SetUpglTOfbscreen(pPixels,w,h) 'for gl glinit() 'initialize the open gl ortho '======================== Dim As Long fps Dim As Single a While 1 a+=.02 Screenlock 'Cls glClear GL_COLOR_BUFFER_BIT Or GL_DEPTH_BUFFER_BIT DrawGl(Screenptr,pPixels,w,h) 'transfer openGL to fb screen glpendulum(a) gfxpendulum(a) Draw String(50, 10), " Press escape key to end", Rgb(255, 200, 0) Draw String(50, 55), "framerate " &fps , Rgb(0, 200, 0) Draw String(w\4-50,110),"fbgfx pendulum" Draw String(3*w\4-50,110),"openGL pendulum" Screenunlock glflush Sleep regulate(refresh_rate, fps) If Inkey=Chr(27) Then Exit While Wend End Substart`
dodicat
Posts: 6493
Joined: Jan 10, 2006 20:30
Location: Scotland
### Re: Freebasic is not smooth in graphics
Hi Basiccoder2.
I've got rid of the end if's
I must have forgot to indent before I posted.
See the projects section for cubes instead of pendulums.
https://www.sciencemadness.org/talk/viewthread.php?tid=156784
Sciencemadness Discussion Board » Special topics » Energetic Materials » Why did TNT become so universal?
Delta-R
Harmless
Posts: 12
Registered: 6-5-2018
Member Is Offline
Why did TNT become so universal?
So like the title says, of all the high explosives known back before WWI/II, why did so many countries choose TNT as the de facto standard military high explosive, and how did it stick around so long? It requires a fairly intensive multi-step synthesis that demands highly pure, expensive reagents like oleum and white fuming nitric acid, leaves tons of toxic waste, and isn’t really that spectacular as an explosive (leaves a bunch of carbon soot residue, isn’t all that powerful, less than 0.75 the RE factor of nitroglycerin, etc.), at least compared to the difficulty, cost, and wastefulness of its production.
PETN, RDX, nitroglycerin, EGDN, nitrourea... all were invented in the late 1800’s, all have a higher RE factor, all are much easier to produce, and while all were also used in the world wars, TNT was much more widely used either by itself or in a mixture from what I can tell, with only RDX production beginning to come close towards the end of the WWII.
Was it purely a result of its insensitivity and ability to be melt cast? I feel like there must be some other factor I’m missing that outweighs the downsides, considering all the other excellent options they had available.
And really, the same question could be asked for HMX too - considering the performance of HMX is practically identical to RDX, why would any military take on the added difficulty and cost of production compared to RDX for such an insignificant benefit?
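The RE comparisons in the post are easy to sanity-check against the commonly cited relative-effectiveness values (TNT is the 1.00 baseline; the other figures below are the numbers usually quoted and should be treated as approximate):

```python
# Commonly quoted RE factors (TNT = 1.00 baseline); approximate values.
re_factor = {
    "TNT": 1.00,
    "nitroglycerin": 1.54,
    "PETN": 1.66,
    "RDX": 1.60,
    "HMX": 1.70,
}

# TNT relative to nitroglycerin -- the post's "less than 0.75" claim.
tnt_vs_ng = re_factor["TNT"] / re_factor["nitroglycerin"]
print(round(tnt_vs_ng, 2))   # -> 0.65, indeed below 0.75

# HMX vs RDX: nearly identical performance, as the post notes.
print(re_factor["HMX"] / re_factor["RDX"])
```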
Twospoons
International Hazard
Posts: 1124
Registered: 26-7-2004
Location: Middle Earth
Member Is Offline
Mood: A trace of hope...
Quote: Originally posted by Delta-R Was it purely a result of its insensitivity and ability to be melt cast?
Those seem like pretty major advantages if you are making millions of shells - low risk of accidents and easy handling in manufacture. The insensitivity is probably a bonus in the field too - who wants to ride around in a truck full of touchy munitions?
Helicopter: "helico" -> spiral, "pter" -> with wings
Fulmen
International Hazard
Posts: 1466
Registered: 24-9-2005
Member Is Offline
Mood: Bored
Don't forget long term stability.
We're not banging rocks together here. We know how to put a man back together.
Johnny Cappone
Hazard to Self
Posts: 54
Registered: 10-12-2020
Location: Brazil
Member Is Offline
As already mentioned, stability and sensitivity were certainly key factors. Well, at least it was what caused the replacement of picric acid by TNT even when the availability of toluene was less than that of phenol, and even though TNT was a little less powerful.
Herr Haber
International Hazard
Posts: 905
Registered: 29-1-2016
Member Is Offline
Mood: No Mood
1906 or somewhere around there.
Germany uses TNT, UK uses picric acid.
UK shells detonate on impact with armour, German shells penetrate, then detonate.
World learns a lesson.
The spirit of adventure was upon me. Having nitric acid and copper, I had only to learn what the words 'act upon' meant. - Ira Remsen
Delta-R
Harmless
Posts: 12
Registered: 6-5-2018
Member Is Offline
Ok well let’s take TNT vs RDX. TNT was discovered as an explosive in 1891, RDX 1898. RDX is by far the better explosive by any metric, yet TNT was put into service by everyone almost immediately, or at least before 1910 whereas RDX didn’t really start to be used until the 1920’s and didn’t really take off until the late 1930’s.
The only quality I really see where TNT comes out ahead is the ability to be melt cast. But let’s take filling shells - RDX is SO much more powerful, even a shell with low-density RDX powder poured in would still be more powerful. And if there was a situation that absolutely required an insensitive casting, plasticizers and phlegmatizers had already been invented and in use, and even a completely inert plasticizer would still yield a product with a higher RE, no?
I guess what I’m saying is I could understand one or even several countries investing heavily in TNT, but it was everyone - and it’s just beyond me why it took so long for better alternatives to really start taking hold, and how TNT lasted so long.
Fulmen
International Hazard
Posts: 1466
Registered: 24-9-2005
Member Is Offline
Mood: Bored
The fact that TNT is the preferred explosive should tell you that you're wrong.
First off, RDX is also SO much more sensitive than TNT. Filling artillery shells with powdered RDX? Powders are a total PITA to handle, and even if you could do that, the shell will experience something in the order of 10'000g acceleration. Setback is a real danger even with TNT. Phlegmatized RDX might work, but it would still be a major drawback during loading.
And what about cost? IIRC the direct nitration process available for RDX back then was inefficient, partly because it was hard to recycle spent acid.
unionised
International Hazard
Posts: 4628
Registered: 1-11-2003
Location: UK
Member Is Offline
Mood: No Mood
Quote: Originally posted by Delta-R The only quality I really see where TNT comes out ahead is the ability to be melt cast
Did you consider cost?
roXefeller
National Hazard
Posts: 463
Registered: 9-9-2013
Location: 13 Colonies
Member Is Offline
Mood: 220 221 whatever it takes
A better reason for the switch from picric acid to TNT was its lack of acidity. Munitions people didn't need to worry about dangerous salts.
I don't know much about the driving factor between HMX and RDX beyond solid propellants, but that cast-ability of TNT is major. Like mentioned above, charging powders is a PITA; they can aggregate loosely and invite pinch points. Amorphous TNT is really sleepy. Detonation requires a heterogeneous mix to decrease critical diameters; the heterogeneities are the sites of microheating. Waxy TNT doesn't have that, but crystalline RDX does.
Don't get wrapped up on single metrics of detonation. RDX is known for brisance, but not all applications demand that. Airblast and lift are also important aspects. If more brisance and sensitivity were asked for, they blended RDX into TNT.
I second the motion of asking about cost. Given the procedures to recycle acids from third stage nitration back to second and first stage nitration (which RDX doesn't have from its instability to survive linear polynitramine decomposition), has relative cost been computed? Include yield percentages in that.
One thing to detract from TNT that you didn't mention was the liver toxicity from chronic handling.
One must forego the self to attain total spiritual creaminess and avoid the chewy chunks of degradation.
Fyndium
International Hazard
Posts: 1015
Registered: 12-7-2020
Location: Not in USA
Member Is Offline
Industrial scale dedicated production is not that sensitive on multi-step synthesis, especially if stuff can be recycled.
Previous posts indicate many good points of TNT. Melt-casting eases up mass production of shells to huge extent, and it also increases RE by higher density compared to powders. Safety is a major factor too, as war equipment tends to get manhandled and shot at and it would be silly to get blown up by some stray bullets. Toxicity has never been such of a concern for industry, as it lays on the workers and once they walk out the plant, it's their problem, and everything that is liquid, can be pumped down the river, and solids can be dumped away from sight. Nowadays at least some occupational and environmental laws exist.
Also I've understood that TNT was rarely used on its own; it was mixed into various compositions, where the TNT can act more as a binder, like bitumen in asphalt, creating a processable mass that can be easily handled, transferred, pumped, filled and packed, and has high efficiency.
https://en.wikipedia.org/wiki/TNT#Applications
I could not imagine using energetic powders in mass filling of artillery shells. Actually handling any sort of powder that is more reactive than flour(except, don't forget about flour explosions) is pita, as it gets everywhere, is difficult to fill and pack and is hard to mix homogenously. Amateurs generally lack the concept of mass production because they only need to make a pinch of stuff for experiments and many people believe that stuff is made and handled by people handling them, while in reality most everything has been automatic packing lines since the beginning of century. They made hundreds of millions of munitions during the wars.
Hey Buddy
Hazard to Self
Posts: 62
Registered: 3-11-2020
Member Is Offline
Impact and thermal insensitivity, along with melt casting, made it suitable for efficient mass production and standardization. All the world-war nations tooled up for it and developed batch methods. When the Fischer-Tropsch method came out of the bag and everybody figured out you could make hydrocarbon fuels from coal and biomass/syngas, toluene could obviously be produced as a branched-off product of the torrefaction solids or liquid fuels that were necessarily being produced.
Antiswat
International Hazard
Posts: 1345
Registered: 12-12-2012
Location: Dysrope (aka europe)
Member Is Offline
Mood: dangerously practical
because it's easy to handle, and easy to handle also includes being stable in storage: it doesn't break down and become dangerous like NG would
essentially, the incompetence of the many defines the use of materials. very sad to think about
~25 drops = 1mL @dH2O viscocity - STP
Truth is ever growing - but without context theres barely any such.
https://en.wikipedia.org/wiki/Solubility_table
Fyndium
International Hazard
Posts: 1015
Registered: 12-7-2020
Location: Not in USA
Member Is Offline
Quote: Originally posted by Antiswat essentially, the incompetence of the many defines the use of materials. very sad to think about
Applies to society and law universally. Even more sad to think. They should ban stupid people instead.
stamasd
Hazard to Others
Posts: 122
Registered: 24-5-2018
Location: in the crosshairs
Member Is Offline
Mood: moody
TNT is cheap to manufacture from plentiful ingredients. In war, that plays a huge role in making a decision for or against ordnance.
It does not help if you've got the universe's best explosive, if you don't have the ingredients to make it from in sufficient quantities.
[Edited on 5-1-2021 by stamasd]
All your acids are belong to us.
roXefeller
National Hazard
Posts: 463
Registered: 9-9-2013
Location: 13 Colonies
Member Is Offline
Mood: 220 221 whatever it takes
Quote: Originally posted by Johnny Cappone As already mentioned, stability and sensitivity were certainly key factors. Well, at least it was what caused the replacement of picric acid by TNT even when the availability of toluene was less than that of phenol, and even though TNT was a little less powerful.
As Don Cappone mentioned, availability of precursors probably wasn't weighing heavily on their decision to adopt the safest molecule.
Quote: Originally posted by Hey Buddy When Fischer-Tropsch method came out of the bag and everybody figured out you could make hydrocarbon fuels from coal and biomass/syngas, obviously now toluene could be produced as a branched off product after torrefaction solids or liquid fuels which were produced necessarily.
And necessity is the mother of invention.
Mush
International Hazard
Posts: 545
Registered: 27-12-2008
Member Is Offline
Mood: No Mood
Bit off topic , yet still relevant .
TNT is important because it can be melt-cast and forms melt cast matrix with other explosives. However alternative compounds are being researched.
3,5-difluoro-2,4,6-trinitroanisole: promising melt-cast insensitive explosives instead of TNT
ABSTRACT
Finding new melt-cast explosives with desired properties to replace 2,4,6-trinitrotoluene (TNT) has been intensely pursued in recent decades. However, the contradiction among high energy, low mechanical sensitivity and a low melting point makes the innovation of insensitive high-energy-density melt-cast explosives an enormous challenge, so melt-cast explosives with comprehensive properties better than TNT have not been found. Here, we show a new way to design melt-cast insensitive energetic compounds by the introduction of C-F into nitroaromatics. This as-synthesized energetic compound exhibits excellent performance with a high measured density of 1.81 g cm−3, high thermal decomposition temperature (>300 °C), high detonation velocity of 8.54 km s−1, appropriate melting point (82 °C), low viscosity (6200 mPa·s), and extremely low mechanical sensitivities (impact sensitivity >60 J, friction sensitivity >360 N), superior to those of current melt-cast explosives such as TNT and 2,4-dinitroanisole (DNAN).
Code: https://www.tandfonline.com/doi/abs/10.1080/07370652.2020.1859645
Hazard to Self
Posts: 98
Registered: 12-7-2018
Member Is Offline
TNT was not only used because incompetent people handle it. It was also detonation safety if the ammunition bunker was hit. I can't find it now, but I know they wrote that an artillery shell with picric acid can be detonated by a charge 76 cm away from it. If it is filled with TNT, only 15 cm or so.
Petn1933
Hazard to Others
Posts: 101
Registered: 16-9-2019
Member Is Offline
Quote: Originally posted by Mush
Bit off topic , yet still relevant .
TNT is important because it can be melt-cast and forms melt cast matrix with other explosives. However alternative compounds are being researched.
3,5-difluoro-2,4,6-trinitroanisole: promising melt-cast insensitive explosives instead of TNT
Attachment: DF8726F9-DA19-4831-BC6D-CF67806994E7.pdf (1.9MB)
Fulmen
International Hazard
Posts: 1466
Registered: 24-9-2005
Member Is Offline
Mood: Bored
Dude, that's some sweet specs. What about toxicity? Even though it's quite manageable with simple precautions and protective gear, the toxicity of TNT has always been a downside.
zed
International Hazard
Posts: 2208
Registered: 6-9-2008
Location: Great State of Jefferson, City of Portland
Member Is Offline
Mood: Semi-repentant Sith Lord
Ummm. War materials. Price may sometimes be of little consequence.
Yet, I find myself thinking: "Good? Yes. But, how much does it cost?"
I've been pricing reagents derived from phloroglucinol (1,3,5-trihydroxybenzene).
Not cheap. While Phloroglucinol is widespread in nature, and plants can prestidigitate it, out of H2O and thin air....
Mortal manufacturing processes, rely on cruder methods.
Yup! They make it from TNT.
The ring structure of TNT has been so substituted that the ring and its substituents no longer react in what we consider normal ways. Well, I mean other than explosiveness. I'll go get a reference.
It may all be, old hat, to you guys. But, I'm not used to such stuff. TNT as a manufacturing intermediate.
I'll go fetch.
OK, I'm back.
First TNT is oxidized to Tri-nitro-benzoic acid.
http://orgsyn.org/demo.aspx?prep=CV1P0543
Then the trinitrobenzoic acid is reduced, hydrolysed, and decarboxylated.
http://orgsyn.org/demo.aspx?prep=CV1P0455
And, now of course, the irony. Innocent phloroglucinol, produced by plants and bacteria, can now be reverse engineered, into processes that produce TNT.
Attachment: WP-1582.pdf (15kB)
[Edited on 18-3-2021 by zed]
zed
International Hazard
Posts: 2208
Registered: 6-9-2008
Location: Great State of Jefferson, City of Portland
Member Is Offline
Mood: Semi-repentant Sith Lord
Gotta admit. This is a better file.
Attachment: WP-1582-FR.pdf (1.7MB)
God, how one thing leads to another.
Had an interest in 2,4,6-Trimethoxy-Allylbenzene. Exotic stuff.
Expensive precursors, not easy to find references.
So.... it's been a Long, roundabout trek, tracking down possibilities.
OCD can be a good source of entertainment, in a pandemic.
Its good to keep busy, and it is always good to learn.
But dammit! I'm ready to be vaccinated. I'm gettin' antsy.
[Edited on 18-3-2021 by zed]
CycloKnight
Hazard to Others
Posts: 128
Registered: 4-8-2003
Member Is Offline
Mood: Still waiting for the emulsion to settle.
A point not mentioned yet, is the wide availability of toluene. In times of full scale war (both world wars), nations strive to maximize toluene production for the manufacture of TNT.
Sciencemadness Discussion Board » Special topics » Energetic Materials » Why did TNT become so universal?
https://cstheory.stackexchange.com/questions/11200/finding-invariant-elements-of-binary-matrix-from-row-and-column-sums
|
# Finding invariant elements of binary matrix from row and column sums
Consider an m-by-n binary matrix A. Given only an m-dimensional vector of its row sums (R) and an n-dimensional vector of its column sums (C), is there an efficient (i.e. polynomial-time) algorithm that determines which elements of A must be 1 and which elements of A must be 0?
A trivial example: if R = {2,1,2} and C = {1,1,3}, then
$$A = \begin{bmatrix} ? & ? & 1 \\ 0 & 0 & 1 \\ ? & ? & 1 \\ \end{bmatrix}$$
• Yes: Sort vectors, put ones in left-up corner, unshuffle. Unknowns come from repetitions (and hence multiple ways of unshuffling). – Radu GRIGore Apr 26 '12 at 13:42
• Could you elaborate? What's the mechanism for placing ones and for unshuffling? And what about zeros? – Special Touch Apr 26 '12 at 16:49
The answer is yes. The basic idea is to determine whether there is a matrix $A$ with $A_{i, j} = 0$ and one with $A_{i, j} = 1$. If both answers are yes, then $A_{i, j}$ cannot be determined; otherwise you will know what it is.
1. Consider the following problem: given $r$ (row sums) and $c$ (column sums), how do we determine whether a solution exists?
This problem can be solved by a network flow algorithm. Imagine a bipartite graph $G$ whose two disjoint vertex sets are $S$ and $T$. Each vertex in $S$ corresponds to a row of $A$ (so $|S| = m$), and each vertex in $T$ corresponds to a column of $A$ (so $|T| = n$). The network $N$ can be constructed as follows:
• Add edge between source and each $s_i$ in $S$ with capacity $r_i$.
• Add edge $E_{i, j}$ between each node $s_i$ in $S$ and $t_j$ in $T$ with capacity $1$.
• Add edge between each $t_j$ in $T$ and sink with capacity $c_j$.
Find the maxflow in $N$. All edges from the source are saturated iff there is a binary matrix $A$ whose row sums are $r$ and column sums are $c$; in that case the flow through $E_{i, j}$ equals the value of $A_{i, j}$. The maxflow can be computed in polynomial time.
2. Consider how to find the answer when some $A_{i, j}$ is fixed.
We again construct the network $N$ as above, but change the capacity bounds of the corresponding edge $E_{i, j}$: to test whether there is a solution with $A_{i, j} = v$, set both the upper and lower bound of $E_{i, j}$ to $v$. Running this algorithm $2mn$ times determines, for each entry, whether $A_{i, j}$ can be $0$ and whether it can be $1$.
PS: The first problem can also be solved by a theorem called "Gale-Ryser Theorem". But I've no idea how to modify it to solve the second one.
• Great explanation. This is a great starting point. Now I can focus on improving the time complexity. Thanks for your help. – Special Touch Apr 30 '12 at 5:36
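For intuition (and to check the example in the question), here is a tiny brute-force sketch — exponential in m·n, so only for toy sizes, and deliberately not the polynomial-time flow construction described in the answer — that enumerates every matrix with the given margins and reports which entries are invariant:

```python
from itertools import product

def forced_entries(R, C):
    """Enumerate every m-by-n binary matrix with row sums R and column sums C,
    and mark each entry as 0, 1, or '?' (varies across solutions).
    Exponential in m*n -- a toy check, not the max-flow algorithm."""
    m, n = len(R), len(C)
    solutions = []
    for bits in product((0, 1), repeat=m * n):
        A = [list(bits[i * n:(i + 1) * n]) for i in range(m)]
        if all(sum(A[i]) == R[i] for i in range(m)) and \
           all(sum(A[i][j] for i in range(m)) == C[j] for j in range(n)):
            solutions.append(A)
    if not solutions:
        return None  # no matrix realizes these margins
    # entry (i, j) is forced iff it takes the same value in every solution
    return [[solutions[0][i][j] if len({A[i][j] for A in solutions}) == 1 else '?'
             for j in range(n)] for i in range(m)]

# the example from the question: R = (2, 1, 2), C = (1, 1, 3)
for row in forced_entries([2, 1, 2], [1, 1, 3]):
    print(row)
# ['?', '?', 1]
# [0, 0, 1]
# ['?', '?', 1]
```

This reproduces the question marks in the example: column 3 is forced to all ones by its column sum, row 2 is then forced, and the remaining four entries admit two symmetric solutions.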
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/4800/2/f/t/
|
# Properties
Label: 4800.2.f.t
Level: $4800$
Weight: $2$
Character orbit: 4800.f
Analytic conductor: $38.328$
Analytic rank: $0$
Dimension: $2$
CM: no
Inner twists: $2$
# Related objects
## Newspace parameters
Level: $$N = 4800 = 2^{6} \cdot 3 \cdot 5^{2}$$
Weight: $$k = 2$$
Character orbit: $$[\chi] =$$ 4800.f (of order $$2$$, degree $$1$$, not minimal)
## Newform invariants
Self dual: no
Analytic conductor: $$38.3281929702$$
Analytic rank: $$0$$
Dimension: $$2$$
Coefficient field: $$\Q(\sqrt{-1})$$
Defining polynomial: $$x^{2} + 1$$
Coefficient ring: $$\Z[a_1, a_2, a_3]$$
Coefficient ring index: $$1$$
Twist minimal: no (minimal twist has level 2400)
Sato-Tate group: $\mathrm{SU}(2)[C_{2}]$
## $q$-expansion
Coefficients of the $$q$$-expansion are expressed in terms of $$i = \sqrt{-1}$$. We also show the integral $$q$$-expansion of the trace form.
$$f(q) = q + i q^{3} + i q^{7} - q^{9} + O(q^{10})$$
$$f(q) = q + i q^{3} + i q^{7} - q^{9} + i q^{13} + 3 q^{19} - q^{21} - 4 i q^{23} - i q^{27} + 4 q^{29} - 7 q^{31} + 6 i q^{37} - q^{39} + 6 q^{41} + 9 i q^{43} + 6 i q^{47} + 6 q^{49} + 2 i q^{53} + 3 i q^{57} + 10 q^{59} + q^{61} - i q^{63} + 3 i q^{67} + 4 q^{69} - 14 q^{71} - 10 i q^{73} - 8 q^{79} + q^{81} + 18 i q^{83} + 4 i q^{87} - q^{91} - 7 i q^{93} - 3 i q^{97} + O(q^{100})$$
$$\operatorname{Tr}(f)(q) = 2 q - 2 q^{9} + O(q^{10})$$
$$\operatorname{Tr}(f)(q) = 2 q - 2 q^{9} + 6 q^{19} - 2 q^{21} + 8 q^{29} - 14 q^{31} - 2 q^{39} + 12 q^{41} + 12 q^{49} + 20 q^{59} + 2 q^{61} + 8 q^{69} - 28 q^{71} - 16 q^{79} + 2 q^{81} - 2 q^{91} + O(q^{100})$$
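As a sanity check on the listed coefficients: Hecke eigenvalues of a newform are multiplicative ($a_{mn} = a_m a_n$ for coprime $m, n$), and since 3 divides the level, $a_{3^k} = a_3^k$. A short script (an illustration of these standard identities, not part of the LMFDB data) verifying this for the coefficients above:

```python
from math import gcd

# q-expansion coefficients a_n of 4800.2.f.t read off the page above
# (only the nonzero ones up to n = 100)
i = 1j
a = {1: 1, 3: i, 7: i, 9: -1, 13: i, 19: 3, 21: -1, 23: -4*i, 27: -i,
     29: 4, 31: -7, 37: 6*i, 39: -1, 41: 6, 43: 9*i, 47: 6*i, 49: 6,
     53: 2*i, 57: 3*i, 59: 10, 61: 1, 63: -i, 67: 3*i, 69: 4, 71: -14,
     73: -10*i, 79: -8, 81: 1, 83: 18*i, 87: 4*i, 91: -1, 93: -7*i, 97: -3*i}

# Hecke eigenvalues are multiplicative: a_{mn} = a_m * a_n for gcd(m, n) = 1
for m in a:
    for n in a:
        if gcd(m, n) == 1 and m * n in a:
            assert a[m * n] == a[m] * a[n], (m, n)

# 3 divides the level 4800 = 2^6 * 3 * 5^2, so a_{3^k} = a_3^k
assert a[9] == a[3]**2 and a[27] == a[3]**3 and a[81] == a[3]**4
print("coefficient checks pass")
```

For instance $a_{21} = a_3 a_7 = i \cdot i = -1$ and $a_{69} = a_3 a_{23} = i \cdot (-4i) = 4$, matching the expansion.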
## Character values
We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/4800\mathbb{Z}\right)^\times$$.
$$\chi(577) = -1, \qquad \chi(901) = 1, \qquad \chi(1601) = 1, \qquad \chi(4351) = 1$$
## Embeddings
For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below.
Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$
3649.1
− 1.00000i 1.00000i
0 1.00000i 0 0 0 1.00000i 0 −1.00000 0
3649.2 0 1.00000i 0 0 0 1.00000i 0 −1.00000 0
## Inner twists
Char Parity Ord Mult Type
1.a even 1 1 trivial
5.b even 2 1 inner
## Twists
By twisting character orbit
Char Parity Ord Mult Type Twist Min Dim
1.a even 1 1 trivial 4800.2.f.t 2
4.b odd 2 1 4800.2.f.q 2
5.b even 2 1 inner 4800.2.f.t 2
5.c odd 4 1 4800.2.a.w 1
5.c odd 4 1 4800.2.a.bx 1
8.b even 2 1 2400.2.f.h 2
8.d odd 2 1 2400.2.f.k 2
20.d odd 2 1 4800.2.f.q 2
20.e even 4 1 4800.2.a.x 1
20.e even 4 1 4800.2.a.bw 1
24.f even 2 1 7200.2.f.r 2
24.h odd 2 1 7200.2.f.l 2
40.e odd 2 1 2400.2.f.k 2
40.f even 2 1 2400.2.f.h 2
40.i odd 4 1 2400.2.a.f 1
40.i odd 4 1 2400.2.a.bc yes 1
40.k even 4 1 2400.2.a.g yes 1
40.k even 4 1 2400.2.a.bb yes 1
120.i odd 2 1 7200.2.f.l 2
120.m even 2 1 7200.2.f.r 2
120.q odd 4 1 7200.2.a.s 1
120.q odd 4 1 7200.2.a.bi 1
120.w even 4 1 7200.2.a.r 1
120.w even 4 1 7200.2.a.bj 1
By twisted newform orbit
Twist Min Dim Char Parity Ord Mult Type
2400.2.a.f 1 40.i odd 4 1
2400.2.a.g yes 1 40.k even 4 1
2400.2.a.bb yes 1 40.k even 4 1
2400.2.a.bc yes 1 40.i odd 4 1
2400.2.f.h 2 8.b even 2 1
2400.2.f.h 2 40.f even 2 1
2400.2.f.k 2 8.d odd 2 1
2400.2.f.k 2 40.e odd 2 1
4800.2.a.w 1 5.c odd 4 1
4800.2.a.x 1 20.e even 4 1
4800.2.a.bw 1 20.e even 4 1
4800.2.a.bx 1 5.c odd 4 1
4800.2.f.q 2 4.b odd 2 1
4800.2.f.q 2 20.d odd 2 1
4800.2.f.t 2 1.a even 1 1 trivial
4800.2.f.t 2 5.b even 2 1 inner
7200.2.a.r 1 120.w even 4 1
7200.2.a.s 1 120.q odd 4 1
7200.2.a.bi 1 120.q odd 4 1
7200.2.a.bj 1 120.w even 4 1
7200.2.f.l 2 24.h odd 2 1
7200.2.f.l 2 120.i odd 2 1
7200.2.f.r 2 24.f even 2 1
7200.2.f.r 2 120.m even 2 1
## Hecke kernels
This newform subspace can be constructed as the intersection of the kernels of the following linear operators acting on $$S_{2}^{\mathrm{new}}(4800, [\chi])$$:
$$T_{7}^{2} + 1$$, $$T_{11}$$, $$T_{13}^{2} + 1$$, $$T_{19} - 3$$, $$T_{23}^{2} + 16$$, $$T_{31} + 7$$
## Hecke characteristic polynomials
$p$ $F_p(T)$
$2$ $$T^{2}$$
$3$ $$T^{2} + 1$$
$5$ $$T^{2}$$
$7$ $$T^{2} + 1$$
$11$ $$T^{2}$$
$13$ $$T^{2} + 1$$
$17$ $$T^{2}$$
$19$ $$(T - 3)^{2}$$
$23$ $$T^{2} + 16$$
$29$ $$(T - 4)^{2}$$
$31$ $$(T + 7)^{2}$$
$37$ $$T^{2} + 36$$
$41$ $$(T - 6)^{2}$$
$43$ $$T^{2} + 81$$
$47$ $$T^{2} + 36$$
$53$ $$T^{2} + 4$$
$59$ $$(T - 10)^{2}$$
$61$ $$(T - 1)^{2}$$
$67$ $$T^{2} + 9$$
$71$ $$(T + 14)^{2}$$
$73$ $$T^{2} + 100$$
$79$ $$(T + 8)^{2}$$
$83$ $$T^{2} + 324$$
$89$ $$T^{2}$$
$97$ $$T^{2} + 9$$
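Since the coefficient field is $\Q(i)$, each $F_p(T)$ for good $p$ is $(T - a_p)(T - \bar a_p) = T^2 - 2\,\mathrm{Re}(a_p)\,T + |a_p|^2$. A quick illustrative cross-check of the table against the $a_p$ listed in the $q$-expansion (not part of the LMFDB page itself):

```python
def hecke_charpoly(ap):
    """Coefficients [1, b, c] of (T - a_p)(T - conj(a_p)) = T^2 + b*T + c."""
    b = -2 * ap.real
    c = abs(ap) ** 2
    # b and c are rational integers for these eigenvalues
    return [1, round(b), round(c)]

# a_p from the q-expansion vs. F_p(T) from the table above
assert hecke_charpoly(1j) == [1, 0, 1]         # p = 7:  T^2 + 1
assert hecke_charpoly(3 + 0j) == [1, -6, 9]    # p = 19: (T - 3)^2
assert hecke_charpoly(-4j) == [1, 0, 16]       # p = 23: T^2 + 16
assert hecke_charpoly(-7 + 0j) == [1, 14, 49]  # p = 31: (T + 7)^2
assert hecke_charpoly(18j) == [1, 0, 324]      # p = 83: T^2 + 324
print("charpoly checks pass")
```

Purely imaginary $a_p$ give polynomials of the form $T^2 + |a_p|^2$, and real $a_p$ give perfect squares $(T - a_p)^2$, exactly the two shapes seen in the table.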
http://www.self.gutenberg.org/articles/eng/Fine_structure_constant
|
# Fine structure constant
Article Id: WHEBN0000049295
Title: Fine structure constant · Publisher: World Heritage Encyclopedia · Language: English
### Fine structure constant
In physics, the fine-structure constant, also known as Sommerfeld's constant, commonly denoted α, is a fundamental physical constant, namely the coupling constant characterizing the strength of the electromagnetic interaction between elementary charged particles. Being a dimensionless quantity, it has the same numerical value in all systems of units. Arnold Sommerfeld introduced the fine-structure constant in 1916.
The currently accepted value of α is 7.29735257 × 10^−3.[1]
## Definition
Five equivalent definitions of α in terms of other fundamental physical constants are:
\alpha = \frac{m_{e} c r_{e}}{\hbar} = \frac{k_\mathrm{e} e^2}{\hbar c} = \frac{1}{(4 \pi \varepsilon_0)} \frac{e^2}{\hbar c} = \frac{e^2 c \mu_0}{2 h} = \frac{c \mu_0}{2 R_\text{K}}
where:
• e is the elementary charge;
• ħ = h/2π is the reduced Planck constant;
• c is the speed of light in vacuum;
• ε0 is the electric constant (vacuum permittivity) and μ0 the magnetic constant (vacuum permeability);
• ke = 1/(4πε0) is the Coulomb constant;
• me is the electron mass and re the classical electron radius;
• RK = h/e^2 is the von Klitzing constant.
The first definition involves pure angular momentum as a ratio - that of a theoretical maximum for an electron spinning at the speed of light compared to the absolute minimum of angular momentum in the universe, namely ħ.
In electrostatic cgs units, the unit of electric charge, the statcoulomb, is defined so that the Coulomb constant, ke, or the permittivity factor, 4πε0, is 1 and dimensionless. Then the expression of the fine-structure constant becomes the abbreviated
\alpha = \frac{e^2}{\hbar c}
which is an expression commonly appearing in physics literature.
In natural units, commonly used in high energy physics, where ε0 = c = ħ = 1, the value of the fine-structure constant is[2]
\alpha = \frac{e^2}{4 \pi}.
As such, the fine-structure constant is just an alternative expression of the elementary charge: e \ = \ \sqrt{4 \pi \alpha} \ \approx \ 0.30282212 in terms of the natural unit of charge.
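The defining relation can be checked numerically from the SI values of the constants (CODATA values hard-coded below purely for illustration):

```python
import math

# CODATA values of the constants entering alpha = e^2 / (4*pi*eps0*hbar*c)
e    = 1.602176634e-19    # elementary charge, C
hbar = 1.054571817e-34    # reduced Planck constant, J*s
c    = 299792458.0        # speed of light, m/s
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha   = {alpha:.10e}")   # ~7.2973525e-3
print(f"1/alpha = {1/alpha:.6f}")  # ~137.036
# in natural units the elementary charge is sqrt(4*pi*alpha) ~ 0.30282212
print(f"e (natural units) = {math.sqrt(4 * math.pi * alpha):.8f}")
```

The computed reciprocal reproduces the familiar 137.036, and the natural-unit charge matches the 0.30282212 quoted above.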
## Measurement
Two example eighth-order Feynman diagrams that contribute to the electron self-interaction. The horizontal line with an arrow represents the electron while the wavy lines are virtual photons, and the circles represent virtual electron-positron pairs.
The 2010 CODATA recommended value of α is[3]
\alpha = \frac{e^2}{(4 \pi \varepsilon_0) \hbar c} = 7.297\,352\,5698(24) \times 10^{-3}.
This has a relative standard uncertainty of 0.32 parts per billion.[3] For reasons of convenience, historically the value of the reciprocal of the fine-structure constant is often specified. The 2010 CODATA recommended value is given by[3]
\alpha^{-1} = 137.035\,999\,074(44).
While the value of α can be estimated from the values of the constants appearing in any of its definitions, the theory of quantum electrodynamics (QED) provides a way to measure α directly using the quantum Hall effect or the anomalous magnetic moment of the electron. The theory of QED predicts a relationship between the dimensionless magnetic moment of the electron and the fine-structure constant α (the magnetic moment of the electron is also referred to as "Landé g-factor" and symbolized as g). The most precise value of α obtained experimentally (as of 2012) is based on a measurement of g using a one-electron so-called "quantum cyclotron" apparatus, together with a calculation via the theory of QED that involved 12,672 tenth-order Feynman diagrams:[4]
\alpha^{-1} = 137.035\,999\,173(35).
This measurement of α has a precision of 0.25 parts per billion. This value and uncertainty are about the same as the latest experimental results.[5]
## Physical interpretations
The fine-structure constant, α, has several physical interpretations. α is:
• The square of the ratio of the elementary charge to the Planck charge:
\alpha = \left( \frac{e}{q_\mathrm{P}} \right)^2.
• The ratio of two energies: (i) the energy needed to overcome the electrostatic repulsion between two electrons a distance of d apart, and (ii) the energy of a single photon of wavelength \lambda = 2\pi d (or of angular wavelength d; see Planck relation):
\alpha = \frac{e^2}{4 \pi \varepsilon_0 d} \left/ \frac{h c}{\lambda} \right. = \frac{e^2}{4 \pi \varepsilon_0 d} \times {\frac{2 \pi d}{h c}} = \frac{e^2}{4 \pi \varepsilon_0 d} \times {\frac{d}{\hbar c}} = \frac{e^2}{4 \pi \varepsilon_0 \hbar c}.
• The ratio of the classical electron radius to the reduced Compton wavelength of the electron; equivalently,
r_e = {\alpha \lambda_e \over 2\pi} = \alpha^2 a_0
where \lambda_e is the Compton wavelength of the electron and a_0 the Bohr radius.
• One quarter of the product of the characteristic impedance of free space Z_0 and the conductance quantum G_0 = 2e^2/h:
\alpha = \frac{1}{4}\,Z_0\,G_0.
When perturbation theory is applied to quantum electrodynamics, the resulting perturbative expansions for physical results are expressed as sets of power series in α. Because α is much less than one, higher powers of α are soon unimportant, making the perturbation theory extremely practical in this case. On the other hand, the large value of the corresponding factors in quantum chromodynamics makes calculations involving the strong nuclear force extremely difficult.
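To illustrate how quickly such a power series in α converges, here is a sketch summing the first few analytically known terms of the QED expansion for the electron anomalous magnetic moment a_e. The coefficients are quoted from the literature as an assumption of this example; the point is only that each successive power of α/π contributes orders of magnitude less:

```python
import math

alpha = 7.2973525693e-3           # fine-structure constant (CODATA)
x = alpha / math.pi

# first three analytically known coefficients of a_e = sum c_k * (alpha/pi)^k
coeffs = [0.5,            # second-order (Schwinger, 1948)
          -0.328478965,   # fourth-order
          1.181241456]    # sixth-order
a_e = sum(c * x ** (k + 1) for k, c in enumerate(coeffs))

print(f"a_e ≈ {a_e:.10f}")   # close to the measured 0.0011596522
for k, c in enumerate(coeffs):
    # each term is suppressed by a further factor of alpha/pi ~ 0.0023
    print(f"order {2 * (k + 1)} term: {c * x ** (k + 1):+.3e}")
```

Three terms already agree with experiment to about nine decimal places, which is why perturbation theory is so practical in QED.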
According to the theory of the renormalization group, the value of the fine-structure constant (the strength of the electromagnetic interaction) grows logarithmically as the energy scale is increased. The observed value of α is associated with the energy scale of the electron mass; the electron is a lower bound for this energy scale because it (and the positron) is the lightest charged object whose quantum loops can contribute to the running. Therefore 1/137.036 is the value of the fine-structure constant at zero energy. Moreover, as the energy scale increases, the strength of the electromagnetic interaction approaches that of the other two fundamental interactions, a fact important for grand unification theories. If quantum electrodynamics were an exact theory, the fine-structure constant would actually diverge at an energy known as the Landau pole. This fact makes quantum electrodynamics inconsistent beyond the perturbative expansions.
### Classical considerations
The fine-structure constant, α, expresses the strength of the electrodynamic interaction. It can already be determined with mainly classical considerations. If the electron is described in a classical model by the circulation of a massless charge on a circular track with radius r it is embedded in its own synchrotron radiation field .[7] The interaction with photons result in an angular momentum L=1\hbar. Then with general arguments one obtains the chain
1\hbar=L=r\cdot p=r\cdot E/c=T \cdot E/(2\pi)=\lambda E/(2 \pi c),
with momentum p, total energy E, circulation period T, and λ the Compton wavelength which is the wavelength of the fundamental mode of the synchrotron radiation.
The power of the radiation is P=E/T and leads to 2\pi \hbar=E^2/P.
The power of the synchrotron radiation is for \beta=1 [8]
P=\frac{e^2 c}{4\pi\varepsilon_0 r^2}\cdot Sum_n
with Sum_n the sum over integrals of the contributing modes n. With P = 1.01\cdot 10^{7}\,\mathrm{W}, which corresponds to Sum_n = 22 and is close to numerical calculations in the model, one obtains the values of the elementary charge e and the fine-structure constant α.
More convincing is the use of arguments from quantum mechanics: The interaction of the charge with its field leads to oscillations and replaces the point like charge by a charge distribution. The energy of this distribution is E=\frac{e^2}{4\pi\varepsilon_0 R} where R describes the radius of this distribution. The comparison with the mass of the lepton m c^2=\hbar \omega=\hbar c/R yields E/(m c^2)= \alpha. [7]
## History
Arnold Sommerfeld introduced the fine-structure constant in 1916, as part of his theory of the relativistic deviations of atomic spectral lines from the predictions of the Bohr model. The first physical interpretation of the fine-structure constant α was as the ratio of the velocity of the electron in the first circular orbit of the relativistic Bohr atom to the speed of light in the vacuum.[9] Equivalently, it was the quotient between the minimum angular momentum allowed by relativity for a closed orbit, and the minimum angular momentum allowed for it by quantum mechanics. It appears naturally in Sommerfeld's analysis, and determines the size of the splitting or fine-structure of the hydrogenic spectral lines.
The fine-structure constant so intrigued physicist Wolfgang Pauli that he collaborated with psychiatrist Carl Jung in an extraordinary quest to understand its significance.[10]
## Is the fine-structure constant actually constant?
While at interaction energies above 80 GeV the fine-structure constant is known to approach 1/128,[11] physicists have pondered whether the fine-structure constant is in fact constant, or whether its value differs by location and over time. A varying α has been proposed as a way of solving problems in cosmology and astrophysics.[12][13][14][15] String theory and other proposals for going beyond the Standard Model of particle physics have led to theoretical interest in whether the accepted physical constants (not just α) actually vary.
The first experimenters to test whether the fine-structure constant might actually vary examined the spectral lines of distant astronomical objects and the products of radioactive decay in the Oklo natural nuclear fission reactor. Their findings were consistent with no variation in the fine-structure constant between these two vastly separated locations and times.[16][17][18][19][20][21]
More recently, improved technology has made it possible to probe the value of α at much larger distances and to a much greater accuracy. In 1999, a team led by John K. Webb of the University of New South Wales claimed the first detection of a variation in α.[22][23][24][25] Using the Keck telescopes and a data set of 128 quasars at redshifts 0.5 < z < 3, Webb et al. found that their spectra were consistent with a slight increase in α over the last 10–12 billion years. Specifically, they found that
\frac{\Delta \alpha}{\alpha} \ \stackrel{\mathrm{def}}{=}\ \frac{\alpha _\mathrm{prev}-\alpha _\mathrm{now}}{\alpha_\mathrm{now}} = \left(-5.7\pm 1.0 \right) \times 10^{-6}.
In 2004, a smaller study of 23 absorption systems by Chand et al., using the Very Large Telescope, found no measurable variation:[26][27]
\frac{\Delta \alpha}{\alpha_\mathrm{em}}= \left(-0.6\pm 0.6\right) \times 10^{-6}.
However, in 2007 simple flaws were identified in the analysis method of Chand et al., discrediting those results.[28][29]
King et al. have used Markov Chain Monte Carlo methods to investigate the algorithm used by the UNSW group to determine \Delta\alpha/\alpha from the quasar spectra, and have found that the algorithm appears to produce correct uncertainties and maximum likelihood estimates for \Delta\alpha/\alpha for particular models.[30] This suggests that the statistical uncertainties and best estimate for \Delta\alpha/\alpha stated by Webb et al. and Murphy et al. are robust.
Lamoreaux and Torgerson analyzed data from the Oklo natural nuclear fission reactor in 2004, and concluded that α has changed in the past 2 billion years by 4.5 parts in 10^8. They claimed that this finding was "probably accurate to within 20%." Accuracy is dependent on estimates of impurities and temperature in the natural reactor. These conclusions have to be verified.[31][32][33][34]
In 2007, Khatri and Wandelt of the University of Illinois at Urbana-Champaign realized that the 21 cm hyperfine transition in neutral hydrogen of the early Universe leaves a unique absorption line imprint in the cosmic microwave background radiation.[35] They proposed using this effect to measure the value of α during the epoch before the formation of the first stars. In principle, this technique provides enough information to measure a variation of 1 part in 10^9 (4 orders of magnitude better than the current quasar constraints). However, the constraint which can be placed on α is strongly dependent upon effective integration time, going as t^−1/2. The European LOFAR radio telescope would only be able to constrain Δα/α to about 0.3%.[35] The collecting area required to constrain Δα/α to the current level of quasar constraints is on the order of 100 square kilometers, which is economically impracticable at the present time.
In 2008, Rosenband et al.[36] used the frequency ratio of Al+ and Hg+ in single-ion optical atomic clocks to place a very stringent constraint on the present time variation of α, namely α̇/α = (−1.6±2.3)×10^−17 per year. Note that any present day null constraint on the time variation of alpha does not necessarily rule out time variation in the past. Indeed, some theories[37] that predict a variable fine-structure constant also predict that the value of the fine-structure constant should become practically fixed in its value once the universe enters its current dark energy-dominated epoch.
### Australian dipole
In September 2010 researchers from Australia said they had identified a dipole-like structure in the variation of the fine-structure constant across the observable universe. They used data on quasars obtained by the Very Large Telescope, combined with the previous data obtained by Webb at the Keck telescopes. The fine-structure constant appears to have been larger by one part in 100,000 in the direction of the southern hemisphere constellation Ara, 10 billion years ago. Similarly, the constant appeared to have been smaller by a similar fraction in the northern direction, billions of years ago.[38][39][40]
In September and October 2010, after Webb's released research, physicists Chad Orzel and Sean M. Carroll suggested various approaches of how Webb's observations may be wrong. Orzel argues that the study may contain wrong data due to subtle differences in the two telescopes, in which one of the telescopes the data set was slightly high and on the other slightly low, so that they cancel each other out when they overlapped. He finds it suspicious that the triangles in the plotted graph of the quasars are so well-aligned (triangles representing sources examined with both telescopes). Carroll suggested a totally different approach; he looks at the fine-structure constant as a scalar field and claims that if the telescopes are correct and the fine-structure constant varies smoothly over the universe, then the scalar field must have a very small mass. However, previous research has shown that the mass is not likely to be extremely small. Both of these scientists' early criticisms point to the fact that different techniques are needed to confirm or contradict the results, as Webb, et al., also concluded in their study.[41][42]
In October 2011, Webb et al. reported[43] a variation in α dependent on both redshift and spatial direction. They report "the combined data set fits a spatial dipole" with an increase in α with redshift in one direction and a decrease in the other. "[I]ndependent VLT and Keck samples give consistent dipole directions and amplitudes...."
## Anthropic explanation
The anthropic principle is a controversial argument of why the fine-structure constant has the value it does: stable matter, and therefore life and intelligent beings, could not exist if its value were much different. For instance, were α to change by 4%, stellar fusion would not produce carbon, so that carbon-based life would be impossible. If α were > 0.1, stellar fusion would be impossible and no place in the universe would be warm enough for life as we know it.[44]
## Numerological explanations
As a dimensionless constant which does not seem to be directly related to any mathematical constant, the fine-structure constant has long fascinated physicists. Richard Feynman, one of the originators and early developers of the theory of quantum electrodynamics (QED), referred to the fine-structure constant in these terms:
There is a most profound and beautiful question associated with the observed coupling constant, e – the amplitude for a real electron to emit or absorb a real photon. It is a simple number that has been experimentally determined to be close to 0.08542455. (My physicist friends won't recognize this number, because they like to remember it as the inverse of its square: about 137.03597 with about an uncertainty of about 2 in the last decimal place. It has been a mystery ever since it was discovered more than fifty years ago, and all good theoretical physicists put this number up on their wall and worry about it.) Immediately you would like to know where this number for a coupling comes from: is it related to pi or perhaps to the base of natural logarithms? Nobody knows. It's one of the greatest damn mysteries of physics: a magic number that comes to us with no understanding by man. You might say the "hand of God" wrote that number, and "we don't know how He pushed his pencil." We know what kind of a dance to do experimentally to measure this number very accurately, but we don't know what kind of dance to do on the computer to make this number come out, without putting it in secretly!
There have been a few examples of numerology that have led to theories that transformed society: see the mention of Kirchhoff and Balmer in Good (1962, p. 316) ... and one can well include Kepler on account of his third law. It would be fair enough to say that numerology was the origin of the theories of electromagnetism, quantum mechanics, gravitation.... So I intend no disparagement when I describe a formula as numerological. When a numerological formula is proposed, then we may ask whether it is correct. ... I think an appropriate definition of correctness is that the formula has a good explanation, in a Platonic sense, that is, the explanation could be based on a good theory that is not yet known but ‘exists’ in the universe of possible reasonable ideas.
Arthur Eddington argued that the value could be "obtained by pure deduction" and he related it to the Eddington number, his estimate of the number of protons in the Universe.[45] This led him in 1929 to conjecture that its reciprocal was precisely the integer 137. Other physicists neither adopted this conjecture nor accepted his arguments but by the 1940s experimental values for 1/α deviated sufficiently from 137 to refute Eddington's argument.[46] Attempts to find a mathematical basis for this dimensionless constant have continued up to the present time.
## Quotes
The mystery about α is actually a double mystery. The first mystery – the origin of its numerical value α ≈ 1/137 has been recognized and discussed for decades. The second mystery – the range of its domain – is generally unrecognized.
—Malcolm H. Mac Gregor, M.H. MacGregor (2007). The Power of Alpha.
If alpha [the fine-structure constant] were bigger than it really is, we should not be able to distinguish matter from ether [the vacuum, nothingness], and our task to disentangle the natural laws would be hopelessly difficult. The fact however that alpha has just its value 1/137 is certainly no chance but itself a law of nature. It is clear that the explanation of this number must be the central problem of natural philosophy.
## References
1. ^ "CODATA Value: fine-structure constant". The NIST Reference on Constants, Units, and Uncertainty. US
2. ^ Peskin, M.; Schroeder, D. (1995). An Introduction to Quantum Field Theory. Westview Press. ISBN 0-201-50397-2. p. 125.
3. ^ a b c P.J. Mohr, B.N. Taylor, and D.B. Newell (2011), "The 2010 CODATA Recommended Values of the Fundamental Physical Constants" (Web Version 6.0). This database was developed by J. Baker, M. Douma, and S. Kotochigova. Available: http://physics.nist.gov/constants [Thursday, 02-Jun-2011 21:00:12 EDT]. National Institute of Standards and Technology, Gaithersburg, MD 20899.
4. ^ Tatsumi Aoyama, Masashi Hayakawa, Toichiro Kinoshita, Makiko Nio (2012). "Tenth-Order QED Contribution to the Electron g-2 and an Improved Value of the Fine Structure Constant".
5. ^ Rym Bouchendira; Pierre Cladé; Saïda Guellati-Khélifa; François Nez; François Biraben (2010). "New determination of the fine-structure constant and test of the quantum electrodynamics". Physical Review Letters 106 (8).
6. ^ NIST Reference on Constants, Units, and Uncertainty Current advances: The fine-structure constant and quantum Hall effect
7. ^ a b G. Poelz (2012). "On the Wave Character of the Electron". arXiv:1206.0620 [physics.class-ph].
8. ^ Iwanenko, D.; Sokolov, A. (1953). "39ff". Klassische Feldtheorie. Berlin: Akademie-Verlag.
9. ^ "Introduction to the Constants for Nonexperts – Current Advances: The Fine-Structure Constant and Quantum Hall Effect". The NIST Reference on Constants, Units, and Uncertainty.
10. ^ P. Varlaki, L. Nadai, J. Bokor (2008). "Number Archetypes and Background Control Theory Concerning the Fine Structure Constant".
11. ^ P.J. Mohr (NIST) (2010). "Physical Constants".
12. ^ E.A. Milne (1935). Relativity, Gravitation and World Structure.
13. ^ P.A.M. Dirac (1937). "The Cosmological Constants".
14. ^ G. Gamow (1967). "Electricity, Gravity, and Cosmology".
15. ^ G. Gamow (1967). "Variability of Elementary Charge and Quasistellar Objects".
16. ^ J.-P. Uzan (2003). "The Fundamental Constants and Their Variation: Observational Status and Theoretical Motivations".
17. ^ J.-P. Uzan (2004). "Variation of the Constants in the Late and Early Universe". arXiv:astro-ph/0409424 [astro-ph].
18. ^ K. Olive, Y.-Z. Qian (2003). "Were Fundamental Constants Different in the Past?".
19. ^ J.D. Barrow (2002). The Constants of Nature: From Alpha to Omega—the Numbers That Encode the Deepest Secrets of the Universe.
20. ^ J.-P. Uzan, B. Leclercq (2008). The Natural Laws of the Universe: Understanding Fundamental Constants.
21. ^ F. Yasunori (2004). "Oklo Constraint on the Time-Variability of the Fine-Structure Constant". Astrophysics, Clocks and Fundamental Constants. Lecture Notes in Physics.
22. ^ J.K. Webb et al. (1999). "Search for Time Variation of the Fine Structure Constant".
23. ^ M.T. Murphy et al. (2001). "Possible evidence for a variable fine-structure constant from QSO absorption lines: motivations, analysis and results".
24. ^ J.K. Webb et al. (2001). "Further Evidence for Cosmological Evolution of the Fine Structure Constant".
25. ^ M.T. Murphy, J.K. Webb, V.V. Flambaum (2003). "Further Evidence for a Variable Fine-Structure Constant from Keck/HIRES QSO Absorption Spectra".
26. ^ H. Chand et al. (2004). "Probing the Cosmological Variation of the Fine-Structure Constant: Results Based on VLT-UVES Sample".
27. ^ R. Srianand et al. (2004). "Limits on the Time Variation of the Electromagnetic Fine-Structure Constant in the Low Energy Limit from Absorption Lines in the Spectra of Distant Quasars".
28. ^ M.T. Murphy, J.K. Webb, V.V. Flambaum (2007). "Comment on "Limits on the Time Variation of the Electromagnetic Fine-Structure Constant in the Low Energy Limit from Absorption Lines in the Spectra of Distant Quasars"".
29. ^ M.T. Murphy, J.K. Webb, V.V. Flambaum (2008). "Revision of VLT/UVES Constraints on a Varying Fine-Structure Constant".
30. ^ J. King, D. Mortlock, J. Webb, M. Murphy (2009). "Markov Chain Monte Carlo methods applied to measuring the fine-structure constant from quasar spectroscopy". arXiv:0910.2699 [astro-ph].
31. ^ R. Kurzweil (2005).
32. ^ S.K. Lamoreaux, J.R. Torgerson (2004). "Neutron Moderation in the Oklo Natural Reactor and the Time Variation of Alpha".
33. ^ E.S. Reich (30 June 2004). "Speed of Light May Have Changed Recently".
34. ^ "Scientists Discover One Of The Constants Of The Universe Might Not Be Constant".
35. ^ a b R. Khatri, B.D. Wandelt (2007). "21-cm Radiation: A New Probe of Variation in the Fine-Structure Constant".
36. ^ T. Rosenband et al. (2008). "Frequency Ratio of Al+ and Hg+ Single-Ion Optical Clocks; Metrology at the 17th Decimal Place".
37. ^ J.D. Barrow, H.B. Sandvik, J. Magueijo (2001). "The Behaviour of Varying-Alpha Cosmologies". Phys. Rev. D 65 (6).
38. ^ H. Johnston (2 September 2010). "Changes spotted in fundamental constant".
39. ^ J.K. Webb, J.A. King, M.T. Murphy, V.V. Flambaum, R.F. Carswell, M.B. Bainbridge (23 August 2010). "Evidence for spatial variation of the fine-structure constant". Physical Review Letters 107 (19).
40. ^ J. A. King (2010). Searching for variations in the fine-structure constant and the proton-to-electron mass ratio using quasar absorption lines (PhD thesis). University of New South Wales.
41. ^ Chad Orzel (14 September). "Why I'm skeptical about changing Fine-Structure constant".
42. ^ Sean Carroll (18 October). "The fine-structure constant is probably constant".
43. ^ J. K. Webb, J. A. King, M. T. Murphy, V. V. Flambaum, R. F. Carswell, and M. B. Bainbridge (2011). "Indications of a Spatial Variation of the Fine Structure Constant".
44. ^ J.D. Barrow (2001). "Cosmology, Life, and the Anthropic Principle".
45. ^ A.S. Eddington (1956). "The Constants of Nature". In J.R. Newman. The World of Mathematics 2.
46. ^ H. Kragh (2003). "Magic Number: A Partial History of the Fine-Structure Constant".
• Stephen L. Adler, "Theories of the Fine Structure Constant α" FERMILAB-PUB-72/059-T
• "Introduction to the constants for nonexperts", adapted from the Encyclopædia Britannica, 15th ed. Disseminated by the NIST web page.
• CODATA recommended value of α, as of 2006.
• "Fine Structure Constant", Eric Weisstein's World of Physics website.
• John D. Barrow, and John K. Webb, "Inconstant Constants", Scientific American, June 2005.
http://openstudy.com/updates/50798fd6e4b0ed1dac5136ca
Word problem: Kristin spent $131 on shirts. Fancy shirts cost $28 and plain shirts cost $15. If she bought a total of 7 shirts, how many of each kind did she buy?
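The answers from the thread are not preserved here, but the system behind the question can be set up and checked in a few lines of Python (the helper name `solve_shirts` is mine, not from the thread): with f fancy and p plain shirts, f + p = 7 and 28f + 15p = 131.

```python
# Kristin's purchase: f fancy + p plain shirts, with f + p = 7 and 28f + 15p = 131.
# Substituting p = 7 - f gives 28f + 15(7 - f) = 131, i.e. 13f = 26, so f = 2.
def solve_shirts(total_cost, n_shirts, fancy_price, plain_price):
    fancy = (total_cost - plain_price * n_shirts) / (fancy_price - plain_price)
    plain = n_shirts - fancy
    return fancy, plain

fancy, plain = solve_shirts(131, 7, 28, 15)
print(fancy, plain)  # 2.0 5.0
```

So she bought 2 fancy and 5 plain shirts; checking, 2(28) + 5(15) = 56 + 75 = 131.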
https://physics.aps.org/articles/v14/20
Viewpoint
# A Penetrating Look at Ice Friction
Physics 14, 20
A new approach for studying friction on ice helps explain why the ease of sliding depends strongly on temperature, contact pressure, and speed.
In Latvia, where bobsledding, luge, and skeleton are popular, ice can be more than just “slippery.” The local language has another term slīdamība—roughly translated as “slideability”—which refers to the ease of movement on a surface. This terminology signifies the awareness that sliding on ice depends on multiple factors—something physicists have had trouble explaining despite 160 years of effort. Previous work has focused on the water layer that forms between the ice surface and the sliding object, say, an ice skate. However, this model does not show why friction is higher near the ice melting point than it is at lower temperatures (Fig. 1). A new study of the solid properties of ice may provide a solution. Rinse Liefferink from the University of Amsterdam and colleagues have performed a series of experiments, in which they measure both the friction of a sliding object and the hardness of ice over a wide range of conditions [1]. The observations show that the hardness decreases as the temperature increases, leading to a high-friction “ploughing” behavior once the sliding object is able to penetrate the softer ice surface. This novel approach to studying ice friction could help in developing technologies that improve safety for winter drivers or give an edge to winter athletes.
Previous work has related slipperiness to a surface layer (or film) of water. The thickness of this water layer could explain how friction varies with temperature [2]. At very cold temperatures (around $-100\,^{\circ}\mathrm{C}$), melting is minimal, and the surface is considered dry—providing a possible explanation for the large observed friction. As temperatures warm to an intermediate range (around $-20\,^{\circ}\mathrm{C}$), the developing water layer acts as a thin lubricating film that could explain the observed decrease in friction. However, it becomes less clear what happens near the melting point at $0\,^{\circ}\mathrm{C}$, where the observed friction increases again. The models explain this reduced slipperiness as arising from a thicker film of water, but observations have not been able to confirm this [3].
In previous studies of ice friction, researchers have had difficulty in controlling all the relevant parameters, such as temperature and surface smoothness. Normally, such tribology experiments have been performed on a small-scale rheometer, in which a rotating probe is pressed down on an ice disk. One problem with these setups is that the rotating probe, or “slider,” often follows a single track and thereby moves repetitively over disturbed ice.
Liefferink and colleagues used a rheometer at low loads with three different slider shapes: a big sphere, a small sphere, and a model skate. The team kept the ice smooth by repeatedly adding a fresh film of water to the surface, and they varied the ice temperatures over a wide range from $-120\,^{\circ}\mathrm{C}$ to $-1.5\,^{\circ}\mathrm{C}$. In addition to measuring friction with the different sliders, the researchers measured the ice hardness using a high-load mechanical testing machine that presses on the ice with a spherical probe. The value of the hardness is given by the force needed to penetrate or indent the surface with the probe.
The data showed how ice hardness varied with temperature and sliding speed. The ice became harder at colder temperatures (Fig. 1), while at a given temperature, the hardness increased at faster indentation and sliding speeds. The hardness behavior helped the team interpret the ice friction observations. At the lowest temperatures, the large sliding friction was attributed to water molecules being held rigidly on the surface. These molecules were made more mobile by the shearing action of the slider, and this mobility became greater—and the friction lower—at intermediate temperatures. However, at higher temperatures, the friction increased because of the decrease in hardness that allowed the slider to plough into the ice. Despite this drop in hardness, we know that ice remains sufficiently hard for sliding at temperatures close to the melting point. Most other materials become soft—and “unslidable”—near their melting point.
Different slider shapes started to plough at different temperatures. At a given pressing force, the small sphere showed ploughing at about $-20\,^{\circ}\mathrm{C}$, but the larger skate-section showed ploughing at $-8\,^{\circ}\mathrm{C}$. This behavior is explained by a lower contact pressure from the skate's larger size (larger contact area). Besides delaying the onset of ploughing, the lower contact pressure of the skate provides more mobility to surface water molecules, making the skate slide better than the sphere even at colder temperatures. Friction's dependence on contact pressure is mediated by how much of the slider's surface is in contact with the ice and whether the pressure is enough to initiate ploughing (Fig. 2). Experiments at low speeds showed that ice hardness increases with slider speed, implying that a fast skate should plough less and thus slide better.
Further work should investigate how ice friction and slideability are affected by weather conditions, faster sliding speeds, and ice structure. Weather is an issue because it can affect the water layer thickness. My team has recently performed experiments with a skeleton sled and showed that humidity, air temperature, and ice temperature jointly influence the sliding speed [4]. We have also looked at the effect of slider surface topography and loading on sliding speed [5]. To measure the effect at faster sliding conditions, researchers will need an appropriate test facility, such as the bobsled ice track in Sigulda, Latvia, where higher speeds can be attained in a long, straight section [6]. Such tests will need an advanced sensor system to accurately and simultaneously measure both ice friction and air drag [7].
The last element for further study is how ice’s structure helps regulate the hardness. Ice is known to self-heal after surface scratches. It may turn out that ice is like steel in that it becomes harder after being mechanically stressed, or “cold worked.” Materials science needs to offer advanced characterization and testing under extreme conditions to completely unravel the slideability of ice.
## References
1. R. W. Liefferink et al., “Friction on ice: How temperature, pressure, and speed control the slipperiness of ice,” Phys. Rev. X 11, 011025 (2021).
2. A.-M. Kietzig et al., “Physics of ice friction,” J. Appl. Phys. 107, 081101 (2010).
3. T. Bartels-Rausch et al., “A review of air–ice chemical and physical interactions (AICI): liquids, quasi-liquids, and solids in snow,” Atmos. Chem. Phys. 14, 1587 (2014).
4. E. Jansons et al., “Influence of weather conditions on sliding over ice at a push-start training facility,” Biotribology 25, 100152 (2021).
5. E. Jansons et al., “Measurement of sliding velocity on ice, as a function of temperature, runner load and roughness, in a skeleton push-start facility,” Cold Reg. Sci. Technol. 151, 260 (2018).
6. M. Irbe et al., “Unveiling ice friction and aerodynamic drag at the initial stage of sliding on ice: Faster sliding in winter sports,” Tribol. Int. (to be published).
Kārlis Agris Gross is a Professor in Materials Science at Riga Technical University in Latvia, leading the Biomaterials Research Group in the Faculty of Materials Science and Applied Chemistry. His background is in biomaterials, amorphous phases, and structuring, and the development of new methods to deepen scientific enquiry in those related areas. The last five years have been spent on initiating research on ice and ice-related materials. He is working on nanoindentation experiments with a focus on materials at subzero temperatures. He was recently (2017–2020) involved in an ERAF project The quest for disclosing how surface characteristics affect slideability (No.1.1.1.1/16/A/129). kgross@rtu.lv
## Subject Areas
Materials Science
## Related Articles
Materials Science
### Icicle Structure Reveals Growth Dynamics
Some icicles develop surface ripples as they grow. Researchers now explain the growth mechanism, but a full explanation remains elusive. Read More »
Materials Science
### Machine-Learning Tool Solves Metamaterial Jigsaw
A new tool can determine whether a collection of building blocks will assemble into a mechanically sound structure. Read More »
Condensed Matter Physics
### Impurities Enable High-Quality Resistive Switching Devices
Adding dopants to resistive random-access memories could enable the controllable operation of these devices in neuromorphic computing hardware Read More »
https://studyadda.com/sample-papers/physics-sample-paper-2_q14/1253/400464
• Question: For a magnetising field of intensity $2\times 10^{3}\,\mathrm{A\,m^{-1}}$, aluminium at 280 K acquires an intensity of magnetisation of $4.8\times 10^{-2}\,\mathrm{A\,m^{-1}}$. Find the susceptibility of aluminium at 280 K. If the temperature of the metal is raised to 320 K, then what will be its susceptibility and intensity of magnetisation?
Answer:
Here, $H=2\times 10^{3}\,\mathrm{A\,m^{-1}}$, $T=280\,\mathrm{K}$, and $I=4.8\times 10^{-2}\,\mathrm{A\,m^{-1}}$.

Susceptibility: $\chi_m=\dfrac{I}{H}=\dfrac{4.8\times 10^{-2}}{2\times 10^{3}}=2.4\times 10^{-5}$.

For $T'=320\,\mathrm{K}$, Curie's law gives $\dfrac{\chi'_m}{\chi_m}=\dfrac{T}{T'}$, so $\chi'_m=\dfrac{T}{T'}\times\chi_m=\dfrac{280}{320}\times 2.4\times 10^{-5}=2.1\times 10^{-5}$.

Since $\chi'_m=\dfrac{I'}{H}$, the new intensity of magnetisation is $I'=\chi'_m\times H=2.1\times 10^{-5}\times 2\times 10^{3}=4.2\times 10^{-2}\,\mathrm{A\,m^{-1}}$.
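The arithmetic above can be sanity-checked with a short Python sketch (the variable names are mine; Curie's law is applied exactly as in the solution):

```python
# Values from the worked solution above.
H = 2e3        # magnetising field H, in A/m
I = 4.8e-2     # intensity of magnetisation at T = 280 K, in A/m
T, T_new = 280.0, 320.0

chi = I / H                 # susceptibility chi_m = I / H at 280 K
chi_new = chi * T / T_new   # Curie's law: chi'_m / chi_m = T / T'
I_new = chi_new * H         # magnetisation I' = chi'_m * H at 320 K

print(chi, chi_new, I_new)  # ~2.4e-05, ~2.1e-05, ~0.042
```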
https://math.stackexchange.com/questions/601163/improper-multivariable-integrals
# Improper Multivariable Integrals
How can I find the values of $\alpha$ for which the following integrals (in $\mathbb{R}^n$) converge?
1. $\int_{|\vec{x}|\geq 1 } \frac{\ln(|\vec{x}|^3 )}{|\vec{x}|^\alpha}\, d\vec{x}$
2. $\int_{\mathbb{R}^n } \frac{\sin(|\vec{x}|)}{|\vec{x}|^\alpha}\, d\vec{x}$
I guess we need to compute the limit form of these integrals in some other coordinate system, where $r= \|\vec{x}\|$ ... I can't figure out what the Jacobian will be in this case, and I don't understand how to solve these questions.
Recall that $\def\vol{{\rm vol}}\vol_{n-1}(rS^{n-1}) = 2\frac{\pi^{n/2}}{\Gamma(n/2)} r^{n-1} =: \beta_{n-1} r^{n-1}$, we have $\def\abs#1{\left|#1\right|}$ \begin{align*} \int_{\abs x \ge 1} \frac{\log \abs x^3}{\abs x^\alpha}\, dx &= \int_{1}^\infty \int_{rS^{n-1}} \frac{\log r^3}{r^\alpha}\, dS(x)\, dr\\ &= \beta_{n-1} \int_1^\infty \frac{\log r^3}{r^\alpha} r^{n-1}\, dr\\ &= 3\beta_{n-1}\int_1^\infty {r^{ n - 1-\alpha}\cdot \log r}\, dr \end{align*} This converges if $n-1-\alpha < -1$, that is $\alpha > n$.
For the second case, arguing along the same lines, we have \begin{align*} \int_{\mathbb R^n} \frac{\sin\abs x}{\abs x^\alpha}\, dx &= \int_0^\infty \frac{\sin r}{r^\alpha}\beta_{n-1} r^{n-1}\, dr\\ &= \beta_{n-1}\int_0^\infty \sin r \cdot r^{n-1-\alpha}\, dr \end{align*} This converges "at $\infty$" if $n-1-\alpha < -1$, that is $\alpha > n$, and "at $0$" if $n-\alpha> -1$ (note that $\sin r\cdot r^{n-1-\alpha} = \frac{\sin r}{r} \cdot r^{n-\alpha}$), that is if $\alpha < n+1$. So we must have $\alpha \in (n, n+1)$.
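As a numerical sanity check on the first result (my addition, not part of the original answer): substituting $u=\ln r$ turns $3\int_1^\infty r^{n-1-\alpha}\ln r\,dr$ into $3\int_0^\infty u\,e^{-(\alpha-n)u}\,du = 3/(\alpha-n)^2$ for $\alpha > n$, which a simple trapezoid rule reproduces:

```python
import math

def radial_log_integral(n, alpha, u_max=50.0, steps=200_000):
    """Trapezoid-rule value of 3 * integral_1^inf r**(n-1-alpha) * ln(r) dr,
    computed after substituting u = ln r, i.e. 3 * integral_0^inf u*exp(-(alpha-n)*u) du."""
    a = alpha - n
    h = u_max / steps
    total = 0.0
    for i in range(steps + 1):
        u = i * h
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * 3.0 * u * math.exp(-a * u)
    return total * h

# Closed form is 3 / (alpha - n)**2, finite exactly when alpha > n.
print(radial_log_integral(3, 5))  # ≈ 0.75
```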
• Hi @martini, thanks for your answer. I can't understand the equality: $\int_{\abs x \ge 1} \frac{\log \abs x^3}{\abs x^\alpha}\, dx = \int_{1}^\infty \int_{rS^{n-1}} \frac{\log r^3}{r^\alpha}\, dS(x)\, dr\\...$ (i.e. - I can't understand how you passed from an integral over some region to two integrals , one over $1 ,\infty$ and the other over $rS^n$ (why $rS^n$?) Thanks ! – homogenity Dec 10 '13 at 15:10
• Note that $\{x \in \mathbb R^n \mid \left| x\right| \ge 1\} = \biguplus_{r \ge 1} rS^{n-1}$, where $S^{n-1}$ denotes the unit sphere, i. e. the set of vectors with unit length. – martini Dec 10 '13 at 15:15
• Hmmm... when writing $r\cdot S^{n-1}$ do you mean pointwise multiplication ? i.e. - $(xr| x\in S^{n-1} )$ ? – homogenity Dec 10 '13 at 17:44
• and why did you put $dS(x)$ ? why does this volume element depend on $x$ ? – homogenity Dec 10 '13 at 17:48
• (1) Yes, $rS^{n-1} = \{rx \mid x \in S^{n-1}\}$. And by writing $dS(x)$ I tried to make explicit that the variable with respect to which we integrate is $x$. – martini Dec 10 '13 at 18:18
https://tex.stackexchange.com/questions/2288/aligning-a-footer-on-the-outside-text-margins
# Aligning a footer on the outside text margins
Does anybody know how to align a header/footer on the outside of the text margins? For example:
\fancyfoot[LO]{\footnotesize \thepage~{\color{red}\vline}}
\fancyfoot[RE]{\footnotesize {\color{red}\vline}~\thepage}
What I want is to fix the position of \vline at the text-margin edge, and to make \thepage align left (or right) on the outside.
\fancyfoot[LO]{\footnotesize\leavevmode\llap{\thepage~}\textcolor{red}{\vline}}
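For context, here is a minimal compilable sketch of how the accepted one-liner fits into a fancyhdr document (the twoside class option, the xcolor package, and the \rlap-based even-page mirror are my assumptions, not part of the original answer):

```latex
% Minimal sketch: \llap typesets its argument in a zero-width box sticking out
% to the left, so \thepage hangs outside while \vline sits on the text margin.
\documentclass[twoside]{article}
\usepackage{fancyhdr,xcolor}
\pagestyle{fancy}
\fancyhf{}  % clear all header/footer fields first
\fancyfoot[LO]{\footnotesize\leavevmode\llap{\thepage~}\textcolor{red}{\vline}}
% Mirror for even pages: \rlap extends to the right of the text margin.
\fancyfoot[RE]{\footnotesize\leavevmode\textcolor{red}{\vline}\rlap{~\thepage}}
\begin{document}
Some body text.
\end{document}
```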
http://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-12th-edition/chapter-8-section-8-1-the-square-root-property-and-completing-the-square-8-1-exercises-page-512/65
## Intermediate Algebra (12th Edition)
$k=\left\{ \dfrac{-5-\sqrt{41}}{4},\dfrac{-5+\sqrt{41}}{4} \right\}$
$\bf{\text{Solution Outline:}}$ To solve the given equation, $2k^2+5k-2=0,$ first use the properties of equality to express the equation in the form $k^2\pm bk=c.$ Once in this form, complete the square by adding $\left( \dfrac{b}{2} \right)^2$ to both sides of the equal sign. Then express the left side as a square of a binomial while simplifying the right side. Then take the square root of both sides (Square Root Property) and use the properties of equality to isolate the variable. $\bf{\text{Solution Details:}}$ Using the properties of equality, in the form $k^2+bk=c,$ the given equation is equivalent to \begin{array}{l}\require{cancel} \dfrac{2k^2+5k-2}{2}=\dfrac{0}{2} \\\\ k^2+\dfrac{5}{2}k-1=0 \\\\ k^2+\dfrac{5}{2}k=1 .\end{array} In the equation above, $b= \dfrac{5}{2} .$ The expression $\left( \dfrac{b}{2} \right)^2$ evaluates to \begin{array}{l}\require{cancel} \left( \dfrac{\dfrac{5}{2}}{2} \right)^2 \\\\= \left( \dfrac{5}{2}\div{2} \right)^2 \\\\= \left( \dfrac{5}{2}\cdot\dfrac{1}{2} \right)^2 \\\\= \left( \dfrac{5}{4} \right)^2 \\\\= \dfrac{25}{16} .\end{array} Adding the value of $\left( \dfrac{b}{2} \right)^2$ to both sides of the equation above results in \begin{array}{l}\require{cancel} k^2+\dfrac{5}{2}k+\dfrac{25}{16}=1+\dfrac{25}{16} \\\\ k^2+\dfrac{5}{2}k+\dfrac{25}{16}=\dfrac{16}{16}+\dfrac{25}{16} \\\\ k^2+\dfrac{5}{2}k+\dfrac{25}{16}=\dfrac{41}{16} .\end{array} With the left side now a perfect square trinomial, the equation above is equivalent to \begin{array}{l}\require{cancel} \left( k+\dfrac{5}{4} \right)^2=\dfrac{41}{16} .\end{array} Taking the square root of both sides (Square Root Property), simplifying the radical, and then isolating the variable, the equation above is equivalent to \begin{array}{l}\require{cancel} k+\dfrac{5}{4}=\pm\sqrt{\dfrac{41}{16}} \\\\ k+\dfrac{5}{4}=\pm\sqrt{\dfrac{1}{16}\cdot41} \\\\ k+\dfrac{5}{4}=\pm\dfrac{1}{4}\sqrt{41} \\\\ k+\dfrac{5}{4}=\pm\dfrac{\sqrt{41}}{4} \\\\ k=-\dfrac{5}{4}\pm\dfrac{\sqrt{41}}{4} \\\\ k=\dfrac{-5\pm\sqrt{41}}{4} .\end{array} The solutions are \begin{array}{l}\require{cancel} k=\dfrac{-5-\sqrt{41}}{4} \\\\\text{OR}\\\\ k=\dfrac{-5+\sqrt{41}}{4} .\end{array} Hence, $k=\left\{ \dfrac{-5-\sqrt{41}}{4},\dfrac{-5+\sqrt{41}}{4} \right\} .$
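The claimed roots can be verified numerically by substituting them back into the original polynomial (a quick check of my own, not part of the textbook solution):

```python
import math

# Roots claimed above: k = (-5 ± sqrt(41)) / 4 for 2k^2 + 5k - 2 = 0.
def poly(k):
    return 2 * k**2 + 5 * k - 2

roots = [(-5 - math.sqrt(41)) / 4, (-5 + math.sqrt(41)) / 4]
residuals = [poly(k) for k in roots]
print(residuals)  # both residuals ≈ 0
```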
https://ww2.mathworks.cn/help/deeplearning/ref/dlarray.crossentropy.html
# crossentropy
## Description
The cross-entropy operation computes the cross-entropy loss between network predictions and target values for single-label and multi-label classification tasks.
The crossentropy function computes the cross-entropy loss between predictions and targets represented as dlarray data. Using dlarray objects makes working with high dimensional data easier by allowing you to label the dimensions. For example, you can label which dimensions correspond to spatial, time, channel, and batch dimensions using the "S", "T", "C", and "B" labels, respectively. For unspecified and other dimensions, use the "U" label. For dlarray object functions that operate over particular dimensions, you can specify the dimension labels by formatting the dlarray object directly, or by using the DataFormat option.
Note
To calculate the cross-entropy loss within a layerGraph object or Layer array for use with the trainNetwork function, use classificationLayer.
loss = crossentropy(Y,targets) returns the categorical cross-entropy loss between the formatted dlarray object Y containing the predictions and the target values targets for single-label classification tasks. The output loss is an unformatted dlarray scalar.
For unformatted input data, use the 'DataFormat' option.
loss = crossentropy(Y,targets,weights) applies weights to the calculated loss values. Use this syntax to weight the contributions of classes, observations, regions, or individual elements of the input to the calculated loss values.
loss = crossentropy(___,'DataFormat',FMT) also specifies the dimension format FMT when Y is not a formatted dlarray.
loss = crossentropy(___,Name,Value) specifies options using one or more name-value pair arguments in addition to the input arguments in previous syntaxes. For example, 'TargetCategories','independent' computes the cross-entropy loss for a multi-label classification task.
## Examples
Create an array of prediction scores for 12 observations over 10 classes.
numClasses = 10;
numObservations = 12;
Y = rand(numClasses,numObservations);
dlY = dlarray(Y,'CB');
dlY = softmax(dlY);
View the size and format of the prediction scores.
size(dlY)
ans = 1×2
10 12
dims(dlY)
ans =
'CB'
Create an array of targets encoded as one-hot vectors.
labels = randi(numClasses,[1 numObservations]);
targets = onehotencode(labels,1,'ClassNames',1:numClasses);
View the size of the targets.
size(targets)
ans = 1×2
10 12
Compute the cross-entropy loss between the predictions and the targets.
loss = crossentropy(dlY,targets)
loss =
1x1 dlarray
2.3343
Create an array of prediction scores for 12 observations over 10 classes.
numClasses = 10;
numObservations = 12;
Y = rand(numClasses,numObservations);
dlY = dlarray(Y,'CB');
View the size and format of the prediction scores.
size(dlY)
ans = 1×2
10 12
dims(dlY)
ans =
'CB'
Create a random array of targets encoded as a numeric array of zeros and ones. Each observation can have multiple classes.
targets = rand(numClasses,numObservations) > 0.75;
targets = single(targets);
View the size of the targets.
size(targets)
ans = 1×2
10 12
Compute the cross-entropy loss between the predictions and the targets. To specify cross-entropy loss for multi-label classification, set the 'TargetCategories' option to 'independent'.
loss = crossentropy(dlY,targets,'TargetCategories','independent')
loss =
1x1 single dlarray
9.8853
Create an array of prediction scores for 12 observations over 10 classes.
numClasses = 10;
numObservations = 12;
Y = rand(numClasses,numObservations);
dlY = dlarray(Y,'CB');
dlY = softmax(dlY);
View the size and format of the prediction scores.
size(dlY)
ans = 1×2
10 12
dims(dlY)
ans =
'CB'
Create an array of targets encoded as one-hot vectors.
labels = randi(numClasses,[1 numObservations]);
targets = onehotencode(labels,1,'ClassNames',1:numClasses);
View the size of the targets.
size(targets)
ans = 1×2
10 12
Compute the weighted cross-entropy loss between the predictions and the targets using a vector class weights. Specify a weights format of 'UC' (unspecified, channel) using the 'WeightsFormat' option.
weights = rand(1,numClasses);
loss = crossentropy(dlY,targets,weights,'WeightsFormat','UC')
loss =
1x1 dlarray
1.1261
## Input Arguments
Predictions, specified as a formatted dlarray, an unformatted dlarray, or a numeric array. When Y is not a formatted dlarray, you must specify the dimension format using the DataFormat option.
If Y is a numeric array, targets must be a dlarray.
Target classification labels, specified as a formatted or unformatted dlarray or a numeric array.
Specify the targets as an array containing one-hot encoded labels with the same size and format as Y. For example, if Y is a numObservations-by-numClasses array, then targets(n,i) = 1 if observation n belongs to class i, and targets(n,i) = 0 otherwise.
If targets is a formatted dlarray, then its format must be the same as the format of Y, or the same as DataFormat if Y is unformatted.
If targets is an unformatted dlarray or a numeric array, then the function applies the format of Y or the value of DataFormat to targets.
Tip
Formatted dlarray objects automatically permute the dimensions of the underlying data to have order "S" (spatial), "C" (channel), "B" (batch), "T" (time), then "U" (unspecified). To ensure that the dimensions of Y and targets are consistent, when Y is a formatted dlarray, also specify targets as a formatted dlarray.
Weights, specified as a dlarray or a numeric array.
To specify class weights, specify a vector with a 'C' (channel) dimension with size matching the 'C' (channel) dimension of Y. Specify the 'C' (channel) dimension of the class weights by using a formatted dlarray object or by using the 'WeightsFormat' option.
To specify observation weights, specify a vector with a 'B' (batch) dimension with size matching the 'B' (batch) dimension of Y. Specify the 'B' (batch) dimension of the observation weights by using a formatted dlarray object or by using the 'WeightsFormat' option.
To specify weights for each element of the input independently, specify the weights as an array of the same size as Y. In this case, if weights is not a formatted dlarray object, then the function uses the same format as Y. Alternatively, specify the weights format using the 'WeightsFormat' option.
### Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.
Before R2021a, use commas to separate each name and value, and enclose Name in quotes.
Example: 'TargetCategories','independent','DataFormat','CB' evaluates the cross-entropy loss for multi-label classification tasks and specifies the dimension order of the input data as 'CB'
Type of classification task, specified as the comma-separated pair consisting of 'TargetCategories' and one of the following:
• 'exclusive' — Single-label classification. Each observation in the predictions Y is exclusively assigned to one category. The function computes the loss between the target value for the single category specified by targets and the corresponding prediction in Y, averaged over the number of observations.
• 'independent'— Multi-label classification. Each observation in the predictions Y can be assigned to one or more independent categories. The function computes the sum of the loss between each category specified by targets and the predictions in Y for those categories, averaged over the number of observations. Cross-entropy loss for this type of classification task is also known as binary cross-entropy loss.
Mask indicating which elements to include for loss computation, specified as a dlarray object, a logical array, or a numeric array with the same size as Y.
The function includes and excludes elements of the input data for loss computation when the corresponding value in the mask is 1 and 0, respectively.
If Mask is a formatted dlarray object, then its format must match that of Y. If Mask is not a formatted dlarray object, then the function uses the same format as Y.
If you specify the DataFormat option, then the function also uses the specified format for the mask.
The size of each dimension of Mask must match the size of the corresponding dimension in Y. The default value is a logical array of ones.
Tip
Formatted dlarray objects automatically permute the dimensions of the underlying data to have this order: "S" (spatial), "C" (channel), "B" (batch), "T" (time), and "U" (unspecified). For example, dlarray objects automatically permute the dimensions of data with format "TSCSBS" to have format "SSSCBT".
To ensure that the dimensions of Y and the mask are consistent, when Y is a formatted dlarray, also specify the mask as a formatted dlarray.
Mode for reducing the array of loss values, specified as one of the following:
• "sum" — Sum all of the elements in the array of loss values. In this case, the output loss is scalar.
• "none" — Do not reduce the array of loss values. In this case, the output loss is an unformatted dlarray object with the same size as Y.
Divisor for normalizing the reduced loss when Reduction is "sum", specified as one of the following:
• "batch-size" — Normalize the loss by dividing it by the number of observations in X.
• "all-elements" — Normalize the loss by dividing it by the number of elements of X.
• "mask-included" — Normalize the loss by dividing the loss values by the number of included elements specified by the mask for each observation independently. To use this option, you must specify a mask using the Mask option.
• "none" — Do not normalize the loss.
Dimension order of unformatted input data, specified as a character vector or string scalar FMT that provides a label for each dimension of the data.
When you specify the format of a dlarray object, each character provides a label for each dimension of the data and must be one of the following:
• "S" — Spatial
• "C" — Channel
• "B" — Batch (for example, samples and observations)
• "T" — Time (for example, time steps of sequences)
• "U" — Unspecified
You can specify multiple dimensions labeled "S" or "U". You can use the labels "C", "B", and "T" at most once.
You must specify DataFormat when the input data is not a formatted dlarray.
Data Types: char | string
Dimension order of the weights, specified as a character vector or string scalar that provides a label for each dimension of the weights.
When you specify the format of a dlarray object, each character provides a label for each dimension of the data and must be one of the following:
• "S" — Spatial
• "C" — Channel
• "B" — Batch (for example, samples and observations)
• "T" — Time (for example, time steps of sequences)
• "U" — Unspecified
You can specify multiple dimensions labeled "S" or "U". You can use the labels "C", "B", and "T" at most once.
You must specify WeightsFormat when weights is a numeric vector and Y has two or more nonsingleton dimensions.
If weights is not a vector, or both weights and Y are vectors, then default value of WeightsFormat is the same as the format of Y.
Data Types: char | string
## Output Arguments
Cross-entropy loss, returned as an unformatted dlarray. The output loss is an unformatted dlarray with the same underlying data type as the input Y.
The size of loss depends on the 'Reduction' option.
## Algorithms
### Cross-Entropy Loss
For each element Yj of the input, the crossentropy function computes the corresponding cross-entropy element-wise loss values using the formula
${\text{loss}}_{j}=-\left({T}_{j}\text{ln}{Y}_{j}+\left(1-{T}_{j}\right)\text{ln}\left(1-{Y}_{j}\right)\right),$
where Tj is the corresponding target value to Yj.
To reduce the loss values to a scalar, the function then reduces the element-wise loss using the formula
$\text{loss}=\frac{1}{N}\sum _{j}{m}_{j}{w}_{j}{\text{loss}}_{j},$
where N is the normalization factor, mj is the mask value for element j, and wj is the weight value for element j.
If you do not opt to reduce the loss, then the function applies the mask and the weights to the loss values directly:
${\text{loss}}_{j}^{*}={m}_{j}{w}_{j}{\text{loss}}_{j}$
This table shows the loss formulations for different tasks.
Single-label classificationCross-entropy loss for mutually exclusive classes. This is useful when observations must have a single label only.
$\text{loss}=-\frac{1}{N}\sum _{n=1}^{N}\sum _{i=1}^{K}{T}_{ni}\text{ln}{Y}_{ni},$
where N and K are the numbers of observations and classes, respectively.
Multi-label classificationCross-entropy loss for independent classes. This is useful when observations can have multiple labels.
$\text{loss}=-\frac{1}{N}\sum _{n=1}^{N}\sum _{i=1}^{K}\left({T}_{ni}\mathrm{ln}\left({Y}_{ni}\right)+\left(1-{T}_{ni}\right)\mathrm{ln}\left(1-{Y}_{ni}\right)\right),$
where N and K are the numbers of observations and classes, respectively.
Single-label classification with weighted classesCross-entropy loss with class weights. This is useful for datasets with imbalanced classes.
$\text{loss}=-\frac{1}{N}\sum _{n=1}^{N}\sum _{i=1}^{K}{w}_{i}{T}_{ni}\text{ln}{Y}_{ni},$
where N and K are the numbers of observations and classes, respectively, and wi denotes the weight for class i.
Sequence-to-sequence classificationCross-entropy loss with masked time-steps. This is useful for ignoring loss values that correspond to padded data.
$\text{loss}=-\frac{1}{N}\sum _{n=1}^{N}\sum _{t=1}^{S}{m}_{nt}\sum _{i=1}^{K}{T}_{nti}\text{ln}{Y}_{nti},$
where N, S, and K are the numbers of observations, time steps, and classes, respectively, and mnt denotes the mask value for time step t of observation n.
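The exclusive and independent formulations above can be sketched in plain Python (a hand-rolled illustration of the formulas, not the MATLAB implementation); here rows index observations and columns index classes:

```python
import math

def crossentropy_exclusive(Y, T):
    # single-label: loss = -(1/N) * sum_n sum_i T[n][i] * ln(Y[n][i])
    N = len(Y)
    return -sum(T[n][i] * math.log(Y[n][i])
                for n in range(N) for i in range(len(Y[n]))) / N

def crossentropy_independent(Y, T):
    # multi-label (binary cross-entropy), averaged over observations
    N = len(Y)
    return -sum(T[n][i] * math.log(Y[n][i])
                + (1 - T[n][i]) * math.log(1 - Y[n][i])
                for n in range(N) for i in range(len(Y[n]))) / N

# two observations over two classes; each row of Y sums to 1
Y = [[0.7, 0.3], [0.2, 0.8]]
T = [[1, 0], [0, 1]]
loss_excl = crossentropy_exclusive(Y, T)    # -(ln 0.7 + ln 0.8) / 2
loss_indep = crossentropy_independent(Y, T)
```

For the exclusive case only the target class contributes per observation; the independent case also penalizes confident scores on absent classes.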
## Version History
Introduced in R2019b
https://www.gradesaver.com/textbooks/math/calculus/thomas-calculus-13th-edition/chapter-15-multiple-integrals-section-15-1-double-and-iterated-integrals-over-rectangles-exercises-15-1-page-875/36
## Thomas' Calculus 13th Edition
To evaluate $F_{yx}$ we use Fubini's Theorem to rewrite $F(x, y)$ as $\int^y_c\int^x_af(u,v)du dv$ and make a similar argument. The result is again $f(x,y)$.
Since $f$ is continuous on $R$, for fixed $u$ the function $f(u, v)$ is a continuous function of $v$ and has an antiderivative with respect to $v$ on $R$; call it $g(u,v)$. Then $\int^y_cf(u,v)dv = g(u,y)-g(u,c)$ and $F(x,y)=\int^x_a\int^y_cf(u,v)\, dv\, du = \int^x_a(g(u,y)-g(u,c))\,du.$ By the Fundamental Theorem of Calculus, $F_x=\frac{\partial}{\partial x}\int^x_a(g(u,y)-g(u,c))\, du = g(x,y)-g(x,c).$ Now taking the derivative with respect to $y$, we get $F_{xy}=\frac{\partial}{\partial y}(g(x,y)-g(x,c))=\frac{\partial g}{\partial y}(x,y)=f(x,y),$ since $g$ is an antiderivative of $f$ with respect to its second variable.
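As an illustration (a Python sketch with an arbitrarily chosen polynomial integrand, not part of the textbook answer), pick $f(u,v)=2u+3v$, compute $F(x,y)=\int_a^x\int_c^y f\,dv\,du$ in closed form, and check numerically that the mixed partial recovers $f(x,y)$:

```python
import math

a, c = 0.0, 0.0

def f(u, v):
    return 2 * u + 3 * v

def F(x, y):
    # F(x, y) = integral_a^x integral_c^y f(u, v) dv du, in closed form
    return (x**2 - a**2) * (y - c) + 1.5 * (y**2 - c**2) * (x - a)

x, y, h = 1.3, 0.7, 1e-3
# central finite difference for the mixed partial F_xy
fd = (F(x + h, y + h) - F(x + h, y - h)
      - F(x - h, y + h) + F(x - h, y - h)) / (4 * h * h)
```

The finite difference agrees with $f(x,y)=2x+3y$ to rounding error, as the theorem predicts.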
https://proxies-free.com/fa-functional-analysis-strict-rieszs-rearrangement-inequality/
# fa.functional analysis – Strict Riesz’s rearrangement inequality
(Continuing from the last question on the Riesz rearrangement inequality.) In Lieb and Loss's book, they present the strict Riesz rearrangement inequality in Section 3, Theorem 3.9 (page 93). They say that when the functions f, g, h are all nonnegative, and if g is strictly symmetric decreasing, then the Riesz rearrangement inequality holds and the "=" holds iff f and h are translations of $f^*, h^*$. Namely, if f, g, h are all nonnegative, then
$$\iint_{\mathbb{R}^n\times \mathbb{R}^n} f(x)\, g(x-y)\, h(y) \, dx\,dy \le \iint_{\mathbb{R}^n\times \mathbb{R}^n} f^*(x)\, g^*(x-y)\, h^*(y) \, dx\,dy \tag{1}$$
and if g is strictly symmetric decreasing, then there is equality only if $f=T(f^*), h=T(h^*)$ for some translation $T$. I want to ask whether, when we remove the nonnegativity condition, the "=" still holds iff f and h are translations of $f^*, h^*$. For example, let $g(x)=-\ln x$, which is strictly symmetric decreasing. In this case, we know that (1) still holds. Does equality hold in (1) only if f and h are translations of $f^*, h^*$?
https://danmackinlay.name/notebook/orthonormal_matrices.html
# Orthonormal and unitary matrices
## Energy preserving operators, generalized rotations
In which I think about parameterisations and implementations of finite dimensional energy-preserving operators, a.k.a. matrices. A particular nook in the linear feedback process library, closely related to stability in linear dynamical systems, since every orthonormal matrix is the forward operator of an energy-preserving system, which is an edge case for certain natural types of stability. Also important in random low-dimensional projections.
Uses include maintaining stable gradients in recurrent neural networks and efficient invertible normalising flows . Also, parameterising stable Multi-Input-Multi-Output (MIMO) delay networks in signal processing.
There is some terminological work to be done. Some writers refer to orthogonal matrices (but I prefer that to mean matrices where the columns are not necessarily length 1), and some refer to unitary matrices, which seems to imply the matrix is over the complex field instead of the reals but is basically the same from my perspective.
We also might want to consider the implied manifolds upon which these objects live, the Stiefel manifold. Formally, the Stiefel manifold $$\mathcal{V}_{k, m}$$ is the space of $$k$$ frames in the $$m$$ -dimensional real Euclidean space $$\mathbb{R}^{m},$$ represented by the set of $$m \times k$$ matrices $$\mathrm{M}$$ such that $$\mathrm{M}^{\prime} \mathrm{M}=\mathrm{I}_{k},$$ where $$\mathrm{I}_{k}$$ is the $$k \times k$$ identity matrix. Usually my purposes are served here by $$k=m$$. There are some interesting cases in low dimensional projections served by $$k<m,$$ including $$k=1.$$
Finding an orthonormal matrix is equivalent to choosing a finite orthonormal basis, so any way we can parameterise such a basis gives us an orthonormal matrix.
NB the normalisation implies that the basis for an $n\times n$ matrix has at most $n(n-1)$ free parameters.
## Take the QR decomposition
HT Russell Tsuchida for pointing out that the $$\mathrm{Q}$$ matrix in the QR decomposition, $$\mathrm{M}=\mathrm{Q}\mathrm{R}$$ by construction gives me an orthonormal matrix from any square matrix. Likewise with the $$\mathrm{U},\mathrm{V}$$ matrices in the $$\mathrm{M}=\mathrm{U}\Sigma \mathrm{V}^*$$ SVD. This construction is overparameterised, with $$n^2$$ free parameters.
The construction of the QR decomposition Householder reflections is, Wikipedia reckons, $$\mathcal{O}(n^3)$$ multiplications for an $$n\times n$$ matrix.
I wonder what the distribution of such matrices is for some, say, matrix with independent standard Gaussian entries? Nick Higham has the answer, in his compact introduction to random orthonormal matrices. A uniform, rotation-invariant distribution is given by the Haar measure over the group of orthogonal matrices. He also gives the construction for drawing them by random Householder reflections derived from random standard normal vectors. See random rotations.
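As a dependency-free illustration (a sketch only; in practice one would just take $\mathrm{Q}$ from a library QR routine such as numpy.linalg.qr), classical Gram–Schmidt orthonormalises the columns of a nonsingular square matrix:

```python
import math

def gram_schmidt(cols):
    """Orthonormalise a list of column vectors by classical Gram-Schmidt."""
    q = []
    for v in cols:
        w = list(v)
        for u in q:
            # subtract the component of v along each previously built basis vector
            dot = sum(vi * ui for vi, ui in zip(v, u))
            w = [wi - dot * ui for wi, ui in zip(w, u)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        q.append([wi / norm for wi in w])
    return q

# columns of an arbitrary nonsingular 2x2 matrix
Q = gram_schmidt([[3.0, 4.0], [1.0, 2.0]])
```

Here the resulting columns are [0.6, 0.8] and [-0.8, 0.6]: unit length and mutually orthogonal.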
## Iterative normalising
Have a nearly orthonormal matrix? Berg et al. (2018) gives a contraction which gets us closer to an orthonormal matrix: $\mathrm{Q}^{(k+1)}=\mathrm{Q}^{(k)}\left(\mathrm{I}+\frac{1}{2}\left(\mathrm{I}-\mathrm{Q}^{(k) \top} \mathrm{Q}^{(k)}\right)\right).$ This reputedly converges if $\left\|\mathrm{Q}^{(0) \top} \mathrm{Q}^{(0)}-\mathrm{I}\right\|_{2}<1.$ They attribute this to Björck and Bowie (1971) and Kovarik (1970), wherein it is derived from the Newton iteration for solving the orthonormality condition $\mathrm{Q}^{-1}-\mathrm{Q}^{\top}=0.$ Here the iterations are clearly $\mathcal{O}(n^2).$ An $\mathcal{O}(n)$ option would be nice.
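A minimal $2\times 2$ sketch of this iteration in plain Python (the starting matrix is chosen so the stated convergence condition holds):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    n = len(A)
    return [[A[j][i] for j in range(n)] for i in range(n)]

def step(Q):
    # Q <- Q (I + (1/2)(I - Q^T Q)) = Q ((3/2) I - (1/2) Q^T Q)
    E = matmul(transpose(Q), Q)
    n = len(Q)
    C = [[(1.5 if i == j else 0.0) - 0.5 * E[i][j] for j in range(n)]
         for i in range(n)]
    return matmul(Q, C)

Q = [[1.0, 0.1], [0.0, 1.0]]          # nearly orthonormal: ||Q^T Q - I||_2 < 1
for _ in range(30):
    Q = step(Q)

E = matmul(transpose(Q), Q)
err = max(abs(E[i][j] - (1.0 if i == j else 0.0))
          for i in range(2) for j in range(2))
```

After a handful of iterations $\mathrm{Q}^\top\mathrm{Q}$ is the identity to machine precision.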
## Perturbing an existing orthonormal matrix
Unitary transforms map unitary matrixes to unitary matrixes. We can even start from the identity matrix and perturb it.
### Householder reflections
We can apply successive reflections about hyperplanes, the so called Householder reflections, to an orthonormal matrix to construct a new one. For a unit vector $$v$$ the associated Householder reflection is $\mathrm{H}(v)=\mathrm{I}-2vv^{*}.$ NB $$\det \mathrm{H}=-1$$ so we need to apply an even number of Householder reflections to preserve orthonormality.
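A concrete $2\times 2$ check (a sketch; $v$ is an arbitrary unit vector) that $\mathrm{H}(v)=\mathrm{I}-2vv^{*}$ sends $v$ to $-v$, is orthonormal, and has determinant $-1$:

```python
v = (0.6, 0.8)                         # unit vector: 0.36 + 0.64 = 1
H = [[(1.0 if i == j else 0.0) - 2.0 * v[i] * v[j] for j in range(2)]
     for i in range(2)]

Hv = [sum(H[i][j] * v[j] for j in range(2)) for i in range(2)]   # should be -v
HtH = [[sum(H[k][i] * H[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]                                        # should be I
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]                      # should be -1
```

The determinant of $-1$ is exactly why reflections must be applied in pairs to stay in the rotation group.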
### Givens rotation
One obvious method for constructing unitary matrices is composing Givens rotations, which are atomic rotations about 2 axes.
A Givens rotation is represented by a matrix of the form ${\displaystyle \mathrm{G}(i,j,\theta )={\begin{bmatrix}1&\cdots &0&\cdots &0&\cdots &0\\\vdots &\ddots &\vdots &&\vdots &&\vdots \\0&\cdots &c&\cdots &-s&\cdots &0\\\vdots &&\vdots &\ddots &\vdots &&\vdots \\0&\cdots &s&\cdots &c&\cdots &0\\\vdots &&\vdots &&\vdots &\ddots &\vdots \\0&\cdots &0&\cdots &0&\cdots &1\end{bmatrix}},}$ where $c = \cos \theta$ and $s = \sin \theta$ appear at the intersections of the $i$th and $j$th rows and columns. The product $\mathrm{G}(i,j,\theta)x$ represents a $\theta$-radian counterclockwise rotation of the vector $x$ in the $(i,j)$ plane.
## Cayley map
The Cayley map maps the skew-symmetric matrices to the orthogonal matrices of positive determinant, and parameterizing skew-symmetric matrices is easy: just take the upper triangular component of some matrix and flip/negate it. This still requires a matrix inversion in general, AFAICS.
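A $2\times 2$ sketch of the map in the form $\mathrm{Q}=(\mathrm{I}-\mathrm{A})(\mathrm{I}+\mathrm{A})^{-1}$ (one common convention for the Cayley map; the inverse is written out in closed form for this small case):

```python
a = 0.7
A = [[0.0, -a], [a, 0.0]]              # skew-symmetric: A^T = -A

d = 1.0 + a * a                        # det(I + A) for this 2x2 case
inv_IplusA = [[1.0 / d, a / d],
              [-a / d, 1.0 / d]]       # (I + A)^{-1} in closed form
I_minus_A = [[1.0, a], [-a, 1.0]]

Q = [[sum(I_minus_A[i][k] * inv_IplusA[k][j] for k in range(2))
      for j in range(2)] for i in range(2)]

QtQ = [[sum(Q[k][i] * Q[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]              # should be I
detQ = Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]   # should be +1
```

As claimed, the image is orthonormal with positive determinant, i.e. a rotation.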
## Parametric sub families
Citing MATLAB, Nick Higham gives the following two parametric families of orthonormal matrices. These are clearly far from covering the whole space of orthonormal matrices.
$q_{ij} = \displaystyle\frac{2}{\sqrt{2n+1}}\sin \left(\displaystyle\frac{2ij\pi}{2n+1}\right)$
$q_{ij} = \sqrt{\displaystyle\frac{2}{n}}\cos \left(\displaystyle\frac{(i-1/2)(j-1/2)\pi}{n} \right)$
Another one: the matrix exponential of a skew-symmetric matrix is orthonormal. If $$\mathrm{A}=-\mathrm{A}^{T}$$ then $\left(e^{\mathrm{A}}\right)^{-1}=\mathrm{e}^{-\mathrm{A}}=\mathrm{e}^{\mathrm{A}^{T}}=\left(\mathrm{e}^{\mathrm{A}}\right)^{T}.$
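A quick check of this last claim (a sketch using a truncated power series for the matrix exponential; in $2\times 2$ the result should be the plane rotation by $\theta$):

```python
import math

theta = 0.9
A = [[0.0, -theta], [theta, 0.0]]      # skew-symmetric: A^T = -A

# matrix exponential via truncated power series: sum_k A^k / k!
E = [[1.0, 0.0], [0.0, 1.0]]
term = [[1.0, 0.0], [0.0, 1.0]]
for k in range(1, 30):
    term = [[sum(term[i][m] * A[m][j] for m in range(2)) / k
             for j in range(2)] for i in range(2)]
    E = [[E[i][j] + term[i][j] for j in range(2)] for i in range(2)]

# for this A, exp(A) is the rotation by theta, hence orthonormal
R = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta), math.cos(theta)]]
```

The series agrees with the rotation matrix to machine precision, and rotations are orthonormal by construction.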
## Structured
Orthogonal convolutions? TBD
## Random distributions over
See random rotations.
## References
Anderson, T. W., I. Olkin, and L. G. Underhill. 1987. SIAM Journal on Scientific and Statistical Computing 8 (4): 625–29.
Arjovsky, Martin, Amar Shah, and Yoshua Bengio. 2016. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, 1120–28. ICML’16. New York, NY, USA: JMLR.org.
Berg, Rianne van den, Leonard Hasenclever, Jakub M. Tomczak, and Max Welling. 2018. In Uai18.
Björck, Å., and C. Bowie. 1971. SIAM Journal on Numerical Analysis 8 (2): 358–64.
De Sena, Enzo, Huseyin Haciihabiboglu, Zoran Cvetkovic, and Julius O. Smith. 2015. IEEE/ACM Transactions on Audio, Speech, and Language Processing 23 (9): 1478–92.
Edelman, Alan, and N. Raj Rao. 2005. Acta Numerica 14 (May): 233–97.
Hasenclever, Leonard, Jakub M Tomczak, and Max Welling. 2017. “Variational Inference with Orthogonal Normalizing Flows,” 4.
Hendeković, J. 1974. Chemical Physics Letters 28 (2): 242–45.
Jarlskog, C. 2005. Journal of Mathematical Physics 46 (10): 103508.
Jing, Li, Yichen Shen, Tena Dubcek, John Peurifoy, Scott Skirlo, Yann LeCun, Max Tegmark, and Marin Soljačić. 2017. In PMLR, 1733–41.
Kovarik, Zdislav. 1970. SIAM Journal on Numerical Analysis 7 (3): 386–89.
Menzer, Fritz, and Christof Faller. 2010.
Mezzadri, Francesco. 2007. “How to Generate Random Matrices from the Classical Compact Groups” 54 (5): 13.
Mhammedi, Zakaria, Andrew Hellicar, Ashfaqur Rahman, and James Bailey. 2017. In PMLR, 2401–9.
Regalia, P., and M. Sanjit. 1989. SIAM Review 31 (4): 586–613.
Schroeder, Manfred R. 1961. The Journal of the Acoustical Society of America 33 (8): 1061–64.
Schroeder, Manfred R., and B. Logan. 1961. Audio, IRE Transactions on AU-9 (6): 209–14.
Tilma, Todd, and E C G Sudarshan. 2002. Journal of Physics A: Mathematical and General 35 (48): 10467–501.
Valimaki, v., and T. I. Laakso. 2012. “Fractional Delay Filters-Design and Applications.” In Nonuniform Sampling: Theory and Practice, edited by Farokh Marvasti. Springer Science & Business Media.
https://codegolf.stackexchange.com/questions/11130/advanced-code-golf-disk-operations-and-file-allocation
Advanced Code Golf - Disk Operations, and File Allocation
Good Afternoon Golfgeneers.
This is reasonably long and detailed question. Given what it was asking, it needed to be. If you have any questions, please ask them. If there is anything that isn't clear, please alert me to it so I can fix it. This is probably on the tougher side of codegolf.
We are building a lightweight computer, and we need the lightest weight file system possible. The shortest code will be chosen.
We are providing a cutting edge 65536 byte hard drive. For the sake of this prototype, it will be a direct image file, which your program can assume exists, and is in whichever location suits you - ie, a binary file representing the entire hard disk. You may assume that this image is already 'formatted' - ie. if your program relies on something being in the file to work, it can be. If you require the initial empty state to be something other then all-zeros, please state what it is.
There is no memory limit as to RAM used by your application.
The input and output commands will require an interface to the actual hard drive. Like the disk image, your program can assume that the file for input exists, and is wherever you want it to be. Likewise, your program can output wherever is convenient. It must however close the file after executing the input or output command.
You are not being provided with a format which you must use for the disc image - you are free to develop your own. It must be capable of storing up to 248 files. Any file greater than 256 bytes can count as a new file, for the sake of this limit, for every 256 bytes or part thereof. A file can be up to 63488 bytes. Basically - it must be as capable as a hard drive with 248 sectors of 256 bytes each.
The reasoning behind these seemingly arbitrary sizes is to give you 2048 bytes of 'administration' - to store details of the files. Each file/folder must be accessible by a name of 4 alphanumeric characters, which may be case sensitive or insensitive as per your preference. If your program supports names of 4 or fewer characters, then there is a bonus of a 0.95 multiplier.
Your program must accept, via stdin, the following commands. Parameters will be separated by a space. The command will be terminated by a newline.
• L - List the names to stdout of all the current files and their sizes in bytes, separated by newlines.
• C a b - Copy file a to new file b.
• D a - Delete file a
• R a b - Renames file a to new name b
• I a - Adds the input file (see note above) as file a
• O a - Outputs file a to the output file
The following errors may be reported to STDOUT or STDERR as valid reasons for a command to fail to execute. You may choose to only print ERR# where # is the number of the error:
• 1 - File doesn't exist
• 2 - File already exists
• 3 - Out of space*
* Note that your program can not issue this just because it is out of contiguous space. If you still have sectors available, you must defragment the disk to make it work.
A folder system is optional - however, it will net a bonus of a 0.8 multiplier to your score. If it supports more than 1 level of directory, it will net a bonus of a 0.7 multiplier (not in addition to the 0.8). For the bonus, you must have
• L, R, C and D only work within the current directory. L must list folders in the current directory, as well as the files.
• New command M a b moves file a to folder b. If b is '.', moves the file to the parent directory
• New command G a goes to folder a. If a is '.', goes to parent folder
• R must also rename folders
• D must also delete folders, and any files/folders within them
• C must also copy folders, and any files/folders within them
The following additional errors may be reported to STDOUT or STDERR as valid reasons for a command to fail to execute.
• 4 - Folder doesn't exist
• 5 - File, not folder, required - I and O require file names, and a folder was given
Your score is:
• The size, in bytes, of your source code
• Multiplied by
• 0.95 if you support names of 4, or less characters
• 0.8 if you support a single level of folders
• 0.7 if you support multiple levels of folders
• 0.95 if you support commands (not necessarily file names) in lower or uppercase
Good luck.
• I am willing to make allowances for languages which do not support something required by this challenge. Unfortunately, I don't think I can make it work just via command line parameters for GolfScript. – lochok Apr 5 '13 at 7:07
• Looks complex enough to need a good test suite. – Peter Taylor Apr 5 '13 at 7:40
• I'll start working on one - but may not be done today – lochok Apr 5 '13 at 7:41
• Are the score multipliers compounded? – jdstankosky Apr 5 '13 at 13:34
• Compounded. Note that you can only get one of the single level or multiple levels of folders though. – lochok Apr 5 '13 at 22:29
Ruby, score 505.4 (560 characters)
x,a,b=gets.chomp.split
e=f[0,4*X=248].unpack 'A4'*X
s=f[4*X,2*X].unpack 's'*X
d=f[6*X..-1].unpack 'A'+s*'A'
u&&$><<"ERR1\n"||v&&$><<"ERR2\n"||w&&$><<"ERR3\n"||eval(y)
d=d*""
d.size>63488&&$><<"ERR3\n"||IO.write('F',(e+s+[d]).pack('A4'*X+'s'*X+'A63488'))
Notes:
• Filesystem is located in file F in the current directory. F must exist and may be created/formatted via the following command: IO.write('F',(([""]*248)+([0]*248)+[""]).pack('A4'*248+'s'*248+'A63488')).
• Input file is always I also in the current directory, output file is O.
• Since it wasn't required to check errors, be sure to type correct commands (i.e. no unsupported commands, no missing arguments, too long filenames).
• The file system implementation is extremely simple - for each command the full hard drive is read into memory and rebuilt upon (successful) completion.
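The layout that the Ruby pack/unpack strings describe - 248 space-padded 4-byte names, 248 16-bit sizes, then one shared data area - can be mirrored in Python roughly like this. This is only a sketch of the same on-disk format, not part of the answer; note Ruby's `'s'` directive is native-endian while this sketch pins little-endian explicitly:

```python
import struct

X = 248  # number of directory entries

def format_disk():
    """An empty disk image: names, sizes, then the 63488-byte data area."""
    return b" " * (4 * X) + struct.pack("<%dh" % X, *([0] * X)) + b" " * 63488

def read_dir(img):
    """Unpack the directory region: 248 names and 248 sizes."""
    names = [img[i * 4:(i + 1) * 4].rstrip().decode() for i in range(X)]
    sizes = list(struct.unpack("<%dh" % X, img[4 * X:6 * X]))
    return names, sizes
```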
Bonuses:
• Filenames may be 1-4 chars
• Commands may be upper or lower case
The code is not yet fully golfed but already shows that for a substantially better score I'd have to try a completely different approach.
Test session (only STDIN/STDOUT is shown but of course each command is prepended by calling the above program):
> L
> I F001
> L
F001 558
> I F001
ERR2
> C F002 F003
ERR1
> C F001 F003
> L
F001 558
F003 558
> C F001 F003
ERR2
> R F002 F003
ERR1
> R F001 F003
ERR2
> R F001 F002
> L
F002 558
F003 558
> O F001
ERR1
> O F002
> L
F002 558
F003 558
> D F001
ERR1
> D F002
> L
F003 558
> C F003 F001
> L
F001 558
F003 558
> D F001
> L
F003 558
> D F003
> L
Tcl, score 487,711 (772 bytes)
{*}[set a {interp alias {}}] s {} dict se f
{*}$a u {} dict un f
{*}$a h {} dict g
proc 0 {} {return -level 1}
proc p {n a b} {foreach l $n {proc$l $a "global f c;$b;S"}}
p L\ l {} {puts [join [dict k [h $f {*}$c]] \n]}
p C\ c a\ b {s {*}$c $b [h $f {*}$c $a]}
p D\ d a {u {*}$c $a}
p R\ r a\ b {s {*}$c $b [h $f {*}$c $a];u {*}$c $a}
p I\ i a {set i [open i rb];s {*}$c$a [read $i];close$i}
p O\ o a {set o [open o wb];chan puts $o [h $f {*}$c $a];close $o}
p M\ m a\ b {set d $c;if {$b eq "."} {set d [lrange $c 0 end-1]} {lappend d $b};s {*}$d $a [h $f {*}$c $a];u {*}$c $a}
p G\ g a {if {$a eq "."} {set c [lrange $c 0 end-1]} {lappend c $a}}
p S {} {puts [set o [open F wb]] $f;close $o;return}
set f [read [set i [open F rb]]]
close $i
set c {}
while 1 {{*}[split [gets stdin]]}
Bonuses (gotta catch them all):
• support file names with 4 bytes or less - or more. I don't care. 0.95
• support multilevel folders 0.7
• support lower and upper case commands 0.95
Known limitations:
• File system F must already exist. Empty or whitespace is fine.
• I don't check the size of the file system - I don't care.
• Input file is i, output file is o and the filesystem is in F
• Errors will crash the program.
• No checks if file/directory exist, might be an error.
• No difference between a file and a directory. Yes, you can write a directory to output and use this as file system.
• Using a file as directory that is not a valid file system will result in an error.
• No input validation. Invalid commands may throw an error. Or not: eval puts [expr 1+2]
• No command to create a directory (was not my idea), but one will be created implicitly by I (also not my idea, and G doesn't create)
• G does not validate the directory.
• Filenames could contain spaces, but the interface does not support them.
• Concurrent modification of the same file system is not supported.
Some hacks:
• eval puts $c -> current directory, separated by spaces.
• exit - no comment.
• lappend c . -> switches into the subdirectory .
Python 2.7 373 (413 bytes)
from pickle import*
E=exit;o=open;D,I,W='DIw'
def T(i):w[i]in d and E('ERR2')
w=raw_input().split();b=w[0].upper()
if b=='L':
 for k in d:print k,len(d[k])
http://www.chegg.com/homework-help/questions-and-answers/big-inductance-coil-coneccted-capacitor-c-100-f-resistor-r-40-ac-circuit-input-voltage-u1--q2613439
How big will the inductance of a coil connected in series with capacitor C=100 $$\mu$$F and resistor R=40 $$\Omega$$ have to be, in an AC circuit with input voltage U1=(220+2*17) V and frequency 50 Hz, if the power factor of the circuit is 0.6?
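A quick numeric check of the stated problem (note the input voltage is not actually needed to find L; only R, C, f and the power factor enter):

```python
import math

R, C, f, pf = 40.0, 100e-6, 50.0, 0.6
w = 2 * math.pi * f
Xc = 1 / (w * C)                  # capacitive reactance, about 31.8 ohm
Z = R / pf                        # |Z| from cos(phi) = R/|Z|
X = Z * math.sqrt(1 - pf ** 2)    # net reactance |XL - Xc|, about 53.3 ohm
L = (X + Xc) / w                  # inductive branch: XL = X + Xc, L about 0.27 H
```

There is formally a second root, XL = Xc − X, but it is negative here, so only the inductive solution survives.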
http://physics.stackexchange.com/tags/inertia/new
# Tag Info
0
If the fluid inside the shell is frictionless and without bubbles or sloshing, it may be treated as a solid which does not rotate, but just slides down the ramp. This provides one component of instantaneous acceleration, which you would add to the instantaneous acceleration of the hollow ball. It seems to me that with this method, you needn't combine the ...
0
What exactly is mass? My answer will encompass four very different regimes each separated by a factor of 10^27. I'll first look at things on the order of 10^-27 kg, then 10^0 kg (1 kg), then 10^27 kg, and finally 10^54 kg. Mass at the scale of 10^-27 kg This is the domain of atoms and elementary particles. A proton has a mass of 1.672621777×10^-27 kg, to ...
1
What is physics? Physics is the modeling with mathematics of observations in the world around us. It is a way of creating a logical sequence that can be predictive and not only explanatory. It reduces the innumerable constants one would need to describe, for example , the trajectory of a ball with just space coordinates, to a simple parabolic function that ...
0
Since your tags are "newtonian-gravity" and "mass" I will attempt to answer this question in a classical framework. In classical mechanics, mass is essentially defined as a measure of an object's inertia. Let me explain further. We have Newton's second law $$F_\text{net}=ma$$ which is assumed to hold for all objects in classical mechanics. Suppose we ...
3
In classical physics mass has two definitions: It measures the amount of inertia that you have. In order to accelerate something you have to apply a force to it. The heavier your thing is, the less it will accelerate, $$a = \frac{F}{m} \, .$$ If you know the force and can measure the acceleration, you have access to the mass. In the physics of gravity, ...
1
It happens, it's just (as Sidd said) not as noticeable. I would like to suggest the following experiment: wear a chain, necklace, or bracelet--something with some dangle to it. You should see that during braking/acceleration it no longer hangs straight down.
0
The inertial force is proportional to both the acceleration of the system and the mass of the object in question. Hair are light, and thus you don't see the effects without severe acceleration. The clothing are worn, and hence supported by one's body. So they won't move till the body does.
https://stats.stackexchange.com/questions/436970/predicting-binary-outcomes-for-observations-given-statistics-on-binned-data
# Predicting binary outcomes for observations given statistics on binned data
SAT Verbal scores range from 200 to 800 in increments of 10. MIT says that for the class of 2023, the acceptance rates were, for various score ranges
• 750-800 10% = 677/6504
• 700-740 06% = 312/5039
• 650-690 03% = 87/2614
• 600-640 01% = 11/1091
• 200-590 00% = 3/688
How would you estimate the probability of acceptance for a given score, say 750, given this data? You could say 10%, but in reality the probability is likely lower for scores of 750 than 800. You can fit smooth monotonic functions to acceptance rate vs score to the binned data, but there is no unique solution.
• Percentages provided are only approximate: You might use $(677+312)/(6504+5039) = 0.0857.$ So maybe say about 8%. Fitting a smooth curve to all the data may not be helpful. Because you don't know what other relevant criteria are correlated with SAT verbal scores. Nov 20 '19 at 6:46
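The per-bin rates and the pooled figure from the comment can be reproduced directly (counts as quoted in the question):

```python
bins = {  # score range -> (accepted, applicants)
    "750-800": (677, 6504),
    "700-740": (312, 5039),
    "650-690": (87, 2614),
    "600-640": (11, 1091),
    "200-590": (3, 688),
}
rates = {k: a / n for k, (a, n) in bins.items()}
pooled = (677 + 312) / (6504 + 5039)  # top two bins pooled, about 8.6%
```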
https://math.stackexchange.com/questions/4112110/need-a-pure-geometric-solution-to-a-20-30-130-triangle-question
# Need a pure geometric solution to a $20-30-130$ triangle question
In $$\triangle{ABC}$$, $$\angle{ABC}=20^{\circ}$$, $$\angle{ACB}=30^{\circ}$$, $$D$$ is a point inside the triangle and $$\angle{ABD}=10^{\circ}$$, $$\angle{ACD}=10^{\circ}$$, find $$\angle{CAD}$$.
Note: I have seen some very similar questions with beautiful solutions in pure geometric form. I know how to solve this problem trigonometrically, but I think this problem deserves a beautiful geometric approach as a solution, and that's why I post it here.
As requested, here is an approach applying Ceva's theorem in trigonometric form,
\begin{align*} \tan x&=\frac{\sin 130^\circ}{\cos 130^\circ+2\cos 10^\circ}=\frac{\sin 120^\circ\cos 10^\circ+\cos 120^\circ\sin 10^\circ}{\cos 120^\circ\cos 10^\circ-\sin 120^\circ\sin 10^\circ+2\cos 10^\circ}\\ &=\frac{\sqrt{3}\cos 10^\circ-\sin 10^\circ}{3\cos 10^\circ-\sqrt{3}\sin 10^\circ}=\frac{1}{\sqrt{3}}=\tan 30^\circ\Longrightarrow x=\boxed{30^\circ} \end{align*}
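The closed-form value can be sanity-checked numerically, independently of the derivation (a quick sketch):

```python
import math

d = math.radians
# tan(x) = sin(130) / (cos(130) + 2 cos(10)), angles in degrees
x = math.degrees(math.atan(math.sin(d(130)) / (math.cos(d(130)) + 2 * math.cos(d(10)))))
```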
• You should know that the community prefers/expects a question to include something of what the asker knows about the problem. (What have you tried? Where did you get stuck? etc) This helps answerers tailor their responses to best serve you, without wasting time (theirs or yours) explaining things you already understand or using techniques beyond your skill level. (It also helps convince people that you aren't simply trying to get them to do your homework for you. An isolated problem statement with no evidence of personal effort makes a poor impression, attracting down- and close-votes.) – Blue Apr 22 at 9:16
• I can help but before that, you need to show your effort. Did you at least attempt using Trigonometric form of Ceva's theorem or law of sines? Did you get success? If you tried a geometric solution, what construction did you do? Where did you get stuck? – Math Lover Apr 22 at 9:27
• The point is not whether you can solve it. Did you solve it? If yes, then why not share all your work and turn this a good question as per site guidelines? – Math Lover Apr 22 at 9:48
• @Blue Yeah I know what you mean now. You are right that people would not like a do-my-homework-for-me question. That's a good reminder question posting, truly. I have added a note to the question. Last time I posted a geometric problem and provided my algebraic solution and asked for pure geometric approach, the admin saw it and thought I was showing off, and closed my question. So it's hard to guess what people think and I am still learning on that part... – r ne Apr 22 at 10:44
• @MathLover Done. Added. – r ne Apr 22 at 11:12
Please extend line segment $$BA$$. We have $$\angle CAE = 50^\circ$$. Draw $$\angle ACE = 50^\circ$$. We have $$CE = AE$$.
So, $$\angle BCE = \angle BEC = 80^\circ$$. $$BD$$ is the angle bisector of isosceles triangle $$\triangle CBE$$, where $$BC=BE$$.
Therefore $$CD = DE$$. As $$\angle DCE = 60^\circ$$, $$\triangle DCE$$ is an equilateral triangle and $$DE = CE = AE$$. So $$\triangle AED$$ is an isosceles triangle with $$\angle AED = 20^\circ$$.
That leads to $$\angle DAE = 80^\circ \implies \angle DAC = 30^\circ$$.
• Cool. This is very like the PDF solution. I should have got this. Thank you! – r ne Apr 22 at 11:37
• you are welcome. – Math Lover Apr 22 at 11:38
COMMENT.- This could be another nice way to prove that $$x=30^\circ$$.
• Here you have A, E, G determined, so F is also determined, now you have to prove that $\triangle{EFC}$ is isosceles. If this is proven, the rest is correct. – r ne Apr 26 at 3:30
https://mathematica.stackexchange.com/questions/109697/how-can-i-optimize-the-search-between-two-lists-one-that-is-changing-length
# How can I optimize the search between two lists, one that is changing length?
I'm trying to identify unique arrays by comparing two lists, one which is growing in length. The number of elements in the list grows as 2^n^2, where n is the dimension of the array.
I'm not sure how to better optimize this code for speed, but it is very slow for arrays larger than 3x3, i.e. n=4 in the code below. I suspect that the If and the AppendTo are slowing things down, and I don't believe I can use ParallelTable due to the nature of the search.
f[n_, i_] := Partition[IntegerDigits[i, 2, (n*n)], n, n]
g[n_, i_] := {f[n, i], Transpose[f[n, i]], J.f[n, i], Transpose[J.f[n, i]], J.f[n, i].J, Transpose[J.f[n, i].J], f[n, i].J, Transpose[f[n, i].J]}
h[n_, i_] := ContainsAny[uniqueArray, g[n, i]]
m[n_] := Table[If[h[n, i], , uniqueArray = AppendTo[uniqueArray, f[n, i]]], {i, 2^n^2}][[2^n^2]]
n = 2;
J = Reverse[IdentityMatrix[n]];
uniqueArray = {};
m[n]; // AbsoluteTiming
m[1]; // AbsoluteTiming
{0.000346, Null}
m[2]; // AbsoluteTiming
{0.001656, Null}
m[3]; // AbsoluteTiming
{0.061311, Null}
m[4]; // AbsoluteTiming
{169.272, Null}
This question builds on the following: A few tuples at a time?
EDIT Ok, I thought I could reduce the time of this search by doing some sorting before checking for uniqueness. For instance, a binary array with two 1's cannot be the same as an array with three 1's, i.e. these cannot be the same:
MatrixForm[{{0,0},{1,1}}]
MatrixForm[{{0,1},{1,1}}]
So, it seems reasonable to only compares arrays of the same total. I amended the initial code in the following way:
f[n_, i_] := Partition[IntegerDigits[i, 2, (n*n)], n, n]
m[n_] := Table[
If[
ContainsAny[
uniqueArraySplit[[Total[Flatten[f[n, i]]] + 1]],
{f[n, i],
Transpose[f[n, i]],
J.f[n, i],
Transpose[J.f[n, i]],
J.f[n, i].J,
Transpose[J.f[n, i].J],
f[n, i].J,
Transpose[f[n, i].J]
}
],
Nothing,
AppendTo[uniqueArraySplit[[Total[Flatten[f[n, i]]] + 1]],
f[n, i]]
],
{i, 2^n^2}][[2^n^2]]
n = 4;
J = Reverse[IdentityMatrix[n]];
uniqueArraySplit = Table[{}, {p, n^2 + 1}];
m[n]; // AbsoluteTiming
m[4]; // AbsoluteTiming
{165.308, Null}
Now I'm stumped. I removed two functions, and I removed unnecessary searches, and yet it barely improved the timing... Any help would be greatly appreciated.
• How large would you like n to get? Even if you'd optimize the Table calculation to do one iteration per nanosecond, n=8 would take over 500 years. 2^n^2 simply grows very fast. Unless you come up with a different algorithm, anything above n=5 or maybe n=6 will take a very long time. Mar 22 '16 at 6:18
• FWIW, I'd try to replace AppendTo with an Association, which should make insertion faster Mar 22 '16 at 6:23
• Thanks nikie - I recognize that getting much past n=6 is essentially impossible. Perhaps there's a better algorithm, yet I haven't been able to think about one. Get the results for n=5 and n=6 would be a victory. Is there a syntax change needed to switch AppendTo with Association? When I substituted the latter it didn't write anything to the nested list. Mar 22 '16 at 13:07
• If you really want to find out about n=6, get a book like "Hackers Delight", translate the transpose/reverse operations to bit-twiddling operations, and learn C. Back-of-the envelope calculation suggests that it should be possible to run this for n=6 in hours. But that it something you'd have to do yourself ;-) Mar 23 '16 at 18:52
AppendTo has to create a new list, then copy all entries and add the new item, so it's a very expensive operation.
One alternative is to use an Association data structure instead of a list: that's a lookup table, and it supports fast(er) insert and lookup operations. So instead of
uniqueArray = {};
you'd write
uniqueArray = Association[];
and instead of AppendTo[uniqueArray, f[n, i]]:
uniqueArray[f[n, i]] = 1;
This adds a key-value pair to the association that maps the key f[n, i] to the value 1. You don't really care about the value, you only care that insertion is fast and that there's a very fast KeyExistsQ[uniqueArray, ...] function.
If I use:
h[n_, i_] := AnyTrue[g[n, i], KeyExistsQ[uniqueArray, #] &]
m[n_] := Table[
If[h[n, i], , uniqueArray[f[n, i]] = 1;], {i, 2^n^2}][[2^n^2]]
uniqueArray = Association[];
I get about 10x faster for m[3], and m[4] takes about 6s. Probably not enough for n=5, but a good improvement nonetheless.
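An alternative to storing every seen transform is to store a single canonical representative per equivalence class - say, the minimum over the 8 symmetries that g generates (transpose, J.m, m.J and their compositions form the dihedral group of the square). A Python sketch of that idea, not a translation of the Mathematica code; set membership plays the role of KeyExistsQ:

```python
from itertools import product

def transforms(m):
    """All 8 square symmetries of a matrix given as a tuple of row-tuples."""
    t = lambda x: tuple(zip(*x))              # transpose
    j = lambda x: tuple(reversed(x))          # reverse rows  (J.m)
    k = lambda x: tuple(r[::-1] for r in x)   # reverse cols  (m.J)
    out = []
    for f in (lambda x: x, j, k, lambda x: j(k(x))):
        out += [f(m), t(f(m))]
    return out

def count_unique(n):
    """Count n-by-n binary matrices distinct up to these symmetries."""
    seen = set()
    for bits in product((0, 1), repeat=n * n):
        m = tuple(bits[i * n:(i + 1) * n] for i in range(n))
        seen.add(min(transforms(m)))          # tuples compare lexicographically
    return len(seen)
```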
Another thing: When I run your code, I get a lot of warnings "Tensors ... have incompatible shapes". Are you sure those tensor calculations are correct? Because if they aren't, Mathematica will still happily put the unevaluated expressions in a list and try to work with them. Which of course takes a lot longer, because the unevaluated expressions are much more complex than the correct result would be.
• Excellent, that's a really helpful and significant speed up. Is there any reason I don't see any of these warnings? My messages are blank. Mar 22 '16 at 20:34
• Try this: open a new notebook, restart the kernel, copy&paste your code from the start up to (and including) m[1]; // AbsoluteTiming and run. I get a lot of warnings then, apparently on the second call to m. Maybe some cleanup is missing from m Mar 23 '16 at 7:19
• Ahh, ok, that's my fault - I added those lines of m[1]; // AbsoluteTiming to the question to quickly show the timing, but those lines are not in the code I run. I should clarify that. Mar 23 '16 at 13:42
• using your code improvements I was able to run the n=5 case (the timing was {1630.49, Null}. One question I have is in regards to how Association adds the data to the list. I'm trying to export the final value as a CSV file where each 5 x 5 array that is stored in uniqueArray is separated by a comma, but I'm getting some strange delimiters between the arrays, e.g. the first characters are <|, and in between each array is ->1 The code I'm using to export is: Export["uniqueArray.txt", uniqueArray, "CSV", "TextDelimiters" -> None] Mar 23 '16 at 13:47
• If you're going to use Association, you should probably read the documentation and tutorials in MMA. Anything else will only lead to frustration. That said, you probably want Export[... Keys[uniqueArray] ...] Mar 23 '16 at 14:06
https://jp.maplesoft.com/support/help/MapleSim/view.aspx?path=MaplesoftBattery%2FElectrochem%2FLiIon
LiIon - MapleSim Help
MaplesoftBattery
LiIon — Electrochemical model of a lithium-ion battery
Description
The LiIon component models a lithium-ion battery using order-reduced equations derived from John Newman’s works on porous-electrode theory [1-3]. The following figure shows the basic anatomy of a lithium-ion cell, which has four main components: the negative composite electrode connected to the negative terminal of the cell, the positive electrode connected to the positive terminal of the cell, the separator, and the electrolyte. The chemistries of the positive and negative electrodes are independently selectable and define the electrochemical and thermal behaviors of the battery.
Main chemical reactions (assuming ${\mathrm{Li}}_{y}{\mathrm{CoO}}_{2}$ cathode and ${\mathrm{Li}}_{x}{C}_{6}$ anode).
Cathode: ${\mathrm{Li}}_{1-y}{\mathrm{CoO}}_{2}+y{\mathrm{Li}}^{+}+y{e}^{-}\to {\mathrm{LiCoO}}_{2}$
Anode: ${\mathrm{Li}}_{y}{C}_{6}\to {C}_{6}+y{\mathrm{Li}}^{+}+y{e}^{-}$
During battery operation, the positive lithium ions (${\mathrm{Li}}^{+}$) travel between the two electrodes via diffusion and ionic conduction through the porous separator and the surface of the active material particles, where they undergo electrochemical reactions. This process is called intercalation.
Electrochemical Behavior
Transport in solid phase The following partial differential equation (PDE) describes the solid phase ${\mathrm{Li}}^{+}$ concentration in a single spherical active material particle in solid phase: $\frac{\partial {c}_{s}}{\partial t}=\frac{{\mathrm{D}}_{s}}{{r}^{2}}\frac{\partial }{\partial r}\left({r}^{2}\frac{\partial {c}_{s}}{\partial r}\right)$ where ${\mathrm{D}}_{s}$ is the ${\mathrm{Li}}^{+}$ diffusion coefficient in the intercalation particle of the electrodes.
Transport in electrolyte
The ${\mathrm{Li}}^{+}$ concentration in the electrolyte phase changes due to the changes in the gradient diffusive flow of ${\mathrm{Li}}^{+}$ ions and is described by the following PDE:
$\epsilon \frac{\partial {c}_{e}}{\partial t}=\frac{\partial }{\partial x}\left({\mathrm{D}}_{\mathrm{eff}}\frac{\partial {c}_{e}}{\partial x}\right)+a\left(1-{t}^{+}\right)j$
where
$\mathrm{\epsilon }$ is the volume fraction,
${\mathrm{D}}_{\mathrm{eff}}$ is the ${\mathrm{Li}}^{+}$ diffusion coefficient in the electrolyte,
$a=\frac{3}{{R}_{s}}\left(1-\mathrm{\epsilon }-{\mathrm{\epsilon }}_{f}\right)$ is the specific surface area of electrode,
${R}_{s}$ is the radius of the intercalation particles of the electrode,
${\mathrm{\epsilon }}_{f}$ is the volume fraction of fillers,
${t}^{+}$ is the ${\mathrm{Li}}^{+}$ transference constant in the electrolyte, and
$j$ is the wall-flux of ${\mathrm{Li}}^{+}$ on the intercalation particle of electrode.
Electrical potentials
Charge conservation in the solid phase of each electrode is described by Ohm’s law:
${\mathrm{\sigma }}_{\mathrm{eff}}\frac{{{\partial }}^{2}}{{\partial }{x}^{2}}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}{\mathrm{\Phi }}_{s}=aFj$
In the electrolyte phase, the electrical potential is described by combining Kirchhoff’s law and Ohm’s law:
$-{\mathrm{\sigma }}_{\mathrm{eff}}\left(\frac{\partial {\mathrm{\Phi }}_{s}}{\partial x}\right)-{\mathrm{\kappa }}_{\mathrm{eff}}\left(\frac{\partial {\mathrm{\Phi }}_{e}}{\partial x}\right)+\frac{2{\mathrm{\kappa }}_{\mathrm{eff}}RT}{F}\left(1-{t}^{+}\right)\frac{\partial \mathrm{ln}\left({c}_{e}\right)}{\partial x}=J$
where
${\mathrm{\sigma }}_{\mathrm{eff}}=\mathrm{\sigma }\left(1-\mathrm{\epsilon }-{\mathrm{\epsilon }}_{f}\right)$ is the effective electronic conductivity,
$\mathrm{\sigma }$ is the electronic conductivity in solid phase,
${\mathrm{\kappa }}_{\mathrm{eff}}$ is the effective ionic conductivity of the electrolyte, and
$J$ is the applied current density.
Butler-Volmer kinetics
The Butler-Volmer equation describes the relationship between the current density, concentrations, and over-potential:
$j=k{\left({c}_{s,\mathrm{max}}-{c}_{s,\mathrm{surf}}\right)}^{0.5}{\left({c}_{s,\mathrm{surf}}\right)}^{0.5}{\left({c}_{e}\right)}^{0.5}\left(\mathrm{exp}\left(0.5\frac{F\mathrm{\mu }}{RT}\right)-\mathrm{exp}\left(-0.5\frac{F\mathrm{\mu }}{RT}\right)\right)$
where
$k$ is the reaction rate constant,
$\mathrm{\mu }={\mathrm{\Phi }}_{s}-{\mathrm{\Phi }}_{e}-U$ is the over-potential of the intercalation reaction,
${c}_{s,\mathrm{max}}$ is maximum concentration of ${\mathrm{Li}}^{+}$ ions in the intercalation particles of the electrode,
${c}_{s,\mathrm{surf}}$ is the concentration of ${\mathrm{Li}}^{+}$ ions on the surface of the intercalation particles of the electrode, and
$U$ is the open-circuit potential for the electrode material.
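With the symmetric 0.5/0.5 transfer coefficients, the bracketed difference of exponentials is just $2\sinh\left(0.5\,F\mu/(RT)\right)$, so the flux is easy to evaluate. A sketch with standard physical constants; the function name and the sample concentrations are ours, not Maple's:

```python
import math

F, Rg = 96485.33, 8.314  # Faraday constant (C/mol), gas constant (J/(mol K))

def wall_flux(k, cs_max, cs_surf, ce, mu, T):
    """Butler-Volmer wall flux j for symmetric (0.5/0.5) kinetics."""
    i0 = k * (cs_max - cs_surf) ** 0.5 * cs_surf ** 0.5 * ce ** 0.5
    return 2 * i0 * math.sinh(0.5 * F * mu / (Rg * T))
```

At zero over-potential the flux vanishes, and its sign follows the sign of mu, as expected.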
The open-circuit potential for each cathode and anode material has been curve-fitted based on experimental measurements.
Examples of the open-circuit potentials for ${Li}_{y}Co{O}_{2}$ cathode and ${Li}_{x}{C}_{6}$ anode, curve-fitted from experimental measurements, are shown in the following figure:
The gradual decay, with use, of a cell's capacity and increase of its resistance is modeled by enabling the include degradation effects boolean parameter. Enabling this feature adds a state-of-health (soh) output to the model. This signal is 1 when the cell has no decay and 0 when is completely decayed.
The soh output is given by
$\mathrm{soh}={\left(1-\frac{s}{{R}_{s}}\right)}^{3}$
where
$s$ is thickness of the solid-electrolyte interface (SEI),
${R}_{s}$ is the radius of the particles of active material on which the SEI forms.
The decay of the capacity is
$C={C}_{\mathrm{max}}\mathrm{soh}$
where
$C$ is the effective capacity, and
${C}_{\mathrm{max}}$ is the specified capacity equal to either the parameter $\mathrm{CA}$ or the input ${C}_{\mathrm{in}}$.
The additional series resistance added to a cell is
${R}_{\mathrm{sei}}=\frac{s}{\mathrm{\kappa }}$
with $\mathrm{\kappa }$ a parameter of the model.
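The three degradation relations above chain together simply; a minimal sketch, with all numeric values purely illustrative:

```python
def soh(s, Rs):
    """State of health from SEI thickness s and particle radius Rs."""
    return (1 - s / Rs) ** 3

def effective_capacity(C_max, s, Rs):
    """Capacity decay: C = C_max * soh."""
    return C_max * soh(s, Rs)

def sei_resistance(s, kappa):
    """Additional series resistance from the SEI layer."""
    return s / kappa
```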
The following equations govern the increase in the thickness of the SEI layer ($s$).
$k={A}_{e}\mathrm{exp}\left(-\frac{{E}_{a}}{RT}\right)$
$\frac{\mathrm{ds}}{\mathrm{dt}}=\left\{\begin{array}{cc}\frac{kcM}{\left(1+\frac{ks}{{\mathrm{D}}_{\mathrm{diff}}}\right){\mathrm{\rho }}_{\mathrm{sei}}}& \mathrm{charging}\\ 0& \mathrm{otherwise}\end{array}$
Thermal Effects
Select the thermal model of the battery from the heat model drop-down list. The available models are: isothermal, external port, and convection.
Isothermal The isothermal model sets the cell temperature to a constant parameter, ${T}_{\mathrm{iso}}$.
External Port The external port model adds a thermal port to the battery model. The temperature of the heat port is the cell temperature. The parameters ${m}_{\mathrm{cell}}$ and ${c}_{p}$ become available and are used in the heat equation ${m}_{\mathrm{cell}}{c}_{p}\frac{\mathrm{d}{T}_{\mathrm{cell}}}{\mathrm{d}t}={P}_{\mathrm{cell}}-{Q}_{\mathrm{cell}}$ ${Q}_{\mathrm{flow}}={n}_{\mathrm{cell}}{Q}_{\mathrm{cell}}$ ${P}_{\mathrm{cell}}={i}_{\mathrm{cell}}^{2}{R}_{\mathrm{cell}}+{i}_{\mathrm{cell}}{T}_{\mathrm{cell}}\left(\frac{\mathrm{d}{U}_{p}}{\mathrm{d}T}-\frac{\mathrm{d}{U}_{n}}{\mathrm{d}T}\right)+{i}_{\mathrm{cell}}\left({\mathrm{\mu }}_{p}-{\mathrm{\mu }}_{n}\right)$ where ${P}_{\mathrm{cell}}$ is the heat generated in each cell, including chemical reactions and ohmic resistive losses, ${Q}_{\mathrm{cell}}$ is the heat flow out of each cell, and ${Q}_{\mathrm{flow}}$ is the heat flow out of the external port.
Convection The convection model assumes the heat dissipation from each cell is due to uniform convection from the surface to an ambient temperature. The parameters ${m}_{\mathrm{cell}}$, ${c}_{p}$, ${A}_{\mathrm{cell}}$, $h$, and ${T}_{\mathrm{amb}}$ become available, as does an output signal port that gives the cell temperature in Kelvin. The heat equation is the same as the heat equation for the external port, with ${Q}_{\mathrm{cell}}$ given by ${Q}_{\mathrm{cell}}=h{A}_{\mathrm{cell}}\left({T}_{\mathrm{cell}}-{T}_{\mathrm{amb}}\right)$
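The lumped heat balance with convection can be sketched with an explicit Euler step. Here P_cell is reduced to a fixed ohmic loss (the entropic and overpotential terms of the full heat equation are omitted for brevity), and the defaults follow the basic thermal parameter table.

```python
# Euler sketch of the lumped convection heat balance above:
#   m_cell * cp * dT/dt = P_cell - h * A_cell * (T_cell - T_amb)
# P_cell is treated as a constant dissipation; defaults follow the
# basic thermal parameter table (m_cell, cp, h, A_cell, T_amb).
def cell_temperature(P_cell, t_end, dt=0.1, T0=298.15,
                     m_cell=0.55, cp=750.0, h=100.0, A_cell=0.0085, T_amb=298.15):
    """Return the cell temperature [K] after t_end seconds of dissipating P_cell watts."""
    T, t = T0, 0.0
    while t < t_end:
        Q_cell = h * A_cell * (T - T_amb)           # convective heat flow out of the cell
        T += (P_cell - Q_cell) * dt / (m_cell * cp)  # lumped thermal mass update
        t += dt
    return T
```

With steady dissipation the cell settles toward T_amb + P_cell/(h·A_cell), with a thermal time constant of m_cell·cp/(h·A_cell) ≈ 485 s for the default values.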
Arrhenius equations For all thermal models, the Arrhenius equations model the effect of cell temperature on the chemical reaction. ${\mathrm{D}}_{s,x}={\mathrm{D}}_{s,x,\mathrm{ref}}\mathrm{exp}\left(\frac{{E}_{\mathrm{ds},x}}{R}\left(\frac{1}{{T}_{\mathrm{ref}}}-\frac{1}{{T}_{\mathrm{cell}}}\right)\right)$ with $x\in \left\{n,p\right\}$, and ${\mathrm{D}}_{\mathrm{eff},x}={\mathrm{D}}_{e}{\mathrm{\epsilon }}_{x}^{\mathrm{brugg}}\mathrm{exp}\left(\frac{{E}_{\mathrm{de},x}}{R}\left(\frac{1}{{T}_{\mathrm{ref}}}-\frac{1}{{T}_{\mathrm{cell}}}\right)\right)$ with $x\in \left\{n,p,s\right\}$.
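The Arrhenius scaling above can be applied directly as a small helper; a sketch, with R the universal gas constant:

```python
import math

# Arrhenius temperature correction of a reference transport coefficient,
# as used above for the solid-phase and electrolyte diffusion coefficients.
R_GAS = 8.314  # universal gas constant [J/(mol*K)]

def arrhenius(D_ref, E_a, T_cell, T_ref=298.15):
    """Scale D_ref (valid at T_ref) to the cell temperature T_cell [K]."""
    return D_ref * math.exp((E_a / R_GAS) * (1.0 / T_ref - 1.0 / T_cell))
```

At the reference temperature the correction factor is exactly 1; above it, transport is faster, with the sensitivity set by the activation energy E_a.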
State of Charge A signal output, soc, gives the state-of-charge of the battery, with 0 being fully discharged and 1 being fully charged. The parameter ${\mathrm{SOC}}_{\mathrm{min}}$ sets the minimum allowable state-of-charge; if the battery is discharged past this level, the simulation is terminated and an error message is raised. This prevents the battery model from reaching non-physical conditions. A similar effect occurs if the battery is fully charged so that the state of charge reaches one. The parameter ${\mathrm{SOC}}_{0}$ assigns the initial state-of-charge of the battery.
Capacity The capacity of the battery can either be a fixed value, $\mathrm{CA}$, or be controlled via an input signal, ${C}_{\mathrm{in}}$, if the use capacity input box is checked.
Resistance The resistance of each cell can either be a fixed value, ${R}_{\mathrm{cell}}$, or be controlled via an input signal, ${R}_{\mathrm{in}}$, if the use cell resistance input box is checked.
Variables
Name Units Description Modelica ID ${T}_{\mathrm{cell}}$ $K$ Internal temperature of battery Tcell $i$ $A$ Current into battery i $v$ $V$ Voltage across battery v
Connections
Name Type Description Modelica ID $p$ Electrical Positive pin p $n$ Electrical Negative pin n $\mathrm{soh}$ Real output State of health [0..1]; available when include degradation effects is enabled soh $\mathrm{SOC}$ Real output State of charge [0..1] SOC ${C}_{\mathrm{in}}$ Real input Sets capacity of cell, in ampere hours; available when use capacity input is true Cin ${R}_{\mathrm{in}}$ Real input Sets resistance of cell, in Ohms; available when use resistance input is true Rin ${T}_{\mathrm{out}}$ Real output Temperature of cell, in Kelvin; available with convection heat model Tout $\mathrm{heatPort}$ Thermal Thermal connection; available with external port heat model heatPort
Electrode Chemistry Parameters
Name Default Units Description Modelica ID ${\mathrm{chem}}^{+}$ LiCoO2 Chemistry of the positive electrode chem_pos ${\mathrm{chem}}^{-}$ Graphite Chemistry of the negative electrode chem_neg
The chem_pos and chem_neg parameters select the chemistry of the positive and negative electrodes, respectively. They are of types MaplesoftBattery.Selector.Chemistry.Positive and MaplesoftBattery.Selector.Chemistry.Negative. The selection affects the variation in the open-circuit electrode potential and the chemical reaction rate versus the concentration of lithium ions in the intercalation particles of the electrode.
If the Use input option is selected for either the positive or negative electrode, a vector input port appears next to the corresponding electrode. The port takes two real signals, $U$ and $S$, where $U$ specifies the potential in volts at the electrode and $S$ specifies the entropy in $\frac{J}{\mathrm{mol}K}$.
If any of the chem_pos materials $LiNi{O}_{2}$, $LiTi{S}_{2}$, $Li{V}_{2}{O}_{5}$, $LiW{O}_{3}$, or $NaCo{O}_{2}$ is selected, the isothermal model is used.
Supported positive electrode materials
Chemical composition Chemical name Common name $LiCo{O}_{2}$ Lithium Cobalt Oxide LCO $LiFeP{O}_{4}$ Lithium Iron Phosphate LFP $Li{Mn}_{2}{O}_{4}$ Lithium Manganese Oxide LMO $Li{Mn}_{2}{O}_{4}$ - low plateau Lithium Manganese Oxide ${Li}_{1.156}{Mn}_{1.844}{O}_{4}$ Lithium Manganese Oxide $Li{Ni}_{0.8}{Co}_{0.15}{Al}_{0.05}{O}_{2}$ Lithium Nickel Cobalt Aluminum Oxide NCA $Li{Ni}_{0.8}{Co}_{0.2}{O}_{2}$ Lithium Nickel Cobalt Oxide $Li{Ni}_{0.7}{Co}_{0.3}{O}_{2}$ Lithium Nickel Cobalt Oxide $Li{Ni}_{0.33}{Mn}_{0.33}{Co}_{0.33}{O}_{2}$ Lithium Nickel Manganese Cobalt Oxide NMC $LiNi{O}_{2}$ Lithium Nickel Oxide $LiTi{S}_{2}$ Lithium Titanium Sulphide $Li{V}_{2}{O}_{5}$ Lithium Vanadium Oxide $LiW{O}_{3}$ Lithium Tungsten Oxide $NaCo{O}_{2}$ Sodium Cobalt Oxide
Supported negative electrode materials
Chemical composition Chemical name Common name $Li{C}_{6}$ Lithium Carbide Graphite $LiTi{O}_{2}$ Lithium Titanium Oxide ${Li}_{2}{Ti}_{5}{O}_{12}$ Lithium Titanate LTO
Name Default Units Description Modelica ID ${A}_{e}$ 1.2 $\frac{m}{s}$ Factor for reaction rate equation Ae ${\mathrm{D}}_{0}$ $1.8·{10}^{-19}$ $\frac{{m}^{2}}{s}$ Diffusion coefficient at standard conditions D0 ${E}_{a}$ 10000 $\frac{J}{\mathrm{mol}}$ Activation energy Ea $M$ 0.026 $\frac{\mathrm{kg}}{\mathrm{mol}}$ Molar mass of SEI layer M ${R}_{s}$ $2·{10}^{-6}$ $m$ Radius of particles of active material in anode Rs ${\mathrm{SoH}}_{0}$ 1 Initial state-of-health: $0\le {\mathrm{SoH}}_{0}\le 1$ SoH0 $c$ 5000 $\frac{\mathrm{mol}}{{m}^{3}}$ Molar concentration of electrolyte c $\mathrm{\kappa }$ 0.001 $\frac{m}{\mathrm{\Omega }}$ Specific conductivity coefficient kappa ${\mathrm{\rho }}_{\mathrm{sei}}$ 2600 $\frac{\mathrm{kg}}{{m}^{3}}$ Density of SEI layer rho_sei
Basic Parameters
Name Default Units Description Modelica ID ${N}_{\mathrm{cell}}$ $1$ Number of cells, connected in series ncell $\mathrm{CA}$ $1$ $\mathrm{A·h}$ Capacity of cell; available when use capacity input is false C ${\mathrm{SOC}}_{0}$ $1$ Initial state-of-charge [0..1] SOC0 ${\mathrm{SOC}}_{\mathrm{min}}$ $0.01$ Minimum allowable state-of-charge SOCmin ${R}_{\mathrm{cell}}$ $0.005$ $\mathrm{\Omega }$ Series resistance of each cell; available when use cell resistance input is false Rcell
Basic Thermal Parameters
Name Default Units Description Modelica ID ${T}_{\mathrm{iso}}$ $298.15$ $K$ Constant cell temperature; used with isothermal heat model Tiso ${c}_{p}$ $750$ $\frac{J}{\mathrm{kg}K}$ Specific heat capacity of cell cp ${m}_{\mathrm{cell}}$ $0.55$ $\mathrm{kg}$ Mass of one cell mcell $h$ $100$ $\frac{W}{{m}^{2}K}$ Surface coefficient of heat transfer; used with convection heat model h ${A}_{\mathrm{cell}}$ $0.0085$ ${m}^{2}$ Surface area of one cell; used with convection heat model Acell ${T}_{\mathrm{amb}}$ $298.15$ $K$ Ambient temperature; used with convection heat model Tamb
Detailed Parameters
Name Default Units Description Modelica ID ${\mathrm{D}}_{e}$ $7.5·{10}^{-11}$ $\frac{{m}^{2}}{s}$ Electrolyte diffusion coefficient De ${\mathrm{D}}_{s,n,\mathrm{ref}}$ $3.9·{10}^{-14}$ $\frac{{m}^{2}}{s}$ Lithium-ion diffusion coefficient in the intercalation particles of the negative electrode Dsnref ${\mathrm{D}}_{s,p,\mathrm{ref}}$ $1.0·{10}^{-14}$ $\frac{{m}^{2}}{s}$ Lithium-ion diffusion coefficient in the intercalation particles of the positive electrode Dspref ${L}_{n}$ $8.8·{10}^{-5}$ $m$ Thickness of negative electrode Ln ${L}_{p}$ $8.0·{10}^{-5}$ $m$ Thickness of positive electrode Lp ${L}_{s}$ $2.5·{10}^{-5}$ $m$ Thickness of separator Ls ${R}_{s,n}$ $2·{10}^{-6}$ $m$ Radius of intercalation particles at negative electrode Rsn ${R}_{s,p}$ $2·{10}^{-6}$ $m$ Radius of intercalation particles at positive electrode Rsp $\mathrm{brugg}$ 1.5 Bruggeman's constant brugg ${c}_{\mathrm{e0}}$ 5000 $\frac{\mathrm{mol}}{{m}^{3}}$ Initial concentration of Li in electrolyte Ce0 ${c}_{s,n,\mathrm{max}}$ 30555 $\frac{\mathrm{mol}}{{m}^{3}}$ Maximum concentration of Li at the anode Csnmax ${c}_{s,p,\mathrm{max}}$ 51554 $\frac{\mathrm{mol}}{{m}^{3}}$ Maximum concentration of Li at the cathode Cspmax ${\mathrm{\epsilon }}_{f,n}$ $0.0326$ Volumetric fraction of negative electrode fillers efn ${\mathrm{\epsilon }}_{f,p}$ $0.0250$ Volumetric fraction of positive electrode fillers efp ${\mathrm{\epsilon }}_{n}$ $0.485$ Porosity of negative electrode en ${\mathrm{\epsilon }}_{p}$ $0.385$ Porosity of positive electrode ep ${\mathrm{\epsilon }}_{s}$ $0.724$ Porosity of separator es ${k}_{n}$ $5.0307·{10}^{-11}$ $\frac{\mathrm{mol}}{{\left(\frac{\mathrm{mol}}{{m}^{3}}\right)}^{3/2}}$ Intercalation/deintercalation reaction-rate constant at the negative electrode Kn ${k}_{p}$ $2.334·{10}^{-11}$ $\frac{\mathrm{mol}}{{\left(\frac{\mathrm{mol}}{{m}^{3}}\right)}^{3/2}}$ Intercalation/deintercalation reaction-rate constant at the positive electrode Kp ${\mathrm{\sigma }}_{n}$ 100 $\frac{S}{m}$ Conductivity of solid phase of negative electrode sigman ${t}^{+}$ 0.363 Li-ion transference number in the electrolyte Tplus
Detailed Thermal Parameters
Name Default Units Description Modelica ID ${E}_{\mathrm{de},n}$ 10000 $\frac{J}{\mathrm{mol}}$ Activation energy for electrolyte phase diffusion, De, of the negative electrode Eden ${E}_{\mathrm{de},p}$ 10000 $\frac{J}{\mathrm{mol}}$ Activation energy for electrolyte phase diffusion, De, of the positive electrode Edep ${E}_{\mathrm{de},s}$ 10000 $\frac{J}{\mathrm{mol}}$ Activation energy for electrolyte phase diffusion, De, of the separator Edes ${E}_{\mathrm{ds},n}$ 50000 $\frac{J}{\mathrm{mol}}$ Activation energy for solid phase Li diffusion coefficient, Ds, of the negative electrode Edsn ${E}_{\mathrm{ds},p}$ 25000 $\frac{J}{\mathrm{mol}}$ Activation energy for solid phase Li diffusion coefficient, Dp, of the positive electrode Edsp ${E}_{k,n}$ 20000 $\frac{J}{\mathrm{mol}}$ Activation energy for ionic conductivity of electrolyte solution, κ, of the negative electrode Ekn ${E}_{k,p}$ 20000 $\frac{J}{\mathrm{mol}}$ Activation energy for ionic conductivity of electrolyte solution, κ, of the positive electrode Ekp ${E}_{k,s}$ 20000 $\frac{J}{\mathrm{mol}}$ Activation energy for ionic conductivity of electrolyte solution, κ, of the separator Eks
References
[1] Newman, J. and Tiedemann, W., Porous-electrode theory with battery applications, AIChE Journal, Vol. 21, No. 1, pp. 25-41, 1975.
[2] Dao, T.-S., Vyasarayani, C.P., McPhee, J., Simplification and order reduction of lithium-ion battery model based on porous-electrode theory, Journal of Power Sources, Vol. 198, pp. 329-337, 2012.
[3] Subramanian,V.R., Boovaragavan,V., and Diwakar, V.D., Toward real-time simulation of physics based lithium-ion battery models, Electrochemical and Solid-State Letters, Vol. 10, No. 11, pp. A255-A260, 2007.
[4] Kumaresan, K., Sikha G., and White, R.E., Thermal model for a Li-ion cell, Journal of the Electrochemical Society, Vol. 155, No. 2, pp. A164-A171, 2008.
[5] Newman, J. and Tiedemann, W., Porous-electrode theory with battery applications, AIChE Journal, Vol. 21, No. 1, pp. 25-41, 1975.
[6] Viswanathan, V.V., Choi, D., Wang, D., Xu, W., Towne, S., Williford, R.E., Zhang, J.G., Liu, J., and Yang, Z., Effect of entropy change of lithium intercalation in cathodes and anodes on Li-ion battery thermal management, Journal of Power Sources, Vol. 195, No. 11, pp. 3720–3729, 2010.
http://www.bertarojas.com/?library/advances-in-architectural-geometry-2014
# Advances in Architectural Geometry 2014
Format: Hardcover
Language: English
Format: PDF / Kindle / ePub
Size: 7.28 MB
The recorded development of geometry spans more than two millennia. The course itself is mathematically rigorous, but still emphasizes concrete aspects of geometry, centered on ... The region is simple, if there is at most one such geodesic. Mathematicians studying relativity and mathematical physics will find this an invaluable introduction to the techniques of differential geometry. Egon Schulte works on discrete geometry, with an emphasis on combinatorial aspects and symmetry. differential geometry so that you can switch to physics when you realize econ is boring and pointless.
Pages: 385
Publisher: Springer; 2015 edition (December 27, 2014)
ISBN: 3319114174
Basic Concepts of Synthetic Differential Geometry (Texts in the Mathematical Sciences)
Pollack, "Differential topology", Prentice-Hall, 1974. Covering spaces and fundamental groups, van Kampen's theorem and classification of surfaces. Basics of homology and cohomology, singular and cellular; isomorphism with de Rham cohomology. Brouwer fixed point theorem, CW complexes, cup and cap products, Poincare duality, Kunneth and universal coefficient theorems, Alexander duality, Lefschetz fixed point theorem Clifford Algebras and their Applications in Mathematical Physics: Volume 2: Clifford Analysis (Progress in Mathematical Physics). The chapters give the background required to begin research in these fields or at their interfaces. They introduce new research domains and both old and new conjectures in these different subjects show some interaction between other sciences close to mathematics Geometry of Navigation (Horwood Series in Mathematics & Applications). 3 MB The aim of this volume is to give an introduction and overview to differential topology, differential geometry and computational geometry with an emphasis on some interconnections between these three domains of mathematics foundation of modern physics Series 7: Introduction to Differential Geometry and General Relativity (Vol.1). Einstein not only explained how gravitating bodies give this surface its properties—that is, mass determines how the differential distances, or curvatures, in Riemann’s geometry differ from those in Euclidean space—but also successfully predicted the deflection of light, which has no mass, in the vicinity of a star or other massive body. This was an extravagant piece of geometrizing—the replacement of gravitational force by the curvature of a surface An Introduction To Differential Geometry With Use Of The Tensor Calculus. Here, the singularity of $M_t$ is an immersed geodesic surface whose cone angles also vary monotonically from $0$ to $2\pi$. When a cone angle tends to $0$ a small core surface (a torus or Klein bottle) is drilled producing a new cusp. 
We show that various instances of hyperbolic Dehn fillings may arise, including one case where a degeneration occurs when the cone angles tend to $2\pi$, like in the famous figure-eight knot complement example Smooth Quasigroups and Loops (Mathematics and Its Applications).
Heath, Jr. "Grassmannian Beamforming for Multiple-Input Multiple-Output Wireless Systems," IEEE Transactions on Information Theory, Vol. 49, No. 10, October 2003 Would you like to merge this question into it? already exists as an alternate of this question. Would you like to make it the primary and merge this question into it Differential and Riemannian Geometry? Therefore it is natural to use great circles as replacements for lines. Contents: A Brief History of Greek Mathematics; Basic Results in Book I of the Elements; Triangles; Quadrilaterals; Concurrence; Collinearity; Circles; Using Coordinates; Inversive Geometry; Models and Basic Results of Hyperbolic Geometry download Advances in Architectural Geometry 2014 pdf. Some may like to think of flying insects, avian creatures, or winged mammals, but I am a creature of water and will think of dolphins instead. This dolphin, or Darius as he prefers to be called, is equipped not only with a strong tail for propelling himself forward, but with a couple of lateral fins and one dorsal fin for controlling his direction Festschrift Masatoshi Fukushima:In Honor of Masatoshi Fukushima's Sanju (Interdisciplinary Mathematical Sciences).
Singularities of Differentiable Maps, Volume 1: Classification of Critical Points, Caustics and Wave Fronts (Modern Birkhäuser Classics)
Smooth manifolds are 'softer' than manifolds with extra geometric structures, which can act as obstructions to certain types of equivalences and deformations that exist in differential topology. For instance, volume and Riemannian curvature are invariants that can distinguish different geometric structures on the same smooth manifold—that is, one can smoothly "flatten out" certain manifolds, but it might require distorting the space and affecting the curvature or volume Introduction to Differentiable Manifolds (Universitext). This is a differential manifold with a Finsler metric, i.e. a Banach norm defined on each tangent space. A Finsler metric is a much more general structure than a Riemannian metric. A Finsler structure on a manifold M is a function F : TM → [0,∞) such that: F(x, my) = F(x,y) for all x, y in TM, The vertical Hessian of F2 is positive definite Differential Geometry: Frame Fields and Curves Unit 2 (Course M434). This is arguably one of the deepest and most beautiful results in modern geometry, and it is surely a must know for any geometer / topologist Dirac Operators and Spectral Geometry (Cambridge Lecture Notes in Physics). I understood my undergrad analysis book before the first time I walk into my class. Knowing analysis makes me to become a more practical person in life In the end, everything is just topology, analysis, and algebra A First Course in Differential Geometry (Chapman & Hall/CRC Pure and Applied Mathematics). The session featured many fascinating talks on topics of current interest. The articles collected here reflect the diverse interests of the participants but are united by the common theme of the interplay among geometry, global analysis, and topology The Theory of Finslerian Laplacians and Applications (Mathematics and Its Applications). Some of these applications are mentioned in this book. 
With such a lot of "parents," modern differential geometry and topology naturally inherited many of their features; being at the same time young areas of mathematics, they possess vivid individuality, the main characteristics being, perhaps, their universality and the synthetic character of the methods and concepts employed in their study Projective Duality and Homogeneous Spaces (Encyclopaedia of Mathematical Sciences). This is an introduction to some of the analytic aspects of quantum cohomology. The small quantum cohomology algebra, regarded as an example of a Frobenius manifold, is described without going into the technicalities of a rigorous definition. Differential geometry is deceptively simple Hamiltonian Structures and Generating Families (Universitext).
A treatise on the circle and the sphere, by Julian Lowell Coolidge.
Clifford (Geometric) Algebras With Applications in Physics, Mathematics, and Engineering
Fractals, Wavelets, and their Applications: Contributions from the International Conference and Workshop on Fractals and Wavelets (Springer Proceedings in Mathematics & Statistics)
Singularity Theory: Proceedings of the European Singularities Conference, August 1996, Liverpool and Dedicated to C.T.C. Wall on the Occasion of his ... Mathematical Society Lecture Note Series)
A Treatise On The Differential Geometry Of Curves And Surfaces (1909)
General investigations of curved surfaces of 1827 and 1825; tr. with notes and a bibliography by James Caddall Morehead and Adam Miller Hiltebeitel.
Differential Geometry and its Applications (Colloquia Mathematica Societatis Janos Bolyai)
Loop Spaces, Characteristic Classes and Geometric Quantization (Modern Birkhäuser Classics)
Projective differential geometry of line congruences
Lectures on Probability Theory and Statistics: Ecole d'Ete de Probabilites de Saint-Flour XXV - 1995 (Lecture Notes in Mathematics)
Null Curves and Hypersurfaces of Semi-riemannian Manifolds
Vectors And Tensors In Engineering And Physics: Second Edition
The text is reasonably rigorous and build around stating theorems, giving the proofs and lemmas with occasional examples Elliptic Genera and Vertex Operator Super-Algebras (Lecture Notes in Mathematics). Although both Saccheri and Lambert aimed to establish the hypothesis of the right angle, their arguments seemed rather to indicate the unimpeachability of the alternatives. Several mathematicians at the University of Göttingen, notably the great Carl Friedrich Gauss (1777–1855), then took up the problem. Gauss was probably the first to perceive that a consistent geometry could be built up independent of Euclid’s fifth postulate, and he derived many relevant propositions, which, however, he promulgated only in his teaching and correspondence Representation Theory and Noncommutative Harmonic Analysis II: Homogeneous Spaces, Representations and Special Functions (Encyclopaedia of Mathematical Sciences) (v. 2). Both versions require a JAVA-capable browser. Anamorphic art is an art form which distorts an image on a grid and then rebuilds it using a curved mirror. Create your own anamorphic art by printing this Cylindrical Grid. It was used by Jessica Kwasnica to create an Anamorphic Giraffe and by Joey Rollo to create an Anamorphic Elephant Elementary Differential Geometry 2nd (Second) Edition Bypressley. The most striking results obtained in this field are the proof of Weil's conjectures (Dwork, Grothendieck, Deligne), Faltings's proof of Mordell's conjecture, Fontaine's theory (comparison between certain cohomologies), Wiles's proof of Fermat's Last Theorem, Lafforgue's result on Langlands's conjectures, the proof of Serre's modularity conjecture (Khare, Wintenberger, Kisin....), and Taylor's proof of the Sato-Tate conjecture Differential Geometry and the Calculus of Variations. 
Please read: De Rham-like operators and curvature of a connection (5.7 and 5.12 in the notes) Week 12: parallel transport on vector bundles, principal bundles connections and connection 1-forms, parallel transport in principal bundles, from vector bundle connections to principal ones Differential Geometry of Foliations: The Fundamental Integrability Problem (Ergebnisse der Mathematik und ihrer Grenzgebiete. 2. Folge). Abstract: Following Lekili, Perutz, and Auroux, we know that the Floer homology of a 3-manifold with torus boundary should be viewed as an element in the Fukaya category of the punctured torus. I’ll give a concrete description of how to do this and explain how it can be applied to study the relationship between L-spaces (3-manifolds with the simplest Heegaard Floer homology) and left orderings of their fundamental group Collected Papers - Gesammelte Abhandlungen (Springer Collected Works in Mathematics). With tools from differential geometry, I develop a general kernel density estimator, for a large class of symmetric spaces, and then derive a minimax rate for this estimator comparable to the Euclidean case. In the second part, I will discuss a geometric approach to network inference, joint work with Cosma Shalizi, that uses the above estimator on hyperbolic spaces. We propose a more general, principled statistical approach to network comparison, based on the non-parametric inference and comparison of densities on hyperbolic manifolds from sample networks Lectures On Differential Geometry. Moreover, to master the course of differential geometry you have to be aware of the basic concepts of geometry related disciplines, such as algebra, physics, calculus etc Visualization and Mathematics III (Mathematics and Visualization) (v. 3). Many vector datasets contain features that share geometry. 
For example, a forest border might be at the edge of a stream, lake polygons might share borders with land-cover polygons and shorelines, and parcel polygons might be covered by parcel lot lines. When you edit these layers, features that are coincident should be updated simultaneously so they continue to share geometry Advances in Architectural Geometry 2014 online.
https://www.nature.com/articles/s41598-020-62560-4
# Copper(II)-binding equilibria in human blood
## Abstract
It has been reported that Cu(II) ions in human blood are bound mainly to serum albumin (HSA), ceruloplasmin (CP), alpha-2-macroglobulin (α2M) and His; however, data for α2M are very limited and the thermodynamics and kinetics of the copper distribution are not known. We have applied a new LC-ICP MS-based approach for the direct determination of the Cu(II)-binding affinities of HSA, CP and α2M in the presence of competing Cu(II)-binding reference ligands, including His. The ligands affected both the rate of metal release from the Cu•HSA complex and the value of KD. Slow release and KD = 0.90 pM were observed with nitrilotriacetic acid (NTA), whereas His showed fast release and a substantially lower KD = 34.7 fM (50 mM HEPES, 50 mM NaCl, pH 7.4), which was explained by the formation of a ternary His•Cu•HSA complex. High mM concentrations of EDTA could not elicit metal release from metallated CP at pH 7.4, and therefore the KD value for CP could not be determined. In contrast to earlier inconclusive evidence, we show that α2M does not bind Cu(II) ions. In human blood serum, ~75% of Cu(II) ions are bound to CP in a nonexchangeable manner, and the remaining exchangeable copper is in equilibrium between HSA (~25%) and Cu(II)-His-Xaa ternary complexes (~0.2%).
## Introduction
Copper is an essential cofactor for more than twenty proteins that play important roles in cellular energy production, antioxidative defense and oxidative metabolism. Organismal copper metabolism is strictly regulated, and its deviations are characteristic of many diseases. Disturbance of copper metabolism is the primary cause of the monogenic Wilson's and Menkes diseases1, and it is also involved in the progression of inflammation, cancer, atherosclerosis2 and many neurodegenerative diseases, including Alzheimer's disease3.
The extracellular copper pool is stored, transported and distributed in the body mainly by blood. According to the current opinion, the copper ions in human blood serum are distributed between three proteins: ceruloplasmin (CP), albumin (HSA) and alpha-2-macroglobulin (α2M), accounting for approx. 70, 15 and 10% of copper, respectively4,5. The remaining 5% of copper functions as a cofactor in Cu,ZnSOD-3, clotting factors V and VIII, amine and diamine oxidases, ferroxidase II and some other enzymes5. The existence of a low-molecular-weight (LMW) copper pool bound to His in blood was proposed half a century ago6,7. However, equilibrium dialysis data8 and direct EPR measurements9 showed that HSA competes effectively with His for the binding of Cu(II) at equimolar concentrations, suggesting that the large excess of HSA in blood serum should prevent the binding of Cu(II) to His and other free amino acids. At the same time, the existence of an LMW copper pool in blood serum has recently been confirmed10 and the “saga of Cu(II)-His” in blood11 still continues.
Structure and functioning of major blood copper proteins have been intensively studied for many decades, however, information about their metal-binding properties is still limited and often inconsistent.
CP is a monomeric glycoprotein with a MW of 132 kD12, and its average concentration in blood serum is 1.9 μM13. CP is metallated in the secretory pathway with six Cu ions bound in a highly cooperative manner14. It is known that the cupric ions bound to CP are “not exchangeable”2, which seriously complicates determination of the Cu(II)-binding affinity of CP. CP is an enzyme (EC 1.16.3.1) exhibiting oxidase activity towards Fe(II) ions and numerous aromatic compounds5; it is also involved in the transport of copper into mammalian cells15.
α2M is a homotetrameric glycoprotein with a MW of 720 kD16 responsible for the binding and inactivation of proteases17. The average concentration of α2M in human plasma is 1.3 μM, and there is evidence that α2M can bind copper ions and participate in copper delivery into mammalian cells18. The limited information about the Cu(II)-binding properties of α2M indicates that only two Cu(II) ions can be bound per tetramer, with higher affinity than that of HSA18.
HSA is the most abundant and probably the most studied protein in the blood. The average concentration of HSA in serum is 650 μM, and it constitutes more than 50% of serum proteins19,20. HSA (MW 66.5 kD) is a monomeric multicargo transport and storage protein containing seven hydrophobic binding pockets for organic molecules such as fatty acids, metabolites, hormones and drugs21,22. In vitro studies have also identified four distinct metal-binding sites in HSA, differing in structure and metal-binding specificity23. The most studied metal-binding site of HSA is the N-terminal site (NTS)24, also known as ATCUN25, which binds Cu(II) ions under physiological conditions. The second site, called the Multi-Metal Binding Site (MBS), is responsible mainly for the binding of Zn(II) ions23. The third site is composed of the free Cys34 residue and its surroundings, and the fourth site, with unknown location, is called Site B. The physiological relevance of the last two sites is unknown; however, they might interact with toxic metal ions like Cd(II) or Ni(II) or with metallodrugs23.
The NTS includes the three N-terminal amino acid residues and is characterized by the presence of His in the third position24. Only a small fraction (~2%) of these sites is occupied by Cu(II) ions under physiological conditions23; however, the binding and concomitant redox silencing of pro-oxidant Cu(II) ions by HSA makes an important contribution to the antioxidative capacity of the blood26.
The Cu(II)-binding properties of the NTS have been the focus of intensive studies. The dissociation constant (KD) values for Cu(II) in the NTS of HSA (Asp-Ala-His) and of bovine serum albumin (Asp-Thr-His) have been determined by a variety of methods such as potentiometric titration, equilibrium dialysis, ultrafiltration, isothermal titration calorimetry, the Cu(II) electrode and several spectroscopic methods, including EPR27. However, the estimated values for the conditional dissociation constant vary from the picomolar (log KD = −11.18) to the subfemtomolar (log KD = −16.18) range28,23. The methods used so far for the study of the binding of Cu(II) to HSA and other Cu(II) proteins rely on indirect estimation of the equilibrium concentration of the Cu•Protein complex, which can lead to misinterpretation of experimental results29. Such problems can be avoided by using methods that directly determine the equilibrium concentration of the metallated complex.
The aim of the current study was to estimate the Cu(II)-binding affinities of the major serum copper proteins (HSA, CP and α2M) in comparison with LMW Cu(II)-binding reference ligands including His, using a unified and direct approach that provides the thermodynamic background for understanding the distribution of copper in the blood. For this purpose we have elaborated a new LC-ICP MS-based approach for direct detection of the Cu•Protein complex in the presence of metal-competing LMW Cu(II)-binding ligands. To cover the wide range of metal-binding affinities of different proteins, the high-affinity ligands DTPA and EDTA, the intermediate-affinity NTA and the low-affinity His were used in titrations. The kinetics of metal release from the Cu•Protein complex was also studied to ensure that equilibrium was reached. As a result, comparable Cu(II)-binding properties of HSA, CP and α2M have been determined, and an accelerating effect of His on metal release from HSA has been described. The results enable a systematic and quantitative overview of the copper-binding equilibria in human blood.
## Results
### Elaboration of LC-ICP MS for determination of HSA interaction with Cu(II) ions
The high and low-molecular weight (HMW and LMW) copper pools were separated by size-exclusion chromatography (SEC) on a 1 ml Sephadex G25 Superfine column. The separation of HMW and LMW compounds was completed in 4 minutes (flow rate 0.4 ml/min). When an equimolar amount of Cu(II) was added to 10 μM HSA all copper was found in the HMW peak (Fig. 1a). Thus, the kinetic stability of Cu•HSA complex is sufficient for SEC separation.
Addition of Cu(II) to HSA in a twofold excess did not increase the amount of copper in the HMW peak, which confirms that only binding to the single high-affinity site is seen in the experiment. At the same time, no peak of LMW copper was observed in the chromatogram, demonstrating that the column material has some capacity to bind the “free” Cu(II) ions. We confirmed that the column-bound copper could be quantitatively removed from the column by a subsequent elution with 10 μl of 1 mM EDTA. In the Cu(II)-competition experiments with LMW ligands like EDTA, NTA and DTPA, the copper was quantitatively eluted from the column, suggesting that the binding affinity of the column matrix is substantially lower than that of the listed ligands. In the case of the HSA–His competition experiments, the LMW copper peak was smaller than expected at lower His concentrations. To guarantee similar LC conditions for all experiments, column decoppering with 10 μl of 1 mM EDTA was performed before each LC-ICP MS experiment.
### Demetallation of Cu(II)•HSA with copper-binding ligands
Incubation of the Cu•HSA complex with EDTA decreased the HMW copper peak and increased the LMW copper peak (Fig. 1b–g). The half-life of Cu•HSA demetallation in the presence of EDTA was 20–30 min (Fig. 2) and the protein was almost fully demetallated even by an equimolar amount of EDTA. Thus, the dissociation of Cu(II) ions from HSA in the presence of EDTA is relatively slow and the affinity of Cu(II) for HSA is lower than that for EDTA. EDTA binds one Cu(II) ion with KD = 1.26 × 10−16 M at pH 7.430.
A substantially weaker Cu(II)-binding ligand, NTA (KD = 1.99 × 10−11 M at pH = 7.431), also demetallated Cu•HSA, with a half-life of 20–25 min. As expected, a substantially higher than equimolar concentration of NTA was required for complete demetallation of HSA (Fig. 3a), which allows determination of the KD value for the Cu•HSA complex. From the fractional content of Cu•HSA at equilibrium in the presence of different concentrations of NTA (Fig. 3b), a KD value of 0.900 ± 0.091 pM was determined using Eqn. 4 as described in the Experimental section. Demetallation of the Cu•HSA complex in the presence of His was also studied. His forms a 2:1 complex with Cu(II) at pH = 7.4, where KD1 = 3.7 × 10−9 M and KD2 = 4.7 × 10−7 M31. In contrast to EDTA and NTA, His demetallated Cu•HSA with fast kinetics (half-life of approx. 1–2 min) and at relatively high concentrations (Fig. 4a), which also enables determination of KD for the Cu•HSA complex. From the fractional content of Cu•HSA at equilibrium at different concentrations of His (Fig. 4b), a KD = 34.7 ± 4.5 fM was determined according to Eqn. 4 as described in the Experimental section.
A separate experiment showed that low concentrations of His, which alone did not demetallate Cu•HSA, substantially increased the rate of HSA demetallation by EDTA: three- and sixfold increases were observed in the presence of 50 μM and 100 μM His, respectively (Fig. 2b). 0.5 mM Glu and Gly were not able to demetallate the Cu•HSA complex and did not accelerate its demetallation in the presence of EDTA (data not shown).
### Attempts to demetallate Cu•CP with copper-binding ligands
In Cu•CP samples the copper eluted from the Sephadex G25 column in the HMW region, confirming that the metal is strongly bound to the protein. Adding up to 200 mM of strong Cu(II) chelators (DTPA, EDTA) at pH 7.4 did not decrease the metal content of the Cu•CP peak, even after 100 min of incubation with 200 mM EDTA or 21 h of incubation with 100 mM DTPA (Supplementary Information, Figure S1a). At high, non-physiological pH values (pH 11), Cu•CP could be demetallated by EDTA in a dose-dependent manner (Supplementary Information, Figure S1b).
### Metallation of α2M with Cu(II) ions
The experiments were carried out with two preparations of normal (“slow form”) human plasma α2M (from Sigma and Athens Research) as well as with the so-called “fast form” of α2M from Athens Research. The “slow form” of α2M represents more than 99% of total α2M in blood plasma and possesses the ability to bind and inhibit proteases. The “fast form” of α2M arises through a conformational change caused by entrapment of a protease in the α2M bait region, or by chemical cleavage of an internal thiol ester bond located near the bait region17. The “fast form” of α2M represents only 0.17–0.7% of the total α2M in blood plasma and is rapidly taken up by the liver. Our LC-ICP MS experiments on a Sephadex G25 column showed that both forms of α2M neither contained copper nor were able to bind added Cu(II) ions to any substantial extent (Supplementary Information, Figure S2). We also tested Cu(II) binding of normal α2M by ultrafiltration with 10 kD cut-off membranes and confirmed that α2M had only a marginal ability (<5%) to retain Cu(II) ions during ultrafiltration. These results suggest that α2M from human plasma does not bind Cu(II) ions with a biologically significant (e.g. µM or higher) affinity. We also performed a metal competition experiment with α2M and HSA by LC-ICP MS using a Superdex 200 (10 × 300 mm) column, which can separate the α2M and HSA peaks. When an equimolar concentration of Cu(II) was added to an equimolar mixture of normal α2M and HSA, almost all copper was eluted in a single peak corresponding to Cu•HSA, demonstrating that α2M cannot compete for Cu(II) ions with HSA even at equimolar concentration (Fig. 5e).
## Discussion
The affinities of metalloproteins for metal ions, characterized by KD values, are the key thermodynamic parameters that direct the metabolism of metal ions and determine their homeostasis. Metal homeostasis can be understood and described quantitatively only if the affinities of the proteins involved are comparable to each other, i.e. they have been determined either absolutely correctly or with the same method and the same reference compounds to ensure comparability. The full set of comparable metal-binding affinities is available only for the intracellular Cu(I) proteome32. The metal-binding affinities of individual Cu(II) proteins determined by different methods vary widely, as their accurate estimation is subject to a number of pitfalls and complicating factors29. Here we have applied a unified approach for the determination of Cu(II)-binding affinities of three major blood copper proteins, with the aim of obtaining reliable and comparable KD values for the individual proteins that determine the copper distribution in the blood and in other extracellular media such as cerebrospinal fluid.
### Binding of Cu(II) ions to human serum albumin
The release of Cu(II) ions from the Cu•HSA complex in the presence of strong multidentate ligands such as EDTA and NTA is a slow process. The independence of the Cu•HSA demetallation half-life from the multidentate ligands used and from their concentrations suggests that the rate-limiting step of metal release is the dissociation of Cu(II) ions from the NTS of HSA. The NTS of HSA is highly dynamic33 and no large-scale protein conformational changes or other slow processes are assumed to occur during dissociation. Moreover, the maximal half-life of unassisted dissociation of a complex with pKD 13 is approx. 30 min, which is similar to the observed half-life of Cu•HSA decoppering in the presence of EDTA and NTA, supporting our suggestion. The fast release of copper from Cu•HSA in the presence of His points to a different mechanism of demetallation and to the involvement of His in the rate-limiting step. Based on spectroscopic studies it was suggested that His forms a ternary complex with Cu•HSA34; however, a direct [14C]His binding assay using ultrafiltration showed that the His interaction with Cu•HSA could be only transient8. Nevertheless, even a transient His•Cu•HSA can act as a ligand-exchange complex enhancing the rate of copper release. This conclusion is also supported by the catalytic effect of physiological concentrations of His (50–100 μM) on the demetallation of Cu•HSA by EDTA observed in the current study. The catalytic effect on metal release from Cu•HSA was specific for His, as neither 1 mM Glu nor 0.5 mM Gly accelerated the reaction. From the X-ray crystal structure of model peptides (DAHK) it is known that the Cu(II) ion in the NTS (DAH) is equatorially coordinated by the N-terminal amine, two deprotonated peptide-bond nitrogens and the imidazole of His-3, whereas one apical coordination site is occupied by water35. It can be suggested that replacement of this water molecule in Cu•HSA with the imidazole of free His facilitates the metal release from the ternary complex formed.
The dissociation constant (KD) value for the Cu•HSA complex may substantially depend on many factors, including the experimental setup used for its determination. First, much attention has to be paid to reaching equilibrium, which is slow in the case of many chelators, including NTA. Second, our results demonstrate that the estimated KD value for the Cu•HSA complex also depends on the nature and binding mode of the competing ligand used in metal competition experiments. Using NTA, which forms a 1:1 complex with Cu(II) in our experimental conditions and showed slow demetallation of Cu•HSA, a KD value of 0.900 ± 0.091 pM (50 mM Hepes, 50 mM NaCl, pH 7.4) was determined. However, titration with His, which forms a 2:1 complex in our experimental conditions and leads to fast demetallation of Cu•HSA, yielded a KD of 34.7 ± 4.5 fM (50 mM Hepes, 50 mM NaCl at pH 7.4). The lower KD value determined from competition with His might arise from the omission of a putative ternary complex between HSA, the Cu(II) ion and His from the binding scheme, which should result in higher affinity estimates, since the copper in the ternary complex is found in the HMW peak. Thus, the KD value determined with NTA should be considered more reliable. This value is very similar to the KD value of 1.0 pM determined from spectroscopic titration of Cu•HSA with NTA in 100 mM NaCl (pH 7.4)28. At the same time, a substantially higher KD value of 4.0 pM was obtained in 100 mM HEPES, pH 7.4, which was explained by the formation of a ternary HEPES•Cu(II)•NTA complex28. We note that in addition to the effect of ternary complex formation there are also contributions from the differences in ionic strength and extinction coefficients in the two media, which were not considered.
In our LC-ICP MS experiments the KD values (determined by NTA and His) in 5 mM HEPES, 50 mM NaCl (data not shown) were similar to those in 50 mM HEPES, 50 mM NaCl, which allows us to conclude that HEPES at the 50 mM level does not compete with HSA, NTA and His for Cu(II) binding, or that ternary HEPES complexes are formed with both Cu•HSA and Cu•NTA. At the same time we observed that even a slight decrease in pH has a substantial effect on the HSA demetallation levels, which shows that the use of a buffer is necessary. The KD values for ionic equilibria can also be affected by ionic strength. Indeed, when conducting the NTA titration in 5 mM HEPES, pH 7.4, we observed that the absence of 50 mM NaCl caused a fivefold increase of the KD for Cu•HSA, which is, however, apparent and should be corrected by using the KD value for the Cu(II)•NTA complex at low ionic strength. To avoid these fluctuations, and also for practical reasons, it is advisable to keep the ionic strength in metal-binding experiments close to 0.1 M, which is the standard condition at which KD values for reference Cu(II)•Ligand complexes are normally determined36. Thus, there is a 4.4-fold difference between the KD values determined from NTA competition by the spectroscopic (100 mM HEPES, pH 7.4)28 and the LC-ICP MS-based method (50 mM HEPES, 50 mM NaCl, pH 7.4). However, we cannot compare these results directly, as the kinetics of copper release and the incubation times necessary for reaching equilibrium were not presented in the earlier paper; it was only stated that metal release from Cu•HSA in the presence of NTA is faster than in the case of EDTA28. Our KD value of 0.90 pM is also comparable with the KD = 6.61 pM determined in 30 mM Hepes, 250 mM NaCl (pH = 7.0) by equilibrium dialysis37; the difference could be ascribed to the different pH value. Therefore the KD value of 0.90 pM for Cu•HSA determined in the present work could serve as a reference KD value for Cu•HSA in up to 50 mM HEPES buffer, 50 mM NaCl, pH 7.4, useful for further research.
From a methodological point of view our study highlights the need to use a kinetic approach and direct methods for quantification of metallated protein complexes, as well as the necessity of strict control of buffer, ionic strength and pH during titration with metal-chelating ligands.
### Binding of Cu(II) ions to ceruloplasmin
Our results confirm that at pH 7.4 the Cu•CP complex cannot be demetallated with high millimolar concentrations of strong Cu(II)-chelating ligands like EDTA or DTPA in a timeframe of up to 20 h, which could be explained by three possibilities: first, the thermodynamic Cu(II)-binding affinity of Cu•CP is much higher than that of the chelators used (for EDTA KD = 1.26 × 10−16 M30; for DTPA KD = 5.0 × 10−17 M38, both at pH 7.4); second, the dissociation of Cu(II) ions from the Cu•CP complex is extremely slow, i.e. the complex is kinetically inert at physiological pH values; or third, the Cu•CP complex is both thermodynamically very stable and kinetically inert. Based on the available data it is impossible to distinguish between these three possibilities, and therefore it is also impossible to determine or estimate KD for the Cu•CP complex from our results. It is known that in Cu•CP the copper ions are located in three mononuclear and one trinuclear binding sites, which are not exposed to the environment39. Such encapsulation of metal ions in the protein interior might hinder their dissociation and exclude the formation of ternary ligand-exchange complexes, leading to kinetic inertness of the Cu•CP complex at pH 7.4. Cu•CP could be demetallated by EDTA at a non-physiological pH value (pH = 11) in a dose-dependent manner characteristic of a fast binding equilibrium. Such behaviour could most probably be explained by partial opening of the protein conformation or acceleration of conformational dynamics at alkaline pH values. Thus, our results confirm earlier conclusions that CP-bound copper ions are practically nonexchangeable at physiological pH values and that CP does not participate in the regulation of exchangeable copper levels in the blood5.
### Binding of Cu(II) ions to alpha-2 macroglobulin
Our results demonstrate that α2M does not bind Cu(II) ions. In earlier experiments α2M was shown to bind a substoichiometric amount of Cu(II) ions18, which is difficult to interpret in terms of thermodynamics. It has been noticed that purified α2M can be contaminated by HSA40, which binds Cu(II) ions; this can be mistakenly interpreted as binding of Cu(II) to α2M. We also performed LC-ICP MS experiments using a Superdex 200 column, where the peaks of CP, HSA and α2M elute at different timepoints (Fig. 5b–d). The α2M sample was supplemented with an equimolar concentration of Cu(II) ions; however, only substoichiometric traces of copper were detected in the peak corresponding to α2M (Fig. 5d), which confirms that α2M cannot bind Cu(II) ions. In the competition experiment with 2.5 µM α2M, 10 µM HSA and 10 µM Cu(II), only HSA was metallated (Fig. 5e). Moreover, only two protein-bound copper peaks, corresponding to the major CP and the minor HSA, were detected in a pooled human serum sample in a similar SEC experiment, whereas practically no copper was detected in the HMW region corresponding to α2M (Fig. 5a).
### Copper equilibrium in blood
The average total concentration of copper in normal human blood serum is 16.7 μM41. Our results suggest that copper is distributed between only two principal serum proteins – CP and HSA – which display different affinities and mechanisms of metal binding. In the Cu•CP complex, which constitutes approximately 75% of the total copper pool (our data and4,5), the copper ions are bound in the protein interior, which makes them kinetically inert and practically nonexchangeable at physiological pH values, even in the presence of high millimolar concentrations of powerful Cu(II)-chelating ligands like EDTA or DTPA.
The metal-binding mode of HSA differs from that of CP, as the NTS of HSA is highly dynamic and exposed to the environment33. Cu(II) binding to this site is reversible; it is characterized either by a slow unassisted dissociation rate and picomolar affinity (KD = 0.90 ± 0.09 pM), determined with NTA, or by fast His-assisted dissociation and apparent subpicomolar affinity (KD = 34.7 ± 4.5 fM). Cu(II) bound to HSA constitutes approximately 25% of total blood copper (on average 4.2 μM), and this copper pool could also be in equilibrium with Cu(II) ions bound to free amino acids, primarily His. The outcome of this competition depends on the Cu(II)-binding affinities and concentrations of HSA, Cu•HSA and His according to the following simplified reaction scheme:
$$Cu+HSA\,\overset{{K}_{D}}{\rightleftharpoons }\,Cu\bullet HSA$$
(1)
$$Cu+His\,\overset{{K}_{D1}}{\rightleftharpoons }\,Cu\bullet His$$
(2)
$$Cu\bullet His+His\,\overset{{K}_{D2}}{\rightleftharpoons }\,Cu\bullet Hi{s}_{2}$$
(3)
HSA and His are present in blood serum at average concentrations of 650 μM and 75 μM, respectively22,42,43,44. Taking into account the KD = 34.7 fM (the KD determined from competition with His in the present work) and the KD1 = 3.7 × 10−9 M and KD2 = 4.7 × 10−7 M values for Cu(II)-His complexes at pH 7.4 (taken from31), it can be estimated that approximately 0.7 nM of Cu(II) is bound to His, mainly in the form of the Cu•His2 complex (the equations are presented in Materials and methods). Thus, His alone can bind on average 0.04% of the total copper in blood serum. However, it should be noted that other free amino acids, which are present in blood serum at a total concentration of 3 mM42, can also form tight ternary complexes with Cu(II)-His45. Thus, the LMW copper pool in blood serum might be composed mainly of Cu(II)-His-Xaa complexes and can reach an approx. 0.2% level. This estimate agrees relatively well with the range of 1–2.4% for the low-molecular-weight copper pool in blood serum determined in seven independent studies by ultrafiltration10. Moreover, besides binding a minor fraction of Cu(II) ions, His can also act as a catalyst enhancing the rate of copper transfer between Cu•HSA and other proteins and cellular destinations, which might be the most important physiological role of His in copper metabolism. The proposed catalytic role of His in Cu(II) release from HSA is in agreement with the finding that HSA inhibits copper transport into liver cells, whereas His can specifically mobilize Cu(II) from plasma and facilitate the process46.
## Conclusion
The Cu(II)-binding affinities of three major blood copper proteins – human serum albumin, ceruloplasmin and alpha-2 macroglobulin – were studied through their competition with a set of low-molecular-weight Cu(II)-binding reference ligands (DTPA, EDTA, NTA, His) using a unified LC-ICP MS-based approach. A reliable KD value for human serum albumin was determined, and it was demonstrated that metallated ceruloplasmin is resistant to copper release by the chelators used, whereas alpha-2 macroglobulin does not bind Cu(II) ions. The obtained thermodynamic and kinetic data allow determination of the copper distribution in human blood and also in other extracellular media such as cerebrospinal fluid. The results allow detection of disturbances in copper metabolism characteristic of many diseases and provide a rationale for effective metalloregulation.
## Methods
### Instrumentation
For LC-ICP MS analyses an Agilent Technologies (Santa Clara, USA) Infinity HPLC system, which consisted of a 1260 series µ-degasser, 1200 series capillary pump, Micro WPS autosampler and 1200 series MWD VL detector, was coupled with an Agilent 7800 series ICP-MS instrument (Agilent, USA). For instrument control and data acquisition, ICP-MS MassHunter 4.4 software Version C.01.04 from Agilent was used. ICP MS was operated under the following conditions: RF power 1550 W, nebulizer gas flow 1.03 l/min, auxiliary gas flow 0.90 l/min, plasma gas flow 15 l/min, nebulizer type: MicroMist, isotope monitored: Cu-63. A 1 ml gel filtration column, self-filled with HiTrap Desalting resin Sephadex G25 Superfine (Amersham/GE Healthcare, Buckinghamshire, UK), was used for the separation of HMW and LMW pools. Injection volumes of 10 μl for HSA and 2 μl for other proteins were used. Separation of pooled plasma and individual copper proteins was conducted on a Superdex 200 SEC column (10 × 300 mm) (Amersham Biosciences AB, Uppsala, Sweden) with an injection volume of 40 μl. An ICP MS-compatible flow rate of 0.4 ml/min was used in all separations. In order to remove contaminating metal ions from the buffer, the mobile phase was eluted through Chelex100 Chelating Ion Exchange resin (Sigma, Merck KGaA, Darmstadt, Germany) prior to liquid chromatographic separation. Demetallation of the SEC columns before each experiment was conducted by injecting 1 mM EDTA into the columns (injection volumes were the same for all experiments, depending on the column type and size). For ultrafiltration experiments, 0.5 ml Amicon Ultra centrifugal filters with a 10 kDa cut-off were used (Merck Millipore Ltd, Ireland).
### Materials
Ultrapure milliQ water with a resistivity of 18.2 MΩ·cm, produced by a Merck Millipore Direct-Q & Direct-Q UV water purification system (Merck KGaA, Darmstadt, Germany), was used for all applications.
The mobile phase for gel filtration was 200 mM NH4NO3 at pH 7.5, prepared from TraceMetal Grade nitric acid (Fisher Scientific UK Limited, Leicestershire, UK) and 25% ammonium hydroxide solution (Honeywell Fluka, Seelze, Germany), and is compatible with ICP MS.
The following lyophilized proteins were used: human serum albumin (HSA) from Sigma/Merck (Darmstadt, Germany); alpha-2 macroglobulin (α2M) from human plasma (“slow form” and “fast form”) and ceruloplasmin (CP) from human plasma from Athens Research and Technology (Athens, USA); α2M was also purchased from Sigma/Merck (Darmstadt, Germany).
All protein stock solutions were prepared in milliQ water and further diluted in reaction buffer solution (containing 50 mM HEPES and 50 mM NaCl, pH 7.4) for all experiments. For metallation of HSA and α2M, Cu(II) acetate from Sigma (Sigma/Merck KGaA, Darmstadt, Germany) was used. An equimolar concentration of Cu(II) acetate was added to HSA in 50 mM Hepes, 50 mM NaCl, pH 7.4 and the sample was incubated for 10 min at room temperature, which is sufficient for Cu•HSA complex formation (in a separate kinetic experiment we established that the half-life of Cu(II) binding to HSA is approx. 1.5 min). An equimolar concentration of Cu(II) acetate was added to α2M in 50 mM Hepes, 50 mM NaCl, pH 7.4 and the sample was incubated for up to 1 h. The following reagents were used: ethylenediaminetetraacetic acid (EDTA, 99.995% trace metal basis) and diethylenetriaminepentaacetic acid (DTPA) from Sigma/Merck (Merck KGaA, Darmstadt, Germany), and nitrilotriacetic acid (NTA) and L-amino acids (His, Glu and Gly) from Fluka (Merck KGaA, Darmstadt, Germany). Stock solutions of amino acids were prepared in pure milliQ water, whereas stock solutions of EDTA, DTPA and NTA were neutralized with 0.1 M NaOH to pH 7–8 and further diluted into the reaction buffer. The mobile phase and reagent solutions were prepared daily before the experiment.
Blood serum from anonymous donors was obtained from West-Tallinn Central Hospital (Estonia) and study approval was obtained from the Tallinn Medical Research Ethics Committee. For the current research, a blood plasma pool from 15 individuals was prepared in order to eliminate individual variability and to exclude the possibility of identifying participants. We confirm that all experiments were performed in accordance with relevant guidelines and regulations.
### SDS PAGE
Commercial protein samples were analyzed by SDS PAGE performed with the Mini-PROTEAN Tetra System (BioRad) using a Laemmli gel (10%T/5%C) consisting of separation and comb gels. Samples were prepared in milliQ water at a concentration of 3 mg/ml. Samples (20 µl) were mixed with 5 µl of 5x loading buffer (0.3 M Tris-HCl pH = 6.8, 25% β-mercaptoethanol, 50% glycerol, 10% SDS, 1% bromophenol blue), heated to 95 °C for 5 min, and 15 µl of each sample was applied to the gel together with 5 µl of Thermo Scientific PageRuler Unstained Broad Range Protein Ladder. Electrophoresis was performed with 25 mM Tris, 192 mM glycine, 0.1% SDS running buffer for about 1 h (130 V). The gel was stained in Coomassie staining solution overnight, followed by destaining in milliQ water. The results, presented in Figure S3, show that all commercial proteins used are of high purity and suitable for metal-binding studies.
### Calculations for dissociation constants
First, the titration curves (Figs. 3b and 4b) were fitted to a dose-dependent demetallation curve to determine the ligand concentration at which the concentration of copper ions bound to the protein was decreased by 50%. At that point, the equilibrium concentration of free copper ions [Cu]free, which can be calculated according to Eqn. 4 (ref. 31), equals the KD value of the protein.
$${[Cu]}_{free}=\frac{1}{1+\frac{[L]}{{K}_{1}}+\frac{{[L]}^{2}}{{K}_{1}{K}_{2}}}\,({[Cu]}_{total}-{[Cu]}_{bound\,to\,protein})$$
(4)
In Eqn. 4, [L] is the concentration of the competing ligand, K1 and K2 are the dissociation constants of the Cu•L and Cu•L2 complexes, respectively, [Cu]total is the total concentration of copper, and [Cu]bound to protein is the concentration of copper bound to the protein at the IC50.
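As an illustration of how Eqn. 4 is applied, the short sketch below evaluates the free Cu(II) concentration at the IC50 of a His titration, which equals the KD of the protein. The IC50 of ~0.5 mM His and the 10 μM Cu/HSA concentrations are illustrative assumptions (chosen to be consistent with the KD ≈ 34.7 fM reported above), not raw data from this study.

```python
# Sketch of the KD extraction via Eqn. 4 at the IC50 point.
# Assumed inputs (illustrative, not the paper's raw data):
#   10 uM Cu / 10 uM HSA titrated with His; IC50 ~ 0.5 mM His.
K1 = 3.7e-9   # KD of the Cu-His complex (M, pH 7.4, ref. 31)
K2 = 4.7e-7   # KD of the Cu-His2 complex (M)

def kd_from_ic50(L, cu_total, cu_bound):
    """Eqn. 4: free Cu(II) at the IC50, which equals the protein KD."""
    alpha = 1 + L / K1 + L**2 / (K1 * K2)   # His speciation factor
    return (cu_total - cu_bound) / alpha

# At the IC50, half of the 10 uM protein-bound copper is released.
kd = kd_from_ic50(L=5.0e-4, cu_total=10e-6, cu_bound=5e-6)
print(f"KD = {kd * 1e15:.1f} fM")   # close to the reported 34.7 fM
```

Note that for a 1:1 ligand such as NTA the quadratic term vanishes and the speciation factor reduces to 1 + [L]/K1.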
### Calculations for concentrations of free Cu(II) ions and Cu(II)•His2 complex
The concentration of free Cu(II) ions at equilibrium can be calculated as follows:
$${[Cu]}_{free}=\frac{{K}_{D}\,[Cu\bullet HSA]}{{[HSA]}_{free}}$$
(5)
[Cu]free in blood serum is calculated from the metallation equilibrium of HSA, taking into account the total HSA concentration of 650 µM and the concentration of non-CP-bound copper available for HSA in blood of 4.2 µM. [Cu]free is related to the amount of copper bound to LMW ligands like His according to Eqn. 4, and the concentration of LMW copper is given by:
$${[Cu]}_{LMW}={[Cu]}_{free}\,\left(1+\frac{[L]}{{K}_{1}}+\frac{{[L]}^{2}}{{K}_{1}{K}_{2}}\right),$$
(6)
where [L] is the concentration of the competing ligand (His), and K1 and K2 correspond to the dissociation constants of the CuHis and CuHis2 complexes (K1 = 3.7 × 10−9 M, K2 = 4.7 × 10−7 M, pH = 7.431). Considering the average concentration of His in blood of 75 µM43, the concentration of Cu•His complexes (mainly Cu•His2) equals 0.7 nM. Considering that the total concentration of free amino acids in the blood is approximately 3 mM45, the concentration of Cu•His•Xaa ternary complexes in the blood is approx. 28 nM.
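As a numerical check, the ~0.7 nM estimate can be reproduced with a few lines of Python; all constants below are the ones quoted in this section (Eqns. 5 and 6).

```python
# Reproduces the ~0.7 nM Cu-His estimate from Eqns. 5 and 6,
# using only the constants quoted in this section.
KD_HSA = 34.7e-15          # KD of Cu-HSA from His competition (M)
K1, K2 = 3.7e-9, 4.7e-7    # Cu-His and Cu-His2 dissociation constants (M)
HSA_total = 650e-6         # total serum HSA (M)
Cu_HSA = 4.2e-6            # non-CP-bound copper carried by HSA (M)
His = 75e-6                # average serum His (M)

# Eqn. 5: free Cu(II) set by the HSA metallation equilibrium
cu_free = KD_HSA * Cu_HSA / (HSA_total - Cu_HSA)

# Eqn. 6: total low-molecular-weight copper bound to His
cu_lmw = cu_free * (1 + His / K1 + His**2 / (K1 * K2))
print(f"[Cu]free = {cu_free:.2e} M, [Cu]LMW = {cu_lmw * 1e9:.2f} nM")
```

The script gives a [Cu]LMW of roughly 0.7 nM, dominated by the Cu•His2 term, in agreement with the estimate above.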
## References
1. de Bie, P., Muller, P., Wijmenga, C. & Klomp, L. W. Molecular pathogenesis of Wilson and Menkes disease: correlation of mutations with molecular defects and disease phenotypes. J. Med. Genet. 44, 673–688, https://doi.org/10.1136/jmg.2007.052746 (2007).
2. Linder, M. C. & Hazegh-Azam, M. Copper biochemistry and molecular biology. Am. J. Clin. Nutr. 63, 797S–811S, https://doi.org/10.1093/ajcn/63.5.797 (1996).
3. Donnelly, P. S., Xiao, Z. & Wedd, A. G. Copper and Alzheimer’s disease. Curr. Opin. Chem. Biol. 11, 128–133, https://doi.org/10.1016/j.cbpa.2007.01.678 (2007).
4. Cabrera, A. et al. Copper binding components of blood plasma and organs, and their responses to influx of large doses of (65)Cu, in the mouse. Biometals 21, 525–543, https://doi.org/10.1007/s10534-008-9139-6 (2008).
5. Linder, M. C. Ceruloplasmin and other copper binding components of blood plasma and their functions: an update. Metallomics 8, 887–905, https://doi.org/10.1039/c6mt00103c (2016).
6. Sarkar, B. K. T. In Biochemistry of copper (eds. Peisach, J., Aisen, P. & Blumberg, W.) 183–196 (Academic Press, 1966).
7. Neumann, P. Z. & Sass-Kortsak, A. The state of copper in human serum: evidence for an amino acid-bound fraction. J. Clin. Invest. 46, 646–658, https://doi.org/10.1172/JCI105566 (1967).
8. Masuoka, J. & Saltman, P. Zinc(II) and copper(II) binding to serum albumin. A comparative study of dog, bovine, and human albumin. J. Biol. Chem. 269, 25557–25561 (1994).
9. Valko, M. et al. High-affinity binding site for copper(II) in human and dog serum albumins (an EPR study). J. Phys. Chem. B 103, 5591–5597, https://doi.org/10.1021/jp9846532 (1999).
10. Catalani, S. et al. Free copper in serum: An analytical challenge and its possible applications. J. Trace Elem. Med. Biol. 45, 176–180, https://doi.org/10.1016/j.jtemb.2017.11.006 (2018).
11. Deschamps, P., Kulkarni, P. P., Gautam-Basak, M. & Sarkar, B. The saga of copper(II)-L-histidine. Coord. Chem. Rev. 249, 895–909, https://doi.org/10.1016/j.ccr.2005.09.013 (2005).
12. Takahashi, N., Ortel, T. L. & Putnam, F. W. Single-chain structure of human ceruloplasmin: the complete amino acid sequence of the whole molecule. Proc. Natl Acad. Sci. USA 81, 390–394 (1984).
13. Adamczyk-Sowa, M. et al. Changes in Serum Ceruloplasmin Levels Based on Immunomodulatory Treatments and Melatonin Supplementation in Multiple Sclerosis Patients. Med. Sci. Monit. 22, 2484–2491, https://doi.org/10.12659/msm.895702 (2016).
14. Hellman, N. E. & Gitlin, J. D. Ceruloplasmin metabolism and function. Annu. Rev. Nutr. 22, 439–458, https://doi.org/10.1146/annurev.nutr.22.012502.114457 (2002).
15. Ramos, D. et al. Mechanism of Copper Uptake from Blood Plasma Ceruloplasmin by Mammalian Cells. PLoS One 11, e0149516, https://doi.org/10.1371/journal.pone.0149516 (2016).
16. Marrero, A. et al. The crystal structure of human alpha2-macroglobulin reveals a unique molecular cage. Angew. Chem. Int. Ed. Engl. 51, 3340–3344, https://doi.org/10.1002/anie.201108015 (2012).
17. Sottrup-Jensen, L. Alpha-macroglobulins: structure, shape, and mechanism of proteinase complex formation. J. Biol. Chem. 264, 11539–11542 (1989).
18. Moriya, M. et al. Copper is taken up efficiently from albumin and alpha2-macroglobulin by cultured human cells by more than one mechanism. Am. J. Physiol. Cell Physiol 295, C708–721, https://doi.org/10.1152/ajpcell.00029.2008 (2008).
19. Xu, W. H., Dong, C., Rundek, T., Elkind, M. S. & Sacco, R. L. Serum albumin levels are associated with cardioembolic and cryptogenic ischemic strokes: Northern Manhattan Study. Stroke 45, 973–978, https://doi.org/10.1161/STROKEAHA.113.003835 (2014).
20. Jun, J. E. et al. Increase in serum albumin concentration is associated with prediabetes development and progression to overt diabetes independently of metabolic syndrome. PLoS One 12, e0176209, https://doi.org/10.1371/journal.pone.0176209 (2017).
21. Fasano, M. et al. The extraordinary ligand binding properties of human serum albumin. IUBMB Life 57, 787–796, https://doi.org/10.1080/15216540500404093 (2005).
22. Fanali, G. et al. Human serum albumin: from bench to bedside. Mol. Asp. Med. 33, 209–290, https://doi.org/10.1016/j.mam.2011.12.002 (2012).
23. Bal, W., Sokolowska, M., Kurowska, E. & Faller, P. Binding of transition metal ions to albumin: sites, affinities and rates. Biochim. Biophys. Acta 1830, 5444–5455, https://doi.org/10.1016/j.bbagen.2013.06.018 (2013).
24. Sokolowska, M., Krezel, A., Dyba, M., Szewczuk, Z. & Bal, W. Short peptides are not reliable models of thermodynamic and kinetic properties of the N-terminal metal binding site in serum albumin. Eur. J. Biochem. 269, 1323–1331, https://doi.org/10.1046/j.1432-1033.2002.02772.x (2002).
25. Harford, C. & Sarkar, B. Amino terminal Cu(II)- and Ni(II)-binding (ATCUN) motif of proteins and peptides: Metal binding, DNA cleavage, and other properties. Acc. Chem. Res. 30, 123–130, https://doi.org/10.1021/ar9501535 (1997).
26. Roche, M., Rondeau, P., Singh, N. R., Tarnus, E. & Bourdon, E. The antioxidant properties of serum albumin. FEBS Lett. 582, 1783–1787, https://doi.org/10.1016/j.febslet.2008.04.057 (2008).
27. Bossak-Ahmad, K., Fraczyk, T., Bal, W. & Drew, S. C. The Sub-picomolar Cu(2+) Dissociation Constant of Human Serum Albumin. Chembiochem https://doi.org/10.1002/cbic.201900435 (2019).
28. Rozga, M., Sokolowska, M., Protas, A. M. & Bal, W. Human serum albumin coordinates Cu(II) at its N-terminal binding site with 1 pM affinity. J. Biol. Inorg. Chem. 12, 913–918, https://doi.org/10.1007/s00775-007-0244-8 (2007).
29. Xiao, Z. & Wedd, A. G. The challenges of determining metal-protein affinities. Nat. Prod. Rep. 27, 768–789, https://doi.org/10.1039/b906690j (2010).
30. Atwood, C. S. et al. Characterization of copper interactions with alzheimer amyloid beta peptides: identification of an attomolar-affinity copper binding site on amyloid beta1-42. J. Neurochem. 75, 1219–1233 (2000).
31. Sarell, C. J., Syme, C. D., Rigby, S. E. & Viles, J. H. Copper(II) binding to amyloid-beta fibrils of Alzheimer’s disease reveals a picomolar affinity: stoichiometry and coordination geometry are independent of Abeta oligomeric form. Biochemistry 48, 4388–4402, https://doi.org/10.1021/bi900254n (2009).
32. Banci, L. et al. Affinity gradients drive copper to cellular destinations. Nature 465, 645–648, https://doi.org/10.1038/nature09018 (2010).
33. Guizado, T. R. Analysis of the structure and dynamics of human serum albumin. J. Mol. Model. 20, 2450, https://doi.org/10.1007/s00894-014-2450-y (2014).
34. Lau, S. J. & Sarkar, B. Ternary coordination complex between human serum albumin, copper (II), and L-histidine. J. Biol. Chem. 246, 5938–5943 (1971).
35. Hureau, C. et al. X-ray and solution structures of Cu(II) GHK and Cu(II) DAHK complexes: influence on their redox properties. Chemistry 17, 10151–10160, https://doi.org/10.1002/chem.201100751 (2011).
36. Andrderegg, G. Critical survey of stability constants of NTA complexes. Pure Appl. Chem. 54, 2693–2758 (1982).
37. Masuoka, J., Hegenauer, J., Van Dyke, B. R. & Saltman, P. Intrinsic stoichiometric equilibrium constants for the binding of zinc(II) and copper(II) to the high affinity site of serum albumin. J. Biol. Chem. 268, 21533–21537 (1993).
38. Thompsett, A. R., Abdelraheim, S. R., Daniels, M. & Brown, D. R. High affinity binding between copper and full-length prion protein identified by two different techniques. J. Biol. Chem. 280, 42750–42758, https://doi.org/10.1074/jbc.M506521200 (2005).
39. Samygina, V. R. et al. Ceruloplasmin: macromolecular assemblies with iron-containing acute phase proteins. PLoS One 8, e67145, https://doi.org/10.1371/journal.pone.0067145 (2013).
40. Liu, N. et al. Transcuprein is a macroglobulin regulated by copper and iron availability. J. Nutr. Biochem. 18, 597–608, https://doi.org/10.1016/j.jnutbio.2006.11.005 (2007).
41. Li, D. D., Zhang, W., Wang, Z. Y. & Zhao, P. Serum Copper, Zinc, and Iron Levels in Patients with Alzheimer’s Disease: A Meta-Analysis of Case-Control Studies. Front. Aging Neurosci. 9, 300, https://doi.org/10.3389/fnagi.2017.00300 (2017).
42. Pitkanen, H. T., Oja, S. S., Kemppainen, K., Seppa, J. M. & Mero, A. A. Serum amino acid concentrations in aging men and women. Amino Acids 24, 413–421, https://doi.org/10.1007/s00726-002-0338-0 (2003).
43. Lepage, N., McDonald, N., Dallaire, L. & Lambert, M. Age-specific distribution of plasma amino acid concentrations in a healthy pediatric population. Clin. Chem. 43, 2397–2402 (1997).
44. Schmidt, J. A. et al. Plasma concentrations and intakes of amino acids in male meat-eaters, fish-eaters, vegetarians and vegans: a cross-sectional analysis in the EPIC-Oxford cohort. Eur. J. Clin. Nutr. 70, 306–312, https://doi.org/10.1038/ejcn.2015.144 (2016).
45. Brumas, V., Alliey, N. & Berthon, G. A new investigation of copper(II)-serine, copper(II)-histidine-serine, copper(II)-asparagine, and copper(II)-histidine-asparagine equilibria under physiological conditions, and implications for simulation models relative to blood plasma. J. Inorg. Biochem. 52, 287–296 (1993).
46. Darwish, H. M., Cheney, J. C., Schmitt, R. C. & Ettinger, M. J. Mobilization of copper(II) from plasma components and mechanisms of hepatic copper transport. Am. J. Physiol. 246, G72–79, https://doi.org/10.1152/ajpgi.1984.246.1.G72 (1984).
## Acknowledgements
This work was supported during 2019 by the Estonian Ministry of Education and Research (Grant IUT 19-8 to P.P.) and by a research grant from Wilson Therapeutics AB/Alexion Pharmaceuticals Inc. Support from the Estonian Ministry of Education and Research was discontinued in 2020. The authors thank Andra Noormägi for assistance with the LC-ICP MS experiments and anonymous Reviewer 2 for very constructive criticism.
## Author information
### Contributions
P.P. and T.P. conceived the study; T.K., M.F., A.Z. and J.S. carried out the main experiments; T.K., A.Z., J.S., M.F., V.T., T.P. and P.P. analyzed the data; P.P. wrote the main draft of the paper and all authors participated in revising the manuscript.
### Corresponding author
Correspondence to Peep Palumaa.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Kirsipuu, T., Zadorožnaja, A., Smirnova, J. et al. Copper(II)-binding equilibria in human blood. Sci Rep 10, 5686 (2020). https://doi.org/10.1038/s41598-020-62560-4
http://www.math.rutgers.edu/news-events/calendar-weekly-view/icalrepeat.detail/2018/03/07/9569/-/conformal-blocks-attached-to-twisted-groups
# Calendar - Weekly View
Algebra Seminar
## Conformal blocks attached to twisted groups
#### Chiara Damiolini (Rutgers University)
Location: Hill 425
Date & time: Wednesday, 07 March 2018 at 2:00PM - 3:00PM
Abstract: Let G be a simple and simply connected algebraic group over $$\Bbb C$$. We can attach to G the sheaf of conformal blocks: a vector bundle on $M$ whose fibres are identified with global sections of a certain line bundle on the stack of G-torsors. We generalize the construction of conformal blocks to the case in which $$\cal G$$ is a twisted group over a curve which can be defined in terms of covering data. In this case the associated conformal blocks define a sheaf on a Hurwitz space and have properties analogous to the classical case.
## Contact Us
Department of Mathematics, Rutgers University, Hill Center - Busch Campus, 110 Frelinghuysen Road, Piscataway, NJ 08854-8019, USA. Phone: +1.848.445.2390. Fax: +1.732.445.5530
https://ltwork.net/jenae-noticed-that-many-of-her-co-workers-would-opt-for--13897066
# Jenae noticed that many of her co-workers would opt for the coffee that appeared to be most recently brewed, regardless
###### Question:
Jenae noticed that many of her co-workers would opt for the coffee that appeared to be most recently brewed, regardless of the flavor of the coffee offered. This leads her to believe that what she was witnessing was not really representative of everyone's true flavor preferences. She adapted her experimental study accordingly. Select one control in Jenae's experimental study:
a. Jenae uses different locations in the kitchen for the coffee pots.
b. Jenae makes sure that the coffee in different pots is brewed at the same time.
c. Jenae places condiments at random places throughout the kitchen.
d. Jenae monitors the habits of the co-workers who do not drink coffee.
https://tex.stackexchange.com/questions/3639/preventing-page-break-after-title-page
Preventing page break after title page
I am trying to create a title page for my thesis proposal, which uses the article class. I have the page formatted correctly, but it does not act like the default article title page, where the document starts on the title page. Instead, I get the body of the proposal starting on the second page, which is not what I want. Also, it is not sufficient to just put the abstract on the title page; the first section should also start on the title page if there is room.
Here is my code right now:
\begin{titlepage}
\begin{center}
{\LARGE Insert Title Here \par}
\vskip 2em
A Master's Thesis Proposal \\
{\tiny by} \\
Christopher J. Lieb \\
\vskip 2em
Professor Gary Pollice \par
\vskip 1em
{\Large \makebox[3in]{\hrulefill} \par}
\vskip 1em
{\small
\today \\
Department of Computer Science\\
Worcester Polytechnic Institute\\
Worcester, MA 01609\\}
\end{center}
\par
\end{titlepage}
I am just inserting it right at the beginning of my document environment. I started this from the article.cls where I thought the title page was being created, but apparently I missed the part that gets rid of the page break.
How do I make the document start on the title page like the article class does?
• Isn't the whole point of the titlepage environment that you get a separate title page? Just remove the environment if you don't want that behaviour? – Jukka Suomela Oct 1 '10 at 11:45
Simply redefining \endtitlepage in the preamble would be sufficient:
\let\endtitlepage\relax
• Wouldn't Don't use titlepage be cleaner and more to the point? This is the kind of hack that works, but is hard to understand for newcomers and is more of the quick-and-dirty kind. – Johannes_B Feb 17 '16 at 14:27
Figures that I'd get it AFTER I posted the question.
...
% prevent a page break from being put at the end of the title page so that
% the contents of the paper spill onto the title page
% save the function of the \newpage macro so we can restore it later
\global\let\newpagegood\newpage
\global\let\newpage\relax
\end{titlepage}
% restore the \newpage command after creating the title page
\global\let\newpage\newpagegood
http://mathoverflow.net/questions?page=5&sort=newest
# All Questions
91 views
### No irreducible parallelizable manifold of given dimension
What is an example of a closed 4-manifold $M$ such that $M$ is parallelizable and $M$ is topologically (or at least smoothly) irreducible? Topological irreducible: it is not homemorphic to ...
157 views
### Consistency of the nonrigidity of $P(\omega_1)/NS$
Is it consistent with ZFC that there exists an automorphism of $P(\omega_1)/\mathrm{NS}_{\omega_1}$ which is not the identity?
95 views
### Defining Global Choice in terms of strong limit cardinals over $ZF$
In his answer to user33038's mathoverflow question "What axioms are stronger than the Axiom of choice?", Prof. Hamkins writes: "What's more, the axiom of choice is equivalent over $ZF$ to the ...
90 views
### When is a conformal class equal to a conformal orbit?
Let $(M,g)$ be a Riemannian manifold of dimension $n$. Let $\text{conf}(M,g)$ denote the conformal group, i.e. the subgroup of diffeomorphisms of $M$ that acts by conformal transformations relative to ...
211 views
### Differential geometry without the Hausdorff condition or the second axiom of countability
I would like to know how the standard differential geometry of manifolds would change if we didn't assume the Hausdorff condition and/or the second axiom of countability. There are some simple things ...
35 views
### Global minimization. How? [closed]
I know it's impossible to have an algorithm that finds the global minimum (without a brute force approach), for a general problem. I also understand that the efficacy of the flavour of minimization ...
81 views
### Does this notion of “$\mathcal{F}$-digraph” appear in the literature?
By a digraph, I mean a quiver with no multiple edges. So in particular: Loops are okay. An infinite set of vertexes is okay. Furthermore, I will tend to identify each digraph with its underlying ...
28 views
### Positive-definite and positive semi-definite matrixes sum [closed]
I'm doing an exercise of numerical analysis that ask me to demonstrate a particular sum of matrixes. From Wikipedia, I know that: M and N are two matrixes: if M is positive definite and r > 0 is ...
111 views
### perfect modules over polynomial algebra
This may be obvious. My question is short: $R$ is the polynomial algebra $\mathbb{k}[X_{1},\dots , X_{n}]$. Is the $R$-module $\mathbb{k}$ perfect in the sense that $\mathbb{k}$ is a compact object ...
458 views
### Matrix equation $XAXBXC=I$
Let $A,B,C$ be unitary matrices. Does there always exist a unitary matrix $X$ such that $$(XA)(XB)(XC)=I,$$ where $I$ is the identity matrix? The quadratic equation $(XA)(XB)=I$ has the solution ...
35 views
### Can Mumford-Shah functional be adapted to lower $L^1$ space?
The well know Mumford-Shah functional functional $$F(u)=\int_\Omega|\nabla u|^2+\mathcal H^{N-1}(S_u) \tag 1$$ where $u\in SBV(\Omega)$ and $\nabla u$ is the absolutely continuous part of ...
67 views
### Sum of two surjective operators
It is well-known that the sum of two surjective operators isn't (in general) a surjective operator (for example consider $A+(-A)$). When it happens that the sum of two surjective operators is still ...
338 views
### Minimum size of the union of sets
I came accross this combinatorial problem in my computer science research. You are given a collection of k sets $S_1,...,S_k$ such that for any $i \neq j$, $\vert S_i \setminus S_j \vert \geq p$ ...
129 views
### Large Cardinal Principles that Imply $\Sigma_3^1$-Generic Absoluteness
It is known that (light-face) $\Sigma_3^1$ generic absoluteness is consistent with $\mathsf{ZFC}$: Friedman and Bagaria showed that it holds in the $\text{Coll}(\omega, < \kappa)$ extension of $V$ ...
108 views
### Question about mean square estimate for sums of Dirichlet coefficients of Symmetric Power $L$-functions
I have a question related to Coefficients of Symmetric power $L$-functions and I would be grateful if you could answer it. Let $\lambda_{Sym^rf}(n)$ be the $n$th Dirichlet coefficient of ...
79 views
### Possible argument against Height bound hypothesis
From this paper. $f(x,y)$ is polynomial with integer coefficients. $s(f)$ is its size, the sum of the logarithms of the absolute values of the nonzero coefficients, defined on p. 6. From p. 7. ...
67 views
### Construct a PDE solution from a net of approximations
Consider $P$ a linear partial differential operator in $\Bbb R ^n$. Consider some boundary condition given in the generic form $C(u) = 0$, that guarantees a unique solution (if any) of $Pu = 0$. Let ...
146 views
### What's the name of this theorem? [closed]
I would like to know the name of a theorem that states that if a continuous variable (i.e. y) takes a positive (negative) value for x(i) and a negative (positive) value for x(j), it is sure that y has ...
35 views
### Quotient of cumulative binomial distribution functions
Given to integers $n < m \in \mathbb{N}_0$ and a probability $p$, I'm struggling to calculate (or at least get an upper bound for) the quotient $$Q = \frac{F(n+1;m,p)}{F(n;m,p)}$$ where $F$ denotes ...
158 views
### How does one identify flow lines on a vector bundle with those on the base in Morse theory?
In Chapter 4.2 of Schwarz's book on Morse homology there is a brief discussion of Morse theory on the total space of a smooth vector bundle $E \to M$. In particular, one can take the Morse function ...
83 views
### complex dynamic system and affine algebraic variety
Let $M^n$ be a $n$-dimensional noncompact complex manifold. In "The density property for complex manifolds and geometric structures II", Dror Varolin showed that some open set of $M$ is ...
33 views
### Having the highest value in a interval appear less often [closed]
I have an array of size 5. And initially in each index, they are initialized with the value 1. so it looks like this : 1 1 1 1 1 Every iteration, I get a decimal value between 0.0 and 1.0 At the ...
192 views
### Any representation is a sub representation of direct sum of regular representation
I need a reference for the following statement: Let G be a linear algebraic group over an algebraically closed field k. Let V be a finite dimensional G-module. Then V is a subrepresentation of k[G]^n for ...
52 views
### $G$-CW complex structure of universal a $\mathcal{F}$-space
Let $G$ be a finite group and $H$ be an abelian subgroup of $G$. Let $\mathcal{F}$ be a family of all subgroups of $H$ , i.e. $\mathcal{F}= \{K : K \leq H\}$ Define universal $\mathcal{F}$-space ...
96 views
### do there exist finite simple characteristic quotients of the free group of rank 2?
Let $F_2$ be the free group of rank 2. Let $Aut^+(F_2)$ be the subgroup of $Aut(F_2)$ consisting of automorphisms of determinant 1 under abelianization. Do there exist maximal normal finite index ...
Question $\def\nn{\mathbb{N}}$ For any $n \in \nn^+$, is there a finite set $S \subset \nn^+$ such that $\sum_{k \in S} \frac{1}{k} = n$ but $\sum_{k \in T} \frac{1}{k} \notin \nn^+$ for any $T$ ...
https://tex.stackexchange.com/questions/534446/autowidth-box-for-itemize-environment
# Autowidth Box for Itemize Environment
A new environment is defined using varwidth and tikz to get a block whose width equals that of the content within it. It works fine unless itemization is involved. An MWE is given here.
\documentclass[]{beamer}
\usepackage{tikz}
\usepackage{varwidth}
\tikzstyle{example-box} = [fill=yellow, rectangle]
\newenvironment{autowidthblock}{
\begin{tikzpicture}
\node [example-box] (box)
\bgroup
\begin{varwidth}{\linewidth}
}{
\end{varwidth}
\egroup;
\end{tikzpicture}
}
\begin{document}
\begin{frame}
\begin{autowidthblock}
With normal sentences, the width is rightly fit
\end{autowidthblock}
\begin{autowidthblock}
\begin{itemize}
\item With itemize environment,
\item The width is not auto-set
\item Instead spans the entire textwidth
\end{itemize}
\end{autowidthblock}
\end{frame}
\end{document}
Can anybody suggest a way to make the width auto for itemization/enumeration as well? Thanks in advance.
• You are dealing with two very different things here: 1) In the case that works as you want, the material can be typeset in an \hbox (or in \mbox, if you prefer, i.e.: in LR-mode). There can't be any automatic line wrapping. 2) List environments such as itemize, on the other hand, can only be used in vertical mode; they always require an \hsize which determines where automatic line wrapping happens. In your case, the itemize environment is inside a \vbox of width \linewidth. With more text, you would see the line wrapping. If you want automatic line wrapping, ... (continued) – frougon Mar 27 '20 at 16:45
• you need to specify a width, and this will be the width of your box (though you can make it bigger by adding space, etc.). On the other hand, if you don't need any automatic line wrapping in the list environment, then I think that itemize is the wrong tool. Implementing your list as a tabular containing a single column of type l would allow the automatic width determination that you appear to want. With the array package, you can use the >{...} syntax to automatically insert the “bullets” and the horizontal skip (space) following them. – frougon Mar 27 '20 at 16:54
• Did the above comments help solve your problem ? – BambOo May 13 '20 at 12:15
• @BambOo, No, I thought the only way is to wrap it in a minipage and specify width manually. – Ashok May 14 '20 at 3:15
• If you found a solution, please answer your own question so that other users may benefit from your experience – BambOo May 22 '20 at 11:17
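Building on frougon's comments, a hedged sketch of the tabular-based alternative (untested; the spacing value and bullet choice are guesses) could look like:

```latex
\documentclass{beamer}
\usepackage{tikz}
\usepackage{array} % for the >{...} column prefix syntax

\begin{document}
\begin{frame}
\begin{tikzpicture}
  % A single-column tabular is measured at its natural width,
  % so the node shrinks to the widest "item"; \textbullet\ and
  % a small skip stand in for the itemize label.
  \node [fill=yellow, rectangle] {%
    \begin{tabular}{>{\textbullet\hspace{0.5em}}l}
      With a one-column tabular, \\
      the box width is auto-set \\
      to the widest line
    \end{tabular}%
  };
\end{tikzpicture}
\end{frame}
\end{document}
```

Unlike itemize, this gives up automatic line wrapping, which matches the use case of short, unwrapped items.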
http://mathoverflow.net/api/userquestions.html?userid=1650&page=1&pagesize=10&sort=recent
# Questions

### Applications of knot theory
4k views · feb 2 at 14:35 · Ronnie Brown

### Do finite groups acting on a ball have a fixed point?
562 views · oct 18 '11 at 23:37 · Peter Arndt

### Typesetting mathematics: how do {\em you} convert text into pdf?
1k views · mar 11 '11 at 15:58 · S. Sra

### Counting submanifolds of the plane
470 views · jul 14 '10 at 18:44 · Agol

### Packing twelve spherical caps to maximize tangencies
571 views · may 18 '10 at 15:37 · Alexey Tarasov
https://www.groundai.com/project/theoretical-foundations-of-equitability-and-the-maximal-information-coefficient/
# Theoretical Foundations of Equitability and the Maximal Information Coefficient

*This manuscript is subsumed by [1] and [2]. Please cite those papers instead.*
Yakir A. Reshef (School of Engineering and Applied Sciences, Harvard University; yakir@seas.harvard.edu), David N. Reshef (Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology; dnreshef@mit.edu), Pardis C. Sabeti (Department of Organismic and Evolutionary Biology, Harvard University; psabeti@oeb.harvard.edu), Michael Mitzenmacher (School of Engineering and Applied Sciences, Harvard University; michaelm@eecs.harvard.edu)
###### Abstract
The maximal information coefficient (MIC) is a tool for finding the strongest pairwise relationships in a data set with many variables [3]. MIC is useful because it gives similar scores to equally noisy relationships of different types. This property, called equitability, is important for analyzing high-dimensional data sets.
Here we formalize the theory behind both equitability and MIC in the language of estimation theory. This formalization has a number of advantages. First, it allows us to show that equitability is a generalization of power against statistical independence. Second, it allows us to compute and discuss the population value of MIC, which we call MIC∗. In doing so we generalize and strengthen the mathematical results proven in [3] and clarify the relationship between MIC and mutual information. Introducing MIC∗ also enables us to reason about the properties of MIC more abstractly: for instance, we show that MIC∗ is continuous and that there is a sense in which it is a canonical “smoothing” of mutual information. We also prove an alternate, equivalent characterization of MIC∗ that we use to state new estimators of it as well as an algorithm for explicitly computing it when the joint probability density function of a pair of random variables is known. Our hope is that this paper provides a richer theoretical foundation for MIC and equitability going forward.
This paper will be accompanied by a forthcoming companion paper that performs extensive empirical analysis and comparison to other methods and discusses the practical aspects of both equitability and the use of MIC and its related statistics.
## 1 Introduction
Suppose we have a data set with hundreds or thousands of variables, and we wish to find the strongest pairwise associations in that data set. The number of pairs of variables will be in the hundreds of thousands, or even millions, and so manually examining each pairwise scatter plot is out of the question. In such a context, one commonly taken approach is to compute some statistic on each of these pairs of variables, to rank the variable pairs from highest- to lowest-scoring, and then to examine only the top of the resulting list.
The results of this approach depend heavily on the chosen statistic. In particular, suppose the statistic is a measure of dependence, meaning that its population value is non-zero exactly in cases of statistical dependence. Even with such a guarantee, the magnitude of this non-zero score may depend heavily on the type of dependence in question, thereby skewing the top of the list toward certain types of relationships over others. For instance, a statistic may give non-zero scores to both linear and sinusoidal relationships; however, if the scores of the linear relationships are systematically higher, then using this statistic to rank variable pairs in a large data set will cause the many linear relationships in the data set to crowd out any potential sinusoidal relationships from the top of the list. This means that the human examining the top of the list will effectively never see the sinusoidal relationships.
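This crowding-out effect is easy to reproduce. The following sketch (ours, not from the paper) scores a noisy linear and a noisy sinusoidal relationship, generated with identical additive noise, using the squared Pearson correlation as the ranking statistic:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.uniform(0.0, 1.0, n)
noise = rng.normal(0.0, 0.1, n)

# Two relationships with identical additive noise.
y_linear = x + noise
y_sine = np.sin(8 * np.pi * x) / 2 + noise

# Squared Pearson correlation as the ranking statistic.
r2_linear = np.corrcoef(x, y_linear)[0, 1] ** 2
r2_sine = np.corrcoef(x, y_sine)[0, 1] ** 2

print(round(r2_linear, 2), round(r2_sine, 2))  # the linear pair scores far higher
```

In a ranking by this statistic, the sinusoidal pair would sit near the bottom of the list despite carrying a comparably strong signal.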
This shortcoming is not as concerning in a hypothesis testing framework: if what we sought were a comprehensive list of all the non-trivial associations in a data set, then all we would care about would be that the sinusoidal relationships are detected with sufficient power so that we could reject the null hypothesis. Many excellent methods exist that allow one to test for independence in this way in various settings [4, 5, 6, 7, 8, 9, 10]. However, often in data exploration the goal is to identify a relatively small set of strongest associations within a dataset as opposed to finding as many non-trivial associations as possible, which often are too many to sift through. What is needed then is a measure of dependence whose values, in addition to allowing us to identify significant relationships (i.e. reject a null hypothesis of independence), also allow us to measure the strength of relationships (i.e. estimate an effect size).
With the goal of addressing this need, we introduced in [3] a notion called equitability: an equitable measure of dependence is one that assigns similar scores to relationships with equal noise levels, regardless of relationship type. This notion is notably imprecise – it does not specify, for example, which relationship types are covered nor what is meant by “noise” or “similar”. However, we noted that in the case of functional relationships, one reasonable definition of equitability might be that the value of the statistic reflect the coefficient of determination (R²) of the data with respect to the regression function, with as weak a dependence as possible on the function in question. Additionally, though characterizing noise in the case of superpositions of several functional relationships is difficult, it seems reasonable to require that the statistic give a perfect score when the functional relationships being superimposed are noiseless. We then introduced a statistic, the maximal information coefficient (MIC), that behaves more equitably on functional relationships than the state of the art and also has the desired behavior on superpositions of functional relationships, given sufficient sample size.
Although MIC has enjoyed widespread use in a variety of disciplines [11, 12, 13, 14, 15, 16, 17, 18, 19, 20], the original paper on equitability and MIC has generated much discussion, both published and otherwise, including some concerns and confusions. Perhaps the most frequent concern that we have heard is the desire for a richer and more formal theoretical framework for equitability, and this is the main issue we address in this paper. In particular, we provide a formal definition for equitability that is sufficiently general to allow us to state in one unified language several of the variants of the original concept that have arisen. We use this language to discuss the result of Kinney and Atwal about the impossibility of perfect equitability in some settings [21], explaining its limitations based on the permissive underlying noise model that it requires, and on the strong assumption of perfect equitability that it makes. (See Section 2.3.2.) We also use the formal definition of equitability to clarify the relationship between equitability and statistical power by proving an equivalent characterization of equitability in terms of power against a certain set of null hypotheses. Specifically, we show that whereas typical measures of dependence are analyzed in terms of the power of their corresponding tests to distinguish statistical independence from non-trivial associations, an equitable statistic is one that yields tests that can distinguish finely between relationships of different strengths that may both be non-trivial. We then explain how this relates to raised concerns about the power of MIC [22]. (See Section 3.)
Following our treatment of equitability, we show that MIC can be viewed as a consistent estimator of a population quantity, which we call MIC∗, and give a closed-form expression for it. This has several benefits. First, the consistency of MIC as an estimator of MIC∗, together with properties of MIC∗ that are easy to prove given the closed-form expression, trivially subsumes and generalizes many of the theorems proven in [3] about the properties of MIC. Second, it clarifies that the parameter choices in MIC are not fundamental to the definition of the estimand (MIC∗) but rather simply control the bias and variance of the estimator (MIC). Third, separating finite-sample effects from properties of the estimand allows us to rigorously discuss the theoretical relationship between MIC∗ and mutual information. And finally, since power is a property of the test corresponding to a statistic at finite sample sizes and not of the population value of the statistic, this re-orientation allows us to ask whether there exist estimators of MIC∗ other than MIC that retain the relative equitability of MIC but also result in better power against statistical independence. It turns out that there do, as we shall soon discuss.
Having a closed-form expression for the population value of MIC (i.e. MIC∗) also allows us to reason about it more abstractly, and this is the goal to which we devote the remainder of the paper. We first show that, considered as a function of probability density functions, MIC∗ is continuous. This further clarifies the relationship with mutual information by allowing us to view MIC∗ as a “minimally smoothed” version of mutual information that is uniformly continuous. (In contrast, mutual information alone is not continuous in this sense.)
Our theory also yields an equivalent characterization of MIC∗ that allows us to develop better estimators of it. The expression for MIC∗ given at the beginning of this paper, which is analogous to the expression for MIC, defines it as the supremum of a matrix called the characteristic matrix. We show here that MIC∗ can instead be viewed as the supremum only of the boundary of that matrix. This is theoretically interesting, but it is also practically important, because computing elements of this boundary is easier than computing elements of the original matrix. In particular, our equivalent characterization of MIC∗ leads to the following advances.
• A new consistent estimator of MIC∗, together with an exact, efficient algorithm for computing it. (The algorithm introduced in [3] for computing MIC is only a heuristic.)
• An approximation algorithm for computing the MIC∗ of a given probability density function. Previously only heuristic algorithms were known [3, 23]. Having such an algorithm enables the evaluation of different estimators of MIC∗ as well as the evaluation of properties of MIC∗ in the infinite-data limit.
• A second estimator of MIC∗, which proceeds by using a consistent density estimator to estimate the probability density function that gave rise to the observed samples, and then applying the above algorithm to compute the true MIC∗ of the resulting probability density function. This approach may prove more accurate in cases where we can encode into our density estimator some prior knowledge about the types of distributions we expect.
This paper will be accompanied by a forthcoming companion paper that performs extensive empirical analysis of several methods, including comparisons of MIC and related statistics. Among other things, that paper shows that the first of the two estimators introduced here yields a significant improvement in terms of equitability, power, bias/variance properties, and runtime over the original statistic introduced in [3]. The companion paper also compares both new estimators, as well as the original statistic from [3], to existing methods along these same criteria and discusses when equitability is a useful desideratum for data exploration in practice. Thus, questions concerning the performance of MIC (as well as other estimators of MIC∗) compared to other methods at finite sample sizes are deferred to the companion paper, while this paper focuses on theoretical issues. Our hope is that these two papers together will lay a richer theoretical foundation on which others can build to improve our knowledge of equitability and MIC∗, and to advance our understanding of when equitability and estimation of MIC∗ are useful in practice.
## 2 Equitability
Equitability has been described informally as the ability of a statistic to “give similar scores to equally noisy relationships of different types” [3]. Here we provide the formalism necessary to discuss this notion more rigorously and define equitability using the language of estimation theory. Although what we are ultimately interested in is the equitability of a statistic, we first define equitability and discuss variations on the definition in the setting of random variables. Only then do we adapt our definitions to incorporate the uncertainty that comes with working with finite samples rather than random variables.
### 2.1 Overview
Before we formally define equitability in full generality, we first give a semi-formal overview of how we will do so, as well as a brief discussion of the benefits of our approach.
In [3], we asked to what extent evaluating a statistic like MIC on a sample from a noisy functional relationship with joint distribution Z tells us about the coefficient of determination (R²) of that relationship. (Recall that, for a pair of jointly distributed random variables (X, Y) with regression function f, R² is the squared Pearson correlation coefficient between f(X) and Y.) However, this setup can be generalized as follows. We have a statistic φ̂ (e.g. MIC) that detects any deviation from statistical independence, a set Q of distinguished standard relationships on which we are able to define what we mean by noise (e.g. noisy functional relationships), and a property of interest Φ that quantifies the noise in those relationships (e.g. R²). We now ask: to what extent will evaluating φ̂ on a sample from some joint distribution Z ∈ Q tell us about Φ(Z)?
How will we quantify this? Let us step back for a moment and suppose that finite-sample effects are not an issue: we will consider the population value φ of the statistic and discuss its desired behavior on distributions Z. In this setting, we will want φ to have the following two properties.
1. It is a measure of dependence. That is, φ(Z) = 0 if and only if Z exhibits statistical independence.
2. For every fixed number y there exists a small interval I such that φ(Z) = y implies Φ(Z) ∈ I. In other words, the set { Φ(Z) : Z ∈ Q, φ(Z) = y } is small.
Assuming that φ satisfies the first criterion, we will define its equitability on Q with respect to Φ as the extent to which it satisfies the second. A stronger version of this definition, which we will call perfect equitability, adds the requirement that the interval I be of size 0, i.e. that Φ(Z) be exactly recoverable from φ(Z) regardless of the particular identity of Z. Note, however, that this is strictly a special case of our definition, and in general when we discuss equitability we are explicitly acknowledging the fact that this may not be the case.
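To make the role of Φ concrete, here is a small numerical sketch (ours, not from [3]) of the property of interest Φ = R² for a noisy functional relationship: the squared correlation between f(X) and Y, which for additive noise independent of X reduces to Var(f(X)) / (Var(f(X)) + σ²):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
sigma = 0.2

x = rng.uniform(0.0, 1.0, n)
f = lambda t: np.sin(2 * np.pi * t)      # regression function
y = f(x) + rng.normal(0.0, sigma, n)     # noisy functional relationship

# Phi(Z) = R^2: squared Pearson correlation between f(X) and Y.
r2_empirical = np.corrcoef(f(x), y)[0, 1] ** 2

# Closed form for additive noise independent of X.
var_f = np.var(f(x))
r2_closed_form = var_f / (var_f + sigma**2)

print(round(r2_empirical, 3), round(r2_closed_form, 3))
```

The two numbers agree up to sampling error; an equitable φ evaluated on Z should pin down this quantity regardless of the choice of f.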
The notion of equitability we just described for φ will then have a natural extension to a statistic φ̂. The extension will proceed in the same way that one might define a confidence interval of an estimator: for a fixed number y, instead of considering the distributions Z for which φ(Z) = y exactly, we will consider the distributions Z for which y is a likely value of φ̂(D), where D is a sample from Z.
In Section 3 we will use this formalization of equitability to give an alternate, equivalent definition in terms of statistical power against a certain set of null hypotheses.
Though [3] focused primarily on various models of noisy functional relationships with R² as the property of interest, the appropriate definitions of Q and Φ may change from application to application. For instance, as we have noted previously, when Q is the set of all superpositions of noiseless functional relationships, the population MIC (i.e. MIC∗) is perfectly equitable. More generally, instead of functional relationships one may be interested in relationships supported on one-manifolds, with added noise. Or perhaps instead of R² one may decide to focus simply on the magnitude of the added noise, or on the mutual information between the sampled y-values and the corresponding de-noised y-values. In each case the overarching goal should be to have Q be as large as possible without making it impossible to define an interesting Φ, or impossible to find a measure of dependence that achieves good equitability on Q with respect to this Φ. For this reason, we keep our exposition on equitability generic, and use noisy functional relationships and R² only as a motivating example.
Keeping our exposition generic also allows us to address variations on the concept of equitability that have been introduced by others. For example, we are able to state in a formal, unified language the relationship of the work of Kinney and Atwal [21] to our previous work on MIC. In particular, we explain why their negative result about the impossibility of achieving perfect equitability is of limited scope, due to its focus on perfect equitability and to the permissive noise model that it requires. (See Section 2.3.2 for this discussion.)
As a matter of record, we wish to clarify at this point that the key motivation given for Kinney and Atwal’s work, namely that our original paper [3] stated that MIC was perfectly equitable, is incorrect. Specifically, they write “The key claim made by Reshef et al. in arguing for the use of MIC as a dependence measure has two parts. First, MIC is said to satisfy not just the heuristic notion of equitability, but also the mathematical criterion of R²-equitability…”, with the latter term referring to perfect equitability [21]. However, such a claim was never made in [3]. Rather, that paper informally defined equitability as an approximate notion and compared the equitability of MIC, mutual information estimation, and other schemes empirically, concluding that MIC is the most equitable statistic in a variety of settings. In other words, one method can be more equitable than another, even if neither method is perfectly equitable. We intend for the formal definitions we present in this section to lead to a clearer picture of the relationships among these concepts and among the results published about them.
### 2.2 Preliminaries: interpretability and reliability
Let P be the set of distributions over ℝ², and let φ : P → [0,1] be a mapping such that, for Z = (X, Y) ∈ P describing a pair of jointly distributed random variables, φ(Z) = 0 if and only if X and Y are statistically independent. Such a map is called a measure of dependence.
Now let Q be some subset of P indexed by a parameter, and let Φ : Q → [0,1] be some property that is defined on Q but may not be defined on all of P. We ask the following question: to what extent does knowing φ(Z) for some Z ∈ Q tell us about the value of Φ(Z)? We will refer to the members of Q as standard relationships, and to Φ as the property of interest.
Conventionally, noisy functional relationships have been used as standard relationships, and the corresponding property of interest has been R² with respect to the regression function. However, as noted above, we might imagine different scenarios. For this reason, we will make our exposition as generic as possible and refer back to the setting of noisy functional relationships as a motivating example.
Regardless of our choice of φ, Q, and Φ, there are two straightforward ways to measure how similar φ is to Φ on Q. The first such way is to restrict our attention only to distributions Z ∈ Q with Φ(Z) = x, and then to ask how much φ(Z) can vary subject to that constraint.
###### Definition 2.1.
Let φ be a measure of dependence, and let x ∈ [0,1]. The smallest closed interval containing the set { φ(Z) : Z ∈ Q, Φ(Z) = x } is called the reliable interval of φ at x and is denoted by R^φ(x). φ is a d-reliable proxy for Φ on Q at x if and only if the diameter of R^φ(x) is at most d.
Equivalently, φ is a d-reliable proxy for Φ on Q at x if and only if there exists an interval I of size d such that Φ(Z) = x implies that φ(Z) ∈ I. In other words, if we restrict our attention to distributions Z ∈ Q such that Φ(Z) = x, we are guaranteed that φ applied to those distributions will produce values that are close to each other. (See Figure 1a for an illustration.) In the context of noisy functional relationships and R², this corresponds to saying that relationships with the same R² will not score too differently.
The second way of measuring how closely φ matches Φ on Q is to talk about how much Φ can vary when we consider only distributions Z ∈ Q with φ(Z) = y.
###### Definition 2.2.
Let φ be a measure of dependence, and let y ∈ [0,1]. The smallest closed interval containing the set { Φ(Z) : Z ∈ Q, φ(Z) = y } is called the interpretable interval of φ at y and is denoted by I^φ(y). φ is a d-interpretable proxy for Φ on Q at y if and only if the diameter of I^φ(y) is at most d.
Equivalently, φ is a d-interpretable proxy for Φ on Q at y if and only if there exists an interval I of size d such that φ(Z) = y implies that Φ(Z) ∈ I for all Z ∈ Q. In other words, if all we know about a distribution Z ∈ Q is that φ(Z) = y, then we are able to guess Φ(Z) fairly accurately. (See Figure 1a for an illustration.) In the context of noisy functional relationships and R², this corresponds to the fact that evaluating φ on a relationship will give us good upper and lower bounds on the noise level of that relationship as measured by R².
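The gap between a measure of dependence and an interpretable proxy is easiest to see with a deliberately bad proxy. In the sketch below (ours; the particular model is hypothetical), φ is the squared Pearson correlation of (X, Y), Φ is R², and Q mixes noisy linear relationships with noisy symmetric parabolas. Because a symmetric parabola is uncorrelated with X, knowing that φ(Z) ≈ 0 leaves Φ(Z) almost completely undetermined — the interpretable interval at y ≈ 0 is wide:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
x = rng.uniform(0.0, 1.0, n)

def phi_and_Phi(f, sigma):
    """phi = squared Pearson correlation of (X, Y); Phi = R^2 of the relationship."""
    y = f(x) + rng.normal(0.0, sigma, n)
    return np.corrcoef(x, y)[0, 1] ** 2, np.corrcoef(f(x), y)[0, 1] ** 2

linear = lambda t: t
parabola = lambda t: (t - 0.5) ** 2      # symmetric, hence uncorrelated with X

pairs = [phi_and_Phi(f, s) for f in (linear, parabola) for s in (0.01, 0.05, 0.1, 0.2)]

# Approximate interpretable interval of phi at y ~ 0: the range of Phi over
# distributions in this small Q whose phi-score is near zero.
near_zero = [Phi for phi, Phi in pairs if phi < 0.05]
print(round(min(near_zero), 2), round(max(near_zero), 2))
```

All four parabolic relationships score near zero, yet their R² values span most of [0,1]; the Pearson correlation is a very poor interpretable proxy on this Q.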
When Q and Φ are clear we will omit them and describe φ simply as d-reliable (resp. d-interpretable) at x (resp. y).
Once we have specified what we mean by “reliable” and “interpretable”, it is straightforward to define “reliability” and “interpretability”.
###### Definition 2.3.
The reliability (resp. interpretability) of φ at x (resp. y) is 1 − d, where d is the diameter of R^φ(x) (resp. I^φ(y)). If d = 0, the reliability (resp. interpretability) of φ is 1 and φ is called perfectly reliable (resp. perfectly interpretable).
We will occasionally refer to the more general notions of reliability/interpretability as “approximate” to distinguish them from the perfect case.
One can imagine many different ways to quantify the overall interpretability and reliability of a measure of dependence. For instance, we have
###### Definition 2.4.
A measure of dependence φ is worst-case d-reliable (resp. d-interpretable) if it is d-reliable (resp. d-interpretable) at all x (resp. y) in [0,1].
A measure of dependence φ is average-case d-reliable (resp. d-interpretable) if its reliability (resp. interpretability), averaged over all x (resp. y) in [0,1], is at least 1 − d.
More generally, one could imagine defining a prior over all the distributions in Q to reflect one’s belief about the importance of various types of relationships in the world, and then using that prior to measure overall reliability and interpretability. We do not pursue this here; instead, we focus only on worst-case reliability and interpretability.
Let us give two simple examples of the use of this new terminology. First, the Linfoot correlation coefficient [24], defined as √(1 − e^(−2I)) where I is mutual information, is a worst-case perfectly interpretable and perfectly reliable proxy for ρ², the squared Pearson correlation coefficient, on the set Q of bivariate normal random variables. Additionally, Theorem 6 of [4] implies that distance correlation is a perfectly interpretable and perfectly reliable proxy for ρ² on the same set Q. In both examples, the measure of dependence restricted to Q is a deterministic, invertible function of ρ², which is exactly what perfect reliability and interpretability require.
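The Linfoot example can be checked directly: for bivariate normals with correlation ρ, the mutual information is I = −½ ln(1 − ρ²), so the Linfoot coefficient collapses to |ρ| and is therefore an invertible function of ρ². A quick sanity check:

```python
import numpy as np

# For bivariate normals, I = -(1/2) ln(1 - rho^2), so
# sqrt(1 - e^{-2I}) = sqrt(1 - (1 - rho^2)) = |rho|.
for rho in (0.1, 0.5, 0.9):
    I = -0.5 * np.log(1.0 - rho**2)
    linfoot = np.sqrt(1.0 - np.exp(-2.0 * I))
    print(rho, round(float(linfoot), 12))  # recovers rho exactly
    assert abs(linfoot - rho) < 1e-12
```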
### 2.3 Defining equitability
#### 2.3.1 Equitability in the sense of [3]
As we have suggested above, in the language of reliability and interpretability, the informal notion of equitability described in [3] amounts to a requirement that a measure of dependence be a highly interpretable proxy for some property of interest Φ that is suitably defined to reflect relationship “strength”, over as large a model Q as possible.
We have discussed the fact that the particular choice of Q and Φ may vary from problem to problem, as might the way in which equitability is measured (average-case versus worst-case). Let us now define the models considered in [3]. We begin by stating precisely what we mean by the term “noisy functional relationship”.
###### Definition 2.5.
A random variable Z distributed over ℝ² is called a noisy functional relationship if and only if it can be written in the form Z = (X + ε, f(X) + ε′), where f : [0,1] → ℝ, X is a random variable distributed over [0,1], and ε and ε′ are random variables. We denote the set of all noisy functional relationships by F_noisy.
As we will soon discuss, there are varying views about whether constraints should be placed on ε and ε′, ranging from requiring them to be Gaussians independent of each other and of X, all the way to allowing them to be arbitrary random variables that are not necessarily independent of X. For this reason, we do not place any constraints on them in the above definition.
With the concept of noisy functional relationships defined, equitability on a set of functional relationships simply amounts to the use of R² as the property of interest.
###### Definition 2.6 (Equitability on functional relationships in the sense of Reshef et al.).
Let Q be a set of noisy functional relationships. A measure of dependence φ is worst-case (resp. average-case) d-equitable on Q if it is a worst-case (resp. average-case) d-interpretable proxy for R² on Q.
In this paper, we will often use “equitability” with no qualifier to mean worst-case equitability.
Given a set F of functions from [0,1] to ℝ, [3] defined a few different subsets of the set of noisy functional relationships. The simplest is
Q^{Y,U}_F = { (X, f(X) + ε_b) : b ∈ ℝ≥0, f ∈ F }
where the letter U in Q^{Y,U}_F indicates that X is uniform over [0,1], and ε_b is uniformly distributed over an interval whose width is governed by b and is independent of X. (In [3], X was not actually random; instead, values of x were chosen to produce evenly spaced x-values. For theoretical clarity, we opt here to treat X as a random variable.) Of course, one can add noise in the first coordinate as well, producing
Q^{XY,U}_F = { (X + ε_a, f(X) + ε_b) : a, b ∈ ℝ≥0, f ∈ F }
where ε_a is defined analogously to ε_b. In both of the above cases, we can also modify X so that, rather than being uniformly distributed over [0,1], it is distributed in such a way that (X, f(X)) is uniformly distributed over the graph of f. This gives the last two models used in [3].
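For concreteness, samples from the first two models can be generated as follows (our sketch; the exact distribution of the noise terms, here uniform on an interval of half-width b/2, is an illustrative assumption rather than a detail fixed by the text):

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_Q_Y_U(f, b, n):
    """(X, f(X) + eps_b): X uniform on [0,1], noise in the y-coordinate only.
    eps_b is taken uniform on [-b/2, b/2] -- an assumption for illustration."""
    x = rng.uniform(0.0, 1.0, n)
    return x, f(x) + rng.uniform(-b / 2, b / 2, n)

def sample_Q_XY_U(f, a, b, n):
    """(X + eps_a, f(X) + eps_b): independent noise in both coordinates."""
    x = rng.uniform(0.0, 1.0, n)
    return x + rng.uniform(-a / 2, a / 2, n), f(x) + rng.uniform(-b / 2, b / 2, n)

x1, y1 = sample_Q_Y_U(np.sin, 0.2, 1000)
x2, y2 = sample_Q_XY_U(np.sin, 0.1, 0.2, 1000)
print(x1.shape, y1.shape, x2.shape, y2.shape)
```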
The reason that [3] defined four different models was simple: since it is often difficult to say exactly which model (if any) is actually followed by real data, we would ideally like to see good equitability on as many different such models as possible. Given the lack of a neat description of how real data behave, we aim for robustness.
Nevertheless, each of these models is somewhat narrow, and we can easily imagine others: for instance, we might define ε_a and ε_b to be Gaussian, we might allow them to depend on each other, or we might consider adding noise only to the first coordinate. Each of these modifications deserves attention.
###### Remark 2.7.
In the remainder of this paper, we will use the terms “equitability” and “interpretability” differently, but the difference is merely notional and not formal: equitability is the type of interpretability that we get when our goal is that φ reflect the strength of relationships.
#### 2.3.2 Kinney and Atwal’s impossibility result
Now that we have a sufficiently general language in which to discuss equitability, let us turn to the recent impossibility result of Kinney and Atwal [21]. Kinney and Atwal write the following.
[W]e prove that the definition of equitability proposed by Reshef et al. (ed: [3]) is, in fact, impossible for any (nontrivial) dependence measure to satisfy.
However, this result actually has two severe limitations to its scope. To understand them, let us state the result in the language developed above: it amounts to showing that no non-trivial measure of dependence can be perfectly equitable (i.e. a perfect worst-case interpretable proxy for R²) on the model Q_K, where
Q_K = { (X, f(X) + η) : f : [0,1] → [0,1], (η ⊥ X) | f(X) }
with η representing a random variable that is conditionally independent of X given f(X). This model describes functional relationships with noise in the second coordinate only, where that noise can depend arbitrarily on the value of f(X) (i.e. it can be heteroscedastic) but must be otherwise independent of X.
The first limitation of this result is that the argument depends crucially on the fact that the noise term η can depend arbitrarily on the value of f(X). In particular, its mean need not be 0 but rather may change depending on f(X). As pointed out in [25], selecting such a large model leads to identifiability issues, such as allowing one relationship to be obtained as a noisy version of another. The more permissive (i.e. large) a model is, the easier it is to prove an impossibility result for it, and Q_K is indeed quite large: in particular, it is not contained in any of the models defined above, which would be necessary in order for impossibility on Q_K to translate into impossibility for one of those models. Thus, Kinney and Atwal’s result does not apply to the models defined in [3].
The second limitation of Kinney and Atwal’s result is that it only addresses perfect equitability rather than the more general, approximate notion with which we are primarily concerned. As we discussed in Section 2.1, the claim that the definition of equitability given in [3] was one of perfect equitability rather than approximate equitability is incorrect. More generally, however, though a perfectly equitable proxy for R² may indeed be difficult or even impossible to achieve for many large models, including some of the models defined above, such impossibility would make approximate equitability no less desirable a property. The question thus remains how equitable various measures are, both provably and empirically. To borrow an analogy from computer science, the fact that a problem is proven to be NP-complete does not mean that we do not want efficient algorithms for the problem; we simply may have to settle for heuristic solutions, or solutions with provable approximation guarantees. Similarly, there is merit in searching for measures of dependence that appear to be sufficiently equitable proxies for R² in practice.
For more on this discussion, see the technical comment [26] published by the authors of this paper about [21].
### 2.4 Equitability of a statistic
Until now we have only discussed the properties of a measure of dependence considered as a function of random variables. However, it is trivial to define a perfectly reliable and interpretable proxy for any Φ on any Q: simply define φ to equal Φ on Q and an arbitrary measure of dependence on the rest of P. Of course, this is not the point. Rather, the idea is to define a function φ that is amenable to efficient estimation, and to use the notions of interpretability and reliability defined above in order to separate the loss in performance that a given estimator of φ incurs from finite-sample effects from the loss in performance caused by the choice of the estimand φ itself.
However, to reason about this distinction, we do need a way to directly evaluate the reliability and interpretability of a statistic at a given sample size. To do so, we will adapt our definitions from the “infinite-data limit” by analogy with the theory of estimation and confidence intervals. Specifically, in estimation theory, confidence intervals can be defined in terms of the sets of likely values of a statistic at each value of the parameter. In the same way, we will define a reliable interval to be a set of likely values of the statistic φ̂ given a certain value of Φ, and then define the interpretable interval in terms of the values of Φ whose reliable intervals contain a given value of φ̂. This analogy is depicted in Figure 2 and Table 1.
###### Remark 2.8.
The analogy between an equitable statistic and an estimator with small confidence intervals can be made even more explicit as follows: ordinarily, the best way to obtain information about Φ would be to estimate it directly. However, if we do so we are not guaranteed that the statistic we use will detect any deviation from statistical independence when used on distributions not in Q. Thus, our problem is akin to that of seeking the best possible estimator of Φ on Q subject to the constraint that the population value of the statistic equal 0 if and only if the distribution in question exhibits statistical independence. The difference is that we only care about the confidence intervals of the estimator and not about its bias, since we are principally interested in ranking relationships according to Φ rather than recovering the exact value of Φ.
We first define the reliability of a statistic. Previously, reliability meant that if we know Φ(Z) then we can place φ(Z) in a small interval. To obtain the analogous definition for a statistic, we simply relax the requirement that φ(Z) be in a small interval to the requirement that φ̂(D) be in a small interval with high probability when D is a sample from Z. This is equivalent to simply considering φ̂ as an estimator of Φ rather than of φ and requiring that its sampling distribution have its probability mass concentrated in a small area.
###### Definition 2.9.
Let φ̂ be a statistic computed on samples of size n, and let x ∈ [0,1]. The α-reliable interval of φ̂ at x, denoted by R^φ_α(x), is the smallest closed interval A with the property that, for all Z ∈ Q with Φ(Z) = x,

P(φ̂(D) < min A) < α

and

P(φ̂(D) > max A) < α

where D is a sample of size n from Z. (This interval is simply the union of the central intervals of the sampling distributions of φ̂ taken over all such distributions Z.)

The statistic φ̂ is a d-reliable proxy for Φ on Q at x with probability 1 − 2α if and only if the diameter of R^φ_α(x) is at most d.
(See Figure 1b for an illustration.) Looking once more at the example of noisy functional relationships with R² as the property of interest, this corresponds to the requirement that there exist a small interval A such that, for any noisy functional relationship Z with an R² of x, φ̂(D) falls within A with high probability when D is a sample from Z.
Once reliability is suitably defined, the definition of interpretability is simple to translate into one for a statistic. Here we again make our definition by considering φ̂ as an estimator of Φ and looking at its confidence intervals. The key difference is that while we generally think of a confidence interval of a consistent estimator as becoming large only due to finite-sample effects, the so-called interpretable interval can become large either because of finite-sample effects or because the function to which φ̂ converges is itself not very interpretable.
###### Definition 2.10.
Let φ̂ be a statistic, and let y ∈ [0,1]. The α-interpretable interval of φ̂ at y, denoted by I_α^φ̂(y), is the smallest closed interval containing the set

{ x ∈ [0,1] : y ∈ R_α^φ̂(x) }

The statistic φ̂ is an ε-interpretable proxy for Φ on Q at y with confidence 1 − 2α if and only if the diameter of I_α^φ̂(y) is at most ε.
(See Figure 1b for an illustration.)
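The inversion in the definition above can be sketched directly: given reliable intervals tabulated on a grid of property values, the interpretable interval at a statistic value y is the smallest interval covering every x whose reliable interval contains y. The widening intervals below are fabricated purely for illustration.

```python
def interpretable_interval(y, xs, reliable):
    """Invert a family of reliable intervals: the interpretable interval
    at statistic value y is the smallest closed interval containing
    { x : y lies in the reliable interval at x }."""
    hits = [x for x, (lo, hi) in zip(xs, reliable) if lo <= y <= hi]
    return (min(hits), max(hits)) if hits else None

# Hypothetical reliable intervals of width 0.2 centered at x (illustration only).
xs = [i / 100 for i in range(101)]
reliable = [(max(0.0, x - 0.1), min(1.0, x + 0.1)) for x in xs]
print(interpretable_interval(0.5, xs, reliable))  # -> (0.4, 0.6)
```

Note how the size of the result is driven entirely by the widths of the reliable intervals, which is the sense in which interpretability can degrade either from finite-sample noise or from the limiting behavior of the statistic itself.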
###### Remark 2.11.
Note that our definitions do not require that φ̂ converge to Φ in any sense; we are not trying to construct a measure of dependence that also estimates Φ exactly. Rather, we are willing to tolerate some discrepancy between φ̂ and Φ in order to preserve the fact that φ̂ acts as a measure of dependence when applied to samples from distributions not in Q. This is the essential compromise behind the idea of equitability. Why is it worthwhile to make? Because on the one hand, if we are interested in ranking relationships, then having only a measure of dependence with no guarantees about how noise affects its score will not do; but on the other hand, we want a statistic that is robust enough that we will not completely miss relationships that do not fall in this set.
Analogous definitions can be made for average-case and worst-case reliability/equitability, and for equitability on functional relationships.
### 2.5 Discussion
As the definitions given above imply, an equitable statistic is different from other measures of dependence in that its main intended use is not testing for independence, but rather measurement of effect size. The idea is to have a statistic that has the robustness of a measure of dependence but that also, via its relationship to Φ, gives values that have a clear, if approximate, interpretation and can therefore be used to rank relationships.
There is a tension inherent in the concept of equitability that arises from the attempt to reconcile the robustness of a measure of dependence with the utility of a measure of effect size. This tension leads to two important concessions to pragmatism.
1. The set Q is not the set of all distributions but rather some strict subset of it.
2. Despite the fact that we evaluate φ̂ as an estimator of Φ, we have not required that φ̂ converge to Φ in any sense, and we explicitly allow for the possibility that it may not. Rather, we are willing to tolerate some discrepancy between the population value of φ̂ and Φ in order to preserve the fact that φ̂ acts as a measure of dependence when applied to samples from distributions not in Q.
The first of these compromises necessitates the second. For if we could set Q to be the set of all distributions and still define a property of interest that captured what we mean by a “strong” relationship, then we truly would simply seek an estimator for that property and be done. Unfortunately we cannot do this; the concepts of “noise” and what it means to be a “strong” relationship can become elusive when we enlarge Q too much. However, this does not mean that we should give up on seeking a statistic that somehow performs reasonably at ranking relationships. Therefore, while we define exactly what we would like to have (i.e., Φ) whenever we can (i.e., on some Q), we still demand that our statistic act as a measure of dependence on relationships not in Q. This second requirement may hurt our ability to estimate Φ, but when we are exploring data sets with real relationships whose form we cannot fully anticipate or model, the robustness it gives can be worth the price of relaxing the requirement that φ̂ converge to Φ to a requirement that it merely approximate Φ. This is our second compromise.
In this section, we largely focused on setting Q to be some subset of the set of noisy functional relationships, as this has been the subject of most of the empirical work on the equitability of MIC and other measures of dependence. However, it is important to keep in mind that Q should ideally be larger than this. For instance, as we discussed previously, in [3] the equitability of MIC is discussed not just in the case of noisy functional relationships but also in the case of superpositions of functional relationships.
As the compromises discussed above make clear, equitability sits in between the traditional hypothesis-testing paradigm of measures of dependence on the one hand and the paradigm of measuring effect size on the other. However, equitability can actually be framed entirely in terms of hypothesis tests. This is the topic of our next section.
## 3 Equitability as a generalization of power against independence
Having defined equitability in terms of estimation theory, we will now show that we can equivalently think of it in terms of power against a certain family of null hypotheses. This result re-casts equitability as a generalization of power against statistical independence and gives a second formal definition of equitability that is easily quantifiable using traditional power analysis.
### 3.1 Overview
Our proof is based on the idea behind a standard construction of confidence intervals via inversion of statistical tests. In particular, equitability of a statistic φ̂ with respect to a property of interest Φ on a model Q will be shown to be equivalent to power against the collection of null hypotheses of the form Φ(Z) = x_0 corresponding to different values of x_0. Thus, if Φ is such that Φ(Z) = 0 if and only if Z exhibits statistical independence, then equitability with respect to Φ is a strictly stronger requirement than power against statistical independence.
As a concrete example, let us again return to the case in which Q is a set of noisy functional relationships and the property of interest is R². Here, a conventional power analysis would consider, say, the right-tailed test based on the statistic φ̂ and evaluate its type II error at rejecting the null hypothesis of R² = 0, i.e. statistical independence. In contrast, we will show that for φ̂ to be equitable, it must yield right-tailed tests with high power against null hypotheses of the form R² = x_0 for any x_0. This is difficult: each of these new null hypotheses can be composite since Q can contain relationships of many different types (e.g. a noisy linear relationship, a noisy sinusoidal relationship, and a noisy parabolic relationship). Whereas all of these relationships may have reduced to a single null hypothesis of statistical independence in the case of x_0 = 0, they yield composite null hypotheses once we allow x_0 to be non-zero.
### 3.2 Definitions and proof of the result
As before, let P be the set of distributions over ℝ², and let φ be a measure of dependence estimated by some statistic φ̂. Let Q ⊂ P be some model of interest and let Φ : Q → [0,1] be a property of interest.
Now, given some x_0 ∈ [0,1], let T_y be the right-tailed test based on φ̂ with critical value y, null hypothesis Φ(Z) = x_0, and alternative hypothesis Φ(Z) > x_0. The set {T_y} is the set of possible right-tailed tests based on φ̂ that are available to us for distinguishing Φ(Z) > x_0 from Φ(Z) = x_0. We will single out one of these tests in particular, namely the optimal one subject to a constraint on type I error: let T* be the test T_y with y chosen to be as small as possible subject to the constraint that the type I error of the resulting test be at most α. We are now ready to define the measure of power that we will use to show the equivalence with equitability.
###### Definition 3.1.
Fix α > 0. For any given x ≥ x_0, let K_α^{x_0}(x) be the power of T* against the alternative Φ(Z) = x. We call the function K_α^{x_0} the power function associated to φ̂ at x_0 with significance α with respect to Φ.
When Φ(Z) = 0 if and only if Z represents statistical independence, then the power function K_α^0 gives the power of right-tailed tests based on φ̂ at distinguishing statistical independence from various non-zero values of Φ with significance α. For instance, if Q is the set of bivariate normal distributions and Φ is the ordinary correlation ρ, then K_α^0 simply gives us the power of the right-tailed test based on φ̂ at distinguishing the alternative hypothesis of ρ = x from the null hypothesis of ρ = 0. As an additional example, in the cases discussed above where Q is some set of functional relationships and Φ is R², the power function associated to φ̂ at x_0 = 0 equals the power of the right-tailed test based on φ̂ that distinguishes the alternative hypothesis of R² = x from the null hypothesis of R² = 0, i.e., independence, with type I error α.
Nevertheless, as we observe here, the set of power functions at values of x_0 besides 0 contains much more information than just the power of right-tailed tests based on φ̂ against the null hypothesis of Φ(Z) = 0. We can recover the interpretability of φ̂ at every y by considering its power functions at values of x_0 beyond 0. This is the main result of this section. It is analogous to the standard relationship between the size of the confidence intervals of an estimator and the power of their corresponding right-tailed tests.
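A minimal simulation illustrates how one value of such a power function can be estimated. Everything in the sketch is an assumption made for the example: the statistic (squared Pearson correlation), the null (independence, i.e. x_0 = 0), and the single alternative distribution.

```python
import numpy as np

def power_of_right_tailed_test(stat, null_gens, alt_gen,
                               alpha=0.05, trials=1000, seed=0):
    """Sketch of evaluating a power function at one alternative: the
    critical value is the largest (1 - alpha)-quantile of the statistic
    over the (possibly composite) null; the power is the rejection rate
    under the alternative."""
    rng = np.random.default_rng(seed)
    crit = max(
        float(np.quantile([stat(*g(rng)) for _ in range(trials)], 1 - alpha))
        for g in null_gens
    )
    rejections = sum(stat(*alt_gen(rng)) >= crit for _ in range(trials))
    return rejections / trials

def r_squared(x, y):
    r = np.corrcoef(x, y)[0, 1]
    return r * r

# Hypothetical null (independence) and alternative (a noisy line).
def independent(rng, n=50):
    return rng.uniform(0, 1, n), rng.normal(size=n)

def noisy_line(rng, n=50, sigma=0.3):
    x = rng.uniform(0, 1, n)
    return x, x + sigma * rng.normal(size=n)

power = power_of_right_tailed_test(r_squared, [independent], noisy_line)
print(f"power against the noisy line: {power:.2f}")
```

Taking the maximum of the null quantiles over several generators is what handles a composite null; replacing the independence generator with a family of relationships at some fixed non-zero noise level gives the tests at x_0 > 0 discussed above.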
###### Remark 3.2.
In this setup our null and alternative hypotheses, since they are based on Φ and not on a parametrization of Q that uniquely specifies distributions, may be composite: Z can be one of several distributions with Φ(Z) = x_0 or Φ(Z) = x respectively. This composite nature of our null hypotheses is bound up in the reason we need interpretability and reliability in the first place: if the set Q were so small that each value of Φ defined only one distribution then we would likely not be in a setting where we needed an agnostic approach to detecting strong relationships. We could just estimate Φ directly.
Before we prove the main result of this section, the connection between power and interpretability, we must first define which aspect of power will be reflected in the interpretability of φ̂.
###### Definition 3.3.
The uncertain set of a power function K_α^{x_0} is the set {x : K_α^{x_0}(x) < 1 − α}.
We will now prove the main proposition of this section, which is essentially that uncertain sets are interpretable intervals and vice versa. In what follows, since our statistic φ̂ is fixed, we use R_α to denote R_α^φ̂, and I_α to denote I_α^φ̂. We also use diam to denote the diameter of a subset of ℝ.
###### Proposition 3.4.
Fix α > 0 and x_0 ∈ [0,1], suppose φ̂ is a statistic with the property that sup R_α(x) is a continuous, increasing function of x, and set y = sup R_α(x_0). The following two statements hold.
1. If the diameter of I_α(y) is at most ε, then the uncertain set of K_α^{x_0} has diameter at most ε.
2. If the uncertain set of K_α^{x_0} has diameter at most ε, then the diameter of I_α(y) is at most ε.
An illustration of this proposition and its proof is shown in Figure 3.
###### Proof.
Let T denote the statistical test corresponding to K_α^{x_0}. We first determine: what is the critical value of T? By definition, it is the smallest critical value that gives a type I error of at most α. In other words, it is the supremum, over all Z with Φ(Z) = x_0, of the (1 − α)-percentile of the sampling distribution of φ̂ when applied to a sample of size n from Z. But this is simply sup R_α(x_0).
We now prove the proposition by proving each of the two statements separately.
Proof of the first statement: Let U denote the uncertain set of K_α^{x_0}, and recall that y = sup R_α(x_0). Since y ∈ R_α(x_0), we know that x_0 ∈ I_α(y), and so inf I_α(y) ≤ x_0 = inf U. It therefore suffices to show that sup U = sup I_α(y).
We first show that sup U ≥ sup I_α(y): since I_α(y) is the smallest closed interval containing {x : y ∈ R_α(x)}, we know that we can find x arbitrarily close to sup I_α(y) from below such that y ∈ R_α(x). But this means that there exists some Z with Φ(Z) = x such that if D_x is a sample of size n from Z then

P(φ̂(D_x) < y) > α

i.e.,

K_α^{x_0}(x) = P(φ̂(D_x) ≥ y) < 1 − α

and so x ∈ U.
We next show that sup U ≤ sup I_α(y). To do so, we will need the following fact: since sup R_α is continuous, the set {x : y ∈ R_α(x)} is closed, and because it is bounded this means that its supremum is actually a member of it. In other words, y ∈ R_α(sup I_α(y)). It is easy to similarly show, using the continuity and invertibility of sup R_α, that the infimum of the set is a member of it as well.
To show that sup U ≤ sup I_α(y), we now observe that for any x > sup I_α(y), since x ∉ I_α(y), we know that y ∉ R_α(x). This is either because y > sup R_α(x) or because y < inf R_α(x). However, since sup R_α is an increasing function and x > x_0, no such x can have y = sup R_α(x_0) > sup R_α(x). Thus the only option remaining is that y < inf R_α(x), which gives that if D_x is a sample of size n from any Z with Φ(Z) = x, we will have

P(φ̂(D_x) < y) < α

But, as we've shown, the critical value of the test in question is sup R_α(x_0), which equals y. We therefore have that

K_α^{x_0}(x) = P(φ̂(D_x) ≥ y) ≥ 1 − α

which implies that x is not contained in U, as desired.
Proof of the second statement: We again let U denote the uncertain set of K_α^{x_0}, and again set y = sup R_α(x_0). What are the infimum and supremum of I_α(y)? To answer this, we note once again that y ∈ R_α(x_0) implies that x_0 ∈ I_α(y), and moreover, since K_α^{x_0}(x_0) ≤ α, we also have that x_0 ∈ U.
To prove our claim, we will establish that inf I_α(y) = inf U and that sup I_α(y) ≤ sup U. The fact that inf I_α(y) = inf U = x_0 follows easily from y = sup R_α(x_0) and the fact that sup R_α is an increasing function. It is therefore left only to show the latter claim.
To establish that sup I_α(y) ≤ sup U, let us first show that U contains points arbitrarily close to sup I_α(y). We know that since I_α(y) is the smallest closed interval containing {x : y ∈ R_α(x)}, we can find x arbitrarily close to sup I_α(y) from below such that y ∈ R_α(x). Again, since the critical value of the test in question is sup R_α(x_0) = y, this means that there exists some Z with Φ(Z) = x such that if D is a sample of size n from Z then

K_α^{x_0}(x) = P(φ̂(D) ≥ y) < 1 − α

i.e.,

P(φ̂(D) < y) > α

and so x ∈ U. This means that sup U ≥ sup I_α(y).
To show that sup U ≤ sup I_α(y), we observe that for x > sup I_α(y) we must have y < inf R_α(x). Since the critical value of the test is y, this implies that if D is a sample of size n from any Z with Φ(Z) = x, then

K_α^{x_0}(x) = P(φ̂(D) ≥ y) ≥ 1 − α

i.e.,

P(φ̂(D) < y) < α

In other words, x ∉ U for any x > sup I_α(y), as desired. ∎
### 3.3 Discussion
What does the above result tell us about equitability? The first consequence of it is the following formal definition of equitability/interpretability in terms of statistical power, which we present without proof.
###### Theorem 3.5.
Fix a set Q, and a function Φ : Q → [0,1]. Let φ̂ be a statistic with the property that sup R_α(x) is a continuous increasing function of x, and fix some ε > 0 and some α > 0. Then the following are equivalent:
1. φ̂ is a worst-case ε-interpretable proxy for Φ with confidence 1 − 2α.
2. For every x_0 and x satisfying x − x_0 > ε, there exists a right-tailed test based on φ̂ that can distinguish between Φ(Z) = x_0 and Φ(Z) = x with type I error at most α and power at least 1 − α.
This definition shows what the concept of equitability/interpretability is fundamentally about: being able to distinguish not just signal (Φ(Z) > 0) from no signal (Φ(Z) = 0) but also stronger signal (Φ(Z) = x) from weaker signal (Φ(Z) = x_0). This is the essence of the difference between equitability/interpretability and power against statistical independence.
The definition also shows that equitability and interpretability — to the extent they can be achieved — subsume power against independence. To see this, suppose again that Φ(Z) = 0 exactly when Z exhibits statistical independence. By setting x_0 = 0 in the definition, we obtain the following corollary.
###### Corollary 3.6.
Fix a set Q, a function Φ : Q → [0,1] such that Φ(Z) = 0 iff Z exhibits statistical independence, and some α > 0. Let φ̂ be a worst-case ε-interpretable proxy for Φ with confidence 1 − 2α, and assume that sup R_α(x) is a continuous increasing function. The power of the right-tailed test based on φ̂ at distinguishing Φ(Z) > ε from statistical independence with type I error at most α is at least 1 − α.
In other words, equitability/interpretability implies power against independence. However, equitability/interpretability is actually a stronger requirement: as the theorem shows, to be interpretable a statistic must yield a right-tailed test that is well-powered not only to detect deviations from independence (Φ(Z) > 0) but also from any fixed level of “noisiness” (e.g., Φ(Z) = x_0 for some x_0 > 0). This indeed makes sense when a data set contains an overwhelming number of relationships that exhibit, say, Φ(Z) ≤ x_0 and that we would like to ignore because they are not as interesting as the small number of relationships with Φ(Z) > x_0.
It is our hope that by formalizing the relationship between equitability and power against independence, our equivalence result will clarify the differences between these two properties, thereby addressing some of the concerns raised about the power of MIC against statistical independence ([22] and [27]). We of course do agree that power against independence is a very important goal that is often the right one, and if all other things are equal more power is certainly always better. To this end, we have worked to greatly enhance MIC’s power, both through better choice of parameters and through use of the estimators introduced later in this paper, to the point where it is often competitive with the state of the art. (The results of this work are forthcoming in the companion paper.) However, we also think that limiting one’s analysis of MIC to power against statistical independence alone is not the right way to think about its utility.
For example, in [22], Simon and Tibshirani write “The ‘equitability’ property of MIC is not very useful, if it has low power”. However, as the result described above shows, the question is “power against what?”. If one is interested only in power against statistical independence (e.g., R² = 0 in the setting of functional relationships), then choosing a statistic based solely on this property is the correct way to proceed. However, when the relationships in a dataset that exhibit non-trivial statistical dependence number in the hundreds of thousands, it often becomes necessary to be more stringent in deciding which of them to manually examine. As our result in this section shows, this can be thought of as defining one's null hypothesis to be Φ(Z) = x_0 for some x_0 > 0. In such a case, the statistic is not being used to identify any instance of dependence, but rather to identify any instance of dependence of a certain minimal strength. In other words, when used on relationships in Q, an equitable statistic is a measure of effect size rather than a statistical test, and as with other measures of effect size, analyzing its power against only one null hypothesis (that of statistical independence alone) is therefore inappropriate.
Of course, when the relationships being sought in a dataset are expected to be very noisy, the above paradigm does not make sense and it is quite reasonable to ignore equitability and seek a statistic that maximizes power specifically against statistical independence. This issue, along with a broader discussion of when equitability is an appropriate desideratum, is discussed in more detail in the upcoming companion paper. From a theoretical standpoint, our result here simply formalizes the notion that these concepts, while distinct, are related, and shows that the former — to the extent that it can be achieved — implies the latter.
## 4 MIC and the MINE statistics as consistent estimators
MIC is defined as the maximal element of a matrix called the characteristic matrix. However, both of these quantities are defined in [3] as statistics rather than as properties of distributions that can then be estimated from samples. Here we define the quantities that these two statistics turn out to estimate, and we prove that they do so. Thinking about these statistics as consistent estimators and then analyzing their behavior in the infinite-data limit subsumes and strengthens several previous results about MIC, gives a better interpretation of the parameters in the definition of MIC, clarifies the relationship of MIC to other measures of dependence, especially mutual information, and allows us to introduce new, better estimators that have improved performance.
In this section, we focus on introducing the population value of MIC (which we will call MIC*) and proving that MIC is a consistent estimator of it, and then give a discussion of some immediate consequences of this approach. Subsequent sections of the paper are devoted to analyzing MIC* and stating new estimators of it.
### 4.1 Definitions
We begin by defining the characteristic matrix as a property of the distribution of two jointly distributed random variables rather than as a statistic. In the sequel we will use G(k,ℓ) to denote, for positive integers k and ℓ, the set of all k-by-ℓ grids (possibly with empty rows/columns).
###### Definition 4.1.
Let (X,Y) be jointly distributed random variables on ℝ². For a grid G, let (X,Y)|G = (col_G(X), row_G(Y)), where col_G(X) is the column of G containing X and row_G(Y) is analogously defined. Let

I*((X,Y), k, ℓ) = max over G ∈ G(k,ℓ) of I((X,Y)|G)

where I((X,Y)|G) represents the mutual information of col_G(X) and row_G(Y). The population characteristic matrix of (X,Y), denoted by M(X,Y), is defined by

M(X,Y)_{k,ℓ} = I*((X,Y), k, ℓ) / log min{k, ℓ}

for k, ℓ > 1.
Note that in the above definition, I refers to mutual information (see, e.g., [28] and [29]), not to an interpretable interval as in the previous sections.
The characteristic matrix is so named because in [3] it was hypothesized that this matrix has a characteristic shape for different relationship types, such that different properties of this matrix may correspond to different properties of relationships. One such property was the maximal value of the matrix. This is called the maximal information coefficient (MIC), and is defined below.
###### Definition 4.2.
Let (X,Y) be jointly distributed random variables on ℝ². The population maximal information coefficient (MIC*) of (X,Y) is defined by

MIC*(X,Y) = sup M(X,Y)
We now define the corresponding statistics introduced in [3].
###### Remark 4.3.
In the rest of this paper, we will sometimes have a sample D from the distribution of (X,Y) rather than the distribution itself. We will abuse notation by using D to refer both to the set of points that is the sample, as well as to the uniform distribution over those points. In the latter case, it will then make sense to talk about M(D), as we are about to do below.
###### Definition 4.4.
Let D be a finite set of ordered pairs. Given a function B : ℕ → ℕ, we define the sample characteristic matrix of D to be

M̂_B(D)_{k,ℓ} = I*(D, k, ℓ) / log min{k, ℓ} if kℓ ≤ B(|D|), and 0 if kℓ > B(|D|).
###### Definition 4.5.
Let D be a finite set of ordered pairs, and let B : ℕ → ℕ. We define

MIC_B(D) = max M̂_B(D)
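The definitions above can be made concrete with a brute-force sketch. The grid search below only considers cuts at midpoints between sorted sample values and is exponential in k and ℓ, so it is feasible only for tiny samples; it is an illustration of the definitions, not the efficient algorithm used in practice.

```python
import itertools
import math
from collections import Counter

def discretize(vals, cuts):
    """Bin index of each value given an increasing tuple of cut positions."""
    return [sum(v > c for c in cuts) for v in vals]

def mutual_information(rows, cols):
    """Empirical mutual information (in nats) of two discrete sequences."""
    n = len(rows)
    joint, pr, pc = Counter(zip(rows, cols)), Counter(rows), Counter(cols)
    return sum((c / n) * math.log((c / n) / ((pr[r] / n) * (pc[s] / n)))
               for (r, s), c in joint.items())

def I_star(xs, ys, k, l):
    """Brute-force I*(D, k, l): maximize I(D|G) over k-by-l grids whose
    cuts lie at midpoints between sorted sample values."""
    xc, yc = sorted(set(xs)), sorted(set(ys))
    xmids = [(a + b) / 2 for a, b in zip(xc, xc[1:])]
    ymids = [(a + b) / 2 for a, b in zip(yc, yc[1:])]
    return max(mutual_information(discretize(ys, ycut), discretize(xs, xcut))
               for xcut in itertools.combinations(xmids, k - 1)
               for ycut in itertools.combinations(ymids, l - 1))

def mic_B(xs, ys, B):
    """MIC_B(D): maximum normalized entry of the sample characteristic matrix."""
    n = len(xs)
    return max(I_star(xs, ys, k, l) / math.log(min(k, l))
               for k in range(2, n + 1) for l in range(2, n + 1) if k * l <= B)

# A tiny noiseless monotone sample: a perfectly aligned 2-by-2 grid exists,
# so the normalized score reaches its maximum of 1.
xs = [0.1, 0.3, 0.7, 0.9]
ys = [x**2 for x in xs]
print(mic_B(xs, ys, B=4))  # -> 1.0
```

Restricting to midpoint cuts loses no generality for the empirical distribution, since moving a cut between two consecutive sample values does not change the cell counts.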
In [3], other characteristic matrix properties were introduced as well (e.g. maximum asymmetry score [MAS], maximum edge value [MEV], etc.). These can be analogously presented as functions of random variables together with a corresponding statistic for each property.
### 4.2 The main consistency result
We now show that the statistic MIC defined above is in fact a consistent estimator of MIC*. This is a consequence of the following more general result, which will be the main theorem of this section. In the theorem statement below, we let ℓ^∞ be the space of infinite matrices equipped with the supremum norm, and we let r_i denote the projection

r_i(A)_{k,ℓ} = A_{k,ℓ} if kℓ ≤ i, and 0 if kℓ > i
###### Theorem.
Let f : ℓ^∞ → ℝ be uniformly continuous, and assume that f ∘ r_i → f pointwise. Then for every random variable (X,Y) supported on [0,1]², the statistic f(M̂_B(D)) is a consistent estimator of f(M(X,Y)) provided B(n) = ω(1) and B(n) = O(n^{1−ε}) for some ε > 0.
Since the supremum of a matrix is uniformly continuous as a function on ℓ^∞ and can be realized as the limit of maxima of larger and larger segments of the matrix, this theorem gives us the following corollary.
###### Corollary 4.6.
MIC_B is a consistent estimator of MIC* provided B(n) = ω(1) and B(n) = O(n^{1−ε}) for some ε > 0.
It is easily verified that analogous corollaries also hold for the statistics MAS_B and MEV_B defined in [3] (and referred to there simply as MAS and MEV). Interestingly, it is unclear a priori whether such a result exists for MCN, since that statistic is not a uniformly continuous function of the sample characteristic matrix.
Before we prove this theorem, we will first give some intuition for why it should hold, and also for why it is non-trivial to prove. We then present the general strategy for the proof before giving the proof itself.
#### 4.2.1 Intuition
Fix a random variable (X,Y) and let D be a sample of size n from its distribution. It is known that, for a fixed grid G, I(D|G) is a consistent estimator of I((X,Y)|G) [30]. We might therefore expect I*(D,k,ℓ) to be a consistent estimator of I*((X,Y),k,ℓ) as well. And if I*(D,k,ℓ) is a consistent estimator of I*((X,Y),k,ℓ), then we might expect the maximum of the sample characteristic matrix (which just consists of normalized I*(D,k,ℓ) terms) to be a consistent estimator of the supremum of the true characteristic matrix.
These intuitions turn out to be true, but there are two reasons they are non-trivial to prove. First, consistency for I*(D,k,ℓ) does not follow from abstract considerations since the maximum of an infinite set of estimators is not necessarily a consistent estimator of the supremum of the estimands555If the set of estimators is finite, then a union bound shows that the vector of estimators converges in probability to the vector of estimands with respect to the supremum metric. The continuous mapping theorem then gives the desired result. However, if the set of estimators is infinite, the union bound cannot be employed; indeed, one can construct an infinite family of estimators, each individually consistent, whose supremum does not converge to the supremum of the estimands.. Second, consistency of I*(D,k,ℓ) alone does not suffice to show that the maximum of the sample characteristic matrix converges to MIC*. In particular, if B(n) grows too quickly, and the convergence of I*(D,k,ℓ) to I*((X,Y),k,ℓ) is slow, inflated values of MIC can result. To see this, notice that if B(n) ≥ 2n then MIC_B(D) = 1 always (for samples with distinct values), even though each individual entry of the sample characteristic matrix converges to its true value eventually.
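The inflation phenomenon behind the second obstacle can be seen directly: on pure noise, a 2-by-n grid that gives every point its own column makes the column determine the row, so the normalized score hits 1 despite independence. The sketch below assumes distinct sample values.

```python
import math
import random
from collections import Counter

def mutual_information(rows, cols):
    """Empirical mutual information (in nats) of two discrete sequences."""
    n = len(rows)
    joint, pr, pc = Counter(zip(rows, cols)), Counter(rows), Counter(cols)
    return sum((c / n) * math.log((c / n) / ((pr[r] / n) * (pc[s] / n)))
               for (r, s), c in joint.items())

random.seed(0)
n = 20
xs = [random.random() for _ in range(n)]
ys = [random.random() for _ in range(n)]  # X and Y independent: pure noise

# 2-by-n grid: every point in its own column, rows split at the median of y.
# The column then determines the row, so I(D|G) = H(rows) = log 2 when the
# rows are balanced, and I(D|G) / log min{2, n} = 1 despite independence.
order = sorted(range(n), key=lambda i: xs[i])
cols = [order.index(i) for i in range(n)]          # rank of x = own column
median_y = sorted(ys)[n // 2]
rows = [int(y >= median_y) for y in ys]            # balanced 2-row split
print(mutual_information(rows, cols) / math.log(2))  # close to 1.0
```

Such a grid has 2n cells, so it enters the maximization as soon as B(n) ≥ 2n; keeping B(n) well below n is what rules this degenerate behavior out.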
The technical heart of the proof is overcoming these obstacles by using the dependences between the quantities I(D|G) for different grids G to not only show the consistency of I*(D,k,ℓ) but then to quantify how quickly I*(D,k,ℓ) actually converges to I*((X,Y),k,ℓ).
#### 4.2.2 Proof strategy
We will prove the theorem by a sequence of lemmas that build on each other to bound the bias of I*(D,k,ℓ). The general strategy is to capture the dependencies between different k-by-ℓ grids by considering a “master grid” Γ that contains many more than kℓ cells. Given this master grid, we first bound the difference between I(D|G) and I((X,Y)|G) only for sub-grids G of Γ. The bound will be in terms of the difference between the distribution of D and the distribution of (X,Y) over the cells of Γ. We then show that this bound can be extended without too much loss to all k-by-ℓ grids. This will give us what we seek, because then the differences between I(D|G) and I((X,Y)|G) will be uniformly bounded for all grids in terms of the same random variable: the discrepancy between D and (X,Y) on Γ. Once this is done, standard arguments will give us the consistency we seek.
#### 4.2.3 The proof
The proof of this result will sometimes require technical facts about entropy and mutual information that are self-contained and unrelated to the central idea behind our argument. These lemmas are consolidated in Appendix 10.
We begin by using one of these technical lemmas to prove a bound on the difference between I(Ψ|G) and I(Π|G) that is uniform over all grids G that are sub-grids of a much denser grid Γ. The common structure imposed by Γ will allow us to capture the dependence between the quantities I(·|G) for different grids G.
###### Lemma 4.7.
Let Ψ and Π be random variables distributed over the cells of a grid Γ, and let ψ and π be their respective distributions. Define

ε_{i,j} = (ψ_{i,j} − π_{i,j}) / π_{i,j}

Let G be a sub-grid of Γ with at most B cells. Then there exist a constant A and some ε₀ > 0 such that

|I(Ψ|G) − I(Π|G)| ≤ (log B) A Σ_{i,j} |ε_{i,j}|

when |ε_{i,j}| ≤ ε₀ for all i, j.
###### Proof.
Let Q and P be the random variables induced by Ψ and Π respectively on the cells of G. Using the fact that I(Q) = H(Q_X) + H(Q_Y) − H(Q), we write

|I(Q) − I(P)| ≤ |H(Q_X) − H(P_X)| + |H(Q_Y) − H(P_Y)| + |H(Q) − H(P)|

where Q_X and P_X denote the marginal distributions on the columns of G and Q_Y and P_Y denote the marginal distributions on the rows. We can bound each of the above terms using a Taylor expansion argument given in Lemma A.1, whose proof is found in the appendix. Doing so gives a bound of

(ln B) ( Σ_i O(|ε_{i,*}|) + Σ_j O(|ε_{*,j}|) + Σ_{i,j} O(|ε_{i,j}|) )

where

ε_{i,*} = Σ_j (ψ_{i,j} − π_{i,j}) / Σ_j π_{i,j}

and ε_{*,j} is defined analogously.
To obtain the result, we observe that

|ε_{i,*}| ≤ Σ_j |ε_{i,j}|

since π_{i,j} ≤ Σ_j π_{i,j} for every j, and the analogous bound holds for ε_{*,j}. ∎
We now extend Lemma 4.7 to all grids with kℓ cells rather than just those that are sub-grids of the master grid Γ. It is useful at this point to recall that, given a distribution, an equipartition of it is a grid Γ such that all the rows of Γ have the same probability mass, and all the columns do as well.
###### Lemma 4.8.
Let Ψ and Π be random variables distributed over the cells of a grid Γ, and define ψ, π, and ε_{i,j} on Ψ and Π as in Lemma 4.7. Let G be any k-by-ℓ grid, and let Δ_Ψ (resp. Δ_Π) represent the total probability mass of Ψ (resp. Π) falling in cells of G that are not contained in individual cells of Γ. We have that
provided that the |ε_{i,j}| are bounded away from 1 and that .
###### Proof.
In the proof below, we use the convention that for any two grids and and any distribution , the expression denotes . In addition, we refer to any horizontal or vertical line in that is not in as a dissonant line of .
Consider the grid G′ obtained by adding to G the two lines in Γ that surround each dissonant line of G and then removing all the dissonant lines of G. This grid G′ is clearly a sub-grid of Γ. And in Lemma A.4, whose proof we defer to the appendix, we do some careful accounting to show that G′ has the property that
ΔΨ(
http://www.lse.ac.uk/Statistics/Events/Joint-Risk-Stochastics-and-Financial-Mathematics-seminar-series
Joint Risk & Stochastics and Financial Mathematics seminar series
One should really think of Statistics as a discipline which can be used to support other disciplines
The series aims to promote communication and discussion of research in the mathematics of insurance and finance and their interface, to encourage interaction between practice and theory in these areas, and to support academic students in related programmes at postgraduate level.
LSE maps and directions
All are welcome to attend these seminars. If you are attending from outside LSE please notify statistics@lse.ac.uk, so that we can ensure you have access to the seminar room.
10/10/2019 - Simone Scotti
Thursday 10th October 2019
32L.LG.03 (32 Lincoln’s Inn Fields) 12pm to 1pm
Simone Scotti (Université Paris Diderot)
Title: Alpha-Heston stochastic volatility model
We introduce an affine extension of the Heston model where the instantaneous variance process contains a jump part driven by $\alpha$-stable processes with $\alpha\in(1,2]$. In this framework, we examine the implied volatility and its asymptotic behaviors for both asset and variance options. In particular, we show that the behavior of the stock implied volatility at extreme strikes is the sharpest coherent with theoretical bounds, independently of the value of $\alpha\in(1,2)$. As far as volatility options are concerned, VIX-implied volatility is characterized by an upward-sloping behavior, and the slope grows as $\alpha$ decreases.
Furthermore, we examine the jump clustering phenomenon observed on the variance market and provide a jump cluster decomposition which allows one to analyse the cluster processes. The variance process can be split into a basis process, without large jumps, and a sum of jump cluster processes, with explicit equations for both terms. We show that each cluster process is induced by a first "mother" jump giving birth to a sequence of "child" jumps. We first obtain a closed form for the total number of clusters in a given period.
Moreover, each cluster process satisfies the same $\alpha$-CIR evolution as the variance process, except that the long-term mean coefficient takes the value $0$. We show that each cluster process reaches $0$ in finite time and we exhibit a closed form for its expected lifetime. We study the dependence of the number and the duration of clusters as functions of the parameter $\alpha$ and the threshold used to split large and small jumps.
Joint work with Ying Jiao, Chunhua Ma and Chao Zhou
24/10/2019 - Eugene Feinberg
Thursday 24th October 2019
32L.LG.03 (32 Lincoln’s Inn Fields) 12pm to 1pm
Eugene Feinberg (Stony Brook University)
Title: Solutions for Zero-Sum Two-Player Games with Noncompact Decision Sets
The classic theory of infinite zero-sum two-player games has been developed under the assumptions that either the decision set of at least one of the players is compact or some convexity/concavity conditions hold. In this talk we describe sufficient conditions for the existence of solutions for two-person zero-sum games with possibly noncompact decision sets for both players and the structure of the solution sets under these conditions. Payoff functions may be unbounded, and we do not assume any convexity/concavity-type conditions. For such games expected payoffs may not exist for some pairs of strategies. These results imply several classic facts, and they are illustrated with the number guessing game. We also describe sufficient conditions for the existence of a value and solutions for each player.
The talk is based on joint papers with Pavlo O. Kasyanov and Michael Z.
07/11/2019 - Stefano Duca and Philip Gradwell
Thursday 7th November 2019
32L.LG.03 (32 Lincoln’s Inn Fields) 12pm to 1pm
Stefano Duca and Philip Gradwell (Chainalysis)
Title: Cryptocurrencies: what the data tells us about a new financial market
Abstract: Cryptocurrencies have generated much hype and controversy, but they have also generated vast amounts of financial data. Not only are they traded on exchanges, via spot and derivatives, but they are also transacted on the blockchain. This potentially allows for detailed analysis of this new financial market. However, interpretation of the data is complex due to the pseudo-anonymity of blockchain transactions and the immaturity of markets. Chainalysis, the leading blockchain analytics company, will describe the state of cryptocurrency data, their latest understanding of the crypto-economy, and frame the open questions for a debate on the frontiers of cryptocurrency research.
21/11/2019 - Francesco Russo
Thursday 21st November 2019
32L.LG.03 (32 Lincoln’s Inn Fields) 12pm to 1pm
(NOTE: change of day and venue)
Francesco Russo (ENSTA Paris, Institut Polytechnique de Paris)
Title: BSDEs, Decoupled Mild Solutions of Associated Deterministic Equations and Applications to Hedging Under Basis Risk
The full abstract can be found here.
05/12/2019 - Renyuan Xu
Thursday 5th December 2019
32L.LG.03 (32 Lincoln’s Inn Fields) 12pm to 1pm
Renyuan Xu (University of Oxford)
Title and abstract - to be confirmed.
https://www.physicsforums.com/threads/how-could-you-use-the-implicit-function-theorem-to-prove-something-like-this.329164/
# How could you use the implicit function theorem to prove something like this?
1. Aug 5, 2009
### AxiomOfChoice
Suppose you know that a function $g(x,y,z)$ has a unique, non-degenerate minimum at some point $(x_0,y_0,z_0)$. How could you go about using the implicit function theorem to prove that $f(x,y,z) = g(x,y,z) + C h(x,y,z)$, where $C$ is some constant, has a minimum at some point $(x_c,y_c,z_c)$? Could we further show that $(x_c,y_c,z_c)$ would need to be close to $(x_0,y_0,z_0)$?
I realize that I've left so many things unspecified that a lot of details will have to be omitted. But could you walk me through the basic program for proving something like this (e.g., things I'd have to show about the function $h$ or the constant $C$)?
2. Aug 5, 2009
### g_edgar
I suppose "non-degenerate" means the Hessian is positive definite? Then you need to diagonalize Hessian matrix and take C related to its eigenvalues.
3. Aug 5, 2009
### AxiomOfChoice
Hehe...yeah, that's the thing...I'm not really sure what a "non-degenerate" minimum is. Is that kind of an accepted definition for non-degenerate minimum?
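A numerical sketch of the idea in the replies above — not a proof via the implicit function theorem, just an illustration that a non-degenerate minimum survives a small smooth perturbation. The functions `g` and `h`, the constant `C`, and the gradient-descent settings below are all made-up choices for illustration:

```python
import math

# Hypothetical example: g has a unique non-degenerate minimum at the origin
# (its Hessian, 2*I, is positive definite), and h is a smooth perturbation.
def g(x, y, z):
    return x**2 + y**2 + z**2

def h(x, y, z):
    return math.sin(x) + math.cos(y) * z

def grad_f(x, y, z, C):
    # Gradient of f = g + C*h, computed analytically.
    return (2*x + C*math.cos(x),
            2*y - C*math.sin(y)*z,
            2*z + C*math.cos(y))

def find_min(C, steps=20000, lr=0.01):
    # Simple gradient descent started at the unperturbed minimum (0, 0, 0).
    x = y = z = 0.0
    for _ in range(steps):
        gx, gy, gz = grad_f(x, y, z, C)
        x, y, z = x - lr*gx, y - lr*gy, z - lr*gz
    return x, y, z

xc, yc, zc = find_min(C=0.1)
# Distance from the perturbed minimizer to the unperturbed one (the origin);
# for small C it is of order C, consistent with a continuity argument.
dist = math.sqrt(xc**2 + yc**2 + zc**2)
```

With `C = 0.1` the minimizer lands close to the origin, and shrinking `C` pulls it in further — the behaviour one would expect to extract rigorously from the implicit function theorem applied to the gradient equation.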
https://www.ademcetinkaya.com/2023/02/bbw-build-bear-workshop-inc-common-stock.html
Outlook: Build-A-Bear Workshop Inc. Common Stock is assigned short-term Ba1 & long-term Ba1 estimated rating.
Dominant Strategy : Hold
Time series to forecast n: 25 Feb 2023 for (n+3 month)
Methodology : Modular Neural Network (Market News Sentiment Analysis)
## Abstract
Build-A-Bear Workshop Inc. Common Stock prediction model is evaluated with Modular Neural Network (Market News Sentiment Analysis) and Stepwise Regression1,2,3,4 and it is concluded that the BBW stock is predictable in the short/long term. According to price forecasts for (n+3 month) period, the dominant strategy among neural network is: Hold
## Key Points
1. What are buy sell or hold recommendations?
2. How do you know when a stock will go up or down?
3. What is Markov decision process in reinforcement learning?
## BBW Target Price Prediction Modeling Methodology
We consider Build-A-Bear Workshop Inc. Common Stock Decision Process with Modular Neural Network (Market News Sentiment Analysis) where A is the set of discrete actions of BBW stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4
F(Stepwise Regression)$^{5,6,7}$ = $\begin{pmatrix} p_{a1} & p_{a2} & \dots & p_{1n} \\ & \vdots & & \\ p_{j1} & p_{j2} & \dots & p_{jn} \\ & \vdots & & \\ p_{k1} & p_{k2} & \dots & p_{kn} \\ & \vdots & & \\ p_{n1} & p_{n2} & \dots & p_{nn} \end{pmatrix}$ × R(Modular Neural Network (Market News Sentiment Analysis)) × S(n) → (n+3 month), where $R=\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}$.
n:Time series to forecast
p:Price signals of BBW stock
j:Nash equilibria (Neural Network)
k:Dominated move
a:Best response for target price
For further technical information as per how our model work we invite you to visit the article below:
How do AC Investment Research machine learning (predictive) algorithms actually work?
## BBW Stock Forecast (Buy or Sell) for (n+3 month)
Sample Set: Neural Network
Stock/Index: BBW Build-A-Bear Workshop Inc. Common Stock
Time series to forecast n: 25 Feb 2023 for (n+3 month)
According to price forecasts for (n+3 month) period, the dominant strategy among neural network is: Hold
X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.)
Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.)
Z axis (Grey to Black): *Technical Analysis%
## IFRS Reconciliation Adjustments for Build-A-Bear Workshop Inc. Common Stock
1. The requirements in paragraphs 6.8.4–6.8.8 may cease to apply at different times. Therefore, in applying paragraph 6.9.1, an entity may be required to amend the formal designation of its hedging relationships at different times, or may be required to amend the formal designation of a hedging relationship more than once. When, and only when, such a change is made to the hedge designation, an entity shall apply paragraphs 6.9.7–6.9.12 as applicable. An entity also shall apply paragraph 6.5.8 (for a fair value hedge) or paragraph 6.5.11 (for a cash flow hedge) to account for any changes in the fair value of the hedged item or the hedging instrument.
2. Such designation may be used whether paragraph 4.3.3 requires the embedded derivatives to be separated from the host contract or prohibits such separation. However, paragraph 4.3.5 would not justify designating the hybrid contract as at fair value through profit or loss in the cases set out in paragraph 4.3.5(a) and (b) because doing so would not reduce complexity or increase reliability.
3. A firm commitment to acquire a business in a business combination cannot be a hedged item, except for foreign currency risk, because the other risks being hedged cannot be specifically identified and measured. Those other risks are general business risks.
4. Although the objective of an entity's business model may be to hold financial assets in order to collect contractual cash flows, the entity need not hold all of those instruments until maturity. Thus an entity's business model can be to hold financial assets to collect contractual cash flows even when sales of financial assets occur or are expected to occur in the future.
*International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, neural network makes adjustments to financial statements to bring them into compliance with the IFRS.
## Conclusions
Build-A-Bear Workshop Inc. Common Stock is assigned short-term Ba1 & long-term Ba1 estimated rating. Build-A-Bear Workshop Inc. Common Stock prediction model is evaluated with Modular Neural Network (Market News Sentiment Analysis) and Stepwise Regression1,2,3,4 and it is concluded that the BBW stock is predictable in the short/long term. According to price forecasts for (n+3 month) period, the dominant strategy among neural network is: Hold
### BBW Build-A-Bear Workshop Inc. Common Stock Financial Analysis*
Rating                             Short-Term  Long-Term Senior
Outlook*                           Ba1         Ba1
Income Statement                   Caa2        Baa2
Balance Sheet                      Baa2        B2
Leverage Ratios                    Caa2        Ba2
Cash Flow                          C           B2
Rates of Return and Profitability  Baa2        B3
*Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents.
How does neural network examine financial reports and understand financial state of the company?
### Prediction Confidence Score
Trust metric by Neural Network: 87 out of 100 with 787 signals.
## References
1. Çetinkaya, A., Zhang, Y.Z., Hao, Y.M. and Ma, X.Y., Trading Signals (WTS Stock Forecast). AC Investment Research Journal, 101(3).
2. M. Colby, T. Duchow-Pressley, J. J. Chung, and K. Tumer. Local approximation of difference evaluation functions. In Proceedings of the Fifteenth International Joint Conference on Autonomous Agents and Multiagent Systems, Singapore, May 2016
3. Mikolov T, Sutskever I, Chen K, Corrado GS, Dean J. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, Vol. 26, ed. Z Ghahramani, M Welling, C Cortes, ND Lawrence, KQ Weinberger, pp. 3111–19. San Diego, CA: Neural Inf. Process. Syst. Found.
4. G. J. Laurent, L. Matignon, and N. L. Fort-Piat. The world of independent learners is not Markovian. Int. J. Know.-Based Intell. Eng. Syst., 15(1):55–64, 2011
5. Barkan O. 2016. Bayesian neural word embedding. arXiv:1603.06571 [math.ST]
6. Athey S, Bayati M, Doudchenko N, Imbens G, Khosravi K. 2017a. Matrix completion methods for causal panel data models. arXiv:1710.10251 [math.ST]
7. P. Marbach. Simulated-Based Methods for Markov Decision Processes. PhD thesis, Massachusetts Institute of Technology, 1998
## Frequently Asked Questions
Q: What is the prediction methodology for BBW stock?
A: BBW stock prediction methodology: We evaluate the prediction models Modular Neural Network (Market News Sentiment Analysis) and Stepwise Regression
Q: Is BBW stock a buy or sell?
A: The dominant strategy among neural network is to Hold BBW Stock.
Q: Is Build-A-Bear Workshop Inc. Common Stock stock a good investment?
A: The consensus rating for Build-A-Bear Workshop Inc. Common Stock is Hold and is assigned short-term Ba1 & long-term Ba1 estimated rating.
Q: What is the consensus rating of BBW stock?
A: The consensus rating for BBW is Hold.
Q: What is the prediction period for BBW stock?
A: The prediction period for BBW is (n+3 month)
https://api.queryxchange.com/q/20_428252/constant-conditional-variance-does-not-imply-constant-conditional-mean/
# Constant conditional variance does not imply constant conditional mean?
by Syd Amerikaner Last Updated September 22, 2019 21:19 PM
Let $$y = X\beta + u$$ be a regression model. If we assume $$\mathbb V[u|X] = \sigma^2$$ then does this imply $$\mathbb E[u|X] = c$$? Clearly, $$\mathbb V[u|X] = \mathbb E[uu'|X] - \mathbb E[u|X]\mathbb E[u|X]'$$, so $$\mathbb E[uu'|X] = \sigma^2 + \mathbb E[u|X]\mathbb E[u|X]'$$. Since both, $$\mathbb E[uu'|X]$$ and $$\mathbb E[u|X]\mathbb E[u|X]'$$ can vary, $$\mathbb E[u|X]$$ need not be constant. But what would be a counter example?
If we assume iid observations and only a single regressor, it suffices to regard a single $$u_i$$. So, $$E[u_i^2|x_i] = \sigma^2 + \mathbb E[u_i|x_i]^2$$. If we take the derivative with respect to $$x_i$$, we find $$D\mathbb E[u_i^2|x_i] = 2\,\mathbb E[u_i|x_i]\,D\mathbb E[u_i|x_i]$$, so I thought about assuming $$\mathbb E[u_i|x_i] = 0.5x_i$$ and $$\mathbb E[u^2_i|x_i] = x_i$$. Is this reasoning correct or do I miss a point?
#### Answers 1
$$\mathbb E[u|X]$$ can be whatever you want. The variance assumption alone does not imply it must be anything. Most commonly, $$\mathbb E[u|X]$$ is assumed to be $$0$$. If this was not done, and instead you assumed it was equal to some parameter to be estimated, like $$c$$, then all of the parameters would probably not be identifiable, and they would be less interpretable.
If you assume $$\mathbb E[u_i|x_i] = 0.5x_i$$ then you can write your model as $$y_i = \beta_0 + \beta_1 x_i + (.5x_i + \sigma z_i)$$ where $$z_i$$ is a standard normal variate. The parameters $$(\beta_0, \beta_1, .5)$$ yield the same likelihood as $$(\beta_0, \beta_1 + .5, 0)$$. You are not estimating $$.5$$, so technically your model is still identifiable; but I don't see any advantage to be gained from not following the convention of assuming your noise is mean zero.
Taylor
September 22, 2019 20:49 PM
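The counterexample proposed above can be checked by simulation. This sketch (sample size, seed, and the two conditioning values of $x$ are arbitrary choices) draws $u = 0.5x + \sigma z$ conditionally at two fixed values of $x$ and confirms that the conditional variance stays at $\sigma^2$ while the conditional mean varies as $0.5x$:

```python
import random
import statistics

random.seed(0)
sigma = 1.0

def draw_u(x):
    # u | x has mean 0.5*x and constant variance sigma^2.
    return 0.5 * x + sigma * random.gauss(0, 1)

# Sample u conditionally at two different values of x.
n = 200_000
u_at_1 = [draw_u(1.0) for _ in range(n)]
u_at_4 = [draw_u(4.0) for _ in range(n)]

mean_1, mean_4 = statistics.fmean(u_at_1), statistics.fmean(u_at_4)
var_1, var_4 = statistics.pvariance(u_at_1), statistics.pvariance(u_at_4)
# mean_1 ≈ 0.5 and mean_4 ≈ 2.0 differ, while var_1 ≈ var_4 ≈ 1.0:
# homoskedasticity alone does not pin down the conditional mean.
```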
https://math.stackexchange.com/questions/3046295/does-bilinear-models-on-vectors-mean-dot-or-outer-product/3046321
# Does bilinear models on vectors mean dot or outer product?
If I have 2 vectors $$x$$ and $$y$$ where $$x \in \mathcal{R}^{m}$$ and $$y \in \mathcal{R}^{n}$$.
Does a bilinear model mean
$$f(x,y) = x^TWy$$ where $$W \in \mathcal{R}^{m*n}$$
which results in a scalar
or
$$f(x,y) = W(x⊗y^T)$$ where ⊗ is the outer product and $$W \in \mathcal{R}^{m*n}$$.
which results in a matrix
I checked 2 papers, the first one Low-rank Bilinear Pooling in page 2 in equation 1 their bilinear model produce a scalar
while in Compact Bilinear Pooling in section 3.1 they said "Bilinear models take the outer product of two vectors"
$$(x,y) \rightarrow x^TWy$$
$$(x,y) \rightarrow xy^T$$
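A small numeric sketch may make the relationship between the two conventions concrete. The vectors and matrix below are made up; the key identity is that the scalar form $x^TWy$ equals the Frobenius inner product of $W$ with the outer product $xy^T$, which is why the two descriptions coincide up to a contraction with $W$:

```python
# Tiny illustration (assumed shapes: x in R^2, y in R^3, W in R^{2x3}).
x = [1.0, 2.0]
y = [3.0, 4.0, 5.0]
W = [[1.0, 0.0, 2.0],
     [0.0, 1.0, 1.0]]

# Scalar bilinear form: f(x, y) = x^T W y.
scalar = sum(x[i] * W[i][j] * y[j] for i in range(2) for j in range(3))

# Outer product: (x y^T)_{ij} = x_i * y_j, a 2x3 matrix.
outer = [[x[i] * y[j] for j in range(3)] for i in range(2)]

# The scalar form equals the Frobenius inner product <W, x y^T>:
# sum_{ij} W_ij x_i y_j, term by term the same expression as `scalar`.
frobenius = sum(W[i][j] * outer[i][j] for i in range(2) for j in range(3))
```

So the first paper's scalar $x^TWy$ and the second paper's "outer product of two vectors" $xy^T$ describe the same bilinear structure before and after contracting with the weight matrix $W$.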
http://conferences.science.unsw.edu.au/SLOAN80/programme.html
Conference in honour of Ian Sloan on the occasion of his 80th birthday,
17-19 June 2018, UNSW, Sydney, Australia.
## Programme
Sunday 17 June Monday 18 June Tuesday 19 June 8:30 - 9:00 (* by invitation only) Breakfast 9:00 - 9:30 Tony Guttmann 9:30 - 10:00 James Sethian 10:00 - 10:30 Honorary degree conferral* Nalini Joshi 10:30 - 11:00 Honorary degree conferral* Coffee 11:00 - 11:30 Honorary degree conferral* Ronald Cools 11:30 - 12:00 Morning tea* Douglas Arnold 12:00 - 12:30 Morning tea* Mark Ainsworth 12:30 - 1:00 Lunch 1:00 - 1:30 Registration and lunch Lunch 1:30 - 2:00 Registration and lunch Fred Hickernell 2:00 - 2:30 Opening - Emma Johnston Welcome - Bruce Henry Markus Hegland 2:30 - 3:00 Edward Redish Henryk Wozniakowski 3:00 - 3:30 Wolfgang Wendland Afternoon tea and farewell 3:30 - 4:00 Coffee 4:00 - 4:30 Mike Giles 4:30 - 5:00 Linda Petzold 5:00 - 5:30 Dinner Ivan Graham 5:30 - 9:00 Dinner
Opening from the Dean of Faculty of Science
Emma Johnston UNSW, Sydney.
Welcome from the Head of School of Mathematics and Statistics
Bruce Henry UNSW, Sydney.
Math, Applied Math, and Math in Physics
Edward Redish University of Maryland, USA
In 1971, I was a recently minted mathematical physicist hunting for a future. Along came a visitor from the other side of the world bearing a set of new tools and new ways of mathematical thinking that changed how I thought about complex nuclear reactions. Twenty years later, I transitioned to a new career in which I think about the role math plays in scientific thinking and how “mathematicians and physicists are two disciplines separated by a common language.” I’ll provide reminiscences and examples of what I’ve learned as a mathematical physicist and as a cognitive education researcher.
Minimal Riesz energy problems in Sobolev spaces
Wolfgang Wendland University of Stuttgart, Germany
The minimal energy problem for nonnegative charges on a closed surface $\Gamma$ in $\mathbb{R}^3$ goes back to C.F. Gauss in 1839. The corresponding Riesz kernel is then on $\Gamma$ weakly singular. More generally, in $\mathbb{R}^n, n\geq 2$, the constructive and numerical solution of minimizing the energy relative to the Riesz kernel then defines on $\Gamma$ a weakly singular single layer boundary integral operator. The construction and numerical solution of minimizing the corresponding energy then provides the distribution of charges on $\Gamma$ whose single layer potential is the solution of the Dirichlet problem inside $\Gamma$. For more general Riesz kernels the corresponding minimizing problem leads to a charge distribution which provides the density of Sloan’s integration points on $\Gamma$. If the surface consists of two separate parts $\Gamma_1$ and $\Gamma_2$ with positive charges on $\Gamma_1$ and negative charges on $\Gamma_2$ then the minimizing charge distribution can be approximated by piecewise constant charges via a Galerkin-Bubnov method. Wavelet matrix compression is applied for solving the discrete system. This is joint work with Helmut Harbrecht (Basel), Günther Of (Graz) and Natalia Zorii (Kiev).
Mike Giles , University of Oxford, UK
One noteworthy aspect of Ian Sloan's career is his switch from theoretical physics to computational mathematics. In this talk I will discuss my own mid-career switch from CFD (computational fluid dynamics) to mathematical finance and Monte Carlo methods, with a visit to Ian in UNSW in early 2007 being a key part of my switch.
The Master Clock: Structure and Function
Linda Petzold, UCSB, USA
In the mammalian suprachiasmatic nucleus (SCN), noisy cellular oscillators communicate within a neuronal network to generate precise system-wide circadian rhythms. In past work we have inferred the functional network for synchronization of the SCN. In recent work we have inferred the directionality of the network connections. We discuss the network structure and its advantages for function.
Uncertainty quantification in partial differential and integral equation models
Ivan Graham University of Bath, UK
In late 1975 I received a very enthusiastic aerogramme from Ian Sloan telling me that "he and his collaborators had just discovered some exciting new numerical methods for solving integral equations", and offering me a PhD scholarship to work with him on this at UNSW. Going to Sydney in 1976 to work with Ian was probably the most important decision of my career and I'm pleased to say that we have kept in close touch and have continued to work together down the years. In fact recently some of the things we were working on in 1976 became useful again in the analysis of uncertainty propagation in reactor modelling, an industrial project I've been doing with Wood plc in the UK. I'll talk about this and, more generally, I'll talk about uncertainty quantification for various PDE models and how recent work by Ian and a number of other collaborators on high dimensional integration has allowed us to make substantial progress in this area.
Pattern-avoiding permutations
Tony Guttmann University of Melbourne, Australia
There are n! permutations of the integers 1,2,...,n. We say that a permutation avoids a given sub-permutation (called a pattern) if there is no sub-sequence of the permutation with digits in the same relative order as the digits in the pattern. Thus 1,2,3,4,5 avoids the pattern 2,1 as there are no two integers in decreasing order in the permutation. However 1,3,4,2,5 does not avoid the pattern, as the pairs 3,2, and 4,2 are both in decreasing order. We will describe some open questions, and discuss some numerical methods for exploring them.
The Mathematics of Moving Interfaces: From Industrial Printers and Semiconductors to Medical Imaging and Foamy Fluids
James Sethian University of California, Berkeley, USA
Moving interfaces appear in a large variety of physical phenomena, including mixing fluids, industrial printers, medical images, and foamy fluids. One way to frame moving interfaces is to recast them as solutions to fixed domain Eulerian partial differential equations, and this has led to a collection of PDE-based techniques, including level set methods, fast marching methods, and ordered upwind methods. These techniques easily accommodate merging boundaries and the delicate 3D physics of interface motion. We will give a brief overview of these methods, and then talk about a few selected new applications from fluids, materials science, and image analysis.
Symmetry through Geometry
Nalini Joshi University of Sydney, Australia
Symmetry is an essential part of our description of the world. The quality of being made up of exactly similar parts facing each other is all around us: one day reflects another and the days fill out the year in the same way that similar hexagonal compartments fill out a honeycomb. The mathematical description of symmetries is built from only two operations: reflections and translations. In two dimensions, these give rise to triangular, hexagonal and square tilings of the plane. But in higher dimensions, many more tiling patterns are available. One of the many questions that arise is how to go from higher dimensional tilings to two dimensional ones. I will show how to use these ideas to link two major theories that arise in mathematical physics.
Lattice rules with a twist
Ronald Cools KU Leuven, Belgium
In 1991 Ian invited me to Sydney to join forces, and in 1992 I went. At that time one of his research passions was lattice rules for periodic functions. Mine was cubature formulas that are exact for trigonometric functions. Together we investigated minimal cubature formulas, and we obtained rules for arbitrary polynomial degrees of precision with free parameters. These are all equal weight rules with points shifted in a particular way. Within this continuum there are lattice rules. So we were both happy with the result. Why it took 4 more years before the paper appeared in print is another story. In a series of papers that started at an Oberwolfach meeting in 1992, Ian and Harald Niederreiter modified lattice rules for non-periodic functions. They modified the weights and kept the points. Dirk Nuyens and I picked up this thread for the occasion of Ian's 80th birthday.
Spiky eigenfunctions
Douglas Arnold, University of Minnesota, USA
From our first encounters with partial differential equations and Fourier analysis, we encounter eigenfunctions of elliptic operators which are oscillatory and global: sines, cosines, Bessel functions, and so forth. But when the coefficients of the PDE are not smooth but disordered, an entirely different sort of eigenfunction appears: spiky and highly localized. Such localization is highly relevant in physical applications, especially in quantum mechanics. A large body of mathematics has been developed to explain it, but many aspects of localization remain mysterious. In particular, most known results are probabilistic, providing statistical information for a class or operators with random coefficients, but not information specific to a particular realization of such an operator. We will describe a new and very different perspective on localization. The new viewpoint is deterministic, enabling us, for the first time, to deduce the localized spectrum of a disordered operator without actually calculating its eigenfunctions and eigenvalues. This approach holds out the promise of harnessing disorder to create devices and materials with new and desired properties.
Fractional Swift-Hohenberg Equation and Its Application to Modelling of Crystals
Mark Ainsworth, Brown University, USA
Recent years have seen a surge of interest in fractional partial differential equations driven, in part, by the desire to develop models that are able to more accurately model systems which exhibit behaviour including sub-and super-diffusion. In the current work, we shall discuss a fractional version of the Swift-Hohenberg equation. The Swift-Hohenberg equation is a non-linear parabolic PDE which is known to have solutions display pattern formation, and which has been used in the physics literature to develop macroscopic models for crystals known as "phase field crystals." We consider a fractional version of the Swift-Hohenberg equation (FSHE) which gives a non-linear fractional parabolic problem. In particular, we show that FSHE is well-posed and admits a unique solution; develop a Fourier-Galerkin scheme and obtain error estimates; analyse pattern formation properties and discuss the use of FSHE to develop fractional phase field crystal models. This is joint work with Zhiping Mao (Brown University).
The Advantages of Sampling with Integration Lattices
Fred Hickernell IIT Chicago, USA
Ian Sloan has been a key developer and proponent of integration lattices for numerical computation. His 1994 monograph has over 700 citations, and his more recent 2013 Acta Numerica article has over 200 citations. This talk highlights how well-chosen integration lattices can speed up numerical multidimensional integration in contrast to tensor product rules and simple Monte Carlo. It also describes recent work sampling with integration lattices when performing Bayesian cubature, where the integrand is assumed to be an instance of a Gaussian random process.
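As a concrete illustration of the equal-weight rules mentioned in the abstract, here is a minimal rank-1 lattice rule sketch. The modulus $n = 1009$ and generating vector $z = (1, 400)$ are illustrative choices, not an optimized generating vector from the literature; the test integrand is a smooth periodic function on $[0,1]^2$ with exact integral 1:

```python
import math

def rank1_lattice(n, z, f, dim):
    # Equal-weight rank-1 lattice rule: average f over the points
    # x_i = frac(i * z / n), i = 0, ..., n-1.
    total = 0.0
    for i in range(n):
        point = [((i * z[j]) % n) / n for j in range(dim)]
        total += f(point)
    return total / n

# Smooth periodic test integrand on [0,1]^2; each sine integrates to 0
# over a full period, so the exact integral is 1.
def f(x):
    return (1 + math.sin(2 * math.pi * x[0])) * (1 + math.sin(2 * math.pi * x[1]))

approx = rank1_lattice(1009, (1, 400), f, dim=2)
```

For a trigonometric polynomial like this one, a lattice rule is exact whenever none of the integrand's frequencies lies in the dual lattice, which is what makes well-chosen lattices so effective for smooth periodic integrands compared with crude Monte Carlo.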
The combination technique for high dimensional approximation
Markus Hegland ANU, Australia
Ian Sloan has made substantial contributions to high dimensional quadrature and approximation which include the establishment of quasi Monte Carlo Methods, and results about the ANOVA decomposition of functions. Combining ideas from algebra and analysis he and his collaborators have been able to control the curse of dimensionality to a large extent. He has actively nurtured Australian research in high-dimensional computing, and the development of sparse grid approximation. Sparse grids have been used for the solution of differential equations and recently in uncertainty quantification and fault-tolerant algorithms. In my presentation I will review some of the development of the sparse grid combination technique and its foundations in algebra, extrapolation and ANOVA decompositions.
Ian Sloan and Tractability
Henryk Wozniakowski Columbia University, USA and University of Warsaw, Poland
I will try to summarize over 25 years of our collaboration and friendship with Ian Sloan. Most of our papers deal with tractability of multivariate problems such as multivariate integration and approximation. I will cite the results of our first and last paper.
https://www.physicsforums.com/threads/electric-flux-and-electric-flux-density.904122/
# Electric Flux and Electric Flux Density
1. Feb 15, 2017
### CoolDude420
1. The problem statement, all variables and given/known data
Hi,
So I'm doing an electromagnetics course and we've been given equations for electric flux and electric flux density, but I can't seem to find any sort of intuitive explanation for them.
In my lecture notes, the electric flux density is introduced first as vector D and given the formula:
Vector D = epsilon*Vector E
The electric flux is defined as:
From my understanding from high school, the electric flux is the number of electric field lines passing through an area (perpendicularly).
I'm just really confused as to what is what in sort of a realistic viewpoint.
2. Relevant equations
3. The attempt at a solution
2. Feb 15, 2017
### haruspex
Yes, but field lines are just constructs to aid intuition. There are not actual discrete lines that can be counted. You can think of each field line as representing the same quantity of flux, but what that quantity is is up to you.
3. Feb 15, 2017
### kuruman
D is proportional to E under most circumstances that you will encounter so if you understand one kind of flux, you should be able to understand the other. I suspect your question has more to do with Gauss's Law than with flux so I will discuss electric flux.
Imagine a closed surface like the skin of a potato. Now draw a square grid on the skin of the potato subdividing its area into many many little pieces dA. You can make the pieces as small as you like - we are doing calculus here. Number the pieces so that you can tell them apart. Go to piece 1 and measure the electric field at the location of that piece assuming that it is the same over the entire area of the piece. Consider a unit vector $\hat{n}$ perpendicular to the area pointing outwards, away from the "meat" of the potato. Find the component of the E-field, i.e. $\vec{E_1} \cdot \hat{n_1}$ and multiply by the area element $dA_1$. Now go to element 2 and do the same. Add the new product $\vec{E_2} \cdot \hat{n_2}~dA_2$ to the previous one. Keep on adding until you run out of area elements. The sum of all the products is the electric flux.
OK, but what does that mean intuitively? Remember that the dot product between two vectors is positive if the angle between the vectors is less than 90° and negative if the angle is greater than 90°. So, if the sum you get is positive, this means that more field lines on average are coming out of the area than going in; this means that there is a source of field lines inside the meat of the potato. If the sum is negative, more field lines on average are going into the area than are coming out; this means that there is a sink of field lines inside the meat of the potato. And if the sum is zero, this means that there is neither a source nor a sink of electric field lines inside the meat of the potato.
Gauss's law asserts that the sum you get this way is proportional to the total net charge inside the meat of the potato. In other words, just by walking around the skin of the potato, keeping track of what goes in and what comes out, you can figure out what's under the skin without looking.
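The patch-by-patch sum described above can be checked numerically. The sketch below uses a toy setup of our own (not from the thread): a 1 nC point charge at the centre of a sphere of radius 0.1 m, so the field is radial and $\vec{E} \cdot \hat{n}$ is the same on every patch; summing over the grid recovers Gauss's law, flux ≈ q/ε₀.

```python
import math

# Numerical version of the patch-by-patch flux sum.
# The numbers are illustrative assumptions, not the thread's.
q = 1e-9                  # charge (C)
eps0 = 8.8541878128e-12   # vacuum permittivity (F/m)
R = 0.1                   # sphere radius (m)

n_theta, n_phi = 200, 200
dtheta = math.pi / n_theta
dphi = 2.0 * math.pi / n_phi

# On a sphere centred on the charge, E is radial, so E . n_hat = |E|.
E_dot_n = q / (4.0 * math.pi * eps0 * R**2)

flux = 0.0
for i in range(n_theta):
    theta = (i + 0.5) * dtheta                   # patch midpoint
    dA = R**2 * math.sin(theta) * dtheta * dphi  # area of one patch
    flux += E_dot_n * dA * n_phi                 # n_phi identical patches

print(flux, q / eps0)  # the sum comes out very close to q / eps0
```

Shrinking the patches (larger `n_theta`, `n_phi`) drives the sum toward the exact value, which is the calculus limit kuruman describes.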
4. Feb 16, 2017
### CoolDude420
Very nice explanation! I think I understand electric flux. But I'm still not too sure about electric flux density, and why vector D is used everywhere instead of the actual electric flux. Also, in my lecture notes, electric flux is defined as the flux of the electric flux density D instead of being the flux of the electric field strength E.
5. Feb 17, 2017
### kuruman
It doesn't matter what kind of vector field you have. Flux is a mathematical construct. You can go through the procedure that I described, for any vector field, whether it is E, D, the magnetic field B, the velocity vector field v (in a river) or whatever.
https://www.physicsforums.com/threads/floating-iceberg.695508/
# Floating iceberg
1. Jun 5, 2013
### Felafel
1. The problem statement, all variables and given/known data
I think it should be pretty simple, but my result and that of the book are different:
How much water does an iceberg displace? (Its emerged part is $V_i=100\,m^3$.)
3. The attempt at a solution
knowing the density of sea water is $d_w=1.03*10^3 kg/m^3$ and that of ice $d_i=0.92*10^3 kg/m^3$ i can calculate the volume of the submerged part of the iceberg:
$\frac{d_i}{d_w}=0.89$, meaning the total volume of the iceberg satisfies $100:11=x:100$,
so x = 909. The submerged part is then 909 - 100 = 809 m^3. Now, since the iceberg floats, its weight and Archimedes' force should be equal, then:
$m \cdot g= d_w \cdot g \cdot V_w$ and so:
$809 \cdot 92=103 \cdot v_w$ $\Rightarrow$ $V_w=723 m^3$
but according to the book it should be 1050m^3
what's wrong in my reasoning?
2. Jun 5, 2013
### SteamKing
Staff Emeritus
IMO, the book's answer is incorrect.
What I did was to say that x = submerged volume of the iceberg. Then I wrote an equation setting the weight of the iceberg equal to the buoyancy of the iceberg. Solve for x.
3. Jun 5, 2013
### Felafel
I don't understand your method. I get two unknowns:
Weight of the iceberg: mass multiplied by g, where the mass is the volume divided by the density, and x is the submerged volume. Buoyancy = g multiplied by the volume of displaced water (which I don't know) multiplied by the density of the water
$\frac{x+100}{d_{ice}} \cdot g=V_w \cdot g \cdot d_{w}$
but i don't know both x and V_w
4. Jun 5, 2013
### Staff: Mentor
How is displaced water different from submerged volume?
5. Jun 5, 2013
### haruspex
Because it gets subtracted from 1, effectively, the rounding error in truncating it to 0.89 becomes significant. You need to use a couple more digits of precision.
The method looks ok to here, but as Borek points out this should also be the volume of water displaced. I don't understand what you did from here.
Fwiw, I get 836 cu m.
6. Jun 5, 2013
### Staff: Mentor
And you are not alone
7. Jun 5, 2013
### Felafel
now everything's clear :) thank you!
8. Jun 5, 2013
### SteamKing
Staff Emeritus
Let x = submerged volume of iceberg
Total volume of iceberg = (x + 100) cu. m.
Mass of iceberg = 920 kg/m^3 * (x + 100) m^3
Since the iceberg is floating, the mass of the iceberg = mass of the displaced water
(this is Archimedes' principle),
Therefore,
mass of displaced water = x * 1030
equating displacement of iceberg to mass of iceberg,
1030*x = 920 * (x + 100)
Solve for x
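SteamKing's algebra can be checked in a couple of lines (a Python sketch of our own, not something posted in the thread):

```python
# Solve 1030*x = 920*(x + 100) for the submerged volume x,
# with densities in kg/m^3 and volumes in m^3.
d_water = 1030.0
d_ice = 920.0
v_emerged = 100.0

# Rearranged: x*(d_water - d_ice) = d_ice * v_emerged
x = d_ice * v_emerged / (d_water - d_ice)
v_total = x + v_emerged   # total ice volume

print(round(x))   # 836, matching haruspex's 836 cu m
```

This also shows why the earlier truncation to 0.89 mattered: the answer depends on the small difference 1030 - 920, so rounding the density ratio early shifts the result noticeably.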
https://stats.stackexchange.com/questions/184224/generating-data-from-a-specific-distribution/184225
Generating data from a specific distribution
Let X1,...,Xn be a random sample from the pdf
f(x) = (1/b) exp[-(x - a)/b],  a < x < infinity,  0 < b < infinity,
where a and b are location and scale parameters, respectively.
Now I have to generate a sample X of 1000 observations from the pdf. I am an R user.
To set up the problem, I set the parameter values of a and b arbitrarily. But to compute the probability f(x), I need to set the values of x, yet I have been asked to generate X from the pdf. It seems that calculating the probabilities f(x) and generating the values of X are simultaneous tasks. How can I do that?
• (Trying not to give a complete solution to a homework problem:) The usual method to take a first stab at generating random variables (if prevented from using a built-in function) from a particular distribution is to invert the function to get a quantile function and then feed it random variates from the uniform distribution on the interval [0,1]. You can look at the shape of the distribution thereby produced using hist. hist( -log( runif(1000) ) ). I'm guessing this is homework from a mathematical statistics course and that you should demonstrate steps in getting to your final solution. – DWin Nov 30 '15 at 2:19
• Close vote on the hypothesis that this is a request for stats advice. – DWin Nov 30 '15 at 2:24
• @42- hm, that's inversion method. – user 31466 Nov 30 '15 at 2:44
• Yes, it is. What is your point? – DWin Nov 30 '15 at 2:45
Use the rexp function to generate a vector with rate equal to 1/b, and then shift the result by adding a.
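Both routes — inverting the CDF by hand, as the first comment suggests, and shifting a standard exponential draw, as the answer's `rexp(n, rate = 1/b)` recipe does — can be sketched in Python (a = 2 and b = 3 are arbitrary illustration values of ours; the thread itself uses R):

```python
import math
import random
import statistics

random.seed(1)
a, b = 2.0, 3.0
n = 100_000

# Route 1: inversion. F(x) = 1 - exp(-(x - a)/b) for x > a, so
# F^{-1}(u) = a - b*log(1 - u) with u uniform on [0, 1).
sample_inv = [a - b * math.log(1.0 - random.random()) for _ in range(n)]

# Route 2: draw a standard exponential with rate 1/b and shift by a
# (the Python analogue of rexp(n, rate = 1/b) + a).
sample_shift = [a + random.expovariate(1.0 / b) for _ in range(n)]

# Both samples should have minimum near a and mean near a + b.
print(statistics.mean(sample_inv), statistics.mean(sample_shift))
```

The two routes are the same method in disguise: `expovariate` is itself an inversion of the standard exponential CDF, and the location parameter only shifts the support from 0 to a.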
https://read.dukeupress.edu/demography/article/52/2/613/169361/Birth-Order-and-Mortality-A-Population-Based
## Abstract
This study uses Swedish population register data to investigate the relationship between birth order and mortality at ages 30 to 69 for Swedish cohorts born between 1938 and 1960, using a within-family comparison. The main analyses are conducted with discrete-time survival analysis using a within-family comparison, and the estimates are adjusted for age, mother’s age at the time of birth, and cohort. Focusing on sibships ranging in size from two to six, we find that mortality risk in adulthood increases with later birth order. The results show that the relative effect of birth order is greater among women than among men. This pattern is consistent for all the major causes of death but is particularly pronounced for mortality attributable to cancers of the respiratory system and to external causes. Further analyses in which we adjust for adult socioeconomic status and adult educational attainment suggest that social pathways only mediate the relationship between birth order and mortality risk in adulthood to a limited degree.
## Introduction
However, despite this background, few studies have investigated the relationship between birth order and mortality in adulthood (Modin 2002; O’Leary et al. 1996; Smith et al. 2009), and very little research addresses birth order and cause-specific mortality in adulthood. A number of studies have demonstrated a link between birth order and cancer development, although not mortality attributable to cancer (Altieri and Hemminki 2007; Amirian et al. 2010; Bevier et al. 2011; Hemminki and Mutanen 2001; Richiardi et al. 2004). The overall pattern is mixed: the direction of the relationship between birth order and cancer development has been shown to vary according to the site of the cancer. This study is the first to address the relationship between birth order and all-cause mortality using a population data set, the first to use a within-family comparison design to study the relationship between mortality and birth order, and the first to use a sufficiently large database to address cause-specific mortality in adulthood. Using a within-family comparison approach—meaning that we only compare siblings born to the same parents to one another—allows us to rule out a wide range of potential confounding factors that may vary considerably between families, such as parental SES, as well as other unobserved family-specific characteristics. The specific causes of death that we study are mortality attributable to diseases of the circulatory system, external causes, and neoplasms, excluding cancers of the respiratory system, which we study separately. These cause-specific analyses are valuable both in terms of enabling us to partially discriminate between different causal mechanisms, and in allowing us to speculate about the importance of birth order effects on mortality beyond the ages that we observe in our data.
This study focuses on adult mortality at ages 30 to 69. Given that the life expectancy at birth in Sweden as of this writing is 79.7 for men and 83.8 for women (United Nations 2013), we are studying premature mortality. Although premature mortality is relatively rare in Western developed societies such as Sweden, it remains a critical topic of interest given that premature mortality terminates the opportunity for individuals to enjoy long, satisfying lives and to make an economic contribution toward society. Premature mortality in adulthood is usually concentrated among the most vulnerable in society because those who suffer from premature mortality are typically more frail than the average individual. In this study, we choose to study the relationship between birth order and mortality separately for men and women, for several reasons. First, mortality patterns differ by gender. Second, previous research has also shown gender differences in the relative impact of early-life conditions, such as early socioeconomic disadvantage, on later-life mortality, with women being relatively more vulnerable (Hamil-Luker and O’Rand 2007). Read and Gorman (2010) suggested that the long-term chains of risk associated with early relative disadvantage may be greater among women than men because of the institutional disadvantages that women face in the household and the labor market. In this study, we will be able to examine whether relative disadvantage within the family produces a similar pattern. One reason why women may be more sensitive to relative disadvantage within the family is that previous research indicates that women demonstrate a greater tendency to root their social role within the family and private sphere than men (Hagestad 1986; Rossi and Rossi 1990; Young and Willmott 1957).
Previous studies on the relationship between birth order and mortality in adulthood have been mixed, with some finding that children of higher birth order have greater mortality (Modin 2002) and others finding no clear pattern of substantive or statistical significance (O’Leary et al. 1996; Smith et al. 2009). However, these studies have varied in quality as well as in the degree to which they focused on birth order as a key variable. Using the Utah Population Database, Smith et al. (2009) investigated how a range of early-life factors were associated with mortality in adulthood. The impact of birth order on adult mortality was not the main focus of the study. Operationalizing birth order as a binary variable indicating whether the individual was firstborn, that study found no statistically significant associations between birth order and adult mortality for either men or women. The study by O’Leary et al. (1996) found little relationship between birth order and mortality but used a small (n = 1,162) and non-representative sample, with insufficient statistical power to detect any patterns. Finally, a study using Swedish data (n = 14,192) from the Uppsala Birth Cohort Study found that birth order was associated with an increased risk of all-cause mortality for both men and women aged 20–54 and for men aged 55–80 (Modin 2002), although after adjusting for the SES of the ego in adulthood, the pattern was no longer statistically significant. However, sibship size was not included in the models. Because high birth orders are directly correlated with large family sizes, this leaves open the potential for confounding if sibship size is not adjusted for. Furthermore, none of these studies used the within-family comparison approach adopted in this study, leaving open the possibility that spurious associations could be observed even after the researchers adjusted for important variables, such as sibship size and parental SES. 
Nevertheless, given past research findings on the importance of birth order, we anticipate that all-cause mortality will increase with a rising birth order, and we also anticipate that we will observe the same pattern for cause-specific mortality.
Several hypotheses have been proposed for why birth order should be related to outcomes in adulthood. Among myriad explanations, two theories have gained particular scientific interest: the confluence hypothesis (Zajonc 1976; Zajonc and Markus 1975) and the resource dilution hypothesis (Blake 1981). These two theories explain discrepancies in achievement by birth order as being attributable to lower cognitive stimulation in early-life and cumulative resource disadvantage for later-born siblings. More specifically, the confluence hypothesis takes account of the fact that children are a part of their own dynamically changing environment, and states that as family size grows with an increasing number of children, the environment becomes steadily less cognitively stimulating (Zajonc 1976). This less-stimulating environment is hypothesized to impact intellectual development (Zajonc 1976). Previous findings of a negative relationship between cognitive ability and both education and longevity suggest that later-borns would have higher mortality in adulthood (Batty et al. 2007; Lager and Torssander 2012). The resource dilution hypothesis states that the pool of parental resources, which includes material, cognitive, and interpersonal resources (Hertwig et al. 2002), available to each child decreases as the sibship size increases (Blake 1981). First- and early-born children will spend the early years of life having the exclusive or near-exclusive attention of parents, whereas later-borns are forced to compete with siblings over resources from birth. Empirical research shows a negative relationship between birth order and the time that parents spend caring for their children (Price 2008), the amount of parental leave time that parents take (Sundström and Duvander 2002), and the likelihood of breast-feeding (Buckles and Kolka 2014). 
Although parents’ incomes may increase as they age, this rising income is rarely sufficient to offset the dilution of resources as more children enter the household. Investment in childhood and early-life resource access has been shown to be associated with various measures of health in adulthood (Campbell et al. 2014), including mortality (Hayward and Gorman 2004).
In addition to the confluence hypothesis and the resource dilution hypothesis, several other explanatory theories have been offered. For example, the hygiene hypothesis (Strachan 1989) argues that a larger sibship increases the likelihood of communicable diseases being introduced into the family and that younger siblings may be more susceptible to these diseases (Holman et al. 2003; Strachan 1989). Another explanation, the family dynamics model (Sulloway 1996; Sulloway and Zweigenhaft 2010), assumes the fundamental aspects of the resource dilution hypothesis and extends it to argue that children tend to occupy different niches within the family environment, and that they also attempt to differentiate themselves from one another in order to avoid direct intersibling competition. It has been argued that these intrafamily dynamics tend to produce firstborns whose values are more closely aligned with those of their parents, and later-borns who are more rebellious and more likely to engage in risky or dangerous activities (Sulloway 1996; Sulloway and Zweigenhaft 2010; Zweigenhaft and Von Ammon 2000). Yet another explanation is that older siblings introduce younger siblings to developmentally inappropriate activities, such as smoking and alcohol consumption, at a younger age than they otherwise would have been, which may have both direct and indirect influences on health and mortality (Elliott 1992; Harakeha et al. 2007). We hypothesize that if there is support for this latter explanation then mortality attributable to cancers of the respiratory system, as well as external causes in the form of accidents, suicide, and events of undetermined intent, should be positively associated with birth order; we will be able to test this hypothesis by looking at cause-specific mortality outcomes.
Another perspective in the literature is that the finding of a relationship between birth order and intelligence or educational attainment is a methodological artifact of drawing inferences about within-family patterns from between-family data and that these associations disappear after between-family heterogeneity is adjusted for (Rodgers 2001). However, recent research using high-quality, population-based, longitudinal Nordic administrative register data comparing siblings within the same family to one another suggests that within-family birth-order effects do exist and that later-born children fare worse on measures of both cognitive ability and educational attainment (Bjerkedal et al. 2007; Black et al. 2005, 2011; Kristensen and Bjerkedal 2007). The current study, using Swedish administrative register data, features the same advantages in terms of data and methodological approach. Given the strong and unambiguous evidence for the effect of IQ, educational attainment, and SES on health outcomes (Batty et al. 2007; Lager and Torssander 2012; Mackenbach et al. 1997; Marmot 2004; Torssander and Erikson 2010), we expect that mortality will increase for higher birth orders, and that this association will be mediated through social pathways in a way that is at least partially observable by using measures of SES and educational attainment. We hypothesize that this pattern will be clearest for cause-specific mortality associated with lifestyle and environmental conditions, such as cancers of the respiratory system and mortality attributable to external causes. We will be able to test this hypothesis by adjusting for SES and educational attainment in adulthood and by analyzing cause-specific mortality outcomes.
## Data and Methods
### Data
In this study, we use Swedish population register data to investigate the relationship between birth order and mortality. We conduct separate analyses for men and women. The individuals under analysis consist of cohorts born between 1938 and 1960, with 1938 being practically the earliest point for which we can obtain reliable information on parent–child linkages using the multigenerational Swedish registers. Here, we define a sibship as a group of siblings with the same biological mother–father pairing. We use the terms “set size” to refer to the size of the full sibling group and “set order” to refer to birth order within that sibling group. We do not restrict the calculation of set size or set order to these cohorts but instead use the full population registers to generate these measures. We link the population register to the Swedish mortality register, following them from 1990 to 2007 for both all-cause and cause-specific mortality. Some descriptive details on our data can be seen in Table 1. Although the Swedish mortality register contains data over the period 1960 to 2007, the multigenerational registers that allow family members to be linked to one another are incomplete before the 1990s (SCB 2011). We exclude families with plural births from our analyses because the meaning of birth order is less clear in these families. To maximize the quality of our birth-order measure, we also exclude sibling groups in which any of the children are born outside the Nordic region. For our main analyses, which use a within-family comparison approach (to be described in more detail later herein), we also exclude only-children. Finally, we study sibling groups with between two and six children because sibling sizes greater than six are rare, and including those few cases produced unreliable estimates for birth orders seven and higher. 
Based on these exclusion criteria, of the 2,166,948 individuals in the 1938–1960 birth cohorts living in Sweden in 1990, we have 1,788,388 individuals from sibling groups with between two and six children available for our analyses.
Aside from all-cause mortality, we address mortality attributable to the following causes: neoplasms; cancers of the respiratory system; diseases of the circulatory system; and external causes, which includes accidents, suicides, and events of undetermined intent. These cause-specific outcome variables were coded using the World Health Organization (WHO) International Classification of Diseases (ICD), versions 9 and 10, taking into account the transition between these versions in 1996 in Sweden (Janssen and Kunst 2004). Because we also study cancers of the respiratory system as a specific outcome, we remove this category of cancers from the larger category of neoplasms for the analyses presented here. Because the 1990s are the earliest point at which we can reliably link the multigenerational registers to the mortality register, we have both left- and right-censoring in our models, and the age at which individuals enter and exit the analysis varies for different birth cohorts. This means that the members of the earliest cohort in our study, born in 1938, enter the analysis at age 52 and are followed until age 69; and members of the latest-born cohort, born in 1960, are followed from ages 30 to 47. Because of the nature of the data, we are not able to observe mortality for the youngest ages of adulthood, from 18 to 29, and for the oldest ages, after 69. Although we are unlikely to lose a great deal of information on mortality attributable to diseases of the circulatory system and different cancers by having the earliest age of analysis at 30, we undoubtedly fail to fully capture all of the deaths attributable to external causes in the form of accidents and suicides. We also fail to observe a large proportion of the deaths of each cohort by not observing them later than age 69. Including all our birth cohorts, 4.7 % of our total study population died between the time they entered the analysis and the end of the follow-up period. 
This proportion is higher for the oldest cohort, at approximately 13 %.
### Statistical Analyses
We conducted within-family analyses to estimate the relationship between birth order and mortality. The within-family analyses—meaning a within-sibship comparison—use fixed-effect discrete-time survival analysis. The hazard function—that is, the probability that individual i has an event y during interval t, given that no event has occurred before the start of t—is defined as
$h_{it} = \Pr\left(y_{it} = 1 \mid y_{i,t-1} = 0\right).$
We fit a discrete-time logistic regression model of the following form:
$\operatorname{logit}\left(h_{ijt}\right) = \log\frac{h_{ijt}}{1 - h_{ijt}} = \alpha(t) + \beta x_{ijt} + u_j,$
where $h_{ijt}$ refers to the hazard of failure at discrete time t for respondent i in sibling group j; $\alpha(t)$ is the logit of the baseline hazard function; $x_{ijt}$ is a vector of covariates for individual i; $\beta$ is the vector of regression parameters; and $u_j$ refers to the fixed-effect component for any given sibling group j. The estimator used is a conditional logistic regression model (Greene 2012; Hosmer et al. 2013), with fixed effects specified at the level of the sibling group. These models have been estimated using cluster-adjusted standard errors to account for any potential intragroup correlation (Primo et al. 2007). The clusters in this study are sibships.
The results from these discrete-time survival analyses are the main results presented in the Results section. Although we study individuals from different cohorts across different ages, using survival analysis allows us to adjust for differences in the number of person-years of exposure that different groups contribute to the analysis. We right-censor for the first out-migration of any individual from Sweden. In our main analysis of all-cause mortality, 5.7 % of men and 3.6 % of women out-migrated over the entire study period. Table 1 shows the study size as well as the number of deaths for men and women. Besides modeling all-cause mortality, we also estimate the cause-specific mortality of other causes of death. We can no longer assume independent right-censoring because our causes of death are dependent on each other; thus, we can no longer estimate the marginal effect (the effect of our covariates on a specific cause of death in the absence of other causes of deaths). We can, however, still examine the extent to which birth order mediates mortality for different causes of death.
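The discrete-time setup can be illustrated with a toy simulation (made-up numbers, not the study's data): each person-period is treated as a Bernoulli trial whose success probability is the per-interval hazard, and a person's follow-up ends at the first event or at censoring.

```python
import math
import random

# Toy person-period simulation of a constant discrete-time hazard.
# h_true and the follow-up window are illustrative assumptions.
random.seed(0)
h_true = 0.05        # per-period hazard
max_periods = 18     # follow-up window, in discrete intervals
n_people = 20_000

events, exposure = 0, 0
for _ in range(n_people):
    for t in range(max_periods):
        exposure += 1                  # one person-period record
        if random.random() < h_true:   # event during interval t
            events += 1
            break                      # no exposure after failure

h_hat = events / exposure              # pooled hazard estimate
logit_h = math.log(h_hat / (1.0 - h_hat))
print(h_hat)   # close to h_true = 0.05
```

Fitting a logistic regression to such person-period records recovers the logit of the hazard, which is why discrete-time survival models can be estimated with standard (here, conditional) logistic regression machinery.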
Because the within-sibship comparison fixed-effects approach requires within-family variation, we are able to examine only those families in which at least one sibling has died. Thus, the frequency of the outcome is very high in the within-family logistic regression models. The procedure by which odds ratios are calculated means that when the incidence of an outcome is greater than 10 %, as it is in this study, any given odds ratio will be elevated relative to the corresponding difference in the probability of the outcome between two groups (Zhang and Yu 1998). For example, when the frequency of the outcome is 50 %, the odds ratio can be more than 150 % higher than the corresponding relative risk (Schmidt and Kohlmann 2008). Odds ratios are not problematic in and of themselves, but it is important that they are interpreted in terms of a relative increase or decrease in the odds of an outcome rather than a relative increase or decrease in the probability of an outcome between groups.
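A short sketch, with illustrative probabilities rather than figures from this study, shows how far an odds ratio can drift from the corresponding relative risk when the outcome is common:

```python
# Sketch of odds-ratio inflation at high outcome frequency.
# The probabilities below are illustrative only, not estimates from the study.

def odds(p):
    return p / (1.0 - p)

def odds_ratio(p1, p0):
    """Odds ratio comparing a group with risk p1 against a reference with risk p0."""
    return odds(p1) / odds(p0)

def relative_risk(p1, p0):
    return p1 / p0

# A 50 % incidence in the reference group, 80 % in the comparison group:
p0, p1 = 0.50, 0.80
rr = relative_risk(p1, p0)  # 1.6
orr = odds_ratio(p1, p0)    # 4.0
# The odds ratio (4.0) exceeds the relative risk (1.6) by 150 %, the order of
# divergence noted by Schmidt and Kohlmann (2008). Hence estimates from these
# models are read as changes in odds, not changes in probability.
```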
### Covariates in Survival Analyses
We adjust our estimates of the relationship between birth order and mortality for a number of different variables that are theoretically confounders for this relationship. Correlation matrices for these variables are shown in Table S10 in Online Resource 1. In the analyses, we adjust for the age of the ego’s mother in the birth year of the ego and for cohort. Theoretically, all other intrafamily characteristics, including sibship size, geographical location, and parental SES, are inherently accounted for by conducting a within-family comparison, allowing us to focus exclusively on the importance of birth order for mortality. This approach precludes concerns that the results for birth order may be a statistical artifact of drawing within-family inferences from a between-family comparison (Rodgers et al. 2000), thereby isolating the causal effect of birth order on mortality.
We adjust for cohort effects rather than period effects for two reasons. The first is burgeoning evidence for the importance to longevity of in utero and early-life conditions around the time of birth, which vary substantially across cohorts (Bengtsson and Broström 2009; Bengtsson and Mineau 2009; Gluckman et al. 2008). Furthermore, previous research has indicated that cohort effects play a more significant role in mortality trends than period effects (Richards et al. 2006). In addition, because of changing fertility preferences, period-specific fertility patterns are also related to cohort size (Andersson et al. 2009; Andersson and Kolk 2011), which is related to birth order. Thus, we include a variable for birth year to account for these underlying patterns. We also implicitly adjust for period effects by adjusting for both cohort and age. We adjust for maternal age at birth because evidence suggests that this is an important factor influencing a wide range of adult health outcomes (Myrskylä and Fenelon 2012). In our within-family comparison analyses, we exclude only-children because variance in the outcome is required within the sibling group. We also do not include any sibling set that includes multiple births because the meaning of birth order is different in these families. The full results in Online Resource 1 show the association between birth order and mortality from pooled analyses for children born in sibling sets ranging in size from two to six, as well as results from sibship-size–specific analyses for both the within-family and between-family analyses.
Because previous research has shown that birth order influences education and IQ (Bjerkedal et al. 2007; Black et al. 2005, 2011; Kristensen and Bjerkedal 2007), we also conduct additional analyses to estimate the degree to which the relationship between birth order and mortality is mediated by socioeconomic class and educational attainment, measured in adulthood. To do this, we estimate models in which we adjust for a common measure of SES, the Erikson, Goldthorpe, and Portocarero occupational class scheme (EGP) (Erikson and Goldthorpe 1992; Erikson et al. 1979), measured between ages 30 and 40 using information on occupation from the Swedish censuses in 1960, 1970, 1980, and 1990. The EGP variable used in this study is divided into the following categories: upper service class, including self-employed professionals (EGP = I); lower service class (EGP = II); routine nonmanual (EGP = III); self-employed nonprofessionals, farmers, and fishermen (EGP = IV); skilled and unskilled workers (EGP = VI–VII); and unknown/other. We adjust for educational attainment using information from the Swedish educational register, which has been updated continuously since 1987, using information on the highest achieved educational level starting from age 51. These additional analyses adjusting for socioeconomic and educational attainment are limited to individuals aged 52 years or older. We conduct separate analyses for men and women. We also present results based on this same older sample group without the inclusion of the variables for adult socioeconomic and educational attainment for the sake of comparison.
### Between-Family Analyses
Although the main analyses to be presented use a within-family comparison, we also conduct the full set of analyses described for the within-family comparisons using a between-family comparison approach to provide comparability to previous research on birth order and mortality. The between-family analyses also use discrete-time survival analysis, using sequential logistic regression. As in the within-family analyses, we estimate cluster-adjusted standard errors. In these between-family analyses, we include singletons (i.e., individuals from sibling groups with only one child) because this statistical approach does not require variance within the sibling group. In these between-family analyses, we adjust for the age of the ego, the age of the ego’s mother in the birth year of the ego, cohort, and the sibling set size of which the ego is a part. These results can be seen in Tables S4, S6, and S7 in Online Resource 1.
## Results
The main analyses presented in this article use discrete-time survival analyses in the form of logistic regressions, specifying fixed effects at the sibship level, to perform a within-family comparison; we compare only those siblings born to the same biological mother and father to one another. The results from the within-family analyses for all-cause mortality are shown in Fig. 1 for men and women, in Table 2 for men, and in Table 3 for women. These results show a positive and statistically significant relationship between birth order and mortality, with the hazard rising steadily with an increasing birth order for both men and women. This relationship is considerably stronger for women than for men. Although the analyses presented here pool individuals in sibship sizes ranging from two to six, we also conducted within-family comparison analyses that were specific to sibship size. These results are shown in Table S4 in Online Resource 1. The overall pattern of increasing mortality by birth order is consistent for the sibship-size-specific results. A substantially elevated hazard for higher birth orders remains even in families with two or three children, which are the most common family sizes in Sweden.
Fig. 1
Within-family discrete-time survival analyses: All-cause mortality by birth order, Swedish men and women born 1938–1960. Error bars are 95 % confidence intervals
To test the extent to which the relationship between birth order and mortality is mediated by social pathways, we conduct additional analyses in which we adjust for SES and educational attainment in adulthood. The results for all-cause mortality from these additional analyses are shown in Fig. 2 and Table 2 for men and in Fig. 3 and Table 3 for women. These additional analyses of men and women aged 52 or older use a different sample group because variables for socioeconomic and educational attainment are consistently available for the entire sample only at an older age. These results show that the association between birth order and mortality is weaker than that seen in the models based upon the full sample. Figure 2 shows that the relationship between birth order and mortality for men aged 52 and older is flat for birth orders 2 and 3 relative to firstborns, and then increases from birth order 4. However, the difference in the odds ratios is statistically significant only for birth orders 5 and 6. After we adjust for attained SES and educational attainment, the confidence intervals show that mortality is statistically significantly elevated only for individuals of birth order 6 relative to firstborns. Overall, adjusting for attained SES and educational attainment decreases the size of the parameter estimates, suggesting that this is a mediating factor between birth order and mortality for men in this age group. Nevertheless, the smaller and age-restricted nature of the sample means that the standard errors for these estimates are substantially larger than those seen in the results for the full sample. The results in Fig. 3 for women show that mortality is elevated from birth order 3 to birth order 6, but these results are not statistically significant. 
Adjusting for attained SES and educational attainment has little impact on the size of the parameter estimates, indicating that adult socioeconomic attainment is not an important mediating variable for birth order and mortality for women aged 52 and older.
In addition to the analyses of all-cause mortality, we study cause-specific mortality for several major causes of death. The cause-specific patterns for the within-family analyses are shown in Fig. 4 for men and Fig. 5 for women (see also Table S5 in Online Resource 1). For men, the odds of mortality attributable to diseases of the circulatory system are lower for birth orders 2 to 4 in comparison with firstborns, before leveling out for birth orders 5 and 6. In contrast, the odds of mortality attributable to neoplasms and cancers of the respiratory system are flat until birth orders 5 and 6, at which point they increase substantially. However, the confidence intervals, shown in Table S5, show that the differences are not statistically significant. The strongest pattern of association for men is clearly mortality attributable to external causes, which includes accidents, suicides, and events of undetermined intent. The odds of mortality attributable to external causes rise steadily up to birth order 4 before decreasing slightly for birth orders 5 and 6. The estimates from the analyses for women are substantially larger than those observed for men. Although the odds of mortality attributable to diseases of the circulatory system are slightly lower for later-born siblings, the odds of mortality attributable to neoplasms, cancers of the respiratory system, and external causes increase very substantially with an increasing birth order.
As described earlier in this article, we also conduct the full set of analyses using a between-family comparison approach, using discrete-time survival models in the form of logistic regressions, in order to provide a comparison to the previous research on birth order and mortality. This between-family analysis approach looks at the association between birth order and mortality across all families rather than conducting a within-family comparison of siblings born to the same mother and father. Online Resource 1 presents the all-cause mortality results from these between-family analyses for men (Table S6) and women (Table S7), as well as the cause-specific mortality results from the between-family analyses for men and women (Table S8). These results show that singletons often have mortality comparable to that of fourth-borns or later. We also conduct sibship-size-specific analyses using the between-family comparison approach, and these are presented in Table S4 alongside the sibship-size-specific analyses from the models using the within-family comparison approach.
Finally, we also conduct robustness checks to verify that the main results presented earlier are not skewed by the differences in the follow-up time for different cohorts. For these analyses, we used discrete-time survival models with a between-family comparison and restricted the follow-up period to age 65. We used the between-family comparison approach because the within-family approach requires that at least two children be alive in each sibship group. Because individuals must have the opportunity to live to age 65, we can include only cohorts born from 1938 to 1942; within such a narrow window, larger sibship groups are particularly unusual, because they require that multiple siblings be born within a limited period, which introduces endogeneity problems. The results of these robustness checks are shown in Table S9 (Online Resource 1). The results are still fully consistent with the main results. We also conduct analyses in which we restrict the follow-up period to age 60 and to age 55, conducted on the 1938–1947 and 1938–1952 cohorts, respectively. These results, also shown in Table S9, are consistent with our main finding. We also considered that the interaction between the gender composition of the sibling group and birth order might have an impact on mortality. However, these extra analyses, performed on sibling groups of up to three children, showed no substantively or statistically significant patterns of association. These results are available upon request from the authors.
## Discussion
The results of these analyses demonstrate that birth order matters for men’s and women’s mortality in adulthood after confounding attributable to factors shared among siblings is eliminated and other potential confounding factors are minimized. This is true for all-cause mortality as well as for several cause-specific mortality patterns, and it is particularly pronounced for mortality attributable to external causes for men and for mortality attributable to neoplasms, cancers of the respiratory system, and external causes for women. The overall pattern of these all-cause mortality results is consistent with those reported by Modin (2002). These results indicate that relative deprivation and cumulative disadvantage early in life can have long-term consequences, even extending into adulthood. Previous research has shown that sibship size is related to mortality both in childhood and in adulthood, but few studies have had a sufficiently large database to investigate the impact of birth order itself on mortality, much less to conduct a within-family analysis. We also find that the relative effect of birth order on mortality is greater between sisters than between brothers; and when looking at an older portion of our sample, we find that socioeconomic and educational attainment mediate the relationship between birth order and mortality only to a limited degree.
By using a within-family comparison approach, comparing only siblings born to the same biological mother–father pairing, we eliminate residual confounding with respect to shared factors among siblings. Furthermore, by adjusting for a number of confounders within the sibling group, such as maternal age at the time of birth and birth year, we minimize confounding in our results. However, although the results from the within-family analyses allow us to rule out confounding from factors shared among siblings, these results do not allow us to distinguish between the different hypotheses about how this relationship operates, including the confluence hypothesis (Zajonc 1976), the resource dilution hypothesis (Blake 1981), the hygiene hypothesis (Strachan 1989), and the family dynamics model (Sulloway 1996). Each of these hypotheses predicts that later-born siblings have poorer outcomes for IQ and educational attainment; that is, they predict that the observed association between birth order and mortality in adulthood is transmitted through social pathways, such as adult SES. The results from this study, however, show that the effect of birth order on mortality is largely the same after we adjust for adult SES and educational attainment. A limitation of this study is that we are able to adjust for socioeconomic and educational attainment only for individuals aged 52 or older. Previous research has indicated that the association between birth order and mortality is weaker in later stages of the life course than at younger ages (Modin 2002).
The resource dilution hypothesis and the confluence hypothesis in particular would predict that the degree to which parental resources are available for children by birth order and the degree to which they are exposed to an intellectually stimulating environment within the household at a crucial point of their lives are likely to have important implications for their long-term prospects for educational and intellectual development (Sénéchal and LeFevre 2002; Sénéchal et al. 1998). In addition, previous research on the relationships between birth order and IQ, educational attainment, and height has found that birth-order effects persist even among children raised in high-SES families (Bjerkedal et al. 2007; Kristensen and Bjerkedal 2010; Myrskylä et al. 2013). On the one hand, this would suggest support for the confluence hypothesis, given that even later-born children in high-SES families are unlikely to be left wanting in terms of access to resources. On the other hand, the observed birth-order effects are relative effects. Although the marginal gain from access to additional resources is likely to diminish past a certain threshold, presumably realized in high-SES families, there may still be an effect of relative access to resources. This would be consistent with research on social status and income inequality finding that even when all basic needs are satisfied, relative social standing and material resources still produce a gradient in health (Marmot 2004).
The results from the cause-specific mortality analyses, in which mortality attributable to external causes rises sharply by birth order for men and women and mortality attributable to cancers of the respiratory system rises sharply by birth order for women, suggest possible support for the family dynamics model. This model argues that children tend to occupy different niches within the family environment and that these intrafamily dynamics tend to produce firstborns whose values are more closely aligned with those of their parents and later-borns who are more rebellious and more likely to engage in risky activities (Sulloway 1996; Sulloway and Zweigenhaft 2010; Zweigenhaft and Von Ammon 2000). These predictions would be consistent with the patterns observed for mortality attributable to cancers of the respiratory system and mortality from external causes. Although this study has primarily focused on the social pathways by which birth order is linked to outcomes in adulthood, physiological pathways accounted for by prenatal or gestational factors are also possible (Gualtieri and Hicks 1985). However, the evidence for physiological pathways predicting a negative relationship between birth order and health is sparse. Research on sibling groups in which children have died in infancy and on fully adopted sibling groups indicates that it is the social ordering of the sibling set rather than biological birth order that explains the consistently observed birth-order effects (Barclay 2015b; Kristensen and Bjerkedal 2007).
The within-family comparison results show that the relative effect of birth order on mortality is greater between sisters than it is between brothers. Although the reason for this is not clear, previous research has shown that women are much more closely involved in kin work, such as maintaining kinship ties (Hagestad 1986; Rossi and Rossi 1990; Young and Willmott 1957). It may be that these closer ties to kin mean that women are more affected by intrafamily interactions than are men. Alternatively, the nature of our sample means that we do not observe mortality at the youngest ages. Given that men have higher rates of mortality at all ages, forces of selection mean that those who survive to enter our analysis are less frail. If part of this selection is related to birth order, this would partially account for the stronger effects observed for women. Another, related explanation is that because we observe mortality before very old age and because mortality in our study period is more common for men than for women, our observed mortality is likely to be concentrated in more vulnerable populations in the portion of our sample that is female. If birth-order effects are stronger in more vulnerable populations, this could partially explain these results. Previous research also indicates gender differences in the degree to which later-life outcomes, such as the risk of cardiac disease, diabetes, and obesity, are affected by early-life socioeconomic conditions (Hamil-Luker and O’Rand 2007; Khlat et al. 2009; Maty et al. 2008). This research indicates that women are more responsive to early-life conditions than men. It has been argued that because of institutionalized structures that disadvantage women relative to men—most prominently in terms of paid work and household conditions (Read and Gorman 2010)—women might be more heavily affected by the chains of risk that follow early-life disadvantage than are men (Hamil-Luker and O’Rand 2007). 
Although men have higher rates of mortality at all ages relative to women, when stratifying analyses by gender, the differences between women coming from different socioeconomic backgrounds are greater than they are for men. Given that birth order is a marker of early-life resource availability and intellectual stimulation within the household, the results presented in this article are consistent with that previous research on socioeconomic conditions in childhood and later-life health outcomes.
A potential alternative explanation for the pattern observed for mortality attributable to cancers of the respiratory system is sibling influence. Research in the fields of social psychology and social networks has consistently and convincingly demonstrated the importance of alters, including parents and siblings, for shaping health behaviors (Christakis and Fowler 2008; Leonardi-Bee et al. 2011; Rosenquist et al. 2010). Studies more particularly focused on sibling influences show that younger siblings—those with a higher birth order—are more likely to begin smoking if an older sibling already smokes, but this relationship is not reversed (Harakeh et al. 2007). There are also indications that, because of this pattern of smoking uptake by younger siblings, they are likely to begin smoking at younger ages (Bard and Rodgers 2003). Smoking initiation at younger ages is associated with a greater daily cigarette consumption as well as a stronger tendency toward smoking continuation, particularly when smoking initiation begins before age 16 (Chen and Millar 1998; Khuder et al. 1999). These findings suggest that an individual with a higher birth order will be more likely to smoke in the long term, with obvious implications for the future health conditions of that individual’s respiratory system, regardless of his or her socioeconomic trajectory over the life course. Although smoking behavior would also impact the health of the circulatory system, previous research indicates that younger siblings demonstrate both a higher rate of alcoholism and a greater proclivity to initiate developmentally inappropriate activities at younger ages (Blane and Barry 1973; Rodgers and Rowe 1988).
The positive relationship between birth order and both cancers of the respiratory system and external causes for women suggests some support for this hypothesis, but unfortunately we do not have data on smoking or drinking behavior that would allow us to test the degree to which these factors may mediate that relationship.
The nature of the data used in this study means that we have studied adult mortality, at ages 30 to 69, with different birth cohorts contributing exposure for different ages. Although most deaths in Sweden take place after age 70, mortality before this age indicates that the exposure can explain variation in mortality among the healthiest and most robust section of the population. Unfortunately, we were not able to study mortality at age 70 and older because the data contained information on mortality only up to 2007, and our earliest cohort was born in 1938. In the future, it will be valuable to address whether the birth-order effect on mortality persists among the elderly. In Sweden, the majority of deaths at old ages are attributable to cancer and diseases of the circulatory system (Janssen and Kunst 2005; Socialstyrelsen 2010). Given that the strongest pattern for the relationship between birth order and mortality is seen for neoplasms in general, and for cancers of the respiratory system more particularly for women, one might speculate that the birth-order effect on mortality attributable to neoplasms could persist for women into older ages. For men, this is less clear because the relationship between birth order and mortality attributable to neoplasms generally, cancers of the respiratory system more particularly, and diseases of the circulatory system, is weaker than that seen for women. However, because we do not study individuals aged 70 and older, these suggestions about mortality patterns by birth order among the elderly should be interpreted as conjecture, particularly given that previous research has suggested that rates of mortality attributable to specific causes differ not only by age but also across cohorts (Janssen and Kunst 2005).
This study has many strengths, but certain factors are difficult to account for when using register data. For example, we have not been able to test the specific mechanisms that potentially link birth order to mortality, and this will be an important dimension of this research question for future studies to address. In this study, we look at birth order within sibships, where a sibship is defined as a group of children born from the same biological mother–father pairing. Our research excludes half-brothers or half-sisters who may, practically speaking, be part of a sibship. This can be seen as both an advantage and a disadvantage. Indeed, a general shortcoming is that we are not able to observe which children are in the household—an important factor when considering the potential importance of a shared pool of resources and how this might be related to later health outcomes. We also do not have access to information on birth weight. Firstborns consistently have a lower birth weight than later-born siblings (Magnus et al. 1985), and birth weight has been shown to be positively associated with educational attainment and earnings in adulthood (Black et al. 2007). Thus, the estimates for the effect of birth order on mortality presented in this study represent a conservative lower bound, and accounting for birth weight would increase the point estimates. An additional factor that we do not adjust for in our models is the potential role of the time interval between the births of siblings. However, birth intervals are endogenous and will be strongly related to the SES of the parents, meaning that the extent to which the results of further analyses would further clarify the underlying processes might be limited. Furthermore, it is not possible to overcome this endogeneity by using a within-family comparison because the values for the interaction between birth order and birth intervals are constant within a sibling group. 
Overall, the results of this study demonstrate how social conditions within the family of origin can significantly influence long-term health outcomes.
## References
Altieri, A., & Hemminki, K. (2007). Number of siblings and the risk of solid tumours: A nationwide study. British Journal of Cancer, 96, 1755–1759. doi:10.1038/sj.bjc.6603760

Amirian, E., Scheurer, M. E., & Bondy, M. L. (2010). The association between birth order, sibship size and glioma development in adulthood. International Journal of Cancer, 126, 2752–2756.

Andersson, G., & Kolk, M. (2011). Trends in childbearing and nuptiality in Sweden: An update with data up to 2007. Finnish Yearbook of Population Research, 46, 21–29.

Andersson, G., Rønsen, M., Knudsen, L. B., Lappegård, T., Neyer, G., Skrede, K., . . . Vikat, A. (2009). Cohort fertility patterns in the Nordic countries. Demographic Research, 20(article 14), 313–352. doi:10.4054/DemRes.2009.20.14

Barclay, K. J. (2015a). A within-family analysis of birth order and intelligence using population conscription data on Swedish men. Intelligence, 49, 134–143. doi:10.1016/j.intell.2014.12.007

Barclay, K. J. (2015b). Birth order and educational attainment: Evidence from fully adopted sibling groups. Intelligence, 48, 109–122. doi:10.1016/j.intell.2014.10.009

Bard, D. E., & Rodgers, J. L. (2003). Sibling influence on smoking behavior: A within-family look at explanations for a birth-order effect. Journal of Applied Social Psychology, 33, 1773–1795. doi:10.1111/j.1559-1816.2003.tb02080.x

Batty, G. D., Deary, I. J., & Gottfredson, L. S. (2007). Premorbid (early life) IQ and later mortality risk: Systematic review. Annals of Epidemiology, 17, 278–288. doi:10.1016/j.annepidem.2006.07.010

Bengtsson, T., & Broström, G. (2009). Do conditions in early life affect old-age mortality directly or indirectly? Evidence from 19th-century rural Sweden. Social Science & Medicine, 68, 1583–1590. doi:10.1016/j.socscimed.2009.02.020

Bengtsson, T., & Mineau, G. P. (2009). Early-life effects on socio-economic performance and mortality in later life: A full life-course approach using contemporary and historical sources. Social Science & Medicine, 68, 1561–1564. doi:10.1016/j.socscimed.2009.02.012

Bevier, M., Weires, M., Thomsen, H., Sundquist, J., & Hemminki, K. (2011). Influence of family size and birth order on risk of cancer: A population-based study. BMC Cancer, 11. doi:10.1186/1471-2407-11-163

Bjerkedal, T., Kristensen, P., Skjeret, G. A., & Brevik, J. I. (2007). Intelligence test scores and birth order among young Norwegian men (conscripts) analyzed within and between families. Intelligence, 35, 503–514. doi:10.1016/j.intell.2007.01.004

Black, S. E., Devereux, P. J., & Salvanes, K. G. (2005). The more the merrier? The effect of family size and birth order on children’s education. Quarterly Journal of Economics, 120, 669–700.

Black, S. E., Devereux, P. J., & Salvanes, K. G. (2007). From the cradle to the labor market? The effect of birth weight on adult outcomes. Quarterly Journal of Economics, 122, 409–439. doi:10.1162/qjec.122.1.409

Black, S. E., Devereux, P. J., & Salvanes, K. G. (2011). Older and wiser? Birth order and IQ of young men. CESifo Economic Studies, 57, 103–120. doi:10.1093/cesifo/ifq022

Blake, J. (1981). Family size and the quality of children. Demography, 18, 421–442. doi:10.2307/2060941

Blane, H. T., & Barry, H. (1973). Birth order and alcoholism: A review. Quarterly Journal of Studies on Alcohol, 34, 837–852.

Bradley, R. H., & Corwyn, R. F. (2002). Socioeconomic status and child development. Annual Review of Psychology, 53, 371–399. doi:10.1146/annurev.psych.53.100901.135233

Buckles, K., & Kolka, S. (2014). Prenatal investments, breastfeeding, and birth order. Social Science & Medicine, 118, 66–70. doi:10.1016/j.socscimed.2014.07.055

Campbell, F., Conti, G., Heckman, J., Moon, S. H., Pinto, R., Pungello, E., & Pan, Y. (2014). Early childhood investments substantially boost adult health. Science, 343, 1478–1485. doi:10.1126/science.1248429

Chen, J., & Millar, W. J. (1998). Age of smoking initiation: Implications for quitting. Health Reports, 9(4), 39–46.

Christakis, N. A., & Fowler, J. H. (2008). The collective dynamics of smoking in a large social network. New England Journal of Medicine, 358, 2249–2258. doi:10.1056/NEJMsa0706154

Elliott, B. A. (1992). Birth order and health: Major issues. Social Science & Medicine, 35, 443–452. doi:10.1016/0277-9536(92)90337-P

Erikson, R., & Goldthorpe, J. H. (1992). The constant flux: A study of class mobility in industrial societies. Oxford, UK: Clarendon Press.

Erikson, R., Goldthorpe, J. H., & Portocarero, L. (1979). Intergenerational class mobility in three Western European societies: England, France and Sweden. British Journal of Sociology, 30, 415–441. doi:10.2307/589632

Gluckman, P. D., Hanson, M. A., Cooper, C., & Thornburg, K. L. (2008). Effect of in utero and early-life conditions on adult health and disease. New England Journal of Medicine, 359, 61–73. doi:10.1056/NEJMra0708473

Greene, W. H. (2012). Econometric analysis (7th ed.). Prentice Hall.

Gualtieri, T., & Hicks, R. E. (1985). An immunoreactive theory of selective male affliction. Behavioral and Brain Sciences, 8, 427–441. doi:10.1017/S0140525X00001023

Hagestad, G. O. (1986). The family: Women and grandparents as kin-keepers. In A. J. Pifer & L. Bronte (Eds.), Our aging society: Paradox and promise (pp. 141–160). New York, NY: W.W. Norton.

Hamil-Luker, J., & O’Rand, A. M. (2007). Gender differences in the link between childhood socioeconomic conditions and heart attack risk in adulthood. Demography, 44, 137–158. doi:10.1353/dem.2007.0004

Harakeh, Z., Engels, R. C. M. E., Vermulst, A. A., De Vries, H., & Scholte, R. H. J. (2007). The influence of best friends and siblings on adolescent smoking: A longitudinal study. Psychology and Health, 22, 269–289. doi:10.1080/14768320600843218

Härkönen, J. (2014). Birth order effects on educational attainment and educational transitions in West Germany
.
European Sociological Review
,
30
,
166
179
. 10.1093/esr/jct027
Hayward, M. D., & Gorman, B. K. (
2004
).
The long arm of childhood: The influence of early-life social conditions on men’s mortality
.
Demography
,
41
,
87
107
. 10.1353/dem.2004.0005
Heckman, J. (
2006
).
Skill formation and the economics of investing in disadvantaged children
.
Science
,
312
,
1900
1902
. 10.1126/science.1128898
Heckman, J. J., Moon, S. H., Pinto, R., Savelyev, P. A., & Yavitz, A. (
2010
).
.
Journal of Public Economics
,
94
,
114
128
. 10.1016/j.jpubeco.2009.11.001
Hemminki, K., & Mutanen, P. (
2001
).
Birth order, family size, and the risk of cancer in young and middle-aged adults
.
British Journal of Cancer
,
84
,
1466
1471
. 10.1054/bjoc.2001.1811
Hertwig, R., Davis, J. N., & Sulloway, F. J. (
2002
).
Parental investment: How an equity motive can produce inequality
.
Psychological Bulletin
,
128
,
728
745
. 10.1037/0033-2909.128.5.728
Holman, R. C., Shay, D. K., Curns, A. T., Lingappa, J. R., & Anderson, L. J. (
2003
).
Risk factors for bronchiolitis-associated deaths among infants in the United States
.
Paediatric Infectious Disease Journal
,
22
,
483
489
.
Hosmer, D. W.Jr, Lemeshow, S. A., & Sturdivant, R. X. (
2013
).
Applied logistic regression
. 3
Hoboken, NJ
:
Wiley
.
Janssen, F., & Kunst, A. E. (
2004
).
ICD coding changes and discontinuities in trends in cause-specific mortality in six European countries, 1950–99
.
Bulletin of the World Health Organization
,
82
,
904
913
.
Janssen, F., & Kunst, A. E. (
2005
).
Cohort patterns in mortality trends among the elderly in seven European countries, 1950–99
.
International Journal of Epidemiology
,
34
,
1149
1159
. 10.1093/ije/dyi123
Khlat, M., Jusot, F., & Ville, I. (
2009
).
Social origins, early hardship and obesity: A strong association in women, but not in men?
.
Social Science & Medicine
,
68
,
1692
1699
. 10.1016/j.socscimed.2009.02.024
Khuder, S. A., Dayal, H. H., & Mutgi, A. B. (
1999
).
Age at smoking onset and its effect on smoking cessation
.
,
24
,
673
677
. 10.1016/S0306-4603(98)00113-0
Knudsen, E. I., Heckman, J. J., Cameron, J. L., & Shonkoff, J. P. (
2006
).
Economic, neurobiological, and behavioral perspectives on building America’s future workforce
.
Proceedings of the National Academy of Sciences
,
103
,
10155
10162
. 10.1073/pnas.0600888103
Kristensen, P., & Bjerkedal, T. (
2007
).
Explaining the relation between birth order and intelligence
.
Science
,
316
,
1717
. 10.1126/science.1141493
Kristensen, P., & Bjerkedal, T. (
2010
).
Educational attainment of 25 year old Norwegians according to birth order and gender
.
Intelligence
,
38
,
123
136
. 10.1016/j.intell.2009.08.003
Lager, A. C. J., & Torssander, J. (
2012
).
Causal effect of education on mortality in a quasi-experiment on 1.2 million Swedes
.
Proceedings of the National Academy of Sciences
,
109
,
8461
8466
. 10.1073/pnas.1105839109
Leonardi-Bee, J., Jere, M. L., & Britton, J. (
2011
).
Exposure to parental and sibling smoking and the risk of smoking uptake in childhood and adolescence: A systematic review and meta-analysis
.
Thorax
,
66
,
847
855
. 10.1136/thx.2010.153379
Mackenbach, J. P., Kunst, A. E., Cavelaars, A. E. J. M., Groenhof, F., & Geurts, J. J. M. (
1997
).
Socioeconomic inequalities in morbidity and mortality in Western Europe
.
Lancet
,
349
,
1655
1659
. 10.1016/S0140-6736(96)07226-1
Magnus, P., Berg, K., & Bjerkedal, T. (
1985
).
The association of parity and birth weight: Testing the sensitization hypothesis
.
Early Human Development
,
12
,
49
54
. 10.1016/0378-3782(85)90136-7
Marmot, M. (
2004
).
The status syndrome: How social standing directly affects our health and longevity
.
London, UK
:
Bloomsbury
.
Maty, S. C., Lynch, J. W., Raghunathan, T. E., & Kaplan, G. A. (
2008
).
Childhood socioeconomic position, gender, adult body mass index, and incidence of type 2 diabetes mellitus over 34 years in the Alameda County Study
.
American Journal of Public Health
,
98
,
1486
1494
. 10.2105/AJPH.2007.123653
Modin, B. (
2002
).
Birth order and mortality: A life-long follow-up of 14,200 boys and girls born in early 20th century Sweden
.
Social Science & Medicine
,
54
,
1051
1064
. 10.1016/S0277-9536(01)00080-6
Myrskylä, M., & Fenelon, A. (
2012
).
Maternal age and offspring adult health: Evidence from the Health and Retirement Study
.
Demography
,
49
,
1231
1257
. 10.1007/s13524-012-0132-x
Myrskylä, M., Silventoinen, K., Jelenkovic, A., Tynelius, P., & Rasmussen, F. (
2013
).
The association between height and birth order: Evidence from 652 518 Swedish men
.
Journal of Epidemiology and Community Health
,
67
,
571
577
. 10.1136/jech-2012-202296
O’Leary, S. R., Wingard, D. L., Edelstein, S. L., Criqui, M. H., Tucker, J. S., & Friedman, H. S. (
1996
).
Is birth order associated with adult mortality?
.
Annals of Epidemiology
,
6
,
34
40
. 10.1016/1047-2797(95)00098-4
Price, J. (
2008
).
Parent-child quality time: Does birth order matter?
.
Journal of Human Resources
,
43
,
240
265
. 10.1353/jhr.2008.0023
Primo, D. M., Jacobsmeier, M. L., & Milyo, J. (
2007
).
The practical researcher: Estimating the impact of state policies and institutions with mixed-level data
.
State Politics and Policy Quarterly
,
7
,
446
459
. 10.1177/153244000700700405
Read, J. G., & Gorman, B. K. (
2010
).
Gender and health inequality
.
Annual Review of Sociology
,
36
,
371
386
. 10.1146/annurev.soc.012809.102535
Richards, S. J., Kirkby, J. G., & Currie, I. D. (
2006
).
The importance of year of birth in two-dimensional mortality data
.
British Actuarial Journal
,
12
,
5
61
. 10.1017/S1357321700004682
Richiardi, L., Akre, O., Lambe, M., Granath, F., Montgomery, S. M., & Ekbom, A. (
2004
).
Birth order, sibship size, and risk for germ-cell testicular cancer
.
Epidemiology
,
15
,
323
329
. 10.1097/01.ede.0000120043.45185.7e
Rodgers, J. L. (
2001
).
What causes birth order-intelligence patterns? The admixture hypothesis, revived
.
American Psychologist
,
56
,
505
510
. 10.1037/0003-066X.56.6-7.505
Rodgers, J. L., Cleveland, H. H., van den Oord, E., & Rowe, D. C. (
2000
).
Resolving the debate over birth order, family size, and intelligence
.
American Psychologist
,
55
,
599
612
. 10.1037/0003-066X.55.6.599
Rodgers, J. L., & Rowe, D. C. (
1988
).
Influence of siblings on adolescent sexual behavior
.
Developmental Psychology
,
24
,
722
728
. 10.1037/0012-1649.24.5.722
Rosenquist, J. N., Murabito, J., Fowler, J. H., & Christakis, N. A. (
2010
).
The spread of alcohol consumption behavior in a large social network
.
Annals of Internal Medicine
,
152
,
426
433
. 10.7326/0003-4819-152-7-201004060-00007
Rossi, A. S., & Rossi, P. H. (
1990
).
Of human bonding: Parent-child relations across the life course
.
New York, NY
:
Aldine de Gruyter
.
SCB
. (
2011
).
Multi-generation register 2010: A description of contents and quality
.
Stockholm
:
Statistics Sweden
.
Schmidt, C. O., & Kohlmann, T. (
2008
).
When to use the odds ratio or the relative risk?
.
International Journal of Public Health
,
53
,
165
167
. 10.1007/s00038-008-7068-3
Sénéchal, M., & LeFevre, J-A (
2002
).
Parental involvement in the development of children’s reading skill: A five-year longitudinal study
.
Child Development
,
73
,
445
460
. 10.1111/1467-8624.00417
Sénéchal, M., Lefevre, J-A, Thomas, E. M., & Daley, K. E. (
1998
).
Differential effects of home literacy experiences on the development of oral and written language
.
,
33
,
96
116
. 10.1598/RRQ.33.1.5
Smith, K. R., Mineau, G. P., Garibotti, G., & Kerber, R. (
2009
).
Effects of childhood and middle-adulthood family conditions on later-life mortality: Evidence from the Utah Population Database, 1850–2002
.
Social Science & Medicine
,
68
,
1649
1658
. 10.1016/j.socscimed.2009.02.010
Socialstyrelsen
. (
2010
).
Dödsorsaker 2010 [Causes of death 2010]
. (
2010
).
Stockholm, Sweden
:
Socialstyrelsen
.
Strachan, D. P. (
1989
).
Hay fever, hygiene, and household size
.
British Medical Journal
,
299
,
1259
1260
. 10.1136/bmj.299.6710.1259
Sulloway, F. J. (
1996
).
Born to rebel: Birth order, family dynamics, and creative lives
.
London, UK
:
Little, Brown and Company
.
Sulloway, F. J., & Zweigenhaft, R. L. (
2010
).
Birth order and risk taking in athletics: A meta-analysis and study of major league baseball
.
Personality and Social Psychology Review
,
14
,
402
416
. 10.1177/1088868310361241
Sundström, M., & Duvander, A-ZE (
2002
).
Gender division of childcare and the sharing of parental leave among new parents in Sweden
.
European Sociological Review
,
18
,
433
447
. 10.1093/esr/18.4.433
Torssander, J., & Erikson, R. (
2010
).
Stratification and mortality—A comparison of education, class, status, and income
.
European Sociological Review
,
26
,
465
474
. 10.1093/esr/jcp034
United Nations, Department of Economic and Social Affairs, Population Division
. (
2013
).
World Mortality Report 2013
. (
2013
).
New York, NY
:
United Nations
.
Willson, A. E., Shuey, K. M., & Elder, G. H.Jr (
2007
).
Cumulative advantage processes as mechanisms of inequality in life course health
.
American Journal of Sociology
,
112
,
1886
1924
. 10.1086/512712
Young, M., & Willmott, P. (
1957
).
Family and kinship in East London
.
Glencoe, IL
:
Free Press
.
Zajonc, R. B. (
1976
).
Family configuration and intelligence
.
Science
,
192
,
227
236
. 10.1126/science.192.4236.227
Zajonc, R. B., & Markus, G. B. (
1975
).
Birth order and intellectual development
.
Psychological Review
,
82
,
74
88
. 10.1037/h0076229
Zhang, J., & Kai, F. Y. (
1998
).
What’s the relative risk?
.
Journal of the American Medical Association
,
280
,
1690
1691
. 10.1001/jama.280.19.1690
Zweigenhaft, R. L., & Von Ammon, J. (
2000
).
Birth order and civil disobedience: A test of Sulloway’s “born to rebel” hypothesis
.
Journal of Social Psychology
,
140
,
624
627
. 10.1080/00224540009600502
http://mathhelpforum.com/algebra/143258-re-arranging-equation.html
Are you sure you wrote this correctly? $(2-1) = 1$, so ...
$\frac{1}{n-1}+\frac{1}{n(n-1)} = \frac{n}{n(n-1)}+\frac{1}{n(n-1)} = \frac{n+1}{n(n-1)}$
https://socratic.org/questions/what-is-the-formula-of-sodium-sulphate
# What is the formula of sodium sulphate?
$\text{Na}_2\text{SO}_4$
This was known historically as "Glauber's salt", and is important today as a commodity chemical in detergent and paper processing.
http://www.zora.uzh.ch/id/eprint/122836/
# Measurement of the branching fraction ratio $\mathcal{B}(B_c^+ \rightarrow \psi(2S)\pi^+)/\mathcal{B}(B_c^+ \rightarrow J/\psi \pi^+)$ - Zurich Open Repository and Archive
LHCb collaboration; Anderson, J; Bowen, E; Bursche, A; Chiapolini, N; Chrzaszcz, M; Elsasser, C; Lionetto, F; Serra, N; Steinkamp, O; Storaci, B; Straumann, U; Vollhardt, A; Weiden, A; et al (2015). Measurement of the branching fraction ratio $\mathcal{B}(B_c^+ \rightarrow \psi(2S)\pi^+)/\mathcal{B}(B_c^+ \rightarrow J/\psi \pi^+)$. Physical Review D (Particles, Fields, Gravitation and Cosmology), 92:072007.
## Abstract
Using $pp$ collision data collected by LHCb at center-of-mass energies $\sqrt{s}$ = 7 TeV and 8 TeV, corresponding to an integrated luminosity of 3 fb$^{-1}$, the ratio of the branching fraction of the $B_c^+ \rightarrow \psi(2S)\pi^+$ decay relative to that of the $B_c^+ \rightarrow J/\psi\pi^+$ decay is measured to be 0.268 $\pm$ 0.032 (stat) $\pm$ 0.007 (syst) $\pm$ 0.006 (BF). The first uncertainty is statistical, the second is systematic, and the third is due to the uncertainties on the branching fractions of the $J/\psi \rightarrow \mu^+\mu^-$ and $\psi(2S) \rightarrow \mu^+\mu^-$ decays. This measurement is consistent with the previous LHCb result, and the statistical uncertainty is halved.
Item Type: Journal Article, refereed, original work
Community: 07 Faculty of Science > Physics Institute
Dewey Decimal Classification: 530 Physics
Language: English
Date: 2015
Deposited On: 17 Feb 2016 17:57
Last Modified: 05 Apr 2016 20:09
Publisher: American Physical Society
ISSN: 1550-7998
Publisher DOI: https://doi.org/10.1103/PhysRevD.92.072007
Related URL: arXiv:1507.03516
http://angg.twu.net/eev-current/article/eev.txt.html
\def\myversion{2005jul25 2:05}
This is an asciification of:
<http://angg.twu.net/eev-current/article/eev.pdf>
There will be a TeXinfo version at some point, but don't hold
your breath.
Quick index:
«.abstract» (to "abstract")
«.note» (to "note")
«.three-kinds-of-interfaces» (to "three-kinds-of-interfaces")
«.one-thing-well» (to "one-thing-well")
«.sending-commands» (to "sending-commands")
«.forward-and-back» (to "forward-and-back")
«.returning» (to "returning")
«.local-copies» (to "local-copies")
«.glyphs» (to "glyphs")
«.compose-pairs» (to "compose-pairs")
«.delimited-regions» (to "delimited-regions")
«.channels» (to "channels")
«.channels-implementation» (to "channels-implementation")
«.anchors» (to "anchors")
«.e-scripts» (to "e-scripts")
«.splitting-eev.el» (to "splitting-eev.el")
«.eesteps» (to "eesteps")
«.big-modular-e-scripts» (to "big-modular-e-scripts")
«.iskidip» (to "iskidip")
«.availability» (to "availability")
Emacs and eev, or: How to Automate Almost Everything
====================================================
Eduardo Ochs
http://angg.twu.net/
edrx@mat.puc-rio.br
Not currently affiliated to any institution.
Snail-mail address: R.\ Jardim Botânico 622/103B, Jardim
Botânico, Rio de Janeiro, RJ, Brazil, CEP 22461-000.
Abstract
--------
«abstract» (to ".abstract")
Interacting with programs with command-line interfaces always
involves a bit of line editing, and each CLI program tends to
implement its own minimalistic editing features independently. We
show a way of centralizing these editing tasks by making these
programs receive commands that are prepared in, and sent from, Emacs.
The resulting system is a kind of Emacs- and Emacs Lisp-based
"universal scripting language" in which commands can be sent both
to external programs and to Emacs itself, either in blocks or
step-by-step, under very fine control from the user.
Note: this is a working draft that has many pieces missing and needs
urgent revision on the pieces it has. Current version: [see top].
The latest version can be found at
<http://angg.twu.net/eev-current/article/>, and two animations
(in Flash) showing eev at work can be found at:
<http://angg.twu.net/eev-current/anim/channels.anim.html> and
<http://angg.twu.net/eev-current/anim/gdb.anim.html>.
1. Three kinds of interfaces
----------------------------
«three-kinds-of-interfaces» (to ".three-kinds-of-interfaces")
Interactive programs in a Un*x system(1) can have basically three
kinds of interfaces: they can be mouse-oriented, like most programs
with graphical interfaces nowadays, in which commands are given by
clicking with the mouse; they can be character-oriented, like most
editors and mail readers, in which most commands are single keys or
short sequences of keys; and they can be line-oriented, as, for
example, shells are: in a shell commands are given by editing a full
line and then typing "enter" to process that line.
It is commonplace to classify computer users in a spectrum where the
"users" are at one extreme and the "programmers" are at the other;
the "users" tend to use only mouse-oriented and character-oriented
programs, and the "programmers" only character-oriented and
line-oriented programs.
In this paper we will show a way to "automate" interactions with
line-oriented programs and, though not so well, with character-oriented
programs; more precisely, it is a way to edit commands for these
programs in a single central place --- Emacs --- and then send them to
the programs; re-sending the same commands afterwards, with or without
modifications, then becomes very easy.
This method ("e-scripts") cannot be used to send commands to
mouse-oriented programs --- at least not without introducing several
new tricks. But "programmers" using Un*x systems usually see most
mouse-oriented programs --- except for a few that are /intrinsically/
mouse-oriented, like drawing programs --- as being just wrappers
around line-oriented programs that perform the same tasks with
different interfaces; and so, most mouse-oriented programs "do not
matter", and our method of automating interactions using e-scripts
can be used to "automate almost everything"; hence the title of the
paper.
(1): Actually we are more interested in GNU systems than in "real"
Unix systems; the reasons will become clear in section nnn. (By the
way: the term "Unix" is Copyright (C) Bell Labs.)
2. "Make each program do one thing well"
------------------------------------------
«one-thing-well» (to ".one-thing-well")
One of the tenets of the Unix philosophy is that each program should
do one thing, and do it well; this is a good design rule for Unix
programs because the system makes it easy to invoke external programs
to perform tasks, and to connect programs.
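As a minimal illustration of this tenet (the commands and strings are
ours, not from the article): each program below does one small job,
and the shell connects them with a pipe.

```shell
# Each program does one thing: printf emits lines, grep filters and
# counts them. The shell is the glue that connects the two.
count=$(printf 'foo\nbar\nbaz\n' | grep -c 'ba')
echo "lines containing 'ba': $count"
```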
Some parts of a Unix system are more like "meta-programs" or
"sub-programs" than like self-contained programs that do some
clearly useful task by themselves. Shells, for example, are
meta-programs: their main function is to allow users to invoke "real
programs" and to connect these programs using pipes, redirections,
control structures (if, for, etc.) and Unix "signals". On the other
hand, libraries are sub-programs: for example, on GNU systems there's
a library called GNU Readline that line-oriented programs can use to
get input; if a program, say, bc (a calculator) gets its input through
readline(...) instead of plain fgets(...) then its line-oriented
interface will have a little more functionality: it will allow the
user to do some minimal editing in the current line, and also to
recall, edit and issue again some of the latest commands given.
-----------------------------------
\label{ee}
Many line-oriented programs allow "scripting", which means executing
commands from a file. For example, in most shells we can say "source
~/ee.sh", and the shell will then execute the commands in the file
~/ee.sh. There are other ways of executing commands from a file ---
like "sh ~/ee.sh" --- but the one with "source" is the one that
we'll be more interested in, because it is closer to running the
commands in ~/ee.sh one by one by hand: for example, with "source
~/ee.sh" the commands that change parameters of the shell --- like
the current directory and the environment variables --- will work in
the obvious way, while with "sh ~/ee.sh" they would only change
the parameters of a temporary sub-shell; the current directory and the
environment variables of the present shell would be "protected".
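A quick way to see this difference in a terminal (the file and
variable names here are illustrative, not from the article): a script
run with "sh" executes in a sub-shell, so its assignments and "cd"
vanish, while the same script run with "source" (POSIX spelling: ".")
changes the current shell.

```shell
# Prepare a script that tries to change the shell's parameters.
tmpdir=$(mktemp -d)
cat > "$tmpdir/ee.sh" <<'EOF'
MYVAR=changed
cd /
EOF

MYVAR=original
sh "$tmpdir/ee.sh"           # runs in a sub-shell: no effect here
echo "after sh:     MYVAR=$MYVAR"

. "$tmpdir/ee.sh"            # runs in the current shell: both take effect
echo "after source: MYVAR=$MYVAR pwd=$(pwd)"
rm -rf "$tmpdir"
```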
So, it is possible to prepare commands for a shell (or for scriptable
line-oriented programs; for arbitrary line-oriented programs see
section nnn) in several ways: by typing them at the shell's interface
--- and if the shell uses Readline its interface can be reasonably
friendly --- or, alternatively, by using a text editor to edit a file,
say, ~/ee.sh, and by then "executing" that file with "source
~/ee.sh". "source ~/ee.sh" is a lot of keystrokes, but that
can be shortened if we define a shell function: by putting
function ee () { source ~/ee.sh; }
in the shell's initialization file (~/.bashrc, ~/.zshrc, ...) we
can reduce "source ~/ee.sh" to just "ee": e, e, enter --- three
keystrokes.
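The whole convention can be sketched in plain shell (a temporary
directory stands in for "~" here, and the saved region is invented,
so the example is self-contained): the editor writes a region to the
temporary script file, and "ee" sources it.

```shell
tmp=$(mktemp -d)              # stands in for "~" in this sketch
ee () { . "$tmp/ee.sh"; }     # the three-keystroke command: e, e, enter

# Pretend the editor just saved this region to the temporary script:
printf '%s\n' 'GREETING="hello from ee"' > "$tmp/ee.sh"

ee                            # run what the editor prepared
echo "$GREETING"
rm -rf "$tmp"
```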
We just saw how a shell --- or, by the way, any line-oriented program
in which we can define an `ee' function like we did for the shell ---
can receive commands prepared in an external editor and stored in a
certain file; let's refer to that file, ~/ee.sh, as a "temporary
script file". Now it remains to see how an external text editor can
"send commands to the shell", i.e., how to make the editor save some
commands in a temporary script file in a convenient way, that is,
without using too many keystrokes...
4. Sending commands
-------------------
«sending-commands» (to ".sending-commands")
GNU Emacs, "the extensible, self-documenting text-editor" ([S79]),
does at least two things very well: one is to edit text, and so it can
be used to edit temporary scripts, and thus to send commands to shells
and to line-oriented programs with `ee' functions; and the other one
is to run Lisp. Lisp is a powerful programming language, and (at least
in principle!) any action or series of actions can be expressed as a
program in Lisp; the first thing that we want is a way to "mark a
region of text and send it as commands to a shell", by saving it
in a temporary script file. We implement that in two ways:
1: (defun ee (s e)
2: "Save the region in a temporary script"
3: (interactive "r")
4: (write-region s e "~/ee.sh"))
5:
6: (defun eev (s e)
7: "Like ee', but the script executes in verbose mode"
8: (interactive "r")
9: (write-region
10: (concat "set -v\n" (buffer-substring s e)
11:         "\nset +v")
12: nil "~/ee.sh"))
`ee' (the name stands for something like `emacs-execute') just saves
the currently-marked region of text to ~/ee.sh; `eev' (for
something like `emacs-execute-verbose') does the same but adding to
the beginning of the temporary script a command to put the shell in
"verbose mode", where each command is displayed before being
executed, and also adding at the end a command to leave verbose mode.
We can now use `ee' and `eev' to send a block of commands to a shell:
just select a region and then run `ee' or `eev'. More precisely: mark
a region, that is, put the cursor at one of the extremities of the
region, then type C-SPC to set Emacs's "mark" to that position, then
go to the other extremity of the region and type M-x eev (C-SPC and
M-x are Emacs's notations for Control-Space and Alt-x, a.k.a.
"Meta-x"). After doing that, go to a shell and "make it receive
these commands", by typing `ee'.
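The file that `eev' leaves behind can be imitated directly in the
shell; this sketch (file and variable names are ours) wraps a "region"
in set -v / set +v, so sourcing it makes the shell echo each command
before running it --- the "verbose mode" described above.

```shell
script=$(mktemp)
# The line between the set -v / set +v pair stands for the region
# selected in Emacs:
printf 'set -v\nMSG="sent from Emacs"\nset +v\n' > "$script"

. "$script"     # the shell echoes each line (to stderr) as it runs it
echo "$MSG"
rm -f "$script"
```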
-------------
When we are using a system like *NIX, part of the time we are
using programs with which we are perfectly familiar, and the rest
of the time we are using things that we don't understand completely
and that make us access the documentation from time to time.
In a GNU system the documentation is all on-line, and the steps needed
to access any piece of documentation can be automated. We can use
Emacs Lisp "one-liners" to create "hyperlinks" to files:
A: (info "(emacs)Lisp Eval")
B: (find-file "~/usrc/busybox-1.00/shell/ash.c")
C: (find-file "/usr/share/emacs/21.4/lisp/info.el")
These expressions, when executed --- which is done by placing the
cursor after them and then typing C-x C-e, or, equivalently, M-x
eval-last-sexp --- will (A) open a page of the Emacs manual (the
manual is a set of files in "Info" format), (B) open the source file
`shell/ash.c' of a program called BusyBox, and (C) open the file
`info.el' from the Emacs sources, respectively. As some of these files
and pages can be very big, these hyperlinks are not yet very
satisfactory: we want ways to not only open these files and pages but
also to "point to specific positions", i.e., to make the cursor go
to these positions automatically. We can do that by defining some new
hyperlink functions, that are invoked like this:
A': (find-node "(emacs)Lisp Eval" "C-x C-e")
B': (find-fline "~/usrc/busybox-1.00/shell/ash.c"
"void\nevalpipe")
C': (find-fline "/usr/share/emacs/21.4/lisp/info.el"
"defun info")
The convention is that these "extended hyperlink functions" have
names like `find-xxxnode', `find-xxxfile', or `find-xxxyyy'; as the
name `find-file' was already taken by a standard Emacs function we had
to use `find-fline' for ours.
Here are the definitions of `find-node' and `find-fline':
14: (defun ee-goto-position (&optional pos-spec)
15: "If POS-SPEC is a string search for its first
16: occurrence in the file; if it is a number go to the
17: POS-SPECth line; if it is nil, don't move."
18: (cond ((null pos-spec))
19: ((numberp pos-spec)
20: (goto-char (point-min))
21: (forward-line (1- pos-spec)))
22: ((stringp pos-spec)
23: (goto-char (point-min))
24: (search-forward pos-spec))
25: (t (error "Invalid pos-spec: %S" pos-spec))))
26:
27: (defun find-fline (fname &optional pos-spec)
28: "Like (find-file FNAME), but accepts a POS-SPEC"
29: (find-file fname)
30:   (ee-goto-position pos-spec))
31:
32: (defun find-node (node &optional pos-spec)
33: "Like (info NODE), but accepts a POS-SPEC"
34: (info node)
35:   (ee-goto-position pos-spec))
Now consider what happens when we send to a shell a sequence of
commands like this one:
# (find-node "(gawk)Fields")
seq 4 9 | gawk '{print $1,$1*$1}'

the shell ignores the first line because of the #', that makes the
shell treat that line as a comment; but when we are editing that in
Emacs we can execute the (find-node ...)' with C-x C-e. Hyperlinks can
be mixed with shell code --- they just need to be marked as comments.

Note: the actual definitions of eev', ee-goto-position', find-fline'
and find-node' in eev's source code are a bit more complex than the
code in the listings above (lines 6--12 in the previous section and
14--35 in the current section). In all the (few) occasions in this
paper where we will present the source code of eev's functions what
will be shown are versions that implement only the essence'' of those
functions, stripped down of all extra functionality. The point that we
wanted to stress with those listings is how natural it is to use Emacs
in a certain way, as an editor for commands for external programs, and
with these plain-text hyperlinks that can be put almost anywhere: the
essence of that idea can be implemented in 30 lines of Lisp and one or
two lines of shell code.

[See also: Section \ref{e-scripts}]

6. Shorter Hyperlinks
---------------------
«shorter-hyperlinks» (to ".shorter-hyperlinks")

The hyperlinks in lines A'', B'' and C'', below,

A'': (find-enode "Lisp Eval" "C-x C-e")
B'': (find-busyboxfile "shell/ash.c" "void\nevalpipe")
C'': (find-efile "info.el" "defun info")

are equivalent to the ones labeled A', B', C' in Section 5, but are a
bit shorter, and they hide details like Emacs's path and the version
of BusyBox; if we switch to newer versions of Emacs and BusyBox we
only need to change the definitions of find-busyboxfile' and
find-efile' to update the hyperlinks. Usually not many things change
from one version of a package to another, so most hyperlinks continue
to work after the update.
Eev defines a function called code-c-d' that makes defining functions
like find-enode', find-busyboxfile' and find-efile' much easier:

(code-c-d "busybox" "~/usrc/busybox-1.00/")
(code-c-d "e" "/usr/share/emacs/21.4/lisp/" "emacs")

The arguments for code-c-d' are (1) a code'' (the xxx'' in a
find-xxxfile''), (2) a directory, and optionally (3) the name of a
manual in Info format. The definition of code-c-d is not very
interesting, so we won't show it here.

7. Keys for following hyperlinks and for going back
---------------------------------------------------
«forward-and-back» (to ".forward-and-back")

[Rewrite this; mention M-k, M-K, to' and the (disabled) stubs to
implement a back' command]

It is so common to have Lisp hyperlinks that extend from some position
in a line --- usually after a comment sign --- to the end of the line
that eev implements a special key for executing these hyperlinks: the
effect of typing M-e (when eev is installed and eev mode'' is on) is
roughly the same as first going to the end of the line and then typing
C-x C-e; that is, M-e does the same as the key sequence C-e C-x
C-e(1).

[There are many other kinds of hyperlinks. Examples?]

(1) The main difference between M-e and C-e C-x C-e is how they behave
when called with numeric prefix arguments'': for example, M-0 M-e
highlights temporarily the Lisp expression instead of executing it and
M-4 M-e executes it with some debugging flags turned on, while C-x C-e
when called with any prefix argument inserts the result of the
expression at the cursor instead of just showing it at the echo area.

8. Dangerous hyperlinks
-----------------------
«dangerous-hyperlinks» (to ".dangerous-hyperlinks")

Note that these hyperlinks'' can do very dangerous things.
If we start to execute blindly every Lisp expression we see just
because it can do something interesting or take us to an interesting
place then we can end up running something like:

(shell-command "rm -Rf ~")

which would destroy all files in our home directory; not a good idea.
Hyperlinks should be a bit safer than that...

The modern approach to safety in hyperlinks --- the one found in web
browsers, for example --- is that following a hyperlink can execute
only a few kinds of actions, all known to be safe; the target'' of a
hyperlink is something of the form http://..., ftp://..., file://...,
info://..., mailto:... or at worst like javascript:...; none of these
kinds of actions can even erase our files. That approach limits a lot
what hyperlinks can do, but makes it harmless to hide the hyperlink
action and display only some descriptive text.

Eev's approach is the opposite of that. I wrote the first functions of
eev in my first weeks after installing GNU/Linux on my home machine
and starting to use GNU Emacs, in 1994; before that I was using mostly
Forth (on MS-DOS), and I hadn't had a lot of exposure to *nix systems
by then --- in particular, I had tried to understand *nix's notions of
user IDs and file ownerships and permissions, and I felt that they
were a thick layer of complexity that I wasn't being able to get
through. Forth's attitude is more like the user knows what he's
doing''; the system is kept very simple, so that understanding all the
consequences of an action is not very hard. If the user wants to
change a byte in a critical memory position and crash the machine he
can do that, and partly because of that simplicity bringing the
machine up again didn't use to take more than one minute (in the good
old days, of course).
Forth people developed good backup strategies to cope with the
insecurities, and --- as strange as that might sound nowadays, when
all machines are connected and multi-user and crackers abound ---
using the system in the Forth way was productive and fun.

*NIX systems are not like Forth, but when I started using them I was
accustomed to this idea of achieving simplicity through the lack of
safeguards, and eev reflects that. The only thing that keeps eev's
hyperlinks reasonably safe is /transparency/: the code that a
hyperlink executes is so visible that it is hard to mistake a
dangerous Lisp expression for a real'' hyperlink. Also, all the safe
hyperlink functions implemented by eev start with find-', and all the
find-' functions in eev are safe, except for those with names like
find-xxxsh' and find-xxxsh0': for example,

(find-sh "wget --help" "recursive download")

executes wget --help'', puts the output of that in an Emacs buffer and
then jumps to the first occurrence of the string recursive download''
there; other find-xxxsh' functions are variations on that which
execute some extra shell commands before executing the first argument
--- typically either switching to another directory or loading an
initialization file, like ~/.bashrc or ~/.zshrc. The find-xxxsh0'
functions are similar to their find-xxxsh' counterparts, but instead
of creating a buffer with their output they just show it at Emacs's
echo area, and they use only the first argument and ignore the others
(the pos-spec).

9. Generating Hyperlinks
------------------------
«generating-hyperlinks» (to ".generating-hyperlinks")

Do we need to remember the names of all hyperlink functions, like
find-fline and find-node? Do we need to type the code for each
hyperlink in full by hand? The answers are no'' and no''. Eev
implements several functions that create temporary buffers containing
hyperlinks, that can then be cut and pasted to other buffers.
For example, M-h M-f' creates links about an Emacs Lisp function:
typing M-h M-f' displays a prompt in a minibuffer asking for the name
of an Elisp function; if we type, say, find-file' there (note: name
completion with the TAB key works in that prompt) we get a buffer like
the one in Figure 1.

 _________________________________________________________
|# (find-efunction-links 'find-file)                      |
|                                                         |
|# (where-is 'find-file)                                  |
|# (describe-function 'find-file)                         |
|# (find-efunctiondescr 'find-file)                       |
|# (find-efunction 'find-file)                            |
|# (find-efunctionpp 'find-file)                          |
|# (find-efunctiond 'find-file)                           |
|# (find-eCfunction 'find-file)                           |
|# (find-estring (documentation 'find-file))              |
|# (find-estring (documentation 'find-file t))            |
|                                                         |
|# (Info-goto-emacs-command-node 'find-file)              |
|# (find-enode "Command Index" "* find-file:")            |
|# (find-elnode "Index" "* find-file:")                   |
|                                                         |
|                                                         |
|                                                         |
|--:** *Elisp hyperlinks*   All L18    (Fundamental)------|
|_________________________________________________________|

Figure 1: the result of typing M-h M-f find-file
(find-eevfile "article/ss-m-h.png")
(find-eevex "screenshots.e" "fisl-screenshots-M-h")

The first line of that buffer is a hyperlink to that
dynamically-generated page of hyperlinks. Its function ---
find-efunction-links' --- has a long name that is hard to remember,
but there's a shorter link that will do the same job:

(eek "M-h M-f find-file")

The argument to eek' is a string describing a sequence of keys in a
certain verbose format, and the effect of running, say, (eek "M-h M-f
find-file") is the same as that of typing M-h M-f find-file'.

[M-h is a prefix; (eek "M-h C-h") shows all the sequences with the
same prefix.]

[Exceptions: M-h M-c, M-h M-2, M-h M-y. Show examples of how to edit
hyperlinks with M-h M-2 and M-h M-y.]

[Mention hyperlinks about a key sequence? (eek "M-h M-k C-x C-f")]

[Mention hyperlinks about a Debian package?]

10. Returning from Hyperlinks
-----------------------------
«returning» (to ".returning")

[Mention M-k to kill the current buffer, and how Emacs asks for
confirmation when it's a file and it's modified]

[Mention M-K for burying the current buffer]

[Mention what to do in the cases where a hyperlink points to the
current buffer (section 16); there used to be an ee-back'' function
bound to M-B, but to reactivate it I would have to add back some ugly
code to to'... (by the way, that included Rubikitch's contributions)]

[web browsers have a way to return'' from hyperlinks: the back''
button... In eev we have many kinds of hyperlinks, including some that
are unsafe and irreversible, but we have a few kinds of back''s that
work... 1) if the hyperlink opened a new file or buffer, then to kill
the file or buffer, use M-k (an eev binding for kill-this-buffer);
note that it asks for a confirmation when the buffer is associated to
a file and it has been modified --- or we can use bury-buffer; M-K is
an eev binding for bury-buffer. [explain how emacs keeps a list of
buffers?] Note: if the buffer contains, say, a manpage, or an html
page rendered by w3m, which take a significant time to generate, then
M-K is better than M-k. 2) if the hyperlink was a to' then it jumped
to another position in the same file... it is possible to keep a list
of previous positions in a buffer and to create an ee-back' function
(suggestion: bind it to M-B) but I have never been satisfied with the
implementations that I did, so we're only keeping a hook in to' for a
function that saves the current position before the jump]

[dto recommended winner-undo]

11. Local copies of files from the internet
-------------------------------------------
«local-copies» (to ".local-copies")

Emacs knows how to fetch files from the internet, but for most
purposes it is better to use local copies. Suppose that the
environment variable $S is set to ~/snarf/; then running this on a
shell
mkdir -p $S/http/www.gnu.org/software/emacs/
cd $S/http/www.gnu.org/software/emacs/
wget http://www.gnu.org/software/emacs/emacs-paper.html
# (find-fline "$S/http/www.gnu.org/software/emacs/emacs-paper.html") # (find-w3m "$S/http/www.gnu.org/software/emacs/emacs-paper.html")
creates a local copy of emacs-paper.html inside ~/snarf/http/. The
two last lines are hyperlinks to the local copy; find-w3m' opens it
as HTML'', using a web browser called w3m that can be run either in
standalone mode or inside Emacs; find-w3m' uses w3m's Emacs
interface, and it accepts extra arguments, which are treated as a
pos-spec-list.
Instead of running the mkdir', cd' and wget' lines above we can run
a single command that does everything:
psne http://www.gnu.org/software/emacs/emacs-paper.html
which also adds a line with that URL to a log file (usually
~/.psne.log). It is more convenient to have a psne' that changes the
current directory of the shell than one that doesn't, and for that it
must be defined as a shell function.
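A psne-like function can be sketched in a few lines of shell. This is
not eev's actual code --- the name psne_sketch and the PSNE_LOG
variable are ours --- but it shows the whole mkdir/cd/wget/log
sequence, and why it has to be a function: the cd must happen in the
current shell.

```shell
# A sketch of a psne-like shell function (not eev's real psne).
# It maps proto://HOST/DIR/FILE to $S/proto/HOST/DIR/, creates that
# directory, cds into it, downloads the file and logs the URL.
psne_sketch () {
  proto="${1%%://*}"                  # e.g. "http"
  rest="${1#*://}"                    # e.g. "www.gnu.org/.../emacs-paper.html"
  dir="$S/$proto/$(dirname "$rest")"
  mkdir -p "$dir" &&
  cd "$dir" &&
  wget "$1" &&
  echo "$1" >> "${PSNE_LOG:-$HOME/.psne.log}"
}
```

With $S set to ~/snarf, running psne_sketch on the emacs-paper.html
URL above reproduces the mkdir/cd/wget block of the previous section.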
Eev comes with an installer script, called eev-rctool, that can help
in adding the definitions for eev (like the function ee () { source
~/ee.sh; }'' of section 3) to initialization files like ~/.bashrc
(such initialization files are termed rcfiles''). Eev-rctool does
/not/ add by default the definitions for psne' and for $S to rcfiles;
however, it adds commented-out lines with instructions, which might be
something like:

# To define $S and psne uncomment this:
# . $EEVTMPDIR/psne.sh
#   (find-eevtmpfile "psne.sh")

[See: <http://lists.gnu.org/archive/html/eev/2005-06/msg00000.html>]

12. Glyphs
----------
«glyphs» (to ".glyphs")

Emacs allows redefining how characters are displayed, and one of the
modules of eev --- eev-glyphs --- uses that to make some characters
stand out. Character 15, for example, is displayed on the screen by
default as '^O' (two characters, suggesting control-O''), sometimes in
a different color from normal text(3). Eev changes the appearance of
char 15 to make it be displayed as a red star.

Here is how: Emacs has some structures called faces'' that store font
and color information, and eev-glyph-face-red' is a face that says
use the default font and the default background color, but a red
foreground''; eev's initialization code runs this,

(eev-set-glyph 15 ?* 'eev-glyph-face-red)

which sets the representation of char 15 to the glyph'' made of a
star in the face eev-glyph-face-red. For this article, as red doesn't
print well in black and white, we used this instead:

(eev-set-glyph 15 342434)

this made occurrences of char 15 appear as the character 342434, *'
(note that this is outside of the ascii range), using the default
face, i.e., the default font and color.

Eev also sets a few other glyphs with non-standard faces. The most
important of those are «' and »', which are set to appear in green
against the default background, with:

(eev-set-glyph 171 171 'eev-glyph-face-green)
(eev-set-glyph 187 187 'eev-glyph-face-green)

There's a technical point to be raised here. Emacs can use several
encodings'' for files and buffers, and «' and »' only have character
codes 171 and 187 in a few cases, mainly in the raw-text' encoding and
in unibyte'' buffers; in most other encodings they have other char
codes, usually above 255, and when they have these other codes Emacs
considers that they are other characters for which no special glyphs
were set and shows them in the default face.
This visual distinction between the below-255 «' and »' and the other
«' and »'s is deliberate --- it helps prevent some subtle bugs
involving the anchor functions of section \ref{anchors}.

(3) Determined by the face'' escape-glyph-face, introduced in GNU
Emacs in late 2004.

13. Compose Pairs
-----------------
«compose-pairs» (to ".compose-pairs")

To insert a *' in a text we type C-q C-o' --- C-q quotes'' the next
key that Emacs receives, and C-q C-o' inserts a literal C-o'', which
is a char 15. Typing «' and »'s --- and other non-standard glyphs, if
we decide to define our own --- involves using another module of eev:
eev-compose.

Eev-compose defines a few variables that hold tables of compose
pairs'', which map pairs of characters that are easy to type into
other, weirder characters; for example, eev-composes-otheriso' says
that the pair "<<" is mapped to "«" and that ">>" is mapped to "»",
among others. When we are in eev mode'' the prefix M-,' can be used
to perform the translation: typing M-, < <' enters «', and typing
M-, > >' enters »'.

The variable eev-composes-accents' holds mappings for accented chars,
like "'a" to "á" and "cc" to "ç"; eev-composes-otheriso' takes care of
the other mappings that still concern characters found in the
ISO8859-1 character set, like «' and »' as above, "_a" to "ª", "xx" to
"×", and a few others; eev-composes-globalmath' and
eev-composes-localmath' are initially empty and are meant to be used
for user-defined glyphs. The suffix math' in their names is a relic:
Emacs implements its own ways to enter special characters, which
support several languages and character encodings, but their code is
quite complex and they are difficult to extend; the code that
implements eev's M-,', on the other hand, takes just about 10 lines of
Lisp (excluding the tables of compose pairs) and it is trivial to
understand and to change its tables of pairs.
M-,' was created originally to enter special glyphs for editing
mathematical texts in TeX, but it turned out to be a convenient hack,
and it stuck.

14. Delimited regions
---------------------
«delimited-regions» (to ".delimited-regions")

Sometimes it happens that we need to run a certain (long) series of
commands over and over again, maybe with some changes from one run to
the next; then having to mark the block all the time becomes a hassle.
One alternative to that is using a variation on M-x eev':
M-x eev-bounded'. It saves the region around the cursor up to certain
delimiters instead of saving what's between Emacs's point'' and
mark''. The original definition of eev-bounded was something like
this:

(defun eev-bounded () (interactive)
  (eev (ee-search-backwards "\n#*\n")
       (ee-search-forward "\n#*\n")))

the call to ee-search-backwards' searches for the first occurrence of
the string "\n#*\n" (newline, hash sign, control-O, newline) before
the cursor and returns the position after the "\n#*\n", without moving
the cursor; the call to ee-search-forward does something similar with
a forward search. As the arguments to eev' indicate the extremities of
the region to be saved into the temporary script, this saves the
region between the first "\n#*\n" backwards from the cursor and the
first "\n#*\n" after the cursor.

The actual definition of eev-bounded' includes some extra code to
highlight temporarily the region that was used; see Figure
\ref{fig:F3}. Normally the highlighting lasts for less than one
second, but here we have set its duration to several seconds to
produce a more interesting screenshot.
 ______________ emacs@localhost _______________   _________ xterm __________
|#*                                            | |/home/edrx(edrx)# ee      |
|# Global variables                            | |# Global variables        |
|lua50 -e '                                    | |lua50 -e '                |
|  print(print)                                | |  print(print)            |
|  print(_G["print"])                          | |  print(_G["print"])      |
|  print(_G.print)                             | |  print(_G.print)         |
|  print(_G)                                   | |  print(_G)               |
|  print(_G._G)                                | |  print(_G._G)            |
|'                                             | |'                         |
|#*                                            | |function: 0x804dfc0       |
|# Capture of local variables                  | |function: 0x804dfc0       |
|lua50 -e '                                    | |function: 0x804dfc0       |
|  foo = function ()                           | |table: 0x804d420          |
|    local storage                             | |table: 0x804d420          |
|    return                                    | |/home/edrx(edrx)#         |
|      (function () return storage end),       | |__________________________|
|      (function (x) storage = x; return x end)|
|  end                                         |
|  get1, set1 = foo()                          |
|  get2, set2 = foo()               -- Output: |
|  print(set1(22), get1())          -- 22 22   |
|  print(set2(33), get1(), get2())  -- 33 22 33|
|'                                             |
|#*                                            |
|                                              |
|-:-- lua5.e 91% L325 (Fundamental)------------|
|______________________________________________|

Figure 2: sending a delimited block with F3
(find-fline "ss-lua.png")
(find-eevex "screenshots.e" "fisl-screenshots")

Eev binds the key F3 to the function eeb-default', which runs the
current default bounded function'' (which is set initially to eev',
/not/ eev-bounded') on the region between the current default
delimiters, using the current default highlight-spec''; so, instead
of typing M-x eev-bounded' inside the region to save it we can just
type F3. All these default values come from a single list, which is
stored in the variable eeb-defaults'. The real definition of
eev-bounded' is something like:

(setq eev-bounded '(eev ee-delimiter-hash nil t t))
(defun eev-bounded () (interactive)
  (setq eeb-defaults eev-bounded)
  (eeb-default))

Note that in Emacs Lisp (and in most other Lisps) each symbol has a
value as a variable that is independent from its value as a
function'': actually a symbol is a structure containing a name, a
value cell'', a function cell'' and a few other fields.
Our definition of eev-bounded', above, includes both a definition of
the function eev-bounded' and a value for the variable eev-bounded'.
Eev has an auxiliary function for defining these bounded functions'';
running

(eeb-define 'eev-bounded 'eev 'ee-delimiter-hash nil t t)

has the same effect as doing the setq' and the defun' above.

As for the meaning of the entries of the list eeb-defaults', the first
one (eev') says which function to run; the second one
(ee-delimiter-hash') says which initial delimiter to use --- in this
case it is a symbol instead of a string, and so eeb-default' takes the
value of the variable ee-delimiter-hash'; the third one (nil) is like
the second one, but for the final delimiter, and when it is nil
eeb-default' considers that the final delimiter is equal to the
initial delimiter; the fourth entry (t) means to use the standard
highlight-spec, and the fifth one (t, again) tells eeb-default' to
make an adjustment to the highlighted region for purely aesthetic
reasons: the saved region does not include the initial "\n" in the
final delimiter, "\n#*\n", but the highlighting looks nicer if it is
included; without it the last highlighted line in Figure 2 would have
only its first character --- an apostrophe --- highlighted.

Eev also implements other bounded'' functions. For example, running
M-x eelatex' on a region saves it in a temporary LaTeX file, and also
saves into the temporary script file the commands to process it with
LaTeX; eelatex-bounded' is defined by

(eeb-define 'eelatex-bounded 'eelatex 'ee-delimiter-percent nil t t)

where the variable ee-delimiter-percent' holds the string "\n%*\n";
comments in LaTeX start with percent signs, not hash signs, and it is
convenient to use delimiters that are treated as comments.

[The block below ... tricky ... blah. How to typeset *' in LaTeX.
Running eelatex-bounded changed the defaults stored in eeb-defaults,
but ee-once blah doesn't.]
%*
% (eelatex-bounded)
% (ee-once (eelatex-bounded))
\def\myttbox#1{%
  \setbox0=\hbox{\texttt{a}}%
  \hbox to \wd0{\hss#1\hss}%
}
\catcode`*=13
\def*{\myttbox{$\bullet$}}
\begin{verbatim}
abcdefg
d*fg
\end{verbatim}
%*

...for example eelatex, that saves the region (plus certain standard
header and footer lines) to a temporary LaTeX file'' and saves into
the temporary script file the commands to make ee' run LaTeX on that
and display the result. The block below is an example of (...)

...The block below shows a typical application of eev-bounded:

# (find-es "lua5" "install-5.0.2")
# (find-es "lua5" "install-5.0.2" "Edrx's changes")
# (code-c-d "lua5" "/tmp/usrc/lua-5.0.2/")
# (find-lua5file "INSTALL")
# (find-lua5file "config" "support for dynamic loading")
# (find-lua5file "config")
# (find-lua5file "")
#*
rm -Rv ~/usrc/lua-5.0.2/
mkdir -p ~/usrc/lua-5.0.2/
tar -C ~/usrc/ \
  -xvzf $S/http/www.lua.org/ftp/lua-5.0.2.tar.gz
cd ~/usrc/lua-5.0.2/
cat >> config <<'---'
DLLIB= -ldl
MYLDFLAGS= -Wl,-E
EXTRA_LIBS= -lm -ldl
---
make test 2>&1 | tee omt
#*
It unpacks a program (the interpreter for Lua), changes its default
configuration slightly, then compiles and tests it.
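The cat >> config <<'---'' lines in that block use a quoted
here-document to append a fixed block of settings to a file. Here is a
tiny self-contained sketch of the same idiom (the file name
/tmp/demo-config is ours, not part of the Lua sources):

```shell
# Append a block of settings with a quoted here-document; quoting the
# delimiter ('---') keeps $, backquotes and backslashes literal.
cfg=/tmp/demo-config
echo '# stock config' > "$cfg"
cat >> "$cfg" <<'---'
DLLIB= -ldl
MYLDFLAGS= -Wl,-E
EXTRA_LIBS= -lm -ldl
---
```

After this runs, /tmp/demo-config contains the original line followed
by the three appended settings, exactly as the e-script above appends
to Lua's config' file.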
[about the size: the above code is too small for being a script'';
gdb (here-documents, gcc, ee-once)
(alternative: here-documents, gcc, gdb, screenshot(s) for gdb)]
15. Communication channels
--------------------------
«channels» (to ".channels")
The way that we saw to send commands to a shell is in two steps: first
we use M-x eev in Emacs to send'' a block of commands, and then we
run ee' at the shell to make it receive'' these commands. But there
is also a way to create shells that listen'' not only to the
keyboard for their input, but also to certain communication
channels''; by making Emacs send commands through these communication
channels we can skip the step of going to the shell and typing ee'
--- the commands are received immediately.
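The first of those two steps can be sketched entirely in shell; in
this sketch (ours, not eev's real code) /tmp/demo-ee.sh stands for the
temporary script file and the printf plays the role of M-x eev:

```shell
# "ee" is just a one-line function that sources the temporary script
# in the *current* shell, so cd's and variable settings survive:
EEFILE=/tmp/demo-ee.sh
ee () { . "$EEFILE"; }

# pretend that M-x eev has just saved this block:
printf '%s\n' 'msg="hi from ee"' 'echo "$msg"' > "$EEFILE"
ee          # the shell "receives" and runs the saved commands
```

Because ee' sources the file instead of running it as a child
process, the variable msg is still set in the interactive shell after
ee' returns.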
_________emacs@localhost____________ ___________channel A______________
| | |/tmp(edrx)# # Send things to port |
|* (eechannel-xterm "A") ;; create | | 1234 |
|* (eechannel-xterm "B") ;; create | |/tmp(edrx)# { |
|# Listen on port 1234 | |> echo hi |
|netcat -l -p 1234 | |> sleep 1 |
|* | |> echo bye |
|* (eechannel "A") ;; change target | |> sleep 1 |
|# Send things to port 1234 | |> } | netcat -q 0 localhost 1234 |
|{ | |/tmp(edrx)# |
| echo hi | |/tmp(edrx)# |
| sleep 1 | |__________________________________|
| echo bye | ___________channel B______________
| sleep 1 | |/tmp(edrx)# # Listen on port 1234 |
|} | netcat -q 0 localhost 1234 | |/tmp(edrx)# netcat -l -p 1234 |
| | |hi |
|-:-- screenshots.e 95% L409 (Fu| |bye |
|_Wrote /home/edrx/.eev/eeg.A.str____| |/tmp(edrx)# |
| |
|__________________________________|
Figure 3: sending commands to two xterms using F9
(find-eevex "screenshots.e" "fisl-screenshots")
(find-eevfile "article/ss-f9.png")
The screenshot at Figure 3 shows this at work. The user has started
with the cursor at the second line from the top of the screen in the
Emacs window and then has typed F9 several times. Eev binds F9 to a
command that operates on the current line and then moves down to the
next line; if the current line starts with *' then what comes after
the *' is considered as Lisp code and executed immediately; if the
current line doesn't start with *' then its contents are sent through
the default communication channel, or through a dummy communication
channel if no default was set.
The first F9 executed (eechannel-xterm "A"), which created an
xterm with title channel A'', running a shell listening on the
communication channel A'', and set the default channel to A; the
second F9 created another xterm, now listening to channel B, and set
the default channel to B.
The next two F9's each sent one line to channel B. The first line
was a shell comment (# Listen...''); the second one started the
program netcat, with options to make netcat listen to the internet
port 1234'' and dump to standard output what it receives.
The next line had just *'; executing the rest of it as Lisp did
nothing. The following line changed the default channel to A.
In the following lines there is a small shell program that outputs
hi'', then waits one second, then outputs bye'', then waits for
another second, then finishes; due to the | netcat...'' its output
is redirected to the internet port 1234, and so we see it appearing as
the output of the netcat running on channel B, with all the expected
delays: one second between hi'' and bye'', and one second after
bye''; after that last one-second delay the netcat at channel A
finishes receiving input (because the program between {' and }'
ends) and it finishes its execution, closing the port 1234; the netcat
at B notices that the port was closed and finishes its execution too.
There are also ways to send whole blocks of lines at once through
communication channels; see Section \ref{bigmodular}.
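Figure 3's hi/bye conversation can also be replayed without the
network; in the sketch below (ours, not eev's) a FIFO plays the role
of port 1234 and of the two netcats, but the reader/writer timing
behaves the same way:

```shell
# "channel B": a background reader that dumps whatever arrives;
# "channel A": a writer that sends two lines with a delay in between.
fifo=/tmp/channel-demo.fifo
rm -f "$fifo"
mkfifo "$fifo"
cat "$fifo" > /tmp/channel-B.out &      # the "netcat -l" side
{ echo hi
  sleep 1
  echo bye
} > "$fifo"                             # the "| netcat -q 0" side
wait                                    # reader exits when the writer closes
rm -f "$fifo"
```

As with netcat, the reader sees hi'', then a one-second pause, then
bye'', and terminates when the writing end is closed.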
15.1. The Implementation of Communication Channels
---------------------------------------------------
«channels-implementation» (to ".channels-implementation")
Communication channels are implemented using an auxiliary script
called eegchannel', which is written in Expect ([L90] and [L95]). If
we start an xterm in the default way it starts a shell (say,
/bin/bash) and interacts with it: the xterm sends to the shell as
characters the keystrokes that it receives from the window manager and
treats the characters that the shell sends back as being instructions
to draw characters, numbers and symbols on the screen. But when we run
(eechannel-xterm "A") Emacs creates an xterm that interacts with
another program --- eegchannel --- instead of with a shell, and
eegchannel in its turn runs a shell and interacts with it.
Eegchannel passes characters back and forth between the xterm and the
shell without changing them in any way; it mostly tries to pretend
that it is not there and that the xterm is communicating directly with
the shell. However, when eegchannel receives a certain signal it sends
to the shell a certain sequence of characters that were not sent by
the xterm; it fakes a sequence of keystrokes''.
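The fake-keystrokes protocol can be imitated with a toy listener in
plain shell (a sketch of ours; the real eegchannel is an Expect
script, and the file names below are made up):

```shell
# A toy stand-in for eegchannel: the listener saves its pid and, on
# SIGUSR1, replays the "string file" -- the analogue of faking
# keystrokes into the controlled shell.
sh -c 'echo $$ > /tmp/eeg.demo.pid
       trap "cat /tmp/eeg.demo.str" USR1
       while :; do sleep 1; done' > /tmp/eeg.demo.out &

echo "echo hi through the channel" > /tmp/eeg.demo.str
sleep 1                                  # give the listener time to start
kill -USR1 "$(cat /tmp/eeg.demo.pid)"    # "fake the keystrokes"
sleep 2                                  # let the trap run
kill "$(cat /tmp/eeg.demo.pid)"          # stop the toy listener
```

The sender's two lines --- write the string file, then signal the pid
from the pid file --- are exactly the shape of the snippet shown in
the concrete example below.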
Let's see a concrete example. Suppose that Emacs was running with
process id (pid'') 1000, and running (eechannel-xterm "A")
in it made it create an xterm, which got pid 1001; that xterm ran
eegchannel (pid 1002), which ran /bin/bash (pid 1003). Actually Emacs
invoked xterm using this command line:
xterm -n "channel A" -e eegchannel A /bin/bash
and xterm invoked eegchannel with eegchannel A /bin/bash'';
eegchannel saw the A', saved its pid (1002) to the file
~/.eev/eeg.A.pid, and watched for SIGUSR1 signals; every time that it
receives one of those signals it reads the contents of the file
~/.eev/eeg.A.str and sends that as fake input to the shell that it
is controlling. So, running
echo 'echo $[1+2]' > ~/.eev/eeg.A.str kill -USR1$(cat ~/.eev/eeg.A.pid)
in a shell sends the string echo $[1+2]'' (plus a newline) through
the channel A''; what Emacs does when we type F9 on a line that does
not start with *' corresponds exactly to that.

16. Anchors
-----------
«anchors» (to ".anchors")

The function to' can be used to create hyperlinks to certain positions
--- called anchors'' --- in the current file. For example,

# Index:
# «.first_block»  (to "first_block")
# «.second_block» (to "second_block")

#*
# «first_block» (to ".first_block")
echo blah

#*
# «second_block» (to ".second_block")
echo blah blah

#*

What to' does is simply to wrap its argument inside «' and »'
characters and then jump to the first occurrence of the resulting
string in the current file. In the (toy) example above, the line that
starts with # «.first_block»'' has a link that jumps to the line that
starts with # «first_block»'', which has a link that jumps back ---
the anchors and (to ...)''s act like an index for that file.

The function find-anchor' works like a to' that first opens another
file:

(find-anchor "~/.zshrc" "update-homepage")

does roughly the same as:

(find-fline "~/.zshrc" "«update-homepage»")

Actually find-anchor' consults a variable, ee-anchor-format', to see
in which strings to wrap the argument. Some functions modify
ee-anchor-format' temporarily to obtain special effects; for example,
a lot of information about the packages installed in a Debian GNU
system is kept in a text file called /var/lib/dpkg/status;

(find-status "emacs21")

opens this file and searches for the string "\nPackage: emacs21\n"
there --- that string is the header for the block with information
about the package emacs21, and it tells the size of the package,
description, version, whether it is installed or not, etc, in a format
that is both machine-readable and human-readable.

17. E-scripts
-------------
«e-scripts» (to ".e-scripts")

The best short definition for eev that I've found involves some
cheating, as it is a circular definition: eev is a library that adds
support for e-scripts to Emacs'' --- and e-scripts are files that
contain chunks meant to be processed by eev's functions. Almost any
file can contain parts meant for eev'': for example, a HOWTO or
README file about some program will usually contain some example shell
commands, and we can mark these commands and execute them with
M-x eev; and if we have the habit of using eev and we are writing code
in, say, C or Lua we will often put elisp hyperlinks inside comment
blocks in our code. These two specific languages (and a few others)
have a feature that is quite convenient for eev: they have syntactical
constructs that allow comment blocks spanning several lines --- for
example, in Lua, where these comment blocks are delimited by --[[''
and --]]''s, we can have a block like

--[[
#*
# This file: (find-fline "~/LUA/lstoindexhtml.lua")
# A test:
cd /tmp/
ls -laF | col -x \
  | lua50 ~/LUA/lstoindexhtml.lua tmp/ \
  | lua50 -e 'writefile("index.html", io.read("*a"))'
#*
--]]

in a Lua script, and the script will be at the same time a Lua script
and an e-script.

When I started using GNU and Emacs the notion of an e-script was
something quite precise to me: I was keeping notes on what I was
learning and on all that I was trying to do, and I was keeping those
notes in a format that was partly English (or Portuguese), partly
executable things --- not all of them finished, or working --- after
all, it was much more practical to write

rm -Rv ~/usrc/busybox-1.00/
tar -C ~/usrc/ -xvzf \
  $S/http/www.busybox.net/downloads/busybox-1.00.tar.gz
cd ~/usrc/busybox-1.00/
cp -iv ~/BUSYBOX/myconfig .config
make 2>&1 | tee om
than to write
Unpack BusyBox's source, then run "make menuconfig"
and "make" on its main directory
and then have to translate that from English into machine commands
every time... So, those files where I was keeping my notes contained
``executable notes'', or were ``scripts for Emacs'', and I was quite
sure that everyone else around was also keeping notes in executable
formats, possibly using other editors and environments (vi, maybe?),
and that if I showed these people my notes and they were about some
task that they were also struggling with then they would also show me
/their/ notes... I ended up making a system that regularly uploaded
all my e-scripts (no matter how messy they were) to my home page, and
writing a text --- ``The Eev Manifesto'' ([O99]) --- about sharing
these executable notes.
Actually, trying to define an e-script as being ``a file containing
executable parts, that are picked up and executed interactively''
makes the concept of an e-script /very/ loose.
Note that we /can/ execute the Lua parts in the code above by running
the Lua interpreter on it, we /can/ execute the elisp one-liner with
M-e in Emacs, and we /can/ execute the shell commands using F3 or M-x
eev; but the code will do nothing by itself --- it is passive.
A piece of code containing instructions in English on how to use it is
also an e-script, in a sense; but to execute these instructions we
need to invoke an external entity --- a human, usually ourselves ---
to interpret them. This is much more flexible, but also much more
error-prone and slow, than just pressing a simple sequence of keys
like M-e, or F9, or F3, alt-tab, e, e, enter.
18. Splitting eev.el
--------------------
«splitting-eev.el» (to ".splitting-eev.el")
When I first submitted eev for inclusion in GNU Emacs, in 1999, the
people at the FSF requested some changes. One of them was to split
eev.el --- the code at that point was all in a single Emacs Lisp file,
called eev.el --- into several separate source files according to
functionality; at least the code for saving temporary scripts and the
code for hyperlinks should be kept separate.
It turned out that that was the wrong way of splitting eev. The
frontier between what is a hyperlink and what is a block of commands
is blurry:
man foo
man -P 'less +/bar' foo
# (eev "man foo")
# (eev "man -P 'less +/bar' foo")
# (find-man "foo" "bar")
The two `man' commands above can be considered as hyperlinks to a
manpage, but we need to send those commands to a shell to actually
open the manpage; the option -P 'less +/bar' instructs `man' to use
the program `less' to display the manpage, and it tells `less' to
jump to the first occurrence of the string ``bar'' in the text, and
so it is a hyperlink to a specific position in a manpage. Each of the
two `eev' lines, when executed, saves one of these `man' commands to
the temporary script file; because they contain Lisp expressions they
look much more like hyperlinks than the `man' lines. The last line,
`find-man', behaves much more like a ``real'' hyperlink: it opens the
manpage /inside Emacs/ and searches for the first occurrence of
`bar' there; but Emacs's code for displaying manpages was tricky, and
it took me a few years to figure out how to add support for
pos-spec-lists to it...
So, what happens is that often a new kind of hyperlink will begin its
life as a series of shell commands (another example: using `gv --page
14 file.ps' to open a PostScript file and then jump to a certain page)
and then it takes some time to make a nice hyperlink function that
does the same thing; and often these functions are implemented by
executing commands in external programs.
There's a much better way to split conceptually what eev does,
though. Most functions in eev take a region of text (for example
Emacs's own ``selected region'', or the extent of the Lisp expression
just before the cursor) and ``execute'' it in some way; the kinds of
regions are:
Emacs's (selected) region | M-x eev, M-x eelatex (sec. 4)
----------------------------+------------------------------
last-sexp (Lisp expression | C-x C-e, M-E (sec. 5)
at the left of the cursor) |
----------------------------+------------------------------
sexp-eol (go to end of | C-e C-x C-e, M-e (sec. 7)
line, then last-sexp) |
----------------------------+------------------------------
bounded region | F3, M-x eev-bounded,
| M-x eelatex-bounded (sec. 14)
----------------------------+------------------------------
bounded region around | (ee-at ``anchor'' ...)
anchor | (sec. 20)
----------------------------+------------------------------
current line | F9 (sec. 15)
----------------------------+------------------------------
no text (instead use the | F12 (sec. 19)
next item in a list) |
Actions (can be composed):
* Saving a region or a string into a file
* Sending a signal to a process
* Executing as Lisp
* Executing immediately in a shell
* Start a debugger
[Emacs terminology: commands]
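This ``region times action'' factorization can be sketched as plain
function composition. The following is an illustrative toy model in
Python --- not eev's actual code, and all names here are made up:

```python
# Toy model of eev's "region x action" split (illustrative only).

def region_current_line(text, pos):
    """Like F9: extract the line around position `pos`."""
    start = text.rfind("\n", 0, pos) + 1
    end = text.find("\n", pos)
    return text[start:] if end == -1 else text[start:end]

def action_save_for_shell(command, script):
    """Like M-x eev: append the command to a temporary script."""
    script.append(command)
    return script

buffer = "echo one\necho two\n"
script = []
action_save_for_shell(region_current_line(buffer, 0), script)
action_save_for_shell(region_current_line(buffer, 9), script)
print(script)  # ['echo one', 'echo two']
```

Any region extractor can be paired with any action, which is why the
original split of eev.el by functionality (scripts vs. hyperlinks)
cut across the natural seams.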
19. Steps
---------
«eesteps» (to ".eesteps")
[Simple examples]
[writing demos]
[hyperlinks for which no short form is known]
[producing animations and screenshots]
20. Big Modular E-scripts
-------------------------
«big-modular-e-scripts» (to ".big-modular-e-scripts")
% (find-eevex "screenshots.e" "fisl-screenshots-modular")
% (find-eimage0 "./ss-modular.png")
% (find-fline "ss-modular.png")
% (find-es "tex" "png_screenshots")
A shell can be run in two modes: either interactively, by
expecting lines from the user and executing them as soon as they
are received (except for multi-line commands), or non-interactively,
in which case it reads a file of commands and executes them in
sequence as fast as possible, with no pause between one command and
the next.
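The difference between the two modes can be demonstrated from
outside the shell. Below is a small Python sketch (illustrative, not
part of eev) that first runs two commands in batch mode and then
feeds a live shell one line at a time --- which is essentially what
F9 does:

```python
import subprocess

# Batch mode: the shell receives a whole script and runs it straight
# through, with no pause between commands.
batch = subprocess.run(["sh", "-c", "echo one; echo two"],
                       capture_output=True, text=True)

# "Interactive" mode, driven programmatically: keep a shell alive and
# send it one line at a time, deciding ourselves when each line goes.
proc = subprocess.Popen(["sh"], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, text=True)
proc.stdin.write("echo one\n")
proc.stdin.flush()
# ...we could wait here as long as we like before the next step...
proc.stdin.write("echo two\n")
interactive_out, _ = proc.communicate()

print(batch.stdout == interactive_out)  # True: same commands, same output
```

The commands and their output are identical; what changes is who
decides /when/ each command is sent.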
When we are sending lines to a shell with F9 we are telling it not
only /what/ to execute but also /when/ to execute it; this is
somewhat similar to running a program step-by-step inside a debugger
--- but note that most shells provide no single-stepping facilities.
We will start with a toy example --- actually the example from the
Anchors section (sec. 16) with five new lines added at the end ---
and then in the
next section we will see a real-world example that uses these ideas.
Figure 4: sending a block at once with eevnow-at
(find-fline "ss-modular.png")
Figure 5: single-stepping through a C program
(find-fline "ss-gdbwide.png")
[Somewhere between a script and direct user interaction]
[No loops, no conditionals]
[Several xterms]
21. Internet Skills for Disconnected People
-------------------------------------------
«iskidip» (to ".iskidip")
Suppose that we have a person $P$ who has learned how to use a
computer and now wants to learn how the internet works. That person
$P$ knows a bit of programming and can use Emacs, and sure she can use
e-mail clients and web browsers by clicking around with the mouse, but
she has grown tired of just using those things as black boxes; now she
wants to experiment with setting up HTTP and mail servers, to
understand how data packets are driven around, how firewalls can block
some connections, such things.
The problem is that $P$ has never had access to any machine besides
her own, which is connected to the internet only through a modem; and
also, she doesn't have any friends who are computer technicians ---
she's got the impression that they live lives that are almost as grey
as the ones of factory workers, and she's afraid of them. To
add up to all that, $P$ has some hippie job that makes her happy but
poor, so she's not going to buy a second computer, and the books she
can borrow, for example, Richard Stevens' series on TCP/IP
programming, just don't cut it.
One of eev's intents is to make life easier for autodidacts. Can it
be used to rescue people in positions like $P$'s(4)? It was while
thinking about that that I created a side-project to eev called
``Internet Skills for Disconnected People'': it consists of e-scripts
about running a second machine, called the ``guest'', emulated inside
the ``host'', and
making the two talk to each other via standard internet protocols, via
emulated ethernet cards. Those e-scripts make heavy use of the
concepts in the last section [...]
Figure 6: a call map
(find-fline "iskidip.png")
(find-eimage0 "./iskidip.png")
% (find-eevex "busybox.e" "bb_chroot_main")
% (find-eevex "busybox.e" "bbinitrd-qemu-main")
% (find-eevex "busybox.e" "iso-qemu-main")
% (find-eevex "busybox.e" "iso-qemu-main-2")
(4). By the way, I created $P$ inspired by myself; my hippie job is
being a mathematician.
22. Availability and Resources
------------------------------
«availability» (to ".availability")
Eev is available from http://angg.twu.net/. That page also contains
lots of examples, some
animations showing some of eev's features at work, a mailing list,
etc.
Eev is in the middle of the process of becoming a standard part of GNU
Emacs; I expect it to be integrated just after the release of GNU
Emacs.
23. Acknowledgments
-------------------
I'd like to thank David O'Toole, Diogo Leal and Leslie Watter for our
countless hours of discussions about eev; many of the recent features
of eev --- almost half of this article --- were conceived at our
talks.
[Thank also the people at #emacs, for help with the code and for
small revision tips]
24. References
--------------
[L90] - Libes, D. - Expect: Curing Those Uncontrollable Fits of
Interaction. 1990. Available online from http://expect.nist.gov/ .
[L95] - Libes, D. - Exploring Expect. O'Reilly, 1995.
[O99] - Ochs, E. - The Eev Manifesto
<http://angg.twu.net/eev-manifesto.html>
[S79] - Stallman, R. - EMACS: The Extensible, Customizable Display
Editor. <http://www.gnu.org/software/emacs/emacs-paper.html>
https://www.gradesaver.com/textbooks/math/algebra/elementary-algebra/chapter-3-equations-and-problem-solving-3-5-problem-solving-problem-set-3-5-page-131/14
## Elementary Algebra
Let X represent the side of the square. The width of the rectangle is
9 inches less than twice the side of the square, which equals 2X - 9.
The length of the rectangle is 3 inches less than twice the side of
the square, which equals 2X - 3.

The perimeter of the square is 4 $\times$ X = 4X. The perimeter of a
rectangle is width + width + length + length, so the perimeter of this
rectangle is P = (2X - 9) + (2X - 9) + (2X - 3) + (2X - 3) = 8X - 24.

Since the perimeters are equal, we can set up the equation
4X = 8X - 24, so 24 = 4X. Dividing both sides by 4 gives X = 6.

The side of the square is 6 inches. The width of the rectangle is
2X - 9 = 2 $\times$ 6 - 9 = 3 inches, and the length of the rectangle
is 2X - 3 = 2 $\times$ 6 - 3 = 9 inches.
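A quick numeric check of the solution (illustrative Python, not part
of the original answer):

```python
# Verify: square side x; rectangle width 2x - 9, length 2x - 3;
# the two perimeters must be equal.
x = 6
width, length = 2 * x - 9, 2 * x - 3
assert 4 * x == 2 * (width + length)  # both perimeters equal 24
print(x, width, length)  # 6 3 9
```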
http://twrogers.work/research.html
|
## In Review
T. W. Rogers, N. Jaccard, E. J. Morton and L. D. Griffin
A Deep Learning Framework for the Automated Inspection of Complex Dual-Energy X-ray Cargo Imagery
Submitted to SPIE Anomaly Detection and Imaging with X-Rays (ADIX)

T. W. Rogers, N. Jaccard, E. J. Morton and L. D. Griffin
Automated Detection of Loads in Cargo Containers
Submitted to IEEE Transactions on Cybernetics
## Publications
N. Jaccard, T. W. Rogers, E. J. Morton and L. D. Griffin
Detection of concealed cars in complex cargo X-ray imagery using deep learning
Journal of X-ray Science and Technology (accepted) [arXiv]
N. Jaccard, T. W. Rogers, E. J. Morton and L. D. Griffin
Automated detection of smuggled high-risk security threats using Deep Learning
International Conference on Imaging for Crime Detection and Prevention [arXiv]
T. W. Rogers, N. Jaccard, E. J. Morton and L. D. Griffin
Automated X-ray Image Analysis for Cargo Security: Critical Review and Future Promise
Journal of X-ray Science and Technology (2016).
## Automated X-ray Image Analysis for Cargo Security: Critical Review and Future Promise
Abstract: We review the relatively immature field of automated image analysis for X-ray cargo imagery. There is increasing demand for automated analysis methods that can assist in the inspection and selection of containers, due to the ever-growing volumes of traded cargo and the increasing concerns that customs- and security-related threats are being smuggled across borders by organised crime and terrorist networks. We split the field into the classical pipeline of image preprocessing and image understanding. Preprocessing includes: image manipulation; quality improvement; Threat Image Projection (TIP); and material discrimination and segmentation. Image understanding includes: Automated Threat Detection (ATD); and Automated Contents Verification (ACV). We identify several gaps in the literature that need to be addressed and propose ideas for future research. Where the current literature is sparse we borrow from the single-view, multi-view, and CT X-ray baggage domains, which have some characteristics in common with X-ray cargo.
T. W. Rogers, J. Ollier, E. J. Morton and L. D. Griffin
Measuring and correcting wobble in large-scale transmission radiography
Journal of X-ray Science and Technology (2016, accepted).
## Measuring and correcting wobble in large-scale transmission radiography
BACKGROUND: Large-scale transmission radiography scanners are used to image vehicles and cargo containers. Acquired images are inspected for threats by a human operator or a computer algorithm. To make accurate detections, it is important that image values are precise. However, due to the scale (~5m tall) of such systems, they can be mechanically unstable, causing the imaging array to wobble during a scan. This leads to an effective loss of precision in the captured image. OBJECTIVE: We consider the measurement of wobble and amelioration of the consequent loss of image precision. METHODS: Following our previous work, we use Beam Position Detectors (BPDs) to measure the cross-sectional profile of the X-ray beam, allowing for estimation, and thus correction, of wobble. We propose: (i) a model of image formation with a wobbling detector array; (ii) a method of wobble correction derived from this model; (iii) methods for calibrating sensor sensitivities and relative offsets; (iv) a Random Regression Forest based method for instantaneous estimation of detector wobble; and (v) using these estimates to apply corrections to captured images of difficult scenes. RESULTS: We show that these methods are able to correct for 87% of the image error due to wobble, and when applied to difficult images, a significant visible improvement in the intensity-windowed image quality is observed. CONCLUSIONS: The method improves the precision of wobble-affected images, which should help improve detection of threats and the identification of different materials in the image.
T. W. Rogers, N. Jaccard, E. D. Protonotarios, J. Ollier, E. J. Morton and L. D. Griffin
Threat Image Projection (TIP) into X-ray images of cargo containers for training humans and machines
50th IEEE International Carnahan Conference on Security Technology (2016, accepted).
## Threat Image Projection (TIP) into X-ray images of cargo containers for training humans and machines
Abstract: We propose a framework for Threat Image Projection (TIP) in cargo transmission X-ray imagery. The method exploits the approximately multiplicative nature of X-ray imagery to extract a library of threat items. These items can then be projected into real cargo. We show using experimental data that there is no significant qualitative or quantitative difference between real threat images and TIP images. We also describe methods for adding realistic variation to TIP images in order to robustify Machine Learning (ML) based algorithms trained on TIP. These variations are derived from cargo X-ray image formation, and include: (i) translations; (ii) magnification; (iii) rotations; (iv) noise; (v) illumination; (vi) volume and density; and (vii) obscuration. These methods are particularly relevant for representation learning, since it allows the system to learn features that are invariant to these variations. The framework also allows efficient addition of new or emerging threats to a detection system, which is important if time is critical.
We have applied the framework to training ML-based cargo algorithms for (i) detection of loads (empty verification), (ii) detection of concealed cars, and (iii) detection of Small Metallic Threats (SMTs). TIP also enables algorithm testing under controlled conditions, allowing one to gain a deeper understanding of performance. Whilst we have focused on robustifying ML-based threat detectors, our TIP method can also be used to train and robustify human threat detectors, as is done in cabin baggage screening.
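The ``approximately multiplicative'' image model underlying TIP can
be sketched in a few lines. The code below is only an illustration of
the idea with toy single-pixel values --- not the authors'
implementation --- assuming images are normalized transmission values
in (0, 1]:

```python
# Toy illustration of the multiplicative transmission model behind
# Threat Image Projection (TIP). Values are normalized transmissions.
background = 0.9          # an empty-container pixel
threat_alone = 0.5        # transmission factor of the threat item
threat_on_bg = background * threat_alone  # threat imaged in situ

# Extraction: dividing out the background recovers the threat...
extracted = threat_on_bg / background

# ...projection: multiplying the extracted threat into a different
# cargo image synthesizes a realistic threat-in-cargo pixel.
cargo = 0.8
tip_pixel = cargo * extracted

print(round(extracted, 6), round(tip_pixel, 6))  # 0.5 0.4
```

Because attenuation composes multiplicatively along the beam, the
same division/multiplication applies pixel-wise to whole images.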
N. Jaccard, T. W. Rogers, E. J. Morton, and Lewis D. Griffin
Tackling the X-ray cargo inspection challenge using machine learning
Proc. SPIE 9847, Anomaly Detection and Imaging with X-Rays (ADIX), 98470N (2016).
## Tackling the X-ray cargo inspection challenge using machine learning
Abstract: The current infrastructure for non-intrusive inspection of cargo containers cannot accommodate exploding commerce volumes and increasingly stringent regulations. There is a pressing need to develop methods to automate parts of the inspection work-flow, enabling expert operators to focus on a manageable number of high-risk images. To tackle this challenge, we developed a modular framework for automated X-ray cargo image inspection. Employing state-of-the-art machine learning approaches, including deep learning, we demonstrate high performance for empty container verification and specific threat detection. This work constitutes a significant step towards the partial automation of X-ray cargo image inspection.
N. Jaccard, T. W. Rogers, E. J. Morton, and Lewis D. Griffin
Using deep learning on X-ray images to detect threats
Defence and Security Doctoral Symposium, Cranfield University (2015).
## Using deep learning on X-ray images to detect threats
Shortened Abstract: World trade volumes are exploding, with cargo containers totaling an estimated 500 million TEU (Twenty-foot equivalent units) shipped globally in 2012. At the same time, security requirements and transport regulations are increasingly stringent, putting significant pressure on the infrastructure at transport hubs and borders. In order to meet ambitious aims set by authorities, such as the inspection of every US-bound container, there is a pressing need to devise procedures to cope with high trade volumes while minimising the impact on the stream-of-commerce (SoC) ... We have developed a deep learning framework for the classification of X-ray cargo images according to their content. This framework is based on convolutional neural networks (CNNs), a class of artificial neural networks that currently is the state-of-the-art in many areas of machine-learning and –vision ... CNNs typically require very large training datasets, the acquisition of which is costly and impractical for cargo images ... In this contribution, we present an overview of our deep learning framework and present preliminary results, including a comparison to a more conventional approach we previously proposed for object detection in X-ray cargo container images. While our focus has been on cargo containers, we expect the framework to generalise to X-ray images of other vehicles and luggage. As such, this research is expected to contribute to the development of specialised software packages to assist operators through partial automation of the inspection work-flow.
T. W. Rogers, N. Jaccard, E. J. Morton, and Lewis D. Griffin
Detection of cargo container loads from X-ray images
IET Intelligent Signal Processing (2015).
## Detection of cargo container loads from X-ray images
Abstract: Over 100 million cargo containers that are declared empty on their manifests are transported globally each year. Human operators can confirm if each is truly empty by physical inspection or by examination of an X-ray image. However, the huge number transported means that confirmation is far from complete. Thus, empty containers offer an opportunity for criminals to smuggle contraband. We report an algorithm for automatically detecting loads in cargo containers from transmission X-ray images. Detection without generation of excessive false positives is complicated by the fabric of the container, container variation, damage, and detritus. The algorithm detects 99.3% of loads in stream-of-commerce data while raising false alarms on 0.7% of actually empty containers. On challenging data, created by image synthesis, we are able to achieve 90% detection of loads with the same size and attenuation as a 1.5 kg cube of cocaine or 1 L of water, while triggering fewer than 1-in-605 or 1-in-197 false alarms respectively, on truly empty containers. The algorithm analyses each small window of the image separately, and detects loads within the window by random forest classification of texture features together with the window coordinates.
N. Jaccard, T. W. Rogers, E. J. Morton, and Lewis D. Griffin
Automated detection of cars in transmission X-ray images of freight containers
IEEE Advanced Video and Signal Based Surveillance (2014).
## Automated detection of cars in transmission X-ray images of freight containers
Abstract: We present a method for automated car detection in X-ray transmission images of freight containers. A random forest classifier was used to classify image sub-windows as “car” and “non-car” based on image features such as intensity and log-intensity, as well as local structures and symmetries as encoded by Basic Image Features (BIFs) and oriented Basic Image Features (oBIFs). The proposed approach was validated using a dataset of stream-of-commerce X-ray images. A car detection rate of 100% was achieved while maintaining a false alarm rate of 1.23%. Further reduction in false alarm rate, potentially at the cost of detection rate, was possible by tweaking the classification confidence threshold. This work establishes a framework for the automated classification of X-ray transmission cargo images and their content, paving the way towards the development of tools to assist customs officers faced with an ever-increasing number of images to inspect.
T. W. Rogers, J. Ollier, E. J. Morton, and Lewis D. Griffin
Reduction of wobble artefacts in images from mobile transmission X-ray vehicle scanners
IEEE Imaging Systems and Techniques (2014).
## Reduction of wobble artefacts in images from mobile transmission X-ray vehicle scanners
Abstract: Detector boom wobble in transmission X-ray vehicle scanners is an unpredictable and currently uncontrollable problem, which lowers the quality of captured X-ray images. We propose (i) a method for image correction which is able to correct for 70% of boom wobble error given estimates of boom wobble, and (ii) a method of wobble estimation, based on the fusion of instantaneous wobble estimates with previous estimates, which is robust against non-Gaussian X-ray beam cross-sections and approaches ground truth accuracy. The combination of the two approaches provides a method for the reduction of wobble artefacts in images. The two methods have good potential for application in analogous scenarios in medical imaging, radiation physics, laser science and biophysics.
N. S. Blunt, T. W. Rogers, J. S. Spencer, and W. M. C. Foulkes
Density-matrix quantum Monte Carlo method
Physical Review B 89, 245124 (2014).
## Density-matrix quantum Monte Carlo method
Abstract: We present a quantum Monte Carlo method capable of sampling the full density matrix of a many-particle system at finite temperature. This allows arbitrary reduced density matrix elements and expectation values of complicated nonlocal observables to be evaluated easily. The method resembles full configuration interaction quantum Monte Carlo but works in the space of many-particle operators instead of the space of many-particle wave functions. One simulation provides the density matrix at all temperatures simultaneously, from $T=\infty$ to $T=0$, allowing the temperature dependence of expectation values to be studied. The direct sampling of the density matrix also allows the calculation of some previously inaccessible entanglement measures. We explain the theory underlying the method, describe the algorithm, and introduce an importance-sampling procedure to improve the stochastic efficiency. To demonstrate the potential of our approach, the energy and staggered magnetization of the isotropic antiferromagnetic Heisenberg model on small lattices, the concurrence of one-dimensional spin rings, and the Renyi ${S}_{2}$ entanglement entropy of various sublattices of the $6×6$ Heisenberg model are calculated. The nature of the sign problem in the method is also investigated.
https://tex.stackexchange.com/questions/217986/standard-ams-sum-operator-using-mnsymbols
# Standard AMS-Sum Operator using MnSymbols?
Because the professor who gives one of the lectures I attend this term writes so slowly, I am typesetting a lecture script for that lecture in TeX. For the use of some symbols and for better underbrace rendering I use MnSymbol. As those of you who are familiar with MnSymbol may know, it offers a different sum operator than the amsmath package does. It also offers a different integral operator, but I already fixed that by loading "esint" as the last math font package, as I like those amsmath math operators better. Is there a way of easily "fixing" the sum operators to be the standard ones without having to stop using MnSymbol? Any help would be appreciated.
• If you have a small set of symbols from MnSymbols to use, it is relatively easy to do. Is that the case? Dec 14, 2014 at 23:37
• Yes, that's the case; it's mainly (big)cupdot and the braces from MnSymbol. What does the trick then in this case? Thank you in advance :) Dec 15, 2014 at 12:45
• this question may help if you want to use just a few symbols from "another" font: Importing a Single Symbol From a Different Font Dec 15, 2014 at 13:49
• Ok, so I'll have to find out how the syntax is for importing the sum operator from amsmath. I'll try this by myself, if I don't succeed I'd be glad if one of you could help me. Dec 15, 2014 at 14:38
• I did not find any table whatsoever which showed me the name of the \sum operator in amsmath, so I do not know how to achieve this by myself. Please help me. Dec 16, 2014 at 7:55
See FOLLOW UP below for importing CM symbols into MnSymbol. But first...
Import MnSymbols:
If you only use a few MnSymbols, it is easiest to import just those, rather than loading the MnSymbol package. Here is how it is done (I used, for MnSymbol-specific reference, the question "Dashed left arrow over symbol", though other questions on this site about importing symbols are also useful).
It helps to look through the MnSymbol package document (or better still, mnsymbol.sty) to find from which font family the symbol derives, and then it helps to use \fonttable (package fonttable) to display the font family in a table, from which you can determine the slot number corresponding to the desired glyph.
First, to determine font families, I look through mnsymbol.sty for the glyph name, and use it to determine the font family:
From the above pictures, I see that the \bigcupdot glyphs (\displaystyle and \textstyle) are \mathops and come from the symbols font. The name symbols is associated with the MnSymbolF font family. These are needed in my MWE. The \cupdot glyph comes from a different font family (MnSymbolC).
In the MWE below, I show the importation of 3 glyphs from two different font families: \cupdot, \tbigcupdot (textstyle \bigcupdot) and \dbigcupdot (displaystyle \bigcupdot). I then use \mathchoice, to tell LaTeX to use the proper bigcupdot style in the appropriate math style.
If you uncomment my commented lines concerning fonttable, you can see the font tables from which I determined the glyph's slot numbers.
\documentclass{article}
% =============================================
%Import symbols from font MnSymbol without importing the whole package
% =============================================
\DeclareFontFamily{U} {MnSymbolC}{}
\DeclareFontShape{U}{MnSymbolC}{m}{n}{
<-6> MnSymbolC5
<6-7> MnSymbolC6
<7-8> MnSymbolC7
<8-9> MnSymbolC8
<9-10> MnSymbolC9
<10-12> MnSymbolC10
<12-> MnSymbolC12}{}
\DeclareFontShape{U}{MnSymbolC}{b}{n}{
<-6> MnSymbolC-Bold5
<6-7> MnSymbolC-Bold6
<7-8> MnSymbolC-Bold7
<8-9> MnSymbolC-Bold8
<9-10> MnSymbolC-Bold9
<10-12> MnSymbolC-Bold10
<12-> MnSymbolC-Bold12}{}
\DeclareSymbolFont{MnSyC} {U} {MnSymbolC}{m}{n}
\DeclareMathSymbol{\cupdot}{\mathbin}{MnSyC}{60}
% =============================================
\DeclareFontFamily{U} {MnSymbolF}{}
\DeclareFontShape{U}{MnSymbolF}{m}{n}{
<-6> MnSymbolF5
<6-7> MnSymbolF6
<7-8> MnSymbolF7
<8-9> MnSymbolF8
<9-10> MnSymbolF9
<10-12> MnSymbolF10
<12-> MnSymbolF12}{}
\DeclareFontShape{U}{MnSymbolF}{b}{n}{
<-6> MnSymbolF-Bold5
<6-7> MnSymbolF-Bold6
<7-8> MnSymbolF-Bold7
<8-9> MnSymbolF-Bold8
<9-10> MnSymbolF-Bold9
<10-12> MnSymbolF-Bold10
<12-> MnSymbolF-Bold12}{}
\DeclareSymbolFont{SymbolF} {U} {MnSymbolF}{m}{n}
\DeclareMathSymbol{\dbigcupdot}{\mathop}{SymbolF}{35}
\DeclareMathSymbol{\tbigcupdot}{\mathop}{SymbolF}{34}
\def\bigcupdot{\mathchoice{\dbigcupdot}{\tbigcupdot}{\tbigcupdot}{\tbigcupdot}}
% =============================================
%\usepackage{fonttable}
\begin{document}
\centering
$x \cupdot y$\par
$x \bigcupdot y \quad \scriptstyle x \bigcupdot y \quad \scriptscriptstyle x \bigcupdot y$
$x \bigcupdot y$
%\clearpage\fonttable{MnSymbolF8}
%\clearpage\fonttable{MnSymbolC10}
\end{document}
The OP asked whether MnSymbol can be the default, with individual symbols (for example, \sum) imported from the LaTeX defaults. As barbara points out in the comments, the default \sum comes from the cmex font set. Here, I import it as \Xsum (I am not sure whether my \DeclareFontShape invocation is appropriate for cm fonts, but I mimicked what had been done for MnSymbol):
\documentclass{article}
\usepackage{mnsymbol}
% =============================================
%Import symbols from font cmex without importing the whole package
% =============================================
\DeclareFontFamily{U} {cmex}{}
\DeclareFontShape{U}{cmex}{m}{n}{
<-6> cmex5
<6-7> cmex6
<7-8> cmex7
<8-9> cmex8
<9-10> cmex9
<10-12> cmex10
<12-> cmex12}{}
\DeclareSymbolFont{Xcmex} {U} {cmex}{m}{n}
\DeclareMathSymbol{\Xdsum}{\mathop}{Xcmex}{88}
\DeclareMathSymbol{\Xtsum}{\mathop}{Xcmex}{80}
\DeclareMathOperator*{\Xsum}{\mathchoice{\Xdsum}{\Xtsum}{\Xtsum}{\Xtsum}}
% =============================================
\usepackage{fonttable}
\begin{document}
\centering
sum under MnSymbol:\par
$\sum_{i=1}^2 x_i$
$\sum_{i=1}^2 x_i$
Defined Xsum from cmex:\par
$\Xsum_{i=1}^2 x_i$
$\Xsum_{i=1}^2 x_i$
\tiny\fonttable{cmex8}
\end{document}
• Hi, the disjoint union symbol is not quite the deal, it's the bracket rendering; still, it's a very kind answer. I'll just try this with the sum operator out of the amsmath package. Dec 24, 2014 at 17:03
• @Friedrich I wasn't sure what you had in mind when you mentioned "brackets" (the \{\} brace glyphs actually looked very similar to computer modern defaults), so I did not try to do anything with that. But the framework is here for importing glyphs from MnSymbol. Dec 24, 2014 at 17:18
• Could you help me with importing from amsmath anyway, or does this method not work with that? Dec 25, 2014 at 19:15
• @Friedrich In general, each font has a similar, though unique, method. But in this case, I believe the \sum operator you make mention of does not actually originate from the amsmath font set, as the loading (or not) of amsmath does not change the look of the \sum. Thus, I'm guessing you are actually seeking the \sum glyphs from the LaTeX-default Computer-Modern font set (which are discarded with the loading of MnSymbol). Is this the case? Dec 25, 2014 at 22:39
• @Friedrich -- \sum is "basic tex". the definition for these symbols (in the font cmex) is found in the file fontmath.ltx. Dec 26, 2014 at 0:23
https://math.stackexchange.com/questions/1105214/difference-between-newtons-method-and-gauss-newton-method/2687949
# Difference between Newton's method and Gauss-Newton method
I know that the Gauss-Newton method is essentially Newton's method with the modification that it uses the approximation $2J^TJ$ (where $J$ is the Jacobian matrix) for the Hessian matrix.
I didn't understand why we are using this approximation. Can anyone explain how this approximation occur?
Thanks
If the objective function is the classical sum of squares, it can be written as the norm squared of a certain error vector $\boldsymbol e \in \mathbb{R}^m$: $$f(\boldsymbol x) = \| \boldsymbol e(\boldsymbol x)\|^2 = \boldsymbol e ^T(\boldsymbol x) \boldsymbol e(\boldsymbol x)$$ where $\boldsymbol x \in \mathbb{R}^n$ is the decision variable.
Newton's algorithm tries to minimize the objective function by finding a point where its gradient vanishes, using a local linear approximation of the gradient difference: $$\nabla f(\boldsymbol x_{k+1}) - \nabla f(\boldsymbol x_{k}) \approx \boldsymbol Hf(\boldsymbol x_{k})(\boldsymbol x_{k+1} -\boldsymbol x_{k})$$ under the hypothesis that the function is convex or that the Hessian matrix is locally positive semi-definite; otherwise the algorithm fails, because it gets attracted by any stationary point, which may be a minimum, a maximum or a saddle point. If the above expression is rewritten as an affine transformation: $$\nabla f(\boldsymbol x_{k+1}) \approx \nabla f(\boldsymbol x_{k}) +\boldsymbol Hf(\boldsymbol x_{k})(\boldsymbol x_{k+1} -\boldsymbol x_{k})$$ the optimum update $\Delta \boldsymbol x^*_k = \boldsymbol x_{k+1} -\boldsymbol x_{k}$ can be found by solving the equation: $$\nabla f(\boldsymbol x_{k}) = -\boldsymbol Hf(\boldsymbol x_{k})\Delta \boldsymbol x^*_k$$
The Hessian matrix, owing to the particular structure of the cost function, depends upon both first and second derivatives of each component $e_i(\boldsymbol x)$ of the error vector. Considering that: $$e_i(\boldsymbol x + \Delta \boldsymbol x) = e_i(\boldsymbol x) + \nabla e_i^T(\boldsymbol x) \Delta \boldsymbol x + \frac{1}{2}\Delta \boldsymbol x^T \boldsymbol He_i(\boldsymbol x) \Delta \boldsymbol x + O(\|\Delta \boldsymbol x \|^3), \quad \Delta \boldsymbol x \to \boldsymbol 0, \forall i = 1, \ldots, m$$ hence: $$\boldsymbol Hf(\boldsymbol x) = \frac{\partial^2 f(\boldsymbol x)}{\partial \boldsymbol x^2} = \frac{\partial}{\partial \boldsymbol x}\frac{\partial f(\boldsymbol x)}{\partial \boldsymbol x} = \frac{\partial}{\partial \boldsymbol x}\left(2\, \boldsymbol J^T \boldsymbol e(\boldsymbol x)\, \boldsymbol e(\boldsymbol x)\right) = 2 \left(\boldsymbol J^T \boldsymbol e(\boldsymbol x) \boldsymbol J \boldsymbol e(\boldsymbol x) + \sum_{i=1}^{m} e_i(\boldsymbol x) \boldsymbol He_i(\boldsymbol x) \right)$$ where $\boldsymbol J \boldsymbol e(\boldsymbol x)$ is the error Jacobian matrix defined as: $$\boldsymbol J \boldsymbol e(\boldsymbol x) = \begin{pmatrix} \nabla e_1^T(\boldsymbol x) \\ \vdots \\ \nabla e_m^T(\boldsymbol x) \end{pmatrix}$$
If the second derivatives of the error components $e_h(\boldsymbol x)$ $$\frac{\partial^2 e_h(\boldsymbol x)}{\partial x_i \partial x_j}$$ are not known, one can approximate the hessian matrix by neglecting the second part (which becomes more and more negligible as the error gets smaller so it makes perfectly sense when the residuals are very small):
$$\boldsymbol Hf(\boldsymbol x) \approx 2 \boldsymbol J^T \boldsymbol e(\boldsymbol x) \boldsymbol J \boldsymbol e(\boldsymbol x)$$ $$\boldsymbol \nabla f(\boldsymbol x) = 2 \boldsymbol J^T \boldsymbol e(\boldsymbol x) \boldsymbol e(\boldsymbol x)$$
This gives rise to the Gauss-Newton algorithm: $$2 \boldsymbol J^T \boldsymbol e(\boldsymbol x) \boldsymbol e(\boldsymbol x) = -\left( 2 \boldsymbol J^T \boldsymbol e(\boldsymbol x) \boldsymbol J \boldsymbol e(\boldsymbol x) \right) \Delta \boldsymbol x^* \Leftrightarrow$$ $$\boldsymbol J^T \boldsymbol e(\boldsymbol x) \boldsymbol e(\boldsymbol x) = -\left(\boldsymbol J^T \boldsymbol e(\boldsymbol x) \boldsymbol J \boldsymbol e(\boldsymbol x) \right) \Delta \boldsymbol x^*$$
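The update rule above is easy to sketch numerically. The following is a minimal sketch (the model, the data, and the names `gauss_newton`, `e`, `J` are mine, not from the answer): at each step it solves the normal equations $(\boldsymbol J\boldsymbol e)^T(\boldsymbol J\boldsymbol e)\,\Delta\boldsymbol x = -(\boldsymbol J\boldsymbol e)^T\boldsymbol e$ for a small exponential fit.

```python
import numpy as np

def gauss_newton(e, J, x0, iters=20):
    """Minimize ||e(x)||^2 by solving (Je)^T (Je) dx = -(Je)^T e at each step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = e(x)        # residual vector e(x)
        Jx = J(x)       # error Jacobian, shape (m, n)
        dx = np.linalg.solve(Jx.T @ Jx, -Jx.T @ r)
        x = x + dx
    return x

# Zero-residual test problem: fit y = a * exp(b * t) to exact data,
# so the neglected sum of e_i * He_i vanishes at the optimum.
t = np.linspace(0.0, 1.0, 8)
y = 2.0 * np.exp(-1.5 * t)
e = lambda x: x[0] * np.exp(x[1] * t) - y
J = lambda x: np.column_stack([np.exp(x[1] * t), x[0] * t * np.exp(x[1] * t)])

a, b = gauss_newton(e, J, x0=[1.0, -1.0])  # converges to a = 2, b = -1.5
```

Because the residuals vanish at the solution, the dropped second-order term is zero there, which is exactly the regime where Gauss-Newton behaves like full Newton.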
• It's also worth mentioning the Gauss-Newton step always exists. Sep 24, 2018 at 20:13
The difference can be seen with a scalar function.
Gauss Newton is used to solve nonlinear least squares problems and the objective has the form $f(x) = r(x)^2$. The derivatives are $f'(x) = 2 r(x) r'(x)$ and $f''(x) = 2 ( r(x) r''(x) + (r'(x))^2)$.
Newton's method uses the second derivative $f''(x)$ above; the Gauss-Newton method uses the approximation $f''(x) \approx 2 (r'(x))^2$ (that is, the term involving the second derivative of $r$ is dropped).
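In this scalar setting the two steps can be compared directly; a small sketch (the function names are mine, not from the answer). Note that the Gauss-Newton step on $f = r^2$ simplifies to $x - r(x)/r'(x)$, i.e. Newton's root-finding iteration applied to the residual $r$ itself:

```python
import math

def newton_step(x, r, rp, rpp):
    # Full Newton on f = r^2: f' = 2 r r', f'' = 2 (r r'' + r'^2).
    return x - (2.0 * r(x) * rp(x)) / (2.0 * (r(x) * rpp(x) + rp(x) ** 2))

def gauss_newton_step(x, r, rp):
    # Drop the r * r'' term: f'' ~ 2 r'^2, so the step reduces to -r/r'.
    return x - r(x) / rp(x)

r = lambda x: x ** 2 - 2.0    # residual; zero at sqrt(2)
rp = lambda x: 2.0 * x        # r'
rpp = lambda x: 2.0           # r''

x = 1.0
for _ in range(8):
    x = gauss_newton_step(x, r, rp)
# x converges to sqrt(2), where the residual (hence the sum of squares) vanishes
```

For this particular residual the Gauss-Newton iteration is the classical Babylonian square-root recurrence $x \mapsto x/2 + 1/x$.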
Newton computes the update step $s$ by solving $F'(x)·s=-F(x)$.
Gauss-Newton determines the update by minimizing the error in the linearization of the overdetermined system, i.e., minimizes $\|F'(x)·s+F(x)\|$. The expanded form of the square of this error is $$\|F(x)\|^2 + 2·F(x)^TF'(x)·s+s^T·F'(x)^TF'(x)·s$$ The quadratic term is not an approximation for the Hessian of $\|F(x+s)\|^2$, just an expression in the error minimization of a linear system.
https://arcadianfunctor.wordpress.com/2007/04/27/the-maypole/
## The Maypole
The strange people who have been hanging around this blog for a while will recall a paper by Mulase and Waldron on matrix models and quaternionic graphs.
In particular, T duality appears between the symplectic and orthogonal integrals. This involves a doubling in the size of the matrices being considered. For this reason, it might be interesting to investigate the doubling of matrix sizes in the honeycomb geometry.
Recall that in the 3×3 case, a single central hexagon appears. For 4×4 matrices, there are three central hexagons. In general, the number of hexagons is the sum $1 + 2 + 3 + \cdots + (N-2)$ for $N \times N$ matrices, which is equal to $\frac{1}{2} (N-1)(N-2)$. Observe that as $N \rightarrow \infty$ the increase in the number of hexagons obtained by doubling the matrix size is fourfold, since for the $\frac{N}{2}$ case the total is $\frac{1}{8} (N^2 - 6N + 8)$. For any $N$, the number of additional hexagons is given by $\frac{3}{8} N(N - 2)$.
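These counts are easy to check numerically; a minimal sketch (the function name is mine) sums $1 + 2 + \cdots + (N-2)$ directly and compares a size with its double:

```python
# Number of central hexagons in the honeycomb for N x N matrices:
# the sum 1 + 2 + ... + (N - 2), i.e. (N - 1)(N - 2) / 2.
def hexagons(n):
    return sum(range(1, n - 1))  # empty sum (0) for n <= 2

assert hexagons(3) == 1   # one central hexagon for 3x3
assert hexagons(4) == 3   # three central hexagons for 4x4

# Doubling the matrix size asymptotically quadruples the count:
ratio = hexagons(1000) / hexagons(500)  # approaches 4 for large N
```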
By the way, the maypole is a dance (that my childhood ballet class used to perform each year) in which ribbons are knotted.
## 1 Response so far »
1. 1
### nige said,
Hi Kea,
Just to say I’m put off by a word in your opening sentence:
“The strange people who have been hanging around this blog for a while…”
Umm. Maybe you could be a little more lucid next time. E.g.,
“The brilliant people who have been hanging around this blog for a while…”
https://people.maths.bris.ac.uk/~matyd/GroupNames/448/C2xC7sQ32.html
## G = C2×C7⋊Q32, order 448 = 2⁶·7
### Direct product of C2 and C7⋊Q32
Series:
Derived series: C1 — C56 — C2×C7⋊Q32
Chief series: C1 — C7 — C14 — C28 — C56 — Dic28 — C2×Dic28 — C2×C7⋊Q32
Lower central series: C7 — C14 — C28 — C56 — C2×C7⋊Q32
Upper central series: C1 — C2² — C2×C4 — C2×C8 — C2×Q16
Generators and relations for C2×C7⋊Q32
G = < a,b,c,d | a² = b⁷ = c¹⁶ = 1, d² = c⁸, ab = ba, ac = ca, ad = da, cbc⁻¹ = b⁻¹, bd = db, dcd⁻¹ = c⁻¹ >
Subgroups: 356 in 82 conjugacy classes, 39 normal (23 characteristic)
C1, C2, C2, C4, C4, C2², C7, C8, C2×C4, C2×C4, Q8, C14, C14, C16, C2×C8, Q16, Q16, C2×Q8, Dic7, C28, C28, C2×C14, C2×C16, Q32, C2×Q16, C2×Q16, C56, Dic14, C2×Dic7, C2×C28, C2×C28, C7×Q8, C2×Q32, C7⋊C16, Dic28, Dic28, C2×C56, C7×Q16, C7×Q16, C2×Dic14, Q8×C14, C2×C7⋊C16, C7⋊Q32, C2×Dic28, C14×Q16, C2×C7⋊Q32
Quotients: C1, C2, C2², D4, C2³, D7, D8, C2×D4, D14, Q32, C2×D8, C7⋊D4, C2²×D7, C2×Q32, D4⋊D7, C2×C7⋊D4, C7⋊Q32, C2×D4⋊D7, C2×C7⋊Q32
Smallest permutation representation of C2×C7⋊Q32
Regular action on 448 points
Generators in S448
(1 371)(2 372)(3 373)(4 374)(5 375)(6 376)(7 377)(8 378)(9 379)(10 380)(11 381)(12 382)(13 383)(14 384)(15 369)(16 370)(17 413)(18 414)(19 415)(20 416)(21 401)(22 402)(23 403)(24 404)(25 405)(26 406)(27 407)(28 408)(29 409)(30 410)(31 411)(32 412)(33 278)(34 279)(35 280)(36 281)(37 282)(38 283)(39 284)(40 285)(41 286)(42 287)(43 288)(44 273)(45 274)(46 275)(47 276)(48 277)(49 140)(50 141)(51 142)(52 143)(53 144)(54 129)(55 130)(56 131)(57 132)(58 133)(59 134)(60 135)(61 136)(62 137)(63 138)(64 139)(65 214)(66 215)(67 216)(68 217)(69 218)(70 219)(71 220)(72 221)(73 222)(74 223)(75 224)(76 209)(77 210)(78 211)(79 212)(80 213)(81 299)(82 300)(83 301)(84 302)(85 303)(86 304)(87 289)(88 290)(89 291)(90 292)(91 293)(92 294)(93 295)(94 296)(95 297)(96 298)(97 347)(98 348)(99 349)(100 350)(101 351)(102 352)(103 337)(104 338)(105 339)(106 340)(107 341)(108 342)(109 343)(110 344)(111 345)(112 346)(113 160)(114 145)(115 146)(116 147)(117 148)(118 149)(119 150)(120 151)(121 152)(122 153)(123 154)(124 155)(125 156)(126 157)(127 158)(128 159)(161 208)(162 193)(163 194)(164 195)(165 196)(166 197)(167 198)(168 199)(169 200)(170 201)(171 202)(172 203)(173 204)(174 205)(175 206)(176 207)(177 357)(178 358)(179 359)(180 360)(181 361)(182 362)(183 363)(184 364)(185 365)(186 366)(187 367)(188 368)(189 353)(190 354)(191 355)(192 356)(225 391)(226 392)(227 393)(228 394)(229 395)(230 396)(231 397)(232 398)(233 399)(234 400)(235 385)(236 386)(237 387)(238 388)(239 389)(240 390)(241 436)(242 437)(243 438)(244 439)(245 440)(246 441)(247 442)(248 443)(249 444)(250 445)(251 446)(252 447)(253 448)(254 433)(255 434)(256 435)(257 305)(258 306)(259 307)(260 308)(261 309)(262 310)(263 311)(264 312)(265 313)(266 314)(267 315)(268 316)(269 317)(270 318)(271 319)(272 320)(321 425)(322 426)(323 427)(324 428)(325 429)(326 430)(327 431)(328 432)(329 417)(330 418)(331 419)(332 420)(333 421)(334 422)(335 423)(336 424)
(1 351 318 158 328 228 185)(2 186 229 329 159 319 352)(3 337 320 160 330 230 187)(4 188 231 331 145 305 338)(5 339 306 146 332 232 189)(6 190 233 333 147 307 340)(7 341 308 148 334 234 191)(8 192 235 335 149 309 342)(9 343 310 150 336 236 177)(10 178 237 321 151 311 344)(11 345 312 152 322 238 179)(12 180 239 323 153 313 346)(13 347 314 154 324 240 181)(14 182 225 325 155 315 348)(15 349 316 156 326 226 183)(16 184 227 327 157 317 350)(17 141 213 205 85 241 43)(18 44 242 86 206 214 142)(19 143 215 207 87 243 45)(20 46 244 88 208 216 144)(21 129 217 193 89 245 47)(22 48 246 90 194 218 130)(23 131 219 195 91 247 33)(24 34 248 92 196 220 132)(25 133 221 197 93 249 35)(26 36 250 94 198 222 134)(27 135 223 199 95 251 37)(28 38 252 96 200 224 136)(29 137 209 201 81 253 39)(30 40 254 82 202 210 138)(31 139 211 203 83 255 41)(32 42 256 84 204 212 140)(49 412 287 435 302 173 79)(50 80 174 303 436 288 413)(51 414 273 437 304 175 65)(52 66 176 289 438 274 415)(53 416 275 439 290 161 67)(54 68 162 291 440 276 401)(55 402 277 441 292 163 69)(56 70 164 293 442 278 403)(57 404 279 443 294 165 71)(58 72 166 295 444 280 405)(59 406 281 445 296 167 73)(60 74 168 297 446 282 407)(61 408 283 447 298 169 75)(62 76 170 299 448 284 409)(63 410 285 433 300 171 77)(64 78 172 301 434 286 411)(97 266 123 428 390 361 383)(98 384 362 391 429 124 267)(99 268 125 430 392 363 369)(100 370 364 393 431 126 269)(101 270 127 432 394 365 371)(102 372 366 395 417 128 271)(103 272 113 418 396 367 373)(104 374 368 397 419 114 257)(105 258 115 420 398 353 375)(106 376 354 399 421 116 259)(107 260 117 422 400 355 377)(108 378 356 385 423 118 261)(109 262 119 424 386 357 379)(110 380 358 387 425 120 263)(111 264 121 426 388 359 381)(112 382 360 389 427 122 265)
(1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16)(17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32)(33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48)(49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64)(65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80)(81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96)(97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112)(113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128)(129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144)(145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160)(161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176)(177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192)(193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208)(209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224)(225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240)(241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256)(257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272)(273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288)(289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304)(305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320)(321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336)(337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352)(353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368)(369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384)(385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400)(401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416)(417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432)(433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448)
(1 135 9 143)(2 134 10 142)(3 133 11 141)(4 132 12 140)(5 131 13 139)(6 130 14 138)(7 129 15 137)(8 144 16 136)(17 187 25 179)(18 186 26 178)(19 185 27 177)(20 184 28 192)(21 183 29 191)(22 182 30 190)(23 181 31 189)(24 180 32 188)(33 240 41 232)(34 239 42 231)(35 238 43 230)(36 237 44 229)(37 236 45 228)(38 235 46 227)(39 234 47 226)(40 233 48 225)(49 374 57 382)(50 373 58 381)(51 372 59 380)(52 371 60 379)(53 370 61 378)(54 369 62 377)(55 384 63 376)(56 383 64 375)(65 102 73 110)(66 101 74 109)(67 100 75 108)(68 99 76 107)(69 98 77 106)(70 97 78 105)(71 112 79 104)(72 111 80 103)(81 148 89 156)(82 147 90 155)(83 146 91 154)(84 145 92 153)(85 160 93 152)(86 159 94 151)(87 158 95 150)(88 157 96 149)(113 295 121 303)(114 294 122 302)(115 293 123 301)(116 292 124 300)(117 291 125 299)(118 290 126 298)(119 289 127 297)(120 304 128 296)(161 269 169 261)(162 268 170 260)(163 267 171 259)(164 266 172 258)(165 265 173 257)(166 264 174 272)(167 263 175 271)(168 262 176 270)(193 316 201 308)(194 315 202 307)(195 314 203 306)(196 313 204 305)(197 312 205 320)(198 311 206 319)(199 310 207 318)(200 309 208 317)(209 341 217 349)(210 340 218 348)(211 339 219 347)(212 338 220 346)(213 337 221 345)(214 352 222 344)(215 351 223 343)(216 350 224 342)(241 330 249 322)(242 329 250 321)(243 328 251 336)(244 327 252 335)(245 326 253 334)(246 325 254 333)(247 324 255 332)(248 323 256 331)(273 395 281 387)(274 394 282 386)(275 393 283 385)(276 392 284 400)(277 391 285 399)(278 390 286 398)(279 389 287 397)(280 388 288 396)(353 403 361 411)(354 402 362 410)(355 401 363 409)(356 416 364 408)(357 415 365 407)(358 414 366 406)(359 413 367 405)(360 412 368 404)(417 445 425 437)(418 444 426 436)(419 443 427 435)(420 442 428 434)(421 441 429 433)(422 440 430 448)(423 439 431 447)(424 438 432 446)
G:=sub<Sym(448)| (1,371)(2,372)(3,373)(4,374)(5,375)(6,376)(7,377)(8,378)(9,379)(10,380)(11,381)(12,382)(13,383)(14,384)(15,369)(16,370)(17,413)(18,414)(19,415)(20,416)(21,401)(22,402)(23,403)(24,404)(25,405)(26,406)(27,407)(28,408)(29,409)(30,410)(31,411)(32,412)(33,278)(34,279)(35,280)(36,281)(37,282)(38,283)(39,284)(40,285)(41,286)(42,287)(43,288)(44,273)(45,274)(46,275)(47,276)(48,277)(49,140)(50,141)(51,142)(52,143)(53,144)(54,129)(55,130)(56,131)(57,132)(58,133)(59,134)(60,135)(61,136)(62,137)(63,138)(64,139)(65,214)(66,215)(67,216)(68,217)(69,218)(70,219)(71,220)(72,221)(73,222)(74,223)(75,224)(76,209)(77,210)(78,211)(79,212)(80,213)(81,299)(82,300)(83,301)(84,302)(85,303)(86,304)(87,289)(88,290)(89,291)(90,292)(91,293)(92,294)(93,295)(94,296)(95,297)(96,298)(97,347)(98,348)(99,349)(100,350)(101,351)(102,352)(103,337)(104,338)(105,339)(106,340)(107,341)(108,342)(109,343)(110,344)(111,345)(112,346)(113,160)(114,145)(115,146)(116,147)(117,148)(118,149)(119,150)(120,151)(121,152)(122,153)(123,154)(124,155)(125,156)(126,157)(127,158)(128,159)(161,208)(162,193)(163,194)(164,195)(165,196)(166,197)(167,198)(168,199)(169,200)(170,201)(171,202)(172,203)(173,204)(174,205)(175,206)(176,207)(177,357)(178,358)(179,359)(180,360)(181,361)(182,362)(183,363)(184,364)(185,365)(186,366)(187,367)(188,368)(189,353)(190,354)(191,355)(192,356)(225,391)(226,392)(227,393)(228,394)(229,395)(230,396)(231,397)(232,398)(233,399)(234,400)(235,385)(236,386)(237,387)(238,388)(239,389)(240,390)(241,436)(242,437)(243,438)(244,439)(245,440)(246,441)(247,442)(248,443)(249,444)(250,445)(251,446)(252,447)(253,448)(254,433)(255,434)(256,435)(257,305)(258,306)(259,307)(260,308)(261,309)(262,310)(263,311)(264,312)(265,313)(266,314)(267,315)(268,316)(269,317)(270,318)(271,319)(272,320)(321,425)(322,426)(323,427)(324,428)(325,429)(326,430)(327,431)(328,432)(329,417)(330,418)(331,419)(332,420)(333,421)(334,422)(335,423)(336,424), 
(1,351,318,158,328,228,185)(2,186,229,329,159,319,352)(3,337,320,160,330,230,187)(4,188,231,331,145,305,338)(5,339,306,146,332,232,189)(6,190,233,333,147,307,340)(7,341,308,148,334,234,191)(8,192,235,335,149,309,342)(9,343,310,150,336,236,177)(10,178,237,321,151,311,344)(11,345,312,152,322,238,179)(12,180,239,323,153,313,346)(13,347,314,154,324,240,181)(14,182,225,325,155,315,348)(15,349,316,156,326,226,183)(16,184,227,327,157,317,350)(17,141,213,205,85,241,43)(18,44,242,86,206,214,142)(19,143,215,207,87,243,45)(20,46,244,88,208,216,144)(21,129,217,193,89,245,47)(22,48,246,90,194,218,130)(23,131,219,195,91,247,33)(24,34,248,92,196,220,132)(25,133,221,197,93,249,35)(26,36,250,94,198,222,134)(27,135,223,199,95,251,37)(28,38,252,96,200,224,136)(29,137,209,201,81,253,39)(30,40,254,82,202,210,138)(31,139,211,203,83,255,41)(32,42,256,84,204,212,140)(49,412,287,435,302,173,79)(50,80,174,303,436,288,413)(51,414,273,437,304,175,65)(52,66,176,289,438,274,415)(53,416,275,439,290,161,67)(54,68,162,291,440,276,401)(55,402,277,441,292,163,69)(56,70,164,293,442,278,403)(57,404,279,443,294,165,71)(58,72,166,295,444,280,405)(59,406,281,445,296,167,73)(60,74,168,297,446,282,407)(61,408,283,447,298,169,75)(62,76,170,299,448,284,409)(63,410,285,433,300,171,77)(64,78,172,301,434,286,411)(97,266,123,428,390,361,383)(98,384,362,391,429,124,267)(99,268,125,430,392,363,369)(100,370,364,393,431,126,269)(101,270,127,432,394,365,371)(102,372,366,395,417,128,271)(103,272,113,418,396,367,373)(104,374,368,397,419,114,257)(105,258,115,420,398,353,375)(106,376,354,399,421,116,259)(107,260,117,422,400,355,377)(108,378,356,385,423,118,261)(109,262,119,424,386,357,379)(110,380,358,387,425,120,263)(111,264,121,426,388,359,381)(112,382,360,389,427,122,265), 
(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16)(17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32)(33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48)(49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64)(65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80)(81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96)(97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112)(113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128)(129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144)(145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160)(161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176)(177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192)(193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208)(209,210,211,212,213,214,215,216,217,218,219,220,221,222,223,224)(225,226,227,228,229,230,231,232,233,234,235,236,237,238,239,240)(241,242,243,244,245,246,247,248,249,250,251,252,253,254,255,256)(257,258,259,260,261,262,263,264,265,266,267,268,269,270,271,272)(273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288)(289,290,291,292,293,294,295,296,297,298,299,300,301,302,303,304)(305,306,307,308,309,310,311,312,313,314,315,316,317,318,319,320)(321,322,323,324,325,326,327,328,329,330,331,332,333,334,335,336)(337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352)(353,354,355,356,357,358,359,360,361,362,363,364,365,366,367,368)(369,370,371,372,373,374,375,376,377,378,379,380,381,382,383,384)(385,386,387,388,389,390,391,392,393,394,395,396,397,398,399,400)(401,402,403,404,405,406,407,408,409,410,411,412,413,414,415,416)(417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432)(433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448), 
(1,135,9,143)(2,134,10,142)(3,133,11,141)(4,132,12,140)(5,131,13,139)(6,130,14,138)(7,129,15,137)(8,144,16,136)(17,187,25,179)(18,186,26,178)(19,185,27,177)(20,184,28,192)(21,183,29,191)(22,182,30,190)(23,181,31,189)(24,180,32,188)(33,240,41,232)(34,239,42,231)(35,238,43,230)(36,237,44,229)(37,236,45,228)(38,235,46,227)(39,234,47,226)(40,233,48,225)(49,374,57,382)(50,373,58,381)(51,372,59,380)(52,371,60,379)(53,370,61,378)(54,369,62,377)(55,384,63,376)(56,383,64,375)(65,102,73,110)(66,101,74,109)(67,100,75,108)(68,99,76,107)(69,98,77,106)(70,97,78,105)(71,112,79,104)(72,111,80,103)(81,148,89,156)(82,147,90,155)(83,146,91,154)(84,145,92,153)(85,160,93,152)(86,159,94,151)(87,158,95,150)(88,157,96,149)(113,295,121,303)(114,294,122,302)(115,293,123,301)(116,292,124,300)(117,291,125,299)(118,290,126,298)(119,289,127,297)(120,304,128,296)(161,269,169,261)(162,268,170,260)(163,267,171,259)(164,266,172,258)(165,265,173,257)(166,264,174,272)(167,263,175,271)(168,262,176,270)(193,316,201,308)(194,315,202,307)(195,314,203,306)(196,313,204,305)(197,312,205,320)(198,311,206,319)(199,310,207,318)(200,309,208,317)(209,341,217,349)(210,340,218,348)(211,339,219,347)(212,338,220,346)(213,337,221,345)(214,352,222,344)(215,351,223,343)(216,350,224,342)(241,330,249,322)(242,329,250,321)(243,328,251,336)(244,327,252,335)(245,326,253,334)(246,325,254,333)(247,324,255,332)(248,323,256,331)(273,395,281,387)(274,394,282,386)(275,393,283,385)(276,392,284,400)(277,391,285,399)(278,390,286,398)(279,389,287,397)(280,388,288,396)(353,403,361,411)(354,402,362,410)(355,401,363,409)(356,416,364,408)(357,415,365,407)(358,414,366,406)(359,413,367,405)(360,412,368,404)(417,445,425,437)(418,444,426,436)(419,443,427,435)(420,442,428,434)(421,441,429,433)(422,440,430,448)(423,439,431,447)(424,438,432,446)>;
G:=Group( (1,371)(2,372)(3,373)(4,374)(5,375)(6,376)(7,377)(8,378)(9,379)(10,380)(11,381)(12,382)(13,383)(14,384)(15,369)(16,370)(17,413)(18,414)(19,415)(20,416)(21,401)(22,402)(23,403)(24,404)(25,405)(26,406)(27,407)(28,408)(29,409)(30,410)(31,411)(32,412)(33,278)(34,279)(35,280)(36,281)(37,282)(38,283)(39,284)(40,285)(41,286)(42,287)(43,288)(44,273)(45,274)(46,275)(47,276)(48,277)(49,140)(50,141)(51,142)(52,143)(53,144)(54,129)(55,130)(56,131)(57,132)(58,133)(59,134)(60,135)(61,136)(62,137)(63,138)(64,139)(65,214)(66,215)(67,216)(68,217)(69,218)(70,219)(71,220)(72,221)(73,222)(74,223)(75,224)(76,209)(77,210)(78,211)(79,212)(80,213)(81,299)(82,300)(83,301)(84,302)(85,303)(86,304)(87,289)(88,290)(89,291)(90,292)(91,293)(92,294)(93,295)(94,296)(95,297)(96,298)(97,347)(98,348)(99,349)(100,350)(101,351)(102,352)(103,337)(104,338)(105,339)(106,340)(107,341)(108,342)(109,343)(110,344)(111,345)(112,346)(113,160)(114,145)(115,146)(116,147)(117,148)(118,149)(119,150)(120,151)(121,152)(122,153)(123,154)(124,155)(125,156)(126,157)(127,158)(128,159)(161,208)(162,193)(163,194)(164,195)(165,196)(166,197)(167,198)(168,199)(169,200)(170,201)(171,202)(172,203)(173,204)(174,205)(175,206)(176,207)(177,357)(178,358)(179,359)(180,360)(181,361)(182,362)(183,363)(184,364)(185,365)(186,366)(187,367)(188,368)(189,353)(190,354)(191,355)(192,356)(225,391)(226,392)(227,393)(228,394)(229,395)(230,396)(231,397)(232,398)(233,399)(234,400)(235,385)(236,386)(237,387)(238,388)(239,389)(240,390)(241,436)(242,437)(243,438)(244,439)(245,440)(246,441)(247,442)(248,443)(249,444)(250,445)(251,446)(252,447)(253,448)(254,433)(255,434)(256,435)(257,305)(258,306)(259,307)(260,308)(261,309)(262,310)(263,311)(264,312)(265,313)(266,314)(267,315)(268,316)(269,317)(270,318)(271,319)(272,320)(321,425)(322,426)(323,427)(324,428)(325,429)(326,430)(327,431)(328,432)(329,417)(330,418)(331,419)(332,420)(333,421)(334,422)(335,423)(336,424), 
(1,351,318,158,328,228,185)(2,186,229,329,159,319,352)(3,337,320,160,330,230,187)(4,188,231,331,145,305,338)(5,339,306,146,332,232,189)(6,190,233,333,147,307,340)(7,341,308,148,334,234,191)(8,192,235,335,149,309,342)(9,343,310,150,336,236,177)(10,178,237,321,151,311,344)(11,345,312,152,322,238,179)(12,180,239,323,153,313,346)(13,347,314,154,324,240,181)(14,182,225,325,155,315,348)(15,349,316,156,326,226,183)(16,184,227,327,157,317,350)(17,141,213,205,85,241,43)(18,44,242,86,206,214,142)(19,143,215,207,87,243,45)(20,46,244,88,208,216,144)(21,129,217,193,89,245,47)(22,48,246,90,194,218,130)(23,131,219,195,91,247,33)(24,34,248,92,196,220,132)(25,133,221,197,93,249,35)(26,36,250,94,198,222,134)(27,135,223,199,95,251,37)(28,38,252,96,200,224,136)(29,137,209,201,81,253,39)(30,40,254,82,202,210,138)(31,139,211,203,83,255,41)(32,42,256,84,204,212,140)(49,412,287,435,302,173,79)(50,80,174,303,436,288,413)(51,414,273,437,304,175,65)(52,66,176,289,438,274,415)(53,416,275,439,290,161,67)(54,68,162,291,440,276,401)(55,402,277,441,292,163,69)(56,70,164,293,442,278,403)(57,404,279,443,294,165,71)(58,72,166,295,444,280,405)(59,406,281,445,296,167,73)(60,74,168,297,446,282,407)(61,408,283,447,298,169,75)(62,76,170,299,448,284,409)(63,410,285,433,300,171,77)(64,78,172,301,434,286,411)(97,266,123,428,390,361,383)(98,384,362,391,429,124,267)(99,268,125,430,392,363,369)(100,370,364,393,431,126,269)(101,270,127,432,394,365,371)(102,372,366,395,417,128,271)(103,272,113,418,396,367,373)(104,374,368,397,419,114,257)(105,258,115,420,398,353,375)(106,376,354,399,421,116,259)(107,260,117,422,400,355,377)(108,378,356,385,423,118,261)(109,262,119,424,386,357,379)(110,380,358,387,425,120,263)(111,264,121,426,388,359,381)(112,382,360,389,427,122,265), 
(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16)(17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32)(33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48)(49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64)(65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80)(81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96)(97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112)(113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128)(129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144)(145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160)(161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176)(177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192)(193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208)(209,210,211,212,213,214,215,216,217,218,219,220,221,222,223,224)(225,226,227,228,229,230,231,232,233,234,235,236,237,238,239,240)(241,242,243,244,245,246,247,248,249,250,251,252,253,254,255,256)(257,258,259,260,261,262,263,264,265,266,267,268,269,270,271,272)(273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288)(289,290,291,292,293,294,295,296,297,298,299,300,301,302,303,304)(305,306,307,308,309,310,311,312,313,314,315,316,317,318,319,320)(321,322,323,324,325,326,327,328,329,330,331,332,333,334,335,336)(337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352)(353,354,355,356,357,358,359,360,361,362,363,364,365,366,367,368)(369,370,371,372,373,374,375,376,377,378,379,380,381,382,383,384)(385,386,387,388,389,390,391,392,393,394,395,396,397,398,399,400)(401,402,403,404,405,406,407,408,409,410,411,412,413,414,415,416)(417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432)(433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448), 
(1,135,9,143)(2,134,10,142)(3,133,11,141)(4,132,12,140)(5,131,13,139)(6,130,14,138)(7,129,15,137)(8,144,16,136)(17,187,25,179)(18,186,26,178)(19,185,27,177)(20,184,28,192)(21,183,29,191)(22,182,30,190)(23,181,31,189)(24,180,32,188)(33,240,41,232)(34,239,42,231)(35,238,43,230)(36,237,44,229)(37,236,45,228)(38,235,46,227)(39,234,47,226)(40,233,48,225)(49,374,57,382)(50,373,58,381)(51,372,59,380)(52,371,60,379)(53,370,61,378)(54,369,62,377)(55,384,63,376)(56,383,64,375)(65,102,73,110)(66,101,74,109)(67,100,75,108)(68,99,76,107)(69,98,77,106)(70,97,78,105)(71,112,79,104)(72,111,80,103)(81,148,89,156)(82,147,90,155)(83,146,91,154)(84,145,92,153)(85,160,93,152)(86,159,94,151)(87,158,95,150)(88,157,96,149)(113,295,121,303)(114,294,122,302)(115,293,123,301)(116,292,124,300)(117,291,125,299)(118,290,126,298)(119,289,127,297)(120,304,128,296)(161,269,169,261)(162,268,170,260)(163,267,171,259)(164,266,172,258)(165,265,173,257)(166,264,174,272)(167,263,175,271)(168,262,176,270)(193,316,201,308)(194,315,202,307)(195,314,203,306)(196,313,204,305)(197,312,205,320)(198,311,206,319)(199,310,207,318)(200,309,208,317)(209,341,217,349)(210,340,218,348)(211,339,219,347)(212,338,220,346)(213,337,221,345)(214,352,222,344)(215,351,223,343)(216,350,224,342)(241,330,249,322)(242,329,250,321)(243,328,251,336)(244,327,252,335)(245,326,253,334)(246,325,254,333)(247,324,255,332)(248,323,256,331)(273,395,281,387)(274,394,282,386)(275,393,283,385)(276,392,284,400)(277,391,285,399)(278,390,286,398)(279,389,287,397)(280,388,288,396)(353,403,361,411)(354,402,362,410)(355,401,363,409)(356,416,364,408)(357,415,365,407)(358,414,366,406)(359,413,367,405)(360,412,368,404)(417,445,425,437)(418,444,426,436)(419,443,427,435)(420,442,428,434)(421,441,429,433)(422,440,430,448)(423,439,431,447)(424,438,432,446) );
G=PermutationGroup([[(1,371),(2,372),(3,373),(4,374),(5,375),(6,376),(7,377),(8,378),(9,379),(10,380),(11,381),(12,382),(13,383),(14,384),(15,369),(16,370),(17,413),(18,414),(19,415),(20,416),(21,401),(22,402),(23,403),(24,404),(25,405),(26,406),(27,407),(28,408),(29,409),(30,410),(31,411),(32,412),(33,278),(34,279),(35,280),(36,281),(37,282),(38,283),(39,284),(40,285),(41,286),(42,287),(43,288),(44,273),(45,274),(46,275),(47,276),(48,277),(49,140),(50,141),(51,142),(52,143),(53,144),(54,129),(55,130),(56,131),(57,132),(58,133),(59,134),(60,135),(61,136),(62,137),(63,138),(64,139),(65,214),(66,215),(67,216),(68,217),(69,218),(70,219),(71,220),(72,221),(73,222),(74,223),(75,224),(76,209),(77,210),(78,211),(79,212),(80,213),(81,299),(82,300),(83,301),(84,302),(85,303),(86,304),(87,289),(88,290),(89,291),(90,292),(91,293),(92,294),(93,295),(94,296),(95,297),(96,298),(97,347),(98,348),(99,349),(100,350),(101,351),(102,352),(103,337),(104,338),(105,339),(106,340),(107,341),(108,342),(109,343),(110,344),(111,345),(112,346),(113,160),(114,145),(115,146),(116,147),(117,148),(118,149),(119,150),(120,151),(121,152),(122,153),(123,154),(124,155),(125,156),(126,157),(127,158),(128,159),(161,208),(162,193),(163,194),(164,195),(165,196),(166,197),(167,198),(168,199),(169,200),(170,201),(171,202),(172,203),(173,204),(174,205),(175,206),(176,207),(177,357),(178,358),(179,359),(180,360),(181,361),(182,362),(183,363),(184,364),(185,365),(186,366),(187,367),(188,368),(189,353),(190,354),(191,355),(192,356),(225,391),(226,392),(227,393),(228,394),(229,395),(230,396),(231,397),(232,398),(233,399),(234,400),(235,385),(236,386),(237,387),(238,388),(239,389),(240,390),(241,436),(242,437),(243,438),(244,439),(245,440),(246,441),(247,442),(248,443),(249,444),(250,445),(251,446),(252,447),(253,448),(254,433),(255,434),(256,435),(257,305),(258,306),(259,307),(260,308),(261,309),(262,310),(263,311),(264,312),(265,313),(266,314),(267,315),(268,316),(269,317),(270,318),(271,319),(272,320),(321,425),(322,426),(323,427),(324,428),(325,429),(326,430),(327,431),(328,432),(329,417),(330,418),(331,419),(332,420),(333,421),(334,422),(335,423),(336,424)], [(1,351,318,158,328,228,185),(2,186,229,329,159,319,352),(3,337,320,160,330,230,187),(4,188,231,331,145,305,338),(5,339,306,146,332,232,189),(6,190,233,333,147,307,340),(7,341,308,148,334,234,191),(8,192,235,335,149,309,342),(9,343,310,150,336,236,177),(10,178,237,321,151,311,344),(11,345,312,152,322,238,179),(12,180,239,323,153,313,346),(13,347,314,154,324,240,181),(14,182,225,325,155,315,348),(15,349,316,156,326,226,183),(16,184,227,327,157,317,350),(17,141,213,205,85,241,43),(18,44,242,86,206,214,142),(19,143,215,207,87,243,45),(20,46,244,88,208,216,144),(21,129,217,193,89,245,47),(22,48,246,90,194,218,130),(23,131,219,195,91,247,33),(24,34,248,92,196,220,132),(25,133,221,197,93,249,35),(26,36,250,94,198,222,134),(27,135,223,199,95,251,37),(28,38,252,96,200,224,136),(29,137,209,201,81,253,39),(30,40,254,82,202,210,138),(31,139,211,203,83,255,41),(32,42,256,84,204,212,140),(49,412,287,435,302,173,79),(50,80,174,303,436,288,413),(51,414,273,437,304,175,65),(52,66,176,289,438,274,415),(53,416,275,439,290,161,67),(54,68,162,291,440,276,401),(55,402,277,441,292,163,69),(56,70,164,293,442,278,403),(57,404,279,443,294,165,71),(58,72,166,295,444,280,405),(59,406,281,445,296,167,73),(60,74,168,297,446,282,407),(61,408,283,447,298,169,75),(62,76,170,299,448,284,409),(63,410,285,433,300,171,77),(64,78,172,301,434,286,411),(97,266,123,428,390,361,383),(98,384,362,391,429,124,267),(99,268,125,430,392,363,369),(100,370,364,393,431,126,269),(101,270,127,432,394,365,371),(102,372,366,395,417,128,271),(103,272,113,418,396,367,373),(104,374,368,397,419,114,257),(105,258,115,420,398,353,375),(106,376,354,399,421,116,259),(107,260,117,422,400,355,377),(108,378,356,385,423,118,261),(109,262,119,424,386,357,379),(110,380,358,387,425,120,263),(111,264,121,426,388,359,381),(112,382,360,389,427,122,265)], 
[(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16),(17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32),(33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48),(49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64),(65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80),(81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96),(97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112),(113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128),(129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144),(145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160),(161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176),(177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192),(193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208),(209,210,211,212,213,214,215,216,217,218,219,220,221,222,223,224),(225,226,227,228,229,230,231,232,233,234,235,236,237,238,239,240),(241,242,243,244,245,246,247,248,249,250,251,252,253,254,255,256),(257,258,259,260,261,262,263,264,265,266,267,268,269,270,271,272),(273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288),(289,290,291,292,293,294,295,296,297,298,299,300,301,302,303,304),(305,306,307,308,309,310,311,312,313,314,315,316,317,318,319,320),(321,322,323,324,325,326,327,328,329,330,331,332,333,334,335,336),(337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352),(353,354,355,356,357,358,359,360,361,362,363,364,365,366,367,368),(369,370,371,372,373,374,375,376,377,378,379,380,381,382,383,384),(385,386,387,388,389,390,391,392,393,394,395,396,397,398,399,400),(401,402,403,404,405,406,407,408,409,410,411,412,413,414,415,416),(417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432),(433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448)], 
[(1,135,9,143),(2,134,10,142),(3,133,11,141),(4,132,12,140),(5,131,13,139),(6,130,14,138),(7,129,15,137),(8,144,16,136),(17,187,25,179),(18,186,26,178),(19,185,27,177),(20,184,28,192),(21,183,29,191),(22,182,30,190),(23,181,31,189),(24,180,32,188),(33,240,41,232),(34,239,42,231),(35,238,43,230),(36,237,44,229),(37,236,45,228),(38,235,46,227),(39,234,47,226),(40,233,48,225),(49,374,57,382),(50,373,58,381),(51,372,59,380),(52,371,60,379),(53,370,61,378),(54,369,62,377),(55,384,63,376),(56,383,64,375),(65,102,73,110),(66,101,74,109),(67,100,75,108),(68,99,76,107),(69,98,77,106),(70,97,78,105),(71,112,79,104),(72,111,80,103),(81,148,89,156),(82,147,90,155),(83,146,91,154),(84,145,92,153),(85,160,93,152),(86,159,94,151),(87,158,95,150),(88,157,96,149),(113,295,121,303),(114,294,122,302),(115,293,123,301),(116,292,124,300),(117,291,125,299),(118,290,126,298),(119,289,127,297),(120,304,128,296),(161,269,169,261),(162,268,170,260),(163,267,171,259),(164,266,172,258),(165,265,173,257),(166,264,174,272),(167,263,175,271),(168,262,176,270),(193,316,201,308),(194,315,202,307),(195,314,203,306),(196,313,204,305),(197,312,205,320),(198,311,206,319),(199,310,207,318),(200,309,208,317),(209,341,217,349),(210,340,218,348),(211,339,219,347),(212,338,220,346),(213,337,221,345),(214,352,222,344),(215,351,223,343),(216,350,224,342),(241,330,249,322),(242,329,250,321),(243,328,251,336),(244,327,252,335),(245,326,253,334),(246,325,254,333),(247,324,255,332),(248,323,256,331),(273,395,281,387),(274,394,282,386),(275,393,283,385),(276,392,284,400),(277,391,285,399),(278,390,286,398),(279,389,287,397),(280,388,288,396),(353,403,361,411),(354,402,362,410),(355,401,363,409),(356,416,364,408),(357,415,365,407),(358,414,366,406),(359,413,367,405),(360,412,368,404),(417,445,425,437),(418,444,426,436),(419,443,427,435),(420,442,428,434),(421,441,429,433),(422,440,430,448),(423,439,431,447),(424,438,432,446)]])
64 conjugacy classes
class  1   2A  2B  2C  4A  4B  4C  4D  4E  4F  7A  7B  7C  8A  8B  8C  8D  14A···14I  16A···16H  28A···28F  28G···28R  56A···56L
order  1   2   2   2   4   4   4   4   4   4   7   7   7   8   8   8   8   14 ··· 14  16 ··· 16  28 ··· 28  28 ··· 28  56 ··· 56
size   1   1   1   1   2   2   8   8   56  56  2   2   2   2   2   2   2    2 ···  2  14 ··· 14   4 ···  4   8 ···  8   4 ···  4
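As a quick consistency check (a sketch added here, not part of the original page), the class sizes listed above can be tallied to confirm that the 64 conjugacy classes account for all 448 group elements:

```python
# Conjugacy-class sizes of C2xC7:Q32, read off the table above
sizes = (
    [1, 1, 1, 1]            # classes 1, 2A, 2B, 2C
    + [2, 2, 8, 8, 56, 56]  # 4A-4F
    + [2, 2, 2]             # 7A-7C
    + [2, 2, 2, 2]          # 8A-8D
    + [2] * 9               # 14A-14I
    + [14] * 8              # 16A-16H
    + [4] * 6               # 28A-28F
    + [8] * 12              # 28G-28R
    + [4] * 12              # 56A-56L
)

assert len(sizes) == 64    # 64 conjugacy classes
assert sum(sizes) == 448   # class sizes partition the group of order 448
```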
64 irreducible representations
dim     1  1  1  1  1  2  2  2  2  2  2   2   2   2     2     4     4     4
type    +  +  +  +  +  +  +  +  +  +  +   +   -   +     +     -
image   C1 C2 C2 C2 C2 D4 D4 D7 D8 D8 D14 D14 Q32 C7⋊D4 C7⋊D4 D4⋊D7 D4⋊D7 C7⋊Q32
kernel  C2×C7⋊Q32 C2×C7⋊C16 C7⋊Q32 C2×Dic28 C14×Q16 C56 C2×C28 C2×Q16 C28 C2×C14 C2×C8 Q16 C14 C8 C2×C4 C4 C22 C2
# reps  1  1  4  1  1  1  1  3  2  2  3   6   8   6     6     3     3     12
Matrix representation of C2×C7⋊Q32 in GL4(𝔽113) generated by
112   0   0   0
  0 112   0   0
  0   0 112   0
  0   0   0 112
,
 79 112   0   0
  1   0   0   0
  0   0   1   0
  0   0   0   1
,
 75  20   0   0
 69  38   0   0
  0   0  99  31
  0   0  50  91
,
 91  12   0   0
101  22   0   0
  0   0 104  13
  0   0  98   9
G:=sub<GL(4,GF(113))| [112,0,0,0,0,112,0,0,0,0,112,0,0,0,0,112],[79,1,0,0,112,0,0,0,0,0,1,0,0,0,0,1],[75,69,0,0,20,38,0,0,0,0,99,50,0,0,31,91],[91,101,0,0,12,22,0,0,0,0,104,98,0,0,13,9] >;
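As a sanity check (a sketch added here, not part of the original page), one can verify directly in Python that the four matrices above, taken mod 113, satisfy the defining relations of C2×C7⋊Q32 given in the generators/relations presentation below (a² = b⁷ = c¹⁶ = 1, d² = c⁸, a central, bd = db, cbc⁻¹ = b⁻¹, dcd⁻¹ = c⁻¹):

```python
P = 113  # the matrices live in GL(4, GF(113))

# the four generating matrices, read off row by row from the display above
a = [[112, 0, 0, 0], [0, 112, 0, 0], [0, 0, 112, 0], [0, 0, 0, 112]]
b = [[79, 112, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
c = [[75, 20, 0, 0], [69, 38, 0, 0], [0, 0, 99, 31], [0, 0, 50, 91]]
d = [[91, 12, 0, 0], [101, 22, 0, 0], [0, 0, 104, 13], [0, 0, 98, 9]]
I = [[1 if i == j else 0 for j in range(4)] for i in range(4)]

def mul(A, B):
    # 4x4 matrix product over GF(113)
    return [[sum(A[i][k] * B[k][j] for k in range(4)) % P
             for j in range(4)] for i in range(4)]

def power(A, n):
    R = I
    for _ in range(n):
        R = mul(R, A)
    return R

# b has order 7 and c has order 16, so b^-1 = b^6 and c^-1 = c^15;
# d^2 = c^8 has order 2, so d has order 4 and d^-1 = d^3
assert power(a, 2) == I and power(b, 7) == I and power(c, 16) == I
assert power(d, 2) == power(c, 8)                      # d^2 = c^8
assert mul(mul(c, b), power(c, 15)) == power(b, 6)     # c b c^-1 = b^-1
assert mul(mul(d, c), power(d, 3)) == power(c, 15)     # d c d^-1 = c^-1
assert mul(b, d) == mul(d, b)                          # b and d commute
assert all(mul(a, x) == mul(x, a) for x in (b, c, d))  # a is central
```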
C2×C7⋊Q32 in GAP, Magma, Sage, TeX
C_2\times C_7\rtimes Q_{32}
% in TeX
G:=Group("C2xC7:Q32");
// GroupNames label
G:=SmallGroup(448,714);
// by ID
G=gap.SmallGroup(448,714);
# by ID
G:=PCGroup([7,-2,-2,-2,-2,-2,-2,-7,224,254,184,675,185,192,1684,438,102,18822]);
// Polycyclic
G:=Group<a,b,c,d|a^2=b^7=c^16=1,d^2=c^8,a*b=b*a,a*c=c*a,a*d=d*a,c*b*c^-1=b^-1,b*d=d*b,d*c*d^-1=c^-1>;
// generators/relations
http://doc.rero.ch/record/255771
Faculté des sciences
## A ligand field theory-based methodology for the characterization of the Eu²⁺ [Xe]4f⁶5d¹ excited states in solid state compounds
### In: Chemical Physics Letters, 2015, vol. 622, p. 120–123
The theoretical rationalization of the open-shell 4f and 5d configurations of Eu²⁺ is far from trivial because it involves a non-standard version of ligand field theory based on a two-shell Hamiltonian. Here we present our methodology based on ligand field theory, taking the system CsCaBr₃:Eu²⁺, with an octahedral coordination sphere of Eu²⁺, as a case study. The ligand field,...
http://math.stackexchange.com/questions/185286/sum-limits-infty-infty-a-jzj-could-be-holomorphic-in-this-zone
# $\sum\limits_{ - \infty }^\infty {a_jz^j}$ could be holomorphic in this zone?
Let $\sum\limits_{ - \infty }^\infty {a_jz^j}$ be a series, convergent on $1<|z|<4$, that vanishes on $2<|z|<3$. Is it true that all the coefficients $a_j$ are zero?
I know by the principle of analytic continuation that this would be true if I knew that the series had the form $\sum\limits_{ 0 }^\infty {b_jz^j}$. Here I don't know how to proceed. Maybe in the convergence zone I can write the series in that form, but I don't know how :/
On any circle inside the domain of convergence it's a Fourier series, hence its coefficients can be obtained by integration. Or equivalently, use the Cauchy integral formula. And by the way, sums of Laurent series are analytic too. You don't need to bother rewriting as Taylor series, and actually you can't do that, since domains of convergence would be disjoint. – Alexander Shamov Aug 22 '12 at 2:32
Oh, I can't use the Cauchy integral formula, because that's from chapter 4 of my book and this is a problem from chapter 3. The only things I can use are that a power series is $C^{\infty}$, and the principle of analytic continuation :/ – Daniel Aug 22 '12 at 2:35
Do you know the identity theorem for holomorphic functions? – EuYu Aug 22 '12 at 3:40
Nope :/, only for functions that are locally a power series (I know they are equivalent, but this will be proved later in my book). – Daniel Aug 22 '12 at 3:58
If you multiply your series by $z^{-1 - n}$ and integrate it around the circle $|z| = 2.5$, you'll get $2\pi i a_n$. Thus each $a_n$ is zero. Note that to rigorously justify this, you use that the integral of the (infinite) sum of the functions is the sum of the integrals, which holds here since your series converges uniformly on any subannulus $a < |z| < b$ where $1 < a < b < 4$.
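The coefficient-extraction argument in this answer can be illustrated numerically (a sketch with a made-up test series, not part of the original thread): sampling a Laurent series on the circle $|z| = 2.5$ and approximating the contour integral by an equally spaced Riemann sum recovers each $a_n$.

```python
import cmath

def laurent_coeff(f, n, r=2.5, N=1024):
    """Approximate a_n = (1/2*pi*i) * contour integral of f(z) z^(-n-1) dz
    over |z| = r by an N-point Riemann sum; for a finite Laurent polynomial
    this quadrature on the circle is exact up to rounding error."""
    total = 0.0
    for k in range(N):
        z = r * cmath.exp(2j * cmath.pi * k / N)
        total += f(z) * z**(-n)
    return total / N

# made-up test series: f(z) = 3 z^-2 + 2 + 5 z, convergent on any annulus
f = lambda z: 3 * z**-2 + 2 + 5 * z

for n, expected in [(-2, 3), (-1, 0), (0, 2), (1, 5), (2, 0)]:
    assert abs(laurent_coeff(f, n) - expected) < 1e-9
```

If every coefficient on the contour came out zero, as happens for a series vanishing on $2<|z|<3$, the whole Laurent series would be zero.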
https://zbmath.org/?q=an:1033.13004
# zbMATH — the first resource for mathematics
Partial intersections and graded Betti numbers. (English) Zbl 1033.13004
For arithmetically Cohen-Macaulay (aCM) subschemes of $$\mathbb{P}^r$$ of codimension $$c=2$$, the possible sets of graded Betti numbers have been completely classified [see G. Campanella, J. Algebra 101, 47–60 (1986; Zbl 0609.13001) and R. Maggioni and A. Ragusa, Matematiche 42, 195–209 (1987; Zbl 0701.14030)]. For aCM schemes of codimension $$c\geq 3$$ with fixed Hilbert function, there is still a maximum for the graded Betti numbers, but not necessarily a minimum.
The authors develop a construction which allows them to obtain a large part of the possible sets of graded Betti numbers of aCM schemes with fixed Hilbert function. Their machinery is based on the concept of partial intersection subschemes of $$\mathbb{P}^r$$. Those schemes are reduced aCM unions of linear varieties, similar to those used by J. Migliore and U. Nagel [Commun. Algebra 28, No. 12, 5679–5701 (2000; Zbl 1003.13005)] and more general than the $$k$$-configurations used by A. V. Geramita, T. Harima and Y. S. Shin [Adv. Math. 152, No. 1, 78–119 (2000; Zbl 0965.13011)]. In codimension $$c=3$$, the authors succeed in computing all graded Betti numbers in terms of certain combinatorial data used to construct the partial intersection scheme. In general codimension $$c\geq 3$$, they determine the Hilbert function, the degrees of the minimal generators of the vanishing ideal, and the degrees of the last syzygies in terms of those data.
##### MSC:
13D40 Hilbert-Samuel and Hilbert-Kunz functions; Poincaré series 13H10 Special types (Cohen-Macaulay, Gorenstein, Buchsbaum, etc.) 14M05 Varieties defined by ring conditions (factorial, Cohen-Macaulay, seminormal)
https://marcofrasca.wordpress.com/tag/brookhaven-national-laboratory/
## PHENIX says gluons are not all the story
04/05/2009
PHENIX is a collaboration working with data from RHIC (the Relativistic Heavy Ion Collider) located at Brookhaven National Laboratory. In an experiment with polarized proton-proton colliding beams, looking at the ejected $\pi^0$, they were able to extract the contribution of the gluons to the proton spin. They did this using next-to-leading-order perturbation theory, fixing the theory scale at $4\,\mathrm{GeV}^2$. Their paper is here and will appear shortly in Physical Review Letters. Their result is
$\Delta G^{[0.02,0.3]}_{\rm GRSV}=0.2\pm0.1{\rm (stat)}\pm0.1{\rm (sys)} ^{+0.0}_{-0.4}{\rm (shape)}\pm0.1{\rm (scale)}$
that is consistent with zero. This is an independent confirmation of the results of the COMPASS Collaboration that we discussed here. These results tell us that in a proton essentially no contribution to the spin comes from glue; the rest is presumably mostly orbital angular momentum. So, why is this conclusion so relevant? From our point of view we know that, in the low-energy limit, glue carries no spin. Rather, the true excitations of the Yang-Mills field are colorless states that make up the spectrum, the lowest being a massive glueball that can also be seen in labs. We know that this state is the $\sigma$ resonance. This is the scenario that is emerging from experiments, and it is what any candidate theory should explain.
Update: COMPASS Collaboration confirms small polarization of the gluons inside the nucleon (see here, to appear in Physics Letters B). The current world situation is given in their figure that I put here with their caption (for the refs check their paper).
These results, emerging from several different collaborations, convey a relevant piece of information: glue seems to carry no spin in the low-energy limit. I think any sound approach to QCD in this regime should address this result. The main conclusion to be drawn is that the glue excitations seen here are different from those seen in the high-energy limit. This is a strong confirmation of our point of view, presented here and in published papers. It is mounting evidence that outlines a clear scenario of strong interactions at lower energies.
http://mathhelpforum.com/calculus/204984-limit-continuity.html
1. ## Limit Continuity
check continuity at point
2. ## Re: Limit Continuity
A function is continuous at a point x if
1) It is defined at that point.
2) The function approaches the same value as you approach x from the left as it does when you approach x from the right.
3) The function value is equal to this limiting value.
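These three conditions can also be probed numerically. Here is a quick sketch (the function and step sizes are illustrative, not from the original post, whose expressions were lost):

```python
import math

def f(x):
    # illustrative function with a gap at 0: f(0) is undefined
    return math.sin(x) / x

def one_sided_limit(g, a, side, steps=8):
    # approach a from one side, halving the distance each time
    h = 1e-3 if side == "right" else -1e-3
    return [g(a + h / 2**k) for k in range(steps)][-1]

left = one_sided_limit(f, 0.0, "left")
right = one_sided_limit(f, 0.0, "right")  # both one-sided limits come out ~1, so (2) holds

try:
    f(0.0)
    defined_at_zero = True
except ZeroDivisionError:
    defined_at_zero = False  # (1) fails, so f is not continuous at 0
```

So for sin(x)/x the two-sided limit exists but the function value doesn't, which is exactly the kind of failure being discussed below.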
3. ## Re: Limit Continuity
The question's a little weird - I think you're probably supposed to take the limit of all three expressions as x goes to 0. Give each one a try and let us know how far you get. You'll probably need l'Hôpital's rule at a minimum.
It probably doesn't help much, but for the second expression, you can take the limit as x goes to zero taking on only negative values, and for the third expression, you can take the limit as x goes to zero taking on only positive values.
The first expression shouldn't have a limit - that's why I said the question was weird.
- Hollywood
4. ## Re: Limit Continuity
I also find it strange, because f(0) doesn't exist...
https://datascience.stackexchange.com/questions/106223/updating-a-train-val-test-set
# Updating a train/val/test set
It is considered best practice to split your data into a train and test set at the start of a data science / machine learning project (and then to split your train set further to obtain a validation set for hyperparameter optimisation).
If it turns out that the distribution in your train set isn't the same as your test set, perhaps one group is completely lacking in the test set, or a group is overly represented in the test set for example, what do you do?
Is it a problem, after learning that the distributions in the two sets are different to re-compute your train and test sets? Practically this must be ok, and you won't always know upfront that the distributions in the sets are representative. However, this is a form of data leakage, as you are applying information that you have learnt, to the creation of these sets which wasn't necessarily to hand before you started your task.
How does one deal with such a scenario?
There are two really different scenarios:
### The training and test data are obtained from the same dataset
If the data has been randomly split between training and test set, this is extremely unlikely to happen in the first place. If the data contains some small groups/classes, then the split can be made not just randomly but with stratified sampling: this prevents 100% of one group from ending up on either side of the split by chance. And if some classes or groups still appear too rarely (only once or twice), these should usually be discarded or replaced by some generic value at a preprocessing stage.
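To see what stratified sampling buys you, here is a minimal sketch in plain Python (illustrative only; in practice `train_test_split(..., stratify=y)` does this for you):

```python
import random
from collections import Counter, defaultdict

def stratified_split_indices(y, test_size=0.25, seed=42):
    """Return (train_idx, test_idx) with each label's share preserved in both halves."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for i, label in enumerate(y):
        by_label[label].append(i)
    train_idx, test_idx = [], []
    for label in sorted(by_label):
        idxs = by_label[label]
        rng.shuffle(idxs)
        cut = max(1, round(len(idxs) * test_size))  # at least one of each label in test
        test_idx.extend(idxs[:cut])
        train_idx.extend(idxs[cut:])
    return train_idx, test_idx

y = ["a"] * 80 + ["b"] * 16 + ["c"] * 4        # imbalanced label counts
train_idx, test_idx = stratified_split_indices(y)
test_counts = Counter(y[i] for i in test_idx)  # all three labels survive the split
```

Even the rare label "c" is guaranteed to appear on both sides of the split, which a purely random split cannot promise.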
### The training and test data are obtained independently
This is the serious case. There are situations where external constraints make it unavoidable to have slightly different distributions in the training and test set, even though this breaches the main assumption of supervised learning. Note that if the distributions differ too much, it's very likely a lost cause. In this scenario one doesn't have a choice: if the training and test set are provided separately, then it's expected that the performance is measured on the test set "as is". So it's a matter of working within this constraint: some specific preprocessing may be necessary (e.g. introduce a special group/class 'unknown' in the training set), a robust method may be preferable, possibly plan a default prediction (majority class) for invalid instances, etc.
### Mistakes happen
In any case, if one realizes that there's something wrong in the design of the splitting process or any other problem which makes it necessary to re-shuffle the data, well, it's sometimes better to redo the whole process despite the risk of data leakage. Of course it's better if this can be avoided, but it's not as if the ML police is going to arrest you ;)
• Thanks Erwan, this is helpful. I work on a lot of data mining activities, and often groups are not known upfront. I find that sometimes models work well for certain samples and not so well for others; this can indicate that some samples belong to a distinct group, and perhaps that is better included as information for the model in the data. If that happens, a stratified split would have been better upfront. So in a case like that, would it typically be alright to re-perform the split with this stratification? Dec 20, 2021 at 12:35
In the case of imbalanced datasets, we can either use stratification while splitting the data, use cross validation, or both. Stratifying or cross-validating upfront helps mitigate this kind of data leakage from developer confirmation bias.
Stratify:

```python
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.25)
```
Cross validation:

```python
from sklearn import svm
from sklearn.model_selection import cross_val_score

clf = svm.SVC(kernel='linear', C=1, random_state=42)
scores = cross_val_score(clf, X, y, cv=5)
# scores -> array([0.96..., 1., 0.96..., 0.96..., 1.])
```

In this example there are 5 random splits and 5 model scores.
`RepeatedStratifiedKFold` helps:

```python
from numpy import mean
from sklearn.model_selection import cross_val_score, RepeatedStratifiedKFold

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=2, random_state=42)
# note: sklearn's scorer name is 'neg_mean_squared_error', not 'mean_squared_error'
scores = cross_val_score(model, X, y, scoring='neg_mean_squared_error', cv=cv, n_jobs=-1)
print('Mean MSE: %.4f' % -mean(scores))
```

In this example there are 5 splits and 2 repeats, for a total of 10 models in the aggregated score.
• Hi Timoth, thanks for the response. My question was more of a semantic one focused on if it is ok to re-perform the splitting once it has been performed, not how to concretely implement it. But thanks for taking the time to try and help me. Dec 20, 2021 at 12:37
https://gateoverflow.in/3812/gate-it-2005-question-51
7,390 views
Let $T(n)$ be a function defined by the recurrence
$T(n) = 2T(n/2) + \sqrt n$ for $n \geq 2$ and
$T(1) = 1$
Which of the following statements is TRUE?
1. $T(n) = \Theta(\log n)$
2. $T(n) = \Theta(\sqrt n)$
3. $T(n) = \Theta(n)$
4. $T(n) = \Theta(n \log n)$
@piyushwm Your recursion tree is wrong. Check once again.
@gatecse
Somebody explain how (see the comment above) $\sqrt{2}^{\log n} = \sqrt{n}$, and, in the GP, how do you know that the number of terms is $\log n$ rather than $\log n + 1$ (taking $\log n$ to base $2$)?
Counting $1, \dots, \log n$ gives $\log n$ terms; counting $0, \dots, \log n$ gives $\log n + 1$ terms.
Option $C$ is the answer. It can be done by Master's theorem.
$n^{\log_b a} = n^{\log_2 2} = n$.
$f(n) = \sqrt n = n^{\frac{1}{2}}$.
So, $f(n) = O\left(n^{\log_b a -\epsilon}\right)$ is true for any real $\epsilon$, $0 < \epsilon < \frac{1}{2}$. Hence Master theorem Case 1 satisfied,
$$T(n) = \Theta\left(n^{\log_b a}\right) = \Theta (n).$$
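As a numerical sanity check (a sketch, evaluating the recurrence on powers of two): if $T(n) = \Theta(n)$, then $T(n)/n$ should level off; unrolling the recurrence gives $T(n)/n = 1 + \sum_{j\geq 1} 2^{-j/2} \to 2 + \sqrt 2$.

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(1) = 1, T(n) = 2*T(n/2) + sqrt(n), evaluated on powers of two
    if n == 1:
        return 1.0
    return 2 * T(n // 2) + math.sqrt(n)

# the ratio T(n)/n should approach the constant 2 + sqrt(2) ~ 3.414
ratios = [T(2**k) / 2**k for k in range(1, 25)]
```

The ratios increase monotonically toward a constant, consistent with $T(n) = \Theta(n)$.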
How to do it using back substitution?
Master's theorem the Cormen way. Never fails in any case.
Unrolling the recursion gives a geometric series:
$$T(n) = n\left(1 + \frac{1}{\sqrt{2}} + \frac{1}{(\sqrt{2})^2} + \frac{1}{(\sqrt{2})^3} + \cdots + \frac{1}{\sqrt{n}}\right)$$
Since this is a decreasing geometric series with a bounded sum,
$$T(n) = \Theta(n)$$
Answer is C. It can also be solved by the Master theorem, case 1 ($a \geq b^k$):
$T(n) = aT(n/b) + n^k$
Here $a = 2$, $b = 2$ and $k = 1/2$,
so $a \geq b^k$, and hence
$T(n) = \Theta(n^{\log_b a}) = \Theta(n)$.
Let me know if I am making any mistake.
### 1 comment
Shouldn't it be Math.root(n/2) instead of Math.root(n)/2 ?
option c is right
https://www.shaalaa.com/question-bank-solutions/the-probability-certain-event-isthe-probability-certain-event-probability-experimental-approach_62161
# The Probability of a Certain Event Isthe Probability of a Certain Event is - Mathematics
MCQ
The probability of a certain event is
#### Options
• 0
• 1
• greater than 1
• less than 0
#### Solution
We have to find the probability of a certain event.
Note that the number of occurrences of a certain event is the same as the total number of trials: every time we repeat the experiment, it occurs. That is why it is called a certain event.
Remember the empirical (experimental, or observed-frequency) approach to probability.
If $n$ is the total number of trials of an experiment and $A$ is an event associated with it such that $A$ happens in $m$ trials, then the empirical probability of $A$ is denoted by $P(A)$ and is given by
P(A) = m/n
For a certain event $m = n$, so the probability of a certain event is n/n = 1.
Concept: Probability - an Experimental Approach
Is there an error in this question or solution?
#### APPEARS IN
RD Sharma Mathematics for Class 9
Chapter 25 Probability
Exercise 25.3 | Q 2 | Page 16
https://www.kaiekubjas.com/publication/kubjas-2010-hilbert/
Hilbert polynomial of the Kimura 3-parameter model
Abstract
Buczyńska and Wiśniewski showed that the Hilbert polynomial of the algebraic variety associated to the Jukes-Cantor binary model on a trivalent tree depends only on the number of leaves of the tree and not on its shape. We ask if this can be generalized to other group-based models. The Jukes-Cantor binary model has $\mathbb{Z}_2$ as the underlying group. We consider the Kimura 3-parameter model with $\mathbb{Z}_2 \times \mathbb{Z}_2$ as the underlying group. We show that the generalization of the statement about the Hilbert polynomials to the Kimura 3-parameter model is not possible as the Hilbert polynomial depends on the shape of a trivalent tree.
Type
Publication
Journal of Algebraic Statistics, 3(1):64-69
https://www.vedantu.com/stories/short-space-stories
# Short Space Stories - Why Space Aliens Haven’t Destroyed the Planet Yet?
## What is Space?
Last updated date: 17th Mar 2023
The area just outside Earth's atmosphere is known as space, commonly known as outer space. The Kármán Line, which is roughly 100 kilometres or 62 miles above the Earth, is where space officially begins. There is no oxygen in space, unlike on Earth, which is why astronauts must travel in high-tech spacesuits. This also implies that there is no air in space to scatter sunlight and create a blue sky as we have on Earth.
Space is a vacuum, meaning it is absolutely devoid of any substance, such as air, making it absolutely quiet and dark. Since sound is transmitted by air molecules, it is impossible for sound to exist in space without them.
Now, let us read one of the interesting short space stories about Aliens and a boy, and know the reason Aliens haven’t destroyed our Earth yet.
## Space Story: Why Space Aliens Haven’t Destroyed the Planet Yet?
Dave was at a party and eating his candy bar quietly in a corner, sitting on the grass in the park, and clutching the candy bar softly but firmly in his left palm. Dave was a quiet child and he rarely talked to anyone. He does not like running like other kids and making noise.
He preferred to sit quietly in a corner and read a book, watch a movie, or, in this case, enjoy a candy bar. The other kids liked Dave, but his quiet nature put some of them off.
That day Dave unwrapped the candy bar and was about to take the first bite when he noticed lights above him. When he looked up, he was surprised to see a spacecraft descending from the sky.
Someone from the landed UFO spoke to Dave in an unexpectedly friendly tone:
“Greetings, Earth Human. My name is Gygar and I am from the planet Ramzok, many, many lightyears from here.” That was the alien. He said that his people had been observing Earth for one thousand years, and that they had not been happy with what they had seen. He went on: “Half of you are starving, while the other half are obese. Ninety-nine percent of you work like dogs to maintain the lifestyle of the one percent who have all the money. You go to war with people who have a slightly different interpretation of books written aeons before any of you were ever even born. We have therefore decided that, for the good of the universe, your planet will be destroyed.”
Listening to this, the little boy Dave was silent for a moment. His six-year-old shoulders were suddenly bearing the weight of the universe, and he wasn't sure he was equal to it. In fact, he genuinely considered sobbing and fleeing, which is usually a smart idea when confronted with something terrifying, but Dave assumed it would be considered a forfeit, and Gygar would blow up the Earth. So, while he pondered his options, he looked down at his hands. That's when he had an epiphany and stretched out his left arm.
Confused, Gygar opened one of his right hands and allowed the little human to place something brown in his green palm, half-covered by a plastic wrapper. Dave mimed eating while explaining to the alien what he should do with the candy bar. Gygar shrugged and took a bite of the chocolate. The alien was instantly taken with the taste of the candy bar and kept on eating, bite after bite.
Gygar exclaimed, "This is the most amazing dish I've ever tasted! And you claim you make it here on Earth?" Dave smiled and nodded. Gygar said that he would need that candy bar again and again, and that if they destroyed the Earth he would not be able to have candy bars. So Gygar left without destroying the Earth and kept visiting it to eat chocolate.
That's how this peaceful young guy saved the planet while also introducing chocolate to the farthest reaches of the universe.
## Conclusion
The story “Why Space Alien Haven’t Destroyed The Planet Yet” is an interesting story of a young boy who saved Earth from aliens with the help of a candy bar. Kids like to read and listen to such stories.
## FAQs on Short Space Stories - Why Space Aliens Haven’t Destroyed the Planet Yet?
1. What exactly is there in space?
When we think of space, we usually picture a vast, empty void, although many things exist there. As we all know, the universe holds a great number of stars, as well as all of the planets of our Solar System.
2. How did a young boy Dave save the planet?
This brave young guy Dave rescued the world while simultaneously bringing chocolate to the farthest corners of the universe.
https://socratic.org/questions/what-is-the-simplest-radical-form-of-the-square-root-of-80
What is the simplest radical form of the square root of 80?
I think: $4 \sqrt{5}$
$\sqrt{80} = \sqrt{8 \cdot 10} = \sqrt{4 \cdot 2 \cdot 5 \cdot 2} = \sqrt{16 \cdot 5} = 4 \sqrt{5}$
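The same simplification can be done mechanically by pulling the largest square factor out from under the radical; a sketch:

```python
import math

def simplify_sqrt(n):
    """Write sqrt(n) as a*sqrt(b) with b squarefree, pulling out square factors."""
    a = 1
    for d in range(2, math.isqrt(n) + 1):
        while n % (d * d) == 0:  # divide out d^2 as many times as possible
            n //= d * d
            a *= d
    return a, n

# sqrt(80): 80 = 16 * 5, so simplify_sqrt(80) gives (4, 5), i.e. 4*sqrt(5)
```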
https://mathlesstraveled.com/2015/11/02/mablowrimo-the-lucas-lehmer-test/?shared=email&msg=fail
## MaBloWriMo: The Lucas-Lehmer test
Today, I noticed both Zachary Abel and Qiaochu Yuan plan to write a blog post every day this month (hooray!). I haven’t written on here as much as I would like recently, and so I thought, why not? I already missed November 1, but no matter, perhaps I can do a bonus post on December 1 to make up for it. So, I hereby commit to write one blog post every day this month!1
So what will I write about? Well, a long time ago I wrote about the Lucas-Lehmer test for finding Mersenne primes, and promised to prove it in some future posts. I never got around to that, of course, and it seems like a perfect topic. I don’t know if we’ll make it through the whole proof, but it doesn’t really matter. The posts will be short, and we can take things slowly and explore interesting things that come up along the way. Don’t fret if you don’t remember what Mersenne primes or the Lucas-Lehmer test are, I’ll explain those again as we go. And, of course, you can help shape the direction I take by leaving questions and comments.
For today, just recall what a Mersenne prime is: a prime number of the form $2^n - 1$. For example, $2^5 - 1 = 31$ is a Mersenne prime (the third, in fact). Can you find the first two? What is the next one?
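For readers who want to check their answers, here is a quick sketch of the test itself in Python (the loop is the standard Lucas-Lehmer iteration; why it works is exactly what I plan to prove in the coming posts):

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test for M_p = 2^p - 1 (p an odd prime; p = 2 handled directly)."""
    if p == 2:
        return True  # M_2 = 3 is prime
    m = 2**p - 1
    s = 4
    for _ in range(p - 2):       # iterate s -> s^2 - 2 (mod M_p), p - 2 times
        s = (s * s - 2) % m
    return s == 0                # M_p is prime iff the final residue is 0

mersenne_primes = [2**p - 1 for p in (2, 3, 5, 7, 13) if lucas_lehmer(p)]
# first few Mersenne primes: 3, 7, 31, 127, 8191
# (note 2^11 - 1 = 2047 = 23 * 89 is NOT prime, even though 11 is)
```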
1. Fine print: I can fall behind by at most 2 days, as long as I publish 30 posts by the end of December 1.
Assistant Professor of Computer Science at Hendrix College. Functional programmer, mathematician, teacher, pianist, follower of Jesus.
This entry was posted in algebra, arithmetic, computation, famous numbers, iteration, modular arithmetic, number theory, primes. Bookmark the permalink.
### 5 Responses to MaBloWriMo: The Lucas-Lehmer test
1. The first two Mersenne primes are 3 and 7, for n=2, and n=3, respectively.
• Brent says:
Right you are!
2. Yamin says:
Looking forward to it 😀
http://talkstats.com/threads/automatic-urls-ending-in-parenthesis.40547/
# Automatic urls ending in parenthesis
#### Dason
##### Ambassador to the humans
When directly posting a url the forum will automatically wrap it in [noparse] [/noparse] tags. This is handy except that sometimes when the url ends in a parenthesis the closing parenthesis is put outside the tags. So for example http://en.wikipedia.org/wiki/Skynet_(Terminator) becomes
http://en.wikipedia.org/wiki/Skynet_(Terminator)
if you try to follow the link wikipedia freaks out a little. It's easy enough to fix manually but I was wondering if there is a way to fix this.
https://chat.stackexchange.com/transcript/36/2019/5/16/0-20
12:01 AM
@Semiclassical Okay. The actual function was $\frac{x+2}{\sqrt{x^3+x^5}}$. And this was simplified to $\frac{2+0}{\sqrt{x^3}+0}$. When I draw this in Desmos, they are almost similar, and so I can see why the definite integral would be the same.
Right, cause this function behaves very much like $1/x$, so it's the small x that probably contribute most to the integral. Thanks a lot!
12:20 AM
@schn: To be a bit more precise. When $x\approx 0$, $x^3+x^5$ looks like $x^3$. So the function looks like $2/\sqrt{x^3} = 2x^{-3/2}$.
This sort of reasoning is very important for looking at infinite series and improper integrals.
@TedShifrin Hi uncle Ted
12:57 AM
Is anyone here familiar with LyX (the LaTeX editor)?
LOL, hi @Jacksoja.
@TedShifrin hehe
What's that?
I'm trying to prove that there cannot be a surjective map between S and S* (the set of subsets of S)
and we are not assuming finiteness of S
Ah, this is a classic problem.
1:05 AM
am not sure how to tackle it
the idea is, one-element subsets would exhaust all of S
hence there is no room for sets of two or more elements
but this argument does not work for infinite sets, right?
Nope, it doesn't work.
i thought so
can you drop me a hint?
There are fancy ways with infinite arithmetic, but I think you can do a Bertrand's paradox type proof. You suppose you have a surjection, and you invent a subset that cannot possibly be in the image of it.
okay
i have to google this Bertrand
well, that gives it away.
1:12 AM
okay then later I shall
I'm desperate for a pun but that would give it away too
@ÍgjøgnumMeg Hi , go ahead do that pun
:D
I shan't!
I cannot, nor will I
What Ted said holds true then
1:13 AM
@Jacksoja: Suppose you have a map $f\colon S\to S^*$. Can you think about whether $s$ and $f(s)$ are related?
That's a generous hint.
well i do want that s to be in f(s)
so f(s) is a proper name of it
the subset of S that contains our element s
but what else does it contain ?
Maybe. That's not quite the idea. You want to make a subset of $s$'s with some property.
if just a singelton, then i cannot do much with it
Make a subset $X = \{s: \text{something holds}\}$. Can $X=f(t)$ for some $t\in S$?
okay I see
so the idea I want to make a subset of S , that is not reached by my funtion
1:18 AM
Right.
no matter what function is
hmm that is quite involved
@ÍgjøgnumMeg: Have I misled?
No that's what the standard method is I think?
I'm just trying not to give it away entirely. Yes, I know only the standard method.
It's just a nice choice of something holds I guess
1:21 AM
That's for @Jacksoja to contemplate.
BBIAB
would this be good: if i consider a subset T of S such that, for each a in S, f(a) and T differ in at least one element
@TedShifrin what is BBIAB?
darn it ,I assume you had to go, thanks anyway all
@Jacksoja I assume "Be back in a bit"
@ÍgjøgnumMeg such a short sentence why not spell it out as it is
old generations are driving me crazy with these abbreviations
1:36 AM
Let $M$ be an $R$-module and $\varphi : R \to S$ a ring hom. Then $S$ is an $R$-module with $R$-action given by $r \cdot s := \varphi(r)s$. Thus we can form $M^S := M \otimes_R S$; this is an $S$-module with $S$-action given by $(m \otimes s) \cdot s^\prime = m\otimes ss^\prime$
so I wanna check that this $S$-action is well defined
@Jacksoja LOL ... Mostly they come from your generation, guy.
and I guess $\varphi$ lets me do that?
Well, of course you need to use $\varphi$ :P
@Jacksoja: My hint was that your condition should involve relating $s$ and $f(s)$.
What you wrote is a good idea, but how would you define such a $T$?
Oh, @Rithaniel snuck in.
So I have $m \in M$ and $r_1, r_2 \in R$, $s_1, s_2 \in S$ I have $m \otimes (r_1s_1 + r_2s_2)s^\prime$.. and the expression $r_1s_1 + r_2s_2 = \varphi(r_1)s_1 + \varphi(r_2)s_2$ and you get $m \otimes (r_1s_1s^\prime + r_2s_2s^\prime)$
and then by $R$-bilinearity $r_1(m \otimes s_1s^\prime) + r_2(m\otimes s_2s^\prime)$
Spam end
My sleep schedule is fucked
1:49 AM
S a m e
Let $A$ be a $4\times 4$ matrix over $\mathbb C$ such that $\operatorname{rank} A=2$ and $A^3=A^2\neq 0$.Suppose $A$ is not diagonalizable.
$\exists$ a vector $v$ such that $Av\neq 0$ but $A^2v= 0$. Is the statement true or false?
It's holidays for me so I also have no "real" reason to fix it which makes it worse because I won't even though I want to
I'm off work atm so I'm slowly ruining my life
I know that rank (A)=2. so Geometric multiplicity of ($\lambda=0$)=2
LOL, a @Balarka. How unusual.
1:53 AM
@ÍgjøgnumMeg I wonder if this corresponds to pushforward of the coherent sheaf defined by $M$ on $\text{Spec}\, R$ to $\text{Spec}\, S$.
The jay lower star you know
(This is extension of scalars, of course)
lol I'm not there yet, this is just me justifying that $k[X]/(f) \otimes_k \bar{k} \cong \bar{k}^{\deg f}$
So the eigenspace corresponding to $\lambda=0$ consists of two linearly independent vectors: $E_{0}=\operatorname{span}\{x_1,x_2\}$. I can rule out the other cases and show that the characteristic polynomial is $x^3(x-1)$.
@N.Maneesh: I vote false. What do you vote?
No, that isn't the right characteristic polynomial.
Then obviously it must be false.
Hmm.
What are you suggesting the Jordan normal form will be?
2:02 AM
@ÍgjøgnumMeg Oh, that's because the spectrum of the left hand side is the fibered product given by limit of the pullback diagram $\text{Spec}\, k[X]/f \stackrel{\pi}{\to} \text{Spec}\, k \leftarrow \text{Spec}\, \overline{k}$, which is nothing but the fiber over the basepoint $\text{Spec} \, \overline{k} \to \text{Spec}\, k$, i.e., $\deg f$ many copies of $\text{Spec}\, \overline{k}$ which is $\text{Spec}\, \overline{k}^{\text{deg}(f)}$!
What's the algebraic multiplicity of $0$ and of $1$, @N.Maneesh?
Covering spaces ftw
$$\operatorname{Spec}^{-\deg f}! = \operatorname{Spec}^{-\deg f}(\operatorname{Spec}^{-\deg f} - 1)(\operatorname{Spec}^{-\deg f} - 2) \cdots$$
uh oh
missed a $k$
$$J= \left[\begin{array}{rrrrr} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\\ \end{array}\right]$$
@ÍgjøgnumMeg ripriprip
2:04 AM
So that has rank $3$, @N.Maneesh.
I guess this book is supposed to be giving me an explanation for the obvious similarity between the galois correspondence for fields and for covering spaces, but i'm only on chapter 1
I don't know the explanation to be honest. Also I don't know any field theory; I can pretend to because I know covering spaces and I know the statement of the correspondence >:)
It's good for party tricks if you are a topology enthusiast who wants to surprise people with your quick Galois theory skills
$$J= \left[\begin{array}{rrrrr} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1\\ \end{array}\right]$$ sorry
Use \begin{matrix}
2:08 AM
Alas, I don't really know any topology "properly", only things I've gotten from reading number theory books
lol
Yes, that is the only possibility.
You're on the other side. Learn the correspondence and come to topologists' parties to surprise people with your quick covering space theory skills
So then the statement must be correct, @N.Maneesh.
You have to show that that is the only possible Jordan form.
OK, I'm disappearing.
@TedShifrin How can the statement be correct just from the Jordan form being unique?
I like whole numbers too much
Despite my poor numeracy skills
2:12 AM
Hey guys.
Take $n$ vectors $\{v_i\}$ in an $F$-vector space $V$.
Does the existence of linear maps $\phi_i:V\to F$ with $\phi_i(v_j) = \delta_{ij}$ imply linear independence of the $v_i$?
I mean, for any $\alpha_i\in F$ we have
$$\sum \alpha_iv_i = 0 \implies \sum \alpha_i\phi_j(v_i) = 0 \implies \alpha_j = 0$$
Is that the proof? haha.
It's so simple I feel I'm missing something important hahaha.
2:29 AM
$$J^2= \left[\begin{array}{rrrr} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1\\ \end{array}\right]$$ For $J^2$, I can prove that there is $v=(a,b,c,0)$ with suitable $a,b,c$ such that $Jv \neq 0$ and $J^2v=0$. I don't know about $A$
2:47 AM
@BAYMAX We know that there is a matrix $P$ such that $J=P^{-1}AP$. So $J^2=P^{-1}A^2 P$. Similar matrices have the same rank, so the rank of $A^2$ must be 1, and Nullity($A^2$)=3 while Nullity($A$)=2. So there is a vector $v$ such that $Av\neq 0$ and $A^2v=0$.
Is my argument correct?
> Nullity($A^2)=3$. Nullity(A)=2. So, there is a vector $v$ such that $Av\neq 0$ and $A^2v=0$.
@N.Maneesh
3:52 AM
Ah yeah. I just leave this tab open on my computer. So I imagine that I just randomly pop up whenever I connect my internet. :] @TedShifrin
4:15 AM
Hey.
I have vector spaces $U$ and $V$, and I'm trying to show that the usual construction $F(U\times V)/S$ (where $F$ is the free functor from Set to k-Vec) together with the map $\xi:U\times V\to F(U\times V)/S$ given by $(u,v)\mapsto [(u,v)]$ satisfies the universal property of the tensor product.
So I take an arbitrary bilinear map $f:U\times V\to W$, and I want to show it factors through $\xi$ via a unique $\tilde{f}$.
Haha is anybody following?
This sounds like Category Theory, and, unfortunately, I do not personally know category theory well enough to be of any assistance.
It's linear algebra. I just use categorical jargon because it's easier haha.
But thanks :)
5:10 AM
[Random]
On the problem of the Category of Mathematics
Consider the following diagram:
Top -> Ord
..........|
.........V
Hi all, where is the playground?
..........Alg
The morphisms that link Top to Ord and Ord to Alg may not be associative, as the morphisms Top -> Alg may be in general very different in structure
Thus mathematics is something much larger than a category
May 11 at 16:52, by Tobias Kildetoft
If you lecture long into the void, the void starts lecturing back.
@Rudi_Birnbaum what playground? there is no chat room called that
@Secret where I can test mathjax code (regularly).
I do it usually in the answer windows in MSE
that would be irregular I guess
### Sandbox
Where you can play with chat features (except flagging) and ch...
@Secret Sandbox, that was the term! Thanks!!
5:45 AM
I'm about to go to bed, but I don't even know how to ask this question in a front-page appropriate way. Take a hyperbolic plane ($H^2$), mapping it to a disk, and compactifying it (including the circumference). Adding a Euclidean dimension $X$, to turn it into a cylinder, what type of geometry is along the surface of the $\text{Circumference}\times X$?
I'm probably even talking nonsense at this point, in regards to "what type of geometry", but I'm just having trouble grasping a geometric concept discussed in a physic discussion with someone
And this shape came up
I can't even begin to reason about the Circumference of a compactified Hyperbolic Disk, let alone see how extending it with a euclidean dimension affects it.
What should I ask to accomplish learning anything?
1 hour later…
7:02 AM
Let $f:[0,\infty)\to \mathbb R$ be a real valued function such that $f(1)=1$ and, for all $x\in \mathbb R$, $f'(x)=\frac{1}{x^2+f(x)^2}$. Then find $\lim_{x\to \infty}f(x)$
I know that $f'(x)\ge 0$. So, Monotonically increasing
$f''(x)\leq 0 \implies f$ is concave down
7:47 AM
Let $f:[0,1]\to \Bbb R$ be a continuous function, define $g:[0,1]\to\Bbb R$ by $g(x)=(f(x))^2$; then $\int_0^1g\,dx=0\implies\int_0^1f\,dx=0$. How to show this? I first thought that it follows from some $\int fg\,dx=\int f\,dx\int g\,dx$ kind of rule, but there is no such rule.
8:10 AM
can someone help me with this
I want to finish this proof. OP has considered the interval differently. I choose here, in $\mathbb{R}^p$, the interval $a_i - (\epsilon/2)^{(1/p)} ≤ x_i ≤ b_i + (\epsilon/2)^{(1/p)}$ (the closed set $F$). I am trying to show that $m(A)≤m(F)+ \epsilon$. Clearly, $F \subset A$. We break up the...
@Silent that's because $g$ is always nonnegative
so if its integral is zero, it must be identically zero
@Sil
@Silent, suppose $f(x)=k\neq 0$ somewhere. Then, by continuity, $f \neq 0$ in some neighbourhood of $x$, so $g>0$ there. Let the neighbourhood be $(e,f)$. Now split the integral of $g$ into three parts: from $0$ to $e$, from $e$ to $f$, and from $f$ to $1$. The middle term is positive and the other parts are nonnegative, thereby yielding $\int_0^1g(x)\,dx>0$. A contradiction
this is basically what @LeakyNun said, I guess
Suppose we have a function $f:B\rightarrow\ddot{B}$ where $B$ and $\ddot{B}$ are algebraic structures with the same underlying set, and $f$ satisfies the property that for $a,b\in B$, $f(a)f(b)=\phi(f(ab))$, where $\phi :\ddot{B}\rightarrow\ddot{B}$ is a permutation of $\ddot{B}$. Is there a term for this sort of relationship?
8:34 AM
can anybody on earth say why this is wrong?
pLEasE
8:45 AM
I don't see anything necessarily wrong with your work, Subhasis, though I always doubt myself when I see expressions like this.
for one last time, this is my request to go through this
maybe you can find some errors here
very sorry for the handwriting
Hello everyone. I have a small query. Is writing a matrix in its Smith normal form a basis change?
1 hour later…
10:05 AM
@ÉricoMeloSilva there is one such function for the upper half plane though: $G(r;r_0) = \frac{1}{2\pi}\left[\log|r-r_0| + \log|r-r_0'|\right]$
the "divergence theorem" proof doesn't work because the test function 1 isn't $C_0(V)$
3 hours later…
12:42 PM
@LeakyNun @SubhasisBiswas, thank you.
1:35 PM
Let $f:\Bbb R\to\Bbb R$ be a polynomial such that $f(0)>0$ and $f(f(x))=4x+1$ for all $x\in \Bbb R$, then what is $f(0)$?
The options given are: 1/4, 1/3, 1/2, 1
I have no idea what to do
1:49 PM
If I have a rank-two tensor that I want to analyze ─ say, an electric quadrupole moment, or a moment of inertia ─ it can often be very easy to analyze by moving to its principal-axes frame: one rotates to a reference frame where the tensor is diagonal, and this simplifies all sorts of understandi...
$f(0) > 0$
o great, f is not necessarily order preserving
trying again... $ff(0) = 1$
2:01 PM
@Silent $f(x)=\frac13 + 2x$
Wow! how did you figure that out?
Just take an arbitrary polynomial $f(x)=a_0+a_1x+\dots +a_nx^n$ and you get $$f(f(x))=a_0+a_1(a_0+\dots+a_nx^n)^1+\dots + (a_0+a_1x^1+\dots +a_nx^n)^n=4x+1,$$ and over an integral domain you can deduce that $a_2=a_3=\dots=a_n=0$
hmm
So $f(x) = a_0+a_1x$ and $f(f(x))=a_0+a_1(a_0+a_1x)=a_0+a_0a_1 + a_1^2x=1+4x$
so $a_0+a_0a_1 =1$ and $a_1^2=4$ which tells you that $a_1=\pm 2$
And thus you have two cases: either $a_1=2$, in which case $a_0=\frac13$ and you have $f(x)=\frac13 + 2x$, which satisfies $f(0)>0$
Or else you have $a_1=-2$ and thus $a_0=-1$, which gives you $f(x) = -1-2x$, which does not give you $f(0)>0$
Hence you can conclude that $f(x)=\frac13 + 2x$, yielding $f(0)=\frac13$
Thank you so much, @EarthCracks!
2:04 PM
Not a problem
2:18 PM
Afternoon
How does one estimate the behaviour of a function? Say I have the function $\frac{x+2}{\sqrt{x^3+x^5}}$, how come one can simplify this to $\frac{1}{\sqrt{x^3}}$?
I'm asking cause I'm determining the convergence of an integral from 1 to infinity with this function, and this approximation is made. But according to my drawing, the approximated function is less than the original, isn't that a problem?
I would simplify the function to $\frac{2+x}{\sqrt{x^3}}$ cause for large x, this is GREATER than the original function, and so, if the integral with this simplified function converges, the original has to as well, no?
But I can't see how one can go even further and approximate $\frac{2+x}{\sqrt{x^3}}$ to $\frac{1}{\sqrt{x^3}}$. This is making compromises on both numerator and denominator, which gets itchy.
2:36 PM
Can someone help me figure out how to turn this into a front-page appropriate question? I was thinking about this last night and I honestly don't know what I'd ask if I posted it on the front page
9 hours ago, by Axoren
I'm about to go to bed, but I don't even know how to ask this question in a front-page appropriate way. Take a hyperbolic plane ($H^2$), mapping it to a disk, and compactifying it (including the circumference). Adding a Euclidean dimension $X$, to turn it into a cylinder, what type of geometry is along the surface of the $\text{Circumference}\times X$?
@schn Note that $\frac{x + 2}{\sqrt{x^3 + x^5}} \sim \frac{x}{\sqrt{x^3 + x^5}}$. Now $x^3 + x^5 = x^2(x + x^3)$ so you have $\frac{x}{\sqrt{x^3 + x^5}} = \frac{1}{\sqrt{x + x^3}}$
is 2, 3 the only prime of the form $(p-1)!+1$ where $p$ is prime? if yes then why so?
probably smth to do with Wilson's Theorem
@ÍgjøgnumMeg Okay. But when approximating from $\frac{x + 2}{\sqrt{x^3 + x^5}}$ to $\frac{x}{\sqrt{x^3 + x^5}}$, aren't we approximating to a lesser function, and so a lesser integral over 1 to infinity?
And how would one arrive at $\frac{1}{\sqrt{x^3}}$ from your last expression $\frac{1}{\sqrt{x + x^3}}$? It seems like here one would have to make the function greater again, no?
My intention is to determine the convergence of the definite integral over the function $\frac{x+2}{\sqrt{x^3+x^5}}$ from 1 to infinity, and the approximation $\frac{1}{\sqrt{x^3}}$ is made.
@schn the denominator can be further factored to give $\sqrt{x^3}\sqrt{\frac{1}{x^2} + 1}$
Now in the limit $\frac{1}{x^2} \to 0$
Also, in the limit the $2$ in the numerator becomes irrelevant (think $x \to \infty$, so $x + 2 \to \infty$)
Could've done this all in one go by writing $x^3 + x^5 = x^5\left(\frac{1}{x^2} + 1\right)$
3:02 PM
@ÍgjøgnumMeg True. Would you say then that $\frac{x + 2}{\sqrt{x^3 + x^5}}\geq \frac{1}{\sqrt{x^3}}$ or the other way around? Would this matter in determining the definite integral of $\frac{x + 2}{\sqrt{x^3 + x^5}}$ from 1 to infinity?
I'm thinking, wouldn't one want to find a function that behaves very similar to the one in question but that is greater than it, so that if the integral with the approximated or simplified function, $\frac{1}{\sqrt{x^3}}$ in this case, is converging, then the integral with the original function has to as well?
Well you can think of $\int_{1}^\infty$ as $\lim_{c \to \infty} \int_1^c$ so that probably helps
idk I can't do analysis
4:02 PM
@TedShifrin on a final exam I just took, there was another typo :\
3 hours later…
6:46 PM
is infinitation of x equal to the infinite tetration of x? Let's introduce a new notation for this question: let $x^{/1/}=x+x$ (addition), $x^{/2/}=x\cdot x$ (multiplication), $x^{/3/}=x^x$ (exponentiation), $x^{/4/}={}^xx$ (tetration)
so infinitation according to my notation is $x^{/\infty/}$
infinite tetration of x is $x^{x^{x^{x\dots}}}$
so my question is whether $x^{/\infty/}=x^{x^{x^{x\dots}}}$
When people talk about the tails on box plots, are they talking about just the whiskers or are they talking about anything after or before the median?
7:00 PM
Why does anything I read involving a topological group specify a basis of open neighbourhoods of the identity? What's useful about having that information?
Context?
Having a local basis at identity gives a local basis at every point, which might come in handy
I see, context is varied, I've just seen it a lot of times and wasn't sure what the use of it was
The kernels of the connecting maps in a profinite group give a basis of open neighbourhoods of 1
No not the connecting maps
The projections down from the inverse limit
So in particular if it's a countable inverse system, the inverse limit is a first countable profinite group. That's always a plus.
ergh how far can I get without actively learning any topology and just picking up the relevant definitions as and when I need them lol
Very far
I do that all the time with point set topology
7:06 PM
@ÍgjøgnumMeg because you can just translate it around and describe a basis around every point, hence the whole topology
So having a local basis at every point lets you piece together the whole topology?
@alessandro ah sniped
That's nice
The idea is that you only need to know what happens near a point to talk about convergence (random remark: if you're convinced of this fact it's obvious that the metrics $d(x,y)$ and $d'(x,y)=\min\{1,d(x,y)\}$ induce the same topology)
Throw away all the big balls and you're still fine
7:12 PM
lol
It is sometimes useful to know that every metric is equivalent to a bounded one
Proving that a countable product of Polish spaces is Polish is the first example I could think of but there's surely more
sounds way further from the surface than I care to venture
Even more fundamentally you need that to metrize the product topology, if you have a countable family of spaces
Oh right, of course, that's how it's used in my example too
Right
7:15 PM
might go ahead and learn p-adic analysis before I learn real analysis properly
@ÍgjøgnumMeg why not both
write a program to calculate exp in 3-adic :P really cool
and then just claim that the p-adic metric is more natural and that real numbers don't make sense to me
if i want to consider the antipodal map on the n-sphere, i can consider it as being restricted from $-Id_{n\times n}$ on $R^{n+1}$. can I make rigorous through this embedding taking the map on tangent spaces on the sphere as just coming from the Jacobian $-Id_{n+1\times n+1}$, or should I instead take the stereographic projection charts on the n-sphere, and determine the Jacobians in local coordinates?
whenever I get stuck on something I go back in time and remember when I didn't know the difference between $\cap$ and $\cup$
What's the subgroup of PGL(2,C) consisting of transformations leaving the unit circle invariant?
7:18 PM
We had a lecture on Lean yesterday, but I slept in... @Leaky
@AlessandroCodenotti what a pity
@LeakyNun {1}?
definitely not
you mean taking the unit circle into itself?
they arise in the context of classifying Aut(D)
7:19 PM
or fixing the unit circle's elements
maps like [z-a]/[1-conjugate(a)z]
$\dfrac{z-a}{1-\overline{a}z}$
@ÉricoMeloSilva I'm wondering whether we can use these maps to generate a Green's function for the Laplacian in the unit disc
Does anybody have a nice-sounding soundbite as to why a geometric structure on a manifold should always have a finite dimensional symmetry group?
Remind me what the Green's function is again? Solution to Laplacian f = delta-mass at the boundary?
i.e. if the symmetry group were infinite dimensional "it's topology and not geometry"
@s.harp what's a geometric structure?
7:23 PM
@BalarkaSen $G(r;r_0)$, $\nabla^2 G = \delta(r-r_0)$, $G|_{\partial M} = 0$
@F.White here just a structure that is about geometry, so the question is vague and I'm trying to motivate why it should be so. there is a notion by Gromov called geometric structure, but that's not what I mean (or it is what I mean, but I don't want to say it)
Here are some partial answers: It is known that Banach Lie group actions on finite-dimensional manifolds are quite restricted. What I mean by this is: Due to a theorem by Omori, see Hideki Omori, On Banach-Lie groups acting on finite dimensional manifolds, Tohoku Math. J. (2) 30 (2), 223-25...
Myers–Steenrod theorem states for example that Isom(M) for any Riemannian manifold M is actually a finite-dimensional Lie group. I don't know a good argument for it to be finite-dimensional; you might want to argue that the tangent space to Isom(M) at the identity is finite dimensional - which I suppose is equivalent to claiming that the space of Killing fields is finite dimensional. That should be doable but I haven't tried it
@s.harp "if a Banach-Lie group acts smoothly, effectively and transitively on a finite-dimensional manifold, then it automatically is finite-dimensional.
Hence for many manifolds which come up, one can only consider actions which do not satisfy these requirements or one is forced outside of the class of Banach Lie groups (e.g. if one considers diffeomorphism groups of finite-dimensional manifolds). Quotient theorems, or smooth structures on orbits for infinite-dimensional group actions, are in general much harder to establish than in the finite-dimensional case."
Something along those lines, yeah
@F.White that is very interesting, thank you for that link. Just a remark: Diffeo(M) is infinite dimensional, modelled on a Fréchet space, and acts on M; for me to motivate with this remark would require me to distinguish between Fréchet spaces and Banach spaces, which is too subtle I believe
7:31 PM
@BalarkaSen rip me I got topology exam tomorrow
year 2
What kind of topology?
Good luck :)
Good luck!
@AlessandroCodenotti "metric space and topology"
Oh, way easier.
7:33 PM
@BalarkaSen I believe that that theorem follows from you being able to see that an isometry is uniquely determined by its derivative at a point (if the manifold is connected). So the idea that geometric structures have to be rigid (that is, symmetries are uniquely determined by a fixed amount of derivatives at a point) would be enough to motivate finite dimensionality
@LeakyNun I remember when I took the exam one question asked us to state and prove Baire's theorem and I was afraid I remembered the statement wrong :p
yeah I'm just afraid of missing some things that are considered basic :P
oh no
@s.harp Yeah, that's it, I figured it out right before you mentioned.
@loch that's hardcore
lemme dig out your year's paper... 2015 I think
7:35 PM
yeah found it
this is not good
Fixing a point $x \in M$, you can write down a map $\text{Isom}(M) \to M \times GL_n(T_x M)$
This is going to be injective.
@loch what if they ask us to prove sequential compactness => compactness lol
That's actually surprisingly complicated if you think about it
^ yeah
I remember the whole proof I think
and I think I also remember the skeleton of Baire
and also the filter proof for Tychonoff
but I'm kinda feeling that they're pointless
like who cares about proofs of technical results
Oh, remembering them is, yes. Cramming to keep them in your head for an exam the next day is very pointless. I hate doing it.
7:37 PM
@LeakyNun review the Hausdorff iff closed diagonal thing, a lot of people like to put it in exams apparently
@AlessandroCodenotti that's trivial
seriously in Lean you just "follow your nose"
So is the filters proof for Tychonoff, so what?
it's true
I don't remember most of these things since I don't use them
you're probably right ok
FIP characterisation for compact is also follow-your-nose
eh, what more technical results do you have in mind @BalarkaSen et al
Baire is pretty simple
it's just that some applications are weird.
7:39 PM
prove that the isometry group of a compact space is a compact group
@LeakyNun well I sure hope you have a metric space if they ask that
@AlessandroCodenotti yeah, metric space
I suppose that Baire theorem means "complete metric spaces are Baire" in this context then?
yeah
What's your proof that a complete metric space with no isolated point is uncountable? @Leaky
7:41 PM
@BalarkaSen "BCT1" :P
Ah, good. I didn't think of applying Baire when this came up in my exam. Just did the Rudin-style proof. Essentially proving Baire but still.
but isn't Z a complete metric space with every point isolated
@BalarkaSen without isolated points*
Thanks
With a bit more work you can prove it actually has cardinality $\geq |\Bbb R|$
7:44 PM
always assume continuum hypothesis is true to save yourself as much effort as possible
Con(ZFC) implies Con(ZFC+GCH) so I guess you can even assume GCH if you really want to :P
I don't know the contents of GCH so I'll assume that too
why not
Also, $\Bbb R^n$ cannot be written as a countable union of disjoint closed balls. That's actually notoriously complicated.
$2^\kappa=\kappa^+$ for every infinite cardinal $\kappa$. $\kappa=\aleph_0$ is CH
It's "obvious" but you need Baire to prove this.
7:47 PM
can't $\Bbb R^1$ be done like that?
Thanks again :P @BalarkaSen
0celo is back. (as Ryan Unger ;)
I'm having trouble visualizing this: if $\dot{\mathbf x} = \mathbf f(t,\mathbf x(t))$, what's the graphical interpretation of $\langle \mathbf x, \mathbf f \rangle < 0$?
Any metric space $X$ embeds into $C(X,\Bbb R)$
$X$ is complete iff the image is closed
could i get some help for this question:
1 hour ago, by Mathphile
is infinitation of x equal to the infinite tetration of x? …
7:56 PM
@LeakyNun Busemann embedding, very important
I wouldn't say technical though
I'm just wondering if passing to the space $C(X,\Bbb R)$ would help me prove that any complete perfect metric space is at least continuum
@s.harp So actually $\dim \text{Isom}(M^n) \leq n(n+1)/2$, because that's the dimension of $M \times SO(T_x M)$. Which is interesting: this bound is tight, it's attained for e.g. $S^n$. Is it true that it's attained iff $M$ is isotropic?
@Balarka it is also attained by Euclidean space (with symmetry group $O(n)\ltimes \Bbb R^n$)
https://rdrr.io/cran/imbalance/f/vignettes/imbalance.Rmd
# Imbalance classification problem
Let:
• $S=\{(x_1, y_1), \ldots, (x_m, y_m)\}$ be our training data for a classification problem, where $y_i \in \{0,1\}$ are our data labels. Therefore, we have a binary classification problem.
• $S^{+} = \{(x,y) \in S: y=1\}$ be the positive or minority instances.
• $S^{-} = \{(x,y) \in S: y=0\}$ be the negative or majority instances.
If $|S^{+}| < |S^{-}|$, the performance of classification algorithms is severely hindered, especially on the positive class. Therefore, methods to improve that performance are required.
Specifically, the imbalance package provides oversampling algorithms. This family of procedures aims to generate a set $E$ of synthetic positive instances based on the training ones, so that we obtain a new classification problem with $\bar{S}^{+} = S^{+} \cup E$, $\bar{S}^{-} = S^{-}$ and $\bar{S} = \bar{S}^{+}\cup \bar{S}^{-}$ as our new training set.
# Contents of the package
In the package, we have the following oversampling functions available:
• mwmote
• racog
• wracog
• rwo
• pdfos
Each of these functions can be applied to a binary dataset (that is, a dataset whose labels $y$ can take only two possible values). The following examples use the imbalanced datasets included in the package. For example, we can run the pdfos algorithm on the newthyroid1 dataset.
First of all we could check the shape of the dataset:
library("imbalance")
data(newthyroid1)
Clearly, Class is the class attribute of the dataset and there are two possible classes: positive and negative. How many instances do we need to balance the dataset? We could easily compute this by doing:
```r
numPositive <- length(which(newthyroid1$Class == "positive"))
numNegative <- length(which(newthyroid1$Class == "negative"))
nInstances  <- numNegative - numPositive
```
We get that we need to generate `r nInstances` instances to balance the dataset. Generating such a high number of synthetic instances would not be advisable, given the scarcity of minority examples from which to infer the data structure. We could try to generate 80 synthetic examples instead:
```r
newSamples <- pdfos(dataset = newthyroid1, numInstances = 80,
                    classAttr = "Class")
```
newSamples would contain the 80 synthetic examples, with same shape as the original dataset newthyroid1.
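As a quick sanity check, the package also exposes an `imbalanceRatio` helper measuring the size of the minority class relative to the majority one; a minimal sketch, assuming the function keeps the same `classAttr` convention as the oversampling routines:

```r
library("imbalance")
data(newthyroid1)

# Ratio between minority and majority class sizes; values well below 1
# indicate a strongly imbalanced dataset
imbalanceRatio(newthyroid1, classAttr = "Class")
```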
All of the algorithms can be used with only the parameters dataset, numInstances and classAttr, except for wracog, which does not have a numInstances parameter: it adjusts this number itself, and it needs two datasets (more precisely, two partitions of the same dataset), train and validation, to work.
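For instance, a wRACOG call could be sketched as follows; the 70/30 split and the seed are illustrative assumptions, and the exact signature of `wracog` should be checked against the package reference:

```r
library("imbalance")
data(newthyroid1)

# Illustrative 70/30 partition of the same dataset into train and validation
set.seed(42)
trainIdx   <- sample(nrow(newthyroid1), size = 0.7 * nrow(newthyroid1))
train      <- newthyroid1[trainIdx, ]
validation <- newthyroid1[-trainIdx, ]

# wRACOG decides how many synthetic instances to generate by itself
newSamples <- wracog(train, validation, classAttr = "Class")
```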
The package also includes a method to plot a visual comparison between the oversampled dataset and the old imbalanced dataset:
```r
# Bind a balanced dataset
newDataset <- rbind(newthyroid1, newSamples)

# Plot a visual comparison between new and old dataset
plotComparison(newthyroid1, newDataset,
               attrs = names(newthyroid1)[1:3], classAttr = "Class")
```
There is also a filtering algorithm available, neater, to cleanse synthetic instances. This algorithm could be used with every oversampling method, either included in this package or in another one:
```r
filteredSamples <- neater(newthyroid1, newSamples, iterations = 500)
filteredNewDataset <- rbind(newthyroid1, filteredSamples)

plotComparison(newthyroid1, filteredNewDataset,
               attrs = names(newthyroid1)[1:3])
```
# Oversampling
## MWMOTE @Barua_2014
SMOTE is a classic algorithm which generates new examples by filling empty areas among the positive instances. It updates the training set iteratively, by performing:

$$E := E \cup \{x + r(y-x)\}, \quad x,y\in S^{+},\ r\sim U(0,1)$$
It has a major setback though: it does not detect noisy instances. Therefore it can generate synthetic examples out of noisy ones or even between two minority classes, which if not cleansed up, may end up becoming noise inside a majority class cluster.
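The update rule above can be sketched as follows (a Python illustration rather than the package's R code; the pair selection and the use of $r\sim N(0,1)$ follow the formula exactly as written above):

```python
import numpy as np

def smote_like(minority, n_new, seed=0):
    """Sketch of the update E := E u {x + r(y - x)}: pick random
    minority pairs (x, y) and interpolate along the segment."""
    rng = np.random.default_rng(seed)
    minority = np.asarray(minority, dtype=float)
    idx = rng.integers(0, len(minority), size=(n_new, 2))
    x, y = minority[idx[:, 0]], minority[idx[:, 1]]
    r = rng.standard_normal((n_new, 1))  # r ~ N(0,1), as in the text
    return x + r * (y - x)

synthetic = smote_like([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]], 5)
```

Note that nothing here inspects the neighbourhood of a point, which is exactly the weakness MWMOTE addresses below.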
knitr::include_graphics("smote-flaws.png")
MWMOTE (Majority Weighted Minority Oversampling Technique) tries to overcome both problems. It intends to give higher weight to borderline instances, to instances in undersized minority clusters, and to examples near the borderline between the two classes.
Let us recall the header of the method:
mwmote(dataset, numInstances, kNoisy, kMajority, kMinority,
threshold, cmax, cclustering, classAttr)
A KNN algorithm will be used, where we write $d(x,y)$ for the Euclidean distance between $x$ and $y$. Let $NN^{k}(x)\subseteq S$ be the $k$-neighbourhood of $x$ within the whole training set (the $k$ closest instances under the Euclidean distance), $NN_{+}^k(x) \subseteq S^{+}$ its $k$ minority neighbourhood, and $NN_{-}^k(x) \subseteq S^{-}$ its $k$ majority neighbourhood.
For ease of notation, we will write $k_1:=$kNoisy, $k_2:=$kMajority, $k_3:=$kMinority, $\alpha:=$threshold, $C:=$cmax, $C_{clust}:=$cclustering.
We define $I_{\alpha,C}(x,y) = C_f(x,y) \cdot D_f(x,y)$, with $I_{\alpha,C}(x,y) = 0$ whenever $x \notin NN_{+}^{k_3}(y)$. Otherwise: [ f(x) = \left\{\begin{array}{ll} x & , x\le \alpha \\ C & \textrm{otherwise} \end{array}\right. ,\qquad C_f(x,y) = \frac{C}{\alpha} \cdot f\left(\frac{d}{d(x,y)}\right) ]
$C_f$ measures the closeness to $y$, that is, it will measure the proximity of borderline instances.
$D_f(x,y) = \frac{C_f(x,y)}{\sum_{z\in V} C_f(z,y)}$ will represent a density factor so an instance belonging to a compact cluster will have higher $\sum C_f(z,y)$ than another one belonging to a more sparse one.
Let $T_{clust}:= C_{clust} \cdot \frac{1}{|S_f^{+}|} \sum_{x\in S_f^{+}} \min_{y\in S_f^{+}, y\neq x} d(x,y)$. We will also use a mean-average agglomerative hierarchical clustering of the minority instances with threshold $T_{clust}$, that is, we will use the mean distance: [dist(L_i, L_j) = \frac{1}{|L_i||L_j|} \sum_{x\in L_i} \sum_{y\in L_j} d(x,y)] and, having started with one cluster per instance, we will proceed by joining the nearest clusters until the minimum of the distances exceeds $T_{clust}$.
A general outline of the algorithm is:
• Firstly, MWMOTE computes a set of filtered positive instances: $S_f^{+}$, by erasing those instances whose $k_1$-neighborhood does not contain any positive instance.
• Secondly, it computes the positive boundary of $S_f^{+}$, that is, $U = \cup_{x \in S_f^{+}} NN_{-}^{k_2}(x)$, and the negative boundary, by doing $V = \cup_{x \in U} NN_{+}^{k_3}(x)$.
• For each $x\in V$, it computes the probability of picking $x$ by assigning $P(x) = \sum_{y\in U} I_{\alpha, C}(x,y)$ and normalizing those probabilities.
• Then, it estimates $L_1, \ldots, L_M$ clusters of $S^{+}$, with the aforementioned hierarchical agglomerative clustering algorithm and threshold $T_{clust}$.
• Generate numInstances examples by iteratively picking $x\in V$ with probability $P(x)$, and updating $E := E\cup \{x+r(y-x)\}$, where $y\in L_k$ is uniformly picked and $L_k$ is the cluster containing $x$.
A few interesting considerations:
• Low $k_2$ is required in order to ensure we do not pick too many negative instances in $U$.
• For an opposite reason, a high $k_3$ must be selected to ensure we pick as many positive hard-to-learn borderline examples as we can.
• The higher the $C_{clust}$ parameter, the fewer and more populated clusters we will get.
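The first step of the outline, the noise filter that produces $S_f^{+}$, can be made concrete with a minimal sketch (Python rather than the package's R; using labels 1 for minority and 0 for majority is an assumption of this illustration):

```python
import numpy as np

def filter_noisy_positives(X, y, k1):
    """Sketch of MWMOTE's first step: drop positive instances whose
    k1 nearest neighbours (Euclidean distance, excluding the point
    itself) contain no other positive. Labels: 1 minority, 0 majority."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    kept = []
    for i in np.where(y == 1)[0]:
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                      # exclude the point itself
        neighbours = np.argsort(d)[:k1]    # k1 nearest neighbours
        if np.any(y[neighbours] == 1):
            kept.append(i)
    return np.array(kept)
```

An isolated positive surrounded only by negatives is treated as noise and dropped, which is precisely what prevents the SMOTE failure mode pictured earlier.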
## RACOG and wRACOG @Das_2015
This set of algorithms assumes we want to approximate a discrete distribution $P(W_1, \ldots, W_d)$.
Computing that distribution directly can be too expensive, because we would have to compute [ |\{\textrm{feasible values for }W_1\}| \cdots |\{\textrm{feasible values for }W_d\}| ] values in total.
We are going to approximate $P(W_1, \ldots, W_d)$ as $\prod_{i=1}^d P(W_i \mid W_{n(i)})$ where $n(i) \in {1, \ldots, d}$. Chow-Liu's algorithm will be used to meet that purpose. This algorithm minimizes Kullback-Leibler distance between two distributions: [ D_{KL}(P \parallel Q) = \sum_{i} P(i) \left(\log P(i) - \log Q(i)\right) ]
We recall the definition of the mutual information of two discrete random variables $W_i, W_j$: [ I(W_i, W_j) = \sum_{w_i\in W_i} \sum_{w_j\in W_j} p(w_i, w_j) \log\left(\frac{p(w_i,w_j)}{p(w_i) p(w_j)}\right) ]
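As a small self-contained check of this definition (a Python sketch, not the package's code; Chow-Liu uses these pairwise values as edge weights of a maximum spanning tree):

```python
import numpy as np
from collections import Counter

def mutual_information(a, b):
    """Empirical I(A, B) for two discrete samples of equal length,
    using plug-in estimates of p(a), p(b) and p(a, b)."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    mi = 0.0
    for (x, y), c in pab.items():
        pxy = c / n
        mi += pxy * np.log(pxy / ((pa[x] / n) * (pb[y] / n)))
    return mi
```

Independent variables give $I = 0$, while a variable paired with itself gives its entropy ($\log 2$ for a fair binary variable).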
Let $S^{+}=\{x_i = (w_1^{(i)}, \ldots, w_d^{(i)})\}_{i=1}^m$ be the unlabeled positive instances. The algorithm to approximate the distribution is:
• Compute $G'=(E',V')$, Chow Liu's dependence tree.
• If $r$ is the root of the tree, we will define $P(W_r|n(r)):=P(W_r)$.
• For each arc $(u,v) \in E'$ in the tree, set $n(v):=u$ and compute $P(W_v \mid W_{n(v)})$.
A Gibbs sampling scheme is later used to draw samples from the approximated probability distribution, where a batch of new instances is obtained by performing:
• Given a minority sample $x_i = (w_1^{(i)}, \ldots, w_d^{(i)})$.
• Iteratively construct, for each attribute, [ \bar{w}_k^{(i)} \sim P(W_k \mid \bar{w}_1^{(i)}, \ldots, \bar{w}_{k-1}^{(i)}, w_{k+1}^{(i)}, \ldots, w_{d}^{(i)}) ].
• Return $S = \{\bar{x}_i=(\bar{w}_1^{(i)}, \ldots, \bar{w}_d^{(i)})\}_{i=1}^m$.
knitr::include_graphics("monte-carlo.png")
Let us recall the headers of racog and wracog functions:
racog(dataset, numInstances, burnin, lag, classAttr)
wracog(train, validation, wrapper, slideWin,
threshold, classAttr, ...)
### RACOG
RACOG (Rapidly Converging Gibbs) iteratively builds batches of synthetic instances from the given minority ones. It discards the first burnin generated batches and, from that moment onwards, it picks a batch of newly generated examples every lag iterations.
### wRACOG
The downside of RACOG is that it clearly depends on burnin, lag and the requested number of instances, numInstances. wRACOG (wrapper-based RACOG) tries to overcome that problem. Let wrapper be a classifier, which could be declared as follows:
myWrapper <- structure(list(), class = "C50Wrapper")
trainWrapper.C50Wrapper <- function(wrapper, train, trainClass){
C50::C5.0(train, trainClass)
}
That is, a wrapper should be an S3 class with a method trainWrapper following the generic method:
trainWrapper(wrapper, train, trainClass, ...)
Furthermore, the result of trainWrapper must be a predict callable S3 class.
Another example of a wrapper, with a knn (which can get a little tricky, since it is a lazy classifier):
library("FNN")
myWrapper <- structure(list(), class = "KNNWrapper")
predict.KNN <- function(model, test){
FNN::knn(model$train, test, model$trainClass)
}
trainWrapper.KNNWrapper <- function(wrapper, train, trainClass){
myKNN <- structure(list(), class = "KNN")
myKNN$train <- train
myKNN$trainClass <- trainClass
myKNN
}
where train is the unlabeled training dataset, and trainClass are the labels for the training set.
An example call for this dataset consists of splitting the haberman dataset (provided by the package) into train and validation partitions, and calling wracog with both partitions and any of the aforementioned wrappers:
data(haberman)
trainFold <- sample(1:nrow(haberman), nrow(haberman)/2, FALSE)
newSamples <- wracog(haberman[trainFold, ], haberman[-trainFold, ],
myWrapper, classAttr = "Class")
## RWO @Zhang_2014
RWO (Random Walk Oversampling) generates synthetic instances so that mean and deviation of numerical attributes remain as close as possible to the original ones. This algorithm is motivated by the central limit theorem.
### Central limit theorem
Let $W_1, \ldots, W_m$ be a collection of independent and identically distributed random variables, with $\mathbb{E}(W_i) = \mu$ and $Var(W_i) = \sigma^2 < \infty$. Hence: [ \lim_{m} P\left[\frac{\sqrt{m}}{\sigma} \left(\underbrace{\frac{1}{m}\sum_{i=1}^m W_i}_{\overline{W}} - \mu \right) \le z \right] = \phi(z) ]
where $\phi$ is the distribution function of $N(0,1)$.
That is, $\frac{\overline{W} - \mu}{\sigma/\sqrt{m}} \rightarrow N(0,1)$ probability-wise.
Let $S^{+}= \{x_i = (w_1^{(i)}, \ldots, w_d^{(i)})\}_{i=1}^m$ be the minority instances. Now, let's fix some $j\in \{1, \ldots, d\}$, and let's assume that the $j$-th column follows a numerical random variable $W_j$, with mean $\mu_j$ and standard deviation $\sigma_j < \infty$. Let's compute $\sigma_j' = \sqrt{\frac{1}{m}\sum_{i=1}^m \left(w_j^{(i)} - \frac{\sum_{i=1}^m w_j^{(i)}}{m} \right)^2}$, the biased estimator of the standard deviation. It can be proven that instances generated with $\bar{w}_j = w_j^{(i)} - \frac{\sigma_j'}{\sqrt{m}}\cdot r$, $r\sim N(0,1)$, have the same sample mean as the original ones, and their sample variance tends to the original one.
### Outline of the algorithm
Our algorithm will proceed as follows:
• For each numerical attribute $j=1, \ldots, d$ compute the standard deviation of the column, $\sigma_j' = \sqrt{\frac{1}{m}\sum_{i=1}^m \left(w_j^{(i)} - \frac{\sum_{i=1}^m w_j^{(i)}}{m} \right)^2}$.
• For a given instance $x_i=(w_1^{(i)}, \ldots, w_d^{(i)})$, for each attribute $j$, generate:
[ \bar{w}_j = \left\{\begin{array}{ll} w_j^{(i)} - \frac{\sigma_j'}{\sqrt{m}}\cdot r, \; r\sim N(0,1) & \textrm{if } j \textrm{ is a numerical attribute}\\ \textrm{pick uniformly from } \{w_j^{(1)}, \ldots, w_j^{(m)}\} & \textrm{otherwise} \end{array}\right. ]
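For a single numeric attribute, the generation rule can be sketched as follows (Python illustration, not the package's R implementation); the sample mean of the synthetic column stays close to the original one, as the central limit argument predicts:

```python
import numpy as np

def rwo_numeric(column, n_new, seed=0):
    """Random-walk oversampling of one numeric attribute:
    w_bar = w_i - sigma'/sqrt(m) * r,  r ~ N(0,1),
    where sigma' is the biased (divide-by-m) standard deviation."""
    rng = np.random.default_rng(seed)
    w = np.asarray(column, dtype=float)
    m = len(w)
    sigma = w.std()                        # biased estimator (1/m)
    base = rng.choice(w, size=n_new)       # pick seed instances
    return base - sigma / np.sqrt(m) * rng.standard_normal(n_new)
```

With enough synthetic samples, their mean lands near the mean of the original column.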
## PDFOS @Gao_2014
### Motivation
Given a distribution function of a random variable $X$, namely $F(x)$, if that function has an almost everywhere derivative, then, almost everywhere, it holds: [ f(x) = \lim_{h\rightarrow 0} \frac{F(x+h) - F(x-h)}{2h} = \lim_{h\rightarrow 0} \frac{P(x-h < X \le x+h)}{2h} ]
Given random samples of $X$, namely $x_1, \ldots, x_n$, an estimator for $f$ could be the fraction of samples in $]x-h, x+h[$ divided by the length of the interval: [ \widehat{f}(x) = \frac{1}{2hn} \bigg[\textrm{number of samples } x_1, \ldots, x_n \textrm{ that belong to } ]x-h, x+h[ \bigg] ]
If we define $\omega(x) = \left\{\begin{array}{ll} \frac{1}{2} & , |x| < 1\\ 0 & \textrm{otherwise} \end{array}\right.$
and $\omega_h(x) = \omega\left(\left|\frac{x}{h}\right|\right)$, then we could write $\widehat{f}$ as: [ \widehat{f}(x) = \frac{1}{nh} \sum_{i=1}^n \omega_h(x-x_i) ]
If we assume that $x_1, \ldots, x_n$ are equidistant with distance $2h$ (each placed in the middle of an interval of length $2h$), $\widehat{f}$ can be seen as a histogram where each bar has width $2h$ and height $\frac{1}{2nh} \cdot \bigg[\textrm{number of samples } x_1, \ldots, x_n \textrm{ that belong to the interval}\bigg]$. The parameter $h$ is called the bandwidth.
In the multivariate ($d$-dimensional) case, we define: [ \widehat{f}(x) = \frac{1}{nh^d} \sum_{i=1}^n \omega_h(x-x_i) ]
### Kernel methods
If we took $\omega = \frac{1}{2} \Large{1}\normalsize_{]-1,1[}$, then $\widehat{f}$ would have jump discontinuities and an ill-behaved derivative. On the other hand, we can take any $\omega$ with $\omega\ge 0$ and $\int_{\Omega} \omega(x) dx = 1$, where $\Omega \subseteq X$ is a domain and $\omega$ is even; that way we obtain estimators with more desirable properties with respect to continuity and differentiability.
$\widehat{f}$ can be evaluated through its MISE (Mean Integral Squared Error): [ MISE(h) = \underset{x_1, \ldots, x_d}{\mathbb{E}} \int (\widehat{f}(x) - f(x))^2 dx ]
knitr::include_graphics("kernel-estimation.png")
#### Gaussian kernels
PDFOS (Probability Distribution density Function estimation based Oversampling) uses multivariate Gaussian kernel methods. The probability density function of a $d$-dimensional Gaussian distribution with mean $0$ and covariance matrix $\Psi$ is: [ \phi^{\Psi}(x) = \frac{1}{\sqrt{(2\pi)^{d} \det(\Psi)}} \exp\left(-\frac{1}{2} x \Psi^{-1} x^T \right) ]
Let $S^{+} = \{x_i = (w_1^{(i)}, \ldots, w_d^{(i)})\}_{i=1}^m$ be the minority instances. The unbiased covariance estimator is: [ U = \frac{1}{m-1} \sum_{i=1}^m (x_i - \overline{x})(x_i - \overline{x})^T, \qquad \textrm{where } \overline{x} = \frac{1}{m}\sum_{i=1}^m x_i ]
We will use kernel functions $\phi_h(x) = \phi^U\left(\frac{x}{h}\right)$, where $h$ ought to be optimized to minimize the MISE. It is well known that this can be achieved by minimizing the following cross-validation function: [ M(h) = \frac{1}{m^2 h^d} \sum_{i=1}^m \sum_{j=1}^m \phi_h^{\ast} (x_i - x_j) + \frac{2}{m h^d} \phi_h(0) ] where $\phi_h^{\ast} \approx \phi_{h\sqrt{2}} - 2\phi_h$.
Once a proper $h$ has been found, a suitable generating scheme is to take $x_i + h R r$, where $x_i \in S^{+}$, $r\sim N^d(0,1)$ and $U = R\cdot R^T$. When $U$ is positive-definite, we can obtain such an $R$ from the Cholesky decomposition. In fact, every covariance matrix is positive-semidefinite, as the following sketch of proof shows: [ y^T \left(\sum_{i=1}^m (x_i - \overline{x})(x_i - \overline{x})^T\right) y = \sum_{i=1}^m (\underbrace{(x_i - \overline{x})^T y}_{z_i})^T\, \underbrace{(x_i - \overline{x})^T y}_{z_i} = \sum_{i=1}^m ||z_i||^2\ge 0 ] for arbitrary $y \in \mathbb{R}^d$. PDFOS needs a strictly positive-definite matrix; otherwise it does not provide a result and stops its execution.
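The generating scheme $x_i + hRr$ can be sketched directly (Python illustration, not the package's R; the bandwidth value is assumed to come from the optimization described below):

```python
import numpy as np

def pdfos_sample(minority, h, n_new, seed=0):
    """Draw x_i + h * R r, with U = R R^T the sample covariance
    (Cholesky factor R) and r ~ N^d(0,1), as described above."""
    rng = np.random.default_rng(seed)
    X = np.asarray(minority, dtype=float)
    U = np.cov(X, rowvar=False)          # unbiased covariance estimator
    R = np.linalg.cholesky(U)            # fails unless U is positive-definite
    base = X[rng.integers(0, len(X), n_new)]
    r = rng.standard_normal((n_new, X.shape[1]))
    return base + h * r @ R.T
```

`np.linalg.cholesky` raising an error on a non-positive-definite $U$ mirrors PDFOS stopping its execution in that case.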
### Search of optimal bandwidth
We take a first approximation to $h$ as the value: [ h_{Silverman} = \left(\frac{4}{m(d+2)}\right)^{\frac{1}{d+4}} ] where $d$ is the number of attributes and $m$ the size of the minority class.
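This starting value is a one-liner (Python sketch); note that it shrinks as the minority class grows:

```python
def silverman_bandwidth(m, d):
    """First bandwidth guess h = (4 / (m (d + 2)))^(1 / (d + 4))."""
    return (4.0 / (m * (d + 2))) ** (1.0 / (d + 4))
```

For instance, with $m = 100$ minority instances and $d = 1$ attribute this gives $h \approx 0.42$.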
Reshaping the equation of the cross-validation function and differentiating: \begin{align} M(h) &= \frac{1}{m^2 h^d} \sum_{i=1}^m \sum_{j=1}^m \phi_h^{\ast} (x_i - x_j) + \frac{2}{m h^d} \phi_h(0) \nonumber\\ &= \frac{1}{m^2 h^d} \sum_{i=1}^m \sum_{j=1, j\neq i}^m \phi_h^{\ast} (x_i - x_j) + \frac{1}{m h^{d}} \phi_{h\sqrt{2}}(0) \nonumber\\ &= \frac{2}{m^2 h^d} \sum_{j > i}^m \phi_h^{\ast} (x_i - x_j) + \frac{1}{m h^{d}} \phi_{h\sqrt{2}}(0) \label{eq:cv-simp} \end{align}
\begin{align} \frac{\partial M}{\partial h}(h) &= \frac{2}{m^2 h^d} \sum_{j>i}^m \phi_h^{\ast} (x_i - x_j) \bigg(-d h^{-1} + h^{-3} (x_i-x_j)^T U (x_i-x_j) \bigg) - \frac{dh^{-1}}{mh^{d}} \phi_{h\sqrt{2}}(0) \label{eq:cross-val-df} \end{align}
A straightforward gradient descent algorithm is then used to find a good estimate of $h$.
# Filtering
Once we have created synthetic examples, we should ask ourselves how many of those instances are actually relevant to our problem. Filtering algorithms can be applied to oversampled datasets to erase the least relevant instances.
## NEATER @Almogahed_2014
NEATER (filteriNg of ovErsampled dAta using non cooperaTive gamE theoRy) is a filtering algorithm based on game theory.
### Introduction to Game Theory
Let $(P, T, f)$ be our game space. We have a set of players, $P=\{1, \ldots, n\}$, and for the $i$-th player a set of feasible strategies, $T_i=\{1, \ldots, k_i\}$, resulting in $T = T_1 \times \ldots \times T_n$. We can assign a payoff to each player taking into account their own strategy as well as the other players' strategies. So $f$ is given by the following equation: [ \begin{array}{rll} f: T &\longrightarrow& \mathbb{R}^n\\ t &\longmapsto& (f_1(t), \ldots, f_n(t)) \end{array} ]
$t_{-i}$ will denote $(t_1, \ldots, t_{i-1}, t_{i+1}, \ldots, t_n)$ and similarly we can denote $f_i(t_i, t_{-i})= f_i(t)$.
A strategic Nash equilibrium is a tuple $(t_1, \ldots, t_n)$ where $f_i(t_i, t_{-i}) \ge f_i(t'_{i}, t_{-i})$ for every other $t'\in T$ and all $i=1, \ldots, n$. That is, a strategic Nash equilibrium maximizes the payoff for all the players.
The strategy for each player will be picked with respect to a given probability: [ \delta_i \in \Delta_i = \{(\delta_i^{(1)}, \ldots, \delta_i^{(k_i)}) \in (\mathbb{R}^{+}_0)^{k_i} : \sum_{j=1}^{k_i} \delta_i^{(j)} = 1\} ] We define $\Delta := \Delta_1 \times \ldots \times \Delta_n$ and we call an element $\delta = (\delta_1, \ldots, \delta_n) \in \Delta$ a strategy profile. Having fixed a strategy profile $\delta$, the overall payoff for the $i$-th player is defined as: [ u_i(\delta) = \sum_{(t_1, \ldots, t_n)\in T} \delta_i^{(t_i)} f_i(t) ]
Given $u_i$ the payoff for a $\delta$ strategy profile in the $i$-th player and $\delta\in \Delta$ we will denote
\begin{align} \delta_{-i} &:= (\delta_1, \ldots, \delta_{i-1}, \delta_{i+1}, \ldots, \delta_n)\ u_i(\delta_i, \delta_{-i}) &:= u_i(\delta) \end{align}
A probabilistic Nash equilibrium is a strategy profile $\delta = (\delta_1, \ldots, \delta_n)$ verifying $u_i(\delta_i, \delta_{-i}) \ge u_i(\delta'_{i}, \delta_{-i})$ for every other $\delta'\in \Delta$ and all $i=1, \ldots, n$.
A theorem ensures that every game space $(P,T,f)$ with finitely many players and strategies has a probabilistic Nash equilibrium.
### Particularization to imbalance problem
Let $S$ be the original training set and $E$ the synthetic generated instances. Our players will be $S \cup E$. Every player can pick between two strategies: $0$ (being a negative instance) and $1$ (being a positive instance). Players of $S$ always have a fixed strategy: the $i$-th player has $\delta_i = (0,1)$ (a pure $0$ strategy) if it is a negative instance, or $\delta_i = (1,0)$ (a pure $1$ strategy) otherwise.
The payoff for a given instance is affected only by its own strategy and its $k$ nearest neighbors in $S\cup E$. That is, for every $x_i \in E$, we will have $u_i(\delta) = \sum_{j\in NN^k(x)} (x_i^T w_{ij} x_j)$ where $w_{ij} = g\left(d(x_i, x_j)\right)$ and $g$ is a decreasing function (the further, the lower payoff). In our implementation, we have considered $g(z) = \frac{1}{1+z^2}$, with $d$ the euclidean distance.
Each step involves an update of the strategy profiles of the instances of $E$. Namely, if $x_i \in E$, the following rule is used: \begin{align} & \delta_i(0) = \left(\frac{1}{2}, \frac{1}{2}\right) \nonumber\\ & \delta_{i,1}(n+1) = \frac{\alpha + u_i((1,0))}{\alpha + u_i(\delta(n))}\, \delta_{i,1}(n) \nonumber\\ & \delta_{i,2}(n+1) = 1 - \delta_{i,1}(n+1) \nonumber \end{align}
That is, we reinforce the strategy that is producing the higher payoff, to the detriment of the opposite strategy. This method has sufficient convergence guarantees.
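One application of this update can be sketched in a couple of lines (Python illustration with hypothetical payoff values; the real algorithm computes the payoffs from the weighted $k$-neighbourhood described above):

```python
def neater_update(delta1, payoff_pure_one, payoff_mixed, alpha=1.0):
    """One step of the rule above: the probability of playing strategy 1
    is scaled by (alpha + u((1,0))) / (alpha + u(delta))."""
    new_delta1 = delta1 * (alpha + payoff_pure_one) / (alpha + payoff_mixed)
    return new_delta1, 1.0 - new_delta1
```

When the pure-1 payoff exceeds the current mixed payoff, the probability of playing 1 grows; when the two coincide, the profile is a fixed point.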
Let's recall the header for neater:
neater(dataset, newSamples, k, iterations, smoothFactor, classAttr)
Then a rough sketch of the algorithm is:
• Compute k nearest neighbors for every instance of $E:=$newSamples.
• Initialize strategy profiles of dataset$\cup$newSamples.
• Iterate iterations times updating payoffs with the aforementioned rule and strategy profiles.
• Keep only those examples of newSamples with probability of being positive instances higher than $0.5$.
# References
imbalance documentation built on Feb. 19, 2018, 1:03 a.m.
https://ask.libreoffice.org/en/questions/135528/revisions/
# Revision history [back]
### Forecast autofield value in Base form
For my database I created a form with "Add data only" property to insert new records in a table.
Every record has a unique "ID".
The ID field generally displays <AutoField> until the record is saved; after that, the ID number is generated and displayed in its form field.
I'd like to display this new ID number just when the form is opened, before the record is saved.
So I created a Query SELECT COUNT( * ) + 1 "NewID" FROM "Items" (I know this is not really the AutoField value, but could be enough for my project).
I added a subform with the Query as Content.
And into the subform added a new ID field with "NewID" assigned as Data field.
This partially works: when the form is opened the ID field shows 1, and it is only updated to the total number of records after the record is saved...
Is it possible to "refresh" the ID field just after the form is opened? So that it immediately shows the total number of records + 1.
Should I use a macro?
I found this topic asking the same but not really solving the problem.
Thank you!
https://electronics.stackexchange.com/questions/353774/calculating-ac-input-power
# Calculating AC Input Power
I'm making an AC to DC power supply that connects to mains. There's an AC to DC converter, buck converter and a smoothing LDO that outputs a DC controlled with pots.
I'm verifying the supply and have been trying to get efficiency measurements. Due to limited equipment, I can only measure input Vrms, input Irms, output DC voltage, and output DC current. However, if I just do the Vdc * Idc / (Vrms * Irms) calculation, my efficiency goes above 100% at the higher loads, which is obviously wrong.
I know there's apparent vs real power when driving reactive loads with AC, but when I'm measuring the Vrms and Irms that goes into my circuit, that is the real power, is it not? Adding a power factor would mean my circuit becomes even more unphysically efficient.
Is there anything else that I haven't taken into account?
• Vrms * Irms gives you apparent power, while Vdc * Idc is active power. You cannot tell efficiency from the quotient because apparent power also has phase shift and (for a rectifier, mostly) current distortion in it. – Janka Feb 3 '18 at 1:39
The product of Irms and Vrms is not real power.
Imagine if the load was a perfect capacitor and input was a pure sine wave.
RMS voltage is mains voltage. RMS current is Vrms/Xc where Xc = $\frac{1}{2 \pi f C}$
Say 120VAC RMS at 60Hz and 100$\mu$F, so Xc is about 26.5$\Omega$ and Irms is about 4.52A, so we get roughly 543VA of apparent power.
However real power is exactly zero.
To get input power you need to find the average of the instantaneous product of voltage and current. In other words, a wattmeter.
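A small simulation makes the gap concrete (a Python sketch with illustrative 120 Vrms / 60 Hz / 100 µF values, not part of the original answer): the product of the two RMS meters is large, while the averaged instantaneous product, what a wattmeter reads, is essentially zero:

```python
import numpy as np

# A purely capacitive load on 120 Vrms / 60 Hz mains (illustrative
# values): current leads voltage by 90 degrees.
f, C, Vrms = 60.0, 100e-6, 120.0
Xc = 1.0 / (2 * np.pi * f * C)                   # capacitive reactance
t = np.linspace(0.0, 1.0 / f, 10001)             # one mains cycle
v = Vrms * np.sqrt(2) * np.sin(2 * np.pi * f * t)
i = (Vrms / Xc) * np.sqrt(2) * np.sin(2 * np.pi * f * t + np.pi / 2)

vrms = np.sqrt(np.mean(v ** 2))
irms = np.sqrt(np.mean(i ** 2))
apparent = vrms * irms        # what multiplying the two meters gives
real = np.mean(v * i)         # what a wattmeter would read: ~0 W
```

The 90° phase shift makes every positive half of the instantaneous product cancel against a negative half, so the average is zero even though both RMS readings are nonzero.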
• So measuring Irms and Vrms is really just measuring the apparent power? – Michael E Feb 3 '18 at 1:42
• Yes, by definition, the product of the two is apparent power. – Spehro Pefhany Feb 3 '18 at 1:45
• So would I be able to get to the true power delivered from the data I have + an approximation of the reactive and resistive elements or would that be pretty inaccurate? – Michael E Feb 3 '18 at 1:51
• Your input is most likely far from an ideal reactance. It's probably some spikey waveform (large crest factor). You might be able to use an oscilloscope that has appropriate math functions, but most are not easily used on mains voltage without danger. For cheap you could try a "Kill-a-Watt" consumer meter. – Spehro Pefhany Feb 3 '18 at 1:58
• I see, thanks! I guess the last question I have is, if my current and voltage inputs have some phase shift, that would just be a smaller number than the Irms*Vrms that I have. Then why is my efficiency still greater than 100%? If I use the data I have without accounting for power factor, it should just result in worse efficiency, right? – Michael E Feb 3 '18 at 2:14
https://www.acmicpc.net/problem/2384
Time limit: 1 second, memory limit: 128 MB
## Problem
On the table in front of you lie a bunch of pieces of coloured thread. All the threads are different colours and the ends have been arranged in a single line, for example red, blue, green, green, blue, red. Note that each colour appears exactly twice, once for each end of that thread. You've been asked to tie the threads into a single large loop by successively tying together ends of some pair of adjacent threads. In the above example you could start by tying end 1 to end 2 or end 2 to end 3 but not end 3 to end 4 since that would make a green loop. Likewise, if your first tie was red to blue, then your second tie could not join the remaining red and blue.
Finding the job a little boring, you decide instead to count the number of ways in which you could perform the task, i.e. the number of sequences of ties that you could do. For instance, suppose that the initial pattern was rrbb, then there is only one allowed sequence of ties (join the middle two, then join the two remaining). On the other hand if it were rbrb, then there are three allowed sequences (join any consecutive pair to begin with, then join the remaining two).
Likewise you can see that if the initial pattern were rgrbgb, then four of your initial ties lead to a subsequent pattern of the general form abab, while one leads to abba. Thus there are 4x3 + 1x2 = 14 ways to complete the ties in this case.
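A brute-force sketch (in Python; not part of the original problem statement) implements exactly the counting just described: tie adjacent ends belonging to different threads, merge the two threads, and let only the very last tie close the loop. It reproduces all the counts above:

```python
def count_tyings(pattern):
    """Number of valid tie sequences for a line of thread ends.
    Each colour must appear exactly twice, otherwise the answer is 0."""
    if any(pattern.count(c) != 2 for c in set(pattern)):
        return 0

    def rec(ends):
        if len(ends) == 2:
            # only the final tie may join two ends of the same thread,
            # closing the single large loop
            return 1 if ends[0] == ends[1] else 0
        total = 0
        for i in range(len(ends) - 1):
            a, b = ends[i], ends[i + 1]
            if a == b:
                continue  # would close a premature small loop
            keep, drop = min(a, b), max(a, b)
            # remove the two tied ends and merge the two threads
            nxt = tuple(keep if e == drop else e
                        for j, e in enumerate(ends) if j not in (i, i + 1))
            total += rec(nxt)
        return total

    return rec(tuple(pattern))
```

For rgrbgb this recursion finds the 4x3 + 1x2 = 14 sequences described above.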
## Input
Input will consist of a sequence of colour patterns represented by words of length at most 22 consisting of lower case letters and will be terminated by a line containing the single character '#'.
## Output
Output will be a sequence of lines, one for each line in the input, each containing the number of ways to tie off the pattern in the corresponding input line. If an input line is invalid (because it does not contain exactly two occurrences of each of its characters), the output for that line should be 0.
## Sample Input 1
rrbb
rbrb
rgrbgb
gbg
#
## Sample Output 1
1
3
14
0
https://plainmath.net/pre-algebra/81335-algebra-exponents-and-fractions-i-could
Jameson Lucero
2022-07-07
algebra exponents and fractions
I could be overthinking or tired... But I am too embarrassed to ask my prof. about this probably very simple algebra rule I am ignorant of... Also, this is just a snippet from an inductive proof example.
Say you have something like this
$\frac{{4}^{k}-1}{3}+\frac{3\ast {4}^{k}}{3}=$
$\frac{4\ast {4}^{k}-1}{3}$
My question is: what algebra rules, or rules of fractions, allow this? In other words, what computation is going on? I can agree that the 3 should cancel out:
$\frac{3\ast {4}^{k}}{3}={4}^{k}$
But wouldn't that mean it would be something like this
$\frac{\left({4}^{k}-1\right)\ast {4}^{k}}{3}$
or more accurately
$\frac{\left({4}^{k}-1\right)}{3}\ast {4}^{k}$
and not this?
$\frac{4\ast {4}^{k}-1}{3}$
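For reference, the step in question just adds the two fractions over the common denominator $3$ and collects the powers of $4$:

```latex
\frac{4^{k}-1}{3}+\frac{3\cdot 4^{k}}{3}
  = \frac{(4^{k}-1)+3\cdot 4^{k}}{3}
  = \frac{(1+3)\cdot 4^{k}-1}{3}
  = \frac{4\cdot 4^{k}-1}{3}
  = \frac{4^{k+1}-1}{3}
```

Nothing is multiplied: fractions with the same denominator are added by adding their numerators, and the $3$ only cancels inside the second fraction, not across the sum.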
https://www.expii.com/t/interval-and-set-builder-notation-compound-inequalities-4291
# Interval and Set-Builder Notation - Compound Inequalities - Expii
We notate solutions to inequalities with interval or set-builder notation. Interval notation uses brackets and parentheses to denote an open or closed set.
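As a quick illustration (an example added here, not taken from the original page), the same solution set written in set-builder and in interval notation:

```latex
\{\, x \mid -2 < x \le 3 \,\} = (-2,\, 3]
```

The parenthesis marks an open endpoint ($-2$ is excluded) and the bracket a closed endpoint ($3$ is included).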
https://stacks.math.columbia.edu/tag/0FP8
Lemma 20.46.1. Let $(X, \mathcal{O}_ X)$ be a ringed space. The category of complexes of $\mathcal{O}_ X$-modules with tensor product defined by $\mathcal{F}^\bullet \otimes \mathcal{G}^\bullet = \text{Tot}(\mathcal{F}^\bullet \otimes _{\mathcal{O}_ X} \mathcal{G}^\bullet )$ is a symmetric monoidal category (for sign rules, see More on Algebra, Section 15.68).
Proof. Omitted. Hints: as unit $\mathbf{1}$ we take the complex having $\mathcal{O}_ X$ in degree $0$ and zero in other degrees with obvious isomorphisms $\text{Tot}(\mathbf{1} \otimes _{\mathcal{O}_ X} \mathcal{G}^\bullet ) = \mathcal{G}^\bullet$ and $\text{Tot}(\mathcal{F}^\bullet \otimes _{\mathcal{O}_ X} \mathbf{1}) = \mathcal{F}^\bullet$. To prove the lemma you have to check the commutativity of various diagrams, see Categories, Definitions 4.42.1 and 4.42.9. The verifications are straightforward in each case. $\square$
https://www.physicsforums.com/threads/ir-spectroscopy-notation.651444/
# IR spectroscopy notation?
1. Nov 11, 2012
### burton95
Hi there. First time PF poster. I have a question about interpreting IR spectra. I get the general idea of interpreting the sharpness and strength of a line, but when I need to ID a peak I have to choose between the $\delta$ and $\nu$ (or maybe it's just v) versions of the functional groups. I tried to correlate the usage between the fingerprint (400-1500 cm-1) and the functional (1500-4000 cm-1) regions and haven't seen the light yet. Thanks
burton95
Last edited: Nov 11, 2012
2. Nov 13, 2012
### Amok
I don't understand what you mean by delta or nu versions of the functional group. The fingerprint region is pretty useless in finding out what functional groups are present btw. Could you state your question more precisely?
3. Nov 14, 2012
### burton95
I think I found out: $\nu$ represents a stretch and $\delta$ a bend. Thanks for the reply
Anthony
https://www.svabkobranie.sk/8fiz19mt/article.php?996dcd=point-estimate-of-the-population-mean-calculator
Point estimation uses sample data to calculate a single value (known as a statistic) that serves as a "best guess" or "best estimate" of an unknown (fixed or random) population parameter. Under the simplest definition, any statistic can serve as a point estimate. A point estimate gives a single value as the estimate of a parameter, while an interval estimate gives a range of plausible values; point estimators and interval estimators are the two main types of estimators in statistics.
Common examples: a sample mean x̄ is a point estimate of a population mean μ; a sample variance s² is a point estimate of a population variance σ²; a sample standard deviation s is a point estimate of a population standard deviation σ; and a sample proportion p̂ is a point estimate of a population proportion p. The population mean is the average over a whole group, where the group can be anything: people, things, animals. Sample statistics are observable because we calculate them directly from the data, but we usually cannot calculate the population mean itself, because we only observe a sample, so we have to estimate it. The larger the sample size, the more accurate the estimate tends to be.
Population variance σ² indicates how the data points in a population are spread out. By definition σ² = E[(X − μ)²], the average squared distance from each data point to the population mean, so the variance is simply the mean of the random variable Y = (X − μ)².
A point estimate is usually reported with a confidence interval, expressed as the point estimate plus or minus a margin of error (for example, 198 ± 6 pounds). The required confidence level is the probability that an interval constructed this way contains the true parameter value. When the population standard deviation σ is known, the critical value comes from the normal distribution; when σ is not known (as is generally the case), the interval is constructed using a critical value from the Student's t-distribution, with the sample standard deviation s substituted for σ (a reasonable substitution when the sample size exceeds 30). Graphing calculators such as the TI-83/84 automate this construction. Working backwards, the point estimate is the midpoint of a confidence interval and the margin of error is half its width: given a lower bound of 17 and an upper bound of 27, the point estimate is (17 + 27)/2 = 22 and the margin of error is (27 − 17)/2 = 5. Exercise: a sample of dancers at a competition practiced a mean of x̄ = 7.35 hours with s = 1.28; calculate and interpret a 98% confidence interval (so α/2 = (1 − 0.98)/2 = 0.01) for the mean number of hours practiced.
Worked example: a study aims to estimate the mean systolic blood pressure of British adults by randomly sampling and measuring the blood pressure of 100 adults from the population. From their sample, the researchers estimate the sample mean to be 70 mmHg and the sample standard deviation to be 8 mmHg. Using this information, the 95% confidence interval is calculated as between 68.43 and 71.57 mmHg.
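The blood-pressure interval can be reproduced directly from the formula x̄ ± z·s/√n. The sketch below is illustrative (the function name is mine, and it assumes the normal critical value 1.96 for the 95% level):

```python
import math

def mean_confidence_interval(xbar, s, n, z=1.96):
    # CI for a population mean: point estimate ± margin of error,
    # where margin = z * s / sqrt(n)
    margin = z * s / math.sqrt(n)
    return xbar - margin, xbar + margin

# Blood-pressure study: x̄ = 70 mmHg, s = 8 mmHg, n = 100
lo, hi = mean_confidence_interval(70, 8, 100)
print(round(lo, 2), round(hi, 2))  # → 68.43 71.57
```

With n = 100 the normal and t critical values are nearly equal; for small samples the t-distribution value should replace 1.96, as noted above.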
It is desirable for a point estimator to be (1) consistent, meaning its estimates converge to the parameter as the sample size grows, and (2) unbiased, meaning the expectation of the observed values over many samples (the "average observation value") equals the corresponding population parameter. The sample mean x̄ is a reasonable, unbiased point estimator of the population mean; its calculation is just the sum of all sample values divided by the number of values. Unbiased estimation of the standard deviation, by contrast, is highly involved and varies depending on the distribution. In practice the "corrected sample standard deviation" (the n − 1 version) is the most commonly used estimator for the population standard deviation and is generally referred to as simply the "sample standard deviation." Note also that when sampling without replacement from a finite population, a Finite Population Correction (FPC) can be applied to the sample size formula and to variance estimates; Daniel WW (1999), Biostatistics: A Foundation for Analysis in the Health Sciences, explains the adjustment (see pages 141-142).
Estimating a population proportion is a common special case: the basic point estimate is p̂ = x / n, the number of successes divided by the sample size. For instance, if a sample of students is asked how many believe in Bigfoot, the point estimate of the proportion of believers is the fraction of the sample who said yes. There are, however, several ways to calculate the point estimate of a proportion. Try to imagine that you're tossing a coin: with an unbiased coin you should get about 50% heads and 50% tails, but what if the coin is bent slightly? To estimate the true probability of tails you need the number of trials T (how many times you tossed the coin), the number of successes S (how many tails you got), and the z-score z corresponding to the required confidence level. Each of the following formulas will give a slightly different result:
- Maximum Likelihood Estimation: MLE = S / T
- Laplace Estimation: Laplace = (S + 1) / (T + 2)
- Jeffrey Estimation: Jeffrey = (S + 0.5) / (T + 1)
- Wilson Estimation: Wilson = (S + z²/2) / (T + z²)
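The four formulas are easy to compute side by side; a minimal sketch (the function and key names are mine, and z defaults to the 95% value):

```python
def point_estimates(successes, trials, z=1.96):
    # Four point estimates of a proportion from S successes in T trials.
    s, t = successes, trials
    return {
        "mle":     s / t,                      # maximum likelihood
        "laplace": (s + 1) / (t + 2),
        "jeffrey": (s + 0.5) / (t + 1),
        "wilson":  (s + z**2 / 2) / (t + z**2),
    }

# A slightly bent coin: 7 tails in 10 tosses
est = point_estimates(7, 10)
```

For S = 7, T = 10 this gives MLE = 0.7, Laplace ≈ 0.667, Jeffrey ≈ 0.682, Wilson ≈ 0.645: slightly different results, as the text says.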
In more formal terms, the estimate occurs as a result of point estimation applied to a set of sample data. How do you find the point estimate of the population mean? What Is Population Standard Deviation? surveyed was 1.28 hours. This is because a statistic serves as an estimator of a given parameter in a population. If the population standard deviation cannot be used, then the sample standard deviation, s, can be used when the sample size is greater than 30. In practice, when the sample mean difference is statistically significant, our next step is often to calculate a confidence interval to estimate the size of the population mean difference. Instructions: Use this Confidence Interval Calculator to compute a confidence interval for the population mean $$\mu$$, in the case that the population standard deviation $$\sigma$$ is known. For example, a sample mean can be used as a point estimate of a population mean. Please enter the necessary parameter values, and then click 'Calculate'. These are the only values needed by this calculator to give you the point estimate statistics. Once you’ve verified that they’re all correct, you can select the most accurate value. Upgrade to Math Mastery. What would happen to the confidence interval if you wanted to be 95 percent certain of it? After entering all of the required values, the calculator will generate a number of results including the Best Point Estimation, the Maximum Likelihood Estimation, the Laplace Estimation, Jeffrey’s Estimation, and the Wilson Estimation. To illustrate this, let’s work with an example. This online population mean calculator helps you to find the mean for the given set of data of a group characteristics. From their sample, they estimate the sample mean to be 70mmHg and the sample standard deviation to be 8mmHg. Unbiased estimation of standard deviation however, is highly involved and varies depending on distribution. Biostatistics: A Foundation for Analysis in the Health Sciences. 
The arithmetic mean is a single value meant to “sum up” a data set. Show Instructions. Unbiased estimation of standard deviation however, is highly involved and varies depending on distribution. After calculating all of the four values manually, you can check the accuracy of your calculation using the online calculator. For the given set of values, the calculator will find their variance (either sample or population), with steps shown. The above discussion suggests the sample mean, ¯ X, is often a reasonable point estimator for the mean. How to Calculate Minkowski Distance in R (With Examples). The population standard deviation measures the variability of data in a population. The sample mean (̄x) is a point estimate of the population mean, μ; The sample variance (s 2 is a point estimate of the population variance (σ 2). In the example we used, this refers to the number of tails you got after tossing the coin for a certain number of times. However, there are several ways to calculate the point estimate of a population proportion, including: To find the best point estimate, simply enter in the values for the number of successes, number of trials, and confidence level in the boxes below and then click the “Calculate” button. Now, suppose that we would like to estimate the variance of a distribution σ2. A point estimate of the mean of a population is determined by calculating the mean of a sample drawn from the population. Daniel WW (1999). Arithmetic Mean For Samples And Populations. Similarly, a sample proportion can be used as a point estimate of a population proportion. Note that a Finite Population Correction has been applied to the sample size formula. In general, you can skip parentheses, but be very careful: e^3x is e^3x, and e^(3x) is e^(3x). by Marco Taboga, PhD. We can't directly calculate the population average because we only have a sample, so we have to estimate it. The point estimate refers to the probability of getting one of the results. 
computing the point estimate of a population mean example: estimating a population proportion calculator: how to find sample size needed for confidence interval: the point estimate of the population mean is: 68 confidence interval calculator: critical value confidence level calculator: finding confidence interval without standard deviation Learn more. Sample/Population Variance Calculator. Similarly, a sample proportion can be used as a point estimate of a population proportion. Your email address will not be published. Obtaining a point estimate of a population parameter is rather easy: just use the corresponding sample statistic. A point estimate gives statisticians a single value as the estimate of a given population parameter. Similarly, the sample proportion p is a point estimate of the population proportion p when binomial modeling is involved. About Population Variance Calculator . For example, the sample mean is a point estimate of the population mean. Apart from learning how to use this point estimate calculator, we’ll also learn how to find point estimate manually, and other relevant information. If the value is 0.5 < MLE < 0.9, select the Maximum Likelihood Estimation as this is the most accurate. (2) Unbiased. Assuming 0 < σ2 < ∞, by definition σ2 = E[(X − μ)2]. This is the average of the distances from each data point in the population to the mean square. Calculate and interpret a 98% confidence interval for the mean number of hours practiced by the dancers at the competition. Point Estimate Calculator This point estimate calculator can help you quickly and easily determine the most suitable point estimate according to the size of the sample, number of … These are the Maximum Likelihood Estimation formula or MLE, the Wilson Estimation formula, the Laplace Estimation formula, and the Jeffrey Estimation formula. 
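The selection rules can be written as a small decision function. This is an assumption about how such a calculator might implement them (the function name is mine):

```python
def best_point_estimate(successes, trials, z=1.96):
    # Apply the MLE-based rule of thumb for picking a proportion estimate.
    s, t = successes, trials
    mle = s / t
    laplace = (s + 1) / (t + 2)
    jeffrey = (s + 0.5) / (t + 1)
    wilson = (s + z**2 / 2) / (t + z**2)
    if mle <= 0.5:
        return wilson                  # MLE ≤ 0.5: Wilson
    if mle < 0.9:
        return mle                     # 0.5 < MLE < 0.9: maximum likelihood
    return min(laplace, jeffrey)       # 0.9 < MLE: smaller of Laplace/Jeffrey

print(best_point_estimate(7, 10))  # → 0.7 (the MLE wins, since 0.5 < 0.7 < 0.9)
```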
To summarize: a point estimate represents our "best guess" of a population parameter. It produces a single value, while an interval estimate produces a range of values, and the two are complementary. Saying you are 90 percent certain that the true population mean of football player weights is between 192 and 204 pounds is just the interval 198 ± 6 pounds built around the point estimate of 198 pounds. More formally, the occurrence of the estimate is a result of the application of the point estimator to a sample data set.
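Putting the basic point estimates together on a concrete sample (the numbers below are invented for illustration):

```python
from statistics import mean, stdev, variance

sample = [12.1, 13.4, 11.8, 14.0, 12.7, 13.3]   # invented data

x_bar = mean(sample)      # point estimate of the population mean μ
s2 = variance(sample)     # corrected (n − 1) point estimate of σ²
s = stdev(sample)         # "sample standard deviation", estimate of σ

believers, surveyed = 14, 40                    # invented Bigfoot survey
p_hat = believers / surveyed                    # point estimate of p
```

Python's `statistics.variance` and `statistics.stdev` use the n − 1 divisor, i.e. exactly the "corrected" estimators discussed above.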
http://electronics.stackexchange.com/questions/53664/how-can-i-drive-12v-leds-from-my-arduino-nano
# How can I drive 12v LEDs from my Arduino Nano?
## Background
I am building a scale RC helicopter and ordered a bunch of 3mm LEDs to give it some nice navigation and strobe lights which I plan to drive with an Arduino Nano.
Problem is that I goofed and didn't notice that the LEDs are 12v until they arrived. I know that the Arduino Nano can accept 12V on pin 30, but I think the voltage output on any of the "D" or "A" pins is always 5v (correct??).
## Question
If the voltage output of the Arduino pins is 5v, then how can I drive the 12v LEDs using the Arduino, or is it impossible?
You cannot directly drive 12V LEDs from the Arduino. If driving only one LED, use a transistor; otherwise use a ULN2003 IC to drive multiple LEDs.
Better yet, order normal LEDs. You will save area on your RC device.
Here is a video that will help you use the ULN2003. In the video the ULN2003 drives relays, but you can replace them with LEDs.
Using a transistor:
Using ULN2003:
I'm not aware of any individual LED which is actually a 12V device. Usually, a "12V LED" is an LED plus some circuitry (most usually just a resistor) designed to make it trivial to drive by applying only 12V. Can you locate this resistor, remove it, and replace it with a wire, or maybe a $0\Omega$ surface mount "resistor" designed to do exactly this?
Once that's done, it will be an ordinary LED, and you can drive it like any other. I'd explain in more detail, but seeing that driving LEDs is an especially common application for an Arduino, I doubt it's necessary.
Some LEDs have built-in resistors. I have a bunch of LEDs from Agilent with integral resistors; they are built into the body of the LED and can't be removed. Here is such a LED. – Chetan Bhargava Jan 10 '13 at 21:18
Using Ohm's law:
Assuming the LEDs are set for I = 20 mA and have Vf = 2.4 to 3.2 V (you didn't provide a color, but the voltage difference is minor), at Vs = 12 V the LEDs would have a built-in resistor close to 450-480 Ω, from R = (Vs − Vf) / I.
If you use that same resistor with a lower input voltage (the 5 V you mentioned), solve for I instead: I = (Vs − Vf) / R. Using average numbers, I = (5 − 3) / 450 ≈ 0.0044 A, or 4.4 mA.
So driven from the 5 V Arduino output, the LEDs would run at about 4.4 mA. They won't be very bright, but they should work.
The other option is to replace the resistor if you can.
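The arithmetic in that answer is easy to sanity-check in a couple of lines (values taken from the answer's assumptions, i.e. Vf ≈ 3 V at 20 mA, not from a datasheet):

```python
def led_current(v_supply, v_forward, resistance):
    # Ohm's law across the series resistor: I = (Vs - Vf) / R
    return (v_supply - v_forward) / resistance

r = (12 - 3.0) * 1000 / 20        # resistor implied by a 12 V, 20 mA LED: 450 Ω
i_at_5v = led_current(5, 3.0, r)  # re-driving the same LED from 5 V
print(r, round(i_at_5v * 1000, 1))  # → 450.0 4.4
```

So the same "12 V LED" on a 5 V pin draws roughly 4.4 mA, matching the estimate above.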
It might be too complicated for use in this situation, but in theory you could use this circuit I came up with a while back:
It was intended to boost 3.3 volt supplies to run white LEDs, but could also boost 5 volts to run a "12 volt" LED. A1, A2, A3 are sections of a hex inverting schmitt trigger like the 74HC14. Drive the end of R2 from a microcontroller pin to turn the LED on and off.
https://ncna20.kattis.com/problems/ncna20.digitalspeedometer
2020 North Central NA Regional Contest
#### Start
2021-02-27 10:00 AKST
#### End
2021-02-27 15:00 AKST
# Problem H: Digital Speedometer
A digital speedometer shows a vehicle’s speed as integer miles per hour. There are occasions when the sensed speed varies between two integer values, such as during cruise control. Using a single threshold to round between adjacent integers often makes the display toggle rapidly between the two integers, which is distracting to the driver.
Your team must implement a smoothing technique for the display using separate rising and falling thresholds ($t_r$ and $t_f$, $t_f < t_r$, respectively). See Figure 1 for a graphical depiction of the Sample Input for use with the following rules.
Each sensed speed, $s$, falls between two adjacent integers $i$ and $j$, $i \le s < j$, where $j = i + 1$. When displaying the sensed speed $s$ as an integer:
• When $s$ falls between $i$ and $i+t_f$, $s$ is displayed as $i$.
• When $s$ falls between $i+t_r$ and $j$, $s$ is displayed as $j$.
• When $s$ falls between $i+t_f$ and $i+t_r$, $s$ is displayed as $i$ if the most recent preceding value for $s$ outside of range $[i+t_f, i+t_r]$ is less than $i+t_r$, and $s$ is displayed as $j$ if the most recent preceding value for $s$ outside of range $[i+t_f, i+t_r]$ is greater than $i+t_r$.
• Any sensed speed, $0 < s < 1$, must display as $1$ because any non-zero speed, no matter how small, must display as non-zero to indicate that the vehicle is in motion.
## Input
The first line of input contains $t_f$, the falling threshold. The second line of input contains $t_r$, the rising threshold. The speed sensor reports $s$ in increments of $0.1$ mph. The thresholds are always set halfway between speed increments. All remaining lines until end-of-file are successive decimal speeds, $s$, in miles per hour, one speed per line. The third line of input, which is the first measured speed, will always be $0$. There are at most $1000$ observed speeds $s$ in input.
$0 < t_f, t_r < 1; \ \ \ \ t_f < t_r; \ \ \ \ 0 \le s \le 120$
## Output
Output is the list of speeds, one speed per line, smoothed to integer values appropriate to $t_f$ and $t_r$.
## Sample Explanation
| Input | Output | Explanation |
| --- | --- | --- |
| 0.25 | | Value of $t_f$. |
| 0.75 | | Value of $t_r$. |
| 0 | 0 | Initial input. |
| 2.0 | 2 | Input greater than $0$, below threshold of $2.25$. |
| 5.7 | 5 | Input greater than $2.0$, in threshold range. |
| 5.8 | 6 | Input greater than $2.0$, exceeds upper threshold of $5.75$. |
| 5.7 | 6 | Input less than $5.8$, in threshold range. |
| 5.2 | 5 | Input less than $5.8$, below threshold of $5.25$. |
| 5.7 | 5 | Input greater than $5.2$, in threshold range. |
| 0.8 | 1 | Input greater than $0$ and less than $1$. |
| 0.2 | 1 | Input greater than $0$ and less than $1$. |
## Sample Input 1
0.25
0.75
0
2.0
5.7
5.8
5.7
5.2
5.7
0.8
0.2
## Sample Output 1
0
2
5
6
6
5
5
1
1
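One way to implement these rules is sketched below (an illustrative Python solution, not an official one; it scans backwards for the most recent out-of-band reading, which is fine for the stated limit of 1000 speeds):

```python
def smooth(tf, tr, speeds):
    """Display each sensed speed as an integer using the hysteresis rules."""
    out = []
    for k, s in enumerate(speeds):
        if 0 < s < 1:
            out.append(1)            # any non-zero speed must display as moving
            continue
        i = int(s)                   # floor, since speeds are non-negative
        lo, hi = i + tf, i + tr
        if s < lo:
            out.append(i)            # below the falling threshold
        elif s > hi:
            out.append(i + 1)        # above the rising threshold
        else:
            # in the hysteresis band: round toward the most recent
            # preceding reading that lies outside [lo, hi]
            prev = next(p for p in reversed(speeds[:k]) if not (lo <= p <= hi))
            out.append(i if prev < lo else i + 1)
    return out

print(smooth(0.25, 0.75, [0, 2.0, 5.7, 5.8, 5.7, 5.2, 5.7, 0.8, 0.2]))
# [0, 2, 5, 6, 6, 5, 5, 1, 1]
```

The first measured speed is guaranteed to be $0$, which lies outside every hysteresis band (since $t_f > 0$), so the backward scan always finds a value.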
http://libros.duhnnae.com/2017/aug7/150319064558-Langevin-agglomeration-of-nanoparticles-interacting-via-a-central-potential-Condensed-Matter-Statistical-Mechanics.php
Langevin agglomeration of nanoparticles interacting via a central potential - Condensed Matter > Statistical Mechanics
Abstract: Nanoparticle agglomeration in a quiescent fluid is simulated by solving the Langevin equations of motion of a set of interacting monomers in the continuum regime. Monomers interact via a radial, rapidly decaying intermonomer potential. The morphology of generated clusters is analyzed through their fractal dimension $d_f$ and the cluster coordination number. The time evolution of the cluster fractal dimension is linked to the dynamics of two populations, small ($k \le 15$) and large ($k > 15$) clusters. At early times monomer-cluster agglomeration is the dominant agglomeration mechanism ($d_f = 2.25$), whereas at late times cluster-cluster agglomeration dominates ($d_f = 1.56$). Clusters are found to be compact (mean coordination number $\sim 5$), tubular, and elongated. The local, compact structure of the aggregates is attributed to the isotropy of the interaction potential, which allows rearrangement of bonded monomers, whereas the large-scale tubular structure is attributed to its relatively short attractive range. The cluster translational diffusion coefficient is determined to be inversely proportional to the cluster mass and the per-unit-mass friction coefficient of an isolated monomer, a consequence of the neglect of monomer shielding in a cluster. Clusters generated by unshielded Langevin equations are referred to as *ideal clusters* because the surface area accessible to the underlying fluid is found to be the sum of the accessible surface areas of the isolated monomers. Similarly, ideal clusters do not have, on average, a preferential orientation. The decrease of the numbers of clusters with time and a few collision kernel elements are evaluated and compared to analytical expressions.
Authors: Lorenzo Isella, Yannis Drossinos
Source: https://arxiv.org/
http://www.scientificlib.com/en/Mathematics/HW/BaileyPair.html
# Bailey pair
In mathematics, a Bailey pair is a pair of sequences satisfying certain relations, and a Bailey chain is a sequence of Bailey pairs. Bailey pairs were introduced by W. N. Bailey (1947, 1948) while studying Rogers' (1917) second proof of the Rogers–Ramanujan identities, and Bailey chains were introduced by Andrews (1984).
Definition
The q-Pochhammer symbols $$(a;q)_n$$ are defined as:
$$(a;q)_n = \prod_{0\le j<n}(1-aq^j) = (1-a)(1-aq)\cdots(1-aq^{n-1}).$$
A pair of sequences $$(\alpha_n, \beta_n)$$ is called a Bailey pair if they are related by
$$\beta_n=\sum_{r=0}^n\frac{\alpha_r}{(q;q)_{n-r}(aq;q)_{n+r}}$$
or equivalently
$$\alpha_n = (1-aq^{2n})\sum_{j=0}^n\frac{(aq;q)_{n+j-1}(-1)^{n-j}q^{n-j\choose 2}\beta_j}{(q;q)_{n-j}}.$$
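The second formula inverts the first, which can be sanity-checked numerically (a sketch in Python with floating-point arithmetic; the negative index $$(aq;q)_{-1}$$ that appears at $$n = j = 0$$ is handled via the standard extension $$(x;q)_{-n} = 1/(xq^{-n};q)_n$$):

```python
from math import comb

def qpoch(x, q, n):
    """q-Pochhammer symbol (x; q)_n, extended to negative n."""
    if n < 0:
        return 1.0 / qpoch(x * q**n, q, -n)
    p = 1.0
    for j in range(n):
        p *= 1 - x * q**j
    return p

def beta_from_alpha(alpha, a, q):
    """beta_n from the defining relation of a Bailey pair."""
    return [sum(alpha[r] / (qpoch(q, q, n - r) * qpoch(a * q, q, n + r))
                for r in range(n + 1)) for n in range(len(alpha))]

def alpha_from_beta(beta, a, q):
    """alpha_n from the stated inverse relation."""
    return [(1 - a * q**(2 * n)) *
            sum(qpoch(a * q, q, n + j - 1) * (-1)**(n - j)
                * q**comb(n - j, 2) * beta[j] / qpoch(q, q, n - j)
                for j in range(n + 1))
            for n in range(len(beta))]

alpha = [1.0, -0.5, 0.25, 2.0]          # an arbitrary test sequence
beta = beta_from_alpha(alpha, a=0.7, q=0.3)
recovered = alpha_from_beta(beta, a=0.7, q=0.3)
assert all(abs(x - y) < 1e-9 for x, y in zip(alpha, recovered))
```

Since $$(q;q)_0 = (aq;q)_0 = 1$$, the relations give $$\beta_0 = \alpha_0$$, which the code reproduces.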
Bailey's lemma
Bailey's lemma states that if $$(\alpha_n, \beta_n)$$ is a Bailey pair, then so is $$(\alpha^\prime_n, \beta^\prime_n)$$ where
$$\alpha^\prime_n= \frac{(\rho_1;q)_n(\rho_2;q)_n(aq/\rho_1\rho_2)^n\alpha_n}{(aq/\rho_1;q)_n(aq/\rho_2;q)_n}$$
$$\beta^\prime_n = \sum_{j\ge0}\frac{(\rho_1;q)_j(\rho_2;q)_j(aq/\rho_1\rho_2;q)_{n-j}(aq/\rho_1\rho_2)^j\beta_j}{(q;q)_{n-j}(aq/\rho_1;q)_n(aq/\rho_2;q)_n}.$$
In other words, given one Bailey pair, one can construct a second using the formulas above. This process can be iterated to produce an infinite sequence of Bailey pairs, called a Bailey chain.
Examples
An example of a Bailey pair is given by (Andrews, Askey & Roy 1999, p. 590)
$$\alpha_n = q^{n^2+n}\sum_{j=-n}^n(-1)^jq^{-j^2}, \quad \beta_n = \frac{(-q)^n}{(q^2;q^2)_n}.$$
L. J. Slater (1952) gave a list of 130 examples related to Bailey pairs.
References
Andrews, George E. (1984), "Multiple series Rogers-Ramanujan type identities", Pacific Journal of Mathematics 114 (2): 267–283, doi:10.2140/pjm.1984.114.267, ISSN 0030-8730, MR 757501
Andrews, George E.; Askey, Richard; Roy, Ranjan (1999), Special functions, Encyclopedia of Mathematics and its Applications 71, Cambridge University Press, ISBN 978-0-521-62321-6, MR 1688958
Bailey, W. N. (1947), "Some identities in combinatory analysis", Proceedings of the London Mathematical Society, Second series 49 (6): 421–425, doi:10.1112/plms/s2-49.6.421, ISSN 0024-6115, MR 0022816
Bailey, W. N. (1948), "Identities of the Rogers-Ramanujan Type", Proc. London Math. Soc., s2-50 (1): 1–10, doi:10.1112/plms/s2-50.1.1
Paule, Peter, The Concept of Bailey Chains (PDF)
Slater, L. J. (1952), "Further identities of the Rogers-Ramanujan type", Proceedings of the London Mathematical Society, Second series 54 (2): 147–167, doi:10.1112/plms/s2-54.2.147, ISSN 0024-6115, MR 0049225
Warnaar, S. Ole (2001), "50 years of Bailey's lemma", Algebraic combinatorics and applications (Gössweinstein, 1999) (PDF), Berlin, New York: Springer-Verlag, pp. 333–347, MR 1851961
https://www.vedantu.com/question-answer/the-heap-of-wheat-is-in-the-form-of-a-cone-whose-class-10-maths-cbse-609eab7b807ed97f73ea3f1f
# The heap of wheat is in the form of a cone whose diameter is $48m$ and height is $7m$. Find the volume of the heap. If it is covered by canvas to protect it from rain. Find the cost of the canvas required. Given that the rate of canvas is $Rs.7\,per\,100c{m^2}$.
Hint: Area can be defined as the space occupied by the flat surface of an object; it is the number of unit squares enclosed by the figure. Perimeter is the total length of the sides of a two-dimensional shape; perimeter is an outer property, while area is an inner one. Volume is the capacity held by a three-dimensional object.
As we know that
$\therefore V = \dfrac{1}{3}\pi {r^2}h$
Here
V=volume
r=radius of the base
h=height
Complete step-by-step solution:
Given,
Diameter of heap,$d = 48m$
Radius of heap, $r = \dfrac{{48}}{2}m$
Radius of heap, $r = 24m$
Height of heap, $h = 7m$
Rate of canvas $Rs.7\,per\,100c{m^2}$
Volume=?
Slant height of heap, $l = ?$
As we know that
$\therefore l = \sqrt {{h^2} + {r^2}}$
Put the value
$\Rightarrow l = \sqrt {{7^2} + {{24}^2}}$
$\Rightarrow l = \sqrt {49 + 576}$
$\Rightarrow l = \sqrt {625}$
$\Rightarrow l = 25m$
Now the volume of heap
As we know that
$\therefore V = \dfrac{1}{3}\pi {r^2}h$
Put the value
$\Rightarrow V = \dfrac{1}{3} \times \dfrac{{22}}{7} \times {(24)^2} \times 7$
Simplify
$\Rightarrow V = 22 \times 24 \times 8$
$\Rightarrow V = 4224{m^3}$
Now area covered by canvas
As we know that
$\therefore A = \pi rl$
Put the value
$\Rightarrow A = \dfrac{{22}}{7} \times 24 \times 25$
Simplify
$\Rightarrow A = \dfrac{{13200}}{7}$
$\Rightarrow A = 1885.71{m^2}$
Now
The price of canvas. Since the rate is given per $100c{m^2}$, first convert the area using $1{m^2} = 10000c{m^2}$:
$\Rightarrow A = 1885.71 \times 10000 = 18857100c{m^2}$
As we know that
$\therefore \text{Price of canvas} = \dfrac{\text{Area covered}}{{100}} \times \text{Rate of canvas}$
Put the value
$\Rightarrow \text{Price of canvas} = \dfrac{{18857100}}{{100}} \times 7$
$\Rightarrow \text{Price of canvas} = Rs.1320000$
Note: The slant height of an object (such as a frustum, cone, or pyramid) is the distance measured along a lateral face from the base to the apex along the "center" of the face. As we know, the slant height is given by
$\therefore l = \sqrt {{h^2} + {r^2}}$
Here
h=height of cone
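The computations above can be double-checked with a short script (a sketch using Python's exact fractions and the same $\pi \approx 22/7$ approximation as the solution):

```python
from fractions import Fraction

pi = Fraction(22, 7)                  # the approximation used in the solution
r, h = 24, 7                          # radius and height of the heap, in metres
l = round((h**2 + r**2) ** 0.5)       # slant height: sqrt(49 + 576) = 25 m
volume = pi * r**2 * h / 3            # V = (1/3) * pi * r^2 * h
area = pi * r * l                     # curved surface area A = pi * r * l, in m^2
cost = area * 10000 / 100 * 7         # m^2 -> cm^2, then Rs. 7 per 100 cm^2
print(volume)        # 4224 (m^3)
print(float(area))   # 1885.714... (m^2)
print(cost)          # 1320000
```

Working in exact fractions shows that the rounding in the written solution cancels: $\dfrac{13200}{7} \times 10000 \div 100 \times 7 = 1320000$ exactly.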
https://online.stat.psu.edu/stat509/book/export/html/649
# 4.1 - Random Error
4.1 - Random Error
Random error (variability, imprecision) can be overcome by increasing the sample size. This is illustrated in this section via hypothesis testing and confidence intervals, two accepted forms of statistical inference.
## Review of Hypothesis testing
In hypothesis testing, a null hypothesis and an alternative hypothesis are formed. Typically, the null hypothesis reflects the lack of an effect and the alternative hypothesis reflects the presence of an effect (supporting the research hypothesis). The investigator needs to have sufficient evidence, based on data collected in a study, to reject the null hypothesis in favor of the alternative hypothesis.
Suppose an investigator is conducting a two-armed clinical trial in which subjects are randomized to group A or group B, and the outcome of interest is the change in serum cholesterol after 8 weeks. Because the outcome is measured on a continuous scale, the hypotheses are stated as:
$$H_0\colon \mu_A = \mu_B$$ versus $$H_1\colon \mu_A \ne \mu_B$$
where $$\mu_{A}$$ and $$\mu_{B}$$ represent the population means for groups A and B, respectively.
The alternative hypothesis of $$H_1\colon \mu_{A} \ne \mu_{B}$$ is labeled a “two-sided alternative” because it does not indicate whether A is better than B or vice versa. Rather, it just indicates that A and B are different. A “one-sided alternative” of $$H_1\colon \mu_{A}< \mu_{B}$$ (or $$H_1\colon \mu_{A} > \mu_{B}$$) is possible, but it is more conservative to use the two-sided alternative.
The investigator conducts a study to test his hypothesis with 40 subjects in each of group A and group B $$\left(n_{A} = 40 \text{ and } n_{B} = 40\right)$$. The investigator estimates the population means via the sample means (labeled $$\bar{x}_A$$ and $$\bar{x}_B$$, respectively). Suppose the average changes that we observed are $$\bar{x}_A = 7.3$$ and $$\bar{x}_B = 4.8 \text{ mg/dl}$$. Do these data provide enough evidence to reject the null hypothesis that the average changes in the two populations are equal? (The question cannot be answered yet. We do not know if this is a statistically significant difference!)
If the data approximately follow a normal distribution or are from large enough samples, then a two-sample t test is appropriate for comparing groups A and B where:
$$t = (\bar{x}_A - \bar{x}_B) / (\text{standard error of } \bar{x}_A - \bar{x}_B)$$.
We can think of the two-sample t test as representing a signal-to-noise ratio and ask whether the signal is large enough relative to the noise. In the example, $$\bar{x}_A = 7.3$$ and $$\bar{x}_B = 4.8 \text{ mg/dl}$$. If the standard error of $$\bar{x}_A - \bar{x}_B$$ is 1.2 mg/dl, then:
$$t_{obs} = (7.3 - 4.8) / 1.2 = 2.1$$
But what does this value mean?
Each t value has associated probabilities. In this case, we want to know the probability of observing a t value as extreme or more extreme than the t value actually observed, if the null hypothesis is true. This is the p-value. At the completion of the study, a statistical test is performed and its corresponding p-value calculated. If the p-value $$< \alpha$$, then $$H_0$$ is rejected in favor of $$H_1$$.
Two types of errors can be made in testing hypotheses: rejecting the null hypothesis when it is true or failing to reject the null hypothesis when it is false. The probability of making a Type I error, represented by $$\alpha$$ (the significance level), is determined by the investigator prior to the onset of the study. Typically, $$\alpha$$ is set at a low value, say 0.01 or 0.05.
The four possible outcomes are summarized in the following table.

| Decision | $$H_0$$ is true | $$H_0$$ is false |
| --- | --- | --- |
| Reject $$H_0$$ (conclude $$H_a$$) | Type I error | Correct decision |
| Fail to reject $$H_0$$ | Correct decision | Type II error |
In our example, the p-value (the probability that $$|t| > 2.1$$) is $$0.04$$.
Thus, the null hypothesis of equal mean change in the two populations is rejected at the 0.05 significance level. The treatments were different in the mean change in serum cholesterol at 8 weeks.
Note that $$\beta$$ (the probability of not rejecting $$H_0$$ when it is false) did not play a role in the test of hypothesis.
The importance of $$\beta$$ came into play during the design phase when the investigator attempted to determine the appropriate sample size for the study. To do so, the investigator had to decide on the effect size of interest, i.e., a clinically meaningful difference between groups A and B in the average change in cholesterol at 8 weeks. The statistician cannot determine this but can help the researcher decide whether he has the resources to have a reasonable chance of observing the desired effect or should rethink his proposed study design.
The effect size is expressed as: $$\delta = \mu_{A} - \mu_{B}$$.
The sample size should be determined such that there exists good statistical power $$\left(\beta = 0.1\text{ or }0.2\right)$$ for detecting this effect size with a test of hypothesis that has significance level α.
A sample size formula that can be used for a two-sided, two-sample test with $$\alpha = 0.05$$ and $$\beta = 0.1$$ (90% statistical power) is:
$$n_A = n_B = 21\sigma^{2}/\delta^{2}$$
where $$\sigma$$ is the population standard deviation (more detailed information will be discussed in a later lesson).
Note that the sample size increases as σ increases (noise increases).
Note that the sample size increases as $$\delta$$ decreases (effect size decreases).
In the serum cholesterol example, the investigator had selected a meaningful difference, $$\delta = 3.0 \text{ mg/dl}$$ and located a similar study in the literature that reported $$\sigma = 4.0 \text{ mg/dl}$$. Then:
$$n_A = n_B = 21\sigma^{2}/\delta^{2} = (21 \times 16) / 9 = 37$$
Thus, the investigator randomized 40 subjects to each of group A and group B to assure 90% power for detecting an effect size that would have clinical relevance.
Many studies suffer from low statistical power (large Type II error) because the investigators do not perform sample size calculations.
If a study has very large sample sizes, then it may yield a statistically significant result without any clinical meaning. Suppose in the serum cholesterol example that $$\bar{x}_A = 7.3$$ and $$\bar{x}_B = 7.1 \text {mg/dl}$$, with $$n_{A} = n_{B} = 5,000$$. The two-sample t test may yield a p-value = 0.001, but $$\bar{x}_A - \bar{x}_B = 7.3 - 7.1 = 0.2 \text { mg/dl}$$ is not clinically interesting.
## Confidence Intervals
A confidence interval provides a plausible range of values for a population measure. Instead of just reporting $$\bar{x}_A - \bar{x}_B$$ as the sample estimate of $$\mu_{A} - \mu_{B}$$, a range of values can be reported using a confidence interval.
The confidence interval is constructed in a manner such that it provides a high percentage of “confidence” (95% is commonly used) that the true value of $$\mu_{A} - \mu_{B}$$ lies within it.
If the data approximately follow a bell-shaped normal distribution, then a 95% confidence interval for $$\mu_{A} - \mu_{B}$$ is
$$(\bar{x}_A - \bar{x}_B) \pm \left \{1.96 \times (\text{standard error of } \bar{x}_A - \bar{x}_B)\right \}$$
In the serum cholesterol example, $$(\bar{x}_A - \bar{x}_B) = 7.3 - 4.8 = 2.5 \text{mg/dl}$$ and the standard error = $$1.2 \text{mg/dl}$$. Thus, the approximate 95% confidence interval is:
$$2.5 \pm (1.96 \times 1.2) = \left [ 0.1, 4.9 \right ]$$
Note that the 95% confidence interval does not contain 0, which is consistent with the results of the 0.05-level hypothesis test (p-value = 0.04). 'No difference' is not a plausible value for the difference between the treatments.
Notice also that the length of the confidence interval depends on the standard error. The standard error decreases as the sample size increases, so the confidence interval gets narrower as the sample size increases (hence, greater precision).
A confidence interval is actually more informative than testing a hypothesis. Not only does it indicate whether $$H_0$$ can be rejected, but it also provides a plausible range of values for the population measure. Many of the major medical journals request the inclusion of confidence intervals within submitted reports and published articles.
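The numbers in the serum cholesterol example are easy to reproduce (a sketch in Python; it uses the standard normal CDF in place of the t distribution, which is a reasonable approximation at these sample sizes):

```python
import math

xbar_a, xbar_b = 7.3, 4.8          # observed mean changes, mg/dl
se = 1.2                           # standard error of the difference
t = (xbar_a - xbar_b) / se         # the signal-to-noise ratio
# two-sided p-value via the standard normal CDF
p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
ci = (xbar_a - xbar_b - 1.96 * se, xbar_a - xbar_b + 1.96 * se)
print(round(t, 2))                     # 2.08
print(round(p, 2))                     # 0.04
print(tuple(round(v, 2) for v in ci))  # (0.15, 4.85)
```

As in the text, the 95% interval excludes 0, consistent with rejection at the 0.05 level.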
https://www.zbmath.org/?q=an%3A0766.05063
# zbMATH — the first resource for mathematics
Covering the cliques of a graph with vertices. (English) Zbl 0766.05063
Here all graphs have order $$n$$ and isolated vertices are not counted as cliques. The central problem studied is that of estimating the cardinality $$\tau_c(G)$$ of the smallest set that shares a vertex with each clique of $$G$$. Among other results it is shown that $$\tau_c(G)\leq n-\sqrt{2n}+{3\over 2}$$ and a linear time (in the number of edges) algorithm for achieving this bound is proposed. Four associated problems are presented. For example, it is asked if $$\tau_c(G)\leq n-r(n)$$ for all graphs $$G$$, where $$r(n)$$ is the largest integer such that every triangle-free graph contains an independent set of $$r(n)$$ vertices. It is also asked how large a triangle-free induced subgraph a $$K_4$$-free graph $$G$$ must contain.
##### MSC:
05C70 Edge subsets with special properties (factorization, matching, partitioning, covering and packing, etc.) 05C35 Extremal problems in graph theory 05C85 Graph algorithms (graph-theoretic aspects)
##### Keywords:
covering; cliques; linear time algorithm; triangle-free graph
Full Text:
##### References:
[1] Aigner, M.; Andreae, T., Vertex-sets that meet all maximal cliques of a graph, (1986), manuscript
[2] Ajtai, M.; Komlós, J.; Szemerédi, E., A note on Ramsey numbers, J. combin. theory ser. A, 29, 354-360, (1980) · Zbl 0455.05045
[3] Andreae, T.; Schughart, M.; Tuza, Zs., Clique-transversal sets of line graphs and complements of line graphs, Discrete math., 88, 11-20, (1991) · Zbl 0734.05077
[4] Brooks, R.L., On colouring the nodes of a network, Proc. Cambridge philos. soc., 37, 194-197, (1941) · Zbl 0027.26403
[5] Caro, Y.; Tuza, Zs., Improved lower bounds on k-independence, J. graph theory, 15, 99-107, (1991) · Zbl 0753.68079
[6] Erdős, P., Graph theory and probability II, Canad. J. math., 13, 346-352, (1961) · Zbl 0097.39102
[7] Erdős, P.; Szekeres, G., A combinatorial problem in geometry, Compositio math., 2, 463-470, (1935) · JFM 61.0651.04
[8] Lonc, Z.; Rival, I., Chains, antichains and fibres, J. combin. theory ser. A, 44, 207-228, (1987) · Zbl 0637.06001
[9] Poljak, S., A note on stable sets and colourings of graphs, Comment. math. univ. carolin., 15, 307-309, (1974) · Zbl 0284.05105
[10] Sands, B., Private communication, (July 1989)
[11] Tuza, Zs., Covering all cliques of a graph, Discrete math., 86, 117-126, (1990) · Zbl 0744.05040
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
https://stats.stackexchange.com/questions/447278/what-test-to-use-when-the-dependent-variable-is-an-integer-but-independent-varia
# What test to use when the dependent variable is an integer but independent variable is categorical? (Data non-normal, unequal variances)
My independent variable is species (categorical).
My dependent variable is the total mass of what each species ate (in grams, numerical).
I want to test the difference in mass values between species.
My data are non-normal, so I initially thought I should run a Kruskal-Wallis test. However, after reading about Kruskal-Wallis tests here, I realized that I shouldn't because I have unequal variances. The above link suggests a Welch's t-test when you don't have equal variances, but I have categorical data for my independent variable so I can't do that.
I have four species, with sample sizes of 20, 20, 30 and 30.
Any suggestions would be appreciated.
• Welch's ANOVA would work on your data, but it also assumes normality. Please edit your post to include the variances for each species, and histograms for each species – Robert Long Feb 1 at 8:51
• You don't require identical spread in the data for a Kruskal-Wallis (many books say so, but its not necessary). You require identical spread under the null (for exchangeability), but the alternative can be considerably more general without harming the behavior of the test. Consider, for example, if you had data from a collection of lognormal distributions with common shape parameter ($\sigma$), but potentially differing scales. Then the Kruskal-Wallis statistic is the same whether you work on the original scale or on the log-scale (now with constant spread across the groups, ... ctd – Glen_b -Reinstate Monica Feb 1 at 23:53
• ctd ... but potentially differing location) It would work as well for any other strictly monotonic increasing transformation (leaving the pattern of ranks unaltered). There are somewhat more general choices (than a class of alternatives which possess some monotonic transformation to constant spreads) which make sense with the test. You can choose to impose a stronger assumption such as a pure-location-shift alternative (for example, if you want to start estimating between-group location differences) but it's not inherently part of the test. – Glen_b -Reinstate Monica Feb 1 at 23:57
• In this case, "mass of food eaten" may tend to have spread that increases with increasing mean, but the assumption of identical distributions (including equal spread) under the null may be perfectly tenable (you can't assess it from the data, since you don't know that the null is true). – Glen_b -Reinstate Monica Feb 2 at 0:00
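For reference, the Kruskal–Wallis statistic under discussion reduces to simple rank arithmetic (a minimal sketch with no tie correction; for real data with ties, a corrected implementation such as `scipy.stats.kruskal` is preferable):

```python
def kruskal_h(*groups):
    """Kruskal-Wallis H = 12/(N(N+1)) * sum(R_i^2 / n_i) - 3(N+1)."""
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n_total = len(pooled)
    rank_sums = [0.0] * len(groups)
    for rank, (_, gi) in enumerate(pooled, start=1):
        rank_sums[gi] += rank          # 1-based rank of each observation
    return (12.0 / (n_total * (n_total + 1))
            * sum(rs**2 / len(g) for rs, g in zip(rank_sums, groups))
            - 3 * (n_total + 1))

print(round(kruskal_h([1, 2, 3], [4, 5, 6]), 3))  # 3.857
```

Because the statistic depends only on ranks, it is unchanged by any monotonic transformation of the data, which is the point made in the comments above.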
https://bitbucket.org/ned/coveragepy/src/6c5b9c6986b900dbfbea1d3c339c1684e953b521/doc/faq.rst?at=default
# FAQ and other help
history: 20090613T141800, brand new docs. 20091005T073900, updated for 3.1. 20091127T201500, updated for 3.2. 20110605T175500, add the announcement mailing list. 20121231T104700, Tweak the py3 text.
Q: I use nose to run my tests, and its cover plugin doesn't let me create HTML or XML reports. What should I do?
First run your tests and collect coverage data with nose and its plugin. This will write coverage data into a .coverage file. Then run coverage.py from the :ref:command line <cmd> to create the reports you need from that data.
Q: Why do unexecutable lines show up as executed?
Usually this is because you've updated your code and run coverage on it again without erasing the old data. Coverage records line numbers executed, so the old data may have recorded a line number which has since moved, causing coverage to claim a line has been executed which cannot be.
If you are using the -x command line action, it doesn't erase first by default. Switch to the coverage run command, or use the -e switch to erase all data before starting the next run.
Q: Why do the bodies of functions (or classes) show as executed, but the def lines do not?
This happens because coverage is started after the functions are defined. The definition lines are executed without coverage measurement, then coverage is started, then the function is called. This means the body is measured, but the definition of the function itself is not.
To fix this, start coverage earlier. If you use the :ref:command line <cmd> to run your program with coverage, then your entire program will be monitored. If you are using the :ref:API <api>, you need to call coverage.start() before importing the modules that define your functions.
Q: Does coverage.py work on Python 3.x?
Yes, Python 3 is fully supported.
Q: Isn't coverage testing the best thing ever?
It's good, but it isn't perfect.
Q: Where can I get more help with coverage.py?
You can discuss coverage.py or get help using it on the Testing In Python mailing list.
Bug reports are gladly accepted at the Bitbucket issue tracker.
Announcements of new coverage.py releases are sent to the coveragepy-announce mailing list.
I can be reached in a number of ways, and I'm happy to answer questions about using coverage.py. I'm also available hourly for consultation or custom development.
## History
Coverage.py was originally written by Gareth Rees. Since 2004, Ned Batchelder has extended and maintained it with the help of many others. The :ref:change history <changes> has all the details.
http://misc.daniel-marschall.de/math/immortal/
|
# Immortal numbers
The immortal numbers were defined and researched by Daniel Marschall: they are integers which contain themselves as a suffix when raised to a power.
## 1. Definitions
### 1.1 Simple case (base 10, power 2)
Definition: A number $n \in \mathbb{N}_{0}$ is "immortal" (with power 2 at base 10) if it satisfies the following equation:
$\large n^2 \equiv n \quad ( \bmod \; 10^{\lfloor\log_{10}(n)+1\rfloor} )$
Example: The number 625 is immortal at base 10, because 625 × 625 = 390625, which ends in 625.
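This defining congruence is easy to check directly. A minimal Python sketch (the helper name is ours, not part of the original text):

```python
def is_immortal_10_2(n: int) -> bool:
    """True if n^2 ends in the digits of n, i.e. n^2 ≡ n (mod 10^digits)."""
    digits = len(str(n))  # equals floor(log10(n)) + 1 for n >= 1
    return n * n % 10 ** digits == n

print(is_immortal_10_2(625), is_immortal_10_2(624))
```

Squaring and reducing modulo 10^digits is all that is needed; no string suffix comparison is required.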
### 1.2 General case (base b, power p)
Definition: A number $n \in \mathbb{N}_{0}$ is "immortal" with power $p \in \mathbb{N}\setminus\{0,1\}$ at base $b \in \mathbb{N}\setminus\{0,1\}$ if it satisfies the following equation:
$\large n^p \equiv n \quad ( \bmod \; b^{\lfloor\log_{b}(n)+1\rfloor} )$
Example: The number 2047 ( 3777(8) ) is immortal with power 3 at base 8, because 3777(8)^3 = 77720013777(8)
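The general case only swaps in an arbitrary base and power. A hedged Python sketch (function names are ours):

```python
def digits_in_base(n: int, b: int) -> int:
    """Base-b digit count of n; by convention 0 has one digit."""
    if n == 0:
        return 1
    d = 0
    while n:
        n //= b
        d += 1
    return d

def is_immortal(n: int, p: int, b: int) -> bool:
    """True if n^p ≡ n (mod b^digits): n is a suffix of n^p written in base b."""
    m = b ** digits_in_base(n, b)
    return pow(n, p, m) == n % m

print(is_immortal(2047, 3, 8))  # the octal example above
```

Three-argument `pow` keeps the intermediate values small, so even very large candidates are cheap to test.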
### 1.3 Set of immortal numbers
The set $\small \mathbb{M}_{b}^{p}$ contains all immortal numbers with power $p$ at base $b$.
Definition:
$\large n \in \mathbb{M}_{b}^{p} \; \Leftrightarrow \; n^p \equiv n \; ( \bmod \; b^{\lfloor\log_{b}(n)+1\rfloor} )$
Example: $\small 625 \in \mathbb{M}_{10}^{2}$
### 1.4 Immortality of base/power-tuples
The "immortality" of a $(b,p)$ tuple is defined as the number of immortal numbers divided by the maximum length of these numbers. The higher the chosen maximum length, the more accurate the immortality value. The unit of measurement can optionally be written as $\tfrac{I}{D}$ ("Immortals per Digit"). Since the numbers $\{0,1\}$ are always immortal, they are excluded from this calculation.
Formal definition:
$\large \vert \mathbb{M}_{b}^{p} \vert^\ast := \lim\limits_{L \rightarrow \infty}{\Big[ \frac{1}{L}\cdot\vert\{ n \in \mathbb{M}_{b}^{p} \; \vert \; 1 < n < b^L \}\vert \Big]}$
Example: For base 10, power 3, there are 1176 immortal numbers with lengths of 1..100 digits, excluding 0 and 1. The immortality of $(10,3)$ is therefore approximately $\vert\mathbb{M}^{3}_{10}\vert^\ast \approx \tfrac{1176}{100} = 11.76$
Here you can find a plot that compares the immortality of various $(b,p)$ tuples.
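The immortality value can be approximated by brute force for small maximum lengths $L$ (the 100-digit count quoted above is far beyond this naive scan; the function name is ours):

```python
def immortality_estimate(b: int, p: int, L: int) -> float:
    """Approximate |M_b^p|* with maximum length L: count immortal n with
    1 < n < b**L, divided by L. Naive scan -- only feasible for small L."""
    count = 0
    m = b            # modulus b**digits for the current digit count
    next_len = b     # first n that has one more base-b digit
    for n in range(2, b ** L):
        if n == next_len:
            m *= b
            next_len *= b
        if pow(n, p, m) == n % m:
            count += 1
    return count / L

print(immortality_estimate(10, 3, 2))
```

For base 10, power 3 the immortal numbers below 100 (excluding 0 and 1) are 4, 5, 6, 9, 24, 25, 49, 51, 75, 76 and 99, giving an estimate of 11/2 = 5.5 at $L = 2$.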
### 1.5 Pseudo immortal base/power-tuples
There are base/power-tuples $(b,p)$ where only the numbers $\mathbb{M}_b^p = \{0,1\}$ are immortal. These base/power-tuples are called "pseudo-immortal". An example is $(9,2)$.
Their immortality is zero $\vert\mathbb{M}_{b}^{p}\vert^\ast = 0$ because:
$\vert \{0,1\} \vert^\ast = \lim\limits_{L \rightarrow \infty}{\Big[ \frac{1}{L}\cdot\vert\{ n \in \{0,1\} \; \vert \; 1 < n < b^L \}\vert \Big]} = \lim\limits_{L \rightarrow \infty}{\Big[ \frac{1}{L}\cdot\vert\{ \}\vert \Big]} = 0$
Here you can find a plot that shows the $(b,p)$ values of some pseudo-immortals.
### 1.6 Super immortal numbers
Super immortal numbers are numbers which are immortal to every $(b,p)$ tuple. Only the numbers $\{0,1\}$ satisfy this property (see proof).
$\mathbb{M}_{s} := \bigcap_{b=2}^{\infty}\bigcap_{p=2}^{\infty}\mathbb{M}_{b}^{p} = \{0,1\}$
## 2. Branch graph notation
Immortal numbers can be arranged in a graph with "branches".
In base 10, power 2, things are quite easy: There are only 4 branches 0, 1, 5 and 6. The branches 0 and 1 are trivial branches and have no children.
In branch 5, the immortal numbers are 5, 25, 625, 90625, 890625, 2890625, 12890625, 212890625, 8212890625, ...
In branch 6, the immortal numbers are 6, 76, 376, 9376, 109376, 7109376, 87109376, 787109376, 1787109376, ...
In this case, immortal numbers of base 10, power 2 can be generated by taking a tree node and prepending a single digit. Additionally, the pairs of digits below branches 5 and 6 always add up to 9.
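The prepend-one-digit growth rule can be reproduced in a few lines of Python (a sketch with our own names; the digit count via `str()` assumes the default base 10, and digit 0 is allowed, as in 625 → 0625 → 90625):

```python
def extend_branch(start: int, length: int, b: int = 10) -> int:
    """Grow a base-b, power-2 immortal number to `length` digits by
    prepending one digit at a time."""
    x = start
    for k in range(len(str(start)), length):
        m = b ** (k + 1)
        for d in range(b):
            cand = d * b ** k + x
            if cand * cand % m == cand:   # cand < m, so cand % m == cand
                x = cand
                break
    return x

print(extend_branch(5, 6), extend_branch(6, 6))
```

Starting from branch 5 this walks 5 → 25 → 625 → 0625 → 90625 → 890625, matching the list above.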
However, there are more complex graphs, for example base 10, power 3:
Credits: The graphs were created with mind-map-online.de
## 3. Immortal number search
### 3.1 Base 10, power 2
There has been a huge effort to find immortal numbers with several million digits. A special algorithm, optimized extensively using SSE/MMX technology, has been written to achieve this task.
On July 9th, 2019, the immortal number of base 10, power 2 with 1.1 billion digits was found. (Note: the double-verification started in 2017 is still running.)
Get the tool to calculate these numbers (Source codes written in C)
Credits: The search for giant immortal numbers at base 10, power 2 was only made possible by the users of the matheboard.de thread, who contributed the algorithm for finding base-10-power-2 numbers, and by the contributor of this StackOverflow question, who showed me how to speed up the algorithm using SSE/MMX instructions.
### 3.2 Base b, power p
A rather trivial search algorithm has been developed to find all immortal numbers for bases 2 through 36 and powers 2 through 10, with a maximum length of 100 digits.
Python source code
Get a list of immortal numbers with a maximum length of 100 digits, for powers 2 through 10, with the following base:
02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
## 4. Graphical plots
### 4.1 Immortal number overview
Python source code
Base 10 filtered:
### 4.2 Immortality of various (b,p)-tuples
Python source code
A red x denotes an immortality of zero. Dots denote a non-zero immortality; the bigger the dot, the greater the immortality of $\mathbb{M}_{b}^{p}$ .
Green dots denote an immortality of $\mathbb{N}$.
Blue dots denote an immortality of $\mathbb{R}$.
### 4.3 Pseudo immortal base/power tuples
Python source code
## 5. Miscellaneous proofs, theorems and attributes
### 5.1 An immortal number with power p is also immortal with power p+n(p-1)
We want to prove that $\large \mathbb{M}_{b}^{p} \subseteq \mathbb{M}^{p+n(p-1)}_{b}$
Let's begin with:
$n \in \mathbb{M}_{b}^{p} \; \Leftrightarrow \; n^p \equiv n \quad (\bmod \; b^{\lfloor\log_{b}(n)+1\rfloor} ) \qquad \vert \cdot n^{p-1}$
$n^p \cdot n^{p-1} \equiv n \cdot n^{p-1} \quad (\bmod \; b^{\lfloor\log_{b}(n)+1\rfloor} )$
$n^{p+p-1} \equiv n^{1+p-1} \quad (\bmod \; b^{\lfloor\log_{b}(n)+1\rfloor} )$
$n^{p+(p-1)} \equiv n^{p} \quad (\bmod \; b^{\lfloor\log_{b}(n)+1\rfloor} )$
With mathematical induction we can now prove that
$n^{p+n(p-1)} \equiv n^{p+(n-1)(p-1)} \equiv ... \equiv n^{p+(p-1)} \equiv n^{p} \equiv n \quad (\bmod \; b^{\lfloor\log_{b}(n)+1\rfloor} )$
therefore
$n \in \mathbb{M}_{b}^{p+n(p-1)} \Leftrightarrow n \in \mathbb{M}_{b}^{p+(n-1)(p-1)} \Leftrightarrow ... \Leftrightarrow n \in \mathbb{M}_{b}^{p+(p-1)} \Leftrightarrow n \in \mathbb{M}_{b}^{p}$
Since all immortal numbers of power $p$ are also immortal in power $p+n(p-1)$, we have now proven that
$\large \mathbb{M}_{b}^{p} \subseteq \mathbb{M}^{p+n(p-1)}_{b} \quad \blacksquare$
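For $p = 2$ the exponents $p + n(p-1) = 2 + n$ sweep every power $\geq 2$, so any base-10 square-immortal number should be immortal for every power. A quick numeric spot check of this consequence (helper name ours):

```python
def immortal_base10(n: int, p: int) -> bool:
    """n^p ≡ n (mod 10^digits) -- base-10 immortality for power p."""
    m = 10 ** len(str(n))
    return pow(n, p, m) == n % m

# 625 and 9376 are square-immortal, so they should pass for all powers.
print(all(immortal_base10(625, p) for p in range(2, 60)))
print(all(immortal_base10(9376, p) for p in range(2, 60)))
```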
### 5.2 Theorem of complete immortality
Theorem: For every number $n \in \mathbb{N}_{0}$ there is a tuple $(b,p)$ so that $n \in \mathbb{M}_{b}^{p}$. In other words, every number is immortal to some specific base and power.
$\large \forall \; n \in \mathbb{N}_0 \; \exists \; (b,p): n \in \mathbb{M}_{b}^{p}$
Note: The theorem of complete immortality also leads to the conclusion:
$\bigcup_{b=2}^{\infty} \bigcup_{p=2}^{\infty}\mathbb{M}_{b}^{p} = \mathbb{N}_{0}$
Proof:
(0) Let $n \in \mathbb{N}_0$ be arbitrary.
(1) Let $b \in \mathbb{P}$ be a prime that does not divide $n$ ( $b \nmid n$ ); then $m := b^{\lfloor\log_{b}(n)+1\rfloor}$ shares no prime factor with $n$ either.
In other words: $\gcd(b,n) = 1 \; \Rightarrow \; \gcd(m,n) = 1$ .
(2) Since we have $\gcd(m,n) = 1$, we can create following term using the Fermat-Euler theorem:
$n^{\varphi(m)} \equiv 1 \quad (\bmod \; m)$
Transforming it a bit, we get:
$n^{\varphi(m)} \equiv 1 \quad (\bmod \; m) \quad \vert \cdot n$
$n^{\varphi(m)} \cdot n \equiv n \quad (\bmod \; m)$
$n^{\varphi(m)+1} \equiv n \quad (\bmod \; m)$
(3) If we now define $p := \varphi(m) + 1$ and insert the definition of $m$ from above, then we get:
$\large n^{p} \equiv n \quad (\bmod \; b^{\lfloor\log_{b}(n)+1\rfloor} )$
which is equivalent to:
$\large n \in \mathbb{M}_{b}^{p} \quad \blacksquare$
Credits: Many thanks to Finn from matheboard.de for showing me how to prove this theorem!
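The construction in this proof can be replayed numerically. The sketch below (names ours) picks the smallest prime $b$ not dividing $n$ and returns the $(b,p)$ pair from step (3):

```python
def immortal_pair(n):
    """Construct (b, p) with n in M_b^p: smallest prime b with b ∤ n,
    m = b^(digit count of n in base b), p = φ(m) + 1."""
    def is_prime(q):
        return q >= 2 and all(q % r for r in range(2, int(q ** 0.5) + 1))
    b = 2
    while not is_prime(b) or n % b == 0:
        b += 1
    d, t = 0, n                      # base-b digit count of n
    while t:
        t //= b
        d += 1
    m = b ** d
    return b, (m - m // b) + 1       # φ(b^d) = b^d - b^(d-1) for prime b

b, p = immortal_pair(12)             # 12 is divisible by 2 and 3, so b = 5
print(b, p, pow(12, p, 5 ** 2) == 12 % 5 ** 2)
```

For $n = 12$ this yields $b = 5$, $m = 25$, $p = \varphi(25) + 1 = 21$, and indeed $12^{21} \equiv 12 \pmod{25}$.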
### 5.3 The only super-immortals are 0 and 1
We want to prove that the only super-immortals are 0 and 1.
$\mathbb{M}_{s} := \bigcap_{b=2}^{\infty}\bigcap_{p=2}^{\infty}\mathbb{M}_{b}^{p} = \{0,1\}$
(1) First, we verify that 0 and 1 are immortal to every $b,p \in \mathbb{N}\setminus\{0,1\}$ :
(1.1) $\forall \; b,p \geq 2: \; 0^p = 0 \quad \Rightarrow \quad 0^p \equiv 0 \; (\bmod \; b) \quad \Rightarrow \quad 0 \in \mathbb{M}_{b}^{p}$
(1.2) $\forall \; b,p \geq 2: \; 1^p = 1 \quad \Rightarrow \quad 1^p \equiv 1 \; (\bmod \; b) \quad \Rightarrow \quad 1 \in \mathbb{M}_{b}^{p}$
(Here the modulus is $b^1 = b$, since 0 and 1 are one-digit numbers in every base.)
(2) To prove that the only super-immortals are 0 and 1, we want to find a $(b,p)$ tuple for every $n > 1$ that will cause $n \notin \mathbb{M}_{b}^{p}$ :
$\large \forall \; n \in \mathbb{N}\setminus\{0,1\} \; \exists \; (b,p): n \notin \mathbb{M}_{b}^{p}; \quad b,p \in \mathbb{N}\setminus\{0,1\}$
(3) The definition of an immortal number is
$n^p \equiv n \quad ( \bmod \; b^{\lfloor\log_{b}(n)+1\rfloor} )$
We can see that a number can only be immortal if $n^p$ is a number that has more digits (in the notation of base $b$) than $n$.
(4) If we choose a high enough base $b$, then $n$ and $n^p$ have the same number of digits in base $b$, and $n$ cannot be immortal. Choose $b := n^p$. Since $n \geq 2$ and $p \geq 2$ imply $n < n^p = b$, the number $n$ has a single digit in base $b$, hence $\lfloor\log_{b}(n)+1\rfloor = 1$ and the modulus is $b^1 = n^p$.
(5) The definition of an immortal number then reads
$n^p \equiv n \; ( \bmod \; n^p )$
which can also be written as:
$(n^p \bmod \; n^p) = (n \bmod \; n^p)$
The left-hand side is $0$, while the right-hand side is $n$, because $n^p > n$ for $p \geq 2$. So the condition reduces to $0 = n$, which is false for every $n > 1$.
Side note:
$(a \bmod m) := a - \lfloor \tfrac{a}{m} \rfloor \cdot m$
If $m > a$ , then: $(a \bmod m) = a - 0 \cdot m = a$
(6) Therefore:
$\Large \forall \; n\in\mathbb{N}\setminus\{0,1\}: \; n \notin \mathbb{M}_{b:=n^p}^{p}, \;\; p \in \mathbb{N}\setminus\{0,1\}$
$\Leftrightarrow \forall \; n\in\mathbb{N}\setminus\{0,1\}: \; n \notin \mathbb{M}_{s}$
(7) Bringing together the results of step (1) and (6), we can conclude:
$\mathbb{M}_{s} = \{0,1\} \quad \blacksquare$
## 6. Open questions and unproven theorems
Help and suggestions are welcome!
#1:
$\bigcap_{b=2}^{\infty} \Big( \bigcup_{p=2}^{\infty}\mathbb{M}_{b}^{p} \Big) \stackrel{?}{=} \{0,1\}$
#2:
$\bigcap_{p=2}^{\infty} \Big( \bigcup_{b=2}^{\infty}\mathbb{M}_{b}^{p} \Big) \stackrel{?}{=} \{0,1\}$
#3: Is there a base that does not have any non-trivial (not 0,1) immortal numbers?
$\exists? \; b \in \mathbb{N}\setminus\{0,1\} : \sum_{p=2}^{\infty} \vert \mathbb{M}_{b}^{p} \vert^{\ast} = 0$
#4: Is there a power that does not have any non-trivial (not 0,1) immortal numbers?
$\exists? \; p \in \mathbb{N}\setminus\{0,1\} : \sum_{b=2}^{\infty} \vert \mathbb{M}_{b}^{p} \vert^{\ast} = 0$
#5: The topic "Base B / Power P" is not researched very much. It is not known how to find "Base B / Power P" immortal numbers without brute-forcing them.
Credits: The equations were generated with the Codecogs Online LaTeX Editor.
Last update: 13 July 2019
Contact: info&daniel-marschall.de
https://leetcode.com/articles/k-closest-points-to-origin/
|
## Solution
#### Approach 1: Sort
Intuition
Sort the points by distance, then take the closest K points.
Algorithm
There are two variants.
In Java, we find the K-th distance by creating an array of distances and then sorting them. After, we select all the points with distance less than or equal to this K-th distance.
In Python, we sort by a custom key function - namely, the distance to the origin. Afterwards, we return the first K elements of the list.
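In outline, the Python variant is a one-liner (a sketch with our own function name, not the article's exact code):

```python
def k_closest(points, K):
    # Sort by squared distance to the origin (sqrt is monotone, so not needed).
    return sorted(points, key=lambda p: p[0] ** 2 + p[1] ** 2)[:K]

print(k_closest([[1, 3], [-2, 2]], 1))
```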
Complexity Analysis
• Time Complexity: O(N log N), where N is the length of points.
• Space Complexity: O(N).
#### Approach 2: Divide and Conquer
Intuition
We want an algorithm faster than O(N log N). Clearly, the only way to do this is to use the fact that the K elements returned can be in any order -- otherwise we would be sorting, which is at least O(N log N).
Say we choose some random element x = A[i] and split the array into two buckets: one bucket of all the elements less than x, and another bucket of all the elements greater than or equal to x. This is known as "quickselecting by a pivot x".
The idea is that if we quickselect by some pivot, on average in linear time we'll reduce the problem to a problem of half the size.
Algorithm
Let's do the work(i, j, K) of partially sorting the subarray (points[i], points[i+1], ..., points[j]) so that the smallest K elements of this subarray occur in the first K positions (i, i+1, ..., i+K-1).
First, we quickselect by a random pivot element from the subarray. To do this in place, we have two pointers i and j, and move these pointers to the elements that are in the wrong bucket -- then, we swap these elements.
After, we have two buckets [oi, i] and [i+1, oj], where (oi, oj) are the original (i, j) values when calling work(i, j, K). Say the first bucket has 10 items and the second bucket has 15 items. If we were trying to partially sort say, K = 5 items, then we only need to partially sort the first bucket: work(oi, i, 5). Otherwise, if we were trying to partially sort say, K = 17 items, then the first 10 items are already partially sorted, and we only need to partially sort the next 7 items: work(i+1, oj, 7).
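The work(i, j, K) routine can be sketched with a Lomuto-style partition (a hedged sketch using our own names, not the article's exact code):

```python
import random

def k_closest_qs(points, K):
    """Partially sort `points` in place so the K closest to the origin
    occupy points[:K] (in no particular order); average O(N) time."""
    def dist(p):
        return p[0] ** 2 + p[1] ** 2

    def work(lo, hi, K):
        if lo >= hi or K <= 0:
            return
        # Lomuto partition around a randomly chosen pivot distance.
        r = random.randint(lo, hi)
        points[r], points[hi] = points[hi], points[r]
        pivot = dist(points[hi])
        i = lo
        for j in range(lo, hi):
            if dist(points[j]) < pivot:
                points[i], points[j] = points[j], points[i]
                i += 1
        points[i], points[hi] = points[hi], points[i]
        left = i - lo + 1          # size of bucket [lo, i], pivot included
        if K < left:
            work(lo, i - 1, K)     # first bucket alone contains the answer
        elif K > left:
            work(i + 1, hi, K - left)  # first bucket done; finish in second

    work(0, len(points) - 1, K)
    return points[:K]
```

The two recursive calls mirror the bucket argument above: if K fits inside the first bucket we recurse left, otherwise the whole first bucket is already part of the answer and we recurse right for the remainder.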
Complexity Analysis
• Time Complexity: O(N) in the average case, where N is the length of points.
• Space Complexity: O(N).
Analysis written by: @awice.
https://www.physicsforums.com/threads/how-to-parameterize-these-surfaces.897126/
|
# How to parameterize these surfaces?
Tags:
1. Dec 13, 2016
### Sho Kano
1. The problem statement, all variables and given/known data
Calculate $\iint { y+{ z }^{ 2 }ds }$ where the surface is the upper part of a hemisphere with radius a centered at the origin with $x\ge 0$
2. Relevant equations
Parameterizations:
$\sigma =\left< asin\phi cos\theta ,asin\phi sin\theta ,acos\phi \right> ,0\le \phi \le \frac { \pi }{ 2 } ,\frac { -\pi }{ 2 } \le \theta \le \frac { \pi }{ 2 } \\ N=(asin\phi )\sigma \\ \left| N \right| ={ a }^{ 2 }sin\phi \\ \\ \alpha =\left< rcos\theta ,rsin\theta ,0 \right> ,0\le r\le a,\frac { -\pi }{ 2 } \le \theta \le \frac { \pi }{ 2 } \\ N=-k\\ \left| N \right| =1$
3. The attempt at a solution
are these the right parameterizations?
2. Dec 14, 2016
### BvU
Check for yourself: $\ \sigma =\left< a\sin\phi \cos\theta ,a\sin\phi \sin\theta ,a\cos\phi \right> \$seems right to me. For $\ \iint ds\$ you would then get $\ \pi a^2, \$ right ?
It is not clear to me what you do to express $\ ds \$. What is $N$ and what is the function of $N$ ?
3. Dec 14, 2016
### Sho Kano
Oh sorry, by the integral I mean a surface integral. N is the normal. Both parameterizations seem right to me... I originally had $a$ instead of $r$ for the second parameterization. But that would just give me a circle, not a disk (a surface).
4. Dec 14, 2016
### BvU
Are we mixing up two threads with almost the same title ?
Not clear to me why you need $N$ in this thread. But you sure need $ds$ and I haven't seen how you are going to express that in the parameters
http://bittooth.blogspot.com/2010/04/european-airspace-645-am-cst.html
|
## Monday, April 19, 2010
### European airspace 6:45 am CST
This image, showing 128 aircraft in European airspace, would indicate that airlines are beginning to find ways of getting planes into the air. However, the patterns show where the plume is still probably too dense to allow flights, including over the UK, where the Government is apparently going to send Navy ships to help.
And just to make you feel comfortable about listening to experts, this was the listing for Europe's ten most dangerous volcanoes (in terms of their impact on Europe) just four days ago:
| Volcano | Country | Affected population | Value of residences at risk (US $ billion) |
|---|---|---|---|
| 1. Vesuvius | Italy | 1,651,950 | 66.1 |
| 2. Campi Flegrei | Italy | 144,144 | 7.8 |
| 3. La Soufriere | Guadeloupe, France | 94,037 | 3.8 |
| 4. Etna | Italy | 70,819 | 2.8 |
| 5. Agua de Pau | Azores, Portugal | 34,307 | 1.4 |
| 6. Soufriere | Saint Vincent, Caribbean | 24,493 | 1.0 |
| 7. Furnas | Azores, Portugal | 19,862 | 0.8 |
| 8. Sete Cidades | Azores, Portugal | 17,889 | 0.7 |
| 9. Hekla | Iceland | 10,024 | 0.4 |
| 10. Mt Pelee | Martinique, France | 10,002 | 0.4 |
Hekla is the only Icelandic volcano that makes the list. Given that an eruption of Katla is likely to be perhaps ten times as large as the current eruption from Eyjafjallajokull, and may well occur in the next two years, this perhaps underscores the occasional need to question expert opinion.
https://math.stackexchange.com/questions/970496/dense-open-subsets-of-schemes
|
# Dense open subsets of schemes
Let $X$ be a scheme. Let $U$ be an open subset of $X$. It is clear that if $U$ contains all the generic points of $X$ (by which I mean the generic points of irreducible components of $X$) then $U$ is dense in $X$.
Is the converse of this statement true in general? That is, if $U$ is dense in $X$ does it contain all the generic points of $X$?
I know the converse is true for an affine scheme with finitely many minimal primes, hence for a scheme $X$ whose set of irreducible components is locally finite. So the statement holds for any locally Noetherian scheme.
$\textbf{Proof for the affine case when the ring has finitely many minimal primes:}$ Let $X = Spec(A)$. Let $p_1, \dots, p_n$ be the minimal primes of $A$. Let $U$ be a dense open of $Spec(A)$. Suppose $U = \{p_a : a \in I\}$ for some index set $I$. Then $$\overline{U} = \mathbb{V}(\cap_{a \in I} p_a) = Spec(A)$$
So, $\cap_{a \in I} p_a = nil(A)$. Let $p_{b_1}, \dots, p_{b_k}$ be the minimal primes of $A$ in $U$. Then $\cap_{a \in I} p_a = \cap p_{b_j}$. So for any minimal prime $p_i$, $nil(A) = \cap p_{b_j} \subset p_i$. By minimality and the fact that $p_i$ is prime, $p_i = p_{b_j}$ for some $j$ ($\textbf{here I need finiteness of the set of minimal primes}$), and so $p_i \in U$.
$\DeclareMathOperator{\Spec}{\operatorname{Spec}}$This is not true, even if $X = \Spec R$ is affine. Consider the ring
$$R := k[x_1, x_2, \ldots]/(x_1x_2, x_3x_4, \ldots) = k[x_i]/(x_ix_{i+1} \mid i \text{ odd})$$
This ring has infinitely many minimal primes (corresponding to binary sequences of even/odd choices for each pair of annihilating variables $\{x_i, x_{i+1}\}$). Let $p := (x_1, x_3, x_5, \ldots)$ be the minimal prime consisting of the odd variables, and $U := \Spec(R) \setminus V(p)$, so that $p \not \in U$. Notice that for $f \in R$, $D(f) \subseteq U \iff V(f) \supseteq V(p) \iff f \in p$. Also, since $R$ is reduced, $D(f) = \emptyset$ iff $f = 0$.
To show $U$ is dense in $R$, it suffices to show that for all $g \ne 0 \in R$, $U \cap D(g) \ne \emptyset$, or equivalently, there exists $f \in R$ with $D(f) \subseteq U$ and $D(f) \cap D(g) = D(fg) \ne \emptyset$. In other words, for all $0 \ne g \in R$, there exists $f \in p$ with $fg \ne 0$. But such a choice of $f$ always exists for any $g$, by taking $f = x_k$, for some odd $k$ greater than the largest index appearing in $g$.
(To show this last claim, note that $x_kg = 0$ in $R$ involves only finitely many variables in $k[x_i]$, hence would hold in a sub-polynomial ring in finitely many variables, but in the finite case $k[x_1, \ldots, x_{n+1}]/(x_1x_2, \ldots, x_nx_{n+1}) = \Big(k[\ldots]/(\ldots)\Big)[x_k,x_{k+1}]/(x_kx_{k+1})$, $\text{ann}(x_k) = (x_{k+1})$).
https://moodle.org/mod/forum/discuss.php?d=362660
|
## General help
### Error on deleting users
Error on deleting users
I have a weird error when deleting users:
Notice: Trying to get property of non-object in /var/www/elo/moodle/lib/grade/grade_grade.php on line 1080
Notice: Trying to get property of non-object in /var/www/elo/moodle/lib/classes/event/grade_deleted.php on line 70
Can not find data record in database table course.
Moodle version is Moodle 3.3.3 (Build: 20171113)
This doesn't happen with all users when clicking the cross in "Browse list of users", but it happens always if I use "Bulk user actions" or "Upload users" with CSV using username and delete fields.
Some users I can't delete at all. So I have the impression there is some data for some users that breaks the delete script. I couldn't find any bugs that look similar. I tried to bulk delete the two users on the demo site and that worked
Has anyone an idea where to start my investigation?
Re: Error on deleting users
Hi Koen,
The code should check for something like empty() and skip those items.
HTH,
Matteo
Re: Error on deleting users
Thank you for your response Matteo.
It seems like I'm not the only one. In the meantime I found a related tracker issue.
I used
select * from mdl_grade_grades where itemid not in (select id from mdl_grade_items)
from the tracker issue and found 2084 records on a total of 707849.
I deleted them with
DELETE from mdl_grade_grades where itemid not in (select id from mdl_grade_items)
which was mentioned in the bug.
After running the delete query all was working fine again and I managed to delete the 400 users.
I think your suggested code change would have worked too, but I preferred in this case to clean up the database - it might prevent future problems.
Average of ratings: Useful (1)
https://www.opensourceforu.com/2016/06/plotting-with-latex/
|
HomeAudienceFor U & MePlotting with LaTeX
# Plotting with LaTeX
This article is a tutorial on plotting graphs with LaTeX. It contains a number of examples that will be of immense use to readers who wish to try it out. It also has a comprehensive list of references, which will be of further use.
The LaTeX package for plotting is pgfplots. It's present in all full distributions of LaTeX, and it permits the creation of graphics consistent with the typographic settings of the document, writing the instructions directly in the text source and ensuring the highest quality typical of LaTeX. In other words, pgfplots is a package that plots by coding. Further on in this article, an example of a flowchart will be shown. All the examples presented here are standalone plots and were made with ProTeXt 3.1.5. Because the complete code is too long for a printed article, interested readers will find it freely available at https://github.com/astonfe/latex.
Data management
The first important thing to be aware of is how to use the data. Generally speaking, there are two main situations: a short data table and a long data table. With the former, it is useful to put the table directly into the script, while with the latter, it's preferable to use an external file. For a short table, I use the coordinates of each point, as shown in the following code:
coordinates {
(0.1,0.086)
(...,... )
(0.9,0.782)};
This data has been taken from Reference 1, given at the end of the article. Another way to handle a short table is the table keyword, as shown in the following example:
table {
X Y Z
0.324 0.213 0.463
... ... ...
0.150 0.700 0.150
};
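An inline table like this can be handed straight to \addplot, selecting columns by their header names; a minimal sketch using two of the rows shown above:

```latex
% Plot column Y against column X of an inline table
\addplot table[x=X,y=Y] {
X     Y     Z
0.324 0.213 0.463
0.150 0.700 0.150
};
```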
A third option for a short table is given by \pgfplotstableread:
\pgfplotstableread[row sep=\\,col sep=&]{
M & Palermo & Milan \\
1 & 71.6 & 58.7 \\
.. & ... & ... \\
12 & 80.0 & 61.7 \\
}\datarain
This meteorological data has been published in References 2 and 3. If the table is a bit longer, I can continue to use pgfplotstableread, but in another way, as shown below:
\pgfplotstableread{absorbance.txt}{\dataabs}
in which absorbance.txt is a simple text file with a row-and-column structure. In this example, \usepackage{pgfplotstable} must be added in the preamble (see the next section). Probably the simplest way is to put the data file, for example pressure.txt, directly after \addplot, without any other command:
\addplot [p1] table[x=Date,y=DAP]{pressure.txt};
This last example will be better explained in the next section.
File structure
A generic LaTeX file has the following well-known structure:
\documentclass{...}
...
\begin{document}
...
\end{document}
in which the preamble is the part between \documentclass and \begin{document}. The following example is a bit more detailed:
\documentclass[border=0.5cm]{standalone}
\usepackage[svgnames]{xcolor}
\usepackage{pgfplots}
\usetikzlibrary{pgfplots.dateplot}
\pgfplotsset{width=16cm,height=12cm}
\begin{document}
\begin{tikzpicture}
\begin{axis}[
date coordinates in=x,
grid=major,
grid style={dashed,gray!30},
xtick align=outside,
ytick align=outside,
xtick pos=left,
ytick pos=left,
xticklabel=\month.\year,
xticklabel style={rotate=90,anchor=near xticklabel},
ylabel={Pressure (mmHg)},
title={Blood pressure},
title style={font=\huge},
legend cell align=left,
p1/.style={draw=DodgerBlue,line width=1pt,mark=*,mark options={fill=white}},
p2/.style={draw=Salmon,line width=1pt,mark=*,mark options={fill=white}}
]
\legend{DAP,SAP}
\end{axis}
\end{tikzpicture}
\end{document}
The preamble contains the packages, the libraries and some settings used to build the plot: typically the width, the height and, eventually, the definition of a custom colour map. In this example, the xcolor package is used to have svgnames for the colours, for example DodgerBlue, Salmon, etc. (see Reference 4, pages 38-39). Usually, the preamble also contains the data table if it's used with \pgfplotstableread. The part between \begin{axis} and \end{axis} holds the plot options, for example the grid, ticks, labels, title and legend. In this part, \addplot is also present; it's used to add each data series to the plot. Each data series has its own style options, as specified with p1 and p2. The file pressure.txt contains simulated medical data: measurements of the diastolic (DAP) and systolic (SAP) arterial pressures over a certain period of time. It has the following structure:
Date DAP SAP
2014-06-12 90 130
2014-07-04 92 130
2014-07-27 85 125
... ... ...
The final result is shown in Figure 1.
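The \addplot lines for the two data series themselves are not reproduced in the listing above; a sketch consistent with the p1 and p2 styles defined there, placed inside the axis environment before \legend:

```latex
% Sketch: the two series read from pressure.txt, styled with p1 and p2
\addplot [p1] table[x=Date,y=DAP]{pressure.txt};
\addplot [p2] table[x=Date,y=SAP]{pressure.txt};
```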
2D plot examples
Using the above-mentioned meteorological data, an area plot can easily be built. In the axis section, the following options are used:
p1/.style={draw=DodgerBlue,draw opacity=0.6,line width=3pt,fill=DodgerBlue,fill opacity=0.6,mark=none},
p2/.style={draw=Salmon,draw opacity=0.6,line width=3pt,fill=Salmon,fill opacity=0.6,mark=none}
And then each data series is plotted, as follows:
\addplot [p1] table[x=M,y=Milan]{\datarain}\closedcycle;
\addplot [p2] table[x=M,y=Palermo]{\datarain}\closedcycle;
The option \closedcycle defines a closed surface for each series; this is the main difference between an area plot and a bar plot. The same table could also be plotted as a bar plot. In the preamble, the following lines are added so that the height of each bar is written on the plot with one decimal figure, even if it's a zero:
\pgfkeys{
/pgf/number format/precision=1,
/pgf/number format/fixed zerofill=true}
The axis section contains, among other options, the following code. The first three lines write the height of each bar on the plot, and the last two select the colour of each bar type.
nodes near coords,
nodes near coords align={vertical},
every node near coord/.append style={font=\tiny},
...
p1/.style={draw=DodgerBlue,fill=DodgerBlue},
p2/.style={draw=Salmon,fill=Salmon}
Last of all, each series is plotted, as shown below:
\addplot [p1] table[x=M,y=Milan]{\datarain};
\addplot [p2] table[x=M,y=Palermo]{\datarain};
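Note that, for these series to appear as bars, the axis options must also request a bar plot; this is not shown in the snippets above, so the following standalone sketch is an assumption (the ybar key and the bar width value are chosen for illustration, with the two month-1 values taken from the rain table):

```latex
\documentclass[border=0.5cm]{standalone}
\usepackage{pgfplots}
\begin{document}
\begin{tikzpicture}
\begin{axis}[ybar,bar width=8pt,ylabel={Rain (mm)},xtick={1}]
% Month 1 for the two cities; ybar places the bars side by side
\addplot coordinates {(1,71.6)};
\addplot coordinates {(1,58.7)};
\end{axis}
\end{tikzpicture}
\end{document}
```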
On the same plot, it's possible to add two (or more) data series with different plot types. For example, I can plot the above-mentioned coordinates table as points only, as follows:
\addplot[only marks,mark=*,mark size=3pt,mark options={fill=green}]
coordinates {...};
And then add another series as a line only, as shown below:
\addplot[line width=1pt]
coordinates {
(0.1,0.0927143)
(0.9,0.7937429)};
The final results are shown in Figures 2, 3 and 4.
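Putting the last two snippets together, a minimal compilable standalone sketch (only the two end points of the data series are used here, taken from the coordinates shown earlier):

```latex
\documentclass[border=0.5cm]{standalone}
\usepackage{pgfplots}
\begin{document}
\begin{tikzpicture}
\begin{axis}[grid=major]
% First series drawn as points only
\addplot[only marks,mark=*,mark size=3pt,mark options={fill=green}]
coordinates {(0.1,0.086) (0.9,0.782)};
% Second series drawn as a line only
\addplot[line width=1pt]
coordinates {(0.1,0.0927143) (0.9,0.7937429)};
\end{axis}
\end{tikzpicture}
\end{document}
```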
Smoothing: an example of interacting with Scilab
If some calculations are necessary before plotting the data, Scilab (http://www.scilab.org) can be used. In the following Scilab script, a moving average smoothing is calculated; the block length is equal to 2*m+1. A three-column text file called smoothing.txt is exported: the first column is an index from 1 to the data length, the second contains the data values, and the third the calculated moving average. The LaTeX engine is then called by the function unix_g, and the script smoothing.tex produces the final plot (see Figure 5).
clear;
data=sheets(1).value;
m=3;
[nr,nc]=size(data);
ma=zeros(1:nr);
for i=1+m:nr-m
for j=-m:m
ma(i)=ma(i)+data(i+j);
end
ma(i)=ma(i)/(2*m+1);
end
n=1:1:nr;
q=cat(2,n,data,ma);
write_csv(q,"smoothing.txt"," ",".");
unix_g("pdflatex smoothing.tex");
Since the column headers are not present in smoothing.txt, each column is selected in smoothing.tex by its own index:
\addplot [p1] table[x index={0},y index={1}]{smoothing.txt};
\addplot [p2] table[x index={0},y index={2}]{smoothing.txt};
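In formula form, the quantity computed by the Scilab loop above is the centred moving average

```latex
\[
\mathrm{ma}_i = \frac{1}{2m+1}\sum_{j=-m}^{m} x_{i+j},
\qquad i = m+1,\dots,n_r-m
\]
```

with m=3, i.e. a block of 2m+1 = 7 points; the first and last m points are left at zero by the loop.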
3D plot examples
The first thing I would like to talk about is how to create a custom colour map. Simply add the following code to \pgfplotsset in the preamble:
colormap={parula}{rgb=(...,...,...)}
The complete RGB data for the parula and viridis colour maps can be found in Reference 5. This technique is also useful for creating a custom colour map in Scilab. In the following example, a 3D algebraic function is plotted and coloured with the parula colour map. For Scilab, also see Reference 6.
clear;
parula=[0.2081,0.1663,0.5292
... ,... ,...
0.9763,0.9831,0.0538];
[nr,nc]=size(parula);
n=nr-1;
k=0:1:n;
cmap=interp1(k/n,parula,linspace(0,1,n));
deff("z=f(x,y)","z=sin(sqrt(x.^2+y.^2))");
x=-4:0.25:4;
y=-4:0.25:4;
xset("colormap",cmap);
fplot3d1(x,y,f,alpha=80,theta=30);
s=gca();
s.box="off";
s.axes_visible="off";
xtitle("","","","");
The LaTeX code for the same function is a bit different, as seen below:
\documentclass[border=0.5cm]{standalone}
\usepackage{pgfplots}
\pgfplotsset{width=16cm,height=12cm,
colormap={parula}{
rgb=(0.2081,0.1663,0.5292)
rgb=(... ,... ,... )
rgb=(0.9763,0.9831,0.0538)
}
}
\begin{document}
\begin{tikzpicture}
\begin{axis}[hide axis]
\addplot3[
surf,
domain=-230:230,
samples=50,
opacity=0.8,
line width=0.1pt
]
{sin(sqrt(x^2+y^2))};
\end{axis}
\end{tikzpicture}
\end{document}
An alternative is to replace surf with mesh. The final plots are shown in Figures 6 and 7. If the surface is parametric, for example a torus (Figure 8), the three equations can be written in an easy way, putting each of them between { and }:
({cos(x)},
{(2.5+sin(x))*cos(y)},
{(2.5+sin(x))*sin(y)});
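Wrapped in an \addplot3, the torus could be drawn as follows; the domains 0:360 and the sample counts are assumptions (pgfplots evaluates trigonometric functions in degrees by default):

```latex
% Sketch: parametric torus, x and y are the two angular parameters
\addplot3[surf,domain=0:360,domain y=0:360,samples=30,samples y=30,
          z buffer=sort,opacity=0.8,line width=0.1pt]
({cos(x)},
{(2.5+sin(x))*cos(y)},
{(2.5+sin(x))*sin(y)});
```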
Plotting a 3D function, algebraic or parametric, requires the customisation of some keys, for example: domain (the x range, or the x and y ranges if domain y is not specified), domain y (the y range), draw (line colour), line width (line width), opacity (the alpha value, from 0, totally transparent, to 1, totally opaque), samples (the number of sampled points on x, with a default value of 25, or on both x and y if samples y is not specified), samples y (the number of sampled points on y), shader (line shading), and z buffer (if equal to sort, the segments more distant from the observation point are plotted first).
I think that a better way to plot a parametric function is shown in Reference 7 or Reference 14 (page 109), in which the package pst-solides3d is used. This is the only example discussed here that must be run with XeLaTeX. The final plot in Figure 9 shows Dini's surface (named after Ulisse Dini, http://en.wikipedia.org/wiki/Ulisse_Dini). Some important options are: viewpoint (notation with spherical coordinates), Decran (a particular distance, see Reference 14, pages 14-15), base (dimensions of the grid), and ngrid (the number of grid lines).
\documentclass[border=0.5cm]{standalone}
\usepackage{pst-solides3d}
\begin{document}
\begin{pspicture}(-1,-2)(1,2)
\psset{viewpoint=20 70 20 rtp2xyz,Decran=20}
\defFunction[algebraic]{lily}(u,v)
{cos(u)*sin(v)}
{sin(u)*sin(v)}
{cos(v)+ln(tan(v/2))+0.15*u}
\psSolid[
object=surfaceparametree,
function=lily,
base=0 12.56 0.13 2,
ngrid=60 0.1,
linewidth=0.1pt,
incolor=gray!30,
hue=0 1,
opacity=0.8,
]
\end{pspicture}
\end{document}
The last example for 3D plotting is about the representation of some spectroscopic data. In particular, I will consider an absorbance spectrum measured point by point from 350 nm to 550 nm in steps of 10 nm, at 8 different times (0, 1, 2, 4, 6, 8, 12, 24 hours). So I have a table of 21 rows by 9 columns. The table is present in the preamble as \pgfplotstableread; the first column is the wavelength and columns two to nine are the measured absorbance values. In the preamble I have also added eight \definecolor commands, one for each spectrum, and a \pgfplotscreateplotcyclelist that defines the style options for each spectrum. The syntax is as follows:
\definecolor{col1}{HTML}{F46D43}
...
\definecolor{col8}{HTML}{3288BD}
\pgfplotscreateplotcyclelist{mycol}{%
{draw=col1,line width=1pt,fill=col1,fill opacity=0.6},
...
{draw=col8,line width=1pt,fill=col8,fill opacity=0.6}%
}
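The list mycol is then made active for the axis, so that consecutive \addplot commands pick up the eight styles in turn; a config fragment (the cycle list name key is standard pgfplots, the remaining axis options are elided here):

```latex
\begin{axis}[
cycle list name=mycol,
...
]
```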
Each spectrum is then plotted by a loop, as follows:
\pgfplotsinvokeforeach{24,12,8,6,4,2,1,0}{
}
The result is eight different areas plotted in a 3D space, as shown in Figure 10. For aesthetic reasons, I have added one more row with a z=0 value at 350 nm and at 550 nm. Another option is to plot each spectrum in a simple sequence, not considering the time. In this case, each measured point is specified by its coordinates (x,y,z) and the result is a surface (see Figure 11).
\addplot3[
surf,
z buffer=sort,
opacity=0.8,
line width=0.1pt
]
coordinates{
(350,1,0.000)(350,2,0.000)(350,3,0.000) ...
...
... (550,6,0.000)(550,7,0.000)(550,8,0.000)
};
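For reference, the body of the \pgfplotsinvokeforeach loop shown earlier would contain one \addplot3 per time value; the column names W and t0...t24 below are hypothetical, since the real header names of the absorbance table are not reproduced in this article:

```latex
\pgfplotsinvokeforeach{24,12,8,6,4,2,1,0}{
% #1 is the time in hours: x = wavelength, y = time, z = absorbance
\addplot3 table[x=W,y expr=#1,z=t#1]{\dataabs};
}
```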
Another possibility is to give each spectrum a certain thickness by adding one more y value and then plotting from the last (n=8, or 24 h) to the first (n=1, or 0 h). The resulting plot is a series of ribbons (Figure 12). The contour plot is not directly supported (a Gnuplot installation is required), but it's possible to obtain a pseudo contour plot using view={0}{90} in the axis definition (the syntax is view={azimuth}{elevation}); see Figure 13. For a 3D bar plot it is possible, for example, to follow the answer proposed by the user LMT-PhD in Reference 8; see also https://github.com/astonfe/latex/blob/master/absorbance/absorbance_bar.tex. In this case, the data are ordered according to the following table, in which the first row under the headers contains the maximum value of each column:
X Y Z
21 8 1.758
1 8 0.143
1 7 0.138
...
2 8 0.280
2 7 0.270
...
21 8 0.247
21 7 0.263
...
21 1 0.035
Other examples
The last two examples are about the radar (or spider) and ternary plots. For the first type, an interesting approach is to follow Reference 9; Figure 14 shows an example based on that reference. The radar plot has a certain importance for some applications, an example of which can be seen in Reference 10 (pages 12-13 and 21). A ternary plot (Figure 15) can be drawn by adding \usepgfplotslibrary{ternary} in the preamble. An example of this is shown below:
\begin{ternaryaxis}[
ternary limits relative=false,
clip=false,
disabledatascaling,
...
]
\addplot3[
mark=*,mark size=3pt,only marks,draw=black,fill=gray,
nodes near coords=\coordindex,
every node near coord/.append style={yshift=3pt}
]
table {
X Y Z
0.324 0.213 0.463
... ... ...
0.150 0.700 0.150
};
\end{ternaryaxis}
An easy technique for numbering each point is to use \coordindex. See Reference 13, pages 487-498, in which more complex examples are reported.
Flowchart
This section is dedicated to creating a basic example of a flowchart with the package tikz. Even in this case, LaTeX can produce great output. In the preamble, the packages xcolor, mhchem (because some chemical formulae will be used) and tikz are loaded. Then each chart component (block, decision, which is a diamond-shaped block, and line) is defined using \tikzstyle, specifying for example the geometry, fill colour, text options, etc. In the tikzpicture section, each block or decision (both called a node) is positioned with respect to the others and the text content is written. Last, every node is connected with a path (a line), and on each path it's also possible to add some text. It's pretty easy and only takes a little patience. Probably the best explanation is a practical example (Figure 16):
\documentclass[border=0.5cm]{standalone}
\usepackage[svgnames]{xcolor}
\usepackage[version=3]{mhchem}
\usepackage{tikz}
\usetikzlibrary{shapes,arrows}
\tikzstyle{block1}=[rectangle,draw,fill=Tomato,text width=2cm,minimum height=2cm,rounded corners]
...
\tikzstyle{line}=[draw,-triangle 45]
\begin{document}
\begin{tikzpicture}[text centered,node distance=4cm,auto]
\node [block1] (1) {Mixture};
\node [decision,right of=1] (2) {+\ce{CHCl3}};
...
\path [line] (1)--(2);
\path [line] (2)--(3);
...
\path [line] (2)--node {Aq} (3);
\path [line] (2)--node {Org} (5);
...
\end{tikzpicture}
\end{document}
Only some introductory examples have been presented here. More can be found in References 11 and 12, and a large collection in the manuals for the packages pgfplots and pst-solides3d (References 13 and 14). For tikz, an introduction can be found in Reference 15 and a large manual in Reference 16.
Stefano Rizzi
The author works in the analytical chemistry and textile chemistry areas. He has been a Linux user since 1998.
https://www.buecher.de/shop/atom-und-molekuelphysik/theory-of-the-muon-anomalous-magnetic-moment-ebook-pdf/melnikov-kirill-vainshtein-arkady/products_products/detail/prod_id/43885671/
|
• Format: PDF
Product description
The theory of the muon anomalous magnetic moment is particle physics in a nutshell. It is an interesting, exciting and difficult subject, and this book provides a comprehensive review of it. The theory of the muon anomalous magnetic moment is at the cutting edge of current research in particle physics, and any deviation between the theoretical prediction and the experimental value might be interpreted as a signal of an as-yet-unknown new physics.
For legal reasons, this download can only be delivered with a billing address in A, B, BG, CY, CZ, D, DK, EW, E, FIN, F, GR, HR, H, IRL, I, LT, L, LR, M, NL, PL, P, R, S, SLO, SK.
• Product details
• Publisher: Springer-Verlag GmbH
• Number of pages: 180
• Publication date: 9 January 2007
• Language: English
• ISBN-13: 9783540328070
• Item no.: 43885671
About the authors
Kirill Melnikov, University of Hawaii, Honolulu, Hawaii, USA / Arkady Vainshtein, University of Minnesota, Minneapolis, MN, USA
Contents
QED Effects in the Muon Magnetic Anomaly.- Hadronic Vacuum Polarization.- Electroweak Corrections to a_mu.- Strong Interaction Effects in Electroweak Corrections.- Hadronic Light-by-light Scattering and the Muon Magnetic Anomaly.- Standard Model Value for the Muon Magnetic Anomaly.- New Physics and the Muon Anomalous Magnetic Moment.- Summary.