Levi-Civita field
In mathematics, the Levi-Civita field, named after Tullio Levi-Civita, is a non-Archimedean ordered field; i.e., a system of numbers containing infinite and infinitesimal quantities. Each member $a$ can be constructed as a formal series of the form

$$a = \sum_{q \in \mathbb{Q}} a_q \varepsilon^q,$$

where the coefficients $a_q$ are real numbers, $\mathbb{Q}$ is the set of rational numbers, and $\varepsilon$ is to be interpreted as a positive infinitesimal. The support of $a$, i.e., the set of indices of the nonvanishing coefficients

$$\{q \in \mathbb{Q} : a_q \neq 0\},$$

must be a left-finite set: for any member of $\mathbb{Q}$, there are only finitely many members of the set less than it; this restriction is necessary in order to make multiplication and division well defined and unique. The ordering is defined according to the dictionary ordering of the list of coefficients, which is equivalent to the assumption that $\varepsilon$ is an infinitesimal.
The real numbers are embedded in this field as series in which all of the coefficients vanish except $a_0$. Some examples:

$7\varepsilon$ is an infinitesimal that is greater than $\varepsilon$, but less than every positive real number.
$\varepsilon^2$ is less than $\varepsilon$, and is also less than $r\varepsilon$ for any positive real $r$.
$1+\varepsilon$ differs infinitesimally from 1.
$\varepsilon^{1/2}$ is greater than $\varepsilon$, but still less than every positive real number.
$1/\varepsilon$ is greater than any real number.
$1+\varepsilon+\frac{1}{2}\varepsilon^2+\cdots+\frac{1}{n!}\varepsilon^n+\cdots$ is interpreted as $e^\varepsilon$.
$1+\varepsilon+2\varepsilon^2+\cdots+n!\varepsilon^n+\cdots$ is a valid member of the field, because the series is to be construed formally, without any consideration of convergence.
Definition of the field operations and positive cone

If $f=\sum_{q\in \mathbb{Q}} f_q\varepsilon^q$ and $g=\sum_{q\in \mathbb{Q}} g_q\varepsilon^q$ are two Levi-Civita series, then

their sum $f+g$ is the pointwise sum $f+g := \sum_{q\in \mathbb{Q}} (f_q+g_q)\varepsilon^q$;

their product $fg$ is the Cauchy product $fg := \sum_{q\in \mathbb{Q}} \left(\sum_{a+b=q} f_a g_b\right)\varepsilon^q$.

(One can check that the support of this series is left-finite and that for each of its elements $q$, the set $\{(a,b)\in \mathbb{Q}\times \mathbb{Q} :\ a+b=q \wedge f_a\neq 0 \wedge g_b\neq 0\}$ is finite, so the product is well defined.)

The relation $0 < f$ holds if $f\neq 0$ (i.e., $f$ has non-empty support) and the least non-zero coefficient of $f$ is strictly positive.

Equipped with those operations and order, the Levi-Civita field is indeed an ordered field extension of $\mathbb{R}$ in which the series $\varepsilon$ is a positive infinitesimal.
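These definitions are easy to prototype. Below is a minimal, hypothetical Python sketch (mine, not part of the article) that represents a series as a dictionary mapping rational exponents to real coefficients; working with finitely many terms stands in for genuine left-finite supports:

```python
from fractions import Fraction

def lc_add(f, g):
    """Pointwise sum of two series given as {exponent: coefficient} dicts."""
    h = dict(f)
    for q, c in g.items():
        h[q] = h.get(q, 0) + c
    return {q: c for q, c in h.items() if c != 0}

def lc_mul(f, g):
    """Cauchy product: the coefficient of eps^q is the sum of f_a * g_b over a + b = q."""
    h = {}
    for a, fa in f.items():
        for b, gb in g.items():
            h[a + b] = h.get(a + b, 0) + fa * gb
    return {q: c for q, c in h.items() if c != 0}

def lc_is_positive(f):
    """f > 0 iff f has non-empty support and its least-exponent coefficient is positive."""
    return bool(f) and f[min(f)] > 0

# 1 + eps and 1 - eps, exponents kept as exact rationals
one_plus = {Fraction(0): 1, Fraction(1): 1}
one_minus = {Fraction(0): 1, Fraction(1): -1}
print(sorted(lc_mul(one_plus, one_minus).items()))  # (1 + eps)(1 - eps) = 1 - eps^2
```

The dictionary representation mirrors the definition directly: addition is pointwise on coefficients, and multiplication convolves exponents, exactly as in the Cauchy product above.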
The Levi-Civita field is real-closed, meaning that it can be algebraically closed by adjoining an imaginary unit (i), or by letting the coefficients be complex. It is rich enough to allow a significant amount of analysis to be done, but its elements can still be represented on a computer in the same sense that real numbers can be represented using floating point. It is the basis of automatic differentiation, a way to perform differentiation in cases that are intractable by symbolic differentiation or finite-difference methods.[1]
The Levi-Civita field is also Cauchy complete, meaning that, relativizing the $\forall\exists\forall$ definitions of Cauchy sequence and convergent sequence to sequences of Levi-Civita series, each Cauchy sequence in the field converges. Equivalently, it has no proper dense ordered field extension.
As an ordered field, it has a natural valuation given by the rational exponent corresponding to the first nonzero coefficient of a Levi-Civita series. The valuation ring is that of series bounded by real numbers, the residue field is $\mathbb{R}$, and the value group is $(\mathbb{Q},+)$. The resulting valued field is Henselian (being real closed with a convex valuation ring) but not spherically complete. Indeed, the field of Hahn series with real coefficients and value group $(\mathbb{Q},+)$ is a proper immediate extension, containing series such as $1+\varepsilon^{1/2}+\varepsilon^{2/3}+\varepsilon^{3/4}+\varepsilon^{4/5}+\cdots$ which are not in the Levi-Civita field.
Relations to other ordered fields

The Levi-Civita field is the Cauchy completion of the field $\mathbb{P}$ of Puiseux series over the field of real numbers, that is, it is a dense extension of $\mathbb{P}$ without proper dense extension. Here is a list of some of its notable proper subfields and its proper ordered field extensions:
Notable subfields

The field $\mathbb{R}$ of real numbers.
The field $\mathbb{R}(\varepsilon)$ of fractions of real polynomials with infinitesimal positive indeterminate $\varepsilon$.
The field $\mathbb{R}((\varepsilon))$ of formal Laurent series over $\mathbb{R}$.
The field $\mathbb{P}$ of Puiseux series over $\mathbb{R}$.
Notable extensions

The field $\mathbb{R}[[\varepsilon^{\mathbb{Q}}]]$ of Hahn series with real coefficients and rational exponents.
The field $\mathbb{T}^{LE}$ of logarithmic-exponential transseries.
The field $\mathbf{No}(\varepsilon_0)$ of surreal numbers with birthdate below the first $\varepsilon$-number $\varepsilon_0$.
Fields of hyperreal numbers constructed as ultrapowers of $\mathbb{R}$ modulo a free ultrafilter on $\mathbb{N}$ (although here the embeddings are not canonical).
^ Khodr Shamseddine, Martin Berz, "Analysis on the Levi-Civita Field: A Brief Overview", Contemporary Mathematics 508, pp. 215–237 (2010).
A web-based calculator for Levi-Civita numbers
15 September 2006
A link invariant from the symplectic geometry of nilpotent slices
Paul Seidel,1 Ivan Smith2
1Department of Mathematics, University of Chicago; current: Department of Mathematics, Massachusetts Institute of Technology
We define an invariant of oriented links in $S^3$ using the symplectic geometry of certain spaces that arise naturally in Lie theory. More specifically, we present a knot as the closure of a braid that, in turn, we view as a loop in configuration space. Fix an affine subspace $\mathcal{S}_m$ of $\mathfrak{sl}_{2m}(\mathbb{C})$ which is a transverse slice to the adjoint action at a nilpotent matrix with two equal Jordan blocks. The adjoint quotient map restricted to $\mathcal{S}_m$ gives rise to a symplectic fibre bundle over configuration space. An inductive argument constructs a distinguished Lagrangian submanifold $L_{\wp_\pm}$ of a fibre $\mathcal{Y}_{m,t_0}$ of this fibre bundle; we regard the braid $\beta$ as a symplectic automorphism of the fibre and apply Lagrangian Floer cohomology to $L_{\wp_\pm}$ and $\beta(L_{\wp_\pm})$ inside $\mathcal{Y}_{m,t_0}$. The main theorem asserts that this group is invariant under the Markov moves and hence defines an oriented link invariant. We conjecture that this invariant coincides with Khovanov's combinatorially defined link homology theory, after collapsing the bigrading of the latter to a single grading.
Paul Seidel, Ivan Smith. "A link invariant from the symplectic geometry of nilpotent slices." Duke Math. J. 134 (3), 453–514, 15 September 2006. https://doi.org/10.1215/S0012-7094-06-13432-4
Primary: 17B45, 53D40, 57M25
Secondary: 14D05, 14D06, 20C30
Understanding math through code
Bayesian networks and causality
The Foursquare Theorem
Painless, effective peer reviews
Impossible functions
Understanding A/B test analysis
Unevaluating polynomials
Insertion sort is dual to bubble sort
The 3 Things You Should Understand about Quantum Computation
Fun with Bayesian Priors
A Programmer's Guide to the Central Limit Theorem
Climbing the probability distribution ladder
Consider a population of rabbits and foxes. The number of rabbits r and the number of foxes f will range between 0 and 1, representing the percentage of some theoretical maximum population. Each generation, the number of rabbits and foxes changes according to a simple rule.

The number of rabbits in generation n+1, based on the number of rabbits r_n and foxes f_n in the previous generation, is:

r_{n+1} = a * r_n * (1 - r_n - f_n)

where a is the rabbits' birth rate. For example, if a = 3, then each rabbit produces 2 offspring in the next generation. The factor (1 - r_n - f_n) accounts for deaths due to starvation and predation. If the number of rabbits is low, then few will die of starvation; if it's high, then many will; and likewise, if the number of foxes is high, many rabbits will die from being eaten.
The number of foxes in the next generation is given by:

f_{n+1} = b * f_n * r_n

This says that the chance that a fox encounters and eats a rabbit is r_n. So if the rabbit population is at 80% of its theoretical maximum, 80% of foxes will eat enough to reproduce, and each will produce b offspring.
So let’s pick some values for a and b and see how the system behaves. We’ll visualize it by plotting the populations on a graph. But instead of plotting both populations over time, we’ll plot the populations against each other. That is, we’ll leave time out of it and just plot the set of points (r_i, f_i) over, say, 40,000 generations.
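As a rough sketch of the iteration described above, the update rule can be run directly; the parameter values (a = 3, b = 3) and the starting populations (10% each) are my own choices for illustration, not the post's:

```python
def simulate(a, b, r0, f0, generations):
    """Iterate the rabbit/fox update rule and collect the (r, f) points to plot."""
    points = [(r0, f0)]
    r, f = r0, f0
    for _ in range(generations):
        # Both updates use the previous generation's values.
        r, f = a * r * (1 - r - f), b * f * r
        points.append((r, f))
    return points

# Hypothetical parameters: birth rates a = 3, b = 3, both populations starting at 10%.
pts = simulate(3, 3, 0.1, 0.1, 5)
print(pts[1])  # generation 1: r = 3*0.1*0.8 = 0.24, f = 3*0.1*0.1 = 0.03
```

Feeding the returned points into a scatter plot gives exactly the time-free (r_i, f_i) picture described above.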
I came across an old post by Eliezer Yudkowsky on Less Wrong entitled Probability is in the Mind. It immediately struck me as, well, more wrong, and I want to explain why.
A brief disclaimer: that article was published 8 years ago and he may have changed his thinking since then, but I’m purely out to address the idea, not the person. So here goes.
First I’ll lay out his argument, which he does through a series of examples.
The quantum eraser is a variation on the classic double-slit experiment. If you ever have any doubt about the weirdness of quantum mechanics (“oh, there’s probably some classical explanation for all of this”), this experiment is designed to remove it.
The experiment involves two entangled polarized photons. The first goes straight to a detector, and the second passes through a barrier with two slits before reaching a detector.
The experiment proceeds in three stages. I’m going to simulate each stage using my toy quantum computing library (see earlier post here), and we’ll see what happens!
Had a chat with @jliszka about Bayes' rule, the '14 draft lottery, & the chances the NBA is rigged. Now I don't believe in anything anymore.
— harryh (@harryh) July 11, 2014
This chat basically consisted of Harry mentioning that the Cleveland Cavaliers got the first pick in the draft, even though the lottery gave them only a 1.7% chance of drawing that slot, then wondering aloud how he should update his prior on whether the NBA draft is rigged given this information, and then me breaking out my probability monad, because there’s no problem that can’t be solved with more monads.
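The update Harry was asking about is a one-line application of Bayes' rule. The numbers below are illustrative assumptions of mine (a 1% prior that the lottery is rigged, and a rigged lottery that always hands Cleveland the first pick); only the 1.7% figure comes from the story:

```python
def posterior(prior_rigged, p_evidence_if_rigged, p_evidence_if_fair):
    """P(rigged | evidence) via Bayes' rule."""
    numerator = prior_rigged * p_evidence_if_rigged
    denominator = numerator + (1 - prior_rigged) * p_evidence_if_fair
    return numerator / denominator

# The Cavs win the top pick despite 1.7% odds under a fair lottery.
p = posterior(prior_rigged=0.01, p_evidence_if_rigged=1.0, p_evidence_if_fair=0.017)
print(round(p, 3))  # a 1% prior jumps to roughly 37%
```

The striking part is how much work the tiny likelihood under fairness does: even a skeptical prior moves by an order of magnitude.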
Twitter’s Future library is a beautiful abstraction for dealing with concurrency. However, there are code patterns that seem natural or innocuous but can cause real trouble in production systems. This short article outlines a few of the easiest traps to fall into.
Below is a method from a fictional web application that registers a user by calling the Foursquare API to get the user’s profile info, their friend graph and their recent check-ins.
def registerUser(token: String): Future[User] = {
  val api = FoursquareApi(token)

  def apiFriendsF(apiUser: ApiUser): Future[Seq[ApiUser]] = {
    Future.collect(apiUser.friendIDs.map(api.getUserF))
  }

  def apiCheckinsF(apiUser: ApiUser, categories: Seq[ApiCategory]): Future[Seq[ApiCheckin]] = {
    // ...
  }

  def createDBUser(
      user: ApiUser,
      friends: Seq[ApiUser],
      checkins: Seq[ApiCheckin]): User = {
    // ...
  }

  for {
    apiUser <- api.getSelfF()
    apiCategories <- api.getCategoriesF()
    apiFriends <- apiFriendsF(apiUser)
    apiCheckins <- apiCheckinsF(apiUser, apiCategories)
  } yield createDBUser(apiUser, apiFriends, apiCheckins)
}
There are some problems with this code.
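One likely problem is that the for-comprehension sequences every step, even though some of the calls don't depend on each other (fetching the categories doesn't need the user). A rough Python asyncio analogue of the same shape, with hypothetical stand-in functions rather than Foursquare's real API, shows the difference:

```python
import asyncio

async def get_self():
    """Stand-in for the API call that fetches the current user."""
    await asyncio.sleep(0.01)
    return "user"

async def get_categories():
    """Stand-in for the independent API call that fetches categories."""
    await asyncio.sleep(0.01)
    return ["food", "bars"]

async def register_sequential():
    # Each await finishes before the next call even starts.
    user = await get_self()
    categories = await get_categories()
    return user, categories

async def register_concurrent():
    # get_categories() doesn't depend on get_self(), so start them together.
    user, categories = await asyncio.gather(get_self(), get_categories())
    return user, categories

print(asyncio.run(register_concurrent()))
```

Both versions return the same result, but the concurrent one overlaps the two waits instead of stacking them.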
A brief guide to tech leadership at Foursquare, inspired by Ben Horowitz’s Good Product Manager, Bad Product Manager. Cross-posted from Medium.
Good tech leads act as a member of the team, and consider themselves successful when the team is successful. They take their share of unsexy grungy work and clear roadblocks so their team can operate at 100%. They work to broaden the technical capabilities of their team, making sure knowledge of critical systems is not concentrated in one or two minds.
Correlation does not imply causality—you’ve heard it a thousand times. But causality does imply correlation. Being good Bayesians, we should know how to turn a statement like that around and find a way to infer causality from correlation.
The tool we’re going to use to do this is called a probabilistic graphical model. A PGM is a graph that encodes the causal relationships between events. For example, you might construct this graph to model a chain of causes resulting in someone getting a college scholarship:
Or the relationship between diseases and their symptoms:
Or the events surrounding a traffic jam:
Each node represents a random variable, and the arrows represent dependence relations between them. You can think of a node with incoming arrows as a probability distribution parameterized on some set of inputs; in other words, a function from some set of inputs to a probability distribution.
PGMs with directed edges and no cycles are specifically called Bayesian networks, and that’s the kind of PGM I’m going to focus on.
It’s easy to translate a Bayesian network into code using this toy probability library. All we need are the observed frequencies for each node and its inputs. Let’s try the traffic jam graph. I’ll make up some numbers and we’ll see how it works.
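In that spirit, here's a minimal sketch of the traffic jam network in plain Python rather than the post's toy library. The structure (rush hour and bad weather as independent causes of a jam) follows the graph; the numbers are made up, just as the post's are:

```python
from itertools import product

# Made-up conditional probability tables for the traffic jam graph.
P_RUSH = 0.2
P_BAD_WEATHER = 0.3
P_JAM = {  # P(jam | rush_hour, bad_weather)
    (True, True): 0.9,
    (True, False): 0.5,
    (False, True): 0.4,
    (False, False): 0.1,
}

def p_jam():
    """Marginal P(jam): sum over every assignment of the parent variables."""
    total = 0.0
    for rush, bad in product([True, False], repeat=2):
        p_parents = ((P_RUSH if rush else 1 - P_RUSH)
                     * (P_BAD_WEATHER if bad else 1 - P_BAD_WEATHER))
        total += p_parents * P_JAM[(rush, bad)]
    return total

print(p_jam())  # 0.2*0.3*0.9 + 0.2*0.7*0.5 + 0.8*0.3*0.4 + 0.8*0.7*0.1 = 0.276
```

This brute-force enumeration is exactly what a probability monad does under the hood for small discrete networks.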
Yesterday I received an unexpected media query from Jen Doll, a journalist at New York Magazine, reporting on the story where Frank Bruni found Courtney Love’s iPhone in a taxi. She was musing about the statistical likelihood of an event like that, and somehow found this twitter thread where I had calculated the probability of getting the same cab driver twice. She wanted to know how I arrived at my figures and whether I had any additional insight on the question.
So of course I wrote her back a whole essay, and today there’s this article. Her editors had cut it way down, because journalism. But I had put all this work into it, so I thought I’d post it here.
The text of my reply is below.
This has nothing to do with the playground game, the church, or the mobile/social/local city guide that helps you make the most of where you are. (Disclosure: I work at Foursquare.)
This is about Lagrange’s four-square theorem, which states that every natural number can be expressed as the sum of four squares. For example, 123456789 = 2142^2 + 8673^2 + 6264^2 + 2100^2.
The proof given on the Wikipedia page is only an existence proof, but I was able to find a mostly constructive proof elsewhere online. I want to present an outline of the proof along with some code that carries out the construction. Here’s a preview:
*Main> foursquare 123456789
(-2142,8673,6264,-2100)
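For comparison with that Haskell preview, a brute-force version is easy to write in Python. This naive search is my own illustration, not the constructive proof the post outlines, and it is far too slow for inputs like 123456789:

```python
from math import isqrt

def foursquare_naive(n):
    """Find nonnegative (a, b, c, d) with a^2 + b^2 + c^2 + d^2 == n by brute force."""
    for a in range(isqrt(n) + 1):
        for b in range(a, isqrt(n - a * a) + 1):
            for c in range(b, isqrt(n - a * a - b * b) + 1):
                d2 = n - a * a - b * b - c * c
                d = isqrt(d2)
                if d * d == d2:
                    return (a, b, c, d)
    # Lagrange's theorem guarantees we never fall through for a natural number n.

print(foursquare_naive(310))  # (0, 2, 9, 15): 0 + 4 + 81 + 225 == 310
```

The constructive proof matters precisely because this kind of search does not scale; the point of the post is doing it without the brute force.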
Peer reviews are the most effective kind of feedback — only your peers really know what it’s like to work with you, and they have the most insightful, nuanced and helpful suggestions for improvement. Almost every tech company I can think of does them as part of their annual review process. The problem is that everyone hates writing them, because decent feedback takes a really long time to write, sometimes on the order of 4 or 5 hours for a single peer review.
I’d like to solve this problem. Most people think that there’s a natural, unavoidable relationship between quality and time spent. But that overlooks an important point — the thing that makes writing peer reviews difficult is: writing itself.
Here are 5 easy steps to collecting insightful, critical, honest peer review feedback, in about an hour:
Identify the wrong description of the above figures:
3 represents far-sightedness
1 represents far-sightedness
4 correction for far-sightedness
2 correction for short-sightedness
Answer: The correct answer is: 1 represents far-sightedness
The resolving limit of a healthy eye is about one minute of arc (1′).
A person is suffering from a myopic defect. He is able to see objects clearly when they are placed at 15 cm. What type of lens, and of what focal length, should he use to see clearly an object placed 60 cm away?
For viewing far objects, concave lenses are used; for the concave lens,
u = distance of the object he wants to see, v = distance at which he can see,
so from the lens formula $\frac{1}{f} = \frac{1}{v} - \frac{1}{u} = \frac{1}{-15} - \frac{1}{-60} = -\frac{1}{20}$, giving $f = -20\ \text{cm}$: a concave lens of focal length 20 cm.
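The arithmetic in that last step can be checked directly (sign convention: distances are measured from the lens, with object and image on the incoming side taken as negative):

```python
def lens_focal_length(u_cm, v_cm):
    """Thin-lens formula 1/f = 1/v - 1/u, distances in cm."""
    return 1 / (1 / v_cm - 1 / u_cm)

# Myopic eye: far point at 15 cm (v = -15), object to be viewed at 60 cm (u = -60).
f = lens_focal_length(u_cm=-60, v_cm=-15)
print(round(f, 6))  # -20.0, i.e. a concave (diverging) lens of focal length 20 cm
```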
Evaluate $\int_{-\pi/4}^{\pi/4} \frac{e^{x}(x\sin x)}{e^{2x}-1}\,dx$.

$\frac{l}{3}=\frac{m}{-3}=\frac{n}{-9}$, $\frac{l}{-1}=\frac{m}{1}=\frac{n}{3}$, $\frac{x-2}{-1}=\frac{y+1}{1}=\frac{z+1}{3}$. Since these two lines are intersecting, the shortest distance between the lines will be 0. Hence (c) is the correct answer.

Let $f:\mathbb{R}\to\mathbb{R}$, $f(x)=\begin{cases}|x-[x]|, & [x]\text{ is odd}\\ |x-[x+1]|, & [x]\text{ is even}\end{cases}$ where $[\cdot]$ denotes the greatest integer function; evaluate $\int_{-2}^{4} f(x)\,dx$.
Paths on Grids · USACO Guide
Authors: Nathan Chen, Michael Cao, Benjamin Qi, Andrew Wang
Contributor: Maggie Liu
Counting the number of "special" paths on a grid, and how some string problems can be solved using grids.
7.3 - Paths in a Grid
A common archetype of DP problems involves a 2D grid of square cells (like graph paper), and we have to analyze "paths." A path is a sequence of cells whose movement is restricted to one direction on the x-axis and one direction on the y-axis (for example, you may only be able to move down or to the right). Usually, the path also has to start in one corner of the grid and end on another corner. The problem may ask you to count the number of paths that satisfy some property, or it may ask you to find the max/min of some quantity over all paths.
Usually, the sub-problems in this type of DP are a sub-rectangle of the whole grid. For example, consider a problem in which we count the number of paths from (1, 1) to (N, M) when we can only move in the positive x-direction and the positive y-direction. Let \texttt{dp}[x][y] be the number of paths in the sub-rectangle whose corners are (1, 1) and (x, y). We know that the first cell in a path counted by \texttt{dp}[x][y] is (1, 1), and we know the last cell is (x, y). However, the second-to-last cell can either be (x-1, y) or (x, y-1). Thus, if we pretend to append the cell (x, y) to the paths that end on (x-1, y) or (x, y-1), we construct paths that end on (x, y). Working backwards like that motivates the following recurrence: \texttt{dp}[x][y] = \texttt{dp}[x-1][y] + \texttt{dp}[x][y-1]. We can use this recurrence to calculate \texttt{dp}[N][M]. Keep in mind that \texttt{dp}[1][1] = 1 because the path to (1, 1) is just a single cell. In general, thinking about how you can append cells to paths will help you construct the correct DP recurrence.
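The recurrence can be sketched in a few lines of Python (my own illustration, not the guide's code); for an N-by-M grid the count comes out to the binomial coefficient C(N+M-2, N-1):

```python
from math import comb

def count_paths(N, M):
    """Paths from (1, 1) to (N, M) via dp[x][y] = dp[x-1][y] + dp[x][y-1]."""
    dp = [[0] * (M + 1) for _ in range(N + 1)]
    dp[1][1] = 1  # the path to (1, 1) is a single cell
    for x in range(1, N + 1):
        for y in range(1, M + 1):
            if (x, y) != (1, 1):
                dp[x][y] = dp[x - 1][y] + dp[x][y - 1]
    return dp[N][M]

print(count_paths(3, 3), comb(4, 2))  # both 6: the dp matches the closed form
```

The closed form exists only because this grid has no obstacles; the dp formulation is what generalizes once cells can be blocked, as in the next problem.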
When using the DP recurrence, it's important that you compute the DP values in an order such that the dp-value for a cell is known before you use it to compute the dp-value for another cell. In the example problem above, it's fine to iterate through each row from 0 to M-1:

for (int i = 0; i < M; i++) {
	for (int j = 0; j < N; j++) {
		if (j > 0) dp[j][i] += dp[j - 1][i];
		if (i > 0) dp[j][i] += dp[j][i - 1];
	}
}
Note how the coordinates in the code are in the form (x coordinate, y coordinate). Most of the time, it's more convenient to think of points as (row, column) instead, which swaps the order of the coordinates, though the code uses the former format to be consistent with the definition of \texttt{dp}[x][y].
Solution - Grid Paths
In this problem, we are directly given a 2D grid of cells, and we have to count the number of paths from corner to corner that can only go down (positive y direction) and to the right (positive x direction), with a special catch: the path can't use a cell marked with an asterisk.
We come close to being able to use our original recurrence, but we have to modify it. Basically, if a cell (x, y) is normal, we can use the recurrence normally. But if cell (x, y) has an asterisk, the dp-value is 0, because no path can end on a trap.

\texttt{dp}[x][y] = \begin{cases} \texttt{dp}[x-1][y] + \texttt{dp}[x][y-1] & \text{if $(x, y)$ is not a trap} \\ 0 & \text{if $(x, y)$ is a trap} \end{cases}
The code for the DP recurrence doesn't change much:
ok = [[char == "." for char in input()] for _ in range(n)]
dp = [[0] * n for _ in range(n)]
dp[0][0] = int(ok[0][0])
for i in range(n):
	for j in range(n):
		# if current square is a trap
		if not ok[i][j]:
			dp[i][j] = 0
		elif i > 0 or j > 0:
			dp[i][j] = (dp[i - 1][j] if i > 0 else 0) + (dp[i][j - 1] if j > 0 else 0)
Note how the coordinates are now in the form (row, column) when reading in the input.
Solution - Longest Common Subsequence
The longest common subsequence is a classical string problem, but where's the grid?

In fact, we can create a grid to solve it. Think about the following algorithm to create any (not necessarily the longest) common subsequence between two strings A and B:
We start with two pointers, i and j, each beginning at 0. We do some "action" at each time step, until there are no more available "actions". An "action" can be any of the following:

1. Increase i by 1 (only works if i < |A|).
2. Increase j by 1 (only works if j < |B|).
3. Increase both i and j by 1, if A_i = B_j; append that character (A_i = B_j) to the common subsequence (only works if i < |A| and j < |B|).
We know that this process creates a common subsequence because characters which are common to both strings are found from left to right.
This algorithm can also be illustrated on a grid. Let A := xabcd and B := yazc. Then, the current state of the algorithm can be defined as a specific point (i, j), given by the two pointers i and j that we discussed previously. The process of increasing pointers can be seen as moving right (if i is increased), moving down (if j is increased), or moving diagonally (if both i and j increase). See that each diagonal movement adds one to the length of the common subsequence.

Now, we re-phrase "the length of the longest common subsequence" as "the maximum number of 'diagonal movements' ("action 3" in the above algorithm) in a path from the top-left corner to the bottom-right corner on the grid." Thus, we have constructed a grid-type DP problem.

In the above grid, see how the bolded path has diagonal movements at characters "a" and "c". That means the longest common subsequence between "xabcd" and "yazc" is "ac".
Based on the three "actions", which are also the three possible movements of the path, we can create a DP-recurrence to find the longest common subsequence:
\texttt{dp}[i][j] = \begin{cases} \max(\texttt{dp}[i-1][j], \texttt{dp}[i][j-1]) & \text{if }A_i \neq B_j \\ \texttt{dp}[i-1][j-1]+1, & \text{if }A_i = B_j \end{cases}
int longestCommonSubsequence(string a, string b) {
	vector<vector<int>> dp(a.size() + 1, vector<int>(b.size() + 1, 0));
	for (int i = 0; i < (int)a.size(); i++)
		for (int j = 0; j < (int)b.size(); j++) {
			dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j]);
			if (a[i] == b[j]) dp[i + 1][j + 1] = max(dp[i + 1][j + 1], dp[i][j] + 1);
		}
	return dp[a.size()][b.size()];
}
Ben - shorter version using macros:

int longestCommonSubsequence(str a, str b) {
	V<vi> dp(sz(a) + 1, vi(sz(b) + 1));
	F0R(i, sz(a) + 1) F0R(j, sz(b) + 1) {
		if (i < sz(a)) ckmax(dp[i + 1][j], dp[i][j]);
		if (j < sz(b)) ckmax(dp[i][j + 1], dp[i][j]);
		if (i < sz(a) && j < sz(b))
			ckmax(dp[i + 1][j + 1], dp[i][j] + (a[i] == b[j]));
	}
	return dp[sz(a)][sz(b)];
}
public int longestCommonSubsequence(String a, String b) {
	int[][] dp = new int[a.length() + 1][b.length() + 1];
	for (int i = 0; i < a.length(); i++)
		for (int j = 0; j < b.length(); j++) {
			dp[i + 1][j + 1] = Math.max(dp[i][j + 1], dp[i + 1][j]);
			if (a.charAt(i) == b.charAt(j))
				dp[i + 1][j + 1] = Math.max(dp[i + 1][j + 1], dp[i][j] + 1);
		}
	return dp[a.length()][b.length()];
}
def longestCommonSubsequence(self, a: str, b: str) -> int:
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
            if a[i - 1] == b[j - 1]:
                dp[i][j] = max(dp[i][j], dp[i - 1][j - 1] + 1)
    return dp[len(a)][len(b)]
Cow Checklist
Why Did the Cow Cross the Road II
We don't expect you to solve this task at this level, but you might find it interesting:
Circular Longest Common Subsequence
(No) Blind Date @ DAK Utrecht
This was a (no) blind date @ DAK Utrecht, 22-12-2016.
Winston Churchill on painting: Painting as a Pastime.
Interference on wiki and in a wine glass at home...
Thin-film interference is a natural phenomenon in which light waves reflected by the upper and lower boundaries of a thin film interfere with one another, either enhancing or reducing the reflected light. When the thickness of the film is an odd multiple of one quarter-wavelength of the light on it, the reflected waves from both surfaces interfere to cancel each other. Since the wave cannot be reflected, it is completely transmitted instead. When the thickness is a multiple of a half-wavelength of the light, the two reflected waves reinforce each other, increasing the reflection and reducing the transmission. Thus when white light, which consists of a range of wavelengths, is incident on the film, certain wavelengths (colors) are intensified while others are attenuated. Thin-film interference explains the multiple colors seen in light reflected from soap bubbles and oil films on water. It also is the mechanism behind the action of antireflection coatings used on glasses and camera lenses.
Studying the light reflected or transmitted by a thin film can reveal information about the thickness of the film or the effective refractive index of the film medium. Thin films have many commercial applications including anti-reflection coatings, mirrors, and optical filters.
Thin-film interference caused by an ITO defrosting coating on an Airbus cockpit window.
A thin film is a layer of material with thickness in the sub-nanometer to micron range. As light strikes the surface of a film it is either transmitted or reflected at the upper surface. Light that is transmitted reaches the bottom surface and may once again be transmitted or reflected. The Fresnel equations provide a quantitative description of how much of the light will be transmitted or reflected at an interface. The light reflected from the upper and lower surfaces will interfere. The degree of constructive or destructive interference between the two light waves depends on the difference in their phase. This difference in turn depends on the thickness of the film layer, the refractive index of the film, and the angle of incidence of the original wave on the film. Additionally, a phase shift of 180° or $\pi$ radians may be introduced upon reflection at a boundary where the incident medium has a lower refractive index than the medium behind it ($n_1 < n_2$).

The optical path difference (OPD) between the waves reflected from the upper and lower surfaces is

$$\mathrm{OPD} = n_2\left(\overline{AB} + \overline{BC}\right) - n_1\left(\overline{AD}\right),$$

where

$$\overline{AB} = \overline{BC} = \frac{d}{\cos\theta_2}, \qquad \overline{AD} = 2d\tan(\theta_2)\sin(\theta_1),$$

and, by Snell's law, $n_1\sin(\theta_1) = n_2\sin(\theta_2)$. Substituting,

$$\mathrm{OPD} = n_2\left(\frac{2d}{\cos\theta_2}\right) - 2d\tan(\theta_2)\,n_2\sin(\theta_2) = 2n_2 d\left(\frac{1-\sin^2\theta_2}{\cos\theta_2}\right) = 2n_2 d\cos\theta_2.$$

Constructive interference occurs when the OPD is an integer multiple of the wavelength $\lambda$:

$$2n_2 d\cos\theta_2 = m\lambda.$$
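To make the interference condition concrete, here is a quick numerical check. The film parameters (a soap-like film with n = 1.33 and d = 500 nm at normal incidence) are my own example values; with the half-wave shift at the upper surface, reflected maxima sit at wavelengths λ = 2·n·d·cos(θ₂)/(m − ½):

```python
from math import cos, radians

def reflected_maxima(n_film, d_nm, theta2_deg=0.0, orders=range(1, 6)):
    """Wavelengths (nm) of reflected maxima for a film with one half-wave phase shift."""
    opd = 2 * n_film * d_nm * cos(radians(theta2_deg))  # OPD = 2 n d cos(theta_2)
    return [opd / (m - 0.5) for m in orders]

# Soap-like film: n = 1.33, thickness 500 nm, normal incidence.
for m, lam in zip(range(1, 6), reflected_maxima(1.33, 500)):
    print(m, round(lam, 1))  # m = 3 gives 532.0 nm, a green reflection
```

Only the orders that land in the visible band (roughly 380 to 750 nm) color the film, which is why a given thickness picks out particular hues.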
Where incident light is monochromatic in nature, interference patterns appear as light and dark bands. Light bands correspond to regions at which constructive interference is occurring between the reflected waves and dark bands correspond to destructive interference regions. As the thickness of the film varies from one location to another, the interference may change from constructive to destructive. A good example of this phenomenon, termed "Newton's rings," demonstrates the interference pattern that results when light is reflected from a spherical surface adjacent to a flat surface. Concentric rings are viewed when the surface is illuminated with monochromatic light.
This section provides a simplified explanation of the phase relationship responsible for most of this phenomenon. The figures show two incident light beams (A and B). Each beam produces a reflected beam (dashed). The reflections of interest are beam A’s reflection off of the lower surface and beam B’s reflection off of the upper surface. These reflected beams combine to produce a resultant beam (C). If the reflected beams are in phase (as in the first figure) the resultant beam is relatively strong. If, on the other hand, the reflected beams have opposite phase, the resulting beam is attenuated (as in the second figure).
In a soap bubble, light travels from air ($n_{\mathrm{air}} = 1$) into the soap film ($n_{\mathrm{film}} > 1$). The reflection at the upper boundary undergoes a 180° phase shift because $n_{\mathrm{air}} < n_{\mathrm{film}}$, while the reflection at the lower boundary does not because $n_{\mathrm{film}} > n_{\mathrm{air}}$. Consequently, the condition for constructive interference is

$$2 n_{\mathrm{film}} d \cos\theta_2 = \left(m - \tfrac{1}{2}\right)\lambda,$$

and for destructive interference

$$2 n_{\mathrm{film}} d \cos\theta_2 = m\lambda,$$

where $d$ is the film thickness, $n_{\mathrm{film}}$ its refractive index, $\theta_2$ the angle of refraction inside the film, $m$ an integer, and $\lambda$ the wavelength of the light.
For a thin oil film on water, $n_{\mathrm{air}} < n_{\mathrm{water}} < n_{\mathrm{oil}}$. The reflection at the upper (air-oil) boundary is phase-shifted because $n_{\mathrm{air}} < n_{\mathrm{oil}}$, while the reflection at the lower (oil-water) boundary is not because $n_{\mathrm{oil}} > n_{\mathrm{water}}$. The interference conditions are therefore the same as for the soap film: constructive when $2 n_{\mathrm{oil}} d \cos\theta_2 = \left(m - \tfrac{1}{2}\right)\lambda$ and destructive when $2 n_{\mathrm{oil}} d \cos\theta_2 = m\lambda$.
An anti-reflection coating is a film of thickness $d$ and refractive index $n_{\mathrm{coating}}$ chosen so that $n_{\mathrm{air}} < n_{\mathrm{coating}} < n_{\mathrm{glass}}$ and, for normal incidence, $d = \lambda/(4 n_{\mathrm{coating}})$. Because $n_{\mathrm{air}} < n_{\mathrm{coating}}$ at the upper boundary and $n_{\mathrm{coating}} < n_{\mathrm{glass}}$ at the lower one, both reflections undergo a 180° phase shift, so here constructive interference occurs when $2 n_{\mathrm{coating}} d \cos\theta_2 = m\lambda$ and destructive interference when $2 n_{\mathrm{coating}} d \cos\theta_2 = \left(m - \tfrac{1}{2}\right)\lambda$. The quarter-wave thickness thus produces destructive interference of the reflected light at normal incidence $\left(\theta_2 = 0\right)$, minimizing reflection at the design wavelength.
The blue wing patches of the Aglais io butterfly are due to thin-film interference.[1]
The gloss of buttercup flowers is due to thin-film interference.
Structural coloration due to thin-film layers is common in the natural world. The wings of many insects act as thin films because of their minimal thickness. This is clearly visible in the wings of many flies and wasps. In butterflies, the thin-film optics are visible when the wing itself is not covered by pigmented wing scales, which is the case in the blue wing spots of the Aglais io butterfly.[1] The glossy appearance of buttercup flowers is also due to a thin film[2] as well as the shiny breast feathers of the bird of paradise.[3]
An antireflection coated piece of glass (right) compared to an ordinary uncoated glass (left)
Thin-film coatings are used in dielectric mirrors to provide near-total reflection of light over a limited range of wavelengths.
Iridescence caused by thin-film interference is a commonly observed phenomenon in nature, being found in a variety of plants and animals. One of the first known studies of this phenomenon was conducted by Robert Hooke in 1665. In Micrographia, Hooke postulated that the iridescence in peacock feathers was caused by thin, alternating layers of plate and air. In 1704, Isaac Newton stated in his book Opticks that the iridescence in a peacock feather was due to the fact that the transparent layers in the feather were so thin.[4] In 1801, Thomas Young provided the first explanation of constructive and destructive interference. Young's contribution went largely unnoticed until the work of Augustin Fresnel, who helped to establish the wave theory of light in 1816.[5] However, very little explanation of the iridescence could be given until the 1870s, when James Maxwell and Heinrich Hertz helped to explain the electromagnetic nature of light.[4] After the invention of the Fabry–Perot interferometer in 1899, the mechanisms of thin-film interference could be demonstrated on a larger scale.[5]
[1] Stavenga, D. G. (2014). "Thin Film and Multilayer Optics Cause Structural Colors of Many Insects and Birds". Materials Today: Proceedings. 1: 109. doi:10.1016/j.matpr.2014.09.007.
[2] Van Der Kooi, C. J.; Wilts, B. D.; Leertouwer, H. L.; Staal, M.; Elzenga, J. T. M.; Stavenga, D. G. (2014). "Iridescent flowers? Contribution of surface structures to optical signaling". New Phytologist. 203 (2): 667. doi:10.1111/nph.12808. PMID 24713039.
[3] Stavenga, D. G.; Leertouwer, H. L.; Marshall, N. J.; Osorio, D. (2010). "Dramatic colour changes in a bird of paradise caused by uniquely structured breast feather barbules". Proceedings of the Royal Society B: Biological Sciences. 278 (1715): 2098. doi:10.1098/rspb.2010.2293.
[4] Kinoshita, Shūichi (2008). Structural Colors in the Realm of Nature. World Scientific Publishing. pp. 3–6.
[5] Macleod, Hugh Angus (2001). Thin-Film Optical Filters. Institute of Physics Publishing. pp. 1–4.
Kinoshita, Shūichi (2008). Structural Colors in the Realm of Nature. World Scientific Publishing. pp. 165–167.
Knittl, Zdeněk (1976). Optics of Thin Films: An Optical Multilayer Theory. Wiley.
|
Per-Unit System of Units - MATLAB & Simulink - MathWorks España
What Is the Per-Unit System?
Example 1: Three-Phase Transformer
Example 2: Asynchronous Machine
Base Values for Instantaneous Voltage and Current Waveforms
Why Use the Per-Unit System Instead of the Standard SI Units?
The per-unit system is widely used in the power system industry to express values of voltages, currents, powers, and impedances of various power equipment. It is typically used for transformers and AC machines.
For a given quantity (voltage, current, power, impedance, torque, etc.) the per-unit value is the value related to a base quantity.
\text{quantity in p}\text{.u}\text{. = }\frac{\text{quantity expressed in SI units}}{\text{base value}}
Generally the following two base values are chosen:
The base power = nominal power of the equipment
The base voltage = nominal voltage of the equipment
All other base quantities are derived from these two base quantities. Once the base power and the base voltage are chosen, the base current and the base impedance are determined by the natural laws of electrical circuits.
\begin{array}{l}\text{base current = }\frac{\text{base power}}{\text{base voltage}}\\ \text{base impedance = }\frac{\text{base voltage}}{\text{base current}}\text{= }\frac{{\text{(base voltage)}}^{2}}{\text{base power}}\end{array}
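These relations can be sketched in a few lines of code. This is a minimal illustration (the function names are mine, not part of any toolbox), using a 100 kVA, 14434 V winding as sample input:

```python
def per_unit_bases(base_power_va, base_voltage_v):
    """Derive the base current and base impedance from the two chosen bases."""
    base_current = base_power_va / base_voltage_v           # A
    base_impedance = base_voltage_v ** 2 / base_power_va    # ohm
    return base_current, base_impedance

def to_pu(quantity_si, base_value):
    """quantity in p.u. = quantity expressed in SI units / base value"""
    return quantity_si / base_value

# Per-phase bases of a 100 kVA winding rated 14434 V RMS:
i_base, z_base = per_unit_bases(100e3, 14434.0)   # ~6.93 A, ~2083 ohm
```

Any impedance in ohms divided by `z_base` then gives its per-unit value directly.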
For a transformer with multiple windings, each having a different nominal voltage, the same base power is used for all windings (nominal power of the transformer). However, according to the definitions, there are as many base values as windings for voltages, currents, and impedances.
The saturation characteristic of the saturable transformer is given in the form of an instantaneous current versus instantaneous flux-linkage curve: [i1 phi1; i2 phi2; ...; in phin].
When the per-unit system is used to specify the transformer R L parameters, the flux linkage and current in the saturation characteristic must also be specified in pu. The corresponding base values are
\begin{array}{l}\text{base instantaneous current = (base rms current) }×\text{ }\sqrt{2}\\ \text{base flux linkage = }\frac{\text{(base rms voltage) }×\text{ }\sqrt{2}}{2\pi ×\text{ (base frequency)}}\end{array}
where current, voltage, and flux linkage are expressed respectively in amperes, volts, and volt-seconds.
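As a quick check of the two formulas above, here is a short sketch (the winding values and the 60 Hz frequency are assumptions taken from the transformer example later in this page):

```python
import math

def saturation_bases(base_rms_current, base_rms_voltage, base_frequency):
    """Base values for expressing the saturation curve [i, phi] in pu."""
    base_inst_current = base_rms_current * math.sqrt(2)   # A, peak
    base_flux_linkage = (base_rms_voltage * math.sqrt(2)
                         / (2 * math.pi * base_frequency))  # V*s
    return base_inst_current, base_flux_linkage

# Winding-1 bases (6.928 A RMS, 14434 V RMS), assuming a 60 Hz system:
i_peak, phi_base = saturation_bases(6.928, 14434.0, 60.0)
```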
For AC machines, the torque and speed can also be expressed in pu. The following base quantities are chosen:
The base speed = synchronous speed
The base torque = torque corresponding to base power and synchronous speed
\text{base torque = }\frac{\text{base power (3 phases) in VA}}{\text{base speed in radians/second}}
Instead of specifying the rotor inertia in kg·m², you would generally give the inertia constant H, defined as
\begin{array}{c}H=\frac{\text{kinetic energy stored in the rotor at synchronous speed in joules}}{\text{machine nominal power in VA}}\\ H=\frac{\frac{1}{2}×J\cdot {w}^{2}}{Pnom}\end{array}
The inertia constant is expressed in seconds. For large machines, this constant is around 3–5 seconds. An inertia constant of 3 seconds means that the energy stored in the rotating part could supply the nominal load during 3 seconds. For small machines, H is lower. For example, for a 3-HP motor, it can be 0.5–0.7 seconds.
Consider, for example, a three-phase two-winding transformer with these manufacturer-provided, typical parameters:
Nominal power = 300 kVA total for three phases
Winding 1: connected in wye, nominal voltage = 25-kV RMS line-to-line
resistance 0.01 pu, leakage reactance = 0.02 pu
Winding 2: connected in delta, nominal voltage = 600-V RMS line-to-line
Magnetizing losses at nominal voltage in % of nominal current:
Resistive 1%, Inductive 1%
The base values for each single-phase transformer are first calculated:
For winding 1:
Base power: 300 kVA/3 = 100e3 VA/phase
Base voltage: 25 kV/sqrt(3) = 14434 V RMS
Base current: 100e3/14434 = 6.928 A RMS
Base impedance: 14434/6.928 = 2083 Ω
Base inductance: 2083/(2π*60) = 5.525 H
For winding 2:
Base power: 300 kVA/3 = 100e3 VA
Base voltage: 600 V RMS
Base current: 100e3/600 = 166.7 A RMS
Base impedance: 600/166.7 = 3.60 Ω
Base inductance: 3.60/(2π*60) = 0.009549 H
The values of the winding resistances and leakage inductances expressed in SI units are therefore
For winding 1: R1= 0.01 * 2083 = 20.83 Ω; L1= 0.02*5.525 = 0.1105 H
For winding 2: R2= 0.01 * 3.60 = 0.0360 Ω; L2= 0.02*0.009549 = 0.191 mH
For the magnetizing branch, magnetizing losses of 1% resistive and 1% inductive mean a magnetizing resistance Rm of 100 pu and a magnetizing inductance Lm of 100 pu. Therefore, the values expressed in SI units referred to winding 1 are
Rm = 100*2083 = 208.3 kΩ
Lm = 100*5.525 = 552.5 H
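The whole conversion can be reproduced with a short script. This is an illustrative sketch of the arithmetic above (variable names are mine; the 60 Hz system frequency is assumed):

```python
import math

F_HZ = 60.0  # assumed system frequency

def winding_bases(base_power_va, base_voltage_v):
    """Base impedance and base inductance for one winding."""
    z_base = base_voltage_v ** 2 / base_power_va   # ohm
    l_base = z_base / (2 * math.pi * F_HZ)         # H
    return z_base, l_base

# Winding 1: 100 kVA per phase at 25 kV/sqrt(3) line-to-neutral
z1, l1 = winding_bases(100e3, 25e3 / math.sqrt(3))
R1, L1 = 0.01 * z1, 0.02 * l1          # ~20.83 ohm, ~0.1105 H
# Winding 2: 100 kVA at 600 V
z2, l2 = winding_bases(100e3, 600.0)
R2, L2 = 0.01 * z2, 0.02 * l2          # ~0.036 ohm, ~0.191 mH
# Magnetizing branch: 1% losses -> 100 pu, referred to winding 1
Rm, Lm = 100 * z1, 100 * l1            # ~208.3 kohm, ~552.5 H
```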
Now consider a three-phase, four-pole Asynchronous Machine block in SI units. It is rated 3 HP, 220 V RMS line-to-line, 60 Hz.
The stator and rotor resistance and inductance referred to stator are
Rs = 0.435 Ω; Ls = 2 mH
Rr = 0.816 Ω; Lr = 2 mH
The mutual inductance is Lm = 69.31 mH. The rotor inertia is J = 0.089 kg.m2.
The base quantities for one phase are calculated as follows:
Base power: 3 HP*746 VA/3 = 746 VA/phase
Base voltage: 220 V/sqrt(3) = 127.0 V RMS
Base current: 746/127.0 = 5.874 A RMS
Base impedance: 127.0/5.874 = 21.62 Ω
Base inductance: 21.62/(2π*60) = 0.05735 H = 57.35 mH
Base speed: 1800 rpm = 1800*(2π)/60 = 188.5 radians/second
Base torque (three-phase): 746*3/188.5 = 11.87 newton-meters
Using the base values, you can compute the values in per-units.
Rs = 0.435/21.62 = 0.0201 pu; Ls = 2/57.35 = 0.0349 pu
Rr = 0.816/21.62 = 0.0377 pu; Lr = 2/57.35 = 0.0349 pu
Lm = 69.31/57.35 = 1.208 pu
The inertia constant H is calculated from the inertia J, the synchronous speed, and the nominal power.
H=\frac{\frac{1}{2}×J\cdot {w}^{2}}{Pnom}=\frac{\frac{1}{2}×0.089×{\left(188.5\right)}^{2}}{3×746}=0.7065\text{ seconds}
If you open the dialog box of the Asynchronous Machine block in pu units provided in the Machines library of the Simscape™ Electrical™ Specialized Power Systems Fundamental Blocks library, you find that the parameters in pu are the ones calculated.
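The same per-unit conversion for the machine can be scripted as follows — an illustrative sketch of the arithmetic above, not part of the Simulink toolbox:

```python
import math

P_NOM = 3 * 746.0   # 3 HP in VA
V_LL = 220.0        # line-to-line RMS voltage
F_HZ = 60.0

# Per-phase base quantities
s_base = P_NOM / 3                       # 746 VA
v_base = V_LL / math.sqrt(3)             # ~127.0 V
i_base = s_base / v_base                 # ~5.874 A
z_base = v_base / i_base                 # ~21.62 ohm
l_base = z_base / (2 * math.pi * F_HZ)   # ~57.35 mH

# SI parameters converted to per-unit
Rs_pu = 0.435 / z_base       # ~0.0201 pu
Ls_pu = 2e-3 / l_base        # ~0.0349 pu
Lm_pu = 69.31e-3 / l_base    # ~1.208 pu

# Inertia constant H in seconds (4 poles -> 1800 rpm synchronous speed)
w_sync = 1800 * 2 * math.pi / 60         # ~188.5 rad/s
H = 0.5 * 0.089 * w_sync ** 2 / P_NOM    # ~0.7065 s
```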
When displaying instantaneous voltage and current waveforms on graphs or oscilloscopes, you normally consider the peak value of the nominal sinusoidal voltage as 1 pu. In other words, the base values used for voltage and currents are the RMS values given multiplied by
\sqrt{2}
Here are the main reasons for using the per-unit system:
When values are expressed in pu, the comparison of electrical quantities with their "normal" values is straightforward.
For example, a transient voltage reaching a maximum of 1.42 pu indicates immediately that this voltage exceeds the nominal value by 42%.
The values of impedances expressed in pu stay fairly constant whatever the power and voltage ratings.
For example, for all transformers in the 3–300 kVA power range, the leakage reactance is approximately 0.01–0.03 pu and the winding resistances are between 0.005 and 0.01 pu, whatever the nominal voltage. For transformers in the 300 kVA to 300 MVA range, the leakage reactance is approximately 0.03–0.12 pu and the winding resistances are between 0.002 and 0.005 pu.
Similarly, for salient pole synchronous machines, the synchronous reactance Xd is generally 0.60–1.50 pu, whereas the subtransient reactance X'd is generally 0.20–0.50 pu.
This means that if you do not know the parameters for a 10-kVA transformer, you are not making a major error by assuming an average value of 0.02 pu for the leakage reactance and 0.0075 pu for the winding resistances.
The calculations using the per-unit system are simplified. When all impedances in a multivoltage power system are expressed on a common power base and on the nominal voltages of the different subnetworks, the total impedance in pu seen at one bus is obtained by simply adding all impedances in pu, without considering the transformer ratios.
|
On the Summation of Series in Terms of Bessel Functions | EMS Press
On the Summation of Series in Terms of Bessel Functions
Slobodan B. Trickovic
Mirjana V. Vidanovic
Miomir S. Stankovic
In this article we deal with summation formulas for the series
\sum_{n=1}^\infty\frac{J_\mu(nx)}{n^\nu}\,,
referring partly to some results from our paper in J. Math. Anal. Appl. 247 (2000), 15–26. We show how these formulas arise from different representations of Bessel functions. In other words, we first apply Poisson's or Bessel's integral; then, in the sequel, we define a function by means of the power series representation of Bessel functions and make use of Poisson's formula. Both closed-form cases and those where it is necessary to take a limit are thoroughly analyzed.
Slobodan B. Trickovic, Mirjana V. Vidanovic, Miomir S. Stankovic, On the Summation of Series in Terms of Bessel Functions. Z. Anal. Anwend. 25 (2006), no. 3, pp. 393–406
|
Old Matrixswap Whitepaper - Matrix Labs
This is the old Matrixswap whitepaper, for reference.
Matrixswap is a decentralized virtual-AMM-based perpetual swaps trading protocol deployed on the Polkadot, Cardano, and Polygon (Ethereum layer 2) blockchains. Unlike on traditional AMMs, users can long or short perpetual contracts on any asset with up to 10x leverage. While most decentralized derivative trading platforms face liquidity concerns, the Matrixswap vAMM offers infinite on-chain liquidity.
What makes Matrixswap attractive and unique as a trading platform are the following key features:
While most decentralized derivative trading platforms face liquidity concerns, Matrixswap vAMM offers trades with up to 10x leverage with infinite on-chain liquidity,
100% on-chain and 100% non-custodial trading,
By interacting with Matrixswap's vAMM smart contracts, users can gain exposure to derivatives for any assets on the market,
Matrixswap aims to deploy on multiple blockchains: Polygon, Polkadot and Cardano. By leveraging cross-chain bridges, Matrixswap aims to unlock liquidity from different blockchain networks thereby empowering traders with maximum capital efficiency,
Matrixswap provides users an Emergency Nuke Button (DEX aggregator) that allows users to convert multiple (or all) tokens into one single asset under one transaction. This is made possible by the nature of the underlying blockchain (low fees, high throughput) and shared liquidity amongst DEXs,
The Matrixswap platform utility and governance token MATRIX is built with a strong token economic design. 50% of the platform trading fees will be allocated towards token swapback in order to offset rewards token emission.
Matrixswap's main component is "THE MATRIX" which acts as the core and interacts with other components in the eco-system.
Matrixswap eco-system
Perpetual Swaps are cryptocurrency derivatives that enable traders to speculate on the valuation of specific underlying assets. Perpetual contracts have two key features:
(i) There are no expiry dates on contracts. Contracts are effective until traders close their positions.
(ii) The underlying asset itself is never traded; therefore, custody issues are mitigated. The swap price closely tracks the price of the underlying asset by utilizing funding rates, a mechanism that ensures the convergence of the perpetual mark price to the index price.
In general, the main difference of perpetual contract trading on Matrixswap is that all assets and trades are stored and executed on-chain. Unlike Binance, Bitmex perp contracts, Matrixswap doesn't rely on counterparties and there is no risk of centralized off-chain servers. With Matrixswap, users have full custody of their own funds.
vAMM (Virtual AMM)
Matrixswap's vAMM (pioneered by perp.fi) uses the same x*y=k constant product formula as most AMMs. However, as the word "virtual" suggests, the vAMM itself does not contain an actual asset pool (k). Instead, the actual assets (traders' collateral) are kept in a smart contract vault that oversees all of the vAMM's collateral. In other words, Matrixswap uses vAMMs as price discovery mechanisms, not for spot trading. This allows Matrixswap to operate with infinite liquidity and zero impermanent loss for stakeholders, as liquidity providers aren't required.
The vAMM acts as an independent settlement market, all profits and losses are directly settled in a collateral vault.
Let's first look at how a position is opened:
Trader Neo sends 100 USDC to the Clearing House on Matrixswap and declares to use that fund as the initial margin to open a 3x leveraged long position.
Clearing House then deposits the 100 USDC into the Collateral Vault. Matrixswap subsequently updates the virtual token amount in vAMM based on the initial margin value, long or short position, and leverage amount.
As demonstrated in the graph above, the deposited funds from Neo aren't stored inside the vAMM. Instead, the funds are stored in the collateral vault for future settlements.
Now let's look at how Neo can profit from his position:
Let's assume we have 1,000 vDOT and 10,000 vUSDC in the vAMM as its initial state.
Step 1: Neo uses 100 USDC as the margin to open a 3x leveraged long position. The amount of vUSDC in the vAMM changes to 10,300 (10,000 + 100×3) and the amount of vDOT changes to 970.87 (1,000×10,000/10,300), calculated from x*y=k. Neo is now credited with 29.13 DOT (1,000 − 970.87).
Neo opens 29.13 long (300 vUSDC)
Step 2: Following Neo, Trader Morpheus uses 100 USDC to open a long position with 5x leverage. vAMM credits him 44.95 DOT (970.87-925.92).
Morpheus opens 44.95 long (500 vUSDC)
Step 3: Neo sees that the price has gone up, decides to close his position, and realizes a profit of 29.35 USDC.
Neo closes long and earns (329.35-300)=29.35 USDC
Step 4: Seeing the price drop, Morpheus closes his position to prevent further damage, only to find that he has lost 29.35 USDC.
Morpheus closes long and loses (470.65-500)=-29.35 USDC
As demonstrated above, all profits and losses are directly settled between traders. One trader's gain is another trader's loss, there's always enough capital inside the collateral vault to pay back everyone. That's how we achieve leverage with infinite liquidity.
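The four steps above can be replayed with a small constant-product sketch. This is an illustrative model only (the class and method names are mine, and fees and funding are ignored):

```python
class VAMM:
    """Minimal x*y=k virtual AMM; collateral-vault accounting is left out."""
    def __init__(self, base, quote):
        self.base, self.quote = base, quote   # virtual DOT, virtual USDC
        self.k = base * quote                 # constant product, fixed at setup

    def open_long(self, margin, leverage):
        """Swap notional into the quote side; return vDOT credited to the trader."""
        notional = margin * leverage
        new_base = self.k / (self.quote + notional)
        credited = self.base - new_base
        self.base, self.quote = new_base, self.quote + notional
        return credited

    def close_long(self, position):
        """Return vDOT to the pool; return the vUSDC paid out."""
        new_quote = self.k / (self.base + position)
        payout = self.quote - new_quote
        self.base, self.quote = self.base + position, new_quote
        return payout

amm = VAMM(1_000.0, 10_000.0)
neo_pos = amm.open_long(100, 3)              # ~29.13 vDOT
morpheus_pos = amm.open_long(100, 5)         # ~44.95 vDOT
neo_out = amm.close_long(neo_pos)            # ~329.4 vUSDC back vs 300 in
morpheus_out = amm.close_long(morpheus_pos)  # ~470.6 vUSDC back vs 500 in
```

Note that the two payouts sum to exactly the 800 vUSDC of notional put in: one trader's gain is the other's loss, as the text states.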
While a traditional AMM's k-value is determined by the assets actually deposited in the pool (k = x*y), a vAMM's k-value is a number set by the protocol's architect to ensure an optimal trading experience in The Matrix. At the project's early stage, the Matrixswap core contributors will act as the protocol's architect, setting the k-value for each vAMM. A few factors are taken into consideration when setting the k-value:
Underlying asset trading volume
Underlying asset pool value
Underlying asset index price volatility
Platform open interest
When the margin ratio of the trader becomes lower than the maintenance margin ratio (now set at 6.25%, subject to changes from ZionDAO), liquidation will be triggered.
Margin Ratio = (Initial Margin + Unrealized PnL) / Open Position Notional Size
Unrealized PnL = (Exit Price − Entry Price) × Position Size + Funding Fee
Position Notional = Position Size × Market Price
Open Position Notional Size = Position Size × Entry Price
Liquidations are triggered by Agents. As a reward for providing this service, they earn 1.25% of the remaining notional. The remaining margin will be deposited into the Insurance Fund.
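As a sketch of the liquidation rule above (the numbers and names here are illustrative, not protocol values):

```python
MAINTENANCE_MARGIN_RATIO = 0.0625   # 6.25%, per the current protocol setting

def margin_ratio(initial_margin, entry_price, exit_price, position_size,
                 funding_fee=0.0):
    """Margin Ratio = (Initial Margin + Unrealized PnL) / OpenPositionNotionalSize."""
    unrealized_pnl = (exit_price - entry_price) * position_size + funding_fee
    open_notional = position_size * entry_price
    return (initial_margin + unrealized_pnl) / open_notional

def is_liquidatable(ratio):
    return ratio < MAINTENANCE_MARGIN_RATIO

# A 3x long: 100 margin, ~29.13 units entered at 10.30, price now 8.50
ratio = margin_ratio(100.0, 10.30, 8.50, 29.13)   # ~0.159, still safe
```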
To ensure that the perpetual contract price inside The Matrix converges with the index price in the real world, Matrixswap introduces an hourly funding rate: when the perpetual trades at a premium, longs pay shorts; when it trades at a discount, shorts pay longs. The funding payment is calculated as position size × (TWAP of perpetual − TWAP of index)/24.
Matrixswap utilizes the same funding rate formula as FTX does, as shown below:
funding payment = position size × funding rate
funding rate = (TWAPperpetual − TWAPindex) / 24
All TWAP data will be sourced from the Oracle.
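A one-function sketch of the hourly funding payment (the TWAP numbers are illustrative assumptions):

```python
def funding_payment(position_size, twap_perpetual, twap_index):
    """Positive for a long when the perp trades at a premium: longs pay shorts."""
    funding_rate = (twap_perpetual - twap_index) / 24
    return position_size * funding_rate

# Long 29.13 units while the perp TWAP (10.45) sits above the index TWAP (10.30):
payment = funding_payment(29.13, 10.45, 10.30)   # this long pays ~0.182
```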
A trading fee of 0.1% is charged when opening and closing positions in Matrixswap. It is important to note that the fees are not collected as revenue for the protocol, but as insurance backup for the system. 50% of the trading fees will be deposited into the Insurance Fund, a vault that ensures the stability of the protocol. In the event of unexpected losses from the liquidation process, the Insurance Fund will absorb the losses. The other 50% of the trading fees will be allocated towards the following events:
Swapback of platform token MATRIX for single token stakers
Trading competition prizes
Matrixswap Components
Matrixswap's main component is "THE MATRIX", which acts as the core of the entire eco-system and interacts with the following four major components:
(i) Construct - interoperable cross-chain bridges,
(ii) Nebuchadnezzar - a DEX aggregator,
(iii) Oracle - a price feed data source,
(iv) ZionDAO - a decentralized governance mechanism.
In order for users to bridge liquidity from one chain to another, Matrixswap relies on partnered projects to offer secured and reliable cross-chain bridges.
Nebuchadnezzar is a DEX aggregator that connects with high-volume AMMs operating on the Polkadot, Polygon and Cardano networks. Nebuchadnezzar aims to provide traders the ability to swap multiple tokens into one single asset under one transaction in the event of an emergency. This is made possible by the nature of the underlying blockchains (low fees, high throughput) and shared liquidity amongst AMMs.
Nebuchadnezzar DEX aggregator
Matrixswap Market Oracle nodes receive information on the time-weighted average price (TWAP) of assets. Upon mainnet launch, Matrixswap oracle providers will provide TWAP information for all assets (major coins & shit coins) by utilizing our own nodes that interact with DEXs' APIs for market pairs' real-time data. As a starting point, we will utilize the Kylin data oracle as our real-time index price feed for stablecoins and major assets, as they provide oracle solutions in the Polkadot/Substrate framework. Each provider will be the sole data supplier for one asset; for example, provider k is the sole provider for asset k.
In the near future, Matrixswap plans to incorporate Chainlink oracle price feeds in addition to Kylin, as it increases security and network validations.
Matrixswap Market Oracle Solution
MATRIX token is the governance token of Matrixswap and it possesses the right to propose changes to the protocol, voting rights to proposals, and overall project direction through ZionDAO.
Considering that Matrixswap will still be at an early stage upon mainnet launch, the Matrixswap core contributors will spearhead the decision-making and execution of the project in order to ensure long-term growth. As the platform matures and a robust on-chain governance mechanism is in place, the Matrixswap core contributors will let the community govern decisions and executions. During this period, we will still hear the community's voice and factor in its interests at every step we take. Community feedback will be obtained from Telegram, Discord, and also through external voting platforms as and when needed.
At this stage, we expect the ZionDAO community council to be able to propose, vote on, and implement upgrades and changes to Matrixswap by staking MATRIX. The major implementation and deployment of the smart contracts will still be done by the Matrixswap core contributors.
Market Listings,
Token Bridge Asset Whitelist,
vAMM Fee Structure,
Nebuchadnezzar DEX Whitelist,
Platform Usage Required Token Amount,
Oracle Providers.
ERC-20 (Token will be bridged at launch)
MATRIX Token Distribution
MATRIX Tokenmetrics
10% unlocked at TGE.
Daily unlock for 9 months.
Eco-system Rewards
Linear vesting for 2 years.
Locked for 180 days following TGE, daily unlock for 2 years.
Locked for 120 days following TGE, daily unlock for 18 months.
|
Traveling Wave Solution in a Diffusive Predator-Prey System with Holling Type-IV Functional Response
Deniu Yang, Lihan Liu, Hongyong Wang
We establish the existence of a traveling wave solution for a reaction-diffusion predator-prey system with Holling type-IV functional response. For simplicity, only one space dimension is involved; the traveling wave solution is equivalent to a heteroclinic orbit in
{R}^{\mathrm{3}}
. The methods used to prove the result are the shooting argument and invariant manifold theory.
Deniu Yang. Lihan Liu. Hongyong Wang. "Traveling Wave Solution in a Diffusive Predator-Prey System with Holling Type-IV Functional Response." Abstr. Appl. Anal. 2014 (SI67) 1 - 7, 2014. https://doi.org/10.1155/2014/409264
|
On Fredholm boundary conditions on manifolds with corners I: Global corner cycles obstructions
Paulo Carrillo Rouse, Jean-Marie Lescure, Mario Velásquez
Given a connected manifold with corners of any codimension there is a very basic and computable homology theory called conormal homology defined in terms of faces and orientations of their conormal bundles, and whose cycles correspond geometrically to corner cycles.
Our main theorem is that, for any manifold with corners X of any codimension, there is a natural and explicit morphism
{K}_{\ast }\left(\mathsc{𝒦}\left(X\right)\right)\stackrel{T}{\to }{H}_{\ast }^{\text{pcn}}\left(X,ℚ\right)
between the K-theory group of the algebra \mathsc{𝒦}\left(X\right) of b-compact operators for X and the periodic conormal homology group with rational coefficients, and that T is a rational isomorphism.
As shown by the first two authors in a previous paper, this computation implies that the rational groups {H}_{\text{ev}}^{\text{pcn}}\left(X,ℚ\right) provide an obstruction to the Fredholm perturbation property for compact connected manifolds with corners.
This paper differs from that previous paper, in which the problem is solved in low codimensions, in that here we overcome the problem of computing the higher spectral sequence K-theory differentials associated to the canonical filtration by codimension by introducing an explicit topological space whose singular cohomology is canonically isomorphic to the conormal homology and whose K-theory is naturally isomorphic to the K-theory groups of the algebra \mathsc{𝒦}\left(X\right).
Paulo Carrillo Rouse. Jean-Marie Lescure. Mario Velásquez. "On Fredholm boundary conditions on manifolds with corners I: Global corner cycles obstructions." Ann. K-Theory 6 (4) 607 - 628, 2022. https://doi.org/10.2140/akt.2021.6.607
Received: 24 October 2019; Revised: 22 February 2021; Accepted: 22 March 2021; Published: 2022
Primary: 19K56 , 55N15 , 58H05 , 58J22 , 58J40
Keywords: Index theory , K-theory , Lie groupoids , Manifolds with corners
|
In the context of modified f(R) gravity theory, we study time-dependent wormhole spacetimes in the radiation background. In this framework, we attempt to generalize the thermodynamic properties of time-dependent wormholes in f(R) gravity. Finally, at the event horizon, the rate of change of the total entropy is discussed.
f(R) Gravity, Time-Dependent Wormholes, Thermodynamics, Event Horizon
Saiedi, H. (2017) Modified f(R) Gravity and Thermodynamics of Time-Dependent Wormholes at Event Horizon. Journal of High Energy Physics, Gravitation and Cosmology, 3, 708-714. doi: 10.4236/jhepgc.2017.34053.
The time-dependent wormhole metric is
\text{d}{s}^{2}=-{\text{e}}^{2\Phi \left(t,r\right)}\text{d}{t}^{2}+{a}^{2}\left(t\right)\left[\frac{\text{d}{r}^{2}}{1-\frac{b\left(r\right)}{r}}+{r}^{2}\text{d}{\Omega }_{2}^{2}\right],
where \text{d}{\Omega }_{2}^{2}=\text{d}{\theta }^{2}+{\mathrm{sin}}^{2}\theta \text{d}{\varphi }^{2}, a\left(t\right) is the scale factor, b\left(r\right) is the shape function, and \Phi \left(t,r\right) is the redshift function; in what follows we set \Phi \left(t,r\right)=0.
The metric can be rewritten as
\text{d}{s}^{2}={h}_{ab}\text{d}{x}^{a}\text{d}{x}^{b}+{\stackrel{˜}{r}}^{2}\text{d}{\Omega }_{2}^{2},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left(a,b=0,1\right)
where {x}^{0}=t, {x}^{1}=r, \stackrel{˜}{r}=a\left(t\right)r, and the two-dimensional metric {h}_{ab} is
{h}_{ab}=\mathrm{diag}\left[-1,{a}^{2}\left(t\right){\left(1-\frac{b\left(r\right)}{r}\right)}^{-1}\right].
The surface gravity is
\kappa =\frac{1}{2\sqrt{-h}}{\partial }_{a}\left(\sqrt{-h}{h}^{ab}{\partial }_{b}\stackrel{˜}{r}\right),
where h is the determinant of {h}_{ab}. Evaluated at the horizon radius {\stackrel{˜}{r}}_{h}, this gives
\kappa =-\frac{{\stackrel{˜}{r}}_{h}}{2}\left(\stackrel{˙}{H}+2{H}^{2}\right)+\frac{1}{4{\stackrel{˜}{r}}_{h}^{2}}\left(ab\left(r\right)-{\stackrel{˜}{r}}_{h}{b}^{\prime }\left(r\right)\right),
where {b}^{\prime }=\partial b/\partial r and H=\stackrel{˙}{a}/a is the Hubble parameter. The horizon temperature {T}_{h}=\kappa /\text{2π} is then
{T}_{h}=-\frac{{\stackrel{˜}{r}}_{h}}{4\text{π}}\left(\stackrel{˙}{H}+2{H}^{2}\right)+\frac{1}{8\text{π}{\stackrel{˜}{r}}_{h}^{2}}\left(ab\left(r\right)-{\stackrel{˜}{r}}_{h}{b}^{\prime }\left(r\right)\right).
With horizon area A=4\text{π}{\stackrel{˜}{r}}_{h}^{2}, the horizon entropy in general relativity is {S}_{h}={\frac{A}{4G}|}_{{\stackrel{˜}{r}}_{h}}. In f\left(R\right) gravity it becomes
{S}_{h}={\frac{AF}{4G}|}_{{\stackrel{˜}{r}}_{h}}=\frac{\text{π}{\stackrel{˜}{r}}_{h}^{2}F}{G}=8{\text{π}}^{2}{\stackrel{˜}{r}}_{h}^{2}F\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left(8\text{π}G=1\right),
where F=\text{d}f/\text{d}R\ne 0. Differentiating,
\text{d}{S}_{h}=8{\text{π}}^{2}{\stackrel{˜}{r}}_{h}^{2}\text{d}F+16{\text{π}}^{2}{\stackrel{˜}{r}}_{h}F\text{d}{\stackrel{˜}{r}}_{h},
so that
{T}_{h}\text{d}{S}_{h}=\left(8{\text{π}}^{2}{\stackrel{˜}{r}}_{h}^{2}\text{d}F+16{\text{π}}^{2}{\stackrel{˜}{r}}_{h}F\text{d}{\stackrel{˜}{r}}_{h}\right)\left[\frac{ab\left(r\right)-{\stackrel{˜}{r}}_{h}{b}^{\prime }\left(r\right)}{8\text{π}{\stackrel{˜}{r}}_{h}^{2}}-\frac{{\stackrel{˜}{r}}_{h}\left(\stackrel{˙}{H}+2{H}^{2}\right)}{4\text{π}}\right].
The first law of thermodynamics for the matter content reads
{T}_{h}\text{d}{S}_{I}=\text{d}{E}_{I}+p\text{d}V,
where {S}_{I} is the entropy and {E}_{I} the energy of the matter, V=\frac{4}{3}\text{π}{\stackrel{˜}{r}}^{3} is the volume, and p=\left({p}_{r}+2{p}_{t}\right)/3 is the mean pressure built from the radial pressure {p}_{r}\left(t,r\right) and the tangential pressure {p}_{t}\left(t,r\right). At the horizon temperature {T}_{h},
V=\frac{4}{3}\text{π}{\stackrel{˜}{r}}_{h}^{3},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{E}_{I}=\rho V=\frac{4}{3}\text{π}\rho {\stackrel{˜}{r}}_{h}^{3},
Using the continuity equation \stackrel{˙}{\rho }+H\left(3\rho +{p}_{r}+2{p}_{t}\right)=0, which follows from the conservation law {T}_{\nu ;\mu }^{\mu }=0, one finds
{T}_{h}\text{d}{S}_{I}=\frac{4\text{π}{\stackrel{˜}{r}}_{h}^{2}}{3}\left(3\rho +{p}_{r}+2{p}_{t}+\frac{{\stackrel{˜}{r}}_{h}{\rho }^{\prime }}{a}\right)\left(\text{d}{\stackrel{˜}{r}}_{h}-H{\stackrel{˜}{r}}_{h}\text{d}t\right),
where \rho \left(t,r\right) is the energy density and {\rho }^{\prime }=\partial \rho /\partial r.
\begin{array}{c}{T}_{h}{\stackrel{˙}{S}}_{tot}={T}_{h}\left({\stackrel{˙}{S}}_{I}+{\stackrel{˙}{S}}_{h}\right)=-\frac{4\text{π}H{\stackrel{˜}{r}}_{h}^{3}}{3}\left(3\rho +{p}_{r}+2{p}_{t}+\frac{{\stackrel{˜}{r}}_{h}{\rho }^{\prime }}{a}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+8{\text{π}}^{2}{\stackrel{˜}{r}}_{h}^{2}\stackrel{˙}{F}\left[\frac{ab\left(r\right)-{\stackrel{˜}{r}}_{h}{b}^{\prime }\left(r\right)}{8\text{π}{\stackrel{˜}{r}}_{h}^{2}}-\frac{{\stackrel{˜}{r}}_{h}\left(\stackrel{˙}{H}+2{H}^{2}\right)}{4\text{π}}\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+16{\text{π}}^{2}{\stackrel{˜}{r}}_{h}F\left[\frac{ab\left(r\right)-{\stackrel{˜}{r}}_{h}{b}^{\prime }\left(r\right)}{8\text{π}{\stackrel{˜}{r}}_{h}^{2}}-\frac{{\stackrel{˜}{r}}_{h}\left(\stackrel{˙}{H}+2{H}^{2}\right)}{4\text{π}}\right]{\stackrel{˙}{\stackrel{˜}{r}}}_{h}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{4\text{π}{\stackrel{˜}{r}}_{h}^{2}}{3}\left(3\rho +{p}_{r}+2{p}_{t}+\frac{{\stackrel{˜}{r}}_{h}{\rho }^{\prime }}{a}\right){\stackrel{˙}{\stackrel{˜}{r}}}_{h}.\end{array}
For a constant shape function b\left(r\right)={r}_{0}, the event horizon radius {\stackrel{˜}{r}}_{E} follows from the null condition \text{d}{s}^{2}=0=\text{d}{\Omega }_{2}^{2}, which gives
{\stackrel{˙}{\stackrel{˜}{r}}}_{E}={\stackrel{˜}{r}}_{E}H-\sqrt{1-\frac{a{r}_{0}}{{\stackrel{˜}{r}}_{E}}},
together with
{\int }_{0}^{\frac{{\stackrel{˜}{r}}_{E}}{a}}\frac{\text{d}r}{\sqrt{1-\frac{{r}_{0}}{r}}}={\int }_{t}^{\infty }\frac{\text{d}t}{a}.
In f\left(R\right) gravity, the field equations give the energy density \rho \left(t,r\right), the radial pressure {p}_{r}\left(t,r\right), and the tangential pressure {p}_{t}\left(t,r\right) as
\rho =-\stackrel{¨}{F}+3{H}^{2}F,
{p}_{r}=-2\stackrel{˙}{H}F+H\stackrel{˙}{F}-3{H}^{2}F-\frac{{r}_{0}F}{{a}^{2}{r}^{3}},
{p}_{t}=-2\stackrel{˙}{H}F+H\stackrel{˙}{F}-3{H}^{2}F+\frac{{r}_{0}F}{2{a}^{2}{r}^{3}},
\begin{array}{c}{T}_{h}{\stackrel{\dot{}}{S}}_{tot}=-\frac{4\text{π}{\stackrel{˜}{r}}_{E}^{2}}{3}\left(3H\stackrel{\dot{}}{F}-6\stackrel{\dot{}}{H}F-3\stackrel{¨}{F}\right)\sqrt{1-\frac{a{r}_{0}}{{\stackrel{˜}{r}}_{E}}}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+8{\text{π}}^{2}{\stackrel{˜}{r}}_{E}\left(2{\stackrel{˜}{r}}_{E}HF+{\stackrel{˜}{r}}_{E}\stackrel{\dot{}}{F}\right)\left(\frac{a{r}_{0}}{8\text{π}{\stackrel{˜}{r}}_{E}^{2}}-\frac{\left(\stackrel{\dot{}}{H}+2{H}^{2}\right){\stackrel{˜}{r}}_{E}}{4\text{π}}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}-\left(16{\text{π}}^{2}{\stackrel{˜}{r}}_{E}F\sqrt{1-\frac{a{r}_{0}}{{\stackrel{˜}{r}}_{E}}}\right)\left(\frac{a{r}_{0}}{8\text{π}{\stackrel{˜}{r}}_{E}^{2}}-\frac{\left(\stackrel{\dot{}}{H}+2{H}^{2}\right){\stackrel{˜}{r}}_{E}}{4\text{π}}\right).\end{array}
|
rule (method) [Isabelle/HOL Support Wiki]
rule is a proof method: it applies a given rule to the current subgoal, if possible. Assume the current subgoal has the form
\quad\bigwedge x_1 \dots x_k : [|\ A_1; \dots ; A_m\ |] \Longrightarrow C
and we want to use rule with the rule
\quad[|\ P_1; \dots ; P_n\ |] \Longrightarrow Q
Then, rule does the following: it unifies C with Q. The application fails if there is no unifier; otherwise, let U be the unifier. rule then removes the old subgoal and creates n new subgoals
\quad\bigwedge x_1 \dots x_k : [|\ U(A_1); \dots ; U(A_m)\ |] \Longrightarrow U(P_k)
one for each k = 1, \dots, n.
Assume we have a goal
\quad[|\ A |] \Longrightarrow A \vee B
Applying apply (rule disjI1) yields the new subgoal
\quad [|\ A\ |] \Longrightarrow A
which can obviously be solved by one application of assumption. Note that apply (rule(1) disjI1) is a shortcut for this and immediately solves the goal.
rule_tac
With rule_tac, you can force schematic variables in the used rule to take specific values. The extended syntax is:
apply (rule_tac ident1="expr1" and ident2="expr2" and ... in rule)
This means that the variable ?ident1 is replaced by expression expr1 and similarly for the others. Note that you have to leave out the question mark marking schematic variables. Find out which variables a rule uses with thm rule.
rule(k)
Oftentimes, a rule application results in several subgoals that can directly be solved by assumption; see above for an example. Instead of applying assumption by hand, you can use rule(k), which forces Isabelle to apply assumption k times after applying the rule.
reference/rule_method.txt · Last modified: 2011/06/22 12:27 by 131.246.161.187
|
The Warp Protocol - Warp Finance
The Warp Ecosystem
Users will be able to deposit LP tokens onto the Warp platform and receive stablecoin loans in exchange, while their LP tokens continue to earn from Uniswap’s rewards.
By lending LP tokens rather than other assets, users are able to continue earning trade fees from Uniswap, reducing the effective interest rate paid.
Demand (Borrowers)
At launch, users seeking loans will be able to deposit the LP tokens generated from the following four Uniswap pairs: (WBTC-ETH), (ETH-USDC), (ETH-USDT), and (ETH-DAI).
These pairs will be deposited at 150% over-collateralization. In other words, the user deposits at least 1.5 times the value of money they will borrow.
These borrowers then receive a loan of DAI, USDC, or USDT at a specific interest rate, which will fluctuate based on the availability of the respective stablecoin within the liquidity pool. All while still earning the 0.3% from Uniswap per trade made in the respective liquidity pool.
Through the Warp platform, returns would look like the following:
Yield_{Total Collateral} - Interest Rate + Stablecoin Yield_{User Derived}
This results in the borrowers having the stablecoin to deploy in other platforms with an effective interest rate of:
Interest Rate - Yield_{Total Collateral}
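The effective rate above can be sketched numerically. This is an illustrative calculation, not Warp's implementation; the borrow rate and LP yield figures are hypothetical:

```python
def effective_interest_rate(borrow_rate: float, lp_yield: float,
                            collateral_value: float, loan_value: float) -> float:
    """Net rate paid on a loan: the borrow rate minus the Uniswap LP
    yield still earned on the posted collateral, expressed relative
    to the loan principal."""
    lp_income = lp_yield * collateral_value      # trade fees still earned on LP tokens
    return borrow_rate - lp_income / loan_value  # net rate on the borrowed amount

# Hypothetical: borrow 100 DAI at 10% APR against 150 DAI worth of LP
# tokens yielding 4% APR in trade fees.
print(round(effective_interest_rate(0.10, 0.04, 150.0, 100.0), 4))  # 0.04
```

Because the collateral is over-collateralized at 150%, even a modest LP yield noticeably lowers the net rate on the smaller loan principal.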
Once the loan has been repaid, users may withdraw their collateral.
Supply (Lenders)
Lenders will be able to supply DAI, USDC and USDT on Warp Finance.
In return, suppliers will receive either wDAI, wUSDC or wUSDT, which are interest-earning tokens that indicate a deposit into Warp.
On withdrawal, suppliers will receive back the stablecoin they initially deposited plus the interest earned.
Platform Reserves and Development
The Warp Protocol collects 5% of interest accrued and stores the funds in a treasury wallet. At first, these funds will be used as a reserve and for continuous development of the platform. Once governance is live, the community will be able to decide on the usage of these funds.
|
Approaches for Model Validation: Methodology and Illustration on a Sheet Metal Flanging Process | J. Manuf. Sci. Eng. | ASME Digital Collection
Thaweepat Buranathiti,
Department of Mechanical Engineering, Northwestern University, Evanston, IL 60208
e-mail: jcao@northwestern.edu
Lusine Baghdasaryan,
Department of Mechanical & Industrial Engineering, University of Illinois at Chicago, Chicago, IL 60607
Ford Scientific Research Laboratory, Dearborn, MI 48121
Contributed by the Manufacturing Engineering Division for publication in the ASME JOURNAL OF MANUFACTURING SCIENCE AND ENGINEERING. Manuscript received January 3, 2004, final manuscript received May 24, 2004. Review conducted by S. R. Schmid.
Buranathiti, T., Cao, J., Chen, W., Baghdasaryan, L., and Xia, Z. C. (May 9, 2006). "Approaches for Model Validation: Methodology and Illustration on a Sheet Metal Flanging Process ." ASME. J. Manuf. Sci. Eng. May 2006; 128(2): 588–597. https://doi.org/10.1115/1.1807852
Model validation has become an increasingly important issue in the decision-making process for model development, as numerical simulations have widely demonstrated their benefits in reducing development time and cost. Frequently, the trustworthiness of models is questioned in this competitive and demanding world. By definition, model validation is a means to systematically establish a level of confidence in a model. To demonstrate the process of model validation for simulation-based models, a sheet metal flanging process is used as an example, with the objective of predicting the final geometry, or springback. This forming process involves large deformation of sheet metals, contact between tooling and blanks, and process uncertainties. The corresponding uncertainties in material properties and process conditions are investigated and taken as inputs to the uncertainty propagation, where metamodels, known as models of the model, are developed to efficiently and effectively compute the total uncertainty/variation of the final configuration. Three model validation techniques (graphical comparison, the confidence interval technique, and the r^2 technique) are applied and examined; furthermore, the strengths and weaknesses of each technique are discussed. The latter two techniques offer a broader perspective due to the involvement of statistical and uncertainty analyses. The proposed model validation approaches reduce the number of experiments to one for each design point by shifting the evaluation effort to the uncertainty propagation of the simulation model rather than using costly physical experiments.
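The r^2 technique mentioned above can be illustrated with a small sketch: it computes the coefficient of determination between experimental observations and model predictions. The springback data below are hypothetical, not the paper's measurements:

```python
def r_squared(observed, predicted):
    """Coefficient of determination between experimental observations
    and model predictions: 1 - SS_res / SS_tot."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Hypothetical springback angles (degrees): measured vs. simulated.
measured  = [12.1, 13.4, 15.0, 16.2, 18.3]
simulated = [12.0, 13.6, 14.8, 16.5, 18.1]
print(round(r_squared(measured, simulated), 3))  # 0.991
```

A value near 1 indicates the simulation explains most of the variation in the observations; the confidence interval technique would additionally account for measurement and model uncertainty.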
sheet materials, sheet metal processing, forming processes, deformation, finite element analysis, yield stress, work hardening, statistical analysis
Model validation, Uncertainty
|
As shown in figures (a) and (b), a body of mass is attached - Turito
Two blocks are attached to the two ends of a spring having length and force constant, on a horizontal surface. Initially the system is in equilibrium. Now a third block having the same mass, moving with velocity, collides with block A. In this situation……
The displacement of a particle is given. Which of the following graphs represents the variation in potential energy as a function of time and displacement?
As shown in figure, a block A having mass M is attached to one end of a massless spring. The block is on a frictionless horizontal surface and the free end of the spring is attached to a wall. Another block B is placed on top of block A. Now, on displacing this system horizontally and releasing it, it executes S.H.M. What should be the maximum amplitude of oscillation so that B does not slide off A? The coefficient of static friction between the surfaces of the blocks is given.
Three equal masses of m kg each are placed at the vertices of an equilateral triangle PQR, and a mass of 2m kg is placed at the centroid O of the triangle, which is at a distance of \sqrt{2} m from each of the vertices of the triangle. The force in newtons acting on the mass 2m is ……
Which of the following statement about the gravitational constant is true
Two point masses A and B having masses in the ratio 4:3 are separated by a distance of 1 m. When another point mass C of mass M is placed in between A and B, the force between A and C is 1/3rd of the force between B and C. Then the distance of C from … is … m.
|
Implementation and Architecture - Flux Protocol
This section of the whitepaper describes the implementation and architecture of Flux Protocol
Flux Protocol is coded in Solidity, and all Conflux Network accounts can interact with the platform. Flux Protocol provides interfaces to market participants such as borrowers, depositors, and liquidators. Flux Protocol is built on Conflux Network and utilizes ShuttleFlow to interact with cross-chained Ethereum or Bitcoin assets.
The advantages of using Conflux Network are:
The performance of Conflux Network is superior, and its transaction fees are lower than Ethereum's
Improved liquidity possibility, e.g. wBTC, imBTC and the like are supplied/collateralized through fBTC and enjoy shared liquidity through this method
Flux Protocol creates an independent money market for each asset, also known as the lending market. Assets on Flux Protocol are mapped into ERC-777 tokens on Conflux Network: fToken. Users transfer their assets to the contract, and their assets are converted into fToken; over time, they gain fToken interest returns. Table 2 lists the initial batch of assets launched by Flux and the corresponding fToken information:
Table 2: Initial Lending Assets (columns: cToken (ERC777), fToken (ERC777), Conflux native)
Similar to Compound, Flux's money markets are defined by a pair of current interest rates (supply and borrowing interest rates), which are applicable to all users. Over time, this pair of interest rates will be adjusted as changes in supply and demand take place.
For every money market, every change in interest rates is controlled by the Interest Rate Index. The change of the interest rate is a result of user actions such as supply, withdrawal, borrow, repayment or liquidation of assets. User balance includes accrued interest, which is the ratio of the current interest rate index divided by the interest rate index when the user balance was last updated.
The balance of each account address in the money market is stored as an account checkpoint. The account checkpoint is a Solidity tuple <uint256 borrows, uint256 interestIndex> that describes the interest rate index and the borrowing balance when the account was last updated. The current interest rate index of the money market is also stored globally.
Each time a transaction occurs, the asset's supply and borrowing interest rate indices are updated to compound the interest accrued since the previous update. Interest accrues at rate r over the elapsed time t and is compounded at each period's interest rate:
Index_{(a,n)} = Index_{(a,n-1)}*(1+r*t)
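A minimal sketch of this index update and the checkpoint-based balance accrual described above (illustrative Python with hypothetical figures, not Flux's Solidity code):

```python
def update_index(prev_index: float, rate_per_period: float, periods: float) -> float:
    """Index_(a,n) = Index_(a,n-1) * (1 + r*t): roll the market's
    interest index forward by t periods at rate r per period."""
    return prev_index * (1.0 + rate_per_period * periods)

def accrued_balance(old_balance: float, old_index: float, new_index: float) -> float:
    """A user's balance including accrued interest is the old balance
    scaled by the ratio of the current index to the index stored at
    the account's last checkpoint."""
    return old_balance * new_index / old_index

index = 1.0
index = update_index(index, 0.001, 10)                # 10 periods at 0.1% per period
print(round(index, 6))                                # 1.01
print(round(accrued_balance(500.0, 1.0, index), 4))   # 505.0
```

In the protocol this scaling happens lazily: a borrower's checkpoint `<borrows, interestIndex>` is only brought up to date when the account is next touched.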
Assets circulate in the money market a, therefore the amount of fToken will continue to grow with market interest. We set the exchange rate of fToken according to market supply and demand, which is the market exchange rate of fToken as well.
Exchange Rate_a = (Cash_a + Borrows_a *0.9) / Supply_a
Exchange Rate_a = (Cash_a + Borrows_a *0.8) / Supply_a
More borrowings result in a higher market exchange rate when the deposit remains unchanged. The depositor's fToken balance can be exchanged for more assets, and the extra assets obtained are the deposit interest. The constant 0.9 expresses the difference between the supply and borrow interest rates of non-stablecoins: 10% of the borrowing interest is distributed to the Flux Protocol team. The constant 0.8 expresses the difference between the supply and borrow interest rates of stablecoins: 20% of the borrowing interest is distributed to the Flux Protocol team.
fTokens_a = cTokens_a / Exchange Rate_a
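The exchange-rate and conversion formulas above can be sketched as follows (illustrative Python with hypothetical market figures; `retain` is 0.9 for non-stablecoins and 0.8 for stablecoins, as described above):

```python
def exchange_rate(cash: float, borrows: float, supply: float,
                  retain: float = 0.9) -> float:
    """ExchangeRate_a = (Cash_a + Borrows_a * retain) / Supply_a."""
    return (cash + borrows * retain) / supply

def ftokens_minted(assets_supplied: float, rate: float) -> float:
    """Minted fTokens equal the supplied assets divided by the rate."""
    return assets_supplied / rate

def assets_redeemed(ftokens: float, rate: float) -> float:
    """Redeemed assets equal the fTokens multiplied by the rate."""
    return ftokens * rate

# Hypothetical non-stablecoin market: 800 cash, 200 borrowed, 1000 fTokens.
rate = exchange_rate(800.0, 200.0, 1000.0)      # (800 + 180) / 1000
print(round(rate, 6))                           # 0.98
print(round(ftokens_minted(49.0, rate), 6))     # 50.0
print(round(assets_redeemed(50.0, rate), 6))    # 49.0
```

As borrows (and hence accrued interest) grow relative to supply, the rate rises and each fToken redeems for more of the underlying asset.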
Supply (Mint)
Any Conflux account can obtain minted fTokens by supplying assets. The assets are deposited into the smart contract of the money market, and the number of minted fTokens is equal to the number of supplied assets divided by the present market exchange rate. There are three types of assets in the Flux Protocol, and there are different ways to supply them:
1. Conflux token – CFX:
Depositing assets through: function mint() payable
2. Cross-chain assets – cTokens:
Supplying assets to the money market contract through: cToken.operatorSend(minter,market,amount,0x0,0x1)
Supplying assets from the Ethereum or Bitcoin network to the money market through the ShuttleFlow Protocol. ShuttleFlow mints new cTokens into the smart contract through: cToken.mint(market,amount,minter,"")
3. Non-cross-chain ERC777 assets:
Supplying assets to the money market smart contract through: cToken.operatorSend(minter,market,amount,0x0,0x1)
The key steps to process supply function are:
Minting new fTokens to minter;
Updating minter balance;
Please note: repaying ERC777 assets to the money market is not executed via the repayment function; instead, when ERC777 assets are successfully deposited, the tokensReceived hook triggers the repayment function.
Withdraw (Redeem)
Redeem is similar to withdrawal: once the redemption takes place, the withdrawer is no longer entitled to rights such as interest income. The redeem function is responsible for transferring assets from the money market to users in exchange for the previously minted fTokens. The amount of assets redeemed is equal to the amount of fTokens to be redeemed multiplied by the current exchange rate. The redemption amount must be smaller than the available liquidity of the user account and the available liquidity of the market. There are three types of assets in the Flux Protocol, and there are different ways to redeem them:
Conflux token CFX:
Redeem assets to your Conflux address through: function redeem(uint256 ftokens)
Cross-chain assets – cTokens (Support redemption to Ethereum or Bitcoin address):
Redeem assets to Conflux address through: function redeem(uint256 ftokens)
Receiver shall enter Ethereum or Bitcoin address to redeem assets to Ethereum or Bitcoin network: function redeem(uint256 ftokens, address receiver)
Non-cross-chain ERC777 assets:
Redeem assets to Conflux wallet through: function redeem(uint256 ftokens)
The key steps to process redeem functions are:
Transferring ERC777 assets to msg.sender or receiver (through cross-chain system)
Burn fToken
Updating msg.sender balance
The borrow function is responsible for transferring assets from the money market to the users, and creating a borrowing balance that is compounded based on the asset's borrowing rate. The amount of borrowing must be smaller than the user's borrowing capacity and the market's available liquidity. The user must maintain the collateral requirement in order to avoid liquidation. Note that the borrower receives an asset through a transaction, e.g. ETH for the fETH money market; therefore, the borrower must repay the same asset. There are three types of assets in the Flux Protocol, and there are different ways to borrow them:
Borrow assets to Conflux address through: function borrow(uint256 ctokens)
Cross-chain assets – cTokens (Supports borrowing assets to Ethereum or Bitcoin address):
Borrow assets to Conflux address through: function borrow(uint256 ctokens)
Borrow assets to the Ethereum or Bitcoin network, the receiver needs to enter the Ethereum or Bitcoin address through: function borrow(uint256 ctokens, address receiver)
Borrow assets to Conflux wallet: function borrow(uint256 ctokens)
The key steps to process borrowing functions are:
Updating interest rate index
Updating msg.sender borrowing balance
Borrow Repay
The borrower deposits the borrowed assets back to the money market through the repay function to reduce the user's outstanding borrowing. If the borrower's repayment of assets exceeds the actual amount of assets that have been borrowed, the excesses will be credited to the account via deposit. There are three types of assets in the Flux Protocol, and the way to repay the assets is similar to the way of depositing assets:
Repay assets through: function repay() payable
Cross-chain cTokens:
Repay assets to the money market contract, regarding it as repayment by default, through: cToken.operatorSend(borrower,market,amount,0x0,0x0)
Repay assets from the Ethereum or Bitcoin network to the money market through the ShuttleFlow. ShuttleFlow mints new cTokens into the borrowing smart contract, regarding it as repayment by default, through: cToken.mint(market,amount,borrower, "")
Transferring assets to the money market through: cToken.operatorSend(borrower,market,amount,0x0,0x0)
The key steps to process repayment functions are:
Updating interest rate index;
Updating borrower or msg.sender borrowing balance;
Updating borrower or msg.sender balance (possibly);
Updating interest rate model.
Borrow Liquidation
A liquidator can repay the outstanding borrowing of an account with negative liquidity to restore the liquidity of the borrower's account. Liquidation utilizes the Flux Guardian smart contract: Guard. Before the liquidation, Guard needs to be authorized to repay the outstanding borrowings from the liquidator's account into the money market.
Liquidation function:
function liquidate(address borrower) external
The key steps to process liquidation functions are:
Check whether the borrower’s collateral rate is lower than the liquidation collateral rate
Repay outstanding borrowings into the money market
Update borrower's borrowing outstanding
The sender will receive the borrower's fToken collateral assets
Update interest rate model
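The eligibility check in the first step can be sketched as follows; this is an illustrative check, not the Guard contract's code, and the 1.1 liquidation collateral rate is a hypothetical threshold:

```python
def can_liquidate(collateral_value: float, borrow_balance: float,
                  liquidation_collateral_rate: float = 1.1) -> bool:
    """A borrower may be liquidated when the collateral rate
    (collateral value / outstanding borrows) falls below the
    liquidation threshold. The 1.1 default is illustrative only."""
    if borrow_balance == 0:
        return False  # nothing outstanding, nothing to liquidate
    return collateral_value / borrow_balance < liquidation_collateral_rate

print(can_liquidate(150.0, 100.0))  # False: 1.5 >= 1.1
print(can_liquidate(105.0, 100.0))  # True: 1.05 < 1.1
```

Only when this check passes does the liquidator's repayment proceed and the borrower's fToken collateral transfer to the sender.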
|
Turbine — Wikipedia Republished // WIKI 2
For other uses, see Turbine (disambiguation).
A steam turbine with the case opened.
A turbine (/ˈtɜːrbaɪn/ or /ˈtɜːrbɪn/) (from the Greek τύρβη, tyrbē, or Latin turbo, meaning vortex)[1][2] is a rotary mechanical device that extracts energy from a fluid flow and converts it into useful work. The work produced by a turbine can be used for generating electrical power when combined with a generator.[3] A turbine is a turbomachine with at least one moving part called a rotor assembly, which is a shaft or drum with blades attached. Moving fluid acts on the blades so that they move and impart rotational energy to the rotor. Early turbine examples are windmills and waterwheels.
Gas, steam, and water turbines have a casing around the blades that contains and controls the working fluid. Credit for invention of the steam turbine is given both to Anglo-Irish engineer Sir Charles Parsons (1854–1931) for invention of the reaction turbine, and to Swedish engineer Gustaf de Laval (1845–1913) for invention of the impulse turbine. Modern steam turbines frequently employ both reaction and impulse in the same unit, typically varying the degree of reaction and impulse from the blade root to its periphery. Hero of Alexandria demonstrated the turbine principle in an aeolipile in the first century AD and Vitruvius mentioned them around 70 BC.
The word "turbine" was coined in 1822 by the French mining engineer Claude Burdin from the Greek τύρβη, tyrbē, meaning "vortex" or "whirling", in a memo, "Des turbines hydrauliques ou machines rotatoires à grande vitesse", which he submitted to the Académie royale des sciences in Paris.[4] Benoit Fourneyron, a former student of Claude Burdin, built the first practical water turbine.
Operation theory
A working fluid contains potential energy (pressure head) and kinetic energy (velocity head). The fluid may be compressible or incompressible. Several physical principles are employed by turbines to collect this energy:
Impulse turbines change the direction of flow of a high velocity fluid or gas jet. The resulting impulse spins the turbine and leaves the fluid flow with diminished kinetic energy. There is no pressure change of the fluid or gas in the turbine blades (the moving blades); as in the case of a steam or gas turbine, all the pressure drop takes place in the stationary blades (the nozzles). Before reaching the turbine, the fluid's pressure head is changed to velocity head by accelerating the fluid with a nozzle. Pelton wheels and de Laval turbines use this process exclusively. Impulse turbines do not require a pressure casement around the rotor since the fluid jet is created by the nozzle prior to reaching the blades on the rotor. Newton's second law describes the transfer of energy for impulse turbines. Impulse turbines are most efficient for use in cases where the flow is low and the inlet pressure is high.[3]
Reaction turbines develop torque by reacting to the gas or fluid's pressure or mass. The pressure of the gas or fluid changes as it passes through the turbine rotor blades.[3] A pressure casement is needed to contain the working fluid as it acts on the turbine stage(s) or the turbine must be fully immersed in the fluid flow (such as with wind turbines). The casing contains and directs the working fluid and, for water turbines, maintains the suction imparted by the draft tube. Francis turbines and most steam turbines use this concept. For compressible working fluids, multiple turbine stages are usually used to harness the expanding gas efficiently. Newton's third law describes the transfer of energy for reaction turbines. Reaction turbines are better suited to higher flow velocities or applications where the fluid head (upstream pressure) is low. [3]
In practice, modern turbine designs use both reaction and impulse concepts to varying degrees whenever possible. Wind turbines use an airfoil to generate a reaction lift from the moving fluid and impart it to the rotor. Wind turbines also gain some energy from the impulse of the wind, by deflecting it at an angle. Turbines with multiple stages may use either reaction or impulse blading at high pressure. Steam turbines were traditionally more impulse but continue to move towards reaction designs similar to those used in gas turbines. At low pressure the operating fluid medium expands in volume for small reductions in pressure. Under these conditions, blading becomes strictly a reaction type design with the base of the blade solely impulse. The reason is due to the effect of the rotation speed for each blade. As the volume increases, the blade height increases, and the base of the blade spins at a slower speed relative to the tip. This change in speed forces a designer to change from impulse at the base, to a high reaction-style tip.
Classical turbine design methods were developed in the mid 19th century. Vector analysis related the fluid flow with turbine shape and rotation. Graphical calculation methods were used at first. Formulae for the basic dimensions of turbine parts are well documented and a highly efficient machine can be reliably designed for any fluid flow condition. Some of the calculations are empirical or 'rule of thumb' formulae, and others are based on classical mechanics. As with most engineering calculations, simplifying assumptions were made.
Velocity triangles can be used to calculate the basic performance of a turbine stage. Gas exits the stationary turbine nozzle guide vanes at absolute velocity Va1. The rotor rotates at velocity U. Relative to the rotor, the velocity of the gas as it impinges on the rotor entrance is Vr1. The gas is turned by the rotor and exits, relative to the rotor, at velocity Vr2. However, in absolute terms the rotor exit velocity is Va2. The velocity triangles are constructed using these various velocity vectors. Velocity triangles can be constructed at any section through the blading (for example: hub, tip, midsection and so on) but are usually shown at the mean stage radius. Mean performance for the stage can be calculated from the velocity triangles, at this radius, using the Euler equation:
{\displaystyle \Delta h=u\cdot \Delta v_{w}}
{\displaystyle {\frac {\Delta h}{T}}={\frac {u\cdot \Delta v_{w}}{T}}}
where {\displaystyle \Delta h} is the specific enthalpy drop across the stage, {\displaystyle T} is the turbine entry total (stagnation) temperature, {\displaystyle u} is the turbine rotor peripheral velocity, {\displaystyle \Delta v_{w}} is the change in whirl velocity, and {\displaystyle {\frac {\Delta h}{T}}} is the stage loading parameter.
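A small numerical sketch of the Euler equation at the mean stage radius; the blade speed and whirl velocities below are hypothetical values, chosen only to illustrate the calculation:

```python
def euler_specific_work(u: float, vw1: float, vw2: float) -> float:
    """Euler turbine equation: specific work dh = u * dv_w, where dv_w
    is the change in whirl (tangential) velocity across the rotor."""
    return u * (vw1 - vw2)

def stage_loading(u: float, vw1: float, vw2: float, t_inlet: float) -> float:
    """Temperature-normalized form dh/T = u * dv_w / T."""
    return euler_specific_work(u, vw1, vw2) / t_inlet

# Hypothetical mean-radius values: blade speed 340 m/s, whirl velocity
# falling from 500 m/s to 60 m/s across the rotor.
print(euler_specific_work(340.0, 500.0, 60.0))  # 149600.0 (J/kg)
```

The same calculation can be repeated at hub, midsection, and tip to see how the work extraction varies along the blade height.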
Modern turbine design carries the calculations further. Computational fluid dynamics dispenses with many of the simplifying assumptions used to derive classical formulas and computer software facilitates optimization. These tools have led to steady improvements in turbine design over the last forty years.
The primary numerical classification of a turbine is its specific speed. This number describes the speed of the turbine at its maximum efficiency with respect to the power and flow rate. The specific speed is derived to be independent of turbine size. Given the fluid flow conditions and the desired shaft output speed, the specific speed can be calculated and an appropriate turbine design selected.
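As a sketch, one common metric convention for water turbines defines specific speed from rotational speed, power, and head. The formula and units below are one convention among several, and the unit's figures are hypothetical:

```python
import math

def specific_speed(n_rpm: float, power_kw: float, head_m: float) -> float:
    """n_s = n * sqrt(P) / H**1.25: a common metric definition of
    specific speed for water turbines, with n in rpm, P in kW and
    H in metres. Other unit conventions exist."""
    return n_rpm * math.sqrt(power_kw) / head_m ** 1.25

# Hypothetical unit: 300 rpm, 10 MW, 40 m head.
print(round(specific_speed(300.0, 10000.0, 40.0)))  # 298
```

Because the result is independent of machine size, the same number can be compared against the characteristic ranges of Pelton, Francis, and Kaplan designs to select a turbine type.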
Off-design performance is normally displayed as a turbine map or characteristic.
The number of blades in the rotor and the number of vanes in the stator are often two different prime numbers in order to reduce the harmonics and maximize the blade-passing frequency.[5]
Steam turbines are used to drive electrical generators in thermal power plants which use coal, fuel oil or nuclear fuel. They were once used to directly drive mechanical devices such as ships' propellers (for example the Turbinia, the first turbine-powered steam launch[6]), but most such applications now use reduction gears or an intermediate electrical step, where the turbine is used to generate electricity, which then powers an electric motor connected to the mechanical load. Turbo electric ship machinery was particularly popular in the period immediately before and during World War II, primarily due to a lack of sufficient gear-cutting facilities in US and UK shipyards.
Aircraft gas turbine engines are sometimes referred to as turbine engines to distinguish them from piston engines.
Contra-rotating turbines. With axial turbines, some efficiency advantage can be obtained if a downstream turbine rotates in the opposite direction to an upstream unit. However, the complication can be counter-productive. A contra-rotating steam turbine, usually known as the Ljungström turbine, was originally invented by Swedish engineer Fredrik Ljungström (1875–1964) in Stockholm; in partnership with his brother Birger Ljungström he obtained a patent in 1894. The design is essentially a multi-stage radial turbine (or pair of 'nested' turbine rotors) offering great efficiency, four times as large a heat drop per stage as in the reaction (Parsons) turbine, and an extremely compact design; the type met particular success in back-pressure power plants. However, contrary to other designs, large steam volumes are handled with difficulty, and only a combination with axial-flow turbines (DUREX) allows the turbine to be built for power greater than ca 50 MW. In marine applications only about 50 turbo-electric units were ordered (of which a considerable number were eventually sold to land plants) during 1917–19, and during 1920–22 a few not very successful turbo-mechanical units were sold.[7] Only a few turbo-electric marine plants were still in use in the late 1960s (ss Ragne, ss Regin) while most land plants remained in use in 2010.
Ceramic turbine. Conventional high-pressure turbine blades (and vanes) are made from nickel based alloys and often use intricate internal air-cooling passages to prevent the metal from overheating. In recent years, experimental ceramic blades have been manufactured and tested in gas turbines, with a view to increasing rotor inlet temperatures and/or, possibly, eliminating air cooling. Ceramic blades are more brittle than their metallic counterparts, and carry a greater risk of catastrophic blade failure. This has tended to limit their use in jet engines and gas turbines to the stator (stationary) blades.
Shroudless turbine. Modern practice is, wherever possible, to eliminate the rotor shrouding, thus reducing the centrifugal load on the blade and the cooling requirements.
Bladeless turbine uses the boundary layer effect and not a fluid impinging upon the blades as in a conventional turbine.
Three types of water turbines: Kaplan (in front), Pelton (middle) and Francis (back left)
Water turbines
Pelton turbine, a type of impulse water turbine.
Francis turbine, a type of widely used water turbine.
Kaplan turbine, a variation of the Francis Turbine.
Turgo turbine, a modified form of the Pelton wheel.
Cross-flow turbine, also known as Banki-Michell turbine, or Ossberger turbine.
Wind turbine. These normally operate as a single stage without nozzle and interstage guide vanes. An exception is the Éolienne Bollée, which has a stator and a rotor.
Pressure compound multi-stage impulse, or "Rateau", after its French inventor, Auguste Rateau. The Rateau employs simple impulse rotors separated by a nozzle diaphragm. The diaphragm is essentially a partition wall in the turbine with a series of tunnels cut into it, funnel shaped with the broad end facing the previous stage and the narrow end the next; they are also angled to direct the steam jets onto the impulse rotor.
Mercury vapour turbines used mercury as the working fluid, to improve the efficiency of fossil-fuelled generating stations. Although a few power plants were built with combined mercury vapour and conventional steam turbines, the toxicity of the metal mercury was quickly apparent.
Screw turbine is a water turbine which uses the principle of the Archimedean screw to convert the potential energy of water on an upstream level into kinetic energy.
A large proportion of the world's electrical power is generated by turbo generators.
Turbines are used in gas turbine engines on land, sea and air.
Turbochargers are used on piston engines.
Gas turbines have very high power densities (i.e. the ratio of power to mass, or power to volume) because they run at very high speeds. The Space Shuttle main engines used turbopumps (machines consisting of a pump driven by a turbine engine) to feed the propellants (liquid oxygen and liquid hydrogen) into the engine's combustion chamber. The liquid hydrogen turbopump is slightly larger than an automobile engine (weighing approximately 700 lb) with the turbine producing nearly 70,000 hp (52.2 MW).
Turboexpanders are used for refrigeration in industrial processes.
Balancing machine
Helmholtz's theorems
Rotor–stator interaction
Segner wheel
Turbine-electric transmission
^ "turbine". "turbid". Online Etymology Dictionary.
^ τύρβη. Liddell, Henry George; Scott, Robert; A Greek–English Lexicon at the Perseus Project.
^ a b c d Munson, Bruce Roy, T. H. Okiishi, and Wade W. Huebsch. "Turbomachines." Fundamentals of Fluid Mechanics. 6th ed. Hoboken, NJ: J. Wiley & Sons, 2009. Print.
^ In 1822, Claude Burdin submitted his memo "Des turbines hydrauliques ou machines rotatoires à grande vitesse" (Hydraulic turbines or high-speed rotary machines) to the Académie royale des sciences in Paris. (See: Annales de chimie et de physique, vol. 21, page 183 (1822).) However, it was not until 1824 that a committee of the Académie (composed of Prony, Dupin, and Girard) reported favorably on Burdin's memo. See: Prony and Girard (1824) "Rapport sur le mémoire de M. Burdin intitulé: Des turbines hydrauliques ou machines rotatoires à grande vitesse" (Report on the memo of Mr. Burdin titled: Hydraulic turbines or high-speed rotary machines), Annales de chimie et de physique, vol. 26, pages 207-217.
^ Tim J Carter. "Common failures in gas turbine blades". 2004. p. 244-245.
^ Adrian Osler (October 1981). "Turbinia" (PDF). (ASME-sponsored booklet to mark the designation of Turbinia as an international engineering landmark). Tyne And Wear County Council Museums. Archived from the original (PDF) on 28 September 2011. Retrieved 13 April 2011.
^ Ingvar Jung, 1979, The history of the marine turbine, part 1, Royal Institute of Technology, Stockholm, dep of History of technology
|
pattern matching examples - Maple Help
The Pattern Matcher in Maple
This worksheet demonstrates the functionality of the Maple pattern matcher. To check an expression for a match to a single pattern, the patmatch function is used. An efficient facility for matching an expression to one of several patterns is provided by the compiletable and tablelook functions.
\mathrm{restart}
patmatch(expr,pattern); or patmatch(expr, pattern, 's');
expr: the expression to be matched.
pattern: the pattern.
s: the returned variable with the substitution.
The patmatch function returns true if it can match expr to pattern, and returns false otherwise. If the matching is successful, s is assigned to a substitution set such that subs(s, pattern) = expr.
A pattern is an expression containing variables with type defined by "::"; for example, a::radnum means that
a
is matched to an expression of type radnum. Note that, in a sum such as a::realcons+x, a will be matched to 0; while in a product, such as a::realcons*x, a will be matched to 1. This behavior can be avoided by wrapping the keyword nonunit around the type: for example, a::nonunit(realcons)*x does not match x.
Matching a linear expression with real coefficients:
\mathrm{patmatch}\left(x,a∷\mathrm{realcons} \cdot x+b∷\mathrm{realcons},'\mathrm{la}'\right);
\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{la}
\textcolor[rgb]{0,0,1}{\mathrm{true}}
\left[\textcolor[rgb]{0,0,1}{a}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{b}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{0}\right]
The following pattern matcher looks for an expression of type ax+b, where b is a sum of real constants:
\mathrm{patmatch}\left(\sqrt{3}x-\frac{\mathrm{ln}\left(4\right)\mathrm{π}}{5}-ⅇ,a∷\mathrm{realcons} \cdot x+b∷\mathrm{realcons},'\mathrm{ls}'\right)\phantom{\rule[-0.0ex]{0.0em}{0.0ex}};
\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{ls}
\textcolor[rgb]{0,0,1}{\mathrm{true}}
\textcolor[rgb]{0,0,1}{\left[a=\sqrt{3},b=-\frac{\mathrm{ln}\left(4\right)\mathrm{π}}{5}-ⅇ\right]}
The following example illustrates the keyword nonunit:
\mathrm{patmatch}\left({x}^{2},{x}^{n::\left(\mathrm{nonunit}\left(\mathrm{integer}\right)\right)},'\mathrm{la}'\right);
\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{la}
\textcolor[rgb]{0,0,1}{\mathrm{true}}
\left[\textcolor[rgb]{0,0,1}{n}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{2}\right]
\mathrm{patmatch}\left(x,{x}^{n::\left(\mathrm{nonunit}\left(\mathrm{integer}\right)\right)},'\mathrm{la}'\right)
\textcolor[rgb]{0,0,1}{\mathrm{false}}
A Note on Commutativity
The pattern matcher matches the commutative operations `+` and `*`; for example, the pattern a::realcons*x+b::algebraic will look for a term of the form realcons*x, and then bind the rest of the sum to
b
\mathrm{patmatch}\left(f\left(x,y,x+y\right),f\left(a::\mathrm{name},b::\mathrm{name},a::\mathrm{name}+b::\mathrm{name}\right),'\mathrm{la}'\right);
\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{la}
\textcolor[rgb]{0,0,1}{\mathrm{true}}
\left[\textcolor[rgb]{0,0,1}{a}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{b}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{y}\right]
\mathrm{patmatch}\left(f\left(y,x,x+y\right),f\left(a::\mathrm{name},b::\mathrm{name},a::\mathrm{name}+b::\mathrm{name}\right),'\mathrm{la}'\right);
\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{la}
\textcolor[rgb]{0,0,1}{\mathrm{true}}
\left[\textcolor[rgb]{0,0,1}{a}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{b}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{x}\right]
Patterns with Conditions
The special keyword conditional is used to specify patterns having additional conditions. This is used for programming patterns in tables with additional conditions on the pattern. The syntax is conditional(pattern, condition) and conditional(pattern=right_hand_side, condition) for rules in tables or define. For example, it can be used for patterns of type int(a::algebraic,x::name)=a*x,_type(a,freeof(x)). This is not the same as int(a::freeof(x),x::name), because at the point that the pattern matcher matches a, x is not known yet. Note that the condition has to be unevaluated or in inert form, meaning that you must use an underscore '_' in front of every name; for example, _type(a,freeof(x)).
Note: You cannot use `=` or `<>`.
\mathrm{patmatch}\left(2x+5,\mathrm{conditional}\left(a∷\mathrm{integer} \cdot x+b∷\mathrm{integer},{a}^{2}<b\right),'\mathrm{la}'\right);
\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{la}
\textcolor[rgb]{0,0,1}{\mathrm{true}}
\left[\textcolor[rgb]{0,0,1}{a}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{b}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{5}\right]
\mathrm{patmatch}\left(2x+2,\mathrm{conditional}\left(a∷\mathrm{integer} \cdot x+b∷\mathrm{integer},{a}^{2}<b\right),'\mathrm{la}'\right)
\textcolor[rgb]{0,0,1}{\mathrm{false}}
\mathrm{patmatch}\left(11x+6,\mathrm{conditional}\left(a∷\mathrm{integer} \cdot x+b∷\mathrm{integer},b<a\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\mathbf{and}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\mathrm{_type}\left(a,\mathrm{prime}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\mathbf{and}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\mathbf{not}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}a<0\right),'\mathrm{la}'\right);
\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{la}
\textcolor[rgb]{0,0,1}{\mathrm{true}}
\left[\textcolor[rgb]{0,0,1}{a}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{11}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{b}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{6}\right]
Linear and Other Common Patterns
Matching linear patterns and other common patterns: the pattern a::nonunit(algebraic)+b::nonunit(algebraic) matches the sum of two or more terms. (The same construct applies for *.) a::nonunit(algebraic)+b::algebraic matches a single term or the sum of terms. Note that in define (see the help page of define) we have the keywords linear and multilinear, which generate more efficient code. x^nonunit(n::integer) matches an integer power of x, but not x itself.
\mathrm{patmatch}\left(a,A::\left(\mathrm{nonunit}\left(\mathrm{algebraic}\right)\right)+B::\left(\mathrm{nonunit}\left(\mathrm{algebraic}\right)\right),'\mathrm{la}'\right)
\textcolor[rgb]{0,0,1}{\mathrm{false}}
\mathrm{patmatch}\left(a+{ⅇ}^{x},A::\left(\mathrm{nonunit}\left(\mathrm{algebraic}\right)\right)+B::\left(\mathrm{nonunit}\left(\mathrm{algebraic}\right)\right),'\mathrm{la}'\right);
\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{la}
\textcolor[rgb]{0,0,1}{\mathrm{true}}
\left[\textcolor[rgb]{0,0,1}{A}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{a}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{B}\textcolor[rgb]{0,0,1}{=}{\textcolor[rgb]{0,0,1}{ⅇ}}^{\textcolor[rgb]{0,0,1}{x}}\right]
\mathrm{patmatch}\left(a,A::\left(\mathrm{nonunit}\left(\mathrm{algebraic}\right)\right)+B::\mathrm{algebraic},'\mathrm{la}'\right);
\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{la}
\textcolor[rgb]{0,0,1}{\mathrm{true}}
\left[\textcolor[rgb]{0,0,1}{A}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{a}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{B}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{0}\right]
\mathrm{patmatch}\left(a+{ⅇ}^{x},A::\left(\mathrm{nonunit}\left(\mathrm{algebraic}\right)\right)+B::\mathrm{algebraic},'\mathrm{la}'\right);
\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{la}
\textcolor[rgb]{0,0,1}{\mathrm{true}}
\left[\textcolor[rgb]{0,0,1}{A}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{a}\textcolor[rgb]{0,0,1}{+}{\textcolor[rgb]{0,0,1}{ⅇ}}^{\textcolor[rgb]{0,0,1}{x}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{B}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{0}\right]
Note: for the last result, one may obtain either A=a or A=a+{ⅇ}^{x}. Several outcomes are possible for the following case:
\mathrm{patmatch}\left(a+{ⅇ}^{x}-\mathrm{π},A::\left(\mathrm{nonunit}\left(\mathrm{algebraic}\right)\right)+B::\mathrm{algebraic},'\mathrm{la}'\right);
\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{la}
\textcolor[rgb]{0,0,1}{\mathrm{true}}
\left[\textcolor[rgb]{0,0,1}{A}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{a}\textcolor[rgb]{0,0,1}{+}{\textcolor[rgb]{0,0,1}{ⅇ}}^{\textcolor[rgb]{0,0,1}{x}}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{\mathrm{π}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{B}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{0}\right]
The pattern matcher can also handle lists:
\mathrm{patmatch}\left(\left[{ⅇ}^{2x},2,2\right],\left[{ⅇ}^{a::\mathrm{integer}x},a::\mathrm{integer},b::\mathrm{integer}\right],'\mathrm{la}'\right);
\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{la}
\textcolor[rgb]{0,0,1}{\mathrm{true}}
\left[\textcolor[rgb]{0,0,1}{a}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{b}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{2}\right]
With compiletable and tablelook, Maple has the ability to create tables of patterns that are merged for efficient look-up.
compiletable([pattern1=rhs,pattern2=rhs,...]); and tablelook(expr,pattern);
\mathrm{pat}:=\mathrm{compiletable}\left(\left[f\left(a::\left(\mathrm{nonunit}\left(\mathrm{integer}\right)\right) \cdot x\right)=af\left(x\right),g\left(b::\left(\mathrm{nonunit}\left(\mathrm{integer}\right)\right) \cdot x\right)= \frac{1}{b} \cdot g\left(x\right)\right]\right):
\mathrm{tablelook}\left(f\left(2x\right),\mathrm{pat}\right)
\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{f}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{x}\right)
\mathrm{tablelook}\left(g\left(4x\right),\mathrm{pat}\right)
\frac{\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0,0,1}{4}}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{g}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{x}\right)
They can easily be used to create lookup tables for integration and other formulas. (See also the integration example in define.)
\mathrm{tab}≔\left[\mathrm{ln}\left(a∷\mathrm{radnum} \cdot \mathrm{_X}+b∷\mathrm{radnum}\right)=\mathrm{ln}\left(ax+b\right)x+\frac{1}{a} \cdot \mathrm{ln}\left(a x + b\right)b -x-\frac{1}{a} \cdot b,{ⅇ}^{a∷\mathrm{radnum} \cdot \mathrm{_X}}=\frac{1}{a} {ⅇ}^{x},{ⅇ}^{a∷\mathrm{radnum} \cdot {\mathrm{_X}}^{2}+b::\mathrm{radnum}}=\frac{1}{2}\frac{{\mathrm{π}}^{\frac{1}{2}}}{{\left(-a\right)}^{\frac{1}{2}}} \cdot \mathrm{erf}\left({\left(-a\right)}^{\frac{1}{2}}\cdot x\right) \cdot {ⅇ}^{b},{\left(\mathrm{_X}+a::\mathrm{radnum}\right)}^{-1}=\mathrm{ln}\left(x+a\right),\left(a∷\mathrm{radnum} \cdot {\mathrm{_X}}^{n::\mathrm{integer}}=\frac{a}{n+1} \cdot {x}^{n+1}\right)\right]:
\mathrm{tab}:=\mathrm{subs}\left(\mathrm{_X}=x::\mathrm{name},\mathrm{tab}\right):
\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}p≔\mathrm{compiletable}\left(\mathrm{tab}\right):
Now we can use this table:
\mathrm{tablelook}\left({ⅇ}^{\frac{x}{3}},p\right)
\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{}{\textcolor[rgb]{0,0,1}{ⅇ}}^{\textcolor[rgb]{0,0,1}{x}}
\mathrm{tablelook}\left(\frac{1}{2+x},p\right)
\textcolor[rgb]{0,0,1}{\mathrm{ln}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{x}\right)
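For readers outside Maple, the same flavor of structural pattern matching can be sketched in Python with SymPy's Wild symbols. This is only a rough analogue, not Maple syntax: the exclude option plays approximately the role of a type restriction such as realcons by forbidding the pattern variable from containing x.

```python
from sympy import Wild, symbols, sqrt, log, pi, E

x = symbols('x')
# Wild symbols act as pattern variables; exclude=[x] forbids
# matches that contain x, mimicking a::realcons loosely.
a = Wild('a', exclude=[x])
b = Wild('b', exclude=[x])

# Analogue of patmatch(x, a::realcons*x + b::realcons): a -> 1, b -> 0
m1 = x.match(a * x + b)
print(m1[a], m1[b])  # 1 0

# Analogue of matching sqrt(3)*x - ln(4)*pi/5 - exp(1)
expr = sqrt(3) * x - log(4) * pi / 5 - E
m2 = expr.match(a * x + b)
print(m2[a])  # sqrt(3)
```

As in Maple, matching against the sum a*x + b binds b to 0 when no constant term is present.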
|
Theory of relativity/Special relativity/energy - Wikiversity
Theory of relativity/Special relativity/energy
This article presumes that the reader has read Special relativity/momentum.
This article will deduce, on theoretical grounds, the relativistically correct formula for kinetic energy of a moving object. As in the previous article, we will perform a "gedanken experiment" on a collision. As before, the collision will be set up to be perfectly elastic, with no gain or loss of total kinetic energy.
Because of the formula for momentum that was worked out previously, this experiment is simpler than in the previous article. The collision doesn't need to be analyzed in two dimensions—one dimension will suffice.
1 The derivation
2 Intrinsic energy and total energy
3 Controversy about "relativistic mass"
4 The energy/momentum formula
The derivation
The collision in the CM frame. Particles A and B both approach and depart with the same speed.
First, set up the experiment in a reference frame such that the total momentum, before and after, is zero. This means that this is the "center of mass frame" or "CM frame". (Physicists commonly use the CM frame when analyzing collisions.)
The two colliding particles have greatly different masses. Particle A, of small mass
{\displaystyle m\,}
, approaches from the left at speed
{\displaystyle v\,}
. Particle B, of large mass
{\displaystyle Km\,}
, approaches from the right at speed
{\displaystyle r\,}
. They have exactly equal and opposite momenta when approaching. After the collision, each particle moves away at exactly the speed it had before, in the opposite direction. Clearly, momentum and total kinetic energy are both conserved.
We assume that particle A is very much lighter than B, so K is enormous—so much so that particle B's motion is non-relativistic. In fact, we will ultimately take the limit as K goes to infinity and
{\displaystyle r\,}
(B's speed) goes to zero.
Particle A's motion, with speed
{\displaystyle v\,}
, is presumed to be relativistic.
The conservation of momentum puts constraints on the speeds
{\displaystyle r\,}
{\displaystyle v\,}
. From the derivation of the preceding article, the momentum of A before the collision was
{\displaystyle p_{A}={\frac {mv}{\sqrt {1-v^{2}/c^{2}}}}}
and that of B was
{\displaystyle p_{B}={\frac {Kmr}{\sqrt {1-r^{2}/c^{2}}}}}
Since these must exactly cancel each other, we have
{\displaystyle {\frac {v}{\sqrt {1-v^{2}/c^{2}}}}={\frac {Kr}{\sqrt {1-r^{2}/c^{2}}}}}
Some messy high-school algebra gives
{\displaystyle r={\frac {v}{\sqrt {K^{2}-[K^{2}-1]\ v^{2}/c^{2}}}}\qquad \qquad }
The collision in the frame in which B is initially at rest. Particle A's departure speed is slightly less than its approach speed; it transmits energy to B during the collision.
Now we change to a reference frame in which B is at rest prior to the collision. In this frame, A approaches the stationary B from the left at high speed, gives it a nudge, and bounces back to the left at slightly lower speed. In response to the nudge, B moves slowly off to the right. A has given up some of its energy to B. But observers in this frame still insist that total kinetic energy is conserved.
Because this is in only one dimension, we don't need to use the full Lorentz transform to calculate speeds. We can just use the formula for addition of relativistic velocities. The speed
{\displaystyle r\,}
is added to everything.
B's speed before the collision was zero.
B's speed after the collision is
{\displaystyle {\frac {r+r}{1+r^{2}/c^{2}}}={\frac {2r}{1+r^{2}/c^{2}}}}
A's speed before the collision was
{\displaystyle {\frac {v+r}{1+vr/c^{2}}}}
A's velocity after the collision is
{\displaystyle {\frac {r-v}{1-vr/c^{2}}}}
which is negative, but we are only interested in the speed, which is
{\displaystyle {\frac {v-r}{1-vr/c^{2}}}}
The change in A's speed before and after the collision is
{\displaystyle {\frac {v+r}{1+vr/c^{2}}}-{\frac {v-r}{1-vr/c^{2}}}=2r\ {\frac {1-v^{2}/c^{2}}{1-v^{2}r^{2}/c^{4}}}}
{\displaystyle r\,}
is extremely small compared to c, so the denominator is effectively 1, so the change in A's speed is essentially
{\displaystyle \Delta {}v=2r\ (1-v^{2}/c^{2})\,\qquad \qquad }
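The exact identity above (before the small-r approximation) can be spot-checked with exact rational arithmetic; the sketch below works in units where c = 1 and uses arbitrary test speeds.

```python
from fractions import Fraction

v = Fraction(3, 5)     # A's speed in the CM frame (test value)
r = Fraction(1, 1000)  # B's speed in the CM frame (test value)

def rel_add(u, w):
    """Relativistic velocity addition, in units where c = 1."""
    return (u + w) / (1 + u * w)

# A's approach speed minus A's departure speed in B's initial rest frame
lhs = rel_add(v, r) - rel_add(v, -r)
rhs = 2 * r * (1 - v**2) / (1 - v**2 * r**2)
print(lhs == rhs)  # True
```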
Now B's speed after the collision is
{\displaystyle {\frac {2r}{1+r^{2}/c^{2}}}}
{\displaystyle r\,}
is very small, this is essentially
{\displaystyle 2r\,}
. B's mass is
{\displaystyle Km\,}
, so, by the classical formula (E = 1/2 mv2), B's kinetic energy after the collision is essentially
{\displaystyle {\frac {1}{2}}Km\ (2r)^{2}=2\ Km\ r^{2}\,}
Now, by equation (1), we have
{\displaystyle r={\frac {v}{\sqrt {K^{2}-[K^{2}-1]v^{2}/c^{2}}}}}
{\displaystyle K\,}
is huge,
{\displaystyle K^{2}-1\,}
is effectively equal to
{\displaystyle K^{2}\,}
, so this is effectively
{\displaystyle r={\frac {v}{K{\sqrt {1-v^{2}/c^{2}}}}}}
{\displaystyle 2Kmr^{2}={\frac {2mrv}{\sqrt {1-v^{2}/c^{2}}}}}
which is the kinetic energy gained by B and lost by A.
So the kinetic energy lost by A is
{\displaystyle \Delta {}E={\frac {2mrv}{\sqrt {1-v^{2}/c^{2}}}}={\frac {mv}{(1-v^{2}/c^{2})^{3/2}}}\ 2r\ (1-v^{2}/c^{2})}
and, by equation (2),
{\displaystyle \Delta {}v=2r\ (1-v^{2}/c^{2})\,}
{\displaystyle K\,}
go to infinity, so
{\displaystyle r\,}
goes to zero, we get
{\displaystyle {\frac {dE}{dv}}={\frac {mv}{(1-v^{2}/c^{2})^{3/2}}}}
giving the integral:
{\displaystyle E=\int {\frac {mv}{(1-v^{2}/c^{2})^{3/2}}}dv}
{\displaystyle E={\frac {mc^{2}}{\sqrt {1-v^{2}/c^{2}}}}+C}
where C is the usual constant of integration. Since the energy is zero when the speed is zero, we set
{\displaystyle C=-mc^{2}\,}
So the formula for kinetic energy is
{\displaystyle E_{\textrm {kinetic}}={\frac {mc^{2}}{\sqrt {1-v^{2}/c^{2}}}}-mc^{2}}
It follows that an object's kinetic energy grows unboundedly large as its speed approaches c.
A little calculation will show that, in the non-relativistic limit, this reduces to the classical formula:
{\displaystyle E_{\textrm {kinetic}}={\frac {1}{2}}mv^{2}}
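Both claims — unbounded growth near c and the classical limit for small v — are easy to sanity-check numerically, here with illustrative values m = c = 1:

```python
import math

m, c = 1.0, 1.0  # illustrative units

def e_kinetic(v):
    """Relativistic kinetic energy: gamma*m*c^2 - m*c^2."""
    return m * c**2 / math.sqrt(1 - (v / c)**2) - m * c**2

# Near c the kinetic energy grows without bound...
print(e_kinetic(0.9999 * c) > 50 * m * c**2)  # True

# ...while for small v it approaches the classical (1/2) m v^2.
v = 1e-3
classical = 0.5 * m * v**2
print(abs(e_kinetic(v) - classical) / classical < 1e-5)  # True
```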
Intrinsic energy and total energy
It is common to define a particle's intrinsic energy or rest energy as:
{\displaystyle E_{\textrm {intrinsic}}=mc^{2}\,}
and its total energy as:
{\displaystyle E_{\textrm {total}}={\frac {mc^{2}}{\sqrt {1-v^{2}/c^{2}}}}}
Or, using the common abbreviation
{\displaystyle \gamma ={\frac {1}{\sqrt {1-v^{2}/c^{2}}}}}
{\displaystyle E_{\textrm {total}}=\gamma \ mc^{2}}
{\displaystyle E_{\textrm {kinetic}}=E_{\textrm {total}}-E_{\textrm {intrinsic}}\,}
Controversy about "relativistic mass"
There has been controversy about the term "mass" and its symbol
{\displaystyle m\,}
. Many older textbooks use the term intrinsic mass or rest mass or invariant mass for what we call mass, and use the symbol
{\displaystyle m_{0}\,}
to denote it. They use
{\displaystyle m\,}
to denote what they call relativistic mass or effective mass, with the equation:
{\displaystyle m={\frac {m_{0}}{\sqrt {1-v^{2}/c^{2}}}}}
Doing this preserves the formula
{\displaystyle p=mv\,}
, but it is wrong: it makes the meaning of "mass" dependent on the observer.
While the notion of relativistic mass is convenient, it is frame-dependent; the intrinsic mass is the "true" mass. Everyone, in all reference frames, agrees on the intrinsic mass of the proton—it is 1.6726217×10⁻²⁷ kg, and any observer can look it up in a textbook. See [1] for a discussion of this point by a physics educator.
Rather than thinking that a particle's mass increases when it moves, it is better to think of the momentum as increasing with the extra factor of
{\displaystyle \gamma \,}
{\displaystyle E_{\textrm {total}}=\gamma \ mc^{2}\,}
{\displaystyle {\vec {p}}=\gamma \ m{\vec {v}}}
The energy/momentum formula
A Pythagoras-like representation of the relationship among intrinsic energy (on the bottom), momentum (on the right), and total energy (on the hypotenuse). The author of this diagram used the "m0" notation. The segment labeled "K" is the kinetic energy.
A little algebra will show that
{\displaystyle {\frac {m^{2}c^{4}}{1-v^{2}/c^{2}}}={\frac {m^{2}v^{2}c^{2}}{1-v^{2}/c^{2}}}+m^{2}c^{4}}
{\displaystyle E_{\textrm {total}}^{2}=(pc)^{2}+E_{\textrm {intrinsic}}^{2}}
which gives a Pythagoras-like formula relating the momentum and the energy.
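The Pythagorean relation holds for any v < c; a quick numerical check with hypothetical values m = 2 and v = 0.6c:

```python
import math

m, c, v = 2.0, 1.0, 0.6  # hypothetical test values, units with c = 1

gamma = 1 / math.sqrt(1 - (v / c)**2)
e_total = gamma * m * c**2   # total energy
p = gamma * m * v            # relativistic momentum
e_intrinsic = m * c**2       # rest energy

print(math.isclose(e_total**2, (p * c)**2 + e_intrinsic**2))  # True
```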
The next article in this series is Special relativity/E = mc².
An alternative derivation, from physicsforums.com
Special relativity/momentum
↑ See this article (from the web archive) from www.worldscientific.com.
|
Blow-up of Solutions for a Class of Nonlinear Parabolic Equations | EMS Press
Blow-up of Solutions for a Class of Nonlinear Parabolic Equations
In this paper, the blow-up of solutions for a class of nonlinear parabolic equations
u_t(x,t)=\nabla _{x}(a(u(x,t))b(x)c(t)\nabla _{x}u(x,t))+g(x,|\nabla _{x}u(x,t) |^2,t)f(u(x,t))
with mixed boundary conditions is studied. By constructing an auxiliary function and using Hopf's maximum principles, an existence theorem of blow-up solutions, upper bound of ``blow-up time" and upper estimates of ``blow-up rate" are given under suitable assumptions on
a, b,c, f, g
, initial data and suitable mixed boundary conditions. The obtained result is illustrated through an example in which
a, b,c, f, g
are power functions or exponential functions.
Zhang Lingling, Blow-up of Solutions for a Class of Nonlinear Parabolic Equations. Z. Anal. Anwend. 25 (2006), no. 4, pp. 479–486
|
40 CFR § 61.245 - Test methods and procedures. | CFR | US Law | LII / Legal Information Institute
Subpart V - National Emission Standard for Equipment Leaks (Fugitive Emission Sources)
40 CFR § 61.245 - Test methods and procedures.
(a) Each owner or operator subject to the provisions of this subpart shall comply with the test methods and procedures requirements provided in this section.
(b) Monitoring, as required in §§ 61.242, 61.243, 61.244, and 61.135, shall comply with the following requirements:
(1) Monitoring shall comply with Method 21 of appendix A of 40 CFR part 60.
(2) The detection instrument shall meet the performance criteria of Method 21.
(3) The instrument shall be calibrated before use on each day of its use by the procedures specified in Method 21.
(4) Calibration gases shall be:
(i) Zero air (less than 10 ppm of hydrocarbon in air); and
(ii) A mixture of methane or n-hexane and air at a concentration of approximately, but less than, 10,000 ppm methane or n-hexane.
(5) The instrument probe shall be traversed around all potential leak interfaces as close to the interface as possible as described in Method 21.
(c) When equipment is tested for compliance with or monitored for no detectable emissions, the owner or operator shall comply with the following requirements:
(1) The requirements of paragraphs (b) (1) through (4) shall apply.
(2) The background level shall be determined, as set forth in Method 21.
(4) The arithmetic difference between the maximum concentration indicated by the instrument and the background level is compared with 500 ppm for determining compliance.
(1) Each piece of equipment within a process unit that can conceivably contain equipment in VHAP service is presumed to be in VHAP service unless an owner or operator demonstrates that the piece of equipment is not in VHAP service. For a piece of equipment to be considered not in VHAP service, it must be determined that the percent VHAP content can be reasonably expected never to exceed 10 percent by weight. For purposes of determining the percent VHAP content of the process fluid that is contained in or contacts equipment, procedures that conform to the methods described in ASTM Method D-2267 (incorporated by reference as specified in § 61.18) shall be used.
(i) An owner or operator may use engineering judgment rather than the procedures in paragraph (d)(1) of this section to demonstrate that the percent VHAP content does not exceed 10 percent by weight, provided that the engineering judgment demonstrates that the VHAP content clearly does not exceed 10 percent by weight. When an owner or operator and the Administrator do not agree on whether a piece of equipment is not in VHAP service, however, the procedures in paragraph (d)(1) of this section shall be used to resolve the disagreement.
(ii) If an owner or operator determines that a piece of equipment is in VHAP service, the determination can be revised only after following the procedures in paragraph (d)(1) of this section.
(3) Samples used in determining the percent VHAP content shall be representative of the process fluid that is contained in or contacts the equipment or the gas being combusted in the flare.
(1) Method 22 of appendix A of 40 CFR part 60 shall be used to determine compliance of flares with the visible emission provisions of this subpart.
{H}_{T}=K\left(\sum _{i=1}^{n}{C}_{i}{H}_{i}\right)
HT = Net heating value of the sample, MJ/scm (BTU/scf); where the net enthalpy per mole of offgas is based on combustion at 25 °C and 760 mm Hg (77 °F and 14.7 psi), but the standard temperature for determining the volume corresponding to one mole is 20 °C (68 °F).
K = conversion constant, 1.740 × 10⁻⁷ (g-mole) (MJ)/(ppm-scm-kcal) (metric units); or 4.674 × 10⁻⁶ ((g-mole) (Btu)/(ppm-scf-kcal)) (English units)
Ci = Concentration of sample component “i” in ppm, as measured by Method 18 of appendix A to 40 CFR part 60 and ASTM D2504-67, 77, or 88 (Reapproved 1993) (incorporated by reference as specified in § 61.18).
Hi = net heat of combustion of sample component “i” at 25 °C and 760 mm Hg (77 °F and 14.7 psi), kcal/g-mole. The heats of combustion may be determined using ASTM D2382-76 or 88 or D4809-95 (incorporated by reference as specified in § 61.18) if published values are not available or cannot be calculated.
(4) The actual exit velocity of a flare shall be determined by dividing the volumetric flowrate (in units of standard temperature and pressure), as determined by Method 2, 2A, 2C, or 2D, as appropriate, by the unobstructed (free) cross section area of the flare tip.
{V}_{max}={K}_{1}+{K}_{2}{H}_{T}
HT = Net heating value of the gas being combusted, as determined in paragraph (e)(3) of this section, MJ/scm (Btu/scf).
K2 = 0.7084 m⁴/(MJ-sec) (metric units)
= 0.087 ft⁴/(Btu-sec) (English units)
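As a worked illustration of the H_T summation (not part of the regulation), the metric form with K = 1.740 × 10⁻⁷ applied to a pure-methane offgas stream recovers roughly the familiar lower heating value of methane. The net heat of combustion used here (≈ 191.8 kcal/g-mole) is an approximate literature value, not taken from this subpart.

```python
def net_heating_value(components, K=1.740e-7):
    """H_T = K * sum(C_i * H_i), metric units (MJ/scm).

    components: list of (C_i in ppm, H_i in kcal/g-mole) pairs.
    """
    return K * sum(c_i * h_i for c_i, h_i in components)

# Pure methane: 10^6 ppm, net heat of combustion ~191.8 kcal/g-mole
# (approximate literature value, for illustration only).
h_t = net_heating_value([(1.0e6, 191.8)])
print(round(h_t, 1))  # 33.4 MJ/scm
```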
[49 FR 23513, June 6, 1984, as amended at 49 FR 38946, Oct. 2, 1984; 49 FR 43647, Oct. 31, 1984; 53 FR 36972, Sept. 23, 1988; 54 FR 38077, Sept. 14, 1989; 65 FR 62158, Oct. 17, 2000]
|
Estimation of ground motion at deep-soil sites in eastern North America
A preliminary descriptive model for the distance dependence of the spectral decay parameter in southern California
Site amplification from S-wave coda in the Long Valley caldera region, California
Kevin Mayeda; Stuart Koyanagi; Keiiti Aki
Study of the propagation and amplification of seismic waves in Caracas Valley with reference to the 29 July 1967 earthquake: SH waves
Apostolos S. Papageorgiou; Jaekwan Kim
Francisco J. Sánchez-Sesma; Michel Campillo
Local magnitude and source parameters for earthquakes in the Peninsular Ranges of Baja California, Mexico
Antonio Vidal; Luis Mungúia
R. R. Castro; J. G. Anderson; J. N. Brune
Allison L. Bent; Donald V. Helmberger
Kuo-Fong Ma; Hiroo Kanamori
Beiyuan Liang; Max Wyss
Masayuki Kikuchi; Hiroo Kanamori
Time domain waveform inversion—A frequency domain view: How well we need to match waveforms?
Zoltan A. Der; Robert H. Shumway; Michael R. Hirano
Near-source effects on regional seismograms: An analysis of the NTS explosions PERA and QUESO
Steven R. Taylor; John T. Rambo; Robert P. Swift
A waveform correlation method for identifying quarry explosions
D. B. Harris
Three-component analysis of regional phases at NORESS and ARCESS: Polarization and phase identification
Anne Suteau-Henson
Lateral velocity variations in the Andean foreland in Argentina determined with the JHD method
J. Pujol; J. M. Chiu; R. Smalley, Jr.; M. Regnier; B. Isacks; J. L. Chatelain; J. Vlasity; D. Vlasity; J. Castano; N. Puebla
A three-component borehole seismometer for earthquake seismology
Hsi-Ping Liu; Richard E. Warrick; Robert E. Westerlund; Jon B. Fletcher
Seismological notes—November 1990-February 1991
SSA minutes and report
Qc site dependence in the Granada basin (southern Spain)
J. Morales; J. M. Ibáñez; F. Vidal; F. de Miguel; G. Alguacil; A. M. Posadas
Loading of faults to failure
Would it have been possible to predict the 30 August 1986 Vrancea earthquake?
Mircea Radulian; Cezar-Ioan Trifu
The isolation of receiver effects from teleseismic P waveforms
Seismic signal detection—A better mousetrap?
Roland G. Roberts; Anders Christoffersson
A fundamental earthquake problem
P-wave residuals at Fiji from deep earthquakes in the Tonga subduction zone
G. Prasad; G. Bock
Bulletin of the Seismological Society of America December 01, 1991, Vol.81, 2529. doi:https://doi.org/10.1785/BSSA0810062529A
Simulated ground motions for hypothesized Mw = 8 subduction earthquakes in Washington and Oregon
B. P. Cohee; P. G. Somerville; N. A. Abrahamson
Bulletin of the Seismological Society of America December 01, 1991, Vol.81, 2529. doi:https://doi.org/10.1785/BSSA0810062529B
Call for papers: 1992 Annual Meeting Seismological Society of America
Medal nominations invited
|
Introduction to DP · USACO Guide
Authors: Michael Cao, Benjamin Qi, Neo Wang
Speeding up naive recursive solutions with memoization.
Gold - Modular Arithmetic
Dynamic Programming (DP) is an important algorithmic technique in Competitive Programming, from the gold division to competitions like the International Olympiad in Informatics. By breaking down the full task into sub-problems, DP avoids the redundant computations of brute force solutions.
Although it is not too difficult to grasp the general ideas behind DP, the technique can be used in a diverse range of problems and is a must-know idea for competitors in the USACO Gold division.
Great introduction that covers most classical problems. Mentions memoization.
DP from Novice to Advanced
General tutorial, great for all skill levels
Contains examples with nonclassical problems
Describes many ways to solve the example problem + additional classical examples
Covers classical problems
Dynamic Programming for Computing Contests
If you prefer watching videos instead, here are some options:
Errichto DP #1 - Fibonacci, iteration vs recursion
Great introduction video
Errichto DP #2 - Coin change, double counting
Errichto DP video regarding coin change
Errichto DP #3 - Line of Wines
Errichto DP problem editorial
WilliamFiset DP Videos
Animated DP videos that pertain to interview questions
It's usually a good idea to write a slower solution first. For example, if the complexity required for full points is
\mathcal{O}(N)
and you come up with a simple
\mathcal{O}(N^2)
solution, then you should definitely type that up first and earn some partial credit. Afterwards, you can rewrite parts of your slow solution until it is of the desired complexity. The slow solution might also serve as something to stress test against.
Example - Frog 1
AC - Easy
The problem asks us to compute the minimum total cost it takes for a frog to travel from stone 1 to stone N (N \le 10^5), given that the frog can only jump forward a distance of one or two stones. The cost to travel between stones i and j is |h_i - h_j|, where h_i represents the height of stone i.
Without Dynamic Programming
\mathcal{O}(2^N)
Since there are only two options, we can use recursion to compute what would happen if we jumped either 1 stone or 2 stones. Each recursive call branches into two more calls, so every additional jump doubles the amount of work, which results in an exponential time complexity.
However, this can be sped up with dynamic programming by keeping track of "optimal states" in order to avoid calculating states multiple times. For example, the jump sequences 1,2,1 and 2,1,2 both pass through stone 4, so the work done from that stone onward would otherwise be repeated. Dynamic programming provides the mechanism to cache such states.
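To make the caching concrete, here is a sketch in Python (variable and function names are ours, not from the original) of the same recursion sped up with memoization — `lru_cache` collapses the exponential call tree into O(N) distinct states:

```python
from functools import lru_cache

def min_frog_cost(heights):
    """Minimum total cost for the frog to reach the last stone,
    jumping forward 1 or 2 stones at a time (0-indexed heights)."""
    n = len(heights)

    @lru_cache(maxsize=None)
    def solve(i):
        # solve(i) = min cost to travel from stone i to stone n - 1.
        if i >= n - 1:
            return 0
        best = abs(heights[i] - heights[i + 1]) + solve(i + 1)
        if i + 2 < n:
            best = min(best, abs(heights[i] - heights[i + 2]) + solve(i + 2))
        return best

    return solve(0)
```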
With Dynamic Programming
\mathcal{O}(N)
There are only two options: jumping once or jumping twice. Define \texttt{dp}[i] as the minimum cost to reach stone i, so that \texttt{dp}[i+1] and \texttt{dp}[i+2] represent the next two stones. Then the transitions from stone i are:
Jump one stone, incurring a cost of
|\text{height}_i - \text{height}_{i+1}|
\texttt{dp}[i + 1] = \min(\texttt{dp}[i + 1], \texttt{dp}[i] + |\text{height}_i - \text{height}_{i + 1}|)
Jump two stones, incurring a cost of
|\text{height}_i - \text{height}_{i + 2}|
\texttt{dp}[i + 2] = \min(\texttt{dp}[i + 2], \texttt{dp}[i] + |\text{height}_i - \text{height}_{i + 2}|)
We can start with the base case \texttt{dp}[1] = 0, since the frog is already on that stone, and proceed to calculate \texttt{dp}[1], \texttt{dp}[2], \ldots, \texttt{dp}[N]. Note that in the code we ignore transitions to \texttt{dp}[i] for i > N.
// height is 1-indexed so it can match up with dp
int height[MAX_N + 1];
// dp[N] is the minimum cost to get to the Nth stone
int dp[MAX_N + 1];
# height is 1-indexed so it can match up with dp
height = [0] + [int(s) for s in input().split()]
assert N == len(height) - 1
# dp[N] is the minimum cost to get to the Nth stone
# (we initially set all values to INF)
dp = [float("inf") for _ in range(N + 1)]
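Putting the pieces above together, a complete bottom-up solution might look like the following Python sketch (the helper name is ours; `height[0]` is a dummy so the indices stay 1-indexed as in the snippets above):

```python
def frog_min_cost(height):
    """height[1..n] holds the stone heights (height[0] is a dummy)."""
    INF = float("inf")
    n = len(height) - 1
    dp = [INF] * (n + 1)  # dp[i] = min cost to reach stone i
    dp[1] = 0
    for i in range(1, n + 1):
        if i + 1 <= n:  # jump one stone
            dp[i + 1] = min(dp[i + 1], dp[i] + abs(height[i] - height[i + 1]))
        if i + 2 <= n:  # jump two stones
            dp[i + 2] = min(dp[i + 2], dp[i] + abs(height[i] - height[i + 2]))
    return dp[n]
```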
The next few modules provide examples of some classical problems: dynamic programming problems that are well known. However, classical doesn't necessarily mean common. Since so many competitors know these problems, problemsetters rarely set direct applications of them.
DP Section
You should know how to do all of these once you're finished with the DP section. Editorials are available here.
DP Contest
Some tasks are beyond the scope of Gold. Editorials are available here.
Beginner DP Contest
Beginner-friendly classical problems. Some tasks require input/output files. The solutions can be found here.
DP Practice Problems
Good practice problems. You should be able to do most of these after completing the Gold DP module. Some problems might be out of scope for Gold.
Some of these problems will be mentioned in the next few modules.
Easier problems that don't require many optimizations or complex states.
Note - Ordering of DP Modules
You are not expected to complete all of the problems below before starting the other DP modules. In particular, we recommend that you begin with the "easy" problems from the knapsack module if this is your first encounter with DP.
Easy Show Tags DP
Hoof Paper Scissors
Time is Mooney
Hard Show Tags BFS, DP
Harder USACO
Circular Barn Revisited
Hard Show Tags DP
Taming the Herd
Moortal Cowmbat
Hard Show Tags APSP, DP, Prefix Sums
Very Hard Show Tags DP
|
Revision as of 08:29, 9 September 2013 by J. Ashley Burgoyne (talk | contribs) (→Bibliography)
{\displaystyle {\textrm {CSR}}={\frac {\text{total duration of segments where annotation equals estimation}}{\text{total duration of annotated segments}}}}
{\displaystyle Q=1-{\frac {\text{maximum of directional Hamming distances in either direction}}{\text{total duration of song}}}}
|
Ohm's law - Simple English Wikipedia, the free encyclopedia
Ohm's law says that in an electrical circuit, the current passing through a resistor is directly proportional to the voltage across it and inversely proportional to its resistance, as long as the physical conditions and the temperature of the conductor stay the same. Because there are three variables, the law can be written in three ways, depending on which variable is placed on the left of the equals sign:
{\displaystyle I={\frac {V}{R}}\quad {\text{or}}\quad V=IR\quad {\text{or}}\quad R={\frac {V}{I}}}
Current, Voltage, and Resistance[change | change source]
Voltage[change | change source]
Current is the rate at which electric charge flows: the more charge that passes a point each second, the larger the current. Current has to do with electrons flowing in a circuit. The unit of current is the “ampere” (often shortened to “amps”). The letter “I” is usually used to represent current, from the French intensité du courant (current intensity).
Resistance is how much the circuit resists the flow of charge. This makes sure the charge does not flow too fast and damage the components. In a circuit, a light bulb can be a resistor. If electrons flow through the light bulb, then the light bulb will light up. If the resistance is higher, the lamp will be dimmer. The unit of resistance is the “ohm”, written with the Greek letter “Ω” (omega); it is named after Georg Ohm, who discovered Ohm’s law.[1]
How Current, Voltage, and Resistance are related[change | change source]
Current, voltage, and resistance are related, and the relation is called “Ohm’s law”. The unit of resistance (also named an "ohm") is defined so that 1 ohm is the resistance between two points in a conductor where applying 1 volt will push 1 ampere, or 6.241×10^18 electrons per second, through.[2] This takes energy, which (depending on the component the charge is flowing through) is usually lost as heat.
Find all values in the circuit[change | change source]
For example, suppose we know that the voltage across a light bulb is 20 V and that the bulb's resistance is 10 Ω. The remaining unknown variable is the current, which the Ohm’s law formula gives directly: with the two known variables V (voltage) and R (resistance), I = V/R = 20/10 = 2 amps.
A problem always gives enough information to solve for the remaining value; the only thing to memorize is the Ohm’s law formula, which is then applied to what is given to find the unknown part.
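The worked example can be sketched in a few lines of Python (the function name is ours): given any two of the three quantities, Ohm's law yields the third.

```python
def solve_ohm(V=None, I=None, R=None):
    """Given any two of voltage V (volts), current I (amperes),
    and resistance R (ohms), return the third using Ohm's law."""
    if I is None:
        return V / R  # I = V / R
    if V is None:
        return I * R  # V = I * R
    return V / I      # R = V / I
```

For the example above, `solve_ohm(V=20, R=10)` gives a current of 2 amps.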
↑ CTaylor. "Voltage, Current, Resistance, and Ohm's Law". sparkfun. SparkFun Electronics. Retrieved 10 June 2016.
↑ "How Voltage, Current, and Resistance Relate". all about circuit. EETech Media, LLC. 6 June 2016. Retrieved 10 June 2016.
"Ohms law calculator | Calculate Voltage power Resistance and Current". Archived from the original on 2019-08-21.
Online ohm's law calculator Archived 2019-08-21 at the Wayback Machine
Ohm's Law worksheet on All About Circuits
Ohm Law Archived 2014-05-13 at the Wayback Machine: Electronics for Beginners
Calculator - Ohm's law in the DC circuit
Retrieved from "https://simple.wikipedia.org/w/index.php?title=Ohm%27s_law&oldid=8156301"
|
Correspondence to: *E-mail: kjungyun@cu.ac.kr
Wheel linear accelerations, Rollover safety, Tire vertical force, LTR, Road test
Wheel accelerations, Rollover safety, Tire vertical force, Lateral load transfer ratio, Vehicle road test
Rollover safety test car trajectory and behavior(https://www.kotsa.or.kr/)
LTR\stackrel{\scriptscriptstyle\mathrm{def}}{=}\frac{{F}_{zR}-{F}_{zL}}{{F}_{zL}+{F}_{zR}}
\begin{array}{c}{F}_{zFL}=\frac{{m}_{f} }{2}\left[{a}_{zFL}-\left({a}_{yFL}+{a}_{yFR}\right)\cdot \frac{h}{t{r}_{f}}\right]\hfill \\ -\left[\frac{{m}_{f}}{2}\cdot {a}_{xFL}+\frac{{m}_{r}}{2}\cdot {a}_{xRL}\right]\cdot \frac{h}{l}\hfill \\ {F}_{zFR}=\frac{{m}_{f} }{2}\left[{a}_{zFR}+\left({a}_{yFL}+{a}_{yFR}\right)\cdot \frac{h}{t{r}_{f}}\right]\hfill \\ -\left[\frac{{m}_{f}}{2}\cdot {a}_{xFR}+\frac{{m}_{r}}{2}\cdot {a}_{xRR}\right]\cdot \frac{h}{l}\hfill \\ {F}_{zRL}=\frac{{m}_{r} }{2}\left[{a}_{zRL}-\left({a}_{yRL}+{a}_{yRR}\right)\cdot \frac{h}{t{r}_{r}}\right]\hfill \\ +\left[\frac{{m}_{f}}{2}\cdot {a}_{xFL}+\frac{{m}_{r}}{2}\cdot {a}_{xRL}\right]\cdot \frac{h}{l}\hfill \\ {F}_{zRR}=\frac{{m}_{r} }{2}\left[{a}_{zRR}+\left({a}_{yRL}+{a}_{yRR}\right)\cdot \frac{h}{t{r}_{r}}\right]\hfill \\ +\left[\frac{{m}_{f}}{2}\cdot {a}_{xFR}+\frac{{m}_{r}}{2}\cdot {a}_{xRR}\right]\cdot \frac{h}{l}\hfill \end{array}
\begin{array}{c}LTR\equiv \frac{{F}_{zR}-{F}_{zL}}{{F}_{zL}+{F}_{zR}}\hfill \\ =\frac{1}{\frac{{m}_{f}}{2}\left[{a}_{zFL}+{a}_{zFR}\right]+\frac{{m}_{r}}{2}\left[{a}_{zRL}+{a}_{zRR}\right]}\times \hfill \\ \left[\frac{{m}_{f}}{2}\left({a}_{zFR}-{a}_{zFL}\right)+{m}_{f}\cdot \left({a}_{yFL}+{a}_{yFR}\right)\cdot \frac{h}{t{r}_{f}}+\right.\hfill \\ \left.\frac{{m}_{r}}{2}\left({a}_{zRR}-{a}_{zRL}\right)+{m}_{r}\cdot \left({a}_{yRL}+{a}_{yRR}\right)\cdot \frac{h}{t{r}_{r}}\right]\hfill \end{array}
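As a minimal illustration of the LTR definition above (a sketch, not code from the paper; the function name is ours), the ratio can be computed from the summed left-side and right-side tire vertical forces:

```python
def lateral_load_transfer_ratio(f_z_left, f_z_right):
    """LTR = (F_zR - F_zL) / (F_zL + F_zR).

    0 when the load is balanced between the two sides; the magnitude
    approaches 1 as one side unloads, with |LTR| = 1 at wheel lift-off.
    """
    return (f_z_right - f_z_left) / (f_z_left + f_z_right)
```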
|
Johnson's algorithm - Wikipedia
An algorithm to find the shortest paths between all pairs of vertices in an edge-weighted directed graph
{\displaystyle O(|V|^{2}\log |V|+|V||E|)}
For the scheduling algorithm of the same name, see Job shop scheduling.
Johnson's algorithm is a way to find the shortest paths between all pairs of vertices in an edge-weighted directed graph. It allows some of the edge weights to be negative numbers, but no negative-weight cycles may exist. It works by using the Bellman–Ford algorithm to compute a transformation of the input graph that removes all negative weights, allowing Dijkstra's algorithm to be used on the transformed graph.[1][2] It is named after Donald B. Johnson, who first published the technique in 1977.[3]
A similar reweighting technique is also used in Suurballe's algorithm for finding two disjoint paths of minimum total length between the same two vertices in a graph with non-negative edge weights.[4]
Algorithm description[edit]
Johnson's algorithm consists of the following steps:[1][2]
First, a new node q is added to the graph, connected by zero-weight edges to each of the other nodes.
Second, the Bellman–Ford algorithm is used, starting from the new vertex q, to find for each vertex v the minimum weight h(v) of a path from q to v. If this step detects a negative cycle, the algorithm is terminated.
Next the edges of the original graph are reweighted using the values computed by the Bellman–Ford algorithm: an edge from u to v, having length
{\displaystyle w(u,v)}
, is given the new length w(u,v) + h(u) − h(v).
Finally, q is removed, and Dijkstra's algorithm is used to find the shortest paths from each node s to every other vertex in the reweighted graph. The distance D(u, v) in the original graph is then recovered by adding h(v) − h(u) to the distance returned by Dijkstra's algorithm.
The first three stages of Johnson's algorithm are depicted in the illustration below.
The graph on the left of the illustration has two negative edges, but no negative cycles. The center graph shows the new vertex q, a shortest path tree as computed by the Bellman–Ford algorithm with q as starting vertex, and the values h(v) computed at each other node as the length of the shortest path from q to that node. Note that these values are all non-positive, because q has a length-zero edge to each vertex and the shortest path can be no longer than that edge. On the right is shown the reweighted graph, formed by replacing each edge weight
{\displaystyle w(u,v)}
by w(u,v) + h(u) − h(v). In this reweighted graph, all edge weights are non-negative, but the shortest path between any two nodes uses the same sequence of edges as the shortest path between the same two nodes in the original graph. The algorithm concludes by applying Dijkstra's algorithm to each of the four starting nodes in the reweighted graph.
In the reweighted graph, all paths between a pair s and t of nodes have the same quantity h(s) − h(t) added to them. This can be proven as follows: let p be an s–t path. Its weight W in the reweighted graph is given by the following expression:
{\displaystyle {\bigl (}w(s,p_{1})+h(s)-h(p_{1}){\bigr )}+{\bigl (}w(p_{1},p_{2})+h(p_{1})-h(p_{2}){\bigr )}+...+{\bigl (}w(p_{n},t)+h(p_{n})-h(t){\bigr )}.}
Every term
{\displaystyle +h(p_{i})}
is cancelled by
{\displaystyle -h(p_{i})}
in the previous bracketed expression; therefore, we are left with the following expression for W:
{\displaystyle {\bigl (}w(s,p_{1})+w(p_{1},p_{2})+...+w(p_{n},t){\bigr )}+h(s)-h(t)}
The bracketed expression is the weight of p in the original weighting.
Since the reweighting adds the same amount to the weight of every
{\displaystyle s-t}
path, a path is a shortest path in the original weighting if and only if it is a shortest path after reweighting. The weight of edges that belong to a shortest path from q to any node is zero, and therefore the lengths of the shortest paths from q to every node become zero in the reweighted graph; however, they still remain shortest paths. Therefore, there can be no negative edges: if edge uv had a negative weight after the reweighting, then the zero-length path from q to u together with this edge would form a negative-length path from q to v, contradicting the fact that all vertices have zero distance from q. The non-existence of negative edges ensures the optimality of the paths found by Dijkstra's algorithm. The distances in the original graph may be calculated from the distances calculated by Dijkstra's algorithm in the reweighted graph by reversing the reweighting transformation.[1]
The time complexity of this algorithm, using Fibonacci heaps in the implementation of Dijkstra's algorithm, is
{\displaystyle O(|V|^{2}\log |V|+|V||E|)}
: the algorithm uses
{\displaystyle O(|V||E|)}
time for the Bellman–Ford stage of the algorithm, and
{\displaystyle O(|V|\log |V|+|E|)}
time for each of the
{\displaystyle |V|}
instantiations of Dijkstra's algorithm. Thus, when the graph is sparse, the total time can be faster than the Floyd–Warshall algorithm, which solves the same problem in time
{\displaystyle O(|V|^{3})}
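A compact sketch of the four steps above (in Python; names are ours, not from the cited sources). Initializing h to all zeros is equivalent to running Bellman–Ford from the virtual source q with its zero-weight edges:

```python
import heapq

def johnson(n, edges):
    """All-pairs shortest paths; vertices 0..n-1, edges = [(u, v, w)].
    Negative weights allowed; returns None if a negative cycle exists,
    otherwise dist[u][v] (float('inf') when v is unreachable from u)."""
    INF = float("inf")
    h = [0] * n  # potentials from the virtual source q
    for _ in range(n):
        updated = False
        for u, v, w in edges:
            if h[u] + w < h[v]:
                h[v] = h[u] + w
                updated = True
        if not updated:
            break
    else:
        return None  # still relaxing after n passes: negative cycle

    # Reweight: w'(u, v) = w + h(u) - h(v) is non-negative.
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w + h[u] - h[v]))

    dist = []
    for s in range(n):
        d = [INF] * n
        d[s] = 0
        pq = [(0, s)]
        while pq:
            du, u = heapq.heappop(pq)
            if du > d[u]:
                continue  # stale priority-queue entry
            for v, w in adj[u]:
                if du + w < d[v]:
                    d[v] = du + w
                    heapq.heappush(pq, (d[v], v))
        # Undo the reweighting: original = reweighted + h(v) - h(s).
        dist.append([d[v] + h[v] - h[s] if d[v] < INF else INF
                     for v in range(n)])
    return dist
```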
^ a b c d Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001), Introduction to Algorithms, MIT Press and McGraw-Hill, ISBN 978-0-262-03293-3 . Section 25.3, "Johnson's algorithm for sparse graphs", pp. 636–640.
^ a b Black, Paul E. (2004), "Johnson's Algorithm", Dictionary of Algorithms and Data Structures, National Institute of Standards and Technology .
^ Johnson, Donald B. (1977), "Efficient algorithms for shortest paths in sparse networks", Journal of the ACM, 24 (1): 1–13, doi:10.1145/321992.321993, S2CID 207678246 .
^ Suurballe, J. W. (1974), "Disjoint paths in a network", Networks, 14 (2): 125–145, doi:10.1002/net.3230040204 .
Boost: All Pairs Shortest Paths
Retrieved from "https://en.wikipedia.org/w/index.php?title=Johnson%27s_algorithm&oldid=1070050912"
|
Shortest Paths with Non-Negative Edge Weights · USACO Guide
Somewhat Frequent
Shortest Paths with Non-Negative Edge Weights
Authors: Benjamin Qi, Andi Qu, Qi Wang, Neo Wang
Introduces Bellman-Ford, Floyd-Warshall, Dijkstra.
Silver - (Optional) C++ Sets with Custom Comparators
Nearly all Gold shortest path problems involve Dijkstra. However, it's a good idea to learn Bellman-Ford and Floyd-Warshall first since they're simpler.
13.1 - Bellman-Ford
up to but not including "Negative Cycles"
13.3 - Floyd-Warshall
example calculation, code
code, why it works
12.3.3 - Floyd-Warshall
4.5 - All-Pairs Shortest Paths
Optional: Incorrect Floyd-Warshall
A common mistake in implementing the Floyd–Warshall algorithm is to misorder the triply nested loops (the correct order is KIJ). The incorrect IJK and IKJ variants do not give correct solutions for some instances. However, it can be shown that if either incorrect variant is repeated three times, the correct solutions are obtained.
It should be emphasized that these fixes (repeating an incorrect algorithm three times) have the same time complexity as the correct Floyd–Warshall algorithm up to constant factors. So if you are ever unsure of the order of the triply nested loops, you can repeat the procedure three times just to be safe.
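For reference, the correct KIJ ordering looks like this (a Python sketch; the function name is ours). With k outermost, after iteration k the matrix holds shortest paths using only intermediate vertices 0..k:

```python
def floyd_warshall(dist):
    """In-place all-pairs shortest paths. dist[i][j] is the direct edge
    weight (float('inf') if absent, 0 on the diagonal)."""
    n = len(dist)
    for k in range(n):        # k MUST be the outermost loop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```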
Shortest Routes II
This problem asks us to compute shortest paths between any two vertices. Hence, Floyd-Warshall is suitable because of the low N (N \le 500) and its support for negative weights.
\mathcal{O}(N^3)
Used as the first step of the following:
Hard Show Tags APSP, DP
\mathcal{O}(N^2)
Dijkstra (Dense Graphs)
\mathcal{O}(M\log N)
13.2 - Dijkstra
Dijkstra (Sparse Graphs)
8 - Shortest Paths
4.4.3 - SSSP on Weighted Graph
\mathcal{O}(M\log N)
Shortest Routes I
\mathcal{O}(N + M\log N)
Recall from the second prerequisite module that we can use greater<> to make the top element of a priority queue the least instead of the greatest. Alternatively, you can negate distances before placing them into the priority queue, but that's more confusing.
vector<pair<int, int>> graph[100000];
// Adjacency list of (neighbour, edge weight)
public K a;
public V b;
public Pair(K a, V b) {
It's important to include continue. This ensures that when all edge weights are non-negative, we will never go through the adjacency list of any vertex more than once. Removing it will cause TLE on the last test case of the above problem!
The last test case contains 100000 destinations and 149997 flights. City 1 has flights to cities 2 through 50000. Cities 2 through 50000 have flights to city 50001. City 50001 has flights to cities 50002 through 100000. Without the continue, after the program pops cities 1 through 50000 off the queue, the priority queue will contain 49999 routes that end at city 50001. Every one of the 49999 times city 50001 is popped off the queue and processed, we must iterate over all of its outgoing flights (to cities 50002 through 100000). This results in a runtime of
\Theta(N^2\log N)
, which will TLE.
On the other hand, if we did include the continue, the program will never iterate through the adjacency list of city 50001 after processing it for the first time.
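A minimal Python sketch of this "lazy deletion" pattern (names are ours) shows where the continue belongs — stale priority-queue entries are skipped before their adjacency lists are scanned:

```python
import heapq

def dijkstra(adj, src):
    """adj[u] = list of (v, w) with non-negative w; returns dist list."""
    dist = [float("inf")] * len(adj)
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale entry: u was already finalized with smaller d
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist
```

The continue guarantees each vertex's adjacency list is scanned at most once, which is exactly what breaks down in the worst case described above if it is removed.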
Optional: Faster Dijkstra
Can be done in
\mathcal{O}(M+N\log N)
with Fibonacci heap. In practice though, this is rarely faster, since the Fibonacci heap has a bad constant factor.
Easy Show Tags SP
Normal Show Tags SP
Normal Show Tags SP, Coordinate Compression, Binary Search, DP
Hard Show Tags SP
2018 - Commuter Pass
Hard Show Tags DP, SP
Hard Show Tags Geometry, SP
Balkan OI
2012 - Shortest Paths
Very Hard Show Tags SP, Stack
|
Speed and direction of motion of an object
As a change of direction occurs while the racing cars turn on the curved track, their velocity is not constant.
{\displaystyle {\textbf {F}}={\frac {d}{dt}}(m{\textbf {v}})}
Constant velocity vs acceleration
Main article: Equation of motion
{\displaystyle {\boldsymbol {\bar {v}}}={\frac {\Delta {\boldsymbol {x}}}{\Delta t}}.}
In terms of a displacement-time (x vs. t) graph, the instantaneous velocity (or, simply, velocity) can be thought of as the slope of the tangent line to the curve at any point, and the average velocity as the slope of the secant line between two points with t coordinates equal to the boundaries of the time period for the average velocity.
{\displaystyle {\boldsymbol {\bar {v}}}={1 \over t_{1}-t_{0}}\int _{t_{0}}^{t_{1}}{\boldsymbol {v}}(t)\ dt,}
where we may identify
{\displaystyle \Delta {\boldsymbol {x}}=\int _{t_{0}}^{t_{1}}{\boldsymbol {v}}(t)\ dt}
{\displaystyle \Delta t=t_{1}-t_{0}.}
Example of a velocity vs. time graph, and the relationship between velocity v on the y-axis, acceleration a (the three green tangent lines represent the values for acceleration at different points along the curve) and displacement s (the yellow area under the curve.)
{\displaystyle {\boldsymbol {v}}=\lim _{{\Delta t}\to 0}{\frac {\Delta {\boldsymbol {x}}}{\Delta t}}={\frac {d{\boldsymbol {x}}}{dt}}.}
{\displaystyle {\boldsymbol {x}}=\int {\boldsymbol {v}}\ dt.}
Relationship to acceleration
{\displaystyle {\boldsymbol {a}}={\frac {d{\boldsymbol {v}}}{dt}}.}
{\displaystyle {\boldsymbol {v}}=\int {\boldsymbol {a}}\ dt.}
{\displaystyle {\boldsymbol {v}}={\boldsymbol {u}}+{\boldsymbol {a}}t}
{\displaystyle {\boldsymbol {x}}={\frac {({\boldsymbol {u}}+{\boldsymbol {v}})}{2}}t={\boldsymbol {\bar {v}}}t.}
{\displaystyle v^{2}={\boldsymbol {v}}\cdot {\boldsymbol {v}}=({\boldsymbol {u}}+{\boldsymbol {a}}t)\cdot ({\boldsymbol {u}}+{\boldsymbol {a}}t)=u^{2}+2t({\boldsymbol {a}}\cdot {\boldsymbol {u}})+a^{2}t^{2}}
{\displaystyle (2{\boldsymbol {a}})\cdot {\boldsymbol {x}}=(2{\boldsymbol {a}})\cdot ({\boldsymbol {u}}t+{\tfrac {1}{2}}{\boldsymbol {a}}t^{2})=2t({\boldsymbol {a}}\cdot {\boldsymbol {u}})+a^{2}t^{2}=v^{2}-u^{2}}
{\displaystyle \therefore v^{2}=u^{2}+2({\boldsymbol {a}}\cdot {\boldsymbol {x}})}
where v = |v| etc.
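As a quick numerical sanity check of the identity v² = u² + 2(a · x), one can evaluate both sides for an arbitrary initial velocity, acceleration, and time (a Python sketch; names are ours), using v = u + at and x = ut + ½at²:

```python
def kinematics_check(u, a, t):
    """Constant acceleration in 2D: return (v.v, u.u + 2 a.x),
    which the derivation above says must be equal."""
    v = [u[i] + a[i] * t for i in range(2)]             # v = u + a t
    x = [u[i] * t + 0.5 * a[i] * t * t for i in range(2)]  # x = u t + a t^2 / 2
    dot = lambda p, q: sum(pi * qi for pi, qi in zip(p, q))
    return dot(v, v), dot(u, u) + 2 * dot(a, x)
```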
Quantities that are dependent on velocity
{\displaystyle E_{\text{k}}={\tfrac {1}{2}}mv^{2}}
{\displaystyle {\boldsymbol {p}}=m{\boldsymbol {v}}}
{\displaystyle \gamma ={\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}}
where γ is the Lorentz factor and c is the speed of light.
{\displaystyle v_{\text{e}}={\sqrt {\frac {2GM}{r}}}={\sqrt {2gr}},}
Main article: Relative velocity
{\displaystyle {\boldsymbol {v}}_{A{\text{ relative to }}B}={\boldsymbol {v}}-{\boldsymbol {w}}}
{\displaystyle {\boldsymbol {v}}_{B{\text{ relative to }}A}={\boldsymbol {w}}-{\boldsymbol {v}}}
Scalar velocities
{\displaystyle v_{\text{rel}}=v-(-w)}
, if the two objects are moving in opposite directions, or:
{\displaystyle v_{\text{rel}}=v-(+w)}
, if the two objects are moving in the same direction.
{\displaystyle {\boldsymbol {v}}={\boldsymbol {v}}_{T}+{\boldsymbol {v}}_{R}}
{\displaystyle {\boldsymbol {v}}_{T}}
is the transverse velocity
{\displaystyle {\boldsymbol {v}}_{R}}
is the radial velocity.
{\displaystyle v_{R}={\frac {{\boldsymbol {v}}\cdot {\boldsymbol {r}}}{\left|{\boldsymbol {r}}\right|}}}
{\displaystyle {\boldsymbol {r}}}
is displacement.
The magnitude of the transverse velocity is that of the cross product of the unit vector in the direction of the displacement and the velocity vector. It is also the product of the angular speed
{\displaystyle \omega }
and the magnitude of the displacement.
{\displaystyle v_{T}={\frac {|{\boldsymbol {r}}\times {\boldsymbol {v}}|}{|{\boldsymbol {r}}|}}=\omega |{\boldsymbol {r}}|}
{\displaystyle \omega ={\frac {|{\boldsymbol {r}}\times {\boldsymbol {v}}|}{|{\boldsymbol {r}}|^{2}}}.}
{\displaystyle L=mrv_{T}=mr^{2}\omega }
{\displaystyle m}
is mass
{\displaystyle r=|{\boldsymbol {r}}|.}
{\displaystyle mr^{2}}
is known as the moment of inertia. If the force is purely radial with an inverse-square dependence, as in the case of a gravitational orbit, then angular momentum is constant: the transverse speed is inversely proportional to the distance, the angular speed is inversely proportional to the distance squared, and the rate at which area is swept out is constant. These relations are known as Kepler's laws of planetary motion.
Four-velocity (relativistic version of velocity for Minkowski spacetime)
Proper velocity (in relativity, using traveler time instead of observer time)
Rapidity (a version of velocity additive at relativistic speeds)
^ "velocity". Lexico dictionary. 2022. Retrieved 2 May 2022.
^ Rowland, Todd (2019). "Velocity Vector". Wolfram MathWorld. Retrieved 2 June 2019.
^ Wilson, Edwin Bidwell (1901). Vector analysis: a text-book for the use of students of mathematics and physics, founded upon the lectures of J. Willard Gibbs. Yale bicentennial publications. C. Scribner's Sons. p. 125. hdl:2027/mdp.39015000962285. Earliest occurrence of the speed/velocity terminology.
^ Basic principle
Retrieved from "https://en.wikipedia.org/w/index.php?title=Velocity&oldid=1085834000"
|
DP on Trees - Combining Subtrees · USACO Guide
DP on Trees - Combining Subtrees
Gold - DP on Trees - Introduction
Karen & Supermarket
CF - Normal
This was the first problem I saw that involved this trick.
For two vectors a and b, define the vector c=a\oplus b to have entries c_i=\min_{k=0}^i\left(a_k+b_{i-k}\right) for 0\le i < \text{size}(a)+\text{size}(b)-1.
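A direct implementation of this merge (a Python sketch; the function name is ours) runs in time proportional to size(a) · size(b), which is what the complexity analysis below hinges on:

```python
def merge(a, b):
    """c = a (+) b with c[i] = min over k of a[k] + b[i-k];
    the result has size(a) + size(b) - 1 entries."""
    INF = float("inf")
    c = [INF] * (len(a) + len(b) - 1)
    for k, ak in enumerate(a):
        for j, bj in enumerate(b):
            c[k + j] = min(c[k + j], ak + bj)
    return c
```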
Similar to the editorial, define
\texttt{dp[x][0][g]}
to be the minimum cost to buy exactly
goods out of the subtree of
x
if we don't use the coupon for
x
\texttt{dp[x][1][g]}
goods out of the subtree of
x
if we are allowed to use the coupon for
x
. We update
\texttt{dp[x][0]}
with one of the child subtrees
t
x
\texttt{dp[x][0]}=\texttt{dp[x][0]}\oplus \texttt{dp[t][0]}
\texttt{dp[x][1]}
The editorial naively computes a bound of
\mathcal{O}(N^3)
on the running time of this solution. However, this actually runs in
\mathcal{O}(N^2)
Time Complexity of Merging Subtrees
The complexity can be demonstrated with the following problem:
You have a list of N ones and a counter initially set to 0. While the list has more than one element, remove any two elements a and b from the list, add a\cdot b to the counter, and add a+b to the list. In terms of N, what is the maximum possible value of the counter at the end of this process?
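One can check empirically that the counter always ends at N(N−1)/2 no matter the merge order — each pair of original ones is multiplied together exactly once — which is exactly why pairwise subtree merging does O(N²) total work. A Python simulation (names are ours):

```python
import random

def final_counter(n, rng):
    """Simulate the merging process starting from n ones,
    picking the two elements to merge at random each step."""
    lst = [1] * n
    counter = 0
    while len(lst) > 1:
        i, j = rng.sample(range(len(lst)), 2)
        a, b = lst[i], lst[j]
        counter += a * b
        lst = [x for k, x in enumerate(lst) if k not in (i, j)] + [a + b]
    return counter
```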
2019 - Dzumbus
Div 1 D - Miss Punyverse
Ostap & Tree
Normal Show Tags DP
COCI - Periodni
Hard Show Tags NT
|
Custom Comparators and Coordinate Compression · USACO Guide
Custom Comparators and Coordinate Compression
Authors: Darren Yao, Siyong Huang, Michael Cao, Benjamin Qi, Nathan Chen
Using a custom comparator to sort custom objects or values in a non-default order; Coordinate compressing values from a large range to a smaller one.
Bronze - Introduction to Sorting
8 - Sorting & Comparators
partially based off this
3.2 - User-Defined Structs, Comparison Functions
short overview of what this module will cover
Silver - Insane
We won't discuss the full solution here, as some of the concepts necessary for solving this problem will be introduced later in Silver. However, many solutions to this problem start by sorting the edges in nondecreasing order of weight. For example, the sample contains the following edges:
After sorting, it should look like
With C++, the easiest method is to use a vector of nested pairs:
or a vector of array<int,3>s or vector<int>s:
vector<array<int,3>> v; // or vector<vector<int>>
int a,b,w; cin >> a >> b >> w;
v.push_back({w,a,b});
sort(begin(v),end(v));
for (auto e: v) cout << e[1] << " " << e[2] << " " << e[0] << "\n";
In Python, we can use a list of lists.
But in Java, we can't sort an ArrayList of ArrayLists without writing some additional code. What should we do?
If we only stored the edge weights and sorted them, we would have a sorted list of edge weights, but it would be impossible to tell which weights corresponded to which edges.
However, if we create a class representing the edges and define a custom comparator to sort them by weight, we can sort the edges in ascending order while also keeping track of their endpoints.
First, we need to define a class that represents what we want to sort. In our example we will define a class Edge that contains the two endpoints of the edge and the weight.
A C++ struct is the same as a class in C++, but all members are public by default.
/* alternatively,
public Edge(int _a, int _b, int _w) { a = _a; b = _b; w = _w; }
def __init__(self, a, b, w):
a,b,w = map(int,input().split())
v.append(Edge(a,b,w))
print(e.a,e.b,e.w)
Normally, sorting functions rely on moving objects with a lower value in front of objects with a higher value if sorting in ascending order, and vice versa if in descending order. This is done through comparing two objects at a time.
What a comparator does is compare two objects as follows, based on our comparison criteria:
If object x is less than object y, return true. If object x is greater than or equal to object y, return false.
Essentially, the comparator determines whether object x belongs to the left of object y in a sorted ordering.
A comparator must return false for two equal objects (not doing so results in undefined behavior and potentially a verdict of wrong answer or runtime error).
In addition to returning the correct answer, comparators should also satisfy the following conditions:
The function must be consistent with respect to reversing the order of the arguments: if x \neq y and compare(x, y) is true, then compare(y, x) should be false and vice versa.
The function must be transitive. If compare(x, y) is true and compare(y, z) is true, then compare(x, z) should also be true. If the first two compare functions both return false, the third must also return false.
Method 1 - Overloading the Less Than Operator
This is the easiest to implement. However, it only works for objects (not primitives) and it doesn't allow you to define multiple ways to compare the same type of class.
In the context of Wormhole Sort (note the use of const Edge&):
bool operator<(const Edge& y) const { return w < y.w; }
We can also overload the operator outside of the class:
bool operator<(const Edge& x, const Edge& y) { return x.w < y.w; }
or within it using friend:
friend bool operator<(const Edge& x, const Edge& y) { return x.w < y.w; }
Method 2 - Comparison Function
This works for both objects and primitives, and you can declare many different comparators for the same object.
bool cmp(const Edge& x, const Edge& y) { return x.w < y.w; }
We can also use lambda expressions in C++11 or above:
sort(begin(v),end(v),[](const Edge& x, const Edge& y) { return x.w < y.w; });
If object x is less than object y, return a negative integer.
If object x is greater than object y, return a positive integer.
If object x is equal to object y, return 0.
In addition to returning the correct number, comparators should also satisfy the following conditions:
The function must be consistent with respect to reversing the order of the arguments: if compare(x, y) is positive, then compare(y, x) should be negative and vice versa.
The function must be transitive. If compare(x, y) > 0 and compare(y, z) > 0, then compare(x, z) > 0. Same applies if the compare functions return negative numbers.
Equality must be consistent. If compare(x, y) = 0, then compare(x, z) and compare(y, z) must both be positive, both negative, or both zero. Note that they don't have to be equal, they just need to have the same sign.
Java has default functions for comparing int, long, and double types. The Integer.compare(), Long.compare(), and Double.compare() functions take two arguments x and y and compare them as described above.
There are two ways of implementing this in Java: Comparable, and Comparator. They essentially serve the same purpose, but Comparable is generally easier and shorter to code. Comparable is a function implemented within the class containing the custom object, while Comparator is its own class.
Method 1 - Comparable
We'll need to put implements Comparable<Edge> into the heading of the class. Furthermore, we'll need to implement the compareTo method. Essentially, compareTo(x) is the compare function that we described above, with the object itself as the first argument, or compare(self, x).
When using Comparable, we can just call Arrays.sort(arr) or Collections.sort(list) on the array or list as usual.
static class Edge implements Comparable<Edge> {
public int compareTo(Edge y) { return Integer.compare(w,y.w); }
Method 2 - Comparator
If instead we choose to use Comparator, we'll need to declare a second class that implements Comparator<Edge>:
static class Comp implements Comparator<Edge> {
return Integer.compare(a.w, b.w);
When using Comparator, the syntax for using the built-in sorting function requires a second argument: Arrays.sort(arr, new Comp()), or Collections.sort(list, new Comp()).
Defining Less Than Operator
def __lt__(self, other): # lt means less than
return self.w < other.w
This method maps an object to another comparable datatype with which to be sorted. This is the preferred method if you are only sorting something once. In this case we map edges to their weights.
v.sort(key=lambda x:x.w)
A comparison function in Python must satisfy the same properties as a comparator in Java. Note that old-style cmp functions are no longer supported, so the comparison function must be converted into a key function with cmp_to_key. Most of the time, it is better to use the key function, but in the rare case that the comparison function is not easily represented as a key function, we can use this.
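As a sketch of that rare case, here is how an old-style comparison function can be plugged in with functools.cmp_to_key (the Edge class and weights below are illustrative, not from a specific problem):

```python
from functools import cmp_to_key

class Edge:
    def __init__(self, a, b, w):
        self.a, self.b, self.w = a, b, w

def cmp(x: Edge, y: Edge) -> int:
    # negative if x goes first, positive if y goes first, 0 if tied
    return x.w - y.w

edges = [Edge(1, 2, 9), Edge(1, 3, 7), Edge(2, 3, 10)]
edges.sort(key=cmp_to_key(cmp))
assert [e.w for e in edges] == [7, 9, 10]
```

Here sorting by `key=lambda e: e.w` would be simpler; cmp_to_key only pays off when the comparison logic has no natural key representation.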
Sorting in Decreasing Order of Weight
We can replace all occurrences of x.w < y.w with x.w > y.w in our C++ code. Similarly, we can replace all occurrences of Integer.compare(x, y) with -Integer.compare(x, y) in our Java code. In Python, we can pass the parameter reverse=True to the sort or sorted function.
Now, suppose we wanted to sort a list of Edges in ascending order, primarily by weight and secondarily by first vertex (a). We can do this quite similarly to how we handled sorting by one criterion earlier. What the comparator function needs to do is to compare the weights if the weights are not equal, and otherwise compare first vertices.
bool operator<(const Edge& y) const {
	if (w != y.w) return w < y.w;
	return a < y.a;
}
public int compareTo(Edge y) {
	if (w != y.w) return Integer.compare(w, y.w);
	return Integer.compare(a, y.a);
}
In Python, tuples have a natural order based on their elements in order. We can take advantage of this to write a comparator:
return (self.w, self.a) < (other.w, other.a)
This also gives an easy way to write a key function to sort in this way:
edges: list[Edge]
edges.sort(key=lambda edge: (edge.w, edge.a))
Sorting by an arbitrary number of criteria is done similarly.
With Java, we can implement a comparator for arrays of arbitrary length (although this might be more confusing than creating a separate class).
static class Comp implements Comparator<int[]> {
if (a[i] != b[i]) return Integer.compare(a[i],b[i]);
Coordinate compression describes the process of mapping each value in a list to its index if that list was sorted. For example, the list \{7, 3, 4, 1\} would be compressed to \{3, 1, 2, 0\}: 1 is the least value in the first list, so it becomes 0, and 7 is the greatest value, so it becomes 3, the largest index in the list.
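This mapping can be computed in a few lines of Python (a sketch, not code from the guide itself):

```python
def compress(vals: list[int]) -> list[int]:
    # map each value to its index in the sorted list of distinct values
    sorted_distinct = sorted(set(vals))
    index_of = {v: i for i, v in enumerate(sorted_distinct)}
    return [index_of[v] for v in vals]

assert compress([7, 3, 4, 1]) == [3, 1, 2, 0]
```

Using `set` first means equal values share one compressed index, which is usually what array-indexing applications want.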
When we have values from a large range, but we only care about their relative order (for example, if we have to know whether one value is above another), coordinate compression is a simple way to help with implementation. For example, if we have a set of integers ranging from 0 to 10^9, we can't use them as array indices because we'd have to create an array of size 10^9, which would surely cause a Memory Limit Exceeded verdict. However, if there are only N \leq 10^6 such integers, we can coordinate-compress their values, which guarantees that the values will all be in the range from 0 to N - 1, so they can be used as array indices.
Rectangular Pasture
Silver - Hard
A good example of coordinate compression in action is in the solution of USACO Rectangular Pasture. Again, we won't delve into the full solution but instead discuss how coordinate compression is applied. Since the solution uses 2D prefix sums (another Silver topic), it is helpful if all point coordinates are coordinate-compressed to the range 0 to N - 1 so they can be used as array indices. Without coordinate compression, creating a large enough array would result in a Memory Limit Exceeded verdict.
Below you will find the solution to Rectangular Pasture, which uses coordinate compression at the beginning. Observe how a custom comparator is used to sort the points:
Code Snippet: Solution code (Click to expand)
typedef pair<int,int> Point;
bool ycomp(Point p, Point q) { return p.second < q.second; }
sort(P, P+N);
for (int i=0; i<N; i++) P[i].first = i+1;
sort(P, P+N, ycomp);
for (int i=0; i<N; i++) P[i].second = i+1;
int[] xs = new int[n];
int[] ys = new int[n];
Integer[] cows = new Integer[n];
xs[j] = in.nextInt();
ys[j] = in.nextInt();
The solution uses a lambda function as the custom comparator, which our guide didn't discuss, but it should be apparent which coordinate (x or y) the comparator sorts by.
The solution to Rectangular Pasture directly replaces coordinates with their compressed values, and forgets the real values of the coordinates because they are unnecessary. However, there may be problems for which we need to also remember the original values of coordinates that we compress.
Static Range Queries
CF - Hard
This problem will require prefix sums and coordinate compression. However, the implementation of coordinate compression in this solution will also require remembering values in addition to compressing them (as opposed to just replacing the original values, as in the last problem). If you just want to focus on the implementation of coordinate compression and how it can switch between compressed indices and original values, see the contracted code below. indices is a list of values that need to be compressed. After it gets sorted and has duplicate values removed, it is ready to use. The method getCompressedIndex takes in an original value, and binary searches for its position in indices to get its corresponding compressed index. To go from a compressed index to an original value, the code can just access that index in indices.
We also provide a full solution:
//finds the "compressed index" of a special index (a.k.a. its position in the sorted list)
int getCompressedIndex(int a) {
return lower_bound(indices.begin(), indices.end(), a) - indices.begin();
//========= COORDINATE COMPRESSION =======
sort(indices.begin(), indices.end());
static int getCompressedIndex(int a) {
return Collections.binarySearch(indices, a);
TreeSet<Integer> temp = new TreeSet<Integer>(indices);
//Since temp is a set, all duplicate elements are removed
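A Python version of the same idea might look like this (names mirror the snippets above; the values are hypothetical stand-ins for the problem's input, and this is a sketch rather than the official solution code):

```python
from bisect import bisect_left

# indices: the sorted, deduplicated list of values that need compressing
indices = sorted(set([10, 1000000000, 42, 10]))

def get_compressed_index(a: int) -> int:
    # binary search for a's position in the sorted list
    return bisect_left(indices, a)

# going from a compressed index back to an original value is a list access
assert indices[get_compressed_index(42)] == 42
```

Keeping `indices` around is exactly the "remembering values" trick the text describes: the compressed index and the original value are interchangeable in both directions.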
Many of the problems below may use other Silver concepts, such as prefix sums.
Easy Show Tags Prefix Sums, Sorting
Normal Show Tags Prefix Sums, Sorting
Normal Show Tags Sorting
The Smallest String Concatenation
Nezzar and Symmetric Array
Hard Show Tags Sorting
Very Hard Show Tags Sorting
|
Connected Lie groups and property RD
15 April 2007 Connected Lie groups and property RD
I. Chatterji, C. Pittet, L. Saloff-Coste
I. Chatterji,1 C. Pittet,2 L. Saloff-Coste3
1Department of Mathematics, Ohio State University
2Centre de Mathématiques et Informatique, Université de Provence
3Department of Mathematics, Cornell University
For a locally compact group, the property of rapid decay (property RD) gives a control on the convolutor norm of any compactly supported function in terms of its L^2-norm and the diameter of its support. We characterize the Lie groups that have property RD.
I. Chatterji. C. Pittet. L. Saloff-Coste. "Connected Lie groups and property RD." Duke Math. J. 137 (3) 511 - 536, 15 April 2007. https://doi.org/10.1215/S0012-7094-07-13733-5
Primary: 22D15 , 22E30 , 43A15
|
≻
1
a
a=l+r
d
\mathrm{degree}\left(r\right)≺d
l=0
\mathrm{degree}\left(a\right)≺d
\mathrm{tdegree}\left(l\right)≽d
r
a
\mathrm{tdegree}\left(a\right)=0
0
a
{\mathbf{\omega }}^{d}
\mathrm{Div}\left(a,{\mathbf{\omega }}^{d}\right)=q,r
l=q\cdot {\mathbf{\omega }}^{d}
t
The form of the return values is determined by the output option. If this option is not given (the default), then the result is returned as an expression sequence of two ordinal numbers (even if the input
\mathrm{with}\left(\mathrm{Ordinals}\right)
[`+`, `.`, `<`, `<=`, Add, Base, Dec, Decompose, Div, Eval, Factor, Gcd, Lcm, LessThan, Log, Max, Min, Mult, Ordinal, Power, Split, Sub, `^`, degree, lcoeff, log, lterm, ω, quo, rem, tcoeff, tdegree, tterm]
a≔\mathrm{Ordinal}\left([[\mathrm{\omega },1],[3,2],[2,4],[1,5]]\right)
a ≔ ω^ω + ω^3·2 + ω^2·4 + ω·5
\mathrm{Split}\left(a\right)
ω^ω + ω^3·2 + ω^2·4 + ω·5, 0
\mathrm{Split}\left(a,\mathrm{degree}=3\right)
ω^ω + ω^3·2, ω^2·4 + ω·5
\mathrm{Div}\left(a,{\mathrm{\omega }}^{3}\right)
ω^ω + 2, ω^2·4 + ω·5
\mathrm{Split}\left(a,\mathrm{degree}=5\right)
ω^ω, ω^3·2 + ω^2·4 + ω·5
\mathrm{Split}\left(a,\mathrm{degree}=\mathrm{\omega }\right)
ω^ω, ω^3·2 + ω^2·4 + ω·5
l
\mathrm{Split}\left(a,\mathrm{degree}=3,\mathrm{output}=\mathrm{index}\right)
2
\mathrm{Split}\left(a,\mathrm{degree}=5,\mathrm{output}=\mathrm{index}\right)
1
t
t
a=\mathrm{Ordinal}\left(t\right)
t≔\mathrm{op}\left(a\right)
t ≔ [[ω, 1], [3, 2], [2, 4], [1, 5]]
\mathrm{Split}\left(a,\mathrm{degree}=3,\mathrm{output}=\mathrm{lists}\right)
[[ω, 1], [3, 2]], [[2, 4], [1, 5]]
\mathrm{Split}\left(t,\mathrm{degree}=3,\mathrm{output}=\mathrm{lists}\right)
[[ω, 1], [3, 2]], [[2, 4], [1, 5]]
b≔\mathrm{Ordinal}\left([[3,2],[1,1],[0,x]]\right)
b ≔ ω^3·2 + ω + x
\mathrm{Split}\left(b\right)
ω^3·2 + ω, x
\mathrm{Split}\left(b,\mathrm{output}=\mathrm{lists}\right)
[[3, 2], [1, 1]], [[0, x]]
\mathrm{Split}\left(b,\mathrm{output}=\mathrm{mixed}\right)
[[3, 2], [1, 1]], x
|
Longest Increasing Subsequence · USACO Guide
HomeGoldLongest Increasing Subsequence
TutorialSlow SolutionFast SolutionFast Solution (RMQ Data Structures)Example - PCBApplication 1 - Non-intersecting SegmentsApplication 2 - Minimum Number of Increasing SequencesCodeProblems
Has Not Appeared
Authors: Michael Cao, Benjamin Qi, Andi Qu, Andrew Wang
Contributors: Dong Liu, Siyong Huang, Aryansh Shrivastava, Kevin Sheng
Finding and using the longest increasing subsequence of an array.
A comprehensive guide (covers almost everything here)
In this tutorial, let A be the array we want to find the LIS for.
\mathcal O(N^2)
7.2 - LIS
Let dp[i] be the length of the longest increasing subsequence that ends on A[i]. We can then naively compute dp (and thus the LIS) in \mathcal{O}(N^2):
int find_lis(vector<int> a) {
vector<int> dp(a.size(), 1);
public static int findLis(int[] a) {
int[] dp = new int[a.length];
lis = Math.max(lis, dp[i]);
def find_lis(a: List[int]) -> int:
lis = max(lis, dp[i])
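The tabs above only show excerpts; a complete, self-contained version of the quadratic approach in Python could look like this (a sketch along the lines of the snippets, not the guide's exact code):

```python
def find_lis(a: list[int]) -> int:
    n = len(a)
    if n == 0:
        return 0
    dp = [1] * n  # dp[i] = length of the LIS ending on a[i]
    for i in range(n):
        for j in range(i):
            if a[j] < a[i]:  # a[i] can extend any LIS ending on a smaller value
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp)

# the array used as the running example in the next section
assert find_lis([2, 18, 7, 20, 18, 5, 18, 15, 13, 19, 9]) == 4
```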
We can do much better than this though!
\mathcal O(N \log N)
Let L_i be an array (0-indexed) where L_i[j] is the smallest element from the first i elements of A with an increasing sequence of length j + 1 ending on it (or \infty if there is no such element).
For example, let our array be [2, 18, 7, 20, 18, 5, 18, 15, 13, 19, 9]. Then L_8 = [2, 5, 15]. Some example sequences satisfying these are [2], [2, 5], and [2, 5, 15].
Lemma 1: L_i forms a strictly increasing sequence.
Proof: Assume for a contradiction that for some j we have L_i[j - 1] \geq L_i[j]. Let a_0, a_1, \dots, a_j be any increasing subsequence of length j + 1 ending on L_i[j] (so a_j = L_i[j]). Then a_0, a_1, \dots, a_{j - 1} is an increasing subsequence of length j ending on a_{j - 1}, so by the definition of L_i[j - 1] we can write L_i[j - 1] \leq a_{j - 1} < a_j = L_i[j], which is exactly what we wanted.
Lemma 2: The length of the LIS ending on A[i + 1] is equal to the least index j (with 0-indexing) such that L_i[j] \geq A[i + 1], plus one.
Proof: Let j be defined as in the statement. First of all, since A[i + 1] > L_i[j - 1], there is an increasing sequence of length j + 1 ending on A[i + 1]: we can append A[i + 1] to the end of the sequence of length j ending on L_i[j - 1]. By Lemma 1, L_i is strictly increasing, so L_i[k] \geq A[i + 1] for all k \geq j. This means that there cannot be a LIS ending at A[i + 1] longer than j + 1.
Lemma 3: At most 1 element differs between L_i and L_{i + 1}.
Proof: Let j be defined as it was in Lemma 2. We need to set L_{i + 1}[j] to A[i + 1], since A[i + 1] \leq L_i[j]. However, L_{i + 1}[k] = L_i[k] for all k \neq j, since A[i + 1] > L_i[k] for k < j and there are no increasing sequences of length k + 1 ending on A[i + 1] for k > j.
To find and update the described j in \mathcal{O}(\log N) time, we can use a list with binary search, or we can use a sorted set (as demonstrated in the solution for PCB).
int pos = lower_bound(dp.begin(), dp.end(), i) - dp.begin();
if (pos == dp.size()) {
// we can have a new, longer increasing subsequence!
ArrayList<Integer> dp = new ArrayList<Integer>();
int pos = Collections.binarySearch(dp, i);
pos = pos < 0 ? Math.abs(pos + 1) : pos;
dp.add(i);
// oh ok, at least we can make the ending element smaller
dp.set(pos, i);
from typing import List
from bisect import bisect_left

def find_lis(arr: List[int]) -> int:
    min_endings = []
    for i in arr:
        pos = bisect_left(min_endings, i)
        if pos == len(min_endings):  # we can have a new, longer increasing subsequence!
            min_endings.append(i)
        else:  # oh ok, at least we can make the ending element smaller
            min_endings[pos] = i
    return len(min_endings)
Fast Solution (RMQ Data Structures)
Also \mathcal O(N \log N), but perhaps a constant factor slower than the previous solution.
Note that the following solution assumes knowledge of a basic PURQ data structure for RMQ (range max query). Suitable implementations include a segment tree or a modified Fenwick tree, but both of these are generally considered Platinum level topics. However, this solution has the advantage of being more intuitive if you're deriving LIS on the spot.
Let \texttt{dp}[k] be the length of the LIS of a that ends with the element k \in a. Our base case is obviously \texttt{dp}[a[0]] = 1 (we have an LIS containing only a[0], which has length 1). Since LIS exhibits optimal substructure, we transition as follows, processing all t \in a[1 \dots n-1] from left to right (which we must do in order to maintain the property that our subsequence strictly increases in this direction):
\texttt{dp}[t] = \max_{0 \leq k < t} \texttt{dp}[k] + 1.
Since \texttt{dp} is now an RMQ data structure, the change in \texttt{dp}[t] is merely a point update, while the calculation of \max_{0 \leq k < t} \texttt{dp}[k] is a range query.
Our final answer is just an RMQ from end to end (or, alternatively, you could keep a running max of the answer in another variable and change it with each update). This method actually gives us the leisure to process online, as long as our elements are small enough to be used as indices, or if we use a more advanced data structure such as a sparse segment tree. If we are willing to process offline, as we often can, we can avoid using a more advanced technique: it suffices to collect the array elements and sort them to coordinate compress, creating \texttt{dp} with those compressed IDs instead.
Code Snippet: Range Maximum Segment Tree (Click to expand)
// apply coordinate compression to all elements of the array
vector<int> sorted(a);
public class FindLis {
HashMap<Integer, Integer> coordComp = new HashMap<>();
for (int i : sorted) {
class MaxSegTree:
def __init__(self, len_: int):
self.segtree = [0] * (2 * len_)
def set(self, ind: int, val: int) -> None:
ind += self.len
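The snippet above uses a range maximum segment tree. As a compact alternative along the lines of the "modified Fenwick tree" the text mentions, the sketch below combines coordinate compression with a Fenwick tree specialized for prefix maximums (an illustration, not the guide's official code):

```python
def lis_fenwick(a: list[int]) -> int:
    # coordinate-compress the values so they can be used as tree indices (1-based)
    comp = {v: i + 1 for i, v in enumerate(sorted(set(a)))}
    n = len(comp)
    tree = [0] * (n + 1)  # Fenwick tree storing prefix maximums of dp

    def update(i: int, val: int) -> None:
        while i <= n:
            tree[i] = max(tree[i], val)
            i += i & (-i)

    def query(i: int) -> int:  # max of dp over compressed values 1..i
        best = 0
        while i > 0:
            best = max(best, tree[i])
            i -= i & (-i)
        return best

    ans = 0
    for x in a:
        c = comp[x]
        cur = query(c - 1) + 1  # best LIS ending on a value strictly below x
        update(c, cur)
        ans = max(ans, cur)
    return ans

assert lis_fenwick([2, 18, 7, 20, 18, 5, 18, 15, 13, 19, 9]) == 4
```

The prefix-max Fenwick tree works here because dp values only ever increase, so stale maximums are never an issue.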
Example - PCB
Baltic OI - Normal
This problem asks us to find the minimum number of disjoint sets of non-intersecting segments. This seems quite intimidating, so let's break it up into two parts:
Finding a set of non-intersecting segments
Minimizing the number of these sets
Application 1 - Non-intersecting Segments
First, what can we say about two segments (l_1, r_1) and (l_2, r_2) if they intersect (assuming l_1 < l_2)?
Since these segments are straight, notice how l_1 < l_2 \implies r_1 > r_2.
This means that a set of non-intersecting segments satisfies l_i < l_j \implies r_i < r_j for all pairs (i, j).
Let A be an array where A[i] = x means that the segment with its right endpoint at position i has its left endpoint at position x.
If we were asked to find the maximum size of a set of non-intersecting segments, the answer would be the LIS of A.
Application 2 - Minimum Number of Increasing Sequences
Continuing from application 1, we now want to find the minimum number of increasing subsequences required to cover A.
Luckily for us, there's a simple (though not so obvious) solution to this.
Lemma (Easy): The minimum number of increasing subsequences required to cover A is at least the size of the longest non-increasing subsequence of A.
Proof: No two elements of any non-increasing subsequence can be part of the same increasing subsequence.
Claim: The minimum number of increasing subsequences required to cover A is equal to the size of the longest non-increasing subsequence of A.
Wrong Proof 1: See cp-algo (note that this link describes partitioning A into non-increasing subsequences rather than increasing subsequences). However, it's not correct because the process of unhooking and reattaching might never terminate. For example, consider partitioning A = (3, 1, 2) into the non-increasing subsequences s_1 = (3, 1) and s_2 = (2). Then 3 will be moved from the front of s_1 to the front of s_2 on the first step, back to s_1 on the second step, and so on.
Wrong Proof 2: This is essentially the same as the above.
Motivation: Consider the obvious greedy strategy to construct the collection of increasing subsequences (essentially patience sorting). For each element x of A from left to right, add it to the increasing subsequence with last element less than x such that the value of this last element is maximized. If no such increasing subsequence currently exists, then start a new increasing subsequence with x.
This algorithm performs exactly the same steps as the algorithm to compute the length of the longest non-increasing subsequence, so it follows that they return the same result.
Let f_i denote the length of the longest non-increasing subsequence ending at A_i. Then the A_i's satisfying f_i = t for a fixed t form an increasing subsequence, for each t. So we have covered A with (size of longest non-increasing subsequence) increasing subsequences, done.
Do you see why this is equivalent to the sketch?
Alternative Proof: This is just a special case of Dilworth's Theorem. See the inductive proof.
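The greedy strategy and the claimed equality can also be checked mechanically. Below is a non-authoritative sketch comparing the patience-style cover size against the longest non-increasing subsequence length:

```python
from bisect import bisect_left, bisect_right

def min_increasing_cover(a: list[int]) -> int:
    # greedy: append x to the subsequence whose last element is the largest value < x
    ends = []  # last elements of the increasing subsequences, kept sorted
    for x in a:
        pos = bisect_left(ends, x) - 1  # largest end strictly below x
        if pos >= 0:
            ends[pos] = x  # extend that subsequence with x (order is preserved)
        else:
            ends.insert(0, x)  # no subsequence can take x: start a new one
    return len(ends)

def longest_nonincreasing(a: list[int]) -> int:
    # longest non-increasing subsequence = longest non-decreasing one of the negated array
    tails = []  # tails[j] = smallest tail of a non-decreasing subsequence of length j+1
    for x in (-v for v in a):
        pos = bisect_right(tails, x)
        if pos == len(tails):
            tails.append(x)
        else:
            tails[pos] = x
    return len(tails)

for arr in ([3, 1, 2], [2, 18, 7, 20, 18, 5, 18, 15, 13, 19, 9]):
    assert min_increasing_cover(arr) == longest_nonincreasing(arr)
```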
TreeMap<Integer, Integer> a = new TreeMap<Integer, Integer>(Collections.reverseOrder());
Easy Show Tags LIS
LIS on Permutations
Cowjog
2019 - Triusis
Normal Show Tags Bitmasks, DP
Hard Show Tags DP, Geometry
2016 - Matryoshka
Hard Show Tags Binary Search, DP, LIS
The original problem statement for "Matryoshka" is in Japanese. You can find a user-translated version of the problem here.
|
Output and state covariance of system driven by white noise - MATLAB covar - MathWorks
P=E\left(y{y}^{T}\right)
\begin{array}{cc}E\left(w\left(t\right)w{\left(\tau \right)}^{T}\right)=W\mathrm{\delta }\left(t-\tau \right)& \text{(continuous time)}\\ E\left(w\left[k\right]w{\left[l\right]}^{T}\right)=W{\mathrm{\delta }}_{kl}& \text{(discrete time)}\end{array}
Q=E\left(x{x}^{T}\right)
\begin{array}{cc}H\left(z\right)=\frac{2z+1}{{z}^{2}+0.2z+0.5},& {T}_{s}=0.1\end{array}
\begin{array}{l}\dot{x}=Ax+Bw\\ y=Cx+Dw,\end{array}
AQ+Q{A}^{T}+BW{B}^{T}=0.
AQ{A}^{T}-Q+BW{B}^{T}=0.
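As a quick sanity check of the continuous-time Lyapunov equation above (a hypothetical one-dimensional example, not MathWorks code), the scalar system ẋ = ax + bw with a < 0 gives Q = -b²W/(2a):

```python
def scalar_state_covariance(a: float, b: float, W: float) -> float:
    # solve the scalar Lyapunov equation a*Q + Q*a + b*W*b = 0 for Q
    return -(b * W * b) / (2 * a)

Q = scalar_state_covariance(-1.0, 1.0, 1.0)
# plugging Q back into the Lyapunov equation should give zero
assert abs(-1.0 * Q + Q * (-1.0) + 1.0 * 1.0 * 1.0) < 1e-12
assert Q == 0.5
```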
|
Duality in Solving Multi-Objective Optimization (MOO) Problems
Department of Agricultural Economics, Institute of Agricultural Sciences, Banaras Hindu University, Varanasi, India
Z=\left[\text{Max}.\text{\hspace{0.17em}}{Z}_{1},\text{Max}\text{.}\text{\hspace{0.17em}}{Z}_{2},\cdots ,\text{Max}\text{.}\text{\hspace{0.17em}}{Z}_{r},\text{Min}\text{.}\text{\hspace{0.17em}}{Z}_{r+1},\cdots ,\text{Min}\text{.}\text{\hspace{0.17em}}{Z}_{s}\right]
AX=b
X\ge 0
{Z}_{\text{optima}}=\left[{\theta }_{1},{\theta }_{2},\cdots ,{\theta }_{s}\right]
Z=\frac{{\sum }_{j=1}^{r}{Z}_{j}}{|{\theta }_{j}|}-\frac{{\sum }_{j=r+1}^{s}{Z}_{j}}{|{\theta }_{r+1}|}
AX=b
X\ge 0
{\theta }_{j}\ne 0
j=1,2,\cdots ,s
{\theta }_{j}
AX=b
X\ge 0
Z=\frac{{\sum }_{j=1}^{s}{Z}_{j}}{|{\theta }_{j}|}
AX=b
X\ge 0
{\theta }_{j}\ne 0
j=1,2,\cdots ,s
{\theta }_{j}
\text{Max}.\text{\hspace{0.17em}}{Z}_{1}=12500{X}_{1}+25100{X}_{2}+16700{X}_{3}+23300{X}_{4}+20200{X}_{5}
\text{Max}.\text{\hspace{0.17em}}{Z}_{2}=21{X}_{1}+15{X}_{2}+13{X}_{3}+17{X}_{4}+11{X}_{5}
\text{Min}.\text{\hspace{0.17em}}{Z}_{3}=370{X}_{1}+280{X}_{2}+350{X}_{3}+270{X}_{4}+240{X}_{5}
\text{Min}.\text{\hspace{0.17em}}{Z}_{4}=1930{X}_{1}+1790{X}_{2}+1520{X}_{3}+1690{X}_{4}+1720{X}_{5}
{X}_{1}+{X}_{2}+{X}_{3}+{X}_{4}+{X}_{5}=4.5
2{X}_{1}\ge 1.0
3{X}_{4}\ge 1.5
\text{Max}.\text{\hspace{0.17em}}{Z}_{1}=12500{X}_{1}+25100{X}_{2}+16700{X}_{3}+23300{X}_{4}+20200{X}_{5}
\text{Max}.\text{\hspace{0.17em}}{Z}_{2}=21{X}_{1}+15{X}_{2}+13{X}_{3}+17{X}_{4}+11{X}_{5}
\text{Max}.\text{\hspace{0.17em}}{Z}_{3}=-370{X}_{1}-280{X}_{2}-350{X}_{3}-270{X}_{4}-240{X}_{5}
\text{Max}.\text{\hspace{0.17em}}{Z}_{4}=-1930{X}_{1}-1790{X}_{2}-1520{X}_{3}-1690{X}_{4}-1720{X}_{5}
\text{Min}.\text{\hspace{0.17em}}{Z}_{1}=-12500{X}_{1}-25100{X}_{2}-16700{X}_{3}-23300{X}_{4}-20200{X}_{5}
\text{Min}.\text{\hspace{0.17em}}{Z}_{2}=-21{X}_{1}-15{X}_{2}-13{X}_{3}-17{X}_{4}-11{X}_{5}
\text{Min}.\text{\hspace{0.17em}}{Z}_{3}=370{X}_{1}+280{X}_{2}+350{X}_{3}+270{X}_{4}+240{X}_{5}
\text{Min}.\text{\hspace{0.17em}}{Z}_{4}=1930{X}_{1}+1790{X}_{2}+1520{X}_{3}+1690{X}_{4}+1720{X}_{5}
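The two-step procedure behind the formulas above — optimize each objective separately to obtain the θj, then maximize the combined objective Σ Zj/|θj| — can be sketched with SciPy's LP solver on the worked example. This is an illustrative reimplementation under the stated assumptions (equal weights, Sen's averaging), not the paper's own code; solver choice and sign conventions are mine.

```python
import numpy as np
from scipy.optimize import linprog

# Objective coefficient rows for Z1..Z4, taken from the worked example above.
C = np.array([
    [12500, 25100, 16700, 23300, 20200],   # Max Z1
    [   21,    15,    13,    17,    11],   # Max Z2
    [  370,   280,   350,   270,   240],   # Min Z3
    [ 1930,  1790,  1520,  1690,  1720],   # Min Z4
], dtype=float)
maximize = [True, True, False, False]

# Constraints: X1+...+X5 = 4.5, 2*X1 >= 1.0, 3*X4 >= 1.5, X >= 0.
A_eq = [[1, 1, 1, 1, 1]]; b_eq = [4.5]
A_ub = [[-2, 0, 0, 0, 0], [0, 0, 0, -3, 0]]; b_ub = [-1.0, -1.5]

def solve(c):
    # linprog minimizes c.x; callers negate c to maximize.
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    assert res.success
    return res

# Step 1: optimize each objective separately to get theta_j.
thetas = []
for row, is_max in zip(C, maximize):
    res = solve(-row if is_max else row)
    thetas.append(-res.fun if is_max else res.fun)

# Step 2: Sen's combined objective, with Min objectives converted to Max form
# by negation (as in the transformed problem above): Max Z = sum_j Z_j/|theta_j|.
combined = np.zeros(5)
for row, th, is_max in zip(C, thetas, maximize):
    sign = 1.0 if is_max else -1.0
    combined += sign * row / abs(th)
res = solve(-combined)
print("theta:", np.round(thetas, 2))
print("compromise X:", np.round(res.x, 3))
```

For this data, step 1 gives, e.g., θ1 = 105750 (X1 = 0.5, X4 = 0.5, and the remaining 3.5 units on X2, which carries the largest Z1 coefficient).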
Sen, C. (2019) Duality in Solving Multi-Objective Optimization (MOO) Problems. American Journal of Operations Research, 9, 109-113. https://doi.org/10.4236/ajor.2019.93006
1. Sulaiman, N.A. and Hamadameen, A.-Q.O. (2008) Optimal Transformation Technique to Solve Multi-Objective Linear Programming Problem (MOLPP). Journal of Kirkuk University Scientific Studies, 3, 158-168.
2. Suleiman, N.A. and Nawkhass, M.A. (2013) Transforming and Solving Multi-Objective Quadratic Fractional Programming Problems by Optimal Average of Maximin & Minimax Techniques. American Journal of Operational Research, 3, 92-98.
3. Sulaiman, N.A. and Mustafa, R.B. (2016) Using Harmonic Mean to Solve Multi-Objective Linear Programming Problems. American Journal of Operations Research, 6, 25-30. https://doi.org/10.4236/ajor.2016.61004
4. Sulaiman, N.A. and Mustafa, R.B. (2016) Transform Extreme Point Multi-Objective Linear Programming Problem to Extreme Point Single Objective Linear Programming Problem by Using Harmonic Mean. Applied Mathematics, 6, 95-99.
5. Huma, A., Geeta, M. and Sushma, D. (2017) Transforming and Optimizing Multi-Objective Quadratic Fractional Programming Problem. International Journal of Statistics and Applied Mathematics, 2, 01-05.
6. Nahar, S. and Alim, A. (2017) A New Statistical Averaging Method to Solve Multi-Objective Linear Programming Problem. International Journal of Science and Research, 6, 623-629.
7. Huma, A., Modi, G. and Duraphe, S. (2017) An Appropriate Approach for Transforming and Optimizing Multi-Objective Quadratic Fractional Programming Problem. International Journal of Mathematics Trends and Technology, 50, 80-83. https://doi.org/10.14445/22315373/IJMTT-V50P511
8. Nawkhass, M.A. and Birdawod, H.Q. (2017) Transformed and Solving Multi-Objective Linear Programming Problems to Single-Objective by Using Correlation Technique. Cihan International Journal of Social Science, 1, 30-36.
9. Akhtar, H. and Modi, G. (2017) An Approach for Solving Multi-Objective Fractional Programming Problem and It’s Comparison with Other Techniques. International Journal of Scientific and Innovative Mathematical Research, 5, 1-5. https://doi.org/10.20431/2347-3142.0511001
10. Nahar, S. and Alim, A. (2017) A New Geometric Average Technique to Solve Multi-Objective Linear Fractional Programming Problem and Comparison with New Arithmetic Average Technique. IOSR Journal of Mathematics (IOSR-JM), 13, 39-52. https://doi.org/10.9790/5728-1303013952
11. Sohag, Z.I. and Asadujjaman, Md. (2018) A Proposed New Average Method for Solving Multi-Objective Linear Programming Problem Using Various Kinds of Mean Techniques. Mathematics Letters, 4, 25-33. https://doi.org/10.11648/j.ml.20180402.11
12. Sen, C. (2018) Multi Objective Optimization Techniques: Misconceptions and Clarifications. International Journal of Scientific and Innovative Mathematical Research, 6, 29-33.
13. Sen, C. (2018) Sen’s Multi-Objective Programming Method and Its Comparison with Other Techniques. American Journal of Operational Research, 8, 10-13.
Generate pink noise - MATLAB pinknoise - MathWorks España
Generate Pink Noise
Amplitude Distribution of Pink Noise
Generate Multiple Independent Channels of Pink Noise
Add Pink Noise to Audio Signal
sz1,sz2
X = pinknoise(n)
X = pinknoise(sz1,sz2)
X = pinknoise(sz)
X = pinknoise(___,typename)
X = pinknoise(___,'like',p)
X = pinknoise(n) returns a pink noise column vector of length n.
X = pinknoise(sz1,sz2) returns a sz1-by-sz2 matrix. Each channel (column) of the output X is an independent pink noise signal.
X = pinknoise(sz) returns a vector or matrix with dimensions defined by the elements of vector sz. sz must be a one- or two-element row vector of positive integers. Each channel (column) of the output X is an independent pink noise signal.
X = pinknoise(___,typename) returns an array of pink noise of data type typename. The typename input can be either 'single' or 'double'. You can combine typename with any of the input arguments in the previous syntaxes.
X = pinknoise(___,'like',p) returns an array of pink noise like p. You can specify either typename or 'like', but not both.
Generate 100 seconds of pink noise with a sample rate of 44.1 kHz.
y = pinknoise(duration*fs);
Plot the average power spectral density (PSD) of the generated pink noise.
[~,freqVec,~,psd] = spectrogram(y,round(0.05*fs),[],[],fs);
meanPSD = mean(psd,2);
semilogx(freqVec,db(meanPSD,"power"))
title('Power Spectral Density of Pink Noise (Averaged)')
Generate 500 seconds of pink noise with a sample rate of 16 kHz.
Plot the relative probability of the pink noise amplitude. The amplitude is always bounded between −1 and 1.
histogram(y,"Normalization","probability","EdgeColor","none")
xlabel("Amplitude")
title("Relative Probability of Pink Noise Amplitude")
Create a 5 second stereo pink noise signal with a 48 kHz sample rate.
pn = pinknoise(duration*fs,numChan);
Listen to the stereo pink noise signal.
sound(pn,fs)
Channels of the pink noise function are generated independently. Note that the off-diagonal correlation coefficients are close to zero (uncorrelated).
R = corrcoef(pn(:,1),pn(:,2))
Correlated and uncorrelated pink noise have different psychoacoustic effects. When the noise is correlated, the sound is less ambient and more centralized. To listen to correlated pink noise, send a single channel of the pink noise signal to your stereo device. The effect is most pronounced when using headphones.
sound([pn(:,1),pn(:,1)],fs)
[audioIn,fs] = audioread("MainStreetOne-16-16-mono-12secs.wav");
Create a pink noise signal of the same size and data type as audioIn.
noise = pinknoise(size(audioIn),'like',audioIn);
Add the pink noise to the audio signal and then listen to the first 5 seconds.
noisyMainStreet = noise + audioIn;
sound(noisyMainStreet(1:fs*5,:),fs)
The pinknoise function generates an approximately −29.5 dB signal level, which is close to the power of the audio signal.
noisePower = sum(noise.^2,1)/size(noise,1);
signalPower = sum(audioIn.^2,1)/size(audioIn,1);
snr = 10*log10(signalPower./noisePower)
noisePowerdB = 10*log10(noisePower)
noisePowerdB = -29.6665
signalPowerdB = 10*log10(signalPower)
signalPowerdB = -27.6874
Mix the input audio with the generated pink noise at an 8 dB SNR.
desiredSNR = 8;
scaleFactor = sqrt(signalPower./(noisePower*(10^(desiredSNR/10))));
noise = noise.*scaleFactor;
Verify the resulting SNR is 8 dB and then listen to the first 5 seconds.
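The scaling step above generalizes to any signal/noise pair. A Python sketch of the same SNR arithmetic (the helper name mix_at_snr is illustrative; the MATLAB example uses the equivalent scaleFactor expression):

```python
import numpy as np

def mix_at_snr(signal, noise, snr_db):
    """Scale `noise` so the signal-to-noise ratio is `snr_db`, then mix."""
    p_sig = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_sig / (p_noise * 10 ** (snr_db / 10)))
    return signal + scale * noise

rng = np.random.default_rng(1)
fs = 16000
sig = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)   # 1 s test tone
noise = rng.standard_normal(fs)
mixed = mix_at_snr(sig, noise, 8.0)

# Verify: power ratio of signal to the scaled noise is 8 dB.
scaled = mixed - sig
snr = 10 * np.log10(np.mean(sig ** 2) / np.mean(scaled ** 2))
print(f"SNR = {snr:.2f} dB")
```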
n — Number of rows of pink noise
Number of rows of pink noise, specified as a nonnegative integer.
sz1,sz2 — Size of each dimension (as separate arguments)
Size of each dimension, specified as a nonnegative integer or two separate arguments of nonnegative integers.
one- or two-element row vector of nonnegative integers
Size of each dimension, specified as a one- or two-element row vector of nonnegative integers. Each element of this vector indicates the size of the corresponding dimension.
typename — Data type to create
Data type to create, specified as 'single' or 'double'.
p — Prototype of array
Prototype of array to create, specified as a numeric array. The generated pink noise is the same data type as p.
X — Pink noise
Pink noise, returned as a column vector or matrix of independent channels.
The concatenation of multiple pink noise vectors does not result in pink noise. For streaming applications, use dsp.ColoredNoise.
Pink noise is generated by passing uniformly distributed random numbers through a series of randomly initiated SOS filters. The resulting pink noise amplitude distribution is quasi-Gaussian and bounded between −1 and 1. The resulting pink noise power spectral density (PSD) is inversely proportional to frequency:
S\left(f\right)\propto \frac{1}{f}
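The 1/f shaping can be approximated by filtering white noise. The sketch below uses the widely circulated Kellet pinking-filter coefficients as a stand-in (an assumption — pinknoise itself uses a different, internal SOS cascade) and verifies the characteristic spectral slope, which for ideal 1/f noise is −10 dB/decade (−3 dB/octave).

```python
import numpy as np
from scipy.signal import lfilter, welch

# A common IIR "pinking" approximation (Kellet coefficients; illustrative,
# not the filter used by pinknoise).
b = [0.049922035, -0.095993537, 0.050612699, -0.004408786]
a = [1.0, -2.494956002, 2.017265875, -0.522189400]

fs = 16000
rng = np.random.default_rng(0)
white = rng.standard_normal(fs * 10)      # 10 s of white noise
pink = lfilter(b, a, white)
pink /= np.max(np.abs(pink))              # bound amplitude to [-1, 1]

# Fit the PSD slope on a log-log scale; ideal 1/f gives -10 dB/decade.
f, psd = welch(pink, fs=fs, nperseg=4096)
slope = np.polyfit(np.log10(f[1:]), 10 * np.log10(psd[1:]), 1)[0]
print(f"spectral slope ~ {slope:.1f} dB/decade")
```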
dsp.ColoredNoise | rng | rand
line(l, [A, B])
line(l, eqn, n)
the algebraic representation of a line, that is, a polynomial or equation
In the geometry package, a line means a ``straight line''. It is unlimited in extent, i.e., it may be extended in either direction indefinitely.
from two given points A and B
from its algebraic representation eqn, that is, a polynomial or an equation. If the third optional argument is not given, then:
if names are assigned to the two environment variables _EnvHorizontalName and _EnvVerticalName, then these two names will be used as the names of the horizontal-axis and vertical-axis respectively.
otherwise, Maple will prompt the user to input the names of the axes.
returns the form of the geometric object (i.e., line2d if l is a line).
returns the equation that represents the line l.
HorizontalName(l)
VerticalName(l)
returns a detailed description of the line l.
The command with(geometry,line) allows the use of the abbreviated form of this command.
\mathrm{with}\left(\mathrm{geometry}\right):
define two points
A\left(0,0\right)
B\left(1,1\right)
\mathrm{point}\left(A,0,0\right),\mathrm{point}\left(B,1,1\right):
l
A
B
\mathrm{line}\left(l,[A,B]\right)
\textcolor[rgb]{0,0,1}{l}
\mathrm{form}\left(l\right)
\textcolor[rgb]{0,0,1}{\mathrm{line2d}}
\mathrm{HorizontalName}\left(l\right)
\textcolor[rgb]{0,0,1}{\mathrm{FAIL}}
To assign names to the axes, assign the names to the environment variables _EnvHorizontalName and _EnvVerticalName.
\mathrm{_EnvHorizontalName}≔x:
\mathrm{_EnvVerticalName}≔y:
\mathrm{point}\left(A,0,0\right),\mathrm{point}\left(B,1,1\right):
\mathrm{line}\left(l,[A,B]\right)
\textcolor[rgb]{0,0,1}{l}
\mathrm{HorizontalName}\left(l\right)
\textcolor[rgb]{0,0,1}{x}
\mathrm{VerticalName}\left(l\right)
\textcolor[rgb]{0,0,1}{y}
\mathrm{detail}\left(l\right)
\begin{array}{ll}\textcolor[rgb]{0,0,1}{\text{name of the object}}& \textcolor[rgb]{0,0,1}{l}\\ \textcolor[rgb]{0,0,1}{\text{form of the object}}& \textcolor[rgb]{0,0,1}{\mathrm{line2d}}\\ \textcolor[rgb]{0,0,1}{\text{equation of the line}}& \textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{0}\end{array}
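The equation Maple reports (−x + y = 0) follows from the two-point form of a line. A quick Python check of the same computation (the helper name is illustrative):

```python
# Line through A=(x1,y1), B=(x2,y2): (y2-y1)(x-x1) - (x2-x1)(y-y1) = 0,
# returned as coefficients (a, b, c) of a*x + b*y + c = 0.
def line_coeffs(A, B):
    (x1, y1), (x2, y2) = A, B
    a = y2 - y1
    b = -(x2 - x1)
    c = -(a * x1 + b * y1)
    return a, b, c

print(line_coeffs((0, 0), (1, 1)))   # (1, -1, 0), i.e. x - y = 0
```

This is Maple's −x + y = 0 up to an overall sign, which does not change the line.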
Define a line from its algebraic representation.
\mathrm{line}\left(\mathrm{l2},x-3y\right)
\textcolor[rgb]{0,0,1}{\mathrm{l2}}
\mathrm{Equation}\left(\mathrm{l2}\right)
\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{0}
geometry[AreConcurrent]
Comoving and proper distances - Wikipedia
In standard cosmology, comoving distance and proper distance are two closely related distance measures used by cosmologists to define distances between objects. Proper distance roughly corresponds to where a distant object would be at a specific moment of cosmological time, which can change over time due to the expansion of the universe. Comoving distance factors out the expansion of the universe, giving a distance that does not change in time due to the expansion of space (though this may change due to other, local factors, such as the motion of a galaxy within a cluster).
Comoving distance and proper distance are defined to be equal at the present time. At other times, the Universe's expansion results in the proper distance changing, while the comoving distance remains constant.
1 Comoving coordinates
2 Comoving distance and proper distance
2.2 Uses of the proper distance
2.3 Short distances vs. long distances
Comoving coordinates[edit]
The evolution of the universe and its horizons in comoving distances. The x-axis is distance, in billions of light years; the left-hand y-axis is time, in billions of years since the Big Bang; the right-hand y-axis is the scale factor. This model of the universe includes dark energy which causes an accelerating expansion after a certain point in time, and results in an event horizon beyond which we can never see.
Although general relativity allows one to formulate the laws of physics using arbitrary coordinates, some coordinate choices are more natural or easier to work with. Comoving coordinates are an example of such a natural coordinate choice. They assign constant spatial coordinate values to observers who perceive the universe as isotropic. Such observers are called "comoving" observers because they move along with the Hubble flow.
A comoving observer is the only observer who will perceive the universe, including the cosmic microwave background radiation, to be isotropic. Non-comoving observers will see regions of the sky systematically blue-shifted or red-shifted. Thus isotropy, particularly isotropy of the cosmic microwave background radiation, defines a special local frame of reference called the comoving frame. The velocity of an observer relative to the local comoving frame is called the peculiar velocity of the observer.
Most large lumps of matter, such as galaxies, are nearly comoving, so that their peculiar velocities (owing to gravitational attraction) are low.
Comoving coordinates separate the exactly proportional expansion in a Friedmannian universe in spatial comoving coordinates from the scale factor a(t). This example is for the ΛCDM model.
The comoving time coordinate is the elapsed time since the Big Bang according to a clock of a comoving observer and is a measure of cosmological time. The comoving spatial coordinates tell where an event occurs while cosmological time tells when an event occurs. Together, they form a complete coordinate system, giving both the location and time of an event.
The expanding Universe has an increasing scale factor which explains how constant comoving distances are reconciled with proper distances that increase with time.
Comoving distance and proper distance[edit]
Comoving distance is the distance between two points measured along a path defined at the present cosmological time. For objects moving with the Hubble flow, it is deemed to remain constant in time. The comoving distance from an observer to a distant object (e.g. galaxy) can be computed by the following formula (derived using the Friedmann–Lemaître–Robertson–Walker metric):
{\displaystyle \chi =\int _{t_{e}}^{t}c\;{\frac {\mathrm {d} t'}{a(t')}}}
where a(t′) is the scale factor, te is the time of emission of the photons detected by the observer, t is the present time, and c is the speed of light in vacuum.
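For a flat ΛCDM universe the time integral can be rewritten as an integral over redshift (using dt/a = dz/H and a = 1/(1+z)), giving χ = c ∫₀ᶻ dz′/H(z′). A numerical sketch, with illustrative (assumed) cosmological parameters:

```python
import numpy as np
from scipy.integrate import quad

c = 299792.458            # speed of light, km/s
H0 = 70.0                 # Hubble constant, km/s/Mpc (assumed value)
Om, OL = 0.3, 0.7         # matter and dark-energy density parameters (assumed)

def H(z):
    # Friedmann equation for a flat matter + dark-energy universe.
    return H0 * np.sqrt(Om * (1 + z) ** 3 + OL)

def comoving_distance(z):
    chi, _ = quad(lambda zp: c / H(zp), 0.0, z)
    return chi            # Mpc

print(f"chi(z=1) ~ {comoving_distance(1.0):.0f} Mpc")
```

With these parameters the comoving distance to z = 1 comes out near 3.3 Gpc.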
Despite being an integral over time, this expression gives the correct distance that would be measured by a hypothetical tape measure at fixed time t, i.e. the "proper distance" (as defined below) after accounting for the time-dependent comoving speed of light via the inverse scale factor term
{\displaystyle 1/a(t')}
in the integrand. By "comoving speed of light", we mean the velocity of light through comoving coordinates [
{\displaystyle c/a(t')}
] which is time-dependent even though locally, at any point along the null geodesic of the light particles, an observer in an inertial frame always measures the speed of light as
{\displaystyle c}
in accordance with special relativity. For a derivation see "Appendix A: Standard general relativistic definitions of expansion and horizons" from Davis & Lineweaver 2004.[1] In particular, see eqs. 16-22 in the referenced 2004 paper [note: in that paper the scale factor
{\displaystyle R(t')}
is defined as a quantity with the dimension of distance while the radial coordinate
{\displaystyle \chi }
is dimensionless.]
Many textbooks use the symbol
{\displaystyle \chi }
for the comoving distance. However, this
{\displaystyle \chi }
must be distinguished from the coordinate distance
{\displaystyle r}
in the commonly used comoving coordinate system for a FLRW universe where the metric takes the form (in reduced-circumference polar coordinates, which only works half-way around a spherical universe):
{\displaystyle ds^{2}=-c^{2}\,d\tau ^{2}=-c^{2}\,dt^{2}+a(t)^{2}\left({\frac {dr^{2}}{1-\kappa r^{2}}}+r^{2}\left(d\theta ^{2}+\sin ^{2}\theta \,d\phi ^{2}\right)\right).}
In this case the comoving coordinate distance
{\displaystyle r}
{\displaystyle \chi }
by:[2][3][4]
{\displaystyle \chi ={\begin{cases}|\kappa |^{-1/2}\sinh ^{-1}{\sqrt {|\kappa |}}r,&{\text{if }}\kappa <0\ {\text{(a negatively curved ‘hyperbolic’ universe)}}\\r,&{\text{if }}\kappa =0\ {\text{(a spatially flat universe)}}\\|\kappa |^{-1/2}\sin ^{-1}{\sqrt {|\kappa |}}r,&{\text{if }}\kappa >0\ {\text{(a positively curved ‘spherical’ universe)}}\end{cases}}}
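Inverting the piecewise relation above gives r as a function of χ; in the limit of small |κ| all three branches reduce to r = χ, as a flat universe requires. A small sketch (function name illustrative):

```python
import numpy as np

def r_from_chi(chi, kappa):
    """Comoving coordinate r for comoving distance chi and curvature kappa,
    inverting the piecewise chi(r) relation above."""
    if kappa < 0:    # negatively curved ('hyperbolic') universe
        return np.sinh(np.sqrt(-kappa) * chi) / np.sqrt(-kappa)
    if kappa == 0:   # spatially flat universe
        return chi
    return np.sin(np.sqrt(kappa) * chi) / np.sqrt(kappa)  # 'spherical'

# Flat-limit check: for small |kappa| all three branches agree with r ~ chi.
for kappa in (-1e-6, 0.0, 1e-6):
    print(kappa, r_from_chi(100.0, kappa))
```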
Most textbooks and research papers define the comoving distance between comoving observers to be a fixed unchanging quantity independent of time, while calling the dynamic, changing distance between them "proper distance". On this usage, comoving and proper distances are numerically equal at the current age of the universe, but will differ in the past and in the future; if the comoving distance to a galaxy is denoted
{\displaystyle \chi }
, the proper distance
{\displaystyle d(t)}
at an arbitrary time
{\displaystyle t}
is simply given by
{\displaystyle d(t)=a(t)\chi }
{\displaystyle a(t)}
is the scale factor (e.g. Davis & Lineweaver 2004).[1] The proper distance
{\displaystyle d(t)}
between two galaxies at time t is just the distance that would be measured by rulers between them at that time.[5]
Uses of the proper distance[edit]
The evolution of the universe and its horizons in proper distances. The x-axis is distance, in billions of light years; the left-hand y-axis is time, in billions of years since the Big Bang; the right-hand y-axis is the scale factor. This is the same model as in the earlier figure, with dark energy and an event horizon.
Cosmological time is identical to locally measured time for an observer at a fixed comoving spatial position, that is, in the local comoving frame. Proper distance is also equal to the locally measured distance in the comoving frame for nearby objects. To measure the proper distance between two distant objects, one imagines that one has many comoving observers in a straight line between the two objects, so that all of the observers are close to each other, and form a chain between the two distant objects. All of these observers must have the same cosmological time. Each observer measures their distance to the nearest observer in the chain, and the length of the chain, the sum of distances between nearby observers, is the total proper distance.[6]
It is important to the definition of both comoving distance and proper distance in the cosmological sense (as opposed to proper length in special relativity) that all observers have the same cosmological age. For instance, if one measured the distance along a straight line or spacelike geodesic between the two points, observers situated between the two points would have different cosmological ages when the geodesic path crossed their own world lines, so in calculating the distance along this geodesic one would not be correctly measuring comoving distance or cosmological proper distance. Comoving and proper distances are not the same concept of distance as the concept of distance in special relativity. This can be seen by considering the hypothetical case of a universe empty of mass, where both sorts of distance can be measured. When the density of mass in the FLRW metric is set to zero (an empty 'Milne universe'), then the cosmological coordinate system used to write this metric becomes a non-inertial coordinate system in the Minkowski spacetime of special relativity where surfaces of constant Minkowski proper-time τ appear as hyperbolas in the Minkowski diagram from the perspective of an inertial frame of reference.[7] In this case, for two events which are simultaneous according to the cosmological time coordinate, the value of the cosmological proper distance is not equal to the value of the proper length between these same events,[8] which would just be the distance along a straight line between the events in a Minkowski diagram (and a straight line is a geodesic in flat Minkowski spacetime), or the coordinate distance between the events in the inertial frame where they are simultaneous.
If one divides a change in proper distance by the interval of cosmological time where the change was measured (or takes the derivative of proper distance with respect to cosmological time) and calls this a "velocity", then the resulting "velocities" of galaxies or quasars can be above the speed of light, c. Such superluminal expansion is not in conflict with special or general relativity nor the definitions used in physical cosmology. Even light itself does not have a "velocity" of c in this sense; the total velocity of any object can be expressed as the sum
{\displaystyle v_{\text{tot}}=v_{\text{rec}}+v_{\text{pec}}}
{\displaystyle v_{\text{rec}}}
is the recession velocity due to the expansion of the universe (the velocity given by Hubble's law) and
{\displaystyle v_{\text{pec}}}
is the "peculiar velocity" measured by local observers (with
{\displaystyle v_{\text{rec}}={\dot {a}}(t)\chi (t)}
{\displaystyle v_{\text{pec}}=a(t){\dot {\chi }}(t)}
, the dots indicating a first derivative), so for light
{\displaystyle v_{\text{pec}}}
is equal to c (−c if the light is emitted towards our position at the origin and +c if emitted away from us) but the total velocity
{\displaystyle v_{\text{tot}}}
is generally different from c.[1] Even in special relativity the coordinate speed of light is only guaranteed to be c in an inertial frame; in a non-inertial frame the coordinate speed may be different from c.[9] In general relativity no coordinate system on a large region of curved spacetime is "inertial", but in the local neighborhood of any point in curved spacetime we can define a "local inertial frame" in which the local speed of light is c[10] and in which massive objects such as stars and galaxies always have a local speed smaller than c. The cosmological definitions used to define the velocities of distant objects are coordinate-dependent – there is no general coordinate-independent definition of velocity between distant objects in general relativity.[11] How best to describe and popularize that expansion of the universe is (or at least was) very likely proceeding – at the greatest scale – at above the speed of light, has caused a minor amount of controversy. One viewpoint is presented in Davis and Lineweaver, 2004.[1]
Short distances vs. long distances[edit]
Within small distances and short trips, the expansion of the universe during the trip can be ignored. This is because the travel time between any two points for a non-relativistic moving particle will just be the proper distance (that is, the comoving distance measured using the scale factor of the universe at the time of the trip rather than the scale factor "now") between those points divided by the velocity of the particle. If the particle is moving at a relativistic velocity, the usual relativistic corrections for time dilation must be made.
Distance measures (cosmology) for comparison with other distance measures.
Faster-than-light#Universal expansion, for the apparent faster-than-light movement of distant galaxies.
Redshift, for the link of comoving distance to redshift.
^ a b c d T. M. Davis, C. H. Lineweaver (2004). "Expanding Confusion: Common Misconceptions of Cosmological Horizons and the Superluminal Expansion of the Universe". Publications of the Astronomical Society of Australia. 21 (1): 97–109. arXiv:astro-ph/0310808v2. Bibcode:2004PASA...21...97D. doi:10.1071/AS03040. S2CID 13068122.
^ Roos, Matts (2015). Introduction to Cosmology (4th ed.). John Wiley & Sons. p. 37. ISBN 978-1-118-92329-0. Extract of page 37 (see equation 2.39)
^ Webb, Stephen (1999). Measuring the Universe: The Cosmological Distance Ladder (illustrated ed.). Springer Science & Business Media. p. 263. ISBN 978-1-85233-106-1. Extract of page 263
^ Lachièze-Rey, Marc; Gunzig, Edgard (1999). The Cosmological Background Radiation (illustrated ed.). Cambridge University Press. pp. 9–12. ISBN 978-0-521-57437-2. Extract of page 11
^ see p. 4 of Distance Measures in Cosmology by David W. Hogg.
^ Steven Weinberg, Gravitation and Cosmology (1972), p. 415
^ See the diagram on p. 28 of Physical Foundations of Cosmology by V. F. Mukhanov, along with the accompanying discussion.
^ E. L. Wright (2009). "Homogeneity and Isotropy". Retrieved 28 February 2015.
^ Vesselin Petkov (2009). Relativity and the Nature of Spacetime. Springer Science & Business Media. p. 219. ISBN 978-3-642-01962-3.
^ Derek Raine; E.G. Thomas (2001). An Introduction to the Science of Cosmology. CRC Press. p. 94. ISBN 978-0-7503-0405-4.
^ J. Baez and E. Bunn (2006). "Preliminaries". University of California. Retrieved 28 February 2015.
Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity. Steven Weinberg. Publisher:Wiley-VCH (July 1972). ISBN 0-471-92567-5.
Principles of Physical Cosmology. P. J. E. Peebles. Publisher:Princeton University Press (1993). ISBN 978-0-691-01933-8.
General method, including locally inhomogeneous case and Fortran 77 software
An explanation from the Atlas of the Universe website of distance.
Spectral clustering - MATLAB spectralcluster - MathWorks Nordic
Perform Spectral Clustering on Input Data
Perform Spectral Clustering on Similarity Matrix
Cluster Using Radius Search for Similarity Graph
Find Eigenvalues and Eigenvectors of Laplacian Matrix
KNNGraphType
LaplacianNormalization
idx = spectralcluster(X,k)
idx = spectralcluster(S,k,'Distance','precomputed')
idx = spectralcluster(___,Name,Value)
[idx,V] = spectralcluster(___)
[idx,V,D] = spectralcluster(___)
idx = spectralcluster(X,k) partitions observations in the n-by-p data matrix X into k clusters using the spectral clustering algorithm (see Algorithms). spectralcluster returns an n-by-1 vector idx containing cluster indices of each observation.
idx = spectralcluster(S,k,'Distance','precomputed') returns a vector of cluster indices for S, the similarity matrix (or adjacency matrix) of a similarity graph. S can be the output of adjacency.
To use a similarity matrix as the first input, you must specify 'Distance','precomputed'.
idx = spectralcluster(___,Name,Value) specifies additional options using one or more name-value pair arguments in addition to the input arguments in previous syntaxes. For example, you can specify 'SimilarityGraph','epsilon' to construct a similarity graph using the radius search method.
[idx,V] = spectralcluster(___) also returns the eigenvectors V corresponding to the k smallest eigenvalues of the Laplacian matrix.
[idx,V,D] = spectralcluster(___) also returns a vector D containing the k smallest eigenvalues of the Laplacian matrix.
Cluster a 2-D circular data set using spectral clustering with the default Euclidean distance metric.
r1 = 2; % Radius of first circle
Find two clusters in the data by using spectral clustering.
idx = spectralcluster(X,2);
Visualize the result of clustering.
The spectralcluster function correctly identifies the two clusters in the data set.
Compute a similarity matrix from Fisher's iris data set and perform spectral clustering on the similarity matrix.
Load Fisher's iris data set. Use the petal lengths and widths as features to consider for clustering.
gscatter(X(:,1),X(:,2),species);
Find the distance between each pair of observations in X by using the pdist and squareform functions with the default Euclidean distance metric.
dist_temp = pdist(X);
dist = squareform(dist_temp);
S = exp(-dist.^2);
Perform spectral clustering. Specify 'Distance','precomputed' to perform clustering using the similarity matrix. Specify k=3 clusters, and set the 'LaplacianNormalization' name-value pair argument to use the normalized symmetric Laplacian matrix.
k = 3; % Number of clusters
idx = spectralcluster(S,k,'Distance','precomputed','LaplacianNormalization','symmetric');
idx contains the cluster indices for each observation in X.
Tabulate the clustering results.
tabulate(idx)
1 48 32.00%
The Percent column shows the percentage of data points assigned to the three clusters.
Repeat spectral clustering using the data as input to spectralcluster. Specify 'NumNeighbors' as size(X,1), which corresponds to creating the similarity matrix S by connecting each point to all the remaining points.
idx2 = spectralcluster(X,k,'NumNeighbors',size(X,1),'LaplacianNormalization','symmetric');
tabulate(idx2)
The clustering results for both approaches are the same. The order of cluster assignments is different, even though the data points are clustered in the same way.
Find clusters in a data set, based on a specified search radius for creating a similarity graph.
Create data with 3 clusters, each containing 500 points.
X = [mvnrnd([0 0],eye(2),N); ...
mvnrnd(5*[1 -1],eye(2),N); ...
mvnrnd(5*[1 1],eye(2),N)];
Specify a search radius of 2 for creating a similarity graph, and find 3 clusters in the data.
idx = spectralcluster(X,3,'SimilarityGraph','epsilon','Radius',2);
Find the eigenvalues and eigenvectors of the Laplacian matrix and use the values to confirm clustering results.
Randomly generate sample data with three well-separated clusters, each containing 100 points.
X = [randn(n,2)*0.5+3;
randn(n,2)*0.5
randn(n,2)*0.5-3];
Estimate the number of clusters in the data by using the eigenvalues of the Laplacian matrix. Compute the five smallest eigenvalues (in magnitude) of the Laplacian matrix.
[~,~,D_temp] = spectralcluster(X,5)
D_temp = 5×1
Only the first three eigenvalues are approximately zero. The number of zero eigenvalues is a good indicator of the number of connected components in a similarity graph and, therefore, is a good estimate of the number of clusters in your data. So, k=3 is a good estimate of the number of clusters in X.
Find k=3 clusters and return the three smallest eigenvalues and corresponding eigenvectors of the Laplacian matrix.
[idx,V,D] = spectralcluster(X,3)
idx = 300×1
Elements of D correspond to the three smallest eigenvalues of the Laplacian matrix. The columns of V contain the eigenvectors corresponding to the eigenvalues in D. For well-separated clusters, the eigenvectors are indicator vectors. The eigenvectors have values of zero (or close to zero) for points that do not belong to a particular cluster, and nonzero values for points that belong to a particular cluster.
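The connection between zero eigenvalues and cluster count can be verified from first principles. The NumPy sketch below builds an epsilon similarity graph and an unnormalized Laplacian L = D − S, whose number of (near-)zero eigenvalues equals the number of connected components. It is an illustration of the idea, not the spectralcluster implementation, and it uses a wider cluster separation than the example above so the radius-2 graph cleanly separates the components.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
# Three well-separated clusters of 100 points each (assumed layout).
X = np.vstack([rng.normal(0, 0.3, (n, 2)) + 5,
               rng.normal(0, 0.3, (n, 2)),
               rng.normal(0, 0.3, (n, 2)) - 5])

# Gaussian similarity with a hard radius cutoff (epsilon graph, radius 2).
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
S = np.exp(-d2)
S[d2 > 2.0 ** 2] = 0.0

# Unnormalized Laplacian L = D - S; count near-zero eigenvalues.
L = np.diag(S.sum(1)) - S
eigvals = np.sort(np.linalg.eigvalsh(L))
k_est = int(np.sum(eigvals < 1e-6))
print("estimated clusters:", k_est)
```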
The software treats NaNs in X as missing data and ignores any row of X containing at least one NaN. The spectralcluster function returns NaN values for the corresponding row in the output arguments idx and V.
S — Similarity matrix
Similarity matrix, specified as an n-by-n symmetric matrix, where n is the number of observations. A similarity matrix (or adjacency matrix) represents the input data by modeling local neighborhood relationships among the data points. The values in a similarity matrix represent the edges (or connections) between nodes (data points) that are connected in a similarity graph. For more information, see Similarity Matrix.
S must not contain any NaN values.
To use a similarity matrix as the first input of spectralcluster, you must specify 'Distance','precomputed'.
For details about how to estimate the number of clusters, see Tips.
Example: spectralcluster(X,3,'SimilarityGraph','epsilon','Radius',5) specifies 3 clusters and uses the radius search method with a search radius of 5 to construct a similarity graph.
Precomputed distance. You must specify this option if the first input to spectralcluster is a similarity matrix S.
When you use the 'seuclidean', 'minkowski', or 'mahalanobis' distance metric, you can specify the additional name-value pair argument 'Scale', 'P', or 'Cov', respectively, to control the distance metric.
Example: spectralcluster(X,5,'Distance','minkowski','P',3) specifies 5 clusters and uses the Minkowski distance metric with an exponent of 3 to perform the clustering algorithm.
Scale has length p (the number of columns in X), because each dimension (column) of X has a corresponding value in Scale. For each dimension of X, spectralcluster uses the corresponding value in Scale to standardize the difference between observations.
SimilarityGraph — Type of similarity graph
'knn' (default) | 'epsilon'
Type of similarity graph to construct from the input data X, specified as the comma-separated pair consisting of 'SimilarityGraph' and one of these values.
Graph-Specific Name-Value Pair Arguments
'knn' (Default) Construct the graph using nearest neighbors.
'NumNeighbors' — Number of nearest neighbors used to construct the similarity graph
'KNNGraphType' — Type of nearest neighbor graph
'epsilon' Construct the graph using a radius search. You must specify a value for Radius if you use this option.
'Radius' — Search radius for the nearest neighbors used to construct the similarity graph
For more information, see Similarity Graph.
This argument is valid only if 'Distance' is not 'precomputed'.
Example: 'SimilarityGraph','epsilon'
This argument is valid only if 'SimilarityGraph' is 'knn'. For more information, see Similarity Graph.
KNNGraphType — Type of nearest neighbor graph
'complete' (default) | 'mutual'
Type of nearest neighbor graph, specified as the comma-separated pair consisting of 'KNNGraphType' and one of these values.
'complete' (Default) Connects two points i and j when either i is a nearest neighbor of j or j is a nearest neighbor of i. This option leads to a denser representation of the similarity matrix.
'mutual' Connects two points i and j when i is a nearest neighbor of j and j is a nearest neighbor of i. This option leads to a sparser representation of the similarity matrix.
This argument is valid only if 'SimilarityGraph' is 'knn'.
Example: 'KNNGraphType','mutual'
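The complete/mutual distinction can be sketched outside MATLAB. The following Python/NumPy sketch (the function name and 0/1 edge weights are illustrative assumptions, not the MATLAB implementation) builds both graph types from raw data:

```python
import numpy as np

def knn_adjacency(X, k, graph_type="complete"):
    """Sketch of a k-nearest-neighbor similarity graph (0/1 adjacency).

    graph_type="complete": connect i and j if either is among the other's
    k nearest neighbors (union of directed edges)  -> denser graph.
    graph_type="mutual":   connect i and j only if each is among the
    other's k nearest neighbors (intersection)      -> sparser graph.
    """
    n = len(X)
    # Pairwise Euclidean distances.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)          # a point is not its own neighbor
    # B[i, j] = True if j is one of the k nearest neighbors of i.
    B = np.zeros((n, n), dtype=bool)
    for i in range(n):
        B[i, np.argsort(D[i])[:k]] = True
    if graph_type == "complete":
        A = B | B.T                      # union -> symmetric, denser
    else:
        A = B & B.T                      # intersection -> symmetric, sparser
    return A.astype(float)
```

Because the mutual graph keeps only the intersection of directed neighbor relations, its edge set is always a subset of the complete graph's.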
Search radius for the nearest neighbors used to construct the similarity graph, specified as the comma-separated pair consisting of 'Radius' and a nonnegative scalar.
You must specify this argument if 'SimilarityGraph' is 'epsilon'. For more information, see Similarity Graph.
If you specify 'auto', then the software selects an appropriate scale factor using a heuristic procedure. This heuristic procedure uses subsampling, so estimates can vary from one call to another. To reproduce results, set a random number seed using rng before calling spectralcluster.
LaplacianNormalization — Method to normalize Laplacian matrix
'randomwalk' (default) | 'symmetric' | 'none'
Method to normalize the Laplacian matrix L, specified as the comma-separated pair consisting of 'LaplacianNormalization' and one of these values.
'none' Use the Laplacian matrix L without normalization.
'randomwalk'
(Default) Use the normalized random-walk Laplacian matrix Lrw (Shi-Malik [2]).
L_{rw} = D_g^{-1} L.
The matrix Dg is the degree matrix.
'symmetric' Use the normalized symmetric Laplacian matrix Ls (Ng-Jordan-Weiss [3]).
L_s = D_g^{-1/2} L \, D_g^{-1/2}.
For more information, see Laplacian Matrix.
Example: 'LaplacianNormalization','randomwalk'
ClusterMethod — Clustering method
'kmeans' (default) | 'kmedoids'
Clustering method to cluster the eigenvectors of the Laplacian matrix, specified as the comma-separated pair consisting of 'ClusterMethod' and either 'kmeans' or 'kmedoids'.
'kmeans' — Perform k-means clustering by using the kmeans function.
'kmedoids' — Perform k-medoids clustering by using the kmedoids function.
kmeans and kmedoids involve randomness in their algorithms. Therefore, to reproduce the results of spectralcluster, you must set the seed of the random number generator by using rng.
Example: 'ClusterMethod','kmedoids'
Cluster indices, returned as a numeric column vector. idx has n rows, and each row of idx indicates the cluster assignment of the corresponding row (or observation) in X.
Eigenvectors, returned as an n-by-k numeric matrix. The columns of V are the eigenvectors corresponding to the k smallest eigenvalues of the Laplacian matrix. These eigenvectors are a low-dimensional representation of the input data X in a new space where clusters are more widely separated.
For well-separated clusters, the eigenvectors are indicator vectors. That is, the eigenvectors have values of zero (or close to zero) for points that do not belong to a given cluster, and nonzero values for points that belong to a particular cluster.
Eigenvalues, returned as a k-by-1 numeric vector that contains the k smallest eigenvalues of the Laplacian matrix. The number of zero eigenvalues in D is an indicator of the number of connected components in the similarity graph and, therefore, is a good estimate of the number of clusters in your data.
S_{i,j} = \exp\left(-\left(\frac{Dist_{i,j}}{\sigma}\right)^{2}\right)
spectralcluster supports these two methods of constructing a similarity graph:
Nearest neighbor method (if 'SimilarityGraph' is 'knn'(default)): spectralcluster connects points in X that are nearest neighbors. You can use the 'NumNeighbors' and 'KNNGraphType' name-value pair arguments to specify the options for constructing the nearest neighbor graph.
Use 'NumNeighbors' to specify the number of nearest neighbors.
Use 'KNNGraphType' to specify whether to make a 'complete' or 'mutual' connection of points.
Radius search method (if 'SimilarityGraph' is 'epsilon'): spectralcluster connects points whose pairwise distances are smaller than a search radius. You must specify the search radius for nearest neighbors used to construct the similarity graph by using the 'Radius' name-value pair argument.
S = \left(S_{i,j}\right)_{i,j = 1, \dots, n}
D_g(i,i) = \sum_{j=1}^{n} S_{i,j}.
A Laplacian matrix is one way of representing a similarity graph. The spectralcluster function supports the unnormalized Laplacian matrix, the normalized Laplacian matrix using the Shi-Malik method [2], and the normalized Laplacian matrix using the Ng-Jordan-Weiss method [3].
The unnormalized Laplacian matrix L is the difference between the degree matrix and the similarity matrix.
L = D_g - S.
The normalized random-walk Laplacian matrix (Shi-Malik) is defined as:
L_{rw} = D_g^{-1} L.
To derive Lrw, solve the generalized eigenvalue problem L v = \lambda D_g v, where v is a column vector of length n and \lambda is a scalar. The values of \lambda that satisfy the equation are the generalized eigenvalues of the matrix L_{rw} = D_g^{-1} L.
You can use the MATLAB® function eigs to solve the generalized eigenvalue problem.
The normalized symmetric Laplacian matrix (Ng-Jordan-Weiss) is defined as:
L_s = D_g^{-1/2} L \, D_g^{-1/2}.
Use the 'LaplacianNormalization' name-value pair argument to specify the method to normalize the Laplacian matrix.
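The three normalization choices above can be sketched in Python/NumPy (an illustrative sketch of the formulas, not MATLAB's implementation):

```python
import numpy as np

def laplacians(S):
    """Sketch of the three Laplacian variants described above.

    S: symmetric similarity matrix (n x n).
    Returns (L, L_rw, L_s): the unnormalized, random-walk (Shi-Malik),
    and symmetric (Ng-Jordan-Weiss) Laplacian matrices.
    """
    d = S.sum(axis=1)                    # degrees: D_g(i,i) = sum_j S_ij
    L = np.diag(d) - S                   # L = D_g - S
    L_rw = L / d[:, None]                # D_g^{-1} L (scale each row by 1/d_i)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    # D_g^{-1/2} L D_g^{-1/2}: scale rows and columns by 1/sqrt(d).
    L_s = d_inv_sqrt[:, None] * L * d_inv_sqrt[None, :]
    return L, L_rw, L_s
```

Note that every row of both L and L_rw sums to zero (the constant vector is an eigenvector with eigenvalue 0), while L_s is symmetric, which makes its eigendecomposition numerically convenient.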
Consider using spectral clustering when the clusters in your data do not naturally correspond to convex regions.
From the spectral clustering algorithm, you can estimate the number of clusters k as:
The number of eigenvalues of the Laplacian matrix that are equal to 0.
The number of connected components in your similarity graph representation. Use graph to create a similarity graph from a similarity matrix, and use conncomp to find the number of connected components in the graph.
For an example, see Estimate Number of Clusters and Perform Spectral Clustering.
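Both estimates of k described in the tips above can be sketched in Python/NumPy (illustrative only; `estimate_k` and its tolerance are assumptions, not a MATLAB API):

```python
import numpy as np

def estimate_k(S, tol=1e-8):
    """Estimate the number of clusters two equivalent ways (a sketch):
    count near-zero eigenvalues of the unnormalized Laplacian, and count
    connected components of the similarity graph via a graph traversal."""
    n = len(S)
    d = S.sum(axis=1)
    L = np.diag(d) - S
    eigvals = np.linalg.eigvalsh(L)          # L is symmetric
    k_eig = int(np.sum(np.abs(eigvals) < tol))

    # Connected components by depth-first search on nonzero entries.
    seen, k_cc = [False] * n, 0
    for s in range(n):
        if not seen[s]:
            k_cc += 1
            stack = [s]
            seen[s] = True
            while stack:
                i = stack.pop()
                for j in range(n):
                    if S[i, j] > 0 and not seen[j]:
                        seen[j] = True
                        stack.append(j)
    return k_eig, k_cc
```

For a graph with perfectly separated components the two counts agree exactly; with real data the eigenvalues are only approximately zero, which is why a tolerance is needed.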
Spectral clustering is a graph-based algorithm for clustering data points (or observations in X). The algorithm involves constructing a graph, finding its Laplacian matrix, and using this matrix to find k eigenvectors to split the graph k ways. By default, the algorithm for spectralcluster computes the normalized random-walk Laplacian matrix using the method described by Shi-Malik [2]. spectralcluster also supports the unnormalized Laplacian matrix and the normalized symmetric Laplacian matrix which uses the Ng-Jordan-Weiss method [3]. spectralcluster implements clustering as follows:
For each data point in X, define a local neighborhood using either the radius search method or nearest neighbor method, as specified by the 'SimilarityGraph' name-value pair argument (see Similarity Graph). Then, find the pairwise distances Dist_{i,j}.
Convert the distances to similarity measures using the kernel transformation S_{i,j} = \exp\left(-\left(\frac{Dist_{i,j}}{\sigma}\right)^{2}\right). The matrix S is the similarity matrix, and \sigma is the scale factor for the kernel, as specified using the 'KernelScale' name-value pair argument.
Calculate the unnormalized Laplacian matrix L, the normalized random-walk Laplacian matrix Lrw, or the normalized symmetric Laplacian matrix Ls, depending on the value of the 'LaplacianNormalization' name-value pair argument.
Find a matrix V \in \mathbb{R}^{n \times k} containing columns v_1, \dots, v_k, where the columns are the k eigenvectors that correspond to the k smallest eigenvalues of the Laplacian matrix. If using Ls, normalize each row of V to have unit length.
Treating each row of V as a point, cluster the n points using k-means clustering (default) or k-medoids clustering, as specified by the 'ClusterMethod' name-value pair argument.
Assign the original points in X to the same clusters as their corresponding rows in V.
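The steps above can be sketched end to end in Python/NumPy. This illustrative sketch uses the symmetric (Ng-Jordan-Weiss) normalization on a full similarity graph, a fixed kernel scale, and a basic Lloyd's k-means with deterministic farthest-point initialization; it is not the MATLAB implementation:

```python
import numpy as np

def spectral_cluster_sketch(X, k, sigma=1.0, n_iter=100):
    """Minimal sketch of the spectral clustering steps described above."""
    # Steps 1-2: pairwise distances -> Gaussian kernel similarities.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    S = np.exp(-(D / sigma) ** 2)
    # Step 3: normalized symmetric Laplacian L_s = Dg^{-1/2} L Dg^{-1/2}.
    d = S.sum(axis=1)
    L = np.diag(d) - S
    L_s = L / np.sqrt(np.outer(d, d))
    # Step 4: eigenvectors of the k smallest eigenvalues; for L_s,
    # normalize each row of V to unit length.
    _, U = np.linalg.eigh(L_s)            # eigenvalues in ascending order
    V = U[:, :k]
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    # Step 5: k-means (Lloyd's algorithm) on the rows of V,
    # initialized deterministically by farthest-point selection.
    C = [V[0]]
    for _ in range(1, k):
        dist2 = np.min([((V - c) ** 2).sum(axis=1) for c in C], axis=0)
        C.append(V[np.argmax(dist2)])
    C = np.array(C)
    for _ in range(n_iter):
        idx = np.argmin(((V[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(idx == j):          # guard against empty clusters
                C[j] = V[idx == j].mean(axis=0)
    return idx                            # step 6: labels for the rows of X
```

For two well-separated blobs, the rows of V collapse onto two nearly orthogonal unit vectors, so even this basic k-means recovers the clusters.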
[2] Shi, J., and J. Malik. “Normalized cuts and image segmentation.” IEEE Transactions on Pattern Analysis and Machine Intelligence. Vol. 22, 2000, pp. 888–905.
[3] Ng, A.Y., M. Jordan, and Y. Weiss. “On spectral clustering: Analysis and an algorithm.” In Proceedings of the Advances in Neural Information Processing Systems 14. MIT Press, 2001, pp. 849–856.
eigs | kmeans | kmedoids | pdist | adjacency | squareform
|
create an Array suitable for storing audio data
Create(data, options)
(optional); data with which to initialize the audio Array
various options to specify attributes of the created Array
The Create command creates an Array suitable for storing audio data. All parameters are optional and have suitable defaults as described below.
The optional data parameter specifies data that is used to initialize the audio Array. It can be a 1- or 2-D Array, a Matrix, a Vector, a numeric list, a list of numeric lists, a procedure, a set of equations of the form index=value, or a table.
If the data is an Array, Matrix, or Vector (all of which are instances of rtables), the resulting audio rtable will also be an Array, Matrix, or Vector respectively.
If the passed rtable has datatype=float[8], storage=rectangular, and no indexing functions, that rtable becomes the audio rtable (that is, no copy is made) unless the copy=true option is specified (see below).
If a list of numeric values is passed, a 1-dimensional Array of the same number of entries is produced. Each value in the list becomes one sample in the Array.
If a list of lists of numeric values is passed, the number of elements in the outer list determines the number of samples per channel, and the number of elements in the first inner list determines the number of audio channels.
Passing a procedure, set, or table for data causes an Array to be created according to the other options described below.
If a procedure was passed, it is called for each location in that array, being passed the sample number and channel number, and is expected to return a value for that location.
A set must contain equations of the form index=value, where index is a sample number, or a (sample number, channel number) pair, and value is the sample value for that location in the resulting Array. Array entries for which no equation exists will be initialized to zero.
If a table is passed, entries in the table with indices corresponding to indices in the resulting Array will initialize those entries in the Array. Array entries for which no corresponding table entry exists will be initialized to zero.
A numeric value can be passed for data, in which case the value is not treated as data, but is used in lieu of the duration option described below. The samples will be initialized to zero.
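The initialization rules above can be mimicked in other languages. The following Python/NumPy sketch is a hypothetical analogue (`create_audio` is not a real Maple or Python API) of the defaulting and data-initialization behavior:

```python
import numpy as np

def create_audio(data=None, duration=1.0, channels=1, rate=44100):
    """Rough analogue of AudioTools[Create]'s initialization rules:
    a float64 array with one row per sample and one column per channel.
    Hypothetical helper for illustration only."""
    if callable(data):
        # Procedure-style init: call data(sample, channel) for each
        # location (1-based indices, as in the Maple description).
        n = int(round(duration * rate))   # e.g. 7.5 s at 11025 Hz -> 82688
        return np.array([[float(data(s + 1, c + 1)) for c in range(channels)]
                         for s in range(n)])
    if data is not None:
        arr = np.asarray(data, dtype=np.float64)
        # List of lists: outer length = samples, inner length = channels.
        return arr if arr.ndim == 2 else arr.reshape(-1, 1)
    # No data: silence of the requested length.
    return np.zeros((int(round(duration * rate)), channels))
```

As in the Maple command, passing explicit data overrides the duration and channels options, and a missing data argument yields silence.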
The duration=numeric option specifies the audio length, in seconds, that can be recorded in the Array. If omitted, this defaults to 1 second. If initial data in the form of an Array, Matrix, Vector, or list was passed via the data parameter, this option is ignored.
The channels=integer option specifies the number of channels of audio data. For example, 1 for mono and 2 for stereo. If omitted, this defaults to one channel. If initial data in the form of an Array, Matrix, Vector, or list was passed via the data parameter, this option is ignored.
The rate=integer option specifies the number of samples (rows in the Array) to be allocated per second of duration. Valid values are 1 through 4294967295. If omitted, this defaults to 44100 samples per second (CD quality).
The copy=truefalse option specifies what happens when the data parameter is already an rtable of a format suitable for use by AudioTools. If false (the default), that rtable is changed in-place (by adding some attributes to it) to be the audio rtable returned by Create. If true, then a new audio Array is created, and the data is copied into it. If the data is not in a form suitable to being an audio rtable, a new rtable is always created, regardless of the setting of the copy option.
The order=ord option specifies the internal ordering of the created Array, and can be either C_order or Fortran_order. If initial data in the form of an Array, Matrix, Vector, or list was passed via the data parameter, this option is ignored.
The noise=truefalse option specifies whether the audio Array is to be initialized with random (white) noise. Passing initial data is meaningless if this option is true, as the data will be overwritten by the generated noise.
The following two options do not affect the internal format used to represent audio data (always an Array with datatype=float[8]), but instead control the form in which the data will be written to a file when using the AudioTools[Write] command:
The float=truefalse option specifies whether the audio data will be written to a file as integer (float=false) or floating point (float=true) values. The default is integer.
The bits=integer option specifies the number of bits per sample that are written when the audio data is eventually written to a file. If not specified, the default is 16 (CD quality) if float=false, or 32 if float=true.
All parameters and options to Create are optional. If called with no arguments, Create returns one channel containing one second of silence, at 44100 samples per second, with 16 bits per sample.
The output from the Create command is an Array (unless the initial data was returned as the output, in which case it may be a Matrix or Vector) with dimensions appropriate to the duration, rate, and bits specified. It will also have three numeric attributes describing the data: rate, bits, and sub-format. The latter is currently always 1, corresponding to the PCM sub-format of the WAVE file format.
with(AudioTools):
aud := Create(duration = 5.0)
aud := [ "Sample Rate"       44100
         "File Format"       PCM
         "File Bit Depth"    16
         "Channels"          1
         "Samples/Channel"   220500
         "Duration"          5.00000 s ]
attributes(aud)
44100, 16, 1
aud := Create(7.5, channels = 2, rate = 11025, 8, order = C_order)
aud := [ "Sample Rate"       11025
         "File Format"       PCM
         "File Bit Depth"    16
         "Channels"          2
         "Samples/Channel"   82688
         "Duration"          7.50005 s ]
attributes(aud)
11025, 16, 1
aud := Create(x -> evalhf(sin(x/4 - 1/4)))
aud := [ "Sample Rate"       44100
         "File Format"       PCM
         "File Bit Depth"    16
         "Channels"          1
         "Samples/Channel"   44100
         "Duration"          1.00000 s ]
printf("%1.2f\n", aud[1 .. 8])
aud := Create()
aud := [ "Sample Rate"       44100
         "File Format"       PCM
         "File Bit Depth"    16
         "Channels"          1
         "Samples/Channel"   44100
         "Duration"          1.00000 s ]
attributes(aud)
44100, 16, 1
The AudioTools[Create] command was updated in Maple 2020.
The noise and float options were introduced in Maple 2020.
|
Please note the corrected abstract of U. Hartl.
The conference brought together researchers from Europe, the US, and Japan who reported on various recent and ongoing developments in algebraic number theory and related fields. As at previous meetings, organized by Deninger, Schneider and Scholl, one of the clearest themes was the prevalence of p-adic methods across a range of areas. A notable difference with previous years was the number of younger people both as speakers and participants.
Colmez reported on his work relating unitary admissible GL2(Qp)-representations to local Galois representations. This realizes a program of Breuil and stands at the crossroads of p-adic Hodge theory, representations of p-adic reductive groups, and explicit reciprocity laws, as well as having applications to modularity of global Galois representations. Related talks were given by Schneider, who explained ongoing work with Vigneras attempting to generalize some of Colmez's constructions to higher rank, and by Orlik, who discussed the construction of locally analytic representations from equivariant vector bundles on symmetric spaces.
L. Berger reported on an extension of his earlier work on classification of local Galois representations. Hartl explained how these ideas could be used to give a description of the image of the Rapoport-Zink period morphism. This was a satisfying complement to his talk at the previous meeting where he had sketched some of these ideas.
There were several talks related to Iwasawa theory and reciprocity laws. Zerbes reported on her work on reciprocity laws for higher dimensional local fields. Fukaya reported on joint work with Coates, Kato, Sujatha and Venjakob in non-abelian Iwasawa theory, and Ochiai discussed the Iwasawa theory of ordinary Hida families. The talk by Sharifi was also related to this area. It described a fascinating relation between Galois cohomology and modular symbols, which seems to be closely related to the Main conjecture of Iwasawa theory.
There were a number of talks dealing with congruences between automorphic forms, and applications. The most exciting of these was by Fujiwara, who outlined how Taylor-Wiles systems could be used, in certain circumstances, to prove the Leopoldt conjecture for totally real fields. Sorensen discussed his work on level raising for GSp4 and some applications to Selmer groups. T. Berger explained how to construct Galois representations attached to cusp forms on GL2 over an imaginary quadratic field. These had been constructed by Taylor about 15 years ago, but were previously known to have the correct L-factors only at a set of primes of density 1. Berger also explained ongoing work on modularity lifting theorems in this situation. This would be an exciting advance since such theorems are currently available only over totally real fields.
There were two talks on polylogarithms. Blottiere explained his results on the Eisenstein classes on Hilbert modular varieties and applications to special values of L-functions. Bannai discussed the crystalline realization of the elliptic polylogarithm. Somewhat related to this was the talk of Huber on the p-adic Borel regulator.
Other talks were given by Yoshida, who explained a computation of vanishing cycles on Shimura varieties realizing the local Langlands and Jacquet-Langlands correspondences; Görtz, who spoke on affine Deligne-Lusztig varieties; Saito, who outlined his construction of the characteristic cycle of an l-adic sheaf; and Schmidt, who discussed his work on rings of integers of type K(π,1).
|
The rank of a matrix <math>A</math> is the number of independent columns of <math>A</math>. A square matrix is full rank if all of its columns are independent. That is, for a full rank matrix, no column vector <math>v_j</math> of <math>A</math> can be expressed as a linear combination of the other column vectors: <math>v_j \neq \sum_{i = 1, i\neq j}^{n} a_i v_i</math> for any choice of coefficients <math>a_i</math>. For example, if one column of <math>A</math> is twice another one, then those two columns are linearly dependent (with a scaling factor 2) and thus the matrix would not be full rank.
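The dependent-columns example can be checked numerically; here is a small Python/NumPy illustration:

```python
import numpy as np

# Column 2 is twice column 0, so the columns are linearly dependent
# and the matrix cannot be full rank.
A = np.array([[1., 0., 2.],
              [0., 1., 0.],
              [3., 0., 6.]])
print(np.linalg.matrix_rank(A))   # 2: rank deficient, not full rank

# Making the third column independent of the others restores full rank.
B = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [3., 0., 6.]])
print(np.linalg.matrix_rank(B))   # 3: full rank
```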
|
A Nonlinear Constituent Based Viscoelastic Model for Articular Cartilage and Analysis of Tissue Remodeling Due to Altered Glycosaminoglycan-Collagen Interactions | J. Biomech. Eng.
Gregory C. Thomas, Anna Asanbaeva, Pasquale Vena (Department of Structural Engineering, Laboratory of Biological Structure Mechanics, 20133 Milan, Italy), Robert L. Sah
Thomas, G. C., Asanbaeva, A., Vena, P., Sah, R. L., and Klisch, S. M. (September 1, 2009). "A Nonlinear Constituent Based Viscoelastic Model for Articular Cartilage and Analysis of Tissue Remodeling Due to Altered Glycosaminoglycan-Collagen Interactions." ASME. J Biomech Eng. October 2009; 131(10): 101002. https://doi.org/10.1115/1.3192139
A constituent based nonlinear viscoelastic (VE) model was modified from a previous study (Vena, et al., 2006, “A Constituent-Based Model for the Nonlinear Viscoelastic Behavior of Ligaments,” J. Biomech. Eng., 128, pp. 449–457) to incorporate a glycosaminoglycan (GAG)-collagen (COL) stress balance using compressible elastic stress constitutive equations specific to articular cartilage (AC). For uniaxial loading of a mixture of quasilinear VE constituents, time constant and relaxation ratio equations are derived to highlight how a mixture of constituents with distinct quasilinear VE properties is one mechanism that produces a nonlinear VE tissue. Uniaxial tension experiments were performed with newborn bovine AC specimens before and after
∼55% and ∼85% GAG depletion treatment with guanidine. Experimental tissue VE parameters were calculated directly from stress relaxation data, while intrinsic COL VE parameters were calculated by curve fitting the data with the nonlinear VE model with intrinsic GAG viscoelasticity neglected. Select tissue and intrinsic COL VE parameters were significantly different between control and experimental groups and correlated with GAG content, suggesting that GAG-COL interactions exist to modulate tissue and COL mechanical properties. Comparison of the results from this and other studies that subjected more mature AC tissue to GAG depletion treatment suggests that the GAGs interact with the COL network in a manner that may be beneficial for rapid volumetric expansion during developmental growth while protecting cells from excessive matrix strains. Furthermore, the underlying GAG-COL interactions appear to diminish as the tissue matures, indicating a distinctive remodeling response during developmental growth.
biomechanics, bone, internal stresses, molecular biophysics, proteins, viscoelasticity, cartilage, collagen, viscoelastic, glycosaminoglycans, remodeling
Biological tissues, Relaxation (Physics), Stress, Viscoelasticity, Cartilage
Articular Cartilage: Composition and Structure
Anisotropy, Inhomogeneity, and Tension-Compression Nonlinearity of Human Glenohumeral Cartilage in Finite Deformation
Inhomogeneous Cartilage Properties Enhance Superficial Interstitial Fluid Support and Frictional Properties, but Do Not Provide a Homogeneous State of Stress
Fibril Reinforced Poroelastic Model Predicts Specifically Mechanical Behavior of Normal, Proteoglycan Depleted and Collagen Degraded Articular Cartilage
The Tensile Properties of the Cartilage of Human Femoral Condyles Related to the Content of Collagen and Glycosaminoglycans
The Effects of Proteolytic Enzymes on the Mechanical Properties of Adult Human Articular Cartilage
Biphasic Poroviscoelastic Characteristics of Proteoglycan-Depleted Articular Cartilage: Simulation of Degeneration
Articular Cartilage Tensile Integrity: Modulation by Matrix Depletion Is Maturation-Dependent
Effect of Glycosaminoglycan Degradation on Lung Tissue Viscoelasticity
The Proteoglycan Contents of the Temporomandibular Joint Disc Influence Its Dynamic Viscoelastic Properties
Relationship Between Collagen Fibrils, Glycosaminoglycans, and Stress Relaxation in Mitral Valve Chordae Tendineae
The Role of Viscoelasticity of Collagen Fibers in Articular Cartilage: Theory and Numerical Formulation
The Role of Viscoelasticity of Collagen Fibers in Articular Cartilage: Axial Tension Versus Compression
Cartilage Growth and Remodeling: Modulation of Growth Phenotype and Tensile Integrity
Ph.D. thesis, University of California, San Diego, La Jolla, CA.
Tensile Mechanical Properties of Bovine Articular Cartilage: Variations With Growth and Relationships to Collagen Network Components
Ultrastructure of Cartilage Under Tensile Strain
Transactions of the 50th Annual Meeting, Orthopaedic Research Society
Viscoelastic Properties of Proteoglycan Solutions With Varying Proportions Present as Aggregates
Viscoelastic Properties of Proteoglycan Subunits and Aggregates in Varying Solution Concentrations
Nonlinear Viscoelastic Properties of Articular Cartilage in Shear
Viscoelastic Shear Properties of Articular Cartilage and the Effects of Glycosidase Treatment
The Effect of Glycosaminoglycans and Hydration on the Viscoelastic Properties of Aortic Valve Cusps
Effects of Aggregate Modulus Inhomogeneity on Cartilage Compressive Stress-Relaxation Behavior
|
The Difference Between Fast and Slow Stochastics
How the Stochastic Works
The main difference between fast and slow stochastics is summed up in one word: sensitivity. The fast stochastic is more sensitive than the slow stochastic to changes in the price of the underlying security and will likely result in many transaction signals. However, to really understand this difference, you should first understand what the stochastic momentum indicator is all about.
Stochastic oscillators are a class of momentum indicators comparing a particular closing price of a security to a range of its prices over a certain period of time.
The sensitivity of the oscillator to market movements is related directly to the length of that time period or by taking a moving average of the result.
The "fast" stochastic uses the most recent price data, while the "slow" stochastic uses a moving average.
Therefore, the fast version will react more quickly with timely signals, but may also produce false signals. The slow version will be smoother, taking more time to produce signals, but may be more accurate.
The word "stochastic" indicates some sort of random process. This randomness can be measured probabilistically, but cannot be known completely in advance. Incorporating randomness, or "noise," into the understanding of stock price movements was seen as a major innovation.
The stochastic oscillator was developed in the late 1950s by George Lane. As designed by Lane, the stochastic oscillator presents the location of the closing price of a stock in relation to the high and low range of the price of a stock over a period of time, typically a 14-day period. Lane, over the course of numerous interviews, said that the stochastic oscillator does not follow price or volume or anything similar. He indicated that the oscillator follows the speed or momentum of price.
Lane also revealed in interviews that, as a rule, the momentum or speed of the price of a stock changes before the price changes itself. In this way, the stochastic oscillator can be used to foreshadow reversals when the indicator reveals bullish or bearish divergences. This signal is the first, and arguably the most important, trading signal Lane identified.
How the Stochastic Momentum Oscillator Works
Developed as a tool for technical analysis, the stochastic momentum oscillator is used to compare where a security's price closed relative to its price range over a given period of time—usually 14 days. It is calculated using the following formula:
\begin{aligned} &\%K=100\times\frac{CP-L14}{H14-L14}\\ &\textbf{where:}\\ &CP= \text{Most recent closing price}\\ &L14= \text{Low of the 14 previous trading sessions}\\ &H14 = \text{Highest price traded during the same 14-day period} \end{aligned}
A %K result of 80 means that the security closed 80% of the way between the lowest low and the highest high of the past 14 days, that is, near the top of its recent range. The main assumption is that a security's price will trade at the top of the range in a major uptrend. A three-period moving average of the %K, called %D, is usually included to act as a signal line. Transaction signals are usually generated when the %K crosses through the %D.
Generally, a period of 14 days is used in the above calculation, but this period is often modified by traders to make this indicator more or less sensitive to movements in the price of the underlying asset.
The "speed" of a stochastic oscillator refers to the settings used for the %D and %K inputs. The result obtained from applying the formula above is known as the fast stochastic. Some traders find that this indicator is too responsive to price changes, which ultimately leads to being taken out of positions prematurely. To solve this problem, the slow stochastic was invented by applying a three-period moving average to the %K of the fast calculation.
Fast= the formula spelled out above, where %D is a 3-day moving average of %K.
Slow= replace %K with the Fast %D (i.e. the MA of the fast %K); replace %D with a MA of slow %K.
Taking a three-period moving average of the fast stochastics %K has proved to be an effective way to increase the quality of transaction signals; it also reduces the number of false crossovers. After the first moving average is applied to the fast stochastics %K, an additional three-period moving average is then applied—making what is known as the slow stochastics %D. Close inspection will reveal that the %K of the slow stochastic is the same as the %D (signal line) on the fast stochastic.
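To make the fast/slow distinction concrete, here is a minimal sketch of both calculations in Python; the function names and the toy price series are illustrative, not taken from any charting package.

```python
def percent_k(closes, highs, lows, period=14):
    """Fast %K: where the latest close sits within the recent high-low range."""
    k = []
    for i in range(period - 1, len(closes)):
        lo = min(lows[i - period + 1:i + 1])
        hi = max(highs[i - period + 1:i + 1])
        k.append(100 * (closes[i] - lo) / (hi - lo))
    return k

def sma(values, period=3):
    """Simple moving average, used for the %D signal lines."""
    return [sum(values[i - period + 1:i + 1]) / period
            for i in range(period - 1, len(values))]

# Toy data: a steady uptrend, so %K should sit near the top of its range.
closes = [float(x) for x in range(1, 21)]
highs = [c + 0.5 for c in closes]
lows = [c - 0.5 for c in closes]

fast_k = percent_k(closes, highs, lows)  # fast %K
fast_d = sma(fast_k)                     # fast %D, which is also slow %K
slow_d = sma(fast_d)                     # slow %D
```

In this toy uptrend every 14-day window has the same shape, so fast %K stays pinned near 96, which illustrates the assumption that prices trade near the top of the range in a major uptrend.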
An easy way to remember the difference between the two technical indicators is to think of the fast stochastic as a sports car and the slow stochastic as a limousine. Like a sports car, the fast stochastic is agile and changes direction very quickly in response to sudden changes. The slow stochastic takes a little more time to change direction but promises a very smooth ride.
Mathematically, the two oscillators are nearly the same except that the slow stochastics %K is created by taking a three-period average of the fast stochastics %K. Taking a three-period moving average of each %K will result in the line that is used for a signal.
CMT Association. "Origins of the Stochastic Oscillator (Article)." Accessed March 11, 2021.
George Lane, “Lane’s Stochastics,” Pages 87-90. Technical Analysis of Stocks and Commodities Magazine. May-June, 1984.
Fidelity. "Fast Stochastic." Accessed March 11, 2021.
Fidelity. "Slow Stochastic." Accessed March 11, 2021.
Is a Slow Stochastic Effective in Day Trading?
|
TEOTWAWKI – 40 Years In The Desert
Terror Tweet Triggers Total Trending Trauma
ultimately I think we'll probably look back on 2020 as one of the last relatively normal years before everything got really, really fucked up
This is not a good thought.
If it’s true, I will not be a happy camper next year.
Clearly, Young Skywalker Has Completed His Training
This is not just the best response to the debates, it’s the best possible response to the debates:
That being said, Weird Al Yankovic is a pretty close second:
It’s actually pretty impressive that he got together a song based on the actual event in such a short time.
Here’s Hoping for an Easy Fast for my Fellow Landsmen
Yom Kippur starts in a few hours, so 25 hours of prayer, and no food or drink.
It does seem to be an odd Day of Atonement though.
I think that I will be spending more time praying for the redemption of the world than I will for my own redemption.
The future is always unpredictable but it’s hard not to think that aerial firefighting aircraft will be the most valuable kind of fighter aircraft in 2040.
It is an extreme optimist who prepares for a high-tech war in 2100.
It’s a military aviation site, and even they get that we are facing a catastrophic, and perhaps extinction level, climate crisis.
If the Greenland ice field were to melt, it would raise sea levels about 7 meters (23 feet).
New evidence indicates that the melting of Greenland has reached the point of no return.
This will devastate most of the coastal cities across the world:
Annual snowfall can no longer replenish the melted ice that flows into the ocean from Greenland’s glaciers. That is the conclusion of a new analysis of almost 40 years’ satellite data by researchers at Ohio State University. The ice loss, they think, is now so great that it has triggered an irreversible feedback loop: the sheet will keep melting, even if all climate-warming emissions are miraculously curtailed. This is bad news for coastal cities, given that Greenland boasts the largest ice sheet on the planet after Antarctica. Since 2000 its melting ice has contributed about a millimetre a year to rising sea levels. The loss of the entire ice sheet would raise them by more than seven metres, enough to reconfigure the majority of the world’s coastlines.
Clearly, people operating in their own enlightened self-interest, the religion of the free-market mousketeers, is not working.
This Sounds Like Prophecy
The guard at the now graffiti-covered Treasury building might be a prophet, but I am not sure if that is a good thing or a bad thing.
We Are F%$#ed
The Atlanta Fed’s real-time estimate of GDP just came out, and while there are plenty of caveats, they are estimating annualized GDP growth of minus 52.8%.
Even the more mainstream estimates shown in the figure are end of the world stuff, but their estimate is a Stay Puft Marshmallow Man moment:
Ok, this is now getting a little scary:
The real time GDP running estimate of US economic activity is half of what it was 3 months ago. As of June 1, the Atlanta Fed is nowcasting that economic activity in the United States, as measured in GDP, is minus 52.8%.
Given the extent of the collapse in demand that has accompanied quarantines and shelter-in-place orders, this is not a surprise. Still, when you see the number in print, it still has the capacity to shock.
Yeah, it has the capacity to shock.
Who Had the 2020 Over and Under for Super-Volcano?
There have been a series of mild earthquakes which may indicate that the Yellowstone super-volcano might be becoming active again:
Monitoring services from the US Geological Survey (USGS) found there have been 213 earthquakes in the Yellowstone National Park in the past 28 days. The tremors were relatively small, with the largest being a 2.1 magnitude tremor on May 22.
However, some experts warn it is not necessarily the size of an earthquake which is an indicator a volcano might erupt, but the quantity of them.
Portland State University Geology Professor Emeritus Scott Burns said: “If you get swarms under a working volcano, the working hypothesis is that magma is moving up underneath there.”
But others disagree about whether an earthquake swarm near a volcano could be a sign of things to come.
Jamie Farrell at the University of Utah in Salt Lake City believes this is just part of the natural cycle for Yellowstone volcano, saying: “Earthquake swarms are fairly common in Yellowstone.
The Yellowstone supervolcano, located in the US state of Wyoming, last erupted on a major scale 640,000 years ago.
As an FYI, when the Yellowstone Supervolcano (also called the Yellowstone Caldera) last erupted, it put something on the order of 100 km³ of material into the air, with heavy ash falls as far as 1000 miles away.
Additionally, it would likely precipitate a climate catastrophe with widespread crop failures and famine.
I know that the chance of something happening is tiny, but if that doesn’t sound like a 2020 thing to you, you have not been paying attention.
Another 3 Million New Jobless Claims
So the total since mid March, about 8 weeks, is 36½ million new jobless claims.
Assuming that the normal level of claims is 225,000 (PDF link, see page 6), the excess initial unemployment claims come to
36,500,000 − 225,000 × 8 = 34,700,000.
Roughly 165 million people were employed, with unemployment around 3%, which gives a labor force of about 170 million working or looking for work.
Just subtracting the 34.7 million excess claims, and a lot of people have not been processed, gives 23.4% unemployment (U3).
The above is just spit-balling by me, but it is not unreasonable to expect the unemployment rate to top 20% right now.
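That spit-balled arithmetic is easy to check in a few lines, using the post's own round numbers:

```python
# Check of the post's arithmetic: excess initial claims over 8 weeks,
# then the implied U3 unemployment rate against a ~170M labor force.
total_claims = 36_500_000        # claims since mid-March
normal_weekly = 225_000          # assumed normal weekly claims
weeks = 8
excess = total_claims - normal_weekly * weeks    # excess claims

employed = 165_000_000                  # roughly employed pre-crisis
labor_force = employed / 0.97           # ~170 million at 3% unemployment
baseline_unemployed = labor_force - employed
implied_u3 = (baseline_unemployed + excess) / labor_force
print(f"{excess:,} excess claims, implied U3 = {implied_u3:.1%}")
# prints: 34,700,000 excess claims, implied U3 = 23.4%
```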
OK, I Get Why Some People are Saying, “End of Times”
So in addition to a global pandemic with a virus that may not be amenable to a vaccine, we are now seeing an influx of giant killer hornets.
This is firmly in the area of things that give me the SERIOUS heebie jeebies:
Excuse me while I try to stop shaking like a leaf.
Human Sacrifice, Dogs and Cats Living Together, Mass Hysteria
This is a scary
We now have the first GDP numbers for the first quarter of 2020, and it is down 4.8%.
When one considers the trend of 2% annualized growth, and the fact that the shutdowns, and hence the economic contraction, did not begin until March, it means that economic activity in March fell at something approaching a 60% annual rate.
Obviously, this won’t continue at this rate, but predictions are looking at a 30% contraction:
When one considers that pending home sales fell 25% month over month in March, this is going to get a LOT uglier before things turn up.
Also, with an additional 28 million people out of work, even a strong recovery, say 500K growth in nonfarm payrolls a month (something that has happened in only 15 months over the past 60 years), would still take more than four years for complete recovery.
The total theater box office in the United States this past weekend was just 2 movies shown at one drive in theater:
With movie theaters across the country closed for the foreseeable future due to the ongoing coronavirus (COVID-19) pandemic, the weekly box office report is all but a distant memory. But there’s one theater that’s still keeping the weekly box office report alive. A single drive-in theater in Florida was the source of the entire domestic box office this past weekend, showing a whopping two (!) movies to its audience. So if you were missing your weekly box office report, here it is, in extremely barebones form.
The forced temporary shuttering of businesses and movie theaters across the country has created an unexpected result: the rise of drive-in movie theaters. Once a widely frequented form of moviegoing, the drive-in theater has become an increasing rarity since its heyday in the late 1950s. But now the drive-in theater is seeing a boom in business thanks to the pandemic.
That’s true especially of the Ocala Drive-In in Ocala, Florida: the one source of the domestic box office this past weekend. The weekend box office report on the website The Numbers (via ScreenCrush) showed two new movies playing at one theater in the entire United States last week. The two films, the World War II mime biopic Resistance and the indie psychological thriller Swallow (both from IFC Films), were shown at the Ocala Drive-In in Ocala, Florida, according to journalist Gitesh Pandya, for a grand total box office of $33,456.
I don’t even want to think how this affects theater popcorn sales.
Holy Sh%$
Oil prices, specifically the price of WTI crude, just fell to almost NEGATIVE $40 a barrel today.
Part of this was an artifact of the calendar, futures contracts were coming due, so stockbrokers were facing the possibility of thousands of gallons of crude oil being pumped into their swimming pools, but this is f%$#ed-up and sh%$.
When you consider the fact that fracking is a particularly expensive way to extract oil, and that the best evidence is that it has never been profitable, there are going to be a whole bunch of eager investors left holding the bag:
There is a whole bunch of money from a whole bunch of the “smartest people in the world” that just got lit on fire.
Dana Milbank, the quintessential Washington, DC insider know-nothing, just got something right when he noted that the Covid-19 response was a direct result of the movement Republican belief that the government should be drowned in a bathtub.
Well, a stopped clock is right twice a day, and Dana Milbank is right (maybe) once a year:
I had been expecting this for 21 years.
“It’s not a matter of ‘if,’ but ‘when,’” the legendary epidemiologist D.A. Henderson told me in 1999 when we discussed the likelihood of a biological event causing mass destruction.
In 2001, I wrote about experts urging a “medical Manhattan Project” for new vaccines, antibiotics and antivirals.
I repeat these things not to pretend I was prescient but to show that the nation’s top scientists and public health experts were shouting these warnings from the rooftops — deafeningly, unanimously and consistently. In the years after the 2001 terrorist attacks, the Bush and Obama administrations seemed to be listening.
But then came the tea party, the anti-government conservatism that infected the Republican Party in 2010 and triumphed with President Trump’s election. Perhaps the best articulation of its ideology came from the anti-tax activist Grover Norquist, who once said: “I don’t want to abolish government. I simply want to reduce it to the size where I can drag it into the bathroom and drown it in the bathtub.”
They got their wish. What you see today is your government, drowning — a government that couldn’t produce a rudimentary test for coronavirus, that couldn’t contain the pandemic as other countries have done, that couldn’t produce enough ventilators for the sick or even enough face masks and gowns for health-care workers.
The fact that this font of conventional wisdom (the conventional wisdom is always wrong) recognizes that this is a direct result of an ideology is significant.
The pundit class disdains the discussion of ideology, so the fact that one of their most prominent avatars is assigning the blame to a right-wing ideology constitutes a statement against interest, which increases the credibility of the assertion.
According to a poll, 18% of US workers have either lost their jobs or hours.
This has been over the past 2 weeks.
This is a collapse that is unprecedented since at least the end of World War II:
It’s no surprise that the NYSE triggered the circuit breakers again, for the 3rd time in less than 2 weeks.
Well, This Has Gone from Concerning to Bat-Sh%$ Insane Quickly
I am talking, of course, about Coronavirus.
In the past 24 hours, after Donald Trump gave the least reassuring political speech since Pennsylvania State Treasurer R. Budd Dwyer’s resignation speech in 1987,* things have gone to hell in a hand-basket.
The NCAA has canceled the collegiate basketball championships, AKA March Madness, because of COVID-19 concerns.
This is the most popular sporting event in the United States, normally pulling in about 50% more in ad revenue, and even more in eyeballs, than the Super Bowl, and it’s canceled.
In addition, baseball spring training has been suspended and the NBA has suspended its season.
Heard on every trading desk for last 10yrs: “Fck Dodd-Frank.”
Heard on every trading desk for the last 10d: “Thank fck for Dodd-Frank”
— Joseph S. Mauro (@jsmauro13) March 10, 2020
And then, for the second time this week, but only the third time in more than 20 years, circuit breakers temporarily halted stock trading after the S&P 500 entered free fall.
I am certain right now that there are a lot of brokers who are VERY happy that Dodd-Frank strengthened these market protections.
Finally, in Maryland, all public schools will be closed for 2 weeks, Catholic Schools in Baltimore are shutting down, Episcopal Churches are suspending services, and both state and federal courts are suspending cases, with most public entertainment events cancelled as well.
This all went pear shaped rather quickly.
* Following his conviction on bribery charges, he blew his brains out at a press conference.
Given that the COVID-19 has now hit Rhode Island, it’s only a matter of time until the virus pops up in an Amazon warehouse.
Think about that for a second: Thousands of workers working in close proximity, without the time for proper hygiene measures, and immune systems already compromised by stress.
It looks like updated models are showing that anthropogenic climate change will be even more disastrous than previously predicted:
Our planet’s climate may be more sensitive to increases in greenhouse gas than we realized, according to a new generation of global climate models being used for the next major assessment from the Intergovernmental Panel on Climate Change (IPCC). The findings—which run counter to a 40-year consensus—are a troubling sign that future warming and related impacts could be even worse than expected.
One of the new models, the second version of the Community Earth System Model (CESM2) from the National Center for Atmospheric Research (NCAR), saw a 35% increase in its equilibrium climate sensitivity (ECS), the rise in global temperature one might expect as the atmosphere adjusts to an instantaneous doubling of atmospheric carbon dioxide. Instead of the model’s previous ECS of 4°C (7.2°F), the CESM2 now shows an ECS of 5.3°C (9.5°F).
“It is imperative that the community work in a multi-model context to understand how plausible such a high ECS is,” said NCAR’s Andrew Gettelman and coauthors in a paper published last month in Geophysical Research Letters. They added: “What scares us is not that the CESM2 ECS is wrong…but that it might be right.”
At least eight of the global-scale models used by IPCC are showing upward trends in climate sensitivity, according to climate researcher Joëlle Gergis, an IPCC lead author and a scientific advisor to Australia’s Climate Council. Gergis wrote about the disconcerting trends in an August column for the Australian website The Monthly.
Researchers are now evaluating the models to see whether the higher ECS values are model artifacts or correctly depict a more dire prognosis.
I would note that every time that researchers update their models as a result of real world data, the predictions get more and more dire.
We are in for a huge world of hurt.
Permafrost in Canada is melting at a rate faster than the most alarmist models predicted:
Durgin-Park, the iconic Faneuil Hall restaurant, is closing on January 12:
Durgin-Park, a Faneuil Hall staple since 1827, will be closing on January 12.
Employees of the historic restaurant were notified about the decision to close Wednesday.
Durgin-Park is one of the oldest restaurants in the country. It gained a reputation for its good-hearted waitresses being nearly as “fresh” as its fish.
Parent company Ark Restaurants based out of New York says it’s the nature of the business – and that the restaurant just isn’t making money like it used to.
Seriously, this sucks like 1000 Hoovers all going at once.
F%$# Ark Restaurants.
There are plenty of people in Boston who are more than willing to abuse me, but none of them make prime rib, Boston baked beans, and Indian pudding like Durgin-Park.
|
Heat Transfer From Novel Target Surface Structures to a 3×3 Array of Normally Impinging Water Jets | J. Thermal Sci. Eng. Appl. | ASME Digital Collection
Heat Transfer From Novel Target Surface Structures to a 3×3 Array of Normally Impinging Water Jets
Nicholas M. R. Jeffers, Stokes Institute, Mechanical and Aeronautical Engineering, e-mail: nick.jeffers@ul.ie
Jeff Punch, CTVR, Stokes Institute, Mechanical and Aeronautical Engineering, e-mail: jeff.punch@ul.ie
Edmond J. Walsh
Jeffers, N. M. R., Punch, J., Walsh, E. J., and McLean, M. (January 28, 2011). "Heat Transfer From Novel Target Surface Structures to a 3×3 Array of Normally Impinging Water Jets." ASME. J. Thermal Sci. Eng. Appl. December 2010; 2(4): 041004. https://doi.org/10.1115/1.4003220
Impinging jet arrays provide a means to achieve high heat transfer coefficients and are used in a wide variety of engineering applications such as electronics cooling. The objective of this paper is to characterize the heat transfer from an array of 3×3 submerged and confined impinging water jets to a range of target surface structures. The target surfaces consisted of a flat surface, nine 90 deg swirl generators, a 6×6 pin fin array, and nine pedestals with turn-down dishes that turned the flow to create an additional annular impingement. In order to make comparisons with a previous single jet study by the authors, each impinging jet within the array was geometrically constrained to a round, 8.5 mm diameter, square-edged nozzle at a jet exit-to-target surface spacing of H/D = 0.5. A custom measurement facility was designed and commissioned in order to measure the heat transfer coefficient and the pressure loss coefficient of each of the target surface augmentations. The heat transfer results are presented in terms of Nu/Pr^0.4, and the pressure results are presented in terms of pressure loss coefficient. Comparing the array of jets to a single jet showed a decrease in heat transfer. Full field velocity magnitude images showed that this decrease in heat transfer was caused by neighboring jet interference cross-flow coupled with a greater back pressure effect. The analysis of the different target surface augmentations showed that the performance of the pedestal with the turn-down dish was the least compromised by the addition of the surrounding jets. It showed both the highest fin efficiency of 95.1% and fin effectiveness of 2.27. However, it showed the highest overall pressure loss coefficient compared with the other target surfaces, and therefore the nine 90 deg swirl generators performed the best in terms of both pressure loss coefficient and thermal performance. The findings of this paper are of practical relevance to the design of primary heat exchangers for high-flux thermal management applications, where the boundaries of cooling requirements continue to be tested.
confined flow, cooling, flow visualisation, jets, nozzles, pressure measurement, swirling flow, water
Flow (Dynamics), Heat transfer, Jets, Water, Pressure, Generators
|
Parameter Analysis and Numerical Simulation of Hydrogen Production by Steam Reforming of Dimethyl Ether
Hydrogen production through dimethyl ether steam reforming is an attractive option for mobile applications of hydrogen fuel cells. Hydrogen is a major trend in the future of energy development: it is not only pollution-free, but also has a high energy density, which makes research on hydrogen fuel cells particularly important. In this paper, a numerical study of the dimethyl ether steam reforming reaction in a reactor is presented using computational fluid dynamics. A three-dimensional reactor model developed in the commercial software COMSOL (version 5.2a) was used to simulate the reaction characteristics under varying reforming conditions. The simulation results show the temperature and mass distributions, and reveal the dependency of the dimethyl ether reforming reaction rate on temperature, pressure, and reactor length. The yield of H2 and the conversion of dimethyl ether were examined for different mass ratios and inlet temperatures (200°C, 300°C, 400°C, 500°C). The governing equations in the model include conservation of mass, momentum, energy and chemical species.
\nabla \cdot \left(\rho \left(-\frac{\kappa }{\eta }\nabla {p}_{sr}\right)\right)=0
Q={\left(\rho {C}_{p}\right)}_{t}\frac{\partial {T}_{sr}}{\partial t}+\nabla \cdot \left(-{k}_{sr}\nabla {T}_{sr}\right)+{\left(\rho {C}_{p}\right)}_{f}u\cdot \nabla {T}_{sr}
{\left(\rho {C}_{p}\right)}_{t}=\epsilon {\left(\rho {C}_{p}\right)}_{f}+\left(1-\epsilon \right){\left(\rho {C}_{p}\right)}_{s}
{R}_{i}=\nabla \cdot \left(\rho {\omega }_{i}u-\rho {\omega }_{i}\underset{j=1}{\overset{n}{\sum }}{\stackrel{˜}{D}}_{ij}\left(\nabla {x}_{j}+\left({x}_{j}-{\omega }_{j}\right)\frac{\nabla p}{p}\right)-{D}_{i}^{T}\frac{\nabla T}{T}\right)
{\text{CH}}_{\text{3}}{\text{OCH}}_{3}+{\text{H}}_{\text{2}}\text{O}⇔2{\text{CH}}_{\text{3}}\text{OH}
{\text{CH}}_{\text{3}}\text{OH}+{\text{H}}_{\text{2}}\text{O}⇔3{\text{H}}_{2}+{\text{CO}}_{2}
\text{CO}+{\text{H}}_{\text{2}}\text{O}⇔{\text{CO}}_{2}+{\text{H}}_{2}
{R}_{2}=\left(1-\epsilon \right){\rho }_{s}{k}_{R}{C}_{{\text{CH}}_{\text{3}}\text{OH}}
where {\rho }_{s} is the catalyst density, {k}_{R} the rate constant of the methanol steam reforming reaction, {C}_{{\text{CH}}_{\text{3}}\text{OH}} the methanol concentration, and \epsilon the porosity of the catalyst bed.
{R}_{4}={C}_{\text{WGS}}{k}_{\text{WGS}}\left({p}_{\text{CO}}{p}_{{\text{H}}_{\text{2}}\text{O}}-{p}_{{\text{CO}}_{2}}{p}_{{\text{H}}_{\text{2}}}/{K}_{eq}\right)
where {k}_{\text{WGS}} is the rate constant of the water-gas shift reaction and {K}_{eq} its equilibrium constant.
K=A{e}^{-\frac{E}{RT}}
{k}_{1}={k}_{m1}\mathrm{exp}\left(\frac{{E}_{A1}}{R}\left(\frac{1}{{T}_{m}}-\frac{1}{T}\right)\right)
A={k}_{m1}\mathrm{exp}\left(\frac{{E}_{A1}}{R{T}_{m}}\right)
{K}_{j}={K}_{mj}\mathrm{exp}\left(\frac{\Delta {\text{H}}_{\text{adsj}}}{R}\left(\frac{1}{{T}_{m}}-\frac{1}{T}\right)\right)
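To make the kinetics concrete, here is a small sketch of the mean-centered Arrhenius form shown above, which evaluates k(T) relative to a reference temperature T_m and thereby sidesteps the very large pre-exponential factor A. The numerical values of the activation energy, reference temperature, and reference rate constant are assumed for illustration, not the paper's fitted parameters.

```python
import math

R = 8.314        # gas constant, J/(mol K)
E_a = 90_000.0   # activation energy, J/mol (assumed)
T_m = 550.0      # reference temperature, K (assumed)
k_m = 2.0e-3     # rate constant at T_m (assumed)

def k_centered(T):
    """k(T) = k_m * exp((E_a/R) * (1/T_m - 1/T)), the mean-centered form."""
    return k_m * math.exp((E_a / R) * (1.0 / T_m - 1.0 / T))

def k_standard(T):
    """Equivalent k = A * exp(-E_a/(R*T)) with A = k_m * exp(E_a/(R*T_m))."""
    A = k_m * math.exp(E_a / (R * T_m))
    return A * math.exp(-E_a / (R * T))
```

Both forms agree at every temperature; the centered form is numerically better behaved when E_a/(R*T_m) is large, which is the usual reason for this re-parameterization in kinetic fitting.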
-\frac{\kappa }{\eta }\nabla {p}_{sr}\cdot n=0
n\cdot \left({k}_{sr}\nabla {T}_{sr}\right)=0
n\cdot \left(\left(-\rho {\omega }_{i}{\sum }_{j=1}^{n}{D}_{ij}\left(\nabla {x}_{j}+\left({x}_{j}-{\omega }_{j}\right)\frac{\nabla p}{p}\right)\right)-{D}^{T}\frac{\nabla T}{T}\right)=0
{X}_{\text{DME}}=\frac{{F}_{\text{DME,in}}-{F}_{\text{DME,out}}}{{F}_{\text{DME,in}}}\times 100\%
Y=\frac{{F}_{i}}{{F}_{0}\cdot {v}_{i}}
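The conversion and yield definitions above can be sketched directly. The flow-rate values below are illustrative only, and the stoichiometric coefficient of 6 mol H2 per mol DME assumes the overall reaction CH3OCH3 + 3 H2O → 6 H2 + 2 CO2 obtained by summing the three reactions listed earlier.

```python
def dme_conversion(f_dme_in, f_dme_out):
    """X_DME = (F_in - F_out) / F_in * 100%, from the definition above."""
    return (f_dme_in - f_dme_out) / f_dme_in * 100.0

def h2_yield(f_h2, f_dme_in, nu=6):
    """Y = F_i / (F_0 * nu_i): fraction of the stoichiometric maximum H2."""
    return f_h2 / (f_dme_in * nu)

# Illustrative molar flow rates (mol/s), not simulation results.
x_dme = dme_conversion(1.0, 0.2)  # 80% of the DME fed has reacted
y_h2 = h2_yield(4.2, 1.0)         # 4.2 mol H2 per mol DME fed -> 70% of max
```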
Guo, L. and Li, C. (2018) Parameter Analysis and Numerical Simulation of Hydrogen Production by Steam Reforming of Dimethyl Ether. Journal of Power and Energy Engineering, 6, 1-11. https://doi.org/10.4236/jpee.2018.611001
1. Nicoletti, G., Arcuri, N., Bruno, R., et al. (2015) A Technical and Environmental Comparison between Hydrogen and Some Fossil Fuels. Energy Conversion and Management, 89, 205-213. https://doi.org/10.1016/j.enconman.2014.09.057
2. Yuan, X.Z., Li, H., Zhang, S.S., Martin, J., Wang, H.J., et al. (2011) A Review of Polymer Electrolyte Membrane Fuel Cell Durability Test Protocols. Journal of Power Sources, 196, 9107-9116. https://doi.org/10.1016/j.jpowsour.2011.07.082
3. Isono, T., Suzuki, S., Kaneko, M., et al. (2000) Development of a High-Performance PEFC Module Operated by Reformed Gas. Journal of Power Sources, 86, 269-273. https://doi.org/10.1016/S0378-7753(99)00441-3
4. Hoang, D.L. and Chan, S.H. (2004) Modeling of a Catalytic Autothermal Methane Reformer for Fuel Cell Applications. Applied Catalysis A: General, 268, 207-216. https://doi.org/10.1016/j.apcata.2004.03.056
5. Li, C., Gao, Y., Wu, C.S., et al. (2015) Modeling and Simulation of Hydrogen Production from Dimethyl Ether Steam Reforming Using Exhaust Gas. International Journal of Energy Research, 39, 1272-1279. https://doi.org/10.1002/er.3330
6. Feng, D.M., Wang, Y., Wang, D., Wang, J., et al. (2009) Steam Reforming of Dimethyl Ether over CuO-ZnO-Al2O3-ZrO2 + ZSM-5: A Kinetic Study. Chemical Engineering Journal, 146, 477-485. https://doi.org/10.1016/j.cej.2008.11.005
7. Choi, S. and Bae, J. (2016) Autothermal Reforming of Dimethyl Ether with CGO-Based Precious Metal Catalysts. Journal of Power Sources, 307, 351-357. https://doi.org/10.1016/j.jpowsour.2015.12.068
8. Creaser, D., Nilsson, M., Pettersson, L.J., Dawody, J., et al. (2010) Kinetic Modeling of Autothermal Reforming of Dimethyl Ether. Industrial & Engineering Chemistry Research, 49, 9712-9719. https://doi.org/10.1021/ie100834v
9. Akbari, M.H., Ardakani, A.H.S., Tadbir, M.A., et al. (2011) A Microreactor Modeling, Analysis and Optimization for Methane Autothermal Reforming in Fuel Cell Applications. Chemical Engineering Journal, 166, 1116-1125. https://doi.org/10.1016/j.cej.2010.12.044
10. Elewuwa, F.A. and Makkawi, Y.T. (2015) Hydrogen Production by Steam Reforming of DME in a Large Scale CFB Reactor. Part I: Computational Model and Predictions. International Journal of Hydrogen Energy, 40, 15865-15876. ttps://doi.org/10.1016/j.ijhydene.2015.10.050
11. Elewuwa, F.A. and Makkawi, Y.T. (2016) A Computational Model of Hydrogen Production by Steam Reforming of Dimethyl Ether in a Large Scale CFB Reactor. Part II: Parametric Analysis. International Journal of Hydrogen Energy, 41, 19819-19828. https://doi.org/10.1016/j.ijhydene.2016.08.072
12. Yan, C.F., Ye, W., Guo, C.Q., Huang, S.L., Li, W.B., Luo, W.M., et al. (2014) Numerical Simulation and Experimental Study of Hydrogen Production from Dimethyl Ether Steam Reforming in a micro-Reactor. International Journal of Hydrogen Energy, 39, 18642-18649. https://doi.org/10.1016/j.ijhydene.2014.02.133
13. Yan, C.F., Hai, H., Hu, R.R., Guo, C.Q., Huang, S.L., Li, W.B., Wen, Y., et al. (2014) Effect of Cr Promoter on Performance of Steam Reforming of Dimethyl Ether in a Metal Foam Micro-Reactor. International Journal of Hydrogen Energy, 39, 18625-18631. https://doi.org/10.1016/j.ijhydene.2014.02.152
14. Moharana, M.K., Peela, N.R., Khandekar, S., Kunzru, D., et al. (2011) Distributed Hydrogen Production from Ethanol in a Microfuel Processor: Issues and Challenges. Renewable and Sustainable Energy Reviews, 15, 524-533. https://doi.org/10.1016/j.rser.2010.08.011
15. Gateau, P. (2007) Design of Reactors and Heat Exchange Systems to Optimize a Fuel Cell Reformer. Proceedings of the COMSOL User’s Conference Grenoble.
|
Attribute Examples [Isabelle/HOL Support Wiki]
This page contains usage examples for attributes (or directives).
Consider rule/theorem mp:
\quad [|\ ?P \longrightarrow ?Q; ?P\ |] \Longrightarrow ?Q
mp [of _ "C ∨ D"]
yields the more specific theorem
\quad [|\ ?P \longrightarrow C \vee D; ?P\ |] \Longrightarrow C \vee D
where C, D are no longer schematic variables but specific terms.
\quad [|\ ?P \longrightarrow ?Q; ?P\ |] \Longrightarrow ?Q
mp [where Q="C ∨ D"]
\quad [|\ ?P \longrightarrow C \vee D; ?P\ |] \Longrightarrow C \vee D
where again C, D are specific terms rather than schematic variables.
Consider the “composite” theorem
\quad t : (?A \vee \neg ?A) \wedge (\text{False} \longrightarrow ?A)
t [THEN conjunct1]
yields the new theorem
\quad ?A \vee \neg ?A
Consider rule subst:
\quad [|\ ?s = ?t; ?P ?s\ |]\quad \Longrightarrow\quad ?P ?t
Note that you can use this rule only to substitute from left to right, not the reverse. Using OF together with theorem sym (
?s = ?t\ \Longrightarrow\ ?t = ?s
),
subst [OF sym]
yields
\quad [|\ ?t = ?s; ?P ?s\ |]\quad \Longrightarrow\quad ?P ?t
which allows substitutions from right to left.
Consider theorem Nat.le_square:
\quad ?m \leq\ ?m\ \cdot\ ?m
Nat.le_square [simplified]
\quad 0 <\ ?m \longrightarrow Suc\ 0 \leq\ ?m
Whether or not this is actually “simpler” is certainly open to discussion, especially because the original form does say something about
?m=0
while the simplified version does not.
Consider rule disjE:
\quad [|\ ?P\;\vee\; ?Q; ?P \Longrightarrow ?R; ?Q \Longrightarrow ?R\ |] \Longrightarrow ?R
disjE [rotated 2]
\quad [|\ ?Q \Longrightarrow ?R; ?P\;\vee\; ?Q; ?P \Longrightarrow ?R\ |] \Longrightarrow ?R
Consider the definition of set membership Set.mem_def,
\quad (?x \in ?A) = ?A ?x
Set.mem_def [symmetric]
\quad ?X ?x = (?x \in ?X)
reference/attribute_examples.txt · Last modified: 2011/07/25 12:32 by 131.246.41.159
|
Combination (mathematics) - Simple English Wikipedia, the free encyclopedia
All possibilities of picking three objects of a set of five.
In mathematics, combination is used for picking a number of objects from a given set of objects. Combinatorics looks at the number of ways to pick k objects from a set of n. It does not take into account the order in which these are picked. People talk about permutations if the order in which the objects are picked matters. This explains why there are more permutations than combinations. For example, the single combination of objects 1, 3 and 5 corresponds to six different ordered pickings: (1,3,5), (1,5,3), (3,1,5), (3,5,1), (5,1,3) and (5,3,1).
In general, the number of combinations of k objects from n objects, written as
{\displaystyle nCk}
or
{\displaystyle {\tbinom {n}{k}}}
,[1] is equal to
{\displaystyle {\tfrac {n!}{k!(n-k)!}}}
, where "!" stands for the factorial notation. This number is also known as the binomial coefficient.[2][3]
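As a quick sanity check, the count can be computed directly from the factorial formula; the sketch below uses Python's standard library (`math.comb` requires Python 3.8+, and the function name is our own):

```python
import math

def combinations(n: int, k: int) -> int:
    """Number of ways to choose k of n objects when order does not matter."""
    return math.factorial(n) // (math.factorial(k) * math.factorial(n - k))

print(combinations(5, 3))   # 10
print(math.comb(5, 3))      # 10, same result via the standard library
print(math.factorial(3))    # 6 orderings of any one chosen triple
```

Picking 3 of 5 objects gives 10 combinations, and each combination can be ordered in 3! = 6 ways, matching the six listings above.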
↑ Weisstein, Eric W. "Combination". mathworld.wolfram.com. Retrieved 2020-09-10.
|
Strangeness Knowpia
In particle physics, strangeness ("S")[1][2] is a property of particles, expressed as a quantum number, for describing decay of particles in strong and electromagnetic interactions which occur in a short period of time. The strangeness of a particle is defined as:
{\displaystyle S=-(n_{s}-n_{\bar {s}})}
where
{\displaystyle n_{s}}
represents the number of strange quarks and
{\displaystyle n_{\bar {s}}}
represents the number of strange antiquarks.
). Evaluation of strangeness production has become an important tool in search, discovery, observation and interpretation of quark–gluon plasma (QGP).[3] Strangeness is an excited state of matter and its decay is governed by CKM mixing.
The terms strange and strangeness predate the discovery of the quark, and were adopted after its discovery in order to preserve the continuity of the phrase; strangeness of anti-particles being referred to as +1, and particles as −1 as per the original definition. For all the quark flavour quantum numbers (strangeness, charm, topness and bottomness) the convention is that the flavour charge and the electric charge of a quark have the same sign. With this, any flavour carried by a charged meson has the same sign as its charge.
Strangeness was introduced by Murray Gell-Mann,[4] Abraham Pais,[5][6] Tadao Nakano and Kazuhiko Nishijima[7] to explain the fact that certain particles, such as the kaons or the hyperons
, were created easily in particle collisions, yet decayed much more slowly than expected for their large masses and large production cross sections. Noting that collisions seemed to always produce pairs of these particles, it was postulated that a new conserved quantity, dubbed "strangeness", was preserved during their creation, but not conserved in their decay.[8]
In our modern understanding, strangeness is conserved during the strong and the electromagnetic interactions, but not during the weak interactions. Consequently, the lightest particles containing a strange quark cannot decay by the strong interaction, and must instead decay via the much slower weak interaction. In most cases these decays change the value of the strangeness by one unit. However, this doesn't necessarily hold in second-order weak reactions, where there are mixes of
{\displaystyle K^{0}}
and
{\displaystyle {\bar {K}}^{0}}
mesons. All in all, the amount of strangeness can change in a weak interaction reaction by +1, 0, or −1 (depending on the reaction).
For example, the interaction of a K− meson with a proton is represented as:
{\displaystyle K^{-}+p\rightarrow \Xi ^{0}+K^{0}}
{\displaystyle (-1)+(0)\rightarrow (-2)+(1)}
Here strangeness is conserved and the interaction proceeds via the strong nuclear force.[9]
However, in reactions like the decay of the positive kaon:
{\displaystyle K^{+}\rightarrow \pi ^{+}+\pi ^{0}}
{\displaystyle (+1)\rightarrow (0)+(0)}
Since both pions have a strangeness of 0, this violates conservation of strangeness, meaning the reaction must go via the weak force.[9]
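This bookkeeping is easy to check mechanically. The sketch below is illustrative only: it hard-codes the standard strangeness assignments for the handful of hadrons mentioned in this article and computes the change in total strangeness for a reaction:

```python
# Illustrative bookkeeping only: strangeness assignments for the hadrons
# mentioned in this article (standard quark-model values).
S = {"K-": -1, "K+": +1, "K0": +1, "p": 0, "Xi0": -2, "pi+": 0, "pi0": 0}

def delta_S(initial, final):
    """Change in total strangeness from the initial to the final state."""
    return sum(S[p] for p in final) - sum(S[p] for p in initial)

print(delta_S(["K-", "p"], ["Xi0", "K0"]))  # 0: conserved, strong interaction allowed
print(delta_S(["K+"], ["pi+", "pi0"]))      # -1: changes by one unit, weak decay only
```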
^ Jacob, Maurice (1992). The Quark Structure of Matter. World Scientific Lecture Notes in Physics. Vol. 50. World Scientific. doi:10.1142/1653. ISBN 978-981-02-0962-9.
^ Tanabashi, M.; Hagiwara, K.; Hikasa, K.; Nakamura, K.; Sumino, Y.; Takahashi, F.; Tanaka, J.; Agashe, K.; Aielli, G.; Amsler, C.; Antonelli, M. (2018-08-17). "Review of Particle Physics". Physical Review D. 98 (3): 030001. Bibcode:2018PhRvD..98c0001T. doi:10.1103/PhysRevD.98.030001. ISSN 2470-0010. PMID 10020536. pages 1188 (Mesons), 1716 ff (Baryons)
^ Gell-Mann, M. (1953-11-01). "Isotopic Spin and New Unstable Particles". Physical Review. 92 (3): 833–834. Bibcode:1953PhRv...92..833G. doi:10.1103/PhysRev.92.833. ISSN 0031-899X.
^ Pais, A. (1952-06-01). "Some Remarks on the V -Particles". Physical Review. 86 (5): 663–672. Bibcode:1952PhRv...86..663P. doi:10.1103/PhysRev.86.663. ISSN 0031-899X.
^ Pais, A. (October 1953). "On the Baryon-meson-photon System". Progress of Theoretical Physics. 10 (4): 457–469. Bibcode:1953PThPh..10..457P. doi:10.1143/PTP.10.457. ISSN 0033-068X.
^ Nakano, Tadao; Nishijima, Kazuhiko (November 1953). "Charge Independence for V -particles". Progress of Theoretical Physics. 10 (5): 581–582. Bibcode:1953PThPh..10..581N. doi:10.1143/PTP.10.581. ISSN 0033-068X.
^ Griffiths, David J. (David Jeffery), 1942- (1987). Introduction to elementary particles. New York: Wiley. ISBN 0-471-60386-4. OCLC 19468842. {{cite book}}: CS1 maint: multiple names: authors list (link)
^ a b "The Nobel Prize in Physics 1968". NobelPrize.org. Retrieved 2020-03-15.
|
15 July 2006 Finiteness of rigid cohomology with coefficients
Kiran S. Kedlaya1
1Department of Mathematics, Room 2-165, Massachusetts Institute of Technology
We prove that for any field k of characteristic p>0, any separated scheme X of finite type over k, and any overconvergent F-isocrystal \mathcal{E} on X, the rigid cohomology {H}_{\mathrm{rig}}^{i}\left(X,\mathcal{E}\right) and the rigid cohomology with compact supports {H}_{c,\mathrm{rig}}^{i}\left(X,\mathcal{E}\right) are finite-dimensional vector spaces over an appropriate p-adic field. We also establish Poincaré duality and the Künneth formula with coefficients. The arguments use a pushforward construction in relative dimension 1, based on a relative version of Crew's [Cr] conjecture on the quasi-unipotence of certain p-adic differential equations.
Kiran S. Kedlaya. "Finiteness of rigid cohomology with coefficients." Duke Math. J. 134 (1) 15 - 97, 15 July 2006. https://doi.org/10.1215/S0012-7094-06-13412-9
|
15 August 2018 Rigidity of critical circle maps
Pablo Guarino, Marco Martens, Welington de Melo
Duke Math. J. 167(11): 2125-2188 (15 August 2018). DOI: 10.1215/00127094-2018-0017
We prove that any two {C}^{4} critical circle maps with the same irrational rotation number and the same odd criticality are conjugate to each other by a {C}^{1} circle diffeomorphism. The conjugacy is {C}^{1+\alpha } for a full Lebesgue measure set of rotation numbers.
Pablo Guarino. Marco Martens. Welington de Melo. "Rigidity of critical circle maps." Duke Math. J. 167 (11) 2125 - 2188, 15 August 2018. https://doi.org/10.1215/00127094-2018-0017
Received: 15 December 2016; Revised: 19 March 2018; Published: 15 August 2018
Keywords: commuting pairs , critical circle maps , renormalization , smooth rigidity
|
Effective Annual Interest Rate Definition
A savings account or a loan offer may be advertised with its nominal interest rate as well as its effective annual interest rate. The nominal interest rate does not reflect the effects of compounding interest or even the fees that come with these financial products. The effective annual interest rate is the real return.
That's why the effective annual interest rate is an important financial concept to understand. You can compare various offers accurately only if you know the effective annual interest rate of each one.
Example of Effective Annual Interest Rate
Consider these two offers: Investment A pays 10% interest, compounded monthly. Investment B pays 10.1% compounded semiannually. Which is the better offer?
In both cases, the advertised interest rate is the nominal interest rate. The effective annual interest rate is calculated by adjusting the nominal interest rate for the number of compounding periods the financial product will undergo in a period of time. In this case, that period is one year. The formula and calculations are as follows:
Effective annual interest rate = (1 + (nominal rate / number of compounding periods)) ^ (number of compounding periods) - 1
For investment A, this would be: 10.47% = (1 + (10% / 12)) ^ 12 - 1
And for investment B, it would be: 10.36% = (1 + (10.1% / 2)) ^ 2 - 1
Investment B has a higher stated nominal interest rate, but the effective annual interest rate is lower than the effective rate for investment A. This is because Investment B compounds fewer times over the course of the year. If an investor were to put, say, $5 million into one of these investments, the wrong decision would cost more than $5,800 per year.
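The comparison above can be reproduced with a few lines of Python (the function and variable names are our own):

```python
def effective_annual_rate(nominal: float, periods: int) -> float:
    """EAR = (1 + nominal/n)**n - 1 for n compounding periods per year."""
    return (1 + nominal / periods) ** periods - 1

ear_a = effective_annual_rate(0.10, 12)    # Investment A: 10% compounded monthly
ear_b = effective_annual_rate(0.101, 2)    # Investment B: 10.1% compounded semiannually

print(f"A: {ear_a:.4%}  B: {ear_b:.4%}")   # A: 10.4713%  B: 10.3550%
print(f"$5M gap: ${5_000_000 * (ear_a - ear_b):,.0f} per year")
```

Despite its lower nominal rate, Investment A comes out ahead, and the gap on $5 million is just over $5,800 per year.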
As the number of compounding periods increases, so does the effective annual interest rate. Quarterly compounding produces higher returns than semiannual compounding, monthly compounding produces higher returns than quarterly, and daily compounding produces higher returns than monthly. Below is a breakdown of the results of these different compound periods with a 10% nominal interest rate:
Semiannual = 10.250%
Quarterly = 10.381%
Monthly = 10.471%
Daily = 10.516%
The limits to compounding
There is a ceiling to the compounding phenomenon. Even if compounding occurs an infinite number of times—not just every second or microsecond but continuously—the limit of compounding is reached.
With 10%, the continuously compounded effective annual interest rate is 10.517%. The continuous rate is calculated by raising the number "e" (approximately equal to 2.71828) to the power of the interest rate and subtracting one. In this example, it would be 2.71828 ^ (0.1) - 1.
How Do You Calculate the Effective Annual Interest Rate?
The effective annual interest rate is calculated using the following formula:
\begin{aligned} &Effective\ Annual\ Interest\ Rate=\left ( 1+\frac{i}{n} \right )^n-1\\ &\textbf{where:}\\ &i=\text{Nominal interest rate}\\ &n=\text{Number of periods}\\ \end{aligned}
Although it can be done by hand, most investors will use a financial calculator, spreadsheet, or online program. Moreover, investment websites and other financial resources regularly publish the effective annual interest rate of a loan or investment. This figure is also often included in the prospectus and marketing documents prepared by the security issuers.
A nominal interest rate does not take into account any fees or compounding of interest. It is often the rate that is stated by financial institutions.
Compound interest is calculated on the initial principal and also includes all of the accumulated interest from previous periods on a loan or deposit. The number of compounding periods makes a significant difference when calculating compound interest.
Corporate Finance Institute. "Effective Annual Interest Rate."
Federal Reserve Bank of St. Louis. "How Does Compound Interest Work?"
A stated annual interest rate is the return on an investment (ROI) that is expressed as a per-year percentage.
|
You'd be hard-pressed to find a trader who has never heard of John Bollinger and his namesake bands. Most charting programs include Bollinger Bands®. Although these bands are some of the most useful technical indicators if applied properly, they are also among the least understood. One good way to get a handle on how the bands function is to read the book "Bollinger on Bollinger Bands®," in which the man himself explains it all.
According to Bollinger, there's one pattern that raises more questions than any other aspect of Bollinger Bands®. He calls it "the Squeeze." As he puts it, his bands, "are driven by volatility, and the Squeeze is a pure reflection of that volatility."
Here we look at the Squeeze and how it can help you identify breakouts.
A Bollinger Band®, as we mentioned above, is a tool used in technical analysis. It is defined by a series of lines that are plotted two standard deviations—both positively and negatively—away from the simple moving average (SMA) of the price of a security.
Bollinger Bands® identify a stock's high and low volatility points. While it can be a real challenge to forecast future prices and price cycles, volatility changes and cycles are relatively easy to identify. This is because equities alternate between periods of low volatility and high volatility—much like the calm before the storm and the inevitable activity afterward.
Here is the Squeeze equation:
\begin{aligned} &\text{BBW}=\frac{\text{TBP}-\text{BBP}}{\text{SMAC}}\\ &\textbf{where:}\\ &\text{BBW}=\text{Bollinger Band® width}\\ &\text{TBP}=\text{top Bollinger Band® (the top 20 periods)}\\ &\text{BBP}=\text{bottom Bollinger Band® (the bottom 20 periods)}\\ &\text{SMAC}=\text{simple moving average close (the middle 20 periods)}\\ \end{aligned}
When Bollinger Bands® are far apart, volatility is high. When they are close together, it is low. A Squeeze is triggered when volatility reaches a six-month low and is identified when Bollinger Bands® reach a six-month minimum distance apart.
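A minimal sketch of the bandwidth calculation in plain Python, assuming the common 20-period, two-standard-deviation parameterization (a Squeeze scan would then look for a six-month minimum of this value; the function name is illustrative):

```python
from statistics import mean, pstdev

def bollinger_bandwidth(closes, window=20, k=2.0):
    """BBW = (top band - bottom band) / SMA over the most recent window."""
    recent = closes[-window:]
    sma = mean(recent)                   # SMAC: simple moving average of closes
    sigma = pstdev(recent)               # population standard deviation of closes
    top, bottom = sma + k * sigma, sma - k * sigma
    return (top - bottom) / sma          # equals 2*k*sigma / sma

# Perfectly flat prices have zero bandwidth; a real Squeeze scan compares the
# current value against its six-month minimum rather than an absolute level.
print(bollinger_bandwidth([100.0] * 20))   # 0.0
```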
Determining Breakout Direction
The next step—deciding which way stocks will go once they break out—is somewhat more challenging. To determine breakout direction, Bollinger suggests that it is necessary to look to other indicators. He suggests using the relative strength index (RSI) along with one or two volume-based indicators such as the intraday intensity index (developed by David Bostian) or the accumulation/distribution index (developed by Larry William).
If there is a positive divergence—that is, if indicators are heading upward while price is heading down or neutral—it is a bullish sign. For further confirmation, look for volume to build on up days. On the other hand, if price is moving higher but the indicators are showing negative divergence, look for a downside breakout—especially if there have been increasing volume spikes on down days.
Another indication of breakout direction is the way the bands move on expansion. When a powerful trend is born, the resulting explosive volatility increase is often so great that the lower band will turn downward in an upside break, or the upper band will turn higher in a downside breakout.
See Figure 1 below, showing a Squeeze pattern setting up in the year leading up to a KB Home (KBH) breakout. Bandwidth reaches a minimum distance apart in May (indicated by the blue arrow in window 2), followed by an explosive breakout to the upside. Note the increasing relative strength index (shown in window 1), along with increasing intraday intensity (the red histogram in window 2) and the accumulation/distribution index (the green line in window 2), both of which (demonstrated by line A) are showing positive divergence with price (demonstrated by line B). Note the volume build that occurred beginning in mid-April through July.
A third condition to look out for is something Bollinger calls a "head fake." It is not unusual for a security to turn in one direction immediately after the Squeeze, as if to trick traders into thinking the breakout will occur in that direction, only to reverse course and make the true and more significant move in the opposite direction. Traders who act quickly on the breakout get caught offside, which can prove extremely costly if they do not use stop-losses. Those expecting the head fake can quickly cover their original position and enter a trade in the direction of the reversal.
In Figure 2, Amazon appeared to be giving a Squeeze setup in early February. Bollinger Bands® were at a minimum distance apart, which had not been seen for at least a year, and there is a six-month low bandwidth (see line A in window II). There is negative divergence between the RSI (line 1 of window I), the intraday intensity (line 2 of window II), the accumulation/distribution index (line 3 of window II), and price (line 4 of window III), all of which point to a downward breakout.
A Squeeze candidate is identified when the bandwidth is at a six-month low value.
Volume shows above-normal values on downside price moves, breaking above its 50-day moving average (the orange line in the lower volume window) on drops in the stock price and suggesting a build-up in selling pressure. Finally, the long-term trendline is breached to the downside in the first week of February. A downside breakout would be confirmed by a penetration of the long-term support line (line 5 of window III) and a continued increase in volume on downside moves.
The challenge lies in the fact that the stock had demonstrated a strong uptrend, and one pillar of technical analysis is that the dominant trend will continue until an equal or greater force operates in the opposite direction. This means the stock could very well make a head fake down through the trendline, then immediately reverse and break out to the upside. It could also fake out to the upside and break down. While it looks set to break out to the downside along with a trend reversal, one must await confirmation that a trend reversal has taken place and, in case there is a fake-out, be ready to change trade direction at a moment's notice.
Just like any other strategy, the Bollinger Squeeze shouldn't be the be-all and end-all of your trading career. Remember, like everything else in the investment world, it does have its limitations. If you follow it too closely and don't consider the risks—and limit them—you could stand to lose. Do your research, take care of your capital, and know when you should make an exit point, if necessary.
The Squeeze relies on the premise that stocks constantly experience periods of high volatility followed by low volatility. Equities that are at six-month low levels of volatility, as demonstrated by the narrow distance between Bollinger Bands®, generally demonstrate explosive breakouts. By using non-collinear indicators, an investor or trader can determine in which direction the stock is most likely to move in the ensuing breakout. With a little practice using your favorite charting program, you should find the Squeeze a welcome addition to your bag of trading tricks.
James Chen. "Essentials of Foreign Exchange Trading," Page 91. John Wiley & Sons, 2009.
Bollinger Bands®. "Bollinger Bands Rules." Accessed April 23, 2020.
Multicharts. "Williams Accumulation - Distribution." Accessed April 23, 2020.
Buff Pelz Dormeier. "Investing with Volume Analysis: Identify, Follow, and Profit from Trends," Page 1 of Chapter 12. FT Press, 2011.
|
specified rootof - Maple Help
Home : Support : Online Help : Mathematics : Numbers : Type Checking : specified rootof
check for a RootOf data structure that is a specific root of an expression
type(expr, specified_rootof)
The type(expr, specified_rootof) function returns true if expr is a specific root of an expression that uses a RootOf data structure. Otherwise, false is returned.
1. RootOf(expr, index = n) where expr is an algebraic expression, and n is a positive integer
Specifies the nth root of a rational polynomial.
2. RootOf(expr, c) where expr is an algebraic expression, and c is a constant
Specifies the closest root to the point c.
If extra arguments are included in the call to type(expr, specified_rootof), they are ignored.
\mathrm{type}\left(\mathrm{RootOf}\left({x}^{2}-4x+71\right),\mathrm{specified_rootof}\right)
\textcolor[rgb]{0,0,1}{\mathrm{false}}
\mathrm{type}\left(\mathrm{RootOf}\left({x}^{2}-4x+71,\mathrm{index}=1\right),\mathrm{specified_rootof}\right)
\textcolor[rgb]{0,0,1}{\mathrm{true}}
\mathrm{type}\left(\mathrm{RootOf}\left({x}^{2}-4x+7,\mathrm{label}=A\right),\mathrm{specified_rootof}\right)
\textcolor[rgb]{0,0,1}{\mathrm{false}}
\mathrm{type}\left(\mathrm{RootOf}\left({x}^{2}-4x+7,2.+1.732050808I\right),\mathrm{specified_rootof}\right)
\textcolor[rgb]{0,0,1}{\mathrm{true}}
|
Error, (in dsolve/numeric/process_input) input system must be an ODE system, got independent variables {x, y} - Maple Help
Home : Support : Online Help : Error, (in dsolve/numeric/process_input) input system must be an ODE system, got independent variables {x, y}
This error occurs when the system passed to dsolve/numeric contains more than one independent variable, for example an ODE for
f\left(x\right)
whose right-hand side involves
g\left(y\right)
:
\mathrm{dsolve}\left(\left\{\mathrm{diff}\left(f\left(x\right),x\right)=g\left(y\right)\right\},\mathrm{numeric}\right);\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}
\mathrm{dsolve}\left(\left\{\mathrm{diff}\left(f\left(x\right),x\right)=f\left(x\right),\mathrm{diff}\left(g\left(y\right),y\right)=-g\left(y\right),f\left(0\right)=1,g\left(0\right)=-1\right\},\mathrm{numeric}\right);
Since the equations are uncoupled, each ODE can instead be solved as a separate problem:
\mathrm{dsolve}\left(\left\{\mathrm{diff}\left(f\left(x\right),x\right)=f\left(x\right),f\left(0\right)=1\right\},\mathrm{numeric}\right),\mathrm{dsolve}\left(\left\{\mathrm{diff}\left(g\left(y\right),y\right)=-g\left(y\right),g\left(0\right)=-1\right\},\mathrm{numeric}\right); \phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}
\textcolor[rgb]{0,0,1}{\mathbf{proc}}\left(\textcolor[rgb]{0,0,1}{\mathrm{x_rkf45}}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\textcolor[rgb]{0,0,1}{...}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\textcolor[rgb]{0,0,1}{\mathbf{end proc}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathbf{proc}}\left(\textcolor[rgb]{0,0,1}{\mathrm{x_rkf45}}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\textcolor[rgb]{0,0,1}{...}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\textcolor[rgb]{0,0,1}{\mathbf{end proc}}
Alternatively, if 'g' was intended to be a function of x, rewrite the system so that x is the only independent variable:
\mathrm{dsolve}\left(\left\{\mathrm{diff}\left(f\left(x\right),x\right)=f\left(x\right),\mathrm{diff}\left(g\left(x\right),x\right)=-g\left(x\right),f\left(0\right)=1,g\left(0\right)=-1\right\},\mathrm{numeric}\right); \phantom{\rule[-0.0ex]{0.0em}{0.0ex}}
\textcolor[rgb]{0,0,1}{\mathbf{proc}}\left(\textcolor[rgb]{0,0,1}{\mathrm{x_rkf45}}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\textcolor[rgb]{0,0,1}{...}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\textcolor[rgb]{0,0,1}{\mathbf{end proc}}
|
15 September 2019 Fourier transform on high-dimensional unitary groups with applications to random tilings
Alexey Bufetov, Vadim Gorin
A combination of direct and inverse Fourier transforms on the unitary group U\left(N\right) identifies normalized characters with probability measures on N-tuples of integers. We develop the N\to \infty version of this correspondence by matching the asymptotics of partial derivatives at the identity of the logarithm of characters with the law of large numbers and the central limit theorem for the global behavior of the corresponding random N-tuples.
As one application we study fluctuations of the height function of random domino and lozenge tilings of a rich class of domains. In this direction we prove the Kenyon–Okounkov conjecture (which predicts asymptotic Gaussianity and the exact form of the covariance) for a family of non-simply-connected polygons.
Another application is a central limit theorem for the U\left(N\right) quantum random walk with random initial data.
Alexey Bufetov. Vadim Gorin. "Fourier transform on high-dimensional unitary groups with applications to random tilings." Duke Math. J. 168 (13) 2559 - 2649, 15 September 2019. https://doi.org/10.1215/00127094-2019-0023
Received: 7 January 2018; Revised: 29 January 2019; Published: 15 September 2019
Secondary: 22E65 , 60K35
Keywords: Asymptotic representation theory , noncommutative Fourier transform , random tilings
|
frule [Isabelle/HOL Support Wiki]
frule is a proof method. It applies a given rule in a forward manner, if possible. Assume the current subgoal is
\quad\bigwedge x_1 \dots x_k : [|\ A_1; \dots ; A_m\ |] \Longrightarrow C
and we want to use frule with rule
\quad[|\ P_1; \dots ; P_n\ |] \Longrightarrow Q
Then, frule does the following: unify P_1 with some assumption A_j for some j. The application fails if there is no unifier for any j; otherwise, let U be this unifier.
Remove the old subgoal and create new ones:
\quad\bigwedge x_1 \dots x_k : [|\ U(A_1); \dots ; U(A_m)\ ; U(Q) |] \Longrightarrow U(C)
\quad\bigwedge x_1 \dots x_k : [|\ U(A_1); \dots ; U(A_m)\ |] \Longrightarrow U(P_k)
for k = 2, \dots, n.
Note that frule is almost the same as drule. The only difference is that frule keeps the assumption A_j used by the rule, while drule drops it. Keeping A_j is useful if it is needed to prove the new subgoals; dropping it is what makes drule unsafe. However, frule tends to clutter your assumption set unnecessarily if A_j is no longer needed.
As an example, consider the goal
\quad[|\ A \longrightarrow B; A\ |] \Longrightarrow B
Applying apply (frule mp) yields the new goals
\quad [|\ A \longrightarrow B; A\ |] \Longrightarrow A
\quad[|\ A \longrightarrow B; A; B\ |] \Longrightarrow B
which can both be solved by Assumption. Note that apply (frule(2) mp) is a shortcut for this and immediately solves the goal.
frule_tac
With frule_tac, you can force schematic variables in the used rule to take specific values. The extended syntax is:
apply (frule_tac ident1="expr1" and ident2="expr2" and ... in rule)
frule(k)
Oftentimes, a rule application results in several subgoals that can directly be solved by assumption; see above for an example. Instead of applying assumption by hand, you can apply frule(k), which forces Isabelle to apply assumption k times after the rule application.
reference/frule.txt · Last modified: 2011/06/22 12:37 by 131.246.161.187
|
Average Speed in Projectile Motion and in General Motion of a Particle
We calculate the average speed of a projectile in the absence of air resistance, a quantity that is missing from the treatment of the problem in the literature. We then show that this quantity is equal to the time-average instantaneous speed of the projectile, but different from its space-average instantaneous speed. It is then shown that this behavior is shared by general motion of all particles regardless of the dimensionality of motion and the nature of the forces involved. The equality of average speed and time-average instantaneous speed can be useful in situations where the calculation of one is more difficult than that of the other, making it more efficient to obtain one by calculating the other.
Average Speed, Time-Average, Space-Average, Instantaneous Speed, Projectile Motion, General Motion
Projectile motion, one of the simplest examples of two-dimensional motion in classical mechanics, has been extensively studied and thoroughly discussed in the literature and textbooks at different levels. Introductory treatment of projectile motion normally ignores air resistance and treats the problem using either differential and integral calculus [1] - [6] or algebra and trigonometry [7] [8] , or even without trigonometry [9] . More advanced treatments of the problem include air resistance [10] [11] [12] [13] . However, to the best of the author’s knowledge and based on extensive literature search, the average speed of a projectile during its motion has not been discussed in the literature at any level. Although this may seem trivial at first sight, it is an important concept in correlating different methods of averaging the speed of an object during its motion as will be explained below.
The average speed of a moving particle is defined as the total distance traveled L divided by the total time T,
{s}_{ave}=\frac{L}{T}
and the instantaneous speed is defined as the magnitude of the instantaneous velocity of the particle at a given time,
v=|v\left(t\right)|=|\frac{\text{d}r\left(t\right)}{\text{d}t}|
where
r\left(t\right)
is the position vector of the particle.
A question that occasionally comes up in discussions related to speed is whether the average of the instantaneous speed of an object during its motion is the same as its average speed. In general, the answer is definitely NO. For example, if an object travels half of a distance with speed
{v}_{1}
and the other half with speed
{v}_{2}
, the average of these speeds is
\left({v}_{1}+{v}_{2}\right)/2
whereas the average speed, which by definition is the total distance divided by the total time, can be shown to be given by
{s}_{ave}=\frac{2{v}_{1}{v}_{2}}{{v}_{1}+{v}_{2}}
Later on in this article we shall refer to the former average as the space-average of the speed.
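A quick numerical illustration of the difference between the two averages (the speeds and distance are arbitrary illustrative values):

```python
# Illustrative values: half the distance at v1, the other half at v2.
v1, v2 = 30.0, 60.0
d = 120.0                               # total distance (arbitrary)

t = (d / 2) / v1 + (d / 2) / v2         # total travel time
s_ave = d / t                           # average speed = total distance / total time

print(s_ave)                            # 40.0 = 2*v1*v2/(v1+v2)
print((v1 + v2) / 2)                    # 45.0, the space-average of the two speeds
```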
In what follows, we show that the space-average of instantaneous speed of a particle is not, in general, equal to its average speed, however, the time-average of instantaneous speed is always equal to the average speed regardless of the nature of the motion. But first, we find the average speed, the time-average of instantaneous speed, and the space-average of instantaneous speed for a projectile in the absence of air resistance.
2.1. Average Speed
In the absence of air resistance, the components of the equation of motion for a projectile launched with an initial speed
{v}_{0}
at an angle of elevation
{\theta }_{0}
(Figure 1) are given by
\begin{array}{l}x={v}_{0x}t,\text{ }y=-\frac{1}{2}g{t}^{2}+{v}_{0y}t\\ {v}_{x}={v}_{0x},\text{ }{v}_{y}=-gt+{v}_{0y}\end{array}
where
{v}_{0x}={v}_{0}\mathrm{cos}{\theta }_{0}
and
{v}_{0y}={v}_{0}\mathrm{sin}{\theta }_{0}
. Elimination of t between the equations for x and y results in the equation of the trajectory,
y=-\frac{g}{2{v}_{0x}^{2}}{x}^{2}+\frac{{v}_{0y}}{{v}_{0x}}x
Defining the constants of the motion,
Figure 1. Trajectory of a projectile in the absence of air resistance. The constants a and b are defined by Equation (6).
a=\frac{g}{2{v}_{0x}^{2}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{and}\text{ }\text{\hspace{0.17em}}\text{\hspace{0.17em}}b=\frac{{v}_{0y}}{{v}_{0x}}
the equation of the trajectory reduces to
y=-a{x}^{2}+bx
which is the equation of a parabola with x intercepts at x = 0 and x = b/a, as shown in Figure 1.
The element of arc length of a plane curve with equation
y=f\left(x\right)
is
\text{d}L=\sqrt{1+{\left(\frac{\text{d}y}{\text{d}x}\right)}^{2}}\text{ }\text{d}x
which for the parabolic trajectory becomes
\text{d}L=\sqrt{1+{\left(2ax-b\right)}^{2}}\text{ }\text{d}x
Therefore, the total distance traveled by the projectile during its flight (returning to the same level from which it was launched) is given by
L={\int }_{0}^{b/a}\sqrt{1+{\left(2ax-b\right)}^{2}}\text{d}x
Finally, changing the variable to
u=2ax-b
, we obtain
L=\frac{1}{2a}{\int }_{-b}^{b}\sqrt{1+{u}^{2}}\text{ }\text{d}u=\frac{1}{a}{\int }_{0}^{b}\sqrt{1+{u}^{2}}\text{ }\text{d}u
Evaluation of this integral is elementary and can be found in any table of integrals [14] . Thus, we obtain
L=\frac{1}{2a}\left[b\sqrt{1+{b}^{2}}+\mathrm{ln}\left(b+\sqrt{1+{b}^{2}}\right)\right]
Note that b is a dimensionless quantity, and a has the dimension of reciprocal length.
From the second of Equation (4), the total time of flight can be calculated by setting y = 0,
T=\frac{2{v}_{0y}}{g}
Therefore, the average speed of the projectile during its entire flight is
{s}_{ave}=\frac{L}{T}
Using Equation (12) and Equation (13) together with the values of a and b, and noting that
b=\frac{{v}_{0y}}{{v}_{0x}}=\mathrm{tan}{\theta }_{0}
after some simple algebraic manipulations we find
{s}_{ave}=\frac{{v}_{0}}{2}\left[1+\frac{{\mathrm{cos}}^{2}{\theta }_{0}}{\mathrm{sin}{\theta }_{0}}\mathrm{ln}\left(\frac{1+\mathrm{sin}{\theta }_{0}}{\mathrm{cos}{\theta }_{0}}\right)\right]
Equation (16) describes the average speed of a projectile in the absence of air resistance in terms of its initial speed and the launch angle. In fact, any projectile motion is completely defined by these two parameters. An inspection of the second term on the right-hand side of this equation shows that it becomes indeterminate for
{\theta }_{0}=0
and
{\theta }_{0}=\text{π}/2
, since
\underset{{\theta }_{0}\to 0}{\mathrm{lim}}\frac{{\mathrm{cos}}^{2}{\theta }_{0}}{\mathrm{sin}{\theta }_{0}}\mathrm{ln}\left(\frac{1+\mathrm{sin}{\theta }_{0}}{\mathrm{cos}{\theta }_{0}}\right)=\frac{0}{0}
\underset{{\theta }_{0}\to \text{π}/2}{\mathrm{lim}}\frac{{\mathrm{cos}}^{2}{\theta }_{0}}{\mathrm{sin}{\theta }_{0}}\mathrm{ln}\left(\frac{1+\mathrm{sin}{\theta }_{0}}{\mathrm{cos}{\theta }_{0}}\right)=0\times \infty
However, one can use L'Hôpital's rule to evaluate these limits, which turn out to be 1 and 0, respectively. Therefore, for
{\theta }_{0}=0
{\theta }_{0}=\text{π}/2
{s}_{ave}={v}_{0}
{s}_{ave}={v}_{0}/2
from Equation (16), respectively. Remembering that in our analysis we have assumed that the projectile returns to the same level from which it was launched, these results can easily be verified using Equations (4). A graph of the average speed as a function of launch angle is shown in Figure 2.
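The limiting values quoted above can also be checked numerically. The sketch below (with a hypothetical value of v0, not taken from the paper) evaluates Equation (16) near both endpoints of the launch-angle range, and at the midpoint angle of 52.8 degrees:

```python
import math

def s_ave(v0, theta0):
    """Average speed from Eq. (16): (v0/2)[1 + (cos^2/sin) ln((1+sin)/cos)]."""
    s, c = math.sin(theta0), math.cos(theta0)
    return (v0 / 2) * (1 + (c * c / s) * math.log((1 + s) / c))

v0 = 10.0  # hypothetical initial speed
print(s_ave(v0, 1e-6))                 # near theta0 = 0: approaches v0
print(s_ave(v0, math.pi / 2 - 1e-6))   # near theta0 = pi/2: approaches v0/2
print(s_ave(v0, math.radians(52.8)))   # close to 3*v0/4
```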
2.2. Time-Average Instantaneous Speed
From Equation (4), the instantaneous speed of a projectile at a time t is given by
v=\sqrt{{v}_{0x}^{2}+{\left(gt-{v}_{0y}\right)}^{2}}
Using the definition of the average of a function
f\left(x\right)
over an interval
\left[a,b\right]
{\stackrel{¯}{f}}_{ab}=\frac{1}{b-a}{\displaystyle {\int }_{a}^{b}}f\left(x\right)\text{d}x
Figure 2. Average speed of a projectile as a function of launch angle according to Equation (16).
we calculate the time-average of the instantaneous speed of the projectile during its motion,
{\stackrel{¯}{v}}_{t}=\frac{1}{T}{\displaystyle {\int }_{0}^{T}}\sqrt{{v}_{0x}^{2}+{\left(gt-{v}_{0y}\right)}^{2}}\text{d}t
where the time of flight T is given by Equation (13). Substituting for T and changing the variable of integration to
u=gt-{v}_{0y}
{\stackrel{¯}{v}}_{t}=\frac{1}{{v}_{0y}}{\displaystyle {\int }_{0}^{{v}_{0y}}}\sqrt{{v}_{0x}^{2}+{u}^{2}}\,\text{d}u
Evaluation of this integral is again elementary and the result is
{\stackrel{¯}{v}}_{t}=\frac{1}{2}\left[\sqrt{{v}_{0x}^{2}+{v}_{0y}^{2}}+\frac{{v}_{0x}^{2}}{{v}_{0y}}\mathrm{ln}\left(\frac{{v}_{0y}+\sqrt{{v}_{0x}^{2}+{v}_{0y}^{2}}}{{v}_{0x}}\right)\right]
Noting that
{v}_{0}^{2}={v}_{0x}^{2}+{v}_{0y}^{2}
with
{v}_{0x}={v}_{0}\mathrm{cos}{\theta }_{0}
and
{v}_{0y}={v}_{0}\mathrm{sin}{\theta }_{0}
, this equation reduces to Equation (16). Therefore, the average speed and the time-average instantaneous speed of a projectile in the absence of air resistance are the same.
2.3. Space-Average Instantaneous Speed
According to the law of conservation of mechanical energy, the speed of a projectile in the absence of air resistance is a function of its height, y. Therefore, the space dependence of the speed of the projectile is given by
v=\sqrt{{v}_{0}^{2}-2gy}
and from Equation (4) the maximum height reached by the projectile is
{y}_{\mathrm{max}}=\frac{{v}_{0}^{2}{\mathrm{sin}}^{2}{\theta }_{0}}{2g}
Therefore, the space-average of the instantaneous speed is obtained from
{\stackrel{¯}{v}}_{s}=\frac{1}{{y}_{\mathrm{max}}}{\displaystyle {\int }_{0}^{{y}_{\mathrm{max}}}}\sqrt{{v}_{0}^{2}-2gy}\,\text{d}y
This integral can easily be evaluated and, after substituting for
{y}_{\mathrm{max}}
from Equation (25) and some algebraic manipulations, we obtain
{\stackrel{¯}{v}}_{s}=\frac{2{v}_{0}}{3{\mathrm{sin}}^{2}{\theta }_{0}}\left(1-{\mathrm{cos}}^{3}{\theta }_{0}\right)
which gives the space-average instantaneous speed of the projectile as a function of its initial speed and launch angle. Graphs of this equation and of Equation (16) are shown in Figure 3 for comparison. As can be seen, these two speeds are different. Therefore, the space-average instantaneous speed of the projectile is not equal to its average speed. Again, we point out that Equation (27) becomes indeterminate (0/0) at
{\theta }_{0}=0
, but taking the limit shows that
\underset{{\theta }_{0}\to 0}{\mathrm{lim}}{\stackrel{¯}{v}}_{s}={v}_{0}
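One can check Equation (27), its difference from Equation (16), and this limiting value numerically. The Python sketch below uses arbitrary hypothetical values of v0 and θ0 (none taken from the paper):

```python
import math

def vbar_space(v0, th):
    # Eq. (27): space-average instantaneous speed, (2 v0 / (3 sin^2)) (1 - cos^3)
    return 2 * v0 / (3 * math.sin(th) ** 2) * (1 - math.cos(th) ** 3)

def s_ave(v0, th):
    # Eq. (16): average speed, repeated here for comparison
    s, c = math.sin(th), math.cos(th)
    return (v0 / 2) * (1 + (c * c / s) * math.log((1 + s) / c))

th = math.radians(45)  # hypothetical launch angle
print(vbar_space(10.0, th), s_ave(10.0, th))  # the two averages differ
print(vbar_space(10.0, 1e-6))                 # limit at theta0 -> 0 is v0
```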
3. Average Speed, Time-Average Instantaneous Speed, and Space-Average Instantaneous Speed in General Motion
We now show that the average speed and the time-average instantaneous speed of a particle in any general motion are always the same but they are different from the space-average instantaneous speed.
Consider the general motion of a particle in three dimensions, described by its position vector
Figure 3. Average speed and space-average instantaneous speed of a projectile as a function of launch angle.
r=r\left(t\right)
. The total distance that the particle travels in a time interval
{t}_{2}-{t}_{1}
is
L={\int }_{{t}_{1}}^{{t}_{2}}|\text{d}r\left(t\right)|
and the average speed is
{s}_{ave}=\frac{L}{{t}_{2}-{t}_{1}}=\frac{1}{{t}_{2}-{t}_{1}}{\int }_{{t}_{1}}^{{t}_{2}}|\text{d}r\left(t\right)|
On the other hand, the instantaneous speed (or magnitude of velocity) of the particle at any time is given by
v=\left|\frac{\text{d}r\left(t\right)}{\text{d}t}\right|
and the time-average of the instantaneous speed during the time interval
{t}_{2}-{t}_{1}
is
{\stackrel{¯}{v}}_{t}=\frac{1}{{t}_{2}-{t}_{1}}{\displaystyle {\int }_{{t}_{1}}^{{t}_{2}}}v\text{d}t=\frac{1}{{t}_{2}-{t}_{1}}{\displaystyle {\int }_{{t}_{1}}^{{t}_{2}}}\left|\frac{\text{d}r\left(t\right)}{\text{d}t}\right|\text{d}t
But since dt is a positive scalar, the right-hand side of this equation reduces to that of Equation (30). Therefore,
{s}_{ave}={\stackrel{¯}{v}}_{t}
which means that for any general motion of a particle in any number of dimensions the average speed is equal to the time-average of its instantaneous speed over any time interval.
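This equality can be illustrated numerically for an arbitrary path. In the sketch below, the curve r(t) is a made-up example (not from the paper); both averages are computed from the same discretized trajectory:

```python
import math

def r(t):
    # an arbitrary smooth 3-D path (hypothetical example)
    return (math.cos(3 * t), math.sin(2 * t), t * t)

t1, t2, n = 0.0, 2.0, 100_000
h = (t2 - t1) / n
pts = [r(t1 + i * h) for i in range(n + 1)]

# average speed: total path length divided by elapsed time
L = sum(math.dist(pts[i], pts[i + 1]) for i in range(n))
avg_speed = L / (t2 - t1)

# time-average of the instantaneous speed |dr/dt| via central differences
vbar_t = sum(math.dist(pts[i + 1], pts[i - 1]) / (2 * h)
             for i in range(1, n)) * h / (t2 - t1)

print(avg_speed, vbar_t)  # agree up to discretization error
```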
On the other hand, the space-average instantaneous speed of a particle in its general motion is given by
{\stackrel{¯}{v}}_{s}=\frac{{\displaystyle {\int }_{path}}\left|\frac{\text{d}r}{\text{d}t}\right|\text{d}L}{{\displaystyle {\int }_{path}}\text{d}L}
where dL is the element of length along the path of the particle. This result does not in general reduce to Equation (30), as evidenced by the analysis of the projectile motion.
According to Equation (16), the average speed of a projectile in the absence of air resistance varies between v0 and v0/2, depending on the launch angle
{\theta }_{0}
. At
{\theta }_{0}=0
the average speed is maximum (v0), and at
{\theta }_{0}={90}^{\circ }
it is minimum (v0/2). The average speed of 3v0/4, midpoint between the minimum and maximum, occurs at
{\theta }_{0}={52.8}^{\circ }
Direct calculations of the time-average instantaneous speed of a projectile in the absence of air resistance reveal that this speed and the average speed of the projectile are the same, but they are different from the space-average instantaneous speed. These results prompted the investigation of various average speeds in general motion of a particle. As it turns out, average speed and time-average instantaneous speed are exactly the same regardless of the dimensionality of the motion and the nature of the forces involved during the motion. Thus, for example, the average speed and the time-average instantaneous speed of a projectile during its motion are equal even in the presence of air resistance. The space-average instantaneous speed, on the other hand, is not equal to the other two.
The equality of average speed and time-average instantaneous speed can be useful in situations where the calculation of one is more difficult than the other. For example, if the equation of the trajectory of planar motion of a particle
y=f\left(x\right)
and a given time interval
\left[{t}_{1},{t}_{2}\right]
are known, one can calculate the time-average instantaneous speed by calculating the average speed. Direct calculation of time-average instantaneous speed in this case would otherwise be more involved.
This work is intended to complement the existing wealth of literature on projectile motion, as well as to prove that the average speed and the time-average instantaneous speed of a particle are always the same in any general motion, but different from the space-average instantaneous speed.
Calderon, C.T. and Mohazzabi, P. (2018) Average Speed in Projectile Motion and in General Motion of a Particle. Journal of Applied Mathematics and Physics, 6, 1540-1548. https://doi.org/10.4236/jamp.2018.67130
1. Halliday, D., Resnick, R. and Walker, J. (2005) Fundamentals of Physics. 7th Edition, Wiley, New York, 64-67.
2. Serway, R.A. and Jewett Jr., J.W. (2014) Physics for Scientists and Engineers. 9th Edition, Brooks/Cole, Boston, 84-91.
3. Knight, R.D. (2008) Physics for Scientists and Engineers. 2nd Edition, Pearson/Addison-Wesley, San Francisco, 97-102.
4. Weidner, R.T. (1985) Physics. Allyn & Bacon, Massachusetts, 68-73.
5. Tipler, P.A. and Mosca, G. (2004) Physics for Scientists and Engineers. 5th Edition, Volume 1, Freeman and Company, New York, 65-72.
6. Fishbane, P.M., Gasiorowicz, S. and Thornton, S.T. (1993) Physics for Scientists and Engineers. Prentice-Hall, Englewood Cliffs, 71-76.
7. Giancoli, D.C. (1980) Physics. Prentice-Hall, Englewood Cliffs, 51-54.
8. Giambattista, A., McCarthy-Richardson, B. and Richardson, R.C. (2017) College Physics. McGraw-Hill, New York, 120-126.
9. Mohazzabi, P. and Kohneh, Z.A. (2005) Projectile Motion without Trigonometric Functions. The Physics Teacher, 43, 114-115. https://doi.org/10.1119/1.1855750
10. Barger, V.D. and Olsson, M.G. (1995) Classical Mechanics. 2nd Edition, McGraw-Hill, New York, 30-31.
11. Fowles, G.R. and Cassiday, G.L. (1999) Analytical Mechanics. 6th Edition, Saunders, New York, 145-153.
12. Mohazzabi, P. (2018) When Does Air Resistance Become Significant in Projectile Motion? The Physics Teacher, 56, 168-169. https://doi.org/10.1119/1.5025298
13. Mohazzabi, P. and Fields, J.C. (2004) High-Altitude Projectile Motion. Canadian Journal of Physics, 82, 197-204. https://doi.org/10.1139/p04-001
14. Korn, G.A. and Korn, T.M. (1968) Mathematical Handbook for Scientists and Engineers. 2nd Enlarged and Revised Edition, McGraw-Hill, New York, 943.
15. Stewart, J. (2016) Single Variable Calculus: Early Transcendentals. 8th Edition, Cengage, Boston, 461.
|
Statistical hypothesis test for forecasting
When time series X Granger-causes time series Y, the patterns in X are approximately repeated in Y after some time lag (two examples are indicated with arrows). Thus, past values of X can be used for the prediction of future values of Y.
The Granger causality test is a statistical hypothesis test for determining whether one time series is useful in forecasting another, first proposed in 1969.[1] Ordinarily, regressions reflect "mere" correlations, but Clive Granger argued that causality in economics could be tested for by measuring the ability to predict the future values of a time series using prior values of another time series. Since the question of "true causality" is deeply philosophical, and because of the post hoc ergo propter hoc fallacy of assuming that one thing preceding another can be used as a proof of causation, econometricians assert that the Granger test finds only "predictive causality".[2] Using the term "causality" alone is a misnomer, as Granger-causality is better described as "precedence",[3] or, as Granger himself later claimed in 1977, "temporally related".[4] Rather than testing whether X causes Y, the Granger causality tests whether X forecasts Y.[5]
Granger also stressed that some studies using "Granger causality" testing in areas outside economics reached "ridiculous" conclusions.[6] "Of course, many ridiculous papers appeared", he said in his Nobel lecture.[7] However, it remains a popular method for causality analysis in time series due to its computational simplicity.[8][9] The original definition of Granger causality does not account for latent confounding effects and does not capture instantaneous and non-linear causal relationships, though several extensions have been proposed to address these issues.[8]
We say that a variable X that evolves over time Granger-causes another evolving variable Y if predictions of the value of Y based on its own past values and on the past values of X are better than predictions of Y based only on Y's own past values.
Underlying principles
Granger defined the causality relationship based on two principles:[8][10]
1. The cause happens prior to its effect.
2. The cause has unique information about the future values of its effect.
Given these two assumptions about causality, Granger proposed to test the following hypothesis for identification of a causal effect of
{\displaystyle X}
{\displaystyle Y}
{\displaystyle \mathbb {P} [Y(t+1)\in A\mid {\mathcal {I}}(t)]\neq \mathbb {P} [Y(t+1)\in A\mid {\mathcal {I}}_{-X}(t)],}
{\displaystyle \mathbb {P} }
refers to probability,
{\displaystyle A}
is an arbitrary non-empty set, and
{\displaystyle {\mathcal {I}}(t)}
{\displaystyle {\mathcal {I}}_{-X}(t)}
respectively denote the information available as of time
{\displaystyle t}
in the entire universe, and that in the modified universe in which
{\displaystyle X}
is excluded. If the above hypothesis is accepted, we say that
{\displaystyle X}
Granger-causes
{\displaystyle Y}
Mathematical statement
Let y and x be stationary time series. To test the null hypothesis that x does not Granger-cause y, one first finds the proper lagged values of y to include in a univariate autoregression of y:
{\displaystyle y_{t}=a_{0}+a_{1}y_{t-1}+a_{2}y_{t-2}+\cdots +a_{m}y_{t-m}+{\text{error}}_{t}.}
Next, the autoregression is augmented by including lagged values of x:
{\displaystyle y_{t}=a_{0}+a_{1}y_{t-1}+a_{2}y_{t-2}+\cdots +a_{m}y_{t-m}+b_{p}x_{t-p}+\cdots +b_{q}x_{t-q}+{\text{error}}_{t}.}
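A minimal numerical sketch of this two-regression procedure is given below. It is not the only way to run the test (packaged implementations exist, e.g. in statsmodels), and the simulated series, lag count, and coefficients are all hypothetical choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 500, 2  # sample size and number of lags (hypothetical)

# simulate a pair of series in which x feeds into y with one lag
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + 0.1 * rng.standard_normal()

def rss(Y, X):
    """Residual sum of squares of an OLS fit of Y on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    return float(resid @ resid)

Y = y[m:]
lags_y = np.column_stack([y[m - k:n - k] for k in range(1, m + 1)])
lags_x = np.column_stack([x[m - k:n - k] for k in range(1, m + 1)])
const = np.ones((n - m, 1))

rss_restricted = rss(Y, np.hstack([const, lags_y]))            # y on its own lags
rss_unrestricted = rss(Y, np.hstack([const, lags_y, lags_x]))  # plus lags of x

# F-statistic for the m zero restrictions on the coefficients of x's lags
df = (n - m) - (2 * m + 1)
F = ((rss_restricted - rss_unrestricted) / m) / (rss_unrestricted / df)
print(F)  # a large value argues against "x does not Granger-cause y"
```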
Multivariate Granger causality analysis is usually performed by fitting a vector autoregressive model (VAR) to the time series. In particular, let
{\displaystyle X(t)\in \mathbb {R} ^{d\times 1}}
{\displaystyle t=1,\ldots ,T}
{\displaystyle d}
-dimensional multivariate time series. Granger causality is performed by fitting a VAR model with
{\displaystyle L}
time lags as follows:
{\displaystyle X(t)=\sum _{\tau =1}^{L}A_{\tau }X(t-\tau )+\varepsilon (t),}
{\displaystyle \varepsilon (t)}
is a white Gaussian random vector, and
{\displaystyle A_{\tau }}
is a matrix for every
{\displaystyle \tau }
. A time series
{\displaystyle X_{i}}
is called a Granger cause of another time series
{\displaystyle X_{j}}
, if at least one of the elements
{\displaystyle A_{\tau }(j,i)}
{\displaystyle \tau =1,\ldots ,L}
is significantly larger than zero (in absolute value).[11]
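The following sketch fits a VAR(1) model by ordinary least squares on simulated data with a known coefficient matrix (all values hypothetical) and reads off the Granger-causal structure from the estimated entries, in the sense described above:

```python
import numpy as np

rng = np.random.default_rng(1)
d, T = 3, 2000  # dimension and length (hypothetical)
A_true = np.array([[0.5, 0.0, 0.0],
                   [0.4, 0.3, 0.0],   # series 0 drives series 1
                   [0.0, 0.0, 0.2]])

X = np.zeros((T, d))
for t in range(1, T):
    X[t] = A_true @ X[t - 1] + 0.1 * rng.standard_normal(d)

# least-squares fit of the VAR(1) model X(t) = A X(t-1) + eps
B, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
A_hat = B.T
print(np.round(A_hat, 2))
# A_hat[1, 0] recovers the nonzero coefficient, so X_0 Granger-causes X_1,
# while A_hat[0, 1] stays near zero.
```

In practice the "significantly larger than zero" judgment is made with a formal significance test on the estimated coefficients rather than by inspection.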
Non-parametric test
The above linear methods are appropriate for testing Granger causality in the mean. However, they are not able to detect Granger causality in higher moments, e.g., in the variance. Non-parametric tests for Granger causality are designed to address this problem.[12] The definition of Granger causality in these tests is general and does not involve any modelling assumptions, such as a linear autoregressive model. The non-parametric tests for Granger causality can be used as diagnostic tools to build better parametric models, including higher-order moments and/or non-linearity.[13]
As its name implies, Granger causality is not necessarily true causality. In fact, Granger-causality tests fulfill only the Humean definition of causality, which identifies cause-effect relations with constant conjunctions.[14] If both X and Y are driven by a common third process with different lags, the test may still indicate Granger causality, yet manipulation of one of the variables would not change the other. Indeed, Granger-causality tests are designed to handle pairs of variables, and may produce misleading results when the true relationship involves three or more variables. Having said this, it has been argued that given a probabilistic view of causation, Granger causality can be considered true causality in that sense, especially when Reichenbach's "screening off" notion of probabilistic causation is taken into account.[15] Other possible sources of misleading test results are: (1) sampling that is too infrequent or too frequent, (2) a nonlinear causal relationship, (3) time series nonstationarity and nonlinearity, and (4) the existence of rational expectations.[14] A similar test involving more variables can be applied with vector autoregression. Recently, a fundamental mathematical study of the mechanism underlying the Granger method has been provided.[16] Using exclusively mathematical tools (Fourier transformation and differential calculus), it has been found that not even the most basic requirement underlying any possible definition of causality is met by the Granger causality test: any definition of causality should refer to the prediction of the future from the past; instead, by inverting the time series it can be shown that Granger allows one to "predict" the past from the future as well.
A method for Granger causality has been developed that is not sensitive to deviations from the assumption that the error term is normally distributed.[17] This method is especially useful in financial economics, since many financial variables are non-normally distributed.[18] Recently, asymmetric causality testing has been suggested in the literature in order to separate the causal impact of positive changes from the negative ones.[19] An extension of Granger (non-)causality testing to panel data is also available.[20] A modified Granger causality test based on the GARCH (generalized autoregressive conditional heteroscedasticity) type of integer-valued time series models has also been developed and applied in several areas.[21][22]
Extensions to point process models
Neural spike train data can be modeled as a point process. A temporal point process is a stochastic time series of binary events that occur in continuous time. It can take on only two values at each point in time, indicating whether or not an event has actually occurred. This type of binary-valued representation of information suits the activity of neural populations because a single neuron's action potential has a typical waveform. In this way, what carries the actual information being output from a neuron is the occurrence of a "spike", as well as the time between successive spikes. Using this approach, one can abstract the flow of information in a neural network as simply the spiking times of each neuron over an observation period. A point process can be represented by the timing of the spikes themselves, by the waiting times between spikes using a counting process, or, if time is discretized finely enough that each bin can contain at most one event, as a binary sequence of 1s and 0s.[citation needed]
One of the simplest types of neural-spiking models is the Poisson process. This, however, is limited in that it is memoryless: it does not account for any spiking history when calculating the current probability of firing. Neurons, however, exhibit a fundamental (biophysical) history dependence by way of their relative and absolute refractory periods. To address this, a conditional intensity function is used to represent the probability of a neuron spiking, conditioned on its own history. The conditional intensity function expresses the instantaneous firing probability and implicitly defines a complete probability model for the point process. It defines a probability per unit time. So if this unit time is taken small enough to ensure that only one spike could occur in that window, then the conditional intensity function completely specifies the probability that a given neuron will fire at a certain time.[citation needed]
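A discrete-time sketch of this idea is given below. All rates and time constants are hypothetical choices, not values from any particular study; each small time bin draws a spike with probability approximately lambda times dt, with the intensity suppressed immediately after a spike to mimic refractoriness:

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 0.001            # bin width in seconds, small enough for at most one spike per bin
n_bins = int(60 / dt) # simulate 60 seconds
base_rate = 20.0      # background firing rate in Hz (hypothetical)

spikes = np.zeros(n_bins, dtype=int)
last_spike = -np.inf
for t in range(n_bins):
    elapsed = t * dt - last_spike
    # conditional intensity: suppressed right after a spike, recovering
    # with a 10 ms time constant (a crude relative refractory period)
    lam = base_rate * (1 - np.exp(-elapsed / 0.01))
    if rng.random() < lam * dt:  # P(spike in this bin) ~ lambda * dt
        spikes[t] = 1
        last_spike = t * dt

print(spikes.sum() / 60.0)  # empirical rate, pulled below base_rate by refractoriness
```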
Convergent cross mapping – Statistical test for causality, another technique for testing for causality between dynamic variables
^ Granger, C. W. J. (1969). "Investigating Causal Relations by Econometric Models and Cross-spectral Methods". Econometrica. 37 (3): 424–438. doi:10.2307/1912791. JSTOR 1912791.
^ Diebold, Francis X. (2007). Elements of Forecasting (PDF) (4th ed.). Thomson South-Western. pp. 230–231. ISBN 978-0324359046.
^ Leamer, Edward E. (1985). "Vector Autoregressions for Causal Inference?". Carnegie-Rochester Conference Series on Public Policy. 22: 283. doi:10.1016/0167-2231(85)90035-1.
^ Granger, C. W. J.; Newbold, Paul (1977). Forecasting Economic Time Series. New York: Academic Press. p. 225. ISBN 0122951506.
^ Hamilton, James D. (1994). Time Series Analysis (PDF). Princeton University Press. pp. 306–308. ISBN 0-691-04289-6.
^ Thurman, Walter (1988). "Chickens, Eggs, and Causality or Which Came First?" (PDF). American Journal of Agricultural Economics. 70 (2): 237–238. Retrieved 2 April 2022.
^ Granger, Clive W. J. (2004). "Time Series Analysis, Cointegration, and Applications" (PDF). American Economic Review. 94 (3): 421–425. CiteSeerX 10.1.1.370.6488. doi:10.1257/0002828041464669. Retrieved 12 June 2019.
^ a b c d Eichler, Michael (2012). "Causal Inference in Time Series Analysis" (PDF). In Berzuini, Carlo (ed.). Causality : statistical perspectives and applications (3rd ed.). Hoboken, N.J.: Wiley. pp. 327–352. ISBN 978-0470665565.
^ Seth, Anil (2007). "Granger causality". Scholarpedia. 2 (7): 1667. Bibcode:2007SchpJ...2.1667S. doi:10.4249/scholarpedia.1667.
^ a b Granger, C.W.J. (1980). "Testing for causality: A personal viewpoint". Journal of Economic Dynamics and Control. 2: 329–352. doi:10.1016/0165-1889(80)90069-X.
^ Lütkepohl, Helmut (2005). New introduction to multiple time series analysis (3 ed.). Berlin: Springer. pp. 41–51. ISBN 978-3540262398.
^ Diks, Cees; Panchenko, Valentyn (2006). "A new statistic and practical guidelines for nonparametric Granger causality testing" (PDF). Journal of Economic Dynamics and Control. 30 (9): 1647–1669. doi:10.1016/j.jedc.2005.08.008.
^ Francis, Bill B.; Mougoue, Mbodja; Panchenko, Valentyn (2010). "Is there a Symmetric Nonlinear Causal Relationship between Large and Small Firms?" (PDF). Journal of Empirical Finance. 17 (1): 23–28. doi:10.1016/j.jempfin.2009.08.003.
^ a b Mariusz, Maziarz (2015-05-20). "A review of the Granger-causality fallacy". The Journal of Philosophical Economics: Reflections on Economic and Social Issues. VIII. (2). ISSN 1843-2298.
^ Mannino, Michael; Bressler, Steven L (2015). "Foundational perspectives on causality in large-scale brain networks". Physics of Life Reviews. 15: 107–23. Bibcode:2015PhLRv..15..107M. doi:10.1016/j.plrev.2015.09.002. PMID 26429630.
^ Grassmann, Greta (2020). "New considerations on the validity of the Wiener-Granger causality test". Heliyon. 6: e05208. doi:10.1016/j.heliyon.2020.e05208. PMC 7578691. PMID 33102842.
^ Hacker, R. Scott; Hatemi-j, A. (2006). "Tests for causality between integrated variables using asymptotic and bootstrap distributions: Theory and application". Applied Economics. 38 (13): 1489–1500. doi:10.1080/00036840500405763. S2CID 121999615.
^ Mandelbrot, Benoit (1963). "The Variation of Certain Speculative Prices". The Journal of Business. 36 (4): 394–419. doi:10.1086/294632.
^ Hatemi-j, A. (2012). "Asymmetric causality tests with an application". Empirical Economics. 43: 447–456. doi:10.1007/s00181-011-0484-x. S2CID 153562476.
^ Dumitrescu, E.-I.; Hurlin, C. (2012). "Testing for Granger non-causality in heterogeneous panels". Economic Modelling. 29 (4): 1450–1460. CiteSeerX 10.1.1.395.568. doi:10.1016/j.econmod.2012.02.014.
^ Chen, Cathy W. S.; Hsieh, Ying-Hen; Su, Hung-Chieh; Wu, Jia Jing (2018-02-01). "Causality test of ambient fine particles and human influenza in Taiwan: Age group-specific disparity and geographic heterogeneity". Environment International. 111: 354–361. doi:10.1016/j.envint.2017.10.011. ISSN 0160-4120. PMID 29173968.
^ Chen, Cathy W. S.; Lee, Sangyeol (2017). "Bayesian causality test for integer-valued time series models with applications to climate and crime data". Journal of the Royal Statistical Society, Series C (Applied Statistics). 66 (4): 797–814. doi:10.1111/rssc.12200. ISSN 1467-9876.
^ Knight, R. T (2007). "NEUROSCIENCE: Neural Networks Debunk Phrenology". Science. 316 (5831): 1578–9. doi:10.1126/science.1144677. PMID 17569852. S2CID 15065228.
^ a b Kim, Sanggyun; Putrino, David; Ghosh, Soumya; Brown, Emery N (2011). "A Granger Causality Measure for Point Process Models of Ensemble Neural Spiking Activity". PLOS Computational Biology. 7 (3): e1001110. Bibcode:2011PLSCB...7E1110K. doi:10.1371/journal.pcbi.1001110. PMC 3063721. PMID 21455283.
^ Bressler, Steven L; Seth, Anil K (2011). "Wiener–Granger Causality: A well established methodology". NeuroImage. 58 (2): 323–9. doi:10.1016/j.neuroimage.2010.02.059. PMID 20202481. S2CID 36616970.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Granger_causality&oldid=1081784839"
|
On fluctuations of global and mesoscopic linear statistics of generalized Wigner matrices
Yiting Li, Yuanyuan Xu
We consider an N by N real or complex generalized Wigner matrix
{\mathit{H}}_{\mathit{N}}
, whose entries are independent centered random variables with uniformly bounded moments. We assume that the variance profile
{s}_{ij}:=\mathbb{E}|{H}_{ij}{|}^{2}
satisfies
{\sum }_{i=1}^{N}{s}_{ij}=1
for all
1\le j\le N
and
{c}^{-1}\le N{s}_{ij}\le c
for all
1\le i,j\le N
with some constant
c\ge 1
. We establish Gaussian fluctuations for the linear eigenvalue statistics of
{\mathit{H}}_{\mathit{N}}
on global scales, as well as on all mesoscopic scales up to the spectral edges, with the expectation and variance formulated in terms of the variance profile. We subsequently obtain the universal mesoscopic central limit theorems for the linear eigenvalue statistics inside the bulk and at the edges, respectively.
Yiting Li. Yuanyuan Xu. "On fluctuations of global and mesoscopic linear statistics of generalized Wigner matrices." Bernoulli 27 (2) 1057 - 1076, May 2021. https://doi.org/10.3150/20-BEJ1265
Received: 1 June 2020; Revised: 1 July 2020; Published: May 2021
Keywords: central limit theorem, generalized Wigner matrix, linear eigenvalue statistics
|
Talk:Axiom of extensionality - Wikipedia
Talk:Axiom of extensionality
Axiom of extensionality has been listed as a level-5 vital article in Mathematics. If you can improve it, please do. This article has been rated as Start-Class.
Choice of symbolsEdit
it's been a while since I did ZFC. Should it be ⇔ for iff, or the single line arrow currently in the article? -- Tarquin 23:13 Nov 28, 2002 (UTC)
It depends on your notational conventions, and is more an issue of symbolic logic than ZFC as such. IME, when people do really symbolic logic, not trying to mix with natural language at all and restricting to a minimal set of defined symbols -- which is what I'm doing in the symbolic portions of the document, restricting to the symbols of predicate logic, including equality, and the one set-theoretic symbol ∈ -- then they tend to use the arrows with single lines. OTOH, when being less formal and especially in conjunction with natural language, people tend to use the arrows with double lines; this is especially especially true when the single-lined arrows might serve some other purpose (such as indicating functions), which is not the case here.
Some will also make a distinction in meaning between the two types of arrows; for them, ⇔ is a symbol in the metalanguage indicating that two expressions in the object language are logically equivalent (either semantically or syntactically), while ↔ remains a symbol in the object language indicating (in classical logic) the material biconditional of the two expressions. Thus:
p ⇔ q if and only if p ↔ q is a tautology (semantically) or a theorem (syntactically).
Note that in the above sentence, "↔", "⇔", and "if and only if" all mean slightly different things! If I wished to express it entirely in words, I would say:
Two expressions are logically equivalent (semantically or syntactically) if and only if their material biconditional is (respectively) a tautology or a theorem.
Or from another perspective, "↔" links two propositions in the object language to get another proposition in the object language, "⇔" links two propositions in the object language to get a proposition in the metalanguage, and "if and only if" links two propositions in the metalanguage to get a proposition in the metalanguage. Of course, when I translate the axioms in this article into words, I'm not so precise about these distinctions; if I were, the resulting English would be even more incomprehensible that the symbolic statement. So the English translations must be read as from a Platonic point of view, pretending that the purely formal symbolic statements above are representing certain actual facts about some real things called "sets".
For us, there's also the practical matter that there are some web browsers (at least some versions of M$IE) that can read "↔" but not "⇔".
— Toby 04:11 Nov 29, 2002 (UTC)
My web browser cannot read ∀ but it can read
{\displaystyle \forall }
. I would assume that the
{\displaystyle \forall }
is a better choice for standardisation as it is a part proper of Tex. As such I'll change it over on this page and wait a while for objections. I will then (further no objections) start changing over the symbols on other pages. Barnaby dawson 09:56, 31 Aug 2004 (UTC)
With ur-elementsEdit
The section on set theory with ur-elements says that one could define ur-elements to contain themselves as unique elements. Apart from the mentioned adaptation of regularity this requires, I see another problem: one can no longer distinguish an ur-element x from {x} or {{x}} or {x,{x}} and so forth (more exactly, one can show these to be all equal). Shouldn't that be mentioned? Marc van Leeuwen (talk) 13:07, 7 April 2011 (UTC)
Retrieved from "https://en.wikipedia.org/w/index.php?title=Talk:Axiom_of_extensionality&oldid=842508461"
|
C++ With the Command Line · USACO Guide
Contributors: Benjamin Qi, Hankai Zhang, Anthony Wang, Nathan Wang, Nathan Chen, Michael Lan, Arpan Banerjee
General - Running Code Locally
what compiling a simple program looks like
hm, anything simpler / interactive (and free)?
Should be mostly the same as Linux ...
Open the Terminal application and familiarize yourself with some basic commands. Upgrade to zsh if you haven't already.
Intro to OS X Command Line
keyboard shortcuts / terminal commands
USACO (and most contests) use GCC's g++ to compile and run your code. You'll need g++ specifically to use the #include <bits/stdc++.h> header file; see Running Code Locally for details.
GCC is usually preinstalled on most Linux distros. You can check if it is installed with
If it is not preinstalled, you can probably install it using your distro's package manager.
If you previously installed these you may need to update them:
softwareupdate --list # list updates
softwareupdate -i -a # installs all updates
After this step, clang should be installed (try running clang --version in Terminal).
Install gcc with Homebrew.
According to this if brew doesn't seem to finish for a long time then
brew install gcc --force-bottle
probably suffices.
You should be able to compile with g++-#, where # is the version number (e.g., 10). Running the following command should display version information:
g++-10 --version
If you want to be able to compile with just g++, write a shell alias! Put the following lines into your shell's rc file (~/.bashrc if you use bash, and ~/.zshrc if you use zsh).
alias g++=g++-10
Once you do so, g++ --version should now output the same thing as g++-10 --version.
Note: avoid overriding the system g++ with symlinking or hard-linking as that will almost surely cause problems. Don't worry if you don't know what those terms mean.
Simpler: Mingw-w64 (Minimalist GNU for Windows)
MinGW with VS Code
Setting Up MinGW with VS Code
Configuring CLion on Windows
Setting up MinGW with CLion
Harder: Windows Subsystem for Linux (WSL)
If you're already accustomed to the Linux Command line, this might be the best option for you.
Windows Subsystem for Linux, commonly referred to as WSL, runs the Linux kernel (or an emulation layer, depending on which version you use) within your Windows installation. This allows you to use Linux binaries without needing to use Linux as your main operating system.
Many people use WSL (such as Anthony), but it can be difficult to properly set up.
VSCode - GCC on WSL
If you want to code in (neo)vim, you can install WSL and code through WSL bash.
To install the necessary tools after setting up WSL, you can run the following commands.
On Debian-based distributions like Ubuntu:
sudo apt update && sudo apt install build-essential
On Arch based distributions like Arch Linux:
sudo pacman -Sy base-devel
You can find many tutorials on how to style up WSL and make it feel more cozy. The first step is to use a proper terminal and not the default one that Windows provides. An easy to use option is Windows Terminal, which can be found on the Microsoft Store.
Set Up and Customize Windows Terminal
Make the command line look good
Get a beautiful command line interface
Basics of Compiling & Running
Consider a simple program such as the following, which we'll save in name.cpp.
#include <bits/stdc++.h>
using namespace std;
int main() {
	int x;
	cin >> x;
	cout << "FOUND " << x << "\n";
}
It's not hard to compile and run a C++ program. First, open up PowerShell on Windows, Terminal on Mac, or your distro's terminal on Linux. We can compile name.cpp into an executable named name with the following command:
g++ name.cpp -o name
Then we can execute the program:
./name
If you type some integer and press Enter, the program should produce output. We can write both of these commands in a single line:
g++ name.cpp -o name && ./name
Note that && ensures that ./name only runs if g++ name.cpp -o name finishes successfully.
Redirecting Input & Output
If you want to read standard input from inp.txt, use the following:
./name < inp.txt
If you want to write standard output to out.txt, then use the following:
./name > out.txt
They can also be used in conjunction, as shown below:
./name < inp.txt > out.txt
See Input & Output for how to do file input and output within the program.
Compiler Options (aka Flags)
Use compiler flags to change the way GCC compiles your code. Usually, we use something like the following in place of g++ name.cpp -o name:
g++ -std=c++17 -O2 name.cpp -o name -Wall
-O2 tells g++ to compile your code to run more quickly while increasing compilation time (see here).
-std=c++17 allows you to use features that were added to C++ in 2017. USACO recently upgraded from C++11 to C++17.
-Wall checks your program for common errors. See Debugging for more information.
You should always compile with these flags.
Adding Shortcuts (Mac)
For Users of Linux & Windows
The process is similar for Linux. If you're on Windows, you can use an IDE to get these shortcuts, or you can install WSL (mentioned above).
Retyping the compiler flags above can get tedious. You should define shortcuts so you don't need to type them every time!
Aliases in Terminal
What should / shouldn't go in .zshenv, .zshrc, ...
First, create your .zshrc if it doesn't already exist.
Open your .zshrc with a text editor (e.g., Sublime Text with subl).
You can add aliases and functions here, such as the following to compile and run C++ on Mac.
co() { g++ -std=c++17 -O2 -o "${1%.*}" $1 -Wall; }
run() { co $1 && ./${1%.*} & fg; }
Now you can easily compile and run name.cpp from the command line with co name.cpp && ./name or run name.cpp. Note that all occurrences of $1 in the function are replaced with name.cpp, while ${1%.*} removes the file extension from $1 to produce name.
What is & fg for?
Displaying or redirecting a shell's job control messages
Let prog.cpp denote the following file:
#include <bits/stdc++.h>
using namespace std;
int main() {
	vector<int> v;
	cout << v[-1];
}
According to the resource above, the & fg is necessary for getting zsh on Mac to display crash messages (such as segmentation fault). For example, consider running the prog.cpp above with run prog.cpp.
If & fg is removed from the run command above then the terminal displays no message at all. Leaving it in produces the following (ignore the first two lines):
[2] - running ./${1%.*}
zsh: segmentation fault ./${1%.*}
Measuring Time & Memory Usage (Mac)
How to Find Total Memory Consumption of C++ Program
time -v on Mac
use gtime
For example, suppose that prog.cpp consists of the following:
#include <bits/stdc++.h>
using namespace std;
const int BIG = 10000000;
int a[BIG];
int main() {
	long long sum = 0;
	for (int i = 0; i < BIG; ++i) sum += a[i];
	cout << sum << "\n";
}
Then co prog.cpp && gtime -v ./prog gives the following:
Command being timed: "./prog"
Since 10^7 integers require 4 \cdot 10^7 \cdot 10^{-3} \approx 40000 kilobytes of memory, this is close to the 40216 kilobytes reported in the above output, as expected.
Adjusting Stack Size (Mac)
This section might be out of date.
Let A.cpp denote the following program:
#include <bits/stdc++.h>
using namespace std;
int res(int x) {
	if (x == 200000) return x;
	return res(x + 1);
}
int main() {
	cout << res(0) << "\n";
}
If we compile and run this with g++ A.cpp -o A && ./A, this outputs 200000. However, changing 200000 to 300000 gives a segmentation fault. Similarly,
runs, but changing 2000000 to 3000000 also gives a segmentation fault. This is because the stack size on Mac appears to be limited to 8 megabytes by default.
Note that USACO does not have a stack size limit, aside from the usual 256 MB memory limit. Therefore, code that crashes locally due to a stack overflow error may still pass on the USACO servers. To get your code running locally, use one of the methods below.
This matters particularly for contests such as Facebook Hacker Cup where you submit the output of a program you run locally.
Change Stack Size on Mac OS?
ulimit -s 65532 will increase the stack size to about 64 MB. Unfortunately, this doesn't work for higher numbers.
Terminal Command on Mac
people complain about FHC every year
To get around this, we can pass a linker option. According to the manual for ld (enter man ld in Terminal), the option -stack_size size does the following:
Specifies the maximum stack size for the main thread in a program. Without this option a program has a 8MB stack. The argument size is a hexadecimal number with an optional leading 0x. The size should be a multiple of the architecture's page size (4KB or 16KB).
So including -Wl,-stack_size,0x10000000 as part of your compilation command will set the maximum stack size to 16^7 bytes \approx 256 megabytes, which is usually sufficient. However, running the first program above with 200000 replaced by 1e7 still gives an error. In this case, you can further increase the maximum stack size (e.g., by changing 0x10000000 to 0xF0000000).
On Windows, adding -Wl,--stack,268435456 as part of your compilation flags should do the trick. The 268435456 corresponds to 268435456 bytes, or 256 megabytes. If you are using Windows PowerShell, make sure to wrap it in quotation marks (like so: "-Wl,--stack,268435456"), since commas are considered to be special characters.
|
Enhanced Splash Models for High Pressure Diesel Spray | J. Eng. Gas Turbines Power | ASME Digital Collection
Enhanced Splash Models for High Pressure Diesel Spray
L. Allocca,
, Via Marconi, 8, 80125 Napoli, Italy
e-mail: l.allocca@im.cnr.it
L. Andreassi,
, Dip. di Ingegneria Meccanica, Via del Politecnico 1-00133 Rome, Italy
e-mail: lucand@mail.mec.uniroma2.it
S. Ubertini
e-mail: stefano.ubertini@uniroma2.it
Allocca, L., Andreassi, L., and Ubertini, S. (September 4, 2006). "Enhanced Splash Models for High Pressure Diesel Spray." ASME. J. Eng. Gas Turbines Power. April 2007; 129(2): 609–621. https://doi.org/10.1115/1.2432891
Mixture preparation is a crucial aspect for the correct operation of modern direct injection (DI) Diesel engines, as it greatly influences and alters the combustion process and, therefore, the exhaust emissions. The complete comprehension of the spray impingement phenomenon is a quite complex task, and a mixed numerical-experimental approach has to be considered. On the modeling side, several studies can be found in the scientific literature, but only in recent years has complete multidimensional modeling been developed and applied to engine simulations. Among the models available in the literature, in this paper the models by Bai and Gosman (Bai, C., and Gosman, A. D., 1995, SAE Technical Paper No. 950283) and by Lee et al. (Lee, S., and Ryou, H., 2000, Proceedings of the Eighth International Conference on Liquid Atomization and Spray Systems, Pasadena, CA, pp. 586–593; Lee, S., Ko, G. H., Ryas, H., and Hong, K. B., 2001, KSME Int. J., 15(7), pp. 951–961) have been selected and implemented in the KIVA-3V code. On the experimental side, the behavior of a Diesel impinging spray emerging from a common rail injection system (injection pressures of 80 and 120 MPa) has been analyzed. The impinging spray has been illuminated by a pulsed laser sheet generated from the second harmonic of a Nd-yttrium-aluminum-garnet laser. The images have been acquired by a charge coupled device camera at different times from the start of injection. Digital image processing software has made it possible to extract the characteristic parameters of the impinging spray with respect to different operating conditions. The comparison of numerical and experimental data shows that both models should be modified in order to allow a proper simulation of the splash phenomena in modern Diesel engines. Then the numerical data in terms of radial growth, height, and shape of the splash cloud, as predicted by the modified versions of the models, are compared to the experimental ones. Differences among the models are highlighted and discussed.
diesel engines, combustion, exhaust systems, sprays
Drops, Sprays, Diesel
Modelling of Wall Films Formed by Impinging Diesel Sprays
Modelling Wall Impaction of Diesel Sprays
Kidoguki
High Time-Space Resolution Analysis of Droplets Behavior and Gas Entrainment into Diesel Sprays Impinging on Walls
,” 16th ICLASS Europe Meeting, Germany.
Fuel Injection Spray and Combustion Chamber Wall Impingement in Large Bore Diesel Engines
Characteristics of a Diesel Spray Impinging on a Flat Wall
Analysis of Impinging Spray Characteristics under High-Pressure Fuel Injection
Proceedings of COMODIA 90 Int. Symposium on Diagnostic and Modeling of Combustion in I.C. Engines
Flow and Heat Transfer Characteristics of Impinging Transient Diesel Sprays
Early Injection and Time-Resolved Evolution of a Spray for GDI Engines
,” ASME Fluids Engineering Division Summer Meeting, Montreal.
Influence of the Gas Ambient Nature on Diesel Spray Properties at High Injection Pressure: Experimental Results
,” THIESEL 2000, Valencia, Spain.
Wall-Impingement Analysis of a Spray From a Common Rail Injection System for Diesel Engines
Quantitative Analysis of Combustion in High-Speed Direct Injection Diesel Engines
,” COMODIA 94 July 11–14, 1994 Yokohama, Japan.
Modeling Engine/Spray Wall Impingement
Comparison of Models and Experiments for Diesel Fuel Sprays
,” COMODIA 1990, Int. Symposium on Diagnostic and Modelling of Combustion in IC Engines, Kyoto, Japan, pp.
Eckause
Modeling Heat Transfer to Impinging Fuel Sprays in Direct-Injection Engines
Guerrassi
Experimental Study and Modeling of Diesel Spray/Wall Impingement
Numerical Modeling of Diesel Spray Impinging on Flat Walls
Spray Wall Impingement Phenomena: Experimental Investigations and Numerical Predictions
,” 12th Annual Conference of ICLASS Europe, Lund, Sweden, pp.
Aufprall von Tropfen auf Flüssigkeitsfilmen
,” Workshop uber Sprays, Erfassung von Spruhvorgangen und Techniken der Fluidzerstaubung, pp.
3-1–A 3-
A Spray Wall Impingement Model Based Upon Conservation Principles
,” Fifth International Symposium on Diagnostics and Modeling of Combustion in Internal Combustion Engines, pp.
Modelling Fuel Film Formation and Wall Interaction in Diesel Engines
A Particle Numerical Model for Wall Film Dynamics in Port-Injected Engines
A Spray/Wall Interaction Submodel for the KIVA-3V Wall Film Model
Modeling of Spray-Wall Interactions Considering Liquid Film Formation
Proceedings of the Eighth International Conference on Liquid Atomization and Spray Systems
Development and Application of a New Spray Impingement Model Considering Film Formation in a Diesel Engine
Multidimensional Modeling of Spray Impingement in Modern Diesel Engines
,” SAE Technical Paper No. 2005-24-092.
Evaluation of Splash Models With High-Pressure Diesel Spray
(1998) ISO 4113, 2nd ed., 1998-11-15.
A Phenomenological Model of Diesel Spray Atomization
Proc. of the International Conference on Multiphase Flows
, Tsukuba, Japan.
Structure of High Pressure Fuel Sprays
Combustion and Spray Simulation of a DI Turbocharged Diesel Engine
2002 SAE Trans. J. Engines
Spray and Combustion Characteristics of Biodiesel∕Diesel Blended Fuel in a Direct Injection Common-Rail Diesel Engine
|
Worlds and Variables | Bean Machine
Worlds
A World is Bean Machine's internal representation of the state of the model. It can be thought of as a graph corresponding to a particular instantiation of the graphical model, or mathematically, a sample from the joint distribution of the model: world \sim p(data, random\_variables). When we run inference, we create worlds, each of which corresponds to a Monte Carlo sample of the posterior. New worlds are either accepted or rejected, which is (usually) determined by the accept-reject stage of the MH algorithm.
The World class provides a flexible interface for performing inference and representing the intermediate or final results of inference. To that end, it provides a few functionalities. Since a world is a representation of the model joint, it tracks which variables are latent and which are observed, and we can evaluate its density with its log_prob method which returns the joint log probability given its instantiated variables. A world can also be run as a Python context manager which allows the user to execute a model given a particular instantiation of a world. Ordinarily, a random_variable returns a function pointer to the variable, but under the world context, the actual variable is sampled since we are instantiating it inside a world:
@bm.random_variable
def foo():
    return Bernoulli(0.5)

pointer = foo()
assert isinstance(pointer, RVIdentifier)

world = World()
with world:
    # everything run inside the world context manager
    # is recorded in the world
    x = foo()

x == torch.tensor(1.)
x_var = world.get_variable(foo())
x_var.value == x
Since worlds are independent instantiations of the model, you can compose them interchangeably. This allows us to inspect and manipulate our model as we see fit. During MCMC inference, Bean Machine is constantly proposing new worlds in accordance with the proposal distribution, the collection of which form the posterior.
Variables
Variables are primitives that contain metadata about a given random variable defined by @bm.random_variable, such as the distribution it was sampled from, its parents and children, the sampled value of the variable, and its log density. They can represent latent or observed variables. Only latent variables are inferred during inference and the values of the Variables can change between inference iterations.
RVIdentifiers
Each random variable is associated with a unique key, an RVIdentifier. This is a pointer to the random variable and is implemented as a dataclass containing the random variable's Python function and arguments. Since the function arguments are a component of generating an RVIdentifier, the same callable can generate independent random variables by using different arguments:
foo(0) # this is one variable with an RVIdentifier
foo(1) # this is another variable with a different RVIdentifier
|
§ 160.2. Require the pupils to learn the meanings of these four verbs.
§ 161. The imperative mood is introduced at this point rather than later because of its being formed on the present stem, thus completing the formation of the active tenses on this stem in the indicative, infinitive, and imperative.
Conduct the vocabulary review like the first one (see p. 13). The number of words is less than usual to permit of more concentration on the review of the verb forms.
For reviewing the verb, place upon the board the following blank scheme and use a variety of verbs for drill on the different conjugations:
[Blank blackboard scheme for the verb review: brackets group the tense signs (imperfect -bā-; future -bi- for conjugations I and II, -ǎ-/-ě- for III and IV), with blank lines for the personal endings of the second and third persons in each column.]
§ 511. Make the review questions the basis of a written lesson.
Be sure that the active forms are thoroughly learned before taking up the passive.
§ 164. Require the pupils to write side by side the active and the passive personal endings for the purpose of comparison.
|
Instantaneous AGC - SEG Wiki
Instantaneous AGC is one of the most common gain types used. This gain function is computed as follows. First, the mean absolute value of trace amplitudes is computed within a specified time gate. Second, the ratio of the desired rms level to this mean value is assigned as the value of the gain function. Unlike the rms amplitude AGC, this value is assigned to any desired time sample of the gain function within the time gate, say the nth sample of the trace, rather than to the sample at the center of the gate. The next step is to move the time gate one sample down the trace and compute the value of the gain function for the (n + 1)th time sample, and so on. No interpolation is therefore required to define this gain function. Hence, the scaling function g(t) at the nth time sample is given by
{\displaystyle g(t)={\frac {\text{desired rms}}{{\frac {1}{N}}\sum \nolimits _{i=1}^{N}{\left|{x_{i}}\right|}}},}
Figure 1.4-11 A portion of a CMP stack before and after application of five different instantaneous AGC functions. The numbers on top indicate gain window sizes in milliseconds used in computing the AGC gain function described by equation ( 11 ).
Figure 1.4-11 shows the ungained data and four instantaneous AGC-gained sections. Gate lengths are indicated on top of each panel. Very small time gates can cause a significant loss of signal character by boosting zones that contain small amplitudes. This occurs with the 64-ms AGC output. In processing, this is called a fast AGC. In the other extreme, if a large time gate is selected, then the effectiveness of the AGC process is lessened. In practice, AGC time gates commonly are specified between 200 and 500 ms.
|
Positive Data Visualization Using Trigonometric Function
2012 Positive Data Visualization Using Trigonometric Function
Farheen Ibraheem, Maria Hussain, Malik Zawwar Hussain, Akhlaq Ahmad Bhatti
A C^1 piecewise rational trigonometric cubic function with four shape parameters has been constructed to address the problem of visualizing positive data. Simple data-dependent constraints on the shape parameters are derived to preserve positivity and assure smoothness. The method is then extended to positive surface data by a rational trigonometric bicubic function. The order of approximation of the developed interpolant is O(h_i^3).
Farheen Ibraheem. Maria Hussain. Malik Zawwar Hussain. Akhlaq Ahmad Bhatti. "Positive Data Visualization Using Trigonometric Function." J. Appl. Math. 2012 1 - 19, 2012. https://doi.org/10.1155/2012/247120
|
2020 Best Lag Window for Spectrum Estimation of Law Order MA Process
Ali Sami Rashid, Mohammed Jabber Hawas Allami, Ahmed Kareem Mutasher
In this article, we investigate spectrum estimation of low order moving average (MA) processes. The main tool is the lag window, which is one of the important components of the consistent form used to estimate the spectral density function (SDF). We show, based on a computer simulation, that the Blackman window is the best lag window to estimate the SDF of MA(1) and MA(2) at most values of the parameters \beta_i and series sizes n, except for a special case when \beta = -1 and n \ge 40 for MA(1). In addition, the Hanning–Poisson window appears as the best to estimate the SDF of MA(2) when \beta_1 = \beta_2 = -0.5 and n \ge 40.
Ali Sami Rashid. Mohammed Jabber Hawas Allami. Ahmed Kareem Mutasher. "Best Lag Window for Spectrum Estimation of Law Order MA Process." Abstr. Appl. Anal. 2020 1 - 10, 2020. https://doi.org/10.1155/2020/9352453
Received: 9 August 2019; Accepted: 6 February 2020; Published: 2020
|
Two-Scale Convergence of First-Order Operators | EMS Press
Two-Scale Convergence of First-Order Operators
Nguetseng's notion of two-scale convergence and some of its main properties are first briefly reviewed. The (weak) two-scale limit of the gradient of bounded sequences of W^{1,p}(\mathbb{R}^N) is then studied: if u_\varepsilon \to u weakly in W^{1,p}(\mathbb{R}^N), a sequence \{u_{1\varepsilon}\} is constructed such that u_{1\varepsilon}(x) \to u_1(x,y) and \nabla u_\varepsilon(x) \to \nabla u(x) + \nabla_y u_1(x,y) weakly two-scale. Analogous constructions are introduced for the weak two-scale limit of derivatives in the spaces W^{1,p}(\mathbb{R}^N)^N, L^2_{\mathrm{rot}}(\mathbb{R}^3)^3, L^2_{\mathrm{div}}(\mathbb{R}^N)^N and L^2_{\mathrm{div}}(\mathbb{R}^N)^{N^2}. The application to the two-scale limit of some classical equations of electromagnetism and continuum mechanics is outlined. These results are then applied to the homogenization of quasilinear elliptic equations like \nabla \times [A(u_\varepsilon(x), x, x/\varepsilon) \cdot \nabla \times u_\varepsilon] = f.
Augusto Visintin, Two-Scale Convergence of First-Order Operators. Z. Anal. Anwend. 26 (2007), no. 2, pp. 133–164
|
Multimodel Control Design - MATLAB & Simulink - MathWorks Australia
Control Design Overview
Create Model Arrays
Import Model Arrays to Control System Designer
What Is a Nominal Model?
Specify Nominal Model
Design Controller for Multiple Plant Models
Typically, the dynamics of a system are not known exactly and may vary. For example, system dynamics can vary because of:
Parameter value variations caused by manufacturing tolerances — For example, the resistance value of a resistor is typically within a range about the nominal value, 5 Ω +/– 5%.
Operating conditions — For example, aircraft dynamics change based on altitude and speed.
Any controller you design for such a system must satisfy the design requirements for all potential system dynamics.
To design a controller for a system with varying dynamics:
Sample the variations.
Create an LTI model for each sample.
Create an array of sampled LTI models.
Design a controller for a nominal representative model from the array.
Analyze the controller design for all models in the array.
If the controller design does not satisfy the requirements for all the models, specify a different nominal model and redesign the controller.
In Control System Designer, you can specify multiple models for any plant or sensor in the current control architecture using an array of LTI models (see Model Arrays). If you specify model arrays for more than one plant or sensor, the lengths of the arrays must match.
To create arrays for multimodel control design, you can:
Create multiple LTI models using the tf, ss, zpk, or frd commands.
% Specify model parameters.
k = 8:1:10;
T = 0.1:.05:.2;

% Create an array of LTI models.
for ct = 1:length(k)
    G(:,:,ct) = tf(1,[m,b,k(ct)]);
end
Create an array of LTI models using the stack command.
% Create individual LTI models.
G1 = tf(1, [1 1 8]);
G2 = tf(1, [1 1 9]);
G3 = tf(1, [1 1 10]);

% Combine models in an array.
G = stack(1,G1,G2,G3);
Perform batch linearizations at multiple operating points. Then export the computed LTI models to create an array of LTI models. See the example Reference Tracking of DC Motor with Parameter Variations (Simulink Control Design).
Sample an uncertain state-space (uss) model using usample (Robust Control Toolbox).
Compute a uss model from a Simulink® model. Then use usubs (Robust Control Toolbox) or usample (Robust Control Toolbox) to create an array of LTI models. See Obtain Uncertain State-Space Model from Simulink Model (Robust Control Toolbox).
Specify a core Simulink block to linearize to a uss (Robust Control Toolbox) or ufrd (Robust Control Toolbox) model. See Specify Uncertain Linearization for Core or Custom Simulink Blocks (Robust Control Toolbox).
To import models as arrays, you can pass them as input arguments when opening Control System Designer from the MATLAB® command line. For more information, see Control System Designer.
You can also import model arrays into Control System Designer when configuring the control architecture. In the Edit Architecture dialog box:
In the Value text box, specify the name of an LTI model from the MATLAB workspace.
To import block data from the MATLAB workspace or from a MAT-file in your current working directory, click .
The nominal model is a representative model in the array of LTI models that you use to design the controller in Control System Designer. Use the editor and analysis plots to visualize and analyze the effect of the controller on the remaining plants in the array.
You can select any model in the array as your nominal model. For example, you can choose a model that:
Represents the expected nominal operating point of your system.
Is an average of the models in the array.
Represents a worst-case plant.
Lies closest to the stability point.
You can plot and analyze the open-loop dynamics of the system on a Bode plot to determine which model to choose as nominal.
To select a nominal model from the array of LTI models, in Control System Designer, click Multimodel Configuration. Then, in the Multimodel Configuration dialog box, select a Nominal model index. The default index is 1.
For each plant or sensor that is defined as a model array, the app selects the model at the specified index as the nominal model. Otherwise, the app uses scalar expansion to apply the single LTI model for all model indices.
For example, for the following control architecture:
if G and H are both three-element arrays and the nominal model index is 2, the software uses the second element in both the arrays to compute the nominal model:
The nominal response from r to y is:
T=\frac{C{G}_{2}}{1+C{G}_{2}{H}_{2}}
The app also computes and plots the responses showing the effect of C on the remaining pairs of plant and sensor models — G1H1 and G3H3.
If only G is an array of LTI models, and the specified nominal model is 2, then the control architecture for nominal response is:
In this case, the nominal response from r to y is:
T=\frac{C{G}_{2}}{1+C{G}_{2}H}
The app also computes and plots the responses showing the effect of C on the remaining pairs of plant and sensor model — G1H and G3H.
The frequency response of a system is computed at a series of frequency values, called a frequency grid. By default, Control System Designer computes a logarithmically equally spaced grid based on the dynamic range of each model in the array.
Specify a custom frequency grid when:
The automatic grid has more points than you require. To improve computational efficiency, specify a less dense grid spacing.
The automatic grid is not sufficiently dense within a particular frequency range. For example, if the response does not capture the resonant peak dynamics of an underdamped system, specify a more dense grid around the corner frequency.
You are only interested in the response within specific frequency ranges. To improve computational efficiency, specify a grid that covers only the frequency ranges of interest.
For more information on specifying logarithmically spaced vectors, see logspace.
Modifying the frequency grid does not affect the frequency response computation for the nominal model. The app always uses the Auto select option to compute the nominal model frequency response.
This example shows how to design a compensator for a set of plant models using Control System Designer.
Create Array of Plant Models
Create an array of LTI plant models using the stack command.
% Create an array of LTI models to model plant (G) variations.
G1 = tf(1,[1 1 8]);
G2 = tf(1,[1 1 9]);
G3 = tf(1,[1 1 10]);
G = stack(1,G1,G2,G3);
Create Array of Sensor Models
Similarly, create an array of sensor models.
H1 = tf(1,[1/0.1,1]);
H2 = tf(1,[1/0.15,1]);
H3 = tf(1,[1/0.2,1]);
H = stack(1,H1,H2,H3);
Open Control System Designer, and import the plant and sensor model arrays.
controlSystemDesigner(G,1,H)
The app opens and imports the plant and sensor model arrays.
Configure Analysis Plot
To view the closed-loop step response in a larger plot, in Control System Designer, click on the small dropdown arrow on the IOTransfer_r2y: step plot and then select Maximize.
By default the step response shows only the nominal response. To display the individual responses for the other model indices, right-click the plot area, and select Multimodel Configuration > Individual Responses.
To view an envelope of all model responses, right-click the plot area, and select Multimodel Configuration > Bounds.
The plot updates to display the responses for the other models.
Select Nominal Model
On the Control System tab, click Multimodel Configuration.
In the Multimodel Configuration dialog box, specify a Nominal Model Index of 2.
The selected nominal model corresponds to the average system response.
Design Compensator
To design a compensator using the nominal model, you can use any of the supported Control System Designer Tuning Methods.
For this example, use the Compensator Editor to manually specify the compensator dynamics. Add an integrator to the compensator and set the compensator gain to 0.4. For more information, see Edit Compensator Dynamics.
The tuned controller produces a step response with minimal overshoot for the nominal models and a worst-case overshoot less than 10%.
|
Offline Deletion · USACO Guide
Offline Deletion
Authors: Benjamin Qi, Siyong Huang
Erasing from non-amortized insert-only data structures.
Gold - Disjoint Set Union
Offline Deleting from a Data Structure
Using a persistent data structure or rollbacks, you can simulate deletion from a data structure while only using insertion operations.
CP-Algorithms
Deleting from a data structure in O(T(n) log n)
includes code (but no explanation) for dynamic connectivity
15.5.4 - Dynamic Connectivity
Dynamic Connectivity is the most common problem using the deleting trick. The problem is to determine whether pairs of nodes are in the same connected component while edges are being inserted and removed.
Vertex Add Component Sum
Normal Show Tags DSUrb
DSU With Rollback
DSU with rollback is a subproblem required to solve the above task.
Persistent Union Find
Easy Show Tags DSUrb
explanation? check Guide to CP?
Warning: Watch Out!
Because path compression is amortized, it does not guarantee
\mathcal{O}(N \log^2 N)
runtime once rollbacks are involved. You must use merging by rank instead.
int p[MN], r[MN];                 // parent (0 = root) and rank
int *t[MN*40], v[MN*40], X;       // undo log: address, old value, log size
// record the old value of *a before writing b, so the write can be undone
void setv(int *a, int b) {if(*a != b) t[X] = a, v[X] = *a, *a = b, ++X;}
void rollback(int x) {for(;X>x;) --X, *t[X] = v[X];}
int find(int n) {return p[n] ? find(p[n]) : n;}  // no path compression
bool merge(int a, int b) {
	a = find(a), b = find(b);
	if(a == b) return false;
	if(r[a] < r[b]) std::swap(a, b);  // merge by rank
	setv(&p[b], a);
	if(r[a] == r[b]) setv(&r[a], r[a]+1);
	return true;
}
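For reference, here is a hypothetical Python translation of the rollback DSU above, using union by rank and an explicit undo log (the class and method names are ours, not from the original snippet):

```python
class RollbackDSU:
    """DSU without path compression; every write is logged so it can be undone."""

    def __init__(self, n):
        self.p = [0] * (n + 1)   # parent (0 = root), mirroring the array version
        self.r = [0] * (n + 1)   # rank
        self.log = []            # (array, index, old value) for each write

    def _set(self, arr, i, val):
        if arr[i] != val:
            self.log.append((arr, i, arr[i]))
            arr[i] = val

    def checkpoint(self):
        return len(self.log)

    def rollback(self, mark):
        # undo writes in reverse order until the log shrinks back to `mark`
        while len(self.log) > mark:
            arr, i, old = self.log.pop()
            arr[i] = old

    def find(self, n):
        return self.find(self.p[n]) if self.p[n] else n

    def merge(self, a, b):
        a, b = self.find(a), self.find(b)
        if a == b:
            return False
        if self.r[a] < self.r[b]:
            a, b = b, a              # attach the lower-rank root under the higher
        self._set(self.p, b, a)
        if self.r[a] == self.r[b]:
            self._set(self.r, a, self.r[a] + 1)
        return True
```

A caller takes a `checkpoint()` before a batch of `merge` operations and later calls `rollback(mark)` to restore exactly that state.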
Edu F - Extending Set of Points
Hard Show Tags Dynacon
Very Hard Show Tags D&C, DSUrb
|
Q0 (mathematical logic) - Wikipedia
Q0 (mathematical logic)
Q0 is Peter Andrews' formulation of the simply-typed lambda calculus, and provides a foundation for mathematics comparable to first-order logic plus set theory. It is a form of higher-order logic and closely related to the logics of the HOL theorem prover family.
The theorem proving systems TPS and ETPS are based on Q0. In August 2009, TPS won the first-ever competition among higher-order theorem proving systems.[1]
1 Axioms of Q0
1.1 About the axioms
2 Inference in Q0
Axioms of Q0
The system has just five axioms, which can be stated as:
{\displaystyle (1)}
{\displaystyle g_{oo}T\land g_{oo}F=\forall x_{o}\centerdot g_{oo}x_{o}}
{\displaystyle (2^{\alpha })}
{\displaystyle [x_{\alpha }=y_{\alpha }]\supset \centerdot \,h_{o\alpha }x_{\alpha }=h_{o\alpha }y_{\alpha }}
{\displaystyle (3^{\alpha \beta })}
{\displaystyle f_{\alpha \beta }=g_{\alpha \beta }=\forall x_{\beta }\centerdot f_{\alpha \beta }x_{\beta }=g_{\alpha \beta }x_{\beta }}
{\displaystyle (4)}
{\displaystyle [\lambda \mathbf {x_{\alpha }} \mathbf {B} _{\beta }]\mathbf {A} _{\alpha }=\mathbf {S} _{A_{\alpha }}^{\mathbf {x} _{\alpha }}\mathbf {B} _{\beta }}
{\displaystyle (5)}
{\displaystyle \iota _{i(oi)}[{\text{Q}}_{oii}y_{i}]=y_{i}\,}
(Axioms 2, 3, and 4 are axiom schemas—families of similar axioms. Instances of Axiom 2 and Axiom 3 vary only by the types of variables and constants, but instances of Axiom 4 can have any expression replacing A and B.)
The subscripted "o" is the type symbol for boolean values, and subscripted "i" is the type symbol for individual (non-boolean) values. Sequences of these represent types of functions, and can include parentheses to distinguish different function types. Subscripted Greek letters such as α and β are syntactic variables for type symbols. Bold capital letters such as A, B, and C are syntactic variables for WFFs, and bold lower case letters such as x, y are syntactic variables for variables. S indicates syntactic substitution at all free occurrences.
The only primitive constants are Q((oα)α), denoting equality of members of each type α, and ℩(i(oi)), denoting a description operator for individuals, the unique element of a set containing exactly one individual. The symbols λ and brackets ("[" and "]") are syntax of the language. All other symbols are abbreviations for terms containing these, including quantifiers ∀ and ∃.
In Axiom 4, x must be free for A in B, meaning that the substitution does not cause any occurrences of free variables of A to become bound in the result of the substitution.
About the axioms
Axiom 1 expresses the idea that T and F are the only boolean values.
Axiom schemas 2α and 3αβ express fundamental properties of functions.
Axiom schema 4 defines the nature of λ notation.
Axiom 5 says that the selection operator is the inverse of the equality function on individuals. (Given one argument, Q maps that individual to the set/predicate containing the individual. In Q0, x = y is an abbreviation for Qxy, which is an abbreviation for (Qx)y.)
In Andrews 2002, Axiom 4 is developed in five subparts that break down the process of substitution. The axiom as given here is discussed as an alternative and proved from the subparts.
Inference in Q0
Q0 has a single rule of inference.
Rule R. From C and Aα = Bα to infer the result of replacing one occurrence of Aα in C by an occurrence of Bα, provided that the occurrence of Aα in C is not (an occurrence of a variable) immediately preceded by λ.
Derived rule of inference R′ enables reasoning from a set of hypotheses H.
Rule R′. If H ⊦ Aα = Bα, and H ⊦ C, and D is obtained from C by replacing one occurrence of Aα by an occurrence of Bα, then H ⊦ D, provided that:
The occurrence of Aα in C is not an occurrence of a variable immediately preceded by λ, and
no variable free in Aα = Bα and a member of H is bound in C at the replaced occurrence of Aα.
Note: The restriction on replacement of Aα by Bα in C ensures that any variable free in both a hypothesis and Aα = Bα continues to be constrained to have the same value in both after the replacement is done.
The Deduction Theorem for Q0 shows that proofs from hypotheses using Rule R′ can be converted into proofs without hypotheses and using Rule R.
Unlike some similar systems, inference in Q0 replaces a subexpression at any depth within a WFF with an equal expression. So for example given axioms:
1. ∃x Px
2. Px ⊃ Qx
and the fact that A ⊃ B ≡ (A ≡ A ∧ B), we can proceed without removing the quantifier:
3. Px ≡ (Px ∧ Qx) instantiating for A and B
4. ∃x (Px ∧ Qx) rule R substituting into line 1 using line 3.
^ The CADE-22 ATP System Competition (CASC-22)
Andrews, Peter B. (2002). An Introduction to Mathematical Logic and Type Theory: To Truth Through Proof (2nd ed.). Dordrecht, The Netherlands: Kluwer Academic Publishers. ISBN 1-4020-0763-9. See also [1]
Church, Alonzo (1940). "A Formulation of the Simple Theory of Types" (PDF). Journal of Symbolic Logic. 5: 56–58. doi:10.2307/2266170. Archived from the original (PDF) on 2019-01-12.
A description of Q0 in more depth; part of an article on Church's Type Theory in the Stanford Encyclopedia of Philosophy.
An overview on mathematical logics (including various successors of Q0): Foundations of Mathematics. Genealogy and Overview doi:10.4444/100.111.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Q0_(mathematical_logic)&oldid=1033351199"
|
Valuation in finance
Each cash flow is discounted back to its present value
{\displaystyle {\frac {R_{t}}{(1+i)^{t}}}}
where
{\displaystyle t}
is the time of the cash flow,
{\displaystyle i}
is the discount rate, i.e. the return that could be earned per unit of time on an investment with similar risk, and
{\displaystyle R_{t}}
is the net cash flow (cash received minus cash paid out) at time
{\displaystyle t}
. Summing the discounted cash flows over the total number of periods
{\displaystyle N}
gives the net present value
{\displaystyle \mathrm {NPV} }
:
{\displaystyle \mathrm {NPV} (i,N)=\sum _{t=0}^{N}{\frac {R_{t}}{(1+i)^{t}}}}
When every period produces the same cash flow
{\displaystyle R}
, the
{\displaystyle \mathrm {NPV} }
is a geometric series with the closed form
{\displaystyle \mathrm {NPV} (i,N,R)=R\left({\frac {1-\left({\frac {1}{1+i}}\right)^{N+1}}{1-\left({\frac {1}{1+i}}\right)}}\right),\quad i\neq 0}
The initial cash flow
{\displaystyle R_{0}}
is usually negative (the purchase price of the investment), and
{\displaystyle R_{0}}
is not discounted, since it occurs at time zero.
The discount rate
Use in decision making
Interpretation as integral transform
Taking the discrete formula
{\displaystyle \mathrm {NPV} (i,N)=\sum _{t=0}^{N}{\frac {R_{t}}{(1+i)^{t}}}}
to the continuum limit, with a cash-flow rate r(t) and an infinite horizon, gives
{\displaystyle \mathrm {NPV} (i)=\int _{t=0}^{\infty }(1+i)^{-t}\cdot r(t)\,dt}
which has the form of a Laplace transform
{\displaystyle F(s)=\left\{{\mathcal {L}}f\right\}(s)=\int _{0}^{\infty }e^{-st}f(t)\,dt}
with s = ln(1 + i), since (1 + i)^{-t} = e^{-t\ln(1+i)}.
For example, consider a project that costs 100,000 up front and returns 10,000 at the end of each of the next 12 years, discounted at 10%. The discounted cash flows are
{\displaystyle {\frac {-100,000}{(1+0.10)^{0}}},\ {\frac {10,000}{(1+0.10)^{1}}},\ {\frac {10,000}{(1+0.10)^{2}}},\ \ldots ,\ {\frac {10,000}{(1+0.10)^{12}}}}
Summing the inflows and comparing with the outflow:
{\displaystyle \mathrm {NPV} =PV({\text{benefits}})-PV({\text{costs}})}
{\displaystyle \mathrm {NPV} =68,136.91-100,000}
{\displaystyle \mathrm {NPV} =-31,863.09}
Since the net present value is negative, the investment should not be undertaken at this discount rate.
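The figures in this example can be reproduced with a short Python sketch (the cash-flow amounts and the 10% rate are taken from the example above; the variable names are ours):

```python
i = 0.10
# t = 0 outflow of 100,000, then 12 yearly inflows of 10,000
cash_flows = [-100_000] + [10_000] * 12

npv = sum(r / (1 + i) ** t for t, r in enumerate(cash_flows))
pv_benefits = sum(10_000 / (1 + i) ** t for t in range(1, 13))

# Closed form for the 12 level payments (geometric series):
annuity = 10_000 * (1 - (1 / (1 + i)) ** 12) / i

print(pv_benefits)  # ~68,136.9
print(npv)          # ~-31,863.1
```

The direct sum and the geometric-series closed form agree, confirming the arithmetic of the worked example.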
Alternative capital budgeting methods
|
Thermodynamics And Thermochemistry, Popular Questions: Jee Online course CHEMISTRY, Chemistry - Meritnation
The enthalpy of neutralization of acetic acid and sodium hydroxide is −55.4 kJ. What is the enthalpy of ionisation of acetic acid?
For the formation of CH4, if ΔU = −x kJ/mol, then what will be the value of ΔH?
Three thermochemical equations are given below:
(i) C(graphite) + O2(g) → CO2(g); ΔrH° = x kJ mol⁻¹
(ii) C(graphite) + ½O2(g) → CO(g); ΔrH° = y kJ mol⁻¹
(iii) CO(g) + ½O2(g) → CO2(g); ΔrH° = z kJ mol⁻¹
Based on the above equations, find out which of the relationships given below is correct:
(1) x = y − z
(2) z = x + y
In Q. 52 the answer given is C, but I am getting A. Explain why.
When 20 ml of a strong acid is mixed with 20 ml of a strong alkali, the temperature rises by 10 degrees.
What would be the temperature rise if 200 ml of each liquid are mixed?
1)5degree
2)10degree
4)0.10degree
2. When an ideal gas is compressed adiabatically and reversibly, the final temperature is
(1) Higher than the initial temperature
(2) Lower than the initial temperature
(3) The same as the initial temperature
(4) Dependent on the rate of compression
for an isothermal expansion of an ideal gas
change in E=0
q=0
change in V=0
in which case entropy increases??
combustion of methane gas
diamond----> graphite
Vortega asked a question
the enthalpy change for the reaction 2H2 + O2--> 2H2O is 574kJ . what is the heat of formation of water
Find the work done when 1 mole of hydrogen expands isothermally from 15 to 50 litres against a constant pressure of 1atm at 25C
Propane (C3H8) is used for heating water for domestic supply. Assume that 150 kg of hot water per day must be heated from 10°C to 65°C. How many moles of propane are used for heating this amount of water?
(ΔcH for propane = −2050 kJ/mol, specific heat of water c = 4.184 × 10⁻³ kJ g⁻¹ K⁻¹)
derive the relation between change in internal energy and change in enthalpy for a system in which the reactants and products are gases
The efficiency of a Carnot engine having source temperature 127°C and sink temperature 27°C is
What is the meaning of thermodynamic function?
Q. ∆S_Total = −40 kJ/(mol·K), ∆H_sys = 2000 kJ/mol, T = 400 K. Find out the value of ∆S_system.
if deltaH depends on temperature how can we define any reaction to be spontaneous or not on the basis of temperature only by gibbs energy also if any reaction is not carried out at constant pressure what value is to be put in in gibbs equation
QUESTION NO 22
22. When 1 g of anhydrous oxalic acid is burnt at 25°C, the amount of heat liberated is 2.835 kJ. ∆H of combustion is (oxalic acid: C2H2O4)
(1) −255.15 kJ (2) −445.65 kJ
(3) −295.24 kJ (4) −155.16 kJ
An ideal gas expands in volume from 1 x 10^-3 m cube to 1x 10^-2 m cube at 300 K against a constant pressure of 1 x 10^5 N m^-2 . The work done is ? A) 270 kJ . B) -900 kJ . C) -900 J . D ) 900 kJ .
Given that C + O2 → CO2, ∆H° = −x kJ
2CO + O2 → 2CO2, ∆H° = −y kJ
what is heat of formation of CO?
(1) (y − 2x)/2 (2) 2x − y
(3) y − 2x (4) (2x − y)/2
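The algebra in that last question can be sanity-checked numerically. The sketch below plugs in the illustrative values x = 393.5 and y = 566.0 kJ (standard combustion enthalpies, not given in the question) and confirms that option (1), (y − 2x)/2, follows from Hess's law:

```python
# Hess's law:  C + O2 -> CO2,     dH = -x
#              2CO + O2 -> 2CO2,  dH = -y
# Target: C + 1/2 O2 -> CO  =  (first equation) - 1/2 (second equation)
x = 393.5  # kJ, illustrative value for combustion of carbon
y = 566.0  # kJ, illustrative value for combustion of 2 mol CO

dHf_CO = -x - (-y) / 2  # subtract half the second equation

# algebraically this is (y - 2x)/2, option (1)
print(dHf_CO)  # -110.5, the accepted heat of formation of CO in kJ/mol
```

The numerical answer of −110.5 kJ/mol matches the tabulated enthalpy of formation of CO, which is a useful cross-check on the sign conventions.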
What is the difference between isothermal and adiabatic process? Please explain with examples.
In alternative (c) why is it giving E1CB reaction even though Br- is not a strong base? Please explain in detail.
Is it true that no activation energy at all is needed for an exothermic reaction, since energy is liberated? We were told this is false, but there are many reactions in chemistry where the energy evolved in the first reaction is used to carry out the second reaction. I am getting confused, please help.
We define a spontaneous reaction as one which takes place on its own without the aid of any external agency, but an endothermic reaction, which absorbs energy (an external aid), is also spontaneous at higher temperature?
Determine the enthalpy of combustion of CH4 at 298 K if the following is given: C + O2 = CO2; ΔH = −393, H2 + ½O2 = H2O; ΔH = −285.8, CO2 + 2H2O = CH4 + 2O2; ΔH = 890.3
PLEASE EXPLAIN 61 TH QUES
if i try to solve it with equation deltaG =deltaH-TdeltaS option 4 is correct but in same equation i put T=0 option 3 is correct getting confused
PLEASE EXPLAIN 58TH QUES
Anubhav Das asked a question
in zeroth law of thermodynamic can we state thermal equilibrium as dyanamic equlibrium?
Jaya Gupta asked a question
is it true that enthalpy of reaction is not same as heat of reaction
|
Algebraic equation - Simple English Wikipedia, the free encyclopedia
In mathematics, an algebraic equation, also called a polynomial equation over a given field, is an equation of the form
{\displaystyle P=Q}
where P and Q are polynomials over that field, and have one (univariate) or more than one (multivariate) variables. For example:
{\displaystyle y^{4}+{\frac {xy}{2}}={\frac {x^{3}}{3}}-xy^{2}+y^{2}-{\frac {1}{7}}}
is an algebraic equation over the rational numbers.
Two equations are called equivalent if they have the same set of solutions. This means that all solutions of the second equation must also be solutions of the first one and vice versa. The equation
{\displaystyle P=Q}
{\displaystyle P-Q=0}
. So the study of algebraic equations is equivalent to the study of polynomials.
If an algebraic equation is over the rationals, it can always be converted to an equivalent one, where all the coefficients are integers. For example, in the equation given above, we multiply by 42 = 2·3·7 and group the terms in the first member. The equation is converted to
{\displaystyle 42y^{4}+21xy-14x^{3}+42xy^{2}-42y^{2}+6=0}
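That the two equations are equivalent can be checked mechanically with exact rational arithmetic; the helper functions below are illustrative, not part of the article:

```python
from fractions import Fraction as F

def original(x, y):
    # y^4 + xy/2 - (x^3/3 - x*y^2 + y^2 - 1/7); zero exactly on solutions
    return y**4 + x * y / 2 - (x**3 / 3 - x * y**2 + y**2 - F(1, 7))

def converted(x, y):
    # multiplied through by 42 = 2*3*7 and moved to one side
    return 42 * y**4 + 21 * x * y - 14 * x**3 + 42 * x * y**2 - 42 * y**2 + 6

# The two polynomials differ exactly by the factor 42 at every point,
# so they vanish at exactly the same (x, y): the equations are equivalent.
for x in (F(0), F(1), F(-2), F(3, 5)):
    for y in (F(0), F(1), F(1, 2)):
        assert converted(x, y) == 42 * original(x, y)
```

Because 42 is a nonzero constant, multiplying by it changes neither the solution set nor the field over which the equation is considered.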
The solutions of an equation are the values of the variables for which the equation is true. For algebraic equations the solutions are also called roots. When solving an equation, we need to say in which set the solutions are allowed. For example, for an equation over the rationals, one may look for solutions in the integers; then the equation is a Diophantine equation. One may also look for solutions in the field of complex numbers, or in the real numbers.
Ancient mathematicians wanted the solutions of univariate equations (that is, equations with one variable) in the form of radical expressions, like
{\displaystyle x={\frac {1+{\sqrt {5}}}{2}}}
for the positive solution of
{\displaystyle x^{2}+x-1=0}
. The ancient Egyptians knew how to solve equations of degree 2 (that is, equations in which the highest power of the variable is 2) in this manner. During the Renaissance, Gerolamo Cardano solved the equation of degree 3 and Lodovico Ferrari solved the equation of degree 4. Finally Niels Henrik Abel proved in 1824 that the equation of degree 5 and equations of higher degree cannot always be solved by using radicals. Galois theory, named after Évariste Galois, was introduced to give criteria for deciding whether an equation is solvable using radicals.
Eric W. Weisstein, Algebraic Equation at MathWorld.
Retrieved from "https://simple.wikipedia.org/w/index.php?title=Algebraic_equation&oldid=5434356"
|
Find each sum or difference without a calculator.
\frac { 7 } { 10 } + \frac { 2 } { 3 }
To add the fractions together, start by finding the least common multiple of 3 and 10. Then use that number as a common denominator. This table lists multiples of 3 and 10:
\left. \begin{array} { | c | c | c | c | c | c | c | c | c | c | c | } \hline 3 & { 6 } & { 9 } & { 12 } & { 15 } & { 18 } & { 21 } & { 24 } & { 27 } & { 30 } & { 33 } \\ \hline 10 & { 20 } & { 30 } & { 40 } & { 50 } & { 60 } & { 70 } & { 80 } & { 90 } & { 100 } & { 110 } \\ \hline \end{array} \right.
The least common multiple is 30. Rewrite each fraction to have a denominator of 30 by using Giant Ones, as shown below. Can you add the fractions now?
\frac{7}{10}\cdot\frac{3}{3}+\frac{2}{3}\cdot\frac{10}{10}=\frac{21}{30}+\frac{20}{30}=\frac{41}{30}\text{ or }1\frac{11}{30}
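The result is easy to confirm with Python's exact rational arithmetic:

```python
from fractions import Fraction

total = Fraction(7, 10) + Fraction(2, 3)
print(total)  # 41/30, i.e. the mixed number 1 11/30
```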
0.9−0.04
Whenever adding or subtracting decimals, be sure to line up the decimal points! It may help to write this problem like this:
\begin{array}{r} 0.90 \\ -0.04\\ \hline \; ? \; ? ? \end{array}
Also, remember that the decimal point will fall in the same place in your answer.
3 \frac { 1 } { 4 } + 2 \frac { 11 } { 12 }
For example, 3\frac{1}{4} = \frac{12}{4}+\frac{1}{4}=\frac{13}{4}. Rewrite 2\frac{11}{12} as a fraction greater than one. Then follow the example in Part (a).
14\frac{1}{3}-9\frac{1}{5}
14\frac{1}{3}-9\frac{1}{5}=\frac{43}{3}-\frac{46}{5}=\frac{5}{5}\cdot\frac{43}{3}-\frac{3}{3}\cdot\frac{46}{5}=\frac{?}{15}-\frac{?}{15}=\frac{?}{15}
|
2014 Solvability for a Discrete Fractional Three-Point Boundary Value Problem at Resonance
This paper is concerned with the existence of solutions to a discrete three-point boundary value problem at resonance involving the Riemann-Liouville fractional difference of order
\alpha \in \left(0,1\right]
. Under certain suitable nonlinear growth conditions imposed on the nonlinear term, the existence result is established by using the coincidence degree continuation theorem. Additionally, a representative example is presented to illustrate the effectiveness of the main result.
Weidong Lv. "Solvability for a Discrete Fractional Three-Point Boundary Value Problem at Resonance." Abstr. Appl. Anal. 2014 (SI58) 1 - 7, 2014. https://doi.org/10.1155/2014/601092
Weidong Lv "Solvability for a Discrete Fractional Three-Point Boundary Value Problem at Resonance," Abstract and Applied Analysis, Abstr. Appl. Anal. 2014(SI58), 1-7, (2014)
|
On harmonic Hardy and Bergman spaces
October, 2002 On harmonic Hardy and Bergman spaces
In this paper we use an identity of Hardy-Stein type in investigations of the harmonic Hardy
{H}^{p}\left(B\right)
and harmonic Bergman
{b}^{p}\left(B\right)
spaces.
Stevo STEVIĆ. "On harmonic Hardy and Bergman spaces." J. Math. Soc. Japan 54 (4) 983 - 996, October, 2002. https://doi.org/10.2969/jmsj/1191592001
Keywords: Bergman space , Hardy space , Harmonic functions
Stevo STEVIĆ "On harmonic Hardy and Bergman spaces," Journal of the Mathematical Society of Japan, J. Math. Soc. Japan 54(4), 983-996, (October, 2002)
|
$\mathbf{L^q}$ estimates of weak solutions to the stationary Stokes equations around a rotating body
July, 2006 $\mathbf{L^q}$ estimates of weak solutions to the stationary Stokes equations around a rotating body
We establish the existence, uniqueness and
{L}^{q}
estimates of weak solutions to the stationary Stokes equations with rotation effect both in the whole space and in exterior domains. The equation arises from the study of viscous incompressible flows around a body that is rotating with a constant angular velocity, and it involves an important drift operator with unbounded variable coefficient that causes some difficulties.
Toshiaki HISHIDA. "$\mathbf{L^q}$ estimates of weak solutions to the stationary Stokes equations around a rotating body." J. Math. Soc. Japan 58 (3) 743 - 767, July, 2006. https://doi.org/10.2969/jmsj/1156342036
Keywords: $L^q$ estimate , exterior domain , rotating body , Stokes flow
Toshiaki HISHIDA "$\mathbf{L^q}$ estimates of weak solutions to the stationary Stokes equations around a rotating body," Journal of the Mathematical Society of Japan, J. Math. Soc. Japan 58(3), 743-767, (July, 2006)
|
Calculus/Power series - Wikibooks, open books for an open world
Calculus/Power series
← Taylor series Calculus Sequences and Series/Exercises →
The study of power series is aimed at investigating series which can approximate some function over a certain interval.
Wikipedia has related information at Power series
Elementary calculus (differentiation) is used to obtain information on a line which touches a curve at one point (i.e. a tangent). This is done by calculating the gradient, or slope of the curve, at a single point. However, this does not provide us with reliable information on the curve's actual value at given points in a wider interval. This is where the concept of power series becomes useful.
Consider the curve of
{\displaystyle y=\cos(x)}
{\displaystyle x=0}
. A naïve approximation would be the line
{\displaystyle y=1}
. However, for a more accurate approximation, observe that
{\displaystyle \cos(x)}
looks like an inverted parabola around
{\displaystyle x=0}
- therefore, we might think about which parabola could approximate the shape of
{\displaystyle \cos(x)}
near this point. This curve might well come to mind:
{\displaystyle y=1-{\frac {x^{2}}{2}}}
In fact, this is the best estimate for
{\displaystyle \cos(x)}
which uses polynomials of degree 2 (i.e. a highest term of
{\displaystyle x^{2}}
) - but how do we know this is true? This is the study of power series: finding optimal approximations to functions using polynomials.
A power series (in one variable) is an infinite series of the form
{\displaystyle f(x)=a_{0}(x-c)^{0}+a_{1}(x-c)^{1}+a_{2}(x-c)^{2}+\cdots }
{\displaystyle c}
is a constant)
{\displaystyle f(x)=\sum _{n=0}^{\infty }a_{n}(x-c)^{n}}
Radius of convergence
When using a power series as an alternative method of calculating a function's value, the equation
{\displaystyle f(x)=\sum _{n=0}^{\infty }a_{n}(x-c)^{n}}
can only be used to study
{\displaystyle f(x)}
where the power series converges - this may happen for a finite range, or for all real numbers.
The size of the interval (around its center) in which the power series converges to the function is known as the radius of convergence.
{\displaystyle {\frac {1}{1-x}}=\sum _{n=0}^{\infty }x^{n}}
(a geometric series)
this converges when
{\displaystyle |x|<1}
, that is, for
{\displaystyle -1<x<1}
, so the radius of convergence - centered at 0 - is 1. It should also be observed that at the extremities of the radius, that is where
{\displaystyle x=1}
{\displaystyle x=-1}
, the power series does not converge.
{\displaystyle e^{x}=\sum _{n=0}^{\infty }{\frac {x^{n}}{n!}}}
Using the ratio test, this series converges when the ratio of successive terms is less than one:
{\displaystyle \lim _{n\to \infty }\left|{\frac {x^{n+1}}{(n+1)!}}{\frac {n!}{x^{n}}}\right|<1}
{\displaystyle =\lim _{n\to \infty }\left|{\frac {x^{n}x^{1}}{n!(n+1)}}{\frac {n!}{x^{n}}}\right|<1}
{\displaystyle =\lim _{n\to \infty }\left|{\frac {x}{n+1}}\right|<1}
which is always true - therefore, this power series has an infinite radius of convergence. In effect, this means that the power series can always be used as a valid alternative to the original function,
{\displaystyle e^{x}}
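Since the radius of convergence is infinite, partial sums of this series approximate the exponential at any fixed point; a small illustrative sketch (the helper name is ours):

```python
import math

def exp_series(x, terms):
    """Partial sum of sum_{n >= 0} x^n / n!."""
    return sum(x**n / math.factorial(n) for n in range(terms))

# Even for a moderately large |x|, enough terms reproduce exp(x):
for x in (0.5, -1.0, 3.0):
    assert abs(exp_series(x, 30) - math.exp(x)) < 1e-9
```

The number of terms needed grows with |x|, but for every x the tail |x|^n/n! eventually shrinks faster than any geometric sequence, which is exactly why the radius of convergence is infinite.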
If we use the ratio test on an arbitrary power series, we find it converges when
{\displaystyle \lim _{n\to \infty }{\frac {|a_{n+1}x|}{|a_{n}|}}<1}
and diverges when
{\displaystyle \lim _{n\to \infty }{\frac {|a_{n+1}x|}{|a_{n}|}}>1}
The radius of convergence is therefore
{\displaystyle r=\lim _{n\to \infty }{\frac {|a_{n}|}{|a_{n+1}|}}}
If this limit diverges to infinity, the series has an infinite radius of convergence.
Within its radius of convergence, a power series can be differentiated and integrated term by term.
{\displaystyle {\frac {d}{dx}}\left[\sum _{n=0}^{\infty }a_{n}(x-c)^{n}\right]=\sum _{n=0}^{\infty }a_{n+1}(n+1)(x-c)^{n}}
{\displaystyle \int \sum _{n=0}^{\infty }a_{n}(x-c)^{n}dx=\sum _{n=1}^{\infty }{\frac {a_{n-1}(x-c)^{n}}{n}}+k}
Both the differential and the integral have the same radius of convergence as the original series.
This allows us to sum exactly suitable power series. For example,
{\displaystyle {\frac {1}{1+x}}=1-x+x^{2}-x^{3}\pm \cdots }
This is a geometric series, which converges for
{\displaystyle |x|<1}
. Integrating both sides, we get
{\displaystyle \ln(1+x)=x-{\frac {x^{2}}{2}}+{\frac {x^{3}}{3}}\pm \cdots }
which will also converge for
{\displaystyle |x|<1}
{\displaystyle x=-1}
this is the harmonic series, which diverges; when
{\displaystyle x=1}
this is an alternating series with diminishing terms, which converges to
{\displaystyle \ln(2)}
- this is testing the extremities.
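The slow convergence at x = 1 is easy to see numerically (an illustrative sketch; the alternating-series error bound, error less than the first omitted term, guarantees the tolerance used below):

```python
import math

def ln1p_series(x, terms):
    # x - x^2/2 + x^3/3 - ... up to the x^terms term
    return sum((-1) ** (n + 1) * x**n / n for n in range(1, terms + 1))

approx = ln1p_series(1.0, 1000)
# alternating series with decreasing terms: error < next term = 1/1001
assert abs(approx - math.log(2)) < 1e-3
```

Inside the radius, for example at x = 0.5, far fewer terms give far more accuracy, illustrating that convergence degrades toward the boundary of the interval.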
It also lets us write series for integrals we cannot do exactly such as the error function:
{\displaystyle e^{-x^{2}}=\sum (-1)^{n}{\frac {x^{2n}}{n!}}}
The left hand side can not be integrated exactly, but the right hand side can be.
{\displaystyle \int \limits _{0}^{z}e^{-x^{2}}dx=\sum {\frac {(-1)^{n}z^{2n+1}}{(2n+1)n!}}}
This gives us a series for the sum, which has an infinite radius of convergence, letting us approximate the integral as closely as we like.
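For example, at z = 1 a handful of terms already matches the exact value, which the standard library exposes as √π/2 · erf(1) (illustrative sketch):

```python
import math

def gauss_integral_series(z, terms):
    """Partial sum of sum_n (-1)^n z^(2n+1) / ((2n+1) n!)."""
    return sum((-1) ** n * z ** (2 * n + 1) / ((2 * n + 1) * math.factorial(n))
               for n in range(terms))

# integral of exp(-x^2) from 0 to 1, via the error function
exact = math.sqrt(math.pi) / 2 * math.erf(1.0)
assert abs(gauss_integral_series(1.0, 20) - exact) < 1e-12
```

Twenty terms already agree with the exact value to machine-level precision, reflecting the factorial decay of the coefficients.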
Note that this is not a power series, as the power of
{\displaystyle z}
is not the index.
"Decoding the Rosetta Stone" article by Jack W. Crenshaw 2005-10-12
Retrieved from "https://en.wikibooks.org/w/index.php?title=Calculus/Power_series&oldid=3373660"
|
Natural Convection Analysis Around Elliptical Shapes With Sinusoidal Heat Flux Using CFD | FEDSM | ASME Digital Collection
Annette G. Kamstrup,
Anna Marie A. Pedersen,
Frederik M. Elimar,
Lasse Olsen,
Kamstrup, AG, Pedersen, AMA, Elimar, FM, Olsen, L, Sørensen, H, & Hærvig, J. "Natural Convection Analysis Around Elliptical Shapes With Sinusoidal Heat Flux Using CFD." Proceedings of the ASME 2020 Fluids Engineering Division Summer Meeting collocated with the ASME 2020 Heat Transfer Summer Conference and the ASME 2020 18th International Conference on Nanochannels, Microchannels, and Minichannels. Volume 2: Fluid Mechanics; Multiphase Flows. Virtual, Online. July 13–15, 2020. V002T03A001. ASME. https://doi.org/10.1115/FEDSM2020-20100
Heat transfer is important as technology becomes more compact, thereby increasing the heat flux and consequently the need for cooling. This paper will investigate natural convection on an elliptical shape with sinusoidal heat flux input. Natural convection was analysed using CFD simulations on an ellipse, with minor- to major axis ratio b/a = 0.6 and an inclination angle of α = 90°. The sinusoidal heat flux was non-dimensionalised by a modified Grashof number Gr* = g · β · (∂T/∂n) · (Lc)4/v2 with a mean value of 2 × 107, amplitude of up to 2 × 107, and dimensionless angular frequency ω* = ω ·
Lc2
/v = 24, 36, and 72. All simulations were made with a Prandtl number of Pr = 7.0. To ensure reliable results a Grid Convergence Index analysis was carried out. Validation was made by comparing the obtained surface averaged Nusselt numbers to previous studies and results from performed experiments. An experiment using Particle Image Velocimetry, PIV, measured the flow field around an ellipse. The results from the sinusoidal heat flux showed that the difference in accounting for the sinusoidal Grashof function was up to 10% for the time-surface averaged temperature and time-surface averaged Nusselt number. Generally, the amplitude would increase the temperature, while the effect of the dimensionless angular frequency was dependent on the given amplitude.
Computational fluid dynamics, Heat flux, Natural convection, Shapes, Engineering simulation, Simulation, Temperature, Accounting, Cooling, Flow (Dynamics), Heat transfer, Particulate matter, Prandtl number
Field Validation of a Systematic Approach to Modeling of Glass Delivery Systems
|
Inference | Bean Machine
Inference is the process of combining a model with data to obtain insights, in the form of probability distributions over values of interest.
A little note on vocabulary: You've already seen in Modeling that the model in Bean Machine is comprised of random variable functions. In Bean Machine, the data is built up of a dictionary mapping random variable functions to their observed values, and insights take the form of discrete samples from a probability distribution. We refer to the random variables for which we're learning distributions as queried random variables.
Let's make this concrete by returning to our disease modeling example. As a refresher, here's the full model:
import beanmachine.ppl as bm
import torch.distributions as dist
from datetime import date, timedelta

reproduction_rate_rate = 10.0
num_init = 1087980
time = [date(2021, 1, 1), date(2021, 1, 2), date(2021, 1, 3)]

@bm.random_variable
def reproduction_rate():
    return dist.Exponential(rate=reproduction_rate_rate)

@bm.functional
def num_total(today):
    if today <= time[0]:
        return num_init
    yesterday = today - timedelta(days=1)
    return num_new(today) + num_total(yesterday)

@bm.random_variable
def num_new(today):
    yesterday = today - timedelta(days=1)
    return dist.Poisson(reproduction_rate() * num_total(yesterday))
Prior and Posterior Distributions
\text{Exponential}
distribution used here represents our beliefs about the disease's reproduction rate before seeing any data, and is known as the prior distribution. We've visualized this distribution previously: it represents a reproduction rate that is around 10% on average, but could be as high as 50%, and is highly right-skewed (the right side has a long tail). Values associated with prior distributions (here reproduction_rate()) are known as latent variables.
While the prior distribution encodes our prior beliefs, inference will perform the important task of adjusting latent variable values so that they balance both our prior belief and our knowledge from observed data. We refer to this distribution, after conditioning on observed data, as a posterior distribution. And the remaining parts of the generative model, which determine the notion of consistency used to match the latent variables with the observations, are collectively called the likelihood terms of the model (here consisting of num_total(today) and num_new(today)). The way inference is performed depends upon the specific numerical method used, but it does always mean that inferred distributions will blend smoothly from resembling your prior distribution, when there is little data observed, to more wholly representing your observed data, when there are many observations.
Binding Data
Inference requires us to bind data to the model in order to learn posterior distributions for our queried random variables. This is achieved by passing an observations dictionary to Bean Machine at inference time. Instead of sampling from random variables contained in that dictionary, Bean Machine will consider them to take on the constant values provided, and will try to find values for other random variables in your model that are consistent with the observations. For this example model, we can bind a few days of data as follows, taking care to match the
\text{Poisson}
distributions in num_new() with the corresponding increases in infection counts which they're modelling:
case_history = tensor([num_init, 1381734., 1630446.])
observations = {num_new(t): d for t, d in zip(time[1:], case_history.diff())}
Though correct, that code is a bit difficult to read for pedagogical purposes. The following code is equivalent:
observations = {
    num_new(date(2021, 1, 2)): tensor(293754.),
    num_new(date(2021, 1, 3)): tensor(248712.),
}
Recall that calls to random variable functions from ordinary functions (including the Python toplevel) return RVIdentifier objects. So, the keys of this dictionary are RVIdentifiers, and the values are values of observed data corresponding to each key that you provide. Note that the value for a particular observation must be of the same type as the support for the distribution that it's bound to. In this case, the support for the
\text{Poisson}
distribution is scalar and non-negative, so what we have bound here are non-negative scalar tensors.
Running Inference
We're finally ready to run inference! Let's take a look first, and then we'll explain what's happening.
samples = bm.CompositionalInference().infer(
    queries=[reproduction_rate()],
    observations=observations,
    num_samples=10000,
    num_adaptive_samples=3000,
    num_chains=4,
)
Let's break this down. There is an inference method (in this example, that's the CompositionalInference class), and there's a call to infer().
Inference methods are simply classes that extend from AbstractInference. These classes define the engine that will be used in order to fit posterior distributions to queried random variables given observations. In this particular example, we've chosen to use the specific inference method CompositionalInference to run inference for our disease modeling problem.
CompositionalInference is a powerful, flexible class for configuring inference in a variety of ways. By default, CompositionalInference will select an inference method for each random variable that is appropriate based on its support. For example, for differentiable random variables, this inference method will attempt to leverage gradient information when generating samples from the posterior; for discrete random variables, it will use a uniform sampler to get representative draws for each discrete value.
A full discussion of the powerful CompositionalInference method, including extensive instructions on how to configure it to tailor specific inference methods for particular random variables, can be found in the Compositional Inference guide. Bean Machine offers a variety of other inference methods as well, which can perform differently based on the particular model you're working with. You can learn more about these inference methods under the Inference framework topic.
Regardless of the inference method, infer() has a few important general parameters:
queries A list of random variable functions to fit posterior distributions for.
observations The Python dictionary of observations that we discussed in Binding Data.
num_samples The integer number of samples with which to approximate the posterior distributions for the values listed in queries.
num_adaptive_samples The integer number of samples to spend before num_samples on tuning the inference algorithm for the queries.
num_chains The integer number of separate inference runs to use. Multiple chains can be used to verify that inference ran correctly.
You've already seen queries and observations many times. num_adaptive_samples and num_samples are used to specify the number of iterations to respectively tune, and then run, inference. More iterations will allow inference to explore the posterior distribution more completely, resulting in more reliable posterior distributions. num_chains lets you specify the number of identical runs of the entire inference algorithm to perform, called "chains". Multiple chains of inference can be used to validate that inference ran correctly and was run for enough iterations to produce reliable results, and their behavior can also help detect whether the model was well specified. We'll revisit chains in Inference Methods.
Now that we've run inference, it's time to explore our results in the Analysis section!
|
Level set - Wikipedia
Subset of a function's domain on which its value is equal to a given constant
For the computational technique, see Level-set method. For level surfaces of force fields, see Equipotential surface.
Points at constant slices of x2 = f(x1).
Lines at constant slices of x3 = f(x1, x2).
Planes at constant slices of x4 = f(x1, x2, x3).
(n − 1)-dimensional level sets for functions of the form f(x1, x2, ..., xn) = a1x1 + a2x2 + ⋯ + anxn where a1, a2, ..., an are constants, in (n + 1)-dimensional Euclidean space, for n = 1, 2, 3.
Contour curves at constant slices of x3 = f(x1, x2).
Curved surfaces at constant slices of x4 = f(x1, x2, x3).
(n − 1)-dimensional level sets of non-linear functions f(x1, x2, ..., xn) in (n + 1)-dimensional Euclidean space, for n = 1, 2, 3.
In mathematics, a level set of a real-valued function f of n real variables is a set where the function takes on a given constant value c, that is:
{\displaystyle L_{c}(f)=\left\{(x_{1},\ldots ,x_{n})\mid f(x_{1},\ldots ,x_{n})=c\right\}~,}
When the number of independent variables is two, a level set is called a level curve, also known as contour line or isoline; so a level curve is the set of all real-valued solutions of an equation in two variables x1 and x2. When n = 3, a level set is called a level surface (or isosurface); so a level surface is the set of all real-valued roots of an equation in three variables x1, x2 and x3. For higher values of n, the level set is a level hypersurface, the set of all real-valued roots of an equation in n > 3 variables.
A level set is a special case of a fiber.
Intersections of a co-ordinate function's level surfaces with a trefoil knot. Red curves are closest to the viewer, while yellow curves are farthest.
Level sets show up in many applications, often under different names. For example, an implicit curve is a level curve, which is considered independently of its neighbor curves, emphasizing that such a curve is defined by an implicit equation. Analogously, a level surface is sometimes called an implicit surface or an isosurface.
The name isocontour is also used, meaning a contour of equal height. In various application areas, isocontours have received specific names, which often indicate the nature of the values of the function considered, such as isobar, isotherm, isogon, isochrone, isoquant and indifference curve.
Consider the 2-dimensional Euclidean distance:
{\displaystyle d(x,y)={\sqrt {x^{2}+y^{2}}}}
A level set
{\displaystyle L_{r}(d)}
of this function consists of those points that lie at a distance of
{\displaystyle r}
from the origin; these points form a circle. For example,
{\displaystyle (3,4)\in L_{5}(d)}
since
{\displaystyle d(3,4)=5}
. Geometrically, this means that the point
{\displaystyle (3,4)}
lies on the circle of radius 5 centered at the origin. More generally, a sphere in a metric space
{\displaystyle (M,m)}
of radius
{\displaystyle r}
centered at
{\displaystyle x\in M}
can be defined as the level set
{\displaystyle L_{r}(y\mapsto m(x,y))}
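These definitions are easy to check numerically. The following sketch (plain Python, with a small tolerance for floating-point comparison) tests membership of a point in a level set of the Euclidean distance function:

```python
import math

def d(x, y):
    # 2-dimensional Euclidean distance from the origin
    return math.sqrt(x ** 2 + y ** 2)

def in_level_set(f, c, point, tol=1e-9):
    # A point lies in the level set L_c(f) exactly when f(point) == c
    return abs(f(*point) - c) <= tol

# (3, 4) lies on the circle of radius 5 centered at the origin
print(in_level_set(d, 5, (3, 4)))   # True
print(in_level_set(d, 5, (3, 3)))   # False
```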
A second example is the plot of Himmelblau's function shown in the figure to the right. Each curve shown is a level curve of the function, and they are spaced logarithmically: if a curve represents
{\displaystyle L_{x}}
, the curve directly "within" represents
{\displaystyle L_{x/10}}
, and the curve directly "outside" represents
{\displaystyle L_{10x}}
Log-spaced level curve plot of Himmelblau's function[1]
Level sets versus the gradient
Consider a function f whose graph looks like a hill. The blue curves are the level sets; the red curves follow the direction of the gradient. The cautious hiker follows the blue paths; the bold hiker follows the red paths. Note that blue and red paths always cross at right angles.
Theorem: If the function f is differentiable, the gradient of f at a point is either zero, or perpendicular to the level set of f at that point.
To understand what this means, imagine that two hikers are at the same location on a mountain. One of them is bold, and he decides to go in the direction where the slope is steepest. The other one is more cautious; he does not want to either climb or descend, choosing a path which will keep him at the same height. In our analogy, the above theorem says that the two hikers will depart in directions perpendicular to each other.
A consequence of this theorem (and its proof) is that if f is differentiable, a level set is a hypersurface and a manifold outside the critical points of f. At a critical point, a level set may be reduced to a point (for example at a local extremum of f ) or may have a singularity such as a self-intersection point or a cusp.
Sublevel and superlevel sets
A set of the form
{\displaystyle L_{c}^{-}(f)=\left\{(x_{1},\dots ,x_{n})\mid f(x_{1},\dots ,x_{n})\leq c\right\}}
is called a sublevel set of f (or, alternatively, a lower level set or trench of f). A strict sublevel set of f is
{\displaystyle \left\{(x_{1},\dots ,x_{n})\mid f(x_{1},\dots ,x_{n})<c\right\}}
{\displaystyle L_{c}^{+}(f)=\left\{(x_{1},\dots ,x_{n})\mid f(x_{1},\dots ,x_{n})\geq c\right\}}
is called a superlevel set of f (or, alternatively, an upper level set of f). And a strict superlevel set of f is
{\displaystyle \left\{(x_{1},\dots ,x_{n})\mid f(x_{1},\dots ,x_{n})>c\right\}}
Sublevel sets are important in minimization theory. By Weierstrass's theorem, the boundedness of some non-empty sublevel set together with the lower semicontinuity of the function implies that the function attains its minimum. The convexity of all the sublevel sets characterizes quasiconvex functions.[2]
^ Simionescu, P.A. (2011). "Some Advancements to Visualizing Constrained Functions and Inequalities of Two Variables". Journal of Computing and Information Science in Engineering. 11 (1). doi:10.1115/1.3570770.
^ Kiwiel, Krzysztof C. (2001). "Convergence and efficiency of subgradient methods for quasiconvex minimization". Mathematical Programming, Series A. Berlin, Heidelberg: Springer. 90 (1): 1–25. doi:10.1007/PL00011414. ISSN 0025-5610. MR 1819784. S2CID 10043417.
|
Range DP · USACO Guide
Authors: Michael Cao, Andi Qu
Dynamic programming on ranges is a general technique used to solve problems of the form "what is the minimum/maximum metric one can achieve on an array
A
?" with the following properties:
A greedy approach seems feasible but yields incorrect answers.
Given the answers for each subarray
A[l : x]
and
A[y : r]
, we can calculate the answer for the subarray
A[l : r]
in
\mathcal O(r - l)
time.
Disjoint subarrays can be "combined" independently.
N
(the size of
A
) is not greater than
500
This technique relies on the assumption that we can "combine" two subarrays
A[l : x]
A[x + 1 : r]
to get a candidate for
A[l : r]
. We can thus iterate over all
x
and find the best possible answer for
A[l : r]
. (Note that we need to process subarrays in increasing order of length!)
Since there are
\mathcal O(N^2)
subarrays and processing each one takes
\mathcal O(N)
time, solutions using this technique generally run in
\mathcal O(N^3)
time.
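As a concrete illustration of the pattern (the specific problem below is our own example, not from this module), the sketch computes the minimum total cost to merge an array into one piece, where combining A[l : x] and A[x + 1 : r] costs the sum of A[l : r]. It processes subarrays in increasing order of length and runs in O(N^3):

```python
from itertools import accumulate

def min_merge_cost(a):
    # dp[l][r] = minimum cost to merge the subarray a[l..r] into one piece
    n = len(a)
    prefix = [0] + list(accumulate(a))        # prefix sums for O(1) range sums
    dp = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):            # process subarrays by increasing length
        for l in range(n - length + 1):
            r = l + length - 1
            # try every split point x, then pay the cost of the final combine
            dp[l][r] = min(dp[l][x] + dp[x + 1][r] for x in range(l, r)) \
                       + prefix[r + 1] - prefix[l]
    return dp[0][n - 1]

print(min_merge_cost([1, 2, 3]))  # 9: merge 1+2 (cost 3), then 3+3 (cost 6)
```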
2015 - Space Jazz
SAPO - Easy
Solution - Space Jazz
\mathcal O(N^3)
While it may be tempting to use a greedy approach (e.g. repeatedly erasing matching letters until you can't anymore, and then erasing the first "bad" letter), this approach doesn't work on inputs like ababa. Combined with the fact that
N \leq 500
here, this suggests that we should use dynamic programming on ranges.
Let's consider the above test case - which a (if any) should we match the first letter with? Since
N
is small, we may as well try each other a, but then how do we deal with the resulting "gaps" in the string?
The key observation is that if we match it with the second a in the string, then we can't match the two bs together. This means that we don't actually need to care about the gaps from matching letters! More specifically, if it's optimal to match
S[0]
S[i]
, then the minimum number of insertions for
S
is the sum of the minimum number of insertions for
S[1 : i - 1]
S[i + 1 : |S| - 1]
We can thus use dynamic programming on ranges to find, for each substring of
S
, the minimum number of insertions needed to turn it into space jazz.
(Don't forget to consider the case where we don't match
S[i]
with anything, and just duplicate it!)
int dp[502][502]; // Min additions to get "jazz" from index i to j
// Inclusive and 0-indexed
public class Jazz {
public static final int MAXN = 500;
char[] inp = io.next().toCharArray();
// DP[i][j] is the min number of additions to get "jazz" from index i to j
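The fragments above are excerpts from fuller solutions. A self-contained Python sketch of the same recurrence (for each range, either duplicate S[i] at a cost of one insertion, or match it with a later equal letter and solve the two resulting gaps independently) might look like this:

```python
from functools import lru_cache

def min_insertions(s):
    @lru_cache(maxsize=None)
    def dp(i, j):
        # Minimum insertions to make s[i..j] (inclusive) "jazz"
        if i > j:
            return 0
        # Option 1: don't match s[i]; insert a duplicate of it (+1)
        best = dp(i + 1, j) + 1
        # Option 2: match s[i] with an equal letter s[k]; the gaps
        # s[i+1..k-1] and s[k+1..j] are then solved independently
        for k in range(i + 1, j + 1):
            if s[k] == s[i]:
                best = min(best, dp(i + 1, k - 1) + dp(k + 1, j))
        return best
    return dp(0, len(s) - 1)

print(min_insertions("ab"))   # 2 ("ab" -> "abba")
print(min_insertions("aa"))   # 0
```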
The Cow Run
Easy - Range DP
Normal - Range DP
Hard - Range DP
Subsequence Reversal
Very Hard - Range DP
2012 - Sailing Race
CF Gym
Paimon's Tree
|
Predicate logic - Simple English Wikipedia, the free encyclopedia
In logic and philosophy, predicate logic is a system of mathematical logic. It uses predicates to express properties of things; a predicate is an "incomplete proposition" with a placeholder for the object or subject that must be inserted in order to obtain a valid proposition.
Predicate logic is different from propositional logic, in part because it has the concept of quantifiers. A quantifier is used in conjunction with a variable (say x) in order to talk about a general instance of x, and in doing so, this allows predicate logic to make statements about quantity.
The best-known quantifiers are the existential quantifier, represented by ∃, and the universal quantifier, represented by ∀.[1] The existential quantifier is used to express statements of the form "there exists", and is true precisely when there is at least one mathematical object from the universe of discourse that matches the predicate or formula. On the other hand, the universal quantifier is used to express statements of the form "for all", and is true precisely when all possible mathematical objects of the universe of discourse match the specified predicate or formula.[2]
In the notation of predicate logic, quantifiers directly precede (and thus introduce) variable names, which are then followed by other quantifiers or mathematical expressions in which the said variables occur. For example, one can use the expression
{\displaystyle \exists x\forall yLyx}
to mean "there is a person x such that for all persons y, y likes x" ("someone is liked by everyone").[3]
{\displaystyle \exists c\ ({\text{Cat}}(c)\land {\text{isBlack}}(c)\land \exists d\ ({\text{Dog}}(d)\land {\text{likes}}(c,d)))}
can be read as: "There is at least one cat which is black, and which likes (one or more) dogs."
{\displaystyle \neg \forall c\ ({\text{Cat}}(c)\to \forall d\ ({\text{Dog}}(d)\to \neg {\text{likes}}(c,d)))}
can be read as: "It is not true that every cat doesn't like any dog."
{\displaystyle \neg \exists c\ ({\text{Cat}}(c)\land {\text{Dog}}(c))}
can be read as: "There does not exist a cat which is also a dog."
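Over a finite universe of discourse, such quantified statements can be checked mechanically: any plays the role of the existential quantifier, and all the role of the universal one. The universe and predicates below are made up for illustration:

```python
# A small made-up universe of discourse: each individual has a kind and a
# color, plus a "likes" relation between individuals.
universe = ["tom", "felix", "rex"]
kind = {"tom": "cat", "felix": "cat", "rex": "dog"}
color = {"tom": "black", "felix": "white", "rex": "brown"}
likes = {("tom", "rex")}  # tom likes rex

def is_cat(a): return kind[a] == "cat"
def is_dog(a): return kind[a] == "dog"
def is_black(a): return color[a] == "black"

# Exists c (Cat(c) and isBlack(c) and Exists d (Dog(d) and likes(c, d)))
exists_black_cat_liking_a_dog = any(
    is_cat(c) and is_black(c)
    and any(is_dog(d) and (c, d) in likes for d in universe)
    for c in universe
)

# Not Exists c (Cat(c) and Dog(c))
no_cat_is_a_dog = not any(is_cat(c) and is_dog(c) for c in universe)

print(exists_black_cat_liking_a_dog, no_cat_is_a_dog)  # True True
```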
↑ "Mathematics | Predicates and Quantifiers | Set 1". GeeksforGeeks. 2015-06-24. Retrieved 2020-08-21.
↑ "Predicate Logic | Brilliant Math & Science Wiki". brilliant.org. Retrieved 2020-08-21.
|
Teletraffic engineering/Trunking - Wikiversity
Author: Michel Le Vieux
Module 22 of the Teletraffic Hyperlinked Textbook
What is Trunking?
In telecommunications systems, trunking is the aggregation of multiple user circuits into a single channel, achieved using some form of multiplexing. Trunking theory was developed by Agner Krarup Erlang, who based it on his studies of the statistical nature of the arrival and the length of calls. The Erlang B formula allows for the calculation of the number of circuits required in a trunk, based on the Grade of Service and the amount of traffic in Erlangs the trunk needs to cater for.
In order to provide connectivity between all users on the network, one solution is to build a full mesh network between all endpoints. A full mesh solution is, however, impractical; a far better approach is to provide a pool of resources that endpoints can make use of in order to connect to foreign exchanges. The diagram below illustrates where in a telecommunications network trunks are used.
Figure 1: - A Modern Telephone Network Indicating where trunks are used. SLC - Subscriber line concentrator (HANRAHAN, 2001)[1]
Erlang and Trunking Theory
The Danish mathematician Agner Krarup Erlang [2] is the founder of teletraffic engineering. Erlang developed the fundamentals of trunking theory while investigating how a large population can be serviced by a limited number of servers [3]. Trunking theory leverages the statistical behaviour of users accessing the network; these characteristics are discussed in the assumptions of the Erlang B equation.
Grade of Service is a measure of the probability that a user may not be able to access an available circuit because of congestion. The busy hour is the time when the network is busiest and is dependent on the users. The highest traffic may not occur at the same time every day, so the concept of the time-consistent busy hour (TCBH) is defined as those 60 minutes (to within 15-minute accuracy) with the highest traffic [4]. Business users may have their busy hour between 8:30am and 9:30am, while residential users may have their busy hour between 6:00pm and 7:00pm.
{\displaystyle GoS={\frac {\frac {A^{C}}{C!}}{\sum _{i=0}^{C}{\frac {A^{i}}{i!}}}}}
GoS Grade of Service is the probability of blocking during the busy hour
C is the number of resources such as servers or circuits in a group
A = λh is the total amount of traffic offered in Erlangs
The formula is based on the following assumptions, taken from Kennedy 2007 [5].
The assumption of pure-chance traffic means that call arrivals and terminations are independent, identically distributed random events. The number of call arrivals in a given time also has a Poisson distribution.
Statistical equilibrium assumes that the probabilities do not change with time.
Full availability means an arriving call can be connected to any free outgoing circuit. If switches make the connection from incoming calls to outgoing, each switch must have sufficient outlets to provide connection to every outgoing circuit.
Any attempted call that encounters congestion is lost, because the derivation assumes lost calls. If congestion does occur, the customer is likely to make another attempt in a short while, thus increasing the traffic offered when there is congestion.
Multiplexing
In order to have multiple communication channels use the same medium, some form of multiplexing needs to be used. The two main types of multiplexing are:
Time-division multiplexing (TDM): multiple channels are combined onto a single medium for transmission, separated in the medium by their time slots.
Frequency-division multiplexing (FDM): multiple channels are combined onto a single medium for transmission, separated in the medium by their frequencies.
Assume a fictitious residential telephone network with 10 users connected to Local Exchange A and 10 users connected to Local Exchange B. If we would like the 10 users on LE A to connect to the 10 users on LE B, a proposed architecture might use a connection between the two exchanges with 10 circuits. If the circuits were investigated, it would be seen that their utilisations would in fact be very low. The low usage of the circuits in this scenario leads us to look at trunking theory.
Looking at the assumptions made by the Erlang B equations we have:
Pure-chance traffic - A user may make a call at any time of the day
Statistical equilibrium - A user may or may not make a call directly after a previous call
Full availability - If there is a circuit on the trunk available, an incoming call may make use of it
Calls which encounter congestion are lost - If there were no available circuits on the trunk, the call will be lost and the user will receive a busy tone
Investigations have also indicated that a residential user generates 0.02 erlangs of traffic, and a Grade of Service of 0.001 (1 in a thousand calls will be lost) is selected.
Substituting this information into the Erlang B equation above:
A = 0.2 (10 x 0.02 Erlangs)
GoS = 0.001
We can calculate that the actual number of circuits required between LE A and LE B is 4. Trunking theory has been a major driver in making communication networks economically viable and affordable to service providers and users.
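This calculation is easy to reproduce with the standard recursion for the Erlang B formula, B(0) = 1 and B(C) = A*B(C-1) / (C + A*B(C-1)); a small Python sketch:

```python
def erlang_b(traffic, circuits):
    # Blocking probability for `traffic` Erlangs offered to `circuits` circuits,
    # computed with the numerically stable recursion
    # B(0) = 1,  B(c) = A*B(c-1) / (c + A*B(c-1))
    b = 1.0
    for c in range(1, circuits + 1):
        b = traffic * b / (c + traffic * b)
    return b

def circuits_needed(traffic, gos):
    # Smallest number of circuits whose blocking probability is <= gos
    c = 1
    while erlang_b(traffic, c) > gos:
        c += 1
    return c

print(circuits_needed(0.2, 0.001))  # 4, matching the worked example
```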
The management of a fixed-line voice operator would like to try to decrease costs; they have suggested reducing the Grade of Service on their E1, E3, T1, T3 and STM-1 trunk circuits.
(a) Calculate and graph the efficiency per circuit of each of the carrier's trunks with the following GoS values: 0.001, 0.002 and 0.005. You may make use of the following Erlang-B calculator
(b) Does reducing the grade of service significantly increase circuit efficiency? Could you suggest a better way of reducing costs?
(a) The table below summaries the results and the graph below illustrates the increase in the amount of traffic a circuit may carry for different Grades of Service.
T1 E1 E3 T3 STM-1
Circuits 24 32 512 672 2430
At 0.01% GoS 10.2 15.6 442.8 593.3 2287.4
Link Efficiency (Erlangs per circuit)
At 0.01% GoS 0.43 0.49 0.86 0.88 0.94
Table 1: - TABLE OF THE LINK EFFICIENCY PER CIRCUIT WITH DECREASING GRADES OF SERVICE
Graph 1: - GRAPH OF THE LINK EFFICIENCY PER CIRCUIT WITH DECREASING GRADES OF SERVICE
(b) Dropping the Grade of Service on a trunk does not seem to significantly increase the efficiency per circuit. If, instead of changing the Grade of Service, you compare the link efficiencies of larger trunks, a better way to reduce cost emerges: aggregate smaller circuits into larger circuits as close to the edge of the network as possible. Simply aggregating multiple E1's into an E3 significantly increases link efficiency by reducing the number of circuits required, ultimately reducing costs; this illustrates the importance of design in making networks efficient.
Graph 2: - GRAPH OF THE LINK EFFICIENCY PER CIRCUIT WITH INCREASING CIRCUIT SIZES
↑ Hanrahan, H. E., Telecommunications Access Networks – ELEN 5010, Department of Electrical Engineering, University of the Witwatersrand, 2001.
↑ Wikipedia: Agner Krarup Erlang
↑ Rappaport, T. S., Wireless Communications – Principles and Practice, Second Edition, 2002.
↑ Iverson, V.B., Teletraffic Engineering and Network Planning, COM Course 34340, Technical University of Denmark, May 2006.
↑ Kennedy, I.G., ELEN7015 lecture notes, Department of Electrical Engineering, University of the Witwatersrand, 2007.
|
Global Behavior of Solutions to the Focusing Generalized Hartree Equation
Anudeep Kumar Arora, Svetlana Roudenko
We study behavior of solutions to the nonlinear generalized Hartree equation, where the nonlinearity is of nonlocal type and is expressed as a convolution
\mathit{i}{\mathit{u}}_{\mathit{t}}+\mathrm{\Delta }\mathit{u}+\left(|\mathit{x}{|}^{-\left(\mathit{N}-\mathit{\gamma }\right)}\ast |\mathit{u}{|}^{\mathit{p}}\right)|\mathit{u}{|}^{\mathit{p}-2}\mathit{u}=0,\phantom{\rule{1em}{0ex}}\mathit{x}\in {\mathbb{R}}^{\mathit{N}},\mathit{t}\in \mathbb{R}.
Our main goal is to understand global behavior of solutions of this equation in various settings. In this work we make an initial attempt towards this goal and study
{\mathit{H}}^{1}
(finite energy) solutions. We first investigate the
{\mathit{H}}^{1}
local well-posedness and small data theory. We then, in the intercritical regime (
0<\mathit{s}<1
), classify the behavior of
{\mathit{H}}^{1}
solutions under the mass-energy assumption
\mathcal{ME}\left[{\mathit{u}}_{0}\right]<1
, identifying the sharp threshold for global versus finite time solutions via the sharp constant of the corresponding convolution type Gagliardo–Nirenberg interpolation inequality (note that the uniqueness of a ground state is not known in the general case). In particular, depending on the size of the initial mass and gradient, solutions will either exist for all time and scatter in
{\mathit{H}}^{1}
, or blow up in finite time, or diverge along an infinite time sequence. To obtain
{\mathit{H}}^{1}
scattering or divergence to infinity, in this paper we employ the well-known concentration compactness and rigidity method of Kenig and Merle [36] with the novelty of studying the nonlocal, convolution nonlinearity.
Anudeep Kumar Arora. Svetlana Roudenko. "Global Behavior of Solutions to the Focusing Generalized Hartree Equation." Michigan Math. J. Advance Publication 1 - 54, 2021. https://doi.org/10.1307/mmj/20205855
Received: 13 January 2020; Revised: 12 August 2020; Published: 2021
|
Kurs:Mathematik für Anwender (Osnabrück 2011-2012)/Teil I/Arbeitsblatt 17/en – Wikiversity
11 Exercise (3 points)
Compute the first five terms of the Cauchy product of the two convergent series
{\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n^{2}}}\,\,{\text{ and }}\,\,\sum _{n=1}^{\infty }{\frac {1}{n^{3}}}.}
Keep in mind that the partial sums of the Cauchy product of two series are not the product of the partial sums of the two series.
Let
{\displaystyle {}\sum _{n=0}^{\infty }a_{n}x^{n}}
and
{\displaystyle {}\sum _{n=0}^{\infty }b_{n}x^{n}}
be two power series absolutely convergent in
{\displaystyle {}x\in \mathbb {R} }
. Prove that the Cauchy product of these series is exactly
{\displaystyle \sum _{n=0}^{\infty }c_{n}x^{n}{\text{ where }}c_{n}=\sum _{i=0}^{n}a_{i}b_{n-i}.}
Let
{\displaystyle {}x\in \mathbb {R} }
with
{\displaystyle {}\vert {x}\vert <1}
. Determine (in dependence of
{\displaystyle {}x}
) the sum of the two series
{\displaystyle \sum _{k=0}^{\infty }x^{2k}{\text{ and }}\sum _{k=0}^{\infty }x^{2k+1}.}
Let
{\displaystyle \sum _{n=0}^{\infty }a_{n}x^{n}}
be an absolutely convergent power series. Compute the coefficients of the powers
{\displaystyle {}x^{0},x^{1},x^{2},x^{3},x^{4}}
in the third power
{\displaystyle \sum _{n=0}^{\infty }c_{n}x^{n}=(\sum _{n=0}^{\infty }a_{n}x^{n})^{3}.}
Prove that the real function defined by the exponential series
{\displaystyle \exp \colon \mathbb {R} \longrightarrow \mathbb {R} ,\,x\longmapsto \exp x,}
has no upper limit and that
{\displaystyle {}0}
is the infimum (but not the minimum) of the image set.[1]
Prove that for the exponential function
{\displaystyle \mathbb {R} \longrightarrow \mathbb {R} ,\,x\longmapsto a^{x},}
the following calculation rules hold (where
{\displaystyle {}a,b\in \mathbb {R} _{+}}
and
{\displaystyle {}x,y\in \mathbb {R} }
).
{\displaystyle {}a^{x+y}=a^{x}\cdot a^{y}}
{\displaystyle {}a^{-x}={\frac {1}{a^{x}}}}
{\displaystyle {}(a^{x})^{y}=a^{xy}}
{\displaystyle {}(ab)^{x}=a^{x}b^{x}}
Prove that for the logarithm to base
{\displaystyle {}b}
the following calculation rules hold.
{\displaystyle {}\log _{b}(b^{x})=x}
{\displaystyle {}b^{\log _{b}(y)}=y}
, i.e., the logarithm to base
{\displaystyle {}b}
is the inverse to the exponential function to the base
{\displaystyle {}b}
{\displaystyle {}\log _{b}(y\cdot z)=\log _{b}y+\log _{b}z}
{\displaystyle {}\log _{b}y^{u}=u\cdot \log _{b}y}
{\displaystyle {}u\in \mathbb {R} }
{\displaystyle {}\log _{a}y=\log _{a}(b^{\log _{b}y})=\log _{b}y\cdot \log _{a}b\,.}
A monetary community has an annual inflation of
{\displaystyle {}2\%}
. After what period of time (in years and days) will the prices have doubled?
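Numerically, the doubling time solves 1.02^t = 2, i.e. t = ln 2 / ln 1.02; a quick check in Python:

```python
import math

t = math.log(2) / math.log(1.02)   # years until prices double at 2% inflation
years = int(t)
days = round((t - years) * 365)    # fractional year converted to days
print(years, days)  # 35 1, i.e. roughly 35 years and 1 day
```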
Let
{\displaystyle {}b,c>0}
. Prove that
{\displaystyle \operatorname {lim} _{b\rightarrow 0}\,b^{c}=0.}
Exercise (3 points)
Compute the coefficients
{\displaystyle {}c_{0},c_{1},\ldots ,c_{5}}
of the power series
{\displaystyle {}\sum _{n=0}^{\infty }c_{n}x^{n}}
, which is the Cauchy product of the geometric series with the exponential series.
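The Cauchy product coefficients c_n = a_0 b_n + a_1 b_{n-1} + ... + a_n b_0 can be computed directly. The sketch below uses exact rational arithmetic for the geometric series (a_n = 1) and the exponential series (b_n = 1/n!):

```python
from fractions import Fraction
from math import factorial

def cauchy_product(a, b, terms):
    # c_n = sum_{i=0}^{n} a_i * b_{n-i}
    return [sum(a(i) * b(n - i) for i in range(n + 1)) for n in range(terms)]

geometric = lambda n: Fraction(1)                  # sum x^n has a_n = 1
exponential = lambda n: Fraction(1, factorial(n))  # sum x^n/n! has b_n = 1/n!

c = cauchy_product(geometric, exponential, 6)
print([str(q) for q in c])  # ['1', '2', '5/2', '8/3', '65/24', '163/60']
```

Here c_n is the n-th partial sum of 1/k!, since every a_i equals 1.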
Let
{\displaystyle \sum _{n=0}^{\infty }a_{n}x^{n}}
be an absolutely convergent power series. Determine the coefficients of the powers
{\displaystyle {}x^{0},x^{1},x^{2},x^{3},x^{4},x^{5}}
in the fourth power
{\displaystyle \sum _{n=0}^{\infty }c_{n}x^{n}=(\sum _{n=0}^{\infty }a_{n}x^{n})^{4}.}
Let
{\displaystyle {}N\in \mathbb {N} }
and
{\displaystyle {}x\in \mathbb {R} }
, and let
{\displaystyle R_{N+1}(x)=\exp x-\sum _{n=0}^{N}{\frac {x^{n}}{n!}}=\sum _{n=N+1}^{\infty }{\frac {x^{n}}{n!}}}
be the remainder of the exponential series. Prove that for
{\displaystyle {}\vert {x}\vert \leq 1+{\frac {1}{2}}N}
the remainder term estimate
{\displaystyle \vert {R_{N+1}(x)}\vert \leq {\frac {2}{(N+1)!}}\vert {x}\vert ^{N+1}}
Compute by hand the first
{\displaystyle {}4}
digits in the decimal system of
{\displaystyle \exp 1.}
Prove that the real exponential function defined by the exponential series has the property that for each
{\displaystyle {}d\in \mathbb {N} }
the sequence
{\displaystyle \left({\frac {\exp n}{n^{d}}}\right)_{n\in \mathbb {N} }}
diverges to
{\displaystyle {}+\infty }
Let
{\displaystyle f\colon \mathbb {R} \longrightarrow \mathbb {R} }
be a continuous function
{\displaystyle {}\neq 0}
satisfying
{\displaystyle f(x+y)=f(x)\cdot f(y)}
for all
{\displaystyle {}x,y\in \mathbb {R} }
. Prove that
{\displaystyle {}f}
is an exponential function, i.e. there exists a
{\displaystyle {}b>0}
such that
{\displaystyle {}f(x)=b^{x}}
.
↑ From the continuity it follows that
{\displaystyle {}\mathbb {R} _{+}}
is the image set of the real exponential function.
↑ Therefore we say that the exponential function grows faster than any polynomial function.
|
Limits - MATLAB & Simulink - MathWorks Switzerland
The fundamental idea in calculus is to make calculations on functions as a variable “gets close to” or approaches a certain value. Recall that the definition of the derivative is given by a limit
f\text{'}\left(x\right)=\underset{h\to 0}{\mathrm{lim}}\frac{f\left(x+h\right)-f\left(x\right)}{h},
provided this limit exists. Symbolic Math Toolbox™ software enables you to calculate the limits of functions directly. The commands
syms h n x
limit((cos(x+h) - cos(x))/h, h, 0)
limit((1 + x/n)^n, n, inf)
illustrate two of the most important limits in mathematics: the derivative (in this case of cos(x)) and the exponential function.
You can also calculate one-sided limits with Symbolic Math Toolbox software. For example, you can calculate the limit of x/|x|, whose graph is shown in the following figure, as x approaches 0 from the left or from the right.
fplot(x/abs(x), [-1 1], 'ShowPoles', 'off')
To calculate the limit as x approaches 0 from the left,
\underset{x\to {0}^{-}}{\mathrm{lim}}\frac{x}{|x|},
limit(x/abs(x), x, 0, 'left')
To calculate the limit as x approaches 0 from the right,
\underset{x\to {0}^{+}}{\mathrm{lim}}\frac{x}{|x|}=1,
limit(x/abs(x), x, 0, 'right')
Since the limit from the left does not equal the limit from the right, the two-sided limit does not exist. In the case of undefined limits, MATLAB® returns NaN (not a number). For example,
limit(x/abs(x), x, 0)
Observe that the default case, limit(f) is the same as limit(f,x,0). Explore the options for the limit command in this table, where f is a function of the symbolic object x.
\underset{x\to 0}{\mathrm{lim}}f\left(x\right)
limit(f)
\underset{x\to a}{\mathrm{lim}}f\left(x\right)
limit(f, x, a) or
limit(f, a)
\underset{x\to a-}{\mathrm{lim}}f\left(x\right)
limit(f, x, a, 'left')
\underset{x\to a+}{\mathrm{lim}}f\left(x\right)
limit(f, x, a, 'right')
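The same one-sided behavior can be seen numerically without symbolic software; a minimal plain-Python sketch approximates the one-sided limits of x/|x| by evaluating the function ever closer to 0:

```python
def f(x):
    return x / abs(x)

# Approach 0 from the right and from the left with shrinking step sizes
from_right = [f(10.0 ** -k) for k in range(1, 6)]
from_left = [f(-(10.0 ** -k)) for k in range(1, 6)]

print(from_right)  # [1.0, 1.0, 1.0, 1.0, 1.0]
print(from_left)   # [-1.0, -1.0, -1.0, -1.0, -1.0]
# The one-sided limits disagree, so the two-sided limit does not exist.
```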
|
divconq - Maple Help
find solutions of "divide and conquer" recurrence equations
divconq(problem)
Solves divide and conquer recurrences, meaning those of the form
Af\left({R}^{n}k\right)+Bf\left({R}^{m}k\right)=0
, where R is either numeric or a name, m and n are integers, and where A and B are independent of k.
\mathrm{with}\left(\mathrm{LREtools}\right):
\mathrm{prob}≔\mathrm{REcreate}\left({y\left(nk\right)=2y\left(k\right)},y\left(k\right),\varnothing \right)
\textcolor[rgb]{0,0,1}{\mathrm{prob}}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{\mathrm{RESol}}\textcolor[rgb]{0,0,1}{}\left({\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{n}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{k}\right)\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{k}\right)\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{0}}\textcolor[rgb]{0,0,1}{,}{\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{k}\right)}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\varnothing }\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{INFO}}\right)
\mathrm{divconq}\left(\mathrm{prob}\right)
y(1) k^(ln(2)/ln(n))
\mathrm{divconq}\left(f\left({r}^{3}k\right)=2f\left({r}^{2}k\right),f\left(k\right),\varnothing \right)
f(1) k^(ln(2)/ln(r))
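As a numeric sanity check (a Python sketch, not Maple), the closed form f(k) = f(1)·k^(ln 2 / ln r) returned for the second example does satisfy f(r³k) = 2 f(r²k): the two arguments differ by a factor of r, and scaling k by r multiplies f(k) by r^(ln 2 / ln r) = 2.

```python
import math

# f(k) = f(1) * k**(ln 2 / ln r) solves f(r**3 * k) = 2 * f(r**2 * k).
def f(k, r, f1=1.0):
    return f1 * k ** (math.log(2) / math.log(r))

r, k = 3.0, 5.0
lhs = f(r ** 3 * k, r)
rhs = 2 * f(r ** 2 * k, r)
print(lhs, rhs)  # equal up to rounding
```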
|
I have a function that evaluates polynomials with integer coefficients. To evaluate
f(8)
, for example, you do this:
scala> evalPoly(8, List(6, 5, 0, 2))
res0: (Int, Int) = (8, 1070)
For some reason it echoes the input back out to you. Here’s the code you might write:
def evalPoly(x: Int, coeffs: List[Int]): (Int, Int) = {
  def eval(cs: List[Int]): Int = {
    cs match {
      case Nil    => 0
      case h :: t => x * eval(t) + h
    }
  }
  (x, eval(coeffs))
}
But I also have a function that un-evaluates polynomials. To un-evaluate
f(8) = 1070
, you do this:
scala> unevalPoly(8, 1070)
res1: (Int, List[Int]) = (8, List(6, 5, 0, 2))
and it echoes your input and gives you back the coefficients of the polynomial.
Wait, what? I thought you needed
N+1
points to determine an
N
-degree polynomial. Here I’ve seemingly done it with just one point. To spoil the surprise a little, unevalPoly doesn’t always work. But how does it work even some of the time? How would you go about coding this up?
Having noticed that the input to unevalPoly is the output of evalPoly, and vice versa, one tack we can try is to write evalPoly backwards. First let me rewrite it slightly:
case h :: t => plustimes(x, eval(t), h)
I’ve just replaced x * eval(t) + h with a call to this function:
def plustimes(n: Int, q: Int, r: Int) = {
  n * q + r
}
Now here’s eval as a data flow diagram. I’ve threaded through x as a “context” variable because it isn’t an input to eval per se.
Following the arrows backwards from the outputs to the inputs we can write the following code:
def unevalPoly(x: Int, y: Int): (Int, List[Int]) = {
  def uneval(y: Int): List[Int] = {
    y match {
      case 0 => Nil
      case y => {
        val (q, r) = unplustimes(x, y)
        r :: uneval(q)
      }
    }
  }
  (x, uneval(y))
}
Now this should work as long as we can write unplustimes, which is possible only when plustimes doesn’t destroy information. So given m and n and m = n * q + r, when can we recover q and r?
Well, if r happens to be less than n, this is just like doing long division — q and r are the quotient and remainder when dividing m by n:
def unplustimes(n: Int, m: Int): (Int, Int) = {
  val q = m / n
  val r = m % n
  (q, r)
}
This works because for a given positive integer n, any nonnegative integer m can be written as m = nq + r for integers q and r with 0 ≤ r < n. Since this representation is unique, it's easy to reverse the process and recover q and r.
So what does that mean for unevalPoly? It will only work if
x is a positive integer, and
all of the coefficients are nonnegative integers less than x.
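The same pair of functions is easy to mirror in Python (a sketch of the idea, with coefficients stored lowest-degree first as in the Scala version; the tuple echo is dropped):

```python
def eval_poly(x, coeffs):
    """Horner evaluation; coeffs[k] is the coefficient of x**k."""
    y = 0
    for c in reversed(coeffs):
        y = x * y + c
    return y

def uneval_poly(x, y):
    """Recover the coefficients by repeated divmod, i.e. by reading off
    the base-x digits of y; works when x is a positive integer larger
    than every (nonnegative) coefficient."""
    coeffs = []
    while y:
        y, r = divmod(y, x)
        coeffs.append(r)
    return coeffs

print(eval_poly(8, [6, 5, 0, 2]))   # 1070
print(uneval_poly(8, 1070))         # [6, 5, 0, 2]
```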
Let’s try it out. This works:
scala> evalPoly(8, List(1, 3))
res0: (Int, Int) = (8, 25)
scala> unevalPoly(8, 25)
res1: (Int, List[Int]) = (8, List(1, 3))
But this doesn’t, as expected:
scala> evalPoly(2, List(1, 4, 2))
res2: (Int, Int) = (2, 17)

scala> unevalPoly(2, 17)
res3: (Int, List[Int]) = (2, List(1, 0, 0, 0, 1))
scala> evalPoly(5, List(1, -2, 1))
Neat though!
This all came to me through a puzzle I heard: Your friend has a secret polynomial, which you know has nonnegative integer coefficients. She challenges you to determine the coefficients of the polynomial, offering to evaluate the polynomial for you on any two numbers you choose.
From the above, you know you need to evaluate the polynomial at a number that is larger than all of the coefficients. So all that's left of the solution is finding some number that satisfies that description.
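For completeness, here is a sketch of one such solution in Python (the oracle function stands in for the friend; the names are illustrative): query p(1) to get the sum s of the nonnegative coefficients, then read the coefficients off p(s+1) in base s+1, since s+1 exceeds every coefficient.

```python
def solve_puzzle(oracle):
    """Two queries: p(1) bounds the (nonnegative) coefficients by their
    sum s, so the base-(s+1) digits of p(s+1) are the coefficients."""
    s = oracle(1)              # sum of the coefficients
    x = s + 1                  # strictly larger than every coefficient
    y = oracle(x)
    coeffs = []
    while y:
        y, r = divmod(y, x)
        coeffs.append(r)
    return coeffs

secret = [6, 5, 0, 2]          # hidden from the solver
oracle = lambda x: sum(c * x ** k for k, c in enumerate(secret))
print(solve_puzzle(oracle))    # [6, 5, 0, 2]
```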
You might have noticed that all unevalPoly(n, m) is doing is converting m to its representation in base n. Here it is converting 42 to base 2:
scala> unevalPoly(2, 42)
res6: (Int, List[Int]) = (2, List(0, 1, 0, 1, 0, 1))
And oh, look:
scala> unevalPoly(10, 12345)
res7: (Int, List[Int]) = (10, List(5, 4, 3, 2, 1))
This all makes sense now. The polynomial f(x) = a·x⁴ + b·x³ + c·x² + d·x + e is what you mean when you write abcde_x, which is the unique representation of that number in base x, provided that all of the coefficients are less than x. Recovering the coefficients of f from f(x) = y is the same as writing y in base x.
So backwards programming is good for something! If this interests you, you should read my last post on backwards sorting algorithms.
|
Innovative Renal Cooling Device for Use in Minimally Invasive Surgery | J. Med. Devices | ASME Digital Collection
Summers, E., Cervantes, T., Batzer, R., Stark, J., and Lewis, R. (June 15, 2011). "Innovative Renal Cooling Device for Use in Minimally Invasive Surgery." ASME. J. Med. Devices. June 2011; 5(2): 027536. https://doi.org/10.1115/1.3591386
biomedical equipment, cancer, cooling, kidney, medical disorders, surgery
Over 58,000 patients suffer from renal cell carcinoma annually in the United States. Treatment for this cancer often requires surgical removal of the cancerous tissue in a partial nephrectomy procedure. In open renal surgery, the kidney is placed on ice to increase allowable ischemia time; however, there is no widely accepted method for reducing kidney temperature during minimally invasive surgery. A novel device has been designed, prototyped, and evaluated to perform effective renal cooling during minimally invasive kidney surgery to reduce damage due to extended ischemia. The device is a fluid-containing bag with foldable cooling surfaces that wrap around the organ like a taco shell. It is deployed through a 12 mm trocar, wrapped around the kidney and secured using bulldog clamps. The device then fills with an ice slurry and remains on the kidney for up to 20 min. The ice slurry is then removed from the device and the device is retracted from the body. Tests of the prototype show that the device successfully cools porcine kidneys from
37°C to 20°C in 5 minutes.
|
Calculus/Definition of a Series - Wikibooks, open books for an open world
Definition of a Series
For a sequence
{\displaystyle d}
, the corresponding series
{\displaystyle D}
is the sum of all of its terms:
{\displaystyle D=d_{1}+d_{2}+d_{3}+\cdots }
. This is true for all series, as it follows from the definition. Adding only part of the sequence is called a partial sum.
Purely using the prior definition of a series is possible, but unwieldy. Instead we can again put to use summation notation, which was partially covered in the section on 'integrals'. Some common properties and identities are outlined here.
{\displaystyle \sum _{k=1}^{n}{c}=nc}
for any constant
{\displaystyle c}
{\displaystyle \sum _{k=0}^{n}{k}={\frac {n(n+1)}{2}}}
{\displaystyle \sum _{k=0}^{n}{k^{2}}={\frac {n(n+1)(2n+1)}{6}}}
{\displaystyle \sum _{k=0}^{n}{k^{3}}={\frac {n^{2}(n+1)^{2}}{4}}}
{\displaystyle \sum _{k=a}^{n}{s_{k}}+\sum _{k=n+1}^{m}{s_{k}}=\sum _{k=a}^{m}{s_{k}}}
This is the adding of sums.
{\displaystyle j\sum _{k}^{n}{s_{k}}=\sum _{k}^{n}{js_{k}}}
Note that this is essentially the distributive property, so this will work for anything that follows the distributive property, even non-constant terms.
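The closed forms and the splitting identity above are easy to spot-check numerically (a quick Python sketch):

```python
# Verify the closed-form sums for many values of n.
for n in range(30):
    assert sum(range(n + 1)) == n * (n + 1) // 2
    assert sum(k ** 2 for k in range(n + 1)) == n * (n + 1) * (2 * n + 1) // 6
    assert sum(k ** 3 for k in range(n + 1)) == n ** 2 * (n + 1) ** 2 // 4

# Splitting a sum at an intermediate index n:
a, n, m = 2, 7, 12
s = [k * k for k in range(m + 1)]  # any sequence works
assert sum(s[a:n + 1]) + sum(s[n + 1:m + 1]) == sum(s[a:m + 1])
print("identities verified")
```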
Retrieved from "https://en.wikibooks.org/w/index.php?title=Calculus/Definition_of_a_Series&oldid=3283920"
|
ConvertGraph - Maple Help
convert graph to/from various formats
ConvertGraph(G, fmt)
ConvertGraph(G)
G - representation of a graph in one of the allowed formats
fmt - name of a format
ConvertGraph converts its input graph G to the format fmt. If G is of type GRAPHLN, the second argument is mandatory and must be one of digraph6, graph6, sparse6, or bits. If G is not of type GRAPHLN, the second argument is optional, and can be any of the above formats as well as maple.
The input graph G can be a GraphTheory graph (GRAPHLN), a networks graph, a string whose value is a digraph6, graph6, or sparse6 representation of a graph, or a bits representation of a graph (list of two integers).
\mathrm{with}\left(\mathrm{GraphTheory}\right):
\mathrm{with}\left(\mathrm{SpecialGraphs}\right):
\mathrm{d6}≔\mathrm{ConvertGraph}\left(\mathrm{PetersenGraph}\left(\right),'\mathrm{digraph6}'\right)
d6 ≔ "&IRA_dGIEPGHHOIcDDG"
\mathrm{g6}≔\mathrm{ConvertGraph}\left(\mathrm{PetersenGraph}\left(\right),'\mathrm{graph6}'\right)
g6 ≔ "Ihe@GT@DG"
\mathrm{s6}≔\mathrm{ConvertGraph}\left(\mathrm{PetersenGraph}\left(\right),'\mathrm{sparse6}'\right)
s6 ≔ ":I`ES@ocU`gfeTF"
b≔\mathrm{ConvertGraph}\left(\mathrm{PetersenGraph}\left(\right),'\mathrm{bits}'\right)
b ≔ [10, 22854075843097]
\mathrm{ConvertGraph}\left(\mathrm{g6},'\mathrm{sparse6}'\right)
":I`ES@ocU`gfeTF"
\mathrm{ConvertGraph}\left(\mathrm{s6}\right)
Graph 1: an undirected unweighted graph with 10 vertices and 15 edge(s)
The GraphTheory[ConvertGraph] command was updated in Maple 2017.
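The graph6 output above can be decoded without Maple: for n ≤ 62 vertices, the format stores n as one printable byte (value + 63) followed by the upper triangle of the adjacency matrix packed into 6-bit chunks. A minimal pure-Python decoder sketch, applied to the Petersen-graph string returned above:

```python
def decode_graph6(s):
    """Decode a graph6 string (n <= 62) into (n, edge set).

    Each character encodes 6 bits (chr value minus 63, most
    significant bit first); after the size byte, the bits list the
    upper triangle of the adjacency matrix column by column:
    x(0,1), x(0,2), x(1,2), x(0,3), ...
    """
    data = [ord(c) - 63 for c in s]
    n = data[0]
    bits = []
    for value in data[1:]:
        bits.extend((value >> shift) & 1 for shift in range(5, -1, -1))
    edges = set()
    k = 0
    for j in range(1, n):      # columns of the upper triangle
        for i in range(j):     # rows above the diagonal
            if bits[k]:
                edges.add((i, j))
            k += 1
    return n, edges

# The Maple graph6 output above for the Petersen graph:
n, edges = decode_graph6("Ihe@GT@DG")
print(n, len(edges))  # 10 15
```

Decoding recovers 10 vertices, 15 edges, and a 3-regular graph, matching the Graph 1 summary printed by Maple.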
|
12 CFR § 324.144 - Simplified supervisory formula approach (SSFA). | CFR | US Law | LII / Legal Information Institute
(a) General requirements for the SSFA. To use the SSFA to determine the risk weight for a securitization exposure, an FDIC-supervised institution must have data that enables it to assign accurately the parameters described in paragraph (b) of this section. Data used to assign the parameters described in paragraph (b) of this section must be the most currently available data; if the contracts governing the underlying exposures of the securitization require payments on a monthly or quarterly basis, the data used to assign the parameters described in paragraph (b) of this section must be no more than 91 calendar days old. An FDIC-supervised institution that does not have the appropriate data to assign the parameters described in paragraph (b) of this section must assign a risk weight of 1,250 percent to the exposure.
(b) SSFA parameters. To calculate the risk weight for a securitization exposure using the SSFA, an FDIC-supervised institution must have accurate information on the following five inputs to the SSFA calculation:
(1) KG is the weighted-average (with unpaid principal used as the weight for each exposure) total capital requirement of the underlying exposures calculated using subpart D of this part. KG is expressed as a decimal value between zero and one (that is, an average risk weight of 100 percent represents a value of KG equal to 0.08).
(3) Parameter A is the attachment point for the exposure, which represents the threshold at which credit losses will first be allocated to the exposure. Except as provided in § 324.142(l) for nth-to-default credit derivatives, parameter A equals the ratio of the current dollar amount of underlying exposures that are subordinated to the exposure of the FDIC-supervised institution to the current dollar amount of underlying exposures. Any reserve account funded by the accumulated cash flows from the underlying exposures that is subordinated to the FDIC-supervised institution's securitization exposure may be included in the calculation of parameter A to the extent that cash is present in the account. Parameter A is expressed as a decimal value between zero and one.
(4) Parameter D is the detachment point for the exposure, which represents the threshold at which credit losses of principal allocated to the exposure would result in a total loss of principal. Except as provided in § 324.142(l) for n th-to-default credit derivatives, parameter D equals parameter A plus the ratio of the current dollar amount of the securitization exposures that are pari passu with the exposure (that is, have equal seniority with respect to credit risk) to the current dollar amount of the underlying exposures. Parameter D is expressed as a decimal value between zero and one.
(c) Mechanics of the SSFA. KG and W are used to calculate KA, the augmented value of KG, which reflects the observed credit quality of the underlying exposures. KA is defined in paragraph (d) of this section. The values of parameters A and D, relative to KA, determine the risk weight assigned to a securitization exposure as described in paragraph (d) of this section. The risk weight assigned to a securitization exposure, or portion of a securitization exposure, as appropriate, is the larger of the risk weight determined in accordance with this paragraph (c), paragraph (d) of this section, and a risk weight of 20 percent.
(1) When the detachment point, parameter D, for a securitization exposure is less than or equal to KA, the exposure must be assigned a risk weight of 1,250 percent;
(2) When the attachment point, parameter A, for a securitization exposure is greater than or equal to KA, the FDIC-supervised institution must calculate the risk weight in accordance with paragraph (d) of this section;
(3) When parameter A is less than KA and parameter D is greater than KA, the applicable risk weight is a weighted average of 1,250 percent and 1,250 percent times KSSFA, where:
(i) The weight assigned to 1,250 percent equals (KA − A)/(D − A); and
(ii) The weight assigned to 1,250 percent times KSSFA equals (D − KA)/(D − A). The risk weight will be set equal to:
Risk Weight = [(KA − A)/(D − A)] · 1,250 percent + [(D − KA)/(D − A)] · 1,250 percent · KSSFA
(d) SSFA equation. (1) The FDIC-supervised institution must define the following parameters:
KA = (1 − W) · KG + (0.5 · W)
a = −1/(p · KA)
u = D − KA
l = max(A − KA, 0)
e = 2.71828, the base of the natural logarithms.
(2) Then the FDIC-supervised institution must calculate KSSFA according to the following equation:
KSSFA = (e^(a·u) − e^(a·l)) / (a · (u − l))
(3) The risk weight for the exposure (expressed as a percent) is equal to KSSFA × 1,250.
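Paragraphs (c) and (d) translate directly into code. A Python sketch (the function name is illustrative, and the supervisory parameter p = 0.5 used as a default is an assumption for securitization exposures that are not resecuritizations):

```python
import math

def ssfa_risk_weight(kg, w, attach, detach, p=0.5):
    """Risk weight (in percent) for a securitization exposure under the
    SSFA mechanics of 12 CFR 324.144(c)-(d).  kg, w, attach (A), and
    detach (D) are decimals between zero and one.
    """
    ka = (1 - w) * kg + 0.5 * w                  # paragraph (d)(1)
    if detach <= ka:                             # paragraph (c)(1)
        return 1250.0
    a = -1.0 / (p * ka)
    u = detach - ka
    l = max(attach - ka, 0.0)
    k_ssfa = (math.exp(a * u) - math.exp(a * l)) / (a * (u - l))
    if attach >= ka:                             # paragraph (c)(2)
        rw = 1250.0 * k_ssfa
    else:                                        # straddling tranche, (c)(3)
        w1 = (ka - attach) / (detach - attach)
        rw = w1 * 1250.0 + (1 - w1) * 1250.0 * k_ssfa
    return max(rw, 20.0)                         # 20 percent floor, (c)

# A mezzanine tranche attaching above KA = 0.08:
print(round(ssfa_risk_weight(0.08, 0.0, 0.10, 0.20), 1))
```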
|
On the martingale problem associated to the 2D and 3D stochastic Navier–Stokes equations | EMS Press
Antenne de Bretagne, Bruz, France
In this paper we consider a Markov semigroup
(P_t)_{t\ge 0}
associated to the 2D and 3D stochastic Navier–Stokes equations. In the two-dimensional case
P_t
is unique, whereas in the three-dimensional case (where uniqueness is not known) it is constructed as in \cite{DPD-NS3D} and \cite{DO06}. For
d=2
, we exhibit a core, identify the abstract generator of
(P_t)_{t\ge 0}
with the differential Kolmogorov operator
L
on this core, and prove existence and uniqueness for the corresponding martingale problem. In dimension
3
, we are not able to prove a similar result and we explain the difficulties encountered. Nonetheless, we exhibit a core for the generator of the transformed semigroup
(S_t)_{t\ge 0},
obtained by adding a suitable potential and then using the Feynman–Kac formula. We then identify the abstract generator of
(S_t)_{t\ge 0}
with a differential operator
N
on this core and prove uniqueness for the stopped martingale problem.
Giuseppe Da Prato, Arnaud Debussche, On the martingale problem associated to the 2D and 3D stochastic Navier–Stokes equations. Atti Accad. Naz. Lincei Cl. Sci. Fis. Mat. Natur. 19 (2008), no. 3, pp. 247–264
|
{\displaystyle f:\mathbb {F} _{2^{n}}\rightarrow \mathbb {F} _{2}}
{\displaystyle 0}
{\displaystyle \pm \mu }
{\displaystyle \mu }
is called the amplitude of {\displaystyle f}
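The Walsh-spectrum condition above (values 0 and ±μ only) is easy to test exhaustively for small n. A pure-Python sketch (the example function, the inner product x₁x₂ + x₃x₄, is a standard bent function on 4 variables, so its spectrum takes only the values ±4 = ±2^(n/2), i.e. it is plateaued with amplitude 4):

```python
def walsh_spectrum(f, n):
    """W_f(w) = sum over x in F_2^n of (-1)**(f(x) + w.x), with x and w
    encoded as integers and w.x the mod-2 inner product of their bit
    vectors."""
    def dot(u, v):
        return bin(u & v).count("1") & 1
    return [sum((-1) ** (f(x) ^ dot(w, x)) for x in range(1 << n))
            for w in range(1 << n)]

# f(x1, x2, x3, x4) = x1*x2 + x3*x4, a bent function on F_2^4:
f = lambda x: ((x & 1) & ((x >> 1) & 1)) ^ (((x >> 2) & 1) & ((x >> 3) & 1))
spec = walsh_spectrum(f, 4)
print(sorted(set(abs(v) for v in spec)))  # every value has magnitude 4
```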
{\displaystyle F}
{\displaystyle (n,m)}
{\displaystyle F}
{\displaystyle u\cdot F}
{\displaystyle u\neq 0}
{\displaystyle F}
{\displaystyle F}
{\displaystyle a} and every
{\displaystyle v}
{\displaystyle \{b\in \mathbb {F} _{2}^{n}:D_{a}D_{b}F(x)=v\}}
{\displaystyle x}
{\displaystyle \{b\in \mathbb {F} _{2}^{n}:D_{a}F(b)=D_{a}F(x)+v\}}
{\displaystyle x}
{\displaystyle f_{\phi ,h}}
{\displaystyle f_{\phi ,h}(x,y)=x\cdot \phi (y)+h(y)}
{\displaystyle x\in \mathbb {F} _{2}^{r},y\in \mathbb {F} _{2}^{s}}
{\displaystyle r}
{\displaystyle s}
{\displaystyle n=r+s}
{\displaystyle \phi :\mathbb {F} _{2}^{s}\rightarrow \mathbb {F} _{2}^{r}}
{\displaystyle h:\mathbb {F} _{2}^{s}\rightarrow \mathbb {F} _{2}}
{\displaystyle f_{\phi ,h}}
{\displaystyle W_{f_{\phi ,h}}(a,b)=2^{r}\sum _{y\in \phi ^{-1}(a)}(-1)^{b\cdot y+h(y)}}
{\displaystyle (a,b)}
{\displaystyle \phi }
{\displaystyle f_{\phi ,h}}
{\displaystyle 2^{r}}
{\displaystyle 2^{r+1}}
{\displaystyle f}
{\displaystyle \sum _{a,b\in \mathbb {F} _{2}^{n}}(-1)^{D_{a}D_{b}f(x)}}
{\displaystyle x\in \mathbb {F} _{2}^{n}}
{\displaystyle F}
{\displaystyle (n,m)}
{\displaystyle v\in \mathbb {F} _{2}^{m}}
{\displaystyle \{(a,b)\in (\mathbb {F} _{2}^{n})^{2}:D_{a}D_{b}F(x)=v\}}
{\displaystyle x}
{\displaystyle x}
{\displaystyle v\in \mathbb {F} _{2}^{m}}
{\displaystyle v\neq 0}
{\displaystyle F}
{\displaystyle D_{a}D_{b}F(x)}
{\displaystyle D_{a}F(b)+D_{a}F(x)}
{\displaystyle (a,b)}
{\displaystyle (\mathbb {F} _{2}^{n})^{2}}
{\displaystyle F,G}
{\displaystyle u\cdot F,u\cdot G}
{\displaystyle F(x)=x^{d}}
{\displaystyle \lambda \neq 0}
{\displaystyle |\{(a,b)\in \mathbb {F} _{2^{n}}^{2}:D_{a}F(b)+D_{a}F(x)=v\}|=|\{(a,b)\in \mathbb {F} _{2^{n}}^{2}:D_{a}F(b)+D_{a}F(x/\lambda )=v/\lambda ^{d}\}|.}
{\displaystyle F}
{\displaystyle v\in \mathbb {F} _{2^{n}}}
{\displaystyle |\{(a,b)\in \mathbb {F} _{2^{n}}^{2}:D_{a}F(b)+D_{a}F(1)=v\}|=|\{(a,b)\in \mathbb {F} _{2^{n}}^{2}:D_{a}F(b)+D_{a}F(0)=v\}|;}
{\displaystyle F}
{\displaystyle v\neq 0}
{\displaystyle F}
{\displaystyle (n,m)}
{\displaystyle F}
{\displaystyle v,x\in \mathbb {F} _{2}^{n}}
{\displaystyle |\{(a,b)\in (\mathbb {F} _{2}^{n})^{2}:D_{a}D_{b}F(x)=v\}|=|\{(a,b)\in (\mathbb {F} _{2}^{n})^{2}:F(a)+F(b)=v\}|.}
{\displaystyle F}
{\displaystyle v}
{\displaystyle v\neq 0}
{\displaystyle {\rm {Im}}(D_{a}F)}
{\displaystyle F}
{\displaystyle f}
{\displaystyle {\Delta _{f}}(a)=\sum _{x\in \mathbb {F} _{2}^{n}}(-1)^{f(x)+f(x+a)}}
An {\displaystyle n}-variable Boolean function {\displaystyle f} is plateaued if and only if, for every {\displaystyle x\in \mathbb {F} _{2}^{n}},
{\displaystyle 2^{n}\sum _{a\in \mathbb {F} _{2}^{n}}\Delta _{f}(a)\Delta _{f}(a+x)=\left(\sum _{a\in \mathbb {F} _{2}^{n}}\Delta _{f}^{2}(a)\right)\Delta _{f}(x).}
{\displaystyle (n,m)}
{\displaystyle F}
{\displaystyle x\in \mathbb {F} _{2}^{n},u\in \mathbb {F} _{2}^{m}}
{\displaystyle 2^{n}\sum _{a\in \mathbb {F} _{2}^{n}}\Delta _{u\cdot F}(a)\Delta _{u\cdot F}(a+x)=\left(\sum _{a\in \mathbb {F} _{2}^{n}}\Delta _{u\cdot F}^{2}(a)\right)\Delta _{u\cdot F}(x).}
{\displaystyle F}
{\displaystyle x\in \mathbb {F} _{2}^{n},u\in \mathbb {F} _{2}^{m}}
{\displaystyle \sum _{a\in \mathbb {F} _{2}^{n}}\Delta _{u\cdot F}(a)\Delta _{u\cdot F}(a+x)=\mu ^{2}\Delta _{u\cdot F}(x).}
{\displaystyle F}
{\displaystyle x,v\in \mathbb {F} _{2}^{n}}
{\displaystyle 2^{n}|\{(a,b,c)\in (\mathbb {F} _{2}^{n})^{3}:F(a)+F(b)+F(c)+F(a+b+c+x)=v\}|=|\{(a,b,c,d)\in (\mathbb {F} _{2}^{n})^{4}:F(a)+F(b)+F(c)+F(a+b+c)+F(d)+F(d+x)=v\}|.}
{\displaystyle f:\mathbb {F} _{2^{n}}\rightarrow \mathbb {F} _{2}}
{\displaystyle 0\neq \alpha \in \mathbb {F} _{2}^{n}}
{\displaystyle \sum _{w\in \mathbb {F} _{2}^{n}}W_{f}(w+\alpha )W_{f}^{3}(w)=0.}
{\displaystyle (n,m)}
{\displaystyle F}
{\displaystyle u\in \mathbb {F} _{2}^{m}}
{\displaystyle 0\neq \alpha \in \mathbb {F} _{2}^{n}}
{\displaystyle \sum _{w\in \mathbb {F} _{2}^{n}}W_{F}(w+\alpha ,u)W_{F}^{3}(w,u)=0.}
{\displaystyle F}
{\displaystyle \sum _{w\in \mathbb {F} _{2}^{n}}W_{F}^{4}(w,u)}
{\displaystyle u}
{\displaystyle u\neq 0}
{\displaystyle f:\mathbb {F} _{2^{n}}\rightarrow \mathbb {F} _{2}}
{\displaystyle b\in \mathbb {F} _{2^{n}}}
{\displaystyle \sum _{a\in \mathbb {F} _{2^{n}}}W_{f}^{4}(a)=2^{n}(-1)^{f(b)}\sum _{a\in \mathbb {F} _{2^{n}}}(-1)^{a\cdot b}W_{f}^{3}(a).}
{\displaystyle (n,m)}
{\displaystyle F}
{\displaystyle b\in \mathbb {F} _{2}^{n}}
{\displaystyle u\in \mathbb {F} _{2}^{m}}
{\displaystyle \sum _{a\in \mathbb {F} _{2}^{n}}W_{F}^{4}(a,u)=2^{n}(-1)^{u\cdot F(b)}\sum _{a\in \mathbb {F} _{2}^{n}}(-1)^{a\cdot b}W_{F}^{3}(a,u).}
{\displaystyle F}
{\displaystyle u}
{\displaystyle u\neq 0}
For every Boolean function {\displaystyle f} in {\displaystyle n} variables,
{\displaystyle \left(\sum _{a\in \mathbb {F} _{2}^{n}}W_{f}^{4}(a)\right)^{2}\leq 2^{2n}\left(\sum _{a\in \mathbb {F} _{2}^{n}}W_{f}^{6}(a)\right),}
with equality if and only if {\displaystyle f} is plateaued.
{\displaystyle (n,m)}
{\displaystyle F}
{\displaystyle \sum _{u\in \mathbb {F} _{2}^{m}}\left(\sum _{a\in \mathbb {F} _{2}^{n}}W_{F}^{4}(a,u)\right)^{2}\leq 2^{2n}\sum _{u\in \mathbb {F} _{2}^{m}}\left(\sum _{a\in \mathbb {F} _{2}^{n}}W_{F}^{6}(a,u)\right),}
{\displaystyle F}
{\displaystyle (n,m)}
{\displaystyle \sum _{u\in \mathbb {F} _{2}^{m}}\sum _{a\in \mathbb {F} _{2}^{n}}W_{F}^{4}(a,u)\leq 2^{n}\sum _{u\in \mathbb {F} _{2}^{m}}{\sqrt {\sum _{a\in \mathbb {F} _{2}^{n}}W_{F}^{6}(a,u)}},}
{\displaystyle F}
{\displaystyle D_{a}F(x)=D_{a}F(0)}
{\displaystyle F}
{\displaystyle (n,n)}
{\displaystyle F}
{\displaystyle F(x)+F(x+a)=F(0)+F(a)}
{\displaystyle 0\neq a\in \mathbb {F} _{2}^{n}}
{\displaystyle F}
{\displaystyle (n,n)}
{\displaystyle F(0)=0}
{\displaystyle F}
{\displaystyle |\{(x,b)\in \mathbb {F} _{2^{n}}^{2}:F(x)+F(x+b)+F(b)=0\}|=3\cdot 2^{n}-2,}
{\displaystyle \sum _{a\in \mathbb {F} _{2^{n}},u\in \mathbb {F} _{2^{n}}^{*}}W_{F}^{3}(a,u)=2^{2n+1}(2^{n}-1).}
{\displaystyle (n,n)}
{\displaystyle 3\cdot 2^{3n}-2^{2n+1}\leq \sum _{u\in \mathbb {F} _{2}^{n}}{\sqrt {\sum _{a\in \mathbb {F} _{2}^{n}}W_{F}^{6}(a,u)}},}
{\displaystyle F}
{\displaystyle 2^{\lambda _{u}}}
{\displaystyle u\cdot F}
{\displaystyle F}
{\displaystyle F} is APN if and only if
{\displaystyle \sum _{0\neq u\in \mathbb {F} _{2}^{n}}2^{2\lambda _{u}}\leq 2^{n+1}(2^{n}-1).}
{\displaystyle F}
{\displaystyle (n,n)}
{\displaystyle |\{(a,b)\in (\mathbb {F} _{2}^{n})^{2}:a\neq b,F(a)=F(b)\}|\geq 2\cdot (2^{n}-1),}
{\displaystyle F}