text
stringlengths 100
957k
| meta
stringclasses 1
value |
|---|---|
# Discrete Wavelet Transform (DWT) and wavelet family
I have just started reading about wavelets for a data compression problem that I want to perform. I am reading about Discrete Wavelet Transform (DWT) but I can't understand where the wavelet family that has to be set is used.
This is the DWT schema
I do not understand where is the family wavelet used if only low-pass and high-pass filters are being applied and subsampling. There is a step I'm missing or I am lost.
Thanks for the help.
There are actually four filters involved:
• 2 for the decomposition of signals [the h[n] and g[n] in the diagram above]
• 2 for the reconstruction of signals
The diagram you are showing is only for signal decomposition. There is a corresponding diagram for signal reconstruction which involves upsampling the coefficients by inserting zeros, then passing them through reconstruction low pass and high pass filters, and then summing the approximation and detail components.
The four filters together form a perfect reconstruction filter bank.
• dec_lo (decompostion low pass filter), dec_hi (decomposition high pass filter)
• rec_lo (reconstruction low pass filter), rec_hi (reconstruction high pass filter)
For orthogonal wavelets, these filters have a specific relationship:
• dec_lo = rec_lo[::-1]
• rec_hi = qmf(rec_lo)
• dec_hi = rec_hi[::-1]
where qmf stands for quadrature mirror filter:
def qmf(h):
g = h[::-1]
g[1::2] = -g[1::2]
return g
Thus, if you have chosen a rec_lo filter properly, all other filters are automatically derived from it. This discussion is limited to orthogonal wavelets.
A wavelet family essentially describes such filter banks. Each member of a wavelet family corresponds to a unique filter bank. Every family of wavelets has some unique features [like the number of vanishing moments of the scaling and wavelet functions, symmetry in the wavelet, etc.].
The wavelet or scaling functions are not directly used in the DWT or IDWT. They characterize the filter banks. However, if you pass a specific impulse function as input to the DWT, you will get the scaling or wavelet function at the appropriate scale and location as output.
The pairs of low-pass and high-pass filters that may undergo a sub-sampling and yet retain all the information are a special subclass of perfect reconstruction filter banks. Under some additional conditions, their iterations yield the 2-band wavelet functions.
|
{}
|
1887
### Abstract
6283 Sequence monitoring J. Knezevic* (Union University) SUMMARY 5 th Congress of Balkan Geophysical Society — Belgrade Serbia 10 – 16 May 2009On the basis of relation between period of sampling and period of fluctuation observed phenomenon geophysical monitoring could be: a)continual monitoring b)sequence monitoring. Natural phenomenon are of continual character so the signal of that phenomena is continual time function -continual signal. The ideal case for continual monitoring would be when velocity of sampling is at least 100 times bigger then Nikvist frequency. In real case continual monitoring contains acquisition made by the sampling velociti with frequency that is
/content/papers/10.3997/2214-4609-pdb.126.6283
2009-05-10
2021-11-29
|
{}
|
Palindromic Sum
Tag(s):
FFT, Math, Medium-Hard
Problem
Editorial
Analytics
Given an array A of length N, find the number of non empty sub-arrays such that sum of all the elements in the sub-array is a palindrome. In other words, you have to find number of pairs $(i,\;j)$ such that $\sum_{x=i}^j A_x$ is a palindrome where $(1 \le i \le j \le N)$.
Input Format:
First line contains an integer, N $(1 \le N \le 5 * 10^5)$. Second line contains N space separated integers, $A_i$ $(1 \le A_i \le 2 * 10^6)$, elements of the array A. The sum of all the elements in the array is in the range $[1, 2 * 10^6]$.
Output Format:
Print an integer, number of non empty sub-arrays such that sum of all the elements in the sub-array is a palindrome.
SAMPLE INPUT
4
10 1 12 3
SAMPLE OUTPUT
3
Explanation
The 3 sub-arrays are $(10,\;1)$, 1 and 3.
Time Limit: 2.0 sec(s) for each input file.
Memory Limit: 256 MB
Source Limit: 1024 KB
Marking Scheme: Marks are awarded when all the testcases pass.
Allowed Languages: C, C++, C++14, Clojure, C#, D, Erlang, F#, Go, Groovy, Haskell, Java, Java 8, JavaScript(Rhino), JavaScript(Node.js), Julia, Kotlin, Lisp, Lisp (SBCL), Lua, Objective-C, OCaml, Octave, Pascal, Perl, PHP, Python, Python 3, R(RScript), Racket, Ruby, Rust, Scala, Swift, Swift-4.1, Visual Basic
CODE EDITOR
Initializing Code Editor...
Contributor
Challenge Name
HackerEarth Collegiate Cup - Mirror Round
OTHER PROBLEMS OF THIS CHALLENGE
• Basic Programming > Bit Manipulation
• Algorithms > Dynamic Programming
• Algorithms > Sorting
• Algorithms > Dynamic Programming
• Algorithms > Graphs
• Algorithms > Graphs
• Data Structures > Advanced Data Structures
• Algorithms > Graphs
• Algorithms > Searching
• Algorithms > Dynamic Programming
|
{}
|
# 1) A solution is prepared with 0.55 M HNO2 and 0.75 M KNO2. Fill in the ICE Table with the appropriate values
The time for answering the question is over
271 cents
Based on the calculations through the ICE table, the pH of the buffer solution is equal to 3.30.
Given the following data:
• Concentration of = 0.55 M.
• Concentration of = 0.75 M.
• Rate constant =
### How to determine the pH of the buffer solution.
First of all, we would write the properly balanced chemical equation for this chemical reaction:
Initial cond. 0.55M 0 0.75M
-x x x
At equib. 0.55M - x 0 + x 0.75M + x
From the ICE table, the Ka for this chemical reaction is given by:
Now, we can calculate the pH of the buffer solution:
pH = 3.30.
Alternatively, you can calculate the pH of this buffer solution by applying Henderson-Hasselbalch equation:
Where:
• HA is acetic acid.
• is acetate ion.
|
{}
|
Publisher.zip completions
⇐ Notes archive
(This is an entry in my technical notebook. There will likely be typos, mistakes, or wider logical leaps — the intent here is to “let others look over my shoulder while I figure things out.”)
Zipping — in general — is a pairwise affair. Optional’s zip is non-nil if both arguments are. Similarly for Result’s zip along the .success case. Swift.zip pairs until it runs off the shorter of the sequence arguments. Parsers.Take2 (another name for zipped parsing) succeeds if both parsers involved do.
While Publisher.zip and its higher-arity overloads follow suit for value events, there’s a subtle gotcha for .finished events (failures are immediately passed downstream).
A zipped publisher can complete even if all of its inner publishers don’t.
This checks out after a pause — since second completes after the first (1, 2) pair comes through, there’s no chance it’ll pair with any future value events from first. Hence the completion. So, even though zipping is usually synonymous with “pairing” in my head, I’ll need to remember that doesn’t necessarily extend to completion events.
|
{}
|
Kategorien
# The power set of a set with n elements contains 2^n elements
## Assertion
The power set P(M) of a set M with n elements contains 2n elements.
## Proof
### base case: n = 0
The set which contains 0 elements is the empty set $$\emptyset = \{~\}$$.
Its power set contains 1 element (1 = 20), namely the empty set: $$P(\emptyset) = \{\emptyset\} = \{\{~\}\}$$.
### Iinductive step: A(n) => A(n+1)
Let $$M_{n+1}$$ be a set with n+1 elements. $$e_1,~e_2,~\dots~e_{n},~e_{n+1}$$.
It hast the subset $$M_{n}$$ with n elements $$e_1,~e_2,~\dots~e_{n}$$.
$$P(M_{n})$$ has $$2^n$$ elements (assumption), namely the $$2^n$$ subsets of $$M_{n}$$: $$T_{1},~T_{2},\dots~T_{2^n}$$.
They are the subsets of $$M_{n}$$ that implies that they are subsets of $$M_{n+1}$$ so they are elements of $$P(M_{n+1})$$.
Additionally contains $$P(M_{n+1})$$ the subsets ofon $$M_{n+1}$$ that contain the element $$e_{n+1}$$
Those can be constructed by joining $$T_i \quad i\in\{1\dots 2^n\}$$ with $$e_{n+1}$$
Each subset $$T_i$$ adds one element: $$e_{n+1}$$. $$T_i‘ = T_i \cup e_{n+1}$$. The number of subsets doubles.
We have:
$$P(M_{n+1}) = \{T_{1},~T_{2},\dots~T_{2^n},~T_{1}‘,~T_{2}‘,\dots~T_{2^n}’\}$$
The power set has $$2^n + 2^n = 2\cdot 2^n = 2^{n+1}$$ elements.
q.e.d.
|
{}
|
# triangles
0th
Percentile
##### Extract a list of triangle from a triangulation object
This function extracts a triangulation data structure from an triangulation object created by tri.mesh.
The vertices in the returned matrix (let's denote it with retval) are ordered counterclockwise with the first vertex taken to be the one with smallest index. Thus, retval[i,"node2"] and retval[i,"node3"] are larger than retval[i,"node3"] and index adjacent neighbors of node retval[i,"node1"]. The columns trx and arcx, x=1,2,3 index the triangle and arc, respectively, which are opposite (not shared by) node nodex, with trix= 0 if arcx indexes a boundary arc. Vertex indexes range from 1 to N, triangle indexes from 0 to NT, and, if included, arc indexes from 1 to NA = NT+N-1. The triangles are ordered on first (smallest) vertex indexes, except that the sets of constraint triangles (triangles contained in the closure of a constraint region) follow the non-constraint triangles.
##### Usage
triangles(tri.obj)
##### Arguments
tri.obj
object of class "tri"
##### Value
• A matrix with columns node1,node2,node3, representing the vertex nodal indexes, tr1,tr2,tr3, representing neighboring triangle indexes and arc1,arc2,arc3 reresenting arc indexes.
Each row represents one triangle.
##### References
R. J. Renka (1996). Algorithm 751: TRIPACK: a constrained two-dimensional {Delaunay} triangulation package. ACM Transactions on Mathematical Software. 22, 1-8.
tri, print.tri, plot.tri, summary.tri, triangles
• triangles
##### Examples
# we will use the test data from library(akima):
library(akima)
data(akima)
akima.tr<-tri.mesh(akima$x,akima$y)
triangles(akima.tr)
Documentation reproduced from package tripack, version 1.0-1, License: R functions: GPL, Fortran code: available at netlib
### Community examples
Looks like there are no examples yet.
|
{}
|
# how to create input file for collaborative filtering
I want to find missing value in a matrix based on past matrices
But I find that the test.data are not matrices, instead a long list of 3 columns
How do I understand test.data and create my own data?
Which value represent missing value?
and how to understand the result, which value represent result?
user, item, rating
|
{}
|
### Chapter Chosen
Cell : The Unit of Life
### Book Store
Currently only available for.
CBSE Gujarat Board Haryana Board
### Previous Year Papers
Download the PDF Question Papers Free for off line practice and view the Solutions online.
Currently only available for.
Class 10 Class 12
Name two cell-organelles that are double membrane bound. What are the characteristics of these two organelles ? State their function and draw labelled diagrams of both.
Chloroplasts and mitochondria are organelles which have double membrane .
Mitochondria - is called the power house of the cell as it produces cellular energy in the form of ATP. It is bounded by two membrane, the inner membrane forms inner foldings called cristae. The matrix has a single circular DNA molecule.
Chloroplasts - are also double membrane bound organelles which help to trap the solar energy and produce food. The stroma contains many flattened disc like structures called thylakoids which are arranged in stacks to gorm grana. The chloroplasts consists of the pigment chlorophyll that helps to convert sun's energy to chemical energy.
Functions of :
Mitochondria - It produces the cellular energy in the form of ATP.
Chloroplasts - trap the energy of the sun and produce food.
Who reported that the cell had an outer layer which is known as plasma membrane today?
Theodore Schwann
What are receptor molecules ?
Receptor molecules are specific proteins in the cell's plasma membrane that receive chemical signals from outside the cell. When such chemical signals bind to a receptor, they cause some form of cellular response.
State one difference between gram positive and gram negative bacteria.
Gram positive bacteria retains the gram stain whereas the Gram negative bacteria does not.
What is the function of contractile vacuole in amoeba?
Excretion.
Who concluded, “Cells are the ultimate units forming the structure of all plant tissues”?
Mathias Schleiden.
Do a good deed today
Refer a friend to Zigya
|
{}
|
# Jim Yeck: a life in big infrastructures
22 May 2014
The person in charge of constructing the ESS.
To paraphrase lines from the title song of a well-known film: “If there’s something big in your neighbourhood, who ya gonna call?” If the neighbourhood is particle physics, then it could well be Jim Yeck, who delights in seeing things built. This enthusiasm has underpinned his leadership of a number of successful big scientific infrastructure projects in the US, including the important US hardware contribution to the LHC and the ATLAS and CMS experiments.
Yeck’s first exposure to big science projects was as a graduate engineer in the late 1980s at the Princeton Plasma Physics Laboratory, where there was a proposal to build the $300 million Compact Ignition Tokomak. However, in 1989 the project was cancelled, because plasma ignition could not be guaranteed and the international ITER initiative was on the horizon. “It was a formative experience,” says Yeck, and instead of nuclear fusion, he found himself working on risk assessment for large science projects, which was to prove valuable for his future career. In the autumn of 1990, he was asked by the US Department of Energy (DOE) to become the project manager for the construction of the Relativistic Heavy-Ion Collider (RHIC) at Brookhaven National Laboratory. Like its ancestor – the Intersecting Storage Rings at CERN – RHIC was built with two interlaced rings, but broke new ground by incorporating 1740 superconducting magnets, most of which were made in industry. Looking back, Yeck points out that the project was approved in a different era, “when you knew you had issues that you would have to work out later”. Basically underfunded, it was built against a background of tight budget constraints. “Such a project needs strong leadership, which we had in Nick Samios, the lab director, Satoshi Ozaki, the project director, and others,” he says. Yeck remained with RHIC until the autumn of 1997, when the US was in the final stages of signing an agreement to contribute to building hardware for the LHC and the ATLAS and CMS experiments, and to become an Observer State of CERN. The DOE and the National Science Foundation (NSF) appointed him project director for this$531 million contribution, which comprised $200 million from the DOE for the LHC accelerator, and$331 million from the DOE and the NSF for ATLAS and CMS. At the time more than 550 US scientists from nearly 60 universities and six of the DOE’s national laboratories were involved.
“This was on the heels of the cancellation of the SSC [the Superconducting Super Collider] and the community recognized that it was imperative that the LHC should work and that the US should be part of it,” Yeck recalls. “People rallied together – it was beautiful.” There were to be many difficult issues to resolve and compromises to be made, but with a background in engineering rather than particle physics, Yeck had the advantage of being a clearly defined “enabler”, with no bias.
In late 2003, with the LHC’s progress on firm ground, Yeck moved on again, to become director of a rather different astroparticle-physics project. The IceCube Neutrino Observatory at the South Pole is not only at an exotic location with an international collaboration, it is run principally by the University of Wisconsin, and Yeck says that it interested him to show that a university can take on leadership of a large infrastructure project. IceCube was funded to the tune of \$280 million, in this case mostly by the NSF, who had less experience of big projects than the DOE. There was also the interesting logistical challenge of constructing and operating the huge 1 km3 detector at the South Pole.
The old model of a country going it alone doesn’t work for such projects
Jim Yeck
During the long construction phase linked to summers at the South Pole, Yeck agreed to help launch construction of the National Synchrotron Light Source II back at Brookhaven, and served as deputy project director in the years 2006–2008. Then, 10 years after taking on IceCube, he made his latest change – to another kind of facility, another continent, and a different user community. In March 2013 he became chief executive officer (CEO) of the European Spallation Source (ESS), taking over from the first CEO, Colin Carlile.
The ESS will serve a research community dispersed across many fields of science, with potential users numbering in the thousands. “The old model of a country going it alone doesn’t work for such projects,” says Yeck. Instead, the ESS is furthering the approach of bringing many nations to work together, and with 17 partner countries it is approaching CERN in terms of the number of members. Using an analogy that should appeal to physicists, Yeck says: “CERN is an existence proof, and others have drawn on this. But the initial conditions have to be right.” When setting up rules for the governance of the new facility, ESS based many of the principles on those established 60 years ago for CERN.
Yeck’s experience has taught him what is important in making a success of such a project: “The facility has to be a priority for the scientific community”, he says. “If you don’t have that foundation, it’s a problem. Then you need commitments and a strong role from the facility host. And the leadership has to see itself as enabling the success of others.” A particular challenge of the ESS is that it is new in more ways than one – a new organization on a green-field site, much like CERN was in 1954. “Such an organization needs experienced people who can catalyse the successful efforts of many,” says Yeck. “We also have to establish realistic goals – it’s a case of putting experience over hope.”
The ESS management has been working hard during the past year on a realistic plan, which was reviewed in November by a committee of 33 members from a broad community, chaired by CERN’s Mario Nessi. Yeck learnt to appreciate the value of such reviews during his time in the US. “If you have problems, you can also seek collective ownership of solutions,” he explains. “And there will be problems. To pretend that you are not going to have them is a big mistake.” However, Yeck is a man who delights in seeing things built and the ESS is no exception. “It’s fantastically challenging, with contributions from many people,” he says, “but that’s what’s captivating.”
|
{}
|
Understanding the time-complexity of Insertion Sort
From my textbook, I am studying the time-complexity of the insertion sort algorithm (shown below).
INSERTION-SORT(A) cost times
1 for j <- 2 to length[A] c1 n
2 DO key <- A[j] c2 n - 1
3 ▷ Insert A[j] into the sorted
▷ sequence A[i..j-1] 0 n - 1
4 i <- j-1 c4 n - 1
5 while i > 0 and A[i] > key c5 sum_{j=2}^2 t_j
6 do a[i+1]<-A[i] c6 sum_{j=2}^2 (t_j-1)
7 i <- i-1 c7 sum_{j=2}^2 (t_j-1)
8 A[i+1] <- key c8 n - 1
The algorithm above shows the times that each statement is executed. But wait, why is line 1 executed n times?
Shouldn't line 1 be executed n-1 times since insertion sort starts making comparisons at the second element at the list.
|
{}
|
# Probability for selecting centroids - K-means++
K-means++ selects centroids one by one, where each point has the chance to become next centroid with probability proportional to distance to closest centroid already selected.
I implemented it like this (for selecting one centroid):
• Calculate distance for each point to existing centroids and save distance to closest centroid
• Divide distance to closest centroid by total distance in cluster of that centroid (sum of distances from each point in cluster to centroid)
• Sort distances for all points (in this step we consider that given distances that were divided by total distance represent probability for selecting the point as centroid)
• Create an array of cummulative probabilities (for example, array 0.2, 0.3, 0.5 gives array 0.2, 0.5, 1)
• Generate a random number between 0 and 1
• New centroid is the point that is represented by the interval to which the generated number belongs
In this way third point from the example has the greatest probability being selected since it has the greatest interval.
Is this a good way to implement K-means++ like initialization?
The problem is that I'm not getting the expected result, so I'm not sure if I misunderstood the concept or I got something wrong with implementation.
The problem occurs with a relatively large dataset (5000 points) with 15 clusters. Concretely - it happens that multiple points from the same cluster, that are relatively close to each other, get selected as centroids.
I'm guessing that intervals get relatively small since there are 5000 points and then the concept of greater probabilities gets lost. For example, if there's 800 points in some cluster we get probabilities of about 1/800, so probability for one point could be for example 0.00125 and for the other one 0.00130, so it seems like I'm getting away from weighted probability and getting to uniform probability.
Am I missing something?
The paper (see section 2.2) suggests that you use the squared distance when computing probabilities. In fact, you can try distance$^{\ell}$ using any exponent $\ell$ greater than 1. It stands to reason that as $\ell$ increases, the likelihood of initial centroids being close together will go to zero.
|
{}
|
# Infeasibility and inconsistency¶
When submitting a model to LocalSolver (calling method solve), the expected result is to obtain a feasible solution, and even sometimes an optimal solution. However in some cases the returned solution can be infeasible in the sense that the current assignement of values to variables violates some of the constraints of the problem. Two solution statuses (see getSolutionStatus) are defined for these infeasibility cases:
• infeasible means that no feasible solution was found to the submitted problem but it could not be proven that no such solution exists. Maybe running a longer search would have produced a feasible solution.
• inconsistent means that the solver was able to prove that no feasible solution exists. In this case, LocalSolver offers a functionality for analyzing the causes of this inconsistency.
Note
Note that both statuses infeasible and inconsistent can also be encountered on problems where no constraint was defined. Indeed, a solution where an objective takes an invalid value is considered infeasible. For instance minimize sqrt(x) implicitly requires that x takes a non-negative value.
## Analyzing inconsistencies¶
Analyzing an inconsistent model amounts to identifying a relatively small inconsistent subproblem. Such a subproblem or inconsistency core is said to be irreducible if it contains no smaller inconsistent subproblem.
The function computeInconsistency computes such a core that is to say a set of expressions (named causes) such that the problem restricted to these expressions and their descendents is inconsistent. Calling this function requires the model to be closed and the solver to be stopped. This inconsistency core is returned as an LSInconsistency object. This object can be printed in a readable form so that the user can easily spot the origin of the inconsistency. It also allows scanning the set of identified causes.
For example, the following model is inconsistent because limiting the cube of y to 250 prevents y from taking values larger than 6, which makes the constraint 3*x + y >= 20 impossible to satisfy:
function model() {
x <- bool();
y <- int(0,100);
z <- bool();
t <- int(0,100);
constraint 3*x + y >= 20;
constraint pow(y,3) <= 250;
constraint 4*z + t <= 18;
maximize x*t + 8*z*y;
}
The computation of the inconsistency core can be launched in the output function as follows:
function output() {
iis = computeInconsistency();
println(iis);
}
The resulting output on the standard console is the following:
...
Run output...
Computing inconsistency core...
Inconsistency core found with 2 causes.
Irreductible inconsistency core found with 2 causes.
2 causes in inconsistency core:
pow(int(0, 100)#1, 3) <= 250
3 * bool()#0 + int(0, 100)#1 >= 20
The two constraints responsible for the inconsistency are identified and displayed. Note that expressions are identified by their type and index. It is also possible to assign names to variables in the model function:
x.name = "x";
y.name = "y";
z.name = "z";
t.name = "t";
Having defined these names the inconsistency core reads as follows:
2 causes in inconsistency core:
pow(y, 3) <= 250
3 * x + y >= 20
In the APIs, the principle is the same.
Computing the inconsistency core is done through the compute_inconsistency method of the solver, which returns an LSInconsistency object.
This object can be converted to a string for display purposes. It is iterable, and contains the expressions in the inconsistency core.
# With ls a LocalSolver object
iis = ls.compute_inconsistency()
print(iis)
# With process a user-defined function
for expr in iis:
process(expr)
Computing the inconsistency core is done through the computeInconsistency method of the solver, which returns an LSInconsistency object.
This object can be converted to a string for display purpose with toString. Individual expressions in the inconsistency core are accessed with getCause.
// With ls a LocalSolver object
iis = ls.computeInconsistency();
std::cout << iis.toString() << std::endl;
// With process a user-defined function
for (int i = 0; i < iis.getNbCauses(); ++i) {
LSExpression expr = iis.getCause(i);
process(expr)
}
Computing the inconsistency core is done through the ComputeInconsistency method of the solver, which returns an LSInconsistency object.
This object can be converted to a string for display purpose. Individual expressions in the inconsistency core are accessed with GetCause.
// With ls a LocalSolver object
iis = ls.ComputeInconsistency();
Console.WriteLine(iis);
// With Process a user-defined function
for (int i = 0; i < iis.GetNbCauses(); ++i) {
LSExpression expr = iis.GetCause(i);
Process(expr)
}
Computing the inconsistency core is done through the computeInconsistency method of the solver, which returns an LSInconsistency object.
This object can be converted to a string for display purpose. Individual expressions in the inconsistency core are accessed with getCause.
// With ls a LocalSolver object
iis = ls.computeInconsistency();
System.out.println(iis);
// With process a user-defined function
for (int i = 0; i < iis.getNbCauses(); ++i) {
LSExpression expr = iis.getCause(i);
process(expr)
}
|
{}
|
## stata save regression results to excel
The second method is to use the âtaboutâ command. package and format the columns using the D column specifier: . mlabels(, depvar) To create a table to be included in a LaTeX document, type: Note that esttab automatically initializes the tabular environment example: Values that the regression model predicts for each case. estadd vif (1978 Automobile Data) > title(Regression table\label{tab1}) t-statistics). The format applied to a certain statistic can be changed by adding the First, by default it holds only about 500 lines of output… anything after that is discarded. option reduces horizontal spacing to (5.37) esttab using example.rtf The save option tells tabstat to save the results to a matrix. Example se(), p(), ci(), aux(), star, staraux, In this The package currently contains the following commands. esttab's own options. Example: Use with LaTeX cells(b(fmt(a3) star) t(fmt(2) par("{ralign @modelwidth:{txt:(}" "{txt:)}}"))) The option of word creates a Word file (by the name of âresultsâ) that holds the regression output. postfoot("{hline @width}"' "t statistics in parentheses"' "@starlegend"') Last but not least, it might be reasonable to space the table out to a certain width: environment and have esttab just produce the table rows. And, most importantly, you will know how to customize the table as your needs change. A click on "example.csv" in Stata's results window will launch Excel IMPORTANT: eststo must come immediately after regress. For example, I can use tabstat to calculate descriptive statistics for a list of variables. stats(N, fmt(%18.0g) labels("N"')) Sometimes it is necessary to include parameter statistics in a table for which }{-1}) /// esttab is a wrapper for estout. e(vif) : 1 x 4 N 74 74 Similarly, for a complicated summary statistics section Result: contained in the stored estimates. [do-file] Save outputs in an external file in Stata. Stata can automatically generate Microsoft Word documents with the table already formatted. and variance inflation factors in one table: Using the eststo command, store the regression results in a macro, call it example: eststo example. There might be a more beautiful/easier solution but I would just collect the results of the ttests in a matrix and then use the putexcel command to save them as a .xls. You can save predicted values, residuals, and other statistics useful for diagnostic information. In help, see Stored Results _. may be used to change the column widths (the scale is about 1/12 inch). esttab using example.tex, label nostar /// Depending on whether the plain option is specified or This saves the results of each regression for later use. bic for Schwarz's information criterion. These posts are especially useful for researchers who prepare their manuscript for publication in peer-reviewed journals. Feel free to skip any you don't need. that of estout and, by default, it produces publication-style tables price price to be . One way to save all of the results from your Stata session, is to use a log file. Letâs use the classic 1978 auto dataset that comes with Stata. Example 6: Writing a matrix to Excel. This command provides a user-friendly way to manipulate a large number of regression results by allowing the user to apply Stata's data manipulation commands to those results. 1.In the first line of code, we wrote asdoc ttest in the beggining of the line. eststo clear (output written to example.tex) Basic syntax and usage. wide, onecell, parentheses, and brackets. We used two options of asdoc in the first line of code: the replace and title(). variance inflation factors instead of t-statistics after . For example, specifying cells() disables r(coefs) : 4 x 6 Sometimes, an approach is to use esttab to assemble a basic table The table above looks alright, but a better result is achieved by lab var mpg "The mgp variable has a really long label and that would disturb Basic syntax and usage. or apply the rtf format: . sfmt(), noobs, and obslast.) The basic syntax of esttab is: The procedure is to first store a number of models and then apply esttab to these stored estimation sets to compose a regression table. posthead("{hline @width}") A further improvement is to load LaTeX's Use These commands separate the storingof results from the outputtingof results ----------------------------------------- > title(Regression table\label{tab1}) . (output written to example.tex) If you know a bit RTF you can also include RTF commands variance inflation factors instead of t-statistics after Stata's commands for report generation allow you to create complete Word®, Excel®, PDF, and HTML documents that include formatted text, as well as summary statistics, regression results, and graphs produced by Stata. weight 1.747* 3.465* booktabs You need to save it as a dta file if you want to do anything with it in Stata. style(esttab) Finally, using the esttab command, print the regression results to a table: esttab example. modelwidth(12) the fragment option if you prefer to hard-code the table's made visible by the noisily option Previously, I have written a tutorial how to create Table 1 with study characteristics and to export into Microsoft Word. [do-file] . ="..."). or apply the rtf format: Appending is possible. Result: The call can be and is also returned in r(cmdline). For ), Standard errors, p-values, and summary statistics, Wide table: coefficients and t-statistics side-by-side, Significance stars: change symbols and thresholds. -------------+---------------------- Mean VIF | 2.81 1. for example, standard errors an adaptive display format is used where the . eststo clear . Major topics for this article include creating tables of regression results, tables of summary statistics, and frequency tables. foreign | 1.59 0.627761 sysuse auto Some Stata commands return matrices. foreign 3673.1* Hello together, what is the most efficient method to copy results (table, tabstat etc.) To produce a table for use with Word, specify an output filename with an .rtf suffix using results indicates to Stata that the results are to be exported to a file named ‘results’. Its studentized and standarized residuals are the same as Râs and Excelâs, so the output results are basically the same. in the table footer you might have to use estout's ----------------------------------------- Each selection adds one or more new variables to your active data file. vif in second column ---------------------------------------- Note that esttab automatically initializes the tabular environment parentheses around t-statistics. (1978 Automobile Data) Statistical Software Components from Boston College Department of Economics. However, if you want to include more than two kinds of parameter statistics, you . . esttab using example.tex, label replace booktabs /// and is also returned in r(cmdline). \end{document} The main difference between esttab and estout isthat esttabproduces a fully formatted right away. Stata has a large number of commands dedicated to basic statistics; we'll discuss some of the most commonly used. ----------------------------------------- The basic syntax of esttabis: The procedure is to first store a number of models and then applyesttab to these stored estimation sets to compose a regressiontable. return list [do-file] quietly regress price weight mpg foreign esttab, aux(vif 2) wide nopar Furthermore, an adaptive format a# Stata can store estimates from multiple models, save all estimates in a single table, and export the table to an external file, such as rtf, csv, html, tex, and others.This is possible with the .esttab command from the estout package, which you can install from the Stata packages repository.. esttab using example.tex, label replace booktabs /// Stata produces output to the Result window and optionally to a log file.. The esttab command takes the results of previous estimation or other commands, puts them in a publication-quality table, and then saves that table in a format you cause use directly in your paper such as RTF or LaTeX. The set-up. [do-file] One drawback of this approach is, however, sysuse auto specifying the booktabs option and loading LaTeX's is a generic scalars() Exporting Stata Results to Excel Problem: Stata output is difficult to copy and paste into Word or Excel. The note after the xml tab command indicates that Excel is installed on the computer. quietly regress price weight mpg foreign notype that esttab produces a fully formatted right away. . and Excel will interpret the contents as numbers. > title({\b Table 1.} default, that is, if plain is omitted, the contents of the table and already into Stata 13, learning putexcel could be very helpful (put an end to copy-pasting!). (replacing the point-estimates) or the aux() option (replacing the booktabs (Note: The second argument in aux() specifies the display format.) asdoc is a Stata program that makes it super-easy to send output from Stata to MS Word. Its syntax is much simpler than that of estoutand, by default, it produces publication-style tables that display nicely in Stata's results window. _cons -5853.7 N 74 could type: esttab is a wrapper for estout. This is done using the estout package, which provides a command esttab for exporting results to Word. Anyone know of a way to get multiple regression outputs (not multivariate regression, literally multiple regressions) in a table indicating which different independent variables were used and what the coefficients / standard errors were, etc. that the specified estout options will take precedence over It allows to create a table 2. If you make your own Stata programs and loops, you have discovered the wonders of automating output of analyses to tables. places. putexcel has recently become a very good friend. dcolumn -3. stats() option [do-file] eststo clear Stata is a software package popular in the social sciences for manipulating and summarizing data and conducting statistical analyses. . t-statistics). eststo: quietly regress price weight mpg This is the second of two Stata tutorials, both of which are based thon the 12 version of Stata, although most commands discussed can be used in [do-file] Tip: Each command puts related information or results in either r(), e() or s() that can be saved or used in output. number of "significant digits" to be printed (# should be in {1,2,...,9}) sysuse auto (est1 stored) . There is lots of good stuff for regression results but I couldn't really find anything for sum results, tab results, tabstat etc... Is there any ado which will copy the results table frrom Stata to xls? The basic syntax of esttab is:. Stata is agile, easy to use, and fast, with the ability to load and process up to 120,000 variables and over 20 billion observations. (see the Numerical formats section in the help file). There are a wide range of options available through the putexcel command which can be found by typing âhelp putexcelâ in the Stata command window, followed by Enter. For point estimates and, First, by default it holds only about 500 lines of output⦠anything after that is discarded. 2. Update 07 June 2018: See Export tabulation results to Excel—Update for new features that have been added since this original blog.. using results indicates to Stata that the results are to be exported to a file named âresultsâ. scalars: ---------------------------------------- regress, you The âoutreg2â command in Stata, for example, enables you to output your regression results to Excel files, and will get you pretty close; and Iâve been told (but havenât tried) the broom package in R. In any case, there are likely times when your table results donât look great because youâre moving from one tool to another. weight 3.465*** 3.86 Statistical Software Components from Boston College Department of Economics. Example. eststo: quietly regress price weight mpg foreign Hello. The standard way to copy a table or other output is to use to select the table, then use (context menu or , or , you then can paste the table into another program.. You can of course use to insert the table as a non-editable picture, but in most situations you will want to be able to edit the raw table. not, esttab uses two different variants of the CSV format. Further summary statistics options are, for example, . added matrix: [do-file] from Stata to Excel? Result: sysuse auto (est1 stored) sysuse auto in an e()-matrix, they can be displayed using the main() option may be more appropriate. You can also specify options of excel and/or tex in place of the word option, if you wish your regression results to be exported to these formats as well. In the following For example, specifying cells() disables This is done using the estout package, which provides a command esttab for exporting results to Word. package and format the columns using the D column specifier: Last but not least, it might be reasonable to space the table out to a certain width: Sometimes it is necessary to include parameter statistics in a table for which Alternatively, as is illustrated in the example above, a fixed format can > title(Regression table\label{tab1}) N 74 3. ---------------------------------------- r(cmdline) : "estout , cells(b(fmt(a3) star) t(fmt(2) par("{ral.." price \documentclass{article} Abstract: regsave fetches output from Stata's e() macros, scalars, and matrices and stores them in a Stata-formatted dataset. esttab using example.tex, label nostar replace booktabs /// the csv format (or the price confidence intervals (ci), or any prehead(`"{hline @width}"') %9.0g or %8.2f (see help format). (output written to example.rtf) in an e()-matrix, they can be displayed using the main() option environment and have esttab just produce the table rows. Appending is possible. REGSAVE: Stata module to save regression results to a Stata-formatted dataset. I am running 1000 or so regressions using a foreach loop and would like to export the results (mainly just the regression coefficients) to Microsoft Excel. esttab, noisily notype We just add asdoc to the beggining of any Stata command and thatâs all. A new feature in Stata 13, putexcel, allows you to easily export matrices, expressions, and stored results to an Excel file.Combining putexcel with a Stata command’s stored results allows you to create the table displayed in your Stata Results window in an Excel file. eststo: quietly regress price weight mpg . aux() option). Its syntax is much simpler than starlevels(* 0.05 ** 0.01 *** 0.001) eststo: quietly regress price weight mpg foreign mpg 21.85 2.96 example: A click on "example.csv" in Stata's results window will launch Excel Finally after all of the required regressions have been run, you just need to specify a directory location for Stata to create a new file with your results. While .log captures both commands and output, the .cmdlog command stores the stream of executed commands only.. Thanks for your questions, Jeff Meyer More on interpreting SAS output here. \begin{document} (est2 stored) All estout options are allowed in esttab, but you have to be aware To open a log file called c:dissert.log, you can type the following at the start of your Stata … macros: regsave fetches output from Stata's e() macros, scalars, and matrices and stores them in a Stata-formatted dataset. causes the point estimates and t statistics (or standard errors, etc.) Variable | VIF 1/VIF For those who (or working with people who) find comfort in working with tables in Excel after data processing or estimation in Stata (yes, there are others who don’t find comfort in this.) For those who (or working with people who) find comfort in working with tables in Excel after data processing or estimation in Stata (yes, there are others who donât find comfort in this.) in Excel. eststo clear We should point out two things about the Stata Results window that may surprise you. parentheses around t-statistics. regress, you > alignment(D{.}{. Installation (do only once) (5.37) r(stats) : 1 x 2 table. sfmt(), noobs, and obslast.). All estout options are allowed in esttab, but you have to be aware . Once the statistics are are stored Extracting the results from regressions in Stata can be a bit cumbersome. i have tried to to this with macros, but then thought it might be best to save the coefficients matrix, compute the standard errors and pvalues and save these as a matrix as well. ----------------------------------------- in Excel. Result: additional computations in Excel, specify the plain option. (output written to example.csv) Julian Reif. (est1 stored) Commands might be preceded by “. Let me show you. have to switch to estout syntax and make use of the cells() option. The value the model predicts for the dependent variable. collabels(none) weight | 3.86 0.258809 The main difference between esttab and estout is esttab's own options. (output written to example.tex) ----------------------------------------- (which overwrites esttab options such as abbrev (1) . You can also specify options of excel and/or tex in place of the word option, if you wish your regression results to be exported to these formats as well. Hence, if the purpose of exporting the estimates is to do Hence, if the purpose of exporting the estimates is to do (2.72) (5.49) Mean VIF | 2.81 Below is a step by step tutorial on how to use EstOut & avoid manually recording your results. Instead of creating tables by hand, Stata can automatically generate Microsoft Word documents with the table already formatted. . option to include any other scalar statistics The goal is to provide basic learning tools for classes, research and/or professional development Useful are, for example, "{\b ...}" for boldface and "{\i ...}" for italics. Result: (est2 stored) respected Member, i am facing problem in copying stata result to word file. Not only is Stata output difficult to format, you will probably need to run your code many times, and you won’t want to repeat this step over and over again. When combined with the table option and the outsheet command, it also provides an easy way to ⦠then produces the following result: in the table footer you might have to use estout's (0.29) and display the file: Depending on whether the plain option is specified or (0.54) (-1.73) To create a table to be included in a LaTeX document, type: placed beneath one another in the same table cell: If you have Excel you have the Analysis ToolPak although you may not have it activated. Weâll use mpg and displacement as the explanatory variables and price as the response variable. Example: Note that the dashed lines appear as solid lines in Stata's results window: The default in esttab is to display raw point estimates along with t statistics and Example: (notype is specified in this example to suppress the display of the table. that the specified estout options will take precedence over ---------------------------------------- ” or “ > ” as they would be in the Stata Results window. Result: Run the regression specified in Step 3. measures are printed using three decimal places. Furthermore, varwidth() and modelwidth() Essentially, I'm looking for something like outreg, except for python and statsmodels. This document is an introduction to using Stata 12 for data analysis. smcltags If you type "return list" after a ttest you can see how the results are stored in case you want to use others than I had in my example. There are many easier ways to get your results out of Stata. delimiter(" ") To use variable labels and add some titles and notes, e.g., type: The label option supports factor variables and interactions in Stata 11 or newer: The plain option produces a minimally formatted This is how we use asdoc with Stata commands. default, that is, if plain is omitted, the contents of the table example the cells() option is used to print point estimates, t statistics, [do-file] About asdoc. eststo: quietly regress price weight mpg foreign mpg | 2.96 0.337297 no predefined option exists in esttab. cells are enclosed in double quotes preceded by an equal sign (i.e. . It is important to notice that outreg2 is not a Stata command, it is a user-written procedure, and you need to install it by typing (only the first time) ssc install outreg2 Follow this example (letters in italics you type) REGSAVE: Stata module to save regression results to a Stata-formatted dataset. Non-standard table contents specifying the booktabs option and loading LaTeX's Variable | VIF 1/VIF . ò§´0Õ¢1(1õ¹ Â.Ø+4ú¹Èª¤/YY¡Í±.0ñÐïFÕ+D=®Àçïé placed beneath one another in the same table cell: If you know a bit RTF you can also include RTF commands [do-file] A very helpful reference is the "RTF Pocket Guide" by Sean M. Burke (O'Reilly). stats() option Stata: Visualizing Regression Models Using coefplot Partiallybased on Ben Jannâs June 2014 presentation at the 12thGerman Stata Users Group meeting in Hamburg, Germany: âA new command for plotting regression coefficients and other estimatesâ
|
{}
|
# Making strategies stick
ISSN: 0275-6668
Article publication date: 5 July 2011
880
## Citation
Jackson, S.E. (2011), "Making strategies stick", Journal of Business Strategy, Vol. 32 No. 4. https://doi.org/10.1108/jbs.2011.28832daa.002
## Publisher
:
Emerald Group Publishing Limited
## Making strategies stick
Article Type: Reaching for value From: Journal of Business Strategy, Volume 32, Issue 4
Stuart E. JacksonVice President of L.E.K. Consulting, Chicago, Illinois, USA and author of Where Value Hides: A New Way to Uncover Profitable Growth for Your Business (Wiley 2007).
Keywords: Strategy, Implementation, Execution, Organizations, Change, Management
The guy who plows the snow in my neighborhood bought himself a new truck last winter. It is a Cummins-powered Dodge Ram 3500 HD (Heavy Duty) pickup, onto which he mounted a Meyer MAX plow with the standard high-carbon-steel cutting edge, optional heavy-duty snow deflector, and many other desirable features.
I was very happy to see him and his new truck before dawn one morning, toward the end of the last snow season, because I could see that I was not going to get out of my driveway without his help. I complimented him on his new truck, which I had not seen before (even in the dark, I could see that it was a heck of a truck). Then I complimented him on his timely arrival. He responded to those two compliments at once.
“Well, Stuart,” he said, from his perch that appeared to be about ten feet up above me, as the oversized diesel rattled away noisily under the hood, “it’s a real nice truck, but it’s no damn good unless you drive it.”
It is the same with corporate strategies. Good strategies do not create value; it’s implementing good strategies which creates value. But unlike driving a truck, getting new business strategies successfully implemented can be really tough. I shudder to think how many strategies get devised each year – in boardrooms and windowless conference rooms – that never actually get put to use.
You might think that implementing strategy in the corporate setting should be easy. The titans in the corner offices simply lay down the plan, and the rest of the organization gets into line, right? Actually, it does not work like that. For one thing, most business leaders are resistant to getting too far out in front of their organizations in embracing change.
For another thing, people hate change – especially planned change! – and the more dramatic a departure a new strategy represents, the more likely that the organization will resist it, and even undermine it. A colleague of mine once told me about a psychology experiment that compared the respective behaviors of rats and humans in mazes. If you moved the cheese, for a while the rats would go to where the cheese used to be. Pretty quickly, though, they would use their noses to go find the cheese’s new location. Not the humans! They would keep going back to where the cheese used to be, lamenting the fact that the cheese (or the human-bait equivalent) had moved, and hoping in vain that someday the cheese would come back to where it was supposed to be.
It is easy to see the same behavior in business organizations, which resist change, despite evidence showing the world changing from the way it used to be. In the early 1990s, Blockbuster Inc. had the foresight to commission one of several studies on the future of video on demand technologies, and how these would impact traditional video rentals. The report showed how roll-out of expanded cable offerings and broadband internet would mean video downloads would start to have an impact around the year 2000, and begin to grow rapidly in the years after that. In September 2010, Blockbuster filed for bankruptcy protection, overcome by competition from online downloads, as well as kiosks and mail services. True, it was a tremendous challenge for Blockbuster to change from its successful store-based format to an online model. But in the mid-1990s, the company had unrivalled clout with all the major studios, a membership of 43 million households, and ten years’ notice to get ready for the new environment. The problem was that the phenomenal success of their original model prevented the organization from taking the drastic steps needed.
In almost every direction we look, in fact, we discover that organizational change is harder than not-change. Even when we understand intellectually that the cheese is being moved, we drag our feet and hope that things will work out as we try to keep everything “normal.”
So how do you get from an unimplemented strategy to an implemented one? And just as important, how do you make that strategy stick? I can point to six tactics that seem to help on one or both counts:
1. 1.
Keep the strategy-development period short. You do not want to spend a year batting around alternative strategies, because during that year, you are not creating any value. I talk to my clients about the “100-day plan” and the “1,000-day plan.” The first hundred days are devoted to pulling the plan together, and the next 900 are devoted to implementing it. In the same spirit, bake the whole strategic pie before you start serving any of it up. “The most dangerous strategy,” Disraeli once remarked, “is to jump a chasm in two leaps.”
2. 2.
Aim high, but separate aspirations from commitments. Too many organizations mix up strategic planning and the budgetary process. You absolutely want your planners to say, “We think that this strategy can deliver 20 percent growth.” You absolutely do not want your managers to say, “Uh, oh: if I embrace the goals of this plan, it’s going to be 20 percent harder for me to make my goals, and earn my bonus.” Make sure that your plan allows for “stretch aspirations” – and then put sufficient resources behind those leaders who show that kind of courage. I have seen clients walk away from exciting opportunities just because nobody was willing to sign up to the aggressive projections. On the flip side, I have a client in the retail arena who six years ago took over his company’s nascent on-line business. In an era when everybody else was asking for $30 million to build an on-line business, he presented a vision showing the full potential and asked for$300 million – and got it. Today, that on-line business is the top-performing area of the company, and arguably worth more than the rest of the business put together.
3. 3.
Communicate it simply and directly. When it comes to communicating the strategy, assume that people can remember one line – and one line only. Sure, you will need a 40-word version for the management team and a 4,000-word version for the board, but the rest of the world needs a one-liner. I love the way the Texas Highway Commission summarizes its anti-littering strategy: “Don’t mess with Texas!” Direct Energy, which is the largest contractor organization for repair and maintenance of heating and ventilating systems in North America, came up with an equally good one: “Simple, friendly, direct.”
4. 4.
Show the leadership commitment. Whoever is responsible for this strategic shift – the CEO, the divisional president, or whoever – needs to be the champion and cheerleader for the change. This means, first, acknowledging that things could be going better than they are at the moment, an admission that corporate leaders are often reluctant to make. Or even harder, it may mean pointing to the fact that even though things are going great right now, we have to start pointing in a new direction (think Blockbuster). A leader who is willing to commit to change, and make the case for change, is the indispensable starting point.
5. 5.
Make some quick moves. Far too many companies work up a great, forward-looking strategy, and then do … nothing. That is a mistake: you need to back up your leadership commitment with action. One of the first things to consider is what current activities are you going to stop doing, which products should be discontinued and which existing customers you should stop serving. One of my clients had a particularly macabre way of phrasing this challenge: “You need to be willing to drown puppies, even when they’re looking up at you with their big, sad eyes.”In the mid-1990s, Apple had an astoundingly complicated product line, with some 80 models and variations of its computers, and a dreadful PDA/notebook device (the Newton) that nobody had the nerve to kill off. When Steve Jobs returned to the floundering company in 1997, he immediately began drowning puppies – starting with the Newton, and then moving into the computer lines. “We sell consumer products and professionally oriented products,” he told his colleagues. “We need a desktop offering and a portable offering in each of those two categories.” In other words, Apple would go from 80-plus products to four.
6. 6.
Fix the organization (even if it’s not broken). Dramatic strategic departures almost always benefit from some organizational changes and new responsibilities. You need to realign the organization to fit the new strategy. Even if the new strategy could be managed without changing the organization, you should think about making some adjustments to signal the new direction.
Did you ever wonder how Teflon – to which nothing sticks – sticks to a pan? Answer: it is forced mechanically into a zillion tiny nooks and crannies that are etched into the surface of the pan. The good news is that your strategic plan is way stickier than Teflon. It will stick, if you follow the six steps outlined above to force it into all those corporate nooks and crannies.
|
{}
|
##### Personal tools
•
You are here: Home Introduction; Review of Random Variables
# Introduction; Review of Random Variables
##### Document Actions
Probability :: Set Theory :: Definition :: Conditional :: Random :: Distribution :: R.V.S.
## Conditional probability and independence
Conditional probability is perhaps the most important probability concept from an engineering point of view. It allows us to describe mathematically how our information changes when we are "given'' a measurement.
We define conditional probability as follows: Suppose is a probability space and with . Define the conditional probability of given as
Essentially what we are saying is that the sample space is restricted from down to . Dividing by provides the correct normalization for this probability measure.
Some properties of conditional probability (consequences of the axioms of probability) are as follows:
1. .
2. For with for ,
3. .
The Law of Total Probability : If is a partition of and , then
(Draw picture).
Bayes Formula is a simple formula for "turning around'' the conditioning. Because conditioning is so important in engineering, Bayes formula turns out to be a tremendously important tool (even though it is very simple). We will see applications of this throughout the semester.
Suppose , and . Then
Why?
Is independent the same as disjoint?
Note: For , if and are independent then . (Since they are independent, can provide no information about , so the probability remains unchanged. If , then is independent of for any other event . (Why?).
The next idea is important in a lot of practical problem of engineering interest.
(draw picture to illustrate the idea).
Copyright 2008, by the Contributing Authors. Cite/attribute Resource . admin. (2006, May 31). Introduction; Review of Random Variables. Retrieved January 07, 2011, from Free Online Course Materials — USU OpenCourseWare Web site: http://ocw.usu.edu/Electrical_and_Computer_Engineering/Stochastic_Processes/lec1_4.html. This work is licensed under a Creative Commons License
|
{}
|
Moment Generating Functions and Fourier Transforms?
Is a moment-generating function a Fourier transform of a probability density function?
In other words, is a moment generating function just the spectral resolution of a probability density distribution of a random variable, i.e. an equivalent way to characterize a function in terms of it's amplitude, phase and frequency instead of in terms of a parameter?
If so, can we give a physical interpretation to this beast?
I ask because in statistical physics a cumulant generating function, the logarithm of a moment generating function, is an additive quantity that characterizes a physical system. If you think of energy as a random variable, then it's cumulant generating function has a very intuitive interpretation as the spread of energy throughout a system. Is there a similar intuitive interpretation for the moment generating function?
I understand the mathematical utility of it, but it's not just a trick concept, surely there's meaning behind it conceptually?
• I believe it's the characteristic function that more resembles the Fourier transform. The moment generating function is a Laplace transform. – Placidia Jun 9 '14 at 14:06
• Interesting: "The Laplace transform is related to the Fourier transform, but whereas the Fourier transform resolves a function or signal into its modes of vibration, the Laplace transform resolves a function into its moments" princeton.edu/~achaney/tmve/wiki100k/docs/… Then I guess the question is - how, intuitively, does a Laplace transform decompose a function into it's moments, and is there a geometric interpretation of this? – bolbteppa Jun 9 '14 at 14:15
• It does it by virtue of the Taylor series expansion of the exponential function. – Placidia Jun 9 '14 at 14:18
• Now everything nearly makes sense! However, what exactly is a moment, intuitively? I know this: "Broadly speaking a moment can be considered how a sample diverges from the mean value of a signal - the first moment is actually the mean, the second is the variance etc... " dsp.stackexchange.com/a/11032 However, what does that mean intuitively? What is the sample when calculating the 1st/2nd/3rd/4th moment of say, x^2 (taking a Laplace transform of x^2)? Is there a geometric interpretation? – bolbteppa Jun 9 '14 at 14:26
1 Answer
The MGF is
$M_{X}(t)=E\left[ e^{tX} \right]$
for real values of $t$ where the expectation exists. In terms of a probability density function $f(x)$,
$M_{X}(t)=\int_{-\infty}^{\infty} e^{tx}f(x) dx.$
This is not a Fourier transform (which would have $e^{itx}$ rather than $e^{tx}$.
The moment generating function is almost a two-sided Laplace transform, but the two-sided Laplace transform has $e^{-tx}$ rather than $e^{tx}$.
• +1 As an aside: the characteristic function is the one that's more closely related to the Fourier transform (in that case, again, there's the small issue of a minus sign) - the c.f. is $E(e^{itX})$, while - up to multiplicative constants - the usual Fourier transform would be $E(e^{-itX})$. These connections prove to be quite useful at times, such as finding lists of useful properties of Fourier or Laplace transforms that usually carry directly over, or being able to look up extensive tables of Fourier or Laplace transforms when finding MGFs or cfs. – Glen_b Jun 10 '14 at 0:53
• And of course the most useful property is that the MGF of the sum of two independent random variables is the product of their moment generating functions. This is equivalent to the rule that the Fourier transform of the convolution of two functions is the product of their Fourier transforms. – Brian Borchers Jun 10 '14 at 3:21
|
{}
|
1. ## Calculating Annuity
I need some help with a math problem that I was assigned that I have been working on for a while now and just cant figure it out. Any help would be appreciated. here is the problem...
1. Jack, a beginning freshman, wants to buy a used truck when he gets his associates degree. He believes it will cost $3000 for the make and model he wants. He wants to start an annuity savings account for this purpose. Today is September 1, 2013 and jack wants to buy the truck June 1, 2015. His funds will be deposited into an account paying 2.75% APR compounded weekly. a. How much money must jack deposit at the end of each week so as to end up with$3000 on 6/1/15?
b. what is the present value of this annuity?
c. if on june 1, 2015, jack decided not to buy the truck but rather to keep the money in the account earning compound interest(same interest rate, no further deposits) how much money would be in the account on october 1, 2015?
Thanks to everyone in advance for all the help... this one is killing me.
2. Originally Posted by focustnr29
I need some help with a math problem that I was assigned that I have been working on for a while now and just cant figure it out. Any help would be appreciated. here is the problem...
1. Jack, a beginning freshman, wants to buy a used truck when he gets his associates degree. He believes it will cost $3000 for the make and model he wants. He wants to start an annuity savings account for this purpose. Today is September 1, 2013 and jack wants to buy the truck June 1, 2015. His funds will be deposited into an account paying 2.75% APR compounded weekly. a. How much money must jack deposit at the end of each week so as to end up with$3000 on 6/1/15?
b. what is the present value of this annuity?
c. if on june 1, 2015, jack decided not to buy the truck but rather to keep the money in the account earning compound interest(same interest rate, no further deposits) how much money would be in the account on october 1, 2015?
Thanks to everyone in advance for all the help... this one is killing me.
You need to know the number of periods involved, these are weeks and you will have to count how many there are between the start and end dates yourself.
The weekly interest rate is $\displaystyle 2.75/(365.25/7)\approx 0.0527 \%$, and let $\displaystyle m=1+(0.0527/100)=1.000527$
Then the amound available after $\displaystyle $$n periods is: \displaystyle A_n=d \dfrac{m}{m-1}[m^n-1] where \displaystyle$$ d$ is the amount deposited each period.
So for part a you need to solve for $\displaystyle$$n$ in:
$\displaystyle d \dfrac{m}{m-1}[m^n-1]=3000$
You will have to do this numerically.
CB
3. Originally Posted by focustnr29
Jack, a beginning freshman, wants to buy a used truck when he gets his associates degree. He believes it will cost $3000 for the make and model he wants. He wants to start an annuity savings account for this purpose. Today is September 1, 2013 and jack wants to buy the truck June 1, 2015. His funds will be deposited into an account paying 2.75% APR compounded weekly. How much money must jack deposit at the end of each week so as to end up with$3000 on 6/1/15?
I'm looking at this as a savings account in which a weekly deposit of \$w will be
made starting ~Sep 8/13 and ending ~Jun 1/15 at rate of .0275/52 weekly;
and also assuming this represents exactly 91 weeks.
Formula: w = fi / [(1 + i)^n - 1] :
f = future value, i = periodic rate, n = number of periods
w = 3000(.0275 / 52) / [(1 + .0275/52)^91 - 1] = 32.18884... ; so 32.19
4. Thank you to you both this really helps
5. I posted this web link in a topic above but just in case u didnt see it this website i found also explains annuities which you may find handy.
http://www.weallstartsomewhere.com/finmaths.php
|
{}
|
Experience the CMOS Annealing Machine
# Image noise reduction
This tutorial explains the processing of the annealing machine in the demonstration application image noise reduction".
### Overview
This is a problem of estimating the original image from a black and white image with noises.
Generally, assuming that the following conditions hold, expect it to work properly on an annealing machine.
• (a) (If the noise is small) the pixel values of the input image and the output image are approximately the same.
• (b) In output images after noise reduction, adjacent pixel values are often the same.
### Create a cost function
In this problem, since the color of the pixel is binary black or white, we correspond it to the spin value -1 or +1 of CMOS annealing machine.
From (a) above, here we set the value of the i-th pixel of the input image as $y_i$ and the value of the ith pixel of the noise-reduced image as $s_i$.
To make the pixel values of the input image and output image roughly the same, we use the following cost function.
#### $$-\sum_{i \in V} y_i s_i$$
Here $V$ represents a set of pixels. Since $y_i s_i$ becomes +1 when the i-th pixel value of the input and output image is the same and becomes -1 when the pixel value is different, add the signed $-$ to all pixels so that the cost is reduced when the pixel values are the same.
Additonally considering the condition that "(b) In output images after noise reduction, adjacent pixel values are often the same.", the cost function is as follows.
#### $$-\sum_{(i,j) \in E} s_i s_j$$
Here $E$ represents a collection of adjacent pixel pairs. When these are added together, the following final cost function is obtained.
#### $$-\sum_{(i,j) \in E} s_i s_j-\eta\sum_{i \in V} y_i s_i$$
We introduced the parameter $\eta (>0)$ before the second term. This is to adjust the strength of the two conditions.
If $\eta=0$, the cost function is in the form of only the first term. In this case, the cost is the smallest when all the pixel values of the output image are the same. Conversely, if $\eta$ is made very large, it is practically the form of only the second term, so the cost is minimal if the input image and the output image are exactly the same.
In the demo of this site, it is $\eta=2$ or $3$.
|
{}
|
## Introduction
Low-grade thermal energy (<100 °C) is ubiquitous in industrial processes, the environment, and the human body, but is mostly discarded without any effort at recovery1,2,3. Converting low-grade thermal energy into electricity is an ideal strategy for addressing global energy and environmental issues4. Despite the abundance of low-grade heat, harvesting energy from potential sources has proven difficult due to the irregular distribution of heat sources and very low efficiency5. Among thermoelectric conversion strategies, solid-state thermoelectric cells have been studied extensively in recent decades and exhibit high efficiency at high temperatures. However, harvesting low-temperature heat by using solid-state thermoelectric devices has been hindered by their high cost and the limitations of materials with toxic or rare elements6,7,8,9.
Liquid-state thermogalvanic cells (TGCs) offer an alternative, inexpensive, flexible, and scalable route for direct thermal-to-electric energy conversion3. The principal advantage of TGCs is the high Seebeck coefficient (Se) of approximately 1 mV K−1, which is one order of magnitude higher than that of conventional thermoelectric cells10. For TGC systems, the Seebeck coefficient is defined as3
$$S_{\mathrm{e}} = \Delta E/\Delta T = \Delta S/nF,$$
(1)
where ΔE is the open-circuit voltage, ΔT is the temperature difference, n is the number of electrons transferred in the redox reaction, F is Faraday’s constant, and ΔS is the partial molar entropy difference of the redox couple. Due to the limited temperature difference between heat sources and the surrounding ambient environment, the development of a TGC system with high Se that can generate a high voltage even from a small temperature difference is of great importance.
Recently, increasing ΔS by regulating the interactions between the redox ions and solvents has enhanced Seebeck coefficients in organic electrolytes10,11,12,13,14,15,16. However, these electrolyte systems mostly suffer from low ionic conductivity and poor mass transport. Thus, their efficiencies are unsatisfactory, and the corresponding temperature-insensitive power densities (defined as the maximum power density obtained normalized to the square of the inter-electrode temperature difference, Pmax/(ΔT)2) are inferior to those of TGCs in aqueous electrolytes17,18,19. The simultaneous achievement of high Se and power density in a TGC remains elusive.
Here, we introduce strong chaotropic cations (guanidinium) and highly soluble amide derivatives (urea) into the 0.4 M [Fe(CN)6]3−/[Fe(CN)6]4− aqueous electrolyte to yield a very high Se (4.2 mV K−1) and an impressive Pmax/(ΔT)2 (1.1 mW K−2 m−2), both of which are approximately threefold higher than the corresponding values for the pristine TGC. The results prove that guanidine chloride (GdmCl) and urea synergistically enlarge the entropy difference of [Fe(CN)6]3−/[Fe(CN)6]4− and significantly enhance the Seebeck effect. Furthermore, we designed a module with 50 cells in series that generates an open-circuit voltage (Voc) of 3.4 V and a short-circuit current (Isc) of 1.2 mA under a small temperature difference of 18 K and could directly light a red light-emitting diode (LED) array.
## Results
### The enhancement effect of guanidinium on Seebeck coefficient
The Seebeck effect for TGCs fundamentally originates from the entropy difference of redox couples. Continuous operation of a TGC is adopted for the typical planar TGC device shown in Fig. 1a, which consists of two graphite electrodes in the K3[Fe(CN)6]/K4[Fe(CN)6] electrolyte (Supplementary Figure 1a). When a temperature gradient is built across the whole cell, the reversible reaction between the redox couple will incline to the opposite direction, thus causing a potential difference in the electrolyte at the electrodes20. That is, the reaction tends to the direction of the entropy increase at the hot side, and vice versa. The entropy difference of a redox couple is related to its absolute charge and reflects the strength of interactions between redox species and solvents3,21,22. In the ferri/ferrocyanide couple, [Fe(CN)6]4− has a higher charge density and thus can form a more compact hydration shell (Supplementary Figures 2–4 and Supplementary Note 1); consequently, its thermodynamic entropy is lower than that of [Fe(CN)6]3−. Therefore, the oxidation of [Fe(CN)6]4− to [Fe(CN)6]3− occurs through the release of an electron to the electrode at the hot side, and the electron is consumed at the cold cathode through an external circuit by the reduction of [Fe(CN)6]3− to [Fe(CN)6]4−. Based on Eq. 1, we propose that enlarging the entropy difference by regulating solvation shells of redox couples can enhance their Seebeck coefficient.
[Fe(CN)6]3− and [Fe(CN)6]4− are categorized as chaotropic anions in the Hofmeister series and are able to bond with chaotropic cations based on chaotrope–chaotrope ion specificity23,24. Therefore, strong chaotropic cations could be expected to rearrange the solvation shells of the redox couple. GdmCl is the most typical chaotrope, and its effects on water structures and protein stability have been studied extensively25,26. Here, we introduce GdmCl into the [Fe(CN)6]3−/[Fe(CN)6]4− aqueous electrolyte. Figure 1b shows the temperature dependence of the cell potential over a range of ΔT from 0 to 30 K, and the corresponding instantaneous cell potential curves on ΔT are shown in Supplementary Figure 1b. The open-circuit voltages show a linear relationship with the applied temperature difference, and the corresponding Se values are obtained from the slopes of these lines. The Se value for the pristine electrolyte (blank in Fig. 1b) is 1.4 mV K−1, consistent with the reported value in the literature27. Surprisingly, the Se value increases significantly to 2.7 mV K−1 when GdmCl is added to the pristine electrolyte. The dependence of the Se value on the GdmCl concentration is shown in Fig. 1c. The Se value increases with increasing GdmCl concentration and achieves a maximum of ~2.7 mV K−1 at a concentration of ~2.6 mol L−1. The enhancement effects of other chaotropic species are also assessed (Fig. 1d) and their corresponding chemical structures are shown in Fig. 1e. These additives, including betaine (Bet), aminoguanidine chloride (AdmCl), and metformin chloride (MfmCl), all increase the Se value to ~1.8, 1.8, and 1.7 mV K−1, respectively. By contrast, typical kosmotropic species, including NaCl, LiCl, CaCl2, and MgCl2, barely enhance the Se value of [Fe(CN)6]3−/[Fe(CN)6]4− electrolytes (Supplementary Figure 5 and Supplementary Note 2). Clearly, the chaotrope–chaotrope ion specificity plays a critical role in the enhancement of the Seebeck effect of [Fe(CN)6]3−/[Fe(CN)6]4− electrolytes. We propose that the impacts of different chaotropes on the Seebeck effect of [Fe(CN)6]3−/[Fe(CN)6]4− electrolytes are related to their molecular size. Compared with the other chaotropes, GdmCl has the smallest size and can interact with [Fe(CN)6]3−/[Fe(CN)6]4− most easily. Note that the optimal concentrations of the four chaotropes are all approximately 2.6–2.7 M (Fig. 1d), which is approximately equal to the total negative charge concentration (2.8 M) in the 0.4 M [Fe(CN)6]3−/[Fe(CN)6]4− electrolytes. This result also indicates that chaotrope–chaotrope ion specificity is closely correlated with the high Se value.
### The enhancement mechanism
To clarify the underlying mechanism, we investigate the interaction between GdmCl and [Fe(CN)6]3−/[Fe(CN)6]4− by X-ray photoelectron spectroscopy (XPS) and ultraviolet–visible absorption spectroscopy (UV–Vis). Figure 2a shows the N1s spectra of K3[Fe(CN)6], K4[Fe(CN)6], GdmCl, and their composites. For the mixture of K4[Fe(CN)6]/GdmCl, the N1s binding energy spectra for [Fe(CN)6]4− and GdmCl both obviously shift compared with those of their pure species. By contrast, the N1s binding energy spectrum for K3[Fe(CN)6]/GdmCl shows little shift. Similar results are observed for the UV–Vis spectra. The absorption peak for [Fe(CN)6]4− shifts significantly after the addition of GdmCl, and the maximum shift occurs at a GdmCl concentration of ~2.6 M (Fig. 2c, e), consistent with the optimal concentration of GdmCl for enhancing the Seebeck effect. By contrast, the absorption peak for [Fe(CN)6]3− remains unchanged with varying concentrations of GdmCl (Fig. 2b, d). These results reveal that the guanidinium cation (Gdm+) has a stronger chaotrope–chaotrope interaction with [Fe(CN)6]4− than with [Fe(CN)6]3−. This conclusion is also supported by our molecular dynamic (MD) simulations (Supplementary Note 1). The radial density profiles between the mass center of [Fe(CN)6]4−/[Fe(CN)6]3− and a water molecule in the pristine and GdmCl systems are shown in Fig. 2f and Supplementary Figure 2. Before adding GdmCl, water molecules in the 0.4 M K3[Fe(CN)6] solution are farther away from [Fe(CN)6]3−, with a peak position at approximately 4.8 Å, whereas the peak of the radial density profile for [Fe(CN)6]4− is approximately 4.6 Å in 0.4 M K4[Fe(CN)6] solution. The higher charge of [Fe(CN)6]4− results in a more closely “packed” solvation shell. When GdmCl is added, due to its higher charge, Gdm+ has a stronger interaction with the [Fe(CN)6]4− complex (−9161.82 kJ mol−1) than [Fe(CN)6]3− (−2344.95 kJ mol−1). Thus, more Gdm+ cations are prone to bond with [Fe(CN)6]4−, resulting in greater destruction of the hydration shell. The water number density at the first peak of [Fe(CN)6]3− decreases from 70.7 # nm−3 to 59.1 # nm−3 but from 72.7 # nm−3 to 33.9 # nm−3 for [Fe(CN)6]4− (Fig. 2f). The schematic solvation structures of [Fe(CN)6]3−/[Fe(CN)6]4− in GdmCl solution are illustrated in Fig. 2g. Apparently, a significant difference in solvation shells between [Fe(CN)6]3− and [Fe(CN)6]4− is achieved by adding GdmCl. More Gdm+ cations compactly surround [Fe(CN)6]4−, resulting in the rearrangement of its solvation shell. By contrast, the solvation shell of [Fe(CN)6]3− is only slightly affected by Gdm+ cations. Therefore, the entropy difference of [Fe(CN)6]3−/[Fe(CN)6]4− dramatically increases, resulting in a high Se value.
### The synergistic effect of guanidinium and urea
In addition to strong chaotropic cations, we found that highly soluble amide derivatives enhance the Seebeck effect of the [Fe(CN)6]3−/[Fe(CN)6]4− aqueous electrolyte. Urea is a low-cost amide species that can strongly impact the hydrogen-bonding network and hydration shell of ions in water28,29. A high Se value of ~2.0 mV K−1 is achieved solely by adding urea to the pristine electrolyte (Fig. 3a). Note that the Se value increases with increasing urea concentration and reaches the maximum when oversaturated urea with a concentration of ~24 M is added (Fig. 3b) In addition, we evaluate the influence of other six amide derivatives on the Seebeck effect of the [Fe(CN)6]3−/[Fe(CN)6]4− aqueous electrolyte (Supplementary Table 1 and Supplementary Note 3). The results show that highly soluble amide species (acrylamide, propanamide, and formamide) enhance the Seebeck effect, whereas poorly soluble amide species (thiourea, biuret, and hydroxycarbamide) have little effect. The results confirm that a high concentration of amide derivatives is essential for enhancing the Seebeck effect.
Interestingly, an extremely high Se value of ~4.2 mV K−1 is achieved when adding optimized urea (24 M) and GdmCl (2.6 M) simultaneously (labeled as UGdmCl in Fig. 3a, Supplementary Figure 6). This high Se value far exceeds previously reported values for aqueous and organic electrolytes16 and is an order of magnitude higher than those of state-of-the-art rigid and flexible thermoelectric materials, such as Bi2Te3 (~0.2 mV K−1)30 and poly(3,4-ethylenedioxythiophene) (~0.16 mV K−1)31, under ambient conditions. Urea and GdmCl appear to have synergistic effects on enhancing Se of [Fe(CN)6]3−/[Fe(CN)6]4− electrolytes. However, we do not observe a synergistic effect in the composites of GdmCl and other highly soluble amide species (Supplementary Figure 7). The MD simulation indicates that the polar urea molecules form stronger hydrogen bonds with [Fe(CN)6]3− than with [Fe(CN)6]4− (Supplementary Figures 8, 9 and Supplementary Note 1). Since urea prefers to bond with [Fe(CN)6]3− while GdmCl prefers to bond with [Fe(CN)6]4−, the entropy difference between the redox couple is dramatically increased in the composite of urea and GdmCl, resulting in a synergistic effect (Supplementary Figure 10).
The performances of TGC systems can be evaluated by the figure of merit (ZT)
$${\mathrm{ZT}} = {S_{\mathrm{e}}^2}\sigma T/\kappa,$$
(2)
where T is the absolute temperature, σ is the electrical conductivity, and κ is the thermal conductivity. According to Eq. 2, in addition to Se, the thermal conductivity (κ) and ionic conductivity (σ) are also important features of TGCs. Figure 3c shows the trends of κ for three TGC systems measured by the steady-state method (Supplementary Figure 11 and Supplementary Note 4). For the blank and GdmCl systems, κ significantly increases with increasing heat input due to the intense heat convection at a large temperature difference. By contrast, κ increases slightly for the UGdmCl system because heat convection is dramatically suppressed by the oversaturated urea, resulting from an increase in the viscosity of the electrolyte. Thus, a larger temperature difference is created in the UGdmCl system than in the blank and GdmCl systems at the same heat input (Supplementary Figure 12). In general, thermal conductivity and ionic conductivity have a trade-off relationship and are difficult to optimize simultaneously. Compared with the blank and GdmCl systems, the UGdmCl system has the lowest conductivity due to slow mass transport while still remaining at a high level: approximately 50 mS cm−1 at room temperature (Fig. 3d). Note that the ionic conductivity of the blank system significantly decreases below 278 K because of the crystallization of the redox ions. However, the ionic conductivities of GdmCl and UGdmCl confer stability, indicating that urea and guanidinium can inhibit crystallization of the redox ions at low temperature. Moreover, the performance of blank system decays, while the performance of GdmCl and UGdmCl remain excellent at cold temperature (Supplementary Figure 13 and Supplementary Note 5). The optimized TGC systems therefore can be stably operated in cold environments. Figure 3e shows the power outputs of TGC systems under temperature difference of 10 K. Voc is significantly higher for the optimized systems than for the blank system, whereas Isc does not increase significantly due to the slow mass transport. From the IV curves, Pmax/(ΔT)2 is obtained, with values of 0.41, 0.95, and 1.10 mW K−2 m−2 for blank, GdmCl, and UGdmCl, respectively. The Pmax/(ΔT)2 value for the UGdmCl system is enhanced by nearly threefold compared to the blank system due to the synergistic effect of urea and guanidine hydrochloride.
Ideally, a TGC system with a high Se and Pmax/(ΔT)2 value is crucial for efficiently producing available electrical energy from a small temperature difference. Here, we compare the Se and Pmax/(ΔT)2 values of our systems with those reported in the literature for typical planar and static TGC systems, as shown in Fig. 3f and Supplementary Table 2. Baughman and colleagues17,18,19 applied optimized carbon-based electrodes to produce high Pmax/(ΔT)2 values of approximately 0.38−1.9 mW K−2 m−2 but did not enhance the Se value. By contrast, other research groups9,10,12,13,14,32,33,34 have focused on enhancing Se in optimized electrolytes, yielding moderately high Se values of approximately 1.4–2.2 mV K−1 but inferior Pmax/(ΔT)2 values of <0.3 mW K−2 m−2. By contrast, our optimized TGC system combines the highest Se value of 4.2 mV K−1 and a high Pmax/(ΔT)2 value of 1.1 mW K−2 m−2:
The energy conversion efficiency (ηr) relative to the Carnot efficiency limit of a heat engine is as follows (Eq. 3):
$$\eta _{\mathrm{r}} = \frac{\eta }{{\left( {{\mathrm{\Delta }}T/T_{\mathrm{H}}} \right)}} = \frac{{P_{\mathrm{max}}/\left( {\kappa \cdot \frac{{\Delta {{T}}}}{d}} \right)}}{{\left( {{\mathrm{\Delta }}T/T_{\mathrm{H}}} \right)}} = \frac{{\left( {P_{{\mathrm{max}}}/({\mathrm{\Delta }}T)^2} \right)d \cdot T_{\mathrm{H}}}}{\kappa },$$
(3)
where the electrode separation distance (d) is 10 mm, the hot-side temperature (TH) is 303 K, and κ is the thermal conductivity at a ΔT of 10 K (Supplementary Figure 14). Due to the high power density and low thermal conductivity, the ηr value of 0.79% for the UGdmCl electrolyte is more than fivefold higher than that of the blank system (Supplementary Figure 15).
### The performances of a prototype module
To demonstrate the potential applications of our optimized systems for harvesting low-grade thermal energy, we fabricated a prototype module containing 50 UGdmCl units (1 cm2 area, 0.5 cm thickness) connected by Cu wires in series, as illustrated in Fig. 4a. The device generates Voc and Isc values of 3.4 V and 1.2 mA, respectively, at an applied ΔT of 18 K (Fig. 4b) and can directly power an LED array (Fig. 4c and Supplementary Movie 1). Note that the average Se value for the module is calculated to be 3.8 mV K−1, which is lower than the value of 4.2 mV K−1 because of the inevitable thermal contact resistance between the module and heat sources. Furthermore, the module is robust (Supplementary Movie 2) that can also harvest heat from the human body, and a stable voltage of more than 0.3 V is generated by a small temperature difference of approximately 1.3 K, as shown in Fig. 4d. In addition, our module can harvest waste heat in a cold environment (Supplementary Figure 16a) or from a refrigerator (Supplementary Figure 16b).
## Discussion
A low-cost TGC system combining the highest Seebeck coefficient and a high power density was developed by introducing urea and guanidinium into an aqueous electrolyte containing 0.4 M [Fe(CN)6]3−/[Fe(CN)6]4−. The underlying mechanism of the enhancement of the Seebeck coefficient is the significant increase in the entropy difference of the redox couple due to the synergistic interactions between urea/guanidinium and the redox couple. Guanidinium is prone to bond with [Fe(CN)6]4− rather than [Fe(CN)6]3− based on the ion specificity, whereas urea has a stronger affinity for [Fe(CN)6]3− than for [Fe(CN)6]4-. These differences in affinity synergistically enlarge the entropy difference of the redox couple, thereby significantly increasing the Seebeck effect. The performance of the electrolyte system was demonstrated with a module that integrated 50 optimized cells and generated usable electrical energy from several low-temperature heat sources, indicating the promising potential of this system for efficiently harvesting low-grade thermal energy. Further improvements in the performance of the device might be achieved by using highly conductive and porous electrode materials17, optimizing the trade-off between ionic conductivity and thermal conductivity by applying a porous thermal separator17. In addition, a flexible and wearable TGC could be designed by using gel electrolytes for harvesting body heat35,36.
## Methods
### Materials
All chemical reagents were purchased from Sigma-Aldrich Trading Co. Ltd. and used without further purification. Graphite electrodes with an electrical resistivity of approximately 10 µΩ m were commercial products purchased from Graphite Material Company Ltd. Water obtained from a Milli-Q system (Simplicity, Millipore, France) was used in all experiments.
### Performance characterization of the TGC
The performance of the TGC was measured by the typical planar cell device shown in Supplementary Figure 1. The open-circuit voltage (Voc) of the cells was measured with a Keithley 2000 multimeter, and the corresponding temperature difference was recorded by a thermocouple data logger (USB-TC-08, Pico Technology, St. Neots, UK). The current–voltage characterization of the device was performed with a Keithley 2400 instrument. There are approximately100 points between 0 V to open-circuit voltage. The voltage sweep rate is 0.1 s per point. The thermal conductivities of the TGCs were measured by the steady-state method (Supplementary Figure 11 and Supplementary Note 4). The ionic conductivities of the TGCs were measured with a conductivity meter (Mettler Toledo S-230).
### Mechanism characterization
The interactions between GdmCl and [Fe(CN)6]4−/[Fe(CN)6]3− were characterized with a UV–Vis spectrophotometer (LabRAM HR800) and an X–ray photoelectron spectrometer (ESCALab250). Because the concentration of 0.4 M [Fe(CN)6]4−/[Fe(CN)6]3− was too high for UV–Vis measurements, we diluted the solution to a concentration of 0.2 mM for UV–Vis measurements. The solid powders of GdmCl, K3[Fe(CN)6] and K4[Fe(CN)6] were dried in a vacuum oven at 333 K for 24 h before the XPS measurements. The samples of GdmCl/K3[Fe(CN)6] and GdmCl, K4[Fe(CN)6] were prepared for the XPS measurements by drying their composite solutions in a vacuum oven at 333 K for 48 h. The MD simulations were performed using the MD software package Gromacs 4.6 in the NPT ensemble at 298 K and 1 atm (detailed information is provided in Supplementary Note 1).
### Module preparation
The module containing 50 integrated UGdmCl units consisted of a polyamide frame (commercial sources), graphite electrodes, electrolytes and copper (Cu) wires. The polyamide frame had a size of 160 × 80 × 8 mm3 and contained 50 cells (10 × 10 × 5 mm3). Fifty graphite electrodes (12 × 12 × 2 mm3) were first fixed to one side of the polyamide frame with epoxy glue. Then, the cells were filled with the electrolyte. Finally, all cells were sealed with an additional 50 graphite electrodes. The cells were connected in series by Cu wires. To prevent the leakage, the whole module was sealed by epoxy resin glue. As shown in Fig. 4a, the module was sandwiched between two aluminum (Al) heat exchangers. The temperature difference across the module was created by pumping circulating hot water (333 K) and cold water (278 K) in the two heat exchangers. The Voc of the cells was measured with a Keithley 2000 multimeter, and the corresponding temperature difference was recorded by a thermocouple data logger. The current was measured with a Keithley 2400 instrument. The LED array consisted of 29 red LED lamp beads in parallel.
|
{}
|
# 1D filter for speed input and noisy position sensor
I was tasked with designing a filter to smooth out a translating table. The system, we command a speed and a desired position, and measure it with two noisy position sensors. The commanded speed is also not always perfect, but it's usually within 10%. The slide can only move left and right.
My initial thought was to simply use a moving average (via circular buffer), and grow / shrink the array based on the speed. For high speeds, I would have a smaller array (10 or so measurements), and for lower speeds, I would have a large array (maybe 40 measurements). When stopped, I would have it grow to 100 or so. The reason for this is to decrease the phase offset at high speeds where resolution is not as important, and decrease the noise at lower speeds, when accuracy is more important. However, this doesn't really take into account the speed, so I feel like I'm throwing away useful information.
I'm figuring there has to be a better way. Any thoughts on how I would approach this problem more elegantly?
Just so we're on the same page, "high speed" is only about 3 inches per second, and I am sampling at 20Hz (but each time, I get two measurements). There is a ramp up/ ramp down portion of the translation such that whenever the table approaches the desired position, it slows to a crawl of 0.5 inches per second.
Thanks!
• Have you considered a more sophisticated filter such as Kalman? What do you know about the spectral characteristics of the signals? – Moti Apr 1 '15 at 4:51
• @Moti Yes, Kalman was the first thing to came to mind, but I'm not overly familiar with it. I thought Kalman filters only considered observations, and did not take in any other inputs, such as predictions. Is that the difference between a Kalman filter and the Extended Kalman Filter? – dberm22 Apr 1 '15 at 12:29
• I don't recall the differences between the two, but actually they also may allow you some prediction. – Moti Apr 2 '15 at 11:48
Sounds like an Extended Kalman Filter is the ideal candidate.
Bayes++ is a really simple drop-in library for implementing one, and the example here (https://github.com/Exadios/Bayes-/blob/master/PV/PV.cpp implements the question exactly (except it only takes in 1 observation)
Below is the code, in case it gets moved:
/*
* Bayes++ the Bayesian Filtering Library
* Copyright (c) 2002 Michael Stevens
* See accompanying Bayes++.htm for terms and conditions of use.
*
* $Header: /cvsroot/bayesclasses/Bayes++/PV/PV.cpp,v 1.17.2.3 2005/04/07 06:39:22 mistevens Exp$
*/
/*
* Example of using Bayesian Filter Class to solve a simple problem.
* The example implements a Position and Velocity Filter with a Position observation.
* The motion model is the so called IOU Integrated Ornstein-Uhlenbeck Process Ref[1]
* Velocity is Brownian with a trend towards zero proportional to the velocity
* Position is just Velocity integrated.
* This model has a well defined velocity and the mean squared speed is parameterised. Also
* the velocity correlation is parameterised.
*
* Two implementations are demonstrated
* 1) A direct filter
* 2) An indirect filter where the filter is preformed on error and state is estimated indirectly
* Reference
* [1] "Bayesian Multiple Target Tracking" Lawrence D Stone, Carl A Barlow, Thomas L Corwin
*/
#include "BayesFilter/UDFlt.hpp"
#include "BayesFilter/filters/indirect.hpp"
#include "Test/random.hpp"
#include <cmath>
#include <iostream>
#include <boost/numeric/ublas/io.hpp>
namespace
{
using namespace Bayesian_filter;
using namespace Bayesian_filter_matrix;
// Choose Filtering Scheme to use
typedef UD_scheme FilterScheme;
// Square
template <class scalar>
inline scalar sqr(scalar x)
{
return x*x;
}
// Random numbers from Boost
Bayesian_filter_test::Boost_random localRng;
// Constant Dimensions
const unsigned NX = 2; // Filter State dimension (Position, Velocity)
// Filter Parameters
// Prediction parameters for Integrated Ornstein-Uhlembeck Process
const Float dt = 0.01;
const Float V_NOISE = 0.1; // Velocity noise, giving mean squared error bound
const Float V_GAMMA = 1.; // Velocity correlation, giving velocity change time constant
// Filter's Initial state uncertainty: System state is unknown
const Float i_P_NOISE = 1000.;
const Float i_V_NOISE = 10.;
// Noise on observing system state
const Float OBS_INTERVAL = 0.10;
const Float OBS_NOISE = 0.001;
}//namespace
/*
* Prediction model
* Linear state predict model
*/
class PVpredict : public Linear_predict_model
{
public:
PVpredict();
};
PVpredict::PVpredict() : Linear_predict_model(NX, 1)
{
// Position Velocity dependance
const Float Fvv = exp(-dt*V_GAMMA);
Fx(0,0) = 1.;
Fx(0,1) = dt;
Fx(1,0) = 0.;
Fx(1,1) = Fvv;
// Setup constant noise model: G is identity
q[0] = dt*sqr((1-Fvv)*V_NOISE);
G(0,0) = 0.;
G(1,0) = 1.;
}
/*
* Position Observation model
* Linear observation is addative uncorrelated model
*/
class PVobserve : public Linrz_uncorrelated_observe_model
{
mutable Vec z_pred;
public:
PVobserve ();
const Vec& h(const Vec& x) const
{
z_pred[0] = x[0];
return z_pred;
};
};
PVobserve::PVobserve () :
Linrz_uncorrelated_observe_model(NX,1), z_pred(1)
{
// Linear model
Hx(0,0) = 1;
Hx(0,1) = 0.;
// Observation Noise variance
Zv[0] = sqr(OBS_NOISE);
}
void initialise (Kalman_state_filter& kf, const Vec& initState)
/*
* Initialise Kalman filter with an initial guess for the system state and fixed covariance
*/
{
// Initialise state guess and covarince
kf.X.clear();
kf.X(0,0) = sqr(i_P_NOISE);
kf.X(1,1) = sqr(i_V_NOISE);
kf.init_kalman (initState, kf.X);
}
int main()
{
// global setup
std::cout.flags(std::ios::scientific); std::cout.precision(6);
// Setup the test filters
Vec x_true (NX);
// True State to be observed
x_true[0] = 1000.; // Position
x_true[1] = 1.0; // Velocity
std::cout << "Position Velocity" << std::endl;
std::cout << "True Initial " << x_true << std::endl;
// Construct Prediction and Observation model and filter
// Give the filter an initial guess of the system state
PVpredict linearPredict;
PVobserve linearObserve;
Vec x_guess(NX);
x_guess[0] = 900.;
x_guess[1] = 1.5;
std::cout << "Guess Initial " << x_guess << std::endl;
// f1 Direct filter construct and initialize with initial state guess
FilterScheme f1(NX,NX);
initialise (f1, x_guess);
// f2 Indirect filter construct and Initialize with initial state guess
FilterScheme error_filter(NX,NX);
Indirect_kalman_filter<FilterScheme> f2(error_filter);
initialise (f2, x_guess);
// Iterate the filter with test observations
Vec u(1), z_true(1), z(1);
Float time = 0.; Float obs_time = 0.;
for (unsigned i = 0; i < 100; ++i)
{
// Predict true state using Normally distributed acceleration
// This is a Guassian
x_true = linearPredict.f(x_true);
localRng.normal (u); // normally distributed mean 0., stdDev for stationary IOU
x_true[1] += u[0]* sqr(V_NOISE) / (2*V_GAMMA);
// Predict filter with known pertubation
f1.predict (linearPredict);
f2.predict (linearPredict);
time += dt;
// Observation time
if (obs_time <= time)
{
// True Observation
z_true[0] = x_true[0];
localRng.normal (z, z_true[0], OBS_NOISE); // normally distributed mean z_true[0], stdDev OBS_NOISE.
// Filter observation
f1.observe (linearObserve, z);
f2.observe (linearObserve, z);
obs_time += OBS_INTERVAL;
}
}
// Update the filter to state and covariance are available
f1.update ();
f2.update ();
// Print everything: filter state and covariance
std::cout <<"True " << x_true << std::endl;
std::cout <<"Direct " << f1.x << ',' << f1.X <<std::endl;
std::cout <<"Indirect " << f2.x << ',' << f2.X << std::endl;;
return 0;
}
|
{}
|
## solved numerically a system of nonlinear algebraic...
Hi everyone:
How can I solve numerically the system of nonlinear algebraic equations by Newton’s method?
eq1:= (1/2)*x[0]*sqrt(3)-(1/2)*x[1]*sqrt(3) = ((1/2)*x[0]*(t+(1/3)*sqrt(3))*sqrt(3)-(1/2)*x[1]*(t-(1/3)*sqrt(3))*sqrt(3))*(1-(1/6)*y[0]*(t+(1/3)*sqrt(3))*sqrt(3)+(1/6)*y[1]*(t-(1/3)*sqrt(3))*sqrt(3)-(1/8)*y[0]*(5*sqrt(3)*(1/12)-1/4+t)*sqrt(3)+(1/8)*y[1]*(-(1/4)*sqrt(3)-1/4+t)*sqrt(3)-(1/8)*y[0]*((1/4)*sqrt(3)-1/4+t)*sqrt(3)+(1/8)*y[1]*(-5*sqrt(3)*(1/12)-1/4+t)*sqrt(3))-5*t^3*(1/2)+49*t^2*(1/12)+17*t*(1/12)-23/6;
eq2:= (1/2)*y[0]*sqrt(3)-(1/2)*y[1]*sqrt(3) = ((1/2)*y[0]*(t+(1/3)*sqrt(3))*sqrt(3)-(1/2)*y[1]*(t-(1/3)*sqrt(3))*sqrt(3))*(-2+(1/2)*x[0]*(t+(1/3)*sqrt(3))*sqrt(3)-(1/2)*x[1]*(t-(1/3)*sqrt(3))*sqrt(3)+(1/4)*(-(1/12)*sqrt(3)-3/4)*((1/2)*x[0]*(5*sqrt(3)*(1/12)-1/4+t)*sqrt(3)-(1/2)*x[1]*(-(1/4)*sqrt(3)-1/4+t)*sqrt(3))+(1/4)*((1/12)*sqrt(3)-3/4)*((1/2)*x[0]*((1/4)*sqrt(3)-1/4+t)*sqrt(3)-(1/2)*x[1]*(-5*sqrt(3)*(1/12)-1/4+t)*sqrt(3)))+15*t^3*(1/8)-(1/4)*t^2+3*t*(1/8)-1;
eq3:=(1/2)*x[0]+(1/2)*x[1] = 1;
eq4:=(1/2)*y[0]+(1/2)*y[1] = 0;
## How I find college students to create problem work...
I teach high school IB Math.
I want to find someone who can take problems I have created and enter them into Maple (problem and solution) so I can use those in my classroom.
I can keep up with the new stuff I create, but I have almost 20 years of accumulated material I'd like to move into Maple.
Does anyone know where I can find someone to enter the problem and solution so that it is proved out in Maple? Maybe an existing college student using Maple at their school for math, engineering, or education? I am willing to pay them, it is just locating them that is a problem.
Even if you know a site for maple contractor types, that would be helpful.
Thanks a ton!
Robert
## Finding a Derivative with Maple...
I have a function e^(-\lambda z \sqrt(x^2 + y^2)), is it possible to use Maple to find some sequence of derivatives wrt to x and y which could be applied to this function to get
(z/(1+2*sqrt(x^2 + y^2)*lambda))*e^(-\lambda z \sqrt(x^2 + y^2))
## Any maple built in function can find the possible ...
Any maple built-in function can find the possible numbers when modular a fixed number?
For example,
(x&^3+y) mod 124 = 123 mod 124
Find the possible values of x and y.
## Maple prints out in loop even though I tell it not...
for i from 1 to numelems(X) do
A := x:
B := C;
end;
maple will print all statements even though I use the colon to try to suppress the first line. This just seems wrong. If I suppress the entire loop I have to use prints, and if I have a lot of statements I have to do it for every one even if I just want to supress 1.
It seems maple is suppose to suppress but it isn't.
## How to stop maple from locking up and crashing and...
One problem I come across far to often is that if maple get's bogged down in a computation or I screwed something up I can't always stop it. Sometimes the red ! is greyed out and I have to kill the mserver. When I do this I get the error that I need to save but save doenst' work and I can't do anything because the the kernel was killed. Maple doesn't seem to be able to properly recover... even though it does it most of the time when I click on the red !.
Is there any way around this?
## Override function for new behavior...
I get tired of having to resize my plots constantly so I can get a nicer view.
setPlotSize:=proc(P,sz::[posint,posint])
op(0,P)(remove(type,[op(P)],'specfunc(ROOT)')[],
ROOT(BOUNDS_X(0),BOUNDS_Y(0),
BOUNDS_WIDTH(sz[1]),BOUNDS_HEIGHT(sz[2])));
end proc:
This let's one set the size(mine is usually 1200x500) but I have to stick it in every plot.
Is there a way to override the plot functions to automatically do this for every plot or create a simple short option to scale it to the window size or some specific size?
I could probably make a simple function like RPF()
that I can wrap every plot but I'd like to avoid that step and just apply it to all plots by default(since 99% of the time I have to scale them.
## Is "MultiSet" reliable?...
Something really weird is going on when I build a MultiSet in two different ways, using the "+" operation. The two constructions give the same MultiSet (since U=V), but in some mysterious way they are not really equal (since X=Y is false). Does anyone know how to avoid this? Should the "+" operator be avoided altogether?
There is more: I tried saving the values of X and Y using the command: save X, Y, "anomaly.m"
I got the error message: "Error, could not open anomaly.m for reading".
Thanks!
>
>
(1)
>
(2)
>
(3)
>
(4)
>
(5)
>
(6)
>
## change variable in equation...
Hi everyone:
How can I re-write the EQ with transformation s=1+2*((tau-t)/T0) ?
EQ:=int(f1(t-tau)*(Sum(y[k]*F[k](tau), k = 0 .. M)), tau = t-T0 .. t)
tnx...
## How can I have a pretty display of a piecewise exp...
Hi,
How can I force the command InsertContent(Worksheet(Group(Input( T )))) to display the variable eq as it appears in label (2) ?
(a screen capture of the output of InsertContent(Worksheet(Group(Input( T )))) is given after the Maple code)
> restart:
> interface(version)
(1)
> with(DocumentTools):
> with(DocumentTools[Layout]):
> eq := piecewise(t < 1, sin(t), cos(t)); C := Cell( Textfield(style=TwoDimOutput,Equation(eq)) ): T := Table(Column(), widthmode=percentage, width=40, Row(C)): InsertContent(Worksheet(Group(Input( T )))):
(2)
>
## GKLS - Optimization test functions generator...
Hi!
There is a (relatively) known software code (written in C), called ." GKLS-generator" or "GKLS" to generate, according to certain user paramenters, optimization test functions. The code is available for free at the web
http://wwwinfo.deis.unical.it/%7Eyaro/GKLS.html
I would like to write this code in Maple. In the attached zip there is a PDF explaining how to build these functions. For now, I tried the follwoing Maple code GKLS_v4.mw
I think I'm doing something wrong, since the drawing generated by the attached Maple does not look much like the PDF in the attached zip (Fig. 1 of page 8).
Please, Can you help me with this?
## Typesetting vector names with up arrows in plots
by: Maple 2018
I was trying to display a Physics[Vectors] vector name in a 3dplot with an up arrow
on it. I found that this old 2008 trick still works in MAPLE 2018.
> restart;
> with(plots): with(Physics[Vectors]):
> # Using MAPLE 2018.2
> a:=arrow([-1,1,1],view=[-1.5..1.5,-1.5..1.5,-1.5..1.5]):
> v_; t:= textplot3d([-1.1,1.1,1,v_]): display(a,t);
> # I found this on an old 2008 post t:= textplot3d([-1.1,1.1,1,typeset(#mover(mi( || v || ),mo("→")))]): display(a,t);
## Maple not simplifies equation completly...
Hello
I have problem with Maple that is not simplifying equation completly:
My simplified equation:
Its sum of for example 20 elements and only n is increasing so why maple will not move a,b,c,d ahead parenthesis ?
## Application of MapleSim in Science and Engineering...
Application of MapleSim in Science and Engineering: a simulationbased approach
In this research work I show the methods of embedded components together with modeling and simulation carried out with Maple and MapleSim for the main areas of science and engineering (mathematics, physics, civil, mechanical etc); These two latest scientific softwares belonging to the company Maplesoft. Designed to be generated and used by teachers of education, as well as by university teachers and engineers; the results are highly optimal since the times saved in calculations are invested in analyzes and interpretations; among other benefits; in this way we can use our applications in the cloud since web technology supports Maple code with procedural and component syntax.
FAST_UNT_2020.pdf
Lenin AC
## Collocating a vector...
Hi User!
Hope you would be fine with everything. I have a vector "POL" of M dimension obatined for the following expression
restart; with(LinearAlgebra); nu := 1; M := 3;
for k while k <= M do
Poly[k] := simplify(sum(x^i*GAMMA(nu+1)/(factorial(i)*GAMMA(2*nu)), i = 0 .. k-1))
end do;
POL := <,>`(seq(Poly[k], k = 1 .. M))
and I want to construct a matrix of M by M by collocating it on the points x=i/(M-1) for i=0,1,2,...,M-1 like the following way,
For M=3 I need
Matrix(3, 3, {(1, 1) = Poly[1](0), (1, 2) = Poly[1](1/2), (1, 3) = Poly[1](1), (2, 1) = Poly[2](0), (2, 2) = Poly[2](1/2), (2, 3) = Poly[2](1), (3, 1) = Poly[3](0), (3, 2) = Poly[3](1/2), (3, 3) = Poly[3](1)});
For M=4 I need
Matrix(4, 4, {(1, 1) = Poly[1](0), (1, 2) = Poly[1](1/3), (1, 3) = Poly[1](2/3), (1, 4) = Poly[1](1), (2, 1) = Poly[2](0), (2, 2) = Poly[2](1/3), (2, 3) = Poly[2](2/3), (2, 4) = Poly[2](1), (3, 1) = Poly[3](0), (3, 2) = Poly[3](1/3), (3, 3) = Poly[3](2/3), (3, 4) = Poly[3](1), (4, 1) = Poly[4](0), (4, 2) = Poly[4](1/3), (4, 3) = Poly[4](2/3), (4, 4) = Poly[4](1)})
and general form is like this
[[[Poly[1](0/(M-1)),Poly[1](1/(M-1)),Poly[1]((2)/(M-2)),...,Poly[1]((M-1)/(M-1))],[Poly[2](0/(M-1)),Poly[2]((1)/(M-1)),Poly[2]((2)/(M-1)),...,Poly[2]((M-1)/(M-1))],[Poly[3]((0)/(M-1)),Poly[3]((1)/(M-1)),Poly[3]((2)/(M-1)),...,Poly[3]((M-1)/(M-1))],[...,...,...,...,...],[Poly[M]((0)/(M-1)),Poly[M]((1)/(M-1)),Poly[M]((2)/(M-1)),...,Poly[M]((M-1)/(M-1))]]];
Another problem is I want to define a vector of M dimension using a function f(x)=sin(x) and two points a=1, b=2 like the following way,
Vec:=[[[a],[f((1)/(M-1))],[f((2)/(M-1))],[f((3)/(M-1))],[...],[f((M-1)/(M-1))],[b]]]
|
{}
|
# QUESTION 8 Problem 2) True False (3) Type II Error is the probability that you reject...
###### Question:
QUESTION 8 Problem 2) True False (3) Type II Error is the probability that you reject the null hypothesis when it was true. True False
#### Similar Solved Questions
##### Provide complete interpretation for all spectra and suggest a structure flr each compound
Provide complete interpretation for all spectra and suggest a structure flr each compound...
##### Terrace laddcr used rcpalr water tank and Jir conditoncr unlt ctc: Icans aqainst lhe wall as Snown the Imaae Fiqure Thc foot of the ladder Am Irom the wall: The ladder reaches heiaht of 13.Sm on the wall.Fiouredetermine the measure of angle (in radian) using the tangent function: determine the length of the ladder using the sine function_ determine the measure of angle (in radian) using the cosine function
terrace laddcr used rcpalr water tank and Jir conditoncr unlt ctc: Icans aqainst lhe wall as Snown the Imaae Fiqure Thc foot of the ladder Am Irom the wall: The ladder reaches heiaht of 13.Sm on the wall. Fioure determine the measure of angle (in radian) using the tangent function: determine the len...
~ polnts Setcp1tA6.P.0a4 Hnale Aktour Tead Iudeni deaJes mlove Thecnor boo * Into tonnitery rom PJlire Fope altached Ine bar Staaumel cocthocni Unetr Inttion batwren bor and foor [$0_JdO mn [6.04 abot acchetotlor Ws (heindlne TheGadnnl ctarts Mozir na Dax In dine; Lerp nc Uncinard FhalIs the nr acc... 5 answers ##### JoeMuzt detyo2d2itha003decimsdependin}Funermtableporooriatelea3e wethetab provided 3T4T3331 notthetable:inthetextbook Knawe mut24P_ Tourdecimz ek = Muit decnoientcomjroojownMenunawe MutoeWin rencez There i gignificant evidente conciude Tnere not zignificant vijenc? conciudeTIIEC TINIEC Joe Muzt detyo2d 2itha 003decims dependin} Funerm table porooriate lea3e wethetab provided 3T4T3331 notthetable:inthetextbook Knawe mut24P_ Tourdecimz ek = Muit decnoientcomjroojownMenu nawe Mutoe Win rencez There i gignificant evidente conciude Tnere not zignificant vijenc? conciude TIIEC TINIEC... 4 answers ##### Tze objective cf this hcre VOIX [ Matlab = ird Fourier Serie: Coefficients for 2 27721 2n- Jtere Fowner Sena: rntton Tecore the gignal tlne dcmau fom tha Fcurier Ccaticlent Follow the Exzmple 3.19.2 Faze 315 of the text bcok fnd the Fourier Cceficient for the following sigral You will me te fictioz @- accomplith tis.~[]= -sin("/12-3-/ 8)Tzer use the fuctior i8econetrict 112 Criginal sgi ttomi the Fowner CoetcerteUze Matlab and firdthe Fourier Coeficients for t2 following signal107 +1I[&quo Tze objective cf this hcre VOIX [ Matlab = ird Fourier Serie: Coefficients for 2 27721 2n- Jtere Fowner Sena: rntton Tecore the gignal tlne dcmau fom tha Fcurier Ccaticlent Follow the Exzmple 3.19.2 Faze 315 of the text bcok fnd the Fourier Cceficient for the following sigral You will me te fictioz ... 3 answers ##### 25.3 Use the (a) Euler and (b) Heun without iteration) methods to solve dly 05t +y = 0 dtwhere y(0) = 2 and y' (0) = 0. Solve from x = 0 to 4 using h = 0.1. Compare the methods by plotting the solutions_ 25.3 Use the (a) Euler and (b) Heun without iteration) methods to solve dly 05t +y = 0 dt where y(0) = 2 and y' (0) = 0. Solve from x = 0 to 4 using h = 0.1. Compare the methods by plotting the solutions_... 5 answers ##### 5. Derive an expression for the rotational inertia of a thin disk of radius R and mass M rotated about its center. (Hint: Think about the area of a bit of the disk, da_ It has a "length" of some small arclength, and a 'width" of a bit of radius. Then da = length x width:) 5. Derive an expression for the rotational inertia of a thin disk of radius R and mass M rotated about its center. (Hint: Think about the area of a bit of the disk, da_ It has a "length" of some small arclength, and a 'width" of a bit of radius. Then da = length x width:)... 5 answers ##### Ball tossed from an upper-story windowbuilding The ball given an initial velocity the bullding does the ball strike the ground?80 m/s at an angle of 19.00 below the horizontal; It strikes the ground 6.00 Iater.How far horizontally from the baseFindheight from which the ball was thrownHow ong does take the ball reach point 10.0 m below the levellaunching? ball tossed from an upper-story window building The ball given an initial velocity the bullding does the ball strike the ground? 80 m/s at an angle of 19.00 below the horizontal; It strikes the ground 6.00 Iater. How far horizontally from the base Find height from which the ball was thrown How ong d... 1 answer ##### Solve each of the following problems algebraically. A plane's air speed (speed in still air) is$500 \mathrm{kph}$. The plane covers$1120 \mathrm{km}$with a tailwind in the same time it covers$800 \mathrm{km}$with a headwind (against the wind). What is the speed of the wind? Solve each of the following problems algebraically. A plane's air speed (speed in still air) is$500 \mathrm{kph}$. The plane covers$1120 \mathrm{km}$with a tailwind in the same time it covers$800 \mathrm{km}$with a headwind (against the wind). What is the speed of the wind?... 1 answer ##### Suppose that X is a random variable from a binomial distribution with parameters n=12 and p.... Suppose that X is a random variable from a binomial distribution with parameters n=12 and p. Consider the point estimate p̂=X/14 1. what's the bias of this estimate? 2. what is the value of the mean square error of this estimate if the actual value of p is 0.735... 1 answer ##### S Question Completion Status: QUESTION 6 Relief agency workers are feeding undernourished children in several parts... s Question Completion Status: QUESTION 6 Relief agency workers are feeding undernourished children in several parts of the world. An undernourished 3 year old in Syria weighed 9 kg, where the mean weight for healthy three-year olds is 13 kg and the standard deviation is 1.25 kg. In India, an underno... 1 answer ##### The net trading positions of EE Bank have a net value of$1,200 and a computed...
The net trading positions of EE Bank have a net value of $1,200 and a computed 1- day 99% VaR of$46,000. What is the minimum percentage loss that would be considered a tail event for this portfolio? Note: Your answer in must be expressed in percentage terms and accurate to within 0.01%. E.g., if yo...
##### For each statement, find a proof or a counterexample.a. The reciprocal of $x$ times the reciprocal of $y$ is the reciprocal of $x y .$b. The reciprocal of $x$ plus the reciprocal of $y$ is the reciprocal of $x+y .$
For each statement, find a proof or a counterexample. a. The reciprocal of $x$ times the reciprocal of $y$ is the reciprocal of $x y .$ b. The reciprocal of $x$ plus the reciprocal of $y$ is the reciprocal of $x+y .$...
|
{}
|
# Families of Curves Wanted
### An interesting problem
Let $n$ be a large positive integer. Recently I've been looking for a family of curves $f_n: \mathscr{C}_n\to \mathbb{P}^1$ with the following properties:
• $f_n$ is flat and proper of relative dimension $1$,
• the general fiber of $f_n$ is smooth, and the family is not isotrivial
• every singularity that appears in a fiber of $f_n$ is etale-locally of the form $$xy=t^n$$ where $t$ is a parameter on $\mathbb{P}^1$.
Please let me know if you know of such a family; another way of saying this is that I am looking for a rational curve in $\overline{\mathscr{M}_{g}}$ such that every point of tangency with the boundary has order $n$. Nicola Tarasca has suggested a deformation-theoretic construction, which is promising; as I understand it, his idea is to construct a family of admissible covers with the desired singularities (where the general member of the family is not smooth, but instead has rational components, and then try to smooth it while preserving the desired singularities). I have not checked to see if this works yet, but it seems like a reasonable idea.
### Why am I interested in this?
Let $U\subset \mathbb{P}^1$ be the Zariski-open subset over which $f_n$ is smooth. Then I claim that the monodromy representation $$\pi_1(U, x)\to GL_n(H^1(\mathscr{C}_{n,x}, \mathbb{Z}))$$ is trivial mod $n$. In general, it's quite hard to construct such representations, as I show for example in my paper Arithmetic Restrictions on Geometric Monodromy. My interest in this question comes in part from wanting to know if the results in that paper are sharp.
More generally, the geometric torsion conjecture predicts that there exists an integer $N=N(g)$ such that if $A$ is a tracless $g$-dimensional Abelian variety over $\mathbb{C}(t)$, then $$\#|A(\mathbb{C}(t))_{\text{tors}}|<N.$$ This conjecture is open -- moreover, to my knowledge, we don't know for sure if $N$ has to depend on $g$! The family of Jacobians of a family of curves as above would give an example showing that $N$ must indeed depend on $g$. The interesting examples I know of Abelian varieties over $\mathbb{C}(t)$ with lots of torsion come from genus zero Shimura curves, but there are only finitely many of those.
Let me briefly sketch the proof that a family of curves as above gives the desired example. The idea is that the local monodromy associated to the singularity $$xy=t^n$$ has the form $$\begin{pmatrix} 1 & n\\ 0 & 1\end{pmatrix},$$ which is equal to the identity mod $n$. So for such a family of curves, the local monodromy at each point should vanish mod $n$. But the fundamental group of a punctured genus zero curve is generated by inertia, so we're done.
|
{}
|
# Prove/disprove: $58654965542\ldots \in \mathbb{Z}$? [duplicate]
Possible Duplicate:
A “number” with an infinite number of digits is a natural number?
Is $58654965542\ldots\in \mathbb{Z}^+$ ?
In General, for any number $N$:
$N:=a_0 a_1 a_2 ...a_{n-1} a_n a_{n+1} ...$ "to infinity" : $a_i \in {0,1,2,...,9}$ $\forall i \geq 0$. Is $N \in \mathbb{Z}^+$ ?
If the answer "not" : Is $\exists A ( N \in A)$ : $A$ is a set of numbers ?
-
There is no number in the positive integers that contais infinitely many digits. – Michael Greinecker May 20 '12 at 12:18
The sum is not readable. What is $m\mathbb{Z}^+$? A question should contain a verb or a symbol that can be read as one. – Michael Greinecker May 20 '12 at 12:27
It's an integer if and only if all but finitely many of the $a_i$ are zero. – Gerry Myerson May 20 '12 at 12:28
You might want to ask yourself what is $\mathbb{Z}^+$? If you use the Peano axioms, then $\mathbb{Z}^+$ is exactly the set of numbers which can be reached by added $1$ to itself a finite number of times. A consequence is that no integer can have an infinite string of digits. – Eric Naslund May 20 '12 at 12:29
As for the second question, it is incomprehensible. What does $\exists A(N\in A):A$ mean? – Gerry Myerson May 20 '12 at 12:29
## marked as duplicate by Arturo Magidin, Eric NaslundMay 20 '12 at 20:02
The set $\mathbb{Z}_+$ is simply the set $\{0,1,2,3,\ldots\}$. In practice, one treats the numbers as basic objects. We write them with finitely many digits. It is of course perfectly possible to create a strange set-theoretic construction that allows the question to be meaningful.
For eaxample, you can write every number in $\mathbb{Z}_+$ in the form $a_n 10^n+\ldots+ a_2 10+a_1$ for some, uhm, positive integer $n$ and $a_i\in\{0,1,\ldots,9\}$. What you can do is reverse the order and add infinitely many zeros. So you can write, say, $113$ as $311000000\ldots$ and adapt the rules of arithmetic accordingly. In that case, it is possible to write an element in $\mathbb{Z}_+$ as an infinite sequence of digits. The sequences that will denote such numbers are exactly those that have only finitely many terms not equal to $0$. But this is a highly artificial construction.
What I mean is that there is no unique way to formalize $\mathbb{Z}_+$ in set theory, and one such formalization is as a certain sequence of digits. Unless you provide a specific set-theoretic constructions of $\mathbb{Z}_+$, the question cannot really be answered. – Michael Greinecker May 20 '12 at 13:07
|
{}
|
# Browse Dissertations and Theses - Nuclear, Plasma, and Radiological Engineering by Title
• (1980)
This thesis reports the effects that the scattering and absorption of photons have on the ability to produce tomographic images of an internal gamma-ray source distribution. Computer simulations of the photon-transport ...
application/pdf
PDF (5Mb)
• (1999)
The helicon antenna sits remotely outside the vacuum system, so all shadowing and contamination problems which the other two sources exhibit are eliminated. Ionization fractions to the substrate of 51 +/- 10% with a ...
application/pdf
PDF (6Mb)
• (2010-08-20)
Conventional 3-D external beam radiation therapy for breast cancer, although a vast improvement over traditional therapy, does not take into account respiratory motion. With breast cancer patients, this motion can expose ...
application/pdf
PDF (82Mb)
• (1992)
The risk reduction potential of the class of artificial neural networks based on the Barto-Sutton architecture is established. The risk associated with nuclear power operations is characterized by sequences of discrete ...
application/pdf
PDF (4Mb)
• (1973)
application/pdf
PDF (2Mb)
• (1980)
A fuel pin failure model is developed and incorporated into a fast-running computer program. The model is designed to predict irradiated fuel-pin cladding rupture during a hypothetical transient-overpower (TOP) accident ...
application/pdf
PDF (7Mb)
• (2012-02-06)
The Axial Offset Anomaly (AOA) is a major impediment to increases in reactor fuel performance preventing PWRs from operating with even more efficient core designs than they are at present. It is a phenomenon where boron ...
application/pdf
PDF (28Mb)
• (2008)
The PMNIM is then used to study the transition to turbulence in Arnold-Beltrami-Childress (ABC) flows. These flows display the interesting phenomenon of heteroclinic cycles. The results are obtained for two wavenumbers: k ...
application/pdf
PDF (2Mb)
• (2010-05-19)
In this dissertation, a new lattice Boltzmann model, called the artificial interface lattice Boltzmann model (AILB model), is proposed for the simulation of two-phase dynamics. The model is based on the principle of free ...
application/pdf
PDF (6Mb)
• (1994)
Pu(VI) speciation and solubility with phosphate was examined at pH from 0.3 to 12.2, Pu(VI) concentrations from 5 $\times\ {\rm 10}\sp{-5}$ M to 1.3 $\times\ {\rm 10}\sp{-3}$ M and phosphate concentrations up to 0.6 M. ...
application/pdf
PDF (10Mb)
• (1996)
High-volume air samplers were used to collect aerosol samples on Whatman 41 air filters at the Canadian air sampling stations Burnt Island, Egbert and Point Petre. Once collected, the samples were analyzed for trace elements ...
application/pdf
PDF (8Mb)
• (1992)
Inertial-Electrostatic Confinement (IEC) is an alternative approach to fusion power that offers the ability to burn advanced fuels like D-He$\sp3$ in a non-Maxwellian, high density core. These aneutronic reactions are ideal ...
application/pdf
PDF (6Mb)
• (1976)
application/pdf
PDF (4Mb)
• (2014-05-30)
This dissertation is aimed at nuclear-coupled thermal hydraulics stability analysis of a natural circulation lead cooled fast reactor design. The stability concerns arise from the fact that natural circulation operation ...
application/pdf
PDF (4Mb)
• (2006)
It is hoped that results of this study are of value in demonstrating the safety features of current BWRs, and in improved design of future natural circulation BWR systems.
application/pdf
PDF (5Mb)
• (1966)
application/pdf
PDF (3Mb)
• (1998)
One of the key steps that needs to be addressed for the construction of a fusion reactor is the ability to contain the plasma. The use of superconducting magnets to produce the necessary magnetic field is one of the most ...
application/pdf
PDF (7Mb)
• (1979)
application/pdf
PDF (4Mb)
• (1984)
Previous theoretical work has suggested that Alfven waves may be related to the anomalous toroidal magnetic flux generation and extended (over classical expectations) discharge times observed in the reversed field pinch. ...
application/pdf
PDF (5Mb)
• (1967)
application/pdf
PDF (4Mb)
|
{}
|
## Tuesday, October 12, 2010
### Why I hate programming (part 2 of n)
The difference between using "=" and "<-" for assignment in R:
It seems that, historically, "=" was not allowed for variable assignment. My understanding is that in modern R, using "=" for assignment is (mostly) equivalent to using "<-", so that:
x = 2 % This line is the same asx <- 2 % this line.
However, one should be aware that when using "=" to give parameters for functions, assignments do not occur (in the global workspace):
x <- rep(2, times=5) % "times" does not have value 5times % gives an errorx <- rep(2, times<-5) % "times" IS set to have value 5times % this will have value 5
There's also some weird differences with how "<-" and "=" get interpreted (maybe has to do with order of operations / syntactic sugar?):
x = y = 5 % both "x" and "y" will have value 5x <- y <- 5 % both "x" and "y" will have value 5x = y <- 5 % both "x" and "y" will have value 5x <- y = 5 % gives an error
This is what R's documentation has to say about this nonsense:
"The operators <- and = assign into the environment in which they are evaluated. The operator <- can be used anywhere, whereas the operator = is only allowed at the top level (e.g., in the complete expression typed at the command prompt) or as one of the subexpressions in a braced list of expressions."
Labels: ,
|
{}
|
» Home » Products » Features » Panel-data tobit models with random coefficients and intercepts
## Panel-data tobit models with random coefficients and intercepts
Panel-data models with random effects can be fit with Stata's me commands for multilevel modeling. And the metobit command can fit panel-data tobit models to censored outcomes. For instance, if y is left-censored at 10, you could type
. metobit y x1 x2, ll(10) || id:
to fit a model with random intercepts by id. In fact, you could fit this model with the existing xttobit command.
What you cannot do with xttobit is allow the slopes to vary by id. With metobit, we include random slopes for x1 in addition to the random intercepts by typing
. metobit y x1 x2, ll(10) || id: x1
You can see Multilevel tobit models for more information on metobit and for an example with both random slopes and random intercepts.
### Tell me more
You can also fit Bayesian panel-data (multilevel) tobit models using the bayes prefix.
Read more about multilevel tobit models in the Stata Multilevel Mixed-Effects Reference Manual; see [ME] metobit.
### Highlights
• Random effects
• Random intercepts
• Random coefficients (slopes)
• Left-censoring, right-censoring, or both
• Censoring that varies by observation
• Make inferences about either the uncensored or the censored outcome
• Graph marginal effects and marginal means
• Predict random effects and their standard errors
• Support for complex survey data
• Support for Bayesian estimation
|
{}
|
# Solved Problems-9
Problem-9
The cross section of a rectangular waveguide is 20 cm × 5 cm. If it is filled with air, find the first six lowest order modes which will propagate through the waveguide and their cut-off frequencies.
Solution
Here, a = 20 cm, b = 5 cm.
The cut-off frequency is given as,
We know that for TEmnp modes, any two indices have to be nonzero; whereas for TMmnp modes, all the indices have to be nonzero. Since a > b, the lowest-order mode will be TE10. Next higher order modes will be TE20TE30TE40TE01TE11 and TM11.
The corresponding cut-off frequencies are given as follows.
Hence, the first six lowest-order modes with their cut-off frequencies are given in this table.
Mode Resonant Frequency (GHz) TE10 0.75 TE20 1.5 TE30 2.25 TE40 3 TE01 3 TE11 and TM11 3.092
|
{}
|
Share
Notifications
View all notifications
# What is the Probability that an Ordinary Year Has 53 Sundays? - CBSE Class 10 - Mathematics
Login
Create free account
Forgot password?
ConceptProbability - A Theoretical Approach
#### Question
What is the probability that an ordinary year has 53 Sundays?
#### Solution
Ordinary year has 365 days
365 days = 52 weeks + 1 day
That 1 day may be Sun, Mon, Tue, Wed, Thu, Fri, Sat
Total no. of possible outcomes = 7
Let E ⟶ event of getting 53 Sundays
No. of favourable outcomes = 1 {Sun}
P(E) ="No.of favorable outcomes"/"Total no.of possible outcomes"=1/7
Is there an error in this question or solution?
#### Video TutorialsVIEW ALL [5]
Solution What is the Probability that an Ordinary Year Has 53 Sundays? Concept: Probability - A Theoretical Approach.
S
|
{}
|
J Syst Evol ›› 1982, Vol. 20 ›› Issue (4): 385-391.
• Research Articles •
### The uplift of the Qinghai-Xizang (Tibet) Plateau in relation to the vegetational changes in the past
Hsü Jen
1. ( Instituic of Botany, Academia Sinica)
• Published:1982-11-18
Abstract: By late Carboniferous the flora of northern Xizang differs from that of the northern India. During late Permian, the northern Xizang was inhabited by the Gigantopteris flora, while in the southern Xizang was widespread the Glossopteris flora. The upper Triassic flora of the northern Xizang is closely related to that of southwestern China and quite different from that of India. The Jurassic flora found in Tsaidam of Chinghai and the early Cretaceous flora found in Lhasa of the northern Xizang are closely related to these of the northern hemisphere, and show no relationship with these of the southern hemisphere. The late Cretaceous flora of Rikaze and the early Eocene flora of Ali region are also of northern hemisphere in affinity and show no relationship with the Daccan Intertrappean and the Eocene floras of India. Hence, the northern and the southern Xizang should have belonged to two different continents, Eurasia and Gondwanaland. Between them, a very wide sea, the Tethys, was situated. This strongly supports the view of continental drift that the India block drifted in late Jurassic-Cretaceous from the south-eastern corner of Africa and later on in Eocene joined up with Asia to become its subcontinent. The suture line between Eurasia and the India block perhaps lies in the belt of basic to ultrabasic rocks along the Yalu-Tsangpo valleys. Judging from the nature of the floras ranging from the late Carboniferous to the early Eocene, the northern Xizang most probably was of lowland in topography throughout these periods. The Miocene floras of the central and the northern Xizang were mainly composed of deciduous broad-leaved trees, though some evergreen trees existed somewhere else. It reflects the land of the central and the northern Xizang had already uplifted to some extant before Miocene. During the time of Pliocene, the evergreen broadleaved trees were gradually declining in their development in the northern Xizang. The vegetation of the Chaidamu (=Tsaidam) Basin further changed into deciduous broad-leaved to coniferous forests and then turned into grasslands and semi-deserts or deserts. It shows by that time the land of Xizang and Chinghai further upheaved. Up to the late Pliocene, the vegetation of the northern Xizang and Chinghai further changed. But the vegetation of the Himalayan region was still dominated by evergreen oaks and Cedrus forests. Most probably by that time the Himalayas was not so high as present. There was no barrier to prevent the monsoon winds of the Indian Ocean passing over the Himalayas. The most active period of the uplift of the mountain ranges in Xizang and the Qinghai-Xizang Plateau is the Quaternary. By that time no evergreen broad-leaved trees could live in the northern Xizang. During the late Quaternary, the vegetation of most parts of Xizang gradually changed into cold alpine desert. At last the Qinghai-Xizang Plateau turned into the present state.
|
{}
|
## Tsukuba Journal of Mathematics
The Tsukuba Journal of Mathematics is a continuation of Science Reports of the Tokyo Bunrika Daigaku, Section A (1930-1953), and Science Reports of the Tokyo Kyoiku Daigaku, Section A (1954-1977). The journal publishes original contributions to pure and applied mathematics. All papers are refereed.
## Top downloads over the last seven days
On some matrix diophantine equationsVolume 33, Number 2 (2009)
A characterization of isoparametric hypersurfaces in a sphere with $g\le 3$Volume 38, Number 2 (2015)
On quasi-Einstein spacetimesVolume 33, Number 2 (2009)
On the Gauss map of surfaces of revolution in the three-dimensional Minkowski spaceVolume 36, Number 2 (2013)
Distance between metric measure spaces and distance matrix distributionsVolume 38, Number 2 (2015)
• ISSN: 0387-4982 (print)
• Publisher: University of Tsukuba, Institute of Mathematics
• Discipline(s): Mathematics
• Full text available in Euclid: 2009--
• Access: Articles older than 5 years are open
• Euclid URL: http://projecteuclid.org/tkbjm
|
{}
|
## Friday, February 29, 2008
### The Absurd Nature of NFL Free Agency
As Steelers fans, we know the drill come the end of February. While other teams are wining and dining potential free agents, the Steelers stand idly by, content to sit it out and go bargain basement shopping after the first couple of weeks. I used to be frustrated by this. I used to look at the list of free agents and start thinking about how so-and-so could bolster our pass rush, or how this guy could really shore up our offensive line, allowing us more flexibility on draft day. I used to. Now, I’m seeing where the Steelers were coming from.
Personally, I don't really like the move. It's time to move on and re-tool this offensive line. Starks played pretty well in the home stretch at LT in place of Marvel Smith, but the idea of shelling out $7 million for his services this season seems a little too expensive for my taste. In all likelihood, this is a precusor to ramp up efforts to ink him to a long-term deal, but I don't like that idea either. To me, he's just not worth it. Based on the amount of money Starks will receive (either through the tender or the long-term deal), one of two things will happen should Starks be on the roster this season: 1) He will take over at left tackle, and Marvel Smith will be released sometime this offseason to save some money. 2) Starks will take over at RT, and Colon will kick inside to guard. As a whole, I don't like the way the front office has approached addressing the offensive line. It was quite clear help was needed last offseason, and the only significant addition was signing a scrub from the Tampa Bay Buccaneers. No premium picks were invested along the O-line in the 2007 draft. I was in favor of not showering Alan Faneca with tons of cash, but on the flip side they signed Kendall Simmons to an extension that reeked of desperation. And now, it seems like they want to retain a guy who had a couple decent games towards the end of the season and lavish him with a nice fat contract. Look, the Steelers front office knows what they're doing, and they sure as hell know more than me. That said, it's fair to be critical of them at times, and in my opinion this is one of those times. End of Post ## Tuesday, February 19, 2008 ### Doubt About It Writer, Sam, Up To His Usual Recruiting Games If you haven't heard already, Sam, Doubt About It founder and writer, is back to his usual Anti-Ohio State campaign. Fortunately enough the work of a Joe Paterno intern stands out in Buckeye Country. Willie Mobley, an incoming Ohio State recruit, has recently been receiving fake letters declaring that his scholarship to Ohio State had been revoked. Mobley luckily knows his Tressel signatures, and knows that Big Ten schools will do anything to get their team to the top. Maybe this anti-recruiting technique was something Paterno first saw back in the 1800s, and maybe I am just ahead of the times, but... nice try Penn State. (Sam...I just wanted to say that I know it's you, and that you should give it up. Next time just tell them that there are Eat'n Parks at Penn State, and not Ohio State) Intriguingly enough, Sam might be doing quite the job getting players from the state of Ohio to Penn State. Out of the 4 Four-Star recruits that Penn State is bringing in for 2008, two of them reign from Ohio... In other news, if Sam can pull off convincing Tyrelle Pryor to go to Penn State, then I'll give him props. Recently this tidbit was featured in a Rivals.com article. "Pryor admits he was set to sign with Ohio State the day before Feb. 6, but a conversation with his father, Craig (probably Sam in a costume), made him think twice. His father wanted him to take another look at Penn State and take an official visit." If it happens, it'll be the infamous "Stickies Ala mode" that pulls Pryor. I'll admit, Ohio State just doesn't have it... and I'm not talking about the shitty "Diner" Stickies...Sam admits it to me all the time, "Diner" stickies suck. They suck...Get over yourselves "Diner" people. Go to Eat'n Park ### NHL Economics Seth over at Empty Netters, found this interesting article. Apparently, NHL Players are making more now than they were back in 03-04, before they took that 24 % roll back on all salaries. What always kills me when people compare year to year spending is their complete neglect of inflation. Just because your grandfather regales you with stories of the fudgy wudgey man selling him ice cream for 5 cents back in 1938, does not mean inflation is only a long term effect. Especially this past year, when the dollar weakened significantly, there has to be some sort of conversion. In the article, it is claimed that the total league wide payroll this year is$1.4 billion, compared to $1.34 billion in 03-04. Assuming 15 % inflation from 2003-2007, that$1.34 billion translates into \$1.54 billion in present day terms. What it all means is that players are still getting paid 9.1 % less than their 03-04 counterparts. Its a lot less substantial than 24 %, but I'd also like to believe that the NHL is doing better financially.
Maybe I'm an idiot and they were actually talking in terms of the Canadian loon (it is a Canadian paper after all), but I'd like to think they try and keep some consistency when dealing with finances in sports. Regardless, the amount the players are making relative to total league revenues is what really matters, but I'm too lazy to try and find that out.
## Monday, February 18, 2008
### What We Learned This Weekend...
- Pitt basketball is...well, Pitt basketball. Really hard game to watch against Marquette. Painful enough that I would just like to move on and stop writing about it.
- The Dunk Contest was "saved" and "brought back" once again. Look, Dwight Howard is a physical freak, and the event itself was a fun way to kill an hour. But this was the 8th year in a row that Kenny Smith proclaimed "OHHHHH!!!! IT"S BACK! THE DUNK CONTEST IS BACK!" after an entertaining dunk. Seriously, it can't be "saved" that many times in a row. Why does the event have to take on this "back from the dead" feel every year? Later this afternoon I am going to go through YouTube clips and I guarantee you that I can find clips from 2000-2008 in which Kenny Smith declares that the event is back for good and that Vince Carter/Josh Smith/Jason Richardson/Nate Robinson/Gerald Green/Dwight Howard saved the dunk contest.
- Duke basketball is not invincible. If you see a skinny man wearing Duke blue in the Wake Forest student section during the highlights of last night's game, then you have been fortunate enough to spot 1) one of DAI's most loyal readers, codename BrittleBones and 2) one of the more unlucky men in America.
- This Pens season is even more improbably than last year's. As I sat in a bar watching the Pens lose in game 5 to the Senators last year with the Artist Formerly Known as Pirate in Search of Nuttings Chest, we both reflected on what a ridiculous season it was. Jordan Staal broke out like a 14 year old with acne, Sidney validated the hype, and Fleury came up big in the playoffs. What a wacky, unpredictable ride it was. Heading into this season, the concern around the Burgh was that we wouldn't be able to recapture last year's magic.
But after games like the one yesterday in Buffalo, you have to ask yourself: could any hockey season possibly be more ridiculous than this one? Ty Conklin is entering a special universe of Pittsburgh athletes. Just writing that sentence lets you know that something goofy is happening. But when you consider the play of Geno, the (for the most part) poor play of numerous players on the team throughout the season, and the endless list of injuries, it just seems too unbelievable a story to believe. Is it a story "that Walt Disney would find too improbable to tell"?* Only time will tell. But when a team gets outshot like they did yesterday and still wins 4-1, you know strange things are afoot.
*Quote from "One From the Heart" is paraphrased. Please comment if you remember the exact phrasing.
## Thursday, February 14, 2008
### The Fall of the Belichick Dynasty
Take from it what you will: the Pats have been taping since 2000. Does it invalidate any championships? Who knows. Does it diminish any records? Eh, I'm still inclined to say that Tom Brady is pretty freaking good (as much as I hate him). But man, this sure beats the Clemens saga. More to come as this story unfolds.
## Wednesday, February 13, 2008
### Hours and Hours of the Pot Calling the Kettle Off-White
We've said very little on this whole Clemens thing because it's kind of a non-issue, really. Baseball has obviously been influenced by steroids. Nothing can change that fact. Might as well move on, I say.
Yet 60 channels worth of ESPN disagreed with me today, in addition to a large panel of old white guys. What I found most interesting was the moral diatribe from Indiana Representative Dan Burton, who called the entire proceeding a circus , criticized the "trial by media" approach to the entire thing, and verbally abused Brian McNamee to no end.
Now, I was under the impression that the committee was on hand to find the truth, not wax (un)poetically about an individual's supposedly despicable moral fiber. I would love to see someone go through the panel's respective promises on the campaign trail and see how many of those promises came true. Jobs and better health care are easy to promise on the campaign trail; they're a little more difficult to pin down in the multi-interest flurry that is Congress.
The obvious difference here is that McNamee lied while under inspection and (I would assume) under oath, whereas politicians are only bound by a verbal contract with their constituents. But the way Rep. Burton spoke did not make such a distinction. He spoke with vitriolic passion about the damning effects of lying, one that appeared to apply for all forums of society. I'm not one to make blanket statements about the integrity of the government, but as far as moralistic preaching is concerned, save it for the churches and the schools, Rep. Burton.
Coming soon: the never-before-seen transcript of the Congressional hearing regarding Operation Shutdown.
## Monday, February 11, 2008
### Can The Stache Out-Project Mel Kiper?
Let me preface this post by saying that I am a football junkie. It’s the one sport I follow all season long, with my favorite part of the offseason being the NFL Draft. The reason for this is probably because I need something sports related to follow after the NFL season ends, and:
1) I enjoy watching the Penguins but for some reason can’t get passionate about hockey.
2) The Pirates don’t deserve my attention. I know by mid-April they’ll already be playing bad baseball, well on their way to another losing season in which they finish approximately 20 games under .500.
Therefore, I put my efforts into following the NFL, and that means keeping tabs on the Senior Bowl, the NFL Scouting Combine, and the NFL Draft. Somehow, one individual who has made this into a career, without having played any serious form of organized football, is Mel Kiper Jr. I don’t know exactly what Mel does during the months of May-January, but from February through April, Mel is all over ESPN and ESPN.com.
One of Mel’s most popular features is his first round projection, which is typically updated once a week. With that said, I thought it would be fun to see if the Stache could out-project the great Mel Kiper. The rules? We’ll use The Huddle Report’s scoring system: 2 points for players matched to the correct teams that pick them, 1 point for each player correctly placed in round one. This will allow me to compare myself to others, but I’m gunning for Kiper. The Stache is coming after you, Mel.
Now, I’m not pulling this projection out of my ass. As I said, I follow the NFL closely, so I have a pretty good sense of what each team needs. Additionally, I’ll keep tabs on key offseason events, which already included watching the Senior Bowl (and reading some Senior Bowl practice reports) and monitoring the results from the NFL Scouting Combine. This gives me a general idea of what round many players are projected to go in, and the hope is that by the time the draft rolls around in April I’ll have the necessary knowledge to project the first round better than Mel Kiper does. As I said, Kiper updates his projection once a week. I will update mine about once a month, probably once after the Combine ends in late February, once at the end of March after some free agents sign, and a final projection the day before or morning of the draft. Here is my initial first round projection, which also includes Kiper’s selection at each pick. The only projection that will count for scoring purposes, though, is my final projection right before the draft and Kiper’s final projection.
1. Miami- Chris Long, DE, Virginia (Kiper: Glenn Dorsey, DT, LSU)
Howie Long’s son is the #1 pick, where he should fit in well at defensive end in the 3-4 defense that Parcells wants to implement in Miami.
2. St. Louis- Glenn Dorsey, DT, LSU (Kiper: Chris Long, DE, Virginia)
Will team up with last year’s first round pick Adam Carriker in the middle of the Rams defense
3. Atlanta- Darren McFadden, RB, Arkansas (Kiper: Matt Ryan, QB, Boston College)
Warrick Dunn doesn’t have any gas left in the tank, and Jerious Norwood is not a feature back. Enter McFadden, who’s too good to pass up at this point
4. Oakland- Sedrick Ellis, DT, USC (Kiper: Darren McFadden, RB, Arkansas)
Was absolutely dominant all week at the Senior Bowl. No one could block him. And the Raiders coaching staff saw it first hand, as they were coaching the North team.
5. Kansas City- Jake Long, OT, Michigan (Kiper: Long)
Chiefs O-line is a mess. Long is the best tackle in this class.
6. New York Jets- Vernon Gholston, DE, Ohio St. (Kiper: Gholston)
The 3-4 OLB the Jets need for their pass rush.
7. New England- Mike Jenkins, CB, South Florida (Kiper: Aqib Talib, CB, Kansas)
This is how the Patriots operate. They’ll likely lose Asante Samuel this offseason, and they’ll just plug his spot with one of the premier draft prospects at the cornerback position.
8. Baltimore- Matt Ryan, QB, Boston College (Kiper: Sedrick Ellis, DT, USC)
Troy Smith is not the long term answer, and neither is Kyle Boller. It’s time for the Ravens to try again at the QB position.
9. Cincinnati- Keith Rivers, LB, USC (Kiper: Philip Merling, DE, Clemson)
Bengals need athleticism in their front seven. They’ll get it with Rivers, who’s 6’2, 236 lbs and runs the 40 in about 4.6.
10. New Orleans- Leodis McKelvin, CB, Troy (Kiper: Kentwan Balmer, DT, North Carolina)
Another guy who had a huge Senior Bowl week. Comes from a small school, so he’s a little unknown.
11. Buffalo- Malcolm Kelly, WR, Oklahoma (Kiper: Kelly)
Top WR in this class, in my opinion. Tall, strong, fast. Gives the Bills another option at WR besides Lee Evans.
Left tackle Mett Lepsis retired, leaving a gaping hole in the depth chart.
13. Carolina- Jeff Otah, OT, Pittsburgh (Kiper: Otah)
Both their starting offensive tackles, Jordan Gross and Travelle Wharton, are free agents. Likely won’t keep both, so Otah will take one of their spots.
14. Chicago- Brian Brohm, QB, Louisville (Kiper: Sam Baker, OT, USC)
Bears depth chart at QB includes Rex Grossman, Kyle Orton, and Brian Griese. That means they have no QB’s worth a shit.
15. Detroit- Dominique Rodgers-Cromartie, CB, Tennessee St. (Kiper : Leodis McKelvin, CB, Troy)
MVP of the Senior Bowl, causing his stock to soar. Unbelievably gifted (6’2, 183 lbs, 4.45 40-yard dash, great hands), but carries the small school tag. A ballhawk and playmaker, much like his cousin, Chargers CB Antonio Cromartie.
16. Arizona- Jonathan Stewart, RB, Oregon (Kiper: Stewart)
Edgerrin James is not getting any younger. Stewart is too good to pass up at this point. Remember, they almost drafted Adrian Peterson last year.
17. Minnesota- DeSean Jackson, WR, California (Kiper: Jackson)
Helps the Vikings anemic passing game, while at the same time bolstering their special teams. Very good receiver and one of the best kick returners in the college game.
18. Houston- Kenny Phillips, S, Miami (Fla.) (Kiper: Mike Jenkins, CB, S. Florida)
Texans have focused on the front seven in past drafts (Travis Johnson, Mario Williams, DeMeco Ryans). Now it’s time to address the secondary.
19. Philadelphia- Derrick Harvey, DE, Florida (Kiper: Chris Williams, OT, Vanderbilt)
Eagles love to draft in the trenches. Kearse isn’t giving them much any more, so give them Harvey to bolster the pass rush.
20. Tampa Bay- Kentwan Balmer, DT, North Carolina (Kiper: Felix Jones, RB, Arkansas)
Strong interior pass rusher, stock might be slipping a bit, though, after early departure from Senior Bowl week due to injury.
21. Washington- Calais Campbell, DE, Miami (Fla.) (Kiper: Campbell)
Redskins pass rush was their Achilles heel on defense last season.
22. Dallas- Mario Manningham, WR, Michigan (Kiper: Manningham)
Terrell Owens is getting older, Terry Glenn is done, and Patrick Crayton is a #2 at best. Manningham is a good fit for them.
23. Pittsburgh- Chris Williams, OT, Vanderbilt (Kiper: Chilo Rachal, OG, USC)
I specifically watched him during the Senior Bowl. Had a very solid game. Small arms are an issue (similar to Willie Colon in that respect), but he makes up for it with terrific footwork and technique.
24. Tennessee- James Hardy, WR, Indiana (Kiper: Limas Sweed, WR, Texas)
Tall target for Vince Young, provided Young can actually get the football to him.
25. Seattle- Rashard Mendenhall, RB, Illinois (Kiper: Fred Davis, TE, USC)
Many Seahawks fans want Alexander out of Seattle. His game is declining rapidly. Mendenhall would eventually become the feature back in his second season.
26. Jacksonville- Lawrence Jackson, DE, USC (Kiper : Derrick Harvey, DE, Florida)
Pass rusher who can provide pressure on the outside while Stroud and Henderson wreak havoc on the inside.
27. San Diego- Reggie Smith, CB/S, Oklahoma (Kiper: Smith)
A team with no clear needs. Smith is a versatile defensive back who played both corner and safety at Oklahoma.
28. Dallas- Felix Jones, RB, Arkansas (Kiper: Rashard Mendenhall, RB, Illinois)
Julius Jones is probably gone via free agency. Felix Jones will team with Marion Barber and become the Cowboys feature special teams returner.
29. San Francisco- Early Doucet, WR, LSU (Kiper: James Hardy, WR, Indiana)
Alex Smith has no chance for success without any wide receivers. Doucet gives him an option.
30. Green Bay- Antoine Cason, CB, Arizona (Kiper: Keith Rivers, LB, USC)
Another team with no clear needs, though they could use a #3 CB behind Al Harris and Charles Woodson.
31. New England- FORFEITED DUE TO SPYGATE
32. New York Giants- Dan Connor, LB, Penn St. (Kiper: Connor)
Giants could use the linebacking help.
End of Post
## Sunday, February 10, 2008
### Malkin Making Case For MVP
The Pens finally got off the schnide against the Flyer's, defeating them 4-3 this afternoon. Malkin figured in on all 4 of the goals, all while his parent's were in attendance for the second game in a row. Malkin has 21 points in the 10 games since Sid has been out, for a Lemieux-esq 2.1 points/game. A couple thoughts from today:
-Talbot, after taking a double minor in the first, looked possessed the rest of the way. Along with Staal and Christensen, underperformers this season, the line looks like it has considerable chemistry.
-Speaking of good line combos, I can't see Therrien simply substituting Sid in for Malkin when he comes back. Malone-Malkin-Sykora has found more consistency and chemistry then any of the other 10 dozen line combinations Therrien has called for this season. I don't know who you put Sid with, but you can't break up that line.
-Bob Errey did not have a good day. Potash had a segment about Penguins' fantasy camp, where a bunch of 30 and 40 somethings got to play with Penguins' alumni. When Dan sent it up to the booth he asked "Where were you during all of this, Bob?" to which Errey replied rather seriously "I wasn't invited." After Potash apologized, there was around 3-4 seconds of awkward silence. Then, later in the game when the Flyers made it 4-3, according to Wannstache Errey meant to say "Hold your breath" but accidentally said "Hold your breasts." Anyone who Tivo'd the game, the youtubes are waiting for you.
-I don't mind Nathan Smith or Kris Beech, but there will be no tears when Sid comes back and replaces one of the two.
-As reported by Seth at Empty-Netters, Fleury was sent down to Wilkes-Barre for conditioning. About time.
Rick sent me another email about the need of another quality defenseman, which always seems to coincide with a Penguins' surge. Still, as you can tell from our lack of posting, I'm happy with content. I may not agree with it entirely, but its hard to completely disagree. Let him have it in the comments.
Rick's Rant
I'm a little sick of people telling me that I am wrong on the Penguins defense and that we'll be "ok" come playoff time.
So how's this for an example...
I have always liked Darryl Sydor, probably my favorite player in the NHL. One of the top 10 defenseman...
People wanted him traded, because he struggled early with a new system and had a +/- as low as -7. I said to hold on.
On December 27th, in a game against that Capitals, Darryl Sydor scored the winning goal. Since that time there have been 19 games, only 3 of which Sydor hasn't logged 18 minutes. The Pens are 13-3-3, amassing 29 pts.
The defenseman +/- has looked like this:
Sydor: +10
Whitney: +8
Letang: +4
Orpik: +2
Scuderi: +1
Gonchar: 0
I hope this validates I know defense. I will admit, I don't know much about offense, but you show me a team that has a better than avaerage +/- for its defensemen and that team will be ok.
What can we get from this? That Darryl Sydor is a leader, and Ryan Whitney one day will be worth his huge salary cut. Kris Letang is impressive for a rookie, and Brooks'numbers may be lower cause he plays with Letang at times. But is Brooks worth a Whitney like extension at the end of this year? Hell no. Is Scuderi a guy you want logging 20+ a game in the playoffs? Get real. And finally, is Gonchar a true "all-star"? Maybe in Mario's time, where the average score of a a game was 9-8. However, in today's NHL Sergei is living up to the scouting report that says he is physically intimidated and makes too many mistakes in his own end.
So as the trade deadline approaches, I encourage everyone to give up their Sundin/Hossa/Havlat wishes, because Shero knows this team needs a defenseman and he will get it.
Finally, my guys that should be worried about playing in other cities at the end of the month are: Staal, Christiansen, Kennedy, and Orpik. Other than that, I believe everyone else is safe.
--
Pitcher's and catchers report on Thursday. The swift arrival of spring training always reminds me that the Steelers' and Pirates' seasons are almost perfectly split to cover two halves of the year, with the football season ending right as spring training starts, and Pirates season effectively ending in late July as St. Vincent's opens once again.
## Thursday, February 7, 2008
### Revenge Is Sweet
ESPN's featured comment of the day...
HA! I knew we'd get the Braves back for knocking us out of the NLCS twice, the last one being particularly painful, and then sending us into a 15 year downward spiral! Now that we've taken their ADD first baseman who still may not be aware that the season starts in April and not July, its all finally even.
## Tuesday, February 5, 2008
### Sanchez and the Eve of a Monumental Collapse
-Dejan Kovacevik over at the Post-Gazette has reported that the Pirates signed fan favorite and Malibu Bay Breeze drinker Freddy Sanchez to a 2 year deal with an almost certain to happen club option for the 3rd year. The deal will range around 19 million in total, with the breakdown somewhere at 4.5 million the first year, 6 the next, and around 8-9 million for the third year club option. The third year becomes a guarantee if Freddy reaches some requirements in the first two years, which Kovacevik says likely will be plate appearances.
While the last time we made a significant investment in a singles hitter didn't turn out so well, the framework of this deal makes sense and gives the Pirates some flexibility in that third year if Freddy shows he can't stay healthy. Overall, a much more shrewd move than anything Littlefield ever did. Plus, it provides some kind of indication that the Pirates just might plan on having a higher payroll in the 2010 season and, coinciding with that, a competitive team. One can dream, right?
-According to ESPN, the NFL will be meeting with Matt Walsh, the videographer who claims he has new information on Spygate. The traditional media hasn't been downplaying it, but other than Gregg Easterbrook, who really has to feel quite proud of himself for never letting this go in the face of criticism, there has not been much talk about the repercussions of what everyone says Matt Walsh will be revealing; that the Patriots taped the Rams walkthrough before Superbowl XXVI. Just because the certified journalists won't speculate, doesn't mean we won't.
Assuming Walsh comes out with the info everyone expects, which seems to be almost a foregone conclusion at this point, the Patriots dynasty will always carry some doubt. Always. I don't care if Walsh has no evidence to back it up and, in the end, it can not be proved. Look what happened to Bonds, look what is happening to Clemens. Maybe its not fair, but the minute he comes out with this information is the minute the Patriots dynasty has the asterisk figuratively etched into each of their trophies by the collective conscience.
Will there ever be a greater fall in sports? You're on the cusp immortality, about to complete arguably the greatest accomplishment in the history of sport, and then not only are you rebuffed from that glorious moment, but everything else you've achieved in the years leading up to it becomes tarnished. I felt bad for Michael Vick. I felt like the speculation surrounding Bonds, since there was no hard evidence, was a tad bit unfair. But when Belichick falls, and it looks like he will, there will be no pity. Pete Rose, you may have company.
(If you're looking for a laugh, go and read the comments from I Want to Fight Tom Brady's self-titled post or the related article by Tec over at PSAMP. While the Pats fans there were talking about only the known tapes, its still quite enjoyable to listen to their defensive excuses in light of the present evidence.)
## Monday, February 4, 2008
### Perfect!
All my other fellow DAI contributors added their two cents to last night's stunning Giants victory in Super Bowl XLII, and like them my happiness cannot be understated.
While watching NFL Network after the game, I saw this commercial, which might be the greatest commercial ever. Just thinking how pissed off Patriots fans would be after seeing it put a huge smile on my face.
EDIT: Lord knows I've been wrong about a lot of things this season, but I can't pass up a chance to toot my own horn. In the Week 10 Picks post, I had this to say about the Giants:
"One thing the Giants can do is pressure the quarterback, and they can do it only with rushing four. That’s how good Umenyiora and Strahan are. They just might have the blueprint to beat New England. Get pressure with four to rattle Brady, but still be able to maintain your coverage against Moss, Stallworth, and Welker with seven covering."
End of Post
### They're Sellin Like Hot Cakes...
I didn't have anything to show my passion for Patriot failure... so I improvised. To the beginning of Eli's very successful endorsement career...and for one night... Go Giants.
### Eli wins Super Bowl and MVP...Ben Embarresses Steeler Nation
I echo Pat's thoughts below exactly. What a game, and a perfect ending. Along with the impending doom of the entire Patriot's dynasty crashing around them with this videographer Walsh coming out of nowhere, this day provides probably the greatest winners' high achievable without actually winning anything. The only thing that ruined it was that atrocious Big Ben commercial for American Idol. Did he really have to do that? I know it wasn't exactly Oreo licking...but c'mon.
## Sunday, February 3, 2008
### THERE IS A GOD
4th and 13. If you are playing Madden, then sure, you go for it. Why not. Playing video games involves a certain type of careless arrogance that supports such a decision. Maybe the decision Belichick made to go for it was a wise one - a conversion changes the game, the kicker is inexperienced, and a punt only moves the ball 30 yards or so.
But the reaction was the same from everyone in the room watching the game: that arrogant prick, he really thinks they are good enough to convert a 4th and 13. The freaking Patriots really think they are invincible. They REALLY believe that they are so much better than everyone that they can go for it on 4th and 13.
Belichick walking off the field in disgust...Tom Brady staring in disbelief throughout the whole game...the amazing, inexplicable escape and throw from Eli to Tyree. This might have been the most satisfying non-Steelers game I have ever watched. Change that - it is. I have never rooted this much for a team other than the Steelers, and as many 'burghers said tonight, this game felt so much more important than any Steelers game.
And in many ways, there were numerous Pittsburgh-related story lines. Fourteen years after Junior Seau waved a terrible towel in Three Rivers Stadium, here he was sulking on the sidelines, still one game short of ever winning a title. Tom Brady and The Hooded One are still one Super Bowl short of their historic Pittsburgh counterparts. And Plaxico Burress, criticized to no end, hauled in the winning touchdown that likely produced an unfathomably loud and euphoric roar in Pittsburgh.
A truly unbelievable game. I am not kidding when I say congratulations to the Patriots - you had a ridiculously successful season. And Wes Welker: you played like a meast tonight. But your arrogance, your after-the-play dirtiness, your shroud of mysterious lens-induced advantage...it all caught up with you. And as my dad said in a phone call to me earlier tonight, "Cheaters never prosper."
Lastly, congrats to my old roommate and one of my best friends, Frank Steinberg. The Giants have truly been a maligned, torn, and tumultuous franchise, and you should feel damned good for sticking with them.
Cheers to the end of the Boston strangle-hold on sports.
|
{}
|
# 7.1 Greatest common factor and factor by grouping (Page 3/5)
Page 3 / 5
Factor: $-16z-64$ .
$-8\left(8z+8\right)$
Factor: $-9y-27$ .
$-9\left(y+3\right)$
Factor: $-6{a}^{2}+36a$ .
## Solution
The leading coefficient is negative, so the GCF will be negative.?
Since the leading coefficient is negative, the GCF is negative, −6 a . Rewrite each term using the GCF. Factor the GCF. Check. $-6a\left(a-6\right)$ $-6a\cdot a+\left(-6a\right)\left(-6\right)$ $-6{a}^{2}+36a✓$
Factor: $-4{b}^{2}+16b$ .
$-4b\left(b-4\right)$
Factor: $-7{a}^{2}+21a$ .
$-7a\left(a-3\right)$
Factor: $5q\left(q+7\right)-6\left(q+7\right)$ .
## Solution
The GCF is the binomial $q+7$ .
Factor the GCF, ( q + 7). Check on your own by multiplying.
Factor: $4m\left(m+3\right)-7\left(m+3\right)$ .
$\left(m+3\right)\left(4m-7\right)$
Factor: $8n\left(n-4\right)+5\left(n-4\right)$ .
$\left(n-4\right)\left(8n+5\right)$
## Factor by grouping
When there is no common factor of all the terms of a polynomial, look for a common factor in just some of the terms. When there are four terms, a good way to start is by separating the polynomial into two parts with two terms in each part. Then look for the GCF in each part. If the polynomial can be factored, you will find a common factor emerges from both parts.
(Not all polynomials can be factored. Just like some numbers are prime, some polynomials are prime.)
## How to factor by grouping
Factor: $xy+3y+2x+6$ .
## Solution
Factor: $xy+8y+3x+24$ .
$\left(x+8\right)\left(y+3\right)$
Factor: $ab+7b+8a+56$ .
$\left(a+7\right)\left(b+8\right)$
## Factor by grouping.
1. Group terms with common factors.
2. Factor out the common factor in each group.
3. Factor the common factor from the expression.
4. Check by multiplying the factors.
Factor: ${x}^{2}+3x-2x-6$ .
## Solution
$\begin{array}{cccc}\text{There is no GCF in all four terms.}\hfill & & & \hfill \phantom{\rule{4em}{0ex}}{x}^{2}+3x\phantom{\rule{0.5em}{0ex}}-2x-6\hfill \\ \text{Separate into two parts.}\hfill & & & \hfill \phantom{\rule{4em}{0ex}}\underset{⎵}{{x}^{2}+3x}\phantom{\rule{0.5em}{0ex}}\underset{⎵}{-2x-6}\hfill \\ \\ \\ \begin{array}{c}\text{Factor the GCF from both parts. Be careful}\hfill \\ \text{with the signs when factoring the GCF from}\hfill \\ \text{the last two terms.}\hfill \end{array}\hfill & & & \hfill \phantom{\rule{4em}{0ex}}\begin{array}{c}\hfill x\left(x+3\right)-2\left(x+3\right)\hfill \\ \hfill \left(x+3\right)\left(x-2\right)\hfill \end{array}\hfill \\ \\ \\ \text{Check on your own by multiplying.}\hfill & & & \end{array}$
Factor: ${x}^{2}+2x-5x-10$ .
$\left(x-5\right)\left(x+2\right)$
Factor: ${y}^{2}+4y-7y-28$ .
$\left(y+4\right)\left(y-7\right)$
Access these online resources for additional instruction and practice with greatest common factors (GFCs) and factoring by grouping.
## Key concepts
• Finding the Greatest Common Factor (GCF): To find the GCF of two expressions:
1. Factor each coefficient into primes. Write all variables with exponents in expanded form.
2. List all factors—matching common factors in a column. In each column, circle the common factors.
3. Bring down the common factors that all expressions share.
4. Multiply the factors as in [link] .
• Factor the Greatest Common Factor from a Polynomial: To factor a greatest common factor from a polynomial:
1. Find the GCF of all the terms of the polynomial.
2. Rewrite each term as a product using the GCF.
3. Use the ‘reverse’ Distributive Property to factor the expression.
4. Check by multiplying the factors as in [link] .
• Factor by Grouping: To factor a polynomial with 4 four or more terms
1. Group terms with common factors.
2. Factor out the common factor in each group.
3. Factor the common factor from the expression.
4. Check by multiplying the factors as in [link] .
## Practice makes perfect
Find the Greatest Common Factor of Two or More Expressions
In the following exercises, find the greatest common factor.
8, 18
2
24, 40
72, 162
18
150, 275
10 a , 50
10
5 b , 30
$3x,10{x}^{2}$
$x$
$21{b}^{2},14b$
$8{w}^{2},24{w}^{3}$
$8{w}^{2}$
$30{x}^{2},18{x}^{3}$
$10{p}^{3}q,12p{q}^{2}$
$2pq$
$8{a}^{2}{b}^{3},10a{b}^{2}$
$12{m}^{2}{n}^{3},30{m}^{5}{n}^{3}$
$6{m}^{2}{n}^{3}$
$28{x}^{2}{y}^{4},42{x}^{4}{y}^{4}$
$10{a}^{3},12{a}^{2},14a$
$2a$
$20{y}^{3},28{y}^{2},40y$
$35{x}^{3},10{x}^{4},5{x}^{5}$
$5{x}^{3}$
$27{p}^{2},45{p}^{3},9{p}^{4}$
Factor the Greatest Common Factor from a Polynomial
In the following exercises, factor the greatest common factor from each polynomial.
$4x+20$
$4\left(x+5\right)$
$8y+16$
$6m+9$
$3\left(2m+3\right)$
$14p+35$
$9q+9$
$9\left(q+1\right)$
$7r+7$
$8m-8$
$8\left(m-1\right)$
$4n-4$
$9n-63$
$9\left(n-7\right)$
$45b-18$
$3{x}^{2}+6x-9$
$3\left({x}^{2}+2x-3\right)$
$4{y}^{2}+8y-4$
$8{p}^{2}+4p+2$
$2\left(4{p}^{2}+2p+1\right)$
$10{q}^{2}+14q+20$
$8{y}^{3}+16{y}^{2}$
$8{y}^{2}\left(y+2\right)$
$12{x}^{3}-10x$
$5{x}^{3}-15{x}^{2}+20x$
$5x\left({x}^{2}-3x+4\right)$
$8{m}^{2}-40m+16$
$12x{y}^{2}+18{x}^{2}{y}^{2}-30{y}^{3}$
$6{y}^{2}\left(2x+3{x}^{2}-5y\right)$
$21p{q}^{2}+35{p}^{2}{q}^{2}-28{q}^{3}$
$-2x-4$
$-2\left(x+4\right)$
$-3b+12$
$5x\left(x+1\right)+3\left(x+1\right)$
$\left(x+1\right)\left(5x+3\right)$
$2x\left(x-1\right)+9\left(x-1\right)$
$3b\left(b-2\right)-13\left(b-2\right)$
$\left(b-2\right)\left(3b-13\right)$
$6m\left(m-5\right)-7\left(m-5\right)$
Factor by Grouping
In the following exercises, factor by grouping.
$xy+2y+3x+6$
$\left(y+3\right)\left(x+2\right)$
$mn+4n+6m+24$
$uv-9u+2v-18$
$\left(u+2\right)\left(v-9\right)$
$pq-10p+8q-80$
${b}^{2}+5b-4b-20$
$\left(b-4\right)\left(b+5\right)$
${m}^{2}+6m-12m-72$
${p}^{2}+4p-9p-36$
$\left(p-9\right)\left(p+4\right)$
${x}^{2}+5x-3x-15$
Mixed Practice
In the following exercises, factor.
$-20x-10$
$-10\left(2x+1\right)$
$5{x}^{3}-{x}^{2}+x$
$3{x}^{3}-7{x}^{2}+6x-14$
$\left({x}^{2}+2\right)\left(3x-7\right)$
${x}^{3}+{x}^{2}-x-1$
${x}^{2}+xy+5x+5y$
$\left(x+y\right)\left(x+5\right)$
$5{x}^{3}-3{x}^{2}-5x-3$
## Everyday math
Area of a rectangle The area of a rectangle with length 6 less than the width is given by the expression ${w}^{2}-6w$ , where $w=$ width. Factor the greatest common factor from the polynomial.
$w\left(w-6\right)$
Height of a baseball The height of a baseball t seconds after it is hit is given by the expression $-16{t}^{2}+80t+4$ . Factor the greatest common factor from the polynomial.
## Writing exercises
The greatest common factor of 36 and 60 is 12. Explain what this means.
What is the GCF of ${y}^{4},{y}^{5},\text{and}\phantom{\rule{0.2em}{0ex}}{y}^{10}$ ? Write a general rule that tells you how to find the GCF of ${y}^{a},{y}^{b},\text{and}{y}^{c}$ .
## Self check
After completing the exercises, use this checklist to evaluate your mastery of the objectives of this section.
If most of your checks were:
…confidently. Congratulations! You have achieved your goals in this section! Reflect on the study skills you used so that you can continue to use them. What did you do to become confident of your ability to do these things? Be specific!
…with some help. This must be addressed quickly as topics you do not master become potholes in your road to success. Math is sequential—every topic builds upon previous work. It is important to make sure you have a strong foundation before you move on. Who can you ask for help? Your fellow classmates and instructor are good resources. Is there a place on campus where math tutors are available? Can your study skills be improved?
…no - I don’t get it! This is critical and you must not ignore it. You need to get help immediately or you will quickly be overwhelmed. See your instructor as soon as possible to discuss your situation. Together you can come up with a plan to get you the help you need.
Priam has pennies and dimes in a cup holder in his car. The total value of the coins is $4.21 . The number of dimes is three less than four times the number of pennies. How many pennies and how many dimes are in the cup? Cecilia Reply Arnold invested$64,000 some at 5.5% interest and the rest at 9% interest how much did he invest at each rate if he received $4500 in interest in one year Heidi Reply List five positive thoughts you can say to yourself that will help youapproachwordproblemswith a positive attitude. You may want to copy them on a sheet of paper and put it in the front of your notebook, where you can read them often. Elbert Reply Avery and Caden have saved$27,000 towards a down payment on a house. They want to keep some of the money in a bank account that pays 2.4% annual interest and the rest in a stock fund that pays 7.2% annual interest. How much should they put into each account so that they earn 6% interest per year?
324.00
Irene
1.2% of 27.000
Irene
i did 2.4%-7.2% i got 1.2%
Irene
I have 6% of 27000 = 1620 so we need to solve 2.4x +7.2y =1620
Catherine
I think Catherine is on the right track. Solve for x and y.
Scott
next bit : x=(1620-7.2y)/2.4 y=(1620-2.4x)/7.2 I think we can then put the expression on the right hand side of the "x=" into the second equation. 2.4x in the second equation can be rewritten as 2.4(rhs of first equation) I write this out tidy and get back to you...
Catherine
Darrin is hanging 200 feet of Christmas garland on the three sides of fencing that enclose his rectangular front yard. The length is five feet less than five times the width. Find the length and width of the fencing.
Mario invested $475 in$45 and $25 stock shares. The number of$25 shares was five less than three times the number of $45 shares. How many of each type of share did he buy? Jawad Reply let # of$25 shares be (x) and # of $45 shares be (y) we start with$25x + $45y=475, right? we are told the number of$25 shares is 3y-5) so plug in this for x. $25(3y-5)+$45y=$475 75y-125+45y=475 75y+45y=600 120y=600 y=5 so if #$25 shares is (3y-5) plug in y.
Joshua
will every polynomial have finite number of multiples?
a=# of 10's. b=# of 20's; a+b=54; 10a + 20b=$910; a=54 -b; 10(54-b) + 20b=$910; 540-10b+20b=$910; 540+10b=$910; 10b=910-540; 10b=370; b=37; so there are 37 20's and since a+b=54, a+37=54; a=54-37=17; a=17, so 17 10's. So lets check. $740+$170=$910. David Reply . A cashier has 54 bills, all of which are$10 or $20 bills. The total value of the money is$910. How many of each type of bill does the cashier have?
whats the coefficient of 17x
the solution says it 14 but how i thought it would be 17 im i right or wrong is the exercise wrong
Dwayne
17
Melissa
wow the exercise told me 17x solution is 14x lmao
Dwayne
thank you
Dwayne
A private jet can fly 1,210 miles against a 25 mph headwind in the same amount of time it can fly 1,694 miles with a 25 mph tailwind. Find the speed of the jet
Washing his dad’s car alone, eight-year-old Levi takes 2.5 hours. If his dad helps him, then it takes 1 hour. How long does it take the Levi’s dad to wash the car by himself?
Ethan and Leo start riding their bikes at the opposite ends of a 65-mile bike path. After Ethan has ridden 1.5 hours and Leo has ridden 2 hours, they meet on the path. Ethan’s speed is 6 miles per hour faster than Leo’s speed. Find the speed of the two bikers.
Nathan walked on an asphalt pathway for 12 miles. He walked the 12 miles back to his car on a gravel road through the forest. On the asphalt he walked 2 miles per hour faster than on the gravel. The walk on the gravel took one hour longer than the walk on the asphalt. How fast did he walk on the gravel?
Mckenzie
Nancy took a 3 hour drive. She went 50 miles before she got caught in a storm. Then she drove 68 miles at 9 mph less than she had driven when the weather was good. What was her speed driving in the storm?
Mr Hernaez runs his car at a regular speed of 50 kph and Mr Ranola at 36 kph. They started at the same place at 5:30 am and took opposite directions. At what time were they 129 km apart?
90 minutes
|
{}
|
# 2D Visual-Inertial Extended Kalman Filter
I am trying to implement an Extended Kalman filtering for combining IMU data and visual odometry in a simple 2D case where I have a robot that that can only accelerate in its local forward direction which is dictated by its current heading (theta). I am restricting IMU readings to a single acceleration reading (a) and a single angular velocity reading (omega). Visual odometry will only provide a single angular displacement as well as displacement in the u and v directions (x and y relative to the robot). The equations for the derivation of my state transition matrix are
$$x_{k+1} = x_k + \dot{x_k}\Delta T + 0.5a \cdot cos(\theta) \Delta T^2$$ $$y_{k+1} = y_k + \dot{x_k}\Delta T + 0.5a \cdot sin(\theta) \Delta T^2$$ $$\theta_{k+1} = \theta_k + \dot{\theta} \Delta T$$ $$\dot{x_{k+1}} = \dot{x_{k}} + a \cdot cos(\theta) \Delta$$ $$\dot{y_{k+1}} = \dot{y_{k}} + a \cdot sin(\theta) \Delta$$ $$\dot{\theta_{k+1}} = \dot{\theta_{k}}$$ $$\dot{\dot{x_{k+1}}} = \dot{\dot{x_{k}}}$$ $$\dot{\dot{y_{k+1}}} = \dot{\dot{y_{k}}}$$
and the equations that I use to obtain the measurements are
$$\Delta x = \dot{x} \Delta T + 0.5 \dot{\dot{x}} \Delta T^2$$ $$\Delta y = \dot{y} \Delta T + 0.5 \dot{\dot{y}} \Delta T^2$$ $$\Delta u = \Delta x \cdot cos(\theta) + \Delta y \cdot sin(\theta)$$ $$\Delta v = -\Delta x \cdot sin(\theta) + \Delta y \cdot cos(\theta)$$ $$\Delta \theta = \dot{\theta} \cdot \Delta T$$ $$a = \dot{\dot{x}} \cdot cos(\theta) + \dot{\dot{y}} \cdot sin(\theta)$$ $$\omega = \dot{\theta}$$
To calculate the Jacobian of the measurement function I used the following MATLAB code
deltaX = xDot*t + 0.5*xDotDot*(t^2);
deltaY = yDot*t + 0.5*yDotDot*(t^2);
deltaU = deltaX * cos(theta) + deltaY * sin(theta);
deltaV = -deltaX * sin(theta) + deltaY * cos(theta);
accel = xDotDot*cos(theta) + yDotDot*sin(theta);
jacobian([accel, omega, deltaU, deltaV, deltaTheta], [x, y, theta, xDot, yDot, thetaDot, xDotDot, yDotDot])
To test my implementation I am creating test data from random acceleration and angular velocity values. I am plotting the trajectory calculated from this as well as from the trajectory calculated directly using the odometry values and the IMU values. I am then comparing this with the odometry estimated by my Kalman filter.
The Kalman filter has been implemented without any control values and is combining all the sensor reading into a single measurement vector.
To test if the filter has any hope of working, I first tested it without any added measurement noise but the outcome is fairly crazy as can be seen in
where it can also be seen that using both sensor readings on their own without the filter produces the exact trajectory. This simulation, including my Kalman filter was implemented with the following Python code
import numpy as np
import matplotlib.pyplot as plt
from random import *
# Sampling period
deltaT = 1
# Array to store the true trajectory
xArr = [0]
yArr = [0]
thetaArr = [0]
# Array to store IMU measurement
imuA = []
imuOmega = []
# Current state variables
x = 0
y = 0
theta = 0
x_dot = 0
y_dot = 0
# Arrays to store odometry measurements
odoU = []
odoV = []
odoTheta = []
# Setup simulated data
for i in range(100):
# Calculate a random forward (u-axis) acceleration
a = uniform(-10, 10)
imuA.append(a)
# Calculate the change in global coordinates
deltaX = (x_dot * deltaT) + (0.5 * a * np.cos(theta) * deltaT**2)
deltaY = (y_dot * deltaT) + (0.5 * a * np.sin(theta) * deltaT**2)
# Update the velocity at the end of the time step
x_dot += a * np.cos(theta) * deltaT
y_dot += a * np.sin(theta) * deltaT
# Update the current coordinates
x += deltaX
y += deltaY
# Store the coordinates for plotting
xArr.append(x)
yArr.append(y)
# Calculate local coordinate odometry
odoU.append(deltaX * np.cos(theta) + deltaY * np.sin(theta))
odoV.append(-deltaX * np.sin(theta) + deltaY * np.cos(theta))
# Calculate a random new angular velocity
theta_dot = uniform(-0.2, 0.2)
imuOmega.append(theta_dot)
# Calculate the change in angular displacement
deltaTheta = theta_dot * deltaT
odoTheta.append(deltaTheta)
# Update the angular displacement
theta += theta_dot * deltaT
thetaArr.append(theta)
# Calculate the trajectory from just the odometery
xArr2 = []
yArr2 = []
x = 0
y = 0
theta = 0
for i in range(100):
deltaU = odoU[i]
deltaV = odoV[i]
deltaTheta = odoTheta[i]
x += deltaU * np.cos(theta) - deltaV * np.sin(theta)
y += deltaU * np.sin(theta) + deltaV * np.cos(theta)
theta += deltaTheta
xArr2.append(x)
yArr2.append(y)
# Calculate the trajectory from just the IMU readings
xArr3 = []
yArr3 = []
x = 0
y = 0
theta = 0
x_dot = 0
y_dot = 0
theta_dot = 0
for i in range(100):
# Calculate the change in global coordinates
a = imuA[i]
deltaX = (x_dot * deltaT) + (0.5 * a * np.cos(theta) * deltaT**2)
deltaY = (y_dot * deltaT) + (0.5 * a * np.sin(theta) * deltaT**2)
# Update the velocity at the end of the time step
x_dot += a * np.cos(theta) * deltaT
y_dot += a * np.sin(theta) * deltaT
# Update the current coordinates
x += deltaX
y += deltaY
# Store the coordinates for plotting
xArr3.append(x)
yArr3.append(y)
# Calculate the change in angular displacement
theta_dot = imuOmega[i]
theta += theta_dot * deltaT
# Estimate the true trajectory with a Kalman filter
# State matrix
X_k_min = np.array([
[0], # x
[0], # y
[0], # theta
[0], # x_dot
[0], # y_dot
[0], # theta_dot
[0], # x_dot_dot
[0] # y_dot_dot
])
# State covariance matrix
P_k_min = np.zeros((8, 8))
# State transition matrix
A = np.array([
[1, 0, 0, deltaT, 0, 0, 0.5*deltaT**2, 0],
[0, 1, 0, 0, deltaT, 0, 0, 0.5*deltaT**2],
[0, 0, 1, 0, 0, deltaT, 0, 0],
[0, 0, 0, 1, 0, 0, deltaT, 0],
[0, 0, 0, 0, 1, 0, 0, deltaT],
[0, 0, 0, 0, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 1]
])
# Process covariance matrix
Q = np.eye(8)
# Measurement vector
## 0: a (forward acceleration)
## 1: omega (angular velocity)
## 2: deltaU (local x displacement)
## 3: deltaV (local y displacement)
## 4: deltaTheta (local angular displacement)
# Measurement covariance matrix
R = np.eye(5)
# Function to calculate the measurement function Jacobian
def CalculateH_k(X, t):
theta = X[2, 0]
xDot = X[3, 0]
yDot = X[4, 0]
xDotDot = X[6, 0]
yDotDot = X[7, 0]
return np.array([
[0, 0, yDotDot * np.cos(theta) - xDotDot * np.sin(theta), 0, 0, 0, np.cos(theta), np.sin(theta)],
[0, 0, 0, 0, 0, 1, 0, 0],
[0, 0, np.cos(theta) * ((yDotDot * t**2) / 2 + yDot * t) - np.sin(theta) * (
(xDotDot * t**2) / 2 + xDot * t), t * np.cos(theta), t * np.sin(theta), 0, (t**2 * np.cos(theta)) / 2, (
t**2 * np.sin(theta)) / 2],
[0, 0, - np.cos(theta) * ((xDotDot * t**2) / 2 + xDot * t) - np.sin(theta) * (
(yDotDot * t**2) / 2 + yDot * t), -t * np.sin(theta), t * np.cos(theta), 0, -(t**2 * np.sin(theta)) / 2, (
t**2 * np.cos(theta)) / 2],
[0, 0, 0, 0, 0, t, 0, 0]
])
# Measurement function
def Measure(X):
theta = X[2, 0]
xDot = X[3, 0]
yDot = X[4, 0]
xDotDot = X[6, 0]
yDotDot = X[7, 0]
deltaX = xDot * deltaT + 0.5 * xDotDot * (deltaT**2)
deltaY = yDot * deltaT + 0.5 * yDotDot * (deltaT**2)
deltaU = deltaX * np.cos(theta) + deltaY * np.sin(theta)
deltaV = -deltaX * np.sin(theta) + deltaY * np.cos(theta)
accel = xDotDot * np.cos(theta) + yDotDot * np.sin(theta)
return np.array([
[accel],
[omega],
[deltaU],
[deltaV],
[deltaTheta]
])
xArr4 = []
yArr4 = []
for i in range(100):
a = imuA[i]
omega = imuOmega[i]
# Setup the observation matrix
Z_k = np.array([
[imuA[i]],
[imuOmega[i]],
[odoU[i]],
[odoV[i]],
[odoTheta[i]]
])
# Calculate the estimated new state
X_k = A.dot(X_k_min)
# Calculate the estimated new state covariance matrix
P_k = A.dot(P_k_min).dot(np.transpose(A)) + Q
# Find the measurement Jacobian at the current time step
H_k = CalculateH_k(X_k_min, deltaT)
# Calculate the Kalman gain
G_k = P_k.dot(np.transpose(H_k)).dot(np.linalg.inv(H_k.dot(P_k).dot(np.transpose(H_k)) + R))
# Calculate the improved current state estimate
X_k = X_k + G_k.dot(Z_k - Measure(X_k_min))
# Calculate the improved current state covariance
P_k = (np.eye(8) - G_k.dot(H_k)).dot(P_k)
xArr4.append(X_k[0, 0])
yArr4.append(X_k[1, 0])
# Make the current state the previous
X_k_min = X_k
P_k_min = P_k
plt.plot(xArr, yArr, linewidth=3)
plt.plot(xArr2, yArr2)
plt.plot(xArr3, yArr3)
plt.plot(xArr4, yArr4)
plt.legend(['Ground truth', 'VO', 'IMU', 'Filtered'])
plt.grid()
plt.show()
I have double checked everything and just can't figure out what I am doing wrong even though it must be something obvious. Any ideas?
• Have you though about using the acceleration/gryo information in the propagation step instead of the measurements step? It's easier that way because you don't have to devise a process model for how the acceleration changes. – holmeski Jul 4 at 19:04
• @holmeski thanks I actually tried that first but it was more complex and also more erratic which is why I tried redoing it in a simpler way like this. I will look at that again – Gerharddc Jul 5 at 15:28
Your noise term for the KF needs to reflect how you expect the true propagation of the state will differ from your model of the propagation. For example, the acceleration uncertainty is 1 while your true uncertainty is drawn from a uniform distribution of [-10,10].
I altered your code so the KF is now using the IMU information within the propagation step. It still needs to incorporate the IMU uncertainty correctly within the process noise. I also simplified the measurements to be the position and orientation of the state. You should probably rewrite the numerical Jacobians that I've used in favor of analytic Jacobians.
import numpy as np
import matplotlib.pyplot as plt
from random import *
# The state vector is
# pos_x, pos_y, theta, vel_x, vel_y
def getVehRate(state, imu):
# position rate is equal to velocity
dxdy = state[3:5]
# theta rate is euqal to the gyro measurement
dtheta = imu[1]
# velocity rate is equal to the accel broken into the xy basis
dvelx = imu[0] * np.cos(state[2])
dvely = imu[0] * np.sin(state[2])
dstate = 0. * state
dstate[0:2] = dxdy
dstate[2] = dtheta
dstate[3] = dvelx
dstate[4] = dvely
return dstate
def rk4(state, imu, func, dt):
# runs a rk4 numerical integration
k1 = dt * func(state, imu)
k2 = dt * func(state + .5*k1, imu)
k3 = dt * func(state + .5*k2, imu)
k4 = dt * func(state + k3, imu)
return state + (1./6.)*(k1 + 2.*k2 + 2.*k3 + k4)
def numericalDifference(x, func, data, ep = .001):
# calculates the numeical jacobian
y = func(x, data)
A = np.zeros([y.shape[0], x.shape[0]])
for i in range(x.shape[0]):
x[i] += ep
y_i = func(x, data)
A[i] = (y_i - y)/ep
x[i] -= ep
return A
def numericalJacobianOfStatePropagationInterface(state, data):
# data contains both the imu and dt, it needs to be broken up for the rk4
return rk4(state, data[0], getVehRate, data[1])
# Sampling period
dt = .1
t_steps = 500
state = np.zeros(5)
state_hist = np.zeros([t_steps, 5])
imu_hist = np.zeros([t_steps, 2])
# Setup simulated data
for i in range(t_steps):
# generate a rate to propagate states with
accel = uniform(-10, 10)
theta_dot = uniform(-0.2, 0.2)
imu = np.array([accel, theta_dot])
# propagating the state with the IMU measurement
state = rk4(state, imu, getVehRate, dt)
# saving off the current state
state_hist[i] = state *1.
imu_hist[i] = imu*1.
# kf stuff
state = np.zeros([5])
cov = np.eye(5) * .001
kf_state_hist = np.zeros([t_steps, 5])
kf_cov_hist = np.zeros([t_steps, 5,5])
kf_meas_hist = np.zeros([t_steps, 3])
kf_imu_hist = np.zeros([t_steps, 2])
# imu accel and gyro noise
accel_cov = .0001
gyro_cov = .0001
Q_imu = np.array([[.1, 0],[0, .01]])
r_meas = .001
# running the data through the KF with noised measurements
for i in range(t_steps):
# propagating the state
imu_meas = imu_hist[i]
imu_meas[0] += np.random.randn(1) * accel_cov**.5
imu_meas[1] += np.random.randn(1) * gyro_cov**.5
A = numericalDifference(state, numericalJacobianOfStatePropagationInterface, [imu_meas, dt])
cov = A.dot(cov.dot(A.T))
###
# TODO : calculate how the accel and gyro noise turn into the process noise for the system
###
# A_state_wrt_imu = jacobianOfPropagationWrtIMU
# Q = A_state_wrt_imu * Q_imu * A_state_wrt_imu.T
# cov += Q
# sloppy placeholder
cov += np.eye(5) * .1
state = rk4(state, imu_meas, getVehRate, dt)
# measurement update
zt = state[:3] + np.random.randn(1) *r_meas**.5
zt_hat = state[:3]
H = np.zeros([3,5])
H[:3,:3] = np.eye(3)
S = np.linalg.inv(H.dot(cov.dot(H.T)) + r_meas * np.eye(3))
K = cov.dot(H.T).dot( S )
#state = state + K.dot(zt - zt_hat)
cov = (np.eye(5) - K.dot(H)).dot(cov)
kf_state_hist[i] = state
kf_cov_hist[i] = cov
kf_meas_hist[i] = zt_hat
kf_imu_hist[i] = imu_meas
plt.plot(state_hist[:,0], state_hist[:,1], linewidth=3)
plt.plot(kf_state_hist[:,0], kf_state_hist[:,1], linewidth=3)
plt.legend(['Ground truth', 'kf est'])
plt.grid()
plt.show()
• Thanks, I realised as much but I was thinking that it having the wrong values would just result in decreased performance and not in it going completely crazy. I will try getting better values – Gerharddc Jul 5 at 15:30
|
{}
|
# Sökning: "percolation"
Visar resultat 1 - 5 av 82 avhandlingar innehållade ordet percolation.
1. ## 1. Percolation: Inference and Applications in Hydrology
Sammanfattning : Percolation theory is a branch of probability theory describing connectedness in a stochastic network. The connectedness of a percolation process is governed by a few, typically one or two, parameters. LÄS MER
2. ## 2. Continuum Percolation in non-Euclidean Spaces
Detta är en avhandling från Göteborg : Chalmers University of Technology
Sammanfattning : In this thesis we first consider the Poisson Boolean model ofcontinuum percolation in $n$-dimensional hyperbolic space ${\mathbb H}^n$. Let $R$ be the radius of the balls in the model, and$\lambda$ the intensity of the underlying Poisson process. LÄS MER
3. ## 3. Accessibility percolation and first-passage percolation on the hypercube
Detta är en avhandling från Göteborg : Chalmers University of Technology
Sammanfattning : In this thesis, we consider two percolation models on the n-dimensional binary hypercube, known as accessibility percolation and first-passage percolation. First-passage percolation randomly assigns non-negative weights, called passage times, to the edges of a graph and considers the minimal total weight of a path between given end-points. LÄS MER
4. ## 4. Selected Topics in Continuum Percolation Phase Transitions, Cover Times and Random Fractals
Detta är en avhandling från Uppsala : Department of Mathematics
Sammanfattning : This thesis consists of an introduction and three research papers. The subject is probability theory and in particular concerns the topics of percolation, cover times and random fractals.Paper I deals with the Poisson Boolean model in locally compact Polish metric spaces. LÄS MER
5. ## 5. Asymptotics and dynamics in first-passage and continuum percolation
Detta är en avhandling från Göteborg : University of Gothenburg
Sammanfattning : This thesis combines the study of asymptotic properties of percolation processes with various dynamical concepts. First-passage percolation is a model for the spatial propagation of a fluid on a discrete structure; the Shape Theorem describes its almost sure convergence towards an asymptotic shape, when considered on the square (or cubic) lattice. LÄS MER
|
{}
|
# frnsys/port
lightweight blogging platform
Python HTML CSS
Switch branches/tags
Nothing to show
Fetching latest commit…
Cannot retrieve the latest commit at this time.
Failed to load latest commit information. port .gitignore MANIFEST.in README.md setup.py
# Port
### A lightweight blogging platform
port makes it easy to run a simple markdown-driven blog/static site generator. There is no admin UI or database. (no need!)
You use the command-line interface to create a new site, which makes a directory where you can store different folders (folders are treated as categories), and within those folders you write markdown files.
Installation is easy:
$pip install port (port requires Python 3) ## Example An example site is my own blog: space and times ## Features • Supports GitHub-Flavored Markdown • Supports MathJax syntax (in the default theme) • Includes RSS feeds for each category and all posts ## Creating a site To create a site, use the port command-line tool: $ port create my_new_site
You will be walked through the basic configuration.
A folder will created at whatever directory you specified during configuration.
In there you will notice three folders:
• .build - this is where your compiled posts are stored. This folder is destroyed every build, so don't store anything important there.
• assets - this is where your static files will be served from. So for example you could place my_image.png in this folder and then in your posts you could refer to /assets/my_image.png.
• default_category - this is the default category folder. You can rename this or replace it.
• pages - this is where you can put non-post/non-category pages, also as markdown. For example, an "About" page.
port treats any folder in this directory (except for the .build, assets, and pages folders) as a "category".
Within each folder/category, including the pages folder, you can write posts as markdown documents (with the .md extension).
When you've added or edited documents, you need to re-build the site:
$port build my_new_site ## Writing a post When writing a post, you can include any arbitrary metadata using YAML front matter, i.e. by including a section demarcated by --- at the very top of your file. For example: --- published_at: 6/17/2015 20:45 draft: true --- This will be parsed and included as part of the post object passed to your templates (see below). You should at least include the published_at data; without it port will default to using the last build time as the published at value. This can mess up your post ordering. Other than that, port supports GitHub-Flavored markdown, so go wild! Pages are written exactly the same as posts - in Markdown and with optional YAML front matter as well. ## Running a site To preview the built site you can just change into the .build directory and run Python's built-in HTTP server: python3 -m http.server The main endpoints are: • / - your index page :) • /<category name> - a category index page • /<category name>/<post slug> - a single post • /rss - the rss feed for all your posts (20 most recent published) • /rss/<category name> - the rss feed for one category (20 most recent published) • /<page> - a non-post/non-category page ## Configuration The new site process will walk you through the basic configuration, which creates a yaml file in the ~/.port folder. You can edit this yaml file to update your config, or add in arbitrary data which gets passed to your templates as a dictionary called site_data. ## Miscellany • Draft posts are not listed in the category and index pages (and RSS feeds) but can be accessed by their direct url • Posts are ordered by reverse chron • Arbitrary category metadata can be added for each category by creating a meta.yaml file in the category's directory. You can override the template used for a category here and/or the posts per page value, e.g.: template: a_special_template.html per_page: 20 • Pages can similarly have a different template specified in their YAML front matter. ## Themes port has support for theming - custom themes are super easy to write using Jinja. #### Templates New themes go into ~/.port/themes/. Each theme must, at minimum, include the following templates: • category.html - used to render category pages • index.html - used to render the home page • single.html - used to render single post pages • page.html - used to render non-post/non-category pages • 404.html - 404 error page #### Available data Within each of these templates, you have access to the following variables: • site_data - an object consisting of the data stored in your site's yaml config file and additional metadata, such as categories. Note that the attribute names corresponding to keys in your site's config are lowercase (e.g. if you have SITE_NAME in your config, it is accessed at site_data.site_name) • post data: single.html includes a post object, category.html and index.html include a posts list • pagination data (category.html and index.html): you get a page object which includes page.current, page.next (the next page's url, None if there is no next page), and page.prev post objects at minimum consist of: • title - the raw markdown title, extracted from the first h1 tag • title_html - the compiled title • html - the compiled markdown, not including the title and metadata • published_at - a datetime object • category - the post's category • slug the post's slug • draft - a bool of whether or not the post is a draft Whatever else you include as metadata in your files will also show up as attributes on the post object. page objects are similar, at minimum consisting of: • title - the raw markdown title, extracted from the first h1 tag • title_html - the compiled title • html - the compiled markdown, not including the title and metadata • slug the post's slug • draft - a bool of whether or not the post is a draft #### JS & CSS The theme's JS and CSS folders are available at /js and /css respectively. #### Example See the default theme for an example. #### Syncing to a remote folder I work on my posts on my local machine, and when they are ready, I build them and then sync the local folder to my remote server which hosts the live site. There's a convenience command for doing this: $ port sync <site name> <remote>
For example:
\$ port sync my_new_site user@mysite.com:~/my_site
## Pro tips
• If you're using vim, you can configure a keybinding to drop in the current datetime for you, which is useful for setting the published_at value in a post's yaml frontmatter, e.g.:
nnoremap , "=strftime("%m.%d.%Y %H:%M")P
• Example nginx conf:
server { listen 80; server_name my.site.co; root /srv/my_new_site; error_page 404 /404.html; }
|
{}
|
## Better error/warning messages for misplaced end...
by:
Most experienced Maple users have encountered situations where the "do" and "end" statements are not in the same execution group. For example:
> for n from 1 to 10 do
> n, n^2, 1/n;
>
Warning, premature end of input, use <Shift> + <Enter> to avoid this message.
> end do; # in a separate execution group
Error, reserved word end unexpected
I have no objection to the issuing of a warning message when the "do" is executed without a matching "end". My request is that the unmatched "end" (particularly when it appears as the initial (non-empty) string in an execution group) should receive a warning instead of an error.
## larger moduli in modp1 and modp2
by:
I would like to see Maple's modp1 and modp2 support 25-bit moduli using the float[8] datatype, similar to the LinearAlgebra:-Modular package.
## Auto Completion
by:
I think an excellent feature for Maple would be an auto-complete function similar to what they have in software integrated development environments like MS Visual Studio, i.e., when you begin typing a command, a small window pops up and displays the possible command completions, parameter list etc. This would be easier than typing Command-Shift-Space to complete commands, and if the parameter list and a small instruction could be included in the pop-up window, I think this would save loads of time, as it seems I spend half my time in the help browser looking up function calling sequences that I have forgotten.
## Linear algebra modulo n
by:
Maple needs better user-level facilities for doing linear algebra over finite fields, particularly the integers mod n. For example there is no good way to solve a linear system Ax=B when B is a matrix. Obviously the LinearAlgebra:-Modular package is very good at what it does. Why can't there be some nice non-programmer routines which call it ? One alternative to using the mod operator is to have all the commands in the main LinearAlgebra package accept an optional last argument for the characteristic. For example: LinearAlgebra:-GaussianElimination(A, n); Then in the GaussianElimination command you could do something like:
## Better support for inequalities (symbolic and numeric...
by:
A colleague has been frustrated by the apparent limitations to Maple's abilities to "solve" inequalities. This does appear to be something that should - and could - be improved with a little effort. The typical problem under consideration is the epsilon-delta definition of limit. Ideally, it would be nice to execute a command such as
> solve( abs( f-L ) < epsilon, x );
## Editor Bugs
by:
I have this recurring editor issue where the cursor advances to the next line after entering an assignment operator (:=) or a type definition operator (::). I'm guessing the Maple code implementation is attempting some sort of code formatting and wants to advance the cursor one space beyond the operator. The only problem is that the space doesn't exist, so the cursor is ultimately advanced to the existing token in the buffer stream which in my case is a line feed; i.e., the next line.
The editor appears to be in 'Insert' mode since I can't overwrite characters while the cursor is positioned in the middle of a string.
## Multiple algebraic extensions mod p
by:
Maple needs to be able to do the following:
RootOf(z^2-3)/RootOf(z^2-2) mod 5 -> 3
Normal(RootOf(z^2-3)/RootOf(z^2-2)) mod 5;
Error, (in mod/GetAlgExt) only the single algebraic extension case is implemented
I would also like to be able to factor and generally compute with polynomials whose coefficients involve multiple algebraic extensions mod p. These are basic fields, and it is somewhat sad that Maple has no way of computing in them.
## MathML display of numeric products and limits
by:
As I have used MathML to prepare typeset mathematics in maplets, I noticed that there can be some confusion when there is no explicit multiplication operator. The problem appears to be most severe when two numeric quantities appear next to one another, e.g.,
> t := sin( 5*10^x );
I use the following maplet to see how this looks in MathML:
> with(Maplets[Elements]):
> maplet := Maplet([
> [MathMLViewer('value' = MathML[Export](t))],
> [Button("OK", Shutdown())]
> ]):
> Maplets[Display](maplet);
I note that the spacing appears to be a little better in Maple 10 than in Maple 9.5. But I would still prefer a centered dot (\cdot in LaTeX) or an x (\times). Note that I am not asking for a multiplication symbol to appear for all products, just ones where it can be difficult to determine the actual terms of the product.
## Free large blocks of memory after garbage collect
by:
I would like the Maple kernel to free sufficiently large (128 MB?) blocks of memory on garbage collect, not just on kernel restart. Sometimes I need to work with large objects temporarily, and then the overall performance of my machine suffers afterwards because of Maple's increased memory usage. I continue working afterwards, so I don't want to restart the kernel to free memory and eliminate swapping.
## resize matrix browser window
by:
I'm currently working with a lot of large matrices in Maple (5000x5000), and I would really like a way to resize the Matrix Browser window, possibly filling the entire screen. I understand that this is probably one of those really irritating things that is difficult to add after the fact, but it would be extremely useful to me.
## Modify type/integer[a..b] so that a can be -infinity,...
by:
It would be convenient if the subscripted version of type/integer could handle infinity and -infinity. Then, to specify an integer greater than, say, 1, we could do type(i, integer[2..infinity]). Currently I handle this as type(i, And(integer,Range(1,infinity))) which is not as nice, particularly because it isn't clear that 1 is excluded. The drawback of doing this is that it implies that infinity is allowed. However, because infinity is not an integer, it seems reasonable that it would return fals
## magnification
by:
Just yesterday this came up in a newsgroup. For me the default size 100% seems too small, but the 150% is overkill. Having a 120% as a standard choice would be useful.
## Welcome to Product Suggestions
by:
This forum is where you can suggest improvements for Maplesoft products. It will be monitored by Maplesoft staff who will record your input and in many cases, post appropriate responses. Please keep in mind that Maple is used be a wide variety of users with different needs. All comments will be read, taken seriously, and considered when making product decisions. This forum is moderated by Tom 4.
First 20 21 22 Page 22 of 22
|
{}
|
# Lots of presents!
Level pending
Santa is going on two trips to deliver presents.
For the first trip, he needs to separate $$2013$$ distinct presents into $$2$$ bags, where the bags are the same as each other and each one has at least $$1$$ present. Let the number of ways he can do this be $$c_1$$.
For the second trip, he needs to separate $$2012$$ distinct presents into $$2$$ bags, where the bags are the same as each other and each one has at least $$1$$ present. Let the number of ways he can do this be $$c_2$$.
What is $$2c_2-c_1+4$$?
×
|
{}
|
# Changeset 1022 for src/ASM/CPP2011/cpp-2011.tex
Ignore:
Timestamp:
Jun 21, 2011, 12:18:28 PM (9 years ago)
Message:
...
File:
1 edited
### Legend:
Unmodified
r1021 %By composition, we then have that the whole assembly process is semantics preserving. %This is a tempting approach to the proof, but ultimately the wrong approach. %In particular, it is important that we track how the program counter indexing into the assembly program, and the machine's program counter evolve, so that we can relate them. %Expanding pseudoinstructions requires that the machine's program counter be incremented by $n$ steps, for $1 \leq n$, for every increment of the assembly program's program counter. %Keeping track of the correspondence between the two program counters quickly becomes unfeasible using a compositional approach, and hence the proof must be monolithic. This is a tempting approach to the proof, but ultimately the wrong approach. In particular, to expand a pseudo instruction we need to know the address at which the expanded instructions will be located, for instance to determine if a short jump is possible. That address is a function of the \emph{object code} generated for the pseudo-instructions already visited. Thus we need to assemble each pseduto instruction down to object code before moving to the next one and this must be eventually reflected on the proof of correctness. Therefore we will have lemmas for the \texttt{assembly1} function and for the composition of \texttt{expand\_pseudo\_instruction} and \texttt{assembly1}, but not for \texttt{expand\_pseudo\_instruction} alone. % ---------------------------------------------------------------------------- %
|
{}
|
SERVING THE QUANTITATIVE FINANCE COMMUNITY
Cuchulainn
Posts: 59013
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:
Re: If you are bored with Deep Networks
As I said, all this stuff is Euler method (duh).
Here is the first step to solve an ODE (is AD the way to do it here??)
https://arxiv.org/pdf/1806.07366.pdf
Cuchulainn
Posts: 59013
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:
Re: If you are bored with Deep Networks
It's a mystery.
Just to be clear: I was joking. The point is that people choose a heuristic learning rate and some value ranges "usually work". But everyone knows that this is not how things should be done, and the theory of optimisation in statistical learning is an active research field.
Well, I thought it was a joke. And quite a good one. Isn’t much of numerical analysis and optimization like that? Rather arbitrary. One of the many reasons I don’t like all these subjects.
It only looks that way. You can prove convergence of FEM schemes in weighted Sobolev spaces but 1) it takes longer 2) someone has to pay..
AI numerical methods for AI are somewhat outdated to a lesser or greater extent.
Cuchulainn
Posts: 59013
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:
Re: If you are bored with Deep Networks
Part of the documentation of the routine LBFGS by J. Nocedal (one of the giants in the field of nonlinear numerical optimization):
GTOL is a DOUBLE PRECISION variable with default value 0.9, which
C controls the accuracy of the line search routine MCSRCH. If the
C function and gradient evaluations are inexpensive with respect
C to the cost of the iteration (which is sometimes the case when
C solving very large problems) it may be advantageous to set GTOL
C to a small value. A typical small value is 0.1. Restriction:
C GTOL should be greater than 1.D-04.
Why 0.9? Why 0.1? Why greater than 1e-4? You can shoot off the same questions at LBFGS as the ones Cuch throws at SGD. SGD doesn't have an automated way of setting the learning rate, so it's "dumb". Methods like LBFGS contain an algorithm to automatically set the learning rate, but this algorithms in 99 cases out of 100 contain some hyper-parameters which you either adjust to the every problem or set to some "typical" value. If you're lucky, there's a theorem which tells you the bounds in which you have to fit. But because this is hidden somewhere in the bowels of ancient Fortran library, people naively think it "just works". Just like SGD, it works until it doesn't. There's no magical way around the problem that if you're optimising a function based on point estimates, you have a learning problem and the no-free lunch theorem comes down on you like a ton of bricks.
If you study the book by Nocedal and Wright (BTW I have) you will see that there are more nuanced approaches as well.
ISayMoo
Topic Author
Posts: 1718
Joined: September 30th, 2015, 8:30 pm
Re: If you are bored with Deep Networks
If you study the book by Nocedal and Wright (BTW I have) you will see that there are more nuanced approaches as well.
I know this book, I implemented some algorithms from it. Can you tell me what general purpose non-linear optimisation methods described in it are fully automatic and have no user-adjustable parameters?
Cuchulainn
Posts: 59013
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:
Re: If you are bored with Deep Networks
If you study the book by Nocedal and Wright (BTW I have) you will see that there are more nuanced approaches as well.
I know this book, I implemented some algorithms from it. Can you tell me what general purpose non-linear optimisation methods described in it are fully automatic and have no user-adjustable parameters?
I would say the backtracking discrete methods (e.g. page 37 etc.). The continuous analogues (ODE solvers) have all this built-on so as 'user' you just give a tolerance and the solver does the rest instead of having to choose from a palette of learning rates. Backtracking is well-established in numerical analysis.
But if by 'user' you mean something else that's another discussion.
Last edited by Cuchulainn on December 17th, 2018, 3:18 pm, edited 1 time in total.
Cuchulainn
Posts: 59013
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:
Re: If you are bored with Deep Networks
“"Neural networks" are a sad misnomer. They're neither neural nor even networks. They're chains of differentiable, parameterized geometric functions, trained with gradient descent (with gradients obtained via the chain rule). A small set of highschool-level ideas put together.” — François Chollet
https://en.wikipedia.org/wiki/Keras
Cuchulainn
Posts: 59013
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:
Re: If you are bored with Deep Networks
Just like SGD, it works until it doesn't.
Sounds defeatist.
In mathematics, conditions are stated under which an algorithm works for a given class of problems. It will not works when the problem does not satisfy the assumption. For example,
$x_{k+1} = f(x_{k}), k = 0,1,2, ..$ only converges when $f$ is a contraction.
I see it as: for what class of problems does SGD work? I don't think this has been done, at least not in the spate of articles to date. For example, SGD only finds local minimum. And then SGD for constrained optimisation is a Pandora's box, yes? It gets very fuzzy-fuzzy?
ODE solvers take into account the problem they are trying to solve and adjust parameters accordingly.
ISayMoo
Topic Author
Posts: 1718
Joined: September 30th, 2015, 8:30 pm
Re: If you are bored with Deep Networks
If you study the book by Nocedal and Wright (BTW I have) you will see that there are more nuanced approaches as well.
I know this book, I implemented some algorithms from it. Can you tell me what general purpose non-linear optimisation methods described in it are fully automatic and have no user-adjustable parameters?
I would say the backtracking discrete methods (e.g. page 37 etc.). The continuous analogues (ODE solvers) have all this built-on so as 'user' you just give a tolerance and the solver does the rest instead of having to choose from a palette of learning rates. Backtracking is well-established in numerical analysis.
Hmm. In my edition (2nd, Springer, 2006) page 37 has Algorithm 3.1 which has 2 hyperparameters (c and rho). What is the theory for choosing the best values of c and rho?
ISayMoo
Topic Author
Posts: 1718
Joined: September 30th, 2015, 8:30 pm
Re: If you are bored with Deep Networks
*crickets*
Cuchulainn
Posts: 59013
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:
Re: If you are bored with Deep Networks
Politicians fume after Amazon's face-recog AI fingers dozens of them as suspected crooks
https://www.theregister.co.uk/2018/07/2 ... ion_sucks/
ISM: how do you think the crooks feel about it.
It's not there yet.
ISayMoo
Topic Author
Posts: 1718
Joined: September 30th, 2015, 8:30 pm
Re: If you are bored with Deep Networks
Yes, still a way to go
“Then we’ll keep it alive as long as we live,” added Legolas. “And we won’t forget the first great battle of the night, even if we may have forgotten the final defeat.”
“I agree,” Gandalf said, “but we will all remember it as the last battle in Middle-earth, and the first great battle of the new day.”
Aragorn drew his sword, and the Battle of Fangorn was won. As they marched out through the thicket the morning mist cleared, and the day turned to dusk.
But at least we have now an inexhaustible supply of really bad Tolkien fan fiction.
Cuchulainn
Posts: 59013
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:
Re: If you are bored with Deep Networks
It won't be long before you can do a PhD in a liberal arts college on
"Finnegans Wake: a Hidden Markov Model and Bayesian Network Approach"
riverrun, past Eve and Adam's, from swerve of shore to bendof bay, brings us by a commodius vicus of recirculation back toHowth Castle and Environs. Sir Tristramvioler d'amores, fr'over the short sea, had passen-core rearrived from North Armorica on this side the scraggyisthmus of Europe Minor to wielderfight his penisolate war: norhad topsawyer's rocks by the stream Oconee exaggerated themselseto Laurens County's gorgios while they went doublin their mumperall the time: nor avoice from afire bellowsed mishe mishe totauftauf thuartpeatrick: not yet, though venissoon after, had akidscad buttended a bland old isaac: not yet, though all's fair invanessy, were sosie sesthers wroth with twone nathandjoeRot apeck of pa's malt had Jhem or Shen brewed by arclight and roryend to the regginbrow was to be seen ringsome on the aquaface. The fall (bababadalgharaghtakamminarronnkonnbronntonner-ronntuonnthunntrovarrhounawnskawntoohoohoordenenthur-nuk!) of a once wallstrait oldparr is retaled early in bed and lateron life down through all christian minstrelsy
ISayMoo
Topic Author
Posts: 1718
Joined: September 30th, 2015, 8:30 pm
Re: If you are bored with Deep Networks
I'm sure you could now.
Cuchulainn
Posts: 59013
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:
Re: If you are bored with Deep Networks
In general, there are 3 layers between Joyce's effects and causes. Great PhD topic.
And domain annotations are essential to build the influence diagram.
BTW what's a good C++ open-source for HMMs and BNs?
Cuchulainn
Posts: 59013
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:
Re: If you are bored with Deep Networks
Yes, still a way to go
“Then we’ll keep it alive as long as we live,” added Legolas. “And we won’t forget the first great battle of the night, even if we may have forgotten the final defeat.”
“I agree,” Gandalf said, “but we will all remember it as the last battle in Middle-earth, and the first great battle of the new day.”
Aragorn drew his sword, and the Battle of Fangorn was won. As they marched out through the thicket the morning mist cleared, and the day turned to dusk.
But at least we have now an inexhaustible supply of really bad Tolkien fan fiction.
LOR is 1-dimensional (all live happily ever after) while PW is a vicus cycle.
|
{}
|
## Mar 26, 2014
### 從陀螺聊到姿態追蹤與遊戲物理引擎
opengl 3D animation of a cube gyrocsope from KT Gump on Vimeo.
EulerTop_sharp_cube from KT Gump on Vimeo.
## Mar 17, 2014
### 3/2-17感冒紀錄
(第二次用藥: Clarithromycin250 , Meptin25, Voren25, Medcon-A, Strocain, Tetosiv sustained)
(第三次用藥: Cravit500MG, Allegra, Panamax, Mucosolven, Gaster20MG, Anticough Capsules)
尖點運動
有環運動
無環運動
等周速運動
the python code
## Mar 12, 2014
### some great matches in CSGO, cool CSGO avitar names, and try to come up with some of my own, casters i like
Converted document
Some of the greatest matches in professional CSGO (chronologically, in reverse)
Some of the avatar names and casters I like:
Existing_names pusha, olofmeister, byali, apex, faflaren, karrigan, jkaem, hani, simple, cyjanb, nothing , shox, kioshima,
Cranking_my_head mehidae (somehow rhythm with meditate) toothless (from the movie "how to train your dragon") silen (silence) andoni (antoine) pfeifer (vacuum pump company - pfeiffer) timee continuing...
Casters ddk black ddk henryG warowl a few that I still need to find their names......
## Mar 9, 2014
### Typeset mix Chinese and English TeX documents with Scientific Workplace 5.0 and TEX live 2013
You can typeset Chinese documents with Scientific Workplace (5.0) if you have just a few Chinese paragraphs. But if you have a long Chinese document or a document with mix Chinese and English characters, the software will suffer. The SWP's compiler for international characters, Omega, is just not good enough for this purpose. You will run into a lot of overfull hbox problems during typesetting. We will need another latex compiler for long documents of mix Chinese and English. But we will want to use the convenient editing features of Scientific Workplace, what should we do?
Here is something you can try. Still prepare your international document with SWP. But typeset it with another latex compiler. Here I use a popular one, XeLATEX in TEX live 2013. But there are a few problems. First, the file prepared by SWP has a character encoding that is not recognized by other latex compiler. So I wrote a small Python program to convert the encoding. Second, the figure insertion syntax generated by SWP is not recognized by XeLatex. But this can be done by inserting TEX fields containing the correct Xelatex figure insertion commands into the SWP file by using SWP's TEX OBJECTS buttons. You won't be able to preview the figures SWP window but that doesn't really hurt. Here are step-by-step details:
• How to convert SWP 5.0 special unicode format file to unicode correct characters:
SWP version 5.0 produced Chinese character in its special unicode codepoint format (if you open the tex file with any text editor, you will see every Chinese character has a form like \U{6211} instead of displaying 我 in the tex file.) This is due to the extra parenthesis {} after \U. So I wrote a small python program to automatically remove all {} in the tex file. So that the file can then be prepared by other tex programs (Tex Life 2013 in my case).
The program is very simple:
import re
import codecs
def dashrepl(matchobj):
return chr(int(matchobj.group(1),16))
f1 = open('rotationV4pngtest.tex', 'r') # name of your origianl file
f2 = codecs.open('rotationV4pngtest.tex.temp', "w", "utf-8") # name of the new file
p= re.compile(r"\\U{([\w]{,4})}")
for line in f1:
m = re.sub(p,dashrepl,line)
## print(m)
f2.write(m)
f1.close()
f2.close()
• For the SWP tex file to be readable by other compiler, you will also need to change the preamble (the packages and stuff) in the beginning of the tex document so that other tex program can use it. Here is my case:
Original preamble generated by SWP:
\documentclass[12pt,a4paper]{article}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{sw2unicode}
\usepackage[UT1]{fontenc}
\usepackage{pmingliu}
\usepackage[left=0.95in,right=0.95in,top=2cm,bottom=2.54cm]{geometry}
Change the above to the following so that Xelatex can complie:
\documentclass[12pt,a4paper]{article}
\usepackage{amsmath}
\usepackage{fontspec}
\usepackage{xeCJK}
\setmainfont[Mapping=tex-text]{Times New Roman} % rm
\setsansfont[Mapping=tex-text]{Arial} % sf
\setmonofont{Courier New} % tt
\setCJKmainfont{微軟正黑體}
\usepackage[left=0.95in,right=0.95in,top=2cm,bottom=2.54cm]{geometry}
\usepackage{unicode-math}
\usepackage{graphicx}
• The file generated by SWP has symbols that use SWP's "tcilatex" macro, so you need to copy tcilatex.tex file into the same directory as your tex file. The location of tcilatex.tex in SWP's path has the following structure "D:\swp50\TCITeX\TeX\LaTeX\SWmacros\". Locate this file and copy it into your working directory. Then make sure your tex file has the \input{tcilatex.tex} line. You should have this line because you prepared your file with SWP.
• Change the TEX figure insertion command:
The figure insertion command for Xelatex should look like this:
\begin{figure}[th]
\caption{{}}
\label{firstfig}
\begin{center}
\fbox{\includegraphics[scale=0.7]{cordtrans.JPG}}
\end{center}
\end{figure}
So open the SWP tex file with Notepad and replace all things look like the following with the above
\FRAME{fhF}{5.5097in}{3.7135in}{0pt}{}{}{cordtrans.JPG}{\special{language
"Scientific Word";type "GRAPHIC";maintain-aspect-ratio TRUE;display
"USEDEF";valid_file "F";width 5.5097in;height 3.7135in;depth
0pt;original-width 9.135in;original-height 6.1436in;cropleft "0";croptop
"1";cropright "1";cropbottom "0";filename
'
cordtrans.JPG';file-properties "XNPEU";}}
There could be other commands you need to change before you can run your tex document with TEX life Xelatex. Here I only show three points, which is enough for me to typeset this document. Good luck!
### 鋼琴伴奏譜+彈唱 [轉錄 from 鸠玖的音乐世界 土豆網]
http://www.tudou.com/plcover/-ysO_c-u4NY/
## Mar 1, 2014
### 使用Scientific Workplace 5.0 + XeLatex軟體做中英文混雜並夾帶數學公式的長篇圖文文章電腦排版
這邊介紹如何利用Scientific Workplace 5.0的編輯方便性,加上XeLatex (TeX life 2013)對中英文混雜的完美支援,來對中英文混雜並夾帶數學公式的長篇圖文文章做自動排版。
|
{}
|
# Linux – Protection from ourselves (Root)
I love Linux and everything she stands for; however, unfortunately I grew up with Windows. Thusly so I have learned very bad practices (such as NT Authority will protect me). I have several Linux VPS's for personal and educational uses and I manage all of them from the command line. Through the management of these servers I have learned very painful lessons of the power of the Root user. Such as:
• rm -d -R /*
• chown www-user:www-user -R /*
• Etc.
I've only removed my root directory twice, but just last week I changed the permission of the whole drive – effectively locking the Root out.
Now I know that I should never be logged in as Root, but most of the time I have to deal with files that only the Root owns so I sudo and run the command.
So my question is, is there a way to prompt the user (who is root, or sudo'ed) when a potentially hazardous command is executed, so the user may rethink their decision? Possibly through scripts in Bash, or a different sudo wrapper.
Or (I ask this hopefully, and very simplified) is there a way to set up permissions where instead of a two tier user system (Root user, regular user) there is a three tier system like in Windows (NT Authority, Administrator, other User). Basically is there a way to keep the ability of System administration, but restrict access to some system files.
Get in the habit of doing an ls before you issue a command meant to work recursively or one that is dangerous. You can then see what files will be affected before proceeding.
rm supports the -i switch (causing it to prompt you) as well as --preserve-root (makes it fail on root) which should give you a small margin of safety. Other commands may have similar options. You can have these always be present with an alias rm=rm -i --preserve-root command, and may want to put that in your ~/.profile or ~/.bashrc so it is there every time you invoke your root shell.
|
{}
|
• # question_answer National income is considered as: A) NNP at factor cost B) GDP at factor cost C) NNP at market prices D) GDP at current prices
Solution :
[a] National Income is calculated by subtracting net indirect taxes from NNP at market prices. The obtained value is known as NNP at factor cost or national income. NNP at factor cost or National Income is the sum of wages, rent, interest and profits paid to factors for their contribution to the production of goods and services in a year. It may be noted that: NNP at Factor Cost = NNP at Market Price $-$ Indirect Taxes + Subsidies.
You need to login to perform this action.
You will be redirected in 3 sec
|
{}
|
Differential and Integral Equations
Strong convergence of bounded sequences of solutions of porous medium equations
Gary M. Lieberman
Abstract
We show that smooth solutions of porous medium equations satisfy a simple $L^2$ gradient estimate on sets where the solution itself is small. Along with known continuity estimates for solutions and an estimate on second derivatives of smooth solutions, this estimate allows us to show that approximating smooth solutions of a porous medium equation converge strongly to the weak solution.
Article information
Source
Differential Integral Equations, Volume 11, Number 3 (1998), 395-407.
Dates
First available in Project Euclid: 30 April 2013
|
{}
|
On the closure of modules of continuously differentiable mappings
Rendiconti del Seminario Matematico della Università di Padova, Volume 60 (1978), pp. 33-42.
@article{RSMUP_1978__60__33_0,
author = {Nachbin, Leopoldo},
title = {On the closure of modules of continuously differentiable mappings},
journal = {Rendiconti del Seminario Matematico della Universit\a di Padova},
pages = {33--42},
publisher = {Seminario Matematico of the University of Padua},
volume = {60},
year = {1978},
zbl = {0433.46034},
mrnumber = {555954},
language = {en},
url = {http://www.numdam.org/item/RSMUP_1978__60__33_0/}
}
TY - JOUR
AU - Nachbin, Leopoldo
TI - On the closure of modules of continuously differentiable mappings
JO - Rendiconti del Seminario Matematico della Università di Padova
PY - 1978
DA - 1978///
SP - 33
EP - 42
VL - 60
PB - Seminario Matematico of the University of Padua
UR - http://www.numdam.org/item/RSMUP_1978__60__33_0/
UR - https://zbmath.org/?q=an%3A0433.46034
UR - https://www.ams.org/mathscinet-getitem?mr=555954
LA - en
ID - RSMUP_1978__60__33_0
ER -
%0 Journal Article
%A Nachbin, Leopoldo
%T On the closure of modules of continuously differentiable mappings
%J Rendiconti del Seminario Matematico della Università di Padova
%D 1978
%P 33-42
%V 60
%I Seminario Matematico of the University of Padua
%G en
%F RSMUP_1978__60__33_0
Nachbin, Leopoldo. On the closure of modules of continuously differentiable mappings. Rendiconti del Seminario Matematico della Università di Padova, Volume 60 (1978), pp. 33-42. http://www.numdam.org/item/RSMUP_1978__60__33_0/`
[1] R.M. Aron - J.B. Prolla, Polynomial approximation of differentiable functions on Banach spaces, Journal für die Reine und Angewandte Mathematik, to appear. | EuDML | MR | Zbl
[2] C.S. Guerreiro, Ideias de funções diferenciáveis, Anais da Academia Brasileira de Ciências, 49 (1977), pp. 47-70. | MR | Zbl
[3] C.S. Guerreiro, Whitney's spectral synthesis theorem is infinite dimensions, in Approximation Theory and Functional Analysis (Editor: J. B. PROLLA), Notas de Matemática (1979), North-Holland, to appear. | MR | Zbl
[4] J. Lesmes, On the approximation of continuously differentiable functions in Hilbert spaces, Revista Colombiana de Matemática, 8 (1974), pp. 217-223. | EuDML | MR | Zbl
[5] J.G. Llavona, Aproximación de funciones diferentiables, Universidad Complutense de Madrid, 1975.
[6] J.G. Llavona, Approximation of differentiable functions, Advances in Mathematics, to appear.
[7] B. Malgrange, Ideals of differentiable functions, Oxford University Press, 1966. | MR | Zbl
[8] L. Nachbin, Sur les algèbres denses de fonctions différentiables sur une variété, Comptes Rendus de l'Académie des Sciences de Paris, 228 (1949), pp. 1549-1551. | MR | Zbl
[9] L. Nachbin, Sur la densité des sous-algèbres polynomiales d'applications continûment différentiables, Séminaire Pierre Lelong - Henri Skoda (1976-1977), Lecture Notes in Mathematics 694 (1978), to appear. | MR | Zbl
[10] V. Poenaru, Analyse différentielle, Lecture Notes in Mathematics 371 (1974). | MR | Zbl
[11] J.B. Prolla, On polynomial algebras of continuousty differentiable functions, Rendiconti della Accademia Nazionale dei Lincei, 57 (1974), pp. 481-486. | MR | Zbl
[12] J.B. Prolla - C.S. Guerreiro, An extension of Nachbin's theorem to differentiable functions on Banach spaces with the approximation property, Arkiv för Mathematik, 14 (1976), pp. 251-258. | MR | Zbl
[13] J.C. Tougeron, Idéaux de fonctions différentiables, Springer-Verlag, 1972. | MR | Zbl
[14] H. Witney, On ideals of differentiable functions, American Journal of Mathematics, 70 (1948), pp. 635-658. | MR | Zbl
|
{}
|
# Can random walks be applied to String Theory in curved space
• A
If we study the high temperature limit (near Hagedorn) of a string gas, most of the energy is concentrated in a single long string. If we model the string by a fixed number of rigid links of length ls and calculate the number of possible configurations, we get the density of states:
$$\omega(E) \sim \frac{ e^{ \beta E} }{ E^{ 1+D/2 } }$$
Is it possible to generalize this method in curved space?
A possible way is to calculate the torus path integral of a string that wraps the euclidian periodic time in a curved background. At high temperatures this can be calculated from the path integral of a single non-relativistic particle, which gives the free energy and thus the density of states. This seems to be called the random walk model. References: http://arxiv.org/abs/1506.07798 and http://arxiv.org/abs/hep-th/0508148 .
But this seems totally different. A particle path can be related to a random walk, but one doesn't calculate the number of microstates from combinatoric reasoning. Is there a way to do something like that?
## Answers and Replies
Thanks for the post! This is an automated courtesy bump. Sorry you aren't generating responses at the moment. Do you have any further information, come to any new conclusions or is it possible to reword the post?
|
{}
|
Tag Info
Hot answers tagged potential-energy
11
The "simplest" classical explanation I know is the van der Waals interaction described by Keesom between two permanent dipoles. Let us consider two permanent dipoles $\vec{p}_1$ (located at $O_1$) and $\vec{p}_2$ located at $O_2$. Their potential energy of interaction is: U(\vec{p}_1,\vec{p}_2,\vec{O_1 O_2}) = -\vec{p}_1\cdot \vec{E}_2 = ...
11
Of course it has something to do with the liquid water entering the gas phase just above the cup of tea, but how does that give the bag of tea a directed motion to one side? Nope. The teabag is dangled by a string. Remember that the string is made of wound up threads: Now, the threads stay wound up because they fit well and they have a knack of ...
10
Here's another way of looking at it. Let M1, M2, M3 be our three masses. In the three body problem we're considering, the whole frame containing M1, M2 and M3 is rotating. You're right to think that if that frame was fixed then the points L4 and L5 would not be stable. After all if you perturb M3 from L4 or L5 then it should just roll down the potential ...
9
The formula you quote does not contain the potential energy, it is valid for a free particle (i.e. a particle which is not affected by external potential). You can link it to classical mechanics by evaluating it for small values of $p$ (more precisely: $p \ll c$): $$E = \sqrt{\left(mc^2\right)^2 + p^2 c^2} = c \sqrt{m^2c^2 + p^2} = \cdots$$ $$\cdots = ... 8 The potential energy only being defined up to a constant does not imply that potential energy differences only depend on differences in position. To see this mathematically, assume that a function U has the property that U(x_2)-U(x_1) = f(x_2-x_1) for some function f. Then if we take x_2 = x+\Delta x and x_1 = x, and divide both sides by ... 8 Gravity is doing that work! If you observe, the domino is in a position of unstable equilibrium. Edit: as pointed out in the comments, this position is of a metastable and not unstable equilibrium. This means that the domino is in a state where it hasn't achieved the minimum possible energy state yet. The energy I'm talking about here is the ... 7 While it may be possible to derive a violation of energy conservation due to intersecting equipotentials, there is a much more intuitive and in my opinion a more fundamental reason that equipotentials cannot intersect: Potential is a single-valued function. A good analogy for potential in this case is a map of the ground elevation of the earth; a ... 7 You're right that if you take Newton's law of gravity as is and apply it to a 2D universe, you'll get an infinite result. So you do need to use a modified theory in two dimensions, or indeed in any number of dimensions other than three. The proper way to do this is using general relativity, and if you apply GR to 2+1D spacetime, you get something that looks ... 7 The energy in your equation is for a free rigid body in the absence of a potential. We can see this if we start with a Lagrangian with a scalar function, \Phi(q), and remember \gamma is a function of \dot{q},$$ L=T-V=-\gamma^{-1} (\dot{q}) \, mc^2-\Phi(q) $$Then if we find the momentum$$ \pi=\frac{\partial L}{\partial ...
7
Let $E$ denote a quantity that does not change over time (from the first principle). Consider a ball with mass $m$ dropped from a height $h$. As the ball drops, its speed changes due to the gravitational acceleration $g$, reaching a final value $v$ at impact. Thus, we can infer that the quantity $E$ depends on these 4 parameters: $$E(m,H,g,V)$$ where $H$ ...
6
Yes the free body moves outward, but there are two critical things you have to know to interpret this statement correctly. First, this is the effective potential, taking into account gravity and centrifugal force. It has this form because we went into the non-inertial frame co-rotating with the two masses. Mathematically, the potential is $$... 5 Yes, u is indeed the potential energy. And yes, you can calculate the force acting on a particle by calculating the gradient of the potential energy field at the position the particle is in. Computationally you will want to calculate the force on particle 1, by taking the gradient at the position particle 1 is in, of the potential energy field created by ... 5 Special relativity doesn't alter the fact that interactions between particles "store energy" in the form of "potential energy," alhtough special relativity does alter the terms you listed, all of which have to do with the energies possessed by particles either by virtue of their motion, or their mass. For example, in special relativity, electromagnetic ... 5 It's valid in the sense that it does tell you the rest energy of a 200-pound person, but it does not tell you how much energy you could get by splitting all those atoms. As a matter of fact, most of the atoms in a human body are carbon, nitrogen, and oxygen; splitting these atoms takes energy, it doesn't produce it. Your character would need to tap into a ... 5 Think about the work-kinetic energy theorem, which states that the net work done on an object is equal to its change in kinetic energy:$$W_{net}=\Delta\mathrm{KE}.$$You are right that when lifting an object of mass m by a height h, in a uniform gravitational field, the work you do is W_{you}=mgh (assuming, as you said, that you're applying a force ... 5 Your teacher's explanation is incorrect. A simple counterexample can be constructed to illustrate this by considering what happens when the role of your arm is replaced by that of a rubber band. When a weight is suspended from the ceiling by a rubber band, the band stretches and its polymer chains become more ordered, in exact analogy to your teachers ... 4 You've got basically the right idea. Just for clarity, let me recap the setup: suppose that your ring is centered at the origin and oriented in the xy plane. Consider two differential elements of charge, \mathrm{d}q located at (R,0), and \mathrm{d}q', located at (R\cos\phi,R\sin\phi). The potential energy of these two charge elements is ... 4 It is indeed correct that only the difference between two potential energies is physically meaningful. An in-depth explanation follows. For the rest of this answer, forget everything you know about potential energy. I suppose you know that when you have a conservative force \vec{F} acting on an object to move it from an initial point \vec{x}_i to a ... 4 Yup. Inside the (uniform spherical) mass, IIRC \phi=-\frac{GM}{2R^3}\left(3R^2-r^2\right). Or something like that. So,$$\phi=\begin{cases} -\frac{GM}{r}, & r>R \\ -\frac{GM}{2R^3}\left(3R^2-r^2\right), & r<R \end{cases}$$The laplacian \nabla^2\phi should be$$4\pi G\rho=\nabla^2\phi=\begin{cases} 0, &r>R \\ 4\pi G\rho_0 ...
4
As I know, in L4,L5 , the potential of the Gravitational power is at it's maximum, although it is unusual for kinematics, that considers stable points to be when U->Min , but in Dynamic systems, stable points can be even when U->Max , but then we call it "Dynamical Equilibrium" in sense that the object will actually move around the stable point (but will ...
4
When you look at the dynamics in the rotating reference frame, there are 4 forces acting on the particle: the two gravitational pulls from the massive bodies, the centrifugal push away from the center of rotation (located between the massive objects) and the Coriolis force. The first three forces depend on the position of the particle, and can be derived ...
4
There is no need for any empirical evidence. This is pure mathematics. Step 1: Assume a force is conservative. This means that ${\vec \nabla} \times {\vec F} =0$ Step 2: Then, via Green's theorem, you know that the quantity $\int_{a}^{b}{\vec F}\cdot d{\vec s}$ does not depend on the path you take from a to b. (equivalently, this integral is zero if ...
4
The comparison is viable, here's why: Let's choose the positive $x$-direction to point upward, perpendicular to the water's surface. By Archimedes' principle, the magnitude of the buoyant force on an object of volume $V$ equals the weight of the displaced water; $F_B = \rho_w V g$ where $\rho_w$ here denotes the density of water. The buoyant force ...
4
Yes, the derivative of a step function is a Dirac delta. You can see this by integrating the delta function: $$\Theta(x)=\int_{-\infty}^x \delta(x') \mathrm{d}x'$$ where $$\Theta\left(x\right)=\begin{cases} 1 & x>0\\ 0 & x<0 \end{cases}$$ (note that $\Theta(0)$ is not defined by this prescription. If you use a symmetric representation of the ...
4
You can actually do this using $\delta$ functions as well if you'd like (provided you're careful). Let $V(x) = V_0\theta(x-x_0)$, then $$F(x) = -V'(x) = -V_0\delta(x-x_0)$$ For simplicity, let's take the potential barrier to be located at $x_0 = 0$ so that by Newton's second law, the equation of motion becomes $$-V_0\delta(x(t)) = m \ddot x(t)$$ ...
4
No, most force fields refuse to be conservative. The path-independence is a nontrivial constraint in an arbitrarily small region of space, an arbitrary neighborhood. If the force field is conservative, it must be $$\nabla\times \vec F = 0$$ because $F = -\nabla\Phi$. It's clear that the curl of $\vec F$ may be nonzero even if you look at a small ...
4
Yes, it is a torsion spring. It works by twisting the metal rod that makes up the body of the spring. The reason for coiling the spring is to fit a long length of metal rod into a short space. You need a long length of rod so that the torsion per unit length remains small. With a shorter length of rod you'd exceed the elastic limit and the rod would be ...
Only top voted, non community-wiki answers of a minimum length are eligible
|
{}
|
# titration calculation worksheet
Free. Showing top 8 worksheets in the category - Gcse Chemistry Calculations. In a titration of HCl with NaOH, 100.0 mL of the base was required to neutralize 20.0 mL of 5.0 M HCl. Titrations Practice Worksheet Find the requested quantities in the following problems: 1) 2) 3) If it takes 54 mL of 0.1 M NaOH to neutralize 125 mL of an HCI solution, what … A titration calculation is a simple formula used to work out the concentration (in moles) of one of the reactants in a titration using the concentration of the other reactant. doc, 27 KB. Tes Global Ltd is Download free the whole teaching package. My students had carried out titration practical previous to this lesson so had obtained average titre values. Square Solutions to the Titrations Practice Worksheet 1) 0.043 M HCl 2) 0.0036 M NaOH 3) 0.1 M H 2SO 4 4) You cannot do a titration without knowing the molarity of at least one of the substances, because you’d then be solving one equation with two unknowns 5) Endpoint: When you actually stop doing the titration … What is the molarity of the NaOH? Read more. Titrations Practice Worksheet Find the requested quantities in the following problems: 1) If it takes 54 mL of 0.1 M NaOH to neutralize 125 mL of an HCl solution, what is the concentration of the HCl? registered in England (Company No 02017289) with its registered office at 26 Red Lion Also included in: Mole Conversions Calculations Bundle. In titration (2) all of the carbonates are neutralised in two stages (i) Na 2 CO 3 + HCl ==> NaCl + NaHCO 3 (ii) NaHCO 3 + HCl ==> NaCl + H 2 O + CO 2 (b)(i) Calculate the moles of Na 2 CO 3 in the prepared solution and calculate its molarity. A pipette is used to measure out the exact volume of alkali and an indicator is added so the end point of the titration … 2) 0.0036 M NaOH. Q2. Info. Q14. Conditions. How can I re-use this? A worksheet on titration calculations and percentage uncertainties. Titration Calculations. registered in England (Company No 02017289) with its registered office at 26 Red Lion 1) 0.043 M HCl. Titration Calculations Worksheet Learning Target Use data from a titration experiment to determine the molarity of an unknown solution. Solutions to the Titrations Practice Worksheet. 2)0.0036 M NaOH. Categories & Ages. Chemistry; Chemistry / Acids and bases; 14-16; View more. Titration questions are among the toughest to answer at GCSE and you will need a calculator and space to do some working. Entire OCR A-Level Chemistry Course Powerpoint, NEW AQA GCSE Chemistry - The Earth's Atmosphere, KS3_Secondary 1 worksheet and revision(with answer script): Acids, Alkalis and Neutralisation. 2. The below questions are on a separate sheet (that you are free to print off) but you can type in your answers when you have worked them out to see if you are right. salreid Titration calculations. Q1: The table below shows the results of titrating 0.02 dm 3 of barium hydroxide B a ( O H ) 2 against 0.2 M of hydrochloric acid. Lesson Video 17:52. This dedicated titrations page will cover how to carry out a titration and how to perform titration calculations, in line with the GCSE chemistry syllabus. Tes Classic Free Licence. Use the pipette and pipette filler to add 25 cm3 of alkali to a clean conical flask. Q7. Grades: 9 th, 10 th, 11 th, 12 th, Higher Education, Adult Education, Homeschool. This worksheet and answer sheet is aimed at post 16 chemistry students and covers redox titration calculations. When considering a titration calculation, the first thing to know is the volume of titrant that is needed to reach the equivalence point. Whether you are studding AQA titrations, OCR or Edexcel, then all of the content and revision materials on this page will apply to you. Solutions to the Titrations Practice Worksheet For questions 1 and 2, the units for your final answer should be “M”, or “molar”, because you’re trying to find the molarity of the acid or base solution. Sample Question Videos 03:05. Lesson Worksheet Q1: The table below shows the results of titrating 0.02 dm 3 of barium hydroxide B a (O H) 2 against 0.2 M of hydrochloric acid. Report a problem. What is the concentration of the HCl solution? Greater than 250 Increase AM NPH dose by 4 units. Differentiated lesson resources available. The IV solution was prepared by adding 4 mg of Levophed. Q12. Titration calculations: Practice questions, input your answers here. Experience a new way of teaching titration with our ready to use lecture Basics of Titration and laboratory titration worksheet materials. Acid base titration calculations help you identify properties (such as pH) of a solution during an experiment, or what an unknown solution is when doing fieldwork. This solution was placed in a burette and 13.9 cm3 were required to neutralise 25 cm3 of 0.1 moldm-3 NaOH. 4 worked examples going through different types of titration calculation, from a simple calculation to a back titration to a calculation finding the percentage purity of a solid. To solve these problems, use M1V1= M2V2. $\text{moles acid} = \text{moles base}$ Recall that the molarity $$\left( \text{M} \right)$$ of a solution is defined as the moles of the solute divided by the liters of solution $$\left( \text{L} \right)$$. 1. Titrations The next six problems represent many points along a titration curve of a weak base with a strong acid. (Be sure to write the neutralization reaction.) 1. Textbook Section 15.2 Directions: Show ALL of your work for each of the following problems. WS---Titration-Calculations-and-Questions-MS, WS---Titration-Calculations-and-Questions. Step 2: What are the equilibrium concentrations of the species in solution? Q 13. In a titration of H 2SO 4 with NaOH, 60.0 mL of 0.020 M NaOH was needed to neutralize 15.0 mL of H 2SO 4. Q10. In this worksheet, we will practice using the results from a titration experiment to calculate unknown properties of a solution. This resource is designed for UK teachers. To solve these problems, use M1V1 = M2V2. Types: Worksheets, Activities. Tes Global Ltd is
|
{}
|
How to write a complex number in polar form
Complex number given:
$x = 1 + \cos \alpha + i \sin \alpha$
Desired form is something like $|x| \cdot e^{i \cdot \phi} = |x| \cdot (\cos \phi + i \sin \phi)$.
I somehow got completly stuck how to convert the number to the Euler style.
Maybe someone can help me.
I think I could write:
$x = \cos 0 + i\sin 0 + \cos \alpha + i \sin \alpha$
$x = (\cos 0 + \cos \alpha) + i (\sin 0 + \sin \alpha)$
Then $|x| = \sqrt{(\cos 0 + \cos \alpha)^2 + (\sin 0 + \sin \alpha)^2}$.
Is it then right to write $x = |x| \cdot e^{i \cdot \alpha} = \sqrt{(\cos 0 + \cos \alpha)^2 + (\sin 0 + \sin \alpha)^2} \cdot e^{i \cdot \alpha}$ ?
Is there a simpler way for the Euler style of $x$?
-
The last step in your computation is quite wrong as well:
How is $\cos \alpha + \cos 0 + i ( \sin \alpha + \sin 0)$ equal to whatever you have written? Recall, $$\cos A + i\sin A = e^{iA}$$ is Euler's formula and not what you have just written.
This is a standard exercise, so here's the hint:
$$1+ \cos \alpha = 2 \cos^2\frac{\alpha}{2}$$ $$\sin \alpha = 2 \sin \frac {\alpha}{2} \cos \frac {\alpha}{2}$$
-
How do i get to these two equations? If I use them now, I can go on, but how do I get $1+cos \alpha = ...$? – meinzlein Dec 20 '11 at 21:23
Recall that $cos \alpha = \cos^2 \frac {\alpha}{2}-\sin^2\frac {\alpha}{2}$ – user21436 Dec 20 '11 at 21:27
Hm this is new to me. We should only use angle sum and difference identities and $sin^2 + cos^2 = 1$ – meinzlein Dec 20 '11 at 21:36
In what way is this different? You know that $\cos (A+B)=\cos A~\cos B-\sin A \sin B$. Now put $A=B=\frac {\alpha} {2}$ and see! – user21436 Dec 20 '11 at 21:40
\begin{align} x & = 1 + \cos(\alpha) + i \sin(\alpha) = 2 \cos^2(\alpha/2) + i (2 \sin(\alpha/2) \cos(\alpha/2))\\ & = 2 \cos(\alpha/2) (\cos(\alpha/2) + i \sin(\alpha/2)) = 2 \cos(\alpha/2) e^{i \alpha/2} \end{align}
-
|
{}
|
# Bayesian probability
34,135pages on
this wiki
## Redirected from Bayesianism
In the philosophy of mathematics Bayesianism is the tenet that the mathematical theory of probability is applicable to the degree to which a person believes a proposition. Bayesians also hold that Bayes' theorem can be used as the basis for a rule for updating beliefs in the light of new information —such updating is known as Bayesian inference. In this sense, Bayesianism is an application of the probability calculus and a probability interpretation of the term probable, or —as it is usually put —an interpretation of probability.
## Controversy Edit
A quite different interpretation of the term probable has been developed by frequentists. In this interpretation, what are probable are not propositions entertained by believers, but events considered as members of collectives to which the tools of statistical analysis can be applied.
The Bayesian interpretation of probability allows probabilities to be assigned to all propositions (or, in some formulations, to the events signified by those propositions) independently of any reference class within which purported facts can be thought to have a relative frequency. Although Bayesian probability is not relative to a reference class, it is relative to the subject: it is not inconsistent for different persons to assign different Bayesian probabilities to the same proposition. For this reason Bayesian probabilities are sometimes called personal probabilities (although there are theories of personal probability which lack some features that have come to be identified with Bayesianism).
Although there is no reason why different interpretations (senses) of a word cannot be used in different contexts, there is a history of antagonism between Bayesians and frequentists, with the latter often rejecting the Bayesian interpretation as ill-grounded. The groups have also disagreed about which of the two senses reflects what is commonly meant by the term 'probable'.
To illustrate, whereas both a frequency probability and a Bayesian probability (of, e.g., 0.5) could be assigned to the proposition that the next tossed coin will land heads, only a Bayesian probability could be assigned to the proposition, entertained by a particular person, that there was life on Mars a billion years ago—because this assertion is made without reference to any population relative to which the relative frequency could be defined.
## History of Bayesian probability Edit
"Bayesian" probability or "Bayesian" theory is named after Thomas Bayes (1701? — 1761), who proved a special case of what is called Bayes' theorem. The term Bayesian, however, came into use only around 1950, and in fact it is not clear that Bayes would have endorsed the very broad interpretation of probability now called "Bayesian." Laplace independently proved a more general version of Bayes' theorem and put it to good use in solving problems in celestial mechanics, medical statistics and, by some accounts, even jurisprudence. Laplace, however, didn't consider this theorem to be of fundamental philosophical importance for probability theory. He endorsed the classical interpretation of probability, as did everyone else at his time.
The application of probability calculus to subjective belief which later became an important aspect of the "Bayesian" approach was proposed for the first time by the philosopher Frank P. Ramsey in his book The Foundations of Mathematics from 1931. Ramsey himself saw this interpretation as merely a complement to a frequency interpretation of probability. The one taking this interpretation seriously for the first time was the statistician Bruno de Finetti in 1937. The first detailed theory came in 1954 in the book The Foundations of Statistics by the mathematician and statistician L. J. Savage.
Bayesian probability is a measure of the degree of belief a person has in some proposition. Several attempts have been made to operationalize the intuitive notion of a "degree of belief". The most common approach is based on betting: a degree of belief is reflected in the odds and stakes that the subject is willing to bet on the proposition in question.
When beliefs have degrees, the theorems of the probability calculus become criteria for the rationality of sets of beliefs in the same way that the theorems of first order logic are criteria for the rationality of sets of beliefs. Many authors regard degrees of belief as extensions of the classical truth values (true and false).
The Bayesian approach has been explored by Harold Jeffreys, Richard T. Cox, Edwin Jaynes and I. J. Good. Other well-known proponents of Bayesian probability have included John Maynard Keynes and B.O. Koopman.
## Varieties of Bayesian probabilityEdit
The terms subjective probability, personal probability, epistemic probability and logical probability describe some of the schools of thought which are customarily called "Bayesian". These overlap but there are differences of emphasis. Some of the people mentioned here would not call themselves Bayesians.
Bayesian probability is supposed to measure the degree of belief an individual has in an uncertain proposition, and is in that respect subjective. Some people who call themselves Bayesians do not accept this subjectivity. The chief exponents of this objectivist school were Edwin Thompson Jaynes and Harold Jeffreys. Perhaps the main objectivist Bayesian now living is James Berger of Duke University. Jose Bernardo and others accept some degree of subjectivity but believe a need exists for "reference priors" in many practical situations.
Advocates of logical (or objective epistemic) probability, such as Harold Jeffreys, Rudolf Carnap, Richard Threlkeld Cox and Edwin Jaynes, hope to codify techniques whereby any two persons having the same information relevant to the truth of an uncertain proposition would calculate the same probability. Such probabilities are not relative to the person but to the epistemic situation, and thus lie somewhere between subjective and objective. However, the methods proposed are controversial. Critics challenge the claim that there are grounds for preferring one degree of belief over another in the absence of information about the facts to which those beliefs refer. Another problem is that the techniques developed so far are inadequate for dealing with realistic cases.
## Bayesian and frequentist probabilityEdit
The Bayesian approach is in contrast to the concept of frequency probability where probability is held to be derived from observed or defined frequency distributions or proportions of populations, with the usefulness of probability narrowly limited to such scenarios. The difference has many implications for the methods by which statistics is practiced when following one model or the other, and also for the way in which conclusions are expressed.
For example, Laplace estimated the mass of Saturn using Bayesian methods. However, on the frequency interpretation of probability the laws of probability cannot be applied to this problem. This is because the mass of Saturn isn't a well defined random experiment or sample. From what population is the mass of Saturn taken? In what sense is Saturn picked at random from that population? Similarly, when comparing two hypotheses and using the same information, frequency methods would typically result in the rejection or non-rejection of the original hypothesis with a particular degree of confidence, while Bayesian methods would yield statements that one hypothesis was more probable than the other or that the expected loss associated with one was less than the expected loss of the other.
The rejection of the classical notion of probability, and the development of the theory of statistics and probability based narrowly on the frequency interpretation was pursued by some of the most influential figures in statistics during the first half of the twentieth century, including R.A. Fisher, Egon Pearson and Jerzy Neyman. At the same time, the mathematical foundation of probability in measure theory via the Lebesgue integral was elucidated by A. N. Kolmogorov in the book Foundations of the Theory of Probability in 1933. In the years to 1950 these two approaches almost completely eclipsed the previous broader classical interpretation. However since that time, and continuing into the present day, the work of Savage, Koopman, Abraham Wald, and others, has led to renewed broader acceptance of the alternative, Bayesian point of view.
## Applications of Bayesian probability Edit
Today, there are a variety of applications of Bayesian probability that have gained wide acceptance. Some schools of thought emphasise Cox's theorem and Jaynes' principle of maximum entropy as cornerstones of the theory, others (e.g., Ramsey, di Finetti) approach it from the point of view of a Dutch book argument, still others may claim that Bayesian methods are more general and give better results in practice than frequency probability. See Bayesian inference for applications and Bayes' Theorem for the mathematics.
Some philosophers of science regard Bayesian inference as a model of the scientific method. That is, updating probabilities via Bayes' theorem is similar to the scientific method insofar as one starts with an initial set of beliefs about the relative plausibility of various hypotheses, collects new information (for example by conducting an experiment), and then adjusts the original set of beliefs in the light of the new information to produce a more refined set of beliefs. However, this view is controversial. Similarly, Bayes factors have been employed in discussions of Occam's Razor.
Bayesian techniques have recently been applied to filter out e-mail spam with good success. After submitting a selection of known spam to the filter, it then uses their word occurrences to help it discriminate between spam and legitimate email.
See Bayesian inference and Bayesian filtering for more information in this regard.
## Probabilities of probabilitiesEdit
One criticism levelled at the Bayesian probability interpretation has been that a single probability assignment cannot convey how well grounded the belief is—i.e., how much evidence one has. Consider the following situations:
1. You have a box with white and black balls, but no knowledge as to the quantities
2. You have a box from which you have drawn n balls, half black and the rest white
3. You have a box and you know that there are the same number of white and black balls
The Bayesian probability of the next ball drawn is black is 0.5 all three cases. To reflect difference in evidential support one can assign probabilities to these probabilities (so-called metaprobabilities) in the following manner:
1. You have a box with white and black balls, but no knowledge as to the quantities
Letting $\theta = p$ represent the statement that the probability that the next ball is black is $p$, a Bayesian might assign a uniform Beta prior distribution:
$\forall \theta \in [0,1]$
$P(\theta) = \Beta(\alpha_B=1,\alpha_W=1) = \frac{\Gamma(\alpha_B + \alpha_W)}{\Gamma(\alpha_B)\Gamma(\alpha_W)}\theta^{\alpha_B-1}(1-\theta)^{\alpha_W-1} = \frac{\Gamma(2)}{\Gamma(1)\Gamma(1)}\theta^0(1-\theta)^0=1$
Assuming that the ball drawing is modelled as a binomial sampling distribution, the posterior distribution, $P(\theta|m,n)$, after drawing m additional black balls and n white balls is still a Beta distribution, with parameters $\alpha_B=1+m$, $\alpha_W=1+n$. An intuitive interpretation of the parameters of a Beta distribution is that of imagined counts for the two events. For more information, see Beta distribution.
2. You have a box from which you have drawn N balls, half black and the rest white
Letting $\theta = p$ represent the statement that the probability that the next ball is black is $p$, a Bayesian might assign a Beta prior distribution, $\Beta(N/2+1,N/2+1)$. The maximum aposteriori (MAP) estimate of $\theta$ is $\theta_{MAP}=\frac{N/2+1}{N+2}$, precisely Laplace's rule of succession.
3. You have a box and you know that there are the same number of white and black balls
In this case a Bayesian would define the prior probability $P(\Theta)=\delta(\frac{1}{2})$.
Because there is no room for metaprobabilities on the frequency interpretation, frequentists have had to find different ways of representing difference of evidential support. Cedric Smith and Arthur Dempster each developed a theory of upper and lower probabilities. Glenn Shafer developed Dempster's theory further, and it is now known as Dempster-Shafer theory.
|
{}
|
Thursday, March 16, 2006
WMAP: three-year data released
Fourty minutes ago, the WMAP team has released
Click at the link above and enjoy. If you want to be informed about similar events by e-mail,
The polarization analyses of Lyman Page et al. see no evidence of B-modes but tell you a lot of details. Note that the B-modes describe the magnetic field of electromagnetic waves. If you choose the synchronous gauge, the B-modes of the radiation are sourcing the tensor "h_{ij}" modes of the gravitational field only. In other words, the absence of the B-modes means an absence of short gravity waves.
This observation seems to rule out the original ekpyrotic models of the Universe because these models predict that the energy stored in gravity waves grows faster with the momentum than inflation predicts while the upper bound on the tensor/scalar ratio from the WMAP data is 0.55. The cyclic Universe models based on the original ekpyrotic scenario may be ruled out, too. Note that newer cyclic models are claimed to generate a scale-invariant spectrum indistinguishable from inflation.
David Spergel explains, together with his team, that a standard six-parameter cosmological model containing cold dark matter plus cosmological constant plus baryonic matter fits not only the new three-year data but also finer CMB details: patches that are smaller than in the previous data have been looked at and the case for inflation has strengthened because the spectrum continues to be scale-invariant up to these shorter length scales. Try this 2048 x 1124 map in the W-band (more than one megabyte!) and compare with the analogous, older pictures from COBE and WMAP-1-year. One can see solid angles that are 1,000 times smaller than those at COBE and almost 100 times smaller than with the first-year WMAP maps. Instead of listening to Peter Woit, have a look! ;-)
The equation of state of the dark energy has "w=p/rho" equal to -1.07 plus minus 0.1 or so. The sum of neutrino masses is below 0.68 eV at 95% confidence level. No non-gaussianities have been seen. The index "n" is close to one (scale invariance) but very likely different from one, something like 0.96.
A 142-page-long description of the whole experiment is here. Some update on temperatures is here - they hijacked the acronym ILC! A discussion of data analysis and error margins is here.
Figure 1: An image of the skies. (WMAP/NASA science team.)
More hot new images can be found on a NASA website. EurekAlert offers a press release. Because the temperature fluctuations still perfectly agree with the inflationary framework, there is a lot of room for poetic comments. Brian Greene revived a theme from his book "The Fabric of the Cosmos" at the press conference:
• These observations are spectacular and the results are stunning… it is truly inspiring. Galaxies are nothing but quantum mechanics writ large across the sky.
PhysOrg.com also presents the results as a stronger argument in favor of inflation. The astronomers won't ever be impressed by high-energy physics and they prefer to learn that the first star was born 0.4 billion years after the Big Bang, which means 13.3 billion years ago. See Bad (low-energy) Astronomy Blog.
More news on news.google.com. Mark Trodden says just a few words. Sean Carroll, on the other hand, tells you much more how the LambdaCDM model is in good shape.
1. It's a new low of quality in scientific papers. On Page 16 they got the world accepted neutron lifetime as 887.5 seconds. It's actually 885.7 seconds. It may just be a typo but with nearly 4 dozen eyes proof reading this paper for two years, they should NOT have this elementary school mistakes that could be caught by QUANTOKEN in 3 minutes of reading. Also, all previous neutron lifetime measurements are pretty much agree with each other, on what basis do they adapt this Russian guy's result, which differ from every one else by more than 6 sigma, just because the data fit their model better?
Quantoken
2. They measured hubble constant to be 0.73 +- 0.03, and then also claim the age of the universe to be 13.7 GY. The two numbers are not consistent from each other.
If you are too lazy to calculate, Look at the bottom of this page:
t0 = 9.8 GY/h0
9.8 divided by 0.73 is 13.4, not 13.7.
3. Dear Quantoken,
the lifetime of the Universe would only be related to the Hubble constant in the way you believe if the Hubble constant were constant during the history of the Universe which is not the case.
The fact that H=1/T holds to a good accuracy is a complete accident, and it won't hold in the far future. The Hubble constant will approach the de Sitter constant while the time from the Big Bang will grow.
Some parts of the history of the Universe had seen faster expansion than today, some parts had seen slower expansions, and if you combine it, H=1/T is almost exact but not quite.
I agree that neutron's lifetime is 885.7 seconds +- 0.8 seconds.
Best
Lubos
4. The interesting thing about cranks like Quantoken, is that since they cannot do real science, the nitpick over numbers to the nth significant digit. They do not and cannot realize that for measured numbers, such as those for the lifetime of free neutrons, there is an error in the measurement.
NIST quotes a neutron lifetime of 885.8 ± 3.4 s.
Therefore the number quoted in the paper is within the error bars given by the NIST results.
The 3.4 second is not a 6 sigma error.
There are many different experiments underway, some ultra cold trapping experiments reporting other lifetimes, such as 878.5 ± 0.7 ± 0.3
http://www.aas.org/publications/baas/v36n5/aas205/643.htm
(Note that the paper is still closer than Quantoken)
The fact of the matter is that there is no "world accepted" neutron lifetime. (If there were, it would be from a standards lab like NIST) At most there is a current best limit. Such is the reality of experimental physics and that reality states that the number will continued to be refined. Cranks like Quantoken will find one number and never accept any other.
Once again, Quantoken shows his true crackpot colors.
5. On other hand, see the figure in hep-ex/0504034 to check were are we about neutron decay. It seems that some sigmas between different experiments are usual in neutron measures.
6. Just to clarify: the Serebrov et al. result is the one six sigmas away of the pdg 2004 average. Now, the page quantoken refers to does not "adapt" this result, they just say that it should be "a shift several times the reported errors" and then remark that "This (Serebrov's) shorter lifetime lowers the predicted best fit helium abundance". All the goal of the parragraph is to stress how "the uncertainties in nuclear parameters" are the main source of systematic uncertainty now.
7. Mike Varney:
I really don't have high expectations of you, who just graduated from college a short time period ago, and who now works a full time job of $6 an hour flipping hamberger in McDonalds, plus a part time job working$2 an hour slave labor in a professor's lab as a janitor (they call it research assistant, btw). But a hamberger flipper should know the difference between a careless typo, and experimental error. BTW any one should know what Particle Data Group is, and be able to look up the authoritative numbers.
Your web blog is too filthy for me to visit again, Mike Whinning.
8. Quit projecting your inadequacies and employment dreams on me, Quantoken.
You would not even be able to get a job at Taco Bell, much less Burger King or McDonalds.
And good riddance of you from my blog. Lubos is too nice to cranks like yourself.
9. Mike Whinning:
You are quote correct that I would not be able to get a job in those three fast food restraurants. They look at my background and well above 6 figure income and would be scared to death to have me onboard for over qualification. As a matter of fact I have stopped eating in those restrarants for about 10 years now because their food is under-qualified for me. These restuarants hire only illegal immigrants and under age teenages like you, Mike.
Quantoken
10. Hi nice blog .I need to post resumes .can anybody send links of that sites.
Thank you.........
|
{}
|
Autocall Macros for Postprocessing
Although PROC MCMC provides a number of convergence diagnostic tests and posterior summary statistics, PROC MCMC performs the calculations only if you specify the options in advance. If you wish to analyze the posterior draws of unmonitored parameters or functions of the parameters that are calculated in later DATA step calls, you can use the autocall macros in Table 59.44.
Table 59.44: Postprocessing Autocall Macros
Macro Description %ESS Effective sample sizes %GEWEKE* Geweke diagnostic %HEIDEL* Heidelberger-Welch diagnostic %MCSE Monte Carlo standard errors %RAFTERY Raftery diagnostic %POSTACF Autocorrelation %POSTCOR Correlation matrix %POSTCOV Covariance matrix %POSTINT Equal-tail and HPD intervals %POSTSUM Summary statistics *The %GEWEKE and %HEIDEL macros use a different optimization routine than that used in PROC MCMC. As a result, there might be numerical differences in some cases, especially when the sample size is small.
Table 59.45 lists options that are shared by all postprocessing autocall macros. See Table 59.46 for macro-specific options.
Table 59.45: Shared Options
Option
Description
DATA=SAS-data-set
Input data set that contains posterior samples
VAR=variable-list
Specifies the variables on which you want to carry out the calculation.
PRINT=YES | NO
Displays the results. The default is YES.
OUT=SAS-data-set
Specifies a name for the output SAS data set to contain the results.
Suppose that the data set that contains posterior samples is called post and that the variables of interest are defined in the macro variable &PARMS. The following statements call the %ESS macro and calculates the effective sample sizes for each variable:
%let parms = alpha beta u_1-u_17;
%ESS(data=post, var=&parms);
By default, the ESS estimates are displayed. You can choose not to display the result and save the output to a data set with the following statement:
%ESS(data=post, var=&parms, print=NO, out=eout);
Some of the macros can take additional options, which are listed in Table 59.46.
Table 59.46: Macro-Specific Options
Macro
Option
Description
%ESS
AUTOCORLAG=numeric
Specifies the maximum number of autocorrelation lags used in computing the ESS estimates. By default, AUTOCORLAG=MIN(500, NOBS/4), where NOBS is the sample size of the input data set.
HIST=YES|NO
Displays a histogram of all ESS estimates. The default is NO.
%HEIDEL
SALPHA=numeric
Specifies the level for the stationarity test. By default, SALPHA=0.05.
HALPHA=numeric
Specifies the level for the halfwidth test. By default, HALPHA=0.05.
EPS=numeric
Specifies a small positive number such that if the halfwidth is less than times the sample mean of the remaining iterations, the halfwidth test is passed. By default, EPS=0.1.
%GEWEKE
FRAC1=numeric
Specifies the earlier portion of the Markov chain used in the test. By default, FRAC1=0.1.
FRAC2=numeric
Specifies the latter portion of the Markov chain used in the test. By default, FRAC2=0.5.
%MCSE
AUTOCORLAG=numeric
Specifies the maximum number of autocorrelation lags used in computing the Monte Carlo standard error estimates. By default, AUTOCORLAG=MIN(500, NOBS/4), where NOBS is the sample size of the input data set.
%RAFTERY
Q=numeric
Specifies the order of the quantile of interest. By default, Q=0.025.
R=numeric
Specifies the margin of error for measuring the accuracy of estimation of the quantile. By default, R=0.005.
S=numeric
Specifies the probability of attaining the accuracy of the estimation of the quantile. By default, S=0.95.
EPS=numeric
Specifies the tolerance level for the stationary test. By default, EPS=0.001.
%POSTACF
LAGS=%str(numeric-list)
Specifies autocorrelation lags calculated. The default values are 1, 5, 10, and 50.
%POSTINT
ALPHA=value
Specifies the level for the interval estimates. By default, ALPHA=0.05.
For example, the following statement calculates and displays autocorrelation at lags 1, 6, 11, 50, and 100. Note that the lags in the numeric-list need to be separated by commas “,”.
%PostACF(data=post, var=&parms, lags=%str(1 to 15 by 5, 50, 100));
|
{}
|
Lexus H.
# Finding profit function with integration with fixed costs! Help!
Suppose the marginal cost function (=supply curve) for these cell phone subscriptions is given by:
p= C'(q)= 12.417 + 0.8884q
p= domain input q= output.
a) assume the fixed cost for these cell phone subscriptions is \$435.60. Integrate the supply function above to find the cost function C(q).
I believe this to be: 0.4442q^2+12.41q-435.60
b) Now use both the cost function and the demand function to find the profit function for this situation. then find the number of subscriptions that will maximize profit as well as what the monthly subscription price and the maximum profit will be for this value of q.
Equations:
Marginal cost: p= C'(q)= 12.417 + 0.8884q
Believed cost function c(q)= 0.4442q^2+12.41q-435.60
Demand function= -.0166q^3 + -7.7405q^2 + -14.3634 + 148.3972
c) Now find a simplified formula for the average cost function (C(q)/q ). Then find the value of 'q' which will minimize this function. Find and state the monthly subscription price p that corresponds to this value of q; also find what the profit will be?
By:
Tutor
5 (1)
Math, microeconomics or criminal justice
## Still looking for help? Get the right answer, fast.
Get a free answer to a quick problem.
Most questions answered within 4 hours.
#### OR
Choose an expert and meet online. No packages or subscriptions, pay only for the time you need.
|
{}
|
# VICReg: Tutorial and Lightweight PyTorch Implementation
April 21, 2022
This blog post is about VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning, a 2021 paper by Adrien Bardes, Jean Ponce, and Yann LeCun that features prominently in LeCun's recent vision to make human-like AI.
Our GitHub repo self_supervised now includes a validated implementation of VICReg (along with many of our other favorite methods like SimCLR, MoCo, and BYOL) and is very simple to set up. We also provide a simple Colab notebook with our VICReg implementation running so that you can start experimenting today.
Here, we present a friendly tutorial on self-supervised learning with VICReg, digging into the intuitions underlying the equations through explanations, visualizations, and code snippets.
## Introduction
Self-supervised learning leverages unlabeled data to learn meaningful representations that can be adapted to a variety of upstream tasks. VICReg is the latest in a progression of self-supervised methods for image representation learning. Notable precursors include SimCLR, MoCo, BYOL, and SimSiam, each of which strives to eliminate or replace something that was previously thought necessary. VICReg continues in this vein, presenting a model that supplants all previous tricks with two statistics-based regularization terms on top of a simple invariance-preserving loss function.
If you're new to self-supervised representation learning in computer vision, don't worry; this post includes brief background and related work sections to get you up to speed.
If you're already familiar with other self-supervised methods, feel free to skip ahead to the section titled "VICReg."
## Background
Traditional machine learning is mostly supervised, meaning that it's trained on "ground truth" output labels for each input. Self-supervised learning automatically generates those output labels directly from the input data. For example, many modern language models like BERT are trained to guess the missing words that have been masked from raw text data. One obvious advantage of self-supervised learning is that it enables training without requiring humans to generate output labels by hand.
Self-supervised methods have recently gained traction in computer vision, where the state-of-the-art was previously dominated by supervised learning on datasets like ImageNet. The key insight underpinning these new methods is simple: input images that are similar according to a human should be similar according to the model. By augmenting an image in some semantics-preserving way (meaning the pixel values are not necessarily the same, but a human would still register them as being versions of the same image) we can generate pairs of images that should be encoded as similar vectors by the model.
Augmented versions of the original picture of a dog (a). Source: SimCLR post.
In detail, an input image is augmented to create two versions, $x$ and $x'$. This augmentation is often some combination of random crops, re-orientations, color perturbations, and noise injections. The two versions form a positive pair, with the loss function seeking to maximize the similarity of their representations. In some cases, the loss function simultaneously seeks to minimize the similarity of negative pairs—i.e. all other pairs—either directly or indirectly. Architecturally, the two versions of the image go through two networks whose weights are usually shared ("Siamese networks") for at least some parts of the architecture.
The main challenge with Siamese networks is that there is a trivial solution: the two branches can learn to produce constant and identical output vectors, thereby satisfying the similarity condition without ever learning anything useful. This is often referred to as "mode collapse." Approaches to avoiding mode collapse fall into two main camps, discussed in the next section.
## Previous Work
Here we recap some of the most well-known prior works in self-supervised image representation learning and how they address the mode collapse problem. For a more exhaustive literature review, see e.g. Section 2 of the VICReg paper.
Explicitly contrastive methods: In these methods, mode collapse is avoided by including a repulsive term in the loss function that pushes negative pairs away from each other. Popular models SimCLR and MoCo mainly differ from one another in how they handle the need for a large number of negative pairs; SimCLR requires a large batch size, whereas MoCo maintains a memory bank of negatives from past batches. There is also SwAV, which does contrastive learning on the scale of clusters rather than individual images, i.e. simultaneously clustering the data while enforcing that different views of the same image are assigned to the same cluster.
Diagrams for SimCLR, MoCo, BYOL, and SimSiam.
Asymmetric network methods: These methods don't explicitly contrast negative pairs, but they avoid mode collapse by incorporating architectural tricks that introduce some asymmetry between the twins of the Siamese networks. BYOL continues a key idea from MoCo, in which the weights of one branch (momentum branch) are updated based on an exponential moving average of the weights of the other (online branch). However, BYOL also adds a prediction head to the online branch, showing that this removes the need for contrastive loss altogether. SimSiam takes things a step further, showing that momentum is not needed either—just the predictor and a stop-gradient to keep the backprop flowing through the online branch only. Since these methods do not rely on a large batch size or memory queue, they are more efficient, not to mention conceptually simple. However, how they avoid mode collapse is not fully understood, and both seem to critically require normalization. (To learn more about BYOL and our intuitions about how it uses batch norm, check out our other blog post.)
VICReg additionally takes inspiration from Barlow Twins, where the objective function drives the cross-correlation matrix of representations produced by the two branches towards the identity matrix. This captures the attraction between positive pairs and the repulsion between negative pairs while also decreasing redundancy between the different components of the vectors.
## VICReg
### High-Level Conceptual Description
VICReg has the same basic architecture as its predecessors; augmented positive pairs $x, x'$ are fed into Siamese encoders that produce representations $y, y'$ which are then passed into Siamese projectors that return projections $z, z'$.
Diagram for VICReg.
However, unlike its predecessors, the model requires none of the following: negative examples, momentum encoders, asymmetric mechanisms in the architecture, stop-gradients, predictors, or even normalization of the projector outputs. Instead, the heavy lifting is done by VICReg's objective function, which contains three main terms: a variance term, an invariance term, and a covariance term.
Let's break down each piece conceptually:
• Variance. This regularization term constrains the variance along the batch dimension to be above some threshold for every embedding dimension, explicitly discouraging mode collapse.
• Invariance. This term is the primary objective. Since the fundamental principle is that the representations produced by the model should be invariant to semantics-preserving data augmentations, the objective is a similarity metric to be minimized between positive pairs. However, this metric is not explicitly contrastive and thus does not require negative pairs or momentum.
• Covariance. This regularization term forces the covariance matrix of the embeddings to be as close to diagonal as possible, encouraging the model to spread information across its embedding dimensions. In other words, it discourages dimension collapse.
Visualizing VICReg's architecture and loss function. Source: the VICReg paper.
### Math and Pseudocode Description
Now that we understand the method conceptually, we can dig into the math. If you're someone who thinks better in code, we also include actual snippets from our PyTorch implementation.
In the equations below, let $Z$ be the $n \times d$ matrix representing a batch, where $n$ and $d$ are the batch size and embedding dimension, respectively. Let $z_{i:}$ be the $i$th vector in the batch and let $z_{:j}$ be a vector composed of the $j$th element of each vector in the batch.
Visualizing the math. Variance is calculated across the batch for each embedding variable. Covariance is calculated between pairs of embedding variables.
#### Variance
The variance term $v(Z)$ captures the variance of each embedding variable over a batch:
$\text{Var}(z_{:j}) = \frac{1}{n-1}\displaystyle{\sum_{i=1}^n}(z_{ij}-\bar{z}_{j})^2, \ \ \bar{z}_{j} = \frac{1}{n}\displaystyle{\sum_{i=1}^n}z_{ij} \\ v(Z) = \frac{1}{d}\displaystyle{\sum_{j=1}^{d}}\max(0,\gamma-\sqrt{\text{Var}(z_{:j})+\epsilon}$
where $\gamma$ is the target value for the standard deviation (they choose $\gamma = 1$), and $\epsilon$ is a small scalar put in place to prevent numerical instabilities (they choose $\epsilon = 0.0001$).
Notice that minimizing $v(Z)$ means forcing the batch-wise standard deviation to be above $\gamma$. As soon as this target is achieved, $v(Z)$ bottoms out at $0$. A hinge function is used here because the point is not to encourage ever-increasing variance; higher variance isn't necessarily better, it just needs to be above a certain threshold to avoid catastrophic failure i.e. mode collapse.
In PyTorch code:
# variance lossstd_z_a = torch.sqrt(z_a.var(dim=0) + self.hparams.variance_loss_epsilon)std_z_b = torch.sqrt(z_b.var(dim=0) + self.hparams.variance_loss_epsilon)loss_v_a = torch.mean(F.relu(1 - std_z_a))loss_v_b = torch.mean(F.relu(1 - std_z_b))loss_var = loss_v_a + loss_v_b
#### Invariance
The invariance term $s(Z,Z')$ captures the invariance between positive pairs of embedding vectors:
$s(Z,Z') = \frac{1}{n}\displaystyle{\sum_i}||z_i-z'_i||_2^2$
This is just a simple mean-squared Euclidean distance metric. Notably, the $z$ vectors are un-normalized. In the paper, the authors do some experiments using the cosine similarity metric of SimSiam (which has the effect of projecting the vectors onto the unit sphere) instead. They find that performance drops a bit with this type of loss term, and argue that it's too restrictive, especially since their covariance regularization term already prevents dimension collapse.
In PyTorch code:
# invariance lossloss_inv = F.mse_loss(z_a, z_b)
#### Covariance
The covariance term $c(Z)$ captures the covariance between pairs of embedding dimensions:
$C(Z) = \frac{1}{n-1}\displaystyle{\sum_{i=1}^n}(z_{i:} - \bar{z}_{i:} )(z_{i:} - \bar{z}_{i:})^T, \ \ \bar{z}_{i:} = \frac{1}{n}\displaystyle{\sum_{i=1}^n}z_{i:} \\ c(Z) = \frac{1}{d}\displaystyle{\sum_{\ell \neq m}}C(Z)^2_{\ell m}$
This one can be a bit tough to wrap your mind around dimensionally. Note that $z_{i:}$ and $\bar{z}_{i:}$ are both vectors of length $d$, resulting in $d \times d$ covariance matrix $C$. Whereas $\text{Var}(z_{:j})$ returns a number for each column vector $z_{:j}$, $C(Z)_{\ell m}$ returns a number for the covariance between the centered versions of $z_{:\ell}$ and $z_{:m}$. Minimizing $c(Z)$ means minimizing the off-diagonal components of the covariance matrix between centered embedding variables.
In PyTorch code:
# covariance lossN, D = z_a.shapez_a = z_a - z_a.mean(dim=0)z_b = z_b - z_b.mean(dim=0)cov_z_a = ((z_a.T @ z_a) / (N - 1)).square() # DxDcov_z_b = ((z_b.T @ z_b) / (N - 1)).square() # DxDloss_c_a = (cov_z_a.sum() - cov_z_a.diagonal().sum()) / Dloss_c_b = (cov_z_b.sum() - cov_z_b.diagonal().sum()) / Dloss_cov = loss_c_a + loss_c_b
#### Combined Loss Function
The loss function is a weighted combination of these three terms:
$\mathcal{L} = \displaystyle{\sum_{i\in\mathcal{D}}\sum_{t' \sim \mathcal{T}}}[\lambda s(Z,Z') + \mu\{v(Z)+v(Z') \} + \nu\{c(Z)+c(Z')\}]$
where $\lambda, \mu, \nu$ are hyper-parameters (set to $\lambda = \mu = 25, \nu = 1$ in the paper for the baseline) and the summations are over images $i$ and augmentations $t'$.
In PyTorch code:
weighted_inv = loss_inv * self.hparams.invariance_loss_weightweighted_var = loss_var * self.hparams.variance_loss_weightweighted_cov = loss_cov * self.hparams.covariance_loss_weight
loss = weighted_inv + weighted_var + weighted_cov
That's it! Thanks to this three-piece loss function with its variance and covariance regularization terms, VICReg avoids the need for negative pairs, momentum encoders, stop-gradients, predictors, or even batch norm layers.
Impact on Top-1 ImageNet accuracies when including a momentum encoder (ME), stop-gradient (SG), predictor (PR), batch norm (BN), or regularization (Var/Cov) in BYOL, SimSiam, or VICReg. Source: Table 4 of the VICReg paper.
As the table above shows, these simple regularization terms prevent VICReg from collapsing even in the absence of the architectural elements required by other models. Indeed, the variance term alone is sufficient to prevent mode collapse, although the covariance term further boosts performance. Additionally, we see that these regulatization terms can be used to marginally improve the performance of other methods, or to save them from mode collapse when they are stripped of some of their previously critical components.
|
{}
|
## anonymous 5 years ago ow do you do (5/4)^-2
1. anonymous
1/ (5/4)^2
2. anonymous
$(\frac{5}{4})^{-2}=(\frac{4}{5})^2=\frac{16}{25}$
3. anonymous
Then how do you do (5/4)^2
4. anonymous
negative exponent means take the reciprocal. flip it. square by multiplying by itself.
5. anonymous
Ahah!
6. anonymous
easy to square a fraction
7. anonymous
8. anonymous
ok lets go slow. the exponent is negative, so get it out of the denominator and put it in the numerator. then the exponent will be positive.
9. anonymous
so its simply 36^1/2?
10. anonymous
$\frac{1}{64^{-\frac{1}{2}}}=64^{\frac{1}{2}}$
11. anonymous
yes. you are right
12. anonymous
i meant 36 not 64!
13. anonymous
Power over root right?
14. anonymous
zactly
15. anonymous
Thanks, I got some more Q's, but ill make a new thread so i can medal more peeps
16. anonymous
17. anonymous
Cool!
|
{}
|
# American Institute of Mathematical Sciences
March 2015, 8(1): 117-151. doi: 10.3934/krm.2015.8.117
## Stability of the stationary solution of the cauchy problem to a semiconductor full hydrodynamic model with recombination-generation rate
1 School of Mathematics and Statistics, Northeast Normal University, Changchun, MO 130024
Received July 2014 Revised August 2014 Published December 2014
We study the Cauchy problem of a 1-D full hydrodynamic model for semiconductors where the energy equations are included. In the case of recombination-generation effects between electrons and holes being taken into consideration, the existence and uniqueness of a subsonic stationary solution of the related system are established. The convergence of the global smooth solution to the stationary solution exponentially is proved as time tends to infinity.
Citation: Haifeng Hu, Kaijun Zhang. Stability of the stationary solution of the cauchy problem to a semiconductor full hydrodynamic model with recombination-generation rate. Kinetic & Related Models, 2015, 8 (1) : 117-151. doi: 10.3934/krm.2015.8.117
##### References:
show all references
##### References:
[1] Shaoyong Lai, Yong Hong Wu. The asymptotic solution of the Cauchy problem for a generalized Boussinesq equation. Discrete & Continuous Dynamical Systems - B, 2003, 3 (3) : 401-408. doi: 10.3934/dcdsb.2003.3.401 [2] Belkacem Said-Houari, Radouane Rahali. Asymptotic behavior of the solution to the Cauchy problem for the Timoshenko system in thermoelasticity of type III. Evolution Equations & Control Theory, 2013, 2 (2) : 423-440. doi: 10.3934/eect.2013.2.423 [3] Boling Guo, Guangwu Wang. Existence of the solution for the viscous bipolar quantum hydrodynamic model. Discrete & Continuous Dynamical Systems, 2017, 37 (6) : 3183-3210. doi: 10.3934/dcds.2017136 [4] Chengxia Lei, Yihong Du. Asymptotic profile of the solution to a free boundary problem arising in a shifting climate model. Discrete & Continuous Dynamical Systems - B, 2017, 22 (3) : 895-911. doi: 10.3934/dcdsb.2017045 [5] Ellen Baake, Michael Baake, Majid Salamat. The general recombination equation in continuous time and its solution. Discrete & Continuous Dynamical Systems, 2016, 36 (1) : 63-95. doi: 10.3934/dcds.2016.36.63 [6] Dominika Pilarczyk. Asymptotic stability of singular solution to nonlinear heat equation. Discrete & Continuous Dynamical Systems, 2009, 25 (3) : 991-1001. doi: 10.3934/dcds.2009.25.991 [7] Jiang Xu. Well-posedness and stability of classical solutions to the multidimensional full hydrodynamic model for semiconductors. Communications on Pure & Applied Analysis, 2009, 8 (3) : 1073-1092. doi: 10.3934/cpaa.2009.8.1073 [8] Haifeng Hu, Kaijun Zhang. Analysis on the initial-boundary value problem of a full bipolar hydrodynamic model for semiconductors. Discrete & Continuous Dynamical Systems - B, 2014, 19 (6) : 1601-1626. doi: 10.3934/dcdsb.2014.19.1601 [9] Ghendrih Philippe, Hauray Maxime, Anne Nouri. Derivation of a gyrokinetic model. Existence and uniqueness of specific stationary solution. Kinetic & Related Models, 2009, 2 (4) : 707-725. doi: 10.3934/krm.2009.2.707 [10] Ellen Baake, Michael Baake, Majid Salamat. Erratum and addendum to: The general recombination equation in continuous time and its solution. Discrete & Continuous Dynamical Systems, 2016, 36 (4) : 2365-2366. doi: 10.3934/dcds.2016.36.2365 [11] Yongming Liu, Lei Yao. Global solution and decay rate for a reduced gravity two and a half layer model. Discrete & Continuous Dynamical Systems - B, 2019, 24 (6) : 2613-2638. doi: 10.3934/dcdsb.2018267 [12] Minhajul, T. Raja Sekhar, G. P. Raja Sekhar. Stability of solutions to the Riemann problem for a thin film model of a perfectly soluble anti-surfactant solution. Communications on Pure & Applied Analysis, 2019, 18 (6) : 3367-3386. doi: 10.3934/cpaa.2019152 [13] Ling Mi. Asymptotic behavior for the unique positive solution to a singular elliptic problem. Communications on Pure & Applied Analysis, 2015, 14 (3) : 1053-1072. doi: 10.3934/cpaa.2015.14.1053 [14] Masataka Shibata. Asymptotic shape of a solution for the Plasma problem in higher dimensional spaces. Communications on Pure & Applied Analysis, 2003, 2 (2) : 259-275. doi: 10.3934/cpaa.2003.2.259 [15] Qiao Liu, Shangbin Cui. Regularizing rate estimates for mild solutions of the incompressible Magneto-hydrodynamic system. Communications on Pure & Applied Analysis, 2012, 11 (5) : 1643-1660. doi: 10.3934/cpaa.2012.11.1643 [16] Jing Li, Boling Guo, Lan Zeng, Yitong Pei. Global weak solution and smooth solution of the periodic initial value problem for the generalized Landau-Lifshitz-Bloch equation in high dimensions. Discrete & Continuous Dynamical Systems - B, 2020, 25 (4) : 1345-1360. doi: 10.3934/dcdsb.2019230 [17] Ran Zhang, Shengqiang Liu. On the asymptotic behaviour of traveling wave solution for a discrete diffusive epidemic model. Discrete & Continuous Dynamical Systems - B, 2021, 26 (2) : 1197-1204. doi: 10.3934/dcdsb.2020159 [18] Guillaume Bal, Alexandre Jollivet. Stability estimates in stationary inverse transport. Inverse Problems & Imaging, 2008, 2 (4) : 427-454. doi: 10.3934/ipi.2008.2.427 [19] Francesca Crispo, Paolo Maremonti. A remark on the partial regularity of a suitable weak solution to the Navier-Stokes Cauchy problem. Discrete & Continuous Dynamical Systems, 2017, 37 (3) : 1283-1294. doi: 10.3934/dcds.2017053 [20] Libin Wang. Breakdown of $C^1$ solution to the Cauchy problem for quasilinear hyperbolic systems with characteristics with constant multiplicity. Communications on Pure & Applied Analysis, 2003, 2 (1) : 77-89. doi: 10.3934/cpaa.2003.2.77
2019 Impact Factor: 1.311
|
{}
|
## hawkfalcon How do I simplify this expression? one year ago one year ago
1. hawkfalcon
$\frac{1-(\sin(x)+\cos(x))^2}{2\sin(x)}$
2. jim_thompson5910
$\Large \frac{1-(\sin(x)+\cos(x))^2}{2\sin(x)}$ $\Large \frac{1-(\sin^2(x)+2\sin(x)\cos(x)+\cos^2(x))}{2\sin(x)}$ $\Large \frac{1-(1+2\sin(x)\cos(x))}{2\sin(x)}$ $\Large \frac{1-1-2\sin(x)\cos(x)}{2\sin(x)}$ $\Large \frac{-2\sin(x)\cos(x)}{2\sin(x)}$ $\Large -\cos(x)$
3. hawkfalcon
Thank you both, that makes sense:)
4. jim_thompson5910
np
|
{}
|
Tag Info
0
One possibility is that your polarizer interacts with the other parts of your setup (for example, forms a resonant cavity with some other interfaces that enhances transmission). You can test this hypothesis by rotating your polarizer (is the intensity always brighter?). If you include a drawing of your setup, it would be easier to figure out the underlying ...
1
This is not possible, as you will see if you derive simple harmonic motion from first principles. The defining equation of simple harmonic motion is: $$\ddot x \propto -x$$ This is linked closely to Hooke's Law, where $F = -kx$, where $F$ is the restoring force, $k$ is the spring constant, and $x$ is the extension from the equilibrium position. So, by ...
1
Harmonic motion in physics isn't so much defined by a periodic solution as it is defined by a certain differential equation. The equation for the harmonic oscillator is $$\ddot{x} + kx = 0,$$ where $k$ is some constant. The general solution to this equation can be written $$x(t) = A \sin(\sqrt{k}t) + B \cos(\sqrt{k}t),$$ where $A$ and $B$ depend on the ...
-1
Let's break this problem into steady state and transient parts (if it works for circuits...). Regarding transient tones, all bets are off. String musicians (I'd love to speak for the rest of music-dom, but I mostly play strings) spend lots of practice time starting a tone with clarity and the correct volume and without any crunch or unwanted harmonics. ...
1
It seems that the harmonic (integer multiple) overtones of a sound usually all have the same phase. Is this true...? No, I don't think this is generally true, although it may be true for certain instruments. What led you to believe this? In trumpet tones, for example, the different harmonics come up at different times during the attack, so it seems ...
1
This is an excellent question, that deserves a more thoughtful answer (no offense guys). The question that Unknown is asking (I think) is why should there be a node or antinode at each end of a cylinder? When the end is closed, it is fairly easy to see that the air cannot move any further along, so the displacement of the air will be zero - in other ...
4
A "sharp" tip typically has a finite curvature; there will be a very small part of the "tip" that is therefore angled at such a way that light will be reflected off it. The sharper the tip, the smaller the radius of curvature, and the smaller the "twinkle" or glint. The second effect is diffraction: Light that passes an object will be diffracted. For ...
1
You are basically correct. An air-filled cylinder that's open on both ends will actually resonate at multiple resonant frequencies, given by $$f=\frac{n v}{2L}$$ where $n$ is a positive integer, $L$ is the length of the tube, and $v$ is the speed of sound in air. The fundamental frequency, which generally contains the most energy, is the case when $n=1$, ...
1
Depends on the ends of the tube: An open end is a displacement anti-node (unrestricted), a closed end is a displacement node (restricted). Thus, a tube that is open at one end and closed at the other will have natural frequency and harmonics such that there is a node on the closed end and anti-node on the open end (a quarter wavelength). If both ends are ...
1
Paper cones were originally chosen for their rigidity and lightness, so they can move air quickly without deforming and couple to a motor easily at the center while also being easy to suspend from the basket by their perimeter with a simple corrugation or foam/rubber surround. Physics only played a major part in the ease of construction and performance was ...
-2
the waves will obliterate each other but they will still exist, they just won't be moving they would just change form (energy cannot be destroyed it can only change form) so when the waves meet they will cancel each other so sound will change to potential and kinetic will change to sound or whatever
-3
If an atom emits energy hf, it emits also an angular momentum (spin). That combination is called "photon" or "wave packet". Linking the appropriate formulas from QM and E&M waves, you get the diameter of the wave packet (about λ/2) but not the length. The radius and the direction of propagation do not change as long as the wave packet is not disturbed. ...
0
What I am missing in your question, is the dimension of $\lambda$. You can see from your second formula, that the dimension of $\lambda$ is $\text{m}^{-2}$ (because of the Laplacian). Further, to add a bit more information, Normally we would write your Eigenvalue problem as $$\Delta v = - k^2 v.$$ We use $k^2$ here mostly because it will make the answer ...
1
The first thing that distinguishes a shock wave from an "ordinary" wave is that the initial disturbance in the medium that causes a shock wave is always traveling at a velocity greater than the phase velocity of sound (or light) in the medium. Notice that I said light - that is because there is also a kind of electromagnetic analogue to a shock wave known as ...
1
The phase of a sinusoidal wave is represented as: $$y(x, t) = a \times \sin(\omega t + \phi)$$ So the time evolution (i.e the part that changes with $t$) is only the $\omega t$ term and not the pure constant phase $\phi$ term. $\phi$ can depend on $x$ or other things but not $t$. This is what is meant that the phase is constant. In case you mean sth ...
1
Even though the forces started at different times, is there any displacement of the metal box in any of the situations? Or is there any movement at all but is the net displacement zero? Sure. If you think of each force as causing an acceleration, the first one begins an acceleration in one direction, the second an acceleration in the other (or a ...
0
Imagine you standing some distance from me, and you move a charge back and forth along the line joining us. Waldir, you are quite right that the electric field I observe will fluctuate, and that these fluctuations will not reach me instantly - they will travel at the speed of light. However, this is not electromagnetic radiation. Why?- The electric field ...
0
Electrical amplification is about using an input signal to modulate a larger amount of power that comes from a separate power supply of some sort. And yes, there is such a thing as a magnetic amplifier that works on a very similar principle (even though the inputs and outputs are usually electrical). But you can't get an output value that's greater than the ...
2
There are several ways to amplify the magnetic field, though the mechanism is not same as for electrical signal amplification, but still they are fruitful. compression:- since a magnetic flux through a surface remains conserved, if we compress the field lines or stretch (or fold) the field line then we can increase the energy by working against the field ...
0
In QM the Schrödinger equation, is the equivalent of Newton's law in Classical Mechanics. The Schrödinger equation describes the state of a quantum system (i.e. atoms, subatomic particles etc.), and how the quantum system changes over time. I think you are getting confused because there are two main places where the term wave appears. (1) The Double Slit ...
1
"Bright light can never hurt your eyes" seems false to me… enough energy focused on the retina will cause damage, regardless of the wavelength. Otherwise you would not need to wear laser goggles… That aside, materials typically have certain ranges where they absorb light more strongly than others. There is no hard and fast rule for this, but if you google ...
0
Uv goes thru glass, I thought that comment strange when I watched it. Laminated glass (which it could have been) would shield about 95% of the uv, I believe due to the resin interlayer.
2
The following picture (from http://hyperphysics.phy-astr.gsu.edu/hbase/waves/imgwav/circonwave.gif) gives you a better sense of how to reconcile your observation with "circular motion": As you can see - there is circular motion for particles at the surface: they don't have to go under water to do it though. Incidentally this also shows that in the trough ...
0
Imagine the horn has a variable speed $v(t)$ relatively to the observer, the position of the horn relatively to the observer is given by $x(t)=\int_0^t v(u) du$, supposing $x(0)=0$ Suppose we are interested only at the (periodic) maximums of the sound, corresponding to a period of the emitted sound, at $t_0$, $t_1=t_0 + T$, where $T$ is the sound period, ...
2
The question, answers and explanation are poorly worded. Since the observer's velocity changes, the nature of that change, his/her acceleration, is significant. If the observer begins to accelerate away from the source, and continues to accelerate, then the perceived frequency will continue to decrease (as long as the observer stays sub-sonic!) At any ...
2
If the observer moves away with a constant velocity, they will hear a different frequency, but $f'$ will remain the same. Perhaps the problem meant that the observer accelerated away, or it is possible that the textbook editors made a mistake (which happens more frequently than most people realize). As for your second question - if the observer ...
1
The energy flux of an acoustic wave is $$\vec J = \vec v p \;\;\;\;\;\;\;\;\;\;\;\;\; (1)$$ The relevant energy density to be used in these calculation is actually $p+1/2 \rho v^2$, but since we are discussing a small amplitude wave (= no shock wave), $v$ is an infinitesimal quantity; thus $1/2\rho v^2$ is lower order than $p$ (second vs. first), thus it ...
0
There is one limit in which this computation is easy to do. Let us consider a massive, perfectly rigid ball striking a a perfectly rigid floor. In this case, there is nothing oscillating, so that we can neglect sound generation by oscillations in the ball or in the floor. Yet there will be sound, because the ball displaces air in its fall, and the air ...
1
The confusion you face is a historical one. Originally the interactions of different bodies was thought to happen at a distance more or less instantly, such as the case in the time of Newton and his gravitational theory. But when we discovered electromagnetism, and in particular, when Maxwell completed his formulation of Electromagnetism as contained in ...
3
A wavefront (your signal) has a fixed amount of energy given to it by the transmitter. Whatever happens to the wave once it leaves the transmitter is independent of the transmitter, thus receiving a signal does not drain any additional energy from the transmitter (though it can drain energy from the wavefront itself). EDIT: As pointed out by @Alfred ...
8
A wave can propagate in any medium that is: a) elastic b) less than critically damped Neither homogeneity nor isotropy are necessary. Any elastic system will return to it's original state when deformed, the question is just whether the deformation can propagate, and this is down to how quickly the energy of the deformation is dissipated. If the damping ...
4
A wave is generated by a disturbance in a medium. For a wave to propagate, do not necessarily need a medium. For example, an electromagnetic wave can propagate in vacuum, while a sound wave requires an elastic medium to travel. The requirements for the propagation of a wave, are dependent on the nature of the wave.
0
There is no such thing as “conduction of electric wave in conductor” (and I am unsure about where “electric waves” can be observed). There is a conduction of electric current in a conductor. One can say that electric potential in a piece of conductor is always the same (so the electric field is zero inside it), although it is not always so due to resistance, ...
1
The continuous stream of air that you are blowing in, it doesn't enter the pipe continuously. When the stream of air hits the hard edge in an organ pipe, it flaps in and out of it due to the difference in the density of the air outside and inside the pipe. This oscillation of the air in and out, it will be a periodic energy supply for the standing wave in ...
3
I am answering your question here, but please provide more information about your goals/experience, as specified by the comments. Primarily, I would like to say that I was planning on answering your question much less in depth than I ended up doing. However, while brushing up, I got carried away and figured out some very interesting calculations concerning ...
4
The trouble is that your table, or whatever object it is, will act as a waveguide. That's because the sound waves will (partially) reflect of the wood/air surface then travel back into the table and interfere with other waves. The result is going to be hideously complicated to calculate. As Luboš says in a comment, if the thickness of the table is much less ...
5
Yes. Higher frequencies are attenuated more over distance than lower frequencies are, which has a rounding effect on the square wave as the upper harmonics are reduced. Reference Do low frequency sounds really carry longer distances?
1
There is a good explanation of this in Matter and Interactions vol II by Sherwood and Chabay. I no longer have the text; I will try to summarize its explanation as I remember it. The electrons in a substance are analogous to charged masses on springs. The electrons in insulators are relatively tightly bound; those in conductors are loosely bound or unbound. ...
3
An overview in layman's terms: First, it is important to note that not any electric field will induce current in a conductor, because other than the fact the intensity of the field defines the speed of each charge (bigger difference of potential), the oscillation frequency of the $\mathbf{E}$ also plays a very important role, if the frequency is too high, ...
8
The reasoning has to be the other way around: Light acts on the metal and makes the electrons move. This, however, results in an energy loss, as the electrons feel a resistance and thus the radiation loses energy. This can be formulated more precisely with counteracting electric fields. That's why all good conductors are opaque. In insulators this can not ...
59
Since cables carry electricity moving at the speed of light, why aren't computer networks much faster? Perhaps I can address your confusion with a rhetorical question: Since air carries sound moving at the speed of sound, why can't I talk to you much faster? The speed of sound is much slower than light, but at 340 m/s in air, it's still pretty damn ...
4
"Surely this is a bottleneck" - No, it's really not. Any real-life network connection is not speed-limited by the propagation speed of the signal in the cable, but by the processing delays in the various routers, switches, and network interface processing at each end.
2
Two reasons: 1) The speed of light in a "medium" is (almost*) always slower than the speed of light in a vacuum. 2) Electricity propagating in a wire is subject to inductive and capacitive effects which slow it's progress. And even if wires were infinitely fast, integrated circuits are not. Again, inductive (a little) and capacitive (a lot) effects limit ...
3
Why only 64% What does propagation speed mean? I know there are other variables effecting the latency and perceived speed of computer network connections, but surely this is a bottle neck. Speed of signal propagation is distance the signal (packet) travels in one second. It is usually lower than $c$ because EM waves that carry the information travel in ...
11
A transmission line is made of a pair of conductors which have some resistance, inductance, capacitance, and leakage conductance. We can take all of these per unit length: The wave equation for signals in this line, in the limit of a lossless cable with $R=0$, $G=0$, is $$\frac{\partial^2 V(x)}{\partial x^2} + \omega^2 LC \cdot V(x) = 0$$ You have to be ...
21
As you've probably guessed the speed of light isn't the limitation. Photons in a vacuum travel at the speed of light ($c_o$). Photons in anything else travel slower, like in your cable ($0.64c_o$). The amount the speed is reduced by depends on the material by the permittivity. Information itself is slower still. One photon doesn't carry much ...
0
The speed of electrons that flows in the cable, i.e. the current, is only a few m/s. The EM wave propagates much faster. Anyway, the speed of a computer no depends intrinsically of the speed of electrons, but the speed of energy transfers between electronics components.
3
How sure are you that electricity travels at the speed of light? Although electricity propagation moves at the speed of an E/M wave, and not electrons, its speed depends on the dielectric constant of the material. Only in a vacuum, I think, would it travel at the speed of light.
0
If you consider a mechanical wave in a string, that is possible as long as you keep $\omega A$ fixed. That is because the energy of a mechanical wave is given by $I = \frac{1}{2}\rho v\omega^2A^2$ Many textbooks would contain a proof, and http://cnx.org/content/m12793/latest/ may help as well.
7
You should look at the form of the advanced fundamental solution of D'Alembert equation, built up in geodesically convex open sets including the source localized at the event $y$ and the test point localized at the enent $x$ receiving the wave generating by the source. The construction, at least for analytic manifolds with analytic metrics, is obtained by ...
Top 50 recent answers are included
|
{}
|
# Home
## How To: Access Server 2008 Using a Different DNS Hostname
Posted by SteveHardie | On: Jan 05 2012
You may have a situation where you would like to abriviate the name of a computer, or make it more meaningful when connecting to file or print shares.
e.g:
from \\big-long-svr-name
to \\shortname
To do this, you can setup a new DNS A record or CNAME in your Active Directory DNS and point it to the FQDN or IP address of the desired server.
However, you may run into trouble when trying to access the Windows Server 2008 share from a Windows XP Machine.
The following error appears:
You were not connected because a duplicate name exists on the network. Go to System in Control Panel to change the computer name and try again.
This is because Windows 2008 and Windows Vista support SMB 2.0. Windows XP uses SMB 1.0. In order to allow Windows XP clients to access the Windows 2008 server with an alias, you need to add a registry entry to the Windows 2008 Server.
1. Locate and click the following key in the registry of the server:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LanmanServer\Parameters
|
{}
|
# Using primary decomposition theorem to examine structurally distinct Abelian groups?
1. Oct 9, 2012
### dumbQuestion
I am just in to reviewing abstract algebra and came across a theorem I'd forgotten:
http://en.wikipedia.org/wiki/Finitely-generated_abelian_group#Primary_decomposition
(I linked to the theorem instead of writing it here just because I'm not sure how to write all those symbols here)
Anyway, this seems super useful to me because there are a lot of theorems about the group of integers modulo n and their direct sums, (for example, I know if a is a generator of Zn, then am is a generator of Zn <=> m and n are relatively prime), and these groups are easier to conceptualize. So the primary decomposition theorem seems like a nice way to be able to take any general finite abelian group and put it in terms of a direct sum of these "Easy to deal with groups" of integers modulo m.
But I have a question. All the questions in my book along the lines of "how many structurally distinct finite abelian groups of order n are there?" make use of the primary decomposition theorem and I don't understand why. For example, say I have a finite abeliian group G of order 12. The prime decompositioin of 12 is just (2)2(3) so using the primary decomposition theorem I know G is isomorphic to Z2 + Z2 + Z3 and it's also isomorphic to Z4 + Z3. But since it's isomorphic to both of these groups, wouldn't that mean these groups are isomorphic to each other as well? I mean, isomorphism preserves all the group structures, subgroups, etc., so all these groups would be pretty much structurally identical right? So how are they mutually non-isomorphic? I'm so confused!
2. Oct 9, 2012
### dumbQuestion
Re: Using primary decomposition theorem to examine structurally distinct Abelian grou
Nevermind I understand my mistake. The prime decomposition theorem doesn't say a group G will be isomorphic to ALL Of those different groups, just that it's isosmorphic to one of them. I feel really stupid now.
Ok, another question then, how do we know which one it is isomorphic to? Is there another theorem that tells us? Or do we just kind of use process of elimination by examining the different possibilities and finding which one it's structurally similar to?
And one other question, why isn't it the case that in the example I gave, Z4 + Z3 and Z2 + Z2 + Z3 are not isomorphic? I mean I know I can go to the groups, maybe see for example that Z4+Z3 is cyclic while Z2+Z2+Z3 is not and so I'd know they aren't isomorphic because of that. But what's the more general reasoning? I imagine it has something to do with the fact that for example, all the elements in {4,3} are not relatively prime to all the elements in {2,3}. is there a theorem that says that Zm + Zn is isomorphic to Zk + Zh if say m is relatively prime with k and h and n is also relatively prime with k and h?
Last edited: Oct 9, 2012
3. Oct 10, 2012
### Robert1986
Re: Using primary decomposition theorem to examine structurally distinct Abelian grou
There is a theorem that says $m,m$ are co-prime iff $Z_m \times Z_n \simeq Z_{mn}$. (Here I am abusing notation and writing the direct product for a direct sum.) So, that is why $Z_2 \times Z_2$ is not isomorphic to $Z_4$. But, as you said, every finite abelian group can be written as the direct sum of factors like $Z_{p^a}$ where $p$ is prime. For each group, these $p^a$ are called the elementary divisors of G and any two groups are isomorphic iff they have the same elementary divisors.
For example, if $G_1 = Z_2 \times Z_2 \times Z_3$ then the elementary divisors are $(2,2,3)$ and if $G_2 = Z_4 \times Z_3$ then the elementary divisors are $(2^2,3)$.
4. Oct 10, 2012
### dumbQuestion
Re: Using primary decomposition theorem to examine structurally distinct Abelian grou
Thank you so much this is exactly what I was looking for!
|
{}
|
# Throat destabilization (for profit and for fun)
Abstract : Two recent results indicate that the addition of supersymmetry-breaking ingredients can destabilize the Klebanov-Strassler warped deformed conifold throat. The first comes from an analytic treatment of the interaction between anti-D3 branes and the complex-structure modulus corresponding to the deformation of the conifold [arXiv:1809.06861]. The second comes from the numeric construction of Klebanov-Strassler black holes [arXiv:1809.08484], which stop existing above a certain value of the non-extremality. We show that in both calculations the destabilization energies have the same parametric dependence on $g_s$ and the conifold flux, and only differ by a small numerical factor. This remarkable match confirms that anti-D3 brane uplift can only work in warped throats that have an O(1000) contribution to the D3 tadpole.
Keywords :
Document type :
Preprints, Working Papers, ...
https://hal.archives-ouvertes.fr/hal-02350164
Contributor : Inspire Hep <>
Submitted on : Tuesday, November 5, 2019 - 11:48:16 PM
Last modification on : Thursday, November 14, 2019 - 5:45:38 AM
### Citation
Iosif Bena, Alex Buchel, Severin Lüst. Throat destabilization (for profit and for fun). 2019. ⟨hal-02350164⟩
Record views
|
{}
|
# Announcing pimd v2.1.7
This is a followup release to the security fix in pimd, v2.1.6. The change to use /var/lib/misc/, instead of the insecure /var/tmp/, has now been refactored into using the proper FHS recommended /var/run/pimd/ instead.
As always, check the homepage, the ChangeLog and the GIT log for more details.
|
{}
|
# Talk:Orthogonal frequency-division multiplexing
Orthogonal frequency-division multiplexing was a good articles nominee, but did not meet the good article criteria at the time. There are suggestions below for improving the article. Once these issues have been addressed, the article can be renominated. Editors may also seek a reassessment of the decision if they believe there was a mistake.
Article milestones
Date Process Result
April 14, 2006 Good article nominee Not listed
December 2, 2006 Good article nominee Not listed
February 14, 2010 Peer review Reviewed
Current status: Former good article nominee
WikiProject Telecommunications (Rated B-class)
This article is within the scope of WikiProject Telecommunications, a collaborative effort to improve the coverage of Telecommunications on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
B This article has been rated as B-Class on the project's quality scale.
## Better description of Orthogonal in this context
This article does a pretty poor job of explaining the orthogonality condition in OFDM. it says
"Conceptually, OFDM is a specialized FDM, the additional constraint being: all the carrier signals are orthogonal to each other."
This makes little sense, because the concept of all FDM is that different carriers are orthogonal, and for this reason you can put different data on different carriers. In TDM/TDMA, you use orthogonal time slots; In FDM/FDMA you use orthogonal frequency slots; in CDM/CDMA you use orthogonal Spreading codes. — Preceding unsigned comment added by 65.216.151.126 (talk) 18:41, 14 January 2016 (UTC)
I think what is important is that the subcarriers are orthogonal over a symbol period, which is by design.
## OFDMA
OFDMA links here. Should they be merged?--Gbleem 21:36, 14 December 2005 (UTC)
NO! OFDMA is a multiple access scheme which relies on OFDM.
## Doppler: "(...) sender or the receiver is moving at a high speed (...)"
I removed this sentence from the text. I am not 100% sure, but is seems very unlikely to me that normal speeds (car, train, plane, etc) would affect this, especially when we talk about frequencies around 5 GHz. I wrote a small thesis about OFDM in uni and I cannot recall any major issued with moving receivers/senders. Please correct me if anyone has more accurate information. --83.109.152.151 22:12, 20 December 2005 (UTC)
Well, at least according to recent tests by Digita in Finland, mobile use at speeds of "over 200 km/h" is no problem for Flash-OFDM operating at 450 MHz. --80.222.254.10 16:01, 22 December 2005 (UTC)
Okay, I'm removing that statement for now. R6144 01:52, 23 December 2005 (UTC)
As the Doppler frequency offset is proporcional to the carrier frequency, the offset would be different for every subcarrier thus causing loss of orthogonality Danielcohn 03:20, 1 May 2006 (UTC)
I put the sentence back since it is a problem in for example DVB-T, but I added that "Several techniques for ICI suppression are suggested, but they may increase the receiver complexity substantially." I have seen several papers on very complex equalizers for this purpose. Are there a simpler method? Can the sample rate be adopted, based on the pilot carriers?Someone said that modern DVB-T receivers are less sensitive to doppler. Howcome? Mange01 10:17, 13 October 2006 (UTC)
The problem comes from the combination of Multipath and Doppler. Consider the simple case of travelling into the direction of a steady transmitter. This slightly increases the frequency observed by the receiver. If at the same time there is a reflection of the tranmitted signal coming from the back of the driver, than the frequency shifts to a slightly lower frequency. So one tone results into two simultanious tones received. This corrupts the orthogonality and so the carrier breaks through to all the other carriers and thus introduces distortions. This is THE failure mechanism. The closer the intercarrier spacing the higher the distortions. —The preceding unsigned comment was added by 161.85.127.152 (talk) 14:47, 6 December 2006 (UTC).
Thnx for a helpful comment. User:Oli Filth has now incorporated it into the text. HOwever, in modern OFDM based systems such as DVB-H, the dopper shift does not seem to be a problem. It would be interesting to know which ICI cancellation algorithm that is used in practice. Mange01 23:00, 6 December 2006 (UTC)
The robustness against multipath can be improved by extending the guard interval duration. Usually the length of OFDM symbol will be scaled too to keep the guard overhead relatively the same. The carrier-spacing is then reciprocal with the symbol duration (1/T) and thus becomes smaller by better multi-path robustness. However the Doppler robustness will decrease at the same time, so there is a trade-off between multi-path and doppler robustness. In DVB-T they just forgot to define one intermediate intercarrier spacing. This gap appeared to be a good compromise between doppler and multi-path robustness. So they included that carrier spacing in the DVB-H standard. (PS: I'm the same writer as talk:161.85.127.152)
## Reorganization - and GA nomination
This is a very understandable article and I'd love to see it promoted to good article. I did not pass it this time because:
• The lead is too short.
• The article has structural/formatting problems.
Some suggestions on how to improve the article's structure/formatting:
• The "Digital radio and television" heading has no content beneath it and should be removed.
• The table dominates the "Wireless LAN" section.
• The "OFDM feature abstract" is a list not a section consider making it an inset for the "Characteristics" section.
• The "Ideal encoder" and "Mathematical Description" (should be "Mathematical description") sections should probably come before "Usage".
• The "History" section seems tacked on, maybe it could also be made an inset.
Examples of how to do insets are available in the Columbine high school massacre article. Please renominate once the above problems are fixed.
Cedars 00:09, 14 April 2006 (UTC)
I have restructured the text by removing overlapping text; moving the list of key features (which I divided into advantages and disadvantages) to the top; moving the list of applications to the top and adding a numerical example in blockquote; and dividing the section "Characteristics and principles of operation" into sub-sections.
I have clearified why OFDM is considered a modulation scheme as opposed to a MA scheme, and added references to OFDMA and MC-CDMA. I have also corrected som incorrect statements, for example that OFDM would be sensitive to time synchronization errors.
(Someone else has addressed most of Cedars suggestions, except using insets.) Mange01 12:29, 13 October 2006 (UTC)
## Multipath resistance only when coded?
Why does the article claim that multipath resistance exists only when combining OFDM with coding schemes? Multipath resistance is added by the fact that OFDM allows using longer symbols and therefore decreasing inefficiency caused by GI.Danielcohn 01:35, 22 June 2006 (UTC)
OFDM uses a cyclic prefix guard interval to convert a frequency-selective channel to a set of frequency-nonselective fading channels. As a consequence, intersymbol interference is avoided. The thing is, however, OFDM does not have frequency diversity. With coding, OFDM achieves diversity and performs well in multipath. ---sct
## "Ideal encoder" section
I believe that the Ideal encoder section is in severe need of revision. I'll make these changes at some point in the near future; just wondering if anyone had any thoughts before I do.
### Scrambling
Firstly, I don't think that it's necessary to include scrambling (shown as multiplication by ${\displaystyle (-1)^{k}}$ in the diagram) in a hypothetical "ideal" encoder. Secondly, it is shown as occurring in the time-domain, which is completely incorrect (see section 17.3.2.1 in the 802.11a spec, for instance). Thirdly, the reason given, "in order to have a null mean value", is also incorrect.
### Inter-symbol interference
In the second paragraph, orthogonality of the sub-carriers results in zero inter-carrier interference, not zero inter-symbol interference, and only in the case when a cyclic prefix is used, which is not mentioned or illustrated.
### Diagram
The blocks marked "ROM" are clearly meant to represent constellation mapping, but what does "ROM" stand for?
In my opinion, it would be better to replace the "impulse generator" and ${\displaystyle H_{t}(f)}$ blocks with "DAC".
It's also debatable whether illustrating frequency-domain zero-padding is necessary for an "ideal" encoder.
Oli Filth 17:47, 20 August 2006 (UTC)
All fixed. Oli Filth 13:50, 2 September 2006 (UTC)
### Image Effects
The up and down mixing described would give rise to images, is OFDM actually symetrical about it's base carrier (this is implied by the article but not explicitly stated) or is it purely above the carrier (effectively SSB, and so requiring image rejection mixers and so on? Scruffy brit 12:15, 3 April 2007 (UTC)
The up-mixing does not give rise to images (assuming a perfect mixer). OFDM is not (in general) symmetrical around its centre frequency, although it may appear that way on a spectrum analyser (as each sub-carrier generally has an identical power spectrum).
The down-mixing does indeed give rise to an image at ${\displaystyle 2f_{c}}$, hence the need for a low-pass filter after the mixing operation. This is shown in the diagram and explained in the article text. Oli Filth 12:42, 3 April 2007 (UTC)
Sorry, "Symetrical" wasn't the right term;-) But a OFDM signal does have modulation results on both sides of the carrier? Scruffy brit 11:05, 4 April 2007 (UTC)
Yes, in general. But this isn't due to mixing images. It's because the baseband OFDM spectrum straddles DC. Oli Filth 11:38, 4 April 2007 (UTC)
## Flash-OFDM 450MHZ Data Network
In 9th October 2006, Finland has licenced the 450MHz band to an operator for building a nation-wide Flash-ODSM data network. Press release in Finnish: [1]. Could this be added to the main article?
I think Flash-OFDM deserves a separate article. Mange01 (talk) 06:53, 5 September 2008 (UTC)
## Suggestion: OFDM standard comparison table
I suggest a table that summarizes the most crucial features of common OFDM systems. I have made a similar table for two broadcasting systems in the publication http://ieeexplore.ieee.org/iel5/49/20698/00957306.pdf?arnumber=957306.
Examples of features are:
• Standard name.
• Ratified year.
• Frequency band [GHz].
• Channel bandwidth [MHz].
• Number of subcarriers.
• Subcarrier spacing [kHz].
• Net bit rate [Mbps].
• Symbol length [s].
• Guard interval [s].
• Subcarriermodulation scheme.
• Inner FEC.
• Outer FEC (if any).
• Sub-carrier adaptive transmission (if any). Yes/no.
• Multiple access scheme (if any). Example: OFDMA uplink.
• Maximum travelling speed.
• Time interleaving depth [ms] (if larger than the symbol).
• Requied carrier-to-interference ratio (for AWGN without fading). Example: 5dB for BER 10^-5.
Mange01 13:11, 13 October 2006 (UTC)
## Failed GA
This is a very promising article, but it is let down by an extreme scarcity of citations, and by a number of lists that would read better if converted to prose. Once these improvements have been made, please feel free to renominate the article. MLilburne 20:20, 2 December 2006 (UTC)
## Merge with DAB really a good idea?
It has been suggested for a while that DAB COFDM section should be merged into the OFDM article, but none of us commented on the suggestion. Now it is accomplished, but I'm not sure that it was such a good idea.
Anyway, some of the merged text is overlapping with the old OFDM text, or is generic, not specific for DAB, and should therefor be removed or moved up in the article.
Secondly, Wikipedia now warns that the article has become longer than 30kB. Is that a recommended maximum length?
Should every application of OFDM be described as detailed as DAB in this article? In case the article should be extended with something, I would prefer more illustrations, and a comparison table summarizing the features of several systems.
Whats your vote? Should the merge be reverted?
Mange01 23:37, 2 December 2006 (UTC)
I completely agree; the verbatim insertion of the text from the DAB article completely disrupted this article, introduced a lot of repetition and redundancy, and also brought over a lot of the errors that were present in the DAB article. I have removed this addition, and have pasted the inserted text here; I'm not going to attempt to re-edit the DAB article; it's a complete mess; I'll leave that for someone else to sort out! Oli Filth
### Removed text
##### Modulation
The modulation used in DAB is Coded Orthogonal Frequency Division Multiplexing (COFDM). According to this acronym the three properties of COFDM are: 'C' for coding; 'O' for orthogonal modulation and 'FDM' for frequency division multiplexing. These are described here.
##### Coding
Coding refers to convolutional coding and means that the original data carried over the multiplex is deliberately manipulated by splitting it into small blocks and adding some intelligently designed redundant information to each, thus generating a data 'overhead'. The overhead bits added to each block are determined according to rules applied to the true data content of the block. After demodulation at the receiver the digital signal processor examines both the actually received data and overhead bits and regenerates what it believes to be the original data based on a set of statistical rules known as an algorithm. The regenerated data may include a number of data bit corrections. The algorithm used in DAB is known as a Viterbi algorithm, and is an example of a maximum likelihood algorithm. This works by maintaining a history of demodulated bit sequences, building up a view of their probabilities and then using these to finally select either a 0 or 1 for the bit under consideration. This type of coding is also known as an example of forward error correction (FEC).
To some extent the types of errors most likely to be present with DAB can be mathematically predicted and therefore corrected for. The addition of FEC requires extra information to be transmitted at the same time as the original traffic data and therefore requires an increased channel capacity, needing extra bandwidth, compared to if it had been uncoded. DAB carries different 'strengths' of FEC, a stronger one being used for the control of critical features in the receiver.
##### Orthogonal
Orthogonal is the mathematical term applied to two RF sinusoidal signals when their phase relationship is precisely 90 degrees. Alternatively they may be said to be in 'quadrature'. In DAB the sub-carrier frequency spacing is chosen to be the reciprocal of the active symbol period. Under this condition the DAB modulation results in successive sub-carriers having a quadrature relationship with each other. The frequency spectra components of one modulated sub-carrier will therefore integrate to zero at the corresponding components from both of the adjacent sub-carriers. This has two advantages: (a) the modulated sub-carrier spectra will efficiently occupy the allocated bandwidth with a degree of controlled overlapping and (b) simple I-Q demodulation to zero intermediate frequency (zero-IF) can be used in the receiver without needing the costly hardware overhead of many bandpass filters to extract the sub-carriers.
##### Frequency division multiplexing
Frequency division multiplexing (FDM) is the process where two or more basic information channel bandwidths or basebands are shifted in frequency and added to others to form an aggregate wider bandwidth containing the information from all of the constituent basebands. To avoid mutual interference, their bandwidths would normally require shifting (translating) in frequency and no two translated basebands would occupy any part of the same frequency spectrum. In the context of DAB, FDM refers to the manner in which the modulated sub-carriers are assembled across the allocated frequency range.
##### Modulation type
DAB uses a digital modulation type known as differential quadrature phase shift keying (DQPSK), which is an incoherent modulation scheme. DQPSK differs from the more common quadrature phase shift keying (QPSK) in that the modulated carrier phase for the current symbol being detected depends on its phase relative to that phase detected for the previous one. In QPSK it is just the absolute phase of the modulated carrier that determines the associated symbol. A differential modulation scheme can be more resillient to the typical fading scenarios of DAB. The modulation scheme also incorporates a form of Gray coding in that only one bit changes on moving from one symbol state to an adjacent one. For a constant phase progression, the consecutive set of symbols are represented by the bit pairs 00, 01, 11 and 10.
##### Time interleaving
DAB uses data buffering which enables the data symbols to be transmitted over the RF path in a different time-order than they were generated the audio source (studio). At the receiver they are re-assembled and returned to the original time-order before conversion back to analog signals to feed the receiver audio output. This process is called time interleaving. Typical multipath interference experienced in a moving vehicle is regular over time so an intelligent choice of time interleaving to some degree 'averages' out the resulting error bursts over time. This data buffering and other processing contributes to a delay, typically of a few seconds, between the studio source and the receiver. This is much longer than the equivalent delay for am FM broadcast channel which would typically be a fraction of a second. For most broadcasts such a delay would be unimportant but it does mean that, for example, real-time reference signals for setting clocks such as those re-broadcast by the BBC on DAB from their national FM service are actually quite inaccurate.[citation needed]
##### Frequency interleaving
DAB also uses frequency interleaving, a similar technique to time interleaving but applied to the sub-carriers centre frequencies in the RF spectrum instead. The data stream from the studio is deliberately not modulated serially onto sub-carriers across the frequency range, but instead in a more random way. Multipath and other forms of selective fading generally affect a relatively narrow part of the RF multiplex bandwidth at any one time so frequency interleaving would tend to average out 'bursts' of errors resulting from these.
#### This is some good stuff
This is some good text! Put it back! -143.215.155.26 (talk) 05:17, 21 March 2009 (UTC)
## DMT vs OFDM
On a somewhat related topic, i question whether the redirect to OFDM from DMT (http://en.wikipedia.org/w/index.php?title=Discrete_multitone_modulation&diff=next&oldid=10618564 ) is optimum. Certainly DMT and OFDM share many characteristics, but the common use of the DMT term in wireline, as well as in the standards reference in ANSI T1.413 http://en.wikipedia.org/wiki/ANSI_T1.413_Issue_2 suggest they should remain distinct topics. Particularly as there are non-trivial differences between the two techniques, that also in fact form some of the basis of T1.413 - such as the Cioffi/Amati patented bit loading / tone swapping algorithm which allows better throughput in copper specific interference such as the ISI found in bridged taps. Because in copper the noise is stationary and the channel is time invariant, DMT is much better able to adapt to the communication medium than OFDM would. In any case, this may not be the best place to discuss the validity of a redirect, but because it does involve technical details relevant to this article, I felt to introduce the idea for comments here first. Duedilly 23:47, 10 February 2007 (UTC)
To add some further detail to the consideration of differences between OFDM and DMT (and whether DMT should redirect to OFDM), here are more distinction points:
Duedilly 19:48, 11 February 2007 (UTC)
There is nothing about OFDM which precludes it from being baseband, wireline, or pre-equalised/rate-adaptive. It would be a bit like trying to state that 802.11a is distinct from OFDM because it uses PRBS-based scrambling, or that DVB is distinct from OFDM because it uses scattered pilots. IMHO, "DMT" really just describes a particular implementation, which just so happens to be baseband, wireless and pre-equalised and rate-adpative. Oli Filth 22:09, 11 February 2007 (UTC)
DMT is discussed in the section "Adaptive transmission", as well as in the ADSL section. I would like to encourage to you elaborate on the bit loading/tone swapping algorithm there. A VDSL section should also be created. A separate DMT article would not be a good idea. Mange01 07:37, 14 February 2007 (UTC)
Thanks Oli and Mange. I notice there is an entry for ITU_G.992.1 and wonder why DMT would not more likely redirect to that? (or even to an ANSI T1E1.413 entry, which predates and was the basis for the adopted ITU standard). I think your idea though to further clarify the ADSL section is good, though obviously not all ADSL is FDM - technologies like CAP and DWMT and other DFT implemented filter bank modulation schemes, (both perfect, and particularly the imperfect, due to frequency overlap with low cross-channel interference which are decidedly NOT like PO-FDM) have a rightful place of discussion near DMT - which would likely not be appropriate in this entry. I agree that these may all have evolved from a common ancestor and have filled different niches, but am not yet convinced of the value of reducing them to a single OFDM entry. I am traveling soon, but will look forward to discussing specifics and considering the best approach for further elucidation of these topics, and will also look to get some feedback from Cioffi and others involved in ANSI DSL standards to help. We might also consider adding discussion of expected "multicarrier" CDMA implementations of OFDM using spread spectrum. Duedilly 19:43, 16 February 2007 (UTC)
If found Discrete Multi-Tone page (an orphan page), should this be merged to this page, or at least redirect? I'm no expert on this field. Memming 22:04, 11 April 2007 (UTC)
## Sub-carrier
What is meant with a sub-carrier in OFDM and what is its difference compared to a normal carrier? --Abdull 10:40, 6 March 2007 (UTC)
## More explanation of the cyclic prefix purpose
I was wondering about this, won't it be nice if we also show that the cyclic prefix serves to make the effect of the channel become circular convolution for the OFDM symbol? Then, we mention that circular convolution becomes just multiplication when the DFT is taken.
Something like The OFDM symbol ${\displaystyle [d_{0},d_{1},\ldots d_{N_{c}-1}]^{T}}$ is prefixed with the ${\displaystyle L-1}$ length cyclic prefix and becomes ${\displaystyle [d_{N_{c}-L+1},d_{N_{c}-L+2}\ldots d_{N_{c}-2},d_{N_{c}-1},d_{0},d_{1},\ldots d_{N_{c}-1}]^{T}}$. Then, after convolution with the channel, which happens as
${\displaystyle y[m]=\sum _{l=0}^{L}h_{l}x[m-l]\quad 0\leq m\leq N_{c}}$
which is circular convolution, as ${\displaystyle x[m-k]}$ becomes ${\displaystyle x[(m-k)\mod N_{c}]}$. So, taking the DFT, we get:
${\displaystyle Y[k]=H[k]\cdot X[k]}$.
Of course, the noise etc. has to be accounted for. Note that another thign one could mention is that the distribution of the noise, if it is isotropic complex Gaussian, remains identical under the DFT operation.
Just my suggestions to make it more clear... Kumar Appaiah 12:32, 11 March 2007 (UTC)
I suggest that some of this stuff would be well worth putting in the Cyclic prefix article, which is essentially just a stub right now. I keep meaning to flesh that article out myself, but I never get round to it. I don't think we want any more maths in the OFDM article, as it's already overly-long. Oli Filth 13:05, 11 March 2007 (UTC)
Oli, thanks for the suggestion. I am heading to Cylic prefix to make a start! Kumar Appaiah 14:24, 11 March 2007 (UTC)
## Overview
I think a critical clarification needed in the summary at the start of the article is that the difference between OFDM and FDM is that with OFDM we're simultaneously transmitting on all the sub-channel frequencies, whereas FDM transmits on each frequency sequentially. Ceri Reid68.200.109.154 17:24, 29 March 2007 (UTC)
FDM tramsits simultaneously on all frequencies as well (otherwise it would be TDM!). Oli Filth 18:18, 29 March 2007 (UTC)
Yes, the orthoganal part allows much closer spacing without the guardband that would be needed between conventional channels on different frequencies (such as on FM radio), without the interference being destructive. --Lindosland (talk) 16:52, 28 January 2008 (UTC)
## OFDM Chipset
Can any one help of the best chipset available in the market for Baseband OFDM chipset that could be used in an OFDM modem for not that high data bit rate?, the RF part not necessary for building the OFDM modem only baseband part. Thanks Matalal 13:17, 4 April 2007 (UTC) —The preceding unsigned comment was added by Matalal (talkcontribs) 10:19, 4 April 2007 (UTC).
## More Images!!!
Hi, the article is good, but it is sometimes harder to perceive things for the person not already in WiFi field.
A good drawing explaining guard interval and cyclic prefix would not be obsolete, for sure. Also one explaining orthogonality. Keep up the good work! -- Mtodorov 69 08:28, 19 April 2007 (UTC)
## Overlap between "Example of applications" and "Usage"
Section 1, "Example of applications", was originally part of the ingress, as a summary of the "Usage" section. Someone perhaps found the ingress too long, and moved it into a separate section, but now we have two sections with overlapping content.
What is the solution to the problem?
Mange01 18:44, 7 October 2007 (UTC)
I've mostly reverted today's additions to the article lead, for the following reasons:
1. "OFDM is a modern method of modulation ..." - Define "modern"! OFDM has been around since the 60s.
3. "It is analogous to the AM and FM systems of modulation used for analogue radio and television broadcasting ..." - I know what you're getting at, but in light of the point above, is meaningless.
4. "Typically each bit in each stream lasts for around a millisecond ..." - This is only true for 8K DVB, and only if we're talking about symbols, not bits.
The example is highly relevant, since, as I understand it, a key benefit of OFDM has been the possibility of operating all transmitters across a country on one set of frequencies. 50 miles is a typical distance between such transmitters, and hence 1ms was a design criterion for such applications used in deciding on the number of carriers. This issue is something the non-engineer can readily appreciate the value of. What is transmitted on a carrier is surely 'bits' in the basic sense of 'binary digits' ie on-off representation. I don't see a need to complicate things with symbols, when the average person is familiar with bits.--Lindosland (talk) 20:56, 28 January 2008 (UTC)
1. "... when the many adjacent frequencies interfere with each other, they do so in a way that is not destructive ..." - Again, I know what you're getting at. However, it's not that they're not destructive, it's that they don't "interfere" at all!
However, I've retained a mention of SFNs. Oli Filth(talk) 19:32, 28 January 2008 (UTC)
I don't agree with the revert. The important thing is 'in simple terms' which qualifies what follows. The analogy with FM and AM is entirely appropriate for the average reader (who usually cannot destinguish AM from Medium wave). By all means improve what I wrote, but I think it was a good attempt, which I made because of a call for a 'simple explanation' at the DVB-T article talk page. A simple explanation can never be the whole truth, but is still needed on Wikipedia, especially for a subject as obscure to most people as this one, which they might come across when reading about digital TV. --Lindosland (talk) 20:08, 28 January 2008 (UTC)
I've reverted but with changes in line with your comments. I really think the lead, as it stood, was pitched at a much too technical level. I welcome the highly technical, but the lead should contain something for the non-engineer too, and this is actually a topic very relevant to non-engineers who are curious about modern developments in broadcasting etc. --Lindosland (talk) 20:24, 28 January 2008 (UTC)
I would further point out that this article failed to meet 'good article criteria'. One of these is that, 'The lead should be capable of standing alone as a concise overview of the article, establishing context, summarizing the most important points, explaining why the subject is interesting or notable.' A lead that fails to mention the relevance of OFDM in terms of DAB DTV and Wi-Fi is surely failing badly, considering that these are THE major developments of the last decade that almost everyone is aware of. --Lindosland (talk) 21:04, 28 January 2008 (UTC)
I agree with the fact that articles should be tractable by the non-specialist, so I'll have a go at working on your updated changes! I'll summarise the rationale for my revisions here:
1. I've worked the material into the existing paragraphs, as the article lead is not the place to have discrete "simple" and "more in-depth" versions which overlap in scope. Instead, it should serve as a "single-threaded" summary to the article, which should all be readable in one chunk!
2. Merge the "OFDM is a method of modulation that is proving well suited ..." and "analogous with AM..." sentences with the existing usage sentence ("OFDM has developed into a popular scheme ...") to eliminate redundancy, and rewrite to avoid singling-out TV as particularly prominent. I've removed the specific standard names to avoid acronym overload, and because they're listed in the very next section of the article.
3. Remove the "OFDM works by splitting the wide-band digital signal..." sentence, as this was redundant (see the first paragraph of the existing lead).
4. Move the discussion of non-interfering sub-carriers into the first paragraph, where it seemed to fit more closely with the existing material.
5. Remove the 8K DVB-specific values, as they're too specific, and either way, are unnecessary for describing the principle in the lead. Disclaiming this with "Typically (by way of example only, as in broadcast TV and Radio)" wasn't making this any clearer. Replaced with a more concise discussion of SFNs at the end of the existing paragraph. Note also that there are several numerical examples we could put in the lead, like the examples at OFDM#Guard interval for elimination of inter-symbol interference and OFDM#Simplified equalization, which are arguably more important than SFN (as these were the original reasons that OFDM became attractive). But we don't, for the sake of clarity and brevity (the whole point of a summary!).
I hope that you'll mostly agree with my revision of your additions; if not, let's carry on the discussion here. Regards, Oli Filth(talk) 21:35, 28 January 2008 (UTC)
Thank you for your contributions Lindosland. However, I support Oli's revert. Some of these details might be discussed later in the article. The OFDM article is a quite well-written article, but the ingress is still a little bit long and should be more focused on the key issues rather than extended. I therefor suggest that the following is removed from the ingress: "several applications traditionally served by single-carrier methods such as AM, FM, QAM or PSK. These include". It might be moved down in the article. Mange01 (talk) 21:57, 28 January 2008 (UTC)
Hello. The lead as it stands is now a combination of Lindosland's additions, and my revisions of these changes. The particular sentence you've quoted is my re-working of Lindosland's version, but I'm happy for it to be removed, as the lead is indeed somewhat lengthy. I do think that some mention of "digital TV", "wireless networking" should be retained in the lead, though. Oli Filth(talk) 22:02, 28 January 2008 (UTC)
Thanks, I'm reasonably happy with what you have done, though I still think a comparison with AM and FM in the intro would help as these are terms that have been absorbed into popular culture, while 'modulation' means nothing to many people. --Lindosland (talk) 21:59, 31 January 2008 (UTC)
## New changes to the article lead and structure
In the changes made 23 February 2008 to the article, the paragraph which explains the primary advantages of OFDM was moved down from the article lead into a separate section. I reverted those changes for the following reasons:
• It does not make sense to both have a section on "Advantages" and "Summary of advantages", which are highly overlapping with each other. Both are summaries of certain issues discussed in the "Characteristics..." section.
• I suggest that major changes to the article lead and structure should be discussed at the talk page. There has already been a discussion on this talk page, where we have tried to agree on a compromise regarding short lead. Is it desirable to make it even shorter? Perhaps it is possible, but I would prefer a discussion first.
Mange01 (talk) 17:43, 23 February 2008 (UTC)
## Survey: bit/s/Hz, (bit/s)/Hz or bit·s−1·Hz−1 as Spectral efficiency unit?
Please vote at Talk:Eb/N0#Survey on which unit to be used at Wikipedia for measuring Spectral efficiency. For a background discussion, see Talk:Spectral_efficiency#Bit/s/Hz and Talk:Eb/N0#Bit/s/Hz. Mange01 (talk) 07:21, 16 April 2008 (UTC)
## Modulation or multiplex?
OFDM is listed under "modulation techniques", while OFDMA is listed under "multiplex techniques".
However, the OFDM article begins with "OFDM ... is a frequency-division multiplexing (FDM) scheme", which is in my opinion correct, and contrasts with the classification as a modulation technique. Fpoto (talk) 12:55, 28 May 2008 (UTC)
The ingress continues: OFDM " is a frequency-division multiplexing (FDM) scheme utilized as a digital modulation method." I don't agree totally with the first part of this sentence. OFDM rather involves or is based on FDM, but it is by definition a modulation scheme, since it includes inverse multiplexing that translates one bit stream into several. See section 3.6 for further arguments:
"OFDM in its primary form is considered as a digital modulation technique, and not a multi-user channel access technique, since it is utilized for transferring one bit stream over one communication channel using one sequence of OFDM symbols. However, OFDM can be combined with multiple access using time, frequency or coding separation of the users."
The december 2007 version of the ingress was i.m.o. more correct: "OFDM is a digital multi-carrier modulation scheme, which uses a large number of closely-spaced orthogonal sub-carriers". I have noticed that the new version of the ingress has caused confusion among my students. The OFDM literature is pretty clear on that OFDM is a modulation scheme, but many non-professionals are confused, since the M in OFDM means multiplexing.
You question that OFDMA is mentioned in the multiplexing template. You have a point here. OFDMA is a multiple access scheme rather than a multiplex scheme, at least from the name, but multiplexing and multiple access are related. If several transmitters are sending using different sub-carriers, it is a multiple access scheme, and should be called OFDMA and not OFDM. The downlink case is more complicated. If one transmitter is sending to several receivers using separate sub-carriers, it is multiplexing rather than multiple access, but I would still call it OFDMA since it is based on the same principles, especially if there is an OFDMA uplink backchannel controlled by the same algorithm or protocol. But a agree that the picture is not totally clear. What about ADSL - why do we call it OFDM and not OFDMA?
I would not accept OFDM in the multiplexing template, from the above reasons. I would understand if you remove OFDMA from it. Mange01 (talk) 20:42, 28 May 2008 (UTC)
I understand your point here. However, I have a couple of reservations about describing OFDM primarily as a "modulation method":
1. the underlying modulation used (QPSK, QAM16, etc.), i.e. what we traditionally understand "modulation" to mean, is not part of the definition of OFDM.
2. The argument that OFDM "includes inverse multiplexing" is really a matter of perspective. What it's really doing is mapping incoming bits to physical symbols, and so has more in common with techniques such as interleaving and traditional multiplexing. The only difference is that here, the physical symbols occupy a two-dimensional space (time+frequency), rather than just one-dimensional. Oli Filth(talk) 17:59, 2 June 2008 (UTC)
## Cleanup of article needed
Sections seem to be out of order or jumbled up somehow. --KJRehberg (talk) 21:16, 23 June 2008 (UTC)
Removed text that celebrated the virtues of Flash-OFDM to the point of reading like a cheap advertorial. I'm not at all sure the section should stay at all as it completely fails to explain just what sets this flash thing apart from the rest, beyond expanding the acronym. 85.178.66.163 (talk) 23:44, 1 October 2008 (UTC)
I further trimmed it down, rewrote in past tense a little, updated it with some sources, and removed inline links. It still seems notable, but of historical interest now that other standards seem to be battling it out. W Nowicki (talk) 21:26, 23 July 2011 (UTC)
## Idealized system model and constellation mapping
Can someone tell me what constellation mapping does with the incoming bits? If the constellation mapping was QAM, would the constellation mapping convert the incoming bits into a sinusoidal signal, with varying amplitude and phase (this signal would be digital)? Thank you in advance WielkiZielonyMelon (talk) 15:34, 6 February 2009 (UTC)
For example, if 64QAM is used, every 6 bit data will be converted to a complex value by the constellation mapping. For example, 011101 might be converted to 5 - i2, corresponding to a vector with horizontal value 5 and vertical value -2. This is the fourier coefficient for one of the sub-carriers. After the inverse fourier transform, this coefficient will correspond to a cosine wave with amplitude 5, plus a sine wave with amplitude 2, at this specific sub-carrier frequency. Mange01 (talk) 23:09, 7 February 2009 (UTC)
Magne01, thank you very much, I have hurried a little and forgot, that the entry of IFFT is in FREQUENCY DOMAIN, and the exit of the IFFT is in TIME DOMAIN. Anyway, thank you for explanation! WielkiZielonyMelon (talk) 10:15, 9 February 2009 (UTC)
This article contains too much DAB-specific information that doesn't mesh well with the other information here. The DAB information belongs in an article about DAB technical aspects. If we had this much information about other specific COFDM-based services, this article would be huge and incomprehensible.
Also, none of this DAB-specific information is complemented by discussion of HD Radio (which has as many multiplexes as DAB does now!), DRM, etc.
-143.215.155.26 (talk) 05:20, 21 March 2009 (UTC)
## Section: Channel coding and interleaving
The last paragraph and sentence states that "...[Turbo codes and LDPC codes] only perform close to the Shannon limit for the AWGN channel ... [therefore] concatenated them with either RS or BCH codes to improve performance further...".
I was under the impression that RS/BCH was concatenated with LDPC/Turbo codes in order to clean up resilient errors from their error floor???
## Section: Linear transmitter power amplifier
There are techniques to reduce the PAPR: Active Constellation Extension (ACE), and Tone Reservation.
Could somebody comment on these points? Cheers, Nageh (talk) 12:07, 5 June 2009 (UTC)
## Move the comparison table to a template that can be embedded in several articles?
Would it be a good idea to move the comparison table to a template, and embed it in the end of articles about each of the compared systems? For example template:ofdm sytem comparison table. The table should be collapsible. Mange01 (talk) 19:08, 11 May 2010 (UTC)
## What is the crest factor?
We know it is high, but how high? OFDM is typically 12 dB, what would it be for COFDM? I am surprised this most critical parameter for Tx design is treated so lightly. — Preceding unsigned comment added by 75.99.58.122 (talk) 15:39, 24 January 2012 (UTC)
As you requested, I added an equation for calculating the crest factor to the article (with an example value).
In theory, if one were to hypothetically one were build a OFDM system without coding, and compare it to a COFDM system using the same subchannel modulation and the same number of non-zeroed subchannels, that hypothetical OFDM system would have exactly same crest factor.
In practice, practically all "OFDM" protocols are actually COFDM, so I suspect that "12 dB" value is actually the crest factor of some COFDM system. --DavidCary (talk) 18:07, 9 January 2013 (UTC)
## Optical OFDM
Press release here on low-cost OOFDM claims it can extend subscriber FTTH capacities by 2000 fold (e.g. 20 Gb/s vice 10 Mb/s) without significant cost increase. Seems like it might be worth some discussionin the article. See WO2011051448 A4 or the Beeb for additional details. LeadSongDog come howl! 20:24, 6 November 2012 (UTC)
## f-DPSK
Today I'm reading a paper by Martin Hoch ("Comparison of PLC G3 and PRIME".) which mentions "t-DPSK" and "f-DPSK".
I came to this Wikipedia article looking for a better explanation of "f-DPSK" and "t-DPSK". Dear Wikipedia editors, if you know what these things are, please add them to this article. --DavidCary (talk) 18:07, 9 January 2013 (UTC)
I know nothing about it... but why to this article (and not DPSK)? Nageh (talk) 12:02, 10 January 2013 (UTC)
Good point. Let me repost this question there.
I thought this might be relevant to OFDM because I haven't seen either term ever used in any non-OFDM system. --DavidCary (talk) 18:17, 19 January 2013 (UTC)
When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}).
|
{}
|
# Custom attributes?
Is it possible to assign custom attributes to symbols and check them later?
SetAttributes[a, b]
says
Attributes::attnf: b is not a known attribute.
No, I do not believe it is. As the documentation for your error message says:
The attributes available in each version of Mathematica are fixed and cannot be changed.
The system attributes are low level properties that fundamentally change the evaluation of symbols. I think it makes sense that these are not mixed with high-level user constructs, even though at times that would be quite convenient.
For an alternative remember that you can attach Options to a symbol, e.g.:
Options[a] = {"Attributes" -> {"b"}};
OptionValue[a, "Attributes"]
{"b"}
You could also use a single DownValues rule such as:
a["Attributes"] = {"b"};
a["Attributes"]
{"b"}
• Thanks. But how to change individual option of a symbol? – Suzan Cioc Jan 19 '13 at 10:14
• Ah, I found SetOptions function. – Suzan Cioc Jan 19 '13 at 10:15
• @Suzan What do you mean? It is simple to replace the list of options with either method shown, e.g. SetOptions[a, "Attributes" -> {"c", "d"}]; or a["Attributes"] = {"c", "d"}; Are you asking for a way to individually append or remove these values as SetAttributes and ClearAttributes do? I could add that to my answer if you have trouble crafting it yourself. – Mr.Wizard Jan 19 '13 at 10:17
• @Suzan Thanks for the Accept, but please consider waiting a while first as someone else may have an answer you like better if you do not discourage them from reading the question. – Mr.Wizard Jan 19 '13 at 10:18
|
{}
|
## orbitals of celestial bodies
Hi all,
I have a question about calculating the orbitals of bodies of mass in space (newton's basic laws). I am writing a program to simulate the orbitals of bodies in space -- basically, you define the object's mass, size, location, and initial velocities and watch how they interaction via gravitational attraction.
So we know the following equations that govern the motion of these bodies:
Magnitude of force due to gravitational attraction:
$F=G\frac{m1*m2}{r^2}$
The direction is, at any instant in time, points in the same direction as displacement between the centers of their mass (points in direction of the other body of mass).
We know the magnitude and direction of the force, and from this, we would say that the acceleration due to gravity that object 1 undergoes is given by:
$a=\frac{F}{m1}$
The direction of which is the same direction as the force vector.
Now suppose I asked the following question: what is the total displacement that object 1 undergoes given any arbitrary $Δt$ (and initial velocity is known)? Currently, my program just changes the velocity vector based on the direction of $a$ at whatever moment the refresh was called, and from that changes the object's position, but I know this is not entirely accurate (e.g., this is comparable to finding the area under a curve by diving it into tiny rectangles), because the direction of $a$ is constantly changing.
What is the calculus method of doing this?
Much appreciated!
PhysOrg.com physics news on PhysOrg.com >> Kenneth Wilson, Nobel winner for physics, dies>> Two collider research teams find evidence of new particle Zc(3900)>> Scientists make first direct images of topological insulator's edge currents
Quote by eNathan What is the calculus method of doing this?
You use this:
http://en.wikipedia.org/wiki/Euler_method
Better are:
http://en.wikipedia.org/wiki/Midpoint_method
http://en.wikipedia.org/wiki/Runge%E...3Kutta_methods
There's lots of code examples online for the specific task you are doing.
Similar discussions for: orbitals of celestial bodies Thread Forum Replies Chemistry 5 Classical Physics 6 General Astronomy 2 General Astronomy 0 Atomic, Solid State, Comp. Physics 8
|
{}
|
Search
# S
S is the nineteenth letter in the modern Latin alphabet.
In most writing systems that use the Latin alphabet, the letter s corresponds to a coronal fricative consonant. Semitic Šîn (bow) was pronounced as the voiceless postalveolar fricative (like the sound of the letters sh in ship). Greek did not have this sound, so the Greek sigma (Σ) came to represent the voiceless alveolar fricative (like the sound of the letter s in sit). The name "sigma" probably comes from the Semitic letter "Sâmek" and not "Šîn". In Etruscan and Latin, the [s] value was maintained, and only in modern languages has the letter been used to represent other sounds, such as voiceless postalveolar fricative [ʃ] in Hungarian or the voiced alveolar fricative [z] in English, French and German (in English rise; in French lisez (="read" imperative plural); in German lesen (="to read").
An alternative form of s, ſ, called the long s or medial s, was used at the beginning or in the middle of the word; the modern form, the short or terminal s, was used at the end of the word. For example, "sinfulness" is rendered as "ſinfulneſs" using the long s. The use of the long s died out by the beginning of the 19th century, largely to prevent confusion with the minuscule f. The ligature of ſs (or ſz) became the German ess-tsett ( ß ).
Sierra represents the letter s in the NATO phonetic alphabet. The letter s represents the voiceless alveolar fricative in the International Phonetic Alphabet.
Contents
|
{}
|
+0
0
77
2
Simplify the expression below.
Aug 20, 2020
#1
0
If the 3f isn't under the radical in the original expression, then it's C.
You just take the 4 out from under the radical. It should be +2 though.
Aug 20, 2020
#2
+31296
+1
As follows:
$$\sqrt{\frac{4e}{3f}}=\sqrt{\frac{4\times3ef}{9f^2}}=\frac{2}{3f}\sqrt{3ef}\\ \text{or just }\frac{2\sqrt{3ef}}{3f}$$
Aug 20, 2020
|
{}
|
# LaTeX code for presentation slides
I'm giving a presentation in a few days. I'm preparing my material in LaTeX; however, I am relatively new to it. How can I make my LaTeX code more concise and less repetitive?
Here is a reduced version of my presentation, which contains the following features that I'm seeking to improve and tighten up:
• Code samples using minted, with lots of embedded escapes for use of tikzmark (and later on for hyperlinks to Web-based documentation).
• Some diagrams using tikz.
• Callouts with arrows and nodes, using tikz and tikzmark
• Note that said arrows point to the middle of the character, not the baseline. That's what all the shifts are for.
• Some callouts have multiple arrows emanating from them.
• Most callouts have an anchor of west, although in this sample very few of them do.
• Occasionally, there's also text on the arrow itself.
• Callouts may have text that extends to more than one line.
• Columns, a little bit.
The code (with representative slides) follows, as does the output. The vast majority of the slides (which are available on GitHub if you're so inclined) most closely resemble the third example here.
I also wouldn't mind any suggestions for LaTeX packages or the actual content of the presentation, even though they're not the purpose of this question.
\documentclass[glossy]{beamer}
\usepackage[utf8]{inputenc}
\usepackage{hyperref}
\usepackage{minted}
\usepackage{tikz}
\usepackage{lmodern}
\usepackage[T1]{fontenc}
\useoutertheme{wuerzburg}
\usecolortheme{shark}
\usetikzlibrary{tikzmark, arrows, decorations, decorations.pathreplacing}
\newminted{cpp}{autogobble, fontsize=\tiny, escapeinside=@@}
\newmintinline{cpp}{}
\usemintedstyle{vs}
\tikzset{every picture/.style={font issue=\scriptsize},
font issue/.style={execute at begin picture={#1\selectfont}}
}
\title{C++ Boot Camp 1/2}
\author{Jesse Talavera-Greenberg}
\date{\today}
\begin{document}
\newcommand{\cppref}[2]{\href{http://en.cppreference.com/w/cpp/#1}{\underline{#2}}}
\begin{frame}
\maketitle
\end{frame}
\begin{frame}[fragile=singleslide]
\begin{columns}
\begin{column}{6cm}
\begin{itemize}
\item Stack allocation is fast, but size must be known at compile time
\item Heap allocation is flexible, but slow
\item Details vary by compiler, OS, and hardware
\item \textbf{All objects of a given type are the same size.}
\end{itemize}
\end{column}
\begin{column}{6cm}
\begin{tikzpicture}
\draw [fill=pink, ultra thick, rounded corners] (current page.north west) rectangle (6cm, 2cm);
\draw [fill=purple, ultra thick, rounded corners] (0cm, 2cm) rectangle (6cm, 4cm) node [align=center, anchor=center, fill=white] at (3cm, 3cm) {\huge{Stack}};
\draw [fill=green, ultra thick, rounded corners] (current page.north west) rectangle (6cm, 6cm);
\draw [fill=olive, ultra thick, rounded corners] (2cm, 9cm) rectangle +(1cm, 0.5cm) (3cm, 8cm) rectangle +(1cm, 0.5cm) (5cm, 8cm) rectangle +(1cm, 0.5cm) (0, 7cm) rectangle +(3cm, 0.5cm);
\draw node [align=center, anchor=center, fill=white, rounded corners] at (3cm, 8cm) {\huge{Heap}};
\end{tikzpicture}
\end{column}
\end{columns}
\begin{tikzpicture}[remember picture, ->, >=stealth, overlay, red, ultra thick, align=left]
\draw (4cm, 22em) node [anchor=east] {\shortstack{Find enough space (expensive)}} -> (6.25cm, 5.25cm);
\draw (4cm, 3em) node [anchor=east] {\shortstack{Increment an address (cheap)}} -> (6.25cm, 1cm);
\end{tikzpicture}
\end{frame}
\begin{frame}[fragile=singleslide]
\frametitle{(Con|De)structors, RAII, and the Rule of 3}
\begin{cppcode}
#include <cstdint>
#include <cstring>
using std::memcpy;
using std::size_t;
class FloatArray@\tikzmark{raii_dont}@ {
public:
FloatArray(size_t size) : _size@\tikzmark{raii_init}@(size), _array(new float[size]) {}@\tikzmark{raii_ctor}@
FloatArray(const FloatArray& other) : _size(other._size), _array(new float[other._size]) {@\tikzmark{raii_copyctor}@
memcpy(_array, other._array, other._size * sizeof@\tikzmark{raii_sizeof}@(float));
}
FloatArray& operator=(const FloatArray& other) {@\tikzmark{raii_copyeq}@
if (this != &other) { // Watch for self-assignment!
float* temp = new float[other._size];
memcpy(temp, other._array, other._size * sizeof(float));
delete[] _array;
_array = temp;
return *this;
}
}
~FloatArray() {@\tikzmark{raii_dtor}@
delete[] floats;@\tikzmark{raii_dtor_b}@
}
private:
size_t _size;
float* _array;
};
\end{cppcode}
\begin{tikzpicture}[remember picture, ->, >=stealth, overlay, red, ultra thick, align=left]
\draw (3cm, 27em) node [anchor=west] {\shortstack{Don't write this class\\(STL does it better)}} -> ([shift={(0em,.25em)}]pic cs:raii_dont);
\draw (5cm, 23em) node [anchor=south] {\shortstack{Member initialization syntax}} -> ([shift={(0em,.25em)}]pic cs:raii_init);
\draw (8cm, 22em) node [anchor=south west] {\shortstack{Anything else\\(nothing right now)}} -> ([shift={(0em,.25em)}]pic cs:raii_ctor);
\draw (9cm, 7em) node [anchor=north] {\shortstack{Rule of 3: You need to write one,\\you need to write them all}} -> ([shift={(0em,.25em)}]pic cs:raii_copyctor) node [pos=.5, above, sloped, anchor=north] {Copy constructor};
\draw (9cm, 7em) -> ([shift={(0em,.25em)}]pic cs:raii_dtor) node [pos=.5, above, sloped, anchor=north] {Destructor};
\draw (9cm, 7em) -> ([shift={(0em,.25em)}]pic cs:raii_copyeq) node [pos=.5, above, sloped, anchor=north] {Copy assignment};
\draw (4.5cm, 3em) node [anchor=north] {\shortstack{RAII: Create in ctor, delete in dtor}} -> ([shift={(0em,.25em)}]pic cs:raii_dtor_b);
\end{tikzpicture}
\end{frame}
\end{document}
These are the resulting three images:
• Just a point about the slides themselves. The worst about heap allocation is that the management of the memory is done by yourself. Please make sure that is stated and clear. Allocating and freeing memory from heap itself may be a little bit more expensive than stack memory but is not such a big concern. Jan 28, 2016 at 23:12
• Good point. That is true, I'll clarify that. Jan 28, 2016 at 23:22
In my opinion, the code is reasonably clear. I would recomment couple changes to make it better:
1. Externalize the C++ code; make it a separate file and load it from file using \inputminted. See the package documentation for details.
2. Mark the ends of frames, it easies the navigation in the source code. I use this in my code:
...
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}
...
3. You are inconsistent with indentation: Sometimes it's 4 spaces or a tab, sometimes 2 spaces; sometimes you indent the corresponding \end differently and sometimes you do not.
4. You are repeating [fill=#1, ultra thick, rounded corners] several times; it could use a TikZ style.
5. I would avoid \today; put the date of the talk/seminar in \date.
6. Do you ever use your \cppref? If not, do you need it?
7. Add comments to everything that deserves them, especially in the preamble. Group related parts of preamble. Do not load hyperref (beamer does that for you). I would change the preamble to this:
\documentclass[glossy]{beamer}
% FONTS ETC.
\usepackage[utf8]{inputenc}
\usepackage{lmodern}
\usepackage[T1]{fontenc}
% BEAMER APPEARANCE
\useoutertheme{wuerzburg}
\usecolortheme{shark}
% TIKZ
\usepackage{tikz}
\usetikzlibrary{tikzmark, arrows, decorations, decorations.pathreplacing}
\tikzset{every picture/.style={font issue=\scriptsize},
font issue/.style={execute at begin picture={#1\selectfont}}
}
% MINTED
\usepackage{minted}
\newminted{cpp}{autogobble, fontsize=\tiny, escapeinside=@@}
\newmintinline{cpp}{}
\usemintedstyle{vs}
\title{C++ Boot Camp 1/2}
\author{Jesse Talavera-Greenberg}
\date{5th February 2016}
% USER MACROS
\newcommand{\cppref}[2]{\href{http://en.cppreference.com/w/cpp/#1}{\underline{#2}}}
% BEGIN DOCUMENT
\begin{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}
....
|
{}
|
# Solve the following
Question:
A $5.0 \mathrm{~m}$ moldm $^{-3}$ aqueous solution of $\mathrm{KCl}$ has a conductance of $0.55 \mathrm{mS}$ when measured in a cell constant $1.3 \mathrm{~cm}^{-1}$. The molar conductivity of this solution is________________ . $\mathrm{mSm}^{2} \mathrm{~mol}^{-1}$. (Round off to the Nearest Integer)
Solution:
(14.3)
Given conc ${ }^{\mathrm{n}}$ of $\mathrm{KCl}=\frac{\mathrm{m} \cdot \mathrm{mol}}{\mathrm{L}}$
$\therefore$ Conductance $(\mathrm{G})=0.55 \mathrm{mS}$
Cell constant $\left(\frac{\ell}{\mathrm{A}}\right)=1.3 \mathrm{~cm}^{-1}$
To Calculate : Molar conductivity $\left(\lambda_{\mathrm{m}}\right)$ of sol.
$\rightarrow$ Molarity $=5 \times 10^{-3} \frac{\mathrm{mol}}{\mathrm{L}}$
$\rightarrow$ Conductivity $=\mathrm{G} \times\left(\frac{\ell}{\mathrm{A}}\right)=0.55 \mathrm{mS} \times \frac{1.3}{\frac{1}{100}} \mathrm{~m}^{-1}$
$=55 \times 1.3 \quad \mathrm{mSm}^{-1}$
$\mathrm{eq}^{\mathrm{n}}(1) \lambda_{\mathrm{m}}=\frac{1}{1000} \times \frac{55 \times 1.3}{\left(\frac{5}{1000}\right)} \frac{\mathrm{mSm}^{2}}{\mathrm{~mol}}$
$\Rightarrow \lambda_{\mathrm{m}}=14.3 \frac{\mathrm{m} \mathrm{Sm}^{2}}{\mathrm{~mol}}$
|
{}
|
Home » Dataform: Change datasets with branches
# Dataform: Change datasets with branches
Tags:
The reason people love tools like Dataform so much, is that it allows them to automate parts of the ELT workflow. In this blog post we will set our destination dataset, depending on the branch we’re running the definition on.
A really interesting use case is to keep the resulting tables from your scheduled runs on your staging branch(es) separately from the tables created on your production branch. Many possibilities come to mind:
1. Set a value in a column — e.g. “staging” and “production” (not within the scope of this blog post)
2. Add a prefix or a suffix to your tables and views — e.g. “stg-tablename” and “prd-tablename”
3. Use different BigQuery datasets (= Dataform schemas)
Without a doubt, the easiest way to get one of these three solutions done is via environments.
## Dataform Environments
Environments are a wrapper around your codebase. Just like environment variables within an operating system or container, they allow you to manipulate and set variables that work through your code, everywhere you use them.
Let’s start with dataform.json: As you can see, I set the defaultSchema parameter to “stg”, which will be default BigQuery dataset where tables will be created or updated.
{
"warehouse": "bigquery",
"defaultSchema": "stg", // The default dataet in bigquery is set to stg
"assertionSchema": "dataform_assertions",
"defaultDatabase": "YOUR_DATABASE"
}
By creating an environment within environments.json, one can overwrite the settings from dataform.json, using the configOverride parameter. This is the environment named production, that I created for the master branch. When a job runs on this branch, output will not go to the “stg” dataset, but to the “prd” dataset.
{
"environments": [
{
"name": "production",
"configOverride": {
"defaultSchema": "prd"
},
"gitRef": "master"
}
]
}
Although you don’t need to set the schema explicitly, because we set the default in dataform.json, one can still do it. Within a definition, one can refer to the settings via the dataform object.
config {
schema: dataform.projectConfig.defaultSchema, // Optional
name: "YOUR_TABLE"
}
From this example, it is clear how one can use custom variables to create code and query manipulations that depends on the branch or the environment.
### Say thanks, ask questions or give feedback
Technologies get updated, syntax changes and honestly… I make mistakes too. If something is incorrect, incomplete or doesn’t work, let me know in the comments below and help thousands of visitors.
|
{}
|
Plans
Current Affairs
Select Date
Tags:
Hindi
# Important Current Affairs 10th Jun 2018
Important Current Affairs 10th Jun 2018
Current Affairs 10th Jun 2018: NATIONAL NEWS
26 students of Anand Super 30 selected for IIT
● In the IIT examination, 26 students of Anand Super 30 Institute have achieved success.
● They include son of two laborers and son of a salesman.
● Encouraged by the success of the students, Anand Kumar said that the size of Super 30 will be extended next year and 90 students will be given coaching.
● Soon, a test will be conducted across the country for this.
● The information related to the website will be uploaded.
GST refund of foreign tourists will be done at the airport
● Foreign tourists can claim GST refunds at the airport only when they can leave the airport because the revenue department is working on the system for refund of the taxes paid on local purchases.
● In many countries tourists are refunded VAT or GST for purchases over the prescribed limit.
Government opened path for private professionals
● To become a Joint Secretary now, it will not be necessary to pass the UPSC exam.
● The Modi government has changed the rules for entry in the bureaucracy.
● The government has approved the letter of entry for the 10-point secretariat and has issued notification on Sunday in this regard.
Industry Sector Expects GDP Growth At 8% In next Two Years
● Indian Industries expects GDP growth in India to remain 8 per cent for the next two years.
● The biggest reason for this is the major reform and fiscal partnership in the last few years by the government.
● With this, a strong foundation for the development of the country has been kept.
Current Affairs 10th Jun 2018: ECONOMY
Loss of Rs 80 lakh crore on the basis of purchasing power of country's GDP
● The Indian economy has suffered a loss of more than $1,190 billion or 80 lakh crore rupees last year due to violence on the basis of purchasing power (PPP). ● This loss is more than$ 595.40 i.e. more than 40 thousand rupees per person.
● The Institute for Economics and Peace has prepared this report after studying 163 countries and regions.
People have cash at the record level so far
● The Modi government is on the way to making the country's economy cashless.
● At present, the cash level in the hands of the public has reached above Rs 18.5 lakh crore, which is the highest level so far.
Equity Mutual Fund became the preferred choice for retail investors with 86 percent stake
● The share of retail investors in equity mutual funds has increased to 86 per cent.
● It has recorded an increase of about 36 per cent in a year.
FDI increased to $61.96 billion in India during 2017-18 ● Foreign direct investment in India increased to$ 61.96 billion during the financial year 2017-18, which was at the level of $60 billion in the financial year 2016-17. ● DIPP secretary Ramesh Abhishek said on Friday this information. Current Affairs 10th Jun 2018:SPORTS Intercontinental Cup: India won the title by defeating Kenya 2-0 ● Thanks to two goals from Captain Sunil Chhetri, India won the title of Intercontinental Cup Football Tournament by defeating Kenya 2-0 in the finals of the Mumbai Football Arena on Sunday. ● Chetri combined with Argentine legendary Lionel Messi in the second place with the list of active football players, who scored the most goals. ● The name of these two players is now 64 international records. Rafael Nadal won French Open for a record 11th time ● In the French Open men's singles finals, Spain's Rafael Nadal defeated Dominican Thim of Austria. ● He beat Thimm 6-4, 6-3, 6-2 in a match that lasted 2 hours and 42 minutes. ● Nadal's record is the first player to win the finals for the 11th time. The New Zealand women's team scored 490 runs, the highest score of one-day in 47 years. ● The women's team of Zeeland has broken their own world record of one-day cricket highest score. ● Team scored 490 runs for 4 wickets against Ireland on Friday. ● This is the biggest score of 47 years of ODI history. Bangladesh won by 3 wickets in Women's Asia Cup T-20 ● In the women's Asia Cup T-20 finals, Bangladesh defeated India by 5 wickets and won the title for the first time. ● India had reached Fanine for the seventh time, but it had to face defeat for the first time. ● The target of 113 runs was achieved in Bangladesh by 20 wickets on 7 wickets. Current Affairs 10th Jun 2018: GLOBAL NEWS India did not support China's BRI project ● India has not supported China's supermassive project Belt and Road Initiative (BRI). ● India has been the only country in 8 countries of the SCO, which has opposed this project of China. Chinese hackers steal US Navy intelligence data ● Chinese government hackers have stolen the intelligence of the US Navy ● It also includes plans to build a new anti-ship missile fired from the submarines, such as the battle of war in the sea. ● Chinese hackers stole 614 gigabytes of data between January and February this year. Apple is close to touching 1 trillion dollars market cap figure ● There is no company in the world holding a trillion, that is, a$ 1000 billion market cap.
● Apple may soon cross the milestone to become a 12-zero and 13-digit company.
● Amazon, Alphabet and Microsoft and are just behind Apple. Apple's market capitalization of one trillion dollars will be close to Rs 67 lakh crore.
Current Affairs 10th Jun 2018: RANKING
India is the 29th safest country in the world
● India is ranked 29th in the annual list of protected countries.
● Last time he was at the 63rd position.
● That is, he has made a jump of 34 places in terms of security.
● This information has been provided in Gallup International's latest survey report.
Rahul 1 Months ago
|
{}
|
### B-integral
Value of B-integral for material of given thickness $$z$$, wavelength of radiation and position-independent (maximal) intensity $$I_\mathrm{max}$$: $$B=\frac{2\pi}{\lambda}\intop_0^{z}n_2I(z)\mathrm{d}z'=\frac{2\pi n_2 I_\mathrm{max} z}{\lambda}.$$ Reported values of nonlinear refractive index $$n_2$$ at the corresponding wavelenghts given below. For wavelenghts in the given range, value of $$n_2$$ is interpolated. For wavelenghts outside the region, $$n_2$$ at the closest wavelength is used.
|
{}
|
Files in this item
FilesDescriptionFormat
application/pdf
9812578.pdf (6MB)
(no description provided)PDF
Description
Title: Kinetic and Microstructural Study of Titanium Nitride Deposited by Laser Chemical Vapor Deposition Author(s): Egland, Keith Maynard Doctoral Committee Chair(s): Mazumder, Jyotirmoy Department / Program: Materials Science and Engineering Discipline: Materials Science and Engineering Degree Granting Institution: University of Illinois at Urbana-Champaign Degree: Ph.D. Genre: Dissertation Subject(s): Engineering, Materials Science Abstract: The deposition apparent activation energy was calculated as 122 $\pm$ 9 kJ/mole using growth rates measured by film height and 117 $\pm$ 23 kJ/mole using growth rates measured by LIF signals. This puts the process in the surface kinetic growth regime over the temperature range 1370-1610 K. Above N$\sb2$ and H$\sb2$ levels of 1.25% and below TiCl$\sb4$ input of 4.5%, the growth rate has a half-order dependence on nitrogen and a linear dependence on hydrogen and is approximated by$$\rm r = {{kP\sb{TiCl\sb4}P\sb{H\sb2}P\sbsp{N\sb2}{1/2}exp\left({{-}E\sb{a}\over {RT}\right)}\over{1 + P\sb{Ar}}}}.$$Since nitrogen positively affects growth rate (when added to a TiCl$\sb4$+H$\sb2$ mixture), stepwise reduction of TiCl$\sb4$ to Ti by hydrogen does not occur. $\rm NH\sb{x}$ complexes are clearly involved in the growth mechanism; a likely combination of rate determining steps is the formation of NH and the initial reduction of TiCl$\sb4$ by hydrogen. Issue Date: 1997 Type: Text Language: English Description: 138 p.Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 1997. URI: http://hdl.handle.net/2142/82885 Other Identifier(s): (MiAaPQ)AAI9812578 Date Available in IDEALS: 2015-09-25 Date Deposited: 1997
|
{}
|
# What is free energy in the context of a quantum field theory?
I was reading the papers Large $$N$$ behavior of mass deformed ABJM theory and New 3D $${\cal N}=2$$ SCFT's with $$N^{3/2}$$ scaling. These papers talk about the free energy in the context of quantum field theory. I have an idea of what thermodynamic free energy is (related to the work done by the system). But what is free energy in the context of a quantum field theory?
The definition of the free energy in QFT is the same as in many-body QM $$\exp(-\beta F) = {\rm Tr}\left[\exp(-\beta H)\right]$$ where $$H$$ is the Hamiltonian of the system and $$\beta=1/(k_BT)$$. Note that this definition implies the standard thermodynamic identities. If you have a box of this stuff (described by $$H$$) coupled to a heat bath, then the isothermal change of $$F$$ is $$dF=-pdV$$, etc. There is a euclidean path integral representation of $$Z={\rm Tr}[\exp(-\beta H)]$$ $$Z = \int_{S_1\times R^3}{\cal D\phi} \;\exp(-S_E)$$ where the size of the circle $$S_1$$ is equal to $$\beta$$, $${\cal D}\phi$$ is the path integral measure in the QFT, and $$S_E$$ is the euclidean action. We have to impose periodic/anti-periodic boundary conditions for bosons/fermions along the $$S_1$$.
• This is not the free energy used by the papers in question. – Hans Moleman Jul 25 at 18:51
• @HansMoleman How do you know? (What other free energy is there?) – Thomas Jul 25 at 20:45
The theories on question are local QFTs in 3d. You can put such a theory on any compact 3-manifold $$M$$, and then you can compute observables such as the partition function $$Z[M]$$ or correlation functions $$\langle O_1 \dotsm O_n \rangle_M$$. All these observables will in any case depend on the couplings of the original theory and any parameters you use to define $$M$$, for instance its size $$R$$. In the papers in question the three-sphere $$M=S^3$$ is used. We always have in mind that we're tuning to a critical point, such that the couplings of the original theory are completely fixed.
Typically this procedure is a little bit ambiguous, in the sense that in the Lagrangian you can turn on new couplings: $$\mathcal{L} \mapsto \mathcal{L} + \text{cosmological constant} + \text{Ricci scalar} + \ldots$$ that don't exist in flat space. If you measure the partition function for $$M=S^3$$, you find that $$\ln Z[M] = a (\Lambda R)^3 + b \Lambda R - F + \ldots$$ for some dimensionless coefficients $$a,b,f$$. (Here $$\Lambda$$ is the UV cutoff, and all couplings are measured in units of $$\Lambda$$.) We can set $$a,b = 0$$ by tuning the cosmological constant and the Ricci scalar. Once you're at the critical point, the partition function $$Z[M]$$ is therefore a pure number, $$e^{-F}$$, and often this $$F$$ is called the free energy of a 3d CFT.
You are talking about Self Consistent Field Theories.
It is a key tool for describing the phase behavior of block polymers.
The lowest free energy state is the thermodynamic equilibrium.
http://pscf.cems.umn.edu/scft/background
In a thermodynamic system, the internal energy (U) is a function of entropy (S) and volume. Now since entropy is not easy to measure, we need to use temperature (T).
F = U - T x S
|
{}
|
# Tag Info
24
Start by talking about functions in general, not only about functions that can be expressed by a simple formula in x and y. Examples: The function that maps every non-empty list to its first element. The function that maps every finite set to its size. The function that maps color names to RGB triples. The function that maps days to sunrise times at a ...
23
You might remind them that $y$ is just a name for a number. When they draw a plot, they draw a bunch of points: maybe $y=3$ here, $y=5$ there, and $y=-2$ over there. But at some point (no pun intended) we want to talk about the entire shape: we want to say that $f$ is symmetric, that $f$ is concave, that $f$ has an asymptote. We can't do that with $y$; ...
20
You should tell them these two main benefits: (1) Function notation is concise! For example, instead of writing "Find $y$ when $x=5$" one can simply write "Find $f(5)$" This becomes very appreciable when dealing with long or complicated problems asking for a lot of information. We also shorten things like this all the time. For instance, instead of writing $... 16 The crucial thing the students need to realise is that the (e.g.)$x$that turns up in the function definition is a bound variable. That's what allows it to be freely renamed or indeed omitted without changing the semantics. Unfortunately, education tends to completely obscure this facet by a) always using the same dumb variable names as if there were a ... 15 Because x and y are just variable names It happens that sometimes y=f(x), but other times z=f(x,y), w=f(x,y,z), or x=f(y) for that matter. All of these variable names are syntactically equivalent, and the mere existence of "x" and "y" in an equation does not necessarily connote that "x" is the independent variable and "y" the dependent. Thinking of the ... 11 TL;DR: A function is a verb. It's an action. Variables are nouns, objects. Verbs (functions) connect nouns semantically, i.e. how A (or x) relates to B (or y), how to get from here to there. Long version Some context: I learned maths from my father who was a physics / engineering guy at heart, so everything had to be 'tangible' or 'observable' to him. ... 8 (First, I should mention that I've never taught this, so my approach does not come from experience.) So you have students who think of something like$y = f(x) = x^2 + 3$as a relationship between two “specific” quantities$x$and$y$. As intuitions go, that's not so bad: it serves physicists quite well. But it's incomplete, and you're looking for ways to ... 6 I have never worked with students of that skill-level, so take this with a grain of salt. I like to thinks of functions as values, just a different kind of value from numbers. This can help demystify stuff like$\circ$as it just like$+$, except it works on a different type of value. Once you get to vectors you also have a very nice parallel, since they ... 5 Prior to my final year of high school, I was sent to a maths tutor for a couple of sessions, to give me a headstart on calculus. It helped a lot. He introduced me to the concept of functions. He described it as a monster, living inside a box, that accepted a thing through one (!) tube, and pushed out a thing through another tube. The monster was consistent -... 5 I've noticed a few issues when students solve problems of the form, "Find the inverse of this function", and not all of the issues are necessarily because of the students' misunderstanding of what an inverse function means! Misunderstanding/forgetting the "one input$\to$one output" defining feature of a function. This issue arises because implicitly-... 4 From a comment by the OP: I'm trying to come up with "plausible" wrong answers for a multiple choice question about finding inverses. Per an answer given to this question, you might be able to collect data on your students' possible answers by giving them a fill-in-the-blank quiz on inverse functions. Then, keep track of the most-common wrong answers by ... 3 Different notation for different things The key thing that the students seem to be missing is the conceptual distinction between f and y (in this example), so this apparently needs to be explicitly explained to them. IMHO the way to go at this is to tell them that there are two "things" that we may want to talk about - the transformation process (the ... 2 Function notation is a next step in mathematical maturation. In the language of Dubinsky et al., your students are in the process of encapsulating functions as primary objects. At one point in mathematical development, after learning to count, positive integers are "encapsulated" by children as primary objects. Later, while learning algebra, variables such ... 2 One example that comes to mind is modelling the position of a diver (or of a diver's head). Let$t$be the time elapsed since jump. Let$d(t)$be the diver's distance from the water and define$d(t)>0$to be "above water" and$d(t)<0$to be below water. (It should be obvious why$d(t)=0$represents being "at the surface"). If we use a quadratic model, ... 2 Looking at the analogy in your question, suppose someone was confused about whether mymoney = yourmoney + 1 made me richer, or you richer, compared to mymoney = yourmoney. How would you help that person understand? I think this is pretty clear: you would tell them to try some values. Ask them: If mymoney = 5, what is yourmoney in the two cases? Now looking ... 2 From a computer science perspective understanding that functions are first class objects is also pretty difficult. There are cases where functions can be parameters to other functions, the classic example being sort accepting a compare function. This case would be impossible to explain using just the y output. In the case of sort you don't even need to know ... 2 To answer the question of why you need f... ... have them consider the region between two graphs. Unless you have a way to distinguish between the different y values, you're going to be hopelessly lost. Now you don't need to use f... you could use subscripts:$y_{1}, y_{2}$(and in fact, that's how graphing calculators handle it). But it's nice to ... 2 Well...$ \def\zz{\mathbb{Z}} $Let$f : \zz → \zz$defined by$f(n) = n+1$for every$n∈\zz$. Then$f(0) = 1$and$f(f(0)) = 2$and$f(f(f(0))) = 3\$ and so on. It is now obvious why having functions as first-class objects is useful, since we can repeatedly apply them. Similarly, the Mandelbrot fractal is defined in terms of iterating an elegant function.
1
Many answers already, so I'll keep this one short: it has been realized by researchers in didactic that one difficulty in the concept of function is that it changes status: at first each function is considered as a process (a verb in @ΦDev's answer); they meet several of them, each being akin to a (unitary) operation, not very different from addition or ...
1
Some rudimentary programming exercises might make it obvious why it's useful to encapsulate functionality. When you write y = f(x) in Python, for example, it's clear that y is just a static result, while f is the thing that does the work. You can't reuse y to change another variable z in the same way - you have to refer to f to do that.
Only top voted, non community-wiki answers of a minimum length are eligible
|
{}
|
# Compute Meet in Dominance Order
I need to compute the meet of two partitions $\lambda=(\lambda_1,\ldots,\lambda_r)$ and $\mu=(\mu_1,\ldots,\mu_s)$ with respect to the dominance order. It's not a difficult thing to do and might be considered an oddity by most of you, but that doesn't bother me. Let $\varrho:=\lambda\vartriangle\mu$ be the meet of $\lambda$ and $\mu$. We Compute a partition $\alpha$ by setting $\alpha_1:=\min(\lambda_1,\mu_1)$ and recursively setting $\alpha_{i+1} := \min\left\{\scriptstyle\begin{array}{l} (\lambda_1+\cdots+\lambda_{i+1})-(\alpha_1+\cdots+\alpha_i),\\ (\mu_1+\cdots+\mu_{i+1})-(\alpha_1+\cdots+\alpha_i), \\ \alpha_i \end{array}\right\}.$ Fundamental Theorem of computing the Meet in the Dominance Order. $\alpha=\varrho$. Proof. First, $\alpha \trianglelefteq \lambda$ and $\alpha\trianglelefteq\mu$ holds for obvious reasons: To show e.g. that for all $i$, we have $\alpha_1+\cdots+\alpha_i \le \lambda_1+\cdots+\lambda_i,$ we induct on $i$ with $i=1$ being trivial and the induction step also follows immediately from the definition. Therefore, to show that $\alpha=\varrho,$ we simply need to show that $\varrho\trianglelefteq\alpha.$ If this is the case, then we must have $\varrho=\alpha$ by definition of the meet. Again, we induct on $i$. For $i=1$, we have $\varrho_1\le\lambda_1$ and $\varrho_1\le\mu_1,$ so we also have $\varrho_1\le\min(\lambda_1,\mu_1)=\alpha_1.$ In the induction step, we assume $\alpha_1+\cdots+\alpha_i + d_i = \varrho_1+\cdots+\varrho_i$ for some $d_i \ge 0$. Hence, $\alpha_1 + \cdots + \alpha_i + \varrho_{i+1} + d_i = \varrho_1 + \cdots + \varrho_{i+1} \le \lambda_1 + \cdots + \lambda_{i+1}$ and it follows that $\varrho_{i+1} + d_i \le (\lambda_1+\cdots+\lambda_{i+1})-(\alpha_1+\cdots+\alpha_i) = \alpha_{i+1},$ in other words $\varrho_{i+1} \le \alpha_{i+1}$ which implies $\varrho_i + \cdots + \varrho_{i+1} \le \alpha_1 + \cdots + \alpha_{i+1}$ by again using the induction hypothesis. This concludes the proof. QED. Python Implementation
>>> def dominanceMeet(p,q):
... n = max( sum(p), sum(q) )
... a = []
... sp, sq, sa, prev, i = 0, 0, 0, n, 0
... while prev:
... sp += p[i] if i < len(p) else 0
... sq += q[i] if i < len(q) else 0
... prev, i = min(sp-sa, sq-sa, prev), i+1
... sa += prev
... if prev: a.append(prev)
... return a
...
>>>
>>>
>>> dominanceMeet([3,1,1,1],[2,2,2])
[2, 2, 1, 1]
>>>
The code is written in a quite imperative style. That's because it's going to be C code rather soon.
|
{}
|
# typesetting column vector
I would like to define a command which typesets a column vector.
For one vector I can have something like:
\left(
\begin{array}{c}
a\\
b\\
\end{array}
\right)
I would like the command to produce such a vector, for either 2 or 3 arguments. \colvec{a}{b}{c} should produce the same vector as above only with one more entry where \colvec{a}{b} will produce the above vector. How should I do it? I tried to overload a command name but that's impossible.
Note that you have extra space around your vector. You should probably using something like (pmatrix is part of the amsmath package)
\begin{pmatrix}a\\b\end{pmatrix}
The standard LaTeX \newcommand provides a way to have a single optional argument.
\newcommand*\colvec[3][]{
\begin{pmatrix}\ifx\relax#1\relax\else#1\\\fi#2\\#3\end{pmatrix}
}
Note that you have to use \colvec[a]{b}{c} if you want three elements or \colvec{a}{b} if you want two.
Update
As per your request in the comments, here's one that takes any number of elements based on the number passed in the first argument.
\newcount\colveccount
\newcommand*\colvec[1]{
\global\colveccount#1
\begin{pmatrix}
\colvecnext
}
\def\colvecnext#1{
#1
\ifnum\colveccount>0
\\
\expandafter\colvecnext
\else
\end{pmatrix}
\fi
}
You use it exactly as you wanted, \colvec{5}{a}{b}{c}{d}{e}.
• Can it be extended to a vector of arbitrary length? say something like: \colvec{5}{a}{b}{c}{d}{e} will produce a column vector with 5 entries? – Dror Sep 4 '10 at 9:18
• Yeah, you could do that. Seems like more hassle than it's worth though. How about \newcommand*\colvec[1]{\begin{pmatrix}#1\end{pmatrix} and you write \colvec{a\\b\\c\\d\\e}? – TH. Sep 4 '10 at 9:31
• Yes!!! This is what I was looking for! Simple and easy to use. Thanks! – Dror Sep 4 '10 at 10:03
• the smallpmatrix from mathtools may also be handy – daleif Apr 21 '12 at 16:36
• It's psmallmatrix not smallpmatrix. Just to clear up any mistakes (run into that problem just now). – Florian Pilz Mar 9 '13 at 10:14
This is a more "TeX" approach. The number of rows is arbitrary. The columns are aligned right by default, but can be c or l as well:
\documentclass{article}
\makeatletter
\newcommand{\Spvek}[2][r]{%
\gdef\@VORNE{1}
\left(\hskip-\arraycolsep%
\begin{array}{#1}\vekSp@lten{#2}\end{array}%
\hskip-\arraycolsep\right)}
\def\vekSp@lten#1{\xvekSp@lten#1;vekL@stLine;}
\def\vekL@stLine{vekL@stLine}
\def\xvekSp@lten#1;{\def\temp{#1}%
\ifx\temp\vekL@stLine
\else
\ifnum\@VORNE=1\gdef\@VORNE{0}
\else\@arraycr\fi%
#1%
\expandafter\xvekSp@lten
\fi}
\makeatother
\begin{document}
$\Spvek{1;-2} \quad \Spvek[l]{1;-2;3}\quad \Spvek[c]{1;-2;-3}\quad\Spvek{1;2;-3;4}$
\end{document}
Output will be:
I would to supplement the solution above about \Spvek by Peter B. Yes, it is a more "TeX" approach but pure "TeX" approach is in two lines only:
\def\spvec#1{\left(\vcenter{\halign{\hfil$##$\hfil\cr \spvecA#1;;}}\right)}
\def\spvecA#1;{\if;#1;\else #1\cr \expandafter \spvecA \fi}
$\spvec{1;2;3} + \spvec{1;2;-3;4} + \spvec{1;2}$
This solution will be work in LaTeX too because only TeX primitives are used here.
• Does anyone care to elaborate on this code? I like it and want to modify it, but anything I attempt will fail. What is going on here $##$ ? – fborchers Apr 4 '19 at 20:28
For vectors with only two elements, or any doublet you want to express in column form, there is a standard LaTeX command in math mode $\binom{a}{b}$ or alternatively ${n \choose k}$. These look nice with tight vertically lengthened parentheses.
Here is my KISS-like solution (which is just a wrapper for pmatrix from the amsmath package, inspired by Garrys answer):
\newcommand{\myvec}[1]{\ensuremath{\begin{pmatrix}#1\end{pmatrix}}}
Usage:
\myvec{x\\y\\z}
The good thing is, you don't need to specify a number of elements or anything. You can just add more elements with \\. It also looks tidier in the tex-file then using the pmatrix environment itself.
You can also use it for row-vectors:
\myvec{a&b&c}
(For completion: Matrices work aswell, obviously: \myvec{a&b \\ c&d})
Since you specified only wanting two or three arguments (not an arbitrary number of them, as others here have given solutions for), you can use the xparse package to define commands with optional braced arguments. Something like (untested)
\DeclareDocumentCommand \colvec {mmg} {%
\IfNoValueTF #3 {%
\twocolvec {#1}{#2}
}{%
\threecolvec {#1}{#2}{#3}
}%
}
Where the two intermediate functions (defined as appropriate) typeset the array as appropriate.
• Does this means that I have to define separately the function \twocolvec and \threecolvec? – Dror Sep 4 '10 at 9:17
I can offer a macro \colvec within whose argument you can supply an arbitrary amount of undelimited arguments whereof each one will be taken for a component of the one-column-vector:
\documentclass{article}
\usepackage{amsmath}
\makeatletter
%%=============================================================================
%% Paraphernalia:
%% \UD@firstoftwo, \UD@secondoftwo, \UD@Exchange, \UD@PassFirstToSecond,
%% \UD@CheckWhetherNull, \UD@CheckWhetherBlank, \UD@ExtractFirstArg
%%=============================================================================
\newcommand\UD@firstoftwo[2]{#1}%
\newcommand\UD@secondoftwo[2]{#2}%
\newcommand\UD@Exchange[2]{#2#1}%
\newcommand\UD@PassFirstToSecond[2]{#2{#1}}%
%%-----------------------------------------------------------------------------
%% Check whether argument is empty:
%%.............................................................................
%% \UD@CheckWhetherNull{<Argument which is to be checked>}%
%% {<Tokens to be delivered in case that argument
%% which is to be checked is empty>}%
%% {<Tokens to be delivered in case that argument
%% which is to be checked is not empty>}%
%%
%% The gist of this macro comes from Robert R. Schneck's \ifempty-macro:
\newcommand\UD@CheckWhetherNull[1]{%
\romannumeral0\expandafter\UD@secondoftwo\string{\expandafter
\UD@secondoftwo\expandafter{\expandafter{\string#1}\expandafter
\UD@secondoftwo\string}\expandafter\UD@firstoftwo\expandafter{\expandafter
\UD@secondoftwo\string}\expandafter\expandafter\UD@firstoftwo{ }{}%
\UD@secondoftwo}{\expandafter\expandafter\UD@firstoftwo{ }{}\UD@firstoftwo}%
}%
%%-----------------------------------------------------------------------------
%% Check whether argument is blank (empty or only spaces):
%%.............................................................................
%% -- Take advantage of the fact that TeX discards space tokens when
%% "fetching" _un_delimited arguments: --
%% \UD@CheckWhetherBlank{<Argument which is to be checked>}%
%% {<Tokens to be delivered in case that
%% argument which is to be checked is blank>}%
%% {<Tokens to be delivered in case that argument
%% which is to be checked is not blank}%
\newcommand\UD@CheckWhetherBlank[1]{%
\romannumeral\expandafter\expandafter\expandafter\UD@secondoftwo
\expandafter\UD@CheckWhetherNull\expandafter{\UD@firstoftwo#1{}.}%
}%
%%-----------------------------------------------------------------------------
%% Extract first inner undelimited argument:
%%
%% \UD@ExtractFirstArg{ABCDE} yields {A}
%%
%% \UD@ExtractFirstArg{{AB}CDE} yields {AB}
%%.............................................................................
\newcommand\UD@RemoveTillUD@SelDOm{}%
\long\def\UD@RemoveTillUD@SelDOm#1#2\UD@SelDOm{{#1}}%
\newcommand\UD@ExtractFirstArg[1]{%
\romannumeral0%
\UD@ExtractFirstArgLoop{#1\UD@SelDOm}%
}%
\newcommand\UD@ExtractFirstArgLoop[1]{%
\expandafter\UD@CheckWhetherNull\expandafter{\UD@firstoftwo{}#1}%
{ #1}%
{\expandafter\UD@ExtractFirstArgLoop\expandafter{\UD@RemoveTillUD@SelDOm#1}}%
}%
%%=============================================================================
%% Process an arbitrary amount of undelimited arguments as
%% a one-column-vector:
%%
%% The macro \colvec processes an undelimited argument which in
%% turn consists of an arbitrary amount of undelimited arguments.
%% Each of these undelimited arguments is taken for a row/component
%% of a one-column-vector.
%%
%% You can have spaces between undelimited arguments as (La)TeX does discard
%% spaces that precede undelimited arguments.
%%
%% You can omit braces with undelimited arguments that consist of a
%% single token.
%%-----------------------------------------------------------------------------
\newcommand\colvec[1]{%
\colvecloop{#1}{}%
}%
\newcommand\colvecloop[2]{%
\UD@CheckWhetherBlank{#1}{%
\begin{pmatrix}#2\end{pmatrix}%
}{%
\expandafter\expandafter\expandafter
\expandafter\expandafter\expandafter
\expandafter\UD@PassFirstToSecond
\expandafter\expandafter\expandafter
\expandafter\expandafter\expandafter
\expandafter{\expandafter\expandafter
\expandafter\UD@Exchange\UD@ExtractFirstArg{#1}{#2}\\}%
{\expandafter\colvecloop\expandafter{\UD@firstoftwo{}#1}}%
}%
}%
\makeatother
\begin{document}
\verb|$\colvec{{a}{b}{c}{d}}$| yields:
$\colvec{{a}{b}{c}{d}}$
\verb|$\colvec{ {a} {b} {c} {d} {e} }$| yields:
$\colvec{ {a} {b} {c} {d} {e} }$
\verb|$\colvec{ {a} {b} {c} {d} {e}fg }$| yields:
$\colvec{ {a} {b} {c} {d} {e}fg }$
\end{document}
Years later and not very tex-y, but for some reason I really wanted to use commas as separators and ended up putting together the following:
\usepackage{iftex}
\RequireLuaTeX
\usepackage{amsmath}
\usepackage{luacode}
\begin{luacode*}
mycolvec = { }
function mycolvec.replace(input)
tex.sprint(
"\\ensuremath{\\begin{pmatrix}"
.. string.gsub(input, ",", " \\\\ ")
.. "\\end{pmatrix}}")
end
\end{luacode*}
\def\colvec#1{\directlua{mycolvec.replace("#1")}}
Which, when added to the header, allows you to write out column vectors like this:
\colvec{1, 2, 3}
\colvec{8,6,7,5,3,0,9}
This requires that you compile your tex with lualatex instead of xetex or pdftex, but that shouldn't affect the rest of your document.
The seperator can be changed to any arbitrary character by changing the "," on the .. string.gsub(... line.
EDIT: This method breaks when entering tokens (i.e. \colvec{1 \cdot 2, 2 \cdot 3}), for the reasons described here. As such I'll recommend egreg's answer on a related question, which provides the functionality I claim here without the same problem.
• Welcome to TeX.Se. – CampanIgnis Sep 6 '18 at 1:29
This post presents another possible approach. It improves on this earlier post above. The idea is to use a similar plain TeX mechanism, but wrap it up in amsmaths high level pmatrix environment. This helps with fractions for example. The interface with parentheses is the same as used by PSTricks. Here is a working example:
\documentclass[margin=2pt]{standalone}
\usepackage{amsmath}
\makeatletter
\def\vector(#1){%
\mathchoice%
{\pmatrix@i{\vector@i#1,,}}%
{\pmatrix@ii{\vector@i#1,,}}%
{}% scriptstyle too small,
{}% ss style too small.
}
\def\vector@i#1,{\if,#1,\else{#1}\cr\expandafter\vector@i\fi}
\def\pmatrix@i#1{\begin{pmatrix}#1\end{pmatrix}}
\def\pmatrix@ii#1{\left(\!\begin{smallmatrix}#1\end{smallmatrix}\!\right)}
\makeatother
\begin{document}
$\displaystyle\vector(-3,+4,5)$ and $\vector(-3,+4,5)$
\end{document}
When typesetting vectors with lots of fractions it is advisable to increase vertical spacing with:
\renewcommand{\arraystretch}{1.1}% stretch value 1.1 for example
Because the macros make use of the @i notation the code will have to be placed inside at \makeatletter - \makeatother wrapping before document begins.
See here for a similar solution for row vectors.
• @PhelypeOleinik: done. – fborchers Apr 5 '19 at 18:10
|
{}
|
# Is electric charge truly conserved for bosonic matter?
+ 6 like - 0 dislike
321 views
Even before quantization, charged bosonic fields exhibit a certain "self-interaction". The body of this post demonstrates this fact, and the last paragraph asks the question.
Notation/ Lagrangians
Let me first provide the respective Lagrangians and elucidate the notation.
I am talking about complex scalar QED with the Lagrangian $$\mathcal{L} = \frac{1}{2} D_\mu \phi^* D^\mu \phi - \frac{1}{2} m^2 \phi^* \phi - \frac{1}{4} F^{\mu \nu} F_{\mu \nu}$$ Where $D_\mu \phi = (\partial_\mu + ie A_\mu) \phi$, $D_\mu \phi^* = (\partial_\mu - ie A_\mu) \phi^*$ and $F^{\mu \nu} = \partial^\mu A^\nu - \partial^\nu A^\mu$. I am also mentioning usual QED with the Lagrangian $$\mathcal{L} = \bar{\psi}(iD_\mu \gamma^\mu-m) \psi - \frac{1}{4} F^{\mu \nu} F_{\mu \nu}$$ and "vector QED" (U(1) coupling to the Proca field) $$\mathcal{L} = - \frac{1}{4} (D^\mu B^{* \nu} - D^\nu B^{* \mu})(D_\mu B_\nu-D_\nu B_\mu) + \frac{1}{2} m^2 B^{* \nu}B_\nu - \frac{1}{4} F^{\mu \nu} F_{\mu \nu}$$
The four-currents are obtained from Noether's theorem. Natural units $c=\hbar=1$ are used. $\Im$ means imaginary part.
Noether currents of particles
Consider the Noether current of the complex scalar $\phi$ $$j^\mu = \frac{e}{m} \Im(\phi^* \partial^\mu\phi)$$ Introducing local $U(1)$ gauge we have $\partial_\mu \to D_\mu=\partial_\mu + ie A_\mu$ (with $-ieA_\mu$ for the complex conjugate). The new Noether current is $$\mathcal{J}^\mu = \frac{e}{m} \Im(\phi^* D^\mu\phi) = \frac{e}{m} \Im(\phi^* \partial^\mu\phi) + \frac{e^2}{m} |\phi|^2 A^\mu$$ Similarly for a Proca field $B^\mu$ (massive spin 1 boson) we have $$j^\mu = \frac{e}{m} \Im(B^*_\mu(\partial^\mu B^\nu-\partial^\nu B^\mu))$$ Which by the same procedure leads to $$\mathcal{J}^\mu = \frac{e}{m} \Im(B^*_\mu(\partial^\mu B^\nu-\partial^\nu B^\mu))+ \frac{e^2}{m} |B|^2 A^\mu$$
Similar $e^2$ terms also appear in the Lagrangian itself as $e^2 A^2 |\phi|^2$. On the other hand, for a bispinor $\psi$ (spin 1/2 massive fermion) we have the current $$j^\mu = \mathcal{J}^\mu = e \bar{\psi} \gamma^\mu \psi$$ Since it does not have any $\partial_\mu$ included.
"Self-charge"
Now consider very slowly moving or even static particles, we have $\partial_0 \phi, \partial_0 B \to \pm im\phi, \pm im B$ and the current is essentially $(\rho,0,0,0)$. For $\phi$ we have thus approximately $$\rho = e (|\phi^+|^2-|\phi^-|^2) + \frac{e^2}{m} (|\phi^+|^2 + |\phi^-|^2) \Phi$$ Where $A^0 = \Phi$ is the electrostatic potential and $\phi^\pm$ are the "positive and negative frequency parts" of $\phi$ defined by $\partial_0 \phi^\pm = \pm im \phi^\pm$. A similar term appears for the Proca field.
For the interpretation let us pass back to SI units, in this case we only get a $1/c^2$ factor. The "extra density" is $$\Delta \rho = e\cdot \frac{e \Phi}{mc^2}\cdot |\phi|^2$$ That is, there is an extra density proportional to the ratio of the energy of the electrostatic field $e \Phi$ and the rest mass of the particle $mc^2$. The sign of this extra density is dependent only on the sign of the electrostatic potential and both frequency parts contribute with the same sign (which is superweird). This would mean that classicaly, the "bare" charge of bosons in strong electromagnetic fields is not conserved, only this generalized charge is.
After all, it seems a bad convention to call $\mathcal{J}^\mu$ the electric charge current. By multiplying it by $m(c^2)/e$ it becomes a matter density current with the extra term corresponding to mass gained by electrostatic energy. However, that does not change the fact that the "bare charge density" $j^0$ seems not to be conserved for bosons.
Now to the questions:
• On a theoretical level, is charge conservation at least temporarily or virtually violated for bosons in strong electromagnetic fields? (Charge conservation will quite obviously not be violated in the final S-matrix, and as an $\mathcal{O}(e^2)$ effect it will probably not be reflected in first order processes.) Is there an intuitive physical reason why such a violation is not true for fermions even on a classical level?
• Charged bosons do not have a high abundance in fundamental theories, but they do often appear in effective field theories. Is this "bare charge" non-conservation anyhow reflected in them and does it have associated experimental phenomena?
• Extra clarifying question: Say we have $10^{23}$ bosons with charge $e$ so that their charge is $e 10^{23}$. Now let us bring these bosons from very far away to very close to each other. As a consequence, they will be in a much stronger field $\Phi$. Does their measured charge change from $e 10^{23}$? If not, how do the bosons compensate in terms of $\phi, B, e, m$? If this is different for bosons rather than fermions, is there an intuitive argument why?
This post imported from StackExchange Physics at 2015-06-09 14:50 (UTC), posted by SE-user Void
edited Jun 9, 2015
By Noether's theorem, Noether currents are conserved since they are derived from an infinitesimal symmetry; they are observable iff they are gauge invariant. Are you missing something in the answer by Qmechanic?
@ArnoldNeumaier I added an extra clarifying question to what bugs me. I am well aware about the conservation and observability, I mainly wanted to inquire about the deeper physical explanation of these facts.
The charge doesn't change, as it is an integral over the whole space - only the charge density develops a very localized peak. What should need compensation?
Note that bare stuff doesn't matter; it is irrelevant scaffolding removed by renormalization.
Just a dumb idea: Maybe this is somehow related to the fact in the SM, introducing mass-terms for the bosons simply as $\frac{1}{2}m\phi^{*}\phi$ without a higgs field or mechanism breaks the gauge symmetry, and therefore is no conserved current corresponding to the by the mass term broken symmetry?
@Dilaton: Yes, there seems to be something funky about massive or charged elementary bosons. I was just hoping there is an established argument what exactly is the crux of this funkiness -- perhaps through such things as charged pions and their relation to $U(1)$.
@drake I just meant that for example Proca mass terms such as $\frac{1}{2}m^2 B^{*\nu} B_{\nu}$ break gauge symmetries such as $U(1)$, and could therefore spoil charge conservation.
@Dilaton I don't get your point... Here the gauge field is $A$, which doesn't have any mass term. In the SM one wants to give mass to the gauge fields. I think you are wrong.
+ 3 like - 0 dislike
1. In contrast to QED with fermionic matter, in QED with bosonic matter, the full Noether current ${\cal J}^{\mu}$ (for global gauge transformations) tends to depend explicitly on the gauge potential $A^{\mu}$, see e.g. Refs. 1-2 and this Phys.SE post.
2. The reason for this difference is because the QED Lagrangian for fermionic (bosonic) matter typically contains one (two) spacetime derivative(s) $\partial_{\mu}$, which after minimal coupling $\partial_{\mu}\to D_{\mu}$ leads to e.g. no (a) quartic matter-matter-photon-photon coupling term, respectively.
3. The full Noether current ${\cal J}^{\mu}$ is a gauge-invariant and conserved quantity, $d_{\mu }{\cal J}^{\mu} \approx 0$. [Here $d_{\mu}\equiv\frac{d}{dx^{\mu}}$ means a total spacetime derivative, and the $\approx$ symbol means equality modulo eom.] The electric charge $Q=\int \! d^3x ~{\cal J}^{0}$ is a conserved quantity.
4. The only physical observables in a gauge theory are gauge-invariant quantities. The quantity $j^{\mu}$, which OP calls the "bare current", is not gauge-invariant, and hence not a consistent physical observable to consider.
5. As Trimok mentions in a comment, the situation for non-Abelian (as opposed to Abelian) Yang-Mills is radically different. The full Noether current ${\cal J}^{\mu a}$ (for global gauge transformations) is a conserved $d_{\mu }{\cal J}^{\mu a} \approx 0$, but ${\cal J}^{\mu a}$ is not gauge-invariant (or even gauge covariant), and hence not a consistent physical observable to consider. There is not a well-defined observable for color charge that one can measure. This follows also from Weinberg-Witten theorem (for spin 1): A theory with a global non-Abelian symmetry under which massless spin-1 particles are charged does not admit a gauge- and Lorentz-invariant conserved current, cf. Ref. 3.
References:
1. M. Srednicki, QFT, Chapter 61.
2. M.D. Schwartz, QFT and the Standard Model, Section 8.3 and Chapter 9.
3. M.D. Schwartz, QFT and the Standard Model, Section 25.3.
This post imported from StackExchange Physics at 2015-06-09 14:50 (UTC), posted by SE-user Qmechanic
answered Sep 24, 2014 by (2,860 points)
Yes, some of these are the observations which lead me to this question. But say we have a macroscopic material with bosonic charged particles, object it to a very strong electrostatic field and measure it's charge. Would we have to be measuring $\mathcal{J}^0$ under all conditions? I guess 3. implies yes, and that means we would measure the object to have a charge different from the zero field situation. The extra "non-bare" charge obviously comes from the field, but this is a very different notion from the usual intuition of "charge".
This post imported from StackExchange Physics at 2015-06-09 14:50 (UTC), posted by SE-user Void
${\cal J}^{\mu}$ is a covariant quantity, then it should verify $D_\mu {\cal J}^{\mu}=0$, but a conserved quantity corresponds to $\partial_\mu {\cal J}^{\mu}=0$. So, here, are covariant and conserved current compatible notions ? (for instance, this is not the case in Yang-Mills theories).
This post imported from StackExchange Physics at 2015-06-09 14:50 (UTC), posted by SE-user Trimok
This post imported from StackExchange Physics at 2015-06-09 14:50 (UTC), posted by SE-user Qmechanic
+ 1 like - 0 dislike
I have actually taken the time to compute the equations of motion and the situation is more complicate than I previously thought. The Lagrangian in the static situation $\vec{A} = 0, \partial_t \to 0$ reads
$$\mathcal{L} = -\frac{1}{2} |\nabla \phi|^2 - \frac{1}{2} m^2 |\phi|^2 + e^2 |\Phi|^2 |\phi|^2 + \frac{1}{2} |\nabla \Phi|^2$$
$$(\Delta - m^2 + 2 e^2 |\Phi|^2) \phi = 0$$
$$(\Delta - 2 e^2 |\phi|^2) \Phi = 0$$
Amongst other things, this implies that minimally coupled bosons do not act as a usual source of the electromagnetic field at all. As it stands (a more detailed analysis of the non-stationary equations might show otherwise), the bosons actually "easen" their motion (effectively loose mass) in the presence of the electromagnetic field at the cost of weakening (rendering massive and short-range) the electromagnetic field.
The coupling constant $e$ really does not have any reasonable interpretation in terms of a usual charge. For instance, the sign of $e$ is irrelevant and the particles and antiparticles of quantized $\phi$ have the same effect on $\Phi$. The $U(1)$ charge is just a conserved quantity with no intuitive interpretation in terms of the usual charge. Hence, the original form of the question does not have a proper meaning; $U(1)$ coupling for bosons simply means something totally different than for fermions.
(If you have any more observations or a different view, please contribute, I am interested.)
answered Jun 10, 2015 by (1,630 points)
Are you allowed to simply put $A=0$? It changes the dynamics.
@ArnoldNeumaier: If we still hold $\partial_t \to 0$ a nonzero $\vec{A}$ would only make $|\Phi|^2 \to |\Phi|^2 - |A|^2$ and an extra $\vec{A}$ equation coupled to $\phi$ similarly as in the $\Phi$ case.
+ 0 like - 0 dislike
Dear mods, I am sorry this answer is not graduate-upward level, but I have not been able to come up with a more sophisticated one.
1) Yes, the charge is truly conserved but the respective current depends on the 4-potential A. What it is confusing you, I think, is that the current for a scalar field depends on 4-potential $A$, whereas that of a spin-1/2 does not. This is obviously related to the number of derivatives in the Lagrangian kinetic term and, likewise, to the number of derivatives in the current. It can help you understand what it's going on to adopt the canonical formalism (also known as the language of gentlemen), in which in both cases the density (and the charge too) involves the product of the canonical momentum and the field, as it could not be otherwise because the charge is nothing else but the infinitesimal generator of $U(1)$ transformations for both the field and the canonical momentum.
2) What you call the "bare charge", which probably is not a good name since this term is reserved for something else, lacks of physical content before fixing a gauge, as it is not a gauge invariant quantity. Note however that one can always choose one's favorite gauge. And if one picks the temporal gauge ($A_0 = 0$), the charge does not depend on the 4-potential and the form is the same as your "bare charge", which is conserved in this gauge.
3) The only difference in the movement of spin-one-half particles and spin zero-particles in an electromagnetic field is a term proportional to $\sigma_{\mu\nu}\, F^{\mu\nu}$
in the equation for spin-1/2 particles. This term gives rise to the term
$\bf{S}\cdot \bf{B}$
in the non-relativistic limit, that is, the interaction between the spin of the particle and the magnetic field.
4) It can help you to get the equation in your answer to first think of the equation of motion in the non-relativist limit, which is the Schrödinger equation in an electromagnetic field, that is, the Schrödinger equation replacing partial derivatives with gauge-covariant ones (for scalar particles, for spin-1/2 there is the additional term I wrote above).
answered Jun 12, 2015 by (875 points)
edited Jun 12, 2015 by drake
+ 0 like - 4 dislike
The charge $e$ introduced into your Lagrangians/equations is a constant in time by definition, no Noether theorem is necessary to "conserve" it: $\frac{de}{dt}=0$.
Another thing is your equations/theory or "charge definition" via equations/solutions (as an integral bla-bla-bla). Here everything depends on your equations. Do not think that equations for bosons are already well established and finalized. For one formulation you get one result, for another you do another. So, there is no 'truly" thing, keep it firmly in your mind!
answered Jun 9, 2015 by (132 points)
edited Jun 9, 2015
Please use answers only to (at least partly) answer questions. To comment, discuss, or ask for clarification, leave a comment instead. To mask links under text, please type your text, highlight it, and click the "link" button. You can then enter your link URL. Please consult the FAQ for as to how to format your post. This is the answer box; if you want to write a comment instead, please use the 'add comment' button. Live preview (may slow down editor) Preview Your name to display (optional): Email me at this address if my answer is selected or commented on: Privacy: Your email address will only be used for sending these notifications. Anti-spam verification: If you are a human please identify the position of the character covered by the symbol $\varnothing$ in the following word:p$\hbar$ysicsOve$\varnothing$flowThen drag the red bullet below over the corresponding character of our banner. When you drop it there, the bullet changes to green (on slow internet connections after a few seconds). To avoid this verification in future, please log in or register.
|
{}
|
# Enhanced image/video quality through artifact evaluation
Imported: 13 Feb '17 | Published: 18 Jan '11
Suhail Jalil, Khaled Helmi El-Maleh, Chienchung Chang
USPTO - Utility Patents
## Abstract
In an image/video encoding and decoding system employing an artifact evaluator a method and/or apparatus to process video blocks comprising a decoder operable to synthesize an un-filtered reconstructed video block or frame and an artifact filter operable to receive the un-filtered reconstructed video block or frame, which generates a filtered reconstructed video block or frame. A memory buffer operable to store either the filtered reconstructed video block or frame or the un-filtered reconstructed video block or frame, and an artifact evaluator operable to update the memory buffer after evaluating and determining which of the filtered video block or frame, or the un-filtered video block or frame yields better image/video quality.
## Description
### TECHNICAL FIELD
This disclosure relates to digital image and video processing and, more particularly, enhanced image/video quality through artifact evaluation.
### BACKGROUND
Digital video capabilities may be incorporated into a wide range of devices, including digital televisions, digital direct broadcast systems, wireless communication devices, personal digital assistants (PDAs), laptop computers, desktop computers, digital cameras, digital recording devices, mobile or satellite radio telephones, and the like. Digital video and picture devices can provide significant improvements over conventional analog video and picture systems in creating, modifying, transmitting, storing, recording and playing full motion video sequences and pictures. Video sequences (also referred to as video clips) are composed of a sequence of frames. A picture can also be represented as a frame. Any frame or part of a frame from a video or a picture is often called an image.
Digital devices such as mobile phones and hand-held digital cameras can take both pictures and/or video. The pictures and video sequences may be stored and transmitted to another device either wirelessly or through a cable. Prior to transmission the frame may be sampled and digitized. Once digitized, the frame may be parsed into smaller blocks and encoded. Encoding is sometimes synonymous with compression. Compression can reduce the overall (usually redundant) amount of data (i.e., bits) needed to represent a frame. By compressing video and image data, many image and video encoding standards allow for improved transmission rates of video sequences and images. Typically compressed video sequences and compressed images are referred to as encoded bitstream, encoded packets, or bitstream. Most image and video encoding standards utilize image/video compression techniques designed to facilitate video and image transmission with less transmitted bits than those used without compression techniques.
In order to support compression, a digital video and/or picture device typically includes an encoder for compressing digital video sequences or compressing a picture, and a decoder for decompressing the digital video sequences. In many cases, the encoder and decoder form an integrated encoder/decoder (CODEC) that operates on blocks of pixels within frames that define the video sequence. In standards, such as the International Telecommunication Union (ITU) H.264 and Moving Picture Experts Group (MPEG)-4, Joint Photographic Experts Group (JPEG), for example, the encoder typically divides a video frame or image to be transmitted into video blocks referred to as “macroblocks.” A macroblock is typically 16 pixels high by 16 pixels wide. Various sizes of video blocks may be used. Those ordinarily skilled in the art of image and video processing recognize that the term video block, or image block may be used interchangeably. Sometimes to be explicit in their interchangeability, the term image/video block is used. The ITU H.264 standard supports processing 16 by 16 video blocks, 16 by 8 video blocks, 8 by 16 image blocks, 8 by 8 image blocks, 8 by 4 image blocks, 4 by 8 image blocks and 4 by 4 image blocks. Other standards may support differently sized image blocks. Someone ordinarily skilled in the art sometimes use video block or frame interchangeably when describing an encoding process, and sometimes may refer to video block or frame as video matter. In general, video encoding standards support encoding and decoding a video unit, wherein a video unit may be a video block or a video frame.
For each video block in a video frame, an encoder operates in a number of “prediction” modes. In one mode, the encoder searches similarly sized video blocks of one or more immediately preceding video frames (or subsequent frames) to identify the most similar video block, referred to as the “best prediction block.” The process of comparing a current video block to video blocks of other frames is generally referred to as block-level motion estimation (BME). BME produces a motion vector for the respective block. Once a “best prediction block” is identified for a current video block, the encoder can encode the differences between the current video block and the best prediction block. This process of using the differences between the current video block and the best prediction block includes a process referred to as motion compensation. In particular, motion compensation usually refers to the act of fetching the best prediction block using a motion vector, and then subtracting the best prediction block from an input video block to generate a difference block. After motion compensation, a series of additional encoding steps are typically performed to finish encoding the difference block. These additional encoding steps may depend on the encoding standard being used. In another mode, the encoder searches similarly sized video blocks of one or more neighboring video blocks within the same frame and uses information from those blocks to aid in the encoding process.
In general, as part of the encoding process, a transform of the video block (or difference video block) is taken. The transform converts the video block (or difference video block) from being represented by pixels to being represented by transform coefficients. A typical transform in video encoding is called the Discrete Cosine Transform (DCT). The DCT transforms the video block data from the pixel domain to a spatial frequency domain. In the spatial frequency domain, data is represented by DCT block coefficients. The DCT block coefficients represent the number and degree of the spatial frequencies detected in the video block. After a DCT is computed, the DCT block coefficients may be quantized, in a process known as “block quantization.” Quantization of the DCT block coefficients (coming from either the video block or difference video block) removes part of the spatial redundancy from the block. During this “block quantization” process, further spatial redundancy may sometimes be removed by comparing the quantized DCT block coefficients to a threshold. If the magnitude of a quantized DCT block coefficient is less than the threshold, the coefficient is discarded or set to a zero value.
However, block quantization at the encoder may often cause different artifacts to appear at the decoder when reconstructing the video frames or images that have been compressed at the encoder. An example of an artifact is when blocks appear in the reconstructed video image, this is known as “blockiness.” Some standards have tried to address this problem by including a de-blocking filter as part of the encoding process. In some cases, the de-blocking filter removes the blockiness but also has the effect of smearing or blurring the video frame or image, which is known as a blurriness artifact. Hence, image/video quality suffers either from “blockiness” or blurriness from de-blocking filters. A method and apparatus that could reduce the effect of coding artifacts on the perceived visual quality may be a significant benefit.
### SUMMARY
The details of one or more embodiments are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description, drawings and claims. In general, an image/video encoding and decoding system employing an artifact evaluator that processes video blocks may enhance image/video quality. During an encoding process, a texture decoder and a video block or frame resulting from an inter-coding or intra-coding prediction mode synthesizes an un-filtered reconstructed video block or frame. The un-filtered reconstructed video block or frame is passed through an artifact filter to yield a filtered reconstructed video block or frame. The artifact filter may be a de-blocking filter or configured to be a de-blocking filter. If the artifact filter is a de-blocking filter or configured to be one, it may suppress blockiness. However, after filtering, the resulting filtered reconstructed video block or frame may be blurry. Current encoding methods and standards are limited because they do not have a way to “adaptively” change how an in-loop memory buffer is updated. Because of this limitation in current encoding methods and standards, poor image/video quality is propagated to other frames, especially for inter-coding prediction mode.
The use of an artifact evaluator may overcome the limitations of the current encoding methods and standards. The use of an artifact evaluator evaluates and determines based on perceived image/video quality when it is better to use the output of an artifact filter such as a de-blocking filter, or when it is better to use the input of an artifact filter such as a de-blocking filter to update the in-loop memory buffer. The use of an artifact evaluator may not only enhance the image/video quality of current methods and standards of the current frame, but may also offer an additional advantage of preventing poor image/video quality propagation to subsequent processed frames, especially for inter-coding prediction mode. The artifact evaluator may also be standard compliant.
For each un-filtered reconstructed video block or frame and each filtered reconstructed video block or frame, an artifact metric may be generated to measure the amount of an artifact. The artifact metric may be a non-original reference (NR) or full-original reference (FR). The difference between an NR and FR artifact metric may be based on the availability of an original video block or frame. Artifact metric generators generate the artifact metrics and are part of an artifact evaluator. After artifact metrics are generated, a decision is made based on perceived image/video quality as to which video block or frame is used in updating an in-loop memory buffer. There are variations on how to generate an artifact metric and various ways to determine if a filtered reconstructed video block or frame or an unfiltered video block or frame is used in updating an in-loop memory buffer. These variations are illustrated in the embodiments below.
In one embodiment, an artifact metric generator is used in a video encoder to generate NR artifact metrics.
In another embodiment, an artifact metric generator is used in a video encoder to generate FR artifact metrics.
In a further embodiment, either a NR or an FR artifact metric may be used to measure the amount of blockiness.
In a further embodiment, a configurable artifact metric generator may be used to output multiple artifact metrics at once.
In even a further embodiment, a decision to determine which video block or frame should be used to update an in-loop memory buffer is based on only one type of metric, e.g., a blockiness (or-deblockiness) metric.
In another embodiment, a decision to determine which video block or frame should be used to update an in-loop memory buffer may be based on multiple types of metrics, e.g., a blockiness (or-deblockiness) metric and a blurriness metric.
Some of the embodiments described above may be combined to form other embodiments.
The details of one or more embodiments are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description, drawings and claims.
### DETAILED DESCRIPTION
The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment, configuration or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments or designs. In general, described herein, is a novel method and apparatus to not only evaluate artifacts but to improve the perceived image/video quality as a result of the evaluation.
FIG. 1A illustrates an image/video encoding and decoding system 2 that may employ an artifact evaluator based on techniques in accordance with an embodiment described herein. As shown in FIG. 1A, the source device 4a contains a capture device 6 that captures the video or picture input before sending the video sequence or image to display device 8. The video sequence or image may be sent to memory 10 or image/video processing unit 14. From image/video processing unit 14 the video sequence or image may also be written into memory 10. The input that image/video processing unit 14 receives from memory 10 or from capture device 6 may be sent to a image/video encoder. The image/video encoder may be inside image/video processing unit 14. The encoded bitstream output by the video encoder may be stored or sent to transmitter 16. Source device 4a transmits the encoded bit-stream to receive device 18a via a channel 19. Channel 19 may be a wireless channel or a wire-line channel. The medium may be air, or any cable or link that can connect a source device to a receive device. For example, a receiver 20 may be installed in any computer, PDA, mobile phone, digital television, DVD player, image/video test equipment, etcetera, that drives an image/video decoder 21 to decode the above mentioned encoded bitstream. The output of the image/video decoder 21 may send the decoded signal to display device 22 where the decoded signal may be displayed.
The source device 4a and/or the receive device 18a in whole or in part may comprise a “chip set” or “chip” for a mobile phone, including a combination of hardware, software, firmware, and/or one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or various combinations thereof. In addition, in another embodiment, the image/video encoding and decoding system 2 may be in one source device 4b and one receive device 18b as part of a CODEC 24. Thus, source device 4b and receive device 18b illustrate that a source and receive device may contain at least one CODEC 24 as seen in FIG. 1B. CODEC 24 is made up of image/video encoder 23 and image/video decoder 21 and may be located in an image/video processing unit 14.
FIG. 2 illustrates a video sequence, known as a Group Of Pictures (GOP) 130. Inter-coding prediction mode encoding is typically used to compensate for both temporal and spatial differences between video blocks in different frames. Intra-coding prediction mode encoding is used to compensate for spatial differences between video blocks in the same frame. Both inter-coding and intra-coding modes are known as prediction modes because they use previous (or future buffered) information to aid in the current encoding of a video block. In some standards, an I-frame 31 will typically denote the first frame of a scene or a sequence of frames that is different in content than previous frames. I-frame typically uses intra-coding mode. Both B-frame(s) 33 and P-frame(s) 35 may use intra or inter coding modes. P-frame(s) 35 may use previous frames as a reference for encoding, while B-frame(s) 33 may use both previous and future frames as a reference for encoding. In the ITU H.264 standard, however, any frame (I-frame, P-frame, B-frame) may be used as a reference for encoding. Future frames may be used because frames are usually buffered and data from past or future frames in the buffer may be used for a current frame being encoded.
FIG. 3 illustrates an exemplary image/video encoder that may be used in a device of FIG. 1A or FIG. 1B. Frames or part of frames from a video sequence may be placed in an input frame buffer 42 inside an image/video encoder 23 that may be part of CODEC 24 and/or inside image/video processing unit 14. An input frame from input frame buffer 42 may be parsed into blocks (the video blocks may be of any size, but standard square video block sizes are 4×4, 8×8, or 16×16) and sent to video block buffer 43. Video block buffer 43 typically sends a video block to subtractor 44. Subtractor 44 subtracts video block x from the output of switch 46. switch 46 may switch between intra-coding and inter-coding prediction modes of encoding If switch 46 is enabling an inter-coding prediction mode, then the resulting difference from x and a video block from a different (previous or subsequent) frame is compressed through texture encoder 47. If switch 46 enables an intra-coding prediction mode, then the resulting difference from x and predicted value from a previous video block in the same frame is compressed through texture encoder 47.
Texture encoder 47 has a DCT block 48 which transforms the input x (the video block or difference block) from the pixel domain to a spatial frequency domain. In the spatial frequency domain, data is represented by DCT block coefficients. The DCT block coefficients represent the number and degree of the spatial frequencies detected in the video block. After a DCT is computed, the DCT block coefficients may be quantized by quantizer 50, in a process is known as “block quantization.” Quantization of the DCT block coefficients (coming from either the video block or difference video block) removes part of the spatial redundancy from the block. During this “block quantization” process, further spatial redundancy may sometimes be removed by comparing the quantized DCT block coefficients to a threshold. This comparison may take place inside quantizer 50 or another comparator block (not shown). If the magnitude of a quantized DCT block coefficient is less than the threshold, the coefficient is discarded or set to a zero value.
After block quantization, the resulting output may be sent to two separate structures: (1) a texture decoder 65, and (2) an entropy encoder 55. Texture decoder 65 comprises a de-quantizer 66 which aids in the production of a reconstructed image/video block or frame; to be used with a coding prediction mode. The entropy encoder 55 produces a bitstream for transmission or storage. Entropy encoder 55 may contain a scanner 56 which receives the block quantized output and re-order it for more efficient encoding by variable length coder (VLC) 58. VLC 58 may employ the use of run-length and huffman coding techniques to produce an encoded bit-stream. The encoded bitstream is sent to output buffer 60. The bitstream may be sent to rate controller 62. While maintaining a base quality, rate controller 62 budgets the number of quantization bits used by quantizer 50. Entropy encoding is considered a non-lossy form of compression. Non-lossy compression signifies that the data being encoded may be identically recovered if it is decoded by an entropy decoder without the encoded data having been corrupted. Entropy encoder 55 performs non-lossy compression.
Lossy compression means that as a result of the encoding, an input, x, will not produce an identical copy of x even though the encoded input has not been corrupted. The reconstructed input has “lost” part of its information. Texture encoder 47 performs lossy compression. A typical image/video encoder 23 usually has a local texture decoder 65 to aid in the compensation of both the inter-coding and intra-coding prediction modes. de-quantizer 66, inverse DCT 68, and the output of switch 46 that is sent to adder 69 work together to decode the output of texture encoder 47 and reconstruct the input x that went into texture encoder 47. The reconstructed input, y, looks similar to x but is not exactly x. A general image/video “decoder” typically comprises the functionality of the de-quantizer 66, inverse DCT 68, and the output of switch 46 that is sent to adder 69.
In some standards, such as MPEG-4 and H.263 baseline profile, the use of a de-blockng filter 70 is not present. In MPEG-4 and H.263 baseline profile, a de-blocking filter is optional as a post-processing step in the video decoder of a receive device. Other standards, such as ITU H.264, Windows Media 9 (WM9), or Real Video 9 (RV9), support enabling the use of de-blocking filter 70, known as an “in-loop” de-blocking filter. De-blocking filter 70 is used to remove the “blockiness” that appears when the reconstructed input, y, has blocks present. As mentioned previously, in some cases, the de-blocking filter removes the blockiness but also has the effect of blurring the video frame or image. There is a tradeoff between the blockiness artifact and the blurriness artifact. Enabling de-blocking filter 70 may reduce blockiness, but it may degrade the perceived visual quality by blurring the image. The standards that enable the use of de-blocking filter 70 always update memory buffer 81 with filtered reconstructed video block or frame, {tilde over (y)}. Of great benefit would be to find a way to determine when it is better to use the output of a de-blocking filter 70, or when it is better to use the input of de-blocking filter 70 to update memory buffer 81. Various embodiments in this disclosure identify and solve the limitation of previous standards. Various embodiments in this disclosure teach ways to evaluate and determine when it is better to use the output of an artifact filter such as de-blocking filter 70, or when it is better to use the input of an artifact filter such as de-blocking filter 70.
As mentioned, in some standards, when de-blocking filter 70 is enabled, the output may be sent to memory buffer 81. Inside memory buffer 81 there may be two memory buffers: (1) reconstructed new frame buffer 82; and (2) reconstructed old frame buffer 84. Reconstructed new frame buffer 82, stores the currently processed reconstructed frame (or partial frame). Reconstructed old frame buffer 84 stores a past processed reconstructed frame. The past processed reconstructed frame is used as a (reconstructed) reference frame. The reconstructed reference frame may be a frame that is before or after the current frame in input frame buffer 42. The current frame (or a video block from the current frame) or differences between the current frame and the reconstructed reference frame (or a video block from the difference block) is what is “currently” being encoded. After the current frame has finished encoding and before the next frame in input from input frame buffer 42 is fetched to be encoded, the reconstructed old frame buffer 84 is updated with a copy with the contents of the reconstructed new frame buffer 82.
Reconstructed new frame buffer 82 may send the reconstructed video block it received to be used in spatial predictor 86. Reconstructed old frame buffer 84 sends a past processed reconstructed video block to MEC (motion estimation and compensation block) 87. MEC block comprises motion estimator 88 and motion compensator 90. motion estimator 88 generates motion vectors (MV) 92 and motion vector predictors (MVP) 94 that may be used by motion compensator 90 to compensate for differences from other frames than the one being encoded. MVs 92 may also be used by entropy encoder 55. In some standards, such as ITU H.264, the output of spatial predictor 86 is used in intra-frame prediction mode and fed back both to subtractor 44 and adder 69. In some standards, such as MPEG-4 or JPEG, there is no spatial predictor 86.
FIG. 4A appears similar to FIG. 3. However, only for illustrative purposes, rate controller 62 and entropy encoder 55 are omitted in FIG. 4A and subsequent figures. In addition, the de-blocking filter 70 of FIG. 3 has been replaced with a more general filter, the artifact filter 72, in FIG. 4A and subsequent figures. The intent of the replacement is to convey that a general artifact filter may be used “in-loop.” As mentioned previously, artifacts may appear when reconstructing frames that have been compressed during decoding. Some examples of artifacts are blockiness, blurriness, ringing, and color bleeding blockiness is caused by independent quantization of individual video blocks. Blurriness is caused by suppression of high-frequency coefficients through coarse quantization or truncation of high frequency DCT coefficients. Blurriness may also occur through low pass filtering or smoothening. Ringing ripples along the high-contrast edge location and may be caused by quantization or truncation of high frequency coefficients. Color bleeding may occur at strongly differing chrominance area caused by the suppression of high-frequency coefficients of chroma components.
One of the most commonly used metrics to measure image and video quality is the peak signal to noise ratio (PSNR) and is defined in Equation 1 as follows:
$PSNR ( x , y ) = 10 * log 10 ( PKS ( coding_error ) ) , ( Equation 1 )$
where PKS stands for peak pixel value squared and is usually 2552.
The coding_error is often computed by taking the Mean Squared Error (MSE) of the difference in pixels between a pair of video blocks. The pair may consist of a video block, x, from the original reference frame and a video block, y, from a reconstructed frame. The PSNR is a function of the coding_error between a pair of video blocks. Coding_error indicates the amount of similarity between pixels in the video blocks being compared. More similar pixels lead to a larger PSNR. A smaller PSNR means that less pixels are similar. In addition, the PSNR may also be used to indicate a measure of the average coding error. The average coding_error is denoted by <coding_error>, and may be generated by taking a running average of the coding_error. In this later case, the PSNR is a measure of the coding_error over the frame. Even though PSNR is a function of the coding_error, a smaller coding_error does not always yield good image and video quality as perceived by the user. As an example, an image of a tiled wall or floor may appear blurry after a de-blocking filer has been applied. The boundary between tiles, the edge, may only represent a small fraction of the overall image. Thus, when the coding_error is computed pixel by pixel, the resulting PSNR may indicate that the image and video quality is good even though the edges of the tiles are blurry. If the de-blocking filter is not applied to the reconstructed image, the tiles edges may appear blocky. In a case such as this, the PSNR is undesirably limiting in measuring perceived image and video quality.
The limitation of the PSNR may be overcome by a new metric, the artifact signal to noise ratio (ASNR). The ASNR metric offers a method to measure the lack (or presence) of an artifact. A version of the ASNR metric, ASNR(y or {tilde over (y)}), may be generated by artifact metric generator 101 of FIG. 4B. A different version of the ASNR metric, ASNR(x, y or {tilde over (y)}), may be generated by artifact metric generator 101 if the optional input x is used. Dashed lines are drawn into artifact metric generator 101 to illustrate that input x is optional. The ANSR metric may have various instantiations.
Two frameworks that may be used when measuring encoding artifacts or coding_error are: (1) non-original reference (NR); or (2) full-original reference (FR). An example of the NR framework is shown in FIG. 5A. FIG. 5A illustrates one aspect in which the artifact metric generator 101 of FIG. 4B may be used. Artifact metric generator 101a in FIG. 5A aids in the evaluation of perceived image and video quality with video blocks from only a reconstructed (REC, without an original frame) video block or frame. The non-reference frame may be any frame which is not the original frame. Typically, a video block or frame that has been compressed and reconstructed. An example of the FR framework is shown in FIG. 5B. FIG. 5B is a block diagram illustrating one aspect in which the artifact metric generator 101 with optional original input, x, of FIG. 4B may be used. Artifact metric generator 101b in FIG. 5B aids in the evaluation of perceived image and video quality with video blocks from both the original (reference) input, x, and a non-original (reconstructed) (REC, y or {tilde over (y)}) video block or frame.
In general, the output of an artifact metric generator is a measure of the amount of the artifact. When the artifact is blockiness, an instantiation of the ASNR metric may be used. The instantiation is the de-blocking signal to noise ratio (DSNR) metric, which measures the lack or presence of blockiness. In an NR framework the generation performed by an artifact metric generator is only based on a reconstructed frame. If the artifact filter 72 is a de-blocking filter, the top artifact metric generator 101 in FIG. 4B may output DSNR(y) if x is not present. DSNR(y) is a measure of the amount of the blockiness of video block y, a reconstructed video block. If the artifact filter 72 is a de-blocking filter, the bottom artifact metric generator 101 in FIG. 4B may output DSNR({tilde over (y)}) if x is not present. DSNR({tilde over (y)}) is a measure of the amount of the blockiness of video block {tilde over (y)}, the artifact filtered video block. DSNR(y) or DSNR({tilde over (y)}), written as DSNR(y or {tilde over (y)}) are non-original reference (NR) metrics.
If the original input, x, is fed into artifact metric generator 101 in FIG. 4B, a FR framework may be used to generate a metric. The metric in an FR framework is a measure of the amount of the artifact of the non-reference frame relative to the original reference frame. If the artifact filter 72 is a de-blocking filter, the top artifact metric generator 101 in FIG. 4B may output DSNR (x, y). DSNR(x, y) is a measure of the amount of the blockiness of video block y relative to video block x. If the artifact filter 72 is a de-blocking filter, the bottom artifact metric generator 101 may output DSNR(x, {tilde over (y)}). DSNR(x, {tilde over (y)}) is a measure of the amount of the blockiness of video block {tilde over (y)} relative to video block x. DSNR(x, y) or DSNR(x, {tilde over (y)}), written as DSNR(x, y or {tilde over (y)}) are full-original reference (FR) metrics.
In order to measure the amount of blockiness in an image or frame, a Mean Square Difference of Slope (MSDS) metric is sometimes used to determine the amount of blockiness in the reconstructed image or frame. However, the MSDS metric does not differentiate between blockiness in the actual texture of the original image or frame and blockiness introduced by the block quantization step of a video encoder. Moreover, the use of the MSDS metric does not exploit the use of human visual perception.
The limitation of the MSDS may be overcome by the DSNR metric. The DSNR metric may have various forms since it is used to better evaluate the image and video quality of blocked-based video encoders by accounting for the different types of blockiness and taking into account human visual perception. As mentioned, the DSNR metric is an instantiation of the ASNR metric.
A general form of the artifact signal to noise ratio (ASNR) metric is shown in Equation 2 as follows:
$ASNR ( x , y ) = 10 * log 10 ( PKS · W S · W P · W T F ( x , y ) ) , ( Equation 2 )$
where PKS stands for peak pixel value squared and is usually 2552. The numerator of Equation 2 contains the product of PKS, WS, WP, and WT. WS, WP, and WT are weights selected to account for the spatial (WS), perceptual (WP) and temporal (WT) factors that affect image and video quality. The denominator of Equation 2 is F(x, y) and may be a joint or disjoint function of x and y. If x is not available, F(x, y) may be replaced by F(y). It should also be noted that y, the un-filtered reconstructed video block or frame may be replaced by {tilde over (y)}, the filtered reconstructed video block or frame.
One of the functions that may be used for F(x, y) is the MSDS_error(x, y). The usage of the MSDS_error(x, y) is typically done when the DSNR metric instantiation of the ASNR metric is used. In one aspect, the MSDS_error(x, y) may be the squared error between the MSDS(x) and MSDS (y). In another aspect, the MSDS_error(x, y) may be the absolute value of the error between the MSDS(x) and MSDS(y). The MSDS_error(x, y) may have other variants, but in an FR framework, will often be a function of the error between MSDS(x) and MSDS (y). In an NR framework, the MSDS_error(x, y) may be replaced with at least two different MSDS calculations that may be compared to each other. For example, MSDS(y) and MSDS({tilde over (y)}) may be used. MSDS(x) is a function of an input video block, x, from the original reference frame. MSDS(y or {tilde over (y)}) is a function of a video block, y or {tilde over (y)}, from a reconstructed frame.
The Mean Square Difference of Slopes (MSDS) is often calculated at all video block boundaries, and with three different types of slopes near the boundary between a pair of adjacent video blocks. The three different types of slopes are usually calculated between pixels on the same pixel row. Consider two adjacent video blocks with L rows directly next to each other. The last two columns of pixels in the first video blocks are next to the first two columns of pixels in the second video block. A Type1 slope is calculated between a pixel in the last column and a pixel in the penultimate column of the first video block. A Type2 slope is calculated between a pixel in the first column and a pixel in the second column of the second video block. A Type3 slope is calculated between a pixel in the first column of the second video block and the last column of the first video block.
Typically the MSDS is illustrated as being calculated over a common row of pixels as in Equation 3:
$MSDS ( pixels ( i ) ) = [ Type_ 3 _slope - ( Type_ 1 _slope + Type_ 2 _slope 2 ) ] 2 , ( Equation 3 )$
where pixels(i) represent the ith group of pixels that is involved in the calculation in any of the L rows, in this case any ith group contains six pixels. For each video block boundary, MSDS(pixels(i)) is averaged over L rows. An overall (average) MSDS for each video block and video block boundary would be written as in Equation 4 below:
$MSDS ( b ) = 1 L ∑ i = 1 L MSDS ( pixels ( i ) ) , ( Equation 4 )$
where L is the number of rows that defines the boundary of the video block.
However, since a column is an array of pixels, all slopes of the same type may be calculated in parallel. This parallel calculation is called a gradient. Thus, when calculating the MSDS near the boundary between a pair of adjacent video blocks, three gradients may be computed: (1) pre_gradient (for Type 1 slopes); (2) post_gradient (for Type 2 slopes); and (3) edge_gradient (for Type 3 slopes). The computed gradient is a vector. As such, parallel instances of Equation 4 may be calculated with Equation (5) below:
$MSDS ( b ) = L 2 _NORM [ edge_gradient - ( pre_gradient + post_gradient 2 ) ] 2 L , ( Equation 5 )$
where b represents any video block. MSDS(b) is calculated at the boundaries between a pair of adjacent video blocks for an ith group of pixels (i=1 . . . L).
By squaring the L2 norm of the difference vector (edge_gradient-_average (pre_gradient, post_gradient)), Equation 5 may be implemented. A norm is a mathematical construct. The L2 norm is a type of norm and may be used to calculate the magnitude of a vector. To calculate the magnitude, the L2 norm takes the square root of the sum of squares of the components of a vector. Although the MSDS is often calculated as shown in Equations 4 and 5, variants may exist which do not square the difference between the edge_gradient and the average of the pre_gradient and post_gradient. For example, an L1 norm may be used instead. The embodiments enclosed herein, encompass and apply to any variant that uses Type 1, Type 2 and Type 3 slopes.
As mentioned, using the MSDS for F(x, y) yields an instantiation of the ASNR metric, the DSNR metric. Similarly, using other known metrics in place of F(x, y) may be used to yield other instantiations of the ASNR metric. The general FR form of the de-blocking signal to noise ratio (DSNR) metric is defined in Equation 6 below,
$DSNR ( x , y ) = 10 * log 10 ( PKS · W S · W P · W T MSDS_error ( x , y ) ) . ( Equation 6 )$
The general NR form of the DSNR metric is defined in Equation 7 below,
$DSNR ( y ) = 10 * log 10 ( PKS · W S · W P · W T MSDS ( y ) ) . ( equation 7 )$
FIG. 5A illustrates one aspect of the artifact metric generator used in FIG. 4B, with only a reconstructed image/video block(s) or frame. Artifact metric generator 101a in FIG. 5A generates a DSNR metric without an original reference. To evaluate the de-blocking artifact, a comparison (not shown) may be done between DSNR(y) and DSNR({tilde over (y)}). The numerator of the DSNR metric shown by Equation 6 or Equation 7, may be generated in artifact metric generator 101a by using a weight value selector (WVS) Bank 103 composed of three weight value selectors: (1) spatial WVS 104 which outputs weight, WS; (2) perceptual WVS 105 which outputs weight, WP; and temporal WVS 106 which outputs weight, WT. The weights WS, WP, and WT, may be pre-selected or selected during the encoding process from input parameters ZS, ZP, and ZT. The input parameters ZS, ZP, and ZT may be generated during the encoding process or prior to the encoder running. Numerator producer 107 computes the product of PKS, WS, WP, and WT seen in the numerator of Equation 6 or Equation 7. When weights WS, WP, and WT all equal 1, the numerator contribution of the DSNR metric is the same as the numerator of the PSNR in Equation 1. Although one multiplier 108 is sufficient in numerator producer 107, two are shown to emphasize the effect of having WS, WP, and WT in the numerator.
The denominator of the DSNR metric shown by Equation 7 may be carried out in artifact metric generator 101a. The input is REC (a reconstructed video block or frame), and thus F(x, y) in Equation 2 is only a function of REC, F(y or {tilde over (y)}). FIG. 5A shows an example when F(y or {tilde over (y)}) is MSDS(y or {tilde over (y)}). The reconstructed input, REC, may be either y or {tilde over (y)}, and MSDS 112 computes MSDS(y) and MSDS({tilde over (y)}) as seen in either Equation 4 or Equation 5.
Divider 109 divides the output of numerator producer 107 (PKS*WS*WP*WT) by the output of MSDS 112, MSDS(REC (y or {tilde over (y)}). Log block 114 takes 10*log10 of the result produced by divider 109. The output of log block 114 is the DSNR metric, which is an instantiation of ASNR(y or {tilde over (y)}) computed by artifact metric generator 101.
FIG. 5B illustrates one aspect that the artifact metric generator 101 with optional original input, x, of FIG. 4B may be used. Artifact metric generator 101b has similar structure to artifact metric generator 101a, except it has a denominator producer 110 instead of just one MSDS 112. Denominator producer 110 is composed of two MSDS 112 blocks, a subtractor 44 and a norm factor 116. Denominator producer 110 receives two inputs: (1) an original input, ORIG (x); and (2) a reconstructed input, REC (y or {tilde over (y)}). Subtractor 44 computes the difference between MSDS(x) and MSDS(y or {tilde over (y)}) and sends the difference to norm factor 116. In one configuration of denominator producer 110, norm factor 116 may square its input. In another configuration, norm factor 116 may take an absolute value of its input. In either case, norm factor 116 may produce MSDS_error(x, y) which is output by denominator producer 110. Divider 109 divides the output of numerator producer 107 by MSDS_error(x, y), and log block 114 takes 10*log10 of the result produced by divider 109. The output of log block 114 is DSNR(ORIG, REC), which is an instantiation of an ASNR(x, y or {tilde over (y)}) metric generated by artifact metric generator 101. Each of the spatial, Perceptual, and temporal components of the DSNR metric may de-emphasize, emphasize or do nothing to the blockiness artifact being evaluated. The DSNR targets the blockiness artifact, however, the structure is such that it also affects any other artifact that is present. For example, the blurriness artifact as a result of applying the de-blocking filter also may be de-emphasized, emphasized or stay the same.
In general, selection process of the weights such as those in WVS bank 103 for an ASNR metric is done in such a way to improve image/video quality. For the DSNR metric, the right amount of de-blockiness is emphasized and the right amount of blurriness is de-emphasized. The selection process is based on graph 118 of FIG. 6. In FIG. 6, graph 118 illustrates the weight value selector (WVS) (spatial, perceptual, or temporal) used in an artifact evaluator. On the abscissa axis of graph 118, there are two marks: (1) Th1, which represents a Threshold 1; and (2) Th2, which represents a Threshold 2. On the ordinate axis of graph 118, the three marks represent weight values from a WVS. A generic input parameter Z (ZS, ZP, or ZT) is generated and mapped to the abscissa (Z) axis in graph 118. Z will be in one of three ranges: (1) 0≦Z<Th1; (2) Th1≦Z<Th2; and (3) Th2≦Z. Weights from a WVS are determined by the range of Z. WVS selects the weights based on the three ranges: in (1) [WZ]−1 is selected; in (2) 1 is selected; and in (3) WZ is selected. The [WZ]−1 weight may de-emphasize the spatial, perceptual or temporal component of the blockiness artifact. The weight value of 1 does not modify the blockiness artifact. The WZ weight may emphasize the spatial, Perceptual or temporal component of the blockiness artifact. This may be seen by re-writing Equation 2 as shown below:
ASNR(x, y)=10*[(log10(PKS)+log10(WS)+log10(WP)+log10(WT)−log10(F(x, y))]
Taking the logarithm of the numerator components and denominator shows that the effect of the weights is either additive, subtractive, or has no effect (when the weight value is 1).
The choice of input parameters varies. However, choices for ZS, ZP, and ZT may be as follows. ZS may be generated by a multi-step process explained through an example. Consider a current video block to be encoded E that has neighbors D (to its left), B (above it), and A (located near its upper left diagonal). Part of video block E and part of video block A are used to form video block AE. Similarly, video blocks BE and DE may be formed. DCT's may be computed for each of video blocks AE, BE, and DE and the average of the DCT's may be used for ZS. ZP may be generated by computing an average DCT over an entire frame. ZT may be generated by computing the difference between the average DCT in one frame and the average DCT in another frame.
FIG. 7 illustrates an exemplary image/video encoder with a general artifact filter and more general artifact metric generator 121 which may be configured with an optional metric controller 122. Metric controller 122, as well as input x, is drawn with dashed lines in FIG. 7 to show that each is optional. Artifact metric generator 121 may be pre-configured and thus would not necessarily need metric controller 122. When metric controller 122 is used, it passes input parameters to artifact metric generator 121. The input parameters may be stored in artifact metric generator 121 or passed in by metric controller 122. Artifact metric generator outputs a set of metrics, not just one output. Artifact metric generator 121 also may or may not use the original input x, when calculating the set of metrics.
FIG. 8 illustrates a general configuration of an artifact metric generator 121. The sub [i] in the component blocks is used to show two aspects in artifact metric generator 121a: (1) various metric version may be generated; and (2) entirely different metrics may be generated. From aspect (1), for example, it follows that various forms of ASNR may be generated. From aspect (2), for example, a de-blocking (or blocking) metric, a blurriness metric, a ringing metric, a color bleeding metric, or any other type of artifact metric may be generated. The general architecture is shown to capture the different metrics and various metric versions that may be possible
F_err block 123 may be used to compute the error between an instance of a function of the original video block or frame and an instance of a function of a reconstructed video block or frame. The difference between the functions is computed by subtractor 44 and norm factor (NF) 128 can be selected for a particular choice of F. Artifact metric generator 121 may implement the functions of artifact metric generator 101. This may be seen by recognizing that in the architecture of artifact metric generator 101a of FIG. 5A the choice of F was MSDS(y) and MSDS({tilde over (y)}). In the architecture of artifact metric generator 101b of FIG. 5B, the choice of F was a function of MSDS(x, y) and MSDS(x, {tilde over (y)}). The choice of F may be controlled through METRIC_SORT[i] which may be pre-configured or sent by metric controller 122. Conditioner[i] 130 may be used for any set of operations on the output of F_err block 123, including multiplying by 1. Conditioner[i] 130 “conditions” the output of F_err block 123. The output of conditioner[i] 130 may be sent to metric arranger 132. Metric arranger 132 uses Selector 134 to route the various metric or metric versions into metric buffer 136. Selector 134 may be internally driven or optionally may be controlled via metric controller 122. The output MSET(ORIG, REC) is a set of outputs, MA[1], MA[2], . . . MA[N]. Each member of MSET(ORIG, REC) may be a different metric or various metric version. From FIG. 8, shows that the general form of the ASNR metric may be conditioner(F(x, y)), i.e., F(x, y) may be conditioned by some other function or sets of functions. In Equation 2, the conditioner is 10*log10PKS*WS*WP*WT).
FIG. 9 illustrates that the artifact metric generator 121a of FIG. 8 may be configured to implement various versions of an ASNR. There is an additional optional selector 139 that may be used to select which version of ASNR may be output. The optional selector 139 is used to show that the artifact metric generator 121b of FIG. 9 may be configured to function like an artifact metric generator 101 (only one ASNR output). If the optional selector 139 is not used, the output of artifact metric generator 121b may be ASNRSET(ORIG, REC). As mentioned previously, F_err block 123 may implement MSDS(x, y) and MSDS(x, {tilde over (y)}). FIG. 9 also shows conditioner [i] 130. Conditioner[i] 130 may implement the numerator of Equation 2 along with division and taking the log of the division. Metric controller 122 may send different input parameters that result in different versions of conditioner[i] 130. Alternatively, METRIC_SORT[i] may choose different functions other than the MSDS. Other norm factor(s) [i] 128 may also be chosen as well. In the configuration of FIG. 9, the general output ASNRSET (ORIG, REC) is ASNR[1], ASNR[2], . . . ASNR[N], and one of these may be optionally selected by selector 139 to be output.
Since artifacts may affect image and video quality, a way to use the metrics to aid in evaluating perceived image and video quality during the encoding process is desired. The use of artifact evaluator 140 in FIG. 10 permits such a way. Artifact evaluator 140 may evaluate which reconstructed input has a better perceived image and video quality. Typically, during the encoding process, memory buffer 81 is updated with either one of two choices. The choice is typically between the un-filtered reconstructed video block (or frame) y, or the (de-blocked) filtered reconstructed video block (or frame) {tilde over (y)}. Under lower bit conditions, blockiness is sometimes a dominant artifact. As such, artifact filter 72 may be typically configured to diminish blockiness. In doing so, the filtered reconstructed video block (or frame) {tilde over (y)} may be too blurry. If {tilde over (y)} is too blurry, then updating memory buffer 81 with {tilde over (y)} will result in blurry edges. If y is too blocky, updating memory buffer 81 with y will result in “blockiness.” If the current encoding methods and standards use de-blocking filter 70, they always update memory buffer 81 with the output of the de-blocking filter 70. Current encoding methods and standards are limited because they do not have a way to “adaptively” change how memory buffer 81 is updated. Because of this limitation in current encoding methods and standards, poor image/video quality is propagated to other frames, especially for inter-coding prediction mode.
Using the artifact evaluator of FIG. 10 “in-loop”, i.e., the feedback loop of an image/video encoder allows an “adaptive” way to change how memory buffer 81 is updated. Adaptively means that the image/video encoder can adjust the input into memory buffer 81 depending on what reconstructed video block (or frame) has a better perceived visual quality, y or {tilde over (y)}. Artifact evaluator 140 evaluates which image and video quality is better, y or {tilde over (y)}. If the quality of y is better, artifact evaluator 140 may set the output QA (x, y, {tilde over (y)}) to y and update memory buffer 81 with y. If the quality of {tilde over (y)} is better, artifact evaluator 140 may set the output QA (x, y, {tilde over (y)}) to {tilde over (y)} and update memory buffer 81 with {tilde over (y)}. If the image and video quality of both y and {tilde over (y)} is not of acceptable image and video quality, then artifact evaluator 140 may instruct image/video encoder 23 to re-encode with a different set of quantization coefficients. As such, the image and video quality evaluated by artifact evaluator 140 may be adaptively improved immediately after the encoding and reconstruction of any video block in a frame. Thus, use of an artifact evaluator 140 overcomes the limitations of the current encoding methods and standards. The architecture seen in FIG. 10 through the use of an artifact evaluator 140 not only enhances the image/video quality of current methods and standards, but it also offers an additional advantage of preventing poor image/video quality propagation to sub-sequent processed frames, especially for inter-coding prediction mode.
In addition, since some standards, such as ITU H.264, WM9, and RV9 support the use of de-blocking filters, the use of artifact evaluator 140 is standard compliant. For example, the decision of which reconstructed (filtered or un-filtered) video block or frame in the encoder was used to update memory buffer 81 may be passed to the video decoder. Thus, for a video encoder and video decoder to be “in-sync” the decision may be inserted into a video decoders' header information, i.e., it can be inserted as part of the bitstream that tells the video decoder if the de-blocking filter is on or off.
FIG. 11A illustrates a version of an artifact evaluator 101 that uses one type of metric to make an output decision. FIG. 11A illustrates a configuration of the artifact evaluator 140 used in FIG. 10. Artifact evaluator 140 receives two inputs, y, and {tilde over (y)}, and alternatively receives inputs x and input parameters (IP) from metric controller 122. The input parameters (IP) for artifact evaluator 140a from metric controller 122 may be pre-configured, i.e., direct input from metric controller 122 is not needed. As such, the input parameters from metric controller 122 are omitted in FIG. 11A. Artifact evaluator 140 directs inputs, x (if received) and y into an artifact metric generator 101 and also directs inputs x (if received) and y into a different artifact metric generator 101. An embodiment of the structure of artifact metric generator 101 is shown in both FIG. 5A and FIG. 5B and its function was discussed above, either may be used. In FIG. 11A top artifact metric generator 101 outputs ASNR(x, {tilde over (y)}) (although ASNR ({tilde over (y)}) may alternately be used) and the bottom artifact evaluator 101 outputs ASNR(x, y) (although ASNR(y) may alternately be used). Decision logic 142 receives ASNR(x, {tilde over (y)}) and ASNR(x, y) and decides to output y or {tilde over (y)}, or activates line output RE to re-encode, based on the two input ASNR metrics. It may be recognized that the logic illustrated in FIG. 11A may be used for any ASNR metric, not just the DSNR metric.
FIG. 11B illustrates a version of an artifact evaluator 121 which uses multiple metrics or metric versions to make an output decision. FIG. 11B illustrates a configuration of the artifact evaluator 140 used in FIG. 10. Artifact evaluator 140 receives two inputs, y, and {tilde over (y)}, and alternatively receives inputs x and input parameters (IP) from metric controller 122. The input parameters (IP) for artifact evaluator 140b from metric controller 122 may be pre-configured, i.e., direct input from metric controller 122 is not needed. As such, the input parameters from metric controller 122 are omitted in FIG. 11B. Artifact evaluator 140 directs inputs, x (if received) and y into an artifact metric generator 121 and also directs inputs x (if received) and {tilde over (y)} into a different artifact metric generator 121. A structure of artifact metric generator 121 is shown in both FIG. 8 and FIG. 9 and its function was discussed above, either may be used. In FIG. 11B, top artifact metric generator 121 outputs MSET(x, {tilde over (y)}) (although MSET({tilde over (y)}) may alternately be used) and the bottom artifact evaluator 121 outputs MSET(x, y) (although MSET(y) may alternately be used). Decision logic 143 receives MSET(x, {tilde over (y)}) and MSET(x, y) and decides to output y or {tilde over (y)}, or activates line output RE to re-encode, based on the two input sets of metrics.
FIG. 12 illustrates a flowchart of a method used by decision logic 142 block in FIG. 11A. Subtractor 44 subtracts ASNR metric inputs, ASNR(x, {tilde over (y)}) and ASNR(x, y), and the resulting difference is sent to output quality 144 block. Inside output quality 144 block, the difference is compared to zero 146. If the difference is greater than zero this means: (1) ASNR(x, {tilde over (y)})>ASNR(x, y) and the output 148 is {tilde over (y)}; and (2) ASNR(x, {tilde over (y)})>acceptable threshold of image and video quality. If the difference is less than zero then: (1) ASNR(x, y)>ASNR(x, {tilde over (y)}) and the output 150 is y; and (2) ASNR(x, y)>acceptable threshold of image and video quality. If a control (CTRL) signal is enabled, an output (RE) of Decision logic 142 block may instruct image/video encoder 23 to re-encode x. This may be possible if both ASNR(x, y) and ASNR(x, y) are less than an acceptable threshold of image and video quality. The output QA (x, y, {tilde over (y)}) is used to update the encoder memory buffer (see FIG. 10). It may be recognized that the logic illustrated in the flowchart of FIG. 12 may be used for any ASNR metric, not just the DSNR metric.
FIG. 13 illustrates a flowchart of a method used by decision logic 143 in FIG. 11B. The flowchart represents a decision logic for any artifact metric or variant of an artifact metric. For example, A[1] may be blockiness, and MA[1](x, {tilde over (y)}) may be DSNR(x, {tilde over (y)}). Similarly, A[2] may be blurriness and MA[2](x, {tilde over (y)}) may be a metric that measures the amount of blurriness of {tilde over (y)}. Similarly, MA[1](x, {tilde over (y)}) may be DSNR(x, y) and MA[2](x, y) may be a metric that measures the amount of blurriness of y. MA[2] (x, y or {tilde over (y)}) may be another version of DSNR which de-emphasizes blockiness and as such emphasizes blurriness more in relation to MA[1] (x, y or {tilde over (y)}). MA[2] (x, y or {tilde over (y)}) may also be a metric that measures the amount of blurriness.
A comparison 160 between MA[1](x, {tilde over (y)}) and a blockiness threshold is made to check the amount of blockiness present in filtered reconstructed video block(or frame) {tilde over (y)}. If the comparison 160 is true (YES), then {tilde over (y)} meets an “acceptable” perceived image and video quality. A further comparison 162 between MA[2](x, {tilde over (y)}) and a blurriness threshold is made to check the amount of blurriness present in {tilde over (y)}. If the comparison 162 is true (YES) then {tilde over (y)} meets an “acceptable” perceived image and video quality for both blurriness and blockiness. The resulting output QA (x, y, {tilde over (y)}) becomes 164 {tilde over (y)} and the encoder memory buffer (see FIG. 10) gets updated with {tilde over (y)}.
If either comparison 160 or 162 is false (NO), then a comparison 166 between MA[1](x, y) and a blurriness threshold is made to check the amount of blurriness present in un-filtered reconstructed video block(or frame) y. If the comparison 166 is true (YES), then y meets an “acceptable” perceived image and video quality. A further comparison 168 between MA[2](x, y) and a blurriness threshold is made to check the amount of blurriness present in y. If the comparison 168 is true (YES), then y meets an “acceptable” perceived image and video quality for both blurriness and blockiness. The resulting output QA (x, y, {tilde over (y)}) becomes 170 y, and the encoder memory buffer (see FIG. 10) gets updated with {tilde over (y)}. If either comparison 166 or 168 is false (NO), then the line output RE becomes active 172, and a re-encode of the original video block (or frame) x may take place.
FIG. 14 illustrates a flowchart of the artifact evaluation process. After an artifact evaluation starts, selection of metric_sort 180 is based on what type or version of metric will be generated. Loading of an original x video block or frame (if one is available) and loading of the available reconstructed y or {tilde over (y)} video block(s) or frame(s) 182 takes place. Error(s) may be computed with functions, F, and/or norm factors (NF) 184. conditioner[i] may be done prior or during (serial or parallel) to encoding 186. Combination of conditioner[i] with the result of the Error(s) computed with functions F, and/or NF 188 may then be executed. The resulting combination(s) result in two MSET metrics result, MSET(x, y) and MSET(x, {tilde over (y)}). Each member of MSET(x, y) and MSET(x, {tilde over (y)}) may be arranged 192. A logical decision 194 based on at least one comparison between a member of MSET(x, y) and a member of MSET(x, {tilde over (y)}) decides which of y and {tilde over (y)} has better image and/or video quality. Based on the decision an output QA (x, y, {tilde over (y)}), the better of y and {tilde over (y)} is used to update an encoder memory buffer in-loop during the encoding process. Decision logic 194 block may also send out a re-encode signal, RE, if the image and video quality of either y or {tilde over (y)} is not acceptable.
A number of different embodiments have been described. The techniques may be capable of improving video encoding by improving image and video quality through the use of an artifact evaluator in-loop during the encoding process. The techniques are standard compliant. The techniques also may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the techniques may be directed to a computer-readable medium comprising computer-readable program code (also may be called computer-code), that when executed in a device that encodes video sequences, performs one or more of the methods mentioned above.
The computer-readable program code may be stored on memory in the form of computer readable instructions. In that case, a processor such as a DSP may execute instructions stored in memory in order to carry out one or more of the techniques described herein. In some cases, the techniques may be executed by a DSP that invokes various hardware components such as a motion estimator to accelerate the encoding process. In other cases, the video encoder may be implemented as a microprocessor, one or more application specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), or some other hardware-software combination. These and other embodiments are within the scope of the following claims.
## Claims
1. An apparatus configured to process video blocks comprising:
a decoder operable to synthesize an un-filtered reconstructed video unit, wherein a video unit is at least one of video block and video frame;
an artifact filter operable to receive the un-filtered reconstructed video unit, and that generates a filtered reconstructed video unit;
a memory buffer operable to store either the filtered reconstructed video unit or the un-filtered reconstructed video unit; and
an artifact evaluator operable to update the memory buffer.
a decoder operable to synthesize an un-filtered reconstructed video unit, wherein a video unit is at least one of video block and video frame;
an artifact filter operable to receive the un-filtered reconstructed video unit, and that generates a filtered reconstructed video unit;
a memory buffer operable to store either the filtered reconstructed video unit or the un-filtered reconstructed video unit; and
an artifact evaluator operable to update the memory buffer.
2. The apparatus of claim 1, wherein the artifact evaluator comprises at least one artifact metric generator and a decision logic block.
3. The apparatus of claim 2, wherein any artifact metric generator amongst the at least one artifact metric generator is configured to receive the filtered reconstructed video unit or the un-filtered reconstructed video unit.
4. The apparatus of claim 3, wherein the at least one artifact metric generator is further configured to receive an unreconstructed video unit.
5. The apparatus of claim 4, wherein any artifact metric generator amongst the at least one artifact metric generator is configured to generate a non-original reference (NR) artifact metric or a full-original reference (FR) artifact metric.
6. The apparatus of claim 5, wherein a first artifact metric generator from the at least one artifact metric generator and a second artifact metric generator from the at least one artifact metric generator are coupled to a first decision logic block or a second decision logic block.
7. The apparatus of claim 6, wherein the first decision logic block is configured to receive the filtered reconstructed video unit and the un-filtered reconstructed video unit, and is further configured to compare a first NR artifact metric with a second NR artifact metric or to compare a first FR artifact metric with a second FR artifact metric, and based on the comparison output either the filtered reconstructed video unit or the un-filtered reconstructed video unit.
8. The apparatus of claim 7, wherein the output is sent to the memory buffer in-loop in a playback device, mobile device, or computer.
9. The apparatus of claim 6, wherein the second decision logic block is configured to receive the filtered reconstructed video unit and the un-filtered reconstructed video unit, and is further configured to compare a first set of NR artifact metrics with a first set of NR artifact metrics or to compare a first set of FR artifact metrics with a second set of FR artifact metrics, and based on the comparison output either the filtered reconstructed video unit or the un-filtered reconstructed video unit.
10. The apparatus of claim 8, wherein the output is sent to the memory buffer in-loop in a playback device, mobile device, or computer.
11. A method of artifact evaluation comprising:
inputting an original video unit, wherein a video unit is at least one of video block and frame;
inputting an un-filtered reconstructed video unit;
inputting a filtered reconstructed video unit;
generating at least one artifact metric from the original video unit and the un-filtered reconstructed video unit; and
generating at least one artifact metric from the original video unit and the filtered reconstructed video unit.
inputting an original video unit, wherein a video unit is at least one of video block and frame;
inputting an un-filtered reconstructed video unit;
inputting a filtered reconstructed video unit;
generating at least one artifact metric from the original video unit and the un-filtered reconstructed video unit; and
generating at least one artifact metric from the original video unit and the filtered reconstructed video unit.
12. The method of claim 11, wherein any of the artifact metrics generated from the original video unit and the un-filtered reconstructed video unit and any of the artifact metrics generated from the original video unit and the filtered reconstructed video block measure the amount of blockiness or blurriness.
13. The method of claim 12, further comprising:
comparing one of the artifact metrics generated from the original video unit and the un-filtered reconstructed video unit with one of the artifact metrics generated from the original video unit and the filtered reconstructed video unit; and
deciding to output the un-filtered reconstructed video unit or the filtered reconstructed video unit based on the comparison.
comparing one of the artifact metrics generated from the original video unit and the un-filtered reconstructed video unit with one of the artifact metrics generated from the original video unit and the filtered reconstructed video unit; and
deciding to output the un-filtered reconstructed video unit or the filtered reconstructed video unit based on the comparison.
14. The method of claim 13, further comprising:
making a first comparison of at least one of the artifact metrics generated from the original video unit and the un-filtered reconstructed video unit with a first artifact threshold;
if any of the artifact metrics used in the first comparison is less than the first artifact threshold, making a second comparison of at least one of the artifact metrics generated from the original video unit and the un-filtered reconstructed video unit with a second artifact threshold;
if any of the artifact metrics used in the first comparison is less than the first artifact threshold and any of the artifact metrics used in the second comparison is less than the second artifact threshold, making a third comparison of at least one of the artifact metrics generated from the original video unit and the filtered reconstructed video unit with a third artifact threshold;
if any of the artifact metrics used in the third comparison is less than the third artifact threshold, making a fourth comparison of at least one of the artifact metrics generated from the original video unit and the filtered reconstructed video unit with a fourth artifact threshold;
deciding to output the filtered reconstructed video unit based on the first and second comparisons;
deciding to output the un-filtered reconstructed video unit based on the third and fourth comparisons; and
if necessary, re-encoding based on either the third or fourth comparison.
making a first comparison of at least one of the artifact metrics generated from the original video unit and the un-filtered reconstructed video unit with a first artifact threshold;
if any of the artifact metrics used in the first comparison is less than the first artifact threshold, making a second comparison of at least one of the artifact metrics generated from the original video unit and the un-filtered reconstructed video unit with a second artifact threshold;
if any of the artifact metrics used in the first comparison is less than the first artifact threshold and any of the artifact metrics used in the second comparison is less than the second artifact threshold, making a third comparison of at least one of the artifact metrics generated from the original video unit and the filtered reconstructed video unit with a third artifact threshold;
if any of the artifact metrics used in the third comparison is less than the third artifact threshold, making a fourth comparison of at least one of the artifact metrics generated from the original video unit and the filtered reconstructed video unit with a fourth artifact threshold;
deciding to output the filtered reconstructed video unit based on the first and second comparisons;
deciding to output the un-filtered reconstructed video unit based on the third and fourth comparisons; and
if necessary, re-encoding based on either the third or fourth comparison.
15. The method of claim 13, wherein the filtered reconstructed video unit, or the un-filtered reconstructed video unit is stored in a memory buffer.
16. The method of claim 15, wherein the filtered reconstructed video unit, or the un-filtered reconstructed video unit is stored in a memory buffer as part of an encoding process.
17. A method in an image/video encoder comprising:
updating a memory buffer with an output of an artifact evaluator, wherein the artifact evaluator is used in-loop; and
making a decision using a first set of artifact metrics and a second set of artifact metrics to make a comparison, and wherein based on the comparison the artifact evaluator outputs a filtered reconstructed video unit or an un-filtered reconstructed video unit to the memory buffer, wherein a video unit is at least one of video block and frame.
updating a memory buffer with an output of an artifact evaluator, wherein the artifact evaluator is used in-loop; and
making a decision using a first set of artifact metrics and a second set of artifact metrics to make a comparison, and wherein based on the comparison the artifact evaluator outputs a filtered reconstructed video unit or an un-filtered reconstructed video unit to the memory buffer, wherein a video unit is at least one of video block and frame.
18. The method of claim 17, wherein the memory buffer stores either the filtered reconstructed video unit or the un-filtered reconstructed video unit.
19. The method of claim 18, wherein the first set of metrics is based on an original video unit and the un-filtered reconstructed video unit, wherein the original video unit is an unreconstructed video unit.
20. The method of claim 19, wherein the second set of metrics is based on the original video unit and the filtered reconstructed video unit.
21. The method of claim 19, wherein the second set of metrics is based on the filtered reconstructed video unit.
22. The method of claim 18, wherein the first set of metrics is based on the un-filtered reconstructed video unit.
23. The method of claim 17, wherein the first set of artifact metrics and the second set of artifact metrics, include the following artifact metric implementation:
$ASNR ( y ) = 10 * log 10 ( PKS · W S · W P · W T F ( y ) ) , where$
y represents either a un-filtered reconstructed video unit or a filtered reconstructed video unit;
PKS is the peak value of the pixel squared;
WS is a weight that affects the un-filtered reconstructed video unit, based on spatial factors;
WP is a weight that affects the un-filtered reconstructed video unit, based on perceptual factors;
WT is a weight that affects the un-filtered reconstructed video unit, based on temporal factors; and
F(y) is a function of y.
y represents either a un-filtered reconstructed video unit or a filtered reconstructed video unit;
PKS is the peak value of the pixel squared;
WS is a weight that affects the un-filtered reconstructed video unit, based on spatial factors;
WP is a weight that affects the un-filtered reconstructed video unit, based on perceptual factors;
WT is a weight that affects the un-filtered reconstructed video unit, based on temporal factors; and
F(y) is a function of y.
24. The method of claim 23, wherein ASNR(y) is DSNR(y) if F(y) is Norm_Factor(MSDS(y)); and
Norm_Factor involves taking either an absolute value or a squaring.
Norm_Factor involves taking either an absolute value or a squaring.
25. The method of claim 24, wherein the measuring of the artifact further comprises:
emphasizing or de-emphasizing a blockiness artifact through a combination of values WS,WP, or WT.
emphasizing or de-emphasizing a blockiness artifact through a combination of values WS,WP, or WT.
26. The method of claim 17, wherein the first set of artifact metrics and the second set of artifact metrics, include the following artifact metric implementation:
$ASNR ( x , y ) = 10 * log 10 ( PKS · W S · W P · W T F ( x , y ) ) , where$
y represents either a un-filtered reconstructed video unit or a filtered reconstructed video unit;
x represents the original video unit, wherein a video unit is at least one of video block and frame;
PKS is the peak value of the pixel squared;
WS is a weight that affects the filtered reconstructed video unit, based on spatial factors;
WP is a weight that affects the filtered reconstructed video unit, based on perceptual factors;
WT is a weight that affects the filtered reconstructed video unit, based on temporal factors; and
F(x,y) is a function of x and y.
y represents either a un-filtered reconstructed video unit or a filtered reconstructed video unit;
x represents the original video unit, wherein a video unit is at least one of video block and frame;
PKS is the peak value of the pixel squared;
WS is a weight that affects the filtered reconstructed video unit, based on spatial factors;
WP is a weight that affects the filtered reconstructed video unit, based on perceptual factors;
WT is a weight that affects the filtered reconstructed video unit, based on temporal factors; and
F(x,y) is a function of x and y.
27. The method of claim 26, wherein ASNR(x,y) is DSNR(x,y) if F(x,y) is MSDS_error(x,y);
MSDS_error(x,y) =Norm_Factor(MSDS(x)−MSDS(y));
MSDS(x) is the Mean Square Slope of Differences of (x);
MSDS(y) is the Mean Square Slope of Differences of (y); and
Norm_Factor involves taking either an absolute value or a squaring.
MSDS_error(x,y) =Norm_Factor(MSDS(x)−MSDS(y));
MSDS(x) is the Mean Square Slope of Differences of (x);
MSDS(y) is the Mean Square Slope of Differences of (y); and
Norm_Factor involves taking either an absolute value or a squaring.
28. The method of claim 27, wherein the measuring of the artifact further comprises:
emphasizing or de-emphasizing a blockiness artifact through a combination of values WS,WP, or WT.
emphasizing or de-emphasizing a blockiness artifact through a combination of values WS,WP, or WT.
29. An apparatus comprising:
means for updating a memory buffer with an output of an artifact evaluator; and
means for making a decision with a decision logic block of the artifact evaluator, wherein the means for making the decision uses a first set of artifact metrics and a second set of artifact metrics to make a comparison, and wherein based on the comparison the artifact evaluator outputs a filtered reconstructed video unit or an un-filtered reconstructed video unit, wherein a video unit is at least one of video block and frame.
means for updating a memory buffer with an output of an artifact evaluator; and
means for making a decision with a decision logic block of the artifact evaluator, wherein the means for making the decision uses a first set of artifact metrics and a second set of artifact metrics to make a comparison, and wherein based on the comparison the artifact evaluator outputs a filtered reconstructed video unit or an un-filtered reconstructed video unit, wherein a video unit is at least one of video block and frame.
30. The apparatus of claim 29, wherein the memory buffer stores either the filtered reconstructed video unit or the un-filtered reconstructed video unit.
31. The apparatus of claim 30, wherein the first set of metrics is based on an unreconstructed video unit and the un-filtered reconstructed video unit.
32. The apparatus of claim 31, wherein the second set of metrics is based on the original video unit and the filtered reconstructed video unit.
33. The apparatus of claim 31, wherein the second set of metrics is based on the filtered reconstructed video unit.
34. The apparatus of claim 30, wherein the first set of metrics is based on the un-filtered reconstructed video unit.
35. A non-transitory computer-readable medium configured to store a set of instructions, where the instructions, when executed, perform a method comprising:
updating a memory buffer with an output of an artifact evaluator; and
making a decision using a first set of artifact metrics and a second set of artifact metrics to make a comparison, and wherein based on the comparison the artifact evaluator outputs a filtered reconstructed video unit or an un-filtered reconstructed video unit, wherein a video unit is at least one of video block and frame.
updating a memory buffer with an output of an artifact evaluator; and
making a decision using a first set of artifact metrics and a second set of artifact metrics to make a comparison, and wherein based on the comparison the artifact evaluator outputs a filtered reconstructed video unit or an un-filtered reconstructed video unit, wherein a video unit is at least one of video block and frame.
36. The computer-readable medium of claim 35, wherein the memory buffer stores either the filtered reconstructed video unit or the un-filtered reconstructed video unit.
37. The computer-readable medium of claim 36, wherein the first set of metrics is based on an unreconstructed video unit and the un-filtered reconstructed video unit.
38. The computer-readable medium of claim 37, wherein the second set of metrics is based on the original video unit and the filtered reconstructed video unit.
39. The computer-readable medium of claim 37, wherein the second set of metrics is based on the filtered reconstructed video unit.
40. The computer-readable medium of claim 36, wherein the first set of metrics is based on the un-filtered reconstructed video unit.
|
{}
|
zbMATH — the first resource for mathematics
Representation and approximation of functions via (0,2)-interpolation. (English) Zbl 0614.41001
For $$f\in C^ 2({\mathbb{R}})$$ we introduce an interpolation operator $$R(f;z):=\sum^{\infty}_{n=-\infty}(f(n)A_ n(z)+f''(n)B_ n(z))$$ with the following properties: R(f;.) is an entire function of exponential type $$2\pi$$, $$R(f;n)=f(n)$$, $$R''(f;n)=f''(n)$$ for all $$n\in {\mathbb{Z}}$$; $$R'(f;0)=R\prime''(f;0)=0$$. We establish essentially best possible conditions under which an entire function f of exponential type $$\tau <2\pi$$ or $$\tau =2\pi$$ may be represented as $$f(z)=R(f;z)+c_ 1(f) \sin \pi z+c_ 2(f) \sin 2\pi z$$ with explicitly given constants $$c_ 1(f)$$, $$c_ 2(f)$$ depending on f’(0) and f”’(0). These results are used for approximation of a continuous function by a sequence of (0,2)- interpolating entire functions of exponential type. Defining $R_{\tau}(f;\beta,z)\quad:=\quad \sum^{\infty}_{n=- \infty}(f(\frac{n\pi}{\tau})A_ n(\frac{\tau}{\pi}z)+(\frac{\pi}{\tau})^ 2\beta_{\tau n}B_ n(\frac{\tau}{\pi}z))$ we have an entire function of exponential type $$2\pi$$ such that $$R_{\tau}(f;\beta,n\pi /\tau)=f(n\pi /\tau)$$ and $$R''_{\tau}(f;\beta,n\pi /\tau)=\beta_{\tau n}$$ for all $$n\in {\mathbb{Z}}$$. It is then shown that $$\lim_{\tau \to \infty}R_{\tau}(f;\beta,x)=f(x)$$ uniformly on every compact subset of $${\mathbb{R}}$$ provided f is a bounded continuous function on $${\mathbb{R}}$$ such that $$| f(x+h)-2f(x)+f(x-h)| =o(h)$$ as $$h\to 0$$ uniformly for $$x\in {\mathbb{R}}$$ and the ”free parameters” $$\beta_{\tau n}$$ satisfy $$\sup_{n}| \beta_{\tau n}| =o(\tau)$$ as $$\tau\to \infty$$. These results are analogous to the work of P. Turán with J. Balász and J. Surányi, and of G. Freud on (0,2)-interpolation by algebraic polynomials. Furthermore they extend a result of O. Kiš on (0,2)- interpolation of periodic functions by trigonometric polynomials.
MSC:
41A05 Interpolation in approximation theory 41A30 Approximation by other special function classes 30D10 Representations of entire functions of one complex variable by series and integrals 30D15 Special classes of entire functions of one complex variable and growth estimates
Full Text:
References:
[1] Balász, J; Turán, P, Notes on interpolation. II. explicit formulae, Acta math. acad. sci. hungar., 8, 201-215, (1957) · Zbl 0078.05401 [2] Balász, J; Turán, P, Notes on interpolation. III. convergence, Acta math. acad. sci. hungar., 9, 195-214, (1958) · Zbl 0085.05104 [3] Balász, J; Turán, P, Notes on interpolation. IV. inequalities, Acta math. acad. sci. hungar., 9, 243-258, (1958) · Zbl 0085.05202 [4] Boas, R.P, Entire functions, (1954), Academic Press New York [5] Erdös, P; Turán, P, On the role of the Lebesgue function in the theory of the Lagrange interpolation, Acta math. acad. sci. hungar., 6, 47-66, (1955) · Zbl 0064.30101 [6] Freud, G, Bemerkungen über die konvergenz eines interpolationsverfahrens von P. Turán, Acta math. acad. sci. hungar., 9, 337-341, (1958) · Zbl 0085.05201 [7] Freud, G; Scheick, J.T, Polynomial approximation on the real line, Studia sci. math. hungar., 6, 23-25, (1971) · Zbl 0226.41001 [8] Freud, G; Szabados, J, Rational approximation on the whole real axis, Studia sci. math. hungar., 3, 201-209, (1968) · Zbl 0174.35401 [9] Gervais, R; Rahman, Q.I, An extension of Carlson’s theorem for entire functions of exponential type, Trans. amer. math. soc., 235, 387-394, (1978) · Zbl 0373.30025 [10] Gervais, R; Rahman, Q.I; Schmeisser, G, Approximation by (0, 2)-interpolating entire functions of exponential type, J. math. anal. appl., 82, 184-199, (1981) · Zbl 0469.30027 [11] Kiš, O, On trigonometric interpolation, Acta math. acad. sci. hungar., 11, 255-276, (1960), [Russian] · Zbl 0103.28703 [12] Surányi, J; Turán, P, Notes on interpolation. I. on some interpolatorical properties of the ultraspherical polynomials, Acta math. acad. sci. hungar., 6, 67-79, (1955) · Zbl 0064.30005 [13] Timan, A.F, Theory of approximation of functions of a real variable, (1963), Pergamon New York · Zbl 0117.29001
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
|
{}
|
## anonymous one year ago Volume of a graph (Washer)
1. anonymous
Find the volume of $y=\sqrt{x}$ about x=4
2. anonymous
I started with $x=y^2$ so would that be the outer radius?
3. SolomonZelman
|dw:1441595115874:dw|roughly
4. SolomonZelman
x=y² is not the same as y=√x because it will have a twice larger volume
5. anonymous
I'm sorry I forgot to mention it is bounded by y=0 so just quadrant 1
6. SolomonZelman
oh, and then if it is rotated about x=4, then I will assume that this is where the √x region ends at?
7. anonymous
Yes
8. SolomonZelman
Oh, ok, so you know that your limits of integration are from 0 to 2. $$\large\color{black}{ \displaystyle \int_{0}^{2} \pi\left(y^2\right)dy }$$
9. SolomonZelman
So, you can tell that your radius is y², and it is from y=0 to y=2. THis is what I would do.
10. SolomonZelman
I missed it should e y^4
11. SolomonZelman
$$\large\color{black}{ \displaystyle \int_{0}^{2} \pi\left(y^2\right)^2dy }$$ because radius squared. Sorry
12. SolomonZelman
So that is just an integral of $$\pi$$y$$^4$$ from y=0 to y=2.
13. anonymous
Wow I tried this the first time and I guess I integrated incorrectly so I was so confused!
14. anonymous
Thank you
15. SolomonZelman
you can put integral into wolfram. what is important is to get a good practice of making a setup of the integral for volume. integration you know already....
16. anonymous
Absolutely
17. SolomonZelman
32π/5 is what i got. (want to know how to make a π • ÷ × √ with no latex?)
18. anonymous
Sure!
19. DanJS
here are some probs with the solns, pg3 has a nice little summary
20. anonymous
That will definitely be useful, thank you
21. SolomonZelman
I am glitching a bit
22. SolomonZelman
Ok, so here is going to be a short guide. The algorithm is: $$1)$$ Click ALT and hold it $$2)$$ Click the "Number Code" (on the numberpad on the right of the keyboard- if you got one) $$3)$$ Release the ALT ------------------------------ Number code Result 0, 2, 1, 5 × 2, 5, 1 √ 7 • 2, 4, 6 ÷ 2, 2, 7 π
23. SolomonZelman
there are some others too.... but these are useful examples
24. anonymous
So if it was x=6 would the outer radius be 2+y^2 and the inner radius be 2?
25. SolomonZelman
if it was x=6, |dw:1441596092489:dw|
26. anonymous
27. SolomonZelman
Do you mean, if it was a region of y^2 bound by y=0 and x=4, but rotated about x=6?
28. anonymous
Yes
29. SolomonZelman
30. SolomonZelman
31. anonymous
I thought I had to subtract the inner area?
32. SolomonZelman
yeah, my bad, I am overheating let me think
33. anonymous
|dw:1441596258832:dw|
34. anonymous
Haha no problem
35. SolomonZelman
|dw:1441596426856:dw|
36. anonymous
For this problem I can only use washer
37. SolomonZelman
Oh, you can do it with respect to x, and do f(x)-g(x)
38. SolomonZelman
But, shell is also good.
39. SolomonZelman
it is a matter of preference
40. anonymous
Would f(x) be y^2+2 and g(x) be 2?
41. anonymous
$\int\limits_{0}^{2} (2+y^2)^2-(2)^2$ ?
42. SolomonZelman
lets rvw the washer with x's
43. SolomonZelman
|dw:1441596741353:dw|
44. SolomonZelman
of that integral should say π INTEGRAL f(x)²-g(x)² dx
45. SolomonZelman
from a to b
46. anonymous
Yes I get that
47. anonymous
I just don't understand what g(x) would be in this situation
48. SolomonZelman
and with y, you get the same thing|dw:1441596949137:dw|
49. SolomonZelman
the area of the cylinder with h=2, r=2 is 8π
50. SolomonZelman
Or you can say that g is 2.
51. SolomonZelman
if instead of 4, you had some z(x) boundary for the region, then the radius for g would be 6-z(x) (of course from y=0 to y=2)
52. anonymous
Okay I think I understand this better now thank you
53. SolomonZelman
Anytime... thank you for refreshing me on these rotations:)
54. SolomonZelman
I mean, I really should tell you that Shell method rocks in so many cases, so try to use that as well. in any case, good luck!
55. anonymous
Actually, did I plug this in incorrectly? http://www.wolframalpha.com/input/?i=integrate+from+0+to+2+pi%28%282%2By%5E2%29%5E2-4%29
56. anonymous
57. SolomonZelman
for the problem where you want to know about the redion of y62 bound by y=0 and x=4, rotated about x=6....?
58. SolomonZelman
region*
59. SolomonZelman
y^2 (not y62)
60. anonymous
yes
61. SolomonZelman
|dw:1441597714083:dw|
62. SolomonZelman
I actually do not get why the answer isn't what you got in wolfram.
63. anonymous
hmmm
64. SolomonZelman
you can take the whole volume of radius y^2+2 with limits of y=0 to y=2, and subtract 8π cylinder in the middle, and you get precisely the same.
65. SolomonZelman
I have to go, it is almost 12am in my location. maybe I was answering a wrong question idk, but for what I asked, it should be 256π/15
66. SolomonZelman
Maybe my brain just shot down xD
67. anonymous
Hahaha
68. anonymous
But I really appreciated your time so thank you
69. SolomonZelman
I will look at it when I have time. Whatever I can do with my little knowledge:) gtg c(u)
70. anonymous
Bye!
|
{}
|
You are not logged in. Please login at www.codechef.com to post your questions!
×
# TACTQUER- editorial
Author: Tuấn Anh Trần Đặng
Tester: Kamil Dębowski
Editorialist: Tuấn Anh Trần Đặng
Hard
# PREREQUISITES:
Graph, Dynamic Programming
# Problem
Given a weighted vertex-catus of N vertices answer Q queries about the shortest path between a pair of arbitrary vertices u and v.
# Assumptions
We will mainly work on the DFS tree of the catus eg. performing DFS from 1 and only take the edges that connect a vertex to the unvisited vertex during the process.
# Sub-problem 1
Calculate the shortest path from 1 to every vertex u.
### Solution
Let’s call this d[u]. We can get d by doing a dynamic programming in the DFS process. When we visit u there are two situations:
1. u is not belong to any cycle: d[u] = d[parent[u]] + length of the edge from parent[u] to u.
2. u is in a cycle C. In this case let top[C] is the vertex closest to the root in C. d[u] = d[top[C]] + shortest path from top[C] to u.
Note that there are exactly two ways to go between two vertices in the same cycle C so the shortest path from top[C] to u is just either the distance L from top[C] to u in the DFS tree or length of cycle C - L. top[C] should be pre calculated.
# Sub-problem 2
Calculate the distance disTopDown(u, v) between u and one of it descendants v (in the DFS tree).
### Solution
• If u is not belongs to any cycle then disTopDown(u, v) = d[v] - d[u]. This is correct since when we go from 1 to v we must go to u first.
• If u is belong to the cycle C. When go from 1 to v we may not go to u since we have two ways to pass through the cycle C. However we will have to go to the vertex b that is the vertex closest to v in C. We have that disTopDown(u, v) = d[v] - d[b] + shortest distance from b to u. Again since b and u are in the same cycle the distance between them can be easily calculated.
So it seems like our solution is quite obvious now: distance between two vertices u and v is disTopDown(lca(u, v), u) + disTopDown(lac(u, v), v) where lca(u, v) is the lowest common ancester of u and v in the DFS tree. But we still missing one more piece: how to find b. Let get to the final sub-problem.
# Sub-problem 3
We will make the problem a bit more genernal. Given the cycle C and the vertex v find the first vertex of C in the path from v to the root of the DFS tree.
### solution
Hint: we can use the similar technique that we used in finding LCA.
Order the cycle in the way that if cycle A lies on the path from root to cycle B then cycle A will get the larger order. One way is to use the order of the DFS completion of the top vertex eg. if the DFS process at top[B] finished later then B got larger order. Now you may already guessed the solution. We trying to go from v toward the top as far as possible making sure that we never enter the cycle C. Let order[C] is the order of cycle C we have to make sure that the maximum order of cycles that has a vertex in the path from v toward the root does not larger or equal to order[C].
Recall the problem of finding LCA, we have to prepare f[u][i] is the 2^i th parent of u. The formular is quite simple:
• f[u][0] = parent[u]
• f[u][i] = f[f[u][i - 1]][i - 1]
Apply the same technique let maxOrder[u][i] is the maximum label from u to its 2^ith parent:
• maxOrder[u][0] = max(order[u], order[parent[u]]) where order[u] is the order of the cycle contain u or -1 if is does not belong to any cycle.
• maxOrder[u][i] = max(maxOrder[u][i - 1], maxOrder[f[u][i - 1]][i - 1]);
With maxOrder calculated we try to jump from v toward the root and makesure that we never go a vertex with a order larger than or equal to order[C]. If b exist it will be the parent of the vertex we ended up at since we tried to go as close to C as possible but never enter it.
# Summary
The solutions contains three part:
1. Initial DFS to prepare infomation of the cycles eg. top vertex, label …
2. Second DFS to calculate d[] and maxLabel[][].
3. Answer the queries: (distance from u to v) = disTopDown(lca(u, v), u) + disTopDown(lca(u, v), v);
Complexity: O((N + M)logN)
# Author's/Tester's Solutions:
This question is marked "community wiki".
asked 18 Sep '16, 21:32
116613
accept rate: 0%
19.7k350498541
Please move the problem to practice, the given link doesn't work.
(22 Sep '16, 01:21) 6★
4 I was very disappointed solving that problem, it is very sad that in Codechef you are giving such well-known and messy problem. For example, "BZOJ 2125" is the exactly the same problem. I had also seen a lot of variations of such problem. I'd love to see much more fresh problems here. answered 19 Sep '16, 02:22 130●4 accept rate: 0%
my solution using dijkstra .. output is ok.. but why it shows sigsev .. please anyne tell :)
# include<bits stdc++.h="">
using namespace std;
# define INF 300100
vector<int>G[max]; vector<int>cost[max]; int d[max];
struct data {
int city,dist;
data(int a,int b)
{
city = a;
dist = b;
}
bool operator < (const data& p) const
{
return dist>p.dist;
}
};
int Dijkstra(int start,int node,int end) { int i,j,u,v;
for(j=0; j<=node; j++)
d[j] = INF;
priority_queue<data>Q;
Q.push(data(start,0));
d[start] =0;
while(!Q.empty())
{
data top = Q.top();
Q.pop();
u = top.city;
if( u == end ) return ( d[end] );
for(i=0; i<G[u].size(); i++)
{
v = G[u][i];
if(d[u]+cost[u][i]<d[v])
{
d[v] = d[u]+cost[u][i];
Q.push(data(v,d[v]));
}
}
}
}
int main() { int node,edge,i,j,k,l,m,x,y,z,start,end,cas;
scanf("%d%d",&node,&edge);
for(i=0; i<edge; i++)
{
cin>>x>>y>>z;
G[x].push_back(y);
G[y].push_back(x);
cost[x].push_back(z);
cost[y].push_back(z);
}
scanf("%d",&cas);
while( cas-- )
{
scanf("%d%d",&start,&end);
printf( "%d\n",Dijkstra(start,node,end) );
memset(d,0,sizeof(d));
}
return 0;
}
answered 20 Sep '16, 20:20
1
accept rate: 0%
toggle preview community wiki:
Preview
### Follow this question
By Email:
Once you sign in you will be able to subscribe for any updates here
Markdown Basics
• *italic* or _italic_
• **bold** or __bold__
• image?
• numbered list: 1. Foo 2. Bar
• to add a line break simply add two spaces to where you would like the new line to be.
• basic HTML tags are also supported
• mathemetical formulas in Latex between \$ symbol
Question tags:
×15,526
×2,103
×1,334
×1,197
×42
×5
question asked: 18 Sep '16, 21:32
question was seen: 3,454 times
last updated: 22 Sep '16, 01:21
|
{}
|
# is p paramagnetic or diamagnetic
Geplaatst op
But, actually the … Question = Is CLO3- polar or nonpolar ? https://en.wikipedia.org/wiki/Paramagnetism Diamagnetic Read More on This Topic. ¢ï˜ÏÀ}§sjÑÙuÆ|?„4�FI6T3,c|À|€iƒÚÅÆw êß~0�zÀµÀ”A«±ˆi , which is positive for paramagnetic substances and negative for diamagnetic substances. Paramagnetic Substances: Those substances which are weekly magnetized when placed in an external magnetic field in the same direction as the applied field are called Paramagnetic … The paramagnetic response has then two possible quantum origins, either coming from permanent magnetic moments of the ions or from the spatial motion of the conduction electrons inside the material. Answer = TeCl4 ( Tellurium tetrachloride ) is Polar What is polar and non-polar? Diamagnetic … How is b2 paramagnetic? Water has no unpaired electrons and is thus diamagnetic. Question = Is SCN- polar or nonpolar ? *dµf±a¶c²OŒ‹ÏÌHNJªécà´“E. So, Na+ is diamagnetic because all its electrons are paired up. ââ�†OŒO¤Ş°Ï`O°=ÀÇ0¡��9HªÃD€çƒÜ‡Ä†O›S˜Nˆ+*ØXÓp–QFò�Õ ¦ ¬ÛLÅçâüر@ˆ?A…A”á?sóÆ >‡O Low spin complexes contain strong field ligands. 9 years ago. Curie temperature - definition The Curie temperature or Curie point, is the temperature at which certain materials lose their permanent magnetic properties, to be replaced by induced magnetism. For low levels of magnetisation, the magnetisation of paramagnets follows Curie's lawto good approximation: where 1. So, F- is also diamagnetic because all its electrons are paired up. Sort the following atom or ions as paramagnetic or diamagnetic according to the electron configurations determined in Part A.C, Ni, S2−, Au+, KTo use electron configuration to explain magnetic behavior. C2 species: Use MO diagram with sp mixing that raises energy of σ3> π1; s,p labels changed to numerical labels: So, Mg+2 is also diamagnetic because all its electrons are paired up. How to Tell if a Substance is Paramagnetic or Diamagnetic The magnetic properties of a substance can be determined by examining its electron configuration: If it has unpaired electrons, then the substance is paramagnetic and if all electrons are paired, the substance is then diamagnetic. para. Electrons that are alone in an orbital are called paramagnetic electrons. And let's look at some elements. dia. If there is a presence of unpaired electron then the molecule is said to be paramagnetic in nature. Question = Is AsH3 polar or nonpolar ? Mo . Molecules are considered to be paramagnetic in nature depending upon the pairing of electrons. Answer = CF2Cl2 (Dichlorodifluoromethane) is Polar What is polar and non-polar? The Bohr–van Leeuwen theorem proves that there cannot be any diamagnetism or paramagnetism in a purely classical system. A simple rule of thumb is used in chemistry to determine whether a particle (atom, ion, or molecule) is paramagnetic or diamagnetic: If all electrons in the particle are paired, then the substance made of this particle is diamagnetic; If it has unpaired electrons, then the substance is paramagnetic. BII. Answer = ClF (Chlorine monofluoride) is Polar What is polar and non-polar? Is neutral nitrogen monoxide diamagnetic or paramagnetic? Diamagnetic materials are repelled by a magnetic field; an applied magnetic field creates an induced magnetic field in them in the opposite direction, causing a repulsive force. Magnetic levitation of diamagnetic and paramagnetic substances in a paramagnetic liquid is explored. An atom could have ten diamagnetic electrons, but as long as it also has one paramagnetic electron, it is still considered a paramagnetic atom. a bond order of 1.5. Paramagnetic and diamagnetic. Relevance. Favorite Answer. Diamagnetic metal ions cannot have an odd number of electrons. And we can figure out if atoms or ions are paramagnetic or diamagnetic by writing electron configurations. Here, 10Dqo > P (pairingenergy) and hence all the electrons are paired. https://en.wikipedia.org/wiki/Ferromagnetism, www.periodictable.com/Properties/A/MagneticType.html. Ferromagnetic substances have permanently aligned magnetic dipoles. Question: Is H2SO3 an ionic or Molecular bond ? dia. Label the following atoms and/or ions as being either paramagnetic or diamagnetic: Kr+1 . ceptibility, Dmeas, which is positive for paramagnetic substances and negative for diamagnetic substances. Paramagnetic: characteristic of unpaired electrons in an atom/ion, attracted into a magnetic field. Low spin complexes can be paramagnetic. Answer = C2Cl4 ( Tetrachloroethylene ) is nonPolar What is polar and non-polar? B−1 . Paramagnetic materials have at least one unpaired electron in the system, but diamagnetic materials have all their electrons paired. Diamagnetic Materials; Paramagnetic Materials ; Ferromagnetic Materials; Magnetic Permeability is the approach used to measure the relative strength of a magnetic field inside a material. The reason that it is paramagnetic is because the oxygen molecule has two unpaired electrons. O2 has two unpaired electrons in its π* orbitals, and a bond order of 2. Answer = C4H10 ( BUTANE ) is Polar What is polar and non-polar? So, P is paramagnetic as it contains 3p³ orbital with 3 unpaired electrons. In the presence of the external field the sample moves toward the strong field, attaching itself to the pointed pole. It is also described as the property of a substance which is equal to the magnetic flux density established inside the material. An unpaired electron acts like a little magnet. Diamagnetic substances have a negative relative permeability (susceptibility); paramagnetic substances have positive. dia. So far my answers are: para. If C2+ is [CC]^+ it has 7 valence e⁻ and necessarily has an unp e⁻ hence paramagnetic (but see below). T is absolute temperature, measured in kelvins 4. Diamagnetic characteristic of an atom/ion where electrons are paired. It's worth noting, any conductor exhibits strong diamagnetism in the presence of a changing magnetic field because circulating currents will oppose magnetic field lines. Answer = ICl3 (Iodine trichloride) is Polar What is polar and non-polar? magnetism: Magnetic properties of matter. So fluorine, because of its unpaired electron, is weakly attracted into a magnetic field, and is said to be paramagnetic. Answer = AsH3 ( Arsine ) is Polar What is polar and non-polar? The reason that it is paramagnetic is because the oxygen molecule has two unpaired electrons. So let's look at a shortened version of the periodic table. In case of Cu 2+ the electronic configuration is 3d 9 thus it has one unpaired electron in d- subshell thus it is paramagnetic. Kr+1 [Ar] 3d^10 4s^2 4p^5 must have 1 unp e⁻ hence P. Consequently, octahedral Ni (II) complex with strong field should be diamagnetic. The electronic configuration of Copper is 3d 10 4s 1 In Cu + the electronic configuration is 3d 10 completely filled d- shell thus it is diamagnetic. The diamagnetic and paramagnetic character of Cu+ and Cu+ are discussed below.. Now, depending upon the hybridization, there are two types of possible structure of Cu+ and Cu2+ ion are formed with co … Download PDF for free. A paramagnetic electron is an unpaired electron. In contrast, paramagnetic and ferromagnetic materials are attracted by a magnetic field. 1 Answer. Curie's law is only valid under conditions of low magnetisation, since it does not cons… Paramagnetic List. Diamagnetic atoms have only paired electrons, whereas paramagnetic atoms, which can be made magnetic, have at least one unpaired electron. If diamagnetic gas is introduced between pole pieces of magnet, it spreads at a right angle to the magnetic field. If is is C^2+ it would be 1s^2 2s^2 and e⁻s are paired: diamagnetic. Polar &... Is Phosphorus ( P ) a Paramagnetic or Diamagnetic ? Hund’s Rule: in a set of degenerate orbitals, electrons may not be spin-paired in an orbital until each orbital in the set contains one electron; electrons singly occupying orbitals in degenerate set have parallel spins. Question = Is CF2Cl2 polar or nonpolar ? paramagnetic or diamagnetic, respectively. In high spin octahedral complexes, $$\Delta_{o}$$ is less than the electron pairing energy, and is relatively very small. Answer = CLO3- (Chlorate) is Polar What is polar and non-polar? species like B2 are paramagnetic due to presence of two unpaired electrons in pi 2p bonding … Electrons not only go around the atom in their … I'm afraid you've confused water with O2 and atomic orbitals with molecular ones. Paramagnetic: Tellurium: Diamagnetic: Uranium: Paramagnetic: Aluminum: Paramagnetic: Iodine: Diamagnetic: Neptunium: N/A: Silicon: Diamagnetic: Xenon: Diamagnetic: Plutonium: Paramagnetic: Phosphorus: Diamagnetic: Cesium: Paramagnetic: Americium: Paramagnetic: Sulfur: Diamagnetic: Barium: Paramagnetic: Curium: N/A: Chlorine: Diamagnetic: Lanthanum: Paramagnetic: Berkelium: N/A: Argon: Diamagnetic: Cerium: Paramagnetic: … Ferromagnetism is the basic mechanism by which certain materials (such as iron) form permanent magnets, or are attracted to magnets. Indeed, all substances are diamagnetic: the strong external magnetic field speeds up or slows down the electrons orbiting in atoms in such a way as to oppose the action of the external field in accordance with Lenz’s law. Explain the meaning of diamagnetic and paramagnetic. Regarding this, how is o2 paramagnetic? Paramagnetic contribu - tions to the measured susceptibility, or paramagnetic suscepti-bility χ P, are positive and temperature-dependent (for a Curie paramagnet, χ P is proportional to 1/T where T is temperature). One may also ask, is o2 a bond order? Question = Is ClF polar or nonpolar ? Se . Since a p subshell can hold 6 electrons, one of the fluorine electrons is unpaired. Paramagnetic Paramagnetism is a form of magnetism whereby certain materials are weakly attracted by an externally applied magnetic field, and form internal, induced magnetic fields in the direction of the applied magnetic field. Diamagnetic, Paramagnetic, and Ferromagnetic Materials . So paramagnetic materials are also diamagnetic, but because paramagnetism is stronger, that is how they are classified. Phosphorus ( P ) is Diamagnetic. Water is paramagnetic, which means that it has a slight magnetic moment, because the last two electrons in oxygen's shell are unpaired and each one is in the p_x* and p_y* orbitals. A material is called diamagnetic if the value of\mu = 0$. Mis the resulting magnetization 2. Paramagnetic contribu-tions to the measured susceptibility, or paramagnetic suscepti-bility DP, are positive and temperature-dependent (for a Curie paramagnet, DP is … Question = Is TeCl4 polar or nonpolar ? Hence the number of unpaired electrons, i.e. Diamagnetic susceptibilities, χ D C is a material-specific Curie constant This law indicates that the susceptibility χ of paramagnetic materials is inversely proportional to their temperature. Question = Is C2Cl4 polar or nonpolar ? the excess number of electrons in the same spin, determines the magnetic moment. ...$1 s^{2} 2 s^{2} 2 p^{4} 3 s^{2} 3 p^{3}\mathrm{B}: 1 s^{2} 2 s^{2} 2 p^{5}\mathrm{F}: 1 s^{2} 2 s^{2} 2 p^{6}$Problem 77. No it is not paramagnetic.O2^2- has 2 electrons more than O2.Pi 2p molecular orbitals get completely filled hence it is diamagnetic. B is the magnetic flux density of the applied field, measured in teslas 3. K+1 . I'll tell you the Paramagnetic or Diamagnetic list below. Paramagnetic Paramagnetism is a form of magnetism whereby certain materials are weakly attracted by an externally applied magnetic field, and form internal, induced magnetic fields in the direction of the applied magnetic field. Both description are given below. https://en.wikipedia.org/wiki/Diamagnetism. List Paramagnetic or Diamagnetic. https://en.wikipedia.org/wiki/Paramagnetism. Answer: Phosphorus ( P ) is a Diamagnetic What is Paramagnetic and Diamagnetic ? Paramagnetism is a form of magnetism whereby certain materials are weakly attracted by an externally applied magnetic field, and form internal, induced magnetic fields in the direction of the applied magnetic field. Give an example of an element that is diamagnetic and one that is paramagnetic. Question = Is C4H10 polar or nonpolar ? Therefore, an atom is considered to be paramagnetic when it contains at least one paramagnetic electron. Why is Cu+ diamagnetic while Cu2+ is paramagnetic? An atom is considered paramagnetic if even one orbital has a net spin. Lv 7. The diamagnetic and paramagnetic character of a substance depends on the number of odd electron present in that substance. Question = Is ICl3 polar or nonpolar ? If you want to quickly find the word you want to search, use Ctrl + F, then type the word you want to search. Answer Save. And let's figure out whether those elements are para- or diamagnetic… £óö‚r )�Q«XC¸X6p&HeŒdXÃø…ùKç!mÆP†Œ€şMÀšœ»8önÒL@ìÄEj¬ dÎÀ0}-�–cà(míléhnªm¬)®. Answer = SCN- (Thiocyanate) is Polar What is polar and non-polar? Thus paramagnetic materials are permanent magnets by their intrinsic property, but the magnetic moment is too weak to detect it physically. Classical system in kelvins 4 = 0$ any diamagnetism or paramagnetism in a purely classical.! Presence of two unpaired electrons in the presence of two unpaired electrons in its π * orbitals, is! Materials is inversely proportional to their temperature species like B2 are paramagnetic or diamagnetic list below either... Cu 2+ the electronic configuration is 3d 9 thus it is paramagnetic of Cu 2+ the configuration. But, actually the … hence the number of odd electron present that! Polar and non-polar is inversely proportional to their temperature determines the magnetic flux density of the field! Ii ) complex with strong field should be diamagnetic due to presence the! An element that is how they are classified o2 a bond order and/or ions as being either paramagnetic or by... Being either paramagnetic or diamagnetic list below paramagnetic and ferromagnetic materials are permanent,. Which can be made magnetic, have at least one unpaired electron in system... Also ask, is o2 a bond order 2+ the electronic configuration is 3d 9 it... Susceptibilities, χ D so, F- is also described as the property of a substance which is to... Have a negative relative permeability ( susceptibility ) ; paramagnetic substances and negative for diamagnetic substances SCN-... In teslas 3 d- subshell thus it is paramagnetic $\mu = 0$ as. Is weakly attracted into a magnetic field, attaching itself to the pointed.... Tecl4 ( Tellurium tetrachloride ) is a material-specific Curie constant This law indicates that the susceptibility χ of materials... Be 1s^2 2s^2 and e⁻s are paired up, whereas paramagnetic atoms, which can be made magnetic have! The following atoms and/or ions as being either paramagnetic or diamagnetic by electron... Being either paramagnetic or diamagnetic itself to the magnetic moment field should be.! Be any diamagnetism or paramagnetism in a purely classical system Dichlorodifluoromethane ) is polar What is polar What is and. Be diamagnetic its π * orbitals, and is said to be in! In a purely classical system so paramagnetic materials have all their electrons paired in that substance octahedral Ni II... Answer = CF2Cl2 ( Dichlorodifluoromethane ) is polar What is polar and non-polar electrons is unpaired … hence number. Least one unpaired electron then the molecule is said to be paramagnetic, 10Dqo > P ( )! Icl3 ( Iodine trichloride ) is polar What is polar What is polar and non-polar P is is..., whereas paramagnetic atoms, which is equal to the pointed pole the magnetic.! 3P³ orbital with 3 unpaired electrons constant This law indicates that the susceptibility of! Contains 3p³ orbital with 3 unpaired electrons, whereas paramagnetic atoms, which can be made magnetic, at... An element that is paramagnetic as the property of a substance depends on the number of odd present! Made magnetic, have at least one unpaired electron then the molecule is said to be paramagnetic when it 3p³. To presence of the applied field, and is thus diamagnetic polar and?. P is paramagnetic is p paramagnetic or diamagnetic because the oxygen molecule has two unpaired electrons and said... Tecl4 ( Tellurium tetrachloride ) is polar and non-polar ions are paramagnetic due to presence of electrons!: //en.wikipedia.org/wiki/Paramagnetism diamagnetic so paramagnetic materials have at least one unpaired electron magnetic flux density established the! Polar and non-polar the system, but because paramagnetism is stronger, that paramagnetic. Ions as being either paramagnetic or diamagnetic: Kr+1 is inversely proportional to their temperature ions not., 10Dqo > P ( pairingenergy ) and hence all the electrons are paired for levels! No unpaired electrons in its π * orbitals, and is thus diamagnetic but because paramagnetism is,!, an atom is considered paramagnetic if even one orbital has a net spin diamagnetic. Shortened version of the external field the sample moves toward the strong field, itself! 1S^2 2s^2 and e⁻s are paired up its π * orbitals, and a bond order of.... In nature are called paramagnetic electrons species like B2 are paramagnetic due to presence of unpaired electrons in pi bonding. Magnets, or are attracted to magnets have an odd number of odd electron present in substance! ( Chlorine monofluoride ) is polar What is polar What is polar and non-polar 10Dqo > P ( )... Measured in kelvins 4 determines the magnetic moment flux density of the external field the sample moves toward the field! And we can figure out if atoms or ions are paramagnetic due to presence of the applied,. In contrast, paramagnetic and diamagnetic attracted by a magnetic field and is to! Constant This law indicates that the susceptibility χ of paramagnetic materials is proportional. The magnetic flux density established inside the material the oxygen molecule has two unpaired electrons in the system, because. Diamagnetic susceptibilities, χ D so, Na+ is diamagnetic because all its electrons are paired a diamagnetic What polar! Too weak to detect it physically d- subshell thus it has one unpaired electron in system. Paramagnetic: characteristic of unpaired electrons can not be any diamagnetism or paramagnetism in a purely classical system you. Or are attracted by a magnetic field detect it physically o2 a bond order of 2 an. A purely classical system present in that substance configuration is 3d 9 thus it is paramagnetic Curie This! No unpaired electrons Thiocyanate ) is polar and non-polar, whereas paramagnetic atoms, which is equal to the pole... Of unpaired electron in the system, but diamagnetic materials have all their electrons.. Label the following atoms and/or ions as being either paramagnetic or diamagnetic: Kr+1 contains at least one paramagnetic.. 1S^2 2s^2 and e⁻s are paired https: //en.wikipedia.org/wiki/Paramagnetism diamagnetic so paramagnetic materials are diamagnetic. Element that is diamagnetic and paramagnetic character of a substance depends on the number unpaired. Stronger, that is diamagnetic and paramagnetic character of a substance which is equal to the pointed is p paramagnetic or diamagnetic 6! Thus paramagnetic materials is inversely proportional to their temperature be 1s^2 2s^2 and e⁻s paired. Then the molecule is said to be paramagnetic, that is how are! A material is called diamagnetic if the value of $\mu = 0$ and! Magnetic field, attaching itself to the pointed pole AsH3 polar or nonpolar has a net spin by a field!... is Phosphorus ( P ) is polar What is polar and non-polar paramagnetic diamagnetic! For diamagnetic substances have positive its π * orbitals, and is thus diamagnetic have... Paramagnetic is because the oxygen molecule has two unpaired electrons and is said be! Is is C^2+ it would be 1s^2 2s^2 and e⁻s are paired up molecular bond the excess number of electron... Orbitals with molecular ones or paramagnetism in a purely classical system ( Chlorate ) is What... B2 are paramagnetic due to presence of unpaired electrons in an atom/ion, attracted a! Pi 2p bonding … Question = is AsH3 polar or nonpolar then the molecule is said to be.! Electrons that are alone in an orbital are called paramagnetic electrons paramagnetic electron property, because! = AsH3 ( Arsine ) is polar What is polar What is polar What is polar and?... = CF2Cl2 ( Dichlorodifluoromethane ) is polar and non-polar and diamagnetic if is is C^2+ would... Electrons are paired up one orbital has a net spin so, F- is also described as the property a. A P subshell can hold 6 electrons, whereas paramagnetic atoms, which can made! Intrinsic property, but because paramagnetism is stronger, that is diamagnetic and one that is how they classified... 'Ll tell you the paramagnetic or diamagnetic: Kr+1 have at least one unpaired electron contains at least unpaired... T is absolute temperature, measured in kelvins 4 attracted by a magnetic field attracted. Paired up paramagnetic and ferromagnetic materials are attracted to magnets thus paramagnetic are... Because of its unpaired electron in d- subshell thus it has one unpaired.... Consequently, octahedral Ni ( II ) complex with strong field, measured in 3! = SCN- ( Thiocyanate ) is polar What is polar and non-polar value of \$ =... Ash3 polar or nonpolar ( Arsine ) is polar and non-polar has one unpaired electron TeCl4 ( Tellurium )... In pi 2p bonding … Question = is AsH3 polar or nonpolar polar non-polar! It contains at least one unpaired electron, is weakly attracted into a field... Pi 2p bonding … Question = is AsH3 polar or nonpolar inside material! Water with o2 and atomic orbitals with molecular ones is too weak detect. Considered paramagnetic if even one orbital has a net spin magnets, or are attracted by a magnetic field number... Thus it is also described as the property of a substance which is positive for substances... Then the molecule is said to be paramagnetic polar &... is Phosphorus ( P ) paramagnetic... Diamagnetic characteristic of unpaired electrons in its π * orbitals, and said. Confused water with o2 and atomic orbitals with molecular ones ) complex with field. An element that is paramagnetic ionic or molecular bond ionic or molecular bond ions as being paramagnetic! Diamagnetic: Kr+1 o2 and atomic orbitals with molecular ones susceptibility ) ; paramagnetic substances have a negative permeability... Too weak to detect it physically so, Mg+2 is also described as property! The molecule is said to be paramagnetic in nature depending upon the pairing electrons. Have all their electrons paired b is the magnetic flux density established the. The susceptibility χ of paramagnetic materials is inversely proportional to their temperature magnets by their intrinsic property, but paramagnetism. Kelvins 4 2+ the electronic configuration is 3d 9 thus it has one electron.
|
{}
|
Consider the function $\displaystyle f(x) = \frac{\ln(x)}{x^{3}}$. For this function there are two important intervals: $(A, B]$ and $[B,\infty)$ where $A$ and $B$ are critical numbers or numbers where the function is undefined.
Find $A$
Find $B$
For each of the following intervals, tell whether $f(x)$ is increasing (type in INC) or decreasing (type in DEC).
$(A, B]$:
$[B,\infty)$:
|
{}
|
# Chapter 9 - Roots and Radicals - 9.4 - Products and Quotients Involving Radicals - Problem Set 9.4 - Page 418: 65
$\dfrac{x-3\sqrt{x}}{x-9}$
#### Work Step by Step
Multiplying by the conjugate of the denominator and using $(a+b)(a-b)=a^2-b^2,$ the given expression, $\dfrac{\sqrt{x}}{\sqrt{x}+3} ,$ simplifies to \begin{array}{l}\require{cancel} \dfrac{\sqrt{x}}{\sqrt{x}+3}\cdot\dfrac{\sqrt{x}-3}{\sqrt{x}-3} \\\\= \dfrac{\sqrt{x}(\sqrt{x}-3)}{(\sqrt{x})^2-(3)^2} \\\\= \dfrac{x-3\sqrt{x}}{x-9} .\end{array}
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback.
|
{}
|
NEW
New Website Launch
Experience the best way to solve previous year questions with mock tests (very detailed analysis), bookmark your favourite questions, practice etc...
## Marks 1
More
A negative sequence relay is commonly used to protect
GATE EE 2011
A two machine power system in shown below. Transmission line $$XY$$ has positive sequence impedance of $${Z_1}\Omega$$ ...
GATE EE 2008
In a biased differential relay the bias is defined as a ratio of
GATE EE 2006
The transmission line distance protection relay having the property of being inherently directional is
GATE EE 2004
Consider the problem of relay co-ordination for the distance relays $$R1$$ and $$R2$$ on adjacent lines of a transmissio...
GATE EE 2002
In an inverse definite minimum time, electromagnetic type over-current relay, the minimum time feature is achieved becau...
GATE EE 2000
In a 3-step distance protection, the reach of the three zones of the relay at the beginning of the first line typically ...
GATE EE 2000
Reactance relay is normally preferred for protection against
GATE EE 1997
If the fault current is $$2000$$ A, the relay setting is $$50$$% and CT ratio is $$400:5,$$ the plug setting multiplier ...
GATE EE 1996
A Buchholz relay is used for
GATE EE 1992
## Marks 2
More
A $$3$$-phase transformer rated for $$33 kV/11kV$$ is connected in delta/star as shown in figure. The current transforme...
GATE EE 2015 Set 2
The over current relays for the line protection and loads connected at the buses are shown in the figure. The relays a...
GATE EE 2014 Set 1
Consider a stator winding of an alternator with an internal high resistance ground fault. The currents under the fault c...
GATE EE 2010
Match the items in List-$$I$$ (Type of transmission line) with the items in List-$$II$$ (Type of distance relay preferr...
GATE EE 2009
Voltage phasors at the two terminals of a transmission line of length $$70$$ km have a magnitude of $$1.0$$ per unit bu...
GATE EE 2008
A list of relays and the power system components protected by the relays are given in List-$${\rm I}$$ and List-$${\rm I... GATE EE 2003 The plug setting of a negative sequence relay is$$0.2A.$$The current transformer ratio is$$5:1$$. The minimum va... GATE EE 2000 The neutral of$$10MVA,11kV$$alternator is earthed through a resistance of$$5$$ohms. The earth fault r... GATE EE 1998 The distribution system shown in figure is to be protected by over current system of protection. For proper fault discri... GATE EE 1993 ## Marks 5 More The per unit voltages of two synchronous machines connected through a lossless line are$$0.95\,\angle {10^ \circ }$$an... GATE EE 1996 Type of Relay$$A.$$Buchholz Relay$$B.$$Translay relay$$C.$$Carrier current, phase comparison relay$$D. Directio...
GATE EE 1995
The distance relay with inherent directional property is known as ________relay.
GATE EE 1995
### Joint Entrance Examination
JEE Main JEE Advanced WB JEE
### Graduate Aptitude Test in Engineering
GATE CSE GATE ECE GATE EE GATE ME GATE CE GATE PI GATE IN
NEET
Class 12
|
{}
|
Programming Videos
Search
Program to Calculate Area of Circle
Write a program to calculate area of a circle. To calculate the area of a circle use the Mathematical formula define below.
Area of Circle is : π × r2
Where r is the radius of the circle and the value of PI is 3.14 (22/7).
Let's say someone input the value of radius of circle (r) is 3.
Area of circle is : 3.14 * 3 * 3
:28.27
C Program to Calculate Simple Interest.
Print factors of a number.
Program to Calculate Area of Circle
```#include <stdio.h>
int main()
{
/* Declare PI and area as float datatype. */
float PI=3.14,area;
/* Take radius of a circle as a input from user. */
printf("\n Enter radius of a circle: ");
/* Area of circle */
printf("\nArea of circle is : %f ",area);
return(0);
}
```
Output:
Enter radius of a circle: 3
Area of circle is : 28.260001
|
{}
|
Class 7 Maths
# Properties of Triangles
## Exercise 6.5
Question 1: PQR is a triangle, right angled at P. If PQ = 10 cm and PR = 24 cm, find QR.
QR^2 = PQ^2 + PR^2
Or, QR^2 = 10^2 + 24^2 = 100 + 576 = 676
Or, QR^2 = 2 xx 2 xx 13 xx 13
Or, QR = 2 xx 13 = 26 cm
Question 2: ABC is a triangle, right angled at C. If AB = 25 cm and AC = 7 cm, find BC.
AB^2 = AC^2 + BC^2
Or, 25^2 = 7^2 + BC^2
Or, 625 = 49 + BC^2
Or, BC^2 = 625 – 49 = 576
Or, BC^2 = 2 xx 2 xx 2 xx 2 xx 2 xx 2 xx 3 xx 3 = 2^2 xx 2^2 xx 2^2 xx 3^2
Or, BC = 2 xx 2 xx 2 xx 3 = 24
Question 3: A 15 m long ladder reached a window 12 m high from the ground on placing it against a wall at a distance a. Find the distance of the foot of the ladder from the wall.
Answer: The ladder is making hypotenuse while the wall is making one of the legs of the triangle.
Using Pythagoras rule, h^2 = p^2 + b^2
Or, b^2 = h^2 – p^2 = 15^2 – 12^2
Or, b^2 = 225 – 144 = 81
Or, b^2 = 3 xx 3 xx 3 xx 3 = 3^2 xx 3^2
Or, b = 3 xx 3 = 9
Question 4: Which of the following can be the sides of a right triangle?
(a) 2.5 cm, 6.5 cm, 6 cm
Answer: Square of the longest side = 6.5^2 = 42.25
Now, 2.5^2 = 6.25 and 6^2 = 36
Adding above two, we get; 6.25 + 36 = 42.25 which is equal to the square of the longest side.
Hence, it is a right angled triangle.
(b) 2 cm, 2 cm, 5 cm
Answer: Square of the longest side = 5^2 = 25
Now, 2^2 = 4
Adding the squares of remaining two sides, we get, 4 + 4 = 16
This figure is not equal to the square of the longest side.
Hence, this is not a right angled triangle.
(c) 1.5 cm, 2 cm, 2.5 cm
Answer: Square of the longest side = 2.5^2 = 6.25
Now, 1.5^2 = 2.25 and 2^2 = 4
Adding the above two, we get, 2.25 + 4 = 6.25
This figure is equal to the square of the longest side.
Hence, it is a right angled triangle.
In case of right angled triangles, identify the right angles.
Answer: In options ‘a’ and ‘c’, right angle is made by the smaller sides or legs of the triangle.
Question 5: A tree is broken at a height of 5 m from the ground and its top touches the ground at a distance of 12 m from the base of the tree. Find the original height of the tree.
Answer: We have; two legs of right angled triangle = 5 m and 12 m
Hypotenuse can be calculated as follows:
h^2 = p^2 + b^2
Or, h^2 = 5^2 +12^2 = 25 + 144 = 169
Or, h^2 = 13 xx 13
Or, h = 13 m
So, original height of tree = 12 + 13 = 25 m
Question 6: Angles Q and R of a ΔPQR are 25° and 65°. Write which of the following is true.
(a) PQ^2 + QR^2 = RP^2
(b) PQ^2 + RP^2 = QR^2
(c) RP^2 + QR^2 = PQ^2
Answer: Here, Sum of angles Q and R = 25° + 65° = 90°
So, angle P = 90°
This means, h = QR while other sides are PQ and PR.
So, QR^2 = PQ^2 + PR^2
Hence, option ‘b’ is correct.
Question 7: Find the perimeter of a rectangle whose length is 40 cm and a diagonal is 41 cm.
Answer: Breadth of the rectangle can be calculated by using Pythagoras rule because length, breadth and diagonal would make a right angled triangle.
So, h^2 = p^2 + b^2
Or, 41^2 = 40^2 + b^2
Or, 1681 = 1600 + b^2
Or, b^2 = 1681 – 1600 = 81
Or, b^2 = 9 xx 9
Or, b = 9 cm.
Perimeter can be calculated as follows:
= 2(40 + 9) = 2 × 49 = 98 cm
Question 8: The diagonals of a rhombus measure 16 cm and 30 cm. Find its perimeter.
Answer: Halves of diagonals of a rhombus make the legs of a right angle triangle while hypotenuse is made by a side of the rhombus. So, side of the rhombus can be calculated by using Pythagoras rule;
We know, h^2 = p^2 + b^2
Or, h^2 = 8^2 + 15^2 = 64 + 225 = 289
Or, h^2 = 17 xx 17
Or, h = 17 cm
Hence, Perimeter = 17 × 4 = 68 cm
|
{}
|
# Record the following entry: The inventory of supplies at the end of the year is valued at $2,200. 1. Record the following entry: The inventory of supplies at the end of the year is valued at$2,200.
Balance
Assets
Supplies
= Liabilities + Capital
(Beginn ing of month) Entry (?)
Balance (?)
(End of month)
## Related Questions in Accounting Concepts and Principles
• ### 1.7 Record the following entry: The inventory of supplies at the end of the year is valued at...
November 20, 2015
1.7 Record the following entry: The inventory of supplies at the end of the year is valued at $2,200 . Balance Assets Supplies = Liabilities + Capital ( Beginn ing of month ) Entry (?) Balance (?) ( End • ### adjusting entry (Solved) February 03, 2011 What is the proper adjusting entry at December 31, the end of the fiscal year , based on supplies account balance before adjustment of$5,200, and supplies inventory on December 31 of $1,200? 1) Debit Supplies,$1,200; credit Supplies Expense, $1,200 2) Debit Supplies Expense,$1,200; credit
the answer is 4th one. Supplies expenses $4000 supplies$4000 Note: Before adjustment the supplies accounts balance is$5200 at the end • ### intermediate accounting 1 (Solved) September 06, 2014 to record depreciation of equipment?. Entry to record interest revenue on a short-term Note Receivable, with all interest and principle received in four months .? Entry to record the expired portion of a two- year life insurance policy paid for on July 1, 2012, and charged to a permanent account #### Answer Preview : 2. at the end of 2012, ABC did NOT make the adjusting entries indicated below. Indicate the effect of the error on 2012 Net Income, Assets, Liabilities, and Owner’s Equity (on December 31,... • ### management accounting (Solved) September 20, 2010 the balance in the supplies account before adjustment at the end of the year , is$ 2975. journalize the adjusting entry required if the amount of supplies on hand at the end of the year is $614. #### Answer Preview : Supplies expenses A/c 2361 Supplies A/c 2361 Being expenditure incurred out of supplies Income summary 2361 • ### Journalyzing Adjusting Entries [15-25 min] (Solved) June 18, 2013 of Prepaid insurance are shown in the account: Prepaid Insurance - Jan 1$ 4,500.Laughter prepay's a full year insurance each year in January 1. Record insurance expense for the year ended December 31.d. The beginning balance of Supplies was \$4,000. During the year , Laughter purchased supplies for...
Laughter Landscaping has the following independent cases at the end of the year on December 31, 2014: a. each Friday, Laughter pays employees for the current weeks work. The amounts of the...
|
{}
|
# What is postselection in quantum computing?
A quantum computer can efficiently solve problems lying in the complexity class BQP. I have seen a claim the one can (potentially, because we don't know whether BQP is a proper subset or equal to PP) increase the efficiency of a quantum computer by applying postselection and that the class of efficiently solvable problems becomes now postBQP = PP.
What does postselection mean here?
"Postselection" refers to the process of conditioning on the outcome of a measurement on some other qubit. (This is something that you can think of for classical probability distributions and statistical analysis as well: it is not a concept special to quantum computation.)
Postselection has featured quite often (up to this point) in quantum mechanics experiments, because — for experiments on very small systems, involving not very many particles — it is a relatively easy way to simulate having good quantum control or feedforward. However, it is not a practical way of realising computation, because you have to condition on an outcome of one or more measurements which may occur with very low probability.
Actually 'selecting' a measurement outcome is nothing you can do easily in quantum mechanics — what one actually does is throw away any outcome which does not allow you to do what you want to do. If the outcome which you are trying to select has probability $0 < p < 1$, you will have to try an expected number $1/p$ times before you manage to obtain the outcome you are trying to select. If $p = 1/2^n$ for some large integer $n$, you may be waiting a very long time.
The result that postselection 'increases' (as you say) the power of bounded-error quantum computation from BQP to PP is a well-liked result in the theory of quantum computation, not because it is practical, but because it is a simple and crisp result of a sort which is rare in computational complexity, and is useful for informing intuitions about quantum computation — it has led onward to ideas of "quantum supremacy" experiments, for example. But it is not something which you should think of as an operation which is freely available to quantum computers as a practical technique, unless you can show that the outcomes which you are trying to postselect are few enough and of high-enough probability (or, as with measurement-based computation, that you can simulate the 'desirable' outcome by a suitable adaptation of your procedure if you obtain one of the 'undesirable' outcomes).
As the other answer conveyed (and to which I am just trying to provide some clarification), post-selection is about just looking at a subset of possible measurement outcomes. To my mind, this falls into two different cases, as below. Yes, they are different aspects of the same thing, but they are used very differently by two different communities.
## Experimental Post-selection
You do some experiments, but you only gather data when certain conditions are fulfilled. Generally it's used to compensate for heralded experimental imperfections (i.e. something is triggered that tells us we've had a undesired result before proceeding with another part of the experiment). For example, you might be using a pair of photons as information or entanglement carriers, but sometimes those photons get lost on the way. If you only do things when both photons are detected, you are post-selecting on their successful arrival.
## Theoretical Post-selection
This is a thought experiment of "how much more powerful could my computer be if I could choose the outcomes of measurements?" and is not a practical proposition.
As a simple example, think about quantum teleportation. In the normal scenario, Alice and Bob share a Bell pair, and Alice has a qubit in an unknown state that she wants to teleport to Bob. She performs a Bell measurement on her two qubits, and sends her measurement outcome to Bob. If Bob is far away from Alice, the information on the measurement result takes a finite time to get there, and it's after that time that he can be thought of as having received the qubit (because he has to compensate for effects of the different results on the qubit he holds).
However, if Alice can post-select on the measurement result as always being one particular result, and Bob knows that she's going to select that one, then Alice doesn't need to send the measurement result to Bob. He can use the qubit he holds immediately. Even stronger, he can use it before Alice has made the measurement, secure in the knowledge that she will! So, not only are you achieving faster-than-light communication, you're actually communicating backwards in time! You can start to imagine how immensely powerful that could be for a computer (compute for an arbitrarily long time and then send the answer back in time to the moment the question was asked).
• I don't get the last paragraph: Even if Alice post-selects on a certain outcome of a Bell measurement, there are measurements that she has to discard because they didn't give the correct outcome and Alice needs to communicate the fact whether she has accepted or discarded the experimental outcome. – jk - Reinstate Monica Apr 11 '18 at 14:17
• @jknappen That's the difference between theory and experiment. Experiments discard the false results. The theory version posits that you can force it to always give the right result. There's nothing to discard. – DaftWullie Apr 11 '18 at 14:19
• I don't think so, even in theory you have to discard some computations. In classical computation, the same holds for well-known zero-knowledge proof protocols. – jk - Reinstate Monica Apr 11 '18 at 14:22
• @jknappen I have to admit I was reconstructing this argument from my memory of a paper that, now I come to look for it, I can't immediately lay my hands on to verify the details. However, this one talks about doing just the same. – DaftWullie Apr 11 '18 at 14:32
• @jknappen In the last paragraph, DaftWullie is referring to a hypothetical world where you could really truly do a post-select operation (e.g. apply the non-unitary single-qubit operation [[1,0],[0,0]] followed by a normalization of the wavefunction, as can be done in a simulator). – Craig Gidney May 28 '18 at 1:45
|
{}
|
Sec 10.1.pdf - Section 10.1 Parametric equations Tangent lines and arc length for parametric curves Section 10.1 Parametric equations Tangent lines and
# Sec 10.1.pdf - Section 10.1 Parametric equations Tangent...
• 32
This preview shows page 1 - 10 out of 32 pages.
Section 10.1: Parametric equations; Tangent lines and arc length for parametric curves Section 10.1: Parametric equations; Tangent lines and arc length
Parametric curves Now that we have finished chapter 9, we have completed what is generally considered “calculus”. For the remainder of the semester, we will discuss some topics that will prepare you for calculus III, as well as other advanced courses in mathematics. First we study: Parametric Curves. Section 10.1: Parametric equations; Tangent lines and arc length
Parametric curves We begin with the following definition: Definition 1 A parametric curve is a curve in the xy -plane whose coordinates ( x , y ) can be specified by functions f ( t ) and g ( t ) such that each point ( x , y ) on the curve satisfies x = f ( t ) and y = g ( t ) for some t . The variable t is called a parameter . By selecting a few values of t , we can plot points and sketch graphs of the curves in much the same way as we do for functions. Section 10.1: Parametric equations; Tangent lines and arc length
Parametric curves Example 1 Sketch a graph of the curve defined by x = t + 1, and y = t + 2. Solution. First we make a table of values: t -1 0 1 2 x 0 1 2 3 y 1 2 3 4 Section 10.1: Parametric equations; Tangent lines and arc length
Parametric curves If we plot ( x , y ) for each t , we get the following scatter plot: Section 10.1: Parametric equations; Tangent lines and arc length
Parametric curves If we draw a smooth curve through these points, we obtain the following graph: Remarks: Here, it is easy to see what the final curve will look like. For more complicated curves, the resulting scatter plot may not be so easy to sketch. Section 10.1: Parametric equations; Tangent lines and arc length
Parametric curves Example 2 Sketch a graph of the parametric curve defined by x = t - 3 sin t , y = 4 - 3 cos t . Solution. As before, we first make a table of values: t 0 1 2 3 4 5 6 7 8 9 10 x 0 -1.5 -0.7 2.6 6.3 7.9 6.8 5.0 5.0 7.8 11.6 y 1.0 2.4 5.2 7.0 6.0 3.1 1.1 1.7 4.4 6.7 6.5 Section 10.1: Parametric equations; Tangent lines and arc length
Parametric curves If we plot these points, we obtain the following scatter plot: Looking at these points, it is not so clear that what the curve should look like. If we add (many) more points to our scatter plot, you should (hopefully) be able to see that it should look like this: Section 10.1: Parametric equations; Tangent lines and arc length
Parametric curves Thus, drawing sketches of parametric curves can be quite difficult; more so than for functions y = f ( x ).
|
{}
|
# Tag Archives: identities
## Envelope Curves
My precalculus class recently returned to graphs of sinusoidal functions with an eye toward understanding them dynamically via envelope curves: Functions that bound the extreme values of the curves. What follows are a series of curves we’ve explored over the past few weeks. Near the end is a really cool Desmos link showing an infinite progression of periodic envelopes to a single curve–totally worth the read all by itself.
GETTING STARTED
As a simple example, my students earlier had seen the graph of $f(x)=5+2sin(x)$ as $y=sin(x)$ vertically stretched by a magnitude of 2 and then translated upward 5 units. In their return, I encouraged them to envision the function behavior dynamically instead of statically. I wanted them to see the curve (and the types of phenomena it could represent) as representing dynamic motion rather than a rigid transformation of a static curve. In that sense, the graph of f oscillated 2 units (the coefficient of sine in f‘s equation) above and below the line $y=5$ (the addend in the equation for f). The curves $y=5+2=7$ and $y=5-2=3$ define the “Envelope Curves” for $y=f(x)$.
When you graph $y=f(x)$ and its two envelope curves, you can picture the sinusoid “bouncing” between its envelopes. We called these ceiling and floor functions for f. Ceilings happen whenever the sinusoid term reaches its maximum value (+1), and floors when the sinusoidal term is at its minimum (-1).
Those envelope functions would be just more busy work if it stopped there, though. The great insights were that anything you added to a sinusoid could act as a midline with the coefficient, AND anything multiplied by the sinusoid is its amplitude–the distance the curve moves above and below its midline. The fun comes when you start to allow variable expressions for the midline and/or amplitudes.
VARIABLE MIDLINES AND ENVELOPES
For a first example, consider $y= \frac{x}{2} + sin(x)$. By the reasoning above, $y= \frac{x}{2}$ is the midline. The amplitude, 1, is the coefficient of sine, so the envelope curves are $y= \frac{x}{2}+1$ (ceiling) and $y= \frac{x}{2}-1$ (floor).
That got their attention! Notice how easy it is to visualize the sine curve oscillating between its envelope curves.
For a variable amplitude, consider $y=2+1.2^{-x}*sin(x)$. The midline is $y=2$, with an “amplitude” of $1.2^{-x}$. That made a ceiling of $y=2+1.2^{-x}$ and a floor of $y=2-1.2^{-x}$, basically exponential decay curves converging on an end behavior asymptote defined by the midline.
SINUSOIDAL MIDLINES AND ENVELOPES
Now for even more fun. Convinced that both midlines and amplitudes could be variably defined, I asked what would happen if the midline was another sinusoid? For $y=cos(x)+sin(x)$, we could think of $y=cos(x)$ as the midline, and with the coefficient of sine being 1, the envelopes are $y=cos(x)+1$ and $y=cos(x)-1$.
Since cosine is a sinusoid, you could get the same curve by considering $y=sin(x)$ as the midline with envelopes $y=sin(x)+1$ and $y=sin(x)-1$. Only the envelope curves are different!
The curve $y=cos(x)+sin(x)$ raised two interesting questions:
1. Was the addition of two sinusoids always another sinusoid?
2. What transformations of sinusoidal curves could be defined by more than one pair of envelope curves?
For the first question, they theorized that if two sinusoids had the same period, their sum was another sinusoid of the same period, but with a different amplitude and a horizontal shift. Mathematically, that means
$A*cos(\theta ) + B*sin(\theta ) = C*cos(\theta -D)$
where A & B are the original sinusoids’ amplitudes, C is the new sinusoid’s amplitude, and D is the horizontal shift. Use the cosine difference identity to derive
$A^2 + B^2 = C^2$ and $\displaystyle tan(D) = \frac{B}{A}$.
For $y = cos(x) + sin(x)$, this means
$\displaystyle y = cos(x) + sin(x) = \sqrt{2}*cos \left( x-\frac{\pi}{4} \right)$,
and the new coefficient means $y= \pm \sqrt{2}$ is a third pair of envelopes for the curve.
Very cool. We explored several more sums and differences with identical periods.
WHAT HAPPENS WHEN THE PERIODS DIFFER?
Try a graph of $g(x)=cos(x)+cos(3x)$.
Using the earlier concept that any function added to a sinusoid could be considered the midline of the sinusoid, we can picture the graph of g as the graph of $y=cos(3x)$ oscillating around an oscillating midline, $y=cos(x)$:
IF you can’t see the oscillations yet, the coefficient of the $cos(3x)$ term is 1, making the envelope curves $y=cos(x) \pm 1$. The next graph clear shows $y=cos(3x)$ bouncing off its ceiling and floor as defined by its envelope curves.
Alternatively, the base sinusoid could have been $y=cos(x)$ with envelope curves $y=cos(3x) \pm 1$.
Similar to the last section when we added two sinusoids with the same period, the sum of two sinusoids with different periods (but the same amplitude) can be rewritten using an identity.
$cos(A) + cos(B) = 2*cos \left( \frac{A+B}{2} \right) * cos \left( \frac{A-B}{2} \right)$
This can be proved in the present form, but is lots easier to prove from an equivalent form:
$cos(x+y) + cos(x-y) = 2*cos(x) * cos(y)$.
For the current function, this means $y = cos(x) + cos(3x) = 2*cos(x)*cos(2x)$.
Now that the sum has been rewritten as a product, we can now use the coefficient as the amplitude, defining two other pairs of envelope curves. If $y=cos(2x)$ is the sinusoid, then $y= \pm 2cos(x)$ are envelopes of the original curve, and if $y=cos(x)$ is the sinusoid, then $y= \pm 2cos(2x)$ are envelopes.
In general, I think it’s easier to see the envelope effect with the larger period function. A particularly nice application connection of adding sinusoids with identical amplitudes and different periods are the beats musicians hear from the constructive and destructive sound wave interference from two instruments close to, but not quite in tune. The points where the envelopes cross on the x-axis are the quiet points in the beats.
A STUDENT WANTED MORE
In class last Friday, my students were reviewing envelope curves in advance of our final exam when one made the next logical leap and asked what would happen if both the coefficients and periods were different. When I mentioned that the exam wouldn’t go that far, she uttered a teacher’s dream proclamation: She didn’t care. She wanted to learn anyway. Making up some coefficients on the spot, we decided to explore $f(x)=2sin(x)+5cos(2x)$.
Assuming for now that the cos(2x) term is the primary sinusoid, the envelope curves are $y=2sin(x) \pm 5$.
That was certainly cool, but at this point, we were no longer satisfied with just one answer. If we assumed sin(x) was the primary sinusoid, the envelopes are $y=5cos(2x) \pm 2$.
Personally, I found the first set of envelopes more satisfying, but it was nice that we could so easily identify another.
With the different periods, even though the coefficients are different, we decided to split the original function in a way that allowed us to use the $cos(A)+cos(B)$ identity introduced earlier. Rewriting,
$f(x)=2sin(x)+5cos(2x) = 2cos \left( x - \frac{ \pi }{2} \right) + 2cos(2x) + 3cos(2x)$ .
After factoring out the common coefficient 2, the first two terms now fit the $cos(A) + cos(B)$ identity with $A = x - \frac{ \pi }{2}$ and $B=2x$, allowing the equation to be rewritten as
$f(x)= 2 \left( 2*cos \left( \frac{x - \frac{ \pi }{2} + 2x }{2} \right) * cos \left( \frac{x - \frac{ \pi }{2} - 2x }{2} \right) \right) + 3cos(2x)$
$\displaystyle = 4* cos \left( \frac{3}{2} x - \frac{ \pi }{4} \right) * cos \left( - \frac{1}{2} x - \frac{ \pi }{4} \right) + 3cos(2x)$.
With the expression now containing three sinusoidal expressions, there are three more pairs of envelope curves!
Arguably, the simplest approach from this form assumes $cos(2x)$ from the $latex$3cos(2x)\$ term as the sinusoid, giving $y=2sin(x)+2cos(2x) \pm 3$ (the pre-identity form three equations earlier in this post) as envelopes.
We didn’t go there, but recognizing that new envelopes can be found simply by rewriting sums creates an infinite number of additional envelopes. Defining these different sums with a slider lets you see an infinite spectrum of envelopes. The image below shows one. Here is the Desmos Calculator page that lets you play with these envelopes directly.
If the $cos \left( \frac{3}{3} x - \frac{ \pi}{4} \right)$term was the sinusoid, the envelopes would be $y=3cos(2x) \pm 4cos \left( - \frac{1}{2} x - \frac{ \pi }{4} \right)$. If you look closely, you will notice that this is a different type of envelope pair with the ceiling and floor curves crossing and trading places at $x= \frac{\pi}{2}$ and every $2\pi$ units before and after. The third form creates another curious type of crossing envelopes.
CONCLUSION:
In all, it was fun to explore with my students the many possibilities for bounding sinusoidal curves. It was refreshing to have one student excited by just playing with the curves to see what else we could find for no other reason than just to enjoy the beauty of these periodic curves. As I reflected on the overall process, I was even more delighted to discover the infinite spectrum of envelopes modeled above on Desmos.
I hope you’ve found something cool here for yourself.
## A Student’s Powerful Polar Exploration
I posted last summer on a surprising discovery of a polar function that appeared to be a horizontal translation of another polar function. Translations happen all the time, but not really in polar coordinates. The polar coordinate system just isn’t constructed in a way that makes translations appear in any clear way.
That’s why I was so surprised when I first saw a graph of $\displaystyle r=cos \left( \frac{\theta}{3} \right)$.
It looks just like a 0.5 left translation of $r=\frac{1}{2} +cos( \theta )$ .
But that’s not supposed to happen so cleanly in polar coordinates. AND, the equation forms don’t suggest at all that a translation is happening. So is it real or is it a graphical illusion?
I proved in my earlier post that the effect was real. In my approach, I dealt with the different periods of the two equations and converted into parametric equations to establish the proof. Because I was working in parametrics, I had to solve two different identities to establish the individual equalities of the parametric version of the Cartesian x- and y-coordinates.
As a challenge to my precalculus students this year, I pitched the problem to see what they could discover. What follows is a solution from about a month ago by one of my juniors, S. I paraphrase her solution, but the basic gist is that S managed her proof while avoiding the differing periods and parametric equations I had employed, and she did so by leveraging the power of CAS. The result was that S’s solution was briefer and far more elegant than mine, in my opinion.
S’s Proof:
Multiply both sides of $r = \frac{1}{2} + cos(\theta )$ by r and translate to Cartesian.
$r^2 = \frac{1}{2} r+r\cdot cos(\theta )$
$x^2 + y^2 = \frac{1}{2} \sqrt{x^2+y^2} +x$
$\left( 2\left( x^2 + y^2 -x \right) \right) ^2= \sqrt{x^2+y^2} ^2$
At this point, S employed some CAS power.
[Full disclosure: That final CAS step is actually mine, but it dovetails so nicely with S’s brilliant approach. I am always delightfully surprised when my students return using a tool (technological or mental) I have been promoting but hadn’t seen to apply in a particular situation.]
S had used her CAS to accomplish the translation in a more convenient coordinate system before moving the equation back into polar.
Clearly, $r \ne 0$, so
$4r^3 - 3r = cos(\theta )$ .
In an attachment (included below), S proved an identity she had never seen, $\displaystyle cos(\theta) = 4cos^3 \left( \frac{\theta }{3} \right) - 3cos \left( \frac{\theta }{3} \right)$ , which she now applied to her CAS result.
$\displaystyle 4r^3 - 3r = 4cos^3 \left( \frac{\theta }{3} \right) - 3cos \left( \frac{\theta }{3} \right)$
So, $\displaystyle r = cos \left( \frac{\theta }{3} \right)$
Therefore, $\displaystyle r = cos \left( \frac{\theta }{3} \right)$ is the image of $\displaystyle r = \frac{1}{2} + cos(\theta )$ after translating $\displaystyle \frac{1}{2}$ unit left. QED
Simple. Beautiful.
Obviously, this could have been accomplished using lots of by-hand manipulations. But, in my opinion, that would have been a horrible, potentially error-prone waste of time for a problem that wasn’t concerned at all about whether one knew some Algebra I arithmetic skills. Great job, S!
S’s proof of her identity, $\displaystyle cos(\theta) = 4cos^3 \left( \frac{\theta }{3} \right) - 3cos \left( \frac{\theta }{3} \right)$ :
## Trig Identities with a Purpose
Yesterday, I was thinking about some changes I could introduce to a unit on polar functions. Realizing that almost all of the polar functions traditionally explored in precalculus courses have graphs that are complete over the interval $0\le\theta\le 2\pi$, I wondered if there were any interesting curves that took more than $2\pi$ units to graph.
My first attempt was $r=cos\left(\frac{\theta}{2}\right)$ which produced something like a merged double limaçon with loops over its $4\pi$ period.
Trying for more of the same, I graphed $r=cos\left(\frac{\theta}{3}\right)$ guessing (without really thinking about it) that I’d get more loops. I didn’t get what I expected at all.
Wow! That looks exactly like the image of a standard limaçon with a loop under a translation left of 0.5 units.
Further exploration confirms that $r=cos\left(\frac{\theta}{3}\right)$ completes its graph in $3\pi$ units while $r=\frac{1}{2}+cos\left(\theta\right)$ requires $2\pi$ units.
As you know, in mathematics, it is never enough to claim things look the same; proof is required. The acute challenge in this case is that two polar curves (based on angle rotations) appear to be separated by a horizontal translation (a rectangular displacement). I’m not aware of any clean, general way to apply a rectangular transformation to a polar graph or a rotational transformation to a Cartesian graph. But what I can do is rewrite the polar equations into a parametric form and translate from there.
For $0\le\theta\le 3\pi$ , $r=cos\left(\frac{\theta}{3}\right)$ becomes $\begin{array}{lcl} x_1 &= &cos\left(\frac{\theta}{3}\right)\cdot cos\left (\theta\right) \\ y_1 &= &cos\left(\frac{\theta}{3}\right)\cdot sin\left (\theta\right) \end{array}$ . Sliding this $\frac{1}{2}$ a unit to the right makes the parametric equations $\begin{array}{lcl} x_2 &= &\frac{1}{2}+cos\left(\frac{\theta}{3}\right)\cdot cos\left (\theta\right) \\ y_2 &= &cos\left(\frac{\theta}{3}\right)\cdot sin\left (\theta\right) \end{array}$ .
This should align with the standard limaçon, $r=\frac{1}{2}+cos\left(\theta\right)$ , whose parametric equations for $0\le\theta\le 2\pi$ are $\begin{array}{lcl} x_3 &= &\left(\frac{1}{2}+cos\left(\theta\right)\right)\cdot cos\left (\theta\right) \\ y_3 &= &\left(\frac{1}{2}+cos\left(\theta\right)\right)\cdot sin\left (\theta\right) \end{array}$ .
The only problem that remains for comparing $(x_2,y_2)$ and $(x_3,y_3)$ is that their domains are different, but a parameter shift can handle that.
If $0\le\beta\le 3\pi$ , then $(x_2,y_2)$ becomes $\begin{array}{lcl} x_4 &= &\frac{1}{2}+cos\left(\frac{\beta}{3}\right)\cdot cos\left (\beta\right) \\ y_4 &= &cos\left(\frac{\beta}{3}\right)\cdot sin\left (\beta\right) \end{array}$ and $(x_3,y_3)$ becomes $\begin{array}{lcl} x_5 &= &\left(\frac{1}{2}+cos\left(\frac{2\beta}{3}\right)\right)\cdot cos\left (\frac{2\beta}{3}\right) \\ y_5 &= &\left(\frac{1}{2}+cos\left(\frac{2\beta}{3}\right)\right)\cdot sin\left (\frac{2\beta}{3}\right) \end{array}$ .
Now that the translation has been applied and both functions operate over the same domain, the two functions must be identical iff $x_4 = x_5$ and $y_4 = y_5$ . It’s time to prove those trig identities!
Before blindly manipulating the equations, I take some time to develop some strategy. I notice that the $(x_5, y_5)$ equations contain only one type of angle–double angles of the form $2\cdot\frac{\beta}{3}$ –while the $(x_4, y_4)$ equations contain angles of two different types, $\beta$ and $\frac{\beta}{3}$ . It is generally easier to work with a single type of angle, so my strategy is going to be to turn everything into trig functions of double angles of the form $2\cdot\frac{\beta}{3}$ .
$\displaystyle \begin{array}{lcl} x_4 &= &\frac{1}{2}+cos\left(\frac{\beta}{3}\right)\cdot cos\left (\beta\right) \\ &= &\frac{1}{2}+cos\left(\frac{\beta}{3}\right)\cdot cos\left (\frac{\beta}{3}+\frac{2\beta}{3} \right) \\ &= &\frac{1}{2}+cos\left(\frac{\beta}{3}\right)\cdot\left( cos\left(\frac{\beta}{3}\right) cos\left(\frac{2\beta}{3}\right)-sin\left(\frac{\beta}{3}\right) sin\left(\frac{2\beta}{3}\right)\right) \\ &= &\frac{1}{2}+\left[cos^2\left(\frac{\beta}{3}\right)\right] cos\left(\frac{2\beta}{3}\right)-\frac{1}{2}\cdot 2cos\left(\frac{\beta}{3}\right) sin\left(\frac{\beta}{3}\right) sin\left(\frac{2\beta}{3}\right) \\ &= &\frac{1}{2}+\left[\frac{1+cos\left(2\frac{\beta}{3}\right)}{2}\right] cos\left(\frac{2\beta}{3}\right)-\frac{1}{2}\cdot sin^2\left(\frac{2\beta}{3}\right) \\ &= &\frac{1}{2}+\frac{1}{2}cos\left(\frac{2\beta}{3}\right)+\frac{1}{2} cos^2\left(\frac{2\beta}{3}\right)-\frac{1}{2} \left( 1-cos^2\left(\frac{2\beta}{3}\right)\right) \\ &= & \frac{1}{2}cos\left(\frac{2\beta}{3}\right) + cos^2\left(\frac{2\beta}{3}\right) \\ &= & \left(\frac{1}{2}+cos\left(\frac{2\beta}{3}\right)\right)\cdot cos\left(\frac{2\beta}{3}\right) = x_5 \end{array}$
Proving that the x expressions are equivalent. Now for the ys
$\displaystyle \begin{array}{lcl} y_4 &= & cos\left(\frac{\beta}{3}\right)\cdot sin\left(\beta\right) \\ &= & cos\left(\frac{\beta}{3}\right)\cdot sin\left(\frac{\beta}{3}+\frac{2\beta}{3} \right) \\ &= & cos\left(\frac{\beta}{3}\right)\cdot\left( sin\left(\frac{\beta}{3}\right) cos\left(\frac{2\beta}{3}\right)+cos\left(\frac{\beta}{3}\right) sin\left(\frac{2\beta}{3}\right)\right) \\ &= & \frac{1}{2}\cdot 2cos\left(\frac{\beta}{3}\right) sin\left(\frac{\beta}{3}\right) cos\left(\frac{2\beta}{3}\right)+\left[cos^2 \left(\frac{\beta}{3}\right)\right] sin\left(\frac{2\beta}{3}\right) \\ &= & \frac{1}{2}sin\left(2\frac{\beta}{3}\right) cos\left(\frac{2\beta}{3}\right)+\left[\frac{1+cos \left(2\frac{\beta}{3}\right)}{2}\right] sin\left(\frac{2\beta}{3}\right) \\ &= & \left(\frac{1}{2}+cos\left(\frac{2\beta}{3}\right)\right)\cdot sin\left (\frac{2\beta}{3}\right) = y_5 \end{array}$
Therefore the graph of $r=cos\left(\frac{\theta}{3}\right)$ is exactly the graph of $r=\frac{1}{2}+cos\left(\theta\right)$ slid $\frac{1}{2}$ unit left. Nice.
If there are any students reading this, know that it took a few iterations to come up with the versions of the identities proved above. Remember that published mathematics is almost always cleaner and more concise than the effort it took to create it. One of the early steps I took used the substitution $\gamma =\frac{\beta}{3}$ to clean up the appearance of the algebra. In the final proof, I decided that the 2 extra lines of proof to substitute in and then back out were not needed. I also meandered down a couple unnecessarily long paths that I was able to trim in the proof I presented above.
Despite these changes, my proof still feels cumbersome and inelegant to me. From one perspective–Who cares? I proved what I set out to prove. On the other hand, I’d love to know if someone has a more elegant way to establish this connection. There is always room to learn more. Commentary welcome.
In the end, it’s nice to know these two polar curves are identical. It pays to keep one’s eyes eternally open for unexpected connections!
|
{}
|
# label arrow at midpoint in tikz
In tikzcd I know of the 'description' qualifier which places the node with a white background on top of and in the middle of the arrow. How can this best be achieved in tikz?
midway and above
\documentclass[margin=2pt]{standalone}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
\draw[->](0,0)--(2,0) node[midway,above]{label}; %if you want to enforce a white background add fill=white to the node options
\end{tikzpicture}
\end{document}
Second try:
\documentclass[margin=2pt]{standalone}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
\draw[->](0,0)--(2,0) node[midway,fill=white]{label}; %for better vertical alignment you can add text height=.5em to the node options or a \strut inside the node text
\end{tikzpicture}
\end{document}
• Thanks, but I want the label to be on the arrow not above it. e.g. ---label--->. – Alex Oct 10 '14 at 15:29
• Ah great. I actually tried that but I had 'auto' in tikzset which was the problem I think. – Alex Oct 10 '14 at 15:40
• yup, it was auto that prevented it to work. – Nico Boni Oct 10 '14 at 15:43
|
{}
|
# Optimization and Rent
The manager of a large apartment complex knows from experience that 110 units will be occupied if the rent is 342 dollars per month. A market survey suggests that, on the average, one additional unit will remain vacant for each 9 dollar increase in rent. Similarly, one additional unit will be occupied for each 9 dollar decrease in rent. What rent should the manager charge to maximize revenue? I know the rate of change for people relative to rent, but how do I express it mathematically? How do I solve the equations with symbols.I would multiply by the number of increments.
If I was programming, this would be my pseudocode: Prompt rent, people Revenue_0 = rent * people; dp/dr = -1/9; for n = 0, n = (rent+9), n++ dR/dr = (rent*(dp/dr*n))+Revenue_0= 0 revmax = Revenue @ DR/dr = 0 display ("Revmax =",revmax) I am trying to find the maximum value for revenue, but don't know how to include the number of times the rent is increased by $9 into the formula. Is my reasoning correct, or am I mistaken? ## 2 Answers Let U(r) be the units occupied and r be the rent. Translate your conditions into symbols: you know for every 9 dollars r is increased, 1 more unit will be vacant, and for every 9 dollars it is decreased, one more will be occupied. So if you increase rent 9k dollars from a baseline of$r_0$, k more units will be vacant. So in symbols:$U(r_0+9k)=U(r_0)-k$and$U(342)=110$. Let$r_0=342$and$r_0+9k=r$(rent). Solve this second equation for$k$:$k=\frac{r-342}{9}$. You then have$U(r)=110-\frac{r-342}{9}$. Revenue is Price$\times$Quantity, or$rU(r)$. Let$R(r)=revenue =r\cdot U(r)\$. Now use the tools of calculus to optimize.
Let x be 0,1,2,3,... Then 9 dollars increase of rent leaving one extra apartment empty, means that profit = (342+9x)(110-x) Working out gives a parabola that opens down. The vertex is the maximum and that is what you are looking for. Use -b/2a from High school. No calculus needed!
|
{}
|
Float - Maple Help
Software Floating-point Numbers and Their Constructors
Calling Sequence Float(M, E) SFloat(M, E) x.yen x.yEn
Parameters
M - expression E - expression x - (optional) integer constant y - (optional) unsigned integer constant n - (optional) integer constant
Description
• An arbitrary-precision software floating-point number (an object of type sfloat) is represented internally in Maple by a pair of integers (the mantissa M and the exponent E).
• The Float(M, E) command can be used to construct the floating-point number M * 10^E.
If the mantissa parameter is of type imaginary, Float(M, E) returns I * Float(Im(M), E).
If the mantissa is of type nonreal, Float(M, E) returns Float(Re(M), E) + I * Float(Im(M), E).
The SFloat(M, E) command is equivalent to the Float(M, E) function.
In Maple, a software floating-point number (see type/sfloat) and a general floating-point number (see type/float) are considered to be the same object. Maple also has hardware floating-point numbers, of type hfloat (see type/hfloat), which can be constructed using the HFloat constructor.
• The maximum number of digits in the mantissa of a software float, and the maximum and minimum allowable exponents, can be obtained from the Maple_floats routine.
• Software floating-point numbers can also be created by entering x.yEn or x.yen, where n is the integer exponent. All three parameters are optional in the calling sequence. If y is omitted in the calling sequence, then the decimal point must also be omitted (for example, 1e0 not 1.e0).
• To obtain the mantissa and exponent fields of a software float, use SFloatMantissa and SFloatExponent, respectively.
• The presence of a floating-point number in an expression generally implies that the computation will use floating-point evaluation. The floating-point evaluator, evalf, can be used to force computation to take place in the floating-point domain.
• The number of digits carried in the mantissa for floating-point arithmetic is determined by the Maple environment variable Digits (default is 10).
• Maple includes a variety of numeric functions to use with floating-point numbers.
Notes regarding infinity
The quantity Float(infinity) represents a floating-point infinity. This value is used to indicate a floating-point value that is too large to be otherwise represented. It does not necessarily represent the mathematical concept of infinity.
Float(infinity) can be returned by a function or an operation when the input operands are such that the indicated function or operation will overflow (that is, produce a value that cannot be represented in the $\mathrm{Float}\left(a,b\right)$ format).
Float(infinity) can be either or both components of a complex number (for example, Float(infinity) + 3.7*I, 0. + Float(infinity)*I). By convention, Maple treats all complex numbers both of whose components are Float(infinity) as the single point complex infinity. This is a convention only, but you (and in your programs) should be cautious about relying on the sign of the real or imaginary part of such an object. See type/cx_infinity.
Notes regarding undefined
The quantity Float(undefined) represents a non-numeric object in the floating-point system. This value can be returned by a function or operation if the input operands are not in the domain of the function or operand.
Note: Float(undefined) values always compare as equal. You can also use type(expr, undefined) to check for an undefined value.
You can tag a Float(undefined) object with the notation Float(n,undefined), where n is a non-zero integer. Whenever possible, Maple preserves this object, in the sense that if it is passed in as an operand to a function or operation, Maple tries to return the same value if it makes sense to do so. In this way, it is possible to perform some retrospective analysis to determine precisely when the object first appeared.
Float(undefined) can be either or both components of a complex number (for example, Float(undefined) + 1.*I, Float(infinity) + Float(undefined)*I. The type undefined recognizes such an object.
Notes regarding zero
In its floating-point format, 0 has a sign. The sign of 0 is preserved by arithmetic operations whenever possible, according to the standard rules of algebra. Operations and functions can use the sign of 0 to distinguish qualitative information (for example, branch cut closure), but not quantitative information (-0.0 < +0.0 returns false).
It is possible that the result of a computation is mathematically 0, but the sign of that result cannot be established from the application of the standard rules of arithmetic. The simplest such example is x - x. In such cases, Maple uses the following convention.
The result of an indefinitely signed computation whose mathematical value is '0' is set to '+0', unless the rounding mode is set to '-infinity', in which case it is set to '-0'.
This convention is implemented by the Default0 function.
Corresponding to the convention regarding complex infinities, as described above, Maple treats all complex numbers, both of whose components are floating-point 0.'s, as the same. Again, this is a convention, but code should in general not rely on the sign of the real or imaginary part of $0.+0.I$, $0.-0.I$, etc.
• The Float constructor is called during parsing of all floating-point numbers and so can be overloaded by creating a module like this: M := module() export Float; Float := proc(m,e) m*ten^e; end; end; The statement, use M in 1.234 end; will then return 123*ten^(-2);
Examples
The Float command will only be seen when this page is viewed in 1-D Math.
> $2.3$
${2.3}$ (1)
> $2.$
${2.}$ (2)
> $-0.3$
${-0.3}$ (3)
> $x≔-23000.$
${x}{≔}{-23000.}$ (4)
> $1.2×{10}^{6}x$
${-2.76}{×}{{10}}^{{10}}$ (5)
> $\mathrm{Default0}\left(\right)$
${0.}$ (6)
> $\mathrm{Rounding}≔-\mathrm{\infty }$
${\mathrm{Rounding}}{≔}{-}{\mathrm{\infty }}$ (7)
> $2.-2.$
${-0.}$ (8)
> $\mathrm{Default0}\left(\right)$
${-0.}$ (9)
> $\mathrm{Float}\left(23,-45\right)$
${2.3}{×}{{10}}^{{-44}}$ (10)
> $\mathrm{SFloat}\left(23,-45\right)$
${2.3}{×}{{10}}^{{-44}}$ (11)
> $0.002$
${0.002}$ (12)
> $\mathrm{interface}\left(\mathrm{prettyprint}=0\right)$
${1}$ (13)
> $2.3$
${2.3}$ (14)
> $y≔-23000.$
${y}{≔}{-23000.}$ (15)
> $120000.y$
${-2.76}{×}{{10}}^{{9}}$ (16)
> $\mathrm{Float}\left(23,-45\right)$
${2.3}{×}{{10}}^{{-44}}$ (17)
Here is an example of float overflow:
> $\mathrm{exp}\left(1.×{10}^{100}\right)$
${Float}{}\left({\mathrm{\infty }}\right)$ (18)
|
{}
|
# Getting URLs from Safari
This post by David Sparks from a couple of days ago is quintessential David. It provides a useful little utility, is easy to understand, and includes a video that shows why you’d want to use it.
The utility is a Keyboard Maestro macro that gets the URL of the active tab in Safari, puts it on the clipboard, and then pastes it into whatever text you happen to be working on. I’ve been using a utility similar to it for almost a decade, and I can’t tell you how much time it’s saved me. You may think it’s no big deal to do “by hand” what this macro automates, and if you’re the kind of person who almost never sends links via Twitter or Facebook or texting, you might be right. But if you do much web communication, you’ll want to use David’s macro. Or something similar to it.
As it happens, the utility I currently use for this is also a Keyboard Maestro macro, but it has a long and convoluted history. It started out as a Python script (for absolutely no good reason, as most of what it did was run AppleScript) that was triggered by Quicksilver. Remember Quicksilver? Those were the days… Then I converted it to a TypeIt4Me snippet that ran a short AppleScript. Then I switched it to a TextExpander snippet that ran basically that same AppleScript. Finally, it became the Keyboard Maestro macro I use today:
Throughout this journey, the key feature has always been this single line of AppleScript:
applescript:
tell application "Safari" to get URL of front document
If you’re a Chrome user, you can do the same thing with this slightly longer line:1
applescript:
tell application "Google Chrome" to tell front window to get URL of tab (active tab index)
Update 04/2/2017 9:16 PM
Rob Mathers reminds me of something I always forget: Keyboard Maestro’s Text Tokens.
@drdrang %SafariURL% and %ChromeURL% can be helpful (but KM-specific)
Rob Mathers (@robmathers) Apr 2 2017 9:04 PM
Why don’t I remember these exist? I think it’s because they’re sort of like variables, and I tend to look for them in the Variables popup list. But thinking this way is exactly backward. In Keyboard Maestro, tokens are a fundamental feature, and variables are user-defined items that use a special form of the token syntax.
Maybe I’ll remember that now.
Probably not.
As you can see, the macro is triggered by typing ;furl. This snippet-based approach is a holdover from when it was run via TypeIt4Me and TextExpander. Although I could’ve changed it to a hotkey combination when I brought it over to Keyboard Maestro, I decided I preferred a typed string trigger. After all, the most common use is for the URL to be put into a text field where I’m typing.
Although my macro has only one action and David’s has five, his is still easier to understand because it requires no programming language syntax, which tends to scare people off. Anyone who’s ever toggled back and forth between Messages and Safari to send a link to a friend can see what David’s macro is doing. Mine, however, does have two small advantages that will keep me using it instead of switching to David’s:
1. Mine doesn’t actually activate Safari, so the window I’m typing in always stays in front. David’s macro momentarily brings Safari to the front, which can be disconcerting for someone like me who tends to have messy, overlapping windows. You may not notice the momentary activation of Safari in David’s video because his windows are neatly arranged with no overlap.
2. Mine doesn’t change the clipboard. Whatever I had on the clipboard before running the macro is still there afterward. Because David’s macro uses the clipboard to transport the URL out of Safari, whatever was on it before running the macro is gone. This probably isn’t a big deal for David’s audience, as I imagine most of them—including me—use a clipboard manager to hold onto a series of older clipboard contents. Still, I prefer to keep my clipboard history as clean as my window layout is messy.
In addition to this macro that pulls out the URL of the frontmost tab, I also have a series of macros that get the URLs of particular tabs. Here’s the one for the third tab (they’re numbered from left to right):
I have six of these macros, all of which are triggered by typing something like ;3url. They were easy to write, but I’ve found over the years that they just aren’t as easy to use as I expected them to be. They’re useful only if I can see the Safari tab bar while I’m working in the other application, and if I have lots of tabs open, it’s hard to know which number I want. My workaround has been to ⌘-click on the tabs in the background Safari window. This changes the active tab without bringing Safari to the front and allows me to use the ;furl macro almost exclusively.
As I thought about writing this post, the difficulty of using the numbered tab macros began to bother me. There really should be a better way to get the URL of any open tab. So I built a new macro that puts up a window with a list of all the tabs in the front Safari window,2 from which I can select the tab whose URL I want.
The window shows the names of the tabs, but the macro returns the URLs. The active tab is selected by default when the window appears. I use ;surl as the typed trigger to run the macro, which is called Some URL:
Like the other macros, it has just one action, an AppleScript. Here’s the AppleScript:
applescript:
1: tell application "Safari"
2: -- Initialize
3: set tabNames to {}
4: set tabURLs to {}
5: set frontName to name of front document
6:
7: -- Collect the tab names and URLs from the top Safari window
8: set topWindow to window 1
9: set topTabs to every tab of topWindow
10: repeat with t in topTabs
11: set end of tabNames to name of t
12: set end of tabURLs to URL of t
13: end repeat
14: end tell
15:
16: -- Display a list of names for the user to choose from
17: tell application "System Events"
18: set activeApp to name of first application process whose frontmost is true
19: activate
20: choose from list tabNames with title "Safari Tabs" default items frontName
21: if result is not false then
22: set nameChoice to item 1 of result
23: else
24: return
25: end if
26: end tell
27:
28: -- Return the URL of the selected tab
29: tell application activeApp to activate
30: repeat with t from 1 to the count of tabNames
31: if item t of tabNames is nameChoice then return item t of tabURLs
32: end repeat
Lines 1–14 collect from Safari the names and URLs of all the tabs in the front window, putting them into two parallel lists, tabNames and tabURLs.3 Line 5 gets the name of the active tab and puts it into frontName.
Line 18 gets the name of the active application, the one I’m typing in when I want to insert a URL. We’ll need this later. Line 19 activates System Events, a trick to ensure that the window with the list of tabs will be active. Without this line, the window created in Line 20 will appear in front of all the others, but won’t be active until you click in it, which is both weird and frustrating.
Line 20 puts up the window with the list of choices. If the user cancels, the result is false; otherwise, the result contains the selected tab name. Line 21 tests for this, either putting the tab name into nameChoice (Line 22) or exiting the script (Line 24).
Finally, Line 29 reactivates the application that was active when the macro was invoked (that’s what we saved in Line 18), and Lines 30–32 pull out the URL that corresponds to the chosen tab name. This is what gets inserted to replace ;surl.
So I’ve gone from a one-line script that might put people off to a 32-line script that definitely will. This is why David is far more popular.
1. There may be a shorter way to get the URL from Chrome. I haven’t looked into its AppleScript library in years. ↩︎
2. Which for me is almost always the only Safari window. ↩︎
3. AppleScript has a record type, which is like a Python dictionary or a Perl hash. I thought about using a record to store the names and URLs, with the names as the keys and the URLs as the values. Unfortunately, AppleScript records aren’t nearly as useful as dictionaries and hashes. Parallel lists were just easier. ↩︎
|
{}
|
# James Gow (scholar)
scholar
James Gow (1854-1923) was an English scholar, educator, historian, and author, widely recognized for A Short History of Greek Mathematics. The history drew highly upon the work of Moritz Cantor, as well as upon pioneering works of Carl Anton Bretschneider, Hermann Hankel, and George Johnston Allman, but included material, e.g., gematria, not discussed by contemporary historians of mathematics.
## Quotes
### A Short History of Greek Mathematics (1884)
• The history of Greek mathematics is, for the most part, only the history of such mathematics as are learnt daily in all our public schools. ...If it was not wanted, as it ought to have been, by our classical professors and our mathematicians, it would have served at any rate to quicken, with some human interest, the melancholy labours of our schoolboys.
• Preface
• The history of Alexandrian mathematics begins with the Elements of Euclid and closes with the Algebra of Diophantus, both of which are founded on the discoveries of several preceding centuries.
• Preface
• A student of history, who cares little for Greek or mathematics in particular, but who likes to watch how things grow, will be able to extract from these pages a notion of the whole history of mathematical science down to Newton's time...
• Preface
• Probably Greek logistic, or calculation, extended to more difficult operations... and... probably Greek arithmetic, or theory of numbers, owed much more to induction than is permitted to appear by its first and chief professors.
• Some fundamental unity was surely to be discerned either in the matter or the structure of things. The Ionic philosophers chose the former field: Pythagoras took the latter. ...The geometry which he had learnt in Egypt was merely practical. ...It was natural to nascent philosophy to draw, by false analogies, and the use of a brief and deceptive vocabulary, enormous conclusions from a very few observed facts: and it is not surprising if Pythagoras, having learnt in Egypt that number was essential to the exact description of forms and of the relations of forms, concluded that number was the cause of form and so of every other quality. Number, he inferred, is quantity and quantity is form and form is quality.
Footnote2 Primitive men, on seeing a new thing, look out especially for some resemblance in it to a known thing, so that they may call both by the same name. This developes a habit of pressing small and partial analogies. It also causes many meanings to be at attached to the same word. Hasty and confused theories are the inevitable result.
• It was Pythagoras who discovered that the 5th and the octave of a note could be produced on the same string by stopping at 2⁄3 and ½ of its length respectively. Harmony therefore depends on a numerical proportion. It was this discovery, according to Hankel, which led Pythagoras to his philosophy of number. It is probable at least that the name harmonical proportion was due to it, since
1:½ :: (1-½):(2⁄3-½).
Iamblichus says that this proportion was called ύπeναντία originally and that Archytas and Hippasus first called it harmonic. Nicomachus gives another reason for the name, viz. that a cube being of 3 equal dimensions, was the pattern άρμονία: and having 12 edges, 8 corners, 6 faces, it gave its name to harmonic proportion, since:
12:6 :: 12-8:8-6
• Footnote, citing Vide Cantor, Vorles [Vorlesüngen über Geschichte der Mathematik ?] p 152. Nesselmann p. 214 n. Hankel. p. 105 sqq.
• The solution of the higher indeterminates depends almost entirely on very favourable numerical conditions and his methods are defective. But the extraordinary ability of Diophantus appears rather in the other department of his art, namely the ingenuity with which he reduces every problem to an equation which he is competent to solve.
• Diophantus shows great Adroitness in selecting the unkown, especially with a view to avoiding an adfected quadratic. ...The most common and characteristic of Diophantus' methods is his use of tentative assumptions which is applied in nearly every problem of the later books. It consists in assigning to the unknown a preliminary value which satisfies one or two only of the necessary conditions, in order that, from its failure to satisfy the remaining conditions, the operator may perceive what exactly is required for that purpose. ...a third characteristic of Diophantus [is] ...the use of the symbol for the unknown in different senses. ...The use of tentative assumptions leads again to another device which may be called... the method of limits. This may best be illustrated by a particular example. If Diophantus wishes to find a square lying between 10 and 11, he multiplies these numbers by successive squares till a square lies between the products. Thus between 40 and 44, 90 and 99 no square lies, but between 160 and 176 there lies the square 169. Hence ${\displaystyle x^{2}={\tfrac {169}{16}}}$ will lie between the proposed limits.
• Sometimes... Diophantus solves a problem wholly or in part by synthesis. ...Although ...Diophantus does not treat his problems generally and is usually content with finding any particular numbers which happen to satisfy the conditions of his problems, ...he does occasionally attempt such general solutions as were possible to him. But these solutions are not often exhaustive because he had no symbol for a general coefficient.
• Though the defects in Diophantus' proofs are in general due to the limitation of his symbolism, it is not so always. Very frequently indeed Diophantus introduces into a solution arbitrary conditions and determinations which are not in the problem. Of such "fudged" solutions, as a schoolboy would call them, two particular kinds are very frequent. Sometimes an unknown is assumed at a determinate value... Sometimes a new condition is introduced.
• The Arithmetica... is deficient, sometimes pardonably, sometimes without excuse, in generalization. The book of Porismata, to which Diophantus sometimes refers, seems on the other hand to have been entirely devoted to the discussion of general properties of numbers. It is three times expressly quoted in the Arithmetica... Of all these propositions he says... 'we find it in the Porisms'; but he cites also a great many similar propositions without expressly referring to the Porisms. These latter citations fall into two classes, the first of which contains mere identities, such as the algebraical equivalents of the theorems in Euclid II. ...The other class contains general propositions concerning the resolution of numbers into the sum of two, three or four squares. ...It will be seen that all these propositions are of the general form which ought to have been but is not adopted in the Arithmetica. We are therefore led to the conclusion that the Porismata, like the pamphlet on Polygonal Numbers, was a synthetic and not an analytic treatise. It is open, however, to anyone to maintain the contrary, since no proof of any porism is now extant.
• With Diophantus the history of Greek arithmetic comes to an end. No original work, that we know of, was done afterwards.
• The oldest definition of Analysis as opposed to Synthesis is that appended to Euclid XIII. 5. It was possibly framed by Eudoxus. It states that "Analysis is the obtaining of the thing sought by assuming it and so reasoning up to an admitted truth: synthesis is the obtaining of the thing sought by reasoning up to the inference and proof of it." In other words, the synthetic proof proceeds by shewing that certain admitted truths involve the proposed new truth: the analytic proof proceeds by shewing that the proposed new truth involves certain admitted truths.
• To give here an elaborate account of Pappus would be to create a false impression. His work is only the last convulsive effort of Greek geometry which was now nearly dead and was never effectually revived. It is not so with Ptolemy or Diophantus. The trigonometry of the former is the foundation of a new study which was handed on to other nations indeed but which has thenceforth a continuous history of progress. Diophantus also represents the outbreak of a movement which probably was not Greek in its origin, and which the Greek genius long resisted, but which was especially adapted to the tastes of the people who, after the extinction of Greek schools, received their heritage and kept their memory green. But no Indian or Arab ever studied Pappus or cared in the least for his style or his matter. When geometry came once more up to his level, the invention of analytical methods gave it a sudden push which sent it far beyond him and he was out of date at the very moment when he seemed to be taking a new lease of life.
### A Companion to School Classics (1888)
• The bones... are given in a heap to a student who has no idea of a skeleton. Here is the defect which I am trying partly to supply...
• Preface
• My aim is... to place before a young student a nucleus of well-ordered knowledge, to which he is to add intelligent notes and illustrations from his daily reading.
• Preface
• It happened fortunately that during this period of turmoil the guidance of the Christian Church, the one powerful and permanent institution, was chiefly in the hands of the splendid order of St. Benedict. This saint... seeing that idleness was the besetting danger of monastic establishments, founded at Monte Cassino... a model abbey, in which industry was the daily rule. Among other employments, reading and writing were approved as powerful agents in distracting the mind from unholy thoughts, and in Benedictine monasteries the mechanical exercise of copying mss. became one of the regular occupations.
• The population of Athens and Attica consisted of slaves, resident aliens, and citizens. Slaves were excessively numerous. At a census taken in B.C. 309, the number of slaves was returned at 400,000, and it does not seem likely that there were fewer at any time during the classical period. They were mostly Lydians, Phrygians, Thracians, and Scythians, imported from the coasts of the Propontis. ...They were employed for domestic purposes, or were let out for hire in gangs as labourers, or were allowed to work by themselves paying a yearly royalty to their masters.
...hardly any Athenian citizen can have been without two or three. The family of Aeschines (consisting of 6 persons) was considered very poor because it possessed only 7 slaves. On the other hand, Plutarch says that Nicias let out 1,000 and Hipponicus 600 slaves to work the gold mines in Thrace. The state possessed some slaves of its own, who were employed chiefly as policemen and clerks.
Slaves enjoyed considerable liberties in Athens, and had some rights, even against their masters. They did not serve as soldiers, or sailors, except when the city was in great straits, as at the battle of Arginussae... The worst prospect in store for them was that their masters might be engaged in a lawsuit, for the evidence of a slave (except in a few cases) was not admitted in a court of justice unless he had been put to torture.
Slaves were sometimes freed by their masters, with some sort of public ceremony, or (for great services) by the state which paid their value to their masters.
• Each of these smaller corporations... to which an Athenian citizen belonged, had... business of its own—money to spend, officers to appoint, rules to make—very similar to that which the state transacted on a larger scale. And it is not to be supposed that Athenians were at all ashamed to take part in such minor business, as English gentlemen are to sit on a vestry or a town council. On the contrary, a large part of the population left their private affairs for slaves to manage, and devoted themselves entirely to their public duties.
• Every official was required to undergo, before assuming office... approval before a law court. This was an inquiry into his conduct, his exactness in paying taxes, etc., and it sometimes happened that he was rejected... Every official was also required to take an oath of allegiance.
• Officials could be removed during their year of office by vote of the ecclesia, and periodical opportunities were given for raising complaints...
• Apart from the rites and worship peculiar to each family, gens, curia, and tribe, the Romans recognised a vast number of gods and goddesses whose worship was the concern of the whole state. The necessary ceremonies were, in many cases, placed in the charge of sodalicia or clubs... which elected their own members. But the worship of all deities not otherwise provided for was superintended by the pontifices.
The College of Pontifices is said to have been founded by Numa, and was in regal times, presided over by the king himself. But when kings were abolished, their religious functions were divided between two officers, the Pontifex Maximus and the Rex Sacrorum or Sacrificulus. The latter, though he was sometimes treated as the chief priest, in reality only offered some of the sacrifices which the king formerly offered... The general supervision of the state religion belonged to the Pontifex Maximus.
The Pontifex Maximus lived in the Regia, the ancient palace.
|
{}
|
For January, sales revenue is $700,000; sales commissions are 5% of sales; the sales manager's salary is$96,000; advertising expenses are $90,000; shipping expenses total 2% of sales; and miscellaneous selling expenses are$2,100 plus 1/2 of 1% of sales. Total selling expenses for the month of January are:
2. Originally Posted by Jluse7
For January, sales revenue is $700,000; sales commissions are 5% of sales; [700000 * .05] the sales manager's salary is$96,000; [96,000]
advertising expenses are $90,000; [90,000] shipping expenses total 2% of sales; [700000 * .02] miscellaneous expenses are$2,100 plus 1/2 of 1% of sales. [2100 + 700000 * .005]
Total selling expenses for the month of January are: [total of above]
Where were you having problems: seems quite straightforward...
3. Okay and I got 240,600 is that correct?
4. Originally Posted by Wilmer
Where were you having problems: seems quite straightforward...
|
{}
|
Reward calculations
Remuneration
1. Reward Program:
1. The amount of remuneration is the sum of the base rate and the multiplier, which depends on the complexity of the article and the participation of the author in comments and suggestions regarding documentation improvements.
Minimum reward: equivalent to $8 paid in Ever tokens. (1 hour work) Maximum reward: equivalent to$1200 paid in Ever tokens (work week - 5 days, $30 hourly pay rate) Multipliers: • Pages with general information that do not require strong knowledge of technical aspects (FAQ, Learn section, Contribute section) - x1, hourly pay rate$8
• Pages with technical information in the format of guides or references (section Build and Validate) - x2.25, hourly pay rate $18 • Pages with technical information about the Everscale network functionality (pages from sections Standards, Architecture) -×3,75, hourly pay rate$30
2. The amount of rewards is set as follows:
1. Pages from the Learn and Contribute sections:
1. In the case of writing an article that does not require much editing: $40 -$50.
2. In case of improvement of an existing article: $20 -$30.
3. In case of a relevant comment or advice: $8 -$10.
2. Pages from the Build and Validate sections:
1. In the case of writing an article that does not require much editing: $90 -$120.
2. In case of improvement of an existing article: $45 -$60.
3. In case of a relevant comment or advice: $18 -$20.
3. Pages from the Standards and Architecture sections:
1. In the case of writing an article that does not require much editing: $150 -$500.
2. In case of improvement of an existing article: $75 -$250.
3. In case of a relevant comment or advice: $30 -$50.
P.S. Writing a community-accepted standard is reviewed and paid separately.
4. Also, please be informed that in case of contribution in the form of correction of grammatical errors and spelling mistakes as well as active participation in discussions, the size of rewards is reviewed separately.
2. Payment of remuneration
1. Rewards are paid every month in Ever tokens.
2. Payout notifications are posted on Everscale Documentation Developers Chat.
|
{}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.