content
stringlengths 86
994k
| meta
stringlengths 288
619
|
|---|---|
Understanding The Tangent Line: Unveiling The Calculus Connection
Understanding the Tangent Line: Unveiling the Calculus Connection
What is the Difference Between The Tangent Line?
In the world of mathematics, understanding the concept of the tangent line is crucial. It plays a fundamental role in calculus and geometry, yet many students struggle to grasp its true meaning. In
this article, we will delve into the definition and characteristics of the tangent line, exploring its significance in various mathematical contexts. Whether you're a student or an educator, this
article aims to clarify any confusion surrounding the tangent line and provide you with a solid foundation for further mathematical exploration. So, let's dive in and unravel the mysteries of the
tangent line together!
Definition of the Tangent Line
The tangent line is a fundamental concept in calculus that relates to the slope of a curve at a specific point. It can be defined as a straight line that touches the curve at that point, without
crossing it. The tangent line represents the instantaneous rate of change of the curve at that particular point.
Important concepts: tangent line, curve, slope, instantaneous rate of change.
Differentiation and the Tangent Line
Differentiation is a mathematical process used to find the derivative of a function. Derivatives provide information about the rate of change of a function at any given point. The derivative of a
function gives the slope of the tangent line at each point on the curve.
Important concepts: differentiation, derivative, function, rate of change, slope.
Tangent Line and Approximation
One practical application of the tangent line is its use in approximating values of a function near a specific point. By finding the equation of the tangent line at that point, we can estimate the
value of the function nearby. This approximation becomes more accurate as the distance from the point decreases.
Important concepts: approximation, equation, estimate, value, accuracy.
Tangent Line and Graphical Interpretation
Graphically, the tangent line can help us understand the behavior of a function at a given point. It shows how the function changes locally around that point and provides insights into the shape of
the curve. The steepness or flatness of the tangent line indicates the rate at which the function is increasing or decreasing.
Important concepts: graphical interpretation, behavior, function, shape, rate of change.
frequently asked questions
What is the definition of a tangent line in mathematics?
In mathematics education, the tangent line is defined as a straight line that touches a curve at only one point, without crossing it.
How does the tangent line relate to the concept of slope?
The tangent line is closely related to the concept of slope. The slope of a tangent line to a curve at a certain point is equal to the instantaneous rate of change of the function at that point. In
other words, it represents how steep the curve is at that specific point. Therefore, finding the equation of the tangent line allows us to determine the slope of the curve at any given point.
Can you explain the difference between the tangent line and secant line?
The tangent line is a straight line that touches a curve at only one point. It represents the instantaneous rate of change of the curve at that specific point.
On the other hand, a secant line is a straight line that intersects a curve at two points. It represents the average rate of change of the curve between those two points.
In summary, the tangent line represents the local behavior of a curve at a specific point, while the secant line represents the global behavior of the curve between two points.
In what situations is the tangent line used in calculus?
The tangent line is used in calculus to approximate the behavior of a function at a specific point. It is particularly useful when determining instantaneous rates of change, finding maximum and
minimum values, and solving optimization problems.
How can the tangent line be used to approximate the behavior of a function near a specific point?
The tangent line can be used to approximate the behavior of a function near a specific point by estimating the slope of the function at that point. The tangent line represents the best linear
approximation of the function at that point, providing insight into the local behavior of the function. This approximation can be useful in various mathematical applications, such as optimization or
curve sketching.
In conclusion, understanding the concept of the tangent line is crucial in mathematics education. The tangent line represents the instantaneous rate of change of a function at a specific point. It
differs from other types of lines, such as secant lines, as it touches the graph of the function at only one point. By studying the tangent line, students can gain insights into the behavior of
functions and their derivatives. This knowledge is fundamental in various mathematical fields, including calculus and physics. Therefore, educators should emphasize the significance of the tangent
line in their teaching, ensuring that students grasp its definition and applications.
If you want to know other articles similar to Understanding the Tangent Line: Unveiling the Calculus Connection you can visit the category General Education.
|
{"url":"https://warreninstitute.org/what-is-the-difference-between-the-tangent-line/","timestamp":"2024-11-05T22:04:45Z","content_type":"text/html","content_length":"103731","record_id":"<urn:uuid:a02c7d1c-1ca9-4e39-92d2-48698574b829>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00059.warc.gz"}
|
Emirati (ADX) Property and Casualty Insurance Industry Analysis
Industry Comparison
How does Emirati Property and Casualty Insurance compare with similar industries?
AE Market 1.12%
Financials -0.44%
Insurance 0.027%
Property and Casualty Insurance 0.27%
General Insurance 0.0051%
Insurance Brokers 0%
Life and Health Insurance 0%
Reinsurance 0%
Industry PEThere are no additional sub-industries under this industry.
Forecasted GrowthThere are no additional sub-industries under this industry.
|
{"url":"https://simplywall.st/markets/ae/financials/insurance/property-casualty-insurance","timestamp":"2024-11-11T03:37:14Z","content_type":"text/html","content_length":"419110","record_id":"<urn:uuid:11fdd84d-c9e3-48c2-99d1-e68bb4823571>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00138.warc.gz"}
|
On experimental mathematics
The term “experimental mathematics” refers to a contemporary practice in mathematics, where computers are used to inspire research. This can mean, among others:
• finding examples and counterexamples to definitions or conjectures,
• inspiring new conjectures and ensuring that no small counterexamples exist,
• informing proof strategies by counterexamples.
The last point sounds a bit vague. I hope to clarify it in the next section and then give examples of a strange new phenomenon in the use of the advanced mathematical software tools which we have
Sometimes the software is much smarter than the user and by the speed with which it refutes hand-crafted potential counterexamples, it apparently contains a fitting heuristic in its source code. This
heuristic might translate into a proof of the conjecture — but the software doesn’t tell you what it is!
This is as marvellous an achievement of human engineering as it is frustrating if you can’t extract the heuristic from the source code and learn the missing part of math to prove your conjecture.
Note: I will generally assume that the software works correctly. Certifiability of yes/no answers found by software is certainly related but it is a bigger can of worms than what I want to point out
Positive-definite points on certain varieties of matrices
My colleague Andreas Kretschmer and I have been working on a question asked by Aida Maraj about two weeks ago. I don’t want to go into too many details but it concerns solutions to certain systems of
polynomial equations over the positive-definite matrices.
One thing I like to use computers for is to assess which parts of the conjecture are necessary. That means, when we remove certain assumptions from the conjecture, do we find an easy counterexample?
Such data points about the conjecture tell us which kinds of proof methods are doomed to fail. For example, is positive definiteness necessary? A real symmetric matrix is positive-definite if and
only if all of its principal minors are positive. A relatively natural generalization of this requires that all principal minors are non-zero. This condition is called principal regularity.
The reason for this relaxation, which might not be obvious to the non-algebraist, is that we switch to an easier theory. With positive definiteness, the set of counterexamples to Aida’s conjecture is
a semialgebraic set. These are theoretically tractable on a computer, but we have already ruled out all potential counterexamples which are small enough for these general methods to terminate in any
reasonable amount of time. But with principal regularity, all polynomial constraints are either equations or inequations; there are no inequalities anymore. Hence, we can put the problem into the
algebraically closed complex numbers and the set of counterexamples becomes constructible. Constructible sets tend to be somewhat easier to deal with on a compuer than semialgebraic ones and we can
indeed explore some bigger examples — hoping that among the counterexamples we may find, there is one which transfers back into the real numbers of our original conjecture.
As it turns out, we can find counterexamples to the relaxed conjecture where positive definiteness is replaced by principal regularity. Here is the most impressive counterexample I managed to find.
After a long computation in Macaulay2, we find that the Zariski closure of the constructible set we are interested in, which contains the principally regular counterexamples to the conjecture, is
described by the prime ideal generated by the following polynomials on six variables $x_1, \dots, x_6$:
$x_2 x_4+6 x_1 x_6-x_2 x_6, \\ 28 x_4^2+21 x_1 x_6+21 x_2 x_6+8 x_3 x_6-8 x_4 x_6-48 x_5 x_6+28 x_6^2, \\ 28 x_1 x_4-24 x_3 x_5+38 x_1 x_6+18 x_2 x_6+32 x_3 x_6+3 x_4 x_6-24 x_5 x_6+3 x_6^2, \\ 4 x_3
^2-24 x_3 x_5+21 x_1 x_6+21 x_2 x_6+28 x_3 x_6+3 x_4 x_6-24 x_5 x_6+3 x_6^2, \\ 73794 x_2 x_5-1510376 x_3 x_5-36015 x_4 x_5-82320 x_5^2+1338582 x_1 x_6+1275030 x_2 x_6+1768998 x_3 x_6+179193 x_4
x_6-1824711 x_5 x_6+189987 x_6^2+8873205, \\ 73794 x_1 x_5-1510376 x_3 x_5+37779 x_4 x_5-82320 x_5^2+1264788 x_1 x_6+1348824 x_2 x_6+1832250 x_3 x_6+189735 x_4 x_6-1835253 x_5 x_6+179445 x_6^
2+8873205, \\ 12299 x_3 x_4-536256 x_3 x_5+294 x_4 x_5+169344 x_5^2+421596 x_1 x_6+425010 x_2 x_6+626563 x_3 x_6+72030 x_4 x_6-891114 x_5 x_6+72114 x_6^2+2957735, \\ 12299 x_2 x_3-1510376 x_3
x_5-36015 x_4 x_5-82320 x_5^2+1338582 x_1 x_6+1348824 x_2 x_6+1842792 x_3 x_6+179193 x_4 x_6-1824711 x_5 x_6+189987 x_6^2+8873205, \\ 12299 x_1 x_3-2046632 x_3 x_5+38073 x_4 x_5+87024 x_5^2+1760178
x_1 x_6+1773834 x_2 x_6+2471112 x_3 x_6+261765 x_4 x_6-2726367 x_5 x_6+251559 x_6^2+11830940, \\ 24598 x_2^2+451920 x_3 x_5-86387 x_4 x_5-169344 x_5^2-358344 x_1 x_6-361758 x_2 x_6-543984 x_3
x_6-49189 x_4 x_6+819077 x_5 x_6-73871 x_6^2-2957735, \\ 24598 x_1 x_2-620592 x_3 x_5+86387 x_4 x_5+169344 x_5^2+484848 x_1 x_6+488262 x_2 x_6+712656 x_3 x_6+70273 x_4 x_6-987749 x_5 x_6+94955 x_6^
2+2957735, \\ 344372 x_1^2-10833312 x_3 x_5+1554966 x_4 x_5+3048192 x_5^2+8732535 x_1 x_6+8793987 x_2 x_6+12926200 x_3 x_6+1222746 x_4 x_6-18032490 x_5 x_6+1667022 x_6^2+53239230.$
This might look scary, but Mathematica finds a generic point in this variety in about two seconds on my laptop, which is indeed principally regular. The solution coordinates look something like this,
involving invocations to Mathematica’s Root function, which represents real algebraic numbers exactly by univariate polynomials and an index of the root to pick.
x1 -> -(
1775380433750000000 - 78478332364687500000 #1^2 +
891090422891055078125 #1^4 - 849232777556096718750 #1^6 +
871885597934472187500 #1^8 - 643355496779169825000 #1^10 +
96277889059372440000 #1^12 - 5847789209959296000 #1^14 +
235403285409331200 #1^16 - 6977138838405120 #1^18 +
95569451679744 #1^20 &, 1]) /
+ ...
There are ten such summands in the solution coordinate $x_1$. It looks similar for the other coordinates. Thus our system has a solution which is principally regular. If we replace the principal
regularity constraints by positive definiteness constraints and call FindInstance, then Mathematica knows after five seconds that there exists no solution. So, our search for a counterexample was not
successful: we were able to check a promising example of larger size by relaxing positive definiteness to an algebraic condition, but the counterexamples to the relaxation yielded no counterexample
to the original conjecture, in this case.
This tells the experimental mathematician two things. First, if the conjecture is true, positive definiteness is necessary but the algebraic version of it, principal regularity, does not suffice. The
conjecture cannot be proved by mere algebraic manipulation of the equations alone. Positivity has to be used at some point, because otherwise the conjecture would be true in the principally regular
setting, too.
Second, experience tells that a system of eleven equations of degree 2 in six variables plus positive definiteness constraints are never ever solved in five seconds by general methods like
Positivstellensatz, CAD or Gröbner bases. Mathematica needs to have a heuristic whose preconditions are quick to verify and which leads to an insolvability proof very quickly. And, by the way, the
same pattern should apply to the principally regular system as well. Mathematica must have had a systematic way of finding these insane algebraic numbers shown above.
This gives us hope that the conjecture might be true. The situation presents itself to me as if some mathematician or engineer at Wolfram Inc. figured out a pattern of polynomial systems once which
implies structure on the solution set which can be used to produce a solution quickly or prove that none exists. This pattern “obviously” applies to our system, but we have no idea what it is! We
only know that there is a (proprietary!) software which knows. But of course, it could also be that our example was just not big enough to yield a real counterexample because whatever structure is
exploited here is not always present in the situation of our conjecture. This situation would be easier to judge if we knew what Mathematica saw that made this computation so easy.
Orientability of LUBF-gaussoids
But this is not a nuisance of proprietary software alone. A while ago I computed the orientability of LUBF-gaussoids on ground sets of size up to eight. I didn’t talk about it in the article, because
it was about the symmetry reduction, but the same phenomenon of software seeing things the human doesn’t occurred in the stage that actually decided orientability. At the end of the symmetry
reduction, there were 222 LUBF-gaussoids, which means 222 Boolean formulas for orientability, on average in 546 variables and with 33335 clauses. The venerable MiniSAT solved them in total time of
five seconds. This includes startup time, parsing the formulas and output processing in my shell script — all done sequentially. MiniSAT can obviously tell extremely, extremely quickly which of these
formulas have a solution.
Experience tells that this cannot happen unless the SAT solver contains a fitting heuristic. The good news is that MiniSAT is open source software, so we can look inside. The bad news is that it does
not contain much more than a “metaheuristic”. MiniSAT is a rather clean solver, written for educational purposes and reusability, but it also happens to be really fast. It does not contain a library
of special-case heuristics for formulas of a certain structure, but implements conflict-driven clause learning. According to the MiniSAT paper, the solver explores the space of all assignments to the
variables and tries to deduce, or learn, more information about the formula from (partial) assignments which fail to satisfy it.
Thus, the source code is of no great help in finding why LUBF leads to apparently easy orientability problems. The solver is intelligent enough on its own to discover the reason in realtime by just
trying out some assignments. I, for one, still don’t know the reason. I suppose if I really wanted to, I could inspect the database of learnt clauses and try to infer which special structure the
solver discovered. That’s a plus for free software.
In conclusion, I want to emphasize that, compared to mathematical practice at large, this is a completely new phenomenon brought to us by the extensive collaboration and expertise that goes into
today’s advanced mathematical software.
Addendum (29 Sep 2021): The question I mentioned above turned out to have an affirmative answer for positive-definite matrices (although it is false for principally regular real matrices). See
Theorem 3.23 of The geometry of Gaussian double Markovian distributions.
Addendum (28 Jan 2022): Zach Teitler kindly informs me that the situation is not hopeless all around. The effect I described above also struck Frank Sottile in the late 90s when investigating a
conjecture of Shapiro and Shapiro in real Schubert calculus. The story has it that after setting up one instance of the problem, he did not even have time to leave his seat to get a
computations-coffee. If I understand correctly, it turned out that the problem was benign enough to not lead to the worst-case complexity of Gröbner basis computations and that this fact could be
understood after being discovered experimentally. A good starting point to learn more details seems to be Real Schubert Calculus: Polynomial Systems and a Conjecture of Shapiro and Shapiro.
|
{"url":"https://taboege.de/blog/2021/07/On-experimental-mathematics/","timestamp":"2024-11-04T02:18:05Z","content_type":"text/html","content_length":"187381","record_id":"<urn:uuid:fe421c67-a1df-481e-aebb-3b888227e2f2>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00122.warc.gz"}
|
- Tree Count
National Olympiad in Informatics, China, 2013
Day 1, Problem 2 - Tree Count
We know that a rooted tree can be traversed via depth-first search (DFS) and breadth-first search (BFS) to obtain the DFS and BFS orderings of their vertices. Two different trees can have the same
DFS ordering, and at the same time their BFS orderings can also be the same. For example, the following two trees both have a DFS ordering of 1 2 4 5 3 and a BFS ordering of 1 2 3 4 5.
Given a DFS and BFS ordering, we would like to know the average height of all rooted trees satisfying the condition. For example, if there are K different rooted trees simultaneously possessing the
DFS and BFS orderings, and their heights are respectively h[1], h[2], …, h[K], then you are asked to output the value of (h[1] + h[2] + … + h[K])/K.
Input Format
The first line contains a single positive integer n, representing the number of vertices.
The second line contains n positive integers (each between 1 and n, inclusive), representing the DFS ordering.
The third line contains n positive integers (each between 1 and n, inclusive), representing the BFS ordering.
The input guarantees that at least one tree satisfying the two orderings will exist.
Output Format
Output a single real number, rounded half-up to three places after the decimal point, representing the average height of the trees.
Sample Input
Sample Output
If your output differs from the correct answer by no more than 0.001, then you will receive full marks on the test case. Otherwise you will receive no marks.
20% of the test cases will satisfy n ≤ 10;
40% of the test cases will satisfy n ≤ 100;
85% of the test cases will satisfy n ≤ 2000;
100% of the test cases will satisfy 2 ≤ n ≤ 200000.
If a rooted tree has only one vertex, then its height is 1. Otherwise, it's height is equal to 1 plus the maximum height across all of the subtrees rooted at each children.
For any three vertices a, b, and c in the tree, if a and b are both c's children, then the relative orders of a and b in both the BFS and DFS orderings are the same. That is, either a is always
before b, or a is always after b.
All Submissions
Best Solutions
Point Value: 25 (partial)
Time Limit: 1.00s
Memory Limit: 256M
Added: May 18, 2015
Languages Allowed:
C++03, PAS, C, HASK, ASM, RUBY, PYTH2, JAVA, PHP, SCM, CAML, PERL, C#, C++11, PYTH3
Even though the output format says to print 1 number, you should print 3: ans - 0.001, ans, and ans + 0.001 (on separate lines).
Even though the output format says to print 1 number, you should print 3: ans - 0.001, ans, and ans + 0.001 (on separate lines).
Haha, that's amusing.
I'll fix this in a second.
Edit: This is now fixed. I didn't rejudge, though.
Haha, that's amusing.
I'll fix this in a second.
Edit: This is now fixed. I didn't rejudge, though.
|
{"url":"https://wcipeg.com/problem/noi13p2","timestamp":"2024-11-13T23:07:38Z","content_type":"text/html","content_length":"14133","record_id":"<urn:uuid:9da7be29-36a2-4054-87d7-167ae13ce551>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00557.warc.gz"}
|
From Deep to Long Learning?
Mar 28, 2023 · 9 min read
From Deep to Long Learning?
For the last two years, a line of work in our lab has been to increase sequence length. We thought longer sequences would enable a new era of machine learning foundation models: they could learn from
longer contexts, multiple media sources, complex demonstrations, and more. All data ready and waiting to be learned from in the world! It’s been amazing to see the progress there. As an aside, we’re
happy to play a role with the introduction of FlashAttention (code, blog, paper) by Tri Dao and Dan Fu from our lab, who showed that sequence lengths of 32k are possible–and now widely available in
this era of foundation models (and we’ve heard OpenAI, Microsoft, NVIDIA, and others use it for their models too–awesome!).
The context lengths of foundation models have been growing recently (and alternate explanations abound)! What's next?
As the GPT4 press release noted, this has allowed almost 50 pages of text as context–and tokenization/patching ideas like those in Deepmind’s Gato are able to use images as context. So many amazing
ideas coming together, awesome!
This article is about another approach to increasing sequence length at a high level, and the connection to a new set of primitives.
This blog is about another approach to increasing sequence length!
One fundamental issue we ran into was that the attention layers in Transformers scale quadratically in sequence length: going from 32k length to 64k length isn’t 2x as expensive, but 4x more
expensive. This led us to investigate models that are nearly linear time in sequence length. For our lab, this started with Hippo, followed by S4, H3, and now Hyena. These models hold the promise to
have context lengths of millions… or maybe even a billion!
Some Recent History and Progress
Long Range Arena and S4
The Long Range Arena benchmark was introduced by Google researchers in 2020 to evaluate how well different models can handle long-range dependencies. LRA tests a suite of tasks covering different
data types and modalities such as text, images, and mathematical expressions, with sequence lengths up to 16K (Path-X: classifying images that have been unrolled into pixels, without any spatial
inductive bias). There’s been a lot of great work on scaling Transformers to longer sequences, but many of them seem to sacrifice accuracy. And there’s that pesky Path-X column: all these Transformer
methods and their variants struggled to do better than random guessing.
Transformer variants benchmarked on Long Range Arena, along with S4.
Enter S4, led by the amazing Albert Gu! Inspired by the results from the LRA benchmark, Albert wanted to figure out how to better model long-range dependencies. Building on a long line of work on
orthogonal polynomials and the relationships between recurrent and convolutional models, we introduced S4 – a new sequence model based on structured state space models (SSMs).
Critically, SSMs scale with $O(N \log N)$ in sequence length $N$, instead of quadratically like attention. S4 was able to successfully model the long-range dependencies in LRA, and was also the first
model to achieve better than average performance on Path-X (and can now get 96.4% accuracy!). Since releasing S4, we’ve been super excited by how people are building on the ideas and making the space
richer: with models like S5 from Scott Linderman’s group, DSS from Ankit Gupta (and our own follow-on collaboration S4D), Liquid-S4 from Hasani & Lechner, and more – and of course we are always
indebted to Sasha Rush and Sidd Karamcheti for the amazing Annotated S4!
As an aside: when we released FlashAttention, we were able to increase the sequence length of Transformers. We found that Transformers could also get non-trivial performance (63%) on Path-X – simply
by increasing the sequence length to 16K!
The Gap with Language
But S4 still had a gap in quality on language modeling – up to 5 perplexity points (for context, that’s the gap between a 125M model and a 6.7B model). To close this gap, we looked at synthetic
languages like associative recall to figure out what properties you should need for language. We ended up designing H3 (Hungry Hungry Hippos) – a new layer that stacked two SSMs, and multiplied their
outputs together with a multiplicative gate.
Using H3, we replaced almost all the attention layers in GPT-style Transformers, and were able to match Transformers on both perplexity and downstream evaluations, when trained on 400B tokens from
the Pile:
Model Pile PPL SuperGlue Zero-Shot
GPT-Neo-1.3B 6.2 52.1
H3, 2 attn (1.3B) 6.0 56.5
GPT-Neo-2.7B 5.7 54.6
H3, 2 attn (2.7B) 5.4 56.8
Since the H3 layer is built on SSMs, it also has compute that grows in $O(N \log N)$ in sequence length. The two attention layers still make the whole model $N^2$ overall, but more on that in a
Of course, we weren’t the only folks thinking in this direction: GSS also found that SSMs with gating could work well in concert with attention in language modeling (which inspired H3), Meta released
their Mega model which also combined an SSM with attention, the BiGS model replaced attention in BERT-style models, and our RWKV friends have been looking at completely recurrent approaches. Very
exciting work in this area!
The Next Advance: Hyena
The next architecture in this line of work is Hyena – we wanted to see if it was possible to get rid of those last two attention layers in H3, and get a model that grows nearly linearly in sequence
length. Turns out, two simple insights led us to the answer:
• Every SSM can be viewed as a convolution filter the length of the input sequence – so we can replace the SSM with a convolution the size of the input sequence, to get a strictly more powerful
model for the same compute. In particular, we parametrize the convolutional filters implicitly via another small neural network, borrowing powerful methods from the neural fields literature, and
the great CKConv / FlexConv line of work. Plus, the convolution can be computed in $O(N \log N)$ time in sequence length – nearly-linear scaling!
• The gating behavior in H3 can be generalized: H3 takes three projections of the input, and iteratively takes convolutions and applies a gate. In Hyena, we simply add more projections and more
gates, which helps generalize to more expressive architectures and closes the gap to attention.
In Hyena, we proposed the first fully near linear-time convolutional models that could match Transformers on perplexity and downstream tasks, with promising results in initial scaling experiments. We
trained small- and medium-sized models on subsets of the PILE, and saw that val PPL matched Transformers:
Model 5B 10B 15B
GPT-2 Small (125M) 13.3 11.9 11.2
Pure H3 (153M) 14.8 13.5 12.3
Hyena (153M) 13.1 11.8 11.1
GPT-2 Medium (355M) 11.4 9.8 9.3
Hyena (355M) 11.3 9.8 9.2
With some optimizations (more on that below), Hyena models are slightly slower than Transformers of the same size at sequence length 2K – but get a lot faster at longer sequence lengths.
We’re super excited to see how far we can take these models, and excited to scale them up to the full size of the PILE (400B tokens): what happens if we combine the best ideas from H3 and Hyena, and
how long can we go?
A Common Primitive: the FFT... or Something More Basic?
A common primitive in all these models is the FFT – that’s how we can efficiently compute a convolution as long as the input sequence in $O(N \log N)$ time. However, the FFT is poorly supported on
modern hardware, which is dominated by specialized matrix multiplication units and GEMMs (e.g., tensor cores on NVIDIA GPUs).
We can start to close the efficiency gap by rewriting the FFT as a series of matrix multiplication operations – using a connection to Butterfly matrices that folks in our group have used to explore
sparse training. In our recent work, we’ve used this connection to build fast convolution algorithms like FlashConv and FlashButterfly, by using a Butterfly decomposition to compute the FFT as a
series of matmul operations.
But we can draw on the prior work to make a deeper connection: you can also let these matrices be learned – which takes the same wall-clock time, but gives you extra parameters! We’ve started
exploring this connection on some small datasets with promising initial results, and we’re excited to see where else this connection can take us (how can we make it work for language models?):
Block Size sCIFAR Acc
Baseline 91.0
16x16 Learned 91.8
32x32 Learned 92.4
256x256 Learned 92.5
We’re looking forward to exploring this more deeply. What class of transforms does this extension learn, and what can it allow you to do? What happens when we apply it to language?
What's Next
We are super excited by these directions, and what’s next: longer and longer sequences, new architectures that allow us to explore this new regime. We’re especially motivated by applications that
could benefit from longer-sequence models – high-resolution imaging, new modalities of data, language models that can read entire books. Imagine giving a language model an entire book and having it
summarize the plot, or conditioning a code generation model on all the code you’ve ever written. The possibilities are wild – and we’re excited.
You can find model code to play around with the synthetics languages we used to develop H3 & Hyena here. If you’re also excited by these directions, please reach out – we would love to chat!
Dan Fu: danfu@cs.stanford.edu; Michael Poli: poli@stanford.edu
Thanks to Alex Tamkin, Percy Liang, Albert Gu, Michael Zhang, Eric Nguyen, and Elliot Epstein for their comments and feedback on this post.
Alternate Explanations Abound
H/t to @typedfemale for bringing this to our attention. ↩
|
{"url":"https://hazyresearch.stanford.edu/blog/2023-03-27-long-learning","timestamp":"2024-11-02T21:30:30Z","content_type":"text/html","content_length":"65572","record_id":"<urn:uuid:15609fa7-b2c3-4bbb-bf21-8b195e4826b7>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00061.warc.gz"}
|
Re: st: Interpretation of interaction term in log linear (non linear) mo
Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: st: Interpretation of interaction term in log linear (non linear) model
From David Hoaglin <[email protected]>
To [email protected]
Subject Re: st: Interpretation of interaction term in log linear (non linear) model
Date Sun, 9 Jun 2013 13:58:13 -0400
Dear Suryadipta,
It may be helpful to focus, initially, on the fitted model in the log scale.
The definition of the coefficient of "dummy" includes the list of
other predictors in the model (constant, x1, and dummy*x1). Also,
when you interpret the coefficient of "dummy", you should mention that
it summarizes the effect of "dummy" on log(Trade) after adjusting for
simultaneous linear change in x1 and dummy*x1. If the interpretation
of the coefficient of "dummy" does not mention those adjustments, it
gives the impression that the coefficient summarizes the change in
log(Trade) corresponding to an increase of 1 unit (i.e., 1 SD) in x1
when the other predictors are held constant. In your model that
oversimplified interpretation is misleading, because one cannot change
"dummy" and hold dummy*x1 constant. More generally, the "held
constant" interpretation does not reflect the way multiple regression
The presence of the interaction term implies that the model makes
separate adjustments for the contribution of x1 in the two groups
defined by "dummy". It also implies (as you mentioned) that the
effect of "dummy" depends on the value of x1. It is easiest to
calculate that effect when x1 = 0. That may be an appropriate
starting point, but you should also show the mean of x1 when "dummy" =
0 and the mean of x1 when "dummy" = 1 (and look at the relation
between the ranges of x1 in the two groups). Centering the variable
underlying x1 is likely to be a good idea, but the case for dividing
by its standard deviation is less clear.
This discussion should clarify the interpretation and provide a basis
for translating it to the original scale of the data. It applies also
if you use quasi-likelihood, conveniently available in the glm/poisson
framework. If you want to work in terms of elasticities, please check
that any derivatives involved do not (inappropriately) assume that the
other predictors can be held constant.
David Hoaglin
On Sat, Jun 8, 2013 at 12:12 PM, Suryadipta Roy <[email protected]> wrote:
> Dear Statalisters,
> I was wondering if some one would be kind enough to clarify if I am on
> the right track in clarifying the coefficient of the interaction term
> when the dependent variable is in logarithm. The estimated model is of
> the form: log(Trade) = constant + 0.15dummy - 0.15x1 + 0.12dummy*x1,
> where dummy is (0,1) categorical variable, x1 is a continuous variable
> (standardized 0 - 1), and dummy*x1 is the interaction term. The result
> has been obtained from a fixed effects panel regression using -areg-
> with robust standard error option, and all the variables are
> statistically significant. Based on readings of Maarten's Stata tip
> 87: Interpretation of interactions in non-linear model, several
> Statalist postings, and the following link
> http://www.stanford.edu/~mrosenfe/soc_388_notes/soc_388_2002/Interpreting%20the%20coefficients%20of%20loglinear%20models.pdf
> , I wanted to make sure if any of the following interpretation of the
> above result is correct:
> 1. The coefficient of "dummy" indicates that this category (dummy
> variable = 1) has 16% (= exp(0.15) - 1) more of "Trade" compared to
> the base category (dummy variable = 0).
> 2. The effect of being in this category on "Trade" increases when the
> value of x1 increases. For every standard deviation increase in x1,
> the effect of "dummy" increases by about 13% (exp(0.12) - 1), OR there
> is a statistically significant 13% increase in "Trade" to countries
> having more of x1 relative to countries that have one standard
> deviation lower value of x1, OR the effect of being in the "dummy = 1"
> category in a country with one standard deviation more of x1 than
> average is exp(0.12)*exp(0.15) = 1.31, which means that "dummy=1"
> category has about 31% more "Trade" than "dummy=0" category.
> Following suggestions elsewhere in the Statalist, I have pursued other
> non-linear estimation strategies (and have asked questions to that
> effect earlier), but there is a tradition in this literature to use
> log-linear models. Any suggestion is greatly appreciated.
> Sincerely,
> Suryadipta Roy.
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/faqs/resources/statalist-faq/
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"https://www.stata.com/statalist/archive/2013-06/msg00346.html","timestamp":"2024-11-12T13:38:20Z","content_type":"text/html","content_length":"14375","record_id":"<urn:uuid:b388e880-16a1-423d-8a8b-57a6ea0c9889>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00394.warc.gz"}
|
PLP 2019:
The 6th Workshop on Probabilistic Logic Programming
An ICLP workshop
21st, September 2019, Las Cruces, New Mexico, USA
Venue | Invited Speakers | Program | Registration | Accepted Papers | Programme Committee
Probabilistic logic programming (PLP) approaches have received much attention in this century. They address the need to reason about relational domains under uncertainty arising in a variety of
application domains, such as bioinformatics, the semantic web, robotics, and many more. Developments in PLP include new languages that combine logic programming with probability theory as well as
algorithms that operate over programs in these formalisms.
The workshop encompasses all aspects of combining logic, algorithms, programming and probability.
PLP is part of a wider current interest in probabilistic programming. By promoting probabilities as explicit programming constructs, inference, parameter estimation and learning algorithms can be run
over programs which represent highly structured probability spaces. Due to logic programming's strong theoretical underpinnings, PLP is one of the more disciplined areas of probabilistic programming.
It builds upon and benefits from the large body of existing work in logic programming, both in semantics and implementation, but also presents new challenges to the field. PLP reasoning often
requires the evaluation of large number of possible states before any answers can be produced thus breaking the sequential search model of traditional logic programs.
While PLP has already contributed a number of formalisms, systems and well understood and established results in: parameter estimation, tabling, marginal probabilities and Bayesian learning, many
questions remain open in this exciting, expanding field in the intersection of AI, machine learning and statistics.
This workshop aims to bring together researchers in all aspects of probabilistic logic programming, including theoretical work, system implementations and applications. Interactions between
theoretical and applied minded researchers are encouraged. The presence of this workshop at ICLP is intended to encourage collaboration with researchers from the field of logic programming,
This workshop provides a forum for the exchange of ideas, presentation of results and preliminary work, in the following areas
• probabilistic logic programming formalisms
• parameter estimation
• statistical inference
• implementations
• structure learning
• reasoning with uncertainty
• constraint store approaches
• stochastic and randomised algorithms
• probabilistic knowledge representation and reasoning
• constraints in statistical inference
• applications, such as
□ bioinformatics
□ semantic web
□ robotics
• probabilistic graphical models
• Bayesian learning
• tabling for learning and stochastic inference
• MCMC
• stochastic search
• labelled logic programs
• integration of statistical software
The above list should be interpreted broadly and is by no means exhaustive.
The workshop will take place at Las Cruces, New Mexico, USA.
You can find workshop schedule at: https://www.cs.nmsu.edu/ALP/iclp2019/schedule_plp.html
The workshop will take place at Las Cruces, New Mexico, USA.
Invited speakers
• Joohyung Lee (Arizona State University, USA)
DeepLPMLN: A neural probabilistic logic programming language
This talk will introduce DeepLPMLN, which extends probabilistic answer set programming language LPMLN by embracing neural networks via the notion of neural atoms borrowed from DeepProbLog. The
formalism allows for seamless coordination of the perception and the reasoning tasks, as well as joint parameter learning between probabilistic logic programs and neural networks. The gradients
from the rules do not stop at updating the weights of the rules but could backpropagate further into neural networks, thereby enabling neural networks to learn not only from implicit correlations
from the data but also from explicit complex semantic constraints conveniently expressed by answer set programs.
• Guy Van den Broeck (University of California, Los Angeles)
Discrete Probabilistic Programming from First Principles. (slides)
This talk will build up semantics and probabilistic reasoning algorithms for discrete probabilistic programs from first principles. We begin by explaining simple semantics for imperative
probabilistic programs, highlighting how they are different from classical representations of uncertainty in AI, and the possible pitfalls along the way. Then we dive into algorithms for
reasoning about such programs and exploiting their structure, either through abstraction of the probabilistic program, or by compilation into a tractable representation for inference.
• Elena Bellodi (University of Ferrara, Italy)
Efficient inference in discrete and continuous domains for PLP languages under the Distribution Semantics (slides)
In Probabilistic Logic Programming, a large number of languages have been independently proposed. Many of these however follow a common approach, the distribution semantics (Sato 1995). Since PLP
systems generally must solve a large number of inference problems in order to perform learning, they rely critically on the support of efficient inference systems. The talk will provide an
overview of the most recent and scalable techniques for exact and approximate reasoning on PLP programs under the distribution semantics, in the presence of discrete or continuous random
Registration open through ICLP registration: https://shopcart.nmsu.edu/shop/icpl2019
Papers due: [S:Fri, 26th July 2019:S] Mon, 19th August 2019
Notification to authors: [S:Fri, 9th August 2019:S] Sat, 31st August 2019
Camera ready version due: [S:Fri, 23th August 2019:S] Fri,13th September 2019
Workshop date: 21, September 2019
(the deadline for all dates is 23:59 BST)
Accepted Papers
• Efficient Learning of Relational Gaifman Models using Probabilistic Logic (Devendra Dhami, Gautam Kunapuli and Sriraam Natarajan)
• Elicitation of probabilistic logic rules from medical records: A preliminary study in learning policies for the management of critically ill children (Michael Skinner, Sriraam Natarajan, Lakshmi
Raman, Neel Shah and Abdelaziz Farhat)
• Bridging Commonsense Reasoning and Probabilistic Planning via a Probabilistic Action Language (Yi Wang, Shiqi Zhang and Joohyung Lee)
At least one author of each accepted paper will be required to attend the workshop to present the contribution.
Programme committee
• Sriraam Natarajan (The University of Texas at Dallas, USA) [co-chair]
• Yi Wang (Arizona State University, USA) [co-chair]
• Joohyung Lee (Arizona State University, USA)
• Elena Bellodi (University of Ferrara, Italy)
• Fabio Cozman (University of Sao Paulo, Brasil)
• Luke Dickens (University College London, UK)
• Matthias Nickles (National University of Ireland, Ireland)
• Rolf Schwitter (Macquarie University, Australia)
• Chung-Chieh Shan (Indiana University Bloomington, USA)
Senior Committee
• Nicos Angelopoulos (Sanger Institute, UK)
• Vitor Santos Costa (Universidade do Porto, Portugal)
• James Cussens (University of York, UK)
• Arjen Hommersom (Open University, The Netherlands)
• Angelika Kimmig (Cardiff University, UK)
• Evelina Lamma (University of Ferrara, Italy)
• David Poole (University of British Columbia, Canada)
• Luc De Raedt (KU Leuven, Belgium)
• Fabrizio Riguzzi (University of Ferrara, Italy)
• Alessandra Russo (Imperial College, UK)
• Joost Vennekens (KU Leuven, Belgium)
Last modified: Wed 31 July 2019
|
{"url":"https://stoics.org.uk/plp/plp2019/","timestamp":"2024-11-07T10:05:46Z","content_type":"text/html","content_length":"16825","record_id":"<urn:uuid:b881f4ec-0d6f-433b-90a1-5dd6dac67a59>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00736.warc.gz"}
|
Lecture 22: ArrayLists
Lecture 22: ArrayLists
Binary search over sorted ArrayLists, sorting ArrayLists
In the last lecture we began implementing several functions over ArrayLists as methods in a helper utility class. We continue that work in this lecture, designing methods to find an item in an
ArrayList matching a predicate, and to sort an ArrayList according to some comparator.
22.1 Finding an item in an arbitrary ArrayList
To find an item that matches a given predicate, we need to add a new method to our utility class as well. Our initial guess at a signature is
// In ArrayUtils <T> ??? find(ArrayList<T> arr, IPred<T> whichOne) {
What should the return type of our method be? If we simply return the item (i.e. have a return type of T), we are limited in what we can do: we can modify the item, perhaps, but we cannot remove the
item from the ArrayList itself, because removal requires specifying an index. Therefore we should return the index where we found the item. If no item is found, we could throw an exception, but since
it is a common occurrence for a value not to be found, we should instead return some invalid index, rather than aborting our program:
// In ArrayUtils // Returns the index of the first item passing the predicate, // or -1 if no such item was found <T> int find(ArrayList<T> arr, IPred<T> whichOne) {
How can we implement this method? We don’t have ConsList and MtList against which to dynamically dispatch to methods. Even if we did, we’d need to keep count of the index we were at (using an
accumulator parameter) so that we could return it. Here, we can use that index to drive the iteration as well. We define a helper method
// In ArrayUtils // Returns the index of the first item passing the predicate at or after the // given index, or -1 if no such such item was found <T> int findHelp(ArrayList<T> arr, IPred<T>
whichOne, int index) {
if (whichOne.apply(arr.get(index)) {
return index;
else {
return findHelp(arr, whichOne, index + 1);
What’s wrong with this code?
We’ve forgotten our base case: this code will continue its recursion until index gets too big, at which point get will throw an exception. We need to compare index with the size of the list:
// In ArrayUtils // Returns the index of the first item passing the predicate at or after the // given index, or -1 if no such such item was found <T> int findHelp(ArrayList<T> arr, IPred<T>
whichOne, int index) {
if (index >= arr.size()) {
return -1;
else if (whichOne.apply(arr.get(index)) {
return index;
else {
return findHelp(arr, whichOne, index + 1);
What would happen if we had used > instead of >=?
22.2 Finding an item in a sorted ArrayList – version 1
Suppose we happen to know that our ArrayList contains items that are comparable, and that the ArrayList itself is sorted. Can we do better than blindly scanning through the entire ArrayList? For
concreteness, let’s assume our ArrayList is an ArrayList<String> and we’ll use the built-in comparisons on Strings. We’ll revisit this decision after we’ve developed the method, and generalize it to
arbitrary element types.
To guide our intuition on a better searching algorithm, consider a well-known sorted list of strings: a dictionary, whose entries are alphabetized. Here is a sample dictionary of words, along with
their 0-based indices:
[apple, banana, cherry, date, fig, grape, honeydew, kiwi, watermelon]
Suppose we were searching for “grape”.
• We know that words beginning with ‘g’ are not likely to appear at the very front of the dictionary, nor are they likely to appear at the back. Instead we start our search somewhere in the middle
of the dictionary. In this case, the middle of our dictionary is index 4, “fig”. Because the dictionary is alphabetized, and “grape” comes after “fig” in the alphabet, we now know that all
indices of 4 and below will definitely not contain the word we seek. Instead, we turn our attention to indices 5 (which is one more than the middle index, 4+1) through 8 (our upper bound on which
indices might contain our word).
• We could begin blindly scanning through all those items (and indeed, in this particular example, we’d luckily find our target on the very next try!), but our first approach of checking the
“middle” index and eliminating half the dictionary in one shot worked so well; let’s try it again. This time, the middle index is 6 (or 7; either will work, but since indices must be integers, we
will use integer division, allowing Java to truncate any fractional part and we’ll get 6 as our answer), “honeydew”. Since “grape” precedes “honeydew”, we now know that indices 6 and up will
definitely not contain the word we seek. So we continue with indices 5 (our lower bound) through 5 (which is one less than the middle index, 6-1).
• Happily, index 5 contains “grape”, so we return 5 as our answer.
What indices would we check if we were searching for “blueberry”?
Let’s trace through the steps involved in searching for “blueberry”.
• Once again, we consider the entire ArrayList, from index 0 through index 8, and start our search at the middle index 4, “fig”, which is greater than our target word. So we eliminate indices 4 and
up, and focus on indices 0 (our lower bound on where to find the word) through 3 (which is 4-1).
• Our middle index is 2, corresponding to “cherry”, which is greater than “blueberry”, so we eliminate indices 2 and up, and focus on indices 0 (our lower bound) through 1 (which is 2-1).
• Now our middle index is 0, “apple”, which is less than our target, so we eliminate index 0, and focus on indices 1 (which is 0+1) through 1 (our upper bound).
• Index 1 contains “banana”, which is less than our target, so we eliminate index 1, and focus on indices 2 (which is 1+1) through 1 (our upper bound).
• Now our bounds have crossed: our lower bound is greater than our upper bound, so there are no possible words in the dictionary that might be our target. We must not have the target word in our
ArrayList; we therefore return -1.
Let’s see how to translate this description into code. This search technique, which splits the search space in half each time, is known as binary search, so we’ll implement a new method to
distinguish it from our previous find operation:
// In ArrayUtils // Returns the index of the target string in the given ArrayList, or -1 if the string is not found // Assumes that the given ArrayList is sorted aphabetically int binarySearch(
ArrayList<String> strings, String target) {
Once again, we find ourselves in need of a helper method: we need to keep track of the lower and upper bounds on the indices where our target string might be found.
// In ArrayUtils // Returns the index of the target string in the given ArrayList, or -1 if the string is not found // Assumes that the given ArrayList is sorted aphabetically int
binarySearchHelp_v1(ArrayList<String> strings, String target, int lowIdx, int highIdx) {
int midIdx = (lowIdx + highIdx) / 2;
if (target.compareTo(strings.get(midIdx)) == 0) {
return midIdx; // found it! }
else if (target.compareTo(strings.get(midIdx)) > 0) {
return this.binarySearchHelp_v1(strings, target, midIdx + 1, highIdx); // too low }
else {
return this.binarySearchHelp_v1(strings, target, lowIdx, midIdx - 1); // too high }
What’s wrong with this code?
Once again we forgot our base case: when the indices cross, the target must not be present:
// In ArrayUtils // Returns the index of the target string in the given ArrayList, or -1 if the string is not found // Assumes that the given ArrayList is sorted aphabetically int
binarySearchHelp_v1(ArrayList<String> strings, String target, int lowIdx, int highIdx) {
int midIdx = (lowIdx + highIdx) / 2;
if (lowIdx > highIdx) {
return -1; // not found }
else if (target.compareTo(strings.get(midIdx)) == 0) {
return midIdx; // found it! }
else if (target.compareTo(strings.get(midIdx)) > 0) {
return this.binarySearchHelp_v1(strings, target, midIdx + 1, highIdx); // too low }
else {
return this.binarySearchHelp_v1(strings, target, lowIdx, midIdx - 1); // too high }
What would happen if we didn’t add or subtract 1 from midIdx in the recursive calls?
Consider searching for “clementine”, this time without adding or subtracting 1:
• We start the search between indices 0 and 8. The middle index is 4, and “fig” is bigger than “clementine”, so we search from the lower bound to the middle index.
• We search between indices 0 and 4. The middle index is 2, and “banana” is smaller than “clementine”, so we search from the middle index to the upper bound.
• We search between indices 2 and 4. The middle index is 3, and “cherry” is smaller than “clementine”, so we search from the middle index to the upper bound.
• We search between indices 3 and 4. The middle index is 3, and “cherry” is smaller than “clementine”, so we search from the middle index to the upper bound.
• We search between indices 3 and 4...
If we don’t add or subtract 1, then we can easily get stuck comparing the same items with the same upper and lower bounds indefinitely. Once again, when dealing with indices, we have to be
particularly careful about our edge cases.
What would happen if our exit condition were if (loxIdx >= highIdx)...?
Now that we have a working helper, we just need to invoke it from the main binarySearch method:
// In ArrayUtils int binarySearch_v1(ArrayList<String> strings, String target) {
return this.binarySearchHelp_v1(strings, target, 0, strings.size() - 1);
22.3 Finding an item in a sorted ArrayList – version 2
Functionally, the code above works great: we’ve covered all cases, and it computes the correct answer. Aesthetically, though, it’s a bit...fiddly. All those adding and subtracting 1s from the indices
is tricky to get right, and if we miss even one of them, our code could loop indefinitely. Perhaps there’s a cleaner, less brittle way we could organize our code to avoid these.
Recall our discussions from Fundies I about semi-open intervals: a semi-open interval \([m, n)\) consists of all numbers \(x\) such that \(m \leq x < n\), i.e. it includes \(m\) (and so is “closed”
on the left) and excludes \(n\) (and so is “open” on the right). As a degenerate case, the interval \([m, m)\) is empty, because it must both include and exclude its edge values. How might we use
this concept in our binary search?
What kind of intervals were we using in version 1 of our binary search code?
We never actually stated explicitly what lowIdx and highIdx meant in our code above! We just blindly manipulated them arithmetically, but never specifically gave them an interpretation. We can infer
their meaning by looking at the initial call to binarySearchHelp_v1 in binarySearch_v1 itself: we pass in 0 for the lower bound, and strings.size() - 1 for the upper bound. Apparently, the lower
bound means the lowest possible valid index where the data could be found, and the upper bound means the highest possible valid index where the data could be found. Because lowIdx and highIdx are
inclusive bounds, they represent a closed interval.
Ironically, the mathematical terminology here is to say that closed intervals are not “closed under splitting.” Further ironically, semi-open intervals are “closed under splitting.”
Mathematicians overload the term “closed” with multiple meanings.
Arithmetically, what we’ve noticed in our code, with its adding and subtracting 1s everywhere, is that it’s hard to split a closed interval into two pieces that are themselves closed intervals — and
we need the two pieces to be closed intervals, or else we can’t pass them to recursive calls. On the other hand, it’s easy to split a semi-open interval in two: we can split an interval \([l, h)\)
into \([l, m)\) and \([m, h)\), for any \(l \leq m \leq h\), and it’s straightforward to check that both smaller intervals contain all the values of the original interval, and that the smaller
intervals do not overlap.
Confirm this — use the definition of semi-open above.
What if we used a semi-open interval for our indices, instead of a closed one? The skeleton of our code will be identical to the version above, but a few details will change.
// In ArrayUtils // Returns the index of the target string in the given ArrayList, or -1 if the string is not found // Assumes that the given ArrayList is sorted aphabetically // Assumes that
[lowIdx, highIdx) is a semi-open interval of indices int binarySearchHelp_v2(ArrayList<String> strings, String target, int lowIdx, int highIdx) {
int midIdx = (lowIdx + highIdx) / 2;
if (lowIdx ??? highIdx) {
return -1; // not found }
else if (target.compareTo(strings.get(midIdx)) == 0) {
return midIdx; // found it! }
else if (target.compareTo(strings.get(midIdx)) > 0) {
return this.binarySearchHelp_v2(strings, target, midIdx ???, highIdx); // too low }
else {
return this.binarySearchHelp_v2(strings, target, lowIdx, midIdx ???); // too high }
Read the calls to binarySearchHelp_v2 as “find the index of the target string in the given list, knowing that it must be at least at the low index and before the high index.” We have three holes to
fill in, which we’ll examine out of order:
• We need a base case to determine when there are no valid indices left to check. This now falls out of the definition of semi-open intervals: the interval is empty when lowIdx >= highIdx.
• Otherwise we split the interval in half. If the target is too high, then the midIdx is too big. We need to exclude it in the recursive call, and since the interpretation of the high index is that
it’s excluded, we can simply pass midIdx directly, with no subtracting 1.
• If the target is too low, then the midIdx is too small. We can exclude it from the recursive call by adding 1 to it. Sadly, this addition is necessary and can’t be eliminated, because indices are
integers, not reals, and so we run the risk of infinitely recuring when computing midIdx that we get the exact same numbers we started with.
Suppose we didn’t add 1 in the last case. Construct a test case that causes the search to recur forever.
Our final code, then, is this:
// In ArrayUtils // Returns the index of the target string in the given ArrayList, or -1 if the string is not found // Assumes that the given ArrayList is sorted aphabetically // Assumes that
[lowIdx, highIdx) is a semi-open interval of indices int binarySearchHelp_v2(ArrayList<String> strings, String target, int lowIdx, int highIdx) {
int midIdx = (lowIdx + highIdx) / 2;
if (lowIdx >= highIdx) {
return -1; // not found }
else if (target.compareTo(strings.get(midIdx)) == 0) {
return midIdx; // found it! }
else if (target.compareTo(strings.get(midIdx)) > 0) {
return this.binarySearchHelp_v2(strings, target, midIdx + 1, highIdx); // too low }
else {
return this.binarySearchHelp_v2(strings, target, lowIdx, midIdx); // too high }
It should be clear from looking at the code that we split the original interval \([lowIdx, highIdx)\) into \([lowIdx, midIdx)\) and \([midIdx + 1, highIdx)\), which — since we’re only considering
integers — clearly cover the original interval with no overlap.
Finally, we need our initial routine that calls the helper. Now, since our upper bound is excluded, we don’t need to subtract 1 from the size of the list, because we’ll never consider the initial
upper bound as a valid index:
// In ArrayUtils int binarySearch_v2(ArrayList<String> strings, String target) {
return this.binarySearchHelp_v2(strings, target, 0, strings.size());
22.4 Generalizing to arbitrary element types
For completeness, here is the version of binarySearch that works for arbitrary element types. Our signature gets slightly more complicated, but the logic behind the index computations and comparisons
remains the same:
// In ArrayUtils <T> int gen_binarySearch_v2(ArrayList<T> arr, T target, IComparator<T> comp) {
return this.gen_binarySearchHelp_v2(arr, target, comp, 0, arr.size());
<T> int gen_binarySearchHelp_v2(ArrayList<T> arr, T target, IComparator<T> comp,
int lowIdx, int highIdx) {
int midIdx = (lowIdx + highIdx) / 2;
if (lowIdx >= highIdx) {
return -1;
else if (comp.compare(target, strings.get(midIdx)) == 0) {
return midIdx;
else if (comp.compare(target, strings.get(midIdx)) > 0) {
return this.gen_binarySearchHelp_v2(strings, target, comp, midIdx + 1, highIdx);
else {
return this.gen_binarySearchHelp_v2(strings, target, comp, lowIdx, midIdx);
(Note that obviously in practice, these methods would lose the gen_ and _v2 affixes, which were added here only to distinguish the various versions of our code.)
22.5 Sorting an ArrayList
Picture a set of cards spread out in a row on a table, each with a word on them:
[kiwi, cherry, apple, date, banana, fig, watermelon, grape, honeydew]
How would we sort this? There are many, many techniques we could use, but since we have only two hands to move the cards around, one of the most natural might be the following. We pick up the first
card, “kiwi”, and look for the card that ought to go in that spot — “apple” — and replace “kiwi” with “apple”. Since we do not want to lose “kiwi”, and since we have to set it down again somewhere,
we might as well place it in the spot where “apple” was: we exchange them.
[apple, cherry, kiwi, date, banana, fig, watermelon, grape, honeydew]
How did we decide that “apple” was the appropriate replacement for “kiwi”?
Next, we pick up the second card, “cherry”, and look for the card that ought to go in that spot — “banana” — and exchange them.
[apple, banana, kiwi, date, cherry, fig, watermelon, grape, honeydew]
How did we decide that “banana” was the appropriate replacement for “cherry”?
Let’s be a bit more rigorous about what we’re doing here. In the first case, when we were searching for a replacement for “kiwi”, we were looking for the smallest item of the list. In the second
case, we could not possibly have been searching for the smallest item of the list, or else we’d have found “apple” again! Instead, we were searching for the smallest item of the rest of the list,
beyond the location we were swapping. Why does this work? Our algorithm essentially partitions the list into two segements: the front of the list has been processed, while the back of the list
remains to be processed. Moreover, the front of the list is guaranteed to be sorted.
0 1 || 2 3 4 5 6 7 8
[apple, banana,|| kiwi, date, cherry, fig, watermelon, grape, honeydew]
SORTED <--++--> NOT YET SORTED
By searching for the smallest item of the not-yet-sorted portion of the list, and exchanging it with the first item in the not-yet-sorted portion, we have essentially sorted that one item:
0 1 || 2 3 4 5 6 7 8
[apple, banana,|| kiwi, date, cherry, fig, watermelon, grape, honeydew]
SORTED <--++--> NOT YET SORTED
Swap items at index 2 and index 3...
0 1 2 || 3 4 5 6 7 8
[apple, banana, date,|| kiwi, cherry, fig, watermelon, grape, honeydew]
SORTED <--++--> NOT YET SORTED
Now if we repeat this process for each index in the list, we’ll have grown the sorted section to encompass the entire list: we’ll have sorted the list.
But how to do that? We cannot use a for-each loop here, because we specifically care about the indices, more than we care about the particular items. We could write our code using a recursive method
and an accumulator parameter:
// In ArrayUtil // EFFECT: Sorts the given list of strings alphabetically void sort(ArrayList<String> arr) {
this.sortHelp(arr, 0); // (1) }
// EFFECT: Sorts the given list of strings alphabetically, starting at the given index void sortHelp(ArrayList<String> arr, int minIdx) {
if (minIdx >= arr.size()) { // (2) return;
else { // (3) int idxOfMinValue = ...find minimum value in not-yet-sorted part...
this.swap(arr, minIdx, idxOfMinValue);
this.sortHelp(arr, minIdx + 1); // (4) }
But this feels clumsy: there’s too much clutter surrounding the actual activity of sortHelp. Again, since iterating over all items by position is such a common operation, Java provides syntax to make
it easier: a counted-for loop, or just a for loop. We introduce counted-for loop syntax by rewriting sort and sortHelp to use one:
// In ArrayUtil // EFFECT: Sorts the given list of strings alphabetically void sort(ArrayList<String> arr) {
for (int idx = 0; // (1) idx < arr.size(); // (2) idx = idx + 1) { // (4) // (3) int idxOfMinValue = ...find minimum value in not-yet-sorted part...
this.swap(arr, minIdx, idxOfMinValue);
A for loop consists of four parts, which are numbered here (and their corresponding parts are numbered in the recursive version of the code). First is the initialization statement, which declares the
loop variable and initializes it to its starting value. This is run only once, before the loop begins. Second is the termination condition, which is checked before every iteration of the loop body.
As soon as the condition evaluates to false, the loop terminates. Third is the loop body, which is executed every iteration of the loop. Fourth is the update statement, which is executed after each
loop body and is used to advance the loop variable to its next value. Read this loop aloud as “For each value of idx starting at 0 and continuing while idx < arr.size(), advancing by 1, execute the
The initialization, termination condition, and update statement used here are pretty typical: loops often count by ones, starting at zero and continuing until some upper bound. But these loops can be
far more flexible: they could start counting at some large number, and count down to some lower bound:
for (int idx = bigNumber; idx >= smallNumber; idx = idx - 1) { ... }
or count only odd numbers:
for (int idx = smallOddNumber; idx < bigNumber; idx = idx + 2) { ... }
or anything else that the problem at hand requires.
Practice using the counted-for loop: design a method
<T> ArrayList<T> interleave(ArrayList<T> arr1, ArrayList<T> arr2)
that takes two ArrayLists of the same size, and produces an output ArrayList consisting of one item from arr1, then one from arr2, then another from arr1, etc.
Design a method
<T> ArrayList<T> unshuffle(ArrayList<T> arr)
that takes an input ArrayList and produces a new list containing the first, third, fifth ... items of the list, followed by the second, fourth, sixth ... items.
22.6 Finding the minimum value
Design the missing method to finish the sort method above: this method should find the minimum value in the not-yet-sorted part of the given ArrayList<String>.
|
{"url":"https://course.khoury.northeastern.edu/cs2510h/lecture22.html","timestamp":"2024-11-04T08:11:07Z","content_type":"text/html","content_length":"131273","record_id":"<urn:uuid:c18d9dfc-cf96-4ce3-b627-a456acc6b4e7>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00331.warc.gz"}
|
Quantum expanders: Motivation and constructions
We define quantum expanders in a natural way. We give two constructions of quantum expanders, both based on classical expander constructions. The first construction is algebraic, and is based on the
construction of Cayley Ramanujan graphs over the group PGL{2, q) given by Lubotzky, Philips and Sarnak [27]. The second construction is combinatorial, and is based on a quantum variant of the Zig-Zag
product introduced by Reingold, Vadhan and Wigderson [35]. Both constructions are of constant degree, and the second one is explicit. Using quantum expanders, we characterize the complexity of
comparing and estimating quantum entropies. Specifically, we consider the following task: given two mixed stales, each given by a quantum circuit generating it, decide which mixed state has more
entropy. We show that this problem is QSZK-complele (where QSZK is the class of languages having a zero-knowledge quantum interactive protocol). This problem is very well motivated from a physical
point of view. Our proof resembles the classical proof that the entropy difference problem is SZK-complete, but crucially depends on the use of quantum expanders.
Original language English
Title of host publication Proceedings - 23rd Annual IEEE Conference on Computational Complexity, CCC 2008
Pages 292-303
Number of pages 12
State Published - 2008
Externally published Yes
Event 23rd Annual IEEE Conference on Computational Complexity, CCC 2008 - College Park, MD, United States
Duration: 23 Jun 2008 → 26 Jun 2008
Publication series
Name Proceedings of the Annual IEEE Conference on Computational Complexity
ISSN (Print) 1093-0159
Conference 23rd Annual IEEE Conference on Computational Complexity, CCC 2008
Country/Territory United States
City College Park, MD
Period 23/06/08 → 26/06/08
Dive into the research topics of 'Quantum expanders: Motivation and constructions'. Together they form a unique fingerprint.
|
{"url":"https://cris.huji.ac.il/en/publications/quantum-expanders-motivation-and-constructions","timestamp":"2024-11-12T18:51:21Z","content_type":"text/html","content_length":"49305","record_id":"<urn:uuid:c49e8b35-7182-4509-aaa6-b8fc7f2a5d9a>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00511.warc.gz"}
|
Mean | JavaScript Front End Interview Questions with Solutions
Implement a function mean(array) that returns the mean (also known as average) of the values inside array, which is an array of numbers.
1. array (Array): Array of numbers.
(Number): Returns the mean of the values in array.
mean([4, 2, 8, 6]);
mean([1, 2, 3, 4]);
mean([1, 2, 2]);
The function should return NaN if array is empty.
|
{"url":"https://www.greatfrontend.com/questions/javascript/mean","timestamp":"2024-11-11T00:07:40Z","content_type":"text/html","content_length":"450756","record_id":"<urn:uuid:327115f6-543b-4056-96db-8af90bc22db2>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00185.warc.gz"}
|
Dresden was born in Amsterdam on November 23, 1882, into a wealthy banking family. After matriculating for three years at the University of Amsterdam he used tuition money in 1903 to book passage on
a ship to New York City. He then traveled to Chicago to help a friend, arriving there on his 21st birthday. Two years later, after saving money from working at various jobs, he enrolled in the
graduate program at the University of Chicago, where he earned his Ph.D. in 1909 under the direction of Oskar Bolza with thesis The Second Derivatives of the Extremal Integral.^[5]
Research and teaching
Dresden taught at the University of Wisconsin 1909–1927. During this time he wrote several papers on the calculus of variations and systems of linear differential equations. He directed one doctoral
dissertation. He was elected a Fellow of the American Association for the Advancement of Science in 1911.^[6] He was recruited to Swarthmore College by President Frank Aydelotte to initiate an honors
program in mathematics that ended up being a model for other colleges and universities throughout the U.S. Dresden remained at the elite Quaker college until retiring in 1952; he was adored by many
of his students. He was a Guggenheim Fellow for the academic years 1930–1931 and 1934–1935.^[7] In 1935–1936 he was on sabbatical at the Institute for Advanced Study, where he wrote An Invitation to
Mathematics.^[8]^[9]^[10] He died on April 10, 1954, in Swarthmore, Pennsylvania, at age 71.
While at Wisconsin Arnold Dresden was active in and served as secretary of, the Chicago Section of the American Mathematical Society. A charter member of the Mathematical Association of America, he
was elected President for 1933–1934. He also served as Vice-President during 1931 and as a member of the Board of Governors for 1935–1940 and 1943–1945. His retiring presidential address, “A program
for mathematics",^[11] encapsulated his deep concern about the place of mathematics in general culture and about the mathematical community's laissez-faire attitude toward the role it should play. A
recurring theme was his belief that abstract concepts can be grasped by young people, which he preached in his 1936 book, An Invitation to Mathematics. He was also known as an ally to women in the
field, as well.^[12] He also wrote three textbooks and translated van der Waerden’s classic Science Awakening from Dutch into English.^[13]
• Dresden, Arnold (1921). Plane trigonometry. John Wiley.
• Dresden, Arnold (1964) [1930]. Solid Analytical Geometry and Determinants. NY and London (1930): John Wiley and Chapman & Hall; (reprint) Dover.{{cite book}}: CS1 maint: location (link)
• Dresden, Arnold (1936). An Invitation to Mathematics. H. Holt.
• Dresden, Arnold (1940). Introduction to the Calculus. H. Holt.
• Waerden, B. L., van der; English trans. Arnold Dresden (1954). Science Awakening. Noordhoff.{{cite book}}: CS1 maint: multiple names: authors list (link)
External links
• Rank and File American Mathematicians (pdf) by David Zitarelli
• Records of editors, presidents, and secretaries from MAA headquarters, Arnold Dresden, 1932-1950 at the Archives of American Mathematics from Texas Archival Resources Online
|
{"url":"https://www.knowpia.com/knowpedia/Arnold_Dresden","timestamp":"2024-11-13T21:58:23Z","content_type":"text/html","content_length":"99273","record_id":"<urn:uuid:6a6c132d-1a48-4e14-ab05-58a6ec22f867>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00817.warc.gz"}
|
Satisfiability certificates verifiable in subexponential time
It is common to classify satisfiability problems by their time complexity. We consider another complexity measure, namely the length of certificates (witnesses). Our results show that there is a
similarity between these two types of complexity if we deal with certificates verifiable in subexponential time. In particular, the well-known result by Impagliazzo and Paturi [IP01] on the
dependence of the time complexity of k-SAT on k has its counterpart for the certificate complexity: we show that, assuming the exponential time hypothesis (ETH), the certificate complexity of k-SAT
increases infinitely often as k grows. Another example of time-complexity results that can be translated into the certificate-complexity setting is the results of [CIP06] on the relationship between
the complexity of k-SAT and the complexity of SAT restricted to formulas of constant clause density. We also consider the certificate complexity of CircuitSAT and observe that if CircuitSAT has
subexponential-time verifiable certificates of length cn, where c < 1 is a constant and n is the number of inputs, then an unlikely collapse happens (in particular, ETH fails).
סדרות פרסומים
שם Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
כרך 6695 LNCS
ISSN (מודפס) 0302-9743
ISSN (אלקטרוני) 1611-3349
כנס 14th International Conference on Theory and Applications of Satisfiability Testing, SAT 2011
מדינה/אזור ארצות הברית
עיר Ann Arbor, MI
תקופה 19/06/11 → 22/06/11
טביעת אצבע
להלן מוצגים תחומי המחקר של הפרסום 'Satisfiability certificates verifiable in subexponential time'. יחד הם יוצרים טביעת אצבע ייחודית.
|
{"url":"https://cris.ariel.ac.il/iw/publications/satisfiability-certificates-verifiable-in-subexponential-time","timestamp":"2024-11-07T03:53:20Z","content_type":"text/html","content_length":"59459","record_id":"<urn:uuid:dd3763a2-585d-4830-be37-0ae46f047936>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00317.warc.gz"}
|
Why does the Marginal Hilbert Spectrum of an anharmonic oscillator have two peaks?
5 years ago
●1 reply●
latest reply 5 years ago
149 views
I've been looking at the marginal Hilbert Spectra of both the simple harmonic oscillator and an anharmonic Morse oscillator. I have found that while the SHO has a marginal spectrum with a single and
roughly lorentzian band shape, the corresponding Morse oscillator splits into two peaks, and the spectrum more closely resembles the probability density of a classical oscillator.
Looking at the instantaneous frequencies in the time domain indicates that the Morse oscillator frequencies change during the oscillation, and that as the oscillator spends more time at the turning
points, this gives rise to the two distinct peaks.
However, it doesn't make physical sense to say that a single oscillating mode of a single oscillator has two distinct frequency components, and the equivalent Fourier spectrum gives a single peak
roughly in between the two, corresponding roughly to the oscillators correct frequency.
This remains true when empirical mode decomposition (Hilbert-Huang Transform, HHT) is performed on the signal first so as to construct the marginal spectrum from intrinsic mode functions that should
have well behaved Hilbert transforms.
I have tried doing this in both LabVIEW and MATLAB. What is the mathematical reason why this seemingly non-physical spectrum occurs for an anharmonic oscillation using the HHT? Alternatively does
anyone know if might have made a mistake somewhere? If anyone knows of any relavent literature on the topic, that would help as well. As an aside if anyone could explain the origin of the Lorentzian
in the harmonic case, that would help too.
[ - ]
Reply by ●April 30, 2020
I'm not familiar with Hilbert Spectra. But I'll point you at "Analysis and Design of Autonomous Microwave Circuits" chapter 3 "Bifurcation Analysis". There are stable and unstable branches in phase
space which can cause an oscillator to jump to an alternate branch as conditions change. I'm not sure it's the same thing, but oscillators can definitely do weird things because they are
non-linear. The Lorentzian spread looks like the plots in a lot of the graphs in that book, so "that's how it works" is about the best answer I can come up with for the moment.
I got the book to understand self injection oscillators. It's been a while since I looked at that stuff in detail, but I suspect you'll find useful information and good references too.
|
{"url":"https://dsprelated.com/thread/11061/why-does-the-marginal-hilbert-spectrum-of-an-anharmonic-oscillator-have-two-peaks","timestamp":"2024-11-12T15:41:13Z","content_type":"text/html","content_length":"30504","record_id":"<urn:uuid:20c23087-ca72-4078-b448-0164e836b801>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00837.warc.gz"}
|
The Mayan Calendar
The mathematical basis of the Mayan Calendar has proved to be far more accurate over the long term. This is due to the two factors:
Because Central America is located between the tropics of Cancer and Capricorn, the area is an ideal place to develop an accurate calendar, Macri points out. Only in these latitudes is it possible to
observe precise solar zeniths , or when the sun is at the center of the sky at noon.
By about 200 B.C., people living along the Gulf Coast in Central America developed the Long Count through their observations of the solar zeniths. The count forms the basis of the more extensive
Mesoamerican calendar. The Long Count, with its 13 full cycles of 400 years each, accurately extends back to 3114 B.C., which is when the current era was supposed to have begun.
One wonders how far less accurate Stonehenge is. The religion incorporated observation and calculation: a gentle reminder that knowledge has been embedded in faith far longer than in science.
To handle the uneven counting during leap years, the Long Count developers created an elegant mathematical solution, Macri says.
"They saw how leap years shifted over thousands of days," Macri says. "So, to deal with the fractions, they expressed the numbers by multiplying to get a full number, thus allowing for a more
accurate calendar over a long period."
In their observations of solar, lunar and planetary movements over time, the Native Americans were able to create complex mathematical tables. To this day, "daykeepers" in the Guatemala highland
serve as Mesoamerican calendar priests by continuing to observe the skies and note their observations, Macri says.
Graduate student Grofe has examined the complex tables used to record the counting cycles that the Mayans used to create accurate projections thousands of years into the past and the future.
"Although computers can calculate time now, these people were very capable of observation and empirical science," says Grofe, who also holds biology and anthropology degrees. "Using complex tables,
they recorded unbroken counting cycles over thousands of years."
|
{"url":"https://thebewilderness.typepad.com/my_weblog/2006/02/the_mayan_calen.html","timestamp":"2024-11-13T04:47:37Z","content_type":"application/xhtml+xml","content_length":"36877","record_id":"<urn:uuid:e463fafa-38f8-4d9c-801b-e9fbe2fa0405>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00278.warc.gz"}
|
Derivative of the Jacobi theta function: Introduction to the Jacobi theta functions (subsection JacobiThetas/04)
Connections within the group of Jacobi theta functions and with other function groups
Representations through related equivalent functions
The elliptic theta functions , , , and can be represented through the Weierstrass sigma functions by the following formulas:
where , are the Weierstrass half-periods and is the Weierstrass zeta function.
The ratios of two different elliptic theta functions , , , and can be expressed through corresponding elliptic Jacobi functions with power factors by the following formulas:
where is an elliptic nome and is a complete elliptic integral.
Representations through other Jacobi theta functions
Each of the theta functions , , , and can be represented through the other theta functions by the following formulas:
The derivatives of the theta functions , , , and can also be expressed through the other theta functions and their derivatives by the following formulas:
|
{"url":"https://functions.wolfram.com/EllipticFunctions/EllipticThetaPrime4/introductions/JacobiThetas/04/ShowAll.html","timestamp":"2024-11-06T11:35:35Z","content_type":"text/html","content_length":"56961","record_id":"<urn:uuid:ffc01a90-a066-441b-883b-b61a7953dfe9>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00333.warc.gz"}
|
LeapSecond.com :: GREAT 2016a Project GREAT 2016a -- Hawking, Einstein, and Time Dilation on Mt Lemmon
Created 17-May-2016
In January of this year I performed another relativity clock experiment; a repeat of Project GREAT (General Relativity Einstein Anniversary Test). The original was named Project GREAT 2005 and I'm
naming this Project GREAT 2016a to celebrate the 100th anniversary of Einstein's General Theory of Relativity.
One of the bold and strange predictions of relativity is that time is not fixed but instead it slows down where gravity is stronger. The usual example given is a black hole. If the theory is true, it
also means clocks should run a tiny bit faster on top of a mountain (where gravity is slightly weaker) compared to clocks nearer sea level (where gravity is slightly stronger). This effect is called
time dilation.
The purpose of these student-friendly DIY clock experiments is to verify the prediction is true; to discover first-hand if time dilation is real. Can time really speed up or slow down; does gravity
really warp space and time?
Compared to a black hole, the effect is extremely small here on Earth and almost immeasurable – which is why we don't notice it on a human scale. But with extremely precise clocks and electronic
instruments that can measure down to a billionth of a second, it is possible to test the theory.
For additional photos and technical details see: GREAT 2016a - photos.
Here is a promotional photo for the new TV series:
What's different about this 2016 edition of the experiment is that it was filmed in conjunction with a new TV series called GENIUS by Stephen Hawking. I have not seen the show yet, but from what I
understand, parts of my experiment are shown in Episode 1: Can We Time Travel where the three volunteers (the "actors" in the show) see if they can detect time dilation.
Last year I was contacted by the UK producers of the Stephen Hawking Genius series to help create a visual demonstration of general relativity and time dilation using atomic clocks at high elevation.
I offered to repeat Project GREAT for their show. My experiment wasn't the focus nor was it filmed in its entirety, but they incorporated parts of it into their scenes.
We did the experiment in Tucson, Arizona because their film schedule required a January date. Up here near Seattle, all the roads on Mt Rainier are closed for the winter. But Tucson is dry enough and
far enough south that the road to the summit of Mt Lemmon is open in winter. In fact there are popular hiking and skiing facilities on the mountain so road access is good. Moreover, there is a
University of Arizona astronomy facility at the summit which provided not only picturesque scenes their TV show, but also accommodations for the crew, not to mention a safe place to stow the clocks.
On film the experiment lasted just one day but for me the timeline was:
• day −10 (Wednesday, January 6th) — begin clock and equipment testing
• day −2 (Thursday, January 14th) — leave Seattle area, clocks running
• day −1 (Friday, January 15th) — traveling through Oregon, California, Arizona
• day 0 (Saturday, January 16th) — synchronize and separate clocks
• day +1 (Sunday, January 17th) — reunite hotel and summit clocks and compare
• day +2 (Monday, January 18th) — clean up and head back home
A relativity cartoon (from npl.co.uk) illustrates gravitational time dilation:
Conceptually, the experiment is very simple. We take one accurate clock the top of the mountain and we take another accurate clock to a hotel at the base of the mountain and let them sit there for a
day. Then we bring the clocks together again and compare. If time dilation is false then the clocks should still agree. If time dilation is real then we would expect the clock that was at the hotel
to be a little behind the clock that was at the summit.
The only math you need to know – if a clock is h meters high, it will run faster by gh/c², where g is the acceleration of gravity, 9.8 m/s², and c is the speed of light, 299,792,458 m/s. Clocks
that run fast gain time and clocks that run slow lose time.
Now, this effect is really small. If a clock is raised 1 meter it will run faster by a factor of only 1.1×10^-16. That is, it will gain 0.000,000,000,000,000,1 second per second. It turns out no
clock is this precise and it's not possible to measure that small amount of time anyway.
So the trick is to raise the clock much higher than 1 meter and to measure much longer than 1 second. If a clock is at 1 km elevation it will run fast by 1.1×10^-13. Over a span of 24 hours this
accumulates to about 10 nanoseconds (billionths of a second). Still small, but easily measured using modern electronic instruments.
The main challenge is the clocks. Any conventional clock will gain or lose far more than this over a day and so you would be measuring the error in the clock, not the "error" in space-time. So only
the best clocks can be used, and they also have to be portable. For this experiment I used Hewlett-Packard (aka Agilent, Keysight, Symmetricom, Microsemi) model 5071A cesium atomic clocks.
Note that cesium (Cs) atomic clocks are not radioactive; instead they use ultra-precise electromagnetic properties of naturally occurring, safe and stable Cs133 atoms. You may be thinking instead of
Cs137 – which is entirely different – the dangerous and unstable radioactive isotope, often created by nuclear accidents.
I took a total of six clocks:
At a minimum two clocks are needed, but having multiple clocks and redundant power greatly reduces chances of failure. All six were synchronized and carried in my car for a couple of days. Then we
left 3 of them at the top of the mountain, storing them in the astronomer's dormitory, and kept the other 3 in the car as we drove back down to a hotel in Tucson. We waited a day, and then drove back
up and compared the clocks.
The elevation of the clocks at the hotel was 775 m (about 2540 feet) and the elevation of clocks at the summit was 2780 m (about 9120 feet). So the difference in elevation was 2.00 km (about 6580
feet). The length of time the clocks were separated was about 23 hours. The formula gh/c² predicts a time dilation of about 18 ns (nanoseconds).
Now, these clocks are not without error. Even atomic clocks are not perfect. But they are extremely well-designed, and they were all tested prior to the experiment, and found to have a timing error
of approximately ± 2 ns per day. Since we are comparing imperfect clock(s) at the base with imperfect clock(s) at the summit the experiment has some inherent error. The key is to design the
experiment so that the effect being measured (gravitational time dilation) is at least ten times greater than the inherent fluctuations in the clocks themselves.
In general, the higher you go, the longer you stay, the better the clocks, the more clocks you use, then the more certain the results will be. In this experiment the (preliminary) measured value of
21 ns is within expectations and close to the predicted value of 18 ns. So Hawking is genius, Einstein is happy, time dilation is real, PBS got quality footage, and a good time was had by all.
For additional photos and technical details see: GREAT 2016a - photos
Return to LeapSecond.com home page.
Comments/questions to tvb.
|
{"url":"http://www.leapsecond.com/great2016a/index.htm","timestamp":"2024-11-12T23:13:16Z","content_type":"text/html","content_length":"9465","record_id":"<urn:uuid:eb6a8ad6-4abf-462b-8592-cc2df6e8baa9>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00663.warc.gz"}
|
Basic Multiplication Practice Worksheets
Math, particularly multiplication, creates the keystone of countless academic disciplines and real-world applications. Yet, for many learners, mastering multiplication can posture a difficulty. To
resolve this obstacle, teachers and moms and dads have actually accepted a powerful device: Basic Multiplication Practice Worksheets.
Introduction to Basic Multiplication Practice Worksheets
Basic Multiplication Practice Worksheets
Basic Multiplication Practice Worksheets -
More practice means better memory Classroom Games Fun games for every class Practice with Games Because learning should be fun Free Multiplication Worksheets Download and printout our FREE worksheets
HOLIDAY WORKSHEETS Free Secret Word Puzzle Worksheets New Years Worksheets Martin Luther King Jr
Multiplication by 7s Some of the multiplication facts with 7 as a factor can be tricky Try these practice activities to help your students master these facts Multiplication by 8s These printable
learning activities feature 8 as a factor in basic multiplication Multiplication by 9s
Importance of Multiplication Method Comprehending multiplication is crucial, laying a solid structure for innovative mathematical principles. Basic Multiplication Practice Worksheets supply
structured and targeted method, fostering a deeper understanding of this essential math operation.
Evolution of Basic Multiplication Practice Worksheets
Simple Multiplication Worksheets Superstar Worksheets
Simple Multiplication Worksheets Superstar Worksheets
These multiplication worksheets are appropriate for 3rd Grade 4th Grade and 5th Grade Free dynamically created math multiplication worksheets for teachers students and parents Great resource for
lesson plans quizzes homework or just practice different multiplication topics
These multiplication facts worksheets provide various exercise to help students gain fluency in the multiplication facts up to 12 x 12 Jump to your topic Multiplication facts review times tables
Multiplication facts practice vertical Multiplication facts practice horizontal Focus numbers Circle drills Missing factor questions
From standard pen-and-paper workouts to digitized interactive layouts, Basic Multiplication Practice Worksheets have advanced, catering to varied understanding styles and preferences.
Sorts Of Basic Multiplication Practice Worksheets
Standard Multiplication Sheets Basic exercises concentrating on multiplication tables, helping students construct a solid math base.
Word Problem Worksheets
Real-life circumstances integrated into issues, improving vital thinking and application skills.
Timed Multiplication Drills Tests made to enhance speed and accuracy, helping in fast mental mathematics.
Benefits of Using Basic Multiplication Practice Worksheets
Multiplication 2 Digit By 2 Digit Thirty Worksheets Multiplication Free Printable Math
Multiplication 2 Digit By 2 Digit Thirty Worksheets Multiplication Free Printable Math
Math explained in easy language plus puzzles games quizzes videos and worksheets For K 12 kids teachers and parents Multiplication Worksheets Worksheets Multiplication Mixed Tables Worksheets
Worksheet Number Range Online Primer 1 to 4 Primer Plus 2 to 6 Up To Ten 2 to 10 Getting Tougher 2 to 12 Intermediate 3
This exclusive page has over 90 printable worksheets on multiplication models like writing multiplication sentences using equal groups area arrays and number lines Multiplication Models Worksheets
Multiplication using pictures The kids may use the pictures to solve the multiplication problems E g how many legs do 4 bugs have
Improved Mathematical Skills
Regular method develops multiplication proficiency, improving total math capacities.
Improved Problem-Solving Abilities
Word issues in worksheets establish logical thinking and technique application.
Self-Paced Learning Advantages
Worksheets accommodate specific discovering rates, fostering a comfortable and versatile understanding environment.
How to Produce Engaging Basic Multiplication Practice Worksheets
Incorporating Visuals and Shades Lively visuals and colors record focus, making worksheets aesthetically appealing and engaging.
Including Real-Life Circumstances
Associating multiplication to day-to-day scenarios includes relevance and practicality to exercises.
Customizing Worksheets to Different Ability Levels Tailoring worksheets based upon differing efficiency degrees makes certain inclusive knowing. Interactive and Online Multiplication Resources
Digital Multiplication Tools and Games Technology-based sources provide interactive learning experiences, making multiplication interesting and enjoyable. Interactive Internet Sites and Apps On the
internet platforms supply varied and available multiplication method, supplementing typical worksheets. Personalizing Worksheets for Numerous Understanding Styles Aesthetic Learners Visual aids and
layouts aid comprehension for learners inclined toward aesthetic learning. Auditory Learners Verbal multiplication issues or mnemonics deal with learners that understand concepts through acoustic
methods. Kinesthetic Students Hands-on activities and manipulatives support kinesthetic students in understanding multiplication. Tips for Effective Execution in Understanding Uniformity in Practice
Routine method reinforces multiplication abilities, advertising retention and fluency. Stabilizing Repeating and Range A mix of recurring workouts and diverse problem layouts preserves rate of
interest and understanding. Supplying Positive Comments Responses help in determining areas of enhancement, urging continued progress. Challenges in Multiplication Method and Solutions Motivation and
Interaction Hurdles Dull drills can cause uninterest; ingenious approaches can reignite motivation. Getting Rid Of Anxiety of Math Negative assumptions around mathematics can prevent development;
producing a positive discovering environment is vital. Effect of Basic Multiplication Practice Worksheets on Academic Efficiency Research Studies and Research Study Searchings For Study indicates a
favorable correlation between regular worksheet use and boosted math efficiency.
Basic Multiplication Practice Worksheets emerge as versatile devices, cultivating mathematical effectiveness in learners while suiting diverse knowing styles. From standard drills to interactive
on-line sources, these worksheets not just enhance multiplication skills however additionally promote crucial reasoning and analytic capabilities.
Free 1 Digit Multiplication Worksheet By 0s And 1s Free4Classrooms
Introductory Algebra Worksheets Free Printable Basic Math Worksheets Activity Shelter A
Check more of Basic Multiplication Practice Worksheets below
Worksheet Coloring Multiplication Worksheets Grass Fedjp Worksheet Study Site
Multiplication Quiz Printable
Multiplication Chart Printable Super Teacher PrintableMultiplication
Conventional Multiplication Times Table Practice Worksheets Printable Multiplication Worksheets
Challenging Multiplication Worksheets Free Printable
Multiplication Facts 0 2 Worksheets Times Tables Worksheets
Printable Multiplication Worksheets Super Teacher Worksheets
Multiplication by 7s Some of the multiplication facts with 7 as a factor can be tricky Try these practice activities to help your students master these facts Multiplication by 8s These printable
learning activities feature 8 as a factor in basic multiplication Multiplication by 9s
Multiplication Worksheets K5 Learning
Our multiplication worksheets start with the basic multiplication facts and progress to multiplying large numbers in columns We emphasize mental multiplication exercises to improve numeracy skills
Choose your grade topic Grade 2 multiplication worksheets Grade 3 multiplication worksheets Grade 4 mental multiplication worksheets
Multiplication by 7s Some of the multiplication facts with 7 as a factor can be tricky Try these practice activities to help your students master these facts Multiplication by 8s These printable
learning activities feature 8 as a factor in basic multiplication Multiplication by 9s
Our multiplication worksheets start with the basic multiplication facts and progress to multiplying large numbers in columns We emphasize mental multiplication exercises to improve numeracy skills
Choose your grade topic Grade 2 multiplication worksheets Grade 3 multiplication worksheets Grade 4 mental multiplication worksheets
Conventional Multiplication Times Table Practice Worksheets Printable Multiplication Worksheets
Multiplication Quiz Printable
Challenging Multiplication Worksheets Free Printable
Multiplication Facts 0 2 Worksheets Times Tables Worksheets
Multiplication Practice Worksheets Kids Learning Station
Multiplication Worksheets Grade 3 Fun Thekidsworksheet
Multiplication Worksheets Grade 3 Fun Thekidsworksheet
Printable Multiplication Sheet 5th Grade
FAQs (Frequently Asked Questions).
Are Basic Multiplication Practice Worksheets suitable for every age groups?
Yes, worksheets can be customized to different age and skill degrees, making them adaptable for different students.
Exactly how frequently should trainees exercise using Basic Multiplication Practice Worksheets?
Regular method is key. Normal sessions, ideally a few times a week, can yield significant improvement.
Can worksheets alone enhance math abilities?
Worksheets are an useful device yet should be supplemented with different discovering approaches for extensive ability development.
Are there on the internet systems supplying cost-free Basic Multiplication Practice Worksheets?
Yes, several educational web sites supply open door to a large range of Basic Multiplication Practice Worksheets.
How can moms and dads sustain their kids's multiplication technique in your home?
Motivating regular practice, giving support, and creating a favorable learning setting are valuable actions.
|
{"url":"https://crown-darts.com/en/basic-multiplication-practice-worksheets.html","timestamp":"2024-11-13T22:22:19Z","content_type":"text/html","content_length":"29036","record_id":"<urn:uuid:9b0810c8-e4a1-4b79-b8ce-403b66684661>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00622.warc.gz"}
|
Fractional Brownian motion in confining potentials: non-equilibrium distribution tails and optimal fluctuations
At long times, a fractional Brownian particle in a confining external potential reaches a non-equilibrium (non-Boltzmann) steady state. Here we consider scale-invariant power-law potentials V ( x ) ∼
| x | m , where m > 0, and employ the optimal fluctuation method (OFM) to determine the large- | x | tails of the steady-state probability distribution P ( x ) of the particle position. The
calculations involve finding the optimal (that is, the most likely) path of the particle, which determines these tails, via a minimization of the exact action functional for this system, which has
recently become available. Exploiting dynamical scale invariance of the model in conjunction with the OFM ansatz, we establish the large- | x | tails of ln P ( x ) up to a dimensionless factor α (
H , m ) , where 0 < H < 1 is the Hurst exponent. We determine α ( H , m ) analytically (i) in the limits of H → 0 and H → 1, and (ii) for m = 2 and arbitrary H, corresponding to the fractional
Ornstein-Uhlenbeck (fOU) process. Our results for the fOU process are in agreement with the previously known exact P ( x ) and autocovariance. The form of the tails of P ( x ) yields exact
conditions, in terms of H and m, for the particle confinement in the potential. For H ≠ 1 / 2 , the tails encode the non-equilibrium character of the steady state distribution, and we observe
violation of time reversibility of the system except for m = 2. To compute the optimal paths and the factor α ( H , m ) for arbitrary permissible H and m, one needs to solve an (in general nonlinear)
integro-differential equation. To this end we develop a specialized numerical iteration algorithm which accounts analytically for an intrinsic cusp singularity of the optimal paths for H < 1 / 2 .
Bibliographical note
Publisher Copyright:
© 2024 The Author(s). Published by IOP Publishing Ltd.
• fractional Brownian motion
• fractional Gaussian noise
• fractional Ornstein-Uhlenbeck process
• nonequilibrium steady state
• optimal fluctuation method
Dive into the research topics of 'Fractional Brownian motion in confining potentials: non-equilibrium distribution tails and optimal fluctuations'. Together they form a unique fingerprint.
|
{"url":"https://cris.huji.ac.il/en/publications/fractional-brownian-motion-in-confining-potentials-non-equilibriu","timestamp":"2024-11-02T11:11:23Z","content_type":"text/html","content_length":"52602","record_id":"<urn:uuid:0ff23266-7a84-4d2e-9d43-4eb72074324f>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00212.warc.gz"}
|
Everything You Need to Know About Truncation
Truncation is a process of reducing the length of a text or data set. It is commonly used in mathematics, social media, literature, and writing. Truncation can be used to elipsize, whitespace,
line-clamping, and CSS n. It can also be used to truncate SQL commands such as delete, foreign keys, temporary tables, and truncate table.
In addition, it can be used to delete additional code from emails and server configuration. Truncation is also used to solve issues with Microsoft Outlook Express and Gmail. It can be used to fix
SMTP service errors and HTML encoding issues. Truncation can also be used to delete statements with a WHERE clause in SQL Server.
Truncation is a verb that means to shorten or cut off something. It can also be an adjective that describes something that has been shortened or cut off. In mathematics, truncation is a statistical
measure of central trend that is used to calculate the probability distribution of a sample. It is also known as a trimmed mean or basic truncation.
Commands Used for Truncation
The most common commands used for truncation are LEFT(A1, 6) and TRUNC(). LEFT(A1, 6) will return the first six characters of the string A1. TRUNC() will truncate the decimal part of a number and
return an integer value.
Examples of Truncation
Truncation can be used in many different ways. For example, it can be used to truncate language by removing casualisms and lexicalized forms such as twaitter or shorten tangible and intangible words.
It can also be used to truncate carbohydrates in the hood or the offender. In addition, it can be used to truncate emails or recover truncated emails using email recovery tools. Truncation can also
be used in CSS with max-width, word-wrap, white-space, ellipsis, break-word, and nowrap properties. These properties will determine how text will be truncated when it exceeds the specified width. For
example, if the max-width property is set to 200px and the text exceeds this width, it will be truncated and an ellipsis (...) will be added at the end.
Truncation in Mathematics
In mathematics, truncation is a statistical measure of central trend that is used to calculate the probability distribution of a sample.
The most common type of truncation is the mean which is calculated by taking the sum of all values in a sample and dividing it by the number of values in the sample. Truncation can also be used to
calculate other statistical measures such as median and mode. The median is calculated by taking the middle value in a sample while the mode is calculated by finding the most frequently occurring
value in a sample.
Truncation in Trees
Truncation can also be used to prune trees or remove branches from trees. This process is known as treeing or treed. It involves cutting off branches with an electric saw or chainsaw and using karate
kicks to remove additional branches from the tree.
Truncation is a process of reducing the length of a text or data set that is commonly used in mathematics, social media, literature, and writing.
It can be used for elipsizing, whitespace, line-clamping, CSS n, SQL commands such as delete and foreign keys, emails and server configuration issues with Microsoft Outlook Express and Gmail, SMTP
service errors and HTML encoding issues. In addition, it can be used for statistical measures such as mean, median, mode, trees pruning (treeing), deleting additional code from emails and server
configuration issues with Microsoft Outlook Express and Gmail. Truncation is an important tool for reducing data sets and making them easier to understand. It can help make data more manageable and
easier to analyze.
Leave Reply
|
{"url":"https://www.truncations.net/everything-you-need-to-know-about-truncation","timestamp":"2024-11-05T19:41:39Z","content_type":"text/html","content_length":"105848","record_id":"<urn:uuid:1bf78f1d-e454-4dfc-b1fa-f37cd210c7bd>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00121.warc.gz"}
|
Recurrent Layers | Deeplearning4j
Recurrent Layers
Recurrent Neural Network (RNN) implementations in DL4J.
This document outlines the specifics training features and the practicalities of how to use them in DeepLearning4J. This document assumes some familiarity with recurrent neural networks and their use
- it is not an introduction to recurrent neural networks, and assumes some familiarity with their both their use and terminology.
The Basics: Data and Network Configuration
DL4J currently supports the following types of recurrent neural network
LSTM (Long Short-Term Memory)
Java documentation for each is available: SimpleRnn, LSTM.
Data for RNNs
Consider for the moment a standard feed-forward network (a multi-layer perceptron or 'DenseLayer' in DL4J). These networks expect input and output data that is two-dimensional: that is, data with
"shape" [numExamples,inputSize]. This means that the data into a feed-forward network has ‘numExamples’ rows/examples, where each row consists of ‘inputSize’ columns. A single example would have
shape [1,inputSize], though in practice we generally use multiple examples for computational and optimization efficiency. Similarly, output data for a standard feed-forward network is also two
dimensional, with shape [numExamples,outputSize].
Conversely, data for RNNs are time series. Thus, they have 3 dimensions: one additional dimension for time. Input data thus has shape [numExamples,inputSize,timeSeriesLength], and output data has
shape [numExamples,outputSize,timeSeriesLength]. This means that the data in our INDArray is laid out such that the value at position (i,j,k) is the jth value at the kth time step of the ith example
in the minibatch. This data layout is shown below.
When importing time series data using the class CSVSequenceRecordReader each line in the data files represents one time step with the earliest time series observation in the first row (or first row
after header if present) and the most recent observation in the last row of the csv. Each feature time series is a separate column of the of the csv file. For example if you have five features in
time series, each with 120 observations, and a training & test set of size 53 then there will be 106 input csv files(53 input, 53 labels). The 53 input csv files will each have five columns and 120
rows. The label csv files will have one column (the label) and one row.
RnnOutputLayer is a type of layer used as the final layer with many recurrent neural network systems (for both regression and classification tasks). RnnOutputLayer handles things like score
calculation, and error calculation (of prediction vs. actual) given a loss function etc. Functionally, it is very similar to the 'standard' OutputLayer class (which is used with feed-forward
networks); however it both outputs (and expects as labels/targets) 3d time series data sets.
Configuration for the RnnOutputLayer follows the same design other layers: for example, to set the third layer in a MultiLayerNetwork to a RnnOutputLayer for classification:
.layer(2, new RnnOutputLayer.Builder(LossFunction.MCXENT).activation(Activation.SOFTMAX)
Use of RnnOutputLayer in practice can be seen in the examples, linked at the end of this document.
RNN Training Features
Truncated Back Propagation Through Time
Training neural networks (including RNNs) can be quite computationally demanding. For recurrent neural networks, this is especially the case when we are dealing with long sequences - i.e., training
data with many time steps.
Truncated backpropagation through time (BPTT) was developed in order to reduce the computational complexity of each parameter update in a recurrent neural network. In summary, it allows us to train
networks faster (by performing more frequent parameter updates), for a given amount of computational power. It is recommended to use truncated BPTT when your input sequences are long (typically, more
than a few hundred time steps).
Consider what happens when training a recurrent neural network with a time series of length 12 time steps. Here, we need to do a forward pass of 12 steps, calculate the error (based on predicted vs.
actual), and do a backward pass of 12 time steps:
For 12 time steps, in the image above, this is not a problem. Consider, however, that instead the input time series was 10,000 or more time steps. In this case, standard backpropagation through time
would require 10,000 time steps for each of the forward and backward passes for each and every parameter update. This is of course very computationally demanding.
In practice, truncated BPTT splits the forward and backward passes into a set of smaller forward/backward pass operations. The specific length of these forward/backward pass segments is a parameter
set by the user. For example, if we use truncated BPTT of length 4 time steps, learning looks like the following:
Note that the overall complexity for truncated BPTT and standard BPTT are approximately the same - both do the same number of time step during forward/backward pass. Using this method however, we get
3 parameter updates instead of one for approximately the same amount of effort. However, the cost is not exactly the same there is a small amount of overhead per parameter update.
The downside of truncated BPTT is that the length of the dependencies learned in truncated BPTT can be shorter than in full BPTT. This is easy to see: consider the images above, with a TBPTT length
of 4. Suppose that at time step 10, the network needs to store some information from time step 0 in order to make an accurate prediction. In standard BPTT, this is ok: the gradients can flow
backwards all the way along the unrolled network, from time 10 to time 0. In truncated BPTT, this is problematic: the gradients from time step 10 simply don't flow back far enough to cause the
required parameter updates that would store the required information. This tradeoff is usually worth it, and (as long as the truncated BPTT lengths are set appropriately), truncated BPTT works well
in practice.
Using truncated BPTT in DL4J is quite simple: just add the following code to your network configuration (at the end, before the final .build() in your network configuration)
The above code snippet will cause any network training (i.e., calls to MultiLayerNetwork.fit() methods) to use truncated BPTT with segments of length 100 steps.
Some things of note:
By default (if a backprop type is not manually specified), DL4J will use BackpropType.Standard (i.e., full BPTT).
The tBPTTLength configuration parameter set the length of the truncated BPTT passes. Typically, this is somewhere on the order of 50 to 200 time steps, though depends on the application and data.
The truncated BPTT lengths is typically a fraction of the total time series length (i.e., 200 vs. sequence length 1000), but variable length time series in the same minibatch is OK when using
TBPTT (for example, a minibatch with two sequences - one of length 100 and another of length 1000 - with a TBPTT length of 200 - will work correctly)
Masking: One-to-Many, Many-to-One, and Sequence Classification
DL4J supports a number of related training features for RNNs, based on the idea of padding and masking. Padding and masking allows us to support training situations including one-to-many,
many-to-one, as also support variable length time series (in the same mini-batch).
Suppose we want to train a recurrent neural network with inputs or outputs that don't occur at every time step. Examples of this (for a single example) are shown in the image below. DL4J supports
training networks for all of these situations:
Without masking and padding, we are restricted to the many-to-many case (above, left): that is, (a) All examples are of the same length, and (b) Examples have both inputs and outputs at all time
The idea behind padding is simple. Consider two time series of lengths 50 and 100 time steps, in the same mini-batch. The training data is a rectangular array; thus, we pad (i.e., add zeros to) the
shorter time series (for both input and output), such that the input and output are both the same length (in this example: 100 time steps).
Of course, if this was all we did, it would cause problems during training. Thus, in addition to padding, we use a masking mechanism. The idea behind masking is simple: we have two additional arrays
that record whether an input or output is actually present for a given time step and example, or whether the input/output is just padding.
Recall that with RNNs, our minibatch data has 3 dimensions, with shape [miniBatchSize,inputSize,timeSeriesLength] and [miniBatchSize,outputSize,timeSeriesLength] for the input and output
respectively. The padding arrays are then 2 dimensional, with shape [miniBatchSize,timeSeriesLength] for both the input and output, with values of 0 ('absent') or 1 ('present') for each time series
and example. The masking arrays for the input and output are stored in separate arrays.
For a single example, the input and output masking arrays are shown below:
For the “Masking not required” cases, we could equivalently use a masking array of all 1s, which will give the same result as not having a mask array at all. Also note that it is possible to use
zero, one or two masking arrays when learning RNNs - for example, the many-to-one case could have a masking array for the output only.
In practice: these padding arrays are generally created during the data import stage (for example, by the SequenceRecordReaderDatasetIterator – discussed later), and are contained within the DataSet
object. If a DataSet contains masking arrays, the MultiLayerNetwork fit will automatically use them during training. If they are absent, no masking functionality is used.
Evaluation and Scoring with Masking
Mask arrays are also important when doing scoring and evaluation (i.e., when evaluating the accuracy of a RNN classifier). Consider for example the many-to-one case: there is only a single output for
each example, and any evaluation should take this into account.
Evaluation using the (output) mask arrays can be used during evaluation by passing it to the following method:
Evaluation.evalTimeSeries(INDArray labels, INDArray predicted, INDArray outputMask)
where labels are the actual output (3d time series), predicted is the network predictions (3d time series, same shape as labels), and outputMask is the 2d mask array for the output. Note that the
input mask array is not required for evaluation.
Score calculation will also make use of the mask arrays, via the MultiLayerNetwork.score(DataSet) method. Again, if the DataSet contains an output masking array, it will automatically be used when
calculating the score (loss function - mean squared error, negative log likelihood etc) for the network.
Masking and Sequence Classification After Training
Sequence classification is one common use of masking. The idea is that although we have a sequence (time series) as input, we only want to provide a single label for the entire sequence (rather than
one label at each time step in the sequence).
However, RNNs by design output sequences, of the same length of the input sequence. For sequence classification, masking allows us to train the network with this single label at the final time step -
we essentially tell the network that there isn't actually label data anywhere except for the last time step.
Now, suppose we've trained our network, and want to get the last time step for predictions, from the time series output array. How do we do that?
To get the last time step, there are two cases to be aware of. First, when we have a single example, we don't actually need to use the mask arrays: we can just get the last time step in the output
INDArray timeSeriesFeatures = ...;
INDArray timeSeriesOutput = myNetwork.output(timeSeriesFeatures);
int timeSeriesLength = timeSeriesOutput.size(2); //Size of time dimension
INDArray lastTimeStepProbabilities = timeSeriesOutput.get(NDArrayIndex.point(0), NDArrayIndex.all(), NDArrayIndex.point(timeSeriesLength-1));
Assuming classification (same process for regression, however) the last line above gives us probabilities at the last time step - i.e., the class probabilities for our sequence classification.
The slightly more complex case is when we have multiple examples in the one minibatch (features array), where the lengths of each example differ. (If all are the same length: we can use the same
process as above).
In this 'variable length' case, we need to get the last time step for each example separately. If we have the time series lengths for each example from our data pipeline, it becomes straightforward:
we just iterate over examples, replacing the timeSeriesLength in the above code with the length of that example.
If we don't have the lengths of the time series directly, we need to extract them from the mask array.
If we have a labels mask array (which is a one-hot vector, like [0,0,0,1,0] for each time series):
INDArray labelsMaskArray = ...;
INDArray lastTimeStepIndices = Nd4j.argMax(labelMaskArray,1);
Alternatively, if we have only the features mask: One quick and dirty approach is to use this:
INDArray featuresMaskArray = ...;
int longestTimeSeries = featuresMaskArray.size(1);
INDArray linspace = Nd4j.linspace(1,longestTimeSeries,longestTimeSeries);
INDArray temp = featuresMaskArray.mulColumnVector(linspace);
INDArray lastTimeStepIndices = Nd4j.argMax(temp,1);
To understand what is happening here, note that originally we have a features mask like [1,1,1,1,0], from which we want to get the last non-zero element. So we map [1,1,1,1,0] -> [1,2,3,4,0], and
then get the largest element (which is the last time step).
In either case, we can then do the following:
int numExamples = timeSeriesFeatures.size(0);
for( int i=0; i<numExamples; i++ ){
int thisTimeSeriesLastIndex = lastTimeStepIndices.getInt(i);
INDArray thisExampleProbabilities = timeSeriesOutput.get(NDArrayIndex.point(i), NDArrayIndex.all(), NDArrayIndex.point(thisTimeSeriesLastIndex));
Combining RNN Layers with Other Layer Types
RNN layers in DL4J can be combined with other layer types. For example, it is possible to combine DenseLayer and LSTM layers in the same network; or combine Convolutional (CNN) layers and LSTM layers
for video.
Of course, the DenseLayer and Convolutional layers do not handle time series data - they expect a different type of input. To deal with this, we need to use the layer preprocessor functionality: for
example, the CnnToRnnPreProcessor and FeedForwardToRnnPreprocessor classes. See here for all preprocessors. Fortunately, in most situations, the DL4J configuration system will automatically add these
preprocessors as required. However, the preprocessors can be added manually (overriding the automatic addition of preprocessors, for each layer).
For example, to manually add a preprocessor between layers 1 and 2, add the following to your network configuration: .inputPreProcessor(2, new RnnToFeedForwardPreProcessor()).
Inference: Predictions One Step at a Time
As with other types of neural networks, predictions can be generated for RNNs using the MultiLayerNetwork.output() and MultiLayerNetwork.feedForward() methods. These methods can be useful in many
circumstances; however, they have the limitation that we can only generate predictions for time series, starting from scratch each and every time.
Consider for example the case where we want to generate predictions in a real-time system, where these predictions are based on a very large amount of history. It this case, it is impractical to use
the output/feedForward methods, as they conduct the full forward pass over the entire data history, each time they are called. If we wish to make a prediction for a single time step, at every time
step, these methods can be both (a) very costly, and (b) wasteful, as they do the same calculations over and over.
For these situations, MultiLayerNetwork provides four methods of note:
rnnGetPreviousState(int layer)
rnnSetPreviousState(int layer, Map<String,INDArray> state)
The rnnTimeStep() method is designed to allow forward pass (predictions) to be conducted efficiently, one or more steps at a time. Unlike the output/feedForward methods, the rnnTimeStep method keeps
track of the internal state of the RNN layers when it is called. It is important to note that output for the rnnTimeStep and the output/feedForward methods should be identical (for each time step),
whether we make these predictions all at once (output/feedForward) or whether these predictions are generated one or more steps at a time (rnnTimeStep). Thus, the only difference should be the
computational cost.
In summary, the MultiLayerNetwork.rnnTimeStep() method does two things:
Generate output/predictions (forward pass), using the previous stored state (if any)
Update the stored state, storing the activations for the last time step (ready to be used next time rnnTimeStep is called)
For example, suppose we want to use a RNN to predict the weather, one hour in advance (based on the weather at say the previous 100 hours as input). If we were to use the output method, at each hour
we would need to feed in the full 100 hours of data to predict the weather for hour 101. Then to predict the weather for hour 102, we would need to feed in the full 100 (or 101) hours of data; and so
on for hours 103+.
Alternatively, we could use the rnnTimeStep method. Of course, if we want to use the full 100 hours of history before we make our first prediction, we still need to do the full forward pass:
For the first time we call rnnTimeStep, the only practical difference between the two approaches is that the activations/state of the last time step are stored - this is shown in orange. However, the
next time we use the rnnTimeStep method, this stored state will be used to make the next predictions:
There are a number of important differences here:
In the second image (second call of rnnTimeStep) the input data consists of a single time step, instead of the full history of data
The forward pass is thus a single time step (as compared to the hundreds – or more)
After the rnnTimeStep method returns, the internal state will automatically be updated. Thus, predictions for time 103 could be made in the same way as for time 102. And so on.
However, if you want to start making predictions for a new (entirely separate) time series: it is necessary (and important) to manually clear the stored state, using the
MultiLayerNetwork.rnnClearPreviousState() method. This will reset the internal state of all recurrent layers in the network.
If you need to store or set the internal state of the RNN for use in predictions, you can use the rnnGetPreviousState and rnnSetPreviousState methods, for each layer individually. This can be useful
for example during serialization (network saving/loading), as the internal network state from the rnnTimeStep method is not saved by default, and must be saved and loaded separately. Note that these
get/set state methods return and accept a map, keyed by the type of activation. For example, in the LSTM model, it is necessary to store both the output activations, and the memory cell state.
Some other points of note:
We can use the rnnTimeStep method for multiple independent examples/predictions simultaneously. In the weather example above, we might for example want to make predicts for multiple locations
using the same neural network. This works in the same way as training and the forward pass / output methods: multiple rows (dimension 0 in the input data) are used for multiple examples.
If no history/stored state is set (i.e., initially, or after a call to rnnClearPreviousState), a default initialization (zeros) is used. This is the same approach as during training.
The rnnTimeStep can be used for an arbitrary number of time steps simultaneously – not just one time step. However, it is important to note:
For a single time step prediction: the data is 2 dimensional, with shape [numExamples,nIn]; in this case, the output is also 2 dimensional, with shape [numExamples,nOut]
For multiple time step predictions: the data is 3 dimensional, with shape [numExamples,nIn,numTimeSteps]; the output will have shape [numExamples,nOut,numTimeSteps]. Again, the final time
step activations are stored as before.
It is not possible to change the number of examples between calls of rnnTimeStep (in other words, if the first use of rnnTimeStep is for say 3 examples, all subsequent calls must be with 3
examples). After resetting the internal state (using rnnClearPreviousState()), any number of examples can be used for the next call of rnnTimeStep.
The rnnTimeStep method makes no changes to the parameters; it is used after training the network has been completed only.
The rnnTimeStep method works with networks containing single and stacked/multiple RNN layers, as well as with networks that combine other layer types (such as Convolutional or Dense layers).
The RnnOutputLayer layer type does not have any internal state, as it does not have any recurrent connections.
Loading Time Series Data
Data import for RNNs is complicated by the fact that we have multiple different types of data we could want to use for RNNs: one-to-many, many-to-one, variable length time series, etc. This section
will describe the currently implemented data import mechanisms for DL4J.
The methods described here utilize the SequenceRecordReaderDataSetIterator class, in conjunction with the CSVSequenceRecordReader class from DataVec. This approach currently allows you to load
delimited (tab, comma, etc) data from files, where each time series is in a separate file. This method also supports:
Variable length time series input
One-to-many and many-to-one data loading (where input and labels are in different files)
Label conversion from an index to a one-hot representation for classification (i.e., '2' to [0,0,1,0])
Skipping a fixed/specified number of rows at the start of the data files (i.e., comment or header rows)
Note that in all cases, each line in the data files represents one time step.
(In addition to the examples below, you might find these unit tests to be of some use.)
Example 1: Time Series of Same Length, Input and Labels in Separate Files
Suppose we have 10 time series in our training data, represented by 20 files: 10 files for the input of each time series, and 10 files for the output/labels. For now, assume these 20 files all
contain the same number of time steps (i.e., same number of rows).
To use the SequenceRecordReaderDataSetIterator and CSVSequenceRecordReader approaches, we first create two CSVSequenceRecordReader objects, one for input and one for labels:
SequenceRecordReader featureReader = new CSVSequenceRecordReader(1, ",");
SequenceRecordReader labelReader = new CSVSequenceRecordReader(1, ",");
This particular constructor takes the number of lines to skip (1 row skipped here), and the delimiter (comma character used here).
Second, we need to initialize these two readers, by telling them where to get the data from. We do this with an InputSplit object. Suppose that our time series are numbered, with file names
"myInput_0.csv", "myInput_1.csv", ..., "myLabels_0.csv", etc. One approach is to use the NumberedFileInputSplit:
featureReader.initialize(new NumberedFileInputSplit("/path/to/data/myInput_%d.csv", 0, 9));
labelReader.initialize(new NumberedFileInputSplit(/path/to/data/myLabels_%d.csv", 0, 9));
In this particular approach, the "%d" is replaced by the corresponding number, and the numbers 0 to 9 (both inclusive) are used.
Finally, we can create our SequenceRecordReaderdataSetIterator:
DataSetIterator iter = new SequenceRecordReaderDataSetIterator(featureReader, labelReader, miniBatchSize, numPossibleLabels, regression);
This DataSetIterator can then be passed to MultiLayerNetwork.fit() to train the network.
The miniBatchSize argument specifies the number of examples (time series) in each minibatch. For example, with 10 files total, miniBatchSize of 5 would give us two data sets with 2 minibatches
(DataSet objects) with 5 time series in each.
Note that:
For classification problems: numPossibleLabels is the number of classes in your data set. Use regression = false.
Labels data: one value per line, as a class index
Label data will be converted to a one-hot representation automatically
For regression problems: numPossibleLabels is not used (set it to anything) and use regression = true.
The number of values in the input and labels can be anything (unlike classification: can have an arbitrary number of outputs)
No processing of the labels is done when regression = true
Example 2: Time Series of Same Length, Input and Labels in Same File
Following on from the last example, suppose that instead of a separate files for our input data and labels, we have both in the same file. However, each time series is still in a separate file.
As of DL4J 0.4-rc3.8, this approach has the restriction of a single column for the output (either a class index, or a single real-valued regression output)
In this case, we create and initialize a single reader. Again, we are skipping one header row, and specifying the format as comma delimited, and assuming our data files are named "myData_0.csv", ...,
SequenceRecordReader reader = new CSVSequenceRecordReader(1, ",");
reader.initialize(new NumberedFileInputSplit("/path/to/data/myData_%d.csv", 0, 9));
DataSetIterator iterClassification = new SequenceRecordReaderDataSetIterator(reader, miniBatchSize, numPossibleLabels, labelIndex, false);
miniBatchSize and numPossibleLabels are the same as the previous example. Here, labelIndex specifies which column the labels are in. For example, if the labels are in the fifth column, use labelIndex
= 4 (i.e., columns are indexed 0 to numColumns-1).
For regression on a single output value, we use:
DataSetIterator iterRegression = new SequenceRecordReaderDataSetIterator(reader, miniBatchSize, -1, labelIndex, true);
Again, the numPossibleLabels argument is not used for regression.
Example 3: Time Series of Different Lengths (Many-to-Many)
Following on from the previous two examples, suppose that for each example individually, the input and labels are of the same length, but these lengths differ between time series.
We can use the same approach (CSVSequenceRecordReader and SequenceRecordReaderDataSetIterator), though with a different constructor:
DataSetIterator variableLengthIter = new SequenceRecordReaderDataSetIterator(featureReader, labelReader, miniBatchSize, numPossibleLabels, regression, SequenceRecordReaderDataSetIterator.AlignmentMode.ALIGN_END);
The argument here are the same as in the previous example, with the exception of the AlignmentMode.ALIGN_END addition. This alignment mode input tells the SequenceRecordReaderDataSetIterator to
expect two things:
That the time series may be of different lengths
To align the input and labels - for each example individually - such that their last values occur at the same time step.
Note that if the features and labels are always of the same length (as is the assumption in example 3), then the two alignment modes (AlignmentMode.ALIGN_END and AlignmentMode.ALIGN_START) will give
identical outputs. The alignment mode option is explained in the next section.
Also note: that variable length time series always start at time zero in the data arrays: padding, if required, will be added after the time series has ended.
Unlike examples 1 and 2 above, the DataSet objects produced by the above variableLengthIter instance will also include input and masking arrays, as described earlier in this document.
Example 4: Many-to-One and One-to-Many Data
We can also use the AlignmentMode functionality in example 3 to implement a many-to-one RNN sequence classifier. Here, let us assume:
Input and labels are in separate delimited files
The labels files contain a single row (time step) (either a class index for classification, or one or more numbers for regression)
The input lengths may (optionally) differ between examples
In fact, the same approach as in example 3 can do this:
DataSetIterator variableLengthIter = new SequenceRecordReaderDataSetIterator(featureReader, labelReader, miniBatchSize, numPossibleLabels, regression, SequenceRecordReaderDataSetIterator.AlignmentMode.ALIGN_END);
Alignment modes are relatively straightforward. They specify whether to pad the start or the end of the shorter time series. The diagram below shows how this works, along with the masking arrays (as
discussed earlier in this document):
The one-to-many case (similar to the last case above, but with only one input) is done by using AlignmentMode.ALIGN_START.
Note that in the case of training data that contains time series of different lengths, the labels and inputs will be aligned for each example individually, and then the shorter time series will be
padded as required:
Available layers
LSTM recurrent neural network layer without peephole connections. Supports CuDNN acceleration - see cuDNN for details
Recurrent Neural Network Loss Layer. Handles calculation of gradients etc for various objective (loss) time distributed dense component here. Consequently, the output activations size is equal to the
input size. Input and output activations are same as other RNN layers: 3 dimensions with shape [miniBatchSize,nIn,timeSeriesLength] and [miniBatchSize,nOut,timeSeriesLength] respectively. Note that
RnnLossLayer also has the option to configure an activation function
public void setNIn(int nIn)
param lossFunction Loss function for the loss layer
and labels of shape [minibatch,nOut,sequenceLength]. It also supports mask arrays. Note that RnnOutputLayer can also be used for 1D CNN layers, which also have [minibatch,nOut,sequenceLength]
activations/labels shape.
public RnnOutputLayer build()
param lossFunction Loss function for the output layer
Bidirectional is a “wrapper” layer: it wraps any uni-directional RNN layer to make it bidirectional. Note that multiple different modes are supported - these specify how the activations should be
combined from the forward and separate copies of the wrapped RNN layer, each with separate parameters.
This Mode enumeration defines how the activations for the forward and backward networks should be combined. ADD: out = forward + backward (elementwise addition) MUL: out = forward backward
(elementwise multiplication) AVERAGE: out = 0.5 (forward + backward) CONCAT: Concatenate the activations. Where ‘forward’ is the activations for the forward RNN, and ‘backward’ is the activations for
the backward RNN. In all cases except CONCAT, the output activations size is the same size as the standard RNN that is being wrapped by this layer. In the CONCAT case, the output activations size
(dimension 1) is 2x larger than the standard RNN’s activations array.
public IUpdater getUpdaterByParam(String paramName)
Get the updater for the given parameter. Typically the same updater will be used for all updaters, but this is not necessarily the case
param paramName Parameter name
return IUpdater for the parameter
LastTimeStep is a “wrapper” layer: it wraps any RNN (or CNN1D) layer, and extracts out the last time step during forward pass, and returns it as a row vector (per example). That is, for 3d (time
series) input (with shape [minibatch, layerSize, timeSeriesLength]), we take the last time step and return it as a 2d array with shape [minibatch, layerSize]. Note that the last time step operation
takes into account any mask arrays, if present: thus, variable length time series (in the same minibatch) are handled as expected here.
activationFn( in_t inWeight + out_(t-1) recurrentWeights + bias)}.
Note that other architectures (LSTM, etc) are usually much more effective, especially for longer time series; however SimpleRnn is very fast to compute, and hence may be considered where the length
of the temporal dependencies in the dataset are only a few steps long.
|
{"url":"https://deeplearning4j.konduit.ai/1.0.0-m2/deeplearning4j/reference/recurrent-layers","timestamp":"2024-11-15T03:29:00Z","content_type":"text/html","content_length":"1050618","record_id":"<urn:uuid:3d63a5af-3c49-4c96-a2c3-fa4ab1f1418f>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00167.warc.gz"}
|
Using SC3 with batch corrected MNN values
Last seen 2.2 years ago
Hello, I want to use SC3 for data sets from multiple batches. I use fastMNN() function of Scran/Scater package for batch normalization but it does not effect logcounts, it creates a reduced dimension
"MNN" that shows the corrected data which also used in clustering step. How can I use SC3 with these values? Can I create a new SingleCellExperiment with MNN matrix and use SC3 on that matrix? MNN
matrix includes negative values so I know I should not use gene_filter parameter as TRUE.
Thank you in advance.
Entering edit mode
Thank you for answer. As you said, negative values effects results as expected. To try it, I used sc3estimatek function on both data set itself and reduced dimension (PCA with first 50 PCs in that
case), and estimated k was 27 for all data and 5 for reduced dimension. Probably it is not a good way to do it. Since the data sets from different batches are really common, what is optimal way to
use SC3 on these kind of data sets? Actually I looked for a method to correct all logcounts but could not find any method.
Entering edit mode
There are lots of batch correction methods at the moment. Not all of them correct the expression matrix though. But for those that don't you could use other clustering methods such as louvain
clustering on knn graph (default in scanpy package). Here we cover some of the batch correction methods: R - https://github.com/cellgeni/notebooks/blob/master/files/notebooks/
10X-batch-correction-harmony-mnn-cca-other.Rmd python - https://github.com/cellgeni/notebooks/blob/master/files/notebooks/10X-batch-correction-bbknn-scanorama.ipynb
Entering edit mode
Thank you for answer. Actually I am planning to use MNN correction. It is more suitable in my situation and further analyses I am planning. MNN can create a corrected expression matrix but it also
have negative values (due to cosine normalization I believe). I took the risk and used SC3 on this corrected matrix but I have NAs in clustering results.
Entering edit mode
I'll chip in here and mention that a batch correction method will only be able to preserve zeroes if it is aware that the data are derived from counts. This is not the case for the vast majority of
methods, which operate on transformed expression values where the count-based nature of the data are lost. And for good reason; the theory for count-based models is difficult. (See
batchelor::rescaleBatches() for a limited exception.) Indeed, there is no philosophical reason that log-expression values should be non-negative. The fact that they often are is simply a matter of
practical convenience to avoid loss of sparsity.
Now, I can't remember exactly what special stuff SC3 does, but if you just want to do no-frills k-means clustering, you can apply kmeans on the low-dimensional MNN corrected values. Any feature
selection should have been done before MNN correction anyway.
|
{"url":"https://support.bioconductor.org/p/119807/","timestamp":"2024-11-06T11:16:00Z","content_type":"text/html","content_length":"26109","record_id":"<urn:uuid:4dd3fa07-8e8f-4711-bf02-b28f5de00836>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00863.warc.gz"}
|
Day 30: Clustering
Today we’re going to look at the Clustering package, the documentation for which can be found here. As usual, the first step is loading the package.
We’ll use the RDatasets package to select the xclara data and rename the columns in the resulting data frame.
using RDatasets
xclara = dataset("cluster", "xclara");
names!(xclara, [symbol(i) for i in ["x", "y"]]);
Using Gadfly to generate a plot we can clearly see that there are three well defined clusters in the data.
Next we need to transform the data into an Array and then transpose it so that each point lies in a separate column (remember that this is key to calculating distances!).
xclara = convert(Array, xclara);
xclara = xclara';
Before we can run the clustering algorithm we need to identify seed points which act as the starting locations for clusters. There are a number of options for doing this. We’re simply going to choose
three points in the data at random. How did we arrive at three starting points (as opposed to, say, six)? Well, in this case it was simply visual inspection: there appear to be three clear clusters
in the data. When the data are more complicated (or have higher dimensionality) then choosing the number of clusters becomes a little more tricky.
initseeds(:rand, xclara, 3)
3-element Array{Int64,1}:
Now we’re ready to run the clustering algorithm. We’ll start with k-means clustering.
xclara_kmeans = kmeans(xclara, 3);
A quick plot will confirm that it has recognised the three clusters that we intuitively identified in the data.
We can have a look at the cluster centers, the number of points assigned to each cluster and (a subset of) the cluster assignments.
2x3 Array{Float64,2}:
9.47805 69.9242 40.6836
10.6861 -10.1196 59.7159
3-element Array{Int64,1}:
10-element Array{Int64,1}:
The k-means algorithm is limited to using the Euclidean metric to calculate the distance between points. An alternative, k-medoids clustering, is also supported in the Clustering package. The
kmedoids() function accepts a distance matrix (from an arbitrary metric) as it’s first argument, allowing for a far greater degree of flexibility.
The final algorithm implemented by Clustering is DBSCAN, which is a density based clustering algorithm. In addition to a distance matrix, dbscan() also requires neighbourhood radius and the minimum
number of points per cluster.
using Distances
dclara = pairwise(SqEuclidean(), xclara);
xclara_dbscan = dbscan(dclara, 10, 40);
As is apparent from the plot below, DBSCAN results in a dramatically different set of clusters. The loosely packed blue points on the periphery of each of the three clusters have been identified as
noise by the DBSCAN algorithm. Only the high density cores of these clusters are now separately identified.
That’s it for the moment about clusters. The full code for today can be found on GitHub. Tomorrow we’ll take a look at regression. In the meantime, take a few minutes to watch the video below about
using Julia’s clustering capabilities for climate classification.
|
{"url":"https://datawookie.dev/blog/2015/10/monthofjulia-day-30-clustering/","timestamp":"2024-11-04T14:21:50Z","content_type":"text/html","content_length":"18132","record_id":"<urn:uuid:a5f0368c-e71e-49e9-8abc-98c2b4ad2437>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00125.warc.gz"}
|
Binomial Distribution
The binomial distribution is a discrete probability distribution of obtaining exactly n successes out of N trials.
Binomial distribution is a college-level concept that would be first encountered in a probability and statistics course. It is an Advanced Placement Statistics topic and is listed in the California
State Standards for Probability and Statistics.
Binomial Coefficient The binomial coefficient is a notation and function giving the number of ways of picking k unordered outcomes from n possibilities, also known as a combination or combinatorial
: number.
Classroom Articles on Probability and Statistics (Up to College Level)
Arithmetic Mean Mode
Box-and-Whisker Plot Moment
Central Limit Theorem Normal Distribution
Chi-Squared Test Outlier
Conditional Probability Paired t-Test
Confidence Interval Poisson Distribution
Correlation Coefficient Probability
Covariance Problem
Erf Sample
Histogram Scatter Diagram
Hypothesis Standard Deviation
Independent Events Statistical Test
Law of Large Numbers Statistics
Least Squares Fitting Uniform Distribution
Mean Variance
Median z-Score
|
{"url":"https://mathworld.wolfram.com/classroom/BinomialDistribution.html","timestamp":"2024-11-05T21:49:28Z","content_type":"text/html","content_length":"49551","record_id":"<urn:uuid:510e8841-b9d5-427e-9567-f9b0748f0d7c>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00214.warc.gz"}
|
Arithmetic by Counters
Before electronic computers came into use, the word computer referred to a person who performed arithmetic manually. Though Arabic notation for numbers and pen-and-paper algorithms for computing with
them were known to European mathematicians in the later centuries of the SCA period, people with more mundane commercial occupations used Roman numerals and the abacus up until the seventeenth and
eighteenth enturies.
In this article, I'll give a description of arithmetic using counters on a board, which seems to have been the usual way of performing arithmetic in Europe throughout the SCA period. I've relied
heavily on Pullan (1968) for photographs of abacus-related items kept in European museums. Background information on the history of mathematics is based on Cajori (1991) and Yeldham (1926).
The Flat-Board Abacus
The type of abacus used in mediaeval Europe was not the bead-frame abacus still used in some Eastern countries today. Europeans used a flat surface divided into a number of horizontal or vertical
lines, upon which counters were placed. The abaci that I have made use vertical divisions, so for convenience I will write as if the abacus is made up of columns. A horizontal abacus works the same
way, everything is just turned about ninety degrees.
The layout of the abacus is familiar to anyone who knows Arabic notation. The right-most column represents the number of ones; the second right-most column represents the number of tens; the third
the number of hundreds, etc. In fact, the tenth-century monk Gerbert (later Pope Sylvester II) used counters marked with Arabic digits, and placed a counter bearing the appropriate digit in each
column to represent a number. It was more usual, however, to have a large supply of identical counters, and a number of counters equal to the number of ones, tens, etc., was placed in the appropriate
To make the abacus easier to read, and to reduce the number of counters required, a counter placed between two columns was used to represent five counters in the column to its right. For example,
eight is represented by three counters in the right-most column and one counter between the first and second columns. Using this technique, we can relate the abacus to Roman notation: the numerals I,
X, C and M represent counters placed on the columns, while the numerals V, L and D represent counters placed between the columns. Some abaci, in fact, have the columns labelled with Roman numerals to
make them easier to read; others have every third column marked by a cross (x) in the same way that we use commas or spaces in writing large numbers with Arabic notation.
Figure 1. 1058 (MLVIII) on an abacus.
If you understand Arabic notation, and have grasped the layout of the abacus, you will see how integer arithmetic can be done on the abacus. A genuine period exposition in English can be found in the
Second Dialogue of Robert Recorde's The Grounde of Artes (1542). Recorde's algorithms for the basic arithmetic operations are more or less the same as the pen-and-paper algorithms I was taught in
primary school. I'll describe the multiplication algorithm here as an example of how the abacus was used.
The numbers to be multiplied are the multiplier and the multiplicand; of course it doesn't matter to the result (the product) which is which, but they play different roles in the algorithm so we have
to give them names.
We start by setting the multiplier at the bottom of the abacus, and setting out the multiplicand above it. A line is drawn across the abacus to separate them, and another line to separate the product
from the multiplier, though leaving a space will do if you don't want to draw on your abacus.
We work on one column of the multiplicand at a time, say, the nth column. Recorde works from top to bottom on a horizontal abacus, which is equivalent to working left to right on a vertical abacus,
but working right to left works equally well.
For each counter in the nth column of the multiplicand, set down a copy of the multiplier at the top of the abacus as if the nth column was the first column, i.e. shift the copy n-1 columns to the
left. You can keep count by removing the counter from the multiplicand that produced the copy.
If there is a counter in the space to the left of the column, you could copy the multiplier into the nth column five times. A more efficient method, however, is to copy half the multiplier into the (
n+1)th column, rounding down if the multiplier is odd. If the multiplier is odd, also put a counter in between the nth and (n+1)th column (this counter is equivalent to half a counter in the (n+1)th
If, like me, you were made to invest much of your childhood schooling in memorising multiplication tables, you can speed up the computation by replacing the repeated addition I've just described with
a single multiplication. Recorde uses multiplication tables in his pen algorithms, but the average abacus-user was an illiterate who never went to primary school, and would not have known the tables.
Having set a large number of counters onto your abacus, you will probably want to clean them up by adding them all into a single number. To add, bring all of the counters in each column together. For
every five counters in a column, replace them by one counter between that column and the next, and for every two counters between columns, replace them with one counter in the column to the left. Of
course you can keep putting down more and more counters for so long as you have counters and space, if you like, but you will have to add them all together some time.
FIgure 2. Multiplying 153 (multiplicand) by 23 (multiplier), after the first two columns have been multiplied. The partial products are 23 x 100 and 23 x 50 = 11.5 x 100.
At the end, after all of the columns in the multiplicand have been multiplied in this way, and all of the results have been summed, the product will be left in the space above the multiplicand. It is
straightforward to prove that the algorithm is correct using the Distributive Law.
Arithmetic involving non-integers is cumbersome using the period techniques due to the way such numbers were represented and I've never seen a good description of how it was done. The fractional part
of the number was represented by a sum of fractions in inconvenient denominations such as 1/2 and 1/3 that went by their Latin names rather than symbols. Each fraction had its own column in the
abacus, either to the right of the integer part, or in another row of columns below it.
The techniques used by modern computers for dealing with non-integers can be adapted to the abacus but this article is about mediaeval computing and we won't talk about these methods here.
Constructing an Abacus
A flat-board abacus is simply a set of parallel lines on a flat surface, and mathematics historians give descriptions of many ways of achieving this based on their interpretation of contemporary
accounts. In a pinch, you can simply draw lines in the dirt and use pebbles for counters.
According to Barnard (1916) (as quoted by Pullan),
the extreme rarity of of specimens of the counting board is remarkable, considering its general vogue in Western Europe during the six centuries from 1200 to the French Revolution, as is the
survival of any examples of its more perishable substitute the reckoning cloth … The [three] tables at Basle, the doubtful one at Nuremberg and the [five] reckoning cloths at Munich, are all that
I have been able to find after considerable search and enquiry…
Pullan, however, seems to have discovered another one in the Strasbourg Municipal Museum, apparently from the end of the sixteenth century, of which he has a photograph. It is a wooden table with a
lip around the edge (presumably to stop unused counters from falling off). Two abaci are painted on top of it, one on each side of the table, as shown in Figure 3.
Figure 3. The layout of the reckoning table at Strasbourg. The abaci can be used vertically for working with currency, or horizontally for working with plain numbers.
A cheaper and more portable alternative is to use a cloth. I made one for a collegium by taking a rectangular piece of fabric and using fabric paint to draw an abacus pattern on it. The pattern I
used is a slightly simplified version of a diagram in Yeldham's book showing a twenty-seven column abacus with the columns grouped in threes by semi-circles at the top. Yeldham says this abacus was
used in England in 1111; the pattern is very similar to the one reported in Cajori's book as being described by Bernelinus, a pupil of Gerbert. My abacus is shown in Figure 4. In hindsight, I think I
should have used only nine or twelve columns because the current columns are too narrow, and I have no use for twenty-seven-digit numbers. I can only speculate what use Gerbert had for them.
Figure 4. My reckoning cloth.
For counters, there are many references to the use of pebbles for computation in classical literature; Pullan gives a summary of these in an appendix. In later times, counters were minted from metal
in the same way as coins, and Pullan devotes a whole chapter to these (he calls them "jettons") and has many photographs of extant examples. Modern coins seem a good substitute for those of us who
don't own a mint.
A classmate of mine once complained of a complex analysis technique in electrical engineering, "What do we need to learn this for? The computer can do it." Of course there's no future in that line of
argument if you design computer systems. Studying methods of computation other than the ones I learnt mechanically at school has improved my feeling for how computation works, and even the most
mundane computations are now accorded their due respect.
F. P. Barnard, The Casting-Counter and the Counting-Board, 1916.
F. Cajori, A History of Mathematics, 5th Ed., 1991.
J. M. Pullan, The History of the Abacus, 1968.
R. Recorde, The Grounde of Artes, 1542.
F. A. Yeldham, The Story of Reckoning in the Middle Ages, 1926.
1 June 2024 - I've posted a video demonstrating the four basic arithmetic operations to Vimeo.
Originally published in
Cockatrice #16 (November 2002)
|
{"url":"https://www.nps.id.au/sca/arithmetic-by-counters/","timestamp":"2024-11-03T10:44:36Z","content_type":"text/html","content_length":"19443","record_id":"<urn:uuid:e7dc96af-c36b-41f6-a658-45a895acb8be>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00207.warc.gz"}
|
Stochastic Calculus by Alan Bain
Stochastic Calculus
by Alan Bain
Number of pages: 99
These notes provide a very informal introduction to Stochastic Calculus, and especially to the Ito integral and some of its applications. The text concentrates on the parts of the course which the
author found hard, there is often little or no comment on more standard matters.
Download or read it online for free here:
Download link
(510KB, PDF)
Similar books
Synchronization and Linearity: An Algebra for Discrete Event Systems
F. Baccelli, G. Cohen, G. J. Olsder, J. Quadrat
John Wiley & SonsPresents new modelling and analysis techniques for the description of discrete event dynamic systems. Created within the text is a calculus which allows the derivation of analytical
tools for computing the time behavior of this type of system.
Lectures on Singular Stochastic PDEs
M. Gubinelli, N. Perkowski
arXivThe aim is to introduce the basic problems of non-linear PDEs with stochastic and irregular terms. We explain how it is possible to handle them using two main techniques: the notion of energy
solutions and that of paracontrolled distributions.
Lectures on Stochastic Differential Equations and Malliavin Calculus
S. Watanabe
Tata Institute of Fundamental ResearchThe author's main purpose in these lectures was to study solutions of stochastic differential equations as Wiener functionals and apply to them some infinite
dimensional functional analysis. This idea was due to P. Malliavin.
Lectures on Stochastic Processes
K. Ito
Tata Institute of Fundamental ResearchThe book discusses the elementary parts of Stochastic Processes from the view point of Markov Processes. Topics: Markov Processes; Srong Markov Processes;
Multi-dimensional Brownian Motion; Additive Processes; Stochastic Differential Equations; etc.
|
{"url":"http://e-booksdirectory.com/details.php?ebook=2181","timestamp":"2024-11-11T05:08:29Z","content_type":"text/html","content_length":"11169","record_id":"<urn:uuid:7e9926aa-6f90-41da-b63b-542ceba99440>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00798.warc.gz"}
|
x: The length of a rectangle is increased by 20%. The width is ... | Filo
x: The length of a rectangle is increased by . The width is decreased by . Which of the following accurately describes the change in the area of the rectangle?
Not the question you're searching for?
+ Ask your question
Originally, . Now, The area has decreased by . Answer (C). Most students think the answer is (D). It's not.
Was this solution helpful?
Found 5 tutors discussing this question
Discuss this question LIVE for FREE
10 mins ago
One destination to cover all your homework and assignment needs
Learn Practice Revision Succeed
Instant 1:1 help, 24x7
60, 000+ Expert tutors
Textbook solutions
Big idea maths, McGraw-Hill Education etc
Essay review
Get expert feedback on your essay
Schedule classes
High dosage tutoring from Dedicated 3 experts
Practice questions from Ratio and Proportion in the same exam
Practice more questions from Ratio and Proportion
View more
Practice questions on similar concepts asked by Filo students
View more
Stuck on the question or explanation?
Connect with our Mathematics tutors online and get step by step solution of this question.
231 students are taking LIVE classes
Question Text x: The length of a rectangle is increased by . The width is decreased by . Which of the following accurately describes the change in the area of the rectangle?
Topic Ratio and Proportion
Subject Mathematics
Class Grade 12
Answer Type Text solution:1
Upvotes 79
|
{"url":"https://askfilo.com/mathematics-question-answers/x-the-length-of-a-rectangle-is-increased-by-20-the-width-is-decreased-by-20","timestamp":"2024-11-10T04:44:04Z","content_type":"text/html","content_length":"202306","record_id":"<urn:uuid:e8163922-e7a0-40a8-ba93-a6818cb16ea1>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00245.warc.gz"}
|
Comment to Stephen Wolfram's The Concept of the Observer.
Observer Theory
There seems to be a strange filtering of comments on Stephen Wolfram's blog, so publishing the comment made several months ago here:
So the observer function is an abstracting compressor of computationally irreducible complexity into an adaptability model, in a coherent form of subjective experience.
In essence, you are making a case for the limitation of the classical computational way of understanding in order to grasp reality, which is hindered due to physical limitations of the observer,
either biological or artificial.
There are many correlations in understanding the observer as a geometrically restricted consciousness experience, and there's a reverse approach to understanding with Donald Hoffman's conscious agent
theory development using Markov blankets.
You have mentioned the question of finding meaning in different forms: physical (objects), mathematical (formulas), conceptual (ideas), linked to different paradigms of understanding. These meanings
are likely developed from various sensory inputs, such as cognitive from sight with written language/symbols, mathematics and spoken language from hearing, and object forms from touch, etc.
So, we need to incorporate different paradigms of understanding and trigger the next scientific revolution in Thomas Kuhn's terms, reassessing assumptions related to time, self, independence, and
environment. This is crucial to build a coherent consensus reality that includes all subjective experiences.
In terms of time, it seems that the model of time as a linear flow with past/present/future, suitable for adaptability, is better viewed as a causality wave in classical space. This perspective is
unlikely to exist in the quantum realm.
On the cost of observability, it is worthwhile to consider the investigation of correlation with probabilities of events. What if the observer's conscious decision cost to choose a specific branch is
equivalent to the probability? Thus, choosing less probable events in creating reality requires more willpower. Further research on non-classical computational models and investigating the link
between consciousness and probabilities could be beneficial.
In summary, it is a good approach to build a more complete understanding of reality by incorporating a better understanding of the observer and its effect on observation. Or in other terms -
consciousness and individual consciousness experiences.
|
{"url":"https://blog.anatolykern.com/comment-on/","timestamp":"2024-11-13T11:19:59Z","content_type":"text/html","content_length":"25363","record_id":"<urn:uuid:047842ba-e764-4c24-892f-2cea80521feb>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00868.warc.gz"}
|
Quantum chemistry, also called molecular quantum mechanics, is a branch of physical chemistry focused on the application of quantum mechanics to chemical systems, particularly towards the
quantum-mechanical calculation of electronic contributions to physical and chemical properties of molecules, materials, and solutions at the atomic level.^[1] These calculations include
systematically applied approximations intended to make calculations computationally feasible while still capturing as much information about important contributions to the computed wave functions as
well as to observable properties such as structures, spectra, and thermodynamic properties. Quantum chemistry is also concerned with the computation of quantum effects on molecular dynamics and
chemical kinetics.
Chemists rely heavily on spectroscopy through which information regarding the quantization of energy on a molecular scale can be obtained. Common methods are infra-red (IR) spectroscopy, nuclear
magnetic resonance (NMR) spectroscopy, and scanning probe microscopy. Quantum chemistry may be applied to the prediction and verification of spectroscopic data as well as other experimental data.
Many quantum chemistry studies are focused on the electronic ground state and excited states of individual atoms and molecules as well as the study of reaction pathways and transition states that
occur during chemical reactions. Spectroscopic properties may also be predicted. Typically, such studies assume the electronic wave function is adiabatically parameterized by the nuclear positions
(i.e., the Born–Oppenheimer approximation). A wide variety of approaches are used, including semi-empirical methods, density functional theory, Hartree–Fock calculations, quantum Monte Carlo methods,
and coupled cluster methods.
Understanding electronic structure and molecular dynamics through the development of computational solutions to the Schrödinger equation is a central goal of quantum chemistry. Progress in the field
depends on overcoming several challenges, including the need to increase the accuracy of the results for small molecular systems, and to also increase the size of large molecules that can be
realistically subjected to computation, which is limited by scaling considerations — the computation time increases as a power of the number of atoms.
Some view the birth of quantum chemistry as starting with the discovery of the Schrödinger equation and its application to the hydrogen atom. However, a 1927 article of Walter Heitler (1904–1981) and
Fritz London is often recognized as the first milestone in the history of quantum chemistry.^[2] This was the first application of quantum mechanics to the diatomic hydrogen molecule, and thus to the
phenomenon of the chemical bond.^[3] However, prior to this a critical conceptual framework was provided by Gilbert N. Lewis in his 1916 paper The Atom and the Molecule,^[4] wherein Lewis developed
the first working model of valence electrons. Important contributions were also made by Yoshikatsu Sugiura^[5]^[6] and S.C. Wang.^[7] A series of articles by Linus Pauling, written throughout the
1930s, integrated the work of Heitler, London, Sugiura, Wang, Lewis, and John C. Slater on the concept of valence and its quantum-mechanical basis into a new theoretical framework.^[8] Many chemists
were introduced to the field of quantum chemistry by Pauling's 1939 text The Nature of the Chemical Bond and the Structure of Molecules and Crystals: An Introduction to Modern Structural Chemistry,
wherein he summarized this work (referred to widely now as valence bond theory) and explained quantum mechanics in a way which could be followed by chemists.^[9] The text soon became a standard text
at many universities. ^[10] In 1937, Hans Hellmann appears to have been the first to publish a book on quantum chemistry, in the Russian ^[11] and German languages.^[12]
In the years to follow, this theoretical basis slowly began to be applied to chemical structure, reactivity, and bonding. In addition to the investigators mentioned above, important progress and
critical contributions were made in the early years of this field by Irving Langmuir, Robert S. Mulliken, Max Born, J. Robert Oppenheimer, Hans Hellmann, Maria Goeppert Mayer, Erich Hückel, Douglas
Hartree, John Lennard-Jones, and Vladimir Fock.
Electronic structure
The electronic structure of an atom or molecule is the quantum state of its electrons.^[13] The first step in solving a quantum chemical problem is usually solving the Schrödinger equation (or Dirac
equation in relativistic quantum chemistry) with the electronic molecular Hamiltonian, usually making use of the Born–Oppenheimer (B–O) approximation. This is called determining the electronic
structure of the molecule.^[14] An exact solution for the non-relativistic Schrödinger equation can only be obtained for the hydrogen atom (though exact solutions for the bound state energies of the
hydrogen molecular ion within the B-O approximation have been identified in terms of the generalized Lambert W function). Since all other atomic and molecular systems involve the motions of three or
more "particles", their Schrödinger equations cannot be solved analytically and so approximate and/or computational solutions must be sought. The process of seeking computational solutions to these
problems is part of the field known as computational chemistry.
Valence bond theory
As mentioned above, Heitler and London's method was extended by Slater and Pauling to become the valence-bond (VB) method. In this method, attention is primarily devoted to the pairwise interactions
between atoms, and this method therefore correlates closely with classical chemists' drawings of bonds. It focuses on how the atomic orbitals of an atom combine to give individual chemical bonds when
a molecule is formed, incorporating the two key concepts of orbital hybridization and resonance.^[15]
Molecular orbital theory
An anti-bonding molecular orbital of Butadiene
An alternative approach to valence bond theory was developed in 1929 by Friedrich Hund and Robert S. Mulliken, in which electrons are described by mathematical functions delocalized over an entire
molecule. The Hund–Mulliken approach or molecular orbital (MO) method is less intuitive to chemists, but has turned out capable of predicting spectroscopic properties better than the VB method. This
approach is the conceptual basis of the Hartree–Fock method and further post-Hartree–Fock methods.
Density functional theory
The Thomas–Fermi model was developed independently by Thomas and Fermi in 1927. This was the first attempt to describe many-electron systems on the basis of electronic density instead of wave
functions, although it was not very successful in the treatment of entire molecules. The method did provide the basis for what is now known as density functional theory (DFT). Modern day DFT uses the
Kohn–Sham method, where the density functional is split into four terms; the Kohn–Sham kinetic energy, an external potential, exchange and correlation energies. A large part of the focus on
developing DFT is on improving the exchange and correlation terms. Though this method is less developed than post Hartree–Fock methods, its significantly lower computational requirements (scaling
typically no worse than n^3 with respect to n basis functions, for the pure functionals) allow it to tackle larger polyatomic molecules and even macromolecules. This computational affordability and
often comparable accuracy to MP2 and CCSD(T) (post-Hartree–Fock methods) has made it one of the most popular methods in computational chemistry.
Chemical dynamics
See also
• Cook, David Branston (1998). Handbook of computational quantum chemistry. Oxford University Press. ISBN 9780198501145. OCLC 468919475.
External links
• The Sherrill Group – Notes
• ChemViz Curriculum Support Resources
• Early ideas in the history of quantum chemistry
|
{"url":"https://www.knowpia.com/knowpedia/Quantum_chemistry","timestamp":"2024-11-08T06:04:52Z","content_type":"text/html","content_length":"137920","record_id":"<urn:uuid:786898cc-c03d-4026-8241-b079c8195601>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00059.warc.gz"}
|
How to do Addition in Excel | Addition using AUTOSUM
Have you ever wondered how to do addition in excel? Microsoft Excel serves to be a perfect tool for any type of calculation. This tutorial takes a quick look to perform additions with step-by-step
Whether you're figuring out the total cost of an item or adding up the monthly sales at your store, Excel can help you do it quickly and easily. Just follow along as we walk you through some simple
additional tasks in Excel.
Table of Contents:
1. Simple addition using formula
Suppose you want to add the marks a student scored in an exam, as shown in the example below.
You can easily add the numbers from B2 to B6 using the formula in B7 Cell.
As seen in the example, you can see the same formula typed into another cell(B7).
Enter all the cell numbers you wish to add, then press enter, and the sum of the numbers is displayed as shown below.
Similarly, you use the formula to add any number of cells and type in the numbers directly to find the total.
2. Addition using the SUM function
The previous method works well when there are fewer numbers/cells to add. Having a large set, however, would make it difficult to do it manually.
Let us see how to do addition in Excel using the SUM function.
The SUM() function is a more efficient way to total up the values of multiple cells.
The function can add up individual cells simply by naming the first and last cell in a range of cells you wish to total up.
The main advantage of the SUM function is that it can be used to write simple formulae that add up hundreds or thousands of cells at once.
Syntax: SUM(Range)
Let us consider our previous example to see how the SUM() function works.
We type in the formula,
Here B2:B6 indicates the range.
And you can see that the result is as same as the previous result.
The SUM() function can add thousands of cells at a time using the same formula.Say, SUM(B2:B5000)
Alternately, you can add a range of numbers in different cells.
By now, you must have realized how powerful the SUM function is, to sum up, a large number of cells.
3. Addition using AUTOSUM
If you want to know how to do addition in Excel when you don't remember the formula, there is a method for that.
By using AUTOSUM, you can add numbers directly without having to use a formula. It is located on the Home tab in the Editing group.
To use AUTOSUM, select all the numbers to be added as shown below.
Click on Autosum, the selected numbers get added automatically, and the sum will be displayed as shown.
Note: Instead of clicking on AUTOSUM, you can also select the cell just below the column of numbers you want to add, then press Alt + = to automatically place the SUM formula in that cell.
Alternately, you can also select the numbers and find the sum at the bottom right corner of the spreadsheet as shown.
That’s it! Hopefully, you found this post helpful. If you want to know more about Microsoft Excel, check out our tutorials.
|
{"url":"https://www.basictutorials.in/how-to-do-addition-in-excel.php","timestamp":"2024-11-09T10:00:07Z","content_type":"text/html","content_length":"25734","record_id":"<urn:uuid:303d5bdd-941e-4470-84a4-3d2fb297f5e6>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00515.warc.gz"}
|
Influence of fineness level and applied agglomeration pressure of peppermint herb (
Issue BIO Web Conf.
Volume 10, 2018
Contemporary Research Trends in Agricultural Engineering
Article Number 02028
Number of page(s) 5
Section Engineering and Technology
DOI https://doi.org/10.1051/bioconf/20181002028
Published online 26 March 2018
BIO Web of Conferences
, 02028 (2018)
Influence of fineness level and applied agglomeration pressure of peppermint herb (Mentha piperita L.) on the mechanical properties of the obtained product
University of Agriculture in Krakow, Faculty of Production and Power Engineering, Balicka 116 B, 30-149 Krakow, Poland
^* Corresponding author: urszula.sadowska@ur.krakow.pl
The objective of the conducted study was to evaluate the impact of the pressure agglomeration process of peppermint herb on the mechanical properties of the obtained product. The separated fractions
of peppermint with 0.5-2.5 and 2.5-5 mm particles were compacted using a hydraulic press Fritz Heckert EU 20, with pressure 50, 100, 150 and 200 MPa. A closed matrix with the compression chamber
diameter of 15.6 mm was used. Every time, a 2-g herb sample (corresponding to the weight of tea used for the production of tea bags) was poured into the matrix. Thus, compacted herb in the form of a
straight cylinder was obtained. When producing the agglomerate compaction work was determined. Strength tests of the obtained agglomerate were conducted using the MTS Insight 2 testing machine. The
density of the produced agglomerate, its compaction level and strength in the Brazilian test was calculated.
The obtained results indicate that the values of the tested parameters increase with the increase of pressure in the tested range, yet differences occur between the tested herb fractions. Typically,
the agglomerate produced from 0.5-2.5 mm fraction is characterized by a greater density, and the higher level of agglomerate compaction is obtained using 2.5-5 mm herb fraction. The highest strength
determined using Brazilian test was determined for agglomerate produced from 0.5-5 mm peppermint herb fraction at 200 MPa pressure and 0.5-2.5 mm fraction using 150 and 200 MPa pressure.
© The Authors, published by EDP Sciences, 2018
This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the
original work is properly cited. (http://creativecommons.org/licenses/by/4.0/).
1 Introduction
For centuries, humans have used herbaceous plants for different aims, focusing primarily on their medicinal properties. Taste or aroma of herbal teas often constitute one of our first childhood
One of the most commonly used herbaceous plant species distributed around the world for the preparation of herbal teas peppermint [1, 2, 3, 4]. It typically is sold in loose form(leaf), or portioned,
in bags. The present study proposes new form of peppermint tea, where the herb is subjected to the pressure agglomeration process. However, only few medicinal substances possess properties enabling
their direct compaction. Yaman et al. [5] believe that the compaction pressure and other parameters of the process shall be selected depending on the compacted material. However, selection of the
suitable pressure is not an easy task, as its values change depending on the properties of the processed material [6]. The compaction pressure is one of the most important factors determining
obtaining of a product (pellet, blocks) with the desired quality [7]. Koutný et al. [8] state that the main parameters indicating the quality of products obtained through pressure agglomeration, is,
above all, the density, as well as their mechanical strength.
The objective of the conducted study was to evaluate the impact of the pressure agglomeration process of peppermint herb on the mechanical properties of the obtained product.
2 Material and Methods
2.1 Plant material
The study material was dried peppermint herb (Mentha piperita L.) with the moisture of 11%, which is the upper allowable norm for the species in pharmaceutic terms [9]. From the herb, two fractions
of peppermint herb (0.5-2.5 mm and 2.5-5 mm) were separated, using LPzE-2e laboratory shaker with the vibration amplitude of 0-2.5 mm and sieves with square mesh diameter of 0.5 mm, 2.5 mm and 5 mm.
2.2. Methods
2.2.1. Pressure compaction process of pepper mint herb
The bulk density for both obtained fractions was established following the NORM PN-ISO 79712:1998 using a 1 dm^3 vessel.
The separated herb fractions were compacted using a Fritz Heckert EU 20 hydraulic press with computer recording of with a closed matrix equipped with a compression the compaction process and the
pressing mechanism chamber diameter of 15.6 mm and 82 mm long. During the test, a curve, the so- called compaction characteristics (relationship between the compaction force and cylinder
displacement) was recorded, which was the basis for determination of the process' parameters. Based on presented data, it was possible to determine the compaction process work, defined as the area
under the curve. The head travel speed during compaction was 10 mm·min^-1. The process was being carried out at ambient temperature.
The following compaction pressures were used: 50, 100, 150 and 200 MPa. Every time, the 2-g weighed herb sample (corresponding to the weight of tea used for the production of tea bags) was manually
poured into the matrix. As a result of the process, herb compacted into the form of a straight cylinder was obtained.
2.2.2. Strength tests
The study of mechanical strength of compacted peppermint herb with the shape of straight cylinder included determination of tensile strength using Brazilian test method. In this method, a cylindrical
sample is subjected to a compacting load along the diameter, thus the other name for the method is the disk test. Such types of load result in sample destruction when the tensile strength is
exceeded, in the perpendicular direction to the plane of symmetry containing the direction of the load [10]. This method is commonly used to assess the mechanical properties of cylinder-shaped
brittle blocks [11]. The obtained agglomerates were subjected to the disk test using a MTS Insight 2 test machine. The compaction process was conducted until cracks beginning in the central portion
of the cylinder, spreading along the vertical axis became visible to the naked eye (Fig. 1).
Tensile strength during radial compaction was calculated based on the following formula [12, 13].
σ[n] – tensile strength (MPa)
Fn – agglomerate destructive force (N),
d – compacted sample diameter (m),
l – agglomerate length (m).
Moreover, we measured length, diameter and weight of the obtained agglomerate in order to calculate their density in g·cm^-3. The measurements were conducted in 20 repetitions for each combination.
The agglomerate compaction level was also determined, as the multiplication factor of volume decrease, following the methodology presented by Skonecki et al. [14] according to the following formula:
S[za] – agglomerate compaction level
ρ[a1] – agglomerate density after 48 hours of storage (g·cm^-3)
ρ[n] – initial material density in the compression chamber (bulk density) (g·cm^-3).
After the agglomeration process, the total compaction work was calculated using the registered compaction curves.
Examples of registered courses of the compaction process are presented in Fig. 2.
Fig. 1.
Example appearance of agglomerate subjected to Brazilian test
Fig. 2.
Selected courses of peppermint herb compaction process
2.3 Statistical analysis
Arithmetic means and standard deviations (SD) were calculated for the obtained results. Statistical analysis was performed using the Statistica 10.0 software (Stat Soft, Inc., Tulsa, Oklahoma USA).
Multidirectional variance analysis was used. The significance of differences was evaluated using Duncan test at the significance level of p=0.05, and the results were considerably below the level of
p<0.05. Linear regression models were constructed using Microsoft Excel software for the relationship of the compaction work and the applied pressure.
3. Study results
The obtained results of agglomerate strength measurement with the use of the Brazilian test method are presented in Table 1. A significant influence of the applied pressure as well as the separated
peppermint herb fraction on its tensile strength was observed. However, a relationship between the applied pressure and separated fraction of peppermint herb was found. According to the Brazilian
test, higher strength characterized the agglomerate produced from 0.5-2.5 mm fraction at pressure 50 and 150 MPa as compared to the same pressures for 2.5-5 mm fraction. The phenomenon of higher
susceptibility to cracking of agglomerates produced from materials with lower fineness level is reported by Olsson [15].
The highest strength characterized the agglomerate produced at 200 MPa pressure, independently of fraction and 150 MPa only for 0.5-2.5 mm fraction. It can be expected that in the case of the thicker
fraction of the herb, empty microspaces could occur between agglomerate particles. The study of Sadowska et al. [16] regarding the grind ability of peppermint agglomerate in a kinetic test also
indicate the occurrence of general tendency to improve the mechanical strength of the product when higher pressure is used in the compaction process. However, when subjecting the results of both
tests to a comparative analysis, it was observed, that the agglomerate produced from the smaller herb fraction (particularly with lower pressure levels) exhibits better strength parameters in the
Brazilian test, whereas in the kinetic test (where its grind ability was tested) it produces inferior results. This phenomenon can be explained by the mechanical wedging-out of the particles, which
takes place during compaction of components in the agglomeration process, and then easier separation of the finer particles, due to the limited contact surface of the particles, especially those,
which are located on the margins of the product.
Results of the study regarding the density of the obtained agglomerate are presented in tab. 2. An increase of the agglomerate density was observed along with the increase of pressure applied in the
experiment. Similar relationships are provided in Skonecki et al. [17]. Typically higher density characterized agglomerate produced from the finer herb fraction. The relationship for different
substrates are confirmed by Mani et al. [18], Mani et al. [7], and Carone et al. [19]. However, in the presented study an influence of the applied pressure and herb fraction and the obtained
agglomerate density was observed. The obtained results indicate that at 200 MPa pressure the fineness level of peppermint herb no longer influences the density of the produced agglomerate.
Changes in the agglomerate density confirm results of its compaction level (Table 3). Analysis of the parameter indicated its increased value along the increase of compaction pressure. The
relationship for different substrates are confirmed by Kulig et al. [20]. It also depended on the used material fraction, attaining higher values at lower fineness level of the material and equal
pressure for each fraction. This relationship appears to be logical, as the material is characterized by lower density also in the bulk form. Such high level of compaction indirectly explains the
possible savings of the storage and transporting spaces which can be generated using the agglomeration process.
Strength parameters of the obtained product shall also be considered in terms of the workload incurred to obtain them. The presented study demonstrated that the level of applied pressure influences
the values of compaction work. The total compaction work increased with the increase of pressure value (Table 4). In the majority of cases, no statistically significant differences occurred between
the peppermint herb fractions used in the study. Only in the case of the applied pressure of 200 MPa slightly higher values were observed for the finer fraction.
Regression analysis demonstrated that the relationships of the applied agglomeration pressure and values of compaction work can be presented with linear equations. Results of the conducted analysis,
with regression equations explaining these relationships are presented in the following charts (Figs. 3-4).
High matching index for the linear regression function to the empirical data was demonstrated. In the case of both peppermint herb fractions the coefficient of determination was approx. 0.98.
Table 1.
Values of the Brazilian test of agglomerate produced from different peppermint herb fractions at different pressures
Table 2.
Agglomerate density depending on the used pressure and separated herb fraction
Table 3.
Agglomerate compaction level depending on the used pressure and separated herb fraction
Table 4.
Values of compaction work depending on the applied pressure and peppermint herb fraction
Fig. 3.
Relationship between agglomeration pressure and the total compaction work of peppermint herb with fraction 0.5-2.5 mm
Fig. 4.
Relationship between agglomeration pressure and the total compaction work of peppermint herb with fraction 2.5-5.0 mm
4 Conclusions
• 1. It was determined that the mechanical strength of peppermint agglomerates depends on the pressure applied during compaction process as well as the fineness level of peppermint herb. Increase
of the applied pressure in the range from 50 to 200 MPa results in increased strength of the agglomerate measured with Brazilian test, produced from 0.5-2.5 mm fraction by approx. 269%, whereas
for 2.5-5.0 mm fraction by approx. 323%. In the tested pressure range, an increase in the density of agglomerate from the finer herb fraction occurs by approx. 133%, whereas the thicker fraction
about 139%.
• 2. The work necessary to compact the tested peppermint herb fractions depends on the applied agglomeration pressure. Along with the increased pressure in the range 50-200 MPa, the workload
necessary to compact the material to the form of agglomerate are increased twofold. Strict relationships between agglomeration pressure and the work value for individual peppermint herb fractions
were found.
• 3. An analysis of the obtained compaction work values, agglomerate density as well as the results of the Brazilian test allows to conclude that the favorable solution for 0.5-2.5 mm fraction is
production of agglomerate at 150 MPa pressure, since as a result it is characterized by high mechanical strength, acceptable density, and the incurred workload is lower than in the case of
subsequent pressure used in the experiment. On the other hand, study results regarding 2.5-5.0 mm fraction indicate, that obtaining agglomerates with the highest density and mechanical strength
requires the use of 200 MPa pressure.
This research was financed by Ministry of Science and Higher Education of the Republic of Poland.
All Tables
Table 1.
Values of the Brazilian test of agglomerate produced from different peppermint herb fractions at different pressures
Table 2.
Agglomerate density depending on the used pressure and separated herb fraction
Table 3.
Agglomerate compaction level depending on the used pressure and separated herb fraction
Table 4.
Values of compaction work depending on the applied pressure and peppermint herb fraction
All Figures
Fig. 1.
Example appearance of agglomerate subjected to Brazilian test
In the text
Fig. 2.
Selected courses of peppermint herb compaction process
In the text
Fig. 3.
Relationship between agglomeration pressure and the total compaction work of peppermint herb with fraction 0.5-2.5 mm
In the text
Fig. 4.
Relationship between agglomeration pressure and the total compaction work of peppermint herb with fraction 2.5-5.0 mm
In the text
Current usage metrics show cumulative count of Article Views (full-text article views including HTML views, PDF and ePub downloads, according to the available data) and Abstracts Views on
Vision4Press platform.
Data correspond to usage on the plateform after 2015. The current usage metrics is available 48-96 hours after online publication and is updated daily on week days.
Initial download of the metrics may take a while.
|
{"url":"https://www.bio-conferences.org/articles/bioconf/full_html/2018/01/bioconf_wipie2018_02028/bioconf_wipie2018_02028.html","timestamp":"2024-11-10T09:17:59Z","content_type":"text/html","content_length":"90835","record_id":"<urn:uuid:827ef818-c59f-4324-8cda-246a4d24ac18>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00761.warc.gz"}
|
2-8 Proving Angle Relationships Answer Key - Angleworksheets.com
Proving Angle Relationships Worksheet 2 8 Answers – Angle worksheets are a great way to teach geometry, especially to children. These worksheets contain 10 types of questions on angles. These include
naming the vertex and the arms of an angle, using a protractor to observe a figure, and identifying supplementary and complementary pairs of angles. … Read more
Worksheet Section 2-8 Proving Angle Relationships Answers
Worksheet Section 2-8 Proving Angle Relationships Answers – Angle worksheets are a great way to teach geometry, especially to children. These worksheets include 10 types of questions about angles.
These questions include naming the vertex, arms, and location of an angle. Angle worksheets are a key part of a student’s math curriculum. They help students … Read more
|
{"url":"https://www.angleworksheets.com/tag/2-8-proving-angle-relationships-answer-key/","timestamp":"2024-11-05T00:43:01Z","content_type":"text/html","content_length":"53281","record_id":"<urn:uuid:c3bbef4b-f0b0-4684-a883-fdbbcc366604>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00835.warc.gz"}
|
Compa ratio
It is a compensation metric organizations use to determine how an employee's salary compares to the midpoint of the salary range for their position. It is calculated by dividing the employee's
current salary by the midpoint of the salary range for their position. The resulting ratio is expressed as a percentage, with a ratio of 100% indicating that the employee's salary is exactly at the
midpoint of the range. Ratios above 100% indicate that an employee is paid above the midpoint, while ratios below 100% indicate that an employee is paid below the midpoint. Compa ratio is often used
with other compensation metrics to ensure that employees are paid fairly and competitively within their organization. A broader definition of compa ratio is an assessment of the competitiveness of an
individual salary point in reference to a comparative point of interest, such as the midpoint of a grade salary range, the market median, the 75th percentile and many other reference points of
|
{"url":"https://www.thehumancapitalhub.com/glossary/compa-ratio","timestamp":"2024-11-10T14:53:29Z","content_type":"text/html","content_length":"27170","record_id":"<urn:uuid:8b2026fb-2060-43b7-a4d7-82f59e2bf121>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00087.warc.gz"}
|
Diagonalization - (Quantum Mechanics) - Vocab, Definition, Explanations | Fiveable
from class:
Quantum Mechanics
Diagonalization is the process of transforming a matrix into a diagonal form, where all non-diagonal elements are zero, making it easier to analyze its properties. This is particularly important in
quantum mechanics, as it allows for the simplification of linear operators, revealing the eigenvalues and eigenvectors that represent observable quantities in a physical system.
congrats on reading the definition of Diagonalization. now let's actually learn it.
5 Must Know Facts For Your Next Test
1. A matrix is diagonalizable if it has enough linearly independent eigenvectors to form a basis for the vector space.
2. The process of diagonalization simplifies many matrix operations, such as finding powers of matrices or solving differential equations.
3. In quantum mechanics, diagonalizing an operator allows for the direct measurement of observables since the eigenvalues represent possible measurement outcomes.
4. Not all matrices are diagonalizable; some may only be put into Jordan form, which includes Jordan blocks for repeated eigenvalues.
5. Diagonalization can be accomplished using similarity transformations, where one matrix can be expressed in terms of another by an invertible matrix.
Review Questions
• How does diagonalization simplify the process of solving quantum mechanical problems?
□ Diagonalization simplifies solving quantum mechanical problems by transforming operators into a form where their eigenvalues and eigenvectors are easily accessible. When an operator is
diagonalized, its action on a state vector can be directly interpreted in terms of measurable quantities. This allows for straightforward calculations of expectation values and predictions of
measurement outcomes, as the diagonal elements correspond directly to observable results.
• Discuss the conditions under which a matrix is diagonalizable and provide an example of a non-diagonalizable matrix.
□ A matrix is diagonalizable if it has enough linearly independent eigenvectors to form a complete basis. For instance, consider the 2x2 matrix $$A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end
{pmatrix}$$. This matrix has one eigenvalue, λ = 1, with only one linearly independent eigenvector. As it lacks sufficient independent eigenvectors, this matrix cannot be diagonalized and can
instead only be expressed in Jordan form.
• Evaluate the significance of diagonalization in quantum mechanics compared to classical mechanics.
□ Diagonalization holds significant importance in quantum mechanics as it directly relates to the observables that govern physical measurements. Unlike classical mechanics where physical
quantities can often be computed directly from equations, quantum mechanics relies on operators acting on state vectors. By diagonalizing these operators, we can easily identify eigenvalues
that correspond to measurable quantities and eigenvectors that represent states of the system. This highlights the fundamental difference between the probabilistic nature of quantum mechanics
and deterministic classical mechanics, emphasizing how diagonalization allows for clearer interpretation and analysis of quantum systems.
© 2024 Fiveable Inc. All rights reserved.
AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website.
|
{"url":"https://library.fiveable.me/key-terms/quantum-mechanics/diagonalization","timestamp":"2024-11-04T05:30:52Z","content_type":"text/html","content_length":"154208","record_id":"<urn:uuid:fc61f60a-4757-411a-8467-3a223fc7b005>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00835.warc.gz"}
|
Illumination - Scenario 1
This post is inspired by Problem 27, Chapter 20 (Illumination) of "Textbook-Reviewer in Electrical Engineering, 1st Edition" by Professional Electrical Engr. (PEE) Marcialito M. Valenzona.
An industrial work area of 33 meters by 13 meters needs an illumination level of 72 lumens per square meter using 200W lamps with 2730 lumens of output per lamp. The area's coefficient of utilization
is 40% and its maintenance factor is 71%. How many lamps does the area need?
1.) Desired work area illumination:
(72 lumens/m^2) * (33*13)m^2 = 30,888 lumens
2.) Lumens from sources:
Coefficient of utilization is the lumens reaching the work area (diminished by absorption and reflection) over the lumens emitted by sources.
30,888 lumens / 0.4cu = 77,220 lumens
3.) Lumens when clean/new:
Maintenance factor is the lumens emitted by source under normal working conditions (dust, dirt, smoke, etc) over the lumens emitted when everything is completely clean or brand new.
77,220 lumens / 0.71mf = 108,760.56 lumens
4.) Total number of lamps needed:
108,760.56 lumens / (2730 lumens/lamp) = 39.84 lamps
The factory area needs 40 lamps to reach the desired level of illumination.
|
{"url":"https://www.electricaldean.com/2018/06/illumination-scenario-1.html","timestamp":"2024-11-06T20:26:05Z","content_type":"text/html","content_length":"108869","record_id":"<urn:uuid:033a4f14-4769-40ef-b3c0-a92183889c75>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00097.warc.gz"}
|
Proceedings of the ... American Control Conference. American Control Conference
Proceedings of the ... American Control Conference. American Control Conference Journal
publication venue for
©2024 Regents of the University of Colorado | Terms of Use | Powered by VIVO
Data updated last 11/10/2024 22:30 10:30:01 PM
University of Colorado Boulder / CU Boulder
Fundamental data on national and international awards provided by Academic Analytics.
|
{"url":"https://experts.colorado.edu/display/journal_202341","timestamp":"2024-11-11T15:28:42Z","content_type":"text/html","content_length":"56999","record_id":"<urn:uuid:a08be39c-3383-4ac1-b3b3-4b6bd448a577>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00734.warc.gz"}
|
Math Is Fun Forum
Registered: 2023-02-24
Posts: 306
Momentum is instantaneous, yeah?
I’m 90kg, walking east at 3m/s, my momentum (p) is;
p=mass x instantaneous velocity
= 90kg x 3m/s
=270kg m/s
What’s the significance of momentum?
I’m guessing Force is part of it.
If I bump into you while I’m walking at 3m/s, with my mass of 90kg, I’ll hit you (push you?) with quite a force
But I’d hit you with a greater force if I was travelling at 6m/s (p=540kg m/s), or I had a mass of 180kg (also p=540kg m/s)
Also, how do I work out what that force is?
F=90kg x a
What is a?
Answer; we don’t have enough information? We only have instantanous v, we don’t have final v and intial v. And what about t? This isn’t happening over x seconds, it’s happening instantaneously, yeah?
Prioritise. Persevere. No pain, no gain.
Registered: 2010-06-20
Posts: 10,610
Re: Momentum
If you are travelling with a constant momentum then it is a timeless measurement. If you hit something there is an impulse as momentum is transferred. This takes place over time.
Have a look at this MIF page as it gives a good account of this:
Children are not defined by school ...........The Fonz
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Sometimes I deliberately make mistakes, just to test you! …………….Bob
Registered: 2023-02-24
Posts: 306
Re: Momentum
Thanks, Bob.
I thought I'd replied to this already. Must have dreamt it
Anyway. I look forward to checking out the MIF page, although those pages, for me at this stage, tend to go beyond GCSE level quite quickly, I think (?).
In the meantime, some basics.
With GCSE physics questions (such as, A man with a mass of 90kg is travelling at 3m/s; what is his momentum?) do we assume that the 3m/s is his instantaneous velocity? And by instantaneous momentum I
meant; as opposed to the average momentum that we would get if the 3m/s was his avg v.
Prioritise. Persevere. No pain, no gain.
Registered: 2010-06-20
Posts: 10,610
Re: Momentum
do we assume that the 3m/s is his instantaneous velocity?
Yes, that's all you can do. Momentum must be a vector measure as it has velocity as a component. If the man is accelerating then his momentum is going up.
Children are not defined by school ...........The Fonz
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Sometimes I deliberately make mistakes, just to test you! …………….Bob
|
{"url":"https://mathisfunforum.com/viewtopic.php?id=31949","timestamp":"2024-11-11T17:38:03Z","content_type":"application/xhtml+xml","content_length":"12718","record_id":"<urn:uuid:cedf9a37-3174-47e2-8959-594394e8d403>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00085.warc.gz"}
|
A subgrid model for star formation and effective equation of state
This year saw completion of an extensive simulation study calibrating the parameters in our pressure-regulated, feedback-modulated (PRFM) star formation rate (SFR) theory covering a wide range of
galactic environments (Kim et al., 2024) and (Kim et al., 2023). The survey employed the TIGRESS numerical framework and provides fits for feedback yield (used in the SFR prediction) and effective
velocity dispersion (equivalent to an equation of state, EOS) ; these are chosen as calibration variables because they can be robustly measured in cosmological simulations even at coarse resolution.
We submitted a paper (Jeffreson et al., 2024) presenting results from global galaxy simulations of both disk- and bulge-dominated systems using the GalactISM framework; these global simulations were
used to validate the PRFM theory hypotheses (thermal and vertical dynamical equilibrium) and extend subgrid model calibrations. We also submitted a paper (Hassan et al., 2024) in which we
post-processed outputs from the IllustrisTNG cosmological simulations to compare the native-TNG SFRs with PRFM predictions, also assessing what cosmological resolution is required for employing the
PRFM-resolved vs. the PRFM-unresolved subgrid model formulation for setting the SFR. We explored the impact of this model on cosmological simulations in a post-processing approximation (Hassan et
al., 2024). Because the PRFM model has higher star formation efficiency (SFE) at higher density and pressure while the native-TNG model has nearly constant SFE, we expect implementation of the PRFM
SFR model in next-generation LtU cosmological simulations will lead to significant enhancements of star formation at high redshift, consistent with recent JWST observations. We have also worked with
the Cosmological Modeling Working Group to implement the PRFM-resolved subgrid model, and we have been testing an implementation of PRFM-unresolved in isolated galaxies.
The Astrophysical Journals, Jan 2023
|
{"url":"http://learning-the-universe.org/projects/SF_PRFM/","timestamp":"2024-11-04T15:33:32Z","content_type":"text/html","content_length":"27656","record_id":"<urn:uuid:9e01cc3b-60fd-431f-917a-f97bb1b721da>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00609.warc.gz"}
|
Vibrations: Embry-Riddle Aeronautical University
Vibration isolation
We have learned that a simple, effective way to passively control steady-state vibrations is through "vibration isolation". As an example, consider a base-excited single-DOF system for which we would
like to reduce its absolute response motion. For this case, the transfer function between the input base motion and the output displacement of the oscillator is the well-known transmissibility
function. The characteristics of this transfer function are related to the frequency ratio r = omega/omega_n, where omega is the frequency of base motion and omega_n is the natural frequency of the
• For r < sqrt(2), the system displacement is amplified over the base motion.
• For r = sqrt(2), the system displacement is equal to the base motion, regardless of the damping in the system.
• For r > sqrt(2), the system displacement is reduced from the base motion. Increasing the damping in this frequency range actually increases the system displacement.
For effective vibration isolation, it is desirable to increase the frequency ratio r to a value much larger than sqrt(2) by either reducing the stiffness of the system or increasing its mass. This is
demonstrated in the simulation results below. As seen, for r = 4, the absolute motion of the system mass is effectively zero.
Also, as discussed above it is desired to keep the damping as small as reasonably possible for effective isolation. This is demonstrated in the simulation results below. As seen is this simulation, a
5% damping ratio in the isolation system is a very effective design, whereas the larger damping produces significantly larger amplitude response.
|
{"url":"https://www.purdue.edu/freeform/ervibrations/chapter-v-animations/vibration-isolation/","timestamp":"2024-11-09T05:01:20Z","content_type":"text/html","content_length":"36971","record_id":"<urn:uuid:b3fe2f89-7b64-4469-9b79-91f70077560a>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00222.warc.gz"}
|
bears-den - a Dustforce map
/ 9 votes
/ 5 votes
map notes
First proper map I have made, hope you enjoy it. Its quite rough around the edges and any feedback would be great =]
16 comments
The thumbnail is... very dark, and impossible to distinguish foreground from background. I opened the map and prayed, but the map is just as dark. At least the parallax makes it a little easier to
tell, but the problem is still there. My advice is that you always try to make a large brightness shift between the collision and background... preferably making the BG the darker of the two. A color
shift helps as well, like maybe the BG is darker but also bluer. There are other problems I think some testing would've ironed out... like the turkey between the two ceiling runs. That placement
makes the player want to up-heavy it, but you didn't put any spikes on the terrain up there so it spreads dust and is very bad. Most reasonable options don't work, so I have to airjump and then
backward down-heavy it to avoid spreading dust... pretty awkward but it works. I am just curious if that was your intent?
edited Sep 6, 2014
um i agree, it was quite a rush job, I was just having fun with it and wanted to show my friends, still a bit clunky with all the settings in the level editor sorry but I think you are 110% right
about it being dark. I tested the map a whole bunch of times but I wasn't aware it made it difficult for players, it felt norm to mean but I never did try ( or thought to) use that technique
described. I'm sorry you didn't enjoy it as much as I hoped but maybe next time! Thanks for the advice fella and thanks for at least giving it a go =]
Something else i want to point out (because i used to do the exact same thing, you can ask EklipZ) is when you're downdashing, you spam dashes, this actually will get you worse boosts and ruin your
boosts because the timing is important of when you dash, dash as you hit the ground only, all the excess dashes do nothing :P
It's not bad. Certain parts could be made more "flowy".
Abit too dark. Take note that brightness depends on screen settings too. So If your monitor is especially bright, ask someone with a dim one to test it for you, or tune your settings.
Take note of the area surrounding a turkey. For example, the first turkey can sometimes splash onto the ledge with a forward straight heavy making it annoying.
|
{"url":"http://atlas.dustforce.com/3717/bears-den","timestamp":"2024-11-13T06:27:47Z","content_type":"text/html","content_length":"47359","record_id":"<urn:uuid:cf9c0a4a-f972-4faa-bf41-85077358c7bc>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00197.warc.gz"}
|
ResetCounters Sets CurrentIn and CurrentOut counters to zero.
StatLastSecondsIn(ANumLastSeconds as Integer) as Returns incoming traffic statistics in megabytes for the some number of last seconds defined by ANumLastSeconds parameter. Parameter can be from 1
Double to 100 (seconds).
StatLastSecondsOut(ANumLastSeconds as Integer) as Same as StatLastSecondsIn but for outgoing traffic.
StatLastMinutesIn(ANumLastMinutes as Integer) as Returns incoming traffic statistics in megabytes for the some number of last minutes defined by ANumLastMinutes parameter. Parameter can be from 1
Double to 100 (minutes).
StatLastMinutesOut(ANumLastMinutes as Integer) as Same as StatLastMinutesIn but for outgoing traffic.
StatLastHoursIn(ANumLastHours as Integer) as Returns incoming traffic statistics in megabytes for the some number of last hours defined by ANumLastHours parameter. Parameter can be from 1 to
Double 100 (hours).
StatLastHoursOut(ANumLastHours as Integer) as Same as StatLastHoursIn but for outgoing traffic.
StatLastDaysIn(ANumLastDays as Integer) as Double Returns incoming traffic statistics in megabytes for the some number of last days defined by ANumLastDays parameter. Parameter can be from 1 to 100
StatLastDaysOut(ANumLastDays as Integer) as Double Same as StatLastDaysIn but for outgoing traffic.
|
{"url":"http://routix.net/netcom/manual/rule_methods.html","timestamp":"2024-11-11T09:38:24Z","content_type":"application/xhtml+xml","content_length":"5073","record_id":"<urn:uuid:1697a54f-b71c-456c-b49e-3d6512565aef>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00336.warc.gz"}
|
22000 Dollar Loan
Enter a higher figure to see how much money you can save by paying off your debt faster. It will also show you how long it will take to pay off the loan at the. Definitions. Loan Amount: The total
amount of money borrowed. Loan Term: The amount of time the borrower has to pay off. Calculate your next loan! Information and interactive calculators are made available to you as self-help tools for
your independent use. Auto Loan Payment Information. How much is the monthly payment for a $22, car loan paid over 60 months? Here are some helpful tips to understand how this. Car Loan Calculator.
Use this calculator to help you determine your monthly car loan payment or your car purchase price. After you have entered your current.
If you borrowed money to buy a car, it's possible you owe more on your car loan than the car is worth. When that happens, you have “negative equity” in the car. $22, Car Loan. What's the payment of a
22, dollar car loan? (adjust inputs to calculate new loan). Purchase Price. $. Down Payment. $. Percent Down. Use this calculator to calculate the payment of a car loan. Loan Amount: Amount of loan
taken. Interest Rate: Interest rate of the loan. Length of Loan: Time. Loan amount: Total dollar amount of your loan. Interest rate: The annual interest rate, often called an annual percentage rate
(APR) for this loan or line of. Use this calculator to determine how many payments it will take to pay off your loan. Loan Information. Current Balance Monthly Payment Interest Rate. Results. Related
Calculators. Car Loan Refinance Calculator. Car Affordability Calculator. Car Lease Calculator. Car Lease or Buy 22, and have a 12, loan how. Looking to buy a new car? We'll do the math for you.
Scotiabank free auto loan calculator gives you estimate for car loan, monthly payment, interest rate. View the amortization loan schedule for a 22, dollar auto loan over 60 months. Use the above
calculator to see the monthly payment of a different loan. Loan amount. Enter the amount of money you want to borrow. Or you can enter the car price, your down payment amount and the trade-in value.
$22, Mortgage Amortization Schedule. View the amortization loan schedule for 22, dollars. What's the monthly payment on a $22k mortgage? Purchase Price.
Amortization Payment Schedule. Can I afford a $22, car or truck at a apr? Below is the amortization schedule for a 22, dollar loan. It shows how much. Use the Loan Calculator to determine your
regular payments, along with the total loan amount (principal and interest), and see how increasing your payments. Use this calculator to determine your monthly payments and the total costs of your
personal loan. Over the course of the loan, you will pay a total of $3, in interest. Calculate the monthly loan payment for a $22, car or truck. Use this calculator to calculate the monthly payment
of a 22k loan. It can be used for a car loan, mortgage, student debt, boat, motorcycle, credit cards, etc. Or, enter in the loan amount and we will calculate your monthly payment. You can then
examine your principal balances by payment, total of all payments made. Enter the vehicle price, down payment, and interest rate into our car finance calculator below. The calculator will give your
estimated weekly, biweekly, or. Try our Line of Credit & Loan Payment calculator now to estimate your minimum line of credit payments or installment payments on a personal loan. loan over several
years. Once the loan term is up, you've paid for the car plus interest. Interest is what the auto loan company charges you to borrow the money.
What is the average amount of monthly income needed to qualify for a car loan around $22,? All related (35). Free personal loan calculator that returns the monthly payment, real loan cost, and the
APR after considering the fee, insurance, interest of a personal. The size of your monthly payment depends on loan amount, loan term, and interest rate. Loan amount equals vehicle purchase price
minus down payment. $22, Auto Payment Chart ; at %, 1,, , , ; at %, 1,, , , The size of your monthly payment depends on loan amount, loan term, and interest rate. Loan amount equals vehicle purchase
price minus down payment.
Accounts To Follow To Gain Followers On Instagram | Does Staples Print Documents From Flash Drives
|
{"url":"https://syzrangame.ru/community/22000-dollar-loan.php","timestamp":"2024-11-06T02:47:49Z","content_type":"text/html","content_length":"12757","record_id":"<urn:uuid:6aa22dee-78b0-457a-9bb0-f74961470c1b>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00417.warc.gz"}
|
Domain decomposition solution of nonlinear twodimensional parabolic problems by random trees
Acebron, Juan; Rodriguez-Rozas, A.; Spigler, R.
Journal of Computational Physics, 228(15) (2009), 5574-5591
A domain decomposition method is developed for the numerical solution of nonlinear parabolic partial differential equations in any space dimension, based on the probabilistic representation of
solutions as an average of suitable multiplicative functionals. Such a direct probabilistic representation requires generating a number of random trees, whose role is that of the realizations of
stochastic processes used in the linear problems. First, only few values of the sought solution inside the space-time domain are computed (by a Monte Carlo method on the trees). An interpolation is
then carried out, in order to approximate interfacial values of the solution inside the domain. Thus, a fully decoupled set of sub-problems is obtained. The algorithm is suited to massively parallel
implementation, enjoying arbitrary scalability and fault tolerance properties. Pruning the trees is shown to increase appreciably the efficiency of the algorithm. Numerical examples conducted in 2D,
including some for the KPP equation, are given.
|
{"url":"https://cemat.tecnico.ulisboa.pt/document.php?project_id=4&member_id=118&doc_id=1713","timestamp":"2024-11-12T13:34:29Z","content_type":"text/html","content_length":"9119","record_id":"<urn:uuid:d6e93f4e-c62e-4dea-8f25-4dde7f9a71f2>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00733.warc.gz"}
|
Bearing fault identification based on ASMOTE-CFR
Aiming at the problem of data unbalance caused by the lack of bearing failure test data, the paper proposes a collaborative filtering recommendation (CFR) method for adaptive Smote (ASMOTE)
resampling and matrix decomposition of minority samples (ASMOTE-CFR). The method first adopts adaptive Smote method to synthesize different number of new sample equalization test data sets according
to the data distribution. and then a variety of typical feature values such as time domain, frequency domain, time frequency domain, etc. are extracted to obtain the bearing feature matrix, and then
a scoring matrix that accurately describes the bearing state is designed and based on the matrix Based on the decomposed collaborative filtering algorithm, a set of collaborative filtering
recommendation system for bearing state recognition is proposed. Using this method, different forms of fault data on the outer ring of the rolling bearing were identified and verified. The accuracy
of identification reached more than 98 %. Compared with the recognition accuracy of the collaborative filtering recommendation algorithm, this method improved 8 %.
1. Introduction
During the operation of the wind turbine, bearing failure is the main failure of the wind turbine. If the timeliness of finding the bearing failure cannot be guaranteed, the operating life of the
entire generator set will be greatly reduced, or even cause a major safety accident, so how to effectively identify the bearing The fault state has become one of the main contents in the field of
fault diagnosis.
Motor bearings will generate huge amounts of data during monitoring. Normally, the normal sample data will be much larger than the fault sample. In recent years, many scholars have carried out
research to improve the imbalanced learning problem [1-4]. Sampling Random Oversampling randomly copies a few samples to balance the class distribution; Jose et al. [5] proposed a Smote oversampling
method. This method is not simply to copy its samples but there is a synthetic sample mechanism blindly so that the learning of samples is easy to cause overfitting.
At present, Collaborative Filtering (CF) is one of the most commonly used methods in the field of recommendation systems. The core idea is to predict user preferences through rating information of
similar users or similar items [2, 6]. The paper [7-9] proposed a probabilistic matrix decomposition model, which describes the matrix decomposition process from the perspective of the probability
generation process, effectively alleviating the problem of data sparsity.
Aiming at the difficulty of designing the scoring matrix of the recommendation system in the field of fault diagnosis, this paper first extracts the typical features from the time domain, frequency
domain, and time-frequency domain to obtain the bearing feature matrix, and then constructs a set of accurate scoring matrices for the bearing state. Two matrixes with different characteristics are
organically combined to obtain a joint scoring matrix for bearing state recognition. Based on the matrix filtering collaborative filtering algorithm and gradient descent optimization algorithm, a set
of collaborative filtering recommendation systems for bearing state recognition is proposed.
2. CFR system based on ASMOTE
2.1. Adaptive smote oversampling method (ASMOTE)
This article sets the number of new minority samples to be generated according to the balance of data distribution. The specific algorithm flow is as follows:
Step 1: Calculate the degree of unbalance. Recall that the minority sample is ${X}_{s}$ and the majority is ${X}_{m}$, then the imbalance:
$d=\frac{{X}_{s}}{{X}_{m}},d\in \left(0,1\right).$
Step 2: Calculate the number of samples to be synthesized:
$G=\left({X}_{m}-{X}_{s}\right)\mathrm{*}b,\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}b\in \left(0,1\right),$
where $b\in \left(0,1\right)$ is a parameter used to specify the desired balance level after generation of the synthetic data.
Step 3: For each sample $X$ that belongs to the minority class, calculate the $X=\left\{{X}_{1},{X}_{2},\cdots ,{X}_{n}\right\}$ neighbors with Euclidean distance, $\mathrm{\Delta }i$ is the number
of samples belonging to the majority class among the $k$ neighbors, and the ratio $r$ is $r=\mathrm{\Delta }i/k$, $i=$ 1, 2,…, ${X}_{s}$, $r\in \left(0,1\right)$, where $\mathrm{\Delta }i$ is the
number of examples in the $k$ nearest neighbors of ${x}_{i}$ that belong to the majority class.
Step 4: Normalize ${r}_{i}$ for each minority sample obtained in Eq. (3):
${r}_{i}=\frac{{r}_{i}}{{\sum }_{i=1}^{ms}{r}_{i}}.$
Step 5: Calculate the number of synthesized samples for each minority sample:
Step 6: Randomly choose one minority data example, ${x}_{zi}$, from the $K$ nearest neigh bors for data ${x}_{i}$. and synthesize according to the following equation:
${s}_{i}={x}_{i}+\left({x}_{zi}-{x}_{i}\right)×\lambda .$
Repeat the synthesis until the number of synthesis required by Eq. (5) is satisfied.
2.2. Collaborative filtering recommendation algorithm based on matrix decomposition
The idea of the collaborative filtering algorithm based on matrix decomposition is to decompose the higher-dimensional “user-movie” rating matrix into the product of two lower-dimensional matrices.
These two low-dimensional matrices are the user latent factor matrix and Project latent factor matrix, where $k$ is the number of latent factor features, as shown in Eq. (6):
${R}_{mn}\approx {P}_{km}\mathrm{*}{Q}_{kn}^{T}.$
For the existing $n$ score records, the square of error is used to calculate the loss function of each score. The specific formula is as follows:
$L\left(P,Q,R\right)=\frac{1}{n}\sum L\left({P}^{j},{Q}^{i},{R}_{i}^{\left(j\right)}\right)=\frac{1}{n}{\sum \left({R}_{i}^{\left(j\right)}-{R}_{i}^{\left(j\right)}\right)}^{2}.$
To prevent overfitting, regularization terms are added to the overall loss function:
$\mathrm{L}=\mathrm{a}\mathrm{r}\mathrm{g}\mathrm{m}\mathrm{i}\mathrm{n}\left(L\left(P,Q,R\right)+\lambda \left({‖P‖}^{2}+{‖Q‖}^{2}\right)\right),$
where $\lambda$ is the regularization coefficient, further, the gradient descent method is used to deal with the minimization problem, the core problem of the matrix factorization model is to
minimize the overall loss function of the above formula by finding the appropriate parameters $P$ and $Q$.
3. Bearing fault identification based on ASMOTE-CFR
ASMOTE-CFR is based on the unbalance of data in massive data. First, the ASMOTE algorithm is used to equalize a few samples of faulty bearings. Further combined with the CFR method to design specific
scoring rules to establish the corresponding scoring matrix, so as to effectively solve the problem of low accuracy in unbalanced data sets. Then extract the typical characteristic values in the time
domain, the fuzzy entropy value in the frequency domain and the wavelet packet entropy value in the frequency domain to obtain the bearing feature matrix, and then design a scoring matrix that
accurately describes the bearing state. Finally, these two matrices with different characteristics are organically Combined, a joint scoring matrix for bearing status identification is obtained.
Based on the joint scoring matrix, the bearing status is effectively identified.
Suppose there are signal data of $u$ group of rolling bearings $\left({S}^{1},{S}^{1},\cdots ,{S}^{k},{S}^{k+1},\cdots ,{S}^{u-1},{S}^{u}\right)$ and there are $v$ different types of states) $\left
({Z}_{1},{Z}_{2},{Z}_{3},\dots ,{Z}_{v}\right)$. In the signal data of group $u$ rolling bearings, it is assumed that the set of minority samples is ${X}_{s}=\left\{{X}_{1},{X}_{2},\dots ,{X}_{n}\
right\}$, the set of majority samples is ${X}_{m}=\left\{{X}_{1},{X}_{2},\dots ,{X}_{m}\right\}$, ${X}_{n}$ represents the feature vector of the $n$ minority sample, and ${X}_{m}$ represents the
feature vector of the $m$ majority sample. and here $\left(m+n\le u\right)$. Calculate the number of samples to be synthesized by $G=\left({X}_{m}-{X}_{s}\right)\mathrm{*}b$, $b\in \left(0,1\right)$,
and for the minority sample ${X}_{s}$, use the Euclidean distance to calculate the $h$ nearest neighbor, and then randomly choose one minority data example, ${x}_{zi}$ ,from the $h$ nearest neigh
bors for data ${x}_{i}$ and synthesize according to ${S}_{i}={X}_{s}+\left({X}_{s}-{X}_{i}\right)×\lambda$. The sample data after ASMOTE will increase by certain percentage. Assuming that the data
after ASMOTE has $w$ group, the state of the previous $k$ group of training data ${S}^{1},{S}^{1},\cdots ,{S}^{k}$ is known, and now the CFR is used to identify the state of the $w-k+1$ group of test
data. ${S}^{k+1},\cdots ,{S}^{w}.$
In the $i$ group of data, 17 features are extracted in the time domain, which are average, root mean square value, root square amplitude, rectified average, kurtosis, variance, maximum, minimum,
peak-to-peak value, Standard deviation, waveform index, peak index, pulse index, margin index, skewness index, kurtosis index. Frequency domain extraction fuzzy entropy and sample entropy,
time-frequency domain extraction wavelet packet energy entropy and EMD decomposition into 12 Each order component IMF entropy extracts a total of 32 mixed domain features.
The entropy extraction is defined as follows.
Assuming that the energy corresponding to ${S}_{aj}^{i}$$\left(i=0,1,2,\cdots ,b\right)$ is $e$$\left(j=0,1,2,\cdots ,b\right)$, then:
${{E}_{aj}}^{i}=\int {‖{S}_{aj}^{i}\left(t\right)‖}^{2}dt.$
Then the total energy of the signal is:
${E}^{i}={\sum }_{j=0}^{b}{E}_{aj}^{i},$
then the entropy is:
$H=-{\sum }_{i=1}^{b}p\left({E}_{aj}^{i}\right)\mathrm{l}\mathrm{g}p\left({E}_{aj}^{i}\right).$
According to the information entropy, the fuzzy entropy in the frequency domain, the sample entropy, the IMF entropy of the EMD components in the time-frequency domain, and the wavelet packet entropy
are obtained. Furthermore, the 32 feature extraction values are used as elements to construct a normalized feature vector as follows:
${T}^{i}=\left[{{f}_{a0}}^{i},{{f}_{a1}}^{i},\cdots ,{{f}_{ab}}^{i}\right],\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}{f}_{aj}^{i}=\frac{{E}_{aj}^{i}}{{E}^{i}},\mathrm{}\mathrm{}\
mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}\left(i=1,2,\cdots ,k,k+1,\cdots ,w,j=1,2,\cdots ,b\right).$
According to the corresponding state of the bearing, the paper designs the state score table of the bearing, as shown in the Table 1 and obtains the corresponding state score matrix $B$.
As shown in the Table 2, the maximum value of the corresponding state for the training data ${S}^{1},{S}^{2},\cdots ,{S}^{w}$ is 1 and the given state is recorded as the minimum value $\epsilon$ ($\
epsilon$ is a number infinitely close to 0), and for the test data ${S}^{k+1},\cdots ,{S}^{w}$, the score of the state ${Z}_{v}$ is unknown, given a value of 0, and recorded as ${R}_{i\mathrm{"}}^{j}
\mathrm{}\left(i\mathrm{"}=k+1,k+2,\cdots ,w,j=1,2,\cdots ,v\right)$.
Table 1Bearing characteristic score table
${S}^{1}$ ${S}^{2}$ … ${S}^{k}$ ${S}^{k+1}$ … ${S}^{w}$
${{f}_{a0}}^{i}$ ${{f}_{a0}}^{1}$ ${{f}_{a0}}^{2}$ … ${{f}_{a0}}^{k}$ ${{f}_{a0}}^{k+1}$ … ${{f}_{a0}}^{w}$
${{f}_{a1}}^{i}$ ${{f}_{a1}}^{1}$ ${{f}_{a1}}^{2}$ … ${{f}_{a1}}^{k}$ ${{f}_{a1}}^{k+1}$ … ${{f}_{a1}}^{w}$
${{f}_{a2}}^{i}$ ${{f}_{a2}}^{1}$ ${{f}_{a2}}^{2}$ … ${{f}_{a2}}^{k}$ ${{f}_{a2}}^{k+1}$ … ${{f}_{a2}}^{w}$
$⋮$ $⋮$ $⋮$ $\ddots$ $⋮$ $⋮$ $\ddots$ $⋮$
${{f}_{ab}}^{i}$ ${{f}_{ab}}^{1}$ ${{f}_{ab}}^{2}$ … ${{f}_{ab}}^{k}$ ${{f}_{ab}}^{k+1}$ … ${{f}_{ab}}^{w}$
Table 2Bearing condition score table
Bearing status ${S}^{1}$ ${S}^{2}$ … ${S}^{k}$ ${S}^{k+1}$ … ${S}^{w}$
${Z}_{1}$ 1 $\epsilon$ … $\epsilon$ ${R}_{1}^{k+1}$ … ${R}_{1}^{w}$
${Z}_{2}$ $\epsilon$ 1 … $\epsilon$ ${R}_{2}^{k+1}$ … ${R}_{2}^{w}$
$⋮$ $⋮$ $⋮$ $\ddots$ $⋮$ $⋮$ $\ddots$ $⋮$
${Z}_{v}$ $\epsilon$ $\epsilon$ … 1 ${R}_{v}^{k+1}$ … ${R}_{v}^{w}$
This paper combines the bearing feature scoring matrix $A$ and the bearing status scoring matrix $B$ to obtain a joint scoring matrix $C$ for bearing status recognition; in order to diagnose the
state of the test data, we need to decompose the joint scoring matrix $C$ into two low-dimensional feature matrices $P$ and $Q$, it is $C=P\mathrm{*}{Q}^{T}$. Furthermore, the state score of the test
data is predict ${P}_{i\mathrm{"}}^{j}$$\left(i\mathrm{"}=k+1,k+2,\cdots ,w;j=1,2,\cdots ,v\right)$ based on these two feature matrices. Finally, the gradient descent method is used to find the
optimal parameters $P$ and $Q$ to minimize the overall loss function, and then the test data ${S}^{k+1}$ predicts the score ${R}_{i\mathrm{"}}^{j}\mathrm{}\left(i\mathrm{"}=k+1,k+2,\cdots ,w;j=1,2,\
cdots ,v\right)$ for the state ${Z}_{j}$, where ${R}_{i\mathrm{"}}^{j}={P}_{i\mathrm{"}}^{j}\mathrm{*}{Q}_{i\mathrm{"}}^{j}$, then the state ${Z}_{j}$ corresponding to the highest score ${R}_{i\
mathrm{"}}^{j}$ is the predicted test data state ${S}^{k+1}$.
4. Example verification of different failure forms of bearing outer ring
In order to further verify the effectiveness of the fault identification method proposed in this paper, this section identifies different forms of faults on the bearing outer ring. Using the bearing
test stand as shown Fig. 3, set the speed to 1200 rpm and the sampling frequency to 16384 Hz. Experiment with 6205EKA deep groove ball bearings. Collect 402 groups outer ring pitting corrosion and
390 groups outer ring cracks (as shown Fig. 2), outer ring current damage (as shown Fig. 1) 85 groups and normal 423 groups total 1300 sets of data samples.
First use ASMOTE to take 423 groups of normal state samples as reference, oversample a few types of shaft current damage samples by 3 times, and finally get 255 groups of shaft current damage
samples, and then further equalize the total 1470 groups of samples according to 6:2:2 ratio is randomly divided into a training set (882 groups), a cross-validation set (294 groups) and a test set
(294 groups), and a state-of-the-art recognition is performed using a collaborative filtering recommendation system for bearing state recognition.
As can be seen from the Fig. 5, when the regularization coefficients $\lambda =$ 0.002 and $K=$ 11, the bearing condition score of the test set reached 99.23 %, and when $K=$ 11, $\lambda$ is 0.0025,
0.003, and 0.0035, the accuracy of the bearing test set of state has reached 98.63 %, 98.98 % and 98.76 %, respectively. The performance of the model on the test set is evaluated, which proves that
the model has good generalization ability under this parameter.
Fig. 1Shaft current damage
Fig. 4Accuracy identification CFR
Fig. 5Accuracy identification ASMOTE- CFR
Fig. 6Compare k= 11 the recognition rate CFR and ASMOTE-CFR
Taking $\lambda =$0.002 and $K=$11 as examples, the Table 3 shows the specific recognition of the model for various states on the test set.
Table 3Test set recognition effect
Bearing status Number of test samples Recognize the number correctly Recognize the number false State recognition accuracy
Crack 69 67 2 97 %
Putting 79 79 0 100 %
Shaft current 49 49 0 100 %
Normal 97 49 0 100 %
Total 294 292 2 99 %
5. Conclusions
In this paper, the combination of adaptive synthetic minority oversampling technology (ASMOTE) and matrix decomposition-based collaborative filtering technology (CFR) is applied to the field of
mechanical equipment fault recognition. For the identification of rolling bearing states, this paper oversamples a few types of shaft current samples by three times, then extracts 32 typical features
in time domain, frequency domain, time frequency domain and other multi-domain states to construct a bearing feature matrix, and then designs an accurate description of the bearing status , And
finally combine these two matrices with different characteristics organically to obtain a joint scoring matrix for bearing status identification. Experiments with different regularization
coefficients and eigenvalues on bearings with pitting corrosion, cracks, and current damage on the outer ring of rolling bearings and normal bearings, the highest accuracy rate reached more than 98
%. Compared with CFR, the accuracy of method ASMOTE-CFR is improved by 8 %.
• Manlangit S., Azam S., Shanmugam B., et al. An efficient method for detecting fraudulent transactions using classification algorithms on an anonymized credit card data set. Proceedings of
International Conference on Intelligent Systems Design and Applications, 2017.
• Liu C., Wu J., Mirador L., et al. Classifying DNA methylation imbalance data in cancer risk prediction using Smote and Tomek Lonk methods. Communications in Computer and Information Science, Vol.
902, Issue 5, 2018, p. 1-9.
• Al-Azani S., El-Alfy E.-S.-M. Using word embeddings and ensemble learning for highly imbalanced data sentiment analysis in short Arabic text. Procedia Computer Science, Vol. 109, Issue 22, 2017,
p. 359-366.
• Ebo Bennin K., Keung J., Phannachitta P., et al. MAHAKIL: diversity based oversampling approach to alleviate the class imbalance issue in software defect prediction. IEEE Transactions on Software
Engineering, Vol. 44, Issue 1, 2017, p. 534-550.
• Saez J. A., Luengo J., Stefanowski J., Herrera F. SMOTE-IPF: Addressing the noisy and borderline examples problem in imbalanced classification by a re-sampling method with filtering. Information
Sciences, Vol. 291, Issue 10, 2015, p. 184-203.
• Guo Gui Bing, Zhang Jie, Thalmann Daniel, et al. From ratings to trust: an empirical study of implicit trust in recommender systems. Proceedings of the 29th Annual ACM Symposium on Applied
Computing, 2014.
• Kim MinGun, Kim Kyoungjae Recommender systems using SVD with social network information. Journal of Intelligence and Information Systems, Vol. 2, Issue 4, 2016, p. 1-18.
• Parham Moradi, Sajad Ahmadian A reliability-based recommendation method to improve trust-aware recommender systems. Expert Systems with Applications, Vol. 42, Issue 21, 2015, p. 7386-7398.
• Ma Hao, Yang Hai Xuan, Lyu Michael R., et al. So Rec: social recommendation using probabilistic matrix factorization. Proceedings of the17th ACM Conference on Information and Knowledge
Management, 2008.
• Rodriguez T., Di Persia L. E., Milone D. H., et al. Extreme learning machine prediction under high class imbalance in bioinformatics. Latin American Computer Conference, 2017.
• Moreo A., Esuli A., Sebastiani F. Distributional random oversampling for imbalanced text classification. Proceedings of the 39th International ACM SIGIR Conference on Research and Development in
Information, 2016.
About this article
Fault diagnosis based on vibration signal analysis
unbalanced data
collaborative filtering
recommended system
Financial support from National Natural Science Foundation of China (51575178), financial support from Hunan Natural Science Foundation of China (2018JJ2120) and Hunan Power Machinery Research
Institute of AECC.
Copyright © 2020 Huanke Cheng, et al.
This is an open access article distributed under the
Creative Commons Attribution License
, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
|
{"url":"https://www.extrica.com/article/21520","timestamp":"2024-11-14T03:37:37Z","content_type":"text/html","content_length":"145244","record_id":"<urn:uuid:ddb3e646-3088-4dda-8290-aa0638416700>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00590.warc.gz"}
|
Google Quantum AI
Quantum error correction below the surface code threshold
Rajeev Acharya, Laleh Aghababaie-Beni, Igor Aleiner, Trond I Andersen, Markus Ansmann, Frank Arute, Kunal Arya, Abraham Asfaw, Nikita Astrakhantsev, Juan Atalaya, Ryan Babbush, Dave Bacon, Brian
Ballard, Joseph C Bardin, Johannes Bausch, Andreas Bengtsson, Alexander Bilmes, Sam Blackwell, Sergio Boixo, Gina Bortoli, Alexandre Bourassa, Jenna Bovaird, Leon Brill, Michael Broughton, David A
Browne, Brett Buchea, Bob B Buckley, David A Buell, Tim Burger, Brian Burkett, Nicholas Bushnell, Anthony Cabrera, Juan Campero, Hung-Shen Chang, Yu Chen, Zijun Chen, Ben Chiaro, Desmond Chik,
Charina Chou, Jahan Claes, Agnetta Y Cleland, Josh Cogan, Roberto Collins, Paul Conner, William Courtney, Alexander L Crook, Ben Curtin, Sayan Das, Alex Davies, Laura De Lorenzo, Dripto M Debroy,
Sean Demura, Michel Devoret, Agustin Di Paolo, Paul Donohoe, Ilya Drozdov, Andrew Dunsworth, Clint Earle, Thomas Edlich, Alec Eickbusch, Aviv Moshe Elbag, Mahmoud Elzouka, Catherine Erickson, Lara
Faoro, Edward Farhi, Vinicius S Ferreira, Leslie Flores Burgos, Ebrahim Forati, Austin G Fowler, Brooks Foxen, Suhas Ganjam, Gonzalo Garcia, Robert Gasca, Élie Genois, William Giang, Craig Gidney,
Dar Gilboa, Raja Gosula, Alejandro Grajales Dau, Dietrich Graumann, Alex Greene, Jonathan A Gross, Steve Habegger, John Hall, Michael C Hamilton, Monica Hansen, Matthew P Harrigan, Sean D Harrington,
Francisco JH Heras, Stephen Heslin, Paula Heu, Oscar Higgott, Gordon Hill, Jeremy Hilton, George Holland, Sabrina Hong, Hsin-Yuan Huang, Ashley Huff, William J Huggins, Lev B Ioffe, Sergei V Isakov,
Justin Iveland, Evan Jeffrey, Zhang Jiang, Cody Jones, Stephen Jordan, Chaitali Joshi, Pavol Juhas, Dvir Kafri, Hui Kang, Amir H Karamlou, Kostyantyn Kechedzhi, Julian Kelly, Trupti Khaire, Tanuj
Khattar, Mostafa Khezri, Seon Kim, Paul V Klimov, Andrey R Klots, Bryce Kobrin, Pushmeet Kohli, Alexander N Korotkov, Fedor Kostritsa, Robin Kothari, Borislav Kozlovskii, John Mark Kreikebaum,
Vladislav D Kurilovich, Nathan Lacroix, David Landhuis, Tiano Lange-Dei, Brandon W Langley, Pavel Laptev, Kim-Ming Lau, Loïck Le Guevel, Justin Ledford, Kenny Lee, Yuri D Lensky, Shannon Leon, Brian
J Lester, Wing Yan Li, Yin Li, Alexander T Lill, Wayne Liu, William P Livingston, Aditya Locharla, Erik Lucero, Daniel Lundahl, Aaron Lunt, Sid Madhuk, Fionn D Malone· arXiv preprint
arXiv:2408.13687, 2024
Quantum error correction provides a path to reach practical quantum computing by combining multiple physical qubits into a logical qubit, where the logical error rate is suppressed exponentially as
more qubits are added. However, this exponential suppression only occurs if the physical error rate is below a critical threshold. In this work, we present two surface code memories operating below
this threshold: a distance-7 code and a distance-5 code integrated with a real-time decoder. The logical error rate of our larger quantum memory is suppressed by a factor of = 2.14 0.02 when
increasing the code distance by two, culminating in a 101-qubit distance-7 code with 0.143% 0.003% error per cycle of error correction. This logical memory is also beyond break-even, exceeding its
best physical qubit's lifetime by a factor of 2.4 0.3. We maintain below-threshold performance when decoding in real time, achieving an average decoder latency of 63 s at distance-5 up to a million
cycles, with a cycle time of 1.1 s. To probe the limits of our error-correction performance, we run repetition codes up to distance-29 and find that logical performance is limited by rare correlated
error events occurring approximately once every hour, or 3 10 cycles. Our results present device performance that, if scaled, could realize the operational requirements of large scale fault-tolerant
quantum algorithms.
|
{"url":"https://quantum-dot-devsite-v2-prod-3p.appspot.com/research/index_ac97e674cf2653ecc504c308c3d078b396d367def4d478cea83fac97c4d19da4.frame","timestamp":"2024-11-02T15:35:26Z","content_type":"text/html","content_length":"947984","record_id":"<urn:uuid:4dd38cc7-6f68-4c23-8aab-81327171f9ec>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00432.warc.gz"}
|
How To Calculate Momentum using Rust?
To calculate momentum using Rust, you can follow these steps:
1. Define your variables: First, define the variables needed for the calculation. These variables include mass (m) and velocity (v).
2. Write a function to calculate momentum: Create a function that takes mass and velocity as input parameters and returns the calculated momentum. The formula to calculate momentum is P = m * v,
where P is momentum, m is mass, and v is velocity.
3. Implement the function: Implement the function in your Rust program. Here is an example code snippet to calculate momentum:
1 fn calculate_momentum(mass: f64, velocity: f64) -> f64 {
2 let momentum = mass * velocity;
3 momentum
4 }
6 fn main() {
7 let mass = 10.0; // in kg
8 let velocity = 5.0; // in m/s
10 let momentum = calculate_momentum(mass, velocity);
12 println!("The momentum is: {}", momentum);
13 }
1. Run the code: Run the Rust program to calculate momentum based on the given mass and velocity values. The calculated momentum will be displayed in the output.
That's it! You have successfully calculated momentum using Rust.
|
{"url":"https://emediaworld.com/thread/how-to-calculate-momentum-using-rust","timestamp":"2024-11-03T18:39:19Z","content_type":"text/html","content_length":"97518","record_id":"<urn:uuid:5fcecb5c-41ed-46fe-9b3e-ad8c617d840b>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00466.warc.gz"}
|
Multiplication Chart Printables | Multiplication Chart Printable
Multiplication Chart Printables
Free And Printable Multiplication Charts Activity Shelter
Multiplication Chart Printables
Multiplication Chart Printables – A Multiplication Chart is a handy tool for kids to discover how to multiply, divide, and discover the smallest number. There are several usages for a Multiplication
What is Multiplication Chart Printable?
A multiplication chart can be made use of to assist youngsters discover their multiplication truths. Multiplication charts can be found in several types, from full web page times tables to single web
page ones. While private tables serve for presenting pieces of details, a complete page chart makes it easier to examine facts that have currently been understood.
The multiplication chart will normally include a left column and also a top row. When you want to locate the item of two numbers, select the first number from the left column and also the 2nd number
from the leading row.
Multiplication charts are helpful understanding tools for both grownups as well as youngsters. Youngsters can utilize them in the house or in college. Multiplication Chart Printables are available on
the Internet and can be published out and laminated for sturdiness. They are a wonderful tool to make use of in math or homeschooling, as well as will certainly provide an aesthetic tip for
youngsters as they learn their multiplication truths.
Why Do We Use a Multiplication Chart?
A multiplication chart is a diagram that reveals how to increase two numbers. You select the first number in the left column, relocate it down the column, and also then choose the 2nd number from the
top row.
Multiplication charts are useful for several reasons, consisting of helping youngsters discover how to divide as well as streamline portions. They can additionally help youngsters learn exactly how
to pick a reliable common denominator. Multiplication charts can likewise be practical as workdesk sources since they act as a continuous reminder of the student’s progression. These tools assist us
develop independent students that recognize the fundamental concepts of multiplication.
Multiplication charts are also helpful for helping pupils remember their times tables. They help them discover the numbers by minimizing the variety of steps needed to complete each operation. One
approach for remembering these tables is to concentrate on a solitary row or column at once, and then move onto the following one. Eventually, the entire chart will be committed to memory. Similar to
any skill, remembering multiplication tables requires time as well as method.
Multiplication Chart Printables
1 10 Multiplication Chart PrintableMultiplication
Free Multiplication Chart Printable Paper Trail Design
Free Multiplication Chart Printable Paper Trail Design
Multiplication Chart Printables
If you’re searching for Multiplication Chart Printables, you’ve concerned the ideal place. Multiplication charts are offered in different styles, including complete dimension, half dimension, and a
range of adorable styles. Some are vertical, while others include a horizontal layout. You can also discover worksheet printables that consist of multiplication equations as well as mathematics
Multiplication charts as well as tables are vital tools for children’s education. These charts are excellent for usage in homeschool math binders or as classroom posters.
A Multiplication Chart Printables is a helpful tool to strengthen mathematics realities as well as can assist a youngster discover multiplication swiftly. It’s also a terrific tool for miss checking
as well as learning the moments tables.
Related For Multiplication Chart Printables
|
{"url":"https://multiplicationchart-printable.com/multiplication-chart-printables/","timestamp":"2024-11-13T01:27:23Z","content_type":"text/html","content_length":"43274","record_id":"<urn:uuid:29961b07-0f8a-4e67-909e-1daee2072196>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00127.warc.gz"}
|
Dijkstra's Rallying Cry for Generalization
Submitted by egdaylight on
In my previous post I maintained that reasoning like a mathematician helps in order to grasp the history of mathematics. Here I shall support a complementary claim which is less obvious: knowing
important developments in the history of mathematics facilitates mathematical comprehension. Specifically, I engage with fellow scholars and I use Henri Poincaré as an historical actor in an attempt
to explain why teaching mathematics with a historical dimension is desirable. Finally, in the last two paragraphs I briefly mention my own agenda on how to combine history and maths.
Henri Poincaré wrote on mathematical education. First he described mathematical heterogeneity. Quoting from [1, p.120-1] with my numbering:
(1) Many children are incapable of becoming mathematicians who must none the less be taught mathematics ...
(2) [M]athematicians themselves are not all cast in the same mould. We have only to read their works to distinguish among them two kinds of minds—logicians like Weierstrass, for instance, and
intuitionists like Riemann.
(3) There is the same difference among our students. Some prefer to treat their problems "by analysis," as they say, others "by geometry."
(4) Since the word understand has several meanings, the definitions that will be best understood by some are not those that will be best suited to others.
Second, Poincaré addressed the evolution from mathematics which was mostly "devoid of exactness" to the formal rigor of David Hilbert et al. — an evolution which came with a "sacrifice" [1, p.124]:
(5) What [the science of mathematics] has gained in exactness it has lost in objectivity. It is by withdrawing from reality that it has acquired this perfect purity.
Here we see Poincaré scrutinize the work of the modern logician, a topic which I have addressed repeatedly on this blog, e.g., in this recent post. My current understanding is that Poincaré was an
ontological dualist and in a similar way to Einstein [2], as the last sentence in the following quote suggests:
(6) We used to possess a vague notion, formed of incongruous elements, some a priori and others derived from more or less digested experiences, and we imagined we knew its principal properties by
intuition. Today we reject the empirical element and preserve only the a priori ones. One of the properties serves as definition, and all the others are deduced from it by exact reasoning. This is
very well, but it still remains to prove that this property, which has become a definition, belongs to the real objects taught us by experience, from which we had drawn our vague intuitive notion. In
order to prove it we shall certainly have to appeal to experience or make an effort of intuition; and if we cannot prove it, our theorems will be perfectly exact but perfectly useless. [1, p.125, my
Lost in Logic — makes a great title for a book; the preface would go like this:
(7) When the logician has resolved each demonstration into a host of elementary operations, all of them correct, he will not yet be in possession of the whole reality; that indefinable something that
constitutes the unity of the demonstration will still escape him completely [to the extent that the lecturer does not even realize this: read, e.g., my 2021 article].
What good is it to admire the mason's work in the edifices erected by great architects, if we cannot understand the general plan of the master? Now pure logic cannot give us this view of the whole;
it is to intuition we must look for it. [1, p.126, my emphasis]
(My oral histories with Peter Naur and Michael A. Jackson convey a similar view w.r.t. computer programming.) The crux is that students need case studies and lots of intuition in order to appreciate
mathematical definitions, let alone theorems and proofs.
Third, how then did Poincaré propose to teach both the intuition and the rigor pertaining to Mathematics? By resorting to the history of mathematics. In his words:
(8) Zoologists declare that the embryonic development of an animal repeats in a very short period of time the whole history of its ancestors of the geological ages. It seems to be the same with the
development of minds. The educator must make the child pass through all that his fathers have passed through, more rapidly, but without missing a stage. On this account, the history of any science
must be our first guide. [1, p.127, my emphasis]
Mathematical education and history come together splendidly. And since I'm interested in both topics I shall quote Poincaré in full:
(9) Our fathers imagined they knew what a fraction was, or continuity, or the area of a curved surface; it is we who have realized that they did not. In the same way our pupils imagine that they know
it when they begin to study mathematics seriously. If, without any other preparation, I come and say to them: "No, you do not know it; you do not understand what you imagine you understand; I must
demonstrate to you what appears to you evident;" and if, in the demonstration, I rely on premises that seem to them less evident than the conclusion, what will the wretched pupils think? They will
think that the science of mathematics is nothing but an arbitrary aggregation of useless subtleties; ...
Later on, on the contrary, when the pupil's mind has been familiarized with mathematical reasoning and ripened by this long intimacy, doubts will spring up of their own accord, and then your
demonstration will be welcome. It will arouse new doubts, and questions will present themselves successively to the child, as they presented themselves successively to our fathers, until they reach a
point when only perfect exactness will satisfy them. It is not enough to feel doubts about everything; we must know why we doubt. [1, p.128, my emphasis]
Not totally unrelated to Poincaré's narrative is Carlo Rovelli's account of what science (in general) entails: "it's the awareness of our ignorance that gives science its reliability" [3, p.230-1].
Likewise, Evangelos N. Panagiotou's 2011 article, entitled Using History to Teach Mathematics: The Case of Logarithms [4], not only provides seversal reasons why students can benefit from historical
awareness, it also explains how the history of mathematics can be used as a didactical tool. For instance, engaging with Freudenthal [5], Panagiotou writes:
The presentation in the form Definition-Theorem-Proof-Corollary can be elegant and can save time but the students remain with the query: How did the idea for these definitions and theorems come
[about]? According to Freudenthal [5, p.107]: "[T]he basic definitions should not appear in the beginning of an exploration, because in order to define something one should know what this is and also
in what it is useful." [4, p.28]
Several references to the literature are provided in Panagiotou's article. In contrast to most of these references however, my methodological preference is to address a particular episode (in the
history of mathematics) from the vantage points of various historical actors. So, instead of focusing on a detailed chronology (which is useful), I want to convey, say, three very different
receptions of Cantor's diagonal argument: receptions by Georg Cantor himself, Henri Poincaré, and Ernest Hobson — as I shall expound in a forthcoming blog post. To borrow the terminology of my
previous post: Cantor was an actualist (embracing an actual infinity and Platonism, as we say today), Poincaré was a potentialist (eschewing completed infinities), and Hobson was an operational
actualist (reasoning operationally with a completed infinity). Subsequently, I will connect each of these historical actors to a present-day mathematician or computer scientist.
Each of Poincaré's concerns (listed above) will come to the fore in my actor-dependent account of Cantor's diagonal argument. First, mathematical heterogeneity will be illustrated by comparing the
intellectual positions of different actors. Second, the contrast between a modern logician or a set theorist (on the one hand) and an intuitionist or a constructivist (on the other hand) will become
apparent due to my specific choice of historical actors: Cantor versus Poincaré (although the latter was more a semi-intuitionist than a Brouwerian intuitionist). Third, the doubts cast by each actor
onto the writings of his contemporaries will allow students to see different developments of mathematical minds without me having to repeat the whole history of its ancestors. In this sense, then, my
proposal is more practical and, at any rate, quite different both from Poincaré's position conveyed in (8) above and from Panagiotou's educational case study of logarithms [4].
[Last update: 15 September 2022]
1. Henri Poincaré, "Mathematical Definitions and Education" in Science and Method, Thomas Nelson and Sons, 1913.
2. Thomas Ryckman, Einstein, Routledge, 2017.
3. Carlo Rovelli, Reality Is Not What It Seems, Penguin Books, 2016.
4. Evangelos N. Panagiotou, Using History to Teach Mathematics: The Case of Logarithms, Science & Education (2011) 20:1-35.
5. H. Freudenthal (1973). What groups mean in mathematics and what they should mean in mathematical education. In A. G. Howson (Ed.), Developments in mathematical education (pp. 101-114). Cambridge
University Press.
|
{"url":"https://dijkstrascry.com/TeachigMathematicsWithHistory?page=0%2C1","timestamp":"2024-11-03T12:05:08Z","content_type":"text/html","content_length":"40421","record_id":"<urn:uuid:28c58ab3-f1fe-4807-a23a-3abfd5811649>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00072.warc.gz"}
|
Character constant, gives whether the shortest paths to or from the given vertices should be calculated for directed graphs. If out then the shortest paths from the vertex, if in then to it will be
considered. If all, the default, then the corresponding undirected graph will be used, edge directions will be ignored. This argument is ignored for undirected graphs.
|
{"url":"https://www.rdocumentation.org/packages/igraph/versions/1.2.5/topics/eccentricity","timestamp":"2024-11-01T21:01:04Z","content_type":"text/html","content_length":"59280","record_id":"<urn:uuid:b2d0cd16-ed3f-4ad0-a4ae-2f9c97d155ca>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00794.warc.gz"}
|
Gravity Essay: Definition and Major Facts
Gravity represents a fundamental force in the universe that binds all matter and indeed dark matter together. It is a force that can be used to explain the apple falling from the tree to the workings
and orbits of the celestial bodies. The following paper explores the notion of Gravity as it represents such a fundamental force within the universe as well as how it has shaped our understanding of
reality and influences us on a day to day basis.
As every lay person knows, gravity is the force that keeps our feet on the ground which is an amazing feat considering the Earth is flying through the universe at roughly 67,000 miles per hour.
Although Humans do not often question why everything remains ‘stuck’ to the surface of the planet or why when we pour orange juice it doesn’t float to the top of the glass, the answer is simply that
the gravitational pull of the earth attracts all things with any mass towards its center.
Your 20% discount here.
Use your promo and get a custom paper on
"Gravity Essay: Definition and Major Facts".
Order Now
Promocode: custom20
However, gravity as a force affects many more things than the objects on the Earth. For example the moon remains in its orbits around the Earth precisely because it, as an object with mass, is drawn
towards the gravitational pull of the earth (as an object with a larger mass). As well as gravity being able to keep the moon in its rotation around the earth, the gravity of the moon affects the
tides by exerting a level of force on the Earth. This is why the tides are set by the astronomic position of the moon, thus high tides are dictated and predicted by the relative position of the moon
so in spring, when the moon is in a close orbit with the earth, is when the tides are highest. The first person to develop a coherent theory of gravity was Sir Isaac Newton. Newton was a
mathematician and physicist who lived around three centuries ago. Sir Isaac Newton was born in England in 1643 and was a man of many talents who specialized in: astronomy, mathematics, physics, and
alchemy. In 1687 he published his greatest work Philosophiae Naturalis Principia Mathematica. This work is considered to be one of the most influential books in the history of science because it laid
down the fundamental principles of classical mathematics and physics.
The legend of Newton was that he was sitting under an apple tree when an apple fell and hit him on the head which made him consider why object naturally fell, or were attracted towards, the Earth.
What Newton explored was the observable force that affected all object trajectory and speed. After further examination Newton discovered the level and amount of force could be measured and predicted
which gave rise to his theory of gravity. Newton’s basic mathematical model which was called Newton’s Law of Universal Gravitation postulated that every point of mass in the galaxy attracted, to some
degree, every other point of mass in the universe. This is gravity in its most basic form. The ramifications of this theory mean that when a human jumps in the air the mass they lift off the planet
is not only attracted and drawn back to the earth, but to a microscopic degree the earth’s mass is also drawn towards the jumping body. While it seems that a person jumps and is pulled back to the
ground what actually happens at the same time is the earth moves very slightly towards the person.
In the Philosophiae Naturalis Principia Mathematica Newton describes this principle of universal gravitation as well as the three laws of motion. These theories have dominated scientific though and
provided the basic framework for scientific inquiry for over three century’s and are still used to try an explain more complex theories such as string theory and quantum mechanics. At the heart of
Newton’s theory is the notion that the earth and the celestial bodies are all governed by the same set of natural laws which in turn removed the final scientific doubts that the earth was the center
of the universe. In Newtonian physics there are a couple of factors that affect the gravitational force of an object. These are: mass, distance and placement of other bodies of mass.
Thus the amount of gravity on earth varies from point to point. Mass is a particularly important factor to consider as the greater the mass the greater the gravitational pull. Distance between
objects also plays an important role as the greater the distance equals less of a gravitational pull on an object. Gravity has been used to explain the tides, the movement of celestial bodies such as
the moon rotation around the Earth and the Earth’s orbit around the Sun. Gravity is the natural force of attraction, but also affects time itself. When there is a higher level of gravity time moves
faster than normal. This has been shown in many experiments when two stop watches are set off at the same instant, one at the top of a mountain and one at the bottom. Even on earth there is a slight
difference due to the differing levels of gravity experienced with the stopwatch closer to the earth and the one further away. Gravity is a fundamental force that affects us all and is considered one
of the building blocks of all physics and mathematics primarily due to the work of Sir Isaac Newton.
|
{"url":"https://samples.mycustomessay.com/gravity-essay-definition-and-major-facts.html","timestamp":"2024-11-02T11:19:21Z","content_type":"text/html","content_length":"52994","record_id":"<urn:uuid:59b0803c-9ef0-4c21-a49b-93974f6bad9d>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00334.warc.gz"}
|
Statistic math question | Mathematics homework help | COURSE ACHIEVERS
Need Help Writing an Essay?
Tell us about your assignment and we will find the best writer for your paper.
Write My Essay For Me
For each hypothesis test in Problems 5-7, please provide the following information.
( i) What is the level of significance? State the null and alternate hypotheses.
( ii) What sampling distribution will you use? What assumptions are you making? What is the value of the sample test statistic?
( iii) Find ( or estimate) the P- value. Sketch the sampling distribution and show the area corresponding to the P- value.
( iv) Based on your answers in parts ( i) to ( iii), will you reject or fail to reject the null hypothesis? Are the data statistically significant at level a?
( v) Interpret your conclusion in the context of the application.
5. How profitable are different sectors of the stock market? One way to answer such a question is to examine profit as a percentage of stockholder equity. A random sample of 32 retail stocks such as
Toys ‘ ’ Us, Best Buy, and Gap was studied for x1 profit as a percentage of stockholder equity. The result was x1=13.7. A random sample of 34 utility ( gas and electric) stocks such as Boston Edison,
Wisconsin Energy, and Texas Utilities was studied for x2 profit as a percentage of stockholder equity. The result was x2 = 10.1. Assume o1=4.1 and o2 =2.7.
( a) Let m1 represent the population mean profit as a percentage of stockholder equity for retail stocks, and let m2 represent the population mean profit as a percentage of stockholder equity for
utility stocks. Find a 95% confidence interval for m1 – m2.
( b) Examine the confidence interval and explain what it means in the context of this problem. Does the interval consist of numbers that are all positive? all negative? of different signs? At the 95%
level of confidence, does it appear that the profit as a percentage of stockholder equity for retail stocks is higher than that for utility stocks?
c) Test the claim that the profit as a percentage of stockholder equity for retail stocks is higher than that for utility stocks. Use a = 0.01
6. A random sample of 17 wolf litters in Ontario, Canada, gave an average of x1=4.9 wolf pups per litter with estimated sample standard deviations1=1.0. Another random sample of 6 wolf litters in
Finland gave an average of x2=2.8 wolf pups per litter with sample standard deviation s2 =1.2
( a) Find an 85% confidence interval for m1-m2, the difference in population mean litter size between Ontario and Finland.
( b) Examine the confidence interval and explain what it means in the context of this problem. Does the interval consist of numbers that are all positive? all negative? of different signs? At the 85%
level of confidence, does it appear that the average litter size of wolf pups in Ontario is greater than the average litter size in Finland?
( c) Test the claim that the average litter size of wolf pups in Ontario is greater than the average litter size of wolf pups in Finland. Use a = 0.01
7. Locander et al. also studied the accuracy of responses on questions involving more sensitive material than voter registration. From public records, individuals were identified as having been
charged with drunken driving not less than 6 months or more than 12 months from the starting date of the study. Two random samples from this group were studied. In the first sample of 30 individuals,
the respondents were asked in a face- to- face interview if they had been charged with drunken driving in the last 12 months. Of these 30 people interviewed face- to- face, 16 answered the question
accurately. The second random sample consisted of 46 people who had been charged with drunken driving. During a telephone interview, 25 of these responded accurately to the question asking if they
had been charged with drunken driving during the past 12 months. Assume that the samples are representative of all people recently charged with drunken driving.
( a) Let p1 represent the population proportion of all people with recent charges of drunken driving who respond accurately to a face- to- face interview asking if they have been charged with drunken
driving during the past 12 months. Let p2 represent the population proportion of people who respond accurately to the same question when it is asked in a telephone interview. Find a 90% confidence
interval for p1-p2
( b) Does the interval found in part ( a) contain numbers that are all positive? all negative? mixed? Comment on the meaning of the confidence interval in the context of this problem. At the 90%
level, do you detect any differences in the proportion of accurate responses to the question from face- to- face inter-views as compared with the proportion of accurate responses from telephone
interviews? ( c) Test the claim that there is a difference in the proportion of accurate responses from face- to- face interviews compared with the proportion of accurate responses from telephone
interviews. Use a = 0.05
Havent found the Essay You Want?
The Paper is Written from Scratch Specifically for You
WHY CHOOSE courseachievers.com
• Confidentiality & Authenticity Guaranteed
• Plagiarism Free answers Guarantee
• We Guarantee Timely Delivery of All essays
• Quality & Reliability
• Papers Written from Scratch and to Your exact Instructions
• Qualified Writers Only
• We offer Direct Contact With Your Writer
• 24/7 Customer Support
|
{"url":"https://courseachievers.com/statistic-math-question-mathematics-homework-help/","timestamp":"2024-11-11T14:39:41Z","content_type":"text/html","content_length":"63969","record_id":"<urn:uuid:f3b85c9c-fa6f-430e-8f8a-a30e74792dec>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00396.warc.gz"}
|
Volvo TL31, Lastterrängbil 934 (Ltgb 934). 6x6 with limited-slip differentials on all 3 axels. Can drive in 1,1m deep snow! #volvo #armytruck #militaryvehicle
Calculation for the Chi-Square test: An interactive calculation tool for chi-square tests of goodness of fit and independence Kristopher J. Preacher (Vanderbilt University) How to cite this page.
This web utility may be cited in APA style in the following manner: Preacher, K. J. (2001, April).
Eigenvalues and Eigenvectors Calculator for a 6 X 6 Real Matrix This page contains a routine that numerically finds the eigenvalues and eigenvectors of a 6 X 6 Real Matrix. The algorithm is from the
EISPACK collection of subroutines. Use our Concrete Slabs Calculator or Estimator as a guide for how much concrete mix you need for your project. Using the lumber calculator: an example. Let's assume
you want to use this board foot calculator to determine how much you should pay for a few hardwood pieces. Decide on the number of pieces you want to purchase. Let's assume it's five wooden boards.
Select the room's shape, enter your measurements and click the “Calculate” button to 1, Room Capacity Calculator. 2. 3, This calculator can be used to support schools and districts in determining
22, Dimensions, Square Feet, 6 x 6 Block, 6 ft. 4 May 2017 Fill in the blanks below, and click 'Calculate' to determine the right amount of paint for your project with our custom paint calculator.
This calculator estimates the potential number of reps, based on the specified data.
mmCalc is a super simple photography focal length calculator. Simply input your focal length, sensor size, and max aperture and we'll give you what the 35mm equivalent is of that configuration.
AC−B2=6x6y−(−3)2=0−9=−9. Vi får att A C − B 2 < 0 AC-{ B }^{ 2 }<0 AC−B2<0, alltså är punkten ( 0 , 0 ) (0,0) (0,0) en sadelpunkt!
http://bit.ly/subscribe2westenMerch: https:// How to calculate a matrix determinant? For a 2x2 square matrix (order 2), the calculation is:. Trestlewood makes no representations or warranties
whatsoever relative to the accuracy of this calculator (or any of its other calculators) and accepts no liability Use our materials calculator to figure out what you need | 5 Local Locations.
innebär också en extra kostnad - Stämpel på omslaget av loggo eller namnet på det företag; maximal storlek 5x7 cm eller 6x6 cm. - Foto tryckt på metall i färg,
The G 63 AMG 6X6, a small series of which is expected to go into production at the end of the year, combines the best of three worlds: Mercedes-Benz's more TH400 Stage II 800Hp. Red Eagle Racing
Clutches w/Kolene Steel (6x6 Forward/Direct); High Volume Oil Pump; HD Sprag w/New Clutch Drum (Direct clutch) Paper mechanical iris - Iris Calculator.
Ask for prices on 4×4s, 2×4s, and pickets in each wood you are considering. Also, don’t forget to check the price of exterior screws and post-setting concrete.
Linero vårdcentral boka tid
This calculator uses standard 2 x 6 pressure treated lumber for decking available at most hardware stores. It also assumes floor joists are 16 inches apart and 6 x 6 footers (stumps) are 8 feet apart
and the deck is ground level. A Free Online Calculator, Quick and Easy, and Full Screen!
Hem AC−B2=6x6y−(−3)2=0−9=−9.
Vem driver centrum för svensk finkultur
årsta barnmorska uppsalagamla kinesiska uppfinningarembargo act of 1807isometrisk træningjordbruksverket hundregisterörlogsfartyg avison
Use our Concrete Slabs Calculator or Estimator as a guide for how much concrete mix you need for your project.
It also assumes floor joists are 16 inches apart and 6 x 6 footers (stumps) are 8 feet apart and the deck is ground level. A Free Online Calculator, Quick and Easy, and Full Screen! The calculator
will indicate the number of 80 lb bags of QUIKRETE ® Base Coat Stucco and Finish Coat Stucco you will need to construct your stucco wall using a traditional 3 coat or 2 coat application process.
The Linear Algebra Calculator is here! This application provides you with an opportunity to check your work or perform quick calculations. Capabilities:
In the training program, we can often see the exercise load as a percentage TIP: Carpet is quoted either in lineal metres or square metres so ensure that you are comparing like for like. As carpet
width is usually 3.66m, a price in lineal Coverage Calculator - Measurements: Imperial. Fencing. Wood stains for Fencing.
Calculates p-value, power, Enter the Length and Width of an area and output square feet and square meters. A Punnett Square shows the genotypes two individuals can produce when crossed. To draw a
square, write all possible allele combinations one parent can To find out how much ArtResin you will need for a single 1/8" coat, input the length and width of your piece here! Let the calculator
do the rest :) Depth of Field Calculator. Camera, film format, or circle of confusion. Canon 30D, 20D, 20Da, 10D, Canon 60D, 60Da, 50D, 40D, Canon D60, D30, Canon League Settings · Teams: · Budget:
· Min Bid: · Bat Split ?: · League: · Projection: · Experimental ?: How to calculate square footage? It's easy.
|
{"url":"https://hurmanblirrikknzg.firebaseapp.com/82610/64559.html","timestamp":"2024-11-05T17:18:20Z","content_type":"text/html","content_length":"10312","record_id":"<urn:uuid:226cdd3c-e79f-4d45-814e-6689f2359816>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00066.warc.gz"}
|
The Mystery of the Right Triangle Area
What is the area of a right triangle with legs labeled as eight times x and seven times x if x = 4 cm?
A. 16 cm^2
B. 240 cm^2
C. 448 cm^2
D. 896 cm^2
The correct area of the right triangle is 448 cm^2.
Right triangles are fascinating geometric figures that have specific properties when it comes to calculating their area. In this case, we are presented with a right triangle whose legs are labeled as
eight times x and seven times x. Given that x is equal to 4 cm, we can easily determine the dimensions of the triangle.
First, we substitute the value of x into the expressions for the legs of the triangle. The base of the triangle, labeled as eight times x, becomes 8 * 4 cm = 32 cm. Meanwhile, the height of the
triangle, labeled as seven times x, becomes 7 * 4 cm = 28 cm.
Next, we utilize the formula for calculating the area of a right triangle, which is 1/2 * base * height. By plugging in the values we found for the base and height, we get: Area = 1/2 * 32 cm * 28 cm
= 448 cm^2.
Therefore, the area of the right triangle with legs labeled as eight times x and seven times x, when x is equal to 4 cm, is 448 cm^2. This calculation showcases the importance of understanding
geometric formulas and how to apply them effectively.
|
{"url":"https://airdocsolutions.com/physics/the-mystery-of-the-right-triangle-area.html","timestamp":"2024-11-05T16:12:26Z","content_type":"text/html","content_length":"21036","record_id":"<urn:uuid:64642127-8d86-4da8-9532-c972f942b427>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00728.warc.gz"}
|
Atomistic Assessment of Solute-Solute Interactions during Grain Boundary Segregation
Department of Materials Science and Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA
Author to whom correspondence should be addressed.
Submission received: 20 August 2021 / Accepted: 8 September 2021 / Published: 11 September 2021
Grain boundary solute segregation is becoming increasingly common as a means of stabilizing nanocrystalline alloys. Thermodynamic models for grain boundary segregation have recently revealed the need
for spectral information, i.e., the full distribution of environments available at the grain boundary during segregation, in order to capture the essential physics of the problem for complex systems
like nanocrystalline materials. However, there has been only one proposed method of extending spectral segregation models beyond the dilute limit, and it is based on simple, fitted parameters that
are not atomistically informed. In this work, we present a physically motived atomistic method to measure the full distribution of solute-solute interaction energies at the grain boundaries in a
polycrystalline environment. We then cast the results into a simple thermodynamic model, analyze the Al(Mg) system as a case study, and demonstrate strong agreement with physically rigorous hybrid
Monte Carlo/molecular statics simulations. This approach provides a means of rapidly measuring key interactions for non-dilute grain boundary segregation for any system with an interatomic potential.
1. Introduction
Nanocrystalline metals exhibit a wide range of useful properties that often exceed what is achievable at the microscale [
]. However, they are often in unstable, nonequilibrium states due to a high concentration of grain boundaries (GBs) that contribute to the free energy of the system and create an increasingly large
driving force for grain growth at the nanoscale [
]. Alloying can provide a means of thermodynamically stabilizing the nanocrystalline state by lowering the grain boundary energy via grain boundary solute segregation [
]. This thermodynamic approach has been gaining increased attention in recent years compared to kinetic methods of stabilization [
], due to its reliability and relatively simple design space, which requires only thermodynamic knowledge of the alloy system.
Prior work in this area has focused on the development of models that can predict the segregated state of alloy systems. For example, there are a number of isotherm models that predict
solute concentrations [
], including those of McLean [
], Fowler-Guggenheim [
], Guttman [
], and Wynblatt and Chatain [
]. In more recent years, this approach has been extended to specifically consider the nanocrystalline state with regular solution, lattice Monte Carlo, and phase-field models [
]. However, a major shortcoming of most all such models is their use of a single segregation energy to characterize the entire grain boundary network, which in reality has a complex diversity of
segregation sites. The shortcomings of this assumption were recently analyzed and corrected by Wagih and Schuh [
]. Taking inspiration from the works of White and Stein [
] and Kirchheim [
], they used a spectral McLean-type isotherm, in which each atomic grain boundary site has its own dilute limit segregation energy, and calibrated it directly to atomistic results on nanocrystalline
Wagih and Schuh demonstrated that the spectral approach achieves significantly better agreement with the results of full atomistic simulations on Al-Mg polycrystals. More recently, they addressed the
issue of solute-solute interactions [
], a necessary consideration away from the dilute limit, when solutes begin to interact in the GBs, locally affecting
segregation there. In the spectral model, there is a wide range of such interactions, and rather than treat them all, Wagih and Schuh showed that the addition of a single, fitted interaction energy
(assumed relevant to all sites) could account for non-dilute interactions in an average sense, with good agreement to the overall segregated solute concentrations [
]. However, because the parameter calculated by Wagih and Schuh was simply fitted to the results of atomistic simulations, it is not derived from atomistic-level physics directly. As a result, it is
not generalizable without expensive computations on each individual alloy.
The focus of this work is therefore to seek a physically motivated atomistic method to assess solute interactions during grain boundary segregation, in a way that acknowledges the wide diversity of
sites and can be easily incorporated into existing spectral isotherm models. For the Al-Mg system, we show how atomistic simulations can be used to measure the full spectrum of solute interactions
over the full spectrum of segregation sites in a polycrystal. The results of such simulations lead to a simple hypothesized general form for GB solute interactions for future modeling efforts.
2. Thermodynamics of Grain Boundary Segregation
2.1. Free Energy vs. Enthalpy of Segregation
A rigorous thermodynamic treatment of
segregation must consider the Gibbs free energy of segregation [
$Δ G s e g$
. The segregation free energy includes not only the enthalpic contribution considered above,
$Δ E s e g$
, but also a work term,
$− P Δ V$
, where
is the pressure and
$Δ V$
is the volume change, as well as a vibrational entropy term,
$− T Δ S s e g v i b$
], such that the free energy of segregation is given as:
$Δ G s e g = Δ E s e g − P Δ V − T Δ S s e g v i b .$
However, the vibrational entropy component of
segregation is generally not well understood, and can be neglected at reasonably low temperatures, as we do here. Furthermore,
$P Δ V$
is generally negligible in solids [
], and is neglected here. Self-consistency is achieved by using only enthalpic measurements of the segregated states at 0 K via conjugate gradient minimization. Thus, even though configurational
space is sampled at finite temperature during the following simulations, vibrational contributions are consistently neglected, and we can assume that
$Δ G s e g ≈ Δ E s e g$
in the isotherms presented below.
2.2. Classical Segregation Models
The first isotherm for grain boundary segregation was proposed by McLean [
], in which the segregation energy is taken to be a single average parameter,
$Δ E ¯ s e g$
, given as the difference in energy of the full system when a solute,
, occupies a grain boundary site,
$E G B B$
, vis-à-vis a bulk site,
$E c B$
$Δ E ¯ s e g = E G B B − E c B$
This approach assumes that the segregation energy,
$Δ E ¯ s e g$
, is independent of grain boundary character (or the site occupied by the solute), solute concentration, and temperature (T), resulting in McLean’s isotherm [
$X ¯ G B 1 − X ¯ G B = X c 1 − X c exp ( − Δ E ¯ s e g k T )$
$X ¯ G B$
is the average solute concentration in the
$X c$
is the concentration in the bulk, and
is Boltzmann’s constant.
To extend this treatment beyond the dilute limit, Fowler and Guggenheim accounted for concentration dependence of the segregation energy via the addition of a single interaction parameter based on a
heat of mixing in the
$Ω G B$
$X ¯ G B 1 − X ¯ G B = X c 1 − X c exp ( − Δ E ¯ s e g + 2 Ω G B X ¯ G B k T )$
which assumes that solute interactions in the bulk are negligible, due primarily to the assumption of relatively large, dilute grains, and thus relatively constant, dilute values of
$X c ≈ X t o t$
, where
$X t o t$
is the total system solute concentration. This assumption can be corrected with the addition of a term that includes the bulk heat of mixing,
$Ω c$
]. This term appears consistently in more recent models that explicitly consider the nanocrystalline grain sizes [
], and when combined with the mixture rule, where
$X t o t$
is fixed and
$X c$
$X ¯ G B$
can vary dependently as [
$X t o t = ( 1 − f G B ) X c + f G B X ¯ G B ,$
results in the complete isotherm for nanocrystalline alloys:
$X t o t = ( 1 − f G B ) X c + f G B [ 1 − 1 − X c X c exp ( Δ E ¯ s e g − 2 Ω G B X ¯ G B + 2 Ω c X c k T ) ] − 1$
$f G B$
is the volume fraction of the grain boundary, and is typically related to the grain size,
, and grain boundary thickness,
, by the equation:
$f G B = 1 − ( d − t d ) 3 .$
Assuming only nearest-neighbor contributions for solvent
and solute
, the heat of mixing can be represented as:
$Ω s = 1 2 z s w s = 1 2 z s ( E A − B s − E A - A s + E B - B s 2 )$
refers to either the
or the bulk,
is the atomic coordination, and
$E A - B s$
$E A - A s$
, and
$E B - B s$
are the bond energies of
$A - B$
$A - A$
, and
$B - B$
bonds, respectively.
2.3. Spectral Segregation Models
Following the density of sites approach introduced by White and Stein [
] and Kirchheim [
], Wagih and Schuh developed a spectral model for grain boundary segregation, which assumes that each atomic grain boundary site has its own dilute limit segregation energy. Assuming a McLean-type
contribution from each site type
with dilute limit segregation energy
$Δ E i s e g$
, and accounting for the mixture rule of Equation (5), Wagih and Schuh’s spectral isotherm is given as an integral over segregation energies [
$X t o t = ( 1 − f G B ) X c + f G B ∫ − ∞ ∞ F i G B [ 1 + 1 − X c X c exp ( Δ E i s e g k T ) ] − 1 d ( Δ E i s e g )$
$F i G B$
is the density of sites of type
, and was shown by Wagih and Schuh to follow a roughly skew-normal distribution for general polycrystals:
$F i G B = 1 2 π σ exp [ − ( Δ E i s e g − μ ) 2 2 σ 2 ] erfc [ − α ( Δ E i s e g − μ ) 2 σ ]$
, and
are the fitted shape, location, and breadth of the dilute limit segregation energy distribution, respectively. The values of these parameters for several hundred binary alloys have been presented in
reference [
Following from Equation (4), this spectral isotherm can be adapted to account for solute interactions in the grain boundary with a single Fowler-type interaction parameter:
$X t o t = ( 1 − f G B ) X c + f G B ∫ − ∞ ∞ F i G B [ 1 − 1 − X c X c exp ( Δ E i s e g − 2 Ω G B X ¯ G B k T ) ] − 1 d ( Δ E i s e g ) .$
Wagih and Schuh showed that for the Al-Mg system, the grains remain dilute even as the
segregation raises the concentration locally at the boundary, leading to a significant effect via
$Ω G B$
; thus, a single fitted value of
$Ω G B$
provided a reasonably accurate description of full atomistic simulations beyond the dilute limit [
]. For other nanocrystalline alloys, the bulk concentration may vary more significantly, so for completeness it is appropriate to use both
and bulk contributions to the interactions, as in Equation (6). Thus, the isotherm of Equation (9) can be extended to account for non-dilute interaction as follows:
$X t o t = ( 1 − f G B ) X c + f G B ∫ − ∞ ∞ F i G B [ 1 − 1 − X c X c exp ( Δ E i s e g − 2 Ω ¯ G B X ¯ G B + 2 Ω c X c k T ) ] − 1 d ( Δ E i s e g )$
$Ω ¯ G B$
$Ω c$
are the average heat of mixing parameters of the grain boundary and bulk, respectively. The overbar on the former term is introduced to acknowledge that this
$Ω G B$
is no longer formally a single parameter in the spectral model, as there are many sites with unique behaviors. Assessing the average value over many sites from atomistic information will be the major
focus of our efforts below.
3. Atomistic Simulation Methods
3.1. Production of Pure Al Polycrystal
A cubic polycrystal of pure aluminum was produced, with dimensions of (10 nm)
, 60,367 total atoms, and 10 grains of random orientation with an average diameter of 6 nm (
Figure 1
). The polycrystal was randomly initialized via Voronoi tessellation using the toolkit Atomsk (Version b0.11.1, University of Lille, Villeneuve d’Ascq, France) [
], followed by structural relaxation with conjugate gradient minimization. The polycrystal was then thermally annealed in an isothermal isobaric ensemble with a Nose-Hoover thermostat/barostat, at
zero pressure and a temperature of 600 K for 0.5 ns. Finally, the polycrystal was cooled to 0 K over 0.25 ns, followed by a final conjugate gradient minimization.
An image of the grain boundary network is shown in
Figure 1
, using polyhedral template matching to identify non-face-centered-cubic (non-FCC) regions in the Open Visualization Tool OVITO (Version 3.5.0, Darmstadt University of Technology, Darmstadt, Germany)
]. All simulations here and in the remainder of this work were performed with the LAMMPS simulation package (Version 7Aug19, Sandia National Laboratories, Albuquerque, NM, USA) [
] and use the embedded atom method (EAM) potential by Mendelev for Al-Mg [
Here it should be noted that the (10 nm)
polycrystal used in this work, at an average grain size of 6 nm, is significantly smaller than the (15 nm)
and (36 nm)
polycrystals used by Wagih and Schuh previously [
], with grain sizes of 9 and 12 nm, respectively. However, preliminary work in analyzing the grain size dependence of the segregation energy distribution indicates that changes in the distribution
with respect to grain size are due primarily to the increased presence of triple junctions and quadruple nodes at smaller grain sizes. While this effect is non-negligible, for most alloys, including
Al-Mg, the effective difference in segregation energy when decreasing the grain size from 12 nm to 6 nm is of at least an order of magnitude less than the effective segregation energy itself.
3.2. Dilute Limit Segregation Energy Distributions
The Al-Mg system studied in this work was chosen for the strong agreement between its available interatomic potential [
] and density functional theory [
] when calculating segregation energies, and because it has been previously used for spectral
segregation analysis [
]. To compute the dilute limit segregation energy distribution of the Al polycrystal, we follow the procedure of Wagih and Schuh [
]. We compute the energy difference between the fully relaxed polycrystal with a single solute atom,
, at
$E G B , i B$
, or at a bulk site in the center of the largest grain,
$E c B$
$Δ E ¯ i s e g = E G B , i B − E c B$
and systematically test every site lacking FCC coordination. The resulting discrete distribution for Al-Mg is thus shown in
Figure 2
, with a skew-normal function fitted to Equation (10) overlaid. The distribution calculated here is skew-left, spans from approximately −60 to 40 kJ/mol, and has a mean of −6.82 kJ/mol, all of which
are in excellent agreement with the distribution calculated previously by Wagih and Schuh for a (36 nm)
polycrystal [
Because the isotherm models presented in
Section 2
assume random mixing in the grain boundary in order to derive the linear interaction parameters that we are attempting to measure, it is necessary to demonstrate that random mixing is a reasonable
assumption to make for the Al-Mg polycrystal used in this work. Wagih and Schuh have already shown using a two-point correlation function that, for random polycrystals with general grain boundaries,
such as those used in this work, grain boundary sites of a given segregation energy are approximately randomly distributed along the grain boundary network in Al-Mg [
]. To demonstrate this in a simple manner,
Figure 3
plots the relationship between the segregation energy of a given grain boundary site, and the average segregation energy of its nearest neighbors, identified using a Voronoi analysis. For this Al-Mg
polycrystal, it is readily apparent that there is little to no correlation between a site’s segregation energy, and those of its nearest neighbors. This, in combination with the random distribution
of segregation energies along the
network, indicates that random mixing in the grain boundary is a reasonable approximation from which to assess solute interactions in the
for this system.
It should be stressed, however, that such a random distribution of solutes is achieved generally only in the case of mild solute-solute interactions at the grain boundary. This condition occurs when
the segregation energy dominates over the interaction energy—for segregation energy distributions with particularly large negative tails, and at concentrations low enough to access primarily those
sites—or at temperatures high enough to thermalize the interactions (but not
segregation itself) and achieve some semblance of random mixing. If the interactions are stronger, random mixing may not occur at relevant temperatures. For example, we have found that in systems
with strong attractive interactions the solutes readily cluster upon
segregation, as a prelude to outright phase separation. For the present analysis, the competition between second phase formation and
segregation is explicitly not of interest (although it has been addressed in prior work in the dilute limit [
] and we will address it in our future work beyond the dilute limit). Future work should address in more detail how a given system may be explored to achieve these conditions; for the moment we can
proceed with confidence that the Al-Mg system is a viable case study for the proposed model.
3.3. The True Equilibrium Segregation State: Hybrid MC/MS
To evaluate the predictions of the procedure proposed in this work, it is necessary to obtain the equilibrated segregation state of our Al-Mg polycrystal with finite solute content. This is done
using a standard Monte Carlo (MC) procedure at a finite temperature to sample configurational space, in combination with molecular statics relaxations [
]. The Al polycrystal shown in
Figure 1
was randomly populated with Mg solute, at concentrations of
$X t o t$
up to 10 percent. One step in the hybrid MC/MS procedure, referred to as one MC step, was conducted as a series of micro-MC steps at finite temperature, followed by a full-system relaxation at 0 K
and constant pressure. Each micro-MC step consisted of a Monte-Carlo swap, attempted with a probability given by the metropolis criterion at 600 K, using the EAM potential for all energy evaluations.
6000 micro-MC steps were attempted per MC step in the hybrid MC/MS procedure. 1000 to 2000 MC steps, scaling linearly with total solute concentration, were conducted to reach adequate convergence in
both system energy and solute distribution.
The final state of the system after this process is taken as the true equilibrium segregation state, from which the final solute distribution is measured. An example equilibrated polycrystal of Al-Mg
$X t o t$
= 0.05 is shown in
Figure 4
a. The distribution of occupied sites is shown in red in
Figure 4
b, and resembles prior work on this system from Ref. [
]. These occupation distributions represent the true equilibrium segregation state, which we intend to understand in terms of Equation (12).
The resulting equilibrium grain boundary solute concentration,
$X G B$
, is plotted as a function of
$X t o t$
, shown as red points in
Figure 4
c. In the work of Wagih and Schuh [
], Equation (11) was simply fitted to simulation results such as these, treating the solute interaction parameter(s) as unknown constants. Following this same approach here, as shown in
Figure 4
c, results in a value of
$Ω G B$
= −22.86 kJ/mol. For comparison, a McLean-style isotherm is plotted in green, using an effective segregation energy,
$Δ E ¯ e f f s e g$
= −26.5 kJ/mol, fitted from Equation (3) in the dilute limit. Equation (9), which includes the effect of the segregation energy spectrum in the dilute limit, is also shown in blue.
This result, while physically motivated by the work of Fowler and Guggenheim [
], is ultimately a fitted parameter that is not derived from atomistic-level physics directly, and requires relatively expensive simulations to compute. Additionally, the use of a single interaction
parameter does not explicitly separate the interaction contributions from the bulk and grain boundary. Our goal here is to instead seek a direct atomistic assessment of those parameters, and success
will be measured by our ability to reproduce the true segregation state in
Figure 4
3.4. Grain Boundary Heat of Mixing Distributions
Use of the isotherm given in Equation (12) requires knowledge of an average heat of mixing parameter for both the bulk and grain boundary. We are not aware of any prior measurement of the full
distribution of the heat of mixing across all grain boundary sites, so we proceed to make one here. To separate the contributions of coordination and bond energy distributions in the grain boundary,
we calculate the per-bond parameter $w G B$, as given in Equation (8), in addition to the coordination of each GB site.
The coordination of each grain boundary site is calculated via Voronoi analysis in the OVITO visualization tool.
$w G B$
is then extracted for each nearest neighbor bond of each grain boundary site, including
-bulk bonds, in the following manner. For a given
and nearest neighbor site
, the per-atom energy of atom
in the fully relaxed polycrystal,
$E i j , I J G B$
, is calculated for each of the example 2D configurations shown in
Figure 5
, where atoms
can be occupied by either a solvent atom
or solute atom
, and the energy of each configuration is given as:
$E i j , x y G B = 1 2 [ ( z i G B − 1 ) E y − x G B + E i j , x − y G B ]$
can be either solute A or solvent B in the four possible permutations shown in
Figure 5
, and the per-bond parameter
$w i j G B$
for bond
$i - j$
can be calculated as:
$w i j G B = ( E i j , A − B G B − E i j , A A G B + E i j , B B G B 2 ) = E i j , B A G B − E i j , B B G B + E i j , A B G B − E i j , A A G B .$
The parameter $w i j G B$ can then be averaged over each nearest neighbor for a given GB site $i$ to obtain an average per-site parameter $w i G B$. This value can in turn be combined with the atomic
coordination of the site to obtain the per-site heat of mixing parameter, $Ω i G B$, and thus the full heat of mixing distribution of the grain boundary. Here, it should be noted that the heat of
mixing parameters calculated effectively assume the structure of the pure solvent A—in either the grain boundary or bulk, respectively—as the reference state for both components A and B.
Following this procedure for a bulk site in the interior of a fully relaxed 16 × 16 × 16 supercell of FCC Al, values for the grain interior of
$z c = 12$
$w c =$
−4.72 kJ/mol, and
$Ω c =$
−28.32 kJ/mol were obtained. Then, following this procedure for the GBs, we achieve the distribution shown in
Figure 6
a. This per-site parameter exhibits a roughly skew-normal distribution, similar to the segregation energy spectrum itself, with an average value of
$w ¯ G B =$
−3.78 kJ/mol. We note that this spectrum confirms our earlier observations about the modest nature of solute-solute interactions in Al-Mg, as it is far less energetic than the
segregation spectrum itself (cf.
Figure 2
); this means that at low temperatures, thermal energy is enough to randomize the solute-solute interactions in the GBs but not to desegregate them, achieving exactly the random mixing conditions
required to evaluate solute interactions.
To directly compare the measured per-site parameter
$w i G B$
with the fitted parameter
$Ω G B$
, we must also account for the atomic coordination
$z i G B$
of each
site, as per Equation (8). However,
$w i G B$
$z i G B$
are not necessarily independent. Thus, to explicitly separate the contributions due to coordination and bond energy distributions, the atomic coordination distribution of the grain boundary,
$z i G B$
, was also measured and is shown in
Figure 6
b. When plotting
$w i G B$
as a function of
$z i G B$
, as shown in
Figure 6
c, it is readily apparent that the spread of
$w i G B$
varies significantly with atomic coordination. However, there is very little overall correlation between the two, so their rigorous site-wise combination to produce a spectrum as in
Figure 6
d, followed by averaging, produces much the same result as first averaging each distribution and then using Equation (8) subsequently. This analysis gives an average heat of mixing parameter for the
regions as
$Ω ¯ G B =$
−27.10 kJ/mol.
4. Discussion
The results in
Figure 6
represent what we believe to be the first atomistic measurement of the full spectrum of solute-solute interaction effects during
segregation in a polycrystal. As such, they permit a very detailed level of analysis of the
segregation state beyond the dilute limit. For example, in the spirit of exhaustive rigor, we might consider an isotherm analysis on the basis of both the spectrum of segregation energies and the
spectrum of solute interactions across the
, combined together in a self-consistent probabilistic model. This is explored in
Figure 7
a, where the per-site dilute limit segregation energy and interaction parameter are cross-compared, and together apparently constitute a single 2D distribution function with a single central peak.
Such a distribution could be modeled by, e.g., a bivariate normal (or skew-normal) distribution [
]. Equation (9) might therefore be modified to include an integral over the joint probability density of the segregation and interaction energies. The skewness is small in the present case, so a
bivariate normal distribution is appropriate, and has the following form:
$F i j G B = 1 ( 2 π ) 2 | Σ | exp [ − 1 2 ( x − µ ) T Σ − 1 ( x − µ ) ]$
$F i j G B$
varies with the vector quantities
, where
contains the segregation and interaction energies and
their means, and
is their covariance matrix. For Al-Mg, we find the bivariate normal parameters to be
$µ = [ Δ E ¯ s e g , w ¯ G B ]$
, where
$w E ¯ s e g$
= −7.10 kJ/mol is the mean segregation energy and
$Δ ¯ G B$
= −3.78 kJ/mol is the mean interaction energy, with a covariance matrix given by:
$Σ = [ 244.04 4.76 4.76 3.86 ] kJ / mol .$
Performing an integration over both the segregation energy and interaction energy produces the following isotherm:
$X t o t = ( 1 − f G B ) X c + f G B ∫ − ∞ ∞ ∫ − ∞ ∞ F i j G B [ 1 − 1 − X c X c exp ( Δ E i s e g − 2 Ω ¯ j G B X ¯ G B + 2 Ω c k T ) ] − 1 d ( Ω ¯ j G B ) d ( Δ E i s e g ) .$
Equation (17) can be readily solved numerically, and the resulting occupation distribution and isotherm are shown in magenta in
Figure 8
for Al-Mg. When this fully atomistic solution is compared with the single-parameter Fowler-like fit in the details of the atomic site distributions (
Figure 8
a), it is clear that the full bivariate distribution more accurately captures the distribution at equilibrium. It also credibly reproduces the trend of the isotherm in
Figure 8
b with no fitting parameters. Interestingly, though, the conformity in
Figure 8
b is not better than can be achieved with direct fitting. Thus, even though the full bivariate distribution approach may be more rigorous, it may not dramatically improve predictive power over a
simple linear interaction term, if one is concerned only with the average
solute concentration and does not care about the details of site occupation. Since the full bivariate spectrum approach adds significantly more computational complexity, an atomistically-informed
single parameter model may be a preferred solution. Introducing the directly atomistically measured average values of
$Ω ¯ G B$
$Ω c$
into Equation (12) achieves the results shown by black lines in
Figure 8
; the result is a reasonable compromise between accuracy and speed.
One additional result is provided by the dashed black line in
Figure 8
b. This is the prediction of Equation (12) if only the solute-solute interactions in the
are considered, and not the bulk interactions. As anticipated, in Al-Mg, this effect is relatively small, but not insignificant, especially at higher concentrations, indicating the need to account
for both the grain boundary and bulk contributions in the general case.
The above analysis shows that the general approach of using a Fowler-like composition-dependent correction to the spectral model, as proposed by Wagih and Schuh, is indeed an excellent compromise
between simplicity and accuracy to capture
segregation beyond the dilute limit. However, the manner of its use proposed by those authors is computationally cumbersome: in order to rigorously compute the true segregation state in
Figure 8
by MC/MS and then fit the interaction parameter takes on the order of 200 h of compute time on a system using graphics processing unit (GPU) accelerated potential calculations with a Nvidia Quadro
P4000 graphics card (NVIDIA, Santa Clara, CA, USA) and an Intel i7 4770 K processor (Intel, Santa Clara, CA, USA). In contrast, knowing that only a single average interaction value is needed, the
present method based on direct atomistic sampling of solute-solute interactions becomes remarkably efficient. Rather than obtain the entire interaction spectrum as in
Figure 6
, we may instead take small samples to obtain just its mean. For the distribution shown in
Figure 6
d, the standard deviation of the distribution is
$σ Ω G B$
= 14.2 kJ/mol, and a sample size of
= 100
sites would be sufficient to reduce the standard error of the distribution mean to
$σ Ω ¯ G B$
= 1.42 kJ/mol. These hundred computations would take about 1/100 the time of the MC/MS approach above. In future work we hope to apply this advance to rapidly screen solute-solute interactions for
many alloys.
5. Conclusions
Recent progress in accounting for the full spectrum of GB segregation sites has brought new clarity to the dilute limit situation but left the important topic of solute-solute interactions at higher
concentrations in need of development. Here we have explored the natural extension of the spectral model for GB segregation by assessing a comparable distribution of solute-solute interaction
energies. The method presented here has provided what is, to our knowledge, the first measurement of the full spectrum of solute-solute interaction energies at the GB. The spectrum of interaction
energies follows a roughly skew-normal distribution for the Al-Mg system analyzed here, and when combined with the existing segregation energy distribution constitutes a full bivariate (skew-) normal
distribution that describes the GB beyond the dilute limit.
A full bivariate normal distribution of site and interaction energies provides an excellent prediction of the solute distribution at equilibrium, as validated against rigorous hybrid Monte Carlo/
Molecular statics simulations, both on average and over the full spectrum of GB sites. Importantly, though, in the present case the interactions can be approximated by a scalar average over their
full distribution and still achieve reasonable accuracy for many practical problems. This compromise is one that has the benefit of being fully atomistically informed, but less computationally
intensive. This work thus paves the way to use simple, inexpensive atomistic measurement to predict solute interaction behavior during grain boundary segregation.
Author Contributions
Conceptualization, C.A.S.; methodology, T.P.M.; validation, T.P.M.; formal analysis, T.P.M.; investigation, T.P.M.; writing—original draft preparation, T.P.M.; writing—review and editing, T.P.M.,
C.A.S.; visualization, T.P.M.; supervision, C.A.S.; project administration, C.A.S.; funding acquisition, C.A.S. All authors have read and agreed to the published version of the manuscript.
This research was primarily funded by the US Department of Energy, Office of Basic Energy Sciences under grant DE-SC0020180. This material is based upon work supported by the National Science
Foundation Graduate Research Fellowship under Grant No. 1745302.
We would like to acknowledge Malik Wagih and Nutth Tuchinda (both of MIT) for valuable discussions that helped guide the interpretation of the results presented in this work.
Conflicts of Interest
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the
decision to publish the results.
Figure 1. Visualization of the grain boundary network of the pure Al polycrystal after relaxation and annealing, with dimensions of (10 nm)^3, 10 randomly oriented grains of average diameter 6 nm,
and 60,367 total atoms.
Figure 2. Dilute limit segregation energy distribution for Al-Mg, calculated from the (10 nm)^3 polycrystal, with a fitted skew-normal distribution overlaid.
Figure 3. Correlation plot of the average segregation energy of the nearest neighbors of a given grain boundary site vs. the segregation energy of that site for Al-Mg in the (10 nm)^3 polycrystal.
Figure 4. (a) Al-Mg polycrystal with 5% total solute, equilibrated with hybrid MC/MS at 600 K. (b) Segregation energy distribution with the equilibrium occupied distribution shown in red. Predicted
occupied distribution is shown for the dilute case (Equation (9) (blue)). (c) For the (10 nm)^3 Al-Mg polycrystal: McLean-style isotherm with effective segregation energy $Δ E ¯ e f f s e g$ = −26.5
(Equation (3) (green)), dilute limit spectral isotherm (Equation (9) (blue)), and polycrystal equilibrated via MC/MS, with a fitted linear interaction parameter $Ω G B$ = −22.86 kJ/mol (Equation (11)
Figure 5. Example 2d atomic configurations used to calculate the per-bond parameter $w i j G B$ for bond $i - j$, by measuring the per-atom energy of atom I in the fully relaxed polycrystal, $E i j ,
I J G B$, where atoms I and J can be either solvent A or solute B.
Figure 6. For every GB site in the (10 nm)^3 Al-Mg polycrystal: (a) Atomic coordination of every GB site. (b) Correlation plot of atomic coordination and per-site parameter $w i G B$. (c) Average
per-site parameter $w i G B$. (d) Average per-site heat of mixing parameter $Ω i G B$.
Figure 7.
) 2D histogram of the dilute limit segregation energy and per-site interaction parameter
$w i G B$
, exhibiting a bivariate skew-normal distribution. (
) Bivariate normal distribution fitted to the data depicted in
Figure 7
Figure 8. (a) For the (10 nm)^3 Al-Mg polycrystal: isotherm for the polycrystal equilibrated via MC/MS at 600 K, with a fitted linear interaction parameter $Ω G B$ = −22.86 kJ/mol (Equation (11)
(red)), spectral isotherm with the average bulk interaction parameter $Ω c =$ −28.32 kJ/mol and average grain boundary interaction parameter $Ω ¯ G B =$ −27.10 kJ/mol (Equation (12) (solid black)),
and spectral isotherm with fitted bivariate normal distribution (Equation (17) (magenta)). (b) Equilibrium occupied distribution, with predicted occupation distributions using: a fitted linear
interaction parameter $Ω G B$ = −22.86 kJ/mol (Equation (11) (red)), average interaction parameters $Ω c =$ −28.32 kJ/mol and $Ω ¯ G B =$ −27.10 kJ/mol (Equation (12) (black)) and the full fitted
bivariate normal distribution (Equation (17) (magenta)).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:
Share and Cite
MDPI and ACS Style
Matson, T.P.; Schuh, C.A. Atomistic Assessment of Solute-Solute Interactions during Grain Boundary Segregation. Nanomaterials 2021, 11, 2360. https://doi.org/10.3390/nano11092360
AMA Style
Matson TP, Schuh CA. Atomistic Assessment of Solute-Solute Interactions during Grain Boundary Segregation. Nanomaterials. 2021; 11(9):2360. https://doi.org/10.3390/nano11092360
Chicago/Turabian Style
Matson, Thomas P., and Christopher A. Schuh. 2021. "Atomistic Assessment of Solute-Solute Interactions during Grain Boundary Segregation" Nanomaterials 11, no. 9: 2360. https://doi.org/10.3390/
Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details
Article Metrics
|
{"url":"https://www.mdpi.com/2079-4991/11/9/2360","timestamp":"2024-11-14T06:04:29Z","content_type":"text/html","content_length":"526975","record_id":"<urn:uuid:9bcd688b-60a1-4b01-a1a6-a4cc65a2017a>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00618.warc.gz"}
|
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.STACS.2021.8
URN: urn:nbn:de:0030-drops-136533
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2021/13653/
Barloy, Corentin ; Clemente, Lorenzo
Bidimensional Linear Recursive Sequences and Universality of Unambiguous Register Automata
We study the universality and inclusion problems for register automata over equality data (A, =). We show that the universality L(B) = (Σ × A)^* and inclusion problems L(A) ⊆ L(B) B can be solved
with 2-EXPTIME complexity when both automata are without guessing and B is unambiguous, improving on the currently best-known 2-EXPSPACE upper bound by Mottet and Quaas. When the number of registers
of both automata is fixed, we obtain a lower EXPTIME complexity, also improving the EXPSPACE upper bound from Mottet and Quaas for fixed number of registers. We reduce inclusion to universality, and
then we reduce universality to the problem of counting the number of orbits of runs of the automaton. We show that the orbit-counting function satisfies a system of bidimensional linear recursive
equations with polynomial coefficients (linrec), which generalises analogous recurrences for the Stirling numbers of the second kind, and then we show that universality reduces to the zeroness
problem for linrec sequences. While such a counting approach is classical and has successfully been applied to unambiguous finite automata and grammars over finite alphabets, its application to
register automata over infinite alphabets is novel.
We provide two algorithms to decide the zeroness problem for bidimensional linear recursive sequences arising from orbit-counting functions. Both algorithms rely on techniques from linear
non-commutative algebra. The first algorithm performs variable elimination and has elementary complexity. The second algorithm is a refined version of the first one and it relies on the computation
of the Hermite normal form of matrices over a skew polynomial field. The second algorithm yields an EXPTIME decision procedure for the zeroness problem of linrec sequences, which in turn yields the
claimed bounds for the universality and inclusion problems of register automata.
BibTeX - Entry
author = {Barloy, Corentin and Clemente, Lorenzo},
title = {{Bidimensional Linear Recursive Sequences and Universality of Unambiguous Register Automata}},
booktitle = {38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
pages = {8:1--8:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-180-1},
ISSN = {1868-8969},
year = {2021},
volume = {187},
editor = {Bl\"{a}ser, Markus and Monmege, Benjamin},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/opus/volltexte/2021/13653},
URN = {urn:nbn:de:0030-drops-136533},
doi = {10.4230/LIPIcs.STACS.2021.8},
annote = {Keywords: unambiguous register automata, universality and inclusion problems, multi-dimensional linear recurrence sequences}
Keywords: unambiguous register automata, universality and inclusion problems, multi-dimensional linear recurrence sequences
Collection: 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)
Issue Date: 2021
Date of publication: 10.03.2021
DROPS-Home | Fulltext Search | Imprint | Privacy
|
{"url":"http://dagstuhl.sunsite.rwth-aachen.de/opus/frontdoor.php?source_opus=13653","timestamp":"2024-11-11T14:21:19Z","content_type":"text/html","content_length":"7394","record_id":"<urn:uuid:2cb3a17f-6795-4f61-9bc8-150222f3da61>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00462.warc.gz"}
|
The Regular Travelling Salesman, Part 2
Richard Harris explores more of the mathematics of modelling problems with computers.
Last time I described the regular travelling salesman problem and we discovered that whilst the shortest tour was trivial to determine, the distribution of tour lengths was a little more difficult.
Specifically, the factorial growth of the number of tours as the number of cities increased limited us to tours of no more than 14 cities.
So how should we go about reducing the computational expense? Well, if we can spot any more symmetries we might be able to exploit them. Taking a look at every 5 city tour, fixing the first city as
usual, might give a hint as to whether any more symmetries exist.
Figure 1 shows the complete set of tours for 5-city fixed-start regular TSP. Clearly there's a symmetry we've not yet taken into account since only 4 of the 24 possible tours are distinct from one
So where is it?
Well, perhaps surprisingly, it's the most obvious of them all. The fixed starting city and tour direction symmetries that we have already addressed exist for all TSPs. This final symmetry results
from our tour being around a regular polygon. Specifically, it results from the fact that we can rotate and reflect the city labels on the polygon.
Trivially, reversing the city labels is equivalent to reversing the direction of the tour. More interestingly, rotating the city labels is not necessarily equivalent to rotating the starting city.
This is easily demonstrated by taking a tour that does not have rotational symmetry, say the second in Figure 1, rotating the labels and then checking whether rotating the starting point results in
the same tour.
Figure 2 clearly shows that rotating the labels results in a tour that cannot be created by rotating the starting point.
Rotating labels for a 5-city regular TSP
Initial tour: 0-1-2-4-3
Rotate labels: 1-2-3-0-4
Rotate starting point: 0-4-1-2-3
Figure 2
Before we embark on constructing an algorithm to efficiently generate the minimal set of symmetrically distinct tours, it's probably worth figuring out how many of them there are. The analysis is
easiest for tours with a prime number of cities, p.
First of all, we should count the number of tours for which any rotation of the labels is equivalent to changing the starting city. Trivially, these tours must move the same number of vertices around
the perimeter of the polygon at each step since if two consecutive steps were of different lengths, rotating the labels would mean that one of the cities would be followed by a different step, as
illustrated in Figure 3.
For odd, and hence prime, regular tours there are
such tours (the factor of ½ resulting from the reflectional symmetry).
For prime regular TSPs, all remaining distinct tours must have a layout such that no rotation of the labels is equivalent to a rotation of the starting city.
To see why, assume that rotating the labels k times, where k is not equal to either 1 or p, is equivalent to the initial tour with a different starting city. Rotating it another k times must also be
equivalent, as must rotating it any multiple of k times, since we return to an equivalent of the starting tour every time. We should also note that rotating the labels more than p times is equivalent
to rotating them that number modulo p.
For each label, l, and any multiple of the k rotations, m, l will be mapped to
Now, it is a property of prime numbers that repeatedly applying this mapping must result in every number between 0 and p-1. For p equal to 5 and k equal to 2 we can demonstrate this by enumerating
every step
0 → 0 + 2 = 2 → 2
2 → 2 + 2 = 4 → 4
4 → 4 + 2 = 6 → 1
1 → 1 + 2 = 3 → 3
3 → 3 + 2 = 5 → 0
Whilst this is a reasonable illustration of this fact, it is not remotely akin to a proof. To prove it, we first look for a multiple of the k rotations that maps every label to itself.
We can subtract the label value from both sides of the equation giving
Since p is prime, mk can only be a multiple of p if either m or k are a multiple of p.
This demonstrates that if k is not equal to a multiple of p, repeatedly applying k rotations of the labels must generate all other rotations of the labels before returning to the initial layout.
Therefore if k label rotations lead to a tour which is equivalent to the first, we simply keep repeating them to find that every possible rotation must also be equivalent.
So the remaining distinct tours must generate 2p2, rather than 2p, tours since they have the extra rotational symmetry of the labels. The total number of tours must be equal to the sum of them both,
Hence the total number of distinct tours is given by
Whilst this does save us an extra order of magnitude, it's still factorial complexity so it doesn't really help us all that much.
For odd non-prime regular TSPs, the situation is even worse. This is because there will be some distinct tours for which there is a partial rotation of the labels that is equivalent to a rotation of
the starting city. Since these will generate fewer tours, there must be more distinct tours.
For even regular TSPs, it is only the tour around the perimeter of the polygon for which label and starting city rotation are equivalent. This leads, by a similar argument, to a lower bound for the
number of distinct tours being
The reason that this is only a lower bound is that, as for odd non-prime regular TSPs, there exist partial label rotations that are equivalent to starting city rotations which will each generate
fewer tours.
I rather suspect that it's not therefore worth the effort it would require to develop an efficient algorithm for enumerating the symmetrically distinct tours.
So how should we proceed?
Well, if we're willing to sacrifice a little accuracy, we can simply generate a random subset of the tours. If the subset is large enough the resulting distribution of tour lengths should be
approximately equal to that of the complete set of tours.
Fortunately for us, the standard library also includes a function for generating random permutations of sequences that we can use to generate our random tours; std::random_shuffle. Once again, we
will ignore the reflectional symmetry for the sake of simplicity. We will still, however, exploit the rotational symmetry, although this time it's to distribute the samples as evenly as possible
amongst the full set of tours. Listing 1 shows sampling the tour histogram.
tsp::sample_tour(tour_histogram &histogram,
size_t samples)
distances dists(histogram.vertices());
tour t(histogram.vertices());
generate_tour(t.begin(), t.end());
std::random_shuffle(t.begin()+1, t.end());
histogram.add(tour_length(t, dists));
Listing 1
Since we're no longer bound by the number of cities, but by the number of samples we might as well take a look at histograms for large numbers of cities.
Figure 4 and Figure 5 record the results of 1,000 and 10,000 city regular TSPs with 10,000,000 and 100,000,000 samples respectively. Table 1 shows the approximate average tour lengths for these
┃ n │ mean │ mean/n ┃
┃ 1,000 │ 1,274.5 │ 1.27 ┃
┃ 10,000 │ 12,725.1 │ 1.27 ┃
Table 1
It seems reasonable that the limit of the average tour length is going to be approximately 1.27n. The question that remains is why? Can we deduce a formula for the limit of the distribution of tour
lengths for very large numbers of cities?
For extremely large numbers of cities, most steps in a regular TSP tour are more or less independent to those that have already been taken. It is only when the majority of cities have been visited
that the choice of steps will be restricted to limited regions on the circumference of the polygon.
There is a statistical theorem called the law of large numbers which states that as n tends to infinity, the sum of n random numbers independently drawn from any single given distribution tends to n
times the average of that distribution. If our assertion that the steps are more or less independent to each other is valid we should be able to approximate the average tour length with n times the
average step length. For very large n, the average step length will be approximately equal to the average distance between two randomly selected points on the circumference of a circle of unit
radius. In the same way that we can add up a finite set of step lengths and divide by the number of them to get the average, we can integrate the lengths of steps to cities separated by an angle of θ
around the circumference and divide by 2π.
This clearly confirms that our expectation of the average tour length was correct, but is not enough for us to completely determine how the tour lengths are distributed.
There is another statistical theorem we can use to help us; the central limit theorem. The central limit theorem states, for a very wide class of distributions, that the sum of a set of independently
drawn random numbers is normally distributed. Because of this property, it shows up in a vast number of places.
The normal distribution is defined in terms of both the average, μ, and the standard deviation, σ, of the numbers drawn from it. The standard deviation is a measure of how different on average the
numbers in a set are from their mean and it is calculated as follows
Note that in this context E means the expected, or average, value.
Given these values the normal distribution is defined by its cumulative density function, or cdf, which is the function in x that gives the probability that a random number will be less than x.
Unfortunately this integral does not have a closed form, meaning a simple formulaic, solution. The derivative, known as the probability density function, or pdf, is simple to calculate however and
its graph is shown in Figure 6 (the normal distribution pdf).
So the final piece of the puzzle is to calculate the average squared distance between two cities in a regular TSP, which we can use to determine which normal distribution is applicable. We could
approximate it with an integral over the circle again, but there is an approximate formula for regular TSPs with a number of cities equal to a multiple of 4, so we may as well use it.
This may not look very easy to solve, but appearances can be deceptive. The trick is to exploit some trigonometric identities. It does get a little bit fiddly though, so those of you for whom the
word 'trigonometry' conjures images of sinister maths teachers intent on ruining your life (or at least that double period after lunch on Thursdays) might want to skip ahead and just trust me.
Now, the identities in question are
We can use these by splitting the sum into four parts (Equation 1).
Now since the last three terms are sums over ¼n steps offset by a constant factor, we can simply shift the constant factor from the index into the sum itself (Equation 2).
The next point to note is that we can perform the second and fourth sums backwards by subtracting from the last angle in each sum (Equation 3).
Now we exploit the identity that equates the sine of the angle added to or subtracted from ½p to the cosine of the angle (Equation 4).
Finally, we exploit the identity that equates the sum of the squares of the sine and cosine of an angle to 1 to yield the result.
Therefore, the standard deviation of the step length is given by
In addition to stating that the sums of random numbers are normally distributed, the central limit theorem states that the specific normal distribution will have an average equal to n times that of
their distribution and a standard deviation equal to the square root of n times that of their distribution.
This means that the distribution of tour length of a regular TSP with n cities should tend, for large n, towards
Figure 7 compares the histogram we'd expect from the normal distribution (at the bottom) to that we generated by sampling the 1,000 city tour (at the top). Under the assumption of normality a bucket
with mid point x and width w should contain the proportion of the samples given by
Well, despite the fact that the assumption that the tour steps are independent is demonstrably false these look remarkably similar, a fact borne out by the histogram of the difference between them,
plotted on the same scale in Figure 8.
In fact, there exists a mathematical technique for determining the likelihood that a sample histogram is consistent with a particular distribution. I strongly suspect that it would indicate that the
sample histogram is not consistent with the normal distribution, but since we have already acknowledged that our assumptions are false we shouldn't find that surprising. Nevertheless, given that the
maximum difference is of the order of 0.015, or 1½%, it's not too bad an approximation.
So can we perform a similar analysis on the usual type of TSP?
Well, let's assume that the cities are evenly randomly distributed on the unit square. If we're interested in the average tour length of all possible tours we should firstly note that we can take a
tour of a random TSP by simply visiting each city in order. Furthermore, every possible tour can be generated by changing the labels and using the same scheme, since we can view the labels as
instructions as to the order in which we should visit them. This means that picking the location of the next city in a TSP is equivalent to picking the next city in a tour. Since the former is
independent of the cities already chosen, the latter must be independent the steps already taken, satisfying the independence requirement of the law of large numbers.
However, the distribution of step lengths is dependent on where in the square we are currently located, and this breaks the requirement that the step lengths are identically distributed. However,
there is another version of the law of large numbers which states that the sum of independent random numbers from different distributions will tend to the sum of the averages of those distributions.
Known as the strong law of large numbers, it requires that the standard deviations of those distributions have a particular property which happens to be satisfied if they do not grow without limit,
or in other words are all less than some finite number. For cities in the unit square, this will be true for any reasonable definition of distance and so this approximation is actually more
reasonable for normal TSPs than it is for regular TSPs.
Unfortunately, the expression for the average step length is a little bit more complicated this time. If we represent a pair of points by their coordinates on the unit square, (x, y) and (a, b), we
Once again, this is because the integral is the continuous limit of a sum. The fraction is the limit of the sum of the distances between all pairs of points in the unit square divided by the number
of such pairs.
For the usual definition of distance, the integral becomes
Whilst I'm not willing to assert that this does not have a closed form solution, it's too complicated for me to attempt. If we change the cost of travelling between cities to the square of the
distance it becomes a little easier however.
Continuing in the same vein leads to the result
The average cost of a tour should therefore be approximately equal to 1/3n.
If you are interested, I invite you to investigate the accuracy of this approximation for different numbers of cities. You may be surprised as to just how accurate it actually is.
So is there anything more that can be said about the statistical properties of tours through TSPs? Well certainly, but not by me as I am afraid I have exhausted my mathematical toolbox. But this is
an active area of research and a great many results have been found, of which just a few are described below.
Beardwood, Halton and Hammersley [Beardwood59] proved that the expected length of the shortest path through a random TSP tends to a value proportional to the square root of the number of cities.
Jaillet [Jaillet93] examined the probabilistic TSP in which each city has a probability that it may be skipped during the tour and provided bounds on the expected length of the shortest tour.
Agnihothri [Agnihothri98] examined the travelling repairman problem in which a repairman must travel to fix machines when they break down and developed a mathematical model with which expected
travelling time, amongst other things, can be calculated.
And you, dear reader, may be able to shed further light on the properties of either the regular or normal TSP, and if you do please let me know.
With thanks to Larisa Khodarinova for a lively discussion on group theory that led to the correct count of distinct tours and to Astrid Osborn and John Paul Barjaktarevic for proof reading this
References and further reading
[Agnihothri98] Agnihothri, 'A Mean Value Analysis of the Travelling Repairman Problem', IEE Transactions, vol. 20, pp. 223-229, 1998.
[Beardwood59] Beardwood, Halton and Hammersley, 'The Shortest Path Through Many Points', Proceedings of the Cambridge Philosophical Society, vol. 55, pp. 299-327, 1959.
[Jaillet93] Jaillet, 'Analysis of Probabalistic Combinatorial Optimization Problems in Euclidean Spaces', Mathematics of Operations Research, vol. 18, pp. 51-71, 1993.
Archimedes, On the Measurement of the Circle, c. 250-212BC.
Basel and Willemain, 'Random Tours in the Travelling Salesman Problem: Analysis and Application', Computational Optimization and Applications, vol. 20, pp. 211-217, 2001.
Clay Mathematics Institute 'Millennium Problems', http://www.claymath.org/millennium.
Hoffman and Padberg, 'Travelling Salesman Problem', Encyclopedia of Operations Research and Management Science, Gass and Harris (Eds.), Kluwer Academic, Norwell, MA, 1996.
A number of errors crept into the first of this series of articles. The first of which was the title - while trying to clean up the layout of the article header, I forgot that the second article (the
one in this issue) also covered the travelling salesman problem (and that 'part one' referred to this, not to the series as a whole).
We also managed to confuse 2θ with 2π on page 8 and to replace the Greek character μ with a question mark in Table 2 (page 12).
Apologies to our readers and to Richard Harris for these errors.
Alan (ed).
Overload Journal #83 - Feb 2008 + Design of applications and programs
Browse in : All > Journals > Overload > 83 (8)
All > Topics > Design (236)
Any of these categories - All of these categories
|
{"url":"https://members.accu.org/index.php/journals/1471","timestamp":"2024-11-07T16:11:06Z","content_type":"application/xhtml+xml","content_length":"49849","record_id":"<urn:uuid:ac935000-ac50-43fc-8a61-c5379dd6bd82>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00749.warc.gz"}
|
median - Definition & Meaning | Englia
The Greeks prescribe the median or middle vein to be opened, and so much blood to be taken away as the patient may well spare, and the cut that is made must be wide enough.
1624, Democritus Junior [pseudonym; Robert Burton], The Anatomy of Melancholy: […], 2nd edition, Oxford, Oxfordshire: Printed by John Lichfield and James Short, for Henry Cripps, partition II,
section 5, member 2
|
{"url":"https://englia.app/definition/median","timestamp":"2024-11-13T12:58:37Z","content_type":"text/html","content_length":"89455","record_id":"<urn:uuid:1e8971ca-2b16-4585-abbd-62c8aef789ce>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00556.warc.gz"}
|
International Conference on Mathematics and Statistical Engineering ICMSE on October 28-29, 2024 in Paris, France
This event is expired!
International Conference on Mathematics and Statistical Engineering ICMSE on October 28-29, 2024 in Paris, France
Submit Your Paper
Advanced Numerical Algorithms
Analytic methods
Approximation theory
Bayesian computing
Biological and medical applications
Biometry and Statistics
Computation in Complex Networks
Computational Biology and Medicine
Computational Chemistry
Computational Economics and Finance
Computational electrodynamics
Computational electromagnetics
Computational Engineering
Computational Finance
Computational fluid dynamics
Computational Geosciences and Meteorology
Computational Grids
Computational Mathematics
Computational Mechanics
Computational models
Computational Physics
Computational Statistics
Control theory and applications
Coupled problems
Data exploration and data mining
Dynamical systems
Engineering Mathematics
Factorization methods
Finite difference methods
Finite element methods
Generalized eigen-problems
High order difference approximations
High Performance Computing
Hybrid Computational Methods
Industrial Mathematics
Integral equations
Inversion problems in Geophysics
Iterative methods
Mathematical Chemistry
Mathematical methods in continuum mechanics
Mathematical modeling
Mathematical models for the information society
Mathematical models in Economy and Insurance
Mathematical models in Medicine
Mathematics and circuit simulation
Mathematics of Finance
Methods for integration on a uniform and non-uniform mesh
Modeling and computation of soft matter materials and complex fluids
Molecular dynamics
Monte Carlo methods and applications
Nonlinear systems and eigenvalue solvers
Nonsymmetric solvers
Numerical analysis
Numerical and Computational Mathematics
Numerical linear algebra
Numerical Mathematics in general
Numerical methods and simulation
Operations Research and Information Engineering
Optimization and optimal control
Ordinary and partial differential equations, integral equations, singular perturbation problems
Ordinary differential equations
Overlapping and nonoverlapping domain decomposition methods
Partial differential equations
Scientific computing and supercomputing benchmark design
Scientific visualization
Social Statistics
Software architectures for scientific computing
Space Geodesy and Space Dynamics
Splines and wavelets and applications
Stochastic differential equations
Supercomputing and scientific computing
Theoretical Chemistry
Theoretical Physics
Applications of scientific computing in engineering and applied sciences.
Name: World Academy of Science, Engineering and Technology
Website: https://waset.org/
Address: UAE
World Academy of Science, Engineering and Technology is a federated organization dedicated to bringing together a significant number of diverse scholarly events for presentation within the conference
|
{"url":"https://conferenceindex.org/event/international-conference-on-mathematics-and-statistical-engineering-icmse-2024-october-paris-fr","timestamp":"2024-11-11T17:41:48Z","content_type":"text/html","content_length":"48828","record_id":"<urn:uuid:30348301-1ed1-4151-9267-33d34641f86c>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00635.warc.gz"}
|
Occupational Employment & Salaries: STEM & Educational Qualifications
Posted on 04/09/2019 by Beverly Kerr
• STEM occupations account for 11.1% of all jobs in the Austin MSA in 2018, making Austin the 6^th most concentrated in STEM among large U.S. metros.
• Austin’s two largest STEM occupations are applications software developers (13,520 jobs) and sales representatives for technical and scientific products (10,010).
• Over 26% of all jobs in the Austin MSA are in occupations with a bachelor’s degree as the typical entry-level education requirement.
• The median salary of jobs requiring a bachelor’s degree is twice that of a job requiring a high school diploma, both nationally and in Austin.
New data released by the U.S. Bureau of Labor Statistics (BLS) on March 29^th provides estimates of 2018 employment and salaries by occupation. Estimates from the Occupational Employment Statistics
(OES) program are available for the nation as a whole and for over 500 other areas.
Across all occupations, Austin’s average annual salary is $53,810 and its median salary is $40,070. The average hourly wage is $25.87 and the median is $19.26 in the Austin metro. Austin’s average
salary is 3.6% above the national average and the median is 3.7% above the national median. Salaries in Austin are about 2% higher (average and median) than in Dallas-Fort Worth and 15% (average) or
14% (median) higher than San Antonio. Compared to Houston, Austin’s average salary is about 1% lower and the median about 1% higher.
Part of the differences in average wages and salaries between different areas is due to variation in the mix of occupations prevalent in each area. Over 800 occupations are covered in the OES survey,
with over 600 of those available for the Austin MSA. The OES survey is sent to hundreds of Austin-area employers every May and November and is vital as the foundational data needed to quantify pay
and employment for occupations.
Associated with the survey are a pair of supplementary sets of OES data treating science, technology, engineering and math (STEM) occupations and occupations by the typical entry-level educational
requirement. The following focuses on insights from those two tabulations.
STEM Occupations
BLS classifies 100 occupations as STEM, including computer and mathematical, architecture and engineering, and life and physical science occupations, as well as managerial and postsecondary teaching
occupations related to these functional areas, and sales occupations requiring scientific or technical knowledge at the postsecondary level.
In 2018, there were an estimated 113,700 STEM jobs in the Austin metro, representing 11.1% of total employment. Nationally, STEM jobs account for 6.3% of employment. Among the 50 largest metros,
Austin ranks sixth for percentage of jobs in STEM occupations. San Jose tops the major metros ranking with 21.0% in STEM. Las Vegas and Riverside have the lowest employment shares, both 2.9%.
Texas has the same rate of STEM employment, 6.3%, as the nation. The range among the major Texas metros is wide. Houston’s percentage is 7.3% (19^th), Dallas-Fort Worth is 7.2% (ranking 22^nd), while
San Antonio is 5.0% (ranking 45^th).
Across major metros, higher shares of STEM employment are associated with higher average salaries overall.
In Austin, STEM occupations earn a median annual salary of $88,820 compared to $37,030 for non-STEM occupations. Nationally, STEM’s median salary is $84,880 compared to $37,020 for non-STEM. The
median STEM salary in Austin is 240% of the non-STEM salary. Nationally, the median STEM salary is 229% of the non-STEM salary. STEM salaries are highest in San Jose, which has a median of $123,210,
which is 252% of San Jose’s $48,920 non-STEM salary.
The two highest paid STEM occupations, nationally and in Austin, are architectural and engineering managers and computer and information systems managers. In Austin, architectural and engineering
managers have a median salary of $152,560, which is 8.4% above the national median of $140,760; and computer and IS managers have a median salary of $145,420, which is just 2.0% above the national
median of $142,530. Compared to San Jose, where these two occupations are much more concentrated, Austin architectural and engineering managers earn a median salary that is 16.4% less, and computer
and IS managers earn 23.1% less, than they do in San Jose.
Austin’s largest STEM occupation is applications software developers, which are estimated at 13,520 jobs. As the graph above indicates, this occupation has a location quotient (LQ) in Austin of 2.11.
Location quotients are a useful byproduct of the survey’s employment estimates. The location quotient (LQ) represents the ratio of an occupation’s share of employment in a given area to that
occupation’s share of employment in the U.S. as a whole. In this case, application software developers make up 1.3% of employment in Austin compared with 0.6% of U.S. employment. Austin’s 1.3% rate
is 2.11 times 0.6%, therefore Austin’s LQ for the occupation is 2.11.
The next largest STEM occupation in Austin is wholesale and manufacturing sales representatives for technical and scientific products. There are 10,010 of these, the occupation has an LQ of 4.50, and
no metro has a higher LC for this occupation. This occupation may be one signal of some of the differences between Austin’s and San Jose’s tech sectors. This sales occupation is also in San Jose’s
STEM top 10, but there are actually fewer in San Jose (7,720) than Austin, and the San Jose LQ is 3.19.
San Jose, in fact, has more jobs for computer hardware engineers (8,450) which has a remarkable LQ of 17.99. Electronics engineers, except computer, is also a top 10 occupation in San Jose (6,040
jobs and an LQ of 5.83). In Austin, electronics engineers, except computer (3,030 jobs) has an LQ of 3.18, and computer hardware engineers (400 jobs) has an LQ of only 0.92 (i.e., the occupation is
slightly less prevalent here than it is nationally).[1]
Occupations by Education/Training
Each occupation in the OES survey is associated with a typical entry-level educational/training requirement, ranging from “no formal educational credential” to “doctoral or professional degree.”
In Austin 270,170 jobs, or 26.3%, require a bachelor’s degree. Nationally, 21.7% of jobs require a bachelor’s degree. Austin’s 26.3% rate ranks as the seventh highest among the 50 largest U.S. metro
Occupations requiring a master’s, doctoral, or professional degree (41,680) are less concentrated in Austin (4.1%) than nationally (4.4%). Among large metros, Austin ranks 29^th for the percent of
jobs requiring a graduate degree.
Jobs requiring a bachelor’s degree or higher account for 30.3%[2]of jobs in Austin and Austin ranks 10^th for the percent of jobs in occupations requiring at least a bachelor’s degree as the typical
entry-level educational requirement.
The median salary of a job requiring a bachelor’s degree is twice that of a job requiring a high school diploma, both nationally and in Austin.
In Austin, the highest paid 10% of workers (those at or above the 90^th percentile) earned about five times as much as the lowest paid 10% (the 10^th percentile). When jobs are grouped by educational
requirement, the differential between the top 10% and the bottom 10% increases with education. The top 10% of jobs with no formal educational credential only pay about twice as much as the bottom
10%; among workers in jobs requiring a high school diploma, the highest paid 10% earn about three times the lowest paid 10%; and among workers in jobs requiring a bachelor’s degree, the highest paid
10% earn four times the lowest paid 10%.
[1] It should be noted, when relying on employment estimates for individual occupations like these, that all estimates are subject to the survey’s sampling error and BLS publishes relative standard
error (RSE) statistics for each employment estimate. The RSE is defined as the ratio of the standard error to the survey estimate. For example, a RSE of 10% implies that the standard error is
one-tenth as large as the survey estimate. Austin’s total STEM employment estimate (113,700) has a relative standard error (RSE) of 1.6. Electronics engineers, except computer, are estimated at 3,030
in 2018, but the RSE is 21.7.
[2] Note that occupations are associated with a typical entry-level educational requirement. In practice, a range of levels of educational attainment may be prevalent in an occupation. The share of
workers in Austin with a bachelor’s degree or higher is greater than the 30.3% share of jobs requiring this level of education. According to the Census Bureau’s American Community Survey, 48.4% of
employed workers between 25 and 64 years have a bachelor’s degree or higher. About 16% of the civilian employed are outside this age range, so the actual percent of workers with a bachelor’s degree
or higher isn’t estimated.
|
{"url":"https://www.austinchamber.com/blog/04-09-2019-occupational-employment-salaries","timestamp":"2024-11-10T14:43:10Z","content_type":"text/html","content_length":"44273","record_id":"<urn:uuid:ea0a7e35-2f1b-444d-ba70-7b3af73eaee3>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00272.warc.gz"}
|
Singular integral operators with non-smooth kernels on irregular domains | EMS Press
Singular integral operators with non-smooth kernels on irregular domains
• Xuan Thinh Duong
Macquarie University, Sydney, Australia
• Alan G.R. McIntosh
Australian National University, Canberra, Australia
Let be a space of homogeneous type. The aims of this paper are as follows:
i) Assuming that is a bounded linear operator on we give a sufficient condition on the kernel of so that is of weak type , hence bounded on for ; our condition is weaker than the usual Hörmander
integral condition.
ii) Assuming that is a bounded linear operator on where is a measurable subset of , we give a sufficient condition on the kernel of so that is of weak type , hence bounded on for .
iii) We establish sufficient conditions for the maximal truncated operator , which is defined by = sup, to be bounded, . Applications include weak estimates of certain Riesz transforms and
boundedness of holomorphic functional calculi of linear elliptic operators on irregular domains.
Cite this article
Xuan Thinh Duong, Alan G.R. McIntosh, Singular integral operators with non-smooth kernels on irregular domains. Rev. Mat. Iberoam. 15 (1999), no. 2, pp. 233–265
DOI 10.4171/RMI/255
|
{"url":"https://ems.press/journals/rmi/articles/5108","timestamp":"2024-11-14T17:52:34Z","content_type":"text/html","content_length":"96405","record_id":"<urn:uuid:115945e8-68f6-4b29-b76a-46a4a86353d0>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00797.warc.gz"}
|
geometry of physics -- principal bundles
Not signed in
Want to take part in these discussions? Sign in if you have an account, or apply for one below
Discussion Tag Cloud
Vanilla 1.1.10 is a product of Lussumo. More Information: Documentation, Community Support.
Welcome to nForum
If you want to take part in these discussions either
(if you have an account),
apply for one now
(if you don't).
nLab > Latest Changes: geometry of physics -- principal bundles
Bottom of Page
|
{"url":"https://nforum.ncatlab.org/discussion/6540/","timestamp":"2024-11-03T19:01:56Z","content_type":"application/xhtml+xml","content_length":"12226","record_id":"<urn:uuid:279875d2-2ae6-4dbe-90ff-ec558cdf5b13>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00316.warc.gz"}
|
It has not been possible to post comments at my blog for some months. Apparently, my reCAPTCHA plugin was broken (amazingly, spam comments still made their way into the moderation queue).
This should be fixed now.
I’m also on twitter now: @SyntopiaDK, where I’ll post links and news releated to generative systems, 3D fractals, or whatever pops up.
Finally, if you are near Stockholm, some of my images are on display at a small gallery (from July 9th to September 11th): Kungstensgatan 27.
Assorted Links
Pixel Bender 3D
Adobe has announced Pixel Bender 3D:
If I understand it correctly, it is a new API for Flash, and not as such a direct extension of the Pixel Bender Toolkit. So what does it do?
As far as I can tell, it is simply a way to write vertex and fragment shader for Flash. While this is certainly nice, I think Adobe is playing catchup with HTML5 here – many browsers already support
custom shaders through WebGL (in their development builds, at least). Or compare it to a modern 3D browser plugin such as Unity, with deferred lightning, depth-of-field, and occlusion culling…
And do we really need another shader language dialect?
Flash raytracer
Kris Temmerman (Neuro Productions) has created a raytracer in Flash, complete with ambient occlusion and depth-of-field:
Kris has also produced several other impressive works in Flash:
Chaos Constructions
Quite and Orange won the 4K demo at Chaos Constructions 2010 with the very impressive ‘CDAK’ demo:
(Link at Pouet, including executable).
Ex-Silico Fractals
This YouTube video shows how to produce fractals without a computer. I’ve seen video feedback before, but this is a clever setup using multiple projectors to create iterated function systems.
Vimeo Motion Graphics Award
‘Triangle’ by Onur Senturk won the Vimeo Motion Graphics Award. The specular black material looks good. Wonder if I could create something similar in Structure Synth’s internal raytracer?
Liquid Pixels
A few days ago, I found this little gem on the always inspiring WOWGREAT tumbleblog:
Hiroshi Sugimoto, Tyrrhenian Sea (1994); Pixels sorted by Blue.
It was created by Jordan Tate and Adam Tindale by sorting the pixels of this picture. See their site, Lossless processing, for more pixel shuffling goodness.
Adding some randomness
I liked the concept, so I decided to try something similar. But instead of sorting the pixels, I had this vague idea of somehow stirring the pixels in an image, and letting the pixels settle into
layers. Or something.
After trying out a few different schemes, I came up with the following procedure:
1. Pick two pixels from a random column. Swap them if the upper pixel has a higher hue than the lower pixel.
2. Pick two pixels from a random row. Swap them if the left pixel has a higher saturation (or brightness) than the right pixel.
3. Repeat the above steps until the image converges.
The first step takes care of the layering of the colors. The second step adds some structure and makes sure the process converges. (If we just swapped two arbitrary pixels, based on the hue, the
process would not converge. By swapping pixels column-wise and adding the second step, we impose a global ordering on the image).
The following are some examples of the method (applied to some photos I took while visiting California recently).
And finally, a classic:
The Image Reshuffler was implemented in Processing. It was my first try with Processing, and as I expected it was quite easy to use. Personally, I prefer C++ and Qt, but for someone new to
programming, Processing would be an obvious choice.
The script is available here: reshuffler.pde.
Generative Art Links & Resources
I’ve started collecting links for Generative Art software, blogs, papers, websites and related stuff here:
Syntopia Generative Art Links
Quaternion Julia sets and GPU computation.
Subblue has released another impressive Pixel Bender plugin, this time a Quaternion Julia set renderer.
The plugin can be downloaded here.
Quaternions are extensions of the complex numbers with four independent components. Quaternion Julia sets still explore the convergence of the system z ← z^2 + c, but this time z and c are allowed to
be quaternion-valued numbers. Since quaternions are essentially four-dimensional objects, only a slice (the intersection of the set with a plane) of the quaternion Julia sets is shown.
Quaternion Julia sets would be very time consuming to render if it wasn’t for a very elegant (and surprising) formula, the distance estimator, which for any given point gives you the distance to the
closest point on the Julia Set. The distance estimator method was first described in: Ray tracing deterministic 3-D fractals (1989).
My first encounter with Quaternion Julia sets was Inigo Quilez’ amazing Kindernoiser demo which packed a complete renderer with ambient occlusion into a 4K executable. It also used the distance
estimator method and GPU based acceleration. If you haven’t visited Quilez’ site be sure to do so. It is filled with impressive demos, and well-written tech articles.
Transfigurations (another Quaternion Julia set demo) from Inigo Quilez on Vimeo.
In the 1989 Quaternion Julia set paper, the authors produced their images on an AT&T Pixel Machine, with 64 CPU’s each running at 10 megaFLOPS. I suspect that this was an insanely expensive machine
at the time. For comparison, the relatively modest NVIDIA GeForce 8400M GS in my laptop has a theoretical maximum processing rate of 38 gigaFLOPS, or approximately 60 times that of the Pixel Machine.
A one megapixel image took the authors of the 1989 paper 1 hour to generate, whereas Subblues GPU implementation uses ca. 1 second on my laptop (making it much more efficient than what would have
been expected from the FLOPS ratio).
GPU Acceleration and the future.
These days there is a lot of talk about using GPUs for general purpose programming. The first attempts to use GPUs to speed up general calculations relied on tricks such as using pixel shaders to
perform calculations on data stored in texture memory, but since then several API’s have been introduced to make it easier to program the GPUs.
NVIDIAs CUDA is currently by far the most popular and documented API, but it is for NVIDIA only. Their gallery of applications demonstrates the diversity of how GPU calculations can be used. AMD/ATIs
has their competing Stream API (formerly called Close To Metal) [DEL:but don’t bet on this one – I’m pretty sure it is almost abandoned already:DEL]. Update: as pointed out in the comments, the new
ATI Stream 2.0 SDK will include ATIs OpenCL implemention, which for all I can tell is here to stay. What I meant to say was, that I don’t think ATIs earlier attempts at creating a GPU programming
interface (including the Brook+ language) are likely to catch on.
Far more important is the emerging OpenCL standard (which is being promoted in Apples Snow Leopard, and is likely to become a de facto standard). Just as OpenGL, it is managed by the Khronos group.
OpenCL was originally developed by Apple, and they still own the trademark, which is probably why Microsoft has chosen to promote their own API, DirectCompute. My guess is that CUDA and Brook+ will
slowly fade away, as both OpenCL and DirectCompute will come to co-exist just the same way as OpenGL and Direct3D do.
For cross-platform development OpenCL is therefore the most interesting choice, and I’m hoping to see NVIDIA and AMD/ATI release public drivers for Windows as soon as possible (as of now they are in
[DEL:closed:DEL] beta versions).
GPU acceleration could be very interesting from a generative art perspective, since it suddenly becomes possible to perform advanced visualization, such as ray-tracing, in real-time.
A final comment: a few days ago I found this quaternion Julia set GPU implementation for the iPhone 3GS using OpenGL ES 2.0 programmable shaders. I think this demonstrates the sophistication of the
iPhone hardware and software platform – both that a hand-held device even has a programmable GPU, but also that the SDK is flexible enough to make it possible to access it.
Modul (2009)
Maxim Zhestkov is a 23-year old russian designer. Modul (2009) is his diploma work for his Master of Fine Arts degree.
modul / zhestkov.com from Zhestkov on Vimeo.
Also check out 005 by Zhestkov.
Fractal Explorer Plugin
In July Subblue released another Pixel Blender plugin, called the Fractal Explorer Plugin – for exploring Julia sets and fractal orbit maps. I didn’t get around to try it out until recently, but it
is really a great tool for exploring fractals.
Most people have probably seen examples of Julia and Mandelbrot sets – where the convergence properties of the series generated by repeated application of a complex-valued function is investigated.
The most well-known example is the iteration of the function z ← z^2+c. The Mandelbrot set is created by plotting the convergence rate for this function while c varies over the complex plane.
Likewise, the Julia set is created for a fixed c while varying the initial z-value over the complex plane.
Glynn1 by Subblue (A Julia-set where an exponent of 1.5 is used).
Where ordinary Julia and Mandelbrot sets only take into account whether the series created by the iterated function tends towards infinity (diverges) or not, fractal orbits instead uses another image
as input, and checks whether the complex number series generated by the function hits a (non-transparent) pixel in the source image. This allows for some very fascinating ‘fractalization’ of existing
A fractal orbit showing a highly non-linear transformation of a Mondrian picture.
Subblue suggests starting out using Ernst Haeckel beautiful illustrations from the book Artforms of Nature, and he has put up a small gallery with some great examples:
An example of an orbit mapped Ernst Haeckel image.
To try out Subblue’s filter, download the Pixel Blender SDK and load his kernel filter and an input image of choice. It is necessary to uncheck the “Build | Turn On Flash Player Warnings and Errors”
menu item in order to start the plugin. On my computer I also often experience that the Pixel Blender SDK is unable to detect and initialize my GPU – it sometimes help to close other programs and
restart the application. The filter executes extremely fast on the GPU – often with more than 100 frames per second, making it easy to interactively explore and discover the fractals.
As I final note, I implemented a fractal drawings routine myself in Structure Synth (version 1.0) just for fun. It is implemented as a hidden easter egg, and not documented at all, but the code
belows shows an example of how to invoke it:
Size: 800x800
MaxIter: 150
Term: (-0.2,0)
Term: 1*Z^1.5
BreakOut: 2
View: (-0.0,0.2) -> (0.7,0.9)
Arguable, this code is not very optimized (it is possible to add an unlimited number of terms, making the function evaluation somewhat slow), but still it takes seconds to calculate an image making
it more than a hundred times slower than the Pixel Blender GPU solution.
Flickr Findings: Human Chaos Project
Markus Mooslechner has created some fascinating visualizations of electrocardiograms in a project called ‘Human Chaos’.
The pictures depict the electrical activity of the heart at different parts of the lifecycle, spanning from pre-birth to death.
See more examples at his Flickr stream.
Metamorphosis (Processing)
Impressive Processing animations made by Glenn Marshall:
In order to see these at the best quality, go to Full Screen in the Vimeo player, make sure HD is on, and that Scaling is off.
Generative Art in 4KB
Once again I’m extremely impressed by what shows up on the demoscene.
Inigo Quilez managed to create the following picture within a 4KB executable. In fact he managed to fit a renderer and the generative code for both the scene and the textures in only 3900 bytes.
|
{"url":"https://blog.hvidtfeldts.net/index.php/category/digital-art/","timestamp":"2024-11-05T19:36:18Z","content_type":"text/html","content_length":"51936","record_id":"<urn:uuid:11969fbe-e462-45d6-9092-96c4a72f5562>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00836.warc.gz"}
|
Review of “R Graphs Cookbook” by Hrishi Mittal | R-bloggersReview of "R Graphs Cookbook" by Hrishi Mittal
Review of “R Graphs Cookbook” by Hrishi Mittal
[This article was first published on
Portfolio Probe » R language
, and kindly contributed to
]. (You can report issue about the content on this page
) Want to share your content on R-bloggers?
if you have a blog, or
if you don't.
Executive summary: Extremely useful for new users, informative to even quite seasoned users.
Once upon a time a publisher asked if I would referee a book (unspecified) about R. In an instance that can only be described as psychotic I said yes. That bit of insanity turned out to be a good
I was treated to chapters of a cookbook on R graphics doled out in installments, like how Thackeray’s Vanity Fair was originally published.
It is fairly embarrassing how much I learned from the book.
The format
All you need to know about each task is presented in specific sections:
• The task: what is to be done
• Getting ready: packages that might need to be attached, for instance
• How to do it …: the R code
• How it works …: a brief explanation of what the code means
• There’s more …: variations on the theme
You only need to get your own data into R in order to get similar plots that you care about.
The graphs are in black and white, not color — at least in the hardcopy version. Heatmaps in grayscale are suboptimal. The Panglossian view is that this will encourage readers to create the graphs
I made an effort to rid the book of the L-word when “package” is meant. The L-word is “library” (see Some quibbles about “The R Book” and its comments for more on this). Alas, I failed. I fear
I’ll be expelled from the JaRgon Police Force.
Getting it
You can go to the R Graphs Cookbook webpage.
|
{"url":"https://www.r-bloggers.com/2011/01/review-of-%E2%80%9Cr-graphs-cookbook%E2%80%9D-by-hrishi-mittal/","timestamp":"2024-11-15T03:15:46Z","content_type":"text/html","content_length":"89143","record_id":"<urn:uuid:d6109d44-223d-4c51-ba89-4cf5ab3ca40a>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00405.warc.gz"}
|
Square Root of 3, The
by Nemiroff
Download Book
(Respecting the intellectual property of others is utmost important to us, we make every effort to make sure we only link to legitimate sites, such as those sites owned by authors and publishers. If
you have any questions about these links, please contact us.)
link 1
About Book
From the reviews:
"Written by an expert teacher as a conversation between a ‘master’ and a ‘pupil’ on the threshold of adulthood, this investigation of the subtleties of the number concept and sequences of rational
approximations becomes an initiation into the pleasures of mathematical experimentation, exploration, and generalization. … This book is thus an ideal gift for any bright young person with
computational ability and self-directed reading curiosity … ." (Andrew M. Rockett, Mathematical Reviews, Issue 2006 j)
"David Flannery’s book, The Square Root of 2, is the kind of book to recommend to a particularly bright high school senior, not to ignore a frosh in college. From page 1 through its conclusion, it is
a masterful dialogue … . Flannery seeks to arouse a cool passion for mathematics in his student. … Flannery has woven an engaging dialogue from history and theory that offers the student insights
into the thinking mind of the working mathematician." (Barnabas Hughes, Covergence, April, 2006)
"The book is more about some mathematics pertaining to the square root of two … . I would recommend it to good high school students … . I also think it would be a wonderful topic for a colloquium
presentation for undergraduate students. … I think the book is easy to understand and interesting as long as you like math. … I would recommend it to other kids in algebra II or precalculus as well …
." (Doug Ensley and John Ensley, MAA Online, March, 2006)
Book Description
The square root of 2 is a fascinating number – if a little less famous than such mathematical stars as pi, the number e, the golden ratio, or the square root of –1. (Each of these has been honored by
at least one recent book.) Here, in an imaginary dialogue between teacher and student, readers will learn why v2 is an important number in its own right, and how, in puzzling out its special
qualities, mathematicians gained insights into the illusive nature of irrational numbers. Using no more than basic high school algebra and geometry, David Flannery manages to convey not just why v2
is fascinating and significant, but how the whole enterprise of mathematical thinking can be played out in a dialogue that is imaginative, intriguing, and engaging. Original and informative, The
Square Root of 2 is a one-of-a-kind introduction to the pleasure and playful beauty of mathematical thinking.
PLEASE READ: All comments must be approved before appearing in the thread; time and space constraints prevent all comments from appearing. We will only approve comments that are directly related to
the article, use appropriate language and are not attacking the comments of others.
Related Free eBooks
|
{"url":"http://2020ok.com/books/57/square-root-of-3-the-41857.htm","timestamp":"2024-11-06T02:48:01Z","content_type":"text/html","content_length":"14856","record_id":"<urn:uuid:0c6e330f-1265-4c54-9c37-c2ed0ebfb1e7>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00048.warc.gz"}
|
Axioms and Postulates
An axiom is a statement, usually considered to be self-evident, that is assumed to be true without proof. It is used as a starting point in mathematical proof for deducing other truths.
What is the Difference Between Axioms and Postulates?
Classically, axioms were considered different from postulates. An axiom would refer to a self-evident assumption common to many areas of inquiry, while a postulate referred to a hypothesis specific
to a certain line of inquiry, that was accepted without proof. As an example, in Euclid's Elements, you can compare "common notions" (axioms) with postulates.
In much of modern mathematics, however, there is generally no difference between what were classically referred to as "axioms" and "postulates". The word "assumption" is sometimes used as well; in
this context, it means the same as both "axiom" and "postulate." Modern mathematics does distinguish between logical axioms and non-logical axioms, with the latter sometimes being referred to as
See also: mathematical systems.
|
{"url":"http://mathlair.allfunandgames.ca/axiom.php","timestamp":"2024-11-12T16:26:15Z","content_type":"text/html","content_length":"3579","record_id":"<urn:uuid:b82c4e6d-0cc4-4f71-972b-79a6e75748c5>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00440.warc.gz"}
|
50) Negative Sulphur and other Impossible Properties
Where does negative sulphur come from? If you are fortunate enough to only ever see converged solutions to very well-behaved models, then you might not have ever seen negative sulphur, or percent
distilled values above 100, or benzene content below zero – but these impossible properties can turn up in distributed-recursion models, even in optimal solutions.
Consider a stream that can be put into inventory or transported to another processing site, without anything else being added to it. Whatever the sulphur content of that stream, it should have the
same value in the next period and at the other location. But if the source a blended pool or a process unit yield with variable properties, and the split between transport and inventory is not
pre-determined, there is no linear equation that can impose that “same” condition. However, since this is the classic pooling problem, it can be optimized via a series of linear approximations. (See
How Distributed Recursion Solves the Pooling Problem
This is, however, an imperfect system, as the qualities are not automatically correct. They will only be right when a solution is found that maintains consistency between assumed and actual values.
Does Convergence Matter?)
The approximations built into the model can give rise to negative sulphur or other impossible properties on recursion passes where unconverged solutions are produced. Let’s add some numbers to our
simple example configuration. The sulphur value of the source and the pool dispositions are not known in advance, so we need some initial estimates. These are used to set up the equations for
tracking its value in the destinations and satisfying any specifications in the first linear matrix.
Assume that the source stream has 10 ppm sulphur and is equally likely to go to inventory or be transported, a 50/50 split. Imagine that in the optimal solution to this approximation, the stream came
out with 12 ppm sulphur. Although we know that the destination pools should also have come out as 12 ppm, their sulphur values will depend on the accuracy of the assumed error distributions. Because
of the Distributed-Recursion structure in the matrix, the sulphur of each destination pool is determined by the assumed fraction of the error divided by the actual tonnes received. (In the matrix,
the error is multiplied by the amount of pool produced, but if we consider just 1 “unit” of material, whatever that is, then the distribution factor is a stand-in for the quantity.) The average
actual value of the pool in the next period or at the other site in this solution is:
= Assumed Quality + (Quality Error * Assumed Distribution)/Actual Distribution.
If the pool was actually divided evenly between the two uses, then each of the destination pools would have perceived the sulphur as 10 ppm + (2 * 0.5)/0.5 = 12 ppm. That would be a converged
solution because the pool sulphurs all match. However, if the pool did not distribute evenly, as assumed, but split 40/60, then the destination pools would not have matched the source (or each
10 + (2 * 0.5)/0.4 = 12.5 ppm and 10 + (2 * 0.5)/0.6 = 11.6667 ppm.
One pool has too much sulphur because the error is divided over a smaller amount than assumed, while the other has too little because it did not receive enough error. This is not a converged
solution, and another approximation needs to be tried. Notice, however, that the sulphur balance is maintained:
0.4 * 12.5 + 0.6 * 11.6667 = 12 ppm overall.
So far, the values are reasonable, but when optimal sulphur is less than assumed the error is negative. If the actual distribution moves far enough from the assumed, a negative overall value can be
produced for one of the destination pools. Suppose on the next pass the sulphur drops from the assumed 12 to 8 ppm, and the tons are split 90/10.
- The destination sulphur values would be
12 + (-4 * 0.4)/0.9 = 9.78 ppm and 12 + (-4 * 0.6)/0.1 = -8 ppm.
- The overall sulphur balance is still maintained: 9.78 * .9 + 0.1 * -8 = 8.
Bad qualities can also be generated in delta-base process units. If, for instance, there is a delta vector that says sulphur goes down as severity goes up, it could run further than intended taking
the output property below zero. The negative value often makes the stream a very desirable material. It could be used to blend away a much higher sulphur contribution from other components going with
it into finished products or unit feeds. This can create a strong incentive to generate such values, particularly if there is no economic impact of the excess sulphur on the other the destination -
say if it is just being sold or left in inventory.
The algorithms that are used between recursion passes to update the approximations can apply a dose of reality to weird solutions. Without the requirement of expressing rules as linear equations, we
can, for example, take into account that these destination pools must always match the source. If you are using a linked process simulator to update your deltas on each recursion pass, that should
also reset the model to a more realistic position. (Unless your simulator itself predicts negative sulphur, in which case some improved filters and maybe a skip test, or even a bit of “back to the
drawing board” is in order.)
However, these impossible quality values can still hamper the optimization. If they keep reappearing in the solutions despite more sensible assumptions being used, the model is unlikely to converge.
An effective countermeasure is adding a specification: Sulphur >=0, to the pool where the bad value is turning up. It may seem like you are stating the obvious, but the optimizer only only knows what
it finds in the matrix equations – it doesn’t understand physical reality – its all just numbers. A specification like this is a way of indicating the limitations of the approximation's match to the
original problem. It establishes a “trust region” where solutions are possibly useful, and makes infeasible those that cannot be.
Use the
Recursion Monitor
to identify pools that are prone to bad values and put in just a few of these trust specs to start with. Controlling one property on a pool will often prevent other bad values as well, since the new
specification limits the shift in distributions or blended property on each pass. But you probably don’t want to put them in everywhere by default. We don't want to make our models bigger and slower
than necessary. You may also find with too many constraints, that it might not be able to get started - finding that first optimal pass. Sometimes a little bad behaviour is needed to get the
optimization process out of a hole.
You wouldn’t expect to see this sort of specification constraining in a converged solution, but that doesn’t mean it isn’t doing something important. Use the
Recursion Monitor
again to see if it is having an influence on previous passes – in which case it is helping you fight the forces of chaos (See
SLP and Chaos
) and you should leave it be!.
Comments and suggestions may be sent via the usual e-mail addresses or here.
You may also use this form to ask to be added to the distribution list so that you are notified via e-mail when new articles are posted.
From Kathy's Desk 4th February 2019.
|
{"url":"https://www.haverly.com/kathy-blog/697-blog-50-negative-sulphur","timestamp":"2024-11-06T04:53:32Z","content_type":"application/xhtml+xml","content_length":"18883","record_id":"<urn:uuid:ef884ed8-f5a6-4cc1-85ba-d8f8c0f58445>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00526.warc.gz"}
|
[Solved] Find and interpret a 95% confidence inter | SolutionInn
Find and interpret a 95% confidence interval for the mean price of a home in California. Refer
Find and interpret a 95% confidence interval for the mean price of a home in California.
Refer to the dataset Homes For Sale, which has data on houses available for sale in three Mid-Atlantic states (NY, NJ, and PA) as well as California (CA). Table 6.17 has summary statistics for each
of the four states, with prices given in thousands of dollars. (Since n = 30, we ask you to use the t-distribution here despite the fact that the data are quite skewed. In practice, we might have
enough concern about the skewness to choose to use bootstrap methods instead.)
Table 6.17
Transcribed Image Text:
Mean Std. Dev. State New York New Jersey Pennsylvania California 30 565.6 697.6 30 388.5 224.7 179.3 1112.3 30 249.6 715.1 30
Fantastic news! We've Found the answer you've been seeking!
Step by Step Answer:
Answer rating: 75% (20 reviews)
For the confidence interval for a single mean in California we use The sample si...View the full answer
Answered By
Bhartendu Goyal
Professional, Experienced, and Expert tutor who will provide speedy and to-the-point solutions. I have been teaching students for 5 years now in different subjects and it's truly been one of the most
rewarding experiences of my life. I have also done one-to-one tutoring with 100+ students and help them achieve great subject knowledge. I have expertise in computer subjects like C++, C, Java, and
Python programming and other computer Science related fields. Many of my student's parents message me that your lessons improved their children's grades and this is the best only thing you want as a
3.00+ 2+ Reviews 10+ Question Solved
Students also viewed these Mathematics questions
Study smarter with the SolutionInn App
|
{"url":"https://www.solutioninn.com/study-help/statistics-the-art-and-science/find-and-interpret-a-95-confidence-interval-for-the-mean","timestamp":"2024-11-04T05:06:05Z","content_type":"text/html","content_length":"82729","record_id":"<urn:uuid:5c8fb1e6-e19d-4a3f-a100-abdaf04facd6>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00223.warc.gz"}
|
What is capacity and volume measured in?
Volume is measured in the cubic units, whereas capacity is measured in almost all other units which include litres, pounds, gallons, etc. Volume is calculated when you multiply the length, width, and
height of an object, whereas the capacity measurement is more towards ml or cc.
What is capacity in the metric system?
Capacity is the total amount of liquid an object can contain. Capacity is often measured using the metric units of liter (L) and milliliter (mL). Note that milliliter also uses the metric prefix of
“milli-” to indicate that 1 milliliter is a thousandths of a liter.
What are all the units of capacity?
There are five basic units for measuring capacity in the U.S. customary measurement system. These are the fluid ounce, cup, pint, quart, and gallon.
What do we measure in capacity?
Measures the quantity of liquid that an object holds. For example, the capacity of a bottle is the quantity of liquid with which we can fill it. Another word for capacity is volume. Let’s say that
capacity is the volume that a body occupies in space.
Is litres volume or capacity?
A liter (or litre) is a metric unit used to measure volume or capacity. Liters are a common measurement often used to measure beverages and other liquids, such as a 2 liter bottle of soda. Sometimes
you will need to calculate the volume of an object in liters, given the object’s dimensions.
What is the measuring of capacity?
Capacity is a measure of how much something can hold, before it becomes full. A millilitre is the volume of one cubic centimetre. A thousand millilitres is a litre.
What is the best measure of capacity?
We know that capacity is the amount of liquid which a container can hold. The basic units of measurement of capacity are liter (l) and milliliter (ml). To measure smaller quantities of liquid, we use
milliliter (ml) and to measure larger quantities we use liter (l).
Which of the following are units of volume?
The fundamental unit for measuring volume is the cubic meter. There are also other units for measuring large and small quantities of volume….SI Unit of Volume.
cubic kilometer km³ 1,000,000,000 m³
cubic meter m³ 1 m³
cubic decimeter dm³ 0.001 m³
cubic centimeter cm³ 0.000001 m³
cubic millimeter mm³ 0.000000001 m³
What is the basic unit of capacity?
Which metric volume is largest?
Cubic Kilometer (km3) It is very large! It is equal to 1,000,000,000 cubic meters (1 billion m3) or 1,000,000,000,000 liters (1 trillion L). Useful for measuring large lakes, seas and oceans.
How do you measure volume?
Units of Measure
1. Volume = length x width x height.
2. You only need to know one side to figure out the volume of a cube.
3. The units of measure for volume are cubic units.
4. Volume is in three-dimensions.
5. You can multiply the sides in any order.
6. Which side you call length, width, or height doesn’t matter.
What is difference between volume and capacity?
Volume indicates the total amount of space covered by an object in three-dimensional space. Capacity refers to the ability of something (like a solid substance, gas, liquid) to hold, absorb or
receive by an object. Both solid and hollow objects have volume. Only hollow objects have the capacity.
Which is the smallest unit of capacity?
Units of Capacity A fluid ounce is the smallest unit of measuring capacity and the gallon is the largest unit.
How is volume capacity measured?
You find the volume V of a rectangular container by measuring its length (l), width (w) and height (h) and multiplying these quantities. You express the result in cubic units.
What is the difference between capacity and volume?
Volume and capacity are properties of three-dimensional objects. Volume is the space that a three-dimensional object occupies or contains; capacity, on the other hand, is the property of a container
and describes how much a container can hold.
What are 3 units of measurement for volume?
Units of Measurement/Volume
System Unit Symbol
SI cubic metre m3
CGS cubic centimetre cm3
Imperial cubic foot ft3
What is the difference between volume and capacity?
|
{"url":"https://www.sheppard-arts.com/essay-writing/what-is-capacity-and-volume-measured-in/","timestamp":"2024-11-11T07:35:31Z","content_type":"text/html","content_length":"76122","record_id":"<urn:uuid:a25a7552-e1c5-484d-8ff7-66aa37d9e9b4>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00147.warc.gz"}
|
Generation of Thermoptim model equations for combined cycles
Single pressure combined cycle
This combined cycle is the subject of guided exploration n°C-M3-V1, which you can refer to if you want to learn about this type of cycle. Its synoptic diagram is given below:
The diagram and project files are given below. Please note that they require the use of Thermoptim in English, i.e. with the inth2.zip file of this language.
Conversion to EES format
EES is a solver developed by f-Chart, which requires a license. The conversion results in a file that can be processed by the solver. The equations for calculating the fluid properties converted to
this format are given below, the others remaining unchanged:
The file that can be resolved in EES is provided below.
Dual pressure combined cycle
This cycle has been optimised using the Thermoptim pinch method. Detailed explanations on how the architecture of this cycle can be defined are given in the guidance page n°11 of this portal.
Its synoptic view is given below:
Raw equations generated
There are 257 of them. They are given in this file.
The analysis of these equations identifies 22 groups of variables and equations.
It is possible to study the direct dependencies between them.
As an example, the figure below shows the graph relating to the number of transfer units NTU of the exchanger corresponding to the high-pressure vaporizer EV_HP.
The corresponding mind map is given below.
Conversion to EES format
The file that can be resolved in EES is provided below. In the end, it has 239 equations.
The only changes to be made to the file of raw equations were the following:
• removal of redundant equations, bearing in mind that it would be necessary to change the default setting of the exchangers, which corresponds to a given epsion efficiency, to replace it with the
calculation of the gas outlet temperatures, those of the steam being set by the choice of pressures
• parameterization of the steam economizer outlets with a subcooling of 0.5 °C, in order to ensure that a minimum temperature difference of this value exists between the inlet and outlet of the
steam flow
• deletion of one of the equations at the outlet of the first divisor of the gas vein, which is in fact redundant
• replacement of the equations of the properties of the burnt gases using the two EES functions provided
• addition of the missing equations providing the values of certain parameters
This model, which involves 239 equations, seven heat exchangers and 30 components, was thus generated from the Thermoptim model at the cost of about an hour's work.
|
{"url":"https://direns.minesparis.psl.eu/Sites/Thopt/en/co/equa-cc.html","timestamp":"2024-11-11T01:44:17Z","content_type":"text/html","content_length":"16884","record_id":"<urn:uuid:3b6d5ecd-f9a7-414c-a2a5-f72bcdaf398b>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00152.warc.gz"}
|
Tempotoets, hier leer je gratis foutloos rekenen
Psss.... did you know that
tests have already been made here ?
Also try how good you are at mental arithmetic! Arithmetic without time? Go to leren-rekenen.nl Here you learn arithmetic error-free for FREE
Here you can exercise fractions.
Suppose the sunm 1/2 + 1/3 than the answer is 5/6. You fill in 5/6.
If you choose 1-5 than the fractions are additions. The denominators are not larger than 5
If the aswer is 5/4 than you don't have to rewrite the answer to 1 1/4
|
{"url":"https://en.tempotoets.nl/leerling/breuken.php","timestamp":"2024-11-06T11:03:23Z","content_type":"text/html","content_length":"169799","record_id":"<urn:uuid:458e5c58-b682-4240-b95e-3552c74a6498>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00201.warc.gz"}
|
Beast Mode Help: Two layers of division
Beast Mode Help: Two layers of division
I've got a calculation I am trying to perform with a beast mode formula, that I can't get to work.
The calculation is (Number of Lines / Hours Active) / Standard
I'm trying to protect against divide by zero in the two layers of the formula. The first item in the order of operation is the (Number of Lines / Hours Active), then the result of that calculation
is / Standard
Here is the beast mode I attempted, but is not working. I know this case is checking for NULL, do I need to check for zero as well?
WHEN IFNULL(SUM(`RCV Standard - Dynamic`),0)=0
THEN 0
WHEN IFNULL(SUM(`RCV Time Active`),0)=0
THEN 0
(SUM(`RCV Number of Lines`)/SUM(`RCV Time Active`))/`RCV Standard - Dynamic`
Best Answer
• Another option is to do a nested CASE statement to account for the divide by zero in both divisors:
WHEN IFNULL(`Standard`,0) = 0 THEN 0
WHEN IFNULL(`Hours Active`,0) = 0 THEN 0
ELSE (IFNULL(Number of Lines,0) / IFNULL(Hours Active,0))
END) / IFNULL(`Standard`,0)
If I have answered your question, please click "Yes" on my comment's option.
• Hi.
Please try using nullif and ifnull.
(Number of Lines / Hours Active) / Standard
Standard = 0 -> null
(Number of Lines / Hours Active) / nullif(Standard,0)
0 or null -> 0
ifnull((Number of Lines / Hours Active) / nullif(Standard,0),0)
• I've tried a few times without success to change my existing Beast Mode to include what you've suggested and I cannot get it to work. Can you show me the completed Beast Mode instead of just
those suggested lines please?
• Hi @swagner,
If I correctly understood your problem... Have you tried using just nullif? Divisions by 0 do raise an error but division by null equals null (This is what I systematically use), so in your case,
I would go with :
(Number of Lines / NULLIF(Hours Active,0)) / NULLIF(Standard,0)
This way no divisor will ever be zero.
In the end, you can always revert to zero if the result is null (I do not agree with this approach as a division by zero tends to an infinite number, not a zero! But some people follow this)
IFNULL((Number of Lines / NULLIF(Hours Active,0)) / NULLIF(Standard,0),0)
Hope this helps.
Ricardo Granada
**If the post solves your problem, mark it by clicking on "Accept as Solution"
**You can say "Thank you" by clicking the thumbs up in the post that helped you.
• Another option is to do a nested CASE statement to account for the divide by zero in both divisors:
WHEN IFNULL(`Standard`,0) = 0 THEN 0
WHEN IFNULL(`Hours Active`,0) = 0 THEN 0
ELSE (IFNULL(Number of Lines,0) / IFNULL(Hours Active,0))
END) / IFNULL(`Standard`,0)
If I have answered your question, please click "Yes" on my comment's option.
• I forgot to nullif to "Hours Active".
(Number of Lines / Hours Active) / Standard
Standard = 0 -> null
(Number of Lines / nullif(Hours Active,0)) / nullif(Standard,0)
(sum(Number of Lines) / nullif(sum(Hours Active),0)) / nullif(sum(Standard),0)
0 or null -> 0
ifnull((Number of Lines / nullif(Hours Active,0)) / nullif(Standard,0),0)
ifnull((sum(Number of Lines) / nullif(sum(Hours Active),0)) / nullif(sum(Standard),0),0)
• 1.8K Product Ideas
• 1.5K Connect
• 2.9K Transform
• 3.8K Visualize
• 677 Automate
• 34 Predict
• 394 Distribute
• 121 Manage
• 5.4K Community Forums
|
{"url":"https://community-forums.domo.com/main/discussion/comment/27314","timestamp":"2024-11-02T23:34:32Z","content_type":"text/html","content_length":"394454","record_id":"<urn:uuid:5c841be1-5d63-4be0-bee5-165df4b35d04>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00217.warc.gz"}
|
perplexus.info :: Just Math : Tangent Ellipses
Dear DAK,
I am having trouble making the picture. I do not see how an ellipse within a circle can, all at the same time: be tangent to the circle, have a major axis parallel to that tangent, and still be
completely within the circle. It seems to me that such an ellipse must intersect the circle and be partially outside the circle.
To do what is required, it must have its _minor_ axis parallel to the tangent line, or else be a circle, and in the latter case, it would have no unique major axis.
So, what am I missing?
Thanks very much,
Steve Lord
P.S., Here is my picture:
Edited on November 9, 2018, 2:44 am
Posted by Steven Lord on 2018-11-09 02:42:26
|
{"url":"http://perplexus.info/show.php?pid=11511&cid=60289","timestamp":"2024-11-12T20:00:14Z","content_type":"text/html","content_length":"13031","record_id":"<urn:uuid:68381247-72e8-472d-ae0b-9c4402751e24>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00143.warc.gz"}
|
Phil 148
Philosophy of Probability and Induction
Spring 2003
Professor Paolo Mancosu
Office: 233 Moses Hall
Phone: 642-5033
E-mail: mancosu@socrates.berkeley.edu
Class meets:
Office hours:
Course Description
The course is an introduction to the area of inductive logic and related topics. It is divided into three parts. The first part discusses some basic forms of deductive and inductive inferences and
introduces the mathematical calculus of probability. The second part of the course is devoted to a) the problem of the justification of induction and b) the analysis of the notion of confirmation.
This part will include discussion of Hume’s position on induction and of Goodman’s paradox. In the third part of the course we will discuss the major foundational views in probability, that is the
classical, frequentist, logical, and subjectivist theories.
Prerequisites: Phil 12A (or equivalent) [no exceptions!] and at least another course in philosophy
Week 1: Introduction; Inductive and deductive Logic (ch. 1)
Week 2: Some basic forms of inductive inference, (ch.2)
Week 3: Causal inference, (ch.2)
Week 4: Probability, (ch.3)
Week 5: Probability, (ch.3)
Week 6: The justification of induction, (ch. 5)
Week 7: The justification of induction (ch. 5)
Week 8: Confirmation and its problems, (ch. 6)
Week 9: Confirmation and its problems (ch. 6)
Week 10: Probability and expected value, (ch. 4)
Week 11: Probability and expected value (ch. 4)
Week 12: Theories of probability, (ch. 7)
Week 13: Theories of probability (ch. 7)
Week 14: Theories of probability (ch. 7)
Week 15: Review
W. Gustason, Reasoning from Evidence, Macmillan, 1994. The book is out of print but it is available as a packet of readings at Copy Central on Bancroft.
There will be another packet of readings, available later in the semester, on the justification of induction, confirmation, and conceptions of probability (Hume, Goodman, Black, Ramsey, von Mises,
Keynes etc.)
|
{"url":"https://philosophy.berkeley.edu/people/page/42","timestamp":"2024-11-13T11:18:28Z","content_type":"text/html","content_length":"8703","record_id":"<urn:uuid:9b56bb96-992d-4ea4-8c97-9d726044af27>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00833.warc.gz"}
|
2.3: Modeling Complex Systems
Last updated
Page ID
\( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \)
\( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\)
( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\)
\( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\)
\( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\)
\( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\)
\( \newcommand{\Span}{\mathrm{span}}\)
\( \newcommand{\id}{\mathrm{id}}\)
\( \newcommand{\Span}{\mathrm{span}}\)
\( \newcommand{\kernel}{\mathrm{null}\,}\)
\( \newcommand{\range}{\mathrm{range}\,}\)
\( \newcommand{\RealPart}{\mathrm{Re}}\)
\( \newcommand{\ImaginaryPart}{\mathrm{Im}}\)
\( \newcommand{\Argument}{\mathrm{Arg}}\)
\( \newcommand{\norm}[1]{\| #1 \|}\)
\( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\)
\( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\AA}{\unicode[.8,0]{x212B}}\)
\( \newcommand{\vectorA}[1]{\vec{#1}} % arrow\)
\( \newcommand{\vectorAt}[1]{\vec{\text{#1}}} % arrow\)
\( \newcommand{\vectorB}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vectorC}[1]{\textbf{#1}} \)
\( \newcommand{\vectorD}[1]{\overrightarrow{#1}} \)
\( \newcommand{\vectorDt}[1]{\overrightarrow{\text{#1}}} \)
\( \newcommand{\vectE}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{\mathbf {#1}}}} \)
\( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \)
\(\newcommand{\avec}{\mathbf a}\) \(\newcommand{\bvec}{\mathbf b}\) \(\newcommand{\cvec}{\mathbf c}\) \(\newcommand{\dvec}{\mathbf d}\) \(\newcommand{\dtil}{\widetilde{\mathbf d}}\) \(\newcommand{\
evec}{\mathbf e}\) \(\newcommand{\fvec}{\mathbf f}\) \(\newcommand{\nvec}{\mathbf n}\) \(\newcommand{\pvec}{\mathbf p}\) \(\newcommand{\qvec}{\mathbf q}\) \(\newcommand{\svec}{\mathbf s}\) \(\
newcommand{\tvec}{\mathbf t}\) \(\newcommand{\uvec}{\mathbf u}\) \(\newcommand{\vvec}{\mathbf v}\) \(\newcommand{\wvec}{\mathbf w}\) \(\newcommand{\xvec}{\mathbf x}\) \(\newcommand{\yvec}{\mathbf y}
\) \(\newcommand{\zvec}{\mathbf z}\) \(\newcommand{\rvec}{\mathbf r}\) \(\newcommand{\mvec}{\mathbf m}\) \(\newcommand{\zerovec}{\mathbf 0}\) \(\newcommand{\onevec}{\mathbf 1}\) \(\newcommand{\real}
{\mathbb R}\) \(\newcommand{\twovec}[2]{\left[\begin{array}{r}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\ctwovec}[2]{\left[\begin{array}{c}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\threevec}
[3]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\cthreevec}[3]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\fourvec}[4]{\left[\begin{array}
{r}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\cfourvec}[4]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\fivevec}[5]{\left[\begin{array}{r}#1 \\ #2 \\
#3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\cfivevec}[5]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\mattwo}[4]{\left[\begin{array}{rr}#1 \amp
#2 \\ #3 \amp #4 \\ \end{array}\right]}\) \(\newcommand{\laspan}[1]{\text{Span}\{#1\}}\) \(\newcommand{\bcal}{\cal B}\) \(\newcommand{\ccal}{\cal C}\) \(\newcommand{\scal}{\cal S}\) \(\newcommand{\
wcal}{\cal W}\) \(\newcommand{\ecal}{\cal E}\) \(\newcommand{\coords}[2]{\left\{#1\right\}_{#2}}\) \(\newcommand{\gray}[1]{\color{gray}{#1}}\) \(\newcommand{\lgray}[1]{\color{lightgray}{#1}}\) \(\
newcommand{\rank}{\operatorname{rank}}\) \(\newcommand{\row}{\text{Row}}\) \(\newcommand{\col}{\text{Col}}\) \(\renewcommand{\row}{\text{Row}}\) \(\newcommand{\nul}{\text{Nul}}\) \(\newcommand{\var}
{\text{Var}}\) \(\newcommand{\corr}{\text{corr}}\) \(\newcommand{\len}[1]{\left|#1\right|}\) \(\newcommand{\bbar}{\overline{\bvec}}\) \(\newcommand{\bhat}{\widehat{\bvec}}\) \(\newcommand{\bperp}{\
bvec^\perp}\) \(\newcommand{\xhat}{\widehat{\xvec}}\) \(\newcommand{\vhat}{\widehat{\vvec}}\) \(\newcommand{\uhat}{\widehat{\uvec}}\) \(\newcommand{\what}{\widehat{\wvec}}\) \(\newcommand{\Sighat}{\
widehat{\Sigma}}\) \(\newcommand{\lt}{<}\) \(\newcommand{\gt}{>}\) \(\newcommand{\amp}{&}\) \(\definecolor{fillinmathshade}{gray}{0.9}\)
The challenge in developing a model becomes particularly tough when it comes to the modeling of complex systems, because their unique properties (networks, nonlinearity, emergence, self-organization,
etc.) are not what we are familiar with. We usually think about things on a single scale in a step-by-step, linear chain of reasoning, in which causes and effects are clearly distinguished and
discussed sequentially. But this approach is not suitable for understanding complex systems where a massive amount of components are interacting with each other interdependently to generate patterns
over a broad range of scales. Therefore, the behavior of complex systems often appears to contradict our everyday experiences.
As illustrated in the examples above, it is extremely difficult for us to come up with a reasonable model when we are facing something unfamiliar. And it is even more difficult to come up with a
reasonable set of microscopic rules that could explain the observed macroscopic properties of a system. Most of us are simply not experienced enough to make logical connections between things at
multiple different scales.
How can we improve our abilities to model complex systems? The answer might be as simple as this: We need to become experienced and familiar with various dynamics of complex systems to become a good
modeler of them. How can we become experienced? This is a tricky question, but thanks to the availability of the computers around us, computational modeling and simulation is becoming a reasonable,
practical method for this purpose. You can construct your own model with full details of microscopic rules coded into your computer, and then let it actually show the macroscopic behavior arising
from those rules. Such computational modeling and simulation is a very powerful tool that allows you to gain interactive, intuitive (simulated) experiences of various possible dynamics that help you
make mental connections between micro- and macroscopic scales. I would say there are virtually no better tools available for studying the dynamics of complex systems in general.
There are a number of pre-built tools available for complex systems modeling and simulation, including NetLogo [13], Repast [14], MASON [15], Golly [16], and so on. You could also build your own
model by using general-purpose computer programming languages, including C, C++, Java, Python, R, Mathematica, MATLAB, etc. In this textbook, we choose Python as our modeling tool, specifically
Python 2.7, and use PyCX [17] to build interactive dynamic simulation models^3. Python is free and widely used in scientific computing as well as in the information technology industries. More
details of the rationale for this choice can be found in [17].
When you create a model of a complex system, you typically need to think about the following:
1. What are the key questions you want to address?
2. To answer those key questions, at what scale should you describe the behaviors of the system’s components? These components will be the “microscopic” components of the system, and you will define
dynamical rules for their behaviors.
3. How is the system structured? This includes what those microscopic components are, and how they will be interacting with each other.
4. What are the possible states of the system? This means describing what kind of dynamical states each component can take.
5. How does the state of the system change over time? This includes defining the dynamical rules by which the components’ states will change over time via their mutual interaction, as well as
defining how the interactions among the components will change over time.
Figuring out the “right” choices for these questions is by no means a trivial task. You will likely need to loop through these questions several times until your model successfully produces behaviors
that mimic key aspects of the system you are trying to model. We will practice many examples of these steps throughout this textbook.
Create a schematic model of some real-world system of your choice that is made of many interacting components. Which scale do you choose to describe the microscopic components? What are those
components? What states can they take? How are those components connected? How do their states change over time? After answering all of these questions, make a mental prediction about what kind of
macroscopic behaviors would arise if you ran a computational simulation of your model
^3For those who are new to Python programming, see Python’s online tutorial at docs.python. org/2/tutorial/index.html. Several pre-packaged Python distributions are available for free, such as
Anaconda (available from continuum.io/downloads) and Enthought Canopy (available from enthought.com/products/canopy/). A recommended environment is Anaconda’s Python code editor named “Spyder.”
|
{"url":"https://math.libretexts.org/Bookshelves/Scientific_Computing_Simulations_and_Modeling/Introduction_to_the_Modeling_and_Analysis_of_Complex_Systems_(Sayama)/02%3A_Fundamentals_of_Modeling/2.03%3A_Modeling_Complex_Systems","timestamp":"2024-11-08T00:58:25Z","content_type":"text/html","content_length":"133194","record_id":"<urn:uuid:ee2f141d-8674-4303-8f32-cda9127f0939>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00872.warc.gz"}
|
class compas.geometry.Rotation(matrix=None, check=True)[source]
Bases: Transformation
Class representing a rotation transformation.
The class contains methods for converting rotation matrices to axis-angle representations, Euler angles, quaternion and basis vectors.
matrix (list[list[float]], optional) – A 4x4 matrix (or similar) representing a rotation.
ValueError – If the default constructor is used, and the provided transformation matrix is not a rotation.
>>> from compas.geometry import Frame
>>> f1 = Frame([0, 0, 0], [0.68, 0.68, 0.27], [-0.67, 0.73, -0.15])
>>> R = Rotation.from_frame(f1)
>>> args = False, 'xyz'
>>> alpha, beta, gamma = R.euler_angles(*args)
>>> xaxis, yaxis, zaxis = [1, 0, 0], [0, 1, 0], [0, 0, 1]
>>> Rx = Rotation.from_axis_and_angle(xaxis, alpha)
>>> Ry = Rotation.from_axis_and_angle(yaxis, beta)
>>> Rz = Rotation.from_axis_and_angle(zaxis, gamma)
>>> f2 = Frame.worldXY()
>>> f1 == f2.transformed(Rx * Ry * Rz)
euler_angles Returns Euler angles from the rotation according to specified axis sequence and rotation type.
from_axis_and_angle Construct a rotation transformation from a rotation axis and an angle and an optional point of rotation.
from_axis_angle_vector Construct a rotation transformation from an axis-angle vector.
from_basis_vectors Construct a rotation transformation from basis vectors (= orthonormal vectors).
from_euler_angles Construct a rotation transformation from Euler angles.
from_frame Construct a rotation transformationn from world XY to frame.
from_quaternion Construct a rotation transformation` from quaternion coefficients.
Inherited Methods
ToString Converts the instance to a string.
concatenate Concatenate another transformation to this transformation.
concatenated Concatenate two transformations into one Transformation.
copy Returns a copy of the transformation.
decomposed Decompose the Transformation into its components.
from_change_of_basis Construct a change of basis transformation between two frames.
from_data Construct an object of this type from the provided data.
from_frame_to_frame Construct a transformation between two frames.
from_json Construct an object from serialized data contained in a JSON file.
from_jsonstring Construct an object from serialized data contained in a JSON string.
from_list Creates a transformation from a list of 16 numbers.
from_matrix Creates a transformation from a list[list[float]] object.
inverse Returns the inverse transformation.
invert Invert this transformation.
inverted Returns the inverse transformation.
sha256 Compute a hash of the data for comparison during version control using the sha256 algorithm.
to_data Convert an object to its native data representation.
to_json Serialize the data representation of an object to a JSON file.
to_jsonstring Serialize the data representation of an object to a JSON string.
transpose Transpose the matrix of this transformation.
transposed Create a transposed copy of this transformation.
validate_data Validate the object's data against its data schema.
validate_json Validate the object's data against its json schema.
|
{"url":"https://compas.dev/compas/1.16.0/api/generated/compas.geometry.Rotation.html","timestamp":"2024-11-11T07:42:18Z","content_type":"text/html","content_length":"36429","record_id":"<urn:uuid:5c44eb63-fe23-4f57-9949-633d445e33ea>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00695.warc.gz"}
|
Fullscreen Multiplication Charts 1 And 80 2024 - Multiplication Chart Printable
Fullscreen Multiplication Charts 1 And 80
Fullscreen Multiplication Charts 1 And 80 – You can get a blank Multiplication Chart if you are looking for a fun way to teach your child the multiplication facts. This will likely allow your kid to
complete the information on their own. You will discover blank multiplication charts for many different item varies, which includes 1-9, 10-12, and 15 products. You can add a Game to it if you want
to make your chart more exciting. Here are some ideas to get your kid started out: Fullscreen Multiplication Charts 1 And 80.
Multiplication Graphs
You can use multiplication maps as part of your child’s university student binder to assist them to commit to memory math details. Even though many young children can commit to memory their math
specifics by natural means, it will take many others time to do this. Multiplication charts are an ideal way to reinforce their boost and learning their self-confidence. As well as being educative,
these graphs might be laminated for toughness. Listed below are some helpful approaches to use multiplication maps. Also you can look at websites like these for valuable multiplication truth
This lesson handles the basic principles from the multiplication kitchen table. Along with understanding the principles for multiplying, individuals will fully grasp the idea of aspects and
patterning. By understanding how the factors work, students will be able to recall basic facts like five times four. They may also be able to utilize the home of zero and one to eliminate more
complex merchandise. Students should be able to recognize patterns in multiplication chart 1, by the end of the lesson.
Besides the regular multiplication chart, students might need to build a graph with a lot more elements or less elements. To generate a multiplication graph with more factors, college students need
to create 12 dining tables, every single with twelve rows and three posts. All 12 furniture need to in shape in one sheet of papers. Collections ought to be driven having a ruler. Graph papers is
right for this undertaking. Students can use spreadsheet programs to make their own tables if graph paper is not an option.
Online game ideas
Whether you are training a newcomer multiplication training or concentrating on the expertise of the multiplication dinner table, you may think of entertaining and fascinating video game concepts for
Multiplication Graph or chart 1. A few entertaining suggestions are listed below. This video game requires the students to remain work and pairs on a single difficulty. Then, they will all endure
their greeting cards and discuss the answer for any min. If they get it right, they win!
When you’re instructing youngsters about multiplication, one of the best equipment it is possible to give them is really a computer multiplication graph. These computer bedding appear in a number of
models and might be published on one web page or a number of. Little ones can understand their multiplication facts by copying them from your memorizing and chart them. A multiplication chart can be
helpful for many motives, from helping them understand their math facts to instructing them the way you use a calculator.
Gallery of Fullscreen Multiplication Charts 1 And 80
Math Timetables For Kids Bing Images Math Time Homeschool Math
Copy Of Multiplication Table Multiplication Table Multiplication
Multiplication Chart 80 80 PrintableMultiplication
Leave a Comment
|
{"url":"https://www.multiplicationchartprintable.com/fullscreen-multiplication-charts-1-and-80/","timestamp":"2024-11-06T09:13:01Z","content_type":"text/html","content_length":"51999","record_id":"<urn:uuid:76a7c5ac-450b-4ae3-bede-e46e27d641d0>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00328.warc.gz"}
|
Learn What Is Trend Formula And How To Use. -
TREND WHAT IS CONST b ?
What Does It Do ?
This function predicts values based upon three sets of related values.
The prediction is based upon the Linear Trend of the original values.
The function is an array function and must be entered using Ctrl+Shift+Enter.
The KnownYs is the range of values, such as Sales Figures.
The KnownXs is the intervals used when collecting the data, such as Months.
The RequiredXs is the range for which you want to make the prediction, such as Months.
No special formatting is needed.
The following tables were used by a company to predict when they would start to make a profit.
Their bank manager had told the company that unless they could show a profit by the end of the next year, the bank would no longer provide an overdraft facility.
To prove to the bank that, based upon the past years performance, the company would start to make a profit at the end of the next year, the =TREND() function was used.
The historical data for the past year was entered, months 1 to 12.
The months to predict were entered, 13 to 24.
The =TREND() function shows that it will be month 22 before the company make a profit.
How To Enter An Array Formula
Select all the cells where the array is required, such as F41 to F52.
Type the formula such as =TREND(C41:C52,B41:B52,E41:E52), but do not press Enter.
Hold the Ctrl+Shift keys down.
Press Enter to enter the formula as an array.
|
{"url":"https://aakhirkyon.in/learn-what-is-trend-formula-and-how-to-use/","timestamp":"2024-11-05T23:38:46Z","content_type":"text/html","content_length":"113523","record_id":"<urn:uuid:04bc05e2-3c3f-47d5-8f1f-f456241c7657>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00478.warc.gz"}
|
Multiplying 4 Digit By 1 Digit Numbers Worksheet 2024 - NumbersWorksheets.net
Multiplying 4 Digit By 1 Digit Numbers Worksheet
Multiplying 4 Digit By 1 Digit Numbers Worksheet – This multiplication worksheet focuses on educating individuals how to psychologically grow entire amounts. Pupils may use custom grids to put
particularly one query. The worksheets also coverdecimals and fractions, and exponents. You will even find multiplication worksheets with a handed out property. These worksheets certainly are a need
to-have for your personal arithmetic type. They can be utilized in course to learn to emotionally increase complete numbers and line them up. Multiplying 4 Digit By 1 Digit Numbers Worksheet.
Multiplication of whole numbers
If you want to improve your child’s math skills, you should consider purchasing a multiplication of whole numbers worksheet. These worksheets can help you grasp this basic idea. You may decide to use
one digit multipliers or two-digit and about three-digit multipliers. Powers of 10 will also be a great choice. These worksheets will help you to training lengthy practice and multiplication reading
the phone numbers. Also, they are a terrific way to aid your child fully grasp the value of learning the different types of whole numbers.
Multiplication of fractions
Getting multiplication of fractions over a worksheet might help teachers prepare and put together instruction successfully. Using fractions worksheets allows educators to easily examine students’
understanding of fractions. Pupils might be pushed in order to complete the worksheet in just a certain time and then tag their answers to see where by they require further more training. College
students may benefit from word conditions that associate maths to true-lifestyle scenarios. Some fractions worksheets consist of instances of contrasting and comparing numbers.
Multiplication of decimals
Once you increase two decimal numbers, be sure to group of people them vertically. The product must contain the same number of decimal places as the multiplicant if you want to multiply a decimal
number with a whole number. For instance, 01 x (11.2) by 2 could be similar to 01 by 2.33 x 11.2 unless the merchandise has decimal spots of less than two. Then, the product is circular on the
nearest total amount.
Multiplication of exponents
A arithmetic worksheet for Multiplication of exponents will assist you to exercise dividing and multiplying figures with exponents. This worksheet may also provide problems that will need students to
grow two different exponents. By selecting the “All Positive” version, you will be able to view other versions of the worksheet. Aside from, you can even key in special guidelines on the worksheet on
its own. When you’re completed, you are able to click on “Create” and the worksheet will probably be delivered electronically.
Division of exponents
The basic tip for division of exponents when multiplying numbers is always to subtract the exponent inside the denominator from the exponent within the numerator. You can simply divide the numbers
using the same rule if the bases of the two numbers are not the same. For instance, $23 separated by 4 will identical 27. However, this method is not always accurate. This system can cause confusion
when multiplying figures that happen to be too big or not big enough.
Linear capabilities
If you’ve ever rented a car, you’ve probably noticed that the cost was $320 x 10 days. So, the total rent would be $470. A linear purpose of this kind provides the type f(x), where ‘x’ is the amount
of days and nights the vehicle was booked. Furthermore, they have the form f(by) = ax b, where by ‘b’ and ‘a’ are real phone numbers.
Gallery of Multiplying 4 Digit By 1 Digit Numbers Worksheet
Leave a Comment
|
{"url":"https://www.numbersworksheets.net/multiplying-4-digit-by-1-digit-numbers-worksheet/","timestamp":"2024-11-12T21:45:53Z","content_type":"text/html","content_length":"60937","record_id":"<urn:uuid:0c2a8b8e-8b7d-44ba-82a0-4988df1817e9>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00866.warc.gz"}
|
Crystalline Lattice - NguyenStarch
Crystalline Lattice
The two types, allomorphic A and B, differ essentially by stacking propellers and the number of water molecules in the crystal lattice. Type A has a monoclinic lattice type (a = 2.124 nm, b = 1.172
nm, c = 1.069 nm, and γ = 123.5 degrees) and belongs to the space group B2, which implies that the repeating unit of the helix is a trimer glucose (maltotriose) repeated two times per helical turn.
Each propeller has six neighbors and the mesh contains only four water molecules distributed equivalently (Imberty et al., 1988). The crystalline B type has a hexagonal lattice with parameters a = b
= 1.85 nm and c = 1.04 nm. In this structure, each helix has only three neighbors but this stacking generates a large canal that contains 36 water molecules per unit cell (Imberty and Perez, 1988).
More recent work by Takahashi et al. (2004) tends to confirm this structure with a hexagonal packing belonging to the P61 space group and slightly different lattice parameters a = b = 1.852 nm, c =
1.057 nm. The repeating unit of the helix is in this case a dimer (maltose) repeated three times per turn. The allomorphic types A and B can also be differentiated in nuclear magnetic resonance (NMR)
13C CP/MAS (cross-polarization/magic angle spinning). The carbon signal C1 (involved in the glycoside bond) of glucose is a triplet for A type and a doublet for B type. Indeed, the dihedral angles
from both sides of the C1 carbon take three slightly different values in the case where the repeating unit is a maltotriose (Type A) and only two in the case of maltose (Type B) (Horn et al., 1987;
Veregin et al., 1987a,b).
|
{"url":"https://nguyenstarch.com/crystalline-lattice/","timestamp":"2024-11-08T06:11:21Z","content_type":"text/html","content_length":"70644","record_id":"<urn:uuid:fedff47c-d18e-457f-a060-c9dc372e8d50>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00887.warc.gz"}
|
Democrats: Clinton wins day despite surprise Sanders Michigan win
You may have heard all the breathless coverage last night of Sanders’ surprise win in Michigan. And it was indeed a surprise. He outperformed all the recent polls by a substantial margin. This was a
big huge win, right?
Well, no. It wasn’t. The media hype is all around the fact that it is more interesting to cover an actual race than a slow march to an almost inevitable win. Time to look at the numbers.
First of all, the straight up delegates out of Michigan: Sanders 69, Clinton 61
That means Sanders got 53.1% of the Michigan delegates. To be on a pace to catch up and win, he needed 60.2% of the delegates. He may have “won”, but he didn’t win by anywhere near enough to actually
catch up with Clinton.
But Michigan wasn’t the only state handing out delegates. Mississippi did too. And Mississippi went for Clinton 32 to 4. So the total for the day was actually Clinton 93 to Sanders 73.
There were also some superdelegate updates since my last post. Net change: Clinton +4, Sanders +3.
So total since the Maine results on Sunday: Clinton 97, Sanders 76.
So Sanders only got 43.9% of the delegates since Sunday. This is not close to 60.2%.
Yes, Sanders pulled out a surprise win in Michigan. But he could duplicate that win in EVERY STATE from here until the end of the primary season and he STILL would not catch up. He would not win. He
can’t catch up by “just winning”. He needs to win by huge margins to catch up. That did happen (barely) in Maine. But Michigan didn’t do it, and Sanders has only very rarely managed the margins he
would need to catch up.
So, looking at the graph of “% of remaining needed to win”:
Clinton now needs 38.96% of the remaining delegates to win. Sanders needs 61.14% of the remaining delegates to catch up.
Oh, unless the superdelegates start changing their minds in massive numbers. That would make things harder for Clinton and easier for Sanders. And to be fair, if all 460 of Clinton’s superdelegates
flipped to Sanders tomorrow, Sanders would indeed be ahead by 1035 delegates to 771.
I wouldn’t hold my breath on that one though.
Right now the totals are Clinton 1231, Sanders 575.
Next up are the Northern Marianas on Saturday, then Florida, Illinois, Missouri, North Carolina and Ohio next Tuesday.
[Edit 16:22 UTC to add the following exchange]
Minutes after I posted this update, I got a comment via Facebook that prompted me to do some additional analysis. It seemed directly relevant and interesting, so adding it here (lightly edited):
Jenn: Since superdelegates have never actually gone against the popular vote and could change their minds, what’s the math if we simply exclude them? I’m not saying it’ll happen. I’m just curious and
too lazy to do it myself.
Sam: On superdelegates, my position is that if they ever start to change their minds because of the results of the pledged delegate race, we will see that because they will say so. So speculating
about them changing their minds is pointless, you can actually watch and see if they do. If they start changing their minds, then it is a real thing and it will be measured and tracked as it happens.
So long as they don’t, it is just a fantasy. But OK, I’ll quickly do the calculation of what things would look like if you only look at pledged delegates and assume superdelegates will follow the
pledged delegate result.
[Edit 2016-03-10 16:07 UTC to strike through the calculation below due to really bad stupid math error that completely invalidates the result. Sanders needed percentage will be closer to 55%. Redoing
calculations shortly.]
[DEL:Sam: OK. Here goes. The current totals are Clinton 1231, Sanders 575, O’Malley 1. If you take out supers, that becomes Clinton 771, Sanders 552. Now, there are 2472 delegates total, so you need
1237 delegates to win. But that includes superdelegates. If the assumption is that supers will go for the pledged delegate winner, then you shouldn’t count them in the total number of delegates
either because they now don’t matter. Without supers there are 1755 delegates, and you need 878 to win. Sanders therefore needs 326 more delegates to catch up and win. Between Clinton and Sanders
they have collected 1323 delegates already. So there are only 432 pledged delegates remaining. 326/432 = 75.5%. So if you look only at pledged delegates it is actually WORSE for Sanders. He needs
nearly 76% of the remaining delegates to catch up and win rather than “only” 61%.:DEL]
[Edit 2016-03-10 17:30 to add additional conversation correcting the erroneous calculation. I am leaving the first version struck out above for the record. The below is lightly edited from the
original Facebook conversation.]
Sam: Urg!!!! I made a huge error on those calculations! I blame it on…. Uh…. Being stupid. I used the total number of Republican delegates in one part of the calculation rather than the Democratic
totals, which of course invalidates the whole thing. I suspect Sanders actually needs closer to 55% if you don’t count supers. I will redo the calculation.
Sam: OK, here goes again. I suck. Numbers that have changed bolded. The current totals are Clinton 1231, Sanders 575, O’Malley 1. If you take out supers, that becomes Clinton 771, Sanders 552. Now,
there are 4765 delegates total, so you need 2383 delegates to win. But that includes superdelegates. If the assumption is that supers will go for the pledged delegate winner, then you shouldn’t count
them in the total number of delegates either. Without supers there are 4048 delegates, and you need 2025 to win. Sanders therefore needs 1450 more delegates to catch up and win. Between Clinton and
Sanders they have collected 1323 delegates already. So there are only 2725 pledged delegates remaining. 1450/2725 = 53.2%. This is significantly better than the 41.7% of pledged delegates Sanders has
gotten so far, but it is not yet in the impossible zone by a long shot, and it is still better than where he is when you include superdelegates. Apologies for the stupid error.
Sam: I had actually checked and rechecked the calculation several times before posting it originally, but I made the same mistake every time. Sigh! Oh well!
Sam: See also this article by Andrew Prokop for more on what would be involved in a Sanders comeback. It was while reading this that I realized my error.
[Update 2016-03-10 06:45 UTC – Update in Michigan shifts 2 additional delegates from Sanders to Clinton. This does not substantially change the analysis above. In addition, the number of total
“unpledged PLEOs” was adjusted in several states, giving a net addition of 1 total convention delegate.]
[Update 2016-03-11 05:29 UTC – Superdelegate update: Clinton loses one as a second superdelegate says they will just vote for the pledged delegate winner, putting them back in the uncommitted
category for now.]
[Update 2016-03-12 23:50 UTC – Superdelegate update to prepare for March 12th results: Clinton +1, Sanders +1]
Note: This post is an update based on the data on ElectionGraphs.com. Election Graphs tracks both a poll based estimate of the Electoral College and a numbers based look at the Delegate Races. All of
the charts and graphs seen in this post are from that site. Additional graphs, charts and raw data can be found there. All charts above are clickable to go to the current version of the detail page
the chart is from, which may contain more up to date information than the snapshots on this page, which were current as of the time of this post. Follow @ElectionGraphs on Twitter or like Election
Graphs on Facebook to see announcements of updates or to join the conversation. For those interested in individual general election poll updates, follow @ElecCollPolls on Twitter for all the polls as
they are added.
[Edit 15:59 to fix one place I said Sanders instead of Clinton. Fixed. Thanks Jenn for pointing it out.]
[Edit 2016-03-10 21:12 UTC to fix author of Vox article I linked to.]
I think you misreported one of the wins as sanders instead of Clinton. Also, since superdelegates have never actually gone against the popular vote and could change their minds, what’s the math if we
simply exclude them? I’m not saying it’ll happen. I’m just curious and too lazy to do it myself.
Fixed the Clinton/Sanders mixup. Thanks for pointing it out. In terms of the superdelegates, my position is that if they ever start to change their minds because of the results of the pledged
delegate race, we will see that because they will say so. So speculating about them changing their minds is pointless, you can actually watch and see if they do. If they start changing their minds,
then it is a real thing and it will be measured and tracked as it happens. So long as they don’t, it is just a fantasy. But OK, I’ll quickly do the calculation of what things would look like if you
considered all the superdelegates as undecided. Back with that in a minute.
OK. Here goes. The current totals are Clinton 1231, Sanders 575, O’Malley 1. If you take out supers, that becomes Clinton 771, Sanders 552. Now, there are 2472 delegates total, so you need 1237
delegates to win. But that includes superdelegates. If the assumption is that supers will go for the pledged delegate winner, then you shouldn’t count them in the total number of delegates either.
Without supers there are 1755 delegates, and you need 878 to win. Sanders therefore needs 326 more delegates to catch up and win. Between Clinton and Sanders they have collected 1323 delegates
already. So there are only 432 pledged delegates remaining. 326/432 = 75.5%. So if you look only at pledged delegates it is actually WORSE for Sanders.
Considering if I should do an edit and add that to the post or not. But I have to get my son to school now… :-)
OK. I added it (with minor edits). We are going to be late for school now. :-)
Samuel Minter your constituents thank you (and Alex)
Sanders needs 61.1% of the remaining delegates to win. If superdelegates did not exist, he would need 75.5%. Oops. https://t.co/lAkY6ahdul
Michael Lofts liked this on Facebook.
Urg!!!! I made a huge error on those calculations! I blame it on…. Uh…. Being stupid. I used the total number of Republican delegates in one part of the calculation, which of course invalidates the
whole thing. I suspect Sanders actually needs closer to 55% if you don’t count supers. Will redo the calculation as soon as I can, but I currently don’t have power at my house and I have to get on
work conference calls…
OK, here goes again. I suck. Numbers that have changed marked. The current totals are Clinton 1231, Sanders 575, O’Malley 1. If you take out supers, that becomes Clinton 771, Sanders 552. Now, there
are *4765* delegates total, so you need *2383* delegates to win. But that includes superdelegates. If the assumption is that supers will go for the pledged delegate winner, then you shouldn’t count
them in the total number of delegates either. Without supers there are *4048* delegates, and you need *2025* to win. Sanders therefore needs *1450* more delegates to catch up and win. Between Clinton
and Sanders they have collected 1323 delegates already. So there are only *2725* pledged delegates remaining. *1450*/*2725* = *53.2%*. This is significantly better than the 41.7% of pledged delegates
Sanders has gotten so far, but it is not yet in the impossible zone by a long shot, and it is still better than where he is when you include superdelegates. Apologies for the stupid error.
Correction posted in the article on my site as well.
I had actually checked and rechecked the calculation several times before posting it originally, but I made the same mistake every time. Sigh! Oh well!
Corrected post now up: https://t.co/lAkY6ahdul If superdelegates did not exist Sanders would need only 53.2% of remaining delegates.
See also this article by Matt Yglesias for more on what would be involved in a Sanders comeback. It was while reading this that I realized my error: http://www.vox.com/2016/3/10/11189908/
You must be logged in to post a comment.
|
{"url":"http://www.abulsme.com/2016/03/09/democrats-clinton-wins-day-despite-surprise-sanders-michigan-win/","timestamp":"2024-11-11T22:42:32Z","content_type":"application/xhtml+xml","content_length":"130463","record_id":"<urn:uuid:027a1469-2b3d-4f16-a6b9-407687135ed4>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00745.warc.gz"}
|
1 Arcsec/Square Week to Arcsec/Square Microsecond
Arcsec/Square Week [arcsec/week2] Output
1 arcsec/square week in degree/square second is equal to 7.5940584281266e-16
1 arcsec/square week in degree/square millisecond is equal to 7.5940584281266e-22
1 arcsec/square week in degree/square microsecond is equal to 7.5940584281266e-28
1 arcsec/square week in degree/square nanosecond is equal to 7.5940584281266e-34
1 arcsec/square week in degree/square minute is equal to 2.7338610341256e-12
1 arcsec/square week in degree/square hour is equal to 9.8418997228521e-9
1 arcsec/square week in degree/square day is equal to 0.0000056689342403628
1 arcsec/square week in degree/square week is equal to 0.00027777777777778
1 arcsec/square week in degree/square month is equal to 0.0052519354095805
1 arcsec/square week in degree/square year is equal to 0.75627869897959
1 arcsec/square week in radian/square second is equal to 1.3254132315963e-17
1 arcsec/square week in radian/square millisecond is equal to 1.3254132315963e-23
1 arcsec/square week in radian/square microsecond is equal to 1.3254132315963e-29
1 arcsec/square week in radian/square nanosecond is equal to 1.3254132315963e-35
1 arcsec/square week in radian/square minute is equal to 4.7714876337469e-14
1 arcsec/square week in radian/square hour is equal to 1.7177355481489e-10
1 arcsec/square week in radian/square day is equal to 9.8941567573375e-8
1 arcsec/square week in radian/square week is equal to 0.0000048481368110954
1 arcsec/square week in radian/square month is equal to 0.000091663564999257
1 arcsec/square week in radian/square year is equal to 0.013199553359893
1 arcsec/square week in gradian/square second is equal to 8.4378426979185e-16
1 arcsec/square week in gradian/square millisecond is equal to 8.4378426979185e-22
1 arcsec/square week in gradian/square microsecond is equal to 8.4378426979185e-28
1 arcsec/square week in gradian/square nanosecond is equal to 8.4378426979185e-34
1 arcsec/square week in gradian/square minute is equal to 3.0376233712506e-12
1 arcsec/square week in gradian/square hour is equal to 1.0935444136502e-8
1 arcsec/square week in gradian/square day is equal to 0.0000062988158226253
1 arcsec/square week in gradian/square week is equal to 0.00030864197530864
1 arcsec/square week in gradian/square month is equal to 0.0058354837884228
1 arcsec/square week in gradian/square year is equal to 0.84030966553288
1 arcsec/square week in arcmin/square second is equal to 4.556435056876e-14
1 arcsec/square week in arcmin/square millisecond is equal to 4.556435056876e-20
1 arcsec/square week in arcmin/square microsecond is equal to 4.556435056876e-26
1 arcsec/square week in arcmin/square nanosecond is equal to 4.556435056876e-32
1 arcsec/square week in arcmin/square minute is equal to 1.6403166204754e-10
1 arcsec/square week in arcmin/square hour is equal to 5.9051398337113e-7
1 arcsec/square week in arcmin/square day is equal to 0.00034013605442177
1 arcsec/square week in arcmin/square week is equal to 0.016666666666667
1 arcsec/square week in arcmin/square month is equal to 0.31511612457483
1 arcsec/square week in arcmin/square year is equal to 45.38
1 arcsec/square week in arcsec/square second is equal to 2.7338610341256e-12
1 arcsec/square week in arcsec/square millisecond is equal to 2.7338610341256e-18
1 arcsec/square week in arcsec/square microsecond is equal to 2.7338610341256e-24
1 arcsec/square week in arcsec/square nanosecond is equal to 2.7338610341256e-30
1 arcsec/square week in arcsec/square minute is equal to 9.8418997228521e-9
1 arcsec/square week in arcsec/square hour is equal to 0.000035430839002268
1 arcsec/square week in arcsec/square day is equal to 0.020408163265306
1 arcsec/square week in arcsec/square month is equal to 18.91
1 arcsec/square week in arcsec/square year is equal to 2722.6
1 arcsec/square week in sign/square second is equal to 2.5313528093755e-17
1 arcsec/square week in sign/square millisecond is equal to 2.5313528093755e-23
1 arcsec/square week in sign/square microsecond is equal to 2.5313528093755e-29
1 arcsec/square week in sign/square nanosecond is equal to 2.5313528093755e-35
1 arcsec/square week in sign/square minute is equal to 9.112870113752e-14
1 arcsec/square week in sign/square hour is equal to 3.2806332409507e-10
1 arcsec/square week in sign/square day is equal to 1.8896447467876e-7
1 arcsec/square week in sign/square week is equal to 0.0000092592592592593
1 arcsec/square week in sign/square month is equal to 0.00017506451365268
1 arcsec/square week in sign/square year is equal to 0.025209289965986
1 arcsec/square week in turn/square second is equal to 2.1094606744796e-18
1 arcsec/square week in turn/square millisecond is equal to 2.1094606744796e-24
1 arcsec/square week in turn/square microsecond is equal to 2.1094606744796e-30
1 arcsec/square week in turn/square nanosecond is equal to 2.1094606744796e-36
1 arcsec/square week in turn/square minute is equal to 7.5940584281266e-15
1 arcsec/square week in turn/square hour is equal to 2.7338610341256e-11
1 arcsec/square week in turn/square day is equal to 1.5747039556563e-8
1 arcsec/square week in turn/square week is equal to 7.7160493827161e-7
1 arcsec/square week in turn/square month is equal to 0.000014588709471057
1 arcsec/square week in turn/square year is equal to 0.0021007741638322
1 arcsec/square week in circle/square second is equal to 2.1094606744796e-18
1 arcsec/square week in circle/square millisecond is equal to 2.1094606744796e-24
1 arcsec/square week in circle/square microsecond is equal to 2.1094606744796e-30
1 arcsec/square week in circle/square nanosecond is equal to 2.1094606744796e-36
1 arcsec/square week in circle/square minute is equal to 7.5940584281266e-15
1 arcsec/square week in circle/square hour is equal to 2.7338610341256e-11
1 arcsec/square week in circle/square day is equal to 1.5747039556563e-8
1 arcsec/square week in circle/square week is equal to 7.7160493827161e-7
1 arcsec/square week in circle/square month is equal to 0.000014588709471057
1 arcsec/square week in circle/square year is equal to 0.0021007741638322
1 arcsec/square week in mil/square second is equal to 1.350054831667e-14
1 arcsec/square week in mil/square millisecond is equal to 1.350054831667e-20
1 arcsec/square week in mil/square microsecond is equal to 1.350054831667e-26
1 arcsec/square week in mil/square nanosecond is equal to 1.350054831667e-32
1 arcsec/square week in mil/square minute is equal to 4.860197394001e-11
1 arcsec/square week in mil/square hour is equal to 1.7496710618404e-7
1 arcsec/square week in mil/square day is equal to 0.00010078105316201
1 arcsec/square week in mil/square week is equal to 0.0049382716049383
1 arcsec/square week in mil/square month is equal to 0.093367740614764
1 arcsec/square week in mil/square year is equal to 13.44
1 arcsec/square week in revolution/square second is equal to 2.1094606744796e-18
1 arcsec/square week in revolution/square millisecond is equal to 2.1094606744796e-24
1 arcsec/square week in revolution/square microsecond is equal to 2.1094606744796e-30
1 arcsec/square week in revolution/square nanosecond is equal to 2.1094606744796e-36
1 arcsec/square week in revolution/square minute is equal to 7.5940584281266e-15
1 arcsec/square week in revolution/square hour is equal to 2.7338610341256e-11
1 arcsec/square week in revolution/square day is equal to 1.5747039556563e-8
1 arcsec/square week in revolution/square week is equal to 7.7160493827161e-7
1 arcsec/square week in revolution/square month is equal to 0.000014588709471057
1 arcsec/square week in revolution/square year is equal to 0.0021007741638322
|
{"url":"https://hextobinary.com/unit/angularacc/from/arcsecpw2/to/arcsecpmicros2/1","timestamp":"2024-11-09T20:49:43Z","content_type":"text/html","content_length":"113808","record_id":"<urn:uuid:a07df338-665c-4394-82d9-c4af40ab0669>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00198.warc.gz"}
|
DLARFB - Linux Manuals (3)
DLARFB (3) - Linux Manuals
dlarfb.f -
subroutine dlarfb (SIDE, TRANS, DIRECT, STOREV, M, N, K, V, LDV, T, LDT, C, LDC, WORK, LDWORK)
DLARFB applies a block reflector or its transpose to a general rectangular matrix.
Function/Subroutine Documentation
subroutine dlarfb (characterSIDE, characterTRANS, characterDIRECT, characterSTOREV, integerM, integerN, integerK, double precision, dimension( ldv, * )V, integerLDV, double precision, dimension( ldt,
* )T, integerLDT, double precision, dimension( ldc, * )C, integerLDC, double precision, dimension( ldwork, * )WORK, integerLDWORK)
DLARFB applies a block reflector or its transpose to a general rectangular matrix.
DLARFB applies a real block reflector H or its transpose H**T to a
real m by n matrix C, from either the left or the right.
SIDE is CHARACTER*1
= 'L': apply H or H**T from the Left
= 'R': apply H or H**T from the Right
TRANS is CHARACTER*1
= 'N': apply H (No transpose)
= 'T': apply H**T (Transpose)
DIRECT is CHARACTER*1
Indicates how H is formed from a product of elementary
= 'F': H = H(1) H(2) . . . H(k) (Forward)
= 'B': H = H(k) . . . H(2) H(1) (Backward)
STOREV is CHARACTER*1
Indicates how the vectors which define the elementary
reflectors are stored:
= 'C': Columnwise
= 'R': Rowwise
M is INTEGER
The number of rows of the matrix C.
N is INTEGER
The number of columns of the matrix C.
K is INTEGER
The order of the matrix T (= the number of elementary
reflectors whose product defines the block reflector).
V is DOUBLE PRECISION array, dimension
(LDV,K) if STOREV = 'C'
(LDV,M) if STOREV = 'R' and SIDE = 'L'
(LDV,N) if STOREV = 'R' and SIDE = 'R'
The matrix V. See Further Details.
LDV is INTEGER
The leading dimension of the array V.
If STOREV = 'C' and SIDE = 'L', LDV >= max(1,M);
if STOREV = 'C' and SIDE = 'R', LDV >= max(1,N);
if STOREV = 'R', LDV >= K.
T is DOUBLE PRECISION array, dimension (LDT,K)
The triangular k by k matrix T in the representation of the
block reflector.
LDT is INTEGER
The leading dimension of the array T. LDT >= K.
C is DOUBLE PRECISION array, dimension (LDC,N)
On entry, the m by n matrix C.
On exit, C is overwritten by H*C or H**T*C or C*H or C*H**T.
LDC is INTEGER
The leading dimension of the array C. LDC >= max(1,M).
WORK is DOUBLE PRECISION array, dimension (LDWORK,K)
LDWORK is INTEGER
The leading dimension of the array WORK.
If SIDE = 'L', LDWORK >= max(1,N);
if SIDE = 'R', LDWORK >= max(1,M).
Univ. of Tennessee
Univ. of California Berkeley
Univ. of Colorado Denver
NAG Ltd.
September 2012
Further Details:
The shape of the matrix V and the storage of the vectors which define
the H(i) is best illustrated by the following example with n = 5 and
k = 3. The elements equal to 1 are not stored; the corresponding
array elements are modified but restored on exit. The rest of the
array is not used.
DIRECT = 'F' and STOREV = 'C': DIRECT = 'F' and STOREV = 'R':
V = ( 1 ) V = ( 1 v1 v1 v1 v1 )
( v1 1 ) ( 1 v2 v2 v2 )
( v1 v2 1 ) ( 1 v3 v3 )
( v1 v2 v3 )
( v1 v2 v3 )
DIRECT = 'B' and STOREV = 'C': DIRECT = 'B' and STOREV = 'R':
V = ( v1 v2 v3 ) V = ( v1 v1 1 )
( v1 v2 v3 ) ( v2 v2 v2 1 )
( 1 v2 v3 ) ( v3 v3 v3 v3 1 )
( 1 v3 )
( 1 )
Definition at line 195 of file dlarfb.f.
Generated automatically by Doxygen for LAPACK from the source code.
|
{"url":"https://www.systutorials.com/docs/linux/man/3-DLARFB/","timestamp":"2024-11-07T14:18:16Z","content_type":"text/html","content_length":"11546","record_id":"<urn:uuid:85c1bd9f-4d41-408e-8d2c-00abf69a7e71>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00793.warc.gz"}
|
emaining useful life
Estimate parameters of remaining useful life model using historical data
The fit function estimates the parameters of a remaining useful life (RUL) prediction model using historical data regarding the health of an ensemble of similar components, such as multiple machines
manufactured to the same specifications. Depending on the type of model, you specify the historical health data as a collection of lifespan measurements or degradation profiles. Once you estimate the
parameters of your model, you can then predict the remaining useful life of similar components using the predictRUL function.
Using fit, you can configure the parameters for the following types of estimation models:
• Degradation models
• Survival models
• Similarity models
For a basic example illustrating RUL prediction, see Update RUL Prediction as Data Arrives.
For general information on predicting remaining useful life using these models, see RUL Estimation Using RUL Estimator Models.
fit(mdl,data) fits the parameters of the remaining useful life model mdl using the historical data in data. This syntax applies only when data does not contain table or timetable data.
fit(mdl,data,lifeTimeVariable) fits the parameters of mdl using the time variable lifeTimeVariable and sets the LifeTimeVariable property of mdl. This syntax applies only when data contains:
• Nontabular data
• Tabular data, and mdl does not use data variables
fit(mdl,data,lifeTimeVariable,dataVariables,censorVariable) specifies the censor variable for a survival model and sets the CensorVariable property of mdl. The censor variable indicates which
life-time measurements in data are not end-of-life values. This syntax applies only when mdl is a survival model and data contains tabular data.
fit(mdl,data,lifeTimeVariable,dataVariables,censorVariable,encodedVariables) specifies the encoded variables for a covariate survival model and sets the EncodedVariables property of mdl. Encoded
variables are usually nonnumeric categorical features that fit converts to numeric vectors before fitting. This syntax applies only when mdl is a covariateSurvivalModel object and data contains
tabular data.
Train Linear Degradation Model
Load training data.
The training data is a cell array of column vectors. Each column vector is a degradation feature profile for a component.
Create a linear degradation model with default settings.
mdl = linearDegradationModel;
Train the degradation model using the training data.
Train Reliability Survival Model
Load training data.
This data is a column vector of duration objects representing battery discharge times.
Create a reliability survival model with default settings.
mdl = reliabilitySurvivalModel;
Train the survival model using the training data.
Train Hash Similarity Model Using Tabular Data
Load training data.
The training data is a cell array of tables. Each table is a degradation feature profile for a component. Each profile consists of life time measurements in the "Time" variable and corresponding
degradation feature measurements in the "Condition" variable.
Create a hash similarity model that uses the following values as hashed features:
mdl = hashSimilarityModel('Method',@(x) [mean(x),std(x),kurtosis(x),median(x)]);
Train the similarity model using the training data. Specify the names of the life time and data variables.
Predict RUL Using Covariate Survival Model
Load training data.
This data contains battery discharge times and related covariate information. The covariate variables are:
• Temperature
• Load
• Manufacturer
The manufacturer information is a categorical variable that must be encoded.
Create a covariate survival model, and train it using the training data.
mdl = covariateSurvivalModel('LifeTimeVariable',"DischargeTime",'LifeTimeUnit',"hours",...
Successful convergence: Norm of gradient less than OPTIONS.TolFun
Suppose you have a battery pack manufactured by maker B that has run for 30 hours. Create a test data table that contains the usage time, DischargeTime, and the measured ambient temperature,
TestAmbientTemperature, and current drawn, TestBatteryLoad.
TestBatteryLoad = 25;
TestAmbientTemperature = 60;
DischargeTime = hours(30);
TestData = timetable(TestAmbientTemperature,TestBatteryLoad,"B",'RowTimes',hours(30));
TestData.Properties.VariableNames = {'Temperature','Load','Manufacturer'};
TestData.Properties.DimensionNames{1} = 'DischargeTime';
Predict the RUL for the battery.
estRUL = predictRUL(mdl,TestData)
estRUL = duration
38.332 hr
Plot the survival function for the covariate data of the battery.
Input Arguments
mdl — Remaining useful life prediction model
degradation model | survival model | similarity model
Remaining useful life prediction model, specified as one of these models. fit updates the parameters of this model using the historical data in data.
For more information on the different model types and when to use them, see Models for Predicting Remaining Useful Life.
data — Historical data
column vector | array | table | timetable | cell array
Historical data regarding the health of an ensemble of similar components, such as their degradation profiles or life spans, specified as an array or table of component life times, or a cell array of
degradation profiles.
If your historical data is stored in an ensemble datastore object, you must first convert it to a table before estimating your model parameters. For more information, see Data Ensembles for Condition
Monitoring and Predictive Maintenance.
The format of data depends on the type of RUL model you specify in mdl.
Degradation Model
If mdl is a linearDegradationModel or exponentialDegradationModel, specify data as a cell array of component degradation profiles. Each element of the cell array contains the degradation feature
profile across the lifetime of a single component. There can be only one degradation feature for your model. You can specify data as a cell array of:
• Two-column arrays, where each row contains the usage time in the first column and the corresponding feature measurement in the second column. In this case, the usage time column must contain
numeric values; that is, it cannot use, for example, duration or timedate values.
• table objects. Select the variable from the table that contains the feature degradation profile using dataVariables, and select the usage time variable, if present, using lifeTimeVariable.
• timetable objects. Select the variable from the table that contains the feature degradation profile using dataVariables, and select the usage time variable using lifeTimeVariable.
Survival Model
For survival models, data contains the life span measurements for multiple components. Also, for covariate survival models, data contains corresponding time-independent covariates, such as the
component provider or working regimes. Specify data as one of the following:
• Column vector of life span measurements — This case applies only when mdl is a reliabilitySurvivalModel.
• Array — The first column contains the life span measurements, and the remaining columns contain the covariate values. This case applies only when mdl is a covariateSurvivalModel.
• table or timetable — In this case, select the variable from the table that contains the life span measurements using lifeTimeVariable. For covariate survival models, select the covariate
variables using dataVariables. For reliability survival models, fit ignores dataVariables.
By default, fit assumes that all life span measurements are end-of-life values. To indicate that a life span measurement is not an end-of-life value, use censoring. To do so, specify data as a table
or timetable that contains a censor variable. The censor variable is a binary variable that is 1 when the corresponding life span measurement is not an end-of-life value. Select the censor variable
using censorVariable.
Similarity Model
If mdl is a hashSimilarityModel, pairwiseSimilarityModel, or residualSimilarityModel, specify data as a cell array of degradation profiles. Each element of the cell array contains degradation feature
profiles across the lifetime a single component. For similarity models, you can specify multiple degradation features, where each feature is a health indicator for the component. You can specify data
as a cell array of:
• N-by-(M[i]+1) arrays, where N is the number of feature measurements (at different usage times) and M[i] is the number of features. The first column contains the usage times and the remaining
columns contain the corresponding measurements for degradation features.
• table objects. Select the variables from the table that contain the feature degradation profiles using dataVariables, and select the corresponding usage time variable, if present, using
• timetable objects. Select the variables from the table that contain the feature degradation profiles using dataVariables, and select the corresponding usage time variable using lifeTimeVariable.
fit assumes that all the degradation profiles represent run-to-failure data; that is, the data starts when the component is in a healthy state and end when the component is close to failure or
lifeTimeVariable — Life time variable
"" (default) | string
Life time variable, specified as a string. If data is a:
• table, then lifeTimeVariable must match one of the variable names in the table.
• timetable, then lifeTimeVariable one of the variable names in the table or the dimension name of the time variable , data.Properties.DimensionNames{1}.
table or timetable, then lifeTimeVariable must match one of the variable names in the table. If there is no life time variable in the table or if data is nontabular, then you can omit
lifeTimeVariable must be "" or a valid MATLAB^® variable name, and must not match any of the strings in dataVariables.
fit stores lifeTimeVariable in the LifeTimeVariable property of the model.
dataVariables — Feature data variables
"" (default) | string | string array
Feature data variables, specified as a string or string array. If data is a:
• Degradation model, then dataVariables must be a string
• Similarity model or covariate survival model, then dataVariables must be a string array
• Reliability survival model, then fit ignores dataVariables
If data is:
• A table or timetable, then the strings in dataVariables must match variable names in the table.
• Nontabular, then dataVariables must be "" or contain the same number of strings as there are data columns in data. The strings in dataVariables must be valid MATLAB variable names.
fit stores dataVariables in the DataVariables property of the model.
censorVariable — Censor variable
"" (default) | string
Censor variable for survival models, specified as a string. The censor variable is a binary variable that indicates which life time measurements in data are not end-of-life values. To use censoring,
data must be a table or timetable.
If you specify censorVariable, the string must match one of the variable names in data and must not match any of the strings in dataVariables or lifeTimeVariable.
fit stores censorVariable in the CensorVariable property of the model.
encodedVariables — Encoded variables
"" (default) | string | string array
Encoded variables for covariate survival models, specified as a string or string array. Encoded variables are usually nonnumeric categorical features that fit converts to numeric vectors before
fitting. You can also designate logical or numeric values that take values from a small set to be encoded.
The strings in encodedVariables must be a subset of the strings in dataVariables.
fit stores encodedVariables in the EncodedVariables property of the model.
Version History
Introduced in R2018a
|
{"url":"https://uk.mathworks.com/help/predmaint/ref/lineardegradationmodel.fit.html","timestamp":"2024-11-06T11:07:36Z","content_type":"text/html","content_length":"121570","record_id":"<urn:uuid:3afd721c-6ccd-4c91-a676-6934ac285f57>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00133.warc.gz"}
|
What is quantum in physics and computing?
What is a quantum?
A quantum (plural: quanta) is the smallest discrete unit of a phenomenon. For example, a quantum of light is a photon, and a quantum of electricity is an electron. Quantum comes from Latin, meaning
"an amount" or "how much?" If something is quantifiable, then it can be measured.
What is quantum in physics?
The modern use of quantum in physics was coined by Max Planck in 1901. He was trying to explain black-body radiation and how objects changed color after being heated. Instead of assuming that the
energy was emitted in a constant wave, he posed that the energy was emitted in discrete packets, or bundles. These were termed quanta of energy. This led to him discovering Planck's constant, which
is a fundamental universal value.
Planck's constant is symbolized as h and relates the energy in one photon to the frequency of the photon. Further units were derived from Planck's constant: Planck's distance and Planck's time, which
describe the shortest meaningful unit of distance and the shortest meaningful unit of time. For anything smaller, Werner Heisenberg's uncertainty principle renders the measurements meaningless.
The discovery of quanta and the quantum nature of subatomic particles led to a revolution in physics. This became quantum theory, or quantum mechanics. Quantum theory describes the behavior of
microscopic particles; Albert Einstein's theory of relativity describes the behavior of macroscopic things. These two theories are the underpinning of modern physics. Unfortunately, they deal with
different domains, leaving physicists to seek a so-called unified theory of everything.
The double-slit experiment showed that light behaves like both a wave and a particle.
Subatomic particles behave in ways that are counterintuitive. A single photon quantum of light can simultaneously go through two slits in a piece of material, as shown in the double-slit experiment.
Schrödinger's cat is a famous thought experiment that describes a quantum particle in superposition, or the state where the probability waveform has not collapsed. Particles can also become quantumly
entangled, causing them to interact instantly over a distance.
What is quantum in computing?
Quantum computing uses the nature of subatomic particles to perform calculations instead of using electrical signals as in classical computing. Quantum computers use qubits instead of binary bits. By
programming the initial conditions of the qubit, quantum computing can solve a problem when the superposition state collapses. The forefront of quantum computer research is in linking greater numbers
of qubits together to be able to solve larger and more complex problems.
Quantum computing uses the nature of subatomic particles to execute calculations as an alternative to the electrical signals used in classical computing.
Quantum computers can perform certain calculations much faster than classical computers. To find an answer to a problem, classical computers need to go through each option one at a time. It can take
a long time to go through all the options for some types of problems. Quantum computers do not need to try each option; instead, they resolve the answer almost instantly.
Some problems that quantum computers can solve quicker than classical computers are factoring for prime numbers and the traveling salesman problem. Once quantum computers demonstrate the ability to
solve these problems faster than classical computers, quantum supremacy will be achieved.
Quantum computing is still an emerging technology.
Prime factorization is an important function for the modern cryptography systems that secure digital communication. Experts currently expect that quantum computers will render existing cryptographic
systems insecure and obsolete.
The differences between classical cryptography and quantum cryptography.
Efforts to develop post-quantum cryptography are underway to create algorithms that are resistant to quantum attacks, but can still be used by classical computers. Eventually, fully quantum
cryptography will be available for quantum computers.
See also: Table of Physical Units and Table of physical constants
This was last updated in June 2022
Continue Reading About quantum
|
{"url":"https://www.techtarget.com/whatis/definition/quantum","timestamp":"2024-11-02T18:24:20Z","content_type":"text/html","content_length":"337988","record_id":"<urn:uuid:ef7be37e-c84a-4bab-be9b-25218f0e3a00>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00354.warc.gz"}
|
Python Number Guess Game Project Made Easy - EasyCodeBook.com
Python Number Guess Game Project Made Easy
Python Number Guess Game Project Made Easy – In this Python Project for beginners we will explain the code and logic of the popular number guessing game program in Python programming language.
How Python Number Guessing Project Works
Import Python Modules :- random and math
First of all we will include some Python modules which are necessary
to write a number guess program in Python.
import random
import math
Get input from the user
In this step we will get input that is the range of numbers to guess from.
# Get Inputs from user for lower bound and upper bound limits
lower_bound = int(input("Enter Lower bound of range(Integer number): "))
# Taking upper_bound of range
upper_bound = int(input("Enter Upper bound of range(Integer number):- "))
Code for Random Number Generation – Python Number Guess Game
# code to generate random number between
# the lower bound and upper bound
guess_num = random.randint(lower_bound, upper_bound)
Python instructions to get Maximum Try using log function
Here we will calculate the number of tries to guess the number. The log base 2 function
will provide us the total number of attempts to guess the number in game.
max_try = round(math.log(upper_bound - lower_bound + 1, 2))
print("\n\t Kindly note that, You've only ",
" chances to guess the number!\n")
Define a Count variable and initialize it to zero
count variable will store how many number of attempts acomplished by the user so far.
# count of user guess.
count = 0
Start the loop from 1 to maximum number of attempts
In this while loop we will get the guess from the user. We will compare this guess with
the actual number. An if statement will check if the guess is equal to the actual number
, less than or greater than the actual number. This will cause to display a message to the user
accordingly. For example, if the actual number to guess is 5 and user enters 7, we will display
a message “Your guess is too high”.capitalize So that the user will input a number less than 7 next time.
while count < max_try: count += 1 # get the guessing number from the user guess = int(input("Please Guess a number:- ")) # check if guess number is correct or not if guess_num == guess: print("Success :- Congratulations you guessed the number in ", count, " try") # break the loop you are done break # if guess is greater than actual number elif guess_num > guess:
print("Sorry Dear:- You guessed too small! Try again:-")
# if guess is greater than actual number
elif guess_num < guess: print("Sorry Dear:-You Guessed too high! Try again:-") # If Guessing is more than required guesses, # show a message like:- No try left. if count >= max_try:
print("\nSorry you have no Try left")
print("\nThe number was %d" % guess_num)
print("\tDear:- Better Luck Next time!")
Python Number Guess Game Project – Sample Output
Enter Lower bound of range(Integer number): 1
Enter Upper bound of range(Integer number):- 10
Kindly note that, You've only 3 chances to guess the number!
Please Guess a number:- 5
Sorry Dear:- You guessed too small! Try again:-
Please Guess a number:- 7
Sorry Dear:- You guessed too small! Try again:-
Please Guess a number:- 9
Sorry Dear:- You guessed too small! Try again:-
Sorry you have no Try left
The number was 10
Dear:- Better Luck Next time!
Another Sample Run
Enter Lower bound of range(Integer number): 1
Enter Upper bound of range(Integer number):- 5
Kindly note that, You've only 2 chances to guess the number!
Please Guess a number:- 3
Sorry Dear:- You guessed too small! Try again:-
Please Guess a number:- 4
Success :- Congratulations you guessed the number in 2 try
Code for Python Number Guess Game Project
import random
import math
# Get Inputs from user for lower bound and upper bound limits
lower_bound = int(input("Enter Lower bound of range(Integer number): "))
# Taking Inputs
upper_bound = int(input("Enter Upper bound of range(Integer number):- "))
# code to generate random number between
# the lower bound and upper bound
guess_num = random.randint(lower_bound, upper_bound)
max_try = round(math.log(upper_bound - lower_bound + 1, 2))
print("\n\t Kindly note that, You've only ",
" chances to guess the number!\n")
# count of user guess.
count = 0
# for calculation of minimum number of
# guesses depends upon range
while count < max_try:
count += 1
# get the guessing number from the user
guess = int(input("Please Guess a number:- "))
# check if guess number is correct or not
if guess_num == guess:
print("Success :- Congratulations you guessed the number in ",
count, " try")
# break the loop you are done
# if guess is greater than actual number
elif guess_num > guess:
print("Sorry Dear:- You guessed too small! Try again:-")
# if guess is greater than actual number
elif guess_num < guess:
print("Sorry Dear:-You Guessed too high! Try again:-")
# If Guessing is more than required guesses,
# show a message like:- No try left.
if count >= max_try:
print("\nSorry you have no Try left")
print("\nThe number was %d" % guess_num)
print("\tDear:- Better Luck Next time!")
|
{"url":"https://easycodebook.com/2021/05/python-number-guess-game-project-made-easy/","timestamp":"2024-11-09T03:36:10Z","content_type":"text/html","content_length":"120075","record_id":"<urn:uuid:c718e61a-0f20-4a5a-9d05-b80bf8a1b87c>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00295.warc.gz"}
|
9.11 Limitations to Keep in Mind
Course Outline
• segmentGetting Started (Don't Skip This Part)
• segmentStatistics and Data Science: A Modeling Approach
• segmentPART I: EXPLORING VARIATION
• segmentChapter 1 - Welcome to Statistics: A Modeling Approach
• segmentChapter 2 - Understanding Data
• segmentChapter 3 - Examining Distributions
• segmentChapter 4 - Explaining Variation
• segmentPART II: MODELING VARIATION
• segmentChapter 5 - A Simple Model
• segmentChapter 6 - Quantifying Error
• segmentChapter 7 - Adding an Explanatory Variable to the Model
• segmentChapter 8 - Digging Deeper into Group Models
• segmentChapter 9 - Models with a Quantitative Explanatory Variable
• segmentPART III: EVALUATING MODELS
• segmentChapter 10 - The Logic of Inference
• segmentChapter 11 - Model Comparison with F
• segmentChapter 12 - Parameter Estimation and Confidence Intervals
• segmentFinishing Up (Don't Skip This Part!)
• segmentResources
list High School / Advanced Statistics and Data Science I (ABC)
9.11 Limitations to Keep in Mind
Regression and correlation are powerful tools for modeling relationships between variables. But each must be used thoughtfully. It is important always to interpret the findings in context, and use
everything else you know about the context to help you draw reasonable conclusions based on the data.
Correlation Does Not Imply Causation
Most important to bear in mind is that correlation does not imply causation, something you no doubt have heard before. Just the fact that an explanatory and outcome variable are correlated does not
necessarily mean we understand what causes this variation. And in this sense, regression is no different from correlation.
There are many examples of this. Children’s shoe size is correlated with their scores on an achievement test, but neither variable causes the other. An increase in age of the child, a confounding
variable, causes both shoe size and achievement to go up.
Also keep in mind that a relationship can be bidirectional, meaning each variable has a causal effect on the other. Reading skills and writing skills tend to be highly correlated. It might be that
reading a lot causes writing to improve. But it’s also plausible that practicing writing might help students improve their reading skills.
As in all things, we should interpret statistics like the correlation coefficient and regression slope with common sense. The tendency to wear skimpy clothing is correlated with higher temperatures.
In this case the relationship is real, but the causal direction must be sensibly interpreted. Hiking up the temperature might indeed cause people to shed their clothing. But taking off clothes is not
going to cause the temperature to go up.
Correlation. (n.d.). Retrieved from https://xkcd.com/552/
Thumb length measured in millimeters is going to be perfectly correlated with thumb length measured in centimeters. The points will be perfectly laid out on a straight line. But does spotting this
relationship get us any closer to understanding the DGP that produces variation in thumb length? Of course not.
Disambiguating causal relationships and controlling for possible confounds is not achievable through statistical analysis alone. Statistics can help, and correlation can certainly suggest that there
might be causation there. But research design is a necessary tool. Random assignment of equivalent objects to conditions that do and don’t receive some treatment is often required to figure out
whether a particular relationship is causal or not.
Are All Lines Straight?
Another thing to point out is that the models we have considered in this chapter are linear models. We fit a straight line to a scatter of points, and then look to see how well it fits by measuring
residuals around the regression line.
But sometimes a straight line is just not going to be a very good model for the relationship between two variables.
Take this graph from a study of the relationship of body weight to risk of death (from McGee DL, 2005, Ann Epidemiol 15:87 and Adams KF, 2006, N Engl J Med 355:763). Being underweight and being
overweight both increase the risk of death, whereas being in the middle reduces that risk.
If you ignored the shape of the relationship and overlaid a regression line, the line would probably be close to flat, indicating no relationship. But if you did that you would be missing an
important systematic curvilinear relationship.
Before fitting a linear regression model, look at the relationship and see if a linear function would be a sensible model. If it isn’t, think about a different model. Mathematicians have lots of
models to offer beyond just the simple straight line.
Do Regression Lines Go On Forever?
Source of picture: (http://smbc-comics.com/comic/2011-08-05)
Finally, there is the problem of extrapolation. We have already pointed out from our regression of Thumb on Height that, according to the model, someone who is 0 inches tall would have a thumb length
of -3.33 millimeters. Obviously, the regression model only works within a certain range, and it is risky to extend that range beyond where you have substantial amounts of data.
In general, common sense and a careful understanding of research methods must be applied to the interpretation of any statistical model.
End of Chapter Survey
Mid-Course Survey #2
You’re two-thirds through the book! Please tell us about your experience so far. (Estimated time: 5 minutes)
|
{"url":"https://staging.coursekata.org/preview/book/a50b96c3-f72b-4249-bb8d-70e65a37a93e/lesson/12/10","timestamp":"2024-11-14T21:49:13Z","content_type":"text/html","content_length":"95451","record_id":"<urn:uuid:efe056f1-8f38-41a3-8835-a1e69e7145c7>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00701.warc.gz"}
|
Subject:Mathematical analysis - Wikibooks, open books for an open world
Mathematical analysis
Books in this subject area deal with
mathematical analysis
: the branch of mathematics most explicitly concerned with the notion of a limit, whether the limit of a sequence or the limit of a function. It also includes the theories of differentiation,
integration and measure, infinite series, and analytic functions. These theories are often studied in the context of real numbers, complex numbers, and real and complex functions.
Completed books
In subsections: Subsections
Books nearing completion
In subsections:
Half-finished books
In subsections:
Partly developed books
Featured Books
In subsections:
Freshly started books
In subsections:
In subsections:
Unknown completion
In subsections:
|
{"url":"https://en.m.wikibooks.org/wiki/Subject:Mathematical_analysis","timestamp":"2024-11-11T09:49:02Z","content_type":"text/html","content_length":"41333","record_id":"<urn:uuid:7ddcbdd7-57ef-4ac6-8703-b87b17e0da01>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00811.warc.gz"}
|
What is quantum in physics and computing?
What is a quantum?
A quantum (plural: quanta) is the smallest discrete unit of a phenomenon. For example, a quantum of light is a photon, and a quantum of electricity is an electron. Quantum comes from Latin, meaning
"an amount" or "how much?" If something is quantifiable, then it can be measured.
What is quantum in physics?
The modern use of quantum in physics was coined by Max Planck in 1901. He was trying to explain black-body radiation and how objects changed color after being heated. Instead of assuming that the
energy was emitted in a constant wave, he posed that the energy was emitted in discrete packets, or bundles. These were termed quanta of energy. This led to him discovering Planck's constant, which
is a fundamental universal value.
Planck's constant is symbolized as h and relates the energy in one photon to the frequency of the photon. Further units were derived from Planck's constant: Planck's distance and Planck's time, which
describe the shortest meaningful unit of distance and the shortest meaningful unit of time. For anything smaller, Werner Heisenberg's uncertainty principle renders the measurements meaningless.
The discovery of quanta and the quantum nature of subatomic particles led to a revolution in physics. This became quantum theory, or quantum mechanics. Quantum theory describes the behavior of
microscopic particles; Albert Einstein's theory of relativity describes the behavior of macroscopic things. These two theories are the underpinning of modern physics. Unfortunately, they deal with
different domains, leaving physicists to seek a so-called unified theory of everything.
The double-slit experiment showed that light behaves like both a wave and a particle.
Subatomic particles behave in ways that are counterintuitive. A single photon quantum of light can simultaneously go through two slits in a piece of material, as shown in the double-slit experiment.
Schrödinger's cat is a famous thought experiment that describes a quantum particle in superposition, or the state where the probability waveform has not collapsed. Particles can also become quantumly
entangled, causing them to interact instantly over a distance.
What is quantum in computing?
Quantum computing uses the nature of subatomic particles to perform calculations instead of using electrical signals as in classical computing. Quantum computers use qubits instead of binary bits. By
programming the initial conditions of the qubit, quantum computing can solve a problem when the superposition state collapses. The forefront of quantum computer research is in linking greater numbers
of qubits together to be able to solve larger and more complex problems.
Quantum computing uses the nature of subatomic particles to execute calculations as an alternative to the electrical signals used in classical computing.
Quantum computers can perform certain calculations much faster than classical computers. To find an answer to a problem, classical computers need to go through each option one at a time. It can take
a long time to go through all the options for some types of problems. Quantum computers do not need to try each option; instead, they resolve the answer almost instantly.
Some problems that quantum computers can solve quicker than classical computers are factoring for prime numbers and the traveling salesman problem. Once quantum computers demonstrate the ability to
solve these problems faster than classical computers, quantum supremacy will be achieved.
Quantum computing is still an emerging technology.
Prime factorization is an important function for the modern cryptography systems that secure digital communication. Experts currently expect that quantum computers will render existing cryptographic
systems insecure and obsolete.
The differences between classical cryptography and quantum cryptography.
Efforts to develop post-quantum cryptography are underway to create algorithms that are resistant to quantum attacks, but can still be used by classical computers. Eventually, fully quantum
cryptography will be available for quantum computers.
See also: Table of Physical Units and Table of physical constants
This was last updated in June 2022
Continue Reading About quantum
|
{"url":"https://www.techtarget.com/whatis/definition/quantum","timestamp":"2024-11-02T18:24:20Z","content_type":"text/html","content_length":"337988","record_id":"<urn:uuid:ef7be37e-c84a-4bab-be9b-25218f0e3a00>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00354.warc.gz"}
|
Posted: 2023-09-19
Last modified: 2023-09-19 @ 8ded1b5
Linear Algebra with Const Generics
I was recently doing a project with some friends where we extracted the pixel data from RAW images. One step in extracting a picture from a RAW image is demosaicing, which boils down to doing some
kind of convolution over the pixel intensities in the image with a convolution kernel defined by the color filter array. That’s when I had the idea to just tinker with creating a linear algebra/image
processing library in Rust that utilizes const generics in order to encode some properties of matrices in the type system. This post chronicles the creation of one such library. It is very bare
bones, but I find it interesting what you can encode in the type system.
This is not useful as a general linear algebra/image processing library, since const generics are compile-time. Thus creating matrices of arbitrary dimensions at runtime can’t be done with const
generics. This is purely an academic exercise in what you can do with them. But it was fun nonetheless!
Table of Contents
Earlier (better) work
There are a couple of crates that do linear algebra, multidimensional arrays, and computer graphics. Here’s a small selection:
None of these utilize const generics the way I’ll be doing, and for good reason. We will get to that in due time, though.
In the beginning, there was the matrix
I will be restricting this to 2D matrices ^1, as the original domain where I got the idea was relating to 2D photographs. There are multiple ways we could define a matrix with M rows and N columns,
encoding the dimensions with const generics. There are a couple of data types that you could base a Matrix<T> on. They all have their pros and cons.
Option 1: Vec<Vec<T>>
A nested Vec is very flexible and easy to reason about when it comes to indexing, as there is no need to manually calculate a linear index from a two-dimensional one, and its size is completely
dynamic. However, given that it is a double Vec, there is also potentially a double indirection happening, having to address memory into the outer Vec to find the inner one and then index into that
one. So I opted not to use this representation.
Option 2: [[T; N]; M]
A two-dimensional array of T has the same advantage of intuitive indexing as a nested Vec, but without the disadvantage of a double indirection, since its dimensions are known statically. This means
the compiler will take care of the calculation of a linear index for us! However, an array lives on the stack and perhaps we want to work with some really large matrices in certain applications. So I
opted not to use this representation either.
Option 3: [[T; M]; N]
Wait, did I accidentally put option 2 in twice? No, look again! The dimensions are switched. Option 2 contained M arrays of length N, whereas this one has N arrays of length M. This is known as
row-major and column-major order. Option 2 stored the elements in memory sequentially, row by row. This option instead stores them column by column. The choice between them is essentially arbitrary,
but it is important to know. I elected to store my matrices in row-major order, so this option is not that relevant but very important to mention. There will be a cool little trick regarding
row-major vs. column-major later.
Option 4: [T; M * N]
Ah, a linear array. Pretty simple, though you have to calculate the linear index manually to use it. That’s not too bad though. It’s really simple to do operations on all elements at once since there
is no nesting to think about and row-major vs. column-major order is entirely arbitrary. Both options can be represented by this type and you don’t need to go through std::mem::transmute to get from
one to the other. Wonderful!
However, you cannot do this with const generics. They can only be used as standalone arguments, not in const operations. This just leaves us with…
Option 5: Vec<T>
This has the same properties of option 4, but with the huge advantage that it is actually valid Rust! Also, since it’s a Vec it lives on the heap and our Matrix<T> type can own that memory. No large
amounts of data on the stack. And the information about the matrix dimensions is entirely on the Matrix type. Our underlying type is completely agnostic to the size and layout of the Matrix type,
which I think is pretty nice. No double bookkeeping. What relates to the matrix, is defined in the type of the matrix. All of these reasons (and the row-major vs. column-major trick I will get to
later) made me decide on this as the underlying type of the matrix:
#[derive(Debug, Clone)]
pub struct Matrix<
const ROWS: usize,
const COLS: usize
You can probably already start to see another reason why other libraries do not use const generics this way. That is one long type definition. But this is just for fun, so we’ll roll with it.
Besides, it’s going to get so, so much worse :)
impl-blocks from Hell
Let’s implement some functionality for our shiny new Matrix<T>. How about being able to construct a matrix from a two-dimensional array? That would enable us to do things like
let matrix = Matrix::new([
[1, 2],
[3, 4],
Pretty nice and readable! Now let’s see what the implementation of that would look like:
impl<T: Element, const ROWS: usize, const COLS: usize>
Matrix<T, ROWS, COLS> {
pub fn new(array: [[T; COLS]; ROWS]) -> Self {
Oh. Oh no, this is going to get out of hand fast. And that’s exactly what’s so fun about it. :)
Element is just a combination trait of Copy + Clone + PartialEq that I created and implemented for i64 and f64. It’s going to show up in every single impl block.
Continuing on with this syntax-driven development, one of the simplest operations I would like to be able to do is access elements of the matrix. Say I have a matrix m. I would like to get the
element at row 0, column 1 by doing something like this:
fn index() {
let m = Matrix::new([
[1, 2],
[3, 4],
let element = m[(0, 1)];
assert_eq!(element, 2);
This means we’ll have to implement the Index<(usize, usize)> trait for our Matrix<T>. However, it’s not a Matrix<T>, is it? It’s a Matrix<T, ROWS, COLS>. Oh no, time for another impl block!
impl<T: Element, const ROWS: usize, const COLS: usize>
Index<(usize, usize)> for Matrix<T, ROWS, COLS> {
type Output = T;
fn index(&self, (row, col): (usize, usize)) -> &T {
assert!(row < ROWS && col < COLS);
&self.0[row * COLS + col]
Okay, the implementation itself is straight-forward. But are all the impl<...> lines going to look like that? No, no. They’re going to get worse ;)
IndexMut<(usize, usize)> is essentially identical, but pretty important to implement as well if we want to be able to change our matrices.
The assert! is pretty ugly, but I didn’t want to introduce even more syntax (there will be plenty of that) by making the indexing return an Option<&T>. It also makes operations on the numbers simpler
by not having to check and unwrap them all the time, and since this is not intended to be a good, production-ready library I will just do the easy thing here.
Transposing a matrix is a pretty common operation. We want to be able to reflect it along its diagonal, swapping the rows and columns. So do we have to reach into our underlying Vec<T> and shuffle it
around? Remember that it is stored in row-major order, so if the columns are now considered the rows, we have to swap all the elements around, right?
Here’s the sneaky little row-major vs. column-major trick: Don’t touch the underlying memory at all. Just add more information into the type system. That’s right, we’re adding another const generic
to our matrix type, baby!
pub struct Matrix<
const ROWS: usize,
const COLS: usize
const TRANSPOSED: bool,
This means the two previous impl blocks have to be updated. For the one containing the constructor, I will simply restrict it to always have this new bool set to false, meaning all matrices start out
non-transposed, in row-major order:
impl<T: Element, const ROWS: usize, const COLS: usize>
- Matrix<T, ROWS, COLS> {
+ Matrix<T, ROWS, COLS, false> {
pub fn new(array: [[T; COLS]; ROWS]) -> Self {
For the Index<(usize, usize)> implementation, there are two possible ways to go about it. Either you just add the same false as for the constructor above and implement it again for true (we still
want to be able to index transposed matrices after all), or you put the checking of this bool inside the index() function. Either way to go about it is fine. I’m just going to arbitrarily put the
check inside to keep the number of impl blocks down.
- impl<T: Element, const ROWS: usize, const COLS: usize>
- Index<(usize, usize)> for Matrix<T, ROWS, COLS> {
+ impl<T: Element, const ROWS: usize, const COLS: usize, const TRANSPOSED: bool>
+ Index<(usize, usize)> for Matrix<T, ROWS, COLS, TRANSPOSED> {
type Output = T;
fn index(&self, (row, col): (usize, usize)) -> &T {
assert!(row < ROWS && col < COLS);
+ if TRANSPOSED {
+ &self.0[col * ROWS + row]
+ } else {
&self.0[row * COLS + col]
+ }
Pretty simple, just flip col/COLS with row/ROWS ^2! Remember, the underlying Vec is always stored in row-major order, so we’re just pretending to actually have it be transposed to column-major.
So where’s the trick? How do we actually do this pretend transposition? It’s really simple. It’s just this pair of impl blocks ^3:
impl<T: Element, const ROWS: usize, const COLS: usize>
Matrix<T, ROWS, COLS, false> {
pub fn transpose(self) -> Matrix<T, COLS, ROWS, true> {
impl<T: Element, const ROWS: usize, const COLS: usize>
Matrix<T, ROWS, COLS, true> {
pub fn transpose(self) -> Matrix<T, COLS, ROWS, false> {
One for each value of TRANSPOSED. They just return a Matrix with ROWS and COLS swapped, and TRANSPOSED inverted. It takes ownership of self, transferring ownership of the memory to the transposed
matrix. No copying/cloning at all! The memory is not touched. The underlying Vec points to the exact same memory!
fn transpose_moves() {
let m = Matrix::new([[1, 2, 3, 4]]);
assert_eq!(m.0, [1, 2, 3, 4]);
assert_eq!(m[(0, 1)], 2);
let initial_addr = m.0.as_ptr();
let m = m.transpose();
assert_eq!(m.0, [1, 2, 3, 4]);
assert_eq!(m[(1, 0)], 2);
let transposed_addr = m.0.as_ptr();
assert_eq!(initial_addr, transposed_addr);
The test passes, meaning indexing works, and the memory is left where it is with only ownership being transferred! Success! We can transpose matrices of any size in constant time.
Another thing I would like to be able to do is check whether two matrices are equal^4. That’s easy, right? Just check if the underlying Vec:s are equal!
Not so fast.
Remember the whole thing about row-major vs. column-major? And the transposition we just implemented? Yeah, since the transposition is pretend, and the underlying memory is not touched, two matrices
can have the same memory contents but be unequal. Moreover, they can have different dimensions, so we shouldn’t even pass the type-check in that case. Expressed in a test, the following should pass:
fn equality() {
let m1 = Matrix::new([
[1, 2],
[3, 4],
let m2 = m1.clone();
let m3 = m1.clone().transpose();
assert_eq!(m1, m2);
assert_ne!(m1, m3); // asymmetric transpose
assert_eq!(m1.0, m2.0);
assert_eq!(m1.0, m3.0);
let m1 = Matrix::new([
[1, 2],
[2, 1],
let m2 = m1.clone().transpose();
assert_eq!(m1, m2); // symmetric transpose
assert_eq!(m1.0, m2.0);
Equality is impossible for matrices of different dimensions, so this test sticks to square matrices to prove that the asymmetric transposition equality check should fail, even though all Vec:s are
identical. Now, we need to implement PartialEq for our matrix type. Oh no, I hear an impl block coming…
T: Element,
const ROWS: usize,
const COLS: usize,
const LHS_T: bool,
const RHS_T: bool,
PartialEq<Matrix<T, ROWS, COLS, RHS_T>>
for Matrix<T, ROWS, COLS, LHS_T> {
fn eq(&self, other: &Matrix<T, ROWS, COLS, RHS_T>) -> bool {
for row in 0..ROWS {
for col in 0..COLS {
if self[(row, col)] != other[(row, col)] {
return false;
That’s right, this impl-block has yet another const generic. Either one of the left and right hand sides of the equals sign could have been transposed, and we need to make sure this is implemented
for every possible combination. So the impl needs another bool in this case. Will it ever end ^5? Other than that, the implementation itself is very simple and utilizes the fact that we already
implemented Index.
Element-wise addition
Now, let’s actually start doing things with our matrices! How about being able to add them? This simple test should suffice:
fn addition() {
let a = Matrix::new([
[1, 2],
[3, 4],
let b = Matrix::new([
[1, 1],
[1, 1],
let c = Matrix::new([
[2, 3],
[4, 5],
assert_eq!(a + b, c);
Okay, now let’s brace for the impl…
T: Element + Add<Output = T>,
const ROWS: usize,
const COLS: usize,
const LHS_T: bool,
const RHS_T: bool,
Add<Matrix<T, ROWS, COLS, RHS_T>>
for Matrix<T, ROWS, COLS, LHS_T> {
type Output = Matrix<T, ROWS, COLS, LHS_T>;
fn add(mut self, other: Matrix<T, ROWS, COLS, RHS_T>) -> Self::Output {
for row in 0..ROWS {
for col in 0..COLS {
self[(row, col)] = self[(row, col)] + rhs[(row, col)];
Oh. That wasn’t so bad. There’s a new bound on T, it has to implement the Add trait, outputting itself for the addition of the elements to work. Other than that though, the implementation looks
really similar to the one for PartialEq. And in this one we’re using the implementation of IndexMut that I left out for brevity to assign into the left hand matrix. I only implemented the trait for
taking ownership of the added matrices, but it’s straightforward to implement for taking them by reference as well.
Well, since that wasn’t so bad, I guess we’ve been through the worst now! The trait impl:s surely won’t get much more complex than this! :)
Hey, did you know that matrix multiplication, unlike addition, doesn’t require the dimensions to be identical? That’s right, the impl for Mul is going to be more complex! But first, a test.
Multiplication requires the left-hand side to have as many columns as the right-hand side has rows. This means that a matrix can always be multiplied with its transpose, which makes for a pretty
simple to write test:
fn multiplication() {
let a = Matrix::new([
[1, 2, 3],
[4, 5, 6],
let b = a.clone().transpose();
let c = Matrix::new([
[14, 32],
[32, 77],
assert_eq!(a * b, c);
That should be good enough! Now we just need to impl Mul. And this time I will just go through the trait bounds and const generic on their own before the implementation, because it will be clearer
that way and I want to talk about some weird choices in the implementation. Here is the impl<...> part:
T: Element + Mul<Output = T> + Add<Output = T>,
const ROWS: usize,
const COLS: usize,
const MATCHING_DIM: usize,
const LHS_T: bool,
const RHS_T: bool,
Phew. Now, it’s the most complex one in this post, but it’s not that bad. We can see that T has a new Mul trait bound, but it also has the Add trait bound. This is because matrix multiplication is
defined by a bunch of multiplications and additions, so both these bounds must be present. And then we have yet another const generic. Last one, I promise. As I said before, the left-hand side of the
multiplication must have as many columns as the right-hand side has rows. This is what MATCHING_DIM will represent. The remaining two dimensions of the left and right side are arbitrary, but will
become the dimensions of the output matrix. So multiplying a 2x3 matrix with a 3x4 matrix will yield a 2x4 matrix.
Now, before I show you the implementation, I have to warn you that it’s a little strange. It looks like this purely because I’m a nerd and liked the idea of it. The implementation makes the following
(non-)assumptions of the underlying type T in our matrix:
1. T might not be numerical.
2. T might not have a multiplicative or additive identity. ^6
3. T might not impl Default.
These are implicit in the trait bounds I have chosen for T, but I figured I would make them explicit.
Now, on to the implementation (impl line from above excluded):
Mul<Matrix<T, MATCHING_DIM, COLS, RHS_T>>
for Matrix<T, ROWS, MATCHING_DIM, LHS_T> {
type Output = Matrix<T, ROWS, COLS, LHS_T>;
fn mul(self, rhs: Matrix<T, MATCHING_DIM, COLS, RHS_T>) -> Self::Output {
let mut vec = Vec::with_capacity(ROWS * COLS);
unsafe {
for row in 0..ROWS {
for col in 0..COLS {
let mut products = Vec::with_capacity(MATCHING_DIM);
for i in 0..MATCHING_DIM {
self[(row, i)] * rhs[(i, col)]
vec[row * COLS + col] = products
.fold(products[0], |acc, x| acc + *x);
Whoa, is that an unsafe block?? Yes, but don’t worry. It’s fine. Really, I promise ^7. Mathematically, the resulting matrix is guaranteed to have ROWS * COLS elements in it. And to avoid having a
bunch of reallocations, I initialize the matrix to have this capacity. But if I don’t set the length, I can’t assign to indices in the Vec. Rust will panic! if I try. So into unsafe land we go, and
we pinky-promise Rust that the Vec does indeed have the same length as it has capacity. So indexing into it and assigning things there is fine. Of course, in reality this is uninitialized memory. We
are just manually initializing it.
“Why not just call Vec::resize instead?”, you may ask. To which I reply “And fill it with what?”. Vec::resize takes an element to pad the Vec with to the desired length. And remember point 3 from
above. T might not impl Default, so what should we put there? Better to just leave it uninitialized since we’re immediately going to initialize it anyway.
The rest of the implementation is relatively straighforward. Each element in the output matrix is a sum of products from the input matrices, so the products vector is self-explanatory. But why am I
summing products like that? Why not just call .sum() or .fold() directly on the iterator? Well, that all comes back to point 2 from above. T might not have a multiplicative or additive identity. So
we skip the first element, use that as the initial value for the accumulator in .fold(), and do a folding addition like normal. This implementation requires the dimensions to be non-zero, but we can
just skip that check for brevity ^8.
If I wanted to use .sum(), I could constrain T to require an additive identity by adding another trait bound requiring T to implement std::iter::Sum. If we also require std::iter::Product, we could
force a multiplicative identity to exist, and that could simplify the innermost loop of the implementation. But again, I wanted to keep this general for fun.
And there we have it! A very simple linear algebra library. Or rather, a simple matrix addition and multiplication library. But other operations are now pretty trivial to implement and build on top
of this foundation.
This is not how you should do things. All the const generics and trait bounds make it really hard to read and implement anything, and trying to stay too general will result in strange
implementations. But boy, is it fun!
The real libraries do not use const generics and implement this stuff in much smarter ways in order to actually be useful at runtime. This was all just a fun excursion into const generic land in
order to learn and see how far I could take it. If nothing else, it was a fun way to spend an evening.
Use cases?
Very few. As mentioned, all matrix dimensions are created and checked at compile-time, so any real world use for a library like this would be pretty limited.
Further work
Absolutely not. But if one wanted to expand on this, implementing subtraction, inverses, convolutions, views, scalar multiplication, and all manner of fun things should in theory be possible. Don’t
blame me for any loss of sanity if you try, though. Keeping all these trait bounds and const generics in check takes a good bit of care and energy.
|
{"url":"https://markau.dev/posts/linear-algebra-with-const-generics/","timestamp":"2024-11-06T15:14:32Z","content_type":"text/html","content_length":"56645","record_id":"<urn:uuid:ae3b1b54-cbac-4069-a7aa-0ac7ec045355>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00280.warc.gz"}
|
Seminario de Computación y Complejidad
Próxima Charla
Miércoles 20 de abril de 2011 - 15.40 hs
Michael Shub
Universidad de Buenos Aires
Complexity and Bezout's Theorem: Survey of Some Results and Some Open Problems
I will review some fundamental notions and recent progress by Beltran-Pardo and Buergisser Cucker on Smale's 17th problem. Can one find a root of a polynomial system approximately with average
polynomial time and a uniform algorithm., as well as relations with complexity theory and some open problems.
Charlas anteriores
Miércoles 6 de abril de 2011 - 15.40 hs
Alicia Dickenstein
Universidad de Buenos Aires
Polynomials and mass action kinetics chemical reaction networks
This talk will be a gentle introduction to the systems of non linear ODE´s modeling mass action kinetics chemical reaction networks. These systems have interesting combinatorial, algebro-geometric
and dynamical properties.
Miércoles 30 de marzo de 2011 - 15.40 hs
Teresa Krick
Universidad de Buenos Aires
On arithmetic effective Nullstellensätze and implicitation problems
Joint work with Carlos D'Andrea and Martin Sombra, on precise bounds for the effective Nullstellensatz and the implicitation problem in their arithmetic aspects.
The Nullstellensatz establishes that if f_1,...,f_s in k[x_1,...,x_n] are polynomials with no common roots in the algebraic closure of the field k, then they satisfy a Bezout identity 1=
g_1f_1+...+g_sf_s for some polynomials g_1,...,g_s in k[x_1,...,x_n].
The implicitation problem consists in computing equations for an algebraic variety from a given rational parameterization of it. The typical case is when the variey is a hypersurface, in which case
it is defined by a single ``implicit equation'' and the problem consists in computing it.
Miércoles 16 de marzo de 2011 - 15 hs
Pablo Heiber
Universidad de Buenos Aires
A better complexity of finite sequences
Joint work with Veronica Becher
We present a new complexity function defined from combinatorial properties of strings, and we prove that not only it satisfies fundamental properties of program-size complexity, but also it is
computable in linear time and space. Our function is monotone in the prefix ordering of strings, and subadditive for concatenation. We prove an upper bound of the number of strings with complexity up
to a given value, and we show that most strings of any given length have almost maximal complexity. No previously known efficiently computable functions meet all these basic properties of classical
description complexity.
Description complexity, in each of its varieties, has been defined as minimal description length for a given description method.
In every case it is been hard, when even possible, to relate it with combinatorial properties of strings and, for computable versions, with string processing algorithms and data structures.
Our function uses an algebraic approach, which calculates complexity in terms of the set of substrings of the given string. This kind of definition is in the field of stringology, unlike description
This aims at narrowing the gap between the two views.
Miércoles 16 de marzo de 2011 - 15 hs
Gabriela Jerónimo
Universidad de Buenos Aires
A Geometric Approach to Differential Algebraic Equation Systems
One of the main invariants of a Differential Algebraic Equation (DAE) system is its differentiation index. There are several definitions of differentiation indices not all completely equivalent.
Roughly speaking, for first order systems, the differentiation index counts the number of total derivatives of the system needed to obtain an explicit ODE.
From the point of view of numerical solving, it is desirable for a DAE to have a differentiation index as small as possible. This motivates the study of index reduction methods, which given a DAE
system compute a new system, in some sense equivalent to the input DAE, but with a lower differentiation index (possibly 0 or 1).
We will present an algebraic definition of a differentiation index for a wide class of DAE systems and address the index reduction problem for these systems. We will show that any of these systems
can be transformed into a generically equivalent first order DAE system consisting of a single purely algebraic (polynomial) equation plus an under-determined ODE (that is, a semi-explicit DAE system
with differentiation index 1). Finally, we will describe an algorithm with bounded complexity which computes this associated system.
This is a joint work with Lisi D'Alfonso, François Ollivier, Alexandre Sedoglavic and Pablo Solernó.
Miércoles 30 de febrero de 2011 - 15 hs
Guillermo Matera
Lower bounds for robust interpolation algorithms
In this lecture we shall discuss lower bounds on the complexity of robust algorithms for solving families of interpolation problems. Our notion of robustness models the behavior of all known
universal methods for solving families of interpolation problems avoiding unnecessary branchings and allowing the solution of certain limit problems. We shall first show that a robust algorithm
solving a family of Lagrange interpolation problems with N nodes encoded by a Zariski open subset of the N-dimensional complex space has a cost which is at least linear in N, showing thus that
standard interpolation methods are essentially optimal. Then we shall consider families of interpolation problems with singularities. In particular, we shall consider the family of problems which
consists of interpolating a polynomial given by a straight-line program of length L from its value in a correct-test sequence. We shall show that any robust algorithm solving such a family of
problems requires a number of arithmetic operations which is exponential in L. Joint work with Nardo Gimenez, Joos Heintz and Pablo Solerno.
Miércoles 23 de febrero de 2011 - 15 hs
Joos Heintz
Departamento de Computación, Universidad de Buenos Aires
The Software Architecture of Algebraic Geometry
In 1978 Malte Sieveking (Goethe-Universität, Frankfurt) and myself introduced the arithmetic circuit representation of polynomials in effective algebraic geometry and asked for a non-polynomial lower
complexity bound for the elimination of a single existential quantifier block in circuit represented formulas of the first order theory of algebraically closed fields. For this purpose I showed
together with Claus Peter Schnorr (Goethe-Universität, Frankfurt) that identity of circuit represented polynomials can efficiently be checked. In this context, we arrived at a couple of basic
questions like the determination of the complexity character of the gcd of two circuit represented polynomials (without the trivializing reference to their degree of course). This and other related
problems remain today completely unsolved.
The question of the complexity status of quantifier elimination over algebraically closed fields was later reformulated and reconsidered in the context of the BSS model. In the last twelve years I
started with my collaborators (Pablo Solerno, Guillermo Matera and others) to reanalyze the problem of the lower complexity bounds for quantifier elimination in circuit represented first order
formulas of the first order theory of algebraically closed fields.
The goal was to show the optimality of the pseudpolynomial "Kronecker" algorithm for the solution of multivariate polynomial equation systems, which became developed and implemented in the last 10-15
years by the international peronist research group TERA. The basic idea, which during the last seven years slowly took form, was to restrict the algorithmic model by means of well motivated
architectural quality requirements. This was finally achieved thanks to the development of a refinement of the theory of rational maps in algebraic geometry. The basic outcome of this refinement is
the notion of a geometrically robust constructible map. The motivation is given as a nontrivial application of Zariski's Main Theorem to Computer Science.
These lectures contain joint work with Andrés Rojas Paredes.We elaborated together an architecture based computation model which allows to express all known symbolic and seminumeric computational
methods in effective algebraic geometry. The particular properties of this model are implications of well motivated and natural quality requirements on the procedures of this model. These procedures
transform so called robust parametrized arithmetic circuits in other ones. The main tool for establishing the model are the geometrically robust constructible maps. Within our computation model we
are able to give a mathematical proof of the intrinsic exponential complexity character of already most simple elimination problems in effective algebraic geometry.
The mathematical aspect of this proof is also interesting. The argument is based on the exhibition of a computationally well motivated rational map such that any factorization of this map by simple
blow ups and a polynomial map requires an exponential number of them. Such a type of statement was not known before in singularity theory.
All these methods may also be applied to other fields of Scientific Computing, for example to polynomial interpolation. One may show that coalescent (i.e., geometrically robust) interpolation of
circuit represented multivariate polynomials requires exponential time (joint work with Nardo Giménez, Guillermo Matera and Pablo Solerno).
Para más información, contactar a Michael Shub ( michael.shub@gmail.com ).
Created by slaplagn Last modified
2011-04-18 04:37 PM
|
{"url":"http://cms.dm.uba.ar/actividades/seminarios/scc/","timestamp":"2024-11-02T02:06:36Z","content_type":"application/xhtml+xml","content_length":"36165","record_id":"<urn:uuid:0f2d8497-4259-4f01-9aab-b32d73d6453a>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00086.warc.gz"}
|
9.2: Solution Concentration
Last updated
Page ID
\( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \)
\( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\)
( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\)
\( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\)
\( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\)
\( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\)
\( \newcommand{\Span}{\mathrm{span}}\)
\( \newcommand{\id}{\mathrm{id}}\)
\( \newcommand{\Span}{\mathrm{span}}\)
\( \newcommand{\kernel}{\mathrm{null}\,}\)
\( \newcommand{\range}{\mathrm{range}\,}\)
\( \newcommand{\RealPart}{\mathrm{Re}}\)
\( \newcommand{\ImaginaryPart}{\mathrm{Im}}\)
\( \newcommand{\Argument}{\mathrm{Arg}}\)
\( \newcommand{\norm}[1]{\| #1 \|}\)
\( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\)
\( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\AA}{\unicode[.8,0]{x212B}}\)
\( \newcommand{\vectorA}[1]{\vec{#1}} % arrow\)
\( \newcommand{\vectorAt}[1]{\vec{\text{#1}}} % arrow\)
\( \newcommand{\vectorB}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vectorC}[1]{\textbf{#1}} \)
\( \newcommand{\vectorD}[1]{\overrightarrow{#1}} \)
\( \newcommand{\vectorDt}[1]{\overrightarrow{\text{#1}}} \)
\( \newcommand{\vectE}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{\mathbf {#1}}}} \)
\( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \)
\(\newcommand{\avec}{\mathbf a}\) \(\newcommand{\bvec}{\mathbf b}\) \(\newcommand{\cvec}{\mathbf c}\) \(\newcommand{\dvec}{\mathbf d}\) \(\newcommand{\dtil}{\widetilde{\mathbf d}}\) \(\newcommand{\
evec}{\mathbf e}\) \(\newcommand{\fvec}{\mathbf f}\) \(\newcommand{\nvec}{\mathbf n}\) \(\newcommand{\pvec}{\mathbf p}\) \(\newcommand{\qvec}{\mathbf q}\) \(\newcommand{\svec}{\mathbf s}\) \(\
newcommand{\tvec}{\mathbf t}\) \(\newcommand{\uvec}{\mathbf u}\) \(\newcommand{\vvec}{\mathbf v}\) \(\newcommand{\wvec}{\mathbf w}\) \(\newcommand{\xvec}{\mathbf x}\) \(\newcommand{\yvec}{\mathbf y}
\) \(\newcommand{\zvec}{\mathbf z}\) \(\newcommand{\rvec}{\mathbf r}\) \(\newcommand{\mvec}{\mathbf m}\) \(\newcommand{\zerovec}{\mathbf 0}\) \(\newcommand{\onevec}{\mathbf 1}\) \(\newcommand{\real}
{\mathbb R}\) \(\newcommand{\twovec}[2]{\left[\begin{array}{r}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\ctwovec}[2]{\left[\begin{array}{c}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\threevec}
[3]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\cthreevec}[3]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\fourvec}[4]{\left[\begin{array}
{r}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\cfourvec}[4]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\fivevec}[5]{\left[\begin{array}{r}#1 \\ #2 \\
#3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\cfivevec}[5]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\mattwo}[4]{\left[\begin{array}{rr}#1 \amp
#2 \\ #3 \amp #4 \\ \end{array}\right]}\) \(\newcommand{\laspan}[1]{\text{Span}\{#1\}}\) \(\newcommand{\bcal}{\cal B}\) \(\newcommand{\ccal}{\cal C}\) \(\newcommand{\scal}{\cal S}\) \(\newcommand{\
wcal}{\cal W}\) \(\newcommand{\ecal}{\cal E}\) \(\newcommand{\coords}[2]{\left\{#1\right\}_{#2}}\) \(\newcommand{\gray}[1]{\color{gray}{#1}}\) \(\newcommand{\lgray}[1]{\color{lightgray}{#1}}\) \(\
newcommand{\rank}{\operatorname{rank}}\) \(\newcommand{\row}{\text{Row}}\) \(\newcommand{\col}{\text{Col}}\) \(\renewcommand{\row}{\text{Row}}\) \(\newcommand{\nul}{\text{Nul}}\) \(\newcommand{\var}
{\text{Var}}\) \(\newcommand{\corr}{\text{corr}}\) \(\newcommand{\len}[1]{\left|#1\right|}\) \(\newcommand{\bbar}{\overline{\bvec}}\) \(\newcommand{\bhat}{\widehat{\bvec}}\) \(\newcommand{\bperp}{\
bvec^\perp}\) \(\newcommand{\xhat}{\widehat{\xvec}}\) \(\newcommand{\vhat}{\widehat{\vvec}}\) \(\newcommand{\uhat}{\widehat{\uvec}}\) \(\newcommand{\what}{\widehat{\wvec}}\) \(\newcommand{\Sighat}{\
widehat{\Sigma}}\) \(\newcommand{\lt}{<}\) \(\newcommand{\gt}{>}\) \(\newcommand{\amp}{&}\) \(\definecolor{fillinmathshade}{gray}{0.9}\)
• To describe the concentrations of solutions quantitatively
Many people have a qualitative idea of what is meant by concentration. Anyone who has made instant coffee or lemonade knows that too much powder gives a strongly flavored, highly concentrated drink,
whereas too little results in a dilute solution that may be hard to distinguish from water. In chemistry, the concentration of a solution is the quantity of a solute that is contained in a particular
quantity of solvent or solution. Knowing the concentration of solutes is important in controlling the stoichiometry of reactants for solution reactions. Chemists use many different methods to define
concentrations, some of which are described in this section.
The most common unit of concentration is molarity, which is also the most useful for calculations involving the stoichiometry of reactions in solution. The molarity (M) is defined as the number of
moles of solute present in exactly 1 L of solution. It is, equivalently, the number of millimoles of solute present in exactly 1 mL of solution:
\[ molarity = \dfrac{moles\: of\: solute}{liters\: of\: solution} = \dfrac{mmoles\: of\: solute} {milliliters\: of\: solution} \label{4.5.1} \]
The units of molarity are therefore moles per liter of solution (mol/L), abbreviated as \(M\). An aqueous solution that contains 1 mol (342 g) of sucrose in enough water to give a final volume of
1.00 L has a sucrose concentration of 1.00 mol/L or 1.00 M. In chemical notation, square brackets around the name or formula of the solute represent the molar concentration of a solute. Therefore,
\[[\rm{sucrose}] = 1.00\: M \nonumber \]
is read as “the concentration of sucrose is 1.00 molar.” The relationships between volume, molarity, and moles may be expressed as either
\[ V_L M_{mol/L} = \cancel{L} \left( \dfrac{mol}{\cancel{L}} \right) = moles \label{4.5.2} \]
\[ V_{mL} M_{mmol/mL} = \cancel{mL} \left( \dfrac{mmol} {\cancel{mL}} \right) = mmoles \label{4.5.3} \]
Figure \(\PageIndex{1}\) illustrates the use of Equations \(\ref{4.5.2}\) and \(\ref{4.5.3}\).
Figure \(\PageIndex{1}\): Preparation of a Solution of Known Concentration Using a Solid Solute
Calculate the number of moles of sodium hydroxide (NaOH) in 2.50 L of 0.100 M NaOH.
Given: identity of solute and volume and molarity of solution
Asked for: amount of solute in moles
Use either Equation \ref{4.5.2} or Equation \ref{4.5.3}, depending on the units given in the problem.
Because we are given the volume of the solution in liters and are asked for the number of moles of substance, Equation \ref{4.5.2} is more useful:
\( moles\: NaOH = V_L M_{mol/L} = (2 .50\: \cancel{L} ) \left( \dfrac{0.100\: mol } {\cancel{L}} \right) = 0 .250\: mol\: NaOH \)
Calculate the number of millimoles of alanine, a biologically important molecule, in 27.2 mL of 1.53 M alanine.
41.6 mmol
Calculations Involving Molarity (M): https://youtu.be/TVTCvKoSR-Q
Concentrations are also often reported on a mass-to-mass (m/m) basis or on a mass-to-volume (m/v) basis, particularly in clinical laboratories and engineering applications. A concentration expressed
on an m/m basis is equal to the number of grams of solute per gram of solution; a concentration on an m/v basis is the number of grams of solute per milliliter of solution. Each measurement can be
expressed as a percentage by multiplying the ratio by 100; the result is reported as percent m/m or percent m/v. The concentrations of very dilute solutions are often expressed in parts per million (
ppm), which is grams of solute per 10^6 g of solution, or in parts per billion (ppb), which is grams of solute per 10^9 g of solution. For aqueous solutions at 20°C, 1 ppm corresponds to 1 μg per
milliliter, and 1 ppb corresponds to 1 ng per milliliter. These concentrations and their units are summarized in Table \(\PageIndex{1}\).
Table \(\PageIndex{1}\): Common Units of
Concentration Units
m/m g of solute/g of solution
m/v g of solute/mL of solution
ppm g of solute/10^6 g of solution
ppb g of solute/10^9 g of solution
The Preparation of Solutions
To prepare a solution that contains a specified concentration of a substance, it is necessary to dissolve the desired number of moles of solute in enough solvent to give the desired final volume of
solution. Figure \(\PageIndex{1}\) illustrates this procedure for a solution of cobalt(II) chloride dihydrate in ethanol. Note that the volume of the solvent is not specified. Because the solute
occupies space in the solution, the volume of the solvent needed is almost always less than the desired volume of solution. For example, if the desired volume were 1.00 L, it would be incorrect to
add 1.00 L of water to 342 g of sucrose because that would produce more than 1.00 L of solution. As shown in Figure \(\PageIndex{2}\), for some substances this effect can be significant, especially
for concentrated solutions.
Figure \(\PageIndex{2}\): Preparation of 250 mL of a Solution of (NH[4])[2]Cr[2]O[7] in Water. The solute occupies space in the solution, so less than 250 mL of water are needed to make 250 mL of
The solution contains 10.0 g of cobalt(II) chloride dihydrate, CoCl[2]•2H[2]O, in enough ethanol to make exactly 500 mL of solution. What is the molar concentration of \(\ce{CoCl2•2H2O}\)?
Given: mass of solute and volume of solution
Asked for: concentration (M)
To find the number of moles of \(\ce{CoCl2•2H2O}\), divide the mass of the compound by its molar mass. Calculate the molarity of the solution by dividing the number of moles of solute by the volume
of the solution in liters.
The molar mass of CoCl[2]•2H[2]O is 165.87 g/mol. Therefore,
\[ moles\: CoCl_2 \cdot 2H_2O = \left( \dfrac{10.0 \: \cancel{g}} {165 .87\: \cancel{g} /mol} \right) = 0 .0603\: mol \nonumber \]
The volume of the solution in liters is
\[ volume = 500\: \cancel{mL} \left( \dfrac{1\: L} {1000\: \cancel{mL}} \right) = 0 .500\: L \nonumber \]
Molarity is the number of moles of solute per liter of solution, so the molarity of the solution is
\[ molarity = \dfrac{0.0603\: mol} {0.500\: L} = 0.121\: M = CoCl_2 \cdot H_2O \nonumber \]
The solution shown in Figure \(\PageIndex{2}\) contains 90.0 g of (NH[4])[2]Cr[2]O[7] in enough water to give a final volume of exactly 250 mL. What is the molar concentration of ammonium dichromate?
\[(NH_4)_2Cr_2O_7 = 1.43\: M \nonumber \]
To prepare a particular volume of a solution that contains a specified concentration of a solute, we first need to calculate the number of moles of solute in the desired volume of solution using the
relationship shown in Equation \(\ref{4.5.2}\). We then convert the number of moles of solute to the corresponding mass of solute needed. This procedure is illustrated in Example \(\PageIndex{3}\).
The so-called D5W solution used for the intravenous replacement of body fluids contains 0.310 M glucose. (D5W is an approximately 5% solution of dextrose [the medical name for glucose] in water.)
Calculate the mass of glucose necessary to prepare a 500 mL pouch of D5W. Glucose has a molar mass of 180.16 g/mol.
Given: molarity, volume, and molar mass of solute
Asked for: mass of solute
1. Calculate the number of moles of glucose contained in the specified volume of solution by multiplying the volume of the solution by its molarity.
2. Obtain the mass of glucose needed by multiplying the number of moles of the compound by its molar mass.
A We must first calculate the number of moles of glucose contained in 500 mL of a 0.310 M solution:
\( V_L M_{mol/L} = moles \)
\( 500\: \cancel{mL} \left( \dfrac{1\: \cancel{L}} {1000\: \cancel{mL}} \right) \left( \dfrac{0 .310\: mol\: glucose} {1\: \cancel{L}} \right) = 0 .155\: mol\: glucose \)
B We then convert the number of moles of glucose to the required mass of glucose:
\( mass \: of \: glucose = 0.155 \: \cancel{mol\: glucose} \left( \dfrac{180.16 \: g\: glucose} {1\: \cancel{mol\: glucose}} \right) = 27.9 \: g \: glucose \)
Another solution commonly used for intravenous injections is normal saline, a 0.16 M solution of sodium chloride in water. Calculate the mass of sodium chloride needed to prepare 250 mL of normal
saline solution.
2.3 g NaCl
A solution of a desired concentration can also be prepared by diluting a small volume of a more concentrated solution with additional solvent. A stock solution is a commercially prepared solution of
known concentration and is often used for this purpose. Diluting a stock solution is preferred because the alternative method, weighing out tiny amounts of solute, is difficult to carry out with a
high degree of accuracy. Dilution is also used to prepare solutions from substances that are sold as concentrated aqueous solutions, such as strong acids.
The procedure for preparing a solution of known concentration from a stock solution is shown in Figure \(\PageIndex{3}\). It requires calculating the number of moles of solute desired in the final
volume of the more dilute solution and then calculating the volume of the stock solution that contains this amount of solute. Remember that diluting a given quantity of stock solution with solvent
does not change the number of moles of solute present. The relationship between the volume and concentration of the stock solution and the volume and concentration of the desired diluted solution is
\[(V_s)(M_s) = moles\: of\: solute = (V_d)(M_d)\label{4.5.4} \]
where the subscripts s and d indicate the stock and dilute solutions, respectively. Example \(\PageIndex{4}\) demonstrates the calculations involved in diluting a concentrated stock solution.
Figure \(\PageIndex{3}\): Preparation of a Solution of Known Concentration by Diluting a Stock Solution. (a) A volume (V[s]) containing the desired moles of solute (M[s]) is measured from a stock
solution of known concentration. (b) The measured volume of stock solution is transferred to a second volumetric flask. (c) The measured volume in the second flask is then diluted with solvent up to
the volumetric mark [(V[s])(M[s]) = (V[d])(M[d])].
What volume of a 3.00 M glucose stock solution is necessary to prepare 2500 mL of the D5W solution in Example \(\PageIndex{3}\)?
Given: volume and molarity of dilute solution
Asked for: volume of stock solution
1. Calculate the number of moles of glucose contained in the indicated volume of dilute solution by multiplying the volume of the solution by its molarity.
2. To determine the volume of stock solution needed, divide the number of moles of glucose by the molarity of the stock solution.
A The D5W solution in Example 4.5.3 was 0.310 M glucose. We begin by using Equation 4.5.4 to calculate the number of moles of glucose contained in 2500 mL of the solution:
\[ moles\: glucose = 2500\: \cancel{mL} \left( \dfrac{1\: \cancel{L}} {1000\: \cancel{mL}} \right) \left( \dfrac{0 .310\: mol\: glucose} {1\: \cancel{L}} \right) = 0 .775\: mol\: glucose \nonumber \]
B We must now determine the volume of the 3.00 M stock solution that contains this amount of glucose:
\[ volume\: of\: stock\: soln = 0 .775\: \cancel{mol\: glucose} \left( \dfrac{1\: L} {3 .00\: \cancel{mol\: glucose}} \right) = 0 .258\: L\: or\: 258\: mL \nonumber \]
In determining the volume of stock solution that was needed, we had to divide the desired number of moles of glucose by the concentration of the stock solution to obtain the appropriate units. Also,
the number of moles of solute in 258 mL of the stock solution is the same as the number of moles in 2500 mL of the more dilute solution; only the amount of solvent has changed. The answer we obtained
makes sense: diluting the stock solution about tenfold increases its volume by about a factor of 10 (258 mL → 2500 mL). Consequently, the concentration of the solute must decrease by about a factor
of 10, as it does (3.00 M → 0.310 M).
We could also have solved this problem in a single step by solving Equation 4.5.4 for V[s] and substituting the appropriate values:
\[ V_s = \dfrac{( V_d )(M_d )}{M_s} = \dfrac{(2 .500\: L)(0 .310\: \cancel{M} )} {3 .00\: \cancel{M}} = 0 .258\: L \nonumber \]
As we have noted, there is often more than one correct way to solve a problem.
What volume of a 5.0 M NaCl stock solution is necessary to prepare 500 mL of normal saline solution (0.16 M NaCl)?
16 mL
What are the concentrations of all species derived from the solutes in these aqueous solutions?
1. 0.21 M NaOH
2. 3.7 M (CH[3])[2]CHOH
3. 0.032 M In(NO[3])[3]
Given: molarity
Asked for: concentrations
A Classify each compound as either a strong electrolyte or a nonelectrolyte.
B If the compound is a nonelectrolyte, its concentration is the same as the molarity of the solution. If the compound is a strong electrolyte, determine the number of each ion contained in one
formula unit. Find the concentration of each species by multiplying the number of each ion by the molarity of the solution.
1. Sodium hydroxide is an ionic compound that is a strong electrolyte (and a strong base) in aqueous solution: \( NaOH(s) \xrightarrow {H_2 O(l)} Na^+ (aq) + OH^- (aq) \)
B Because each formula unit of NaOH produces one Na^+ ion and one OH^− ion, the concentration of each ion is the same as the concentration of NaOH: [Na^+] = 0.21 M and [OH^−] = 0.21 M.
2. A The formula (CH[3])[2]CHOH represents 2-propanol (isopropyl alcohol) and contains the –OH group, so it is an alcohol. Recall from Section 4.1 that alcohols are covalent compounds that dissolve
in water to give solutions of neutral molecules. Thus alcohols are nonelectrolytes.
B The only solute species in solution is therefore (CH[3])[2]CHOH molecules, so [(CH[3])[2]CHOH] = 3.7 M.
3. A Indium nitrate is an ionic compound that contains In^3^+ ions and NO[3]^− ions, so we expect it to behave like a strong electrolyte in aqueous solution:
\( In(NO _3 ) _3 (s) \xrightarrow {H_ 2 O(l)} In ^{3+} (aq) + 3NO _3^- (aq) \)
B One formula unit of In(NO[3])[3] produces one In^3^+ ion and three NO[3]^− ions, so a 0.032 M In(NO[3])[3] solution contains 0.032 M In^3^+ and 3 × 0.032 M = 0.096 M NO[3]^–—that is, [In^3^+] =
0.032 M and [NO[3]^−] = 0.096 M.
What are the concentrations of all species derived from the solutes in these aqueous solutions?
1. 0.0012 M Ba(OH)[2]
2. 0.17 M Na[2]SO[4]
3. 0.50 M (CH[3])[2]CO, commonly known as acetone
Answer a
\([Ba^{2+}] = 0.0012\: M; \: [OH^-] = 0.0024\: M\)
Answer b
\([Na^+] = 0.34\: M; \: [SO_4^{2-}] = 0.17\: M\)
Answer c
\([(CH_3)_2CO] = 0.50\: M\)
Solution concentrations are typically expressed as molarities and can be prepared by dissolving a known mass of solute in a solvent or diluting a stock solution.
• definition of molarity: \[ molarity = \dfrac{moles\: of\: solute}{liters\: of\: solution} = \dfrac{mmoles\: of\: solute} {milliliters\: of\: solution} \nonumber \]
• relationship among volume, molarity, and moles: \[ V_L M_{mol/L} = \cancel{L} \left( \dfrac{mol}{\cancel{L}} \right) = moles \nonumber \]
• relationship between volume and concentration of stock and dilute solutions: \[(V_s)(M_s) = moles\: of\: solute = (V_d)(M_d) \nonumber \]
The concentration of a substance is the quantity of solute present in a given quantity of solution. Concentrations are usually expressed in terms of molarity, defined as the number of moles of solute
in 1 L of solution. Solutions of known concentration can be prepared either by dissolving a known mass of solute in a solvent and diluting to a desired final volume or by diluting the appropriate
volume of a more concentrated solution (a stock solution) to the desired final volume.
Contributors and Attributions
|
{"url":"https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_Structure_and_Properties_(Tro)/09%3A_Introduction_to_Solutions_and_Aqueous_Reactions/9.02%3A_Solution_Concentration","timestamp":"2024-11-07T23:16:35Z","content_type":"text/html","content_length":"157299","record_id":"<urn:uuid:6eadd4fa-7986-46a2-8b82-4a9fe4c404ae>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00533.warc.gz"}
|
Unveiling the Secrets of Geometry: Uncover the Key to Success in "august 2019 geometry regents answers" | Mastering Success: The Power of Answer Keys and QuizzesUnveiling the Secrets of Geometry: Uncover the Key to Success in "august 2019 geometry regents answers"
Unveiling the Secrets of Geometry: Uncover the Key to Success in "august 2019 geometry regents answers"
The August 2019 Geometry Regents Exam was a standardized test administered to high school students in New York State. The exam covered a range of topics in geometry, including transformations,
measurement, and trigonometry.
The exam is designed to assess students’ understanding of the New York State Geometry curriculum. Students who perform well on the exam may be eligible for advanced placement in mathematics courses
in college.
The August 2019 Geometry Regents Exam was released to the public in September 2019. Students and teachers can use the exam to review the content covered on the exam and to prepare for future exams.
August 2019 Geometry Regents Answers
The August 2019 Geometry Regents Exam was a standardized test administered to high school students in New York State. The exam covered a range of topics in geometry, including transformations,
measurement, and trigonometry.
• Content: The exam covered a wide range of geometry topics, including transformations, measurement, and trigonometry.
• Difficulty: The exam was challenging, but fair. Students who had a strong understanding of the geometry curriculum were able to perform well.
• Scoring: The exam was scored on a scale of 0-100. Students who scored 65 or higher earned a passing grade.
• Preparation: Students can prepare for the Geometry Regents Exam by taking a geometry course in high school and by studying the New York State Geometry curriculum.
• Importance: The Geometry Regents Exam is an important assessment for high school students in New York State. Students who perform well on the exam may be eligible for advanced placement in
mathematics courses in college.
The August 2019 Geometry Regents Exam was a challenging but fair exam. Students who had a strong understanding of the geometry curriculum were able to perform well. The exam is an important
assessment for high school students in New York State, and students who perform well on the exam may be eligible for advanced placement in mathematics courses in college.
The August 2019 Geometry Regents Exam covered a wide range of geometry topics, including transformations, measurement, and trigonometry. This content is essential for students to master in order to
be successful on the exam and in future mathematics courses.
• Transformations: Transformations are operations that move, flip, or turn a figure. Students must be able to identify and perform transformations, as well as understand the properties of
• Measurement: Measurement involves finding the length, area, or volume of a figure. Students must be able to use a variety of formulas to find measurements, as well as understand the relationships
between different units of measurement.
• Trigonometry: Trigonometry is the study of triangles. Students must be able to use trigonometric ratios to solve problems involving triangles, as well as understand the relationships between the
sides and angles of triangles.
By understanding the content covered on the August 2019 Geometry Regents Exam, students can better prepare for the exam and for future mathematics courses.
The difficulty of the August 2019 Geometry Regents Exam was a reflection of the challenging nature of the geometry curriculum. Students who had a strong understanding of the content were able to
perform well on the exam, while those who struggled with the content found the exam to be more difficult.
The difficulty of the exam is an important component of the August 2019 Geometry Regents Answers because it provides students with a benchmark for their own understanding of the geometry curriculum.
Students who performed well on the exam can be confident that they have a strong understanding of the content, while those who struggled with the exam may need to review the content and seek
additional help.
The difficulty of the exam also has practical significance for students. Students who perform well on the exam may be eligible for advanced placement in mathematics courses in college, which can save
them time and money. Additionally, students who perform well on the exam are more likely to be successful in future mathematics courses.
The scoring system for the August 2019 Geometry Regents Exam is an integral part of the exam as it determines the level of student achievement and provides valuable feedback for educators and
students alike. Here are a few key aspects of the scoring system in relation to “august 2019 geometry regents answers”:
• Grading Scale: The exam is scored on a scale of 0-100, with 65 being the passing grade. This grading scale provides a clear benchmark for students to assess their performance and identify areas
for improvement.
• Levels of Proficiency: The scoring system is designed to differentiate between different levels of proficiency in geometry. Students who score higher demonstrate a deeper understanding of the
subject matter and a greater ability to apply geometric concepts to solve problems.
• Diagnostic Tool: The exam results, including the scoring details, serve as a valuable diagnostic tool for educators to identify areas where students may need additional support or enrichment. By
analyzing the strengths and weaknesses revealed by the scoring, teachers can tailor their instruction to meet the specific needs of their students.
In summary, the scoring system for the August 2019 Geometry Regents Exam plays a crucial role in evaluating student achievement, providing feedback for improvement, and informing instructional
decisions. It helps ensure that students are meeting the expected standards and are well-prepared for future mathematics courses.
Preparation is a key factor in achieving success on the August 2019 Geometry Regents Exam. Students who take a geometry course in high school and study the New York State Geometry curriculum will be
well-prepared for the exam and will have a greater chance of earning a high score.
• Geometry Course: Taking a geometry course in high school is the most important step in preparing for the Geometry Regents Exam. In this course, students will learn the fundamental concepts of
geometry, including transformations, measurement, and trigonometry. They will also develop the problem-solving skills necessary to succeed on the exam.
In addition to taking a geometry course, students should also study the New York State Geometry curriculum. This curriculum outlines the specific topics that will be covered on the exam, and it is
important for students to be familiar with this material before taking the exam.
By taking a geometry course and studying the New York State Geometry curriculum, students can prepare themselves for the August 2019 Geometry Regents Exam and increase their chances of earning a high
The connection of the Geometry Regents Exam to the “august 2019 geometry regents answers” lies in its significance as a determinant of college readiness in mathematics. The exam serves as a
standardized measure of a student’s understanding of geometry concepts and problem-solving abilities, providing valuable insights for higher education institutions.
By performing well on the Geometry Regents Exam, students not only demonstrate their mastery of the subject but also open doors to advanced opportunities in college. Earning a high score on the exam
can qualify students for advanced placement (AP) courses in mathematics, allowing them to skip introductory college math courses and potentially earn college credit. This can save students both time
and money while giving them a head start in their college mathematics studies.
Furthermore, a strong performance on the Geometry Regents Exam can serve as an indicator of a student’s overall academic capabilities and readiness for college-level work. Colleges and universities
often consider the Regents Exam scores as part of their admissions criteria, recognizing them as a reliable measure of a student’s academic rigor and achievement.
In summary, the “august 2019 geometry regents answers” hold importance beyond just providing solutions to specific exam questions. They represent a student’s understanding of geometry and
problem-solving skills, which are crucial for success in college mathematics and beyond.
FAQs about the August 2019 Geometry Regents Exam Answers
The August 2019 Geometry Regents Exam was a standardized test administered to high school students in New York State. The exam covered a wide range of topics in geometry, including transformations,
measurement, and trigonometry. The following are some frequently asked questions about the exam and its answers:
Question 1: When were the August 2019 Geometry Regents Exam answers released?
The August 2019 Geometry Regents Exam answers were released to the public in September 2019.
Question 2: Where can I find the August 2019 Geometry Regents Exam answers?
The August 2019 Geometry Regents Exam answers can be found on the New York State Education Department website.
Question 3: How can I use the August 2019 Geometry Regents Exam answers?
The August 2019 Geometry Regents Exam answers can be used to review the content covered on the exam and to prepare for future exams.
Question 4: What is the passing score for the August 2019 Geometry Regents Exam?
The passing score for the August 2019 Geometry Regents Exam is 65.
Question 5: What are some tips for preparing for the Geometry Regents Exam?
Some tips for preparing for the Geometry Regents Exam include taking a geometry course in high school, studying the New York State Geometry curriculum, and practicing with practice exams.
Question 6: What are some resources that can help me prepare for the Geometry Regents Exam?
Some resources that can help you prepare for the Geometry Regents Exam include textbooks, online resources, and practice exams.
The August 2019 Geometry Regents Exam answers are a valuable resource for students preparing for the exam. By understanding the content covered on the exam and practicing with the answers, students
can increase their chances of success.
Tips for Preparing for the Geometry Regents Exam
By following these tips, students can increase their chances of success on the Geometry Regents Exam.
Tip 1: Take a Geometry Course
The best way to prepare for the Geometry Regents Exam is to take a geometry course in high school. In this course, students will learn the fundamental concepts of geometry, including transformations,
measurement, and trigonometry. They will also develop the problem-solving skills necessary to succeed on the exam.
Tip 2: Study the New York State Geometry Curriculum
The Geometry Regents Exam is based on the New York State Geometry curriculum. Students should make sure that they are familiar with all of the topics covered in the curriculum. They can do this by
reviewing their class notes, textbooks, and online resources.
Tip 3: Practice with Practice Exams
One of the best ways to prepare for the Geometry Regents Exam is to practice with practice exams. This will help students to become familiar with the format of the exam and the types of questions
that they can expect. Practice exams can be found online and in libraries.
Tip 4: Get a Good Night’s Sleep Before the Exam
It is important to get a good night’s sleep before the Geometry Regents Exam. This will help students to be well-rested and focused on the day of the exam.
Tip 5: Eat a Healthy Breakfast on the Day of the Exam
Eating a healthy breakfast on the day of the Geometry Regents Exam will help students to have the energy they need to perform their best.
By following these tips, students can increase their chances of success on the Geometry Regents Exam.
Transition to the article’s conclusion:
The Geometry Regents Exam is an important assessment for high school students in New York State. By preparing for the exam, students can increase their chances of earning a high score and meeting
their academic goals.
The August 2019 Geometry Regents Exam answers provide students with a valuable resource for preparing for the exam. By understanding the content covered on the exam and practicing with the answers,
students can increase their chances of success.
The Geometry Regents Exam is an important assessment for high school students in New York State. It is a challenging exam, but by taking a geometry course, studying the curriculum, and practicing
with practice exams, students can prepare themselves for success.
Images References
You must be logged in to post a comment.
|
{"url":"https://sncollegecherthala.in/august-2019-geometry-regents-answers/","timestamp":"2024-11-03T12:48:26Z","content_type":"text/html","content_length":"140976","record_id":"<urn:uuid:82f66cb9-1887-4993-8a98-b8614b9e8d7a>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00601.warc.gz"}
|
Re: st: how to svyset for stratified multiple-stage cluster sampling in
[Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index]
Re: st: how to svyset for stratified multiple-stage cluster sampling in STATA
From [email protected] (Jeff Pitblado, StataCorp LP)
To [email protected]
Subject Re: st: how to svyset for stratified multiple-stage cluster sampling in STATA
Date Wed, 19 Apr 2006 17:52:46 -0500
Jian Zhang <[email protected]> has a -svyset- question:
> The sample was obtained as follows. I sampled the population by
> stratifying it first, and then I randomly selected several clusters for
> each stratum. Within each cluster, I then random selected several
> subclusters, and then for each subcluster, I randomly selected a certain
> number of observations. for this sampling plan, how do I set up the
> sampling plan using command svyset in STATA?
> Would it be:
> svyset [pweight = pwt], fpc(fpc) psu(cluster) strata(strata)?
I'll assume Stata 9, since this is the first release where -svyset- has a
syntax to deal with multiple stages of clustered sampling.
Let's make up some variable names to represent survey design characteristics:
pwt - sampling weights
strata1 - stage 1 strata
su1 - stage 1 sampling units (PSU)
fpc1 - stage 1 finite population correction
strata2 - stage 2 strata
su2 - stage 2 sampling units (PSU)
fpc2 - stage 2 finite population correction
... you get the idea
Given Jian's description above, the -svyset- command should be as follows:
svyset su1 [pw=pwt], strata(strata1) fpc(fpc1) ///
|| su2, fpc(fpc2) || _n, fpc(fpc3)
(note: '///' tells Stata to continue to the next line in ado/do files.)
> I know this is for stratified TWO-stage cluster sampling plan, which is "
> sample the population by stratifying it first, and then randomly select
> several clusters for each stratum. Within each cluster, then randomly
> select a certain number of observations."
> Would the svyset for multiple-stage cluster sample (more than 2 stages)
> with stratification be same as TWO-stage cluster sampling with
> stratification?
Actually, Jian's original -svyset- command:
> svyset [pweight = pwt], fpc(fpc) psu(cluster) strata(strata)
should not be used with a two-stage design because an -fpc()- was specified
but nothing was mentioned about the second stage.
Prior to Stata 9, -svyset- only allowed you to specify the first stage design
variables and we recommended that you omit the -fpc()- if the design involved
sampling within PSUs. In Stata 9 you can specify the design variables for
each stage provided you have them, using '||' to delimit between the stages.
> More complicated is that what if I do cluster sampling first, and then
> stratify each cluster, and then do cluster sampling again, what would
> the command svy for setting up this sampling plan be?
In this case Jian stratified in the second stage, so Jian should have a
variable like 'strata2' instead of 'strata1':
svyset su1 [pw=pwt], fpc(fpc1) ///
|| su2, strata(strata2) fpc(fpc2) || _n, fpc(fpc3)
> Similarly, if I stratify the population first, and then do the cluster,
> and then do stratification again and then do cluster sampling again, what
> would the svyset command be for this sampling plan?
svyset su1 [pw=pwt], strata(strata1) fpc(fpc1) ///
|| su2, strata(strata2) fpc(fpc2) || _n, fpc(fpc3)
> To generalize the question, if we change the order of cluster sampling
> and stratification sampling when sampling the population, would the
> svyset command be different?
In Stata 9, you need to know from which stage a stratum variable identifies
the strata. See -[SVY] svyset- for more examples of how to -svyset-
multi-stage designs.
Prior to Stata 9, you would only use the -strata()- option if your design had
stratification in the first stage.
[email protected]
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"https://www.stata.com/statalist/archive/2006-04/msg00722.html","timestamp":"2024-11-02T01:52:12Z","content_type":"text/html","content_length":"11435","record_id":"<urn:uuid:c89747fa-dfd8-46d1-b3fd-92bff0f13f9e>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00462.warc.gz"}
|
The 9 horizontal and 9 vertical lines on an 8X8 chessboard form 'r' r
Algebra> jee king pc d3 q5...
The 9 horizontal and 9 vertical lines on an 8X8 chessboard form 'r' rectangles& 's' squares. The ratio of s/r in the lowest terms is?
2 Answers
Badiuddin askIITians.ismu Expert
Last Activity: 14 Years ago
Dear jee king
number of rectangles = r = ^9C2 *^9C2 =1296
number of square=s = (1^2 + 2^2 + 3^2 + ................+ 8^2 ) =204
so s/r =204/1296
Please feel free to post as many doubts on our discussion forum as you can.
If you find any question Difficult to understand - post it here and we will get you the answer and detailed solution very quickly.
We are all IITians and here to help you in your IIT JEE & AIEEE preparation.
All the best.
Askiitians Experts
Spandan Mallick
Last Activity: 5 Years ago
First for squares,
We see in a 8×8 chessboard, there are 64, 1×1 squares. Similarly 32, 2×2 squares and 16, 3×3 squares. Hence it forms a pattern. Also if you divide the area... total area is 64 sq. units.
For 1×1 squares, hence 64 are possible. For 2×2squares you can get it. Hence,
No of squares = (1^2+2^2+3^2+...+8^2) = 204.
For rectangles, consider the number of lines rather than squares. There are 9 lines. A rectangle can be formed by combination of any of these two four lines taking two from each pair of 9. Hence,
9C2 × 9C2 = 1296.
Spandan Mallick, IIT-JEE Aspirant
Provide a better Answer & Earn Cool Goodies
Enter text here...
Ask a Doubt
Get your questions answered by the expert for free
Enter text here...
|
{"url":"https://www.askiitians.com/forums/Algebra/22/5983/jee-king-pc-d3-q5.htm","timestamp":"2024-11-13T08:32:54Z","content_type":"text/html","content_length":"186214","record_id":"<urn:uuid:5e7754e0-774f-4029-b7c0-32c108165d23>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00606.warc.gz"}
|
void FIfWinKaiser (float win[], int N, double alpha)
Generate a Kaiser window
A Kaiser window is specified by the following equation in continuous time,
0 , x < -1
I0(a sqrt(1 - x^2)
w(x) = ------------------ , -1 <= x <= 1
0 1 < x
This window sits on a pedestal of height 1/I0(a). The discrete-time window of length N is obtained by setting x = 2n/(N-1)-1, for 0 <= n < N.
The parameter a (alpha) determines the shape of the window, with increasing a giving a larger mainlobe width. For a=0, the window is a rectangular. For a = 5.4414, the window has the same mainlobe
width as a Hamming window.
J. F. Kaiser, "Nonrecursive digital filter design using the I0-sinh window function", Proc. 1974 IEEE Int. Symp. on Circuits and Syst., pp. 20-23, April 1974.
<- float win[]
Array containing the window values
-> int N
Number of window values
-> double alpha
Window parameter
Author / revision
P. Kabal / Revision 1.11 2003/05/09
See Also
FIKaiserLPF, FIfWinHCos, FIfWinHamm, FIfWinRCos
Main Index libtsp
|
{"url":"https://mmsp.ece.mcgill.ca/Documents/Software/Packages/libtsp/FI/FIfWinKaiser.html","timestamp":"2024-11-05T12:26:20Z","content_type":"text/html","content_length":"1959","record_id":"<urn:uuid:cc7af6c2-70ae-4bbb-b0e3-988f406ec1bb>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00606.warc.gz"}
|
Note on the geometrical interpretation of quantum groups and noncommutative spaces in gravity
Quantum groups and noncommutative spaces have been repeatedly utilized in approaches to quantum gravity. They provide a mathematically elegant cutoff, often interpreted as related to the Planck-scale
quantum uncertainty in position. We consider here a different geometrical interpretation of this cutoff, where the relevant noncommutative space is the space of directions around any spacetime point.
The limitations in angular resolution express the finiteness of the angular size of a Planck-scale minimal surface at a maximum distance 1/√Λ related the cosmological constant Λ. This yields a simple
geometrical interpretation for the relation between the quantum deformation parameter q=eiΛlPlanck2 and the cosmological constant, and resolves the difficulty of more conventional interpretations of
the physical geometry described by quantum groups or fuzzy spaces.
All Science Journal Classification (ASJC) codes
• Nuclear and High Energy Physics
• Physics and Astronomy (miscellaneous)
Dive into the research topics of 'Note on the geometrical interpretation of quantum groups and noncommutative spaces in gravity'. Together they form a unique fingerprint.
|
{"url":"https://pure.psu.edu/en/publications/note-on-the-geometrical-interpretation-of-quantum-groups-and-nonc","timestamp":"2024-11-08T09:43:32Z","content_type":"text/html","content_length":"49484","record_id":"<urn:uuid:9c12f091-b32b-4424-abd1-cf932971bfb2>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00721.warc.gz"}
|
Hydraulics II Report 2 S1 2023 Final - FIND A TUTOR ONLINE!
Hydraulics II Report 2 S1 2023 Final :
Hydraulics II Report 2 S1 2023 Final
Examiner Marks out of Weighting (%) Due date
Kamrun Nahar 250 25 8^th May 2023
Evaluate and apply the equations available for the description and design of open channel flowsolve the equations governing both steady and unsteady gradually varied channel flow and
apply them to the solution of practical flow problems including: backwater profiles; runoff on a plane surface and routing of a stream hydrograph; design erodible and vegetative lined
Objectives channels;Solve simple pipe networks using an appropriate methodapply rigid column theory to unsteady pipeline flow to analyse mass oscillations in pipelines and calculate maximum
allowable rates for valve opening and closureDesign a range of hydraulic structures including: fixed and movable crest weirs; gated control structures; pipe conveyance structures;
spillways and energy dissipation structure; critical flow measuring flumes; gulley control structures; weir and culvert type structures using the minimum specific energy concept.
Rationale This report is based on the material covered in this course. As such you will be directed to attempt tutorial questions from modules 8,10,15 before starting this assessment.
Important At UniSQ, Policy and Procedure guides our approach to assessment, and it is important that all students are familiar with the relevant Policy and Procedure documents and complete the
Information Academic Integrity Mandatory Training every 12 months. By submitting this assignment you hereby certify that: The submission is entirely my own work except where due acknowledgement is
made in the text and that no part has been copied from any other person’s work.
Spreadsheets or computer programs must be the work of the individual student.Assignments submitted without adequate proof of program validation will not be eligible for greater than a C
Special grading.A proportion of the marks is allocated to the communication aspects of the assignment. Marks will be deducted for untidy and poorly presented work, poor English expression, and
Instructions failure to cite sources of information.Plagiarism is taken seriously in this course, as such your assignment report will be checked using Turnitin and your spreadsheets (if you have
chosen to use Excel or equivalent) will be checked for plagiarism
Submission for this assignment is in two parts: Report introducing the problem, providing background in all relevant theory, descriptions of methods and equations used, discussion of
Instructions results and your reflection.Electronic copy of all computer code or spreadsheets used so the examiner can validate the models. The report should be compiled in such a manner that
for assessment can be completed without access to the electronic copies of the code/spreadsheet files. It is normal practice to include technical details (e.g. computer code) as an appendix.
Submission The assignment is to be submitted electronically via study desk. The link is available on the course study desk. Please note that hand written equations within the body of the report
are permitted. In many cases they are preferred as they are simpler to produce and easier to read than poorly set out computer produced equations.
Late If students submit assignments after the due date without (prior) approval of the examiner then a penalty of 5% of the total marks gained by the student for the assignment may apply for
Submissions each working day late up to ten working days at which time a mark of zero may be recorded. No assignments will be accepted after feedback files have been posted.
Assessment This report is comprised of three questions with the marks allocated as follows Question 1 – Pipe Network 100 marks Question 2 –
Task Vegetative Lined Channels 60 marks Question 3 –Control Structure 70 marks
Question 1 – Pipe Network (100 Marks)
A pipe network as shown in Figure 1 has been constructed in order to convey water from a reservoir A to a number of delivery points.
The details of each pipe are given in the table below. You may also neglect all minor losses that may occur in the system.
Pipe AB BC CD ED EA EF FB FC
Length (m) 300 400 200 320 325 120 150 160
Diameter (mm) 250 200 150 150 250 180 180 150
Roughness (mm) 0.05 0.07 0.09 0.15 0.05 0.2 0.2 0.25
Figure 1 – Pipe network for Q1
The head added by the pump (P) is a constant 20m. The pressure head elevation at point A is 78 m.
The elevation of each node of the pipe network is given below.
Node A B C D E F
Elevation (m) 30 30 35 30 30 30
a. Use the linearisation method to solve for the unknown discharges in each pipe of the network.
• Accounting for the elevation of each node, estimate the pressure head in metres at each pipe junction (A, B, C, D, E, F)
HINT: The pump has the opposite effect (opposite direction) to the friction loss in pipe BC
Marking Scheme: Question 1 – Pipe Network
Items Requirements Marks
Formulation of Equations Diagram with assumed flow directionsContinuity (node) equationsenergy loop equationsCorrectly accounted for pump 30
Method Model uses the linearization methodModel is correctCalculates friction properlyAccounts for pump 20
Solution for Q Correct solution for the flows 20
Calculation of Heads Correct solution for the heads 20
Discussion & Presentation Solution processResults (including impact of pump)Following report format 10
Total 100
Question 2 – Vegetative Lined Channels (60 Marks)
Design the broad shallow grassed waterway for the transmission of flood flows in a compound channel illustrated below:
Design Data:
• The soil type is erosion resistant
• The channel is currently planted with Rhodes grass
• The Bed slope is 3%
The channel consists of:
• a narrow concrete lined section (n = 0.014) designed to carry the normal low streamflow; and
□ a broad shallow grassed waterway for the transmission of flood flows. Your design must satisfy two main criteria:
The velocity of flow does not exceed the permissible velocity nominated for the particular
grass and soil in the waterway, that is, the channel must be stable.
The depth of flow does not exceed the height of the channel banks, that is, the channel must have sufficient capacity.
Marking Scheme: Question 2
Sections Marks
Preliminary Check 15
Stability Check 15
Capacity Check 15
Report with all relevant information 15
Total 60
Question 3 – Control Structure (70 marks)
An irrigation scheme is fed from a river via a diversion channel. The irrigation channel is 2 m wide and is constructed of rough concrete with an estimated value of 0.025 for the Manning
n. The bed slope is 0.0025.
The discharge into the channel is controlled by a vertical sluice gate (Cc = 0.61). The depth upstream of the gate is a constant 3.0 m, and the maximum discharge is 7 m^3 /s.
The designer of the gate has prepared a rating curve for the sluice gate (yg vs Q) for free- flowing conditions as shown below in the table.
The issue is that the designer has not considered those cases where the gate may be drowned by the hydraulic jump downstream of the gate. The depth on the downstream side of the gate is equal to the
normal flow depth.
Independently you have determined the normal flow depths over the operating range of discharges, which is also included in the table below.
Q (m^3/s) yg (Assuming free lowing (m) yn or y3 (Normal flow downstream of hydraulics jump) (m)
1 0.1080 0.5138
2 0.2181 0.8424
3 0.3309 1.1410
4 0.4463 1.4253
5 0.5638 1.7013
6 0.6839 1.9718
7 0.8067 2.2385
Figure 2 – Gate rating curve assuming free flowing conditions
Your Task:
• Determine at what discharge the gate changes from freely flowing to submerged conditions (to the nearest m^3 /s)?
• Calculate the new gate opening (yg) for those discharges for where the gate is submerged by the depth downstream of the gate.
• Plot the new rating curve (alter the part where gate is submerged)
Marking Scheme Question 3 – Control Gate
Items Requirements Marks
Equations and Diagram Labelled diagramEquations are introduced 10
Determining when gate is submerged Apply the hydraulic jump equationValues at appropriate intervals of QPlot of h or y vs QSample hand calc 30
New yg for submerged Method is correctsample hand calc or explanation 20
Rating Curve Plotted the new curve with given yg 10
Total 70
Hydraulics II Report 2 S1 2023 Final
Also visit:https://www.notesnepal.com/archives/767
The post Hydraulics II Report 2 S1 2023 Final appeared first on Aussienment.
|
{"url":"https://answers.essaypanel.com/hydraulics-ii-report-2-s1-2023-final/","timestamp":"2024-11-07T20:20:43Z","content_type":"text/html","content_length":"88390","record_id":"<urn:uuid:16ab712c-83eb-4d83-9351-c4d212f78999>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00320.warc.gz"}
|
How do you find z-score using Z-table?
How do you find z-score using Z-table?
To use the z-score table, start on the left side of the table go down to 1.0 and now at the top of the table, go to 0.00 (this corresponds to the value of 1.0 + . 00 = 1.00). The value in the table
is . 8413 which is the probability.
What is Z-table and z-score?
A z-table is a table that tells you what percentage of values fall below a certain z-score in a standard normal distribution. A z-score simply tells you how many standard deviations away an
individual data value falls from the mean. It is calculated as: z-score = (x – μ) / σ
Is z-score and Z-table the same?
A Z-score table is also known as a standard normal table used to find the exact area. Z-score table tells the total quantity of area contained to the left side of any score or value (x). In the
Z-table top row and the first column corresponds to the Z-values and all the numbers in the middle corresponds to the areas.
How do you find the z-score in statistics?
If you know the mean and standard deviation, you can find z-score using the formula z = (x – μ) / σ where x is your data point, μ is the mean, and σ is the standard deviation.
How do you find the Z test statistic?
To calculate the Z test statistic:
1. Compute the arithmetic mean of your sample.
2. From this mean subtract the mean postulated in null hypothesis.
3. Multiply by the square root of size sample.
4. Divide by the population standard deviation.
5. That’s it, you’ve just computed the Z test statistic!
How do you calculate z in statistics?
μ = population mean. ơ = population standard deviation. In the case of a sample, the formula for z-test statistics of value is calculated by deducting sample mean from the x-value. Then the result is
divided by the sample standard deviation. Mathematically, it is represented as, Z = (x – x_mean) / s. where.
What Z score represents the 80th percentile?
The question doesn’t state whether she wants at least the top 30% or at max the top 30%, but the former seems reasonable. Choosing 0.53 as the z-value, would mean we ‘only’ test 29.81% of the
students. I would have assumed it would make more sense to choose z=0.52 for that reason, so that we at least cover 30%.
How do you calculate probability of z score?
z-score = (x – μ) / σ. where: x: individual data value; μ: population mean; σ: population standard deviation; Step 2: Find the probability that corresponds to the z-score. Once we’ve calculated the
z-score, we can look up the probability that corresponds to it in the z table. The following examples show how to use this process in different scenarios.
How do you find a z score?
About one-third of recent movers ii (34%) say it’s harder to find a house to buy than a spouse Tools like Zillow’s travel-time function, walk score and transit score can help shoppers choose
|
{"url":"https://www.fdotstokes.com/2022/10/31/how-do-you-find-z-score-using-z-table/","timestamp":"2024-11-10T09:38:51Z","content_type":"text/html","content_length":"55947","record_id":"<urn:uuid:319978ab-9692-46f1-87e5-27b683853ec0>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00043.warc.gz"}
|
Distance Modulus
Today parallaxes can only be measured for stars out to distances of 500 light-years. Since our galaxy is approximately 100,000 light-years in diameter, this only includes a small fraction of the
total number of stars in the galaxy. How can the distances to stars even farther away be determined? One method that can be used is to compare their apparent brightness and luminosity.
Suppose a friend in the distance is carrying a powered 100W light bulb. The further away the friend is, the dimmer the light bulb will appear. The closer the friend is, the brighter the light bulb
will appear. So, by comparing how bright the bulb appears to how bright the light bulb is intrinsically, the distance can be determined. This is possible because of the inverse square law.
The diagram to the right visually depicts the inverse square law and light. The number of photons/rays going through each square is different depending on the distance of the square. Like parallax,
this is a purely geometric effect.
Astronomers express the inverse square law effect with the distance modulus which is expressed in terms of magnitudes. The difference between the apparent magnitude (m) and the absolute magnitude (M)
defines the distance to the object in parsecs. That is: m − M = −5 + 5 log[10] d.
Additionally, the table below can be used for a quick check. Note that the 10, 16, 25, 40, 63 pattern repeats (with an increasing number of zeroes) and may be used to calculate values not contained
in the table..
m − M 0 1 2 3 4 5 6 7 8 9 10 15 20 25
distance (pc) 10 16 25 40 63 100 160 250 400 630 10^3 10^4 10^5 10^6
One of the best known distance indicators are RR Lyrae Stars. These are pulsating variable stars – stars that change in brightness over time because they are periodically growing larger and smaller
much like breathing. These stars pulsate because the release of energy from the outer layers of the star varies over time (due to a layer of partially ionized helium). When this ionized layer is
close to the center of the star and hot – it becomes very opaque to the flow of radiation and the radiation pressure pushes it outward. When the ionized layer gets far from the star, it cools off and
its opacity decreases. Radiation can now stream through and the layer falls back toward the center of the star.
RR Lyrae stars are very good “standard candles”. These are objects where have a pretty good idea how intrinsically bright they are. It turns out that all RR Lyrae stars have absolute magnitudes very
near M[V] = 0.5. However, since they are small, faint stars, they cannot be seen at large distances.
The figure to the right shows the variation in the apparent magnitude of the RR Lyrae star VX Her. Note that the average apparent magnitude is about 10.5. Thus, the distance modulus for this stars is
(m - M) = 10.5 - 0.5 = 10, which corresponds to a distance of 1000 pc.
There are many other objects that astronomers use with the distance modulus to obtain distance. They all involve some method by which the astronomer uncovers the value of absolute magnitude M for an
object and there are many different approaches used. The apparent magnitude m is then observed to obtain the distance.
|
{"url":"http://astro.unl.edu/naap/distance/distance_modulus.html","timestamp":"2024-11-13T21:18:14Z","content_type":"application/xhtml+xml","content_length":"15578","record_id":"<urn:uuid:c7c2276d-0458-4bcc-afa6-b10f99912df0>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00427.warc.gz"}
|
Ch 5 Review Questions
Review Questions
Ch 5 – Newton’s Laws
Concept Questions
c1. Forces on connected blocks
Blocks \(A\) and \(B\) are connected by a string passing over a pulley. Block \(B\) if moving down, dragging block \(A\) across a frictionless table.
a. Sketch the forces on blocks \(A\) and \(B\).
b. Considering the axes shown, state the acceleration constraint for this system. (That is, what equation relates the accerlations of \(A\) and \(B\)?)
c2. Force on puck
A hockey puck is sliding frictionlessly in the \(+x\) direction. When it reaches the position marked by the dot, you must give it one sharp blow with a hammer. After hitting it you want the puck to
move in the \(-y\) direction at a speed similar to what it had initially.
Draw a force vector to show the direction in which you exert the force of the hammer.
c3. Motion diagram
The motion diagram shows a dot at the location of an object every second. Draw the net force vectors at each point.
c4. Hand and tension forces
Blocks \(A\) and \(B\), with \(m_B > m_A\), are connected by a string. A hand pushes \(A\) and accelerates the system across a frictionless surface.
Rank the magnitudes of the following forces:
• \(T_A\), the string tension force acting on \(A\)
• \(T_B\), the string tension force acting on \(B\)
• \(H_A\), the force of the hand on \(A\)
Since the masses are accelerating to the right, both free-body diagrams should have the net rightward force greater than the net leftward force.
• \(H_A > T_A\) to accelerate \(A\) to the right
• \(T_A = T_B\) because tension is constant throughout the string
\(H_A > T_A = T_B\)
c5. Masses on pulley
Blocks \(A\) and \(B\) are connected by a massless string over an ideal pulley. The blocks have just been released from rest. Rank the following forces: the weight of \(A\) (\(W_A\)), the weight of \
(B\) (\(W_B\)) and the tension in the string (\(T\))?
Block \(A\) must have net force downward. Block \(B\) must have net force upward.
\(W_A > T > W_B\)
c6. Accelerating elevators
A woman stands on a scale in an elevator. The velocity of each is shown, along with its change in speed. Rank the scale readings, \(S_1\), \(S_2\) and \(S_3\).
\(S_1 = S_2 \gt S_3\)
The only two forces on the person are gravity downward (which does not change) and scale force \(S\) upward. If \(\vec{a}\) points upward, then the net force points upward. This is true for case 1
and 2.
Long Answer Questions
1. Forces on books at rest
A history book is lying on top of a physics book on a desk, as shown. The history and physics books weigh 14 N and 18 N, respectively.
a. Sketch a free-body diagram (FBD) for each book. Identify each force with a double subscript notation (for instance, the contact force by the history book on the physics book can be described as \
b. Determine the magnitude of each force in the FBDs.
b. Force magnitudes are as follows
\(F_{eh} = 14\) N. This is the force by the Earth on the history book.
\(F_{ph} = 14\) N. This must balance \(F_{eh}\)
\(F_{ep} = 18\) N. This is the force by the Earth on the physics book.
\(F_{hp} = 14\) N. This is the 3rd Law pair to \(F_{ph}\).
\(F_{dp} = 32\) N. This is the sum of \(F_{hp} + F_{ep}\).
2. Find force from acceleration
Two forces are applied to a \(5.0\) kg object, and it accelerates at a rate of \(2.0\) m/s² in the positive \(y\)-direction. If one of the forces acts in the positive \(x\)-direction with magnitude \
(12.0\) N, find the magnitude of the other force.
\(F_2 = 15.6\) N
\(\displaystyle \vec{F}_{\rm{net}} = m \vec{a}\)
\((12.0) \hat{\imath} + \vec{F}_2 = (5.0) (2.0) \hat{\jmath}\)
\(\vec{F}_2 = -12 \hat{\imath} +10 \hat{\jmath}\)
\(F_2 = \sqrt{(-12)^2 + 10^2} = 15.6\) N
3. Spring tension
Two identical springs, each with the spring constant \(20\) N/m, support a \(15.0\) N weight.
a. What is the tension in spring A?
b. What is the amount of stretch of spring A from its equilibrium position?
a. \(8.66\) N
b. \(0.433\) m
• Draw a free body diagram for the weight.
• This will have two identical spring forces and one weight force.
• The three vertical forces add to zero, which yields an equation for the spring tension.
• With this tension, use \(F=kx\) to find the spring extension \(x\).
4. Force from position function
A force acts on a car of mass \(m\) so that the position of the car changes in time according to \(\vec{r}(t)= (At) \hat{\imath} + (B/t) \hat{\jmath}\).
Find the force vector acting on the car as a function of time.
\[ \vec{F} = m \vec{a} = m \frac{d}{dt} \frac{d\vec{r}}{dt} = m \frac{d}{dt} \left ( A \hat{\imath} - \frac{B}{t^2} \hat{\jmath} \right ) = \frac{2mB}{t^3} \hat{\jmath} \]
5. Connected masses on ramp
For the connected masses shown, \(M = 6.0\) kg and \(\theta = 30^{\circ}\). Find the acceleration of the system and the tension in the connecting string. The pulley and surfaces are frictionless.
\(a = 7.35\) m/s²
\(T=14.7\) N
Start with a free-body diagram for each mass. It’s ok if each FBD has its own coordinate system; what’s important is that the scalar variables (\(T\), \(a\), \(M\)) used in multiple figures all refer
to the same quantity.
Next we write \(\vec{F}_{\rm{net}} = m \vec{a}\) in each case.
For the mass on the incline:
\(T \hat{\imath} + N \hat{\jmath} + Mg ( \hat{\imath} \sin \theta - \hat{\jmath} \cos \theta ) = Ma \hat{\imath}\)
which is really two equations:
\(T + Mg \sin \theta = Ma\)
\(N - Mg \cos \theta = 0\)
For the hangning mass:
\(T \hat{\jmath} -Mg \hat{\jmath} = M(-a \hat{\jmath})\)
\(T -Mg = -Ma\)
The equations that contain \(T\) and \(a\) must be solved simultaneously. One way to do this is to take their ratio:
\(\displaystyle \frac{ T + Mg \sin \theta}{T-Mg} = -1\)
\(\displaystyle T = \tfrac{1}{2} Mg ( 1- \sin \theta) =14.7\) N
Plug this result into the hanging mass equation to find
\(a = 7.35\) m/s².
Last modified: Tue October 22 2024, 10:11 PM.
|
{"url":"http://madisoncollegephysics.net/phys1fall24/model_questions/05.html","timestamp":"2024-11-14T23:32:19Z","content_type":"application/xhtml+xml","content_length":"21947","record_id":"<urn:uuid:19f8d8e7-50c7-4d64-96ad-2a25f9eb6ec7>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00601.warc.gz"}
|
Content description
Choose appropriate units of measurement for length, area, volume, capacity and mass (ACMMG108)
• recognising that some units of measurement are better suited for some tasks than others, for example kilometres rather than metres to measure the distance between two towns
Source: Australian Curriculum, Assessment and Reporting Authority (ACARA)
What is length?
When we are comparing and describing length, height, width, breadth, distance, how far or how long, we are using length measurement. Before year 5 students may have used paces or handspans to measure
length informally. They may also be familiar with metric units of measurement. The most commonly used units of length measurement are millimetres, centimetres and metres:
• 1 metre = 1000 millimetres.
• 1 metre = 100 centimetres.
• 1 centimetre = 10 millimetres.
• 1 kilometre = 1000 metres.
We sometimes abbreviate the unit of measurement. Metre is shortened to m, centimetre is cm and millimetre is mm.
Appropriate choice of unit
If you wanted to measure the length of a piece of ribbon you might use centimetres or metres. You would not use centimetres or metres to measure the distance travelled on an around-the-world holiday.
We need to carefully choose units appropriate to the task. This is best learned by doing lots of actual measurement with a range of measuring tools.
|
{"url":"https://amsi.org.au/ESA_middle_years/Year5/Year5_1cT/Year5_1cT_R1_pg1.html","timestamp":"2024-11-07T02:41:23Z","content_type":"text/html","content_length":"4527","record_id":"<urn:uuid:7e6fd3db-9eb7-4982-ba8b-3e7711a28433>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00074.warc.gz"}
|