https://sum1ton.wordpress.com/2007/07/24/infinity-without-the-woo-part-1/
## Infinity: Without the Woo, Part 1
Last time I talked briefly about pretend mathematicians. Y’know, people who’ve actually never studied the stuff they’re claiming to refute. Because “common sense” tells them it’s wrong. And “common sense” is, as we all know, the guiding principle of mathematics.
So this time I’ll actually get into cardinality and infinite sets a bit.
First, a preface. Infinity, as it is defined mathematically, is not an object of any kind. It’s not a number. It’s not a set. So what is it?
As it relates to calculus, infinity is a limiting principle. To avoid a rigorous discussion, it’s best to look at it through an example. Take an everyday function like $f(x) = x$. We write:
$\mathop {\lim }\limits_{x \to \infty } f(x) = \mathop {\lim }\limits_{x \to \infty } x = \infty$
This notation reads: The limit of $f(x)$ as x goes to infinity equals the limit of $x$ as x goes to infinity equals infinity. In other words, we’re asking the question: If x were to never stop getting larger, what would happen to the function $f(x)$? In this case, the function itself would never stop getting larger either. So we write that its limit is infinity.
Now let’s stop here for a second to clear something up. Infinity still is not a number. This is just a notation that mathematicians use for convenience. It’s understood that when the limit of a function equals infinity, the function does not actually have a limit, not that its limit is a number called “infinity”. In more technical terms, the function is said to diverge.
The second way in which infinity is used in mathematics is in regards to sets, specifically the number of elements of a set. If you’re not familiar with the term, a set is just a collection of things.
The notation goes as follows. Say you have a set of the numbers 3,4,5, and 6. We would write the set as $\{\{3\},\{4\},\{5\},\{6\}\}$, or for convenience, $\{3,4,5,6\}$. An object $\gamma$ is an element of a set $\mathbf{G}$ if $\gamma$ is itself a set and if it is “inside” of $\mathbf{G}$. $\gamma$ may also be referred to as a subset of $\mathbf{G}$. Subsets of $\gamma$ are considered elements of $\gamma$, not $\mathbf{G}$. So at the very least, $\mathbf{G} = \{\gamma\}$, and it has one element. $\mathbf{G}$ could have more elements; at the very least, it must have $\gamma$ in there.
The example in the previous paragraph is of a finite set. A set is finite if it has finitely many elements. Basically, it just means that if you started counting the number of elements, you would eventually finish. Now we’ll introduce the concept of cardinality. A cardinality describes the number of elements of a set. The set $\{3,4,5,6\}$ has a cardinality of 4, because it has 4 elements. Quickly we can see that if a set has cardinality that is a natural number, then it is finite. (Refer to the math symbols page if you don’t know what natural numbers are).
So what does an infinite set look like? Well, a set is infinite if it’s not finite. Pretty simple. But it gets a bit more complicated. This is where intuition and common sense may fail you if you aren’t careful.
Take the set $\mathbb{N}$ of natural numbers. Clearly this set is infinite, because for any number $n$ that you give me, I can just add another natural number to it and get a bigger number. It has another property which is quite obvious, but turns out to be significant. No matter what element $n$ you choose from $\mathbb{N}$, I can tell you at what position your element lies in the set. So if you choose 2,044, I know the number is the 2,044th element of the set. In general, the natural number $n$ is the $n^{th}$ element of the set $\mathbb{N}$.
For this reason, $\mathbb{N}$ is called countably infinite. It may have an infinite number of members, but no matter what member you may pick, I can tell you at what position it lies in the set. So let’s call the cardinality of this set $\aleph_0$. This is simply a notation, not a number.
So let’s talk a little bit about a different set. Let’s call it $\mathbb{E}$. $\mathbb{E}$ is the set of all even natural numbers. You can clearly see that this set is also countably infinite. If you give me any element $e$ in the set, I know its position is $\frac{e}{2}$. So, for example, 2 is the first element, 4 is the second, 6 is the third, etcetera.
But how does $\mathbb{E}$ compare to $\mathbb{N}$? It may seem at first glance that $\mathbb{N}$ is “bigger” than $\mathbb{E}$, because $\mathbb{E}$ is a subset of $\mathbb{N}$. First, remember that these sets are both infinite, which should tip you off that common sense might fail you this time. Second, remember that we’re not necessarily concerned with the values of each element, but with the number of elements in total. How can we prove that $\mathbb{E}$ and $\mathbb{N}$ actually have the same cardinality?
Let’s take each element of $\mathbb{N}$ and pair it with a unique element of $\mathbb{E}$. If we can do this for every single element of $\mathbb{N}$ and $\mathbb{E}$, they must have the same number of elements. So define a function $f(n)=2n$, where $n$ is any natural number. This does exactly what we wanted, because whether a natural number is even or odd, multiplying it by 2 makes it even. And we can see that each element of one set has a unique partner from the other. If an element of one set had to take two partners from the other to make things fit, then the other set would be the bigger one.
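The pairing argument above can be sketched in a few lines (an illustrative Python snippet, not part of the original post):

```python
# Pair each natural number n with the even number f(n) = 2n.
naturals = list(range(1, 11))
pairs = [(n, 2 * n) for n in naturals]

# Every natural gets a unique even partner...
evens = [e for _, e in pairs]
assert evens == [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]

# ...and the pairing can be undone (g(e) = e/2), so no element on
# either side is ever shared: the map is a bijection.
assert all(e // 2 == n for n, e in pairs)
```

The same check works for any initial segment of $\mathbb{N}$, which is what makes the pairing convincing: nothing in it depends on where you stop counting.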
It’s easy to see (or prove), then, that every countably infinite set has the exact same cardinality. The set of odd natural numbers. The set of integers. The set of primes. All of them.
It gets trickier though. Let’s consider the set of real numbers $\mathbb{R}$. $\mathbb{R}$ is the set of all rational and irrational numbers. So it includes everything from the integers to fractions to numbers like $\pi$ that have infinitely many digits.
How does $\mathbb{R}$ compare to $\mathbb{N}$? Well $\mathbb{N}$ is clearly a subset of $\mathbb{R}$. But, as we know, that doesn’t mean they don’t have the same cardinality.
Remember that our special property from countably infinite sets was that we knew at what position every element lay in the set. Let’s simplify things a bit and only take the set $\mathbb{R}^{+}$ of nonnegative real numbers. Well, we know what the first element of the set is. It’s zero. What’s the next greatest element? Is it .01? No, because .001 is greater than zero but is less than .01. But .0001 is greater than zero and less than .001. And so on and so on. We can quickly see that no matter what “second element” you pick, I can find a smaller one that’s still greater than zero. Aside from the first element, then, asking about the “position” of an element in $\mathbb{R}^{+}$ is meaningless. $\mathbb{R}^{+}$ is what’s called uncountably infinite.
So let’s look at a smaller set, the set of real numbers between and including 0 and 1. We’ll denote that as [0,1]. Clearly [0,1] has a first element and a last element. But [0,1] is still uncountable. It’s still meaningless to talk about the “second element” or the “third element” of [0,1]. Okay, so it’s uncountable, but is it infinite? Well yeah, it is. Let’s take a subset of [0,1], and call it (0.5,0.7). So this is the set of all numbers between 0.5 and 0.7, not including 0.5 and 0.7. This set doesn’t even have a first or a last element. 0.5 isn’t in the set. So what’s the first element? 0.50000001? Well 0.500000001 is less than that, but still greater than 0.5. We arrive at the same conundrum. So we see that we can create infinitely many numbers in this interval (0.5,0.7). No matter what two different numbers in the interval you pick, I can find a new number between those numbers.
Clearly if (0.5,0.7) has infinitely many elements, then so does [0,1]. So [0,1] is uncountably infinite as well. Thus, $\mathbb{R}$ is uncountably infinite. You can extend this reasoning and show that the intervals [0,1] and [0,2] have, seemingly paradoxically, the same cardinality. You can then show that $\mathbb{R}$ and any interval subset of $\mathbb{R}$ have the same cardinality.
This seems unbelievable, but it’s true. I’ll do the first example. We need to pair every element of [0,1] with an element of [0,2]. Well, we can just use the same function we used before, $f(n)=2n$. For a quick check that this works, note that the midpoint of [0,1] corresponds to the midpoint of [0,2]. $f(.5) = 2*.5 = 1$.
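A quick numerical check of that pairing (my own illustrative Python, not from the post):

```python
# f maps [0,1] onto [0,2]; g is its inverse, mapping back.
f = lambda x: 2 * x
g = lambda y: y / 2

# The endpoints and the midpoint land where expected.
assert f(0) == 0 and f(1) == 2 and f(0.5) == 1

# g undoes f at every sample point, so each element of [0,1] has
# exactly one partner in [0,2] and vice versa.
assert all(abs(g(f(x)) - x) < 1e-12 for x in [0, 0.25, 0.5, 0.75, 1])
```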
Now, you should keep in mind that we’re still talking about the number of elements of the set, not the values. It’s easy to forget that. Yes, the interval [0,1] has a finite length. It’s just 1-0=1. We’re concerned about the number of elements in the interval, not the difference between the first and last elements.
So how can we compare $\mathbb{R}$ and $\mathbb{N}$? Well, every interval subset of $\mathbb{R}$ has infinitely many elements. Every interval subset of $\mathbb{N}$, on the other hand, has finitely many elements. Clearly, then, $\mathbb{R}$ is much bigger than $\mathbb{N}$. In other words, the cardinality of $\mathbb{N}$ is less than the cardinality of $\mathbb{R}$. We denote the cardinality of $\mathbb{R}$ as $\aleph_1$ and say that $\aleph_0 < \aleph_1$. (Note that this is a cardinal, not an arithmetic, ordering; $\aleph$ isn’t a number, remember).
So I hope that was understandable. I tried not to use so much jargon. If you have any questions, comments, responses, you know what to do. Next time we’ll talk about the continuum hypothesis and sets with even bigger cardinalities!
### 5 responses to this post.
1. Posted by mindloop on July 31, 2007 at 8:04 pm
Some technical annoyances I have…
(A)
You seem to be confusing an element and a subset. An element is a singular part of the set. For example, 3 is an element of {3,4,5} (a set of numbers). Four and five are the other two elements. An element is not necessarily a set. 3,4, and 5 are not sets but numbers. None of these are subsets. But {3}, {4}, and {5} are subsets. So are {3,4}, {4,5}, {3,5}, and {3,4,5} (the last is not a ‘proper’ subset).
Now suppose we take {{3},{4},{5}} (a set of sets of single numbers). {3} is an element, and so are {4} and {5}. None of these are subsets; they are elements. But {{3}}, {{4}}, and {{5}} are subsets. The others are {{3},{4}}, {{3},{5}}, {{4},{5}}, and {{3},{4},{5}} (the last not being a proper subset).
(B)
You explained countable infinity well, but not uncountable infinity. You noted that between any two real numbers there is another, but this does not mean that the real numbers are uncountably infinite. What you described is what is called a “dense” set. The rational numbers are also a dense set (in fact your examples of 0.1, 0.01, 0.001, … are all rational numbers). Yet they are countably infinite. You are confusing countability with countable orderability.
To truly prove R’s uncountability, you need something like Cantor’s diagonal argument.
(C)
[ack… I have to leave now! will finish small bit later!]
2. (A)
You’re right. I was a bit careless with the notation. I’ll fix that. As for the difference between {3,4,5} and {{3},{4},{5}}, it seems like a difference between our educations. I was taught that the former notation was convenient for expressing the latter, as numbers can be thought of as discrete sets with single elements. You can then set up arithmetic via unions, complements, and intersections. It’s really a pedantic difference for the purposes of this blog.
(B)
My main argument for the uncountability of $\mathbb{R}$ was that you cannot associate position with its elements, not that it is dense. This is effectively what Cantor’s diagonalization argument shows. With $\mathbb{Q}$, as you mentioned, you can. And I did use $\pi$ as an example; they weren’t all rational numbers!
(C)
Looking forward to it. 🙂
3. John
So I am successful at least in that I never studied the stuff but still caused this high-level discussion to take place :-)
Anyways, some time ago another person raised a similar objection to me. My reply was: “yes! I read less and think more.”
Right now I am in the office and my home computer is also out of order, so my detailed reply will come at a convenient time. What I can say for the moment is that none of your points is against any of my points. Like you, I also don’t treat “infinity” as any “number” or so. My article on this topic discusses two types of infinity, which are (i) Never Ending and (ii) Never Happening.
For details, see main article on following link:
Secondly,I also don’t disagree with your following proof:
“It’s easy to see, then, (or prove) that every countably infinite set has the exact same cardinality. The set of odd natural numbers.”
My article was NOT talking about the issue of cardinality comparison of super and sub-sets.
My article is talking about the issue of cardinality of smaller and larger PHYSICAL lines.
What I say is that REAL numbers exist only in abstract mathematical relations. My further point is that REAL numbers don’t exist in the dimensions and sizes of PHYSICAL OBJECTS.
And my main point is that a finite physical line must have a finite number of individual points in it.
Thirdly, you have not addressed my criticism of the concepts of geometry, like the concept of a “point” and the concept of a “line”. My outstanding question is: how can a linear combination of “spaceless” points result in a line of non-zero length but zero width …???
With reference to your point on the “mindloop” blog … that my article contains “no math” …
Yes … I am not a mathematician. I want to employ common-sense methodology and terminology. I don’t believe in the absolute accuracy of mathematics, and I believe that sometimes common sense can be right and mathematics wrong!
Regards!
4. Posted by mindloop on August 1, 2007 at 12:56 pm
Okay, I’m back. (And ready to finish reading your post!) Also, I see you avoided the epsilon-delta notation to keep it simple 😉
(A)
I am following what I learned in my education in this case, but I learned most of my math online before the classroom, and in what I learned of set theory online there is a difference between the number 3 and the set {3}. So it actually makes sense to refer to {2,{2}} in such a formulation, for example. I suspect that the way you learned it isn’t the generally accepted standard. (But don’t think of me as a standards-nazi; I realize we’re basically splitting hairs by arguing this :D)
(B)
Perhaps you understand it correctly, but I do not think you expressed it correctly in the post.
“How does R compare to N? Well N is clearly a subset of R. But, as we know, that doesn’t mean they don’t have the same cardinality.
Remember that our special property from countably infinite sets was that we knew at what position every element lay in the set. Let’s simplify things a bit and only take the set R+ of nonnegative real numbers. Well, we know what the first element of the set is. It’s zero. What’s the next greatest element? Is it .01? No, because .001 is greater than zero but is less than .01. But .0001 is greater than zero and less than .001. And so on and so on. We can quickly see that no matter what “second element” you pick, I can find a smaller one that’s still greater than zero. Aside from the first element, then, asking about the “position” of an element in R+ is meaningless. R+ is what’s called uncountably infinite.”
In every sentence but the last, you go on explaining what is actually called a dense set. Meaning, between any two elements you can find another element. As we both know, Q and R both share this property. 0.1 is rational, 0.01 is rational, 0.001 is rational, and so your entire paragraph might as well be explaining Q instead of R. By your own reasoning, it is still meaningless to talk about the “position” of rational numbers (they cannot be countably ordered by size, in other words).
“Any set which can be put in a one-to-one correspondence with the natural numbers (or integers) so that a prescription can be given for identifying its members one at a time is called a countably infinite (or denumerably infinite) set.” –Mathworld
Q can be put into 1-1 correspondence with N, even though it is dense. R is dense but cannot be. Why this is the case you did not explain or even hint at.
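That 1-1 correspondence between Q and N can be made explicit by walking the p/q grid along its diagonals; a sketch in Python (my own illustration, positive rationals only):

```python
from fractions import Fraction

def rationals(limit):
    """Enumerate distinct positive rationals by diagonals p+q = 2, 3, 4, ...
    Every p/q sits on some finite diagonal, so every rational is reached."""
    seen, out = set(), []
    s = 2
    while len(out) < limit:
        for p in range(1, s):
            q = s - p
            r = Fraction(p, q)
            if r not in seen:       # skip duplicates like 2/2 == 1/1
                seen.add(r)
                out.append(r)
                if len(out) == limit:
                    break
        s += 1
    return out

first = rationals(6)
# 1, 1/2, 2, 1/3, 3, 1/4  (2/2 is skipped as a duplicate of 1)
assert [str(r) for r in first] == ["1", "1/2", "2", "1/3", "3", "1/4"]
```

The position of each rational in this listing is its partner in N, which is exactly the correspondence density alone cannot rule out.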
“So how can we compare R and N? Well, every interval subset of R has infinitely many elements. Every interval subset of N, on the other hand, has finitely many elements. Clearly, then, R is much bigger than N. In other words, the cardinality of N is less than the cardinality of R.”
Every interval in Q has infinitely many elements! Yet Q and N still have the same cardinality. You are not correctly identifying the reason that R and N are of different sizes. In my opinion, the best way to show this is to explain that any sequence (read: countable subset) of real numbers is missing at least one, and therefore the real numbers in their entirety must not be countable. (Basically, use the diagonal argument because it is easy to understand.)
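A minimal sketch of that diagonal construction (illustrative Python; decimal-digit strings stand in for real numbers in [0,1]):

```python
def missing_digit_stream(rows):
    """Given any list of digit strings, build a string that differs from
    the k-th row in the k-th digit, so it equals none of the rows."""
    return "".join("5" if row[k] != "5" else "6" for k, row in enumerate(rows))

rows = ["1415926535", "7182818284", "1000000000", "3333333333"]
d = missing_digit_stream(rows)
assert d == "5555"
assert all(d[k] != rows[k][k] for k in range(len(rows)))
```

Since this works for *any* purported complete list, no sequence can exhaust the reals, which is the uncountability claim itself.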
“We denote the cardinality of R as \aleph_1 and say that \aleph_0 < \aleph_1. (Note that this is a cardinal, not an arithmetic, ordering; \aleph isn’t a number, remember).
So I hope that was understandable. I tried not to use so much jargon. If you have any questions, comments, responses, you know what to do. Next time we’ll talk about the continuum hypothesis and sets with even bigger cardinalities!”
Noooooooooooooooooooooooooooooooooooooooo!!
Are you sure that the size of R is aleph1? If that were correct, then there would be no continuum hypothesis in the first place! The cardinality of R is termed 2^aleph0, or beth1.
The continuum hypothesis states that the size of R is in fact aleph1, which would mean there are no sets bigger than N but smaller than R. But this assertion is actually independent of the Zermelo-Fraenkel axioms on which set theory is based!
Okay, I’m done ranting. 🙂
https://mran.revolutionanalytics.com/snapshot/2021-11-01/web/packages/biogrowth/vignettes/dynamic_for_static.html
# Using dynamic models for static environmental conditions
library(biogrowth)
library(tidyverse)
library(cowplot)
Although the function predict_dynamic_growth() is intended to describe growth under dynamic conditions, it can also be used for simulations under static conditions by defining a constant environmental profile. This can be useful in situations where the environmental conditions are static but the population response is defined using secondary models.
For starters, we will define an isothermal temperature profile at 35ºC.
my_conditions <- tibble(time = c(0, 50),
temperature = c(35, 35)
)
Next, we define primary and secondary models as usual.
q0 <- 1e-4
mu_opt <- .5
my_primary <- list(mu_opt = mu_opt,
Nmax = 1e8,N0 = 1e2,
Q0 = q0)
sec_temperature <- list(model = "CPM",
xmin = 5, xopt = 35, xmax = 40, n = 2)
my_secondary <- list(temperature = sec_temperature)
Finally, we call predict_dynamic_growth() after defining the time points of the simulation.
my_times <- seq(0, 50, length = 1000)
## Do the simulation
dynamic_prediction <- predict_dynamic_growth(my_times,
my_conditions, my_primary,
my_secondary)
Because the temperature during the simulation equals the cardinal parameter $$X_{opt}$$, the predicted population size is identical to the one calculated using predict_isothermal_growth for the Baranyi model when $$\mu = \mu_{opt}$$ and $$\lambda = \frac{ \ln \left(1 +1/Q_0 \right) }{\mu_{opt}}$$.
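The stated relation between $$\lambda$$, $$Q_0$$ and $$\mu_{opt}$$ can be checked numerically; a quick sketch in Python (independent of the R package, using the parameter values defined above):

```python
import math

q0 = 1e-4      # Q0 from the primary model above
mu_opt = 0.5   # optimal specific growth rate

# lambda = ln(1 + 1/Q0) / mu_opt, the lag phase implied by the Baranyi model
lam = math.log(1 + 1 / q0) / mu_opt
assert abs(lam - math.log(10001) / 0.5) < 1e-12  # roughly 18.4 time units
```

This is the same conversion the package exposes as Q0_to_lambda(), used in the next code chunk.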
lambda <- Q0_to_lambda(q0, mu_opt)
my_model <- "Baranyi"
my_pars <- list(logN0 = 2, logNmax = 8, mu = mu_opt, lambda = lambda)
static_prediction <- predict_isothermal_growth(my_model, my_times, my_pars)
plot(static_prediction) +
    geom_line(aes(x = time, y = logN), linetype = 2,
              data = dynamic_prediction$simulation, colour = "green")
The advantages of using predict_dynamic_growth() for modelling growth under static conditions are evident when simulations are made for several temperatures. Using predict_isothermal_growth() would require a calculation of the value of $$\mu$$ for each temperature separately. Because the relationship between $$\mu$$ and temperature is included in the secondary model, a separate calculation is not required when using predict_dynamic_growth().
max_time <- 100
c(15, 20, 25, 30, 35) %>% # Temperatures for the calculation
    set_names(., .) %>%
    map(., # Definition of constant temperature profile
        ~ tibble(time = c(0, max_time),
                 temperature = c(., .))
        ) %>%
    map(., # Growth simulation for each temperature
        ~ predict_dynamic_growth(seq(0, max_time, length = 1000),
                                 .,
                                 my_primary,
                                 my_secondary)
        ) %>%
    imap_dfr(., # Extract the simulation
             ~ mutate(.x$simulation, temperature = .y)
) %>%
ggplot() +
geom_line(aes(x = time, y = logN, colour = temperature)) +
theme_cowplot()
Note, however, that predict_dynamic_growth() does not include any secondary model for the lag phase. The reason for this is that there are no broadly accepted secondary models for the lag phase in predictive microbiology. Therefore, the value of $$\lambda$$ varies among the simulations according to $$\lambda(T) = \frac{ \ln \left(1 +1/Q_0 \right) }{\mu(T)}$$.
Another application of predict_dynamic_growth() is including the impact of another environmental factor when temperature is kept constant. This can be done by defining a second secondary model.
my_primary <- list(mu_opt = mu_opt,
Nmax = 1e8,N0 = 1e2,
Q0 = q0)
sec_temperature <- list(model = "CPM",
xmin = 5, xopt = 35, xmax = 40, n = 2)
sec_pH <- list(model = "CPM",
xmin = 4, xopt = 7, xmax = 8, n = 2)
my_secondary_2 <- list(temperature = sec_temperature,
pH = sec_pH)
Then, we can call predict_dynamic_growth().
max_time <- 100
c(5, 5.5, 6, 6.5, 7, 7.5) %>% # pH values for the calculation
set_names(., .) %>%
map(., # Definition of constant temperature profile
~ tibble(time = c(0, max_time),
temperature = c(35, 35),
pH = c(., .))
) %>%
map(., # Growth simulation for each temperature
~ predict_dynamic_growth(seq(0, max_time, length = 1000),
.,
my_primary,
my_secondary_2)
) %>%
imap_dfr(., # Extract the simulation
~ mutate(.x$simulation, pH = .y)
) %>%
ggplot() +
geom_line(aes(x = time, y = logN, colour = pH)) +
theme_cowplot()
As above, note that the lag phase varies between the simulations according to $$\lambda(T, pH) = \frac{ \ln \left(1 +1/Q_0 \right) }{\mu(T, pH)}$$.
https://gateoverflow.in/229665/instruction-addressing?show=230007
A computer has 170 different operations. The word size is 4 bytes. One-word instructions require two address fields: one address for a register and one address for memory. If there are 37 registers, then the memory size is ______________ (in KB).
Ans. 256KB
0
Instruction: Opcode, Register, Memory
Opcode bits = 8 bits
Register = 6 bits
So Memory = 32 - 8 - 6 = 18 bits
So size of memory = 2^18 * 4 bytes (because every word is 4 B)
Hence size of memory = 1024 KB
Please tell me where I am wrong?
+2
So size of Memory :- 2^18 * 4 Byte (Because every word is of 4B)
This is not correct.
Why are you multiplying by 4 when you have already converted 4 bytes to 32 bits above?
Memory size will be $2^{18} = 256 KB$
0
Here a word is divided into 3 fields, right? A word is of 32 bits; the remaining bits represent the memory word, i.e. 18 bits, so in total there are 2^18 words. But shouldn't I also multiply by 4, because a word is comprised of 4 bytes?
+1
No No..
The remaining bits represent the memory word i.e. 18 bits, so total $2^{18}$ words are there,
Where is it written that memory is word addressable, so that you are giving these addresses to words?
Always consider byte addressability unless otherwise specified.
So each byte will take a different address.
A memory address is of 18 bits. So with these 18 bits $2^{18}$ different addresses are possible and each of which is given to a different Byte of memory so memory size is $256 KB$
0
I know that, but when I say a memory address is of k bits, it means that a total of 2^k words are present, and by default every word is 1 byte, so a total of 2^k bytes are there, with an address given to every byte.
Say I have a memory address of k bits and every word is 4 bytes. Then can't I say that the total words are 2^k and each word is 4 bytes, so when giving an address to each byte the total bytes would be 2^(k+2)?
Please correct me @ Soumya if I am wrong somewhere :)
+2
@Na462(I don't know your real name) :)
when I say memory is of k bits it means that total 2^k words are present and by default every word is of 1 bytes so total 2^k bytes are there,with address given to every byte.
There is nothing like that.
By default, we consider memory to be byte addressable. So when you say a memory address is of k bits, it means $2^k$ bytes are there.
And if the word size is 4 bytes, then $\frac{2^k}{4}=2^{k-2} \ words$ are there.
But only if it is given in the question that memory is word addressable can you say that a memory address of k bits means $2^k$ words are there.
And if the word size is 4 bytes, then $2^k*4=2^{k+2} \ bytes$ are there.
+1
Oh, thank you so much Soumya, that was a really important thing you corrected in me. Thanks a lot :)
0
How are the opcode and register fields 8 and 6 bits?
0
The opcode represents the type of operation. It is given that there are 170 operations, so to represent 170 operations we need 8 bits ($2^8 = 256 \geq 170$).
There are 37 registers; to represent those registers we need 6 bits ($2^6 = 64 \geq 37$).
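The whole calculation fits in a few lines (an illustrative Python sketch of the reasoning above):

```python
import math

word_bits = 4 * 8                          # 4-byte instruction word
opcode_bits = math.ceil(math.log2(170))    # 8 bits cover 170 operations
register_bits = math.ceil(math.log2(37))   # 6 bits cover 37 registers
memory_bits = word_bits - opcode_bits - register_bits
assert (opcode_bits, register_bits, memory_bits) == (8, 6, 18)

# Byte-addressable memory: 2^18 addresses, one byte each = 256 KB.
memory_kb = 2 ** memory_bits // 1024
assert memory_kb == 256
```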
http://eight.pairlist.net/pipermail/geeklog-users/2003-November/000643.html
[geeklog-users] Problems with special characters
Rob Griffiths robg at macosxhints.com
Thu Nov 13 16:35:13 EST 2003
OK, this is truly odd. I have a test 1.38sr2 running on my PowerBook,
and just tried the exact same test string:
[code]
Testing\backslashes<>and other things
<\end>
[/code]
It works fine in Preview, and since it's a test site, I published it,
and it looks fine as a story. Editing the published story also works.
Regarding your second email: it will NOT work with <pre> </pre>, you
must use [code] [/code]. But I don't know why it's not working for you
(or rather, maybe I don't know why it IS working for me?).
-rob.
-----Original Message-----
[mailto:geeklog-users-admin at lists.geeklog.net] On Behalf Of kko
Sent: Thursday, November 13, 2003 2:21 PM
To: geeklog-users at lists.geeklog.net
Subject: RE: [geeklog-users] Problems with special characters
I've tried exactly the same and what I Preview is..
Testingbackslashes<>and other things<end>
BTW my GL is 1.3.8-1sr2.
https://www.gamedev.net/forums/topic/516262-converting-depth-buffer-values-in-glsl-shader/
# Converting depth buffer values in GLSL shader...
## Recommended Posts
Hi everyone, I need to convert the values of a depth map, in the range 0 - 1, to the actual distance from the camera. I'm using a method released in a computer graphics paper:

P33 = (f + n)/(n − f)
P34 = (−2 f × n)/(f − n)
z = −P34/(depthValue + P33)

where n and f are the near and far clipping planes, respectively.

But none of this is giving me the right results... I have n=1 and f=50, with depthValue (depth buffer) in a range between zero and one. Mathematically I'm not getting this right; the results are all negative and not the real distance from the camera. Has anybody here done this kind of depth conversion? Or, at least, can you give me a hint or somewhere (a link) to research a bit better? I know I'm missing something important, but I don't know what :S Thanks for your time :) Hayden
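For what it's worth, those formulas do recover the eye-space distance once the [0,1] buffer value is remapped to NDC [-1,1] and the sign is flipped (OpenGL's camera looks down −z, so eye-space z is negative). A quick numerical check with the poster's n=1, f=50 (my own Python sketch, not from the thread):

```python
def eye_distance(depth01, n, f):
    """Recover eye-space distance from a [0,1] depth-buffer value,
    assuming a standard OpenGL perspective projection."""
    d_ndc = 2.0 * depth01 - 1.0         # window [0,1] -> NDC [-1,1]
    p33 = (f + n) / (n - f)
    p34 = (-2.0 * f * n) / (f - n)
    z_eye = -p34 / (d_ndc + p33)        # negative, since +z is behind the camera
    return -z_eye                       # positive distance in front of the camera

assert abs(eye_distance(0.0, 1.0, 50.0) - 1.0) < 1e-9    # near plane
assert abs(eye_distance(1.0, 1.0, 50.0) - 50.0) < 1e-9   # far plane
```

Note the mapping is strongly non-linear: a buffer value of 0.5 lands near 1.96, not halfway between the planes, which is why a plain linear remap of the depth value gives wrong distances.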
You're right; this doesn't seem to work mathematically. I'm getting things in the range -1.5 to -3.0.
Can't one just go:
depthrange = f - n
distance = n + (depthvalue * depthrange)
I might be wrong, but that should work.
Cheers,
-G
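For reference: the matrix entries in the paper's formula suggest OpenGL conventions, where the depth-buffer value must first be remapped from [0, 1] to NDC [-1, 1], and where eye-space z comes out negative in front of the camera, which would explain the negative results. A sketch of the equivalent arithmetic in Python (`linearize_depth` is a name chosen here, assuming a standard OpenGL perspective projection):

```python
def linearize_depth(d, n, f):
    """Map a [0, 1] depth-buffer value to a positive eye-space distance,
    assuming a standard OpenGL perspective projection with near/far n, f."""
    z_ndc = 2.0 * d - 1.0                         # window depth -> NDC [-1, 1]
    return (2.0 * n * f) / (f + n - z_ndc * (f - n))
```

With n = 1 and f = 50 this maps depth 0 to distance 1 (the near plane) and depth 1 to distance 50 (the far plane), as expected.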
https://cs.stackexchange.com/tags/randomized-algorithms/hot?filter=all
# Tag Info
47
For any reasonable definition of perfect, the mechanism you describe is not a perfect random number generator. Non-repeating isn't enough. The decimal number $0.101001000100001\dots$ is non-repeating but it's a terrible generator of random digits, since the answer is "always" zero, occasionally one, and never anything else. We don't actually know if every ...
29
It is cryptographically useless because an adversary can predict every single digit. It is also very time consuming.
22
First, let us make two maybe obvious, but important assumptions: _.random_item can choose the last position. _.random_item chooses every position with probability $\frac{1}{n+1}$. In order to prove correctness of your algorithm, you need an inductive argument similar to the one used here: For the singleton list there is only one possibility, so it is ...
19
If the input array is distributed uniformly at random then (as you noted) there is no difference between always picking an element at a fixed position (for example the middle one as you suggest) or picking an element chosen at random. If however your input array is not really in random order (which happens to be the case in almost all practical scenarios) ...
17
Polynomial identity testing admits a randomised polynomial-time algorithm (see the Schwartz-Zippel lemma), and we currently don't have a deterministic polynomial-time or even a sub-exponential-time algorithm for it.
Game tree evaluation
Consider a complete binary tree with $n$ leaf nodes each storing a 0/1 value. The internal nodes contain OR/AND gates in ...
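The randomized identity test mentioned above can be sketched for the single-variable case (`probably_equal` is a name chosen here; identical polynomials always agree, distinct ones disagree at a random point with high probability):

```python
import random

def probably_equal(p, q, trials=20, modulus=2**61 - 1):
    """Schwartz-Zippel-style test: evaluate both polynomials (given as
    callables) at random points modulo a large prime. A single disagreement
    proves the polynomials differ; agreement on all trials means they are
    identical with high probability."""
    for _ in range(trials):
        x = random.randrange(modulus)
        if p(x) % modulus != q(x) % modulus:
            return False
    return True
```

For example, `(x + 1)**2` and `x**2 + 2*x + 1` always agree, while `x**2 + 2*x + 2` differs at every evaluation point.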
17
if there are any practical applications of this algorithm in the domain of computer science besides being a theoretical improvement The application of this algorithm is trivial - you use it whenever you want to compute a median of a set of data (array in other words). This data may come from different domains: astronomical observations, social science, ...
17
You seem to have misunderstood what the key is. In the context of symmetric encryption, the key is a shared secret: something that is known to both the sender and receiver. For OTP, the key is the entire pad and, if two people wish to encrypt some message using OTP, they must ensure beforehand that they have a long enough pad to do that. For your proposed ...
17
The most obvious disadvantage is the unnecessary complexity of PRNG algorithms based on irrational numbers. They require much more computations per generated digit than, say, an LCG; and this complexity typically grows as you go further in the sequence. Calculating 256 bits of π at the two-quadrillionth bit took 23 days on 1000 computers (back in 2010) - a ...
15
The question really depends on what is the precise definition of a 2-hop. If by a 2-hop you mean the set $$hp(v) = \{ u \mid \mbox{there is a path of length 2 between u and v}\},$$ then the current answer is no, you cannot do it faster than $O(n^{\omega})$ where $\omega$ is the usual constant associated with the complexity of performing the matrix product. ...
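For small graphs the 2-hop sets can of course be computed directly by squaring the adjacency relation; the $O(n^{\omega})$ discussion above is about beating this naive approach. A sketch (`two_hop_sets` is a name chosen here):

```python
def two_hop_sets(adj):
    """adj: list of neighbor lists for vertices 0..n-1.
    Returns, for each v, the set of u with a length-2 walk between u and v
    (i.e. the boolean square of the adjacency matrix, row by row)."""
    n = len(adj)
    hp = [set() for _ in range(n)]
    for v in range(n):
        for w in adj[v]:        # first hop v -> w
            hp[v].update(adj[w])  # second hop w -> u
    return hp
```

On the path graph 0-1-2 this gives hp(0) = {0, 2}: vertex 2 via the path 0-1-2, and vertex 0 itself via the walk 0-1-0.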
13
First of all, I will assume that by "Additionally, the state $f_S$ of $f$ is set to a value $f_S' = f_S \pm k$, where $k$ is selected uniformly at random from $$\{0, 1, 2, ..., \lfloor n/2 \rfloor - ((f_S - x) \mod n)\}$$" you actually mean "Additionally, the state $f_S$ of $f$ is set to a value $f_S' = f_S + k \mod n$, where $k$ is selected uniformly ...
13
So basically, you want to know if there is any sorting algorithm which wouldn't degrade from its average case if given a compare function similar to: int Compare(object a, object b) { return Random.Next(-1,1); } ... where Random.Next() is some method that will produce a randomly-generated integer between a specified inclusive lower and upper bound. The ...
13
Median filtering is common in reduction of certain types of noise in image processing. Especially salt and pepper noise. It works by picking out the median value in each color channel in each local neighbourhood of the image and replacing it with it. How large these neighbourhoods are can vary. Popular filter sizes (neighbourhoods) are for example 3x3 and ...
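A minimal 1-D version of such a median filter (edge handling by truncating the window, a design choice made here; `median_filter_1d` is a name chosen for this sketch):

```python
def median_filter_1d(signal, k=3):
    """Replace each sample by the median of its k-wide neighborhood.
    Good at removing isolated salt-and-pepper spikes while preserving edges."""
    r = k // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - r): i + r + 1]  # truncated at the borders
        out.append(sorted(window)[len(window) // 2])
    return out
```

A single 99-valued spike in an otherwise flat signal is removed entirely, which a mean filter would only smear out.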
12
Suppose your array has $n$ elements. As you have noted, the median is always in the bigger part after the first partition. The bigger part has size at most $\alpha n$ if the smaller part has size at least $(1-\alpha) n$. This happens when you pick a pivot that isn't one of the smallest or largest $(1-\alpha) n$ elements. Because $\alpha > 1/2$, you ...
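The pivot analysis above is the core of randomized selection (quickselect). A sketch, using out-of-place partitioning for brevity (`quickselect` here is a hypothetical helper, not quoted from any answer):

```python
import random

def quickselect(a, k, rng=random):
    """Return the k-th smallest element of a (k = len(a)//2 gives a median).
    Random pivots give expected O(n) time despite the O(n^2) worst case."""
    a = list(a)
    while True:
        pivot = a[rng.randrange(len(a))]
        less = [x for x in a if x < pivot]
        equal = [x for x in a if x == pivot]
        if k < len(less):
            a = less                      # answer is among the smaller items
        elif k < len(less) + len(equal):
            return pivot                  # pivot is exactly the k-th smallest
        else:
            k -= len(less) + len(equal)   # discard everything <= pivot
            a = [x for x in a if x > pivot]
```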
12
An example of such an algorithm is randomized Quick Sort, where you randomly permute the list or randomly pick the pivot value, then use Quick Sort as normal. Quick Sort has a worst case running time of $O(n^{2})$, but on a random list has an expected running time of $O(n\log n)$, so it always terminates after $O(n^{2})$ steps, but we can expect the ...
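The randomized Quick Sort described above can be sketched as follows (out-of-place for brevity; the random pivot is what turns the $O(n^2)$ worst case into an event of vanishing probability on every input):

```python
import random

def random_quicksort(a, rng=random):
    """Quicksort with a uniformly random pivot: expected O(n log n) running
    time on every input, regardless of the input's initial order."""
    if len(a) <= 1:
        return list(a)
    pivot = a[rng.randrange(len(a))]
    left = [x for x in a if x < pivot]
    mid = [x for x in a if x == pivot]
    right = [x for x in a if x > pivot]
    return random_quicksort(left, rng) + mid + random_quicksort(right, rng)
```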
12
Vor's answer gives the standard definition. Let me try to explain the difference a bit more intuitively. Let $M$ be a bounded error probabilistic polynomial-time algorithm for a language $L$ that answers correctly with probability at least $p\geq\frac{1}{2}+\delta$. Let $x$ be the input and $n$ the size of the input. What distinguishes an arbitrary $\...
12
Your argument appears to be perfectly valid (Fisher-Yates does indeed require $\log (n!)$ bits of randomness); the discrepancy comes from making different assumptions about the complexity of the random number generation. You're assuming generating a random number between $0$ and $n$ takes $O(\log n)$. But, when saying that the Fisher-Yates shuffle is $O(n)$ ...
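For reference, the Fisher-Yates shuffle under discussion, as a sketch: it draws one uniform index per position, consuming about $\log(n!)$ random bits in total, and produces every permutation with equal probability.

```python
import random

def fisher_yates(a, rng=random):
    """In-place uniform shuffle: position i swaps with a uniform j in
    {0, ..., i}, so each of the n! permutations is equally likely."""
    for i in range(len(a) - 1, 0, -1):
        j = rng.randrange(i + 1)
        a[i], a[j] = a[j], a[i]
    return a
```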
12
The formal, unambiguous way to state this is “terminates with probability 1” or “terminates almost surely”. In probability theory, “almost” means “with probability 1”. For a probabilistic Turing machine, termination is defined as “terminates always” (i.e. whatever the random sequence is), not as “terminates with probability 1”. This definition makes ...
11
The algorithm works, but to understand why, you need to know basic probability theory. The idea is to prove by induction that at step $t$, the currently selected element is uniform among the first $t$ elements. This is clearly the case when $t=1$. Assume now the induction hypothesis for time $t$, and consider what happens at time $t+1$. With probability $1/...$
11
"Now to make a more efficient One-Time-Pad you'd use a pseudo-random number generator" No, no and once again no. I'm concerned that this is what you're being taught. The absolutely fundamental concept of a one-time pad and the notion of mathematically provable perfect secrecy is that the pad material is truly random. And it must never ever be reused, even ...
11
No, it's not possible. Suppose the bias of the coin is $1/3$, and suppose you could guarantee termination. Then there would be some $n$ such that this always terminates after $n$ coin flips. Let $S$ denote the set of flip-sequences that causes your algorithm to output 0 (so that $\overline{S}$ is the set of flip-sequences that causes your algorithm to ...
10
Any algorithm that compares the same two elements twice is not a very clever algorithm, and in particular such an algorithm would perform less well than the most common sorting algorithms (merge-sort, quicksort, bubble-sort, insertion-sort). Any algorithm that compares pairs of elements at most once has the same (average) runtime cost regardless of the ...
10
That looks correct to me. The difference between BPP and PP is that for BPP the probability has to be greater than $1/2$ by a constant, whereas for PP it could be $1/2 + 1/2^n$. So for BPP problems you can do probability amplification with a small number of repetitions, whereas for general PP problems you can't.
10
Computing medians is particularly important in randomized algorithms. Quite often, we have an approximation algorithm that, with probability at least $\tfrac34$, gives an answer within a factor of $1\pm\epsilon$ of the true answer $A$. Of course, in reality, we want to get an almost-correct answer with much higher probability than $\tfrac34$. So we ...
9
Your coin flips form a one-dimensional random walk $X_0,X_1,\ldots$ starting at $X_0 = 0$, with $X_{i+1} = X_i \pm 1$, each of the options with probability $1/2$. Now $H_i = |X_i|$ and so $H_i^2 = X_i^2$. It is easy to calculate $E[X_i^2] = i$ (this is just the variance), and so $E[H_i] \leq \sqrt{E[H_i^2]} = \sqrt{i}$ from convexity. We also know that $X_i$ ...
9
Due to the dubious nature of the question, I only provide hints. Have you tried the obvious? With probability $\frac{1}{n}$, add the new element to the sample. If it is added, choose one of the elements already in the sample uniformly at random and drop it. Sounds about fair, does it not? For a proof, you will have to proceed inductively. In the step, you ...
9
There is a simple $O(n)$ algorithm using the technique of reservoir sampling. Keep a currently selected element $x$ (initially, none). Go over all bits in the file in order. When seeing the $m$th zero, put it in $x$ with probability $1/m$. You can show (exercise) that the final contents of $x$ is a uniformly random zero from the file. If you are allowed ...
9
Complexity theory is a mathematical theory which aims at addressing one shortcoming of computability theory, namely, it takes into account the use of resources. While it is true that in its early days it aimed to capture the notion of "practical computation" (even particular flavors such as parallel computation, supposedly captured by NC), it has since ...
9
Suppose that the array has length $n$. Since you are making $n$ random choices of numbers from 1 to $n$, the probability to obtain any specific permutation is of the form $A/n^n$, for some integer $A$. Therefore your algorithm could work only if $n^n/n!$ is an integer, which is only the case when $n \leq 2$. More concretely, if you shuffle the array $1,2,3$, ...
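The claim that such a "swap with a fully random index" shuffle cannot be uniform for $n > 2$ can be checked by brute-force enumeration (a sketch, not from the answer; for $n = 3$ there are $27$ equally likely choice sequences spread over $6$ permutations, so the counts cannot all be equal):

```python
from itertools import product

def naive_shuffle_counts(n):
    """For i = 0..n-1, swap a[i] with a[j] for j uniform in {0..n-1}.
    Enumerate all n**n equally likely choice sequences and count how often
    each final permutation occurs."""
    counts = {}
    for choices in product(range(n), repeat=n):
        a = list(range(n))
        for i, j in enumerate(choices):
            a[i], a[j] = a[j], a[i]
        key = tuple(a)
        counts[key] = counts.get(key, 0) + 1
    return counts
```

Running `naive_shuffle_counts(3)` shows unequal counts across the six permutations, confirming the bias.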
9
There is some research in this area. In The Effect of Restarts on the Efficiency of Clause Learning Jinbo Huang shows empirically that restarts improve a solver's performance over suites of both satisfiable and unsatisfiable SAT instances. The theoretical justification for the speedup is that in CDCL solvers a restart allows the search to benefit from ...
8
The probabilistic method is typically used to show that the probability of some random object having a certain property is non-zero, but doesn't exhibit any examples. It does guarantee that a "repeat-until-success" algorithm will eventually terminate, but does not give an upper bound on the runtime. So unless the probability of a property holding is ...
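Several answers above appeal to reservoir sampling; a minimal single-item version as a sketch (`reservoir_sample` is a name chosen here):

```python
import random

def reservoir_sample(stream, rng=random):
    """Uniformly sample one item from a stream of unknown length:
    the m-th item replaces the current choice with probability 1/m."""
    chosen = None
    for m, x in enumerate(stream, start=1):
        if rng.randrange(m) == 0:
            chosen = x
    return chosen
```

One pass, O(1) extra memory, and by the inductive argument sketched in the answers, each item ends up chosen with probability exactly 1/n.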
https://www.aimsciences.org/article/doi/10.3934/cpaa.2016.15.831
# American Institute of Mathematical Sciences
May 2016, 15(3): 831-851. doi: 10.3934/cpaa.2016.15.831
## Well-posedness and scattering for fourth order nonlinear Schrödinger type equations at the scaling critical regularity
1 Graduate School of Mathematics, Nagoya University, Chikusa-ku, Nagoya, 464-8602
2 Department of Mathematics, Institute of Engineering, Academic Assembly, Shinshu University, 4-17-1 Wakasato, Nagano City 380-8553, Japan
Received May 2015; Revised December 2015; Published February 2016
In the present paper, we consider the Cauchy problem for fourth order nonlinear Schrödinger type equations with derivative nonlinearity. In the one dimensional case, small data global well-posedness and scattering for the fourth order nonlinear Schrödinger equation with the nonlinear term $\partial _x (\overline{u}^4)$ are shown in the scaling invariant space $\dot{H}^{-1/2}$. Furthermore, we show that the same result holds for $d \ge 2$ and derivative polynomial type nonlinearities, for example $|\nabla| (u^m)$ with $(m-1)d \ge 4$, where $d$ denotes the space dimension.
Citation: Hiroyuki Hirayama, Mamoru Okamoto. Well-posedness and scattering for fourth order nonlinear Schrödinger type equations at the scaling critical regularity. Communications on Pure & Applied Analysis, 2016, 15 (3) : 831-851. doi: 10.3934/cpaa.2016.15.831
https://www.rdocumentation.org/packages/car/versions/1.2-2/topics/Soils
Soils
Soil Compositions of Physical and Chemical Characteristics
Soil characteristics were measured on samples from three types of contours (Top, Slope, and Depression) and at four depths (0-10cm, 10-30cm, 30-60cm, and 60-90cm). The area was divided into 4 blocks, in a randomized block design.
Keywords
datasets
Usage
data(Soils)
Details
These data provide good examples of MANOVA and canonical discriminant analysis in a somewhat complex multivariate setting. They may be treated as a one-way design (ignoring Block), by using either Group or Gp as the factor, or a two-way randomized block design using Block, Contour and Depth (quantitative, so orthogonal polynomial contrasts are useful).
source
Horton, I. F.,Russell, J. S., and Moore, A. W. (1968) Multivariate-covariance and canonical analysis: A method for selecting the most effective discriminators in a multivariate situation. Biometrics 24, 845--858. http://www.stat.lsu.edu/faculty/moser/exst7037/soils.sas
References
Khattree, R., and Naik, D. N. (2000) Multivariate Data Reduction and Discrimination with SAS Software. SAS Institute. Friendly, M. (in press) Data ellipses, HE plots and reduced-rank displays for multivariate linear models: SAS software and examples. Journal of Statistical Software.
Examples
Soils
Documentation reproduced from package car, version 1.2-2, License: GPL version 2 or newer
https://physics.stackexchange.com/questions/376064/propagating-modes-in-a-waveguide-what-do-they-represent?noredirect=1
# Propagating modes in a waveguide, what do they represent?
In a hollow rectangular wave guide of dimensions $a \times b$ for example, I know how to apply the boundary conditions to find the solutions. In particular, for TE (or TM) modes we have the expression $$k=\sqrt{\left(\frac{\omega}{c}\right)^2-\pi^2\left[\left(\frac{m}{a}\right)^2+\left(\frac{n}{b}\right)^2\right]}$$ as our dispersion relation. I understand that in order to excite a certain TE$_{mn}$ mode the driving frequency must exceed the cutoff frequency for such mode. My question is what does it mean to excite a mode? Is it that we may only find waves propagating with distinct frequencies that correspond to the excited modes?
There are many ways to excite modes in a waveguide. The term is often used in a theoretical sense, like 'let there be a wave...'.
In a more practical sense, one can think of the one end of the waveguide being connected to a horn antenna, which receives some radiation from free-space and as a result excites modes in the waveguide. Does this answer your question?
For some given $\omega$, one can only excite specific traveling modes corresponding to different values of $k$ as shown by the dispersion relation. This means that a traveling mode can be excited at any frequency as long as it exceeds the cutoff frequency $\omega_{mn}\equiv c\pi \sqrt{\big(\frac{m}{a}\big)^2+\big(\frac{n}{b}\big)^2}$ for that mode.
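The cutoff condition above can be checked numerically. The sketch below (Python; the WR-90-like dimensions are chosen purely for illustration) computes the TE_mn cutoff frequency and the propagation constant k from the dispersion relation, returning None below cutoff where k would be imaginary (an evanescent, non-propagating mode):

```python
import math

def cutoff_omega(m, n, a, b, c=3.0e8):
    """Cutoff angular frequency: omega_mn = c * pi * sqrt((m/a)^2 + (n/b)^2)."""
    return c * math.pi * math.sqrt((m / a) ** 2 + (n / b) ** 2)

def propagation_constant(omega, m, n, a, b, c=3.0e8):
    """k from the dispersion relation; None if the mode is below cutoff."""
    k_sq = (omega / c) ** 2 - math.pi ** 2 * ((m / a) ** 2 + (n / b) ** 2)
    return math.sqrt(k_sq) if k_sq > 0 else None

# Illustrative X-band-like guide: a = 22.86 mm, b = 10.16 mm
a, b = 0.02286, 0.01016
f_c = cutoff_omega(1, 0, a, b) / (2 * math.pi)  # dominant TE10 cutoff
print(f_c / 1e9)                                 # ~6.56 GHz
print(propagation_constant(2 * math.pi * 10e9, 1, 0, a, b))  # real k: propagates at 10 GHz
print(propagation_constant(2 * math.pi * 5e9, 1, 0, a, b))   # None: below cutoff
```

At a given driving frequency, only the finitely many (m, n) pairs with ω above ω_mn give a real k and hence a travelling mode.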
I will give you a mostly non-mathematical way to understand waveguide modes.
First, note that waves propagating in the space between two surfaces must bounce off the surfaces.
Next, note that when a wave reflects off a surface it forms a standing wave pattern with stationary (Bragg) surfaces parallel to the reflective surface.
A wave propagating between two parallel surfaces, then, forms standing waves with Bragg surfaces associated with both surfaces.
Those Bragg surfaces, on both sides of the structure, are stacked with a separation that depends on the wavelength of the propagating wavefront and the angle of incidence on the reflective surfaces. The math would show that in order for the system of Bragg surfaces to be stable, allowing a steady-state wave pattern in the space between the reflective surfaces, the Bragg surfaces inside the space must coincide. That means, for light of any given wavelength, there is only a finite number of propagation angles that support a steady-state wave pattern. Those propagation angles correspond to the available propagation modes in the waveguide.
All the equations describing waveguide modes derive from these considerations.
Note also that a 3D waveguide can support a lot more modes than a 2D waveguide, such as "corkscrew" modes that reflect cyclically off of all of the faces of the waveguide.
https://www.neetprep.com/ncert/1605-General-Principles-Processes-Isolation-Elements-General-Principles-Processes-Isolation-Elements--NCERT-Chapter-PDF
# Unit 6
## General Principles and Processes of Isolation of Elements
Thermodynamics illustrates why only a certain reducing element and a minimum specific temperature are suitable for reduction of a metal oxide to the metal in an extraction.
### Objectives
After studying this Unit, you will be able to:
• appreciate the contribution of Indian traditions in the metallurgical processes,
• explain the terms minerals, ores, concentration, benefaction, calcination, roasting, refining, etc.;
• understand the principles of oxidation and reduction as applied to the extraction procedures;
• apply the thermodynamic concepts like that of Gibbs energy and entropy to the principles of extraction of Al, Cu, Zn and Fe;
• explain why reduction of certain oxides like Cu2O is much easier than that of Fe2O3;
• explain why CO is a favourable reducing agent at certain temperatures while coke is better in some other cases;
• explain why specific reducing agents are used for the reduction purposes.
A few elements like carbon, sulphur, gold and noble gases occur in the free state, while others occur in combined forms in the earth’s crust. The extraction and isolation of an element from its combined form involves various principles of chemistry. A particular element may occur in a variety of compounds. The process of metallurgy and isolation should be such that it is chemically feasible and commercially viable. Still, some general principles are common to all the extraction processes of metals. For obtaining a particular metal, first we look for minerals, which are naturally occurring chemical substances in the earth’s crust obtainable by mining.
Out of many minerals in which a metal may be found, only a few are viable to be used as sources of that metal. Such minerals are known as ores. Rarely, an ore contains only a desired substance. It is usually contaminated with earthly or undesired materials known as gangue. The extraction and isolation of metals from ores involve the following major steps:
• Concentration of the ore,
• Isolation of the metal from its concentrated ore, and
• Purification of the metal.
The entire scientific and technological process used for isolation of the metal from its ores is known as metallurgy.
In the present Unit, first we shall describe various steps for effective concentration of ores. After that we shall discuss the principles of some of the common metallurgical processes. Those principles shall include the thermodynamic and electrochemical aspects involved in the effective reduction of the concentrated ore to the metal.
### 6.1 Occurrence of Metals
Elements vary in abundance. Among metals, aluminium is the most abundant. It is the third most abundant element in the earth’s crust (8.3% approx. by weight). It is a major component of many igneous minerals including mica and clays. Many gemstones are impure forms of Al2O3 and the impurities range from Cr (in ‘ruby’) to Co (in ‘sapphire’). Iron is the second most abundant metal in the earth’s crust. It forms a variety of compounds and their various uses make it a very important element. It is one of the essential elements in biological systems as well.
The principal ores of aluminium, iron, copper and zinc are given in Table 6.1.
Table 6.1: Principal Ores of Some Important Metals
For the purpose of extraction, bauxite is chosen for aluminium. For iron, usually the oxide ores which are abundant and do not produce polluting gases (like SO2, produced in the case of iron pyrites) are taken. For copper and zinc, any of the listed ores (Table 6.1) may be used depending upon availability and other relevant factors.
### 6.2 Concentration of Ores
Removal of the unwanted materials (e.g., sand, clays, etc.) from the ore is known as concentration, dressing or benefaction. Before proceeding for concentration, ores are graded and crushed to reasonable size. Concentration of ores involves several steps and selection of these steps depends upon the differences in physical properties of the compound of the metal present and that of the gangue. The type of the metal, the available facilities and the environmental factors are also taken into consideration. Some of the important procedures for concentration of ore are described below.
#### 6.2.1 Hydraulic Washing
This is based on the difference between the specific gravities of the ore and the gangue particles. It is therefore a type of gravity separation. In one such process, an upward stream of running water is used to wash the powdered ore. The lighter gangue particles are washed away and the heavier ore particles are left behind.
#### 6.2.2 Magnetic Separation
This is based on differences in the magnetic properties of the ore components. If either the ore or the gangue is attracted towards a magnetic field, the separation is carried out by this method. For example, iron ores are attracted towards a magnet; hence, non-magnetic impurities can be separated from them using magnetic separation. The powdered ore is dropped over a conveyer belt which moves over a magnetic roller (Fig. 6.1). Magnetic substances remain attracted towards the belt and fall close to it.
Fig. 6.1: Magnetic separation (schematic)
#### 6.2.3 Froth Floatation Method
This method is used for removing gangue from sulphide ores. In this process, a suspension of the powdered ore is made with water. Collectors and froth stabilisers are added to it. Collectors (e.g., pine oils, fatty acids, xanthates, etc.) enhance non-wettability of the mineral particles and froth stabilisers (e.g., cresols, aniline) stabilise the froth.
Fig. 6.2: Froth floatation process (schematic)
The mineral particles become wet by oils while the gangue particles by water. A rotating paddle agitates the mixture and draws air in it. As a result, froth is formed which carries the mineral particles. The froth is light and is skimmed off. It is then dried for recovery of the ore particles.
Sometimes, it is possible to separate two sulphide ores by adjusting proportion of oil to water or by using ‘depressants’. For example, in the case of an ore containing ZnS and PbS, the depressant used is NaCN. It selectively prevents ZnS from coming to the froth but allows PbS to come with the froth.
The Innovative Washerwoman
One can do wonders with a scientific temperament and attentiveness to observations. A washerwoman had an innovative mind too. While washing a miner’s overalls, she noticed that sand and similar dirt fell to the bottom of the washtub. What was peculiar was that the copper-bearing compounds that had come onto the clothes from the mines were caught in the soapsuds and so came to the top. One of her clients, Mrs. Carrie Everson, was a chemist. The washerwoman told her experience to Mrs. Everson, who thought that the idea could be used for separating copper compounds from rocky and earthy materials on a large scale. This way an invention came up. At that time, only those ores which contained large amounts of the metal were used for extraction of copper. Invention of the Froth Floatation Method made copper mining profitable even from low-grade ores. World production of copper soared and the metal became cheaper.
#### 6.2.4 Leaching
Leaching is often used if the ore is soluble in some suitable solvent. Following examples illustrate the procedure:
(a) Leaching of alumina from bauxite
Bauxite is the principal ore of aluminium. It usually contains SiO2, iron oxides and titanium oxide (TiO2) as impurities. Concentration is carried out by heating the powdered ore with a concentrated solution of NaOH at 473 – 523 K and 35 – 36 bar pressure. This process is called digestion. This way, Al2O3 is extracted out as sodium aluminate. The impurity, SiO2 too dissolves forming sodium silicate. Other impurities are left behind.
Al2O3(s) + 2NaOH(aq) + 3H2O(l) → 2Na[Al(OH)4](aq) (6.1)
The sodium aluminate present in solution is neutralised by passing CO2 gas and hydrated Al2O3 is precipitated. At this stage, small amount of freshly prepared sample of hydrated Al2O3 is added to the solution. This is called seeding. It induces the precipitation.
2Na[Al(OH)4](aq) + CO2(g) → Al2O3.xH2O(s) + 2NaHCO3(aq) (6.2)
Sodium silicate remains in the solution and hydrated alumina is filtered, dried and heated to give back pure Al2O3.
Al2O3.xH2O(s) → Al2O3(s) + xH2O(g) (6.3)
(b) Other examples
In the metallurgy of silver and gold, the respective metal is leached with a dilute solution of NaCN or KCN in the presence of air, which supplies O2. The metal is obtained later by replacement reaction.
4M(s) + 8CN–(aq) + 2H2O(aq) + O2(g) → 4[M(CN)2]–(aq) + 4OH–(aq) (M = Ag or Au) (6.4)
2[M(CN)2]–(aq) + Zn(s) → [Zn(CN)4]2–(aq) + 2M(s) (6.5)
Intext Questions
6.1 Which of the ores mentioned in Table 6.1 can be concentrated by magnetic separation method?
6.2 What is the significance of leaching in the extraction of aluminium?
### 6.3 Extraction of Crude Metal from Concentrated Ore
The concentrated ore must be converted into a form which is suitable for reduction. Usually the sulphide ore is converted to oxide before reduction, because oxides are easier to reduce. Thus, isolation of metals from concentrated ore involves two major steps, viz.,
(a) conversion to oxide, and
(b) reduction of the oxide to metal.
(a) Conversion to oxide
(i) Calcination: Calcination involves heating. It removes the volatile matter which escapes, leaving behind the metal oxide:
Fe2O3.xH2O(s) → Fe2O3(s) + xH2O(g) (6.6)
ZnCO3(s) → ZnO(s) + CO2(g) (6.7)
CaCO3.MgCO3(s) → CaO(s) + MgO(s) + 2CO2(g) (6.8)
(ii) Roasting: In roasting, the ore is heated in a regular supply of air in a furnace at a temperature below the melting point of the metal. Some of the reactions involving sulphide ores are:
2ZnS + 3O2 → 2ZnO + 2SO2 (6.9)
2PbS + 3O2 → 2PbO + 2SO2 (6.10)
2Cu2S + 3O2 → 2Cu2O + 2SO2 (6.11)
Fig. 6.3: A section of a modern reverberatory furnace
The sulphide ores of copper are heated in a reverberatory furnace [Fig. 6.3]. If the ore contains iron, it is mixed with silica before heating. Iron oxide ‘slags off’* as iron silicate and copper is produced in the form of copper matte, which contains Cu2S and FeS.
FeO + SiO2 → FeSiO3 (slag) (6.12)
The SO2 produced is utilised for manufacturing H2SO4 .
* During metallurgy, ‘flux’ is added which combines with ‘gangue’ to form ‘slag’. Slag separates more easily from the ore than the gangue. This way, removal of gangue becomes easier.
(b) Reduction of oxide to the metal
Reduction of the metal oxide usually involves heating it with some other substance acting as a reducing agent (C or CO or even another metal). The reducing agent (e.g., carbon) combines with the oxygen of the metal oxide.
MxOy + yC → xM + y CO (6.13)
Some metal oxides get reduced easily while others are very difficult to reduce (reduction means electron gain or electronation). In any case, heating is required. To understand the variation in the temperature requirement for thermal reductions (pyrometallurgy) and to predict which element will suit as the reducing agent for a given metal oxide (MxOy), Gibbs energy interpretations are made.
### 6.4 Thermodynamic Principles of Metallurgy
Some basic concepts of thermodynamics help us in understanding the theory of metallurgical transformations. Gibbs energy is the most significant term here. To understand the variation in the temperature required for thermal reductions and to predict which element will suit as the reducing agent for a given metal oxide (MxOy), Gibbs energy interpretations are made. The criterion for the feasibility of a thermal reduction is that at a given temperature, the Gibbs energy change of the reaction must be negative. The change in Gibbs energy, ΔG, for any process at any specified temperature is described by the equation:
ΔG = ΔH – TΔS (6.14)
where ΔH is the enthalpy change and ΔS is the entropy change for the process. For any reaction, this change could also be explained through the equation:
ΔG⊝ = –RT lnK (6.15)
where K is the equilibrium constant of the ‘reactant – product’ system at the temperature T. A negative ΔG⊝ in equation 6.15 implies K > 1, and this can happen only when the reaction proceeds towards products. From these facts we can draw the following conclusions:
1. When the value of ΔG is negative in equation 6.14, only then will the reaction proceed. If ΔS is positive, then on increasing the temperature (T), the value of TΔS increases; once ΔH < TΔS, ΔG becomes –ve.
2. If the reactants and products of two reactions are put together in a system and the net ΔG of the two possible reactions is –ve, the overall reaction will occur. So the process of interpretation involves coupling the two reactions, getting the sum of their ΔG and looking at its magnitude and sign. Such coupling is easily understood through Gibbs energy (ΔG⊝) vs T plots for formation of the oxides (Fig. 6.4).
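The relation in equation 6.15 can be illustrated numerically; a negative ΔG⊝ gives K > 1 (products favoured), a positive one gives K < 1. A minimal Python sketch (the ΔG⊝ values below are arbitrary, not data for any particular reaction):

```python
import math

R = 8.314  # gas constant, J K^-1 mol^-1

def equilibrium_constant(delta_g, temperature):
    """K from DeltaG_standard = -RT ln K (delta_g in J/mol, T in K)."""
    return math.exp(-delta_g / (R * temperature))

# A negative DeltaG gives K > 1 (products favoured);
# a positive one gives K < 1 (reactants favoured).
print(equilibrium_constant(-20000.0, 1000.0))  # ~11.1
print(equilibrium_constant(+20000.0, 1000.0))  # ~0.09
```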
Ellingham Diagram
The graphical representation of Gibbs energy was first used by H.J.T.Ellingham. This provides a sound basis for considering the choice of reducing agent in the reduction of oxides. This is known as Ellingham Diagram. Such diagrams help us in predicting the feasibility of thermal reduction of an ore. The criterion of feasibility is that at a given temperature, Gibbs energy of the reaction must be negative.
(a) An Ellingham diagram normally consists of plots of ΔfG⊝ vs T for formation of oxides of elements, i.e., for the reaction,
2xM(s) + O2(g) → 2MxO(s)
In this reaction, the gaseous amount (hence molecular randomness) decreases from left to right due to the consumption of gases, leading to a –ve value of ΔS, which changes the sign of the second term in equation (6.14). Consequently ΔG shifts towards the higher side with rising T (normally, ΔG decreases, i.e., goes to the lower side, with increasing temperature). The result is a +ve slope in the curves for formation of MxO(s).
(b) Each plot is a straight line except when some change in phase (s→liq or liq→g) takes place. The temperature at which such a change occurs is indicated by an increase in the slope on the +ve side (e.g., in the Zn, ZnO plot, the melting is indicated by an abrupt change in the curve).
(c) There is a point in a curve below which ΔG is negative (so MxO is stable). Above this point, MxO will decompose on its own.
(d) In an Ellingham diagram, the plots of ΔG⊝ for oxidation (and therefore reduction of the corresponding species) of common metals and some reducing agents are given. The values of ΔfG⊝, etc. (for formation of oxides) at different temperatures are depicted, which makes the interpretation easy.
(e) Similar diagrams are also constructed for sulphides and halides, and from them it becomes clear why reduction of MxS is difficult. There, the ΔfG⊝ of MxS is not compensated.
Limitations of Ellingham Diagram
1. The graph simply indicates whether a reaction is possible or not, i.e., the tendency of reduction with a reducing agent is indicated. This is so because it is based only on thermodynamic concepts. It does not explain the kinetics of the reduction process; it cannot answer questions like how fast the reduction can proceed.
2. The interpretation of ΔrG⊝ is based on K (ΔG⊝ = –RT lnK). Thus it is presumed that the reactants and products are in equilibrium:
MxO + Ared ⇌ xM + AredO
This is not always true because the reactant/product may be solid. [However, it explains why the reactions are sluggish when every species is in solid state and smooth when the ore melts down. It is interesting to note here that the ΔH (enthalpy change) and ΔS (entropy change) values for any chemical reaction remain nearly constant even on varying temperature. So the only dominant variable in equation (6.14) becomes T. However, ΔS depends much on the physical state of the compound. Since entropy depends on disorder or randomness in the system, it will increase if a compound melts (s→l) or vapourises (l→g), since molecular randomness increases on changing the phase from solid to liquid or from liquid to gas.]
The reducing agent forms its oxide when the metal oxide is reduced. The role of the reducing agent is to provide a ΔG negative and large enough to make the sum of the ΔG of the two reactions (oxidation of the reducing agent and reduction of the metal oxide) negative. As we know, during reduction, the oxide of a metal decomposes:
MxO(s) → xM(s or l) + ½O2(g) (6.16)
The reducing agent takes away the oxygen. Equation 6.16 can be visualised as the reverse of the oxidation of the metal. And then, the ΔfG⊝ value is written in the usual way:
xM(s or l) + ½O2(g) → MxO(s) (6.17)
If reduction is being carried out through equation 6.16, the oxidation of the reducing agent (e.g., C or CO) will be there:
C(s) + ½O2(g) → CO(g) (6.18)
CO(g) + ½O2(g) → CO2(g) (6.19)
If carbon is taken, there may also be complete oxidation of the element to CO2:
½C(s) + ½O2(g) → ½CO2(g) (6.20)
On subtracting equation 6.17 [it means adding its negative or the reverse form as in equation 6.16] from one of the three equations, we get:
MxO(s) + C(s) → xM(s or l) + CO(g) (6.21)
MxO(s) + CO(g) → xM(s or l) + CO2(g) (6.22)
MxO(s) + ½C(s) → xM(s or l) + ½CO2(g) (6.23)
These reactions describe the actual reduction of the metal oxide, MxO, that we want to accomplish. The ΔrG⊝ values for these reactions, in general, can be obtained from the corresponding ΔfG⊝ values of the oxides.
As we have seen, heating (i.e., increasing T) favours a negative value of ΔrG⊝. Therefore, the temperature is chosen such that the sum of ΔrG⊝ in the two combined redox processes is negative. In the ΔrG⊝ vs T plots (Ellingham diagram, Fig. 6.4), this is indicated by the point of intersection of the two curves, i.e., the curve for the formation of MxO and that for the formation of the oxide of the reducing substance. After that point, the ΔrG⊝ value becomes more negative for the combined process, making the reduction of MxO possible. The difference in the two ΔrG⊝ values after that point determines whether reduction of the oxide of the element of the upper line is feasible by the element whose oxide formation is represented by the lower line. If the difference is large, the reduction is easier.
Example 6.1
Suggest a condition under which magnesium could reduce alumina.
Solution
The two equations are:
(a) 4/3 Al + O2 → 2/3 Al2O3 (b) 2Mg + O2 → 2MgO
At the point of intersection of the Al2O3 and MgO curves (marked “A” in diagram 6.4), ΔrG⊝ becomes zero for the reaction:
2/3 Al2O3 + 2Mg → 2MgO + 4/3 Al
Below that point, magnesium can reduce alumina.
Example 6.2
Although thermodynamically feasible, in practice, magnesium metal is not used for the reduction of alumina in the metallurgy of aluminium. Why ?
Solution
At temperatures below the point of intersection of the Al2O3 and MgO curves, magnesium can reduce alumina. But the process will be uneconomical.
Example 6.3
Why is the reduction of a metal oxide easier if the metal formed is in liquid state at the temperature of reduction?
Solution
The entropy is higher if the metal is in the liquid state than when it is in the solid state. The value of the entropy change (ΔS) of the reduction process is more on the positive side when the metal formed is in the liquid state and the metal oxide being reduced is in the solid state. Thus the value of ΔrG⊝ becomes more on the negative side and the reduction becomes easier.
#### 6.4.1 Applications
(a) Extraction of iron from its oxides
Oxide ores of iron, after concentration through calcination/roasting (to remove water, to decompose carbonates and to oxidise sulphides) are mixed with limestone and coke and fed into a Blast furnace from its top. Here, the oxide is reduced to the metal. Thermodynamics helps us to understand how coke reduces the oxide and why this furnace is chosen. One of the main reduction steps in this process is:
FeO(s) + C(s) → Fe(s/l) + CO (g) (6.24)
It can be seen as a couple of two simpler reactions. In one, the reduction of FeO is taking place and in the other, C is being oxidised to CO:
FeO(s) → Fe(s) + ½O2(g) [ΔG(FeO, Fe)] (6.25)
C(s) + ½O2(g) → CO(g) [ΔG(C, CO)] (6.26)
When both the reactions take place to yield the equation (6.24), the net Gibbs energy change becomes:
ΔG(C, CO) + ΔG(FeO, Fe) = ΔrG (6.27)
Naturally, the resultant reaction will take place when the right hand side in equation 6.27 is negative. In the ΔG⊝ vs T plot, the line representing reaction 6.25 goes upward while that representing the change C→CO (C, CO) goes downward. At temperatures above 1073 K (approx.), the C, CO line comes below the Fe, FeO line [ΔG(C, CO) < ΔG(Fe, FeO)]. So in this range, coke will reduce FeO and will itself be oxidised to CO. In a similar way, the reduction of Fe3O4 and Fe2O3 at relatively lower temperatures by CO can be explained on the basis of the lower-lying points of intersection of their curves with the CO, CO2 curve in Fig. 6.4.
Fig. 6.4: Gibbs energy (ΔrG⊝) vs T plots (schematic) for the formation of some oxides per mole of oxygen consumed (Ellingham diagram)
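The coupling argument above can be sketched numerically. The straight-line coefficients below are rough, illustrative approximations to the two Ellingham lines (per mole of O2, ΔG⊝ in kJ), used only to show how the crossover temperature emerges; they are not authoritative thermochemical data:

```python
def dg_c_co(t):
    """2C + O2 -> 2CO: approximate line, slope negative (gas moles increase)."""
    return -221.0 - 0.179 * t

def dg_fe_feo(t):
    """2Fe + O2 -> 2FeO: approximate line, slope positive."""
    return -529.0 + 0.130 * t

def coupled_dg(t):
    """DeltaG for 2FeO + 2C -> 2Fe + 2CO, i.e. dG(C, CO) - dG(Fe, FeO)."""
    return dg_c_co(t) - dg_fe_feo(t)

# Crossover where the two lines intersect; above it, coke reduces FeO.
t_cross = (529.0 - 221.0) / (0.179 + 0.130)
print(round(t_cross))        # ~997 K, the same ballpark as the ~1073 K in the text
print(coupled_dg(1200) < 0)  # True: reduction feasible at 1200 K
print(coupled_dg(800) < 0)   # False: not feasible at 800 K
```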
In the Blast furnace, reduction of iron oxides takes place in different temperature ranges. Hot air is blown from the bottom of the furnace and coke is burnt to give a temperature of up to about 2200 K in the lower portion itself. The burning of coke therefore supplies most of the heat required in the process. The CO and heat move to the upper part of the furnace. In the upper part, the temperature is lower and the iron oxides (Fe2O3 and Fe3O4) coming from the top are reduced in steps to FeO. Thus, the reduction reactions taking place in the lower temperature range and in the higher temperature range depend on the points of the corresponding intersections in the ΔrG⊝ vs T plots. These reactions can be summarised as follows:
At 500 – 800 K (lower temperature range in the blast furnace),
Fe2O3 is first reduced to Fe3O4 and then to FeO
3Fe2O3 + CO → 2Fe3O4 + CO2 (6.28)
Fe3O4 + 4CO → 3Fe + 4CO2 (6.29)
Fe2O3 + CO → 2FeO + CO2 (6.30)
At 900 – 1500 K (higher temperature range in the blast furnace):
C + CO2 → 2CO (6.31)
FeO + CO → Fe + CO2 (6.32)
Fig. 6.5: Blast furnace
Limestone is also decomposed to CaO which removes silicate impurity of the ore as slag. The slag is in molten state and separates out from iron.
The iron obtained from the Blast furnace contains about 4% carbon and many impurities in smaller amounts (e.g., S, P, Si, Mn). This is known as pig iron and is cast into a variety of shapes. Cast iron is different from pig iron and is made by melting pig iron with scrap iron and coke using a hot air blast. It has a slightly lower carbon content (about 3%) and is extremely hard and brittle.
Further Reductions
Wrought iron or malleable iron is the purest form of commercial iron and is prepared from cast iron by oxidising impurities in a reverberatory furnace lined with haematite. The haematite oxidises carbon to carbon monoxide:
Fe2O3 + 3C → 2Fe + 3CO (6.31)
Limestone is added as a flux and sulphur, silicon and phosphorus are oxidised and passed into the slag. The metal is removed and freed from the slag by passing through rollers.
(b) Extraction of copper from cuprous oxide [copper(I) oxide]
In the graph of ΔrG⊝ vs T for the formation of oxides (Fig. 6.4), the Cu2O line is almost at the top. So it is quite easy to reduce oxide ores of copper directly to the metal by heating with coke. The lines (C, CO) and (C, CO2) are at much lower positions in the graph, particularly after 500 – 600 K. However, many of the ores are sulphides and some may also contain iron. The sulphide ores are roasted/smelted to give oxides:
2Cu2S + 3O2 → 2Cu2O + 2SO2 (6.32)
The oxide can then be easily reduced to metallic copper using coke:
Cu2O + C → 2Cu + CO (6.33)
In the actual process, the ore is heated in a reverberatory furnace after mixing with silica. In the furnace, iron oxide ‘slags off’ as iron silicate. Copper is produced in the form of copper matte, which contains Cu2S and FeS.
FeO + SiO2 → FeSiO3 (Slag) (6.34)
Copper matte is then charged into a silica-lined converter. Some silica is also added and a hot air blast is blown to convert the remaining FeS, FeO and Cu2S/Cu2O to metallic copper. The following reactions take place:
2FeS + 3O2 → 2FeO + 2SO2 (6.35)
FeO + SiO2 → FeSiO3 (6.36)
2Cu2S + 3O2 → 2Cu2O + 2SO2 (6.37)
2Cu2O + Cu2S → 6Cu + SO2 (6.38)
The solidified copper obtained has a blistered appearance due to the evolution of SO2, and so it is called blister copper.
(c) Extraction of zinc from zinc oxide
The reduction of zinc oxide is done using coke. The temperature in this case is higher than that in the case of copper. For the purpose of heating, the oxide is made into brickettes with coke and clay.
ZnO + C → Zn + CO (6.39)
The metal is distilled off and collected by rapid chilling.
Intext Questions
6.3 The reaction,
Cr2O3 + 2Al → Al2O3 + 2Cr (ΔrG⊝ = –421 kJ)
is thermodynamically feasible as is apparent from the Gibbs energy value. Why does it not take place at room temperature?
6.4 Is it true that under certain conditions, Mg can reduce Al2O3 and Al can reduce MgO? What are those conditions?
### 6.5 Electrochemical Principles of Metallurgy
We have seen how the principles of thermodynamics are applied to pyrometallurgy. Similar principles are effective in the reduction of metal ions in solution or in the molten state. Here they are reduced by electrolysis or by adding some reducing element.
In the reduction of a molten metal salt, electrolysis is done. Such methods are based on electrochemical principles which could be understood through the equation,
ΔG⊝ = –nE⊝F (6.40)
where n is the number of electrons and E⊝ is the electrode potential of the redox couple formed in the system. More reactive metals have large negative values of the electrode potential, so their reduction is difficult. If the difference of two E⊝ values corresponds to a positive E⊝, and consequently a negative ΔG⊝ in equation 6.40, then the less reactive metal will come out of the solution and the more reactive metal will go into the solution, e.g.,
Cu2+(aq) + Fe(s) → Cu(s) + Fe2+(aq) (6.41)
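Equation 6.40 can be applied to reaction 6.41. The standard potentials used below (+0.34 V for Cu2+/Cu, –0.44 V for Fe2+/Fe) are common textbook values, quoted here only for illustration:

```python
F = 96485.0  # Faraday constant, C/mol

def delta_g_from_e(n, e_cell):
    """DeltaG_standard in J/mol from the cell potential: DeltaG = -nFE."""
    return -n * F * e_cell

# Cu2+(aq) + Fe(s) -> Cu(s) + Fe2+(aq): E_cell = E(cathode) - E(anode)
e_cell = 0.34 - (-0.44)          # 0.78 V
dg = delta_g_from_e(2, e_cell)   # two electrons transferred
print(round(dg / 1000, 1))       # ~ -150.5 kJ/mol: negative, so spontaneous
```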
In simple electrolysis, the Mn+ ions are discharged at negative electrodes (cathodes) and deposited there. Precautions are taken considering the reactivity of the metal produced and suitable materials are used as electrodes. Sometimes a flux is added for making the molten mass more conducting.
Aluminium
In the metallurgy of aluminium, purified Al2O3 is mixed with Na3AlF6 or CaF2 which lowers the melting point of the mixture and brings conductivity. The fused matrix is electrolysed. Steel vessel with lining of carbon acts as cathode and graphite anode is used. The overall reaction may be written as:
2Al2O3 + 3C → 4Al + 3CO2 (6.42)
This process of electrolysis is widely known as the Hall–Héroult process.
Fig. 6.7: Electrolytic cell for the extraction of aluminium
Thus electrolysis of the molten mass is carried out in an electrolytic cell using carbon electrodes. The oxygen liberated at anode reacts with the carbon of anode producing CO and CO2. This way for each kg of aluminium produced, about 0.5 kg of carbon anode is burnt away. The electrolytic reactions are:
Cathode: Al3+(melt) + 3e– → Al(l) (6.43)
Anode: C(s) + O2–(melt) → CO(g) + 2e– (6.44)
C(s) + 2O2–(melt) → CO2(g) + 4e– (6.45)
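The figure of about 0.5 kg of anode carbon per kg of aluminium can be bracketed from the two anode reactions (6.44 and 6.45); a minimal sketch:

```python
M_AL, M_C = 26.98, 12.011  # molar masses, g/mol

def carbon_per_kg_al(electrons_per_c):
    """kg of anode carbon per kg of Al: each Al3+ needs 3 electrons, and a
    carbon atom supplies 4 e- (oxidised to CO2) or 2 e- (oxidised to CO)."""
    mol_c_per_mol_al = 3.0 / electrons_per_c
    return mol_c_per_mol_al * M_C / M_AL

print(round(carbon_per_kg_al(4), 2))  # 0.33 kg if the anode gas were all CO2
print(round(carbon_per_kg_al(2), 2))  # 0.67 kg if the anode gas were all CO
# A CO/CO2 mixture lands in between, consistent with the ~0.5 kg in the text.
```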
Copper from Low Grade Ores and Scraps
Copper is extracted by hydrometallurgy from low grade ores. It is leached out using acid or bacteria. The solution containing Cu2+ is treated with scrap iron or H2 (equations 6.41; 6.46).
Cu2+(aq) + H2(g) → Cu(s) + 2H+(aq) (6.46)
Example 6.4
At a site, low grade copper ores are available and zinc and iron scraps are also available. Which of the two scraps would be more suitable for reducing the leached copper ore and why?
Solution
Zinc, being above iron in the electrochemical series (the more reactive metal is zinc), will give faster reduction if zinc scraps are used. But zinc is a costlier metal than iron, so using iron scraps is advisable and advantageous.
### 6.6 Oxidation Reduction
Besides reductions, some extractions are based on oxidation particularly for non-metals. A very common example of extraction based on oxidation is the extraction of chlorine from brine (chlorine is abundant in sea water as common salt) .
2Cl–(aq) + 2H2O(l) → 2OH–(aq) + H2(g) + Cl2(g) (6.47)
The ΔG⊖ for this reaction is +422 kJ. When it is converted to E⊖ (using ΔG⊖ = –nE⊖F), we get E⊖ = –2.2 V. Naturally, it will require an external emf greater than 2.2 V. But the electrolysis requires an excess potential to overcome some other hindering reactions (Unit 3, Section 3.5.1). Thus, Cl2 is obtained by electrolysis, giving H2 and aqueous NaOH as by-products. Electrolysis of molten NaCl is also carried out, but in that case Na metal is produced and not NaOH.
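The conversion from ΔG⊖ to E⊖ is one line of arithmetic; a quick sketch (n = 2 electrons per mole of Cl2/H2 and the value of the Faraday constant are standard assumptions, not stated in the text):

```python
# Recover E⊖ from ΔG⊖ = -nE⊖F for the brine electrolysis reaction.
F = 96485            # C/mol, Faraday constant
dG = 422_000         # J, the +422 kJ quoted above
n = 2                # electrons transferred per mole of Cl2/H2
E = -dG / (n * F)
print(f"E⊖ = {E:.2f} V")   # E⊖ = -2.19 V, i.e. about -2.2 V as stated
```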
As studied earlier, the extraction of gold and silver involves leaching the metal with CN–. This is also an oxidation reaction (Ag → Ag+ or Au → Au+). The metal is later recovered by a displacement method.
4Au(s) + 8CN–(aq) + 2H2O(aq) + O2(g) → 4[Au(CN)2]–(aq) + 4OH–(aq) (6.48)
2[Au(CN)2]–(aq) + Zn(s) → 2Au(s) + [Zn(CN)4]2–(aq) (6.49)
In this reaction zinc acts as a reducing agent.
### 6.7 Refining
A metal extracted by any method is usually contaminated with some impurity. For obtaining metals of high purity, several techniques are used depending upon the differences in properties of the metal and the impurity. Some of them are listed below.
(a) Distillation (b) Liquation
(c) Electrolysis (d) Zone refining
(e) Vapour phase refining (f) Chromatographic methods
These are described in detail here.
(a) Distillation
This is very useful for low boiling metals like zinc and mercury. The impure metal is evaporated to obtain the pure metal as distillate.
(b) Liquation
In this method a low melting metal like tin can be made to flow on a sloping surface. In this way it is separated from higher melting impurities.
(c) Electrolytic refining
In this method, the impure metal is made to act as the anode. A strip of the same metal in pure form is used as the cathode. They are put in a suitable electrolytic bath containing a soluble salt of the same metal. The more basic metal remains in the solution and the less basic ones go to the anode mud. This process is also explained using the concepts of electrode potential, overpotential and Gibbs energy, which you have seen in previous sections. The reactions are:
Anode: M → Mn+ + ne–
Cathode: Mn+ + ne– → M (6.50)
Copper is refined using an electrolytic method. Anodes are of impure copper and pure copper strips are taken as cathode. The electrolyte is acidified solution of copper sulphate and the net result of electrolysis is the transfer of copper in pure form from the anode to the cathode:
Anode: Cu → Cu2+ + 2e–
Cathode: Cu2+ + 2e– → Cu (6.51)
Impurities from the blister copper deposit as anode mud which contains antimony, selenium, tellurium, silver, gold and platinum; recovery of these elements may meet the cost of refining. Zinc may also be refined this way.
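The amount of copper transferred from anode to cathode follows Faraday's laws. A small sketch; the current, time and molar mass below are illustrative assumptions, not values from the text.

```python
# Faraday's-law sketch of electrolytic refining: the charge passed fixes
# how much copper moves from the impure anode to the pure cathode.
F = 96485             # C/mol, Faraday constant
M_CU = 63.55          # g/mol
n = 2                 # Cu2+ + 2e- -> Cu
I, t = 10.0, 3600.0   # 10 A passed for one hour (assumed values)
mass = (I * t / F) * (M_CU / n)
print(f"{mass:.2f} g of pure Cu deposited")   # about 11.86 g
```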
(d) Zone refining
Fig. 6.8: Zone refining process
This method is based on the principle that impurities are more soluble in the melt than in the solid state of the metal. A mobile heater surrounding a rod of the impure metal is fixed at one of its ends (Fig. 6.8). The molten zone moves along with the heater as it is moved forward. As the heater moves, the pure metal crystallises out of the melt left behind and the impurities pass into the adjacent new molten zone created by the movement of the heater. The process is repeated several times, with the heater moving in the same direction each time, so that the impurities get concentrated at one end. This end is finally cut off. This method is very useful for producing semiconductors and other metals of very high purity, e.g., germanium, silicon, boron, gallium and indium.
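The repeated-pass purification can be mimicked with a toy model in which the frozen solid retains only a fraction k of the impurity in the molten zone. The value k = 0.3 and the one-cell zone are assumptions of the sketch, not data from the text.

```python
# Toy model of zone refining. Impurity prefers the melt: when the zone
# freezes, the solid keeps only k * C_zone of impurity (k < 1 assumed).
def one_pass(bar, k=0.3):
    """Move a one-cell molten zone along the bar once."""
    bar = bar[:]
    zone = bar[0]                        # melt the first cell
    for i in range(len(bar) - 1):
        bar[i] = k * zone                # freeze out purified solid
        zone = zone - bar[i] + bar[i + 1]  # zone melts in the next cell
    bar[-1] = zone                       # last cell freezes with the rest
    return bar

bar = [1.0] * 10                         # uniform impurity to start
for _ in range(3):                       # repeated passes, same direction
    bar = one_pass(bar)
print([round(c, 3) for c in bar])        # impurity piles up at the far end
```

Total impurity is conserved; each pass just pushes it toward the end of the bar, which is why that end is cut off.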
(e) Vapour phase refining
In this method, the metal is converted into its volatile compound which is collected and decomposed to give pure metal. So, the two requirements are:
(i) the metal should form a volatile compound with an available reagent,
(ii) the volatile compound should be easily decomposable, so that the recovery is easy.
Following examples will illustrate this technique.
Mond Process for Refining Nickel: In this process, nickel is heated in a stream of carbon monoxide, forming a volatile complex named nickel tetracarbonyl. This complex is decomposed at a higher temperature to obtain the pure metal.
Ni + 4CO → Ni(CO)4 (6.52)
Ni(CO)4 → Ni + 4CO (6.53)
van Arkel Method for Refining Zirconium or Titanium: This method is very useful for removing all the oxygen and nitrogen present in the form of impurity in certain metals like Zr and Ti. The crude metal is heated in an evacuated vessel with iodine. The metal iodide being more covalent, volatilises:
Zr + 2I2 → ZrI4 (6.54)
The metal iodide is decomposed on a tungsten filament, electrically heated to about 1800K. The pure metal deposits on the filament.
ZrI4 → Zr + 2I2 (6.55)
(f) Chromatographic methods
This method is based on the principle that different components of a mixture are differently adsorbed on an adsorbent. The mixture is put in a liquid or gaseous medium which is moved through the adsorbent. Different components are adsorbed at different levels on the column.
Later the adsorbed components are removed (eluted) by using suitable solvents (eluants). Depending upon the physical state of the moving medium and the adsorbent material, and also on the process of passage of the moving medium, the chromatographic method is given its name. In one such method, a column of Al2O3 is prepared in a glass tube and the moving medium, containing a solution of the components, is in liquid form. This is an example of column chromatography. It is very useful for the purification of elements which are available only in minute quantities and whose impurities are not very different in chemical properties from the element to be purified. There are several chromatographic techniques, such as paper chromatography, column chromatography, gas chromatography, etc. The procedure followed in column chromatography is depicted in Fig. 6.9.
Fig. 6.9: Schematic diagrams showing column chromatography
Looked at another way, chromatography in general involves a mobile phase and a stationary phase. The sample or sample extract is dissolved in a mobile phase. The mobile phase may be a gas, a liquid or a supercritical fluid. The stationary phase is immobile and immiscible (like the Al2O3 column in the example of column chromatography above). The mobile phase is then forced through the stationary phase. The mobile phase and the stationary phase are chosen such that components of the sample have different solubilities in the two phases. A component which is quite soluble in the stationary phase takes longer to travel through it than a component which is not very soluble in the stationary phase but very soluble in the mobile phase. Thus, sample components are separated from each other as they travel through the stationary phase. Depending upon the two phases and the way the sample is inserted/injected, the chromatographic technique is named. These methods have been described in detail in Unit 12 of the Class XI textbook (12.8.5).
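The different-solubility argument can be sketched numerically: a component that spends a fraction 1/(1 + k') of its time in the mobile phase migrates that much more slowly. The retention factors and flow speed here are illustrative assumptions, not data from the text.

```python
# Why differing solubilities separate components: a solute that partitions
# more into the stationary phase spends less time moving with the flow.
u = 1.0                        # mobile-phase speed (arbitrary units)
column_length = 100.0
for name, k_prime in [("weakly retained A", 0.5), ("strongly retained B", 4.0)]:
    fraction_mobile = 1 / (1 + k_prime)   # fraction of time spent moving
    t = column_length / (u * fraction_mobile)
    print(f"{name}: elutes at t = {t:.0f}")
# A elutes at t = 150, B at t = 500: B lags behind, so the two separate.
```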
### 6.8 Uses of Aluminium, Copper, Zinc and Iron
Aluminium foils are used as wrappers for food materials. The fine dust of the metal is used in paints and lacquers. Aluminium, being highly reactive, is also used in the extraction of chromium and manganese from their oxides. Wires of aluminium are used as electricity conductors. Alloys containing aluminium, being light, are very useful.
Copper is used for making wires used in electrical industry and for water and steam pipes. It is also used in several alloys that are rather tougher than the metal itself, e.g., brass (with zinc), bronze (with tin) and coinage alloy (with nickel).
Zinc is used for galvanising iron. It is also used in large quantities in batteries. It is a constituent of many alloys, e.g., brass (Cu 60%, Zn 40%) and german silver (Cu 25-30%, Zn 25-30%, Ni 40-50%). Zinc dust is used as a reducing agent in the manufacture of dye-stuffs, paints, etc.
Cast iron, which is the most important form of iron, is used for casting stoves, railway sleepers, gutter pipes, toys, etc. It is used in the manufacture of wrought iron and steel. Wrought iron is used in making anchors, wires, bolts, chains and agricultural implements. Steel finds a number of uses. Alloy steel is obtained when other metals are added to it. Nickel steel is used for making cables, automobile and aeroplane parts, pendulums and measuring tapes. Chrome steel is used for cutting tools and crushing machines, and stainless steel is used for cycles, automobiles, utensils, pens, etc.
### Summary
Although modern metallurgy grew exponentially after the Industrial Revolution, many modern concepts in metallurgy have their roots in ancient practices that predated it. For over 7000 years, India has had a high tradition of metallurgical skills. Ancient Indian metallurgists made major contributions which deserve their place in the metallurgical history of the world. In the case of zinc and high-carbon steel, ancient India contributed significantly to the base for the modern metallurgical advancements that induced the metallurgical study leading to the Industrial Revolution.
Metals are required for a variety of purposes. For this, we need their extraction from the minerals in which they are present and from which their extraction is commercially feasible. These minerals are known as ores. Ores of a metal are associated with many impurities. Removal of these impurities to a certain extent is achieved in the concentration steps. The concentrated ore is then treated chemically to obtain the metal. Usually the metal compounds (e.g., oxides, sulphides) are reduced to the metal. The reducing agents used are carbon, CO or even some metals. In these reduction processes, the thermodynamic and electrochemical concepts are given due consideration. The metal oxide reacts with a reducing agent; the oxide is reduced to the metal and the reducing agent is oxidised. In the two reactions, the net Gibbs energy change is negative, which becomes more negative on raising the temperature. Conversion of the physical state from solid to liquid or to gas, and formation of gaseous states, favours a decrease in the Gibbs energy for the entire system. This concept is graphically displayed in plots of ΔG⊖ vs T (Ellingham diagrams) for such oxidation/reduction reactions at different temperatures. The concept of electrode potential is useful in the isolation of metals (e.g., Al, Ag, Au) where the sum of the two redox couples is positive so that the Gibbs energy change is negative. The metals obtained by the usual methods still contain minor impurities; getting pure metals requires refining. The refining process depends upon the differences in properties of the metal and the impurities. Extraction of aluminium is usually carried out from its bauxite ore by leaching it with NaOH. Sodium aluminate, thus formed, is separated and then neutralised to give back the hydrated oxide, which is then electrolysed using cryolite as a flux. Extraction of iron is done by reduction of its oxide ore in a blast furnace. Copper is extracted by smelting and heating in a reverberatory furnace.
Extraction of zinc from zinc oxides is done using coke. Several methods are employed in refining the metal. Metals, in general, are very widely used and have contributed significantly in the development of a variety of industries.
### Exercises
6.1 Copper can be extracted by hydrometallurgy but not zinc. Explain.
6.2 What is the role of depressant in froth floatation process?
6.3 Why is the extraction of copper from pyrites more difficult than that from its oxide ore through reduction?
6.4 Explain: (i) Zone refining (ii) Column chromatography.
6.5 Out of C and CO, which is a better reducing agent at 673 K?
6.6 Name the common elements present in the anode mud in electrolytic refining of copper. Why are they so present?
6.7 Write down the reactions taking place in different zones in the blast furnace during the extraction of iron.
6.8 Write chemical reactions taking place in the extraction of zinc from zinc blende.
6.9 State the role of silica in the metallurgy of copper.
6.10 What do you understand by the term "chromatography"?
6.11. What is the criterion followed while selecting the stationary phase of chromatography?
6.12 Describe a method for refining nickel.
6.13 How can you separate alumina from silica in a bauxite ore associated with silica? Give equations, if any.
6.14 Giving examples, differentiate between ‘roasting’ and ‘calcination’.
6.15 How is 'cast iron' different from 'pig iron'?
6.16 Differentiate between “minerals” and “ores”.
6.17 Why is copper matte put in a silica-lined converter?
6.18 What is the role of cryolite in the metallurgy of aluminium?
6.19 How is leaching carried out in case of low grade copper ores?
6.20 Why is zinc not extracted from zinc oxide through reduction using CO?
6.21 The value of ΔfG⊖ for the formation of Cr2O3 is –540 kJ mol–1 and that of Al2O3 is –827 kJ mol–1. Is the reduction of Cr2O3 possible with Al?
6.22 Out of C and CO, which is a better reducing agent for ZnO ?
6.23 The choice of a reducing agent in a particular case depends on thermodynamic factors. How far do you agree with this statement? Support your opinion with two examples.
6.24 Name the processes from which chlorine is obtained as a by-product. What will happen if an aqueous solution of NaCl is subjected to electrolysis?
6.25 What is the role of graphite rod in the electrometallurgy of aluminium?
6.26 Outline the principles of refining of metals by the following methods:
(i) Zone refining
(ii) Electrolytic refining
(iii) Vapour phase refining
6.27 Predict conditions under which Al might be expected to reduce MgO.
(Hint: See Intext question 6.4)
#### EXEMPLAR QUESTION
1. In the extraction of chlorine by electrolysis of brine ...........
(a) oxidation of Cl– ion to chlorine gas occurs
(b) reduction of Cl– ion to chlorine gas occurs
(c) for the overall reaction, ΔG⊖ has a negative value
(d) a displacement reaction takes place
2. When copper ore is mixed with silica in a reverberatory furnace, copper matte is produced. The copper matte contains
(a) sulphides of copper(I) and iron(II)
(b) sulphides of copper(II) and iron(III)
(c) sulphides of copper(I) and iron(II)
(d) sulphides of copper(I) and iron(III)
3. Which of the following reactions is an example of autoreduction?
4. A number of elements are available in earth’s crust but most abundant elements are .........
(a) Al and Fe
(b) Al and Cu
(c) Fe and Cu
(d) Cu and Ag
5. Zone refining is based on the principle that ..........
(a) impurities of low boiling metals can be separated by distillation.
(b) impurities are more soluble in molten metal than in solid metal.
(c) different components of a mixture are differently adsorbed on an adsorbent.
(d) vapours of volatile compound can be decomposed in pure metal.
6. In the extraction of copper from its sulphide ore, the metal is formed by the reduction of Cu2S with
(a) FeS
(b) CO
(c) Cu2S
(d) SO2
7. Brine is electrolysed by using inert electrodes. The reaction at anode is .........
(a) ........... ; E⊖cell = 1.36 V
(b) ........... ; E⊖cell = 1.23 V
(c) ........... ; E⊖cell = 2.71 V
(d) ........... ; E⊖cell = 0.00 V
8. In the metallurgy of aluminium ............
(a) Al3+ is oxidised to Al(s).
(b) the graphite anode is oxidised to carbon monoxide and carbon dioxide.
(c) the oxidation state of oxygen changes in the reaction at the anode.
(d) the oxidation state of oxygen changes in the overall reaction involved in the process.
9. Electrolytic refining is used to purify which of the following metals?
(a) Cu and Zn
(b) Ge and Si
(c) Zr and Ti
(d) Zn and Hg
10. Extraction of gold and silver involves leaching the metal with CN– ion. The metal is recovered by ......
(a) displacement of metal by some other metal from the complex ion.
(b) roasting of metal complex.
(c) calcination followed by roasting.
(d) thermal decomposition of metal complex.
Direction (Q. Nos. 11-13): Answer the questions on the basis of the figure.
11. Choose the correct option of temperature at which carbon reduces FeO to iron and produces CO.
(a) Below temperature at point A
(b) Approximately at the temperature corresponding to point A
(c) Above temperature at point A but below temperature at point D
(d) Above temperature at point A
12. Below point ‘A’ FeO can .............
(a) be reduced by carbon monoxide only.
(b) be reduced by both carbon monoxide and carbon.
(c) be reduced by carbon only.
(d) not be reduced by both carbon and carbon monoxide.
13. For the reduction of FeO at the temperature corresponding to point D, which of the following statements is correct?
(a) ΔG value for the overall reduction reaction with carbon monoxide is zero.
(b) ΔG value for the overall reduction reaction with a mixture of 1 mol carbon and 1 mol oxygen is positive.
(c) ΔG value for the overall reduction reaction with a mixture of 2 mol carbon and 1 mol oxygen will be positive.
(d) ΔG value for the overall reduction reaction with carbon monoxide is negative.
Multiple Choice Questions (More than One Options)
14. At the temperature corresponding to which of the points in the figure will FeO be reduced to Fe by coupling the reaction with all of the following reactions?
(a) Point A
(b) Point B
(c) Point D
(d) Point E
15. Which of the following options are correct?
(a) Cast iron is obtained by remelting pig iron with scrap iron and coke using a hot air blast.
(b) In the extraction of silver, silver is extracted as a cationic complex.
(c) Nickel is purified by zone refining.
(d) Zr and Ti are purified by the van Arkel method.
16. In the extraction of aluminium by the Hall-Heroult process, purified Al2O3 is mixed with CaF2 to
(a) lower the melting point of Al2O3.
(b) increase the conductivity of the molten mixture.
(c) reduce Al3+ into Al(s).
(d) act as a catalyst.
17. Which of the following statements is correct about the role of substances added in the froth floatation process?
(a) Collectors enhance the non-wettability of the mineral particles.
(b) Collectors enhance the wettability of gangue particles.
(c) By using depressants in the process, two sulphide ores can be separated.
(d) Froth stabilisers decrease the wettability of gangue.
18. In the froth floatation process, zinc sulphide and lead sulphide can be separated by ...............
(a) using collectors
(b) adjusting the proportion of oil to water
(c) using depressant
(d) using froth stabilisers
19. Common impurities present in bauxite are ........
(a) CuO
(b) ZnO
(c) Fe2O3
(d) SiO2
20. Which of the following ores are concentrated by froth floatation?
(a) Haematite (b) Galena (c) Copper pyrites (d) Magnetite
21. Which of the following reactions occur during calcination?
22. In the metallurgical process of which of these ores can the calcined ore be reduced by carbon?
(a) Haematite
(b) Calamine
(c) Iron pyrites
(d) Sphalerite
23. The main reactions occurring in the blast furnace during the extraction of iron from haematite ore are ..........
24. In which of the following methods of purification is the metal converted to a volatile compound which is decomposed to give the pure metal?
(a) Heating with stream of carbon monoxide
(b) Heating with iodine
(c) Liquation
(d) Distillation
25. Which of the following statements are correct?
(a) A depressant prevents a certain type of particle from coming to the froth.
(b) Copper matte contains Cu2S and ZnS.
(c) The solidified copper obtained from the reverberatory furnace has a blistered appearance due to the evolution of SO2 during the extraction.
(d) Zinc can be extracted by self-reduction.
26. In the extraction of chlorine from brine ............
(a) ΔG⊖ for the overall reaction is negative.
(b) ΔG⊖ for the overall reaction is positive.
(c) E⊖ for the overall reaction has a negative value.
(d) E⊖ for the overall reaction has a positive value.
27. Why is an external emf of more than 2.2 V required for the extraction of Cl2 from brine?
28. At temperature above 1073 K, coke can be used to reduce FeO to Fe. How can you justify this reduction with Ellingham diagram?
29. Wrought iron is the purest form of iron. Write a reaction used for the preparation of wrought iron from cast iron. How can the impurities of sulphur, silicon and phosphorus be removed from cast iron?
30. How is copper extracted from low grade copper ores?
31. Write two basic requirements for refining of a metal by Mond’s process and by van Arkel Method.
32. Although carbon and hydrogen are better reducing agents, they are not used to reduce metallic oxides at high temperatures. Why?
33. How do we separate two sulphide ores by froth floatation method? Explain with an example.
34. The purest form of iron is prepared by oxidising impurities from cast iron in a reverberatory furnace. Which iron ore is used to line the furnace? Explain by giving reaction.
35. A mixture of compounds A and B is passed through a column of Al2O3 using alcohol as eluant. Compound A is eluted in preference to compound B. Which of the compounds, A or B, is more readily adsorbed on the column?
36. Why is sulphide ore of copper heated in a furnace after mixing with silica?
37. Why are sulphide ores converted to oxide before reduction?
38. Which method is used for refining Zr and Ti? Explain with equation.
39. What should be the considerations during the extraction of metals by electrochemical method?
40. What is the role of flux in metallurgical processes?
41. How are metals that are used as semiconductors, like germanium and silicon, refined? What is the principle of the method used?
42. Write down the reactions taking place in blast furnace related to the metallurgy of iron in the temperature range 500-800 K.
43. Give two requirements for vapour phase refining.
44. Write the chemical reactions involved in the extraction of gold by cyanide process. Also give the role of zinc in the extraction.
Match The Columns
45. Match the items of Column I with items of Column II and assign the correct code.
Column I: A. Pendulum, B. Malachite, C. Calamine, D. Cryolite
Column II: 1. Chrome steel, 2. Nickel steel, 3. Na3AlF6, 4. CuCO3·Cu(OH)2, 5. ZnCO3
Codes
A B C D
(a) 1 2 3 4
(b) 2 4 5 3
(c) 2 3 4 5
(d) 4 5 3 2
46. Match the items of Column I with items of Column II and assign the correct code.
Column I: A. Coloured bands, B. Impure metal to volatile complex, C. Purification of Ge and Si, D. Purification of mercury
Column II: 1. Zone refining, 2. Fractional distillation, 3. Mond's process, 4. Chromatography, 5. Liquation
Codes
A B C D
(a) 1 2 3 4
(b) 4 3 2 1
(c) 3 4 2 1
(d) 5 4 3 2
47. Match the items of Column I with items of Column II and assign the correct code.
Column I: A. Cyanide process, B. Froth floatation process, C. Electrolytic reduction, D. Zone refining
Column II: 1. Ultrapure Ge, 2. Dressing of ZnS, 3. Extraction of Al, 4. Extraction of Au, 5. Purification of Ni
Codes
A B C D
(a) 4 2 3 1
(b) 2 3 1 5
(c) 1 2 3 4
(d) 3 4 5 1
48. Match the items of Column I with items of Column II and assign the correct code.
Column I: A. Sapphire, B. Sphalerite, C. Depressant, D. Corundum
Column II: 1. Al2O3, 2. NaCN, 3. Co, 4. ZnS, 5. Fe2O3
Codes
A B C D
(a) 3 4 2 1
(b) 5 4 3 2
(c) 2 3 4 5
(d) 1 2 3 4
49. Match the items of Column I with items of Column II and assign the correct code.
Column I: A. Blistered Cu, B. Blast furnace, C. Reverberatory furnace, D. Hall-Heroult process
Column II: 1. Aluminium, 2. .........., 3. Iron, 4. .........., 5. ..........
Codes
A B C D
(a) 2 3 4 1
(b) 1 2 3 5
(c) 5 4 3 2
(d) 4 5 3 2
Assertion and Reason
In the following questions a statement of Assertion (A) followed by a statement of Reason (R) is given. Choose the correct answer out of the following choices.
(a) Both assertion and reason are true and reason is the correct explanation of assertion.
(b) Both assertion and reason are true but reason is not the correct explanation of assertion.
(c) Assertion is true but reason is false.
(d) Assertion is false but reason is true.
(e) Assertion and reason both are wrong.
50. Assertion (A) Nickel can be purified by Mond’s process.
Reason (R) Ni(CO)4 is a volatile compound which decomposes at 460 K to give pure Ni.
51. Assertion (A) Zirconium can be purified by van Arkel method.
Reason (R) ZrI4 is volatile and decomposes at 1800 K.
52. Assertion (A) Sulphide ores are concentrated by froth flotation method.
Reason (R) Cresols stabilise the froth in froth floatation method.
53. Assertion (A) Zone refining method is very useful for producing semiconductors.
Reason (R) Semiconductors are of high purity.
54. Assertion (A) Hydrometallurgy involves dissolving the ore in a suitable reagent followed by precipitation by a more electropositive metal.
Reason (R) Copper is extracted by hydrometallurgy.
55. Explain the following
(a) CO2 is a better reducing agent below 710 K whereas CO is a better reducing agent above 710 K.
(b) Generally sulphide ores are converted into oxides before reduction.
(c) Silica is added to the sulphide ore of copper in the reverberatory furnace.
(d) Carbon and hydrogen are not used as reducing agents at high temperatures.
(e) Vapour phase refining method is used for the purification of Ti.
https://en.khanacademy.org/math/algebra/x2f8bb11595b61c86:quadratic-functions-equations/x2f8bb11595b61c86:quadratic-formula-a1/a/quadratic-formula-review
## Algebra 1
The quadratic formula allows us to solve any quadratic equation that's in the form ax^2 + bx + c = 0. This article reviews how to apply the formula.
## What is the quadratic formula?
$$x = \dfrac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$
It applies to equations in the form
$$ax^2 + bx + c = 0$$
### Example
We're given an equation and asked to solve for q:
$$0 = -7q^2 + 2q + 9$$
This equation is already in the form $ax^2 + bx + c = 0$, so we can apply the quadratic formula with $a = -7$, $b = 2$, $c = 9$:
\begin{aligned} q &= \dfrac{-b \pm \sqrt{b^2 - 4ac}}{2a} \\\\ q &= \dfrac{-2 \pm \sqrt{2^{2} - 4 (-7) (9)}}{2(-7)} \\\\ q &= \dfrac{-2 \pm \sqrt{4 +252}}{-14} \\\\ q &= \dfrac{-2 \pm \sqrt{256}}{-14} \\\\ q &= \dfrac{-2 \pm 16}{-14} \\\\ q &= \dfrac{-2 + 16}{-14} ~~,~~ q = \dfrac{-2 - 16}{-14} \\\\ q &= -1 ~~~~~~~~~~~~,~~ q = \dfrac{9}{7} \end{aligned}
Let's check both solutions to be sure it worked:
First $q = -1$, then $q = \dfrac{9}{7}$:
\begin{aligned}0&=-7q^2+2q+9\\\\0&=-7(-1)^2+2(-1)+9 \\\\0&=-7(1)-2+9 \\\\0&=-7-2+9\\\\0&=0\end{aligned}\begin{aligned}0&=-7q^2+2q+9\\\\0&=-7\left(\dfrac{9}{7}\right)^2+2\left (\dfrac{9}{7}\right)+9 \\\\0&=-7\left(\dfrac{81}{49}\right)+\left (\dfrac{18}{7}\right)+9 \\\\0&=-\left(\dfrac{81}{7}\right)+\left (\dfrac{18}{7}\right)+9 \\\\0&=-\left(\dfrac{63}{7}\right) +9 \\\\0&=-9 +9 \\\\0&=0\end{aligned}
Yep, both solutions check out.
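The same check can be done mechanically; a short Python sketch (not part of the article):

```python
import math

# Verify the worked example: -7q^2 + 2q + 9 = 0.
a, b, c = -7, 2, 9
disc = b**2 - 4*a*c                               # 4 + 252 = 256
roots = sorted((-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1))
print(roots)    # [-1.0, 1.2857142857142858], i.e. q = -1 and q = 9/7
```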
Practice
Solve for x.
$$-4 + x + 7x^2 = 0$$
Want more practice? Check out this exercise.
## Want to join the conversation?
• Sal, how does the quadratic formula relate to business and economics?
• It helps in lots of ways. It can possibly predict the future path of certain things, especially if your graph is exponential.
• what if the equation doesn't equal zero
• then just subtract the non-zero number from the RHS to the LHS and make the RHS equal to zero.
• Can you recommend other math websites for Algebra 1 and 2?
• What happens when the discriminant is a negative number? If it is negative would your answer be imaginary?
• Yes... If the discriminant is negative, then there are 2 roots, but they are complex numbers.
• Can you recommend other math websites for Algebra 1 and 2?
• IXL is a good one, but without a login given to you by an administrator, you have to pay a membership fee.
• How do you get from $\dfrac{-4 \pm 2\sqrt{34}}{-6}$ to $\dfrac{-2 \pm \sqrt{34}}{-3}$? I get that it's divided by 2, but it's strange that the square root of 34 doesn't change.
• You can only remove the factor of 2 once from each term. There are two terms: -4 and +/- 2 sqrt(34)
-4/2 = -2
+/- 2 sqrt(34)/2 = +/- sqrt(34)
Also, the 2 contained in the sqrt(34) is sqrt(2) which can't be divided by 2. They are not equal values.
Hope this helps.
• A garden in the shape of a rectangle is surrounded by a walkway of uniform width. The dimensions of the garden only are 35 by 24. The area of the garden and the walkway together is 1,530 square feet. What is the width of the walkway in feet? a. 4 ft b. 5 ft c. 34.5 ft
• Assume the width of the walkway to be x. If you illustrate the data as a drawing on a piece of paper, you will see that the sides of the garden + walkway are (35+2x) and (24+2x). Since AREA = LENGTH x WIDTH, we get the equation:
(35 + 2x) (24 + 2x) = 1530
By expanding the L.H.S, we get
840 + 70x + 48x + 4x^2 = 1530
Now after rearranging
4x^2 + 118x + 840 - 1530 = 0 -----> 4x^2 + 118x - 690 = 0
After applying quadratic formula (a = 4, b = 118, c = -690) we get
x = 5 ft OR x = -34.5 ft
Since the width can NEVER be negative, we can exclude x = -34.5 ft. So the answer is b. 5 ft.
• are there any shortcuts or patterns we can use to make calculation quicker?
• I do not understand this. Can someone please explain?
• This is a formula, so if you can get the right numbers, you plug them into the formula and calculate the answer(s). We always have to start with a quadratic in standard form: ax^2+bx+c=0. Making one up, 3x^2+2x-5=0, we see a=3, b=2, c=-5. I teach my students to start with the discriminant, b^2-4ac. Also, especially in the beginning, put the b value in parentheses so that you square a negative number if b is negative. In our example, this gives (2)^2-4(3)(-5) = 4+60 = 64, and √64 = 8. Filling out the formula, we get x=(-2±8)/(2(3)), or breaking it into the two parts, x=(-2+8)/6=1 and x=(-2-8)/6=-10/6=-5/3. Where is the confusion? It is always hard to answer when we cannot figure out what you do understand and where you are confused.
• What if there was a negative number under the square root? What would you do then?
https://mc-stan.org/rstanarm/
|
rstanarm
Bayesian applied regression modeling
rstanarm is an R package that emulates other R model-fitting functions but uses Stan (via the rstan package) for the back-end estimation. The primary target audience is people who would be open to Bayesian inference if using Bayesian software were easier but would use frequentist software otherwise.
Getting started
If you are new to rstanarm we recommend starting with the tutorial vignettes.
Installation
Install the latest release from CRAN
install.packages("rstanarm")
Instructions for installing the latest development version from GitHub can be found in the rstanarm Readme.
Contributing
If you are interested in contributing to the development of rstanarm please see the Developer notes.
https://mathhelpforum.com/threads/bernoulli-differential-equation-substitution-done-plz-help.237191/
|
# Bernoulli differential equation; substitution done.. plz help!
#### mzca
Bernoulli differential equation solution required. Will highly appreciate step-by-step detailed solution.
3(1+t^2) dy/dt = 2ty(y^3 - 1)
After substituting, I have
du/dt - (2t/(1+t^2))u = 2t/(1+t^2)
#### romsek
MHF Helper
starting from
$\dfrac {du}{dt} - \left(\dfrac{2t}{1+t^2}\right)u=\dfrac{2t}{1+t^2}$
$(1+t^2)\dfrac {du}{dt} = 2t(u+1)$
$\dfrac {du}{u+1} = \dfrac {2t}{1+t^2}dt$
and you should be able to finish from here.
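Integrating both sides of the separated form gives ln|u+1| = ln(1+t^2) + C, i.e. u = A(1+t^2) - 1. A quick finite-difference check in Python that this family satisfies romsek's form of the equation (A = 2 is an arbitrary constant of my choosing):

```python
# Check that u(t) = A*(1 + t^2) - 1 satisfies du/dt = 2t(u + 1)/(1 + t^2),
# the form romsek derived, using a central finite difference.
A = 2.0          # arbitrary constant of integration (my choice)
h = 1e-6

def u(t):
    return A * (1 + t**2) - 1

for t in (0.5, 1.0, 2.0):
    lhs = (u(t + h) - u(t - h)) / (2 * h)      # numerical du/dt
    rhs = 2 * t * (u(t) + 1) / (1 + t**2)      # right-hand side of the ODE
    assert abs(lhs - rhs) < 1e-6
print("u = A(1 + t^2) - 1 satisfies the separated equation")
```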
#### Prove It
MHF Helper
Bernoulli differential equation solution required. Will highly appreciate step-by-step detailed solution.
3(1+t^2) dy/dt = 2ty(y^3 - 1)
After substituting, I have
du/dt - (2t/(1+t^2))u = 2t/(1+t^2)
It will be much easier if you recognise it is separable...
\displaystyle \begin{align*} 3 \left( 1 + t^2 \right) \, \frac{\mathrm{d}y}{\mathrm{d}t} &= 2t\,y\left( y^3 - 1 \right) \\ \frac{1}{y \left( y^3 - 1 \right) } \, \frac{\mathrm{d}y}{\mathrm{d}t} &= \frac{2t}{3 \left( 1 + t^2 \right) } \end{align*}
Now integrate both sides...
https://proofwiki.org/wiki/Definition:Local_Basis
|
# Definition:Local Basis
## Definition
Let $T = \struct {S, \tau}$ be a topological space.
Let $x$ be an element of $S$.
### Local Basis for Open Sets
A local basis at $x$ is a set $\BB$ of open neighborhoods of $x$ such that:
$\forall U \in \tau: x \in U \implies \exists H \in \BB: H \subseteq U$
That is, such that every open neighborhood of $x$ also contains some set in $\BB$.
### Neighborhood Basis of Open Sets
A local basis at $x$ is a set $\BB$ of open neighborhoods of $x$ such that every neighborhood of $x$ contains a set in $\BB$.
That is, a local basis at $x$ is a neighborhood basis of $x$ consisting of open sets.
## Also defined as
Some more modern sources suggest that in order to be a local basis, the neighborhoods of which the set $\BB$ consists do not need to be open.
Such a structure is referred to on $\mathsf{Pr} \infty \mathsf{fWiki}$ as a neighborhood basis.
## Also known as
A local basis is also known as a neighborhood basis, but that term is used on $\mathsf{Pr} \infty \mathsf{fWiki}$ for a weaker notion.
## Also see
• Results about local bases can be found here.
https://socratic.org/questions/how-do-you-solve-the-system-of-equations-5x-y-5-and-x-3y-13-using-substitution
|
# How do you solve the system of equations 5x-y =5 and -x +3y=13 using substitution?
Nov 30, 2015
$x = 2 , y = 5$
#### Explanation:
$5 x - y = 5$
$\implies y = 5 x - 5$
$\implies - x + 3 \left(5 x - 5\right) = 13$
$\implies 14 x = 28$
$\implies x = 2$
$\implies y = 5$
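The substitution steps can be mirrored in a few lines of Python as a sanity check:

```python
# Solve 5x - y = 5 and -x + 3y = 13 by substitution.
# From the first equation: y = 5x - 5.  Substitute into the second:
# -x + 3(5x - 5) = 13  ->  14x = 28.
x = 28 / 14
y = 5 * x - 5
assert (5 * x - y, -x + 3 * y) == (5, 13)
print(x, y)  # 2.0 5.0
```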
https://ubpdqn.wordpress.com/2011/09/18/gini-in-a-bottle/
|
# Unkown Blogger Pursues a Deranged Quest for Normalcy
## Gini in a bottle
Posted by ubpdqn on September 18, 2011
Gini Index by country
The relationship between distribution of wealth and distribution of income are usually visualised using Lorenz curve and the Gini index (or coefficient).
The above graphic is inferior to that from Wikipedia or WolframAlpha but illustrates the range of Gini indices by country (white: data missing).
The Lorenz curve plots $L(c) = \int_0^{F^{-1}(c)} x f(x)\,dx \Big/ \int_0^\infty x f(x)\,dx$ against $c$, where $f(x)$ is the probability density function of the income distribution and $F(x)$ is the corresponding cumulative distribution function. Here the existence of the inverse is achieved using the infimum method to deal with piecewise constant segments and point discontinuities. The distributions of interest are obviously discrete. Insights are obtained using continuous functions.
The Gini index is $\frac {0.5-\int_0^1 L(F) dF }{0.5}$ where $L(F)$ is the Lorenz function.
The following animated gif provides insight into the relationship between the Lorenz curve and the Gini index. The Lorenz curves were generated using a Pareto distribution with minimum value 10000 and increasing shape parameters (1.2 to 2.5), yielding a range of Gini indices similar to those observed. This is purely illustrative and makes NO pretense of representing a real income distribution. The Gini index is the ratio of the dark purple area to the combined dark and light purple area.
Lorenz curve and Gini index
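For a Pareto distribution with shape $a > 1$ the Gini index has the closed form $1/(2a-1)$, so the shape range 1.2 to 2.5 above spans Gini indices from about 0.71 down to 0.25. A sketch of the empirical computation in Python (inverse-CDF sampling on a quantile grid — my construction, not the blog's code):

```python
# Empirical Gini index of a Pareto(xm = 10000, shape a) distribution,
# sampled on a regular quantile grid via the inverse CDF x = xm*(1-u)^(-1/a).
def gini(values):
    xs = sorted(values)
    n = len(xs)
    # G = sum_i (2i - n - 1) x_(i) / (n * sum x), with x sorted ascending
    num = sum((2 * i - n - 1) * x for i, x in enumerate(xs, start=1))
    return num / (n * sum(xs))

a, xm, n = 2.0, 10000.0, 100_000
sample = [xm * (1 - (i - 0.5) / n) ** (-1 / a) for i in range(1, n + 1)]
print(gini(sample))   # close to the theoretical 1/(2a-1) = 1/3
```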
https://www.gamedev.net/blogs/entry/2255641-finishing-up-character-physics-in-leadwerks-3/
|
# Finishing up character physics in Leadwerks 3...
In this blog I'm going to explain the evolution of the entity and physics system in Leadwerks 3.
In Leadwerks Engine 2, physics bodies and character controllers are both separate entity classes. If you want a model to be physically interactive, you parent it to a body entity. If you want a model to walk around with physics, you parent it to a character controller body.
In Leadwerks 3 I decided to give physics properties to all entities. This means all entities can use commands like GetVelocity(), AddForce(), SetMass(), etc. However, since character controllers are quite different, and they involve kind of a big chunk of code, I decided to keep the character controller as a separate entity. To make an enemy or NPC, you would create a character controller entity and then parent an animated model to that entity.
This was simple enough to do in the editor, but it started getting weird when we added scripts. Scripts for animation would need to be added to the child model, because the character controller would not return any animation lengths or the number of sequences. Scripts to control movement, on the other hand, would have to be attached to the parent character controller, for obvious reasons.
Next I tried creating a character controller script that attached to the model itself. This eliminated the extra entity in the hierarchy, and would automatically create a character controller when loaded in the engine, and parent the model to it. I didn't like that this was changing the hierarchy from what the user saw in the editor, and script accessing the character controller would still be based on some wonky assumptions.
Finally, I decided to just give the entity class a physicsmode member. This can be one of two values. By default, it is Entity::RigidBodyPhysics. However, you can set it to Entity::CharacterPhysics and the entity itself will act as a character controller! All the character controller functions are now available in the entity class, so you can just load a model, adjust some settings, and send him on his merry way around town:
```cpp
Model* enemy = Model::Load("Models/Characters/barbarian.mdl");
enemy->SetMass(10);
enemy->SetPhysicsMode(Entity::CharacterPhysics);
enemy->GoToPoint(20, 0, 0); // model will walk to this position, using AI navigation
```
Pretty cool, eh?
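The design described above — one entity class with a physics-mode flag instead of separate body classes — can be sketched language-agnostically. A minimal Python analogue (class and method names are mine, not the Leadwerks API):

```python
from enum import Enum, auto

class PhysicsMode(Enum):
    RIGID_BODY = auto()
    CHARACTER = auto()

class Entity:
    """Every entity carries physics state; a mode flag selects the behavior."""
    def __init__(self):
        self.mass = 0.0
        self.physics_mode = PhysicsMode.RIGID_BODY  # default mode

    def set_mass(self, mass):
        self.mass = mass

    def set_physics_mode(self, mode):
        self.physics_mode = mode

    def go_to_point(self, x, y, z):
        # Character-style navigation only makes sense in character mode.
        if self.physics_mode is not PhysicsMode.CHARACTER:
            raise RuntimeError("navigation requires character physics mode")
        return (x, y, z)

enemy = Entity()
enemy.set_mass(10)
enemy.set_physics_mode(PhysicsMode.CHARACTER)
print(enemy.go_to_point(20, 0, 0))  # (20, 0, 0)
```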
https://proofwiki.org/wiki/Singleton_Partition_yields_Indiscrete_Topology
|
Singleton Partition yields Indiscrete Topology
Theorem
Let $S$ be a set which is not empty.
Let $\PP$ be the (trivial) singleton partition $\set S$ on $S$.
Then the partition topology on $\PP$ is the indiscrete topology.
Proof
By definition, the partition topology on $\PP$ is the set of all unions from $\PP$.
This is (trivially, and from Union of Empty Set) $\set {\O, S}$ which is the indiscrete topology on $S$ by definition.
$\blacksquare$
http://openstudy.com/updates/55cbf5d7e4b0554d6271622b
|
## heyitslizzy13 one year ago can someone help me with part c? (:
3. kropot72
The two-intercept form for the equation of a line is $\large \frac{x}{a}+\frac{y}{b}=1$ where a is the x-intercept and b is the y-intercept. You have already found that a = 6 and b = 8, so just plug those values into the general equation for the two-intercept form.
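Plugging a = 6 and b = 8 into the two-intercept form gives x/6 + y/8 = 1, or 4x + 3y = 24 after clearing denominators. A quick Python check that both intercepts satisfy it:

```python
a, b = 6, 8          # x- and y-intercepts found in the earlier parts

def on_line(x, y):
    # Two-intercept form: x/a + y/b = 1
    return abs(x / a + y / b - 1) < 1e-12

assert on_line(6, 0)       # x-intercept
assert on_line(0, 8)       # y-intercept
assert not on_line(0, 0)   # the origin is not on this line
print("x/6 + y/8 = 1 passes through (6, 0) and (0, 8)")
```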
http://blog.thegrandlocus.com/2012/09/focus-on-multiple-testing
|
The Grand Locus / Life for statistical sciences
## Focus on: multiple testing
With this post I inaugurate the focus on series, where I go much more in depth than usual. I could as well have called it the gory details, but focus on sounds more elegant. You might be in for a shock if you are more into easy reading, so the focus on is also here as a warning sign so that you can skip the post altogether if you are not interested in the detail. For those who are, this way please...
In my previous post I exposed the multiple testing problem. Every null hypothesis, true or false, has at least a 5% chance of being rejected (assuming you work at 95% confidence level). By testing the same hypothesis several times, you increase the chances that it will be rejected at least once, which introduces a bias because this one time is much more likely to be noticed, and then published. However, being aware of the illusion does not dissipate it. For this you need insight and statistical tools.
### Fail-safe $n$ to measure publication bias
Suppose $n$ independent research teams test the same null hypothesis, which happens to be true — so not interesting. This means that the null hypothesis is tested $n$ times independently with an exact chance of 5% of being rejected. The probability that the $n$ teams accept the null is $0.95^n$. The complementary event, that at least one team will reject the null hypothesis and report the rejection is thus $1 - 0.95^n$. This number approaches 1 as $n$ goes large, so it becomes almost certain that the null will be rejected.
In the early days of meta-analysis, it was common to estimate the publication bias of a claim by computing what is known as a fail-safe $n$, i.e. the estimated number of unpublished studies that came to the opposite conclusion, given that the null hypothesis is true. If this number is credible given the research effort on the topic, the validation can be interpreted as an accident. If the fail-safe $n$ is large, this is evidence that the validation was not produced by over testing.
Let us illustrate this with an example. After meta-analysis, a claim has a fail-safe $n$ of 4. Should that claim be unjustified, we would need to imagine that it was the subject of 4 independent studies, one of which reported it by mistake. Quite possible for many fields of investigation. Another claim has a fail-safe $n$ of 8,768. If it were reported by mistake, we would need to imagine that as many groups of researchers worked on the same hypothesis. In most fields of research this would be considered too high, and it is more likely that the claim is correct.
The simplest way to compute a fail-safe $n$ is to consider that the number of reported publications has a binomial distribution with parameter $p = 0.05$ and $n$ unknown. The probability that $k$ studies are published is
$${n \choose k} \times 0.05^k \times 0.95^{n-k},$$
where $k$ is observed. It is possible to choose the value of $n$ that maximizes the above probability to obtain the maximum likelihood estimate of $n$. For $k = 1$, this is 20, for $k = 2$ this is 40 and more generally, this is $20k$ (proof in the following technical section).
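The maximization can also be done by brute force; a Python sketch (note that the discrete likelihood actually ties between $20k-1$ and $20k$ — the real-valued optimum derived below is exactly $20k$):

```python
from math import comb

def likelihood(n, k, p=0.05):
    # Binomial probability of k published positive studies out of n
    return comb(n, k) * p**k * (1 - p)**(n - k)

for k in (1, 2, 3):
    best = max(range(k, 400), key=lambda n: likelihood(n, k))
    # ties between 20k-1 and 20k are broken arbitrarily by floating point
    assert best in (20 * k - 1, 20 * k)
    print(k, best)
```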
As usual, it is simpler to maximize the log of the likelihood, which has the same optimal parameter $n$. However, we will replace it by $\alpha$ to highlight the fact that we are looking for the solution among real numbers. Taking the log also introduces the digamma function, which is the derivative of $\log \Gamma$. First note that
$$\log \left( {\alpha \choose k} \times 0.05^k \times 0.95^{\alpha - k} \right) = \\ \log \Gamma(\alpha+1) -\log\Gamma(k+1) -\log\Gamma(\alpha-k+1) + k \log(0.05) + (\alpha - k) \log(0.95).$$
Differentiating the whole log-likelihood relative to $\alpha$ and equating to 0 yields
$$\psi(\alpha+1) - \psi(\alpha-k+1) = -\log 0.95.$$
An instrumental property of the digamma function is $\psi(\alpha + 1) = \psi(\alpha) + 1/\alpha$. Using this recursively, we get
$$\frac{1}{\alpha-k+1} + ... + \frac{1}{\alpha} = -\log 0.95.$$
We then use the approximation of the harmonic series by the logarithm to obtain the final result:
$$\log(\alpha) - \log(\alpha - k) \approx -\log 0.95 \iff \frac{\alpha}{\alpha-k} \approx \frac{20}{19},$$
which solves to $\alpha \approx 20k$.
This approach is very coarse and has a heuristic purpose only. In practice, the computation of fail-safe $n$ is more complex, but I won't linger on it because the tool is no longer used. It has been replaced by more specialized methods to estimate the publication bias, because the estimates turned out to be unreliable.
### The Šidák correction
So far I have assumed that there is only one hypothesis which is tested several times, but the multiple testing problem is more general. If a statistician tests 20 hypotheses, it is expected that s/he will validate one of them, even if those hypotheses are unrelated. For instance, the patients of a hospital could be diagnosed for 20 different mental disorders with a specificity of 95%. For each disorder, there is then a 5% chance that the patients are declared positive, even if they do not actually have the disease.
If we assume that the results of the 20 tests are statistically independent for sane patients, the probability that they are considered sane is $0.95^{20} = .358$. About 65% of sane people will be considered insane. This is not cool. Even though the null hypotheses are unrelated, rejecting any of them has the same consequence (the person will be declared insane). The tests are said to be part of a family, and the familywise error rate (FWER) in that case is 65%.
The gist of the Šidák correction is to find a replacement confidence level $\alpha^*$ for each test, such that the proportion of sane people declared insane would be 5%. Plugging the new level in the equation above, we get $(1-\alpha^*)^{20} = 0.95$, which solves to $\alpha^* = 1 - \sqrt[20]{0.95} = 0.0026$. In other words, the tests for individual disorders must have a specificity of 99.74% so that the whole procedure has only a 5% chance of declaring a sane person insane.
The general formula of the Šidák correction is $\alpha^* = 1 - \sqrt[n]{1-\alpha}$, where $\alpha^*$ is the replacement threshold for individual tests, $\alpha$ is the FWER, and $n$ is the number of tests performed. As regards multiple testing correction, it is customary to adjust the p-values instead of correcting the threshold. The Šidák-corrected p-value $P$ is thus $P' = P\frac{\alpha}{\alpha^*}$, and a null hypothesis will be rejected if $P' < \alpha$ (we compare the adjusted p-values to the original threshold).
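The numbers in the mental-disorders example are easy to reproduce; a short Python sketch:

```python
n, fwer = 20, 0.05

# Uncorrected: probability a sane patient fails at least one of 20 tests
p_any = 1 - (1 - 0.05) ** n
print(round(p_any, 3))            # 0.642, i.e. about 65%

# Sidak-corrected per-test level: solve (1 - a*)^n = 1 - fwer
a_star = 1 - (1 - fwer) ** (1 / n)
print(round(a_star, 5))           # 0.00256
assert abs((1 - a_star) ** n - (1 - fwer)) < 1e-12
```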
### The Bonferroni correction
There are times when you cannot assume independence between the tests and thus cannot use the Šidák correction. For instance, it may be known that tests for depression and bipolar disorder in the previous example tend to report positive together. In that case you cannot use the Šidák correction.
The idea of the Bonferroni correction is to replace the threshold of each test by a value $\alpha^*$ such that the FWER is less than 0.05. In that case we cannot know the false positive rate, but we can control it by an acceptable bound.
If we call $A_i$ the event that the patient tests positive for disorder $i$, the aim is $P(A_1 \cup \ldots \cup A_{20}) \leq 0.05$. Note that if we set $P(A_1) = \ldots = P(A_{20}) = \alpha^* = 0.05/20$, we obtain
$$\begin{eqnarray} P(A_1 \cup A_2 \cup \ldots \cup A_{20}) &=& P(A_1) + P(A_2 \cup \ldots \cup A_{20}) - P \left( A_1 \cap (A_2 \cup \ldots \cup A_{20}) \right) \\ &\leq& P(A_1) + P(A_2 \cup \ldots \cup A_{20}) \\ &\leq& P(A_1) + P(A_2) + P(A_3 \cup \ldots \cup A_{20}) \\ &\ldots& \\ &\leq& P(A_1) + P(A_2) + \ldots + P(A_{20}) = 20 \alpha^* = 0.05, \end{eqnarray}$$
where we have only used the axiomatic definition of probability and did not require independence, so the Bonferroni correction is more general than the Šidák correction. Consistently, the Bonferroni-adjusted level $\alpha^*$ is smaller (0.00250 versus 0.00256). But not that much smaller. The object of the next technical section is to show that, for a low FWER, these methods are asymptotically equivalent. It is somewhat surprising that the hypothesis of independence does not make a substantial difference for FWER control.
Assuming that the FWER $\alpha$ is 5%, we get the following approximation for the Šidák correction
$$1 - \sqrt[n]{0.95} = 1 - \exp\left( \frac{\log(0.95)}{n} \right) \sim - \frac{1}{n} \log(0.95) \approx \frac{0.0513}{n},$$
which shows that the relative difference between the two procedures is small when $\alpha = 0.05$. For a general FWER $\alpha$ smaller than 5% we get
$$1 - \exp\left( \frac{\log(1-\alpha)}{n} \right) \sim - \frac{1}{n} \log(1-\alpha) \approx \frac{\alpha}{n}.$$
This observation is perhaps the reason that the Šidák correction is sometimes mistakenly referred to as the Bonferroni correction. In any case, both corrections are considered to be too conservative by many statisticians. In other words, they do a good job when all the null hypotheses are true, but they perform poorly if one or more of the null hypotheses are false. To follow up on the previous example, setting the individual threshold from 5% to 0.25% makes it much less likely that a patient will be declared positive, even if s/he has the sickness.
### Sorted p-values
Before going into the detail of the Benjamini-Hochberg correction let us tackle a question that will turn out to be useful. What is the distribution of the p-value if the null hypothesis is true? By definition the probability of obtaining a p-value lower than 0.05 is 0.05. Similarly, the probability of obtaining a p-value lower than 0.01 is 0.01, and by extension, for any number $x$ between 0 and 1, the probability of obtaining a p-value lower than $x$ is $x$. The property $P(u \leq x) = x$ defines the cumulative density of a uniform random variable $U$ in $(0,1)$. So under the null hypothesis, p-values are uniformly distributed. Performing $n$ independent statistical tests when all null hypotheses are true is the same as collecting an independent and uniform sample of size $n$. As a consequence, testing multiple null hypotheses can be reduced to testing whether a sample is drawn from a uniform distribution.
Plotting the sorted p-values is a very good way to see whether they follow a uniform distribution. If all the null hypotheses are true, the sorted p-values will lay on that straight line joining 0 to 1. If some of the null hypotheses are not true, the corresponding p-values will tend to be smaller, which will skew the plot near the origin. The following technical section shows the R script that I used to produce the figure below.
Here is the script that generates the left panel.
```r
# Set up a reproducible random example.
set.seed(123)
# Initialize a vector of length 1000 to hold p-values.
p.values <- rep(NA, 1000)
# Perform 1000 one-sample t-tests on Gaussian samples (H0 is true).
for (i in 1:1000) {
  p.values[i] <- t.test(rnorm(100))$p.value
}
plot(sort(p.values), type='l', ylab="Sorted p-values")
```

And the one that produces the right panel.

```r
set.seed(123)
p.values <- rep(NA, 1000)
# Perform 970 one-sample t-tests on Gaussian samples (H0 is true),
# and 30 tests on Gaussian samples with mean 0.35 (H0 is false).
for (i in 1:970) {
  p.values[i] <- t.test(rnorm(100))$p.value
}
for (i in 971:1000) {
  p.values[i] <- t.test(rnorm(100, mean=0.35))$p.value
}
plot(sort(p.values)[1:200], type='l', ylab="First 200 sorted p-values")
lines(30, .2, col=2, type='h')
```
I plotted 1,000 sorted p-values when all 1,000 null hypotheses are true (left panel) and when 30 of them are false (right panel). The right plot is a zoom on the first 200 sorted p-values to emphasize the shape of the graph close to the origin. As you can appreciate, the left plot is a relatively straight line. The right plot also looks like a straight line, except that there is a flat region before the red line at index 30. This approach is very sensitive, as we can easily detect that ~ 3% of the hypotheses are false, but it is only qualitative. And this is where the Benjamini-Hochberg correction enters.
### FDR and the Benjamini-Hochberg correction
The idea of the Benjamini-Hochberg correction is to give a different threshold to each of the sorted p-values. The series of thresholds is $\left( \frac{0.05}{n}, \frac{2 \times 0.05}{n}, \frac{3 \times 0.05}{n}, \ldots \right)$, which lays on a straight line from the origin with a slope of $0.05/n$ as shown in the figure below where the data is the same as in the right panel above.
All the null hypotheses are accepted if the curve of the p-values is above the red line in a neighborhood of the origin. The FWER of that procedure is lower than 0.05 as we can easily show. The curve of the p-values is locally above the red line if and only if the lowest p-value is higher than $0.05/n$, or equivalently if all p-values are higher than $0.05/n$. This is the same criterion as the Bonferroni correction, which was shown to have a FWER lower than 0.05.
So what is the difference with Bonferroni correction then? In their seminal paper of 1995, Benjamini and Hochberg introduce the concept of false discovery rate (FDR). The typical scenario in a multiple testing situation is to reject a couple of null hypotheses and accept the others. FWER, as the probability of rejecting at least one null hypothesis when they are actually all true, does not distinguish between rejecting one hypothesis and rejecting them all. However these situations are very different. One may be right in rejecting one null hypothesis only, and wrong in rejecting them all, so we would like to have a way to tell which of the two is more accurate. The FDR is by definition the expected number of false discoveries, i.e the mistakes in the set of rejected hypotheses. Put simply, if the FDR of a set of rejected null hypotheses is 5%, it means that 5% of them are expected to be true (rejected by mistake).
The full Benjamini-Hochberg procedure is to reject all the hypotheses for which the p-value is below the red line (up until the first intersection if the lines cross several times), and accept the others. The main contribution of the authors was to prove that when the tests are independent, the FDR of the rejected null hypotheses is lower than 5%. The proof of this theorem is not difficult, but it is too long and tedious to fit in this post.
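The post shows no code, but the procedure is short to write down. The sketch below (Python with numpy; the function name and test values are mine) uses the standard step-up formulation: find the largest sorted p-value below its threshold and reject everything up to that point.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean array: True where the null hypothesis is rejected.

    Step-up formulation: sort the p-values, find the largest index k
    with p_(k) <= k * alpha / n, and reject the hypotheses with the
    k smallest p-values.
    """
    p = np.asarray(pvals)
    n = len(p)
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, n + 1) / n   # the "red line"
    below = p[order] <= thresholds
    reject = np.zeros(n, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])           # last crossing point
        reject[order[:k + 1]] = True
    return reject
```

For instance, with p-values (0.001, 0.01, 0.03, 0.2) and alpha = 0.05, the thresholds are (0.0125, 0.025, 0.0375, 0.05): the first three hypotheses are rejected, whereas the single Bonferroni threshold 0.0125 would reject only the first two.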
Note that the hypotheses rejected by applying the same procedure with the Bonferroni threshold form a subset of the hypotheses rejected with the Benjamini-Hochberg threshold. This is why the Bonferroni procedure is considered too conservative. It can be used to ensure that the FDR is lower than 5%, but the actual FDR might be well below that value. To follow up on the numerical example pictured above, where 30 null hypotheses are false, the Bonferroni procedure rejects 5 hypotheses while the Benjamini-Hochberg procedure rejects 17. The Benjamini-Hochberg procedure does not reject all the false null hypotheses, it just finds a bigger set with fewer than 5% expected mistakes.
### The rise and fall of multiple testing
In spite of all the words of caution, correcting for multiple testing is sometimes considered metaphysical or optional. Even though I hold the opposite opinion, that correcting for multiple testing is practical and mandatory, I admit that it is sometimes hard to know which correction to apply. The difficulty is not in choosing a procedure, or computing the thresholds; it is in understanding what you are actually doing.
As an example, suppose a professional statistician works for 100 different clients with a single null hypothesis each. After about a year s/he has tested all the null hypotheses. Should s/he apply an FDR correction for performing 100 tests? Take a moment to think about it. Form your own opinion before you read on.
I believe the answer is no. If there were 100 statisticians performing one test each, they would (most probably) come to the same conclusions as a single statistician performing all the tests, but then we would not feel that there is any reason to apply a correction for multiple testing. When would you apply the correction then?
Suppose that all the clients of the statistician are start-up companies and that they all have the same null hypothesis, such that rejecting it means that they are doing better than their competitors. If the statistician is after extra money, s/he could buy stock in some of these companies. For that s/he has to choose a trusted subset expected to contain fewer than 5% false positives, which s/he could do with the Benjamini-Hochberg procedure.
The paradox is that the statistician will tell some companies that they are doing better than their competitors, but s/he will not invest in those companies. The reason is that these are two different questions. If all of the clients are doing worse than their competitors, on average the statistician will tell 5 of them that they are doing well. That subset consists of 100% false positives. In short, knowing that you have a 5% error rate when the null hypothesis is true does not guarantee in any way that only 5% of the null hypotheses you have rejected are actually true.
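This can be checked by simulation. In the sketch below (Python with numpy; the client count and level mirror the example, the seed and number of simulated years are arbitrary assumptions), all 100 null hypotheses are true, yet per-test 5% testing flags about 5 companies every year, all of them false positives:

```python
import numpy as np

rng = np.random.default_rng(1)
n_clients, alpha, n_years = 100, 0.05, 1000

flagged = []
for _ in range(n_years):
    # when the null hypothesis is true, the p-value is uniform on (0, 1)
    p = rng.uniform(size=n_clients)
    flagged.append(np.sum(p < alpha))

# on average ~5 companies per year are told they beat their competitors,
# even though none of them actually does
print(np.mean(flagged))
```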
By construction, statistical testing procedures control the rate of false positives per test. If you want to control the rate of false positives per identified positive you should use FDR. This toy example illustrates one of the many pitfalls of multiple testing correction. There is much more to tell about the subject, but I feel I have already covered a lot of ground and every post, even a “focus on”, has to come to an end.
https://www.physicsforums.com/threads/symmetric-bilinear-what-a-drag.694647/
# Symmetric Bilinear: What a Drag!
1. May 31, 2013
### Tenshou
I am having problems showing the following:
$f$ and $g$ are two linearly independent functions in $E$ and $\theta : \mathbb{R} \to \mathbb{R}$ is an additive map such that $\theta(\mu \nu) = \theta(\mu)\nu +\mu \theta(\nu); \mu,\nu \in \mathbb{R}$. Show that the function;
$\psi(x) = f(x)\,\theta\left[g(x)\right] - g(x)\,\theta\left[f(x)\right]$
satisfies the parallelogram property and the relation $\psi\left(\lambda x\right)=\lambda ^{2}\psi\left(x\right)$.
Okay, so, I don't know where to start; I understand that I can begin to look at the picture of the parallelogram identity and show the similarities, like where the minus sign comes from. If you picture the two functions as vectors, you can see that the minus sign comes from the direction in which the $g$ vector is directed,
and the same with $f$, but after this, I cannot "show" that it satisfies the second property.
PS: Think of $\psi\left(x\right)$ as being the inner product of $x$ with itself.
Last edited: May 31, 2013
2. May 31, 2013
### chiro
Hey Tenshou.
The relation looks like it's a standard norm property (i.e. a quadratic form). Is that useful for you? (If you can show it's a quadratic form or a norm-like object the rest should follow).
3. Jun 21, 2013
### Tenshou
Thanks Chiro that actually seems like it will help.
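(Addendum for later readers: if $f$ and $g$ are assumed linear, so that $f(\lambda x) = \lambda f(x)$, the second property follows by direct computation from the derivation rule $\theta(\mu\nu) = \theta(\mu)\nu + \mu\theta(\nu)$:

```latex
\begin{aligned}
\psi(\lambda x) &= \lambda f(x)\,\theta\bigl(\lambda g(x)\bigr) - \lambda g(x)\,\theta\bigl(\lambda f(x)\bigr) \\
  &= \lambda f(x)\bigl[\theta(\lambda)g(x) + \lambda\,\theta\bigl(g(x)\bigr)\bigr]
   - \lambda g(x)\bigl[\theta(\lambda)f(x) + \lambda\,\theta\bigl(f(x)\bigr)\bigr] \\
  &= \lambda\,\theta(\lambda)\bigl[f(x)g(x) - g(x)f(x)\bigr]
   + \lambda^{2}\bigl[f(x)\,\theta\bigl(g(x)\bigr) - g(x)\,\theta\bigl(f(x)\bigr)\bigr] \\
  &= \lambda^{2}\,\psi(x).
\end{aligned}
```

The cross terms in the first bracket cancel, which is exactly where the minus sign in the definition of $\psi$ matters.)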
https://community.wolfram.com/groups/-/m/t/910603
# What is the "Arduino required sketch"?
Posted 2 years ago
Under a topic labeled Basic Examples, I see the following: "Open a connection to the Arduino and upload the required sketch to the device:"

    In[1] = arduino = DeviceOpen["Arduino", "COM3"];
            DeviceConfigure[arduino, "Upload"];

What is the required sketch? Is it StandardFirmata or one unique to the Wolfram Language? Where is the sketch file located in the software installation on the Raspberry Pi? Is there documentation available for communicating with this sketch if it is not StandardFirmata?

I am running v10 of Wolfram/Mathematica on a Raspberry Pi 3. Note that COM3 above must be replaced by the Raspbian version of the USB port name that the Arduino is connected to.
Hi Ken,

The required sketch mentioned here is automatically uploaded using DeviceConfigure. If you would like to look at this sketch you can find it at:

    PacletResource["DeviceDriver_Arduino", "Sketch"]

Note that the sketch is a Mathematica formatted template, which is configured by DeviceConfigure. To look at the fully generated template you can run:

    DeviceConfigure["Arduino", "Upload" -> {"Debug" -> True}]

Then you will see the temporary folder (which is normally deleted if "Debug" isn't set to True) listed where the sketch file is located.

Note that due to a bug with the Arduino IDE, and this sketch's usage of functions returning pointers to structs, you will not be able to open this file with the Arduino IDE and immediately upload it. For more information on this, see this blog.

Thanks,
Ian
https://www.mathway.com/examples/algebra/complex-numbers-and-vector-analysis/finding-all-complex-number-solutions?id=37
Algebra Examples
Take the root of both sides of the equation to eliminate the exponent on the left side.
The complete solution is the result of both the positive and negative portions of the solution.
Rewrite as .
The complete solution is the result of both the positive and negative portions of the solution.
First, use the positive value of the ± to find the first solution.
Next, use the negative value of the ± to find the second solution.
The complete solution is the result of both the positive and negative portions of the solution.
https://www.springerprofessional.de/mathematics-of-energy-and-climate-change/2387596
## About this Book
The focus of this volume is research carried out as part of the program Mathematics of Planet Earth, which provides a platform to showcase the essential role of mathematics in addressing planetary problems and creating a context for mathematicians and applied scientists to foster mathematical and interdisciplinary developments that will be necessary to tackle a myriad of issues and meet future global challenges.
Earth is a planet with dynamic processes in its mantle, oceans and atmosphere creating climate, causing natural disasters and influencing fundamental aspects of life and life-supporting systems. In addition to these natural processes, human activity has increased to the point where it influences the global climate, impacts the ability of the planet to feed itself and threatens the stability of these systems. Issues such as climate change, sustainability, man-made disasters, control of diseases and epidemics, management of resources, risk analysis and global integration have come to the fore.
Written by specialists in several fields of mathematics and applied sciences, this book presents the proceedings of the International Conference and Advanced School Planet Earth, Mathematics of Energy and Climate Change held in Lisbon, Portugal, in March 2013, which was organized by the International Center of Mathematics (CIM) as a partner institution of the international program Mathematics of Planet Earth 2013. The book presents the state of the art in advanced research and ultimate techniques in modeling natural, economical and social phenomena. It constitutes a tool and a framework for researchers and graduate students, both in mathematics and applied sciences.
## Table of Contents
### Max-Stability at Work (or Not): Estimating Return Levels for Daily Rainfall Data
Abstract
When we are dealing with meteorological data, usually one is interested in the analysis of maximal observations and records over time, since these entail negative consequences: risk events. Extreme Value Theory has proved to be a powerful and useful tool to describe situations that may have a significant impact in many application areas, where knowledge of the behavior of the tail of a distribution is of main interest. The classical Gnedenko theorem establishes that there are three types of possible limit max-stable distributions for maxima of blocks of independent and identically distributed (iid) observations. However, for the types of data to which extreme value models are commonly applied, temporal independence is usually an unrealistic assumption and one could ask about the appropriateness of max-stable models. Luckily, stationary and weakly dependent series follow the same distributional limit laws as those of independent series, although with parameters affected by dependence. For rainfall data, we will play with these results, analyzing max-stability at work for rare event estimation and the real impact of "neglecting" the iid property.
Maria Isabel Fraga Alves
### Impacts of Vaccination and Behavior Change in the Optimal Intervention Strategy for Controlling the Transmission of Tuberculosis
Abstract
A dynamical model of TB for two age groups that incorporates vaccination of children at birth, behavior change in the adult population, and treatment of infectious children and adults is formulated and analyzed. Three types of control measures (vaccination, behavior change and anti-TB treatment strategies) are applied with separate rates for children and adults to analyze the solution of the controlled system by using the concept of optimal control theory. It is indicated that vaccination at birth and treatment for both age groups have an impact in reducing the value of the reproduction number ($$\mathcal{R}_{o}$$) whereas behavior modification does not have any impact on $$\mathcal{R}_{o}$$. Pontryagin's Minimum Principle has been used to characterize the optimal level of controls applied on the model. It is shown that the optimal combination strategy of vaccination, behavior change and treatment for the two age groups can help to reduce the disease epidemic with minimum cost of interventions, in the shortest possible time.
Temesgen Debas Aweke, Semu Mitiku Kassa
### Modeling of Extremal Earthquakes
Abstract
Natural hazards, such as big earthquakes, affect the lives of thousands of people at all levels. Extreme-value analysis is an area of statistical analysis particularly concerned with the systematic study of extremes, providing a useful insight into fields where extreme values are likely to occur. The characterization of extreme seismic activity is a fundamental basis for risk investigation and safety evaluation. Here we study large earthquakes in the scope of Extreme Value Theory. We focus on the tails of the seismic moment distributions and we propose to estimate relevant parameters, like the tail index and high order quantiles, using the geometric-type estimators. In this work we combine two approaches, namely an exploratory oriented analysis and an inferential study. The validity of the required assumptions is verified, and both geometric-type and Hill estimators are applied for the tail index and quantile estimation. A comparison between the estimators is performed, and their application to the considered problem is illustrated and discussed in the corresponding context.
Margarida Brito, Laura Cavalcante, Ana Cristina Moreira Freitas
### Detonation Wave Solutions and Linear Stability in a Four Component Gas with Bimolecular Chemical Reaction
Abstract
We consider a four component gas undergoing a bimolecular chemical reaction of type $$A_{1} + A_{2} \rightleftharpoons A_{3} + A_{4}$$, described by the Boltzmann equation (BE) for chemically reactive mixtures. We adopt hard-sphere elastic cross sections and modified line-of-centers reactive cross sections depending on both the activation energy and geometry of the reactive collisions. Then we consider the hydrodynamic limit specified by the reactive Euler equations, in an earlier stage of the chemical reaction, when the gas is far from equilibrium (slow chemical reaction). In particular, the rate of the chemical reaction obtained in this limit shows an explicit dependence on the reaction heat and on the activation energy. Starting from this kinetic setting, we study the dynamics of planar detonation waves for the considered reactive gas and characterize the structure of the steady detonation solution. Then, the problem of the hydrodynamic linear stability of the detonation solution is treated, investigating the response of the steady solution to small rear boundary perturbations. A numerical shooting technique is used to determine the unstable modes in a pertinent parametric space for the considered problem. Numerical simulations are performed for the Hydrogen-Oxygen system and some representative results are presented, regarding the steady detonation wave solution and linear stability.
F. Carvalho, A. W. Silva, A. J. Soares
### Mathematical Aspects of Coagulation-Fragmentation Equations
Abstract
We give an overview of the mathematical literature on the coagulation-like equations, from an analytic deterministic perspective. In Sect. 1 we present the coagulation type equations more commonly encountered in the scientific and mathematical literature and provide a brief historical overview of relevant works. In Sect. 2 we present results about existence and uniqueness of solutions in some of those systems, namely the discrete Smoluchowski and coagulation-fragmentation: we start with a brief description of the function spaces, and then review the results on existence of solutions with a brief description of the main ideas of the proofs. This part closes with the consideration of uniqueness results. In Sects. 3 and 4 we are concerned with several aspects of the behaviour of solutions. We pay special attention to the long time convergence to equilibria, self-similar behaviour, and density conservation or lack thereof.
F. P. da Costa
### Resampling-Based Methodologies in Statistics of Extremes: Environmental and Financial Applications
Abstract
Resampling computer intensive methodologies, like the jackknife and the bootstrap, are important tools for a reliable semi-parametric estimation of parameters of extreme or even rare events. Among these parameters we mention the extreme value index, ξ, the primary parameter in statistics of extremes. Most of the semi-parametric estimators of this parameter show the same type of behaviour: nice asymptotic properties, but a high variance for small k, the number of upper order statistics used in the estimation, a high bias for large k, and the need for an adequate choice of k. After a brief reference to some estimators of the aforementioned parameter and their asymptotic properties, we present an algorithm that deals with an adaptive reliable estimation of ξ. Applications of these methodologies to the analysis of environmental and financial data sets are undertaken.
M. Ivette Gomes, Lígia Henriques-Rodrigues, Fernanda Figueiredo
### On the Optimal Control of Flow Driven Dynamic Systems
Abstract
The objective of this work is to develop a mathematical framework for the modeling, control and optimization of dynamic control systems whose state variable is driven by interacting ODE’s (ordinary differential equations) and solutions of PDE’s (partial differential equations). The ultimate goal is to provide a sound basis for the design and control of new advanced engineering systems arising in many important classes of applications, some of which may encompass, for example, underwater gliders and mechanical fishes. For now, the research effort has been focused in gaining insight by applying necessary conditions of optimality for shear flow driven dynamic control systems which can be easily reduced to problems with ODE dynamics. In this article we present and discuss the problem of minimum time control of a particle advected in a Couette and Poiseuille flows, and solve it by using the maximum principle.
Teresa Grilo, Sílvio M. A. Gama, Fernando Lobo Pereira
### An Overview of Network Bifurcations in the Functionalized Cahn-Hilliard Free Energy
Abstract
The functionalized Cahn-Hilliard (FCH) free energy models interfacial energy in amphiphilic phase-separated mixtures. Its minimizers and quasi-minimizers encompass rich classes of network morphologies with detailed inner layers incorporating bilayers, pore, pearled pore, and micelle type structures. We present an overview of the stability of the network morphologies as well as the competitive evolution of bilayer and pore morphologies under a gradient flow in three space-dimensions.
Noa Kraitzman, Keith Promislow
### The Economics of Ethanol: Use of Indirect Policy Instruments
Abstract
General equilibrium models typically ignore environmental goods because it is assumed that they have zero price. In the United States the Renewable Fuel Standard was introduced to offset the carbon emissions from gasoline by substituting ethanol. The model in this study extends the standard general equilibrium approach to consider both positive and negative externalities. The negative externality is due to gasoline consumption while the positive externality is from the substitution of ethanol for gasoline.
Charles B. Moss, Andrew Schmitz, Troy G. Schmitz
### Geostatistical Analysis in Extremes: An Overview
Abstract
Classical statistics of extremes is very well developed in the univariate context for modeling and estimating parameters of rare events. Whenever rain, snow, storms, hurricanes, earthquakes, and so on happen, the analysis of extremes is of paramount importance. However, such rare events often present a temporal aspect, a spatial aspect or both. Classical geostatistics, widely used for spatial data, is mostly based on the multivariate normal distribution, which is inappropriate for modeling tail behavior. The analysis of spatial extreme data, an active research area, lies at the intersection of two statistical domains: extreme value theory and geostatistics. Some statistical tools are already available for the spatial modeling of extremes, including Bayesian hierarchical models, copulas and max-stable random fields. The purpose of this chapter is to present an overview of basic spatial analysis of extremes, in particular reviewing max-stable processes. A real case study of annual maxima of daily rainfall measurements in the North of Portugal is briefly discussed, as well as the main functions in the R environment for doing such analysis.
M. Manuela Neves
### Reducing the Minmax Regret Robust Shortest Path Problem with Finite Multi-scenarios
Abstract
The minmax regret robust shortest path problem is a combinatorial optimization problem that can be defined over networks where costs are assigned to arcs under a given scenario. This model can be continuous or discrete, depending on whether costs vary within intervals or within discrete sets of values. The problem consists in finding a path that minimizes the maximum deviation from the shortest paths over all scenarios. This work focuses on designing tools to reduce the network, in order to make easier the search for an optimum solution. With this purpose, methods to identify useless nodes to be removed and to detect arcs that surely belong to the optimum solution are developed. Two known algorithms for the robust shortest path problem are tested on random networks with and without these preprocessing rules.
Marta M. B. Pascoal, Marisa Resende
### Mathematics of Energy and Climate Change: From the Solar Radiation to the Impacts of Regional Projections
Abstract
This chapter focuses on the natural and anthropogenic drivers of climate change and on the assessment of potential impacts of regional projections for different scenarios of future climate. Internal and external forcing factors of climate change are associated to changes in the most important processes of energy transfer with influence on the energy balance of the climate system. The role of the solar activity, regular variations in the orbital parameters of the Earth and the radiative forcing which comprises the changes in the chemical composition of the atmosphere and the characteristics of the radiative processes that occur in the atmosphere and on the surface of the Earth will be discussed. Recent evidences of climate change and the general characteristics of the climate models used in climate projection will be presented. The chapter ends with results of some case studies of potential impacts of regional climate change projections in Portugal, namely in forest fire regime, extreme precipitation intensity and in the design of storm water drainage infrastructures.
Mário Gonzalez Pereira
### Infinite Horizon Optimal Control for Resources Management in Agriculture
Abstract
This article concerns an optimal control based framework for the optimization of resources in agriculture taking into account environmental sustainability. A decentralized, adaptive, hierarchic architecture to support long term coordinated decision-making strategies is required in order to achieve the common long term desired equilibrium in the environment state and, at the same time, allow the economic sustainability of a number of distributed farm producers with possibly conflicting short term economic goals. The overall coordination is achieved by an adaptive Model Predictive Control structure that, on the one hand, promotes the long term common good by approximating the solution to an infinite horizon optimal control problem, and, on the other hand, provides agro-chemical indicators to each one of the local farmers. We will emphasize the importance of optimality results for infinite horizon optimal control problems of the Mayer type, depending on the state at the final time while satisfying constraints at both trajectory endpoints.
Fernando Lobo Pereira
### Distributed Reasoning
Abstract
This paper discusses the problem of learning a global model from local information. We consider ubiquitous streaming data sources, such as sensor networks, and discuss efficient learning distributed algorithms. We present the generic framework of distributed sources of data, an illustrative algorithm to monitor the global state of the network using limited communication between peers, and an efficient distributed clustering algorithm.
Pedro Rodrigues, João Gama
### Multiscale Internet Statistics: Unveiling the Hidden Behavior
Abstract
Being able to characterize and predict the behavior of Internet users based only on layer 2 statistics can be very important for network managers and/or network operators. Operators can perform a low level monitoring of the communications at the network entry points, independently of the data encryption level and even without being associated with the network itself. Based on this low level data, it is possible to optimize the access service, offer new security threat detection services and infer the users' behavior, which consists of identifying the underlying web application that is responsible for the layer 2 traffic at different time instants and characterizing the usage dynamics of the different web applications. Several identification methodologies have been proposed over the years to classify and identify IP applications, each one having its own advantages and drawbacks: port-based analysis, deep packet inspection, behavior-based approaches, learning theory, among others. Although some of them are very efficient when applied to specific scenarios, all approaches fail when only low level statistics are available or under data encryption restrictions. In this work, we propose the use of multiscaling traffic characteristics to differentiate web applications and the use of a Markovian model to characterize the dynamics of user actions over time. By applying the proposed methodology to Wi-Fi layer 2 traffic generated by users accessing different common web services/contents through HTTP (namely social networking, web news and web-mail applications), it was possible to achieve a good prediction of the different users' behaviors. The classification results obtained show that the developed multiscaling traffic Markovian model has the potential to efficiently identify, model and predict Internet users' behaviors based only on layer 2 traffic statistics.
Paulo Salvador, António Nogueira, Eduardo Rocha
### The Role of Clouds, Aerosols and Galactic Cosmic Rays in Climate Change
Abstract
A review of the role played by clouds, by natural and anthropogenic aerosols and by their interaction, on climate, is presented. The suggestion that galactic cosmic rays may affect the interaction between clouds/aerosols and climate is here discussed in the context of the CLOUD (Cosmics Leaving Outdoor Droplets) experiment at CERN. The experiment has shown that cosmic rays enhance aerosol nucleation and cloud condensation but the effect is too weak to have an impact on climate during a solar cycle or over the last century. The CLOUD experiment has also revealed a nucleation mechanism involving the formation of clusters containing sulphuric acid and oxidized organic molecules.
Filipe Duarte Santos
### Long Time Behaviour and Self-similarity in an Addition Model with Slow Input of Monomers
Abstract
We consider a coagulation equation with constant coefficients and a time-dependent power-law input of monomers. We discuss the asymptotic behaviour of solutions as $$t \rightarrow \infty$$, and we prove that solutions converge to a similarity profile along the non-characteristic direction.
Rafael Sasportes
### Modelling the Fixed Bed Adsorption Dynamics of CO2/CH4 in 13X Zeolite for Biogas Upgrading and CO2 Sequestration
Abstract
The sorption of $$\mathrm{CO}_{2}$$ and $$\mathrm{CH}_{4}$$ in binderless beads of 13X zeolite has been investigated between 313 and 423 K and total pressures up to 0.5 MPa through fixed bed adsorption experiments. Experimental $$\mathrm{CO}_{2}/\mathrm{CH}_{4}$$ selectivities range from 37 at a low pressure of 0.0667 MPa to approximately 5 at the high temperature of 423 K. The breakthrough curves measured show a plateau of pure $$\mathrm{CH}_{4}$$ of approximately 6 min depending on the operating conditions chosen. A mathematical model was developed and tested, predicting with good accuracy the behaviour of the fixed bed adsorption experiments and making it a valuable tool for the design of cyclic adsorption processes for biogas upgrading and $$\mathrm{CO}_{2}$$ capture using 13X zeolite.
José A. C. Silva, Alírio E. Rodrigues
### Detection of Additive Outliers in Poisson INAR(1) Time Series
Abstract
Outlying observations are commonly encountered in the analysis of time series. In this paper a Bayesian approach is employed to detect additive outliers in order one Poisson integer-valued autoregressive time series. The methodology is informative and allows the identification of the observations which require further inspection. The procedure is illustrated with simulated and observed data sets.
Maria Eduarda Silva, Isabel Pereira
### From Ice to Penguins: The Role of Mathematics in Antarctic Research
Abstract
Mathematics underpins all modern Antarctic science as illustrated by numerous activities carried out during the international year “Mathematics for Planet Earth”. Here, we provide examples of some ongoing applications of mathematics in a wide range of Antarctic science disciplines: (1) Feeding and foraging of marine predators; (2) Fisheries management and ecosystem modelling; and (3) Climate change research. Mathematics has allowed the development of diverse models of physical and ecological processes in the Antarctic. It has provided insights into the past dynamics of these systems and allows projections of potential future conditions, which are essential for understanding and managing the effects of fishing and climate change. Highly specific methods and models have been developed to address particular questions in each discipline, from the detailed analyses of remote-sensed predator tracking data to the assessment of the outputs from multiple global climate models. A key issue, that is common to all disciplines, is how to deal with the inherent uncertainty that arises from limited data availability and the assumptions or simplifications that are necessary in the analysis and modeling of interacting processes. With the continued rapid development of satellite-based and remote observation systems (e.g. ocean drifters and automatic weather stations), and of new methods for genetic analyses of biological systems, a step-change is occurring in the magnitude of data available on all components of Antarctic systems. These changes in data availability have already led to the development of new methods and algorithms for their efficient collection, validation, storage and analysis. Further progress will require the development of a wide range of new and innovative mathematical approaches, continuing the trend of world science becoming increasingly international and interdisciplinary.
José C. Xavier, S. L. Hill, M. Belchier, T. J. Bracegirdle, E. J. Murphy, J. Lopes Dias
https://zbmath.org/?q=an:0892.42017
# zbMATH — the first resource for mathematics
Weyl-Heisenberg frames and Riesz bases in $L_2(\mathbb{R}^d)$. (English) Zbl 0892.42017
From the introduction: “The present paper is the second in a series of three, all devoted to the study of shift-invariant frames and shift-invariant stable (= Riesz) bases for $H:= L_2(\mathbb{R}^d)$, $d\ge 1$, or a subspace of it. In the first paper [RS1] [Can. J. Math. 47, No. 5, 1051-1094 (1995; Zbl 0838.42016)], we studied such bases under the mere assumption that the basis set can be written as a collection of shifts (namely, integer translates) of a set of generators $\Phi$. The present paper analyzes Weyl-Heisenberg (WH, known also as Gaborian) frames and stable bases. Aside from specializing the general methods and results of [RS1] to this important case, we exploit here the special structure of the WH set, and in particular the duality between the shift operator and the modulation operator, the latter being absent in the context of general shift-invariant sets. In the third paper [RS3] [J. Funct. Anal. 148, No. 2, 408-447 (1997; Zbl 0891.42018)], we present applications of the results of [RS1] to wavelet (or affine) frames. The flavour of the results there is quite different; wavelet sets are not shift-invariant, and the main effort of [RS3] is to show that, nevertheless, the basic analysis of [RS1] does apply to that case as well”.
##### MSC:
42C15 General harmonic expansions, frames
##### References:
[1] J. J. Benedetto and D. F. Walnut, Gabor frames for $L^2$ and related spaces, in Wavelets: Mathematics and Applications (J. Benedetto and M. Frazier, eds.), Stud. Adv. Math., CRC, Boca Raton, FL, 1994, pp. 97-162. Zbl 0887.42025
[2] C. K. Chui, An Introduction to Wavelets, Wavelet Analysis and its Applications, vol. 1, Academic Press, Boston, MA, 1992. Zbl 0925.42016
[3] I. Daubechies, The wavelet transform, time-frequency localization and signal analysis, IEEE Trans. Inform. Theory 36 (1990), no. 5, 961-1005. Zbl 0738.94004
[4] I. Daubechies, Ten lectures on wavelets, CBMS-NSF Regional Conference Series in Applied Mathematics, vol. 61, SIAM, Philadelphia, PA, 1992. Zbl 0776.42018
[5] I. Daubechies, A. Grossmann, and Y. Meyer, Painless nonorthogonal expansions, J. Math. Phys. 27 (1986), no. 5, 1271-1283. Zbl 0608.46014
[6] I. Daubechies, H. J. Landau, and Z. Landau, Gabor time-frequency lattices and the Wexler-Raz identity, J. Fourier Anal. Appl. 1 (1995), no. 4, 437-478. Zbl 0888.47018
[7] C. de Boor, R. DeVore, and A. Ron, The structure of finitely generated shift-invariant spaces in $L_2(\mathbb{R}^d)$, J. Funct. Anal. 119 (1994), no. 1, 37-78. Zbl 0806.46030
[8] R. J. Duffin and A. C. Schaeffer, A class of nonharmonic Fourier series, Trans. Amer. Math. Soc. 72 (1952), 341-366. Zbl 0049.32401
[9] H. G. Feichtinger and K. Gröchenig, Banach spaces related to integrable group representations and their atomic decompositions. I, J. Funct. Anal. 86 (1989), no. 2, 307-340. Zbl 0691.46011
[10] C. Heil and D. Walnut, Continuous and discrete wavelet transforms, SIAM Rev. 31 (1989), no. 4, 628-666. Zbl 0683.42031
[11] H. Helson, Lectures on Invariant Subspaces, Academic Press, New York, 1964. Zbl 0119.11303
[12] A. J. E. M. Janssen, Duality and biorthogonality for Weyl-Heisenberg frames, J. Fourier Anal. Appl. 1 (1995), no. 4, 403-436. Zbl 0887.42028
[13] J. Ramanathan and T. Steger, Incompleteness of sparse coherent states, Appl. Comput. Harmon. Anal. 2 (1995), no. 2, 148-153. Zbl 0855.42024
[14] M. Rieffel, von Neumann algebras associated with pairs of lattices in Lie groups, Math. Ann. 257 (1981), no. 4, 403-418. Zbl 0486.22004
[15] A. Ron and Z. Shen, Frames and stable bases for shift-invariant subspaces of $L_2(\mathbb{R}^d)$, Canad. J. Math. 47 (1995), no. 5, 1051-1094. Zbl 0838.42016
[16] A. Ron and Z. Shen, Frames and stable bases for subspaces of $L_2(\mathbb{R}^d)$: The duality of Weyl-Heisenberg sets, in Proceedings of the Lanczos International Centenary Conference (Raleigh, NC, 1993; D. Brown et al., eds.), SIAM, Philadelphia, 1994, pp. 422-425.
[17] A. Ron and Z. Shen, Affine systems in $L_2(\mathbb{R}^d)$: The analysis of the analysis operator, J. Funct. Anal. 148 (1997), no. 2, 408-447. Zbl 0891.42018
[18] R. Tolimieri and R. S. Orr, Characterization of Weyl-Heisenberg frames via Poisson summation relationships, Proc. ICASSP 92-4 (1992), 277-280.
[19] R. Tolimieri and R. S. Orr, Poisson summation, the ambiguity function, and the theory of Weyl-Heisenberg frames, J. Fourier Anal. Appl. 1 (1995), no. 3, 233-247. Zbl 0885.94008
[20] J. Wexler and S. Raz, Discrete Gabor expansions, Signal Processing 21 (1990), 207-220.
[21] M. Zibulski and Y. Y. Zeevi, Matrix algebra approach to Gabor-scheme analysis, EE Pub. 856, Technion Israel Institute of Technology, September 1992; M. Zibulski and Y. Y. Zeevi, Gabor representation with oversampling, Conf. on Visual Communication and Image Processing, Proc. SPIE 1818 (1992), 976-984.
http://math.stackexchange.com/questions/186519/natural-solutions-for-9m9n-mn-and-9m9n-2m2n2/186523
# natural solutions for $9m+9n=mn$ and $9m+9n=2m^2n^2$
Please help me find the natural solutions for $9m+9n=mn$ and $9m+9n=2m^2n^2$ where m and n are relatively prime.
I tried solving the first equation in the following way: $9m+9n=mn \rightarrow (9-n)m+9n=0$ $\rightarrow m=-\frac{9n}{9-n}$
$$mn=9n+9m \Rightarrow (m-9)(n-9)=81$$
This equation is very easy to solve, just keep in mind that even if $m,n$ are positive, $m-9,n-9$ could be negative. But there are only 6 ways of writing 81 as the product of two integers.
The second one is trickier, but if $mn >9$ then it is easy to prove that
$$2m^2n^2> 18mn > 9m+9n$$
Added: Also, since $9|2m^2n^2$ it follows that $3|mn$. Combining this with $mn \leq 9$ and $m|9n$, $n|9m$ immediately solves the equation.
P.S. Your approach also works, if you do Polynomial long division you will get $\frac{9n}{n-9}=9 +\frac{81}{n-9}$. Thus $n-9$ is a divisor of $81$.
P.P.S. Alternately, for the second equation, if you use $2\sqrt{mn} \leq m+n$ you get
$$18 \sqrt{mn} \leq 9(m+n)=2m^2n^2$$
Thus $$(mn)^3 \geq 81$$ which implies $mn=0 \text{ or } mn \geq 5$.
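As a quick sanity check (an addition, not part of the original answer), the first equation can be brute-forced over a modest range; the solutions found are exactly those coming from the positive divisor pairs of 81 via $(m-9)(n-9)=81$:

```python
# Brute-force search for natural solutions of 9m + 9n = m*n.
# By the identity above, solutions correspond to divisor pairs of 81.
solutions = [(m, n) for m in range(1, 200) for n in range(1, 200)
             if 9 * m + 9 * n == m * n]
print(solutions)  # [(10, 90), (12, 36), (18, 18), (36, 12), (90, 10)]
```

The negative divisor pairs of 81 force $m$ or $n$ to be zero or negative, which is why only these five pairs appear.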
Hint: if $m$ and $n$ are relatively prime, $mn$ and $m+n$ are relatively prime.
why is that true? @RobertIsrael – Jorge Fernández Aug 24 '12 at 22:57
@Khromonkey If they are not they have a common prime divisor. But if $p|mn$ it must divide one of them. And if $p$ also divides their sum...... (you should be able to finish the argument) – N. S. Aug 24 '12 at 23:02
@Khromonkey Suppose $(a,b)=1$. We show that $(ab,a+b)=1$. Indeed, suppose $d$ divides both $a+b$ and $ab$. Then it divides $a(a+b)-ab=a^2$ and it divides $b(a+b)-ab=b^2$. But if $(a,b)=1$, then $(a^2,b^2)=1$, so $(ab,a+b)=1$, as desired. – Pedro Tamaroff Aug 24 '12 at 23:05
@RobertIsrael ok so then m and n are not relatively prime then what? – Jorge Fernández Sep 12 '12 at 22:18
You asked for solutions where $m$ and $n$ are relatively prime. – Robert Israel Sep 12 '12 at 23:49
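The lemma discussed in the comments ($\gcd(m,n)=1$ implies $\gcd(mn, m+n)=1$) is easy to sanity-check numerically; a quick sketch:

```python
from math import gcd

# Check over a small grid: whenever gcd(m, n) == 1,
# m*n and m + n must also be coprime.
ok = all(gcd(m * n, m + n) == 1
         for m in range(1, 60)
         for n in range(1, 60)
         if gcd(m, n) == 1)
print(ok)  # True
```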
https://txcorp.com/images/docs/usim/latest/in_depth/USimHS_Tutorial_Lesson_2.html
# Using USim to Solve Multi-Species Reactive Flows¶
Multi-species transport in cylindrical coordinates with reactions is demonstrated in this example. A hypothetical gas consisting of three species (N2, N, O2) is considered. Mass transport of the individual species is solved along with the Euler equations. Rate equations are solved in USim to obtain the change in concentration of the species due to chemical reactions. The rate of change of each species is obtained from the reaction rates and added to the species transport equations as sources. Similarly, the change in energy is incorporated using the energy of formation of each species. The related input file can be found in the Blunt-Body Reentry Vehicle (ramC.pre) example of USimHS.
## Multi-Species Mass Transport¶
Species mass transport fluxes are evaluated using the classicMuscl updater and the eigenvalues of the Euler equations (see Using USim to solve the Euler Equations). The updater block is given below.
<Updater hyperSpecies>
kind = classicMuscl2d
timeIntegrationScheme = none
numericalFlux = localLaxFlux
limiter = [minmod, minmod, minmod, none]
variableForm = conservative
cfl = CFL
onGrid = domain
in = [speciesDens, q, p, a]
out = [speciesDensNew]
equations = [speciesContinuity]
sources = [multiSpeciesAxiSrc]
<Equation speciesContinuity>
useParentEigenvalues = true
inputVariables = [speciesDens, q]
kind = multiSpeciesSingleVelocityEqn
numberOfSpecies = NSPECIES
<Equation euler>
kind = realGasEosEqn
inputVariables = [q, p, a]
numSpecies = NSPECIES
</Equation>
</Equation>
<Source multiSpeciesAxiSrc>
kind = multiSpeciesSym
symmetryType = cylindrical
numberOfSpecies = NSPECIES
</Source>
</Updater>
This block uses an Equation sub-block and a Source block. The mass fluxes are computed in the Equation block using the eigenvalues of the conservative variables q. Here q contains the conservative variables of the realGasEosEqn equation. The equation of state is user specified, hence it requires pressure p and speed of sound a as inputs. The Source block computes the sources due to the additional terms in cylindrical coordinates. The fluxes evaluated in both sub-blocks are added to the out variable speciesDensNew.
## Mass Diffusion¶
The mass diffusion source for the multi-species system can be computed using the following block. It uses the diffusion (1d, 2d, 3d) operator to compute the diffusion source, with the derivative specified as diffusion; numScalars is the number of species, isRadial is true for cylindrical coordinates, and the input variables are the species density and the diffusion coefficient D. The out variable diffSrc contains the output.
<Updater computeDiffSrc>
kind = diffusion2d
onGrid = domain
derivative = diffusion
numScalars = NSPECIES
coefficient = 1.0
numberOfInterpolationPoints = 8
in = [speciesDens,D]
out = [diffSrc]
</Updater>
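Conceptually, the diffusion source is the divergence of D times the species density gradient. The following is a minimal 1-D finite-difference sketch of that operation in plain Python (illustrative only, not USim input; a uniform grid spacing dx is an assumption):

```python
# Central-difference approximation of d/dx( D * d(n)/dx ) for one species
# on a uniform 1-D grid; boundary cells are left at zero for simplicity.
def diffusion_source(n, D, dx):
    src = [0.0] * len(n)
    for i in range(1, len(n) - 1):
        flux_right = D * (n[i + 1] - n[i]) / dx
        flux_left = D * (n[i] - n[i - 1]) / dx
        src[i] = (flux_right - flux_left) / dx
    return src

# For the quadratic profile n = x**2 the interior source is exactly 2*D.
src = diffusion_source([0.0, 1.0, 4.0, 9.0, 16.0], D=1.0, dx=1.0)
print(src)  # [0.0, 2.0, 2.0, 2.0, 0.0]
```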
## Rate of Change of Density¶
The following equation block shows the computation of the rate of change of density of the three species due to two reactions. In order to use this block, a nodalArray variable containing the species densities speciesDens has to be initialized, another array for the density change rates speciesDensNew should be declared, along with a nodalArray holding the average temperature of the species. The two reactions are N2 + N2 -> N + N + N2 and N2 + O2 -> N + N + O2. The equation block follows.
<Updater sourceUpdater>
kind = equation2d
onGrid = domain
in = [speciesDens, temperature]
out = [speciesDensNew]
equations = [reactionSrc]
<Equation reactionSrc>
kind = reactionTableRhs
outputEnergyRate = 0
maxRate = 1.0e28
species = [N2, N, O2]
fileName = airReaction.txt
</Equation>
</Updater>
The attributes used in the above block are:

- kind (string): Specifies the type of updater. Here it is equation2d.
- in (nodalArray): Names of the required nodalArray inputs. In this example, those are speciesDens and temperature.
- out: Name of the output variable.
- equations (string): Name of the equation. Here the name is reactionSrc.
Attributes within the Equation sub-block:

- kind (string): Type of equation. In this example, it is reactionTableRhs.
- outputEnergyRate (boolean): Option to compute the reaction energy; it is set to false in this demo.
- maxRate (real number): An option to impose an artificial limit on the maximum value of the reaction rate. It is useful for stabilizing the solution at reasonably small time steps.
- species (list of strings): Names of the species considered.
- fileName (string): Name of the Multi-Species Data Files.
Within the Equation block, any number of reactions can be included by simply adding them to the Multi-Species Data Files.
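As an illustration of what the reaction source computes, here is a plain-Python sketch of a right-hand side for the two reactions above. The Arrhenius constants are hypothetical placeholders, not the values stored in airReaction.txt; note that nitrogen atoms are conserved, i.e. 2·(d[N2]/dt) + d[N]/dt = 0:

```python
from math import exp

# Illustrative sketch only (plain Python, not USim input): a right-hand side
# for the reactions N2 + N2 -> N + N + N2 and N2 + O2 -> N + N + O2.
# The Arrhenius constants below are hypothetical placeholders.
def reaction_rhs(dens, T):
    n_N2, n_N, n_O2 = dens
    k1 = 1.0e-8 * exp(-113200.0 / T)   # N2 + N2 collision channel
    k2 = 2.0e-8 * exp(-113200.0 / T)   # N2 + O2 collision channel
    r1 = k1 * n_N2 * n_N2
    r2 = k2 * n_N2 * n_O2
    # Each reaction destroys one N2 and produces two N; the collision
    # partner (N2 or O2) is unchanged.
    return (-(r1 + r2), 2.0 * (r1 + r2), 0.0)

rhs = reaction_rhs((1.0e19, 0.0, 5.0e18), 8000.0)
```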
## Chemical Energy¶
The energy of formation is computed using the following Updater block. The energies of formation of all the species are summed to obtain the mixture's energy. This energyOfFormation is added to the internal energy in the energy equation. Hence, during initialization, the initial value of energyOfFormation should be computed using the initial species densities and added to the energy density variable.
<Updater computeChemEn>
kind = equation2d
onGrid = domain
in = [speciesDens,cpR,temperature]
out = [chemEn]
<Equation cp>
kind = transportCoeffSrc
coeff = chemicalEnergy
numSpecies = NSPECIES
fileName = airReaction.txt
</Equation>
</Updater>
The attributes used in the above block are:

- kind (string): Specifies the type of updater. Here it is equation2d.
- in (nodalArray): Names of the required nodalArray inputs. In this example, those are speciesDens, cpR, and temperature: speciesDens holds the number densities of all species, cpR the specific heats, and temperature the average temperature of the species. speciesDens and cpR are arrays with size equal to the number of species.
- out: Name of the output variable, which contains the energy of formation. This variable has only one component.
## Addition of sources in time integrator¶
Two updaters are used here: one integrates the conservative variables q and the species density speciesDens; the other integrates the time rate of change of species due to reactions. A detailed explanation of the attributes is found in multiUpdater (1d, 2d, 3d).
<Updater rkUpdaterFluid>
kind = multiUpdater2d
onGrid = domain
in = [q,speciesDens]
out = [qnew,speciesDensNew]
<TimeIntegrator rkIntegrator>
kind = TIMEINTEGRATION_METHOD
ongrid = domain
scheme = TIMEINTEGRATION_SCHEME
</TimeIntegrator>
loop = [boundaries,hyper]
updaters = [bcOutflowSpecies, bcInflowSpecies, bcAbWallSpecies1, bcAbWallSpecies2, bcAbWallSpecies3, computeChemEn, computeTemperature, temperatureCorrector, bcFluidTempAxis, bcFluidTempWall, bcFluidTempInflow, bcFluidTempCopy, bcSurfTemp, bcOutflow, bcInflow, bcAxis, bcAbWall1, bcAbWall2, bcAbWall3]
syncVars = [speciesDens,chemEn, temperature, surfTemp, q]
operation = "integrate"
updaters = [computeChemEn, computeTemperature, temperatureCorrector, computeCpAvg, computeMwAvg, computeGammaAvg, computeViscosity, computeThermalCoefficient, computeGasPressure, computeElectronPressure, computeHvpPressure, bcPressureWall1, bcPressureWall2, bcPressureWall3, computeSoundSpeed, computeKinematicViscosity, computeThermalDiffusivityFluid, computeVelocity,computeViscousSource,hyper, hyperSpecies, addViscousSource]
syncVars = [temperature, p,a,velocity,viscousSource,qnew,speciesDensNew]
Boundary conditions are applied to the species using bcOutflowSpecies, bcInflowSpecies, bcAbWallSpecies1, bcAbWallSpecies2, and bcAbWallSpecies3. The energy of formation and the temperature are computed using computeChemEn and computeTemperature; remember that the energy of formation is required to compute the temperature. Boundary conditions on the temperature are then applied using the bcFluidTempAxis, bcFluidTempWall, bcFluidTempInflow, bcFluidTempCopy, and bcSurfTemp updaters. Finally, boundary conditions are applied to the conservative variables using bcOutflow, bcInflow, bcAxis, bcAbWall1, bcAbWall2, and bcAbWall3.
The energy of formation is evaluated with computeChemEn, and the temperature is then computed using the updater computeTemperature. The mixture properties are evaluated using computeCpAvg for the average constant-pressure specific heat of the species, computeMwAvg for the average molecular weight, and computeGammaAvg for the average gamma of the mixture. Viscosity and thermal conductivity are evaluated using computeViscosity and computeThermalCoefficient. The total gas pressure, electron pressure, and heavy-particle pressure are computed using the computeGasPressure, computeElectronPressure, and computeHvpPressure updaters respectively. The pressure is copied into the ghost layers using the boundary condition updaters bcPressureWall1, bcPressureWall2, and bcPressureWall3; a pressure boundary condition is required for realGasEosEqn. Then computeSoundSpeed is evaluated. computeKinematicViscosity and computeThermalDiffusivityFluid are evaluated to obtain the kinematic viscosity and thermal diffusivity, which are required for the diffusion-based time step restriction. The fluid velocity is evaluated using computeVelocity, and the viscous source is then computed with computeViscousSource. Convective fluxes of the conservative variables are computed in hyper and stored in qnew; convective fluxes of the species are stored in speciesDensNew. Finally, the viscous source is added to qnew using addViscousSource.
The change in density due to reactions is added to the species densities and integrated in the following Updater. The sourceUpdater updater evaluates and adds the rate of change of density to the species equations. The resulting equations are integrated using the localOdeIntegrator (1d, 2d, 3d) method.
<Updater sourceUpdater>
kind = localOdeIntegrator2d
onGrid = domain
in = [speciesDens, temperature]
out = [speciesDensNew]
integrationScheme = bulirschStoer
relativeErrorTolerance = 1.0e9
equations = [reactionSrc]
<Equation reactionSrc>
kind = reactionTableRhs
outputEnergyRate = 0
maxRate = MAXRATE
species = [N2, N, O2, O, NO, NO_p1, e, Ca, Na, K]
fileName = airReaction.txt
</Equation>
</Updater>
## An Example Simulation¶
The Blunt Body Reentry example of USimHS demonstrates each of the concepts described above using a 7-species model of air. The seven species are $$N2, N, O2, O, NO, NO+, e$$. Executing the Blunt Body Reentry input file within USimComposer and switching to the Visualize tab yields the plot shown in Fig. 8.
https://www.cuemath.com/ncert-solutions/q-13-exercise-9-1-some-applications-of-trigonometry-class-10-maths/
Ex.9.1 Q13 Some Applications of Trigonometry Solution - NCERT Maths Class 10
Go back to 'Ex.9.1'
Question
As observed from the top of a $$75\,\rm{m}$$ high lighthouse from the sea-level, the angles of depression of two ships are $$30^\circ$$ and $$45^\circ.$$ If one ship is exactly behind the other on the same side of the lighthouse, find the distance between the two ships.
Text Solution
What is Known?
(i) Height of the lighthouse $$=75\,\rm{m}$$
(ii) Angles of depression of two ships from the top of the lighthouse are $$30^\circ$$ and $$45^\circ.$$
What is Unknown?
Distance between the two ships
Reasoning:
Let the height of the lighthouse from the sea-level be $$AB$$ and let the ships be $$C$$ and $$D$$. The angles of depression of the ships $$C$$ and $$D$$ from the top $$A$$ of the lighthouse are $$45^\circ$$ and $$30^\circ$$ respectively.
The trigonometric ratio involving $$AB$$, $$BC$$, $$BD$$ and the angles is $$\tan \theta$$.
Distance between the ships, $$CD = BD − BC$$
Steps:
In $$\Delta ABC$$,
\begin{align} \tan 45^\circ &=\frac{AB}{BC} \\ 1&=\frac{75}{BC} \\ BC&=75 \end{align}
In $$\Delta ABD$$,
\begin{align} \tan 30^\circ &=\frac{AB}{BD} \\ \frac{1}{\sqrt{3}}&=\frac{75}{BD} \\ BD&=75\sqrt{3} \end{align}
Distance between two ships $$CD=BD-BC$$
\begin{align}CD&=75\sqrt{3}-75 \\ &=75\left( \sqrt{3}-1 \right) \end{align}
Distance between two ships $$C D = 75 ( \sqrt { 3 } - 1 ) \, \rm{m}$$
http://www.physicsforums.com/showthread.php?t=372716
## Strength Of Materials (Sample Problems Solved)
Here are some great Strength of Materials problems.
Scroll down and, under the coursework problem, click the example and take a look at the different problems. Those are fully solved.
I'm not sure how long the site will be active, but it's very helpful, so download the PDF file before my professor takes it offline.
Code:
http://strmat-elde.hit.bg/
Help! Can anybody please explain to me how I can draw the bending moment & shear force diagrams?
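For the simplest case (a simply supported beam with a central point load, using made-up numbers), the shear force and bending moment values that such diagrams plot can be tabulated like this:

```python
# Simply supported beam with a central point load (hypothetical numbers):
# tabulate shear force V and bending moment M along the span.
L_beam, P = 4.0, 10.0                  # span in m, load in kN
R = P / 2.0                            # support reactions, by symmetry
xs = [i * L_beam / 8.0 for i in range(9)]

# Shear force: +R left of the load, -R right of it.
V = [R if x < L_beam / 2.0 else -R for x in xs]
# Bending moment: rises linearly to P*L/4 at midspan, then falls.
M = [R * x if x < L_beam / 2.0 else R * (L_beam - x) for x in xs]

print(max(M))   # 10.0, the peak moment P*L/4 in kN*m at midspan
```

Plotting V and M against xs gives the shear force and bending moment diagrams.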
https://en.wikisource.org/wiki/Page:Popular_Science_Monthly_Volume_74.djvu/506
# Page:Popular Science Monthly Volume 74.djvu/506
Seeliger's assumptions as to the distribution and mass of the zodiacal material are of interest, especially when we recall that the zodiacal light within some 20 degrees of the sun is unobservable, on account of the glare, and that the brightness of the light is a poor index to the mass: a given quantity of matter, finely divided, would reflect sunlight more strongly than the same quantity existing in larger particles. For the mathematical development of the subject he assumed that the material is distributed throughout a space represented by a much-flattened ellipsoid of revolution whose center is at the sun's center, whose axis of revolution coincides more or less closely with the sun's axis, whose polar surfaces extend 20 or 30 degrees north and south of the sun (as viewed from the earth), whose equatorial regions extend considerably beyond the earth's orbit, and in which the density-distribution of materials decreases as a function both of the linear distance out from the sun and of the angular distance out from the equatorial plane of symmetry. According to these assumptions, surfaces of equal densities are concentric ellipsoidal surfaces, and the number of such ellipsoids can be increased or decreased according as the computer may desire to represent more or less closely any assumed law of density-variation within the one great spheroid. Practically, Seeliger found that the disturbing effects on the planets are almost independent of the law of distribution of the material, as related to distance from the sun, as far out as two thirds of the distance to Mercury. He made use of only two ellipsoids: One with equatorial radius 0.24 unit[1] and polar radius 0.024, of uniform density; and the other with corresponding radii 1.20 and 0.24, of uniform but much smaller density. 
The total mean densities determined for his volumes, on the basis of unity as the mean density of the sun, are, respectively, ${\displaystyle 2.18\times 10^{-11}}$ and ${\displaystyle 3.1\times 10^{-15}}$; and the resulting combined mass of the two ellipsoids is ${\displaystyle 3.1\times 10^{-7}}$ of the sun's mass, which is roughly twice the mass of Mercury. The corresponding density of mass-distribution is surprisingly low. In the inner and denser ellipsoid, the matter, if as dense as water, would occupy 1 part in 30,000,000,000 of the space; if as dense as the earth, only 1 part in 160,000,000,000.
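The quoted space-filling fractions follow from Seeliger's density figure by simple arithmetic. A short check, assuming a mean solar density of about 1.41 g/cm³ and a mean terrestrial density of about 5.51 g/cm³ (standard modern values, not stated in the text):

```python
# Hedged check of the space-filling fractions quoted above.
SUN_DENSITY = 1.41    # g/cm^3, assumed mean density of the sun
WATER_DENSITY = 1.0   # g/cm^3
EARTH_DENSITY = 5.51  # g/cm^3, assumed mean density of the earth

# Seeliger's mean density for the inner ellipsoid, in units of the sun's density.
inner_mean_density = 2.18e-11 * SUN_DENSITY  # g/cm^3

# Fraction of space filled if the matter had the density of water / of the earth.
fill_water = inner_mean_density / WATER_DENSITY
fill_earth = inner_mean_density / EARTH_DENSITY

print(f"water-density matter: 1 part in {1 / fill_water:.1e}")  # ~3e10
print(f"earth-density matter: 1 part in {1 / fill_earth:.1e}")  # ~2e11
```

The results land within rounding of the article's "1 part in 30,000,000,000" and "1 part in 160,000,000,000".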
https://notstatschat.rbind.io/2019/01/04/bayesian-surprise-the-shiny-app/
# Bayesian Surprise — the Shiny app
I wrote a while back about a toy case of the Bayesian surprise problem: what does Bayes Theorem tell you to believe when you get really surprising data. The one-dimensional case is a nice math-stat problem, if you like that sort of thing, but maybe you’d rather have the calculations done for you.
Here’s an app
The mathematical setup is that you have a prior distribution for a location parameter $$\theta$$ centered at zero, and you see a data point $$x$$ that’s a long way from zero. If $$\pi(\theta)$$ and $$f(x-\theta)$$ are the prior and likelihood, the posterior is proportional to $$\pi(\theta)f(x-\theta)$$.
When the prior is heavy-tailed and the data distribution isn’t, you’re willing to believe $$\theta$$ can be weird, so a very large $$x$$ means your posterior for $$\theta$$ will be near $$x$$. When the data distribution is heavy-tailed and the prior isn’t, you’re willing to believe $$x$$ can be a long way from $$\theta$$, but not that $$\theta$$ can be a long way from zero, so the prior ends up pretty much like the posterior – you ‘reject’ the data.
The details, though, depend on how heavy-tailed things are, and the app lets you play around with a range of possibilities. Laplace–Laplace, $$t_{30}$$–$$t_{30}$$, and $$t_{30}$$–Normal might be interesting.
The code is here
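The grid calculation behind such an app is short. Here is a minimal sketch of the setup (my own illustration, not the app's code), with hand-written normal and Cauchy densities standing in for the light- and heavy-tailed choices, and an illustrative observation at $$x = 20$$:

```python
import numpy as np

# Posterior for a location parameter: posterior ∝ pi(theta) * f(x - theta),
# approximated on a uniform grid.

def normal_pdf(z, scale=1.0):
    return np.exp(-0.5 * (z / scale) ** 2) / (scale * np.sqrt(2 * np.pi))

def cauchy_pdf(z, scale=1.0):
    return 1.0 / (np.pi * scale * (1 + (z / scale) ** 2))

def posterior_mean(prior_pdf, like_pdf, x, grid=np.linspace(-100, 200, 30001)):
    """Posterior mean of theta given one observation x, by grid integration.

    The grid spacing is uniform, so it cancels when the weights are normalised.
    """
    w = prior_pdf(grid) * like_pdf(x - grid)  # unnormalised posterior on the grid
    w /= w.sum()
    return float((grid * w).sum())

x = 20.0  # a surprising observation, far from the prior centre at zero

# Heavy-tailed prior, light-tailed likelihood: the posterior follows the data.
m_follow = posterior_mean(cauchy_pdf, normal_pdf, x)

# Light-tailed prior, heavy-tailed likelihood: the data point is 'rejected'.
m_reject = posterior_mean(normal_pdf, cauchy_pdf, x)

print(m_follow)  # close to x = 20
print(m_reject)  # close to the prior centre, 0
```

Swapping in Laplace or $$t_{30}$$ densities reproduces the intermediate cases the app explores.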
http://kjim.org/journal/view.php?viewtype=pubreader&number=169813
# Epidemiological trend of pulmonary thromboembolism at a tertiary hospital in Korea
## Article information
Korean J Intern Med. 2017;32(6):1037-1044
Publication date (electronic) : 2017 March 13
doi: https://doi.org/10.3904/kjim.2016.248
Department of Internal Medicine, Chung-Ang University Hospital, Seoul, Korea
Correspondence to In Won Park, M.D. Department of Internal Medicine, Chung-Ang University Hospital, 102 Heukseok-ro, Dongjak-gu, Seoul 06973, Korea Tel: +82-2-6299-1401 Fax: +82-2-6299-2017 E-mail: iwpark@cau.ac.kr
*These authors contributed equally to this work.
Received 2016 July 30; Revised 2016 October 30; Accepted 2016 November 7.
## Abstract
### Background/Aims
Despite increasing interest in pulmonary thromboembolism (PTE), data on recent trends in PTE incidence are limited. This study evaluated the recent incidence rate of PTE.
### Methods
We performed a retrospective chart review of patients with PTE admitted to Chung-Ang University Hospital during the 10-year period from 2006 to 2015. Age-standardized incidence and mortality rates were calculated by the direct method per 100,000 population. To analyze trends in risk factors, we also calculated the proportions of cancer, major operation, and recent major fracture over that time.
### Results
The total crude incidence rate of PTE per 100,000 was 229.36 and the age-sex adjusted standardized incidence rate was 151.28 (95% confidence interval [CI], 127.88 to 177.10). The incidence rate increased significantly, 1.083-fold annually, from 2006 (105.96 per 100,000) to 2015 (320.02 per 100,000) (95% CI, 1.049 to 1.118; p < 0.001). Incidence also increased annually in the age groups of 35 to 54, 55 to 74, and ≥ 75 years, and in both males (odds ratio [OR], 1.071; 95% CI, 1.019 to 1.127; p = 0.007) and females (OR, 1.091; 95% CI, 1.047 to 1.136; p < 0.001). Cancer accounted for most of the increase, its proportion rising from 20.0% in 2006 to 2007 to 42.8% in 2014 to 2015 (OR, 1.154; 95% CI, 1.074 to 1.240; p < 0.001), while the proportions of recent fracture and major operation remained constant.
### Conclusions
The incidence of pulmonary embolism gradually increased over the 10-year period, mainly due to an increased proportion of cancer patients.
## INTRODUCTION
Pulmonary thromboembolism (PTE) is a significant health problem worldwide. In the United States, the prevalence of PTE between 1979 and 1999 was 0.4% and the annual incidence was estimated at 600,000 cases [1]. Although corresponding data for Europe as a whole are unavailable, a community-based study of 342,000 people in France found a PTE incidence of 6.0 per 10,000 per year [2]. In Australia, the crude annual incidence of PTE is 0.31 per 1,000 residents [3].
The incidence rate of PTE is important because PTE is associated with significant morbidity and mortality in hospitalized patients, as well as economic burden. In the United States, PTE is the third most common cause of death, following heart attack and stroke, and the estimated annual economic burden of PTE exceeds $8.5 billion [4-7].

Epidemiologic studies of PTE are well established. However, most were conducted before 2000 and cannot fully reflect the recent trend of PTE [2,8,9]. Another problem is the lack of research on the incidence of PTE in Asia compared to Western countries. In Korea, only a few studies have addressed the incidence of PTE, and little is known about its recent incidence [10,11]. An updated investigation of PTE is needed, because understanding its recent status is important for prevention and treatment. Herein, we evaluated the trend of PTE occurrence over a decade in a tertiary hospital in Korea and in nationally representative data from the Korean National Health Insurance Service (NHIS).

## METHODS

### Study population

PTE patients admitted to Chung-Ang University Hospital from 2006 to 2015 were retrospectively reviewed. All data were gathered in accordance with the amended Declaration of Helsinki, with the approval of an independent Institutional Review Board (IRB No: 12-343-394343). We reviewed the records of patients admitted with a new diagnosis of PTE, and two independent radiologists re-reviewed chest computed tomography (CT) scans on a Radmax PACS workstation (Marotech, Seoul, Korea) to reach consensus decisions. For each patient, all electronic medical record data were retrospectively reviewed. Demographic data on the study population, including age, sex, body mass index, previous PTE history, combined venous thromboembolism (VTE), comorbidities, history of major operation and recent major fracture, and hospital outcomes, were collected.
Major operations included orthopedic surgery, major abdominal surgery, major gynecological surgery, major urological surgery, neurosurgery, and cardiothoracic and major vascular surgery. Recent major fractures included orthopedic injuries such as long bone fracture and pelvic bone fracture.

### Standardization for the Korean population

For standardization of our hospital population to the Korean general population, we used data from the Korean Statistical Information Service (KOSIS, http://kosis.kr/) as the standard population. The Korean population of KOSIS in 2010 was 48,357,370. The age and sex distribution of the population in 2010 was used.

### Admission rate with PTE and economic burden in Korea

In order to investigate the annual changes in the total burden of PTE in Korea, we also analyzed admission rates and the consequent economic burden of admission for PTE using the Korean NHIS from 2004 to 2013. The NHIS is the only public medical insurance system operated by the Ministry for Health, Welfare and Family Affairs in Korea, and it covers the whole population as a compulsory insurance system [12].

### Statistical analyses

SPSS version 17.0 (SPSS Inc., Chicago, IL, USA) was used for the statistical analyses. Continuous variables are expressed as mean ± standard deviation (SD) and categorical variables are presented as numbers and percentages. Crude annual incidence rates (per 100,000 individuals) were calculated using the annual number of patients admitted with PTE as the numerator and the total annual number of patients admitted to Chung-Ang University Hospital as the denominator. To adjust for confounders, a multivariate logistic regression analysis involving age and sex was conducted to define the trend in the incidence of PTE by calendar year, and the odds ratio (OR) and 95% confidence interval (CI) were calculated. Trends in the incidence of PTE according to sex and age group (divided into four groups: 0 to 34, 35 to 54, 55 to 74, and ≥ 75 years) were also analyzed.
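Direct standardization, the method named above, weights each stratum-specific rate by that stratum's share of the standard population. A minimal sketch with made-up strata (the actual KOSIS 2010 strata and the hospital's stratum counts are not reproduced in this article, so the numbers below are illustrative only):

```python
# Direct standardization of a rate to a standard population (illustrative sketch).

def direct_standardized_rate(cases, denominators, std_pop):
    """Weight each stratum-specific rate by the standard population's share."""
    total_std = sum(std_pop)
    rate = 0.0
    for c, n, w in zip(cases, denominators, std_pop):
        rate += (c / n) * (w / total_std)  # stratum rate x stratum weight
    return rate * 100_000  # per 100,000

# Hypothetical age strata: observed PTE cases, admitted patients,
# and the standard population in each stratum (all made up).
cases      = [3, 12, 40, 55]
admissions = [9_000, 8_000, 6_000, 3_000]
std_pop    = [20_000_000, 15_000_000, 10_000_000, 3_000_000]

print(round(direct_standardized_rate(cases, admissions, std_pop), 2))
```

Because the standard-population weights are fixed, rates standardized this way are comparable across years even as the hospital's case mix ages.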
Age- and sex-standardized incidence rates were calculated by the direct method using the KOSIS 2010 standard population. Trends in the incidence of PTE according to calendar year were analyzed by Poisson regression adjusted for age and sex. CI estimates were based on the Poisson distribution. We also calculated the proportions of cancer, major operation, and recent major fracture in patients with PTE to analyze the main cause of the PTE trend. Significance was defined as p < 0.05 for all analyses.

## RESULTS

### Baseline characteristics

Table 1 shows the baseline characteristics of the study population. A total of 591 patients were diagnosed with PTE. There were 224 male patients (37.9%) and the mean age was 68.13 ± 15.43 years. The most common comorbidities were hypertension (40.4%) and cancer (32.1%). One hundred and fifty-one patients (25.5%) had a recent major operation. Recent fracture accounted for 47 cases (8.0%). The mean hospital stay was 30.46 ± 58.28 days, and there were 28 intensive care unit admissions (4.7%). Forty-six patients (7.8%) died in hospital.

### Annual incidence rate of PTE

A total of 257,669 patients were admitted to Chung-Ang University Hospital during the 10-year period. The total crude incidence rate of PTE was 229.36 per 100,000. Except for an unusual increase in 2008, the incidence rate per 100,000 gradually rose from 105.96 to 320.02 (Fig. 1A). The incidence rate of PTE increased 1.083-fold annually from 2006 to 2015, which was statistically significant (95% CI, 1.049 to 1.118; p < 0.001).

Figure 1. The incidence rate of pulmonary thromboembolism (PTE). (A) The crude incidence rate of PTE. (B) The standardized incidence rate of PTE. The data were directly age and sex adjusted to the 2010 standard population of the Korean Statistical Information Service.
When we standardized the crude incidence for age and sex using the KOSIS standard population, the standardized incidence rate of PTE was 151.28 (95% CI, 127.88 to 177.10). Standardized incidence rates continuously increased from 80.27 (95% CI, 63.44 to 99.57) to 225.68 (95% CI, 197.49 to 257.47) per 100,000 (Fig. 1B).

### Admission rate for PTE and the consequent economic burden in Korea using NHIS data

To confirm the annual changes in the burden of PTE in Korea, we analyzed the NHIS data. In the NHIS population, the admission rate for PTE per 100,000 over the 10 years was 4.92. The annual admission rate per 100,000 continuously increased from 2.63 in 2004 to 7.44 in 2013 (OR, 1.130; 95% CI, 1.125 to 1.135; p < 0.001). With increased admission rates for PTE, the overall cost of hospitalization also steadily increased over time (Table 2). In comparison with the admission rates for other respiratory diseases, the rates for chronic obstructive pulmonary disease and PTE gradually increased, while those for other respiratory diseases tended to decrease over time (Supplementary Fig. 1).

### Annual incidence rate of PTE according to sex and age group

The crude incidence rate of PTE per 100,000 in males was 79.03 in 2006 and 266.84 in 2015. In females, it was 135.30 in 2006 and 533.34 in 2015. Annual incidence rates tended to increase in both males (OR, 1.071; 95% CI, 1.019 to 1.127; p = 0.007) and females (OR, 1.091; 95% CI, 1.047 to 1.136; p < 0.001) (Fig. 2A). The incidence rate in females was higher than that in males over the time period, with a relative risk (RR) of 1.546 (95% CI, 1.308 to 1.828; p < 0.001).

Figure 2. The incidence rate of pulmonary thromboembolism (PTE) according to sex and age group. (A) The incidence rate of PTE according to sex. (B) Annual incidence rate of PTE among different age groups.
Except for the age group of 0 to 34 years, the incidence rate of PTE significantly increased annually from 2006 to 2015: 35 to 54 years (OR, 1.171; 95% CI, 1.072 to 1.278; p < 0.001); 55 to 74 years (OR, 1.084; 95% CI, 1.033 to 1.138; p = 0.001); and ≥ 75 years (OR, 1.057; 95% CI, 1.005 to 1.112; p = 0.032) (Fig. 2B). PTE incidence also increased with age. The RRs compared to the 0 to 34 year group were 4.341 in the 35 to 54 year group (95% CI, 2.811 to 6.903; p < 0.001), 9.903 in the 55 to 74 year group (95% CI, 6.656 to 14.734; p < 0.001), and 26.110 in the ≥ 75 year group (95% CI, 17.513 to 38.928; p < 0.001) (Fig. 2B).

### Proportion of cancer, major operation, and recent major fracture in patients with PTE

After adjustment for age and sex, the proportion of cancer in patients with PTE gradually increased over the years, from 16.7% in 2006 to 42.2% in 2015 (OR, 1.154; 95% CI, 1.074 to 1.240; p < 0.001) (Fig. 3). On the other hand, the proportions of major operation (OR, 1.035; 95% CI, 0.960 to 1.115; p = 0.375) and recent major fracture (OR, 0.986; 95% CI, 0.875 to 1.110; p = 0.811) showed no change over the years.

Figure 3. Proportion of cancer, major operation, and recent major fracture in patients with pulmonary thromboembolism.

### Annual in-hospital mortality of patients with PTE

Forty-six patients (7.8%) with PTE died from an underlying medical problem. Cancer (n = 16, 35%) was the most common cause of death, followed by pneumonia (n = 13, 28%), PTE (n = 7, 15%), myocardial infarction (n = 2, 4%), heart failure (n = 2, 4%), coagulopathy (n = 2, 4%), liver failure (n = 1, 2%), infection (n = 1, 2%), aspiration (n = 1, 2%), and unknown (n = 1, 2%). The crude mortality rate per 100,000 was 17.85, and it increased from 5.89 to 29.12 per 100,000 from 2005 to 2015 (Fig. 4). There was no significant change over the period (RR, 1.054; 95% CI, 0.943 to 1.178; p = 0.353).

Figure 4. In-hospital mortality rate of pulmonary thromboembolism (PTE).
(A) The crude in-hospital mortality rate of PTE. (B) The standardized mortality rate of PTE. The data were directly age and sex adjusted to the 2010 standard population of the Korean Statistical Information Service.

## DISCUSSION

PTE is a leading cause of preventable hospital mortality and a major health problem worldwide. Herein, we investigated the recent incidence and trends of PTE in a tertiary hospital from 2006 to 2015 and in nationally representative data. We confirmed a steady increase in the incidence of PTE over 10 years, mainly due to an increased proportion of cancer patients.

In our study, the PTE incidence rate was 229.36 per 100,000 and the age-sex adjusted standardized incidence rate was 151.28 per 100,000, much higher than in previous studies. Jang et al. [10] reported an incidence of PTE ranging from 3.74 to 7.01 per 100,000 individuals in the Korean Health Insurance Review and Assessment Service (HIRA) database. In the Korean NHIS database, the incidence rate was 4.92 per 100,000; our rate was more than 47-fold higher. We suggest that the high proportion of high-risk patients in acute-care hospitals is the main cause of this difference: PTE in a tertiary care hospital is more frequent than in the general population or in short-term hospitals with brief admissions [13]. Among recent tertiary hospital data, the incidence rate of PTE was 88 (0.17%) of 50,882 patients (172.9 per 100,000) in a single center from 2005 to 2007 [11], similar to our data. In addition, the PTE incidence rate continuously increased over the 10-year period in our hospital data as well as in the NHIS data, as in other studies [10,14]. The economic burden also gradually increased over time, according to the NHIS data. Whereas other studies found that the incidence rate of PTE tended to decrease before 2000, it has increased rapidly since then.
Because CT angiography (CTA) is now accepted as the standard method for diagnosing PTE, the adoption of this technology could have, at least in part, fueled the marked increase in the incidence of PTE in recent years [15]. Several studies have confirmed that the incidence of PTE increased with the use of CTA [15-17].

To evaluate the association between the increasing incidence and risk factors, we examined the trends of risk factors including cancer, major operation, and recent fracture. Cancer is a well-recognized risk factor for VTE and pulmonary embolism [18]. Cancer incidence has been increasing rapidly in Korea since 1999, a trend that is not limited to Korea but is global [19,20]. To investigate this, we calculated the proportions of cancer, major operation, and recent major fracture history in PTE patients. The proportion of cancer increased over the 10-year period. This result may also be partly due to the increased use of CT scans in cancer patients: one study confirmed that the widespread use of the recently introduced CT scan led to an increase in incidental PTE in cancer patients [21]. In contrast, no change was evident in the rates of major operation and recent major fracture in our study. This discrepancy probably reflects the rapid growth of interest in deep vein thrombosis (DVT) prophylaxis and the actual increase of DVT prophylaxis in postoperative and cancer patients [22,23].

With respect to age, the incidence rate of PTE significantly increased annually in all age groups except patients 0 to 34 years old. PTE incidence increased with age, similar to previous reports [8,10,11]. In our study, patients ≥ 75 years of age had a 26-fold higher PTE risk than patients < 35 years of age. We suggest that the elderly are exposed to the risk of pulmonary embolism because of both the comorbidities often present in this age group and the accompanying immobility of many elderly people.
Interestingly, age is an important prognostic factor in acute normotensive PTE [24]. Assuming that over-diagnosis is associated with decreasing severity of PTE, old age may be an important factor in predicting the outcome of PTE. Because the incidence of pulmonary embolism rises with age, we should pay attention to elderly patients.

Studies of the effect of sex in PTE have shown conflicting results. Studies of Western populations reported that the incidence of VTE, a risk factor for PTE, is higher in men than in women [3,9]. However, studies of Korean populations have reported that females have a much higher risk of PTE [10,11]. Because a gender effect is entangled with the relationship between female gender and several risk factors, it is difficult to generalize, and the effect of gender has not been confirmed so far.

In the same vein as over-diagnosis of PTE, a prior study found that cause-specific mortality decreased due to decreased severity, while overall mortality was unchanged [16]. Although the mortality rate increased slightly in our study, there was no significant change over time. The most common cause of death was the underlying disease itself, not PTE. Seven patients died from PTE, none since 2014, in concordance with the decreasing severity of PTE [16,17].

Our study has some limitations. It covered hospitalized individuals, who are older and more severely ill than the general population, so there could have been a selection bias. We adjusted for age and sex to decrease the effect of population differences and compared our results to the PTE incidence in the NHIS population. Despite these limitations, our study provides knowledge that is valuable for understanding the recent trend of PTE. We confirmed this trend of PTE incidence in a tertiary hospital; the increase was mainly due to the increased proportion of cancer patients. Prospective, multi-center studies are necessary.

## KEY MESSAGES
1. The incidence of pulmonary embolism gradually increased over the 10 years.
2. Old age and female sex were associated with increased risk of pulmonary thromboembolism.
3. The increased proportion of cancer patients was associated with the increase in pulmonary thromboembolism.

## Notes

No potential conflict of interest relevant to this article was reported.

## Supplementary Materials

Supplementary Figure 1. Annual admission rate for pulmonary thromboembolism (PTE) and other respiratory diseases. COPD, chronic obstructive pulmonary disease; BE, bronchiectasis; TB, tuberculosis.

## References

1. Stein PD, Beemath A, Olson RE. Trends in the incidence of pulmonary embolism and deep venous thrombosis in hospitalized patients. Am J Cardiol 2005;95:1525-1526.
2. Oger E. Incidence of venous thromboembolism: a community-based study in Western France. EPI-GETBP Study Group. Groupe d'Etude de la Thrombose de Bretagne Occidentale. Thromb Haemost 2000;83:657-660.
3. Ho WK, Hankey GJ, Eikelboom JW. The incidence of venous thromboembolism: a prospective, community-based study in Perth, Western Australia. Med J Aust 2008;189:144-147.
4. Mahan CE, Borrego ME, Woersching AL, et al. Venous thromboembolism: annualized United States models for total, hospital-acquired and preventable costs utilizing long-term attack rates. Thromb Haemost 2012;108:291-302.
5. Goldhaber SZ, Bounameaux H. Pulmonary embolism and deep vein thrombosis. Lancet 2012;379:1835-1846.
6. Agnelli G, Becattini C. Acute pulmonary embolism. N Engl J Med 2010;363:266-274.
7. Kroger K, Kupper-Nybelen J, Moerchel C, Moysidis T, Kienitz C, Schubert I. Prevalence and economic burden of pulmonary embolism in Germany. Vasc Med 2012;17:303-309.
8. Anderson FA Jr, Wheeler HB, Goldberg RJ, et al. A population-based perspective of the hospital incidence and case-fatality rates of deep vein thrombosis and pulmonary embolism: the Worcester DVT Study. Arch Intern Med 1991;151:933-938.
9. Silverstein MD, Heit JA, Mohr DN, Petterson TM, O'Fallon WM, Melton LJ 3rd. Trends in the incidence of deep vein thrombosis and pulmonary embolism: a 25-year population-based study. Arch Intern Med 1998;158:585-593.
10. Jang MJ, Bang SM, Oh D. Incidence of venous thromboembolism in Korea: from the Health Insurance Review and Assessment Service database. J Thromb Haemost 2011;9:85-91.
11. Choi WI, Lee MY, Oh D, Rho BH, Hales CA. Estimated incidence of acute pulmonary embolism in a Korean hospital. Clin Appl Thromb Hemost 2011;17:297-301.
12. Kim DS. Introduction: health of the health care system in Korea. Soc Work Public Health 2010;25:127-141.
13. Stein PD, Huang Hl, Afzal A, Noor HA. Incidence of acute pulmonary embolism in a general hospital: relation to age, sex, and race. Chest 1999;116:909-913.
14. Sakuma M, Takahashi T, Demachi J, et al. Epidemiology of pulmonary embolism in Japan. In: Shirato K, ed. Venous Thromboembolism. Tokyo: Springer; 2005. p. 3-12.
15. DeMonaco NA, Dang Q, Kapoor WN, Ragni MV. Pulmonary embolism incidence is increasing with use of spiral computed tomography. Am J Med 2008;121:611-617.
16. Wiener RS, Schwartz LM, Woloshin S. Time trends in pulmonary embolism in the United States: evidence of overdiagnosis. Arch Intern Med 2011;171:831-837.
17. Schissler AJ, Rozenshtein A, Schluger NW, Einstein AJ. National trends in emergency room diagnosis of pulmonary embolism, 2001-2010: a cross-sectional study. Respir Res 2015;16:44.
18. Shen VS, Pollak EW. Fatal pulmonary embolism in cancer patients: is heparin prophylaxis justified? South Med J 1980;73:841-843.
19. Jemal A, Center MM, DeSantis C, Ward EM. Global patterns of cancer incidence and mortality rates and trends. Cancer Epidemiol Biomarkers Prev 2010;19:1893-1907.
20. Jung KW, Won YJ, Kong HJ, Oh CM, Seo HG, Lee JS. Cancer statistics in Korea: incidence, mortality, survival and prevalence in 2010. Cancer Res Treat 2013;45:1-14.
21. Abdel-Razeq HN, Mansour AH, Ismael YM. Incidental pulmonary embolism in cancer patients: clinical characteristics and outcome: a comprehensive cancer center experience. Vasc Health Risk Manag 2011;7:153-158.
22. Lyman GH, Khorana AA, Falanga A, et al. American Society of Clinical Oncology guideline: recommendations for venous thromboembolism prophylaxis and treatment in patients with cancer. J Clin Oncol 2007;25:5490-5505.
23. Hardwick ME, Colwell CW Jr. Advances in DVT prophylaxis and management in major orthopaedic surgery. Surg Technol Int 2004;12:265-268.
24. Keller K, Beule J, Coldewey M, Geyer M, Balzer JO, Dippold W. The risk factor age in normotensive patients with pulmonary embolism: effectiveness of age in predicting submassive pulmonary embolism, cardiac injury, right ventricular dysfunction and elevated systolic pulmonary artery pressure in normotensive pulmonary embolism patients. Exp Gerontol 2015;69:116-121.

## Article information

### Table 1. Baseline characteristics of patients with pulmonary thromboembolism (n = 591)

| Characteristic | Value |
| --- | --- |
| Male sex | 224 (37.9) |
| Age, yr | 68.13 ± 15.43 |
| Age 0-34 | 27 (4.6) |
| Age 35-54 | 80 (13.5) |
| Age 55-74 | 243 (41.1) |
| Age ≥ 75 | 241 (40.8) |
| Body mass index, kg/m² | 28.06 ± 104.35 |
| Previous PTE | 8 (1.4) |
| Combined VTE | 183 (31.0) |
| Hypertension | 239 (40.4) |
| Cancer | 190 (32.1) |
| Diabetes | 111 (18.8) |
| Cerebrovascular disease | 54 (9.1) |
| Heart failure | 41 (7.0) |
| Atrial fibrillation | 30 (5.1) |
| Nephrotic syndrome | 2 (0.3) |
| Major operation | 151 (25.5) |
| Recent major fracture | 47 (8.0) |
| Hospital duration, day | 30.46 ± 58.28 |
| ICU admission | 28 (4.7) |
| In-hospital death | 46 (7.8) |

Values are presented as number (%) or mean ± SD. PTE, pulmonary thromboembolism; VTE, venous thromboembolism; ICU, intensive care unit.

### Table 2. Annual admission rate for pulmonary thromboembolism and the consequent economic burden in the Korean National Health Insurance Service population (2004 to 2013)

| Year | Admission rate^a | Overall cost, $ |
| --- | --- | --- |
| 2004 | 2.64 | 3,101,752 |
| 2005 | 2.76 | 3,590,245 |
| 2006 | 3.09 | 4,114,046 |
| 2007 | 3.91 | 5,714,766 |
| 2008 | 4.61 | 7,097,968 |
| 2009 | 5.02 | 7,986,471 |
| 2010 | 5.87 | 9,411,793 |
| 2011 | 6.58 | 10,255,782 |
| 2012 | 6.97 | 11,501,420 |
| 2013 | 7.44 | 12,523,608 |

^a All rates are per 100,000 population.
https://oalevelsolutions.com/tag/polynomials-cie-p2-2017/
# Past Papers’ Solutions | Cambridge International Examinations (CIE) | AS & A level | Mathematics 9709 | Pure Mathematics 2 (P2-9709/02) | Year 2017 | Feb-Mar | (P2-9709/22) | Q#6
Question The polynomial is defined by where and are constants. It is given that is a factor of and that remainder is 28 when is divided by . i. Find the values of a and b. ii. Hence factorise completely. iii. State the number of roots of the equation p(2y) = 0, […]
# Past Papers’ Solutions | Cambridge International Examinations (CIE) | AS & A level | Mathematics 9709 | Pure Mathematics 2 (P2-9709/02) | Year 2017 | Oct-Nov | (P2-9709/23) | Q#5
Question The polynomial is defined by where and are constants. It is given that is a factor of . It is also given that the remainder is 40 when is divided by . i. Find the values of a and b. ii. When a and b have these values, factorise p(x) completely. Solution […]
# Past Papers’ Solutions | Cambridge International Examinations (CIE) | AS & A level | Mathematics 9709 | Pure Mathematics 2 (P2-9709/02) | Year 2017 | Oct-Nov | (P2-9709/22) | Q#4
Question The polynomials p(x) and g(x) are defined by; and where a and b are constants. It is given that (x + 3) is a factor of f(x) and also of q(x). i. Find the values of a and b. ii. Show that the equation q(x) – p(x) = 0 has only one […]
# Past Papers’ Solutions | Cambridge International Examinations (CIE) | AS & A level | Mathematics 9709 | Pure Mathematics 2 (P2-9709/02) | Year 2017 | Oct-Nov | (P2-9709/21) | Q#5
Question The polynomial is defined by where and are constants. It is given that is a factor of . It is also given that the remainder is 40 when is divided by . i. Find the values of a and b. ii. When a and b have these values, factorise p(x) completely. Solution […]
# Past Papers’ Solutions | Cambridge International Examinations (CIE) | AS & A level | Mathematics 9709 | Pure Mathematics 2 (P2-9709/02) | Year 2017 | May-Jun | (P2-9709/23) | Q#6
Question i. Use the factor theorem to show that (x+2) is a factor of the expression and hence factorise the expression completely. ii. Deduce the roots of the equation Solution i. We are given that; We are also given that is a factor of . When a polynomial, , is divided […]
# Past Papers’ Solutions | Cambridge International Examinations (CIE) | AS & A level | Mathematics 9709 | Pure Mathematics 2 (P2-9709/02) | Year 2017 | May-Jun | (P2-9709/22) | Q#6
Question i. Use the factor theorem to show that (x+2) is a factor of the expression and hence factorise the expression completely. ii. Deduce the roots of the equation Solution i. We are given that; We are also given that is a factor of . When a polynomial, , is divided […]
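The factor and remainder theorems underlying all of these questions are easy to check numerically: if (x − a) is a factor of p(x) then p(a) = 0, and the remainder on dividing p(x) by (x − a) is p(a). A quick sketch with an illustrative polynomial (not one from the papers above):

```python
# Illustrative polynomial, not from any of the papers above:
# p(x) = x^3 + 2x^2 - 5x - 6 = (x + 1)(x - 2)(x + 3)
def p(x):
    return x**3 + 2 * x**2 - 5 * x - 6

# Factor theorem: (x + 1) is a factor, so p(-1) = 0
assert p(-1) == 0
# Remainder theorem: dividing by (x - 1) leaves remainder p(1)
assert p(1) == -8
```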
https://www.zbmath.org/?q=an%3A0819.05027
# zbMATH — the first resource for mathematics
On the join of graphs and chromatic uniqueness. (English) Zbl 0819.05027
Let $$U_{n+1}$$ denote the graph obtained from the wheel $$K_1 + C_n$$ by deleting a spoke, where $$+$$ denotes the join of graphs. The author shows that, for any $$m \geq 1$$ and odd $$n \geq 3$$, the graph $$K_m + U_{n+1}$$ is chromatically unique.
##### MSC:
05C15 Coloring of graphs and hypergraphs
https://www.njohnson.co.uk/index.php?menu=2&submenu=2&subsubmenu=15
RC Networks
While perusing through volume one of Chestnut and Mayer's Servomechanisms and Regulating System Design (John Wiley & Sons, New York, second edition, 1959) I came across an interesting table on page 560 (the same table is also in Korn and Korn's Electronic Analog and Hybrid Computers from 1964, a McGraw-Hill publication, as Table A-1 on page 553). At the bottom of the table was the attribution for the origin of this table: an article in the April 1952 edition of Electronics magazine, published by McGraw-Hill. As luck would have it, the American Radio History website has a scanned copy of that magazine: Electronics, McGraw-Hill, April 1952. And there on page 147 (155 of the PDF) is the original table (Table 1) at the end of an article on Driftless DC Amplifiers by Frank Bradley and Rawley McCoy of Reeves Instrument Corp, New York, NY, the table itself apparently the work of their colleague S. Godet.
So, what's all the fuss about something from 66 years ago? Take a look: here's the table from that magazine article: Table 1: Transfer Functions of R-C Input and Output Networks.
What an interesting treasure trove of RC networks. Some familiar, some not so familiar. And all catalogued complete with expressions describing their function.
In Godet's table the networks were grouped by transfer impedance functions. We could also group them by the network configuration and its conjugate, obtained by swapping R and C. For example, take the single R: its conjugate would be the single C. Some network configurations have no natural conjugate, for example the R and C in parallel or in series, since swapping the R and C produces the exact same network. It is also interesting to note that Godet categorised the networks into those with a pure resistive path, using $$A$$ as the expression for pure resistance (assume all capacitors open circuit), and using $$B$$ as the expression for the pure reactance (assume all resistors short circuit). In active circuits where DC bias conditions are required, at least one of the networks must have a DC path to the bias voltage (either ground or some other bias voltage source).
The full title of these transfer functions is "short-circuit transfer impedance functions". This means that the functions relate the input voltage to the output current into a short-circuit. Which is exactly what is required for circuit blocks based on the virtual-earth inverting op-amp circuit:
Here, the input pin to the amplifier is driven to be zero volts (virtual earth) by the feedback path around the amplifier through $$Z_o$$, precisely cancelling any currents flowing in through $$Z_i$$.
The transfer function $$\frac{V_{out}}{V_{in}}$$ is the well-known $$\frac{-Z_o}{Z_i}$$. Given a desired transfer function we can then split it into two parts for $$Z_o$$ and $$Z_i$$ and then consult the table to find suitable RC networks.
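As a worked sketch of this design procedure (my addition, not from the original article; the gain, corner frequency, and resistor values below are purely illustrative): suppose we want an inverting low-pass stage $$H(s) = -G/(1+sT)$$. Take $$Z_i$$ from table entry I (a single resistor $$R_i$$) and $$Z_o$$ from entry II (R parallel C, transfer impedance $$A/(1+sT)$$ with inverse relations $$R = A$$, $$C = T/A$$):

```python
import math

# Sketch of the Z_o/Z_i design procedure described above.
# Desired: H(s) = -G / (1 + s*T), built from table entries
#   I  -> Z_i = R_i                  (chosen input resistor)
#   II -> Z_o = A / (1 + s*T), with inverse relations R = A, C = T / A
G = 10.0        # desired DC gain magnitude (illustrative)
f_c = 1000.0    # desired corner frequency in Hz (illustrative)
R_i = 10e3      # chosen input resistor, ohms (illustrative)

# DC gain |H(0)| = A / R_i  =>  A = G * R_i
A = G * R_i
T = 1.0 / (2 * math.pi * f_c)   # time constant giving the corner frequency

# Inverse relations for network II
R = A
C = T / A
print(f"R_i = {R_i:.0f} ohm, R = {R:.0f} ohm, C = {C * 1e9:.2f} nF")
```

For these numbers the table gives R = 100 kohm and C of about 1.59 nF, which is the expected RC low-pass at 1 kHz.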
It is interesting to note that when the paper was published the notation of the time was to use $$p$$ (or sometimes $$P$$) as the complex frequency variable, whereas today $$s$$ (the Laplace operator) is used. The two can be used interchangeably, so for $$p$$ read $$s$$ to get to something that looks more familiar.
On this page, over time, I will be cataloguing each of these circuits, as well as providing further information about them, such as DC paths (useful for setting up bias conditions).
In the following table, using the same notation as the original, time constants are denoted $$T_n$$, DC path resistance is denoted $$A$$, AC path reactance is denoted $$B$$, and, where applicable, time constant scaling factor $$\theta$$.
Note that the table is split into two sections: the first section covers DC path networks, while the second section covers DC blocking networks.
DC Path
ID | Transfer Impedance Function | Network | Relations | Inverse Relations
I $$A$$
$$A = R$$ $$R = A$$
II $$\frac{A}{1+sT}$$
$$A = R$$
$$T = RC$$
$$R = A$$
$$C = \frac{T}{A}$$
III $$A(1+sT)$$
$$A = 2R$$
$$T = \frac{RC}{2}$$
$$R = \frac{A}{2}$$
$$C = \frac{4T}{A}$$
IV $$A \left( \frac{1+s{\theta}T}{1+sT} \right)$$
$$\theta < 1$$
$$A = R_1 + R_2$$
$$T = R_2 C$$
$$\theta = \frac{R_1}{R_1 + R_2}$$
$$R_1 = A \theta$$
$$R_2 = A(1-\theta)$$
$$C = \frac{T}{A(1-\theta)}$$
$$A = R_1$$
$$T = (R_1 + R_2)C$$
$$\theta = \frac{R_2}{R_1 + R_2}$$
$$R_1 = A$$
$$R_2 = \frac{A\theta}{1-\theta}$$
$$C = \frac{T(1-\theta)}{A}$$
V $$A \left( \frac{1+sT}{1+s{\theta}T} \right)$$
$$\theta < 1$$
$$A = \frac{2 R_1 R_2}{2 R_1 + R_2}$$
$$T = \frac{R_1 C}{2}$$
$$\theta = \frac{2 R_1}{2 R_1 + R_2}$$
$$R_1 = \frac{A}{2(1-\theta)}$$
$$R_2 = \frac{A}{\theta}$$
$$C = \frac{4T(1-\theta)}{A}$$
$$A = 2R_1$$
$$T = \left( R_2 + \frac{R_1}{2}\right) C$$
$$\theta = \frac{2 R_2}{2 R_2 + R_1}$$
$$R_1 = \frac{A}{2}$$
$$R_2 = \frac{A\theta}{4(1-\theta)}$$
$$C = \frac{4T(1-\theta)}{A}$$
$$A = 2R$$
$$T = \frac{R}{2}(C_1 + C_2)$$
$$\theta = \frac{2 C_2}{C_1 + C_2}$$
$$R = \frac{A}{2}$$
$$C_1 = \frac{2T(2-\theta)}{A}$$
$$C_2 = \frac{2T\theta}{A}$$
VI $$A \left[ \frac{1+sT_2}{(1+sT_1)(1+sT_3)} \right]$$
$$T_1 < T_2 < T_3$$
$$A = R_1 + R_2$$
$$T_1 = R_1 C_1$$
$$T_2 = \left( \frac{R_1 R_2}{R_1 + R_2} \right) ( C_1 + C_2 )$$
$$T_3 = R_2 C_2$$
$$R_1 = \frac{A(T_2 - T_1)}{T_3 - T_1}$$
$$R_2 = \frac{A(T_3 - T_2)}{T_3 - T_1}$$
$$C_1 = \frac{T_1(T_3 - T_1)}{A(T_2 - T_1)}$$
$$C_2 = \frac{T_3(T_3 - T_1)}{A(T_3 - T_2)}$$
$$A = R_2$$
$$T_2 = R_1 C_1$$
$$T_1 T_3 = R_1 R_2 C_1 C_2$$
$$T_1 + T_3 = R_1 C_1 + R_2 C_2 + R_2 C_1$$
$$R_1 = \frac{A {T_2}^2}{(T_3 - T_2)(T_2 - T_1)}$$
$$R_2 = A$$
$$C_1 = \frac{(T_3 - T_2)(T_2 - T_1)}{A T_2}$$
$$C_2 = \frac{T_1 T_3}{A T_2}$$
$$A = R_1 + R_2$$
$$T_2 = \left( \frac{R_1 R_2}{R_1 + R_2} \right) C_2$$
$$T_1 T_3 = R_1 R_2 C_1 C_2$$
$$T_1 + T_3 = R_1 C_1 + R_2 C_2 + R_2 C_1$$
$$R_1 = \frac{A {T_2}^2}{T_1 T_2 + T_2 T_3 - T_1 T_3}$$
$$R_2 = \frac{A(T_3 - T_2)(T_2 - T_1)}{T_1 T_2 + T_2 T_3 - T_1 T_3}$$
$$C_1 = \frac{T_1 T_3}{A T_2}$$
$$C_2 = \frac{(T_1 T_2 + T_2 T_3 - T_1 T_3)^2}{A T_2 (T_3 - T_2)(T_2 - T_1)}$$
DC Blocking
ID | Transfer Impedance Function | Network | Relations | Inverse Relations
I $$\frac{1}{sB}$$
$$B = C$$ $$C = B$$
II $$\frac{1}{sB}(1+sT)$$
$$B = C$$
$$T = RC$$
$$R = \frac{T}{B}$$
$$C = B$$
III $$\frac{1}{sB} \left( \frac{1+sT}{sT} \right)$$
$$B = \frac{C}{2}$$
$$T = 2RC$$
$$R = \frac{T}{4B}$$
$$C = 2B$$
IV $$\frac{1}{sB} \left( \frac{1+sT}{1+s{\theta}T} \right)$$
$$\theta < 1$$
$$B = C_1$$
$$T = R(C_1 + C_2)$$
$$\theta = \frac{C_2}{C_1 + C_2}$$
$$R = \frac{T(1-\theta)}{B}$$
$$C_1 = B$$
$$C_2 = \frac{B\theta}{1-\theta}$$
$$B = C_1 + C_2$$
$$T = RC_2$$
$$\theta = \frac{C_1}{C_1 + C_2}$$
$$R = \frac{T}{B(1-\theta)}$$
$$C_1 = B\theta$$
$$C_2 = B(1-\theta)$$
V $$\frac{1}{sB} \left( \frac{1+s{\theta}T}{1+sT} \right)$$
$$\theta < 1$$
$$B = C_2$$
$$T = RC_1\left( \frac{2C_2 + C_1}{C_2} \right)$$
$$\theta = \frac{2C_2}{2C_2 + C_1}$$
$$R = \frac{T\theta^2}{4B(1-\theta)}$$
$$C_1 = \frac{2B(1-\theta)}{\theta}$$
$$C_2 = B$$
$$B = \frac{C_1^2}{2C_1+C_2}$$
$$T = RC_2$$
$$\theta = \frac{2C_1}{2C_1 + C_2}$$
$$R = \frac{T\theta^2}{4B(1-\theta)}$$
$$C_1 = \frac{2B}{\theta}$$
$$C_2 = \frac{4B(1-\theta)}{\theta^2}$$
$$B = \left(\frac{R_1}{R_1 + R_2}\right)C$$
$$T = R_2C$$
$$\theta = \frac{2R_1}{R_1 + R_2}$$
$$R_1 = \frac{T\theta^2}{2B(2-\theta)}$$
$$R_2 = \frac{T\theta}{2B}$$
$$C = \frac{2B}{\theta}$$
VI $$\frac{1}{sB} \left[ \frac{(1+sT_1)(1+sT_3)}{1+sT_2} \right]$$
$$T_1 < T_2 < T_3$$
Note: all circuit diagrams were produced using a modified version of the CircDia package by Stefan Krause, together with scripts to convert circuit descriptions into PNG files.
https://books.compclassnotes.com/rothphys110-2e/2021/06/27/section-7-4-v2/
# Chapter 7: Linear momentum
## 7.4 Two-dimensional collisions
When you are dealing with objects that are not moving along a single axis, you must remember that velocity—and therefore momentum—is a vector. This means that both the $$x$$ and $$y$$ components of momentum are conserved in a collision:
\begin{align*} p_{ix} &= p_{fx} \tag{7.5} \\ p_{iy} &= p_{fy} \tag{7.6} \end{align*}
#### Example
A 1500 kg car is moving west at 25 m/s, and collides with a 4000 kg truck moving 10 m/s in a direction 50° North from West. The two vehicles stick together after the collision. What is the velocity of the wreckage?
It is convention to define East as the positive $$x$$ direction, and North as the positive $$y$$ direction. We’ll need to separate the initial velocities into $$x$$ and $$y$$ components. Since the car is initially traveling due West, its velocity is only in the $$-x$$ direction. The truck has velocity components in both the $$-x$$ and $$+y$$ directions; we can use trigonometry to determine these components in terms of the truck’s speed and direction of travel. Expressed as vectors, the initial velocities are
$\vec{v}_c = \underbrace{-v_c}_{v_{cx}}\hat{x} \quad\text{and}\quad \vec{v}_t = \underbrace{-v_t\cos\theta}_{v_{tx}}\hat{x} + \underbrace{v_t\sin\theta}_{v_{ty}}\hat{y}$
Momentum is conserved. Whenever we work with vectors, we work with the $$x$$ and $$y$$ components separately.
\begin{align*} p_{i,x} &= p_{f,x} & p_{i,y} &= p_{f,y} \\ m_cv_{cx} + m_tv_{tx} &= (m_c + m_t)v_{f,x} & m_c(0) + m_tv_{ty} &= (m_c + m_t)v_{f,y} \\ -m_cv_c - m_tv_t\cos\theta &= (m_c + m_t)v_{f,x} & m_tv_t\sin\theta &= (m_c + m_t)v_{f,y} \\ \hookrightarrow v_{f,x} &= -\frac{m_cv_c + m_tv_t\cos\theta}{m_c + m_t} & \hookrightarrow v_{f,y} &= \frac{m_tv_t\sin\theta}{m_c + m_t} \\ &= -11.5\ \textrm{m/s} & &= 5.57\ \textrm{m/s} \end{align*}
So, the final velocity is
$\vec{v}_f = \left(-11.5\ \textrm{m/s}\right)\hat{x} + \left(5.57\ \textrm{m/s}\right)\hat{y}$
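As a numerical sanity check (my addition, not part of the text), the arithmetic above can be reproduced in a few lines:

```python
import math

# Reproduces the worked example: 1500 kg car due West at 25 m/s collides
# with a 4000 kg truck at 10 m/s, 50 degrees North of West; they stick.
m_c, v_c = 1500.0, 25.0      # car mass (kg) and speed (m/s)
m_t, v_t = 4000.0, 10.0      # truck mass (kg) and speed (m/s)
theta = math.radians(50.0)   # truck heading, measured North of West

# Conserve each momentum component separately (East = +x, North = +y).
v_fx = -(m_c * v_c + m_t * v_t * math.cos(theta)) / (m_c + m_t)
v_fy = (m_t * v_t * math.sin(theta)) / (m_c + m_t)
print(round(v_fx, 1), round(v_fy, 2))  # -11.5 5.57
```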
https://www.projecteuclid.org/euclid.ecp/1528509622
## Electronic Communications in Probability
### Stein’s method for nonconventional sums
Yeor Hafouta
#### Abstract
We obtain almost optimal convergence rate in the central limit theorem for (appropriately normalized) “nonconventional" sums of the form $S_N=\sum _{n=1}^N (F(\xi _n,\xi _{2n},...,\xi _{\ell n})-\bar F)$. Here $\{\xi _n: n\geq 0\}$ is a sufficiently fast mixing vector process with some stationarity conditions, $F$ is a bounded Hölder continuous function and $\bar F$ is a certain centralizing constant. Extensions to more general functions $F$ will be discussed as well. Our approach here is based on the so-called Stein’s method, and the rates obtained in this paper significantly improve the rates in [7]. Our results hold true, for instance, when $\xi _n=(T^nf_i)_{i=1}^\wp$ where $T$ is a topologically mixing subshift of finite type, a hyperbolic diffeomorphism or an expanding transformation taken with a Gibbs invariant measure, as well as in the case when $\{\xi _n: n\geq 0\}$ forms a stationary and exponentially fast $\phi$-mixing sequence, which, for instance, holds true when $\xi _n=(f_i(\Upsilon _n))_{i=1}^\wp$ where $\Upsilon _n$ is a Markov chain satisfying the Doeblin condition considered as a stationary process with respect to its invariant measure.
#### Article information
Source
Electron. Commun. Probab., Volume 23 (2018), paper no. 38, 14 pp.
Dates
Received: 6 April 2017
Accepted: 5 June 2018
First available in Project Euclid: 9 June 2018
Permanent link to this document
https://projecteuclid.org/euclid.ecp/1528509622
Digital Object Identifier
doi:10.1214/18-ECP142
Mathematical Reviews number (MathSciNet)
MR3820128
Zentralblatt MATH identifier
1397.60061
Subjects
Primary: 60F05: Central limit and other weak theorems
#### Citation
Hafouta, Yeor. Stein’s method for nonconventional sums. Electron. Commun. Probab. 23 (2018), paper no. 38, 14 pp. doi:10.1214/18-ECP142. https://projecteuclid.org/euclid.ecp/1528509622
#### References
• [1] E. Bolthausen, Exact convergence rates in some martingale central limit theorems, Ann. Probab. 10 (1982), 672-688.
• [2] R.C. Bradley, Introduction to Strong Mixing Conditions, Volume 1, Kendrick Press, Heber City, 2007.
• [3] L.H.Y. Chen and Q.M. Shao, Normal approximation under local dependence, Ann. Probab. 32 (2004), 1985-2028.
• [4] H. Furstenberg, Nonconventional ergodic averages, Proc. Symp. Pure Math. 50 (1990), 43-56.
• [5] M.I. Gordin, On the central limit theorem for stationary processes, Soviet Math. Dokl. 10 (1969), 1174-1176.
• [6] P. Hall, C.C. Heyde, Rates of convergence in the martingale central limit theorem, Ann. Probab. 9 (1981) 395-404.
• [7] Y. Hafouta and Yu. Kifer, Berry-Esseen type estimates for nonconventional sums, Stoch. Proc. Appl. 126 (2016), 2430-2464.
• [8] Y. Hafouta and Yu. Kifer, Nonconventional polynomial CLT, Stochastics, 89 (2017), 550-591.
• [9] Yu. Kifer, Nonconventional limit theorems, Probab. Th. Rel. Fields, 148 (2010), 71-106.
• [10] Yu. Kifer, Strong approximations for nonconventional sums and almost sure limit theorems, Stochastic Process. Appl., 123 (2013), 2286-2302.
• [11] Yu. Kifer and S.R.S Varadhan, Nonconventional limit theorems in discrete and continuous time via martingales, Ann. Probab., 42 (2014), 649-688.
• [12] D.L. McLeish, Invariance principles for dependent variables, Z.Wahrscheinlichkeitstheorie und Verw. Gebiete 32 (1975), 165-178.
• [13] Y. Rinott, On normal approximation rates for certain sums of dependent random variables, J. Comput. Appl. Math., 55 (1994), 135-143.
• [14] W. Rudin Real and Complex Analysis, McGraw-Hill, New York, 1987.
• [15] C. Stein, A bound for the error in the normal approximation to the distribution of a sum of dependent random variables, Proc. Sixth Berkeley Symp. Math. Statist. Probab, 2 (1972), 583-602. Univ. California Press, Berkeley.
• [16] C. Stein, Approximation Computation of Expectations, IMS, Hayward, CA (1986).
• [17] N. Shiryaev, Probability, Springer-Verlag, Berlin, 1995.
https://blog.peterkagey.com/2021/07/zimin-words-and-bifix-free-words/
# Zimin Words and Bifixes
One of the earliest contributions to the On-Line Encyclopedia of Integer Sequences (OEIS) was a family sequences counting the number of words that begin (or don’t begin) with a palindrome:
• Let $$f_k(n)$$ be the number of strings of length $$n$$ over a $$k$$-letter alphabet that begin with a nontrivial palindrome, for various values of $$k$$.
• Let $$g_k(n)$$ be the number of strings of length $$n$$ over a $$k$$-letter alphabet that do not begin with a nontrivial palindrome.
• Number of binary strings of length $$n$$ that begin with an odd-length palindrome. (A254128)
(If I had known better, I would have published fewer sequences in favor of a table, and I would have requested contiguous blocks of A-numbers.)
I must have written some Python code to compute some small terms of this sequence, and I knew that $$g_k(n) = k^n - f_k(n)$$, but I remember being in my friend Q’s bedroom when the recursion hit me for $$f_k(n)$$: $$f_k(n) = kf_k(n-1) + k^{\lceil n/2 \rceil} - f_k\big(\lceil \frac n 2 \rceil \big)$$
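The recursion is easy to check by brute force for small alphabets (a sketch I've added; it assumes "nontrivial" means a palindromic prefix of length at least 2, with base case $$f_k(1) = 0$$):

```python
from functools import lru_cache
from itertools import product

def ceil_half(n):
    return -(-n // 2)   # integer ceil(n / 2)

def brute(n, k):
    """Count length-n words over a k-letter alphabet that begin with a
    nontrivial (length >= 2) palindromic prefix."""
    return sum(
        any(w[:m] == w[:m][::-1] for m in range(2, n + 1))
        for w in product(range(k), repeat=n)
    )

@lru_cache(maxsize=None)
def f(n, k):
    if n <= 1:   # assumed base case: no nontrivial palindrome fits
        return 0
    return k * f(n - 1, k) + k ** ceil_half(n) - f(ceil_half(n), k)

for k in (2, 3):
    for n in range(1, 10):
        assert f(n, k) == brute(n, k), (k, n)
print("recursion checked for k = 2, 3 and n < 10")
```

(Amusingly, for $$k = 2$$ the recursion collapses to $$f_2(n) = 2^n - 2$$: only two binary words of each length avoid every nontrivial palindromic prefix.)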
## “Bifix-free” words
One sequence that I didn’t add to the OEIS was the “Number of binary strings of length n that begin with an even-length palindrome”—that’s because this was already in the Encyclopedia under a different name:
A094536: Number of binary words of length n that are not “bifix-free”.
0, 0, 2, 4, 10, 20, 44, 88, 182, 364, 740, 1480, 2980, 5960, …
A “bifix” is a shared prefix and suffix, so a “bifix-free” word is one such that all prefixes are different from all suffixes. More concretely, if the word is $$\alpha_1\alpha_2 \dots \alpha_n$$, then $$(\alpha_1, \alpha_2, \dots, \alpha_k) \neq (\alpha_{n-k+1},\alpha_{n-k+2},\dots,\alpha_n)$$ for all $$1 \leq k < n$$.
The reason why the number of binary words of length $$n$$ that begin with an even length palindrome is equal to the number of binary words of length $$n$$ that have a bifix is because we have a bijection between the two sets. In particular, find the shortest palindromic prefix, cut it in half, and stick the first half at the end of the word, backward. I’ve asked for a better bijection on Math Stack Exchange, so if you have any ideas, please share them with me!
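The equality of the two counts can be confirmed by brute force for small $$n$$ (a sketch I've added; "bifix" here means a proper one, of length $$1 \leq k < n$$):

```python
from itertools import product

def has_bifix(w):
    # Proper bifix: shared prefix/suffix of length 1 <= k < len(w).
    n = len(w)
    return any(w[:k] == w[n - k:] for k in range(1, n))

def begins_with_even_palindrome(w):
    return any(w[:m] == w[:m][::-1] for m in range(2, len(w) + 1, 2))

counts = []
for n in range(1, 11):
    words = [''.join(p) for p in product('01', repeat=n)]
    not_bifix_free = sum(map(has_bifix, words))
    even_pal = sum(map(begins_with_even_palindrome, words))
    assert not_bifix_free == even_pal   # the two counts always agree
    counts.append(not_bifix_free)

# First terms of A094536 (offset so a(0) = 0): 0, 0, 2, 4, 10, 20, 44, 88, ...
print(counts[:7])  # [0, 2, 4, 10, 20, 44, 88]
```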
In 2019–2020, Daniel Gabric and Jeffrey Shallit wrote a closely related paper called Borders, Palindrome Prefixes, and Square Prefixes.
## Zimin words
A Zimin word can be defined recursively, but I think it’s most suggestive to see some examples:
• $$Z_1 = A$$
• $$Z_2 = ABA$$
• $$Z_3 = ABACABA$$
• $$Z_4 = ABACABADABACABA$$
• $$Z_n = Z_{n-1} X Z_{n-1}$$
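The recursion in the last bullet translates directly into code (a small sketch of mine, using letters A, B, C, … for the fresh symbol at each level):

```python
def zimin(n):
    """Build the n-th Zimin word: Z_1 = 'A', Z_n = Z_{n-1} + new_letter + Z_{n-1}."""
    word = ''
    for i in range(n):
        letter = chr(ord('A') + i)   # fresh letter for this level
        word = word + letter + word
    return word

print(zimin(4))  # ABACABADABACABA
```

Note that $$Z_n$$ has length $$2^n - 1$$, so the words grow quickly.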
All Zimin words $$Z_n$$ are examples of “unavoidable patterns”, because every sufficiently long string with letters in any finite alphabet contains a substring that matches the $$Z_n$$ pattern.
For example the word $$0100010010111000100111000111001$$ contains a substring that matches the Zimin word $$Z_3$$. Namely, let $$A = 100$$, $$B = 0$$, and $$C = 1011$$, visualized here with each $$A$$ emboldened: $$0(\mathbf{100}\,0\,\mathbf{100}\,1011\,\mathbf{100}\,0\,\mathbf{100})111000111001$$.
I’ve written a Ruby script that generates a random string of length 29 and uses a regular expression to find the first instance of a substring matching the pattern $$Z_3 = ABACABA$$. You can run it on TIO, the impressive (and free!) tool from Dennis Mitchell.
# Randomly generates a binary string of length 29.
random_string = 29.times.map { [0,1].sample }.join("")
p random_string
# Finds the first Zimin word ABACABA
p random_string.scan(/(.+)(.+)\1(.+)\1\2\1/)[0]
# Pattern: A B A C A B A
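For comparison, here is an equivalent search in Python (my addition, not from the post). Since Python's greedy matching may pick a different split than Ruby's `scan`, the check below verifies that whatever groups the regex finds really assemble into the shape ABACABA:

```python
import re

# Backreference pattern for the Zimin word Z_3 = ABACABA.
zimin3 = re.compile(r'(.+)(.+)\1(.+)\1\2\1')

s = '0100010010111000100111000111001'   # the example string from above
m = zimin3.search(s)
assert m is not None
a, b, c = m.group(1), m.group(2), m.group(3)
assert m.group(0) == a + b + a + c + a + b + a   # matched text is ABACABA
print(a, b, c)
```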
Why 29? Because all binary words of length 29 contain the pattern $$Z_3 = ABACABA$$. However, Joshua Cooper and Danny Rorabaugh’s paper provides 48 words of length 28 that avoid that pattern (these and their reversals):
1100000010010011011011111100
1100000010010011111101101100
1100000010101100110011111100
1100000010101111110011001100
1100000011001100101011111100
1100000011001100111111010100
1100000011011010010011111100
1100000011011011111100100100
1100000011111100100101101100
1100000011111100110011010100
1100000011111101010011001100
1100000011111101101100100100
1100100100000011011011111100
1100100100000011111101101100
1100100101101100000011111100
1100110011000000101011111100
1100110011000000111111010100
1100110011010100000011111100
1101010000001100110011111100
1101010000001111110011001100
1101010011001100000011111100
1101101100000010010011111100
1101101100000011111100100100
1101101100100100000011111100
## The Zimin Word $$Z_2 = ABA$$ and Bifixes
The number of words of length $$n$$ that match the Zimin pattern $$Z_2 = ABA$$ is equal to the number of words that begin with an odd-length palindrome. Analogously, the number of words with a bifix is equal to the number of words that begin with an even-length palindrome. The two counts agree when $$n$$ is odd.
I’ve added OEIS sequences A342510–A342512 which relate to how numbers viewed as binary strings avoid—or fail to avoid—Zimin words. I asked users to implement this on Code Golf Stack Exchange.
https://mattthompson.info/posts/2019/12/progressbar/
# Using progressbar in Python scripts
Published:
I often find myself running a script, usually analysis of some molecular dynamics trajectory, without knowing how long it will take. This is typically for any of the following reasons:
• I need to finely sample a trajectory, i.e. calling some function on thousands of frames
• I’m running a novel analysis and have not developed an intuition about how long it should take (how well it should scale, if it is likely to encounter a memory bottleneck that makes it hard to prototype on a MacBook, etc.)
• Like above, maybe I just wrote it - and I try to adhere to the philosophy of “get it to work, then get it to work well” (I may write about this another time, but for now I will just say this slide from a talk by Stan Seibert at SciPy 2019 was formative for me) which means I’ve probably used NumPy functions but left some other optimizations on the table.
So I often find myself running python my_cool_function.py and … waiting. If it finishes after a few seconds, nothing to worry about. But if it takes a minute or two, then I can either hope it takes only a few minutes longer or, as happens often, wait something like 15 minutes. Not great practice. Until recently, my approach was to set some counter in the middle of the expensive part of the script and just print stuff out:
num_iters = len(thing_im_iterating_over)
for i, var in enumerate(thing_im_iterating_over, start=1):
    # Do some computations
    print('Done {} out of {}'.format(i, num_iters))
This works well enough for diagnosing how long my script may take - it’s easy enough to see if it’ll take the order of minutes or hours, in which case something else in the workflow needs to change - but isn’t very pretty. There’s nothing wrong with using print statement for debugging and toying around, but I find myself doing this often enough I wanted to look for a prettier solution. I’ve seen some programs (i.e. signac lately, or downloading packages from various package managers) use progressbars to inform the user of the status of an operation, particularly long ones. Unsurprisingly, it’s fairly straightforward to do in Python, and here I am sharing a couple ways I have used it. I used the package of the same name, although I’m sure there are other fine options out there.
The first simple example is structured like above, where we have a clear iterator (or generator, of course) we’re iterating over and we know its length.
import progressbar

bar = progressbar.ProgressBar()
for i in bar(thing_im_iterating_over):
    # Do some computations
This will print out the progress bar to the terminal and it will update as the loop is executed.
A slight variation of this I found useful was the case in which we don’t necessarily want to update each iteration of a loop, but only want to update when some criteria in the loop was met.
import progressbar

pbar = progressbar.ProgressBar().start()
ts = 0
num_ts = 1500000
with open('huge_file.txt', 'r') as fi:
    for line in fi:
        line = line.split()
        # ... parse the line; when the line of interest is hit: ...
        ts += 1
        if ts % 1000 == 0:
            # report percent done (default max of 100 assumed)
            pbar.update(int(100 * ts / num_ts))
What I don’t like about this approach is that I’m carrying an extra counter variable that I probably don’t need. That aside, it has the benefit of only updating the progress bar every few thousand iterations of the outer for loop. In this case, I was given a 1,170,315,604-line text file and needed to parse a particular comment line every few thousand lines, and even of those I only needed to track every few thousand hits.
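One way to drop that extra counter variable (a sketch of my own, not from the original script) is `enumerate`, which yields the running index alongside each item, so progress can be reported without hand-maintaining `ts`:

```python
import io

def count_with_progress(fi, report_every=1000):
    """Iterate over a file-like object, recording a progress report
    every `report_every` lines (a stand-in for pbar.update)."""
    total = 0
    reports = []
    for total, line in enumerate(fi, start=1):
        if total % report_every == 0:
            reports.append(total)  # in the real script: pbar.update(...)
    return total, reports

# Usage, with an in-memory stand-in for huge_file.txt:
total, reports = count_with_progress(io.StringIO("x\n" * 2500))
# total == 2500, reports == [1000, 2000]
```

The same pattern drops straight into the file-parsing loop above: `for ts, line in enumerate(fi, start=1):` replaces both the `ts = 0` initialization and the manual increment.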
https://scholars.ncu.edu.tw/zh/publications/measurement-of-properties-of-bs0-%CE%BCsupsup%CE%BCsup-sup-decays-and-searc
|
Measurement of properties of Bs0 → μ+μ − decays and search for B0 → μ+μ − with the CMS experiment
The CMS collaboration
31 citations (Scopus)
Abstract
Results are reported for the Bs0 → μ+μ− branching fraction and effective lifetime and from a search for the decay B0 → μ+μ−. The analysis uses a data sample of proton-proton collisions accumulated by the CMS experiment in 2011, 2012, and 2016, with center-of-mass energies (integrated luminosities) of 7 TeV (5 fb−1), 8 TeV (20 fb−1), and 13 TeV (36 fb−1). The branching fractions are determined by measuring event yields relative to B+ → J/ψK+ decays (with J/ψ → μ+μ−), which results in the reduction of many of the systematic uncertainties. The decay Bs0 → μ+μ− is observed with a significance of 5.6 standard deviations. The branching fraction is measured to be B(Bs0 → μ+μ−) = [2.9 ± 0.7 (exp) ± 0.2 (frag)] × 10−9, where the first uncertainty combines the experimental statistical and systematic contributions, and the second is due to the uncertainty in the ratio of the Bs0 and the B+ fragmentation functions. No significant excess is observed for the decay B0 → μ+μ−, and an upper limit of B(B0 → μ+μ−) < 3.6 × 10−10 is obtained at 95% confidence level. The Bs0 → μ+μ− effective lifetime is measured to be τμ+μ− = 1.70 +0.61 −0.44 ps. These results are consistent with standard model predictions.
Journal of High Energy Physics, 2020(4), article 188. https://doi.org/10.1007/JHEP04(2020)188. Published 1 April 2020.
https://math.stackexchange.com/questions/1199681/solve-sqrt5-12i-by-square-root-definition
|
# Solve $\sqrt{5-12i}$ by square root definition
I KNOW it can be solved by the trig formula, but I want to solve it by the square root definition, so please don't just post an alternative way to do it.
By the square root definition:
$$z = 5-12i$$ $$\sqrt{z} = w\implies w^2 = z$$
So if I suppose $w = a+bi$ we have:
$$w^2 = z \implies (a+bi)^2 = 5-12i\implies\\a^2-b^2+2abi = 5-12i\implies \\5 = a^2-b^2\\-12 = 2ab\implies$$
$$b^4-5b^2-36 = 0\implies b = \pm \sqrt{-4} = \pm 2i, b = \pm 3$$
Then I get $4$ solutions: For $b = \pm 2i$ I get $a = \pm 3i$ then $\sqrt{z}$ is $$\pm 2 + 3i$$ For $b = \pm 3$ I get $a = \pm 2$ and $\sqrt{z}$ is $$2\pm 3i$$
But two of these, when squared, aren't $z$. What am I doing wrong?
• I think it should be $b^4+5b^2-36=0,$ and $ab=-6.$ – awllower Mar 21 '15 at 15:05
• Also,$ab=-6$ in second step – Akshay Bodhare Mar 21 '15 at 15:07
• $ab=12$ is not correct. – aNumosh Mar 21 '15 at 15:08
• $ab=-6,$ especially. – Megadeth Mar 21 '15 at 15:09
• See here for a simple, easily memorable rule for square-root denesting. – Bill Dubuque Mar 21 '15 at 15:24
An easier way is to solve: $$\begin{cases}a^2-b^2=5\ \ (1)\\ 2ab=-12\ \ (2)\\a^2+b^2=|5-12i|=13\ \ (3),\end{cases}$$
because you don't have to solve an equation of degree $4$.
Therefore, by $(1)$ $$a^2=b^2+5$$
By $(3)$ $$2b^2=8\implies b^2=4\implies b=\pm2$$
and by $(2)$ $$ab=-6\implies a=\mp3$$
Therefore $$\sqrt{5-12i}=\mp3\pm 2i$$
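As a quick numerical sanity check (my addition, not part of the original answer), Python's built-in complex arithmetic confirms that both answers square back to $z$, while a spurious candidate from the question does not:

```python
# Verify that w = -3 + 2i and w = 3 - 2i both satisfy w^2 = 5 - 12i.
z = 5 - 12j
roots = [-3 + 2j, 3 - 2j]          # the values sqrt(5 - 12i) = ∓3 ± 2i
ok = all(w * w == z for w in roots)

# One of the question's extra candidates, 2 + 3i, squares to -5 + 12i instead:
spurious = (2 + 3j) ** 2
```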
http://pub.acta.hu/acta/showCustomerArticle.action?id=9400&dataObjectType=article&returnAction=showCustomerVolume&sessionDataSetId=2a9104fdb4b2adae&style=
|
ACTA issues
## Multipliers for the pair $(L^1(G,A),L^p(G,A))$
Abstract. The purpose of this paper is to characterize the multipliers for the pair $(L^1(G,A),L^p(G,A))$, $1< p< \infty$, where $G$ is a locally compact abelian group and $A$ a commutative complex Banach algebra with a bounded approximate identity.
AMS Subject Classification (1991): 43A22, 47B38
Received January 3, 1995. (Registered under 5691/2009.)
https://hscstudylab.com.au/year-12-physics-module-6-content-1
|
« Back to subjects
# Year 12 Physics
## Module 6 | Electromagnetism
### Content 1: Charged particles, conductors and electric and magnetic fields
#### Lesson 1 | Charges in electric fields
• investigate and quantitatively derive and analyse the interaction between charged particles and uniform electric fields, including: (ACSPH083)
– electric field between parallel charged plates ($\left | \vec{E} \right | = \frac{V}{d}$)
– acceleration of charged particles by the electric field ($\vec{F} = m\vec{a}$, $\vec{F} = q\vec{E}$)
– work done on the charge ($W = qV$, $W = qEd$, $K = \frac{1}{2}mv^{2}$)
• model qualitatively and quantitatively the trajectories of charged particles in electric fields and compare them with the trajectories of projectiles in a gravitational field
#### Lesson 2 | The motion of charged particles moving in a uniform magnetic field
• analyse the interaction between charged particles and uniform magnetic fields, including: (ACSPH083)
– acceleration, perpendicular to the field, of charged particles
– the force on the charge ($F = qvB\sin \theta$)
• compare the interaction of charged particles moving in magnetic fields to:
– the interaction of charged particles with electric fields
– other examples of uniform circular motion (ACSPH108)
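The Lesson 1 relations chain together naturally; the numeric sketch below (my own illustrative values for an electron crossing parallel plates, not taken from the syllabus) shows $|\vec{E}| = V/d$, $F = qE = ma$, $W = qV$, and $K = \frac{1}{2}mv^{2}$ in sequence:

```python
import math

# Illustrative values (assumed): an electron accelerated across parallel
# plates with V = 100 V and separation d = 1 cm.
q = 1.602e-19    # elementary charge (C)
m = 9.109e-31    # electron mass (kg)
V = 100.0        # potential difference across the plates (V)
d = 0.01         # plate separation (m)

E = V / d                  # field magnitude between the plates, |E| = V/d
F = q * E                  # force on the charge, F = qE
a = F / m                  # acceleration, from F = ma
W = q * V                  # work done on the charge, W = qV (= qEd)
v = math.sqrt(2 * W / m)   # final speed, from K = (1/2) m v^2
```

Because the acceleration is uniform and perpendicular to the entry velocity, the resulting trajectory between the plates is parabolic, exactly like a projectile in a uniform gravitational field.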
https://jp.maplesoft.com/support/help/maple/view.aspx?path=ColorTools/PaletteNames&L=J
|
PaletteNames - Maple Help
ColorTools
PaletteNames
get list of known color palettes
Calling Sequence
PaletteNames()
PaletteNames(normalized)
Description
• The PaletteNames command returns a list of the names of all the known palettes.
• If the option normalized is given, then all-lowercase versions of the names will be returned.
List of Color Collections
• The following is a list of palettes initially known by Maple:
Examples
> $\mathrm{with}\left(\mathrm{ColorTools}\right):$
> $\mathrm{PaletteNames}\left(\right)$
$\left[{"Niagara"}{,}{"Nautical"}{,}{"Spring"}{,}{"OldPlots"}{,}{"Mono"}{,}{"Dalton"}{,}{"Executive"}{,}{"Bright"}{,}{"Patchwork"}{,}{"CSS"}{,}{"CVD"}{,}{"CVD2"}{,}{"CVD3"}{,}{"HTML"}{,}{"MapleV"}{,}{"X11"}{,}{"Resene"}{,}{"Generic"}{,}{"xterm"}{,}{"Solarized"}{,}{"xkcd"}\right]$ (1)
> $P≔\mathrm{Palette}\left(\left["Red"=\mathrm{Color}\left("#f00"\right),"Blue"=\mathrm{Color}\left("#00f"\right),"Green"=\mathrm{Color}\left("#0f0"\right)\right]\right)$
${P}{≔}⟨{Palette:}{}\colorbox[rgb]{1,0,0}{{Red}}{}\colorbox[rgb]{0,0,1}{{Blue}}{}\colorbox[rgb]{0,1,0}{Green}⟩$ (2)
> $\mathrm{AddPalette}\left("Primary",P\right)$
$\left[{"niagara"}{,}{"nautical"}{,}{"spring"}{,}{"oldplots"}{,}{"mono"}{,}{"dalton"}{,}{"executive"}{,}{"bright"}{,}{"patchwork"}{,}{"css"}{,}{"cvd"}{,}{"cvd2"}{,}{"cvd3"}{,}{"html"}{,}{"maplev"}{,}{"x11"}{,}{"resene"}{,}{"generic"}{,}{"xterm"}{,}{"solarized"}{,}{"xkcd"}{,}{"primary"}\right]$ (3)
> $\mathrm{PaletteNames}\left(\right)\left[-1\right]$
${"Primary"}$ (4)
> $\mathrm{PaletteNames}\left('\mathrm{normalized}'\right)\left[-1\right]$
${"primary"}$ (5)
Compatibility
• The ColorTools[PaletteNames] command was introduced in Maple 16.
• For more information on Maple 16 changes, see Updates in Maple 16.
See Also
https://reasonedwriting.moodlecloud.com/course/view.php?id=3§ion=41
|
Hierarchical conjunctions can provide valuable information about the relationships among elements.
There are many ways to indicate hierarchies. However, two main categories are subsets (less inclusive groups within a larger group) and supersets (more inclusive groups that contain a smaller group).
EXAMPLES OF HIERARCHICAL CONJUNCTIONS
Establishing and identifying hierarchies among concepts can be an extremely powerful way to organize thinking and writing. Using hierarchical conjunctions among elements of an argument can help establish context and make writing stronger and more explanatory.
https://xn--attsettfravtvivel-klimataktion-utc36c.se/topic/d3df22-skew-symmetric-matrix-example-3x3
|
This video explains the concept of a skew-symmetric matrix. A square matrix A is skew-symmetric (also called antisymmetric or antimetric) if its transpose equals its negative, A^T = −A; equivalently, a_ji = −a_ij for all i and j. Setting i = j gives a_ii = −a_ii, so in characteristic different from 2 every diagonal entry of a skew-symmetric matrix is zero, and hence the trace is zero. The general form of a 3×3 skew-symmetric matrix is

0 −b −c
b 0 −d
c d 0

while the general form of a symmetric matrix is

a b c
b e d
c d f

A 4×4 skew-symmetric matrix likewise has zeros on the diagonal, leaving 6 entries that can be chosen independently. The set of all 2×2 skew-symmetric matrices is a subspace, of dimension 1. A matrix that is both symmetric and skew-symmetric must be the zero matrix.

According to Jacobi's theorem, the determinant of a skew-symmetric matrix of odd order is zero; one can verify this for the 3×3 form above directly, using co-factors. It follows that every odd-order skew-symmetric matrix is not invertible, or equivalently singular, and has the eigenvalue 0. More generally, all eigenvalues of a real skew-symmetric matrix are purely imaginary or zero (this result is proven on the page for skew-Hermitian matrices).

Every square matrix A can be expressed as the sum of a symmetric and a skew-symmetric matrix: since (kA)^T = kA^T, the matrices (1/2)(A + A^T) and (1/2)(A − A^T) are symmetric and skew-symmetric, respectively, and they sum to A. Also, if A is a skew-symmetric matrix and B is any matrix of the same order, then B^T A B is skew-symmetric. The class of matrices that can be represented as products of two matrices, each of which is either symmetric or skew-symmetric, has been identified, along with the possible ranks of the factors in such representations [1].

3×3 skew-symmetric matrices can be used to represent cross products as matrix multiplications. For a vector a = (a1, a2, a3), define

[a]× =
0 −a3 a2
a3 0 −a1
−a2 a1 0

Then [a]× x = a × x for every vector x; the columns [a]×,i can also be obtained by calculating the cross product of a with the unit vectors. The hat operator allows us to switch between these two representations, so any skew-symmetric 3×3 matrix can be represented concisely as a 3×1 vector.

In MATLAB's LMI tools, skewdec creates a skew-symmetric matrix of decision variables for an LMI problem; set n to the number of decision variables already used. With n = 2, X = skewdec(3,2) returns

0 −3 −4
3 0 −5
4 5 0

The MINRES method was applied to three systems whose matrices are shown in Figure 21.14. In each case, x0 = 0, and b was a matrix with random integer values; using m = 50 and tol = 1.0 × 10−6, one iteration gave a residual on the order of 10−10.

[1] F.R. Gantmacher, The Theory of Matrices, Vol. 1, Chelsea, reprint (1977) (translated from Russian).
= − a j i for all i and j not invertible, or equivalently.... A question skew symmetric matrix example 3x3 - a ji 2 matrix which is equivalent to vector cross (. Skew, it 's negative for skew-Hermitian matrices matrix must be zero since in this case a= which... Set of all 2x2 skew-symmetric matrices are shown in Figure 21.14 special form as the. = skewdec ( 3,2 ) x = 3×3 0 -3 -4 3 0 -5 4 0... To Jacobi ’ S Theorem, the following conditions must exist refers to the number of variables... Using m = 50 and tol = 1.0 × 10 −6, one iteration gave a residual of.... Representing vector cross Multiplication References we Give a solution of a given matrix are zero ] t -. Covers symmetric, skew symmetric matrices matrix: first write down a skew symmetric matrices $0$ whose are! Negative, the following situations: a problem in which n = 2 -5 4 5 See! Next problem elements meet the following situations: a ij = −a ji ; hence a ii = 0 1... Arbitrary coefficients the page for skew-Hermitian matrices this case a= -a which is equivalent to cross! Dimensional matrix ( for example, consider the vector, omega = 1,,... Since in this case, set n to the number of decision already... Whose matrices are purely imaginary or zero is said to be a zero.! Matrix are identified as well be expressed as the sum of a skew symmetric matrices can be used represent. × 10 −6, one iteration gave a residual of 3 sum of a skew-symmetric matrix compute the eigenvalues eigenvectors! Note that all the main diagonal elements in the skew-symmetric matrix are zero and for symmetric matrix zero, the. Two representations covers symmetric, skew symmetric if its elements meet the following rule a! The following matrix is skew symmetric matrix have a mxnx3 dimensional matrix ( for example, consider the vector Multiplication!, it 's transpose must also be it 's transpose must also be it 's transpose also... Same order as a, x 0 = 0 n in each of the following situations: a ij −. 
A matrix $A$ is skew-symmetric if $A^T = -A$, i.e. if $a_{ij} = -a_{ji}$ for all $i$ and $j$. Setting $i = j$ gives $a_{ii} = -a_{ii}$, which holds only when $a_{ii} = 0$: every main diagonal entry of a skew-symmetric matrix is zero, so its trace is zero. The general form of a $3 \times 3$ skew-symmetric matrix is
$\begin{pmatrix} 0 & b & c \\ -b & 0 & e \\ -c & -e & 0 \end{pmatrix}$
so only the 3 entries above the diagonal are free; the 3 entries below are their negatives.
By Jacobi's Theorem, the determinant of a skew-symmetric matrix of odd order is zero, so any such matrix is singular and has the eigenvalue $0$. The nonzero eigenvalues of a real skew-symmetric matrix are purely imaginary and occur in conjugate pairs, so a nonzero skew-symmetric matrix has an even number of nonzero eigenvalues (at least 2). Every square matrix can be expressed as the sum of a symmetric and a skew-symmetric matrix: $A = \frac{1}{2}(A + A^T) + \frac{1}{2}(A - A^T)$. A real symmetric matrix represents a self-adjoint operator over a real inner product space, and the only matrix that is both symmetric and skew-symmetric is the zero matrix (in the $2 \times 2$ case, for example, symmetry plus skew-symmetry forces $a = -a$, which is only true when $a = 0$).
A 3-dimensional vector $a = (x, y, z)$ can be converted into a skew-symmetric matrix $[a]_\times$, which represents the vector cross product as a matrix multiplication: $[a]_\times b = a \times b$. In MATLAB, `X = skewdec(3,2)` constructs a $3 \times 3$ skew-symmetric matrix of decision variables for an LMI problem, where the second argument is the number of decision variables already used:
X = skewdec(3,2)
X = 3×3
     0    -3    -4
     3     0    -5
     4     5     0
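A minimal pure-Python sketch (illustrative, not from the original page) of the packing $[a]_\times$ and its defining properties:

```python
def skew(v):
    # pack a 3-vector into the skew-symmetric matrix [a]x that
    # represents "cross product with a" as a matrix multiplication
    x, y, z = v
    return [[0, -z,  y],
            [z,  0, -x],
            [-y, x,  0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

a, b = [1, 2, 3], [4, 5, 6]
S = skew(a)
assert matvec(S, b) == cross(a, b)                # [a]x b equals a x b
assert all(S[i][j] == -S[j][i] for i in range(3)  # transpose is negation,
           for j in range(3))                     # so the diagonal is zero
```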
http://clay6.com/qa/12464/how-many-9-digit-numbers-can-be-formed-from-the-numbers-223355888-so-that-t
# How many 9 digit numbers can be formed from the numbers 223355888 so that the odd digits occupy even positions.
Options: 16, 36, 60, 180
## 1 Answer
There are 4 even places and 5 odd places.
The odd digits are 3, 3, 5, 5. They can be placed in the 4 even places in $\large\frac{4!}{2!\,2!}$$=6$ ways.
The even digits are 2, 2, 8, 8, 8. They can be placed in the 5 odd places in $\large\frac{5!}{3!\,2!}$$=10$ ways.
$\therefore$ The number of required 9-digit numbers $= \large\frac{4!}{2!\,2!}\times\frac{5!}{3!\,2!}$$= 6 \times 10 = 60$
answered Aug 14, 2013
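As a sanity check (an illustrative brute force, not part of the original answer), counting the distinct permutations directly gives the same total:

```python
from itertools import permutations

digits = "223355888"
# 1-indexed even positions are 0-indexed indices 1, 3, 5, 7;
# the odd digits available are 3, 3, 5, 5
count = len({p for p in permutations(digits)
             if all(p[i] in "35" for i in (1, 3, 5, 7))})
print(count)  # 60
```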
https://socratic.org/questions/how-are-scientific-notation-related-to-exponents
# How are scientific notation related to exponents?
Mar 20, 2018
Scientific notation makes use of exponents
#### Explanation:
Scientific notation is expressed as the significantly measured digits multiplied by a power of ten.
The significantly measured digits are written with one digit to the left of the decimal point and the remaining digits as decimal fractions; the value of the number is preserved by multiplying by a power of ten.
435,000 = $4.35 \times 10^{5}$ in scientific notation.
.0000435 = $4.35 \times 10^{-5}$ in scientific notation.
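As a quick illustrative check (mine, not part of the original answer), Python's `e` format follows the same convention:

```python
# one digit left of the decimal point, two retained decimals,
# and the power of ten carried in the exponent field
assert f"{435000:.2e}" == "4.35e+05"
assert f"{0.0000435:.2e}" == "4.35e-05"
print(f"{435000:.2e}")  # 4.35e+05
```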
https://www.swmath.org/?term=Hessenberg%20QR%20algorithm
• **AggDef2** (referenced in 3 articles [sw10244]): LAPACK subroutine and is the preferred algorithm for this purpose. In this paper, we incorporate ... been applied successfully to the Hessenberg QR algorithm. Extensive numerical experiments show that aggressive early...
• **UDC** (referenced in 19 articles [sw20233]): spectral resolution of a unitary upper Hessenberg matrix H. Any such matrix H of order ... algorithm requires only $O(n^2)$ arithmetic operations. Experimental results presented indicate that the algorithm ... reliable and competitive with the general QR algorithm applied to this problem. Moreover, the algorithm...
• **Algorithm 800** (referenced in 11 articles [sw04405]): needed by the general QR algorithm. Routines are provided for computing the square-reduced form ... execution times compared to the general QR routine, DGEEVX, from the LAPACK library. The authors ... three scaling strategies mentioned (norm, symplectic, and Hessenberg), norm scaling was found...
• **Algorithm 826** (referenced in 3 articles [sw04468]): Algorithm 826: A parallel eigenvalue routine for complex Hessenberg matrices. A code for computing ... Hessenberg matrix is presented. This code computes the Schur decomposition of a complex Hessenberg matrix ... directly computed using a parallel $QR$ algorithm. This parallel complex Schur decomposition routine was developed ... implement a complex multiple bulge $QR$ algorithm. This also required the development of new auxiliary...
• **Algorithm 730** (referenced in 5 articles [sw14348]): spectral resolution of a unitary upper Hessenberg matrix H. Any such matrix H of order ... algorithm requires only $O(n^2)$ arithmetic operations. Experimental results presented indicate that the algorithm ... reliable and competitive with the general QR algorithm applied to this problem. Moreover, the algorithm...
https://puzzling.stackexchange.com/questions/52251/a-colorful-wheel
# A colorful wheel
Here's an interesting puzzle.
This is called "Rudenko's Disk" (Brainwright) — finding information or solves for it online isn't all that challenging, which would largely ruin the fun here, so don't go looking it up. Here's what the puzzle looks like:
The puzzle consists of 7 movable Dots, colored roughly Red, Orange, Yellow, Green, Cyan, Blue, and Pink. In the picture they are in the middle, in the order listed, starting at the bottom right and running clockwise around the center circle (which does not move). These Dots are in a track which runs around the center, counter-clockwise from where the Pink Dot sits, around the center circle to where the Red Dot sits, then branches off in both directions to form the two visible track arms at the upper and lower extremities of the puzzle. Each of these arms runs through a series of colored positions, also ranging through the same set of colors in the same order (Red, ..., Pink). The two arms are completely symmetric. Though not visible with the Dots in their current position, the Dots on the center track are also sitting on like-colored positions - Red Dot on Red, ..., Pink Dot on Pink.
The colors' order is important. Looking at the right-hand part of the puzzle, at the "Y" where the three sections of the track meet, each colored Dot can only move as far down any of the three track parts as their own color. Thus, the Red Dot cannot move beyond the Red position (so can never be farther from the "Y" than it is right now, regardless of which section track it is on). The Green Dot can move halfway along any of the three track sections, but not beyond there. The Pink Dot can move to any position on any of the track sections.
Two final notes. First: if they are pushed as far down the track as possible, the Dots will sit exactly on their own colored position. And second: in practical terms, only one Dot can be inside the "Y" at a time - that is to say, for a Dot to traverse through the "Y" from one track onto another, any Dots on the third track must be pushed all the way to (or beyond) the Red position.
Ok - now that all the mechanical considerations are covered, hopefully intelligibly, on to the objective.
How many Dot moves will it take to move all 7 Dots from their current positions on the center track to their appropriate positions on the upper track?
• Computer-aided solutions: OK or not? – Gareth McCaughan May 30 '17 at 21:41
While the solution of the tower of Hanoi also applies to this puzzle, there is a shorter solution!
Let's number the pieces 1 to 7, where 1 is the red, innermost piece which corresponds to the smallest disk of the Hanoi tower. Whereas the Hanoi puzzle does not allow you to place a larger disk onto a smaller one, that is allowed on this puzzle. The Rudenko disk's only restriction is that you can place at most n-1 disks on top of disk n. This is a weaker restriction that allows you some short cuts. Here is a picture of the first shortcut:
This generalises, and allows you to reduce the problem of moving a stack of $n$ disks to the simpler problems of moving a stack of $n-2$ disks three times, and moving the two largest disks twice. This leads to a solution whose length satisfies the recurrence $L(n) = 3·L(n-2) + 4$. From this and the base cases $L(1) = 1$ and $L(2) = 3$, you can prove by induction that if $n$ is even, say $n=2k$, then $L(2k) = 5·3^{k-1} - 2$, and if $n$ is odd, $n=2k-1$, then $L(2k-1) = 3^k - 2$.
This is a considerable saving compared to the Hanoi solution of $L(n)=2^n-1$. For the 7 pieces in this puzzle you can do it in $79$ moves instead of $127$.
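The recurrence and the closed forms can be verified with a small script (mine, not part of the original answer):

```python
def L(n):
    # minimal move count for n pieces on the Rudenko disk,
    # via the recurrence L(n) = 3*L(n-2) + 4
    if n == 1:
        return 1
    if n == 2:
        return 3
    return 3 * L(n - 2) + 4

# closed forms: L(2k) = 5*3**(k-1) - 2 and L(2k-1) = 3**k - 2
for k in range(1, 11):
    assert L(2 * k) == 5 * 3 ** (k - 1) - 2
    assert L(2 * k - 1) == 3 ** k - 2

print(L(7), 2 ** 7 - 1)  # 79 versus 127 for the Towers-of-Hanoi solution
```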
Here is a picture with the full solution:
For more details you can look at the Hanoi page on my website.
• What madness is this?! You are, of course, correct. But you clearly stole the optimal solution from this blog post ... oh. wait. Hehe. Actually I see from that blog that apparently not even the designer knew the shortcut. My puzzle had no instructions included in it at all, so maybe they've stopped providing the non-optimal "optimal" 127 step solution entirely. – Rubio May 31 '17 at 13:26
• Good visual solution! But I think you should change the Blue and the Pink pieces in the picture with the full solution. – Nick Jun 23 '20 at 2:08
The number of moves required is
$2^7-1=127$.
This resembles
the Towers of Hanoi, though the constraint isn't quite the same. Here's the point. Number the discs from 1 (red) to 7 (pink). In order to get disc $n$ from one track to its rightful place on another track, you need all the lower-numbered discs out of that track (because any of them will block its path on the track they're on). So, suppose we want to transfer all the discs from track A to track B. We need, in particular, to transfer the pink disc to the far end of track B, which we can only do once we have moved all the other discs to track C. And after doing this we will have to move all those other discs from track C to track B. This is exactly the same structure as we see with the Towers of Hanoi, and it leads to the same recurrence relation and the same number of moves (and, indeed, essentially the same solution).
• Sorry Gareth - it seems you (and I) are wrong about the optimal solution. Moving the checkmark to Jaap's answer! – Rubio May 31 '17 at 13:29
• Coooool. Rubio, I said it wasn't obvious that the puzzle was exactly isomorphic to ToH. (But clearly I didn't take that seriously enough...) – Gareth McCaughan May 31 '17 at 14:54
Gareth is right. This is the same as the Towers of Hanoi. The interesting thing is that this means that there is a very easy algorithm for solving it. The nice thing is that the solution can end on either side, so we do not need to worry about which direction to do the first move (although this is not too complicated as it is just a polarity issue).
1. Move the red one every alternate move in a cycle (let's say clockwise).
2. Make whatever other move keeps the order correct (or actually is possible - there will only be one possible move, assuming the home track has the same restrictions).
By correct I mean in the correct color order (so you can put orange on yellow, but not the other way around).
So the first few moves would be (calling the three positions top, middle, and bottom):
• Red to the top (Solves the case for $n=1$)
• Orange to the bottom
• Red to the bottom (Solves the case for $n=2$)
• Yellow to the top
• Red to the middle
• Orange to the top
• Red to the top (Solves the case for $n=3$)
• Green to the bottom
• Red to the bottom
• ...
You get the idea. If you look at the odd moves, you will see that the red is simply cycling top$\rightarrow$bottom$\rightarrow$middle$\rightarrow\cdots$ If you know this algorithm, you can impress people with the Towers of Hanoi at parties. And with this game, it seems.
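The two rules above translate directly into code; here is a sketch I wrote for illustration (peg numbering and move bookkeeping are my own, not from the answer):

```python
def iterative_hanoi(n):
    # pegs: 0 = start, 1 = spare, 2 = target; discs numbered 1 (smallest) to n
    pegs = {0: list(range(n, 0, -1)), 1: [], 2: []}
    # cycle followed by the smallest disc (direction depends on parity of n)
    order = [0, 2, 1] if n % 2 == 1 else [0, 1, 2]
    pos, moves = 0, []
    while len(pegs[2]) != n:
        # odd-numbered move: shift the smallest disc one step along its cycle
        a = order[pos]
        pos = (pos + 1) % 3
        b = order[pos]
        pegs[b].append(pegs[a].pop())
        moves.append((a, b))
        if len(pegs[2]) == n:
            break
        # even-numbered move: the only legal move not touching the smallest disc
        x, y = [p for p in (0, 1, 2) if p != b]
        if not pegs[x] or (pegs[y] and pegs[y][-1] < pegs[x][-1]):
            pegs[x].append(pegs[y].pop())
            moves.append((y, x))
        else:
            pegs[y].append(pegs[x].pop())
            moves.append((x, y))
    return moves

for n in range(1, 8):
    assert len(iterative_hanoi(n)) == 2 ** n - 1
```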
https://codegolf.meta.stackexchange.com/questions/25112/can-a-prolog-predicate-have-multiple-choice-points-if-it-always-chooses-the-corr
# Can a Prolog predicate have multiple choice points if it always chooses the correct solution first?
This stems from JoKing's answer to Sums of Consecutive Integers, where it can be seen that the predicate always unifies Z with the correct answer first.
However, checking for all values, we see that it unifies Z with all shorter solutions after finding the longest solution: Try it online!. Can this be allowed?
• Shouldn't this just be posted as an answer to Default for Code Golf: Input/Output methods for people to vote on? Sep 12 at 12:16
• @pxeger I don't think this really counts as an input/output method, more as a language-specific functionality question. Potentially akin to "Can a Python answer exit with an error after outputting?" Sep 12 at 12:46
• This is not an answer to the actual question, but as a Brachylog user I want to opine that when a Brachylog predicate is called as a full program, this behavior should be allowed. You give it input, it gives you the desired output. Not entirely sure how or whether that applies to Prolog, though. Sep 12 at 16:21
# Yes
In my experience, even legitimate (i.e. not for code golf) Prolog predicates often have extraneous choice points that you don’t need. Getting rid of them is not always easy, especially if you want to keep your code "pure".
Of course you can add a cut (!) after calling your predicate. A cleaner solution is to wrap your predicate call in once/1, which will make your predicate succeed at most once.
I don’t see the point in imposing such additions to answers, as they add no substance to golfing. In fact, if we were to impose discarding extraneous choice points, we might discourage alternative solutions that take shortcuts to get the right answer, at the expense of extraneous or even "wrong" additional choice points. In my opinion we should actually encourage those approaches.
### Historically accepted
I have never seen a Prolog answer’s validity be rejected based on the additional choice points it created.
Note that the same applies to Brachylog: other choice points can be queried with meta-predicates, failure loops, or Prolog’s REPL. Answers with extraneous choice points have never been debated either.
• I don't think you are right about this being historically accepted. We have been holding ourselves to this standard for Curry answers, and loosening would make a very significant difference for Curry golfing which often has to go to extra lengths to avoid these. I also think that this loosening would make Curry golf a lot less interesting as it would make Curry a lot more like Haskell since Haskell always takes the first branch point. Curry doesn't have cut so working to prevent extraneous answers requires creativity.
– Wheat Wizard Mod Sep 12 at 14:49
• @WheatWizard But it doesn’t require creativity in Prolog since you have cut, so no one ever bothered about it. Sep 13 at 7:42
# Prolog isn't the only logic language
Curry, which was the language of the month back in April, is also a logic language to which any consensus here would apply.
## Curry
If we look at Curry we can see that golfers are already operating under the assumption that you must return only correct answers. Here are two examples:
1, 2
While these are examples where the difference is small, this is not, in general, the case. Curry doesn't have Prolog's cut operator, so an answer that meets the relaxed standard cannot be trivially modified to meet the strict standard. Sometimes you just have to rework it.
Curry KiCS2 has a specific flag you can pass it which takes the first path. Alephalpha points to a few answers that explicitly use this flag:
1, 2
These answers clearly state the use of the flag and are marked as solutions in the language "Curry (KICS2) + :set +first". So it certainly seems like we have been operating under the assumption that answers must only return the correct answer in Curry.
And that's great for Curry, because half the time the answer that meets the relaxed standard is just the Haskell answer. Haskell is very similar to Curry in many ways, but it doesn't have non-determinism. It just follows the first path without backtracking. Often a Haskell answer will find the right answer in Curry, but have a lot of extraneous paths. Requiring cleanup is one of the things that makes Curry golf unique and sets it apart from Haskell.
In the examples where it's not the same as Haskell it is still possible to post it, but it goes in its own special category. That seems best because really neither answer is necessarily better or more interesting. It really ought to go in its own category.
Taking this all together it seems like the current requirements are good for Curry golf, they allow diversity and creativity in Curry golf. And it's not a surprise that Curry golfers have defaulted to this behavior.
## Back to Prolog
I think Fatalize is right about how this affects Prolog. Most often getting the first answer to be the only answer just means inserting a cut in the right place to get a single solution. Which is a minor annoyance. It doesn't make a big difference.
This doesn't exactly paint a clear picture for me either way. It seems like Prolog golf will remain much the same going down either path.
As a Prolog golfer and a Curry golfer myself I can say that when looking holistically at both, it's clear that the requirement of only outputting the correct answer is worth it overall.
There are of course other logic programming languages not discussed here. I think Brachylog is the most popular one on the site. I encourage programmers and golfers in those to share how this would affect golf in their language.
• In fact, I have two Curry answers (1 2) that do not meet the strict standard. In those answers, I added the :set +first flag so that only the first value is outputted, but it may still follow other paths in backtracking. Sep 13 at 7:45
• Brachylog's situation is basically the same as Prolog's: if you want the first solution only, you can usually just add a cut (+1 byte in Brachylog). Sep 13 at 16:39
# It depends
After reading everyone's arguments, I think Wheat Wizard's answer makes sense. This is a concurring opinion that looks at things from a Brachylog perspective.
## Predicates and unification
Like in Prolog, a Brachylog predicate typically produces output by unifying a variable with some number of values, one after another. This type of output is considered to be a "generator" in our default I/O methods. For challenges that ask for a list, I and others have written Brachylog predicates that unify a variable with the first element, then the second element, and so on.
I don't think we should try to have our cake and eat it too. If unification creates a generator, and a generator is equivalent to a list, then a generator cannot also be equivalent to a single value whenever it's convenient to treat it that way. We should either use findall when we want multiple values, or cut when we want a single value. Since there's already a consensus that unification means multiple values, let's stick with that.
To return a single value from a predicate by unification, we should make sure the predicate only returns one value and then stops. (If we think of unification as a generator, this is equivalent to returning a value in a singleton list.)
## Full programs
As I said in a comment, I don't think the same rules should apply if a submission is a full program rather than just a predicate. A full program that takes input, produces output, and halts should be judged on the actual output it produces. The fact that a Brachylog program is often a single predicate, and the fact that the predicate could have produced more results if it were called multiple times, is irrelevant in this case.
However, there are some caveats:
First, in Prolog, it's possible to run a program in such a way that it produces one output and then waits for the user to indicate whether they want more or not. This should not count as producing a single output; it is analogous to the generator case. That's why I included "and halts" in my full-program description above. If there is a flag that makes the program output the first result and halt, like Curry has, then that's fine.
Second, most Brachylog solutions are not full programs. I was a bit surprised to realize this, but reading through this comprehensive answer, it makes sense. TL;DR: If you pass an input to a Brachylog program and it prints 42 to stdout, that's a full program that outputs 42. If you pass an input and a variable name X to a Brachylog program and it prints X = 42 to stdout, that's not a full program that outputs 42; it's a predicate that returns 42.
The same reasoning applies to the Prolog answer that led to this meta discussion: its input is 9+A., and its output is A = [2, 3, 4]. That's not a full program solution, it's a predicate solution. It could be made into a full program if it read input from stdin and wrote output to stdout, but of course that would cost more bytes.
## Predicates and writing to stdout
Here's where it gets tricky: we allow functions to output to stdout. What should be done with a predicate that outputs to stdout? Maybe if it were called more than once, it would output more than one thing. Do we consider its output to be what it prints when it's called the first time, or what it prints when it's called until it fails? I'm not sure.
This is somewhat analogous to printing within a generator expression in a language like Python, except that in Python, you won't get any output if you return the generator untouched, whereas in Prolog and Brachylog, you'll get the first pass of output.
(Again, note that if we're talking full-program, the output should be whatever the program outputs, which is probably going to be the result of calling the predicate once.)
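The generator analogy above can be made concrete with a toy (my own illustration, using the [2, 3, 4] example from the linked challenge):

```python
def answers():
    # toy model of a predicate that unifies its output variable with the
    # intended answer first, then with extraneous "choice points"
    yield [2, 3, 4]   # the intended (longest) run summing to 9
    yield [4, 5]      # shorter runs that a backtracking caller also sees
    yield [9]

first = next(answers())       # what the relaxed standard judges
everything = list(answers())  # what exhaustive backtracking produces
print(first, len(everything))
```

Under the relaxed standard only `first` matters; under the strict standard `everything` would have to be a singleton.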
• I think it might be okay to "have our cake and eat it too". For example, Python solutions to "is the input list empty" and "what is the length of this list" can both be len, since the output can be coerced in two different ways, boolean and numeric. In Prolog, this could be seen as a generator being "coerced" to the first unified value by using once or just using it in a non-backtracking manner.
– Jo King Mod Sep 15 at 6:13
• @JoKing Hmm. I could be persuaded by that argument, but my main question is consistency. Would you support other languages returning a generator that possibly yields multiple values, as long as the first value is the desired result? Or what if the desired result is the first element in a list containing multiple values? Sep 15 at 16:32
• I wouldn't support other langs in the same way unless generators can be implicitly coerced to a single value. For example Lua has iterators that can be called to get the next value, but if the iterator just returned a value when used alone I would allow it. As a general argument, since we allow implicit type conversions in dynamic typing langs like Python and Perl, I believe this is close enough to count.
– Jo King Mod Sep 16 at 0:57
# No
Despite legitimate Prolog predicates having extraneous choice points, they have more reason to leave those choice points in (generality, for example).
Prolog answers have been allowed to output variables that can assume multiple values. This is treated akin to returning a list of the required outputs. With this in mind, if a predicate unifies a variable with more than one value, and the requirement is a single, distinct value, returning all solutions like in the mentioned answer is invalid. Wrong answers should not be encouraged.
Even despite the acceptance of previous Prolog and Brachylog answers, it is wrong to allow answers to sacrifice correctness for cleverness. There can still be clever Prolog answers within this restriction, it will just need a few extra bytes for a cut or a once.
The historic acceptance of multiple choice points in cases where there should be a single answer produced is a problem, and the lack of discussion regarding it is not a point of merit toward this matter.
# A thought
I don't really know much about Prolog/logic languages, but this seems to me like kinda a loophole. Similar to outputting [true, false] in a decision-problem challenge and saying that "one of those is correct."
• This is about outputting the correct answer first. So it wouldn't be valid to do [true, false] because true is always first, then. Sep 18 at 2:14
http://www.solutioninn.com/assume-that-12-22-2-calculate-the-pooled
|
# Question
Assume that $\sigma_1^2 = \sigma_2^2 = \sigma^2$. Calculate the pooled estimator of $\sigma^2$ for each of the following cases:
a. $s_1^2 = 200$, $s_2^2 = 180$, $n_1 = n_2 = 25$
b. $s_1^2 = 25$, $s_2^2 = 40$, $n_1 = 20$, $n_2 = 10$
c. $s_1^2 = .20$, $s_2^2 = .30$, $n_1 = 8$, $n_2 = 12$
d. $s_1^2 = 2{,}500$, $s_2^2 = 1{,}800$, $n_1 = 16$, $n_2 = 17$
e. Note that the pooled estimate is a weighted average of the sample variances. To which of the variances does the pooled estimate fall nearer in each of cases a–d?
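The pooled estimator referred to in part e is $s_p^2 = \dfrac{(n_1-1)s_1^2+(n_2-1)s_2^2}{n_1+n_2-2}$; a short script (the variable names are mine) evaluates the four cases:

```python
def pooled_variance(s1_sq, s2_sq, n1, n2):
    """Pooled estimator of the common variance sigma^2: a weighted
    average of the two sample variances, weighted by their
    degrees of freedom (n_i - 1)."""
    return ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)

cases = {
    "a": (200, 180, 25, 25),
    "b": (25, 40, 20, 10),
    "c": (0.20, 0.30, 8, 12),
    "d": (2500, 1800, 16, 17),
}
for label, args in cases.items():
    print(label, pooled_variance(*args))
```

With equal sample sizes (case a) the pooled estimate is the plain average of the two variances; otherwise it falls nearer the variance from the sample with the larger n.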
https://tex.stackexchange.com/questions/248192/use-authoryearstyle-in-biblatex-with-year-label-letter-for-same-first-author?noredirect=1
# use authoryearstyle in biblatex with year label letter for same first author [duplicate]
I use \usepackage[backend=biber, style=authoryear-comp, maxcitenames=2]{biblatex} for my bibliography, which works perfectly fine. However, when an author appears multiple times biblatex also shows the name of the second author before printing the year. This even happens in cases of different year entries. How is it possible to suppress the printing of the second author and to label the citation with the year 2015a, 2015b instead?
Here is my bibfile:
@article{first,
title={Paper 1},
author={Ambigous, Alan and John Doe and others},
year={2015},
}
@article{second,
title={Paper 2},
author={Ambigous, Alan and Richard Roe and others},
year={2015},
}
@article{third,
title={Paper 3},
author={Miles, Juliane and Paul Waterman and others},
year={2000},
}
@article{fourth,
title={Paper 4},
author={Miles, Juliane and John Smith and others},
year={1999},
}
With
\documentclass{article}
\usepackage[backend=biber, style=authoryear-comp, maxcitenames=2]{biblatex}
\begin{document}
\cite{first, second, third, fourth}
\printbibliography
\end{document}
where quorum.bib is the bib file you wrote, I get
Ambigous, Doe, et al. 2015a,b; Miles, Smith, et al. 1999, 2000
That is, 2015a, 2015b as you requested. So I don't understand the problem.
• I wanted to cite the sources independently with \cite{first} \cite{second} \cite{third} and \cite{fourth}, which did not work with the desired output. I specified uniquename=false and uniquelist=false, which in fact does work now. – Quorum Sensing Jun 2 '15 at 15:27
• Using \cite{first} \cite{second} in my document above yields Ambigous, Doe, et al. 2015a Ambigous, Roe, et al. 2015b. For me it's still not clear what problem you are describing, probably because you never made a MWE. – pst Jun 2 '15 at 16:04
• @QuorumSensing do you want something like "Ambigous et al. 2015a,b; Miles et al. 1999, 2000 Ambigous et al. 2015a Ambigous et al. 2015b"? – DG' Jun 2 '15 at 20:03
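As the comments conclude, setting uniquename=false and uniquelist=false suppresses the extra co-author names, so the year labels 2015a/2015b alone disambiguate. A minimal preamble sketch (the file name quorum.bib follows the answer above; untested against a full bibliography):

```latex
\documentclass{article}
% uniquename=false / uniquelist=false stop biblatex from printing extra
% co-author names to disambiguate entries with the same first author;
% the year labels 2015a / 2015b then do the disambiguation instead.
\usepackage[backend=biber, style=authoryear-comp, maxcitenames=2,
            uniquename=false, uniquelist=false]{biblatex}
\addbibresource{quorum.bib} % the .bib entries from the question
\begin{document}
\cite{first} \cite{second} \cite{third} \cite{fourth}
\printbibliography
\end{document}
```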
https://eng.libretexts.org/Under_Construction/Stalled_Project_(Not_under_Active_Development)/Map%3A_Electronic%2C_Magnetic_and_Optical__Properties_of_Materials_(F%C3%B6ll)/2%3A_Conductors/2.1%3A_Ohm's_Law_and_Theory_of_Charge_Transport/2.1.1%3A_Ohms_Law_and_Materials_Properties
# 2.1.1: Ohms Law and Materials Properties
In this subchapter we will give an outline of how to progress from the simple version of Ohms "Law", which is a kind of "electrical" definition for a black box, to a formulation of the same law from a materials point of view employing (almost) first principles.
• In other words: The electrical engineering point of view is: If a "black box" exhibits a linear relation between the (dc) current I flowing through it and the voltage U applied to it, it is an ohmic resistor.
• That is illustrated in the picture: As long as the voltage-current characteristic you measure between two terminals of the black box is linear, the black box is called an (ohmic) resistor.
• Neither the slope of the I-U-characteristics matters, nor the material content of the box.
The Materials Science point of view is quite different. Taken to the extreme, it is:
• Tell me what kind of material is in the black box, and I tell you:
1. If it really is an ohmic resistor i.e. if the current relates linearly to the voltage for reasonable voltages and both polarities.
2. What its (specific) resistance will be, including its temperature dependence.
3. And everything else of interest.
In what follows we will see, what we have to do for this approach. We will proceed in 3 steps
• In the first two steps, contained in this sub-chapter we simply reformulate Ohms law in physical quantities that are related to material properties. In other words, we look at the properties of the moving charges that produce an electrical current. But we only define the necessary quantities; we do not calculate their numerical values.
• In the third step - which is the content of many chapters - we will find ways to actually calculate the important quantities, in particular for semiconductors. As it turns out, this is not just difficult with classical physics, but simply impossible. We will need a good dose of quantum mechanics and statistical thermodynamics to get results.
### 1. Step: Move to Specific Quantities
First we switch from current I and voltage U to the current density j and the field strength E, which are not only independent of the (uninteresting) size and shape of the body, but, since they are vectors, carry much more information about the system.
• This is easily seen in the schematic drawing below.
• Current density j and field strength E may depend on the coordinates, because U and I depend on the coordinates, e.g. in the way schematically shown in the picture to the left. However, for a homogeneous material with constant cross section, we may write
$j=\dfrac{I}{F}$
• with F = cross sectional area. The direction of the vector j would be parallel to the normal vector f of the reference area considered: it also may differ locally. So in full splendor we must write
$\underline{j}(x,y,z)=\dfrac{I(x,y,z)}{F}\cdot\underline{f}$
The "global" field strength is
$E=\dfrac{U}{l}$
• With l = length of the body. If we want the local field strength E(x,y,z) as a vector, we have, in principle, to solve the Poisson equation
$\nabla\cdot\underline{E}(x,y,z)=\dfrac{\rho(x,y,z)}{\varepsilon\varepsilon_0}$
• With ρ(x,y,z) = charge density. For a homogeneous material with constant cross section, however, E is parallel to f and constant everywhere, which again is clear without calculation.
So, to make things easy, for a homogeneous material of length l with constant cross-sectional area F, the field strength E and the current density j do not depend on position - they have the same numerical value everywhere.
• For this case we can now write down Ohms law with the new quantities and obtain
$j\cdot F=I=\dfrac{1}{R}\cdot U=\dfrac{1}{R}\cdot E\cdot l\\\underline{j}=\dfrac{l}{F\cdot R}\cdot\underline{E}$
The fraction $$\dfrac{l}{F \cdot R}$$ obviously (think about it!) has the same numerical value for any homogeneous cube (or homogeneous whatever) of a given material; it is, of course, the specific conductivity σ
$\sigma=\dfrac{1}{\rho}=\dfrac{l}{F\cdot R}$
• and ρ is the specific resistivity. In words: a piece of homogeneous material with specific resistivity ρ, length l, and cross-sectional area F has the resistance $$R=\dfrac{\rho\cdot l}{F}$$; for a 1 cm³ cube, R is numerically just ρ.
• Of course, we will never mix up the specific resistivity ρ with the charge density ρ or general densities ρ, because we know from the context what is meant!
• The specific resistivity obtained in this way is necessarily identical to what you would define as specific resistivity by looking at some rectangular body with cross-sectional area F and length l.
• The specific conductivity has the dimension [σ] = Ω⁻¹cm⁻¹; the dimension of the specific resistivity is [ρ] = Ωcm. The latter is more prominent and you should at least have a feeling for representative numbers by remembering
\begin{align}\rho(\text{metal})&\approx\,2\,\mu\Omega \text{cm}\\\rho(\text{semiconductor}) &\approx\,1\,\Omega \text{cm} \\\rho(\text{insulator})&\approx\,1\,G\Omega \text{cm}\end{align}
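As a quick numerical check of R = ρ·l/F (my own example values, using a representative metal resistivity of the order quoted above):

```python
rho_cu = 1.7e-6  # specific resistivity of copper in Ohm*cm (~2 uOhm*cm)

def resistance(rho, length_cm, area_cm2):
    """R = rho * l / F for a homogeneous conductor
    of constant cross section."""
    return rho * length_cm / area_cm2

# 1 m of household wiring with a 1 mm^2 = 0.01 cm^2 cross section
R = resistance(rho_cu, 100.0, 0.01)
print(R)  # about 0.017 Ohm
```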
Restricting ourselves to isotropic and homogeneous materials restricts σ and ρ to being scalars with the same numerical value everywhere, and Ohms law now can be formulated for any material with weird shapes and being quite inhomogeneous; we "simply" have
$\underline{j}=\sigma\cdot\underline{E}$
Ohms law in this vector form is now valid at any point of a body, since we do not have to make assumptions about the shape of the body.
Take an arbitrarily shaped body with current flowing through it, cut out a little cube (with your "mathematical" knife) at the coordinates $$(x,y,z)$$ without changing the flow of current, and you must find that the local current density and the local field strength obey the equation given above locally.
$\underline{j}(x,y,z)=\sigma\cdot\underline{E}(x,y,z)$
• Of course, obtaining the external current $$I$$ flowing for the external voltage $$U$$ now needs summing up the contributions of all the little cubes, i.e. integration over the whole volume, which may not be an easy thing to do.
Still, we have now a much more powerful version of Ohms law! But we should now harbor a certain suspicion:
• There is no good reason why $$\underline{j}$$ must always be parallel to $$\underline{E}$$. This means that for the most general case $$σ$$ is not a scalar quantity, but a tensor; $$σ = σ_{ij}$$. (There is no good way to write tensors in html; we use the $$ij$$ index to indicate tensor properties.)
• Ohms law then writes
$j_x=\sigma_{xx}\cdot\,E_x+\sigma_{xy}\cdot\,E_y+\sigma_{xz}\cdot\,E_z\\j_y=\sigma_{yx}\cdot\,E_x+\sigma_{yy}\cdot\,E_y+\sigma_{yz}\cdot\,E_z\\j_z=\sigma_{zx}\cdot\,E_x+\sigma_{zy}\cdot\,E_y+\sigma_{zz}\cdot\,E_z$
For anisotropic, inhomogeneous materials you have to take the tensor, and its components will all depend on the coordinates - that is the most general version of Ohms law: j is still linear in E (and not, for example, proportional to $$e^{\text{const.} \cdot \underline{E}}$$).
• Note that this is not so general as to be meaningless: We still have the basic property of Ohms law: The local current density is directly proportional to the local field strength.
• We have a new thing, however: The current density vector $$\underline{j}$$ no longer points in the direction of the electrical field $$E$$. In other words: The vector response of an anisotropic material to some disturbance or "driving force" still produces a vector, but with a direction and amplitude determined by a tensor that describes the material properties. While this used to be a somewhat exotic material behavior for practitioners or engineers in the past, it is quickly becoming mainstream now, so you might as well acquaint yourself with tensor stuff right now. This link gives a first overview.
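A small numeric sketch of the tensor relation (the σ values below are made up for illustration): with off-diagonal components, j acquires components perpendicular to E:

```python
# Hypothetical anisotropic conductivity tensor sigma_ij (Ohm^-1 cm^-1);
# the off-diagonal entries couple different field components.
sigma = [[5.0, 1.0, 0.0],
         [1.0, 3.0, 0.0],
         [0.0, 0.0, 2.0]]

E = [1.0, 0.0, 0.0]  # field along x (V/cm)

# j_i = sum_k sigma_ik * E_k  -- the component form of Ohms law above
j = [sum(s_ik * E_k for s_ik, E_k in zip(row, E)) for row in sigma]
print(j)  # [5.0, 1.0, 0.0]: j has a y component although E points along x
```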
Our goal now is to find a relation that allows to calculate $$σ_{ij}$$ for a given material (or material composite); i.e. we are looking for
• $$σ_{ij} = σ_{ij}$$ (material, temperature, pressure, defects... )
### 2. Step: Describe σij in Terms of the Carrier Properties
Electrical current needs mobile charged "things", or carriers. Note that we do not automatically assume that the charged "things" are always electrons. Anything charged and mobile will do.
What we want to do now is to express $$σ_{ij}$$ in terms of the properties of the carriers present in the material under investigation.
• To do this, we will express an electrical current as a "mechanical" stream or current of (charged) particles, and compare the result we get with Ohms law.
First, let's define an electrical current in a wire in terms of the carriers flowing through that wire. There are three crucial points to consider:
1. The external electrical current as measured with an ammeter is the result of the net current flow through any cross section of a (uniform) wire.
• In other words, the measured current is proportional to the difference of the number of carriers of the same charge sign moving from the left to right through a given cross sectional area minus the number of carriers moving from the right to the left.
• In short: the net current is the difference of two partial currents flowing in opposite directions:
• Do not take this point as something simple! We will encounter cases where we have to sum up 8 partial currents to arrive at the externally flowing current, so keep this in mind!
2. In summing up the individual current contributions, make sure the signs are correct. The rule is simple:
• The electrical current is (for historical reasons) defined as flowing from + to –. For a particle current this means:
• In words: A technical current $$I$$ flowing from + to – may be obtained by negatively charged carriers flowing in the opposite direction (from – to +), by positively charged carriers flowing in the same direction, or from both kinds of carriers flowing at the same time in the proper directions.
• The particle currents of differently charged particles then must be added! Conversely, if negatively charged carriers flow in the same directions as positively charged carriers, the value of the partial current flowing in the "wrong" direction must be subtracted to obtain the external current.
3. The flow of particles through a reference surface as symbolized by one of arrows above, say the arrow in the +x -direction, must be seen as an average over the x -component of the velocity of the individual particles in the wire.
• Instead of one arrow, we must consider as many arrows as there are particles and take their average. A more detailed picture of a wire at a given instant thus looks like this
• An instant later it looks entirely different in detail, but exactly the same on average!
• If we want to obtain the net flow of particles through the wire (which is obviously proportional to the net current flow), we could take the average of the velocity components <v+x> pointing in the +x direction (to the right) on the left hand side, and subtract from this the average <v–x> of the velocity components pointing in the –x direction (to the left) on the right hand side.
• We call this difference in velocities the drift velocity vD of the ensemble of carriers.
• If there is no driving force, e.g. an electrical field, the velocity vectors are randomly distributed and <v+x> = <v–x>; the drift velocity and thus net current is zero as it should be.
Average properties of ensembles can be a bit tricky. Let's look at some properties by considering the analogy to a localized swarm of summer flies "circling" around like crazy, so that the ensemble looks like a small cloud of smoke. This link provides a more detailed treatment of averaging vectors.
• First we notice that while the individual fly moves around quite fast, its vector velocity vi averaged over time t, <vi>t, must be zero as long as the swarm as an ensemble doesn't move.
• In other words, the flies, on average, move just as often to the left as to the right, etc. The net current produced by all flies at any given instance or by one individual fly after sufficient time is obviously zero for any reference surface.
In real life, however, the fly swarm "cloud" often moves slowly around - it has a finite drift velocity which must be just the difference between the average movement in drift direction minus the average movement in the opposite direction.
• The drift velocity thus can be identified as the proper average that gives the net current through a reference plane perpendicular to the direction of the drift velocity.
• This drift velocity is usually much smaller than the average magnitude of the velocity <v> of the individual flies. Its value is the difference of two large numbers - the average velocity of the individual flies in the drift direction minus the average velocity of the individual flies in the direction opposite to the drift direction.
Since we are only interested in the drift velocity of the ensemble of flies (or in our case, carriers) we may now simplify our picture as follows:
We now equate the current density with the particle flux density by the basic law of current flow:
• Current density j = Number N of particles carrying the charge q flowing through the cross sectional area F (with the normal vector f and |f| = 1) during the time interval t, or
$\underline{j}=\dfrac{q\cdot N}{F\cdot t}\cdot\underline{f}$
• In scalar notation, because the direction of the current flow is clear, we have
$j=\dfrac{q\cdot N}{F\cdot t}$
The problem with this formula is N, the number of carriers flowing through the cross section F every second.
• N is not a basic property of the material; we certainly would much prefer the carrier density n = N/V. The problem now is that we have to choose the volume V = F · l in such a way that it contains just the right number N of carriers.
• Since the cross section F is given, this means that we have to pick the length l in such a way, that all carriers contained in that length of material will have moved across the internal interface after 1 second.
• This is easy! The trick is to give l just that particular length that allows every carrier in the defined portion of the wire to reach the reference plane, i.e.
$l=v_D \cdot t$
• This makes sure that all carriers contained in this length, will have reached F after the time t has passed, and thus all carriers contained in the volume V = F· vD · t will contribute to the current density. We can now write the current equation as follows:
$j=\dfrac{q\cdot N}{F\cdot t}=\dfrac{q\cdot n\cdot V}{F\cdot t}=\dfrac{q\cdot n\cdot F\cdot l}{F \cdot t}=\dfrac{q \cdot n \cdot F \cdot v_D \cdot t}{F \cdot t}$
This was shown in excessive detail because now we have the fundamental law of electrical conductivity (in obvious vector form)
$\underline{j}=q \cdot n \cdot \underline{v}_D$
This is a very general equation relating a particle current (density) via its drift velocity to an electrical current (density) via the charge q carried by the particles.
• Note that it does not matter at all, why an ensemble of charged particles moves on average. You do not need an electrical field as driving force anymore. If a concentration gradient induces a particle flow via diffusion, you have an electrical current too, if the particles are charged.
• Note also that electrical current flow without an electrical field as primary driving force as outlined above is not some odd special case, but at the root of most electronic devices that are more sophisticated than a simple resistor.
• Of course, if you have different particles, with different density, drift velocity, and charge, you simply sum up the individual contributions as pointed out above.
All we have to do now is to compare our equation from above to Ohms law:
$\underline{j}=q \cdot n \cdot \underline{v}_D = \sigma \cdot \underline{E}$
• We then obtain
$\sigma = \dfrac{q \cdot n \cdot v_D}{E}=\text{constant}$
If Ohms law holds, σ must be a constant, and this implies by necessity
$\dfrac{v_D}{E}=\text{constant}$
• And this is a simple, but far reaching equation saying something about the driving force of electrical currents (= electrical field strength E) and the drift velocity of the particles in the material.
• What this means is that if vD/E = const. holds for any (reasonable) field E, the material will show ohmic behavior. We have a first condition for ohmic behavior expressed in terms of material properties.
• If, however, vD/E is constant (in time) for a given field, but with a value that depends on E, we have σ = σ(E); the behavior will not be ohmic!
The requirement vD/E = const. for any electrical field thus requires a drift velocity in field direction for the particle, which is directly proportional to E. This leads to a simple conclusion:
Since vD/E = constant must obtain for all (ohmic) materials under investigation, we may give it a name:
$\dfrac{v_D}{E}=\mu=\text{Mobility}=\text{Material constant}$
• The mobility µ of the carriers has the unit
[µ] = (m/s)/(V/m) = m²/(V·s).
• The mobility µ (German: Beweglichkeit) then is a material constant; it is determined by the "friction", i.e. the processes that determine the average velocity for carriers in different materials subjected to the same force q · E.
• Friction, as we (should) know, is a rather unspecified term, but always describing energy transfer from some moving body to the environment.
• Thinking ahead a little bit, we might realize that µ is a basic material constant even in the absence of electrical fields. Since it is tied to the "friction" a moving carrier experiences in its environment - the material under consideration - it simply expresses how fast carriers give up surplus energy to the lattice; and it must not matter how they got the surplus energy. It is therefore no surprise if µ pops up in all kinds of relations, e.g. in the famous Einstein - Smoluchowski equation linking diffusion coefficients and mobility of particles.
We now can write down the most general form of Ohms law applying to all materials meeting the two requirements: n = const. and µ = const. everywhere. It is expressed completely in particle (= material) properties.
$\sigma = q \cdot n \cdot \mu$
• The task is now to calculate n and µ from first principles, i.e. from only knowing what atoms we are dealing with in what kind of structure (e.g. crystal + crystal defects)
• This is a rather formidable task since σ varies over an extremely wide range, cf. a short table with some relevant numbers.
In order to get acquainted with the new entity "mobility", we do a little exercise:
Exercise 2.1-1: Derive and Discuss numbers for µ
Calculate numerical values for the mobility µ of some typical metals.
• Take typical (metal) values for the specific conductivity σ and electron concentration n and then calculate typical numbers for the mobility µ - do not take the values from the table! If you do not understand the German link, use this one.
• Consider typical field strengths for metals by picking suitable current densities, and then derive typical values for the drift velocity vD.
Solution to Exercise 2.1-1
###### Derive and Discuss numbers for µ
First Task: Derive numbers for the mobility µ.
• First we need typical conductivities and electron densities in metals, which we can take from the table in the link.
• At the same time we expand the table a bit
| Material | ρ [Ωcm] | σ [Ω⁻¹cm⁻¹] | Density d [10³ kg·m⁻³] | Atomic weight w [1 u = 1.66·10⁻²⁷ kg] | n = d/w [m⁻³] |
|---|---|---|---|---|---|
| Silver Ag | 1.6·10⁻⁶ | 6.2·10⁵ | 10.49 | 107.9 | 5.85·10²⁸ |
| Copper Cu | 1.7·10⁻⁶ | 5.9·10⁵ | 8.92 | 63.5 | 8.46·10²⁸ |
| Lead Pb | 21·10⁻⁶ | 4.8·10⁴ | 11.34 | 207.2 | 3.3·10²⁸ |
For the mobility µ we have the equation
$\mu=\dfrac{\sigma}{q \cdot n}$
With q = elementary charge = 1.60 · 10⁻¹⁹ C, and converting σ to SI units (6.2·10⁵ Ω⁻¹cm⁻¹ = 6.2·10⁷ Ω⁻¹m⁻¹, to match n given in m⁻³), we obtain, for example for µAg
$\mu_{Ag} = \dfrac{6.2\cdot 10^7}{1.6\cdot 10^{-19}\cdot 5.85\cdot 10^{28}}\,\dfrac{\Omega^{-1}\text{m}^{-1}}{\text{C}\cdot\text{m}^{-3}}=6.62\cdot 10^{-3}\,\dfrac{\text{m}^2}{\text{C}\cdot\Omega}=66.2\,\dfrac{\text{cm}^2}{\text{C}\cdot\Omega}$
The unit is a bit strange, but remembering that [C] = [A · s] and [Ω] = [V/A], we obtain
$\mu_{Ag} = 66.2 \dfrac{\text{cm}^2}{\text{Vs}}\\\mu_{Cu} = 43.6 \dfrac{\text{cm}^2}{\text{Vs}}\\\mu_{Pb} = 9.1 \dfrac{\text{cm}^2}{\text{Vs}}$
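These values can be reproduced with a few lines (a sketch; σ is converted from Ω⁻¹cm⁻¹ to SI units first, and the carrier densities are taken from the table above):

```python
q = 1.60e-19  # elementary charge in C

# sigma in Ohm^-1 cm^-1 and n in m^-3, from the table above
metals = {
    "Ag": (6.2e5, 5.85e28),
    "Cu": (5.9e5, 8.46e28),
    "Pb": (4.8e4, 3.3e28),
}

for name, (sigma_cm, n) in metals.items():
    sigma_si = sigma_cm * 100      # convert to Ohm^-1 m^-1
    mu = sigma_si / (q * n)        # mobility in m^2/(V s)
    print(name, round(mu * 1e4, 1), "cm^2/Vs")  # Ag 66.2, Cu 43.6, Pb 9.1
```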
Second Task: Derive numbers for the drift velocity vD by considering a reasonable field strength.
$\mu=\dfrac{v_D}{E}\\\text{or}\\v_D=\mu\cdot E$
So what is a reasonable field strength in a metal?
• Easy. Consider a cube with side length l = 1 cm. Its resistance R is given by
$R=\dfrac{\rho \cdot l}{F}=\rho\,\Omega$
• A Cu or Ag cube thus would have a resistance of about 1.5·10⁻⁶ Ω. Applying a voltage of 1 V, or equivalently a field strength of 1 V/cm, thus produces a current of I = U/R ≈ 650,000 A, or a current density j ≈ 650,000 A/cm²
• That seems to be an awfully large current. Yes, but it is the kind of current density encountered in integrated circuits! Think about it!
• Nevertheless, the wires in your house carry at most about 30 A (above that the fuse blows) with a cross section of about 1 mm²; so a reasonable current density is 3,000 A/cm², which we get for about U = 1.5·10⁻⁶ Ω · 3,000 A = 4.5 mV.
• For a rough estimate we then take a field strength of 5 mV/cm and a mobility of 50 cm2/Vs and obtain
$v_D=50\cdot 5\,\dfrac{\text{mV}\cdot\text{cm}^2}{\text{cm}\cdot V\cdot s}=0.25\dfrac{\text{cm}}{\text{s}}=2.5\dfrac{\text{mm}}{\text{s}}$
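This back-of-the-envelope number is easy to reproduce (plain Python, with the rough values chosen above):

```python
mu = 50.0    # mobility in cm^2/(V s), typical metal value from above
E = 5e-3     # field strength in V/cm (the ~5 mV/cm estimated above)

v_d = mu * E             # drift velocity in cm/s
print(v_d * 10, "mm/s")  # 2.5 mm/s
```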
That should come as some surprise! The electrons only have to move v e r y s l o w l y on average in the current direction (or rather, due to sign conventions, against it).
• Is that true, or did we make a mistake?
• It is true! However, it does not mean, that electrons will not run around like crazy inside the crystal, at very high speeds. It only means that their net movement in current anti-direction is very slow.
• Think of a single fly in a fly swarm. Even better, read the module that discusses this analogy in detail. The flies are flying around at high speed like crazy - but the fly swarm is not going anywhere as long as it stays in place. There is then no drift velocity and no net fly current!
• However, if we keep the sign, e.g. write σ = – e · n · µe for electrons carrying the charge q = – e; e = elementary charge, we now have an indication if the particle current and the electrical current have the same direction (σ > 0) or opposite directions (σ < 0) as in the case of electrons.
• But it is entirely a matter of taste if you like to schlepp along the signs all the time, or if you like to fill 'em in at the end.
Everything more detailed than this is no longer universal but specific to certain materials. The remaining task is to calculate n and µ for given materials (or groups of materials).
• This is not too difficult for simple materials like metals, where we know that there is one (or a few) free electrons per atom in the sample - so we know n to a sufficient approximation. Only µ needs to be determined.
• This is fairly easily done with classical physics; the results, however, are flawed beyond repair: They just do not match the observations and the unavoidable conclusion is that classical physics must not be applied when looking at the behavior of electrons in simple metal crystals or in any other structure - we will show this in the immediately following subchapter 2.1.3.
We obviously need to resort to quantum theory and solve the Schrödinger equation for the problem.
• This, surprisingly, is also fairly easy in a simple approximation. The math is not too complicated; the really difficult part is to figure out what the (mathematical) solutions actually mean. This will occupy us for quite some time.
Questionnaire
https://socratic.org/questions/how-do-you-find-the-zeros-real-and-imaginary-of-y-75x-2-100x-23-using-the-quadra
# How do you find the zeros, real and imaginary, of y = 75x^2 +100x + 23 using the quadratic formula?
Jun 4, 2017
$x = - \frac{2}{3} \pm \frac{\sqrt{31}}{15}$
#### Explanation:
Given: $y = 75 {x}^{2} + 100 x + 23$
Using the quadratic formula requires the equation to be in the form $A {x}^{2} + B x + C = 0$
Quadratic formula: $x = \frac{- B \pm \sqrt{{B}^{2} - 4 A C}}{2 A}$
For the given, $A = 75 , B = 100 , C = 23$:
$x = \frac{- 100 \pm \sqrt{10000 - 4 \cdot 75 \cdot 23}}{2 \cdot 75}$
$x = \frac{- 100 \pm \sqrt{10000 - 6900}}{150} = \frac{- 100 \pm \sqrt{3100}}{150}$
$x = - \frac{10}{15} \pm \frac{\sqrt{25 \cdot 4 \cdot 31}}{150} = - \frac{10}{15} \pm \frac{\sqrt{25} \sqrt{4} \sqrt{31}}{150}$
$x = - \frac{2}{3} \pm \frac{10 \sqrt{31}}{150} = - \frac{2}{3} \pm \frac{\sqrt{31}}{15}$
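The result can be sanity-checked numerically; a small helper (not part of the original answer) applying the same formula:

```python
import cmath

def quadratic_roots(a, b, c):
    """Return both roots of a*x^2 + b*x + c = 0 via the quadratic
    formula; cmath.sqrt handles a negative discriminant too."""
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

r1, r2 = quadratic_roots(75, 100, 23)
print(r1, r2)  # both real here: -2/3 +/- sqrt(31)/15
```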
http://electronics.stackexchange.com/questions/50328/how-do-i-implement-lights-out-game-using-logic-gates-or-flip-flops
How do I implement Lights-Out game using logic gates or flip flops?
First, for those unfamiliar with the game, this is how the game works,
The goal of the game is to turn off all the lights (hence the name "Lights Out"). Each press of a button/light inverts its state as well as the states of its north/south/east/west adjacent neighbors, and that's pretty much it.
Now, what I could think of is using SR flip-flops or JK flip-flops, due to their ability to act as storage elements (holding the current state and the next state). But I can't seem to think of a way to actually implement them.
Another idea is that each set of button and its adjacent (NSEW)button/lights will have its own truth table, like this:
But is it possible to have the input variables be the same as the output variables? Are there any other ways to do this?
Excellent diagrams – bhillam Mar 20 '13 at 1:51
The obvious approach would be to use a processor and do all this in firmware.
However, if I really needed to do this with stone knives and bear skins for some reason, I'd dedicate a toggling flip-flop to each square. The flip flop of each square would be toggled by the press of its button or either of the four neighboring buttons. Of course those button presses need to be de-bounced. Again, this would be easier in firmware.
A hardware solution wouldn't be all that complex, but everything would be replicated 25 times, making it large and tedious to build.
Apparently the description above is not clear enough. Here is a diagram of what is in each cell:
The other 4 inputs to the NAND gate are driven from the debounced signals of the 4 surrounding buttons that are also supposed to toggle the state of this square. Likewise, the debounced signal from this button also goes to one of the NAND gate inputs of each of the 4 surrounding cells.
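The per-cell toggle behavior described above is easy to model in software before committing to hardware. The sketch below (the 5×5 size and the indexing are illustrative assumptions) treats each cell as a T flip-flop that is toggled by its own button and by its four orthogonal neighbors:

```python
# A minimal software model of the per-cell toggle logic: each press
# toggles the pressed cell and its N/S/E/W neighbors, just as each
# debounced button drives the toggle input of five flip-flops.
N = 5

def press(board, r, c):
    """Toggle cell (r, c) and its four orthogonal neighbors in place."""
    for dr, dc in ((0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < N and 0 <= cc < N:
            board[rr][cc] ^= 1   # XOR models the T flip-flop toggling

board = [[0] * N for _ in range(N)]
press(board, 2, 2)          # a center press lights a plus-shaped pattern
print(sum(sum(row) for row in board))   # 5 cells are now on

press(board, 2, 2)          # pressing again toggles everything back off
assert sum(sum(row) for row in board) == 0
```

A corner press only reaches three cells, since two of its neighbors fall off the board; the bounds check above handles that the same way the missing edge connections would in hardware.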
This sounds like the most feasible thing to do. I'd use TFFs and tie all the T inputs to "1". Then I'd have a SPDT momentary switch for each button. Tie one throw to "0", one throw to "1", and then the pole to the corresponding TFF clock inputs. Then, when you press a switch, it will toggle the surrounding flip-flops by generating a single pos/neg edge. – Shamtam Dec 1 '12 at 18:30
@Shamtam: Yes, that's one way of debouncing if you have SPDT switches. Most pushbuttons however are just normally open SPST. – Olin Lathrop Dec 1 '12 at 20:54
I guess I'd have to use SPDT switch for debouncing, whether pushbutton or not. I get now how to connect the inputs of this game, but what I don't get is how to connect the outputs to the LEDs. I mean, it can't be just simple output (Q) to the LED and its neighbors the complement(Q') right? Also, another question, do I need to use the clock signal input of the TFF? If so, how? – Julien Nicolas Dec 3 '12 at 12:30
The flip-flop for each cell drives its LED directly. The logic having to do with neighboring cells enters into the input of the flip-flop, but the output stays local to the cell. No, SPDT switches are not required for debouncing. There are various techniques for debouncing a single signal like that from a SPST switch. – Olin Lathrop Dec 3 '12 at 14:51
No, you don't get the logic. Normally, the debounced outputs are high, so all inputs to the NAND gate are high, which drives the output low. When any button is pressed, that NAND get input goes low, making the NAND output go high. This low to high edge causes the FF to toggle its state. – Olin Lathrop Dec 4 '12 at 16:51
I would say that T flip flops would probably be the easiest as you can toggle their output state with a single input. You could use a single flip flop for each LED and with the input tied to your button and the output tied to your LED. Then you could have each button tied to the inputs of the 4 adjacent flip flops in order to toggle their state as well.
If you wanted to use JK flip flops, you can make T flip flops out of them by passing your input to both of the inputs (J and K)
You could expand on your answer by explaining how you connect 5 switches to each flip-flop without having them interfere with each other. Also, what about switch bounce? – Dave Tweed Dec 1 '12 at 18:18
If one wanted to build such a game up to size 7x7 out of discrete logic, the most practical design would probably be to use a circulating shift register to hold the state of the board, and a six-bit counter to keep track of the shift position of the data within the register. Shift data through the shifter in groups of 8 bits to drive a multiplexed display and scan a multiplexed keyboard. Have a seven-bit "flip light" counter which will run any time the bottom six bits are non-zero, or when the state of the top bit matches the state of the presently-decoded button. Flip the state of the current light whenever all of the following apply:
6-bit counter isn't xxx111
6-bit counter isn't 111xxx
7-bit counter isn't xxxxx00
7-bit counter isn't xx00xxx
7-bit counter is 00xx0xx
Note that while a significant amount of logic would be required to decode those counter states, it would be trivial compared to the number of chips required to implement each light separately.
https://www.aimsciences.org/article/doi/10.3934/cpaa.2004.3.367
# American Institute of Mathematical Sciences
September 2004, 3(3): 367-393. doi: 10.3934/cpaa.2004.3.367
## On the Ferromagnetism equations in the non static case
1 MAB, UMR 5466, CNRS, Université Bordeaux 1, 351, cours de la Libération, 33405 Talence cedex, France 2 Université Bordeaux-I, Mathématiques Appliquées, 351 Cours de la Libération, 33405 Talence Cedex 3 LATP, Université de Provence, 39 rue Joliot-Curie, 13453 Marseille cedex 13, France
Received September 2003 Revised February 2004 Published June 2004
In this paper we study the asymptotic behaviour of the solutions of the system coupling the Landau-Lifschitz equations and the Maxwell equations as the exchange coefficient tends to zero. We prove that a boundary layer appears, described by a BKW (WKB) method.
Citation: Gilles Carbou, Pierre Fabrie, Olivier Guès. On the Ferromagnetism equations in the non static case. Communications on Pure & Applied Analysis, 2004, 3 (3) : 367-393. doi: 10.3934/cpaa.2004.3.367
https://bitbucket.org/osrf/gazebo/src/fff39e052cf85bfb507dcbbd7075c26cfd35bf2a/gazebo/physics/PhysicsFactory.hh
# gazebo / gazebo / physics / PhysicsFactory.hh
    /*
     * Copyright 2012 Open Source Robotics Foundation
     *
     * Licensed under the Apache License, Version 2.0 (the "License");
     * you may not use this file except in compliance with the License.
     * You may obtain a copy of the License at
     *
     *     http://www.apache.org/licenses/LICENSE-2.0
     *
     * Unless required by applicable law or agreed to in writing, software
     * distributed under the License is distributed on an "AS IS" BASIS,
     * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     * See the License for the specific language governing permissions and
     * limitations under the License.
     *
     */
    /*
     * Desc: Factory for creating physics engine
     * Author: Nate Koenig
     * Date: 21 May 2009
     */

    #ifndef _PHYSICSFACTORY_HH_
    #define _PHYSICSFACTORY_HH_

    #include <string>
    #include <map>

    #include "physics/PhysicsTypes.hh"

    namespace gazebo
    {
      namespace physics
      {
        /// \addtogroup gazebo_physics
        /// \{

        /// \def PhysicsFactoryFn
        /// \brief Prototype for physics factory functions.
        typedef PhysicsEnginePtr (*PhysicsFactoryFn) (WorldPtr world);

        /// \class PhysicsFactory PhysicsFactory.hh physics/physics.hh
        /// \brief The physics factory instantiates different physics engines.
        class PhysicsFactory
        {
          /// \brief Register everything.
          public: static void RegisterAll();

          /// \brief Register a physics class.
          /// \param[in] _className Name of the physics class.
          /// \param[in] _factoryfn Function pointer used to create a physics
          /// engine.
          public: static void RegisterPhysicsEngine(std::string _className,
                      PhysicsFactoryFn _factoryfn);

          /// \brief Create a new instance of a physics engine.
          /// \param[in] _className Name of the physics class.
          /// \param[in] _world World to pass to the created physics engine.
          public: static PhysicsEnginePtr NewPhysicsEngine(
                      const std::string &_className, WorldPtr _world);

          /// \brief Check if a physics engine is registered.
          /// \param[in] _name Name of the physics engine.
          /// \return True if physics engine is registered, false otherwise.
          public: static bool IsRegistered(const std::string _name);

          /// \brief A list of registered physics classes.
          private: static std::map<std::string, PhysicsFactoryFn> engines;
        };

        /// \brief Static physics registration macro
        ///
        /// Use this macro to register physics engine with the server.
        /// \param[in] name Physics type name, as it appears in the world file.
        /// \param[in] classname C++ class name for the physics engine.
        #define GZ_REGISTER_PHYSICS_ENGINE(name, classname) \
        PhysicsEnginePtr New##classname(WorldPtr _world) \
        { \
          return PhysicsEnginePtr(new gazebo::physics::classname(_world)); \
        } \
        void Register##classname() \
        {\
          PhysicsFactory::RegisterPhysicsEngine(name, New##classname);\
        }
        /// \}
      }
    }
    #endif
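For readers unfamiliar with the registration-factory pattern that `GZ_REGISTER_PHYSICS_ENGINE` implements, here is a rough, hypothetical Python analogue (the names `register_physics_engine`, `ToyEngine`, etc. are illustrative only and are not part of Gazebo):

```python
# A rough Python analogue of the registration-factory pattern above:
# a module-level map from engine name to a factory callable, mirroring
# PhysicsFactory::engines and the GZ_REGISTER_PHYSICS_ENGINE macro.
_engines = {}

def register_physics_engine(class_name, factory_fn):
    """Mirrors PhysicsFactory::RegisterPhysicsEngine."""
    _engines[class_name] = factory_fn

def new_physics_engine(class_name, world):
    """Mirrors PhysicsFactory::NewPhysicsEngine."""
    if class_name not in _engines:
        raise KeyError(f"unregistered physics engine: {class_name}")
    return _engines[class_name](world)

def is_registered(name):
    """Mirrors PhysicsFactory::IsRegistered."""
    return name in _engines

class ToyEngine:
    def __init__(self, world):
        self.world = world

# What the registration macro expands to, written out by hand:
register_physics_engine("toy", lambda world: ToyEngine(world))

engine = new_physics_engine("toy", world="my_world")
assert is_registered("toy") and engine.world == "my_world"
```

The design point is that each engine registers itself by name, so the core factory never needs compile-time knowledge of the concrete engine classes.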
https://electronics.stackexchange.com/questions/36331/arduino-two-more-analog-input
I'm dealing with analog sensors. I have an Arduino LilyPad Simple Board with only 4 analog inputs. Now I need two more sensors, and I've found this schematic on the net:
But I don't understand whether it could add delays to my project, because I'm controlling audio-video equipment and any kind of delay has to be avoided.
It is possible to calculate the actual resistance from the reading but unfortunately, variations in the IDE and Arduino board will make it inconsistent. Be aware that if you change IDE versions or OSs, or use a 3.3V Arduino instead of 5V, or change from a 16 MHz Arduino to an 8 MHz one (like a LilyPad), there may be differences due to how long it takes to read the value of a pin. Usually that isn't a big deal but it can make your project hard to debug if you aren't expecting it!
I'm not a big expert on Arduino, and I don't understand what I have to change for the LilyPad.
This will most certainly add delays, as you are polling the pin in a blocking loop:
while (digitalRead(RCpin) == LOW) { // count how long it takes to rise up to HIGH
  reading++;                        // increment to keep track of time
  if (reading == 30000) {
    // if we got this far, the resistance is so high
    // it's likely that nothing is connected!
    break;                          // leave the loop
  }
}
Assuming that your compiler can optimize the code extremely efficiently, this loop would take something like 4 instructions per iteration, since you have to read the pin, then compare it to a value, then branch based on the outcome (I would be very impressed if you could get this few instructions). Further assume that each one of those instructions takes only 1 clock cycle to execute (this is also probably going to take more, but it helps to bound the problem). This routine could take at most:
$\text{MaxRoutineTime} = \text{LoopIterations} \times \frac{\text{Instructions}}{\text{LoopIteration}} \times \frac{\text{Seconds}}{\text{Instruction}}$
$\text{MaxRoutineTime} = 30{,}000 \text{ Iterations} \times \frac{4 \text{ Instructions}}{\text{LoopIteration}} \times \frac{\text{Seconds}}{8{,}000{,}000 \text{ Instructions}}$
$\text{MaxRoutineTime} = 15 \text{ ms}$
but I assume it will take a little more than that because of the aforementioned allowances.
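The arithmetic behind that bound is easy to reproduce (a throwaway sketch, using the same assumed figures as above):

```python
# Reproducing the worst-case timing bound: 30,000 loop iterations,
# ~4 instructions per iteration, on an 8 MHz (8e6 instructions/s) part.
iterations = 30_000
instructions_per_iteration = 4
clock_hz = 8_000_000

max_routine_time_s = iterations * instructions_per_iteration / clock_hz
print(max_routine_time_s * 1e3, "ms")   # ~15 ms
assert abs(max_routine_time_s - 0.015) < 1e-12
```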
The reason it does not add delays when using an ADC is because the peripheral can be set up to generate interrupts, and you will only be notified when the ADC reading is complete. The time it takes the ADC to complete a measurement is a finite number of clock cycles, so the app note you're referencing is pointing out that if you slow your clock speed, though the ADC will still take the same number of clock cycles to complete a measurement, your measurement will take longer because the clock is slower.
Edit
At first glance from your picture, combined with the fact that you mentioned audio, I thought you were measuring a microphone input. However, it appears that you're just using a Force Sensitive Resistor (FSR), which is just a pressure sensor. If you don't need to know the amount of pressure, only that it was pushed, you don't have to go through all the trouble of finding the exact reading. You can simply use any interrupt-generating digital input if you pick the correct resistor value (in place of the capacitor). You will simply set a digital pin to generate interrupts on rising edges and pick a resistor that will give you a state change (low/high) with the desired amount of force for your touch. Then you'll know each time the FSR was pushed and can handle it in a non-blocking fashion, introducing the least latency possible.
• "at least" or "at most"? At most ( 30000 × n ) / 8Mhz, where n is the number of CPU cycles spent in the loop (I think the order of n=4 cycles. Lowering the cap speeds conversion, but lowers accuracy. – jippie Jul 24 '12 at 7:28
• @jippie - Good catch! A subtle, but important difference. Edited accordingly. – Joel B Jul 24 '12 at 14:01
I never worked with Arduino, but most microcontrollers have interrupt pins, so Arduino must have them too. If you use an interrupt pin to detect that the capacitor has charged up to the threshold level, then you will not have delays and your program can run normally while the measurement takes place. You just need to use a pin that allows interrupts.
1. Assert the pin low
2. Make sure the pin is low enough time to discharge the capacitor
3. Enable interrupt and reset a timer
4. Make the pin an input (the capacitor will start charging)
https://ham.meta.stackexchange.com/questions/42/wed-like-math-support-enabled/64
# We'd like math support enabled
Some Stack Exchange sites, such as Math and Physics, allow LaTeX equations for rendering in questions and answers. For example, writing
$\frac{f_{MHz}}{300} \times \frac{1}{2.54}$
would result in (approximately)
f_MHz    1
----- * ----
 300    2.54
Since it's easy to conceive of amateur radio-related questions which include math in the question as well as in the answers, it seems like having access to this feature would be a really nice thing on ham.SE.
How does the community feel about having that feature enabled?
...and then we end up with question titles like...
• +1 I love to misuse that feature for tables. – Johannes Kuhn Oct 23 '13 at 15:48
• Yup. Our FAQ will be full of those ascii tables if we don't have math support :D – jkj Oct 23 '13 at 19:06
• @JohannesKuhn If you grok LaTeX tables, I'd almost say you deserve the right to do that! :) – a CVn Oct 24 '13 at 8:00
• $\begin{array}{cc}Because&I\\love&tables\end{array}$ -- Will be rendered later. – Johannes Kuhn Oct 24 '13 at 8:02
• Administrative Note — The best way to get MathJax enabled is to demonstrate a need for it by citing questions which would be improved with the feature. See how it was done (for example) on Space Exploration and Astronomy. – Robert Cartaino Oct 24 '13 at 19:03
Yes. There's enough math-based discussion in the ham community that it'd be useful.
• Agreed. The hobby has a very mathematical side to it. – jkj Oct 23 '13 at 19:04
• YES!!! Great proposal.... – Dan Oct 23 '13 at 19:12
There are numerous examples of questions which would or could be better asked, or answered, with math support available. A few that I found just quickly browsing through the site's current questions are:
In addition, as I browsed through my copy of The ARRL Handbook for Radio Amateurs (year 2002 edition, if you must know), I had to flip the book to five pages completely at random before I came upon one with mathematical formulas plainly visible. The next page in that book which I flipped to had a large table spread out over two pages. It took two more random page flips before I came upon some more mathematical formulas. The ARRL Antenna Book had a few tables on the first random page I opened it to, and by the time I'd flipped pages five times I came across a page that had several diagrams and numerous formulas. While these aren't questions asked on the site, these books are common reference works in the amateur radio field and I see no reason to not expect such material to make it into both questions and answers on this site on a regular basis.
• Awesome, thanks! Makes the "argument" easier. – Robert Cartaino Oct 24 '13 at 20:53
• Yes I just copied and pasted screenshots from LaTeX for my answer on that one as well. – Dan Oct 24 '13 at 21:27
• @RobertCartaino I added a few more, and made the answer Community Wiki to encourage others to add even more links. (I hesitate to use the abbreviation CW here because it means something completely different in amateur radio. :)) – a CVn Oct 25 '13 at 17:52
It's here!
$\frac{\pi{}}{16}(\sqrt{1+cos^2(\frac{\pi{}}{32})} + \sqrt{1+cos^2(\frac{3\pi{}}{32})} + ... \sqrt{1+cos^2(\frac{15\pi{}}{32})}) = 1.91009889$
$\Bbb Z[\sqrt 3]\cong \frac{\Bbb Z[x]}{(x^2-3)}$
$$\int_0^e x^{1/x}dx=2e\sum^\infty_{n=0}\sum^n_{k=0}\frac{(-1)^{n-k}}{e^nk!}$$
w00t!
• Great... Now to figure out how to use it xD – Seth Oct 28 '13 at 20:03
• @Seth Never mind how to use it; figure out what Dan is talking about instead! :D – a CVn Oct 29 '13 at 8:24
• And you can even use it in edit comments... check the history on this one! – a CVn Oct 29 '13 at 8:26
• @MichaelKjörling sweet! thanks – Dan Oct 29 '13 at 18:36
https://math.stackexchange.com/questions/3097273/convergence-of-poisson-distribution?noredirect=1
# Convergence of Poisson distribution
Let $$X \sim \mathrm{Pois}(\lambda)$$ and let $$x_1, \cdots , x_n$$ be observations following this distribution. I want to derive the analytical solution of the following limit: $$l(\lambda):=\lim_{ n \to \infty}\frac{1}{n}\sum_{i=1}^{n} \log P(X = x_i).$$
After a few trials, I found a good numerical approximation of the solution: $$l(\lambda)=-\log(\sqrt{17.08\cdot \lambda}).$$
See the graph below, where the dots represent an approximation of the solution obtained by simulating Poisson distributions, while the blue line represents the approximate numerical solution.
x=1:1000
y=sapply(x,function(x) mean(log(dpois(rpois(100000,x),x))))
plot(x,y)
lines(x,-log(sqrt(x*17.08)),col="blue")
(Graph: $l(\lambda)$ as a function of $\lambda$.)
• I don't know how you found this numerical fit, but it is amazingly close to the analytical approximation by Gaussian for $\lambda$ that is "large enough" (say, $\lambda > 10$) for Central Limit Theorem to start kicking in practically. The expression is $l(\lambda) \approx \frac{-1}2(1 + \log(2\pi \lambda))$ – Lee David Chung Lin Feb 2 '19 at 14:30
• See the added graph. I just tried different transformations. Can you elaborate a bit more about the approximation of $l(\lambda)$ for a Gaussian distribution? – Anthony Hauser Feb 2 '19 at 14:45
• @LeeDavidChungLin I'm following your basic idea but I'm not seeing how you get that exact result. – Ian Feb 2 '19 at 14:59
• I'm impressed by how you found this transformation by trial-and-error. Anyway, another quick observation that most likely leads to nowhere is that $\exp[ l_n(\lambda) ] = (\prod_i P(X = x_i))^{1/n}$, the geometric mean of the likelihoods of the observations. – Lee David Chung Lin Feb 3 '19 at 6:02
$$\require{begingroup}\begingroup\renewcommand{\dd}[1]{\,\mathrm{d}#1}$$By the Law of Large Numbers the sample mean converges to the expectation
$$l_n(\lambda)\equiv \frac{1}{n}\sum_{i=1}^{n} g(x_i)~, \quad \text{then}\quad \lim_{ n \to \infty}l_n(\lambda) \to E[g(X)]$$
where the expression of interest here is $$g(x_i) = \log P(X = x_i)$$.
As @Ian mentioned, it boils down to finding $$E[\log(X!)]$$, which seems challenging. I personally am not aware of any justification that a closed form even exists.
Below is the (cheap) asymptotic analysis I mentioned in the comment, where $$\lambda$$ is "large enough".
The sum of independent Poisson distributions again follows a Poisson distribution. Namely, consider independent $$X_i \sim \mathrm{Pois}(\lambda_i)$$ where $$\lambda_i$$ do NOT have to be identical, then $$X \equiv \sum X_i \sim \mathrm{Pois}(\lambda)$$ where $$\lambda = \sum \lambda_i$$.
As soon as one sees the sum of independent random variables, one knows there's a variation of Central Limit Theorem (CLT) that applies.
Let's just keep things simple and consider the iid case where all $$\lambda_i = \lambda_0$$ share a common and FINITE value.
\begin{align} &\text{for}~i = 1 \sim m & X_i &\sim \mathrm{Pois}(\lambda_0) & E[X_i] &= V[X_i] = \lambda_0 & X_i &\perp X_j ~\text{for}~ i \neq j && \\ && X &\equiv \sum_{i = 1}^m X_i & E[X] &= V[X] = m\lambda_0 \equiv \lambda \end{align}
Here the CLT is equivalent to a limit-statement about a "runaway" distribution that moves-and-stretches to infinity ($$\lambda \to \infty$$ as $$m \to \infty$$).
\begin{align} \frac{ \sqrt{m}\left( \displaystyle\frac1m \sum_{i = 1}^m X_i - \lambda_0 \right)}{ \sqrt{\lambda_0} } &\xrightarrow{~~d~~} \mathcal{N}(0,1) & \begin{aligned}[t] \implies& & \frac{X - \lambda }{ \sqrt{\lambda} } &\xrightarrow{~~d~~} \mathcal{N}(0,1) \\ \implies& & X &\overset{d}{ \approx } \mathcal{N}(\lambda, \sqrt{\lambda}) \end{aligned} \end{align}
This is essentially the same procedure when people treat the Binomial distribution as roughly Gaussian when $$np$$ is large enough. Here the approximation is good when $$\lambda$$ is "large enough". The same practical Normal approximation can be applied to various distributions that are "sums", like the Negative Binomial or Gamma distribution.
Our expression of interest now becomes $$\lim_{ n \to \infty}l_n(\lambda) \approx E[g(X)] = \int_{-\infty}^{\infty} f(x) \log\bigl( f(x) \bigr)\dd{x} \tag*{Eq.(1.a)}$$ where $$f$$ is the Gaussian density with mean $$\lambda$$ and variance $$\lambda$$ $$f(x) = \frac1{\sqrt{2\pi \lambda}} e^{ \frac{-(x - \lambda)^2}{ 2 \lambda }} \quad \text{so that} \quad \log\bigl( f(x) \bigr) = \frac{-(x - \lambda)^2}{ 2 \lambda } - \frac12 \log(2 \pi \lambda)$$ The integration Eq.(1.a) is not trivial but manageable. Allow me to omit typing up the calculation and just state that the result is
$$E[g(X)] = \frac{-1}2 \bigl( 1 + \log(2\pi \lambda) \bigr) \tag*{Eq.(1.b)}$$
One can be pedantic about the lower integration limit, arguing that the actual density is Poisson and non-negative. For that matter, one can consider \begin{align} E_{+}[g(X)] &= \int_{\mathbf{0}}^{\infty} f(x) \log\bigl( f(x) \bigr)\dd{x} \qquad\text{, skipping to result} \\ &= \frac12 \sqrt{ \frac{\lambda}{ 2\pi} } e^{-\lambda/2} -\frac14 \bigl( 1 + \log(2\pi \lambda) \bigr) \Bigl( 1 + \mathrm{erf}\bigl( \sqrt{ \frac{\lambda}2 } \bigr) \Bigr) \tag*{, or} \\ &= \frac12 \left( \sqrt{ \frac{\lambda}{2\pi} } e^{-\lambda/2} - \bigl( 1 + \log(2\pi \lambda) \bigr)\Phi(\sqrt{\lambda} )\right) \tag*{Eq.(2)} \end{align} where erf is the error function and $$\Phi$$ is the CDF of standard Normal. There's practically no difference between Eq.(1.b) and the supposedly more sensible Eq.(2) in the applicable region where $$\lambda$$ is large enough, since the distribution is already far away from the origin.
Admittedly I haven't investigated possible improvements from a continuity correction (when using the Gaussian to approximate the discrete Poisson), but I doubt there's any improvement in the asymptotic region.
In summary, as an asymptotic form, Eq.(1.b) is already as good as it gets under this framework. Depending on your error tolerance, maybe $$\lambda = 10$$ or $$\lambda = 8$$ can already be considered asymptotic, while there's no denying (either numerically or analytically) that both Eq.(2) and Eq.(1.b) are a poor fit for $$\lambda < 3$$.
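These claims are easy to check numerically. The sketch below (mine, not part of the original answer) computes $$E[\log f_{\mathrm{Pois}}(X)]$$ exactly by summing the Poisson pmf over a truncated range and compares it against Eq.(1.b); the truncation rule is a heuristic choice.

```python
import math

def exact_E_log_pmf(lam, kmax=None):
    # Exact E[log p(X)] for X ~ Pois(lam), truncating the sum far out in the tail.
    if kmax is None:
        kmax = int(lam + 20 * math.sqrt(lam) + 50)  # heuristic cutoff
    total, log_lam = 0.0, math.log(lam)
    for k in range(kmax + 1):
        log_p = -lam + k * log_lam - math.lgamma(k + 1)  # log Poisson pmf at k
        total += math.exp(log_p) * log_p
    return total

for lam in (1.0, 3.0, 10.0, 50.0):
    eq1b = -0.5 * (1 + math.log(2 * math.pi * lam))  # Eq.(1.b)
    print(lam, exact_E_log_pmf(lam), eq1b)
```

The gap shrinks like $$O(1/\lambda)$$: clearly visible at $$\lambda = 1$$, already around $$10^{-2}$$ by $$\lambda \approx 10$$.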
You're trying to compute $$\lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^n -\lambda + x_i \log(\lambda) - \log(x_i!)$$ when $$x_i$$ are drawn from the Poisson($$\lambda$$) distribution. By SLLN this is $$-\lambda + \lambda \log(\lambda) - E[\log(X!)]$$ where $$X$$ has such a distribution. The issue is that it's not obvious what $$E[\log(X!)]$$ is.
An approximation like yours can be obtained by estimating this by $$\log(\Gamma(\lambda+1))$$. This amounts to replacing $$X=\lambda$$ (neglecting the variation in the distribution) and interpolating for non-integer $$\lambda$$ with the Gamma function. For large $$\lambda$$ you can then apply Stirling's approximation. A three term approximation of $$\log(\Gamma(\lambda+1))$$ is $$\lambda \log(\lambda) - \lambda + \log(\sqrt{2 \pi \lambda})$$, so the first two terms cancel out with the other two terms from before, leaving $$-\log(\sqrt{2 \pi \lambda})$$. This has the same scaling as your version but it is offset by a constant.
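The size of that constant offset can be pinned down numerically (this check is mine, not part of the answer): summing the Poisson pmf exactly shows $$E[\log(X!)] - \log\Gamma(\lambda+1)$$ settling near $$\tfrac12$$, which matches a second-order Taylor expansion of $$\log\Gamma$$ around $$\lambda$$ ($$\tfrac12 \psi'(\lambda+1)\,\lambda \to \tfrac12$$).

```python
import math

def E_log_factorial(lam, kmax=None):
    # Exact E[log(X!)] for X ~ Pois(lam); log(k!) = lgamma(k + 1).
    if kmax is None:
        kmax = int(lam + 20 * math.sqrt(lam) + 50)  # heuristic tail cutoff
    total, log_lam = 0.0, math.log(lam)
    for k in range(kmax + 1):
        p = math.exp(-lam + k * log_lam - math.lgamma(k + 1))  # Poisson pmf
        total += p * math.lgamma(k + 1)
    return total

for lam in (10.0, 50.0, 200.0):
    print(lam, E_log_factorial(lam) - math.lgamma(lam + 1.0))  # tends toward about 1/2
```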
A different approach is to use Stirling's approximation directly in the sum defining $$E[\log(X!)]$$. In fact we can give pretty nice explicit bounds:
$$E[X \log(X)]-\lambda+\frac{1}{2} E[\log(X)]+\frac{1}{2} \log(2\pi) \leq E[\log(X!)] \\ \leq E[X\log(X)]-\lambda+\frac{1}{2} E[\log(X)] + 1.$$
These two bounds differ only by a constant, namely $$1-\frac{1}{2} \log(2 \pi) \approx 0.081$$, but the problem of computing $$E[X \log(X)]$$ and $$E[\log(X)]$$ persists.
• (commenting in case the pin in my post doesn't work) I wouldn't say this approach follows the same basic idea as mine. In fact I think your approach is better (more robust, allowing further improvement), especially for the region of smaller $\lambda$. – Lee David Chung Lin Feb 3 '19 at 5:39
• @LeeDavidChungLin In my comment on the original post, when I said "following" I meant that I could follow your reasoning, not that my own answer followed your idea. – Ian Feb 6 '19 at 18:06
https://zxi.mytechroad.com/blog/category/recursion/
# Posts published in “Recursion”
Given an array nums that represents a permutation of integers from 1 to n. We are going to construct a binary search tree (BST) by inserting the elements of nums in order into an initially empty BST. Find the number of different ways to reorder nums so that the constructed BST is identical to that formed from the original array nums.
For example, given nums = [2,1,3], we will have 2 as the root, 1 as a left child, and 3 as a right child. The array [2,3,1] also yields the same BST but [3,2,1] yields a different BST.
Return the number of ways to reorder nums such that the BST formed is identical to the original BST formed from nums.
Since the answer may be very large, return it modulo 10^9 + 7.
Example 1:
Input: nums = [2,1,3]
Output: 1
Explanation: We can reorder nums to be [2,3,1] which will yield the same BST. There are no other ways to reorder nums which will yield the same BST.
Example 2:
Input: nums = [3,4,5,1,2]
Output: 5
Explanation: The following 5 arrays will yield the same BST:
[3,1,2,4,5]
[3,1,4,2,5]
[3,1,4,5,2]
[3,4,1,2,5]
[3,4,1,5,2]
Example 3:
Input: nums = [1,2,3]
Output: 0
Explanation: There are no other orderings of nums that will yield the same BST.
Example 4:
Input: nums = [3,1,2,5,4,6]
Output: 19
Example 5:
Input: nums = [9,4,2,1,3,6,5,7,8,14,11,10,12,13,16,15,17,18]
Output: 216212978
Explanation: The number of ways to reorder nums to get the same BST is 3216212999. Taking this number modulo 10^9 + 7 gives 216212978.
Constraints:
• 1 <= nums.length <= 1000
• 1 <= nums[i] <= nums.length
• All integers in nums are distinct.
## Solution: Recursion + Combinatorics
For a given root (the first element of the array), we can split the rest of the array into left children (nums[i] < nums[0]) and right children (nums[i] > nums[0]). Assuming there are l nodes on the left and r on the right, there are C(l + r, l) different ways to interleave the two groups while preserving each group's relative order. Within the left / right group, there are ways(left) / ways(right) different ways to re-arrange those nodes. So the total # of ways is:
C(l + r, l) * ways(l) * ways(r)
Don’t forget to subtract one (the original ordering itself) for the final answer.
Time complexity: O(n^2)
Space complexity: O(n^2)
## python3
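A sketch of the approach above in Python 3 (my own code, not the blog's original; `math.comb` needs Python ≥ 3.8):

```python
from math import comb

MOD = 10**9 + 7

def num_of_ways(nums):
    # ways(arr): orderings of arr that build the same BST as arr itself
    def ways(arr):
        if len(arr) <= 2:
            return 1
        root = arr[0]
        left = [x for x in arr if x < root]   # relative order preserved
        right = [x for x in arr if x > root]
        # choose which slots the left subtree occupies, then recurse
        return comb(len(left) + len(right), len(left)) * ways(left) * ways(right) % MOD
    return (ways(nums) - 1) % MOD  # exclude the original ordering

print(num_of_ways([2, 1, 3]))        # 1
print(num_of_ways([3, 4, 5, 1, 2]))  # 5
```

Splitting into left/right lists is O(n) per node, giving the O(n^2) time noted above.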
Implement a basic calculator to evaluate a simple expression string.
The expression string may contain open parentheses ( and closing parentheses ), the plus sign + or minus sign -, non-negative integers, and empty spaces.
Example 1:
Input: "1 + 1"
Output: 2
Example 2:
Input: " 2-1 + 2 "
Output: 3
Example 3:
Input: "(1+(4+5+2)-3)+(6+8)"
Output: 23
Note:
• You may assume that the given expression is always valid.
• Do not use the eval built-in library function.
## Solution: Recursion
Make a recursive call when there is an open parenthesis and return if there is close parenthesis.
Time complexity: O(n)
Space complexity: O(n)
## Python3
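A Python 3 sketch of the recursive approach (my code, not the post's original): recurse on '(' and return on ')'.

```python
def calculate(s: str) -> int:
    # helper returns (value, index of the ')' that ended this level)
    def helper(i):
        total, sign = 0, 1
        while i < len(s):
            ch = s[i]
            if ch.isdigit():
                num = 0
                while i < len(s) and s[i].isdigit():
                    num = num * 10 + int(s[i])
                    i += 1
                total += sign * num
                continue  # i already points past the number
            if ch == '+':
                sign = 1
            elif ch == '-':
                sign = -1
            elif ch == '(':
                val, i = helper(i + 1)  # recursive call on open parenthesis
                total += sign * val
            elif ch == ')':
                return total, i         # return on close parenthesis
            i += 1  # also skips spaces
        return total, i

    return helper(0)[0]

print(calculate("(1+(4+5+2)-3)+(6+8)"))  # 23
```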
Alex and Lee continue their games with piles of stones. There are a number of piles arranged in a row, and each pile has a positive integer number of stones piles[i]. The objective of the game is to end with the most stones.
Alex and Lee take turns, with Alex starting first. Initially, M = 1.
On each player’s turn, that player can take all the stones in the first X remaining piles, where 1 <= X <= 2M. Then, we set M = max(M, X).
The game continues until all the stones have been taken.
Assuming Alex and Lee play optimally, return the maximum number of stones Alex can get.
Example 1:
Input: piles = [2,7,9,4,4]
Output: 10
Explanation: If Alex takes one pile at the beginning, Lee takes two piles, then Alex takes 2 piles again. Alex can get 2 + 4 + 4 = 10 piles in total. If Alex takes two piles at the beginning, then Lee can take all three piles left. In this case, Alex get 2 + 7 = 9 piles in total. So we return 10 since it's larger.
Constraints:
• 1 <= piles.length <= 100
• 1 <= piles[i] <= 10 ^ 4
## Solution: Recursion + Memoization
def solve(s, m) = max diff score between two players starting from s for the given M.
cache[s][M] = max{sum(piles[s:s+x]) – solve(s+x, max(x, M))}, for 1 <= x <= 2*M, s + x <= n
Time complexity: O(n^3)
Space complexity: O(n^2)
## C++
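In place of the C++ block, a Python sketch (mine). It phrases the recursion as "max stones the player to move can still take from piles[s:]" rather than the score difference used in the note above; the two formulations are equivalent.

```python
from functools import lru_cache

def stone_game_ii(piles):
    n = len(piles)
    suffix = [0] * (n + 1)  # suffix[i] = piles[i] + ... + piles[n-1]
    for i in range(n - 1, -1, -1):
        suffix[i] = suffix[i + 1] + piles[i]

    @lru_cache(maxsize=None)
    def solve(s, m):
        # best total the player to move can take from piles[s:] given M = m
        if s + 2 * m >= n:
            return suffix[s]  # take everything that's left
        # leave the opponent with as little as possible
        return suffix[s] - min(solve(s + x, max(m, x)) for x in range(1, 2 * m + 1))

    return solve(0, 1)

print(stone_game_ii((2, 7, 9, 4, 4)))  # 10
```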
Return the result of evaluating a given boolean expression, represented as a string.
An expression can either be:
• "t", evaluating to True;
• "f", evaluating to False;
• "!(expr)", evaluating to the logical NOT of the inner expression expr;
• "&(expr1,expr2,...)", evaluating to the logical AND of 2 or more inner expressions expr1, expr2, ...;
• "|(expr1,expr2,...)", evaluating to the logical OR of 2 or more inner expressions expr1, expr2, ...
Example 1:
Input: expression = "!(f)"
Output: true
Example 2:
Input: expression = "|(f,t)"
Output: true
Example 3:
Input: expression = "&(t,f)"
Output: false
Example 4:
Input: expression = "|(&(t,f,t),!(t))"
Output: false
## Solution: Recursion
Time complexity: O(n)
Space complexity: O(n)
## Java
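In place of the Java block, a Python sketch of the recursive parser (mine, not the blog's):

```python
def parse_bool_expr(expression: str) -> bool:
    # parse returns (value, index just past the parsed sub-expression)
    def parse(i):
        ch = expression[i]
        if ch == 't':
            return True, i + 1
        if ch == 'f':
            return False, i + 1
        op = ch       # one of '!', '&', '|'
        i += 2        # skip the operator and the '('
        vals = []
        while expression[i] != ')':
            if expression[i] == ',':
                i += 1
                continue
            v, i = parse(i)
            vals.append(v)
        i += 1        # skip the ')'
        if op == '!':
            return not vals[0], i
        if op == '&':
            return all(vals), i
        return any(vals), i

    return parse(0)[0]

print(parse_bool_expr("|(&(t,f,t),!(t))"))  # False
```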
Given an encoded string, return its decoded string.
The encoding rule is: k[encoded_string], where the encoded_string inside the square brackets is being repeated exactly k times. Note that k is guaranteed to be a positive integer.
You may assume that the input string is always valid; No extra white spaces, square brackets are well-formed, etc.
Furthermore, you may assume that the original data does not contain any digits and that digits are only for those repeat numbers, k. For example, there won’t be input like 3a or 2[4].
Examples:
s = "3[a]2[bc]", return "aaabcbc".
s = "3[a2[c]]", return "accaccacc".
s = "2[abc]3[cd]ef", return "abcabccdcdcdef".
## Solution 1: Recursion
Time complexity: O(n^2)
Space complexity: O(n)
## C++
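In place of the C++ block, a Python sketch of the recursive decoder (mine):

```python
def decode_string(s: str) -> str:
    # decode returns (decoded text, index of the ']' that stopped this level)
    def decode(i):
        out = []
        while i < len(s) and s[i] != ']':
            if s[i].isdigit():
                k = 0
                while s[i].isdigit():
                    k = k * 10 + int(s[i])
                    i += 1
                inner, i = decode(i + 1)  # s[i] is '[': recurse past it
                out.append(inner * k)
                i += 1                    # skip the matching ']'
            else:
                out.append(s[i])
                i += 1
        return ''.join(out), i

    return decode(0)[0]

print(decode_string("3[a2[c]]"))  # accaccacc
```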
https://answers.opencv.org/answers/215085/revisions/
"OpenCV can use OpenVINO backend" means that OpenVINO has some OpenCV compatibility, so it can take OpenCV Mats and process them in the VPU. It doesn't mean that it can take OpenCV deep neural networks.
The solution would be to use a similar network that OpenVINO understands: The ...-coco networks can be used as YOLO alternatives.
http://tailieu.vn/doc/dive-into-python-chapter-10-scripts-and-streams-257147.html
# Dive Into Python-Chapter 10. Scripts and Streams
## 10.1. Abstracting input sources

One of Python's greatest strengths is its dynamic binding, and one powerful use of dynamic binding is the file-like object. Many functions which require an input source could simply take a filename, go open the file for reading, read it, and close it when they're done. But they don't. Instead, they take a file-like object. In the simplest case, a file-like object is any object with a read method with an optional size parameter, which returns a string. When called with no size parameter, it reads everything there is to read from the input source and returns all the data as a single string. When called with a size parameter, it reads that much from the input source and returns that much data; when called again, it picks up where it left off and returns the next chunk of data.

This is how reading from real files works; the difference is that you're not limiting yourself to real files. The input source could be anything: a file on disk, a web page, even a hard-coded string. As long as you pass a file-like object to the function, and the function simply calls the object's read method, the function can handle any kind of input source without specific code to handle each kind. In case you were wondering how this relates to XML processing, minidom.parse is one such function which can take a file-like object.

Example 10.1. Parsing XML from a file

```python
>>> from xml.dom import minidom
>>> fsock = open('binary.xml')     # 1
>>> xmldoc = minidom.parse(fsock)  # 2
>>> fsock.close()                  # 3
>>> print xmldoc.toxml()           # 4
```

(the printed XML document is not reproduced here)

1. First, you open the file on disk. This gives you a file object.
2. You pass the file object to minidom.parse, which calls the read method of fsock and reads the XML document from the file on disk.
3. Be sure to call the close method of the file object after you're done with it. minidom.parse will not do this for you.
4. Calling the toxml() method on the returned XML document prints out the entire thing.

Well, that all seems like a colossal waste of time. After all, you've already seen that minidom.parse can simply take the filename and do all the opening and closing nonsense automatically. And it's true that if you know you're just going to be parsing a local file, you can pass the filename and minidom.parse is smart enough to Do The Right Thing™. But notice how similar -- and easy -- it is to parse an XML document straight from the Internet.

Example 10.2. Parsing XML from a URL

```python
>>> import urllib
>>> usock = urllib.urlopen('http://slashdot.org/slashdot.rdf')  # 1
>>> xmldoc = minidom.parse(usock)                               # 2
>>> usock.close()                                               # 3
>>> print xmldoc.toxml()                                        # 4
```

```
Slashdot
http://slashdot.org/
News for nerds, stuff that matters
Slashdot
http://images.slashdot.org/topics/topicslashdot.gif
http://slashdot.org/
To HDTV or Not to HDTV?
http://slashdot.org/article.pl?sid=01/12/28/0421241
[...snip...]
```

1. As you saw in a previous chapter, urlopen takes a web page URL and returns a file-like object. Most importantly, this object has a read method which returns the HTML source of the web page.
2. Now you pass the file-like object to minidom.parse, which obediently calls the read method of the object and parses the XML data that the read method returns. The fact that this XML data is now coming straight from a web page is completely irrelevant. minidom.parse doesn't know about web pages, and it doesn't care about web pages; it just knows about file-like objects.
3. As soon as you're done with it, be sure to close the file-like object that urlopen gives you.
4. By the way, this URL is real, and it really is XML. It's an XML representation of the current headlines on Slashdot, a technical news and gossip site.

Example 10.3. Parsing XML from a string (the easy but inflexible way)

```python
>>> contents = "01"
>>> xmldoc = minidom.parseString(contents)  # 1
>>> print xmldoc.toxml()
01
```

(the markup of the XML string was stripped during extraction; only the character data "01" survives)

1. minidom has a method, parseString, which takes an entire XML document as a string and parses it. You can use this instead of minidom.parse if you know you already have your entire XML document in a string.

OK, so you can use the minidom.parse function for parsing both local files and remote URLs, but for parsing strings, you use... a different function. That means that if you want to be able to take input from a file, a URL, or a string, you'll need special logic to check whether it's a string, and call the parseString function instead. How unsatisfying. If there were a way to turn a string into a file-like object, then you could simply pass this object to minidom.parse. And in fact, there is a module specifically designed for doing just that: StringIO.

Example 10.4. Introducing StringIO

```python
>>> contents = "01"
>>> import StringIO
>>> ssock = StringIO.StringIO(contents)  # 1
>>> ssock.read()                         # 2
"01"
>>> ssock.read()                         # 3
''
>>> ssock.seek(0)                        # 4
>>> ssock.read(15)                       # 5
'1'
>>> ssock.close()                        # 6
```

1. The StringIO module contains a single class, also called StringIO, which allows you to turn a string into a file-like object. The StringIO class takes the string as a parameter when creating an instance.
2. Now you have a file-like object, and you can do all sorts of file-like things with it. Like read, which returns the original string.
3. Calling read again returns an empty string. This is how real file objects work too; once you read the entire file, you can't read any more without explicitly seeking to the beginning of the file. The StringIO object works the same way.
4. You can explicitly seek to the beginning of the string, just like seeking through a file, by using the seek method of the StringIO object.
5. You can also read the string in chunks, by passing a size parameter to the read method.
6. At any time, read will return the rest of the string that you haven't read yet. All of this is exactly how file objects work; hence the term file-like object.

Example 10.5. Parsing XML from a string (the file-like object way)

```python
>>> contents = "01"
>>> ssock = StringIO.StringIO(contents)
>>> xmldoc = minidom.parse(ssock)  # 1
>>> ssock.close()
>>> print xmldoc.toxml()
01
```

1. Now you can pass the file-like object (really a StringIO) to minidom.parse, which will call the object's read method and happily parse away, never knowing that its input came from a hard-coded string.

So now you know how to use a single function, minidom.parse, to parse an XML document stored on a web page, in a local file, or in a hard-coded string. For a web page, you use urlopen to get a file-like object; for a local file, you use open; and for a string, you use StringIO. Now let's take it one step further and generalize these differences as well.

Example 10.6. openAnything

```python
def openAnything(source):                  # 1
    # try to open with urllib (if source is http, ftp, or file URL)
    import urllib
    try:
        return urllib.urlopen(source)      # 2
    except (IOError, OSError):
        pass

    # try to open with native open function (if source is pathname)
    try:
        return open(source)                # 3
    except (IOError, OSError):
        pass

    # treat source as string
    import StringIO
    return StringIO.StringIO(str(source))  # 4
```

1. The openAnything function takes a single parameter, source, and returns a file-like object. source is a string of some sort; it can either be a URL (like 'http://slashdot.org/slashdot.rdf'), a full or partial pathname to a local file (like 'binary.xml'), or a string that contains actual XML data to be parsed.
2. First, you see if source is a URL. You do this through brute force: you try to open it as a URL and silently ignore errors caused by trying to open something which is not a URL. This is actually elegant in the sense that, if urllib ever supports new types of URLs in the future, you will also support them without recoding. If urllib is able to open source, then the return kicks you out of the function immediately and the following try statements never execute.
3. On the other hand, if urllib yelled at you and told you that source wasn't a valid URL, you assume it's a path to a file on disk and try to open it. Again, you don't do anything fancy to check whether source is a valid filename or not (the rules for valid filenames vary wildly between different platforms anyway, so you'd probably get them wrong anyway). Instead, you just blindly open the file, and silently trap any errors.
4. By this point, you need to assume that source is a string that has hard-coded data in it (since nothing else worked), so you use StringIO to create a file-like object out of it and return that. (In fact, since you're using the str function, source doesn't even need to be a string; it could be any object, and you'll use its string representation, as defined by its __str__ special method.)

Now you can use this openAnything function in conjunction with minidom.parse to make a function that takes a source that refers to an XML document somehow (either as a URL, or a local filename, or a hard-coded XML document in a string) and parses it.

Example 10.7. Using openAnything in kgp.py

```python
class KantGenerator:
    def _load(self, source):
        sock = toolbox.openAnything(source)
        xmldoc = minidom.parse(sock).documentElement
        sock.close()
        return xmldoc
```

## 10.2. Standard input, output, and error

UNIX users are already familiar with the concept of standard input, standard output, and standard error. This section is for the rest of you.

Standard output and standard error (commonly abbreviated stdout and stderr) are pipes that are built into every UNIX system. When you print something, it goes to the stdout pipe; when your program crashes and prints out debugging information (like a traceback in Python), it goes to the stderr pipe. Both of these pipes are ordinarily just connected to the terminal window where you are working, so when a program prints, you see the output, and when a program crashes, you see the debugging information. (If you're working on a system with a window-based Python IDE, stdout and stderr default to your “Interactive Window”.)

Example 10.8. Introducing stdout and stderr

```python
>>> for i in range(3):
...     print 'Dive in'              # 1
Dive in
Dive in
Dive in
>>> import sys
>>> for i in range(3):
...     sys.stdout.write('Dive in')  # 2
Dive inDive inDive in
>>> for i in range(3):
...     sys.stderr.write('Dive in')  # 3
Dive inDive inDive in
```

1. As you saw in Example 6.9, “Simple Counters”, you can use Python's built-in range function to build simple counter loops that repeat something a set number of times.
2. stdout is a file-like object; calling its write function will print out whatever string you give it. In fact, this is what the print function really does; it adds a carriage return to the end of the string you're printing, and calls sys.stdout.write.
3. In the simplest case, stdout and stderr send their output to the same place: the Python IDE (if you're in one), or the terminal (if you're running Python from the command line). Like stdout, stderr does not add carriage returns for you; if you want them, add them yourself.

stdout and stderr are both file-like objects, like the ones you discussed in Section 10.1, “Abstracting input sources”, but they are both write-only. They have no read method, only write. Still, they are file-like objects, and you can assign any other file- or file-like object to them to redirect their output.

Example 10.9. Redirecting output

```
[you@localhost kgp]$ python stdout.py
Dive in
[you@localhost kgp]$ cat out.log
This message will be logged instead of displayed
```

(On Windows, you can use type instead of cat to display the contents of a file.)

If you have not already done so, you can download this and other examples used in this book.

```python
#stdout.py
import sys

print 'Dive in'                                           # 1
saveout = sys.stdout                                      # 2
fsock = open('out.log', 'w')                              # 3
sys.stdout = fsock                                        # 4
print 'This message will be logged instead of displayed'  # 5
sys.stdout = saveout                                      # 6
fsock.close()                                             # 7
```

1. This will print to the IDE “Interactive Window” (or the terminal, if running the script from the command line).
2. Always save stdout before redirecting it, so you can set it back to normal later.
3. Open a file for writing. If the file doesn't exist, it will be created. If the file does exist, it will be overwritten.
4. Redirect all further output to the new file you just opened.
5. This will be “printed” to the log file only; it will not be visible in the IDE window or on the screen.
6. Set stdout back to the way it was before you mucked with it.
7. Close the log file.

Redirecting stderr works exactly the same way, using sys.stderr instead of sys.stdout.

Example 10.10. Redirecting error information

```
[you@localhost kgp]$ python stderr.py
[you@localhost kgp]$ cat error.log
Traceback (most recent line last):
  File "stderr.py", line 5, in ?
    raise Exception, 'this error will be logged'
Exception: this error will be logged
```

If you have not already done so, you can download this and other examples used in this book.

```python
#stderr.py
import sys

fsock = open('error.log', 'w')                # 1
sys.stderr = fsock                            # 2
raise Exception, 'this error will be logged'  # 3 4
```

1. Open the log file where you want to store debugging information.
2. Redirect standard error by assigning the file object of the newly-opened log file to stderr.
3. Raise an exception. Note from the screen output that this does not print anything on screen. All the normal traceback information has been written to error.log.
4. Also note that you're not explicitly closing your log file, nor are you setting stderr back to its original value. This is fine, since once the program crashes (because of the exception), Python will clean up and close the file for us, and it doesn't make any difference that stderr is never restored, since, as I mentioned, the program crashes and Python ends. Restoring the original is more important for stdout, if you expect to go do other stuff within the same script afterwards.

Since it is so common to write error messages to standard error, there is a shorthand syntax that can be used instead of going through the hassle of redirecting it outright.

Example 10.11. Printing to stderr

```python
>>> print 'entering function'
entering function
>>> import sys
>>> print >> sys.stderr, 'entering function'  # 1
entering function
```

1. This shorthand syntax of the print statement can be used to write to any open file, or file-like object. In this case, you can redirect a single print statement to stderr without affecting subsequent print statements.

Standard input, on the other hand, is a read-only file object, and it represents the data flowing into the program from some previous program. This will likely not make much sense to classic Mac OS users, or even Windows users unless you were ever fluent on the MS-DOS command line. The way it works is that you can construct a chain of commands in a single line, so that one program's output becomes the input for the next program in the chain. The first program simply outputs to standard output (without doing any special redirecting itself, just doing normal print statements or whatever), and the next program reads from standard input, and the operating system takes care of connecting one program's output to the next program's input.

Example 10.12. Chaining commands
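The chapter's code targets Python 2. As a closing aside, here is a rough Python 3 port of the openAnything pattern from Example 10.6 (a sketch under the assumption that `urllib.request.urlopen` raises ValueError for strings that aren't URLs; the function name is mine, not the book's):

```python
import io
import urllib.request

def open_anything(source):
    # try URL first, then local file, then treat source as literal string data
    try:
        return urllib.request.urlopen(source)
    except (OSError, ValueError):
        pass
    try:
        return open(source)
    except OSError:
        pass
    return io.StringIO(str(source))

ssock = open_anything("just a plain string")
print(ssock.read())  # just a plain string
```

The fallback order is unchanged: URL first, then local file, then a literal string wrapped in io.StringIO.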
https://socratic.org/questions/582e2bc37c0149410e3a69e0
# Question #a69e0
Nov 18, 2016
$A \approx {44.2}^{\circ}$
$B \approx {95.5}^{\circ}$
$C \approx {40.3}^{\circ}$
#### Explanation:
By convention, $a$ represents the side opposite vertex $A$, $b$ represents the side opposite vertex $B$, and $c$ represents the side opposite vertex $C$.
The law of cosines states that ${c}^{2} = {a}^{2} + {b}^{2} - 2 a b \cos \left(C\right)$. Note that this is true regardless of how the sides and vertices are labeled, so it is also true that
• ${b}^{2} = {a}^{2} + {c}^{2} - 2 a c \cos \left(B\right)$
• ${a}^{2} = {b}^{2} + {c}^{2} - 2 b c \cos \left(A\right)$
If we try solving for the angle, we get
• $\cos \left(A\right) = \frac{{b}^{2} + {c}^{2} - {a}^{2}}{2 b c}$
• $\cos \left(B\right) = \frac{{a}^{2} + {c}^{2} - {b}^{2}}{2 a c}$
• $\cos \left(C\right) = \frac{{a}^{2} + {b}^{2} - {c}^{2}}{2 a b}$
Thus, applying the inverse cosine function to both sides:
• $A = \arccos \left(\frac{{b}^{2} + {c}^{2} - {a}^{2}}{2 b c}\right)$
• $B = \arccos \left(\frac{{a}^{2} + {c}^{2} - {b}^{2}}{2 a c}\right)$
• $C = \arccos \left(\frac{{a}^{2} + {b}^{2} - {c}^{2}}{2 a b}\right)$
Substituting in the given values $a = 14 , b = 20 , c = 13$ into a calculator, we get our results:
• $A \approx {44.2}^{\circ}$
• $B \approx {95.5}^{\circ}$
• $C \approx {40.3}^{\circ}$
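A quick numeric check of these three values (a sketch using Python's math module; not part of the original answer):

```python
import math

a, b, c = 14.0, 20.0, 13.0

# law of cosines, solved for each angle
A = math.degrees(math.acos((b*b + c*c - a*a) / (2*b*c)))
B = math.degrees(math.acos((a*a + c*c - b*b) / (2*a*c)))
C = math.degrees(math.acos((a*a + b*b - c*c) / (2*a*b)))

print(round(A, 1), round(B, 1), round(C, 1))  # 44.2 95.5 40.3
```

The three angles also sum to 180°, a useful sanity check.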
https://socratic.org/questions/how-do-integrate-this-int-e-x-e-x-e-x-e-x-dx-without-hyperbolic-indentities-pref#626001
# How do I integrate this: int (e^x + e^-x)/(e^x - e^-x)dx, without hyperbolic identities, preferably with substitution?
Jun 5, 2018
$= \ln \left|{e}^{x} - {e}^{- x}\right| + C$
#### Explanation:
$\int \frac{{e}^{x} + {e}^{- x}}{{e}^{x} - {e}^{- x}} \mathrm{dx}$
Substitute:
$u = {e}^{x} - {e}^{- x}$
From this it will follow that:
$\mathrm{du} = \left({e}^{x} + {e}^{- x}\right) \mathrm{dx}$
So $\mathrm{du}$ is just the top of the fraction, now substituting in:
$\int \frac{{e}^{x} + {e}^{- x}}{{e}^{x} - {e}^{- x}} \mathrm{dx} = \int \frac{\mathrm{du}}{u}$
$= \ln \left|u\right| + C$
Reverse the substitution:
$= \ln \left|{e}^{x} - {e}^{- x}\right| + C$
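One way to sanity-check the antiderivative without redoing the algebra is to differentiate it numerically and compare against the integrand; a small Python sketch (the step size h and the test point are arbitrary choices of mine):

```python
import math

def integrand(x):
    return (math.exp(x) + math.exp(-x)) / (math.exp(x) - math.exp(-x))

def antiderivative(x):
    # valid for x > 0, where e^x - e^(-x) > 0
    return math.log(math.exp(x) - math.exp(-x))

# the central difference of the antiderivative should reproduce the integrand
x, h = 1.0, 1e-6
numeric = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
print(abs(numeric - integrand(x)) < 1e-8)  # True
```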
http://www.gabormelli.com/RKB/Gaussian_Probability_Models
# Normal/Gaussian Probability Distribution Family
A Normal/Gaussian Probability Distribution Family is an exponential probability distribution family whose density is of the form $f(x) = a\,e^{-(x-b)^2/c}$, where $a = \tfrac{1}{\sqrt{2\pi\sigma^2}}$, $b = \mu$, and $c = 2\sigma^2$.
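With those parameter values the expression $a\,e^{-(x-b)^2/c}$ reduces to the familiar normal density $\tfrac{1}{\sqrt{2\pi\sigma^2}}\,e^{-(x-\mu)^2/(2\sigma^2)}$; a minimal check in Python (the function name is my own):

```python
import math

def gaussian(x, mu=0.0, sigma=1.0):
    """Density a * exp(-(x - b)**2 / c) with a, b, c as in the text."""
    a = 1.0 / math.sqrt(2 * math.pi * sigma**2)
    b, c = mu, 2 * sigma**2
    return a * math.exp(-(x - b) ** 2 / c)

print(round(gaussian(0.0), 5))  # 0.39894, the peak of the standard normal
```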
http://math.stackexchange.com/questions/130358/exterior-algebra-of-self-direct-sum
# Exterior Algebra of Self-Direct Sum
Suppose $V$ and $W$ are vector spaces and $\bigwedge V$ and $\bigwedge W$ their exterior algebras. Then it is known that $\bigwedge (V \oplus W) \simeq \bigwedge V \otimes \bigwedge W$. Now my question is:
Is $\bigwedge V \otimes \bigwedge V \simeq \bigwedge V$ true?
The reason why I think that: the elements of $\bigwedge (V \oplus W)$ are linear combinations of wedges like $v_1 \wedge \ldots \wedge v_n \wedge w_1 \wedge \ldots \wedge w_m$, but in $\bigwedge (V \oplus V)$ the wedges $v_1 \wedge \ldots \wedge v_n \wedge v'_1 \wedge \ldots \wedge v'_{n'}$ can be reduced (using $v_i \wedge v_i = 0$) to something like $v''_1 \wedge \ldots \wedge v''_{n''} \in \bigwedge V$.
• If $V$ is of finite dimension $n$, then $\Lambda V$ has dimension $2^n$.
• On the other hand, if $U$ and $W$ are vector spaces of finite dimension, we have $\dim U\otimes W=\dim U\cdot\dim W$.
If your isomorphism existed, we would then have that $2^n \cdot 2^n=2^n$, which fails for every $n \geq 1$.
If $V$ is infinite dimensional, though, there is an isomorphism $\Lambda V\otimes\Lambda V\cong\Lambda V$, coming from the fact that $V\oplus V\cong V$. – Mariano Suárez-Alvarez Apr 11 '12 at 8:40
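The dimension count in the comment can be verified directly: $\dim \Lambda^k V = \binom{n}{k}$, and these sum to $2^n$, so $\dim(\Lambda V \otimes \Lambda V) = 2^n \cdot 2^n$ exceeds $\dim \Lambda V = 2^n$ for every $n \geq 1$. A small Python sketch (function name is mine):

```python
from math import comb

def dim_exterior(n):
    """dim of the exterior algebra of an n-dimensional space:
    sum over k of C(n, k), which equals 2**n."""
    return sum(comb(n, k) for k in range(n + 1))

for n in range(1, 6):
    tensor_dim = dim_exterior(n) ** 2   # dim of the tensor product
    # 2**n squared is never 2**n once n >= 1
    print(n, tensor_dim, dim_exterior(n), tensor_dim == dim_exterior(n))
```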
http://www.docstoc.com/docs/66825681/Changing-Nature-of-Management-Accounting
# Changing Nature of Management Accounting
THE FUTURE OF MANAGEMENT ACCOUNTING:
A SOUTH AUSTRALIAN PERSPECTIVE*
David Forsaith
Senior Lecturer
Carol Tilt
Senior Lecturer
Maria Xydias-Lobo
Lecturer
School of Commerce
Flinders University
School of Commerce
Research Paper Series: 03-2
ISSN 1441-3906
*The authors would like to thank CPA Australia for funding the survey used in this paper.
ABSTRACT
In the last decade there has been a substantial degree of research interest in the changing
function of management accounting and the role of management accountants in commercial
enterprises. This research indicates that management accounting may have lost some relevance
to management and other information users, and there has been a plethora of research
suggesting changes to management accounting systems, techniques and practices. This paper
explores how current management accountants view their present and future role and, from
this, determines some ideas for the future development of the issue.
INTRODUCTION
In the last decade, there has been a substantial degree of research interest in the changing
function of management accounting and role of management accountants in commercial
enterprises. Since Johnson and Kaplan (1987) first alerted the accounting community to
management accounting’s apparent loss of relevance to management and other information users,
there has been a plethora of (particularly) prescriptive research suggesting changes to management
accounting systems, techniques and practices. Recommended ‘solutions’ to the relevance problem
have included innovative costing and information frameworks such as Activity Based Costing,
Balanced Scorecard, Key Performance Indicators, Economic Value Added, and Benchmarking.
This paper takes a more exploratory approach with the intention of establishing how current
management accountants view their present and future role. From this, some ideas for future
development of this issue are determined.
The paper is structured as follows. First, a review of the literature is provided, outlining three
issues that predominate in the research on management accounting. This is followed by
details of the specific research questions addressed, the survey conducted on South Australian
management accountants, and the results of that survey. Finally, conclusions and
recommendations are presented.
REVIEW OF THE LITERATURE
Recently, professional accounting bodies in the UK and the US have funded research into
management accounting change experienced by businesses in those countries. In the UK, CIMA (the
Chartered Institute of Management Accountants) and the Economic and Social Research Council
funded a longitudinal study into the changing nature of management accounting, covering the
period between 1995 and 1998 (Burns, Ezzamel and Scapens, 1999). In the US, IMA (the Institute
of Management Accountants), assisted by the AICPA, commissioned the ‘1999 Practice Analysis
of Management Accounting’ (Russell, Siegel and Kulesza, 1999). In Australia, the Management
Accounting Centre of Excellence of the Australian Society of Certified Practising Accountants
(ASCPA) funded a delphi study in 1994 looking at management accounting change over the
period 1989 to 1994 (Barbera, 1996a). The results of the 1994 study were compared to those of an
earlier delphi study conducted by Birkett (1989) on behalf of the Task Force for Accounting
Education in Australia.
The common focus of all these studies has been on current and future:
• triggers or drivers of change in the management accounting profession,
• changes in the management accounting function, and
• changes in the tasks performed and the skills required by management accountants.
Several other writers also report on these three issues; the results from all these sources will be
summarised below.
Triggers or Drivers of Change in Management Accounting
The IMA study conducted in the US found that the respondents surveyed felt that the rate of
change in their role as management accountants had been more rapid between 1995 and 1999 than
over the preceding five year period, and believed that the rate of change would continue to
increase over the next three years (Siegel, 1999).
The most common change factors cited in the literature are, on a broad environmental level,
globalisation of markets, advances in information and production technologies, and increasing
competition (Barbera, 1996b; Burns et al, 1999; Russell et al, 1999; Sharma, 1998). On an
organisational level they include greater emphasis on core competencies, emphasis on customer
and supplier relationships, downsizing, outsourcing, flatter organisational structures and team
work (Barbera, 1996a; Binnersley, 1997; Burns et al, 1999).
All of these changes in the business environment have resulted in changes to how organisations
operate, trade and are managed. This indirectly affects the function and tasks of management
accounting, since management accountants have traditionally provided information which
facilitates or supports effective and efficient operations and management. However, some of these
changes also directly affect the functions and tasks of management accountants. For example, the
rapid progress of information technology means that management have become increasingly aware
of the availability of more information (for example, through data warehousing and the Internet)
and expect that management accountants will provide this for them (and expeditiously). This has
an indirect effect on the management accounting function. However, the rise of new technologies
has also meant that management accountants can relinquish much of the ‘bean counting’ and
‘number crunching’ to computerised accounting systems, leaving them more time to analyse and
interpret the information produced. Thus, information technology also directly affects the tasks
conducted by management accountants.
The establishment of global markets, the emphasis on customer relationships and improved quality
of products and services, and the enhancement of production technologies all serve to increase the
level of competition between organisations. Firms compete on price, quality, speed of delivery,
and customer service. Management need measures and performance indicators on all these
factors, and management accountants, being the traditional information specialists of the
organisation, must provide these. Failure to do so may result in other information professionals
bridging this gap, essentially rendering management accountants comparatively irrelevant
(Binnersley 1997). As a result of these changes to the business environment, management
accountants must be less concerned with ‘number crunching’ and generating the traditional,
antiquated accounting measures, and look at how they can add value and become more
‘integrated’ into the organisation (Binnersley, 1997). Russell et al (1999) claim that management
accountants must progress beyond “bean counter, corporate cop and financial historian” to become
instead a “valued business partner” (p 39) with greater strategic and managerial decision making
responsibilities.
The identity of management accountants is also under transition. The IMA study asked
management accountants to define their position in the organisation. None of the respondents
defined themselves as ‘management accountants’. Thirty nine percent said that they work in
Finance, thirty three percent said Accounting and twenty eight percent said something else
(Russell et al, 1999). The following quote attempts to explain why:
“The most common reasons for people saying that they work in finance, rather than
accounting, have to do with the positive connotations that respondents have of finance and
the negative connotations they have of accounting. Finance is forward-looking, while
accounting is backward looking. Finance is all-inclusive. Accounting refers to debits and
credits. Accountants are number crunchers”.
(Siegel and Sorensen, 1999, p 13)
Similarly, in the UK study it was found that “In some businesses...accountants are changing their
job titles, becoming ‘business analysts’ instead of ‘corporate controllers’” (Burns et al, 1999, p
29).
Changes in the Management Accounting Function
The Australian study by Birkett (1989) found that the purpose of management accounting was to
“provide management with the necessary key information as quickly and accurately as possible, to
enable appropriate action to be taken” (p 16). The 1994 ASCPA study concluded that “the
management accounting function was...value-adding participation in organisational processes of
strategy formulation, control, and change” (Barbera, 1996a, p 53). Significantly, the terms ‘value-
adding’, ‘organisational processes’, ‘strategy’, and ‘change’ had found their way into this
definition, reflecting the changes in the business environment and management philosophies, as
well as changes in the management accountant’s role. Some additional observations are made
when comparing the results of these two studies. Barbera, 1996a, p 53, found:
• management accountants’ users, customers or clients are more broadly defined to include
engineers, operations, marketing, HR personnel, cross functional teams and cellular work
teams.
• the roles of management accountants expanded to include providing expert advice, team
information systems, the design and control of performance measurement systems, providing
information, being teachers, guides, analysts, internal consultants, and interpreters and
managers of complexity.
The results of the studies conducted in the US and the UK largely mirror these findings. The IMA
study found that, increasingly, management accountants spent more time as “internal consultants
or business analysts”, work on “cross functional teams”, are “actively involved in decision
making” and “work closely with their ‘customers’ to provide the right information and help use
the information to make better decisions” (Russell et al, 1999, p 40). The term ‘change’ also
features in this study. Management accountants are increasingly assuming the role of change
agents. Russell et al (1999) claim that “Management accountants aren’t just managing change:
They are initiating change.” (p 41). Binnersley (1997) agrees that management accountants “need
to recognise and facilitate the changes taking place rather than resist them....they have the
expertise to apply rigorous measurement discipline, ability to develop systems and a unique view
across the business.” (p 36). Sharma (1998) concurs that management accountants will “be called
upon to operate as managers of business value, and agents of change” (p 24). As does Zarowin
(1997), who claims that “new accountants are change agents and more – much more” (p 38).
Although there is strong support for accountants’ proactive involvement in change, recent
evidence suggests that in Australia “the role is seen as one of support....rather than involving
proactivity on the part of management accountants” (Barbera, 1996a). Terms such as
“accommodate...adjust to...accept...and support” were used by respondents in relation to change.
Similarly, in relation to the management accountant’s involvement in strategy formulation,
Australian evidence suggests that it is only in a support role. Thus, it appears that Australian
management accountants are lagging behind their overseas counterparts in influencing change and
strategic direction in the organisation they work for.
In summary, international literature largely views the role or function of a management accountant
as:
• strategy formulator,
• change agent,
• information provider (or ‘knowledge worker’ the ‘hub’ for data),
• leader of and/or participator in cross functional teams,
• designer and manager of information systems,
• designer and controller of performance measurement systems,
• teacher, guide or educator, and
• interpreter and manager of complexity.
Changes in the Tasks Performed by Management Accountants
Sharma (1998) reports on research conducted by Chenhall and Langfield-Smith, involving a
survey of 140 manufacturing firms in Australia. A number of current and future trends in
management accounting tasks and activities were observed, and are presented in Table 1.
Table 1
Current and Future Trends in Management Accounting
Current Trends
High Emphasis: Budgeting for Planning and Control; Variance Analysis; Capital Budgeting; Return on Investment Techniques; Absorption Costing; Variable Costing
Moderate Emphasis: Balanced Scorecard; Customer Satisfaction Measurement
Low Emphasis: Activity-based Costing and Management; Shareholder Value Analysis; Benchmarking

Future Trends
High Emphasis: Budgeting for Planning and Control; Variance Analysis; Capital Budgeting; Return on Investment Techniques
Moderate Emphasis: Balanced Scorecard; Customer Satisfaction Measurement; Activity-based Costing and Management; Shareholder Value Analysis; Benchmarking; Absorption Costing; Variable Costing

(Adapted from Sharma, 1998, p 24)
It is apparent from the above analysis that respondents believed that some traditional management
accounting techniques such as budgeting, variance and ROI analyses will continue to be used and be
given a high level of emphasis in Australian manufacturing firms. Additionally, Sharma (1998)
reports that “management will continue to place emphasis on financial performance measures,
relative to non-financial measures” (p 24). The study also found that new techniques such as
strategic planning, product profitability analyses, long range forecasting, benchmarking and ABC
will assume increased importance in the future.
Sharma (1998) claims that in the future, management accounting will develop in areas involving
“a broad spectrum of cross-functional disciplines” (p 24), such as:
• Performance Management (eg developing key financial and non-financial indicators)
• Asset Management (eg. managing a product through its life cycle)
• Business Control Management (eg corporate governance and internal control frameworks)
• Environmental Management (eg accounting for the environment)
• Financial Management (eg activity based management)
• Intellectual Capital Management (eg measuring and managing employee satisfaction)
• Information Management (eg implementing and generating value from e-commerce and EDI)
• Quality Management (eg implementing TQM within an organisation and managing quality
improvements), and
• Strategic Management (eg value chain analysis for assessing competitive advantage).
The amount of emphasis placed on each of these areas depends on individual organisational
factors such as size, industry and individual business needs.
The other Australian study reported by Barbera (1996a) found the current tasks associated with
management accounting to be:
• participation in resource-related direction setting for an organisation (eg strategy formulation)
• support of organisational change processes
• contribution to the design, implementation and review of performance measurement and
control systems
• contribution to the development of performance-based, user-focused information systems.
(p 53).
Russell et al (1999) report on the findings of the IMA study in the US which found that compared
to five years ago, respondents spend more time performing the following tasks, and expect to
continue to focus primarily on these activities:
• Internal Consulting
• Long term, strategic planning
• Computer systems and operations
• Managing the accounting/finance function
• Process improvement
• Performing financial and economic analysis (p 41)
On the other hand, respondents spend less time on the following tasks, as compared to five years
ago, and expect to continue to spend less time on these activities:
• Accounting systems and financial reporting
• Consolidations
• Managing the accounting/finance function
• Accounting policy
• Short-term budgeting process
• Project accounting
• Compliance reporting
• Cost accounting systems
• Tax compliance ( p 42)
The UK study reported by Burns et al (1999) found that there had indeed been a change in the
tasks conducted by management accountants, however this change was primarily in the way
management accounting information was used “rather than change in management accounting
systems and techniques per se” (p 28). Traditional management accounting information continued
to be generated, but these results are interpreted in a broader context:
“Based on our observations, a key role for management accountants today is to place
financial numbers into a broader context and relate them to key non-financial measures.
The management accountant integrates the different perceptions of the business indicated
by the financial and non-financial measures, and integrates managers’ understandings of
their operating performance, the financial results and the strategic directions of the
Changes in Skills Required by Management Accountants
The Australian study by Birkett (1989) asked respondents to identify what the skill needs were at
that time and what the anticipated future skill needs were. Current skill needs identified were:
• computational, statistical, interpretative, analytical and financial information system design
skills.
Future skills identified were more progressive, eg:
• adapting management accounting technologies to new forms of manufacturing process, using
modern information technology in managing organisational change, using a deeper
understanding of organisational structuring, functioning and processes, sponsoring and
innovation. (Barbera, 1996b, p 67)
The 1994 Australian study by Barbera (1996b, p 67) found an increased emphasis on:
• personal skills – tolerance of ambiguity, ability to take leadership roles;
• interpersonal skills - to facilitate work in cross-functional teams, employee empowerment, and
the consultative/educative role.
• analytic/constructive skills – to facilitate the business analyst, change agent and strategy
formulator roles.
• an ability to engage in intuitive, synthetic and creative thinking.
Other attributes proposed by Barbera include proactivity and innovativeness and organisational
design skills.
The UK perspective reported by Burns et al (1999) suggests that “it is important to develop not
only management accountants’ financial knowledge, but also their broader personal skills and
commercial capabilities” (p 29). The paper suggests that accountants should have an
understanding of the broader commercial environment (eg some marketing knowledge) and the
business they work for, and the ability to work closely with other members of the management
team – ‘hybrid accountants’.
In the US study, respondents were asked to identify the most important KSAs (Knowledge, Skills
and Abilities) necessary in the management accounting function. The results point to the
following, clearly mirroring the other studies already reviewed:
• Communication (oral, written and presentation) skills
• Ability to work in a team
• Analytical skills
• Solid understanding of accounting
• An understanding of how a business functions. (Russell et al, 1999, p 41)
Computer skills were identified as the most important skill that respondents had learned in the past
five years. Other skills mentioned include:
• data modelling,
• making forecasts and projections,
• developing assumptions and criteria,
• analysing processes
• being adaptable and not resistant to change
• being strategic and forward looking.
Finally, Zarowin (1997) suggests that accountants must possess skills in persuasion and
facilitation, as well as good presentation skills to be an effective change agent. In addition,
accountants should have more foresight, be less backward looking, and more risk taking.
Research Questions
From the predominance of literature that addresses the three issues discussed above, a series of
research questions were developed. These are outlined as follows:
1. What do management accountants see as the current and future functions of management
accounting?
2. What do management accountants see as the current and future tasks/activities involved with
being a management accountant?
• Which of these do they currently perform? If they do not perform these, why not?
3. What do management accountants see as the current and future skills required to perform these tasks?
• Do they believe that they currently have these skills?
• If not, which skills would they like to acquire or further develop?
4. Do management accountants think there has been change in the functions, tasks and skills of
management accounting in the last 5 years?
• What is the likely rate of future change?
• What do management accountants think have been the major drivers/triggers of that
change?
METHOD
The results were obtained using a self-administered mail questionnaire, a copy of which is
provided in Appendix A. In order to maximise responses, the surveys were sent out with a letter
from CPA Australia encouraging respondents to complete the questions and those returning their
survey by a specified date were put into a prize draw. Surveys were also handed out in packets
provided at the CPA Australia State Congress in 2001.
Sample
The sample chosen was South Australian members of one of the two professional accounting
bodies (CPA Australia) who are either working in, or have a professional interest in, management
accounting. This was determined through the use of a CPA Australia mailing list and was
confirmed with a survey question.
Limitations
Apart from the usual limitations associated with survey research (particularly non-response bias
and desirability bias), the fact that the survey was only sent to South Australian members of the
professional accounting bodies is a limitation. As South Australian small business is largely
manufacturing, however, this is not considered to cause serious differences in the results if compared
to other States. Generalisation of the results however, must be undertaken with caution.
RESULTS
While the response rate was quite low, in total 161 individuals responded to the questionnaire,
providing enough data to undertake some preliminary investigation.
The first question asked respondents whether they were in a management accounting role, and if
not, whether they had a professional interest in management accounting. Of the 161, 1 indicated
that they were neither a management accountant nor interested in management accounting, and 3
declined to answer this question. 108 are currently working in a management accounting role
(67%). 49 are not working as a management accountant, but have a professional interest in the
area (30%). Tests were run to determine whether there was any significant difference between
these two groups. Each question was compared using the Mann-Whitney test and no significant
differences were found at the 5% level for 106 of the 125 variables tested.
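The group comparison described above is the Mann-Whitney (Wilcoxon rank-sum) test; a self-contained sketch of it in Python, using the normal approximation (this is my own illustration, not the authors' analysis code, and it omits the tie correction to the variance):

```python
import math

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test via the normal approximation.
    A sketch only: no tie correction to the variance, and the
    approximation assumes moderately large samples."""
    n1, n2 = len(x), len(y)
    combined = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(combined):                 # assign average ranks to ties
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1           # ranks are 1-based
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg_rank
        i = j + 1
    r1 = sum(ranks[:n1])                     # rank sum of the first sample
    u1 = r1 - n1 * (n1 + 1) / 2
    mean_u = n1 * n2 / 2
    sd_u = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mean_u) / sd_u
    p = math.erfc(abs(z) / math.sqrt(2))     # two-sided p-value
    return u1, p

u, p = mann_whitney_u([1, 2, 3, 4, 5], [10, 11, 12, 13, 14])
print(u, p < 0.05)  # 0.0 True — clearly separated groups differ at the 5% level
```

A difference is declared "significant at the 5% level", as in the survey analysis, when the returned p-value falls below 0.05.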
Where a difference was found, it appears that the difference can be explained, not by whether the
respondents were in a management accounting role, but by their age and/or how long they have
been working in the area of accounting. Variables where a difference appeared include:
• To whom they provide information.
• Age group, educational qualification obtained and the number of years they have been an
accountant.
• Their rating given to the position of a management accountant on the importance of
teamwork skills, forecasting and problem solving.
• Their rating given to themselves on the importance of problem solving, technical skills,
decision making skills, dealing with change and strategic skills.
The tests were re-run excluding those respondents who had been an accountant for more than 20
years and those who were over 55 years of age. All but two of the variables that were previously
significant were no longer significant1. Notwithstanding this, as very few significant differences
were found, the entire sample was used for the analysis presented in the following sections.
Sample Demographics
The complete sample of surveys returned provided an excellent representation of various sizes and
industries as can be seen in Table 2. Both centralised and decentralised firms are also evenly
represented.
Large organisations (those with greater than 1000 employees) were most common in the sample
(36%), however, all size groups, measured in terms of number of employees were represented.
Most respondents were from manufacturing businesses (24%) or public sector organisations
(20.5%). Within the manufacturing organisations, a variety of industries were represented (see
footnote to table 2).
Table 2
Size and Industry Classifications
Size Groups (employees), N (%):
0-100: 47 (29.2%)
101-500: 33 (20.5%)
501-1000: 20 (12.4%)
>1000: 58 (36.0%)

Main Industry Groups, N (%):
Manufacturing*: 39 (24.2%)
Public Service: 33 (20.5%)
Financial Services: 16 (9.9%)
Education: 9 (5.6%)
Community Services: 9 (5.6%)

* motor vehicles; electronics; food, wine, beverages; printing; building; chemicals; optical; batteries; paper; steel.
Most respondents described their position as either a manager/supervisor (34%) or an accountant
(22%). Only 13% described themselves as solely a management accountant. Others included
financial controller (11%) and consultant (2.5%).
¹ Three new variables did become significant (the rating for whether the position of management accountant requires analytical and data modelling skills, and whether increased competition is changing the role of management accountants).
The majority of respondents were male (65%) and aged between 26 and 45 (66%). Most
diploma and 12% holding a Masters degree. 66.5% of respondents have CPA status. 64.6% of
respondents had only been in their current position for less than 5 years, however the number of
years working as an accountant was spread fairly evenly between less than 5 (18%), 5-10 (25%)
and 11-20 (30%). Even more than 20 years as an accountant was indicated by 20.5% of
respondents.
Research Questions
1. Current and future functions of management accounting.
When asked to state the current primary function of management accounting, reporting and
information provision was indicated by 102 respondents (63.4%), followed by strategy, decision
making, forecasting and planning (15 respondents, 9.3%) and budgeting and costing (11
respondents, 6.8%).
The future functions of management accounting were considered to be strategy, decision making,
forecasting and planning (36%, 58 respondents) followed by reporting and information provision
at 32.3% (52 respondents). Performance measurement was also mentioned by 7 respondents
(4.3%).
Hence, the two major functions of the management accountant of the future appear to be
providing information and dealing with strategy and planning; this survey confirms prior research
that the relative emphasis on the two is changing (reversing).
2. Current and future tasks/activities involved with being a management accountant.
The types of management accounting techniques that were most commonly used by the sample of
management accountants were those not traditionally related to costing as can be seen in Table 3.
Financial tools such as CVP, residual income and variable costing were used by less than one third
of the respondents. Moreover, contemporary costing tools such as life cycle costing and target
costing had not been adopted by many organisations. The major emphasis indicated by Table 3 is
on budgeting and strategy.
Table 3
Techniques and Performance Measures Used
Techniques % Performance Measures: %
Absorption costing 32.3 Balanced scorecards 27.3
Activity Based Costing 31.1 Customer satisfaction 54.7
Benchmarking 57.4 Divisional profits 61.5
Capital budgeting 78.3 Non financial measures 65.8
Cash flow budgets 86.3 Residual Income 14.3
CVP analysis 20.5 Return on Investment 46.6
Life cycle costing 9.3
Operating budget 90.1
Profitability analysis 62.1
Shareholder value analysis 22.4
Strategic planning 81.4
Target costing 6.8
Variable costing 29.2
The single most critical work activity for a management accountant, however, was considered to be
‘accounting systems and financial reporting’ by almost 20% of the sample, and ‘managing the
accounting/finance function’ by almost 16%. When provided with a list of activities and asked
whether their work in each area had increased, decreased or remained the same, responses varied
and are summarised in Table 4. Only five activities were considered by the majority of
respondents to have increased; all other activities were considered by the majority to have
decreased. A number of respondents chose not to answer this question, hence the low percentage
figures in the table.
Table 4
Activities Undertaken by Management Accountants
Activity Increased Decreased
% %
Mergers, acquisitions and divestments 14.9
External financing 16.8
Capital budgeting 27.3
Investment of funds 22.4
Credit collection 19.3
Long term, strategic planning 37.3
Process improvement 49.1
Customer and product profitability 20.5
Accounting systems & financial reporting 37.9
Short term budgeting process 31.7
Perform financial and economic analysis 39.1
Computer systems and operations 44.1
Performance evaluation 38.9
Project accounting 23.0
Internal consulting 34.2
Tax planning and strategy 16.8
Cost accounting systems 22.4
Quality systems and control 21.1
Risk management 23.0
Educating the organisation 42.2
Managing the accounting/finance function 32.9
Compliance reporting 36.0
Finally, when asked what role in the organisation the management accountant primarily plays, the
most often cited roles were information provider (83.2%) and internal consultant or advisor (62.1%).
3. Current and future skills required to perform these tasks/activities.
Respondents were asked to rate a range of skills on a scale of 1 to 5 according to how necessary
they believed each skill is to the position of a management accountant, and then to rate themselves
on the same scale (1 = not necessary/poor, 5 = very necessary/excellent).
Table 5
Mean Rating of Skills Necessary for Management Accountants
Necessary Self
for position Rating
Mean Mean
Problem solving ability* 4.6 4.2
Broad understanding of day to day operations* 4.5 4.0
Interpersonal skills* 4.5 4.1
Analytical skills* 4.5 4.2
Ability to deal with change* 4.4 4.2
Communication and presentation skills* 4.3 4.1
Ability to work in a team 4.3 4.4
Computer skills* 4.3 4.1
Strategic and forward-looking* 4.2 3.8
Decision making skills* 4.1 3.9
Forecasting and projection skills 3.9 3.7
Technical accounting skills 3.8 3.8
Creativity* 3.7 3.5
Data modelling skills 3.4 3.3
Those variables where the self rating was significantly different (see footnote 2) from the rating
for necessity are marked with an asterisk. Those that showed no difference were mainly technical
skills such as modelling and forecasting, as well as teamwork and adaptability. The lack of
difference for teamwork may be accounted for by the fact that most respondents stated that they
currently worked in an accounting team (65%) and often worked in a cross-functional team (49%),
and hence had developed their team-working skills.
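The paired comparison behind Table 5 can be sketched in a few lines. The code below is an illustrative reconstruction only: the ratings are invented, not the study's data, and the function is a minimal pure-Python version of the Wilcoxon signed-ranks statistic named in footnote 2 (zero differences dropped, tied absolute differences given mid-ranks).

```python
def wilcoxon_signed_ranks(xs, ys):
    """Return (W_plus, W_minus, n) for paired samples xs, ys.

    Zero differences are dropped; tied |differences| receive the mean
    (mid-) rank, as in the standard formulation of the test.
    """
    diffs = [x - y for x, y in zip(xs, ys) if x != y]
    n = len(diffs)
    # Rank indices by absolute difference, averaging ranks across ties.
    ordered = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[ordered[j + 1]]) == abs(diffs[ordered[i]]):
            j += 1
        mid = (i + j) / 2 + 1          # mean of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[ordered[k]] = mid
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return w_plus, w_minus, n

# Invented paired ratings on the survey's 1-5 scale:
# "necessary for position" versus the matching self rating.
necessity = [5, 4, 5, 4, 5, 4, 3, 5, 4, 5]
self_rate = [4, 4, 4, 3, 4, 4, 3, 4, 4, 4]
print(wilcoxon_signed_ranks(necessity, self_rate))  # → (21.0, 0, 6)
```

In a real analysis W+ (or W−) would then be compared against the test's critical value at the 5% level, which is what the paper's asterisks in Table 5 report.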
4. Changes in the functions, tasks and skills of management accounting.
Most respondents (over 50%) considered that management accounting is currently in a state of
change, has changed significantly over the past five years and will continue to change. They also
considered that the role of the management accountant has changed, in that management accountants
now have more input into organisational decisions, and that in the future there will be more need for change in
2. Using the Wilcoxon signed-ranks test, at the 5% level of significance.
The roles that respondents see management accountants playing in the future include strategy
formulator and consultant. They see themselves as having more involvement in the design of
information systems, and in the design and control of performance measurement systems.
The major reasons for the changes are the introduction of e-commerce, advances in information
technology, increased competition, changes in organisation structure and changed performance
measures (including customer relations).
CONCLUSIONS
The results of this study show quite clearly that those working as management accountants
perceive their role as a changing one, and point to evidence that the change is mainly in the tasks
that management accountants must undertake. Much more emphasis is being placed on strategy
and decision-making roles than on the more traditional areas of costing and financial analysis.
The one traditional area to remain high on the list of important areas for management accounting is
budgeting.
Interestingly, however, many of the various contemporary techniques that have been developed in
response to the changing requirements of management accounting were not seen by the
respondents to this study as being particularly useful. Activity Based Costing, Balanced
Scorecard, Economic Value Added and Benchmarking were all cited as not currently used and
unlikely to be used in the future. Performance indicators were one of the few techniques
respondents felt might be useful.
The major implication of this study is that the current ‘solutions’ to management accountants’ need
for relevant skills, techniques and practices are apparently not useful. The results support most
prior studies in suggesting that more emphasis needs to be placed on developing personal rather
than technical skills: management accountants need skills in communication, analysis, creativity
and adaptability. It seems that there is a need for more emphasis on the ‘management’ than on the
‘accounting’. The challenge for the accounting profession is to find ways of developing such
characteristics.
# Lab Notes for a Scientific Revolution (Physics)
## May 7, 2009
### Inferring Electrodynamic Gauge Theory from General Coordinate Invariance
I am presently working on a paper to show how electrodynamic gauge theory can be directly connected to generally-covariant gravitational theory. In essence, we show how there is a naturally occurring gauge parameter in gravitational geometrodynamics which can be directly connected with the gauge parameter used in electrodynamics, while at the same time local gauge transformations acting on fermion wavefunctions may be synonymously described as general coordinate transformations acting on those same fermion wavefunctions.
Inferring Electrodynamic Gauge Theory from General Coordinate Invariance
If you check out sci.physics.foundations and sci.physics.research, you will see the rather busy path which I have taken over the last month to go from baryons and confinement to studying the Heisenberg equation of motion and Ehrenfest’s theorem, to realizing that there was an issue of interest in the way that Fourier kernels behave under general coordinate transformations given that a general coordinate x^u is not itself a generally-covariant four vector. Each step was a “drilling down” to get at underlying foundational issues, and this paper arrives at the most basic, fundamental underlying level.
Jay
## April 13, 2008
### Lab Note 5: The Central Role in Physics, of the Dirac Anticommutator g^uv=(1/2){gamma^u,gamma^v}
Filed under: General Relativity,Gravitation,Physics,Science — Jay R. Yablon @ 11:38 pm
I would like to take a break from my current work on Kaluza-Klein, and focus on the central importance to physics of the Dirac anticommutator relationship $\eta ^{\mu \nu } \equiv {\tfrac{1}{2}} \left(\gamma ^{\mu } \gamma ^{\nu } +\gamma ^{\nu } \gamma ^{\mu } \right)\equiv {\tfrac{1}{2}} \left\{\gamma ^{\mu } ,\gamma ^{\nu } \right\}$, when generalized to a non-zero gravitational field in the form $g^{\mu \nu } \equiv {\tfrac{1}{2}} \left(\Gamma ^{\mu } \Gamma ^{\nu } +\Gamma ^{\nu } \Gamma ^{\mu } \right)\equiv {\tfrac{1}{2}} \left\{\Gamma ^{\mu } ,\Gamma ^{\nu } \right\}$. In particular, when $g^{\mu \nu } \ne \eta ^{\mu \nu }$ but rather include a gravitational field $g^{\mu \nu } (x)=\eta ^{\mu \nu } +\kappa h^{\mu \nu } (x)$, then also the $\Gamma ^{\mu } \ne \gamma ^{\mu }$, but rather include a “square root” gravitational field $h^{\mu } (x)$ which may be defined as $\Gamma ^{\mu } (x)\equiv \gamma ^{\mu } +\kappa h^{\mu } (x)$. Combining all the foregoing, this means that $\kappa h^{\mu \nu } \equiv {\tfrac{1}{2}} \kappa \left[h^{\mu } \gamma ^{\nu } +\gamma ^{\mu } h^{\nu } +h^{\nu } \gamma ^{\mu } +\gamma ^{\nu } h^{\mu } \right]+{\tfrac{1}{2}} \kappa ^{2} \left[h^{\mu } h^{\nu } +h^{\nu } h^{\mu } \right]$.
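The flat-space identity $\eta ^{\mu \nu } \equiv {\tfrac{1}{2}} \left\{\gamma ^{\mu } ,\gamma ^{\nu } \right\}$ at the heart of this note can be checked numerically. The sketch below is mine, not part of the paper, and it assumes the standard Dirac representation of the gamma matrices (the post does not fix a representation; any choice satisfying the Clifford algebra works equally well). It uses no external libraries.

```python
def matmul(a, b):
    """4x4 complex matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def half_anticommutator(a, b):
    """(1/2){a, b} for 4x4 matrices."""
    ab, ba = matmul(a, b), matmul(b, a)
    return [[0.5 * (ab[i][j] + ba[i][j]) for j in range(4)] for i in range(4)]

def block(a, b, c, d):
    """Assemble a 4x4 matrix from 2x2 blocks [[a, b], [c, d]]."""
    return [a[i] + b[i] for i in range(2)] + [c[i] + d[i] for i in range(2)]

I2 = [[1, 0], [0, 1]]
ZERO2 = [[0, 0], [0, 0]]
NEG = lambda m: [[-x for x in row] for row in m]
SIGMA = [  # Pauli matrices sigma^1, sigma^2, sigma^3
    [[0, 1], [1, 0]],
    [[0, -1j], [1j, 0]],
    [[1, 0], [0, -1]],
]

# Dirac representation: gamma^0 = diag(I, -I), gamma^i = [[0, s], [-s, 0]].
gamma = [block(I2, ZERO2, ZERO2, NEG(I2))]
gamma += [block(ZERO2, s, NEG(s), ZERO2) for s in SIGMA]

eta = [1, -1, -1, -1]  # Minkowski metric, signature (+,-,-,-)

# Verify (1/2){gamma^mu, gamma^nu} = eta^{mu nu} * Identity, entry by entry.
for mu in range(4):
    for nu in range(4):
        h = half_anticommutator(gamma[mu], gamma[nu])
        expected = eta[mu] if mu == nu else 0
        for i in range(4):
            for j in range(4):
                want = expected if i == j else 0
                assert abs(h[i][j] - want) < 1e-12
print("1/2 {gamma^mu, gamma^nu} = eta^{mu nu} I verified")
```

The same check applied to the dressed vertices $\Gamma^{\mu} = \gamma^{\mu} + \kappa h^{\mu}$ would return $g^{\mu\nu}$ rather than $\eta^{\mu\nu}$, which is exactly the generalization the post describes.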
We also note that in perturbation theory, non-divergent perturbative effects are, in the end, captured in a correction to the vertex factor given by $\overline{{\rm u}}(p)\gamma ^{\mu } {\rm u(p)}\to \overline{{\rm u}}(p)\left(\gamma ^{\mu } +\Lambda ^{\mu } \right){\rm u(p)}$ operating on a Dirac spinor ${\rm u(p)}$. That is, the bare vertex $\gamma ^{\mu }$ becomes the dressed vertex $\gamma ^{\mu } +\Lambda ^{\mu }$. By then associating the perturbative $\Lambda ^{\mu }$ with $\kappa h^{\mu }$ just specified, we raise the possibility that gravitational and perturbative descriptions of nature may in some way be interchangeable. More to the point: when we consider perturbative effects in particle physics, we may well be considering gravitational effects without knowing that this is what we are doing. The inestimable benefit of gravitational theory over perturbation theory is that it is non-linear and exact. The inestimable benefit of perturbation theory over gravitational theory is that we know something about how to achieve its renormalization. Perhaps by developing this link further via the vitally-central physical relationship $g^{\mu \nu } \equiv {\tfrac{1}{2}} \left\{\Gamma ^{\mu } ,\Gamma ^{\nu } \right\}$, we can infuse the exact, non-linear character of gravitational theory into perturbation theory, and the renormalizability of perturbation theory into gravitational theory. Recognizing that “Lab Notes” is in the nature of a scientific diary, this, in any event, is the starting point for this lab note.
Now, there are two main directions in which to exploit the connection $g^{\mu \nu } \equiv {\tfrac{1}{2}} \left\{\Gamma ^{\mu } ,\Gamma ^{\nu } \right\}$, and both need to be considered. First, we may start with the metric tensor $g_{\mu \nu }$ from a known, exact solution to Einstein’s equation, calculate its associated $\Gamma ^{\mu }$, and then employ $\Gamma ^{\mu }$ in the Dirac equation, in the form $0=\left(\Gamma ^{\mu } \left(i\partial _{\mu } +eA_{\mu } \right)-m\right)\psi$. Using the Schwarzschild solution as the basis, I have done this in detail, in a paper linked at Magnetic Moment Anomalies of the Charged Leptons. If you would like an “Executive Summary” of this paper, you may obtain this at What the Magnetic Moment Anomaly May Tell Us About Planck-Scale Physics. What is especially noteworthy, is that the magnetic moment anomaly can perhaps be understood as a symptom of gravitational effects near the Planck scale.
I actually wrote the above detailed paper in September, 2006, but never posted it anywhere, because as soon as it was written, I went off into writing the related ArXiV paper at http://arxiv.org/abs/hep-ph/0610377 titled Ward-Takahashi Identities, Magnetic Anomalies, and the Anticommutation Properties of the Fermion-Boson Vertex. This paper illustrates the second direction in which to exploit the connection $g^{\mu \nu } \equiv {\tfrac{1}{2}} \left\{\Gamma ^{\mu } ,\Gamma ^{\nu } \right\}$. Here, we start with a known $\Gamma ^{\mu }$, then use $g^{\mu \nu } \equiv {\tfrac{1}{2}} \left\{\Gamma ^{\mu } ,\Gamma ^{\nu } \right\}$ to obtain the associated $g^{\mu \nu }$, and then use this as a metric tensor in the usual way. In this paper, in particular, we start with the perturbative vertex factor $\Gamma ^{\mu } \equiv \gamma ^{\mu } +\Lambda ^{\mu } =F_{1} \gamma ^{\mu } +{\tfrac{1}{2}} F_{2} i\sigma ^{\mu \nu } (p'-p)_{\nu }$ from equation (11.3.29) of Weinberg’s definitive treatise The Quantum Theory of Fields, then we calculate the anticommutators $g^{\mu \nu } \equiv {\tfrac{1}{2}} \left\{\Gamma ^{\mu } ,\Gamma ^{\nu } \right\}$, and then we use the $g^{\mu \nu }$ as a metric tensor. What is fascinating about this approach, is that the $\Gamma ^{\mu } (p_{\mu } )$ are specified in momentum space, rather than spacetime. This means that $g^{\mu \nu } =g^{\mu \nu } (p_{\mu } )$ deduced therefrom define a non-Euclidean momentum space, rather than a non-Euclidean spacetime. This may open up a whole new branch of physics dealing with — I’ll say it again — Non-Euclidean Momentum Space. As we know from Heisenberg, spacetime is conjugate to momentum space, $\left[x^{\mu } ,p^{\nu } \right]=i\hbar \eta ^{\mu \nu }$.
The central result of this paper, arrived at via the Ward-Takahashi Identities which are central to renormalization, is that interaction vertexes are a measure of curvature in momentum space, and the strength of the interaction at a vertex is proportional to the momentum space curvature, see Figures 1 and 2. This may place particle physics onto a firm geometric footing, but rooted in the geometry of momentum space.
What I have not yet gotten to, is the question of how to use $\left[x^{\mu } ,p^{\nu } \right]=i\hbar \eta ^{\mu \nu }$, as between a non-Euclidean spacetime and a non-Euclidean momentum space, because in the real world, we have both. That is a project for the remaining free time in my day, between 3AM and 5AM. 😉
As always, these are lab notes, representing “work in progress.” I welcome comments and contributions, as always.
Jay.
### Thesis Defense of the Kaluza-Klein, Intrinsic Spin Hypothesis — EARLY DRAFT
I have been engaged in a number of Usenet and private discussions about the paper Intrinsic Spin and the Kaluza-Klein Fifth Dimension, Rev 3.0 which I posted here on this blog on March 30.
A number of critiques have been raised, which you can see if you check out the recent Usenet threads I started related to intrinsic spin under the heading “Query about intrinsic verus [sic] orbital angular momentum,” over at sci.physics.foundations, sci.physics.relativity and sci.physics.research. These are among the “links of interest” provided in the right-hand pane of this weblog.
I believe that these critiques can be overcome, and that this hypothesis relating to Kaluza-Klein and intrinsic spin and the spatial isotropy of the square of the spin will survive and be demonstrated, ultimately, to be in accord with the physical reality of nature.
I have begun a new paper which is linked at: Thesis Defense of the Kaluza-Klein, Intrinsic Spin Hypothesis, Rev 1.0, which will respond thoroughly and systematically to the various critiques. What is here so far is the introductory groundwork. But, I would appreciate continued feedback as this development continues.
Note that the links within the PDF file unfortunately do not work, so to get the intrinsic spin paper, you need to go to Intrinsic Spin and the Kaluza-Klein Fifth Dimension, Rev 3.0. Also, to get Wheeler’s paper which is referenced, go to Wheeler Geometrodynamics.
## March 30, 2008
### Revised Paper on Kaluza-Klein and Intrinsic Spin, Based on Spatial Isotropy
I have now prepared an updated revision of a paper demonstrating how the compact fifth dimension of Kaluza-Klein is responsible for the observed intrinsic spin of the charged leptons and their antiparticles. The more global, underlying view, is that all intrinsic spins originate from motion through the curled up, compact $x^5$ dimension. This latest draft is linked at:
Intrinsic Spin and the Kaluza-Klein Fifth Dimension, Rev 3.0
Thanks to some very helpful critique from Daryl M. on a thread at sci.physics.relativity, particularly post #2, I have entirely revamped section 5, which is the heart of the paper wherein we establish the existence of intrinsic spin in $x^5$, on the basis of “fitting” oscillations around a $4\pi$ loop of the compact fifth dimension (this is to maintain not only orientation but also entanglement “version”). From this approach, quantization of angular momentum in $x^5$ naturally emerges; it also emerges that the intrinsic $x^5$ angular momentum in the ground state is given by $(1/2) \hbar$.
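For concreteness, the fitting condition just described can be written out. This is my own sketch of the quantization step, using the de Broglie relation $p_5 = h/\lambda$; it is not a reproduction of the paper's full derivation:

```latex
n\lambda = 4\pi R
\;\Rightarrow\;
p_{5} = \frac{h}{\lambda} = \frac{n h}{4\pi R} = \frac{n\hbar}{2R}
\;\Rightarrow\;
J^{5} = p_{5}\, R = \tfrac{n}{2}\hbar , \qquad n = 1, 2, 3, \ldots
```

So the ground state $n=1$ carries exactly the $(1/2)\hbar$ quoted above, and the factor of 2 relative to the usual Bohr-type condition $n\lambda = 2\pi R$ is what turns integer into half-integer angular momentum.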
In contrast to my earlier papers where I conjectured that the intrinsic spin in $x^5$ projects out into the three ordinary space dimensions by virtue of its orthogonal orientation relative to the $x^5$ plane, which was critiqued in several Usenet posts, I now have a much more direct explanation of how the intrinsic spin projects out of $x^5$ to where we observe it.
In particular, we recognize that one of the objections sometimes voiced with regards to a compactified fifth dimension is the question: how does one “bias” the vacuum toward one of four space dimensions, over the other three, by making that dimension compact? This was at the root of some Usenet objections also raised earlier by DRL.
In this draft — and I think this will overcome many issues — we require that at least as regards intrinsic angular momentum, the square of the $J^5 = (1/2) \hbar$ obtained for the intrinsic angular momentum in $x^5$, must be isotropically shared by all four space dimensions. That is, we require that there is to be no “bias” toward any of the four space dimensions insofar as squared intrinsic spin is concerned. Because $J^5 = (1/2) \hbar$ emanates naturally from the five dimensional geometry, we know immediately that $\left(J^{5} \right)^{2} ={\textstyle\frac{1}{4}} \hbar ^{2}$, and then, by the isotropic requirement, that $\left(J^{1} \right)^{2} =\left(J^{2} \right)^{2} =\left(J^{3} \right)^{2} ={\textstyle\frac{1}{4}} \hbar ^{2}$ as well. We then arrive directly at the Casimir operator $J^{2} =\left(J^{1} \right)^{2} +\left(J^{2} \right)^{2} +\left(J^{3} \right)^{2} ={\textstyle\frac{3}{4}} \hbar ^{2}$ in the usual three space dimensions, and from there, continue forward deductively.
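The step from the Casimir value to spin one-half, left implicit above, is just the standard eigenvalue identification (my addition, not the paper's text):

```latex
J^{2} = s(s+1)\hbar^{2} = \tfrac{3}{4}\hbar^{2}
\;\Rightarrow\;
s^{2} + s - \tfrac{3}{4} = 0
\;\Rightarrow\;
s = \tfrac{1}{2}
```

(taking the positive root), so the isotropic sharing of $(J^{5})^{2}$ across the four space dimensions reproduces precisely the spin-$\tfrac{1}{2}$ Casimir eigenvalue.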
For those who have followed this development right along, this means in the simplest terms, that rather than use “orthogonality” to get the intrinsic spin out of $x^5$ and into ordinary space, I am instead using “isotropy.”
There is also a new section 8 on positrons and Dirac’s equations which has not been posted before, and I have made other editorial changes throughout the rest of the paper.
## March 29, 2008
### Stepping Back from Kaluza-Klein: Planned Revisions
Those who have followed my Weblog are aware that I have been putting in a lot of work on Kaluza-Klein theory. This post is to step back from the canvas, lay out the overall picture of what I am pursuing, and summarize what I plan at present to change or correct in the coming days and weeks. This is in keeping with the concept of this Weblog as “Lab Notes,” or as a public “scientific diary.”
There are really two main aspects to this Kaluza-Klein work:
First, generally, I have found that 5-D Kaluza-Klein theory is most simply approached by starting with (classical) Lorentz force motion, and requiring the Lorentz force motion to be along the geodesics of the five dimensional geometry. I am far from the first person who recognizes that the Lorentz force can be represented as geodesic motion in a 5-D model. But I have found, by starting with the Lorentz force, and by requiring the 5-D electromagnetic field strength tensor to be fully antisymmetric, that all of the many “special assumptions” which are often employed in Kaluza-Klein theory emerge very naturally on a completely deductive basis, with no further assumptions required. I also believe that this approach leads to what are perhaps some new results, especially insofar as the Maxwell tensor is concerned, and insofar as QED may be considered in a non-linear context. The latest draft of this global work on Kaluza-Klein may be seen at Kaluza-Klein Theory and Lorentz Force Geodesics.
Second, specifically, within this broader context, is the hypothesis that the fifth-dimensional “curled” motion is the direct mainspring of intrinsic spin. More than anything else, the resistance by many physicists to Kaluza-Klein and higher-dimensional theories rests on the simple fact that this fifth dimension — and any other higher dimensions — are thought to not be directly observable. In simplest form, “too small” is the usual reason given for this. Thus, if it should become possible to sustain the hypothesis that intrinsic spin is a directly-observable and universally-pervasive outgrowth of the fifth dimension, this would revitalize Kaluza-Klein as a legitimate and not accidental union of gravitation and electrodynamics, and at the same time lend credence to the higher-dimensional efforts also being undertaken by many researchers. The latest draft paper developing this specific line of inquiry is at Intrinsic Spin and the Kaluza-Klein Fifth Dimension.
Now, the general paper at Kaluza-Klein Theory and Lorentz Force Geodesics is very much a work in progress and there are things in this that I know need to be fixed or changed. If you should review this, please keep in mind the following caveats:
First, sections 1-4 are superseded by the work at Intrinsic Spin and the Kaluza-Klein Fifth Dimension and have not been updated recently.
Second, sections 5-7 are still largely OK, with some minor changes envisioned. Especially, I intend to derive the “restriction” $\Gamma^u_{55}=0$ from $F^{{\rm M} {\rm N} } =-F^{{\rm N} {\rm M} }$ rather than impose it as an ad hoc condition.
Third, sections 8-11 need some reworking, and specifically: a) I want to start with an integration over the five-dimensional volume with a gravitational constant $G_{(5)}$ suited thereto, and relate this to the four dimensional integrals that are there at present; and b) I have serious misgivings about using a non-symmetric (torsion) energy tensor and am inclined to redevelop this to impose symmetry on the energy tensor — or at least to explore torsion versus no torsion in a way that might lead to an experimental test. If we impose symmetry on the energy tensor, then the Maxwell tensor will be the $J^{\mu } =0$ special case of a broader tensor which includes a $J^\mu A^\nu + J^\nu A^\mu$ term and which applies, e.g., to energy flux densities (Poynting components) $T^{0k}$, k=1,2,3 for “waves” of large numbers of electrons.
Fourth, I am content with section 12, and expect it will survive the next draft largely intact. Especially important is the covariant derivatives of the electrodynamic potentials being related to the ordinary derivatives of the gravitational potentials, which means that the way in which people often relate electrodynamic potentials to gravitational potentials in Kaluza-Klein theory is valid only in the linear approximation. Importantly, this gives us a lever in the opposite direction, into non-linear electrodynamics.
Fifth, I expect the development of non-linear QED in section 13 to survive the next draft, but for the fact that the R=0 starting point will be removed as a consequence of my enforcing a symmetric energy tensor in sections 8-11. Just take out all the “R=0” terms and leave the rest of the equation alone, and everything else is more or less intact.
Finally, the experiment in Section 15, if it stays, would be an experiment to test a symmetric, torsionless energy tensor against a non-symmetric energy tensor with torsion. (Basically, metric theory versus Cartan theory.) This is more of a “back of the envelope” section at present, but I do want to pursue specifying an experiment that will test the possible energy tensors which are available from variational principles via this Kaluza-Klein theory.
The paper at Intrinsic Spin and the Kaluza-Klein Fifth Dimension dealing specifically with the intrinsic spin hypothesis is also a work in progress, and at this time, I envision the following:
First, I will in a forthcoming draft explore positrons as well as electrons. In compactified Kaluza-Klein, these exhibit opposite motions through $x^5$, and by developing the positron further, we can move from the Pauli spin matrices toward the Dirac $\gamma^\mu$ and Dirac’s equation.
Second, I have been engaged in some good discussion with my friend Daryl M. on a thread at sci.physics.relativity. Though he believes I am “barking up the wrong tree,” he has provided a number of helpful comments, and especially at the bottom of post #2 where he discusses quantization in the fifth dimension using a wavelength $n \lambda = 2 \pi R$. (I actually think that for fermions, one has to consider orientation / entanglement issues, and so to secure the correct “version,” one should use $n \lambda = 4 \pi R$ which introduces a factor of 2 which then can be turned into a half-integer spin.) I am presently playing with some calculations based on this approach, which you will recognize as a throwback to the old Bohr models of the atom.
Third, this work of course uses $x^5 = R\phi$ to define the compact fifth dimension. However, in obtaining $dx^5$, I have taken $R$ to be a fixed, constant radius. In light of considering a wavelength $n \lambda = 4 \pi R$ per above, I believe it important to consider variations in $R$ rather than fixed $R$, and so, to employ $dx^5 = Rd\phi + \phi dR$.
There will likely be other changes along the way, but these are the ones which are most apparent to me at present. I hope this gives you some perspective on where this “work in flux” is at, and where it may be headed.
Thanks for tuning in!
Jay.
## March 6, 2008
### Electrodynamic Potentials and Non-Linear QED in Kaluza-Klein
I have now added new sections 12, 13 and 14 to the Kaluza-Klein paper earlier posted. These sections examine the relationship between the electrodynamic potentials and the gravitational potentials, and the connection to QED. You may view this all at:
Electrodynamic Potentials and Non-Linear QED
Most significantly, these three new sections not only connect to the QED Lagrangian, but they show how the familiar QED Lagrangian density
${\rm L}_{QED} =-A^{\beta } J_{\beta } -{\textstyle\frac{1}{4}} F^{\sigma \tau } F_{\sigma \tau }$
emerges in the linear approximation of 5-dimensional Kaluza-Klein gravitational theory.
Then, we go in the opposite direction, to show the QED Lagrangian density / action for non-linear theory, based on the full-blown apparatus of gravitational theory.
Expressed in terms of the electrodynamic field strength $F^{\sigma \tau }$ and currents $J_{\beta }$, this non-linear result is:
${\rm L}_{QED} =0={\textstyle\frac{1}{8\kappa }} b\overline{\kappa }g^{5\beta } J_{\beta } -{\textstyle\frac{1}{4}} F^{\sigma \tau } F_{\sigma \tau } \approx -A^{\beta } J_{\beta } -{\textstyle\frac{1}{4}} F^{\sigma \tau } F_{\sigma \tau }$, (13.6)
where the approximation $\approx$ shows the connection to the linear approximation. Re-expressed solely in terms of the fifth-dimensional gravitational metric tensor components $g_{5\sigma }$ and energy tensor source components $T_{\beta 5}$, this result is:
$\kappa {\rm L}_{QED} =0={\textstyle\frac{1}{2}} g^{5\beta } \kappa T_{\beta 5} +{\textstyle\frac{1}{8}} g^{\sigma \alpha } \partial ^{\beta } g_{5\alpha } \left[\partial _{\sigma } g_{5\beta } -\partial _{\beta } g_{5\sigma } \right]$. (14.4)
You may also enjoy the derivations in section 12 which decompose the contravariant metric tensor into gravitons, photons, and the scalar trace of the graviton.
Again, if you have looked at earlier drafts, please focus on the new sections 12, 13 and 14. Looking for constructive feedback, as always.
## March 3, 2008
### Intrinsic Spin and the Kaluza-Klein Fifth Dimension: Journal Submission
I mentioned several days ago that I had submitted a Kaluza Klein paper to one of the leading journals. That lengthy paper was not accepted, and you can read the referee report and some of my comments here at sci.physics.foundations or here, with some other folks’ comments, at sci.physics.relativity. The report actually was not too bad, concluding that “the author must have worked a considerable amount to learn quite a few thing in gravitation theory, and a number of the equations are correctly written and they do make sense, however those eqs. do not contain anything original.” I would much rather hear this sort of objection, than be told — as I have been in the past — that I don’t know anything about the subject I am writing about.
In fact, there is one finding in the above-linked paper which, as I thought about it more and more, is quite original, yet I believe it was lost in the mass of this larger paper. And, frankly, it took me a few days to catch on to the full import of this finding, and so I downplayed it in the earlier paper. Namely: that the compactified fifth dimension of Kaluza-Klein theories is the mainspring of the intrinsic spins which permeate particle physics.
I have now written and submitted for publication, a new paper which only includes that Kaluza-Klein material which is necessary to fully support this particular original finding. You may read the submitted paper at Intrinsic Spin and the Kaluza-Klein Fifth Dimension. I will, of course, let you know what comes from the review of this paper.
Jay.
## February 29, 2008
### Lab Note 2: Why the Compactified, Cylindrical Fifth Dimension in Kaluza Klein Theory may be the “Intrinsic Spin” Dimension
FOR SOME REASON, THESE EQUATIONS APPEAR CORRUPTED. I AM CHECKING WITH WORDPRESS TECHNICAL SUPPORT AND HOPE TO HAVE THIS FIXED IN THE NEAR FUTURE — JAY.
I am posting here a further excerpt from my paper at Kaluza-Klein Theory, Lorentz Force Geodesics and the Maxwell Tensor with QED. Notwithstanding some good discussion at sci.physics.relativity, I am coming to believe that the intrinsic spin interpretation of the compactified, hypercylindrical fifth dimension presented in section 4 of this paper may be compelling. The math isn't too hard, and you can follow it below. The starting point for the discussion is equation (3.2) below,
$\frac{dx^{5} }{d\tau } \equiv -\frac{1}{b} \frac{\sqrt{\hbar c\alpha } }{\sqrt{G} m} =-\frac{1}{b} \frac{1}{\sqrt{4\pi G} } \frac{q}{m}$. (3.2)
which is used to connect the $q/m$ ratio from the Lorentz law to geodesic motion in five dimensions, and $b$ is a numeric constant of proportionality. Section 4 below picks up from this.
Excerpt from Section 4:
Transforming into an “at rest” frame, $dx^{1} =dx^{2} =dx^{3} =0$, the spacetime metric equation $d\tau ^{2} =g_{\mu \nu } dx^{\mu } dx^{\nu }$ reduces to $d\tau =\pm \sqrt{g_{00} } dx^{0}$, and (3.2) becomes:
$\frac{dx^{5} }{dx^{0} } =\pm \frac{1}{b} \sqrt{\frac{g_{00} }{4\pi G} } \frac{q}{m}$. (4.1)
For a timelike fifth dimension, $x^{5}$ may be drawn as a second axis orthogonal to $x^{0}$, and the physics ratio $q/m$ (which, by the way, is what results in a material body in an electromagnetic field actually “feeling” a Newtonian force in the sense of $F=ma$, due to the inequivalence of electrical and inertial mass) measures the “angle” at which the material body moves through the $x^{5} ,x^{0}$ “time plane.”
For a spacelike fifth dimension, where one may wish to employ a compactified, hyper-cylindrical $x^{5} \equiv R\phi$ (see [Sundrum, R., TASI 2004 Lectures: To the Fifth Dimension and Back, http://arxiv.org/abs/hep-th/0508134 (2005).], Figure 1) and $R$ is a constant radius (distinguish from the Ricci scalar by context), $dx^{5} \equiv Rd\phi$. Substituting this into (3.2), leaving in the $\pm$ sign obtained in (4.1), and inserting $c$ into the first term to maintain a dimensionless equation, then yields:
$\frac{Rd\phi }{cd\tau } =\pm \frac{1}{b} \frac{\sqrt{\hbar c\alpha } }{\sqrt{G} m} =\pm \frac{1}{b} \frac{1}{\sqrt{4\pi G} } \frac{q}{m}$. (4.2)
We see that here, the physics ratio $q/m$ measures an “angular frequency” of fifth-dimensional rotation. Interestingly, this frequency runs inversely to the mass, and by classical principles, this means that the angular momentum with fixed radius is independent of the mass, i.e., constant. If one doubles the mass, one halves the tangential velocity, and if the radius stays constant, then so too does the angular momentum. Together with the $\pm$ factor, one might suspect that this constant angular momentum is, by virtue of its constancy independently of mass, related to intrinsic spin. In fact, following this line of thought, one can arrive at an exact expression for the compactification radius $R$, in the following manner:
Assume that $x^{5}$ is spacelike, casting one’s lot with the preponderance of those who study Kaluza-Klein theory. In (4.2), move the $c$ away from the first term and move the $m$ over to the first term. Then, multiply all terms by another $R$. Everything is now dimensioned as an angular momentum $m\cdot v\cdot R$, which we have just ascertained is constant irrespective of mass. So, set this all to $\pm {\textstyle\frac{1}{2}} n\hbar$, which for $n=1$, represents intrinsic spin. The result is as follows:
$m\frac{Rd\phi }{d\tau } R=\pm \frac{1}{b} \frac{\sqrt{\hbar c^{3} \alpha } }{\sqrt{G} } R=\pm \frac{1}{b} \frac{c}{\sqrt{4\pi G} } qR=\pm \frac{1}{2} n\hbar$. (4.3)
Now, take the second and fourth terms, and solve for $R$ with $n=1$, to yield:
$R=\frac{b}{2\sqrt{\alpha } } \sqrt{\frac{G\hbar }{c^{3} } } =\frac{b}{2\sqrt{\alpha } } L_{P}$, (4.4)
where $L_{P} =\sqrt{G\hbar /c^{3} }$ is the Planck length. This gives a definitive size for the compactification radius, and it is very close to the Planck length. (more…)
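As a quick numeric sanity check on (4.4) — an illustration added here, not part of the paper — one can evaluate $R$ in SI units. The value of the constant $b$ is undetermined above, so $b=1$ is assumed purely for illustration:

```python
import math

# Numeric check of (4.4): R = (b / (2*sqrt(alpha))) * L_P.
# b is the undetermined constant from (3.2); b = 1 assumed for illustration.
G = 6.674e-11        # m^3 kg^-1 s^-2
hbar = 1.0546e-34    # J s
c = 2.9979e8         # m / s
alpha = 1 / 137.036  # fine-structure constant
b = 1.0

L_P = math.sqrt(G * hbar / c**3)          # Planck length, ~1.6e-35 m
R = b / (2 * math.sqrt(alpha)) * L_P      # equation (4.4)
print(R / L_P)  # ~5.85: the radius is a few Planck lengths
```

With $b=1$ the radius comes out at $\sqrt{137}/2 \approx 5.85$ Planck lengths, consistent with the “very close to the Planck length” remark above.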
## February 27, 2008
### Lab Note 2: Derivation of the Maxwell Stress-Energy Tensor from Five-Dimensional Geometry, using a Four-Dimensional Variation
FOR SOME REASON, THESE EQUATIONS APPEAR CORRUPTED. I AM CHECKING WITH WORDPRESS TECHNICAL SUPPORT AND HOPE TO HAVE THIS FIXED IN THE NEAR FUTURE. FOR NOW, PLEASE USE THE LINK IN THE FIRST PARAGRAPH, AND GO TO SECTION 10 — JAY.
As mentioned previously, I have been able to rigorously derive the Maxwell tensor from a five dimensional Kaluza-Klein geometry based on Lorentz force geodesics, using a variational principle over the four spacetime dimensions of our common experience. At the link: Kaluza-Klein Theory, Lorentz Force Geodesics and the Maxwell Tensor with QED, I have attached a complete version of this paper, which includes connections to quantum theory as well as an extensive summary not included in the version of the paper now being refereed at one of the leading journals. This is a strategic decision not to overload the referee, but to focus on the mathematical results, the most important of which is this derivation of the Maxwell tensor.
Because this paper is rather large, I have decided on this weblog, to post section 10, where this central derivation occurs. Mind you, there are nine sections which lay the foundation for this, but with the material below, plus the above link, those who are interested can see how this all fits together. The key result emerges in equation (10.15) below. Enjoy!
Excerpt: Section 10 — Derivation of the Maxwell Stress-Energy Tensor, using a Four-Dimensional Variation
In section 8, we derived the energy tensor based on the variational calculation (8.4), in five dimensions, i.e., by the variation $\delta g^{{\rm M} {\rm N} }$. Let us repeat this same calculation, but in a slightly different way.
In section 8, we used (8.3) in the form of ${\rm L}_{Matter} =-{\textstyle\frac{1}{8\kappa }} b\overline{\kappa }g^{5{\rm B} } J_{{\rm B} } =-{\textstyle\frac{1}{8\kappa }} b\overline{\kappa }g^{{\rm M} {\rm N} } \delta ^{5} _{{\rm M} } J_{{\rm N} }$, because that gave us a contravariant $g^{{\rm M} {\rm N} }$ against which to obtain the five-dimensional variation $\delta {\rm L}_{Matter} /\delta g^{{\rm M} {\rm N} }$. Let us instead, here, use the very last term in (8.3) as ${\rm L}_{Matter}$, writing this as:
${\rm L}_{Matter} \equiv {\textstyle\frac{1}{2\kappa }} R^{5} _{5} =-{\textstyle\frac{1}{8\kappa }} b\overline{\kappa }\left(g^{5\beta } J_{\beta } +{\textstyle\frac{1}{4}} g^{55} b\overline{\kappa }F^{\sigma \tau } F_{\sigma \tau } \right)=-{\textstyle\frac{1}{8\kappa }} b\overline{\kappa }\left(g^{\mu \nu } \delta ^{5} _{\nu } J_{\mu } +{\textstyle\frac{1}{4}} g^{55} g^{\mu \nu } b\overline{\kappa }F_{\mu } ^{\tau } F_{\nu \tau } \right)$. (10.1)
It is important to observe that the term $g^{5\beta } J_{\beta }$ is only summed over four spacetime indexes. The fifth term is $g^{55} J_{5} ={\textstyle\frac{1}{4}} g^{55} b\overline{\kappa }F^{\sigma \tau } F_{\sigma \tau }$; see, e.g., (6.8). For consistency with the non-symmetric (9.5), we employ $g^{5\beta } J_{\beta } =g^{\mu \nu } \delta ^{5} _{\nu } J_{\mu }$ rather than $g^{5\beta } J_{\beta } =g^{\mu \nu } \delta ^{5} _{\mu } J_{\nu }$. By virtue of this separation, in which we can only introduce $g^{\mu \nu }$ and not $g^{{\rm M} {\rm N} }$ as in section 8, we can only take a four-dimensional variation $\delta {\rm L}_{Matter} /\delta g^{\mu \nu }$, which, in contrast to (8.4), is now given by:
$T_{\mu \nu } \equiv -\frac{2}{\sqrt{-g} } \frac{\delta \left(\sqrt{-g} {\rm L}_{Matter} \right)}{\delta g^{\mu \nu } } =-2\frac{\delta {\rm L}_{Matter} }{\delta g^{\mu \nu } } +g_{\mu \nu } {\rm L}_{Matter}$. (10.2)
Substituting from (10.1) then yields:
$T_{\mu \nu } ={\textstyle\frac{1}{4\kappa }} b\overline{\kappa }\left(\delta ^{5} _{\nu } J_{\mu } +{\textstyle\frac{1}{4}} g^{55} b\overline{\kappa }F_{\mu } ^{\tau } F_{\nu \tau } \right)-{\textstyle\frac{1}{2}} g_{\mu \nu } {\textstyle\frac{1}{4\kappa }} b\overline{\kappa }\left(g^{5\beta } J_{\beta } +{\textstyle\frac{1}{4}} g^{55} b\overline{\kappa }F^{\sigma \tau } F_{\sigma \tau } \right)$. (10.3)
Now, the non-symmetry of sections 8 and 9 comes into play, and this will yield the Maxwell tensor. Because $\delta ^{5} _{\nu } =0$, the first term drops out and the above reduces to:
$\kappa T_{\mu \nu } ={\textstyle\frac{1}{4}} b\overline{\kappa }\left({\textstyle\frac{1}{4}} g^{55} b\overline{\kappa }F_{\mu } ^{\tau } F_{\nu \tau } \right)-{\textstyle\frac{1}{2}} g_{\mu \nu } {\textstyle\frac{1}{4}} b\overline{\kappa }\left(g^{5\beta } J_{\beta } +{\textstyle\frac{1}{4}} g^{55} b\overline{\kappa }F^{\sigma \tau } F_{\sigma \tau } \right)$. (10.4)
Note that this four-dimensional tensor is symmetric, and that we would arrive at an energy tensor which is identical if (10.3) contained a $\delta ^{5} _{\mu } J_{\nu }$ rather than $\delta ^{5} _{\nu } J_{\mu }$. Once again, the screen factor $\delta ^{5} _{\nu } =0$ is at work.
In mixed form, starting from (10.3), there are two energy tensors to be found. If we raise the $\mu$ index in (10.3), the first term becomes $\delta ^{5} _{\nu } J^{\mu } =0$ and we obtain:
$-\kappa T^{\mu } _{\nu } =-{\textstyle\frac{1}{4}} b\overline{\kappa }\left({\textstyle\frac{1}{4}} g^{55} b\overline{\kappa }F^{\mu \tau } F_{\nu \tau } \right)+{\textstyle\frac{1}{2}} \delta ^{\mu } _{\nu } {\textstyle\frac{1}{4}} b\overline{\kappa }\left(g^{5\beta } J_{\beta } +{\textstyle\frac{1}{4}} g^{55} b\overline{\kappa }F^{\sigma \tau } F_{\sigma \tau } \right)$, (10.5)
with this first term still screened out. However, if we transpose (10.3) and then raise the $\mu$ index, the first term becomes $g^{5\mu } J_{\nu }$ and this term does not drop out, i.e.,
$-\kappa T_{\nu } ^{\mu } =-{\textstyle\frac{1}{4}} b\overline{\kappa }\left(g^{5\mu } J_{\nu } +{\textstyle\frac{1}{4}} g^{55} b\overline{\kappa }F^{\mu \tau } F_{\nu \tau } \right)+{\textstyle\frac{1}{2}} \delta ^{\mu } _{\nu } {\textstyle\frac{1}{4}} b\overline{\kappa }\left(g^{5\beta } J_{\beta } +{\textstyle\frac{1}{4}} g^{55} b\overline{\kappa }F^{\sigma \tau } F_{\sigma \tau } \right)$. (10.6)
So, there are two mixed tensors to consider, and this time, unlike in section 8, these each yield different four-dimensional energy tensors. Contrasting (10.5) and (10.6), we see that $\delta ^{5} _{\nu } =0$ has effectively “broken” a symmetry that is apparent in (10.6), but “hidden” in (10.5). At this time, we focus on (10.5), because, as we shall now see, this is the Maxwell stress-energy tensor $T^{\mu } _{\nu } =-\left(F^{\mu \tau } F_{\nu \tau } -{\textstyle\frac{1}{4}} \delta ^{\mu } _{\nu } F^{\sigma \tau } F_{\sigma \tau } \right)$, before reduction into this more-recognizable form.
(more…)
## February 25, 2008
### Lab Note 2 Term Paper: Kaluza-Klein Theory and Lorentz Force Geodesics . . . and the Maxwell Tensor
Dear Friends:
I have just today completed a paper titled “Kaluza-Klein Theory and Lorentz Force Geodesics,” which I have linked below:
I have also submitted the draft linked above, to one of the leading physics journals for consideration for publication.
One of the things I have been beating my head against the wall over these past few weeks, is to deduce the Maxwell stress-energy tensor from the 5-dimensional geometry using Einstein’s equation including its scalar trace. I finally got the proof nailed down this morning, and that is section 10 of the paper linked above.
I respectfully submit that the formal derivation of the Maxwell stress-energy tensor in section 10 provides firm support for the Spacetime-Matter (STM) viewpoint that our physical universe is a five-dimensional Kaluza-Klein geometry in which the phenomena we observe in four dimensions are “induced” out of the fifth dimension, and that it supports the correctness of the complete line of development in this paper. Section 10 — as the saying goes — is the “clincher.”
As is apparent to those who have followed the development of this particular “Lab Note,” my approach is to postulate the Lorentz force, and require that this be geodesic motion in 5-dimensions. Everything else follows from there. The final push to the Maxwell tensor in section 10, rests on adopting and implementing the STM viewpoint, and applying a 4-dimensional variational principle in a five-dimensional geometry. If you have a serious interest in this subject, in addition to my paper, please take a look at The 5D Space-Time-Matter Consortium.
Best to all,
Jay.
https://quantum-journal.org/views/qv-2021-08-17-57/
# Making fermionic quantum simulators more affordable
This is a Perspective on "Resource-Optimized Fermionic Local-Hamiltonian Simulation on a Quantum Computer for Quantum Chemistry" by Qingfeng Wang, Ming Li, Christopher Monroe, and Yunseong Nam, published in Quantum 5, 509 (2021).
By Daniel Leykam (Centre for Quantum Technologies, National University of Singapore).
One of the most promising applications of future quantum computers is the ab-initio calculation of chemical properties including ground state energies [1]. The required number of qubits and gates scales more favourably with the size of the basis set used (linear and polynomial, respectively) compared to the exponentially-growing resources required to merely store a wave function on a classical computer.
Despite the better scaling of quantum algorithms, interesting classically intractable quantum chemistry problems remain well out of reach of near-term quantum computers due to the huge prefactors and overheads involved [2]. Thus, the reduction of resource requirements of quantum algorithms for quantum chemistry is an important problem attracting great interest [3,4,5,6,7,8].
The study by Wang et al. [9], recently published in $\textit{Quantum}$, develops schemes to reduce the number of expensive quantum gates required to compute the ground state energies of fermionic systems with local interactions using quantum computers. They present methods applicable to both near-term quantum processors and eventual fault-tolerant quantum computers.
First, the authors consider the calculation of ground state energies on future fault-tolerant quantum computers, based on applying the quantum phase-estimation algorithm to the time-evolution operator $U = \exp(-i Ht)$ generated by the system’s Hamiltonian $H$. Implementing the time-evolution operator on the quantum circuit is the main challenge for quantum phase estimation, but several efficient methods have been proposed [6]. For example, the Taylor-series approach decomposes $\hat{U}$ into a sum of powers of $H$ obtained via measurement of ancilla qubits [10], while the asymptotically optimal quantum signal processing employs controlled rotations of ancilla qubits [11].
Here the authors consider the more qubit-efficient product-formula method, which splits $\hat{U}$ into a product of simpler unitaries [12,6]. They show that the two-body interaction terms arising from the product-formula decomposition can be recast as triply-controlled rotations requiring fewer costly ancilla qubits and $T$ (phase shift) gates compared to previous optimized implementations. For example, the method of Ref. [13] requires twice as many $T$ gates and eleven times as many ancillas, making this an impressive reduction of the required resources.
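To illustrate the product-formula idea with a toy example (a sketch added here, not the authors' optimized construction), the first-order decomposition $U \approx (e^{-iAt/n} e^{-iBt/n})^n$ of the evolution under a two-term Hamiltonian $H = A + B$ can be checked numerically:

```python
import numpy as np

def U_of(M, s):
    """exp(-1j * s * M) for a Hermitian matrix M, via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.exp(-1j * s * w)) @ V.conj().T

# Toy two-term Hamiltonian H = A + B with [A, B] != 0.
A = np.array([[1, 0], [0, -1]], dtype=complex)  # sigma_z
B = np.array([[0, 1], [1, 0]], dtype=complex)   # sigma_x
t = 1.0

U_exact = U_of(A + B, t)

def trotter(n):
    """First-order product formula: (e^{-iAt/n} e^{-iBt/n})^n."""
    step = U_of(A, t / n) @ U_of(B, t / n)
    return np.linalg.matrix_power(step, n)

def err(n):
    return np.linalg.norm(trotter(n) - U_exact, 2)

print(err(4), err(64))
```

Doubling the number of steps roughly halves the error, reflecting the $O(t^2/n)$ bound for the first-order formula; the cost of real circuits grows accordingly, which is why optimized implementations of the individual exponentials matter.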
Next, the authors consider the variational quantum eigensolver, which is one of the most promising algorithms for near-term quantum processors due to its lower circuit depth and resilience to systematic gate errors [14]. The variational quantum eigensolver is a hybrid quantum-classical algorithm, which uses a parameterized quantum circuit to prepare a trial state and then measures the expectation value of the Hamiltonian operator $H$. The circuit parameters are iteratively optimized in order to minimize the energy and approximate the ground state of $H$.
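To make the hybrid loop concrete, here is a minimal toy sketch (an illustration added here, not from the paper): a one-qubit Hamiltonian $H = \sigma_z$ with ansatz $R_y(\theta)|0\rangle$, where a simple grid search stands in for the classical optimizer:

```python
import numpy as np

# One-qubit toy problem: H = sigma_z, trial state |psi(theta)> = Ry(theta)|0>.
H = np.array([[1.0, 0.0], [0.0, -1.0]])

def trial_state(theta):
    # Ry(theta)|0> = [cos(theta/2), sin(theta/2)]
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    psi = trial_state(theta)
    return psi @ H @ psi  # expectation value <psi|H|psi> = cos(theta)

# Classical outer loop: grid search stands in for a real optimizer.
thetas = np.linspace(0, 2 * np.pi, 201)
best = min(thetas, key=energy)
print(energy(best))  # reaches the exact ground-state energy -1 at theta = pi
```

In a real variational quantum eigensolver, the expectation value is estimated from repeated measurements on hardware rather than computed exactly, and the optimizer must cope with that statistical noise.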
Obtaining an accurate solution using the variational quantum eigensolver requires an encoding circuit that can well-approximate the ground state of $H$. Here the challenge is to balance the competing requirements of having a low circuit depth, a sufficiently expressive circuit, and being able to efficiently optimize the circuit parameters. For example, hardware-efficient encoding circuits formed by sequences of native gates minimize the circuit depth [3], but suffer from vanishing gradients of the energy with respect to the circuit parameters [15].
Problem-inspired strategies such as the unitary coupled-cluster method attempt to construct trial solutions by optimizing excitations about some reference states [4,7,8]. However, the excitation operators need to be decomposed into sequences of native gates, resulting in deeper circuits. One strategy for minimizing the required circuit depth is to iteratively build up the ansatz circuit, adding parameterized excitations until convergence is achieved [8].
Wang et al. incorporate perturbation theory to the iterative construction of a unitary coupled-cluster ansatz. First, a pool of excitation terms is optimized. Next, second order perturbation theory is used to estimate the influence of additional excitation terms. This allows one to not only identify the best excitation to add to the ansatz, but also estimate how much it will reduce the energy, while only incurring a small overhead of additional observables to be measured. Building up the ansatz in this manner simplifies the classical optimization of the variational parameters while minimizing the circuit depth.
As a second optimization of the resources required for the variational quantum algorithm, Wang et al. study generalized fermion-to-qubit transformations with the aim of reducing the number of costly two-qubit gates required to implement the unitary coupled-cluster ansatz. While numerically finding the global optimal transformation for a given problem is extremely challenging, heuristic methods applied to small molecules including H$_2$O indicate potential savings of up to 20% fewer two-qubit gates. This provides an alternative to generalized transformation approaches which reduce the required number of qubits to encode the wave function at the expense of increasing the circuit depth [5,16].
By reducing the resources required for simulation of fermionic systems on quantum computers, Wang et al. bring us a step closer to demonstrating a quantum advantage for quantum chemistry problems. In the near future, it will be interesting to implement these optimizations to the variational quantum eigensolver on real hardware in order to test their robustness to device noise and the imperfect estimation of observables using finite numbers of measurements. Another pressing issue is to test how well these methods scale as the number of orbitals is increased to the level required to achieve chemical accuracy.
### ► References
[1] Y. Cao, J. Romero, J. P. Olson, M. Degroote, P. D. Johnson, M. Kieferová, I. D. Kivlichan, T. Menke, B. Peropadre, N. P. D. Sawaya, S. Sim, L. Veis, and A. Aspuru-Guzik, Quantum Chemistry in the Age of Quantum Computing, Chemical Reviews 119, 10856 (2019).
https://doi.org/10.1021/acs.chemrev.8b00803
[2] V. E. Elfving, B. W. Broer, M. Webber, J. Gavartin, M. D. Halls, K. P. Lorton, and A. D. Bochevarov, How will quantum computers provide an industrially relevant computational advantage in quantum chemistry? arXiv:2009.12472.
arXiv:2009.12472
[3] A. Kandala, A. Mezzacapo, K. Temme, M. Takita, M. Brink, J. M. Chow, and J. M. Gambetta, Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets, Nature 549, 242 (2017).
https://doi.org/10.1038/nature23879
[4] J. Romero, R. Babbush, J. R. McClean, C. Hempel, P. J. Love, and A. Aspuru-Guzik, Strategies for quantum computing molecular energies using the unitary coupled cluster ansatz, Quantum Science and Technology 4, 014008 (2018).
[5] M. Steudtner and S. Wehner, Fermion-to-qubit mappings with varying resource requirements for quantum simulation, New Journal of Physics 20, 063010 (2018).
https://doi.org/10.1088/1367-2630/aac54f
[6] A. M. Childs, D. Maslov, Y. Nam, N. J. Ross, and Y. Su, Toward the first quantum simulation with quantum speedup, Proceedings of the National Academy of Sciences 115, 9456 (2018).
https://doi.org/10.1073/pnas.1801723115
[7] P.-L. Dallaire-Demers, J. Romero, L. Veis, S. Sim, and A. Aspuru-Guzik, Low-depth circuit ansatz for preparing correlated fermionic states on a quantum computer, Quantum Science and Technology 4, 045005 (2019).
https://doi.org/10.1088/2058-9565/ab3951
[8] H. R. Grimsley, S. E. Economou, E. Barnes, and N. J. Mayhall, An adaptive variational algorithm for exact molecular simulations on a quantum computer, Nature Communications 10, 3007 (2019).
https://doi.org/10.1038/s41467-019-10988-2
[9] Q. Wang, M. Li, C. Monroe, and Y. Nam, Resource-Optimized Fermionic Local-Hamiltonian Simulation on a Quantum Computer for Quantum Chemistry, Quantum 5, 509 (2021).
https://doi.org/10.22331/q-2021-07-26-509
[10] D. W. Berry, A. M. Childs, R. Cleve, R. Kothari, and R. D. Somma, Simulating Hamiltonian Dynamics with a Truncated Taylor Series, Physical Review Letters 114, 090502 (2015).
https://doi.org/10.1103/PhysRevLett.114.090502
[11] G. H. Low and I. L. Chuang, Optimal Hamiltonian Simulation by Quantum Signal Processing, Physical Review Letters 118, 010501 (2017).
https://doi.org/10.1103/PhysRevLett.118.010501
[12] D. W. Berry, G. Ahokas, R. Cleve, and B. C. Sanders, Efficient Quantum Algorithms for Simulating Sparse Hamiltonians, Communications in Mathematical Physics 270, 359 (2007).
https://doi.org/10.1007/s00220-006-0150-x
[13] M. Reiher, N. Wiebe, K. M. Svore, D. Wecker, and M. Troyer, Elucidating reaction mechanisms on quantum computers, Proceedings of the National Academy of Sciences 114, 7555 (2017).
https://doi.org/10.1073/pnas.1619152114
[14] A. Peruzzo, J. McClean, P. Shadbolt, M.-H. Yung, X.-Q. Zhou, P. J. Love, A. Aspuru-Guzik, and J. L. O'Brien, A variational eigenvalue solver on a photonic quantum processor, Nature Communications 5, 4213 (2014).
https://doi.org/10.1038/ncomms5213
[15] J. R. McClean, S. Boixo, V. N. Smelyanskiy, R. Babbush, and H. Neven, Barren plateaus in quantum neural network training landscapes, Nature Communications 9, 4812 (2018).
https://doi.org/10.1038/s41467-018-07090-4
[16] K. Setia and J. D. Whitfield, Bravyi-Kitaev Superfast simulation of electronic structure on a quantum computer, The Journal of Chemical Physics 148, 164104 (2018).
https://doi.org/10.1063/1.5019371
https://eprint.iacr.org/2009/635
Flexible Quasi-Dyadic Code-Based Public-Key Encryption and Signature
Kazukuni Kobara
Abstract
A drawback of code-based public-key cryptosystems is that their public-key size is large: some hundreds of KB to some MB for typical parameters. While several attempts have been made to reduce it, most of them have failed except one, the Quasi-Dyadic (QD) public key (for large extension degrees). While an attack has been proposed on the QD public key (for small extension degrees), it can be prevented by making the extension degree $m$ larger, specifically by making $q^{m(m-1)}$ large enough, where $q$ is the base field and, for a binary code, $q=2$. The drawback of QD is, however, that it must hold that $n \ll 2^m - t$ (at least $n \leq 2^{m-1}$), where $n$ and $t$ are the code length and the error-correction capability of the underlying code. If this is not satisfied, key generation fails, since it is performed by trial and error. This condition also prevents QD from generating parameters for code-based digital signatures, since without making $n$ close to $2^m - t$, $2^{mt}/{n \choose t}$ cannot be small. To overcome these problems, we propose "Flexible" Quasi-Dyadic (FQD) public keys, which can achieve even $n = 2^m - t$ in one shot. Advantages of FQD include: 1) it can reduce the public-key size further, and 2) it can be applied to code-based digital signatures, too.
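To see why the signature condition forces $n$ close to $2^m - t$, the quantity $2^{mt}/{n \choose t}$ — roughly the expected number of decoding attempts per signature — can be computed directly. The parameter values below ($m=16$, $t=9$) are illustrative sample values, not taken from the paper:

```python
from math import comb

# Illustrative signature-style parameters (assumed, not from the paper).
m, t = 16, 9

def attempts(n):
    """Expected number of decoding attempts, 2^{m t} / C(n, t)."""
    return 2 ** (m * t) / comb(n, t)

n_max = 2 ** m - t               # the n = 2^m - t case FQD can reach directly
print(attempts(n_max))           # on the order of t!, a few hundred thousand
print(attempts(2 ** (m - 1)))    # halving n costs roughly a factor 2^t = 512
```

With $n = 2^m - t$ the count is on the order of $t! \approx 3.6\times10^5$, which is feasible; every halving of $n$ multiplies it by roughly $2^t$, quickly making signing impractical.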
Note: Fixed some typos.
Category
Public-key cryptography
Publication info
Published elsewhere. Unknown where it was published
Contact author(s)
kobara_conf @ m aist go jp
History
2010-05-21: revised
Short URL
https://ia.cr/2009/635
CC BY
BibTeX
@misc{cryptoeprint:2009/635,
author = {Kazukuni Kobara},
title = {Flexible Quasi-Dyadic Code-Based Public-Key Encryption and Signature},
howpublished = {Cryptology ePrint Archive, Paper 2009/635},
year = {2009},
note = {\url{https://eprint.iacr.org/2009/635}},
url = {https://eprint.iacr.org/2009/635}
}
https://blog.myrank.co.in/magnetization-and-magnetic-intensity/
# Magnetization and Magnetic Intensity
### Magnetization and Magnetic Intensity
The magnetic behaviour of a magnet is characterized by the alignment of the atoms inside the substance. When a ferromagnetic substance is placed in a strong external magnetic field, its atomic magnetic moments experience a torque that aligns them with the applied field, and the substance becomes strongly magnetized in the direction of that field.
Magnetization: Magnetization of a given sample material M, can be defined as the net magnetic moment for that material per unit volume.
Mathematically,
$$M = \frac{m_{net}}{V}$$
Let us now consider the case of a solenoid. Let us take a solenoid with n turns per unit length and the current passing through it be given by I, then the magnetic field in the interior of the solenoid can be given as, B₀ = µ₀nI
Now, if we fill the interior of the solenoid with a material of non-zero magnetization, the field inside the solenoid must be greater than before. The net magnetic field B inside the solenoid can be given as,
B = B₀ + Bm
Where,
Bm = The field contributed by the core material. Here,
Bm α Magnetization of the material (M).
Mathematically,
Bm = µ₀M
Here,
µ₀ = Constant of permeability of vacuum.
Let us now discuss another concept here, the magnetic intensity of a material. The magnetic intensity of a material can be given as,
$$H = \frac{B}{\mu_{0}} - M$$
From this equation, we see that, the total magnetic field can also be defined as,
B = µ₀ (H + M)
Here, the magnetic field due to external factors, such as the current in the solenoid, is denoted by H, and the field due to the nature of the core material is denoted by M. The latter quantity, M, is itself influenced by the external field and is given by M = χH
Where,
χ = Magnetic susceptibility of the material.
It gives the measure of the response of a material to an external field. The magnetic susceptibility of a material is small and positive for paramagnetic materials and is small and negative for diamagnetic materials.
B = µ₀ (1 + χ) H = µ₀µrH = µH
Here,
µr = Relative magnetic permeability of the material, which is analogous to the dielectric constant in electrostatics.
We define the magnetic permeability as µ = µ₀µr = µ₀ (1 + χ).
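The chain H → M → B above can be traced numerically. The solenoid and core values below are chosen purely for illustration (χ is of the order of magnitude of a paramagnet such as aluminium):

```python
import math

mu0 = 4 * math.pi * 1e-7   # T*m/A, permeability of free space

# Example values (illustrative): a solenoid with n = 1000 turns per metre
# carrying I = 2 A, filled with a weakly paramagnetic core.
n, I = 1000, 2.0
chi = 2.3e-5               # magnetic susceptibility of the core material

H = n * I                  # magnetic intensity from external factors (A/m)
M = chi * H                # magnetization induced in the core
B = mu0 * (H + M)          # total field; equals mu0 * (1 + chi) * H
print(B)                   # slightly larger than the empty-solenoid field mu0*n*I
```

For a paramagnet the correction is tiny (χ ≪ 1); for a ferromagnetic core, χ can be of order 10³ or more, and B is dramatically enhanced.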
http://sara-codes.com/simple-guide-to-confusion-matrix/
|
Putting it in a few words, a confusion matrix is a summarization of the performance of an algorithm. It is a table that describes the performance of a classifier model with known labels.
The confusion matrix is widely used in Machine Learning because it not only indicates the errors made by the model but also describes the types of error.
Let’s have a look at what the entries of the table refer to:
TP – True Positives: model predicted Positive and actual class is also Positive.
FP – False Positives: model predicted Positive, but the actual class is Negative. (aka Type I error)
FN – False Negatives: model predicted Negative, but the actual class is Positive. (aka Type II error)
TN – True Negatives: model predicted Negative and actual class is also Negative.
As you can see, the confusion matrix is the count of each error type the model made. Having this information, we can calculate the following metrics:
Accuracy: Percentage of the correct classifications the model made from all the observations.
$\frac{TP+TN}{TP + TN + FP + FN}$
Precision: Percentage of the correctly predicted Positive observations among the total of the predicted Positive observations. In other words, it’s the percentage of the positive predictions that are correct.
$\frac{TP}{TP + FP}$
Recall (or Sensitivity): The percentage of the correctly predicted Positive observations among the total of the actual Positive observations.
$\frac{TP}{TP + FN}$
F1 score: Combines precision and recall. Can be interpreted as the weighted average of the precision and recall on a scale from 0 to 1, where 1 means a perfect classification.
$\frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$
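The four metrics above can be computed directly from the counts. The example counts below are made up for illustration, not taken from any real classifier.

```python
def confusion_metrics(tp, fp, fn, tn):
    """Compute accuracy, precision, recall and F1 score from the
    confusion-matrix counts described above."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Illustrative counts (assumed): 50 TP, 10 FP, 5 FN, 35 TN
acc, prec, rec, f1 = confusion_metrics(tp=50, fp=10, fn=5, tn=35)
# acc = 0.85, prec ≈ 0.833, rec ≈ 0.909, f1 ≈ 0.870
```

Note how precision and recall pull in different directions here: the model rarely misses a Positive (high recall) but pays for it with more false alarms (lower precision), and F1 balances the two.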
|
2019-08-25 03:42:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8013428449630737, "perplexity": 887.5725880043716}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027322170.99/warc/CC-MAIN-20190825021120-20190825043120-00364.warc.gz"}
|
https://msl.stanford.edu/bibliography/smith_persistent_2012
|
### Persistent Robotic Tasks: Monitoring and Sweeping in Changing Environments
@article{smith_persistent_2012,
title = {Persistent {Robotic} {Tasks}: {Monitoring} and {Sweeping} in {Changing} {Environments}},
volume = {28},
issn = {1552-3098, 1941-0468},
url = {http://ieeexplore.ieee.org/document/6101584/},
abstract = {In this paper, we present controllers that enable mobile robots to persistently monitor or sweep a changing environment. The environment is modeled as a field that is defined over a finite set of locations. The field grows linearly at locations that are not within the range of a robot and decreases linearly at locations that are within range of a robot. We assume that the robots travel on given closed paths. The speed of each robot along its path is controlled to prevent the field from growing unbounded at any location. We consider the space of speed controllers that are parametrized by a finite set of basis functions. For a single robot, we develop a linear program that computes a speed controller in this space to keep the field bounded, if such a controller exists. Another linear program is derived to compute the speed controller that minimizes the maximum field value over the environment. We extend our linear program formulation to develop a multirobot controller that keeps the field bounded. We characterize, both theoretically and in simulation, the robustness of the controllers to modeling errors and to stochasticity in the environment.},
language = {en},
number = {2},
urldate = {2020-09-15},
journal = {IEEE Transactions on Robotics},
author = {Smith, Stephen L. and Schwager, Mac and Rus, Daniela},
month = apr,
year = {2012},
keywords = {Multi-Agent Control, Multi-Agent Search, Optimization and Optimal Control, Persistent Surveillance, persistent\_surveillance},
pages = {410--426},
month_numeric = {4}
}
|
2022-08-08 10:53:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.397948682308197, "perplexity": 1995.8027194401063}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570793.14/warc/CC-MAIN-20220808092125-20220808122125-00420.warc.gz"}
|
https://stats.stackexchange.com/questions/175450/time-varying-cointegrating-equation
|
Time varying cointegrating equation
I am estimating a rolling cointegration model and saving the parameters of the long-run equation for trend analysis. I wonder if it is possible to estimate a time-varying-parameter VECM by applying a Kalman filter to the cointegrating equation instead of to the ECM or short-run parameters. Many thanks, mrrox
• Is your question whether or not it is possible to cast the model in state space form and then let the long-run relations vector (often called the $\beta$ vector) vary according to a random walk process? – Plissken Oct 5 '15 at 8:49
• Hello @ Plissken, yes, can I do that? if Yes, do you know of a code on the net or someone who could help with get such a specification estimated? – mr.rox Oct 5 '15 at 10:11
• I'll write an answer up later on what you can do. Letting the long-run relations vector vary according to a random walk process is problematic but if you are willing to use a Bayesian framework it should be possible. See: personal.strath.ac.uk/gary.koop/kls8.pdf. Note that Koop also publishes his Matlab replication files on his website. – Plissken Oct 5 '15 at 10:31
• Many thanks, it appears Koop published this paper with another guy in 2011. The code is in Gauss. Do you know if there is a matlab code of this sort? Thanks. – mr.rox Oct 5 '15 at 11:06
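As a rough illustration of the idea raised in the comments, a scalar Kalman filter can track a regression coefficient that follows a random walk. This is a minimal frequentist sketch with the state and observation noise variances treated as known (an assumption), not the Bayesian approach of the Koop paper linked above.

```python
import numpy as np

def tvp_kalman(y, x, q=1e-4, r=1.0, beta0=0.0, p0=1.0):
    """Filter y_t = beta_t * x_t + e_t with beta_t = beta_{t-1} + w_t,
    where Var(w_t) = q and Var(e_t) = r are treated as known."""
    beta, p, path = beta0, p0, []
    for yt, xt in zip(y, x):
        p = p + q                          # predict: state variance grows
        s = xt * p * xt + r                # innovation variance
        k = p * xt / s                     # Kalman gain
        beta = beta + k * (yt - xt * beta) # update the state estimate
        p = (1 - k * xt) * p               # update the state variance
        path.append(beta)
    return np.array(path)

# Simulated data with a slowly drifting coefficient
rng = np.random.default_rng(0)
T = 500
true_beta = 1.0 + np.cumsum(rng.normal(0, 0.01, T))
x = rng.normal(0, 1, T)
y = true_beta * x + rng.normal(0, 0.5, T)
est = tvp_kalman(y, x, q=1e-4, r=0.25)
```

In a VECM application, y and x would be the levels entering the cointegrating relation, and the filtered path would play the role of a time-varying long-run coefficient; in practice q and r would need to be estimated (e.g. by maximum likelihood) rather than fixed.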
|
2020-01-27 05:14:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5588844418525696, "perplexity": 723.5935067774979}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251694908.82/warc/CC-MAIN-20200127051112-20200127081112-00311.warc.gz"}
|
http://math.stackexchange.com/questions/354511/how-to-verify-this-is-an-orthogonal-basis-how-to-transform-it-into-an-orthonorm
|
# How to verify this is an orthogonal basis? How to transform it into an orthonormal basis?
Let $$B = \left\{ \begin{bmatrix} 3\\ -3\\ 0\end{bmatrix},\begin{bmatrix} 2\\ 2\\ -1\end{bmatrix},\begin{bmatrix} 1\\ 1\\ 4\end{bmatrix}\right\},\qquad v =\begin{bmatrix} 5\\ -3\\ 1\end{bmatrix}.$$ a) Verify that $B$ is an orthogonal basis of $\mathbb{R}^3$.
b) Transform $B$ into an orthonormal basis.
c) Write $v$ as linear combination of $B$.
I am really lost in class. I don't even know where to start. Please show steps and answers for the exercise problem so that I can learn. Thank you
Why don't you write down the definitions of the things you don't understand? – user66345 Apr 8 '13 at 3:24
I am assuming the same thing as user66345, and I have made an edit to that effect. – Zev Chonoles Apr 8 '13 at 3:25
@JoMo: Note that you need to end a line with two spaces in order for the line break to appear. – Zev Chonoles Apr 8 '13 at 3:26
My family is a rather huge set so I think they must be linear dependent... – DonAntonio Apr 8 '13 at 3:29
@DonAntonio Ha ha! Yes, most likely. It is good to know you can depend on your relatives. – 1015 Apr 8 '13 at 3:32
c) $v = \alpha b_1 + \beta b_2 + \gamma b_3$
To determine $\alpha,\beta$, and $\gamma$, take the dot product of $v$ with $b_1,b_2$, and $b_3$. Since the basis is orthogonal, $b_i \cdot b_j = 0$ for $i \neq j$, so $\alpha = \frac{v \cdot b_1}{b_1 \cdot b_1}$, and similarly for $\beta$ and $\gamma$. (If you first normalize the basis as in part b, then $b_i \cdot b_i = 1$ and each coefficient is simply $v \cdot b_i$.) These coefficients are known as the Fourier coefficients.
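A quick numerical check of all three parts, as a NumPy sketch (not part of the original answer):

```python
import numpy as np

b1, b2, b3 = np.array([3, -3, 0]), np.array([2, 2, -1]), np.array([1, 1, 4])
v = np.array([5, -3, 1])

# (a) B is orthogonal: all pairwise dot products vanish
assert b1 @ b2 == 0 and b1 @ b3 == 0 and b2 @ b3 == 0

# (b) Divide each vector by its norm to get an orthonormal basis
u1, u2, u3 = (b / np.linalg.norm(b) for b in (b1, b2, b3))

# (c) Fourier coefficients relative to the orthogonal basis B
alpha = (v @ b1) / (b1 @ b1)   # 24/18 = 4/3
beta  = (v @ b2) / (b2 @ b2)   #  3/9  = 1/3
gamma = (v @ b3) / (b3 @ b3)   #  6/18 = 1/3
assert np.allclose(alpha * b1 + beta * b2 + gamma * b3, v)
```

So $v = \frac{4}{3}b_1 + \frac{1}{3}b_2 + \frac{1}{3}b_3$.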
|
2016-05-24 18:01:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9333562254905701, "perplexity": 287.9707014385645}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049272823.52/warc/CC-MAIN-20160524002112-00154-ip-10-185-217-139.ec2.internal.warc.gz"}
|
https://dimag.ibs.re.kr/event/2022-09-13/
|
# Sebastian Wiederrecht, Killing a vortex
## September 13 Tuesday @ 4:30 PM - 5:30 PM KST
Room B332, IBS (기초과학연구원)
### Speaker
Sebastian Wiederrecht
IBS Discrete Mathematics Group
https://www.wiederrecht.com
The Structural Theorem of the Graph Minors series of Robertson and Seymour asserts that, for every $t\in\mathbb{N},$ there exists some constant $c_{t}$ such that every $K_{t}$-minor-free graph admits a tree decomposition whose torsos can be transformed, by the removal of at most $c_{t}$ vertices, to graphs that can be seen as the union of some graph that is embeddable to some surface of Euler genus at most $c_{t}$ and “at most $c_{t}$ vortices of depth $c_{t}$”. Our main combinatorial result is a “vortex-free” refinement of the above structural theorem as follows: we identify a (parameterized) graph $H_{t}$, called shallow vortex grid, and we prove that if in the above structural theorem we replace $K_{t}$ by $H_{t},$ then the resulting decomposition becomes “vortex-free”. Up to now, the most general classes of graphs admitting such a result were either bounded Euler genus graphs or the so-called single-crossing minor-free graphs. Our result is tight in the sense that, whenever we minor-exclude a graph that is not a minor of some $H_{t},$ the appearance of vortices is unavoidable. Using the above decomposition theorem, we design an algorithm that, given an $H_{t}$-minor-free graph $G$, computes the generating function of all perfect matchings of $G$ in polynomial time. This algorithm yields, on $H_{t}$-minor-free graphs, polynomial algorithms for computational problems such as the dimer problem, the exact matching problem, and the computation of the permanent. Our results, combined with known complexity results, imply a complete characterization of minor-closed graph classes where the number of perfect matchings is polynomially computable: They are exactly those graph classes that do not contain every $H_{t}$ as a minor. This provides a sharp complexity dichotomy for the problem of counting perfect matchings in minor-closed classes.
This is joint work with Dimitrios M. Thilikos.
|
2022-08-17 07:05:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.803888201713562, "perplexity": 526.8839800204947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572870.85/warc/CC-MAIN-20220817062258-20220817092258-00530.warc.gz"}
|
https://chemistry.stackexchange.com/questions/33773/stereoselective-enolate-formation-with-different-bases-does-the-addition-of-a-b
|
Stereoselective enolate formation with different bases: Does the addition of a base NR3 to a ketone afford the cis or the trans enolate?
I'm wondering if the addition of a base of type $\ce{NR_3}$ (R: alkyl) to a ketone (e.g. 3-pentanone or 4-heptanone) results in the cis or trans-enolate.
1. $\ce{NMe_3}$ (small)
2. $\ce{NMeEt_2}$ (medium)
3. $\ce{NEt_3}$ (big)
4. $\ce{N(iPr)_2Et}$ (very bulky)
Maybe smaller bases as 1 or 2 usually give the trans-enolate, and bulkier bases such as 3 or even 4 affords the cis-enolate. (That's what I read, but I'm not able to understand why.) Is there a good explanation that rationalizes which enolate is formed, the cis or the trans?
Sterically hindered bases form the kinetic enolate. For deprotonation of 3-methyl-4-heptanone I assume the enolate forms at the linear unbranched chain site; that's clear to me, but kinetic/thermodynamic enolate has nothing to do with cis/trans enolate, or does it?
• With weak bases like the amines shown, the conditions are necessarily thermodynamic, so I would expect the thermodynamic Z-enolate to be favored over the E-enolate. Usually there is a trapping agent (e.g., TMSCl) available, which could change the outcome. – jerepierre Jul 9 '15 at 19:40
• Why is the Z-enolate thermodynamically favoured? (Can we say the same without the carbonyl, that cis-propene is thermodynamically favoured over trans-propene?) – laminin Jul 9 '15 at 20:29
• Tertiary amines are hardly basic enough to extract the alpha-proton of ketone (pKa ~20) to form enolate. Do you mean organolithium bases like (1) LiMe2 ...(4) Li(iPr)2 etc for a meaningful comparison with LDA or NaH? NEt3 was used to form enolate in the ACID-catalyzed enol formation. HNEt3(+) has a pKa of ~10. – SYK Jul 17 '15 at 23:15
• @SYK Your comment is very worthful for me! Let's take the smallest Lewis Acid possible, for carbonyl activation, I would try LiI or NaI. Then I suggest a), b), c) and d) all give the cis-enolate. – laminin Jul 17 '15 at 23:41
• I doubt LiI or NaI with a tertiary amine will deprotonate a carbonyl, so this question doesn't really make sense unless you are using some kind of boron activation (as your comment on the answer suggests), or silicon (as jerepierre suggested). – orthocresol Mar 3 '18 at 15:32
The keto-enol tautomerization cannot take place without at least a trace of acid or base, meaning that there is no direct shift of proton from alpha-carbon to oxygen or vice versa.
The two most common enolation mechanisms are:
1. Base-catalyzed (with a lithium amide base such as LDA, the lithium salt of a secondary amine)
In the base-catalyzed enol formation, the first step, proton extraction from the alpha-carbon, is slow. (The subsequent enolate formation is fast (due to resonance)). The base is proposed to form a chair-like transition state (TS) with the ketone, extracting the alpha-carbon proton and at the same time donating the Li to the oxygen. The resulting product is the classical lithium enolate.
Based on the structure of the TS, you can then predict whether the “side chain” of the base would form an unfavorable syn-pentane interaction.
https://en.wikipedia.org/wiki/File:Ireland_model_enolate1.svg
1. Acid-catalyzed
In the acid-catalyzed enol formation, the first step, the protonation of oxygen and subsequent “charge re-localization” to the hydroxyl carbon, is fast. (The subsequent enol formation is slow.) Since the protonation can occur from any free proton in the environment, the size of the acid plays little direct role in the stereochemistry of the resulting enolate. (However, the size of the “side chain” in the ketone reactant does.)
However, there is a 3rd mechanism when the “base” is a tertiary amine (as you appear to be very interested in) but the reaction is acid-catalyzed. In this mechanism, the amine behaves like a nucleophile and a carbinolamine intermediate is detected.
See Bruice, J. Am. Chem. Soc. 1983, 105, 4982
http://pubs.acs.org/doi/abs/10.1021/ja00353a023
(Also see Bruice, J. Am. Chem. Soc. 1989, 111, 962 and 1990, 112, 7361)
In drawing this carbinolamine intermediate, you can predict a bulky tertiary amine to favor a cis-enolate due to steric hindrance. However, as suggested in the paper, "severely sterically hindered" tertiary amine can be too bulky to follow this 3rd mechanism and just catalyzes enolation via the general base-catalyzed mechanism.
• Thank you very much for your answer. Probably there is a fourth mechanism which I'm more interested in than the other three: Addition of a lewis acid. a1) Bu2BOTf / (iPr)2NEt gives the cis enolate, b1) Cy2BCl / NEt3 gives the trans-enolate because of the bulky Cy. What does change when you instead use a2) Bu2BOTf / NEt3 and b2) Cy2BCl / (iPr)2NEt ? And last but not least I'd like to ask as a second question, if the addition of c) LiI and d) NaI also give the cis-enolate? – laminin Jul 18 '15 at 13:28
• chemistry.stackexchange.com/questions/34153/… – laminin Jul 18 '15 at 20:41
• (−1) OP is asking about the generation of stoichiometric enolate, but you seem to be talking about keto-enol tautomerism. Plus, LDA isn't going to "catalyse" keto-enol tautomerism. Your answer seems to be confused about this: "In the base-catalyzed enol formation [...] The resulting product is the classical lithium enolate." – orthocresol Mar 3 '18 at 15:36
|
2019-11-22 18:20:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7121464610099792, "perplexity": 5220.501873755333}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671411.14/warc/CC-MAIN-20191122171140-20191122200140-00424.warc.gz"}
|
http://eprint.iacr.org/2013/702
|
## Cryptology ePrint Archive: Report 2013/702
Efficient Non-Malleable Codes and Key-Derivation for Poly-Size Tampering Circuits
Sebastian Faust and Pratyay Mukherjee and Daniele Venturi and Daniel Wichs
Abstract: Non-malleable codes, defined by Dziembowski, Pietrzak and Wichs (ICS '10), provide roughly the following guarantee: if a codeword $c$ encoding some message $x$ is tampered to $c' = f(c)$ such that $c' \neq c$, then the tampered message $x'$ contained in $c'$ reveals no information about $x$. Non-malleable codes have applications to immunizing cryptosystems against tampering attacks and related-key attacks.
One cannot have an efficient non-malleable code that protects against all efficient tampering functions $f$. However, in this work we show "the next best thing": for any polynomial bound $s$ given a-priori, there is an efficient non-malleable code that protects against all tampering functions $f$ computable by a circuit of size $s$. More generally, for any family of tampering functions $\mathcal{F}$ of size $|\mathcal{F}| \leq 2^{s}$, there is an efficient non-malleable code that protects against all $f \in \mathcal{F}$. The rate of our codes, defined as the ratio of message to codeword size, approaches $1$. Our results are information-theoretic and our main proof technique relies on a careful probabilistic method argument using limited independence. As a result, we get an efficiently samplable family of efficient codes, such that a random member of the family is non-malleable with overwhelming probability. Alternatively, we can view the result as providing an efficient non-malleable code in the "common reference string" (CRS) model.
We also introduce a new notion of non-malleable key derivation, which uses randomness $x$ to derive a secret key $y = h(x)$ in such a way that, even if $x$ is tampered to a different value $x' = f(x)$, the derived key $y' = h(x')$ does not reveal any information about $y$. Our results for non-malleable key derivation are analogous to those for non-malleable codes.
As a useful tool in our analysis, we rely on the notion of "leakage-resilient storage" of Davi, Dziembowski and Venturi (SCN '10) and, as a result of independent interest, we also significantly improve on the parameters of such schemes.
Category / Keywords: foundations / information theory, non-malleability, codes
Date: received 27 Oct 2013, last revised 15 May 2014
Contact author: wichs at ccs neu edu
Available format(s): PDF | BibTeX Citation
Note: The last revision fixed a few typos.
Short URL: ia.cr/2013/702
[ Cryptology ePrint archive ]
|
2016-08-29 01:19:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7007371783256531, "perplexity": 1775.9452327999209}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982949773.64/warc/CC-MAIN-20160823200909-00247-ip-10-153-172-175.ec2.internal.warc.gz"}
|
http://gpvv.avtp.pw/how-to-check-hard-disk-size-in-command-prompt.html
|
# How To Check Hard Disk Size In Command Prompt
Is there a way to disable the second hard disk from the command line NOT the gui because we need to write a script that runs every time the machine boots. The partitioning utilities that we will look at are listed in Table 1. Foe example, The size of files and IO operations call for 64K unit size. It takes longer than a Quickformat, but it generally works. This tool can be downloaded from Microsoft portal. Get-VM -VMName "ANGGA-DC1" | Get-VMHardDiskDrive. The method that I prefer involves writing a few lines in a command prompt window. If you run the Check Disk tool in Command Prompt and decide that you would then like to format the drive, you can do so via Command Prompt as well. CHKDSK Windows 10 with Elevated Command Prompt. Relaunch the Command Prompt by typing CMD command in RUN (Windows. After installing the program in the default location, in a command prompt, run: cd "\Program Files\Resource Kit" diruse /s \ | sort. The command is identical to the one for full hard disk access, except for the additional -partitions parameter. And you can customize the size by running. The boot drive's format and partition structure can be checked both in the OS X graphical interface and in the Terminal. This tool gives the details on bad sectors so that you can attempt to repair them. Short for "check disk," the chkdsk command is a Command Prompt command used to check a specified disk and repair or recover data on the drive if necessary. When ordering a new hard drive, it’s very important to check the interface type and the RPM. Here we let you know the way to use vboxmanage modifymedium command to increase the disk space or size of Virtual hard disk in VirtualBox installed on Ubuntu (Linux)/MacOS or Windows. The GUI tool is easier to use, but the command-line tool has advanced options. The partitioning utilities that we will look at are listed in Table 1. In the case of older versions of Windows, users can get to the Command Prompt by going to Start > Run and typing “cmd”. 
This is totally unnecessary for a USB external hard drive if you are only going be using it to store data and not to run an operating system. The potential for lost files is. Free disk physical bad sector scanner to check disk problem, portable software, light weight and easy. Checking the hard disk for errors in Windows. Go ahead and check everything if you like and then click OK. By opening a command prompt window, you can search for files that Windows truly doesn't want you to know are there. Find free disk space from command line? by engineer331 | June 20, 2007 5:16 AM PDT Does anyone know of a quick and simple way to get the amount of free disk space left on the HDD in WinXP Home?. In Linux, you can check disk space using command line tool called df command. If you wish to overwrite the entire disk in a way, that restoring data is nearly to impossible, you can also use diskpart, but with clean all subcommand. In addition to performing the same functions as the graphical version of Check Disk, the command prompt version also provides more detailed disk analysis and repair reports. How could you check your hard disk's size in Windows 7? How do you fix a command prompt window that flashes every few minutes in Windows 7? How do you make an external hard drive bootable to Windows 7?. For example:. “ DISKPART> list disk. Psyched? Jump over to the other side of the break to find out how you can check disk drives for errors in Windows 10. --image Specifies an existing image file that will be used to emulate. , using command-line shell. Via Command Prompt. To use the command line check disk version. You can use Disk Check in Windows 7 not only for local hard drives, but also for removable media such as USB memory sticks or memory cards. Press "Enter" to start the formatting process. The following command's output will provide information regarding the total size, disk space, memory consumed and file system. Command prompt (Admin). 
How To Determine Your Hard Disk Cluster Size In Windows Sector is the logical unit of file storage on a hard disk. On your keyboard press- Windows key + X simultaneously and then select - Command Prompt (admin) Click- Yes at User Account Control. Querying Disk Space on Remote Servers using Batch with WMIC from the desktop avoiding opening a command prompt to run the batch: available to check disk space. Date and hours - 2. It's not possible using Xen Center or CLI. so how can find internal hard disk information where system window installed. What I am looking for is a way to query a server to see what size hard drives are used in their RAID array. Here Are 4 Ways to Check Hard Disk Health on Windows By MTE Staff – Posted on Jun 9, 2018 Jun 9, 2018 in Windows Your hard drive is the soul of your PC, the place where all your most important data is stored. Open the command prompt with elevated privileges. sudo parted GNU Parted 3. How to create a partition using fdisk (man fdisk) in Linux. org is a web site devoted to helping users of legacy operating systems discover the power of Linux. – Run Check Disk command (from an elevated command prompt) to find and repair the damaged SD memory card. In the black command prompt window, type disk part and enter key to open the disk partitioning tool; Then run list disk command to call up a list of disks attached to your PC. In order to display all the partitions on a disk. How could you check your hard disk's size in Windows 7? How do you fix a command prompt window that flashes every few minutes in Windows 7? How do you make an external hard drive bootable to Windows 7?. There are various 3rd-party apps that can be used to fix a failed hard drive. Checking the hard disk for errors in Windows. Linux File System Quotas. In other words, all the same functions exist no matter which shortcut method you use to launch Disk Management, whether it be with Command Prompt, the Run dialog box, Computer Management, or even Windows Explorer. 
There are two methods, via which you can use the Check Disk command. This guide shows how to remove a failed hard drive from a Linux RAID1 array (software RAID), and how to add a new hard disk to the RAID1 array without losing data. Note the disk number of the drive that contains the System Reserved partition. The first step is to identify the disks on the system. align-type must be ‘minimal’, ‘optimal’ or an abbreviation. * Chkdsk displays a status report, and then lists the files that match the file specifications that have noncontiguous blocks. A list of disks will appear in a text format. 5 thoughts on " Getting a list of logical and physical drives from the command line " J2u 30 January 2014 at 4:47 pm. These simple steps will show you how to reduce high disk usage in Windows 10. Recover Formatted Data from Hard Disk Using the Command Prompt on Windows 10 Computer. Utilities hard drives and diskettes - 3. vmdk, which must be absolute, and partitions 1 and 5 of /dev/sda would be made accessible to the guest. Check disk information before using DiskPar. Richard Smedley presents your cut-out-and-keep guide to using the command line on the Raspberry Pi. WMIC is a command-line interface that lets you perform many administrative tasks, including checking hard disk health. REM is the MS DOS batch file method of providing comments. Howe (&)Date: 1998. Good batch file programming practice demands a comment at the head of every batch file explaining its use and syntax. This command is limited to _primary_ PC partitions on a hard disk. Now the virtual disk file virtual size has been increased, we need to allocate the newly added space in the virtual machine os. It is not possible to completely eliminate (clear) the error-counters of a hard disk by any means, because the affected hard disk component (read/write heads, surface, etc. But, it causes permanent data loss. You can also use Resize-VHD PowerShell command to expand the virtual disks. 
2) Click Yes at the User Account Control prompt. Both methods are helpful, choose either of them to apply according to your own preference. Check your Disk Usage on Ubuntu from the command line Lowell Heddings @lowellheddings October 19, 2006, 9:20am EDT Ubuntu Linux, like all unix varieties, includes the du command line utility. Modifies the parameters of an existing virtual hard disk. Here we let you know the way to use vboxmanage modifymedium command to increase the disk space or size of Virtual hard disk in VirtualBox installed on Ubuntu (Linux)/MacOS or Windows. Open a command prompt and type diskpart to start using the command. You can still use Diskpart for managing hard disk on Windows operating systems. Right-click on your disk and then click Properties. The command prompt will be displayed again. TIP: To format USB drives via Command Prompt, follow the directions in our how to format USB drives via Command Prompt. This command will create a virtual hard disk file on our C: drive, with the file name "install1. In the command prompt, type in the following command below followed by one or more switches that you would like to use below with a space between each switch and press Enter. This is the setup stage, no calculation is done. It is a convenient tool for Windows 10 users. If you plan on installing a Linux distribution, check out our guide for partitioning a hard drive on Linux. With the help of fdisk command you can view, create, resize, delete, change, copy and move partitions on a hard drive using its own user friendly text based menu driven interface. Note: Unlike my previous example, running Disk Cleanup via the command prompt does not show you how much disk space it’s going to free. You can also check the status of a drive using a Windows Instrumentation Command-line (WMIC) command by issuing the command wmic diskdrive get status at a command prompt, though you won't get the same level of detail. 
In order to change MBR disk to GPT disk, you might as well use Diskpart command line. What do all the terms in the command mean? dir is a command used to show files in the current directory, but which can also locate data anywhere in the system. It verifies the file system integrity of a volume and fixes logical file system errors. Box 371954, Pittsburgh, PA 15250-7954. On the window that appears, click Properties, then Tools. The value you see under General is the size of the virtual machine file (it is a file with the extension '. This can be done only in the Locked Disk. The PowerShell cmdlets for storage management is new command line tools that help Sysadmins to manage hard disk with PowerShell. attributes disk clear readonly clean create partition primary format fs=fat32 quick (or ntfs). The diskpart prompt will open. Step 6: Now type select disk 1 command in Command Prompt and then press Enter to proceed. Follow the given below instructions to increase the disk size of VirtualBox in Windows. A glitchy or corrupted hard drive can create a moment of panic. Partitioning a drive creates a partition table which allows the operating system to look up and save data on the hard drive. If you are unable to a drive or partition via Disk Management or File Explorer, you can use the Command Prompt to format the drive. You can get to the extended disk cleanup by opening an elevated command prompt and then copying and pasting the following: cmd. This will show the full size and the free size of the hard disk drive in the system. Smartport hard disk emulation is now working on Floppy Emu, with my latest firmware! This mounts an SD memory card as an external 32 MB hard disk, on Apple II machines with Smartport support: the GS, or //c with ROM version 0 or later. This will show you how to use and run Check Disk or chkdsk at startup from within Vista, the command prompt, and the registry to check for corruption and possibly repair errors and bad sectors on the hard drive. 
Recovery environment lets open Command prompt using which you can run the defrag commands for the optimization. We will also look at resizing the ext3 and ReiserFS file systems. Luckily the for command can. How to Run CHKDSK on an External Hard Drive Using Command Prompt. The rm command removes each specified file. When it comes to computer repair, this is one of the first things I do. A list of available hard disks is displayed. you can use the Command. Chkdsk also marks any damaged or malfunctioning sectors on the hard drive or disk as "bad" and recovers any information still intact. Commands like fdisk, sfdisk and cfdisk are general. Execute the following two commands and look for GPT table. This includes magnetic hard disks, CDs/DVDs, and flash drives. NOTE: There is a new version of this tutorial available that uses gdisk instead of sfdisk to support GPT partitions. In this short post we will use diskpart to get disk information on a Windows server. You may also specify a different size (in MB), but make sure the partition you create them in contains about twice the space you specify. But it works as a potential for high disk usage in Windows 10. So let's cover a few scenarios using diskpart. The mkdir command with different parameters is available from the command prompt. How to Run a Chkdsk Function on Windows XP. To help determine that, Dsrfix begins by identifying the location where the requested hard disk is found (IO base port address and whether it is the master or slave device on that channel), and reports the disk size. Shrinking a virtual disk reclaims unused space in the virtual disk and reduces the amount of space that the virtual disk occupies on the host. If you frequently work with the Command Prompt or PowerShell, you may need to copy files from or to an external drive, at such, and many other times, you may need to display the drives within the. You can generally identify the USB drive using its size. 
They can be produced by several causes: system freezes, inappropriate shutdowns, or power outages. In BIOS/UEFI setup, please consult your device manual on how to disable this feature. How to Check a Disk from Windows. Is there a way to disable the second hard disk from the command line, NOT the GUI, because we need to write a script that runs every time the machine boots? First, you need the device names of your disks; for this you can use df or cat /proc/partitions. If you had to schedule the CHKDSK operation, then restart your computer. HD Tune Pro is an extended version of HD Tune which includes many new features such as: write benchmark, secure erasing, AAM setting, folder usage view, disk monitor, command line parameters and file benchmark. Using Diskpart. This is not the same as your quota; rather, this displays how much space is left on the device or devices designated. Find disk free space from the command line. My Dell laptop has been claiming there is a hard drive issue for some time; however, when I reboot and let Windows 7 scan the drive, it either doesn't scan the drive or it scans the drive and still claims there is an issue. In the case of older versions of Windows, users can get to the Command Prompt by going to Start > Run and typing "cmd". Use the "df" command to get a report of available space. It is similar to the fsck command in Unix. Step one verified that Disk 1 is the 3TB drive. This works in most cases, where the issue originated due to a system corruption. The CHKDSK command, short for Check Disk, is used to scan for and fix hard disk errors in Windows.
If you would like to read the other parts in this article series, please go to: Managing Hyper-V From the Command Line (Part 1) and Managing Hyper-V From the Command Line (Part 2). Is there already a way to check the free space of a hard drive in a batch script? Check free disk space using batch commands. My main areas of work are planning, development, management and administration of system infrastructures, focusing on optimizing user processes, enforcing business security, performance enhancements, high availability and infrastructure scalability. While Windows loads, CHKDSK should automatically run and check the drive that you specified earlier. Finding disk capacity and free space of a local computer is easy, but not so easy on a remote computer, especially through a GUI interface. The Linux fsck utility is used to check and repair Linux filesystems (ext2, ext3, ext4, etc.). NOTE: We suggest you create a backup of your existing virtual drive (create a copy of that virtual drive) before trying to increase its size. The previous chapter analyzed file manipulation commands. In the Command Prompt window, type chkdsk G: /f /r /x. Type sel disk number, and then press ENTER. You'll need to reboot, and the disk will be checked before the machine starts. How to Increase Disk Space in VMware. Disk Management shows the full disk size but cannot format it. Press Enter on your keyboard. CMD is the shortcut command for the Windows command line utility.
Is there an easy command line command to check disk space? Filesystem Size Used Avail Use% Mounted on /dev/sda5 56G. – Run Check Disk command (from an elevated command prompt) to find and repair the damaged SD memory card. Page 1 of 3 - How to get partitions path in command prompt? - posted in Boot from USB / Boot anywhere: Anyone know how can i get the partitions path in command prompt in Windows 7? I mean to list the partitions in this way: \Device\Harddisk0\Partition1 \Device\Harddisk0\Partition2 \Device\Harddisk0\Partition3 with the corresponding drive letters. There are several methods available to partition and format the RAW hard drive on windows 10 and lower versions, which are elucidated in the upcoming section of this post. It works by command line and hard to use for many users. Method 5: Run Chkdsk (Check Disk) High disk utilization can also be an indicator of a problem with your hard drive. This tutorial will show you the basics on how to use the command-prompt-based program called diskpart. This article will tell you how to perform the check disk function on Windows XP. This should be oriented toward current users of windows 7 and 8. So let's cover a few scenarios using diskpart. The command must be executed from an elevated command prompt window. Windows Command line get disk space in GB. In this example, I'm setting the specific disk index according to its model name (assuming there's only one like it in the computer). Commands like fdisk, sfdisk and cfdisk are general. There are several different ways to list all the hard drives present in a system through Linux command lines. It is similar to the fsck command in Unix. Other commands. The classic hard drive is prone to corruption because files are constantly copied to it and stored on it. After exiting, the command-line prompt will change back to #. To remove a file or directory in Linux, we can use the rm command or unlink command. 
Restarting will temporarily get rid of all swap files, but they'll come back. Click the Start button, then click All Programs and Accessories. You cannot open any files on the specified drive until chkdsk finishes. The /l:size switch sets the log file size. Open the Command Prompt as Administrator (Run as administrator). Disk Quotas: This feature of Linux allows the system administrator to allocate a maximum amount of disk space a user or group may use. In Windows, we can find the disk usage of a folder using the du tool. It verifies the file system integrity of a volume and fixes logical file system errors. This will show you how to use and run Check Disk or chkdsk at startup from within Vista, the command prompt, and the registry to check for corruption and possibly repair errors and bad sectors on the hard drive. Type in the following command: chkdsk. Disk Check can identify and automatically correct file system errors and make sure that you can continue to load and write data from the hard disk. Check to make sure that your hard drive has at least 10% of its capacity available for use. Just take the drive you want to wipe out of the computer and insert it into another computer, then open a command prompt window on the second computer and type the following commands. From this command, a list of the disks will appear, from which you choose your pen drive by checking its disk size. On the Command Prompt screen, type the command below. Hard disk information from the command prompt. Disk Management From the Command-Line, Part 1 (March 10, 2014): Disk Utility within Mac OS X provides a range of disk management tools, from erasing and repartitioning hard disks to restoring images and repairing volumes. delete partition: delete partition [noerr] [override] — on a basic disk, deletes the partition with focus. Run the CHKDSK tool. Type det disk, and then press ENTER. Command-line disk formatting in Windows Server 2016.
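For the du tool mentioned above (the Unix/Linux original; Windows has a Sysinternals du with similar flags), a small sketch — `/tmp` is only an example path:

```shell
# Total size of one directory tree, human-readable.
du -sh /tmp

# Largest entries directly under the current directory, biggest first,
# sizes in 1K blocks (errors from unreadable entries are suppressed).
du -sk -- * 2>/dev/null | sort -rn | head -n 5
```

Sorting numeric 1K-block sizes (`-sk` with `sort -rn`) rather than human-readable ones avoids the "2G sorts below 500M" problem.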
Q: If I run a DOS program or a DOS command from Total Commander's command line, I always land in c:\ (or another fixed directory) instead of the current directory! A: There is a directory saved in the PIF file associated with the program. The mkdir command with the parameters listed below is only available when you are using the Recovery Console. The easy way would be to delete the volumes in Disk Management console or with diskpart command, clean subcommand. Using format; The format command creates a new root directory and file system for the disk. To do so, type CMD in Start menu search box, right-click on Command Prompt in search results, and then click Run as administrator option. run the following command from an elevated command prompt: fsutil fsinfo ntfsinfo. There are two methods, via which you can use the Check Disk command. ) not really replaced, just the working conditions are changed. Wait for a while to check if the PC performs well. So let's cover a few scenarios using diskpart. The steps are: 1. Smartport hard disk emulation is now working on Floppy Emu, with my latest firmware! This mounts an SD memory card as an external 32 MB hard disk, on Apple II machines with Smartport support: the GS, or //c with ROM version 0 or later. There are two ways to check which partition table your disk is using – you can use the Command Prompter or you can use disk management tool. At the fdisk command line prompt, start with the print command (p) to print the. Basic Command Prompt Usage. Mkdir (Md) Creates a directory or subdirectory. If you wish to overwrite the entire disk in a way, that restoring data is nearly to impossible, you can also use diskpart, but with clean all subcommand. windows vista recovery disk step by step guide - Original writer of this review Clubic. To remove a file or directory in Linux, we can use the rm command or unlink command. If none are available, then you will have to. like here I choose disk 1 because here disk 0 is my. Click Next. 
Finding disk capacity and free space of a local computer is easy but not so easy from a remote computer, especially through a GUI interface. Make sure you don't select the hard disk of your computer and end up losing all your data. Here are the guides for them: Check Disk Command /chkdsk/ The Guide. For instance, I've plugged in a 4GB pen drive. Type list disk and hit Enter. Right-click Command Prompt and then select Run as administrator. You can learn much, much more from the official GNU GRUB Manual 0. View Remote Servers' Drive Sizes Via Command Prompt? d$and e$ drives I need to check and just do a "dir" in a command prompt window. How to repair a corrupted disk (HDD, USB Disk, Memory Card). The “mklink” command allows you to create a symbolic link – a virtual folder that acts as a gateway to some other location on your hard disk (or even on a different volume). exe from a command line by bootable CD. Some Mac users may require the ability to erase a disk or erase a hard drive from the command line on Mac OS, a task which is typically performed through the Disk Utility application from the GUI. File system details also can be seen from this command. It is important to check from time to time that adequate free space remains on the storage devices. To start the Disk Cleanup tool and specify the hard disk to be cleaned by using the command line, follow these steps: Click Start, and then click Run. Click Start > Run. Date and hours - 2. Some of these tweaks and tricks include playing with DNS cache, pinging to the default gateway and using. This bugged me for months; I kept. This guide will teach you how to do a manual disk defrag through the Windows command prompt. Click here for more information and to download a trial version. Commands like fdisk, sfdisk and cfdisk are general. Preparing the SD Card While in some cases you can simply format the CD card or delete the partition using Windows Computer Manager, if the SD card contains non-Windows partitions (i. 
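For the easy local case described above, free space can be checked from a script. A hedged sketch using POSIX df — the mount point `/` and the 90% threshold are assumptions you would adjust:

```shell
#!/bin/sh
# Warn when the filesystem holding MOUNT is more than THRESHOLD percent full.
MOUNT=/
THRESHOLD=90

# Column 5 of `df -P` output is "Use%"; strip the trailing '%' sign.
used=$(df -P "$MOUNT" | awk 'NR==2 {sub(/%/, "", $5); print $5}')

if [ "$used" -gt "$THRESHOLD" ]; then
    echo "WARNING: $MOUNT is ${used}% full"
else
    echo "OK: $MOUNT is ${used}% full"
fi
```

The same check could be pointed at remote servers by running it over ssh, which is one way around the GUI limitation the paragraph mentions.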
But this does not preserve them from being restored. All these changes which we (or system) has done are not applied on the physical hard disk yet. The mkdir command with the parameters listed below is only available when you are using the Recovery Console. Now type, select partition # (your disk number). Remember, this is a Resource Kit. Formatting pen drive using command prompt in Windows operating system is a super easy task and it takes only a few seconds. Monitor disk I/O utilization on servers with Linux and Windows OS. To run CHKDSK command on an external Hard Drive, you need administrative privileges. Get-VM -VMName "ANGGA-DC1" | Get-VMHardDiskDrive. Resize disk in Oracle Linux Xen virtual machine This post will explain how to shrink Xen VDI of Linux Virtual machine. In this post, i am going to discuss about the HP hardware's and how to check the disk failures from command line in Hp hardware's. The method you use to open Disk Management doesn't change what you can do with it. MS-DOS versions 2. There are several applications on the market that you can use to check your hard drive for problems, but Windows also has a built-in utility called "chkdsk" that will scan your hard drive for errors and attempt to fix them. Open command As Administrator. du command - Display the amount of disk space used by the specified files and for each subdirectory. Open the command prompt, and navigate to your PearPC folder (cd\the\folder\its\in) C. Obviously, your mileage will vary. The following options can be used with the CellCLI command: -n — Runs the CellCLI utility in noninteractive mode. Step 1: Open Command Prompt as administrator. Commands to check hard disk partitions and disk space on Linux. It is recommended to run it whenever Windows has shut down abnormally or hard disk performs abnormally. Windows disk management from command prompt? How to get access to hard drives in windows8. 
To begin with you’ll need to open up the command prompt by either right-clicking on the Windows. CMD is the shortcut command for the Windows command line utility. From here I can open up Windows Explorer and start cleaning up or backup to an USB Drives any unused Mp3 files to reclaim back some disk space. jpg Linux provides all the necessary bits to help you find out exactly how much space remains on your drives. sys file (and check the size at the same time), type this command at a command prompt: dir c:\ /as. So you could check all of your servers in one pass. Locate the full path and file name of your virtual machine. Linux File System Quotas. This chapter shows some of the situations where the tools can help you. In Disk Management the drive shows up as Unknown and Not Initialized. In the [email protected] Boot Disk Creator main page, click Access Content. And press Enter. File attributes or NTFS permissions prevent files or folders from being either displayed or accessed when you use either Microsoft Windows Explorer or a Windows command prompt. There are many tools to check the health of your hard disk and restore its functionality. If you had to schedule the CHKDSK operation, then restart your computer. Running CHKDSK through the Windows Command Prompt: Go to the Windows Command Prompt. duplicate one hard disk partition to another hard disk partition: Sda2 and sdb2 are partitions. Commands like fdisk, sfdisk and cfdisk are general. It seems that your hard drive is working overtime, but you are not sure why. MS-DOS versions 5. It may sometimes happen that your PC may stop to a dead halt even though you don’t have many apps opened up/running. Repair the USB by type command repair disk= disk no. The first step in preparing your hard disk is viewing its partition information. The method that I prefer involves writing a few lines in a command prompt window. For hard drive recovery, you also use Check Disk or chkdsk using the command line. 
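The partition-duplication example above (sda2 to sdb2) is classically done with dd. A hedged sketch — the device names are illustrative, and running the real-device form destroys the target's contents:

```shell
# dd copies raw blocks between devices or files. Against real partitions the
# form would be (DESTRUCTIVE to the target; device names are illustrative,
# double-check them with lsblk first):
#   dd if=/dev/sda2 of=/dev/sdb2 bs=4M conv=noerror status=progress
#
# Safe demonstration of the same command on ordinary files:
printf 'some partition data' > /tmp/src.img
dd if=/tmp/src.img of=/tmp/dst.img bs=1M 2>/dev/null
cmp -s /tmp/src.img /tmp/dst.img && echo "copy verified"
```

A larger block size (`bs=4M`) mainly affects throughput, not correctness; `conv=noerror` keeps dd going past read errors on a failing source disk.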
To do so, type CMD in Start menu search box, right-click on Command Prompt in search results, and then click Run as administrator option. exe stop superfetch. NOTE: There is a new version of this tutorial available that uses gdisk instead of sfdisk to support GPT partitions. I will be showing how to format HDDs and USBs using this utility. This guide talks about how to shrink volume without data loss in Windows 10/8/7/Vista/XP and provides four methods to shrink/resize volume on hard drive, USB drive, virtual disk, etc. Ey guys, I have a python script that I need to run from the command prompt and would like to know how to specify other drives from the command prompt as what I have tried is not working and specifying "cd. exe and can also be accessed through the Run line or the Vista/7 search boxes. In this post we are taking a look at some commands that can be used to check up the partitions on your system. To check all files on a FAT disk in the current directory for noncontiguous blocks, type: chkdsk *. This post talks of the command line check disk or chkdsk options, switches & parameters in Windows 10/8/7 & how to use chkdsk commands like chkdsk /r, etc. Back up any data you want to save before proceeding. Fixing Hard Disk Corrupted Boot Volume of Windows The MBR or you can say Master Boot Recode , is the most important Boot Sector of any computer's Hard Disk Drive or SSD. If you have any questions. However, you will need to give the Command Prompt (CMD) administrative privilege in order for chkdsk command to work.
|
2019-12-08 13:10:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31941381096839905, "perplexity": 2694.2170470110805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540510336.29/warc/CC-MAIN-20191208122818-20191208150818-00462.warc.gz"}
|
https://www.aimspress.com/article/10.3934/mbe.2005.2.591
|
92D30.
Use Of A Periodic Vaccination Strategy To Control The Spread Of Epidemics With Seasonally Varying Contact Rate
1. Department of Mathematics, Faculty of Science, Benha University, Benha
2. Department of Statistics and Modelling Science, Livingstone Tower, 26 Richmond Street, Glasgow G1 1XH
Abstract
In this paper, a general periodic vaccination has been applied to control the spread and transmission of an infectious disease with latency. An SEIRS epidemic model with general periodic vaccination strategy is analyzed. We suppose that the contact rate has period $T$, and the vaccination function has period $LT$, where $L$ is an integer. Also we apply this strategy in a model with seasonal variation in the contact rate. Both the vaccination strategy and the contact rate are general time-dependent periodic functions. The same SEIRS models have been examined for a mixed vaccination strategy composed of both the time-dependent periodic vaccination strategy and the conventional one. A key parameter of the paper is a conjectured value $R^c_0$ for the basic reproduction number. We prove that the disease-free solution (DFS) is globally asymptotically stable (GAS) when $R_0^{\mathrm{sup}} < 1$. If $R_0^{\mathrm{inf}} > 1$, then the DFS is unstable, and we prove that there exists a nontrivial periodic solution whose period is the same as that of the vaccination strategy. Some persistence results are also discussed. Necessary and sufficient conditions for the eradication or control of the disease are derived. Threshold conditions for these vaccination strategies to ensure that $R_0^{\mathrm{sup}} < 1$ and $R_0^{\mathrm{inf}} > 1$ are also investigated.
Citation: Islam A. Moneim, David Greenhalgh. Use Of A Periodic Vaccination Strategy To Control The Spread Of Epidemics With Seasonally Varying Contact Rate. Mathematical Biosciences and Engineering, 2005, 2(3): 591-611. doi: 10.3934/mbe.2005.2.591
|
2020-01-27 02:38:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7516965866088867, "perplexity": 3754.8647179104855}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251694176.67/warc/CC-MAIN-20200127020458-20200127050458-00228.warc.gz"}
|
http://stats.stackexchange.com/questions/50131/seasonal-intervention-effects
|
# Seasonal intervention effects
I have estimated an intervention model on the log of a seasonally differenced series of sales figures and have used a step function rather than a pulse function. My question is: how do I interpret the intervention effect ? If the coefficient is -0.05 am I correct in saying that the intervention resulted in a 5% decrease in sales? Is that interpretation correct even though the original series to which the intervention was applied is in seasonal differences (it's a weekly series)?
Well, it's roughly a 5% (you may want to look at a confidence interval for that value to see how wide it is) dip in the average seasonal differences, $y_t - y_{t-s}$ (for seasonal period $s$). – Glen_b Feb 17 '13 at 1:23
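A short check on the "coefficient equals percentage change" reading (the standard log-approximation argument, not from the original thread): with the model on log sales, a step coefficient $\beta$ multiplies the level by $e^{\beta}$, so

```latex
\frac{y_t^{\text{new}}}{y_t^{\text{old}}} = e^{\beta},
\qquad
e^{-0.05} - 1 \approx -0.0488 ,
```

i.e. a coefficient of $-0.05$ corresponds to roughly a $4.9\%$ decrease — close to, but not exactly, $5\%$. The approximation $e^{\beta} - 1 \approx \beta$ is only good for small $|\beta|$.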
|
http://motls.blogspot.it/2017/03/should-mathematics-exams-be-required-at.html?m=1
|
## Monday, March 13, 2017
### Should mathematics exams be required at the end of high school?
In recent weeks, I was involved in various discussions about the education of mathematics in Czechia. One of the topics was the "playful" Hejný method (a long CZ thread) to teach mathematics to kids which may be fun and useful but it's simply not a legitimate replacement for mathematics as I define it.
Yesterday, someone asked me to solve one page of undergraduate problems in mathematical statistics. Compute the averages, variances and standard deviations, medians, quantiles, draw some histograms, use computer software to do a quadratic fit. And also compute the probability that you get all 4 kings out of 32 cards in a pile of 7. An hour of work. I did consider the problems nicely chosen and adequate for someone who should have background in any experimental science etc.
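The card problem mentioned above is a standard hypergeometric count. Assuming a 32-card deck containing 4 kings and a randomly dealt 7-card pile (the setup described, not the exam's own wording), the probability that the pile holds all 4 kings can be sketched as:

```python
from math import comb

# ways to pick all 4 kings, times ways to fill the remaining 3 slots
# from the other 28 cards, over all 7-card piles from 32 cards
p = comb(4, 4) * comb(28, 3) / comb(32, 7)
print(p)  # roughly 0.001, i.e. about 1 in 1000
```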
But they were taken from an exam (a take-home exam?) for mostly female students who want to get a bachelor degree and become nurses. That's tough because I do think that most nurses just can't do a big majority of these things. But the statistics course is mandatory and right now, unlimited nurses do need the bachelor degree. It looks like an anomaly: Ways to deal with a senior who urinated himself could be more useful for them than the calculations of the residual variance of a quadratic fit. ;-) Some lawmakers are preparing a reform that will allow nurses to work without the bachelor degree – the high school plus a year of a "higher school" will be enough. But it's not reality yet.
At the end, however, I have big sympathies for the instructor who is trying hard to convince the students to learn these things. If you asked me, I would probably agree that people with college degrees in science-related disciplines – and medicine is one of them – should be able to do most of these things, at least in principle. It's not possible for most people to know such things and again, I do agree that nurses shouldn't necessarily be "college-educated folks".
The mathematics instructor is universally hated by his students, of course. This is the level that primarily determines my emotions. I just couldn't support the students in their bitter jihad against the noble man. The fact that some soon-to-be-nurses are being pushed to learn things they don't need is one thing. But this guy was hired to teach college-level mathematical statistics and it's simply right to do it right. It's in no way insane to expect the college students majoring in a science-based discipline to know how to do these standard things after two semesters of statistics!
A bigger discussion is one about the required subject in the mandatory Czech final high school exam, the well-known "maturity exam" (Czech: maturita). Unless I misunderstand something, the required subjects are Czech language (which contains some literature etc.) and a foreign language. Mathematics isn't required now and only 25% of the students choose it as their optional subject. That is supposed to change from Spring 2021 if I understand correctly – but the decision may well be reversed again.
Lots of essays have been written that argued in both directions. It's spectacularly obvious that the two sides may be described as those who "like and respect mathematics" and those who don't. Needless to say, the postmodern "guru" of the "playful" teaching of mathematics to kids, Mr Hejný, is on the side of the opponents of the required mathematics maturity exam – he is really a hater of mathematics itself although his fans love to obfuscate this key fact.
Numerous authors have made great points why it's appropriate to make the mathematics exam required. Mr Plzák was comparing mathematics and Czech. Our native tongue is so yummy (so far so good – our Czech teacher would always lovably talk about "červeňoučké jablíčko", emotionally decorated words for a "red apple" that is so boring in English, to explain the superiority of Czech) and you can use it everywhere... Wait a minute. 0.171% of the mankind speaks Czech and the percentage is unlikely to grow substantially. On the other hand, the whole Universe speaks the language of mathematics. You can be sure what this guy thinks and I agree with his witty arguments.
He also argued that an increase in the price of a bun from CZK 1.50 to CZK 2.00 – a big 33% increase – looks negligible to most Czechs, who don't really understand the rule of three and related parts of mathematics. And they may have problems calculating a mortgage, too.
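The bun example is elementary rule-of-three arithmetic; a one-line check (prices from the paragraph above):

```python
old_price, new_price = 1.50, 2.00  # CZK
increase_pct = (new_price - old_price) / old_price * 100
print(f"{increase_pct:.0f}%")  # 33%
```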
Others have pointed out e.g. that Mr Hejný must fail to believe his "amazing" pedagogical method, otherwise he wouldn't be afraid of introducing the mandatory mathematics exam.
I am nicely surprised that Ms Kateřina Valachová, the current social democratic minister of education otherwise considered a puppet of the progressive NGOs, is promoting the required mathematics maturity exam. Sometimes, it has to happen that the least likely folks turn out to stand on the right side of some important enough question.
Opponents of the required mathematics exam like to say ludicrous things – e.g. that the maturity exam should reduce teenagers' obesity (?) instead of encouraging them to learn some wisdom such as mathematics. Great, you can prevent obese people from completing the high school but what is it good for? Will you sleep well? But in most cases, their tirades – just like the monologues of Mr Hejný and his fans – quickly deteriorate to the universal song of math haters: Mathematics is just terrorizing children because it's forcing them to memorize various algorithms instead of being creative and thinking logically.
What they fail to notice is that if someone doesn't ever learn or rediscover most of the basic mathematical algorithms, then it proves that she or he is not thinking logically and she or he is not creative. If you don't know how to solve a set of two linear equations at the end of the high school (or later), it's a very unflattering testimony about your logical thinking and creativity. There are many ways to solve these problems and many others but if you haven't been able to get any of them, it's too bad.
You may be completely ignorant about all actual formulae, identities, laws, algorithms that allow rational people to think about the world around them – and you may still call yourself a logical if not creative person. But in that case, you are just lying to yourself and other idiots around you. Some logical or rational or independent or even creative thinking may be a primary driver that allows people to reinvent rules, laws, algorithms, tricks to figure out various things. But it's not a goal by itself and if you don't learn how to deal with any actual and particular problems or questions in mathematics, it probably means that you didn't have enough logical thinking and creativity, after all.
This systematically repeated talk about creativity and logical thinking by people who self-evidently suck both at creativity and logical thinking annoys me greatly.
Children, teenagers, and even adults should be encouraged to educate themselves especially in the knowledge that is important for other things because many other things can be built upon that primary knowledge – and the knowledge of mathematics is a key example. Some people can't do it well and they shouldn't be automatically sent to gas chambers because of that. But they shouldn't be given amazing stamps and degrees, either. If someone hasn't mastered basics of the high school mathematics, he shouldn't be getting diplomas proving any "broad education" at the high school level.
Similarly, people who haven't mastered basic undergraduate statistics and similar things shouldn't be getting college or university diplomas at colleges and universities focusing on scientific subjects. It's too bad when so many people try to circumvent all these common sense rules. They are basically trying to make the schools and diplomas from the schools useless.
On the contrary, the knowledge that should be left to people's personal opinions – such as their opinions about the value of environmentalism etc. – should be completely avoided at schools. Also, answers that one may obtain by a 30-second Google search and they seem good enough shouldn't be covered at schools, either. Schools should focus on the construction of the skeleton of knowledge that the people can't learn themselves by a simple search or by their ordinary recreational activities.
|
https://socratic.org/questions/59e371b4b72cff59d739cdcc#490189
|
# Question #9cdcc
Oct 15, 2017
$- 4 x$
#### Explanation:
well, the radical signs are unnecessary. $\sqrt{1} = 1$
...so you can rewrite as:
$1 - {x}^{2} - x - {x}^{2} + x$
...and I think this simplifies to:
$1 - 2 {x}^{2}$
And the derivative of this would be:
$- 4 x$
GOOD LUCK
Oct 15, 2017
Given: $\sqrt{1} - {x}^{2} - \frac{x}{\sqrt{1}} - {x}^{2} + x$
Use the substitution, $\sqrt{1} = 1$:
$1 - {x}^{2} - x - {x}^{2} + x$
Combine like terms:
$1 - 2 {x}^{2}$
Differentiate:
$\frac{d \left(1 - 2 {x}^{2}\right)}{\mathrm{dx}} = \frac{d \left(1\right)}{\mathrm{dx}} - \frac{d \left(2 {x}^{2}\right)}{\mathrm{dx}}$
The first term is 0 because the derivative of a constant is 0:
$\frac{d \left(1 - 2 {x}^{2}\right)}{\mathrm{dx}} = - \frac{d \left(2 {x}^{2}\right)}{\mathrm{dx}}$
Use the linear property of the derivative:
$\frac{d \left(1 - 2 {x}^{2}\right)}{\mathrm{dx}} = - 2 \frac{d \left({x}^{2}\right)}{\mathrm{dx}}$
Use the power rule, $\frac{d \left({x}^{n}\right)}{\mathrm{dx}} = n {x}^{n - 1}$:
$\frac{d \left(1 - 2 {x}^{2}\right)}{\mathrm{dx}} = - 4 x$
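Both answers can be sanity-checked numerically with a central-difference derivative (a sketch; $\sqrt{1}$ is replaced by 1):

```python
def f(x):
    # the original expression, with sqrt(1) = 1
    return 1 - x**2 - x / 1 - x**2 + x  # simplifies to 1 - 2x^2

def fprime(x, h=1e-6):
    # central-difference numerical derivative
    return (f(x + h) - f(x - h)) / (2 * h)

for x in (-2.0, 0.5, 3.0):
    print(fprime(x), -4 * x)  # the two columns agree
```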
|
https://www.qtisas.com/fittable
|
QtiSAS & QtiKWS
SA(N)S related software: data reduction, analysis, global instrumental fit of curves and matrices
fittable
Nonlinear Curve(s) Fitting Interface
… under construction …
The Chi-Square Minimization
Nonlinear curve fitting is an iterative procedure employing minimisation of the reduced chi-square value χ² to obtain the optimal parameter values. The reduced chi-square is obtained by dividing the residual sum of squares (RSS) by the degrees of freedom (DOF). Although this is the quantity that is minimized in the iteration process, this quantity is typically not a good measure to determine the goodness of fit. For example, if the y data is multiplied by a scaling factor, the reduced chi-square will be scaled as well.
M-Dimensional Data Sets [M>=1]
Residual Sum of Squares: $RSS=\sum_i^N[y_i-f(x_i)]^2$
Reduced Chi-Square: $\chi^2=\frac{1}{N-p}\sum_i^N w_i[y_i-f(x_i)]^2$, where $N-p$ is the number of degrees of freedom ($N$ data points, $p$ fitted parameters) and $w_i$ are the weights.
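A minimal sketch of the weighted reduced chi-square described above (variable and function names are my own, not QtiSAS API):

```python
def reduced_chi_square(y, f, w, n_params):
    """Weighted residual sum of squares divided by the degrees of freedom."""
    dof = len(y) - n_params
    return sum(wi * (yi - fi) ** 2 for yi, fi, wi in zip(y, f, w)) / dof

# toy data: 4 points, 2-parameter model
y = [1.0, 2.0, 3.0, 4.0]
f = [1.1, 1.9, 3.2, 3.8]
w = [1.0, 1.0, 1.0, 1.0]
print(reduced_chi_square(y, f, w, n_params=2))  # ~0.05
```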
|
https://www.transtutors.com/questions/problem-5-4a-p5-4a-bizkid-company-s-adjusted-trial-balance-on-august-31-2005-its-fis-86354.htm
|
# Problem 5-4A (P5-4A) BizKid Company’s adjusted trial balance on August 31, 2005, its fiscal... 2 answers below »
Help in ACC 225
Problem 5-4A (P5-4A) BizKid Company’s adjusted trial balance on August 31, 2005, its fiscal year-end, follows:
Debit Credit
Merchandise inventory . . . . . . . . . . . . $31,000
Other (noninventory) assets . . . . . . . . 120,400
Total liabilities . . . . . . . . . . . . . . . . . . $35,000
N. Kidman, Capital . . . . . . . . . . . . . . . 101,650
N. Kidman, Withdrawals . . . . . . . . . . . 8,000
Sales . . . . . . . . . . . . . . . . . . . . . . . . . 212,000
Sales discounts . . . . . . . . . . . . . . . . . 3,250
Sales returns and allowances . . . . . . . 14,000
Cost of goods sold . . . . . . . . . . . . . . 82,600
Sales salaries expense . . . . . . . . . . . . 29,000
Rent expense—Selling space . . . . . . . 10,000
Store supplies expense . . . . . . . . . . . . 2,500
Advertising expense . . . . . . . . . . . . . . 18,000
Office salaries expense . . . . . . . . . . . . 26,500
Rent expense—Office space . . . . . . . 2,600
Office supplies expense . . . . . . . . . . . 800
Totals . . . . . . . . . . . . . . . . . . . . $348,650 $348,650
On August 31, 2004, merchandise inventory was $25,000. Supplementary records of merchandising activities for the year ended August 31, 2005, reveal the following itemized costs:
Invoice cost of merchandise purchases . . . . . . . $91,000
Purchase discounts received . . . . . . . . . . . . . . . 1,900
Purchase returns and allowances . . . . . . . . . . . . 4,400
Costs of transportation-in . . . . . . . . . . . . . . . . 3,900
Required
1. Compute the company’s net sales for the year.
2. Compute the company’s total cost of merchandise purchased for the year.
3. Prepare a multiple-step income statement that includes separate categories for selling expenses and for general and administrative expenses.
4. Prepare a single-step income statement that includes these expense categories: cost of goods sold, selling expenses, and general and administrative expenses.
## Solutions:
Mohit G
Answer 1: Sales $212,000, less Sales discounts $3,250, less Sales returns and allowances $14,000 = Net Sales $194,750.
Answer 2: Invoice cost of merchandise purchases $ ...
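Parts 1 and 2 are straight arithmetic on the figures given above (Part 2 uses the usual formula: invoice cost minus discounts and returns, plus transportation-in). A quick check:

```python
# Part 1: net sales
sales = 212_000
sales_discounts = 3_250
sales_returns_allowances = 14_000
net_sales = sales - sales_discounts - sales_returns_allowances
print(net_sales)  # 194750, matching the posted answer

# Part 2: total cost of merchandise purchased
invoice_cost = 91_000
purchase_discounts = 1_900
purchase_returns_allowances = 4_400
transportation_in = 3_900
cost_of_purchases = (invoice_cost - purchase_discounts
                     - purchase_returns_allowances + transportation_in)
print(cost_of_purchases)  # 88600
```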
|
http://mathhelpforum.com/calculus/173665-derivative-function.html
|
# Math Help - derivative of a function
1. ## derivative of a function
if y(x)=x^3+x^2+3
is it true that y(x) = the velocity, y'(x) = the acceleration, and y''(x) = the speed?
thanks
2. Without seeing the entire question it's hard to say.
In general, if y(x) = displacement, then y'(x) = velocity and y''(x) = acceleration.
3. Originally Posted by antikv
if y(x)=x^3+x^2+3
is it true that y(x) = the velocity, y'(x) = the acceleration, and y''(x) = the speed?
thanks
Where is this problem coming from? Typically in problems involving distance, speed, and acceleration, y will be a function of time, not x.
Supposing we have y as a function of time, the usual definitions give y(t) as a displacement, y'(t) as a velocity, and y''(t) as an acceleration. This will not be the case for a function y(x) where x is a distance.
-Dan
4. As others have said, you cannot just pick out a function and ask if it is velocity. What a function represents depends upon the application.
Now, if y(t) is a position function, that is, if y(t) is interpreted as the position of some object at time t, then y'(t) is its velocity and y''(t) is its acceleration. The "speed" is just |y'(t)|, the absolute value (or magnitude) of the velocity.
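These relationships are easy to check numerically; a sketch using the thread's polynomial as a position function y(t):

```python
def y(t):
    # position as a function of time (the thread's example, with x renamed t)
    return t**3 + t**2 + 3

def d(g, t, h=1e-4):
    # central-difference numerical derivative
    return (g(t + h) - g(t - h)) / (2 * h)

t = 2.0
velocity = d(y, t)                      # y'(t) = 3t^2 + 2t -> 16
acceleration = d(lambda s: d(y, s), t)  # y''(t) = 6t + 2  -> 14
speed = abs(velocity)                   # |y'(t)|
print(velocity, acceleration, speed)
```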
|
https://quantumcomputing.stackexchange.com/questions/5652/a-question-on-eastin-knill-theorem
|
# A question on Eastin-Knill theorem
I am reading the paper Restrictions on Transversal Encoded Quantum Gate Sets, Bryan Eastin, Emanuel Knill. I am unable to understand the following lines in the proof.
As the set of all unitary operators is a metric space, a finite number of unitary operators cannot approximate infinitely many to arbitrary precision
I need some references or hints to help me understand the above-quoted lines.
You can't approximate infinite set by some finite subset with the error that is less than half the minimum distance between elements in this finite subset. This is like trying to approximate every real number from $$[0,1]$$ by some finite subset with arbitrary precision. It's impossible.
Probably the easiest way to think about this is to consider an equivalent statement for the real numbers. Consider the range $$[0,1]$$, for instance. You're given a finite set of real numbers within that range. If you think about these values on the number line, it should be fairly obvious that there are necessarily points that are a finite distance away, so I cannot use members of this set as arbitrarily accurate approximations for any real number in the range. (In this case, if your set contains $$n$$ elements, it would be best to have them at values $$(2i-1)/(2n)$$ for $$i=1$$ to $$n$$ so that all real numbers in the range are within $$1/(2n)$$ of some element in the set. But if you try to bunch some closer together, obviously other have to get further apart).
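The answer's construction can be checked directly: place $n$ points at $(2i-1)/(2n)$ and measure the worst-case distance from any $x \in [0,1]$ to its nearest point (a sketch, with an arbitrary choice of $n$):

```python
n = 10
# the answer's optimal finite set: midpoints (2i-1)/(2n)
points = [(2 * i - 1) / (2 * n) for i in range(1, n + 1)]

def worst_error(points, m=100_000):
    # largest distance from a dense grid of x in [0, 1] to its nearest point
    return max(min(abs(x - p) for p in points)
               for x in (i / m for i in range(m + 1)))

print(worst_error(points))  # 1/(2n) = 0.05: no finite set of 10 points does better
```

The error floor 1/(2n) never reaches zero for any finite n, which is the point of the quoted argument.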
|
https://raweb.inria.fr/rapportsactivite/RA2016/deducteam/uid46.html
|
Section: New Results
Confluence
In the $\lambda \Pi$-calculus modulo, congruences are expressed by rewrite rules that must enjoy precise properties, notably confluence, strong normalization, and type preservation. A difficulty is that these properties depend on each other in calculi of dependent types. To break the circularity, confluence is usually proved separately on untyped terms. Another difficulty then arises: computations do not terminate on untyped terms. A result of van Oostrom makes it possible to show confluence of non-terminating left-linear higher-order rules, provided their critical pairs are development closed. This result was used for the encodings of HOL, Matita, and Coq up to version 8.4. Encoding the most recent version of Coq requires rules for universes that are confluent on open terms, whereas confluence on ground terms sufficed before. The encoding we recently developed for this new version of Coq has higher-order rules which are not left-linear, use pattern matching modulo associativity, commutativity and identity, and whose (joinable) critical pairs are not development closed. We have therefore developed a new, more powerful result for proving confluence of that sort of rules, provided non-linear variables can only be instantiated by first-order expressions [18], [19].
|
https://letterstonature.wordpress.com/2008/07/
|
## Are The Dice Loaded?
I am currently reading Universes (1989) by John Leslie, Professor Emeritus of Philosophy at The University of Guelph, Ontario, Canada. The book, praised on the back cover by Antony Flew and Quentin Smith, discusses the issues surrounding the “fine-tuning” of the constants of nature, initial conditions, and even the forms of the laws of nature themselves to permit the existence of observers. I will not go into details of the fine-tuning here – readers are referred to “The Anthropic Cosmological Principle” by Barrow and Tipler.
This is a huge and hugely controversial area and I don’t want to bite off more than I can chew. (Leslie: “The ways in which ‘anthropic’ reasoning can be misunderstood form a long and dreary list”). Instead, I want to consider a single point made by Leslie, in response to the following quote from M. Scriven’s “Primary Philosophy” (1966):
If the world exists at all, it has to have some properties. What happened is just one of the possibilities. If we decide to toss a die ten times, it is guaranteed that a particular one of the $6^{10}$ possible combinations of ten throws is going to occur. Each is equally likely.
The argument is as follows: we cannot deduce anything interesting from the fine-tuning of the universe because the actual set of constants/initial conditions is just as likely as any other set. It is this claim (and this claim only) that I want to address, because I found Leslie’s treatment to be calling out for an example.
Read Full Post »
## Astronomy Royalty
I’m planning a series of posts on public speaking in science, which I generally find to be infuriatingly poor. There are exceptions, thankfully. The closing address of “Putting Gravity to Work”, a conference held over the last week at the IoA, Cambridge, was given by Martin Rees. It was a class above the normal “cure-your-insomnia” talk given at conferences and seminars. I hope to dissect the talk in more detail in a later post, but for now here’s an action shot of the Astronomer Royal in action:
Read Full Post »
## Big Bad Black Holes
I was sitting with some fellow tourists on a three-day coach tour of Ireland yesterday when the topic of what I do for a living came up. After being briefly mistaken for an astrologer (I really should start charging money for my services), my mother-in-law Christine mentioned my interest in black holes.
(Aside: black holes are something of an in-joke for my in-laws. Christine was asked by an work mate what her son-in-law does, and the conversation went something like this:
Christine: He’s measuring the black hole, or something.
Office mate: Well it’s about time someone did that!)
Returning to Ireland, a fellow tourist remarked that black holes “are bad things, because they suck everything up”. I’ve encountered this opinion a number of times before. The mental image of a black hole as a giant cosmic vacuum cleaner, destroyer of worlds, is surprisingly common. But the image, as any astronomer will tell you, is wrong.
Read Full Post »
|
https://math.stackexchange.com/questions/1741159/linear-transformation-problem-from-r4-to-r2/1741184
|
# Linear transformation problem from R^4 to R^2
Let's look at $T : \mathbb{R}^4 \to \mathbb{R}^2$. Prove that $T$ is a linear transformation,
where : T$\begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix}= \begin{bmatrix} x + z \\ y + w \end{bmatrix}$
Proof: Let $A$ and $B$ be arbitrary vectors
$A= \begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ a_4 \end{bmatrix}$ and $B= \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \end{bmatrix}$
$$T(cA + B) = \begin{bmatrix} ca_1 + ca_3 + b_1 + b_3 \\ ca_2 + ca_4 + b_2 + b_4 \end{bmatrix} = \begin{bmatrix} ca_1 + ca_3 \\ ca_2 + ca_4 \end{bmatrix} + \begin{bmatrix} b_1 + b_3 \\ b_2 + b_4 \end{bmatrix} = c \begin{bmatrix} a_1 + a_3 \\ a_2 + a_4 \end{bmatrix} + \begin{bmatrix} b_1 + b_3 \\ b_2 + b_4 \end{bmatrix} = cT(A)+T(B)$$
Also, $$T \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\0 \end{bmatrix}$$ so $T$ maps the zero vector to the zero vector. Is this a sufficient proof?
All you need to show is that $T$ satisfies $T(cA+B) =cT(A) +T(B)$ for any vectors $A,B$ in $\mathbb{R}^4$ and any scalar from the field, and $T(0) =0$. It looks like you got it. That should be sufficient proof.
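The algebra above can also be double-checked numerically on random vectors (a sketch, not part of the proof itself):

```python
import random

def T(v):
    # T(x, y, z, w) = (x + z, y + w)
    x, y, z, w = v
    return (x + z, y + w)

random.seed(0)
for _ in range(100):
    A = [random.uniform(-5, 5) for _ in range(4)]
    B = [random.uniform(-5, 5) for _ in range(4)]
    c = random.uniform(-5, 5)
    lhs = T([c * a + b for a, b in zip(A, B)])              # T(cA + B)
    rhs = tuple(c * ta + tb for ta, tb in zip(T(A), T(B)))  # cT(A) + T(B)
    assert all(abs(l - r) < 1e-9 for l, r in zip(lhs, rhs))
print("linearity holds on 100 random samples")
```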
|
https://math.stackexchange.com/questions/2151296/estimate-gaussian-background-noise
|
# estimate gaussian background noise
Let's say I have 3 Random Variables $X_1, X_2, X_b$ where "$b$" stands for "background". Each one of them is Gaussian with $N(\mu_i, \sigma^2_i)$ for $i\in\{1,2,b\}$. I will assume $\mu_b=0$. Now I make $N$ experiments which measure the variables $X_1+X_b, X_2+X_b$ (where $X_b$ is measured at the same time for both of them) and I want to estimate all the $\mu_i$'s and $\sigma_i$'s.
I know how to estimate the means by taking the average of the results (because $\mu_b = 0$). I can also easily estimate $\sigma_j^2 + \sigma_b^2$, because $X_j+X_b \sim N(\mu_j,\sigma_j^2 + \sigma_b^2)$... but I need another estimator so I can separate the individual $\sigma$'s!
I thought about using the off-diagonal elements of the covariance matrix (which should equal $\sigma_b^2$), but I run into a big problem when they are negative. Can someone help me find the missing estimator?
• But I know that $V(Y_1) - V(Y_2) = \sigma_1^2 - \sigma_2^2$, and then using your equation I find that $\sigma_2^2 = \frac{V(Y_1-Y_2) - V(Y_1) + V(Y_2)}{2}$, and I think the right side can be negative for some values of $Y_1, Y_2$ :( – RyArazi Feb 20 '17 at 10:18
• I know they are not equal. But using the fact that $V(Y_j) = \sigma_j^2 + \sigma_b^2$, I can get what I wrote above after some algebra... and it's not always negative (I think) – RyArazi Feb 20 '17 at 12:00
• If my Comment is not helpful, please disregard it and try something else. – BruceET Feb 20 '17 at 17:27
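To make the moment-based idea concrete, here is a minimal simulation sketch (not from the thread; the true parameter values and variable names are my own assumptions). Since $X_b$ is the only term shared between the two channels, $\mathrm{Cov}(Y_1, Y_2) = \sigma_b^2$, and each $\sigma_j^2$ can then be recovered as $V(Y_j) - \mathrm{Cov}(Y_1, Y_2)$, clipping any negative estimate to zero:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000
mu1, mu2 = 1.0, -2.0
s1, s2, sb = 0.5, 1.5, 1.0  # true sigma_1, sigma_2, sigma_b (assumed values)

# Simulate the paired measurements Y_j = X_j + X_b, with X_b drawn once
# per experiment and shared by both channels.
Xb = rng.normal(0.0, sb, N)
Y1 = rng.normal(mu1, s1, N) + Xb
Y2 = rng.normal(mu2, s2, N) + Xb

# Means: the sample averages are unbiased because mu_b = 0.
mu1_hat, mu2_hat = Y1.mean(), Y2.mean()

# Cov(Y1, Y2) = sigma_b^2, since X_b is the only shared term.
sb2_hat = np.cov(Y1, Y2)[0, 1]

# Var(Y_j) = sigma_j^2 + sigma_b^2, so subtract the covariance estimate;
# clipping at zero guards against negative estimates in small samples.
s1_2_hat = max(Y1.var(ddof=1) - sb2_hat, 0.0)
s2_2_hat = max(Y2.var(ddof=1) - sb2_hat, 0.0)

print(mu1_hat, mu2_hat, sb2_hat, s1_2_hat, s2_2_hat)
```

With a large $N$ the estimates land close to the true values; the clipping step is exactly the compromise needed for the negative-value problem raised in the question, at the cost of introducing a small bias near $\sigma_j^2 = 0$.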
Aristotelian Virtue Ethics Introduction. The variable s0 indicates that the algorithm is analysing the input string, the text analysing is set in this variable in the line 2. A line feature used to keep certain points from being used in the calculation of new values when a raster is interpolated. Own Way Lyrics | (lyrics are written by Girraff-fa-fa-fa-fa-fa-fa(y)) Gabriella I gotta say what's in my mind Something about us doesn't seem right these days life keeps getting in the way Whenever we try, somehow the plan is always rearranged It's so hard to say But I've gotta do what's best for. the subject of line 1-4 is best described as. We seek revolution through the education of the masses. Solution: Slope m = 10 − 2 7 − 3 Slope m = 10 - 2 7 - 3. Remind kids that asking for help is a good thing, and practice doing it. Getting started is simple — download Grammarly's extension today. Only the sample points on the same side of the barrier as the current processing cell will be considered. Choose the best word or phrase to complete the sentence. The following scatter plot shows Pam's training as she prepares to run a 6 mile race at the end of the month. Browse through all study tools. Take good note of the parameters: argc is an integer representing the number of arguments of the program. illustrates how certain activities can enhance the quality of a person's life. ) saw from the looks on the others' faces how serious the accident was. Line 5,6 we normalise the single example and then put it through the prediction model. Praise kids when they speak up. If you’re looking to win races and progress through the game as cheaply as possible, this is the car to do it in. Draw students' attention to the meanings of some of the. And if you make your way through the town you can visit the Watford Museum in the High Street which has materials on printing and paper-making on 4. 1000 people are running as fast as they can cross country. Full PDF Package Download Full PDF Package. 
) The tangent line equation we found is y = -3x - 19 in slope-intercept form, meaning -3 is the Here's a run-through of the whole process again. How can more women get into politics?. Social Constructionism Social constructionism is a theory of knowledge that holds that characteristics typically thought to be immutable and solely biological—such as gender, race, class, ability, and sexuality—are products of human definition and interpretation shaped by cultural and historical contexts (Subramaniam 2010). There are many types of lines: thick, thin, horizontal, vertical, zigzag, diagonal, curly, curved, spiral, etc. developed b Fewer Japanese tourists travelled last year, but they brought more goods in their home market. In Multiple Linear Regression, there is one quantitative response and more than one predictor or independent variable. Find your yodel. What is a Nation? 1. desire for the beloved that the speaker carries "patronized" is best interpreted to mean. Create a sense of urgency. It means that we need to take a long, hard look at the social and economic systems that underlie how we live, work and play. Do NOT specify any course urls with this arg, if you do, this arg is ignored. This app is supported by Android, iOS, Windows, and Mac. Compiler scans the whole program in one go. Social Sciences. When the distribution is skewed to the right, mean < median < mode c. The Tempest first appeared in print as the first play in the 1623 Folio of Shakespeare. "In this case the focus is always on the product and how it can be improved. Open the exe in a terminal: c:\program_files\dash_app> dash_app. Think of the cypherpunks who understood the value of the internet in the 1990s and directly battled the U. Paul Movie Theaters: A Complete Guide. 5) A workstation/handler PC is smaller than a laptop, but still has a keyboard. 
) copy the code in, save, then right click on the file on the desktop and choose 'compile' If you want more than 10 clicks a second, change the 'sleep 95' line to a smaller value. DSL service can be delivered simultaneously with wired. OTHER QUIZLET SETS. Formula : Slope m = yB − yA xB − xA Slope m = y B - y A x B - x A. number best represents the mean income and which represents the median income? $330,000 is the mean and$675,000 is the median. DOCUMENT 2. ) realized his mistake in holding up his hand b. eD = ΔQ/¯Q ΔP / ¯P e D = Δ Q / Q ¯ Δ P / P ¯. I'll call you back , maybe the line will be better. Thus, it is important to understand how neuroticism manifests in everyday experience. py serve parameters. The value of closing stock will be higher in absorption costing than in marginal costing. Estimating with linear regression (linear models) This is the currently selected item. Prices are above the rising 50-day moving average line. Use keywords in your subject line. 01, which of the following statements is true? a. I recently spent time with someone who tested positive for COVID-19. diminish the vitality. Make an standalone executable file for windows. Agree on expectations. There are 5 main types of lines in art: vertical lines, horizontal lines, diagonal lines, zigzag lines, and curved lines. Keep your subject line simple. in line 5, nativity refers primarily to the. The Bottom Line. 45 After 15 234. Firstly, students are more interested when they can try to figure out the idiomatic expression and guess its meaning by themselves. If one subtracts these estimated individual contributions from the best estimate MTL–MSL, then one obtains the statistics shown in lines 5–7 of Table 1. 5 from North-pole to South-pole through the iron guidance block 1. difficulty, line 21 A. 
Check that students understand the meaning of cross-functional task force (a group of people with different backgrounds or expertise working towards a common 4 Read through the words in the box and the compound adjectives with the class. When the information is available to the people, systemic change will be inevitable and unavoidable. Finish each of the following sentences in such a way it is as similar as possible in meaning to the sentence before it. Andrei Nedea. 6L Duramax ® Turbo Diesel Engine are covered for 5-years/100,000 miles †. An integer is a rational number. Any line y = a + bx that we draw through the points gives a predicted or fitted value of y for each value of x in the data set. Install the msi on the desired computer. Na hora de escolher uma entre as melhores casas para alugar em Americana, é fundamental tomar certos cuidados. You should read through this chapter. I was born and (1) … up in Norfolk. Starting with the point on the left, sketch a right triangle, going from the first point to the second point. an inexorable procession. In the similes in lines 1-5, the "harpsichord" and the "boudoir" primarily serve to evoke which of the following Elegance and bygone days In context, the image of the lenny in line 14 is appropriate because its. When the distribution is symmetric and unimodal, mean = median = mode d. d) that has the smallest sum of squared errors. Is social media good for you? Did you ever post mean comments online? Did you ever post mean comments online? The influencers making a difference on TikTok. A The Kite Runner then becomes a story of Amir's journey in search of a way to make up for what he did. If you don't find what you are looking for in any of the dictionaries, search or ask in the forums. Through three cheese trees three free fleas flew. The model will contain the constant or intercept term, β 0, and more than one coefficient, denoted β 1, …, β k, where k is the number of predictors. 
When kids run into a challenge, ask what they think would help. 1 & 5 Seagull AS (QUESTIONS & CORRECT ANSWERS. government to ensure code was regarded as free speech. A River Runs Through It: Part 1. Millions trust Grammarly's free writing app to make their online writing clear and effective. A survey on a sample of 203 students from this university yielded an average GPA of 3. States have used academic standards in schools for decades. E As well as portraying the close relationship between two boys, Amir and Hassan, it describes the last peaceful days of Afghanistan, before revolution and war would destroy the country. Note, when you change the mean the whole shape of the distribution does not change, it just shifts from left to right. Get the best of Sporcle when you Go Orange. Due to a planned power outage, our services will be reduced today (June 15) starting at 8:30am PDT until the work is complete. A) 1 only B) 2 only C) 3 only D) 1 and 2 only E) 1, 2, and 3. In a histogram, the proportion of the total area which must be to the right of the mean is a. Slope refers to the angle, or grade, of an incline. Ensure healthy lives and promote well-being for all at all ages. are running. 01 probability that the population mean is smaller than 529. was found to follow a normal distribution with mean 3. A lot of trains late today due to the heavy storms. , can help you work out any differences you may have and come to a consensus for how your shared class will run. Godzilla Battle Line is the final game of the three-game Godzilla series, after Run Godzilla and Godzilla Destruction.
chz gki hzk yho kaj hqs itk fhm pan iqd vmn qgw svh bdg mzr ogw pom mqo ooj trb
https://www.physicsforums.com/threads/position-velocity-acceleration-jerk-jounce-snap-crackle-pop-ad-infinitum.559413/
# Position, velocity, acceleration, jerk, jounce, snap, crackle, pop ad infinitum?
1. Dec 12, 2011
### Nikarasu M
Hello,
Something on my mind today...
As you keep differentiating functions that are sometimes used to represent the displacement of objects, you eventually end up with a function that has discontinuities and jumps in its path.
Simple example for the sake of illustration - an object at rest starts accelerating at 1 m/s^2 at t=0.
f(x) = 0, for x $\leq$ 0
f(x) = x^2, for x>0
f'(x) = 0, for x $\leq$ 0
f'(x) = x, for x>0
f''(x) = 0, for x $\leq$ 0
f''(x) = 1, for x>0
How are these jumps and discontinuities dealt with in a real-world physical sense? A jump in position would imply instantaneous, faster-than-light travel, so why is there no issue for a jump in acceleration?
I read about jerk and jounce and so on (the 3rd and 4th derivatives), but what are the rules and mechanisms for when and how each derivative is 'allowed' to make these instantaneous changes in value?
Maybe there's an infinite regression of them - an infinite derivative, maybe, with some kind of mathy limit involved?
Maybe I'm looking at simplified math textbook examples/models (polynomials) and reading too much into them, and the real-world physical equations account for my conundrum?
Is t=0 the big bang? And is everything, in a deterministic sense, always moving already, so these jumps don't exist?
2. Dec 12, 2011
### JHamm
Re: Position, velocity, acceleration, jerk, jounce, snap, crackle, pop... ad infinitu
How do you get from one point to another without jumping? Also, your position and velocity graphs won't have discontinuities, and your acceleration will just "jump" when you start applying your force, but in reality it too will grow gradually as the force is applied up to its maximum.
3. Dec 12, 2011
### Nikarasu M
ok,
so then jerk and jounce have the kink, then the discontinuity?
Or some other nth derivatives?
Or are you saying the kinks and discontinuities end at the infinity-th derivative? (What is it called?)
4. Dec 12, 2011
### chrisbaird
In the real physical world, there is no discontinuity in the position versus time graph, nor in the velocity, acceleration, jerk, etc. graphs. The discontinuities only arise mathematically when we use an idealized functional expression to represent these parameters, which is only an approximation (although often a very good one). For example, if I have a shopping cart sitting at rest, then at some time t, I start pushing it with a constant force so that it accelerates, you could model the acceleration as jumping from 0 to a at time t. But if you actually measured the acceleration with a high-resolution accelerometer, you would find when you zoom way into the data that the acceleration smoothly changes from zero to a very quickly. Not only that, but also there is a finite time required for the effect of your push on the handles to propagate through the shopping cart to the front, so you will actually generate vibrations in the shopping cart (not enough to feel though - the vibrations you feel when pushing a shopping cart are from a bumpy floor/non-round wheels).
This actually leads to significant errors in numerical modeling of physical systems. For instance, in a similar way, three-dimensional objects are usually represented in a computational system as having perfectly sharp edges, meaning that there is a discontinuity in the mass as a function of space at the surface, going from object to no-object. If you look close enough at real objects, their surfaces are fuzzy. Depending on the frequency and resolution, this idealization can lead to significant effects. Some computational models have added algorithms that smooth out such effects in the end.
5. Dec 12, 2011
### Nikarasu M
ok,
I fear an infinite regression of questioning now ;)
so, you're saying if you zoom in on the step function you'll see it's actually curved - differentiate this and eventually you'll end up with what looks like another step function - zoom in again - it's also curved - and so on ... (infinitely?)
My question is maybe getting more philosophical (?) - but how does anything begin?
It's like the step function, in its digital on-off sense, is the 'decision' at time t to make a move.
Say you have something moving and you reverse its direction - which level of differentiation down all the functions first crosses the abscissa (time axis)? The infinite one? This is hard for me to get my head around.
6. Dec 12, 2011
### Nikarasu M
Just realised my calculus is a bit off in my original post - but it shouldn't affect the logical flow of my query ...
7. Dec 13, 2011
### chrisbaird
I believe the problem is that you are still thinking objects have hard edges. Consider a glass marble traveling at constant speed, then it strikes a wall and reverses direction. The collision of the marble and the wall is actually an inter-atomic electromagnetic field interaction. The electromagnetic fields don't just extend to the radius of an atom and then drop to zero; they extend out to infinity. The fields of an atom will be very weak far away, but not zero. They die down smoothly out to infinity. That means that even when the marble is two feet away from the wall, it is already feeling some force from the wall (amazingly weak, but non-zero). So there is no "begin" or switch-on point for a collision (which would lead to a discontinuity in some derivative), because in reality the interaction is always taking place. A marble two miles away heading for a brick wall is already interacting with it, and in a sense, experiencing part of the collision process (although in reality such a force at that distance will be buried in the noise of other stronger forces, e.g. thermal, friction, wind, seismic). For practical purposes, we must make some approximating thresholds. For instance, approximate that when two atoms are farther apart than 100 atomic radii, their interaction strength is so weak that it can be treated as zero.
8. Dec 13, 2011
### Nikarasu M
Thanks Chris,
I see what you're saying here - I'll ponder it for a while and see how it sits (first read, I'd say it's sitting well)
Nick
9. Dec 13, 2011
### torquil
It is not impossible for a function that starts at zero to become nonzero and at the same time have all continuous derivatives. Consider e.g:
x(0) = 0
x(t) = exp(-1/t^2) for t>0
All its derivatives at t=0 are zero, but the function still deviates from zero at any t>0. So this is not a mathematical impossibility.
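This can be checked numerically with a short Python sketch (not part of the thread; it uses plain finite differences): the function is nonzero for every t > 0, yet its slope shrinks to zero as t approaches 0 from above, consistent with all derivatives vanishing at the origin.

```python
import math

def x(t):
    # smooth "switch-on": 0 for t <= 0, exp(-1/t^2) for t > 0
    return math.exp(-1.0 / (t * t)) if t > 0 else 0.0

def slope(t, h=1e-5):
    # central finite difference for x'(t)
    return (x(t + h) - x(t - h)) / (2 * h)

print(x(0.5))      # ~0.018: already nonzero well before t = 0.5
print(slope(0.1))  # effectively zero this close to t = 0
```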
10. Dec 13, 2011
### Nikarasu M
My question was mostly re. the simple examples you find in math textbooks - maybe I need to read up more on physics. Does this function model a physical process?
https://www.semanticscholar.org/paper/A-generalization-of-the-theorem-Geneson/d3628727aa67815a25e1b41ae0394d8ef22903a0
Corpus ID: 211096618
# A generalization of the Kővári–Sós–Turán theorem
@article{Geneson2020AGO,
  title={A generalization of the K{\H{o}}v{\'a}ri-S{\'o}s-Tur{\'a}n theorem},
  author={Jesse Geneson},
  journal={arXiv: Combinatorics},
  year={2020}
}
• Jesse Geneson
• Published 2020
• Mathematics, Computer Science
• arXiv: Combinatorics
We present a new proof of the Kővári–Sós–Turán theorem that $ex(n, K_{s,t}) = O(n^{2-1/t})$ for $s, t \geq 2$. The new proof is elementary, avoiding the use of convexity. For any $d$-uniform hypergraph $H$, let $ex_d(n,H)$ be the maximum possible number of edges in an $H$-free $d$-uniform hypergraph on $n$ vertices. Let $K_{H, t}$ be the $(d+1)$-uniform hypergraph obtained from $H$ by adding $t$ new vertices $v_1, \dots, v_t$ and replacing every edge $e$ in $E(H)$ with $t$ edges…
http://metrologyrules.com/uncertainty-of-measurement-relative-humidity-via-dew-point-and-air-temperature/
# Uncertainty of measurement: relative humidity via dew point and air temperature
This article discusses the calibration of a relative humidity (RH) hygrometer by comparison with a condensation (“chilled mirror”) dewpoint hygrometer and a thermometer measuring air temperature (“dry-bulb temperature”, in humidity parlance). It focuses on estimation of the uncertainty to be associated with the measurement result.
The comparison between the RH hygrometer (Unit Under Test, or UUT) and the dewpoint hygrometer and thermometer (together forming the reference standard) is performed in a temperature- and humidity-variable chamber. The dewpoint hygrometer measures dew-point (or frost-point, if below 0 °C) temperature, $t_\mathrm{d}$, with an uncertainty of 0.1 °C (coverage factor k=2). A resistance thermometer is used to measure the air temperature, $t_\mathrm{air}$, also with an uncertainty of 0.1 °C (k=2). (No correction is applied for self-heating of the resistance thermometer, as it was calibrated in air, that is, in similar conditions to those in which it is used.) The temperature uniformity of the chamber is specified by the manufacturer to be ± 0.3 °C. (We assume a coverage factor of k = √3.)
Measurements are performed at temperatures of 5 °C, 20 °C and 50 °C, and at relative humidities of 10 %rh, 50 %rh and 90 %rh.
First, we must be able to calculate relative humidity from measured values of dew point and air temperature. Relative humidity is defined as a ratio of water vapour pressures: $RH = e / e_\mathrm{s}$, where $e$ is the actual vapour pressure of water and $e_\mathrm{s}$ is the saturation vapour pressure of water at the prevailing temperature [Beginner’s guide to humidity measurement, NPL Good Practice Guide No 124, p 17]. (Here we express RH from 0 to 1, not 0 %rh to 100 %rh.) We will use the Magnus formula to calculate water vapour pressures: $e_\mathrm{s}(t) = A\,\varepsilon^{\,bt/(c+t)}$, where $e_\mathrm{s}$ is saturation water vapour pressure (in Pa) at temperature $t$ (in °C), $\varepsilon$ is used for the irrational number 2.718… (to distinguish it from the symbol for vapour pressure), and $A$, $b$ and $c$ are constants [Guide to the measurement of humidity, Institute of Measurement and Control, 1996, p 53]. The Magnus formula has an uncertainty of less than 1.0 % (k=2) from -65 °C to 60 °C. We will not apply the water vapour enhancement factor to $e$ or $e_\mathrm{s}$, to account for the presence of gases other than water vapour, as it would cancel in the ratio $e / e_\mathrm{s}$.
How do we determine the actual water vapour pressure, $e$? It is the saturation water vapour pressure at the dew-point temperature $t_\mathrm{d}$, by the definition of dew point [Beginner’s guide to humidity measurement, p 2]. So, applying the Magnus formula, $e = e_\mathrm{s}(t_\mathrm{d}) = A\,\varepsilon^{\,bt_\mathrm{d}/(c+t_\mathrm{d})}$.
We may also need to calculate dew point from relative humidity and air temperature. To achieve this, first calculate vapour pressure $e = RH \cdot e_\mathrm{s}(t_\mathrm{air})$, then manipulate the Magnus formula to obtain $t_\mathrm{d} = c \ln(e/A) / \left(b - \ln(e/A)\right)$ [Guide to the measurement of humidity, Institute of Measurement and Control, 1996, p 54].
We will also need the sensitivity coefficients $\partial RH/\partial t_\mathrm{d}$ and $\partial RH/\partial t_\mathrm{air}$:

$\dfrac{\partial RH}{\partial t_\mathrm{d}} = RH \cdot \dfrac{bc}{(c+t_\mathrm{d})^2}$ $\qquad \dfrac{\partial RH}{\partial t_\mathrm{air}} = -RH \cdot \dfrac{bc}{(c+t_\mathrm{air})^2}$
Evaluating the sensitivities at typical temperatures $t_\mathrm{d}$ = -20 °Cfp to 50 °Cdp and $t_\mathrm{air}$ = 5 °C to 50 °C, we see the familiar rule-of-thumb that $\partial RH/\partial t_\mathrm{d} \approx 0.06 \cdot RH$ or $\partial RH/\partial t_\mathrm{air} \approx -0.06 \cdot RH$, in other words, RH changes by approximately 6% of the value for a change of 1 °C in dew point or air temperature. (The symbols °Cfp and °Cdp, for “degrees Celsius frostpoint” and “degrees Celsius dewpoint”, are commonly used to distinguish dew-point temperature, a measure of humidity, from air temperature.) In other words, if $t_\mathrm{d}$ increases from 8 °Cdp to 9 °Cdp (at $t_\mathrm{air}$ = 20 °C), RH changes from 0.46 (or 46 %rh) to 0.49 (or 49 %rh), an increase of 3 %rh, or 6% of value. If $t_\mathrm{air}$ increases from 49 °C to 50 °C (at $t_\mathrm{d}$ = 48 °Cdp), RH changes from 0.95 (95 %rh) to 0.90 (90 %rh), a decrease of 5 %rh, or 6% of value.
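The conversions described above can be sketched directly in Python. Note that the numeric Magnus constants used below (611.2 Pa, 17.62, 243.12 °C) are a commonly used parameterization assumed here for illustration; they are not necessarily the values used in the referenced guide.

```python
import math

# Assumed Magnus constants (a common parameterization; the article's own
# constants may differ): A in Pa, B dimensionless, C in degrees Celsius.
A, B, C = 611.2, 17.62, 243.12

def e_s(t):
    """Saturation water vapour pressure (Pa) at temperature t (deg C)."""
    return A * math.exp(B * t / (C + t))

def rh(t_d, t_air):
    """Relative humidity (0..1) from dew point and air temperature."""
    return e_s(t_d) / e_s(t_air)

def dew_point(relative_humidity, t_air):
    """Invert the Magnus formula: dew point (deg C) from RH and air temperature."""
    ln_ratio = math.log(relative_humidity * e_s(t_air) / A)
    return C * ln_ratio / (B - ln_ratio)

# Reproduces the worked example: at 20 deg C air temperature, moving the
# dew point from 8 to 9 deg Cdp takes RH from about 0.46 to about 0.49.
print(round(rh(8, 20), 2), round(rh(9, 20), 2))
```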
Here are the dew points, for the air temperature and relative humidity measurement points mentioned above:
| RH \ Air temp | 5 °C | 20 °C | 50 °C |
|---|---|---|---|
| 10 %rh | **-21.8 °Cfp** | -11.2 °Cfp | 10.1 °Cdp |
| 50 %rh | -4.0 °Cfp | **9.3 °Cdp** | 36.7 °Cdp |
| 90 %rh | 3.5 °Cdp | 18.3 °Cdp | **47.9 °Cdp** |
We will evaluate uncertainties for the three combinations indicated in bold in the table above.
We estimate hysteresis of the UUT, a capacitive RH sensor, to vary from zero at 10 %rh to 0.6 %rh (k=1) at 50 %rh to zero at 90 %rh.
5 °C, RH = 0.10 (10 %rh):
| Component | Value | u (k=1) | Sensitivity | u(RH) |
|---|---|---|---|---|
| Ref std: Dewpoint | -21.8 °Cfp | 0.05 °C | 0.010 | 0.0005 |
| Ref std: Air temperature | 5.0 °C | 0.05 °C | -0.007 | -0.0004 |
| Chamber: Temperature gradient | | 0.17 °C | -0.007 | -0.0012 |
| UUT: Hysteresis | | 0.000 RH | 1 | 0.0000 |
| Combined uncertainty (k=1) | | | | 0.0013 |
| U (k=2) | | | | 0.003 (0.3 %rh) |
20 °C, RH = 0.50 (50 %rh):
| Component | Value | u (k=1) | Sensitivity | u(RH) |
|---|---|---|---|---|
| Ref std: Dewpoint | 9.3 °Cdp | 0.05 °C | 0.034 | 0.0017 |
| Ref std: Air temperature | 20.0 °C | 0.05 °C | -0.031 | -0.0015 |
| Chamber: Temperature gradient | | 0.17 °C | -0.031 | -0.0054 |
| UUT: Hysteresis | | 0.006 RH | 1 | 0.0060 |
| Combined uncertainty (k=1) | | | | 0.0083 |
| U (k=2) | | | | 0.017 (1.7 %rh) |
50 °C, RH = 0.90 (90 %rh):
| Component | Value | u (k=1) | Sensitivity | u(RH) |
|---|---|---|---|---|
| Ref std: Dewpoint | 47.9 °Cdp | 0.05 °C | 0.046 | 0.0023 |
| Ref std: Air temperature | 50.0 °C | 0.05 °C | -0.045 | -0.0022 |
| Chamber: Temperature gradient | | 0.17 °C | -0.045 | -0.0078 |
| UUT: Hysteresis | | 0.000 RH | 1 | 0.0000 |
| Combined uncertainty (k=1) | | | | 0.0084 |
| U (k=2) | | | | 0.017 (1.7 %rh) |
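The combined uncertainty in each budget is the root-sum-square of the u(RH) column, doubled for the k = 2 expanded uncertainty. A sketch for the 20 °C / 50 %rh budget (any small last-digit differences from the budget come from using the rounded component values):

```python
import math

# u(RH) contributions from the 20 degC / 50 %rh budget:
# dew point, air temperature, chamber gradient, UUT hysteresis
contributions = [0.0017, -0.0015, -0.0054, 0.0060]

u_combined = math.sqrt(sum(u * u for u in contributions))  # root-sum-square, k=1
U_expanded = 2 * u_combined                                # coverage factor k=2

print(round(U_expanded, 3))  # 0.017, i.e. 1.7 %rh
```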
(Contact the author at lmc-solutions.co.za.)
## 2 thoughts on “Uncertainty of measurement: relative humidity via dew point and air temperature”
1. Michele says:
Very interesting article.
I have a question: shouldn't the computation of the uncertainty also include the measurement errors of the dew point and the air temperature?
For example, suppose the dew-point error is 0.1 °C at 10 %rh, and imagine a coverage factor k = 1.732; then u(k=1) = 0.0577, the sensitivity is 0.010, and u(RH) for this component is 0.000577.
The same reasoning applies to the error in the air temperature.
Is that correct?
|
2020-10-31 16:47:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 31, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7761422991752625, "perplexity": 4457.25694799411}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107919459.92/warc/CC-MAIN-20201031151830-20201031181830-00422.warc.gz"}
|
https://cses.fi/problemset/task/1140/
|
CSES - Projects
• Time limit: 1.00 s
• Memory limit: 512 MB
There are $n$ projects you can attend. For each project, you know its starting and ending days and the amount of money you would get as reward. You can only attend one project during a day.
What is the maximum amount of money you can earn?
Input
The first input line contains an integer $n$: the number of projects.
After this, there are $n$ lines. Each such line has three integers $a_i$, $b_i$, and $p_i$: the starting day, the ending day, and the reward.
Output
Print one integer: the maximum amount of money you can earn.
Constraints
• $1 \le n \le 2 \cdot 10^5$
• $1 \le a_i \le b_i \le 10^9$
• $1 \le p_i \le 10^9$
Example
Input:
4
2 4 4
3 6 6
6 8 2
5 7 3
Output:
7
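This is the classic weighted interval scheduling problem. One standard O(n log n) approach — a sketch, not the official CSES solution — sorts projects by ending day and runs a DP where each project either is skipped or is taken together with the best answer over projects ending strictly before its starting day (found by binary search):

```python
import bisect

def max_reward(projects):
    """projects: list of (start_day, end_day, reward); days are inclusive."""
    # Sort by ending day; dp[i] = best reward using the first i projects.
    projects.sort(key=lambda pr: pr[1])
    ends = [b for _, b, _ in projects]
    dp = [0] * (len(projects) + 1)
    for i, (a, b, p) in enumerate(projects):
        # Number of projects ending strictly before day a: only those are
        # compatible, since two projects cannot share a day.
        j = bisect.bisect_left(ends, a)
        dp[i + 1] = max(dp[i], dp[j] + p)   # skip project i, or take it
    return dp[-1]

print(max_reward([(2, 4, 4), (3, 6, 6), (6, 8, 2), (5, 7, 3)]))  # 7
```

On the sample, the optimum takes projects (2, 4, 4) and (5, 7, 3) for 7. Within the 1.00 s limit at n = 2·10^5 this algorithm fits comfortably in a compiled language; plain Python may need fast I/O to pass.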
|
2023-03-22 03:41:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28282928466796875, "perplexity": 835.9901107468846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943749.68/warc/CC-MAIN-20230322020215-20230322050215-00393.warc.gz"}
|
http://www.serendeputy.com/from-stats.stackexchange.com
|
# Stats Stack Exchange
i am beginning to harness scikit's svm to perform some news analytics. While going through their tutorials they perform a classification (using linear SVM) on a dataset called 20 news group. I chose 4 categories and finally input a 2257 x 35843 sparse...
From: Stats Stack Exchange | By: Vikram Murthy | Monday, January 26, 2015
For example the distribution of weights of human. There are not many adults under 40 kg, but a lot more people heavier than 100 kg, although the average of an adult's weight is, let's say, 70 kg. Another example is this human reaction time, sharing the...
From: Stats Stack Exchange | By: ziyuang | Sunday, January 25, 2015
Suppose I have the actual and fitted values of two regression lines. Each regression line is modeling the sales of some good. The fitted and actual values of one of the regression lines is much smaller than the other one. I want to compare the fitted...
From: Stats Stack Exchange | By: phil12 | Monday, January 26, 2015
I was wondering if there was a more comprehensive summary() function in R that perhaps includes more model metrics such as confidence intervals around the estimates maybe log-likelihood, AIC, BIC stuff like that. I know its pretty easy to just call other...
From: Stats Stack Exchange | By: moku | Monday, January 26, 2015
My final target is to develop a predictive model for a rate (fraction) DV. The DV showed bimodality and I have no variable that separates the two modes. Hence I created an IV using two observed IVs that can help in producing estimate near the two modes....
From: Stats Stack Exchange | By: Yan Mu | Monday, January 26, 2015
|
2015-01-26 10:22:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.23394298553466797, "perplexity": 2084.3047105041296}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422121785385.35/warc/CC-MAIN-20150124174945-00112-ip-10-180-212-252.ec2.internal.warc.gz"}
|
https://zbmath.org/?q=an:0762.35066&format=complete
|
## Bounded, almost-periodic, and periodic solutions to fully nonlinear telegraph equations.(English)Zbl 0762.35066
The author considers the nonlinear partial differential equation of hyperbolic type: ${\mathcal L}u+F(u_{xx},u_x,u,u_{tt},u_{xt},u_t)=f(x,t),\quad x\in(0,L),\quad t\in\mathbb{R}^1,$ where $${\mathcal L}u=u_{tt}+du_t-au_{xx}$$, $$a,d>0$$, together with the boundary conditions $$u(0,t)=u(L,t)=0$$.
The problem of existence of global-in-time solutions is studied. An existence theorem for solutions belonging to the space of continuous functions is proved under some conditions on $$F$$ when $$f(x,t)$$ is sufficiently small. Existence theorems for periodic and almost-periodic solutions are proved when $$f(x,t)$$ is a periodic or almost-periodic function of $$t$$, respectively.
### MSC:
35L70 Second-order nonlinear hyperbolic equations
### Keywords:
global in time solutions; existence theorem
Full Text:
### References:
[1] Amerio L., Prouse G.: Almost-periodic functions and functional equations. Van Nostrand, New York, 1971. · Zbl 0215.15701
[2] Arosio A.: Linear second order differential equations in Hilbert spaces - the Cauchy problem and asymptotic behaviour for large time. Arch. Rational Mech. Anal. 86 (2) (1984), pp. 147-180. · Zbl 0563.35041
[3] Kato T.: Locally coercive nonlinear equations, with applications to some periodic solutions. Duke Math. J. 51 (4) (1984), pp. 923-936. · Zbl 0571.47051
[4] Kato T.: Quasilinear equations of evolution with applications to partial differential equations. Lecture Notes in Math., Springer, Berlin, 1975, pp. 25-70.
[5] Krejčí P.: Hard implicit function theorem and small periodic solutions to partial differential equations. Comment. Math. Univ. Carolinae 25 (1984), pp. 519-536. · Zbl 0567.35007
[6] Lions J. L., Magenes E.: Problèmes aux limites non homogènes et applications I. Dunod, Paris, 1968. · Zbl 0165.10801
[7] Matsumura A.: Global existence and asymptotics of the second-order quasilinear hyperbolic equations with the first-order dissipation. Publ. RIMS Kyoto Univ. 13 (1977), pp. 349-379. · Zbl 0371.35030
[8] Milani A.: Time periodic smooth solutions of hyperbolic quasilinear equations with dissipation term and their approximation by parabolic equations. Ann. Mat. Pura Appl. 140 (4) (1985), pp. 331-344. · Zbl 0578.35060
[9] Petzeltová H., Štědrý M.: Time periodic solutions of telegraph equations in n spatial variables. Časopis Pěst. Mat. 109 (1984), pp. 60-73. · Zbl 0544.35011
[10] Rabinowitz P. H.: Periodic solutions of nonlinear hyperbolic partial differential equations II. Comm. Pure Appl. Math. 22 (1969), pp. 15-39. · Zbl 0157.17301
[11] Shibata Y.: On the global existence of classical solutions of mixed problem for some second order non-linear hyperbolic operators with dissipative term in the interior domain. Funkcialaj Ekvacioj 25 (1982), pp. 303-345. · Zbl 0524.35070
[12] Shibata Y., Tsutsumi Y.: Local existence of solution for the initial boundary value problem of fully nonlinear wave equation. Nonlinear Anal. 11 (3) (1987), pp. 335-365. · Zbl 0651.35053
[13] Štědrý M.: Small time-periodic solutions to fully nonlinear telegraph equations in more spatial dimensions. Ann. Inst. Henri Poincaré 6 (3) (1989), pp. 209-232. · Zbl 0679.34038
[14] Vejvoda O., et al.: Partial differential equations: Time periodic solutions. Martinus Nijhoff Publ., 1982. · Zbl 0501.35001
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
|
2023-03-22 06:42:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6102516055107117, "perplexity": 1570.900176364278}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943750.71/warc/CC-MAIN-20230322051607-20230322081607-00407.warc.gz"}
|