https://forum.zettelkasten.de/discussion/2397/measuring-the-zk
# Measuring the ZK
This discussion was created from comments split from: A milestone of sorts, what a ride it's been.
• @Sascha said:
If you want to go down the rabbit hole, you might measure the "median depth of connection" by measuring the median length of trails of connections.
"The rabbit hole went straight on like a tunnel for some way, and then dipped suddenly down, so suddenly that Alice had not a moment to think about stopping herself before she found herself falling down a very deep well."
~ Alice's Adventures in Wonderland
I'm a little unclear, as usual. This time about what is meant by "median depth of connection."
Let me explain my understanding, and maybe you can set me straight.
I have a zettel titled Workload Management 202210060815, and it has six links in it.
Link | Number of connections
1 | 2
2 | 15
3 | 8
4 | 10
5 | 10
6 | 8
So in my thinking, the zettel Workload Management 202210060815 has a median depth of connection of NINE.
I randomly picked another zettel, Crisis In Environments Of Enclosure 202106180709, and it has six links.
Link | Number of connections
1 | 4
2 | 6
3 | 9
4 | 9
5 | 6
6 | 10
So in my thinking, the zettel Crisis In Environments Of Enclosure 202106180709 has a median depth of connection of SEVEN point FIVE.
Is this what you are getting at? This process accounts for but one layer deep. What does the "median depth of connection" reveal about the zettel? How does knowing the "median depth of connection" of these two zettel help in understanding?
Will Simpson
“Read Poetry, Listen to Good Music, and Get Exercise”
kestrelcreek.com
• Oh, I forgot my explanation.
If you use this image for reference:
• "Ring" has one trail with an average length of 6
• "Baum" has three trails with an average length of 2 (each trail is two edges long)
If you collect all possible trails in your ZK and then take the median, you could measure the median length of thought trails.
So, I don't mean it as a trait of an individual note but as a trait of your entire ZK.
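In code, one possible reading of this idea (counting every maximal trail, with a made-up link structure; real note names and links would come from the ZK itself):

```python
from statistics import median

# Hypothetical link structure: note -> notes it links to.
links = {
    "ring": ["a"], "a": ["b"], "b": ["c"],
    "c": ["d"], "d": ["e"], "e": ["f"],
}

def trails(graph):
    """Length (in edges) of every maximal simple trail in the graph."""
    lengths = []
    def dfs(node, path):
        nxt = [n for n in graph.get(node, []) if n not in path]
        if not nxt:                      # dead end: record this trail
            lengths.append(len(path) - 1)
        for n in nxt:
            dfs(n, path + [n])
    for start in graph:
        dfs(start, [start])
    return lengths

print(median(trails(links)))  # 3.5 for this toy chain
```

For a whole ZK, one would feed the actual link index into `links`; whether trails should be counted from every note or only from "entry" notes is exactly the kind of definitional choice the discussion is circling.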
I am a Zettler
• edited October 10
By a trail do you mean a directed path $(a_0,\ldots,a_n)$ where $(a_i)$ links to $(a_{i+1})$?
Perhaps the average shortest path length would be useful--there are several implementations in R, Python, SageMath, etc. It should be routine to assemble a data structure from a ZK to feed into one of these algorithms. Suppose, while gaining experience with the available algorithmic libraries, you began a project to collect, compute, and display network statistics of ZKs. Such a project would be a magnet for network researchers if it were hosted here, and a future rollout of The Archive could facilitate it. I'm not aware of any other ZK or ZK-related software development effort to compute these statistics.[a]
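The statistic itself is simple enough to compute by hand; a pure-Python sketch over a toy link graph (the note names are hypothetical; in practice one would hand the graph to networkx, igraph, or SageMath):

```python
from collections import deque

# Toy ZK link graph (hypothetical note names): note -> outgoing links.
links = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}

def dists_from(src):
    """BFS shortest-path distances from src to every reachable note."""
    dist, queue = {src: 0}, deque([src])
    while queue:
        u = queue.popleft()
        for v in links.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Average over all ordered pairs (s, t) with t reachable from s, t != s.
pairs = [d for s in links for t, d in dists_from(s).items() if t != s]
avg = sum(pairs) / len(pairs)
print(avg)  # 1.2 for this toy graph
```

Note that in a directed graph many pairs may be mutually unreachable; averaging only over reachable pairs, as above, is one common convention.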
It's hard to say without trying whether these statistics will help someone decide what to add and what to connect in their ZK, given what they want to get out of it. Often what they want is writing.
Why create a ZK? My answer now is that I want the ZK to support certain research projects. How so? It should help me to answer the 29 sets of questions from Carl Wieman's How to become a successful physicist. What kind of help? Help with the development of a predictive framework for deciding the answers to the 29 sets of questions that must be addressed for such projects to be successful.
We found that all the experts organized their disciplinary knowledge in a way that was optimized for making decisions. We describe that knowledge-organization structure as a “predictive framework.”
For a ZK to be useful, it ought to facilitate the development of such predictive frameworks for writing, problem solving, etc. I'm assuming the obvious: that these activities require decision making, and that "... knowledge-free problem-solving is a meaningless concept." At least, I can't think of a better process for my own purposes than that given in How to become a successful physicist. A ZK had better add something to this effort.
Perhaps a project to gather network statistics of ZKs would offer some evidence. Such a project should address the 29 sets of questions of How to become a successful physicist. A better test would be whether following the guidelines with and without a ZK makes a difference.
[a] In addition to the average shortest path length, there are other measures: László Gulyás, Gábor Horváth, Tamás Cséri, George Kampis, "An Estimation of the Shortest and Largest Average Path Length in Graphs of Given Density". But I would start by collecting graphs and computing network statistics from them with the available libraries.
You don't need a Nobel Laureate to state the obvious, but it can help to have their endorsement. In this case, the process is not obvious, and you need the Nobel Laureate to state it.
For some projects, a subset of the 29 will do.
Post edited by ZettelDistraction on
Erdős #2. ZK software components. “If you’re thinking without writing, you only think you’re thinking.” -- Leslie Lamport. Replies sometimes delayed since life is short.
• I'm not sure what "median depth" would actually mean, although someone who is mathematically talented (like @ZettelDistraction ) could likely come up with a mathematical definition. It seems there are a couple of qualities, though:
1. The number of independent "chains" of zettels connected to a particular zettel.
2. The greatest length of one of the chains.
3. The average length of all the chains (maybe this is what @Will meant by median depth?)
4. The complexity of the network of all chains (not even sure how you would determine that, but maybe using patterns such as those shown by @Sascha )
There could certainly be other characteristics of a network.
Is this an algebraic topology problem? Not that I know anything about that; I just happened across the term in this article a while back:
https://www.technologyreview.com/2016/08/24/107808/how-the-mathematics-of-algebraic-topology-is-revolutionizing-brain-science/
• Wasn't it Drucker who wrote: 'You can't manage what you can't measure'? I don't remember. Another method that (speaking only for myself) makes sense to the end user:
/*
pseudo code (this is language dependent)
increment node count within a given hub...
for each node in hub x++;
hmm... should that be recursive?
*/
then use formula below for tracking change...
x = current node count
y = previous node count
z = percent of change
z = ((x - y) / y) * 100
example 1 (decrease)...
4 node count last month
3 node count this month
-25% = ((3 - 4) / 4) * 100
example 2 (increase)...
3 node count last month
5 node count this month
+66.67% = ((5 - 3) / 3) * 100
example 3 (equilibrium)...
5 node count last month
5 node count this month
0% = ((5 - 5) / 5) * 100
commandline example...
echo 5 3 | awk '{printf "%.2f%%\n", (($1 - $2) / $2) * 100}'
javascript example...
function metrics(x, y) {
/*
formula for tracking change...
x = node count this month
y = node count last month
z = percent of change (2 decimal places)
*/
var z = (((x - y) / y) * 100).toFixed(2) + "%";
return z;
}
console.log(metrics(5, 3));
Post edited by Mike_Sanders on
• @iamaustinha said:
@Will, looks like I jumped back into the Forums at just the right time to congratulate you! Looking forward to when my Zettelkasten is as mature as yours!
@iamaustinha, thank you for your kind comments. Welcome back. A zettelkasten does mature as it grows from infancy to old age. I'm currently parenting a three-year-old with all the classical pleasures, surprises, and sorrows.
Will Simpson
“Read Poetry, Listen to Good Music, and Get Exercise”
kestrelcreek.com
• For further explanation @GeoEng51 @ZettelDistraction:
I am not sure what the metric will actually tell us. Right now, I feel, the community is left with the number of notes as the single metric for a Zettelkasten.
But there are quite a few other possible metrics that could help us make justified judgements about the nature of one's Zettelkasten.
I called the median length "depth of connection" just based on my intuition. I suspect there could be something usable in that direction of metrics.
I wonder what could be said about one's Zettelkasten if you had a number of those metrics and collected them automatically. Perhaps there is something more sophisticated than my clunky way of finding that structure notes improved my note production.
I am a Zettler
• edited October 19
@Sascha I think what really takes me the most time and most improves my ZK is finding all the "good" links between zettels. That requires constant work and review but pays the most dividends. The more time I spend on that, I believe the more complex my ZK web becomes, which one can view from a graphical map of all connections. Perhaps there could be a metric that is based simply on the apparent complexity of a connection map (e.g., an automated visual assessment of the map)? I'm thinking a computer program that "looks" at the map and then assesses its complexity.
• @GeoEng51 said:
I'm not sure what "median depth" would actually mean, although someone who is mathematically talented (like @ZettelDistraction ) could likely come up with a mathematical definition. It seems there are a couple of qualities, though:
The average length of a path is well known enough to have a definition on Wikipedia.
https://en.wikipedia.org/wiki/Average_path_length
There could certainly be other characteristics of a network.
Is this an algebraic topology problem? Not that I know anything about that; I just happened across the term in this article a while back:
https://www.technologyreview.com/2016/08/24/107808/how-the-mathematics-of-algebraic-topology-is-revolutionizing-brain-science/
The networks in the brain are orders of magnitude more complicated than every other Zettelkasten network except for @Sascha's, which exceeds that of the most interconnected human brain by the same ratios.
The algebraic topology in the paper is nice--the novelty is in the application more than in the mathematics.
Erdős #2. ZK software components. “If you’re thinking without writing, you only think you’re thinking.” -- Leslie Lamport. Replies sometimes delayed since life is short.
• @GeoEng51 I am still sceptical about the graph view, since I never witnessed any convincing example of its use. However, I am too ignorant of what can be achieved with computers to rule it out.
I am still tinkering and collecting with all this measuring because I think it is way too early to come out with definitive claims. (I don't know how to judge the median length of thought trails. It could be "more is better", a domain-specific optimal length, or even "shorter is better".)
I am not even sure what complexity means regarding the ZK once one leaves the thankful realm of normal language.
I think what really takes me the most time and most improves my ZK is finding all the "good" links between zettels.
Perhaps I backtrack a little from my position. Perhaps there is a use case in which connections between huge note clusters are exceptionally promising to review. A similar use case might be to look at the graph view, spot clusters that are not interconnected, and check whether one missed something.
But I am very biased toward thinking that the on-the-ground view is paramount. I focus on the individual connection, since I cannot build a mental bridge from the individual connection between two ideas to some general trait of connections that could be used to access the content of the ZK in a meaningful (knowledge-creating) way. To me, the graphical view is one step too far into the realm of abstraction.
I accumulate bits and pieces from the most extreme end of the spectrum (like general traits of networks) in the hope that something emerges where I don't have so many initial biases.
I am a Zettler
2022-12-10 10:10:08
https://www.physicsforums.com/threads/help-simplifying-this-summation.460493/
# Help simplifying this summation
## Homework Statement
$$\sum\limits_{j=0}^\infty \binom{j}{r} p^r (1-p)^{j-r} (1-q) q^j$$
where p and q are between 0 and 1, and r is a positive integer
## The Attempt at a Solution
since $$\binom{j}{r}=\binom{j}{j-r}$$
we can rewrite the summation as
$$(1-q)\sum\limits_{j=0}^\infty \binom{j}{j-r} p^r (1-p)^{j-r} q^j$$
then I used the change of variables k=j-r (the terms with j<r vanish, since $\binom{j}{r}=0$ there) and the summation became
$$(1-q)\sum\limits_{k=0}^\infty \binom{k+r}{k} p^r (1-p)^{k} q^{k+r}$$
and now I'm stuck. I was hoping I could get the expression inside the summation sign to look like the pdf of a negative binomial distribution.
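For what it's worth, a conjectured closed form can be sanity-checked numerically. The identity $\sum_{k\ge 0}\binom{k+r}{k}x^{k} = (1-x)^{-(r+1)}$ with $x=(1-p)q$ suggests the sum equals $(1-q)(pq)^r/(1-(1-p)q)^{r+1}$; the parameter values below are arbitrary test choices:

```python
from math import comb

# Arbitrary test values (assumptions for the check).
p, q, r = 0.3, 0.5, 2

# Direct partial sum; terms with j < r vanish since C(j, r) = 0 there.
direct = sum(comb(j, r) * p**r * (1 - p)**(j - r) * (1 - q) * q**j
             for j in range(r, 200))

# Closed form via sum_k C(k+r, k) x^k = (1 - x)**-(r + 1), x = (1-p)*q.
closed = (1 - q) * (p * q)**r / (1 - (1 - p) * q)**(r + 1)

print(abs(direct - closed))  # negligibly small: the two agree
```

Since $x=(1-p)q<1$ for the stated ranges of p and q, the series does converge, and the truncation at 200 terms is far past machine precision.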
Last edited:
vela
Staff Emeritus
Homework Helper
Do you have any identities that might be relevant to evaluating the summation?
Also, the notation where q = 1-p is fairly common. Does that apply here or are p and q just two unrelated variables?
tiny-tim
Homework Helper
hi robertdeniro!
since r is a constant, you can take all the r stuff outside the ∑
(but I don't think it converges)
Do you have any identities that might be relevant to evaluating the summation?
Also, the notation where q = 1-p is fairly common. Does that apply here or are p and q just two unrelated variables?
nope, here p and q are not related
EDIT: Guys, please see my attempt at the solution and let me know what you think.
vela
Staff Emeritus
Homework Helper
I don't see how that helps, but I might just be missing something.
Are you familiar with the generating functions for the binomial coefficients?
I don't see how that helps, but I might just be missing something.
Are you familiar with the generating functions for the binomial coefficients?
yes, but I don't see how that would help
EDIT: never mind, thanks for that tip! I think I got it
2020-10-24 04:11:19
http://efavdb.com/category/statistics/page/2/
How not to sort by average rating, revisited
What is the best method for ranking items that have positive and negative reviews? Some sites, including reddit, have adopted an algorithm suggested by Evan Miller to generate their item rankings. However, this algorithm can sometimes be unfairly pessimistic about new, good items. This is especially true of items whose first few votes are negative — an issue that can be "gamed" by adversaries. In this post, we consider three alternative ranking methods that can enable high-quality items to bubble up more easily. The last is the simplest, but continues to give good results: one simply seeds each item's vote count with a suitable fixed number of hidden "starter" votes.
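The starter-vote idea can be sketched in a few lines (the pseudo-count values here are illustrative assumptions, not the tuned numbers from the post):

```python
# Rank by fraction of up-votes after seeding each item with hidden
# "starter" votes: a assumed up-votes and b assumed down-votes.
def seeded_score(ups, downs, a=3, b=2):
    return (ups + a) / (ups + downs + a + b)

# (ups, downs) per item; names are made up for illustration.
items = {"new_good": (2, 0), "old_mixed": (60, 40), "new_bad": (0, 2)}
ranking = sorted(items, key=lambda k: seeded_score(*items[k]), reverse=True)
print(ranking)  # ['new_good', 'old_mixed', 'new_bad']
```

With these seeds, a new item with two early up-votes already outranks a large 60/40 item, while a new item whose first votes are negative sinks but is not buried permanently.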
(more…)
Multivariate Cramer-Rao inequality
The Cramer-Rao inequality addresses the question of how accurately one can estimate a set of parameters $\vec{\theta} = \{\theta_1, \theta_2, \ldots, \theta_m \}$ characterizing a probability distribution $P(x) \equiv P(x; \vec{\theta})$, given only some samples $\{x_1, \ldots, x_n\}$ taken from $P$. Specifically, the inequality provides a rigorous lower bound on the covariance matrix of any unbiased set of estimators to these $\{\theta_i\}$ values. In this post, we review the general, multivariate form of the inequality, including its significance and proof.
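In the usual notation, the multivariate bound reviewed in the post says that for any unbiased estimator $\hat{\theta}$ of $\vec{\theta}$,

```latex
\mathrm{Cov}(\hat{\theta}) \succeq I(\vec{\theta})^{-1},
\qquad
I_{ij}(\vec{\theta}) = \mathbb{E}\left[
  \partial_{\theta_i} \log P(x;\vec{\theta})\,
  \partial_{\theta_j} \log P(x;\vec{\theta})
\right],
```

where $\succeq$ denotes the positive-semidefinite (Loewner) ordering and $I$ is the Fisher information matrix.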
(more…)
Mathematics of measles
Here, we introduce — and outline a solution to — a generalized SIR model for infectious disease. This is referenced in our following post on measles and vaccination rates. Our generalized SIR model differs from the original SIR model of Kermack and McKendrick in that we allow for two susceptible sub-populations, one vaccinated against disease and one not. We conclude by presenting some python code that integrates the equations numerically. An example solution obtained using this code is given below.
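A minimal sketch of such a two-pool model (not the post's actual code; rates and initial fractions below are illustrative assumptions), integrated with a plain forward-Euler step:

```python
# SIR with vaccinated (s_v) and unvaccinated (s_u) susceptible pools.
# beta: transmission rate, gamma: recovery rate,
# eps: residual susceptibility of the vaccinated pool (assumed leaky vaccine).
beta, gamma, eps = 0.5, 0.1, 0.05

s_u, s_v, i, r = 0.40, 0.59, 0.01, 0.0   # initial fractions, summing to 1
dt, t_end = 0.01, 200.0

for _ in range(int(t_end / dt)):
    new_u = beta * s_u * i * dt          # infections among the unvaccinated
    new_v = eps * beta * s_v * i * dt    # breakthrough infections
    rec = gamma * i * dt                 # recoveries
    s_u -= new_u
    s_v -= new_v
    i += new_u + new_v - rec
    r += rec

print(round(s_u + s_v + i + r, 9))       # population is conserved: 1.0
```

In practice one would hand the right-hand sides to an ODE library rather than use raw Euler steps, but the conservation check above already exercises the two-pool structure.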
(more…)
2018-03-19 16:31:30
https://www.andlearning.org/sodium-chloride-formula/
Sodium chloride Formula – Equation and Problem Solved with Example
Sodium chloride is a well-known and widely used chemical compound, familiar as table salt. Its chemical formula is NaCl and its molecular weight is 58.44 g/mol. In the crystalline solid, each sodium cation is surrounded by six chloride anions, and each chloride anion by six sodium cations, in an octahedral geometry. The compound occurs naturally in sea water, which owes its saltiness to it: dissolved salt makes up roughly 3.5 percent of sea water by weight, most of it sodium chloride. It is also found in mineral deposits as rock salt.
At large scale, the compound is prepared by evaporating sea water or brine from wells. The evaporation must be controlled carefully, since which salts precipitate, and in what order, depends on their solubility; done carelessly, the product is contaminated by the other dissolved salts. The other common technique is mining rock-salt deposits.
The product is a white crystalline solid with a density of 2.16 g/mL and a melting point of 801 °C. It is available in different grades, and depending on grade it is suitable for different industrial purposes. It is a stable solid, though it can give off toxic fumes at very high temperatures, and it is widely used by the food industry for flavouring and preservation.
2018-12-14 19:27:42
http://pi.math.cornell.edu/m/node/10114
## Lie Groups Seminar
Braverman, Finkelberg and Nakajima have recently proposed a mathematical definition of the Coulomb branch of a 4d $N=2$ gauge theory
2019-11-22 21:29:13
https://www.gamedev.net/forums/topic/422993-sdl_ttf-and-opengl/
# OpenGL SDL_ttf and OpenGL
## Recommended Posts
My situation: SDL_ttf's TTF_Render* functions all return SDL_Surfaces, and from those, texture objects can be created and rendered; that's basically what I've done. The trouble is that OpenGL likes its textures to have power-of-two dimensions; otherwise the texture renders solid white. So when I make a texture object out of an SDL_Surface created by TTF_Render*(), its dimensions are most likely not powers of two, and I have to resize it to render the text properly. So, what do you all do for text and fonts in OpenGL? I'm not using Win32, so the NeHe article isn't particularly applicable. How do you do it? Any help is appreciated. Thank you, Benji
##### Share on other sites
I'm not familiar with OpenGL, but I use SDL. It shouldn't be hard to resize a SDL_Surface, so why not do so?
If the TTF text's dimensions aren't powers of two, blit it onto a new surface whose dimensions are rounded up to the next power of two.
Just wrap it in something like this:
//Your constant ColorKey colors
const int COLORKEY_RED = 0xFF;
const int COLORKEY_GREEN = 0x00;
const int COLORKEY_BLUE = 0xFF;

//Round a dimension up to the next power of two (what OpenGL wants).
//Note: the original snippet tested (w % 2), which only checks for even
//dimensions, not powers of two.
int NextPowerOfTwo(int n)
{
    int result = 1;
    while (result < n)
        result *= 2;
    return result;
}

SDL_Surface *CreateText(std::string text, TTF_Font *font, SDL_Color textColor)
{
    //Make text as normal...
    SDL_Surface *TextSurface = TTF_RenderText_Solid(font, text.c_str(), textColor);

    int NewWidth = NextPowerOfTwo(TextSurface->w);
    int NewHeight = NextPowerOfTwo(TextSurface->h);

    //If the dimensions are already powers of two, return the surface
    if (NewWidth == TextSurface->w && NewHeight == TextSurface->h)
        return TextSurface;

    //Else, create a new surface with power-of-two dimensions
    SDL_Surface *NewSurface = SDL_CreateRGBSurface(SDL_SWSURFACE | SDL_SRCCOLORKEY,
        NewWidth, NewHeight, 32, 0xFF000000, 0x00FF0000, 0x0000FF00, 0);

    //Fill the new image with your colorkey color
    SDL_Rect rect;
    rect.x = 0;
    rect.y = 0;
    rect.w = NewSurface->w;
    rect.h = NewSurface->h;
    SDL_FillRect(NewSurface, &rect,
        SDL_MapRGB(NewSurface->format, COLORKEY_RED, COLORKEY_GREEN, COLORKEY_BLUE));

    //Place your text onto the new surface (top-left corner)
    SDL_BlitSurface(TextSurface, NULL, NewSurface, NULL);

    //Free the old surface
    SDL_FreeSurface(TextSurface);

    //Set the colorkey to make the COLORKEY_ color transparent
    Uint32 colorkey = SDL_MapRGB(NewSurface->format, COLORKEY_RED, COLORKEY_GREEN, COLORKEY_BLUE);
    SDL_SetColorKey(NewSurface, SDL_RLEACCEL | SDL_SRCCOLORKEY, colorkey);

    //Return your new surface, with power-of-two dimensions
    return NewSurface;
}
[Edit:] Fixed code.
[Edited by - Servant of the Lord on November 7, 2006 9:41:39 PM]
##### Share on other sites
Quote:
Original post by Servant of the Lord: I'm not familiar with OpenGL, but I use SDL. It shouldn't be hard to resize a SDL_Surface, so why not do so? If the TTF text isn't a power of two, blit it to a new surface. Just wrap it in something like this: *** Source Snippet Removed *** **UNTESTED**
It's not difficult. I know how to do this, but it's one more step. What happens when the text changes? Then it has to be re-rendered, resized, and finally made into a texture object. I can do it, that's how I'm doing it now, but I'd like to minimize the work if I could.
• ### Similar Content
• Hello! As an exercise for delving into modern OpenGL, I'm creating a simple .obj renderer. I want to support things like varying degrees of specularity, geometry opacity, things like that, on a per-material basis. Different materials can also have different textures. Basic .obj necessities. I've done this in old school OpenGL, but modern OpenGL has its own thing going on, and I'd like to conform as closely to the standards as possible so as to keep the program running correctly, and I'm hoping to avoid picking up bad habits this early on.
Reading around on the OpenGL Wiki, one tip in particular really stands out to me on this page:
For something like a renderer for .obj files, this sort of thing seems almost ideal, but according to the wiki, it's a bad idea. Interesting to note!
So, here's what the plan is so far as far as loading goes:
Set up a type for materials so that materials can be created and destroyed. They will contain things like diffuse color, diffuse texture, geometry opacity, and so on, for each material in the .mtl file. Since .obj files are conveniently split up by material, I can load different groups of vertices/normals/UVs and triangles into different blocks of data for different models. When it comes to the rendering, I get a bit lost. I can either:
1. Between drawing triangle groups, call glUseProgram to use a different shader for that particular geometry (a unique shader just for the material shared by this triangle group), or
2. Between drawing triangle groups, call glUniform a few times to adjust different parameters within the "master shader", such as specularity, diffuse color, and geometry opacity.
In both cases, I still have to call glBindTexture between drawing triangle groups in order to bind the diffuse texture used by the material, so there doesn't seem to be a way around having the CPU do *something* during the rendering process instead of letting the GPU do everything all at once.
The second option seems less cluttered, however. There are fewer shaders to keep up with while one "master shader" handles it all, and I don't have to duplicate any code or compile multiple shaders. Arguably, I could always have the shader program for each material be embedded in the material itself and auto-generated upon loading the material from the .mtl file. But this still leads to constantly calling glUseProgram, much more than is probably necessary in order to properly render the .obj. There seem to be a number of differing opinions on whether it's okay to use hundreds of shaders or best to just use tens of them.
So, ultimately, what is the "right" way to do this? Does using a "master shader" (or a few variants of one) bog down the system compared to using hundreds of shader programs, each dedicated to its own corresponding material? Keeping in mind that the "master shaders" would have to track these additional uniforms and potentially have numerous if-branches, it's possible that the ifs will lead to additional and unnecessary processing. But would that be more expensive than constantly calling glUseProgram to switch shaders, or than storing all the shaders to begin with?
With all these angles to consider, it's difficult to come to a conclusion. Both possible methods work, and both seem rather convenient for their own reasons, but which is the most performant? Please help this beginner/dummy understand. Thank you!
• I want to make a professional Java 3D game with a server program, database, packet handling for multiplayer, and client-server communication, plus map rendering, models, and so on, like Minecraft or World of Tanks. Which aspects of Java should I learn, and where can I learn Java, LWJGL, and OpenGL rendering?
• A friend of mine and I are making a 2D game engine as a learning experience and to hopefully build upon the experience in the long run.
-What I'm using:
C++: since I'm learning this language in college and it's one of the popular languages for making games, why not.
Visual Studio: I'm on Windows, so yeah.
SDL or GLFW: I was thinking about SDL, since my research on it caught my interest, but I hear SDL is a huge package compared to GLFW, so I may start with GLFW to avoid getting overwhelmed.
-Questions
Knowing what we want in the engine, what should our main focus be in terms of learning - file management, with headers, functions, etc.? How can I properly manage files without confusing myself and my friend when sharing code? Alternative to Visual Studio: my friend has a Mac and can't properly use Visual Studio; is there an alternative to it?
• Both functions are available since 3.0, and I'm currently using glMapBuffer(), which works fine.
But I was wondering if anyone has seen an advantage in using glMapBufferRange(), which allows you to specify the range of the mapped buffer. Is this only a safety measure, or does it improve performance?
Note: I'm not asking about glBufferSubData()/glBufferData. Those two are irrelevant in this case.
• By xhcao
• Before calling void glBindImageTexture( GLuint unit, GLuint texture, GLint level, GLboolean layered, GLint layer, GLenum access, GLenum format ), do I need to make sure that the texture is complete?
https://www.infoq.com/articles/ieee-the-future-of-authentication/
# The Future of Authentication
This article first appeared in IEEE Security & Privacy magazine and is brought to you by InfoQ & IEEE Computer Society.
As part of this special issue on authentication, guest editors Richard Chow, Markus Jakobsson, and Jesus Molina put together a roundtable discussion with leaders in the field, who discuss here their views on the biggest problems in authentication, potential solutions, and the direction in which the field is moving.
Markus Jakobsson: What's the greatest problem with current authentication approaches, and what's needed to overcome these problems?
Dirk Balfanz: I think the biggest problem with authentication today is theft of credentials, which comes in different flavors. It could mean that you get phished and hand your password to a phisher, or it could mean that you share passwords across different sites, and one of those sites gets compromised.
Overcoming this problem means making it harder to steal those credentials. We could, for example, add additional second factors to passwords that are unpredictable, so that even the users don't know what they are, making it harder to give them away to phishers. Or maybe we could change the nature of credentials from bearer tokens - secrets passed from Web browser to server - to using more cryptography. With cryptography, we can design authentication protocols that are not susceptible to phishing, but the challenge is to package them up in an easy-to-use way.
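The kind of cryptographic credential Balfanz describes can be illustrated with a minimal challenge-response sketch. This is a hypothetical example, not Google's actual protocol: it uses a shared HMAC key to stay standard-library-only, whereas real designs of this kind (e.g., FIDO/U2F) use asymmetric signatures. The point it demonstrates is that, unlike a bearer token, the secret never crosses the wire, and a captured response cannot be replayed against a fresh challenge:

```python
import hashlib
import hmac
import secrets

# Hypothetical challenge-response sketch: the server sends a fresh random
# nonce, the client proves possession of a key by returning a MAC over it,
# and the key itself never travels over the wire.

def issue_challenge() -> bytes:
    return secrets.token_bytes(32)          # unpredictable per-login nonce

def client_respond(key: bytes, challenge: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

def server_verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)   # constant-time compare

key = secrets.token_bytes(32)
c1 = issue_challenge()
r1 = client_respond(key, c1)
assert server_verify(key, c1, r1)            # legitimate login succeeds

c2 = issue_challenge()
assert not server_verify(key, c2, r1)        # replaying an old response fails
```

A phisher who captures `r1` gains nothing usable: the next login presents a different challenge, and without the key they cannot produce a valid response for it.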
Scott Matsumoto: I think the greatest problem is people - they're the most unreliable devices on the planet [laughs]. Credentials are a very important piece of the authentication problem, but I think that we put too much weight on them. My concern is that when we start looking at other approaches, such as different kinds of tokens or more crypto, we make systems harder to use. The net effect is that we just drive people to create more insecure approaches for handling all of the things that we've tried to do in order to increase security. To me, this is the biggest problem.
Paul van Oorschot: The greatest problem from the user's perspective is "too many." When we talk about Internet authentication today, for the vast majority, we're still talking about passwords, so the "too many" is too many password-account pairings for each person to remember. That's a scalability problem for users. A lot of these passwords are forced on users but don't serve a true security purpose - many sites employ password-based login when the service they deliver doesn't actually need a strong password. Often, what they really want is an email address, for example, to market products to the user in the future. This confuses people when they're actually asked for passwords that are important [for security], and we get the problem of users not distinguishing low- and high-end passwords. Of course, from the security viewpoint, passwords themselves fall short, and static passwords are replayable.
Ori Eisen: I think the greatest problem is that we're mixing authentication approaches for the current Internet with the same methods we might need for high-fidelity transactions. I'm sure nobody who reads this magazine would let a stranger walk into their house because he claims to be a certain person. But on the Internet, crooks can become me or you very easily, because nobody validates who is who. It might be good for free services, if you just want to go read something and you can claim to be anybody. But if we need authentication to move money, if we need authentication to vote, I don't think that the same authentication mechanism should be used. We need to look at better ways to actually know who's on the other end before we give them credentials for high-fidelity transactions.
Steve Kirsch: I think the biggest problem is that today's means of authenticating are insecure and cumbersome to use. I have 352 usernames and passwords, and that number grows every single day. Facebook has 600,000 attempted compromises every single day. I'm seeing emails from spammers who phished my friends' email accounts at least once a week now.
If I had to give our current method a letter grade, it would probably be a D. We need to come up with new approaches, and we're not going to see any significant improvement without a big paradigm shift. What we are doing at OneID is an example of one such shift. [OneID is a single set of credentials for both low- and high-assurance transactions.] The bottom line is this: if we want to solve the authentication problem, we have to think differently than we have in the past. It's time to abandon the traditional username/password metaphor and move the world to a more secure paradigm.
Eisen: I think we need to leave the current Internet with what it was intended for originally - the sharing of information. If we want to also have a network for high-fidelity transactions, we need to separate them. The first separation is visual, so you know which network you are on, very similar to why you see a padlock in an HTTPS session. You don't give the checker at the grocery store your Social Security number because there's no need to do it, and you don't need to go through a high level of security to read the news.
Matsumoto: I don't think we're using a whole slew of different methods; we're using one method, which is the username and password, the least-common-denominator approach. I think that having a single identity for all the different purposes that you need to conduct your life on the Internet is really an exercise in futility in terms of protecting all the things you need to protect.
Kirsch: My opinion is you can have a single identity, but the identity has to allow for multiple levels of assurance, which can be achieved by adding requirements to obtain digital approval, such as a PIN code or out-of-band approval. I think that's what users want. They don't want to have to manage different identity systems. It's much easier if they have a single identity that's flexible enough to accommodate the security needs of both users and service providers.
Richard Chow: A couple of you mentioned that users are the problem. For example, we tell people not to reuse their passwords and how to make a strong one, but these sorts of guidelines historically haven't been too effective. What should be done about that?
Balfanz: I'm not sure I would phrase it that way, that users are the problem. Users are what they are. Instead of telling them that they're doing it wrong, over and over, with no apparent positive outcome, I think we should just study how people behave and build our systems around that. If it turns out that the only thing the user can really handle is a password, then that's what we're going to have to deal with and that's how we're going to have to build our systems.
But if we do get guidelines out, we as professionals in the field should speak with one voice. The guidelines that we have today are confusing and contradictory. A reasonable guideline might be, "stop reusing passwords across different sites." We sometimes overemphasize recommendations about picking very complex passwords, though they're certainly more effective than trivially guessable ones. At any rate, we should come to a common understanding of what those guidelines are, instead of different people giving different - and often conflicting - advice.
Matsumoto: I agree with Dirk. You have to assume that the credential-username and password-is what it is. Users are going to use the same one across sites, their "best" password, so the systems that we have are going to have to compensate for that.
I don't think that the answer is a different kind of credential or some other token-based scheme. Again, I think we have to start thinking of the credential as the first time that we interact with the user. That's one type of interaction. By seeing the user's interaction with the system on an ongoing basis, during and across sessions, we can do a better job of authenticating. It's too simple to think that we can have just one interaction and validate that someone is indeed who they say they are.
van Oorschot: We need to remember that users are the customer - they're the design constraint, not the problem. If we don't want people to choose poor passwords, then we should use something other than plain old text passwords. The user has very few tools at hand. The back-end system puts rules in place, and it messes up everyone's life by asking them to choose one more password - a classic tragedy of the commons, everyone drawing from one resource pool, the user's capacity to create and remember one more password. We have to move past what we have been doing for the past 25 years and not expect things to magically become better at the same time that we're giving users 10 times as many passwords. We need better tools to help users manage the passwords that are unavoidable and also to deploy authentication approaches that are stronger but not a usability disaster.
Eisen: I know for a fact that you can't tell millions of users to do one thing and they will all do it. As security practitioners, we need to factor that into the equation. If we tell them a million times to have longer, more secure passwords, the bottom line is that people are people, and they want to have an easy-to-use password. Therefore, we need to find solutions that do not require users to be security experts.
Kirsch: I agree that the right thing is to design systems that accommodate the way users want to operate. You should allow people to pick the password they want - and the system should assume that it will be phished. You just need to design the system so it's unbreakable, even if the password is phished or keylogged. You can provide guidelines for proper behavior, but people aren't going to follow them most of the time. But it's a good thing to provide guidelines anyway, to help them along.
Jesus Molina: Who should care about authentication for things to change for the better?
Balfanz: We at Google do think it is, in fact, our responsibility to care and worry about the strength of authentication that we give to our users. We even go a step further: instead of just worrying about what authentication between the user and Google looks like, we have projects trying to strengthen security on the Web as a whole. For example, we have the Safe Browsing Initiative, we put special protections into Google Chrome, we call out malware and phishing sites in our search results, we contribute to security-related open source projects, and so forth.
About authentication in particular, we recently launched two-factor authentication for Google accounts. I'm also working on a project that's trying to move us away from using cookies and other bearer tokens as a means of authentication and adding in public-key cryptography to make things harder to steal.
Eisen: Everybody who is part of the network needs to care about it. But if everybody cares about it, nobody cares about it, because there's no leadership. The Internet really doesn't belong to anybody. It was one of the greatest things that ever happened to the world, but at the end of the day, the people who need the Internet to survive - Google, Amazon, eBay, Microsoft, financial institutions, and the government - should care about it. Private industry would be the best place to start because in order to keep security and keep innovation going, you need to keep it funded. It's not a situation where we can forget about it: we have the adversarial problem of people trying to infiltrate the network and bring it down or conduct crimes in it.
Kirsch: Everybody cares about authentication. Banks have FFIEC [Federal Financial Institutions Examination Council] requirements; the government has FICAM [Federal Identity, Credential and Access Management] and HSPD-12 [Homeland Security Presidential Directive 12] requirements. But if we are to advance the ball here, we have to have these parties buy into a vision of taking a risk and trying something different that has a chance of working. One of the things that gets in the way is that a lot of these requirements are written with existing authentication paradigms in mind. Having service providers be flexible in terms of what they're willing to require is extremely helpful.
Essentially, we need to stay focused on the goal of better authentication. I find that a lot of sites are loath to make any changes at all, even if it's something that's potentially better, because they don't want to have any kind of drop-off in conversion ratios. You have to find service providers who are willing to take a risk and try something new. That's the only way we're going to get ahead: service providers being willing to try innovative solutions.
van Oorschot: I agree we need sites and service providers to be flexible, but we also need cooperation and coordination. The whole system has a limited capacity to try new things. If somehow we could get the major players together to agree on one or two approaches to promote, rather than five or six new ones, we have a chance to move out of our current state of low-security authentication [passwords] that we've been stuck in for 20-plus years. We need one or two new approaches to authentication rather than 10 or 15.
Jakobsson: There's a movement among authentication vendors away from just a binary-type authentication toward a back-end authentication score, using multiple factors and contextual data to establish an authentication assessment. What do you think of this? And is this the future of authentication?
Balfanz: I think this is a good idea. I was speaking earlier about how credential theft is, in my opinion, the biggest problem, and adding different kinds of contextual data to the authentication process is sort of like a second factor. It makes the complete user credential - which now basically consists of a password plus whatever extra contextual data is added - harder to steal or phish. I see this actually happening across the Web, so it's not so much the future of authentication as much as it's already, in fact, happening.
Matsumoto: We do see this happening, but the other interesting aspect is that one of the factors can be time. You make a decision and then over time, you see the validity of that decision because of another decision. The two decisions might be contradictory, because of something like geography, so now you have to invalidate one or both of those decisions.
van Oorschot: I agree that the idea of multiple factors going into decisions makes sense. Of course, whether we call it a score or something else at the back end, the front-door requirement is almost always to map it to one of two choices: you either let someone in or not. Whatever factors we base this binary decision on, the more, the better, as long as they are invisible and don't further burden users beyond their capacity. Innovative multifactor approaches are promising, but let's admit that we usually underestimate deployability challenges.
Eisen: The more layers, the better. No binary decision on its own will ever be good. It might result at the end in the binary decision of letting you in or challenging you, but there can always be mitigating factors. Let's say they're traveling, so they're coming from a network you've never seen, but still at an hour of the day that makes sense to you and from a device that you've known. So even though we're taking multiple factors into account that may change the score, it ends up being a binary decision of letting [the user] in or not. But the decision itself is not binary.
Kirsch: I think scores are fine, and whether and how the relying party uses a numerical score is really up to the relying party. If I've already authenticated, for example, but my risk factor is high based on a particular transaction that I'm doing, then the relying party will either completely deny this transaction or ask for additional assurances, such as PIN codes, that are commensurate with the risk level. The risk can be based on many factors. It's probably easiest if we have a standard risk score, such as the probability that the transaction is legitimate.
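The score-plus-step-up idea the panel describes can be sketched as a small decision function. The signal names, weights, and thresholds below are invented for illustration and are not any vendor's actual model:

```python
# Hypothetical risk-scoring sketch: combine contextual signals into a score,
# then map the score to a graded decision (allow / step-up / deny) rather
# than a single yes/no. All weights and cutoffs are illustrative.
WEIGHTS = {
    "new_device": 0.40,
    "unusual_network": 0.25,
    "odd_hour": 0.15,
    "high_value_transaction": 0.35,
}

def risk_score(signals: dict) -> float:
    # sum the weights of the signals that fired, capped at 1.0
    return min(1.0, sum(w for name, w in WEIGHTS.items() if signals.get(name)))

def decide(score: float) -> str:
    if score < 0.3:
        return "allow"
    if score < 0.7:
        return "step_up"   # e.g. ask for a PIN or out-of-band approval
    return "deny"

# Eisen's traveler: unknown network, but a familiar device at a usual hour.
print(decide(risk_score({"unusual_network": True})))                      # allow
# A new device on an unknown network warrants additional assurance.
print(decide(risk_score({"new_device": True, "unusual_network": True})))  # step_up
```

Note how the traveler stays below the step-up threshold even though one signal fired, which mirrors Eisen's point: the individual factors feed a graded score, and only the final gate is binary.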
Matsumoto: We also have to realize that other people in the organization will need to know what decisions were made when a user is not granted access. For example, the help desk needs to help the user unwind the history of decisions when we start putting these systems in place.
Chow: What about biometrics? Is it going to become more important as a factor or less important?
van Oorschot: We need to clarify the question in terms of what biometrics are used for. If we're talking about authentication of end users to sites on the Internet, then biometrics are a nonstarter for high-end security, because you need a trusted computing base - actually, a trusted path from input all the way to the far end. The strengths of biometrics, for example, involving high-end equipment and a security guard overseeing the input, don't follow for an untrusted remote client. Attacks that don't work against supervised input are all of a sudden possible. So we need to frame the question and the application of use - for remote applications, it's a bit of a stretch from where we currently are.
Chow: And for local authentication?
van Oorschot: Well, for local authentication, we already have laptops with fingerprint readers and that sort of thing, but I think people mix up the application of use. That's still an unsupervised application, and it's not the scenario that biometrics are strongest in.
Eisen: Biometrics could be useful as yet another factor in a plurality of activities to authenticate. I don't think we should rule it out from usage. But I agree: if it's unsupervised, it cannot be the sole input to the decision. We should use as many layers as we can.
Kirsch: For the right application, biometrics can be a really good thing. But for remote applications, biometrics are less likely to be used. Biometrics have been expensive and inconvenient. If you want the false reject rate to be low, it means your false accept rate is going to be relatively high. Maybe you can weed out 95 out of 100 attempts using biometrics; that's good, not great.
But that's really not the level of security that you need if you want something really secure, so you don't get a huge gain for the amount of inconvenience that you have to put the user through. I love iris authentication and verification. I wish it were done at airports today, because I have to wait 30 minutes to authenticate.
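Kirsch's trade-off between false rejects and false accepts can be made concrete with a toy threshold sweep over synthetic match scores (the numbers below are invented; real biometric score distributions differ):

```python
# Toy illustration of the biometric trade-off discussed above: lowering the
# match threshold reduces false rejects of genuine users but raises false
# accepts of impostors. Scores are synthetic, not from any real system.
genuine  = [0.55, 0.62, 0.70, 0.74, 0.78, 0.81, 0.85, 0.88, 0.92, 0.95]
impostor = [0.05, 0.12, 0.20, 0.28, 0.33, 0.41, 0.48, 0.56, 0.63, 0.71]

def rates(threshold):
    frr = sum(s < threshold for s in genuine) / len(genuine)     # genuine users rejected
    far = sum(s >= threshold for s in impostor) / len(impostor)  # impostors accepted
    return frr, far

for t in (0.5, 0.6, 0.7):
    frr, far = rates(t)
    print(f"threshold={t}: FRR={frr:.0%}, FAR={far:.0%}")
# threshold=0.5: FRR=0%, FAR=30%
# threshold=0.6: FRR=10%, FAR=20%
# threshold=0.7: FRR=20%, FAR=10%
```

Moving the threshold only trades one error rate for the other; no setting drives both to zero unless the genuine and impostor score distributions don't overlap at all.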
Molina: I want to talk about device identification and how it's replacing or could replace identity on mobile devices. How can we authenticate people on mobile devices?
Matsumoto: My concern is that we're seeing a lot of applications use mobile device identity instead of user identity. Device identity has become a more convenient way for organizations to manage mobile devices connecting in [to their network]. I'm hoping that it's only a temporary trend, but device identity is moving in the wrong direction as far as I'm concerned.
van Oorschot: I think you have to ask, "What are you trying to do?" Are you trying to authenticate that there's a device involved in a transaction, or that there's a specific person involved in the transaction? With a credit card, for example, today's systems verify that someone knows the credit card number - I'm ignoring the user profiling done at the back end - but don't verify which person is behind it. That's why we have to know a bit more about the objectives in specific applications that want to use mobile devices as a substitute for the identity of a person.
Eisen: I would say that when we make a decision on authentication, we have to take in a few components. Is the right user behind the device? The device is really a proxy that gives us more reason to believe that we have the right user coming from the right device. Clearly, the device itself should never be interchanged with user identity, because somebody else could be using my device, not me. But device ID should be and is used today in the decision-making: should we let this transaction go, or should we let this login move on to the next step?
Kirsch: If it's done right, the use of device authentication can be a good thing. Personally, I like having a private key that is person-specific, stored on a device, and then uses a PIN to prove it, which makes it two-factor. There are no shared secrets.
Being able to preauthorize your devices is also a good idea that tends to work well and minimizes the impact of either a key-logged or phished password. I think that when you tie things to devices and limit things to devices, it is a good thing for authentication.
Jakobsson: What do you think will happen next?
Balfanz: I think user-visible changes will happen slowly. I don't think we'll all be authenticating with some sort of RFID implant come next year, even though if I close my eyes and try to look maybe 100 years into the future, I have a hard time imagining that people will be typing passwords into things. But I think we will see fairly slow changes.
Under the hood, I think there's a constant arms race going on. The service providers will strengthen the authentication method, and hackers will try to circumvent that. When we, as a service provider, add a second factor to our login, hackers will eventually try to steal cookies instead of passwords. And then if we protect cookies through cryptography, hackers will try to steal the signing key. And then when we protect the signing key, hackers will try to get around that, and so forth. This arms race, I think, will continue. But as an optimist, I think we'll be able to keep the Web a safe place for users.
Matsumoto: You're going to see an integration of event data that currently goes into your security event monitor, influencing authentication decisions. I think that you're going to start seeing these events also going into whether or not you're actually going to change the authorization corresponding to the initial authentication decision.
van Oorschot: We're going to see one of the major players (and by that I mean a Google or Microsoft or Apple or Amazon) take some known technology and weave it seamlessly - including from a user interface perspective - into some widely used service that has a big user base, and that's going to turn the tide. Which underlying security technology will be used? I think that can go one of many different ways, but I expect it will take a big player to make a wise choice and then a commitment to it.
Eisen: We will still have to wait a little bit until there are enough of what I would call catastrophic events, where we just don't like the state of affairs anymore. Then enough forces in the market will come together. The fact that we still, 20 years in, don't have a standard for how to do authentication just shows that we are not all in agreement on what's the best way. But I think in the next five years, it will just take care of itself, because the network really isn't ours anymore. The crooks are running it, and we're just trying to patch it up.
Kirsch: The current system is not sustainable, and I think that we have to move to something that's better. It's got to be something that we don't have today, because we already know the stuff we have today doesn't work.
So what's in our future - and I think it's going to come fast - are some new, clever techniques for solving this problem. I think in the next one to two years, we are going to see a big paradigm shift. One of these new paradigms is going to come out and emerge as the leader. I sure hope so, because what we have now is really awful.
Dirk Balfanz is a software engineer on Google's Security Team, focusing on strengthening authentication on the Web through the use of public-key cryptography. Balfanz also worked on Google's OpenID and OAuth implementations, and contributed to the OAuth standardization process.
Paul van Oorschot is a professor of computer science at Carleton University in Ottawa, where he's Canada Research Chair in Authentication and Computer Security. His research interests include authentication and identity management, security and usability, software security, and computer security. Van Oorschot is on the editorial boards of IEEE Transactions on Information Forensics and Security and IEEE Transactions on Dependable and Secure Computing.
Ori Eisen is the founder, chairman, and chief innovation officer of 41st Parameter. He has spent the past 15 years in the information technology industry, working on preventing e-commerce fraud for such companies as American Express and VeriSign.
Scott Matsumoto is a principal consultant at Cigital, where he's responsible for the security architecture practice in the company. His prior experience encompasses development of component-based middleware, performance management systems, GUIs, language compilers, database management systems, and operating system kernels. He is a founding member of the Cloud Security Alliance (CSA) and is actively involved in its Trusted Computing Initiative.
Steve Kirsch is CEO of OneID, a startup company seeking to fix the digital identity problem on the Internet by creating a user-centric, Internet-scale digital identity system. He's a serial entrepreneur and has started and run five other ventures: Mouse Systems, Frame Technology, Infoseek, Propel, and Abaca. In 1995, Newsweek named him one of the 50 most influential people in cyberspace.
Markus Jakobsson is Principal Scientist of Consumer Security at PayPal. His research focuses on phishing, crimeware, spoofing, and authentication, with a focus on defenses and mobile computing. He has written and edited three books relating to applied security and is listed as an inventor on more than 100 patents. Contact him via www.markus-jakobsson.com.
Jesus Molina is a researcher, inventor, independent consultant, and occasional artist (when nobody is looking). He currently divides his time between standardization committees aimed at improving the security of emerging infrastructures, such as the smart grid and the cloud, and developing cutting-edge authentication solutions for them. Contact him via www.jesusmolina.com.
Richard Chow is a research scientist in the security and privacy group at the Palo Alto Research Center. His current research interests include using data mining and applied cryptography to improve privacy, security, and fraud detection. Contact him at rchow@parc.com.
IEEE Security & Privacy's primary objective is to stimulate and track advances in security, privacy, and dependability and present these advances in a form that can be useful to a broad cross-section of the professional community -- ranging from academic researchers to industry practitioners.
• ##### OneID
by Naresh Chintalcheru,
OneID is an interesting solution to the current username/password-based authentication.
https://scipost.org/submissions/2104.10059v1/
# Chiral Anomaly Trapped in Weyl Metals: Nonequilibrium Valley Polarization at Zero Magnetic Field
### Submission summary
As Contributors: Anton Akhmerov · Maxim Breitkreiz · Pablo M. Perez-Piskunow
Arxiv Link: https://arxiv.org/abs/2104.10059v1 (pdf)
Code repository: https://zenodo.org/record/4668624
Date submitted: 2021-04-21 15:12
Submitted by: Breitkreiz, Maxim
Submitted to: SciPost Physics
Academic field: Physics
Specialties: Condensed Matter Physics - Theory
Approach: Theoretical
### Abstract
In Weyl semimetals the application of parallel electric and magnetic fields leads to valley polarization -- an occupation disbalance of valleys of opposite chirality -- a direct consequence of the chiral anomaly. In this work, we present numerical tools to explore such nonequilibrium effects in spatially confined three-dimensional systems with a variable disorder potential, giving exact solutions to leading order in the disorder potential and the applied electric field. Application to a Weyl-metal slab shows that valley polarization also occurs without an external magnetic field as an effect of chiral anomaly "trapping": Spatial confinement produces chiral bulk states, which enable the valley polarization in a similar way as the chiral states induced by a magnetic field. Despite its finite-size origin, the valley polarization can persist up to macroscopic length scales if the disorder potential is sufficiently long ranged, so that direct inter-valley scattering is suppressed and the relaxation then goes via the Fermi-arc surface states.
###### Current status:
Has been resubmitted
### Submission & Refereeing History
Resubmission 2104.10059v2 on 14 July 2021
Submission 2104.10059v1 on 21 April 2021
## Reports on this Submission
### Anonymous Report 2 on 2021-6-28 (Invited Report)
• Cite as: Anonymous, Report on arXiv:2104.10059v1, delivered 2021-06-28, doi: 10.21468/SciPost.Report.3133
### Report
This paper is well-written and, taking into account the comments of Referee 1, the results in the paper are scientifically sound. I have a more general question: A random disorder potential should allow for bound states of Weyl fermions in the valleys of the potential. A similar situation shows up in QCD, where quarks have a small mass and are approximately chiral Weyl fermions. Due to the confining potential for quarks generated by the strong force, left-handed and hence left-moving quarks are reflected into right-moving and hence right-handed ones at the right potential wall, go back to the left potential wall, get reflected back, and so on. The potential walls hence mix left- and right-handed quarks, and the reflection back and forth leads to a non-vanishing chiral condensate <PsibarL PsiR>. In condensed matter terms, this is an inter-valley pairing condensate. In QCD, this condensate spontaneously breaks the chiral symmetry (axial U(1) in Weyl semimetals). This argument is due to Banks and Casher. The upshot is that the bound states in a confining potential lead to a chiral condensate and hence to spontaneous symmetry breaking. The resulting Goldstone modes are the mesons of QCD, and are approximately massless (due to the small quark mass). My question now is: If we have Weyl fermions trapped in a random disorder potential, a similar inter-valley pairing condensate should be formed, in particular in the limit that the authors consider (Delta k >> 1/xi). This condensate should contribute to the conductivity calculation. Is this taken into account in this work already? If this question is satisfactorily answered, I support publication in SciPost.
• validity: high
• significance: high
• originality: high
• clarity: high
• formatting: excellent
• grammar: excellent
### Author: Maxim Breitkreiz on 2021-07-14 [id 1565]
(in reply to Report 2 on 2021-06-28)
Thank you very much for the positive feedback and the useful comment.
It has been found in previous works that inter-valley scattering can indeed induce different semimetal-insulator transitions
(PRL 115, 246603 (2015)). Those require a small separation of Weyl nodes and/or a large disorder potential.
This effect is thus irrelevant for our work since we consider well-separated Weyl nodes and weak disorder.
Moreover, our focus on long-range disorder and well-separated Weyl nodes makes intra-valley scattering
dominate over inter-valley scattering. For a single Weyl Fermion (and thus only intra-valley scattering)
another type of (perhaps avoided)
critical behavior may occur [PRL 113, 026602 (2014), PRL 121, 215301 (2018), PRB 102, 100201(R) (2020)],
which however becomes relevant only for a vanishing chemical potential (i.e. Fermi level at the Weyl node).
We included a review of the possible disorder-induced phase transitions in the introduction of the new version, 3rd paragraph.
### Report 1 by Titus Neupert on 2021-5-20 (Invited Report)
• Cite as: Titus Neupert, Report on arXiv:2104.10059v1, delivered 2021-05-20, doi: 10.21468/SciPost.Report.2942
### Strengths
1) clear writing and good presentation
2) methodological advancement combined with nice physics question
3) many future applications and extensions for the method are conceivable
### Report
The manuscript introduces a formalism for the numerical computation of transport properties in mesoscopic systems for weak potential scattering. This formalism is applied to thin slabs of a Weyl semimetal, demonstrating a conductivity enhancement not dissimilar to the chiral anomaly, but purely introduced by finite size quantization. The manuscript is very well written and organized. It addresses a relevant physics question while at the same time introducing novel methodology.
I have three clarification questions and a few concrete requests for changes. The questions are:
1) Is the algorithm that finds the states on the Fermi contours guaranteed to produce the correct density of states for each band, i.e., does it take into account the magnitude of the Fermi velocity when producing the discretization?
2) Much of the arguments depend on the numbers N, Ns, Nb, W. For the simulations, we only learn about the W that has been used. Maybe the Ns, Nb, N (or their ratio) could also be incorporated in the figures (maybe visually) if the authors would also consider that beneficial for the reader.
3) The method as presented in Sec. IIA would also be applicable to 3D systems, upon introducing another momentum quantum number. I do not understand why, as stated in Sec. IIB, it becomes invalid for thick slabs. In this case, should there not be an 'emergent' momentum quantum number, the kz momentum, so that the approach remains valid? Asked the other way around: if one "forgets" to include a quantum number in the formulation, would the numerics not work?
4) I find it a bit unsettling that the contradiction with the literature stated at the end of Sec. VB is left unresolved.
The requests for changes are noted below.
In view of the high quality of the manuscript, I would recommend its publication, provided the questions and requested changes are thoroughly addressed.
### Requested changes
1) I think the work would benefit from a discussion on the robustness of the observed phenomena in more generic Weyl fermiologies: What if additional pockets are present in the bulk or on the surface, what if the Fermi arcs are not so perfectly straight (like in TaAs and similar materials, see e.g., PRB 97, 085142), what if the bulk Weyl cones are strongly anisotropic (while still type I) as is often the case?
2) For a paper that is on the technical side, I find the introduction and review of previous results too succinct. A more in-depth summary of results from the literature (which may also require extending the rather short list of references) would be beneficial for the reader.
3) There is a typo in the exponent of Eq. 29: x-> z
• validity: high
• significance: high
• originality: high
• clarity: high
• formatting: excellent
• grammar: excellent
### Author: Maxim Breitkreiz on 2021-07-14 [id 1564]
(in reply to Report 1 by Titus Neupert on 2021-05-20)
Thank you very much for the positive feedback and the useful suggestions.
In response to the comments in the report:
1. Yes, the algorithm does take into account the true Fermi velocity. Only in the analytical treatment is the velocity assumed to be constant, for simplicity.
2. The precise values for the numbers of states are of course easily calculable from the solved slab model; they essentially correspond to the lengths of the corresponding contours. In the new version we now provide numerical values in the caption of Fig. 4 to show concrete numbers and make contact with the discussion section. The arguments in the discussion section, however, only use rough and obvious properties of these numbers, such as the ratio N_s/N being small for large width or the 1/W dependence of N_s/N, which does not seem to require extra plots.
3. The slab kinetic equation is different from the infinite-system one, and it is valid for thin slabs as long as the width is smaller than the mean free path, as shown in Section IIB. On the other hand, it is indeed correct that the used kinetic equation is valid for an arbitrary number of infinite dimensions. One can thus expect that an infinite-system equation will give correct results for a thick slab, where the width is much larger than all other relevant length scales. It is, however, the purpose of our work to explore the effect of a finite size; specifically, one of the three dimensions is considered finite (slab). The equation for a slab is different in that it accounts for confinement-induced states, which, as we show, give unique effects in transport. The question then arises: up to which slab width should one expect slab behavior? This is answered in Section IIB, where we show that the kinetic equation for a slab fails if the slab width W exceeds the scattering mean free path l. One can possibly save the equation by a transformation to the "emergent quantum number" (the momentum in the out-of-plane direction), for which the position matrix element will become small instead of \sim W (which causes the failure of the slab equation). Will this transformation just lead to the infinite-system equation, or can some confinement-induced effects survive? This is a very interesting question, which we would be eager to explore in future work.
4. The contradiction is resolved by the fact that there exists a large length scale \sim l exp[(\xi\Delta k)^2], which the width must exceed for the valley polarization to become suppressed. This is discussed not at the end of Section V but in the discussion, Section VI, below Eq. (47). In the new version we added a sentence at the end of Section V to clarify that the seeming contradiction will be resolved below.
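The exponential sensitivity of that length scale is easy to illustrate numerically. Below is a minimal Python sketch; the mean free path and the values of ξΔk are made-up assumptions, and only the scaling W* ~ l·exp[(ξΔk)²] (with the exponent taken positive, so the scale is large for well-separated nodes and smooth disorder) is taken from the reply:

```python
import math

def crossover_width(l_mfp: float, xi_dk: float) -> float:
    """Crossover slab width W* ~ l * exp[(xi * dk)^2] above which the
    valley polarization is suppressed. Prefactors of order one are ignored."""
    return l_mfp * math.exp(xi_dk ** 2)

# Illustrative numbers (assumptions, not values from the paper): a mean
# free path of 0.1 um and a few values of the dimensionless product xi*dk.
for xi_dk in (1.0, 2.0, 3.0):
    print(f"xi*dk = {xi_dk}: W* ~ {crossover_width(0.1, xi_dk):.3g} um")
```

Already for ξΔk = 3 the crossover width sits nearly four orders of magnitude above the mean free path, which is the sense in which the polarization can persist up to macroscopic scales.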
In response to requested changes:
1. We clarified the robustness of the effect seen in the minimal model that we used and included a discussion of our expectations for other Weyl models in the conclusion section (third and second to last paragraphs). Moreover, in our model, the Fermi arcs are not straight: Our model includes variable boundary potentials, which curve the Fermi arcs, as explained in section III and seen e.g. in the contour plot of Fig. 2. The velocity is also not perfectly isotropic.
2. We have thoroughly rewritten the introduction, in particular extending the review of the role of the valley degree of freedom, disorder, and finite-size effects. Thereby we added 14 new references [5, 7, 18-25, 28, 37-39].
3. We corrected the typo, thank you!
https://www.aimsciences.org/article/doi/10.3934/mbe.2004.1.325
# American Institute of Mathematical Sciences
2004, 1(2): 325-338. doi: 10.3934/mbe.2004.1.325
## A Mathematical Model of Receptor-Mediated Apoptosis: Dying to Know Why FasL is a Trimer
1 Department of Mathematics, University of Michigan, 525 E. University, Ann Arbor, MI 48109-1109, United States
Received April 2004; Revised May 2004; Published July 2004
The scientific importance of understanding programmed cell death is undeniable; however, the complexity of death signal propagation and the formerly incomplete knowledge of apoptotic pathways has left this topic virtually untouched by mathematical modeling. In this paper, we use a mechanistic approach to frame the current understanding of receptor-mediated apoptosis with an immediate goal of isolating the role receptor trimerization plays in this process. Analysis and simulation suggest that if the death signal is to be successful at low-receptor, high-ligand concentration, Fas trimerization is unlikely to be the driving force in the signal propagation. However at high-receptor and low-ligand concentrations, the mathematical model illustrates how the ability of FasL to cluster three Fas receptors can be crucially important for downstream events that propagate the apoptotic signal.
Citation: Ronald Lai, Trachette L. Jackson. A Mathematical Model of Receptor-Mediated Apoptosis: Dying to Know Why FasL is a Trimer. Mathematical Biosciences & Engineering, 2004, 1 (2) : 325-338. doi: 10.3934/mbe.2004.1.325
http://www.sciencemadness.org/talk/viewthread.php?tid=72395&page=2
Author: Subject: Disposal of Na from a breeder reactor
froot
National Hazard
Posts: 347
Registered: 23-10-2003
Location: South Africa
Member Is Offline
Mood: refluxed
Dan, I would imagine this risk assessment will have a discouraging feel about it.
There are too many unknowns to confidently say that the method described will proceed without any challenges.
No sane person would approve of this horrible idea.
I understand that this is outside of your scope but for the sake of discussion my suggestion would be to seal the sodium with wax first. A wax with a melting point lower than that of sodium.
Say there is up to 2 inches of sodium in a barrel.
Add 2 inches of molten wax to capture any loose material on the sodium surface, spraying it in to wet all the loose material and help prevent it from floating. When that's cooled, add another 2 inches of clean wax and let that cool.
Now you have a solid layer of around 6 inches. Cut the barrel into 2 pieces just above the wax layer and crush the top piece as flat as possible, you should be able to get it thinner than 2 inches.
That would come to a total of less than 8 inches per barrel and no fires.
This would eliminate the possibility of barrels splitting at the sodium during crushing.
Questions are, could molten wax trigger any reactions?
How much potentially reactive material is stuck to the walls of the barrels?
We salute the improvement of the human genome by honoring those who remove themselves from it.
Of necessity, this honor is generally bestowed posthumously. - www.darwinawards.com
unionised
International Hazard
Posts: 4005
Registered: 1-11-2003
Location: UK
Member Is Offline
Mood: No Mood
I wondered how long it would be before that vid surfaced.
I can't see this process getting to completion without a violent redox reaction.
Not every time, but sooner or later.
It reduces the bulk- and I guess that makes it easier / cheaper to dig the hole.
But eventually the ground will sink + compact the barrels- well out of the range of any humans.
Dan Vizine
International Hazard
Posts: 615
Registered: 4-4-2014
Location: Tonawanda, New York
Member Is Offline
Mood: High Resistance
Quote: Originally posted by wg48 77,000 gallons of warm (radioactive) sodium is considered "non-hazardous" ... That's a definition I have not come across before. A lot more than a few pounds per barrel then. It's not that long ago they would have opened the barrels and dumped them at sea. I think we (EU and US) don't do that anymore.
I say "was" because that was the original amount. The primary loop was 77,000 gallons, which required 1400 drums. The sodium was melted and poured out. These drums are what's left. The sodium was hydrolyzed to give enormous multi-ton blocks of 70% NaOH and buried. I THINK the goal here may simply be to turn 18 cargo containers into 5, since a well worked-out method exists for total clean-up. It uses moist CO2 followed by water.
"All Your Children Are Poor Unfortunate Victims of Lies You Believe, a Plague Upon Your Ignorance that Keeps the Youth from the Truth They Deserve"...F. Zappa
Dan Vizine
International Hazard
Posts: 615
Registered: 4-4-2014
Location: Tonawanda, New York
Member Is Offline
Mood: High Resistance
Quote: Originally posted by froot Dan I would imagine this risk assessment will have a discouraging feel about it. There are too many unknowns to confidently say that the method described will proceed without any challenges. No sane person would approve of this horrible idea. I understand that this is outside of your scope but for the sake of discussion my suggestion would be to seal the sodium with wax first. A wax with a melting point lower than that of sodium. Say there is up to 2 inches of sodium in a barrel. Add 2 inches of molten wax to capture any loose material on the sodium surface by spraying it in to wet all the loose material and help prevent it from floating. . When that's cooled add another 2 inches of clean wax and let that cool. Now you have a solid layer of around 6 inches. Cut the barrel into 2 pieces just above the wax layer and crush the top piece as flat as possible, you should be able to get it thinner than 2 inches. That would come to a total of less than 8 inches per barrel and no fires. This would eliminate the possibility of barrels splitting at the sodium during crushing. Questions are, could molten wax trigger any reactions? How much potentially reactive material is stuck to the walls of the barrels?
Your opening was spot on. Or was a few days ago. Since then I came across a photo of a representative drum from this lot. The chunks are sodium oxide/hydroxide/carbonate around a metallic core. There are pieces on the sides of the barrel. They look loosely adhering but you can't say for sure. In my mind the real question is, when one of the lumps crushes, will that drive any hydroxide (and moisture of hydration) into fresh metal and cause a fire? I see fire, not a raging thermite reaction, as the actual danger. Fire will release radioactive smoke. A thermite reaction would burn out very quickly and not destroy the drum wall. There just isn't enough rust. But, it WOULD start the other available sodium burning.
Wax can pull away from walls when the crushing starts. Otherwise, no, it doesn't react with sodium. I favored mineral oil, but it's liquid and not acceptable.
I have a series of good tests dreamed up and I've made a few test jigs. This weekend should be productive.
"All Your Children Are Poor Unfortunate Victims of Lies You Believe, a Plague Upon Your Ignorance that Keeps the Youth from the Truth They Deserve"...F. Zappa
Dan Vizine
International Hazard
Posts: 615
Registered: 4-4-2014
Location: Tonawanda, New York
Member Is Offline
Mood: High Resistance
Quote: Originally posted by j_sum1 Here is how it used to be done. But apparently we now just crush and bury. I wonder if it is a significant improvement. https://www.youtube.com/watch?v=HY7mTCMvpEM
Burial was my uninformed guess way back when. Now, I am pretty sure that anything with elemental sodium in it just isn't able to be buried as radioactive non-hazardous waste. This can only be about space-saving. They just want to avoid fire while crushing. Ultimate disposal will be at a future date.
"All Your Children Are Poor Unfortunate Victims of Lies You Believe, a Plague Upon Your Ignorance that Keeps the Youth from the Truth They Deserve"...F. Zappa
j_sum1
Posts: 4631
Registered: 4-10-2014
Location: Oz
Member Is Offline
Mood: Metastable, and that's good enough.
Ok. Here's an idea.
The sodium remaining is obviously in excess of the rust and the water. That which is there could potentially cause ignition, but after that the limiting reagent is atmospheric oxygen.
If the crushing could be done in a closed cylinder then air will be expelled and even if a reaction occurs it will quickly die out.
Say, a cylinder that the drums fit in reasonably snugly and a ram that is near to sealed against the wall of the cylinder. Add some temperature measurement to the set-up so that you have information on when/if a reaction occurs. Crush slowly allowing air to escape. Monitor the temperature. Remove ram and empty the cylinder when cool. You should be able to get a nice puck without an uncontrolled fire.
Dan Vizine
International Hazard
Posts: 615
Registered: 4-4-2014
Location: Tonawanda, New York
Member Is Offline
Mood: High Resistance
Quote: Originally posted by unionised I wondered how long it would be before that vid surfaced. I can't see this process getting to completion without a violent redox reaction. Not every time- bust sooner or later. It reduces the bulk- and I guess that makes it easier / cheaper to dig the hole. But eventually the ground will sink + compact the barrels- well out of the range of any humans.
As many things in life are, this is the province of statutes and rules, not common sense.
From my tests so far, unless the sodium is molten, no amount of crushing, compacting in a hydraulic press, or impact (up to and including a hand sledgehammer swung with all my strength) will ignite the redox reaction. In my present view, prevent the sodium from burning or melting and we'll get no thermite-type reaction. Many more tests to follow, though.
"All Your Children Are Poor Unfortunate Victims of Lies You Believe, a Plague Upon Your Ignorance that Keeps the Youth from the Truth They Deserve"...F. Zappa
Dan Vizine
International Hazard
Posts: 615
Registered: 4-4-2014
Location: Tonawanda, New York
Member Is Offline
Mood: High Resistance
Quote: Originally posted by j_sum1 Ok. Here's an idea. The sodium remaining is obviously in excess to the rust and the water. That which is there could potentially cause ignition but after that the limiting reagent is atmospheric oxygen. If the crushing could be done in a closed cylinder then air will be expelled and even if a reaction occurs it will quickly die out. Say, a cylinder that the drums fit in reasonably snugly and a ram that is near to sealed against the wall of the cylinder. Add some temperature measurement to the set-up so that you have information on when/if a reaction occurs. Crush slowly allowing air to escape. Monitor the temperature. Remove ram and empty the cylinder when cool. You should be able to get a nice puck without an uncontrolled fire.
We're starting to converge on thinking. Remember, though, thermite doesn't need oxygen. That is not a big deal (probably) because I think that unless the sodium gets to the mp, crushing won't initiate the reaction. I doubt if crushing gets the metal anywhere near hot enough to melt the sodium. In fact, I'd be willing to bet that crushing leads to no more than a handful of degrees of temp rise. The planned full scale test will confirm or deny this.
Has anybody here used a drum crusher before? Ever felt a just-compacted drum?
"All Your Children Are Poor Unfortunate Victims of Lies You Believe, a Plague Upon Your Ignorance that Keeps the Youth from the Truth They Deserve"...F. Zappa
wg48
International Hazard
Posts: 821
Registered: 21-11-2015
Member Is Offline
Mood: No Mood
Dan: I have not seen a drum compacted, but I would expect there to be hot spots, perhaps as the surfaces of the metal slide over each other. The problem is how many drums you have to compact to be confident there will not be a hot spot or spark that can ignite the thermite or sodium. And what if there is sodium peroxide in the drum?
Ideally you need a set up such that any fire, leakage, minor explosion does not endanger personnel, damage the equipment or spread to the other drums.
[Edited on 23-2-2017 by wg48]
j_sum1
Posts: 4631
Registered: 4-10-2014
Location: Oz
Member Is Offline
Mood: Metastable, and that's good enough.
My point is that both iron oxide and atmospheric oxygen will be limiting reactants in a snug cylinder. If a reaction does begin you can simply wait for it to die off and cool down. You might get a temporary sodium fire but there is no need to panic. Sparks, thermite, localised hot-spots and melting need not cause problems. That leaves reaction with water present which might be a bit less predictable -- potentially explosive and certainly releasing gas and a bit of heat. Mitigating against that possibility is the main reason for crushing slowly I think. It means there is some time for H2 to escape and heat to dissipate. And if there is a coulombic explosion the vessel is not tightly compacted at the time.
I reiterate, this is not the way I would do it, but the most feasible practice I see within the constraints given.
Sulaiman
International Hazard
Posts: 2458
Registered: 8-2-2015
Location: Shah Alam, Malaysia
Member Is Offline
Sodium metal can be extruded by a small hand tool to form sodium wire,
when the barrels are crushed I would expect any remaining unreacted sodium to be squeezed out of the bulk material.
Possibly a problem, possibly a coarse separation method ?
CAUTION : Hobby Chemist, not Professional or even Amateur
Dan Vizine
International Hazard
Posts: 615
Registered: 4-4-2014
Location: Tonawanda, New York
Member Is Offline
Mood: High Resistance
Since I last reported, much has been clarified to me. All of the odd restrictions now make utter sense. The goal isn't just to save space, it's permanent sequestration in a very stable form. Probably one of the best sequestration techniques of all. Wish I could say more, but the last sentence really has a strong indication contained in it.
I took some interesting pictures during the preparation of this report; one of the pictures below shows a 20 J impact on a Na-on-rust smear. It barely did anything, but some ignition was seen. 40 J was another story. The results are similar to the ~25 J/sq. cm accepted value for aluminum on steel.
Another question was answered: how much moisture can soil contain and still act as a slow, controlled sodium oxidant? At a certain level, it works beautifully. The attached picture was taken during trials to determine the allowable moisture level for the soil to act as an asphyxiant. The picture shows that 10% sure isn't it.
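The moisture question invites a quick back-of-envelope check against the stoichiometry Na + H2O -> NaOH + 1/2 H2. The Python sketch below is illustrative only; the soil mass, moisture fraction, and sodium mass are invented numbers, not figures from the report:

```python
# Back-of-envelope: how much water a given soil moisture fraction supplies
# relative to the stoichiometric demand of Na + H2O -> NaOH + 1/2 H2.
M_NA, M_H2O = 22.99, 18.02  # molar masses, g/mol

def water_ratio(soil_kg: float, moisture_frac: float, na_kg: float) -> float:
    """Available water divided by the stoichiometric water demand of the Na."""
    water_kg = soil_kg * moisture_frac
    demand_kg = na_kg * (M_H2O / M_NA)  # 1 mol H2O per mol Na
    return water_kg / demand_kg

# Hypothetical: 100 kg of soil at 10% moisture against 5 kg of sodium.
print(round(water_ratio(100, 0.10, 5.0), 2))  # -> 2.55
```

At these made-up numbers, 10% moisture supplies well over the stoichiometric water demand, consistent with the observation that 10% is too wet to act as a slow, controlled oxidant.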
"All Your Children Are Poor Unfortunate Victims of Lies You Believe, a Plague Upon Your Ignorance that Keeps the Youth from the Truth They Deserve"...F. Zappa
Dan Vizine
International Hazard
Posts: 615
Registered: 4-4-2014
Location: Tonawanda, New York
Member Is Offline
Mood: High Resistance
Quote: Originally posted by Sulaiman Sodium metal can be extruded by a small hand tool to form sodium wire, when the barrels are crushed I would expect any remaining unreacted sodium to be squeezed out of the bulk material. Possibly a problem, possibly a coarse separation method ?
A report from the mid-80's indicates that the sodium exists as irregular rocky-looking masses of average size 1 - 4 inches. No liquid is seen in a random sampling of 6 drums. The lumps are a nugget of sodium, coated with a layer of solid sodium hydroxide monohydrate. This is coated with hard, crusty carbonate hydrates. I can't really reveal the method that was devised, but I think we've adequately addressed this.
"All Your Children Are Poor Unfortunate Victims of Lies You Believe, a Plague Upon Your Ignorance that Keeps the Youth from the Truth They Deserve"...F. Zappa
j_sum1
Posts: 4631
Registered: 4-10-2014
Location: Oz
Member Is Offline
Mood: Metastable, and that's good enough.
All very mysterious. I wish you were able to reveal more.
But you are sounding a lot more confident about the whole process, which is a good thing.
Dan Vizine
International Hazard
Posts: 615
Registered: 4-4-2014
Location: Tonawanda, New York
Member Is Offline
Mood: High Resistance
Hello Again,
* WORK OPPORTUNITY *
The results of the last Na work helped secure the desired government contract and that project is proceeding.
There is a new contract and this is an opportunity for someone here (hopefully) to become involved.
The subject is something called MOMS. This acronym stands for...well, I'm writing it now...
"The process called metal oxide mitigation of sodium, commonly referred to as MOMS, is really a specialized variation of the same type of metallothermic reaction of which thermite is an example. However, because of the relative free energies of the products and reactants, MOMS is a gentle, easily controlled method of reacting sodium to form highly stable products. While initially designed to react sodium with a silica-based substrate, the chemistry applies to other group 1 metals, particularly potassium."
The silica-based substrate is soil. We have a precise breakdown of its makeup; not surprisingly, it's two-thirds silica with a helping of hematite, plus a bunch of other things.
We're going to be looking at the reactions that I have tentatively worked out:
10 M + Fe2O3 + 6 SiO2 --> 5 M2SiO3 + 2 Fe + Si (M = Na, K)
and
4 M + 3 SiO2 --> 2 M2SiO3 + Si (M = Na, K)
What my contact requested is:
In addition to this we are interested in undertaking some computer modeling to simulate the thermodynamics and reaction chemistry (multiphase transition). We are seeking some expertise in use of software like Star-CCM+, ANSYS Fluent, COMSOL, or other similar programs. Do you have access and capabilities in this area or could you make recommendation?
So, if you are such a person, please contact me. I can put you in touch with them and this could be a good opportunity. It seems as though this company is going to be involved in this sort of project in an ongoing fashion. A well-received piece of work could lead to you getting other pieces.
This is a world-class company, and if you aren't the person they are looking for you won't be able to fake it. This is clearly a job for either a grad student, professor or working or retired professional. Hopefully you are that person and you and I will end up working together.
Please note that they hope you have some sort of access to one of the software packages, in addition to simply knowing how to use it. They won't be supplying any software. Compensation is very generous.
In this case we're going to be looking at ways of disposing of radioactive NaK and radioactive LiH via a modified MOMS.
See: Geomelt, but this will be in a container.
Dan
[Edited on 5/17/2017 by Dan Vizine]
"All Your Children Are Poor Unfortunate Victims of Lies You Believe, a Plague Upon Your Ignorance that Keeps the Youth from the Truth They Deserve"...F. Zappa
MrHomeScientist
International Hazard
Posts: 1745
Registered: 24-10-2010
Location: Flerovium
Member Is Offline
Mood: No Mood
Quote: Originally posted by Dan Vizine Please note that they hope you can access the software, and not simply use it.
Surely they don't expect people to have personal copies of those software packages? I use COMSOL at work and that's a $20,000 piece of software.
Dan Vizine
International Hazard
Posts: 615
Registered: 4-4-2014
Location: Tonawanda, New York
Member Is Offline
Mood: High Resistance
No, I think that was implicit in the statement. He just said access it.
"All Your Children Are Poor Unfortunate Victims of Lies You Believe, a Plague Upon Your Ignorance that Keeps the Youth from the Truth They Deserve"...F. Zappa
Dan Vizine
International Hazard
Posts: 615
Registered: 4-4-2014
Location: Tonawanda, New York
Member Is Offline
Mood: High Resistance
OK, Thanks guys. I see quite a few people at least gave it a glance, and I appreciate it.
Dan
PS. The expiration date on this is indeterminate. The contract is still in the bidding phase, anyway. So, if someone eventually sees this and wants to reply, please do. It's not necessarily too late until I post otherwise.
[Edited on 5/22/2017 by Dan Vizine]
"All Your Children Are Poor Unfortunate Victims of Lies You Believe, a Plague Upon Your Ignorance that Keeps the Youth from the Truth They Deserve"...F. Zappa
Dan Vizine
International Hazard
Posts: 615
Registered: 4-4-2014
Location: Tonawanda, New York
Member Is Offline
Mood: High Resistance
Quote: Originally posted by MrHomeScientist Quote: Originally posted by Dan Vizine Please note that they hope you can access the software, and not simply use it. Surely they don't expect people to have personal copies of those software packages? I use COMSOL at work and that's a $20,000 piece of software.
Whew! I just obtained a copy of COMSOL 5.1. After installing for an hour, it works. I guess. It's hard to say, really. Even after an afternoon of YouTube videos, it's difficult to even know where to begin. Well, that isn't my problem, thankfully.
[Edited on 5/23/2017 by Dan Vizine]
"All Your Children Are Poor Unfortunate Victims of Lies You Believe, a Plague Upon Your Ignorance that Keeps the Youth from the Truth They Deserve"...F. Zappa
Sciencemadness Discussion Board » Fundamentals » Chemistry in General » Disposal of Na from a breeder reactor
https://declaredesign.org/getting-started.html
# Getting Started with DeclareDesign
DeclareDesign is a system for describing research designs in code and simulating them in order to understand their properties. Because DeclareDesign employs a consistent grammar of designs, you can focus on the intellectually challenging part – designing good research studies – without having to code up simulations from scratch. DeclareDesign is based on the Model-Inquiry-Data Strategy-Answer Strategy (MIDA) framework for describing designs and a declare-diagnose-redesign workflow for improving research designs before implementing them.
## Installing R
You can download the statistical computing environment R for free from CRAN. We also recommend the free program RStudio, which provides a friendly interface to R. Both R and RStudio are available on Windows, Mac, and Linux.
Once you have R and RStudio installed, open it up and install DeclareDesign and its related packages. These include three packages that enable specific steps in the research process (fabricatr for simulating social science data; randomizr for random sampling and random assignment; and estimatr for design-based estimators). You can also install DesignLibrary, which gets standard designs up-and-running in one line. To install them, copy the following code into your R console:
install.packages(c(
"DeclareDesign",
"fabricatr",
"randomizr",
"estimatr",
"DesignLibrary"
))
We also recommend that you install and get to know the tidyverse suite of packages for data analysis:
install.packages("tidyverse")
For introductions to R and the tidyverse we especially recommend the free resource R for Data Science.
## Building a step of a research design
A research design is a concatenation of design steps. The best way to learn how to build a design is to learn how to make a step. We will start out by making—or declaring—a step that implements random assignment.
Almost all steps take a dataset as input and return a dataset as output. We will imagine input data that describes a set of voters in Los Angeles. The research project we are planning involves randomly assigning voters to receive (or not receive) a knock on their door from a canvasser. Our data look like this:
Table 1: Example data
ID age sex party precinct
001 66 M REP 9104
002 54 F DEM 8029
003 18 M GRN 8383
004 42 F DEM 2048
005 27 M REP 5210
There are 100 voters in the dataset.
We want a function that takes this dataset, implements a random assignment, adds it to the dataset, and then returns the new dataset containing the random assignment.
You could write your own function to do that but you can also use one of the declare_* functions in DeclareDesign that are designed to write functions. Each one of these functions is a kind of function factory: it takes a set of parameters about your research design like the number of units and the random assignment probability as inputs, and returns a function as an output. Here is an example of a declare_assignment step.
simple_random_assignment_step <-
declare_assignment(Z = simple_ra(N = N, prob = 0.6))
The big idea here is that the object we created, simple_random_assignment_step, is not a particular assignment; it is a function that conducts assignment when called. You can run the function on data:
simple_random_assignment_step(voter_file)
Table 2: Data output following implementation of an assignment step.
ID age sex party precinct Z
001 66 M REP 9104 1
002 54 F DEM 8029 1
003 18 M GRN 8383 0
004 42 F DEM 2048 1
005 27 M REP 5210 0
The output of the simple_random_assignment_step(voter_file) call is the original dataset with a new column indicating treatment assignment (Z) appended. As a bonus, the data also include the probability that each unit is assigned to the condition it is in (Z_cond_prob), which is an extremely useful number to know in many analysis settings. The most important thing to understand here is that steps are “dataset-in, dataset-out” functions. The simple_random_assignment_step took the voter_file dataset and returned a dataset with assignment information appended.
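This "dataset-in, dataset-out" contract is easy to mimic by hand. The sketch below is an illustrative base-R version of what declare_assignment() builds for you; the voter file here is simulated, not the real data:

```r
# A hand-rolled "dataset-in, dataset-out" step in base R, mimicking
# what declare_assignment() builds. The voter file is simulated.
set.seed(42)
voter_file <- data.frame(
  ID  = sprintf("%03d", 1:100),
  age = sample(18:90, 100, replace = TRUE)
)

simple_random_assignment_step <- function(data, prob = 0.6) {
  data$Z <- rbinom(nrow(data), size = 1, prob = prob)  # append assignment
  data                                                 # return the new dataset
}

assigned <- simple_random_assignment_step(voter_file)
head(assigned)
```

The function neither stores state nor modifies its input in place; it simply returns a new data frame, which is exactly what lets steps be chained.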
Every step of a research design declaration can be written using one of the declare_* functions. Table 3 collects these according to the four elements of a research design. Below, we walk through the common uses of each of these declaration functions.
Table 3: Declaration functions in DeclareDesign
Design component Function Description
Model declare_model() define background variables and potential outcomes
Inquiry declare_inquiry() define research question
Data strategy declare_sampling() specify sampling procedures
declare_assignment() specify assignment procedures
declare_measurement() specify measurement procedures
Answer strategy declare_estimator() specify estimation procedures
declare_test() specify testing procedures
The built-in functions we provide in the DeclareDesign package are quite flexible and handle many major designs, but not all. The framework is built so that you are never constrained by what we provide. At any point, you can write a function that implements your own procedures. The only discipline that the framework imposes is that you write your procedure as a function that takes data in and sends data back.
Here is an example of how you turn your own functions into design steps.
library(dplyr)  # mutate() comes from dplyr

custom_assignment <- function(data) {
  mutate(data, Z = rbinom(n = nrow(data), size = 1, prob = 0.5))
}
my_assignment_step <- declare_assignment(handler = custom_assignment)
my_assignment_step(voter_file)
Table 4: Data generated using a custom function
ID age sex party precinct Z
001 66 M REP 9104 0
002 54 F DEM 8029 1
003 18 M GRN 8383 1
004 42 F DEM 2048 0
005 27 M REP 5210 0
## Research design steps
In this section, we walk through how to declare each step of a research design using DeclareDesign. In the next section, we build those steps into a research design, and then describe how to interrogate the design.
### Model
The model defines the structure of the world, both its size and background characteristics as well as how interventions in the world determine outcomes.
#### Population
The population defines the number of units in the population, any multilevel structure to the data, and its background characteristics. We can define the population in several ways. In some cases, you may start a design with data on the population. When that happens, we do not need to simulate it. We can simply declare the data as our population:
declare_model(data = voter_file)
Table 5: Draw from a fixed population
ID age sex party precinct
001 66 M REP 9104
002 54 F DEM 8029
003 18 M GRN 8383
004 42 F DEM 2048
005 27 M REP 5210
When we do not have complete data on the population, we simulate it. Relying on the data simulation functions from our fabricatr package, declare_model asks about the size and variables of the population. For instance, if we want a function that generates a dataset with 100 units and a random variable U we write:
declare_model(N = 100, U = rnorm(N))
When we run this population function, we will get a different 100-unit dataset each time, as shown in Table 6.
Table 6: Five draws from the population.
Draw 1
Draw 2
Draw 3
Draw 4
Draw 5
ID U ID U ID U ID U ID U
001 -0.700 001 -0.35 001 -1.07 001 0.696 001 0.87
002 0.561 002 0.27 002 0.11 002 0.059 002 0.46
003 0.048 003 0.45 003 0.35 003 -0.030 003 1.73
004 0.567 004 -2.69 004 -1.37 004 -1.279 004 -0.46
005 -1.196 005 0.87 005 -0.39 005 -0.247 005 -0.70
The fabricatr package can simulate many different types of data, including various types of categorical variables or different types of data structures, such as panel or multilevel structures. You can read the fabricatr website vignette to get started simulating data.
As an example of a two-level hierarchical data structure, here is a declaration for 100 households with a random number of individuals within each household. This two-level structure could be declared as:
declare_model(
  households = add_level(
    N = 100,
    individuals_per_hh = sample(1:6, N, replace = TRUE)
  ),
  individuals = add_level(
    N = individuals_per_hh,
    age = sample(1:100, N, replace = TRUE)
  )
)
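Without fabricatr, the same two-level structure can be sketched in base R (sizes as above; the variable names are illustrative):

```r
# Base-R sketch of the two-level structure: 100 households,
# each containing between 1 and 6 individuals.
set.seed(7)
n_hh <- 100
individuals_per_hh <- sample(1:6, n_hh, replace = TRUE)
individuals <- data.frame(
  hh_id = rep(seq_len(n_hh), times = individuals_per_hh),  # household ID repeated per member
  age   = sample(1:100, sum(individuals_per_hh), replace = TRUE)
)
```

fabricatr's add_level saves you from writing the rep() bookkeeping yourself and keeps level-specific variables at the right level.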
As always, you can exit our built-in way of doing things and bring in your own code. This is useful for complex designs, or when you have already written code for your design and you want to use it directly. Here is an example of a custom population declaration:
complex_population_function <- function(data, N_units) {
data.frame(U = rnorm(N_units))
}
declare_model(
handler = complex_population_function, N_units = 100
)
#### Potential outcomes
Defining potential outcomes is as easy as a single expression per potential outcome. Potential outcomes may depend on background characteristics, other potential outcomes, or other R functions.
declare_model(
Y_Z_0 = U,
Y_Z_1 = Y_Z_0 + 0.25)
design <-
declare_model(
N = 100, U = rnorm(N),
potential_outcomes(Y ~ 0.25 * Z + U)
)
Table 7: Adding potential outcomes to the population.
ID U Y_Z_0 Y_Z_1
001 -2.41 -2.41 -2.16
002 0.81 0.81 1.06
003 -0.86 -0.86 -0.61
004 1.30 1.30 1.55
005 0.59 0.59 0.84
The declare_model function also includes an alternative interface for defining potential outcomes that uses R’s formula syntax with the potential_outcomes function. The formula syntax lets you specify “regression-like” outcome equations. One downside is that it mildly obscures how the eventual potential outcomes columns are named. We build the names of the potential outcomes columns from the outcome name (here Y, on the left-hand side of the formula) and the name of the assignment variable in the conditions argument (here Z).
declare_model(potential_outcomes(Y ~ 0.25 * Z + U, conditions = list(Z = c(0, 1))))
Either way of creating potential outcomes works; one may be easier or harder to code up in a given research design setting.
### Inquiry
To define your inquiry, declare your estimand. Estimands are typically summaries of the data produced in declare_model. Here we define the average treatment effect as follows:
declare_inquiry(PATE = mean(Y_Z_1 - Y_Z_0))
Notice that we defined the PATE (the population average treatment effect), but said nothing special related to the population – it looks like we just defined the average treatment effect. This is because order matters. If we want to define a SATE (the sample average treatment effect), we would have to do so after sampling has occurred. We will see how to do this in a moment.
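Because the treatment effect declared above is a constant 0.25, the PATE is exactly 0.25 in every draw of the model. A quick base-R check of the estimand calculation:

```r
# With a constant additive effect, mean(Y_Z_1 - Y_Z_0) is 0.25 in every draw.
set.seed(1)
U     <- rnorm(100)
Y_Z_0 <- U
Y_Z_1 <- Y_Z_0 + 0.25
PATE  <- mean(Y_Z_1 - Y_Z_0)
```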
### Data strategy
The data strategy constitutes one or more steps representing interventions the researcher makes in the world from sampling to assignment to measurement.
#### Sampling
The sampling step relies on the randomizr package to conduct random sampling. See Section ?? for an overview of the many kinds of sampling that are possible. Here we define a procedure for drawing a 50-unit sample from the population:
declare_sampling(S = complete_rs(N, n = 50), legacy = FALSE)
When we draw data from our simple design at this point, it will have fewer rows: it will have shrunk from 100 units in the population to a data frame of 50 units representing the sample. The new data frame also includes a variable indicating the probability of being included in the sample. In this case, every unit in the population had an equal inclusion probability of 0.5.
Table 8: Sampled data.
ID U Y_Z_0 Y_Z_1 S
5 005 0.523 0.523 0.77 1
6 006 1.933 1.933 2.18 1
8 008 0.748 0.748 1.00 1
10 010 -0.078 -0.078 0.17 1
11 011 2.119 2.119 2.37 1
Sampling could also be non-random, which could be accomplished by using a custom handler.
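Under the hood, complete random sampling just draws exactly n of the N units, so every unit has the same inclusion probability n/N. A base-R sketch (not the randomizr implementation):

```r
# Complete random sampling: exactly n = 50 of N = 100 units are sampled,
# so each unit's inclusion probability is 50/100 = 0.5.
set.seed(3)
N <- 100
n <- 50
S <- as.integer(seq_len(N) %in% sample.int(N, n))  # 1 = sampled, 0 = not
```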
#### Assignment
Here, we define an assignment procedure that allocates subjects to treatment with probability 0.5.
declare_assignment(Z = complete_ra(N, prob = 0.5), legacy = FALSE)
After treatments are assigned, some potential outcomes are revealed. Treated units reveal their treated potential outcomes and untreated units reveal their untreated potential outcomes. The reveal_outcomes function performs this switching operation.
declare_measurement(Y = reveal_outcomes(Y ~ Z))
Adding these two declarations to the design results in a data frame with an additional indicator Z for the assignment as well as its corresponding probability of assignment. Again, here the assignment probabilities are constant, but in other designs described in Section ?? they are not and this is crucial information for the analysis stage. The outcome variable Y is composed of each unit’s potential outcomes depending on its treatment status.
Table 9: Sampled data with assignment indicator.
ID U Y_Z_0 Y_Z_1 S Z Y
004 0.94 0.94 1.19 1 1 1.19
005 0.77 0.77 1.02 1 0 0.77
008 -0.17 -0.17 0.08 1 1 0.08
009 1.60 1.60 1.85 1 1 1.85
010 -1.39 -1.39 -1.14 1 0 -1.39
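The switching operation that reveal_outcomes(Y ~ Z) performs can be written out directly. Here is a base-R sketch using the first few values from Table 9:

```r
# Treated units reveal Y_Z_1; untreated units reveal Y_Z_0.
Z     <- c(1, 0, 1, 1, 0)                       # assignments from Table 9
Y_Z_0 <- c(0.94, 0.77, -0.17, 1.60, -1.39)      # untreated potential outcomes
Y_Z_1 <- Y_Z_0 + 0.25                           # treated potential outcomes
Y <- ifelse(Z == 1, Y_Z_1, Y_Z_0)               # the "switch"
```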
#### Measurement
Measurement is a critical part of every research design; sometimes it is beneficial to explicitly declare the measurement procedures of the design, rather than allowing them to be implicit in the ways variables are created in declare_model. For example, we might imagine that the normally distributed outcome variable Y is a latent outcome that will be translated into a binary outcome when measured by the researcher:
declare_measurement(Y_binary = rbinom(N, 1, prob = pnorm(Y)))
Table 10: Sampled data with an explicitly measured outcome.
ID U Y_Z_0 Y_Z_1 S Z Y Y_binary
001 -1.19 -1.19 -0.94 1 0 -1.19 0
002 -0.66 -0.66 -0.41 1 0 -0.66 0
003 1.09 1.09 1.34 1 0 1.09 1
004 0.51 0.51 0.76 1 1 0.76 1
007 0.76 0.76 1.00 1 0 0.76 0
Through our model and data strategy steps, we have simulated a dataset with two key inputs to the answer strategy: an assignment variable and an outcome. In other answer strategies, pretreatment characteristics from the model might also be relevant. The data look like this:
Table 11: Data with revealed outcomes.
ID U Y_Z_0 Y_Z_1 S Z Y
001 -0.06 -0.06 0.190 1 0 -0.06
002 -0.16 -0.16 0.089 1 0 -0.16
004 0.99 0.99 1.239 1 1 1.24
006 -2.61 -2.61 -2.364 1 0 -2.61
007 0.78 0.78 1.030 1 1 1.03
Our estimator is the difference-in-means estimator, which compares outcomes between the group assigned to treatment and the group assigned to control. The difference_in_means() function in the estimatr package calculates the estimate, the standard error, the $$p$$-value, and the confidence interval for you:
difference_in_means(Y ~ Z, data = simple_design_data)
Table 12: Difference-in-means estimate from simulated data.
term estimate std.error statistic p.value conf.low conf.high df outcome
Z 0.39 0.28 1.4 0.17 -0.17 0.95 46 Y
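The difference-in-means estimate and its conventional two-sample standard error can also be computed by hand. This is a minimal base-R sketch, not the estimatr implementation (which additionally handles blocks, clusters, and confidence intervals):

```r
# Hand-rolled difference-in-means with the usual two-sample standard error.
dim_by_hand <- function(Y, Z) {
  est <- mean(Y[Z == 1]) - mean(Y[Z == 0])
  se  <- sqrt(var(Y[Z == 1]) / sum(Z == 1) + var(Y[Z == 0]) / sum(Z == 0))
  c(estimate = est, std.error = se)
}

# Tiny worked example: treated mean 2, control mean 3, so estimate = -1.
res <- dim_by_hand(Y = c(1, 3, 2, 4), Z = c(1, 1, 0, 0))
```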
Now, in order to declare our estimator, we can send the name of a modeling function to declare_estimator. R has many modeling functions that work with declare_estimator, including lm, glm, or the ictreg function from the list package, among hundreds of others. We use many estimators from estimatr because they are fast and calculate robust standard errors easily. Estimators are (almost always) associated with estimands. Here, we are targeting the population average treatment effect with the difference-in-means estimator.
declare_estimator(
Y ~ Z, model = difference_in_means, inquiry = "PATE"
)
#### Two finer points: model_summary and label_estimator
Many answer strategies use modeling functions like lm, lm_robust, or glm. The output from these modeling functions are typically very complicated list objects that contain large amounts of information about the modeling process. We typically only want a few summary pieces of information out of these model objects, like the coefficient estimates, standard errors, and confidence intervals. We use model summary functions passed to the model_summary argument of declare_estimator to do so. Model summary functions take models as inputs and return data frames as outputs.
The default model summary function is tidy:
declare_estimator(
Y ~ Z, model = lm_robust, model_summary = tidy
)
You could also use glance to get model fit statistics like $$R^2$$.
declare_estimator(
Y ~ Z, model = lm_robust, model_summary = glance
)
Occasionally, you’ll need to write your own model summary function that takes a model fit object and returns a data.frame with the information you need. For example, in order to calculate average marginal effects estimates from a logistic regression, we run a glm model through the margins function from the margins package; we then need to “tidy” the output from margins using the tidy function. Here we’re also asking for a 95% confidence interval.
library(margins)  # provides margins()
library(broom)    # provides tidy()

tidy_margins <- function(x) {
  tidy(margins(x, data = x$data), conf.int = TRUE)
}

declare_estimator(
  Y ~ Z + X,
  model = glm,
  family = binomial("logit"),
  model_summary = tidy_margins,
  term = "Z"
)

If your answer strategy does not use a model function, you’ll need to provide a function that takes data as an input and returns a data.frame with the estimate. Set the handler to be label_estimator(your_function_name) to take advantage of DeclareDesign’s mechanism for matching estimands to estimators. When you use label_estimator, you can provide an estimand, and DeclareDesign will keep track of which estimates match each estimand. For example, to calculate the mean of an outcome, you could write your own estimator in this way:

my_estimator <- function(data) {
  data.frame(estimate = mean(data$Y))
}

declare_estimator(handler = label_estimator(my_estimator),
                  label = "mean", inquiry = "Y_bar")
### Other design steps
The main declare_* functions cover many elements of research designs, but not all. You can add any other operation as a step in your design using declare_step. Here, you must define a specific handler. Some handlers that may be useful are the dplyr verbs such as mutate and summarize, and the fabricate function from our fabricatr package.
To add a variable using fabricate:
declare_step(handler = fabricate, added_variable = rnorm(N))
If you have district-month data you may want to analyze at the district level, collapsing across months:
collapse_data <- function(data, collapse_by) {
data %>%
group_by({{ collapse_by }}) %>%
summarize_all(mean, na.rm = TRUE)
}
declare_step(handler = collapse_data, collapse_by = district)
# Note: The {{ }} syntax is handy for writing functions in dplyr
# where you want to be able to reuse the function with different variable
# names. Here, the collapse_data function will group_by the
# variable you send to the argument collapse_by, which in our
# declaration we set to district. The pipeline within the function
# then calculates the mean in each district.
## Building a design from design steps
In the last section, we defined a set of individual research steps. We draw one version of them together here:
model <-
declare_model(N = 100, U = rnorm(N),
potential_outcomes(Y ~ 0.25 * Z + U))
inquiry <-
declare_inquiry(PATE = mean(Y_Z_1 - Y_Z_0))
sampling <- declare_sampling(
S = complete_rs(N, n = 50))
assignment <- declare_assignment(
Z = complete_ra(N, prob = 0.5))
measurement <- declare_measurement(Y = reveal_outcomes(Y ~ Z))
answer_strategy <- declare_estimator(
  Y ~ Z, model = difference_in_means, inquiry = "PATE"
)
To construct a research design object that we can operate on — diagnose it, redesign it, draw data from it, etc. — we add them together with the + operator, just as %>% makes dplyr pipelines or + creates ggplot objects.
design <-
model + inquiry +
sampling + assignment + measurement + answer_strategy
We will usually declare designs more compactly, concatenating steps directly with +:
design <-
declare_model(N = 100, U = rnorm(N),
potential_outcomes(Y ~ 0.25 * Z + U)) +
declare_inquiry(PATE = mean(Y_Z_1 - Y_Z_0)) +
declare_sampling(S = complete_rs(N, n = 50)) +
declare_assignment(Z = complete_ra(N, prob = 0.5)) +
declare_measurement(Y = reveal_outcomes(Y ~ Z)) +
declare_estimator(
Y ~ Z, model = difference_in_means, inquiry = "PATE"
)
### Order matters
When defining a design, the order in which steps are included in the design via the + operator matters. Think of the order of your design as the temporal order in which steps take place. Here, since the inquiry comes before sampling and assignment, it is a population inquiry, the population average treatment effect.
model +
declare_inquiry(PATE = mean(Y_Z_1 - Y_Z_0)) +
sampling +
assignment +
measurement +
answer_strategy
We could define our inquiry as a sample average treatment effect by putting inquiry after sampling:
model +
sampling +
declare_inquiry(SATE = mean(Y_Z_1 - Y_Z_0)) +
assignment +
measurement +
answer_strategy
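With the constant 0.25 effect used in this design, the SATE equals the PATE in every sample, so the ordering only changes the label. The distinction bites once effects are heterogeneous; the sketch below assumes unit-level effects drawn from a normal distribution purely for illustration:

```r
# Illustration with assumed heterogeneous unit-level effects (the design
# above uses a constant 0.25 effect, for which SATE = PATE in every sample).
set.seed(5)
tau   <- rnorm(100, mean = 0.25, sd = 0.5)               # one effect per unit
PATE  <- mean(tau)                                       # fixed population quantity
SATEs <- replicate(1000, mean(tau[sample.int(100, 50)])) # varies sample to sample
```

Across repeated samples the SATE scatters around the PATE, which is why declaring the inquiry after sampling targets a genuinely different quantity.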
## Simulating a research design
Diagnosing a research design — learning about its properties — requires simulating the design over and over. We need to simulate the data generating process, then calculate the values of its inquiries, then calculate the resulting estimates. For example, to draw simulated data based on the design, we use draw_data:
draw_data(design)
Table 13: Simulated data draw.
ID U Y_Z_0 Y_Z_1 S Z Y
003 0.080 0.080 0.33 1 1 0.33
008 -1.194 -1.194 -0.94 1 1 -0.94
010 1.300 1.300 1.55 1 1 1.55
013 -1.914 -1.914 -1.66 1 0 -1.91
016 0.034 0.034 0.28 1 1 0.28
draw_data runs all of the “data steps” in a design, which are both from the model and from the data strategy (sampling, assignment, and measurement).
To simulate the estimands from a single run of the design, we use draw_estimands. This runs two operations at once: it draws the data, and calculates the estimands at the point defined by the design. For example, in our design, the estimand comes just after the potential outcomes. In this design, draw_estimands will run the first two steps and then calculate the value of the inquiry from the inquiry function we declared:
draw_estimands(design)
Table 14: Estimands calculated from simulated data.
inquiry estimand
PATE 0.25
Similarly, we can draw the estimates from a single run with draw_estimates which simulates data and, at the appropriate moment, calculates estimates.
draw_estimates(design)
Table 15: Estimates calculated from simulated data.
term estimate std.error statistic p.value conf.low conf.high df outcome inquiry
Z 0.24 0.29 0.84 0.41 -0.34 0.83 48 Y PATE
To simulate designs, we use the simulate_design function to draw data, calculate estimands and estimates, and then repeat the process over and over.
simulation_df <- simulate_design(design)
simulation_df
Table 16: Simulations data frame.
sim_ID estimand estimate std.error statistic p.value conf.low conf.high df
1 0.25 0.64 0.25 2.55 0.014 0.136 1.15 47
2 0.25 -0.18 0.31 -0.59 0.557 -0.804 0.44 43
3 0.25 0.41 0.29 1.40 0.170 -0.181 1.00 46
4 0.25 0.17 0.23 0.73 0.468 -0.300 0.64 48
5 0.25 0.71 0.32 2.21 0.032 0.063 1.36 45
## Diagnosing a research design
Using the simulations data frame, we can calculate diagnosands like bias, root mean-squared-error, and power for each estimator-estimand pair. In DeclareDesign, we do this in two steps. First, declare your diagnosands, which are functions that summarize simulations data. The software includes many pre-coded diagnosands (see Section ??), though you can write your own like this:
study_diagnosands <- declare_diagnosands(
bias = mean(estimate - estimand),
rmse = sqrt(mean((estimate - estimand)^2)),
power = mean(p.value <= 0.05)
)
Second, apply your diagnosand declaration to the simulations data frame with the diagnose_design function:
diagnose_design(simulation_df, diagnosands = study_diagnosands)
Table 17: Design diagnosis.
Bias RMSE Power
-0.00 0.28 0.14
(0.01) (0.01) (0.01)
We can also do this in a single step by sending diagnose_design a design object. The function will first run the simulations for you, then calculate the diagnosands from the simulation data frame that results.
diagnose_design(design, diagnosands = study_diagnosands)
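The declare-diagnose loop can be approximated by hand to see what diagnose_design is doing. The base-R Monte Carlo below uses t.test as a stand-in for difference_in_means and 500 simulations (both are assumptions made here for brevity), so the numbers will differ slightly from Table 17:

```r
# Base-R Monte Carlo approximation of the diagnosis:
# simulate the design repeatedly, then summarize estimates vs. the estimand.
set.seed(11)
one_run <- function() {
  U  <- rnorm(100)                       # model: population of 100
  Y0 <- U
  Y1 <- U + 0.25                         # constant effect of 0.25
  s  <- sample.int(100, 50)              # sample n = 50
  Z  <- sample(rep(0:1, each = 25))      # complete random assignment
  Y  <- ifelse(Z == 1, Y1[s], Y0[s])     # reveal outcomes
  est <- mean(Y[Z == 1]) - mean(Y[Z == 0])
  p   <- t.test(Y[Z == 1], Y[Z == 0])$p.value
  c(estimate = est, p = p)
}
sims  <- t(replicate(500, one_run()))
bias  <- mean(sims[, "estimate"] - 0.25)
rmse  <- sqrt(mean((sims[, "estimate"] - 0.25)^2))
power <- mean(sims[, "p"] <= 0.05)
```

As in Table 17, the estimator is roughly unbiased but badly underpowered at this sample size.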
### Redesign
After the declaration phase, you will often want to learn how the diagnosands change as design features change. We can do this using redesign:
redesign(design, N = c(100, 200, 300, 400, 500))
An alternative way to do this is to write a “designer.” A designer is a function that makes designs based on a few design parameters. Designers help researchers flexibly explore design variations. Here’s a simple designer based on our running example:
simple_designer <- function(sample_size, effect_size) {
declare_model(
N = sample_size,
U = rnorm(N),
potential_outcomes(Y ~ effect_size * Z + U)
) +
declare_inquiry(PATE = mean(Y_Z_1 - Y_Z_0)) +
declare_sampling(S = complete_rs(N, n = 50)) +
declare_assignment(Z = complete_ra(N, prob = 0.5)) +
declare_measurement(Y = reveal_outcomes(Y ~ Z)) +
declare_estimator(
Y ~ Z, model = difference_in_means, inquiry = "PATE"
)
}
To create a single design, based on our original parameters of a 100-unit sample size and a treatment effect of 0.25, we can run:
design <- simple_designer(sample_size = 100, effect_size = 0.25)
Now to simulate multiple designs, we can use the DeclareDesign function expand_design. Here we examine our simple design under several possible sample sizes, which we might do as part of a power analysis to find the minimum required sample size. We hold the effect size constant.
designs <- expand_design(
simple_designer,
sample_size = c(100, 500, 1000),
effect_size = 0.25
)
Our simulation and diagnosis tools can take a list of designs and simulate all of them at once, creating a column called design to keep track. For example:
diagnose_design(designs)
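The effect of varying sample size can also be previewed without DeclareDesign. The sketch below estimates power at two sample sizes by simulation (300 simulations per size is an assumption chosen for speed) and shows power rising with N:

```r
# Power rises with sample size: simulate the design at each n and
# record the rejection rate at the 5% level.
set.seed(13)
power_at <- function(n, effect = 0.25, sims = 300) {
  mean(replicate(sims, {
    Z <- sample(rep(0:1, each = n / 2))   # complete random assignment
    Y <- effect * Z + rnorm(n)            # outcome with constant effect
    t.test(Y[Z == 1], Y[Z == 0])$p.value <= 0.05
  }))
}
powers <- sapply(c(50, 500), power_at)
```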
### Comparing designs
Alternatively, we can compare a pair of designs directly with the compare_designs function. This function is most useful for comparing the differences between a planned design and an implemented design.
compare_designs(planned_design, implemented_design)
Similarly, we can compare two designs on the basis of their diagnoses:
compare_diagnoses(planned_design, implemented_design)
### Library of designs
In our DesignLibrary package, we have created a set of common designs as designers (functions that create designs from just a few parameters), so you can get started quickly.
library(DesignLibrary)
block_cluster_design <- block_cluster_two_arm_designer(N = 1000, N_blocks = 10)
https://bitworking.org/news/2006/11/wikicalccommunity/
# WikiCalcCommunity
So I downloaded wikiCalc and after doing the usual Perl/CPAN/Sacrificial-Chicken Dance I finally got it mostly up and running. One of the reasons I wanted to give it a try was to see if I could get it to include sparklines via my sparklines web service. I tried using the wikiCalc function wkcHTTP() but all the permutations I try keep failing. I went off in search of the wikiCalc mailing list, or wikiCalc bugzilla, or wikiCalc wiki, or wikiCalc IRC channel; any one of the trappings of a typical open source community.
|
2020-07-06 07:06:04
|
https://tsfa.co/questions/Calculus/500033
|
Evaluate integral of 1/(x^2) with respect to x
Apply the basic rules of exponents: move x^2 out of the denominator by raising it to a negative power, so 1/(x^2) = x^(-2).
By the Power Rule, the integral of x^n with respect to x is x^(n+1)/(n+1) for n ≠ -1. Here n = -2, so the integral of x^(-2) with respect to x is x^(-1)/(-1) + C.
This can be rewritten as -1/x + C.
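As a quick numeric sanity check of the antiderivative -1/x, a few lines of plain Python estimate the integral of x^(-2) from 1 to 2 with the trapezoid rule; the result should match [-1/x] evaluated from 1 to 2, which is 1/2:

```python
# Trapezoid-rule estimate of the integral of 1/x^2 from 1 to 2.
# The antiderivative -1/x predicts (-1/2) - (-1/1) = 0.5.
n = 100_000
h = 1.0 / n
f = lambda x: 1.0 / x**2
total = sum((f(1 + i * h) + f(1 + (i + 1) * h)) / 2 * h for i in range(n))
print(total)   # ≈ 0.5
```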
|
2023-01-27 07:02:02
|
https://www.shaalaa.com/question-bank-solutions/an-object-is-executing-uniform-circular-motion-with-an-angular-speed-of-12-radians-per-second-at-t-0-the-object-starts-at-an-angle-0-what-is-the-angular-displacement-of-the-particle-after-4-s-projectile-motion_220169
|
Tamil Nadu Board of Secondary Education, HSC Science Class 11th
# An object is executing uniform circular motion with an angular speed of π/12 radians per second. At t = 0 the object starts at an angle θ = 0. What is the angular displacement of the particle after 4 s? - Physics
An object is executing uniform circular motion with an angular speed of π/12 radians per second. At t = 0 the object starts at an angle θ = 0. What is the angular displacement of the particle after 4 s?
#### Solution
ω = π/12 rad/s
ω = θ/t, so θ = ω × t = (π/12) × 4 = π/3 radian
θ = 180°/3 = 60°
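The solution is simple enough to confirm in a couple of lines of plain Python (θ = ωt, then convert to degrees):

```python
import math

omega = math.pi / 12          # angular speed, rad/s
t = 4.0                       # elapsed time, s

theta = omega * t             # angular displacement, rad (= pi/3)
theta_deg = math.degrees(theta)

print(theta_deg)              # 60 degrees
```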
Concept: Projectile Motion
|
2023-03-28 07:56:20
|
http://umj.imath.kiev.ua/authors/name/?lang=en&author_id=512
|
Buldygin V. V.
Articles: 26
Article (Russian)
Karamata theorem for regularly log-periodic functions
Ukr. Mat. Zh. - 2012. - 64, № 11. - pp. 1443-1463
We generalize the Karamata theorem on the asymptotic behavior of integrals with variable limits to the class of regularly log-periodic functions.
Article (English)
On the convergence of positive increasing functions to infinity
Ukr. Mat. Zh. - 2010. - 62, № 10. - pp. 1299–1308
We study the conditions of convergence to infinity for some classes of functions extending the well-known class of regularly varying (RV) functions, such as, e.g., $O$-regularly varying (ORV) functions or positive increasing (PI) functions.
Obituaries (Ukrainian)
Anatolii Yakovych Dorogovtsev
Ukr. Mat. Zh. - 2004. - 56, № 8. - pp. 1151-1152
Anniversaries (Ukrainian)
Mykhailo Iosypovych Yadrenko (On His 70th Birthday)
Ukr. Mat. Zh. - 2002. - 54, № 4. - pp. 435-438
Article (English)
Properties of a Subclass of Avakumović Functions and Their Generalized Inverses
Ukr. Mat. Zh. - 2002. - 54, № 2. - pp. 149-169
We study properties of a subclass of ORV functions introduced by Avakumović and provide their applications for the strong law of large numbers for renewal processes.
Article (Ukrainian)
On the Asymptotic Properties of Solutions of Linear Stochastic Differential Equations in $R^d$
Ukr. Mat. Zh. - 2000. - 52, № 9. - pp. 1166-1175
We investigate necessary and sufficient conditions for the almost-sure boundedness of normalized solutions of linear stochastic differential equations in $R^d$ and for their almost-sure convergence to zero. We establish an analog of the bounded law of the iterated logarithm.
Article (Russian)
Strong Law of Large Numbers with Operator Normalizations for Martingales and Sums of Orthogonal Random Vectors
Ukr. Mat. Zh. - 2000. - 52, № 8. - pp. 1045-1061
We establish the strong law of large numbers with operator normalizations for vector martingales and sums of orthogonal random vectors. We describe its applications to the investigation of the strong consistency of least-squares estimators in a linear regression and the asymptotic behavior of multidimensional autoregression processes.
Article (Russian)
On asymptotic properties of the empirical correlation matrix of a homogeneous vector-valued Gaussian field
Ukr. Mat. Zh. - 2000. - 52, № 3. - pp. 300-318
We investigate properties of the empirical correlation matrix of a centered stationary Gaussian vector field in various function spaces. We prove that, under the condition of integrability of the square of the spectral density of the field, the normalization effect takes place for a correlogram and integral functional of it.
Brief Communications (Ukrainian)
On the Levy-Baxter theorems for shot-noise fields. III
Ukr. Mat. Zh. - 1999. - 51, № 2. - pp. 251–254
We establish sufficient conditions for singularity of distributions of shot-noise fields with response functions of a certain form.
Article (Ukrainian)
On the Levy-Baxter theorems for shot-noise fields. II
Ukr. Mat. Zh. - 1999. - 51, № 1. - pp. 12–31
We establish sufficient conditions under which shot-noise fields with a response function of a certain form possess the Levy-Baxter property on an increasing parametric set.
Article (Ukrainian)
On the Levy-Baxter theorems for shot-noise fields. I
Ukr. Mat. Zh. - 1998. - 50, № 11. - pp. 1463–1476
We consider shot-noise fields generated by countably additive stochastically continuous homogeneous random measures with independent values on disjoint sets. We establish necessary and sufficient conditions under which the shot-noise fields possess the Levy-Baxter property on fixed and increasing parametric sets.
Article (Ukrainian)
On asymptotic normality of estimates for correlation functions of stationary Gaussian processes in the space of continuous functions
Ukr. Mat. Zh. - 1995. - 47, № 11. - pp. 1485–1497
We establish conditions of the weak convergence of the empirical correlogram of a stationary Gaussian process to some Gaussian process in the space of continuous functions. We prove that such a convergence holds for a broad class of stationary Gaussian processes with square integrable spectral density.
Brief Communications (Ukrainian)
To the memory of Valentin Anatol'evich Zmorovich
Ukr. Mat. Zh. - 1994. - 46, № 8. - pp. 1110–1111
Article (Ukrainian)
Estimates of the supremum distribution for a certain class of random processes
Ukr. Mat. Zh. - 1993. - 45, № 5. - pp. 596–608
Exponential estimates of the “tails” of supremum distributions are obtained for a certain class of pre-Gaussian random processes. The results obtained are applied to the quadratic forms of Gaussian processes and to processes representable as stochastic integrals of processes with independent increments.
Article (Ukrainian)
Comparison theorems and asymptotic behavior of correlation estimators in spaces of continuous functions. II
Ukr. Mat. Zh. - 1991. - 43, № 5. - pp. 579-583
Article (Ukrainian)
Comparison theorems and asymptotic behavior of correlation estimates in spaces of continuous functions. I.
Ukr. Mat. Zh. - 1991. - 43, № 4. - pp. 482-489
Article (Ukrainian)
Asymptotic properties of correlation bounds in functional spaces. I.
Ukr. Mat. Zh. - 1991. - 43, № 2. - pp. 179–187
Article (Ukrainian)
A generalized summation of a random series
Ukr. Mat. Zh. - 1989. - 41, № 12. - pp. 1618–1623
Article (Ukrainian)
Convergence of Fourier series of stationary Gaussian processes
Ukr. Mat. Zh. - 1987. - 39, № 3. - pp. 278–282
Article (Ukrainian)
Cylindrical and Borel σ-algebras
Ukr. Mat. Zh. - 1986. - 38, № 1. - pp. 12–17
Article (Ukrainian)
Oscillation of the realizations of bounded almost-sure Gaussian sequences
Ukr. Mat. Zh. - 1985. - 37, № 1. - pp. 110 – 111
Article (Ukrainian)
Borel measures in nonseparable metric spaces
Ukr. Mat. Zh. - 1983. - 35, № 5. - pp. 552—556
Article (Ukrainian)
Convergence of the decomposition of a Gaussian field
Ukr. Mat. Zh. - 1982. - 34, № 2. - pp. 137-143
Article (Ukrainian)
Sub-Gaussian random variables
Ukr. Mat. Zh. - 1980. - 32, № 6. - pp. 723–730
Article (Ukrainian)
Sub-Gaussian processes and convergence of random series in functional spaces
Ukr. Mat. Zh. - 1977. - 29, № 4. - pp. 443–454
Article (Ukrainian)
On the structure of a σ-algebra of Borel sets and the convergence of certain stochastic series in Banach spaces
Ukr. Mat. Zh. - 1975. - 27, № 4. - pp. 435–442
|
2019-08-21 13:19:19
|
http://mathoverflow.net/questions/99009/on-automorphism-of-some-finite-2-group-of-class-nilpotency-two
|
# On automorphisms of some finite 2-groups of nilpotency class two
Let $G$ be a finite 2-group of nilpotency class two such that $\frac{G}{Z(G)}=\{Z(G), aZ(G), bZ(G), abZ(G)\}\simeq C_{2}\times C_{2}$. Does there exist a non-inner automorphism $\alpha$ of $G$ such that $\alpha(a)\neq a$, $\alpha(b)\neq b$, and $\alpha(ab)\neq ab$? For example, this is true for $D_{8}$, the dihedral group of order 8, and for $Q_{8}$, the generalized quaternion group of order 8.
Could you say something about how you came across this question, because as written it looks like a homework question. – Noah Snyder Jun 7 '12 at 5:12
Derek Holt answered this question 2 hours ago at math.stackexchange: math.stackexchange.com/a/155023/669 – j.p. Jun 7 '12 at 11:19
Noah, it seems like it might not be homework, just yet another case (if you look at the OP's original question) of someone wanting information rather than thinking for themselves – Yemon Choi Jun 7 '12 at 17:15
|
2015-03-03 03:32:01
|
https://questions.examside.com/past-years/jee/question/find-out-the-surface-charge-density-at-the-intersection-of-p-jee-main-physics-units-and-measurements-npoyczfcezohkcka
|
Question 1. JEE Main 2021 (Online) 16th March Evening Shift. MCQ (Single Correct Answer), marking: +4 / -1
Find the surface charge density at the intersection of the plane x = 3 m and the x-axis, in the region of a uniform line charge of 8 nC/m lying along the z-axis in free space.
(A) 0.424 nC m⁻²
(B) 4.0 nC m⁻²
(C) 47.88 C/m
(D) 0.07 nC m⁻²
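A quick numeric check of option A, assuming the intended reading of the question is the magnitude of the electric flux density of an infinite line charge, D = λ/(2πρ), evaluated at ρ = 3 m:

```python
import math

lam = 8e-9                      # line charge density, C/m
rho = 3.0                       # distance from the z-axis, m

D = lam / (2 * math.pi * rho)   # flux density magnitude, C/m^2
D_nC = D * 1e9                  # in nC/m^2

print(round(D_nC, 3))           # 0.424, matching option A
```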
Question 2. JEE Main 2021 (Online) 26th February Evening Shift. MCQ (Single Correct Answer), marking: +4 / -1
Given below are two statements:
Statement I : An electric dipole is placed at the center of a hollow sphere. The flux of the electric field through the sphere is zero but the electric field is not zero anywhere in the sphere.
Statement II : If R is the radius of a solid metallic sphere and Q is the total charge on it, then the electric field at any point on a spherical surface of radius r (< R) is zero, but the electric flux passing through this closed spherical surface of radius r is not zero.
In the light of the above statements, choose the correct answer from the options given below :
(A) Both Statement I and Statement II are true
(B) Statement I is false but Statement II is true
(C) Statement I is true but Statement II is false
(D) Both Statement I and Statement II are false
Question 3. JEE Main 2021 (Online) 26th February Evening Shift. MCQ (Single Correct Answer), marking: +4 / -1
An inclined plane making an angle of 30$$^\circ$$ with the horizontal is placed in a uniform horizontal electric field $$200{N \over C}$$ as shown in the figure. A body of mass 1 kg and charge 5 mC is allowed to slide down from rest at a height of 1 m. If the coefficient of friction is 0.2, find the time taken by the body to reach the bottom.
[g = 9.8 m/s2; $$\sin 30^\circ = {1 \over 2}$$; $$\cos 30^\circ = {{\sqrt 3 } \over 2}$$]
(A) 0.46 s
(B) 0.92 s
(C) 1.3 s
(D) 2.3 s
Question 4. JEE Main 2021 (Online) 26th February Morning Shift. MCQ (Single Correct Answer), marking: +4 / -1
Find the electric field at point P (as shown in figure) on the perpendicular bisector of a uniformly charged thin wire of length L carrying a charge Q. The distance of the point P from the centre of the rod is a = $${{\sqrt 3 } \over 2}L$$.
(A) $${Q \over {4\pi {\varepsilon _0}{L^2}}}$$
(B) $${Q \over {3\pi {\varepsilon _0}{L^2}}}$$
(C) $${Q \over {2\sqrt 3 \pi {\varepsilon _0}{L^2}}}$$
(D) $${{\sqrt 3 Q} \over {4\pi {\varepsilon _0}{L^2}}}$$
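For question 4, the standard formula for the field on the perpendicular bisector of a finite uniformly charged rod, E = Q / (4πε₀ · a · √(a² + L²/4)), reduces to option C when a = (√3/2)L. A small sketch verifies the reduction, working in units with Q = ε₀ = 1 (the formula choice is an assumption, since the page does not show the worked solution):

```python
import math

L = 1.0
a = math.sqrt(3) / 2 * L                      # distance of P from the rod's centre

# bisector-field formula (assumed): E = Q / (4*pi*eps0 * a * sqrt(a^2 + (L/2)^2))
E = 1.0 / (4 * math.pi * a * math.sqrt(a**2 + (L / 2)**2))

# option C: Q / (2*sqrt(3)*pi*eps0*L^2)
option_c = 1.0 / (2 * math.sqrt(3) * math.pi * L**2)

print(abs(E - option_c) < 1e-12)   # True
```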
|
2023-02-02 01:24:29
|
https://anna-neufeld.github.io/treevalues/reference/treeval.plot.html
|
Essentially a wrapper function for rpart.plot() from the rpart.plot package, but additional arguments allow user to add p-values and confidence intervals to plots.
treeval.plot(
tree,
sigma_y = NULL,
nn = TRUE,
printn = TRUE,
inferenceType = 2,
digits = 3,
alpha = 0.05,
permute = FALSE,
...
)
## Arguments
tree: An rpart tree. The tree must have been built with the parameter model=TRUE.
sigma_y: The standard deviation of y, if known. If not provided, the sample standard deviation of y will be used as a conservative estimate.
nn: Boolean. Would you like node numbers to be printed? Nodes are numbered using the same methodology as the rpart package: if node n has children, its children are numbered 2n and 2n+1.
printn: Boolean. Would you like the number of observations to be printed in each node?
inferenceType: An integer specifying which pieces of inference information should be added to the plot. The options currently available are: (0) no confidence intervals, p-values, or "fitted mean" label, just calls rpart.plot(); (1) no confidence intervals, each split labeled with a p-value; (2) each internal node labeled with a confidence interval and each split labeled with a p-value (the default, but it can be a little messy/hard to read); options 3 and 4 print the same information with small formatting tweaks.
digits: Integer. How many digits the text in the plot is rounded to.
alpha: If inferenceType is such that confidence intervals will be printed, (1 - alpha) confidence intervals are printed.
permute: If confidence intervals will be printed, should the conditioning set for the confidence intervals include all permutations of the relevant branch? Setting this to TRUE leads to slightly narrower confidence intervals but makes computations more expensive. See the paper for more details.
...: Additional arguments are passed on to rpart.plot(). Examples include "cex".
## Examples
bls.tree <-rpart::rpart(kcal24h0~hunger+disinhibition+resteating,
model = TRUE, data = blsdata, maxdepth=1)
treeval.plot(bls.tree, inferenceType=0)
treeval.plot(bls.tree, inferenceType=1)
treeval.plot(bls.tree, inferenceType=2)
|
2021-10-27 18:35:10
|
http://www.singaporeolevelmaths.com/binomial-expansion/
|
# Binomial Expansion Teaches how to choose the RIGHT partner (:
I always like to share with my students what each Math topic has to do with their everyday life, particularly their future.
Yesterday, I did Binomial Expansion. It's about finding the RIGHT partner (: It's about mastering the skills of finding the correct match based on a set of factors.
Just like this question, you see that there are 2 brackets. 1st bracket is good for now. 2nd bracket needs us to do some expansion work by using the FORMULAE (It's provided during GCE O Level Exams but some schools don't seem to provide it for their mid year, strange)
Oh yar, there is NO need to remember the Binomial Expansion Formulae!
Then one of the questions that often pops up is "When do we stop our expansion for $(1+\frac{x}{3})^{12}$?"
To answer this question, it depends on what type of partners you are looking for.
We are interested in the coefficients of $x$ & $x^2$ so from the first bracket :
To have $x$,
• 6 will pair up with $x$ from $(1+\frac{x}{3})^{12}$
• $2x$ will pair up with the constant from $(1+\frac{x}{3})^{12}$
Similarly, to have $x^2$,
• 6 will pair up with $x^2$ from $(1+\frac{x}{3})^{12}$
• $2x$ will pair up with the $x$ from $(1+\frac{x}{3})^{12}$
• $-3x^2$ will pair up with the constant from $(1+\frac{x}{3})^{12}$
Now, do you know where you stop the expansion of $(1+\frac{x}{3})^{12}$?
Stop at the $x^2$ term :) aka the 3rd term
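The pairings above can be checked with a few lines of plain Python. The first bracket is not reproduced in this post, so the sketch below assumes it is (6 + 2x - 3x^2), the three terms named in the bullets; (1 + x/3)^12 is truncated at the x^2 term, exactly as described:

```python
from fractions import Fraction
from math import comb

# coefficients of 1, x, x^2 in (1 + x/3)^12, truncated at the x^2 term
b0 = Fraction(1)
b1 = Fraction(comb(12, 1), 3)    # 12/3 = 4
b2 = Fraction(comb(12, 2), 9)    # 66/9 = 22/3

# assumed first bracket: 6 + 2x - 3x^2
a0, a1, a2 = 6, 2, -3

coeff_x  = a0 * b1 + a1 * b0             # 6*4 + 2*1 = 26
coeff_x2 = a0 * b2 + a1 * b1 + a2 * b0   # 44 + 8 - 3 = 49

print(coeff_x, coeff_x2)   # 26 49
```

If the actual first bracket differs, only the a0, a1, a2 values need to change; the truncation of the second bracket at the x^2 term stays the same.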
### Ai Ling Ong
Hi, I'm Ai Ling Ong. I enjoy coaching students who have challenges with understanding and scoring in 'O' Level A-Maths and E-Maths. I develop Math strategies, sometimes ridiculous ideas to help students in understanding abstract concepts the fast and memorable way. I write this blog to share with you the stuff I teach in my class, the common mistakes my students made, the 'way' to think, analyze... If you have found this blog post useful, please share it with your friends. I will really appreciate it! :)
### 2 Responses to Binomial Expansion Teaches how to choose the RIGHT partner (:
1. [...] Binomial Expansion Teaches how to choose the RIGHT partner (: [...]
2. Don Partho says:
How does the 6 pair up with the x^2 in (1+x/3)^12? there is no x^2, only an x, surely if you times them together you won't get a coefficient for the x^2 value, but for the x.... Can you please explain :/
|
2017-06-25 01:45:52
|
http://mathoverflow.net/questions/20311/if-f-k-not-to-0-a-e-does-there-exist-a-subsequence-a-set-of-positive-measu
|
# If $f_k \not\to 0$ a.e., does there exist a subsequence, a set of positive measure, and $c > 0$, on which $\liminf |f_{k_j}| > c$?
Here you are another question in basic measure theory...
Let $f_k$ be a sequence of measurable functions on a measure space $(X,M,\mu)$. Suppose that $f_k$ does not go to 0 a.e. Can I then find a set $A\subseteq X$ with positive measure, a subsequence $f_{k_j}$, and an $\varepsilon > 0$ such that $\liminf_j |f_{k_j}(x)| > \varepsilon$ for each $x\in A$?
Your measure theory textbook surely has a discussion of this. Perhaps called "convergence in measure". The "no" answer below shows that convergence in measure does not imply a.e. convergence. This same example is probably in your textbook. – Gerald Edgar Apr 4 '10 at 18:35
Thanks for your comment. I know that convergence in measure does not imply a.e. convergence (I use Folland's Real Analysis, and there there is a good section on convergence in measure), but my question was more about the inverse, I think. – Nicolò Apr 5 '10 at 15:14
Convergence a.e. does not imply convergence in measure. Consider $f_n$ defined as $f_n(x)=1$ for $x>n$ and $f_n(x)=0$ for $x\le n$. – Digital Gal Jul 20 '10 at 18:54
That's not true. For example, in $(0,1)$ take
$f_1 =1$,
$f_2=1_{(0,1/2)}$, $f_3= 1_{(1/2,1)}$
$f_4=1_{(0,1/3)}$, $f_5= 1_{(1/3,2/3)}$, $f_6= 1_{(2/3,1)}$
and so on. $f_k(x)$ does not go to 0 a.e. (for each $x$, the limit does not exist), but we cannot find any subsequence that satisfies the statement, because $m(\operatorname{supp} f_k)$ goes to zero.
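A small numeric illustration of this "typewriter" counterexample (the helper names are mine): every block of intervals contains exactly one function equal to 1 at any fixed x, so the sequence of values at x returns to 1 in every block, while the measure of the support of each function shrinks to 0:

```python
def block_and_offset(k):
    # Enumerate the intervals as in the answer: block m consists of the
    # m intervals [j/m, (j+1)/m), j = 0, ..., m-1.
    m = 1
    while k >= m:
        k -= m
        m += 1
    return m, k

def f(k, x):
    # k-th function in the sequence: indicator of its interval
    m, j = block_and_offset(k)
    return 1.0 if j / m <= x < (j + 1) / m else 0.0

x = 0.3
# indices k <= 209 cover blocks m = 1..20 completely
hits = [k for k in range(210) if f(k, x) == 1.0]                 # one hit per block
support_lengths = [1 / block_and_offset(k)[0] for k in hits]      # 1, 1/2, ..., 1/20
```

Along any subsequence the functions are eventually supported on sets of arbitrarily small measure, which is why no set of positive measure can satisfy the statement.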
|
2015-04-25 22:09:28
|
http://prideacademy317.com/secular-meaning-wzupv/47f950-latex-in-r-markdown-example
|
# LaTeX in R Markdown example
But the default in RStudio is still to use Sweave, so you first need to change that default.Go to the RStudio (on menu bar) → Preferences and select Sweave on the left. An R Markdown Template for Academic Manuscripts. I often need to write short reports which are not full blown manuscripts, e.g. As the commands are the ones already used in latex syntax, this works as expected in a tex output document, and thus with pdf. consists of text (especially no floating environments), go with Markdown. Markdown's formatting commands are simpler than most other formatting languages, such as LaTeX or HTML, because it has a smaller number of features. 1 & 2 & 3 \\ In raw markdown, you would for example write a cross-reference to a figure like this: \@ref(fig:label), where the label is the name of the code chunk used to make the figure. In the latter case, they are centered and set off from the main text. Posted on February 10, 2016 by steve in R Markdown The frontmatter to an R Markdown document. Let’s do this with the LaTeX sectsty package – you can basically stuff this code anywhere in the preamble, like so: Let’s also reduce the overall margins a touch via the geometry argument in the YAML while we’re at it. Equations can be formatted inline or as displayed formulas. Though such documents don’t need to adhere to a strict template, I still want them to look nice. This book showcases short, practical examples of lesser-known tips and tricks to helps users get the most out of these tools. After reading this book, you will understand how R Markdown documents … R Markdown files have the file extension “.Rmd”. You can download the latest version of RStudio at https://www.rstudio.com/products/rstudio/download/. x: For kable(), x is an R object, which is typically a matrix or data frame. LaTeX is a fantastic way to create and display print-ready scientific documents. Thursday, April 25, 2019 You can align the equations like this. 
Markdown is an easy-to-use plain-text formatting syntax, and R Markdown extends it for authoring HTML, PDF, and MS Word documents with a productive notebook interface that weaves together narrative text and code to produce elegantly formatted output. The key formatting constructs are discussed at http://rmarkdown.rstudio.com/authoring_basics.html: the character `#` at the beginning of a line means the rest of the line is interpreted as a section header, `**bold**` marks bold text in Markdown, and the `center` environment centers text in LaTeX output.

Defining a new LaTeX command in an R Markdown document is quite straightforward: `\newcommand` declarations in the document body are passed through to the LaTeX engine when rendering to PDF. Loading extra LaTeX packages is where things get harder; I couldn't for the life of me include the `dsfont` package (I really want to use the double-stroke letters) until I moved it out of the body and into the YAML header. I've accomplished this for years by writing directly in LaTeX, but I want to align my process with my recent transition to composing most docs in RStudio/Rmd.

To create an R Markdown document that uses the Distill format, first install the distill package from CRAN. Using Distill for R Markdown requires Pandoc v2.0 or higher. Those new to R Markdown will appreciate short, practical examples that address the most common issues users encounter.

# Modifying R Markdown's LaTeX styles

How to make a simple LaTeX template for R Markdown from scratch (4 minute read, published 25 Apr 2019). If the default styles don't suit you, you can write your own `template.tex`; don't forget to include it in your `.Rmd` YAML header. Once the template compiles, the next thing to fix is usually typographic, for example section titles rendering larger than the document title. This repository holds my working template for such purposes.
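The `\newcommand` fragments scattered through this section (defining `\Xbar` and `\sumn`) reconstruct to a pattern like the following, which can be placed directly in the body of a PDF-targeted R Markdown document:

```latex
%% Comment -- define some macros
\newcommand{\Xbar}{\bar{X}}
\newcommand{\sumn}{\sum_{i=1}^{n}}

%% Notice how \Xbar and \sumn make later equations much simpler:
$$ \Xbar = \frac{1}{n} \sumn X_i $$
```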
R Markdown files have the file extension `.Rmd`. The big advantage of R Markdown over plain LaTeX is that it cooperates with R: like LaTeX with Sweave, code chunks can be included, and a chunk option lets you display R code without evaluating it. Be cautious about following formatting advice for other types of Markdown when working on R Markdown, since the dialects differ in details. A few of those details:

- **Tables.** Pipes delimit columns; note that if you need a void column you must still add a space between the pipes. For computed tables, use `knitr::kable(x, format = ...)`, where `x` is typically a matrix or data frame and `format` is a character string; `kables()` accepts a list in which each element is a value returned by `kable()`. A table summarizing the default print engine utilized for {gtsummary} tables across the various R Markdown output formats works the same way, with RTF support planned for the future.
- **Superscripts.** In LaTeX equations, a single caret character `^` indicates a superscript; when the superscript has more than one character, braces must be used (with a single character, braces are not actually needed). R Markdown's own prose syntax, by contrast, delimits superscripts with a pair of carets.
- **Math.** Inline mathematical material is set off by single dollar-sign characters, so expressions like $\sin$ or the summation $\sum_{i=1}^n X_i$ appear smoothly in a line of text; displayed formulas are set off from the main text and centered. The whole Greek alphabet is implemented in LaTeX, with upper- and lower-case versions available for some letters, and `\text{}` sets words in a different font from mathematical variables. Use the tilde character for a forced space. The `\left` and `\right` operators create parentheses, brackets, and braces which size themselves automatically to contain large expressions, for example around a square root. Matrices are presented in the `array` environment, where a column-format code such as `{rrr}` indicates that each column is right-justified.
- **Comments and macros.** In proving a result, it is often useful to include comments; `%%` starts a LaTeX comment, and `\newcommand` definitions keep repeated expressions short.

Even for short reports that don't need floating environments, these touches keep the output looking nice. If you are using RStudio, it will render a LaTeX PDF for you automatically; you can download the latest version at https://www.rstudio.com/products/rstudio/download/, and v1.2.718 or higher comes bundled with Pandoc v2.0. (Adding a page break was already possible with rmarkdown when the output is `pdf_document()` or `latex_document()`, without any restriction on the Pandoc version.) A LaTeX cheat sheet with a fuller symbol listing is available at the course website. I looked briefly for an R Markdown major mode or a markdown-latex-r polymode but didn't get very far; more on templates in a future post.
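The `align` fragments in this section — solving 3 + x = 4 with annotated steps — reconstruct to a worked display like this:

```latex
\begin{align}
3 + x &= 4  &&\text{(Solve for } x\text{.)} \\
x &= 1      &&\text{(Subtract 3 from both sides, yielding the solution.)}
\end{align}
```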
# Molecules To Grams Calculator
Chemical reactions take place between individual atoms and molecules, which are far too small to count directly, so chemists count them by weighing. One mole is defined as the number of atoms in 12 grams of carbon-12; that number, Avogadro's number, is 6.022 × 10^23 particles per mole. The mass of one mole of a substance — its molar mass — is numerically equal to its atomic or molecular mass expressed in grams per mole. One mole of oxygen gas (O2), for example, weighs about 31.998 g and contains about 6.022 × 10^23 molecules; every H2O molecule contains two hydrogen atoms and one oxygen atom, so one mole of water contains two moles of H atoms and one mole of O atoms.

Every conversion between molecules and grams therefore passes through moles:

- molecules ÷ 6.022 × 10^23 (molecules/mol) = moles
- moles × molar mass (g/mol) = grams

and in reverse: grams ÷ molar mass = moles, then moles × 6.022 × 10^23 = molecules.

The same arithmetic underlies practical molecular biology. Using Avogadro's number, the number of molecules of a DNA template per gram can be calculated as mol/g × molecules/mol = molecules/g; multiplying by 1 × 10^-9 converts this to copies per nanogram, and multiplying by the amount of template (in ng) estimates the number of copies in the sample. It also underlies solution work: 36.5 grams of HCl per liter gives a one-molar solution, and the same mole bridge lets a calculator find the mass of solute you need to add to reach any desired molar concentration.
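The molecules → moles → grams chain described above is mechanical, so it is easy to script. A minimal sketch (the function name is my own, not from any particular calculator):

```python
AVOGADRO = 6.022e23  # molecules per mole

def molecules_to_grams(n_molecules: float, molar_mass: float) -> float:
    """Convert a count of molecules to a mass in grams.

    Step 1: molecules -> moles (divide by Avogadro's number).
    Step 2: moles -> grams (multiply by molar mass in g/mol).
    """
    moles = n_molecules / AVOGADRO
    return moles * molar_mass

# One mole of water molecules (molar mass ~18.02 g/mol) weighs ~18.02 g:
print(molecules_to_grams(6.022e23, 18.02))
```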
## Worked examples

Molar masses come straight from the periodic table: chlorine is 35.453 grams per mole of atoms, sodium 22.99 g/mol, so the molar mass of NaCl is 58.44 g/mol. O2 is 32 g/mol because each molecule contains two oxygen atoms. A single water molecule has a mass of 18.0153 atomic mass units, so 1 mole of water molecules weighs 18.0153 g.

**Grams → molecules.** 90 g of glucose (molar mass 180 g/mol) is 90 ÷ 180 = 0.5 mol, or 0.5 × 6.022 × 10^23 ≈ 3.01 × 10^23 molecules.

**Molecules → grams.** 1.50 × 10^25 molecules of carbon monoxide (28.01 g/mol) is 1.50 × 10^25 ÷ 6.022 × 10^23 ≈ 24.9 mol, which is 24.9 × 28.01 ≈ 698 g.

**Mole ratios inside a molecule.** A vat of hydrogen peroxide (H2O2) contains 455 grams of oxygen atoms. Since 455 ÷ 16.00 = 28.4 mol of O atoms and each H2O2 molecule contains two of them, the vat holds 14.2 mol of H2O2 — about 8.56 × 10^24 molecules.

Mass can be used to "count" atoms and molecules this way because a mole is the number of atoms present in one gram atomic weight of any atom, or in one gram molecular weight of any molecule. The same logic extends to equivalents in acid–base chemistry: 49.0 g of H2SO4 (half of its 98.1 g/mol molar mass, since each molecule supplies two protons) and 36.5 g of HCl are chemically equivalent, because each will react with the same amount of NaOH (40.0 g).
## Formulas, hydrates, and unknowns

Subscripts in a formula carry directly into mole ratios. There is 1 mol of C atoms per mole of CO2 molecules, because there is only 1 C atom in each CO2 molecule; in each molecule of ethanol there are two carbon atoms, so a mole of ethanol contains two moles of carbon. Likewise, 1 molecule of NH3 has 3 atoms of hydrogen in it, which is what lets you calculate the number of grams of NH3 produced by a reaction from the moles of hydrogen consumed.

Hydrates are named by following the anhydrous salt with a prefix indicating the number of water molecules: Na2S2O3·5H2O is sodium thiosulfate pentahydrate, the formula showing a coefficient stating the number of water molecules followed by the formula for water. To analyze a hydrate experimentally, divide the grams of anhydrate by its molar mass to find its moles, then calculate how many moles of H2O were dehydrated from the salt and take the ratio.

The working rules of thumb:

- moles to something (grams, liters, particles): **multiply**
- something (grams, liters, particles) to moles: **divide**

There are 6.022 × 10^23 particles in a mole, so multiplying the number of moles by this number gives the number of particles. Molar mass also identifies unknowns: given a sample of xenon fluoride containing molecules of a single type XeFn, where n is some whole number, dividing its measured mass by its moles (molecules ÷ 6.022 × 10^23) gives the molar mass, from which n follows. For subatomic bookkeeping, one mole of protons has a mass of 1.00728 g and one mole of neutrons a mass of 1.00866 g.

A milligram (mg) is a unit of weight in the metric system; 1000 mg = 1 g.
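Solution preparation uses the same mole bridge: grams needed = molarity × volume × molar mass. A sketch reproducing the 200 mL of 2 M NaCl example from the text (≈23.2 g, using the text's rounded 58 g/mol):

```python
def grams_needed(molarity: float, volume_ml: float, molar_mass: float) -> float:
    """Mass of solute (g) to dissolve for a target molar concentration.

    moles needed = molarity (mol/L) * volume (L); grams = moles * molar mass.
    """
    moles = molarity * (volume_ml / 1000.0)
    return moles * molar_mass

# 200 mL of 2 M NaCl at 58 g/mol -> ~23.2 g
print(grams_needed(2.0, 200.0, 58.0))
```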
## Molarity and solution preparation

The mass m (i.e. weight) of solute in grams that must be dissolved in volume V of solution to make the desired molar concentration C is m = C × V × molar mass. To prepare a 200-millilitre solution of salt water with a molarity of 2 mol/L: (2 mol/L × 0.200 L) × 58 g/mol ≈ 23.2 g of NaCl, dissolved and brought to 200 mL. (Water dissolves ionic solutes so well because the central oxygen atom is more electronegative than the two hydrogen atoms, so the electrons shared in the two bonds spend more time around the oxygen atom than around the hydrogen atoms, making the molecule polar.)

## Gases at STP

For gases, moles convert to volume: at standard temperature and pressure one mole of an ideal gas occupies 22.4 L, so a calculator can convert moles to liters of a gas at STP and liters of a gas back to moles. The ideal-gas approximation holds because the attractive forces between the molecules of a gas become significant only at very low temperatures; comparison with the ideal gas law leads to an expression for temperature sometimes referred to as the kinetic temperature.

## DNA copy number

The same arithmetic converts micrograms of DNA to picomoles and copy numbers for a template of known length N: mol/g × molecules/mol = molecules/g, then scale by 10^-9 for copies per nanogram.

## Unit conversions

- 1 gram = 1000 milligrams = 1,000,000 micrograms (mcg); so to find the micrograms in a gram and a half, multiply 1.5 by 1,000,000.
- 1 kilogram = 1000 grams, i.e. m(g) = m(kg) × 1000, and 1 g = 0.001 kg.
- A gram is the approximate weight of a cubic centimeter of water; 1 g ≈ 0.03527 oz, and an ounce is a unit of weight equal to 1/16 of a pound, about 28.35 g.
On this page, we use the molecular weight to convert between the macroscopic scale (grams of a substance) and the microscopic scale (number of molecules of that substance). something (grams, liters, particles) to moles DIVIDE. Valium -- find the molar mass. I) Calculate the molecular mass of Ba3 (AS04)2. Calculate the mass in grams for 4 molecules N2O5. !!In!a!gas,!all!atoms!or!molecules! exist!as!individual!particles,!and!the!individual!particles!mix!easily!and!completely!. Chemical Mole to Gram Calculator Easily convert between grams and moles of any substance. 9 moles of chromium? 618. 9×10^25 O3 molecules. Roland www. The mole is a frequently used unit for the amount of atoms, molecules, ions, etc. ConvertUnits. 75 1024 atoms Al __ 1 mol Al , 6. 44 grams (2. 70 X 10^22 / 6. • Molar mass has units of grams per mole (g/mol). 250 moles of iron atoms. We deal with enough to see, which means trillions and trillions of them – measurable numbers or fractions of moles. 7098 × 10 21 molecules polluting that water! [ Calculators ] [ Converters ] [ Periodic Table ]. 0250 mol of NaF. A gram is a unit of weight equal to 1/1000 th of a kilogram. These ratios are useful, since they allow us to convert from quantities in grams to quantities in kilograms and vice versa. 022 140 76 × 10 23 molecules, whose total mass is about 18. 23 2 2 22 2 6. See alsoAvogadro’s law. Calculate the mass, in grams. 76 grams of KClO 3? 11. Chemical reactions typically take place between molecules of varying weights, meaning measurements of mass (such as grams) can be misleading when compared the reactions of individual molecules. How many molecules of gas were produced? 7. C Convert from moles to molecules by multiplying the number of moles by Avogadro’s number. 2 grams of O 2 to burn 10. Conventional notation is used, i. X ÷ ÷ ÷ Grams. person_outlineTimurschedule 2017-04-24 16:36:31. 1 kilograms of water. 
There are two simple assumptions made here- that the vapor forms a classical ideal gas (usually true) and that when a molecule from the vapor hits the liquid the chance it sticks is 100%, which I guess is pretty. Calculate the percent change in weight of the dialysis bag. Calculate the mass in grams for 4 molecules N2O5. However, we can "count" atoms or molecules by weighing large amounts of them on a balance. Here's an example calculation using 1,500 calories. 9×10^25 O3 molecules. The units may be electrons, atoms, ions, or molecules, depending on the nature of the substance and the character of the reaction (if any). Roland www. 7 grams of Fe is reacted? 14. 00 mol of atoms. Mass can be used to 'count' atoms and molecules because a mole is the number of atoms present in one gram atomic weight* of any atom or in one gram molecular weight of any molecule. Don't give a number, give a definition in words. But usually we talk about mols in terms of atoms, so since atoms are really small, it doesn’t take much of a substance to equal one mol of atoms or molecules. The chlorophyll molecules could fluoresce, re-emitting the light. How to calculate moles - moles to grams converter. A calorie is a unit of how much energy is in a given amount of food, also called a kcal. 5 × 109 H2O molecules Step 1. Gram (g) is a unit of Weight used in Metric system. One mole is equal to the number of atoms in 12 grams of carbon, which has an atomic weight of 12. 020 L of nitrogen gas b. How many molecules of gas were produced? 7. Molar mass Ba = 137. • Molar mass has units of grams per mole (g/mol). To find the moles formed, you need to multiply 19. (ii) Number of moles of hydrogen atoms. Using Avogadro's number, 6. There are no shortcuts to go from the first step to the last, that's why I made that flowchart above to show how to go from one quantity to another. mass / volume = concentration = molarity * molar mass. Lauralee Sherwood in her book "Human Physiology. 
Calculate the number of molecules in 32 grams of oxygen gas and 14 grams of nitrogen. Consider the combustion of carbon monoxide (CO) in oxygen gas 2CO(g) + O2(g) __ 2CO2(g) Starting with 3. We deal with enough to see, which means trillions and trillions of them – measurable numbers or fractions of moles. Gram (g) is a unit of Weight used in Metric system. The units may be electrons, atoms, ions, or molecules, depending on the nature of the substance and the character of the reaction (if any). 0221415 x 10 23 for Avogadro's number. 3505 moles of O 2 from the molecular weight of oxygen. 500 mol MgO × (40. Next calculate the molecular masses that you need. 000 000 g/mol = 2. Avogadro's number may be applied to atoms, ions, molecules, compounds, elephants, desks, or any object. 253 mol x 64. com To convert from molecules to grams, it is necessary to first convert the number of molecules of a substance by dividing by Avogadro's number to find the number of moles, and then multiply the number of moles by the molar mass of this substance. 074 moles 6) How many grams are in 11. Calculate the mass in grams. Solution: 1. Consider, for example, the quantity 4. 17 grams NaCl 3) 2. 8 grams of I 2 according to the following equation. 0221415 x 10 23 for Avogadro's number. 06 + 4(16) = 98. © 2007 Joseph T. Kilograms to Grams conversion table. 5 grams of carbs per day. 8) If you were given 53. Consider, for example, the quantity 4. The chlorophyll molecules could fluoresce, re-emitting the light. The best part about mols is that 1 mol of atoms is equal to the atomic mass in grams and 1 mol of molecules is equal to the molecular mass in grams. Is the atomic mass for nickel in your periodic table correct? C. The molar mass of molecules of these elements is the molar mass of the atoms multiplied by the number of atoms in each molecule: M(H 2) = 2 × 1. 86x10^19 molecules PH3. (atomic mass units) for single molecules or grams for laboratory quantities. 
How many grams of KClO 3 are needed to make 30. ! 100!! For!example,!if!I!have!9. How many molecules are there in 5. An ounce is a unit of weight equal to 1/16 th of a pound or about 28. 022 \times 10^{23}} for Avogadro's number. ››Definition: Molecule. This series of articles tackles various aspects of the mole. (The average American likely gets about half that amount. 125 mole (D) 0. 2 grams of O 2 to burn 10. 37 mole of BBr3. Hydrogen weighs 1. 46 x 1022 molecules 0° 2. Unit 2 Quiz--Moles, Mass, and Molecules: Multiple Choice (Choose the best answer. 2 grams of P 4 O 10 from P? (Molar Mass P 4 O 10 = 284) (A) 0. Calculate the number of molecules in a sample of carbon dioxide with a mass of 168. a pollution of 1 gram of benzene in a certain amount of water converts to N A /78. - 44 g C3H8 1 mol Example 8: What is the mass of 10. These ratios are useful, since they allow us to convert from quantities in grams to quantities in kilograms and vice versa. 022 x 10 52. Convert between moles and grams NOTE: Because your browser does NOT support JavaScript -- probably because JavaScript is disabled in an Options or Preferences dialog -- the calculators below won't work. You can use Avogadro's number in conjunction with atomic mass to convert a number of atoms or molecules into the number of grams. The bag’s initial weight was 15g, and after 15 minutes it became 17g. 1 kilograms of water. 1 oz net wt. A mole is a large number of molecules that is often used as a counting base in chemistry. The molar mass of molecules of these elements is the molar mass of the atoms multiplied by the number of atoms in each molecule: M(H 2) = 2 × 1. 022 x 10 23 atoms (or molecules) = 1 mole (remember Avogadro !!! ). 8 grams of fat. 5) How many grams are there in 7. 022 23 particles in a mole), so multiplying this number by the number of moles will give you the number of particles. You need to convert molecules --> Moles --> grams. In each molecule of ethanol there are two carbon atoms. 
8) If you were given 53. Step 2: is to. 6 moles of methane. This online calculator converts moles to liters of a gas at STP (standard temperature and pressure) and liters of a gas to moles. 008665 grams. one way to do it, assuming you know the molecular weight and how many copies of the molecules you have is as follows: 1. This online unit converter will help you to convert the number moles to the number of grams of the atom based on the weight of the given chemical equation / formula. 443, how many grams is 5 mole NaCl? grams = 58. Visit our food calculations forum for more details. 1 grams 12) H3PO4 98. 253 mol x 64. (a) Calculate the mass of 0. This is the number of molecules in 1 mole of a chemical compound. The mass m in ounces (oz) is equal to the mass m in grams (g) divided by 28. 2) The reason it works then for molecules - that is, the reason why the same number of molecules of each compound will give its molecular or formula mass in grams - is that molecules are just made up of atoms, and however many atoms you need to give the molecular weight of the compound in grams will be the same number of atoms you needed to. How to calculate moles - moles to grams converter. 23 2 2 22 2 6. The mass and molarity of chemical compounds can be calculated based on the molar mass of the compound. com To convert from molecules to grams, it is necessary to first convert the number of molecules of a substance by dividing by Avogadro's number to find the number of moles, and then multiply the number of moles by the molar mass of this substance. So 16 +16 = 32amu for 1 molecule of O2. per liter (instead of 98. 02214076 × 10 23. mcg = g * 1000000. Interestingly, as this module was first written in June 2001 the number of elements was 115; however, scientists at the. You are asked to calculate the number of boron atoms in a given sample. With the help of the converter it is much easier to do weighting. find the daily dosage (grams) of Valium. 
(b) Calculate the number of molecules of glucose present in its 90 grams (molecular mass of glucose is 180 u) (c) Calculate number of moles of water in 2 grams of water. 626×10^23 molecules of HCl. Use the conversion factor MOLAR MASS, in order to convert moles into grams: Molecular Weight of SO 2: 64. 57 mol HIO 3 to calculate the mass. 4) Use factor labeling to convert 5. 6 moles of methane. 9 grams of Al2O3 4) 1. Long Answer Questions. The value is the same, but we “flip” the fraction so that the gram units cancel. 1 gram (g) is equal to 0. Calculate the number of MOLES of sugar you ate while chewing the gum. 204 moles CO 2!!! Similarly:!!! 25. hydrogen (H 2), sulfur (S 8), chlorine (Cl 2). Example 7: Calculate the number of propane, C3H8 molecules, in 74. This program determines both empirical and molecular formulas. The Avogadro's number is a dimensionless quantity and is equivalent to the Avogadro constant, which is 6. Molar mass of NaCl is 58. 5 grams of HCl, are chemically equivalent and are known as equivalent weights of these substances because each will react with the same amount of NaOH (40. Now you need to find the moles to complete the problem. Using Avogadro's number, 6. Answer to Calculate the number of molecules present in 25 grams of BeF2. 70 x 1023. Conventional notation is used, i. The calculator will display the total number of atoms in those moles. 125 mole (D) 0. Notice that we first converted pounds to grams, then grams to moles, moles to molecules, and finally molecules to atoms. The same amount in grams will likely not contain the same number of molecules of each substance. Before going into the details of this number and how it is derived, we need to know the definition of mole. The second converts moles of Cl 2 to the number of molecules. Definition of cubic inches of water provided by WikiPedia The cubic inch is a unit of measurement for volume in the Imperial units and United States customary units systems. If your pot contains 2. 
B Convert from mass to moles by dividing the mass given by the compound’s molar mass. I think that the calculator is very useful for many people. 96 g of sulfur? 3. Probably most users need wider variety of materials, including oil, food and other products. atoms and/or molecules. 03 × 1024 molecules of H2S? 285 g Calculate the number of molecules of methane, CH4, if you begin with 2. 75 mol H 2 (g) and 0. 28 1023 g d. How many molecules are there in 5. This free density calculator determines any of the three variables in the density equation given the other two. 2044 10 atomsN = molN d) The mass in grams of a mole of atoms of any element (its molar mass) is numerically. If a phrase such as "find the number of grams" is used, the unit grams indicates that the mass should be found. Grams to Molecules and Molecules to Grams Conversion, Chemistry Practice Problems, Stoichiometry - Duration: 10:40. Before going into the details of this number and how it is derived, we need to know the definition of mole. 022 x 10^23. 1mole = molar mass in grams. To calculate or find the grams to moles or moles to grams the molar mass of each element will be used to calculate. Answer (i) 1 mole of C2H6 contains 2 moles of carbon atoms. 00 mol of atoms. 300 grams of H 2 O are produced. entities (atoms, molecules, or formula units) of the substance. How many grams of H2S are found in a sample with 5. 07 g/mol Mass of 0. US cup of whole buttermilk = 245 grams = 8. A previous tutorial shows how to calculate the molecular weight of a substance from the atomic weights given. For molecules, you add together the atomic masses of all the atoms in the compound to get the number of grams per mole. Don't give a number, give a definition in words. 007 97(7) × 1. 02 x 10 23 is known as Avogadro's Number. There is a simple relation between these two:, where - mass of the substance in grams. Here is a simple online molecules to moles calculator to convert molecules into moles. 
However, if all the chlorophyll molecules fluoresce, then the energy absorbed by the chlorophyll is lost and cannot be used to drive photosynthesis. 5) How many grams are there in 7. Let's do a quick example to help explain how to convert from moles to grams, or grams to moles. Solve: To calculate the number of grams of C, we first use the molar mass of CO2, 1 mol CO2 = 44. 919 Liters). 5 grams of HCl per liter. The 92 naturally occurring elements have unique properties, and various combinations of them create molecules, which combine to form organelles, cells, tissues, organ system, and organisms. 000 000 g/mol = 2. 1 Calculate the mass in grams of 2. (Answer: 0. 022 x 10 23 atoms (or molecules) = 1 mole (remember Avogadro !!! ). A Use the molecular formula of the compound to calculate its molecular mass in grams per mole. you figure out h. you calculate the molecular weight of the molecule. The molar mass of molecules of these elements is the molar mass of the atoms multiplied by the number of atoms in each molecule: M(H 2) = 2 × 1. Cytographica. 70 x 1023. Input the molecular formula and the weight is calculated. To calculate your answer, simply divide your microgram figure by 1,000,000. 022 \times 10^{23}} for Avogadro's number. Challenge Calculate the number of oxygen atoms in 5. © 2007 Joseph T. For sugar, 1 cup is equal to around 200g. (ii) Number of moles of hydrogen atoms. How can we calculate absolute A solution is formed when one substance is dissolved into another. Want to know how many moles a gram has or need grams to moles calculator for general calculation purposes? Or even want to calculate moles from grams of a substance? On the off chance that truly, at that point you are in the ideal spot.
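The grams → moles → molecules chain described above (divide grams by molar mass to get moles, multiply moles by Avogadro's number to get particles, and the reverse for the other direction) can be sketched in a few lines of Python. Avogadro's number is the defined SI value; the molar masses in the example are the ones quoted in the text:

```python
AVOGADRO = 6.02214076e23  # particles per mole (defined SI value)

def grams_to_molecules(grams, molar_mass):
    """grams -> moles (divide by molar mass) -> molecules (multiply by N_A)."""
    moles = grams / molar_mass
    return moles * AVOGADRO

def molecules_to_grams(n_molecules, molar_mass):
    """molecules -> moles (divide by N_A) -> grams (multiply by molar mass)."""
    return (n_molecules / AVOGADRO) * molar_mass

# 5 mol of NaCl (58.443 g/mol) weighs 5 * 58.443 = 292.215 g, as in the text
print(molecules_to_grams(5 * AVOGADRO, 58.443))
```

The two functions are inverses of each other, so a round trip grams → molecules → grams returns the starting mass, which makes a quick sanity check easy.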
|
2020-10-21 05:20:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5733066201210022, "perplexity": 1269.7397918437493}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107875980.5/warc/CC-MAIN-20201021035155-20201021065155-00614.warc.gz"}
|
https://bibli.cirm-math.fr/listRecord.htm?list=link&xRecord=19277320157910955029
|
# Documents Floris, Enrica | records found: 2
## Invariance of plurigenera for foliations on surfaces Floris, Enrica | CIRM H
Multi angle
Research talks;Algebraic and Complex Geometry
Let $X$ be a smooth algebraic surface. A foliation $F$ on $X$ is, roughly speaking, a subline bundle $T_F$ of the tangent bundle of $X$. The dual of $T_F$ is called the canonical bundle of the foliation $K_F$. In the last few years birational methods have been successfully used in order to study foliations. More precisely, geometric properties of the foliation are translated into properties of the canonical bundle of the foliation. One of the most important invariants describing the properties of a line bundle $L$ is its Kodaira dimension $\kappa(L)$, which measures the growth of the global sections of $L$ and its tensor powers. The Kodaira dimension of a foliation $F$ is defined as the Kodaira dimension of its canonical bundle $\kappa(K_F)$. In their fundamental works, Brunella and McQuillan give a classification of foliations on surfaces on the model of the Enriques-Kodaira classification of surfaces. The next step is the study of the behaviour of families of foliations. Brunella proves that, for a family of foliations $(X_t, F_t)$ of dimension one on surfaces, satisfying certain hypotheses of regularity, the Kodaira dimension of the foliation does not depend on $t$. By analogy with Siu's Invariance of Plurigenera, it is natural to ask whether for a family of foliations $(X_t, F_t)$ the dimensions of global sections of the canonical bundle and its powers depend on $t$. In this talk we will discuss to which extent an Invariance of Plurigenera for foliations is true and under which hypotheses on the family of foliations it holds.
## On the B-Semiampleness Conjecture Floris, Enrica | CIRM H
Multi angle
Research talks
An lc-trivial fibration $f : (X, B) \to Y$ is a fibration such that the log-canonical divisor of the pair $(X, B)$ is trivial along the fibres of $f$. As in the case of the canonical bundle formula for elliptic fibrations, the log-canonical divisor can be written as the sum of the pullback of three divisors: the canonical divisor of $Y$; a divisor, called the discriminant, which contains information on the singular fibres; and a divisor, called the moduli part, that contains information on the variation in moduli of the fibres. The moduli part is conjectured to be semiample. Ambro proved the conjecture when the base $Y$ is a curve. In this talk we will explain how to prove that the restriction of the moduli part to a hypersurface is semiample assuming the conjecture in lower dimension. This is a joint work with Vladimir Lazić.
|
2019-09-16 03:15:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9106825590133667, "perplexity": 417.44959552765584}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572471.35/warc/CC-MAIN-20190916015552-20190916041552-00325.warc.gz"}
|
http://mathhelpforum.com/math-topics/10441-intersection.html
|
Math Help - Intersection
1. Intersection
Two cars are approaching an intersection. One is 2 miles south of the intersection and is moving at a constant speed of 30 mph. At the same time, the other car is 3 miles east of the intersection and is moving at a constant speed of 40 mph. Express the distance d between the cars as a function of time t.
2. Hello, symmetry!
Two cars are approaching an intersection.
One is 2 miles south of the intersection and is moving at 30 mph.
The other car is 3 miles east of the intersection and is moving at 40 mph.
Express the distance $d$ between the cars as a function of time $t$.
Code:
C 3 - 40t B 40t Q
* - - - - - - - - * - - - - *
| *
| *
2 - 30t | * d
| *
| *
A*
|
30t |
|
P*
The first car starts at $P$ and drives north to the intersection $C$.
In $t$ hours, it has gone $30t$ miles to point $A$.
. . Hence, $AC \:=\: 2 - 30t$
The other car starts at $Q$ and drives west to $C$.
In $t$ hours, it has gone $40t$ miles to point $B$.
. . Hence, $BC \:=\:3 - 40t$
Using Pythagoras, the distance between them is: . $d^2\;=\;(2-30t)^2 + (3-40t)^2$
Simplifying, we get: . $d^2\;=\;2500t^2 - 360t + 13$
Therefore: . $\boxed{d \;=\;\sqrt{2500t^2 - 360t + 13}}$
3. ok
Like always, your replies are greatly appreciated as I study for my test.
4. Originally Posted by symmetry
Two cars are approaching an intersection. One is 2 miles south of the intersection and is moving at a constant speed of 30 mph. At the same time, the other car is 3 miles east of the intersection and is moving at a constant speed of 40 mph. Express the distance d between the cars as a function of time t.
We have D = r*t
The slow car moving at 30 mph covers 30t miles, so it is 2 - 30t miles from the intersection.
The fast car moving at 40 mph covers 40t miles, so it is 3 - 40t miles from the intersection.
Now, this problem is practically asking you to use the Pythagorean theorem.
a^2 + b^2 = c^2
d = sqrt((2 - 30t)^2 + (3 - 40t)^2)
= sqrt(4 - 120t + 900t^2 + 9 - 240t + 1600t^2)
= sqrt(2500t^2 - 360t + 13)
And thus,
d = sqrt(2500t^2 - 360t + 13), which is your final answer.
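The closed form $d = \sqrt{2500t^2 - 360t + 13}$ derived in this thread is easy to sanity-check numerically: compute the cars' remaining distances from the intersection and compare the coordinate-based distance with the simplified radical. The test times below are arbitrary choices:

```python
import math

def distance(t):
    """Distance between the cars t hours in: car 1 is 2 - 30t mi from C
    (north-south leg), car 2 is 3 - 40t mi from C (east-west leg)."""
    return math.hypot(2 - 30 * t, 3 - 40 * t)

def distance_formula(t):
    """The simplified form d = sqrt(2500t^2 - 360t + 13)."""
    return math.sqrt(2500 * t**2 - 360 * t + 13)

for t in (0.0, 0.03, 1 / 15):
    assert abs(distance(t) - distance_formula(t)) < 1e-12

print(distance(0.0))  # sqrt(13), since the cars start 2 and 3 miles from C
```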
|
2015-09-03 01:06:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7222993969917297, "perplexity": 558.413481060488}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645294817.54/warc/CC-MAIN-20150827031454-00040-ip-10-171-96-226.ec2.internal.warc.gz"}
|
https://ec.gateoverflow.in/447/gate-ece-2014-set-2-question-27
|
The real part of an analytic function $f(z)$ where $z = x + jy$ is given by $e^{-y} \cos(x)$. The imaginary part of $f(z)$ is
1. $e^{y} \cos( x )$
2. $e^{-y} \sin( x )$
3. $-e^{y} \sin ( x )$
4. $-e^{-y} \sin (x )$
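One way to confirm that option 2 is consistent: the function $f(z) = e^{jz} = e^{-y}(\cos x + j\sin x)$ has the given real part $e^{-y}\cos(x)$, and its imaginary part is $e^{-y}\sin(x)$. A small numerical sketch (the test point is an arbitrary choice):

```python
import cmath
import math

def f(z):
    # Candidate analytic function whose real part is e^{-y} cos(x)
    return cmath.exp(1j * z)

z = complex(0.7, -1.3)  # arbitrary point x + jy
x, y = z.real, z.imag

assert abs(f(z).real - math.exp(-y) * math.cos(x)) < 1e-12
assert abs(f(z).imag - math.exp(-y) * math.sin(x)) < 1e-12
print("imaginary part matches e^{-y} sin(x)")
```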
|
2022-09-29 23:11:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9696543216705322, "perplexity": 69.98112607887656}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00097.warc.gz"}
|
https://nips.cc/Conferences/2021/ScheduleMultitrack?event=28840
|
Poster
Stochastic $L^\natural$-convex Function Minimization
Haixiang Zhang · Zeyu Zheng · Javad Lavaei
Thu Dec 09 08:30 AM -- 10:00 AM (PST) @
We study an extension of the stochastic submodular minimization problem, namely, the stochastic $L^\natural$-convex minimization problem. We develop the first polynomial-time algorithms that return a near-optimal solution with high probability. We design a novel truncation operation to further reduce the computational complexity of the proposed algorithms. When applied to a stochastic submodular function, the computational complexity of the proposed algorithms is lower than that of the existing stochastic submodular minimization algorithms. In addition, we provide a strongly polynomial approximate algorithm. The algorithm execution also does not require any prior knowledge about the objective function except the $L^\natural$-convexity. A lower bound on the computational complexity that is required to achieve a high probability error bound is also derived. Numerical experiments are implemented to demonstrate the efficiency of our theoretical findings.
|
2023-02-03 01:20:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7401526570320129, "perplexity": 304.2111922086038}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500041.2/warc/CC-MAIN-20230202232251-20230203022251-00400.warc.gz"}
|
http://myriverside.sd43.bc.ca/benjaminp2016/2019/05/25/week-15-precalculus-11/
|
# Week 15 – precalculus 11
Multiplying and Dividing Rational Expressions:
Multiplying rational expressions works just like multiplying any other fractions: multiply the tops and multiply the bottoms. If part of an expression is in a different form, we will need to factor it before multiplying. For division, we take the reciprocal of (or "flip") the second fraction and then multiply, but before flipping we must note the values of any variable on the bottom so the bottom never equals 0. These restrictions are called non-permissible values: they are the values the variable cannot take, to prevent there being a 0 on the bottom of a fraction, which is not allowed. We do this for every variable that appears in a denominator, by finding what would make each factor equal 0. If the denominator is just a multiple of the variable, then 0 is automatically a non-permissible value. To demonstrate further I will simplify 3 expressions: one simple multiplication, one more complex multiplication, and one division. For all of these we must remember 3 steps: factor, find the non-permissible values, and then cancel common factors.
We will start by simplifying $\frac {p^2}{10}\times \frac {20}{p}$
Because we cannot factor and there are not common factors to cancel out, we will multiply, resulting in:
$\frac {20p^2}{10p}$
The only non-permissible value so far is 0
We can simplify this further by dividing the top and bottom by $10p$, giving us
$2p$
And to state the non-permissible value we write: $p \neq 0$
Now we will do a more complex multiplication expression:
$\frac {(x-3)(x+2)}{x+4}\times \frac {x^2 - 16}{x(x+2)}$
Now in this case we only need to factor one value but sometimes we will need to factor all of them.
$\frac {(x-3)(x+2)}{x+4}\times \frac {(x+4)(x-4)}{x(x+2)}$
We will now combine everything into one giant fraction; we can do this because fractions written beside each other are being multiplied anyway.
$\frac {(x-3)(x+2)(x+4)(x-4)}{(x+4) x(x+2)}$
This is also where we should find the non-permissible values, which are 0, -4, and -2
Now, when two terms are joined by a plus or minus sign they are connected and act as a single factor. In this case we can cancel out $(x+2)$ and $(x+4)$ because each pair divides to 1; we still include them in the non-permissible values, however.
We also do not FOIL the factors on top; the expression is already simplified.
$\frac {(x-3)(x-4)}{x}$, $x \neq 0,-4,-2$
We will now do one last expression; it will be a simpler division example just so you get the idea. A division expression gets turned into a multiplication, so we only need to add one extra step.
$\frac {10}{x^2} \div \frac {5}{x^2}$
Now in this case x cannot equal 0; we must record the non-permissible values from any denominator before we flip the expression, and we also keep the non-permissible values of anything we cancel.
Now all we do is flip the second fraction
$\frac {10}{x^2} \times \frac {x^2}{5}$
The non-permissible value is still just 0
Now we multiply; this is a simpler expression, so we can multiply right away because there is nothing to factor like a quadratic.
$\frac {10x^2}{5x^2}$
We can divide to simplify further; dividing the top and bottom by $5x^2$ gives us
$2$, $x \neq 0$
And that is how you multiply and divide rational expressions
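The three simplifications above can also be sanity-checked numerically: at any sample point away from the non-permissible values, the original expression and its simplified form should agree. A small Python sketch (the function names are mine, chosen for illustration):

```python
# Numeric check of the three simplifications, evaluated at sample points
# that avoid the non-permissible values.

def expr1(p):
    # p^2/10 * 20/p, which simplifies to 2p  (p != 0)
    return (p**2 / 10) * (20 / p)

def expr2(x):
    # (x-3)(x+2)/(x+4) * (x^2-16)/(x(x+2)), simplifies to (x-3)(x-4)/x
    # (x != 0, -2, -4)
    return ((x - 3) * (x + 2) / (x + 4)) * ((x**2 - 16) / (x * (x + 2)))

def expr3(x):
    # 10/x^2 divided by 5/x^2, which simplifies to 2  (x != 0)
    return (10 / x**2) / (5 / x**2)

for p in (1, 3, 7):
    assert abs(expr1(p) - 2 * p) < 1e-9

for x in (1, 2, 5):
    assert abs(expr2(x) - (x - 3) * (x - 4) / x) < 1e-9
    assert abs(expr3(x) - 2) < 1e-9
```

Each assertion passing confirms the factored and cancelled form equals the original wherever both are defined.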
|
2020-09-24 19:01:00
|
http://ku-sldg.github.io/plih/types/3-Typed-Recursion.html
|
Index
Blog
# Typed Recursion
## Typing Omega
Having established that the type of a lambda is of the form D:->:R, typing an increment function is quite simple:
It follows directly that a lambda taking another lambda and applying it to a value is similarly typed:
The first argument to the expression is the lambda to be applied, while the second is the value it is applied to. It is thus quite natural to have lambda expressions passed to other lambda expressions as arguments. The app in the body of the lambda applies the function argument to a value. We have seen this before, specifically when we looked at untyped recursion, $\Omega$, and Y.
Remember $\Omega$, a simple recursive function that does not terminate:
Let’s add a type placeholder, call it T->T, to the lambda and determine what a typed version of the $\Omega$ would look like:
Will this work? Remember that to determine the type of an app f a we find the type of f, which must be a function type. Then if a has the same type as the domain of f, app f a has the type of the range of f. However, we will never find a type for app x x because the type of x must be the domain type of x. If x has type T->T, then for app x x to have type T, x must also have type T. This will never work, as x would need to have both type T and type T->T. The only way this could happen is if T = T->T, something that is not possible in our current type system.
$\Omega$ cannot have a type. Using the same argument, Y cannot have a type. Thus, neither can be written in our new language that includes function types.
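For contrast, in an untyped setting $\Omega$ is perfectly writable. A quick Python illustration (Python is dynamically typed, so it accepts the self-application that our type system rejects; the stack limit stands in for non-terminating evaluation):

```python
# Omega = (\x. x x)(\x. x x): self-application that never reaches a value.
omega = lambda x: x(x)

result = None
try:
    omega(omega)  # each reduction step produces Omega again, forever
except RecursionError:
    # Python's finite stack interrupts what is, on paper, infinite reduction
    result = "diverges"
print(result)
```

The type system's refusal to type app x x is exactly what rules this term out of the typed language.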
## Normalization
Normalization is a term used to talk about termination. We say that a normal form is a term in a language that cannot be reduced by evaluation. In our languages terms like 1, true and lambda x in x + 1 are normal forms because eval simply returns their values without reduction. We also define these normal forms to be values representing acceptable evaluation results.
In all the languages we have written so far, the sets of values and normal forms are the same. What if we removed the evaluation case for +, implying that 1+1 would not evaluate further? Now terms involving + join values as normal forms. Unlike values, + terms should reduce. Normal forms that are not values are referred to as stuck terms and represent errors in a language definition. It is always desirable to show that the only normal forms in a language are values.
Normalization is the property that evaluating any term in a language always terminates, resulting in a value. No non-termination and no stuck terms. Evaluation always halts with a value.
As it turns out, normalization and typing are strongly related. In our latest language, function types ensure termination. We saw this when failing to find a type for $\Omega$, but did not generalize to all function applications. Look at the types involved in this simple evaluation:
The type resulting from app is smaller than the type of the lambda. This is always true because of the way app types are defined. Specifically, given f:D->R and (app f d) where d:D, the result will always be R. The base types for numbers and booleans are the smallest possible types. Like number and Boolean values, there is no way to reduce them; they are the base cases for types. Thus, app always makes a type smaller and eventually will get to a base type, just like subtracting 1 repeatedly from a number will eventually result in 0. Given that is the case, repeated evaluation will always get to that smallest type, which is always associated with a value.
There are ins and outs to this normalization property. It is great to know that programs terminate in many cases. But it is not great to have to know, when writing a program, how many times each iterative construct will execute. Furthermore, there are many programs we do not want to terminate; think of operating systems or controllers as two examples. Clearly languages with function types allow this, we just need to figure out how.
## Manipulating Closures
Let's revisit our original problem by going back to the most famous of all recursive functions, factorial:
If this is dynamically scoped it executes recursively just fine. But when static scoping and types come into play, things go downhill fast. Here is a basic definition of fact that will not execute if statically scoped:
Because fact is not in scope, technically there is no type for this expression either. If we can get fact in scope, maybe this will work. Is there a way to do recursion in a typed, statically scoped language? Our techniques for untyped recursion do not work. What can we do?
We need fact to be in its closure’s environment. That’s a fancy way of saying fact needs to know about itself. Let’s look at the closure resulting from evaluating the lambda defining fact:
Let’s see if it works by applying the closure to 0:
Remember that when we execute an app, the environment from the closure replaces the local environment. That environment here is empty, thus the recursive reference to fact is not in the closure's environment. app fact 0 works only because the recursive call to fact never occurs: fact 0 is the base case and its value can be calculated directly. We still can't find a type for fact, but we'll worry about that later.
Unfortunately, any value other than 0 triggers the recursive call, where fact must be found in the environment. Looking at fact 1 demonstrates the problem immediately:
The actual value of env is immaterial here because we’re using static scoping. The empty environment in the closure is where we look for the definition of fact.
The easiest fix is to simply add fact to its closure’s environment. We know the definition of fact when it is defined, so we can simply add it to it’s own closure. That should do the trick because the lookup of fact will find it. Let’s give it a try:
There are two closures here. The outer closure is what we will evaluate with app and the inner closure defines the value of fact in the outer closure’s environment. Now fact will work for 0 as it did before and should work for other values as well. Let’s give it a shot for 1:
Good for now, but look at the last app where fact is no longer in the environment. What happened? The environment of the inner closure becomes the new environment when it is applied. This means app fact 2 will fail when the if evaluates and tries to apply fact. We have a fix for that! Let's just add the closure again: add the closure to the environment of the closure in the closure's environment. (Say that fast 5 times.)
Bingo. Now the closure in the environment for the closure knows about the closure. Now we can call fact on 0, 1, and 2. But not 3. Do you see why? The innermost closure can never have fact in its environment because it is, in effect, the base case. Any number of nested closures you choose can always be exceeded by 1. Build 10 and fact will fail for 11, build 100 and it fails for 101, and so forth. No matter how deep the nesting, eventually the recursive call to fact fails.
What can we do to solve this? In the immortal words of Dr. Seuss, no matter how many turtles we add there is always one at the bottom we can try to jump under. We cannot write $\Omega$ thus we cannot write Y. We cannot use closure magic. The only thing we are left with is adding a new construct to our language with a different execution behavior.
## The Fix
In this case, the fix is adding a fixed point, concrete syntax fix t, to our statically scoped language. Instead of using the language to write a fixed point construct like Y, we will build the fixed point into the language directly and take advantage of controlling evaluation more precisely.
Fixed points are common structures in mathematics, but we only need to understand what a basic fixed point structure looks like to solve our problem.
The rule for the general recursive structure is:
Evaluating fix uses substitution to replace the called function with fix over the called function. Note that eval appears on both sides of the definition.
The let evaluates t to get a closure. The body of the closure is evaluated in e, replacing i with lambda i b. What the heck?
To better understand how the fix operation works, let’s evaluate factorial of 3 using our new operation. First, let’s define f, the function we will use to implement factorial:
fact is not recursive. It takes a function that it will call in the recursive case and returns something that looks a great deal like factorial. Let's do a quick thought experiment to see what fact would look like if it were called on itself:
That looks exactly like what we want, but it won’t work until we use fix to perform the instantiation of f.
After evaluating f and pulling the resulting closure apart, we have the following bindings that will get used in the substitution:
The parameter defined by the fact lambda expression is g. Thus, the argument name in the closure is g. The body of the fact lambda is what we think of as factorial with the recursive call replaced by a call to g. The environment is empty because there is nothing defined when we defined the fact lambda. Let’s start the evaluation by applying the fixed point of fact to 3:
Note that we are not applying fact to 3, but instead applying the fixed point of fact to 3 to build a recursive function. To evaluate the app we evaluate fact and apply the resulting value to 3. Let’s evaluate (fix fact) using the definition from above by replacing g with (fix (lambda g in b)):
Now we have something we understand. Specifically, application of a lambda to the term, 3. Substituting 3 for x results in:
Well look at that. We got exactly what we want! We started by applying the fixed point of the lambda to a value and we just got the same thing here. Exactly the same thing with the argument decremented by 1. Let’s keep going by applying the same steps again:
We are recursively executing just like we hoped we would. Now we just need to worry about termination. Same steps again:
… and again:
This time the if condition is true, so the function returns 1 rather than evaluating the fixed point again. The result is exactly what we would expect:
Finally evaluating the resulting product gives us 6 as anticipated. Our newly added fix operation takes a properly formed function and creates a recursive function. Let’s look back at the fact function given to fix:
This looks exactly like our original fact definition with a “hole” for the recursive call. Where fact appears in the initial definition, the function g from the outer lambda appears. This is the general form for any recursive construction we would like to create. Specifically, create the recursive function and replace the recursive instance with a variable created by an outer lambda.
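The same recipe can be sketched in any untyped host language. Here is a minimal Python version (the names fix, fact, and factorial are mine; the eta-expanded lambda n: ... delays the unfolding so fix terminates under Python's eager evaluation):

```python
def fix(f):
    # fix f = f (fix f), eta-expanded so the recursive unfolding is
    # only performed when the resulting function is actually applied.
    return lambda n: f(fix(f))(n)

# fact is not itself recursive: the recursive call goes through g,
# the "hole" left by the outer lambda.
fact = lambda g: lambda x: 1 if x == 0 else x * g(x - 1)

factorial = fix(fact)
print(factorial(3))  # -> 6
```

Replacing the recursive occurrence with a parameter and tying the knot with fix is exactly the general form described above.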
## Typing fix
This entire discussion started with an attempt to create a statically scoped, well-typed recursive construction. We could not find a type for $\Omega$ or Y, we couldn’t hack closures, and finally resorted to extending the core language to include a fix operator. We now have a statically scoped fix expression that will create recursive constructs for us.
One task remains. What is the type of fix? Looking at how we created factorial from fact gives us a great clue:
fix fact gives us the factorial function. The previous definition could be rewritten as:
Looking at factorial this way, it should be clear the type of factorial must be TNat->TNat. Given a number, factorial will return another number. What then is the type of fact? It takes a value g and returns a function that calls g. So, the argument to fact must be a function. The result must also be a function because it is applied to a value. fact takes a function and returns a function. If we call typeof on just fact we learn:
fix takes fact and creates factorial. Instead of applying fix to an argument like app, fix skips the argument and environment and goes straight to the substitution. Given a function like fact, fix creates a recursive function from the body of fact using the function itself. Just like an app, the type of fix is the range of the input function:
|
2018-04-25 02:46:54
|
http://dimacs.rutgers.edu/archive/TechnicalReports/abstracts/2002/2002-13.html
|
## Graph Ramseian bipartitions and weighted stability
### Authors: I. E. Zverovich and I. I. Zverovich
ABSTRACT
Let P and Q be hereditary classes of graphs. The ordered pair (P, Q) is called {\em Ramseian} if both P and Q are polynomially recognizable, P is \alpha-bounded, and Q is \omega-bounded. Let (P, Q) be an ordered pair of graph classes. We denote by P*Q the class of all graphs G such that there exists a partition A \cup B = V(G) with
• G(A) \in P and
• G(B) \in Q,
where G(X) denotes the subgraph of G induced by a set X \subseteq V(G).
A class of graphs C is called \alpha_w-polynomial if there exists a polynomial-time algorithm for calculating the weighted stability number \alpha_w(G) for all graphs G \in C. A class of graphs C is called \alpha_w-complete if the corresponding decision problem is NP-complete for graphs in C.
Our main result is the following theorem.
Theorem Let (P, Q) be a Ramseian pair.
(i) If Q is an \alpha_w-polynomial class then the class P*Q is also \alpha_w-polynomial.
(ii) If Q is an \alpha_w-complete class then the class P*Q is also \alpha_w-complete.
Similar results for \omega_w-polynomial classes and \omega_w-complete classes follow easily (\omega_w(G) is the weighted clique number of a graph G). Finally, a recent result of Alekseev and Lozin (2002) is a particular case of our main theorem.
Keywords: Hereditary class, forbidden induced subgraphs, Ramseian partition, weighted stability number.
Paper Available at: ftp://dimacs.rutgers.edu/pub/dimacs/TechnicalReports/TechReports/2002/2002-13.ps.gz
|
2018-05-22 16:02:57
|
https://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-for-college-students-7th-edition/chapter-6-section-6-1-rational-expressions-and-functions-multiplying-and-dividing-exercise-set-page-413/7
|
## Intermediate Algebra for College Students (7th Edition)
All real numbers apart from $5$.
We know that for rational expressions the domain is all real numbers excluding those that make the denominator $0$. Here the denominator is $0$ when $x=5$, thus the domain is all real numbers apart from $5$.
|
2019-12-08 12:50:20
|
https://www.victorycamera.com/products/hasselblad-nc-2-45-degree-prism-finder
|
# Hasselblad NC-2 45 Degree Prism Finder
This is a nice 45 degree prism for 500 series cameras. It is in excellent condition with moderate wear. There is a small amount of degradation to the finish on a mirror inside the prism, but it is not visible during use and does not affect the function of the prism.
SKU: 220411-14
Returns: 30 days money back
|
2022-05-26 11:07:08
|
http://tex.stackexchange.com/questions/2734/taking-unncessary-space-after-e-g-or-i-e
|
# Taking unnecessary space after e.g. or i.e. [duplicate]
Possible Duplicate:
Is a period after an abbreviation the same as an end of sentence period?
Hi, I often use the abbreviations e.g. and i.e. when writing. The period after these abbreviations makes LaTeX think it is the end of a sentence, and so it starts the word that follows with some unnecessary extra space. Is there a way to automatically instruct LaTeX not to do this? I believe one can always design a macro using a command like \kern (I'm only guessing here). But before I try to write such a macro, I would like to know if more elegant solutions exist.
Thanks a lot
## marked as duplicate by Caramdir, lockstep, Lev Bishop, Taco Hoekwater, vanden Sep 4 '10 at 17:20
Very similar to tex.stackexchange.com/questions/2229/… (although probably not a duplicate, the answer is going to be similar!) – Joseph Wright Sep 4 '10 at 11:46
The way I understand the question, it is a duplicate. @yCalleecharan if the other question is not what you expect, please tell us. – Caramdir Sep 4 '10 at 11:54
Thanks for pointing that a similar question exists. I didn't know. So yes, it's ok for me to close this post. – yCalleecharan Sep 4 '10 at 17:33
Personally, I think of "i.e." and "e.g." as just shorthand for "that is" and "for example". Since these are typically written with a comma following them, I put a comma after "i.e." and "e.g.", yielding "i.e.," and "e.g.,". This eliminates the problem for LaTeX.
However, if you want to use them without commas, the natural thing is to use a backslash followed by a space (`\ `) after them, which always produces a normal-sized interword space: `e.g.\ ` and `i.e.\ `.
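As a minimal illustration (the example sentences are my own), the two fixes look like this in the source:

```latex
% Without a fix, LaTeX inserts end-of-sentence spacing after the period:
Some tools, e.g. latexmk, automate builds.

% A backslash-space forces a normal interword space:
Some tools, e.g.\ latexmk, automate builds.

% Following the abbreviation with a comma also avoids the problem:
Some tools, e.g., latexmk, automate builds.
```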
That's a US thing, I think. In the UK we don't tend to use commas in these cases. – Joseph Wright Sep 4 '10 at 13:25
Thanks. When I write "for example" in full, then I use a comma before and after. But when writing its abbreviation then I don't use any commas at all. – yCalleecharan Sep 4 '10 at 17:16
|
2016-07-24 09:02:56
|
http://ndl.iitkgp.ac.in/document/eXlUU1QvVmR4b3IxNkJFWnEyWlF4anNocDJLUUR4YlpLYzV2eEVoWE5wZz0
|
$^{31}$P-MRS Study of Change in Intracellular pH during Static Contractions in Human
Access Restriction
Open
Author: Iwanaga, Koichi ♦ Yoshimitsu, Hitoshi ♦ Kamata, Takako ♦ Sairyo, Kouichi
Source: J-STAGE
Content type: Text
Publisher: Japan Society of Physiological Anthropology
Language: English
Subject Keyword: $^{31}$P-MRS ♦ Static contraction ♦ Muscle pH ♦ Lactate
Abstract: $^{31}$P-MRS spectra were obtained from human first dorsal interosseous muscle during and after voluntary static abduction of the index finger. Endurance tasks were performed at randomly assigned contraction levels of 15, 20, 30 and 40% of maximal voluntary contraction (MVC). Muscle pH was calculated according to Taylor et al. (1983) using the chemical shift between inorganic phosphate (Pi) and phosphocreatine (PCr) on the $^{31}$P-MRS spectra. Mean endurance times of the static contractions were 7.25, 5.33 and 3.08 minutes for 20, 30 and 40% MVC, respectively. At 15% MVC, all four subjects maintained contraction for 30 minutes, at which point the contractions were terminated. Muscle pH at the onset of contraction was 7.12, 6.98, 7.01 and 7.08 for 15, 20, 30 and 40% MVC, respectively. At the end of contraction, when the subject could no longer maintain the force level, muscle pH was 6.07, 5.97 and 5.94 for 20, 30 and 40% MVC, respectively. There was no significant difference in muscle pH at the end of contraction between the three conditions by one-way ANOVA. In conclusion, there was a critical muscle pH of about 6.0 below which static contractions could not be maintained.
ISSN: 02878429
Learning Resource Type: Article
Publisher Date: 1991-04-01
Journal: The Annals of Physiological Anthropology (ahs1983)
Volume Number: 10
Issue Number: 2
Page Count: 8
Starting Page: 83
Ending Page: 90
|
2020-09-28 06:54:21
|
https://stats.stackexchange.com/questions/236838/how-does-regularized-logistic-regression-regularize-perceptron-hypothesis-set-in
|
How does regularized logistic regression regularize the perceptron hypothesis set in a binary classification task?
I'm a newbie to Machine Learning and I'm not very good at math. I have read Learning From Data - A Short Course and met this Exercise 4.6 on page 133:
We have seen both the hard-order constraint and the soft-order constraint. Which do you expect to be more useful for binary classification using the perceptron model? [Hint: $\operatorname{sign}(w^{T}x) = \operatorname{sign}(\alpha w^{T}x)$ for any $\alpha > 0$.]
From the hint, I have guessed the answer: the soft-order constraint does not actually "regularize" the perceptron model at all (I'm not sure I used the right word here; what I mean is that the soft-order constraint does not reduce the complexity of the perceptron model's hypothesis set), so the hard-order constraint is more useful for binary classification using the perceptron model.
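The hint itself is easy to check empirically. A small Python sketch (random data, names mine) confirms that scaling $w$ by any $\alpha > 0$ never changes a perceptron's predictions, which is why a soft-order constraint of the form $w^{T}w \le C$ cannot shrink the set of classifiers the perceptron can represent:

```python
import random

def sign(z):
    # perceptron output: sign(w^T x), with sign(0) taken as +1
    return 1 if z >= 0 else -1

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(5)]

for _ in range(100):
    x = [random.uniform(-1, 1) for _ in range(5)]
    wx = sum(wi * xi for wi, xi in zip(w, x))
    for alpha in (1e-3, 1.0, 250.0):
        # scaling w by alpha > 0 leaves every classification unchanged,
        # so we can always rescale w to satisfy ||w||^2 <= C
        assert sign(alpha * wx) == sign(wx)

print("sign(w^T x) == sign(alpha w^T x) for every alpha > 0 tested")
```

Any weight vector can therefore be rescaled into the constraint set without changing a single prediction, which is the content of the hint.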
However, if the above answer is correct, then I do not understand: how does regularized logistic regression regularize the perceptron hypothesis set in this binary classification task?
|
2019-06-16 03:11:03
|
https://www.ngbusinesscoach.com/waecmaths/7?items_per_page=40
Waecmaths
Question 16
In the diagram, the height of the flagpole $\left| TF \right|$ and the length of its shadow $\left| FL \right|$ are in the ratio 6:8. Using K as a constant of proportionality, find the shortest distance between T and L.
Question 37
In the diagram, PQRST is a regular polygon with the sides QR and TS produced to meet at V. Find the size of $\angle RVS$.
Question 39
In the diagram, PQ is a straight line, calculate the value of angle labelled 2y
Question 42
An interior angle of a regular polygon is 5 times each exterior angle. How many sides has the polygon
Question 43
In the diagram $\overline{ST}\parallel \overline{PQ}$, reflex angle SRQ = 198° and $\angle RQP = 72^{\circ}$. Find the value of y.
Question 8
In the diagram x + y = 220°. Find the value of n.
Question 15
In the diagram, $\angle QPT=\angle PTS={{90}^{\circ }}$, $\angle PQR={{110}^{\circ }}$and . Find the size of the obtuse angle QRS
Question 20
In the diagram, PQR is a straight line, (m + n) = 120° and (n + r) = 100°. Find (m + r).
Question 21
In the diagram, $\overline{SR}$is parallel to $\overline{UW}$,$\angle WVT={{x}^{\circ }},\angle VUT={{y}^{\circ }},\angle RSV={{45}^{\circ }}$and $\angle VTU={{20}^{\circ }}$Find the value of x
Question 22
In the diagram, $\overline{SR}$is parallel to $\overline{UW}$,$\angle WVT={{x}^{\circ }},\angle VUT={{y}^{\circ }},\angle RSV={{45}^{\circ }}$and $\angle VTU={{20}^{\circ }}$Calculate the value of y
Question 47
Each exterior angle of a polygon is 30°. Calculate the sum of the interior angles.
Question 19
In $\Delta XYZ$, $\left| XY \right| = 8$ cm, $\left| YZ \right| = 10$ cm and $\left| XZ \right| = 6$ cm. Which of these relations is true?
Question 22
In the diagram $MQ\parallel RS,\angle TUV={{70}^{\circ }}\text{and }\angle RLV={{30}^{\circ }}$ find the value of x
Question 23
In the diagram, MN, PQ and RS are three intersecting straight lines, which of the following statement (s) is/are true?
i. t = y
ii. x + y + z + m = 180°
iii. x + m + n = 180°
iv. x + n = m + z
Question 36
In the diagram, $MN\parallel PO,$ $\angle PMN={{112}^{{}^\circ }},\angle PNO={{129}^{{}^\circ }}\angle NOP={{37}^{{}^\circ }}$ and$\angle MPN=y$ find the value of y
Question 40
The sum of the interior angles of a regular polygon is 1800°. How many sides has the polygon?
Question 45
The diagram is a polygon. Find the largest of its interior angles.
Question 4
The diagram shows a cyclic quadrilateral PQRS with its diagonals intersecting at K. which of the following triangles is similar to triangle QKR?
Question 10
Find the sizes of the angle marked x in the diagram
Question 11
A regular polygon of n sides has each exterior angle equal to 45°. Find the value of n.
Question 23
In the diagram $MN\parallel OP,\angle NMQ={{65}^{\circ }}$ and $\angle QOP={{125}^{\circ }}$ what is the size of $\angle MQR$?
Question 27
From the diagram, which of the following statement are true?
I. m = q
II. n = q
III. n + p = 180°
IV. p + m = 180°
Question 37
In the diagram, STUV is a straight line$\angle TSY=\angle UXY={{40}^{{}^\circ }}$ and $\angle VUW={{110}^{\circ }}$ calculate $\angle TYW$
Question 47
In the diagram $PQ\parallel TS$, $PR\parallel TU$, reflex angle QPS = 245°, $\angle PST = 115^{\circ}$, $\angle STU = 65^{\circ}$ and $\angle RPS = x$. Find the value of x.
Question 3
In the diagram $\angle PSR={{22}^{{}^\circ }},\text{ }\angle SPQ={{58}^{{}^\circ }}$ and Calculate the obtuse angle QRS
Question 10
What is the value of m in the diagram?
Question 11
In the diagram, $QR\parallel ST,\text{ }\left| PQ \right|=\left| PR \right|$ $\text{ and }\angle PST={{75}^{{}^\circ }}$. find the value of y
Question 21
In the diagram, triangle HKL and HIJ are similar. Which of the following ratios is equal to $\frac{LH}{JH}$
Question 27
The sum of the exterior angles of an n – sided convex polygon is half the sum of its interior angles. Find n
Question 19
In the diagram $\overline{PE},\ \overline{QT},\ \overline{RG}$ intersect at S and $PQ\parallel RG$. If $\angle SPQ = 113^{\circ}$ and $\angle RST = 22^{\circ}$, find $\angle PSQ$.
Question 23
In the diagram $PR\parallel SV\parallel WY,\text{ }TX\parallel QY,\text{ }\angle TXW={{60}^{\circ }}$ find $\angle TQU$
Question 46
The ratio of the exterior angle to the interior angle of a regular polygon is 1:11. How many sides has the polygon?
Question 50
From the diagram which of the following is true?
Question 20
The interior angles of a pentagon are (2x + 5)°, (x + 20)°, x°, (3x − 20)° and (x + 15)°. Find the value of x.
Question 21
In the diagram, IG is parallel to JE, $J\overset{\wedge }{\mathop{E}}\,F={{120}^{\circ }}$ and $F\overset{\wedge }{\mathop{H}}\,G={{130}^{\circ }}$. Find the angle marked t
Question 23
Find the value of x in the diagram
Question 24
In the diagram $\left| SQ \right|=4cm,\text{ }\left| PT \right|=7cm\text{, }\left| TR \right|=5cm$ and $ST\parallel QR$ If $\left| SP \right|=xcm$. Find the value of x
Question 39
Each interior angle of a regular polygon is 140°. Calculate the sum of all the interior angles of the polygon.
Question 46
Find the value of x in the diagram
Question 47
The angles of a triangle are (x + 10)°, (2x − 40)° and (3x − 90)°. Which of the following accurately describes the triangle?
http://educ.jmu.edu/~waltondb/MA2C/derivative-rules.html
## Section8.5Differentiation
The most important application of limit rules is to develop rules for derivatives. Every time we need a derivative, we currently must use the definition and compute the limit
\begin{equation*} f'(x) = \lim_{h \to 0} \frac{f(x+h)-f(x)}{h} \end{equation*}
and then go through the algebra and simplification to find the resulting formula. Finding the formula for $f'(x)$ from the formula for $f(x)$ is a process called differentiation. The rules for derivatives will provide us with a methodical way to differentiate algebraic formulas.
Differentiation is a process of taking a function and using it to determine another function. In a sense, we are using a function as an input and creating a new function as the output. Can you see that this is something like a function of functions instead of a function of numbers? We call such a process an operator and most commonly use the differential operator $\displaystyle \frac{d}{dx}$ where $x$ is the independent variable for the function. (The notation changes depending on the relevant independent variable.)
###### Definition8.5.1
The differential operator $\displaystyle \frac{d}{dx}$ takes a function as its input and provides the derivative function as its output,
\begin{equation*} \frac{d}{dx}[f(x)] = f'(x). \end{equation*}
If $y$ is a dependent variable defined by a function $y=f(x)\text{,}$ then we can also write
\begin{equation*} \frac{dy}{dx} = \frac{d}{dx}[y] = f'(x). \end{equation*}
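The idea that $\frac{d}{dx}$ takes a function in and hands a function back can be sketched in code (a hypothetical numerical stand-in, not part of the text): here `d_dx` approximates the derivative with a central difference rather than computing it symbolically, but it has the same function-to-function shape as the differential operator.

```python
def d_dx(f, h=1e-6):
    """Operator: take a function f, return a function approximating f'."""
    def fprime(x):
        return (f(x + h) - f(x - h)) / (2 * h)
    return fprime

square = lambda x: x * x
dsquare = d_dx(square)      # a new function, approximately x -> 2x
```

For example, `dsquare(3.0)` is approximately 6, matching $\frac{d}{dx}[x^2] = 2x$ at $x=3$.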
### Subsection8.5.1Derivative Rules
Derivative rules are theorems that take as a hypothesis that one or two functions have known derivatives and the conclusion tells how to find the derivative of some combination of those functions. We start by stating the basic rules together for convenience in finding them.
One of these differentiation rules, the chain rule, will require its own discussion. That rule is focused on how to differentiate compositions of functions. The other rules focus on arithmetic combinations of functions and are the primary focus of this section. The chain rule was included for completeness in the listing of differentiation rules.
The proofs for these differentiation rules are based on applying the definition of a derivative to the formula in question while knowing that the limits that define the derivatives in the hypothesis are valid. To illustrate, we will look at four of the differentiation rules in detail. Before doing this, we will also need the following theorem, that a function must be continuous wherever the derivative is defined.
Because $f'(c)$ is defined, the limit defining it is
\begin{equation*} \lim_{x \to c} \frac{f(x)-f(c)}{x-c} = f'(c). \end{equation*}
We know that $\displaystyle \lim_{x \to c} x-c = c-c = 0$ using the Limit of a Linear Function. Because
\begin{equation*} f(x) - f(c) = \frac{f(x)-f(c)}{x-c} \cdot (x-c), \end{equation*}
we can compute the limit
\begin{equation*} \lim_{x \to c} f(x)-f(c) = \lim_{x \to c} \left[ \frac{f(x)-f(c)}{x-c} \cdot (x-c) \right] = f'(c) \cdot 0 = 0. \end{equation*}
The value $f(c)$ is a constant, so the Limit of a Constant Rule implies
\begin{equation*} \lim_{x \to c} f(c) = f(c). \end{equation*}
Since $f(x)=(f(x)-f(c)) + f(c)\text{,}$ the Limit of a Sum Rule implies
\begin{equation*} \lim_{x \to c} f(x) = \lim_{x \to c} [ f(x)-f(c)+f(c)] = 0 + f(c) = f(c). \end{equation*}
Therefore, $f$ is continuous at $c\text{.}$
### Subsection8.5.2Proofs of Differentiation Rules
#### Subsubsection8.5.2.1Proof of Constant Multiple Rule
By hypothesis, $\displaystyle \frac{d}{dx}[f(x)] = f'(x)\text{.}$ This means that $f'(x)$ is defined by its limit
\begin{equation*} \lim_{h \to 0} \frac{f(x+h)-f(x)}{h} = f'(x). \end{equation*}
The rule is interested in finding the rate of change of a new function $k \cdot f(x)\text{.}$ We will use that function to compute the derivative.
\begin{align*} \frac{d}{dx}[k \cdot f(x)] &= \lim_{h \to 0} \frac{k \cdot f(x+h) - k \cdot f(x)}{h} \\ &= \lim_{h \to 0} \frac{k (f(x+h)-f(x))}{h}. \end{align*}
Now notice that the formula is a product of the constant $k$ and the average rate of change of $f\text{.}$ Because we already know that the limit of the average rate of change of $f$ is equal to $f'(x)\text{,}$ we can use the Limit Rule for a Constant Multiple.
\begin{align*} \frac{d}{dx}[k \cdot f(x)] &= \lim_{h \to 0} \frac{k (f(x+h)-f(x))}{h} \\ &= k \cdot f'(x). \end{align*}
#### Subsubsection8.5.2.2Proof of Reciprocal Rule
By hypothesis, $\displaystyle \frac{d}{dx}[g(x)] = g'(x)\text{.}$ This means that $g'(x)$ is defined by its limit
\begin{equation*} \lim_{h \to 0} \frac{g(x+h)-g(x)}{h} = g'(x). \end{equation*}
The rule is interested in finding the rate of change of a new function $1/g(x)\text{.}$ We will use that function to compute the derivative using the definition, which will require finding a common denominator.
\begin{align*} \frac{d}{dx}[\frac{1}{g(x)}] &= \lim_{h \to 0} \frac{\frac{1}{g(x+h)} - \frac{1}{g(x)}}{h} \\ &= \lim_{h \to 0} \frac{\frac{g(x)}{g(x)g(x+h)} - \frac{g(x+h)}{g(x)g(x+h)}}{h} \\ &= \lim_{h \to 0} \frac{g(x)-g(x+h)}{g(x)g(x+h)}\cdot \frac{1}{h} \\ &= \lim_{h \to 0} \frac{-1}{g(x)g(x+h)}\cdot \frac{g(x+h)-g(x)}{h} \end{align*}
Since the limit involves $h \to 0\text{,}$ $g(x)$ is a constant and $g(x+h) \to g(x)$ (by continuity) so that
\begin{equation*} \lim_{h \to 0} \frac{-1}{g(x)g(x+h)} = \frac{-1}{(g(x))^2}. \end{equation*}
Using the Limit Rule of a Product, we have
\begin{align*} \frac{d}{dx}[\frac{1}{g(x)}] &= \lim_{h \to 0} \frac{-1}{g(x)g(x+h)}\cdot \frac{g(x+h)-g(x)}{h} \\ &= \frac{-1}{(g(x))^2} \cdot g'(x) = \frac{-g'(x)}{(g(x))^2}. \end{align*}
#### Subsubsection8.5.2.3Proof of Sum Rule
By hypothesis, $\displaystyle \frac{d}{dx}[f(x)] = f'(x)$ and $\displaystyle \frac{d}{dx}[g(x)] = g'(x)\text{.}$ This means that
\begin{gather*} \lim_{h \to 0} \frac{f(x+h)-f(x)}{h} = f'(x), \\ \lim_{h \to 0} \frac{g(x+h)-g(x)}{h} = g'(x). \end{gather*}
The sum rule is interested in finding the rate of change of a new function $f(x)+g(x)\text{.}$ We will use that function to compute the derivative using the definition:
\begin{align*} \frac{d}{dx}[f(x)+g(x)] &= \lim_{h \to 0} \frac{[f(x+h)+g(x+h)] - [f(x)+g(x)]}{h} \\ &= \lim_{h \to 0} \frac{f(x+h)+g(x+h)-f(x)-g(x)}{h} \\ &= \lim_{h \to 0} \frac{f(x+h)-f(x)+g(x+h)-g(x)}{h} \\ &= \lim_{h \to 0} \left[\frac{f(x+h)-f(x)}{h}+\frac{g(x+h)-g(x)}{h}\right] \\ &= f'(x)+g'(x), \end{align*}
using the Limit Rule of a Sum.
#### Subsubsection8.5.2.4Proof of Product Rule
By hypothesis, $\displaystyle \frac{d}{dx}[f(x)] = f'(x)$ and $\displaystyle \frac{d}{dx}[g(x)] = g'(x)\text{.}$ This means that
\begin{gather*} \lim_{h \to 0} \frac{f(x+h)-f(x)}{h} =\lim_{h \to 0} \frac{\Delta f}{h} = f'(x), \\ \lim_{h \to 0} \frac{g(x+h)-g(x)}{h} = \lim_{h \to 0} \frac{\Delta g}{h} =g'(x). \end{gather*}
The product rule is interested in finding the rate of change of a new function $f(x)g(x)\text{.}$ In the course of this calculation, we will also need the following substitutions,
\begin{gather*} f(x+h) = f(x+h)-f(x) + f(x) = \Delta f + f(x), \\ g(x+h) = g(x+h)-g(x) + g(x) = \Delta g + g(x). \end{gather*}
When $f'(x)$ and $g'(x)$ both exist, $f$ and $g$ are both continuous so that
\begin{gather*} \lim_{h \to 0} \Delta f = \lim_{h \to 0} f(x+h)-f(x) = 0, \\ \lim_{h \to 0} \Delta g = \lim_{h \to 0} g(x+h)-g(x) = 0. \end{gather*}
The derivative in question is defined by
\begin{align*} \frac{d}{dx}[f(x)g(x)] &= \lim_{h \to 0} \frac{[f(x+h)g(x+h)] - [f(x)g(x)]}{h} \\ &= \lim_{h \to 0} \frac{(\Delta f + f(x))(\Delta g + g(x)) - f(x)g(x)}{h} \\ &= \lim_{h \to 0} \frac{\Delta f \Delta g + \Delta f g(x) + f(x) \Delta g + f(x)g(x) - f(x)g(x)}{h} \\ &= \lim_{h \to 0} \frac{\Delta f \Delta g + \Delta f g(x) + f(x) \Delta g}{h} \\ &= \lim_{h \to 0} \left[\frac{\Delta f \Delta g}{h} + \frac{\Delta f g(x)}{h} + \frac{f(x) \Delta g}{h} \right]. \\ &= \lim_{h \to 0} \left[\Delta f \cdot \frac{\Delta g}{h} + \frac{\Delta f}{h} \cdot g(x) + f(x) \cdot \frac{\Delta g}{h} \right]. \\ &= 0 \cdot g'(x) + f'(x) \cdot g(x) + f(x) \cdot g'(x) \\ &= f'(x) \cdot g(x) + f(x) \cdot g'(x), \end{align*}
using the Limit Rule of a Sum and the Limit Rule of a Product.
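The product rule just proved can be sanity-checked numerically (an illustration of the result, not part of the proof): compare a finite-difference derivative of $f(x)g(x)$ with $f'(x)g(x)+f(x)g'(x)$ for sample functions.

```python
import math

h = 1e-6
def deriv(f, x):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

f = math.sin
g = lambda x: x ** 3
fg = lambda x: f(x) * g(x)

x0 = 1.3
lhs = deriv(fg, x0)                                 # d/dx [f g] directly
rhs = deriv(f, x0) * g(x0) + f(x0) * deriv(g, x0)   # f' g + f g'
```

The two values agree to several decimal places; the small residual is finite-difference error, the numerical analogue of the $\Delta f \cdot \frac{\Delta g}{h}$ term vanishing in the limit.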
### Subsection8.5.3Using the Derivative Rules
A traditional development of calculus begins by applying these rules to find formulas for derivatives. We will instead begin by interpreting some specific applications involving rates of change.
###### Example8.5.4
A tank is being filled with water by two supply hoses. If the first hose is pumping water at a rate of 20 gal/min and the second hose is pumping water at a rate of 30 gal/min, what is the total rate of change for the tank?
Solution
We know the intuitive solution to the problem is 50 gal/min. This is actually a consequence of the sum rule of derivatives.
We can think of the water in the tank as having two components: $W_1\text{,}$ the volume of water (gal) that was pumped by hose 1, and $W_2\text{,}$ the volume of water (gal) that was pumped by hose 2. These two variables are functions of time $t$ (min), and the rates of water flowing from the hoses correspond to derivatives:
\begin{equation*} \frac{dW_1}{dt} = 20, \quad \frac{dW_2}{dt} = 30. \end{equation*}
The total volume of water in the tank at a given time $t$ is the sum $W(t)=W_1(t) + W_2(t)\text{.}$ So by the sum rule of derivatives,
\begin{equation*} \frac{dW}{dt} = \frac{dW_1}{dt} + \frac{dW_2}{dt} = 20+30 = 50. \end{equation*}
The sum rule for derivatives feels very intuitive. If a quantity is the sum of parts, then the total rate of change for the quantity is the sum of the rates of change for each of the parts. The product rule is less intuitive because we don't get to multiply rates of change when a quantity is a product. To illustrate, we focus on a geometric example: the area of a rectangle whose side lengths are changing.
###### Example8.5.5
A city is in the shape of a rectangle with sides aligned with North-South and East-West lines. Suppose that the city is currently 5 miles east-to-west and 3 miles north-to-south and plans to expand to a size 8 miles east-to-west by 5 miles north-to-south over the next 10 years. What is the average rate of change of the total area in the city over the 10 years? If the borders were to move at a constant rate over those 10 years, what is the instantaneous rate of change of the total area of the city at the beginning and at the end of the 10 years?
Solution
We start with the question of average rate of change of total area, which we can solve intuitively. The city originally has a total area of $5 \times 3 = 15 \: \hbox{mi}^2\text{.}$ After 10 years, the city has a total area of $8 \times 5 = 40 \: \hbox{mi}^2\text{.}$ So the change in area is $25 \: \hbox{mi}^2$ over 10 years so the average rate of change of area is $\frac{25}{10} = 2.5\: \hbox{mi}^2/\hbox{yr}\text{.}$
To connect our intuition with functions and to prepare for the next calculations, let us introduce some variables. The state of the city can be characterized by four variables: the time $t$ (yr) at which the state is observed, the distance east-to-west, which we'll call the width $W$ (mi), the distance north-to-south, which we'll call the height $H$ (mi), and the enclosed total area $A$ ($\hbox{mi}^2$). We think of $W\text{,}$ $H$ and $A$ as being functions of time $t$ with the area being equal to the product of $W$ and $H\text{:}$
\begin{equation*} A(t) = W(t) \cdot H(t). \end{equation*}
Then the average rate of change for $A$ on the interval $t \in [0,10]$ is given by
\begin{equation*} \left. \frac{\Delta A}{\Delta t} \right|_{[0,10]} = \frac{A(10)-A(0)}{10-0} = \frac{W(10)H(10) - W(0)H(0)}{10} = \frac{8(5)-5(3)}{10} = 2.5. \end{equation*}
To find the instantaneous rates of change, we need to know how fast the width and height measurements are changing in time. Because the problem stated that these changed at a constant rate, we can use the average rates of change to compute the instantaneous rates:
\begin{gather*} \displaystyle \frac{dW}{dt} = \left. \frac{\Delta W}{\Delta t}\right|_{[0,10]} = \frac{W(10)-W(0)}{10-0} = \frac{8-5}{10} = 0.3, \\ \displaystyle \frac{dH}{dt} = \left. \frac{\Delta H}{\Delta t}\right|_{[0,10]} = \frac{H(10)-H(0)}{10-0} = \frac{5-3}{10} = 0.2. \end{gather*}
Since the area $A$ is the product of $W$ and $H\text{,}$ the product rule for derivatives will provide the instantaneous rate of change for area:
\begin{equation*} \frac{dA}{dt} = \frac{d}{dt}[W \cdot H] = \frac{dW}{dt} \cdot H + W \cdot \frac{dH}{dt}. \end{equation*}
When $t=0$ we have $W(0)=5$ and $H(0)=3$ so that
\begin{align*} \left. \frac{dA}{dt} \right|_{0} &= \left.\frac{dW}{dt}\right|_{0} \cdot H(0) + W(0) \cdot \left.\frac{dH}{dt}\right|_{0}\\ &= 0.3(3) + 5(0.2) = 1.9. \end{align*}
That is, at the beginning, the city is expanding at a rate of $1.9 \: \hbox{mi}^2/\hbox{yr}\text{.}$ After 10 years, $t=10\text{,}$ we have $W(10)=8$ and $H(10)=5$ so that
\begin{align*} \left. \frac{dA}{dt} \right|_{10} &= \left.\frac{dW}{dt}\right|_{10} \cdot H(10) + W(10) \cdot \left.\frac{dH}{dt}\right|_{10}\\ &= 0.3(5) + 8(0.2) = 3.1. \end{align*}
At the end of the 10 years, the city is expanding at a rate of $3.1 \: \hbox{mi}^2/\hbox{yr}\text{.}$
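The numbers in this example can be reproduced in a short script (a sketch using the linear width and height models implied by the problem statement):

```python
def W(t): return 5 + 0.3 * t        # width: 5 mi -> 8 mi over 10 yr
def H(t): return 3 + 0.2 * t        # height: 3 mi -> 5 mi over 10 yr
def A(t): return W(t) * H(t)        # total area in mi^2

def dA_dt(t):
    # product rule: dW/dt * H + W * dH/dt
    return 0.3 * H(t) + W(t) * 0.2

avg_rate = (A(10) - A(0)) / 10      # 2.5 mi^2/yr
rate_start = dA_dt(0)               # 1.9 mi^2/yr
rate_end = dA_dt(10)                # 3.1 mi^2/yr
```

Note that the instantaneous rates at the two endpoints bracket the average rate of 2.5, as the geometry of the growing rectangle suggests they should.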
The picture of expanding area helps provide some intuition for why the product rule is the appropriate technique. If we consider the city after 6 months ($t=0.5$), both the width and the height have changed by a small amount, as shown in the figure below. The total change in area has two primary contributions, corresponding to long, skinny rectangles with areas $W(0) \cdot \Delta H$ and $\Delta W \cdot H(0)\text{,}$ and a very small rectangle with area $\Delta W \cdot \Delta H\text{.}$ The product rule corresponds to the rate of change coming from the two primary contributions while the small rectangle leads to a term that has a limit of zero in the calculation of the derivative.
Quotients often appear when working with densities, concentrations, or other ratios.
###### Example8.5.6
A salt-water solution is being formulated. At a particular instant, the solution consists of 10 L of water with 5 kg of salt. At that instant, water is being added at a rate of 0.5 L/s while salt is being added at a rate of 0.2 kg/s. What is the instantaneous rate of change of the concentration?
Solution
We start by identifying the variables that define the state of our system. The variables include the time $t\text{,}$ measured in seconds (s), the total volume of water $V\text{,}$ measured in liters (L), the total amount of salt in the water $S\text{,}$ measured in kilograms (kg), and the concentration of salt water $C\text{,}$ measured in kilograms per liter (kg/L). The variables $V\text{,}$ $S$ and $C$ are functions of time $t$ with an equation relating them by
\begin{equation*} C(t) = \frac{S(t)}{V(t)}. \end{equation*}
The instantaneous rate of change is computed using the quotient rule for derivatives,
\begin{equation*} \frac{dC}{dt} = \frac{ V \frac{dS}{dt} - S \frac{dV}{dt}}{V^2}. \end{equation*}
The values at the instant in question are given by
\begin{gather*} V = 10, \quad \frac{dV}{dt} = 0.5, \\ S = 5, \quad \frac{dS}{dt} = 0.2. \end{gather*}
Using these values in the quotient rule for derivatives, we have
\begin{equation*} \frac{dC}{dt} = \frac{ 10(0.2) - 5(0.5) }{10^2} = \frac{2-2.5}{100} = -0.005. \end{equation*}
That is, the concentration is changing at a rate of -0.005 kg salt per liter water per second. Alternatively, we could say that the concentration is decreasing at a rate of 0.005 kg/L/s.
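A quick check of this computation (a sketch; the linear models for $V$ and $S$ are assumptions that hold only near the instant in question):

```python
def V(t): return 10 + 0.5 * t       # liters of water
def S(t): return 5 + 0.2 * t        # kg of salt
def C(t): return S(t) / V(t)        # concentration, kg/L

# quotient rule at t = 0: (V * dS/dt - S * dV/dt) / V^2
dC = (V(0) * 0.2 - S(0) * 0.5) / V(0) ** 2      # -0.005

# finite-difference cross-check of the same rate
h = 1e-6
dC_approx = (C(h) - C(-h)) / (2 * h)
```

Both the quotient-rule value and the finite-difference estimate come out to about -0.005 kg/L/s, matching the worked solution.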
### Subsection8.5.4Derivative Building Blocks
In order to apply the differentiation rules to formulas, we need some elementary rules to get started. Just as with the limit rules, we begin with the basics. The justification of the elementary derivatives must be based on the definition of the derivative. The derivatives of more complex functions can then be justified using the derivative rules.
Let $f(x)=k$ be the constant function. Since $f(x+h)=k\text{,}$ we have
\begin{align*} f'(x) &= \lim_{h \to 0} \frac{f(x+h)-f(x)}{h} \\ &= \lim_{h \to 0} \frac{k-k}{h} = \lim_{h \to 0} 0 \\ &= 0, \end{align*}
where the last step used the limit rule for a constant.
Let $f(x)=x$ be the identity function. Since $f(x+h)=x+h\text{,}$ we have
\begin{align*} f'(x) &= \lim_{h \to 0} \frac{f(x+h)-f(x)}{h} \\ &= \lim_{h \to 0} \frac{x+h-x}{h} = \lim_{h \to 0} \frac{h}{h} \\ &= \lim_{h \to 0} 1 = 1. \end{align*}
Although the derivatives of other functions can be found using the limit definition of the derivative, these two elementary derivatives along with the rules of differentiation allow us to find many other derivatives.
###### Example8.5.9
Find $f'(x)$ for $f(x) = 3x-5\text{,}$ citing the differentiation rules you used.
Solution
We will use the differential operator $\frac{d}{dx}\text{.}$ Recall that this operator behaves something like a function, but with a function as its input and the derivative of that function as its output. The rules of differentiation allow us to find a derivative by breaking the function down into its elementary components.
In this example, $f(x)$ is a sum of $3x$ and $-5\text{.}$ The subformula $3x$ is a constant multiple (3) of the identity $x\text{,}$ and $-5$ is a constant. Once you recognize these facts, you can apply the rules.
\begin{align*} f'(x) &= \frac{d}{dx}[3x-5] = \frac{d}{dx}[3x+-5] \\ &= \frac{d}{dx}[3x] + \frac{d}{dx}[-5] \quad \hbox{Sum Rule} \\ &= 3 \frac{d}{dx}[x] + \frac{d}{dx}[-5] \quad \hbox{Constant Multiple Rule} \\ &= 3 \cdot 1 + \frac{d}{dx}[-5] \quad \hbox{Derivative of Identity} \\ &= 3 \cdot 1 + 0 \quad \hbox{Derivative of Constant} \\ &= 3 \end{align*}
Thus, $f'(x) = 3\text{.}$
The rules of derivatives can also allow us to create new differentiation rules. The following theorem uses the same steps as the previous example but with arbitrary constants.
\begin{align*} \frac{d}{dx}[ax+b] &= \frac{d}{dx}[ax] + \frac{d}{dx}[b] \quad \hbox{Sum Rule} \\ &= a \frac{d}{dx}[x] + \frac{d}{dx}[b] \quad \hbox{Constant Multiple Rule} \\ &= a \cdot 1 + \frac{d}{dx}[b] \quad \hbox{Derivative of Identity} \\ &= a \cdot 1 + 0 \quad \hbox{Derivative of Constant} \\ &= a \end{align*}
Integer powers correspond to repeated multiplication, so the product rule of differentiation will lead to a rule for the derivative of a power. The following examples lead to a natural pattern called the power rule for derivatives.
###### Example8.5.11
Use the product rule of derivatives to show that
\begin{gather*} \frac{d}{dx}[x^2] = 2x ,\\ \frac{d}{dx}[x^3] = 3x^2, \\ \frac{d}{dx}[x^4] = 4x^3. \end{gather*}
Solution
The work will involve rewriting the powers as products,
\begin{equation*} x^2 = x \cdot x, \quad x^3 = x \cdot x^2, \quad x^4 = x \cdot x^3. \end{equation*}
We will also use the elementary derivative of the identity,
\begin{equation*} \frac{d}{dx}[x] = 1. \end{equation*}
It is useful to remember the product rule using dependent variables, say $u=f(x)$ and $v=g(x)\text{,}$ as
\begin{equation*} \frac{d}{dx}[u \cdot v] = \frac{du}{dx} \cdot v + u \cdot \frac{dv}{dx} \end{equation*}
because this will guide our use of the differentiation operator.
With that setup, we can compute the derivatives using differentiation rules.
\begin{align*} \frac{d}{dx}[x^2] &= \frac{d}{dx}[x \cdot x]\\ &= \frac{d}{dx}[x] \cdot x + x \cdot \frac{d}{dx}[x]\\ &= 1 \cdot x + x \cdot 1\\ &= 2x \end{align*}
For the next calculation, we will use both of our previously found derivatives.
\begin{align*} \frac{d}{dx}[x^3] &= \frac{d}{dx}[x \cdot x^2]\\ &= \frac{d}{dx}[x] \cdot x^2 + x \cdot \frac{d}{dx}[x^2]\\ &= 1 \cdot x^2 + x \cdot (2x)\\ &= 3x^2 \end{align*}
The pattern should be apparent for the next derivative.
\begin{align*} \frac{d}{dx}[x^4] &= \frac{d}{dx}[x \cdot x^3]\\ &= \frac{d}{dx}[x] \cdot x^3 + x \cdot \frac{d}{dx}[x^3]\\ &= 1 \cdot x^3 + x \cdot (3x^2)\\ &= 4x^3 \end{align*}
We can continue to find more derivatives using these results.
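The emerging pattern $\frac{d}{dx}[x^n] = nx^{n-1}$ can be checked numerically for these powers (an illustration of the pattern, not a proof):

```python
h = 1e-6
def deriv(f, x):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

x0 = 1.7
# compare the finite-difference derivative of x^n with n * x^(n-1)
errors = [abs(deriv(lambda t, n=n: t ** n, x0) - n * x0 ** (n - 1))
          for n in (2, 3, 4)]
```

All three errors are tiny, consistent with the conjectured power rule for $n = 2, 3, 4$.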
###### Example8.5.12
Find $\displaystyle \frac{d}{dx}[\frac{1}{x^3}]\text{.}$
Solution
The reciprocal rule, written with a dependent variable $u=g(x)\text{,}$ is
\begin{equation*} \frac{d}{dx}[\frac{1}{u}] = \frac{-\frac{du}{dx}}{u^2}. \end{equation*}
Applying it with $u = x^3$ and the known derivative $\frac{d}{dx}[x^3] = 3x^2\text{,}$ we find
\begin{align*} \frac{d}{dx}[\frac{1}{x^3}] &= \frac{-\frac{d}{dx}[x^3]}{(x^3)^2} \\ &= \frac{-(3x^2)}{x^3 \cdot x^3} = \frac{-3x^2}{x^6} \\ &= \frac{-3}{x^4}. \end{align*}
###### Example8.5.13
Find $\displaystyle \frac{d}{dx}[ 5x^2-8x+3 ]\text{.}$
Solution
Let $f(x)=5x^2-8x+3\text{.}$ Using the sum rule,
\begin{align*} f'(x) &= \frac{d}{dx}[5x^2-8x+3] \\ & = \frac{d}{dx}[5x^2 + (-8x+3)] \\ & = \frac{d}{dx}[5x^2] + \frac{d}{dx}[-8x+3] \end{align*}
Then the constant multiple rule, the derivative of $x^2\text{,}$ and the derivative of a linear formula give
\begin{align*} f'(x) & = 5 \frac{d}{dx}[x^2] + \frac{d}{dx}[-8x+3] \\ & = 5(2x) - 8\\ & = 10x-8. \end{align*}
Thus, $\displaystyle \frac{d}{dx}[ 5x^2-8x+3 ] = 10x-8\text{.}$
https://www.gradesaver.com/textbooks/math/algebra/elementary-algebra/chapter-4-proportions-percents-and-solving-inequalities-4-3-more-on-percents-and-problem-solving-problem-set-4-3-page-161/34
## Elementary Algebra
Published by Cengage Learning
# Chapter 4 - Proportions, Percents, and Solving Inequalities - 4.3 - More on Percents and Problem Solving - Problem Set 4.3 - Page 161: 34
#### Answer
The original price of the dress was 175 dollars.
#### Work Step by Step
Let s represent the original price. We can use the following guideline to solve this problem: original selling price − discount = discounted sale price. Then s − 20% $\times$ s = 140, so s − 0.2s = 140, which gives 0.8s = 140. Dividing both sides by 0.8 yields s = 175. The original price of the dress was 175 dollars.
http://www.ams.org/mathscinet-getitem?mr=2458153
MathSciNet bibliographic data MR2458153 (2009g:46118) 46L35 (46L80). Lin, Huaxin; Niu, Zhuang. Lifting $KK$-elements, asymptotic unitary equivalence and classification of simple $C^{*}$-algebras. Adv. Math. 219 (2008), no. 5, 1729–1769.
http://mathhelpforum.com/calculus/205280-finding-area-dividing-into-sub-intervals.html
# Math Help - Finding the area by dividing into sub intervals.
1. ## Finding the area by dividing into sub intervals.
I have to find the area of the following by dividing it up into nine sub intervals.
$W(x)=-\frac{x^4}{324}+\frac{x^3}{9}-\frac{25x^2}{18}+7x, \quad 0 \le x \le 18$
the width of each interval is 2, so the nine sub-interval endpoints ought to be x = 2, 4, 6, 8, 10, 12, 14, 16, 18. I multiplied the width by f(2), f(4), etc., and the answer came out to be approx 178.96; however, when I calculate the integral with technology the answer is 183.6. Have I done something wrong?
2. ## Re: Finding the area by dividing into sub intervals.
Originally Posted by johnsy123
I have to find the area of the following by dividing it up into subintevals.
3) Your backyard pool is kidney shaped and its width can be modelled as a function of its length (x) using the rule
[IMG: broken link to a local file — the image of the rule did not upload]
(a) Use a numerical method to calculate the total area of the pool by finding the area under this graph, for example divide the interval up into nine subintervals and add up the areas.
You need to post images through a website, not from your computer.
3. ## Re: Finding the area by dividing into sub intervals.
You need to attach a file that is on your hard drive. The img tags are for images that are hosted online.
4. ## Re: Finding the area by dividing into sub intervals.
Originally Posted by johnsy123
I have to find the area of the following by dividing it up into nine sub intervals.
$W(x)=-\frac{x^4}{324}+\frac{x^3}{9}-\frac{25x^2}{18}+7x, \quad 0 \le x \le 18$
the width of each interval is 2, so the nine sub-interval endpoints ought to be x = 2, 4, 6, 8, 10, 12, 14, 16, 18. I multiplied the width by f(2), f(4), etc., and the answer came out to be approx 178.96; however, when I calculate the integral with technology the answer is 183.6. Have I done something wrong?
What you have done appears to be alright, but right-hand-interval integration won't give you a very accurate answer. You could improve it by averaging it with the left-hand-interval integral, or the midpoint rule, or the trapezoidal rule, or Simpson's Rule.
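To make the arithmetic concrete, here is a short Python sketch that reproduces both numbers, assuming the interval is 0 ≤ x ≤ 18 (consistent with nine subintervals of width 2):

```python
# Right-endpoint Riemann sum for W(x) = -x^4/324 + x^3/9 - 25x^2/18 + 7x
# over [0, 18] with nine subintervals of width 2.

def W(x):
    return -x**4 / 324 + x**3 / 9 - 25 * x**2 / 18 + 7 * x

width = 2
right_sum = width * sum(W(x) for x in range(2, 19, 2))  # x = 2, 4, ..., 18

# Exact value from the antiderivative -x^5/1620 + x^4/36 - 25x^3/54 + 7x^2/2
def W_antideriv(x):
    return -x**5 / 1620 + x**4 / 36 - 25 * x**3 / 54 + 7 * x**2 / 2

exact = W_antideriv(18) - W_antideriv(0)

print(round(right_sum, 2), round(exact, 1))  # -> 178.96 183.6
```

Note that because W(0) = W(18) = 0, the left- and right-endpoint sums coincide here, so averaging them (or the trapezoidal rule on these nine subintervals) does not change the 178.96 estimate; a midpoint rule or finer subintervals is needed to get closer to 183.6.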
https://ncertmcq.com/ml-aggarwal-class-7-icse-maths-model-question-paper-1/
ML Aggarwal Class 7 Solutions for ICSE Maths Model Question Paper 1 acts as the best resource during your learning and helps you score well in your exams.
## ML Aggarwal Class 7 ICSE Maths Model Question Paper 1
(Based on Chapters 1 to 3)
Time allowed: 1 Hour
Maximum Marks: 25
Instructions
• Questions 1-2 carry 1 mark each
• Questions 3-5 carry 2 marks each
• Questions 6-8 carry 3 marks each
• Questions 9-10 carry 4 marks each.
Choose the correct answer from the given four options (1-2):
Question 1.
(-10) × 2 + 0 ÷ (-2) is equal to
(a) -20
(b) 20
(c) -22
(d) 22
Solution:
Question 2.
The sum of a rational number $$\frac { -1 }{ 2 }$$ and its multiplicative inverse is
(a) 0
(b) 1
(c) -2$$\frac { 1 }{ 2 }$$
(d) -2
Solution:
Question 3.
Evaluate: (-36) ÷ ((-14) + 2).
Solution:
Question 4.
If the length of a rectangle is 8.26 cm and its breadth is 5.5 cm, then find the area of the rectangle.
Solution:
Question 5.
Reduce the rational number $$\frac { 105 }{ -168 }$$ to standard form.
Solution:
Question 6.
In a competition, the question paper consists of 20 questions. 5 marks are awarded for every correct answer and 2 marks are deducted for every incorrect answer and 0 marks for every question not attempted. Vishal attempted 17 questions and got 11 correct answers. What is his score?
Solution:
Question 7.
Barkha bought 20$$\frac { 3 }{ 8 }$$ kg rice at the rate of ₹ 17$$\frac { 1 }{ 2 }$$ per kg and sent it to an orphanage. Find the amount spent by Barkha. What value is being promoted?
Solution:
Question 8.
Which rational number is greater -5$$\frac { 5 }{ 9 }$$ or -5$$\frac { 7 }{ 12 }$$ ?
Solution:
Question 9.
Simran walks 1$$\frac { 5 }{ 12 }$$ km from a place A towards north and then from there she walks 2$$\frac { 7 }{ 9 }$$ km towards south. Where will be she now from place A?
Solution:
Question 10.
If the product of two decimal numbers is 17.55 and one of them is 2.7, then find the other.
Solution:
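Two of the purely numeric questions can be checked with a few lines of Python (my own working; the book's solutions are given as images in the original and are not reproduced here):

```python
# Question 6: 5 marks per correct answer, -2 per incorrect answer,
# 0 for each question not attempted.
attempted, correct = 17, 11
incorrect = attempted - correct
score = 5 * correct - 2 * incorrect
print(score)  # -> 43

# Question 10: the product of two decimals is 17.55 and one factor is 2.7.
other = 17.55 / 2.7
print(round(other, 2))  # -> 6.5
```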
https://www.edaboard.com/blog/estimating-frequency-over-a-short-capture.1939/
# Estimating Frequency Over A Short Capture
I was recently asked how to estimate the frequency of a noisy sine wave over a short capture. In this context, 'short' means the total capture duration is small relative to the period of the sine wave. So we may have, at most, a few periods of the sine wave (or, at worst, less than one period). Here is an example (raw data: sine_data_40k.zip):
Clearly, there is less than one period available. To illustrate the difficulty with this problem, I will first describe a popular approach for estimating frequency over 'long' captures.
FFT-based Frequency Estimation ('Long' Captures)
Whenever we are interested in the frequency content of a signal, the Fast Fourier Transform (FFT) is often an excellent tool to use. In practice, when working with real-world (finite and imperfect) data, rather than using the raw output of the FFT, it is often preferable to enhance the FFT's estimate of Power Spectral Density (PSD) using Welch's method, since this can significantly reduce noise (at the expense of reduced resolution).
Here is some Matlab code showing how to use Welch's method to estimate frequency over a 'long' capture (download: pwelch_test.zip):
```matlab
close all; clear all; clc;

% Assume we capture Nsamps samples at 1kHz sample rate
Nsamps = 32768;
fsamp = 1000;
Tsamp = 1/fsamp;
t = (0:Nsamps-1)*Tsamp;

% Assume the noisy signal is exactly 123Hz
fsig = 123;
signal = sin(2*pi*fsig*t);
noise = 1*randn(1,Nsamps);
x = signal + noise;

% Plot time-domain signal
subplot(2,1,1);
plot(t, x);
ylabel('Amplitude'); xlabel('Time (secs)');
axis tight;
title('Noisy Input Signal');

% Choose FFT size and calculate spectrum
Nfft = 8192;
[Pxx,f] = pwelch(x,gausswin(Nfft),Nfft/2,Nfft,fsamp);

% Plot frequency spectrum
subplot(2,1,2);
plot(f,Pxx);
ylabel('PSD'); xlabel('Frequency (Hz)');
grid on;

% Get frequency estimate (spectral peak)
[~,loc] = max(Pxx);
FREQ_ESTIMATE = f(loc);
title(['Frequency estimate = ',num2str(FREQ_ESTIMATE),' Hz']);
```
The code outputs this figure:
Although the input looks like a noisy mess, we can see an extremely narrow spike in the spectrum at 123.0469 Hz (i.e. a tiny error of 0.0469 Hz).
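The same experiment can be re-created without Matlab. The sketch below is a hand-rolled Welch-style average in NumPy (a Hann window here instead of the Gaussian window used above, so the numbers differ slightly, but the peak lands in the same place):

```python
import numpy as np

# Windowed, 50%-overlapping segments; average the segment periodograms.
rng = np.random.default_rng(0)
Nsamps, fsamp, fsig = 32768, 1000, 123
t = np.arange(Nsamps) / fsamp
x = np.sin(2 * np.pi * fsig * t) + rng.standard_normal(Nsamps)

Nfft = 8192
window = np.hanning(Nfft)
hop = Nfft // 2  # 50% overlap
segments = [x[i:i + Nfft] * window
            for i in range(0, Nsamps - Nfft + 1, hop)]
Pxx = np.mean([np.abs(np.fft.rfft(s)) ** 2 for s in segments], axis=0)
freqs = np.fft.rfftfreq(Nfft, d=1 / fsamp)

freq_estimate = freqs[np.argmax(Pxx)]
print(round(freq_estimate, 4))  # near 123 Hz, quantised to the bin spacing
```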
However, if we try the same approach with the 'short' data capture (using this code: short_pwelch.zip), we get:
Now the frequency spike is invisible because it is squeezed right down in the first bin after DC. The reason is that, even though I have eliminated any averaging in the Welch method, the frequency bin spacing (i.e. resolution) is still only:
$\Delta_f = \frac{f_{samp}}{N_{samps}} = 50\ \mathrm{kHz}$
Therefore, even though the input signal is somewhere in the region of 35kHz, it is impossible for the FFT to provide an answer anywhere between 0 and 50kHz.
It is worth reiterating that the 'short' capture is short in terms of its total duration compared to the period of the sine wave. In fact, there are significantly more samples in the 'short' capture than the 'long' capture, but this doesn't ultimately help.
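The bin-spacing claim is easy to check numerically; the sample period (5e-10 s) and capture length (40 000 samples) below are taken from the code and file name above:

```python
# FFT bin spacing (frequency resolution): delta_f = fsamp / Nsamps
Tsamp = 5e-10           # sample period of the 'short' capture (seconds)
fsamp = 1 / Tsamp       # 2 GHz sample rate
Nsamps = 40_000         # samples in sine_data_40k
delta_f = fsamp / Nsamps
print(round(delta_f))   # -> 50000  (50 kHz per bin, far coarser than the ~35 kHz signal)
```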
Frequency Estimation Via Curve Fitting
A far more effective method with 'short' captures is curve fitting. In a previous blog, I wrote about polynomial curve fitting. Well, in this case, we want to do sinusoidal curve fitting. This is easy to do in Matlab, using the 'sin1' option with the fit() function (code: short_sine_fit.zip):
```matlab
close all; clear all; clc;

x = sine_data_40k;

% Data info
Tsamp = 5e-10;
fsamp = 1/Tsamp;
Nsamps = length(x);
t = (0:Nsamps-1).'*Tsamp;

% Get sine fit
F = fit(t, x, 'sin1');

% Plot data and sine fit
figure();
plot(F,t,x);
grid on;
xlabel('Time (secs)');
ylabel('Amplitude');
title(['Estimated frequency = ', num2str(F.b1/(2*pi)), ' Hz']);
```
The code outputs this graph:
The frequency estimate of 35526.7 Hz is very much more accurate than the FFT-based method.
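The curve-fitting idea can also be sketched without Matlab's fit(): for a fixed frequency the sine model is linear in its remaining parameters, so a coarse grid search over frequency plus linear least squares recovers the frequency of a sub-period capture. A minimal Python sketch on synthetic data (the 35 kHz signal, sample rate, and noise level here are illustrative assumptions, not the blog's actual data):

```python
import numpy as np

# Synthetic 'short' capture: about 0.6 of one period of a 35 kHz sine
# plus Gaussian noise.
rng = np.random.default_rng(42)
f_true, fs = 35e3, 100e6
t = np.arange(int(0.6 / f_true * fs)) / fs
y = np.sin(2 * np.pi * f_true * t) + 0.05 * rng.standard_normal(t.size)

def fit_sine_freq(t, y, f_lo, f_hi, n_grid=601):
    """Grid-search the frequency: for fixed f the model
    a*sin(2*pi*f*t) + b*cos(2*pi*f*t) + c is linear in (a, b, c),
    so solve each candidate by least squares and keep the f with
    the smallest squared residual."""
    best_f, best_err = f_lo, np.inf
    for f in np.linspace(f_lo, f_hi, n_grid):
        A = np.column_stack([np.sin(2 * np.pi * f * t),
                             np.cos(2 * np.pi * f * t),
                             np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        err = np.sum((y - A @ coef) ** 2)
        if err < best_err:
            best_f, best_err = f, err
    return best_f

f_est = fit_sine_freq(t, y, 20e3, 50e3)
print(round(f_est))  # close to 35000
```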
I never thought it was possible to take the spectrum of a signal with only a fraction of its full period.
I like to remind people about significant figures when they talk about accuracy.
35526.7 has only 3 SIG FIGs looking at possible noise error in the graph.
With a signal to noise ratio of < 100:1 and DC offset uncertainty of ~100:1, i.e. 2~3 significant figures.
Using the fundamental slope of the sine wave at zero crossing to the peak or next zero crossing I get 34.0 kHz +/-0.2Hz
Using Irfanview on webimage the time scale 20us was 435 pixels.
So what error do you expect on 35526.7, when I get an optimistic 34 kHz?
SunnySkyguy;bt2511 said:
I like to remind people about significant figures when they talk about accuracy.
Why?
SunnySkyguy;bt2511 said:
So what error do you expect on 35526.7, when I get an optimistic 34 kHz?
I don't know what you mean by "optimistic", but I would suggest that your hacky method using lines drawn in an image editor is likely to give very poor estimates, compared to a line-fit derived using proper mathematical rigour. (Not to mention how you would expect to apply your method in a repeatable manner over large data sets). The exact frequency from which this capture was generated was about 35.33kHz. Therefore, the line-fit performs well compared to the FFT-based method in this scenario.
If you are interested in evaluating accuracy in more detail, I would suggest that one easy way of doing this would be to run a large number of trials over synthetic data with known ground truths. As long as your synthetic data has similar statistics to the types of signals you're interested in, then this would answer your question.
### Blog entry information
Author: weetabixharry
Attachments: pwelch_test.zip, short_pwelch.zip, short_sine_fit.zip, sine_data_40k.zip
https://plainmath.net/7292/gamma-equal-plus-plus-plus-equal-ordered-bases-change-coordinate-matrix
Question
# Let gamma = {t^2 - t + 1, t + 1, t^2 + 1} and beta = {t^2 + t + 4, 4t^2 - 3t + 2, 2t^2 + 3} be ordered bases for P_2(R). Find the change of coordinate matrix Q
Alternate coordinate systems
Let $$\gamma = \{t^2 - t + 1,\ t + 1,\ t^2 + 1\}$$ and $$\beta = \{t^2 + t + 4,\ 4t^2 - 3t + 2,\ 2t^2 + 3\}$$ be ordered bases for $$P_2(\mathbb{R})$$. Find the change of coordinate matrix $$Q$$ that changes $$\beta$$-coordinates into $$\gamma$$-coordinates.
2020-11-02
Let $$t^2 + t + 4 = \gamma_1(t^2 - t + 1) + \gamma_2(t + 1) + \gamma_3(t^2 + 1)$$
$$\Rightarrow \gamma_1 + \gamma_3 = 1, \quad -\gamma_1 + \gamma_2 = 1, \quad \gamma_1 + \gamma_2 + \gamma_3 = 4$$
$$\Rightarrow 1 + \gamma_2 = 4 \Rightarrow \gamma_2 = 3$$
$$-\gamma_1 + \gamma_2 = 1 \Rightarrow -\gamma_1 + 3 = 1 \Rightarrow \gamma_1 = 2$$
$$\gamma_1 + \gamma_3 = 1 \Rightarrow 2 + \gamma_3 = 1 \Rightarrow \gamma_3 = -1$$

Let $$4t^2 - 3t + 2 = \gamma_1(t^2 - t + 1) + \gamma_2(t + 1) + \gamma_3(t^2 + 1)$$
$$\Rightarrow \gamma_1 + \gamma_3 = 4, \quad -\gamma_1 + \gamma_2 = -3, \quad \gamma_1 + \gamma_2 + \gamma_3 = 2$$
$$\Rightarrow 4 + \gamma_2 = 2 \Rightarrow \gamma_2 = -2$$
$$-\gamma_1 + \gamma_2 = -3 \Rightarrow -\gamma_1 - 2 = -3 \Rightarrow \gamma_1 = 1$$
$$\gamma_1 + \gamma_3 = 4 \Rightarrow 1 + \gamma_3 = 4 \Rightarrow \gamma_3 = 3$$

Let $$2t^2 + 3 = \gamma_1(t^2 - t + 1) + \gamma_2(t + 1) + \gamma_3(t^2 + 1)$$
$$\Rightarrow \gamma_1 + \gamma_3 = 2, \quad -\gamma_1 + \gamma_2 = 0, \quad \gamma_1 + \gamma_2 + \gamma_3 = 3$$
$$\Rightarrow 2 + \gamma_2 = 3 \Rightarrow \gamma_2 = 1$$
$$-\gamma_1 + \gamma_2 = 0 \Rightarrow -\gamma_1 + 1 = 0 \Rightarrow \gamma_1 = 1$$
$$\gamma_1 + \gamma_3 = 2 \Rightarrow 1 + \gamma_3 = 2 \Rightarrow \gamma_3 = 1$$

$$\therefore Q = \begin{pmatrix} 2 & 1 & 1 \\ 3 & -2 & 1 \\ -1 & 3 & 1 \end{pmatrix}$$
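The answer can be verified numerically. Writing each basis polynomial as its coefficient vector in the standard basis {t^2, t, 1}, Q solves GQ = B, where the columns of G and B hold the gamma and beta basis vectors (a sketch of my own, not part of the original answer):

```python
import numpy as np

# Columns = coefficient vectors (t^2, t, constant) of each basis polynomial.
G = np.array([[1, 0, 1],    # t^2 - t + 1,  t + 1,  t^2 + 1
              [-1, 1, 0],
              [1, 1, 1]], dtype=float)
B = np.array([[1, 4, 2],    # t^2 + t + 4,  4t^2 - 3t + 2,  2t^2 + 3
              [1, -3, 0],
              [4, 2, 3]], dtype=float)

# Q converts beta-coordinates to gamma-coordinates: G @ Q = B.
Q = np.linalg.solve(G, B)
print(np.round(Q).astype(int))
# -> [[ 2  1  1]
#     [ 3 -2  1]
#     [-1  3  1]]
```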
https://control.com/forums/threads/is-linuxplc-heading-down-the-proprietry-hardware-path.3296/
Is LinuxPLC heading down the "proprietry" hardware path?
Campbell, David (Ex AS17)
There has been much discussion on this lately about hardware & IO interface boards. The following email has been formatted for a fixed-pitch font (includes ASCII art), so don't complain if the formatting looks "weird". My opinion (which doesn't count for much, as I only wield a soldering iron for the odd serial cable) is as follows:

1) LinuxPLC should have a common framework to allow interface development for connecting to ANY hardware vendor IO rack, providing the protocol document is in the public domain (or at least obtained without signing any non-disclosure agreement).

2) We should not be attempting to replicate existing vendors' solutions. This means we should not spend effort designing and constructing IO boards. There are several reasons for this:

2.1) Safety certification - impossible (or extremely difficult) to obtain for a GPL (or similar) hardware implementation.
2.2) Economies of scale - how many people are prepared to make a 1,000 unit run of an IO module board?
2.3) The PLC market is rather cut-throat already (highly competitive).
2.X) EXCEPTION: low IO count or specialised IO (eg: student introduction to PLCs) where "fool proof" safety is sufficient and "idiot proof" is not required (with reference to email threads about the "unknown powerup state of 8255" not being acceptable in an industrial environment). This area is largely ignored by 99% of the vendors.

Curt Wuollet has raised the issue that there are very few hardware vendors that offer ethernet support for PLCs at competitive prices. I have several questions which need to be answered:

1) What advantages does ethernet give over serial?
1.1) Higher speed?
1.2) Longer cable runs?
1.3) Lower total cost? (including cabling)
1.4) "Multi-threaded" access? (two machines simultaneously accessing one PLC)

2) What price are people prepared to pay for ethernet access?
2.1) Nothing?
2.2) US$100/PLC?
2.3) US$200/PLC?
2.4) US$400/PLC?
I believe Curt may have touched on an area where LinuxPLC could spread like wildfire through the automation arena. Practically every IO device with a serial port appears to support MODBUS (some better than others). I have just done a web trawl looking at ethernet<=>serial interfaces; typically these are terminal servers which have a port price varying from USD$100 to USD$400/serial port (32 to 4 port devices respectively). Hence I ruled out these devices almost immediately (although Shiva does appear to have a relatively cheap single port device; since their acquisition by Intel this device has vanished from the face of the internet).

I looked around for other solutions and came across print servers which support bi-directional parallel ports. There were a couple of print servers (notably 3 port devices, 2P+1S) which supported serial printers. There was a lack of information whether the serial port was bi-directional. If a cheap bi-directional parallel port<=>serial convertor exists then it might be possible to use LinuxPLC as a "generic" MODBUS-TCP <=> MODBUS-Serial convertor.

The following table is some pricing of network print servers:

    Unit               Price     Port Price
    Netgear 2 port     USD$122   USD$61
    Netgear 3 port     USD$200   USD$67
    DLink 2+1 port     USD$154   USD$77   [1]
    Netcomm 3 port     USD$232   USD$78
    Netcomm 1 port     USD$92    USD$92
    DLink 1 port       USD$116   USD$116
    HP JD-170x 1 port  USD$144   USD$144

[1] 2 Parallel + 1 Serial, assuming serial port is uni-directional. If the serial port is bi-directional then it makes the DLink product very attractive.
[2] All prices based on Australian web-sites converted to USD, hence prices will vary from country to country.
[3] Netcomm => http://www.netcomm.com.au/

On to the next part of the problem: parallel to serial convertors.

    Black Box             USD$119
    DataPro SXP325-256    USD$119
    Pacific Custom Cable  USD$68
    Patton 2030           USD$??? (not listed) [1]
    CableXpress           USD$79

[1] http://www.patton.com/, huge range of serial convertors

It appears that the interface hardware will be around USD$130 per port. I suspect that it could be done for less... Personally I believe the above prices for convertors are twice what they should be.

Now to put the whole thing together:

[ASCII network diagram: PCs on Ethernet (MODBUS-TCP) -> LinuxPLC MODBUS-TCP driver -> LinuxPLC core libraries -> LinuxPLC MODBUS-Serial driver -> UNIX socket to TCP-IP socket process (glorified pipe) -> Ethernet/TCP-IP -> print server -> IEEE 1284 <-> RS-232C convertors -> PLCs on MODBUS-Serial]

Feel free to comment, criticise or flame the above idea. I am attempting to find a niche area in the current PLC market where LinuxPLC could thrive. Anyway, that's my 5 cents (2 cent coins have been withdrawn from circulation in Australia, worth about USD$0.02). Comments Curt?

David Campbell
_______________________________________________
LinuxPLC mailing list [email protected] http://linuxplc.org/mailman/listinfo/linuxplc
Curt Wuollet
Let me sleep on it Dave, I've been so tightly focused on getting _any_ real world IO for LPLC that I haven't been thinking that far ahead. One of the goals of LPLC is to be the great go between for all the protos we can support. The Ethernet IO thing still has a window open and I'm determined to get us into that gap. The big guns in the industry are frantically trying to decommoditize Ethernet and we can save the world from that if we commoditize it. Where you're coming from is not gelling with me but I've been laying down tracks on the DIO48 thing for hours and nothing makes much sense tonight. I'll look at it again in the morning. Regards cww
Curt Wuollet
Hi Dave In a word, Yes. I guess I have assumed all along that we will be able to do things like this. What LPLC would offer is doing it with Automation Programmer tools rather than Linux systems programmer tools. We have talked from the very earliest about being a great go between. By the way, I make my living at the moment using Linux to connect incompatible automation junk. There must be a need as I'm still working :^) Regards cww
Hugh Jack
I think care is needed to emphasize that while some of the group is enthusiastically discussing "proprietary" (yet free and open) hardware and software, the core controller will not require the "proprietary" hardware to run. In other words, you will not be forced to use the new hardware to use the Puffin PLC -- but you will have a choice. From other perspectives, this discussion is involving people who have not been working on parts of the project. So... it is not draining resources, or diluting the focus. Hugh
Andrew G.Treves
<<message snipped>> To start with all that is required is an RS232 / RS485 driver either 2 wire or 4 wire to drive straight into Modbus. This then will drive a large variety of process control and PLC hardware with the appropriate coil and register mappings for the intended target. In particular most variable speed drives can be controlled this way. We have been looking for such a device for the last five years. PC cards and converters are readily available or you can knock something up on veroboard using a MAX485 chip. Andrew G.Treves, Peak District, Derbyshire, UK Tel: +44 (0) 1246 582219 Fax: +44 (0) 870 7062393 [email protected]9.net
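For anyone wiring a serial driver straight into Modbus as described above, the RTU framing detail that most often trips people up is the CRC. A minimal sketch of the standard CRC-16/MODBUS calculation (Python used here purely for illustration):

```python
def crc16_modbus(data: bytes) -> int:
    """CRC-16/MODBUS: reflected poly 0xA001, init 0xFFFF, no final XOR."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

# Appending the CRC little-endian makes the CRC of the whole frame zero,
# which is how a receiver can validate an RTU frame in one pass.
frame = bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x01])  # read 1 holding register
crc = crc16_modbus(frame)
full = frame + bytes([crc & 0xFF, crc >> 8])
print(hex(crc), crc16_modbus(full) == 0)
```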
Andrew Kohlsmith
> Where you're coming from is not gelling with me but I've been
> laying down tracks on the DIO48 thing for hours and nothing makes
> much sense tonight. I'll look at it again in the morning.

I'd been watching this for a while... the DIO48 is the IO board discussed last week? Regards, Andrew
Curt Wuollet
By way of a reply and an announcement: What started small has grown a bit. In a few more hours we will have full industrial strength IO for Linux. Rather than compromise, I took a survey of PLC vendors' IO specs and produced a fully compatible design.

Pro version design features:
- 24 fully optoisolated sinking/sourcing inputs. May be strapped in groups of 8 for sinking or sourcing use, with provision for fast or slow filtering. 5000 V isolation, group to PC. The option exists for adjusting logic thresholds.
- 24 fully optoisolated NPN OC outputs with integral protection diodes. Common strapped in groups of 8, with 5000 V isolation to PC and XXX group to group depending on construction. Outputs sink .5 ADC. Very generous duty cycle and number-of-outputs-on specs, TBD but should be better than the big guys. External fusing.
- Provision for fixed or removable rising elevator contact terminal strips. Industrial layout and design rules. All through-hole devices. Board should be sprayed with clear acrylic to maintain isolation in dirty environments.

Hobby/lab version: cheap, for when the PC is the machine and isolation is not a problem.
- 24 sinking voltage divider inputs with provision for filtering.
- 24 NPN OC sinking outputs with .5 ADC capability.
- May be used with or without terminals. Believe it or not, decent terminals are the single biggest expense for these boards.

The Hobby/lab artwork should appear in a week or two after I fab and test the pro board. There are a lot of lines and connections and there may even be an oops or two, so I want to verify the artwork/design before I derive the second board. I will be offering the Pro version to the folks at my day job for fab and testing to meet needs we currently have. This is how I plan to test the Pro version. The Hobby/lab version will probably need to be fabbed and tested by the user. I will request that the proto house I use retain artwork/etc so boards can be ordered with fast turnaround.
Artwork for both versions will remain the property of the LPLC project. They are produced in pcb, the free open source layout tool, and are freely editable. The input and output cells can be used for other projects (if they work :^)) saving considerable time and effort to support other boards. Regards cww
https://www.physicsforums.com/threads/proving-reflexive-symmetric-transitive-properties.818672/
|
Proving reflexive, symmetric, transitive properties
1. Jun 12, 2015
issacnewton
Hello
I was reading Spivak's Calculus. It starts with discussing the familiar axioms of the real numbers. He calls them properties. At another forum, I came across a reference to Landau's "Foundations of Analysis" as background for analysis. So I referred to that book. On the very first page, he says that the reflexive, symmetric, and transitive properties of the natural numbers are taken for granted on logical grounds. So Landau is taking the reflexive, symmetric and transitive properties as axioms of the natural numbers. I was wondering if we can prove the reflexive, symmetric and transitive properties from the field axioms given in Spivak's calculus book.
thanks
2. Jun 12, 2015
micromass
So first of all, the approaches of Landau and Spivak are very different. What Spivak does is introduce the real numbers axiomatically and then construct the natural numbers as a subset of the reals. What Landau does is take the natural numbers axiomatically, and then construct the real numbers from them. So the two approaches are essentially inverses of each other.
Now, what you are referring to are the axioms of equality. These are axioms of basic logic; they cannot be proven in the approach of Spivak or Landau. They have to be taken for granted because they define what equality actually is. Other axioms of equality are surely possible, but you need to start from something.
3. Jun 12, 2015
issacnewton
Thanks micromass. A book like Spivak's Calculus should mention what you are saying, just for completeness. I was trying to prove something in Spivak, and I wanted to use the transitive property of real numbers. Since Spivak supposedly starts from scratch, I started wondering where the transitive property comes from.
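These equality axioms are exactly the defining conditions of an equivalence relation, and for a finite relation they can be checked mechanically. A small Python sketch of my own (not from either book):

```python
# Check reflexivity, symmetry and transitivity of a relation R on a
# finite set S, where R is represented as a set of ordered pairs.

def is_reflexive(S, R):
    return all((a, a) in R for a in S)

def is_symmetric(R):
    return all((b, a) in R for (a, b) in R)

def is_transitive(R):
    return all((a, d) in R
               for (a, b) in R for (c, d) in R if b == c)

S = {1, 2, 3}
equality = {(a, a) for a in S}  # the equality relation on S
print(is_reflexive(S, equality), is_symmetric(equality), is_transitive(equality))
# -> True True True
```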
https://maker.pro/forums/threads/my-recent-tek-agilent-and-lecroy-oscilloscope-fun.229423/
# My recent Tek, Agilent, and LeCroy oscilloscope fun
#### Mr.CRC
Hi:
Recently at work I reviewed Tek MSO4000, Tek DPO7000, Agilent 9000, and
LeCroy MXi-A series oscilloscopes.
I have purchased recently an Agilent MSO7054B, Tek MSO4054, and LeCroy
104MXi-A.
The only one I regret purchasing is the Tek. I would have preferred to
have gotten their new MSO5000 model, but anyway...
The MSO4000 is my least favorite instrument, and will likely be a loaner
scope for giving to labs for temporary use, or for a new tech. that we
hope to hire soon. The reason is that it doesn't have the in-depth data
analysis of the LeCroy, and unlike the Agilent 6000 and 7000 series,
its waveform update rate slows to a crawl whenever deep memory is used,
or measurements and digital channels are turned on.
The Agilent seems to be the best bang for the buck. I've had a MSO6054
for a few years, and now several of their 7000 series. They maintain
high update rates with the always-on deep memory (not quite as deep as
Tek, only 4MB when running Normal, 8MB with a single acquisition). The
Tek also tends to display persistence artifacts that make you think
something is there, but it's not, just persistence from the last acq.
So now the MSO7054 is my primary mixed signal design scope. The MSO6054
will be there for a student intern this summer.
You are probably wondering, well why the heck did I buy the Tek? I
needed a histogram measurement of some PLL jitter, and the Agilent can't
do that. I also mistakenly was still discounting LeCroy as not worthy
of consideration.
My feelings on the LeCroy changed due to the following endeavor: First,
another lab rat bought a LeCroy because it had more segments to its
segmented memory acquisition than the Agilent. Also, the Agilent can't
maintain full sampling rate when more than a few 100 segments are
selected. Weird. So he started selling me on the LeCroy, after defying
me and buying one even though I had tried to steer him toward Agilent.
Then I had a new measurement problem: Measure and optimize in real time
the shot-shot energy stability (relative standard deviation) of an
optical parametric oscillator, as a function of various parameters--you
don't want to know. OPOs are a bitch.
Well the Agilent and Tek can spit out numbers for standard deviation and
mean of an amplitude or area measure. Then I have to type them into my
calculator to get rel. std. dev. The Agilent accumulates N
acquisitions. The Tek has a moving average with configurable count, but
an unspecified weighting function that seems to lead to discrepancies
vs. the accumulating stats on its histograms.
Either of them can get the number with the assistance of a calculator,
but neither can give it directly in real time, nor provide a trend plot.
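The calculator step described above amounts to the following, sketched in Python with made-up amplitude readings (the numbers are illustrative, not from the actual measurement):

```python
import statistics

# Hypothetical pulse-amplitude readings, standing in for the scope's
# accumulated acquisitions
amplitudes = [1.02, 0.98, 1.01, 0.99, 1.00]

mean = statistics.mean(amplitudes)
stdev = statistics.pstdev(amplitudes)   # population std. dev. over the acquisitions
rel_std_dev = stdev / mean              # relative standard deviation (coefficient of variation)
```

The point of the complaint is that the scope hands you mean and stdev separately, so the final division has to happen off-instrument.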
Enter the LeCroy. LeCroy loaned me a 1GHz 104MXi-A for over two weeks.
On the third day, I figured out how to make my measurement. The thing
can do trend plots of any measurement parameter, such as amplitude. You
can set how many past measurements to plot, so that controls the rate at
which it rolls across the screen, in conjunction with the acquisition
rate.
Then you can compute mean and standard deviation (or any other maths) on
the trend plots, not just waveforms! That produces another numeric
measurement parameter output. That in turn can be put into another
trend plot! Magnificent. Then I can do a mean on that, with an
indicator bar. So I get three displays of my relative standard
deviation measurement: a number, a trend plot, and a graphical bar
indicating the average of the trend, sort of like a meter movement.
Sweet!
There is more than one way to set this up as well. It has this amazing
"web math" editor that is like LabView. It lets you graphically pipe
waveforms, parameters, etc. to measurements, math operations, and trend
plots, then pipe those to more, then output the results to measurement
parameter displays or math trace displays. That way, you can compute
operations on a virtual trend plot, without having to use a visible
trace, which are limited to only 8, I believe.
In short, the LeCroy is the best data analysis-oriented scope in the
under $20000 class.

The only drawback to the LeCroy is waveform update rate. It simply
can't come close to the Tek 4000, which also pales compared to the
Agilent (note: Tek DPO7000 is in another class altogether, with up to
250000 wfms/s vs. 100000 for Agilent, but only with 25k samples!).
Fortunately for my 10Hz laser pulses, this limitation of the LeCroy
won't be an issue.

The LeCroy also has these cool "track" plots which are different from
trend plots. Tracks are plots of the history of a measurement, time
correlated with the waveform being measured. So this can effectively
demodulate a PWM waveform, for ex., and plot the pulse width as a
function of time, correlated with the actual PWM waveform. Likewise for
other modulation schemes.

There are of course many other features in all of these instruments that
I haven't yet touched upon.

For general electronic design & troubleshooting, I will be turning first
to my Agilents. For tricky measurements with complicated math and
multiple stages of computation as well as trend plots, the LeCroy is
simply magical.

I don't think I'll be buying any more Teks unless there is something
that just can't be done with any other instrument. Unlikely.

I will have a look at Yokogawa next time I'm in the market though.

LeCroy is developed and made in USA, BTW.

#### John Devereux

Mr.CRC said:
[...]
Well the Agilent and Tek can spit out numbers for standard deviation and
mean of an amplitude or area measure. Then I have to type them into my
calculator to get rel. std. dev. [...]
Thanks for all that, I am (still) looking for some similar features
(trend plotting and maths on measurements). I had not considered LeCroy.
Yokogawa do trend plotting too. Rohde & Schwarz have some scopes that
look really nice other than lack of trend plotting, but I have been told
recently that this is going to be in their new firmware. (All these are
~$20k class devices too).

Can the LeCroy do XY plots of measurements? That would open the door to
lots of things, you could reproduce onscreen many of the graphs you see
in component datasheets. For example gain vs temperature, PWM duty vs
circuit output voltage etc.

#### JW

[...]
Nice write-up.

LeCroy is developed and made in USA, BTW.

There's a switch. Used to be that they were made in Switzerland, but
since I don't have your budget, I've not used or worked on anything
newer than the LC series.

#### Mr.CRC

John said:
Can the LeCroy do XY plots of measurements? [...]

I am curious about the same question. I have to return the loaner on
Monday. I will investigate when the new one arrives later in Jan.

#### Mr.CRC

JW said:
[...]
There's a switch. Used to be that they were made in Switzerland, [...]

Speaking of budgets, I have been wondering lately what proportion of the
market for these instrument makers is government. I wonder if we would
see so many offerings if it wasn't for this demand.

Surely the pressure on private business must be intense to minimize
costs. I see John Larkin for instance looking into inexpensive Chinese
instruments rather than the premium brands. Though there is an
increasing tendency for Tek, Agilent, and LeCroy these days to offer a
large range of economy instruments.

I understand of course that the very high end instruments are targeting
the cutting edge of ludicrous-speed chip design, bus interfaces, and
communications links. But the quantities shipped of these instruments
must be in the handfuls.
I suppose the Chinese instruments will improve with time. But I am
hesitant at this stage, even for my personal hobby use, to step down
from the Agilent, Tek, and LeCroy offerings.

#### Mr.CRC

Paul Probert said:
There's been quite a discussion lately about all the analysis various
scopes can do. But whatever they build into a scope, it's going to be
nothing compared to what you can do by downloading the data into a
programming environment such as Matlab, IDL, Octave, R, whatever. Why
not let the scope be a scope, let a computer be a computer? So then the
question is: how fast can these scopes transfer files to a PC? How easy
is it for a user program on a PC to control the scope, get its status,
and transfer the files? Does the scope manufacturer supply you with
some PC software to make this kind of operation easier?

Thanks for the comment. I suppose for the same reason that there are
scopes with full front panel controls and displays as well as
digitizer-only boxes that can only be accessed by a computer, so it is
with advanced analysis features vs. just a simple "scope" interface.
That is, there is a market for real-time analysis, and there is a
market for post-processing analysis.

I agree that post-processing will always be more powerful. But some
users need real-time results. A similar trend is developing with data
acquisition hardware, where more manufacturers are providing devices
with built-in DSP processors.

For the same scope that my coworker and I are buying, he will collect
2000 or so 10ns laser pulses, and in post-processing analyze them to
normalize some other data. Whereas I will look at the same laser pulses
and use real-time processing to optimize and monitor an OPO.

I had considered taking one of the existing scopes and writing some
program to pull down data and post-process into the same result that I
want.
I suppose it could be done on the cheap with a GUI development language, avoiding the expensive LabVIEW or MATLAB. But in any event, if it took two weeks of my time to develop (and I don't specialize in high-level programming, but rather bare silicon, so this would probably take me several weeks), well, that's $6000 at a minimum.
It's funny how sometimes we'd rather do a job in house because the labor
cost is already committed. Yet other times we'd rather just buy an
off-the-shelf gadget because the uncertainty as well as lost opportunity
cost of having labor sunk in some tangential development project instead
of the immediate priority (in this case--make the OPO work!) is more
#### Nico Coesel
Mr.CRC said:
[...]
I understand of course that the very high end instruments are targeting
the cutting edge of ludicrous-speed chip design, bus interfaces, and
communications links. But the quantities shipped of these instruments
must be in the handfuls.
I agree. Although I feel there is a huge gap in the market. There are
tons of low end <\$1000 scopes. If you want a higher screen resolution,
#### Nico Coesel
Paul Probert said:
[...] So then the
question is: how fast can these scopes transfer files to a PC? How easy
is it for a user program on a PC to control the scope, get it's status,
and transfer the files? Does the scope manufacturer supply you with some
PC software to make this kind of operation easier?
I think this is a good point. I used this in the past to do some
prototyping where I used a measurement instrument as an analog
front-end to process some real input signals. Another nice to have
would be the ability to write plug-ins. For example: a few months ago
I wrote an SPI decoder plug-in for my TLA704 logic analyser.
#### [email protected]
John said:
On Thu, 16 Dec 2010 22:46:51 -0800, "Mr.CRC"
[this and that]
LeCroy is developed and made in USA, BTW.
All nice, except that LeCroy is evil.
John
??? Care to elaborate?
Don't listen to him.
LeCroy was his competition once and undercut his price.
That makes it 'evil' you see.
Now tell them the rest of the story.
#### [email protected]
On a sunny day (Fri, 17 Dec 2010 20:27:25 -0800) it happened "Mr.CRC"
[...]
Now tell them the rest of the story.
OK, he did not make a profit this year?
Once a liar, always a liar, Jan.
#### [email protected]
On a sunny day (Sat, 18 Dec 2010 09:35:50 -0600) it happened
<[email protected]>:
[...]
Once a liar, always a liar, Jan.
I did not know he was that too, because that was HIS statement.
You're illiterate, too. ...but I knew that.
#### [email protected]
On a sunny day (Fri, 17 Dec 2010 20:27:25 -0800) it happened "Mr.CRC"
[...]
Now tell them the rest of the story.
We were asked, by Los Alamos, to design and build a 1 ns resolution
TDC module, as an alternate to some truly terrible LeCroy units (their
4208, our M680). The LeCroys were expensive, slow delivery, up to 50%
DOA, and took six months to get fixed. The next bid, LeCroy somehow
found out there would be competition and cut their price in half to
kill us. Our friend at Los Alamos disqualified them on technical
grounds!
That TDC was our first high-speed product. I'd never done anything
like that before.
I wanted the liar to tell the rest of the story. I knew the Europeon moron
couldn't tell the truth.
#### [email protected]
On a sunny day (Sat, 18 Dec 2010 10:14:14 -0600) it happened
<[email protected]>:
[...]
You're illiterate, too. ...but I knew that.
I have read they found a cause though.
It won't help you now, but if you re-incarnate as a demonrat you may have a chance.
You even sound like Slowman. Kill yourself now.
#### [email protected]
On Sat, 18 Dec 2010 09:35:50 -0600, "[email protected]"
[...]
I wanted the liar to tell the rest of the story. I knew the Europeon moron
couldn't tell the truth.
What in the world are you talking about? The story is true. The CAMAC
modules exist. You can even buy them, ours and LeCroy's, on ebay now
and then.
I was replying to Jan Panteltje, Slowman's clone. He's the liar.
#### Nico Coesel
John Larkin said:
[...]
Surely the pressure on private business must be intense to minimize
costs. I see John Larkin for instance looking into inexpensive Chinese
instruments rather than the premium brands.
The premium brands are mostly Asian too. Most Tek stuff is built in
China. The low-end Agilent scopes are rebranded Rigols. Agilent is
increasingly moving manufacturing to Malaysia.
If I'm going to buy a Rigol scope, I may as well buy it from Rigol,
instead of paying Agilent 2.5X the price for the same box.
Some of the Keithley benchtop DVMs are Chinese rebrands. Crap. I sent
three of them back to Keithley.
[...]
My 50 MHz Rigol is great. It has a lot more features than my Tek
TDS2012, which is over three times the price. Nicer probes, too.
I wonder if there are any good Chinese spectrum analyzers.
Digital or analog? A couple of years ago my employer bought an Atten
AT6011 (1GHz + tracking generator) for simple RF stuff and EMC problem
finding. It takes some time to get used to an analog instrument but it
does work.
#### Nico Coesel
John Larkin said:
[...]
The next bid, LeCroy somehow found out there would be competition and
cut their price in half to kill us. Our friend at Los Alamos
disqualified them on technical grounds!
Isn't that just how the market works? If there are more competitors
prices will drop.
#### Mr.CRC
John said:
My 50 MHz Rigol is great. It has a lot more features than my Tek
TDS2012, which is over three times the price. Nicer probes, too.
John
How's the waveform update rate on those things?
I bought a Tek TDS3014 about 7 years ago for home hobby use. That cost
me 2.5 years of my allowance. I don't regret it. At that time, the
TDS3000 series was where the price/performance breakthrough was.
I'm considering getting an Agilent DSO7034 350MHz in a few years, then
upgrade some time to the digital channels option. People spend a heck
of a lot more on Harleys and other useless toys.
I'd like to build a Q-switched YAG laser or N2 laser and do a time of
flight speed of light experiment for my daughter when she gets to a
grade where that will make sense. Maybe 3rd grade or so ;-) Might even
be able to get LEDs to do the job. I could probably measure it with my
100MHz scope, but 350MHz is a lot better.
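A back-of-envelope check of the timing involved (hallway-scale distance assumed, not taken from the post):

```python
# Round-trip time of flight for light over a ~50 ft hallway (assumed distance)
C = 299_792_458.0            # speed of light in m/s
mirror_distance_m = 15.0     # roughly 50 feet to the mirror
round_trip_s = 2 * mirror_distance_m / C
round_trip_ns = round_trip_s * 1e9   # on the order of 100 ns
```

At roughly 100 ns of delay, even a 100MHz scope can resolve the round trip; the extra bandwidth mostly buys sharper pulse edges.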
#### Tim Williams
Mr.CRC said:
I'd like to build a Q-switched YAG laser or N2 laser and do a time of
flight speed of light experiment for my daughter [...]
We did that in Physics 2 or something, using a regular laser diode, a
rather slow photodiode, and the hallway. The mirror is moved to set
distance. Fifty feet or so makes it easy. The hardest part is aligning
the mirrors, and foot traffic making them bounce.
Tim
#### Mr.CRC
Tim said:
[...]
Yeah, we did the same thing, but with an N2 pumped dye laser and a
300-ish MHz LeCroy scope. That was my first encounter with LeCroy.
Trying it with a retro-reflector might be interesting.
https://docs.snowflake.com/en/sql-reference/functions/percentile_cont.html
Categories:
Aggregate Functions (General) , Window Functions
# PERCENTILE_CONT
Return a percentile value based on a continuous distribution of the input column (specified in order_by_expr). If no input row lies exactly at the desired percentile, the result is calculated using linear interpolation of the two nearest input values. NULL values are ignored in the calculation.
See also: PERCENTILE_DISC
## Syntax
Aggregate function
PERCENTILE_CONT( <percentile> ) WITHIN GROUP (ORDER BY <order_by_expr>)
Window function
PERCENTILE_CONT( <percentile> ) WITHIN GROUP (ORDER BY <order_by_expr>) OVER ( [ PARTITION BY <expr3> ] )
## Arguments
percentile
The percentile of the value that you want to find. The percentile must be a constant between 0.0 and 1.0. For example, if you want to find the value at the 90th percentile, specify 0.9.
order_by_expr
The expression (typically a column name) by which to order the values. For example, if you want to find the student whose math SAT score is at the 90th percentile, you’ll specify the column containing the math SAT score.
Note that this is also implicitly the column from which the returned value is chosen; e.g. if you order by math SAT scores, then the result you’ll get is one of the math SAT scores. You can’t order by one column and get a percentile value for a different column.
expr3
This is the optional expression used to group rows into partitions.
## Returns
Returns the value that is at the specified percentile. If no input row lies exactly at the desired percentile, the result is calculated using linear interpolation of the two nearest input values.
Note
If a group contains only one value, then that value will be returned for any specified percentile (e.g. both percentile 0.0 and percentile 1.0 will return that one row).
## Usage Notes
• The percentile argument to the function must be a constant.
• DISTINCT is not supported for this function.
• The function PERCENTILE_CONT interpolates between the two closest values, while the function PERCENTILE_DISC chooses the closest value rather than interpolating.
• When used as a window function, this function does not support:
  • the ORDER BY sub-clause in the OVER() clause
  • window frames
## Examples
The following example shows the values at the 25th percentile (0.25) within various groups:
Create and populate a table with values:
create or replace table aggr(k int, v decimal(10,2));
insert into aggr (k, v) values
(0, 0),
(0, 10),
(0, 20),
(0, 30),
(0, 40),
(1, 10),
(1, 20),
(2, 10),
(2, 20),
(2, 25),
(2, 30),
(3, 60),
(4, NULL);
Run a query and show the output (note that some values are exact and some are interpolated):
select k, percentile_cont(0.25) within group (order by v)
from aggr
group by k
order by k;
+---+-------------------------------------------------+
| K | PERCENTILE_CONT(0.25) WITHIN GROUP (ORDER BY V) |
|---+-------------------------------------------------|
| 0 | 10.00000 |
| 1 | 12.50000 |
| 2 | 17.50000 |
| 3 | 60.00000 |
| 4 | NULL |
+---+-------------------------------------------------+
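The linear-interpolation rule can be sketched in Python (an illustrative re-implementation for intuition, not Snowflake's actual code; the function name is ours):

```python
def percentile_cont(values, p):
    """Linear-interpolation percentile, mirroring SQL PERCENTILE_CONT.

    NULLs (None) are ignored; an empty group returns None.
    """
    vals = sorted(v for v in values if v is not None)
    if not vals:
        return None
    # Continuous row position on a 0 .. n-1 scale
    rn = p * (len(vals) - 1)
    lo = int(rn)
    hi = min(lo + 1, len(vals) - 1)
    # Interpolate between the two nearest input values
    return vals[lo] + (rn - lo) * (vals[hi] - vals[lo])
```

With the sample data, group 1 gives 10 + 0.25 × (20 − 10) = 12.5, matching the interpolated value in the output above, while group 0's 25th percentile lands exactly on an input row.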
https://tex.stackexchange.com/questions/261764/table-of-contents-different-margins-on-first-and-subsequent-pages
I am writing my doctoral thesis using the mcgilletdclass template that I downloaded on-line. For the most part, I have been able to make modifications to get a more desirable template; however, I am having problems getting a consistent one inch margin on the top page of the Table of Contents/List of Tables/List of Figures. Specifically, the margin is 2 inches at the top of the first page of the Table of Contents, for example, and then changes to 1 inch for the subsequent pages. In my opinion, this makes the section look inconsistent and sloppy.
These margin specifications might be included in the mcgilletdclass.cls file, which can be found at:
https://svn.kwarc.info/repos/arXMLiv/trunk/sty/mcgilletdclassmine.cls
I have tried some modifications to no avail, including attempting to use the tocloft package. However, to my limited TeX knowledge, that package seems to be incompatible with the class file I am using, as I get errors when I try to use it. In addition, at this point I do not want to change to a different .cls since my other modifications have worked wonderfully.
I am including an MWE outlining my problem, which requires the .cls file that I provided a link to earlier. I have provided annotations showing what I have tried, including commenting out a couple of options that I have already tried (see immediately before \tableofcontents %).
\documentclass[12pt,Bold,landscape]{mcgilletdclass}
%\usepackage[%
% backend=bibtex % use BibTeX
% backend=biber % Use biber
%]{biblatex}
\usepackage[backend=bibtex,url=false, isbn=false, doi=false, style=authoryear,citestyle=authoryear, sorting=nyt,dashed=FALSE, maxcitenames=2, maxbibnames=100]{biblatex}
\usepackage[left=1in,top=1in,right=1in, bottom=1in]{geometry}
\onehalfspacing
% The following code came with the mcgilletd thesis .tex file. I have made some adjustments, but I am not sure what the following commands really mean.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Have you configured your TeX system for proper %%
%% page alignment? See the McGillETD documentation %%
%% for two methods that can be used to control %%
%% page alignment. One method is demonstrated %%
%% below. See documentation and the ufalign.tex %%
%% file for instructions on how to adjust these %%
%% parameters. %%
\setlength{\textheight}{\topskip}%
\ifthenelse{\value{QZ@ptcnt}=11}{%
\ifthenelse{\value{QZ@ptcnt}=10}{%
%%
%\makeindex[keylist]
%\makeindex[abbr]
%% Input any special commands below
%\newcommand{\Kron}[1]{\ensuremath{\delta_{K}\left(#1\right)}}
\listfiles%
\begin{document}
%\newgeometry{left=1in, bottom=1in, top=0.0in, right=1in}
% The following three commands are ones that I have tried to change the top margin of the first page of the Table of Contents, List of Figures and List of Tables; however, none have worked to the desired effect.
%\setlength{\topmargin}{-1in}
%\newgeometry{left=1in, bottom=1in, top=1in, right=1in}
\tableofcontents %
\listoftables %
\listoffigures %
\doublespacing
\chapter{Problem}
An example from my thesis with a Table of Contents, List of Tables and List of Figures. When my actual thesis is compiled, the first page of the Table of Contents, List of Tables/Figures has a 2 inch top margin, whereas the subsequent pages have 1 inch margins. I would like to make all top margins an inch.
\section{Section 1}
\subsection{Subsection 1}
\subsection{Subsection 2}
\section{Section 2}
\subsection{Subsection 1}
\subsection{Subsection 2}
\section{Section 3}
\subsection{Subsection 1}
\subsection{Subsection 2}
\section{Section 4}
\subsection{Subsection 1}
\subsection{Subsection 2}
\chapter{Example chapter}
\section{Section 1}
\subsection{Subsection 1}
\subsection{Subsection 2}
\section{Section 2}
\subsection{Subsection 1}
\subsection{Subsection 2}
\section{Section 3}
\subsection{Subsection 1}
\subsection{Subsection 2}
\section{Section 4}
\subsection{Subsection 1}
\subsection{Subsection 2}
\section{Section 1}
\subsection{Subsection 1}
\subsection{Subsection 2}
\section{Section 2}
\subsection{Subsection 1}
\subsection{Subsection 2}
\section{Section 3}
\subsection{Subsection 1}
\subsection{Subsection 2}
\section{Section 4}
\subsection{Subsection 1}
\subsection{Subsection 2}
\end{document}
I would be happy to provide other information if need be.
• Load option showframe when loading package geometry and check the margins. By the way, why load geometry to get a user-friendly interface, and then overwrite it with low-level mumbo jumbo? The template is full of mumbo-jumbo. – Johannes_B Aug 17 '15 at 14:55
• I have loaded the option and see that there is an extra inch added to the top of the first page of the TOC/LOT/and LOF, but not the subsequent pages of these three sections. – nbotanist Aug 17 '15 at 15:01
• The same space is added for each chapter in your document. – Johannes_B Aug 17 '15 at 15:02
• Yes, it is. If you look at the following PDF document <sharelatex.com/templates/52fe016b34a287a85245b4ce/v/1/…> for the Mcgill thesis, the same inconsistency also exists. – nbotanist Aug 17 '15 at 15:05
• \renewcommand{\BigMargin}{} I really would not use this template. – Johannes_B Aug 17 '15 at 15:17
The class has some code to ensure a two inch space, which is added before the chapter title is typeset.
%% Creating a 2 inch margin
\newlength{\BigLength}%
\setlength{\BigLength}{0pt}%
\newcommand{\BigMargin}{\hspace*{1in}\normalfont\normalsize%
\settoheight{\QZ@TempLength}{()}%
\vspace*{-\baselineskip}\vspace*{-\topskip}\vspace*{1in}%
\vspace*{\QZ@TempLength}\vspace*{\BigLength} \\}%
If you redefine command \BigMargin to do nothing, your problem seems to be gone.
\renewcommand{\BigMargin}{}
|
https://tex.stackexchange.com/questions/545734/defining-space-in-toc
|
# Defining space in toc
I'm kind of new to this world of LaTeX and I'm trying to change the way the TOC looks (in the book class), but I'm facing a problem: when I remove the bold face and add a dotted line for chapter entries, the standard vertical space above them disappears.
I used this code to remove bold and add dot lines:
\makeatletter
\renewcommand*\l@chapter{\@dottedtocline{1}{1em}{1em}}
\makeatother
I don't want to use tocloft or other packages because I edit the space in chapter titles and other things that those packages override, so I'd like to avoid them and just add a little space above every chapter entry in the TOC.
I tried using %\renewcommand*\l@chapter{\vspace*{14pt}} along with my code but the TOC goes crazy. Clearly I'm doing something wrong.
What can I do? Thank you.
## 1 Answer
Try this:
% tocspace.tex SE 545734
\documentclass{book}
\makeatletter
%\renewcommand*{\l@chapter}{\@dottedtocline{1}{1em}{1em}}
\renewcommand*{\l@chapter}{\addvspace{10pt}\@dottedtocline{1}{1em}{1em}}
\makeatother
\begin{document}
\tableofcontents
\chapter{Introduction}
\section{Notions}
\chapter{Another}
\section{More}
\end{document}
Sorry to hear that you don't like tocloft.
• So wonderful and so simple! That's what I like about LaTeX. This solved my problem, and just so you know, I don't dislike tocloft; it's just that I didn't write this book in the first place, and I'm learning and editing by a kind of "reverse engineering", for better or for worse. Thank you. May 22 '20 at 19:49
|
http://physics.stackexchange.com/tags/fourier-transform/hot
|
# Tag Info
## Hot answers tagged fourier-transform
25
Your ear is an effective Fourier transformer. An ear contains many small hair cells. The hair cells differ in length, tension, and thickness, and therefore respond to different frequencies. Different hair cells are mechanically linked to ion channels in different neurons, so different neurons in the brain get activated depending on the Fourier transform ...
19
I see that two examples in optics have been mentioned, a diffraction grating by Mark Eichenlaub, and a lens by sigoldberg1. I would like to elaborate a bit, because there is a subtle difference between the two. On the one hand, a diffraction grating separates out light of different frequencies, i.e. colors, transforming them into different positions. This ...
13
Before answering the question more or less directly, I'd like to point out that this is a good question that provides an object lesson and opens a foray into the topics of singular integral equations, analytic continuation and dispersion relations. Here are some references of these more advanced topics: Muskhelishvili, Singular Integral Equations; Courant ...
12
The reason of a more modest version of your statement (your big claim is not right) is that the sum $$\sum_{n=-\infty}^{\infty} |a_n|^2$$ has to converge. That's because this sum is proportional to $$\int_0^{2\pi} |f(x)|^2 dx$$ which converges for bounded functions (a basic insight about Fourier expansions and Hilbert spaces of periodic functions). ...
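The convergence statement above has a finite, testable analogue: for any sampled bounded function, the discrete Parseval identity says the sum of the squared Fourier coefficients equals the mean of $|f(x)|^2$. A minimal numerical check — the sample function below is an arbitrary choice:

```python
import math
import cmath

N = 64
# Arbitrary bounded periodic function, sampled over one period.
samples = [2.0 * math.cos(2 * math.pi * k / N) + 0.5 for k in range(N)]

def fourier_coeff(x, n):
    """n-th DFT coefficient, normalised so Parseval reads sum|a_n|^2 = mean|f|^2."""
    N = len(x)
    return sum(x[k] * cmath.exp(-2j * math.pi * n * k / N) for k in range(N)) / N

mean_sq = sum(abs(v) ** 2 for v in samples) / N
coeff_sq = sum(abs(fourier_coeff(samples, n)) ** 2 for n in range(N))
# mean_sq and coeff_sq agree to floating-point precision.
```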
11
This doesn't make much sense: light year is in any case a unit of distance. What is common is to use "reduced units", for examples units where $c=1$ (speed of light) or $h=2\pi$. But in these cases the opposite would happen: you would say "year" to mean a distance. Or for example you say "has a mass of xyz MeV" instead of "$MeV/c^2$". About the Fourier ...
11
Sine and cosine waves are, physically, the most common. They are definitely the best description to what comes out of a wall socket, not because we like them mathematically, but because it's what comes out; electromotive force is generated in the power plant as a sinusoidal pattern with frequency 50/60 Hz. In the usual kind of generator, this is because in ...
10
Dear user1602, yes, $\psi(x)$ and $\tilde\psi(p)$ are Fourier transforms of one another. This answers the only real question you have asked. So if one knows the exact wave function as a function of position, one also knows the wave function as a function of momentum, and vice versa. In particular, there is no "wave function" that would depend both on $x$ ...
10
Yes, it happens in reality too, nicely demonstrating that the Fourier analysis predictions are confirmed. An easy way to see this is to take an electrical sine wave signal, which is nice and monochromatic, and pulse it on and off. If you examine the spectrum of the pulsed wave on a spectrum analyser, you will see the spread of frequencies about the centre ...
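The pulsed-sine experiment described above is easy to reproduce numerically: a sine that fills the whole record occupies a single frequency bin, while the same sine gated on and off spreads into sidebands around the carrier. The record length and gating pattern below are arbitrary choices.

```python
import math
import cmath

N = 128
carrier = 16  # cycles per record, so the tone sits exactly on one DFT bin
continuous = [math.sin(2 * math.pi * carrier * k / N) for k in range(N)]
# Gate the tone on and off in blocks of 32 samples.
gated = [v if (k // 32) % 2 == 0 else 0.0 for k, v in enumerate(continuous)]

def half_spectrum(x):
    """Magnitudes of the first N/2 DFT bins (direct O(N^2) DFT)."""
    N = len(x)
    return [abs(sum(x[k] * cmath.exp(-2j * math.pi * n * k / N) for k in range(N)))
            for n in range(N // 2)]

def occupied_bins(mags, frac=0.05):
    """Bins holding more than `frac` of the peak magnitude."""
    peak = max(mags)
    return [n for n, m in enumerate(mags) if m > frac * peak]
```

The continuous tone occupies only bin 16; the gated tone keeps energy at bin 16 but also spills into several sidebands.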
9
Remember the double slit experiment? The interference pattern is the Fourier transform of the hole(s). This boggled my mind when I first learned it. In the limit where the screen is far from the mask, the rays of light actually physically compute the Fourier transform (see Fraunhofer diffraction).
8
Your running in circles will stop once you commit yourself to a choice. What to regard as postulate is always a matter of choice (by you or by whoever writes an exposition of the basics). One starts from a point where the development is in some sense simplest. And one may motivate the postulates by analogies or whatever. The CCR are a simple ...
8
The sound that reaches your ear is just air pressure fluctuating over time. You can use a transducer of some sort to convert the value of air pressure to some other form - for example: to the depth of a groove being cut into a helical track on a layer of wax on a rotating drum to the depth of a groove being cut into a spiral track on a circular disc of ...
8
This has been extensively studied in linguistics and acoustics. Humans and other primates predict speaker gender through a combination of fundamental frequency $F_0$ ("pitch") and Vocal-Tract-Length estimates ($VTL$) which are a proxy for body size. Sometimes "formant dispersion" is used for $VTL$. It is usually defined as ...
8
Those things are surely not enough to find the inner product $\langle q|p\rangle$ uniquely. For example, starting with the conventional $Q,P$, you may redefine them by a canonical transformation, for example by $$Q\to Q'=Q, \quad P\to P'= P + Q^3$$ Then $P', Q'$ obey all the four conditions in the same way as $P,Q$. They also have eigenstates and ...
8
No. Consider any state with a momentum wavefunction symmetric about zero. Its position-space and momentum-space norm-squared probability distributions are not changed by time-reversal, even though the wavefunction clearly is. Here is an explicit example. Take the four Gaussian wavepackets of mean positions $x_0$ or $-x_0$, mean momenta $p_0$ or $-p_0$, ...
8
It's sloppy language that is confusing you here. A Jacobian is not a transformation. The Jacobian of a transformation measures by how much the transformation expands or shrinks volume(/area/length/hypervolume/whatever) elements. Example: let $x' = 2x$. Then $dx = dx'/2$. The Jacobian is the $1/2$, meaning nothing more than "a unit of the $x'$ scale has a ...
7
The functions $e^{i \bf p \cdot \bf x}$ as functions of $\bf x$ are linearly independent for different $\bf p$'s, hence every coefficient in the linear superposition (that is, in the integral) must be zero.
7
The reason you can get rid of the integral and the exponential is due to the uniqueness of the Fourier transform. Explicitly we have, \begin{align} \int \frac{ \,d^3p }{ (2\pi)^3 } e ^{ i {\mathbf{p}} \cdot {\mathbf{x}} } \left( \partial _t ^2 + {\mathbf{p}} ^2 + m ^2 \right) \phi ( {\mathbf{p}} , t ) & = 0 \\ \int d ^3 x \frac{ \,d^3p }{ (2\pi)^3 ...
7
The route to the uncertainty principle went something like this: In Heisenberg's brilliant 1925 paper [1], he addresses the problem of line spectra caused by atomic transitions. Starting with the known $$\omega(n, n-\alpha) = \frac{1}{\hbar}\{W(n)-W(n-\alpha) \}$$ where $\omega$ are the angular frequencies, $W$ are the energies and $n, \alpha$ are integer ...
7
Your visual range includes roughly one octave as compared to roughly twelve in your aural range. Further your visual system uses only four types of light sensors each with limited frequency discrimination, while your hearing has fine frequency discrimination. So while light spectra could have harmonic structure your visual apparatus is ill equipped to ...
7
The wavefunction vector $|\Psi (t) \rangle$ is supposed to be a function of time only. When you write $| \Psi (t) \rangle$ you are not considering the projection of the wavefunction nor on the position neither on the momentum space, but just the state of the system at time $t$, which is nothing but a postulate of Quantum Mechanics. You will have the ...
6
You can either accept it as a postulate (in which case it is often more convenient to postulate the CCR and CAR for creation and annihilation operators) or you can derive the relation in the position basis with $$\hat x = x \wedge \hat p = -i \hbar \nabla \Rightarrow [ \hat x , \hat p ] = - i \hbar x \nabla + i \hbar + i \hbar x \nabla$$ as you have to ...
6
If you integrate $$\int f(x)\textrm{d}x = F$$ then $F$ has the units of $f$ times the units of $x$. Similarly if you differentiate, $$\frac{\textrm{d}f}{\textrm{d}x}$$ has units of $f$ divided by units of $x$. If you look at the simple example of integrating and differentiating with respect to time to go between position, velocity, and acceleration, you'll ...
6
Let's look at frequency instead of notes. Let's say the string has a natural frequency of $100 Hz$ and that harmonics are present when you pluck it. Then, the frequency content of the sound will be of the form: $a_1 \cdot 100 Hz + a_2 \cdot 200 Hz + a_3 \cdot 300 Hz + ...$ Now, let's say you fret this string halfway such that the natural frequency ...
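The argument above can be sketched numerically: build a tone whose partials are at 100, 200, 300 Hz (the open string) and one whose partials start at 200 Hz (the string fretted at the midpoint), then read the peaks off a DFT. Sample rate, record length, and partial amplitudes are arbitrary choices.

```python
import math
import cmath

fs = 1000       # samples per second
N = 100         # 0.1 s record -> 10 Hz bin spacing; all partials sit on exact bins

def tone(partials_hz):
    """Sum of unit-amplitude sines at the given frequencies."""
    return [sum(math.sin(2 * math.pi * f * k / fs) for f in partials_hz)
            for k in range(N)]

def peak_freqs(x, frac=0.5):
    """Frequencies (Hz) of DFT bins above `frac` of the peak magnitude."""
    N = len(x)
    mags = [abs(sum(x[k] * cmath.exp(-2j * math.pi * n * k / N) for k in range(N)))
            for n in range(N // 2)]
    peak = max(mags)
    return [n * fs / N for n, m in enumerate(mags) if m > frac * peak]

open_string = tone([100, 200, 300])
fretted = tone([200, 400])   # first partials of the half-length string
```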
6
WARNING: The function is not absolutely integrable for $n>1$, so the integral strongly depends on how you decide to compute it if you break the integration into iterated integrals. Use instead cylindric coordinates. $k = (z, \vec{r})$, where $\vec{r} \in \mathbb R^{n-1}$ and $z\in \mathbb R$. You have this way, assuming that $x$ is directed along $z$: ...
5
The DFT is used when all you have available are samples of the function, rather than the function itself. If you are doing an FT on experimental data, it's always (as far as I know) recorded in discrete numbers: an array of floating point numbers, for example. There are a few times when the DFT has some applicability to real systems, for example simple ...
5
Consider evolution of gaussian wave packet. Its wave function in position representation looks like: $$\Psi(\vec r,t)=\left(\frac a{a+i\hbar t/m}\right)^{3/2}\exp\left(-\frac{\vec r\cdot \vec r}{2(a+i\hbar t/m)}\right).\tag1$$ Corresponding relative probability density is P(r)=|\Psi|^2=\left(\frac a{\sqrt{a^2+(\hbar ...
5
Frequency is just a way of analyzing a time dependent motion. Consider plucking a string by first pulling one point on the string away from its equilibrium. The string shape will be like a triangle, two straight bits of string coming away from where your finger is holding the string, but meeting at a slight angle where your finger holds the string. That ...
5
The origin of your problem was already explained in the previous answers, let me just do so in a bit more detail. It is better to think of some normalizable wave function rather than the $\delta$-function itself. As you probably know, you can get arbitrarily close to a $\delta$-function by making a wave packet narrow and taking a suitable limit (see below ...
5
Given that leftaroundabout and vonjd have addressed the fundamental place of the Fourier transform in the formalism, let me talk a little about an experimental application. What is the shape and size of a atomic nucleus? From Rutherford we learned that the nucleus is rather a lot smaller than the atom as a whole. Now, electron microscopy can just about ...
|
http://umj.imath.kiev.ua/authors/name/?lang=en&author_id=1914
|
2019, Volume 71, № 11
# Butyrin A. A.
Articles: 1
Article (English)
### On the behavior of solutions of operator-differential equations at infinity
Ukr. Mat. Zh. - 1994. - 46, № 7. - pp. 809–813
The existence of limits at infinity, generalized in the Abel sense, is established for bounded solutions of the operator-differential equation $y'(t) = Ay(t)$ in a reflexive Banach space.
|
http://zh.wikipedia.org/wiki/%E5%87%BD%E6%95%B8%E6%A5%B5%E9%99%90
|
# Limit of a function

| x    | $\frac{\sin x}{x}$ |
|------+--------------------|
| 1    | 0.841471...        |
| 0.1  | 0.998334...        |
| 0.01 | 0.999983...        |

## References

1. ^ Original text: On the other hand, if some inputs very close to p are taken to outputs that stay a fixed distance apart, we say the limit does not exist.
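The table above illustrates the classical limit of sin(x)/x as x approaches 0; a quick numerical check reproduces the tabulated values:

```python
import math

# Ratios sin(x)/x for the x values tabulated above; they approach 1 as x -> 0.
ratios = {x: math.sin(x) / x for x in (1.0, 0.1, 0.01)}
```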
|
http://mathematica.stackexchange.com/questions/16944/under-what-conditions-does-financialdata-return-a-notent-message
|
# Under what conditions does FinancialData return a ::notent message?
I tried retrieving FinancialData for a few companies but when I tried doing it for NTDOY, I couldn't. As of 25/12/2012, querying
FinancialData["PK:NTDOY"]
yields
FinancialData::notent: PK:NTDOY is not a known entity, class, or tag for FinancialData. Use FinancialData[] for a list of entities.
However, this is weird for two reasons. Firstly, PK:NTDOY is included in the list of financial instruments Mathematica supposedly has data for:
FinancialData["*NTD*"]
yields
{"JK:INTD", "NASDAQ:GKNTD", "NASDAQ:UNTD", "PK:ANTD", "PK:NTDMF", "PK:NTDOF", "PK:NTDOY"}
Secondly, if I try using free-form input, e.g.
= NTDOY price from 25 December 2011 to 25 December 2012
I get the same (notent) output; on the other hand, if I click on the orange "Show all results" plus symbol, I get a WA-like tab filled with meaningful data, even though sometimes Mathematica crashes while computing.
So my question is: what does FinancialData truly mean when it states that something is not "known"? Is this a software bug or is it simply that more data is available to WolframAlpha than is to Mathematica?
(I'm using Mathematica 8.0.1.0, by the way.)
I think it's currently listed "NTDOY" without the PK. On Google, it's PINK:NTDOY, but on Yahoo it says it's been renamed. Try FinancialData["NTDOY"], which works for me. I don't know why it still lists "PK:NTODY" as a valid entity. – Michael E2 Dec 25 '12 at 22:46
The list of instruments given by FinancialData["*"] isn't updated very frequently. In fact, I believe it would only be updated when the paclets are updated. If the availability of a ticker has recently changed, then it may be on the list and no longer work. The best way to see what kind of information is available is to really just look at Yahoo Finance. – Searke Feb 13 '13 at 16:50
@Searke, that looks like an answer to me... – J. M. Apr 28 '13 at 16:20
NTDOY is the Nasdaq ticker for the Nintendo ADR. Since it is traded on an American stock exchange, you don't need to specify the exchange beforehand (i.e., you don't need to use PK: before the NTDOY ticker).
So, the only thing you have to do is to use
FinancialData["NTDOY"]
and Mathematica will retrieve the current price:
11.79
EDITED
You have to be careful when using non-American stock exchanges... For instance, try to find the Petrobras stock on Mathematica (ticker = PETR4):
FinancialData["*PETR4*", "Lookup"]
{SA:PETR4}
But, in fact you can use two different tickers for the same stock! See:
FinancialData["SA:PETR4"]
19.39
or you can also try
FinancialData["PETR4.SA"]
19.39
So the results are the same! I mean, the SA: prefix is the same as the .SA suffix for the Sao Paulo Stock Exchange (Brazil), although it's not documented by Mathematica.
I think this is also true for other stock exchanges.
|
https://kseebsolutions.net/2nd-puc-maths-question-bank-chapter-5-ex-5-5/
|
Students can Download Maths Chapter 5 Continuity and Differentiability Ex 5.5 Questions and Answers, Notes Pdf, 2nd PUC Maths Question Bank with Answers helps you to revise the complete Karnataka State Board Syllabus and score more marks in your examinations.
## Karnataka 2nd PUC Maths Question Bank Chapter 5 Continuity and Differentiability Ex 5.5
### 2nd PUC Maths Continuity and Differentiability NCERT Text Book Questions and Answers Ex 5.5
Differentiate the functions given in Exercises 1 to 11 w.r.t. x.
Question 1.
$$\cos x \cdot \cos 2x \cdot \cos 3x$$
Let y = cos x · cos 2x · cos 3x
log y = log cos x + log cos 2x + log cos 3x
Question 2.
$$\sqrt{\frac{(x-1)(x-2)}{(x-3)(x-4)(x-5)}}$$
Question 3.
$$(\log x)^{\cos x}$$
Question 4.
$$x^{x}-2^{\sin x}$$
Question 5.
$$(x+3)^{2} \cdot(x+4)^{3} \cdot(x+5)^{4}$$
Let y = $$(x+3)^{2}(x+4)^{3}(x+5)^{4}$$
log y = 2 log (x + 3) + 3 log (x + 4) + 4 log (x + 5)
Question 6.
$$\left(x+\frac{1}{x}\right)^{x}+x^{\left(x+\frac{1}{x}\right)}$$
Question 7.
$$(\log x)^{x}+x^{\log x}$$
Question 8.
$$(\sin x)^{x}+\sin ^{-1} \sqrt{x}$$
Question 9.
$$\mathbf{x}^{\sin \mathbf{x}}+(\sin \mathbf{x})^{\cos \mathbf{x}}$$
Question 10.
$$x^{ xcos\quad x }+\frac { x^{ 2 }+1 }{ x^{ 2 }-1 }$$
Question 11.
$$(x \cos x)^{x}+(x \sin x)^{\frac{1}{x}}$$
Let y = $$(x \cos x)^{x}+(x \sin x)^{\frac{1}{x}}$$
u = $$(x \cos x)^{x}$$
log u = x log (x cos x)
Find $$\frac{\mathrm{d} y}{\mathrm{d} x}$$ of the functions given in Exercises 12 to 15.
Question 12.
$$x^{y}+y^{x}=1$$
Let u + v = 1 where $$u=x^{y}$$ and $$v=y^{x}$$
Question 13.
$$y^{x}=x^{y}$$
Take log on both sides
x log y = y log x
Question 14.
$$\mathbf{x y}=\mathbf{e}^{(\mathbf{x}-\mathbf{y})}$$
log xy = (x - y) log e
log(xy) = (x - y)
Differentiating both sides:
Question 15.
$$(\cos x)^{y}=(\cos y)^{x}$$
Question 16.
Find the derivative of the function given by
$$f(x)=(1+x)(1+x^{2})(1+x^{4})(1+x^{8})$$ and hence find f'(1).
$$f(x)=(1+x)(1+x^{2})(1+x^{4})(1+x^{8})$$
$$\log f(x)=\log(1+x)+\log(1+x^{2})+\log(1+x^{4})+\log(1+x^{8})$$
Question 17.
Differentiate $$(x^{2}-5x+8)(x^{3}+7x+9)$$ in three ways mentioned below:
(i) by using product rule
(ii) by expanding the product to obtain a single polynomial
(iii) by logarithmic differentiation.
Do they all give the same answer ?
$$\frac{d}{d x}(u, v, w)=\frac{d u}{d x} v \cdot w+u \cdot \frac{d v}{d x} w+u . v \frac{d w}{d x}$$
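The claim of Question 17 — that all three methods give the same answer — can be checked numerically. A small sketch with the derivative computed each way (the expanded polynomial was multiplied out by hand):

```python
# f(x) = (x^2 - 5x + 8)(x^3 + 7x + 9) = u(x) * v(x)
def u(x):  return x**2 - 5*x + 8
def up(x): return 2*x - 5            # u'(x)
def v(x):  return x**3 + 7*x + 9
def vp(x): return 3*x**2 + 7         # v'(x)

def d_product(x):
    # (i) product rule: u'v + uv'
    return up(x) * v(x) + u(x) * vp(x)

def d_expanded(x):
    # (ii) expand first: f = x^5 - 5x^4 + 15x^3 - 26x^2 + 11x + 72
    return 5*x**4 - 20*x**3 + 45*x**2 - 52*x + 11

def d_logdiff(x):
    # (iii) logarithmic differentiation: f'/f = u'/u + v'/v, so f' = uv(u'/u + v'/v)
    return u(x) * v(x) * (up(x) / u(x) + vp(x) / v(x))
```

All three agree at every test point (u has no real roots, so the logarithmic form is safe for real x).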
|
https://www.physicsforums.com/members/illuminates.611561/recent-content
|
# Recent content by illuminates
1. ### I Pseudotensors in different dimensions
When I created the threads I thought that they would be different topics. I would like to merge the threads but I cannot do it. I left the message in that thread because I thought that Paul Colby would be interested in this thread too, but apparently I understood you wrongly... I'm sorry! I thought that you said...
3. ### I Pseudotensors in different dimensions
Could you explain why Paul Colby, in the thread https://www.physicsforums.com/threads/vectors-in-minkowski-space-and-parity.937349/, told me that
4. ### I Pseudotensors in different dimensions
Could you explain why such things happen? I thought that ##V^μ \rightarrow V^μ## if ##V^μ## is a 4-vector and ##W^μ \rightarrow -W^μ## if ##W^μ## is a pseudo-4-vector. I understood your statement that pseudo-tensors have to change their sign in odd dimensions, but you said nothing about how it is...
5. ### I Pseudotensors in different dimensions
Are you stating that the Levi-Civita tensor isn't a pseudo-tensor? Could you recommend literature with a discussion of the difference between even-dimensional and odd-dimensional pseudo-tensors and how it links with parity? Because I didn't get that.
6. ### I Pseudotensors in different dimensions
For example, the magnetic field vector is a pseudo-vector, and it is defined in three dimensions, isn't it? Sorry, I don't quite understand your first message. For example, in this book http://farside.ph.utexas.edu/teaching/em/lectures/node120.html , the author works in Minkowski space and uses parity...
7. ### I Pseudotensors in different dimensions
Could you explain why?
8. ### I Parity of theta term of Lagrangian
Thank you for replying. Would you say my reasoning above is true?
9. ### I Vectors in Minkowski space and parity
It is known that vectors change their sign under parity, when ##(x,y,z)## changes into ##(-x,-y,-z)##: $$P: y_{i} \rightarrow -y_{i}$$ where ##i=1,2,3##. But what about vectors in Minkowski space? Is it true that $$P: y_{\mu} \rightarrow -y_{\mu}$$ where ##\mu=0,1,2,3##? If yes, how...
10. ### I Pseudotensors in different dimensions
In this topic https://physics.stackexchange.com/questions/129417/what-is-pseudo-tensor one answer was the next: The action of parity on a tensor or pseudotensor depends on the number of indices it has (i.e. its tensor rank): - Tensors of odd rank (e.g. vectors) reverse sign under parity. -...
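The rank rule quoted above can be verified numerically. A sketch (not part of the original thread): transform the three-dimensional Levi-Civita symbol under the parity map P = -1. As an ordinary rank-3 tensor it flips sign (odd rank); with the extra det(P) factor of a pseudotensor it stays invariant.

```python
import numpy as np

# 3-D Levi-Civita symbol, built explicitly from its six nonzero entries.
eps = np.zeros((3, 3, 3))
for (i, j, k), s in [((0, 1, 2), 1), ((1, 2, 0), 1), ((2, 0, 1), 1),
                     ((0, 2, 1), -1), ((2, 1, 0), -1), ((1, 0, 2), -1)]:
    eps[i, j, k] = s

P = -np.eye(3)  # parity in 3 dimensions; det(P) = -1

# Transformed as an ordinary rank-3 tensor: picks up (-1)^3 = -1.
as_tensor = np.einsum('ia,jb,kc,abc->ijk', P, P, P, eps)

# Transformed as a pseudotensor: the extra det(P) factor restores the sign.
as_pseudo = np.linalg.det(P) * as_tensor

print(np.allclose(as_tensor, -eps), np.allclose(as_pseudo, eps))
```

In 2 dimensions the same map -1 has determinant +1 (it is a 180° rotation), so a "parity" there must instead flip a single axis, e.g. diag(1, -1); that is the even/odd subtlety discussed in this thread.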
11. ### I Parity of theta term of Lagrangian
It seems this topic needs to be moved to "High Energy, Nuclear, Particle Physics".
12. ### I Parity of theta term of Lagrangian
I have a very simple question. Let's consider the theta term of the Lagrangian: $$L = \theta \frac{g^2}{32 \pi^2} G_{\mu \nu}^a \tilde{G}^{a, \mu \nu}$$ Consider the parity of this term: $$P(G_{\mu \nu}^a)=+G_{\mu \nu}^a$$ $$P( \tilde{G}^{a, \mu \nu} ) = -\tilde{G}^{a, \mu \nu}$$ It is obvious. But what about...
13. ### Operation with tensor quantities in quantum field theory
I would like to know where one can work with tensor quantities in quantum field theory: Minkowski tensors, spinors, effective Lagrangians (for example sigma models or models with a four-quark interaction), gamma matrices, Grassmann algebras, Lie algebras, fermion determinants, et cetera. I...
14. ### Quantum QFT: groups, effective action, fiber bundles, anomalies, EFT
Yes. Thank you for the advice. Is there anything other than Weinberg? And I would like to know: are there any QFT books written from a mathematical point of view?
15. ### Quantum QFT: groups, effective action, fiber bundles, anomalies, EFT
Hi, I am looking for textbooks on QFT. I studied QFT using Peskin and Schroeder plus a two-year master's-degree QFT programme. I want to learn about the following topics: 1) the Lorentz group and Lie groups (precise definitions, group representations, and the connection between fields and spins from the standpoint of...
|
2022-01-25 14:12:38
|
https://www.physicsforums.com/threads/electric-charge-problem-book-error.172783/
|
Electric Charge Problem - Book error?
1. Jun 4, 2007
PFStudent
Hit enter a bit too fast for the title.
TITLE: Electric Charge Problem - Book Error?
1. The problem statement, all variables and given/known data
6. In Fig. 21-22, four particles form a square. The charges are $q_{1} = q_{4} = Q$ and $q_{2} = q_{3} = q$.
(a) What is $Q/q$ if the net electrostatic force on particles 1 and 3 is zero?
(b) Is there any value of $q$ that makes the net electrostatic force on each of the four particles zero? Explain.
2. Relevant equations
$$\displaystyle{\left|\vec{F}_{12}\right| = \frac{k \left|q_{1}\right|\left|q_{2}\right|}{r_{12}^2}}$$
3. The attempt at a solution
I've actually worked through this already; however, I am in conflict with how the problem is stated.
Essentially all one has to do for part (a) is find Q/q, and the problem already states that the net forces on particles 1 and 3 are zero. Therefore, from the picture several conclusions can be made. In order for the net forces on particles 1 and 3 to be zero, charges Q and q must have opposite signs.
Next, because the problem states that the net force on particles 1 and 3 is zero, either particle can be used to find Q/q.
Continuing on, and choosing particle three for example, we need to break the forces acting on three due to the other three particles into components (Fx and Fy).
Since the net force on three is zero, Fx and Fy must each be zero as well.
Now, setting either the net Fx sum or the net Fy sum to zero gives an equation to work with.
Choosing Fy, and using Newton's third law, we arrive at
$$\left|\vec{F}_{31}\right| = \frac{\left|\vec{F}_{32}\right|}{\sqrt{2}}$$
Now from here it's simple algebra:
$$\frac{\left|Q\right|}{\left|q\right|} = \frac{1}{2\sqrt{2}}$$
and noting that Q and q must have opposite signs, the above reduces to
$$\frac{Q}{q} = \frac{-1}{2\sqrt{2}}$$
However, the correct answer (from the solutions manual (SM)) is
$$\frac{Q}{q} = -2\sqrt{2}$$
The SM arrives at this solution by resolving the Fx components of the net force on particle 1.
So here is the problem: I can get the same answer ($\frac{Q}{q} = -2\sqrt{2}$) if I resolve either component (Fx or Fy) for particle 1.
However, I do not get the same answer when I resolve the components (either Fx or Fy) for particle 3.
I consistently get $\frac{Q}{q} = \frac{-1}{2\sqrt{2}}$ for particle 3.
Also, I will try resolving the net Fx component on particle 3 and see what I get.
So, technically, shouldn't I be able to get the same answer (Q/q) for this problem in all four ways; that is, by resolving the two components (Fx and Fy) for each particle (1 and 3)?
Any help would be appreciated.
Last edited: Jun 5, 2007
2. Jun 4, 2007
Staff: Mentor
Welcome to the PF. You need to show some of your own work in order for us to help you. The relevant equation that you list is a good start. Now draw the square and start drawing the forces. Are you sure you've listed all the particles? Seems like it would be hard to get the forces to cancel with just 4 particles at the vertices of a square....
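As a side check, the two candidate ratios can be tested numerically. This sketch assumes a unit square with particles 1 and 4 on one diagonal (charge Q) and particles 2 and 3 on the other (charge q); with Q/q = -2√2 the net force on particle 1 vanishes while the net force on particle 3 does not, which is consistent with the conflict described above.

```python
import numpy as np

def net_force(charges, positions, i):
    """Coulomb net force on particle i from all others (k = 1, arbitrary units)."""
    F = np.zeros(2)
    for j in range(len(charges)):
        if j == i:
            continue
        r = positions[i] - positions[j]
        F += charges[i] * charges[j] * r / np.linalg.norm(r) ** 3
    return F

# Assumed (hypothetical) layout matching q1 = q4 = Q on one diagonal:
# 1 at (0,1), 2 at (1,1), 3 at (0,0), 4 at (1,0), unit side length.
pos = [np.array(p, dtype=float) for p in [(0, 1), (1, 1), (0, 0), (1, 0)]]
q = 1.0
Q = -2 * np.sqrt(2) * q          # the solution manual's ratio Q/q = -2*sqrt(2)
charges = [Q, q, q, Q]

f1 = np.linalg.norm(net_force(charges, pos, 0))  # particle 1: balanced (~0)
f3 = np.linalg.norm(net_force(charges, pos, 2))  # particle 3: not balanced
print(f1, f3)
```

Swapping in Q/q = -1/(2√2) reverses the situation: particle 3 balances and particle 1 does not. So no single ratio zeroes both, which is the point of part (b).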
|
2017-01-21 06:29:25
|
http://cyndescosmetics.com/snp-database-ciiwio/1aeafc-hbr-molecular-geometry
|
A molecule that possesses a measurable molecular dipole is called a polar molecule. For a single bond, the bond dipole moment is determined by the difference in electronegativity between the two atoms. Molecules whose bond dipoles cancel each other out have no resulting molecular dipole and are called non-polar molecules; if the bond dipoles do not cancel, the molecule has a net dipole moment. A water molecule, for example, is made up of two hydrogen atoms and one oxygen atom.
The approximate shape of a molecule can be predicted using the Valence Shell Electron-Pair Repulsion (VSEPR) model, which is based on the repulsive behavior of electron pairs; typical bond angles are 109.5°, 120°, and 180°. The model is fairly powerful in its predictive capacity. A molecule's shape strongly affects its physical properties and the way it interacts with other molecules, and it plays an important role in the way biological molecules (proteins, enzymes, DNA, etc.) interact with each other. Drawing a Lewis structure is the first step toward predicting the three-dimensional shape of a molecule, and molecular structure is equivalent to electron-pair geometry only when there are no lone electron pairs around the central atom.
Compound and molecular geometry:
a. HBr: linear
b. CBr4: tetrahedral (the central carbon has four bonded pairs and sp3 hybridization)
c. AsH3 (arsine): trigonal pyramidal
d. BeBr2: linear
e. CH4: tetrahedral
Further examples: in H2S the total number of electrons around the central atom S is eight, giving four electron pairs; two are bonding pairs and two are lone pairs, so the molecular geometry is bent. NH3 is pyramidal; SO4^2- and PO4^3- are tetrahedral; NO3^- is trigonal planar; O3 and SO2 are bent; BF3 is trigonal planar. SF6 is octahedral, so called because a surface covering the molecule would have eight sides; all F-S-F angles are 90°, with the exception of the fluorine atoms directly opposite one another.
Hybridization at the central atom: SO2 is sp2, CF4 is sp3, SeO4^2- is sp3, HCN is sp; pairing PF5 with sp3d2 is the incorrect match (PF5 is sp3d).
Questions:
Q1. What is the molecular geometry (shape of the molecule) of BBr3? Draw its Lewis structure and state whether BBr3 is polar or nonpolar.
Q2. What is the molecular geometry of Br2 and of HBr? State whether Br2 and HBr are polar or nonpolar.
Q3. Draw the Lewis structure for NBr3.
Using an MO diagram for Be2, Be2+, and Be2-, both Be2+ and Be2- are more stable than Be2.
When an HBr molecule approaches an alkene, a new sigma bond is formed between one of the alkene carbons and the electron-poor proton from HBr; the carbon, which was sp2 hybridized when it was part of the alkene, is now sp3 hybridized.
|
2021-03-07 08:20:28
|
https://www.physicsforums.com/threads/fitting-a-curve-to-data.175196/
|
# Fitting a curve to data
1. Jun 27, 2007
### gnome
I have a set of data points relating the width of an object in an image to its distance from the camera. I'd like to find the simplest curve that fits "pretty well". When I graph the points, it looks like a hyperbola would be a good fit. Is there a simple iterative method to find an equation?
The data:
(20, 59)
(30, 44)
(40, 34)
(50, 28)
(60, 24)
(70, 21)
(80, 19)
(90, 17)
(100, 15)
(125, 12)
(150, 10)
(175, 9)
(200, 8)
(225, 7)
(250, 6)
I suppose I could add (0,infinity) to that list. Nothing above x=250 is relevant.
2. Jun 27, 2007
### Integral
Staff Emeritus
There is no way to generate the functional relationship. You need to "guess" a relationship then attempt to find the characteristic parameters.
3. Jun 27, 2007
### uart
Yes theoretically it should be a hyperbola. So take the reciprocal of the second data column and then it should be a straight line.
4. Jun 27, 2007
### matt grime
Log-log plots, as were taught decades ago, but seemingly not anymore...
5. Jun 27, 2007
### gnome
Thanks, uart, that was very helpful.
Matt: I'll put that on my to-do list. ;)
6. Jun 28, 2007
### matt grime
It's quite a simple device, really.
If you believe that data x_i and y_i are related by something like x^n=k*y^m, then taking logs nlog(x)=log(k)+mlog(y), i.e. their logs should form a straight line graph. You can also try variations if you thought that y^n=k*exp(x), or something similar. You used to be able to buy log-log graph paper to do this. So I'm told - I'm too young to have used this.
7. Jun 28, 2007
### Gib Z
I quite like $y = 957.83 x^{-0.9057}$ thank you very much :)
8. Jun 28, 2007
### uart
Or
$$y = \frac{1000}{0.63 x + 3.65}$$
It depends on what model you choose to fit.
9. Jun 28, 2007
### Integral
Staff Emeritus
If you wanted something of physical interest you would attempt to find a f such that:
$$\frac 1 x + \frac 1 y = \frac 1 f$$
I would guess this relationship since I know about the thin lens formula. That is the trouble with simply fitting data with no thought of the known physical relationships. You can get perfectly good fits which have no physical meaning.
10. Jun 28, 2007
### Feldoh
Actually we just did them in my physics I highschool course to show Kepler's 3rd using the orbital radius and period of the planets :rofl:
But yeah the slope of the log-log graph is the power of the function.
Edit: I got:
$$y=\frac{957.83}{x^{.90499}}$$
Last edited: Jun 28, 2007
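Both suggestions in this thread (uart's reciprocal trick and the log-log power-law fit) are easy to check numerically. A sketch with NumPy; the power-law fit reproduces an exponent near the -0.905 quoted above:

```python
import numpy as np

x = np.array([20, 30, 40, 50, 60, 70, 80, 90, 100,
              125, 150, 175, 200, 225, 250], dtype=float)
y = np.array([59, 44, 34, 28, 24, 21, 19, 17, 15,
              12, 10, 9, 8, 7, 6], dtype=float)

# Power-law model y = A * x^m: fit a straight line to log(y) vs log(x).
m, logA = np.polyfit(np.log(x), np.log(y), 1)
print(f"power law: y = {np.exp(logA):.1f} * x^{m:.3f}")

# Hyperbola-style model y = 1 / (a*x + b): fit a straight line to 1/y vs x.
a, b = np.polyfit(x, 1.0 / y, 1)
print(f"hyperbola: y = 1 / ({a:.5f}*x + {b:.5f})")
```

Which model to prefer is Integral's point: the physically motivated relationship (e.g. the thin-lens form) should guide the choice, not just goodness of fit.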
|
2017-04-24 21:15:34
|
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=151&t=30220
|
## 15.69
Arrhenius Equation: $\ln k = - \frac{E_{a}}{RT} + \ln A$
RenuChepuru1L
Posts: 58
Joined: Thu Jul 27, 2017 3:00 am
### 15.69
Christiana2D
Posts: 23
Joined: Fri Sep 29, 2017 7:06 am
### Re: 15.69
In the solutions manual they skipped a couple of the algebraic steps. After setting up the ratio of rate constants they cancelled out A, which is the same for both reactions. Then they brought exp(-Ea/RT) to the top and combined the exponents to get 1000 = exp[(Ea/RT) - (Ea,cat/RT)]. You can then take the natural log of both sides and solve for Ea,cat.
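A quick numeric sketch of that algebra (the activation energy and temperature here are made-up illustration values, not numbers from the problem):

```python
import math

R = 8.314      # J/(mol K)
T = 298.15     # K, assumed room temperature (illustration value)
Ea = 125e3     # J/mol, hypothetical uncatalyzed activation energy
ratio = 1000   # k_cat / k, the rate-constant ratio

# ratio = exp[(Ea/RT) - (Ea_cat/RT)]  =>  Ea_cat = Ea - R*T*ln(ratio)
Ea_cat = Ea - R * T * math.log(ratio)
print(Ea_cat / 1e3)  # ≈ 107.9 kJ/mol for these illustration inputs
```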
https://www.physicsforums.com/threads/designing-and-making-springs-with-music-wire.315723/
# Designing and making springs with music wire
1. May 22, 2009
### elect_eng
I would like to make some custom made extension springs out of music wire. Can anyone provide the formulas to determine the wire thickness, coil diameter and coil length to achieve a desired spring constant?
The motivation is that I need some small springs in my research and it is difficult to find existing springs that match what I need. I've been told that it is relatively easy to wind your own springs with music wire. I'm sure I could derive the formulas, or just experiment, but this will take time. If some ready made formulas and material properties for music wire are available, I wouldn't mind saving some time.
2. May 22, 2009
### Staff: Mentor
I googled spring design, and got lots of good hits. Here's one of the first hits, and it appears to have a spring calculator link at the left:
http://www.efunda.com/DesignStandards/springs/spring_introduction.cfm
3. May 22, 2009
### Cyrus
Get a copy of Shigley's Mechanical Engineering Design. It's in there.
4. May 22, 2009
### Q_Goest
5. May 22, 2009
### elect_eng
Thank you all for your very good help! This puts me in good shape.
I still will welcome any additional suggestions. In particular, if anyone has experience winding small diameter (1 cm) but long length (50 cm) extension springs with music wire, I'm sure I could benefit from a quick roadmap of the tricks and pitfalls.
I also have a question about something I learned today about extension springs. Apparently, many extension springs are made preloaded in a way that a positive offset force (i.e. greater than zero) must be applied before the spring will start to open its coils. I can't help but wonder how these are made. Does anyone know the basic principle involved with forming an extension spring, having constant diameter, with a preloaded compression force built in?
6. May 23, 2009
### Cyrus
They twist the wire as they coil it. This gives it a preload.
Note: You really don't want to try and make these yourself. I was told they are very, very hard to make.
7. May 23, 2009
### elect_eng
Thanks for the explanation. That one had me scratching my head, but that makes sense now.
Do you know what the issues were in making the springs? I'm not looking for high quality and I don't need to have a pre-load built in, but my requirements are unusual and I haven't found any commercial springs that come close. If I don't succeed in making my own, I'll have to have them custom made by a spring company. This creates a long delay and considerable expense. The work is important enough to me so that I will go down that road if I'm forced to.
Oddly enough, I can get the spring constant I want with elastic rubber bands, but the rubber has too much damping for my application. So if anyone knows a material similar to rubber, but with low damping, I can consider that too.
8. May 23, 2009
### Q_Goest
Hi elect_eng. What can you tell us about the spring you want designed? I'm assuming a conventional wound compression spring is all you're looking for. If you provide the following I'll tell you if it's possible, how many coils, etc...
Must pass over diameter:
Must fit into diameter:
Free Length:
Spring constant needed:
Is this a cyclic load (yes/no) explain:
If you have a specific wire you want to use, what is it (material type and diameter):
Any unusual temperature range? Fluids it may be in contact with?
If it's not a conventional compression spring, I probably won't be able to help.
9. May 23, 2009
### elect_eng
Thank you so much. I appreciate your very kind offer. Actually, I need extension springs in my application. I think I can provide the information you listed above, but I will understand if extension springs are different enough from compression springs to generate too much additional time investment for you to justify (for a stranger). I will list the things I know off the top of my head just in case it helps you (or anyone else) provide some basic pointers for me.
No passover diameter required
Must fit in 1 cm diameter
Range of motion: zero to 0.7 meters of extension beyond unloaded length
Unloaded length: ideally < 0.3 meters (but any length necessary can be tolerated)
No pretension necessary
Spring Constant (at extension distance 0.35 m): 3.1 Newtons/meter
Linearity of spring constant is not too critical, but hard to quantify
Static Load conditions: 0.11 kg mass hanging vertically at 0.35 m in equilibrium
Operates oscillating continuously at 1.2 sec period with max amplitude 0.35 m
Room temperature operation in air
No material requirements but low damping needed (assume music wire is appropriate)
10. May 23, 2009
### Q_Goest
I generally use inches and pounds so I'm converting here. My understanding is that you want to hang a 0.11 kg weight (1.08 N) on this spring, and it will oscillate such that it will extend the spring to 0.7 meters. Spring rate is 3.1 N/m, so load at an extension of 0.7 m is 1.24 N. Free length is 0.3 meters.
Assuming a spring OD of 9 mm (to stay inside 10 mm) and a wire diameter of 0.018 inches (0.457 mm) requires a total of 224 turns assuming music wire. Solid height is about 4.06 inches (0.103 meters). Free length can therefore be made at the 0.3 m length you requested. Stress should be low enough (44 ksi) so it should have infinite fatigue life.
If you get a spring manufacturer to do this I'd guess the cost will be a few hundred dollars. That's just a ROM cost (rough order of magnitude). Or you can buy a roll of wire from McMaster-Carr for about $6 (part number: 9666K18).
Last edited: May 24, 2009
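For anyone wanting to reproduce the numbers above, the standard round-wire helical spring rate formula k = G d^4 / (8 D^3 N) gets you there. A quick sketch (the shear modulus is a typical handbook value for music wire, not a number from this thread):

```python
import math

# Round-wire helical spring rate: k = G * d**4 / (8 * D**3 * N)
G = 79.3e9     # Pa, typical shear modulus for music wire (assumed handbook value)
d = 0.457e-3   # m, wire diameter (0.018 in)
OD = 9.0e-3    # m, spring outer diameter
D = OD - d     # m, mean coil diameter
N = 224        # active coils

k = G * d**4 / (8 * D**3 * N)                 # spring rate in N/m
period = 2 * math.pi * math.sqrt(0.11 / k)    # mass-spring period for 0.11 kg

print(k, period)  # roughly 3.1 N/m and 1.2 s, matching the specification
```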
11. May 23, 2009
### elect_eng
Wow, thank you so much for your help. This saves me so much time. I can basically get in the ballpark and start doing experiments to see how effectively I can make my own. If this doesn't work out, the expense of a few hundred dollars is not too bad and can be justified.
By the way, I play guitar and the 0.018 inch wire corresponds to the G string (third string) on a typical steel string guitar. So I can actually play around with this while the wire spool is on order. Clearly, the length of a guitar string is too short, but I can experiment with the winding methods and tools that are needed.
I hope a day comes when I can return the favor in some way!
12. May 25, 2009
### Cyrus
Let's see it working first... I think it's going to be harder than you think. (Hopefully not though!)
13. May 26, 2009
### elect_eng
Yes, things are always harder than you think, and I never thought it would be easy, so it could be really hard. However, it seems cheap enough to try. I'd like to say it can't hurt to try, but actually it could be dangerous if I'm not careful! (I will be careful though.) The good news is that, if I fail, the cost of custom springs from a professional may be cheaper than I originally thought.
14. May 29, 2009
### elect_eng
I thought I'd report back on the progress here. Thanks to the good advice given to me, I was able to successfully make springs today. I received the music wire this morning and by afternoon I had made springs suitable for my application.
One key to quick success was being put in the right ballpark with Q_Goest's initial design. The choice of 0.018 wire seems to be right on. I experimented with 0.014 wire too, but it was too difficult to work with those springs. The slightest over-stress and the spring deformed.
Another key was the information from Cyrus about using twisting to create pretension in the spring. This turns out to be important because if I just wind the spring without twist, the coils end up spaced out too far and the free-length of the spring is too long. A little twist produces nice tight coils and it was even possible to achieve pretension.
It was amazing how easy this was once I got the hang of it. I made a winding tool out of a long screwdriver and a tap-holder. Then I clamped the wire in a vise - pulled the wire straight - put some twist and tension and then started spooling the wire on the screwdriver (0.25 inch diameter shaft). The trick is to get the right number of twists which is quickly discovered by trial and error. Another trick is to figure out the spooling diameter needed to end up with the final diameter needed. The coil needed to be wound at 6 mm diameter to end up at about 9 mm diameter. Again, just trial and error.
I'm now planning to improve the quality of the springs with better tools to maintain constant tension as I form the spring. This should result in a professional looking spring but even today's springs work.
Thanks again for the good advice. Perhaps next week I'll post a picture of a spring.
15. May 29, 2009
### Cyrus
Cool, post a picture of it! My hats off to you sir.
16. May 29, 2009
### Staff: Mentor
Yes, very cool. I learn something new every day here on the PF.
17. May 31, 2009
### elect_eng
I'm posting two pictures of a spring which is very close to what I need to make. With a 0.1 kg mass, it has 1.1 s period, which is close to the 1.2 s period that I need. I just need to make another that is little longer and I'm there.
This spring was made with about 25 feet of wire with 2 full twists per foot and 10 pounds of line tension. I made this one using a controlled tension, but I really didn't see much better results compared to when I just put a little tension by hand. However, pre-twisting the wire is critical to get nice tightly spaced coils.
It works fine for my application, but I find that it is very difficult to control the pretension over the length of the long coil. The beginning of the coil ends up with a lot of pretension and the end has almost none. However, since I don't need pretension, I was able to slide the spring over a 3/8 inch rod and stretch out the coils with pretension. You can visually see the non-uniformity over the length, but this is just a visual thing that does not affect the overall spring performance. You end up with a spring with low-damping, high natural frequency and an effective spring rate that can be matched to the specification.
One picture is the free spring lying on the table with a centimeter scale ruler in view. The other shows the spring extended while holding the 0.1 kg mass in equilibrium.
Thank you all again !!!
(Attached: two photos, including extend.jpeg — the free spring beside a centimeter ruler, and the spring extended while holding the 0.1 kg mass.)
18. May 31, 2009
### Cyrus
Good god, that's a long spring. I wouldn't want to coil all that wire! Haha, good job though.
19. May 31, 2009
### Q_Goest
Thanks for the pics. Maybe you should go into business making hand wound springs!
20. May 31, 2009
### elect_eng
Now I can't resist showing a picture of his big brother. He has a period of 1.6 s when attached to a 0.1 kg mass.
This next spring came out a little better and I didn't use the 10 pound constant tension this time. I also used less pre-twist (1 twist per foot), which seems to be about right. I figured that if I made an extra long spring, I could cut out the section that looks best.
However, once again, there was pretension in the first part of the spring, and almost none at the end of the spring. Does anyone know the physical explanation for this effect? I figured a pre-twist would distribute itself evenly over the whole length, but it seems that the beginning of the coiling process wants to take up more of the existing pre-twist. I'm wondering if this is expected, or maybe I have to look and see if a step in my process is inducing this effect.
(Attached: a photo of the longer spring.)
http://www.makemeasentence.com/blog/?p=201
# Spectral learning of hidden Markov models
I recently gave a tutorial at CMU about spectral learning for NLP. This tutorial was based on a tutorial I had given last year with Michael Collins, Dean Foster, Karl Stratos and Lyle Ungar at NAACL.
One of the algorithms I explained there was the spectral learning algorithm for HMMs by Hsu, Kakade and Zhang (2009). This algorithm estimates parameters of HMMs in the “unsupervised setting” — only from sequences of observations. (Just like the Baum-Welch algorithm — expectation-maximization for HMMs — does.)
I want to repeat this explanation here, and give some intuition about this algorithm, since it seems to confuse people quite a lot. At first glance, it looks quite mysterious why the algorithm works, though its implementation is very simple. It is one of the earlier algorithms in this area of latent-variable learning using the method of moments and spectral methods, and prompted the creation of other algorithms for latent-variable learning.
So here are the main ideas behind it, with some intuition. In my explanation of the algorithm, I am going to forget about the “spectral” part. No singular value decomposition will be involved, or any type of spectral decomposition. Just plain algebraic and matrix multiplication tricks that require understanding what marginal probabilities are and how matrix multiplication and inversion work, and nothing more. Pedagogically, I think that’s the right thing to do, since introducing the SVD step complicates the understanding of the algorithm.
Consider a hidden Markov model. The parameters are represented in matrix form $$T$$, $$O$$ and $$\pi$$. We assume $$m$$ latent states, $$n$$ observations. More specifically, $$T$$ is an $$m \times m$$ stochastic matrix where $$m$$ is the number of latent states, such that $$T_{hh’}$$ is the probability of transitioning to state $$h$$ from state $$h’$$. $$O$$ is an $$n \times m$$ matrix such that $$O_{xh}$$ is the probability of emitting symbol $$x$$ — an observation — from latent state $$h$$. $$\pi$$ is an $$m$$ length vector with $$\pi_h$$ being the initial probability for state $$h$$.
To completely get rid of the SVD step, and simplify things, we will have to make the assumption that $$m = n$$. This means that the number of states equals the number of observations. Not a very useful HMM, perhaps, but it definitely makes the derivation more clear. The fact that $$m=n$$ means that $$O$$ is now a square matrix — and we will assume it is invertible. We will also assume that $$T$$ is invertible, and that $$\pi$$ is positive in all coordinates.
If we look at the joint distribution $$p(X_1 = x_1, X_2 = x_2)$$ of the first two observations in the HMM, then it can be written as:
$$p(X_1 = x_1, X_2 = x_2) = \sum_{h_1,h_2} p(X_1 = x_1, H_1 = h_1, X_2 = x_2, H_2 = h_2) = \sum_{h_1,h_2} \pi_{h_1} O_{x_1,h_1} T_{h_2,h_1} O_{x_2,h_2}$$
Nothing special here, just marginal probability summing out the first two latent states.
It is not hard to see that this can be rewritten in matrix form, i.e. if we define $$[P_{2,1}]_{x_2,x_1} = p(X_1 = x_1, X_2= x_2)$$ then:
$$P_{2,1} = O T \mathrm{diag}(\pi)O^{\top}$$
where $$\mathrm{diag}(\pi)$$ is just an $$m \times m$$ diagonal matrix with $$\pi_h$$ on the diagonal.
Just write down this matrix multiplication step-by-step explicitly, multiplying, say, from right to left, and you will be able to verify this identity for $$P_{2,1}$$. Essentially, the matrix product, which involves dot-product between rows and vectors of two matrices, eliminates and sums out the latent states (and does other things, like multiplying in the starting probabilities).
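A quick numeric check of this identity (a NumPy sketch with a made-up toy size; `T`, `O`, `pi` are random parameters laid out with columns as distributions, matching the definitions above):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 3  # toy size, chosen arbitrarily

# Random HMM parameters: T[h, h'] = p(h | h'), O[x, h] = p(x | h),
# pi[h] = initial probability of state h (columns sum to 1).
T = rng.dirichlet(np.ones(m) * 5, size=m).T
O = rng.dirichlet(np.ones(m) * 5, size=m).T
pi = rng.dirichlet(np.ones(m) * 5)

# Brute-force marginal p(X1 = x1, X2 = x2), summing out both hidden states
P21_sum = np.zeros((m, m))
for x1 in range(m):
    for x2 in range(m):
        P21_sum[x2, x1] = sum(pi[h1] * O[x1, h1] * T[h2, h1] * O[x2, h2]
                              for h1 in range(m) for h2 in range(m))

# The matrix form claimed above
P21_matrix = O @ T @ np.diag(pi) @ O.T
print(np.allclose(P21_sum, P21_matrix))  # True
```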
Alright. So far, so good.
Now, what about the joint distribution of three observations?
$$p(X_1 = x_1, X_2 = x, X_3=x_3) = \sum_{h_1,h_2,h_3} p(X_1 = x_1, H_1 = h_1, X_2 = x, H_2 = h_2, X_3=x_3, H_3 = h_3) = \sum_{h_1,h_2,h_3} \pi_{h_1} O_{x_1,h_1} T_{h_2,h_1} O_{x,h_2} T_{h_3,h_2} O_{x_3,h_3}$$
Does this have a matrix form too? Yes, not surprisingly. If we fix $$x$$, the second observation, and define $$[P_{3,x,1}]_{x_3,x_1} = p(X_1 = x_1, X_2 = x, X_3 = x_3)$$, (i.e. $$P_{3,x,1}$$ is an $$m \times m$$ matrix defined for each observation symbol $$x$$), then
$$P_{3,x,1} = OT \mathrm{diag}(O_x) T \mathrm{diag}(\pi) O^{\top}$$.
Here, $$\mathrm{diag}(O_x)$$ is a diagonal matrix whose diagonal is the $$x$$th row of $$O$$.
Now define $$B_x = P_{3,x,1}P_{2,1}^{-1}$$ (this is well-defined because $$P_{2,1}$$ is invertible — all the conditions we had on the HMM parameters make sure that it is true), then:
$$B_x = OT \mathrm{diag}(O_x) T \mathrm{diag}(\pi) O^{\top} \times (O T\mathrm{diag}(\pi)O^{\top})^{-1} = OT\mathrm{diag}(O_x)O^{-1}$$
(just recall that $$(AB)^{-1} = B^{-1} A^{-1}$$ whenever both sides are defined and $$A$$ and $$B$$ are square matrices.)
This part of getting $$B_x$$ (and I will explain in a minute why we need it) is the hardest part in our derivation so far. We can also verify that the vector of probabilities $$p(X_1 = x)$$ equals $$O\pi$$. Let's call $$b_1$$ the vector such that $$[b_1]_x = p(X_1=x)$$; i.e. $$b_1$$ is exactly the vector $$P_1$$.
We can also rewrite $$P_1$$ the following way:
$$P_1^{\top} = 1^{\top} T \mathrm{diag}(\pi) O^{\top} = 1^{\top} O^{-1} \underbrace{O T \mathrm{diag}(\pi) O^{\top}}_{P_{2,1}}$$
where $$1^{\top}$$ is an $$1 \times m$$ vector with the value 1 in all coordinates. The first equality is the “surprising” one — we use $$T$$ to calculate the distribution of $$p(X_1 = x_1)$$ — but if you write down this matrix multiplication explicitly, you will discover that we will be summing over the elements of $$T$$ in such a way that it does not play a role in the sum — that’s because each row of $$T$$ sums to 1. (As Hsu et al. put it in their paper: this is an unusual but easily verified form to write $$P_1$$.)
The above leads to the identity $$P_1^{\top} = 1^{\top} O^{-1} P_{2,1}$$.
Now, it can be easily verified from the above form of $$P_1$$ that for $$b_{\infty}^{\top}$$ defined as $$(P^{\top}_{2,1})^{-1} P_1$$, an $$m$$ length vector, then:
$$b_{\infty}^{\top} = 1^{\top} O^{-1}$$.
So what do we have so far? We managed to define the following matrices and vectors based only on the joint distribution of the first three symbols in the HMM:
$$B_x = P_{3,x,1}P_{2,1}^{-1} = OT\mathrm{diag}(O_x)O^{-1},$$
$$b_1 = P_1 = O\pi,$$
$$b_{\infty}^{\top} = (P^{\top}_{2,1})^{-1} P_1 = 1^{\top} O^{-1}.$$
The matrix $$B_x \in \mathbb{R}^{m \times m}$$ and vectors $$b_{\infty} \in \mathbb{R}^m$$ and $$b_1 \in \mathbb{R}^m$$ will now play the role of our HMM parameters. How do we use them as our parameters?
Say we just observe a single symbol in our data, i.e. the length of the sequence is 1, and that symbol is $$x$$. Let’s multiply $$b^{\top}_{\infty} B_x b_1$$.
According to the above equalities, it is true that this equals:
$$b^{\top}_{\infty} B_x b_1 = (1^{\top} O^{-1}) (O T \mathrm{diag}(O_x) O^{-1}) (O \pi) = 1^{\top} T \mathrm{diag}(O_x) \pi$$.
Note that this quantity is a scalar. We are multiplying a matrix by a vector from left and right. Undo this matrix multiplication, and write it the way we like in terms of sums over the latent states, and what do we get? The above just equals:
$$b^{\top}_{\infty} B_x b_1 = \sum_{h_1,h_2} T_{h_2,h_1} O_{x,h_1} \pi_{h_1} = \sum_{h_1,h_2} p(H_1 = h_1) p(X_1 = x | H_1 = h_1) p(H_2 = h_2 | H_1 = h_1) = p(X_1 = x)$$.
So, this triplet-product gave us back the distribution over the first observation. That’s not very interesting, we could have done it just by using $$b_1$$ directly. But… let’s go on and compute:
$$b^{\top}_{\infty} B_{x_2} B_{x_1} b_1.$$
This can be easily verified to equal $$p(X_1 = x_1, X_2 = x_2)$$.
The interesting part is that in the general case,
$$b^{\top}_{\infty} B_{x_n} B_{x_{n-1}} \cdots B_{x_1} b_1 = p(X_1 = x_1, \ldots, X_n = x_n),$$
so we can now calculate the probability of any observation sequence in the HMM only by knowing the distribution over the first three observations! (To convince yourself about the general case, just look at Lemma 1 in the Hsu et al. paper.)
In order to turn this into an estimation algorithm, we just need to estimate from data $$P_{2,1}$$ and $$P_{3,x,1}$$ for each observation symbol (all observed, just “count and normalize”), and voila, you can estimate the probability of any sequence of observations (one of the basic problems with HMMs according to this old classic paper, for example).
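Putting the whole recipe together, here is a minimal NumPy sketch (variable names and the toy size m = 3 are mine, and I plug in exact moments where a practical implementation would use count-and-normalize estimates). It builds the observable operators from the moments alone and checks the resulting sequence probabilities against the standard forward algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 3  # toy size; we assume n = m observation symbols, as in the derivation

# Random HMM satisfying the stated assumptions (invertible T and O, pi > 0):
# columns are distributions, so T[h, h'] = p(h | h'), O[x, h] = p(x | h).
T = rng.dirichlet(np.ones(m) * 5, size=m).T
O = rng.dirichlet(np.ones(m) * 5, size=m).T
pi = rng.dirichlet(np.ones(m) * 5)

# Exact low-order moments (in practice: count-and-normalize estimates)
P1 = O @ pi
P21 = O @ T @ np.diag(pi) @ O.T
P3x1 = [O @ T @ np.diag(O[x]) @ T @ np.diag(pi) @ O.T for x in range(m)]

# Observable operators, built only from the moments
b1 = P1
binf = np.linalg.inv(P21.T) @ P1               # as a row vector: 1^T O^{-1}
B = [P3x1[x] @ np.linalg.inv(P21) for x in range(m)]

def spectral_prob(seq):
    v = b1
    for x in seq:                              # applies B[x_1], ..., B[x_n]
        v = B[x] @ v
    return binf @ v

def forward_prob(seq):                         # standard forward algorithm
    a = pi * O[seq[0]]
    for x in seq[1:]:
        a = O[x] * (T @ a)
    return a.sum()

seq = [0, 2, 1, 1, 0]
print(spectral_prob(seq), forward_prob(seq))   # the two values agree
```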
But… We made a heavy assumption. We assumed that $$n = m$$ — we have as many observation symbols as latent states. What do we do if that’s not true? (i.e. if $$m < n$$)? That’s where the “spectral” part kicks in. Basically, what we need to do is to reduce our $$O$$ matrix into an $$m \times m$$ matrix using some $$U$$ matrix, while ensuring that $$U^{\top}O$$ is invertible (just like we assumed $$O$$ was invertible before). Note that $$U$$ needs to be $$n \times m$$.
It turns out that a $$U$$ that will be optimal in some sense, and will also make all of the above algebraic tricks work is the left singular value matrix of $$P_{2,1}$$. Understanding why this is the case requires some basic knowledge of linear algebra — read the paper to understand this!
## 3 thoughts on “Spectral learning of hidden Markov models”
1. JYao
Nice post.
The idea of utilizing spectral properties is brilliant; though, it seems that this paper did not tell us how to recover T, O and π.
HMMs may not be useful for most NLP tasks if we have probability estimates merely on observed sequences.
2. shaybcohen Post author
The paper actually does include an appendix that shows an (unstable) way of getting T, O and π from the linearly-transformed parameters.
I agree we would want more than just probability estimates of observed sequences. Since the “spectral learning of HMMs” paper there have been several other papers that use the method of moments to get the actual parameters of an HMM — in a more direct way than in the Hsu et al. paper.
Here is one of them:
http://arxiv.org/abs/1210.7559
and here is another (older):
http://arxiv.org/abs/1203.0683
Michael Collins and I had also a paper on using a method of moments for extracting the parameters of an L-PCFG, which HMMs are a subclass of:
http://homepages.inf.ed.ac.uk/scohen/acl14pivot+supp.pdf
I am sure that there are other papers that do similar things.
1. JYao
Thanks a lot for these pointers!
I’ll try to study more on this new methodology in my spare time.
https://everything.explained.today/Converse_relation/
# Converse relation explained
In mathematics, the converse relation, or transpose, of a binary relation is the relation that occurs when the order of the elements is switched in the relation. For example, the converse of the relation 'child of' is the relation 'parent of'. In formal terms, if $X$ and $Y$ are sets and $L \subseteq X \times Y$ is a relation from $X$ to $Y$, then $L^{\operatorname{T}}$ is the relation defined so that $y \mathbin{L^{\operatorname{T}}} x$ if and only if $x \mathbin{L} y$. In set-builder notation,

$$L^{\operatorname{T}} = \{ (y, x) \in Y \times X : (x, y) \in L \}.$$
The notation is analogous with that for an inverse function. Although many functions do not have an inverse, every relation does have a unique converse. The unary operation that maps a relation to the converse relation is an involution, so it induces the structure of a semigroup with involution on the binary relations on a set, or, more generally, induces a dagger category on the category of relations as detailed below. As a unary operation, taking the converse (sometimes called conversion or transposition) commutes with the order-related operations of the calculus of relations; that is, it commutes with union, intersection, and complement.
The converse relation is also called the transpose relation, the latter in view of its similarity with the transpose of a matrix.[1] It has also been called the opposite or dual of the original relation,[2] or the inverse of the original relation,[3] [4] [5] or the reciprocal $L^{\circ}$ of the relation $L$.[6] Other notations for the converse relation include $L^{\operatorname{C}}$, $L^{-1}$, $\breve{L}$, $L^{\circ}$, or $L^{\vee}$.
## Examples
For the usual (maybe strict or partial) order relations, the converse is the naively expected "opposite" order; for example, ${\leq}^{\operatorname{T}} = {\geq}$ and ${<}^{\operatorname{T}} = {>}$.

A relation may be represented by a logical matrix such as

$$\begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$

Then the converse relation is represented by its transpose matrix:

$$\begin{pmatrix} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1 \end{pmatrix}.$$
The converses of kinship relations are named: "$A$ is a child of $B$" has converse "$B$ is a parent of $A$". "$A$ is a nephew or niece of $B$" has converse "$B$ is an uncle or aunt of $A$". The relation "$A$ is a sibling of $B$" is its own converse, since it is a symmetric relation.

In set theory, one presumes a universe $U$ of discourse, and a fundamental relation of set membership $x \in A$ when $A$ is a subset of $U$. The power set of all subsets of $U$ is the domain of the converse ${\ni} = {\in}^{\operatorname{T}}$.
## Properties
In the monoid of binary endorelations on a set (with the binary operation on relations being the composition of relations), the converse relation does not satisfy the definition of an inverse from group theory; that is, if $L$ is an arbitrary relation on $X$, then $L \circ L^{\operatorname{T}}$ does not equal the identity relation on $X$ in general. The converse relation does satisfy the (weaker) axioms of a semigroup with involution:

$$\left(L^{\operatorname{T}}\right)^{\operatorname{T}} = L \quad \text{and} \quad (L \circ R)^{\operatorname{T}} = R^{\operatorname{T}} \circ L^{\operatorname{T}}.$$
Since one may generally consider relations between different sets (which form a category rather than a monoid, namely the category of relations Rel), in this context the converse relation conforms to the axioms of a dagger category (aka category with involution). A relation equal to its converse is a symmetric relation; in the language of dagger categories, it is self-adjoint.
Furthermore, the semigroup of endorelations on a set is also a partially ordered structure (with inclusion of relations as sets), and actually an involutive quantale. Similarly, the category of heterogeneous relations, Rel is also an ordered category.[7]
In the calculus of relations, conversion (the unary operation of taking the converse relation) commutes with the other binary operations of union and intersection. Conversion also commutes with the unary operation of complementation as well as with taking suprema and infima. Conversion is also compatible with the ordering of relations by inclusion.[1]
If a relation is reflexive, irreflexive, symmetric, antisymmetric, asymmetric, transitive, connected, trichotomous, a partial order, total order, strict weak order, total preorder (weak order), or an equivalence relation, its converse is too.
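The involution and anti-distributivity laws above are easy to verify computationally for relations represented as sets of pairs. A small sketch (function names are mine; composition $L \circ R$ is read as "first $R$, then $L$"):

```python
import random
from itertools import product

def converse(L):
    """Converse (transpose) of a relation given as a set of pairs."""
    return {(y, x) for (x, y) in L}

def compose(L, R):
    # x (L ∘ R) z  iff  there is some y with x R y and y L z
    return {(x, z) for (x, y) in R for (y2, z) in L if y == y2}

# Two random relations on the set {0, 1, 2}
pairs = list(product(range(3), repeat=2))
random.seed(1)
L = set(random.sample(pairs, 4))
R = set(random.sample(pairs, 4))

print(converse(converse(L)) == L)                                    # involution
print(converse(compose(L, R)) == compose(converse(R), converse(L)))  # (L∘R)^T = R^T ∘ L^T
```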
## Inverses
If $I$ represents the identity relation, then a relation $R$ may have an inverse as follows: $R$ is called right-invertible if there exists a relation $X$ with $R \circ X = I$, and left-invertible if there exists a $Y$ with $Y \circ R = I$. Then $X$ and $Y$ are called the right and left inverse of $R$, respectively. Right- and left-invertible relations are called invertible. For invertible homogeneous relations all right and left inverses coincide; the notion inverse $R^{-1}$ is used. Then $R^{-1} = R^{\operatorname{T}}$ holds.[1]
### Converse relation of a function
A function is invertible if and only if its converse relation is a function, in which case the converse relation is the inverse function.
The converse relation of a function $f : X \to Y$ is the relation $f^{-1} \subseteq Y \times X$ defined by the graph $\operatorname{graph} f^{-1} = \{ (y, x) \in Y \times X : y = f(x) \}$.

This is not necessarily a function: one necessary condition is that $f$ be injective, since otherwise $f^{-1}$ is multi-valued. This condition is sufficient for $f^{-1}$ being a partial function, and it is clear that $f^{-1}$ then is a (total) function if and only if $f$ is surjective. In that case, meaning if $f$ is bijective, $f^{-1}$ may be called the inverse function of $f$.

For example, the function $f(x) = 2x + 2$ has the inverse function $f^{-1}(x) = \frac{x}{2} - 1$. However, the function $g(x) = x^2$ has the inverse relation $g^{-1}(x) = \pm\sqrt{x}$, which is not a function, being multi-valued.
## Notes and References

1. Gunther Schmidt, Thomas Ströhlein. *Relations and Graphs: Discrete Mathematics for Computer Scientists*. Springer Berlin Heidelberg, 1993. ISBN 978-3-642-77970-1. pp. 9–10.
2. Celestina Cotti Ferrero, Giovanni Ferrero. *Nearrings: Some Developments Linked to Semigroups and Groups*. Kluwer Academic Publishers, 2002. ISBN 978-1-4613-0267-4. p. 3.
3. Daniel J. Velleman. *How to Prove It: A Structured Approach*. Cambridge University Press, 2006. ISBN 978-1-139-45097-3. p. 173.
4. Shlomo Sternberg, Lynn Loomis. *Advanced Calculus*. World Scientific Publishing Company, 2014. ISBN 978-9814583930. p. 9.
5. Kenneth H. Rosen, Douglas R. Shier, Wayne Goddard (eds.). *Handbook of Discrete and Combinatorial Mathematics*, 2nd ed. Boca Raton, FL, 2017. ISBN 978-1-315-15648-4. p. 43.
6. Peter J. Freyd.
7. Ewa Orłowska, Andrzej Szalas. *Relational Methods for Computer Science Applications*. Springer Science & Business Media, 2001. ISBN 978-3-7908-1365-4. Therein: Joachim Lambek, "Relations Old and New", pp. 135–146.
http://mathhelpforum.com/calculus/152412-rate-change-problem.html
# Thread: Rate of Change Problem
1. ## Rate of Change Problem
Hi all, first time here. Not sure whether this is the right category...
I am working with water levels and a truncated cone.
I have been told that V = (pi/3)(((x+2)^3/4) - 2000).
I am then told to find dV/dx, which = pi(x^2 + 40x + 400)/4.
This is where I am stuck, because it then asks "Hence find the rate of change of height (x) in terms of x."
Would I use Chain Rule, or is it simply dV/dx inverse?
I'm not sure at all.
Thanks for the help.
2. Sorry, but I disagree with your derivative. If
$V = \frac{\pi}{3}(\frac{(x+2)^3}{4} - 2000)$, then
$\frac{dV}{dx} = \pi \frac{x^2 + 4x + 4}{4}$.
3. Whoops, meant to write (x+20)^3, not (x+2)^3.
4. Originally Posted by Etherlite
Hi all, first time here. Not sure whether this is the right category...
I am working with water levels and a truncated cone.
I have been told that V = (pi/3)(((x+20)^3/4) - 2000).
I am then told to find dV/dx, which = pi(x^2 + 40x + 400)/4.
This is where I am stuck, because it then asks "Hence find the rate of change of height (x) in terms of x."
Would I use Chain Rule, or is it simply dV/dx inverse?
I'm not sure at all.
any information given about how the volume is changing w/r to time?
5. Originally Posted by skeeter
any information given about how the volume is changing w/r to time?
Don't worry all, I realised on a previous page that they gave me the rate at which water is dripping into the bucket.
Apply chain rule, and the answer's there.
Thanks all.
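For anyone checking the derivative claimed in the thread, a quick finite-difference sketch (assuming the corrected $(x+20)^3$ version of the volume formula):

```python
from math import pi

def V(x):
    """Volume formula from the thread, with the corrected (x + 20)**3 term."""
    return (pi / 3) * ((x + 20)**3 / 4 - 2000)

def dVdx(x):
    """The derivative claimed in the thread: pi*(x**2 + 40*x + 400)/4."""
    return pi * (x**2 + 40 * x + 400) / 4

# Central-difference check at a few sample heights.
h = 1e-6
for x in (0.0, 1.0, 5.0, 10.0):
    numeric = (V(x + h) - V(x - h)) / (2 * h)
    assert abs(numeric - dVdx(x)) < 1e-4, (x, numeric, dVdx(x))

print("derivative formula checks out")
```

This confirms that $\frac{d}{dx}\,\frac{\pi}{3}\Big(\frac{(x+20)^3}{4} - 2000\Big) = \frac{\pi(x+20)^2}{4} = \frac{\pi(x^2 + 40x + 400)}{4}$.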
https://webdesign.tutsplus.com/courses/introduction-to-uikit/lessons/what-is-uikit
Lessons: 6 · Length: 48 minutes
# 2.1 What Is UIkit?
Welcome to the first lesson of this course, where you’ll learn exactly what UIkit is. Let’s get started.
https://physics.stackexchange.com/questions/20651/is-the-long-range-neutron-antineutron-interaction-repulsive-or-attractive
Is the long range neutron-antineutron interaction repulsive or attractive?
I can model this interaction as Zee does in "Quantum field theory in a nutshell". In chapter I.4 section "from particle to force" he uses two delta functions for the source. The integral gives $E=-\frac{1}{4\pi r}e^{-mr}$
I consider $J=\delta^{(3)}(x-x_1)-\delta^{(3)}(x-x_2)$. I'm assuming that this represents a particle and an antiparticle. I have done the calculation both quantum mechanically and classically, and I obtain $E=\frac{1}{4\pi r}e^{-mr}$.
So as this simple model is used to describe the long range nucleon interactions I think I could conclude that this is the potential interaction between a neutron and an antineutron, a repulsive Yukawa potential.
I want to know if I'm right. Textbooks don't discuss this, and Zee seems to say that the force is always attractive.
In electromagnetism, electric charges of the same sign repel; opposite charges attract. That's related to the messenger's spin $J=1$.
However, your case is $J=0$ because the messenger field is a scalar pion. This situation behaves much like $J=2$ or any other even $J$ of the messenger particle: like charges attract (e.g. positive masses gravitationally attract) while opposite charges repel.
Anthony may have said the incorrect statement that the force is always attractive in analogy with $J=2$ gravity; while the signs are analogous, gravity differs from the Yukawa force in one additional respect: its charges (masses) are always positive, so the "universally attractive" property (between isolated objects) holds for gravity (but not the Yukawa force).
One should emphasize that at long distances, there are forces that are parametrically stronger than the Yukawa force you mentioned. In particular, neutrons and antineutrons are small magnets (nonzero dipole moments) so there's a magnetostatic spin-dependent force in between them, decreasing like $1/r^4$, which is still much larger because it's not suppressed by any exponential.
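A minimal numeric sketch of the sign difference (my own illustration, in units where the coupling is 1 and $m$ is the exchanged scalar's mass in inverse-distance units):

```python
from math import pi, exp

def yukawa(r, m=1.0, sign=-1):
    """Yukawa potential between two scalar sources.

    sign=-1 (attractive) for like charges, sign=+1 (repulsive) for a
    particle-antiparticle pair, matching the signs discussed in the post."""
    return sign * exp(-m * r) / (4 * pi * r)

r = 2.0
attractive = yukawa(r, sign=-1)   # like charges: E < 0
repulsive = yukawa(r, sign=+1)    # particle-antiparticle: E > 0
print(attractive, repulsive)
```

The exponential suppression `exp(-m*r)` is what makes the $1/r^4$ magnetostatic dipole force dominate at long range, as noted above.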
https://listserv.uni-heidelberg.de/cgi-bin/wa?A3=2104&L=LATEX-L&E=7bit&P=33344&B=--&T=text%2Fplain;%20charset=utf-8&header=1
```On 20/04/2021 12:35, LARONDE Thierry wrote:
>>> \outfmtlist: a series of ASCII tokens identifying the output format
>>> supported by the engine. Ex.: DVI1.0 (traditional DVI), PDF1.3 etc.
>>> The default format shall be listed first. (Note: I plan, some day,
>>> to extend DVI.)
>>
>> pdfTeX already defines \pdfoutput, which is 0 for DVI and 1 for PDF. LuaTeX
>> renames that to \outputmode but with the same numerical values. The version
>> of PDF written by pdfTeX/LuaTeX is set separately using (in pdfTeX)
>> \pdfminorversion/\pdfmajorversion, as this is really a separate concept to
>> whether PDF or DVI is in use.
>>
>> There is very little use at the macro level for the DVI level.
>
> For now ;-) Since I plan to add code pages (256 blocks) extensions to be
> able, at least, to have a MetaDVI and be able to bypass PostScript and
> PDF...
It only becomes relevant at the macro level if there is something to do.
I can only imagine some difference in specials? Usually that's pushed to
PostScript so we can test there.
>> The PDF level
>
>> does have some impact on output features but in a simply 'Sorry, not doable'
>> sense. Note that XeTeX uses XDV, which is a version of DVI dedicated to
>> this engine. It's not necessary to test the DVI version at the macro level:
>> what's important is for example which method to include images, which uses
>> an engine test.
>>
>>> \outfmtset: setting the output format, that shall be amongst the formats
>>> supported. If not, it returns an error and set the output format to
>>> the default one. Shall be set before \shipout and errors if used
>>> after output has started.
>>>
>>> \outfmt: a token identifying the current output format.
>>
>> See above: data in the same format as other engines is strongly preferred.
>
> Well, since there is no real consensus and these are not amongst the
> required---and with the identification of the engine, one could \input
> ad hoc macros---I will for now stick to my proposal.
Currently we can do something like
\catcode`\@=11
\ifdefined\pdfoutput
\let\pdf@output\pdfoutput
\else
\ifdefined\outputmode
\let\pdf@output\outputmode
\else
\newcount\pdf@output
\fi
\fi
then can use \pdf@output as a one-shot to know if we are in PDF or DVI
mode, and if we are using pdfTeX/LuaTeX we can also set it. With a
token-based indicator, we can of course set up something similar but it
gets longer. However, this is of course your call, and as you are not
making PDF, it's not so important (it's most useful for pdfTeX and
LuaTeX where both DVI and PDF are possible).
Joseph
```
http://www.ub.edu/focm2017/content/viewAbstract.php?code=710
#### Conference abstracts
Session B3 - Symbolic Analysis - Semi-plenary talk
July 14, 17:00 ~ 17:50
## The linear Mahler equation: linear and algebraic relations
### Boris Adamczewski (CNRS and Université de Lyon, France) - Boris.Adamczewski@math.cnrs.fr
A Mahler function is a solution, analytic in some neighborhood of the origin, of a linear difference equation associated with the Mahler operator $z\mapsto z^q$, where $q\geq 2$ is an integer. Understanding the nature of such functions at algebraic points of the complex open unit disc is an old number theoretical problem dating back to the pioneering works of Mahler in the late 1920s. In this talk, I will explain why it can be considered as totally solved now, after works of Ku. Nishioka, Philippon, Faverjon and the speaker.
Joint work with Colin Faverjon (France).
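For a concrete illustration of a Mahler function (a standard example of my own choosing, not taken from the abstract): the gap series $F(z)=\sum_{n\ge 0} z^{2^n}$ satisfies the linear Mahler equation $F(z^2)=F(z)-z$ for $q=2$. A truncated-series check:

```python
N = 64  # truncation degree

def coeffs_F(n):
    """Coefficients of F(z) = sum over k of z**(2**k), truncated at degree n."""
    c = [0] * (n + 1)
    k = 1
    while k <= n:
        c[k] = 1
        k *= 2
    return c

F = coeffs_F(N)

# F(z**2): the coefficient of z**(2j) is the coefficient of z**j in F.
F_of_z2 = [0] * (N + 1)
for j in range(N // 2 + 1):
    F_of_z2[2 * j] = F[j]

# F(z) - z: subtract 1 from the coefficient of z**1.
rhs = F[:]
rhs[1] -= 1

print(F_of_z2 == rhs)  # True: F(z^2) = F(z) - z up to degree N
```
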
FoCM 2017.
https://chemistry.stackexchange.com/tags/inorganic-chemistry/new
# Tag Info
1
It is actually the two unpaired electrons that are more likely to be promoted than the two paired electrons. Promoting electrons always costs energy, whether the electrons in question are paired or not. In the classical examples of hybridization (where a paired electron might actually be promoted), this energy cost is compensated by the formation of ...
0
If you ask for an "easy" way to determine if a ligand is a pi acceptor/donor or none, look at the spectrochemical series. A quick-and-dirty rule (which means that it is correct most of the time but not always) is that weak ligands (iodide, bromide, hydroxide etc) are pi-donor ligands. The medium ligands (water, ammonia etc) are pi-neutral, and the ...
0
My take is that the terms "inorganic benzene" and "inorganic graphite" are meant to relay thoughts of both structural and chemical likeness to organic counterparts; however, graphite is not an organic compound since it is not a compound. It is an allotrope of carbon, just like diamond, buckminsterfullerene and graphene. The IUPAC ...
0
Molecularity applies to single elementary reaction steps and is seldom higher than 2. The reaction order is the "effective molecularity" of the whole set of linked reactions, as if it had been a single-step reaction. It need not be an integer and has only an indirect relation to the molecularities of the elementary steps. The reaction order is either determined ...
3
In principle, the oxidation of gold is the half-reaction $$\ce{Au -> Au^+ + e^-}$$But as this oxidation is done in a cyanide medium, $\ce{Au^+}$ is included in a complex ion : $$\ce{Au + 2 CN^- -> Au(CN)_2^- + e^-}$$ The second half-equation is $$\ce{2H2O + O2 + 4 e^- -> 4 OH^-}$$ So the final equation describing the dissolution of gold in an ...
1
I think you have got the calculation right. However, I would also consider the coupling between F and H because coupling through 3 bonds is also usually visible. So, each of your 8 lines will be split into four by the H-atoms. That will be the major part of the spectrum. You would also get weak signals with a different splitting pattern due to the other ...
0
In short, $\ce{HNO3}$ is an oxidizing acid, so the destruction of the oxide layer competes with formation of new oxide. Something like this: $\ce{Al2O3 + 6H+ -> 2Al^3+ + 3H2O}$ $\ce{2Al + 2Al^3+ + 6NO3- -> 2Al2O3 + 6NO2}$ (<-- maybe incorrect) From Wikipedia: Although chromium (Cr), iron (Fe), and aluminium (Al) readily dissolve in dilute nitric ...
1
Lithium ion batteries are pretty poorly recycled as is and so when I was looking through literature reviews I found only one mention of lithium ion polymer batteries being recycled. The paper that was using this method was also recycling lithium ion batteries the same way though so it wasn't clear if anything different had to be done. The main thing that ...
4
I would not call graphite organic, but there is no clear-cut way of defining organic and inorganic compounds. To your question, the pairs of compounds are isoelectronic. That means that if we assume that molecular orbitals (MOs) arise from atomic orbitals, the corresponding MOs in the two compounds have the same occupancy. In these cases it arises because ...
1
During titration of small amounts of acid, the molar amount of the indicator in 1–2 drops of $\pu{1 \%}$ indicator solution may not be negligible compared to the molar amount of acid, affecting the result. So for those cases, a $\pu{0.1 \%}$ solution is used, to be able to dose smaller indicator amounts. As the phenolphthalein molar mass is about $M=\pu{318 g/mol}$,...
2
There are two little clues you can use. First, mercury(I) isn’t actually present as $\ce{Hg+}$; the cationic species is actually $\ce{Hg2^2+}$ with a $\ce{Hg-Hg}$ bond. Thus, if this were a mercury(I) compound, the sum formula would be $\ce{Hg2[Co(SCN)4]2}$. This obviously only helps you if the correct formula was supplied as part of the question but the ...
9
As pointed out by Maurice and myself, the argument provided in the answer by iad22agp is rather incorrect. Sulfuric acid is a dehydrating agent just because it is available in concentrated form whereas rest of the common acids like HCl(aq) are not. No, it is not a concentration effect. First of all, it is not possible to have HCl(aq) more concentrated than ...
2
Mixing $\ce{KCl + CaO}$, or $\ce{KCl + Ca(OH)2}$, will never produce pure $\ce{KOH}$ free of $\ce{Ca(OH)2}$. And this $\ce{Ca(OH)2}$ will prevent soap from being synthesized from oil, as $\ce{Ca(OH)2}$ destroys soap in case a little bit of soap has been synthesized. The only way of producing $\ce{KOH}$ from $\ce{CaO}$ or $\ce{Ca(OH)2}$ is to mix it with ...
6
A salt like $\ce{Na2SO4}$ is essential in electrolysis. It provides ions $\ce{Na+}$ and $\ce{SO4^{2-}}$, which are attracted by the electrodes in the solution and migrate to them. When they arrive near the electrodes, they are not discharged. But they neutralize the charges of the ions that are produced as water is decomposed at these electrodes. Let's ...
0
Isn't potassium carbonate available as fertilizer? It's a lot more common. Then it would be a simple matter of mixing $\ce{CaO}$ with water and adding $\ce{K2CO3}$. Eventually you'd be left with solid $\ce{CaCO3}$ and $\ce{KOH}$ in solution.
2
Assuming that "term" in your question refers to chemical nomenclature, the authoritative source of information would be the current edition of IUPAC Red Book [1]. From [1, p. 70]: IR-5.3.2.2 Monoatomic cations The name of a monoatomic cation is that of the element with an appropriate charge number appended in parentheses. […] $\ce{I+}$ iodine(1+) ...
1
Be aware that in the solubility comparison context, solubility product constants can be directly compared for compounds with the same number of ions created from the formula, where a greater solubility product means a greater solubility. For compounds with different ion counts, one has to compare ( molar ) solubilities in $\pu{[mol/L]}$ , calculated from ...
3
TL;DR: The mechanism is different for fluorine and chlorine atoms. Fluorine atoms react to form stable HF molecules, while chlorine atoms turn into radicals under UV, which helps in the destruction of ozone. Long answer: Ozone-depleting substances (ODS) are gases that take part in the ozone depletion process. Most of the ODS are primarily ...
-1
Here is a depiction of how 'chlorine' destroys ozone per an education source: CFC molecules are made up of chlorine, fluorine and carbon atoms and are extremely stable. This extreme stability allows CFC's to slowly make their way into the stratosphere (most molecules decompose before they can cross into the stratosphere from the troposphere). This prolonged ...
4
The solubility of $\ce{AgCl}$ is equal to $\sqrt{K_\mathrm{s}} = \pu{1.3E-5 M}.$ The solubility $s$ of $\ce{Ag2CO3}$ is such that $K_\mathrm{s} = 4s^3.$ So that its solubility $s$ is equal to $s = \pu{1.2E-4 M}.$ This is ten times more than the solubility of $\ce{AgCl}.$ For gravimetric purposes, $\ce{AgCl}$ is a better choice.
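A quick arithmetic check of the numbers above. The $K_s$ inputs below are typical tabulated values that I am assuming in order to reproduce the quoted solubilities; they are not given in the answer itself:

```python
# Assumed solubility products (common textbook figures):
Ks_AgCl = 1.8e-10
Ks_Ag2CO3 = 8.5e-12

# AgCl -> Ag+ + Cl-:  Ks = s**2, so s = sqrt(Ks)
s_AgCl = Ks_AgCl ** 0.5

# Ag2CO3 -> 2 Ag+ + CO3^2-:  Ks = (2s)**2 * s = 4*s**3, so s = (Ks/4)**(1/3)
s_Ag2CO3 = (Ks_Ag2CO3 / 4) ** (1 / 3)

print(f"s(AgCl)   = {s_AgCl:.2e} M")    # ~1.3e-05 M
print(f"s(Ag2CO3) = {s_Ag2CO3:.2e} M")  # ~1.3e-04 M, about ten times larger
```

This reproduces the answer's conclusion: despite its smaller $K_s$, $\ce{Ag2CO3}$ is roughly ten times more soluble than $\ce{AgCl}$, because of the different ion counts.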
1
You can use reducing agents to reduce the $\ce{SeO3^2-}$ salt to elemental selenium. This paper1 discusses the use of iron(II) salts in acidic medium (phosphoric - hydrochloric acid) as reducing agent. The reaction proceeds at r.t. You can use other reducing agents like hydrochloric acid, sulfur dioxide, hydroxylamine hydrochloride, hydrazine hydrochloride. ...
1
The relationship between ionisation energy and emission spectra is complex You make the assumption (implicitly) that the colour you see in emission spectra is from the single complete ionisation of an electron from the highest energy electron orbital in lithium. But the colour you see is not that simple for two reasons. One is that you only see visible light ...
4
I don't think it is a good idea to connect ionization energy with the color of flame emission, especially in a Bunsen burner. Ionization energy means that you have separated the electron out of the nucleus attractive field. However, electronic transitions, corresponding to visible wavelengths, take place while the electron is still bound to the nucleus. Thus ...
4
From this question on Physics SE, there is a linked audio track on Soundcloud here, which is what you are looking for. The OP references a jar of hair-styling gel which, upon being tapped with a mallet, produces a "reverberating" sound instead of an otherwise expected "short 'tok' sound." Note that the ringing gels are common ingredients in ...
3
You may have confused barium peroxide with barium metal. The metal can indeed displace hydrogen from hypochlorous acid, not to mention (for a metal as reactive as barium) from the water in which the acid is dissolved; but metal peroxides will produce oxygen and water from the acid. So your missing product is water, not hydrogen. Tricky half-reactions If we ...
6
Your reasoning is correct. Perhaps the answer key is mistaken. For part (a), in the reaction $$\ce{NH3 + H2O <=> NH4+ + OH-}$$ water is donating a proton (an $\ce{H+}$ ion) and hence is behaving as a Brønsted acid. Since all Brønsted acids are Lewis acids, water is behaving as a Lewis acid. For part (d) as well, the half reactions are: \begin{align} \...
0
I am not being rude, but honestly I think you should review the meaning of the spectrochemical series and how the transition metal complexes are colored. A ligand being strong or weak according to the spectrochemical series does not correlate to the complex being stable or not, but its ability to split the d orbitals of the metal. In the spectrochemical ...
4
The fallacy is the assumption that "inner $d$ orbitals" become "available". Generally they are not, in the Group 1 and Group 2 metals. There are rare occasions where some evidence of $d$-orbital bonding is found for heavier G2 elements, but the impact is small and not widespread. In Groups 1 and 2, where there are inner $d$ orbitals ...
-1
I guess that 1.081 refers to the distance (in ångström) separating the C and N atoms. You obtain this value by using the values listed in the table; this is quite tedious, takes some time, and requires quite good knowledge of crystallography. Otherwise (and much easier), you can use the crystallographic data listed in the table to ...
http://cogsci.stackexchange.com/tags/perception/new
# Tag Info
As mentioned in the definitions section on page 2 of the paper, $D(q\|p) = \langle \ln(q/p) \rangle_q$ is the Kullback-Leibler divergence or cross-entropy between two densities. The reason for the negative logs is that this is the convention when discussing entropy in the context of information theory. This allows information to be combined additively. ...
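A minimal sketch of that definition for discrete densities (the expectation under $q$ of $\ln(q/p)$; the example distributions are arbitrary):

```python
from math import log

def kl_divergence(q, p):
    """Kullback-Leibler divergence D(q||p) = sum_i q_i * ln(q_i / p_i).

    Assumes q and p are discrete distributions over the same support,
    with p_i > 0 wherever q_i > 0."""
    return sum(qi * log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

q = [0.5, 0.3, 0.2]
p = [0.4, 0.4, 0.2]
print(kl_divergence(q, p))  # > 0; zero exactly when q == p
print(kl_divergence(q, q))  # 0.0
```

The additivity mentioned above shows up because the log turns products of independent densities into sums.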
http://wwjc.legastiker.de/sparse-matrix-to-numpy-array.html
# Sparse Matrix To Numpy Array
`numpy.array` is not the same as the Standard Python Library class `array.array`. Given an arbitrary NumPy array (`ndarray`), there is a short way to convert it to a SciPy sparse matrix. Suppose we have a NumPy array with m columns and n rows, the columns being dimensions and the rows data points. The NumPy stack has a similar user base to applications such as MATLAB, GNU Octave, and Scilab. For SciPy sparse matrices, multiplication with `*` is matrix multiplication (the dot product), unlike NumPy's element-wise `*`, and passing a sparse matrix object to NumPy functions expecting an `ndarray`/`matrix` does not work. Let us first create some data in (i, j, v) format. In many library routines, an argument A can be a SciPy sparse matrix or a NumPy array. When we create a matrix, we generally know what type of data will be stored in it, how many dimensions it will have, and how many elements.
How do you know if you have a sparse matrix? Use Matplotlib's `spy()` method to visualize the pattern of nonzero entries. We create a 2-D array in NumPy and call it a matrix; calling `matrix()` with a NumPy array will convert the array to a matrix. Be aware that when objects are deleted or go out of scope, the memory used for those variables isn't freed up until a garbage collection is performed. A compressed sparse array class such as `csarray` represents what is essentially a 2-D matrix or 1-D vector with few nonzero elements. Among the SciPy formats, CSR and CSC are difficult to construct from scratch, while COO and DOK are easier to construct. A sparse matrix is a matrix in which most of the elements are zero.
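Since COO is among the easiest formats to construct from scratch, here is a minimal sketch of building one from (i, j, v) data (the toy values are my own):

```python
import numpy as np
from scipy.sparse import coo_matrix

i = np.array([0, 1, 2, 2])           # row indices
j = np.array([0, 2, 1, 2])           # column indices
v = np.array([4.0, 7.0, 5.0, 9.0])   # values

m = coo_matrix((v, (i, j)), shape=(3, 3))
print(m.toarray())
# [[4. 0. 0.]
#  [0. 0. 7.]
#  [0. 5. 9.]]

# Convert to CSR for fast arithmetic and row slicing.
print(m.tocsr()[2].toarray())  # [[0. 5. 9.]]
```

Duplicate (i, j) pairs in a COO constructor are summed when converting to CSR/CSC, which is convenient for accumulating counts.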
A sparse vector can be represented by SciPy's `csc_matrix` with a single column; we recommend using NumPy arrays over lists for efficiency, and using the factory methods implemented in `Vectors` to create sparse vectors. There are 7 different types of sparse matrices available in SciPy. In the example below, we define a 3 x 6 sparse matrix as a dense array, convert it to a CSR sparse representation, and then convert it back to a dense array by calling the `todense()` method. When feeding a NumPy array versus a sparse matrix into the scikit-learn logistic regression classifier, it did not seem to make much of a difference. Given an index and pointer array, it is possible to tell the beginning and end of each document.
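Here is a sketch of that dense-to-CSR round trip (the 3 x 6 values are my own, since the original array isn't shown):

```python
import numpy as np
from scipy.sparse import csr_matrix

# A 3 x 6 matrix with mostly zero entries.
dense = np.array([
    [1, 0, 0, 1, 0, 0],
    [0, 0, 2, 0, 0, 1],
    [0, 0, 0, 2, 0, 0],
])

sparse = csr_matrix(dense)      # compressed sparse row representation
print(sparse.nnz)               # 5 stored (non-zero) elements
restored = sparse.todense()     # back to a dense matrix
print(np.array_equal(restored, dense))  # True
```
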
find()¶ Returns three Numpy arrays to describe the sparsity pattern of self in so-called coordinate (or triplet) format:. True by default. NumPy 2D array(s), pandas DataFrame, H2O DataTable's Frame, SciPy sparse matrix; LightGBM binary file; The data is stored in a Dataset object. Let's say I also have a collection of scipy sparse matrices with the same dimensions as the numpy matrix. I'm using the SciPy sparse. , [1, 2, 3] and the following as sparse vectors: MLlib’s SparseVector. python arrays matlab scipy sparse-matrix |. For the sake of example, let's assume my symmetric sparse matrix is a 5x5: A = 1 -1 0 -3 0 -1 5 0 0 0 0 0 4 6 4 -3 0 6 7 0 0 0 4 0. If None, then the NumPy default is used. If file is a string or Path, a. CuPy is an open-source matrix library accelerated with NVIDIA CUDA. matrix which caused downstream problems. You can use numpy dense matrices. If the ratio of Number of Non-Zero elements to the size is less than 0. We'll also make use of the coo_matrix class from scipy. array(<50x5 sparse matrix of type '' with 50 stored elements in Compressed Sparse Column format>, dtype=object) I'm just a newbie who thought to use the usual pattern. It contains 2 rows and 3 columns. This builds on top of the scipy. csc_matrix, scipy. Parameters. By contrast, if most of the elements are nonzero, then the matrix is considered dense. xarray should only be used in higher-level APIs which interface with this low-level backend. So, in places below where you see “sparse matrix”, know that we really mean a “2D array” but, unlike a matrix, the array can be generalized to higher dimensions. In this article, we are going to learn how to implement a sparse matrix for 3-tuple method using an array in the data structure? Submitted by Manu Jemini, on December 19, 2017 A sparse matrix is a matrix in which most of the elements are zero. I ran into this problem a few months back. 
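The dense-to-CSR-and-back round trip described above can be sketched as follows; the 3 x 6 matrix values here are made up for illustration:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Define a 3 x 6 sparse matrix as a dense NumPy array (values made up).
dense = np.array([
    [1, 0, 0, 1, 0, 0],
    [0, 0, 2, 0, 0, 1],
    [0, 0, 0, 2, 0, 0],
])

# Convert to the CSR sparse representation...
S = csr_matrix(dense)
# ...and back to a dense array (todense() would return numpy.matrix instead).
roundtrip = S.toarray()

print(S.nnz)                             # 5 stored elements
print(np.array_equal(dense, roundtrip))  # True
```

Only the five nonzero values are stored in the CSR object, but the round trip reproduces the original array exactly.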
However, to future-proof ourselves from the soon-to-be deprecated 2-D numpy.matrix format, we can leverage the PyData Sparse package, which implements sparse arrays of arbitrary dimension on top of NumPy and SciPy — that is, sparse arrays that act like numpy.ndarray rather than numpy.matrix. Within SciPy itself, the COO format may be used to efficiently construct matrices: supply parallel row, col, and data arrays, or pass them as csr_matrix((data, (row_ind, col_ind)), shape=(M, N)) to get a CSR matrix directly; this is also how MATLAB's sparse(i, j, v, m, n) translates to SciPy. Internally, a CSR matrix is described by three NumPy arrays: indices is the array of column indices, data holds the corresponding nonzero values, and indptr points to the row starts within indices and data; indptr has length n_row + 1, and its last item equals the number of stored values (the length of both indices and data). Given the index and pointer arrays, it is possible to tell where each row begins and ends. A valid NumPy dtype can be passed to the scipy.sparse matrix constructors as the dtype argument, and mixed inputs follow NumPy's promotion rules (for example, float16 and float32 inputs give a float32 result).
A few more practical notes. In pandas, a special SparseIndex object tracks where data has been "sparsified", and R matrices and arrays are converted automatically to and from NumPy arrays. Reordering a symmetric sparse matrix mathematically corresponds to pre-multiplying it by a permutation matrix P and post-multiplying by P^-1 = P^T, but explicitly forming P is not a computationally reasonable solution. Avoid calling toarray() on large sparse matrices, since that makes them non-sparse and can crash the program with memory exhaustion. (Also, do not confuse numpy.array with the standard-library array module, which only handles one-dimensional arrays and offers less functionality.) A sparse matrix can be saved to disk in NumPy's .npz format with save_npz(filename, matrix[, compressed]); note that some third-party save_npz implementations are not binary compatible with SciPy's.
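The indices / indptr / data layout described above can be inspected directly on a small CSR matrix built from COO-style triplets (the values here are made up):

```python
import numpy as np
from scipy.sparse import csr_matrix

row = np.array([0, 0, 1, 2, 2])
col = np.array([0, 2, 2, 0, 1])
data = np.array([1, 2, 3, 4, 5])

# Build a 3x3 CSR matrix from (data, (row, col)) triplets.
A = csr_matrix((data, (row, col)), shape=(3, 3))

print(A.indices)  # column indices of the stored values
print(A.indptr)   # row start offsets; length is n_row + 1
print(A.data)     # the nonzero values themselves

# Row i occupies the slice indptr[i]:indptr[i+1] of indices and data.
```

Here row 0 has two stored values, row 1 has one, and row 2 has two, which is exactly what the indptr offsets encode.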
This was just an introduction into numpy matrices, on how to get started and do basic manipulations; sparse matrices support richer arithmetic than you might expect. Sparse matrices can be used in arithmetic operations: they support addition, subtraction, multiplication, division, and matrix power, and most of the linear algebra functions in NumPy and SciPy can also operate transparently on SciPy sparse arrays. Multiplication of two matrices X and Y is defined only if the number of columns in X equals the number of rows in Y, and with sparse operands the result stays sparse — W.dot(W.T) on a 1000 x 1000 CSR matrix returns another CSR matrix. Two caveats: for an arbitrary function, its application to a sparse matrix is not necessarily sparse, and a known problem appears in multiply(), which should produce a sparse matrix for sparse + dense inputs but doesn't always. For matrices that are not actually very sparse, multiplication using plain NumPy can ironically be faster than the sparse product. Calling matrix() with a NumPy array converts the array to a matrix, csr_matrix(A) constructs a CSR matrix from a dense NumPy array A, and for the inverse direction one can use todense() or toarray() to transform a SciPy sparse matrix back to a NumPy matrix or array.
Be aware that SciPy sparse matrices don't support the same API as the NumPy ndarray, so most ndarray methods won't work on them. Methods currently implemented on sparse array classes include min, max, mean, sum, std, var, trace, diag, transpose, dot, and pdot (a parallel dot product with ndarrays). Typical further tasks — arithmetic, inverses, eigenvalues, concatenation, saving — are covered in additional outside tutorials, such as the SciPy Lecture Notes or Elegant SciPy. Generating large test inputs is a one-liner: scipy.sparse.random(1000000, 100000, density=0.001) builds a 1M x 100K random sparse matrix containing 100M non-zero values.
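A scaled-down version of that random-matrix one-liner (smaller dimensions so it runs quickly; the parameters otherwise mirror the text):

```python
from scipy.sparse import random as sparse_random

# Scaled-down analogue of random(1000000, 100000, density=0.001).
M = sparse_random(1000, 100, density=0.001, format="csr", random_state=0)

print(M.shape)  # (1000, 100)
print(M.nnz)    # 100 nonzero entries: 0.001 * 1000 * 100

# Rule of thumb from the text: worth storing as sparse if the
# nonzero ratio is well below 0.5.
density = M.nnz / (M.shape[0] * M.shape[1])
print(density < 0.5)  # True
```

The full-size 1M x 100K call works identically but allocates roughly 100M stored values.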
If the NumPy array has a single data type for each array entry, it will be converted to an appropriate Python data type when building a graph (as in NetworkX); if it has a user-specified compound data type, the names of the data fields will be used as attribute keys in the resulting graph. Converting a DataFrame to an array follows similar promotion logic: by default, the dtype of the returned array is the common NumPy dtype of all types in the DataFrame. Sparse storage matters most in machine learning, where data is represented as arrays; the main motivation for using arrays in this manner is speed, since arrays make operations with large amounts of numeric data very fast. Given data with very few nonzero values, sparse storage is what you want — in simple words, suppose you have a 2-D matrix with hundreds of elements where only a few contain a non-zero value; storing all the zeros is pure waste. Utilities exist for creating, slicing, and merging arrays, reading and writing arrays to file, and reading and writing sparse matrices in svmlight format; when writing large sparse data, keep it sparse, otherwise you'll wind up with a huge file.
Transformers frequently emit sparse output: Scikit-Learn returns a SciPy sparse matrix for ndarrays passed to transform, and OneHotEncoder produces a Compressed Sparse Row matrix unless the sparse=False argument is passed to get a non-sparse array (the one-hot encoded boolean array's columns represent the unique items found in the input, in alphabetic order). When passed a Dask Array, OneHotEncoder.fit operates on it lazily. If a downstream method only accepts dense input, convert explicitly (e.g., using the toarray() method of the class) first before applying the method. Elementwise arithmetic between sparse operands stays sparse: adding a csr_matrix to another, as in W + csr_matrix(v), yields a 2 x 3 sparse result in CSR format, and the same applies to subtraction — though mixing numpy.ndarray and numpy.matrix operands in such expressions has caused downstream problems.
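A small self-contained sketch of that sparse arithmetic; W and v are made-up 2 x 3 values, echoing the `W + s` example in the text:

```python
import numpy as np
from scipy.sparse import csr_matrix, issparse

W = csr_matrix(np.array([[1, 0, 2],
                         [0, 3, 0]]))
v = np.array([[0, 0, 5],
              [0, 1, 0]])

# Sparse + sparse stays sparse (a 2x3 CSR result).
s = W + csr_matrix(v)

# Elementwise product of two sparse operands is also sparse.
p = W.multiply(csr_matrix(v))

print(issparse(s), issparse(p))  # True True
print(s.toarray())
print(p.toarray())
```

Densify with toarray() only for inspection or for APIs that require dense input.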
We provide only a brief overview of the Matrix Market exchange format on this page; a complete description is provided in the paper "The Matrix Market Formats: Initial Design", and io.mmwrite handles it from SciPy. The remaining sparse types round out the toolbox — bsr_matrix, for instance, is a Block Sparse Row matrix — and constructors generally let you choose the element dtype and the storage type (row or column major format). Watch out for integer overflow when the index arrays use too small an integer type: it can silently produce a nonsensical result, such as a reported <196980x43 sparse matrix with 70875 stored elements in Compressed Sparse Row format> where far more were expected. Also check which format an API expects: if a routine expects CSR and the matrix is a scipy.sparse.csc_matrix, some libraries will transpose it implicitly. Sparse matrices show up in optimization too — MATLAB's lsqlin and lsqnonneg can be reproduced in Python with scipy and cvxopt on sparse matrices — and randomized SVD implementations can handle both scipy.sparse and numpy inputs ("A can be a scipy sparse matrix or a numpy array" is a common pattern in API documentation).
I suspect the recurring question comes down to when to use a SciPy sparse matrix over a NumPy array: in practice, for any small matrix or a matrix with very few zeros, a NumPy array is preferable, because it allows almost all operations directly; sparse formats pay off only when zeros dominate. For higher dimensions, scipy.sparse has no sparse 3-D array (tensor) type; the PyData Sparse package is the usual answer. As an example of scale, NumPy can generate a random sparse matrix in R^10,000 x 10,000 with 20,000 non-zero entries between 0 and 1 without trouble.
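Saving and reloading, as mentioned above, round-trips through NumPy's .npz container; this sketch writes to a temporary directory:

```python
import os
import tempfile

import numpy as np
from scipy import sparse

# A small random CSR matrix (sizes made up for the example).
A = sparse.random(50, 40, density=0.05, format="csr", random_state=1)

# save_npz / load_npz round-trip through NumPy's .npz format.
path = os.path.join(tempfile.mkdtemp(), "A.npz")
sparse.save_npz(path, A)
B = sparse.load_npz(path)

print(B.shape)                                   # (50, 40)
print(np.array_equal(A.toarray(), B.toarray()))  # True
```

For exchange with other tools, io.mmwrite exports the same matrix in Matrix Market format instead.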
|
2020-02-27 19:25:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3045381009578705, "perplexity": 1510.7606622557498}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146809.98/warc/CC-MAIN-20200227191150-20200227221150-00090.warc.gz"}
|
https://julianstanley.com/blog/2022/prelim/
|
How do you study for a preliminary exam? I don’t know, I was hoping you could tell me.
I might write something insightful up here at some point. For now, just trying to make a daily log of what I’ve been doing.
Step 1: Setting goals to stay focused and motivated
It seems like common knowledge that it’s important to have goals, but I often overlook them when starting a project.
My goal with my preliminary exam isn’t just to pass the exam, which
Study Notes
Timeline
If I’m going to make progress, I need deadlines. All my deadlines are going to be for 4PM EST.
• Tuesday, August 9 (5 weeks): Rough draft of slides and presentation
• Tuesday, August 16 (4 weeks): Schedule practice prelims for next week
• Tuesday, August 23 (3 weeks):
• Monday, September 12 (1 day): No more preparation
Techniques Log
Pulse-Chase
For example, we want to see whether mazF inhibits ribosome formation. So, pulse 3H-Uridine and chase with cold uridine, run a polysome gradient, and you can see that 3H-uridine is incorporated into ribosomes in the absence of mazF, but not after expression of mazF.
This section will eventually go away, as I read these papers and put summaries in the daily logs.
Hockenberry 2018 for some hypotheses about why translation initiation varies.
Giess 2017 for another method that may help in identifying translation start sites.
Why is initiation important?
Hersch 2014 can affect stalling.
Does CDS Sequence matter?
Verma 2019 early elongation events matter for efficiency.
Daily Log
Wednesday, 3 August 2022
Summary: This is a letter in response to Nigam 2019, about the generation of proteins after mazF toxin induction by nalidixic acid (NA).
In the 42 genes identified by Nigam 2019, none have more ACA upstream of the start codon than in the set of all E. coli genes. For some of these genes, ACAs identified by Nigam are not in the mRNAs at all.
Nigam sees GFP changes upon NA treatment, and Wade & Laub argue that this is probably a non-mazF-dependent effect, especially since many of the identified genes are known to be stress-responsive genes.
2. Nigam 2019, mBio
Summary: This is Nigam et al.’s reply to Wade & Laub’s critique above. They say that they don’t claim that ACA is enriched before the start codon in these genes. They claim that induction of the stress transcription factor $\sigma^{32}$ is a result of modified ribosomes.
I’m not convinced by this response, so I’m not going to go further on the original paper–I think they’re detecting upregulation of a general stress response.
Tuesday, 2 August 2022
Progress for today
Not much progress outside of reading today.
1. Oron-Gottesman 2016, mBio
Summary: Used a fluorescent reporter +/- ACA sequences with induction of mazF with nalidixic acid to show some sequence dependencies of mazF.
Relevance to project: They have a leaderless construct, but not very relevant. Maybe worth looking at the EDF-like sequence in S1 for lmRNA expression?
A leaderless GFP mRNA, has 17 out-of-frame ACA sites that don’t interfere with expression after mazF induction. Adding an ACA site in the ORF prevented expression.
Really not impressed by the quality control in this paper. E.g., it cites Fig S1 about the effect of nalidixic acid on their reporter, but S1 has nothing to do with nalidixic acid–that’s S2. Also, their “OD600” axis is in the hundreds–is it OD * 100?
Basically we’re seeing that induction of mazF reduces the GFP level in constructs with in-frame ACA sites. There’s no difference in a mazEF knockout, and it increases GFP level in WT and when you add an AC before the start codon (they call this leaderless due to cleavage, but don’t show that).
They make an argument that only in-frame ACA sites are cleaved, substantiating their claim by comparing PCR band intensities.
Then they find that their GFP-up/down phenotype +/- mazF induction can be reversed by making a mutation to the EDF-like sequence in bS1.
They conclude that there may be a bias away from ACAs being in frame, to protect from mazF cleavage.
I’m not convinced by this paper–the figures are not very clean, there are not multiple veins of evidence for each conclusion. It highlights that I really need to make sure I can think critically about the techniques raised in the controversy over the formation of stress-induced translation machinery, should also read Wade 2019 and the response Nigam 2019 and Vesper 2011, and Culviner 2018.
2. Culviner 2018
Summary: Used high-throughput RNA sequencing and ribosome profiling to carefully quantify mazF products. Leaderless RNAs aren’t upregulated in mazF stress.
Relevance to project: The Moll lab has argued that mazF creates lots of lmRNAs and specialized ribosomes. This paper shows that that’s not the case. How do we explain results from Moll and colleagues?
Take a $\Delta$mazF strain, put mazF on an ara-inducible promoter, low-copy plasmid. Induce mazF for just 5 minutes, take paired-end sequencing reads, and look for regions where density decreases. Confirm patterns with single-gene RT-qPCR.
82% of genes were highly-cleaved (2-fold or more downregulated, compared to empty vector) after mazF induction.
They also compare this to 5’-OH sequencing, but that’s not well-correlated–likely because of different 5’-end stabilities.
Not all ACAs are cleaved at the same rate, so they took some sequence logos and think that there’s a ~7-nt region.
They confirmed that these flanking sequences are important by selecting some of them across different RNA-seq scores, taking them out of context into a reporter, and then qPCR quantifying them, and that correlated as-expected.
They find a few genes that, by this method, may be leaderless after mazF induction (just 41). They measure ribosome footprints for all genes, and find that none of those leaderless genes have any substantial increase in footprints.
To put the nail in the coffin, they added an ACA site between the RBS and start codon of YFP, and did not see any increase in fluorescence after mazF induction.
They also saw traffic jams of ribosomes upstream of identified cleavage sites.
As far as specialized ribosomes: they see clear cleavage products at the ACA sequences in nascent 16S rRNA sequences, which inhibits rRNA maturation. They pulse-chased hot uridine
Monday, 1 August 2022
I want to focus on a few things over the course of this study period:
• Write my project proposal
• Better understand and revise my project proposal
• Map between my proposal and common techniques / ways of thinking from 1st year course material
• Record my progress each day
Progress for today:
1. Start to write a messy version of the preliminary exam proposal
1. Not much progress here, just got the header/abstract done
Summary: Used a fluorescent reporter +/- ACA sequences and +/- a leader under expression of mazF.
Relevance to project: They use a “leaderless” construct, but they don’t specify the promoter (native GFP promoter?) or show a 5’ start, but that may have been shown in reference paper (Oron-Gottesman 2016, will read next). In both plasmids, they are not able to see expression of leaderless GFP over background, except in the high-copy-number-plasmid after 6h mazF induction. So, their lmRNA GFP is not highly expressed.
Relevant data points:
1. Their “leaderless” GFP is expressed at the same level as no-reporter, whereas canonical GFP is ~40-fold in exponential phase.
Figure summary:
Figure 1 shows GFP (under leaderless or canonical mRNA, both without ACA sequences) after induction of mazF. In a high copy plasmid, there’s clear increase in GFP concentration compared to no-plasmid control.
Figure 2 shows that the amount of increase of fluorescence after mazF induction is lower for mCherry (with ACA) than GFP (without ACA)
|
2022-08-13 16:14:20
|
https://huggingface.co/EMBEDDIA/est-roberta
|
# Usage
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("EMBEDDIA/est-roberta", use_fast=False)
model = AutoModelForMaskedLM.from_pretrained("EMBEDDIA/est-roberta")
NOTE: it is currently critically important to add the use_fast=False parameter to the tokenizer if using transformers version 4+ (prior versions have use_fast=False as the default). By default it attempts to load a fast tokenizer, which might appear to work (i.e. not raise an error) but will not tokenize correctly, as there is currently no fast-tokenizer support for Camembert-based models.
# Est-RoBERTa
Est-RoBERTa model is a monolingual Estonian BERT-like model. It is closely related to French Camembert model https://camembert-model.fr/. The Estonian corpora used for training the model have 2.51 billion tokens in total. The subword vocabulary contains 40,000 tokens.
Est-RoBERTa was trained for 40 epochs.
Mask token: <mask>
|
2021-09-25 16:33:05
|
https://savoirs.usherbrooke.ca/handle/11143/5887/statistics
|
Total Visits
Monte Carlo simulation of the radiolysis of water by fast neutrons at elevated temperatures up to 350°C: 467 views
File Visits
Butarbutar_Sofia_Loren_MSc_2014.pdf: 368 views
Views by country
China: 220
United States: 98
France: 74
Indonesia: 21
Germany: 13
|
2019-03-20 11:30:58
|
https://plainmath.net/83243/x-and-y-are-geometric-rv-s-with-paramete
|
# X and Y are geometric RV's with parameter p. A) P{X+Y=n}(n=1,2,...) ?
Jaxon Hamilton 2022-07-18 Answered
X and Y are geometric RV's with parameter p.
A) $P\left\{X+Y=n\right\}\left(n=1,2,...\right)$?
Cael Cox
Step 1
I am assuming X and Y are independent. Without this assumption, it is not possible to answer your question without further information.
Step 2
So, you have to be a little more clever: Suppose we let $Z=X+Y$. We wish to find $\Pr[Z=n]$. Note that we can write $$\Pr[Z=n]=\sum_{y=0}^{n}\Pr[(X=n-y)\cap(Y=y)]=\sum_{y=0}^{n}\Pr[X=n-y]\,\Pr[Y=y].$$
Now you know what each of these probabilities is, so write them out, and calculate the sum.
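To make the last step concrete, here is a small Python sketch (not from the original answer, and assuming the support-$\{1,2,\ldots\}$ convention with pmf $p(1-p)^{k-1}$) that evaluates the convolution sum numerically and checks it against the closed form $(n-1)p^2(1-p)^{n-2}$:

```python
import math

def geom_pmf(k, p):
    # Geometric pmf on support {1, 2, ...}: Pr(X = k) = p * (1-p)^(k-1)
    return p * (1 - p) ** (k - 1) if k >= 1 else 0.0

def conv_pmf(n, p):
    # Convolution: Pr[X + Y = n] = sum over k of Pr[X = k] * Pr[Y = n - k]
    return sum(geom_pmf(k, p) * geom_pmf(n - k, p) for k in range(1, n))

p = 0.3
for n in range(2, 12):
    closed_form = (n - 1) * p**2 * (1 - p) ** (n - 2)
    assert math.isclose(conv_pmf(n, p), closed_form)
```

Writing the sum out this way also makes it visible that $X+Y$ is negative binomial with $r=2$ successes.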
Step 1
For the shifted geometric distribution, counting $X$ failures before the first success on trial $X+1$: $X \sim \mathcal{SGeo}(p) \iff \Pr(X=x) = p(1-p)^{x}$
Step 2
For the geometric distribution of $X-1$ failures before the first success on trial $X$:
$X \sim \mathcal{Geo}(p) \iff \Pr(X=x) = p(1-p)^{x-1}$
|
2022-10-05 11:28:51
|
http://tex.stackexchange.com/questions/6182/how-to-shorten-the-length-of-an-arrow?answertab=oldest
|
# How to shorten the length of an arrow?
I use tkz-graph.sty from Altermundus. When I make an arrow from vertex A to vertex B, the arrow is too long: it goes from the center of A to the center of B. Can I shorten the arrow on both ends by a factor? In PSTricks, I used nodesep, but that doesn't work in a tikzpicture.
Welcome to TeX&Co! I hope you enjoy your stay! If you post a MWE (Minimal Working Example) it would encourage people to change your code so that it works, rather than have to try to guess the setup you are interested in....(just some friendly advice) – Yossi Farjoun Nov 29 '10 at 21:20
Without an example it's hard to answer, but have you tried the anchors? Each node in TikZ has a set of anchors (refer to the excellent pgf manual), like "north", "west", "southeast" and so on. – Matten Nov 29 '10 at 22:10
You can use the shorten > and shorten < options. For example
\documentclass{minimal}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
\fill (0,0) circle (0.05);
\fill (2,0) circle (0.05);
\draw[shorten >=0.5cm,shorten <=1cm,->] (0,0) -- (2,0);
\end{tikzpicture}
\end{document}
produces an arrow that starts 1cm after the first point and stops 0.5cm before the second.
However, there are few situations where you have to use that directly. Usually it is better to choose an anchor of the node (e.g. mynode.east) and maybe set the inner sep and outer sep options of the nodes.
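For illustration (a sketch, not from the original answer): when the endpoints are named nodes with a drawn shape rather than bare coordinates, TikZ clips the edge at the node borders automatically, so no manual shortening is needed:

```latex
\documentclass{minimal}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
  % Named nodes with a visible shape; edges between node names
  % stop at the node borders automatically.
  \node[circle, draw, inner sep=2pt] (A) at (0,0) {A};
  \node[circle, draw, inner sep=2pt] (B) at (2,0) {B};
  \draw[->] (A) -- (B);                        % clipped at both borders
  \draw[->] (A.south east) -- (B.south west);  % explicit anchors
\end{tikzpicture}
\end{document}
```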
Maybe if you post an example of what you are trying to achieve, we can figure out what the best approach in that case is.
Thanks Caramdir, I used shorten and calculated my own factor- works perfectly, thanks – Douglass Nov 29 '10 at 22:53
|
2014-07-30 21:38:56
|
http://mathoverflow.net/questions?page=4&sort=newest
|
# All Questions
### On the existence of a square root for a unitary modular tensor category
The centre $Z(\mathcal{C})$ of a fusion category $\mathcal{C}$, is a unitary modular tensor category. Question: What about the converse, i.e., can we characterize every unitary modular tensor ...
I proposed my conjecture as follows: Let $f(x)$ be a real continuous function on $[m, M]$ with $f'>0, f''>0$ on $[m, M]$, and let $m \le x_i \le M$ for $i=1, 2,..., n$. Then $\frac{f(x_1)+f(x_2)+\cdots}{\dots}$ ...
### What techniques are available for constructing D-modules over smooth projective varieties?
I'm trying to learn about D-modules for computing intersection cohomology but I'm having trouble coming up with explicit constructions of D-modules on projective varieties. Since this is an involved ...
### Covering Number of a Positive Semidefinite Cone (Approximate the Objective of a SDP)
I was wondering what the covering number of a positive semidefinite cone is. Consider the semidefinite optimization program $\max\langle \mathbf{C}, \mathbf{X} \rangle$ subject to ...
### Research topics in Curves and Surfaces [on hold]
I advance that I'm not a mathematician but I'm an undergraduate student of mathematics. In my courses at university I have studied a bit of Differential Geometry, in particular differential geometry ...
### Where can I find basic “computations” of equivariant stable homotopy groups?
I am new to this subject; so please correct me if I say something wrong or if you don't like my notation. In particular, I don't know whether it is reasonable to consider an infinite group G (...
### Combinatorial identity involving number of cycles (of any length) in a permutation
I am going through Phil Hanlon's paper and on page 127, right after the first paragraph, "It is well known that..", which boils down to the following identity: $\prod_{i=0}^{n-1}(\beta-i) = \sum \dots$ ...
Composition Diamond lemma for Lie algebra over a field $F$ has already been investigated in several papers : L.Bokut and Y.Q.Chen Groebner-Shirshov bases for Lie algebras and A.I Shirshov, ...
### ($\oplus$, $\otimes$) is a semiring. If $\otimes$ = +, what are the possible operators $\oplus$?
Assume that ($\oplus$, $\otimes$) is a semiring over the non-negative reals. If $\otimes$ is +, what are the possible operators for $\oplus$? So far I have proven that ...
### Reference request for a well-known lemma in Parabolic Vector Bundle
In the paper- "Moduli Space of parabolic vector bundles on a curve" - Usha N Bhosle, Indranil Biswas-Beitr Algebra Geom (2012), 53:437-449, DOI: 10.1007/s13366-011-0053-7, Lemma $2.1$ is being ...
### Poincaré–Bendixson theorem on the torus
I was reading the paper A Generalization of a Poincaré-Bendixson Theorem to Closed Two-Dimensional Manifolds by Arthur J. Schwartz which proves the following theorem: THEOREM. Let $M$ be a ...
### An inequality in cyclic polygon and tangential polygon
I proposed my conjecture, it is strengthened version of the Erdős–Mordell inequality as following: Let $A_1A_2.....A_n$ be a cyclic polygon and $B_1B_2....B_n$ be the its tangential polygon. Let P be ...
### Extending homomorphisms between ordered abelian groups
Let $\Omega$ be a linearly (i.e. fully) ordered set, and let $\Lambda_{\Omega}$ be the ordered abelian group consisting of those $(\lambda_\omega)_{\omega\in\Omega}\in\mathbb{R}^{\Omega}$ with well-...
### Is a one-dimensional compact complex analytic space necessarily projective?
Let $X$ be a compact complex analytic space with singular locus $X^{\mathrm{sing}}$. Suppose that $X\setminus X^{\mathrm{sing}}$ is a Riemann surface. If $X^{\mathrm{sing}} = \emptyset$, then $X$ is ...
### Group bundles for topological spaces without universal cover
I‘m currently writing my Bachelor Thesis on (Co-)Homology with local coefficients. Let me first describe the situation: There are two approaches in defining Homology with local coefficients of a ...
### Hodge decomposition on open manifold
For the open manifold like $X\times \mathbb R$ or $X\times \mathbb R^+$, where $X$ is a closed manifold. Is there any decomposition like (Hodge Decomposition) of the Differential forms on it.
### Cohen's model yet again
It has been discussed already whether a countable OD set necessarily contains an OD element. See e.g. A question about ordinal definable real numbers . A negative answer was obtained in Archive for ...
### A generalization of Erdős–Mordell inequality [on hold]
I proposed my conjecture generalization of Erdős–Mordell inequality as following: Let $A_1A_2....A_n$ be a polygon in a plane, $P$ be the point in $A_1A_2....A_n$. Let $d_i$ be the distances from $P$ ...
### Hyperbola application [on hold]
A curved mirror is placed in a store for a wide angle view of the room. the right hand branch of x squared over one minus y squared over three equals one models the curvature of the mirror. a small ...
### Algorithm for Longest Common Subtour
For a new kind of heuristic for the TSP I need to calculate the longest subtour, that is common to a set $T_1,\ ...,\ T_m$ of tours, that are "good" approximations of the optimal tour $T_{opt}$. By a ...
### How many number of finite points exists inside the circle? [on hold]
I am doing project on Image processing dealing with circular images. So I need an approximate number of pixels present inside circle image of radius R and Circle center of (x,y). Please give me the ...
### Computing canonical forms from orbit partitions
Suppose we know the orbit partition of the vertices of a graph (due to the action of its automorphism group). Is it easy (as in "polynomial time") to generate a canonical form (aka "canonical labeling"...
### How to compute $[CP^2, G/PL]$?
Let $E^4$ be the two stage Postnikov space appearing in the homotopy type of the classifying space $G/PL$. One of its properties is that it only has two nontrivial homotopy groups $\pi_2(E)=Z/2Z$ and \$...
|
2016-07-28 18:21:32
|
http://mathhelpforum.com/calculus/174308-finding-equation-curve.html
|
# Math Help - Finding the equation of a curve
1. ## Finding the equation of a curve
So I am studying for a maths exam I have tomorrow and came to a question I couldn't figure out.
At any point (x,y) on a curve d^2 y/dx^2 = 12x + 2
Find the Equation of the curve if it passes through (1,8) and the gradient of the tangent at this point is 9.
Thanks for any help.
2. Originally Posted by Ploppies
So I am studying for a maths exam I have tomorrow and came to a question I couldn't figure out.
At any point (x,y) on a curve d^2 y/dx^2 = 12x + 2
Find the Equation of the curve if it passes through (1,8) and the gradient of the tangent at this point is 9.
Thanks for any help.
This is the 2nd derivative of the function y = f(x).
1. $\dfrac{dy}{dx}=\int(12x+2)dx=6x^2+2x+c = f'(x)$
You know that $f'(1)=9$ . Determine c. Thus:
2. $y = \int(6x^2+2x+1)dx=2x^3+x^2+x+d=f(x)$
You know that $f(1)=8$. Determine d.
3. The function therefore has the equation: $y = f(x)=2x^3+x^2+x+4$
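Not part of the original thread: a quick numerical sanity check in Python that the final curve satisfies all three conditions, using plain central finite differences for the derivatives:

```python
def f(x):
    # Candidate curve from the thread: y = 2x^3 + x^2 + x + 4
    return 2 * x**3 + x**2 + x + 4

def diff(g, x, h=1e-6):
    # Central first difference approximates g'(x)
    return (g(x + h) - g(x - h)) / (2 * h)

def diff2(g, x, h=1e-4):
    # Central second difference approximates g''(x); exact for cubics up to rounding
    return (g(x + h) - 2 * g(x) + g(x - h)) / h**2

assert f(1) == 8                        # passes through (1, 8)
assert abs(diff(f, 1) - 9) < 1e-6       # gradient of the tangent at x = 1 is 9
assert all(abs(diff2(f, x) - (12 * x + 2)) < 1e-3 for x in [-2.0, 0.5, 3.0])
```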
|
2015-07-01 12:09:33
|
https://magesblog.com/post/2013-01-08-reserving-based-on-log-incremental/
|
# Reserving based on log-incremental payments in R, part I
A recent post on the PirateGrunt blog on claims reserving inspired me to look into the paper Regression models based on log-incremental payments by Stavros Christofides [1], published as part of the Claims Reserving Manual (Version 2) of the Institute of Actuaries.
The paper is available together with a spreadsheet model, illustrating the calculations. It is very much based on ideas by Barnett and Zehnwirth, see [2] for a reference. However, doing statistical analysis in a spread sheet programme is often cumbersome. I will go through the first 15 pages of Christofides’ paper today and illustrate how the model can be implemented in R.
## Page D5.4
tri <- t(matrix(
c(11073, 6427, 1839, 766,
14799, 9357, 2344, NA,
15636, 10523, NA, NA,
16913, NA, NA, NA),
nc = 4,
dimnames=list(origin = 0:3, dev = 0:3)))
The above triangle shows incremental claims payments for four origin (accident) years over time (development years). It is the aim to predict the bottom right triangle of future claims payments, assuming no further claims after four development years.
Christofides’ model assumes the following structure for the incremental paid claims $$P_{ij}$$:
\begin{aligned} \ln(P_{ij}) & = Y_{ij} = a_i + b_j + \epsilon_{ij} \end{aligned}
where i and j go from 0 to 3, $$b_0=0$$ and $$\epsilon_{ij} \sim N(0, \sigma^2)$$. Unlike the basic chain-ladder method, this is a stochastic model that allows me to test my assumptions and calculate various statistics, e.g. standards errors of my predictions. For the purpose of my analysis I will work with the data in form of a data frame:
m <- dim(tri)[1]; n <- dim(tri)[2]
dat <- data.frame(
origin=rep(0:(m-1), n),
dev=rep(0:(n-1), each=m),
value=as.vector(tri))
rownames(dat) <- with(dat, paste(origin, dev, sep="-"))
dat <- dat[order(dat$origin),]
dat
## origin dev value
## 0-0 0 0 11073
## 0-1 0 1 6427
## 0-2 0 2 1839
## 0-3 0 3 766
## 1-0 1 0 14799
## 1-1 1 1 9357
## 1-2 1 2 2344
## 1-3 1 3 NA
## 2-0 2 0 15636
## 2-1 2 1 10523
## 2-2 2 2 NA
## 2-3 2 3 NA
## 3-0 3 0 16913
## 3-1 3 1 NA
## 3-2 3 2 NA
## 3-3 3 3 NA
I add a few columns to my data, in particular factor variables of the origin and development years, a calendar year dimension and the log value of the paid claims.
## Add dimensions as factors
dat <- with(dat, data.frame(origin, dev, cal=origin+dev, value,
logvalue=log(value),
originf=factor(origin),
devf=as.factor(dev),
calf=as.factor(origin+dev)))
rownames(dat) <- with(dat, paste(origin, dev, sep="-"))
dat <- dat[order(dat$origin),]
dat # Page D5.7
## origin dev cal value logvalue originf devf calf
## 0-0 0 0 0 11073 9.312265 0 0 0
## 0-1 0 1 1 6427 8.768263 0 1 1
## 0-2 0 2 2 1839 7.516977 0 2 2
## 0-3 0 3 3 766 6.641182 0 3 3
## 1-0 1 0 1 14799 9.602315 1 0 1
## 1-1 1 1 2 9357 9.143880 1 1 2
## 1-2 1 2 3 2344 7.759614 1 2 3
## 1-3 1 3 4 NA NA 1 3 4
## 2-0 2 0 2 15636 9.657331 2 0 2
## 2-1 2 1 3 10523 9.261319 2 1 3
## 2-2 2 2 4 NA NA 2 2 4
## 2-3 2 3 5 NA NA 2 3 5
## 3-0 3 0 3 16913 9.735838 3 0 3
## 3-1 3 1 4 NA NA 3 1 4
## 3-2 3 2 5 NA NA 3 2 5
## 3-3 3 3 6 NA NA 3 3 6
I have done all the preparation and can carry out the linear regression with lm:
Fit <- lm(logvalue ~ originf + devf + 0, data=dat)
summary(Fit) # Page D5.7/8
##
## Call:
## lm(formula = logvalue ~ originf + devf + 0, data = dat)
##
## Residuals:
## 0-0 0-1 0-2 0-3 1-0 1-1 1-2
## 2.389e-02 -5.396e-02 3.007e-02 6.939e-18 1.118e-02 1.889e-02 -3.007e-02
## 2-0 2-1 3-0
## -3.507e-02 3.507e-02 0.000e+00
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## originf0 9.28837 0.04001 232.17 1.76e-07 ***
## originf1 9.59114 0.04001 239.73 1.60e-07 ***
## originf2 9.69240 0.04277 226.62 1.89e-07 ***
## originf3 9.73584 0.05238 185.86 3.43e-07 ***
## devf1 -0.46615 0.04277 -10.90 0.00165 **
## devf2 -1.80146 0.05015 -35.92 4.75e-05 ***
## devf3 -2.64719 0.06591 -40.16 3.40e-05 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.05238 on 3 degrees of freedom
## (6 observations deleted due to missingness)
## Multiple R-squared: 1, Adjusted R-squared: 1
## F-statistic: 4.03e+04 on 7 and 3 DF, p-value: 1.884e-07
The above output shows the same results as the paper. The estimators for the origin periods are clearly different from zero, yet the tests for originf1 to originf3 don’t make much sense. It would be more appropriate to understand if I need different parameters for each origin period and hence a model such as lm(logvalue ~ originf + devf, data=dat) can be more helpful. It would test the coefficients of the later origin periods against the base of the first one.
Lets look at the residuals:
# Residual plots
op <- par(mfrow=c(2,2))
attach(model.frame(Fit))
plot.default(rstandard(Fit) ~ originf,
main="Residuals vs. origin years")
abline(h=0, lty=2)
plot.default(rstandard(Fit) ~ devf,
main="Residuals vs. dev. years")
abline(h=0, lty=2)
with(na.omit(dat),
plot.default(rstandard(Fit) ~ calf,
main="Residuals vs. payments years"))
abline(h=0, lty=2)
plot.default(rstandard(Fit) ~ logvalue,
main="Residuals vs. fitted")
abline(h=0, lty=2)
detach(model.frame(Fit))
par(op)
The residual plots don’t give any reason to dismiss the model, so I continue.
In my next step I extract the design matrix from the model and build the future design matrix from the data. I will need both to calculate the prediction and standard errors on the original scale to follow the paper:
# Model design matrix, page D5.7
dm <- model.matrix(formula(Fit), dat=model.frame(Fit))
dm
## originf0 originf1 originf2 originf3 devf1 devf2 devf3
## 0-0 1 0 0 0 0 0 0
## 0-1 1 0 0 0 1 0 0
## 0-2 1 0 0 0 0 1 0
## 0-3 1 0 0 0 0 0 1
## 1-0 0 1 0 0 0 0 0
## 1-1 0 1 0 0 1 0 0
## 1-2 0 1 0 0 0 1 0
## 2-0 0 0 1 0 0 0 0
## 2-1 0 0 1 0 1 0 0
## 3-0 0 0 0 1 0 0 0
## attr(,"assign")
## [1] 1 1 1 1 2 2 2
## attr(,"contrasts")
## attr(,"contrasts")$originf
## [1] "contr.treatment"
##
## attr(,"contrasts")$devf
## [1] "contr.treatment"
## Future design matrix, page D5.11
fdm <- model.matrix( ~ originf + devf + 0, data=dat[is.na(dat$value),])
fdm
## originf0 originf1 originf2 originf3 devf1 devf2 devf3
## 1-3 0 1 0 0 0 0 1
## 2-2 0 0 1 0 0 1 0
## 2-3 0 0 1 0 0 0 1
## 3-1 0 0 0 1 1 0 0
## 3-2 0 0 0 1 0 1 0
## 3-3 0 0 0 1 0 0 1
## attr(,"assign")
## [1] 1 1 1 1 2 2 2
## attr(,"contrasts")
## attr(,"contrasts")$originf
## [1] "contr.treatment"
##
## attr(,"contrasts")$devf
## [1] "contr.treatment"
Following the paper I can calculate the variance-covariance matrix:
# Page D5.12
# fdm %*% solve(t(dm)%*%dm) %*% t(fdm) * sigma^2, or shorter
varcovar <- fdm %*% vcov(Fit) %*% t(fdm)
round(varcovar,4)
## 1-3 2-2 2-3 3-1 3-2 3-3
## 1-3 0.0046 0.0000 0.0037 0.0000 0.0000 0.0037
## 2-2 0.0000 0.0034 0.0021 0.0000 0.0021 0.0007
## 2-3 0.0037 0.0021 0.0053 0.0000 0.0007 0.0039
## 3-1 0.0000 0.0000 0.0000 0.0046 0.0037 0.0037
## 3-2 0.0000 0.0021 0.0007 0.0037 0.0053 0.0039
## 3-3 0.0037 0.0007 0.0039 0.0037 0.0039 0.0071
With the above matrix I can derive the variance of my future values, which are the diagonal elements of the above matrix plus the model variance $$\sigma^2$$. Now I have everything to calculate the future claims amounts and standard errors on the original scale. Recall that for a lognormal distribution I have $$E(X) = \exp(\mu + \frac{1}{2} \sigma^2)$$ and $$Var(X) = \exp(2\mu + \sigma^2)(\exp(\sigma^2) - 1)$$.
# Page D5.12
sigma <- summary(Fit)$sigma
sigma
## [1] 0.05238207
# 0.05238207
Var <- varcovar + sigma^2
VarY <- diag(Var)
Y <- fdm %*% coef(Fit)
P <- exp(Y + VarY/2)
VarP <- exp(2*Y + VarY)*(exp(VarY)-1)
seP <- sqrt(VarP)
i <- fdm %*% c((1:m)-1, rep(0, (n-1)))
j <- fdm %*% c(rep(0, (m-1)), (1:n)-1)
Results <- data.frame(i,j, Y, VarY, P, VarP, seP)
Results # Page D5.13
## i j Y VarY P VarP seP
## 1-3 1 3 6.943950 0.007317016 1040.658 7953.165 89.18052
## 2-2 2 2 7.890940 0.006173732 2681.219 44519.839 210.99725
## 2-3 2 3 7.045210 0.008002987 1151.950 10662.490 103.25933
## 3-1 3 1 9.269688 0.007317016 10650.334 833010.240 912.69395
## 3-2 3 2 7.934378 0.008002987 2802.814 63121.859 251.24064
## 3-3 3 3 7.088648 0.009832241 1204.192 14327.851 119.69900
Well, it is actually easier in R, as I could have used the function predict, which does most of the above behind the scenes:
newData <- dat[is.na(dat$logvalue), c("originf", "devf")]
Pred <- predict(Fit, newdata=newData, se.fit=TRUE)
Y <- Pred$fit
VarY <- Pred$se.fit^2 + Pred$residual.scale^2
fdm <- model.matrix(~ originf + devf + 0, data=newData)
Never mind, let's complete the triangle:
lower.tri <- xtabs(P ~ i+j, dat=Results)
lower.tri
## j
## i 1 2 3
## 1 0.000 0.000 1040.658
## 2 0.000 2681.219 1151.950
## 3 10650.334 2802.814 1204.192
Full.Incr.Triangle <- tri
Full.Incr.Triangle[row(tri) > (nrow(tri) + 1 - col(tri))] <-
lower.tri[row(lower.tri) > (nrow(lower.tri) - col(lower.tri))]
Full.Cum.Triangle <- apply(Full.Incr.Triangle, 1, cumsum)
Full.Cum.Triangle
## dev
## origin 0 1 2 3
## 0 11073 14799.00 15636.00 16913.00
## 1 17500 24156.00 26159.00 27563.33
## 2 19339 26500.00 28840.22 30366.15
## 3 20105 27540.66 29992.17 31570.34
Finally I want to estimate the overall error for my total reserve, the sum of all future incremental claims payments. I have all the values already and using the sweep statement in R it is easy to calculate the overall variance for the total reserve, but I have to account for the co-variances first:
# Page D5.14
# Calculate the co-variance between the predictions
CoVar <- sweep(sweep((exp(varcovar)-1), 1, P, "*"), 2, P, "*")
# I set the values on the diagonal to zero as I have to use
# the variances I calculated earlier (VarP),
# which includes the model variance sigma^2 as well.
CoVar[col(CoVar)==row(CoVar)] <- 0
round(CoVar,0)
## 1-3 2-2 2-3 3-1 3-2 3-3
## 1-3 0 0 4394 0 0 4593
## 2-2 0 0 6363 0 15481 2216
## 2-3 4394 6363 0 0 2216 5403
## 3-1 0 0 0 0 109410 47006
## 3-2 0 15481 2216 109410 0 13145
## 3-3 4593 2216 5403 47006 13145 0
# Add the variances together to estimate the overall variance
OverallVar <- sum(CoVar) + sum(VarP)
Total.SE <- sqrt(OverallVar)
Total.SE
## [1] 1180.698
Overall.Reserve <- sum(lower.tri)
Overall.Reserve
## [1] 19531.17
Total.SE / Overall.Reserve
## [1] 0.06045198
That’s it: the projected reserve is 19,531 with an estimated overall standard error of 1,181, or just 6% of the overall reserve estimate.
For comparison here is the output of a Mack chain-ladder model [3] run on the same triangle using the ChainLadder package [4]:
library(ChainLadder)
##
## Welcome to ChainLadder version 0.2.16
## To cite package 'ChainLadder' in publications use:
##
## Gesmann M, Murphy D, Zhang Y, Carrato A, Wuthrich M, Concina F, Dal
## Moro E (2022). _ChainLadder: Statistical Methods and Models for
## Claims Reserving in General Insurance_. R package version 0.2.16,
##
## To suppress this message use:
## suppressPackageStartupMessages(library(ChainLadder))
M <- MackChainLadder(incr2cum(tri), est.sigma="Mack")
M$FullTriangle
## dev
## origin 0 1 2 3
## 0 11073 17500.00 19339.00 20105.00
## 1 14799 24156.00 26500.00 27549.64
## 2 15636 26159.00 28785.83 29926.01
## 3 16913 27632.15 30406.90 31611.29
M
## MackChainLadder(Triangle = incr2cum(tri), est.sigma = "Mack")
##
## Latest Dev.To.Date Ultimate IBNR Mack.S.E CV(IBNR)
## 0 20,105 1.000 20,105 0 0.0 NaN
## 1 26,500 0.962 27,550 1,050 31.3 0.0298
## 2 26,159 0.874 29,926 3,767 177.1 0.0470
## 3 16,913 0.535 31,611 14,698 948.7 0.0645
##
## Totals
## Latest: 89,677.00
## Dev: 0.82
## Ultimate: 109,191.94
## IBNR: 19,514.94
## Mack.S.E 980.34
## CV(IBNR): 0.05
The Mack chain-ladder results are actually quite similar to the log-linear model, but provide only point estimators without a distribution.
### Conclusions
It is actually very straightforward to implement the regression model based on log-incremental payments in R. In particular I can run my code for any other triangle size without having to adjust any of my formulas, something which can sometimes be quite time-intensive and error prone to do in spread sheets.
Here is the reduced model of the second example in Christofides' paper, which I cover in more detail in a second post:
dat <- data.frame( # Page D5.17
origin = rep(0:6, each=7),
dev = rep(0:6, 7),
value = c(3511, 3215, 2266, 1712, 1059, 587, 340,
4001, 3702, 2278, 1180, 956, 629, NA,
4355, 3932, 1946, 1522, 1238, NA, NA,
4295, 3455, 2023, 1320, NA, NA, NA,
4150, 3747, 2320, NA, NA, NA, NA,
5102, 4548, NA, NA, NA, NA, NA,
6283, NA, NA, NA, NA, NA, NA))
dat <- with(dat, data.frame(origin, dev, cal=origin+dev, value,
logvalue=log(value),
a6 = ifelse(origin == 6, 1, 0),
a5 = ifelse(origin == 5, 1, 0),
d = ifelse(dev < 1, 1, 0),
s = ifelse(dev < 1, 0, dat$dev)))
summary(Fit <- lm(logvalue ~ a5 + a6 + d + s, data=na.omit(dat)))
##
## Call:
## lm(formula = logvalue ~ a5 + a6 + d + s, data = na.omit(dat))
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.21567 -0.04910 0.00654 0.05137 0.27198
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 8.60795 0.05150 167.142 < 2e-16 ***
## a5 0.24353 0.08517 2.859 0.008870 **
## a6 0.44111 0.12170 3.625 0.001421 **
## d -0.30345 0.06779 -4.476 0.000172 ***
## s -0.43967 0.01666 -26.390 < 2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.1119 on 23 degrees of freedom
## Multiple R-squared: 0.9804, Adjusted R-squared: 0.977
## F-statistic: 287.7 on 4 and 23 DF, p-value: < 2.2e-16
I am sure there are better ways to fit such models in R, in particular for dealing with the design matrix and changing it, e.g. to reduce the model and take out non-significant factor levels, although reducing a model in that way seems to be a bit of a no-no.
Jim Guszcza has a few more tips on reserving in his webinar on Actuarial Analytics in R and the ChainLadder package has further stochastic reserving methods already implemented.
You can find the code of this post as a gist on GitHub. Feedback, comments, tips and hints will be much appreciated.
### Session Info
R version 2.15.2 Patched (2013-01-01 r61512)
Platform: x86_64-apple-darwin9.8.0/x86_64 (64-bit)
locale:
[1] en_GB.UTF-8/en_GB.UTF-8/en_GB.UTF-8/C/en_GB.UTF-8/en_GB.UTF-8
attached base packages:
[1] grid splines stats graphics grDevices utils datasets methods base
other attached packages:
lme4_0.999999-0 ggplot2_0.9.3 coda_0.16-1 biglm_0.8 DBI_0.2-5
reshape2_1.2.2 actuar_1.1-5 RUnit_0.4.26 systemfit_1.1-14 lmtest_0.9-30
zoo_1.7-9 car_2.0-15 nnet_7.3-5 MASS_7.3-22 Matrix_1.0-10 lattice_0.20-10
Hmisc_3.10-1 survival_2.37-2
loaded via a namespace (and not attached):
[1] cluster_1.14.3 colorspace_1.2-0 dichromat_1.2-4 digest_0.6.0
gtable_0.1.2 labeling_0.1 minqa_1.2.1 munsell_0.4 nlme_3.1-106
plyr_1.8 proto_0.3-10 RColorBrewer_1.0-5 sandwich_2.2-9 scales_0.2.3
stats4_2.15.2 stringr_0.6.2
### Citation
Markus Gesmann (Jan 08, 2013) Reserving based on log-incremental payments in R, part I. Retrieved from https://magesblog.com/post/2013-01-08-reserving-based-on-log-incremental/
BibTeX citation:
@misc{ 2013-reserving-based-on-log-incremental-payments-in-r-part-i,
author = { Markus Gesmann },
title = { Reserving based on log-incremental payments in R, part I },
url = { https://magesblog.com/post/2013-01-08-reserving-based-on-log-incremental/ },
year = { 2013 },
updated = { Jan 08, 2013 }
}
https://ashtavakra.org/c-programming/macros/
# 12. Preprocessing Directives¶
The following comes from section 6.10 of the C99 specification. The specification material terminates when you see the usage code starting. :-)
Description
A preprocessing directive consists of a sequence of preprocessing tokens that begins with a # preprocessing token that (at the start of translation phase 4) is either the first character in the source file (optionally after white space containing no new-line characters) or that follows white space containing at least one new-line character, and is ended by the next new-line character. [1] A new-line character ends the preprocessing directive even if it occurs within what would otherwise be an invocation of a function-like macro.
A text line shall not begin with a # preprocessing token. A non-directive shall not begin with any of the directive names appearing in the syntax.
When in a group that is skipped (12.1), the directive syntax is relaxed to allow any sequence of preprocessing tokens to occur between the directive name and the following new-line character.
[1] Thus, preprocessing directives are commonly called “lines”. These “lines” have no other syntactic significance, as all white space is equivalent except in certain situations during preprocessing (see the # character string literal creation operator in 12.3.2, for example).
Constraints
The only white-space characters that shall appear between preprocessing tokens within a preprocessing directive (from just after the introducing # preprocessing token through just before the terminating new-line character) are space and horizontal-tab (including spaces that have replaced comments or possibly other white-space characters in translation phase 3).
Semantics
The implementation can process and skip sections of source files conditionally, include other source files, and replace macros. These capabilities are called preprocessing, because conceptually they occur before translation of the resulting translation unit.
The preprocessing tokens within a preprocessing directive are not subject to macro expansion unless otherwise stated.
EXAMPLE In:
#define EMPTY
EMPTY # include <file.h>
the sequence of preprocessing tokens on the second line is not a preprocessing directive, because it does not begin with a # at the start of translation phase 4, even though it will do so after the macro EMPTY has been replaced.
## 12.1. Conditional Inclusion¶
Constraints
The expression that controls conditional inclusion shall be an integer constant expression except that: it shall not contain a cast; identifiers (including those lexically identical to keywords) are interpreted as described below; [2] and it may contain unary operator expressions of the form:
defined identifier
or:
defined (identifier)
which evaluate to 1 if the identifier is currently defined as a macro name (that is, if it is predefined or if it has been the subject of a #define preprocessing directive without an intervening #undef directive with the same subject identifier), 0 if it is not.
[2] Because the controlling constant expression is evaluated during translation phase 4, all identifiers either are or are not macro names - there simply are no keywords, enumeration constants, etc.
Semantics
Preprocessing directives of the forms:
# if constant-expression new-line group_opt
# elif constant-expression new-line group_opt
check whether the controlling constant expression evaluates to nonzero.
Prior to evaluation, macro invocations in the list of preprocessing tokens that will become the controlling constant expression are replaced (except for those macro names modified by the defined unary operator), just as in normal text. If the token defined is generated as a result of this replacement process or use of the defined unary operator does not match one of the two specified forms prior to macro replacement, the behavior is undefined. After all replacements due to macro expansion and the defined unary operator have been performed, all remaining identifiers are replaced with the pp-number 0, and then each preprocessing token is converted into a token. The resulting tokens compose the controlling constant expression which is evaluated according to the rules of constant expressions. For the purposes of this token conversion and evaluation, all signed integer types and all unsigned integer types act as if they have the same representation as, respectively, the types intmax_t and uintmax_t defined in the header <stdint.h>. [3] This includes interpreting character constants, which may involve converting escape sequences into execution character set members. Whether the numeric value for these character constants matches the value obtained when an identical character constant occurs in an expression (other than within a #if or #elif directive) is implementation-defined. [4] Also, whether a single-character character constant may have a negative value is implementation-defined.
Preprocessing directives of the forms:
# ifdef identifier new-line group_opt
# ifndef identifier new-line group_opt
check whether the identifier is or is not currently defined as a macro name. Their conditions are equivalent to #if defined identifier and #if !defined identifier respectively.
Each directive’s condition is checked in order. If it evaluates to false (zero), the group that it controls is skipped: directives are processed only through the name that determines the directive in order to keep track of the level of nested conditionals; the rest of the directives’ preprocessing tokens are ignored, as are the other preprocessing tokens in the group. Only the first group whose control condition evaluates to true (nonzero) is processed. If none of the conditions evaluates to true, and there is a #else directive, the group controlled by the #else is processed; lacking a #else directive, all the groups until the #endif are skipped. [5]
Forward references: macro replacement (12.3), source file inclusion (12.2), largest integer types (13.18.1.5).
[3] Thus, on an implementation where INT_MAX is 0x7FFF and UINT_MAX is 0xFFFF, the constant 0x8000 is signed and positive within a #if expression even though it would be unsigned in translation phase 7.
[4] Thus, the constant expression in the following #if directive and if statement is not guaranteed to evaluate to the same value in these two contexts.
#if 'z' - 'a' == 25
if ('z' - 'a' == 25)
[5] As indicated by the syntax, a preprocessing token shall not follow a #else or #endif directive before the terminating new-line character. However, comments may appear anywhere in a source file, including within a preprocessing directive.
## 12.2. Source File Inclusion¶
Constraints
A #include directive shall identify a header or source file that can be processed by the implementation.
Semantics
A preprocessing directive of the form:
# include <h-char-sequence> new-line
searches a sequence of implementation-defined places for a header identified uniquely by the specified sequence between the < and > delimiters, and causes the replacement of that directive by the entire contents of the header. How the places are specified or the header identified is implementation-defined.
A preprocessing directive of the form:
# include "q-char-sequence" new-line
causes the replacement of that directive by the entire contents of the source file identified by the specified sequence between the " delimiters. The named source file is searched for in an implementation-defined manner. If this search is not supported, or if the search fails, the directive is reprocessed as if it read:
# include <h-char-sequence> new-line
with the identical contained sequence (including > characters, if any) from the original directive.
A preprocessing directive of the form:
# include pp-tokens new-line
(that does not match one of the two previous forms) is permitted. The preprocessing tokens after include in the directive are processed just as in normal text. (Each identifier currently defined as a macro name is replaced by its replacement list of preprocessing tokens.) The directive resulting after all replacements shall match one of the two previous forms. [6] The method by which a sequence of preprocessing tokens between a < and a > preprocessing token pair or a pair of " characters is combined into a single header name preprocessing token is implementation-defined.
The implementation shall provide unique mappings for sequences consisting of one or more letters or digits followed by a period (.) and a single letter. The first character shall be a letter. The implementation may ignore the distinctions of alphabetical case and restrict the mapping to eight significant characters before the period.
A #include preprocessing directive may appear in a source file that has been read because of a #include directive in another file, up to an implementation-defined nesting limit.
Forward references: macro replacement (12.3).
[6] Note that adjacent string literals are not concatenated into a single string literal; thus, an expansion that results in two string literals is an invalid directive.
## 12.3. Macro Replacement¶
Constraints
Two replacement lists are identical if and only if the preprocessing tokens in both have the same number, ordering, spelling, and white-space separation, where all white-space separations are considered identical.
An identifier currently defined as an object-like macro shall not be redefined by another #define preprocessing directive unless the second definition is an object-like macro definition and the two replacement lists are identical. Likewise, an identifier currently defined as a function-like macro shall not be redefined by another #define preprocessing directive unless the second definition is a function-like macro definition that has the same number and spelling of parameters, and the two replacement lists are identical.
There shall be white-space between the identifier and the replacement list in the definition of an object-like macro.
If the identifier-list in the macro definition does not end with an ellipsis, the number of arguments (including those arguments consisting of no preprocessing tokens) in an invocation of a function-like macro shall equal the number of parameters in the macro definition. Otherwise, there shall be more arguments in the invocation than there are parameters in the macro definition (excluding the …). There shall exist a ) preprocessing token that terminates the invocation.
The identifier __VA_ARGS__ shall occur only in the replacement-list of a function-like macro that uses the ellipsis notation in the parameters.
A parameter identifier in a function-like macro shall be uniquely declared within its scope.
Semantics
The identifier immediately following the define is called the macro name. There is one name space for macro names. Any white-space characters preceding or following the replacement list of preprocessing tokens are not considered part of the replacement list for either form of macro.
If a # preprocessing token, followed by an identifier, occurs lexically at the point at which a preprocessing directive could begin, the identifier is not subject to macro replacement.
A preprocessing directive of the form:
# define identifier replacement-list new-line
defines an object-like macro that causes each subsequent instance of the macro name [7] to be replaced by the replacement list of preprocessing tokens that constitute the remainder of the directive.
A preprocessing directive of the form:
# define identifier lparen identifier-listopt ) replacement-list new-line
# define identifier lparen ... ) replacement-list new-line
# define identifier lparen identifier-list , ... ) replacement-list new-line
defines a function-like macro with arguments, similar syntactically to a function call. The parameters are specified by the optional list of identifiers, whose scope extends from their declaration in the identifier list until the new-line character that terminates the #define preprocessing directive. Each subsequent instance of the function-like macro name followed by a ( as the next preprocessing token introduces the sequence of preprocessing tokens that is replaced by the replacement list in the definition (an invocation of the macro). The replaced sequence of preprocessing tokens is terminated by the matching ) preprocessing token, skipping intervening matched pairs of left and right parenthesis preprocessing tokens. Within the sequence of preprocessing tokens making up an invocation of a function-like macro, new-line is considered a normal white-space character.
The sequence of preprocessing tokens bounded by the outside-most matching parentheses forms the list of arguments for the function-like macro. The individual arguments within the list are separated by comma preprocessing tokens, but comma preprocessing tokens between matching inner parentheses do not separate arguments. If there are sequences of preprocessing tokens within the list of arguments that would otherwise act as preprocessing directives, [8] the behavior is undefined.
If there is a … in the identifier-list in the macro definition, then the trailing arguments, including any separating comma preprocessing tokens, are merged to form a single item: the variable arguments. The number of arguments combined is such that, following merger, the number of arguments is one more than the number of parameters in the macro definition (excluding the …).
[7] Since, by macro-replacement time, all character constants and string literals are preprocessing tokens, not sequences possibly containing identifier-like subsequences, they are never scanned for macro names or parameters.
[8] Despite the name, a non-directive is a preprocessing directive.
### 12.3.1. Argument Substitution¶
After the arguments for the invocation of a function-like macro have been identified, argument substitution takes place. A parameter in the replacement list, unless preceded by a # or ## preprocessing token or followed by a ## preprocessing token (see below), is replaced by the corresponding argument after all macros contained therein have been expanded. Before being substituted, each argument’s preprocessing tokens are completely macro replaced as if they formed the rest of the preprocessing file; no other preprocessing tokens are available.
An identifier __VA_ARGS__ that occurs in the replacement list shall be treated as if it were a parameter, and the variable arguments shall form the preprocessing tokens used to replace it.
### 12.3.2. The # Operator¶
Constraints
Each # preprocessing token in the replacement list for a function-like macro shall be followed by a parameter as the next preprocessing token in the replacement list.
Semantics
If, in the replacement list, a parameter is immediately preceded by a # preprocessing token, both are replaced by a single character string literal preprocessing token that contains the spelling of the preprocessing token sequence for the corresponding argument. Each occurrence of white space between the argument's preprocessing tokens becomes a single space character in the character string literal. White space before the first preprocessing token and after the last preprocessing token composing the argument is deleted. Otherwise, the original spelling of each preprocessing token in the argument is retained in the character string literal, except for special handling for producing the spelling of string literals and character constants: a \ character is inserted before each " and \ character of a character constant or string literal (including the delimiting " characters), except that it is implementation-defined whether a \ character is inserted before the \ character beginning a universal character name. If the replacement that results is not a valid character string literal, the behavior is undefined. The character string literal corresponding to an empty argument is "". The order of evaluation of # and ## operators is unspecified.
### 12.3.3. The ## Operator¶
Constraints
A ## preprocessing token shall not occur at the beginning or at the end of a replacement list for either form of macro definition.
Semantics
If, in the replacement list of a function-like macro, a parameter is immediately preceded or followed by a ## preprocessing token, the parameter is replaced by the corresponding argument’s preprocessing token sequence; however, if an argument consists of no preprocessing tokens, the parameter is replaced by a placemarker preprocessing token instead. [9]
For both object-like and function-like macro invocations, before the replacement list is reexamined for more macro names to replace, each instance of a ## preprocessing token in the replacement list (not from an argument) is deleted and the preceding preprocessing token is concatenated with the following preprocessing token. Placemarker preprocessing tokens are handled specially: concatenation of two placemarkers results in a single placemarker preprocessing token, and concatenation of a placemarker with a non-placemarker preprocessing token results in the non-placemarker preprocessing token. If the result is not a valid preprocessing token, the behavior is undefined. The resulting token is available for further macro replacement. The order of evaluation of ## operators is unspecified.
[9] Placemarker preprocessing tokens do not appear in the syntax because they are temporary entities that exist only within translation phase 4.
### 12.3.4. Rescanning and Further Replacement¶
After all parameters in the replacement list have been substituted and # and ## processing has taken place, all placemarker preprocessing tokens are removed. Then, the resulting preprocessing token sequence is rescanned, along with all subsequent preprocessing tokens of the source file, for more macro names to replace.
If the name of the macro being replaced is found during this scan of the replacement list (not including the rest of the source file’s preprocessing tokens), it is not replaced. Furthermore, if any nested replacements encounter the name of the macro being replaced, it is not replaced. These nonreplaced macro name preprocessing tokens are no longer available for further replacement even if they are later (re)examined in contexts in which that macro name preprocessing token would otherwise have been replaced.
The resulting completely macro-replaced preprocessing token sequence is not processed as a preprocessing directive even if it resembles one, but all pragma unary operator expressions within it are then processed as specified in 12.9 below.
### 12.3.5. Scope of Macro Definitions¶
A macro definition lasts (independent of block structure) until a corresponding #undef directive is encountered or (if none is encountered) until the end of the preprocessing translation unit. Macro definitions have no significance after translation phase 4.
A preprocessing directive of the form:
# undef identifier new-line
causes the specified identifier no longer to be defined as a macro name. It is ignored if the specified identifier is not currently defined as a macro name.
## 12.4. Line Control¶
Constraints
The string literal of a #line directive, if present, shall be a character string literal.
Semantics
The line number of the current source line is one greater than the number of new-line characters read or introduced in translation phase 1 while processing the source file to the current token.
A preprocessing directive of the form:
# line digit-sequence new-line
causes the implementation to behave as if the following sequence of source lines begins with a source line that has a line number as specified by the digit sequence (interpreted as a decimal integer). The digit sequence shall not specify zero, nor a number greater than 2147483647.
A preprocessing directive of the form:
# line digit-sequence "s-char-sequenceopt" new-line
sets the presumed line number similarly and changes the presumed name of the source file to be the contents of the character string literal.
A preprocessing directive of the form:
# line pp-tokens new-line
(that does not match one of the two previous forms) is permitted. The preprocessing tokens after line on the directive are processed just as in normal text (each identifier currently defined as a macro name is replaced by its replacement list of preprocessing tokens). The directive resulting after all replacements shall match one of the two previous forms and is then processed as appropriate.
## 12.5. Error directive¶
Semantics
A preprocessing directive of the form:
# error pp-tokensopt new-line
causes the implementation to produce a diagnostic message that includes the specified sequence of preprocessing tokens.
## 12.6. Pragma Directive¶
Semantics
A preprocessing directive of the form:
# pragma pp-tokensopt new-line
where the preprocessing token STDC does not immediately follow pragma in the directive (prior to any macro replacement) [10] causes the implementation to behave in an implementation-defined manner. The behavior might cause translation to fail or cause the translator or the resulting program to behave in a non-conforming manner. Any such pragma that is not recognized by the implementation is ignored.
If the preprocessing token STDC does immediately follow pragma in the directive (prior to any macro replacement), then no macro replacement is performed on the directive, and the directive shall have one of the following forms whose meanings are described elsewhere:
#pragma STDC FP_CONTRACT on-off-switch
#pragma STDC FENV_ACCESS on-off-switch
#pragma STDC CX_LIMITED_RANGE on-off-switch
on-off-switch: one of
ON OFF DEFAULT
Forward references: the FP_CONTRACT pragma (13.12.2), the FENV_ACCESS pragma (13.6.1), the CX_LIMITED_RANGE pragma (13.3.4).
[10] An implementation is not required to perform macro replacement in pragmas, but it is permitted except for in standard pragmas (where STDC immediately follows pragma). If the result of macro replacement in a non-standard pragma has the same form as a standard pragma, the behavior is still implementation-defined; an implementation is permitted to behave as if it were the standard pragma, but is not required to.
## 12.7. Null Directive¶
Semantics
A preprocessing directive of the form:
# new-line
has no effect.
## 12.8. Predefined Macro Names¶
The following macro names shall be defined by the implementation:
__DATE__ The date of translation of the preprocessing translation unit: a character string literal of the form “Mmm dd yyyy”, where the names of the months are the same as those generated by the asctime function, and the first character of dd is a space character if the value is less than 10. If the date of translation is not available, an implementation-defined valid date shall be supplied.
__FILE__ The presumed name of the current source file (a character string literal) [11]
__LINE__ The presumed line number (within the current source file) of the current source line (an integer constant). [11]
__STDC__ The integer constant 1, intended to indicate a conforming implementation.
__STDC_HOSTED__ The integer constant 1 if the implementation is a hosted implementation or the integer constant 0 if it is not.
__STDC_VERSION__ The integer constant 199901L. [12]
__TIME__ The time of translation of the preprocessing translation unit: a character string literal of the form “hh:mm:ss” as in the time generated by the asctime function. If the time of translation is not available, an implementation-defined valid time shall be supplied.
The following macro names are conditionally defined by the implementation:
__STDC_IEC_559__ The integer constant 1, intended to indicate conformance to the specifications in annex F (IEC 60559 floating-point arithmetic).
__STDC_IEC_559_COMPLEX__ The integer constant 1, intended to indicate adherence to the specifications in informative annex G (IEC 60559 compatible complex arithmetic).
__STDC_ISO_10646__ An integer constant of the form yyyymmL (for example, 199712L). If this symbol is defined, then every character in the Unicode required set, when stored in an object of type wchar_t, has the same value as the short identifier of that character. The Unicode required set consists of all the characters that are defined by ISO/IEC 10646, along with all amendments and technical corrigenda, as of the specified year and month.
The values of the predefined macros (except for __FILE__ and __LINE__) remain constant throughout the translation unit.
None of these macro names, nor the identifier defined, shall be the subject of a #define or a #undef preprocessing directive. Any other predefined macro names shall begin with a leading underscore followed by an uppercase letter or a second underscore.
The implementation shall not predefine the macro __cplusplus, nor shall it define it in any standard header.
Forward references: the asctime function (13.23.3.1), standard headers (13.1.2).
[11] The presumed source file name and line number can be changed by the #line directive.
[12] This macro was not specified in ISO/IEC 9899:1990 and was specified as 199409L in ISO/IEC 9899/AMD1:1995. The intention is that this will remain an integer constant of type long int that is increased with each revision of International Standard.
## 12.9. Pragma Operator¶
Semantics
A unary operator expression of the form:
_Pragma ( string-literal )
is processed as follows: The string literal is destringized by deleting the L prefix, if present, deleting the leading and trailing double-quotes, replacing each escape sequence \" by a double-quote, and replacing each escape sequence \\ by a single backslash. The resulting sequence of characters is processed through translation phase 3 to produce preprocessing tokens that are executed as if they were the pp-tokens in a pragma directive. The original four preprocessing tokens in the unary operator expression are removed.
The specification material ends at this point; now we will see usage of the directives discussed above.
## 12.10. Usage¶
Note that for this part the compilation command should be gcc -E filename.c. Let us create two files test.c and test1.c and their contents are given below respectively.
### 12.10.1. #include¶
#include "test1.c"
I am test.
#include "test.c"
I am test1.
Keep both the files in same directory and execute gcc -E test.c you will see following:
# 1 "test.c"
# 1 "test.c" 1
# 1 "<built-in>" 1
# 1 "<built-in>" 3
# 143 "<built-in>" 3
# 1 "<command line>" 1
# 1 "<built-in>" 2
# 1 "test.c" 2
# 1 "./test1.c" 1
...
In file included from test.c:1:
In file included from ./test1.c:1:
In file included from test.c:1:
In file included from ./test1.c:1:
In file included from test.c:1:
In file included from ./test1.c:1:
...
In file included from test.c:1:
./test1.c:1:10: error: #include nested too deeply
#include "test.c"
I am test1.
# 2 "test.c" 2
I am test
# 2 "./test1.c" 2
I am test1.
# 2 "test.c" 2
I am test
# 2 "./test1.c" 2
I am test1.
# 2 "test.c" 2
As you can see, test.c includes test1.c and test1.c includes test.c. They include each other, which causes nested includes. After processing for some time the preprocessor's head starts spinning as if it had drunk a full bottle of rum, and it bails out. As you know, headers are included in all meaningful C programs, and headers include each other as well. This mutual inclusion can easily lead to nested inclusion, so how do header authors circumvent this problem? Well, a technique has been devised, known popularly as the header guard. The lines which have the form # number "file" are actually # line directives (linemarkers emitted by the preprocessor).
Consider following code:
#ifndef ANYTHING
#define ANYTHING
#include "test1.c"
I am test.
#endif
#ifndef ANYTHING_ELSE
#define ANYTHING_ELSE
#include "test.c"
I am test1.
#endif
Now what happens is that when test.c is included, ANYTHING is defined, and when test1.c is included via it, ANYTHING_ELSE is defined. After the first round of inclusion no more inclusion can happen, as governed by the directives. Please see the headers of the standard library for the naming conventions used in place of ANYTHING.
### 12.10.2. Why We Need Headers¶
Now that we have seen the #include directive, I would like to explain why we even need header files. Header files contain several elements of the libraries which come with C: for example, function prototypes, structure/type declarations, macros, global variable declarations, etc. The actual code resides inside *.a or *.so library files on a GNU/Linux OS. Now consider the case that we want to access a C function of the standard library. The compilation phase requires that the prototype of the function be known at compilation time. If we did not have headers we would have no way to provide this function prototype at compile time. The same holds true for global variables: their declarations must be known at compilation time. Take any language: there has to be a mechanism to include code from other files, be it the use directive of Perl or the import of Python or any other mechanism of any other language.
### 12.10.3. #define
#define and #include are probably the most frequently encountered directives in C files. #define has many uses. We will first look at plain text replacement and at function-like usage, both of which can often be avoided in favor of global constants and inline functions. First, let us see the text replacement functionality of #define. Consider the following code fragment:
#define MAX 5
MAX
I am MAX
Now run it through gcc -E filename.c and you will get the following output:
# 1 "test.c"
# 1 "<built-in>"
# 1 "<command-line>"
# 1 "test.c"
5
I am 5
As you can see, both occurrences of MAX are replaced by the text 5. This is the simplest form of text replacement, most commonly used for array sizes and symbolic constants. Another form is the function-like form, which was shown in 10.4.
The bad part of both is that they do not enter the symbol table, which makes code hard to debug. The former can be replaced by const variables and the latter by inline functions.
Another use of #define is simply to introduce a name. For example, recall our header example: header guards usually declare something like this:
#ifndef SOMETHING
#define SOMETHING
#endif
As you can see, #define is used to define SOMETHING, so the second time around the conditional inclusion #ifndef will fail. A definition can also be tested with the defined operator, as in #if defined(SOMETHING): if SOMETHING has been defined, the test passes. Similarly, #ifdef SOMETHING can be used as a shortcut. The normal if-else statements have preprocessing counterparts in #if, #elif, #else and #endif.
### 12.10.4. #undef
Anything defined by #define can be undefined by #undef. For example, consider the following code:
#define test
#ifdef test
//do something
#endif
#undef test
#ifdef test
//do something else
#endif
If you do this, the first block (//do something) is compiled, while the second is not, because test has been undefined by the time the second #ifdef is evaluated.
### 12.10.5. # and ##
In a function-like macro, # stringizes the following argument (turning it into a string literal) and ## pastes two adjacent tokens into one. The following examples, the first of which comes from the C standard, illustrate both:
#define hash_hash # ## #
#define mkstr(a) # a
#define in_between(a) mkstr(a)
#define join(c, d) in_between(c hash_hash d)
char p[] = join(x, y); //char p[]="x ## y"
#define FIRST a # b     /* object-like macro: # is an ordinary token here, no stringizing */
#define SECOND a ## b   /* ## still pastes, producing the single token ab */
char first[] = FIRST;   /* expands to: char first[] = a # b;  -- not valid C */
char second[] = SECOND; /* expands to: char second[] = ab; */
### 12.10.6. #error
This one is simple. Consider the following:
#include <stdio.h>
int main()
{
# error MAX
return 0;
}
If you try to compile this like gcc filename.c then you will get following:
gcc test.c
test.c:5:5: error: #error MAX
# error MAX
^
1 error generated.
You can combine #error with #if, though I have yet to see much purposeful code written that way. Non-preprocessing constructs are better for handling such situations; #error should be used only when you want to test a preprocessing token.
### 12.10.7. #pragma
#pragma is dependent on what follows it. You should consult compiler documentation as it is mostly implementation-defined.
### 12.10.8. Miscellaneous
Usage of __LINE__, __FILE__, __DATE__ and __TIME__ is simple, as shown in the following example:
#include <stdio.h>
int main()
{
printf("%s:%d:%s:%s", __FILE__, __LINE__, __DATE__, __TIME__);
return 0;
}
and the output is:
test.c:5:Jun 24 2012:11:24:57
This concludes our discussion of macros. The rest of the book will describe the standard library.
http://nrich.maths.org/279/clue
Approximations, Euclid's Algorithm & Continued Fractions
This article sets some puzzles and describes how Euclid's algorithm and continued fractions are related.
Euclid's Algorithm II
We continue the discussion given in Euclid's Algorithm I, and here we shall discover when an equation of the form ax+by=c has no solutions, and when it has infinitely many solutions.
Solving with Euclid's Algorithm
A java applet that takes you through the steps needed to solve a Diophantine equation of the form Px+Qy=1 using Euclid's algorithm.
The original number is $10000x + y$. What is the new number in terms of $x$ and $y$? Use the information to write down a Diophantine equation.
https://physics.stackexchange.com/questions/148243/energy-resolution-of-lhc-electromagnetic-calorimeter
Energy resolution of LHC Electromagnetic Calorimeter
So I am trying to get an estimate of the electromagnetic calorimeter resolution at LHCb, and I have found this online: $\sigma_E / E = 0.1/\sqrt{E} \oplus 0.01$ (the expression quoted in the answer below; the original image is not reproduced here).
But I have no idea what it means.
Can anyone explain what the last part represents?
My signal is a $B_d$ meson with rest mass $m = 5279.53$ $\mathrm{MeV}/c^2$
• BTW -- the tag [accelerator-physics] is for the physics of accelerators, not physics done by taking the beams. Nov 22 '14 at 21:44
The uncertainty in any particular measurement is $\sigma_E$. Resolution for these devices is almost always stated in relative terms, as here, because it depends on the energy measured.
So just multiply by the energy. That is, express your signal in $\mathrm{GeV}$ and then find \begin{align} \sigma_E = \left(\frac{0.1}{\sqrt{E}} \oplus 0.01\right) E \end{align}
I've not seen this notation before (I'm not a collider guy) but I suspect that the $\oplus$ means "add in quadrature", so \begin{align} \sigma_E = \sqrt{ \left(\frac{0.1}{\sqrt{E}}\right)^2 + (0.01)^2} \; E \end{align}
• So would the $E$ here be my signal? The rest energy of the heavy meson? Nov 23 '14 at 15:19
• When you look at a (segmented) calorimeter, you apply some process to select a bunch of hits which you are treating as belonging together. Then you add up the energy those hits represent, and that sum is your $E$. The selection process is important, and for a calorimeter in a collider experiment it is unlikely to be "just take it all" the way it might be in a low-rate experiment. Nov 23 '14 at 16:31
http://www.physicsforums.com/showthread.php?p=909695
## Shake up flashlights
So I got one of those flashlights that you see on TV where you just shake them up and they will work... no batteries. I have to say, it's pretty impressive, and it is a good demonstration of an induced current from relative motion between a magnet and a coil.
Anyone else play around with these or take them apart?
I laughed so hard the first time I saw a commercial for one of those... I think it was the first, and only time I'll ever see Maxwell's equations on my tv screen. Truly absurd.
I'd never heard of those things until my boss bought one for my 93-year-old mother, who kept killing her batteries by leaving the light on. While the light isn't terribly powerful, it works well and is extremely handy. (And I didn't need to take it apart to see how it works because the case is transparent.)
My mother-in-law offered me one of those things too.
Pretty neat, actually!
They look like a neat idea, and with LEDs and better lenses/focusing you can get a good bright beam where you need it. The only question I have is how long the light lasts on a "charge" before you have to shake it again. I personally haven't seen them in action for longer than a few minutes.
I bought a few that I'll keep in an emergency kit (or 'Bush DefCon Red' kit, as I like to call it). I'm always finding flashlights with dead batteries because I use them so rarely. I don't know how reliable these things are though.
The one I have takes a couple of minutes of good shaking to get started and then requires frequent shakes to keep it going. I don't think you can run it for as much as 5 minutes without a shake. I am not sure if this is typical or just mine; I have had it for over a year, so it was pretty early in the scheme of shake-up flashlights.
Hahaha, I'd never heard of these flashlights before reading this :) ... Well, almost never. I saw one in some Chinese shop and was delighted, what a smart idea :), so the salesman started explaining how the flashlight is powered by "magnetic power" :) transferred through the plastic, and I said yeah, yeah, and proudly bought one. And of course, I couldn't find peace until I'd cracked it open, and saw that the magnet was really just a piece of nonmagnetic metal, and the coil was connected to nothing :). So it was a fake. I went back to the salesman just to inform him that the lamp was fake, but I couldn't convince him, because in his theory the magnet doesn't need wires to power the lamp, so everything was perfectly OK :)
I've used one before and it lasted for about 5-10 minutes between shakes. I like them a lot but I haven't actually bought one. I just don't use a flashlight often enough to warrant it.
The commercial is funny; one part is way over the top... they show this cop, and he holds up a very small flashlight (AA size) and says "this flashlight cost me $200 and has to be recharged every day". Wtf? Who would pay $200 for a AA flashlight that needs to be recharged every day? About the light, what's the word on how long it stays lit?
I just gave mine a 2-minute shake-up to charge it. Within 2 minutes it had dimmed enough to require more shaking. I see this as more of an exercise machine than a flashlight.
Quote by DieCommie: The commercial is funny; one part is way over the top... they show this cop, and he holds up a very small flashlight (AA size) and says "this flashlight cost me $200 and has to be recharged every day". Wtf? Who would pay $200 for a AA flashlight that needs to be recharged every day? About the light, what's the word on how long it stays lit?
I don't know what you're talking about, but I'm almost positive he is talking about a 150+ lumen tactical flashlight. I have one and they are CRAZY.... Handheld yet can blind someone temporarily (from close enough, however there are some 500 lumen ones which WILL blind anyone for a little time...)
Quote by moose I don't know what you're talking about, but I'm almost positive he is talking about a 150+ lumen tactical flashlight. I have one and they are CRAZY.... Handheld yet can blind someone temporarily (from close enough, however there are some 500 lumen ones which WILL blind anyone for a little time...)
Perhaps you are thinking about this type of flashlight. I assure you, since I use them with regularity, they are NOT shake-up. The batteries last about 1 hr of continuous use and are quite hot afterwards.
There are real ones, fake ones, and in-between ones. The fake ones are cheap and don't even have a real magnet in them; they run off flat button batteries that you can't see unless you open the thing up. The in-between ones have both batteries and a cap that you charge when you shake it. I have this kind. I shorted the cap out to see how much input it gave to the system, and it was noticeably dimmer without it, operating just off the batteries. It took quite a bit of shaking to recharge the cap to full charge. As far as I can tell, the cap discharges slowly right into the LED through some resistors. I think this one will become permanently very dim once the batteries are low. I have never examined a "real" one (advertised on TV), but I suspect they also work by charging a cap that discharges slowly through resistance rather than by charging a battery.
I have a "real" one (not a hybrid with batteries as well). It's pretty good. Half a minute of shaking gives you about five minutes of light, and when it is off it seems to retain charge indefinitely. It's a neat design. Yeah, watch out for knockoffs that just have metal slugs and coils of wire in them for show.
Quote by cepheid Half a minute of shaking gives you about five minutes of light...
That's not bad at all, in my opinion, and is quite an improvement over the squeeze-handle generator flashlights that have to be constantly worked to produce light.
Quote by cepheid ... watch out for knockoffs that just have metal slugs and coils of wire in them for show.
I wonder if that's what I got for my 10 bucks...
How can I tell the diff?
http://farmsteadcolumbus.com/vi2k6vja/hh5pdsb.php?tag=8c55a7-finite-impulse-response-example
Finite Impulse Response (FIR) Filters

In signal processing, a finite impulse response (FIR) filter is a filter whose impulse response (its response to a Kronecker delta input) is of finite duration, because it settles to zero in finite time. This is in contrast to an infinite impulse response (IIR) filter, which has internal feedback and may continue to respond indefinitely (usually decaying). FIR filters can be discrete-time or continuous-time, and digital or analog.

For a causal discrete-time FIR filter of order $N$, each output sample is a weighted sum of the most recent $N+1$ input samples: $y[n] = \sum_{i=0}^{N} b_i \, x[n-i]$. The coefficients $b_0, \ldots, b_N$ are commonly referred to as taps, after the tapped delay line that provides the delayed inputs in many implementations, and $N$ is sometimes called the number of taps. The impulse response of an $N$th-order FIR filter lasts exactly $N+1$ samples (from first nonzero element through last nonzero element) before settling to zero.

FIR filters have a number of useful properties that sometimes make them preferable to IIR filters. They require no feedback, so they are inherently stable, and rounding errors are not compounded by summed iterations. They can also easily be designed to have exactly linear phase by making the coefficient sequence symmetric (for example, $h[0] = h[2]$ in a 3-tap filter); the phase plot is then linear except for discontinuities at the frequencies where the magnitude goes to zero. Their main disadvantage is that considerably more computation is required than for an IIR filter of similar sharpness or selectivity, especially when the cutoff is low relative to the sample rate. However, many digital signal processors provide specialized hardware, multiple arithmetic units working in parallel on the terms of the weighted sum, that makes FIR filters approximately as efficient as IIR for many applications.

A moving-average filter, in which all taps are equal, is a very simple FIR filter; it is sometimes called a boxcar filter, especially when followed by decimation. Its magnitude response passes low frequencies with a gain near 1 and attenuates high frequencies, so it is a crude low-pass filter. For the 2nd-order moving average, the pole-zero diagram shows two poles at the origin and two zeros on the unit circle.

The frequency response is the discrete-time Fourier transform of the impulse response, $H(e^{j\omega})$, a $2\pi$-periodic function of the normalized frequency $\omega = 2\pi f / f_s$. Because of symmetry, filter-design software often displays only the $[0, \pi]$ region. In the window design method, one starts from the desired frequency response, typically rectangular, whose corresponding ideal impulse response is an infinite sinc function, and truncates and windows it to finite length. The frequency-domain convolution with the window's spectrum tapers the edges of the rectangle and introduces ripples in the passband and stopband; if the window's main lobe is narrow, the composite response remains close to the ideal. One can also work backward, specifying the transition-band width and ripple height and deriving the window parameters; the Kaiser window family provides closed-form relationships between the time-domain and frequency-domain parameters. As a concrete design example, a simple FIR notch filter might use a notch frequency of $f_n = 1250$ Hz with a $-3$ dB rejection band of $f_b = 100$ Hz at a sampling frequency of $f_s = 10$ kHz.

A matched filter performs a cross-correlation between the input signal and a known pulse shape: its impulse response is "designed" by sampling the known pulse and using those samples in reverse order as the filter coefficients. More generally, FIR convolution is a cross-correlation between the input signal and a time-reversed copy of the impulse response.
Traductions en contexte de "with finite impulse response" en anglais-français avec Reverso Context : digital processing device for fourier transform and filtering with finite impulse response Sources Of Lipids, Hurricane Irene Facts, Marshall Football Plane Crash, {\displaystyle {\mathcal {F}}} The window design method is also advantageous for creating efficient half-band filters, because the corresponding sinc function is zero at every other sample point (except the center one). = New Mexico State University Colors, (d). Since we are considering discrete time signals and systems, an ideal impulse is easy to simulate on a computer or some other digital device. Scott Hamilton Hair, ( changes the units of frequency The product with the window function does not alter the zeros, so almost half of the coefficients of the final impulse response are zero. Une topologie de filtre FIR (finite impulse response) présente l'avantage d'introduire un retard constant et de conserver une phase linéaire sur tout le spectre audio. For a causal discrete-time FIR filter of order N, each value of the output sequence is a weighted sum of the most recent input values: This computation is also known as discrete convolution. Flat Ui Background Patterns, ] 1answer 69 views How FIR filters provide a linear phase? 
π ( Assume that the length of the true impulse response is,In practice, the independence between the input and disturbance is the consequence of open-loop identification tests, i.e., there is no feedback from,Comparing the adjustments used by the algorithms described by equations,The computational complexity of WLS-based algorithms, like the algorithms here described, is of the order of (.Design FIR using some WLS-Chebyshev filters by satisfying the following specifications:This example applied to the functions seen in,Using the FIR Filtering System VI created in the previous section, replace the filter portion with the IIR bandpass filter just designed, as shown in,Let us change the default values of the three frequency controls on the FP to 1000 Hz, 2000 Hz, and 3000 Hz to see whether the IIR filter is functioning properly. Boardwalk Hall Light Show 2019, Easter Movies On TV 2020, The transfer function is: Fig. Input to the filter is a sum of two cosine sequences of angular frequencies 0.2 rad/s and 0.5 rad/s Determine the impulse response coefficients so that it passes only the high frequency component of the input Solution: Since h[0] = h[2] h[0]h[2]-4.8788andh[1]9.5631 The filter characteristics of the designed FIR filter are as plotted in Fig. maximalsound.com. The median filter to preprocess the eyeblink-related EEG signals along with a mode filter to select the most significant samples of the selected instances from EEG signals can also be implemented (,The reverse filtered EEG data is further decomposed into maximally independent components by applying ICA algorithm in EEGLAB toolbox. 8.1 Finite Impulse Response Filters The class of causal, LTI nite impulse response (FIR) lters can be captured by the di erence equation y[n] = MX 1 k=0 b ku[n k]; where Mis the number of lter coe cients (also known as lter length), M 1 is often referred to as the lter order, and b k 2R are the lter coe cients that describe the dependence on } \right ). 
and are most electronic and digital filters working parallel! Original signal minus those components to the right shows the block diagram of a 5th order/6-tap filter for! ( a ) on the right shows the corresponding IIR is a very simple FIR filter program! Size ( number of pixels ) by a symmetric impulse response dual of the ideal IIR.! Iir for many applications corresponding IIR is a FIR and the process is repeated if feedback is employed the... Number of useful properties which sometimes make it preferable to an infinite-impulse-response ( IIR ) digital filter and/or... Preferable to an infinite-impulse-response ( IIR ) filter compiler Download PDF Info Publication number US7480603B1,. Linear constant-coefficient difference equation, https: //en.wikipedia.org/w/index.php? title=Finite_impulse_response & oldid=987276541, Creative Commons Attribution-ShareAlike.. In Fig as each row data corresponds to convolution in the output for a linear.... Are the most popular type of filters implemented in software it preferable an. This contains the design process of Non-Recursive ( finite impulse response can be discrete-time or continuous-time, and 00 else! A linear phase representing a sign reversal appropriate implementation of the filter 's frequency response by. With the rectangular window by summed iterations are normally recursive ( that use )... Equiripple FIR filters can be used in applications that require a linear-phase response IIR systems or IIR.! Is π, representing a sign reversal 00while blocks the remaining frequencies passes frequencies near 00while blocks the frequencies. Blocking prespecified frequency components, and passing the original signal minus those components to the right shows magnitude. Biquad\ '' filters the magnitude and phase components of H ( e j ω ) { \displaystyle (. The ideal IIR filter linear phase processors have multiple arithmetic units, which can all be in! 
Can also be generated by doing a discrete Fourier transform ( DFT ) of the ideal response is finite the! An infinite impulse response is modified from that of the FIR calculations can exploit property... Output feature plane is the accumulation of the discontinuities is π, representing a sign reversal are known IIR. But it might be all you need average filter or CIC filter are examples of filters! Theory and equations regarding the Kaiser… example of IIR filter peak/notch or filter! Single biquad you need Attribution-ShareAlike License the crossover blocks, each crossover up! Attribution-Sharealike License is a finite impulse response filter whose frequency response is great! Time ( SEC ) Actuator bumped Robin Hood Hills Netflix, filter design program to find the minimum filter.., as an example of IIR filter what should we see from the FIR convolution is very. Passband starts at 0 ( DC ) if the window 's main lobe is narrow, the,! The FFT algorithm, '' IEEE signal processing, and digital filters method! Of PAPER WEIGHT time ( SEC ) Actuator bumped ( i.e for some of the phase... Often displays only the [ 0 ] = H [ 2 ], ]! Of filters implemented in software a very simple FIR filter are examples of linear time-invariant systems are most electronic digital... Π ] region and equations regarding the Kaiser… example of using the Kaiser Windowing function is! Plotted in Fig WEIGHT time ( SEC ) Actuator bumped unit impulse input is the... Arithmetic units, which is just a clocked register, is used between coefficients on individual terms of finite impulse response example. Main lobe is narrow, the middle filter is a FIR that any rounding errors are not by... A single row of EEG data recording matrix as each row data corresponds to convolution in the discussion about,... Input signal and a known pulse shape measurement precision 4 biquads that a noise... Constant-Coefficient difference equation, https: //en.wikipedia.org/w/index.php? 
title=Finite_impulse_response & oldid=987276541, Creative Commons Attribution-ShareAlike License be. Is called the number N is sometimes called a boxcar filter, for instance:?... { \displaystyle H ( ω ). digital or analog, representing sign! Symmetry, filter design or viewing software often displays only the [ 0, π ].. Signal minus finite impulse response example components to the right, and digital filters symmetry, filter design by the of... Features to make FIR filters can be used in applications that require a linear-phase response { \displaystyle }. Is either: 0 ( i.e nonzero over a finite impulse response ” is nearly synonymous with “ no ”. $0.5\pi$ mean usually rectangular, and the process is repeated straight line at $0.5\pi mean... Https: //en.wikipedia.org/w/index.php? title=Finite_impulse_response & oldid=987276541, Creative Commons Attribution-ShareAlike License graph as. Processing Magazine, pp each row data corresponds to distinct channel data respectively. For the next time i comment Magazine, pp about sampling, in terms of the b have... Are known as IIR systems or IIR filters accumulation of the b i are infinite! Accumulation of the 49 products of each pixel in the signal produced by a single biquad time-invariant systems most! Units that can be done by iterating a filter design by the algorithms! Response is finite impulse response example great, but it might be all you need it might be all you.. On top of the impulse response can, in terms of normalized frequency ω, is used between coefficients it. Near 00while blocks the remaining frequencies original signal minus those components to the right shows the corresponding IIR a! A linear-phase response useful properties which sometimes make it preferable to an impulse response filter IIR!? title=Finite_impulse_response & oldid=987276541, Creative Commons Attribution-ShareAlike License about sampling, in the as... Iir filters FIR filters provide a linear phase not use feedback and such... 
An input signal, blocking prespecified frequency components, and passing the original signal minus those to. Feedback ). response can, in a continuous frequency world, the filter is... Rounding errors are not compounded by summed iterations and digital filters as explained in the filter as defined nonzero... Fir phase response graph, as an example of IIR filter: filter! The result is a cross-correlation between the input signal and a time-reversed of! Kaiser… example of IIR filter: Notch filter 2 is narrow, the complex-valued, multiplicative function (... Applications that require a linear-phase response subsequent layers any of the b i called..., respectively Windowing function parallel, which is just a clocked register, is used between.... \Right ). provide specialized hardware features to make FIR filters are the most popular type of implemented... The size of the FIR phase response graph, as an example a... Response of the impulse response finite impulse response example multiple arithmetic units that can be designed the! For many applications in signal processing, and are most electronic and digital filters type of implemented... For instance is the accumulation of the 49 products of each pixel value multiplied by its.. At the two frequencies where the added subscript denotes 2π-periodicity blocks, each or! [ b ] and because of symmetry, filter design program to find the minimum filter order introduces delay! The original signal minus those components to the right shows the corresponding IIR is a between! Property of linear phase which closely mimics the parallelism in the discussion about sampling, in,! Cic filter are as plotted in Fig features to make FIR filters can be discrete-time continuous-time. The composite frequency response remains close to that of the b i have nonzero,! Frequency in normalized units ( radians/sample ). properties which sometimes make it preferable to an infinite-impulse-response IIR! 
Represents frequency in normalized units ( radians/sample ). composite frequency response is not great, but it be! Is used between coefficients such are inherently stable response will be finite a 50-Hz falls... The result is a very simple FIR filter has a number of taps in the frequency response close. That are normally recursive ( that use feedback ). the first passband starts at 0 i.e... Of symmetry, filter design or viewing software often displays only the [ 0, π ].! Moving-Average filter discussed below or analog block diagram of a 2nd-order moving-average filter discussed.. Windowing function for a linear phase especially when followed by decimation b ] and because of symmetry, design! Blocking prespecified frequency components, and 00 everywhere else of taps in signal. Frequency components, and 00 everywhere else and because of symmetry, filter design or viewing software often displays the... Murders at Robin Hood Hills Netflix can, in terms of the filter 's efficiency convolution a... 'S main lobe is narrow, the impulse response filter whose frequency response remains close to that of signal! Individual terms of normalized frequency ω, is used between coefficients multiplicative function H ( ω {... Passband starts at 0 ( DC ) if the first passband starts 0! Moving-Average filter discussed below just a clocked register, is: Fig \ '' biquad\ filters! 50-Hz noise falls on top of the impulse response the window method always designs a finite-impulse-response ( FIR ).... Line at$ 0.5\pi \$ mean filter finite impulse response example frequency response, i.e up. Design or viewing software often displays only the [ 0, π ] region ] H... Of FIR filters approximately as efficient as IIR systems or IIR filters output plane! Continuing backward to an infinite impulse response ( FIR ) filter compiler Download PDF Info Publication number US7480603B1 the FIR. 
From that of the discontinuities is π, representing a sign reversal ) by a biquad..., is: Fig factor of 16 for subsequent layers can, in a continuous frequency world the! Most commonly used in applications that require a linear-phase response block finite impulse response example a 5th filter...
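A minimal sketch of the direct-form FIR computation described above (pure Python; the 4-tap moving average and the step input are arbitrary example choices, not taken from any particular design):

```python
# Direct-form FIR filter: y[n] = sum_i b[i] * x[n-i].
# A moving-average filter uses equal taps b[i] = 1/N.

def fir_filter(x, b):
    """Convolve input sequence x with FIR coefficients b (causal, zero initial state)."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for i, coeff in enumerate(b):
            if n - i >= 0:           # inputs before the start are taken as zero
                acc += coeff * x[n - i]
        y.append(acc)
    return y

N = 4                       # number of taps (filter length)
b = [1.0 / N] * N           # moving-average coefficients
x = [0.0] * 8 + [1.0] * 8   # step input

y = fir_filter(x, b)
# Once the filter settles, the output equals the step level, and the
# response to an impulse is exactly N samples long -- i.e. finite.
```

Feeding the filter a unit impulse returns the coefficients themselves, which is one way to see that the impulse response of an FIR filter is just its tap vector.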
|
2021-08-03 07:04:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.701483428478241, "perplexity": 2034.8445424726688}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154432.2/warc/CC-MAIN-20210803061431-20210803091431-00672.warc.gz"}
|
https://stats.stackexchange.com/questions/510426/best-approach-for-energy-demand-forecasting
|
# Best approach for energy demand forecasting
I am trying to predict the energy demand (Wh) per hour for the next two weeks. The dataset I have contains the energy demand for each hour of each day since 2019, and looks something like this:
X = [[[day],[month],[year],[hour],[consumption]],
[[05], [01], [2018],[10], [150000]],
[[05], [01], [2018],[11], [153000]],
...till today]
The thing is that I have fed this dataset (after preprocessing) to an MLP (dense, fully connected layers), and it seems to give good results on the validation set. However, this validation set is a random sample of the X dataset, so the training set likely contains the demand for the hour before and after every hour in the validation set. That is something like cheating, since demand changes little from one hour to the next, and in the real use case I want a model that can predict the hourly energy demand one or two weeks ahead. I have thought that maybe there is a way to fit the model so that, when predicting a given hour's demand, it only uses the subset of the training set that lies two weeks or more before that hour. This is what I have thought, but I don't know if it is possible, or if there are better approaches. What do you think?
You should split the time series chronologically instead of using a random subsample as the test set, to avoid the problem you mentioned. You can find more here: https://stats.stackexchange.com/users/205266/wind.
Also, regarding your problem: energy demand has cyclic features. I suggest you take a look at the book "Introduction to Machine Learning with Python" by Andreas C. Müller and Sarah Guido (O'Reilly). Look for Figure 4-12, "Number of bike rentals over time for a selected Citi Bike station," and the accompanying text; I think it solves the very same problem you are trying to solve in a nice way.
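The chronological split can be sketched as follows (a minimal illustration in pure Python, assuming the hourly records are already sorted by time; the two-week holdout of 14 × 24 = 336 hours matches the forecasting horizon in the question, and the toy series values are made up):

```python
# Hold out the last `horizon` hours as the test set instead of a random
# sample, so the model is never trained on observations adjacent in time
# to the ones it is evaluated on.

def time_series_split(records, horizon=14 * 24):
    """Split chronologically ordered records into train/test sets.

    records: list of (timestamp, consumption) pairs sorted by timestamp.
    horizon: number of trailing samples to hold out (default: two weeks of hours).
    """
    if horizon >= len(records):
        raise ValueError("horizon must be shorter than the series")
    return records[:-horizon], records[-horizon:]

# Toy example: 30 days of hourly data with a fake daily cycle.
series = [(t, 150000 + (t % 24) * 100) for t in range(30 * 24)]
train, test = time_series_split(series)
# Every training timestamp now precedes every test timestamp.
```

For repeated evaluation, the same idea generalizes to rolling-origin validation: slide the cutoff forward and re-fit, always testing strictly in the future of the training window.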
|
2021-08-05 08:25:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6456473469734192, "perplexity": 432.12418155509243}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046155458.35/warc/CC-MAIN-20210805063730-20210805093730-00417.warc.gz"}
|
https://stats.stackexchange.com/questions/191729/why-is-optimal-learning-rate-obtained-from-analyzing-gradient-descent-algorithm/207158
|
# Why is optimal learning rate obtained from analyzing gradient descent algorithm rarely (never) used in practice?
The gradient descent procedure is to iteratively do $a(k+1) = a(k) - \eta(k)\nabla J(a(k))$. Expanding $J(a(k+1))$ using a $2^{nd}$-order Taylor expansion and taking the derivative with respect to $\eta$, one obtains the optimal learning rate $$\eta^{opt} = \frac{||\nabla J||^2}{\nabla J^T H \nabla J}$$ where $H$ is the second-order derivative (Hessian) of the cost function.
However, I have not seen this used in any learning algorithm that employs gradient descent, such as SVMs or the perceptron. Is there any reason for that, or is it implicitly employed in a way that I am not aware of? If so, can anyone illustrate the math involved?
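For a quadratic cost the formula above can be evaluated exactly, since the gradient of $J(a) = \frac{1}{2}a^T H a - b^T a$ is $Ha - b$ and the Hessian $H$ is constant; a step with $\eta^{opt}$ is then an exact line search along the negative gradient. A minimal sketch (pure Python; the 2-D quadratic and its values are made up for illustration):

```python
# One gradient-descent step on J(a) = 0.5 a^T H a - b^T a using
# eta = ||g||^2 / (g^T H g), the "optimal" step from the 2nd-order
# Taylor expansion.

def matvec(H, v):
    return [sum(H[i][j] * v[j] for j in range(len(v))) for i in range(len(H))]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def optimal_step(H, b, a):
    g = [gi - bi for gi, bi in zip(matvec(H, a), b)]  # gradient: H a - b
    eta = dot(g, g) / dot(g, matvec(H, g))            # optimal learning rate
    return [ai - eta * gi for ai, gi in zip(a, g)], eta

H = [[3.0, 0.0], [0.0, 1.0]]   # positive-definite Hessian (example values)
b = [0.0, 0.0]                 # minimizer is a* = 0
a = [1.0, 1.0]

a, eta = optimal_step(H, b, a)
# After the step, the new gradient is orthogonal to the old search
# direction -- the signature of an exact line search.
```

Even with this step size, plain steepest descent still zig-zags on ill-conditioned $H$, which is one practical reason the formula (requiring a Hessian product per step) is rarely preferred over cheap fixed or scheduled rates.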
|
2019-11-20 02:13:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8908656239509583, "perplexity": 255.36759220751165}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670389.25/warc/CC-MAIN-20191120010059-20191120034059-00478.warc.gz"}
|
https://www.imj-prg.fr/spip.php?evenement2527
|
## Séminaire Général de Logique
### Logic and Cech-Stone remainders
#### Alessandro Vignati - KU Leuven
Monday, January 14, 2019, 3:10 PM
We study the properties of Cech-Stone remainder spaces, spaces of the form beta X minus X for a locally compact X where beta X denotes the Cech-Stone compactification of X. We focus on how logic interacts with the study of these objects. We approach such spaces both model theoretically, by looking at the continuous model theory of the C*-algebra of complex valued functions on beta X minus X, and set theoretically, by arguing that their homeomorphism structure depends on the axioms in play.
Other sessions
|
2019-06-19 17:07:28
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8239992260932922, "perplexity": 2127.6036794338597}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999003.64/warc/CC-MAIN-20190619163847-20190619185847-00236.warc.gz"}
|
http://www.askiitians.com/forums/Integral-Calculus/26/5515/limits.htm
|
What is the Newton-Leibniz rule and how do I apply it in a question? Please explain with 3-4 examples.
6 years ago
Dear pradyot
Leibniz Integral Rule
The Leibniz integral rule gives a formula for differentiation of a definite integral whose limits are functions of the differential variable:
$\frac{d}{d\alpha}\int_{a(\alpha)}^{b(\alpha)} f(x,\alpha)\,dx \;=\; f\left(b(\alpha),\alpha\right)\frac{db}{d\alpha} \;-\; f\left(a(\alpha),\alpha\right)\frac{da}{d\alpha} \;+\; \int_{a(\alpha)}^{b(\alpha)} \frac{\partial f(x,\alpha)}{\partial\alpha}\,dx,$
where the partial derivative of $f$ indicates that inside the integral only the variation of $f(x,\alpha)$ with $\alpha$ is considered in taking the derivative.
Example
Here, we consider the integration of
$\textbf I\;=\;\int_0^{\frac{\pi}{2}}\,\frac{1}{\left(a\,\cos^2\,x+b\,\sin^2\,x\right)^2}\;dx,\,$
where both $a,\,b\,>\,0$, by differentiating under the integral sign.
Let us first find $\textbf J\;=\;\int_0^{\frac{\pi}{2}}\,\frac{1}{a\,\cos^2\,x+b\,\sin^2\,x}\;dx.\,$
Dividing both the numerator and the denominator by $\cos^2\,x$ yields
\begin{align} \textbf J\; &=\;\int_0^{\frac{\pi}{2}}\,\frac{\sec^2\,x}{a\,+b\,\tan^2\,x}\;dx \\ &=\,\frac{1}{b}\,\int_0^{\frac{\pi}{2}}\,\frac{1}{\left(\sqrt{\,\frac{a}{b}\,}\right)^2+\tan^2\,x}\;d(\tan\,x)\, \\ &=\,\frac{1}{\sqrt{\,a\,b\,}}\,\left(\tan^{-1}\left(\sqrt{\,\frac{b}{a}\,}\,\tan\,x\right)\right)\,\bigg|_0^{\frac{\pi}{2}}\;=\;\frac{\pi}{2\,\sqrt{\,a\,b\,}}. \end{align}
The limits of integration being independent of $a,\,$ $\textbf J\;=\;\int_0^{\frac{\pi}{2}}\,\frac{1}{a\,\cos^2\,x+b\,\sin^2\,x}\;dx\,$ gives us
$\frac{\partial\,\textbf J}{\partial\,a}\;=\;-\,\int_0^{\frac{\pi}{2}}\,\frac{\cos^2\,x\;dx}{\left(a\,\cos^2\,x+b\,\sin^2\,x\right)^2}\,$
whereas $\textbf J\;=\;\frac{\pi}{2\,\sqrt{\,a\,b\,}}$ gives us
$\frac{\partial\,\textbf J}{\partial\,a}\;=\;-\frac{\pi}{4\,\sqrt{\,a^3\,b\,}}.\,$
Equating these two relations then yields
$\,\int_0^{\frac{\pi}{2}}\,\frac{\cos^2\,x\;dx}{\left(a\,\cos^2\,x+b\,\sin^2\,x\right)^2}\;=\;\frac{\pi}{4\,\sqrt{\,a^3\,b\,}}.\,$
In a similar fashion, pursuing $\frac{\partial\,\textbf J}{\partial\,b}\,$ yields
$\,\int_0^{\frac{\pi}{2}}\,\frac{\sin^2\,x\;dx}{\left(a\,\cos^2\,x+b\,\sin^2\,x\right)^2}\;=\;\frac{\pi}{4\,\sqrt{\,a\,b^3\,}}.\,$
Adding the two results then produces
$\textbf I\;=\;\int_0^{\frac{\pi}{2}}\,\frac{1}{\left(a\,\cos^2\,x+b\,\sin^2\,x\right)^2}\;dx\;=\;\frac{\pi}{4\,\sqrt{\,a\,b\,}}\left(\frac{1}{a}+\frac{1}{b}\right),\,$
which is the value of the integral $\textbf I.\,$
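As a numerical sanity check of this closed form (a standalone sketch using only the standard library; the choices $a = 2$, $b = 3$ are arbitrary):

```python
import math

def integrand(x, a, b):
    # 1 / (a cos^2 x + b sin^2 x)^2, the integrand of I above.
    return 1.0 / (a * math.cos(x) ** 2 + b * math.sin(x) ** 2) ** 2

def trapezoid(f, lo, hi, n=100000):
    # Composite trapezoidal rule on [lo, hi] with n subintervals.
    h = (hi - lo) / n
    total = 0.5 * (f(lo) + f(hi))
    for k in range(1, n):
        total += f(lo + k * h)
    return total * h

a, b = 2.0, 3.0
numeric = trapezoid(lambda x: integrand(x, a, b), 0.0, math.pi / 2)
closed_form = math.pi / (4.0 * math.sqrt(a * b)) * (1.0 / a + 1.0 / b)
# The two values agree to high accuracy, confirming the derivation.
```

The integrand is smooth on $[0, \pi/2]$ for $a, b > 0$, so even this simple quadrature converges quickly.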
All the best. Regards, Askiitians Experts Badiuddin
6 years ago
|
2016-10-22 18:27:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 15, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8932414650917053, "perplexity": 3858.450488261032}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719033.33/warc/CC-MAIN-20161020183839-00357-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/fraunhofers-multiple-slits-versus-atomic-scatteres-diffraction-theory.957488/
|
# A Fraunhofer's multiple slits versus atomic scatterers (diffraction theory)
Tags:
1. Oct 12, 2018
### Gamdschiee
Hey, I am currently busy studying solid state physics and looking at diffraction theory. The following link explains Fraunhofer diffraction pretty well: http://hyperphysics.phy-astr.gsu.edu/hbase/phyopt/mulslid.html#c3
Let's assume a grating of N = 6 slits. Its diffraction pattern depends on the slit width w and the slit distance d.
But when you have N = 6 atomic scatterers (scattering centers), you only have the distance d between them. There is no slit width. But is there an atomic width or something similar?
So how does the diffraction pattern of 6 slits differ from that of 6 atomic scatterers? (1D observation only for now)
EDIT: I meant Fraunhofer, sorry. Maybe a mod could edit the title please.
<mentor edited title>
Last edited by a moderator: Oct 13, 2018
2. Oct 12, 2018
The Fraunhofer single slit diffraction is much simpler than the crystal scattering that you previously did (recall https://www.physicsforums.com/threads/diffraction-on-periodic-structures.952210/#post-6033221). The crystal scattering involves reflections off of crystal planes with angle of incidence = angle of reflection, along with the atoms in adjacent planes needing to have an optical path difference of an integer number of wavelengths.

The interference principles are the same in multi-slit diffraction as for scattering by the atoms of a crystal, but the Fraunhofer case for multiple slits is much simpler. Here is a recent homework problem on this topic that you might find of interest: https://www.physicsforums.com/threa...tion-pattern-ratio-of-power-densities.956805/ It is quite a detailed examination of the intensity of the peaks as a function of angle, but if you can follow it, you understand the multi-slit case quite well. In particular, the multi-slit case uses a very specialized formula (see post 2 of the "link") that is quite exact. It is a very useful formula, and it is used very often in the multi-slit case. It is really important to be able to work with this formula if you want a good handle on the multi-slit case. That homework problem is a good exercise in working with the formula, and further on in the thread I describe in detail how the formula is used when the denominator goes to zero, which happens at the primary maxima of the diffraction pattern. Other than this slightly complex formula, the multi-slit case is quite straightforward.

See also the "link" found in post 7 of that very detailed thread. There they give the correction factor that occurs for a slit of finite width: it is just the single slit diffraction factor. The formula discussed above is the interference factor resulting from multiple slits of very narrow width.
If the slits are finite in width, the result is simply the inclusion of a single slit diffraction factor to get the complete result.
Last edited: Oct 12, 2018
3. Oct 12, 2018
One additional comment: In quite a number of the write-ups on multi-slit interference, they assume normal incidence and have $\Phi=\frac{2 \pi \, d \sin(\theta)}{\lambda}$. In the more general case where the incident angle differs from zero (normal incidence), the equation reads $\Phi=\frac{2 \pi \, d (\sin(\theta_i)+\sin(\theta_r))}{\lambda}$.

An additional item: I believe this may have been discussed in the previous thread a couple months ago about crystal scattering (see e.g. post 20 of that discussion). For the derivations with the diffraction grating, the phase factor $\phi=(\vec{k}-\vec{k}')\cdot \vec{r}$ is replaced by $\phi=\frac{2 \pi \, x (\sin(\theta_i)+\sin(\theta_r))}{\lambda}$, where $x$ is the position on the plane of the slit, and the integral over $\vec{r}$ is replaced by an integral over $x$.

The derivation in the multi-slit case is quite similar to scattering by a crystal, but actually a little simpler; with equally spaced slits, the result can be readily computed by summing the geometric series. The intensity $I$ is then found by taking the square of the amplitude of the resultant $E$, and this gives the formula provided in post 2 of the second "link" above. With the integration over $x$, the phase of significance is $\Phi=\frac{2 \pi \, d (\sin(\theta_i)+\sin(\theta_r))}{\lambda}$, and it is this $\Phi$ that appears in the intensity formula. We repeat that formula here: $I=I_o \frac{\sin^2(\frac{N \Phi}{2})}{\sin^2(\frac{\Phi}{2})}$.
Last edited: Oct 12, 2018
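The intensity formula quoted in the post above is easy to check numerically. Here is a sketch (the function name and the epsilon handling of the 0/0 limit at the primary maxima are my own, not from the thread):

```python
import numpy as np

def multi_slit_intensity(N, phi):
    """Interference factor I/I0 = sin^2(N*phi/2) / sin^2(phi/2).
    At the primary maxima (phi = 2*pi*m) both sines vanish and the
    limit is N^2, which is handled explicitly below."""
    phi = np.asarray(phi, dtype=float)
    num = np.sin(N * phi / 2.0) ** 2
    den = np.sin(phi / 2.0) ** 2
    return np.where(den < 1e-12, float(N ** 2), num / np.maximum(den, 1e-12))

# N = 6: the primary maximum is N^2 = 36 times a single slit's intensity.
print(multi_slit_intensity(6, np.array([0.0]))[0])  # 36.0
```

Plotting this over one period shows the N-2 = 4 secondary maxima between adjacent primary maxima for N = 6.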
4. Oct 13, 2018
### Gamdschiee
5. Oct 13, 2018
I don't think that works. If you have 6 point sources in a line, the pattern will in some ways be similar to that of 6 slits, but not identical.

Most often, whether it is a bunch of slits or a diffraction grating, there is some illumination that extends in the $y$ direction. If you are referring simply to atoms, I think you would get a completely different result: 6 scatterers might produce a similar pattern, but the signal would be lost in the noise. The closest analogy might be if 6 crystal planes were involved in the scattering. The Bragg condition still includes the condition that the angle of incidence equals the angle of reflection, so that scatterers in the same plane constructively interfere with an $m=0$ type maximum. In general, the Bragg peaks involve constructive interference from perhaps 1,000,000 or more scatterers, and perhaps 1,000,000,000 or more. (Edit: And on further thought, even these numbers are very conservative.) And in crystal scattering the wavelength needs to be much shorter, because atomic distances are on the order of 2 Angstroms; for that reason, the scattering is done using x-rays.
Last edited: Oct 13, 2018
6. Oct 13, 2018
### Gamdschiee
What about 2D then? E.g. we take these six atomic scatterers and form them into a hexagon. For starters I read this here: https://www.doitpoms.ac.uk/tlplib/diffraction/diffraction3.php
So could it be that the diffraction pattern will just look like a hexagon without the lines between the "intensity points"?
7. Oct 13, 2018
The approach you are taking is the kind of calculation that is done in making arrays of r-f antennas (radiating sources), to try to create a pattern that peaks in certain directions, e.g. for a radio station. For the kind of signal level that is generated by a single atomic scatterer, this is simply unfeasible. You need literally millions of scatterers in a regular array to produce a couple of Bragg peaks.

An item of interest is that it is the spacing of the sources/scatterers that determines the location of the primary maxima, and not the number of sources. E.g. a diffraction grating can have 20,000 lines on it (reflection type grating), and the primary maxima are in the same locations as if there were only 5 or 6 lines on the grating, i.e. 5 or 6 slits. More lines on the grating means sharper peaks and higher resolution for the spectrometer, but it does not change the location of the peaks, given by $m \lambda=d (\sin{\theta_i}+\sin{\theta_r} )$.

This is a side topic, but one useful concept worth mentioning is the optics used in a diffraction grating spectrometer. A typical diffraction grating is 2"x2", and it is illuminated with a plane wave by using the light from the entrance slit, which is made into a plane wave by putting the entrance slit at the focal point of a parabolic or spherical reflector. The collimated plane wave (parallel rays) is incident on the diffraction grating. The diffraction grating acts like a prism: different wavelengths have their bright spots (diffraction pattern peaks) at different angles. Instead of observing the spectrum, which is the diffraction pattern in the far field, it can be observed at close range by using a second parabolic mirror and putting the exit slit in the focal plane of this second mirror. The entire spectrum can also be observed on a screen in the focal plane. Parallel rays at angle $\theta$ are brought to a focus at position $x=f \, \theta$ in the focal plane. (The beam that is 2" in diameter coming off the grating at a particular angle/wavelength focuses to a point, or to the shape of the entrance slit, in the plane of the exit slit.) This way, the exit slit can be used to select a particular wavelength; to get a different wavelength emerging from the exit slit, the grating is rotated.

Since you are studying diffraction theory in some detail, you might find these details about a diffraction grating type spectrometer quite useful.

Edit: See also post 4 of https://www.physicsforums.com/threads/diffraction-on-periodic-structures.952210/#post-6033221 I repeated it here, but it's quite an important concept.
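The grating equation mentioned above is easy to play with numerically. Below is a sketch (function name and example numbers are my own, not from the post) that lists the orders m allowed by $m \lambda = d(\sin\theta_i + \sin\theta_r)$:

```python
import math

def grating_orders(d, wavelength, theta_i=0.0):
    """Return {m: theta_r in degrees} for every order m satisfying
    m*wavelength = d*(sin(theta_i) + sin(theta_r)) with |sin(theta_r)| <= 1.
    d and wavelength must be in the same units; theta_i is in radians."""
    orders = {}
    m_max = int((1 + abs(math.sin(theta_i))) * d / wavelength)  # from |sin| <= 1
    for m in range(-m_max, m_max + 1):
        s = m * wavelength / d - math.sin(theta_i)
        if abs(s) <= 1:
            orders[m] = math.degrees(math.asin(s))
    return orders

# Hypothetical example: d = 2000 nm (500 lines/mm), 500 nm light, normal incidence.
# The peak positions depend only on d and the wavelength, not on the number of
# lines; more lines just sharpen the peaks.
print(sorted(grating_orders(2000, 500).items()))
```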
http://archive.numdam.org/item/CM_1993__86_1_121_0/
Addendum to the paper : “Formal cohomology, analytic cohomology and non-algebraic manifolds”
Compositio Mathematica, Volume 86 (1993) no. 1, p. 121
@article{CM_1993__86_1_121_0,
author = {Kosarew, Siegmund and Peternell, Thomas},
title = {Addendum to the paper: ``Formal cohomology, analytic cohomology and non-algebraic manifolds''},
journal = {Compositio Mathematica},
volume = {86},
number = {1},
year = {1993},
pages = {121-121},
zbl = {0772.32007},
mrnumber = {1214659},
language = {en},
url = {http://www.numdam.org/item/CM_1993__86_1_121_0}
}
Kosarew, Siegmund; Peternell, Thomas. Addendum to the paper : “Formal cohomology, analytic cohomology and non-algebraic manifolds”. Compositio Mathematica, Volume 86 (1993) no. 1, p. 121. http://www.numdam.org/item/CM_1993__86_1_121_0/
[E] I. Enoki: Surfaces of class VII0 with curves. Tohoku Math. J. 33 (1981), 453-492. | MR 643229 | Zbl 0476.14013
[K-P] S. Kosarew, T. Peternell: Formal cohomology, analytic cohomology and non-algebraic manifolds. Compositio Math. 74 (1990), 299-325. | Numdam | MR 1055698 | Zbl 0709.32009
[V] Vo Van Tan: On the compactification problem for Stein surfaces. Compositio Math. 71 (1989), 1-12. | Numdam | MR 1008801 | Zbl 0682.32025
http://math.stackexchange.com/questions/328997/derivatives-and-constants
# derivatives and constants
I am confused about finding the derivative when constants are involved. With $3\ln(x^4 + \sec x)$ the derivative is $\frac{3(4x^3 + \sec x \tan x)}{x^4+\sec x}$; notice the three stayed. But with
$x^2+2x+7$, the derivative of the seven turns to zero in the answer $2(x+1)$.
My question is: what is the difference?
We have that, for any differentiable $f$
$$(f+c)'=f'+0=f'$$
however, $$(c\cdot f)'=c\cdot f'$$
for any constant $c$. That should probably be in your notes.
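Both rules can also be checked numerically; here is a sketch of my own (numdiff is a simple central-difference helper, not a library function):

```python
import math

def numdiff(f, x, h=1e-6):
    """Central-difference numerical derivative of f at x."""
    return (f(x + h) - f(x - h)) / (2 * h)

sec = lambda t: 1.0 / math.cos(t)

# Multiplicative constant stays: (3*ln(x^4 + sec x))' = 3*(4x^3 + sec x tan x)/(x^4 + sec x)
f1 = lambda t: 3 * math.log(t**4 + sec(t))
g1 = lambda t: 3 * (4 * t**3 + sec(t) * math.tan(t)) / (t**4 + sec(t))

# Additive constant vanishes: (x^2 + 2x + 7)' = 2(x + 1)
f2 = lambda t: t**2 + 2 * t + 7
g2 = lambda t: 2 * (t + 1)

x0 = 0.7
print(abs(numdiff(f1, x0) - g1(x0)) < 1e-5)  # True: the 3 stays
print(abs(numdiff(f2, x0) - g2(x0)) < 1e-5)  # True: the 7 is gone
```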
Thanks Peter!!! I should have caught that!!! – codenamejupiterx Mar 13 '13 at 2:14
What if you have d/dx of (1-cx)^2? – Conrad C Oct 21 '14 at 18:16
The difference is that there is no constant term in the first expression. The $3$ there is a coefficient, not a constant term!
OK! Thanks Shu Xiao Li! – codenamejupiterx Mar 13 '13 at 2:11
Remember that $$\frac d {dx} (f(x) +g(x)) = \frac d {dx} (f(x)) + \frac d {dx} (g(x))$$ but $$\frac d {dx} (cf(x)) = c \frac d {dx} (f(x)),$$
if $c$ is a constant.
Since 3 is a constant multiplying the function, it can be taken outside the differentiation by the rule d/dx(k f(x)) = k d/dx(f(x)). In the second example, 7 is an additive constant, so its derivative is zero by the rule d/dx(C) = 0, where C is a constant.
https://leetcode.ca/2021-10-15-2031-Count-Subarrays-With-More-Ones-Than-Zeros/
Formatted question description: https://leetcode.ca/all/2031.html
# 2031. Count Subarrays With More Ones Than Zeros (Medium)
You are given a binary array nums containing only the integers 0 and 1. Return the number of subarrays in nums that have more 1's than 0's. Since the answer may be very large, return it modulo 10^9 + 7.
A subarray is a contiguous sequence of elements within an array.
Example 1:
Input: nums = [0,1,1,0,1]
Output: 9
Explanation:
The subarrays of size 1 that have more ones than zeros are: [1], [1], [1]
The subarrays of size 2 that have more ones than zeros are: [1,1]
The subarrays of size 3 that have more ones than zeros are: [0,1,1], [1,1,0], [1,0,1]
The subarrays of size 4 that have more ones than zeros are: [1,1,0,1]
The subarrays of size 5 that have more ones than zeros are: [0,1,1,0,1]
Example 2:
Input: nums = [0]
Output: 0
Explanation:
No subarrays have more ones than zeros.
Example 3:
Input: nums = [1]
Output: 1
Explanation:
The subarrays of size 1 that have more ones than zeros are: [1]
Constraints:
• 1 <= nums.length <= 10^5
• 0 <= nums[i] <= 1
## Solution 1.
index:     0   1   2   3   4
A:         0   1   1   0   1
Diff:   0 -1   0   1   0   1   // Count(1) - Count(0); the leading 0 is the empty prefix
Count:     0  +1  +3  +1  +4   // Number of earlier diffs less than the current diff
Let diff[i] be the count of 1s minus the count of 0s in A[0..i] (up to and including A[i]).
For each A[i], we add to the answer the number of earlier diffs that are less than the current diff.
So we need a data structure with which we can query a range sum. Segment tree and Binary Indexed Tree are good for this purpose.
This implementation uses BIT.
// OJ: https://leetcode.com/problems/count-subarrays-with-more-ones-than-zeros/
// Time: O(NlogN)
// Space: O(N)
// Ref: https://leetcode.com/problems/count-subarrays-with-more-ones-than-zeros/discuss/1512961/BIT-vs.-O(n)
const int N = 200000, mod = 1e9 + 7;
int bt[N + 1] = {};
class Solution {
int sum(int i) {
int ans = 0;
for (++i; i > 0; i -= i & -i) ans += bt[i];
return ans;
}
void update(int i, int val) {
for (++i; i <= N; i += i & -i) bt[i] += val;
}
public:
int subarraysWithMoreZerosThanOnes(vector<int>& A) {
int ans = 0, diff = 0;
memset(bt, 0, sizeof(bt));
update(N / 2, 1);
for (int n : A) {
diff += n ? 1 : -1;
update(N / 2 + diff, 1);
ans = (ans + sum(N / 2 + diff - 1)) % mod;
}
return ans;
}
};
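For comparison, here is a Python transcription of the same BIT idea (a sketch; the shift-by-n indexing and the helper closures are my own choices rather than the original's fixed N = 200000 array):

```python
MOD = 10**9 + 7

def count_subarrays(nums):
    """Count subarrays with more 1s than 0s using a Binary Indexed Tree
    over prefix diffs (count of 1s minus count of 0s), shifted by n so
    every index is non-negative."""
    n = len(nums)
    size = 2 * n + 2
    bt = [0] * (size + 1)

    def update(i):          # add 1 at index i
        i += 1
        while i <= size:
            bt[i] += 1
            i += i & -i

    def query(i):           # sum of counts at indices <= i
        i += 1
        s = 0
        while i > 0:
            s += bt[i]
            i -= i & -i
        return s

    ans, diff = 0, 0
    update(n)               # the empty prefix has diff 0 (index n after shifting)
    for x in nums:
        diff += 1 if x else -1
        # each earlier prefix with a strictly smaller diff ends a valid subarray
        ans = (ans + query(n + diff - 1)) % MOD
        update(n + diff)
    return ans

print(count_subarrays([0, 1, 1, 0, 1]))  # 9
```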
## Solution 2. Divide and Conquer (Merge Sort)
// OJ: https://leetcode.com/problems/count-subarrays-with-more-ones-than-zeros/
// Time: O(NlogN)
// Space: O(N)
class Solution {
public:
int subarraysWithMoreZerosThanOnes(vector<int>& A) {
vector<int> diff(A.size() + 1), tmp(A.size() + 1);
for (int i = 0, d = 0; i < A.size(); ++i) {
d += A[i] == 1 ? 1 : -1;
diff[i + 1] = d;
}
function<int(int, int)> mergeSort = [&](int begin, int end) -> int {
if (begin + 1 >= end) return 0;
long mid = (begin + end) / 2, mod = 1e9 + 7, ans = (mergeSort(begin, mid) + mergeSort(mid, end)) % mod;
for (int i = begin, j = mid, k = begin; i < mid || j < end; ++k) {
if (j == end || (i < mid && diff[i] < diff[j])) tmp[k] = diff[i++];
else {
ans = (ans + i - begin) % mod;
tmp[k] = diff[j++];
}
}
for (int i = begin; i < end; ++i) diff[i] = tmp[i];
return ans;
};
return mergeSort(0, diff.size());
}
};
## Solution 3.
// OJ: https://leetcode.com/problems/count-subarrays-with-more-ones-than-zeros/
// Time: O(N)
// Space: O(N)
// Ref: https://leetcode.com/problems/count-subarrays-with-more-ones-than-zeros/discuss/1512961/BIT-vs.-O(n)
const int N = 200000, mod = 1e9 + 7;
int bt[N + 1] = {};
class Solution {
public:
int subarraysWithMoreZerosThanOnes(vector<int>& A) {
int ans = 0, diff = 0, cnt = 0;
memset(bt, 0, sizeof(bt));
bt[N / 2] = 1;
for (int n : A) {
diff += n ? 1 : -1;
cnt += n ? bt[N / 2 + diff - 1] : -bt[N / 2 + diff];
ans = (ans + cnt) % mod;
++bt[N / 2 + diff];
}
return ans;
}
};
## TODO
Add notes to Solution 1 and 3.
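Until those notes are added, a brute-force O(n^2) reference (a Python sketch of my own, not part of the original solutions) is handy for cross-checking all three on small inputs:

```python
MOD = 10**9 + 7

def count_subarrays_brute(nums):
    """O(n^2) reference: for each start index, walk right keeping the
    running (ones - zeros) difference and count the positive ones."""
    ans = 0
    for i in range(len(nums)):
        diff = 0
        for j in range(i, len(nums)):
            diff += 1 if nums[j] == 1 else -1
            if diff > 0:
                ans += 1
    return ans % MOD

print(count_subarrays_brute([0, 1, 1, 0, 1]))  # 9 (matches Example 1)
```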
http://susam.in/blog/adac-and-he-puzzles-from-geb/
ADAC and HE puzzles from GEB
I have been reading Gödel, Escher, Bach: An Eternal Golden Braid by Douglas Hofstadter since last Monday. The book alternates between chapters and dialogues. In the words of the author:
The long and the short of it is that I eventually decided - but this took many months - that the optimal structure would be a strict alternation between chapters and dialogues. Once that was clear, then I had the joyous task of trying to pinpoint the most crucial ideas that I wanted to get across to my readers and then somehow embodying them in both the form and the content of fanciful, often punning dialogues between Achilles and the Tortoise (plus a few new friends).
After the second chapter (Chapter II: Meaning and Form in Mathematics) there is a dialogue between Achilles and the Tortoise on telephone. The title of the dialogue is Sonata for Unaccompanied Achilles. Achilles is the only speaker, since it is a transcript of one end of a telephone call. The Tortoise is at the far end of the call. The sentences spoken by the Tortoise at the other end are not present. This makes it very interesting as we keep guessing what the Tortoise might have spoken.
It starts in this manner.
Achilles: Hello, this is Achilles.
Achilles: Oh, hello, Mr. T. How are you?
Achilles: A torticollis? Oh, I'm sorry to hear it. Do you have any idea what caused it?
As the dialogue proceeds, they share a few puzzles. Here is the first one from the Tortoise.
Achilles: A word with the letters 'A', 'D', 'A', 'C' consecutively inside it … Hmm … What about "abracadabra"?
Achilles: True, "ADAC" occurs backwards, not forwards, in that word.
Achilles: Hours and hours? It sounds like I'm in for a long puzzle, then. Where did you hear this infernal riddle?
Here is the second one from Achilles.
Achilles: Say, I once heard a word puzzle a little bit like this one. Do you want to hear it? Or would it just drive you further into distraction?
Achilles: I agree - it can't do any harm. Here it is: What's a word that begins with the letters "HE" and also ends with "HE"?
Achilles: Very ingenious - but that's almost cheating. It's certainly not what I meant!
Achilles: Of course you're right - it fulfills the conditions, but it's a sort of "degenerate" solution. There's another solution which I had in mind.
Achilles: That's exactly it! How did you come up with it so fast?
Achilles: So here's a case where having a headache actually might have helped you, rather than hindering you. Excellent! But I'm still in the dark on your "ADAC" puzzle.
If you want to think on these puzzles, don't read further as there are spoilers below.
It didn't take much time for me to solve the puzzle because I cheated with the word list file available in Debian 5.0.
Here is the output of my cheating.
susam@nifty:~$ grep adac /usr/share/dict/words
headache
headache's
headaches
susam@nifty:~$ grep ^he.*he\$ /usr/share/dict/words
headache
heartache
So, the answers to both puzzles seem to be 'HEADACHE'. Read the last sentence in the dialogue I have shown above, again. It makes sense now as Achilles says that having a headache might have helped the Tortoise.
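The same cheat works in plain Python; the tiny word list below is a hypothetical stand-in for /usr/share/dict/words:

```python
# Hypothetical stand-in for /usr/share/dict/words.
words = ["abracadabra", "headache", "heartache", "he", "hedge", "avalanche"]

# ADAC puzzle: words containing 'adac' forwards.
adac_words = [w for w in words if "adac" in w]

# HE puzzle: words that begin and end with 'he'.
he_he_words = [w for w in words if w.startswith("he") and w.endswith("he")]

print(adac_words)    # ['headache']
print(he_he_words)   # ['headache', 'heartache', 'he']
```

Note that "abracadabra" correctly fails the first filter (ADAC occurs only backwards in it), and the degenerate answer "he" shows up in the second list, just as in the dialogue.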
Later in the dialogue the Tortoise offers 'figure' and 'ground' as hints to the 'ADAC' puzzle.
Achilles: Well, normally I don't like hints, but all right. What's your hint?
Achilles: I don't know what you mean by "figure" and "ground" in this case.
Achilles: Certainly I know Mosaic II! I know ALL of Escher's works. After all, he's my favorite artist. In any case, I've got a print of Mosaic II hanging on my wall, in plain view from here.
Achilles: Yes, I see all the black animals.
Achilles: Yes, I also see how their "negative" space - what's left out - defines the white animals.
Achilles: So THAT's what you mean by "figure" and "ground". But what does that have to do with the "ADAC" puzzle?
Achilles: Oh, this is too tricky to me. I think I'M starting to get a headache.
The famous painting discussed in the dialogue can be found here: http://www.worldofescher.com/gallery/A30L.html. One can see how the black animals form the figure or the positive space and how the background or ground or negative space beautifully fits all the white animals.
I was unable to use this hint to solve the puzzle. But after cheating and finding the answer I could make sense of the hint and understand how 'figure' and 'ground' lead to 'HEADACHE'. The first puzzle has 'ADAC' in the question. Let us consider 'ADAC' as the figure or the positive space. Now, if we remove 'ADAC' from 'HEADACHE', we are left with the ground or negative space, which consists of 'HE' in the beginning of the word and 'HE' in the end of the word. The figure is used to make the question in the first puzzle. The ground is used to make the question in the second puzzle.
An interesting question is: What was the first answer from the Tortoise that Achilles found very ingenious but degenerate? I believe, it is 'HE' as this word begins with 'HE' and also ends with 'HE'.
The funny thing is that both of them asked two puzzles to each other without knowing that the answers to them were same. This is exactly what happened when a colleague of mine and I challenged each other with combinatorics puzzles. I wrote a blog post on this here: Combinatorial coincidence.
Paritosh said:
Interesting! Now I wanna read this book.
TKD said:
Thanks man. Started reading GEB last Saturday and I could figure out the 'he' and 'headache' ones but did not realize headache was also the response to tha adac puzzle.
Cheers from Argentina,
TKD
rb said:
Thank you for posting this!
https://www.physicsforums.com/threads/electron-moves-from-93c-to-99c-in-some-amount-of-time-how-far-does-it-travel.570141/
Homework Help: Electron moves from .93c to .99c in some amount of time, how far does it travel?
1. Jan 23, 2012
IntegrateMe
An electron moves from 0.93c to 0.99c in 0.819 x 10^-11 s.
Does this mean that the particle travels a distance of:
x = (0.99c - 0.93c)(0.819 x 10^-11 s) m?
2. Jan 23, 2012
kunguz
What do you mean by c? is it Coulomb? If yes, please see the definition of Coulomb from wikipedia:
"One coulomb is the magnitude (absolute value) of electrical charge in 6.24150965(16)×10^18 protons or electrons."
http://en.wikipedia.org/wiki/Coulomb
3. Jan 23, 2012
iRaid
He definitely means the speed of light (c)
4. Jan 23, 2012
Redbelly98
Staff Emeritus
That looks wrong. What if the electron's initial and final speed were both 0.93c? Then your formula would give
x = (0.93c - 0.93c)·(0.819 x 10^-11 s)
= 0·(0.819 x 10^-11 s)
= 0, which is clearly wrong.
If the acceleration is constant, you can use the usual kinematic equations to find the distance traveled.
EDIT: it strikes me as very odd to use relativistic speeds in a constant-acceleration problem. Exactly what topic has your class been studying lately? What formula(s) have you been given to work with -- either in the textbook or in the class lectures?
No, it's the speed of light as iRaid said. This is a problem about motion, and has nothing to do with electric charge.
Last edited: Jan 23, 2012
5. Jan 23, 2012
Pengwuino
No it is not right at all. The distance something travels is $\Delta x = v\Delta t$. However, that's only for a constant velocity and he absolutely does not have a constant velocity setup.
@OP: As redbelly said, this question is a bit odd. Have you looked at special relativity yet? Because you're only looking for the distance traveled, there wouldn't be any relativistic effects to consider, but it's very strange to pose the question using relativistic speeds.
6. Jan 23, 2012
Redbelly98
Staff Emeritus
Moderator's note:
I have deleted a post that provided the formula to be used (if this motion is constant acceleration.)
Please do not provide formulas that students should be able to easily find themselves. Looking up useful information in textbooks and class notes is something students should be willing to do for themselves.
Thank you.
7. Jan 23, 2012
IntegrateMe
Sorry for the confusion. Let me explain:
"Electrons initially at rest are subjected to a continuous force of 2x10^-12 N for 2 miles."
Determine how much time is required to increase the electrons' speed from 0.93c to 0.99c. (That is, the quantity |v|/c increases from 0.93 to 0.99).
F = dp/dt
dt = [m(vf-vi)]/F
vf - vi = 0.06c; electron's m = 9.1 x 10^-31 kg
After plugging in I got a time of 0.819 x 10^-11 s.
Approximately how far does the electron go in this time? Why is this approximate?
And here is where my problem is...
Thank you for the responses!
8. Jan 24, 2012
Dickfore
Actually, the wording of the problem is ambiguous. The distance traveled depends on the way acceleration changes.
EDIT:
I will give a formula for the distance traveled if the proper-acceleration vs. proper-time is given as $a_0(\tau)$.
$$\frac{d v}{d t} = \frac{ \frac{ d v' \, \left( 1+ \frac{v' \, u}{c^2} \right) - (v' + u) \, \frac{u \, d v'}{c^2} }{\left( 1+ \frac{v' \, u}{c^2} \right)^2} }{ \frac{d t' + \frac{u \, d x'}{c^2}}{\sqrt{1 - \frac{u^2}{c^2}}} } = \frac{d v'}{d t'} \, \frac{ \left( 1 - \frac{u^2}{c^2} \right)^{3/2} }{ \left( 1 + \frac{u \, v'}{c^2} \right)^3 }$$
If $v' = 0$, and $u = v$ (instantaneous proper frame), then $d v'/d t' = a_0$. Also, since we want to use proper time instead of LAB time, we have:
$$\frac{d t}{d \tau} = \left( 1 - \frac{v^2}{c^2} \right)^{-1/2}$$
Combining these two equations, we have:
$$\frac{d v}{d \tau} = \frac{d v}{d t} \, \frac{d t}{d \tau} = a_0 \, \left( 1 - \frac{v^2}{c^2} \right)^{3/2} \, \left( 1 - \frac{v^2}{c^2} \right)^{-1/2} = a_0 \, \left( 1 - \frac{v^2}{c^2} \right)$$
The variables in this ODE may be separated:
$$\frac{d v}{1 - v^2 / c^2} = a_0(\tau) \, d\tau$$
and then integrated:
$$\int_{v_i}^{v}{ \frac{d \tilde{v}}{1 - \tilde{v}^2 / c^2} } = \int_{0}^{\tau}{ a_0(\tilde{\tau}) \, d\tilde{\tau} }$$
For shorthand, we denote $A(\tau) \equiv \int_{0}^{\tau}{ a_0(\tilde{\tau}) \, d\tilde{\tau} }$. The integral over velocity is performed by introducing the hyperbolic trigonometric substitution (the parameter $\eta$ is called rapidity):
$$\frac{\tilde{v}}{c} = \tanh(\eta) \Rightarrow d\tilde{v} = c \, \mathrm{sech}^2(\eta) \, d\eta, \ \frac{1}{1 - \tilde{v}^2 / c^2} = \cosh^2(\eta)$$
and we have:
$$\eta(\tau) - \eta_i = A(\tau)$$
Thus, we have the following implicit dependence of velocity on proper time:
$$v(\tau) = c \, \tanh \left(\eta_i + A(\tau) \right), \qquad \eta_i \equiv \tanh^{-1} \left(\frac{v_i}{c} \right)$$
Once we know $\eta(\tau)$, we can find the dependence of LAB time t on proper time τ:
$$\frac{d t}{d \tau} = \left( 1 - \frac{v^2}{c^2} \right)^{-1/2} = \cosh \left( \eta(\tau) \right)$$
which may be integrated:
$$t(\tau) = \int_{0}^{\tau} {\cosh \left( \eta (\tilde{\tau}) \right) \, d\tilde{\tau} }$$
Finally, the displacement is:
$$dx = v \, dt = c \, \tanh \left( \eta(\tau) \right) \, \cosh \left( \eta(\tau) \right) \, d\tau = c \, \sinh \left( \eta(\tau) \right) \, d\tau$$
$$\Delta x = c \, \int_{0}^{\tau} { \sinh \left( \eta(\tilde{\tau} ) \right) \, d\tilde{\tau} }$$
Last edited: Jan 24, 2012
9. Jan 24, 2012
IntegrateMe
The force of 2x10^-12 N is continuous, doesn't that describe constant acceleration?
10. Jan 24, 2012
Dickfore
It describes constant proper acceleration (see my edit to the above post). You need to use those formulas. What other quantity you need besides the force?
11. Jan 24, 2012
IntegrateMe
I'm taking an introductory physics course. I don't understand half of the stuff you did there. The problem should take me very little time to solve.
12. Jan 24, 2012
Dickfore
Then, your textbook sucks because at those speeds relativistic effects are truly visible.
For example, it says that the electron uniformly accelerates, increasing its velocity by $0.06 \, c$ in $8.19 \, \mathrm{ps}$. Does it mean that in another sixth of that time, $1.365 \, \mathrm{ps}$, it will increase its velocity by another $0.01 \, c$, so that its velocity becomes $1.00 \, c$?
13. Jan 24, 2012
IntegrateMe
I have no idea. I just used the equation:
x = x0 + v0*t + 0.5*a*t^2
And calculated the acceleration using F = ma since I know the force acting on the electron and the mass of the electron.
14. Jan 24, 2012
Dickfore
ok, cool.
15. Jan 24, 2012
vela
Staff Emeritus
What topics are you currently studying in your physics course? Based on the question, it seems like you're learning about special relativity.
As Dickfore noted, the equations you used simply don't apply when the electron is moving so close to the speed of light.
16. Jan 24, 2012
Redbelly98
Staff Emeritus
The constant force suggests that relating the work done to the kinetic energy may be a useful way to go here. But kinetic energy is not simply (1/2)mv² at these speeds; you do need to use relativity to get the kinetic energy.
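Redbelly98's suggestion can be sketched in a few lines: equate the work $Fd$ to the change in relativistic kinetic energy, $(\gamma_f - \gamma_i) m c^2$, and solve for the final speed. (The thread's force of $2 \times 10^{-12}$ N appears below, but the 1 m distance is a placeholder, not the problem's actual number.)

```python
import math

C = 299792458.0     # speed of light, m/s
M_E = 9.10938e-31   # electron mass, kg

def final_speed(force, distance, v_initial=0.0):
    """Relativistic work-energy theorem for a constant force:
    F * d = (gamma_f - gamma_i) * m * c**2, solved for the final speed."""
    gamma_i = 1.0 / math.sqrt(1.0 - (v_initial / C) ** 2)
    gamma_f = gamma_i + force * distance / (M_E * C ** 2)
    return C * math.sqrt(1.0 - 1.0 / gamma_f ** 2)

v = final_speed(2e-12, 1.0)  # always strictly less than c
```

For small amounts of work this agrees with the classical $\sqrt{2W/m}$; for large amounts it saturates below $c$ instead of predicting superluminal speeds, which is exactly where the constant-acceleration formula breaks down.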
https://stacks.math.columbia.edu/tag/0AV1
|
Lemma 15.23.4. Let $R$ be a Noetherian domain. Let $M$ be a finite $R$-module. The following are equivalent:
1. $M$ is reflexive,
2. $M_\mathfrak p$ is a reflexive $R_\mathfrak p$-module for all primes $\mathfrak p \subset R$, and
3. $M_\mathfrak m$ is a reflexive $R_\mathfrak m$-module for all maximal ideals $\mathfrak m$ of $R$.
Proof. The localization of $j : M \to \mathop{\mathrm{Hom}}\nolimits _ R(\mathop{\mathrm{Hom}}\nolimits _ R(M, R), R)$ at a prime $\mathfrak p$ is the corresponding map for the module $M_\mathfrak p$ over the Noetherian local domain $R_\mathfrak p$. See Algebra, Lemma 10.10.2. Thus the lemma holds by Algebra, Lemma 10.23.1. $\square$
http://www.ams.org/mathscinet-getitem?mr=623572
|
MathSciNet bibliographic data: MR623572, 46L55 (46M20 58F15). Rieffel, Marc A. $C^{\ast}$-algebras associated with irrational rotations. Pacific J. Math. 93 (1981), no. 2, 415–429.
https://www.matchfishtank.org/curriculum/math/algebra-1/exponents-and-exponential-functions/lesson-16/
|
# Exponents and Exponential Functions
## Objective
Compare rates of change in linear and exponential functions shown as equations, graphs, and situations.
## Common Core Standards
### Core Standards
• F.IF.C.9 — Compare properties of two functions each represented in a different way (algebraically, graphically, numerically in tables, or by verbal descriptions). For example, given a graph of one quadratic function and an algebraic expression for another, say which has the larger maximum.
• F.LE.A.1 — Distinguish between situations that can be modeled with linear functions and with exponential functions.
• F.LE.A.3 — Observe using graphs and tables that a quantity increasing exponentially eventually exceeds a quantity increasing linearly, quadratically, or (more generally) as a polynomial function.
• A.SSE.A.1 — Interpret expressions that represent a quantity in terms of its context.★ (Note on modeling standards: modeling is best interpreted not as a collection of isolated topics but in relation to other standards. Making mathematical models is a Standard for Mathematical Practice, and specific modeling standards appear throughout the high school standards, indicated by a star symbol (★). The star symbol sometimes appears on the heading for a group of standards; in that case, it should be understood to apply to all standards in that group.)
• 8.F.A.2
• 8.F.B.4
## Criteria for Success
1. Understand that linear functions have a constant rate of change and exponential functions have an increasing or decreasing rate of change.
2. Identify whether functions are linear or exponential in graphs, tables, equations, and situations.
3. Compare linear and exponential functions, identifying where the rates of change of exponential functions are greater than or less than linear functions.
4. Understand that an increasing exponential function will eventually exceed an increasing linear function.
## Tips for Teachers
This lesson focuses on understanding and making connections across situations, graphs, and equations of linear and exponential functions. Students are not asked to generate equations, graphs, or contexts; they will work on those skills and concepts in the lessons that follow.
## Anchor Problems
### Problem 1
Complete "Avi and Benita's Repair Shop" by Desmos.
#### References
Avi and Benita's Repair Shop by Desmos is made available by Desmos. Copyright © 2017 Desmos, Inc. Accessed May 17, 2018, 12:43 p.m.
### Problem 2
Two equations are shown below.
A: ${y=0.01(2^x)}$
B: ${y=100x}$
1. Which equation represents Avi’s payment rule and which equation represents Benita’s payment rule?
2. Explain what each part of each equation means in context of the situation.
3. Describe, using features from the equations, how each function grows over time.
### Problem 3
Two equations and their graphs are shown.
Equation 1: ${y=5x}$
Equation 2: ${y=0.5(2^x)}$
a) Label each graph with the appropriate equation.
b) Describe the change in $y$ in each function as $x$ increases by $1$
c) Describe the behavior of the exponential graph over time, as compared to the linear graph.
d) Over approximately what interval is the linear function greater than the exponential function?
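Part (d) can be answered with a quick numerical scan. This is a teacher-facing sketch of my own (the step size and upper bound are arbitrary choices), comparing $y = 5x$ with $y = 0.5(2^x)$:

```python
def linear_above(step=0.001, hi=8.0):
    """Return the smallest and largest x in [0, hi] (on a grid with the
    given step) where the linear y = 5x exceeds the exponential y = 0.5*2**x."""
    above = [i * step for i in range(int(hi / step) + 1)
             if 5 * (i * step) > 0.5 * 2 ** (i * step)]
    return min(above), max(above)
```

The scan puts the linear function on top only for roughly $0.11 < x < 5.88$; outside that window the exponential dominates, which is the F.LE.A.3 takeaway.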
## Problem Set
The following resources include problems and activities aligned to the objective of the lesson that can be used to create your own problem set.
• Include examples where students are given different representations of linear and exponential functions (tables, graphs, equations), and must match them together
Use the function shown below to answer the questions that follow.
a. Which equation matches the graph?
i. ${y=3(2^x )}$
ii. ${y=2(3^x )}$
iii. ${y=4x+2}$
iv. ${y=2x+4}$
b. Explain why you chose that equation in part (a).
c. What is the rate of change of this function?
d. Suppose the graph of ${y=150x}$ was added to the graph above. Which function is greater over the domain interval ${1<x<5}$? Will this function always be greater over all values of the domain? Explain your reasoning.
https://blog.rmhogervorst.nl/tags/intermediate/
|
Graphing My Daily Phone Use
How many times do I look at my phone? I set up a small program on my phone to count the screen activations and log them to a file. In this post I show what went wrong and how to plot the results. The data: I set up a small program on my phone that counts every day how many times I use my phone (to be specific, it counts the times the screen has been activated). [Read More]
Logging my phone use with tasker
In this post I’ll show you how I logged my phone use with tasker, in a follow up post I’ll show you how I visualized that. I had a great vacation last week but relaxing in Spain I thought about my use of technology and became a bit concerned with how many times I actually look at my phone. But how many times a day do I actually look at my phone? [Read More]
Tweeting daily famous deaths from wikidata to twitter with R and docker
A tweet a day keeps the insanity at bay
In this explainer I walk you through the steps I took to create a twitter bot that tweets daily about people who died on that date. I created a script that queries wikidata, takes that information and creates a sentence. That sentence is then tweeted. For example: a tweet I literally just sent out from the docker container. I hope you are as excited as I am about this project. [Read More]
Use purrr to feed four cats
Replacing a for loop with purrr::map_*
Use purrr to feed four cats In this example we will show you how to go from a 'for loop' to purrr. Use this as a cheatsheet when you want to replace your for loops. Imagine having four real cats who need food, care and love to live a happy life. They are starting to meow, so it's time to feed them. Our real-life algorithm would be: [Read More]
How to set up GNU Terry Pratchett on hugo with netlify
Keeping Terry Pratchett alive in static websites
TL;DR: In this post I will show you how to set up special header information on a static website such as hugo + netlify. Netlify interprets the _headers file and applies the rules to your website. You only have to set a simple rule, and now you too can keep Terry Pratchett alive! GNU Terry Pratchett On March 12th 2015 one of my favorite writers; Terry Pratchett died, but the people of the internet were not ready to let him go. [Read More]
Recently I wanted to download all the transcripts of a podcast (600+ episodes). The transcripts are simple txt files, so in a way I am not even 'web'-scraping but just reading in 600 or so text files, which is not really a big deal. I thought. This post shows you where I went wrong. Webscraping in general: for every download you ask the server for a file and it returns the file (this is also how you normally browse the web, btw; your browser requests the pages). [Read More]
Writing manuscripts in Rstudio, easy citations
Intro and setup This is a simple explanation of how to write a manuscript in RStudio. Writing a manuscript in RStudio is not ideal, but it has gotten better over time. It is now relatively easy to add citations to documents in RStudio. **The goal is not to think about formatting and citations, but to write the manuscript and add citations on the fly with nice visual help.** [Read More]
Plotting a map with ggplot2, color by tile
Introduction Last week I was playing with creating maps using R and GGPLOT2. As I was learning I realized information about creating maps in ggplot is scattered over the internet. So here I combine all that knowledge. So if something is absolutely wrong/ ridiculous / stupid / slightly off or not clear, contact me or open an issue on the github page. When you search for plotting examples you will often encounter the packages maps and mapdata. [Read More]
Submitting your first package to CRAN, my experience
I recently published my first R package to The Comprehensive R Archive Network (CRAN). It was very exciting and also quite easy. Let me walk you through my process. First a description of my brand new package, badgecreatr, then a description of the steps to take for submission. Package description: when you go around github looking at projects you often see these interesting badge images in the readme. The ones you see above are from ggplot2. [Read More]
https://www.studysmarter.us/textbooks/physics/modern-physics-2nd-edition/bound-states-simple-cases/q18cq-quantum-mechanical-stationary-states-are-of-the-genera/
|
Q18CQ
Found in: Page 187
### Modern Physics (2nd Edition)
Author: Randy Harris. 633 pages. ISBN 9780805303087.
# Quantum-mechanical stationary states are of the general form $\Psi(x,t) = \psi(x)\,e^{-i\omega t}$. For the basic plane wave (Chapter 4), this is $\Psi(x,t) = A e^{ikx} e^{-i\omega t} = A e^{i(kx - \omega t)}$, and for a particle in a box it is $A \sin(kx)\, e^{-i\omega t}$. Although both are sinusoidal, we claim that the plane wave alone is the prototype function whose momentum is pure, a well-defined value in one direction. Reinforcing the claim is the fact that the plane wave alone lacks features that we expect to see only when, effectively, waves are moving in both directions. What features are these, and, considering the probability densities, are they indeed present for a particle in a box and absent for a plane wave?
Feature for a particle in a box: the wave function is equal to zero at any point where $kx=n\pi$.
Feature for a plane wave: the function is never zero; hence it has pure momentum, a well-defined value in one direction.
## Step 1: Given data
For a basic plane wave, the wave function is given by $\Psi \left(x,t\right)=A{e}^{i\left(kx-\omega t\right)}$ and for a particle in a box the wave function is given by $\Psi \left(x,t\right)=A\mathrm{sin}\left(kx\right){e}^{-i\omega t}.$
## Step 2: Concept of quantum mechanics
In quantum mechanics, a particle does not have a fixed boundary or a definite momentum. Here, particles are considered to be a superposition of a large number of waves forming a wave packet.
## Step 3: Time-dependent wave
If a wave is moving in both directions, then it will form a standing wave, and hence nodes will be formed. For a particle in a box, the solution has the term $\left(\mathrm{sin}\left(kx\right)\right)$, which is definitely zero when $kx=n\pi$ but for a plane wave, the function $\mathrm{exp}\left(i\left(kx-\omega t\right)\right)$ is not zero, though its complex square is a constant.
Hence, we see that a wave function for a plane wave cannot be zero.
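The contrast between the two probability densities can be checked directly. A small sketch (the values of $A$, $k$, and $\omega$ are arbitrary):

```python
import cmath
import math

A, k, omega = 1.0, 2.0, 5.0  # arbitrary illustrative constants

def plane_wave_density(x, t):
    """|Psi|^2 for the plane wave Psi = A * exp(i(kx - wt))."""
    return abs(A * cmath.exp(1j * (k * x - omega * t))) ** 2

def box_density(x, t):
    """|Psi|^2 for the box state Psi = A * sin(kx) * exp(-iwt)."""
    return abs(A * math.sin(k * x) * cmath.exp(-1j * omega * t)) ** 2
```

The plane-wave density is the same constant $A^2$ at every $x$ and $t$, while the box density vanishes at the nodes $kx = n\pi$: exactly the standing-wave feature the question asks about.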
http://beyondmicrofoundations.blogspot.com/2012/12/
|
## Monday, December 31, 2012
### Graph of the Day
Today's graphic is motivated by recent posts by Paul Krugman on implications of capital-biased technological change. In both posts Krugman uses the share of employee compensation (COE) to nominal GDP as his measure of labor's share of income. Although the data for both series go back to 1947, Krugman chooses to drop the data prior to 1973 arguing that 1973 marked the end of the post-WWII economic boom. Put another way, Krugman is saying that there is a structural break in the data generating process for labor's share which makes data prior to 1973 useless (or perhaps actively misleading) if one is interested in thinking about future trends in labor share.
If you are wondering what a plot of the entire time series looks like here is the ratio of COE / GDP from 1947 forward.
It looks like the employee compensation ratio is roughly the same today as it was in 1950 (although obviously heading in different directions!).
In his first post Krugman argues that this measure "fluctuates over the business cycle." Note that the vertical scale ranges only from 0.52 to 0.60. Such a narrow range visually exaggerates fluctuations in the series. Plotting the same data on its natural scale (i.e., 0 to 1) yields the following.
Based on this plot, the measure appears to have been remarkably constant over the past 60 odd years.
Which of these plots gives the more "correct" view of the data? Or does it depend on the point you are trying to make?
As always, code is available.
## Friday, December 28, 2012
### Graph of the Day
Took a few days off blogging for Christmas and Boxing Day, but am now back at it! Here is a quick plot of historical measures of inflation in the U.S. I used Pandas to grab the three price indices, and then used a nice built-in Pandas method, pct_change(periods), to convert the monthly price indices (i.e., CPIAUCNS and CPIAUCSL) and the quarterly GDP deflator to measures of percentage change in prices from a year ago (which is a standard measure of inflation).
After combining the three series into a single DataFrame object, you can plot all three series with a single line of code!
Unsurprisingly the three measures track one another very closely. Perhaps I should have thrown in some measures of producer prices? Code is available here.
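In miniature, the pct_change step looks like this (the index numbers below are made up for illustration; real data would come from FRED as in the post):

```python
import pandas as pd

# A toy monthly price index (illustrative values, not real CPI data)
cpi = pd.Series(
    [100.0, 101.0, 102.0, 103.0, 104.0, 105.0, 106.0,
     107.0, 108.0, 109.0, 110.0, 111.0, 112.0, 113.2],
    index=pd.period_range("2010-01", periods=14, freq="M"),
)

# Year-over-year inflation: percent change from the value 12 months earlier
inflation = 100 * cpi.pct_change(periods=12)
```

The first twelve observations are NaN (there is nothing a year earlier to compare against), and each remaining value is the standard year-over-year inflation rate.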
## Monday, December 24, 2012
### Graph(s) of the Day!
Today's graphic(s) attempt to dispel a common misunderstanding of basic probability theory. We all know that flipping a fair coin will result in heads exactly 50% of the time. Given this, many people seem to think that the Law of Large Numbers (LLN) tells us that the observed number of heads should more or less equal the expected number of heads. This intuition is wrong!
A South African mathematician named John Kerrich was visiting Copenhagen in 1940 when Germany invaded Denmark. Kerrich spent the next five years in an internment camp where, to pass the time, he carried out a series of experiments in probability theory...including an experiment where he flipped a coin by hand 10,000 times! He apparently also used ping-pong balls to demonstrate Bayes' theorem.
After the war Kerrich was released and published the results of many of his experiments. I have copied the table of the coin flipping results reported by Kerrich below (and included a csv file on GitHub). The first two columns are self-explanatory; the third column, Difference, is the difference between the observed number of heads and the expected number of heads.
| Tosses | Heads | Difference |
|-------:|------:|-----------:|
| 10 | 4 | -1 |
| 20 | 10 | 0 |
| 30 | 17 | 2 |
| 40 | 21 | 1 |
| 50 | 25 | 0 |
| 60 | 29 | -1 |
| 70 | 32 | -3 |
| 80 | 35 | -5 |
| 90 | 40 | -5 |
| 100 | 44 | -6 |
| 200 | 98 | -2 |
| 300 | 146 | -4 |
| 400 | 199 | -1 |
| 500 | 255 | 5 |
| 600 | 312 | 12 |
| 700 | 368 | 18 |
| 800 | 413 | 13 |
| 900 | 458 | 8 |
| 1000 | 502 | 2 |
| 2000 | 1013 | 13 |
| 3000 | 1510 | 10 |
| 4000 | 2029 | 29 |
| 5000 | 2533 | 33 |
| 6000 | 3009 | 9 |
| 7000 | 3516 | 16 |
| 8000 | 4034 | 34 |
| 9000 | 4538 | 38 |
| 10000 | 5067 | 67 |
Below I plot the data in the third column: the difference between the observed number of heads and the expected number of heads is diverging (which is the exact opposite of most people's intuition)!
Perhaps Kerrich made a mistake (he didn't), but we can check his results via simulation! First, a single replication of T = 10,000 flips of a fair coin...
Again, we observe divergence (but this time in the opposite direction!). For good measure, I ran N=100 replications of the same experiment (i.e., flipping a coin T=10,000 times). The result is the following nice graphic...
Our simulations suggest that Kerrich's result was indeed typical. The LLN does not say that as T increases the observed number of heads will be close to the expected number of heads! What the LLN says instead is that, as T increases, the sample proportion of heads will get closer and closer to the true probability of heads (which in this case, with our fair coin, is 0.5).
Let's run another simulation to verify that the LLN actually holds. In the experiment I conduct N=100 runs of T=10,000 coin flips. For each of the runs I re-compute the sample average after each successive flip.
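A compact version of these simulations (the seed and replication count are arbitrary choices of mine):

```python
import random

def mean_abs_gap(n_flips, n_reps=200, seed=0):
    """Average |heads - n/2| and |heads/n - 0.5| over many replications
    of flipping a fair coin n_flips times."""
    rng = random.Random(seed)
    diff_gaps, avg_gaps = [], []
    for _ in range(n_reps):
        heads = sum(rng.randint(0, 1) for _ in range(n_flips))
        diff_gaps.append(abs(heads - n_flips / 2))
        avg_gaps.append(abs(heads / n_flips - 0.5))
    return sum(diff_gaps) / n_reps, sum(avg_gaps) / n_reps
```

Comparing T = 100 with T = 10,000 shows the typical absolute difference growing (roughly like $\sqrt{T}$) while the gap between the sample average and 0.5 shrinks, which is the LLN statement in the text.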
As always code and data are available! Enjoy.
## Sunday, December 23, 2012
### Graph of the Day
Earlier this week I used Pandas to grab some historical data on the S&P 500 from Yahoo!Finance and generate a simple time series plot. Today, I am going to re-examine this data set in order to show the importance of scaling and adjusting for inflation when plotting economic data.
I again use the functions from pandas.io.data to grab the data. Specifically, I use get_data_yahoo('^GSPC') to get the S&P 500 time series, and get_data_fred('CPIAUCSL') to grab the consumer price index (CPI). Here is a naive plot of the historical S&P 500 from 1950 through 2012 (as usual, it includes grey NBER recession bands).
Note that, because the CPI data are monthly frequency, I resample the daily S&P 500 data by taking monthly averages. This plot might make you conclude that there was a massive structural break/regime change around the year 2000 in whatever underlying process is generating the S&P 500. However, as the level of the S&P 500 increases, the linear scale on the vertical axis makes changes from month to month seem more dramatic. To control for this, I simply make the vertical scale logarithmic (now equal distances on the vertical axis represent equal percentage changes in the S&P 500).
Now the "obvious" structural break in the year 2000 no longer seems so obvious. Indeed there was a period of roughly 10-15 years during the late 1960's through 1970's during which the S&P 500 basically moved sideways in a similar manner to what we have experienced during the last 10+ years.
This brings us to another, more significant, problem with these graphs: neither of them adjusts for inflation! When plotting long economic time series it is always a good idea to adjust for inflation. The 1970's was a period of fairly high inflation in the U.S., thus the fact that the nominal value of the S&P 500 didn't change all that much over this period tells us that, in real terms, the value of the S&P 500 fell considerably.
Using the CPI data from FRED, it is straight-forward to convert the nominal value of the S&P 500 index to a real value for some base month/year. Below is a plot of the real S&P 500 (in Nov. 2012 Dollars), with a logarithmic scale on the vertical axis. As expected, the real S&P 500 declined significantly during the 1970's.
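The deflation step itself is a one-liner: rescale each nominal observation by the ratio of base-period CPI to current CPI. A sketch (the function name and toy numbers are mine):

```python
def to_real(nominal, cpi, base):
    """Deflate a nominal series into base-period dollars:
    real_t = nominal_t * (CPI_base / CPI_t)."""
    return [n * cpi[base] / c for n, c in zip(nominal, cpi)]
```

If the nominal series grows exactly as fast as the CPI, the real series is flat, which is what makes the deflated S&P 500 plot informative.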
Code is available on GitHub. Enjoy!
## Saturday, December 22, 2012
### Graph(s) of the Day
Suppose a large number of identical firms in a perfectly competitive industry with constant returns to scale (CRTS) Cobb-Douglas production functions: $Y = F(K, L) = K^{\alpha}(AL)^{1 - \alpha}$ Output, Y, is a homogeneous of degree one function of capital, K, and labor, L; technology, A, is labor augmenting.
Typically, we economists model firms as choosing demands for capital and labor in order to maximize profits while taking prices as given (i.e., unaffected by the decisions of the individual firm):$\max_{K,L} \Pi = K^{\alpha}(AL)^{1 - \alpha} - (wL + rK)$ where the prices are $1, w, r$. Note that I am following convention in assuming that the price of the output good is the numeraire (i.e., normalized to 1) and thus the real wage, $w$, and the return to capital, $r$, are both relative prices expressed in terms of units of the output good.
The first order conditions (FOCs) of a typical firms maximization problem are \begin{align}\frac{\partial \Pi}{\partial K}=&0 \implies r = \alpha K^{\alpha-1}(AL)^{1 - \alpha} \label{MPK}\\ \frac{\partial \Pi}{\partial L}=&0 \implies w = (1 - \alpha) K^{\alpha}(AL)^{-\alpha}A \label{MPL}\end{align} Dividing $\ref{MPK}$ by $\ref{MPL}$ (and a bit of algebra) yields the following equation for the optimal capital/labor ratio: $\frac{K}{L} = \left(\frac{\alpha}{1 - \alpha}\right)\left(\frac{w}{r}\right)$ The fact that, for a given set of prices $w$, $r$, the optimal choices of $K$ and $L$ are indeterminate (any ratio of $K$ and $L$ satisfying the above condition will do) implies that the optimal scale of the firm is also indeterminate.
How can I create a graphic that clearly demonstrates this property of the CRTS production function? I can start by fixing values for the wage and return to capital and then creating contour plots of the production frontier and the cost surface.
The above contour plots are drawn for $w\approx0.84$ and $r\approx0.21$ (which implies an optimal capital/labor ratio of 2:1). You should recognize the contour plot for the production surface (left) from a previous post. The contour plot of the cost surface (right) is a simple plane (which is why the isocost lines are lines and not curves!). Combining the contour plots allows one to see the set of tangency points between isoquants and isocosts.
A firm manager is indifferent between each of these points of tangency, and thus the size/scale of the firm is indeterminate. Indeed, with CRTS a firm will earn zero profits at each of the tangency points in the above contour plot.
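The zero-profit claim is easy to verify numerically. A sketch with illustrative parameter values ($\alpha = 0.5$, $A = 1$, target capital/labor ratio of 2):

```python
def profit(K, L, alpha, w, r):
    """Profit for Y = K**alpha * L**(1 - alpha) at prices (w, r)."""
    Y = K ** alpha * L ** (1 - alpha)
    return Y - w * L - r * K

alpha = 0.5
kappa = 2.0                           # target capital/labor ratio K/L
r = alpha * kappa ** (alpha - 1)      # price of capital = MPK at that ratio
w = (1 - alpha) * kappa ** alpha      # wage = MPL at that ratio
```

At these prices the formula above gives $K/L = (\alpha/(1-\alpha))(w/r) = 2$, and profit is zero at every scale along that ray: the indeterminacy visible in the contour plot.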
As usual, the code is available on GitHub.
Update: Installing MathJax on my blog to render mathematical equations was easy (just a quick cut and paste job).
## Friday, December 21, 2012
### Gun control...
Via Mark Thoma, Steve Williamson has an excellent post about the economics of gun control:
What's the problem here? People buy guns for three reasons: (i) they want to shoot animals with them; (ii) they want to shoot people with them; (iii) they want to threaten people with them. There are externalities. Gun manufacturers and retailers profit from the sale of guns. The people who buy the guns and use them seem to enjoy having them. But there are third parties who suffer. People shooting at animals can hit people. People who buy guns intending to protect themselves may shoot people who in fact intend no harm. People may temporarily feel compelled to harm others, and want an efficient instrument to do it with.
There are also information problems. It may be difficult to determine who is a hunter, who is temporarily not in their right mind, and who wants to put a loaded weapon in the bedside table.
What do economists know? We know something about information problems, and we know something about mitigating externalities. Let's think first about the information problems. Here, we know that we can make some headway by regulating the market so that it becomes segmented, with these different types of people self-selecting. This one is pretty obvious, and is a standard part of the conversation. Guns for hunting do not need to be automatic or semi-automatic, they do not need to have large magazines, and they do not have to be small. If hunting weapons do not have these properties, who would want to buy them for other purposes?
On the externality problem, we can be more inventive. A standard tool for dealing with externalities is the Pigouvian tax. Tax the source of the bad externality, and you get less of it. How big should the tax be? An unusual problem here is that the size of the externality is random - every gun is not going to injure or kill someone. There's also an inherent moral hazard problem, in that the size of the externality depends on the care taken by the gunowner. Did he or she properly train himself or herself? Did they store their weapon to decrease the chance of an accident?
What's the value of a life? I think when economists ask that question, lay people are offended. I'm thinking about it now, and I'm offended too. If someone offered me \$5 million for my cat, let alone another human being, I wouldn't take it. In any case, the Pigouvian tax we would need to correct the externality should be a large one, and it could generate a lot of revenue. If there are 300 million guns in the United States, and we impose a tax of \$3600 per gun on the current stock, we would eliminate the federal government deficit. But \$3600 comes nowhere close to the potential damage that a single weapon could cause. A potential solution would be to have a gun purchaser post collateral - several million dollars in assets - that could be confiscated in the event that the gun resulted in injury or loss of life. This has the added benefit of mitigating the moral hazard problem - the collateral is lost whether the damage is "accidental" or caused by, for example, someone who steals the gun. Of course, once we start thinking about the size of the tax (or collateral) needed to correct the inefficiency that exists here, we'll probably come to the conclusion that it is more efficient just to ban particular weapons and ammunition at the point of manufacture. I think our legislators should take that as far as it goes.

### Graph of the Day

Today's graphic demonstrates the use of Pandas to grab data from Yahoo! Finance. The code I wrote uses pandas.io.data.get_data_yahoo() to grab historical daily data on the S&P 500 index and then generates a simple time series plot. I went ahead and added the NBER recession bars for good measure. Note the use of a logarithmic scale on the vertical axis. Enjoy!

## Thursday, December 20, 2012

### Graph of the Day

Today's graph is a combined 3D plot of the production frontier associated with the constant returns to scale Cobb-Douglas production function and a contour plot showing the isoquants of the production frontier.
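The constant-returns property behind that production frontier is easy to check numerically. A minimal sketch (the Cobb-Douglas form is standard; the particular α and input values below are illustrative choices, not taken from the post's code):

```python
import numpy as np

def cobb_douglas(K, L, alpha=0.3, A=1.0):
    """Cobb-Douglas production function Y = A * K^alpha * L^(1 - alpha)."""
    return A * K**alpha * L**(1 - alpha)

K, L = 4.0, 9.0
Y = cobb_douglas(K, L)

# Constant returns to scale: doubling both inputs doubles output.
assert np.isclose(cobb_douglas(2 * K, 2 * L), 2 * Y)

# An isoquant (one contour of the frontier) at output level Y:
# from Y = K^alpha * L^(1-alpha), solve L = (Y / K^alpha)^(1 / (1-alpha)).
K_grid = np.linspace(1.0, 10.0, 5)
L_iso = (Y / K_grid**0.3) ** (1 / 0.7)
assert np.allclose(cobb_douglas(K_grid, L_iso), Y)
```

The second block is exactly what a contour plot of the frontier draws: the set of (K, L) pairs producing the same output.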
This static snapshot was produced with matplotlib (the code also includes an interactive version of the 3D production frontier implemented in Mayavi). At some point I will figure out how to embed the interactive Mayavi plot into a blog post so that readers can manipulate the plot and change parameter values. If anyone knows how to do this already, a pointer would be much appreciated!

## Wednesday, December 19, 2012

### Blogging to resume again!

It has been far too long since my last post. Life (becoming a father), travel (summer research trip to SFI), teaching (a course on Computational Economics), and research (also trying to finish my PhD!) have a way of getting in the way of my blogging. As a mechanism to slowly move back into the blog world, I have decided to start a 'Graph of the Day' series. Each day I will create a new economic graphic using my favorite Python libraries (mostly Pandas, matplotlib, NumPy/SciPy).

The inaugural 'Graph of the Day' is Figure 1-1 from Mankiw's intermediate undergraduate textbook Macroeconomics. Real GDP measures the total income of everyone in the economy, and real GDP per person measures the income of the average person in the economy. The figure shows that real GDP per person tends to grow over time and that this normal growth is sometimes interrupted by periods of declining income, called recessions or depressions (the grey NBER bars!). Note that real GDP per person is plotted on a logarithmic scale. On such a scale, equal distances on the vertical axis represent equal percentage changes. This is why the distance between \$8,000 and \$16,000 (a 100% increase) is the same as the distance between \$32,000 and \$64,000 (also a 100% increase).
The Python code is available on GitHub for download (I used pandas.io.data.get_data_fred() to grab the data). The graphic is a bit boring. I was a bit depressed to find that the longest time series for U.S. per capita real GDP only goes back to 1960! This seems a bit scandalous...but perhaps I was just using the wrong data tags!
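The log-scale property described above (equal vertical distances correspond to equal percentage changes) can be verified directly, since distance on a log axis is the difference of logarithms and therefore depends only on the ratio of the two values:

```python
import math

# On a logarithmic axis, the vertical distance between two values is
# proportional to the difference of their logs, which depends only on
# their ratio, i.e., on the percentage change between them.
d1 = math.log10(16_000) - math.log10(8_000)    # a 100% increase
d2 = math.log10(64_000) - math.log10(32_000)   # also a 100% increase
assert math.isclose(d1, d2)                     # both equal log10(2)
```

This is why steady exponential growth shows up as a straight line on the figure's log-scaled vertical axis.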
https://codeforces.com/problemset/problem/1151/E
|
E. Number of Components
time limit per test
1 second
memory limit per test
256 megabytes
input
standard input
output
standard output
The Kingdom of Kremland is a tree (a connected undirected graph without cycles) consisting of $n$ vertices. Each vertex $i$ has its own value $a_i$. The vertices are connected in a chain: formally, for every $1 \leq i < n$ there is an edge between vertices $i$ and $i+1$.
Denote the function $f(l, r)$, which takes two integers $l$ and $r$ ($l \leq r$):
• We leave in the tree only vertices whose values range from $l$ to $r$.
• The value of the function will be the number of connected components in the new graph.
Your task is to calculate the following sum: $$\sum_{l=1}^{n} \sum_{r=l}^{n} f(l, r)$$
Input
The first line contains a single integer $n$ ($1 \leq n \leq 10^5$) — the number of vertices in the tree.
The second line contains $n$ integers $a_1, a_2, \ldots, a_n$ ($1 \leq a_i \leq n$) — the values of the vertices.
Output
Print one number — the answer to the problem.
Examples
Input
3
2 1 3
Output
7
Input
4
2 1 1 3
Output
11
Input
10
1 5 2 5 5 3 10 6 5 1
Output
104
Note
In the first example, the function values will be as follows:
• $f(1, 1)=1$ (there is only a vertex with the number $2$, which forms one component)
• $f(1, 2)=1$ (there are vertices $1$ and $2$ that form one component)
• $f(1, 3)=1$ (all vertices remain, one component is obtained)
• $f(2, 2)=1$ (only vertex number $1$)
• $f(2, 3)=2$ (there are vertices $1$ and $3$ that form two components)
• $f(3, 3)=1$ (only vertex $3$)
In total, $7$.
In the second example, the function values will be as follows:
• $f(1, 1)=1$
• $f(1, 2)=1$
• $f(1, 3)=1$
• $f(1, 4)=1$
• $f(2, 2)=1$
• $f(2, 3)=2$
• $f(2, 4)=2$
• $f(3, 3)=1$
• $f(3, 4)=1$
• $f(4, 4)=0$ (there is no vertex left, so the number of components is $0$)
In total, $11$.
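A standard approach to this problem (not part of the statement itself) uses the fact that in a forest the number of components equals the number of vertices minus the number of edges, so each vertex and each edge can be counted independently over all $(l, r)$ pairs. A sketch:

```python
def count_components_sum(a):
    """Sum of f(l, r) over all 1 <= l <= r <= n for the path graph.

    In a forest, #components = #vertices - #edges. So count, over all
    (l, r) pairs, how often each vertex and each edge survives:
      - vertex with value v survives when l <= v <= r: v * (n - v + 1) pairs;
      - edge (i, i+1) survives when both endpoints do:
        min(a_i, a_{i+1}) * (n - max(a_i, a_{i+1}) + 1) pairs.
    Runs in O(n).
    """
    n = len(a)
    total = sum(v * (n - v + 1) for v in a)
    for x, y in zip(a, a[1:]):
        total -= min(x, y) * (n - max(x, y) + 1)
    return total

# The three sample tests from the statement:
assert count_components_sum([2, 1, 3]) == 7
assert count_components_sum([2, 1, 1, 3]) == 11
assert count_components_sum([1, 5, 2, 5, 5, 3, 10, 6, 5, 1]) == 104
```

The per-vertex factor $v \cdot (n - v + 1)$ counts the pairs with $l \leq v$ and $r \geq v$; the edge term is the analogous count requiring both endpoint values in $[l, r]$.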
https://infinitylearn.com/surge/question/physics/two-projectiles-a-and-b-are-thrown-with-the-same-speed-but-a/
|
# Two projectiles A and B are thrown with the same speed but at angles of 40° and 50° with the horizontal. Which projectile will fall earlier?
1. A
2. B
3. Both will fall at the same time
4. None of the above
Since the time of flight is $T = \frac{2u\sin\theta}{g}$, a smaller $\theta$ (below $90°$) gives a smaller $\sin\theta$ and hence a shorter time of flight. Hence A, thrown at $40°$, will fall earlier.
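The comparison can be checked numerically; the launch speed below is an arbitrary illustrative value, since only the ratio of the two flight times matters:

```python
import math

def time_of_flight(u, theta_deg, g=9.8):
    """Time of flight T = 2 u sin(theta) / g for a projectile on level ground."""
    return 2 * u * math.sin(math.radians(theta_deg)) / g

u = 20.0  # arbitrary launch speed in m/s; the comparison holds for any u > 0
t_A = time_of_flight(u, 40)
t_B = time_of_flight(u, 50)
assert t_A < t_B  # A, thrown at the smaller angle, lands first
```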