https://ask.wireshark.org/question/21693/do-not-decode-above-tcpport-and-output-as-text/ | # Do not decode above tcp.port and output as text
Hi all,
It might be that the answer is already written somewhere, but I haven't been able to find it.
The situation is as follows: we capture network traffic to process data from one particular port. After doing the capture, we convert it to a comma-separated file using:
tshark.exe -r input.pcapng -o data.show_as_text:TRUE -F logcat-long -eframe.time_epoch -eip.src -eip.dst -edata.text -Tfields "tcp.analysis.push_bytes_sent and tcp.port == 10001" > output.csv
Most of the time this works great. However, one time we got a session that was interpreted as IRC. This led to the column data.text being empty for that session.
I am thinking of adding --disable-protocol irc as an extra argument so that the issue never occurs again for IRC. However, I was wondering whether there are better arguments that would achieve the same result.
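A minimal sketch of that idea (the command is just the one above with IRC dissection disabled; --disable-protocol is a standard tshark option):

tshark.exe -r input.pcapng --disable-protocol irc -o data.show_as_text:TRUE -F logcat-long -eframe.time_epoch -eip.src -eip.dst -edata.text -Tfields "tcp.analysis.push_bytes_sent and tcp.port == 10001" > output.csv

With the IRC dissector disabled, the payload on that stream should fall back to the plain data dissector, so data.text would be populated again.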
I did notice that the -C option can be used to specify a configuration profile. These tests run on multiple machines, so I would prefer a command-line-only option.
There is also tcp.payload, which can be output by replacing -edata.text with -etcp.payload, but that only contains hex numbers and not the text.
Does anybody have a good suggestion for command-line parameters to use?
https://forum.bebac.at/forum_entry.php?id=18650&order=time | ## Korsmeyer–Peppas’ model [Design Issues]
Hi Helmut,
» I don’t expect a lag-time. I would rather say that doxo follows Le Chatelier’s principle and is simply driven out from the liposomes due to the concentration gradient (Cliposomes > Cblood).
Judging by a recent paper, [1] it looks like it follows Korsmeyer–Peppas' model:

Release = CumReleased / FinallyReleased = k·t^n
I know it is yet another endless in vitro exercise, but it confirms your suggestion.
To obtain an indirect estimate of plasma concentrations of free doxorubicin after administration of Liposomal Doxorubicin, the reported ratio between doxorubicinol, the major doxorubicin metabolite, and doxorubicin concentration in plasma after administration of standard doxorubicin can be used.
Based on the measurement of doxorubicinol, which usually represents 40–50% of the free doxorubicin concentrations, plasma concentrations of free doxorubicin after liposomal administration remain very low, approximately 0.25–1.25% of the total measured drug. [2]
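As a back-of-the-envelope illustration of that ratio (the numbers are mine, purely for illustration): if doxorubicinol represents 40–50% of the free doxorubicin concentration, then

C(free doxorubicin) ≈ C(doxorubicinol) / 0.4–0.5, e.g. C(doxorubicinol) = 10 ng/mL → C(free doxorubicin) ≈ 20–25 ng/mL.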
1. Fateme Haghiralsadat et al. (2017) A comprehensive mathematical model of drug release kinetics from nano-liposomes, derived from optimization studies of cationic PEGylated liposomal doxorubicin formulations for drug-gene delivery, Artificial Cells, Nanomedicine, and Biotechnology, 46:1, 169–177, doi:10.1080/21691401.2017.1304403.
2. Gabizon, A., Shmeeda, H. & Barenholz, Y. Clin Pharmacokinet (2003) 42: 419–436. doi:10.2165/00003088-200342050-00002.
Kind regards,
Mittyri
http://nrich.maths.org/thismonth/3and4/2005/02 | Coordinates - February 2005, Stage 3&4
Problems
Eight Hidden Squares
Stage: 2 and 3
On the graph there are 28 marked points. These points all mark the vertices (corners) of eight hidden squares. Can you find the eight hidden squares?
Isosceles Triangles
Stage: 3
Draw some isosceles triangles with an area of $9$cm$^2$ and a vertex at (20,20). If all the vertices must have whole number coordinates, how many is it possible to draw?
Lost
Stage: 3
Can you locate the lost giraffe? Input coordinates to help you search and find the giraffe in the fewest guesses.
Square Coordinates
Stage: 3
A tilted square is a square with no horizontal sides. Can you devise a general instruction for the construction of a square when you are given just one of its sides?
Lost on Alpha Prime
Stage: 4
On the 3D grid a strange (and deadly) animal is lurking. Using the tracking system can you locate this creature as quickly as possible?
Tracking Points
Stage: 4
Can you give the coordinates of the vertices of the fifth point in the pattern on this 3D grid?
Something in Common
Stage: 4
A square of area 3 square units cannot be drawn on a 2D grid so that each of its vertices has integer coordinates, but can it be drawn on a 3D grid? Investigate squares that can be drawn.
https://matholympiad.org.bd/forum/viewtopic.php?t=3218&p=15648 | Junior Divisional 2013/1
Problems for the Junior Group from the Divisional Mathematical Olympiad will be solved here.
Forum rules
Please don't post problems (by starting a topic) in the "Junior: Solved" forum. This forum is only for showcasing the problems for the convenience of the users. You can post the problems in the main Divisional Math Olympiad forum. Later we shall move that topic with proper formatting, and post in the resource section.
barnik
Posts: 13
Joined: Wed Dec 03, 2014 3:37 pm
Junior Divisional 2013/1
$O$ is the midpoint of $AB$ and $N$ is the midpoint of $AC$. $D$ lies on $AB$ with $AD:AB=2:5$, and $F$ lies on $AC$ with $AF:AC=2:5$. The area of triangle $ABC$ is $50\ \text{m}^2$. What is the difference between the area of quadrilateral $ADPF$ and the area of triangle $PON$?
Attachment: a.png (17.34 KiB), the figure showing the configuration and the point $P$
tanmoy
Posts: 289
Joined: Fri Oct 18, 2013 11:56 pm
$(ABC)=50\ \text{m}^2$, so $(AON)=\frac{50}{4}=\frac{25}{2}\ \text{m}^2$ and $(ADF)=50\times \frac{4}{25}=8\ \text{m}^2$.

Since $DF\parallel ON\parallel BC$ with $DF:ON=\frac{2}{5}:\frac{1}{2}=4:5$, triangles $DPF$ and $NPO$ are similar, so $(DPF)=\frac{16}{25}(PON)$ and $(DPO)=(FPN)=\frac{4}{5}(PON)$.

$(DONF)=\frac{25}{2}-8=\frac{9}{2}$

$\therefore \frac{16}{25}(PON)+\frac{4}{5}(PON)+\frac{4}{5}(PON)+(PON)=\frac{9}{2}$, so $(PON)=\frac{25}{18}$ and $(DPF)=\frac{16}{25}\times\frac{25}{18}=\frac{8}{9}$.

$\therefore (ADPF)=8+\frac{8}{9}=\frac{80}{9}\ \text{m}^2$, and $(ADPF)-(PON)=\frac{80}{9}-\frac{25}{18}=\frac{15}{2}\ \text{m}^2$
https://janmr.com/blog/2015/01/typesetting-math-with-html-and-css-fractions/ | # janmr blog
## Typesetting Math Using HTML and CSS — Fractions (24 January 2015)
Currently, there is no best way of showing math on the web. An HTML5 standard exists, MathML, but unfortunately it doesn't have broad browser support. Instead, many alternatives exist, all with varying quality and speed.
I would like to explore how far you can get by using just HTML and CSS (including web fonts). My findings should be considered experimental and in no way authoritative.
This post will deal with one way of typesetting fractions, inspired by the approach taken by Khan Academy's KaTeX project.
Consider first the following layout:
before [8 stacked above 1234] after (rendered fraction demo)
which is obtained by the following HTML and CSS:
before<span class="math-box">
<span class="vstack">
<div style="top: 0.686em;">8</div>
<div style="top: -0.677em;">1234</div>
<span class="baseline-fix"></span>
</span>
</span>after
.math-box {
display: inline-block;
}
.math-box .vstack {
display: inline-block;
position: relative;
}
.math-box .vstack > div {
position: relative;
text-align: center;
height: 0;
}
.math-box .baseline-fix {
display: inline-table;
table-layout: fixed;
}
(Live demo.) Worth noting is:
• The two outermost elements span.math-box and span.vstack are inline-block elements, meaning that they are positioned as inline elements (aligning baselines, for one, which is very important here), but otherwise behave as block elements (able to contain other block elements).
• The divs will be stacked on top of each other because of their block style. Therefore the width of the enclosing span.vstack element will be the widest of the divs.
• The divs have height zero which means that, by themselves, they will be displayed on top of each other, all following the same baseline.
• The divs are positioned relatively, so the top property can position the elements correctly in the vertical direction.
• The .baseline-fix is necessary in Internet Explorer, since otherwise the elements outside .math-box will not be aligned correctly (in the vertical direction). I don't know exactly why this fix works.
But we would also like to display a horizontal line between the numerator and denominator, like so:
before [8 stacked above 1234, separated by a fraction line] after (rendered fraction demo)
We aim for this markup
before<span class="math-box">
<span class="vstack">
<div style="top: 0.686em;">8</div>
<div style="top: -0.677em;">1234</div>
<div style="top: -0.23em;"><span class="frac-line"></span></div>
<span class="baseline-fix"></span>
</span>
</span>after
but what should the style be for span.frac-line? Using something like display: inline-block; width: 100%; border-bottom: 1px solid black; will work, but how thick should the line be? Using something like 0.04em makes the line scale with the font size, but using a small font size can result in a line thickness less than 1 pixel (leading to the line disappearing or having a modified color). Here, KaTeX has a nice trick up its sleeve:
.math-box .frac-line {
width: 100%;
display: inline-block;
}
.math-box .frac-line:before {
display: block;
border-bottom-style: solid;
border-bottom-width: 1px;
content: "";
}
.math-box .frac-line:after {
display: block;
margin-top: -1px;
border-bottom-style: solid;
border-bottom-width: 0.04em;
content: "";
}
(Live demo.) Using the pseudo-elements :before and :after, two lines will be drawn on top of each other. This way the line will be 0.04em, but never less than 1px.
Now we have a complete fraction, but there is one remaining issue. We would like the outer-most box span.math-box to exactly enclose the inner elements, like this:
before [8 stacked above 1234, tightly enclosed by a red border] after (rendered fraction demo)
(If the fit is not perfect then see the comment in the final paragraph.) This will not be the case for the HTML/CSS presented above because of the relative positioning of the elements. The outer box will fit perfectly in the horizontal direction, but not in the vertical direction. A fix is to use a so-called strut, widely used in TeX:
before<span class="math-box" style="border: 1px solid red;">
<span class="strut" style="height: 2.008em; vertical-align: -0.686em;"></span>
<span class="vstack">
<div style="top: 0.686em;">8</div>
<div style="top: -0.677em;">1234</div>
<div style="top: -0.23em;"><span class="frac-line"></span></div>
<span class="baseline-fix"></span>
</span>
</span>after
.math-box .strut {
display: inline-block;
}
(Live demo.) A strut is just a zero-width element, which can be made to control the vertical extent both below and above the baseline.
This concludes the demonstration of how to typeset a fraction using HTML and CSS. Note one important thing here: The browser's layout engine will take care of all spacing and alignment in the horizontal direction, but you have to position everything yourself in the vertical direction. And to do that, you need precise information of how tall your font's characters are.
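As a rough sketch of how one might obtain such vertical metrics in the browser (my own addition, not part of the original workflow; it assumes the canvas TextMetrics properties actualBoundingBoxAscent/Descent are available):

// Measure the ascent/descent of a string, in em units, using the
// canvas 2D TextMetrics API.
const canvas = document.createElement("canvas");
const ctx = canvas.getContext("2d");
const fontSizePx = 16;
ctx.font = fontSizePx + "px serif"; // the font used for the math

function verticalExtent(text) {
  const m = ctx.measureText(text);
  return {
    ascentEm: m.actualBoundingBoxAscent / fontSizePx,   // height above the baseline
    descentEm: m.actualBoundingBoxDescent / fontSizePx, // depth below the baseline
  };
}

console.log(verticalExtent("1234")); // e.g. { ascentEm: 0.45, descentEm: 0 }

Such measurements could then feed the em values used in the top and vertical-align properties above.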
http://math.stackexchange.com/questions/217860/how-do-mathematicians-formally-prove-the-connected-sum-of-two-disks-is-homeomorp | # How do mathematicians formally prove the connected sum of two disks is homeomorphic to annulus and similar kinds?
I was tyring to read this fact that the connected sum of two disks is homeomorphic to annulus. By intutitive picture, it is obvious, but I wanted to it in a formalistic way. So I was reading many different textbooks, looking up the formal definition of connected sum, surfaces, etc..
The definition of connected sum includes the notion of homeomorphism. So when they prove the above statement formally, do they use various kinds of homeomorphism, and actually constrct a homeomorphism between the connected sum and annulus?
Anyway in general, if I want to study these things in a fully formalized context, what topics should I study? And could you refer me any rigorous textbook?
Thanks!
-
You probably won't find this in a textbook. It's more annoying than anything else. Best to be done as a self-imposed exercise if at all. What have you tried? – Qiaochu Yuan Oct 21 '12 at 7:47
Maybe easier to see it is a cylinder? The connected sum by definition here is gotten by gluing two annuli along their inner circles. Simply parameterize one by $S^1\times [0,1/2]$ and the other by $S^1\times [1/2,1]$, so that after attaching you get $S^1\times [0,1]$. – user641 Oct 21 '12 at 9:28
@QiaochuYuan I havne't really tried anything because I couldn't do anything much. (And I'm very new to this topic.) I was looking at all the algebraic textbooks but they only demonstrate this by pictorial arguments. So I was wondering if there is any formalized proof of that. – julypraise Oct 21 '12 at 9:51
@SteveD Thanks for the insight. I will kepp in mind what you said. But I don't think I can do what you've instructed at this stage; I've met this area only recently. I was just wondring if they actually formally prove this and if they do, they are in some textbooks. – julypraise Oct 21 '12 at 9:54
This is one of those things that's pretty obvious, but it's a bit of a hassle to actually construct anything more than a rought hand waving "proof" of it. I'd recommend looking at Allen Hatcher's textbook, which can be found here on his website: math.cornell.edu/~hatcher/AT/ATchapters.html. The methods he covers are suited to showing that spaces are or are not homotopy equivalent, or that they aren't homeomorphic, but in my experience it seems to be the standard text for an introduction to algebraic topology – user123123 Oct 21 '12 at 12:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5789315700531006, "perplexity": 433.9747931211251}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510271654.40/warc/CC-MAIN-20140728011751-00080-ip-10-146-231-18.ec2.internal.warc.gz"} |
https://www.chilimath.com/lessons/advanced-algebra/inverse-of-exponential-function/ | # Finding the Inverse of an Exponential Function
I will go over three examples in this tutorial showing how to determine algebraically the inverse of an exponential function. But before you take a look at the worked examples, I suggest that you review the suggested steps below first in order to have a good grasp of the general procedure.
## Steps to Find the Inverse of an Exponential Function
STEP 1: Change $f\left( x \right)$ to $y$.
$\large{f\left( x \right) \to y}$
STEP 2: Interchange $\color{blue}x$ and $\color{red}y$ in the equation.
$\large{x \to y}$
$\large{y \to x}$
STEP 3: Isolate the exponential expression on one side (left or right) of the equation.
A generic exponential expression has the form $\large{b^N}$, where $b$ is the base, while $N$ is the exponent.
STEP 4: Eliminate the base $b$ of the exponential expression by taking the logarithms of both sides of the equation.
• To make the simplification much easier, take the logarithm of both sides using the base of the exponential expression itself.
• Using the log rule ${\log _b}\left( {{b^k}} \right) = k$, the base $b$ is eliminated, leaving just the exponent.
STEP 5: Solve the exponential equation for $\color{red}y$ to get the inverse. Finally, replace $\color{red}y$ with the inverse notation ${f^{ - 1}}\left( x \right)$ to write the final answer.
Replace $y$ with ${f^{ - 1}}\left( x \right)$
Let’s apply the suggested steps above to solve some problems.
### Examples of How to Find the Inverse of an Exponential Function
Example 1: Find the inverse of the exponential function below.
This should be an easy problem because the exponential expression on the right side of the equation is already isolated for us.
Start by replacing the function notation $f\left( x \right)$ by $y$.
The next step is to switch the variables $\color{red}x$ and $\color{red}y$ in the equation.
Since the exponential expression is by itself on one side of the equation, we can now take the logarithms of both sides. When we get the logarithms of both sides, we will use the base of $\color{blue}2$ because this is the base of the given exponential expression.
Apply the Log of Exponent Rule which is ${\log _b}\left( {{b^k}} \right) = k$ as part of the simplification process. The rule states that the logarithm of an exponential number where its base is the same as the base of the log is equal to the exponent.
We are almost done! Solve for $y$ by adding $5$ to both sides and then dividing the equation by the coefficient of $y$, which is $3$. Don't forget to replace $y$ with ${f^{ - 1}}\left( x \right)$. This means that we have found the inverse function.
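The worked equations in the original are images that did not survive extraction. Assuming the function was $f\left( x \right) = 2^{3x - 5}$ (an assumption, but one that matches every step described: base $2$, add $5$, divide by $3$), the algebra would run:

$y = 2^{3x - 5} \Rightarrow x = 2^{3y - 5} \Rightarrow {\log _2}x = 3y - 5 \Rightarrow {f^{ - 1}}\left( x \right) = \frac{{\log _2}x + 5}{3}$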
If we graph the original exponential function and its inverse on the same $XY-$ plane, they must be symmetrical along the line $\large{\color{blue}y=x}$. Which they are!
Example 2: Find the inverse of the exponential function below.
The only difference of this problem from the previous one is that the exponential expression has a denominator of $2$. Other than that, the steps will be the same.
We change the function notation $f\left( x \right)$ to $y$, followed by interchanging the roles of $\color{red}x$ and $\color{red}y$ variables.
At this point, we can’t perform the step of taking the logarithms of both sides just yet. The reason is that the exponential expression on the right side is not fully by itself. We first have to get rid of the denominator $2$.
We can accomplish that by multiplying both sides of the equation by $2$. The left side becomes $2x$ and the denominator on the right side is gone!
By isolating the exponential expression on one side, it is now possible to get the logs of both sides. When you do this, always make sure to use the base of the exponential expression as the base of the logarithmic operations.
In this case, the base of the exponential expression is $5$. Therefore, we apply log operations on both sides using the base of $5$.
Using this log rule, ${\log _b}\left( {{b^k}} \right) = k$, the fives will cancel out, leaving the exponent $\color{blue}{4y+1}$ on the right side of the equation after simplification. This is great since the log portion of the equation is gone.
We can now finish this up by solving for the variable $y$, then replacing that by ${f^{ - 1}}\left( x \right)$ to denote that we have obtained the inverse function.
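Again, the original equations are images; assuming the function was $f\left( x \right) = \frac{5^{4x+1}}{2}$ (which fits the described steps: denominator $2$, base $5$, exponent $4y+1$ after the swap), the algebra would be:

$x = \frac{5^{4y+1}}{2} \Rightarrow 2x = 5^{4y+1} \Rightarrow {\log _5}\left( 2x \right) = 4y + 1 \Rightarrow {f^{ - 1}}\left( x \right) = \frac{{\log _5}\left( 2x \right) - 1}{4}$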
As you can see, the graphs of the exponential function and its inverse are symmetrical about the line $\large{\color{green}y=x}$.
Example 3: Find the inverse of the exponential function below.
I see that we have an exponential expression being divided by another. The good thing is that the exponential expressions have the same base of $3$. We should be able to simplify this using the Division Rule of Exponents. To divide exponential expressions having equal bases, copy the common base and then subtract their exponents: $\frac{b^m}{b^n} = b^{m-n}$, under the assumption that $b \ne 0$.
Observe how the original problem has been greatly simplified after applying the Division Rule of Exponent.
At this point, we can proceed as usual in solving for the inverse. Rewrite $f\left( x \right)$ as $y$, followed by interchanging the variables $\color{red}x$ and $\color{red}y$.
Before we can get the logs of both sides, isolate the exponential portion of the equation by adding $4$ to both sides.
Since the exponential expression is using base $3$, we take the logs of both sides of the equation with base $3$ as well! By doing so, the exponent $\color{blue}2y-1$ on the right side will drop, so we can continue on solving for $y$ which is the required inverse function.
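The original equations here are images as well; assuming the simplified function was $f\left( x \right) = 3^{2x-1} - 4$ (base $3$, a subtracted $4$, and exponent $2y-1$ after the swap, consistent with the steps above), the algebra would be:

$x = 3^{2y-1} - 4 \Rightarrow x + 4 = 3^{2y-1} \Rightarrow {\log _3}\left( x+4 \right) = 2y - 1 \Rightarrow {f^{ - 1}}\left( x \right) = \frac{{\log _3}\left( x+4 \right) + 1}{2}$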
It verifies that our answer is correct because the graph of the given exponential functions and its inverse (logarithmic function) are symmetrical along the line $\large{y=x}$.
You might also be interested in:
Inverse of a 2×2 Matrix
Inverse of Absolute Value Function
Inverse of Constant Function
Inverse of Linear Function
Inverse of Logarithmic Function
Inverse of Quadratic Function
Inverse of Rational Function
Inverse of Square Root Function
https://cstheory.stackexchange.com/questions/11768/oracle-complexity-of-a-problem-in-the-counting-hierarchy | # Oracle complexity of a problem in the Counting Hierarchy
In "On The Complexity of Numerical Analysis" (SIAM J. Comp. Vol. 38, 2009), Allender et al. introduce the problem of PosSLP and show that its complexity lies in the counting hierarchy, and more precisely in $P^{\mathit{PP}^{\mathit{PP}^{\mathit{PP}}}}$.
I have a problem, call it $X$, that I have shown can be solved in $\mathit{NP}^{\mathit{PosSLP}}$. Can I correctly conclude that $X$ lies in $\mathit{NP}^{\mathit{PP}^{\mathit{PP}^{\mathit{PP}}}}$?
• I wonder why would anyone upvote this question. It is a typical example of a bad question: you do not state what you understand and what you do not. Because of this, people cannot post any meaningful answer. Kristoffer Arnsfelt Hansen’s answer just repeats what you wrote in the question with one additional word “Yes.” Jun 24 '12 at 0:50
• Just for the record: I decided to answer this basic question, since I saw a very misleading answer was given and it was even voted up. Jun 24 '12 at 17:09
Yes. Each time your $\sf NP$ machine wants to query the $\sf PosSLP$ oracle, simply simulate the polynomial-time oracle Turing machine underlying the inclusion $\sf PosSLP \subseteq {P}^{{PP}^{PP^{PP}}}$, passing its oracle queries to the $\sf {PP}^{PP^{PP}}$ oracle.
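In symbols, this is the standard absorption of a polynomial-time oracle machine into the base machine (stated here for clarity):

$\mathsf{NP}^{\mathsf{PosSLP}} \subseteq \mathsf{NP}^{\mathsf{P}^{\mathsf{PP}^{\mathsf{PP}^{\mathsf{PP}}}}} = \mathsf{NP}^{\mathsf{PP}^{\mathsf{PP}^{\mathsf{PP}}}}$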
• Thanks, this is what I thought but wanted to make sure I was understanding the oracle definitions properly.
– Joel
Jun 22 '12 at 9:34
Let's take this step by step, so we are not confused by that small tower of oracles.
$X \in NP^{PosSLP}$. Since $PosSLP \in P^{PP^{PP^{PP}}}$, we derive that $X \in NP^{P^{PP^{PP^{PP}}}}$. We want to prove that $X \in NP^{PP^{PP^{PP}}}$.
Since $NP \subseteq NP^{P}$ and due to oracle properties, adding the power of the same oracles to both hands of the relation preserves it. Therefore we have: $NP^{PP^{PP^{PP}}} \subseteq NP^{P^{PP^{PP^{PP}}}}$.
So yes, the problem is in $NP^{P^{PP^{PP^{PP}}}}$.
• “adding the power of the same oracles to both hands of the relation preserves it.” This reasoning is incorrect. If it were true, we would already know P≠NP because there is a language A such that P^A≠NP^A (the Baker-Gill-Solovay theorem). But a famous counterexample in complexity theory is that IP=PSPACE but IP^A≠PSPACE^A for some language A. Jun 24 '12 at 0:46
https://teedoc.neucrack.com/get_started/en/usage/start.html | # Start writing documents
## Build and Preview
Execute the following in the document directory that contains site_config.json:
teedoc serve
When Starting server at 0.0.0.0:2333 .... is displayed, the server is up.
Open the browser to visit: http://127.0.0.1:2333
You can modify files in real time: after a file is saved, the site is rebuilt automatically (after a 3-second delay by default) and the browser refreshes itself.
The automatic refresh delay can be changed with the -t parameter; for example, teedoc -t 0 serve sets a 0-second delay.
It can also be set in the document configuration; see the description of the configuration parameter rebuild_changes_delay below.
If you only need to build and generate HTML pages, you only need to execute
teedoc build
Note that if you are generating the release version of the document, you must use the build command to generate the website pages. Pages generated by the serve command are only suitable for local preview; they contain extra preview-related code and are not suitable for production deployment.
In addition, you can also specify the parameter -d or --dir to specify the document directory, so that you do not need to execute commands under the document directory, such as
teedoc -d /home/teedoc/my_doc build
## Deleting the build output
The built document is placed in the out directory. The program will not delete it on its own; if you need to clear it, delete it manually.
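For example, on Linux or macOS this is a one-liner (assuming the default out directory inside the current document folder):

rm -rf out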
## Document directory structure
├─.github
├─docs
│ ├─develop
│ │ ├─en
│ │ └─zh
│ └─get_started
│ ├─assets
│ ├─en
│ └─zh
├─pages
│ └─index
│ ├─en
│ └─zh
├─static
│
└─site_config.json
• .github: automatic build scripts; how to use them is covered in later chapters
• docs: documents; contains multiple documents, each in its own folder
• pages: pages, including the homepage, the 404 page, etc.
• static: static file folder, e.g. for storing pictures
• site_config.json: website configuration file
• config.json: in addition to site_config.json, each document directory can have a config.json to configure that document's pages
• sidebar.json: document directory (sidebar)
Before looking at how to use the configuration files, keep one thing in mind: they are very simple. There are only two kinds of configuration file: the single site-wide site_config, and each document's own config.
## Configuration file
A configuration file can be in json or yaml format; choose the one you like. If you haven't used either before, don't be afraid; a 10-minute tutorial is enough to learn it.
If your document directory has a lot of content, the yaml format is recommended, as it looks more concise.
teedoc provides commands for converting between the json and yaml formats:
### From json to yaml
teedoc -f ./config.json json2yaml
### From yaml to json
teedoc -f ./config.json yaml2json
### From gitbook SUMMARY.md to json
teedoc -f ./SUMMARY.md summary2json
### From gitbook SUMMARY.md to yaml
teedoc -f ./SUMMARY.md summary2yaml
## site_config.json Site configuration
Website configuration items, such as website name, page routing, plug-in configuration, etc.
The sample configuration file below looks like it has many configuration items. Don't be scared; it is actually very simple, and once you have mastered the main configuration items, the rest is easy.
The configuration file is in json format, for example:
{
"site_name": "teedoc",
"site_slogon": "happy to write",
"site_root_url": "/",
"site_domain": "teedoc.github.io",
"site_protocol": "https",
"config_template_dir": "./",
"source": "https://github.com/teedoc/teedoc.github.io/blob/main",
"route": {
"docs": {
"/get_started/zh/": "docs/get_started/zh",
"/develop/zh/": "docs/develop/zh",
},
"pages": {
"/": "pages/index/zh",
},
"assets": {
"/static/": "static",
"/get_started/assets/": "docs/get_started/assets"
},
"/blog/": "blog"
},
"translate": {
"docs": {
"/get_started/zh/": [ {
"url": "/get_started/en/",
"src": "docs/get_started/en"
}
],
"/develop/zh/": [ {
"url": "/develop/en/",
"src": "docs/develop/en"
}
]
},
"pages": {
"/": [ {
"url": "/en/",
"src": "pages/index/en"
}
]
}
},
"executable": {
"python": "python3",
"pip": "pip3"
},
"plugins": {
"teedoc-plugin-markdown-parser":{
"from": "pypi",
"config": {
}
},
"teedoc-plugin-theme-default":{
"from": "pypi",
"config": {
"dark": true,
"env":{
"main_color": "#4caf7d"
},
"css": "/static/css/custom.css",
"js": "/static/js/custom.js"
}
}
}
}
• site_name: site name
• site_slogon: website slogan
• site_root_url: website root directory path; the default value / is usually fine. If the generated content will live in a sub-folder of the website rather than its root, set this to that sub-path
• site_domain: website domain name; currently used when generating sitemap.xml and robots.txt
• site_protocol: website protocol, http or https; currently used when generating sitemap.xml and robots.txt
• config_template_dir: configuration template directory; config.json or config.yaml in other document directories can import the files in it. The default location is the directory where site_config is located
• source: document source path, such as https://github.com/teedoc/teedoc.github.io/blob/main, where main is the main branch of the document. An Edit this page button (link) will be added to each document page; clicking it jumps to the source of the corresponding file. Leave it empty to add no link. In addition, you can add "show_source": "Edit this page" in config.json to define the text content of the button; if you want a document to have no such button, set "show_source": false. You can also add show_source: edit this page or show_source: false in the header information of a file (md or ipynb) to set it per page
• route: web page routing, covering documents, pages and resource files. For example, the routing of documents:
"docs": {
"/get_started/zh/": "docs/get_started/zh",
"/get_started/en/": "docs/get_started/en",
"/develop/zh/": "docs/develop/zh",
"/develop/en/": "docs/develop/en"
},
The key represents the url of the document in the final generated website, and the following value is the corresponding source document path.
For example, the source document docs/get_started/zh/README.md will generate the file out/get_started/zh/index.html after the build. Files that are not md files (i.e. unsupported files) are copied over unchanged, and the final out directory is the generated website.
The same is true for pages; assets are not converted at all and are simply copied to the corresponding directory.
• translate: translation settings; specify the url and file path of each translated version of a document. The translated version needs its own config and sidebar files under its path, and locale must be set in its config file to specify the language of the translated document, e.g. zh, zh_CN, zh_TW for Chinese, or en, en_US for English. The translated sidebar and document paths must mirror the source document. Untranslated files can simply be omitted; when a user visits a page that has not been translated, they are redirected to no_tanslate.html to indicate that no translation exists. For more details, please see Internationalization i18n
• executable: executable program settings; here you can set the executable names for python and pip, which are used when installing plug-ins
• plugins: plug-in configuration, consisting of a name, a source, and configuration items.
For the name, you can search github for teedoc-plugin to find open-source plug-ins. You are also welcome to participate in writing plug-ins (you only need Python);
For the from field, fill in pypi; if the plug-in has been downloaded locally, you can fill in the folder path instead, or a git path such as git+https://github.com/*****/******.git
The configuration items are determined by the specific plug-in. For example, teedoc-plugin-theme-default has a dark option to choose whether to enable the dark theme
• rebuild_changes_delay: how many seconds to wait after detecting file changes before automatically regenerating the documents (the browser then refreshes the page automatically). The default is 3 seconds and the minimum is 0; teedoc -t 3 serve or teedoc --delay serve overrides this setting
• robots: customize the content of robots.txt, which has an impact on SEO. For example, "User-agent": "*" allows all clients to crawl; to disallow crawling of JPEG pictures: "Disallow": "/.jpeg$"; to disallow access to the admin directory: "Disallow": "/admin". The format requirements are the same as for robots.txt
• layout_root_dir: the root directory of layout templates; defaults to layout, i.e. layout template files are looked up under this folder automatically
• layout_i18n_dirs: the internationalization (translation) directories for layouts; can be a single path such as locales or, in special cases, multiple paths such as ["locales1", "locales2"]. The file contents can be written by referring to i18n
## config.json document configuration
This is the configuration for each document, placed in the root directory of that document, such as docs/get_started/zh/config.json. The documents are independent of each other; use identical settings if you want the website navigation bar to stay consistent.
Here you can configure each document's navigation bar and footer content, and also set plug-in config items. Settings in the current document override those in site_config.json, which makes it possible to have documents in different languages (internationalization/i18n) or with different styles.
such as:
{
"import": "config_zh",
"id": "teedoc_page",
"class": "language_zh",
"locale": "en_US",
"navbar": {
"title": "teedoc",
"logo": {
"alt": "teedoc logo",
"src": "/static/image/logo.png"
},
"home_url": "/",
"items": [
{
"url": "/get_started/zh/",
"label": "Installation and Use",
"position": "left"
},
{
"url": "/develop/zh/",
"label": "Development",
"position": "left"
},
{
"url": "https://github.com/neutree/teedoc",
"label": "github",
"target": "_blank",
"position": "right"
},
{
"label": "Language: ",
"position": "right",
"items": [
{
"url": "/get_started/zh/",
"label": "Chinese"
},
{
"url": "/get_started/en/",
"label": "English"
}
]
}
]
},
"footer":{
"top":[
{
"items": [
{
"label": "Use teedoc to build",
"url": "https://github.com/neutree/teedoc",
"target": "_blank"
},
{
"url": "https://neucrack.com",
"target": "_blank"
}
]
},
{
"label": "Source",
"items": [
{
"label": "github",
"url": "https://github.com/neutree/teedoc",
"target": "_blank"
},
{
"label": "Source files of this website",
"url": "https://github.com/teedoc/teedoc.github.io",
"target": "_blank"
}
]
}
],
"bottom": [
{
"label": "*ICP备********号-1",
"url": "https://beian.miit.gov.cn",
"target": "_blank"
},
{
"label": "*Public Network Security No. ************",
"url": "https://beian.miit.gov.cn/#/Integrated/index",
"target": "_blank"
}
]
},
"plugins": {
"teedoc-plugin-search":{
"config": {
"search_hint": "Search",
"input_hint": "Enter keywords, separate multiple keywords with spaces",
"other_docs_result_hint": "Results from other documents",
"curr_doc_result_hint": "Current document search result"
}
}
}
}
• import: you can import configuration from a template file, given as a file name without suffix. For example, if site_config sets config_template_dir to ./, then "import": "config_zh" here means importing config_zh.json (takes priority) or config_zh.yaml from the same directory as site_config.
You can then add the current document's configuration on top of it, overriding the template file: identical keywords replace the template's content. For arrays (lists), to replace an array item of the template file you must add an id keyword to that item in the template and then modify it; if no id keyword is matched, the item is appended to the array. For example, with the template file config_zh:
{
"locale": "en_US",
"navbar": {
"title": "teedoc",
"items": [
{
"url": "/get_started/zh/",
"label": "安装使用",
"position": "left"
},
{
"id": "language",
"label": "Language: ",
"position": "right",
"items": [
{
"url": "/zh",
"label": "中文"
},
{
"url": "/en",
"label": "English"
}
]
}
]
}
}
The configuration file of a specific document:
{
"import": "config_zh",
"navbar": {
"title": "teedoc123",
"items": [
{
"id": "language",
"label": "Language: ",
"position": "right",
"items": [
{
"url": "/get_started/zh",
"label": "中文"
},
{
"url": "/get_started/en",
"label": "English"
}
]
}
]
}
}
• id: the id of the document; generally there is no need to set it. The id is applied to the <html> tag of every page under the directory of this config.json. For example, if teedoc_page is set here, every page in this directory becomes <html id="teedoc_page"> ... </html>. If a markdown file also sets id, this value is overridden; each page can have only one id.
• class: the class of the document; generally you don't need to set it either. The class is applied to the <html> tags of all pages in the config.json directory, with multiple classes separated by spaces. For example, if language_zh is set here, all pages in this directory become <html class="language_zh"> ... </html>. If class is set in a markdown file, it is appended: with language_zh in config.json and class: zh_readme in README.md, the final result is class="language_zh zh_readme". This makes it convenient to customize the style of individual pages or of different documents.
• locale: locale code, which can be found here, for example zh, zh_CN, en_US, ja, etc. It can also be obtained with the babel program:
pip install babel
pybabel --list-locales
• navbar: navigation bar settings. Each document can set up its navigation bar individually; if you want the whole website to stay uniform, make each configuration the same. The keyword type is used at the first level to indicate the category of a navbar item. Its values are:
• link: a normal link; this is the default when the type keyword is omitted
• list: has sub-items, which are displayed as a drop-down menu
• selection: a single choice, such as a language selection. When the type keyword is omitted but an items keyword is present, this is the default
• language: if translate is set in site_config, the items of a language-type entry are filled in automatically from the language list, so we don't need to write the language list manually! The effect is the same as selection (internally, the language type is simply replaced by selection)
• footer: the website footer, divided into top and bottom parts; multiple columns can be added to the top part, and each column can hold multiple entries
• plugins: plug-in configuration items; anything already set in site_config.json is overridden here, i.e. the child config has higher priority
• show_source: provided the keyword source is set in site_config.json, this is the source code path of the document, such as https://github.com/teedoc/teedoc.github.io/blob/main, where main is the main branch of the document. An Edit this page button (link) is added to the document page; clicking it jumps to the corresponding source file. Set "show_source": "Edit this page" to define the text content of the button (the default is Edit this page); if you want the document to have no such button, set "show_source": false. You can also add show_source: edit this page or show_source: false in the header information of a file (md or ipynb) to set it per page
## sidebar.json Document directory (sidebar) settings
There is a directory (sidebar) configuration for each document, one per document, independent of each other.
File paths are relative: just fill in the file name; README.md is automatically converted to index.html.
Alternatively, you can give a url directly instead of a file path, such as "url": "/get_started/zh/"; at the same time you can set "target": "_blank" to open the link in a new window, otherwise it opens in the current window.
For entries at the first level of items, an entry with only a label (and no url, file or items) adds a category heading to the sidebar.
You can also add the option "collapsed": false to an entry to show its sub-directory expanded by default.
such as:
items:
- label: Introduction to teedoc
- label: Install teedoc
- label: Start writing document
file: usage/start.md
- label: Plugin
collapsed: false
items:
- label: Theme Plugin
file: plugins/themes.md
- label: Other plugins
file: plugins/others.md
- label: markdown syntax
file: syntax/syntax_markdown.md
- label: Website using teedoc
file: usage/sites.md
- label: More samples
items:
- label: Second-level subdirectory example
items:
- label: Sample three-level sub-directory
items:
- label: Article 1
file: more/example_docs/doc1.md
- label: Article 2
file: more/example_docs/doc2.md
- label: This is a link
url: https://github.com/teedoc/teedoc
target: _blank
or in json format:
{
"items":[
{
"label": "Introduction to teedoc",
},
{
"label": "Install teedoc",
},
{
"label": "Start writing document",
"file": "usage/start.md"
},
{
"label": "Plugin",
"collapsed": false,
"items":[
{
"label": "Theme Plugin",
"file": "plugins/themes.md"
},
{
"label": "Other plugins",
"file": "plugins/others.md"
}
]
},
{
"label": "markdown syntax",
"file": "syntax/syntax_markdown.md"
},
{
"label": "Website using teedoc",
"file": "usage/sites.md"
},
{
"label": "More samples",
"items":[
{
"label": "Second-level subdirectory example",
"items":[
{
"label": "Sample three-level sub-directory",
"items":[
{
"label": "Article 1",
"file": "more/example_docs/doc1.md"
}
]
},
{
"label": "Article 2",
"file": "more/example_docs/doc2.md"
}
]
},
{
http://mathhelpforum.com/latex-help/172504-test-print.html | # Test
• February 24th 2011, 03:09 PM
robertog
Test
$\begin{array}{l}
\frac{{ - b \pm \sqrt {{b^2} - 4ac} }}{{2a}}\\
\frac{{n!}}{{r!\left( {n - r} \right)!}}\\
\frac{1}{2}
\end{array}$
• February 24th 2011, 10:04 PM
CaptainBlack
Quote:
Originally Posted by robertog
$\begin{array}{l}
\frac{{ - b \pm \sqrt {{b^2} - 4ac} }}{{2a}}\\
\frac{{n!}}{{r!\left( {n - r} \right)!}}\\
\frac{1}{2}
\end{array}$
You might find centred {c} looks better than left-justified {l} for this. Also, if you start with \dfrac you will get larger fractions, and it would benefit from blank lines between the equations:
$\displaystyle \begin{array}{c}
\dfrac{{ - b \pm \sqrt {{b^2} - 4ac} }}{{2a}}\\ \\
\dfrac{{n!}}{{r!\left( {n - r} \right)!}}\\ \\
\dfrac{1}{2}
\end{array}$
• March 10th 2011, 02:58 PM
wsc810
$b^2 + 4ac$
https://studyadda.com/question-bank/probability_q15/4612/363123 | • The speeds of vehicles travelling along a section of highway were recorded and displayed in the frequency diagram alongside. What is the probability that a vehicle was travelling at a speed between 60 kmph and 100 kmph? A) $\frac{2}{7}$ B) $\frac{3}{7}$ C) $\frac{11}{14}$ D) $\frac{1}{7}$
(c): Number of vehicles travelling between 60 and 100 kmph $=275=n\left( E \right)$; total vehicles $=350=n\left( S \right)$; $P(E)=\frac{n(E)}{n(S)}=\frac{275}{350}=\frac{11}{14}$
https://www.qb365.in/materials/stateboard/9th-standard-maths-english-medium-free-online-test-1-mark-questions-with-answer-key-2020-2021-part-6-882.html | #### 9th Standard Maths English Medium Free Online Test 1 Mark Questions with Answer Key 2020 - 2021 Part - 6
9th Standard
Maths
Time : 00:15:00 Hrs
Total Marks : 15
Part A
15 x 1 = 15
1. The set $(A - B) \cup (B - A)$ is ___________
(a) $A \Delta B$ (b) $A \cup B$ (c) $A \cap B$ (d) $A' \cup B'$

2. For any three sets P, Q and R, $P-(Q \cap R)$ is
(a) $P-(Q \cup R)$ (b) $(P \cap Q)-R$ (c) $(P-Q) \cup (P-R)$ (d) $(P-Q) \cap (P-R)$

3. Find the odd one out of the following
(a) $\sqrt { 32 } \times \sqrt { 2 }$ (b) $\frac { \sqrt { 27 } }{ \sqrt { 3 } }$ (c) $\sqrt { 72 } \times \sqrt { 8 }$ (d) $\frac { \sqrt { 54 } }{ \sqrt { 18 } }$

4. If a number has a non-terminating and non-recurring decimal expansion, then it is ______________
(a) a rational number (b) a natural number (c) an irrational number (d) an integer

5. The type of the polynomial $4-3x^3$ is
(a) constant polynomial (b) linear polynomial (c) quadratic polynomial (d) cubic polynomial

6. Which of the following is a trinomial?
(a) $-7z$ (b) ${ z }^{ 2 }-{ 4y }^{ 2 }$ (c) ${ x }^{ 2 }y-{ xy }^{ 2 }+y$ (d) $12a-9ab+5b-3$

7. If one of the factors of $x^2-6x-16$ is $x - 8$, then the other factor is
(a) $(x + 6)$ (b) $(x - 2)$ (c) $(x + 2)$ (d) $(x - 16)$

8. Which of the following is a solution of the equation $2x - y = 6$?
(a) (2, 4) (b) (4, 2) (c) (3, −1) (d) (0, 6)

9. One angle of a parallelogram is a right angle. The name of the quadrilateral is ______
(a) square (b) rectangle (c) rhombus (d) kite

10. A chord is at a distance of 15 cm from the centre of a circle of radius 25 cm. The length of the chord is
(a) 25 cm (b) 20 cm (c) 40 cm (d) 18 cm

11. If the points A(2, 0), B(−6, 0), C(3, a−3) lie on the x-axis, then the value of a is _____
(a) 0 (b) 2 (c) 3 (d) −6

12. Data available in an unorganized form is called ------------- data
(a) grouped data (b) class interval (c) mode (d) raw data

13. If $2\sin 2\theta = \sqrt { 3 }$, then the value of $\theta$ is
(a) 90° (b) 30° (c) 45° (d) 60°

14. If the lateral surface area of a cube is 600 cm², then the total surface area is
(a) 150 cm² (b) 400 cm² (c) 900 cm² (d) 1350 cm²

15. The probability based on the concept of relative frequency theory is called
(a) Empirical probability (b) Classical probability (c) Both (1) and (2) (d) Neither (1) nor (2)
https://www.koreascience.or.kr/search.page?keywords=BDI+SCALE | • Title/Summary/Keyword: BDI SCALE
### The correlation between post-stroke depression and the recovery rate of motor functions
• Park, Se-Jin; Park, Sang-Dong; Lee, Jeong-Hun
• Journal of Oriental Neuropsychiatry, v.13 no.2, pp.101-106, 2002
• Objectives: The degree of post-stroke depression was observed and then correlated with the recovery rate of the motor functions of the treated stroke patients. Methods: The BDI scale (Beck Depression Inventory scale) and motor grades of 50 diagnosed stroke patients, hospitalized in Dong-Seo Oriental Hospital between May 2002 and September 2002, were measured. After a 1-month recovery period the BDI scale and motor grade of the above-mentioned patients were measured again and a correlation was observed. Results: A lower BDI scale was observed in patients with a higher motor-grade recovery rate. Conclusion: The treatment of post-stroke depression is imperative for positive effects on the motor functions of stroke patients.
### Self-care, Family Support and Depression in Elderly Patients with Diabetes Mellitus (노인 당뇨병 환자의 자가간호, 가족지지, 우울)
• Park, Kee-Sun;Moon, Jung-Soon;Park, Sun-Nam
• Journal of Korean Academy of Fundamentals of Nursing, v.16 no.3, pp.345-352, 2009
• Purpose: This study was done to investigate the degree of self-care, family support, and depression, and the relationships among these variables, for elders with diabetes mellitus. Method: Participants were 202 diabetic patients, 65 years or over, living in Seoul, Korea. Data were collected using the self-care tool for diabetic patients by Kim (1996), the family support tool for diabetic patients by Park (1984), and Korea's BDI scale by Lee (1995). Results: Of the patients, 43.1% showed HbA1c levels higher than 7%. The highest mean score was for medication compliance, and the lowest for blood glucose testing compliance. Factors affecting self-care were employment, education, HbA1c level, diabetic self-care education, and complications. Factors affecting family support were living with family, diabetic self-care education, hospitalization, and complications. Factors affecting depression were gender, living with family, and complications. All of these factors were significant. Patients experiencing depression were 16.8% of the sample. There was a significant positive correlation between self-care and family support, and significant negative correlations between self-care and depression, and between family support and depression. Conclusion: For more effective management of diabetes mellitus in elders, improvements in self-care compliance and family support are needed.
### Influencing Factors on Antenatal Depression (산전우울의 영향요인)
• Kim, Hae-Won;Jung, Yeon-Yi
• Korean Journal of Women Health Nursing, v.16 no.2, pp.95-104, 2010
• Purpose: This study examined the influencing factors on antenatal depression among pregnant women. Methods: This was a cross-sectional descriptive study with 255 pregnant women who visited a general hospital in a metropolitan city for their regularly scheduled check-up. Measurement tools employed were the Korean version of the Beck Depression Inventory (BDI), a food habits scale, and the Pittsburgh Sleep Quality Index (PSQI). Socio-demographic variables and the status of high risk pregnancy were identified. Influencing factors on antenatal depression were identified using a stepwise multiple regression analysis. Results: The mean score of antenatal depression was $7.2{\pm}5.0$; 18.4% of the women had mild depression, 5.9% had moderate depression, and 0.8% were identified with severe depression on the BDI scale. Influencing factors on antenatal depression accounted for 47.8% of the total variance, which consisted of quality of sleep, marital satisfaction, food habits, gestation period, sexual satisfaction, high risk pregnancy, and age. Conclusion: Findings show that antenatal depression should be monitored on a regular basis during early pregnancy and in high risk pregnancy if possible, and quality of sleep and food habits should be incorporated in the management of antenatal depression.
### Factors associated with Postpartum Depression and Its Influence on Maternal Identity (산후우울의 영향요인과 모성 정체성과의 관련성)
• Jung, Yoen Yi;Kim, Hae Won
• Korean Journal of Women Health Nursing, v.20 no.1, pp.29-37, 2014
• Purpose: This study aimed to examine the factors associated with postpartum depression and its influence on the maternal identity of postpartum women. Methods: The research design was a cross-sectional descriptive study with a total of 89 women within the six-month postpartum period. Associations of eating habits, overall sleep quality, and other factors with postpartum depression were assessed utilizing the Korean Beck Depression Inventory (K-BDI). The influence of postpartum depression on maternal identity was analyzed. Variables yielding significant associations (p<.05) were included in an adjusted logistic regression and a stepwise multiple regression. Results: The mean postpartum depression score was $9.42{\pm}6.08$; 31.5% (n=28) had mild depression, 11.2% (n=10) moderate, and 4.5% (n=4) severe depression on the K-BDI scale. Perceived health status and overall sleep quality were predictors of postpartum depression. Postpartum depression and the husband's love were predictors of maternal identity. Conclusion: Awareness of poor health perception and sleep quality will be helpful in detecting postpartum depression. Strategies to increase maternal identity during the postpartum period should be tailored to the level of depression.
### Cytoprotective action of Rubi Fructus by modulation of Reactive Oxygen Species, peroxynitrite and $Ca^{2+}$ (복분자(覆盆子)의 세포내 ROS, $ONOO^-$ 생성 및 $Ca^{2+}$ 증가 억제에 의한 혈관내피세포 보호작용)
• Lee, Cheol-Woong;Jeong, Ji-Cheon
• The Journal of Internal Korean Medicine, v.26 no.3, pp.615-625, 2005
### Effects of Banhahubak-tang(Banxiahoupotang) on patients with poststroke depression (중풍후우울증에 대한 반하후박탕의 유효성 및 적응증 평가)
• Jung, Jae-Han;Choi, Chang-Min;Hong, Jin-Woo;Kim, Tae-Hun;Rhe, Jun-Woo;Lee, Cha-Ro;Bahn, Geon-Ho;Jung, Woo-Sang;Moon, Sang-Kwan;Bae, Hyung-Sup;Na, Byong-Jo
• The Journal of Internal Korean Medicine, v.26 no.3, pp.563-574, 2005
• Objectives : Poststroke depression is a frequent and specific entity that impairs the rehabilitation and functional recovery of patients with hemiplegia. The authors evaluated the effect of Banhahubak-tang (Banxiahoupotang) in patients with poststroke depression. Methods : 38 patients suffering from poststroke depression (determined by the Diagnostic and Statistical Manual of Mental Disorders, revised 3rd edition, and a Beck Depression Inventory [BDI] cutoff point $\geq 10$) in Kyunghee Oriental Hospital were randomized into two groups: a treatment group (n=19) and a control group (n=19). The treatment group was prescribed Banhahubak-tang (Banxiahoupotang) three times a day for a week. The control group was prescribed other herbal medicines used for stroke patients three times a day for a week. Patients were evaluated by use of the BDI scale, Modified Barthel Index, Depression of Ki score, Yin syndrome score, and Yang syndrome score. Among the 38 patients, 24 patients got BDI scores above 21, which is the cut-off score for depression in Koreans. The same procedures and assessments described above were applied. Results : The treatment group did not significantly improve compared with the control group; results yielded only slight significance (P=0.086). Notably, patients with poststroke depression classified as yin syndrome improved more significantly on BDI than those classified as yang syndrome. When the BDI cutoff point for depression was defined as $\geq 21$, the treatment group did not significantly improve compared with the control group (P=0.114). However, patients with poststroke depression classified as yin syndrome were again significantly more improved on BDI than those classified as yang syndrome. Conclusions : This study suggests that Banhahubak-tang (Banxiahoupotang) is significantly effective in patients with poststroke depression classified as yin syndrome.
### Beck Depression Inventory Score and Associated Factors in Korean Patients with Lumbar Spinal Stenosis (척주관협착증 환자의 Beck Depression Inventory 점수와 이와 관련된 요인들의 분석)
• Kim, Ae Ra;Seo, Bo Byoung;Kim, Jin Mo;Bae, Jung In;Jang, Young Ho;Lee, Yong Cheol;Kang, Chul Hyung;Jung, Sung Won;Hong, Ji Hee
• The Korean Journal of Pain, v.20 no.2, pp.138-142, 2007
• Background: Depression is a frequent comorbid disease of chronic pain patients. This study was conducted to evaluate the prevalence of depression and to correlate associated factors with depression in patients with lumbar spinal stenosis. Methods: The data of this survey were collected from 97 patients who visited our pain clinic for the management of lumbar spinal stenosis. Depression was examined by a self-reported survey using the Korean version of the Beck Depression Inventory (BDI). The Oswestry Disability Index (ODI) and the life satisfaction scale score were also obtained. Demographic and clinical characteristics (including spouse status, employment status, smoking status, the number of patients with multiple painful areas, the number of patients with combined disease, pain duration, visual analogue scale, Roland 5-point scale, and walking distance) were obtained from an interview with the patient. The patients were divided into group N ($BDI{\leq}14$, n = 43) and group D (BDI > 14, n = 54) according to the BDI scale. Of the 97 patients, 55.7% had a high BDI score. Results: The patients in group N had a higher rate of employment (48.0%, P < 0.05) and had higher life satisfaction scale scores ($9.4{\pm}2.5$, P < 0.01) as compared to group D patients. The BDI score showed a close correlation with employment status and the life satisfaction scale. Conclusions: Many lumbar spinal stenosis patients had high BDI scores. Employment status and the life satisfaction scale were closely correlated with the BDI score.
### Comparison of Stress Perception and Depression between Gastric Cancer and Gastritis Patients (위암 환자들과 위염 환자들 간의 스트레스지각 및 우울의 비교)
• Koh, Kyung-Bong;Lee, Sang-In;Lee, Jong-Min
• Korean Journal of Psychosomatic Medicine, v.2 no.1, pp.88-97, 1994
• A comparison was made between gastric cancer and gastritis patients regarding stress perception and depression, using the Global Assessment of Recent Stress (GARS) scale and the Beck Depression Inventory (BDI). 50% of gastric cancer patients and 38% of gastritis patients, respectively, were found to be depressed on the BDI scale. There was no significant difference in stress perception scores between the two groups. However, gastric cancer patients tended to be more depressed than gastritis patients, although the difference was not statistically significant. In the gastric cancer patients, the severity of psychic distress showed a significantly positive correlation with depression, whereas in the gastritis patients, the severity of physical symptoms showed a significantly positive correlation with depression. This suggested that the depression of gastric cancer patients was more likely to be related to the extent of psychic distress than to that of physical symptoms. In each of the two groups, female patients showed significantly higher stress perception than male patients, and age was found to have a significantly negative correlation with stress perception. In conclusion, the severity of pathology of the same organ was not related to the extent of stress perception and depression, in which denial by gastric cancer patients might play a role. Thus, it is emphasized that a psychosocial approach is more needed for gastric cancer patients than for gastritis patients.
https://www.physicsforums.com/threads/is-angular-momentum-conserved-if-you-move-off-at-a-tangent.688320/ | # Is angular momentum conserved if you move off at a Tangent ?
1. Apr 27, 2013
### elemis
Let's imagine a binary system of two astronauts in space connected to one another via a light rope.
The rope is taut and they're spinning round and round, with their axis of rotation perpendicular to the rope and passing through their centre of mass.
Now, my question is this. Let's say they each let go of the rope; they move off at tangents.
Is angular momentum conserved ? And what is their subsequent motion ?
My hypothesis : They will move off at constant velocity at a tangent to their circular motion. I believe that, because they are now under no external torque, by the rotational analogue of Newton's second law (dL/dt = τ) their angular momenta will be of equal magnitude but opposite sign.
Does this make sense ?
EDIT : If you can recommend some good reference websites on this topic that would be nice too. Thanks !
2. Apr 27, 2013
### Staff: Mentor
yes.
You're right about them moving off at a tangent, wrong about the angular momenta. They will be of equal magnitude and the same sign, so that angular momentum is conserved. You calculate the angular momentum of point masses like our drifting astronauts as
L = r x mv
where the 'x' is the vector cross-product operation and r is the position vector from the origin. Try it on your astronauts and you'll see that it doesn't change when they let go of the rope; as they drift apart the angle between v and r changes in a way that is exactly canceled by the change in the magnitude of r.
(The moment of inertia of a non-point object is calculated by integrating the formula for point masses across the object. Any first-year mechanics text will be a good reference).
3. Apr 27, 2013
### elemis
I comprehend everything in your answer except the bit in bold. Can you explain how they change ?
I understand that |L| = |r||p| sin θ. Does this have anything to do with your explanation?
4. Apr 27, 2013
### Infrared
Look at $r\sin\theta$. It is the perpendicular distance from the origin to the line along which an astronaut moves, and it will not change even when r does.
5. Apr 27, 2013
### elemis
I assume you are referring to: |L| = |r||p| sin θ
Yes, what you say makes sense. Hence, I agree that |L| is proportional to |p| in this scenario.
However, wouldn't v have different signs for each astronaut since they are moving off in opposite directions ?
6. Apr 27, 2013
### Infrared
Yes, they do have opposite signs but don't you have an absolute value in your equation? If you are worried about the value of L and not just its magnitude, then you could use the right hand rule for both astronauts to see that the angular momentum vector is in the same direction for both.
7. Apr 27, 2013
### elemis
Yes, I know that the absolute value of angular momentum would be equal for each astronaut, but my initial argument was that L will be equal in each astronaut's case but have opposite signs.
The reason for the opposite signs (in my mind) is the opposite signs for velocity (moving in opposite directions).
However, Nugatory says the magnitude will be the same but they will have the SAME signs.
This I do not understand.
EDIT : I'm talking about the vector quantities of L.
8. Apr 27, 2013
### Infrared
Do you know what a cross product is? If you construct the angular momentum vector for each astronaut, you will see that the angular momentum for both astronauts is in the same direction and same sign (before and after) by using the right hand rule.
Edit: You can also think of it this way. Even though the momentum vectors are in opposite directions, so are the position vectors, so the cross product is the same in both cases.
Last edited: Apr 27, 2013
9. Apr 27, 2013
### Staff: Mentor
That's the rule for the magnitude of the angular momentum vector. The direction of the angular momentum vector is perpendicular to the plane in which r and p lie, and points either up or down according to the right-hand rule. In your example, they'll both point in the same direction so will add up to the original angular momentum instead of cancelling to zero.
The components of the cross-product L of two vectors A and B are given by
$$L_i = \sum_{j}\sum_{k}\epsilon_{ijk}A_{j}B_{k}$$
where the summations are across j and k, and the epsilon symbol is defined here (no need to read past the first few lines). This turns out to be equivalent to the rule you gave above for calculating the magnitude of the vector plus using the right-hand rule to choose its direction.
Last edited: Apr 27, 2013
10. Apr 27, 2013
### rcgldr
Note that the total angular momentum will also include the rate of rotation for each astronaut. This portion of the total angular momentum is also conserved.
11. Apr 27, 2013
### WannabeNewton
elemis maybe it would benefit to do this in terms of vectors and see that the result does indeed hold. Fix the origin to the center of the circular binary orbit of radius $R$ and assume that the rotation speed is a constant $v = \omega R$. The position vectors of the two point masses at a given instant of time will be $r_{1} = R\hat{r}, r_{2} = -R\hat{r}$ and their tangential velocities will be $\mathbf{v}_{1} = v\hat\theta, \mathbf{v}_{2} = -v\hat{\theta}$ hence $L_{1} = mr\times \mathbf{v} = mRv\hat{r}\times \hat{\theta} = mRv\hat{z} = L_{2}$ so $L = 2mRv\hat{z}$.
Now let's say they both let go of the ropes. They each go off on a tangent with velocity $v$. At this point it would be more convenient to use cartesian coordinates so let's do that and let's also orient our coordinate system (with the same fixed origin from above) so that the x-axis is aligned with the direction they go off in. Draw the position vector from the origin to any one of the point masses and you'll see that $r_{1} = vt\hat{x} - R\hat{y}, r_{2} = -vt\hat{x} + R\hat{y}$ and $\mathbf{v}_{1} = v\hat{x}, \mathbf{v}_{2} = -v\hat{x}$ thus $L_{1} = m(vt\hat{x} - R\hat{y})\times (v\hat{x}) = mRv\hat{z}, L_{2} = m(-vt\hat{x} + R\hat{y})\times (-v\hat{x}) = mRv\hat{z}$ so again we see that $L = 2mRv\hat{z}$.
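A quick numerical check of the algebra above (a minimal sketch; the mass, rope half-length, and spin rate are arbitrary placeholder values, not anything from the thread):

```python
import numpy as np

m, R, omega = 1.0, 2.0, 3.0    # placeholder mass, half-separation, spin rate
v = omega * R                  # tangential speed at the moment of release

def Lz(r, p):
    # z-component of L = r x p for motion confined to the xy-plane
    return r[0] * p[1] - r[1] * p[0]

for t in [0.0, 1.0, 5.0]:
    r1, p1 = np.array([R, v * t]), np.array([0.0, m * v])     # astronaut 1 drifts +y
    r2, p2 = np.array([-R, -v * t]), np.array([0.0, -m * v])  # astronaut 2 drifts -y
    print(t, Lz(r1, p1), Lz(r2, p2), Lz(r1, p1) + Lz(r2, p2))
# Each term stays m*R*v and the total stays 2*m*R*v for all t,
# matching L = 2mRv z-hat from the vector calculation.
```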
12. Apr 27, 2013
### WannabeNewton
This is a very good point but I think the OP was thinking of a situation where the astronauts could be treated as point masses, in which case they wouldn't have spin angular momentum. But yeah the distinction wouldn't really matter since, as you note, it would be conserved anyways.
13. Apr 27, 2013
### BobG
Which reference frame?
If you're using a non-inertial rotating reference frame, then both astronauts are moving in a positive direction. If you're using an inertial non-rotating frame, then the signs of the components of your radius will also be changing. Using an easy example, if the first astronaut moved in positive y when the radius lay along the positive x axis, and the second astronaut moved in negative y when the radius lay along the negative x axis, then the sign of their angular momentum hasn't changed. It lies on the positive z axis for both.
Likewise, linear momentum is also conserved, as you started with none and ended with the astronauts' linear momentum cancelling out.
All of that makes sense mathematically, but, obviously, once the astronauts are moving off on their own path, it makes more sense to refer to their linear momentum and the angular momentum of the rope (plus whatever it was connected to). The angular momentum of the rope has obviously decreased once the astronauts have departed.
But total momentum was still conserved regardless of how you look at it or refer to it.
Last edited: Apr 27, 2013
http://mathhelpforum.com/pre-calculus/159930-problem-region-bounded-curve-straight-line-2.html | # Thread: problem in region bounded by a curve and a straight line
1. $\dfrac{\pi}{16} \left\{\left[49(7) - \frac{14(7)^2}{2} + \frac{7^3}{3}\right] - \left[49(3) - \frac{14(3)^2}{2} + \frac{3^3}{3} \right]\right\} - \pi\left[4(4) - \frac{4^2}{2} - 4(3) - \frac{3^2}{2} \right]$
This line should be:
$\dfrac{\pi}{16} \left\{\left[49(7) - \frac{14(7)^2}{2} + \frac{7^3}{3}\right] - \left[49(3) - \frac{14(3)^2}{2} + \frac{3^3}{3} \right]\right\} - \pi\left[\left(4(4) - \frac{4^2}{2}\right) - \left(4(3) - \frac{3^2}{2}\right) \right]$
$\dfrac{\pi}{16} \left\{\left[49(7) - \frac{14(7)^2}{2} + \frac{7^3}{3}\right] - \left[49(3) - \frac{14(3)^2}{2} + \frac{3^3}{3} \right]\right\} - \pi\left[4(4) - \frac{4^2}{2} - 4(3) + \frac{3^2}{2} \right]$
Notice the plus sign.
Continue now.
2. I get:
$\dfrac{\pi}{16} \left\{\left[49(7) - \frac{14(7)^2}{2} + \frac{7^3}{3}\right] - \left[49(3) - \frac{14(3)^2}{2} + \frac{3^3}{3} \right]\right\} - \pi\left[4(4) - \frac{4^2}{2} - 4(3) + \frac{3^2}{2} \right]$
= $\frac{\pi}{16}\left[\frac{64}{3}\right] -\pi\left[\frac{1}{2}\right]$
= $\pi\frac{4}{3} - \pi\left[\frac{1}{2}\right]$
= $\frac{5}{6}\pi\ \text{units}^3$
right?
3. $\pi\left[4(4) - \frac{4^2}{2}- 4(3) + \frac{3^2}{2} \right] = \pi (16 - 8 - 12 + 4.5) = 0.5\pi$
Giving the final answer as $\frac56 \pi\ \text{units}^3$
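A quick symbolic check of this result (a sketch; the integrands below are inferred from the antiderivatives used in the working above):

```python
import sympy as sp

x = sp.symbols('x')
# (pi/16) * Int_3^7 (7 - x)^2 dx  -  pi * Int_3^4 (4 - x) dx
V = sp.pi/16 * sp.integrate((7 - x)**2, (x, 3, 7)) \
    - sp.pi * sp.integrate(4 - x, (x, 3, 4))
print(V)  # 5*pi/6
assert sp.simplify(V - sp.Rational(5, 6) * sp.pi) == 0
```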
4. Okay sir, thanks very much. I really appreciate it. Thanks!!
https://cs.stackexchange.com/questions/13788/when-testing-n-items-how-to-cover-all-t-subsets-by-as-few-s-subsets-as-possible/13790 | # When testing n items, how to cover all t-subsets by as few s-subsets as possible?
This problem arose from software testing. The problem is a bit difficult to explain. I will first give an example, then try to generalize the problem.
There are 10 items to be tested, say A to J, and a testing tool that can test 3 items at the same time. Order of items in the testing tool does not matter. Of course, for exhaustive testing, we need $^{10}C_{3}$ combinations of items.
The problem is more complex. There is an additional condition: once a pair of items has been tested together, then the same pair does not need to be tested again.
For example, once we executed the following three tests:
A B C
A D E
B D F
we do not have to execute:
A B D
because the pair A,B was covered by the first test case, A,D was covered by the second, and B,D was covered by the third.
So the problem is, what is the minimum number of test cases that we need to ensure that all pairs are tested?
To generalize, if we have n items, s can be tested at the same time, and we need to ensure that all possible t tuples are tested (such that s > t), what is the minimum number of test cases that we need in terms of n, s and t?
And finally, what would be a good algorithm to generate the required test cases?
• The cases where the test is "optimal" (every $t$-tuple is tested exactly once) are covered by the notion of Block Design. There are relatively few of these perfect test possibilities, so one needs additional heuristics, I guess. – Hendrik Jan Aug 17 '13 at 11:16
• Your test paradigm is faulty; it may not be sufficient to test only pairs. Some errors may only occur if three (or more) components act together in a specific combination. – Raphael Aug 17 '13 at 13:21
• @Raphael, thanks for a much better title, but I completely fail to understand how you can claim "your test paradigm is faulty" with having zero understanding of the actual problem or the context. – wookie919 Aug 17 '13 at 22:09
• @wookie919 That's because you don't give any context but pose a general problem. I have merely observed that, in general, you may need to test all combinations that can occur (in action). – Raphael Aug 18 '13 at 11:14
The block designs you want (for testing 3 things at a time, and covering all pairs) are called Steiner triple systems. There exists a Steiner triple system with $\frac{1}{3} {n \choose 2}$ triples whenever $n \equiv 1 \mathrm{\ or\ } 3$ mod $6$, and algorithms are known to construct these. See, for example, this MathOverflow question (with a link to working Sage code!). For other $n$, you could round up to the next $n' \equiv 1 \mathrm{\ or\ } 3$ mod $6$, and use a modification of this triple system for $n'$ to cover all pairs for $n$.
If you want the best construction for other $n$, the number of triples required is the covering number $C(n,3,2)$, and is given by this entry in the on-line encyclopedia of integer sequences. This links to the La Jolla Covering Repository which has a repository of good coverings. The online encyclopedia of integer sequences gives a conjectured formula for $C(n,3,2)$; if this formula holds, intuitively that means there should probably be good algorithmic ways of constructing these coverings, but since the formula is conjectured, it is clear that nobody currently knows them.
For high covering numbers, good coverings are harder to find than for $C(n,3,2)$, and the repository will give better solutions than any known efficient algorithms.
Form the undirected graph $G$ where each vertex is a pair of items, and where there is an edge between two vertices if they share an item in common. In other words, $G=(V,E)$ where $V=\{\{a,b\} : a,b \in \text{Items} \land a\ne b\}$ and $E=\{(s,t) : s,t \in V \land |s\cap t|=1\}$. The graph has ${n \choose 2}$ vertices, and every vertex has $2n-4$ edges incident on it.
Then one approach would be to find a maximum matching in $G$. Edmonds' algorithm can be used to find such a maximum matching in polynomial time. If you're lucky, this will give you a perfect matching, and then you're good. Each edge $(\{A,B\},\{B,C\}) \in E$ in the matching corresponds to a test case $A B C$. Since every vertex is incident with one edge in the perfect matching, you have covered all pairs, using ${n \choose 2}/2$ test cases, which is within a $1.5$ factor of optimal. If you don't get a perfect matching, add a few more test cases as needed to achieve full coverage.
In the case of $s=3$ and $t=2$ you need to perform at least ${n \choose 2}/3$ tests, since there are ${n \choose 2}$ pairs and every test covers 3 pairs. That means that you can do the trivial thing and perform ${n \choose 2}$ tests, and be only a factor of 3 worse than the optimum.
If you are actually programming this, then a way to optimize could be to first pick some number of tests at random, and then do a brute-force pass over the pairs not covered by the tests so far.
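A sketch of that random-plus-patching idea (the parameter choices are arbitrary; the candidate-sampling count of 200 is just a cheap stand-in for proper brute force):

```python
import itertools
import random

def greedy_cover(n, s, t, candidates=200, seed=0):
    """Keep adding s-subsets until every t-subset of range(n) is covered."""
    rng = random.Random(seed)
    uncovered = set(itertools.combinations(range(n), t))
    tests = []
    while uncovered:
        # among a few random s-subsets, keep the one covering the most
        # still-uncovered t-subsets; the loop itself patches whatever remains
        best, best_gain = None, -1
        for _ in range(candidates):
            cand = tuple(sorted(rng.sample(range(n), s)))
            gain = sum(p in uncovered for p in itertools.combinations(cand, t))
            if gain > best_gain:
                best, best_gain = cand, gain
        tests.append(best)
        uncovered -= set(itertools.combinations(best, t))
    return tests

print(len(greedy_cover(10, 3, 2)))  # compare with the C(10,2)/3 = 15 lower bound
```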
For general $s$ and $t$, there is a lower bound of ${n \choose t}/{s \choose t}$ tests. For an upper bound, I'm claiming that it is enough to make $C \cdot \frac{{n \choose t}}{{s \choose t}}\cdot \log{n \choose t} \leq O(t \cdot (\frac{n-t}{s-t})^t \log n)$ tests.
Let's see what happens when we choose the tests uniformly at random. If you pick an $s$-tuple $S \subseteq [n]$ at random, then for a fixed $t$-tuple $X \subseteq [n]$, we have $\Pr[X \subset S] = \frac{{n-t \choose s-t}}{{n \choose s}}$. Therefore, if we pick $C \cdot \frac{{n \choose t}}{{s \choose t}} \cdot \log{n \choose t}$ tests at random, then $$\Pr[X \text{ does not belong to any of them}] = \left(1 - \frac{{n-t \choose s-t}}{{n \choose s}}\right)^{C \cdot \frac{{n \choose t}}{{s \choose t}} \cdot \log{n \choose t}} \leq \exp \left(-C \frac{{n-t \choose s-t}{n \choose t}}{{n \choose s}{s \choose t}} \cdot \log{n \choose t}\right) = \exp\left(-C \log{n \choose t}\right)\leq 1/{n \choose t}.$$
Therefore, by the union bound, after $O(t \cdot (\frac{n-t}{s-t})^t \log(n))$ random tests all $t$-tuples will be covered.
• Thank you very much for the very insightful answer, but I was looking for an exact algorithm that would generate exactly the ${n \choose t}/s$ lower bound number of test cases (if that is even possible), or something very close to the lower bound. – wookie919 Aug 17 '13 at 22:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.760205090045929, "perplexity": 266.74023291165486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141182794.28/warc/CC-MAIN-20201125125427-20201125155427-00622.warc.gz"} |
https://www.wyzant.com/resources/answers/269209/trigonometry_question | Rebecca J.
# Trigonometry question?
Suppose P(-5/6,y) is a point on the unit circle in the third quadrant. let θ be the radian measure of the angle in standard position with P on the terminal side, so that θ is the circular coordinate of P. Evaluate all the circular functions of θ. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8248454928398132, "perplexity": 810.8919941476962}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178366969.45/warc/CC-MAIN-20210303134756-20210303164756-00471.warc.gz"} |
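A solution sketch (the algebra follows directly from the unit-circle relation $x^2 + y^2 = 1$ with $x = -\frac{5}{6}$ and $y < 0$ in quadrant III):

$$y = -\sqrt{1 - \left(\tfrac{5}{6}\right)^2} = -\frac{\sqrt{11}}{6}$$

$$\sin\theta = -\frac{\sqrt{11}}{6},\quad \cos\theta = -\frac{5}{6},\quad \tan\theta = \frac{\sqrt{11}}{5},\quad \csc\theta = -\frac{6\sqrt{11}}{11},\quad \sec\theta = -\frac{6}{5},\quad \cot\theta = \frac{5\sqrt{11}}{11}$$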
https://stlab.cc/legacy/eve-definitions-in-lua.html | Often many dialogs in an application have very similar Eve definitions. For example, all of the “Properties” dialogs on Windows contain a tab group and “OK/Cancel/Apply” buttons. It would be nice to be able to define functions that return commonly used combinations of widgets.
One solution to this problem is to write the Eve definitions in an embeddable language such as Lua.
My current thinking is that a data structure would be built in Lua that defines all of the views, and that there would be a function exposed that takes this data structure and builds a UI from it. So an Eve definition might look like this:
view = dialog{ name = "Test" }
{
row{ horizontal = "align_fill", vertical = "align_fill" }
{
static_text{ name = "This is a test." },
button{ name = "Click me", action = "cancel" }
}
}
--
-- Now build it.
--
asl_factory(view)
The various view functions (dialog, row, static_text, button, etc) can be defined like this:
function leaf_factory(name)
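-- leaf{ params } -> node table recording the widget type and the given parameters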
return function(params)
local t = {}
t.name = name;
t.params = params;
return t;
end
end
function container_factory(name)
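-- curried: container{ params } returns a function that then takes the children
-- table, which is what enables the dialog{...}{...} call style used above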
return function(params)
return function(children)
local t = {}
t.name = name;
t.params = params;
t.children = children;
return t;
end
end
end
local dialog = container_factory("dialog")
local row = container_factory("row")
local column = container_factory("column")
local static_text = leaf_factory("static_text")
local button = leaf_factory("button") | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.535600483417511, "perplexity": 7937.177244831547}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689752.21/warc/CC-MAIN-20170923160736-20170923180736-00586.warc.gz"} |
https://cadabra.science/manual/expand_delta.html | a field-theory motivated approach to computer algebra
expand_delta
Expand generalised Kronecker delta symbols
In Cadabra the KroneckerDelta property indicates a generalised Kronecker delta symbol. In order to expand it into standard two-index Kronecker deltas, use expand_delta, as in the example below.
\delta{#}::KroneckerDelta;
$$\displaystyle{}\text{Attached property KroneckerDelta to }\delta\left(\#\right).$$
ex:=\delta^{a}_{b}^{c}_{d};
$$\displaystyle{}\delta^{a}\,_{b}\,^{c}\,_{d}$$
expand_delta(_);
$$\displaystyle{}\frac{1}{2}\delta^{a}\,_{b} \delta^{c}\,_{d} - \frac{1}{2}\delta^{c}\,_{b} \delta^{a}\,_{d}$$
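This is the $p=2$ case of the general antisymmetrized expansion (stated here for reference, not Cadabra output; the $1/p!$ normalization matches the prefactor above):
$$\delta^{a_1}{}_{b_1}\cdots{}^{a_p}{}_{b_p} = \frac{1}{p!}\sum_{\sigma \in S_p} \mathrm{sgn}(\sigma)\, \delta^{a_1}{}_{b_{\sigma(1)}}\cdots\delta^{a_p}{}_{b_{\sigma(p)}}$$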
ex:=\delta^{a}_{m}^{l}_{n} \delta_{a}^{c}_{b}^{d};
$$\displaystyle{}\delta^{a}\,_{m}\,^{l}\,_{n} \delta_{a}\,^{c}\,_{b}\,^{d}$$
expand_delta(_); distribute(_); eliminate_kronecker(_); canonicalise(_);
$$\displaystyle{}\left(\frac{1}{2}\delta^{a}\,_{m} \delta^{l}\,_{n} - \frac{1}{2}\delta^{l}\,_{m} \delta^{a}\,_{n}\right) \left(\frac{1}{2}\delta_{a}\,^{c} \delta_{b}\,^{d} - \frac{1}{2}\delta_{b}\,^{c} \delta_{a}\,^{d}\right)$$
$$\displaystyle{}\frac{1}{4}\delta^{a}\,_{m} \delta^{l}\,_{n} \delta_{a}\,^{c} \delta_{b}\,^{d} - \frac{1}{4}\delta^{a}\,_{m} \delta^{l}\,_{n} \delta_{b}\,^{c} \delta_{a}\,^{d} - \frac{1}{4}\delta^{l}\,_{m} \delta^{a}\,_{n} \delta_{a}\,^{c} \delta_{b}\,^{d}+\frac{1}{4}\delta^{l}\,_{m} \delta^{a}\,_{n} \delta_{b}\,^{c} \delta_{a}\,^{d}$$
$$\displaystyle{}\frac{1}{4}\delta^{l}\,_{n} \delta_{m}\,^{c} \delta_{b}\,^{d} - \frac{1}{4}\delta^{l}\,_{n} \delta_{b}\,^{c} \delta_{m}\,^{d} - \frac{1}{4}\delta^{l}\,_{m} \delta_{n}\,^{c} \delta_{b}\,^{d}+\frac{1}{4}\delta^{l}\,_{m} \delta_{b}\,^{c} \delta_{n}\,^{d}$$
$$\displaystyle{}\frac{1}{4}\delta_{b}\,^{d} \delta^{c}\,_{m} \delta^{l}\,_{n} - \frac{1}{4}\delta_{b}\,^{c} \delta^{d}\,_{m} \delta^{l}\,_{n} - \frac{1}{4}\delta_{b}\,^{d} \delta^{c}\,_{n} \delta^{l}\,_{m}+\frac{1}{4}\delta_{b}\,^{c} \delta^{d}\,_{n} \delta^{l}\,_{m}$$
Note that it is in principle possible to get a result similar to the expanded form by using the Young projector and then canonicalising, but this is more expensive:
ex:=\delta^{a}_{b}^{c}_{d};
$$\displaystyle{}\delta^{a}\,_{b}\,^{c}\,_{d}$$
young_project_tensor(_);
$$\displaystyle{}\delta^{a}\,_{b}\,^{c}\,_{d}$$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7199556827545166, "perplexity": 2467.4460358201886}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991413.30/warc/CC-MAIN-20210512224016-20210513014016-00272.warc.gz"} |
https://iwaponline.com/aqua/article/69/7/678/77703/Enhanced-cadmium-removal-from-water-by | ## Abstract
Hydroxyapatite powders were synthesized according to a wet precipitation route and then subjected to heat treatments within the temperature range of 200–800 °C. The prepared samples were tested as sorbents for cadmium in an aqueous medium. The best performances were obtained with the material treated at 200 °C (HAp200), as the relevant sorbent textural features (SBET – specific surface area and Vp – total volume of pores) were least affected at this low calcination temperature. The maximum adsorption capacity at standard ambient temperature and pressure was 216.6 mg g−1, which increased to 240.7 mg g−1 by increasing the temperature from 25 to 40 °C, suggesting an endothermic nature of the adsorption process. Moreover, these data indicated that a thermal treatment at 200 °C enhanced the ability of the material in Cd2+ uptake by more than 100% compared to other similar studies. The adsorption kinetic process was better described by the pseudo-second-order kinetic model. Langmuir, Freundlich, and Dubinin–Kaganer–Radushkevich isotherms were applied to describe the sorption behaviour of Cd2+ ions onto the best adsorbent. Furthermore, a thermodynamic study was also performed to determine ΔH°, ΔS°, and ΔG° of the sorption process of this adsorbent. The adsorption mechanisms were investigated by Fourier transform infrared spectroscopy (FTIR) and scanning electron microscopy-transmission electron microscopy (SEM-TEM) observations.
## HIGHLIGHTS
• The powder calcined at 200 °C (HAp200) exhibited the maximum Cd2+ uptake capacity.
• Relevant sorbent features (SBET and Vp) were least affected for the powder calcined at 200 °C.
• Maximum adsorption capacity at standard ambient temperature and pressure was 216 mg g−1.
• Uptake capacity was improved by more than 100% compared to other studies.
### Graphical Abstract
## INTRODUCTION
Heavy metals are generally introduced to the environment through discharged domestic, industrial, and agricultural wastewater. Released into the water system, these effluents pose a potential threat to surface and ground waters. One of the most hazardous pollutants among the various contaminants that have been added to groundwater is cadmium (Idrees et al. 2018). Its presence in drinking water is considered a major public health threat. Indeed, chronic exposure to cadmium results in kidney dysfunction (Johri et al. 2010) as well as bone and cardiovascular diseases (Yingjian et al. 2017). Moreover, epidemiological studies showed a link between inhalation exposure and the development of lung and prostate cancer (Chen et al. 2016). When released into water systems, these effluents constitute potential threats to aquatic organisms and public health, as they can accumulate and be transferred up the food chain, threatening food safety and posing serious health risks (Rajeshkumar & Li 2018). Therefore, great attention to the removal of Cd from wastewater is required before it reaches environmental water sources. In this way, several physicochemical treatments have been applied to reduce this metal in water systems, such as chemical precipitation (Matusik et al. 2008), ion exchange (Wong et al. 2014), membrane filtration (Kheriji et al. 2015), and adsorption (Harja et al. 2015; Naeem et al. 2019). Among these remediation techniques, the adsorption procedure has the advantage of being highly efficient, economical, and simple to handle.
Apatites are a large class of mineral compounds with the general chemical formula M10(PO4)6X2, where M is mostly a divalent cation (Ca2+, Sr2+, Ba2+, Cd2+, Pb2+, etc.) and X is a monovalent or bivalent anion (F−, Cl−, Br−, OH−, …) (Elliot 1994). The ability of these materials to exchange several ions in both the cationic and anionic sites makes them highly effective sorbents for Cd(II) and other trace metals from contaminated soils and wastewaters (He et al. 2013). Among apatites, hydroxyapatite with the chemical formula Ca10(PO4)6(OH)2 (HAp) is the main mineral component of calcified tissues (bone and teeth). The crystallinity, the porosity, and the surface characteristics of HAp are correlated with the synthesis and heat treatment conditions. Hence, the sorption behaviour of HAp is highly affected by both its morphology and its crystalline state (Stötzel et al. 2009; Wang et al. 2016).
It was reported that the sorption capacity of an as-prepared HAp sample with an SBET value of 94.9 m2 g−1 was 142.8 mg of Cd2+ per gram of adsorbent (Mobasherpour et al. 2011). On the other hand, this quantity was reduced to 70.9 and 16.9 mg of Cd2+ per gram of adsorbent by calcination at temperatures ranging from 500 to 1,140 °C (Da Rocha et al. 2002). However, no study has been conducted on powders calcined at low and moderate temperatures. Therefore, the purpose of this work is to investigate the dependence of the Cd2+ removal efficiency of HAp powders on the calcination temperature and to determine the heat treatment conditions that maximize it. Moreover, the dynamics of the adsorption process was studied as a function of pH, adsorbent dosage, contact time, initial metal concentration, and temperature, and the adsorption isotherms were established.
## MATERIALS AND METHODS
### Materials preparation
HAp powders were synthesized by wet chemical precipitation (Jebri et al. 2017). The method involves the dropwise addition of a diammonium hydrogen phosphate solution (NH4)2HPO4 (0.31 M) into a boiling solution of calcium nitrate Ca(NO3)2·4H2O (0.12 M) at normal pressure. The pH of the precipitation medium was maintained close to 11 by successive additions of ammonia solution (28 wt%). The precipitate was separated by vacuum filtration, washed with distilled water, and dried for 12 h at 100 °C. Finally, the as-obtained powders were calcined at various temperatures for 12 h. The synthesis can be schematized by the following reaction:
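$$10\,\mathrm{Ca(NO_3)_2} + 6\,(\mathrm{NH_4})_2\mathrm{HPO_4} + 8\,\mathrm{NH_4OH} \rightarrow \mathrm{Ca_{10}(PO_4)_6(OH)_2} + 20\,\mathrm{NH_4NO_3} + 6\,\mathrm{H_2O}$$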
All the reactants used in the experiments are of analytical grade and manufactured by Merck and Fluka.
### Sorption procedure
The study of Cd2+ sorption onto HAp was carried out through the batch equilibrium technique. The influence of the adsorbent calcination temperature on Cd2+ removal was investigated. The effects of the adsorbent amount, the contact time, the pH, the initial metal concentration, and the solution temperature were also studied.
For these purposes, aqueous solutions of Cd2+ ions with various concentrations were prepared by diluting a stock solution containing 2.7445 g of Cd(NO3)2·4H2O per litre of distilled water.
The adsorption studies were carried out by mixing 10–60 mg of adsorbent with 100 mL of Cd2+ solutions at various concentrations. Furthermore, the acidity/alkalinity of the medium was varied from pH ∼2 up to weakly basic conditions (pH ∼8) by dropwise adding of 0.1 M aqueous solutions of HCl or KOH to study the effect of pH variation. To assure the accomplishment of adsorption equilibrium, the suspension was stirred for 120 min (Mobasherpour et al. 2011) at 300 rpm and then sampled through vacuum filtration for further analysis of the remaining adsorbate. The amount of the adsorbed Cd2+ per gram of adsorbent qt (mg g–1) at time t is calculated as follows:
$$q_t = \frac{(C_0 - C_t)\,V}{m} \qquad (1)$$
where C0 and Ct are the metal ion concentrations (mg L–1) in the liquid phase initially and at any time t, respectively, m is the mass of the adsorbent (g), and V is the volume of the solution (L). After 120 min, Ct and qt reach the equilibrium value Ce and qe.
### Analytical methods
The HAp precipitate was subjected to calcination temperatures in the range of 200–800 °C with a heating rate of 20 °C min–1 using a Nabertherm Muffle Furnace. X-ray diffraction analysis (XRD) was conducted with a D8 ADVANCE Bruker diffractometer using copper radiations (Kα1 = 1.5406 Å; Kα2 = 1.5445 Å). Data were collected from 10 to 80° 2θ using the step scanning technique. A refinement of the structure using PANalytical X'Pert HighScore Plus V3.0.5 software, allowed determination of the lattice parameters and the crystallite size. According to Landi et al. (2000), the fraction of the crystalline phase xc could be evaluated using the following relation:
$$x_c \approx 1 - \frac{V_{112/300}}{I_{300}} \qquad (2)$$
where I300 is the intensity of the (300) reflection and V112/300 is the intensity of the hollow between (112) and (300) reflections which completely disappears in non-crystalline samples.
Fourier transform infrared spectroscopy (FTIR) was carried out on pellets prepared by mixing 1 mg of the sample with 200 mg of infrared grade KBr. The spectra were recorded between 400 and 4,000 cm–1 using the IRAffinity-1 Shimadzu spectrophotometer. Nitrogen adsorption–desorption isotherms were performed with a Micromeritics ASAP 2020 at 77 K. The SBET of the particles was calculated using the Brunauer–Emmett–Teller (BET) method in the range of relative pressure (p/p0) from 0.01 to 0.99. The average pore diameter was evaluated by the Barrett–Joyner–Halenda (BJH) method applied to the desorption branch. The morphological features of the material before and after contact with Cd2+ solutions were determined using an Ultra-high-Resolution Analytical Electron Microscope (HR-FESEM Hitachi SU-70) and a Transmission Electron Microscope (Hitachi, H9000 NAR). In the liquid phase, the changes in concentrations of Cd2+ ions resulting from the interactions with the sorbent were measured by atomic absorption spectrometer type novAA® 350 Analytik Jena.
## RESULTS AND DISCUSSION
### Characterization of HAp powders
XRD patterns of the apatitic precipitate (100 °C) and those of HAp samples heat-treated at temperatures in the range of 200–800 °C are shown in Figure 1(a). The results indicate the presence of the crystalline apatitic phase even in the dried powder. The intensity of the XRD peaks was progressively enhanced by increasing the calcination temperature. This evolution is mainly reflected by the increase in the hollow between the (112) and (300) reflections on the recorded diagram. Based on the Landi method (Landi et al. 2000), the estimated crystalline phase fractions xc are 0.50, 0.55, 0.60, 0.70, and 0.85 for the products heated at 100, 200, 400, 600, and 800 °C, respectively.
Figure 1
XRD patterns (a) and infrared spectra (b) of hydroxyapatite powders calcined at different temperatures.
The lattice parameters determined for the highly crystallized sample heat-treated at 800 °C (xc = 0.85) are a = b = 9.417 Å, c = 6.880 Å, whereas the crystallite sizes vary within the range 30–40 nm according to the heating temperature. The FTIR spectra of the samples displayed in Figure 1(b) exhibit the characteristic absorbance bands of OH groups centred at 629 and 3,570 cm–1, and of $\mathrm{PO_4^{3-}}$ groups appearing at 563, 601, 982, 1,030, and 1,091 cm–1. The intensities of these bands increase with increasing heat treatment temperature, while the absorption band located at 1,383 cm–1, relative to nitrate ions adsorbed on the apatite surface during the synthesis, progressively disappears.
In order to elucidate the effect of calcination temperature on the surface properties of HAp, the powders heated at the extreme temperatures were subjected to the physisorption of N2. The results plotted in Figure 2 show that the samples are predominantly mesoporous, as evidenced by the type IV isotherms. However, the material heated at 200 °C exhibits a higher hysteresis loop (at p/p0 > 0.9) compared to that treated at 800 °C (0.12 < p/p0 < 0.96). The data collected in Table 1 show significant decreases of the pore diameter (Dp) and total pore volume (Vp) for the HAp800 powder. Similarly, the surface area (SBET) was reduced by more than five times by the heat treatment at 800 °C. CBET measures the adsorption force of the first adsorbed nitrogen layer. Consequently, this parameter is directly related to the affinity of nitrogen toward the material surface (Jelinek & Kováts 1994). The decrease of the CBET value indicates the low affinity of nitrogen toward the surface of the sample calcined at 800 °C.
Table 1
Textural features of HAp samples heated at 200 and 800 °C
| Sample | SBET (m² g⁻¹) | Vp (cm³ g⁻¹) | Dp (Å) | CBET |
|--------|---------------|--------------|--------|------|
| HAp200 | 58.1 | 0.237 | 327 | 123.1 |
| HAp800 | 10.8 | 0.014 | 159 | 97.4 |
Figure 2
Nitrogen adsorption–desorption isotherms for hydroxyapatite treated at 200 °C (a) and 800 °C (b).
### Optimization of the sorption parameters
#### Effect of the heat treatment
To study the impact of the heat treatment of HAp on its Cd2+ removal efficiency, masses of 30 and 40 mg of powders calcined at predefined temperatures were equilibrated with the metal ion solution at an initial concentration of 10 mg L−1 Cd2+. These experiments were conducted for initial solutions under slightly acidic (pH = 5.2) and neutral (pH = 7) conditions. For both masses, the results are illustrated in Figure 3(a) and 3(b). It can be seen that the highest Cd2+ removal percentages were observed for the material heated at 200 °C. Indeed, such treatment allowed the release of synthesis residues trapped in the pores of the apatite, explaining the slight increase in Cd2+ uptake compared to the dried raw material. These results were foreshadowed by the FTIR analyses, which indicated a significant decrease of the absorbance band of nitrate between 100 and 200 °C. On the other hand, the adsorption efficiency is considerably reduced by heating the material to 800 °C. This decrease in adsorption ability can be explained by the degradation of the textural features of the material. Indeed, according to the results of the BET analyses reported in Table 1, the rise of the calcination temperature induced concomitant decreases in SBET and Vp. Such trends are in accordance with the scanning electron microscopy (SEM) results illustrated in Figure 11: HAp200's surface is more heterogeneous than that of HAp800.
Figure 3
Evolution of Cd2+ removal by adsorption onto HAp powders heated at various temperatures (a) at pH = 5.2 (b) at pH = 7 (Conditions: [Cd2+]0 = 10 mg L−1, V = 100 mL, m = 30 and 40 mg, t = 120 min).
Based on these results, the optimization of the other parameters on the adsorption of Cd2+ onto Hap powder has been carried out with the material heat-treated at 200 °C, denoted as HAp200.
#### Effect of the initial pH
The results of the Cd2+ removal efficiency exerted by 40 mg of HAp200 adsorbent dispersed in 10 mg L−1 of Cd2+ solution are plotted in Figure 4. These results reveal that the initial solution pH does not play any significant role, except at the extreme values. The slight decrease in the removal percentage observed below pH 3 could be attributed to a partial dissolution of HAp and the competition between H3O+ protons and Cd2+ ions for the same adsorption surface sites. Accordingly, no adsorption tests were conducted below pH = 2 to prevent the dissolution of apatite in the acid medium. The upper, slightly alkaline pH limit was selected to avoid the precipitation of Cd-hydroxide forms in an alkaline medium. However, the observed small apparent decrease suggests that under these conditions, some Cd2+ ions started reacting with OH−. As a result, the optimal performance of HAp200 is obtained in the pH range of 3–7. It has been shown that the reactions responsible for the surface properties of HAp in aqueous systems are $\equiv\!\mathrm{POH} \rightleftharpoons\ \equiv\!\mathrm{PO^-} + \mathrm{H^+}$ and $\equiv\!\mathrm{CaOH_2^+} \rightleftharpoons\ \equiv\!\mathrm{Ca(OH)^0} + \mathrm{H^+}$. In the pH range lower than 6, the predominant reactive sites on the surface of HAp are the protonated $\equiv\!\mathrm{POH}$ sites. So, the sorption of Cd2+ occurs by ion-exchange between H+ on the surface sites and the metal ion according to the following reactions: $\equiv\!\mathrm{POH} + \mathrm{Cd^{2+}} \rightarrow\ \equiv\!\mathrm{POCd^+} + \mathrm{H^+}$ and $2(\equiv\!\mathrm{POH}) + \mathrm{Cd^{2+}} \rightarrow (\equiv\!\mathrm{PO})_2\mathrm{Cd} + 2\,\mathrm{H^+}$.
Figure 4
Effect of the pH on the removal of Cd2+ by HAp200 (Conditions: [Cd2+]0 = 10 mg L−1, V = 100 mL, m = 40 mg, t = 120 min).
Under alkaline conditions, the surface becomes negatively charged and the predominant species are $\equiv\!\mathrm{PO^-}$ and $\equiv\!\mathrm{Ca(OH)^0}$. In this case, the uptake of Cd2+ is governed by electrostatic forces.
#### Effect of the adsorbent dose
The effect of the adsorbent amount on Cd2+ removal by HAp200 is depicted in Figure 5. It can be seen that the removal efficiency increases (although at a decreasing pace) with increasing amounts of HAp200 up to an optimum dosage of 0.4 g L–1, beyond which it becomes approximately constant. The observed trend can be understood considering the insufficient surface area available at the beginning of the experiment, which forces the saturation of the active surface sites. However, with increasing adsorbent doses, higher amounts of active surface sites become available for adsorption, and the equilibrium between the Cd2+ concentrations at the surface and in the solution is reached at gradually lower degrees of adsorbent surface saturation.
Figure 5
Effect of adsorbent dosage on Cd2+ removal (Conditions: [Cd2+]0 = 10 mg L−1, V = 100 mL, pH = 5.2, t = 120 min).
#### Effect of the contact time
The removal percentages of Cd2+ ions as a function of the contact time for initial metal concentrations of 10, 50, and 100 mg L−1 are reported in Figure 6. It can be noticed that the highest adsorption of metal ions occurs within the first half-hour of contact. This behaviour can be explained by the availability of a high number of active sites at the beginning of the adsorption process, which gradually decreases as time progresses. Furthermore, the period required to establish the extraction equilibrium decreases to 15 min on increasing the initial metal concentration to 100 mg L−1. These results are consistent, demonstrating that increasing initial Cd2+ concentrations drive the equilibrium toward surface saturation, exhausting the active surface sites and hindering any further removal by the adsorbent (Adebowale et al. 2006). This explains the clear decreasing trend of the removal percentage with increasing metal ion concentration. Accordingly, under the tested experimental conditions, the uptake capacities were 25.0, 123.6, and 216.6 mg g−1 for initial Cd2+ concentrations of 10, 50, and 100 mg L−1, respectively.
Figure 6
Contact time effect on Cd2+ sorption onto HAp200 (Conditions: [Cd2+]0 = 10, 50, and 100 mg L−1, V = 100 mL, m = 40 mg).
#### Effect of the solution temperature
Figure 7 shows the evolution of the uptake capacity at different temperatures (298, 303, 308, and 313 K) for an initial metal concentration of 100 mg L−1. It can be seen that the adsorption capacity of the apatite increased from ∼217 to ∼241 mg g−1 with the rise of the temperature, suggesting an endothermic nature of the adsorption process. The increase in the adsorption rate of Cd2+ ions can be explained by an increase in the mobility of the metal ions with the rise of the temperature, which favours their interaction with the active sites on the surface of HAp200. Furthermore, the increasing temperature may produce a swelling effect within the internal structure of the adsorbent, enabling large metal ions to cross the external boundary layer and the internal pores (Dogan & Alkan 2003).
Figure 7
Effect of the solution temperature on the uptake of Cd2+ ions by HAp200 (Conditions: [Cd2+]0 = 100 mg L−1, V = 100 mL, m = 40 mg, t = 120 min).
### Sorption kinetics study
To predict the mechanism involved in the adsorption process of Cd2+ ions onto HAp200, several kinetic models have been applied to fit the experimental results. The models are the pseudo-first-order, the pseudo-second-order, Elovich, and the intraparticle diffusion kinetic models.
The linearized form of the pseudo-first-order and the pseudo-second-order kinetic models can be described by the following equations (Lagergren 1898; Ho 2006):
$$\ln(q_e - q_t) = \ln q_e - k_1 t \qquad (3)$$
$$\frac{t}{q_t} = \frac{1}{k_2 q_e^2} + \frac{t}{q_e} \qquad (4)$$
In Equations (3) and (4), qe and qt are the amounts of adsorbate sorbed at equilibrium and at contact time t, respectively (both in mg g−1), while k1 and k2 are the pseudo-first-order and pseudo-second-order sorption constants, in min–1 and g (mg min)–1, respectively. They are determined from the linear plots of ln(qe − qt) and t/qt versus t (Figure 8(a) and 8(b)); the resulting data are presented in Table 2. It can be noticed that the correlation coefficients R2 obtained for the pseudo-second-order kinetic model are much higher than those found by applying the pseudo-first-order model. Besides, the experimental amounts of adsorbed metal (qe,exp) are very close to those deduced from the pseudo-second-order model (qe,cal). Thus, the adsorption process is well described by pseudo-second-order kinetics. This result suggests that the rate-limiting step could be chemisorption involving valency forces through the sharing of electrons between the active sites of the adsorbent and the adsorbate, as covalent bonding and ion exchange (Ho 2006). Similar behaviour was observed in previous studies devoted to the kinetics of heavy metal adsorption onto phosphate materials (Elkady et al. 2011). From the pseudo-second-order constant k2, the initial adsorption rate h [mg (g min)–1] can be expressed as h = k2qe2. The calculated values are reported in Table 2.
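For readers who want to reproduce this kind of fit, the sketch below (not the authors' code; the qt series is an illustrative placeholder, not the measured data) performs the two linearized regressions of Equations (3) and (4) with NumPy:

```python
import numpy as np

# Illustrative uptake data (mg/g) vs contact time (min); replace with
# the measured series for a given initial Cd2+ concentration.
t  = np.array([5, 10, 15, 30, 60, 90, 120], dtype=float)
qt = np.array([13.0, 18.5, 21.0, 23.8, 24.6, 24.9, 25.0])
qe_exp = 25.0

# Pseudo-first-order, Eq. (3): ln(qe - qt) = ln(qe) - k1*t
mask = qt < qe_exp                        # avoid log(0) at equilibrium
slope, intercept = np.polyfit(t[mask], np.log(qe_exp - qt[mask]), 1)
k1, qe_cal_pfo = -slope, np.exp(intercept)

# Pseudo-second-order, Eq. (4): t/qt = 1/(k2*qe^2) + t/qe
slope, intercept = np.polyfit(t, t / qt, 1)
qe_cal_pso = 1.0 / slope
k2 = 1.0 / (intercept * qe_cal_pso**2)
h = k2 * qe_cal_pso**2                    # initial adsorption rate

print(f"PFO: k1 = {k1:.3f} 1/min, qe = {qe_cal_pfo:.1f} mg/g")
print(f"PSO: k2 = {k2:.4f} g/(mg min), qe = {qe_cal_pso:.1f} mg/g, h = {h:.1f}")
```

Comparing qe,cal from each fit with qe,exp, as done in Table 2, is what singles out the pseudo-second-order model.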
Table 2
Kinetic parameters for adsorption of Cd2+ ions onto HAp200 at 298 K
| Parameter | [Cd2+]0 = 10 mg L–1 | 50 mg L–1 | 100 mg L–1 |
|---|---|---|---|
| qe,exp (mg g–1) | 25.0 | 123.6 | 216.6 |
| **Pseudo-first-order model** | | | |
| qe,cal (mg g–1) | 9.6 | 23.2 | 27.0 |
| k1 (min–1) | 0.038 | 0.055 | 0.026 |
| R2 | 0.8729 | 0.7905 | 0.4304 |
| **Pseudo-second-order model** | | | |
| qe,cal (mg g–1) | 25.2 | 123.5 | 204.1 |
| k2 [g (mg min)–1] | 0.019 | 0.014 | 0.008 |
| h [mg (g min)–1] | 11.9 | 208.3 | 344.8 |
| R2 | 0.9994 | 0.9998 | — |
| **Elovich model** | | | |
| α [mg (g min)–1] | 2.2 × 10^4 | 4.6 × 10^10 | 1.2 × 10^18 |
| β (g mg–1) | 0.460 | 0.202 | 0.221 |
| R2 | 0.9729 | 0.8526 | 0.8378 |
| **Intraparticle diffusion model** | | | |
| kid,1 [mg (g min^0.5)–1] | 2.18 | 6.59 | 7.00 |
| I1 (mg g–1) | 13.1 | 92.5 | 170.0 |
| R2 | 0.931 | 0.946 | — |
| kid,2 [mg (g min^0.5)–1] | 0.39 | 0.22 | 1.11 |
| I2 (mg g–1) | 20.8 | 121.2 | 190.3 |
| R2 | 0.9704 | 0.9825 | 0.7877 |
Figure 8
Experimental data fitting for adsorption of Cd2+ ions onto HAp200 according to the pseudo-first-order (a), pseudo-second-order (b), Elovich (c), and intraparticle diffusion kinetic models (d).
The applicability of the Elovich model to sorption kinetics was investigated using the linearized form given by the following equation (Chien & Clayton 1980):
$$q_t = \frac{1}{\beta}\ln(\alpha\beta) + \frac{1}{\beta}\ln t \qquad (5)$$
where α is the initial adsorption rate and β is a constant related to the extent of surface coverage and the activation energy for chemisorption, in mg (g min)–1 and g mg–1, respectively. These parameters can be estimated from the slope and intercept of the straight-line plot of qt versus ln t (Figure 8(c)). The calculated parameters are listed in Table 2; they are obtained with fairly high correlation coefficients (0.8378 ≤ R2 ≤ 0.9729). The increase of the initial adsorption rate constant α with the initial metal concentration supports the occurrence of chemisorption during the adsorption process.
The possibility of intraparticle diffusion was also explored. The adsorbate species could be transported from the bulk of the solution into the solid phase through the intraparticle diffusion-transport process, which is often the rate-limiting step especially in a rapidly stirred batch reactor (Weber & Chakravoti 1974). The diffusion model is expressed as follows (Weber & Morris 1963):
$$q_t = k_{id}\, t^{1/2} + I \qquad (6)$$
in which kid is the intraparticle diffusion rate constant [mg (g min1/2)–1] and I is the intercept, which gives an idea of the boundary layer thickness: the larger I, the greater the boundary layer effect (McKay et al. 1985). The adsorption capacity qt was plotted as a function of the square root of time, t1/2, for the different initial Cd2+ concentrations (Figure 8(d)). The curves exhibit two separate regions attributed to film diffusion followed by intraparticle diffusion. Indeed, the first linear portion covers the time range between 5 and 15 min and corresponds to external metal ion diffusion and binding by active sites distributed on the outer surface of the adsorbent. The second lasts from 30 to 120 min and is assigned to the establishment of the equilibrium. For each initial metal concentration, the values of kid,1, kid,2 and I1, I2 are determined from the slopes and intercepts of the two straight lines, respectively; the results are gathered in Table 2. It is clear that the boundary layer thickness in the second portion, related to intraparticle diffusion (I2), is larger than that of the first portion, relative to film diffusion (I1). Accordingly, the film diffusion rate kid,1 is far greater than the intraparticle diffusion rate kid,2. Moreover, the linear portions of the curves do not pass through the origin. Therefore, intraparticle diffusion was not the rate-limiting step, and the removal of Cd2+ ions by HAp200 involves other processes that may operate simultaneously.
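A sketch of the corresponding two-segment Weber-Morris fit is given below (again with placeholder data; the 15/30 min split follows the time ranges quoted above):

```python
import numpy as np

t  = np.array([5, 10, 15, 30, 60, 90, 120], dtype=float)   # min
qt = np.array([13.0, 18.5, 21.0, 23.8, 24.6, 24.9, 25.0])  # mg/g, illustrative
root_t = np.sqrt(t)

film  = t <= 15     # external (film) diffusion region
intra = t >= 30     # intraparticle / equilibrium region

kid1, I1 = np.polyfit(root_t[film],  qt[film],  1)
kid2, I2 = np.polyfit(root_t[intra], qt[intra], 1)

print(f"segment 1: kid,1 = {kid1:.2f} mg/(g min^0.5), I1 = {I1:.1f} mg/g")
print(f"segment 2: kid,2 = {kid2:.2f} mg/(g min^0.5), I2 = {I2:.1f} mg/g")
# Nonzero intercepts (I1, I2 > 0) signal that intraparticle diffusion
# is not the sole rate-limiting step, as concluded above.
```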
### Sorption isotherms study
The equilibrium adsorption studies were performed to evaluate the maximum adsorption capacity of the adsorbent toward the adsorbate and to predict the type of interactions between them. Several isotherm equations have been proposed for this purpose; however, the Langmuir, Freundlich, and Dubinin–Kaganer–Radushkevich (DKR) models are the most appropriate for equilibrium modelling of heavy metal adsorption onto calcium phosphates.
The Langmuir model assumes monolayer adsorption of the adsorbate onto a finite number of identical sites on the adsorbent surface, with no interaction between the adsorbed species. Mathematically, this model is expressed by the following equation:
$$\frac{C_e}{q_e} = \frac{1}{q_m b} + \frac{C_e}{q_m} \qquad (7)$$
in which, Ce is the equilibrium concentration of the metal ions (mg L–1), qe is the equilibrium sorption capacity (mg g–1), qm is the maximum sorption capacity (mg g–1), and b is the Langmuir constant related to the energy of adsorption (L mg–1).
The Freundlich isotherm was also applied to study the distribution of Cd2+ ions between the liquid and solid phases. According to this model, adsorption occurs on a heterogeneous surface through a multilayer mechanism. The Freundlich equation can be written as follows:
$$\ln q_e = \ln k_f + \frac{1}{n}\ln C_e \qquad (8)$$
in which kf is the Freundlich isotherm constant and n is an empirical parameter related to the intensity of the adsorption, which varies with the adsorbent heterogeneity. If the (1/n) values are in the range of 0.1–1, the adsorption conditions are favourable.
The DKR model has been successfully applied to describe the sorption of Cd2+ ions onto HAp. The DKR equation can be written as follows:
$$\ln C_{ads} = \ln X_m - \beta \varepsilon^2 \qquad (9)$$
where Cads is the amount of metal ions adsorbed per unit mass of adsorbent (mol g–1), Xm is the maximum sorption capacity determined through this model (mol g–1), β is the activity coefficient related to the mean sorption energy (mol2 J–2), and ε is the Polanyi potential, which is expressed as:
$$\varepsilon = RT \ln\!\left(1 + \frac{1}{C_e}\right) \qquad (10)$$
Accordingly, the sorption potential is a temperature-dependent parameter, specific to the nature of sorbent and sorbate (Dada et al. 2012). The slope of the plot of ln(Cads) versus ε2 gives β and the intercept yields the sorption capacity Xm. The sorption space in the vicinity of a solid surface is characterized by a series of equipotential surfaces having the same sorption potential. The sorption energy E could be estimated through the following equation:
$$E = \frac{1}{\sqrt{2\beta}} \qquad (11)$$
The magnitude of apparent energy E is useful to estimate the type of adsorption. So, E value below 8 kJ mol−1 indicates physical adsorption. Between 8 and 16 kJ mol−1, the adsorption process can be described by ion exchange, and over 16 kJ mol−1, it is governed by stronger chemical adsorption rather than by ion exchange (Lin & Juang 2002).
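For completeness, a minimal sketch of the three linearized isotherm fits of Equations (7)-(11) is shown below. The Ce/qe pairs are hypothetical stand-ins for the measured equilibrium data, and note that the strict DKR form takes Ce in mol L–1 rather than mg L–1:

```python
import numpy as np

R, T = 8.314, 298.0                                  # J/(mol K), K
Ce = np.array([0.5, 2.0, 5.0, 12.0, 25.0])           # mg/L (illustrative)
qe = np.array([90.0, 150.0, 185.0, 205.0, 214.0])    # mg/g (illustrative)

# Langmuir, Eq. (7): Ce/qe = 1/(qm*b) + Ce/qm
s, i = np.polyfit(Ce, Ce / qe, 1)
qm, b = 1.0 / s, s / i
print(f"Langmuir:   qm = {qm:.1f} mg/g, b = {b:.2f} L/mg")

# Freundlich, Eq. (8): ln(qe) = ln(kf) + (1/n)*ln(Ce)
s, i = np.polyfit(np.log(Ce), np.log(qe), 1)
print(f"Freundlich: kf = {np.exp(i):.1f}, n = {1.0 / s:.2f}")

# DKR, Eqs. (9)-(11): ln(qe) = ln(Xm) - beta*eps^2
eps = R * T * np.log(1.0 + 1.0 / Ce)
s, i = np.polyfit(eps**2, np.log(qe), 1)
beta = -s                                            # mol^2/J^2
E = 1.0 / np.sqrt(2.0 * beta)                        # J/mol
print(f"DKR:        Xm = {np.exp(i):.1f} mg/g, E = {E / 1000:.1f} kJ/mol")
```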
In this study, the equilibrium data for Cd(II) distribution between the solid and liquid phases at 298 K were determined by varying the initial metal concentration from 10 to 100 mg L−1. The adsorption isotherm of Cd2+ on HAp200 is depicted in Figure 9(a). The latter was fitted according to the Langmuir, Freundlich, and DKR models (Figure 9(b)–9(d)).
Figure 9
Isotherm of Cd2+ sorption onto HAp200 at 298 K (a) and linear fits of experimental data according to Langmuir (b), Freundlich (c), and DKR model (d).
The sorption constants determined from the three selected models, together with their corresponding correlation coefficients, are gathered in Table 3. It can be noticed that the R2 values exceed 0.98, suggesting good agreement between the theoretical models and the experimental results. The maximum sorption capacity qm obtained with the Langmuir model is 217.4 mg g−1, which is very close to the experimental value of 216.6 mg g−1. Moreover, the 1/n factor determined from the Freundlich model is in the range of 0.1–1, showing that the adsorption conditions are favourable. The E value derived from the DKR model is 16.4 kJ mol−1, suggesting a chemisorption-type process in which the adsorption results from chemical bonds formed between the surface of the adsorbent (HAp200) and the adsorbate (Cd2+). Finally, based on the determined value of qe, this study showed that a thermal treatment at 200 °C enhanced the performance of HAp in Cd2+ removal from the aqueous medium by more than 50% compared to the raw material (142.8 mg g−1) (Mobasherpour et al. 2011).
Table 3
Langmuir, Freundlich, and DKR constants for Cd2+ sorption onto HAp200 at 298 K
| Isotherm model | Constants | R2 |
|---|---|---|
| Langmuir | qm = 217.4 mg g–1; b = 1.92 L mg–1 | 0.9892 |
| Freundlich | kf = 115.0 mg g–1; n = 3.97 | 0.9818 |
| DKR | Xm = 554.4 mg g–1; β = −1.86 × 10^–9 mol2 J–2 | 0.9893 |
### Sorption thermodynamics
The thermodynamic data, namely the Gibbs free energy (ΔG°), enthalpy (ΔH°), and entropy (ΔS°), can be estimated from the change of the equilibrium constant as a function of temperature. These parameters are determined using the following equations:
$$\Delta G^\circ = -RT \ln k_d \qquad (12)$$
$$\ln k_d = \frac{\Delta S^\circ}{R} - \frac{\Delta H^\circ}{RT} \qquad (13)$$
$$\Delta G^\circ = \Delta H^\circ - T \Delta S^\circ \qquad (14)$$
In Equation (12), kd is the distribution ratio (kd = qe/Ce) and R is the gas constant (8.314 J mol–1K–1). The enthalpy and entropy were determined from the slope and the intercept of the straight-line plot of ln(kd) versus the reciprocal of absolute temperature (1/T) (Figure 10). These data are summarized in Table 4.
Table 4
Thermodynamic data for Cd(II) adsorption on HAp200
| ΔH° (kJ mol–1) | ΔS° (J mol–1 K–1) | ΔG° at 298 K (kJ mol–1) | 303 K | 308 K | 313 K |
|---|---|---|---|---|---|
| 69.98 | 258.96 | −7.18 | −8.48 | −9.77 | −11.07 |
Figure 10
Plot of ln kd versus (1/T) for Cd2+ adsorption onto HAp200.
ΔH° is positive and higher than 40 kJ mol−1. Consequently, the adsorption process is endothermic and of chemical nature involving strong attraction forces. Besides, the positive value of ΔS° shows the increase in disorder at the solid–liquid interface.
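As a numerical cross-check of Equations (12)-(14) and Table 4, the sketch below redoes the van't Hoff regression. The kd values are not the measured ones; they were back-calculated here from the reported ΔG° figures for illustration, so the output should land close to the tabulated ΔH° and ΔS°:

```python
import numpy as np

R  = 8.314                                    # J/(mol K)
T  = np.array([298.0, 303.0, 308.0, 313.0])   # K
kd = np.array([18.1, 29.0, 45.4, 70.4])       # qe/Ce, back-calculated

# van't Hoff, Eq. (13): ln(kd) = dS/R - dH/(R*T)
slope, intercept = np.polyfit(1.0 / T, np.log(kd), 1)
dH = -slope * R                               # J/mol
dS = intercept * R                            # J/(mol K)
dG = dH - T * dS                              # J/mol, Eq. (14)

print(f"dH = {dH / 1000:.2f} kJ/mol, dS = {dS:.2f} J/(mol K)")
print("dG (kJ/mol):", np.round(dG / 1000, 2))
```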
### Sorption mechanisms
Based on a bibliographic survey, ion exchange, surface complexation, and dissolution-precipitation are the main mechanisms involved in the sorption of cadmium and other heavy metal ions onto hydroxyapatite-based materials (Mobasherpour et al. 2011). In this context, SEM and transmission electron microscopy (TEM) observations were carried out to study the morphological changes before and after the interaction with cadmium, on the one hand, and to highlight the higher reactivity of HAp200 toward Cd2+ ions compared to HAp800, on the other. The sorption mechanisms were also investigated by FTIR analyses, a method considered a direct means of assessing the type of interactions between an adsorbate and its sorbent, as shown for arsenic and bone char (Chen et al. 2008).
#### SEM and TEM observations
According to the SEM and TEM micrographs shown in Figure 11, the hydroxyapatite particles calcined at 200 and 800 °C have a rod-like shape. However, HAp200 looks more heterogeneous, with a smaller particle diameter than HAp800, which agrees with the SBET values measured for HAp200 and HAp800 by the BET method. Moreover, for HAp200, the interaction with cadmium induced a slight increase in particle size, with no tendency toward a rounded shape.
Figure 11
SEM micrographs relative to HAp200 (a), HAp200-Cd (b), HAp800 (c), HAp800-Cd (d), and TEM micrographs relative to HAp200-Cd (1) and HAp800-Cd (2).
The EDS analysis illustrated in Figure 12 shows a higher amount of cadmium on the surface of the HAp heated at 200 °C than on that heated at 800 °C; consequently, the impact of cadmium sorption on the morphologies of the two materials (HAp200 and HAp800) was not the same. In the case of HAp800-Cd, the rounded shape may be explained by the precipitation of leached atoms after supersaturation of the liquid; these tend to precipitate on the concave regions and necks of the particles according to Ostwald ripening (Finsy 2004). Accordingly, the origin of these modifications may be related to dissolution/precipitation phenomena. On the other hand, HAp200-Cd shows only an increase of the particle size and a higher amount of cadmium on its surface compared to HAp800-Cd. So, the slight morphological changes may be correlated with the substitution of calcium by cadmium at the surface and/or in the structure, as well as with slow dissolution–precipitation interface reactions.
Figure 12
EDS graphs relative to HAp200 (a), HAp200-Cd (b), HAp800 (c), and HAp800-Cd (d).
#### FTIR analyses
The results of the FTIR analyses performed on HAp200 before and after Cd(II) adsorption are shown in Figure 13. It can be seen that the absorption bands appearing at 982, 1,030, and 1,071 cm–1 and the absorption band of the OH group at 629 cm–1 were significantly shifted to lower frequencies. This phenomenon is related to the distortion of the functional groups induced by the decrease of the lattice volume, which is the consequence of the substitution of Ca2+ by Cd2+ (of smaller ionic radius), leading to the formation of a solid solution according to the following scheme:

Ca10(PO4)6(OH)2 + xCd2+ → Ca(10−x)Cdx(PO4)6(OH)2 + xCa2+
Figure 13
IR spectrum of the solid residue with a moderate amount of Cd2+ uptake (qe = 123.6 mg g−1) compared to that of the starting powder.
Indeed, Cd2+ ions are initially adsorbed onto the HAp200 surface by rapid complexation at the ≡POH and ≡CaOH2+ sites, and are then substituted into the crystallographic sites of the Ca2+ atoms. On the other hand, for pH values between 4 and 6, similar studies reported a partial dissolution of Ca10(PO4)6(OH)2 followed by the precipitation of Cd-doped apatite with the chemical formula Ca(10−x)Cdx(PO4)6(OH)2 (Mobasherpour et al. 2011). Finally, all these mechanisms could occur simultaneously and/or successively, as illustrated in Figure 14.
Figure 14
Mechanisms of Cd removal by HAp200: (a) ion exchange, (b) substitution of calcium in the structure, (c) dissolution/precipitation, and (d) Cd-doped hydroxyapatite.
## CONCLUSIONS
This study demonstrates that thermal treatment plays an important role in the capacity of HAp to remove Cd2+ ions from contaminated waters. Calcination allowed the release of the synthesis residues inserted into the pores of the apatite and consequently enhanced the ability of the material for Cd2+ uptake. The energy consumed in this step can be minimized by limiting the heating temperature to 200 °C, while maximizing the Cd2+ removal efficiency.
The sorption ability was found to be most affected by the sorbent dosage, the contact time, the initial metal concentration, and the solution temperature. Contrarily, the initial pH of the Cd2+ solutions plays only a minor role, especially between pH 3 and pH 7. Further acidification enhances the adsorption of H3O+ protons, rendering the particle surface less negative and partially dissolving the HAp particles; these concomitant effects decrease the driving force for Cd2+ adsorption and explain the lower uptake capacity observed at pH = 2.
The equilibrium data could reasonably be fitted using the Langmuir, Freundlich, and DKR isotherm models. Furthermore, the best kinetic description was provided by the pseudo-second-order model, with R2 values higher than 0.999. The thermodynamic calculations revealed the chemical nature of the adsorption process, involving strong attraction forces. The FTIR analyses before and after the depollution experiments strongly support ion exchange as the predominant mechanism of the sorption process.
## FUNDING
This study was supported by the Ministry of Higher Education and Scientific Research of Tunisia, in collaboration with the CICECO-Aveiro Institute of Materials, FCT Ref. UID/CTM/50011/2019, financed by national funds through the FCT/MCTES. Avito H. S. Rebelo acknowledges the Portuguese Foundation for Science and Technology (FCT) for the PhD fellowship grant (SFRH/BD/36101/2007).
## DATA AVAILABILITY STATEMENT
All relevant data are included in the paper or its Supplementary Information.
## REFERENCES
Adebowale K. O., Unuabonah I. E. & Olu-Owolabi B. I. 2006 The effect of some operating variables on the adsorption of lead and cadmium ions on kaolinite clay. J. Hazard. Mater. 134 (1–3), 130–139. https://doi.org/10.1016/j.jhazmat.2005.10.056.
Chen Y., Chai L. & Shu Y. 2008 Study of arsenic (V) adsorption on bone char from aqueous solution. J. Hazard. Mater. 160, 168–172. https://doi.org/10.1016/j.jhazmat.2008.02.120.
Chen C., Xun P., Nishijo M., Carter S. & He K. 2016 Cadmium exposure and risk of prostate cancer: a meta-analysis of cohort and case-control studies among the general and occupational populations. Sci. Rep. 6. https://doi.org/10.1038/srep25814.
Chien S. H. & Clayton W. R. 1980 Application of Elovich equation to the kinetics of phosphate release and sorption in soils. Soil Sci. Soc. Am. 44, 265–268. https://doi.org/10.2136/sssaj1980.03615995004400020013x.
Dada A. O., Olalekan A. P., Olatunya A. M. & Dada O. 2012 Langmuir, Freundlich, Temkin, and Dubinin–Radushkevich isotherms studies of equilibrium sorption of Zn2+ unto phosphoric acid modified rice husk. IOSR J. Appl. Chem. 3 (1), 38–45. https://doi.org/10.9790/5736-0313845.
Da Rocha N. C. C., De Campos R. C., Rossi A. M., Moreira E. L., Barbosa A. D. F. & Moure G. T. 2002 Cadmium uptake by hydroxyapatite synthesized in different conditions and submitted to thermal treatment. Environ. Sci. Technol. 36, 1630–1635. https://doi.org/10.1021/es0155940.
Dogan M. & Alkan M. 2003 Adsorption kinetics of methyl violet onto perlite. Chemosphere 50, 517–528. https://doi.org/10.1016/S0045-6535(02)00629-X.
Elkady M. F., Mahmoud M. M. & Abd-El-Rahman H. M. 2011 Kinetic approach for cadmium sorption using microwave synthesized nano-hydroxyapatite. J. Non-Cryst. Solids 357, 1118–1129. https://doi.org/10.1016/j.jnoncrysol.2010.10.021.
Elliot J. C. 1994 Studies in Inorganic Chemistry: Structure and Chemistry of the Apatites and Other Calcium Orthophosphates. Elsevier, Amsterdam.
Finsy R. 2004 On the critical radius in Ostwald ripening. Langmuir 20, 2975–2976. https://doi.org/10.1021/la035966d.
Harja M., Buema G., Bulgariu L., Bulgariu D., Sutiman D. M. & Ciobanu G. 2015 Removal of cadmium(II) from aqueous solution by adsorption onto modified algae and ash. Korean J. Chem. Eng. 32, 1804–1811. https://doi.org/10.1007/s11814-015-0016-z.
He M., Shi H., Zhao X., Yu Y. & Qu B. 2013 Immobilization of Pb and Cd in contaminated soil using nano-crystallite hydroxyapatite. Procedia Environ. Sci. 18, 657–665. https://doi.org/10.1016/j.proenv.2013.04.090.
Ho Y. 2006 Review of second-order models for adsorption systems. J. Hazard. Mater. 136 (3), 681–689. https://doi.org/10.1016/j.jhazmat.2005.12.043.
Idrees N., Tabassum B., Abd-Allah E. F., Hashem A., Sarah R. & Hashim M. 2018 Groundwater contamination with cadmium concentrations in some West U.P. regions, India. Saudi J. Biol. Sci. 25, 1365–1368. https://doi.org/10.1016/j.sjbs.2018.07.005.
Jebri S., Khattech I. & Jemal M. 2017 Standard enthalpy, entropy and Gibbs free energy of formation of «A» type carbonate phosphocalcium hydroxyapatites. J. Chem. Thermodyn. 106, 84–94. https://doi.org/10.1016/j.jct.2016.10.035.
Jelinek L. & Kováts E. 1994 True surface areas from nitrogen adsorption experiments. Langmuir 10, 4225–4231. https://doi.org/10.1021/la00023a051.
Johri N., Jacquillet G. & Unwin R. 2010 Heavy metal poisoning: the effects of cadmium on the kidney. Biometals 23, 783–792. https://doi.org/10.1007/s10534-010-9328-y.
Kheriji J., Tabassi D. & Hamrouni B. 2015 Removal of Cd(II) ions from aqueous solution and industrial effluent using reverse osmosis and nanofiltration membranes. Water Sci. Technol. 72, 1206–1216. https://doi.org/10.2166/wst.2015.326.
Lagergren S. 1898 Zur Theorie der sogenannten Adsorption gelöster Stoffe. Kungliga Svenska Vetenskapsakademiens Handlingar 24 (4), 1–39.
Landi E., Tampieri A., Celotti G. & Sprio S. 2000 Densification behavior and mechanisms of synthetic hydroxyapatite. J. Eur. Ceram. Soc. 20, 2377–2387. https://doi.org/10.1016/S0955-2219(00)00154-0.
Lin S. H. & Juang R. S. 2002 Heavy metal removal from water by sorption using surfactant-modified montmorillonite. J. Hazard. Mater. 92, 315–326. https://doi.org/10.1016/s0304-3894(02)00026-2.
Matusik J., Bajda T. & Manecki M. 2008 J. Hazard. Mater. 152, 1332–1339. https://doi.org/10.1016/j.jhazmat.2007.08.010.
McKay G., Otterburn M. S. & Aga J. A. 1985 Fuller's earth and fired clay as adsorbents for dyestuffs. Water Air Soil Pollut. 24, 307–322. https://doi.org/10.1007/BF00161790.
Mobasherpour I., Salahi E. & Pazouki M. 2011 Removal of divalent cadmium cations by means of synthetic nano crystallite hydroxyapatite. Desalination 266, 142–148. https://doi.org/10.1016/j.desal.2010.08.016.
Naeem M. A., Imran M., Abbas G., Tahir M., Murtaza B. et al. 2019 Batch and column scale removal of cadmium from water using raw and acid activated wheat straw biochar. Water 11 (7), 1438. https://doi.org/10.3390/w11071438.
Rajeshkumar S. & Li X. 2018 Bioaccumulation of heavy metals in fish species from the Meiliang Bay, Taihu Lake, China. Toxicol. Rep. 5, 288–295. https://doi.org/10.1016/j.toxrep.2018.01.007.
Stötzel C., Müller F. A., Reinert F., Niederdraenk F., Barralet J. E. & Gbureck U. 2009 Ion adsorption behaviour of hydroxyapatite with different crystallinities. Colloids Surf. B Biointerfaces 74, 91–95. https://doi.org/10.1016/j.colsurfb.2009.06.031.
Wang D., Guan X., Huang F., Li S., Shen Y., Chen J. & Long H. 2016 Removal of heavy metal ions by biogenic hydroxyapatite: morphology influence and mechanism study. Russ. J. Phys. Chem. A 90, 1557–1562. https://doi.org/10.1134/S0036024416080069.
Weber T. W. & Chakravoti R. K. 1974 Pore and solid diffusion models for fixed-bed adsorbers. AIChE J. 20, 228–238. https://doi.org/10.1002/aic.690200204.
Weber W. J. & Morris J. C. 1963 Kinetics of adsorption on carbon from solution. J. Sanit. Eng. Div. 89, 31–61.
Wong C. W., Barford J. P., Chen G. & McKay G. 2014 Kinetics and equilibrium studies for the removal of cadmium ions by ion exchange resin. J. Environ. Chem. Eng. 2, 698–707. https://doi.org/10.1016/j.jece.2013.11.010.
Yingjian L., Ping W., Rui H., Xuxia L., Peng W., Jianbin T., Zihui C., Zhongjun D., Jing W., Qi J., Shixuan W., Haituan L. & Zhixue L. 2017 Cadmium exposure and osteoporosis: a population-based study and benchmark dose estimation in southern China. J. Bone Miner. Res. 32, 1990–2000. https://doi.org/10.1002/jbmr.3151.
http://mathhelpforum.com/math-topics/204065-physics-help.html | # Math Help - Physics Help
1. ## Physics Help
A 30-06 bullet has a mass of 0.010 kg. If the average force on the bullet is 7900 N, what is the bullet's average acceleration?
Is there a formula for this?
2. ## Re: Physics Help
Originally Posted by Louisana1
A 30-06 bullet has a mass of 0.010 kg. If the average force on the bullet is 7900 N, what is the bullet's average acceleration?
Is there a formula for this?
You should be familiar with Newton's laws, yes?
Force is equal to mass times acceleration!
$F=ma$
You know two of the three things in this equation. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9927124977111816, "perplexity": 1358.6318180895853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657135597.56/warc/CC-MAIN-20140914011215-00176-ip-10-234-18-248.ec2.internal.warc.gz"} |
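Plugging in the given values, the arithmetic works out to

$$a = \frac{F}{m} = \frac{7900\ \text{N}}{0.010\ \text{kg}} = 7.9\times 10^{5}\ \text{m/s}^2$$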
http://magick.codeplex.com/discussions/547619 | vimallinksture Jun 5, 2014 at 6:38 AM Edited Jun 5, 2014 at 6:47 AM I am getting below error on server while converting PDF to Image. Could not load file or assembly 'Magick.NET-Q16-x86.dll' or one of its dependencies. The specified module could not be found. My server OS is 64 bit. I have already installed .NET 4.0: Visual C++ Redistributable on server but I have not installed vs 2012. Is necessary VS 2012 to install on server for using ImageMagick? Can you please help me to get rid out of this problem. Thanks in advanced. dlemstra Coordinator Jun 5, 2014 at 7:14 AM You don't need to install VS 2012 on your server. You probably only installed the x64 version of the Visual C++ Redistributable 2012. You should install the x86 version of the redistributable if you want to use Magick.NET-Q16-x86. If you want to convert a PDF file to an image make sure you also install Ghostscript on your server. vimallinksture Jun 5, 2014 at 12:27 PM Thanks for your reply. I have already install all things as you mentioned. So still I am getting below error: Could not load file or assembly 'Magick.NET-Q16-x86.dll' or one of its dependencies. The specified module could not be found. Thanks. dlemstra Coordinator Jun 5, 2014 at 12:30 PM Did you reboot your server? And are you using the .NET 4.0 version of Magick.NET? vimallinksture Jun 5, 2014 at 12:35 PM Yes, we are using .NET 4.0 version of Magick.NET. No, still we have not reboot server. Is it needed? dlemstra Coordinator Jun 5, 2014 at 12:41 PM I don't know if it is necessary to reboot after your install 'Visual C++ Redistributable 2012 x86. The error message 'The specified module could not be found.' is an error that will be raised when the C++ Redistributable cannot be found. I suggest you should try to see if a reboot will resolve your issue. vimallinksture Jun 10, 2014 at 5:24 AM Thanks dlemstra. The above error is now fixed but imagemagick cannot save converted image to folder on server. It is working fine in my local. Please help me for get rid of this. Thanks in advanced. dlemstra Coordinator Jun 10, 2014 at 8:06 AM What do you mean by 'cannot save'? What is the error/exception you are receiving? vimallinksture Jun 10, 2014 at 8:28 AM I meant that in our local environment, image is saved to folder but on server, image is not saved to the folder. Any guesses, what is the problem? Thank You, Vimal dlemstra Coordinator Jun 10, 2014 at 10:02 AM I really need more information, what is the error you are receiving? How are you writing the image? Your question is like me asking you why my car won't start. And if you can fix it for me. vimallinksture Jun 10, 2014 at 10:23 AM Surprise for me is that, there is no error occurs. Below is my code: if (!Page.IsPostBack) { MagickReadSettings settings = new MagickReadSettings(); settings.Density = new MagickGeometry(300, 300); settings.FrameIndex = 0; settings.FrameCount = 1; using (MagickImageCollection images = new MagickImageCollection()) { images.Read(Server.MapPath(@"2014_HPP_Plan_2-11-14_REV.pdf"), settings); int page = 1; foreach (MagickImage image in images) { image.Write(Server.MapPath("~/developeruploads/Snakeware.Page.jpg")); } } } Please review code and let me know if there is any problem. Thank You, Vimal dlemstra Coordinator Jun 10, 2014 at 10:58 AM Maybe your live environment is writing the output somewhere else then you are expecting. Can you search your server for the file 'Snakeware.Page.jpg'? And can you try it without 'Server.MapPath'? 
Magick.NET 'understands' the '~' and uses 'AppDomain.CurrentDomain.BaseDirectory'. images.Read(@"~\2014_HPP_Plan_2-11-14_REV.pdf"), settings); // AND image.Write(@"~\developeruploads\Snakeware.Page.jpg")); vimallinksture Jun 10, 2014 at 11:16 AM Thanks for your reply. I have already tried search with the file name 'Snakeware.Page.jpg' in whole drive in server but I could not find it. I think that imagemagick is not converted pdf to images in server. We will try your code. Thank You, Vimal dlemstra Coordinator Jun 10, 2014 at 11:23 AM Did you install Ghostscript on your server? vimallinksture Jun 10, 2014 at 11:29 AM Edited Jun 10, 2014 at 1:20 PM Yes, I have installed Ghostscript on server 32 and 64 bit both but still it is not working. dlemstra Coordinator Jun 10, 2014 at 1:42 PM Can you try to see what happens when you write to a full path, for example: image.Write(@"c:\test\Snakeware.Page.jpg")? vimallinksture Jun 11, 2014 at 8:39 AM Thank you for your help. Now it is working fine. Problem was with Ghostscript. Earlier 32 bit exe could not installed but yesterday once I tried and it installed and seems to work fine. Thank You Very much, Vimal vimallinksture Jun 20, 2014 at 6:36 AM Conversion of PDFs to images is working fine in DEV site but it gives me error on LIVE site. Error is "Failed to load embedded assembly: Access is denied. " We are using Magick.NET-AnyCPU.dll Can you please help me to get rid of this problem? Thanks, Vimal dlemstra Coordinator Jun 20, 2014 at 6:41 AM The AnyCPU library will try to write it's embedded assembly to a temporary directory but it seems it is not allowed to write anything there. You can set the directory with the following property: MagickAnyCPU.CacheDirectory = @"C:\path\to\your\temp\directory"; vimallinksture Jun 20, 2014 at 6:51 AM Should I have to create a directory inside temporary directory? dlemstra Coordinator Jun 20, 2014 at 7:07 AM You need to specify a directory that is outside your wwwroot but writable by your application. The AnyCPU library will create a directory inside the directory you specify. vimallinksture Jun 20, 2014 at 9:34 AM Which DLL I can use instead of AnyCPU ? Can you please tell us? We are using 64 bit operating systems. Thanks, Vimal dlemstra Coordinator Jun 20, 2014 at 10:07 AM If your application pool is running in 64 bit mode you can use the x64 version of Magick.NET. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5314322710037231, "perplexity": 2333.969639079341}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549426372.41/warc/CC-MAIN-20170726182141-20170726202141-00475.warc.gz"} |
https://math.stackexchange.com/questions/2803972/which-integers-can-be-written-as-x22y2-3z2 | # Which integers can be written as $x^2+2y^2-3z^2\$?
For which integers $n$ has the diophantine equation $$x^2+2y^2-3z^2=n$$ solutions ?
These theorems
https://en.wikipedia.org/wiki/15_and_290_theorems
do not apply because the given quadratic form is not positive (or negative) definite. It seems that the quadratic form is universal (for every integer $n$ a solution exists) , but I have no idea how this can be proven.
We claim that any $n\in\mathbb{Z}$ can be written as $x^2+2y^2-3z^2$.
Note that any such integer $n$ is $0$ or it is equal to a $4^a\cdot 2^b\cdot d$ where $a$ is a non negative integer, $b\in\{0,1\}$ and $d$ is a signed odd number. Then we consider the following cases.
0) If $n=0$ then let $x=0$, $y=0$ and $z=0$.
1) If $n=d=2k+1$ then let $x=k+1$, $y=k$ and $z=k$: $$x^2+2y^2-3z^2=(k+1)^2-k^2=2k+1=n.$$
2) If $n=2d=2(2k+1)$ then let $x=k$, $y=k+1$ and $z=k$: $$x^2+2y^2-3z^2=2(k+1)^2-2k^2=4k+2=n.$$
3) If $n=4^ak$ and $k$ can be written as $x_k^2+2y_k^2-3z_k^2$ then let $x=2^ax_k$, $y=2^ay_k$ and $z=2^az_k$: $$x^2+2y^2-3z^2=4^a(x_k^2+2y_k^2-3z_k^2)=4^ak=n.$$
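A quick brute-force check (a numerical sanity test, not a substitute for the proof) confirms the construction on a window of integers; the search bound B = 60 is arbitrary but comfortably covers the triples used in cases 0)-3):

```python
# Every n in [-100, 100] should appear as x^2 + 2y^2 - 3z^2 with
# 0 <= x, y, z <= B; signs are irrelevant since the variables are squared.
B = 60
reachable = set()
for x in range(B + 1):
    for y in range(B + 1):
        for z in range(B + 1):
            reachable.add(x * x + 2 * y * y - 3 * z * z)

assert all(n in reachable for n in range(-100, 101))
print("all n in [-100, 100] are represented")
```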
• How did you get the idea to consider $n=2^{2a+b}\cdot d$? – MJD Jun 1 '18 at 15:22
• @MJD I first noted 3) then 1). Hence I tried the remaining case 2). – Robert Z Jun 1 '18 at 15:27
The jpegs did not come out well.... in Modern Elementary Theory of Numbers by Leonard Eugene Dickson (1939), at the top of page 161, homework exercise 2 is this: given that $C$ is a positive integer, $$x^2 + 2 y^2 - C z^2$$ is universal if and only if $C$ is odd and every prime factor of $C$ is $\; \equiv 1 \; \mbox{or} \; 3 \pmod 8$
Dickson and his students Ross and Oppenheim found all universal indefinite ternary quadratic forms, collected into four types (up to $SL_3 \mathbb Z$ equivalence of forms). Take $M$ any integer and $N$ any odd integer, not necessarily positive in either case. In all four cases it is obvious that they are universal; just experiment a bit with each. In the first, take $(x,1,0)$ for example. $$xy - M z^2$$ $$2xy - N z^2$$ $$2xy + y^2- N z^2$$ $$2xy + y^2- 2 N z^2$$ In the fourth form, (I) for odd numbers take $(x,1,0);$ (II) for numbers $2 \pmod 4$ take $(x,2,1);$ (III) for numbers $0 \pmod 4$ take $(x,2,0).$
Notice there is no $x^2$ term in any of these, so that the form evaluated at $(1,0,0)$ comes out to $0. \;$ A fundamental part of this is that a universal (ternary) form must non-trivially represent $0.$ Not true for quaternaries, such as $w^2 - 2 x^2 - 3 y^2 + 6 z^2$
Meanwhile, your $x^2 + 2 y^2 - 3 z^2$ is equivalent to the fourth type listed above, namely $2xy+y^2 + 6 z^2.$ For quadratic forms, we take (half) the Hessian matrix of second partials; forms with such Gram matrices $G,H$ are equivalent when there is an integer matrix $P$ with $\det P = 1$ and $P^T GP = H.$ Notice that this means $Q = P^{-1}$ is also of integers with $\det Q = 1,$ while $Q^T H Q = G.$ I have a program that finds me such matrices $P.$ Equivalent forms integrally represent exactly the same values.
The first matrix identity below reads $$(x+y)^2 + 2 (x+3z)^2 - 3(x+2z)^2 = 2xy+y^2 + 6 z^2$$ $$\left( \begin{array}{ccc} 1&1&1 \\ 1&0&0 \\ 0&3&2 \\ \end{array} \right) \left( \begin{array}{ccc} 1&0&0 \\ 0&2&0 \\ 0&0&-3 \\ \end{array} \right) \left( \begin{array}{ccc} 1&1&0 \\ 1&0&3 \\ 1&0&2 \\ \end{array} \right) = \left( \begin{array}{ccc} 0&1&0 \\ 1&1&0 \\ 0&0&6 \\ \end{array} \right)$$
$$\left( \begin{array}{ccc} 0&1&0 \\ -2&2&1 \\ 3&-3&-1 \\ \end{array} \right) \left( \begin{array}{ccc} 0&1&0 \\ 1&1&0 \\ 0&0&6 \\ \end{array} \right) \left( \begin{array}{ccc} 0&-2&3 \\ 1&2&-3 \\ 0&1&-1 \\ \end{array} \right) = \left( \begin{array}{ccc} 1&0&0 \\ 0&2&0 \\ 0&0&-3 \\ \end{array} \right)$$ For the second matrix identity, take $u = -2y+3z, \; v = x + 2 y - 3 z, \; w = y - z, \;$ giving $$2uv+v^2 + 6 w^2 = x^2 + 2 y^2 - 3 z^2$$
To represent a number, we write:
$$aX^2+bY^2=cZ^2+q$$
I think that the only way to describe the desired representations is to use the solutions of an equation of the form
$$ax^2+by^2=cz^2$$
Knowing the solutions of this equation, substitute them into the linear Diophantine equation.
$$axs+byp-czk=1$$
where $(s, p, k)$ are variables whose values are solutions of this equation. Then the solution of the first equation can be written as:
$$X=\frac{x}{2}(ck^2-as^2-bp^2+q)+s$$
$$Y=\frac{y}{2}(ck^2-as^2-bp^2+q)+p$$
$$Z=\frac{z}{2}(ck^2-as^2-bp^2+q)+k$$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9812566637992859, "perplexity": 120.45316647803405}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519923.26/warc/CC-MAIN-20210120054203-20210120084203-00028.warc.gz"} |
https://cs.stackexchange.com/questions/69631/tips-for-improving-implementation-of-wagner-fischer-algorithm | # Tips for improving implementation of Wagner-Fischer-algorithm
So I'm working on an implementation of the Wagner-Fischer algorithm for an online programming challenge site, but I can't seem to push the running time down to where it needs to be. The assignment is, for a number of different 'misspelled' words w1, w2, ... , wn and a dictionary D, to compute the edit distances between all words wi and all words di in the dictionary and, for every wi, output the word(s) in the dictionary with the smallest edit distance from wi.
At the moment, this is how I've implemented my algorithm:
main():
    Let D be the dictionary
    Let W be the set of words that need 'correcting'
    For every w in W:
        minDistance = 1000   // arbitrary large number; reset for every w
        For every d in D:
            dist = distance(w, d)
            if dist < minDistance:
                minDistance = dist
                Make a new linked list minList and add d to it
            else if dist == minDistance:
                Add d to minList
        Output minDistance as well as minList
distance(w, d):
    Make a matrix M with dimensions (m, n)
    If the last d and this d have a common prefix of length p => reuse the first p rows of M from the last computation (no need to compute them again)
    Fill the first row and the first column with their respective 'index'   // see the table on the Wikipedia page
    For col = 1 to m:
        For row = p to n:
            M(col, row) = wagner-fischer(w, d, col, row)
    return M(m, n)
wagner-fischer(w, d, col, row):
    res = M(col-1, row-1) + (0 if w has the same letter at index col-1 as d at row-1, else 1)
    addLetter = M(col-1, row) + 1
    deleteLetter = M(col, row-1) + 1
    if addLetter < res:
        res = addLetter
    if deleteLetter < res:
        res = deleteLetter
    return res
Does anyone have any tips on how to optimize my implementation further? I'm really struggling at this point and don't know how to improve it. I've done it in Java, if that's of any importance.
EDIT: The online challenge says the following:
"The input consists of two parts, the first being the dictionary (max 500 000 words) and the second being the words to be corrected (max 100 words). Each word can be max 40 characters long."
• Have you considered other algorithms, apart from running Wagner-Fischer on all pairs w,d? There's lots written on edit distance and spelling correction; search this site and elsewhere to find many resources and algorithms and data structures. Also, if this is a practical problem, I suggest you edit the question to characterize your problem more clearly: what is the typical size of D, typical value of n, and typical values for the edit distance (including how many words are at edit distance 0 from some word in D; at edit distance 1 from some word in D)? – D.W. Feb 1 '17 at 23:32
• Finally, it might be nice to credit the source of the problem by linking to the problem on the programming contest site. – D.W. Feb 1 '17 at 23:33
• I'm afraid the task specifically asks for Wagner-Fischer.. Right, I will add that information to my post! And sorry for not posting the site, but it's exclusive to my university. I can post the description though! Thanks for your answer. – Nyfiken Gul Feb 2 '17 at 9:56
https://chem.libretexts.org/Textbook_Maps/General_Chemistry/Book%3A_Chem1_(Lower)/08%3A_Solution_Chemistry/8.9%3A_Distillation | # 8.9: Distillation
• Skills to Develop
Make sure you thoroughly understand the following essential ideas:
• Sketch out a typical boiling point diagram for a binary liquid solution, and use this to show how a simple one-stage distillation works.
• Explain the role of the lever rule in fractional distillation
• Describe the purpose and function of a fractionating column
• Sketch out boiling point diagrams for high- and low-boiling azeotropes
• Describe the role of distillation in crude oil refining, and explain, in a very general way, how further processing is used to increase the yield of gasoline motor fuel.
Distillation is a process whereby a mixture of liquids having different vapor pressures is separated into its components. At first one might think that this would be quite simple: if you have a solution consisting of liquid A that boils at 50°C and liquid B with a boiling point of 90°C, all that would be necessary would be to heat the mixture to some temperature between these two values; this would boil off all the A (whose vapor could then be condensed back into pure liquid A), leaving pure liquid B in the pot. But that overlooks the fact that these liquids will have substantial vapor pressures at all temperatures, not only at their boiling points.
To fully understand distillation, we will consider an ideal binary liquid mixture of $$A$$ and $$B$$. If the mole fraction of $$A$$ in the mixture is $$\chi_A$$, then by the definition of mole fraction, that of $$B$$ is
$$\chi_B = 1 - \chi_A$$
Since distillation depends on the different vapor pressures of the components to be separated, let's first consider the vapor pressure vs. composition plots for a hypothetical mixture at some arbitrary temperature at which both liquid and gas phases can exist, depending on the total pressure.
Figure $$\PageIndex{3}$$
In this diagram, all states of the system (that is, combinations of pressure and composition) in which the solution exists solely as a liquid are shaded in green. Since liquids are more stable at higher pressures, these states occupy the upper part of the diagram. At any given total vapor pressure, the composition of the liquid in equilibrium with the vapor (designated by xA) corresponds to the intercept with the diagonal equilibrium line. The diagonal line is just an expression of the linearity between vapor pressure and composition according to Raoult's law.
Figure $$\PageIndex{4}$$
The blue shading in the plot on the right shows all the states in which the vapor is the only stable phase. The upper boundary of this region is defined by the equilibrium line, which in this case is curved. As before, the intersection of the pressure line with the equilibrium curve defines the mole fractions of A and B present in the vapor. (Note that mole fractions of gases, which we are dealing with here, are conventionally represented by y, hence yA and yB.)
The curvature of the equilibrium line arises from the need to combine Raoult's law with Dalton's law of partial pressures, which applies to gaseous mixtures.
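The combination is short enough to show here. Raoult's law gives each partial pressure and Dalton's law adds them:

$$P_{total} = \chi_A P^{\circ}_A + (1-\chi_A) P^{\circ}_B$$

so the mole fraction of A in the vapor is

$$y_A = \frac{P_A}{P_{total}} = \frac{\chi_A P^{\circ}_A}{\chi_A P^{\circ}_A + (1-\chi_A) P^{\circ}_B}$$

Because yA depends on χA through this ratio rather than linearly, the vapor-composition boundary is curved.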
Figure $$\PageIndex{5}$$
The two plots immediately above refer to the same solution at the same total vapor pressure. We can therefore combine them into a single plot, which we show here.
The two liquid-vapor equilibrium lines (one curved, the other straight) now enclose an area in which liquid and vapor can coexist; outside of this region, the mixture will consist entirely of liquid or of vapor. At this particular pressure, the intercept with the upper boundary of the two-phase region gives the mole fractions of A and B in the liquid phase, while the intercept with the lower boundary gives the mole fractions of the two components in the vapor.
Take a moment to study this plot, and to confirm that
• because both intercepts occur on equilibrium lines, they describe the compositions of the liquid and vapor that can simultaneously exist;
• the compositions of the vapor and liquid are not the same;
• in the vapor, the mole fraction of B (the more volatile component of the solution) is greater than that in the liquid;
• in the liquid, the mole fraction of A (the less volatile component) is smaller than that of the vapor.
The vapor in equilibrium with a solution of two or more liquids is always richer in the more volatile component.
### Boiling Point Diagrams
The rule shown above suggests that if we heat a mixture sufficiently to bring its total vapor pressure into the two-phase region, we will have a means of separating the mixture into two portions which will be enriched in the more volatile and less volatile components respectively. This is the principle on which distillation is based.
Figure $$\PageIndex{6}$$
But what temperature is required to achieve this? Again, we will spare you the mathematical details, but it is possible to construct a plot similar to the one above except that the vertical axis represents temperature rather than pressure. This kind of plot is called a boiling point diagram. Some important things to understand about this diagram:
• The shape of the two-phase region is bi-convex, as opposed to the half-convex shape of the pressure-composition plot.
• The slope of the two-phase region is opposite to what we saw in the previous plot, and the areas corresponding to the single-phase regions are reversed. This simply reflects the fact that liquids having a higher vapor pressure boil at lower temperatures, and vice versa.
• The horizontal line that defines the temperature is called the tie line. Its intercepts with the two equilibrium curves specify the composition of the liquid and vapor in equilibrium with the mixture at the given temperature.
• The vapor composition line is also known as the dew point line — the temperature at which condensation begins on cooling.
• The liquid composition line is also called the bubble point line — the temperature at which boiling begins on heating.
### Distillation and Temperature
The tie line shown above is for one particular temperature. But when we heat a liquid to its boiling point, the composition will change as the more volatile component (B in these examples) is selectively removed as vapor. The remaining liquid will be enriched in the less volatile component, and its boiling point will consequently rise. To understand this process more thoroughly, let us consider the situation at several points during the distillation of an equimolar solution of A and B.
We begin with the liquid at T1, below its boiling point. When the temperature rises to T2, boiling begins and the first vapor (and thus the first drop of condensate) will have the composition y2. As the more volatile component B is boiled off, the liquid and vapor/condensate compositions shift to the left (orange arrows). At T4, the last trace of liquid disappears. The system is now entirely vapor, of composition y4.
Notice that the vertical green system composition line remains in the same location in the three plots because the "system" is defined as consisting of both the liquid in the "pot" and that in the receiving container which was condensed from the vapor. The principal ideas you should take away from this are that
• distillation can never completely separate two volatile liquids;
• the composition of the vapor and thus of the condensed distillate changes continually as each drop forms, starting at y2 and ending at y4 in this example;
• if the liquid is completely boiled away, the composition of the distillate will be the same as that of the original solution.
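A small numerical sketch can make the tie line concrete. The code below finds, for a few temperatures, the liquid composition that boils at 1 atm and the vapor in equilibrium with it; the Antoine constants are invented placeholders (chosen so that B is the more volatile component), not data for any real pair of liquids:

```python
def p_vap(T, A, B, C):
    """Antoine equation; vapor pressure in mmHg, T in deg C."""
    return 10 ** (A - B / (C + T))

# Hypothetical constants: "A" boils near 100 C, "B" near 60 C at 760 mmHg.
A_const = (7.0, 1318.0, 220.0)   # less volatile component
B_const = (7.0, 1153.0, 220.0)   # more volatile component
P_ext = 760.0                    # external pressure, mmHg

for T in (70.0, 80.0, 90.0):
    pA, pB = p_vap(T, *A_const), p_vap(T, *B_const)
    xA = (P_ext - pB) / (pA - pB)    # bubble point: xA*pA + (1 - xA)*pB = P_ext
    if 0.0 <= xA <= 1.0:
        yA = xA * pA / P_ext         # Dalton: vapor composition on the tie line
        print(f"T = {T:.0f} C: liquid xA = {xA:.2f}, vapor yA = {yA:.2f}")
```

At every temperature inside the two-phase region the vapor comes out poorer in A (richer in B) than the liquid, which is exactly the behavior the tie lines in the boiling point diagram express.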
### Laboratory Distillation Setup
The apparatus used for a simple laboratory batch distillation is shown here. The purpose of the thermometer is to follow the progress of the distillation; as a rough rule of thumb, the distillation should be stopped when the temperature rises to about half-way between the boiling points of the two pure liquids, which should be at least 20-30 C° apart (if they are closer, then fractional distillation, described below, becomes necessary).
Figure $$\PageIndex{7}$$: Fractional distillation setup. An Erlenmeyer flask is used as a receiving flask. Here the distillation head and fractionating column are combined in one piece. Image used with permission from Wikipedia
Condensers are available in a number of types. The simple Liebig condenser shown above is the cheapest and therefore most commonly used in student laboratories. Several other classic designs increase the surface area separating the vapor/distillate and cooling water, leading to greater heat exchange efficiency and allowing higher throughput.
Figure $$\PageIndex{8}$$
To remove the last traces of water from the 95.6% ethanol-water azeotrope (yielding "absolute ethanol"), the most common method is the use of zeolite-based molecular sieves to adsorb the remaining water. Addition of benzene can break the azeotrope, and this was the most common production method in earlier years. For certain critical uses where the purest ethanol is required, it is synthesized directly from ethylene.
### Special Distillation Methods
Here we briefly discuss two distillation methods that students are likely to encounter in more advanced organic lab courses.
Vacuum distillation: Many organic substances become unstable at high temperatures, tending to decompose, polymerize or react with other substances at temperatures around 200° C or higher. A liquid will boil when its vapor pressure becomes equal to the pressure of the gas above it, which is ordinarily that of the atmosphere. If this pressure is reduced, boiling can take place at a lower temperature. (Even pure water will boil at room temperature under a partial vacuum.) "Vacuum distillation" is of course a misnomer; a more accurate term would be "reduced-pressure distillation". Vacuum distillation is very commonly carried out in the laboratory and will be familiar to students who take more advanced organic lab courses. It is also sometimes employed on a large industrial scale.
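The size of the effect can be estimated with the Clausius-Clapeyron equation (a standard relation, added here purely for illustration):

$$\ln\frac{P_2}{P_1} = -\frac{\Delta H_{vap}}{R}\left(\frac{1}{T_2}-\frac{1}{T_1}\right)$$

For a liquid with $\Delta H_{vap} \approx 40$ kJ/mol that boils at 200° C (473 K) under 760 torr, this predicts boiling near 100° C (373 K) once the pressure is reduced to roughly 50 torr.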
Figure $$\PageIndex{9}$$: vacuum connection at lower right
The vacuum distillation setup is similar to that employed in ordinary distillation, with a few additions:
• The vacuum line is connected to the bent adaptor above the receiving flask.
• In order to avoid uneven boiling and superheating ("bumping"), the boiling flask is usually provided with a fine capillary ("ebulliator") through which an air leak produces bubbles that nucleate the boiling liquid.
• The vacuum is usually supplied by a mechanical pump, or less commonly by a water aspirator or a "house vacuum" line.
• The boiling flask is preferably heated by a water- or steam bath, which provides more efficient heat transfer to the flask and avoids localized overheating. Prior to about 1960, open flames were commonly used in student laboratories, resulting in occasional fires that enlivened the afternoon, but detracted from the student's lab marks.
• A Claisen-type distillation head (below) provides a convenient means of accessing the boiling flask for inserting an air leak capillary or introducing additional liquid through a separatory funnel. This Claisen-Vigreux head includes a fractionation column.
Figure $$\PageIndex{10}$$: A Claisen-type distillation head
Steam Distillation: Strictly speaking, this topic does not belong in this unit, since steam distillation is used to separate immiscible liquids rather than solutions. But because immiscible liquid mixtures are not treated in elementary courses, we present a brief description of steam distillation here for the benefit of students who may encounter it in an organic lab course. A mixture of immiscible liquids will boil when their combined vapor pressures reach atmospheric pressure. This combined vapor pressure is just the sum of the vapor pressures of each liquid individually, and is independent of the quantities of each phase present.
Figure $$\PageIndex{11}$$
Because water boils at 100° C, a mixture of water and an immiscible liquid (an "oil"), even one that has a high boiling point, is guaranteed to boil below 100°, so this method is especially valuable for separating high boiling liquids from mixtures containing non-volatile impurities. Of course the water-oil mixture in the receiving flask must itself be separated, but this is usually easily accomplished by means of a separatory funnel since their densities are ordinarily different.
There is a catch, however: the lower the vapor pressure of the oil, the greater is the quantity of water that co-distills with it. This is the reason for using steam: it provides a source of water able to continually restore that which is lost from the boiling flask. Steam distillation from a water-oil mixture without the introduction of additional steam will also work, and is actually used for some special purposes, but the yield of product will be very limited. Steam distillation is widely used in industries such as petroleum refining (where it is often called "steam stripping") and in the flavors-and-perfumes industry for the isolation of essential oils.
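Quantitatively (again a standard relation, added for illustration): the water-oil mixture boils at the temperature $T$ at which

$$P^{\circ}_{water}(T) + P^{\circ}_{oil}(T) = P_{atm}$$

and, because each liquid contributes vapor in proportion to its own vapor pressure, the molar ratio in the distillate is

$$\frac{n_{water}}{n_{oil}} = \frac{P^{\circ}_{water}(T)}{P^{\circ}_{oil}(T)}$$

which makes explicit why an oil of very low vapor pressure drags over a large quantity of water.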
The term essential oil refers to the aromas ("essences") of these [mostly simple] organic liquids which occur naturally in plants, from which they are isolated by steam distillation or solvent extraction. Steam distillation was invented in the 13th Century by Ibn al-Baiter, one of the greatest of the scientists and physicians of the Islamic Golden Age in Andalusia.
### Industrial-Scale Distillation and Petroleum Fractionation
Distillation is one of the major "unit operations" of the chemical process industries, especially those connected with petroleum and biofuel refining, liquid air separation, and brewing. Laboratory distillations are typically batch operations and employ relatively simple fractionating columns to obtain a pure product. In contrast, industrial distillations are most often designed to produce mixtures having a desired boiling range rather than pure products.
Industrial operations commonly employ bubble-cap fractionating columns (seldom seen in laboratories), although packed columns are sometimes used. Perhaps the most distinctive feature of large-scale industrial distillation is that it usually operates on a continuous basis: the crude mixture is preheated in a furnace and fed into the fractionating column at some intermediate point. A reboiler unit maintains the bottom temperature at a constant value. The higher-boiling components then move down to a level at which they vaporize, while the lighter (lower-boiling) material moves upward to condense at an appropriate point.
Petroleum is a complex mixture of many types of organic molecules, mostly hydrocarbons, that were formed by the effects of heat and pressure on plant materials (mostly algae) that grew in regions that the earth's tectonic movements buried over periods of millions of years. This mixture of liquids and gases migrates up through porous rock until it is trapped by an impermeable layer of sedimentary rock. The molecular composition of crude oil (the liquid fraction of petroleum) is highly variable, although its overall elemental makeup generally reflects that of typical plants.
| element | amount |
| --- | --- |
| carbon | 83-87% |
| hydrogen | 10-14% |
| nitrogen | 0.1-2% |
| oxygen | 0.1-1.5% |
| sulfur | 0.5-6% |
| metals | <1000 ppm |
The principal molecular constituents of crude oil are
• Alkanes: Also known as paraffins, these are saturated linear- or branched-chain molecules having the general formula $C_nH_{2n+2}$, in which n is mostly between 5 and 40.
• Unsaturated aliphatics: Linear- or branched-chain molecules containing one or more double or triple bonds (alkenes or alkynes).
• Cycloalkanes: Also known as naphthenes, these are saturated hydrocarbons $C_nH_{2n}$ containing one or more ring structures.
• Aromatic hydrocarbons: These contain one or more fused benzene rings $C_nH_n$, often with hydrocarbon side-chains.
The word gasoline predates its use as a motor fuel; it was first used as a topical medicine to rid people of head lice, and to remove grease spots and stains from clothing. The first major step of refining is to fractionate the crude oil into various boiling ranges.
| boiling range | fraction name | further processing |
| --- | --- | --- |
| <30° C | butane and propane | gas processing |
| 30 - 210° | straight-run gasoline | blending into motor gasoline |
| 100 - 200° | naphtha | reforming into gasoline components |
| 150 - 250° | kerosene | jet fuel blending |
| 160 - 400° | light gas oil | distillate fuel blending into diesel or fuel oil |
| 315 - 540° | heavy gas oil | catalytic cracking: large molecules are broken up into smaller ones and recycled |
| >450° | asphalts, bottoms | may be vacuum-distilled into more fractions |
#### Further processing and blending
About 16% of crude oil is diverted to the petrochemical industry where it is used to make ethylene and other feedstocks for plastics and similar products. Because the fraction of straight-run gasoline is inadequate to meet demand, some of the lighter fractions undergo reforming and the heavier ones cracking and are recycled into the gasoline stream. These processes necessitate a great amount of recycling and blending, into which must be built a considerable amount of flexibility in order to meet seasonal needs (more volatile gasolines and heating fuel oil in winter, more total gasoline volumes in the summer.) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7813600301742554, "perplexity": 1594.8315614749881}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267159744.52/warc/CC-MAIN-20180923193039-20180923213439-00078.warc.gz"} |
http://mfcabrera.com/blog/2016/9/30/dooplablogorg.html | # Ha-doopla! Tool for displaying Python Hadoop Streaming Errors
## Introducing Doopla
At TrustYou we use Luigi a lot for building our data pipelines, mostly made of batch Hadoop MapReduce jobs. We have a couple of clusters: one using a pretty old version of Hadoop, and a more recent one, where we use HDP 2.7.
Writing Hadoop MR jobs in Python is quite nice, and it is even more straightforward using Luigi's support. The big issue is when you are developing and you have to debug. Although I try to decrease the amount of in-cluster debugging (for example by using domain classes and writing unit tests against them), sometimes you have no choice.
And then the pain comes. Once your mapper or your reducer fails, most of the time Luigi cannot show you the reason for the failure, and you have to go to the web interface and manually click through many pages until you more or less find the error message, hopefully with enough debugging information.
So after debugging my MR jobs this way for a while I got really annoyed and decided to automate that part, and I created Doopla, a small script that fetches the output (generally stderr) of a failed mapper and/or reducer and, using Pygments, highlights the failing Python code. If no job id is specified, it will fetch the output of the last failed job. It was a two-hour hack at the beginning, so it is not code I am proud of, but I made it public and even sent it to PyPI (a chance to learn something new as well), so it can be installed easily by just writing pip install doopla.
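For the curious, the core fetch-and-highlight step fits in a few lines. This is a hedged sketch of mine, not doopla's actual source, and the log URL in the usage comment is made up:

```python
# Fetch a failed task's stderr log over HTTP and highlight the Python traceback.
import requests
from pygments import highlight
from pygments.lexers import PythonTracebackLexer
from pygments.formatters import TerminalFormatter

def show_failed_task(stderr_url):
    log = requests.get(stderr_url).text  # scrape the task log page
    print(highlight(log, PythonTracebackLexer(), TerminalFormatter()))

# Hypothetical example; the real URL depends on your cluster's web UI:
# show_failed_task("http://jobtracker:50030/tasklog?attemptid=attempt_123_m_000000_0&filter=stderr")
```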
It initially only supported our old Hadoop version, but the latest version also works with HDP 2.7 (and I guess it might work for other Hadoop versions). Newer versions of Hadoop offer a REST API for querying job status and information, but I kept scraping the information (hey, it is a hack).
You can also integrate it into Emacs (supporting the highlighting and everything) by wrapping the command in a small Emacs Lisp function.
And then hit M-x doopla to obtain the same without leaving your lovely editor. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18007777631282806, "perplexity": 2352.839427759162}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463609054.55/warc/CC-MAIN-20170527191102-20170527211102-00213.warc.gz"} |
https://www.r-bloggers.com/tag/robust/ | # Posts Tagged ‘ robust ’
## Standard, Robust, and Clustered Standard Errors Computed in R
June 15, 2012
Where do these come from? Since most statistical packages calculate these estimates automatically, it is not unreasonable to think that many researchers using applied econometrics are unfamiliar with the exact details of their computation. For the purposes of illustration, I am going to estimate different standard errors from a basic linear regression model, using the
Read more »
## Linear regression models with robust parameter estimation
May 15, 2010
There are situations in regression modelling where robust methods could be considered to handle unusual observations that do not follow the general trend of the data set. There are various packages in R that provide robust statistical methods which are summarised on the CRAN Robust Task View. As an example of using robust statistical estimation in
Read more » | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5269302129745483, "perplexity": 2070.1918693888015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00292-ip-10-171-10-108.ec2.internal.warc.gz"}
https://mathoverflow.net/questions/282130/random-walk-on-mathbbzd | # Random walk on $\mathbb{Z}^{d}$ [closed]
We know that a simple random walk (at each step, move along one of the $2d$ lattice directions with equal probability $\frac{1}{2d}$) on $\mathbb{Z}^{d}$ is transient when $d\geq 3$. The return probability $P_{0}(S_{2n}=0)$ can be computed explicitly: $P_{0}(S_{2n}=0)=\sum_{l_{1}+\dots+l_{d}=n}\frac{(2n)!}{(l_{1}!\cdots l_{d}!)^{2}}\frac{1}{(2d)^{2n}}$, where $S_{2n}$ denotes the position of the walk after $2n$ steps, so $S_{2n}=0$ means the walk has returned to the origin.
It is easy to imagine that, with $n$ fixed, this probability decreases as $d$ increases: if $d$ is larger, the walk must return to the origin in all $d$ coordinates simultaneously, so the constraint on the walk is stricter. But how can I prove the result rigorously?
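A quick numerical sanity check of the monotonicity claim (my addition, not part of the original post), evaluating the displayed sum exactly with rational arithmetic:

```python
# Evaluate P_0(S_{2n} = 0) = (2d)^{-2n} * sum over l_1+...+l_d = n of (2n)! / (l_1!...l_d!)^2
from fractions import Fraction
from math import factorial

def compositions(n, d):
    """All tuples of d nonnegative integers summing to n."""
    if d == 1:
        yield (n,)
    else:
        for first in range(n + 1):
            for rest in compositions(n - first, d - 1):
                yield (first,) + rest

def return_prob(n, d):
    total = Fraction(0)
    for ls in compositions(n, d):
        denom = 1
        for l in ls:
            denom *= factorial(l) ** 2
        total += Fraction(factorial(2 * n), denom)
    return total / (2 * d) ** (2 * n)

for d in range(1, 7):
    print(d, float(return_prob(3, d)))   # decreases in d for fixed n = 3
```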
• What is $S_{2n}$? Have you tried Stirling's approximation to show your claim? – Loïc Teyssier Sep 27 '17 at 12:05
• @sbbb88522: would you please explain and (at least grammatically...) clarify the "since the stability is weaken" part of your question? – Peter Heinig Sep 27 '17 at 12:07
• Sorry for my vague expression before. I think this problem may seem to be an elementary one but I have tried some analysis and probabilistic methods and have made no progress. I think Stirling approximation is necessary but I can;t figure out how to apply it. Thank you for your comments. – sbbb885522 Sep 27 '17 at 14:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7445709705352783, "perplexity": 315.2899188634224}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740423.36/warc/CC-MAIN-20200815005453-20200815035453-00369.warc.gz"} |
https://kb.osu.edu/dspace/handle/1811/29801 | # Observation of High-lying Triplet States of $Na_{2}$ by Pulsed Perturbation Facilitated OODR Spectroscopy
Please use this identifier to cite or link to this item: http://hdl.handle.net/1811/29801
Files: 1995-TF-09.jpg (40.89 Kb, JPEG image)
Title: Observation of High-lying Triplet States of $Na_{2}$ by Pulsed Perturbation Facilitated OODR Spectroscopy
Creators: Liu, Yaoming; Li, J.; Chen, D.; Li, Li
Issue Date: 1995
Publisher: Ohio State University
Abstract: Triplet gerade states of $Na_{2}$ below $X^{2}{\Sigma_{g}}^{+}$ v = 0 and above the 3s+3d dissociation limit were probed by perturbation facilitated optical-optical double resonance (PFOODR) spectroscopy.$^{1}$ Two pulsed dye lasers were used as PUMP and PROBE lasers. The PUMP laser excited transitions from the ground state, $X^{1}{\Sigma_{g}}^{+}$, to the $A^{1}{\Sigma_{u}}^{+} \sim b^{3}\Pi_{u}$ mixed intermediate levels, and the PROBE laser further excited from the intermediate levels to high-lying triplet gerade states. Ultra-violet OODR fluorescence was detected. Several high-lying triplet gerade states were observed and assigned.
Description: 1. Li Li and R. W. Field, J. Mol. Spectroscopy 117, 245 (1986).
Author Institution: Tsinghua University, Beijing 100084, China
URI: http://hdl.handle.net/1811/29801
Other Identifiers: 1995-TF-09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5517981648445129, "perplexity": 19334.06543301703}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464051196108.86/warc/CC-MAIN-20160524005316-00228-ip-10-185-217-139.ec2.internal.warc.gz"}
https://economics.stackexchange.com/questions/41498/connection-between-saving-and-investments-y-cignx | # Connection between saving and investments-Y=C+I+G+NX [duplicate]
I understand Y-G+C = I+NX = Savings, the equation that we see everywhere in economics textbooks. I see the mathematical logic.
BUT
Given that I = capital investment in GDP accounting, surely savings are what is left over after all spending, including spending on capital goods etc. So in my head Y-G-C-I = NX = S, and in fact why not take NX over to the other side!
What "mistake" am I making here? I can regurgitate the textbooks and get the marks but I just don't understand it for myself from first principles.
The $$I$$ is actually not just capital spending. In some models that might be true; for example, in the Solow growth model we assume that all investment is capital spending. But in general $$I$$ is not just that.
For example, $$I$$ includes purchases of new houses (although an argument could be made that they are capital goods, they would not satisfy narrower definitions requiring that capital be a factor of production) and investment in inventories (see the more detailed explanation in Blanchard et al., Macroeconomics: A European Perspective, pp. 41-43).
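A quick numeric illustration (mine, not the answerer's), using the sign-corrected identity from the PS below: if $$Y=100$$, $$C=60$$, $$G=20$$ and $$I=15$$, then national saving is $$S = Y - C - G = 20$$, and $$S = I + NX$$ forces $$NX = 5$$; the part of saving not absorbed by domestic investment shows up as net exports.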
PS: As pointed in the comments, you should have minus in front of $$C$$ so the equation should be: $$Y-G-C = I+NX$$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4969733953475952, "perplexity": 1307.4977399019986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038077336.28/warc/CC-MAIN-20210414064832-20210414094832-00265.warc.gz"} |
https://sushilsub.wordpress.com/2009/05/ | # Sushil Subramanian – Weblog
Essays on engineering, music, culture and their curious intersections – by Sushil Subramanian
## Compressed Sensing: Transmit Nothing – Receive Everything
(I wrote this article on request from my friend Ved Deshpande, for Xponent, the newsletter of the Mathematics Department, IIT Kharagpur, as a guest article. There may be some technical mistakes in this article; however, this is how I understood compressed sensing.)
What we know:
In the last 5 years, the discipline of Applied Mathematics, particularly in Communications and Signal Processing, has seen a spectacular shift in paradigm. In Spring 2004, David Donoho (Stanford) and his former Ph.D. student Emmanuel Candès (presently at Caltech) first discussed the idea of compressed sensing. From these meetings germinated a novel and phenomenal method of sensing and estimating signals, which Donoho developed. Simultaneously, with the help of postdoctoral student Justin Romberg (now at Georgia Tech) and Fields Medalist Terence Tao (UCLA), Candès also nurtured the same idea, which developed into a unique set of papers on compressed sensing. This article is an attempt to expose the beauty of their findings, which often go unnoticed in mathematics and the applied sciences.
Consider a digital signal $x(n)$ where $1 \leq n \leq N$, i.e., the signal has finite length. Work done in the 1930s by Shannon, Whittaker and Nyquist suggests that such a signal can be reconstructed with zero error if it had at least $2N$ samples to begin with. This is equivalent to saying that if $x$ can be represented as a unique linear combination of unique functions (often called a basis), then a matrix of dimension $2N \times N$ has to sample the signal in order to perfectly recover that basis. Since the basis in consideration and the signal $x$ are related by a unique linear combination, if the basis is obtained perfectly, $x$ can simply be obtained by the relation $x = \psi \theta \mathrm{,}$ where $\theta$ is the basis of length $N$ and $\psi$ is a unique, invertible $N \times N$ matrix called the transform matrix. For example, some of us may be familiar with the Fourier basis, where $\psi$ is full of $e^{2\pi j n k / N}$ terms, where $n$ and $k$ are the matrix indices.
Let us think of the entire problem in a different way. Let's say we want to reproduce the signal most of the time. This means that if we try to reconstruct the signal in infinitely many independent experiments, we will converge to a probability of reconstruction, $P$. If we can make $P$ high enough, we can think of relaxing other constraints in the problem. What would be truly interesting is if we could manage to get away with sampling the signal with a matrix of dimension $K \times N$ where $K$ is allowed to be less than $N$. A bold statement saying that it is indeed possible, in the form of a big blow to the age-old theory of Shannon, was made by Candès and Donoho in their groundbreaking papers in 2004. On the downside, however, this works only for a specific set of signals known as sparse signals. In a nutshell, Candès and Donoho said that if the basis is sparse, i.e. it contains very few non-zero elements, then it is possible to sample the signal with a matrix of size $K \times N$, often called a matrix of transformation to a lesser rank, and still manage to get back $x$.
Suppose the basis $\theta$ is $S$-sparse, i.e. $S$ out of the $N$ elements of the basis are non-zero. Now we use a $K \times N$ matrix for sampling. Thus the resulting signal is of length $K$, and can be represented as $y = \phi x = \phi \psi \theta$. This signal has been proven to be enough to get back $\theta$. Amazingly, Candès and Donoho also showed that as long as $\psi$ and $\phi$ are highly incoherent (a mathematical formulation of how unrelated the matrices are), $\phi$ can be completely random! Further, $K$ can be as small as $S \log N$ and the probability of failure $\left(1-P \right)$ is about $e^{-1/N}$. If $S = 3$ and $N = 100$, this implies $K \approx 14$, which is way less than that predicted by Shannon! At first it may seem that $y$ being a $K$-length vector means that to obtain $\theta$ we actually have more unknowns than equations (even if $\phi$ and $\psi$ are assumed known). However, this problem is cleverly overcome by Candès and Donoho using the concepts of linear programming and convex optimization often used in electrical engineering and operations research.
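To make the recovery step concrete, here is a minimal basis-pursuit sketch (my illustration, not code from the papers), which recovers an $S$-sparse $\theta$ from $K \approx S\log N$ random measurements by $\ell_1$ minimization recast as a linear program. With these sizes recovery usually succeeds; increasing $K$ makes it robust.

```python
# Basis pursuit: minimize ||theta||_1 subject to phi @ theta = y,
# written as an LP over theta = u - v with u, v >= 0 (psi = identity here).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, S = 100, 3
K = int(S * np.log(N)) + 1                      # ~14 measurements, as in the text

theta = np.zeros(N)
theta[rng.choice(N, size=S, replace=False)] = rng.normal(size=S)
phi = rng.normal(size=(K, N))                   # random (hence incoherent) sampling matrix
y = phi @ theta

c = np.ones(2 * N)                              # objective: sum(u) + sum(v) = ||theta||_1
A_eq = np.hstack([phi, -phi])                   # phi @ (u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
theta_hat = res.x[:N] - res.x[N:]
print("recovery error:", np.max(np.abs(theta_hat - theta)))
```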
The Implications:
Compressed sensing (aptly termed, as we compress the way we measure or "sense", as mentioned above) has far-reaching implications in the applied sciences and engineering. One application is in underwater communications. In underwater communication channels, signals are usually distorted by a channel response which can be modeled as a sparse signal, very similar to the basis described above. The transmitted signal enters the convolution as a matrix similar to $\psi$, and the signal is then available to a receiver. Now that we know that the channel response is sparse, we can measure the signal to obtain a $K$-length signal that can perfectly give us back the channel response! This channel response comes in very handy in finally removing the Gaussian noise from the data. Thus, in effect, we are measuring a signal as if nearly nothing had been transmitted, and yet all the required information is available!
The DSP Lab at Rice University has archived, and keeps updated, all the latest literature on compressed sensing. Visit them at: http://www.dsp.ece.rice.edu/cs/.
Written by sushilsub
May 16, 2009 at 2:59 pm
Posted in Blog Entries | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 47, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8817358016967773, "perplexity": 748.4448499217108}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886110774.86/warc/CC-MAIN-20170822123737-20170822143737-00413.warc.gz"}
https://www.ideals.illinois.edu/handle/2142/95332 | ## Files in this item
Files: MOON-DISSERTATION-2016.pdf (1MB, application/pdf; no description provided)
## Description
Title: Modal auxiliary verbs and contexts
Author(s): Moon, Lori Ann
Director of Research: Lasersohn, Peter N
Doctoral Committee Chair(s): Lasersohn, Peter N
Doctoral Committee Member(s): Ionin, Tania; Schreiner, Sylvia L. R.; Meseguer, Jose
Department / Program: Linguistics
Discipline: Linguistics
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): formal semantics; formal grammar; pragmatics; syntax; modal auxiliary verbs; tense; aspect; epistemic modality; relativist semantics; relativism; assessment-sensitivity; categorial grammar; English; context; model theoretic semantics; intensional logic; natural language semantics
Abstract: Modal auxiliary verbs, such as 'could', 'might', 'must', 'would', and others, have different readings depending on the context in which they occur (Kratzer 1981). The sentence 'Jess could fry the fish' can mean that, in a time previous to the utterance of the sentence, Jess had the ability to fry the fish, or it can mean that, at the time of the utterance, Jess frying the fish is a possible event. Modal auxiliary verbs often create intensional environments, leading the events described by the second verb to be understood to be non-actual events. When the readings are described as being determined by a context, it is often a broad notion of non-linguistic and extra-sentential linguistic context that is the focus of the interpretation. For example, descriptive pragmatic constraints are used in Lewis 1973 and Kratzer 1981 to characterize types of accessibility relations and types of orderings of worlds. A large part of the meaning of modal auxiliary verbs, however, centers around how the events described by the second verb are situated relative to the time at which the sentence containing the modal auxiliary is used. Information about the temporal situation of an event is conveyed through the linguistic context in which a modal auxiliary verb occurs, including, but not limited to, lexical properties of the linguistic expressions describing the event in the scope of the modal auxiliary, lexical properties of the modal auxiliary itself, and temporal and aspectual marking on linguistic expressions in the verbal projections. In order to provide a framework for representing the interactions of tense, aspect, and modality, a fragment of English is given in a Multi-Modal Combinatorial Categorial Grammar (Baldridge & Kruijff, Steedman 2012). Modal auxiliaries are given verb-like lexical entries in the grammar using lexical entries that combine features from Villavicenio 2002 and standard attribute value matrices of Head Driven Phrase Structure Grammar (Pollard & Sag 1999, Sag, Wasow, & Bender 2003). Modal auxiliaries have default lexical arguments with which they combine, and they combine with temporal and aspectual meaning that is sometimes morphologically manifested through grammatical tense and aspect. Portions of the combinatory methods are based on Bach 1983, who argued for less constrained combinatorial rules and unification of features in order to represent modal auxiliaries. The notion of event semantics (Davidson 1967) plays an important role in the formulation of the compositional semantics due to the way in which event times are related to aspectual meaning. The grammar uses a Neo-Davidsonian approach (Parsons 1990) to representing the arguments of the verb and builds on the work of Champollion 2015. The temporal component is very important in this work and uses portions of the temporal and event ontology proposed in Muskens 1995, 2003. Two paradigms of modal auxiliaries are proposed: tense-bearing modal auxiliaries and non-tense-bearing modal auxiliaries. Within each paradigm, readings are shown to have differing semantics with respect to the semantic roles with which they combine and the temporal and aspectual readings that they can have. Differing results with respect to their behaviour in describing various states of affairs are addressed, as is their behaviour in expressing past tense, sequence of tense contexts (Abusch 1997), and the distribution of perfect aspect. The formal grammar distinguishes parts of the meaning of sentences with modal auxiliary verbs that can be represented in terms of composition of temporal and aspectual expressions with modal auxiliary verbs, or composition of a modal auxiliary verb with its arguments, on one hand, from parts of the meaning that are constrained by a broader notion of context, on the other hand. The notion of a broader context is not, however, neglected in the treatment. The English language fragment presented in the grammar is interpreted in a relativist semantic model, motivated by the assessment-sensitivity of epistemic modal auxiliaries (MacFarlane 2011, Lasersohn 2005, Lasersohn 2015). Readings that do not require assessment sensitivity are given truth conditions according to those given for monadic truth in Lasersohn 2015. The interaction of readings with their grammatical distribution provides additional theoretical insights into the linguistic contexts that are conducive to assessment sensitivity, actuality inferences, and counterfactual readings. Most notably, it is shown that assessment sensitivity is only present in modal auxiliaries that are in the non-tense-bearing paradigm. Parts of the theoretical treatment presented in this work have been applied in areas of automated classification of modal auxiliary verbs (Moon 2011, Moon 2012, Moon et al. 2016), showing that temporal, aspectual, and argument structure information can be used to determine the most likely reading of a modal auxiliary at the sentence level, increasing the ease of reading identification for automated tools.
Issue Date: 2016-11-22
Type: Thesis
URI: http://hdl.handle.net/2142/95332
Rights Information: Copyright 2016 Lori A Moon
Date Available in IDEALS: 2017-03-01
Date Deposited: 2016-12
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6760326623916626, "perplexity": 3521.837982511443}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583508988.18/warc/CC-MAIN-20181015080248-20181015101748-00360.warc.gz"} |
http://bactra.org/notebooks/change-points.html | Notebooks
## Change-Point Problems
27 Feb 2017 16:30
Suppose you have a time series which has some (stochastic) property you're interested in, say its expected value or its variance. You think this is usually constant, but that if it does change, it does so abruptly. You would like to know if and when it changes, and perhaps to localize the time when it did. You now have a change-point problem. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8404707312583923, "perplexity": 564.3676218914169}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578528058.3/warc/CC-MAIN-20190419201105-20190419223105-00521.warc.gz"} |
http://www2.mpi-magdeburg.mpg.de/preprints/2013/06/ | # Preprint No. MPIMD/13-06
#### Abstract:
We consider the numerical solution of projected algebraic Riccati equations using Newton's method. Such equations arise, for instance, in model reduction of descriptor systems based on positive real and bounded real balanced truncation. We also discuss the computation of low-rank Cholesky factors of the solutions of projected Riccati equations. Numerical examples are given that demonstrate the properties of the proposed algorithms.
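For readers meeting the topic for the first time, here is a hedged sketch (mine, not from the preprint) of the Newton-Kleinman iteration for a standard, unprojected continuous-time Riccati equation A'X + XA - XBB'X + Q = 0 with R = I; the projected variant treated in the preprint additionally involves the spectral projectors of the descriptor system's matrix pencil, which this sketch omits.

```python
# Newton-Kleinman iteration: each Newton step reduces to a Lyapunov equation.
# Requires a stabilizing initial feedback K0 (A - B @ K0 must be stable).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def care_newton(A, B, Q, K0, tol=1e-10, maxit=50):
    K = K0
    for _ in range(maxit):
        Ak = A - B @ K
        # Solve Ak' X + X Ak = -(Q + K' K) for the next iterate X
        X = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ K))
        K_next = B.T @ X
        if np.linalg.norm(K_next - K) <= tol * max(1.0, np.linalg.norm(K)):
            return X
        K = K_next
    return X
```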
#### BibTeX:
@TECHREPORT{MPIMD13-06,
author = {Peter Benner and Tatjana Stykel},
title = {Numerical Solution of Projected Algebraic Riccati Equations},
number = {MPIMD/13-06},
month = may,
year = 2013,
institution = {Max Planck Institute Magdeburg},
type = {Preprint},
note = {Available from \url{http://www.mpi-magdeburg.mpg.de/preprints/}},
} | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8737789392471313, "perplexity": 3941.08373468008}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891316.80/warc/CC-MAIN-20180122113633-20180122133633-00553.warc.gz"} |
http://mathoverflow.net/questions/152352/is-euclid-dead/152363 | # Is Euclid dead? [closed]
Apparently Euclid died about 2,300 years ago (actually 2,288 to be more precise), but the title of the question refers to the rallying cry of Dieudonné, "A bas Euclide! Mort aux triangles!" ("Down with Euclid! Death to triangles!"; see King of Infinite Space: Donald Coxeter, the Man Who Saved Geometry by Siobhan Roberts, p. 157), often associated in the popular mind with Bourbaki's general stance on rigorous, formalized mathematics (eschewing pictorial representations, etc.). See Dieudonné's address at the Royaumont seminar for his own articulated stance.
In brief, the suggestion was to replace Euclidean Geometry (EG) in the secondary school curriculum with more modern mathematical areas, as for example Set Theory, Abstract Algebra and (soft) Analysis. These ideas were influential, and Euclidean Geometry was gradually demoted in French secondary school education. Not totally abolished though: it is still a part of the syllabus, but without the difficult and interesting proofs and the axiomatic foundation. Analogous demotion/abolition of EG took place in most European countries during the 70s and 80s, especially in the Western European ones. (An exception is Russia!) And together with EG there was a gradual disappearance of mathematical proofs from the high school syllabus, in most European countries; the trouble being (as I understand it) that most of the proofs and notions of modern mathematical areas which replaced EG either required maturity or were not sufficiently interesting to students, and gradually most of such proofs were abandoned. About ten years later, there were general calls that geometry return, as the introduction of the alternative mathematical areas did not produce the desired results. Thus EG came back, but not in its original form.
I teach in a University (not a high school), and we keep introducing new introductory courses, for math majors, as our new students do not know what a proof is. [Cf. the rise of university courses in the US that come under the heading "Introduction to Mathematical Proofs" and the like.]
I am interested in hearing arguments both FOR and AGAINST the return of EG to high school curricula. Some related questions: is it necessary for high-school students to be exposed to proofs? If so, is there a more efficient mathematical subject, for high school students, through which to learn what a theorem, an axiom and a proof are?
Full disclosure: currently I am leading a campaign for the return of EG to the syllabus of the high schools of my country (Cyprus). However, I am genuinely interested in hearing arguments both pro and con.
## closed as off-topic by Steven Landsburg, Gerald Edgar, Chris Godsil, Igor Pak, LuciaDec 22 '13 at 2:14
This question appears to be off-topic. The users who voted to close gave this specific reason:
• "This question does not appear to be about research level mathematics within the scope defined in the help center." – Steven Landsburg, Gerald Edgar, Chris Godsil, Igor Pak, Lucia
If this question can be reworded to fit the rules in the help center, please edit the question.
Most parts of discrete math (e.g. elementary number theory, elementary combinatorics, elementary graph theory) seem like a better fit. Among other things, students might actually use that material (e.g. in parts of computer science). But in any case this is probably too opinion-based for MO. – Qiaochu Yuan Dec 19 '13 at 18:24
This web site doesn't really cover pedagogical issues in pre-university education, and doesn't usually permit questions whose answers are opinions. I don't know of a suitable place to ask your question. Maybe start a blog. – Ben McKay Dec 19 '13 at 18:48
Ben McKay, the author clearly connected the issue to the effects on undergraduate math education. – Monroe Eskew Dec 19 '13 at 18:52
I do not understand why the question was closed. It seems to be a legitimate question for this site, and the number of answers and votes seems to show that there is substantial interest. – Alexandre Eremenko Dec 19 '13 at 20:47
What's this absolute nonsense about the series of articles in the 60's ? Which articles ? in which journals ? signed by whom ? What does that mean, "by the Bourbaki's" ? – Joël Dec 20 '13 at 4:47
When I was in high school (in the early 1960's), Euclidean geometry was the only course in the standard curriculum that required us to write proofs. These proofs, however, were in a very rigid format, with statements on the left side of the page and a reason for every statement on the right side. So I fear that many students got an inaccurate idea of what proofs are really like. They also got the idea that proofs are only for geometry; subsequent courses (in the regular curriculum, not honors courses) didn't involve proofs. The textbook that we used also had some defects concerning proofs. For example, Theorem 1 was word-for-word identical with Postulate 19; Theorem 1 was given a proof that didn't involve Postulate 19, so, in effect, we were shown that Postulate 19 is redundant, but the redundancy was never mentioned, and I still don't know why a redundant postulate was included in the first place. Another defect of the standard courses in geometry was that, because of the need to gently teach how to find and write proofs (in that rigid format), very little interesting geometry was taught; the class was mostly proving trivialities. I was fortunate to be in an honors class, with an excellent instructor who showed us some really interesting things (like the theorems of Ceva and Menelaus), but most students at my school had no such advantage.
I conjecture that Euclidean geometry can be used for a good introduction to mathematical proof, but, as the preceding paragraph shows, there are many things that can go wrong. (There are other things that can go wrong too. I mentioned that I had an excellent teacher. But my school also had math teachers who knew very little about proofs or about geometry beyond what was in the textbook.) So my advice is, if you want to develop a course such as you described in the question, proceed, but be very careful.
Incidentally, many years ago, I recommended to my university department that we use a course on projective geometry as an "introduction to proof" course. The idea was that there are fairly easy proofs, and the results are not as obvious, intuitively, as equally easy results of Euclidean geometry. My suggestion was not adopted.
Qiaochu Yuan's suggestion of discrete math instead of geometry might have similar advantages as my projective geometry proposal, but it will still be subject to many of the pitfalls that I indicated above, plus one more: Most high school math teachers know less about discrete math than they do about geometry.
+1. Axiomatizing Euclidean geometry properly is tricky and full of insidious points that a high-school student cannot understand and appreciate. For instance, configuration details and intersection points. If I am not mistaken, it took Hilbert to do it rigorously, a couple of millennia after Euclid's attempts. I am no math education expert, but to me abstract algebra looks like a much better starting point: it's much clearer what needs to be proved and what the axioms are. (yes, I studied EG in high school). – Federico Poloni Dec 19 '13 at 19:39
This sounds as if you used bad textbooks in your school. Andreas, was that in USA? :) – Sergei Akbarov Dec 19 '13 at 19:44
Essentially you say that school teachers were not competent. That is the biggest problem indeed. – Anton Petrunin Dec 19 '13 at 22:21
My middle-school geometry class (age 13) was a proof-based geometry class. We started out with synthetic affine geometry and built up to Euclidean. It was surely not the most rigorous axiomatization of geometry, but it was an excellent class and made every "introduction to proof" class I've encountered since entirely redundant. Every assignment and every test was entirely proof-based, and some were quite challenging to me at the time. Do I remember any geometry? No, not really. Was it one of the most valuable classes I've ever taken? Absolutely. – dfeuer Dec 20 '13 at 7:04
@FedericoPoloni, Using abstract algebra instead of EG because it avoids tricky arcane foundational issues seems like a misplaced priority. Abstract algebra is, well, awfully abstract, and won't be as accessible or intuitive to young math students as EG. And accessibility, imo, is far more important at that stage than total rigor. – Jonah Dec 20 '13 at 11:13
I try to keep my answer short.
Fact: Euclidean geometry is still taught in Iranian middle and high schools.
Observation (based on research): Most teachers do not like to teach geometry. They say that when you teach geometry, you are always faced with problems that you don't know how to solve. But it seems that they don't have that problem with the rest of the mathematics taught in school! Thinking of your campaign, ask yourself: have you got enough teachers willing to teach geometry and able to do so?
Fact: There is at least one mathematician who is in love with triangles. Here is a quote from his paper in The Mathematical Intelligencer:
No object has ever served mathematics better or longer. Compare the number of nontrivial results which are true for all topological spaces, rings, groups, etc, without putting extra assumptions on them with the number of nontrivial results which are true in any triangle...When it comes to deducing results in mathematics just from the definition of an object, nothing can hold a candle to the triangle. The triangle will serve mathematics forever.
Opinion: There is a big difference between teaching geometry as a source of fascinating problems, and as a rigid body of axiomatic knowledge. Personally, I favor the former. Go to observation above!
Iran will stay with geometry since the ornament on your flag is made with a compass-and-straightedge construction isiri.org/portal/files/std/1.htm – Anton Petrunin Dec 20 '13 at 0:39
An excellent observation! It is especially relevant in the situation OP finds himself in, namely, when geometry has been absent from the school curriculum for a while. As an example, the state of New York dropped EG from its school curriculum for a number of years and now faces a paradoxical situation of teachers instructing in a subject that they themselves have never learned in school! This effect is felt for several generations, e.g. we have pre-service teachers who took geometry in school from someone who had never learned it himself/herself. – Victor Protsak Dec 20 '13 at 1:27
@Amir Asghari: Apparently, you have addressed a very serious issue: if EG returns, then who is going to teach it? Even now high school teachers avoid teaching several difficult things. – smyrlis Dec 20 '13 at 5:15
@ToddTrimble Dear Todd. The research I mentioned is an unpublished master's thesis I supervised in 2009: (Leila Mansouri), The differences between the teaching of geometry and the teaching of mathematics in high school! Unfortunately, the result is not available in English. Thus, let me summarize the results here, hoping that it comes in handy. – Amir Asghari Dec 21 '13 at 10:54
First of all, consider that the title speaks for itself. Separating the teaching of geometry from the teaching of mathematics (including calculus) reflects the belief of most teachers. Indeed, whatever a mathematician may say in favour of teaching geometry falls into disfavour from teachers' point of view: geometry is problem based, each problem could have several solutions, solving most problems needs creativity (and you cannot teach creativity), geometry has a unity (that is to say, its different parts are closely related to each other), and so on. – Amir Asghari Dec 21 '13 at 11:14
I strongly recommend reading this paper of Sharygin. (It is in Russian, but it is worth translating.) You will see the reasons to return EG to school, and you will also see the reasons why it disappeared.
Sharygin is my hero: he is the author of many very good math books for school students, and he also wrote the best (the opinion is mine) textbook in Euclidean geometry for school.
P.S. Let me share what I know about the history of the geometry curriculum in Russian schools. We had the textbook of Kiselev, which served for more than half a century. It was changing slowly; at the beginning it was quite close to Euclid's Elements. (If you ask someone from my parents' generation about geometry, their eyes start to radiate with positive energy and they start to explain how wonderful the experience was.)
After that (in the '60s) the changes started. First came Nikitin's book --- a big step back. After that, instead of coming back to Kiselev, many books were written by very prominent mathematicians (including Alexandrov and Pogorelov); these books were yet worse than Nikitin's. Later Sharygin's book appeared; it is a very good book but extremely demanding of the teacher (say, absolute geometry was not discussed, but if the teacher is not familiar with absolute geometry then he cannot teach properly).
Now we have the so-called "Unified State Examination" (the worst reform ever made in Russia); it is either too expensive or outright impossible to check proofs on such an exam, and that wipes geometry from the school curriculum: formally it is still there, but since it is not needed to pass the exam, no one needs to learn it.
Conclusion: It seems that every big reform makes education worse. The right direction would be to change things gradually, and it has to be done by teachers with the help of academia, not the other way around.
An interesting read! Sharygin was also my hero because of his problem books in plane and space geometry. His observations about geometrical skills of strong math olympians are right on target. However, overall this is a rambling article with strong conspirological overtones and, frankly, it should be classified as a panglossian manifesto extolling the virtues of geometry rather than an objective study relying on rational analysis. It is also surprisingly devoid of concrete examples of "good geometry" that he promotes. I would have expected more from such a great mathematician and educator. – Victor Protsak Dec 20 '13 at 6:28
@VictorProtsak, Yes, the article is written in a very emotional way, but Sharygin was actually teaching geometry, so the things he says rely on experience (which is not comparable with mine or yours). The OP asks for arguments for the return of EG to high schools; I think it answers his question. – Anton Petrunin Dec 20 '13 at 15:52
P.S. when I hear "objective study relying on rational analysis" I think "now it is time for the lie". – Anton Petrunin Dec 20 '13 at 15:53
Anton, are you familiar with "Higher geometry" by N.V. Efimov? Do you have an opinion on the text? (Probably it is not a text suitable for school teaching, but I would still like to know what you think of it on its own terms.) – Andres Caicedo Dec 21 '13 at 17:10
Very interesting answer! – Gil Kalai Dec 21 '13 at 18:41
With the caveats mentioned by Andreas, I think Euclidean Geometry makes excellent sense as a high-school course. (My high school experience was not dissimilar to Andreas's -- still the two-column format, but I also had a teacher who understood mathematics beyond what was in the textbook.)
The basic point of agreement (between those Bourbakistes and those who would uphold EG) seems to be that there is need for a course that expounds mathematics as an axiomatic discipline, and the careful modes of reasoning that go into that. In some sense just about any system based on axioms (be it EG, set theory, "discrete mathematics", or something else) would serve that purpose, except that Euclidean Geometry has the big advantage of being visual and readily accessible to intuition. (The downside to that might be Isaac Newton's criticism [see Arnold's Huygens and Barrow, Newton and Hooke, pp. 49-50] that most of the theorems are intuitively quite obvious, so that the typical course can seem a painful exercise in pedantry.)
I like Andreas's projective geometry proposal. Among other things, this would help promote the idea of the power of unification in mathematics: that things that might look very different, such as ellipses and hyperbolas, are often the same thing in disguise.
Euclidean geometry is still taught in American high schools, but I am strongly against it. I think it should be replaced with linear algebra.
Arguments against Euclidean geometry:
• Most of what you prove in a high school Euclidean geometry class seems pretty obvious until you learn about non-Euclidean geometry. It makes students think that proofs are pedantry for its own sake.
• Euclidean geometry is basically useless. There was undoubtedly a time when people used ruler and compass constructions in architecture or design, but that time is long gone.
• Euclidean geometry is obsolete. Even those students who go into mathematics will probably never use it again.
Arguments for linear algebra:
• $\mathbb{R}^2$ with the standard inner product is a model for the Euclidean axioms, so in particular you can still prove the same theorems if you really want to (a one-line illustration follows this list).
• Linear algebra generalizes easily to dimensions larger than 3 where most students' geometric intuition breaks down, so it is easier for them to appreciate the need for axioms and theorems.
• Linear algebra - particularly eigenvalues and eigenvectors - is ubiquitous in modern science and engineering. I would argue that the average person is much more likely to encounter an eigenvalue problem than a calculus problem.
• Linear algebra is, of course, still the basic language in which most of mathematics is expressed and thus a linear algebra class is a more honest taste of what math is all about.
• Providing students with an early foundation in linear algebra would make later education run more smoothly. Even many non-scientists use software that is based on solving linear systems or computing matrix decompositions, and it might help for such people to have a little more context. And those who go on to take further science classes - particularly physics - would more obviously benefit. If nothing else, we might finally be able to teach our students the correct second derivative test in multivariable calculus classes...
-
Things like the Pythagorean theorem, the theorem about inscribed angles in a circle, the volume relation between a pyramid and a parallelepiped, are these all so intuitively obvious? – Monroe Eskew Dec 19 '13 at 21:30
@MonroeEskew, sadly, I doubt you will see a proof of these things in a high school geometry course (in the US, at least; I think the Russian curriculum is another story). – Adeel Dec 19 '13 at 21:47
"Euclidean geometry is still taught in American high schools" --- your first sentence is wrong, I do not see the point to read further. – Anton Petrunin Dec 19 '13 at 22:02
Something that goes by the name Geometry is still taught in American high schools, no question about it, and Euclidean is as accurate as any other single adjective (what else would one call it?). If Anton's point is that the course is some denatured form or deformation of what he understands by Euclidean Geometry, then that of course is a separate point. Otherwise, Paul is correct. – Todd Trimble Dec 19 '13 at 22:41
Paul: a personal story. When I was in the 9th or 10th grade (in the USSR), I bought an experimental geometry textbook which exposed Euclidean geometry from the point of view of vectors, using what the book called "H.Weyl's approach". Although I was an accomplished math olympian and had spent quite a bit of time improving my understanding of elementary geometry using problem-based approach, I remember how comprehending this fairly elementary linear algebra based geometry was HARD. – Victor Protsak Dec 21 '13 at 3:04
As long as this question is open I might as well throw in my two cents. I think it is not useful to teach Euclidean geometry to high school students. Here are some reasons I can think of for people to teach Euclidean geometry to high school students and why I think they are bad reasons:
• As an introduction to the notion of a proof. As I said in the comments, I think there are better options here, such as areas of discrete math like elementary number theory, elementary combinatorics, or elementary graph theory. Unlike Euclidean geometry, at least some of this material has nontrivial applications: for example, the application of elementary number theory to cryptography or the application of combinatorics to analyzing algorithms. Also unlike Euclidean geometry, this material offers a lot of opportunity for computer-based exploration: for example, Project Euler. But it's not even clear to me that high school students really need an introduction to proof.
• As preparation for other topics that high school students ought to know. Euclidean geometry might not be a bad way to prepare students for trigonometry and eventually calculus, but I don't think high school students ought to learn these things either. The same goes for physics.
• As preparation for using mathematics in daily life. Here I think topics like Fermi estimation and some basic probability and statistics would be more useful (e.g. for helping people make better political and medical decisions). As far as I can tell most people have no use for Euclidean geometry in their daily lives.
• As preparation for jobs involving mathematics. If students want to take such jobs, the relevant mathematics can be taught to them as part of their job training, or they can pick it up themselves. Note that there are many people with programming jobs despite the general lack of programming in most high school curricula.
-
Your opinion is quite extreme. Perhaps you think mechanical and electrical engineering are becoming obsolete? – Monroe Eskew Dec 19 '13 at 22:26
There is nothing better than EG as an "introduction to the notion of a proof"; there is nothing in second place and nothing in third --- your examples sit way below. – Anton Petrunin Dec 19 '13 at 22:28
It's not just the notion of proof; it's the nature of mathematics seen as an axiomatic deductive discipline that should be part of one's broad cultural education of where mathematics fits into general human knowledge. This aspect is badly underappreciated. (It might be tempting to overplay the applications aspect, but that would be missing the real point of such a course.) Of all the traditional high school curricula, EG comes closest to capturing that essential aspect of mathematics as it is understood by mathematicians. – Todd Trimble Dec 19 '13 at 22:35
Granted, the majority of people get by with very little mathematics and none of it too deep. Still, I am SHOCKED that the "daily life" of "most people" is reduced to "political and medical decisions". How about making and fixing things with your own hands? And by the way, for 99.99% of people spatial imagination is way more important in their daily life than any kind of mathematical proof. – Victor Protsak Dec 20 '13 at 6:48
@Todd: it's not clear to me that teaching high school students Euclidean geometry does anything to address this. I think as mathematicians we should be careful to separate our experience of mathematics from the experience of the masses and appreciate that not everyone finds it as engrossing as we do. To be clear, what I am mostly against is forcing children to learn things that many of them will neither enjoy nor use. As long as we're going to force children to learn anything we might as well think carefully about what we're forcing them to learn and whether there are better options. – Qiaochu Yuan Dec 20 '13 at 8:16
Also in Israel, Euclidean geometry has been taught in schools for quite some time (judging from my parents, me, and my children). I personally like the idea of it being taught and being the first encounter with mathematical axioms, definitions and proofs, as well as an encounter with geometrical thinking. For learning what a mathematical proof is, I doubt that any of the suggested substitutes will even come close.
But it is not clear to me how crucial it is to teach (everybody) the notion of a mathematical proof in high school at all.
-
Gil, when children at school are taught religion or military training, is it doubtful for you as well? Wouldn't it be better to explain to them that when a "great intellectual leader" says something strange, there is a possibility to verify whether what he says is indeed wise or, on the contrary, stupid? :) – Sergei Akbarov Dec 21 '13 at 19:04
Unfortunately, I was born in the USSR, where those stupidities were presented without hesitation in school education as "great truths of modern science" and "modern logic" (since the author, G.W.F. Hegel, suggested his own understanding of logic in his great masterpieces marxists.org/reference/archive/hegel/works/hl/hlconten.htm). Fortunately, we had also been taught geometry, where we could understand what logic actually is, and this saved us from total intellectual degradation. – Sergei Akbarov Dec 21 '13 at 19:22
@SergeiAkbarov, I agree, if one wants to remove the proofs from high school, one has to think what will come instead. – Anton Petrunin Dec 21 '13 at 23:18
@SergeiAkbarov I think your first comment has little to do with math proof, especially for a typical student. Actually for this I think training in language, logic, philosophy could be more useful. I think we discussed this already once but for my taste you blow out of proportion the relevance of math proof education on such matters. In my opinion it is neither sufficient nor necessary for critical and independent thinking in the context you bring up. – quid Dec 22 '13 at 1:38
"quid and quim are different people!" -- Ah! Excuse me, I did not notice! Yes briefly that was my point: "if you take proofs out of the curriculum, bullshit like this will come in instead." Anton understood me correctly. – Sergei Akbarov Dec 22 '13 at 10:15
I completely agree with you. It is important for everyone to be exposed to proofs, because it shows them what math is really about--reasoning, not computation. The mathematical way of thinking is very valuable for developing general critical thinking skills and the most careful and precise reasoning. I believe there is no substitute. You also hit the nail on the head when you point out that Euclidean geometry is a great medium for learning what proofs are all about. The subject matter connects with intuition, and the propositions and arguments are easily seen to be well-motivated and accessible to the novice. As you mention, it is empirically found to be hard for a beginner to do proofs in other topics.
How can we expect our undergraduates to do well when the secondary education is lacking in the prerequisite training? If mathematics education is an appropriate topic of discussion here, then certainly the relation of high school curriculum to the preparedness of undergraduates is relevant.
-
Huzzah! I hate it when people try to waste the young mind's time with paper computation. Use a computer. What kids should be doing is learning reasoning and honing their creative ability (with tools which they will have to learn, yes, but not those darn pieces of paper!). – bjb568 Dec 20 '13 at 2:57
There is something important besides rigor introduced in Euclidean Geometry classes: a connection between visual perception and sequential reasoning.
In "Mathematics in the 20th Century" Atiyah likened Geometry with space-bound visual perception and Algebra with sequential time-bound reasoning. If we continue that simile a course that naturally combines both would be a movie, something much more than the ingredients. And every time we encounter one of those movies it usually generates quite a bit of excitement.
Algebra, however, is not the only sequential process in Mathematics; the other one is the sequential reasoning of a proof.
What I find important about EG is that it's the first course in high school that connects visual perception and sequential reasoning, making it the first "movie" the kids ever see, and for many of them the only one. Replacing EG with number theory or combinatorics, as others suggested, would replace the marriage of visual to sequential with a marriage of sequential to sequential.
-
You need to establish a goal, and the reason for the goal. An example is "Have my high school require everyone to take a course covering this syllabus in Euclidean geometry" for the goal, and "because it is intellectually enriching and potentially useful" as the reason.
I don't think the above is a good example. Here is a different example: "Require knowledge of Euclidean geometry and its applications to graduate from high school" as a goal, with the reason being "our society needs engineers, technicians, and other workers who will use the knowledge and applications to improve our community." I like this example a little better because the reason feels more concrete; sadly, I do not know if the reason is valid.
As your present question stands, I do not see a good combination of goal and reason. When you have that, you will have a foundation for arguing for your goal.
If the goal is to help students learn proofs, I might suggest looking at Common Core education standards happening in the United States. Good communication and expression in a broad range of areas of study is emphasized, and I would couple this with the ability to produce arguments in a variety of styles: logical, emotional, inspirational, to start. I would suggest a course or two which presents arguments in geometry, algebra, analysis, discrete mathematics, and logic, so that one can taste the different flavors of proof that occur in the fields.
Gerhard "Also Gives Fresh, Minty Breath" Paseman, 2013.12.19
-
Christopher Moore has a character named Minty Fresh in at least one of his books. There is a bit of explanation as to the reason for the name, I don't immediately recall. I really like his books, though. en.wikipedia.org/wiki/A_Dirty_Job – Will Jagy Dec 19 '13 at 22:39
introduced in a different book: "A few characters from Moore's earlier novels participate in this story: Minty Fresh from Coyote Blue" – Will Jagy Dec 19 '13 at 22:55
https://caltechmacs117b.wordpress.com/2008/01/22/116b-lecture-5/ | ## 116b- Lecture 5
We showed that a function is in ${\sf R}$ iff it has a $\Sigma_1$-graph. It follows that a set is r.e. iff it is $\Sigma_1$-definable.
http://mathhelpforum.com/advanced-algebra/199979-continous-function-print.html | # continous function
• June 13th 2012, 08:22 AM
saravananbs
continuous function
$f(x) = e^{-1/x^2}$ if $x \neq 0$,
$f(x) = 0$ if $x = 0$;
then $f$ is continuous. How can it be proved?
• June 13th 2012, 08:36 AM
Prove It
Re: continuous function
Quote:
Originally Posted by saravananbs
$f(x) = e^{-1/x^2}$ if $x \neq 0$,
$f(x) = 0$ if $x = 0$;
then $f$ is continuous. How can it be proved?
See if $$\lim_{x \to 0^-}e^{-\frac{1}{x^2}} = \lim_{x \to 0^+}e^{-\frac{1}{x^2}} = 0.$$
• June 13th 2012, 08:38 AM
Reckoner
Re: continuous function
Quote:
Originally Posted by saravananbs
$f(x) = e^{-1/x^2}$ if $x \neq 0$,
$f(x) = 0$ if $x = 0$;
then $f$ is continuous. How can it be proved?
We need to show that $\lim_{x\to0}f(x) = f(0).$
$\lim_{x\to0}f(x)$
$=\lim_{x\to0}e^{-1/x^2}$
Now evaluate the limit and show that it equals $f(0).$
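(Added note, not part of the original thread: one standard way to finish the evaluation, using the substitution $u = 1/x^2$.)
$$\lim_{x\to 0} e^{-1/x^2} = \lim_{u\to +\infty} e^{-u} = 0 = f(0),$$
since $1/x^2 \to +\infty$ as $x \to 0$ from either side; hence $f$ is continuous at $0$.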
• June 13th 2012, 08:40 AM
saravananbs
Re: continuous function
Yes, at $x = 0$ it is continuous.
But for the other points, how is it derived?
Is it like $\lim_{h \to 0} f(a+h) = \lim_{h \to 0} f(a-h) = f(a)$?
• June 13th 2012, 08:55 AM
emakarov
Re: continuous function
Quote:
Originally Posted by saravananbs
Yes, at $x = 0$ it is continuous.
But for the other points, how is it derived?
One way is to say that $e^{-1/x^2}$ is continuous as the composition of continuous functions.
• June 13th 2012, 09:05 AM
saravananbs
Re: continuous function
Quote:
Originally Posted by emakarov
One way is to say that $e^{-1/x^2}$ is continuous as the composition of continuous functions.
I think three functions are involved: $e^x$, $1/x$, $x^2$.
All are continuous. Is that right?
• June 13th 2012, 09:06 AM
Prove It
Re: continuous function
All are continuous where $x \neq 0$.
• June 13th 2012, 09:06 AM
emakarov
Re: continuous function
Yes, all are continuous for x ≠ 0.
https://mathhelpforum.com/threads/give-the-order-of-each-zero.143940/ | Give the order of each zero...
jzellt
Give the order of each of the zeros of the function
$$\frac{\sin z}{z}.$$
Can someone please explain or show the steps needed to do this?
chisigma
Considering the expansion of the function as 'infinite product'...
$$\displaystyle \frac{\sin z}{z} = (1-\frac{z}{\pi}) (1+\frac{z}{\pi}) (1-\frac{z}{2 \pi}) (1+\frac{z}{2\pi}) \dots$$ (1)
... You can easily verify that the zeroes are at $$\displaystyle z = k \pi$$ with $$\displaystyle k \ne 0$$ and that each of them has order 1...
Kind regards
$$\displaystyle \chi$$ $$\displaystyle \sigma$$
jzellt
Thanks for the quick reply.
I guess I'm really behind here, but how do you find that
$$\displaystyle \frac{\sin z}{z} = \left(1 - \frac{z}{\pi}\right)\left(1 + \frac{z}{\pi}\right)\left(1 - \frac{z}{2\pi}\right)\cdots \,?$$
Also, what tells you that the zeros are of order 1, 2, 3, ...?
Thanks
HallsofIvy
His point was that if you write it as such an infinite product, you can see that each zero gives one factor and so has order one.
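(Added note, not part of the original thread: the order can also be confirmed directly from the derivative, the standard criterion for a simple zero.)
$$f(z) = \frac{\sin z}{z}, \qquad f'(z) = \frac{z\cos z - \sin z}{z^2}, \qquad f'(k\pi) = \frac{\cos k\pi}{k\pi} = \frac{(-1)^k}{k\pi} \neq 0 \quad (k \neq 0),$$
so $f$ vanishes at $z = k\pi$ while $f'$ does not, which is exactly what it means for each zero to have order 1.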
http://physics.stackexchange.com/questions/47607/other-than-the-motion-of-the-earth-what-else-would-cause-parallax | Other than the motion of the Earth, what else would cause parallax?
Wikipedia has this to say about the discovery of the aberration of light:
In 1728, while unsuccessfully attempting to measure the parallax of Eltanin, James Bradley discovered the aberration of light resulting from the movement of the Earth. Bradley's discovery confirmed Copernicus' theory that the Earth revolved around the Sun.
http://en.wikipedia.org/wiki/Gamma_Draconis
If parallax was a known phenomenon at the time, what was the theory of its cause at the time? Other than the motion of the Earth around the Sun, what else could account for parallax?
-
The phenomenon of parallax itself is simply the result of different views of one's surroundings observed from different locations. For binocular vision, we get a different view of our environment from each eye simultaneously, and the visual cortex of our brains learns very early on how to process the two distinct images into an appearance of a three-dimensional image. (Actual physical interaction with the objects in the environment is also important in "educating" the visual cortex about distances to objects. We generally accomplish this during infancy.)
In the case of astronomical parallax, celestial objects are generally so distant that the two (or more) different views must be obtained at different times from different positions of the Earth on its orbit. (The exception is the Moon, which is close enough to show parallax in simultaneous observations from well-separated points on the Earth. As an example, this is why occultations of stars by the Moon are not seen by all observers on the otherwise proper side of the Earth.)
To note the presence of astronomical parallax, it is necessary to have very precise measurements of the direction in which the object is seen, relative to a much more distant "background". The fact that this is not observed with the naked-eye was used until the 18th Century as an argument against the motion of the Earth about the Sun, since (obviously) the planets and stars would show parallax if the Earth were not simply stationary at the center of the Universe.
One difficulty in measuring parallax for stars is that, as we now know, the change in observed direction, even at diametrically opposite locations on the Earth's orbit, is at most a bit over one second of arc (1/3600 of a degree) for the closest stars and generally hundredths of a second or less for most of the visible stars. But even to discern that requires being able to compare the observed directions of the stars at different times. In the age before photography, this required having precise charting of the stars on the sky, which was the result of a tremendous effort in the centuries after the first use of telescopes in astronomy. It wasn't until 1838 that the first sufficiently accurate observations of stellar parallax were accomplished.
By comparison, the aberration of starlight due to the Earth's motion is an effect about twenty times larger than the largest stellar parallaxes, so it became possible to detect it by 1725.
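To make the quoted magnitudes concrete, here is a small back-of-the-envelope check (a sketch added to this answer, using textbook constants); the distance to Proxima Centauri and the Earth's orbital speed are the only inputs:

```python
import math

AU_PER_LY = 63_241.1        # astronomical units per light-year
RAD_TO_ARCSEC = 206_264.8   # arcseconds per radian

# Annual parallax of the nearest star (Proxima Centauri, ~4.25 ly):
# the half-angle subtended by 1 AU at that distance.
d_au = 4.25 * AU_PER_LY
parallax = math.atan(1.0 / d_au) * RAD_TO_ARCSEC
print(f"largest stellar parallax ~ {parallax:.2f} arcsec")   # ~0.77

# Aberration of starlight is of order v/c (Earth's orbital speed over c).
v_orbit, c = 29.78, 299_792.458   # km/s
aberration = (v_orbit / c) * RAD_TO_ARCSEC
print(f"aberration constant     ~ {aberration:.1f} arcsec")  # ~20.5
```

The ratio of the two numbers is consistent with the "about twenty times larger" comparison above.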
-
Thank you for the very informative answer. You do discuss why aberration is easier to detect than parallax, which answers another (unasked) question of mine, but the core question is not addressed: Why was Bradley trying to measure parallax if it had not been discovered yet? And if it had been discovered (or suspected), yet the Earth was supposedly stationary (because Bradley's discovery proved that it wasn't), then what would explain the parallax? – dotancohen Apr 15 '13 at 5:18
It was understood that stellar parallaxes should be seen if the Earth were in fact in motion, but they are too small to observe by eye and require telescope observation and precise comparison to verify. The heliocentric model of the solar system was not widely accepted during the 17th Century and Newton's mechanics alone was not considered adequate support for it (Aristotle's physics was only finally discarded by "natural philosophers" by the late 18th Century). Galileo's observations of moons orbiting Jupiter was also not regarded as sufficient evidence (continued)... – RecklessReckoner Apr 15 '13 at 6:04
since mere telescopic observation was not considered a reliable means of investigating nature by most philosophers of the time. Bradley and others needed to find a preponderance of acceptable physical evidence in order to show convincingly the inadequacies of the "Aristotelian view" of the cosmos. This was a very difficult transition for the way of thinking about the physical universe and took more than two centuries after the death of Copernicus to achieve. (Consider the continuing resistance of some people to the "Darwinian view" of the history of living things on Earth.) – RecklessReckoner Apr 15 '13 at 6:10
I see, thanks. Therefore Bradley's discovery of aberration was the final bit of needed evidence? – dotancohen Apr 15 '13 at 7:06
That I don't know off-hand. Harvard was the first university to teach the heliocentric model as the "accepted" one by the mid-18th Century, but the preponderance of evidence wasn't really achieved (to the point where scientific doubt was effectively eliminated) until the middle of the following century, after stellar parallax was satisfactorily measured and the Foucault pendulum was demonstrated. Scientific "truths" are really arrived at only by showing that a particular viewpoint is supported by observations far better than any others proposed; we never really know what The World truly is. – RecklessReckoner Apr 15 '13 at 8:13
https://au.mathworks.com/help/mcb/gs/estimate-control-gains-from-motor-parameters.html | ## Estimate Control Gains from Motor Parameters
Perform control parameter tuning for the speed and the torque control loops that are part of the Field-Oriented Control (FOC) algorithm. Motor Control Blockset™ provides you with multiple methods to compute the control loop gains from the system or block transfer functions that are available for the motors, inverter, and controller:
• Use the Field Oriented Control Autotuner block.
• Use the model initialization script.
### Field-Oriented Control Autotuner
The Field-Oriented Control Autotuner block of Motor Control Blockset enables you to automatically tune the PID control loops in your Field-Oriented Control (FOC) application in real time. You can automatically tune the PID controllers associated with the following loops (for more details, see How to Use Field Oriented Control Autotuner Block):
• Direct-axis (d-axis) current loop
• Speed loop
For each loop that the block tunes, the Field-Oriented Control Autotuner block performs the autotuning experiment in a closed-loop manner without using a parametric model associated with that loop. The block enables you to specify the order in which the block tunes the control loops. When the tuning experiment runs for one loop, the block has no effect on the other loops. For more details about FOC autotuner, see Field Oriented Control Autotuner and Tune PI Controllers by Using Field Oriented Control Autotuner.
Simulink Control Design enables you to design and analyze control systems modeled in Simulink. You can automatically tune arbitrary SISO and MIMO control architectures, including PID controllers. You can deploy PID autotuning to embedded software to automatically compute PID gains in real time.
You can find the operating points and compute exact linearizations of Simulink models at different operating conditions. Simulink Control Design also provides tools that let you compute simulation-based frequency responses without modifying your model. For details, see https://www.mathworks.com/help/slcontrol/index.html
### Model Initialization Script
This section explains how the Motor Control Blockset examples estimate the control gains needed to implement field-oriented control. For example, for a PMSM that is connected to a quadrature encoder, these steps describe the procedure to compute the control loop gain values from the system details by using the initialization script:
1. Open the initialization script (`.m`) file of the example in MATLAB®. To find the associated script file name:
1. Select Modeling > Model Settings > Model Properties to open the model properties dialog box.
2. In the Model Properties dialog box, navigate to the Callbacks tab > InitFcn to find the name of the script file that Simulink opens before running the example.
2. This figure shows an example of the initialization script (`.m`) file.
3. Use the Workspace to edit the control variables values. For example, to update Stator resistance (`Rs`), use the variable `pmsm` to add the parameter value to the `Rs` field.
4. The model initialization script associated with a target model calls these functions and sets up the workspace with the necessary variables.
`mcb_SetPMSMMotorParameters`
• Input to the function is the motor type (for example, BLY171D).
• The function populates a structure named `pmsm` in the MATLAB workspace, which is used by the model.
• It also computes the permanent magnet flux and rated torque for the selected motor.
• You can extend the function by adding an additional switch-case for a new motor.
• This function also loads the structure `motorParam`, obtained by running parameter estimation, into the structure `pmsm`. If the structure `motorParam` is not available in the MATLAB workspace, the function loads the default parameters.

`mcb_SetInverterParameters`
• Input to the function is the inverter type (for example, BoostXL-DRV8305).
• The function populates a structure named `inverter` in the MATLAB workspace, which is used by the model.
• The function also computes the inverter resistance for the selected inverter.
• You can extend the function by adding an additional switch-case for a new inverter.

`mcb_SetProcessorDetails`
• Inputs to the function are the processor type (for example, F28379D) and the pulse-width modulation (PWM) switching frequency.
• The function populates a structure named `target` in the MATLAB workspace, which is used by the model.
• The function also computes the PWM counter period, which is a parameter for the ePWM block in the target model.
• You can extend the function by adding an additional switch-case for a new processor.

`mcb_getBaseSpeed`
• Inputs to the function are the motor and inverter parameters.
• The function computes the base speed for the PMSM.
• Type ```help mcb_getBaseSpeed``` at the MATLAB command prompt or see section Obtain Base Speed for more details.

`mcb_SetPUSystem`
• Inputs to the function are the motor and inverter parameters.
• The function sets the base values of the per-unit system for voltage, current, speed, torque, and power.
• The function populates a structure named `PU_System` in the MATLAB workspace, which is used by the model.

`mcb.internal.SetControllerParameters`
• Inputs to the function are the motor and inverter parameters, per-unit system base values, PWM switching time period, sample time for the control system, and sample time for the speed controller.
• The function computes the proportional-integral (PI) parameters (Kp, Ki) for the field-oriented control implementation.
• The function populates a structure named `PI_params` in the MATLAB workspace, which is used by the model.
• See section Obtain Controller Gains for more details.
This table explains the useful variables for each control parameter that you can update.
Note
You can try starting MATLAB in administrator mode on a Windows® system if you are unable to update the model initialization scripts associated with the example models.
| Control parameter category | Control parameter name | MATLAB workspace variable |
| --- | --- | --- |
| Motor parameters | Manufacturer's model number | `pmsm.model` |
| | Manufacturer's serial number | `pmsm.sn` |
| | Pole pairs | `pmsm.p` |
| | Stator resistance (Ohm) | `pmsm.Rs` |
| | d-axis stator winding inductance (Henry) | `pmsm.Ld` |
| | q-axis stator winding inductance (Henry) | `pmsm.Lq` |
| | Back-EMF constant (V_line(peak)/krpm) | `pmsm.Ke` |
| | Motor inertia (kg·m²) | `pmsm.J` |
| | Friction constant (N·m·s) | `pmsm.F` |
| | Permanent magnet flux (Wb) | `pmsm.FluxPM` |
| | Rated torque, Trated | `pmsm.T_rated` |
| | Base speed, Nbase | `pmsm.N_base` |
| | Rated current, Irated | `pmsm.I_rated` |
| Position decoders | QEP index and Hall position offset correction | `pmsm.PositionOffset` |
| | Quadrature encoder slits per revolution | `pmsm.QEPSlits` |
| Inverter parameters | Manufacturer's model number | `inverter.model` |
| | Manufacturer's serial number | `inverter.sn` |
| | DC link voltage of the inverter (V) | `inverter.V_dc` |
| | Maximum measurable current by ADCs (A) | `inverter.I_max` |
| | Maximum permissible current of the inverter (A) | `inverter.I_trip` |
| | On-state resistance of MOSFETs (Ohm) | `inverter.Rds_on` |
| | Shunt resistance for current sensing (Ohm) | `inverter.Rshunt` |
| | Per-phase board resistance seen by motor (Ohm) | `inverter.R_board` |
| | Current scaling | `inverter.MaxADCCnt` |
| | ADC offsets for current sensors (Ia and Ib) | `inverter.CtSensAOffset`, `inverter.CtSensBOffset` |
| | Enable auto-calibration for current sense ADCs | `inverter.ADCOffsetCalibEnable` |
| Processor | Manufacturer's model number | `target.model` |
| | Manufacturer's serial number | `target.sn` |
| | CPU frequency | `target.CPU_frequency` |
| | PWM frequency | `target.PWM_frequency` |
| | PWM counter period | `target.PWM_Counter_Period` |
| Per-unit system | Base voltage (V) | `PU_System.V_base` |
| | Base current (A) | `PU_System.I_base` |
| | Base speed (rpm) | `PU_System.N_base` |
| | Base torque (N·m) | `PU_System.T_base` |
| | Base power (Watts) | `PU_System.P_base` |
| Data type for target device | Data type (fixed-point or floating-point) selection | `dataType` |
| Sample time values | Switching frequency for converter | `PWM_frequency` |
| | PWM switching time period | `T_pwm` |
| | Sample time for current controllers | `Ts` |
| | Sample time for speed controller | `Ts_speed` |
| | Simulation sample time | `Ts_simulink` |
| | Simulation sample time for motor | `Ts_motor` |
| | Simulation sample time for inverter | `Ts_inverter` |
| Controller parameters | Proportional gain for Iq controller | `PI_params.Kp_i` |
| | Integral gain for Iq controller | `PI_params.Ki_i` |
| | Proportional gain for Id controller | `PI_params.Kp_id` |
| | Integral gain for Id controller | `PI_params.Ki_id` |
| | Proportional gain for speed controller | `PI_params.Kp_speed` |
| | Integral gain for speed controller | `PI_params.Ki_speed` |
| | Proportional gain for field-weakening controller | `PI_params.Kp_fwc` |
| | Integral gain for field-weakening controller | `PI_params.Ki_fwc` |
Note
For the predefined processors and drivers, the model initialization script uses the default values.
The model initialization script uses these functions for performing the computations:
`mcb_getBaseSpeed` (base speed of the motor)
• Calculates the base speed of the PMSM at the rated voltage and rated load.
• For details, type ```help mcb_getBaseSpeed``` at the MATLAB command prompt or see section Obtain Base Speed.

`mcb_getCharacteristics` (motor characteristics for the given motor and inverter)
• Obtains these characteristics of the motor:
  • Torque versus speed characteristics
  • Power versus speed characteristics
  • Iq versus speed and Id versus speed characteristics
• For details, type ```help mcb_getCharacteristics``` at the MATLAB command prompt.

`mcb.internal.SetControllerParameters` (control algorithm parameters)
• Computes the gains for these PI controllers:
  • Current (torque) control loop gains (Kp, Ki) for the currents Id and Iq
  • Speed control loop gains (Kp, Ki)
  • Field-weakening control gains (Kp, Ki)
• For details, see section Obtain Controller Gains.

`mcb_getControlAnalysis` (control analysis for the motor and inverter you are using)
• Performs frequency-domain analysis for the computed gains of the PI controllers used in the field-oriented motor control system.
• Note: this feature requires Control System Toolbox™.
• For details, type ```help mcb_getControlAnalysis``` at the MATLAB command prompt.
#### Obtain Base Speed
The function `mcb_getBaseSpeed` computes the base speed of the PMSM at the given supply voltage. Base speed is the maximum motor speed at the rated voltage and rated load, outside the field-weakening region.
When you call this function (for example, ```base_speed = mcb_getBaseSpeed(pmsm,inverter)```), it returns the base speed (in rpm) for the given combination of PMSM and inverter. The function accepts the following inputs:
• PMSM parameter structure.
• Inverter parameter structure.
These equations describe the computations that the function performs:
The inverter voltage constraint is defined by computing the d-axis and q-axis voltages:
The current limit circle defines the current constraint which can be considered as:
In the preceding equation, $i_d$ is zero for surface PMSMs. For interior PMSMs, the values of $i_d$ and $i_q$ corresponding to MTPA (maximum torque per ampere) operation are considered.
Using the preceding relationships, we can compute the base speed as:
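The rendered equations did not survive the page extraction. As a hedged reconstruction based on the symbol list below, the standard steady-state PMSM relations that the surrounding text describes are (treat these as a sketch of the intent, not the verbatim documentation formulas):
$$v_d = R_s i_d - \omega_e L_q i_q, \qquad v_q = R_s i_q + \omega_e \left( L_d i_d + \lambda_{pm} \right),$$
$$v_d^2 + v_q^2 \le v_{max}^2, \qquad v_{max} = \frac{v_{dc}}{\sqrt{3}}, \qquad i_d^2 + i_q^2 \le i_{max}^2,$$
with the base speed obtained by solving $v_d^2 + v_q^2 = v_{max}^2$ for $\omega_e$ at the limiting current, and $\omega_{base} = \omega_e / p$ converting electrical to mechanical speed.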
where:
• $\omega_e$ is the electrical speed corresponding to the frequency of the stator voltages (rad/s).
• $\omega_{base}$ is the mechanical base speed of the motor (rad/s).
• $i_d$ is the d-axis current (A).
• $i_q$ is the q-axis current (A).
• $v_{do}$ is the d-axis voltage when $i_d$ is zero (V).
• $v_{qo}$ is the q-axis voltage when $i_q$ is zero (V).
• $L_d$ is the d-axis winding inductance (H).
• $L_q$ is the q-axis winding inductance (H).
• $R_s$ is the stator phase winding resistance (Ohm).
• $\lambda_{pm}$ is the permanent magnet flux linkage (Wb).
• $v_d$ is the d-axis voltage (V).
• $v_q$ is the q-axis voltage (V).
• $v_{max}$ is the maximum fundamental line-to-neutral voltage (peak) supplied to the motor (V).
• $v_{dc}$ is the DC voltage supplied to the inverter (V).
• $i_{max}$ is the maximum phase current (peak) of the motor (A).
• $p$ is the number of motor pole pairs.
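A minimal numerical sketch of these relations for a surface PMSM ($i_d = 0$), with hypothetical motor parameters; this only illustrates the algebra above and is not the `mcb_getBaseSpeed` implementation:

```python
import math

# Hypothetical surface-PMSM parameters (all values illustrative only)
Rs, Lq = 0.36, 0.37e-3      # stator resistance (Ohm), q-axis inductance (H)
lambda_pm = 6.4e-3          # permanent magnet flux linkage (Wb)
p = 4                       # pole pairs
v_dc, i_max = 24.0, 8.0     # DC-link voltage (V), peak phase current (A)

v_max = v_dc / math.sqrt(3) # max line-to-neutral fundamental peak voltage
iq = i_max                  # all current on the q-axis for a surface PMSM

# (omega_e*Lq*iq)^2 + (Rs*iq + omega_e*lambda_pm)^2 = v_max^2
# is a quadratic in omega_e; keep the positive root.
a = (Lq * iq) ** 2 + lambda_pm ** 2
b = 2 * Rs * iq * lambda_pm
c = (Rs * iq) ** 2 - v_max ** 2
omega_e = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)  # electrical rad/s

base_speed_rpm = omega_e / p * 60 / (2 * math.pi)        # mechanical rpm
print(f"base speed ~ {base_speed_rpm:.0f} rpm")
```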
#### Obtain Motor Characteristics
The function `mcb_getCharacteristics` calculates the torque and speed characteristics of the motor, which helps you to develop the control algorithm for the motor.
The function returns these characteristics for the given PMSM:
• Torque versus speed
• Power versus speed
• Iq versus speed
• Id versus speed
#### Obtain Controller Gains
The function `mcb.internal.SetControllerParameters` computes the gains for the PI controllers used in the field-oriented motor control systems.
When you call this function (for example, ```PI_params = mcb.internal.SetControllerParameters(pmsm,inverter,PU_System,T_pwm,Ts_control,Ts_speed)```), it returns the gains of these PI controllers used in the FOC algorithm:
• Direct-axis (d-axis) current loop
• Speed loop
• Field-weakening control loop
The function accepts these inputs:
• `pmsm object`
• `inverter object`
• `PU system params`
• `T_pwm`
• `Ts_control`
• `Ts_speed`
The function does not plot any characteristic.
The design of the compensators relies on classical frequency-response analysis applied to motor control systems. The current controllers use a Modulus Optimum (MO) based design, and the speed controller uses a Symmetrical Optimum (SO) based design.
The function automatically computes the other required parameters (for example, bandwidth, damping) based on the input arguments.
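For illustration, a textbook-style sketch of MO/SO gain selection is given below. The bandwidth fraction, the Kessler factor, and the function name `foc_pi_gains` are assumptions introduced for this sketch; the toolbox's internal computation may differ:

```python
import math

def foc_pi_gains(Rs, Ld, Lq, J, Kt, T_pwm):
    """Illustrative MO/SO-style PI gain selection (not the toolbox code)."""
    # Current loops, modulus-optimum flavour: cancel the electrical pole
    # R/L and place the closed-loop bandwidth well below the PWM rate.
    w_c = 2 * math.pi / (20 * T_pwm)   # assumed: 1/20 of switching frequency
    kp_id, ki_id = Ld * w_c, Rs * w_c
    kp_iq, ki_iq = Lq * w_c, Rs * w_c

    # Speed loop, symmetrical-optimum flavour: treat the closed current
    # loop as a small lag T_sigma and use a Kessler factor a = 4.
    t_sigma, a_so = 1.0 / w_c, 4.0
    kp_w = J / (a_so * Kt * t_sigma)
    ki_w = kp_w / (a_so ** 2 * t_sigma)
    return (kp_id, ki_id), (kp_iq, ki_iq), (kp_w, ki_w)

# Example call with hypothetical values: Rs = 0.36 Ohm, Ld = Lq = 0.37 mH,
# J = 1e-5 kg m^2, Kt = 0.036 N m/A, 20 kHz PWM.
print(foc_pi_gains(0.36, 0.37e-3, 0.37e-3, 1e-5, 0.036, 1 / 20e3))
```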
#### Perform Control Analysis
The function `mcb_getControlAnalysis` performs the basic control analysis of the PMSM FOC current control system. The function performs frequency domain analysis for the computed PI controller gains used in the field-oriented motor control systems.
Note
This function requires the Control System Toolbox.
When you call this function (for example, `mcb_getControlAnalysis(pmsm,inverter,PU_System,PI_params,Ts,Ts_speed)`), it performs the following functions for the current control loop or subsystem:
• Transfer function for the closed-loop current control system
• Root locus
• Bode diagram
• Stability margins (PM & GM)
• Step response
• PZ map
The function generates the corresponding plots.
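As a rough stand-in for those plots, the closed loop of a PI current controller $C(s) = K_p + K_i/s$ around the electrical plant $G(s) = 1/(Ls + R)$ can be inspected with SciPy (hypothetical values; this is not the `mcb_getControlAnalysis` implementation, which uses Control System Toolbox):

```python
from scipy import signal

R, L = 0.36, 0.37e-3     # hypothetical plant: G(s) = 1 / (L s + R)
Kp, Ki = 0.30, 290.0     # hypothetical PI gains

# Closed loop T(s) = CG/(1+CG) = (Kp s + Ki) / (L s^2 + (R + Kp) s + Ki)
T = signal.TransferFunction([Kp, Ki], [L, R + Kp, Ki])

t, y = signal.step(T)            # step response of the current loop
w, mag, phase = signal.bode(T)   # Bode magnitude (dB) and phase (deg)
print(f"step response settles near {y[-1]:.3f}")
```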
https://jmwhyte.github.io/MathBio.html | # Interests in Mathematical/Statistical Biology
My PhD relates to the use of flow-cell optical biosensors to indirectly study the dynamics of biomolecular interactions. This led to an interest in a variety of problems.
## Structural global identifiability (SGI)?
This concept lurks wherever we wish to “calibrate” a model structure (estimate parameters from data) so that we can proceed to make predictions.
Often we can represent a physical system by a structure of “state-space” systems. These relate state variables (x), observables (y) and possibly inputs (u) through parametric expressions in the parameters $$\boldsymbol{\theta}$$: $\begin{gather} \dot{\bf x}({\bf x}, {\bf u}, t; {\boldsymbol{\theta}}) = {\bf f}({\bf x}, {\bf u}, t; {\boldsymbol{\theta}}) \, , \quad {\bf x}(0) = {\bf x_{0}}(\boldsymbol{\theta}) \\ {\bf y}({\bf x}, {\bf u}, t; {\boldsymbol{\theta}}) = {\bf g}({\bf x}, {\bf u}, t; {\boldsymbol{\theta}}) \, . \end{gather}$ Suppose we have an infinite, error-free record of data, and a correctly specified model structure (composed of parametric relationships between variables). Is it possible to obtain a unique estimate for each parameter? If not, and if alternative parameter estimates produce very different predictions, we cannot make predictions with confidence. This may cause our study to be an unproductive use of time, effort, and resources.
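As a toy illustration of the question (my own example, not from the thesis), consider the scalar structure $\dot x = -\theta_1 x$, $x(0) = \theta_2$, $y = \theta_3 x$, so $y(t) = \theta_2\theta_3 e^{-\theta_1 t}$. Matching output derivatives at $t = 0$ for two parameter vectors shows which parameters ideal data can determine; a SymPy sketch:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
th = sp.symbols('theta1:4', positive=True)   # "true" parameters
ph = sp.symbols('phi1:4', positive=True)     # candidate alternatives

# Output maps y(t; theta) and y(t; phi) for xdot = -a*x, x(0) = b, y = c*x
y_th = th[1] * th[2] * sp.exp(-th[0] * t)
y_ph = ph[1] * ph[2] * sp.exp(-ph[0] * t)

# Equate the first few output derivatives at t = 0 (ideal, error-free data)
eqs = [sp.Eq(sp.diff(y_th, t, k).subs(t, 0),
             sp.diff(y_ph, t, k).subs(t, 0)) for k in range(3)]
print(sp.solve(eqs, list(ph), dict=True))
# Expected: phi1 = theta1 (identifiable), but only the product phi2*phi3
# is pinned down -- theta2 and theta3 are not individually identifiable.
```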
I have considered the problem for model structures used in various biological settings.
## Analysis of linear switching systems representing (e.g.) biosensor data:
Optical biosensor experiments1 provide a means of indirectly observing the interactions of biomolecular species in real time. The experimental setup features an immobilised analyte bound to a sensor surface, and an analyte in solution made to flow over this surface. A series of “kinetic” experiments aim to determine the rate constants of interactions. Experiments typically consist of two or more phases, delineated by a change in experimental conditions. For example, in the association phase, some concentration of analyte is made to flow over the surface for a specified time. In the dissociation phase, the solution is changed to buffer (zero analyte concentration).
In certain cases, the experimental output is appropriately modelled by a linear switching system (LSS). A LSS is a collection of linear time-invariant state-space systems, with a switch that determines which system is in effect at each time point. A schematic of the LSS output is shown in Figure 2 below.
As standard methods in identifiability testing and parameter estimation are not appropriate for an LSS structure, it is necessary to design other methods.
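To make the two-phase structure concrete, here is a minimal simulation of the standard 1:1 interaction model often fitted to such data (rate constants and settings are hypothetical); the change of right-hand side at the switching time is exactly the LSS switching described above:

```python
from scipy.integrate import solve_ivp

ka, kd = 1.0e5, 1.0e-3      # association (1/(M s)) and dissociation (1/s)
Rmax, C = 100.0, 50e-9      # surface capacity (RU), analyte concentration (M)
t_switch, t_end = 300.0, 900.0

# Association phase: dR/dt = ka*C*(Rmax - R) - kd*R, starting from R = 0
assoc = solve_ivp(lambda t, R: ka * C * (Rmax - R[0]) - kd * R[0],
                  (0.0, t_switch), [0.0])

# Dissociation phase (switch to buffer, C = 0): dR/dt = -kd*R
dissoc = solve_ivp(lambda t, R: -kd * R[0],
                   (t_switch, t_end), [assoc.y[0, -1]])

print(f"response at end of association : {assoc.y[0, -1]:.2f} RU")
print(f"response at end of dissociation: {dissoc.y[0, -1]:.2f} RU")
```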
See the abstract of my PhD thesis:2
“Global a priori identifiability of models of flow-cell optical biosensor experiments”, Bulletin of the Australian Mathematical Society 98, no. 2 (2018): 350-352.
### Talks and posters on identifiability analysis
“Approaches to analysing jump dynamical systems in biomolecular kinetics (and elsewhere?)”, talk presented at the ANZIAM Mathematical Biology Special Interest Group (MBSIG) meeting, February 14th, 2022.
“Structural identifiability analysis for switching system structures: towards a toolkit for changing times”, talk presented at Dynamical Systems Applied to Biology and Natural Sciences (DSABNS 2022 Virtual), February 10th, 2022. (DSABNS Contributed Talk Award.)
“Numerical investigation of structural minimality for structures of uncontrolled linear switching systems with Maple”, talk presented at the Maple Conference 2021 (virtual), November 2021.
“My Enemy, My Ally: how useful is this mathematical model?”, talk presented at the ARC Centre of Excellence for Mathematical & Statistical Frontiers (ACEMS) Early-Career Researcher Retreat, November 3rd, 2020.
Talk begins with a brief comparison of mathematics and science fiction.
“Branching out into structural identifiability analysis with Maple”, Maple Conference 2020 (online), Nov 2nd 2020.
“Frustrated mathematical modelling and changeable destinies: Structural identifiability analysis of models to support useful results”, Seminario de Investigación Interdisciplinar para la Innovación en Ciencia y Tecnología, (SICTE Interdisciplinary research, Catholic University of the North, Chile), invited oral presentation online, Oct 20th 2020.
“An introduction to the testing of model structures for global a priori identifiability (with examples drawn from Plasmodium falciparum malaria modelling)”, invited oral presentation for Influencing Public Health Policy with Data-informed Mathematical Models of Infectious Diseases, Creswick, Victoria, July 1st 2019.
“Biological modelling, and rarely asked questions of the 21st century”, poster presentation, BioInfoSummer, University of Western Australia, Perth, December 3rd 2018.
“Biological modelling, and rarely asked questions of the 21st century”, poster presentation, Australian Bioinformatics and Computational Biology Society Conference, Melbourne, November 26th 2018.
#### Work in progress
Whyte, J. M. “Structural minimality of linear switching system structures, as motivated by flow-cell optical biosensors and biomolecular interactions”
1. For a cool video introduction, look here↩︎
2. Won’t you?↩︎
https://conference.ippp.dur.ac.uk/event/470/contributions/2525/ | # The 34th International Symposium on Lattice Field Theory (Lattice 2016)
24-30 July 2016
Highfield Campus, University of Southampton
Europe/London timezone
## Infrared properties of a prototype pNGB model for beyond-SM physics
25 Jul 2016, 14:35
20m
Building 67 Room 1027 (Highfield Campus, University of Southampton)
### Building 67 Room 1027
#### Highfield Campus, University of Southampton
Highfield Campus, Southampton SO17 1BJ, UK
Talk Physics Beyond the Standard Model
### Speaker
Prof. Anna Hasenfratz (University of Colorado)
### Description
We construct a prototype BSM model where the Higgs boson is a pseudo Nambu-Goldstone boson by combining 4 light (massless) flavors and 8 heavy flavors. In the infrared, the SU(4) chiral symmetry is spontaneously broken, while in the ultraviolet the model exhibits the properties of the $N_f=12$ conformal fixed point. The running coupling of this system "walks", and the energy range of walking can be tuned by the mass of the heavy flavors. At the same time, renormalization group considerations predict the spectrum of such a system to show hyperscaling, i.e., hadron masses in units of $F_\pi$ are independent of the heavy mass. Hyperscaling is present for bound states made up of light, heavy, or heavy and light flavors. This observation is supported by numerical observations and makes the model strongly predictive.
### Primary author
Prof. Anna Hasenfratz (University of Colorado)
### Co-authors
Prof. Claudio Rebbi (Boston University) Dr Oliver Witzel (University of Edinburgh)
### Presentation Materials
Slides
https://zbmath.org/?q=an:1163.49003 | # zbMATH — the first resource for mathematics
A hybrid approximation method for equilibrium and fixed point problems for a monotone mapping and a nonexpansive mapping. (English) Zbl 1163.49003
Summary: The purpose of this paper is to present an iterative scheme by a hybrid method for finding a common element of the set of fixed points of a nonexpansive mapping, the set of solutions of an equilibrium problem, and the set of solutions of the variational inequality for $\alpha$-inverse-strongly monotone mappings in the framework of a Hilbert space. We show that the iterative sequence converges strongly to a common element of the above three sets under appropriate conditions. Additionally, the idea of our results is applied to find a zero of a maximal monotone operator and a strictly pseudocontractive mapping in a real Hilbert space.
##### MSC:
49J40 Variational methods including variational inequalities
47H10 Fixed-point theorems for nonlinear operators on topological linear spaces
47H05 Monotone operators (with respect to duality) and generalizations
49M30 Other numerical methods in calculus of variations
47J20 Inequalities involving nonlinear operators
http://math.stackexchange.com/questions/546498/a-short-or-elegant-proof-for-if-p-n2-then-p-n-when-p-is-prime/546499 | # A short or elegant proof for if $p | n^2$ then $p | n$ when $p$ is prime?
Let $n, p \in \mathbb{Z}^{+}$ such that $p$ is prime. Prove $p | n^2 \Rightarrow p | n$.
What is a short or elegant proof to this? Some ideas are given at the question Prove that $\sqrt 5$ is irrational, but I would like to find a proof that does not need to rely upon the Unique Factorization Theorem or Euclid's Lemma, if possible. (In my particular math course, we haven't reached these results; they are yet to be proven and are inaccessible.)
I tried to start this proof by assuming $$\tag 1 p\mid ab\implies p\mid a \text{ or } p\mid b$$ The statement seems obviously true; however, my course requires a pedantic proof of that statement, and I am lost on finding one.
Euclid's Lemma (see snapshot at end) is given in a chapter of the textbook my class has not yet reached, so we may not assume this without proving it first. (But we may assume everything up to and including chapter 8 in this textbook (table of contents).) So I am trying to find a short proof for (1) $$p\mid ab\implies p\mid a \text{ or } p\mid b$$ that does not require Euclid's or Bezout's lemmas, if possible. If it is possible to show (1), then it seems proving $p | n^2 \Rightarrow p | n$ should be straightforward.
I was sketching out the following proof by $4$ cases.
Let $a, b, p, m, r_1, r_2 \in \mathbb{Z}$ where $p$ is prime. Prove: $$p\mid ab\implies p\mid a \text{ or } p\mid b$$ Assume $p\mid ab$. Then $ab = pm$ for some $m \in \mathbb{Z}^{+}$. (Note: Cases 1. and 2. are totally wrong, and so I may ask about this proof, $p\mid ab\implies p\mid a \text{ or } p\mid b$, in a new question.)
1. $p \not \mid a \implies \dots \implies \dots \implies p \mid b$
2. $p \not \mid b \implies \dots \implies \dots \implies p \mid a$
3. $p \mid a$ and $p \mid b$. Done.
4. $p \not \mid a$ and $p \not \mid b$. We must somehow show this leads to a contradiction. Not sure how to do this.
Euclid's Lemma
What's your definition of prime? Can you, at least, divide with remainder? – fedja Oct 31 '13 at 4:04
@fedja, the definition of prime which I am allowed to assume is: prime is an integer $p \ge 2$ with only positive integer divisors $1$ and $p$. – Not a NaN notha Oct 31 '13 at 4:28
What do you call Euclid's Lemma which you don't want to rely on? In my book it is precisely your implication$~(1)$, which indeed seems an inevitable stepping stone to the implication of your question. – Marc van Leeuwen Oct 31 '13 at 8:51
@MarcvanLeeuwen, apologies for the lengthy response. Please see the update. I am intermittently busy with things today, and will read over your answer as soon as I have the chance. – Not a NaN notha Oct 31 '13 at 18:05
What you want to prove is more generally Euclid's lemma, namely if a prime $p$ divides a product $ab$ then $p$ divides at least one of $a$ and $b$. In fact what one really wants is its generalisation to any finite number of factors (which I don't think carries a name or is even explicitly formulated very often), namely that if a prime $p$ divides a product $a_1a_2\ldots a_k$ then there is some $i$ such that $p\mid a_i$; this follows from the $ab$ case by an immediate induction.
The traditional proof of Euclid's lemma uses the fact that for any (positive) integers $m,n$ one can write $\gcd(m,n)=sm+tn$, for some $s,t\in\Bbb Z$ (called Bezout coefficients for $m,n$). Actually only the existence of $\gcd(m,n)$ for all $m,n$ is necessary, but in the strong sense of being a common divisor that is divisible by any (other) common divisor; this is indicated in the first proof given in the WP article. While such a proof is useful to explain that a counterpart of Euclid's lemma continues to be true in more general algebraic settings where Bezout coefficients no longer need to exist (but $\gcd$s do), I don't think it really provides a simpler proof for Euclid's lemma (in $\def\Z{\mathbf Z}\Z$), because the fact that in $\Z$ greatest common divisors in the strong sense exist ultimately depends on the existence of Bezout coefficients anyway.
For the most direct proof from scratch*, I would proceed as follows. Assume $p$ is prime and $p\mid ab$. Let $M=\{\, sp+ta\in \Z_{>0}\mid s,t\in\Z\,\}$, which is a non-empty (since $p\in M$) set of positive integers, and let $d$ be the minimal element of $M$. Any $m\in M$ is divisible by $d$, because if it were not, then the remainder $r$ of the division of $m$ by $d$ would satisfy $r>0$, therefore $r\in M$ (any positive $m_1-qm_2$ is in $M$, for $m_1,m_2\in M$ and $q\in\Z$), and so the condition $r<d$ (which holds by definition for a remainder after division by $d$) would contradict the choice of $d$.
In particular $d$ divides both $p$ and $a$, since $p$ and $|a|$ are elements of $M$. The fact that $d$ is a positive divisor of $p$ implies by the definition of prime number that either $d=1$ or $d=p$. In the latter case $p=d\mid a$ and we are done. In the former case let $s,t\in\Z$ be such that $1=d=sp+ta$, which means in particular that $ta\equiv1\pmod p$. Then $p\mid ab$ implies $p\mid tab$, and one has $b=1b\equiv tab\equiv0\pmod p$, so $p\mid b$; we are done for this case too.
Of course this proof is extracted from things that are usually presented in the form of more general statements about Euclidean division, greatest common divisors, and the Euclidean algorithm.
*I do assume basic facts about division with remainder, and at the end about modular arithmetic (but the latter could be avoided by using explicit witnesses of the modular equivalences used).
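(Not part of the original answer, just a computational sketch.) The minimal positive element $d=sp+ta$ of $M$ is exactly what the extended Euclidean algorithm computes, together with the witnesses $s,t$. Assuming Python as the illustration language, one can check the whole argument on a concrete triple:

```python
# Sketch: compute d = gcd(p, a) together with Bezout witnesses s, t such
# that d = s*p + t*a, then replay the final step of the proof numerically.
def ext_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) = s*a + t*b."""
    if b == 0:
        return a, 1, 0
    g, s, t = ext_gcd(b, a % b)
    return g, t, s - (a // b) * t

p, a, b = 7, 10, 21          # p prime, p | a*b (7 | 210), p does not divide a
d, s, t = ext_gcd(p, a)
assert d == s * p + t * a == 1
# multiply b = 1*b by s*p + t*a = 1: b = s*p*b + t*(a*b); both terms are
# divisible by p, hence p | b, as the proof concludes
assert (s * p * b + t * (a * b)) % p == 0 and b % p == 0
```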
I found your first paragraph to be the most helpful one; namely that "if a prime $p$ divides a product $a_1a_2\ldots a_k$ then there is some $i$ such that $p\mid a_i$; this follows from the $ab$ case by an immediate induction." For the remaining paragraphs, I would have to assume the fact (which has yet to be proven in my class) that for any $m,n \in \mathbb{Z}^{+}$, one can write $\gcd(m,n) = sm + tn$ for some $s,t \in \mathbb{Z}$. Although I may only need to re-prove Euclid's Lemma, is there any way to prove what you wrote in your first paragraph, without assuming this fact? – Not a NaN notha Oct 31 '13 at 23:04
@mathStudent I don't quite understand your question. The proof above does not use anything about $\gcd$s, nor even mentions them (but you may _recognise_ the number $d$ as being $\gcd(p,a)$; I explicitly show that it is a common divisor of $p$ and $a$). As for the statement in the first paragraph, the proof as said is by induction, on $k$: for $k=1$ there is nothing to prove, for $k=2$ it is Euclid's lemma which I show in the remaining paragraphs; for $k>2$, since $p\mid a_1(a_2\cdots a_k)$ one has (EL) either $p\mid a_1$ ($i=1$), or $p\mid a_2\cdots a_k$ and then induction gives $i\geq2$. – Marc van Leeuwen Nov 1 '13 at 7:08
The defining property of a prime number is that $$\tag 1 p\mid ab\implies p\mid a \text{ or } p\mid b$$
Thus, if $p\mid a^2$ then $p\mid a$.
ADD I am guessing you define a number $p$ to be prime if the only divisors of $p$ are $1$ and $p$ itself. Euclid's lemma then gives the "true" definition $(1)$ of a prime number. To get this result, you can use the well ordering principle, that any nonempty subset of the positive integers has a least element. Define first
DEF Let $a,b$ be integers. Then the greatest common divisor of $a,b$ is the unique positive integer $d=(a,b)$ such that $d\mid a,b$ and whenever $f\mid a,b$ then $f\mid d$.
Bezout's Lemma then proves both the existence and the uniqueness of $d=(a,b)$, as follows:
PROP Let $a,b$ be integers. Then $(a,b)$ exists, and is unique.
P Consider the set $\Bbb Za+\Bbb Zb=\{xa+yb:x,y\in\Bbb Z\}$. By considering possible cases of the sign of $a,b$, it is seen that the set of positive elements of $\Bbb Za+\Bbb Zb$ is nonempty (for example, if $a>0,b<0$ then $a-b>0$ is in the set). Let $d$ be the least positive element of $\Bbb Za+\Bbb Zb$.
First, we show $d$ is a common divisor. Indeed, write $a=qd+r$ and $b=q'd+r'$ with $r=0$ or $0<r<d$, and $r'=0$ or $0<r'<d$. Then $a-qd,b-q'd$ are elements of $\Bbb Za+\Bbb Zb$ (check they have the form $xa+yb$). Since a positive element of the set smaller than $d$ is impossible by the definition of $d$, we get $r=r'=0$, so that $d$ divides both $a,b$.
Now, we show $d$ divides every element of $\Bbb Za+\Bbb Zb$. Indeed, pick an element $m$ in the set. Then $m=qd+r$ with $r=0$ or $0<r<d$, by the division algorithm, and again $m-qd=r$ is in $\Bbb Za+\Bbb Zb$, so we must have $r=0$. We have shown thus that $\Bbb Za+\Bbb Zb=\{dz:z\in\Bbb Z\}=\Bbb Z d$. If $f\mid a,b$ then $f\mid ax+by$ for any $x,y$. Since $d$ is of this form by construction, $f\mid d$. Thus $d$ is a greatest common divisor. But if $d'$ is another one, then $d\mid d'$ and $d'\mid d$, since they are both greatest common divisors. Thus $d=\pm d'$; and since they are both positive, by definition, $d=d'$. $\blacktriangle$
OBS The above (very, very important) result can be stated as
$$\Bbb Za+\Bbb Zb=\Bbb Z(a,b)$$
Now Euclid's lemma comes in easily
LEMMA Suppose that $(a,b)=1$ and $a\mid bc$. Then $a\mid c$.
P By Bezout's lemma, we can write $ax+by=1$ for some integers $x,y$. Then $cax+bcy=c$. Since $a\mid ac$ and $a\mid bc$, $a\mid cax+bcy=c$.
Then, you move onto
LEMMA Let $p$ be a prime. Then $(p,a)=1\iff p\not\mid a$.
P Suppose $p\not\mid a$. Since the only divisors of $p$ are $1,p$, then $(p,a)$ can be either $1$ or $p$. Since $p\not\mid a$, $p$ is not a common divisor of $a$ and $p$; thus $(a,p)=1$. If $p\mid a$ then $(p,a)=p\neq 1$.
THM (Defining property of prime numbers) Let $p>1$ be a positive integer. Then $p$ is prime if and only if for any $a,b$ integers, $p\mid ab\implies p\mid a$ or $p\mid b$.
P Suppose that $p$ is a prime, and assume $p\mid ab$. If $p\mid a$, there is nothing to prove. Assume thus that $p\not\mid a$. Then $(p,a)=1$, so Euclid's lemma says $p\mid b$. By symmetry the claim follows by replacing $b$ with $a$. Now suppose $p$ is not prime. Then $p$ has a divisor $1<a<p$, and thus $p=ab$ with $b=p/a$. Then $p\mid ab$, but $p\not\mid a$ and $p\not\mid b$.
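(Added illustration, not from the original answer.) A brute-force sketch in Python that checks the equivalence in the theorem for small $p$, assuming a small test range is convincing enough for a sanity check:

```python
# Sketch: p > 1 is prime iff  p | a*b  implies  p | a or p | b,
# verified exhaustively for small p and small a, b.
def is_prime(p):
    return p > 1 and all(p % d for d in range(2, p))

def has_euclid_property(p, limit=50):
    return all(a % p == 0 or b % p == 0
               for a in range(1, limit) for b in range(1, limit)
               if (a * b) % p == 0)

for p in range(2, 30):
    # for composite p = a*b the pair (a, b) itself is a counterexample
    assert is_prime(p) == has_euclid_property(p)
```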
That was a very fast answer. – Newb Oct 31 '13 at 4:02
This builds Euclid's Lemma into the definition of prime. True, it is the standard way to do it, except in beginning number theory, which uses another definition. – André Nicolas Oct 31 '13 at 4:05
@AndréNicolas I am adding something. – Pedro Tamaroff Oct 31 '13 at 4:16
@PedroTamaroff: I got as far as $$p\mid ab\implies p\mid a \text{ or } p\mid b$$, but did not know how to prove that statement. So this property is defined in the definition of a prime number, or must this property be proven? – Not a NaN notha Oct 31 '13 at 4:25
@mathStudent The proof of this statement is fairly simple: Either $p \mid b$ or $p \not \mid b$. In the first case, we are done. So assume that $p \not \mid b$. Then you should manage to use Bézout's identity to complete the proof. – Gamma Function Oct 31 '13 at 4:31
You can hide the usage of the linear representation lemma, if you want. However, division with remainder is a must with this definition. Otherwise, you'll prove a statement that is too general to be true.
Now, let's prove that if $(a,b)=1$ and $a\mid bc$, then $a\mid c$ by induction on $a$.
$a=1$ is straightforward ($1$ divides everything). Assume that $A\ge 2$ and we know the result for $a<A$. Let $(A,b)=1$ and $A\mid bc$. Dividing with remainder, we can assume without loss of generality that $b<A$ (this is nothing but one step in the Euclidean algorithm, so, as I said, we still have it in disguise). Also, since $(A,b)=1$ and $A\ge 2$, the case $b=0$ is impossible. Write $Ak=bc$. Since $(b,A)=1$, $b\mid Ak$, $b<A$, we have $b\mid k$ by the induction assumption. Hence $k=mb$ and $Am=c$, i.e., $A\mid c$.
This uses the absence of zero divisors, division with remainder, and the induction axiom (which can be relaxed to the condition that no infinite sequence with strictly decreasing norms exists). Removing any of these makes the proof impossible, because one can then construct a ring in which the statement is false.
$\displaystyle{{n \over p}\,n = \mu}$ where $\mu$ is an integer. If $p \not |\ n$, then $\mu$ is not an integer.
And why do you claim $\mu$ is not an integer? – Pedro Tamaroff Oct 31 '13 at 6:28
https://www.earth-syst-sci-data.net/11/1839/2019/
Earth Syst. Sci. Data, 11, 1839–1852, 2019
https://doi.org/10.5194/essd-11-1839-2019
Data description paper | 28 Nov 2019
# Global variability in belowground autotrophic respiration in terrestrial ecosystems
Xiaolu Tang1,2, Shaohui Fan3, Wenjie Zhang4,5, Sicong Gao5, Guo Chen1, and Leilei Shi6
• 1College of Earth Science, Chengdu University of Technology, Chengdu, China
• 2State Environmental Protection Key Laboratory of Synergetic Control and Joint Remediation for Soil & Water Pollution, Chengdu University of Technology, Chengdu, China
• 3Key Laboratory of Bamboo and Rattan, International Centre for Bamboo and Rattan, Beijing, China
• 4State Key Laboratory of Resources and Environmental Information System, Institute of Geographic Sciences and Natural Resources Research, Beijing, China
• 5School of Life Sciences, University of Technology Sydney, Sydney, New South Wales, Australia
• 6Key Laboratory of Geospatial Technology for the Middle and Lower Yellow River Regions, College of Environment and Planning, Henan University, Jinming Avenue, Kaifeng, China
Correspondence: Wenjie Zhang (wenjie.zhang@uts.edu.au) and Sicong Gao (sicong.gao@student.uts.edu.au)
Abstract
Belowground autotrophic respiration (RA) is one of the largest but most highly uncertain carbon flux components in terrestrial ecosystems. However, RA has not been explored globally before and still acts as a “black box” in global carbon cycling currently. Such progress and uncertainty motivate the development of a global RA dataset and understanding its spatial and temporal patterns, causes, and responses to future climate change. We applied the random forest (RF) algorithm to upscale an updated dataset from the Global Soil Respiration Database (v4) – covering all major ecosystem types and climate zones with 449 field observations – using globally gridded temperature, precipitation, soil and other environmental variables. We used a 10-fold cross validation to evaluate the performance of RF in predicting the spatial and temporal pattern of RA. Finally, a globally gridded RA dataset from 1980 to 2012 was produced with a spatial resolution of 0.5° × 0.5° (longitude × latitude) and a temporal resolution of 1 year (expressed in g C m−2 yr−1; grams of carbon per square meter per year).
Globally, mean RA was 43.8±0.4 Pg C yr−1, with a temporally increasing trend of 0.025±0.006 Pg C yr−2 from 1980 to 2012. Such an incremental trend was widespread, representing 58 % of global land. For each 1 °C increase in annual mean temperature, global RA increased by 0.85±0.13 Pg C yr−2, and it was 0.17±0.03 Pg C yr−2 for a 10 mm increase in annual mean precipitation, indicating positive feedback of RA to future climate change. Precipitation was the main dominant climatic driver controlling RA, accounting for 56 % of global land, and was the most widely spread globally, particularly in dry or semi-arid areas, followed by shortwave radiation (25 %) and temperature (19 %). Different temporal patterns for varying climate zones and biomes indicated uneven responses of RA to future climate change, challenging the perspective that the parameters of global carbon simulation are independent of climate zones and biomes. The developed RA dataset, the missing carbon flux component that is not constrained and validated in terrestrial ecosystem models and Earth system models, will provide insights into understanding mechanisms underlying the spatial and temporal variability in belowground vegetation carbon dynamics. The developed RA dataset also has great potential to serve as a benchmark for future data–model comparisons. The developed RA dataset in a common NetCDF format is freely available at https://doi.org/10.6084/m9.figshare.7636193 (Tang et al., 2019).
1 Introduction
Belowground autotrophic respiration (RA) mainly originates from plant roots, mycorrhizae, and other microorganisms in the rhizosphere directly relying on the labile carbon leaked from roots (Hanson et al., 2000; Tang et al., 2016; Wang and Yang, 2007). Thus, RA reflects the photosynthesis-derived carbon respired back to the atmosphere by roots and regulates the allocation of net photosynthetic production to belowground tissues (Högberg et al., 2002). RA is one main component of soil respiration (Hanson et al., 2000), and soil respiration represents the second largest source of carbon fluxes from soil to the atmosphere (after gross primary production – GPP) in the global carbon cycle (Raich and Schlesinger, 1992). Globally, RA could amount to roughly 54 Pg C yr−1 (1 Pg = 10^15 g; calculating RA as an approximate ratio of 0.5 of soil respiration; more details in Hanson et al., 2000) according to different estimates of global soil respiration (Bond-Lamberty, 2018), which is almost 5 times the carbon release from human activities (Le Quéré et al., 2018). However, the contribution of RA to soil respiration varied greatly, from 10 % to 90 %, across biomes, across climate zones and among years (Hanson et al., 2000), leading to strong spatial and temporal variability in RA. Thus, whether RA varies with ecosystem types or climate zones remains an open question at the global scale (Ballantyne et al., 2017). Consequently, an accurate estimate of RA and its spatio-temporal dynamics are critical to understanding the response of terrestrial ecosystems to climate change.
Due to the difficulties in the separation and measurement of RA at varying spatial scales and its diurnal, seasonal and annual variabilities, RA has become one of the largest but most highly uncertain carbon flux components in terrestrial ecosystems. Although individual site measurements of RA have been widely conducted across ecosystem types and biomes, the globally spatial and temporal patterns of RA have not been explored, and RA still acts as a black box in global carbon cycling (Ballantyne et al., 2017). This black box is not well constrained and validated because most terrestrial ecosystem models and Earth system models were commonly calibrated and validated against eddy covariance measurements of net ecosystem carbon exchange (Yang et al., 2013). Such progress and uncertainty motivate the development of a global RA dataset from observations and understanding its spatial and temporal patterns, causes, and responses to future climate change. Despite the general agreement that global soil respiration increased during the last several decades (Bond-Lamberty et al., 2018; Bond-Lamberty and Thomson, 2010; Zhao et al., 2017), how global RA responds to climate change is far from certain because of the different temperature sensitivities of RA across terrestrial ecosystems (Liu et al., 2016; Wang et al., 2014). Therefore, reducing RA uncertainty and clarifying its response to climate change, particularly to temperature and precipitation, is essential for global carbon allocation and future projection of the impact of climate change on global terrestrial carbon cycling.
Although several studies have globally estimated soil respiration and its response to climate variables (Bond-Lamberty and Thomson, 2010; Hursh et al., 2017; Zhao et al., 2017), such efforts have not been made for global RA directly. Hashimoto et al. (2015) indirectly derived RA via the difference between total soil respiration and heterotrophic respiration; however, this might lead to uncertainties due to the inclusion of temperature and precipitation as the only model drivers and a low model efficiency (32 %). Besides temperature and precipitation, other variables, e.g., soil water, carbon and nitrogen content, are additional critical factors regulating RA, and those factors generally vary with biomes and climate zones. Consequently, Hashimoto et al. (2015) may not reflect the key processes affecting RA, such as soil nutrient constraints.
On the other hand, climate-driven models usually explain <50 % of the variability in soil respiration (Bond-Lamberty and Thomson, 2010; Hashimoto et al., 2015; Hursh et al., 2017), which might be another uncertainty source. Recent studies have included more variables and field observations to improve the prediction ability of linear and nonlinear models (Jian et al., 2018b; Zhao et al., 2017); however, this may propagate error because of overfitting and autocorrelation among these variables (Long and Freese, 2006). The random forest (RF; Breiman, 2001) algorithm, a machine-learning approach, can overcome these issues through its hierarchical structure and is insensitive to outliers and noise compared to single classifiers (Breiman, 2001; Tian et al., 2017). RF uses a large ensemble of regression trees, each built with a random selection of predictive variables (Breiman, 2001). RF only requires two free parameter settings: the number of variables sampled as candidates at each split and the number of trees. The performance of the RF model is usually not sensitive to either. Moreover, RF regression can deal with a large number of features, which helps feature selection based on variable importance, and can avoid overfitting (Jian et al., 2018b).
Therefore, we applied the RF algorithm to retrieve global RA based on RA field observations from the most updated Global Soil Respiration Database (SRDB v4; Bond-Lamberty and Thomson, 2018) with the linkage of other global variables (see Materials and methods) for the first time, aiming to (1) develop a globally gridded RA dataset using field observations (named RF-RA), (2) estimate the spatial and temporal patterns of RA at the global scale, (3) identify the dominant driving factors of the spatial and temporal variabilities in RA, and (4) compare the RF-RA dataset with the previous RA estimates from Hashimoto et al. (2015). The developed RF-RA dataset will advance our understanding of global RA and its spatial and temporal variabilities. The RF-RA is expected to serve as a benchmark for global vegetation models and future data–model comparisons, further advancing our knowledge of the covariation of RA with climate, soil and vegetation factors and linking empirical observations temporally and spatially to bridge the knowledge gap across local, regional and global scales.
2 Material and methods
## 2.1 Development of RA observational dataset
First, the RA observational dataset was developed based on the SRDB (v4) across the globe, which is publicly available at https://daac.ornl.gov/cgi-bin/dsviewer.pl?ds_id=1578 (last access: 18 November 2018; Bond-Lamberty and Thomson, 2018). Then, we further updated the dataset using observations collected from Chinese peer-reviewed literature in the China Knowledge Resource Integrated Database (CNKI: https://www.cnki.net/, last access: 1 December 2017) up to March 2018, following the same criteria applied in the SRDB development. To control the data quality, annual RA observations were filtered so that (1) annual RA was directly reported in publications, indicated by the “years of data” of the SRDB; (2) the start and end years were recorded in the literature or expanded from the years of data of the SRDB; (3) soil respiration measurements with alkali absorption and soda lime were not included, due to the potential underestimation of the respiration rate with increasing pressure inside the chamber (Pumpanen et al., 2004); (4) observations with treatments of nitrogen addition, air and soil warming, and rain and litter exclusion were not included, except for cropland; and (5) observations with potential problems, labeled as Q10 (potential problem with data), Q11 (suspected problem with data), Q12 (known problem with data), Q13 (duplicate) and Q14 (inconsistency), were excluded. Finally, this study included a total of 449 field observations (Fig. 1), including 68 observations from CNKI. RA observations were dominated by forest ecosystems (379 observations) and were globally unevenly distributed, coming mainly from China, America and Europe. Although there was a lack of RA observations in Australia, Russia, Africa, and South America, our dataset covered all major ecosystem types and climate zones across the globe.
Figure 1. Distribution of observational sites used to develop the globally gridded RF-RA dataset.
## 2.2 Vegetation, climate and soil data
A total of 11 environmental variables were used to model global RA (Table 1). Specifically, global land cover with a half-degree resolution was obtained from MODIS Land Cover (MCD12Q1 v5; Friedl et al., 2010). The monthly gridded temperature, precipitation, diurnal temperature range, potential evapotranspiration and self-calibrated Palmer drought severity index (PDSI) at the 0.5° resolution were obtained from Climatic Research Unit (CRU) Time Series (TS) Version 4.01 from 1901 to 2016 (Harris et al., 2014; van der Schrier et al., 2013). Monthly shortwave radiation (SWR; Kalnay et al., 1996) and soil water content (van den Dool et al., 2003) at the 0.5° resolution were from the National Oceanic and Atmospheric Administration Earth System Research Laboratory (NOAA ESRL) Physical Sciences Division. Soil organic carbon content with a resolution of 250 m was downloaded from SoilGrids (Hengl et al., 2017), and soil nitrogen density was from the Global Soil Data Task of the International Geosphere-Biosphere Programme (IGBP; Global Soil Data, 2000), while monthly nitrogen deposition data with a resolution of 0.5° were downloaded from the Earth system models GISS-E2-R, CCSM-CAM3.5 and GFDL-AM3, providing coverage since the 1850s (Lamarque et al., 2013). The monthly global variables were first aggregated to the annual scale, and those without a 0.5° resolution were then resampled to 0.5° using bilinear interpolation. These variables represent different aspects controlling RA variability. For instance, temperature, precipitation and soil water content are the most important variables controlling plant photosynthesis, which is the primary carbon source of RA (Högberg et al., 2002, 2001). Finally, the global variables at each site, extracted by coordinates, were matched to the corresponding annual RA estimates from the SRDB.
Table 1. Global variables used for producing the global RA dataset.
## 2.3 Random forest-based RA modeling
An RF model was trained with the 11 variables listed in Table 1 using the caret package linked with the randomForest package in R 3.4.4 (Kabacoff, 2015); the trained model was then applied to estimate gridded RA at the 0.5° × 0.5° resolution over 1980–2012. The performance of RF was assessed by a 10-fold cross validation (CV), in which the whole dataset was subdivided into 10 parts with an approximately equal number of samples, and the target values for each of these 10 parts were predicted by a model trained on the remaining nine parts. Two statistics were employed in model assessment: modeling efficiency (R2) and root-mean-square error (RMSE; Yao et al., 2018). The 10-fold CV result showed that RF performed well and could capture the spatial and temporal pattern of RA (Fig. S1 in the Supplement).
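The model itself was fitted in R with caret/randomForest; as a purely illustrative sketch of the same 10-fold-CV workflow, the following Python/scikit-learn snippet reproduces the structure on synthetic stand-in data (the predictor matrix and RA values below are fabricated placeholders, not the SRDB observations):

```python
# Minimal sketch (not the authors' code): 10-fold CV of a random forest
# regressor on a site-by-variable table, mirroring the caret workflow.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
n = 449                                   # number of field observations
X = rng.normal(size=(n, 11))              # 11 predictors (cf. Table 1)
y = 200 + 50 * X[:, 0] + rng.normal(scale=30, size=n)  # synthetic annual RA

rf = RandomForestRegressor(n_estimators=500, random_state=0)
cv = KFold(n_splits=10, shuffle=True, random_state=0)
y_hat = cross_val_predict(rf, X, y, cv=cv)   # out-of-fold predictions

print("CV R2  :", r2_score(y, y_hat))
print("CV RMSE:", np.sqrt(mean_squared_error(y, y_hat)))
```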
## 2.4 Temporal trend analysis
We applied Theil–Sen linear regression to estimate the temporal trend of RA and its driving variables for each grid cell. The Theil–Sen estimator is a median-based non-parametric slope estimator which has been widely used for spatial analysis of carbon flux time series (Forkel et al., 2016; Zhang et al., 2017). The Mann–Kendall non-parametric test was applied to assess the significance of the trend in RA and its driving factors for each grid cell (p<0.05).
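For one grid cell, the per-cell computation can be sketched as follows (assuming SciPy; the significance here is obtained from a Kendall's-tau test against time, a standard stand-in for the Mann–Kendall trend test, and the RA series is synthetic):

```python
# Sketch: Theil-Sen slope plus a Mann-Kendall-style significance test
# for one grid cell's annual RA series (1980-2012).
import numpy as np
from scipy.stats import theilslopes, kendalltau

years = np.arange(1980, 2013)
ra = (430 + 0.8 * (years - 1980)
      + np.random.default_rng(1).normal(scale=5, size=years.size))

slope, intercept, lo, hi = theilslopes(ra, years)  # slope in g C m-2 yr-2
tau, p_value = kendalltau(years, ra)               # monotonic-trend test

print(f"Theil-Sen slope: {slope:.3f} (95% CI {lo:.3f}..{hi:.3f}), "
      f"MK-style p = {p_value:.4f}")
```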
## 2.5 Relationships between RA and climate variables
Mean annual temperature, precipitation and shortwave radiation were considered to be the most important proxies driving RA. The relationships between RA and temperature, precipitation and shortwave radiation were analyzed by partial correlation for each grid cell. The absolute value of the correlation coefficient of these three variables was used in an RGB (red, green, blue) combination to indicate the dominant factors of RA.
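The per-cell partial correlation can be sketched by residualizing: regress the other two drivers out of both RA and the driver of interest, then correlate the residuals. The Python sketch below uses synthetic series, and all variable names are illustrative placeholders:

```python
# Sketch of a partial correlation for one grid cell: r(RA, MAT | MAP, SWR).
import numpy as np

def residualize(y, controls):
    """Return residuals of y after an OLS fit on the control variables."""
    A = np.column_stack([np.ones(len(y)), controls])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return y - A @ beta

rng = np.random.default_rng(2)
n = 33                                    # years 1980-2012
mat, map_, swr = rng.normal(size=(3, n))  # temperature, precipitation, radiation
ra = 0.6 * mat + 0.3 * map_ + rng.normal(scale=0.5, size=n)

controls = np.column_stack([map_, swr])
r_partial = np.corrcoef(residualize(ra, controls),
                        residualize(mat, controls))[0, 1]
print("partial r(RA, MAT | MAP, SWR):", round(r_partial, 3))
```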
## 2.6 Cross comparisons with Hashimoto2015-RA
To further compare the differences between the RF-RA dataset and the RA developed by Hashimoto et al. (2015; named Hashimoto2015-RA), the comparison map profile (CMP) method was applied. Hashimoto developed a climate-driven model by updating Raich's model, which simulated soil respiration as a function of temperature and water (precipitation) at a monthly time step (Hashimoto et al., 2015; Raich et al., 2002). Therefore, to get a global estimate of soil respiration at a monthly scale, the globally gridded air temperature and precipitation with a spatial resolution of 0.5° were derived from University of East Anglia CRU 3.21 (Harris et al., 2014), and 1638 field observations were taken from the SRDB (v3) for the model parameterization (Hashimoto et al., 2015). Monthly soil respiration was summed to a yearly scale. Furthermore, annual soil respiration was divided into autotrophic and heterotrophic respiration using a global relationship between soil respiration and heterotrophic respiration derived from a meta-analysis (Bond-Lamberty et al., 2004). This global relationship can be expressed as
$$\ln(\mathrm{RH}) = 1.22 + 0.73\,\ln(\mathrm{RS}), \tag{1}$$
where RH means annual heterotrophic respiration, and RS stands for annual soil respiration (expressed in g C m−2 yr−1).
Therefore, global Hashimoto2015-RA was derived from the difference between soil respiration and heterotrophic respiration. The monthly or annual Hashimoto2015-RA dataset can be freely accessed at http://cse.ffpri.affrc.go.jp/shojih/data/index.html (last access: February 2016; Hashimoto et al., 2015).
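To make the derivation concrete, a short sketch (not from the paper) of how RA follows from Eq. (1) for a few arbitrary illustrative RS values:

```python
# Sketch of the Hashimoto2015-RA derivation: heterotrophic respiration is
# predicted from total soil respiration via Eq. (1), and RA is the residual.
import numpy as np

rs = np.array([300.0, 600.0, 900.0])    # annual soil respiration, g C m-2 yr-1
rh = np.exp(1.22 + 0.73 * np.log(rs))   # Eq. (1)
ra = rs - rh                            # belowground autotrophic respiration

for s, a in zip(rs, ra):
    print(f"RS = {s:6.0f}  ->  RA = {a:6.1f} g C m-2 yr-1")
```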
The CMP was developed based on absolute distance (D) and the cross-correlation coefficient (CC) on multiple scales (Gaucherel et al., 2008). D and CC reflect the similarity of the data values and of the spatial structure of two images of the same size, respectively (Gaucherel et al., 2008). Low D and high CC indicate good agreement between the compared images, while high D and low CC indicate poor agreement. The D among moving windows of the two compared images was calculated by Eq. (2) (Gaucherel et al., 2008):
$$D = \left|\bar{x} - \bar{y}\right|, \tag{2}$$
where $\bar{x}$ and $\bar{y}$ are averages calculated over the two moving windows (3 pixels × 3 pixels to 41 pixels × 41 pixels in this study). Finally, D was averaged over the different scales.
The CC was calculated by Eq. (3) (Gaucherel et al., 2008):
$$\mathrm{CC} = \frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{\left(x_{ij}-\bar{x}\right)\left(y_{ij}-\bar{y}\right)}{\sigma_x\,\sigma_y}, \tag{3}$$
$$\text{with}\quad \sigma_x^2 = \frac{1}{N^2-1}\sum_{i=1}^{N}\sum_{j=1}^{N}\left(x_{ij}-\bar{x}\right)^2, \tag{4}$$
where $x_{ij}$ and $y_{ij}$ are the pixel values at row $i$ and column $j$ of the two moving windows of the compared images, respectively, and $N^2$ is the number of pixels in each moving window, while $\sigma_x$ and $\sigma_y$ are the standard deviations calculated from the two moving windows. Finally, as for D, CC was averaged over the different scales.
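A direct transcription of Eqs. (2)–(4) for a single pair of windows might look like the sketch below (synthetic windows; in the actual CMP these statistics are computed at every window position and then averaged over window sizes):

```python
# Sketch of the CMP statistics for one pair of moving windows:
# mean absolute distance D (Eq. 2) and cross-correlation CC (Eqs. 3-4).
import numpy as np

def cmp_stats(x, y):
    d = abs(x.mean() - y.mean())                            # Eq. (2)
    n2 = x.size                                             # N^2 pixels
    sx = np.sqrt(((x - x.mean()) ** 2).sum() / (n2 - 1))    # Eq. (4)
    sy = np.sqrt(((y - y.mean()) ** 2).sum() / (n2 - 1))
    cc = ((x - x.mean()) * (y - y.mean())).sum() / (n2 * sx * sy)  # Eq. (3)
    return d, cc

rng = np.random.default_rng(3)
w1 = rng.normal(500, 50, size=(3, 3))       # 3x3 window, e.g. from RF-RA
w2 = w1 + rng.normal(20, 10, size=(3, 3))   # matching Hashimoto2015-RA window
print("D = %.1f, CC = %.2f" % cmp_stats(w1.ravel(), w2.ravel()))
```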
3 Results
## 3.1 Spatial patterns of RA
Figure 2. Spatial patterns of annual mean and standard deviation for RF-RA (a, c) and Hashimoto2015-RA (b, d) from 1980 to 2012. The standard deviation was applied to characterize the inter-annual variability following Yao et al. (2018).
The RF-RA dataset presented great spatial variability globally during 1980–2012 (Figs. 2a and 3). The largest RA fluxes occurred in tropical regions, particularly in the tropical Amazon and southeastern areas, which generally have a high RA of more than 700 g C m−2 yr−1 (grams of carbon per square meter per year). Following the tropical areas, the subtropics, e.g., southern China and eastern America, and humid temperate areas, e.g., North America and western and central Europe, had typically moderate RA fluxes of 400–600 g C m−2 yr−1. By contrast, relatively low RA fluxes occurred in areas with sparse vegetation cover and a cold and dry climate, e.g., boreal and tundra regions, which have low temperatures and a short growing season. Besides this, dry or semi-arid areas, e.g., northwestern China and the Middle East, also had typically low RA fluxes below 200 g C m−2 yr−1, often limited by water availability.
Figure 3. Latitudinal pattern for RF-RA and Hashimoto2015-RA. The grey area indicates the 2.5th to 97.5th percentile range of the RF-RA.
The most significant RA inter-annual variability (expressed as standard deviation; Fig. 2c) was found in tropical or subtropical regions, with values above 80 g C m−2 yr−1, while most areas remained less variable, with values less than 40 g C m−2 yr−1. Latitudinally, zonal mean RA increased from cold and dry biomes (tundra and semi-arid) to warm and humid biomes (temperate and tropical forest; Fig. 3), reflecting a gradient from more to fewer environmental limitations. RA varied from 112±21 g C m−2 yr−1 at about 70° N to 552±101 g C m−2 yr−1 at the Equator. Between 10–25° S and 15–20° N, due to water limitation, zonal mean RA experienced a slight decrease. With the increase in water availability, RA then reached secondary peaks at around 20° N and 40° S, respectively.
Compared to RF-RA, Hashimoto2015-RA presented a similar latitudinal pattern, with the highest RA fluxes in tropical regions characterized by a warm and humid climate, followed by subtropical regions, and the lowest RA in boreal areas characterized by a cold and dry climate (Fig. 2b). The most significant change occurred in tropical areas and central Australia. However, it is worth noting that some clear differences between data-derived RA and Hashimoto2015-RA existed (Fig. 4): specifically, there was a remarkable difference of above 300 g C m−2 yr−1 for the southern Amazon and 200 g C m−2 yr−1 for subtropical China. Although most areas between RF-RA and Hashimoto2015-RA expressed high and positive correlations, some areas, such as the Middle East, western Russia, eastern America and northern Japan, showed negative correlations.
Figure 4. Comparison of RF-RA with Hashimoto2015-RA based on absolute distance (a) and cross correlation (b).
## 3.2 Spatial pattern of RA trend
Figure 5. Spatial patterns of the temporal trend for RF-RA and Hashimoto2015-RA during 1980–2012.
The trend of RF-RA showed heterogeneous spatial patterns (Fig. 5). A total of 58 % of global areas experienced an increasing trend during 1980–2012 (calculated from cell areas), and 33 % of these areas showed a significant change (p<0.05). Generally, the change trend for the majority of areas was from −4 to 4 g C m−2 yr−2, while the most striking increase occurred in eastern Russia and in tropical and eastern regions of Africa, with an increasing trend above 5 g C m−2 yr−2. Similarly, 77 % of global areas of Hashimoto2015-RA had an increasing trend, 46 % of which were statistically significant (p<0.05).
## 3.3 Total RA and its temporal trend
Figure 6. Annual variability in RF-RA (a) and Hashimoto2015-RA (b) from 1980 to 2012. The grey area represents the 95 % confidence interval.
Mean global RA was 43.8±0.4 Pg C yr−1 during 1980–2012, varying from 42.9 Pg C yr−1 in 1992 to 44.9 Pg C yr−1 in 2010, with a significant increasing trend of 0.025±0.006 Pg C yr−2 (0.06 % yr−1, p<0.001; Fig. 6a) despite high annual variability. Similarly, a rising trend was also observed for Hashimoto2015-RA (0.073±0.009 Pg C yr−2, p<0.001; Fig. 6b), which was higher than that of RF-RA. The annual mean of Hashimoto2015-RA was 40.5±0.9 Pg C yr−1.
Figure 7. Total amount of RF-RA and Hashimoto2015-RA for three climate zones and eight biomes during 1980–2012. The three climate zones are defined as boreal, temperate and tropical regions according to Peel et al. (2007), while the eight biomes include boreal forest, cropland, grassland, savannas, shrubland, temperate forest, tropical forest and wetland. The error bars indicate standard deviation.
RA and its trend were also evaluated for three climate zones (boreal, temperate and tropical areas based on the Köppen–Geiger climate classification) and eight major biomes (boreal forest, cropland, grassland, savannas, shrubland, temperate forest, tropical forest and wetland; Fig. 7). The tropics had the highest RA, 15.6±0.2 Pg C yr−1, followed by temperate regions, with 9.3±0.1 Pg C yr−1, and boreal areas represented the lowest RA, 6.7±0.1 Pg C yr−1. These three climate zones were the main contributors to global RA, accounting for 72 %. Temporally, considerable RA inter-annual variability existed in these three climate zones (Fig. S2). Specifically, RA in tropical and boreal zones showed a significantly increasing trend from 1980 to 2012, with increasing rates of 0.013±0.003 and 0.008±0.002 Pg C yr−2, respectively. However, RA in temperate zones presented a slightly decreasing trend of −0.003±0.001 Pg C yr−2 (p=0.048), although strong variability was observed.
In terms of biomes, tropical forest had the highest RA, followed by the widely distributed cropland and savannas (Fig. 7), while wetland had the lowest RA due to its limited land cover. RA showed a significantly increasing trend during 1980–2012 (all p<0.01) in the majority of biomes, except for temperate forest, savannas and wetland. RA in tropical forest, boreal forest and cropland increased by 0.0076±0.0015, 0.0047±0.0016 and 0.0036±0.0014 Pg C yr−2, respectively. Compared to RF-RA, Hashimoto2015-RA for the three climate zones and eight biomes generally produced similar change patterns, although differences in magnitude existed (Figs. 7, S2 and S3). However, there were significant increasing trends of total RA in the temperate zones, temperate forest, savannas and wetland of Hashimoto2015-RA which were not observed in RF-RA.
Figure 8. The correlation between RA and temperature and precipitation: (a, b) for RF-RA and (c, d) for Hashimoto2015-RA. The anomaly was calculated as the difference between the temperature or precipitation of the corresponding year and the mean of 1980–2012. *** indicates significance at the 0.001 level.
RA was significantly correlated with the temperature anomaly (R2=0.59, p<0.001) and precipitation anomaly (R2=0.50, p<0.001; Fig. 8). On average, RA increased by 0.85±0.13 Pg C yr−2 for a 1 °C increment in mean annual temperature and by 0.17±0.03 Pg C yr−2 for a 10 mm increase in mean annual precipitation. However, different biomes and climate zones showed uneven responses to the temperature and precipitation changes (Figs. S4 and S5). For example, no significant correlations were found between RA in the temperate zone, savannas and wetland and the temperature anomaly, while other climate zones and biomes were significantly correlated with the temperature and precipitation anomalies.
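The anomaly regression behind these numbers is a plain least-squares fit; a sketch with synthetic values (the coefficients below are seeded to mimic, not reproduce, the reported sensitivity):

```python
# Sketch of the anomaly regression behind Fig. 8: annual global total RA
# regressed on the temperature anomaly (all numbers synthetic).
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(4)
t_anom = rng.normal(scale=0.4, size=33)     # temperature anomaly, 1980-2012
ra_total = 43.8 + 0.85 * t_anom + rng.normal(scale=0.2, size=33)  # Pg C yr-1

fit = linregress(t_anom, ra_total)
print(f"slope = {fit.slope:.2f} Pg C per degree C, R2 = {fit.rvalue**2:.2f}")
```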
4 Dominant factors for RA variability
Figure 9. Dominant driving factors for RF-RA (a) and Hashimoto2015-RA (b). MAT is mean annual temperature, MAP is mean annual precipitation and SWR is shortwave radiation.
The dominant environmental factor was examined with partial regression coefficients when regressing RA against annual mean temperature, precipitation and shortwave radiation. Latitudinally, higher mean annual temperature, precipitation and shortwave radiation were associated with higher RA along the major latitudinal gradients (positive partial correlations; Fig. S6). Spatially, the dominant environmental factor varied greatly globally (Fig. 9). Precipitation was the most important dominant factor for the spatial pattern of RA among the three environmental controls, covering about 56 % of global land (Fig. 10), and it was widely distributed globally, particularly in dry or semi-arid areas such as northwestern China, southern Africa, central Australia and America. Temperature dominated about 19 % of global land, mainly in tropical Africa, the southern Amazon rainforest and Siberia, and partly in the tundra. The rest of the land (25 %) was dominated by shortwave radiation, primarily covering boreal areas above 50° N in eastern America and central and eastern Russia. Similarly, precipitation was also the most important dominant factor for Hashimoto2015-RA, dominating about 77 % of land, while temperature and shortwave radiation dominated 13 % and 10 % of land. However, their spatial patterns varied greatly compared to RF-RA. For example, temperature was the main dominant factor for most areas in Australia for Hashimoto2015-RA, while RF-RA indicated that precipitation and shortwave radiation dominated such areas (Fig. 9).
Figure 10. The percentage of land (calculated from cell areas) dominated by mean annual temperature (MAT), precipitation (MAP) and shortwave radiation (SWR) for RF-RA and Hashimoto2015-RA.
5 Discussion
## 5.1 Global RA
Despite great efforts to quantify global soil carbon fluxes and their spatial and temporal patterns (Bond-Lamberty and Thomson, 2010; Hursh et al., 2017; Jian et al., 2018b), to our knowledge, no attempt has been made to assess RA using a machine-learning approach linking a large number of empirical measurements, and RA's spatial and temporal patterns remain highly uncertain. Such uncertainties justify the development of a global RA dataset derived from observations to understand its spatial and temporal patterns, causes, and responses to future climate change. Based on the most updated observations from the SRDB (Bond-Lamberty and Thomson, 2018) and Chinese peer-reviewed literature, we, for the first time, applied the RF algorithm to develop the RF-RA dataset and estimate the temporal and spatial variability in global RA and its response to environmental variables, which can indeed contribute to reducing RA uncertainties.
Globally, mean annual RA amounted to 43.8±0.4 Pg C yr−1 from 1980 to 2012 (Fig. 6). It was slightly higher than Hashimoto2015-RA (40.5±0.9 Pg C yr−1), and there was great divergence in spatial and temporal patterns (see the discussion in the comparison with Hashimoto2015-RA below). Because there is no direct estimate of global RA, the RF-RA dataset was compared with RA estimates derived from total soil respiration multiplied by the reported proportion of RA (or of heterotrophic respiration). The global average proportion of RA ranged from 0.37 to 0.46 over 1990–2014 (calculated from Bond-Lamberty et al., 2018), while global soil respiration was 67 to 108 Pg C yr−1 according to different estimates; thus global RA varied from 25 to 51 Pg C yr−1. The developed RF-RA dataset fell into this range. Similarly, RA increased by 0.025±0.006 Pg C yr−2 during 1980–2012. Such an increase may be related to increasing photosynthesis due to global warming and CO2 fertilization effects, which could increase carbon availability in plant-derived substrate inputs into the soil (e.g., root exudates and biomass) for root metabolism (Piñeiro et al., 2017; Zhou et al., 2016). This annual increase accounted for about 25 % of the global soil respiration increase (0.09 and 0.1 Pg C yr−2; Bond-Lamberty and Thomson, 2010; Hashimoto et al., 2015), suggesting that about one-quarter of the total soil respiration increment due to climate change came from RA.
With a 1 °C increase in global mean temperature, RA will increase by 0.85±0.13 Pg C yr−2, and by 0.17±0.03 Pg C yr−2 for a 10 mm increase in precipitation, indicating that carbon fluxes from RA might feed back positively to future climate change, which is typically characterized by increasing temperature and changes in precipitation (IPCC, 2013). However, the RA increment varied with climate zones and ecosystem types (Figs. S2 and S3), similar to previous findings in which total soil respiration or RA varied with climate zones or ecosystem types (Ballantyne et al., 2017; Jian et al., 2018a). These differences may be related to regional heterogeneity and plant functional traits. For example, regional temperature significantly differed from the global average (Huang et al., 2012), with much faster change in high-latitude regions (Hartmann et al., 2014), and semi-arid climates dominated the trend and variability in the global land CO2 sink (Ahlström et al., 2015). Therefore, the regionally uneven responses of RA to climatic variables highlight the urgent need to account for regional heterogeneity when studying the effects of climate change on ecosystem carbon dynamics in the future.
RF-RA also has important implications for carbon allocation from photosynthesis. The immediate carbon substrates for RA are primarily derived from recent photosynthesis (Högberg et al., 2001; Subke et al., 2011). The strong correlation between photosynthesis and RA provides evidence of their close coupling (Chen et al., 2014; Kuzyakov and Gavrichkova, 2010). Globally, GPP was about 125 Pg C yr−1 during the last few decades (Bodesheim et al., 2018; Zhang et al., 2017). Thus, roots respired more than one-third of the carbon from GPP, suggesting that, apart from the carbon used for constructing belowground tissues, a large proportion of carbon is returned to the atmosphere through root respiration. However, it should be noted that acquiring soil nutrients for vegetation growth requires root respiration, which may affect the RA flux.
## 5.2 Dominant factors
Spatially, the dominant driving factors for RA varied greatly. Temperature and shortwave radiation were the main driving factors in high-latitude areas above 50° N (Fig. 9a). This result was not surprising because RA was positively correlated with temperature and photosynthesis (indirectly reflecting solar radiation) (Chen et al., 2014; Tang et al., 2016), and high-latitude regions were always limited by temperature or energy, leading to low RA as well (Fig. 3a).
Globally, precipitation was the most important factor, covering about 56 % of land (Figs. 9a and 10). Precipitation was always considered to be a proxy for soil water content (Hursh et al., 2017; Yao et al., 2018), and such wide dominance of precipitation over RA is related to the mechanisms by which soil water availability drives RA. First, soil water exists in the form of ice when the temperature is below zero; in this case, plants and soil microbes cannot directly use it for growth or respiration. This could be observed in some boreal areas where precipitation was the dominant factor of RA (Fig. 9a). Second, soil water content that is too high or too low (e.g., flooding and drought) could limit the mobility of substrates and belowground carbon input, which could affect RA. Yan et al. (2014) found that soil respiration decreased once soil water content was below a lower (14.8 %) or above an upper (26.2 %) threshold in a poplar plantation. Similarly, Gomez-Casanovas et al. (2012) also found that RA decreased when soil water content was above 30 %. These results seem to support our findings. Third, the relationship between soil water content and RA or total soil respiration is more complex than that between temperature and soil respiration. Numerous formulas, such as linear (Tang et al., 2016), polynomial (Moyano et al., 2012), logarithmic (Schaefer et al., 2009) or quadratic (Hursh et al., 2017) models, have been widely applied to describe the relationship between soil water content and soil respiration. These multifarious relationships may occur because soil water content affects RA in multiple ways. Meanwhile, seasonal variability in precipitation and soil water content is often correlated with temperature (Feng and Liu, 2015), making the relationship between soil water content and RA more complex.
Similarly, the dominance of precipitation for Hashimoto2015-RA was also widely observed (Fig. 9b), covering 77 % of land (Fig. 10). Although this percentage was 21 % higher than that for RF-RA, both results demonstrated that global RA over the majority of land is dominated by precipitation. However, it is notable that the dominant environmental factor controlling the spatial carbon flux gradient differs among years (Reichstein et al., 2007), e.g., under extreme climates and climatic disturbance.
## 5.3 Comparison with Hashimoto2015-RA
Globally, total RF-RA was slightly higher than Hashimoto2015-RA; however, great divergence was observed both spatially and temporally (Fig. 6), particularly in tropical regions, where RF-RA was much lower than Hashimoto2015-RA (Fig. 3). These differences could be attributed to several reasons. First, the two RA datasets had different land cover areas, especially in desert areas in North Africa, where very sparse or no vegetation exists. When RF-RA was masked to the Hashimoto2015-RA land area, global RA was 39.6±0.4 Pg C yr−1, very close to Hashimoto2015-RA (Fig. S8). Second, different predictors and algorithms were applied for the RF-RA and Hashimoto2015-RA predictions. Besides temperature and precipitation, RA is also affected by soil nutrients, carbon substrate supply, belowground carbon allocation, site disturbance and other variables (Chen et al., 2014; Hashimoto et al., 2015; Tang et al., 2016; Zhou et al., 2016). Hashimoto2015-RA was calculated from the difference between total soil respiration and heterotrophic respiration, which were predicted by a simple climate-driven model using temperature and precipitation only (Hashimoto et al., 2015). Thus, Hashimoto2015-RA could not reflect soil nutrient and other environmental constraints. To overcome such limitations, besides temperature and precipitation, we included soil water content, soil nitrogen and soil organic carbon as proxies for the environmental and nutrient constraints on RA and considered the interactions among these variables using RF, achieving a model efficiency of 0.52 for RA prediction (Fig. S1), higher than that for Hashimoto soil respiration, with a model efficiency of 0.32 (Hashimoto et al., 2015). The simple climate model for Hashimoto soil respiration has both advantages and limitations (Hashimoto et al., 2015). Third, the empirical model (the relationship between total soil respiration and heterotrophic respiration) from which Hashimoto2015-RA is derived originated from forest ecosystems (Bond-Lamberty et al., 2004; Hashimoto et al., 2015), which may bring uncertainties for other ecosystems. For example, the difference between RF-RA and Hashimoto2015-RA was up to 350 g C m−2 yr−1 in southern and northern Amazon areas and in Madagascar, where savannas are widely distributed (Fig. 4); thus Hashimoto2015-RA might not capture the spatial and temporal pattern of RA for non-forest ecosystems. Including more environmental variables and improving the algorithm could substantially reduce the uncertainty in modeling RA.
## 5.4 Advantages, limitations and uncertainties
Generally, the developed RF-RA dataset had four main advantages in estimating global RA. First, the RF-RA dataset was, to our knowledge, the first attempt to model RA using a large number of empirical field observations, and the spatial and temporal patterns of RA were investigated globally. In contrast, most previous studies mainly focused on global soil respiration, which was not partitioned into RA and heterotrophic respiration globally (Hursh et al., 2017; Jian et al., 2018b; Zhao et al., 2017). Second, we used an up-to-date field observational dataset developed from the SRDB up to November 2018 (Bond-Lamberty and Thomson, 2018) and updated it by including 68 observations from Chinese peer-reviewed literature. This newly updated dataset included a total of 449 field observations (Fig. 1). These observations covered a wide range of global terrestrial ecosystems and represented all major biomes and climate zones. Third, the global terrestrial ecosystems were separated into eight biomes, including boreal forest, cropland, grassland, savannas, shrubland, temperate forest, tropical forest and wetland. The total RA and its inter-annual variability were evaluated for each of the eight biomes (Figs. S3 and S4). In addition, total RA and its inter-annual variability were also assessed for three climate zones – boreal, temperate and tropical zones (Figs. S2 and S5) – according to the Köppen–Geiger climate classification system (Peel et al., 2007). Different temporal change trends across biomes and climate zones further indicated uneven responses of RA to climate change across the globe. Fourth, we used an RF algorithm to model and map global RA with the linkage of climate, soil and other environmental predictors. The results showed that RF could capture the spatial and temporal variability in RA well (Fig. S1). Compared to linear regressions for soil respiration prediction (there was no global RA prediction before this study) with a model efficiency of less than 35 % (Bond-Lamberty and Thomson, 2010; Hashimoto et al., 2015; Hursh et al., 2017), the RF algorithm achieved a much higher model efficiency, at 52 %, which indeed improved the RA modeling and reduced the uncertainties.
Although data-derived global RA could serve as a benchmark for global-carbon-cycle modeling, and the RF-RA filled the data gaps of global RA, limitations and uncertainties still remained in a few aspects. First, although we conducted data quality control when developing the RF-RA dataset, the lack of a reliable approach for separating RA and heterotrophic respiration may be an important source of uncertainty in RA estimates. There are several approaches, e.g., trenching, stable or radioactive isotopes, and girdling, to partition soil respiration (Bond-Lamberty et al., 2004; Högberg et al., 2001; Hanson et al., 2000); however, each of these approaches has its own limitations. For example, trenching has been widely applied in partitioning RA and heterotrophic respiration due to its easy operation and low cost. On the other hand, heterotrophic respiration may be increased by the termination of water uptake by roots and the decomposition of remaining dead roots in trenched plots (Hanson et al., 2000; Tang et al., 2016). Commonly, RA was calculated from the difference between total soil respiration and heterotrophic respiration; thus the trenching approach might lead to an underestimation of RA. In our dataset, a total of 254 RA observations were estimated by the trenching approach, while the remaining RA observations were estimated by other separation approaches, e.g., isotopes, radiocarbon and mass balance. Thus, inconsistent separation approaches could be another source of uncertainty in RA values.
Second, owing to the limited number of RA observations at daily or monthly scales, the RF-RA dataset was produced at an annual scale. Although no study has directly compared RA upscaled from daily or monthly data with RA upscaled from annual data, the substantial differences found for soil respiration upscaled from different timescales (Jian et al., 2018b) indirectly illustrate the potential timescale dependence of RA upscaling.
Third, the effects of rising atmospheric CO2 on root growth were not explicitly represented when developing the RF-RA dataset, although CO2 fertilization effects may be partly reflected in the increased temperatures. While the magnitude of CO2 fertilization effects on photosynthesis is still uncertain (Gray et al., 2016), RF or other machine-learning approaches are encouraged for quantifying the uncertainties due to CO2 fertilization.
Fourth, we did not consider the effects of human activities and historical changes in biomes on RA. However, important changes may have occurred in tropical forest, grassland and cropland during the last several decades due to human activities (Hansen et al., 2013; Klein Goldewijk et al., 2011). Changes in biomes should therefore be included in future global RA and carbon-cycling modeling; at present, the lack of such data is the main constraint on detecting the effects of biome change on RA.
Finally, the uneven coverage of observations in the updated dataset is another source of uncertainty. Although our dataset covered a wide range of land-cover types, the observational sites were mainly distributed in China, Europe and North America and were dominated by forest. Observations were greatly lacking in areas such as Africa, Australia and Russia and in biomes such as tropical forest, shrubland, wetland and cropland, even though the dataset covered all major ecosystem types and climate zones across the globe. The uneven distribution of RA observations biased the RF model towards the regions with more observations. Therefore, including more observations from these under-sampled areas and biomes should largely increase our capability to assess the spatial and temporal patterns of global RA and contribute to improving global-carbon-cycle modeling under future climate change.
6 Data availability
The developed RF-RA dataset is freely downloadable from https://doi.org/10.6084/m9.figshare.7636193 (Tang et al., 2019), as the file “Respiration_autotrophic_belowgroud_glob_1980_2012_yr_half_dgree_TangX.nc”, which is a globally gridded RA dataset from 1980 to 2012 with a spatial resolution of 0.5° at an annual scale (expressed in g C m−2 yr−1; grams of carbon per square meter per year). The RA dataset is provided in NetCDF format (Network Common Data Form).
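For readers who want to load the gridded product, a minimal Python sketch follows; the data-variable and dimension names are assumptions to be checked against the actual file.

```python
# Sketch for inspecting the archived NetCDF product with xarray.
import xarray as xr

ds = xr.open_dataset("Respiration_autotrophic_belowgroud_glob_1980_2012_yr_half_dgree_TangX.nc")
print(ds)                       # shows the actual variable and dimension names
ra = ds[list(ds.data_vars)[0]]  # first data variable, in g C m-2 yr-1
print(ra.mean(dim=ra.dims[0]))  # mean over the leading (presumably time) dimension
```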
7 Conclusions
Although data-derived RA may serve as a benchmark for ecosystem models, no such study has assessed the global variability in RA with a large number of empirical observations that can help bridge the knowledge gap between local, regional and global scales. The RF-RA dataset filled this knowledge gap by linking field observations and globally gridded environmental variables using an RF algorithm, providing a global RF-RA dataset at a spatial resolution of 0.5° × 0.5° (longitude × latitude) at an annual scale from 1980 to 2012. Currently, robust findings include the following.
1. Annual mean RA was 43.8±0.4 Pg C yr−1, with a temporally increasing trend of 0.025±0.006 Pg C yr−2 over 1980–2012, indicating an increasing carbon return from the roots to the atmosphere.
2. Uneven temporal and spatial variabilities across climate zones and biomes indicated uneven responses to future climate change, challenging the perspective that the parameters of global carbon simulation are independent of climate zones and biomes.
3. Precipitation dominated RA for most of the land globally.
4. The RF-RA dataset has great potential to serve as a benchmark for future data–model comparisons to understand the mechanisms of belowground vegetation carbon allocation and its dynamics. However, further improvements in modeling algorithms, together with more observations from areas without field measurements, are needed to overcome the shortcomings arising from limited data availability and from the mismatch in spatial resolution between the covariates and the in situ RA observations.
Supplement
Author contributions
XT, SF, WZ and SG designed the research and collected the data. XT, WZ and SG contributed to the data processing and analysis. XT, WZ, SG, GC and LS wrote the paper, and all authors contributed to the review of the paper.
Competing interests
The authors declare that they have no conflict of interest.
Acknowledgements
The authors express their great thanks to the contributors to the Hashimoto soil respiration dataset and to the contributors of data to the SRDB, with particular thanks to Ben Bond-Lamberty for his continuous efforts to improve the SRDB over the years. Many thanks to Shoji Hashimoto for valuable comments that improved the paper. Great thanks to the two anonymous reviewers and the editor – Birgit Heim – for their constructive suggestions and comments for improving the paper.
Financial support
This study was supported by the National Natural Science Foundation of China (31800365 and 41671432); Fundamental Research Funds of International Centre for Bamboo and Rattan (1632017003 and 1632018003); National Key Research and Development Project (2017YFC1501002 and 2018YFC1504702); Major Scientific and Technological Support Research Subject for the Prevention and Control of Ecological Geological Disasters in “8.8” Jiuzhaigou Earthquake Stricken Area of Department of Natural Resources of Sichuan Province (KJ-2018-20); Innovation funding of Remote Sensing Science and Technology of Chengdu University of Technology (KYTD201501); Starting Funding of Chengdu University of Technology (10912-2018KYQD-06910), Foundation for University Key Teacher of Chengdu University of Technology (10912-2019JX-06910), and Open Funding from the Key Laboratory of Geoscience Spatial Information Technology of Ministry of Land and Resources (Chengdu University of Technology).
Review statement
This paper was edited by Birgit Heim and reviewed by two anonymous referees.
References
Ahlström, A., Raupach, M. R., Schurgers, G., Smith, B., Arneth, A., Jung, M., Reichstein, M., Canadell, J. G., Friedlingstein, P., and Jain, A. K.: The dominant role of semi-arid ecosystems in the trend and variability of the land CO2 sink, Science, 348, 895–899, https://doi.org/10.1126/science.aaa1668, 2015.
Ballantyne, A., Smith, W., Anderegg, W., Kauppi, P., Sarmiento, J., Tans, P., Shevliakova, E., Pan, Y., Poulter, B., Anav, A., Friedlingstein, P., Houghton, R., and Running, S.: Accelerating net terrestrial carbon uptake during the warming hiatus due to reduced respiration, Nat. Clim. Change, 7, 148–152, https://doi.org/10.1038/nclimate3204, 2017.
Bodesheim, P., Jung, M., Gans, F., Mahecha, M. D., and Reichstein, M.: Upscaled diurnal cycles of land-atmosphere fluxes: a new global half-hourly data product, Earth Syst. Sci. Data, 10, 1327–1365, https://doi.org/10.5194/essd-10-1327-2018, 2018.
Bond-Lamberty, B.: New Techniques and Data for Understanding the Global Soil Respiration Flux, Earth's Future, 6, 1176–1180, https://doi.org/10.1029/2018ef000866, 2018.
Bond-Lamberty, B. and Thomson, A.: Temperature-associated increases in the global soil respiration record, Nature, 464, 579–582, https://doi.org/10.1038/nature08930, 2010.
Bond-Lamberty, B. P. and Thomson, A. M.: A Global Database of Soil Respiration Data, Version 4.0. ORNL Distributed Active Archive Center, https://doi.org/10.3334/ORNLDAAC/1578, 2018.
Bond-Lamberty, B., Wang, C., and Gower, S. T.: A global relationship between the heterotrophic and autotrophic components of soil respiration?, Global Chang. Biol., 10, 1756–1766, https://doi.org/10.1111/j.1365-2486.2004.00816.x, 2004.
Bond-Lamberty, B., Bailey, V. L., Chen, M., Gough, C. M., and Vargas, R.: Globally rising soil heterotrophic respiration over recent decades, Nature, 560, 80–83, https://doi.org/10.1038/s41586-018-0358-x, 2018.
Breiman, L.: Random forests, Mach. Learn., 45, 5–32, https://doi.org/10.1023/A:1010933404324, 2001.
Chen, G. S., Yang, Y. S., and Robinson, D.: Allometric constraints on, and trade-offs in, belowground carbon allocation and their control of soil respiration across global forest ecosystems, Global Chang. Biol., 20, 1674–1684, https://doi.org/10.1111/gcb.12494, 2014.
Feng, H. H. and Liu, Y. B.: Combined effects of precipitation and air temperature on soil moisture in different land covers in a humid basin, J. Hydrol., 531, 1129–1140, https://doi.org/10.1016/j.jhydrol.2015.11.016, 2015.
Forkel, M., Carvalhais, N., Rödenbeck, C., Keeling, R., Heimann, M., Thonicke, K., Zaehle, S., and Reichstein, M.: Enhanced seasonal CO2 exchange caused by amplified plant productivity in northern ecosystems, Science, 351, 696–699, https://doi.org/10.1126/science.aac4971, 2016.
Friedl, M. A., Sulla-Menashe, D., Tan, B., Schneider, A., Ramankutty, N., Sibley, A., and Huang, X.: MODIS Collection 5 global land cover: Algorithm refinements and characterization of new datasets, Remote Sens. Environ., 114, 168–182, https://doi.org/10.1016/j.rse.2009.08.016, 2010.
Gaucherel, C., Alleaume, S., and Hely, C.: The comparison map profile method: a strategy for multiscale comparison of quantitative and qualitative images, IEEE Trans. Geosci. Remote Sens., 46, 2708–2719, https://doi.org/10.1109/TGRS.2008.919379, 2008.
Global Soil Data Task: Global Gridded Surfaces of Selected Soil Characteristics (IGBP-DIS), ORNL Distributed Active Archive Center, https://doi.org/10.3334/ornldaac/569, 2000.
Gomez-Casanovas, N., Matamala, R., Cook, D. R., and Gonzalez-Meler, M. A.: Net ecosystem exchange modifies the relationship between the autotrophic and heterotrophic components of soil respiration with abiotic factors in prairie grasslands, Global Chang. Biol., 18, 2532–2545, https://doi.org/10.1111/j.1365-2486.2012.02721.x, 2012.
Gray, S. B., Dermody, O., Klein, S. P., Locke, A. M., McGrath, J. M., Paul, R. E., Rosenthal, D. M., Ruiz-Vera, U. M., Siebers, M. H., Strellner, R., Ainsworth, E. A., Bernacchi, C. J., Long, S. P., Ort, D. R., and Leakey, A. D. B.: Intensifying drought eliminates the expected benefits of elevated carbon dioxide for soybean, Nat. Plants, 2, 16132, https://doi.org/10.1038/nplants.2016.132, 2016.
Högberg, P., Nordgren, A., Buchmann, N., Taylor, A. F. S., Ekblad, A., Hogberg, M. N., Nyberg, G., Ottosson-Lofvenius, M., and Read, D. J.: Large-scale forest girdling shows that current photosynthesis drives soil respiration, Nature, 411, 789–792, https://doi.org/10.1038/35081058, 2001.
Högberg, P., Nordgren, A., and Ågren, G.: Carbon allocation between tree root growth and root respiration in boreal pine forest, Oecologia, 132, 579–581, https://doi.org/10.1007/s00442-002-0983-8, 2002.
Hansen, M. C., Potapov, P. V., Moore, R., Hancher, M., Turubanova, S. A., Tyukavina, A., Thau, D., Stehman, S. V., Goetz, S. J., Loveland, T. R., Kommareddy, A., Egorov, A., Chini, L., Justice, C. O., and Townshend, J. R. G.: High-Resolution Global Maps of 21st-Century Forest Cover Change, Science, 342, 850–853, https://doi.org/10.1126/science.1244693, 2013.
Hanson, P. J., Edwards, N. T., Garten, C. T., and Andrews, J. A.: Separating root and soil microbial contributions to soil respiration: A review of methods and observations, Biogeochemistry, 48, 115–146, https://doi.org/10.1023/a:1006244819642, 2000.
Harris, I., Jones, P., Osborn, T., and Lister, D.: Updated high-resolution grids of monthly climatic observations – the CRU TS3.10 Dataset, Int. J. Climatol., 34, 623–642, https://doi.org/10.1002/joc.3711, 2014.
Hartmann, D. L., Klein Tank, A. M. G., Rusticucci, M., Alexander, L. V., Brönnimann, S., and Charabi, Y. A. R.: Observations: Atmosphere and Surface, in: Climate Change 2013 – The Physical Science Basis: Working Group I Contribution to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, Intergovernmental Panel on Climate, edited by: Stocker, T., Qin, D., Plattner, G., Tignor, M., Allen, S., Boschung, J., Nauels, A., Xia, Y., Bex, V., and Midgley, P., Cambridge University Press, Cambridge, 2014.
Hashimoto, S., Carvalhais, N., Ito, A., Migliavacca, M., Nishina, K., and Reichstein, M.: Global spatiotemporal distribution of soil respiration modeled using a global database, Biogeosciences, 12, 4121–4132, https://doi.org/10.5194/bg-12-4121-2015, 2015.
Hengl, T., Mendes de Jesus, J., Heuvelink, G. B., Ruiperez Gonzalez, M., Kilibarda, M., Blagotic, A., Shangguan, W., Wright, M. N., Geng, X., Bauer-Marschallinger, B., Guevara, M. A., Vargas, R., MacMillan, R. A., Batjes, N. H., Leenaars, J. G., Ribeiro, E., Wheeler, I., Mantel, S., and Kempen, B.: SoilGrids250m: Global gridded soil information based on machine learning, PLoS One, 12, e0169748, https://doi.org/10.1371/journal.pone.0169748, 2017.
Huang, J., Guan, X., and Ji, F.: Enhanced cold-season warming in semi-arid regions, Atmos. Chem. Phys., 12, 5391–5398, https://doi.org/10.5194/acp-12-5391-2012, 2012.
Hursh, A., Ballantyne, A., Cooper, L., Maneta, M., Kimball, J., and Watts, J.: The sensitivity of soil respiration to soil temperature, moisture, and carbon supply at the global scale, Global Chang. Biol., 23, 2090–2103, https://doi.org/10.1111/gcb.13489, 2017.
IPCC: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 2013.
Jian, J., Steele, M. K., Day, S. D., and Thomas, R. Q.: Future global soil respiration rates will swell despite regional decreases in temperature sensitivity caused by rising temperature, Earth's Future, 6, 1539–1554, https://doi.org/10.1029/2018EF000937, 2018a.
Jian, J., Steele, M. K., Thomas, R. Q., Day, S. D., and Hodges, S. C.: Constraining estimates of global soil respiration by quantifying sources of variability, Global Chang. Biol., 24, 4143–4159, https://doi.org/10.1111/gcb.14301, 2018b.
Jung, M., Reichstein, M., Schwalm, C. R., Huntingford, C., Sitch, S., Ahlstrom, A., Arneth, A., Camps-Valls, G., Ciais, P., Friedlingstein, P., Gans, F., Ichii, K., Jain, A. K., Kato, E., Papale, D., Poulter, B., Raduly, B., Rodenbeck, C., Tramontana, G., Viovy, N., Wang, Y. P., Weber, U., Zaehle, S., and Zeng, N.: Compensatory water effects link yearly global land CO2 sink changes to temperature, Nature, 541, 516–520, https://doi.org/10.1038/nature20780, 2017.
Kabacoff, R. I.: R in action: data analysis and graphics with R, Manning Publications Co., Shelter Island, New York, 2015.
Kalnay, E., Kanamitsu, M., Kistler, R., Collins, W., Deaven, D., Gandin, L., Iredell, M., Saha, S., White, G., Woollen, J., Zhu, Y., Chelliah, M., Ebisuzaki, W., Higgins, W., Janowiak, J., Mo, K. C., Ropelewski, C., Wang, J., Leetmaa, A., Reynolds, R., Jenne, R., and Joseph, D.: The NCEP/NCAR 40-year reanalysis project, B. Am. Meteorol. Soc., 77, 437–471, https://doi.org/10.1175/1520-0477(1996)077<0437:Tnyrp>2.0.Co;2, 1996.
Klein Goldewijk, K., Beusen, A., Van Drecht, G., and De Vos, M.: The HYDE 3.1 spatially explicit database of human-induced global land-use change over the past 12,000 years, Global Ecol. Biogeogr., 20, 73–86, https://doi.org/10.1111/j.1466-8238.2010.00587.x, 2011.
Kuzyakov, Y. and Gavrichkova, O.: Time lag between photosynthesis and carbon dioxide efflux from soil: a review of mechanisms and controls, Global Chang. Biol., 16, 3386–3406, https://doi.org/10.1111/j.1365-2486.2010.02179.x, 2010.
Lamarque, J.-F., Dentener, F., McConnell, J., Ro, C.-U., Shaw, M., Vet, R., Bergmann, D., Cameron-Smith, P., Dalsoren, S., Doherty, R., Faluvegi, G., Ghan, S. J., Josse, B., Lee, Y. H., MacKenzie, I. A., Plummer, D., Shindell, D. T., Skeie, R. B., Stevenson, D. S., Strode, S., Zeng, G., Curran, M., Dahl-Jensen, D., Das, S., Fritzsche, D., and Nolan, M.: Multi-model mean nitrogen and sulfur deposition from the Atmospheric Chemistry and Climate Model Intercomparison Project (ACCMIP): evaluation of historical and projected future changes, Atmos. Chem. Phys., 13, 7997–8018, https://doi.org/10.5194/acp-13-7997-2013, 2013.
Le Quéré, C., Andrew, R. M., Friedlingstein, P., Sitch, S., Pongratz, J., Manning, A. C., Korsbakken, J. I., Peters, G. P., Canadell, J. G., Jackson, R. B., Boden, T. A., Tans, P. P., Andrews, O. D., Arora, V. K., Bakker, D. C. E., Barbero, L., Becker, M., Betts, R. A., Bopp, L., Chevallier, F., Chini, L. P., Ciais, P., Cosca, C. E., Cross, J., Currie, K., Gasser, T., Harris, I., Hauck, J., Haverd, V., Houghton, R. A., Hunt, C. W., Hurtt, G., Ilyina, T., Jain, A. K., Kato, E., Kautz, M., Keeling, R. F., Klein Goldewijk, K., Körtzinger, A., Landschützer, P., Lefèvre, N., Lenton, A., Lienert, S., Lima, I., Lombardozzi, D., Metzl, N., Millero, F., Monteiro, P. M. S., Munro, D. R., Nabel, J. E. M. S., Nakaoka, S., Nojiri, Y., Padin, X. A., Peregon, A., Pfeil, B., Pierrot, D., Poulter, B., Rehder, G., Reimer, J., Rödenbeck, C., Schwinger, J., Séférian, R., Skjelvan, I., Stocker, B. D., Tian, H., Tilbrook, B., Tubiello, F. N., van der Laan-Luijkx, I. T., van der Werf, G. R., van Heuven, S., Viovy, N., Vuichard, N., Walker, A. P., Watson, A. J., Wiltshire, A. J., Zaehle, S., and Zhu, D.: Global Carbon Budget 2017, Earth Syst. Sci. Data, 10, 405–448, https://doi.org/10.5194/essd-10-405-2018, 2018.
Liu, Y., Liu, S., Wan, S., Wang, J., Luan, J., and Wang, H.: Differential responses of soil respiration to soil warming and experimental throughfall reduction in a transitional oak forest in central China, Agr. Forest Meteorol., 226, 186–198, https://doi.org/10.1016/j.agrformet.2016.06.003, 2016.
Long, S. J. and Freese, J. (Eds.): Regression models for categorical dependent variables using Stata, College Station, TX: Stata Press, Texas, 2006.
Moyano, F. E., Vasilyeva, N., Bouckaert, L., Cook, F., Craine, J., Curiel Yuste, J., Don, A., Epron, D., Formanek, P., Franzluebbers, A., Ilstedt, U., Kätterer, T., Orchard, V., Reichstein, M., Rey, A., Ruamps, L., Subke, J.-A., Thomsen, I. K., and Chenu, C.: The moisture response of soil heterotrophic respiration: interaction with soil properties, Biogeosciences, 9, 1173–1182, https://doi.org/10.5194/bg-9-1173-2012, 2012.
Peel, M. C., Finlayson, B. L., and McMahon, T. A.: Updated world map of the Köppen-Geiger climate classification, Hydrol. Earth Syst. Sci., 11, 1633–1644, https://doi.org/10.5194/hess-11-1633-2007, 2007.
Piñeiro, J., Ochoa-Hueso, R., Delgado-Baquerizo, M., Dobrick, S., Reich, P. B., Pendall, E., and Power, S. A.: Effects of elevated CO2 on fine root biomass are reduced by aridity but enhanced by soil nitrogen: A global assessment, Sci. Rep., 7, 15355, https://doi.org/10.1038/s41598-017-15728-4, 2017.
Pumpanen, J., Kolari, P., Ilvesniemi, H., Minkkinen, K., Vesala, T., Niinistö, S., Lohila, A., Larmola, T., Morero, M., Pihlatie, M., Janssens, I., Yuste, J. C., Grünzweig, J. M., Reth, S., Subke, J.-A., Savage, K., Kutsch, W., Østreng, G., Ziegler, W., Anthoni, P., Lindroth, A., and Hari, P.: Comparison of different chamber techniques for measuring soil CO2 efflux, Agr. Forest Meteorol., 123, 159–176, https://doi.org/10.1016/j.agrformet.2003.12.001, 2004.
Raich, J. W. and Schlesinger, W. H.: The global carbon dioxide flux in soil respiration and its relationship to vegetation and climate, Tellus B, 44, 81–99, https://doi.org/10.1034/j.1600-0889.1992.t01-1-00001.x, 1992.
Raich, J. W., Potter, C. S., and Bhagawati, D.: Interannual variability in global soil respiration, 1980–94, Global Chang. Biol., 8, 800–812, https://doi.org/10.1046/j.1365-2486.2002.00511.x, 2002.
Reichstein, M., Papale, D., Valentini, R., Aubinet, M., Bernhofer, C., Knohl, A., Laurila, T., Lindroth, A., Moors, E., Pilegaard, K., and Seufert, G.: Determinants of terrestrial ecosystem carbon balance inferred from European eddy covariance flux sites, Geophys. Res. Lett., 34, L01402, https://doi.org/10.1029/2006GL027880, 2007.
Schaefer, D. A., Feng, W., and Zou, X.: Plant carbon inputs and environmental factors strongly affect soil respiration in a subtropical forest of southwestern China, Soil Biol. Biochem., 41, 1000–1007, https://doi.org/10.1016/j.soilbio.2008.11.015, 2009.
Subke, J.-A., Voke, N. R., Leronni, V., Garnett, M. H., and Ineson, P.: Dynamics and pathways of autotrophic and heterotrophic soil CO2 efflux revealed by forest girdling, J. Ecol., 99, 186–193, https://doi.org/10.1111/j.1365-2745.2010.01740.x, 2011.
Tang, X., Fan, S., Qi, L., Guan, F., Du, M., and Zhang, H.: Soil respiration and net ecosystem production in relation to intensive management in Moso bamboo forests, Catena, 137, 219–228, https://doi.org/10.1016/j.catena.2015.09.008, 2016.
Tang, X., Fan, S., Zhang, W., Gao, S., Chen, G., and Shi, L.: A gridded dataset of belowground autotrophic respiration from 1980 to 2012 in global terrestrial ecosystems upscaling of field observations, in: Figshare, https://doi.org/10.6084/m9.figshare.7636193, 2019.
Tian, X., Yan, M., van der Tol, C., Li, Z., Su, Z., Chen, E., Li, X., Li, L., Wang, X., Pan, X., Gao, L., and Han, Z.: Modeling forest above-ground biomass dynamics using multi-source data and incorporated models: A case study over the qilian mountains, Agr. Forest Meteorol., 246, 1–14, https://doi.org/10.1016/j.agrformet.2017.05.026, 2017.
van den Dool, H., Huang, J., and Fan, Y.: Performance and analysis of the constructed analogue method applied to U.S. soil moisture over 1981–2001, J. Geophys. Res.-Atmos., 108, 8617, https://doi.org/10.1029/2002jd003114, 2003.
van der Schrier, G., Barichivich, J., Briffa, K. R., and Jones, P. D.: A scPDSI-based global data set of dry and wet spells for 1901–2009, J. Geophys. Res.-Atmos., 118, 4025–4048, https://doi.org/10.1002/jgrd.50355, 2013.
Wang, C. and Yang, J.: Rhizospheric and heterotrophic components of soil respiration in six Chinese temperate forests, Global Chang. Biol., 13, 123–131, https://doi.org/10.1111/j.1365-2486.2006.01291.x, 2007.
Wang, X., Liu, L., Piao, S., Janssens, I. A., Tang, J., Liu, W., Chi, Y., Wang, J., and Xu, S.: Soil respiration under climate warming: differential response of heterotrophic and autotrophic respiration, Global Chang. Biol., 20, 3229–3237, https://doi.org/10.1111/gcb.12620, 2014.
Yan, M. F., Zhou, G. S., and Zhang, X. S.: Effects of irrigation on the soil CO2 efflux from different poplar clone plantations in arid northwest China, Plant Soil, 375, 89–97, https://doi.org/10.1007/s11104-013-1944-1, 2014.
Yang, J., Gong, P., Fu, R., Zhang, M., Chen, J., Liang, S., Xu, B., Shi, J., and Dickinson, R.: The role of satellite remote sensing in climate change studies, Nat. Clim. Chang., 3, 875–883, https://doi.org/10.1038/nclimate1908, 2013.
Yao, Y., Wang, X., Li, Y., Wang, T., Shen, M., Du, M., He, H., Li, Y., Luo, W., Ma, M., Ma, Y., Tang, Y., Wang, H., Zhang, X., Zhang, Y., Zhao, L., Zhou, G., and Piao, S.: Spatiotemporal pattern of gross primary productivity and its covariation with climate in China over the last thirty years, Global Chang. Biol., 24, 184–196, https://doi.org/10.1111/gcb.13830, 2018.
Zhang, Y., Xiao, X., Wu, X., Zhou, S., Zhang, G., Qin, Y., and Dong, J.: A global moderate resolution dataset of gross primary production of vegetation for 2000–2016, Sci. Data, 4, 170165, https://doi.org/10.1038/sdata.2017.165, 2017.
Zhao, Z., Peng, C., Yang, Q., Meng, F.-R., Song, X., Chen, S., Epule, T. E., Li, P., and Zhu, Q.: Model prediction of biome-specific global soil respiration from 1960 to 2012, Earth's Future, 5, 715–729, https://doi.org/10.1002/2016EF000480, 2017.
Zhou, L., Zhou, X., Shao, J., Nie, Y., He, Y., Jiang, L., Wu, Z., and Hosseini Bai, S.: Interactive effects of global change factors on soil respiration and its components: a meta-analysis, Global Chang. Biol., 22, 3157–3169, https://doi.org/10.1111/gcb.13253, 2016. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5865709781646729, "perplexity": 10135.196013860257}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540551267.14/warc/CC-MAIN-20191213071155-20191213095155-00128.warc.gz"} |
https://hal.archives-ouvertes.fr/hal-00086532 | # Tree based functional expansions for Feynman--Kac particle models
Abstract : We design exact polynomial expansions of a class of Feynman--Kac particle distributions. These expansions are finite and are parametrized by coalescent trees and other related combinatorial quantities. The accuracy of the expansions at any order is related naturally to the number of coalescences of the trees. Our results include an extension of the Wick product formula to interacting particle systems. They also provide refined nonasymptotic propagation of chaos-type properties, as well as sharp $\mathbb{L}_p$-mean error bounds, and laws of large numbers for $U$-statistics.
Document type:
Journal article
The Annals of Applied Probability : an official journal of the institute of mathematical statistics, The Institute of Mathematical Statistics, 2009, pp.778-825. <10.1214/08-AAP565>
Domain:
https://hal.archives-ouvertes.fr/hal-00086532
Contributor: Patras Frédéric <>
Submitted on: Wednesday, July 19, 2006 - 09:24:23
Last modified on: Wednesday, May 4, 2016 - 13:30:50
Document(s) archived on: Monday, April 5, 2010 - 22:10:02
### Citation
Pierre Del Moral, Frédéric Patras, Sylvain Rubenthaler. Tree based functional expansions for Feynman--Kac particle models. The Annals of Applied Probability : an official journal of the institute of mathematical statistics, The Institute of Mathematical Statistics, 2009, pp.778-825. <10.1214/08-AAP565>. <hal-00086532>
Record views: 155
Document downloads | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4787929654121399, "perplexity": 4554.41282875404}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125881.93/warc/CC-MAIN-20170423031205-00028-ip-10-145-167-34.ec2.internal.warc.gz"}
http://blog.mdda.net/oss/2014/10/12/unison-on-android | While Daniel Roggen has done a great job in porting unison and associated utilities to Android in a simple, free Android App, the actual synchronization step is left to either (a) the user, at the command line, or (b) using his for-pay sync App.
Not that I object to earning money by selling apps, but ‘just because’, here are steps that worked for me on my Android phone (which was ‘rooted’ simply by changing the settings in Settings-Security-Superuser).
Enabling Android Debugging
On a local ‘real’ machine, plug in the device, and find the idVendor from watching tail -f /var/log/messages. Then create a new udev rules file in (for instance) /etc/udev/rules.d/51-android.rules :
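The rule file contents were lost from this page; a typical 51-android.rules line has the following shape, where the idVendor value "XXXX" is a placeholder for the one found in the log:

```
SUBSYSTEM=="usb", ATTR{idVendor}=="XXXX", MODE="0666", GROUP="plugdev"
```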
Thereafter, adb commands will work :
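(The example commands are missing here; the basic check would be along the lines of the following.)

```
adb devices   # the phone should now be listed
adb shell     # open a shell on the device
```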
Dry run through
These instructions were also helpfully given in the Unison App (but having them here makes them much easier to implement).
Go into the right directory :
Check that unison runs
(one should also check out what version is running on the server, so that they can be matched)
Generate key pair
and then inspect the public key, so that it can be uploaded to the server’s .ssh/authorized_keys :
(remember to pop the orsa.key.pub into the right place on the server…)
Now test the sync command works
Here, you’ll need to choose suitable entries (a rough sketch of the assembled command follows the list) for :
• a (writeable) local directory for the sync’d folder
• the server’s sync’d folder
• the server’s domain (here ‘unison.example.com’)
• the user running unison on the server (here ‘unison’)
• the server’s port (here ‘23456’)
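Filling in those placeholders, the missing test command presumably looks roughly like this; the unison binary location, local path and remote directory are guesses, and only unison’s standard ssh:// root syntax and -sshargs option are assumed:

```
./unison /mnt/sdcard/ToRead ssh://unison@unison.example.com//home/unison/ToRead \
  -sshargs "-p 23456 -i orsa.key"
```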
Running the script on the phone
Having checked all that works, the final script (with comments) can be assembled on the local ‘real’ machine. Mine (stored in ~/.unison/android/ToRead.sh) looks like :
Load the script onto the phone
Choose a suitable place for the script to live, and execute on the local ‘real’ machine :
Finally, the script can be run using a terminal program on the Android device :
All done!
Of course, the file ‘ToRead.sh’ can easily be adapted to other sync’d folders, so that you can have multiple sync’d folders being independently updated (or even put into a combined file). | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2997269332408905, "perplexity": 5581.432352019038}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123530.18/warc/CC-MAIN-20170423031203-00411-ip-10-145-167-34.ec2.internal.warc.gz"} |
http://physics.stackexchange.com/questions/31791/the-dirac-equation-with-a-6x6-matrix?answertab=votes | # The Dirac Equation with a 6x6 Matrix
The Dirac Equation is $$i\hbar \frac{\partial \Psi}{\partial t}=\left[c\sum_i{\alpha_i p_i}+mc^2\beta\right]\Psi$$ with the constraints $$\{ \alpha_i,\alpha_j\}=2\delta_{ij} \\ \{ \alpha_i, \beta\}=0 \\ \{ \beta, \beta\}=2$$ imposed to get the relativistic dispersion $E^2=(mc^2)^2+(pc)^2$. In 3D, the smallest size allowed for the $\alpha$ and $\beta$ matrices is 4x4, and this describes a spin 1/2 particle.
I've heard that using 6x6 matrices describes a spin 1 particle. I've also heard that Maxwell's Equations follow from the Dirac Equation using 6x6 matrices. Is this true? If so, do you have a reference for this? If not, what happens if I try to use a 6x6 or larger matrix to describe a particle?
Also, I know that 2x2 matrices describe a spin 1/2 particle in 2D. Is there an analogous Dirac Equation for 1D? I realize that proper rotations make no sense in 1D, so I don't expect there to be a 1D Dirac Equation, but if there is one, please correct me.
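As a concrete check of the 4x4 statement above, here is a small numpy sketch (using the standard Dirac representation, which is one common choice; the question itself does not fix a representation) that verifies the anticommutation relations:

```python
import numpy as np

# Pauli matrices
s = [np.array([[0, 1], [1, 0]]),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]])]
I2, Z2 = np.eye(2), np.zeros((2, 2))

# Dirac representation: alpha_i = [[0, s_i], [s_i, 0]], beta = diag(I, -I)
alphas = [np.block([[Z2, si], [si, Z2]]) for si in s]
beta = np.block([[I2, Z2], [Z2, -I2]])

anti = lambda a, b: a @ b + b @ a
for i in range(3):
    for j in range(3):
        assert np.allclose(anti(alphas[i], alphas[j]), 2 * (i == j) * np.eye(4))
    assert np.allclose(anti(alphas[i], beta), 0)
assert np.allclose(anti(beta, beta), 2 * np.eye(4))
print("Clifford relations verified for the 4x4 representation")
```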
Going in the direction of David BarMoshe's answer, this free book on fields by Warren Siegel covers some of the math around page $129\pm 20$. – Nikolaj K. Jul 11 '12 at 13:01
The electromagnetic field tensor expressed in spinor notation:
$F_{A C \dot{B}\dot{D}}=\sigma^{\mu}_{A \dot{B}}\sigma^{\nu}_{C \dot{D}} F_{\mu \nu}$
decomposes into a self-dual and an anti-self-dual part:
$F_{A C \dot{B}\dot{D}}=\epsilon_{AC} \phi_{ \dot{B}\dot{D}} + \epsilon_{\dot{B}\dot{D}} \phi_{AC }$
(Here $\phi_{AC}$ is symmetric, thus contains 3 independent components. The components of $\phi_{AC}$ are just $\mathbf{E} + i \mathbf{B}$.)
For a sourceless Maxwell theory, the Maxwell equations are equivalent to two homogeneous Dirac equations in the self-dual and anti-self-dual parts:
$\nabla^{A}_{\dot{B}} \phi_{AC } = 0$
$\nabla^{\dot{B}}_{A} \phi_{\dot{B}\dot{D} } = 0$
Since each equation has 3 independent components, they can be combined to get a single 6-dimensional matrix equation.
Remark: For an introduction to the two-component spinor notation, please see for example the appendix to the following lecture notes by Christian Saemann
Thank you. I thinking about starting with the Dirac Equation and concluding with Maxwell's Equations, but I suppose I could construct that by following your logic backwards. – ChickenGod Jul 18 '12 at 8:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9101160168647766, "perplexity": 388.74574401873514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999654330/warc/CC-MAIN-20140305060734-00047-ip-10-183-142-35.ec2.internal.warc.gz"} |
http://mathhelpforum.com/advanced-statistics/204529-probability-question-using-unfair-coin.html | # Math Help - Probability question using an "unfair" coin
1. ## Probability question using an "unfair" coin
An "unfair" coin has a heads side which weighs two and one-half times heavier than the tails side. If you toss this unfair coin 100 times, how many of those times would you expect to see heads? Explain why.
Design a simulation that will approximate this result. Explain your experimental design and results.
Trials n=100 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.891182541847229, "perplexity": 2484.7360298891217}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257828322.57/warc/CC-MAIN-20160723071028-00320-ip-10-185-27-174.ec2.internal.warc.gz"} |
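For what it's worth, under the usual textbook reading that the weight ratio translates directly into odds of 2.5 : 1 in favour of heads (a physical assumption, not something that follows from mechanics), P(heads) = 2.5/3.5 = 5/7 ≈ 0.714, so about 71 heads are expected in 100 tosses. A minimal Python simulation along these lines:

```python
import random

p_heads = 2.5 / (2.5 + 1.0)  # assumed odds of 2.5 : 1 in favour of heads

def trial(n=100):
    """Toss the biased coin n times and count heads."""
    return sum(random.random() < p_heads for _ in range(n))

results = [trial() for _ in range(10000)]
print("expected heads per 100 tosses:", 100 * p_heads)  # about 71.4
print("simulated mean:", sum(results) / len(results))
```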
http://www.ams.org/joursearch/servlet/DoSearch?f1=msc&v1=60B10&jrnl=one&onejrnl=tran | # American Mathematical Society
AMS eContent Search Results
Matches for: msc=(60B10) AND publication=(tran) Sort order: Date Format: Standard display
Results: 1 to 6 of 6 found
[1] Elizabeth Meckes. Linear functions on the classical matrix groups. Trans. Amer. Math. Soc. 360 (2008) 5355-5366. MR 2415077.
[2] Richard B. Darst and Zorabi Honargohar. Weak-star convergence in the dual of the continuous functions on the $n$-cube, $1\leq n\leq \infty$. Trans. Amer. Math. Soc. 275 (1983) 357-372. MR 678356.
[3] L. Š. Grinblat. Convergence of random processes without discontinuities of the second kind and limit theorems for sums of independent random variables. Trans. Amer. Math. Soc. 234 (1977) 361-379. MR 0494376.
[4] L. Š. Grinblat. Compactifications of spaces of functions and integration of functionals. Trans. Amer. Math. Soc. 217 (1976) 195-223. MR 0407227.
[5] J. Kuelbs. Fourier analysis on linear metric spaces. Trans. Amer. Math. Soc. 181 (1973) 293-311. MR 0331455.
[6] Luis G. Gorostiza. An invariance principle for a class of $d$-dimensional polygonal random functions. Trans. Amer. Math. Soc. 177 (1973) 413-445. MR 0336774. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8438593745231628, "perplexity": 1459.5934568277466}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463928.32/warc/CC-MAIN-20150226074103-00203-ip-10-28-5-156.ec2.internal.warc.gz"}
http://blog.gmane.org/gmane.comp.lang.haskell.glasgow.user/month=2004080 | 12 Mar 09:59 2014
### RC2 build failure on Debian: armel
Hi,
armel still fails like in RC1:
https://buildd.debian.org/status/fetch.php?pkg=ghc&arch=armel&ver=7.8.20140228-1&stamp=1394495564
/«PKGBUILDDIR»/compiler/stage2/build/libHSghc-7.8.0.20140228.a(genSym.o): In function `genSym':
genSym.c:(.text+0x84): undefined reference to `arm_atomic_spin_lock'
genSym.c:(.text+0x88): undefined reference to `arm_atomic_spin_unlock'
Any ideas? Anyone feeling responsible?
Thanks,
Joachim
--
Joachim “nomeata” Breitner
mail <at> joachim-breitner.de • http://www.joachim-breitner.de/
Jabber: nomeata <at> joachim-breitner.de • GPG-Key: 0x4743206C
Debian Developer: nomeata <at> debian.org
11 Mar 23:11 2014
### can't load .so/.DLL - undefined symbol
I am trying to understand the following linker message. I have started
GHCi, loaded a program and tried to run it:
Main> main
...
/var/cabal/lib/x86_64-linux-ghc-7.8.0.20140228/alsa-seq-0.6.0.3/libHSalsa-seq-0.6.0.3-ghc7.8.0.20140228.so
(/var/cabal/lib/x86_64-linux-ghc-7.8.0.20140228/alsa-seq-0.6.0.3/libHSalsa-seq-0.6.0.3-ghc7.8.0.20140228.so:
undefined symbol:
alsazmseqzm0zi6zi0zi3_SystemziPosixziPoll_zdfStorableFd_closure)
I assume that GHCi wants to say the following: The instance Storable Fd
defined in module System.Posix.Poll cannot be found in the shared object
file of the alsa-seq package. That's certainly true because that module
is in the package 'poll' and not in 'alsa-seq'. But 'alsa-seq' imports
'poll'. What might be the problem?
It's a rather big example that fails here, whereas the small examples in
alsa-seq package work. Thus I first like to know what the message really
means, before investigating further. I installed many packages at once
with cabal-install using a single build directory, like:
$ cabal install --builddir=/tmp/dist --with-ghc=ghc7.8.0.20140228 poll
alsa-seq pkg1 pkg2 pkg3 ...
Can this cause problems?
7 Mar 20:36 2014
### GHC 7.8 on ARM
I've been accumulating some notes concerning the current status of GHC
on ARM here[1]. To quote the tl;dr,
GHC 7.8 should run well on ARM with the LLVM code generator and
dynamic linking. The process of building GHC itself might be a bit
hairy due to linker terribleness. After this, however, you’ll have
fully-featured GHC installation supporting all of the modern
amenities including GHCi and Template Haskell.
Cheers,
- Ben
[1] http://bgamari.github.io/posts/2014-03-06-compiling-ghc-7.8-on-arm.html
7 Mar 16:47 2014
### Problem with cabal's --enable-library-coverage on 7.8.1rc2
Hi,
On Mac OS X 10.9.2 with ghc 7.8.0.20140228 and cabal 1.18.0.3
Doing:
cabal configure --enable-library-coverage
cabal build
Fails with:
ld: illegal text reloc in '_enablezmlibraryzmcoveragezm0zi0zi1_Library_sendMsg2_info' to '__hpc_tickboxes_enablezmlibraryzmcoveragezm0zi0zi1_Util_hpc' for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
But without the coverage flag it’s OK.
I found it when switching from 7.6.3 to 7.8.1RC2 on a project I have and was able to strip it to this:
Don’t know if this is a cabal or ghc problem and I don’t know how to continue.
Thanks!
My ghc --info:
[("Project name","The Glorious Glasgow Haskell Compilation System")
,("GCC extra via C opts"," -fwrapv")
,("C compiler command","/usr/bin/gcc")
,("C compiler flags"," -m64 -fno-stack-protector")
,("ld command","/usr/bin/ld")
,("ld flags"," -arch x86_64")
,("ld supports compact unwind","YES")
,("ld supports build-id","NO")
,("ld supports filelist","YES")
,("ld is GNU ld","NO")
,("ar command","/usr/bin/ar")
,("ar flags","clqs")
,("ar supports at file","NO")
,("touch command","touch")
,("dllwrap command","/bin/false")
,("windres command","/bin/false")
,("libtool command","libtool")
,("perl command","/usr/bin/perl")
,("target os","OSDarwin")
,("target arch","ArchX86_64")
,("target word size","8")
,("target has GNU nonexec stack","False")
,("target has .ident directive","True")
,("target has subsections via symbols","True")
,("Unregisterised","NO")
,("LLVM llc command","llc")
,("LLVM opt command","opt")
,("Project version","7.8.0.20140228")
,("Booter version","7.6.3")
,("Stage","2")
,("Build platform","x86_64-apple-darwin")
,("Host platform","x86_64-apple-darwin")
,("Target platform","x86_64-apple-darwin")
,("Have interpreter","YES")
,("Object splitting supported","YES")
,("Have native code generator","YES")
,("Support SMP","YES")
,("Tables next to code","YES")
,("RTS ways","l debug thr thr_debug thr_l thr_p dyn debug_dyn thr_dyn thr_debug_dyn l_dyn thr_l_dyn")
,("Support dynamic-too","YES")
,("Support parallel --make","YES")
,("Dynamic by default","NO")
,("GHC Dynamic","YES")
,("Debug on","False")
]
5 Mar 22:54 2014
### RC2 build failures on Debian: sparc
Hi,
sparc fails differently than in RC1, and very plainly with a
segmentation fault in dll-split (which happens to be the first program
to be run that is compiled with stage1):
https://buildd.debian.org/status/fetch.php?pkg=ghc&arch=sparc&ver=7.8.20140228-1&stamp=1393975264
Any ideas? Anyone feeling responsible?
It would be a shame to lose a lot of architectures in 7.8 compared to
7.6, but I’m not a porter and don’t know much about these part of the
compiler, so I have to rely on your support in fixing these problems,
preferably before 7.8.1.
Greetings,
Joachim
--
Joachim “nomeata” Breitner
mail <at> joachim-breitner.de • http://www.joachim-breitner.de/
Jabber: nomeata <at> joachim-breitner.de • GPG-Key: 0x4743206C
Debian Developer: nomeata <at> debian.org
4 Mar 10:13 2014
### Problem with gold linker and 7.8.1rc2
If I use the gold linker with 7.8.1rc2 (RHEL bindist), anything using network
in build-depends - even if it's not used in the code - fails to build with:
libHSunix-2.7.0.1-ghc7.8.0.20140228.so: error: undefined reference to
libHSunix-2.7.0.1-ghc7.8.0.20140228.so: error: undefined reference to
'sem_open'
libHSunix-2.7.0.1-ghc7.8.0.20140228.so: error: undefined reference to
'sem_wait'
libHSunix-2.7.0.1-ghc7.8.0.20140228.so: error: undefined reference to
'sem_getvalue'
libHSunix-2.7.0.1-ghc7.8.0.20140228.so: error: undefined reference to
'sem_close'
libHSunix-2.7.0.1-ghc7.8.0.20140228.so: error: undefined reference to
'sem_post'
libHSunix-2.7.0.1-ghc7.8.0.20140228.so: error: undefined reference to
'sem_trywait'
(I only have shared libraries on this machine.)
--
4 Mar 09:15 2014
### Stripping the bindist?
Should the binary distribution be stripped? Not a huge deal, but I'm saving a
fair amount of space by stripping
after installing it.
--
View this message in context: http://haskell.1045720.n5.nabble.com/Stripping-the-bindist-tp5745023.html
27 Feb 10:39 2014
### OutsideIn(X) question
I don't know if this is the best forum to ask questions about the OutsideIn(X) paper that lies below type inference in GHC. Any way, I sent it to Haskell-café and was advised to send it here also.
My question related about the proof of soundness and principality, specifically Lemma 7.2 (to be found in page 67). In that lemma, it's stated that QQ and \phi' Q_q ||- \phi Q_w <-> \phi' Q_w'. I'm trying to recover the proof (which is omitted in the text), but I stumble upon a wall when trying to work out what happens in the case an axiom is applied.
In particular, I'm playing with an example where
QQ (the set of axioms) = { forall. C a => D a } (where C and D are one-parameter type classes)
Q_q = { }
Q_w = { D Int }
Thus, if I apply the rule DINSTW (to be found in page 65), I get a new
Q_w' = { C Int }
Now, if the lemma 7.2 is true, it should be the case that
(1) QQ ||- C Int <-> D Int
which in particular means that I have the two implications
(2) { forall. C a => D a, C Int } ||- D Int
(3) { forall. C a => D a, D Int } ||- C Int
(2) follows easily by applying the AXIOM rule of ||- (as shown in page 54). However, I don't see how to make (3) work :(
I think that understanding this example will be key for my understanding of the whole system.
Could anybody point to the error in my reasoning or to places where I could find more information?
20 Feb 15:30 2014
### Re: haskell xml parsing for larger files?
Have you looked at tagsoup?
On Feb 20, 2014 3:30 AM, "Christian Maeder" <Christian.Maeder <at> dfki.de> wrote:
Hi,
I've got some difficulties parsing "large" xml files (> 100MB).
A plain SAX parser, as provided by hexpat, is fine. However, constructing a tree consumes too much memory on a 32bit machine.
see http://trac.informatik.uni-bremen.de:8080/hets/ticket/1248
I suspect that sharing strings when constructing trees might greatly reduce memory requirements. What are suitable libraries for string pools?
Before trying to implement something myself, I'd like to ask who else has tried to process large xml files (and met similar memory problems)?
I have not yet investigated xml-conduit and hxt for our purpose. (These look scary.)
In fact, I've basically used the content trees from "The (simple) xml package" and switching to another tree type is no fun, in particular if this gains not much.
Thanks Christian
20 Feb 12:30 2014
### haskell xml parsing for larger files?
Hi,
I've got some difficulties parsing "large" xml files (> 100MB).
A plain SAX parser, as provided by hexpat, is fine. However,
constructing a tree consumes too much memory on a 32bit machine.
see http://trac.informatik.uni-bremen.de:8080/hets/ticket/1248
I suspect that sharing strings when constructing trees might greatly
reduce memory requirements. What are suitable libraries for string pools?
Before trying to implement something myself, I'd like to ask who else
has tried to process large xml files (and met similar memory problems)?
I have not yet investigated xml-conduit and hxt for our purpose. (These
look scary.)
In fact, I've basically used the content trees from "The (simple) xml
package" and switching to another tree type is no fun, in particular if
this gains not much.
Thanks Christian
19 Feb 13:07 2014
### State of -XImpredicativeTypes
Lectori salutem,
What is the actual state of ImpredicativeTypes? It appears documented as a "properly" finished GHC
extension, but on IRC and other places I keep hearing it's poorly tested, buggy or incomplete. Is this true
or just FUD?
Cheers,
Merijn
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47582846879959106, "perplexity": 16434.4921732432}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021587780/warc/CC-MAIN-20140305121307-00004-ip-10-183-142-35.ec2.internal.warc.gz"}
https://cs.stackexchange.com/questions/11462/is-there-a-difference-between-pure-binary-and-binary | # Is there a difference between pure binary and binary?
In some books and on the internet I occasionally find "pure binary" and "binary" on its own, is there a difference between these two terms? If so, can someone describe briefly what they are?
• What is the context? – Aryabhata Apr 21 '13 at 14:22
• @Aryabhata I found it to be in a section about Gray Code, where only the term "pure binary" is used, while "binary" is used in the rest of the book. – Saras Apr 21 '13 at 17:18
I'm guessing here for lack of context, but I think the following distinction is reasonable.
A binary encoding is anything that maps stuff to bit strings. There are many, including two's complement, IEEE float, ASCII, and so on.
Pure binary probably refers to bland natural numbers written in base two, i.e. if $n_{(2)} = a_k\dots a_0$ then
$\qquad\displaystyle n = \sum_{i=0}^{k} 2^ia_i$.
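A tiny Python rendering of this reading, added for concreteness (the helper name is ad hoc):

```python
def pure_binary_value(bits: str) -> int:
    """Interpret a bit string as a plain unsigned base-2 natural number."""
    return sum(2**i * int(a) for i, a in enumerate(reversed(bits)))

assert pure_binary_value("1011") == 11 == int("1011", 2)
```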
• So pure binary refers to binary pattern that has not been modified in any way or is not a fixed-point binary number and not a signed integer, etc. Just the natural denary number converted to binary? – Saras Apr 21 '13 at 17:26
• That's what I'm saying, but I can't promise that everybody will use the terms like that in every context. – Raphael Apr 21 '13 at 22:58
The only context I know where "pure binary" is used is when referencing the C standard, which has a definition for the concept:
A positional representation for integers that uses the binary digits 0 and 1, in which the values represented by successive bits are additive, begin with 1, and are multiplied by successive integral powers of 2, except perhaps the bit with the highest position.
The rationale for the C standard mentions the desire to avoid things like Gray code. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5233592391014099, "perplexity": 637.1685332955954}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987773711.75/warc/CC-MAIN-20191021120639-20191021144139-00283.warc.gz"} |
https://www.physicsforums.com/threads/pyramids-built-from-cut-stone.231772/ | # Pyramids built from cut stone
1. Apr 28, 2008
### wolram
http://www.abc.net.au/science/articles/2008/04/28/2229383.htm?site=science&topic=ancient
Many of Egypt's most famous monuments, such as the Sphinx and Cheops pyramid at Giza, contain hundreds of thousands of marine fossils, according to a new study.
Most of the fossils are intact and preserved in the monument walls, giving clues to how the monuments were built.
The authors suggest the stones that make up the Giza plateau, Fayum and Abydos monuments must have been carved out of natural stone as they reveal what chunks of the sea floor must have looked like over 4000 years ago, when the buildings were erected.
2. Apr 28, 2008
### matthyaouw
I didn't know that there was any doubt that some of the stones of the pyramids were cut stones. I'll have a full read of the original article later on to see if they've concluded that all of the stones are cut, and if so on what grounds.
Significantly over 4000 years I should think...
3. Apr 28, 2008
### wolram
There was an argument that some were cast/man made.
http://www.fravahr.org/spip.php?breve258
The widely accepted theory-that the pyramids were crafted of carved-out giant limestone blocks that workers carried up ramps-had not only not been embraced by everyone, but as important had quite a number of holes.
After extensive scanning electron microscope (SEM) observations and other testing, Barsoum and his research group found that the tiniest structures within the inner and outer casing stones were indeed consistent with a reconstituted limestone. The cement binding the limestone aggregate was either silicon dioxide (the building block of quartz) or a calcium and magnesium-rich silicate mineral.
Last edited: Apr 28, 2008 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8062968254089355, "perplexity": 4899.5977728718935}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00115-ip-10-171-10-70.ec2.internal.warc.gz"} |
http://nlab-pages.s3.us-east-2.amazonaws.com/nlab/show/24+branes+transverse+to+K3 | # Contents
## Idea
Several classes of string theory vacua require the presence of exactly 24 branes of codimension 4 transverse to a K3-surface-fiber; this happens notably in F-theory compactified on an elliptically fibered K3 (with 24 D7-branes, see below) and in heterotic string theory compactified on K3 (with 24 NS5-branes, see below).
In both cases the condition arises as a kind of tadpole cancellation-condition, where the charge of the 24 branes in the compact K3-fiber space, which naively would be 24 in natural units, cancels out to zero, due to some subtle effect.
Despite the superficial similarity, this subtle effect is, on the face of it, rather different in the two cases.
An argument that these two different-looking mechanisms are in fact equivalent, under suitable duality in string theory, is given in Braun-Brodie-Lukas-Ruehle 18, Sec. III, following detailed analysis due to Aspinwall-Morrison 97.
### In F-theory on K3
In passing from M-theory to type IIA string theory, the locus of any Kaluza-Klein monopole in 11d becomes the locus of D6-branes in 10d. The locus of the Kaluza-Klein monopole in turn (as discussed there) is the locus where the $S^1_A$-circle fibration degenerates. Hence in F-theory this is the locus where the fiber of the $S^1_A \times S^1_B$-elliptic fibration degenerates to the nodal curve. Since the T-dual of D6-branes are D7-branes, it follows that D7-branes in F-theory “are” the singular locus of the elliptic fibration.
Now, considering F-theory on K3, an elliptically fibered complex K3-surface
$\array{ T &\longrightarrow& K3 \\ && \downarrow \\ && \mathbb{C}\mathbb{P}^1 }$
may be parameterized via the Weierstrass elliptic function as the solution locus of the equation
$y^2 = x^3 + f(z) x + g(z)$
for $x,y,z \in \mathbb{C}\mathbb{P}^1$, with $f$ a polynomial of degree 8 and $g$ of degree 12. The j-invariant of the complex elliptic curve which this parameterizes for given $z$ is
$j(\tau(z)) = \frac{4 (24 f)^3}{27 g^2 + 4 f^3} \,.$
The poles $j\to \infty$ of the j-invariant correspond to the nodal curve, and hence it is at these poles that the D7-branes are located.
Since the order of the poles is 24 (the polynomial degree of the discriminant $\Delta = 27 g^2 + 4 f^3$; see at elliptically fibered K3-surface the section on singular points) there are necessarily 24 D7-branes (Sen 96, page 5, Lerche 99, p. 6, see also Morrison 04, sections 8 and 17, Denef 08, around (3.41), Douglas-Park-Schnell 14).
Notice that the net charge of these 24 D7-branes is supposed to vanish, due to S-duality effects (e.g. Denef 08, below (3.41)).
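As a sanity check on this degree count, the following small SymPy script (with arbitrary generic coefficients, chosen here purely for illustration) confirms that the discriminant of such a Weierstrass fibration has polynomial degree 24:

```python
import sympy as sp

z = sp.symbols('z')
# arbitrary generic coefficients (hypothetical choices, just to avoid cancellations)
f = sum(sp.Integer(k + 2) * z**k for k in range(9))      # degree 8
g = sum(sp.Integer(3*k + 1) * z**k for k in range(13))   # degree 12
Delta = sp.expand(27*g**2 + 4*f**3)
print(sp.degree(Delta, z))  # 24: the j-invariant has 24 poles, hence 24 D7-branes
```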
### In IIA-theory on K3
Under T-duality the above discussion in F-theory translates to 24 D6-branes in type IIA string theory on K3 (Vafa 96, Footnote 2 on p. 6).
### In HET-theory on K3
In heterotic string theory KK-compactified on K3 with vanishing gauge-field instanton number, the existence of exactly 24 NS5-branes is implied by the Green-Schwarz mechanism: This requires that the 3-flux density $H_3$ measuring the NS5-brane charge satisfies $d H_3 = \tfrac{1}{2} p_1(\nabla)$; and using that on K3 we have $\int_{K3} \tfrac{1}{2} p_1(\nabla) = \int_{K3} \chi_4(\nabla) = \chi_4[K3] = 24$ this implies, with Stokes' theorem, that the $H_3$-flux through the 3-spheres around transversal NS5-brane-punctures of the K3 equals 24 (e.g. Schwarz 96, around p. 50, Aspinwall-Morrison 97, Sec. 4-5, Johnson 98, p. 30, Braun-Brodie-Lukas-Ruehle 18, Section III.A, Choi-Kobayashi 19, Sec. 1.1).
The duality of this HET-phenomenon with that in F-theory above is discussed in Braun-Brodie-Lukas-Ruehle 18, Section III.
### Under Hypothesis H
The vanishing of the Euler characteristic of K3 after cutting out disk neighborhoods of 24 points is precisely the mechanism which witnesses the order 24 of the third stable homotopy group of spheres, seen under Pontryagin's theorem as the existence of a framed cobordism $K3 \setminus 24 \cdot D^4$ whose boundary is 24 3-spheres:
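Spelled out, since each removed disk $D^4$ is contractible, with $\chi(D^4) = 1$, the count is
$\chi\left( K3 \setminus 24 \cdot D^4 \right) \;=\; \chi(K3) - 24 \cdot \chi(D^4) \;=\; 24 - 24 \;=\; 0 \,.$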
This relates the number of 24 branes transverse on K3 to Hypothesis H:
## References
### In F-theory
Discussion in F-theory via the Kodaira classification of elliptically fibered K3s:
### In HET-theory
Discussion in heterotic string theory via the Green-Schwarz mechanism on K3 (see also at small instantons):
### In F- and HET-theory
Joint discussion in F- and dual HET-theory:
### Swampland cobordism conjecture
The swampland cobordism conjecture is the hypothesis that consistency of vacua in string theory/M-theory/F-theory (hence their being in the “landscape” instead of the “swampland”) requires (undefined) stringy/quantum gravity-analogs of cobordism groups to vanish:
A more concrete consequence of this conjecture is claimed (McNamara-Vafa 17, Sec. 5.2, 2nd paragr.) to be the statement that – paraphrasing/extrapolating somewhat: brane charges are quantized in Cobordism cohomology; so that, in particular, tadpole cancellation of brane charges is to happen in Cobordism cohomology. This statement is discussed, as a consequence of Hypothesis H, in (see p. 83):
https://astronomy.stackexchange.com/questions/47370/is-it-more-accurate-to-measure-distances-from-mars/47387 | # Is it more accurate to measure distances from Mars?
At a certain level of development of the Martian civilization, scientists of this planet began to measure distances. Will their measurements be more or less accurate than those of earthlings: (a) to the planets of the Solar System; (b) to the nearest stars? Assume that the development of the sciences of the terrestrial and Martian civilizations followed approximately the same path.
Mars's orbit differs from Earth's only by being more elongated. Does this give any advantages?
• I guess the way to go about this is to think of how our methods of measuring distances are depending on the orbit of a planet.. Nov 4 '21 at 15:21
• Hint: How was a parsec originally defined? Nov 4 '21 at 15:35
• Oh, I see. So because Mars's orbit is more elongated if the Martians used their average distance to the Sun in parsec they would get greater mistake. Nov 4 '21 at 17:59
• @ALiCeP. yes, but instead of "is more elongated" just say "is larger" and instead of "greater mistake" say "a larger motion" or better yet simply "larger parallax". I think you can write an answer to your own question now if you like! :-) – uhoh, Nov 4 '21 at 19:38
• A good part of the “cosmic ladder” was figuring the diameter of the Earth, the distance and size of the Moon, and the distance and size of the Sun. Mars has small moons, which might make it more difficult to measure their size and distance. Also, they orbit Mars much faster than Earth’s Moon, adding to the difficulty. Nov 5 '21 at 23:41
It would be more accurate to determine the parallaxes of stars, and thus their distances from the Solar system, from the surface of Mars than from the surface of the Earth.
The orbit of the planet Earth has a semi-major axis of 1 Astronomical Unit (AU).
Imagine that a star is observed at the moment when there is a straight line from the center of the Sun through the center of the Earth to the celestial longitude of the star, i.e. when the star's celestial longitude is directly opposite to the Sun as seen from Earth.
Now imagine that the star is observed 3 months, or 1 quarter year, before then. The line between Earth and the Sun will be at right angles to the line between the Sun and the star, and will be 1 AU long. Imagine that the angle to the star is measured very precisely.
Now imagine that the star is observed, and the angle to it measured, six months after its angle was measured the first time. The planet Earth will now be at the opposite point of its orbit from where it was six months before, and the line between Earth and the Sun will again be at right angles to the line between the Sun and the star. Imagine that the angle to the star is measured very precisely.
Because of the incredibly vast distances to the stars, the two measurements of the angle to the star should be very slightly different despite being measured from points which are two AU apart. The difference between the two angle measurements is called the parallax of the star, and the parallax can be used to calculate the distance to the star.
The orbit of Mars has a semi-major axis of 1.52 AU. So two measurements of the angle to a star made 0.94 Earth years, or half a Martian year, apart would be made from positions 3.04 AU apart. That is 1.52 times as far as the baseline used in parallax measurements from Earth, so it will make parallax measurements a little bit easier and more accurate.
For example, it is a little easier to measure an angle of 0.0015 arc second than an angle of 0.001 arc second.
So it is theoretically more accurate to measure the distances to distant stars from the surface of Mars than from the Earth, although it has not been tried yet.
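For rough numbers, here is a minimal sketch (the example distances are purely illustrative):

```python
def parallax_arcsec(orbit_radius_au, distance_pc):
    """Annual parallax in arcseconds: p = a / d, with the orbital
    semi-major axis a in AU and the star's distance d in parsecs."""
    return orbit_radius_au / distance_pc

for d in (1.0, 10.0, 1000.0):  # example distances in parsecs
    print(f'd = {d:6.0f} pc: Earth {parallax_arcsec(1.00, d):.5f}", '
          f'Mars {parallax_arcsec(1.52, d):.5f}"')
```

At 1000 pc this gives 0.00100 arcsec from Earth versus 0.00152 arcsec from Mars, matching the comparison above.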
• Could you please explain in a little more detail why exactly the measurements made from 3.04 AU apart (1.52 times as far as from Earth's baseline) are more accurate and easy? Nov 6 '21 at 6:17
• @ALICe P. I would think that it is obviously easier to measure a larger angle than a smaller angle. Anyway, I added something to that effect to my answer. Nov 7 '21 at 4:58
• You're right, weird that I didn't think of it. Thanks for clarifying this for me once again! Nov 7 '21 at 6:56 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.883919894695282, "perplexity": 577.1006604540162}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305423.58/warc/CC-MAIN-20220128074016-20220128104016-00563.warc.gz"} |
https://saturnaxis.github.io/Astrobio/Chapter_06/Early_Earth.html | # 6. Early Earth#
## 6.1. Evolution of Metabolism and Early Microbial Communities#
The process of metabolism is a hallmark of all living organisms. Catabolic reactions generate energy for the organism while anabolic reactions are used for the synthesis of cell material. Metabolic pathways in today’s living organisms have been evolving for more than $$3.5\ {\rm Gyr}$$. The evolution of metabolism cannot be separated from the origin of life because it was necessary for even the earliest organisms. Contemporary metabolic pathways are presumed to be much more elaborate and sophisticated than those that first evolved. Metabolism today ranges from
• the use of various inorganic chemicals (e.g., hydrogen or sulfur for energy),
• several forms of photosynthesis, and
• metabolism of hundreds of organic compounds.
It is presently impossible for us to know which pathways originated first and how they evolved. Understanding how metabolism evolved on the early Earth is of considerable importance. Microorganisms resembling today’s Bacteria and Archaea were most likely the first organisms, and the evolution of their metabolism is the most relevant. Identifying the early evolution of metabolism in these organisms enables us to distinguish specific metabolic types from the later innovations.
### 6.1.1. The Setting: Conditions on Early Earth#
On the early Earth, there were physical and chemical conditions that influenced the evolution of metabolic processes.
One factor that influenced physical conditions was an instability in the environment (e.g., large changes in available sunlight, water, or temperature). An impact of an asteroid ($$R{\sim}50\ {\rm km}$$; see Fig. 5.16) can sterilize all surface life on our planet and evaporate a global ocean that is $$3\ {\rm km}$$ deep. Only microbial life at a depth of several $${\rm km}$$ could then survive. Cosmic events affected early life in major and largely unknown ways that may have led to the extinction of many species and the redirection of the evolution of Earth’s biota. It is possible that catastrophic events during the early history of Earth also affected the evolution of metabolism.
Chemical conditions on early Earth were marked by a relative dearth (scarcity) of the nutrients that make life so rich today. A steady supply of certain nutrients was available, which may have influenced the emergence of specific metabolic pathways. Organic and other volatile materials are synthesized in space and became incorporated into the Earth through impacts by large bodies. These impacts may have disrupted evolution, but also provided nutrients for life. In addition, the abiotic synthesis of organic molecules occurred on Earth. Acetic acid ($$\rm CH_3COOH$$) synthesis has been shown to occur under primordial conditions.
Although on the early Earth organic materials were less abundant than today, we may reasonably assume that they were present in sufficient quantities and varieties for life to originate and evolve. Organic materials alone would not likely have been sufficient to sustain life. Other nutrients were necessary for organisms to utilize electron donors and acceptors to generate metabolic energy needed for life (see Table 6.1). The first metabolisms that arose were carried out anaerobically due to the lack of free oxygen.
Table 6.1 Building blocks of life and replenishable sources of energy on early Earth.#

| Nutrient | Sources on primitive Earth |
| --- | --- |
| **Fixed carbon** | |
| Simple amino acids, sugars, other organics | Meteors and asteroids |
| Acetic acid | Chemosynthesis on Earth |
| **Electron donors** | |
| Hydrogen ($$\rm H_2$$) | Volcanic outgassing |
| Ferrous iron ($$\rm Fe^{2+}$$) | Mineral dissolution |
| **Electron acceptors** | |
| Carbon dioxide and monoxide ($$\rm CO_2$$ and $$\rm CO$$) | Volcanism |
| Sulfite ($$\rm SO_3^{-}$$) | Volcanism; hydration of sulfur dioxide ($$\rm SO_2$$) |
| Elemental sulfur ($$\rm S$$) | Mineral dissolution |
### 6.1.2. Evidence for the Nature of Early Metabolisms#
A few categories of more direct evidence can elucidate the nature and timing of early metabolisms. This evidence comes from three primary sources: biomarkers, the geochemical record, and phylogenetic evidence.
Unique biomarkers have been useful for understanding early metabolisms. For example, hopanoids are found only in cell membranes of the bacterial group cyanobacteria, which carry out oxygen-producing photosynthesis. The presence of hopanoids has been used to date the evolution of these organisms to about $$2.5\ {\rm Ga}$$. This date agrees well with the geochemical record for oxygen production on Earth and supports the view that the first oxygenic photosynthetic organisms were cyanobacteria.
The geochemical record has been useful in establishing evidence for certain types of metabolic activity. Strong evidence indicates that carbon dioxide fixation, called primary production, was an early metabolic event on Earth. This is when environmental $${\rm CO_2}$$ is taken up and the carbon is incorporated into organic matter. Carbon dioxide-fixing organisms cause isotopic fractionation. As a result, in the Isua deposits in Greenland (which date to $$3.5\ {\rm Ga}$$), deposited organic carbon has a lighter isotopic signature than carbonate.
Although this is excellent evidence that primary production occurred on early Earth, there is no definitive evidence about the nature of the earliest primary producing organisms. Were they using chemical energy (chemosynthetic) or light energy (photosynthetic), and which organisms were responsible for this process? The anaerobic methane oxidizing microbes are known to produce the lightest organic carbon, which may explain the existence of lighter organic carbon isotopes in certain layers of the Isua sediments.
Phylogenetic analysis is another useful approach for identifying which bacterial groups were involved in early metabolism. By knowing which group of organisms evolved first, it is possible to hypothesize which metabolic activities arose first. The deepest branches of the RNA Tree of Life are of thermophilic Archaea and Bacteria, implying that these are the earliest ancestors of any organisms still alive today. However, two issues are important:
1. discernible detail regarding the major groups of microorganisms is inadequate, so there remains a question about which organisms might have been the earliest; and
2. horizontal gene transfer, which involves the exchange of genes among different lineages.
Most genes are transferred by vertical inheritance from one generation to another within a species.
Some of the genes responsible for carrying out methanogenesis (methane production) have also been found in bacteria that are methane consumers. Methanogens are found in the domain Archaea, whereas methane consumers are found in the domain Bacteria. This sort of horizontal gene transfer makes it very difficult to trace the evolution of metabolic functions and other characteristics of organisms.
### 6.1.3. Contemporary Metabolisms#
The evolution of life has changed the Earth, even as the Earth has provided the habitats for life. For example, the evolution of oxygenic photosynthesis gave rise to oxygen on Earth and enabled the later evolution of aerobic metabolism. To understand the evolution of metabolism, we study the essential metabolic features of present-day organisms.
#### 6.1.3.1. Anabolism#
One of the two major parts of metabolism is anabolism (i.e., constructive metabolism or biosynthesis), which is responsible for the building of new cell material for growth and reproduction. The heterotrophs synthesize cell material from organic molecules that are present in their environment. In contrast, the autotrophs make organic molecules from carbon dioxide in a process called carbon dioxide fixation. The autotrophs are important because they are the primary producers that ultimately provide the entire biosphere with organic material.
The most advanced pathway for carbon dioxide fixation is the Calvin-Benson cycle used by plants and some bacteria as part of photosynthesis. Other pathways include the acetyl coenzyme A, reverse tricarboxylic acid, and hydroxypropionate pathways.
#### 6.1.3.2. Catabolism#
The chemical reactions that constitute metabolism must be thermodynamically feasible. Anabolic reactions require energy to create order, and thus organisms must also have catabolic pathways (i.e., destructive metabolism) to generate the energy needed for anabolism. Catabolic pathways are remarkably diverse: some organisms “burn” organic food using oxygen (i.e., aerobic organotrophy). Before oxygen, organisms “breathed” substances such as carbon dioxide, sulfate, iron, and other substances. The lithotrophs “eat” hydrogen, sulfur, ammonia, and other unlikely-seeming foods. The phototrophs harvest the energy of light.
#### 6.1.3.3. Oxidation, reduction, and electron flow#
Catabolism follows the principles of oxidation and reduction. These principles are best illustrated using a form of catabolism called respiration. For respirers, oxidation corresponds to “eating” as reduction is to “breathing.” Between the two types of reactions, electrons flow, and the work done using catabolic energy is analogous to the work done by electrons flowing between the terminals of a battery. The only requirement is that the reaction has the potential to release energy, a condition realized if the substance oxidized has a more negative electrical potential than the substance reduced.
• To generate energy the organism oxidizes (or “eats”) the electron donor, while it reduces (or “breathes”) the electron receptor.
Respirers range from methanogens and acetogens, which utilize carbon dioxide and produce methane and acetic acid as waste products, to aerobes (which use oxygen and produce water). The electron acceptor has also been a major constraint in the evolution of catabolism because electron acceptors (e.g., oxygen) became abundant relatively late.
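The battery analogy can be made quantitative with the standard electrochemical relation $$\Delta G = -n F \Delta E$$ (not given explicitly in this chapter); a minimal sketch using textbook redox potentials:

```python
F = 96485.0  # Faraday constant, C/mol

def delta_G_kJ(n_electrons, dE_volts):
    """dG = -n*F*dE, returned in kJ per mole of reaction."""
    return -n_electrons * F * dE_volts / 1000.0

# H2 donor (E0' = -0.41 V at pH 7) paired with the O2/H2O acceptor (E0' = +0.82 V):
print(round(delta_G_kJ(2, 0.82 - (-0.41))))  # about -237 kJ per mol H2 oxidized
```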
#### 6.1.3.4. Respiration, photosynthesis, and fermentation#
The three forms of catabolism are respiration, photosynthesis, and fermentation, which are distinguished largely by their patterns of electron flow. Respiration is the most straightforward because separate substances in the environment (i.e., substrate) are used by organism as electron donor and acceptor. In photosynthesis and fermentation, electrons still flow from a donor to an acceptor, but the pattern is less intuitive.
Photosynthesizers take advantage of the key property of chlorophyll, which functions well as an electron acceptor and, if energized by light, as an effective electron donor. As a result, electrons flow from and to chlorophyll in a cyclic pattern, doing work in the process. Superimposed on this chlorophyll-driven cyclic electron flow, electrons from an electron-donating substrate are used to reduce carbon dioxide in carbon dioxide fixation. Fermenters follow a pattern in which an electron-donating substrate is oxidized and the electron acceptor is an internal metabolic intermediate rather than a substrate.
#### 6.1.3.5. ATP and the proton-motive force#
Catabolism consists of metabolic pathways that involve oxidation and reduction. How is the energy actually stored and transferred? Adenosine triphosphate (ATP) is the energy currency of biology; ATP contains phosphoanhydride bonds that yield energy upon hydrolysis (i.e., breakdown through reaction with water). Certain other compounds have energy-rich bonds too, including adenosine diphosphate (ADP), pyrophosphate (PP), and other metabolic intermediates. ATP predominates in present-day organisms.
The formation of ATP and the hydrolysis of ATP provide the energy link between catabolism and anabolism, where the former generates ATP and the latter uses ATP. The mechanism of ATP generation is usually simplest in the case of fermentation, where ATP is generated directly: a metabolic intermediate containing an energy-rich phosphoanhydride bond transfers its phosphate to ADP, forming ATP.
For respiration and photosynthesis, ATP is generated in a two-stage process in which the cell membrane plays an essential role. Just as a membrane is necessary to separate the contents of a cell, so is a membrane necessary for energy generation and storage. The principle is akin to charge separation.
1. The flow of electrons through a membrane-bound electron transport chain creates a potential between the interior and exterior of the cell.
2. Protons move across the potential due to the electromotive force and gradient in pH (proton concentration). This movement of protons is called the proton-motive force within biology.
3. The energy supplied by the proton-motive force is then used to make ATP, via a membrane-bound enzyme called ATP synthase that couples the synthesis of ATP to the import of protons.
This process is termed electron transport phosphorylation or oxidative phosphorylation. Hence the cell membrane plays a central role not only in the cell’s structural integrity, but also in its ability to generate energy.
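A minimal numeric sketch of the standard proton-motive-force relation (the relation and the illustrative numbers below are textbook bioenergetics values, not taken from this chapter):

```python
def proton_motive_force_mV(delta_psi_mV, dpH, T=298.15):
    """Standard relation dp = dpsi - 2.303*(R*T/F)*dpH,
    with dpH = pH_in - pH_out (illustrative values only)."""
    R, F = 8.314, 96485.0  # J/(mol K), C/mol
    return delta_psi_mV - 2.303 * (R * T / F) * 1000.0 * dpH

# e.g. membrane potential -150 mV (inside negative), interior 1 pH unit more alkaline:
print(round(proton_motive_force_mV(-150, 1.0)))  # about -209 mV, driving protons inward
```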
#### 6.1.3.6. Energy yields of catabolism#
All catabolic mechanisms lead to the generation of ATP. But which forms of catabolism generate the most ATP and are the most advantageous for the organism? A quantity used by chemists to evaluate the energy yield of a reaction is the change in Gibbs free energy ($$\Delta G$$) expressed in kilojoules ($$\rm kJ$$).
A large negative $$\Delta G$$ indicates a high yield of energy, while a positive $$\Delta G$$ signifies that energy is consumed. Hence, all catabolic pathways must have a negative $$\Delta G$$, where the more negative the better. The $$\Delta G$$ depends on the relative electrical potential of the electron donors and acceptors.
Aerobic respiration (ATP yield $$=38$$) using glucose ($${\rm C_6H_{12}O_6}$$) as an electron donor and oxygen as an electron acceptor is one of the most effective in producing ATP:
(6.1)#\begin{align} {\rm C_6H_{12}O_6} + 6{\rm O_2} \rightarrow 6{\rm CO_2} + 6{\rm H_2O}; \qquad \Delta G = -2870\ {\rm kJ}. \end{align}
Aerobic respiration produces a large negative $$\Delta G$$ because the oxidation of glucose occurs at a very negative redox potential, while the reduction of oxygen occurs at a very positive redox potential. Organisms that carry out aerobic respiration can grow fast; they include highly evolved forms such as animals.
Photosynthesis is effective through the light-driven electron flow via chlorophyll; its effectiveness is appreciated when one realizes that oxygenic photosynthesis is the reverse of aerobic respiration. Since the reverse of an energy-yielding process is an energy-requiring one, the use of light via chlorophyll must be very effective. Like animals, plants are also highly evolved and successful.
Other forms of respiration (e.g., methanogenesis or sulfate reduction) yield less energy:
(6.2)#\begin{align} 4{\rm H_2} + {\rm CO_2} &\rightarrow {\rm CH_4} + 2{\rm H_2O}; \qquad \Delta G = -131\ {\rm kJ}, \quad \mathbf{(methanogenesis)} \\ 4{\rm H_2} + {\rm SO_4^{2-}} + {\rm H^+} &\rightarrow {\rm HS^-} + 4{\rm H_2O}; \qquad \Delta G = -152\ {\rm kJ}. \quad \mathbf{(sulfate\ reduction)} \end{align}
Although methanogens and sulfate reducers use electron donors with a large negative redox potential (e.g., hydrogen or organic substances), the electron acceptors (carbon dioxide and sulfate) have redox potentials that are also in the negative range. Thus, the $$\Delta G$$’s have small negative values and the ATP yields are low.
Fermenters are also limited by small redox potential differences between the electron donor and acceptor. Lactic acid fermentation (ATP yield $$=2$$) has a low ATP yield and $$\Delta G = -196\ {\rm kJ}$$. Fermenters also lack the opportunity to fully oxidize their substrates.
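For a side-by-side view of the yields quoted above, here is a small sketch (the list and formatting are ad hoc; ATP yields appear in the text only for aerobic respiration and lactic acid fermentation):

```python
# (name, dG in kJ/mol, ATP yield where the text states one)
reactions = [
    ("aerobic respiration of glucose",   -2870, 38),
    ("lactic acid fermentation",          -196, 2),
    ("sulfate reduction (4H2 + SO4^2-)",  -152, None),
    ("methanogenesis (4H2 + CO2)",        -131, None),
]
for name, dG, atp in sorted(reactions, key=lambda r: r[1]):
    per_atp = f", {dG / atp:6.1f} kJ per ATP" if atp else ""
    print(f"{name:35s} dG = {dG:6d} kJ{per_atp}")
```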
### 6.1.4. Early Metabolic Mechanisms#
Metabolism today is complex and diverse. One of our tenets is that life, and metabolism, began simpler and more uniform. The continuity of evolution means that all contemporary metabolism evolved from these modest beginnings. Our other tenet is that early life had to make do with the nutrients that were available.
Oparin originally hypothesized that the first metabolisms were not photosynthetic: organisms obtained organic carbon from their surroundings rather than synthesizing it photosynthetically from carbon dioxide (i.e., they were heterotrophic). Photosynthetic organisms are much more complex than heterotrophic organisms because they need pathways to carry out photosynthesis and carbon dioxide fixation. This is in addition to the anabolic pathways for synthesis of amino acids and other cell materials. In contrast, heterotrophic metabolism requires relatively few enzymes to produce cell material from organic precursors.
A simple type of catabolism is the Stickland reaction, which occurs in certain bacteria and involves the fermentation of amino acid pairs. In this reaction, only two amino acids are needed and very few enzymes are required for catabolism and generation of ATP. It is interesting to note that alanine and glycine, the amino acids in classic examples of the Stickland reaction, are very stable and commonly found in carbonaceous chondrite meteorites. Therefore, these substrates may have been available for metabolism early on.
The hypothesis that heterotrophic metabolism arose first ignores the importance of primary production (i.e., the production of organic material either by chemosynthesis or photosynthesis). Heterotrophic organisms require preformed organic material. Although organic substrates may have been abundant on early Earth, within a short time they would have been depleted by any heterotrophic organisms. Some process in which organic material is produced biologically on Earth is of fundamental importance in early metabolism. The most important question concerning early metabolism becomes: What were the most likely metabolism(s) of the initial primary producers?
### 6.1.5. Evolution of Methanogenesis and Acetogenesis#
#### 6.1.5.1. Today and on the early Earth#
Methanogens and acetogens share partly analogous metabolic pathways, although they are not phylogenetically related. Methanogens and acetogens are both anaerobes that specialize in metabolism using one-carbon intermediates. Hydrogen is a common electron donor and carbon dioxide a common electron acceptor. Methane and acetic acid are the respective catabolic products. Both organisms fix carbon dioxide by the acetyl coenzyme A ($$\mathbf{\rm CoA}$$) pathway. Today methanogens and acetogens play essential roles in anaerobic ecosystems and live in a variety of habitats ranging from hydrothermal vents, pond sediments, and animal digestive tracts.
Note
A coenzyme is any non-protein molecule required for the functioning of an enzyme. Coenzyme A is a specific sulfur-containing organic coenzyme.
Today, methanogens in sediments team up with a variety of fermenters to convert cellulose-type materials to methane and carbon dioxide. Before photosynthesis, the Earth and its biosphere were quite different. Organic materials were limited to those formed by abiotic processes and by a feeble primary production. Autotrophy (primary production) may have been preferred over heterotrophy, necessitating a means of carbon dioxide fixation. On the early Earth, electron donors and acceptors for catabolism may have been rare or limited to a few elements or compounds. In this most ancient of biological worlds, methanogens or acetogens may have been the only metabolic types.
Analogs of the early Earth may exist today:
• Microbial communities in the Earth’s subsurface are apparently dominated by methanogens that obtain hydrogen from reduced minerals in rock.
• Deep basalt aquifers, hot springs, and submarine hydrothermal vents are examples of sites where microbial ecosystems may be supported by hydrogen of geological origin.
These environments, which are poor in organic material but potentially rich in mineral nutrients, could resemble ecosystems that dominated on the early Earth or that may exist on other planetary bodies.
#### 6.1.5.2. Metabolic simplicity and the acetyl CoA pathway#
It is reasonable to assume that the earliest organisms were also the simplest. Frequent bombardment of the early Earth might have limited the time available for complexity to evolve. From this standpoint, methanogenesis and acetogenesis are appealing as the earliest metabolisms. In its simplest form, the metabolisms of methanogens and acetogens each consist of three essential pathways.
1. reduction of carbon dioxide to a methyl ($$\rm CH_3$$) group,
2. the acetyl coenzyme A ($$\rm CoA$$) pathway that uses the methyl group to form $$\rm CH_3CO-CoA$$, and
3. a single energy-yielding step.
Fig. 6.1 The reductive acetyl-CoA (Wood-Ljungdahl) pathway. The upper part (red) shows the variant of the pathway functioning in acetogens, and the lower part (blue) depicts the pathway in methanogens. Image Credit: Berg (2011).#
Methanogens reduce additional methyl groups to methane, while acetogens convert some of their acetyl $$\rm CoA$$ to acetic acid and gain an ATP in the process. The acetyl $$\rm CoA$$ pathway may have been the crux of an ancient autotrophic mode of life. Other pathways of carbon dioxide fixation are more complex, involving various numbers of multi-carbon intermediates. Because of its simplicity the acetyl $$\rm CoA$$ pathway may have been the easiest to evolve.
#### 6.1.5.3. A prebiotic precursor for the acetyl CoA pathway#
Further support for the ancient nature of the acetyl $$\rm CoA$$ pathway comes from evidence that the crucial reaction could have occurred prebiotically. Günther Wächtershäuser proposed the theory that metabolism evolved from prebiotic chemistry, where he has shown in the laboratory that under primordial conditions a methyl group can react with carbon monoxide to yield an acetyl group (Huber and Wächtershäuser (1997)).
Particularly interesting was the observation that the reaction required nickel sulfide and iron sulfide as catalysts. The enzyme that catalyzes the reaction in present-day organisms also contains a nickel-iron-sulfur center. A core step in the metabolism of methanogens and acetogens may have derived from a prebiotic chemical reaction. Wächtershäuser has suggested that the combined presence of methyl mercaptan ($$\rm CH_3SH$$), iron sulfide, and nickel sulfide could constitute important components of a “marker” for primitive habitats on Earth and Mars.
#### 6.1.5.4. Hydrogen oxidation and the proton-motive force#
Another significant feature of the methanogenic and acetogenic metabolism is the use of $$\rm H_2$$ as the catabolic electron donor. The electron transport chain of many organisms is a complex multi-component assembly that moves protons from the inside to the outside of cells as electrons flow from one electron carrier to another. These complex electron transport chains are characteristic of organotrophs and phototrophs, as well as aerobic lithotrophs that oxidize inorganic compounds other than hydrogen.
The use of hydrogen offers the possibility of a simpler mechanism in which a proton-motive force is generated without the transport of protons. The products of hydrogen oxidation are simply protons and electrons. If the oxidation of hydrogen occurs on the outside of the cell membrane, then protons can accumulate there. A simple electron carrier that spans the membrane can then deliver electrons to the inside. The electrons can then be used on the inside in a proton-consuming reaction to reduce carbon dioxide to methane or acetic acid. The result is a proton-motive force.
This simple mechanism requiring only an extracellular hydrogen-oxidizing enzyme (hydrogenase) and a membrane-bound electron carrier (e.g., iron-sulfur protein) has not yet actually been demonstrated in the methanogens or acetogens. But neither is it excluded as a primordial mechanism or as one that operates today.
### 6.1.6. Evolution of Photosynthesis#
The evolution of photosynthesis greatly enhanced the capability of the planet to carry out primary production. Organisms have evolved to make efficient use of light energy, which has always been available and abundant on the surface of the planet. Although the evolution of photosynthesis most likely followed the evolution of simpler primary production mechanisms, when it evolved it had a major impact on all life.
Photosynthesis has been found in Bacteria and Eukarya. A remarkable variety of photosynthetic bacterial groups exists. Within the Bacteria, five of the major phyla contain photosynthetic members: Proteobacteria, Firmicutes, Chlorobi, Chloroflexi, and Cyanobacteria. C.B. van Niel proposed (in 1941) the following general reaction for photosynthesis:
(6.3)#\begin{align} {\rm CO_2} + 2{\rm H_2A} + \text{light} \rightarrow {\rm (CH_2O)} + {\rm H_2O} + 2{\rm A}, \end{align}
where $${\rm H_2A}$$ can stand for $${\rm H_2S}$$, $${\rm H_2}$$, $${\rm H_2O}$$, an organic compound, or even ferrous iron ($$\rm Fe^{2+}$$). $${\rm (CH_2O)}$$ represents an organic product of the photosynthesis.
Many of the photosynthetic bacteria are restricted to anoxic habitats and use $${\rm H_2S}$$ as a source of electrons. Thus the reaction for these bacteria is:
(6.4)#\begin{align} {\rm CO_2} + 2{\rm H_2S} + \text{light} \rightarrow {\rm (CH_2O)} + {\rm H_2O} + 2{\rm S}. \end{align}
These phototrophs can often further oxidize the elemental sulfur to produce sulfate anaerobically during photosynthesis:
(6.5)#\begin{align} 3{\rm CO_2} + 2{\rm S} + 5{\rm H_2O} + \text{light} \rightarrow 3{\rm (CH_2O)} + 2{\rm H_2SO_4}. \end{align}
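As a quick check that reaction (6.5) balances atom by atom (the tally helper below is an ad hoc sketch, not part of the chapter):

```python
from collections import Counter

def combine(*terms):
    """Sum n copies of each species' element tally."""
    total = Counter()
    for n, species in terms:
        for element, count in species.items():
            total[element] += n * count
    return total

CO2, S, H2O = {'C': 1, 'O': 2}, {'S': 1}, {'H': 2, 'O': 1}
CH2O, H2SO4 = {'C': 1, 'H': 2, 'O': 1}, {'H': 2, 'S': 1, 'O': 4}

lhs = combine((3, CO2), (2, S), (5, H2O))
rhs = combine((3, CH2O), (2, H2SO4))
print(lhs == rhs)  # True: 3 C, 10 H, 11 O, 2 S on each side
```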
Cyanobacteria are best known as the first organisms to use water to carry out oxygenic photosynthesis, in which oxygen is produced:
(6.6)#\begin{align} {\rm CO_2} + {\rm H_2O} + \text{light} \rightarrow {\rm (CH_2O)} + {\rm O_2}. \end{align}
Note that the $${\rm O_2}$$ comes from the water and not the carbon dioxide. For this reason, oxygenic photosynthesis is called the “water splitting” reaction. Apart from cyanobacteria, all photosynthetic bacteria carry out photosynthesis anaerobically and oxygen is not produced. This anaerobic process is referred to as anoxygenic photosynthesis.
When did photosynthesis evolve? The availability of suitable substrates is not a constraint because they were all available, as was light. The main constraint has to do with complexity. To generate a proton-motive force photosynthetically requires the addition of chlorophyll, as well as a relatively complex electron transport chain involving the transport of protons across the membrane. This mechanism is more likely to have followed the relatively simple $${\rm H_2}$$ chemosynthetic mechanisms.
The time of emergence of different photosynthetic types is unresolved in the geologic record, but available evidence suggests that anoxic phototrophs predated the cyanobacteria. At that time the entire planet was anoxic, since the cyanobacteria had yet to begin to oxygenate the atmosphere and oceans. The evolutionary emergence of the cyanobacteria was the result of a remarkable biochemical innovation: the development of a light-driven electron transport system that could fix $$\rm CO_2$$ with electrons extracted from water, liberating its oxygen. This photosystem, composed of two photosynthetic reaction centers, was likely built upon the machinery of earlier, simpler microorganisms that were restricted to anoxic habitats and that used reduced sulfur as a source of electrons.
The time of cyanobacterial emergence has been dated to $$2.1-2.5\ {\rm Ga}$$ on the basis of protein sequences, which is consistent with the geochemical estimate for the era of oxygenation of Earth’s atmosphere and oceans (i.e., from the banded iron formations). The evolution of photosynthesis liberated microbial life from dependence on geochemically produced, reduced substrates (e.g., $$\rm H_2S$$), resulting in an explosive increase in biomass on Earth. By using water as a reductant, life was free to fully colonize the planet, now requiring only water, light, carbon dioxide, and organic substrates for growth and division.
The innovation of oxygenic photosynthesis began a remarkable evolutionary progression, culminating in the development of complex animal and plant species. The development of an oxygen-rich atmosphere prepared the biosphere for the emergence of higher life forms. Only by burning reduced carbon with oxygen (via an electron transport chain) could sufficient energy be released to fuel large multicellular organisms. The resultant atmospheric oxygen produced an ozone layer in the stratosphere that strongly absorbs UV radiation. When the ozone layer became significant, it opened up the land to colonization by plants and animals, which no longer needed water for protection from UV radiation.
### 6.1.7. Aerobic Metabolism#
Aerobic metabolism is the culmination of metabolic evolution on Earth. This was made possible by the superior energy yield from using oxygen as an electron acceptor. The microbial world expanded along with multicellular organisms. New kinds of chemolithotrophs (oxidizers of nitrogen, sulfur, and iron compounds) evolved and have become an important part of today’s nutrient cycles. The innovations in the electron transport chain that came with photosynthesis allowed aerobes to take advantage of this new source of energy. Indeed, many of the same components of the electron transport chain that evolved for photosynthesis are used in aerobic respiration.
### 6.1.8. Earth’s Earliest Communities#
If you were to visit the Earth during any average point in its history, the scene would be other-worldly. For most of Earth’s history it was the planet of the microbes. The land would be mostly barren of visible life. Only in shallow freshwater and intertidal marine basins would there be visible accumulation of life, in the form of microbial mat communities. Microbial mats are macroscopically visible microbial ecosystems in which microbes build communities that in many ways are analogous to rain forests. They are layered communities that today develop as carpets, often many centimeters thick, in certain types of shallow aquatic settings.
Fig. 6.2 General structures of microbial mats. Their thickness can range from millimeters to several centimeters; they are formed by multiple biofilms of microorganisms embedded in a matrix of exopolysaccharides, arranged vertically along the physical gradients. Image Credit: Prieto-Barajas (2018).#
The structural coherence of the community is provided by extracellular polymers produced by the mat microorganisms and by the filamentous forms of some dominant populations. The top layer (i.e., canopy) primarily consists of oxygen-producing bacteria, the cyanobacteria. Together with the anoxygenic phototrophs, the cyanobacteria sustain a remarkably diverse undergrowth of other microorganisms. Virtually all major physiological types of microbes are resident in contemporary microbial mats, and they sustain all central biogeochemical cycles, where each element is alternately oxidized and reduced by the metabolism of different microorganisms.
Under the microscope, the activities of these highly organized populations shape the chemical structure of the mat, forming sharp gradients of oxygen, $$\rm pH$$, sulfide, and other chemical species (via a metabolic process). High productivity by photo- or chemosynthesis allows each mat to sustain significant biomass on little more than water, sunlight, and available inorganic nutrients.
The most impressive mat communities occur in extreme environments (relative to the conditions tolerated by multicellular life), since significant biomass accumulation can only occur in the absence of grazing. For example, today’s mats of thermophilic cyanobacteria are common in hot springs worldwide. Mats also thrive in hypersaline lagoons, as well as in intermittently dry intertidal regions. Mats also build calcified stromatolites in warm seas, form desert soil crusts, and grow within the frozen rocks of Antarctica as endolithic mats. They even thrive embedded in the ice cover of Antarctic dry valley lakes.
The fossil record suggests that mats were once widely distributed on early Earth. The decline of extensive microbial mat systems in the rock record is generally attributed to the emergence of multicellular grazers and water plants that ultimately forced them into a much more limited habitat range. The most conspicuous and often enigmatic vestiges of the early microbial biosphere and microbial mats are stromatolites. It is generally accepted (but not always proven) that the sediment trapping or mineral precipitation processes that formed stromatolites were mediated by ancient microbial communities. The existence of fossil stromatolites beginning in the early and middle Archean is consistent with isotopic evidence for autotrophic carbon fixation in the early Archean, and very strongly suggests that photoautotrophy existed.
Today there is increasing recognition that microorganisms are extremely promiscuous. Horizontal gene transfer is not restricted to closely related organisms. Exchange within and between domains is now well documented and increasingly recognized to be a major force in shaping metabolic innovation. We speculate that the high density and close associations among microorganisms in microbial mats provided a superb environment for genetic exchange between populations. If this is true, some of the most significant events in the history of our planet may have occurred in microbial mats. For example, the cyanobacteria may have emerged within this context as a result of melding the photosystems of anoxic phototrophs.
## 6.2. Limits of Carbon Life on Earth#
The search for extraterrestrial life is intimately linked with our understanding of the distributions, activities, and physiologies of life as we know it, Earth-life. It is important to know the extent of environmental conditions that can support terrestrial organisms as a first-order set of criteria for the identification of extraterrestrial habitats. Even though other life forms may have different biochemistries and origins, the limits of life on Earth may help define the potential for habitability elsewhere.
It is also likely that many of the limits of Earth-life could extend beyond the bounds of extreme conditions found on modern-day Earth. This is the case for the bacterium Deinococcus radiodurans, which can tolerate levels of radiation beyond what naturally occurs on Earth. Escherichia coli has an apparent tolerance to hydrostatic pressures that exceed by an order of magnitude the pressures in the deepest ocean trenches.
Since Earth is the only planet that unequivocally supports modern, living ecosystems, it is logical to first look for life elsewhere that resembles Earth-life. Life as we know it requires
• either a light or chemical energy source, and other nutrients from its environment (e.g., nitrogen, phosphorus, sulfur, iron, and around 70 other trace elements).
• liquid water as a medium (solvent) for both energy transduction and biosynthesis.
Chemical disequilibria are required to fuel the maintenance and growth of organisms. Thus, the search for extraterrestrial life is focused on planets and moons that
• currently have (or may have had) liquid water,
• have a history of geological and geophysical properties that favor the synthesis of organic compounds and their polymerization, and
• provide the energy sources and nutrients needed to sustain life.
However, we are limited by what we currently know about life on this planet. There are two main issues being addressed by astrobiologists:
1. our incomplete understanding of the physiological diversity of Earth-life, and
2. our almost complete lack of data about possible alternative biochemistries.
For example, a novel marine photosynthetic microorganism, recently isolated from deep-sea hydrothermal vents, may be utilizing the blackbody radiation from hot sulfides for photosynthesis. In other cases, newly discovered microorganisms have extended the upper temperature for growth to $$121\ {\rm ^\circ C}$$ and lowered the $$\rm pH$$ record to below 0. One hyperthermophilic microbe lacks strings of nucleotide bases that are common to all other known organisms in its $$\rm 16S$$ rRNA and, incidentally, is symbiotic with another archaeal species. Discoveries such as these emphasize how little we know.
### 6.2.1. Extremophiles and the Limits of Life#
The extreme conditions that limit growth or prove lethal to most organisms may in fact be “Garden of Eden” conditions for other organisms. Extremes of high temperature, high and low $$\rm pH$$, high salt concentration, toxic metals, toxic organic chemical compounds, and high levels of radiation kill the overwhelming majority of Earth’s organisms. However, there are organisms from all three domains of life that have adapted to many terrestrial extremes. High temperature, low $$\rm pH$$, and high salinity environments are likely to have persisted through Earth history, and these extreme environments are not rare on modern-day Earth.
Fig. 6.3 Representative idealized cross section of Earth’s crust showing the diversity of extreme environments and their approximate locations. Image Credit: Wikipedia:extremophile.
There are very few natural environments on Earth where life is absent (life is the rule rather than the exception). Microbial life on Earth has proliferated into habitats that span nearly every imaginable environmental variable. Only the very highest temperatures or low availability of water to participate in chemical reactions (e.g., water-ice) render terrestrial environments unsuitable for growth. There are few environments on Earth entirely free from surviving life.
Most discussions of the limits of life focus on the extreme range of single physical or chemical conditions. There does appear to be some absolute maximum temperature and minimum concentration of water that will prevent cellular growth. There are two distinct classes of extreme environmental conditions based on how they affect cells. Extremes in pressure and temperature extend their effect into the cytoplasm. Intracellular biosynthesis, metabolism, and macromolecular structures in extremophiles are adapted to function under such conditions. In contrast, organisms capable of growing in extremes of $$\rm pH$$, salinity, irradiation, and in the presence of high levels of toxic metals are adapted to either
1. maintain intracellular conditions that are typical for non-extremophiles, or
2. compensate for the extreme condition.
There are some exceptions.
• While most acidophiles (“low $$\rm pH$$ loving” microbes) maintain an internal $$\rm pH$$ near neutrality, Picrophilus torridus grows optimally at $$\rm pH$$ of 0.7 and maintains an intracellular $$\rm pH$$ value of 4.6.
• Among extremely halophilic (“salt loving”) Archaea, some have an absolute requirement for salt and grow best at molar concentrations of $$3.5-4.5\ \rm M$$, but can also grow in saturated $$\rm NaCl$$ ($$5.2\ \rm M$$). Their intracellular functional and structural components are adapted to high salt concentration and their enzymes require high salt to maintain their active structure.
There are combinations of extreme conditions that apparently prevent cells from growing. For example (so far), no organisms have been characterized that are capable of growing in high salt concentrations at the upper and lower limits of temperature and $$\rm pH$$. It is not known whether this combination effect is due to an insurmountable barrier posed by these combinations of extreme conditions, insufficient sampling, or a lack of accessible habitats with these combinations of extremes.
There are also combinations of extreme conditions that have a synergistic effect on the growth or survival of cells not adapted to either of the specific extreme conditions. This is the case for hydrostatic pressure and temperature, or salt and temperature. Low temperature and high hydrostatic pressures affect cell processes in the same way, with the result that the minimum growth temperature of non-piezophilic (“pressure loving”) microbes is increased with increasing pressure. Similarly, high salt concentrations increase the minimum or decrease the maximum growth temperatures of non-halophilic microbes.
### 6.2.2. Water, Desiccation, and Life in Non-aqueous Solvents#
The absence of available water and extremes of temperature are the only single variables known to prevent growth and survival of organisms. The other physical and chemical factors that are thought of as extreme conditions (e.g., $$\rm pH$$, pressure, radiation, and toxic metals) are life-prohibiting factors for most organisms but not for all. Life has adapted to the entire terrestrial ranges of these variables. However, there are some combinations of physical and chemical conditions for which no known organisms have been found to grow. These include environments that have both high salt ($$>30\%\ \rm NaCl$$ by weight) and
• low temperatures $$(<0\ \rm ^\circ C)$$, such as in sea-ice inclusions, or
• high temperatures $$(>90\ \rm ^\circ C)$$, known to exist in brine pools beneath the Red and Mediterranean Seas.
Earth-life can be described as a web of aqueous (water-based) chemical reactions. There is a point where the intracellular water activity decreases so far that most cells will die. Desiccation causes DNA to break, lipids to undergo permanent changes, and proteins to crystallize, denature, and undergo condensation reactions. Saturated brine pools ($$35\%$$ salt, 0.75 water activity) are environments of low water activity that are inhabited by bacteria, Archaea, eukaryotic algae, and brine shrimp.
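The quoted water activity can be reproduced with a rough Raoult’s-law estimate (the sketch below assumes the salt is NaCl and dissociates fully, i.e. a van ’t Hoff factor of 2; it is an idealized calculation, not from the chapter):

```python
# Water activity a_w estimated as the mole fraction of water in the brine.
M_H2O, M_NaCl = 18.015, 58.44        # molar masses, g/mol
mass_salt, mass_water = 35.0, 65.0   # per 100 g of 35 wt% brine
n_w = mass_water / M_H2O
n_ions = 2 * mass_salt / M_NaCl      # assume full dissociation into Na+ and Cl-
a_w = n_w / (n_w + n_ions)
print(round(a_w, 2))  # ~0.75, matching the value quoted in the text
```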
Organisms that grow or survive in dry environments or in solutions with low water activity match their internal water activity with that of their surroundings. This strategy required evolutionary adaptations of intracellular macromolecules, and of metabolic and biosynthetic processes, to operate despite high salt concentrations. Most other microorganisms and eukaryotes deal with desiccation by accumulating compatible organic compounds. Some organisms survive desiccation by forming spores or cysts, while others have mechanisms to repair any damage to their DNA. Both bacteria and eukaryotes have been found to grow in Antarctic rocks that have liquid water for only short thaw periods.
An important issue regarding water and astrobiology is the degree to which water is required by carbon-based life, and whether or not an organic solvent could replace water as the primary solvent. Another issue is the ability of organisms to survive environmental conditions that are outside their limits for growth. These usually involve decreasing the internal water content of the cell, as in the case of bacterial and fungal spores (and some animals such as tardigrades). These issues are important in assessing whether or not carbon-based life could
• exist in liquid methane or ethane pools on Titan,
• survive the harsh physical conditions that would be encountered during transport from one planet to another, or
• survive long periods in completely desiccated state and still retain the ability to grow if water is eventually introduced.
Is it possible for carbon-based life to exist in solvents other than liquid water? Many organic solvents, including alcohols, phenols, and toluene, are extremely toxic to microorganisms. The degree of antimicrobial action of a solvent depends on its hydrophobicity (“water avoidance”). The more hydrophobic a solvent, the more readily it can accumulate in cellular membranes. The toxicity of the solvents to cells is due to their ability to make the membrane permeable, resulting in the leakage of macromolecules including RNA and proteins.
While organic solvents kill most microorganism, there are some bacteria that can tolerate relatively high concentrations. Two mechanisms have been identified for solvent tolerance:
• membranes that limit the diffusion of solvents into the cell, and
• specialized mechanisms that remove any solvents that have diffused into the cell.
Another key issue is whether carbon-based biochemistry can occur in non-aqueous solvents. Certainly, many enzymes function in organic solvents, and many organic reactions fundamental to biochemistry can occur in non-aqueous solvents. However, even those enzymes active in organic solvents still have some bound water necessary to maintain their active structure. Water is important in other vital biochemical reactions during metabolism and biosynthesis. It appears that carbon-based life is unlikely to be able to adapt to a pure solvent environment unless
• it has mechanisms to form water from solvents (e.g., alcohols), or
• it can produce all the necessary water de novo (anew) from biochemical reactions.
There exist specific channels in bacterial membranes that function as pores for the transport of water. Named aquaporins, these channels can facilitate rapid water transport during osmotic stress. The presence of specific water-transport pores makes it imaginable that there could be organisms with membranes that are resistant to organic solvents and yet, with specialized aquaporins, extract and selectively transport low concentrations of water from the solvent/water mixture. Moreover, up to $$70\%$$ of the intracellular water in actively growing Escherichia coli cells is generated by metabolic processes and not derived from the external environment.
### 6.2.3. Temperature Extremes#
Temperature is a fundamental thermodynamic parameter that affects all biochemical reactions; in particular, it sets limits on life because temperature and pressure determine whether water is in the liquid phase. Given that water exists in the liquid state, what then is the allowable range of temperatures for life?
Microorganisms have been cultured with growth observed at temperatures as high as $$121\ \rm ^\circ C$$ or as low as $$-15\ ^\circ C$$. There is even evidence for intact microorganisms with DNA and RNA in hydrothermal vent sulfides at temperatures exceeding $$200\ \rm ^\circ C$$. Eukaryotes, in contrast, have not been observed to grow above roughly $$60\ \rm ^\circ C$$. Despite this wide gap in maximum growth temperatures between prokaryotic and eukaryotic cells, eukaryotes share all other extremes of life with bacteria and Archaea, including growth in environments of great acidity, high salt concentration, pressure, and toxic metal concentrations.
Freezing temperatures can kill cells if internal ice crystals are formed. Cells can survive freezing if frozen quickly; viable cells are even preserved for long periods in liquid nitrogen ($$-196\ \rm ^\circ C$$). On the other hand, slow freezing or slow thawing of cells favors ice crystal formation that can damage macromolecules and structural polymers, leading to death. Some cellular adaptations for preventing internal ice crystal formation during slow freezing include increasing intracellular solute concentrations, production of exopolysaccharides, and modification of lipids (and proteins) to increase the fluidity of membranes and mobility of enzymes.
Microorganisms that grow best at temperatures above $$80\ \rm ^\circ C$$ are called hyperthermophiles. The maximum growth temperature of cultured hyperthermophiles varies from $$80-121\ \rm ^\circ C$$, and the minimum growth temperature varies from $${\sim}40-80\ \rm ^\circ C$$, depending on the organism. Hyperthermophiles include bacteria and Archaea, aerobes and anaerobes, and heterotrophs and autotrophs. Some acidophiles, alkalophiles (“high $$\rm pH$$-loving” microbes), and radiation-resistant organisms are also hyperthermophiles.
Hyperthermophiles have protein and lipid structures that are adapted to high temperature. While no broad generalizations can be made about how enzymes and other proteins achieve thermal stability, there are some recurrent characteristics.
• Protein structures are stabilized at high temperature through amino acid substitutions and most importantly through use of disulfide bonds for structural stabilization.
• Heat stable, ether-linked lipids are universal in hyperthermophilic Archaea and in some bacteria.
• Fundamental changes in protein and lipid structure compensate for the increased mobility and fluidity at high temperatures.
• All hyperthermophiles studied have a reverse gyrase that positively supercoils DNA, which along with cationic proteins increases the thermal stability of the DNA.
Is $$121\ \rm ^\circ C$$ the highest possible temperature for growth of life? The chemical properties of water and the ability of biological macromolecules to maintain their 3D structure set the limit, with temperature effects partially compensated by either higher pressure or increasing salt concentrations. Important cofactors (e.g., ATP and nicotinamide adenine dinucleotide (NAD)) are thermally labile and must be protected from chemical modification to cope with temperature stress. Hyperthermophiles stabilize ATP and NAD by increasing their rate of synthesis, by extrinsic factors (e.g., high ion concentrations), or by intrinsically more stable replacements. There is also evidence that attachment and biofilm formation on minerals increases the thermal stability of hyperthermophilic Archaea. The upper temperature of life is still to be determined.
Video Credit: NPS/Hugo Sindelar
Is there a low-temperature limit for life? Microbial activity has been measured at $$-20\ ^\circ C$$ in ice, and photosynthesis has been observed in Antarctic cryptoendolithic (“hidden within rock”) lichens at $$-20\ ^\circ C$$. Water can remain liquid at temperatures lower than $$-30\ ^\circ C$$ in the presence of salts (or other solutes) and at even lower temperatures in combination with soluble organic solvents. Enzyme activity has been measured at $$-100\ ^\circ C$$ in a mixture of methanol, ethylene glycol, and water. There is also evidence for the transfer of electrons and enzyme activity at $$-80\ ^\circ C$$ in a marine psychrophilic (or cryophilic) bacterium. It is even possible that there may be no lower temperature limit for enzyme activity, nor even for cell growth, if a suitable solvent or solvent mixture is available.
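The salt effect mentioned above is colligative freezing-point depression. The sketch below applies the dilute-solution formula for NaCl (assuming a van 't Hoff factor $$i\approx 2$$); it is only indicative, since the formula underestimates the depression at near-saturated concentrations, and salts other than NaCl (or added organic solvents) are needed to keep water liquid below about $$-21\ ^\circ C$$.

```python
# A rough sketch of freezing-point depression, dT = i * Kf * m, showing how
# dissolved salt extends liquid water below 0 C. Dilute-limit formula only.
KF_WATER = 1.86   # cryoscopic constant of water, K kg/mol

def freezing_point_c(molality: float, vant_hoff_i: float = 2.0) -> float:
    """Approximate freezing point (deg C) of an aqueous salt solution."""
    return -vant_hoff_i * KF_WATER * molality

for m in (1.0, 3.0, 5.0):   # mol NaCl per kg of water
    print(f"{m:.0f} mol/kg NaCl freezes near {freezing_point_c(m):.1f} C")
```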
### 6.2.4. Survival While Traveling through Space#
The ability of microorganisms to survive the most extreme environmental conditions increases the probability that they could survive transport to other planets (and moons) and thereby effect panspermia. Panspermia is an important issue in the context of the “limits” of life since the presence of life in any extraterrestrial setting requires an origin. Two groups of microorganisms have received the most attention as possible successful space travelers: spore-forming bacteria and radiation-resistant microorganisms.
Spore formation by bacteria and fungi is usually in response to stress conditions including limited nutrients, drying, and heat-shock. Spores are capable of long-term survival and have been recovered from environmental samples that are more than a million years old. Spores are also known to remain stable against heat and radiation, which allows them to travel on the winds to distant locations, or slowly sediment down into deep ocean trenches. Some spores can survive more than an hour of exposure to dry heat at $$150\ \rm ^\circ C$$, although survival is greatly reduced in moist heat. Spores from thermophilic bacteria are more resistant to heat than mesophilic (“moderate temperature loving”) spores.
Any possibility that microorganisms can be transported from one planet to another necessitates that they have the ability to resist the lethal effects of radiation. High-energy radiation and particles ($$\alpha$$ and $$\beta$$) damage DNA, which results in cytotoxic and mutagenic effects to the cell. UV radiation is the most abundant form of damaging radiation and probably the most common natural mutagen. Ionizing radiation kills cells in general by causing multiple breaks in DNA, although UV light can also kill cells in other ways that prevent replication. Most organisms protect themselves from damaging radiation with measures such as radiation-absorbing pigments and DNA repair mechanisms.
Table 6.2 Bacteria and Archaea resistant to ionizing radiation#
| Species | Phylum | Dose that kills 90% of pop. ($$\rm Gy$$) |
| --- | --- | --- |
|  | Actinobacteria | 11,000 |
| Deinococcus radiodurans | Deinococcus-Thermus | 10,000 |
| Thermococcus gammatolerans | Euryarchaeota (Archaea) | 8000 |
| Rubrobacter xylanophilus | Actinobacteria | 5500 |
| Chroococcidiopsis species | Cyanobacteria | 4000 |
| Hymenobacter actinosclerus | Flexibacter-Cytophaga-Bacteroides | 3500 |
|  | Actinobacteria | 2000 |
|  | $$\gamma$$-Proteobacteria | 2000 |
| Kocuria rosea | Actinobacteria | 2000 |
|  | $$\alpha$$-Proteobacteria | 1000 |
Note
A gray ($$\rm Gy$$) is a unit of absorbed dose of ionizing radiation corresponding to the absorption of $$1\ \rm J/kg$$ of absorbing material ($$1\ {\rm Gy} = 100\ {\rm rads}$$). The natural background level on Earth is $${\sim}0.3\ {\rm mGy/yr}$$ (at a typical mid-latitude near sea-level). On the surface of Europa the level is almost $$10^{10}$$ times higher, enough to kill humans in one minute of exposure.
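As a quick check on these numbers, the sketch below converts the quoted Europa level into a dose rate and applies the standard exponential survival model $$N/N_0 = 10^{-D/D_{10}}$$, where $$D_{10}$$ is the 90%-kill dose tabulated above. The single-hit model is an idealization; real survival curves often have shoulders.

```python
# A small worked check of the numbers in the note above, plus the exponential
# survival model N/N0 = 10**(-D/D10), where D10 is the 90%-kill dose.
MIN_PER_YR = 365.25 * 24 * 60

earth_background = 0.3e-3              # Gy/yr, from the note above
europa_rate = earth_background * 1e10  # ~1e10 times higher, so ~3e6 Gy/yr

def surviving_fraction(dose_gy: float, d10_gy: float) -> float:
    """Fraction of a microbial population surviving a given absorbed dose."""
    return 10.0 ** (-dose_gy / d10_gy)

# ~5.7 Gy/min: consistent with "kill humans in one minute" (human LD50 ~ 5 Gy)
print(f"Europa surface: ~{europa_rate / MIN_PER_YR:.1f} Gy/min")

# Even D. radiodurans (D10 ~ 10,000 Gy) is annihilated by one unshielded year:
print(f"Surviving fraction after 1 yr: {surviving_fraction(europa_rate, 1e4):.0e}")
```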
The most radiation resistant microorganisms include both bacteria and Archaea (see Tab. 6.2). Their radiation resistance is many orders of magnitude above what is naturally found anywhere on Earth. Radiation resistance in D. radiodurans is the result of extremely efficient DNA repair mechanisms that can reassemble, with high fidelity, DNA that has been sheared into multiple pieces. It is believed that these DNA repair mechanisms evolved in response to DNA damage due to desiccation rather than radiation. T. gammatolerans was found in a submarine hydrothermal vent in the Guaymas Basin off the coast of Baja California at a depth of about $$2600\ \rm m$$.
Note
The German cockroach is the most radiation resistant metazoan. The tardigrade in its desiccation-resistant “tun” state can also survive extremely high X-ray exposure.
Microorganisms traveling in space will also be exposed to the interplanetary vacuum, on the order of $$10^{-14}\ \rm Pa$$. Exposure to this level of vacuum causes extreme dehydration of cells such that naked spores can survive for only days. Survival of spores is increased if they are associated with various chemicals (e.g., sugars) or embedded in salt crystals. Nicholson et al. (2000) discuss the various stresses that a microbial cell or spore would have to endure to survive interplanetary travel. These include the ability to survive
• the process that transports them out of the Earth’s atmosphere (e.g., volcanic eruptions, and bolide impacts),
• long periods of transit in the cold of space, and
• the entry into a new planetary home.
Spores have been demonstrated to survive the shock conditions of a meteorite impact, UV radiation, and the cold of space. It is clear that panspermia is possible and even probable if bacterial spores become embedded in rocks that get ejected from one planet and eventually enter the atmosphere of another.
There is evidence that microorganisms that are attached to surfaces (e.g., minerals and organic polymers) have enhanced survival to a variety of stress conditions. These biofilms are physically and physiologically complex. The niche within a mature biofilm allows both physiological and genetic transformations of the encapsulated organisms. The immediate environment of cells within a biofilm can affect the delivery of nutrients and the availability of energy sources, which leads to a range of activities. Heterogeneity within biofilms has also been shown to play a role in resistance to environmental stress.
Stress resistance has commonly been attributed to two features of the biofilm microbial communities:
1. their ability to limit rates of diffusion, and
2. the presence of multiple physiological states.
These physical phenomena are commonly coupled to physiological changes such as the diversity of growth stages, including slow-growing, stationary-phase cells.
### 6.2.5. Extremophiles, Hydrothermal Vents, and Earth’s Earliest Microbes#
Models for early life are based on geological, geochemical, and isotopic evidence of biosignatures in the earliest rocks, inferences from global phylogenetic trees, and the biochemical characteristics of extant organisms. The deep-sea and the sub-seafloor are two of the modeled sites for the earliest microbial ecosystems. These are safe havens from killer impact events, as well as sites of ubiquitous and active geophysical processes, including hydrothermal activity that can generate carbon and chemical energy sources, along with other elements required for life. (see National Geographic: Exploring the Edge of Existence about the discovery of the hydrothermal vents.)
There are models that support the hypothesis that hydrothermal systems were involved in the origin of life and that microorganisms related to extant hydrothermal vent extremophiles were the earliest organisms. The implication is that submarine hydrothermal systems can both generate and support life without the oxidants produced by photosynthesis. While these remain hypotheses, there is convincing evidence that microorganisms capable of exploiting the chemical energy sources at vents (particularly $$\rm H_2$$) preceded photosynthesizing microorganisms. Biochemical evidence indicates that the first photosynthetic organisms were anaerobic and coupled the reduction of $$\rm CO_2$$ with the oxidation of $$\rm H_2S$$ rather than $$\rm H_2O$$.
To date (on Earth), we know of two primary types of hydrothermal processes that drive warm- to high-temperature flow in the sub-seafloor.
1. Ridge-axis, hydrothermal circulation driven by upwelling magma or recently solidified hot rock often producing spectacular hydrothermal chimneys (or “black smokers”). Water-rock reactions commensurate with high-temperature-driven fluid flow result in fluid temperatures up to $$407\ \rm ^\circ C$$ and produce distinct chemistries containing high concentrations of metals and low $$\rm pH$$.
2. The exothermic serpentinization of ultramafic rocks, which yields moderate temperature fluids ($$<150\ \rm ^\circ C$$) in comparison to ridge-axis systems. These fluids are highly alkaline (up to $$\rm pH$$ 12), and contain elevated levels of volatile compounds (e.g., $$\rm CH_4$$ and $$\rm H_2$$) relative to ridge-axis systems.
A third source of hydrothermal activity is tidal heating that occurs when differential gravitational forces from a planet “flex” one of its moons, usually in concert with gravitational effects from other moons. Tidal heating is the best explanation for geysers on Enceladus and is also believed to be important in maintaining a liquid ocean on Europa.
Strong hydrothermally driven fluid circulation is a fundamental physical-chemical process that was likely present on Earth as soon as it had appreciable amounts of liquid water. Hydrothermal activity on the early Earth may have been even more pervasive than today due to higher global heat fluxes (driven by a much more closely orbiting Moon and greater geophysical activity). A hallmark feature of hydrothermal systems is dynamic mixing between fluids of varying compositions. These mixing processes occur at multiple spatial and temporal scales, including
• the instantaneous process of sulfide mineral precipitation seen at black smokers, and
• the modest thermal and chemical gradients in the sub-seafloor.
A vast majority of the mixing occurs within the fractures and pore spaces of rocks, where the fluids interact with and alter the host materials. Hydrothermal systems also experience periodic perturbations linked to the tides, and are intertwined with episodic processes such as earthquake activity, opening and sealing of fractures, and magma intrusion events.
The surface-associated microbiology of present-day hydrothermal vent environments may also be relevant to the origin and early evolution of Earth-life. Abiotic reactions facilitated by mineral surfaces catalyze a number of reactions that may have substituted for early metabolic processes. This idea has been expanded to the point where mineral catalysis could have preceded all enzyme catalysis during the inferred early, non-protein stage in the origin of life.
The aggregated communities found on mineral surfaces may have been important to the early evolution of life in that their diversity could have facilitated a high rate of genetic exchange and overcome the evolutionary limitations of any single lineage. Hydrothermal systems of the present-day Earth have not changed significantly from those present billions of years ago. Such systems may harbor relict microorganisms and biochemistries that can aid in our understanding of the origin of Earth-life.
## 6.3. The Evolution and Diversification of Life#
The fossil record conjures up images of dinosaurs, trilobites, and extinct humans. This reflects not only our metazoan-centric bias, but also a visual bias. Starting about $$542\ {\rm Ma}$$, macroscopic remains (of animals, traces, shells, and later bones and plants) become quite evident in the fossil record, hence the geological term Phanerozoic, which literally means the eon of “visible life.” These $$542\ {\rm Myr}$$ of the fossil record demonstrate that changes have occurred in organisms and have provided compelling evidence for the theory of evolution. See the lectures by Russell Garwood at Univ of Manchester.
The Proterozoic was mainly a microbial world, while the Phanerozoic is a world of macroscopic animals and plants. The transition from the Proterozoic to the Phanerozoic marks what may be one of the most significant evolutionary events in the history of life, when many of the modern animal phyla evolved. Highlights of this $$2500\ {\rm Myr}$$ record include
• the rise to dominance of cyanobacteria in shallow marine environments,
• the early evolution and diversification of eukaryotes,
• the first appearance of multicellular algae,
• the appearance of animals and their subsequent diversification (the Cambrian explosion),
• the first land plants and their subsequent diversification,
• the appearance and dominance of intelligent life.
In addition to these biologic changes, the Earth’s crust continued to be dynamic with active seafloor spreading and plate tectonics; the atmosphere changed from an anoxic to an oxygenic atmosphere; and the oceans responded to these and other biospheric, lithospheric, and atmospheric changes.
For astrobiology, an understanding of the evolution and diversification of life on Earth is critical. The Earth is the most active of all the planets in the Solar System and continues to evolve. Reading the fossil and rock records provides valuable insight into life’s evolution and diversification. Although the search for extant life elsewhere in the Universe is of tremendous importance, such a search would focus only on the very narrow window of the present time. Searches should also be made at extraterrestrial sites for fossil life, since such a search takes full advantage of the immense spans of time involved with a planet’s history.
### 6.3.1. The Proterozoic#
The Proterozoic spans almost $$2\ {\rm Gyr}$$, recording the further evolution and diversification of already established bacteria and eukaryotes. This long record documents important evolutionary changes that shaped the biosphere on its way to a planet dominated by plants and animals.
The term Proterozoic was coined from combining the Greek words protero (meaning anterior or earliest) and zoic (meaning life or animals). Historically, rocks older than the Cambrian were thought to lack fossils. However, indications of fossils were discovered as the sedimentary rocks found below the Cambrian (layer) received more attention. This suggested the possibility of a longer history of life than previously thought. Based on a variety of studies, it became established that the Proterozoic contained significant remains of life, but it was dominated by microorganisms.
All three domains of the tree of life (Bacteria, Archaea, and Eukarya) were present by the beginning of the Proterozoic. This included the establishment of anaerobic heterotrophs, lithotrophs, and photoautotrophs, as well as variations on these metabolic pathways. During the early Proterozoic, cyanobacteria had already evolved oxygenic photosynthesis. With the rise of an oxygenic atmosphere, microbes using oxygen greatly diversified; aerobic metabolism was essential for the subsequent evolution of eukaryotes within the already established Eukarya domain. It was during the Proterozoic that three of the four major eukaryotic groups evolved (algae, fungi, and animals); plants did not appear until the early Paleozoic.
#### 6.3.1.1. Types of Proterozoic fossils#
Proterozoic fossils consist of the following major types: microfossils, stromatolites, macroscopic carbonized compressions, soft-bodied impressions, trace fossils, and mineralized hard parts.
• Microfossils are the preserved remains of microbes. There are two primary methods of study: acid residues and petrographic thin sections. Both rely on light microscopy for examination (i.e., techniques using a microscope).
• To prepare a petrographic thin section, a thin sliver of rock is cut from the sample and epoxied to a glass microscope slide. The sliver is further thinned by grinding to a thickness ($${\sim}30-50\ {\rm \mu m}$$) that allows the transmission of light. Microbes are preserved in situ within the layers of sedimentary rock.
• For acid residues, rock samples dissolved in acid are examined for insoluble residues under a microscope. Organic walled remains in the residues are commonly called acritarchs.
• Stromatolites are laminated structures produced by the sediment trapping, binding, and/or mineral precipitation of microbes (principally cyanobacteria). They are abundant and diverse in Proterozoic carbonate rocks.
• Carbonaceous compressions or carbonaceous films are compressed, chemically resistant, organic material visible to the naked eye. Because of their macroscopic size and regular shape, many are considered to be eukaryotic fossils.
• Soft-bodied organisms can be preserved as impressions on bedding planes. For the Proterozoic, the most famous of these fossils is the Ediacaran biota.
• Trace fossils are the tangible evidence of the activity of an organism (e.g., burrowing). There are numerous reports of trace fossils older than $$600\ {\rm Ma}$$, but these reports have not held up to scrutiny.
• Mineralized skeletons (hard parts), produced by eukaryotes very late in the Proterozoic, gave rise to the rich, macroscopic fossil record that characterizes the Phanerozoic.
#### 6.3.1.2. Prokaryotes#
Archaea and Bacteria are two distinct groups that comprise prokaryote microorganisms. Archaea differ from Bacteria in cell wall composition, the structure and composition of membranes, and other differences. There is no direct, body fossil evidence for Archaea in the Proterozoic or earlier. However, there is indirect evidence based on sediments with highly depleted $$\delta^{13}{\rm C_{org}}$$ that are indicative of methanogens in the Late Archean.
The record for Bacteria in the Proterozoic is good and is actually better than in the Phanerozoic. Although the Proterozoic microfossil record is dominated by cyanobacteria, this does not necessarily mean that they dominated the biosphere. Rather, it could simply be that cyanobacteria were more readily preserved than other bacteria. In the Proterozoic, microbial mats were likely geochemical sites for early silicification (chert, or microcrystalline quartz, formation), which favored the preservation of cyanobacteria. In addition, the chemistry and structure of cyanobacterial sheaths and cell walls probably favored their silica permineralization over other prokaryotes.
For smaller ($$<2\ {\rm \mu m}$$) spheroidal and filamentous microbial fossils, bacterial taxonomic assignments other than cyanobacteria are possible even though the small size and simple morphology are shared by many groups of bacteria. Sometimes the microbial fossil has a distinctive morphology or biomarker studies can be used to aid in the determination of bacterial taxonomic assignments.
#### 6.3.1.3. Stromatolites#
The most obvious fossils from the Proterozoic are the millimeter- to centimeter-sized stromatolites. Many Proterozoic shallow marine carbonate units contain stromatolites of a variety of shapes. Shapes include wavy-laminated stratiform structures, domes, columns, branched columns, cylindrical cones, and oncoids (i.e., laminae that almost or entirely enclose a central core).
The sizes of Proterozoic stromatolites span six orders of magnitude, ranging from columns several millimeters in diameter to domes over $$100\ {\rm m}$$ across. As highly variable as this might seem, one of the hallmarks is that within a given bed (along the same horizon) the shape and size of the stromatolites are often narrowly restricted. This gives the sense that there is a morphological theme to the stromatolites in the bed. The morphological theme of
• relatively narrow size range,
• the same or nearly the same shape, and
• a distinctive microstructure
has resulted in the taxonomic treatment of a number of stromatolites. These distinctive stromatolites are even treated like fossil species. Although this practice is controversial, a number of stromatolite taxa can be easily recognized and used as index fossils for determining the age of the strata. About 1200 species-level taxa have been treated taxonomically, of which 90% are from the Proterozoic.
Below you can find some 3D models of stromatolites – please do take the time to look at these (you may want to close one before opening the next so only one is open at any time). They are actually quite common if you do geology fieldwork, and are the first macroscopic evidence of life on Earth!
Fossil specimen of a stromatolite from the Silurian of Herkimer County, New York. Specimen is on display at the Museum of the Earth, Ithaca, New York. Model by Emily Hauf.
Fossil specimen of the stromatolite Collenia versiformis from the Proterozoic of Montana. Specimen is from the Cornell University Paleobotanical Collection (CUPC), Ithaca, New York.
Fossil specimen of the stromatolite Chlorellopsis coloniata from the Eocene of Wyoming. Specimen is from the Cornell University Paleobotanical Collection (CUPC), Ithaca, New York. Model by Emily Hauf.
#### 6.3.1.4. Acritarchs#
The term acritarch was created to accommodate organic-walled microfossils of uncertain taxonomic affinity that were found in acid-resistant residues of shale and siltstone. Acritarchs come in a variety of shapes, with or without ornamentation (some have spines or elaborate processes), and sizes (a few $$\rm \mu m$$ to several $$\rm mm$$). They are usually considered to be cysts, spores, or vegetative unicells of algal phytoplankters (microscopic algae that live in the upper part of water columns), although other origins are possible.
Possibly the oldest acritarchs (sphaeromorphs around $$48\ {\rm \mu m}$$ in diameter) come from $${\sim}2.1\ {\rm Ga}$$ shale in Siberia, suggesting the establishment of eukaryotic phytoplankton by this time. Acanthomorphs (complex acritarchs with spines or other projections) are more confidently interpreted as planktonic eukaryotes, and are known from strata $${\sim}1.5\ {\rm Ga}$$ in Australia. Acritarchs are the most abundant eukaryotic fossils found in the Mesoproterozoic and Neoproterozoic.
#### 6.3.1.5. Other algae, amoebae, and fungi#
Some microfossils that are morphologically distinctive can be attributed to specific eukaryotic clades (and hence are not called acritarchs). Microscopic red algae (Bangiomorpha) are found in $$1.2\ {\rm Ga}$$ rocks of Arctic Canada. Chrysophytes (also known as golden algae) are heterokonts, a group of eukaryotes that includes diatoms, brown algae, and water molds; heterokonts underwent a secondary symbiosis involving a red alga and possess two different flagella.
Calcification in algae appears to have occurred earlier than in animals. Thin, sheet-like calcareous structures have been found in the Kingston Peak Formation ($$700-750\ {\rm Ma}$$) in California. It is not until the latest Neoproterozoic that calcareous algae and calcified cyanobacteria became more abundant and diverse. Vase-shaped microfossils common from several late Neoproterozoic localities are considered to be testate amoebae (protozoans) based on their strong similarity with living examples.
The fossil record of fungi is known almost exclusively from microscopic remains. Based on molecular clock analyses, fungi are generally considered to have originated in the Proterozoic. One microfossil from $$1.43\ {\rm Ga}$$ rocks (Tappania) might also be a fungus.
#### 6.3.1.6. Carbonaceous compressions#
Millimeter and larger-sized carbonaceous compressions from the Proterozoic constitute an interesting type of fossil. Based on their macroscopic size and distinctive shapes, they are considered to be multicellular eukaryotes. Disks, ellipses, ribbons, sausage-like shapes, leaf-like shapes, and irregular shapes are some of the morphologies found. These shapes continue into the Mesoproterozoic; however, the diversity and complexity increase in the Neoproterozoic. Most are considered to have been formed by algae, although some ring-shaped forms from $$800-850\ {\rm Ma}$$ rocks have been interpreted as worms or worm-like metazoans.
#### 6.3.1.7. Animals#
The fossil record of Proterozoic animals is controversial, but the most confident remains consist of an assemblage of soft-bodied animals (the Ediacaran biota) from late Neoproterozoic strata worldwide (except Antarctica). The ages of these fossils range from $$540-565\ {\rm Ma}$$, where the Ediacarans comprise a wide variety of soft-bodied marine animals with some forms suggestive of jellyfish, worms, and sea pens (see lectures from Thomas Holtz at UMD). Some fossils do not resemble any living counterparts. Because of the unique morphology of many, the animal affinities have been questioned, and it has been proposed that these represent an extinct Kingdom.
Animal trace fossils are also known from Ediacaran-aged rocks. Most traces are millimeter-sized horizontal trails, but some show shallow sediment penetration (i.e., burrowing). Pre-Ediacaran trace fossils are more controversial, where the controversy often centers on the unexpectedly great age. Curiously, some of these structures would likely be accepted as trace fossils if they were found in younger (Phanerozoic) rocks.
• One report consists of millimeter-diameter, sinuous traces from strata in India that were thought to be $$1.1\ {\rm Ga}$$. This report received considerable attention, in large part because the structures looked like animal traces, but were almost $$500\ {\rm Myr}$$ older than the previously accepted oldest animal traces. Recently, new radiometric age dates on the rocks yielded an even older age: $$1.6\ {\rm Ga}$$ (the age equation behind such dates is sketched after this list). It is likely that these are not animal trace fossils.
• Another report describes possible trace-like fossils from $$>1.2\ {\rm Ga}$$ rocks in Western Australia. It is not clear if these structures (a) were produced by animals, (b) represent some extinct lineage independent of the metazoans, or (c) had an origin independent of organisms (i.e., abiogenic).
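Radiometric dates such as these come from the standard age equation $$t = \lambda^{-1}\ln(1 + D/P)$$, where $$D/P$$ is the measured daughter-to-parent isotope ratio and $$\lambda$$ is the parent's decay constant. The sketch below illustrates the arithmetic; the ratio used is invented for illustration and is not taken from the Indian strata study.

```python
# A minimal sketch of the radiometric age equation: t = (1/lambda) * ln(1 + D/P),
# with lambda = ln(2) / half-life. D/P is the daughter/parent isotope ratio.
import math

def age_ga(daughter_parent_ratio: float, half_life_ga: float) -> float:
    """Age (Ga) from a daughter/parent ratio and the parent's half-life (Ga)."""
    decay_const = math.log(2) / half_life_ga
    return math.log(1.0 + daughter_parent_ratio) / decay_const

# U-238 -> Pb-206 (half-life 4.468 Ga): a ratio of ~0.283 gives ~1.6 Ga
print(f"~{age_ga(0.283, 4.468):.2f} Ga")
```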
The evolution of the Eukarya (as seen from the fossil record) was one of increased morphological complexity. This complexity included
• an increase in intricacy of cells and their coverings,
• an increase in the number of cell types,
• the evolution of multicellularity,
• the development of a tissue grade of organization, and
• the formation of organs.
Much of this was accompanied by an increase in size to macroscopic proportions. The development of eukaryotes may have been influenced by oxygen in the atmosphere. Oxygen levels rose significantly at least twice in the Mesoproterozoic and Neoproterozoic, once at about $$1.3\ {\rm Ga}$$, and again in the late Neoproterozoic. The close of the Proterozoic saw an end to the dominance of prokaryotes in the fossil record. The biosphere was beginning to be taken over by macroscopic eukaryotes. The pathways taken by evolution would profoundly change as animals and plants expanded in the biosphere.
### 6.3.2. The Phanerozoic#
The Phanerozoic is an eon characterized by macroscopic animals and plants, which has lasted $$542\ {\rm Myr}$$. However, the microbial world of the Proterozoic has not really disappeared; it has simply been overshadowed by these large forms of multicellular life. The fossil record of the Phanerozoic is understood much better than that of the Proterozoic, not only because of its richness, but also because many more paleontologists study its plants and animals compared to those of the Proterozoic (i.e., the fossils of the Proterozoic are much more difficult to study). The biostratigraphic and absolute time control on the fossil record is also excellent. Hence, patterns of increased diversification of a group of organisms (i.e., radiations) and extinctions of fossil groups are quite well defined and contribute significantly to our understanding of evolution on planet Earth.
#### 6.3.2.1. The Cambrian “Explosion”#
The appearance of organisms in the fossil record with hard parts (e.g., shells) signals the transition from the Proterozoic to the Phanerozoic. The first $$54\ {\rm Myr}$$ period of the Phanerozoic (i.e., the Cambrian Period) marks the most important interval in the evolution of life on Earth. It is during this time that all the major modern animal phyla either appear or greatly diversify. Known as the Cambrian “explosion,” the nature and cause of this seemingly abrupt appearance of animals has been (and continues to be) the subject of much debate. The question is: was this a real event (i.e., a sudden explosion in lifeforms), or an artifact of the fossil record, since organisms with newly evolved hard parts are much more readily preserved?
The Neoproterozoic-Cambrian (sedimentary) boundary is available for study in many areas around the world. Studies have shown a general pattern with regard to the order in which the animal fossils appear in the strata. Below the boundary, low diversity faunas of both body and trace fossils are abundant, while above one finds diverse metazoan assemblages dominated by arthropods (e.g., trilobites). Either fortunately (or due to the nature of the Cambrian environment) there is a disproportionately large number of exceptionally preserved fossil deposits in which organisms (with and without hard parts) have been preserved. The two most important are the Early Cambrian Chengjiang biota (in China) and the Middle Cambrian Burgess Shale biota (in Canada).
Note
A fauna (pl. faunae) is a collection of associated animals, and a flora (pl. florae) is similarly for plants.
The fossil record indicates a short interval of only $${\sim}30\ {\rm Myr}$$ from the time of the Ediacaran fauna to the early Cambrian, in which most modern phyla evolved. Studies of molecular clock data from living organisms propose a very much earlier appearance for metazoans (as much as $$1.5\ {\rm Ga}$$). The fossil record suggests that only five of the modern animal phyla had appeared before the Cambrian.
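The molecular-clock estimates referenced above rest on a simple relation: genetic distance accumulates along both diverging lineages, so the split time is $$t = d/(2r)$$ for pairwise distance $$d$$ and per-lineage substitution rate $$r$$. The numbers in the sketch below are illustrative placeholders, not values from any particular study.

```python
# A minimal sketch of molecular-clock dating: two lineages each accumulate
# substitutions at rate r, so their pairwise distance d gives t = d / (2 r).
def divergence_time_myr(distance: float, rate_per_myr: float) -> float:
    """Time since lineage split (Myr), from substitutions/site and rate."""
    return distance / (2.0 * rate_per_myr)

# e.g., d = 0.8 substitutions/site at r = 5e-4 subs/site/Myr gives ~800 Myr,
# comfortably pre-Cambrian even though hard-part fossils start at ~542 Ma.
print(f"~{divergence_time_myr(0.8, 5e-4):.0f} Myr")
```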
During a period in the Early Cambrian (maybe as short as $$10\ {\rm Myr}$$), the other major phyla appeared, including ctenophores, priapulids, nematodes, lobopodians, arthropods, molluscs, halkieriids, annelids, brachiopods, hemichordates, echinoderms, cephalochordates, and chordates (which include all vertebrates).
One of the challenges in interpreting the extent of the Cambrian evolutionary explosion is the degree to which the early Phanerozoic faunas were related to the Ediacaran faunas. At least some of the organisms would appear to have similarities with modern phyla. The most significant of these are the sponges, anthozoan cnidarians (corals), and stem-group triploblasts. One of the features that characterizes the Cambrian explosion is the evolution of biomineralization, which is the ability to produce minerals (usually based on $$\rm Ca$$, but also $$\rm Si$$ and $$\rm P$$) through metabolic processes that result in shells or other hard parts.
Cnidarians are not very common in Cambrian strata. The Chengjiang and Burgess Shale faunas indicate that arthropods and sponges were the dominant groups in the Early to Middle Cambrian seas. Following their appearance in the late Proterozoic, sponges diversified greatly in the Cambrian and played a major role in the formation of the first Cambrian reefs.
Arthropods comprise $${\sim}45\%$$ of the Chengjiang and Burgess Shale faunas. Studies suggest that arthropods occupied a range of ecological niches, having evolved morphological adaptations linked to defense and feeding strategies. The geologically simultaneous appearance of biomineralization in a number of phyla argues for strong survival advantages for these hard, outer body coverings. This may have been for protection in response to the evolution of predator/prey relationships. The top-line predator was Anomalocaris (“abnormal shrimp”), which is a form that links the more primitive lobopods with the more advanced arthropods (i.e., crustaceans and trilobites).
As to what triggered the Cambrian explosion, the mechanisms are still obscure. Studies of isotopic and chemical indicators (principally $$\delta^{13}{\rm C}$$, $$^{87}{\rm Sr}/^{86}{\rm Sr}$$, and phosphogenesis) indicate that significant changes were occurring in ocean chemistry and circulation. The extent to which these affected or stimulated the evolutionary radiations (or rather are a reflection of it) is unclear. The Proterozoic-Phanerozoic transition is characterized by extensive phosphogenesis, deposition of black shales, and extreme negative shifts in $$\delta^{13}{\rm C}$$. The transition is generally considered to be the result of changes in global tectonics, ocean circulation, and/or nutrient supply.
It is also possible that the chemical changes are a result of the Cambrian explosion arising from the development of more complex food webs. Analysis of the Burgess Shale community has indicated the presence of feeding habits analogous to those existing in present-day marine ecosystems. Marine primary productivity is likely to have increased significantly with a major diversification of phytoplankton in the Cambrian. Accompanying this (and perhaps driving it) was the evolution of zooplankton, which may have been a key innovation that led to the evolution of larger animals. Zooplankton fecal pellets may have played a role in oxygenation of the seafloor, as well as increasing the availability of nutrient-rich organic compounds to the seafloor biota.
Ecological factors likely drove the evolutionary radiations of metazoans and phytoplankton. In particular, the appearance of herbivores and carnivorous predators engendered evolutionary “arms races” between predators and prey, as well as the evolution of a range of new feeding strategies. These include carnivorous behavior, scavenging, and filter feeding in the ocean. Poor regulation of genes controlling development would have resulted in significant morphological diversification over short time periods. It is suggested that poor regulation of homeotic genes in trilobites during the Early Cambrian may have been a factor in driving their evolutionary radiations.
#### 6.3.2.2. Bacteria and Plants on Land#
The earliest evidence for colonization of the land comes from $$1.2\ {\rm Ga}$$ rocks in central Arizona, where microfossils represent the remains of microbial communities on land. These have morphologies and sizes suggestive of bacteria (including cyanobacteria), and are thought to have been living at or close to the land surface. Evidence for biological activity in these rocks is supported by $$^{13}{\rm C}$$ depletion, caused by $$\rm CO_2$$ respiration into groundwater by terrestrial (i.e., land-based as opposed to marine) organisms, which may have also contributed to weathering of the rocks.
The presence of spore-like structures in Middle Cambrian shales in Tennessee and the Grand Canyon may be the first indication of the presence of land plants. These simple spheres are thought to have derived from very primitive land plants, probably closest to simple bryophytes (mosslike plants). The large morphological diversity of these structures indicates a relatively diverse flora on land at this time. This would have significant consequences for atmospheric evolution. The presence of a Cambrian terrestrial flora living by photosynthesis may have played a significant role in the drawdown of atmospheric $$\rm CO_2$$. It has been suggested that this might even have played a part in the Cambrian explosion, where carbon runoff affected the trophic structure of shallow water ecosystems.
#### 6.3.2.3. Land Invertebrates#
The fossil record indicates that the first metazoans to colonize the land were arthropods. Arthropod tracks from $${\sim}500\ {\rm Ma}$$ sandstones in Ontario (Canada) have been interpreted as having been made out of the water on land. These arthropods were likely to have been amphibious, and would have faced immense physiological problems in making the transition from an aqueous to a dry environment. The much greater diurnal changes in the atmosphere than in the aquatic environment presented problems of potential desiccation. A major limiting factor in the transition to air-breathing was the problem of $$\rm CO_2$$ excretion in air. The success of arthropods in early land colonization lay in their tough exoskeleton, pre-adapted to a life on land.
Other evidence for animal activity on land is Late Ordovician ($${\sim}450\ {\rm Ma}$$) burrows in fossil soils in Pennsylvania that may have been made by millipedes. These small arthropods may have overcome high temperatures and potential desiccation by burrowing in the soil. Fossilized fecal pellets from Late Silurian ($${\sim}410\ {\rm Ma}$$) deposits in Sweden contain fungal hyphae, indicating the existence of fungivorous microarthropods (e.g., mites or millipedes). This implies the presence of a decomposer niche, with the arthropods playing an important role in the establishment of soils through reworking and increasing nitrate and phosphate levels. This would have provided a necessary habitat to allow subsequent colonization by vascular plants, which are those with a system of tissues for circulating water and nutrients.
The presence of book lungs similar to those in modern spiders, found in Early Devonian arachnid fossils, provides firm evidence that animals had achieved a major step in living on land: respiring in a gaseous atmosphere. Slightly younger Middle Devonian remains from Gilboa in New York State have yielded the earliest known spider, Attercopus fimbriunguis. Like modern spiders, it is equipped with a spinneret, providing evidence that even the earliest spiders were able to spin webs. It was also endowed with fangs and a poison gland. Like other early terrestrial arthropods, these spiders were mainly predators and were probably feeding on abundant mites and millipedes.
There is little evidence for herbivorous arthropods in the fossil record until well into the Carboniferous Period ($${\sim}350\ {\rm Ma}$$). Eating plants may only have become established when animals had evolved appropriate enzymes and symbiotic digestive bacteria in their gut. Before this adaptation, the vascular plants could have been toxic to early terrestrial animals.
#### 6.3.2.4. Land Vertebrates#
Vertebrate colonization of the land occurred some time after the arthropod invasion. The oldest tetrapod (four-legged animal) evidence is incomplete remains of Elginerpeton from the Late Devonian Scat Craig deposit ($${\sim}370\ {\rm Ma}$$) (near Elgin, Scotland). This amphibian was very fish-like in its anatomy, and presumably in its behavior. The most complete of the early tetrapods are Ichthyostega and Acanthostega from the Late Devonian of East Greenland. See the Lecture Notes from Thomas Holtz @ UMD.
Detailed work on their limbs has revealed that Ichthyostega had seven digits on the feet, whereas Acanthostega had eight digits. It has been suggested that the appearance of digits was probably an adaptation for swimming rather than for walking on land. This casts doubt on the terrestrial ability of these tetrapods. Studies of the braincase, limb skeletons, and gill arches indicate that these early amphibians were little more than fish with slightly modified skull patterns, and digits rather than fins on the ends of their limbs. They were largely aquatic, possibly venturing onto land for short periods.
The fish-tetrapod transition was complex, involving large morphological and physiological changes. These include
• the development of air breathing,
• the ability to walk on land,
• the ability to hear in air,
• the ability to retain moisture, and
• the ability to reproduce out of water.
It is likely that these steps occurred one at a time and not all at once.
Fig. 6.4 Late Devonian lobe-finned fish and amphibious tetrapods. Image Credit: Wikipedia:evolution of tetrapods.#
The first tetrapods did not rush out of the water, but emerged fully equipped and pre-adapted for a life on land. The earliest tetrapod to have been adapted for walking on land was Pederpes from the early Carboniferous ($${\sim}350\ {\rm Ma}$$) of Scotland. While the presence of a lateral line indicates Pederpes was partly aquatic, the structure of its foot suggests that it was also adapted for walking on land.
The selection pressure that “drove” vertebrates onto land may well have been to find new prey. Panderichthyid fish (from which tetrapods probably evolved) are thought to have been predators, possessing very large heads relative to their body size, large fangs, and a wide gape. Their well-developed hands and feet (while not initially evolved for locomotion on land) were pre-adapted for supporting their body weight out of water. The positioning of their eyes on top of the skull meant that they were perfectly suited to viewing a new, terrestrial world.
The first tetrapods possessed a long fish-like tail, little changed from the panderichthyid fish tail. Even after the more advanced amphibians developed improved limb girdles that allowed them to walk freely on land, the tail remained as the important propulsive device for aquatic forays. The first amphibians retained a body covered by fish-like scales. It has been suggested that a scale-covered belly would have provided important protection when dragging themselves across the ground and it would have assisted in preventing desiccation.
#### 6.3.2.5. The Aerial Niche#
It was not until the middle Carboniferous ($${\sim}325\ {\rm Ma}$$) that the first organisms occupied the aerial niche. Arthropods again led the way, this time in the form of flying insects. The first insects were flightless and evolved in the Early Devonian, but then insects are absent from the fossil record for $$55\ {\rm Myr}$$. During this time, the first flying insects must have evolved, as ten orders of flying insects suddenly appear in the fossil record at the Middle/Early Carboniferous boundary ($${\sim}335\ {\rm Ma}$$). The largest known flying insects evolved early in the group’s evolutionary history: dragonflies with wingspans up to $$0.7\ {\rm m}$$. This may have been in part due to the high atmospheric levels of $$\rm CO_2$$ and $$\rm O_2$$ at this time, which resulted in higher atmospheric pressures capable of supporting such large bodies.
#### 6.3.2.6. Marine Faunae#
The fossil record of the Phanerozoic is dominated by marine organisms, mainly invertebrates. This is due to the enhanced chances in marine settings of burial in sediments after death (i.e., it is easier for such life to be recorded). Moreover, for the Paleozoic, the fossil record indicates that most organisms were aquatic. Studies of Paleozoic faunae distinguish the Cambrian fauna (dominated by trilobites) from the post-Cambrian (Paleozoic) fauna.
The Cambrian evolutionary radiation was characterized by the origin and diversification of basic body plans at the phylum and class levels. There was a three- to four-fold increase in the global richness of marine families during the Ordovician compared to the Cambrian. The dominant classes to radiate were primarily immobile, suspension-feeding species (e.g., articulate brachiopods, crinoids, cephalopods, gastropods, and bivalves).
Many of these groups were associated with reef systems that became dominant at low latitudes from mid-Ordovician times onward. On a global scale, the Paleozoic fauna went into decline during the Devonian. Fish diversified greatly during the mid-Paleozoic, particularly sharks, osteichthyans, acanthodians, and placoderms.
Shallow seafloor communities experienced the Mesozoic “marine revolution” from $$251-65\ {\rm Ma}$$. This was characterized by a diversification in durophagous predators (shell crushers) and by an intensification of grazing by both vertebrates and invertebrates. During the Jurassic, some sharks, rays, and crustaceans evolved the durophagous habit, while during the Cretaceous it appeared in a number of gastropod groups.
What drove this Mesozoic marine revolution? Evolutionary arms races between predators and prey contributed, but nonbiological factors may also have been significant. Correlations appear between the intervals of climatic warming, marine transgression (sea level rising and covering land), and high productivity. During the late Mesozoic, periods of massive submarine volcanism contributed enhanced nutrients to the marine environment. In addition to the elevated temperatures, productivity increased, enabling the evolution of lifestyles that utilized higher energy levels (e.g., more calcification and higher metabolic rates supporting more active organisms). The two principal phytoplanktonic algae in the oceans today (the diatoms and coccolithophorids) also appeared in the Mesozoic.
Biotic recovery during the Cenozoic was variable after the Cretaceous-Tertiary (K-T) mass extinction event eliminated $$60-75\%$$ of all species $$65\ {\rm Ma}$$. The extinction removed a sizeable part of the food chain, which resulted in a profound and extended reorganization of nutrient and biogeochemical cycling in the oceans. Re-establishing pre-extinction levels of morphological, taxonomic, and ecological diversity occurred by a series of discrete radiations.
While some groups (e.g., ammonites and marine reptiles) became extinct, others (e.g., echinoids) saw little change in diversity across the boundary, but later underwent major changes. These include the evolution of the clypeasteroids (sand dollars), which have diversified throughout the Cenozoic. The numbers of microplankton never returned to those of the Mesozoic. Higher diversity occurred during the warmer phases, notably the Paleocene-Eocene and the Miocene.
When compared with pre-Cenozoic times, the marine biota experienced great increases in diversity through the Cenozoic after recovering from the K-T event. The Cenozoic marine biota can therefore be regarded as essentially a continuation of the Modern fauna that first appeared during the early Mesozoic. The most significant change in the composition of the marine fauna was the evolution of marine mammals during the Eocene and their subsequent diversification.
#### 6.3.2.7. Terrestrial Florae#
During Cambrian-Ordovician times ($${\sim}542-450\ {\rm Ma}$$), atmospheric $$\rm CO_2$$ levels may have been up to 18 times higher than present-day levels. This played an important role in soil formation since more acidic rain increased chemical weathering of rocks, so helping soil formation and providing an environment in which vascular plants could grow and diversify. Moreover, $$\rm CO_2$$-enrichment favored photosynthetically active organisms that promoted the decomposition and release of inorganic nutrients. However, the very high $$\rm CO_2$$ levels were not beneficial for terrestrial plants as they contributed to very high global temperatures. Only when global temperatures began to decline did vascular plants evolve and diversify.
Over the next $$50\ \rm Myr$$, terrestrial flora underwent its greatest diversification, evolving from small vascular plants to dense vegetation with trees over $$35\ \rm m$$ tall. During the early period of vascular plant diversification in Early-Middle Devonian times, $$\rm CO_2$$ levels were possibly $$8-9$$ times higher than now. This was one of the most active periods in the Phanerozoic of plate movement and dramatic global climate change, and probably contributed significantly to floral diversification.
During the Carboniferous, the global climate changed from warm, humid, and ice-free to cooler and drier with glaciation at high latitudes in the Southern Hemisphere. This caused a drop in sea level of $$100-200\ \rm m$$ and aridity in high latitudes. But in a narrow equatorial belt, rain all year meant that vegetation accumulated in lowland swamps and led to the formation of the great coal deposits of North America and Europe. The continental blocks of Gondwana and Laurasia joined to form the supercontinent of Pangea (late in the Early Carboniferous).
From the Late Devonian through the Carboniferous ($${\sim}370-300\ {\rm Ma}$$), $$\rm CO_2$$ levels are thought to have plummeted from $$3600$$ to $$300\ \rm ppm$$, contributing significantly to global cooling. $$\rm CO_2$$ levels are estimated by using the density of stomata (pores) on fossil leaves (i.e., the greater the density, the lower the $$\rm CO_2$$ levels). During the diversification of vascular plants, their roots contributed to increased weathering, with a consequent drawdown of atmospheric $$\rm CO_2$$. Some of the $$\rm CO_2$$ became dissolved bicarbonate and was transported to the oceans by rivers, while some was locked away as coal in the large accumulations of organic material.
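The stomatal proxy works by comparing a fossil leaf's stomatal density (or index) against a calibration built from living relatives grown at known $$\rm CO_2$$. The sketch below shows the basic interpolation step; the calibration points are invented for illustration only, since real transfer functions are species-specific.

```python
# A hedged sketch of the stomatal-density CO2 proxy: interpolate a fossil
# stomatal index (SI) onto a hypothetical calibration curve. Lower SI means
# higher CO2 (the inverse relation described in the text).
import numpy as np

# (stomatal index %, CO2 ppm): invented calibration points for illustration
calib_si  = np.array([12.0, 9.0, 7.0, 5.5, 4.5])
calib_co2 = np.array([300., 500., 900., 1800., 3600.])

def co2_from_stomatal_index(si: float) -> float:
    """Interpolate CO2 (ppm) from a fossil stomatal index (%)."""
    # np.interp needs ascending x, so interpolate on the reversed arrays
    return float(np.interp(si, calib_si[::-1], calib_co2[::-1]))

print(f"SI = 5.0% -> ~{co2_from_stomatal_index(5.0):.0f} ppm CO2")
```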
During the Devonian-Carboniferous, plants not only grew larger, but developed more sophisticated reproductive structures, water transport systems, and root and leaf architecture. Leaves first appeared as very small structures during the Early Devonian, then became larger and broader by the end of the Devonian. This was presumably in response to the reduction in atmospheric $$\rm CO_2$$, as more leaf area is required to accommodate more stomata. The evolution of seeds during the Devonian revolutionized the plant life cycle by freeing plants from the need for external water for sexual reproduction (i.e., plants could grow farther away from bodies of water).
The first trees appeared during the mid-Devonian ($${\sim}380\ {\rm Ma}$$). For the next $$80\ {\rm Myr}$$, forests of spore-producing trees, plus two groups of early seed-producing trees, dominated the landscape. The spore-producing trees were lycopsids (e.g., Lepidodendron), which represented two-thirds of early forests, and sphenopsids (e.g., Calamites), which flourished in swampy conditions. Other significant elements of the flora were ferns, progymnosperms (ancestors of all seed plants and conifers), pteridosperms (seed ferns), and Cordaites (a common Carboniferous plant).
During the last $$65\ {\rm Myr}$$, the continental plates have moved into their present configurations, producing mountain ranges (e.g., Alps, Himalayas, and Rockies). Global climate has changed from a very warm greenhouse world in the early Cenozoic to an icehouse world in recent times. The Early Paleocene to Middle Eocene ($$65-45\ {\rm Ma}$$) was a warm period in Earth’s history. For example, deep-ocean temperatures were $$9-12\ {\rm ^\circ C}$$ higher than present-day levels, annual sea surface temperatures around Antarctica were $$15-17\ {\rm ^\circ C}$$, and mean diurnal temperatures at $$45^\circ$$ latitude reached about $$30\ {\rm ^\circ C}$$. This was due to a number of factors:
• changing continental configurations affecting ocean currents and global climate,
• the Pacific Ocean enlarging and taking warmer water to high latitudes,
• increased mantle degassing,
• increased volcanic activity,
• methane-induced polar warming, and
• a period of global carbon imbalance (i.e., more $$\rm CO_2$$ production than burial).
At this time, the poles were ice-free, precipitation occurred more inland, and temperature gradients from the equator to the poles were much smaller than now.
During these profound environmental changes, angiosperms diversified, forests shrank, and grasslands expanded. Forests and woodlands of angiosperm trees extended from pole to pole. Continents were dominated by an extensive tropical everwet biome that extended to relatively high latitudes. Floras at the poles were cool to warm temperate, where large leaf-size was an adaptation to low light levels in the winter at high latitudes.
The early Cenozoic saw the evolution of grasses, but it was not until $$10-20\ {\rm Ma}$$ that widespread grass-dominated ecosystems existed, even though grasses had started to form a significant part of the global vegetation by $${\sim}25\ {\rm Ma}$$. The spreading of grasslands was promoted in part by cooling and increasing aridity. Many grasses have drought-resistant adaptations (e.g., increased root growth, decreased shoot growth, reduced physiological activity during droughts, etc.), which also coped with the stress arising from the increased frequency of fires associated with a more arid climate.
#### 6.3.2.8. Terrestrial Faunae#
Amphibians dominated the Carboniferous coal forests and continued as important aquatic animals during the Permian. During the Carboniferous ($${\sim}325\ {\rm Ma}$$), the first truly terrestrial vertebrates (the reptiles) evolved; several major lineages radiated in the Late Carboniferous, though at relatively low diversity and small size. Two lineages emerged: the diapsids (the group containing living reptiles, dinosaurs, and birds) and the synapsids (mammal-like reptiles), where the latter were the dominant terrestrial vertebrates during the late Paleozoic.
Fossil remains indicate the synapsids had spread across Pangea by the Permian. The first mammal-like reptiles that evolved in the Late Carboniferous were the pelycosaurs. Unlike the earliest reptiles, pelycosaurs possessed a much larger head and body. Among them were the first terrestrial vertebrate carnivores. The edaphosaurs were herbivorous and sported huge, fin-like sails on their backs. Extremely long neural spines projecting from their backbones supported a thin membrane. A rich supply of blood vessels at the base of the spines suggests that the sail functioned as a thermoregulator, allowing rapid absorption or radiation of heat. By the late Permian ($${\sim}260\ {\rm Ma}$$), pelycosaur diversity had declined and they were replaced by a new group of mammal-like reptiles, the therapsids. These animals looked like a cross between a hippopotamus and a crocodile. Unlike pelycosaurs, therapsids had limbs set beneath their bodies, giving them greater mobility and therefore increasing their foraging ranges.
During the Mesozoic, there was an evolutionary revolution in terrestrial vertebrates that rivaled the one occurring in the marine realm. Many new groups evolved and radiated during the Triassic: turtles, crocodilians, and dinosaurs, with mammals appearing later. Moreover, the first flying vertebrates (the pterosaurs) evolved during the Late Triassic and diversified greatly during the Cretaceous. During the early Mesozoic, many families of terrestrial tetrapods were global in their distribution, despite the break-up of Pangea.
More specific changes are evident in one of the dominant reptilian groups, the dinosaurs. Ornithopods (bipedal herbivores) evolved near the Late Triassic-Early Jurassic boundary ($${\sim}200\ {\rm Ma}$$). Their diversity remained limited until the Late Cretaceous, when an increase was particularly evident in the hadrosaurs; this is attributed to the evolution of more efficient jaw mechanics and the development of cutting teeth. This trend was probably a response to environmental stimuli, especially the changing nature of the available plant food.
The Late Cretaceous diversification of angiosperms also corresponds strongly with insect diversification. Early Cretaceous flowers have features such as stamens with small anthers and pollen grains too big for effective wind dispersal, which indicates that many were pollinated by insects. Moths and butterflies also seem to have radiated at about the time of the angiosperms, as did bees, whose fossil record dates back to the mid-Cretaceous.
Like reptiles that diversified in both marine and aerial niches during the Mesozoic, mammals similarly diversified during the Cenozoic, with the evolution of both cetaceans (whales) and bats. Terrestrial faunae are particularly characterized by a great increase in diversity of mammals (in terms of both morphology and size). Like the flora, the evolution of the terrestrial mammal fauna was strongly influenced by profound climatic changes of the Cenozoic. Many mammalian lineages increased in body size, in part in response to ecological changes brought on by the transition from a warm greenhouse world to an increasingly icehouse world.
The classic example for this is the evolution of horses, from the small tropical-forest inhabiting forms to larger forms adapted for wide open grass plains and with teeth modified to eat this coarser vegetation. Many of the changes in diversity also relate to the shifting continents, and the evolution of increasingly more endemic groups, such as the many marsupial families that evolved on the Australian continent once it had separated from Antarctica.
Mammals (in general) have large brains for their body size, compared with other vertebrates (with the exception of birds). The evolution of the largest relative brain size occurred in the hominin genus Homo. Because the brain is a metabolically “hungry” organ, developmental tradeoffs were necessary to allocate the energy that would enable larger brain growth in hominids. This is thought to have occurred through a reduction in gut size, which necessitated a change in diet, predominantly from herbivorous to omnivorous, with a large intake of meat. The increase in brain size in the Homo lineage correlates with a reduction not only in gut size, but also in jaw and tooth size.
The final species in this lineage, Homo sapiens, evolved such a well-developed neocortex, and increased so much in cognitive ability, that it has manipulated the environment to a greater extent than any other single species of metazoan. The overall trend of evolution:
• colonization first of the oceans,
• then the terrestrial environment, and
• then the atmosphere,
has been taken one step further by this species with the ability to spread beyond the atmosphere into outer space.
The evolution of life on planet Earth has not been a series of continuously random episodes that have fortuitously produced the pattern we see in the fossil record. For the last $$3.5\ \rm Gyr$$ or so, life has expanded along environmental gradients (e.g., deep to shallow water; aquatic to terrestrial; terrestrial to aerial). The genetic changes that have occurred have been heavily constrained along pre-existing developmental pathways, resulting in many examples of convergent evolution. If we are to seek life on other planets, then we need to find another active, evolving planet.
• How inevitable is life on a planet that has evolved in the way that Earth has?
• If we were to re-run the clock on Earth, would we get a similar evolutionary scenario?
Conway Morris (2003) has argued that we would, and that ultimately what we perceive as intelligent life would also be inevitable. If another planet is evolving under very different constraints, would life evolve and would it bear any remote resemblance to life on Earth? This is a question that is still hotly debated in the exoplanet era of astrobiology.
## 6.4. Homework#
Problem 1
What factors on the early Earth are important for the emergence of life? Explain how the environment affects these factors, especially with respect to the development of metabolism.
Problem 2
Compare and contrast the metabolic processes of anabolism and catabolism. Describe the types of organisms that utilize each metabolic process.
Problem 3
How is energy actually stored and transferred in biology? Identify the chemical reactions, Gibbs free energy yields, and requisite environments for the three major forms of respiration.
Problem 4
Summarize the reductive acetyl-CoA pathway and why it is important for the earliest metabolic processes.
Problem 5
Describe the process of photosynthesis qualitatively and chemically. Is water a necessary prerequisite for photosynthesis? Explain your answer.
Problem 6
What does the fossil record suggest for the origins of the earliest communities of life on the Earth? What kind of physiological structures were common among these communities?
Problem 7
Summarize what we know about the limits of Earth-like life and why we use the Earth as our primary model in the search for life elsewhere.
Problem 8
Identify the extreme environments in which life can unexpectedly thrive. Is the presence of water necessary or sufficient for the existence of life?
Problem 9
What were the most common types of lifeforms in Earth’s history? Describe how we identify the remains of lifeforms that are without shells or skeletons.
Problem 10
Summarize how life diversified after the Cambrian “Explosion”. What morphological and physiological changes occurred during the fish-tetrapod transition?
Problem 11
Describe how changes to Earth’s surface over the last $$500\ \rm Myr$$ have influenced the types of lifeforms that have flourished or perished in response to an evolving Earth. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 6, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6229113936424255, "perplexity": 4937.648602098546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499953.47/warc/CC-MAIN-20230201211725-20230202001725-00092.warc.gz"} |
https://zbmath.org/?q=an:0538.94001
The theory of discrete signal processing. (English. Russian original) Zbl 0538.94001
Probl. Inf. Transm. 19, 284-288 (1983); translation from Probl. Peredachi Inf. 19, No. 4, 43-49 (1983).
Summary: Assume that G is a finite group. A general definition of the G-spectrum of a discrete signal, that utilizes irreducible representations of group G, is given. If G is an Abelian group, then the G-spectrum coincides with the familiar definition. The general definition of G-spectrum preserves all the advantages of spectral processing of discrete signals that are inherent in the Abelian case. For lengths that are powers of 2, an infinite sequence $$\{G_n\}$$ of non-Abelian groups is constructed, for which the $$G_n$$-spectrum can be calculated 3/4 times more rapidly than the FFT allows in the Abelian case for the same lengths.
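For the Abelian case mentioned in the summary, a minimal Python sketch (taking G to be the cyclic group Z/n, whose irreducible representations are the characters; the function name is mine, not the paper's):

```python
import cmath

def g_spectrum_cyclic(signal):
    # For the cyclic group Z/n the irreducible representations are the
    # characters chi_k(t) = exp(-2*pi*i*k*t/n), so the G-spectrum is the
    # ordinary discrete Fourier transform of the signal.
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

print(g_spectrum_cyclic([1, 0, 0, 0]))  # a delta signal meets every character equally
```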
##### MSC:
94A12 Signal theory (characterization, reconstruction, filtering, etc.)
https://planetmath.org/ProofOfTheExistenceOfTranscendentalNumbers | # proof of the existence of transcendental numbers
Cantor discovered this proof.
## Lemma:
Consider a natural number $k$. Then the number of algebraic numbers of height $k$ is finite.
### Proof:
To see this, note that every term in the sum defining the height is non-negative, and the degree appears as one of the terms. Therefore:

$n\leq k$

where $n$ is the degree of the polynomial. For a polynomial of degree $n$ there are $n+1$ coefficients, and the sum of their moduli must equal $(k-n)$, so there are only finitely many ways of choosing them. Each such polynomial has at most $n$ roots, and hence contributes only finitely many algebraic numbers. Summing over the finitely many admissible degrees $n\leq k$ shows that the number of algebraic numbers of height $k$ is finite (counted with some repetitions). The result follows.
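As a concreteness check on the lemma, here is a small Python sketch; it assumes, consistently with the bounds above, that the height of a degree-$n$ polynomial with integer coefficients $a_0,\dots,a_n$ is $n$ plus the sum of the moduli $|a_i|$. The enumeration terminates for each $k$, so each height class is finite:

```python
def coeff_lists(count, total):
    # All integer tuples of length `count` whose absolute values sum to `total`.
    if count == 0:
        if total == 0:
            yield ()
        return
    for a in range(-total, total + 1):
        for rest in coeff_lists(count - 1, total - abs(a)):
            yield (a,) + rest

def polynomials_of_height(k):
    # Degree-n integer polynomials (a_0, ..., a_n), a_n != 0, with
    # n + |a_0| + ... + |a_n| == k; only finitely many exist for each k.
    for n in range(1, k):
        for coeffs in coeff_lists(n + 1, k - n):
            if coeffs[-1] != 0:
                yield coeffs

print(sum(1 for _ in polynomials_of_height(4)))  # a finite count, as the lemma asserts
```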
## Proof of the main theorem:
You can write the algebraic numbers in a list: first all those of height 1, then those of height 2, and so on, ordering the numbers within each height class numerically, since each class is a finite set. This shows that the set of algebraic numbers is countable. However, by Cantor's diagonal argument, the set of real numbers is uncountable. So there are more real numbers than algebraic numbers, and transcendental (non-algebraic) real numbers must exist; the result follows.
https://theoreticalatlas.wordpress.com/2011/03/17/hgtqgr-part-iiia-workshop/ | Now for a more sketchy bunch of summaries of some talks presented at the HGTQGR workshop. I’ll organize this into a few themes which appeared repeatedly and which roughly line up with the topics in the title: in this post, variations on TQFT, plus 2-group and higher forms of gauge theory; in the next post, gerbes and cohomology, plus talks on discrete models of quantum gravity and suchlike physics.
## TQFT and Variations
I start here for no better reason than the personal one that it lets me put my talk first, so I’m on familiar ground to start with, for which reason also I’ll probably give more details here than later on. So: a TQFT is a linear representation of the category of cobordisms – that is, a (symmetric monoidal) functor $nCob \rightarrow Vect$, in the notation I mentioned in the first school post. An Extended TQFT is a higher functor $nCob_k \rightarrow k-Vect$, representing a category of cobordisms with corners into a higher category of k-Vector spaces (for some definition of same). The essential point of my talk is that there’s a universal construction that can be used to build one of these at $k=2$, which relies on some way of representing $nCob_2$ into $Span(Gpd)$, whose objects are groupoids, and whose morphisms in $Hom(A,B)$ are pairs of groupoid homomorphisms $A \leftarrow X \rightarrow B$. The 2-morphisms have an analogous structure. The point is that there’s a 2-functor $\Lambda : Span(Gpd) \rightarrow 2Vect$ which takes representations of groupoids at the level of objects; for morphisms, there is a “pull-push” operation that just uses the restriction and induction functors to move a representation across a span; the non-trivial (but still universal) bit is the 2-morphism map, which uses the fact that the restriction and induction functors are biadjoint, so there are units and counits to use. A construction using gauge theory gives groupoids of connections and gauge transformations for each manifold or cobordism. This recovers a form of the Dijkgraaf-Witten model. In principle, though, any way of getting a groupoid (really, a stack) associated to a space functorially will give an ETQFT this way. I finished up by suggesting what would need to be done to extend this up to higher codimension. To go to codimension 3, one would assign to an object (a codimension-3 manifold) a 3-vector space which is a representation 2-category of 2-groupoids of connections valued in 2-groups, and so on. There are some theorems about representations of n-groupoids which would need to be proved to make this work.
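(An aside for readers keeping score: composition in $Span(Gpd)$, which the summary above takes for granted, is by weak pullback over the shared foot; this is a standard construction, not something specific to the talk.)

```latex
% Composing spans A <- X -> B and B <- Y -> C of groupoids:
\[
  (B \leftarrow Y \rightarrow C) \circ (A \leftarrow X \rightarrow B)
  \;=\; \bigl( A \leftarrow X \times_{B} Y \rightarrow C \bigr)
\]
% where the weak pullback X x_B Y has objects (x, y, f), with f an
% isomorphism in B between the images of x and y.
```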
The fact that different constructions can give groupoids for spaces was used by the next speaker, Thomas Nicklaus, whose talk described another construction that uses the $\Lambda$ I mentioned above. This one produces “Equivariant Dijkgraaf-Witten Theory”. The point is that one gets groupoids for spaces in a new way. Before, we had, for a space $M$ a groupoid $\mathcal{A}_G(M)$ whose objects are $G$-connections (or, put another way, bundles-with-connection) and whose morphisms are gauge transformations. Now we suppose that there’s some group $J$ which acts weakly (i.e. an action defined up to isomorphism) on $\mathcal{A}_G(M)$. We think of this as describing “twisted bundles” over $M$. This is described by a quotient stack $\mathcal{A}_G // J$ (which, as a groupoid, gets some extra isomorphisms showing where two objects are related by the $J$-action). So this gives a new map $nCob \rightarrow Span(Gpd)$, and applying $\Lambda$ gives a TQFT. The generating objects for the resulting 2-vector space are “twisted sectors” of the equivariant DW model. There was some more to the talk, including a description of how the DW model can be further mutated using a cocycle in the group cohomology of $G$, but I’ll let you look at the slides for that.
Next up was Jamie Vicary, who was talking about “(1,2,3)-TQFT”, which is another term for what I called “Extended” TQFT above, but specifying that the objects are 1-manifolds, the morphisms 2-manifolds, and the 2-morphisms are 3-manifolds. He was talking about a theorem that identifies oriented TQFT’s of this sort with “anomaly-free modular tensor categories” – which is widely believed, but in fact harder to prove than commonly thought. It’s easy enough to see that such a TQFT $Z$ corresponds to an MTC – it’s the category $Z(S^1)$ assigned to the circle. What’s harder is showing that the TQFT’s are equivalent functors iff the categories are equivalent. This boils down, historically, to the difficulty of showing the category is rigid. Jamie was talking about a project with Bruce Bartlett and Chris Schommer-Pries, whose presentation of the cobordism category (described in the school post) was the basis of their proof.
Part of it amounts to giving a description of the TQFT in terms of certain string diagrams. Jamie kindly credited me with describing this point of view to him: that the codimension-2 manifolds in a TQFT can be thought of as “boundaries in space” – codimension-1 manifolds are either time-evolving boundaries, or else slices of space in which the boundaries live; top-dimension cobordisms are then time-evolving slices of space-with-boundary. (This should be only a heuristic way of thinking – certainly a generic TQFT has no literal notion of “time-evolution”, though in that (2+1) quantum gravity can be seen as a TQFT, there’s at least one case where this picture could be taken literally.) Then part of their proof involves showing that the cobordisms can be characterized by taking vector spaces on the source and target manifolds spanned by the generating objects, and finding the functors assigned to cobordisms in terms of sums over all “string diagrams” (particle worldlines, if you like) bounded by the evolving boundaries. Jamie described this as a “topological path integral”. Then one has to describe the string diagram calculus – rigidity follows from the “yanking” rule, for instance, and this follows from Morse theory as in Chris’ presentation of the cobordism category.
There was a little more discussion about what the various properties (proved in a similar way) imply. One is “cloaking” – the fact that a 2-morphism which “creates a handle” is invisible to the string diagrams in the sense that it introduces a sum over all diagrams with a string “looped” around the new handle, but this sum gives a result that’s equal to the original map (in any “pivotal” tensor category, as here).
Chronologically before all these, one of the first talks on such a topic was by Rafael Diaz, on Homological Quantum Field Theory, or HLQFT for short, which is a rather different sort of construction. Remember that Homotopy QFT, as described in my summary of Tim Porter’s school sessions, is about linear representations of what I’ll for now call $Cob(d,B)$, whose morphisms are $d$-dimensional cobordisms equipped with maps into a space $B$ up to homotopy. HLQFT instead considers cobordisms equipped with maps taken up to homology.
Specifically, there’s some space $M$, say a manifold, with some distinguished submanifolds (possibly boundary components; possibly just embedded submanifolds; possibly even all of $M$ for a degenerate case). Then we define $Cob_d^M$ to have objects which are $(d-1)$-manifolds equipped with maps into $M$ which land on the distinguished submanifolds (to make composition work nicely, we in fact assume they map to a single point). Morphisms in $Cob_d^M$ are trickier, and look like $(N,\alpha, \xi)$: a cobordism $N$ in this category is likewise equipped with a map $\alpha$ from its boundary into $M$ which recovers the maps on its objects. Here $\xi$ is a homology class of maps from $N$ to $M$ which agrees with $\alpha$ on the boundary. This forms a monoidal category as with standard cobordisms. Then HLQFT is about representations of this category. One simple case Rafael described is the dimension-1 case, where objects are (ordered sets of) points equipped with maps that pick out chosen submanifolds of $M$, and morphisms are just braids equipped with homology classes of “paths” joining up the source and target submanifolds. Then a representation might, e.g., describe how to evolve a homology class on the starting manifold to one on the target by transporting along such a path-up-to-homology. In higher dimensions, the evolution is naturally more complicated.
A slightly looser fit to this section is the talk by Thomas Krajewski, “Quasi-Quantum Groups from Strings” (see this) – he was talking about how certain algebraic structures arise from “string worldsheets”, which are another way to describe cobordisms. This does somewhat resemble the way an algebraic structure (Frobenius algebra) is related to a 2D TQFT, but here the string worldsheets are interacting with a 3-form field $H$ (the curvature of the 2-form field $B$ of string theory) and things needn’t be topological, so the result is somewhat different.
Part of the point is that quantizing such a thing gives a higher version of what happens for quantizing a moving particle in a gauge field. In the particle case, one comes up with a line bundle (of which sections form the Hilbert space) and in the string case one comes up with a gerbe; for the particle, this involves an associated 2-cocycle, and for the string a 3-cocycle; for the particle, one ends up producing a twisted group algebra, and for the string, this is where one gets a “quasi-quantum group”. The algebraic structures, as in the TQFT situation, come from, for instance, the “pants” cobordism which gives a multiplication and a comultiplication (by giving maps $H \otimes H \rightarrow H$ or the reverse, where $H$ is the object assigned to a circle).
There is some machinery along the way which I won’t describe in detail, except that it involves a tricomplex of forms – the gradings being form degree, the degree of a cocycle for group cohomology, and the number of overlaps. As observed before, gerbes and their higher versions have transition functions on higher numbers of overlapping local neighborhoods than mere bundles. (See the paper above for more)
## Higher Gauge Theory
The talks I’ll summarize here touch on various aspects of higher-categorical connections or 2-groups (though at least one I’ll put off until later). The division between this and the section on gerbes is a little arbitrary, since of course they’re deeply connected, but I’m making some judgements about emphasis or P.O.V. here.
Apart from giving lectures in the school sessions, John Huerta also spoke on “Higher Supergroups for String Theory”, which brings “super” (i.e. $\mathbb{Z}_2$-graded) objects into higher gauge theory. There are “super” versions of vector spaces and manifolds, which decompose into “even” and “odd” graded parts (a.k.a. “bosonic” and “fermionic” parts). Thus there are “super” variants of Lie algebras and Lie groups, which are like the usual versions, except commutation properties have to take signs into account (e.g. a Lie superalgebra’s bracket is commutative if the product of the grades of two vectors is odd, anticommutative if it’s even). Then there are Lie 2-algebras and 2-groups as well – categories internal to this setting. The initial question has to do with whether one can integrate some Lie 2-algebra structures to Lie 2-group structures on a spacetime, which depends on the existence of some globally smooth cocycles. The point is that when spacetime is of certain special dimensions, this can work, namely dimensions 3, 4, 6, and 10. These are all 2 more than the real dimensions of the four real division algebras, $\mathbb{R}$, $\mathbb{C}$, $\mathbb{H}$ and $\mathbb{O}$. It’s in these dimensions that Lie 2-superalgebras can be integrated to Lie 2-supergroups. The essential reason is that a certain cocycle condition will hold because of the properties of a form on the Clifford algebras that are associated to the division algebras. (John has some related material here and here, though not about the 2-group case.)
Since we’re talking about higher versions of Lie groups/algebras, an important bunch of concepts to categorify are those in representation theory. Derek Wise spoke on “2-Group Representations and Geometry”, based on work with Baez, Baratin and Freidel, most fully developed here, but summarized here. The point is to describe the representation theory of Lie 2-groups, in particular geometrically. They’re to be represented on (in general, infinite-dimensional) 2-vector spaces of some sort, which is chosen to be a category of measurable fields of Hilbert spaces on some measure space, which is called $H^X$ (intended to resemble, but not exactly be the same as, $Hilb^X$, the space of “functors into $Hilb$ from the space $X$, the way Kapranov-Voevodsky 2-vector spaces can be described as $Vect^k$). The first work on this was by Crane and Sheppeard, and also Yetter. One point is that for 2-groups, we have not only representations and intertwiners between them, but 2-intertwiners between these. One can describe these geometrically – part of which is a choice of that measure space $(X,\mu)$.
This done, we can say that a representation of a 2-group is a 2-functor $\mathcal{G} \rightarrow H^X$, where $\mathcal{G}$ is seen as a one-object 2-category. Thinking about this geometrically, if we concretely describe $\mathcal{G}$ by the crossed module $(G,H,\rhd,\partial)$, such a representation defines an action of $G$ on $X$, and a map $X \rightarrow H^*$ into the character group, which thereby becomes a $G$-equivariant bundle. One consequence of this description is that it becomes possible to distinguish not only irreducible representations (bundles over a single orbit) and indecomposable ones (where the fibres are particularly simple homogeneous spaces), but an intermediate notion called “irretractible” (though it’s not clear how much this provides). An intertwining operator between reps over $X$ and $Y$ can be described in terms of a bundle of Hilbert spaces – which is itself defined over the pullback of $X$ and $Y$ seen as $G$-bundles over $H^*$. A 2-intertwiner is a fibre-wise map between two such things. This geometric picture specializes in various ways for particular examples of 2-groups. A physically interesting one, which goes back to Crane and Sheppeard and is expanded on in that paper of [BBFW] up above, deals with the Poincaré 2-group, where irreducible representations live over mass-shells in Minkowski space (or rather, the dual of $H \cong \mathbb{R}^{3,1}$).
Moving on from 2-group stuff, there were a few talks related to 3-groups and 3-groupoids. There are some new complexities that enter here, because while (weak) 2-categories are all (bi)equivalent to strict 2-categories (where things like associativity and the interchange law for composing 2-cells hold exactly), this isn’t true for 3-categories. The best strictification result is that any 3-category is (tri)equivalent to a Gray category – where all those properties hold exactly, except for the interchange law $(\alpha \circ \beta) \cdot (\alpha ' \circ \beta ') = (\alpha \cdot \alpha ') \circ (\beta \cdot \beta ')$ for horizontal and vertical compositions of 2-cells, which is replaced by an “interchanger” isomorphism with some coherence properties. John Barrett gave an introduction to this idea and spoke about “Diagrams for Gray Categories”, describing how to represent morphisms, 2-morphisms, and 3-morphisms in terms of higher versions of “string” diagrams involving (piecewise linear) surfaces satisfying some properties. He also carefully explained how to reduce the dimensions in order to make them both clearer and easier to draw. Bjorn Gohla spoke on “Mapping Spaces for Gray Categories”, but since it was essentially a shorter version of a talk I’ve already posted about, I’ll leave that for now, except to point out that it linked to the talk by Joao Faria Martins, “3D Holonomy” (though see also this paper with Roger Picken).
The point in Joao’s talk starts with the fact that we can describe holonomies for 3-connections on 3-bundles valued in Gray-groups (i.e. the maximally strict form of a general 3-group) in terms of Gray-functors $hol: \Pi_3(M) \rightarrow \mathcal{G}$. Here, $\Pi_3(M)$ is the fundamental 3-groupoid of $M$, which turns points, paths, homotopies of paths, and homotopies of homotopies into a Gray groupoid (modulo some technicalities about “thin” or “laminated” homotopies) and $\mathcal{G}$ is a gauge Gray-group. Just as a 2-group can be represented by a crossed module, a Gray (3-)group can be represented by a “2-crossed module” (yes, the level shift in the terminology is occasionally confusing). This is a chain of groups $L \stackrel{\delta}{\rightarrow} E \stackrel{\partial}{\rightarrow} G$, where $G$ acts on the other groups, together with some structure maps (for instance, the Peiffer commutator for a crossed module becomes a lifting $\{ ,\} : E \times E \rightarrow L$) which all fit together nicely. Then a tri-connection can be given locally by forms valued in the Lie algebras of these groups: $(\omega , m ,\theta)$ in $\Omega^1 (M,\mathfrak{g} ) \times \Omega^2 (M,\mathfrak{e}) \times \Omega^3(M,\mathfrak{l})$. Relating the global description in terms of $hol$ and local description in terms of $(\omega, m, \theta)$ is a matter of integrating forms over paths, surfaces, or 3-volumes that give the various $j$-morphisms of $\Pi_3(M)$. This sort of construction of parallel transport as functor has been developed in detail by Waldorf and Schreiber (viz. these slides, or the full paper), some time ago, which is why, thematically, they’re the next two speakers I’ll summarize.
Konrad Waldorf spoke about “Abelian Gauge Theories on Loop Spaces and their Regression”. (For more, see two papers by Konrad on this) The point here is that there is a relation between two kinds of theories – string theory (with $B$-field) on a manifold $M$, and ordinary $U(1)$ gauge theory on its loop space $LM$. The relation between them goes by the name “regression” (passing from gauge theory on $LM$ to string theory on $M$), or “transgression”, going the other way. This amounts to showing an equivalence of categories between [principal $U(1)$-bundles with connection on $LM$] and [$U(1)$-gerbes with connection on $M$]. This nicely gives a way of seeing how gerbes “categorify” bundles, since passing to the loop space (whose points are maps $S^1 \rightarrow M$) means a holonomy functor is now looking at objects (points in $LM$) which would be morphisms in the fundamental groupoid of $M$, and at morphisms which are paths of loops (surfaces in $M$ which trace out homotopies). So things are shifted by one level. Anyway, Konrad explained how this works in more detail, and how it should be interpreted as relating connections on loop space to the $B$-field in string theory.
Urs Schreiber kicked the whole categorification program up a notch by talking about $\infty$-Connections and their Chern-Simons Functionals. So now we’re getting up into $\infty$-categories, and particularly $\infty$-toposes (see Jacob Lurie’s paper, or even his book, if so inclined to find out what these are), and in particular a “cohesive topos”, where derived geometry can be developed (Urs suggested people look here, where a bunch of background is collected). The point is that $\infty$-topoi are good for talking about homotopy theory. We want a setting which allows all that structure, but also allows us to do differential geometry and derived geometry. So there’s a “cohesive” $\infty$-topos called $Smooth\infty Gpds$, of “sheaves” (in the $\infty$-topos sense) of $\infty$-groupoids on smooth manifolds. This setting is the minimal common generalization of homotopy theory and differential geometry.
This is a higher analog of a more familiar setup: since there’s a smooth classifying space (in fact, a Lie groupoid) $BG$ for $G$-bundles, there’s an equivalence between the category $G-Bund$ of $G$-principal bundles and $SmoothGpd(X,BG)$ (of functors into $BG$). Moreover, there’s a similar setup with $BG_{conn}$ for bundles with connection. This can be described topologically, or there’s also a “differential refinement” to talk about the smooth situation. This equivalence lives within a category of (smooth) sheaves of groupoids. For higher gauge theory, we want a higher version as in $Smooth \infty Gpds$ described above. Then we should get an equivalence – in this cohesive topos – of $hom(X,B^n U(1))$ and a category of $U(1)$ $(n-1)$-gerbes.
Then the part about the “Chern-Simons functionals” refers to the fact that CS theory for a manifold (which is a kind of TQFT) is built using an action functional that is found as an integral of the forms that describe some $U(1)$-connection over the manifold. (Then one does a path-integral of this functional over all connections to find partition functions etc.) So the idea is that for these higher $U(1)$-gerbes, whose classifying spaces we’ve just described, there should be corresponding functionals. This is why, as Urs remarked in wrapping up, this whole picture has an explicit presentation in terms of forms. Actually, in terms of Cech-cocycles (due to the fact we’re talking about gerbes), whose coefficients are taken in sheaves of complexes (this is the derived geometry part) of differential forms whose coefficients are in $L_\infty$-algebroids (the $\infty$-groupoid version of Lie algebras, since in general we’re talking about a theory with gauge $\infty$-groupoids now).
Whew! Okay, that’s enough for this post. Next time, wrapping up blogging the workshop, finally.
http://tex.stackexchange.com/questions/34406/observation-not-a-question-the-commands-guillemotleft-and-guillemotright-ap | # Observation (not a question): the commands \guillemotleft and \guillemotright appear to be misspellings [closed]
The traditional French double quotation marks, « and », are called guillemets. They are used in the scripts of many modern languages, including some forms of Chinese (as 《 and 》 in restricted contexts).
The standard LaTeX commands to produce them, however, are \guillemotleft and \guillemotright; substituting e for the o leads to a compiler error. It seems that a non-standard spelling of guillemet influenced the choice of the LaTeX command names.
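For what it's worth, the commands, misspelling and all, work out of the box with the T1 font encoding; a minimal example:

```latex
\documentclass{article}
\usepackage[T1]{fontenc} % the T1 encoding provides the guillemet glyphs
\begin{document}
\guillemotleft~guillemets~\guillemotright % typesets « guillemets »
\end{document}
```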
My Petit Robert (vintage 1991) defines guillemot: "Oiseau palmipède voisin du pingouin, habitant les régions arctiques [Web-footed bird related to the penguin, inhabiting arctic regions]". Perhaps this is all a plot to plant Linux imagery in people's LaTeX code.
I originally posted this comment here: http://wp.me/p1OF2a-cU .
## closed as not a real question by Torbjørn T., diabonas, egreg, Werner, Jake Nov 10 '11 at 20:56
It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center. If this question can be reworded to fit the rules in the help center, please edit the question.
It seems that Adobe has acknowledged the error, but they think it's too late to change the characters' names in their tables. – egreg Nov 10 '11 at 20:52
https://opentextbooks.uregina.ca/creemathdictionary/chapter/n/ | N
Nine 9 kîkâmitâtaht
Nineteen 19 kîkâmitâtahtosâp
Ninety 90 kîkâmitâtahtomitanaw
Ninth 9th mwecikîkâmitâtaht
Number The concept of an amount, quantity, or how many items there are in a collection. [1] akihtâson
Number line A line (vertical or horizontal) on which each point represents a number. [1] akihtāson tipapekinikan
Numerator The number above the line in a fraction, stating how many elements are taken from a set or how many equal parts are taken from a whole. tahkoc akitason
Numerical Involving numbers or a number system. [1] akihtāsowina
Numerical expression Any combination of numerals and/or operation symbols; also known as an arithmetic expression. [1] akihtāsona-itėwina Examples: 35; 4.5 − 1.2; 5 × 4 − 4
Numerical pattern A sequence of numbers following a certain rule. akihtȃso kaskomakāki Examples: 1, 5, 9, 13, … (arithmetic progression); 2, 6, 18, 54, … (geometric progression); 0, 1, 1, 2, 3, 5, 8, 13, … (Fibonacci sequence)
https://www.acmicpc.net/problem/5053 | 시간 제한 메모리 제한 제출 정답 맞은 사람 정답 비율
1 초 128 MB 13 5 5 38.462%
## Problem
One of the most fundamental data structure problems is the dictionary problem: given a set D of words you want to be able to quickly determine if any given query string q is present in the dictionary D or not. Hashing is a well-known solution for the problem. The idea is to create a function h : Σ* → [0..n − 1] from all strings to the integer range 0, 1, .., n − 1, i.e. you describe a fast deterministic program which takes a string as input and outputs an integer between 0 and n−1. Next you allocate an empty hash table T of size n and for each word w in D, you set T[h(w)] = w. Thus, given a query string q, you only need to calculate h(q) and see if T[h(q)] equals q, to determine if q is in the dictionary. Seems simple enough, but aren’t we forgetting something? Of course, what if two words in D map to the same location in the table? This phenomenon, called a collision, happens fairly often (remember the Birthday paradox: in a class of 24 pupils there is more than a 50% chance that two of them share a birthday). On average you will only be able to put roughly √n-sized dictionaries into the table without getting collisions, quite poor space usage!
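A minimal sketch of the single-hash-function scheme just described, assuming some hash function h from strings to 0..n−1 (the function names here are illustrative, not part of the problem):

```python
def build_table(words, n, h):
    # T[h(w)] = w for every dictionary word; a colliding word overwrites the
    # earlier occupant, which is exactly why collisions break the scheme.
    T = [None] * n
    for w in words:
        T[h(w)] = w
    return T

def contains(T, q, h):
    # q is in the dictionary exactly when its slot still holds q itself.
    return T[h(q)] == q
```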
A stronger variant is Cuckoo Hashing1. The idea is to use two hash functions h1 and h2. Thus each string maps to two positions in the table. A query string q is now handled as follows: you compute both h1(q) and h2(q), and if T[h1(q)] = q, or T[h2(q)] = q, you conclude that q is in D. The name “Cuckoo Hashing” stems from the process of creating the table. Initially you have an empty table. You iterate over the words d in D, and insert them one by one. If T[h1(d)] is free, you set T[h1(d)] = d. Otherwise if T[h2(d)] is free, you set T[h2(d)] = d. If both are occupied however, just like the cuckoo with other birds’ eggs, you evict the word r in T[h1(d)] and set T[h1(d)] = d. Next you put r back into the table in its alternative place (and if that entry was already occupied you evict that word and move it to its alternative place, and so on). Of course, we may end up in an infinite loop here, in which case we need to rebuild the table with other choices of hash functions. The good news is that, with great probability, this will not happen even if D contains up to n/2 words!
1 Cuckoo Hashing was suggested by the Danes R. Pagh and F. F. Rodler in 2001
## Input
On the first line of input is a single positive integer 1 ≤ t ≤ 50 specifying the number of test cases to follow. Each test case begins with two positive integers 1 ≤ m ≤ n ≤ 10000 on a line of itself, m telling the number of words in the dictionary and n the size of the hash table in the test case. Next follow m lines of which the i:th describes the i:th word di in the dictionary D by two non-negative integers h1(di) and h2(di) less than n giving the two hash function values of the word di. The two values may be identical.
## Output
For each test case there should be exactly one line of output either containing the string “successful hashing” if it is possible to insert all words in the given order into the table, or the string “rehash necessary” if it is impossible.
## Sample Input 1
2
3 3
0 1
1 2
2 0
5 6
2 3
3 1
1 2
5 1
2 5
## Sample Output 1
successful hashing
rehash necessary
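One standard way to decide between the two outputs (a sketch, not necessarily the intended judge solution): treat the n table slots as vertices and each word as an edge joining its two hash values; all the words can be placed exactly when no connected component contains more words (edges) than slots (vertices). A union-find pass checks this in near-linear time:

```python
import sys

def possible(n, pairs):
    # Union-find over slots, tracking slots (vertices) and words (edges)
    # per component; a component with more edges than vertices cannot
    # accommodate all of its words.
    parent = list(range(n))
    verts = [1] * n
    edges = [0] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for h1, h2 in pairs:
        a, b = find(h1), find(h2)
        if a == b:
            edges[a] += 1
        else:
            parent[b] = a
            verts[a] += verts[b]
            edges[a] += edges[b] + 1
    return all(edges[find(v)] <= verts[find(v)] for v in range(n))

def main():
    data = sys.stdin.read().split()
    idx = 0
    t = int(data[idx]); idx += 1
    for _ in range(t):
        m, n = int(data[idx]), int(data[idx + 1]); idx += 2
        pairs = [(int(data[idx + 2 * i]), int(data[idx + 2 * i + 1])) for i in range(m)]
        idx += 2 * m
        print("successful hashing" if possible(n, pairs) else "rehash necessary")

if __name__ == "__main__":
    main()
```

On the samples above this prints “successful hashing” for the first case (3 words sharing a component with 3 slots) and “rehash necessary” for the second (5 words sharing a component with only 4 slots).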
https://lavelle.chem.ucla.edu/forum/search.php?author_id=11185&sr=posts | ## Search found 52 matches
Sat Mar 17, 2018 3:12 pm
Forum: General Rate Laws
Topic: 15.49
Replies: 2
Views: 303
### Re: 15.49
Yes, you can just write all the reactants into the rate law. If two molecules of the same reactant appear in the equation, then you will know the reaction is second order in that reactant.
Sat Mar 17, 2018 1:03 am
Forum: Galvanic/Voltaic Cells, Calculating Standard Cell Potentials, Cell Diagrams
Topic: Oxidation power
Replies: 1
Views: 98
### Re: Oxidation power
From what I got on my test, it seems like the more positive the reduction half reaction's standard potential, the greater the oxidizing power.
Mon Mar 12, 2018 12:29 am
Forum: Kinetics vs. Thermodynamics Controlling a Reaction
Topic: Equilibrium constant
Replies: 3
Views: 401
### Re: Equilibrium constant
For a multistep reaction, you just multiply the rate constants of the forward reactions and divide that by the product of the rate constants of the reverse reactions. So K = (k1 x k2)/(k'1 x k'2)
Sun Mar 11, 2018 11:01 pm
Forum: Arrhenius Equation, Activation Energies, Catalysts
Topic: k and Ea
Replies: 4
Views: 492
### Re: k and Ea
The larger the value of k is, the faster the rate. If Ea is large, more energy is required for the reaction to occur, which means the rate will be slower. If the rate is slower, k will be smaller.
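For reference, the relationship being described here is the Arrhenius equation (standard form; $A$ is the pre-exponential factor, $R$ the gas constant, $T$ the temperature):

```latex
k = A\, e^{-E_a/(RT)}
% A larger E_a makes the exponent more negative, so k, and with it the rate, shrinks.
```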
Sun Mar 11, 2018 1:42 am
Forum: Second Order Reactions
Topic: Half life of second order reactions
Replies: 8
Views: 789
### Re: Half life of second order reactions
I would still study the half life of second order reactions and understand how they work because it was on the homework and was on my test as well. I think it'd be a good idea to study it in case it's on the final.
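For reference, the standard half-life expressions behind this comparison, with $[\mathrm{A}]_0$ the initial concentration:

```latex
% First order: the half-life is independent of concentration,
t_{1/2} = \frac{\ln 2}{k}
% Second order: the half-life depends on the initial concentration,
t_{1/2} = \frac{1}{k\,[\mathrm{A}]_0}
```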
Sun Mar 11, 2018 12:25 am
Forum: Calculating Standard Reaction Entropies (e.g. , Using Standard Molar Entropies)
Topic: entropy of vaporization
Replies: 1
Views: 336
### Re: entropy of vaporization
I'm assuming you're talking about a problem asking for the entropy of vaporization at a temperature lower than the boiling point. Since you can't directly find the entropy of vaporization when the temperature is lower than the boiling point, you need to find the entropy of vaporization at the boiling point and then account for the entropy changes of heating or cooling between the two temperatures.
Mon Mar 05, 2018 1:51 am
Forum: Reaction Mechanisms, Reaction Profiles
Topic: slowest step determines rate of overall reaction
Replies: 3
Views: 254
### Re: slowest step determines rate of overall reaction
The slowest step determines the rate of the overall reaction because other intermediate steps need the products of the slowest step to continue on with the reaction. These other steps may be faster, but they can only go as fast as the slowest step, which is why the rate of the overall reaction is determined by the slowest step.
Sun Mar 04, 2018 10:59 pm
Forum: First Order Reactions
Topic: half-lives
Replies: 4
Views: 255
### Re: half-lives
I think half lives are just a convenient way of comparing the speeds of different reactions. You can just refer to the half lives of different reactions to see which occurs at a faster rate.
Mon Feb 26, 2018 4:04 pm
Forum: General Rate Laws
Topic: Unique Rate [ENDORSED]
Replies: 4
Views: 254
### Re: Unique Rate[ENDORSED]
Yes, the equation at the bottom of page 613 takes into account the coefficients and signs (positive or negative) of reactants and products in each reaction, so you should be getting the same unique rate whether you use reactant or product.
Mon Feb 26, 2018 12:57 am
Forum: General Rate Laws
Topic: rate law for forward and backward reaction
Replies: 2
Views: 119
### Re: rate law for forward and backward reaction
We are looking to find the rate of reaction immediately after the reaction starts. This means that none of the reactants have turned into product yet, which is why we can focus only on the forward reaction. Once there is product causing the reverse reaction, it is hard to find the rate of reaction. ...
Sat Feb 24, 2018 10:22 pm
Forum: Galvanic/Voltaic Cells, Calculating Standard Cell Potentials, Cell Diagrams
Topic: Galvanic cell set up
Replies: 8
Views: 437
### Re: Galvanic cell set up
Usually the cathode is on the right in a cell diagram, but you should find the standard potentials of each half reaction to make sure. The reaction with the more positive standard potential will be the cathode.
Sat Feb 24, 2018 8:52 pm
Forum: Galvanic/Voltaic Cells, Calculating Standard Cell Potentials, Cell Diagrams
Topic: When do you need to include Pt? [ENDORSED]
Replies: 5
Views: 308
### Re: When do you need to include Pt?[ENDORSED]
Pt is only used in cell diagrams when a solid is needed for the transfer of electrons from the anode to the cathode. If there is already a solid transferring the electrons, there is no need for Pt.
Mon Feb 19, 2018 11:46 pm
Forum: Balancing Redox Reactions
Topic: 14.1
Replies: 1
Views: 135
### Re: 14.1
There are 2 electrons on the product side of the oxidation half reaction. There are 6 electrons on the reactant side of the reduction half reaction. When you combine the two half reactions, you need to multiply everything in the oxidation half reaction by 3 to cancel out the electrons.
Mon Feb 19, 2018 11:00 pm
Forum: Galvanic/Voltaic Cells, Calculating Standard Cell Potentials, Cell Diagrams
Topic: Inert conductor
Replies: 2
Views: 164
### Re: Inert conductor
I think you need an inert conductor when the reactants and products of the half reaction are gases or dissolved ions (i.e., no solid metal is present), because you need a solid conductor to transfer the electrons and make the half reaction occur.
Mon Feb 19, 2018 9:46 pm
Forum: Balancing Redox Reactions
Topic: 14.5a
Replies: 4
Views: 268
### Re: 14.5a
The atoms being oxidized and reduced actually aren't both present in the same molecule in the products. The reduction half reaction is O3 --> O2 and the oxidation half reaction is Br- --> BrO3-
Sun Feb 11, 2018 9:37 pm
Forum: Gibbs Free Energy Concepts and Calculations
Topic: Why is deltaG of formation 0 for diatomic molecules?
Replies: 3
Views: 4501
### Re: Why is deltaG of formation 0 for diatomic molecules?
ΔS isn't 0 for diatomic molecules because there is always entropy. S is only 0 when T = 0K.
Sun Feb 11, 2018 8:56 pm
Forum: Gibbs Free Energy Concepts and Calculations
Topic: 11.19
Replies: 3
Views: 205
### Re: 11.19
I think it might be because the coefficients in the chemical equations only have one sigfig
Sun Feb 11, 2018 8:38 pm
Forum: Gibbs Free Energy Concepts and Calculations
Topic: 11.83
Replies: 4
Views: 304
### Re: 11.83
We are looking for K, which can be found with the equation ΔG = -RTlnK. So first we need to find ΔG. We can do that by using the equation ΔG = ΔH - TΔS. We are given the temperature, but we still need to find ΔH and ΔS in order to find ΔG.
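A minimal Python sketch of that two-step calculation; the numbers in the example call are placeholders, not the textbook's data (ΔH in J/mol, ΔS in J/(mol·K), T in K):

```python
from math import exp

R = 8.314  # gas constant, J/(mol K)

def equilibrium_constant(delta_H, delta_S, T):
    # Step 1: Gibbs free energy from dG = dH - T*dS.
    delta_G = delta_H - T * delta_S
    # Step 2: invert dG = -R*T*ln(K) to get K.
    return exp(-delta_G / (R * T))

# hypothetical example values, just to show the call:
print(equilibrium_constant(delta_H=-92000.0, delta_S=-199.0, T=298.15))
```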
Thu Feb 08, 2018 10:47 pm
Forum: Entropy Changes Due to Changes in Volume and Temperature
Topic: Calculating entropy of vaporization
Replies: 1
Views: 174
### Re: Calculating entropy of vaporization
I don't think cooling a vapor down to under its boiling point takes away enough kinetic energy from the molecules to cause them to condense. For example, at room temperature, there is still water vapor in the air. However, a cold bottle in the room will cause the water vapor to condense on the bottle.
Thu Feb 08, 2018 8:13 pm
Forum: Calculating Standard Reaction Entropies (e.g. , Using Standard Molar Entropies)
Topic: 9.33
Replies: 2
Views: 186
### Re: 9.33
I agree. Gas has a much greater entropy than liquid, so just the fact that there is no gas on the product side should be enough to show that entropy has decreased.
Wed Jan 31, 2018 8:51 pm
Forum: Concepts & Calculations Using Second Law of Thermodynamics
Topic: Celsius or Kelvin
Replies: 5
Views: 311
### Re: Celsius or Kelvin
For problems with ΔT, you can use either Celsius or Kelvin because the difference between the temperatures will be the same. However, for problems with just T, you should generally use Kelvin because that is the SI unit.
Sun Jan 28, 2018 10:40 pm
Forum: Concepts & Calculations Using Second Law of Thermodynamics
Topic: 9.1
Replies: 3
Views: 210
### 9.1
For homework problem 9.1, the solutions manual says to use the temperature in Kelvin instead of Celsius, but it was given in Celsius in the problem. How do I know when to use Kelvin and when to use Celsius?
Sun Jan 28, 2018 9:54 pm
Forum: Thermodynamic Definitions (isochoric/isometric, isothermal, isobaric)
Topic: Isothermal Reversible
Replies: 3
Views: 386
### Re: Isothermal Reversible
Are irreversible reactions isothermal as well? Does a change in temperature make a reaction not reversible?
Sun Jan 28, 2018 9:36 pm
Forum: Thermodynamic Systems (Open, Closed, Isolated)
Topic: Open, closed, or isolated [ENDORSED]
Replies: 3
Views: 359
### Re: Open, closed, or isolated[ENDORSED]
The universe is an isolated system because energy cannot be created or destroyed. Likewise, matter cannot be added or taken away from the universe.
Sun Jan 21, 2018 12:15 am
Forum: Reaction Enthalpies (e.g., Using Hess’s Law, Bond Enthalpies, Standard Enthalpies of Formation)
Topic: Hess's Law
Replies: 2
Views: 80
### Re: Hess's Law
Since the intermediate steps in Hess's Law are only used to find the enthalpy of the overall reaction, the intermediate steps do not necessarily have to be able to be carried out. Therefore, it is OK to use fractions because all you need are values to calculate the overall reaction's enthalpy.
Sun Jan 21, 2018 12:08 am
Forum: Calculating Work of Expansion
Topic: w=-P*deltaV
Replies: 3
Views: 183
### Re: w=-P*deltaV
At a constant pressure, if volume increases, it means that the system is expanding and energy is leaving the system as work. This leaves the system with a lower internal energy. The negative sign means that when a system expands, it is losing energy as work.
Mon Jan 15, 2018 9:58 pm
Forum: Heat Capacities, Calorimeters & Calorimetry Calculations
Topic: Homework 8.29
Replies: 5
Views: 282
### Re: Homework 8.29
I think it has more to do with how many atoms are in each molecule
Sun Jan 14, 2018 10:22 pm
Forum: Phase Changes & Related Calculations
Topic: Temperature during phase change
Replies: 2
Views: 140
### Re: Temperature during phase change
The water is not undergoing a phase change between the melting and boiling phases. It is in its liquid state, so adding heat will increase the temperature of the water.
Sun Jan 14, 2018 10:19 pm
Forum: Phase Changes & Related Calculations
Topic: Steam burning more than water question
Replies: 4
Views: 211
### Re: Steam burning more than water question
I think the amount of energy is what damages the skin. And since steam has so much more energy than water at 100 degrees Celsius, that damages the skin a lot more.
Sun Jan 14, 2018 10:14 pm
Forum: Reaction Enthalpies (e.g., Using Hess’s Law, Bond Enthalpies, Standard Enthalpies of Formation)
Topic: Standard State
Replies: 2
Views: 157
### Re: Standard State
A standard formation reaction is written so that one mole of the substance is produced. You can put an equation into that form by dividing both sides of the chemical equation by however much you need to make the product one mole.
Sun Dec 10, 2017 2:11 am
Forum: Administrative Questions and Class Announcements
Topic: Final
Replies: 3
Views: 408
### Re: Final
The locations are separated by lecture time and last name
Fri Dec 08, 2017 11:56 pm
Forum: Acidity & Basicity Constants and The Conjugate Seesaw
Topic: pKa and Ka
Replies: 3
Views: 386
### Re: pKa and Ka
Ka is the equilibrium constant for the reaction, so a small Ka means there are few products, indicating a weaker acid. I think of pKa and Ka as inversely related; the smaller the Ka, the larger the pKa. So weak acid would have a small Ka and a large pKa.
Sun Dec 03, 2017 2:38 am
Forum: Shape, Structure, Coordination Number, Ligands
Topic: Drawing Coordination Compounds
Replies: 5
Views: 677
### Re: Drawing Coordination Compounds
If there are different ligands around the metal, does it matter where I put each ligand?
Sun Dec 03, 2017 1:53 am
Forum: Hybridization
Topic: Lone Pairs in Hybridization
Replies: 4
Views: 393
### Re: Lone Pairs in Hybridization
Since hybridization relies only on areas of electron density, lone pairs count, but keep in mind that double and triple bonds only count as one area of electron density
Sun Nov 26, 2017 9:52 pm
Forum: Naming
Topic: test 4 [ENDORSED]
Replies: 4
Views: 299
### Re: test 4[ENDORSED]
Yes, I think that's part of the test topics
Sun Nov 26, 2017 9:05 pm
Forum: Determining Molecular Shape (VSEPR)
Topic: 4.43
Replies: 2
Views: 229
### Re: 4.43
I think s-character refers to how much of the hybrid orbital is from the s orbital. So for example, the sp hybrid orbital would be 50% s-character, while the sp^3 hybrid orbital would have 25% s-character.
Sun Nov 19, 2017 2:55 pm
Forum: Lewis Structures
Topic: Best way to start Lewis Structures
Replies: 12
Views: 724
### Re: Best way to start Lewis Structures
I personally prefer drawing 1 bond between the central atom and each of the other atoms and then figuring out how many more bonds or lone pairs I need in order to have the right number of electrons
Mon Nov 13, 2017 10:51 pm
Forum: Octet Exceptions
Topic: Expanded Octets
Replies: 3
Views: 271
### Re: Expanded Octets
On problem 4.11 part d, the Xe atom has 7 pairs of electrons. I was wondering how you figure out how many pairs of electrons should be assigned to an atom that can have an expanded octet.
Sun Nov 12, 2017 8:15 pm
Forum: Determining Molecular Shape (VSEPR)
Topic: Molecular shape vs Electron arrangement
Replies: 2
Views: 217
### Molecular shape vs Electron arrangement
I don't quite understand the distinction between molecular shape and electron arrangement. What is the difference and how are they related?
Sun Nov 12, 2017 8:08 pm
Forum: Lewis Structures
Topic: Shape
Replies: 5
Views: 315
### Re: Shape
I think the molecular structure has to do with the atoms that make up the molecule and how the bonds and lone pairs of electrons will interact. The shape would be influenced by the structure and be based on the attractions and repulsion caused by the electrons.
Sun Nov 05, 2017 4:25 pm
Forum: Trends in The Periodic Table
Topic: Comparing Cations and Parent Atom Atomic Radius
Replies: 2
Views: 382
### Re: Comparing Cations and Parent Atom Atomic Radius
Yes. If you refer to figure 2.20, on page 51 you can see that the atomic radius of Ba is 217 pm. In figure 2.22 on page 52, it shows that Cs+1 has an atomic radius of 167 pm.
Sun Nov 05, 2017 3:20 pm
Forum: Lewis Structures
Topic: Biradical vs Lone Pair
Replies: 3
Views: 241
### Re: Biradical vs Lone Pair
Biradicals have two unpaired electrons, meaning each electron is in a different orbital. An atom with a lone pair has two electrons in the same orbital.
Sun Oct 29, 2017 8:36 pm
Forum: Heisenberg Indeterminacy (Uncertainty) Equation
Topic: Two multiple choice questions I was stuck on ...
Replies: 2
Views: 589
### Re: Two multiple choice questions I was stuck on ...
The correct answer for 16 is C. Since Heisenberg's Indeterminacy Equation is (uncertainty of momentum) × (uncertainty of position) ≥ h/(4*pi), the uncertainty of momentum and the uncertainty of position are inversely related. h/(4*pi) is a constant value, so if the uncertainty of momentum is high, the uncertainty of position must be low.
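To make the tradeoff concrete, a small sketch with an assumed momentum uncertainty (the numbers are illustrative):

```python
import math

h = 6.626e-34                 # J*s, Planck's constant
dp = 1.0e-24                  # kg*m/s, assumed momentum uncertainty
dx_min = h / (4 * math.pi * dp)   # minimum position uncertainty
print(f"{dx_min:.2e} m")          # ~5.3e-11 m; halve dp and dx_min doubles
```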
Sun Oct 29, 2017 8:31 pm
Forum: Quantum Numbers and The H-Atom
Topic: Ionization Energy
Replies: 3
Views: 292
### Re: Ionization Energy
Ionization energy refers to how hard it is to remove an electron from an atom. Atoms are more stable with a full shell, and metals in groups 1 and 2 have 1 or 2 extra electrons outside of their full shell. Since atoms are more stable with just the full shell, it's easier to remove the 1 or 2 extra electrons.
Sun Oct 22, 2017 8:01 pm
Forum: Wave Functions and s-, p-, d-, f- Orbitals
Topic: What section of the book to be on [ENDORSED]
Replies: 2
Views: 257
### Re: What section of the book to be on[ENDORSED]
I think the material we've covered is up to 2.7, but it's always good to read ahead!
Mon Oct 16, 2017 4:24 pm
Forum: Bohr Frequency Condition, H-Atom , Atomic Spectroscopy
Topic: Balmer/Lyman Series [ENDORSED]
Replies: 7
Views: 540
### Re: Balmer/Lyman Series[ENDORSED]
My TA said that the Lyman series comes to rest at n=1 and the Balmer series comes to rest at n=2, so I'm not quite sure what they're actually supposed to be.
Sun Oct 15, 2017 3:30 pm
Forum: Properties of Light
Topic: How to use the Rydberg Formula? [ENDORSED]
Replies: 6
Views: 557
### Re: How to use the Rydberg Formula?[ENDORSED]
I don't see the Rydberg formula on the constants and equations sheet. Does that mean we will have to memorize it? I also don't quite understand why there is a negative sign in the formula E_n = -hR/n^2.
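For what it's worth, a small sketch of how the formula behaves (constants rounded): the negative sign makes bound-state energies negative, with E approaching 0 as n grows, so the ionized state is the zero of energy.

```python
h = 6.626e-34    # J*s
R = 3.29e15      # Hz, Rydberg frequency

def E(n):
    return -h * R / n**2   # bound states lie below E = 0

dE = E(2) - E(1)           # energy absorbed for the n=1 -> n=2 transition
print(f"{dE:.3e} J")       # ~1.63e-18 J
```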
Fri Oct 13, 2017 10:36 pm
Forum: Properties of Light
Topic: Balmer Vs. Lyman
Replies: 12
Views: 1531
### Re: Balmer Vs. Lyman
What relationship do these series have with the elements? Do all elements have these series? Are the different series only referring to the electrons?
Sun Oct 08, 2017 11:03 am
Forum: Molarity, Solutions, Dilutions
Topic: E 15 [ENDORSED]
Replies: 1
Views: 250
### Re: E 15[ENDORSED]
I think you first need to subtract the mass of the (OH)2 from the original mass to find what element "M" is. You'll find that the molar mass of "M" matches the molar mass of Calcium. The sulfide of this metal is calcium sulfide (CaS), and you can just calculate the molar ...
Thu Oct 05, 2017 9:10 pm
Forum: Balancing Chemical Reactions
Topic: Order of Elements to Balance [ENDORSED]
Replies: 5
Views: 783
### Order of Elements to Balance[ENDORSED]
When I'm balancing equations, in what order should I balance the elements? | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8271335363388062, "perplexity": 2647.329165794716}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107874340.10/warc/CC-MAIN-20201020221156-20201021011156-00346.warc.gz"} |
https://arxiv.org/abs/1811.03933v1 | cs.IT
# Title: Vector Gaussian CEO Problem Under Logarithmic Loss and Applications
Abstract: We study the vector Gaussian CEO problem under logarithmic loss distortion measure. Specifically, $K \geq 2$ agents observe independent noisy versions of a remote vector Gaussian source, and communicate independently with a decoder over rate-constrained noise-free links. The CEO also has its own Gaussian noisy observation of the source and wants to reconstruct the remote source to within some prescribed distortion level where the incurred distortion is measured under the logarithmic loss penalty criterion. We find an explicit characterization of the rate-distortion region of this model. For the proof of this result, we first extend Courtade-Weissman's result on the rate-distortion region of the DM $K$-encoder CEO problem to the case in which the CEO has access to a correlated side information stream which is such that the agents' observations are independent conditionally given the side information and remote source. Next, we obtain an outer bound on the region of the vector Gaussian CEO problem by evaluating the outer bound of the DM model by means of a technique that relies on the de Bruijn identity and the properties of Fisher information. The approach is similar to Ekrem-Ulukus outer bounding technique for the vector Gaussian CEO problem under quadratic distortion measure, for which it was there found generally non-tight; but it is shown here to yield a complete characterization of the region for the case of logarithmic loss measure. Also, we show that Gaussian test channels with time-sharing exhaust the Berger-Tung inner bound, which is optimal. Furthermore, application of our results allows us to find the complete solutions of three related problems: the vector Gaussian distributed hypothesis testing against conditional independence problem, a quadratic vector Gaussian CEO problem with determinant constraint, and the vector Gaussian distributed Information Bottleneck problem.
Comments: submitted to IEEE Transactions on Information Theory, 58 pages, 6 figures
Subjects: Information Theory (cs.IT)
Cite as: arXiv:1811.03933 [cs.IT] (or arXiv:1811.03933v1 [cs.IT] for this version)
## Submission history
From: Yigit Ugur
[v1] Fri, 9 Nov 2018 14:46:37 UTC (666 KB) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8473787903785706, "perplexity": 665.1989329582934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999814.77/warc/CC-MAIN-20190625072148-20190625094148-00300.warc.gz"} |
https://www.physicsforums.com/threads/what-type-of-chemical-reaction-does-not-have-oxdiztion-and-reduction.729206/ | # What type of chemical reaction does not have oxdiztion and reduction
1. ### taregg
What types of chemical reactions do not involve oxidation and reduction?
Last edited: Dec 19, 2013
Which is not a redox reaction?
As far as I remember,
-Neutralization Reactions (Acid + base --> water)
-Precipitation
-Decomposition | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8928924202919006, "perplexity": 12305.31154411566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737929054.69/warc/CC-MAIN-20151001221849-00031-ip-10-137-6-227.ec2.internal.warc.gz"} |
https://appliedmechanics.asmedigitalcollection.asme.org/SBC/proceedings-abstract/SBC2007/47985/241/331219 | Cerebrovascular disorder such as subarachnoid hemorrhage (SAH) is 3rd position of the cause of death in Japan [1]. Its initiation and growth are reported to depend on hemodynamic factors, particularly on wall shear stress or blood pressure induced by blood flow. In order to investigate the information on the hemodynamic quantities in the cerebral vascular system, the authors have been developing a computational tool using patient-specific modeling and numerical simulation [2]. In order to achieve an in vivo simulation of living organisms, it is important to apply appropriate physiological conditions such as physical properties, models, and boundary conditions. Generally, the numerical simulation using a patient-specific model is conducted for a localized region near the research target. Although the analysis region is only a part of the circulatory system, the simulation has to include the effects from the entire circulatory system. Many studies have carried out to derive the boundary conditions to model in vivo environment [3–5]. However, it is not easy to obtain the biological data of cerebral arteries due to head capsule.
| {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8398438692092896, "perplexity": 898.2673727270853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00478.warc.gz"}
https://www.math10.com/tests/simplifying-rational-expressions.html | # Simplifying Rational Expressions
Complete the test and get an award.
Question 1
Simplify the expression: $\frac{3x^6}{x^2}$
Question 2
Which expression is equivalent to: $\frac{6x+2}{4x+2}$
Question 3
Which identity is NOT correct?
Question 4
For which $x$ is the expression $\frac{(x+2)(x-1)}{(x-2)(x+1)}$ undefined?
Question 5
Simplify the expression: $\frac{(x+3)(x-2)}{(2-x)(3-x)},x\neq2, x\neq3$
Question 6
Simplify the expression: $\frac{x^2-6x}{x-6},x\neq 6$
Question 7
Calculate the value of the expression: $\frac{2x+5}{x+5}$ for $x=3$
Question 8
Which expression is equivalent to: $\frac{x^2+x-2}{x-1},x\neq1$
Question 9
Which identity is correct?
Question 10
Simplify the expression: $\frac{2}{4x-8},x\neq2$
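For self-checking answers, a short sympy sketch (assuming sympy is available; it works through Question 1 and Question 8 above):

```python
from sympy import symbols, simplify, cancel

x = symbols('x')

# Question 8: (x^2 + x - 2)/(x - 1), away from x = 1
print(cancel((x**2 + x - 2) / (x - 1)))   # x + 2

# Question 1: 3x^6 / x^2
print(simplify(3 * x**6 / x**2))          # 3*x**4
```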
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8388708233833313, "perplexity": 8001.795419606567}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814857.77/warc/CC-MAIN-20180223213947-20180223233947-00685.warc.gz"}
https://congdantoancau.info/i9zwn6k1/article.php?9b2619=linear-regression-using-ols-python | linear regression using ols pythonPrepared Meals Delivery Chicago, Sabja Seeds Recipes, Major Networking Standards Organizations, 5kg Washing Machine, Will My Wool Blanket Stop Shedding, Rabies Symptoms In Cows, Tv Interview Clipart, Craftsman Cmcst910m1 V20 Max String Trimmer, Boon High Chair Seat Pad, " />Prepared Meals Delivery Chicago, Sabja Seeds Recipes, Major Networking Standards Organizations, 5kg Washing Machine, Will My Wool Blanket Stop Shedding, Rabies Symptoms In Cows, Tv Interview Clipart, Craftsman Cmcst910m1 V20 Max String Trimmer, Boon High Chair Seat Pad, " />
linear regression using ols python
02/12/2020
Linear regression is a standard tool for analyzing the relationship between two or more variables. This technique finds a line that best "fits" the data and takes on the following form: ŷ = b0 + b1·x, where ŷ is the estimated response value, b0 is the intercept of the regression line, and b1 is its slope. Whenever there is a change in X, such a change must translate to a change in Y. As the name implies, an OLS model is solved by finding the parameters that minimize the sum of squared residuals. The method makes very strong assumptions about the relationship between the predictor variables (the X) and the response (the Y), but it is computationally cheap to calculate the coefficients. This tutorial explains how to perform linear regression in Python; we will start with simple linear regression involving two variables and then move towards linear regression involving multiple variables.

As a motivating example, consider US macroeconomic data: clearly there is a relationship or correlation between GNP and total employment. But does a change in GNP cause a change in total employment, or does a change in total employment cause a change in GNP? To start with, we load the Longley dataset of US macroeconomic data from the Rdatasets website. We can then construct our model in statsmodels using the OLS function, which takes as input two array-like objects: X and y. We add a constant term so that we fit the intercept of our linear model, and we use .fit() to obtain the parameter estimates. We now have the fitted regression model stored in results, and its summary provides quite a lot of information about the fit: which variable is the response in the model, how the parameters of the model were calculated, the degrees of freedom of the residuals, the t-statistic value for each coefficient and that statistic turned into a probability, a test of the skewness and kurtosis, a test for the presence of autocorrelation (that the errors are not independent), the basic standard error of each coefficient estimate, and the lower and upper values of the 95% confidence interval. Statsmodels also provides a formulaic interface that will be familiar to users of R: it takes a formula such as y ~ X, where X is the predictor variable (TV advertising costs) and y is the output variable (Sales). Note that this requires the use of a different api to statsmodels, and the class is now called ols rather than OLS. The sketch below illustrates the basic workflow.
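The following is a minimal, self-contained sketch of the statsmodels workflow just described, run on synthetic data rather than the Longley or maketable datasets (variable names are illustrative):

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data: y = 2.0 + 0.5*x + noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 2.0 + 0.5 * x + rng.normal(0, 1, 100)

X = sm.add_constant(x)           # adds the intercept column
results = sm.OLS(y, X).fit()     # parameters minimize the sum of squared residuals
print(results.params)            # estimates close to [2.0, 0.5]
print(results.summary())         # t-statistics, p-values, confidence intervals, ...
```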
A second worked example concerns the significance of institutions in economic development. Institutional differences are proxied by an index of protection against expropriation on average over 1985-95. We use pandas' .read_stata() function to read in data contained in the .dta files (maketable2.dta and maketable4.dta; only complete data, indicated by baseco = 1, is used) to dataframes, and a scatterplot to see whether any obvious relationship exists. Using our parameter estimates, we can write out our estimated relationship, where β0 is the intercept of the linear trend line and the predicted values lie along the linear line that we fitted. We can extend our bivariate regression model to a multivariate regression model by adding in other factors that may affect logpgp95_i, and use summary_col to display the results in a single table. There is, however, likely a two-way relationship between institutions and economic outcomes; this creates endogeneity issues, resulting in biased and inconsistent model estimates, in particular bias due to the likely effect income has on institutional development. The proposed instrument is settler mortality: the majority of settler deaths were due to malaria and yellow fever and had a limited effect on local people. The second condition may not be satisfied if settler mortality rates in the 17th to 19th centuries have a direct effect on current GDP (in addition to their indirect effect through institutions). To test for correlation between the endogenous variable and the errors, we need to retrieve the predicted values of avexpr_i; if α is statistically significant (with a p-value < 0.05), then we reject the null hypothesis and conclude that avexpr_i is endogenous. The output shows that the coefficient on the residuals is statistically significant, indicating avexpr_i is endogenous, although endogeneity is often best identified by thinking about the data. As we appear to have a valid instrument, we can use 2SLS regression, and we can correctly estimate a 2SLS regression in one step using the linearmodels package, an extension of statsmodels (note that when using IV2SLS, the exogenous and instrument variables are split up in the function arguments, whereas before the instrument was included with the regressors). The result suggests a stronger positive relationship than what the OLS results estimated. Scikit Learn is an awesome tool when it comes to machine learning in Python; to use linear regression there, we import it with from sklearn.linear_model import LinearRegression and fit it on a dataset such as the Boston housing data. For visualization, seaborn's lmplot is another basic plot: it shows a line representing a linear regression model along with the data points on the 2D-space, and x and y can be set as the horizontal and vertical labels respectively. In this article we covered linear regression using Python in detail, including its meaning along with assumptions related to the linear regression technique. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5833189487457275, "perplexity": 1161.9446024833896}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178366959.54/warc/CC-MAIN-20210303104028-20210303134028-00571.warc.gz"}
https://indico.cern.ch/event/578804/contributions/2623909/ | 25-29 September 2017
Salamanca, Spain
Europe/Zurich timezone
## $f_0(980)$ production in $D^+_s \to \pi^+ \pi^+ \pi^-$ and $D^+_s \to \pi^+ K^+ K^-$ decays
28 Sep 2017, 15:15
20m
Aula 2.3
### Speaker
Dr Jorgivan Morais Dias (Instituto de Física Corpuscular - Universidad de Valencia, Spain.)
### Description
We study the $D^+_s \to \pi^+ \pi^+ \pi^-$ and $D^+_s \to \pi^+ K^+ K^-$ decays adopting a mechanism in which the $D^+_s$ decays weakly into a $\pi^+$ and a $q\bar{q}$ component, which hadronizes into two pseudoscalar mesons. The final state interaction between these two pseudoscalar mesons is taken into account by using a chiral unitary approach in coupled channels, which gives rise to the $f_0(980)$ resonance. Hence, we obtain the invariant mass distributions of the pairs $\pi^+\pi^-$ and $K^+K^-$ after the decay of that resonance and compare our theoretical amplitudes with those available from the experimental data. Our results are in fair agreement with the shape of these data, within large uncertainties, and an $f_0(980)$ signal is seen in both the $\pi^+\pi^-$ and $K^+K^-$ distributions. Predictions for the relative size of the $\pi^+\pi^-$ and $K^+K^-$ distributions are made.
### Primary author
Dr Jorgivan Morais Dias (Instituto de Física Corpuscular - Universidad de Valencia, Spain.)
### Co-authors
Prof. Fernando Navarra (Instituto de Física, Universidade de São Paulo, Brazil.) Prof. Marina Nielsen (Instituto de Física, Universidade de São Paulo) Prof. Eulogio Oset (Departamento de Física Teórica and IFIC, Centro Mixto Universidad de Valencia-CSIC) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6960582733154297, "perplexity": 2782.710230674209}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401641638.83/warc/CC-MAIN-20200929091913-20200929121913-00368.warc.gz"} |
https://quant.stackexchange.com/questions/42360/proof-black-scholes-theta/42367 | # Proof Black Scholes Theta
I saw the following proof of theta in a paper I read, and I thought it looked pretty neat. Unfortunately I don't understand the step that they do. This is what they do:
Now, I don't get how they go from $$S_0 n(d_1)\frac{\partial d_1}{\partial t} - Xe^{-rt}n(d_2) \frac{\partial d_2}{\partial t}$$ to $$S_0 n(d_1) \frac{\partial (d_1-d_2)}{\partial t}$$. Could anyone explain to me why this is true?
• Consider the relationship of $d_1$ and $d_2$ as well as the relationship of $n(d_1)$ and $n(d_2)$. – Gordon Oct 24 '18 at 17:36
• so $d_2 = d_1 - \sigma \sqrt{T}$, and consequently I thought $n(d_2) = n(d_1 - \sigma \sqrt{T})$. How can I use this to go further? – Keith Oct 24 '18 at 17:50
• Would it help to write out the normal density function? – Keith Oct 24 '18 at 18:00
• Yes, to write out the normal density function. – Gordon Oct 24 '18 at 18:02
• See also this question. – Gordon Oct 24 '18 at 18:10
There is a well known identity for the Black Scholes model: $$S_0 n(d_1)-X e^{-rT} n(d_2) = 0$$ (proof).
Using this allows you to combine these two terms:
$$S_0 n(d_1)\frac{\partial d_1}{\partial t} - Xe^{-rT}n(d_2) \frac{\partial d_2}{\partial t}$$
into
$$S_0 n(d_1) \left(\frac{\partial d_1}{\partial t}-\frac{\partial d_2}{\partial t}\right)$$
or
$$S_0 n(d_1) \frac{\partial (d_1-d_2)}{\partial t}$$
Then we use the fact that $$d_1-d_2=\sigma\sqrt{t}$$
Since theta here is computed from the Black–Scholes option pricing formula, under which both the identity and $d_1-d_2=\sigma\sqrt{t}$ hold, the step above is valid.
For more info, refer page 3 and 4 of this pdf. http://moya.bus.miami.edu/~tsu/jef2008.pdf
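As a quick numerical sanity check of the identity (with made-up parameters, not values from the paper):

```python
import math

def n(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

S0, X, r, sigma, T = 100.0, 95.0, 0.05, 0.2, 1.0
d1 = (math.log(S0 / X) + (r + sigma**2 / 2) * T) / (sigma * math.sqrt(T))
d2 = d1 - sigma * math.sqrt(T)

# S0*n(d1) - X*exp(-r*T)*n(d2) should be ~0 up to floating-point error
print(S0 * n(d1) - X * math.exp(-r * T) * n(d2))
```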
• This paper just repeats the equations above with no further explanation. – Alex C Nov 23 '18 at 23:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 7, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.769601583480835, "perplexity": 284.452573861394}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525187.9/warc/CC-MAIN-20190717121559-20190717143559-00214.warc.gz"} |
http://chsommer.com/spq-survey.htm | # Shortest-Path Queries in Static Networks
Christian Sommer
ACM Computing Surveys, Volume 46, Issue 4 (pp. 45:1-31), 2014
We consider the point-to-point (approximate) shortest-path query problem, which is the following generalization of the classical single-source (SSSP) and all-pairs shortest paths (APSP) problems: We are first presented with a network (graph). A so-called preprocessing algorithm may compute certain information (a data structure or index) to prepare for the next phase. After this preprocessing step, applications may ask shortest-path or distance queries, which should be answered as fast as possible.
Due to its many applications in areas such as transportation, networking, and social science, this problem has been considered by researchers from various communities (sometimes under different names): algorithm engineers construct fast route planning methods, database and information systems researchers investigate materialization tradeoffs, query processing on spatial networks, and reachability queries, and theoretical computer scientists analyze distance oracles and sparse spanners. Related questions are posed for compact routing and distance labeling schemes in networking and distributed computing and for metric embeddings in geometry as well.
In this survey, we review selected approaches, algorithms, and results on shortest-path queries from all these fields, with the main focus lying on the tradeoff between the index size and the query time. We survey methods for general graphs as well as specialized methods for restricted graph classes, in particular for those classes with arguably high practical significance such as planar graphs and complex networks.
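As a toy illustration of the preprocessing/query model described above (not an algorithm from the survey), one extreme point of the size/time tradeoff materializes all pairwise distances so that queries become constant-time lookups:

```python
from collections import deque

def preprocess(adj):
    """All-pairs BFS distances for an unweighted graph: large index, O(1) queries."""
    dist = {}
    for s in adj:
        d = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    q.append(v)
        dist[s] = d
    return dist

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a path graph
index = preprocess(adj)       # preprocessing phase
print(index[0][3])            # query phase: constant-time lookup -> 3
```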
@article{Som14,
title = {Shortest-Path Queries in Static Networks},
author = {Christian Sommer},
journal = {ACM Computing Surveys},
volume = {46},
issue = {4},
pages = {45:1--31},
year = {2014},
url = {http://dx.doi.org/10.1145/2530531},
doi = {10.1145/2530531},
}
Official version
Local version (506.2 KB)
This survey is largely based on Chapter 3 of my Ph.D. thesis.
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43611228466033936, "perplexity": 2761.034032865266}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107892062.70/warc/CC-MAIN-20201026204531-20201026234531-00067.warc.gz"}
https://tex.stackexchange.com/questions/275405/hfill-in-math-mode-redux | # hfill in math mode redux
This is a followup to an old thread
hfill in math mode
I'm starting a new one because it's hard to write all this in a comment. In the original thread, @egreg proposed the following commands
\makeatletter
\newcommand{\pushright}[1]{\ifmeasuring@#1\else\omit\hfill$\displaystyle#1$\fi\ignorespaces}
\newcommand{\pushleft}[1]{\ifmeasuring@#1\else\omit$\displaystyle#1$\hfill\fi\ignorespaces}
\makeatother
as a way to obtain the effect of \hfill within an align environment. The first pushes text to the right, the second to the left, of the line.
I've been trying to modify these commands, and am getting inconsistent results. To avoid &\pushright{\text{foo}} I would like to put the text and the ampersand in the macro, then just say, for example, \rightPush{foo}. I've succeeded in constructing \rightPush, but for some baffling reason, the analog \leftPush doesn't work. Here are my two macros.
\def\rightPush#1{& \ifmeasuring@#1\else\omit\hfill$\displaystyle\text{#1}$\fi\ignorespaces}
\def\leftPush#1{\ifmeasuring@#1\else\omit$\displaystyle\text{#1}$\hfill & \fi\ignorespaces}
I'd be happy to leave the & out of the macro, but that doesn't help: the \text in the \leftPush is what generates the errors (though not in the \rightPush). As a follow-up question, if somebody could explain what \ifmeasuring@ and \omit are doing in this macro, I'd really appreciate it.
Thanks very much for any advice
• Welcome to TeX.SX! Please help us help you and add a minimal working example (MWE) that illustrates your problem. Reproducing the problem and finding out what the issue is will be much easier when we see compilable code, starting with \documentclass{...} and ending with \end{document}. Otherwise it is difficult to figure out how precisely you are using \leftPush and \rightPush. Oct 28 '15 at 15:34
No test file is provided but at the very least
\def\leftPush#1{\ifmeasuring@#1\else\omit$\displaystyle\text{#1}$\hfill & \fi\ignorespaces}
should be
\def\leftPush#1{\ifmeasuring@#1\else\omit$\displaystyle\text{#1}$\hfill \fi&\ignorespaces}
so that as for \rightPush you get a & whether \ifmeasuring@ is true or false.
AMS alignments are always executed twice, the first "measuring" pass does not typeset anything but just measures the size of the cell entries which determines some parameters that are set before the second typesetting pass. \ifmeasuring@ is true just on the first execution. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7907575368881226, "perplexity": 1497.619013928486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320306181.43/warc/CC-MAIN-20220129122405-20220129152405-00053.warc.gz"} |
http://mathhelpforum.com/calculus/30812-local-linearization.html | 1. ## Local Linearization!
What is the local linearization of e^(x^2) near x = 2?
I need help with this badly, I just don't know how to do it.
thanks
mathlete
2. Local linearization means the tangent line of $f(x)=e^{(x^2)}$ at $x=2$.
The line is $y=m\cdot x + C$, where m is the slope and C is a constant.
$m=f'(x)$ at x=2. Find C by plugging the point (2,f(2)). | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9004136919975281, "perplexity": 601.6500026260906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982938776.59/warc/CC-MAIN-20160823200858-00077-ip-10-153-172-175.ec2.internal.warc.gz"} |
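A small numerical sketch of that recipe (illustrative only):

```python
import math

# Local linearization of f(x) = e^(x^2) at x = 2.
f = lambda x: math.exp(x**2)
fprime = lambda x: 2 * x * math.exp(x**2)   # chain rule

a = 2.0
m = fprime(a)                   # slope of the tangent line
C = f(a) - m * a                # so the line passes through (a, f(a))
L = lambda x: m * x + C

print(f(2.1), L(2.1))           # ~82.27 vs ~76.44: close near x = 2
```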
https://listserv.uni-heidelberg.de/cgi-bin/wa?A2=LATEX-L;ed311b10.9808&FT=&P=T&H=&S= | ## LATEX-L@LISTSERV.UNI-HEIDELBERG.DE
Subject: Re: [latex/3844] uc/lccode controls in inputenc?
From: Frank Mittelbach <[log in to unmask]>
Reply To: Mailing list for the LaTeX3 project <[log in to unmask]>
Date: Fri, 24 Feb 2006 11:21:03 +0100
Content-Type: text/plain
Parts/Attachments: text/plain (126 lines)
I suggested to Philipp that we discuss this here as I have the feeling that
there are a number of problems associated with his suggested approach and I
hope to hear a few more opinions.
> ---------- %< ----------
> \documentclass{minimal}
> \usepackage[utf8]{inputenc}
> \begin{document}
>
> ^^c3^^a4
>
> \lowercase{^^c3^^a4} % fails
>
> \MakeLowercase{^^c3^^a4}
>
> \end{document}
> ---------- %< ----------
using \lowercase or \uppercase in LaTeX is a general problem, which is why
those two commands are explicitly not supported in general contexts but only in
very well-defined coding where the input to those primitives is known.
LaTeX goes a long way to internally only use LICR sequences which then do not
have any such problem (and which is why \MakeLowercase first turns the input
to LICR before applying the TeX primitive).
so one question to ask is: are the scenarios mentioned in:
> If a UTF-8 character is prefixed with \noexpand (or \string or
> \protect), however, the raw UTF-8 sequence still gets through to the
> primitive.
represent valid LaTeX input/coding, or whether whatever one is trying to achieve
has to be handled through interfaces designed to work correctly.
To answer this it would be good to explicitly show what kind of reasons there
would be to \string, \noexpand or \protect some UTF char that then result in
this behavior
however, that is not to say that LaTeX should not protect against erroneous
input if that can be done in a safe way.
so lets have a look at the suggestions:
> My suggestion was: why not set the uppercase and lowercase codes of
> all bytes used in UTF-8 to zero? The concept of uc/lccodes doesn't
> apply to UTF-8 anyway (at least not with an 8-bit engine...), why
> take the risk of having it backfire?
because ...
lc codes are unfortunately not only used for lowercasing text, they are also
used for hyphenation. But they are used for hyphenation of the LICRs that
result from changing the UTF-8 to the final glyph in the font encoding. Thus if
we were to turn all lc codes for the upper half to zero, goodbye to hyphenation of
most languages when typeset in T1 font encoding.
furthermore
> There is one thing I didn't mention in the report. Since inputenc may
> switch the input encoding mid-stream, the codes would also need to be
> restored before a new encoding is initialized. So the issue at stake
> is really: should there by a central uc/lccode management in
> inputenc?
again, the lc/uc is not really only a property of the inputenc, it is foremost a
property of the output encoding due to the unfortunate overloading with
hyphenation. And it goes one step further: the values for that are --- at least
with std TeX --- only looked at at the very end of the paragraph, but inputenc
can be changed in mid-paragraph.
inputenc currently solves this problem by considering the input encoding as
something that is removed as the very first step by turning chars into LICRs;
from then on, all you deal with are a) 7-bit characters, which are transparent to
writing out and reading in, and b) uc/lc on the LICR level, which is then only
dependent on the output encoding.
> This would also make fixes for 8-bit encodings possible which
> currently can't be handled by primitive case-changing operations. In
> 8-bit encodings such as latin1, latin9, winansi, etc., there are a
> few exeptions to the general rule that the encoding positions of
> uppercase and lowercase letters differ by 32. Primitive case-changing
> operations will produce surprising results in such cases.
they don't, as the case changing is not primitive. They only produce surprising
results if the translation from input encoding to LICR is broken, e.g. because
people used \uppercase rather than \MakeUppercase, or in case they use a font
encoding which doesn't obey the LaTeX requirement of using the only allowed
uc/lc table (which is the one compatible with T1).
> Here's an example (you may need to recode this for the characters in
> the first two columns to come out right):
>
> ---------- %< ----------
> \documentclass{article}
> \usepackage[latin9]{inputenc}
> \usepackage[T1]{fontenc}
> \begin{document}
> \centering\Large
>
> Default settings:
>
> \begin{tabular}{c@{$\neq$}c@{\hspace{2em}}c@{$\neq$}c}
> Œ & \uppercase{œ} & ^^bc & \uppercase{^^bd}\\
> œ & \lowercase{Œ} & ^^bd & \lowercase{^^bc}\\
> Ÿ & \uppercase
>
precisely: it uses the unsupported \lowercase and \uppercase and would not show any defect if using
\MakeLowercase and \MakeUppercase.
so my feeling here is
a) that's not the way to improve the situation
b) the problem really only exists because of using those two primitives which
are explicitly forbidden in LaTeX
c) that the model used by inputenc to manage this is actually fine
d) what could be improved is to set the chars involved in UTF-8 to catcode 12
while that encoding is active; however, whether that is really worth the
effort is doubtful as, so far, I only see this guarding against incorrectly
coded input or packages | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9364858865737915, "perplexity": 8617.844995297466}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703524270.28/warc/CC-MAIN-20210121070324-20210121100324-00152.warc.gz"} |
http://link.springer.com/article/10.1007%2Fs11664-007-0350-y | , Volume 37, Issue 5, pp 736-742,
Open Access
Date: 30 Nov 2007
# Growth of Polarity-Controlled ZnO Films on (0001) Al2O3
## Abstract
The polarity control of ZnO films grown on (0001) Al2O3 substrates by plasma-assisted molecular-beam epitaxy (P-MBE) was achieved by using a novel CrN buffer layer. Zn-polar ZnO films were obtained by using a Zn-terminated CrN buffer layer, while O-polar ZnO films were achieved by using a Cr2O3 layer formed by O-plasma exposure of a CrN layer. The mechanism of polarity control was proposed. Optical and structural quality of ZnO films was characterized by high-resolution X-ray diffraction and photoluminescence (PL) spectroscopy. Low-temperature PL spectra of Zn-polar and O-polar samples show dominant bound exciton (I8) and strong free exciton emissions. Finally, one-dimensional periodic structures consisting of Zn-polar and O-polar ZnO films were simultaneously grown on the same substrate. The periodic inversion of polarity was confirmed in terms of growth rate, surface morphology, and piezo response microscopy (PRM) measurement. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8585978150367737, "perplexity": 20876.687462374382}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637898119.0/warc/CC-MAIN-20141030025818-00228-ip-10-16-133-185.ec2.internal.warc.gz"} |
http://mathoverflow.net/users/801/nick-gill?tab=summary | # Nick Gill
Website: boolesrings.org/nickgill · Location: San Jose, Costa Rica · Age: 38 · Member for 5 years, 6 months · Last seen May 1 at 10:17 · Profile views: 1,980
I'm a visiting professor at the Universidad de Costa Rica.
27 · Number of isomorphism types of finite groups
21 · 2x2 subdeterminants of a matrix
20 · Does Zhang's theorem generalize to $3$ or more primes in an interval of fixed length?
19 · Groups that do not exist
18 · Where are the second- (and third-)generation proofs of the classification of finite simple groups up to?
# 6,339 Reputation
+10 · Is (n,m)=(18,7) the only positive solution to n^2 + n + 1 = m^3?
+10 · General bound for the number of subgroups of a finite group
+10 · Which finite simple groups can be characterized by their action on a small set?
+10 · Does the Alternating group of degree $n>7$ have exactly one irreducible character of degree $n-1$?
# 13 Questions
14 · What's the big deal about $M_{13}$?
13 · Does O'Nan-Scott depend on CFSG?
11 · Cosets and conjugacy classes
8 · When does a `distinguished matching' exist?
7 · A strong sum-product "for translates" in finite fields
# 72 Tags
400 · gr.group-theory × 78
227 · finite-groups × 40
79 · reference-request × 12
67 · co.combinatorics × 14
50 · rt.representation-theory × 9
32 · graph-theory × 10
31 · nt.number-theory × 4
25 · linear-algebra × 3
24 · matrices × 2
21 · algebraic-graph-theory × 3
# 2 Accounts
MathOverflow: 6,339 rep · Mathematics: 173 rep
Nice Answer × 18 · Quorum · Good Answer · Nice Question × 3 · Yearling × 5 · Curious · Explainer · Custodian × 2 · Civic Duty · finite-groups
# 0 Active bounties
This user has no active bounties
# 389 Votes Cast
All time: 384 up, 5 down · By type: 124 question, 265 answer · This month: 1
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5772801041603088, "perplexity": 3810.9329896831323}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430454626420.21/warc/CC-MAIN-20150501043026-00029-ip-10-235-10-82.ec2.internal.warc.gz"}
https://www.bigs-math.uni-bonn.de/de/events/eventpages/lecture-by-dr-christian-borgs-of-microsoft-research-for-the-computer-science-department/ | Lecture by Dr. Christian Borgs of Microsoft Research for the Computer Science Department (Prof. M. Karpinski)
Friday, 14:00 (2 PM), Location: Hausdorff Research Institute, Poppelsdorfer Allee 45, main lecture hall
Title: " Modeling Social Networks: From Network Formation to Community Detection"
Abstract:
In the first part of this talk, I will consider a new game theoretic model of network formation. The model is motivated by the observation that the social cost of relationships is often the effort needed to maintain them. In contrast to many popular models of network formation whose equilibria tend to be complete graphs or trees, our model has equilibria which support many observed network structures, including triadic closure and heavy-tailed degree distributions. This part of the talk represents work with J.T. Chayes, J. Ding and B. Lucier. If time permits, I will then turn to recent work by N. Balcan, M. Braverman, J.T. Chayes, S.-H. Teng, and myself in which we give an algorithm to discover overlapping communities within a social network. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7291696071624756, "perplexity": 1965.3661321952154}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141732835.81/warc/CC-MAIN-20201203220448-20201204010448-00055.warc.gz"} |
http://www.kurims.kyoto-u.ac.jp/~kyodo/kokyuroku/contents/340.html | No.340
Geometric Theory of Several Complex Variables
1978/09/07 - 1978/09/09
FUJIMOTO, HIROTAKA
Contents
1. Surjective Morphisms of Holomorphic Vector Bundles (Geometric Theory of Several Complex Variables)--------------------------------1
    Universite de Paris   SKODA, H.
2. (Japanese title not recovered)
    Department of Mathematics, Tokyo University of Agriculture and Technology   WAKABAYASHI, ISAO
3. (Japanese title not recovered)
    University of California, Berkeley   WU, H.
4. Families of Linear Systems on Projective Manifolds (Geometric Theory of Several Complex Variables)-------------------------------26
    Tohoku University   NAMBA, MAKOTO
5. (Japanese title not recovered) (Geometric Theory of Several Complex Variables)-------------------------------------------------------42
    YAMAGUCHI, HIROSHI
6. Brody's Method in Value Distribution Theory (Geometric Theory of Several Complex Variables)--------------------------------------56
    University of California, Los Angeles   GREEN, Mark L.
7. (Japanese title not recovered)
    Department of Mathematics, Faculty of Science and Technology, Sophia University   KITA, MICHITAKA
8. (Japanese title not recovered)
    The Chinese University of Hong Kong   WONG, B.
@ | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8763198256492615, "perplexity": 5368.441511798591}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721415.7/warc/CC-MAIN-20161020183841-00321-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://02402.compute.dtu.dk/quizzes/test-quiz-2 |
## Question 1 of 6
A company has outsourced the manufacturing of a gasket for one of their valves to a company in China. The gaskets are received in very large lots (with many thousands of gaskets). The company controls a batch by sampling 200 gaskets at random from the lot; these are classified as defective or intact. A lot is accepted if there are at most 2 defective items among the controlled ones.
What is the approximate probability of accepting a lot if the percentage of defectives is $0.4\%$?
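(As a sketch of the arithmetic, not part of the quiz itself: with n = 200 and p = 0.004, Python/SciPy gives both the exact binomial answer and the Poisson approximation with lam = np = 0.8.)

from scipy import stats

n, p = 200, 0.004                   # sample size, defective rate (0.4%)
print(stats.binom.cdf(2, n, p))     # exact: accept if at most 2 defective
print(stats.poisson.cdf(2, n * p))  # Poisson approximation, lam = 0.8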
## Question 2 of 6
In a redesigned valve the so-called "elasticity modulus" of the material is important for the functionality. To compare the elasticity modulus of 3 different brass alloys, samples from each alloy were purchased from 5 different manufacturers. The measurements in the table below indicate the measured elasticity modulus in GPa:

                   Brass alloy                Row sum
                 M1       M2       M3
Manufacturer A   82.5     90.9     75.6       249.0
Manufacturer B   83.7     99.2     78.1       261.0
Manufacturer C   80.9    101.4     87.3       269.6
Manufacturer D   95.2    104.2     92.2       291.6
Manufacturer E   80.8    104.1     83.8       268.7
Column sum      423.1    499.8    417.0
Consider only the data for brass alloy M1. The median and the upper quartile for these become: (using the eBook Chapter 1 definition)
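(A sketch of the computation with NumPy; note that NumPy's default quantile interpolation may not match the eBook's Chapter 1 definition exactly.)

import numpy as np

m1 = np.array([82.5, 83.7, 80.9, 95.2, 80.8])   # brass alloy M1 column
print(np.median(m1))            # 82.5
print(np.quantile(m1, 0.75))    # upper quartile under NumPy's default rule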
## Question 3 of 6
The arrival of guests wishing to check into a hotel is assumed, in the period between 14:00 (2 pm) and 18:00 (6 pm), to be described by a Poisson process (arrivals are assumed evenly distributed over time and independent of each other). From extensive previous measurements it has been found that the probability that no guests arrive in a period of 15 minutes is 0.30 ($P(X_{15min} = 0) = 0.30$, where $X_{15min}$ describes the number of arrivals per 15 min).
The expected number of arrivals per 15 min, and the probability that in a period of 1 hour 8 guests or more arrive are:
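(Sketch: the rate follows from $P(X_{15min} = 0) = e^{-\lambda}$, and the hourly count is Poisson with four times that rate.)

import numpy as np
from scipy import stats

lam15 = -np.log(0.30)        # per 15 minutes, about 1.20
lam60 = 4 * lam15            # per hour, about 4.82
print(lam15)
print(1 - stats.poisson.cdf(7, lam60))   # P(8 or more guests in an hour), about 0.11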
## Question 4 of 6
On a shelf 9 apparently identical ring binders are positioned. It is known that 2 of the ring binders contain statistics exercises, 3 of the ring binders contain math problems, and 4 of the ring binders contain reports. Three ring binders are sampled without replacement.
The random variable X describes the number of ring binders with statistics exercises among the 3 chosen ones. The mean and variance for the random variable X are:
## Question 5 of 6
If you did the previous exercise, the following is a repetition: On a shelf 9 apparently identical ring binders are positioned. It is known that 2 of the ring binders contain statistics exercises, 3 of the ring binders contain math problems and 4 of the ring binders contain reports. Three ring binders are sampled without replacement.
The probability (${P_1}$) that all the three chosen ring binders contain reports and the probability (${P_2}$) to choose exactly one of each kind of ring binder are:
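(Both of these questions are hypergeometric; a combined sketch:)

from math import comb
from scipy import stats

# 9 binders total: 2 statistics, 3 math, 4 reports; draw 3 without replacement
X = stats.hypergeom(M=9, n=2, N=3)   # X = number of statistics binders drawn
print(X.mean(), X.var())             # 2/3 and about 0.389

P1 = comb(4, 3) / comb(9, 3)                              # all three are reports
P2 = comb(2, 1) * comb(3, 1) * comb(4, 1) / comb(9, 3)    # one of each kind
print(P1, P2)                        # about 0.048 and about 0.286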
## Question 6 of 6
A PC user noted that the probability of no spam emails during a given day is 5%. ($P(X = 0) = 0.05$, where $X$ denotes the number of spam emails per day). The number of spam emails per day is assumed to follow a Poisson distribution.
The expected number of spam emails per day and the probability of getting more than 5 spam mails on any given day are: | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7635906338691711, "perplexity": 1314.8385954150247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500356.92/warc/CC-MAIN-20230206145603-20230206175603-00187.warc.gz"} |
https://www.pololu.com/product/2876 | Polo-BOO! Halloween Sale going on now – click for details!
STSPIN220 Low-Voltage Stepper Motor Driver Carrier
Pololu item #: 2876. 87 in stock. Brand: Pololu. Status: Active and Preferred. Free add-on shipping in USA; free shipping in USA over $40.

Price break   Unit price (US$)
1             5.95
5             5.47
25            5.04
100           4.63
This breakout board for STMicro’s STSPIN220 low-voltage microstepping bipolar stepper motor driver offers microstepping down to 1/256-step and operates from 1.8 V to 10 V, allowing stepper motors to be powered with voltages that are too low for other drivers. It can deliver up to approximately 1.1 A per phase continuously without a heat sink or forced air flow (up to 1.3 A peak). The module has a pinout and interface that are very similar to that of our popular A4988 carriers, so it can be used as a drop-in replacement for those boards in many applications.
Alternatives available with variations in these parameter(s): header pins soldered?
Overview
This product is a carrier board or breakout board for the STSPIN220 low-voltage stepper motor driver from STMicroelectronics (ST); we therefore recommend careful reading of the STSPIN220 datasheet (1MB pdf) before using this product. This stepper motor driver offers microstep resolutions down to 1/256 of a step, and it lets you control one bipolar stepper motor at up to approximately 1.1 A per phase continuously without a heat sink or forced air flow (see the Power dissipation considerations section below for more information). Here are some of the driver’s key features:
• Simple step and direction control interface
• Nine different step resolutions down to 256 microsteps: full-step, half-step, 1/4-step, 1/8-step, 1/16-step, 1/32-step, 1/64-step, 1/128-step, and 1/256-step
• Adjustable current control lets you set the maximum current output, which lets you use voltages above your stepper motor’s rated voltage to achieve higher step rates
• Motor supply voltage: 1.8 V to 10 V (for a higher-voltage alternative, consider the STSPIN820 carrier, which operates from 7 V to 45 V)
• Can deliver 1.1 A per phase continuously without additional cooling
• Can interface directly with 3.3 V and 5 V systems
• Over-temperature thermal shutdown, over-current shutdown, and short circuit protection
• 4-layer, 2 oz copper PCB for improved heat dissipation
• Exposed solderable ground pad below the driver IC on the bottom of the PCB
• Module size, pinout, and interface match those of our A4988 stepper motor driver carriers in most respects
This product ships with all surface-mount components—including the STSPIN220 driver IC—installed as shown in the product picture.
Included hardware
The STSPIN220 low-voltage stepper motor driver carrier ships with one 1×16-pin breakaway 0.1″ male header (for a version of this carrier with header pins already installed, see item #2877). The headers can be soldered in for use with solderless breadboards or 0.1″ female connectors. You can also solder your motor leads and other connections directly to the board.
Using the driver
Power connections
The driver requires a logic supply voltage (3 – 5 V) to be connected across the VCC and GND pins and a motor supply voltage of 1.8 V to 10 V to be connected across VIN and GND. These supplies should have appropriate decoupling capacitors close to the board, and they should be capable of delivering the expected currents (peaks up to 3 A for the motor supply).
Motor connections
The STSPIN220 is intended to control a single bipolar stepper motor. The two sides of one coil should be connected across OUTA1 and OUTA2, and the two sides of the other coil should be connected across OUTB1 and OUTB2.
Warning: Connecting or disconnecting a stepper motor while the driver is powered can destroy the driver. (More generally, rewiring anything while it is powered is asking for trouble.)
Step (and microstep) size
Stepper motors typically have a step size specification (e.g. 1.8° or 200 steps per revolution), which applies to full steps. A microstepping driver such as the STSPIN220 allows higher resolutions by allowing intermediate step locations, which are achieved by energizing the coils with intermediate current levels. For instance, driving a motor in quarter-step mode will give the 200-step-per-revolution motor 800 microsteps per revolution by using four different current levels.
Unlike our other stepper motor drivers, some of the resolution (step size) selector inputs share pins with the STEP/STCK and DIR pins, and the values of the inputs are latched at power-up or when standby mode is released. After this, the values on the inputs do not affect the microstep mode, and the MODE3 and MODE4 inputs start operating as step and direction controls. The only exception is the case where MODE1 and MODE2 are both driven low, which overrides the latched microstep setting and forces the driver into full-step mode. The previous microstep setting is restored once MODE1 or MODE2 is set high.
There are nine step resolutions available as shown in the table below. The four MODE pins are floating, so they must be connected to logic high or low before operating the driver. For the microstep modes to function correctly, the current limit must be set low enough (see below) so that current limiting gets engaged. Otherwise, the intermediate current levels will not be correctly maintained, and the motor will skip microsteps.
MODE1 MODE2 MODE3 (STEP) MODE4 (DIR) Latched step setting
Low Low Low Low Full step
High Low High Low Half step
Low High Low High 1/4 step
High High High Low 1/8 step
High Low High High 1/8 step
High High High High 1/16 step
Low High Low Low 1/32 step
Low Low Low High 1/32 step(1)
High High Low High 1/64 step
Low High High High 1/64 step
High Low Low Low 1/128 step
Low Low High Low 1/128 step(1)
High High Low Low 1/256 step
Low High High Low 1/256 step
High Low Low High 1/256 step
Low Low High High 1/256 step(1)
1 Keeping the MODE1 and MODE2 inputs low after the step resolution configuration forces the driver into full-step mode instead of the selected configuration.
Control inputs and status outputs
The rising edge of each pulse to the STEP (STCK) input corresponds to one microstep of the stepper motor in the direction selected by the DIR pin. Unlike most of our other stepper motor driver carriers, the STEP and DIR inputs are floating, so they must be connected to logic high or low to ensure proper operation.
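As an illustration only (not from the STSPIN220 documentation), a minimal Raspberry Pi sketch that drives this step/direction interface might look like the following; the GPIO numbers are hypothetical and must match your wiring:

import time
import RPi.GPIO as GPIO

STEP, DIR = 20, 21                 # hypothetical GPIO wiring
GPIO.setmode(GPIO.BCM)
GPIO.setup(STEP, GPIO.OUT)
GPIO.setup(DIR, GPIO.OUT)

GPIO.output(DIR, GPIO.HIGH)        # pick a rotation direction
for _ in range(200):               # one revolution of a 1.8-degree motor in full-step mode
    GPIO.output(STEP, GPIO.HIGH)   # each rising edge advances one (micro)step
    time.sleep(0.001)
    GPIO.output(STEP, GPIO.LOW)
    time.sleep(0.001)
GPIO.cleanup()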
The STSPIN220 IC has two different inputs for controlling its power states, STBY/RESET and EN/FAULT:
• When the STBY pin is driven low, the driver enters a low-power mode, disables the motor outputs, and resets the translation table. We call this pin STBY on our board based on the logic of how it works, but it is a direct connection to the STBY pin on the driver. The board pulls this pin high by default.
• The EN pin is inverted by our carrier board and presented as ENABLE, which makes it work the same way as the enable pins on our various other stepper motor drivers with this form factor. It is pulled low on the board by default to enable the driver, and it can be driven high to disable the outputs.
The STSPIN220 can detect several fault (error) states that it reports by driving its EN/FAULT pin low. The FAULT pin is not made available by default (to avoid conflicts when using the STSPIN220 carrier as a drop-in replacement for our other stepper motor driver carriers), but it can be connected to the pin labeled “(1)” or “(2)” by bridging the surface mount jumper labeled “F” on the bottom side of the board to the corresponding pad labeled “1” or “2”.
Current limiting
To achieve high step rates, the motor supply is typically higher than would be permissible without active current limiting. For instance, a typical stepper motor might have a maximum current rating of 1 A with a 5 Ω coil resistance, which would indicate a maximum motor supply of 5 V. Using such a motor with 10 V would allow higher step rates, but the current must actively be limited to under 1 A to prevent damage to the motor.
The STSPIN220 supports such active current limiting, and the trimmer potentiometer on the board can be used to set the current limit. You will typically want to set the driver’s current limit to be at or below the current rating of your stepper motor. One way to set the current limit is to put the driver into full-step mode and to measure the current running through a single motor coil without clocking the STEP input. The measured current will be equal to the current limit (since both coils are always on and limited to 100% of the current limit setting in full-step mode).
Another way to set the current limit is to measure the VREF voltage and calculate the resulting current limit. The VREF pin voltage is accessible via a small hole that is circled on the bottom silkscreen of the circuit board. The current limit relates to VREF as follows:
current limit = VREF × 5 A/V
Like the FAULT pin, VREF can be connected to the pin labeled “(1)” or “(2)” by bridging the surface mount jumper labeled “R” on the bottom side of the board to the corresponding pad labeled “1” or “2”.
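For example, with a hypothetical measured VREF of 0.14 V:

vref = 0.14                  # measured VREF in volts (hypothetical reading)
current_limit = vref * 5     # amps per phase, from the formula above
print(current_limit)         # 0.7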
Note: The coil current can be very different from the power supply current, so you should not use the current measured at the power supply to set the current limit. The appropriate place to put your current meter is in series with one of your stepper motor coils. If the driver is in full-step mode, both coils will always be on and limited to 100% of the current limit setting (unlike some other drivers, which limit it to about 70% in full-step mode). If your driver is in one of the microstepping modes, the current through the coils will change with each step, ranging from 0% to 100% of the set limit.
Power dissipation considerations
The driver ICs have maximum current ratings higher than the continuous currents we specify for these carrier boards, but the actual current you can deliver depends on how well you can keep the IC cool. The carrier’s printed circuit board is designed to draw heat out of the IC, but to supply more than the specified continuous current per coil, a heat sink or other cooling method is required.
This product can get hot enough to burn you long before the chip overheats. Take care when handling this product and other components connected to it.
Please note that measuring the current draw at the power supply will generally not provide an accurate measure of the coil current. Since the input voltage to the driver can be significantly higher than the coil voltage, the measured current on the power supply can be quite a bit lower than the coil current (the driver and coil basically act like a switching step-down power supply). Also, if the supply voltage is very high compared to what the motor needs to achieve the set current, the duty cycle will be very low, which also leads to significant differences between average and RMS currents. Additionally, please note that the coil current is a function of the set current limit, but it does not necessarily equal the current limit setting as the actual current through each coil changes with each microstep and can be further reduced if Active Gain Control is active. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2201993763446808, "perplexity": 3289.41292929028}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986664662.15/warc/CC-MAIN-20191016041344-20191016064844-00079.warc.gz"} |
https://proofwiki.org/wiki/One_Half_as_Pandigital_Fraction | # One Half as Pandigital Fraction
## Theorem
There are $12$ ways $\dfrac 1 2$ can be expressed as a pandigital fraction:
$\dfrac 1 2 = \dfrac {6729} {13 \, 458}$
$\dfrac 1 2 = \dfrac {6792} {13 \, 584}$
$\dfrac 1 2 = \dfrac {6927} {13 \, 854}$
$\dfrac 1 2 = \dfrac {7269} {14 \, 538}$
$\dfrac 1 2 = \dfrac {7293} {14 \, 586}$
$\dfrac 1 2 = \dfrac {7329} {14 \, 658}$
$\dfrac 1 2 = \dfrac {7692} {15 \, 384}$
$\dfrac 1 2 = \dfrac {7923} {15 \, 846}$
$\dfrac 1 2 = \dfrac {7932} {15 \, 864}$
$\dfrac 1 2 = \dfrac {9267} {18 \, 534}$
$\dfrac 1 2 = \dfrac {9273} {18 \, 546}$
$\dfrac 1 2 = \dfrac {9327} {18 \, 654}$
## Proof
Can be verified by brute force.
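For instance, a short Python script (a sketch, not part of the original page) can check every arrangement of the digits $1$ to $9$ split as a 4-digit numerator over a 5-digit denominator:

from itertools import permutations

solutions = []
for perm in permutations('123456789'):
    s = ''.join(perm)
    num, den = int(s[:4]), int(s[4:])   # 4-digit numerator, 5-digit denominator
    if 2 * num == den:                  # num/den == 1/2
        solutions.append((num, den))

print(len(solutions))                   # 12
for num, den in sorted(solutions):
    print('1/2 = %d/%d' % (num, den))

The 4-over-5 digit split is the only one possible, since doubling a 3-digit numerator can never produce the 6-digit denominator that would be needed to use all nine digits.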
## Historical Note
According to David Wells in his $1986$ work Curious and Interesting Numbers, this result appears in an article by Mitchell J. Friedman in Volume $8$ of Scripta Mathematica, but it is proving difficult to find an archived copy to consult directly. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.627429723739624, "perplexity": 1392.9386717320917}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304810.95/warc/CC-MAIN-20220125100035-20220125130035-00690.warc.gz"} |
http://www.zora.uzh.ch/id/eprint/45854/ | # Do ultra-runners in a 24-h run really dehydrate?
Knechtle, B; Wirth, A; Knechtle, P; Rosemann, T; Senn, O (2011). Do ultra-runners in a 24-h run really dehydrate? Irish Journal of Medical Science, 180(1):129-134.
## Abstract
BACKGROUND: Loss of body mass during a 24-h run was considered to be a result of dehydration.
AIMS: We intended to quantify the decrease in body mass as a loss in fat mass or skeletal muscle mass and to quantify the change in hydration status.
METHODS: Body mass, fat mass, skeletal muscle mass, haematocrit, plasma sodium and urinary specific gravity were measured in 15 ultra-marathoners in a 24-h run.
RESULTS: Body mass decreased by 2.2 kg (p = 0.0009) and fat mass decreased by 0.5 kg (p = 0.0084). The decrease in body mass correlated to the decrease in fat mass (r = 0.72, p = 0.0024). Urinary specific gravity increased from 1.012 to 1.022 g/mL (p = 0.0005).
CONCLUSIONS: The decrease in body mass and the increase in urinary specific gravity indicate dehydration. The decrease in body mass was correlated to the decrease in fat mass and therefore not only due to dehydration.
## Statistics
### Citations
16 citations in Web of Science®
16 citations in Scopus® | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8231478333473206, "perplexity": 5162.139870706219}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948530668.28/warc/CC-MAIN-20171213182224-20171213202224-00649.warc.gz"} |
https://www.journaltocs.ac.uk/index.php?action=browse&subAction=pub&publisherID=1205&journalID=40908&pageb=1&userQueryID=&sort=&local_page=4&sorType=&sorCol=5 | Publisher: Cornell University (Total: 3 journals) [Sort alphabetically]
Showing 1 - 3 of 3 journals, sorted by number of followers:
• Cornell Law Review (Followers: 15; SJR: 1.508; CiteScore: 1)
• Cornell Intl. Law J. (Followers: 7; SJR: 0.202; CiteScore: 1)
• J. of Privacy and Confidentiality
Journal of Privacy and Confidentiality. Number of Followers: 0. Open Access journal, ISSN (Online) 2575-8527. Published by Cornell University [3 journals]
• The Sample Complexity of Distribution-Free Parity Learning in the Robust Shuffle Model
• Authors: Kobbi Nissim, Chao Yan
Abstract: We provide a lower bound on the sample complexity of distribution-free parity learning in the realizable case in the shuffle model of differential privacy. Namely, we show that the sample complexity of learning d-bit parity functions is $\Omega(2^{d/2})$. Our result extends a recent similar lower bound on the sample complexity of private agnostic learning of parity functions in the shuffle model by Cheu and Ullman. We also sketch a simple shuffle model protocol demonstrating that our results are tight up to poly(d) factors.
PubDate: 2022-11-02
DOI: 10.29012/jpc.805
Issue No: Vol. 12, No. 2 (2022)
• Representing Sparse Vectors with Differential Privacy, Low Error, Optimal Space, and Fast Access
• Authors: Christian Janos Lebeda, Martin Aumüller, Rasmus Pagh
Abstract: Representing a sparse histogram, or more generally a sparse vector, is a fundamental task in differential privacy. An ideal solution would use space close to information-theoretical lower bounds, have an error distribution that depends optimally on the desired privacy level, and allow fast random access to entries in the vector. However, existing approaches have only achieved two of these three goals. In this paper we introduce the Approximate Laplace Projection (ALP) mechanism for approximating k-sparse vectors. This mechanism is shown to simultaneously have information-theoretically optimal space (up to constant factors), fast access to vector entries, and error of the same magnitude as the Laplace-mechanism applied to dense vectors. A key new technique is a unary representation of small integers, which we show to be robust against ''randomized response'' noise. This representation is combined with hashing, in the spirit of Bloom filters, to obtain a space-efficient, differentially private representation.
Our theoretical performance bounds are complemented by simulations which show that the constant factors on the main performance parameters are quite small, suggesting practicality of the technique.
PubDate: 2022-11-02
DOI: 10.29012/jpc.809
Issue No: Vol. 12, No. 2 (2022)
• Consistent Spectral Clustering of Network Block Models under Local Differential Privacy
• Authors: Jonathan Hehir, Aleksandra Slavkovic, Xiaoyue Niu
Abstract: The stochastic block model (SBM) and degree-corrected block model (DCBM) are network models often selected as the fundamental setting in which to analyze the theoretical properties of community detection methods. We consider the problem of spectral clustering of SBM and DCBM networks under a local form of edge differential privacy. Using a randomized response privacy mechanism called the edge-flip mechanism, we develop theoretical guarantees for differentially private community detection, demonstrating conditions under which this strong privacy guarantee can be upheld while achieving spectral clustering convergence rates that match the known rates without privacy. We prove the strongest theoretical results are achievable for dense networks (those with node degree linear in the number of nodes), while weak consistency is achievable under mild sparsity (node degree greater than $\sqrt{n}$). We empirically demonstrate our results on a number of network examples.
PubDate: 2022-11-02
DOI: 10.29012/jpc.811
Issue No: Vol. 12, No. 2 (2022)
• Exact Inference with Approximate Computation for Differentially Private Data via Perturbations
• Authors: Ruobin Gong
Abstract: This paper discusses how two classes of approximate computation algorithms can be adapted, in a modular fashion, to achieve exact statistical inference from differentially private data products. Considered are approximate Bayesian computation for Bayesian inference, and Monte Carlo Expectation-Maximization for likelihood inference. Up to Monte Carlo error, inference from these algorithms is exact with respect to the joint specification of both the analyst's original data model, and the curator's differential privacy mechanism. Highlighted is a duality between approximate computation on exact data, and exact computation on approximate data, which can be leveraged by a well-designed computational procedure for statistical inference.
PubDate: 2022-11-02
DOI: 10.29012/jpc.797
Issue No: Vol. 12, No. 2 (2022)
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7615727782249451, "perplexity": 2774.5550420908507}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710711.7/warc/CC-MAIN-20221129200438-20221129230438-00458.warc.gz"}
http://eprints.adm.unipi.it/2072/ | # On compressing and indexing data
Ferragina, Paolo and Manzini, Giovanni (2002) On compressing and indexing data. Technical Report of the Dipartimento di Informatica, Università di Pisa, Pisa, IT.
In this paper we design two compressed data structures for the full-text indexing problem. These data structures support efficient substring searches within the indexed text $T$ using roughly the same space required to store $T$ in compressed form. Our first compressed data structure retrieves the $occ$ occurrences of a pattern $P[1,p]$ in $T[1,n]$ in $O(p + occ\log^{1+\epsilon} n)$ time and uses at most $5n H_k(T) + o(n)$ bits of storage, where $H_k(T)$ is the $k$-th order empirical entropy of $T$. This space occupancy is $\Theta(n)$ bits in the worst case and $o(n)$ bits for compressible texts. Our data structure exploits the relationship between the suffix array and the Burrows-Wheeler compression algorithm. Our second compressed data structure achieves $O(p+occ)$ query time and uses $O(n H_k(T)\log^\epsilon n) + o(n)$ bits of storage. In the worst case the space occupancy is $o(n\log n)$ bits which is asymptotically smaller than the space occupancy of suffix trees and suffix arrays. This second data structure exploits the interplay between two compressors: the Burrows-Wheeler algorithm and the LZ78 algorithm. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3944351375102997, "perplexity": 820.7051667879808}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805708.41/warc/CC-MAIN-20171119172232-20171119192232-00303.warc.gz"}
https://www.physicsforums.com/threads/relatively-easy-limit-help-please.83898/ | # Relatively easy limit help please
• #1
Hessam
ok... so I'm taking calc 3, and didn't go to class for a month, lol

$$\lim_{(x,y,z) \to (0,0,0)} \frac{xy + yz + zx}{x^2 + y^2 + z^2}$$

I tried taking the limit as (x,y,z) approaches (x,0,0), (0,y,0), and (0,0,z) and got zero, but that's not the answer.

However, when I put it into my calculator and grind it out I get 1... but how can I do it on paper?
Last edited:
• #2
Maxos
The limit does not exist.
To show that, take two curves, e.g.:
$$x=y=z$$ and $$x=y=-z$$
You'll find $$\frac{3x^2}{3x^2}$$ and $$\frac{-x^2}{3x^2}$$, whose limits are 1 and $$-\frac{1}{3}$$, which are different. QED
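(A quick check of those two paths with SymPy, as a sketch:)

import sympy as sp

t = sp.Symbol('t')
f = lambda x, y, z: (x*y + y*z + z*x) / (x**2 + y**2 + z**2)

print(sp.limit(f(t, t, t), t, 0))    # along x = y = z:    1
print(sp.limit(f(t, t, -t), t, 0))   # along x = y = -z:  -1/3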
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7252597808837891, "perplexity": 8686.615498252875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710808.72/warc/CC-MAIN-20221201085558-20221201115558-00628.warc.gz"}
https://dsp.stackexchange.com/questions/18320/welchs-overlapped-method | # Welch's Overlapped Method
I am measuring wind speed using a number of different anemometers. These anemometers are placed on a structure which exhibits sinusoidal motion. Thus, the signal of the anemometers contains both the wind speed as well as the sine wave to which they were exposed.
I wish to accurately determine the amplitude of the sine wave from the anemometer signal. Thus, I decided to use a flat-top window when calculating the FFT. However, since the signal is 'noisy' the resulting amplitude of the sine wave is not correct. Sometimes, the amplitude given on the spectrum is greater than the actual amplitude of the sine wave!!
I am considering using Welch's overlapped method (pwelch in MATLAB). Can someone please tell me whether such a method is suitable for my application? I have read that this method should only be used with stationary signals? I am confused whether my signal is considered as stationary. Can someone please help me? Thanks.
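For what it's worth, the same averaged-periodogram estimator is available outside MATLAB (scipy.signal.welch is the pwelch equivalent); a rough sketch with made-up numbers, recovering a sine amplitude from the spectrum:

import numpy as np
from scipy import signal

fs = 10.0                                   # hypothetical sample rate, Hz
t = np.arange(0, 600, 1/fs)                 # ten minutes of data
x = np.random.randn(t.size) + 0.5*np.sin(2*np.pi*0.2*t)   # noise plus a 0.2 Hz sine

f, Pxx = signal.welch(x, fs=fs, nperseg=1024)   # averaged periodogram (PSD)
df = f[1] - f[0]
fpeak = f[np.argmax(Pxx)]
mask = np.abs(f - fpeak) <= 3*df            # integrate the PSD around the peak
A_est = np.sqrt(2 * np.sum(Pxx[mask]) * df) # a sine of amplitude A has power A**2/2
print(fpeak, A_est)                         # roughly 0.2 Hz and 0.5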
• I wouldn't say your signal is stationary. However, you are interested in a sine wave interference which is stationary. The Welch method will yield a better interference to noise ratio. If your interference is strong enough then you wouldn't gain much. – ThP Sep 23 '14 at 19:56
• @ThP: Thanks for your reply. When using pwelch in MATLAB I can see a peak at the frequency of my sine wave. Considering my situation, do you think a psd spectrum makes sense or should I just calculate fft? Thanks in advance. – user10881 Sep 23 '14 at 21:30 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.891139566898346, "perplexity": 526.7961443779}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315132.71/warc/CC-MAIN-20190819221806-20190820003806-00062.warc.gz"} |
https://www.math.ias.edu/seminars/abstract?event=23471 | # On Proving Hardness of Improper Learning from Worst-Case Assumptions
COMPUTER SCIENCE/DISCRETE MATH I
Topic: On Proving Hardness of Improper Learning from Worst-Case Assumptions
Speaker: Benny Applebaum
Affiliation: Princeton University
Date: Monday, March 31
Time/Room: 11:15am - 12:15pm / S-101
Learning theory, and in particular PAC learning, was introduced by Valiant in 1984 and has since become a major area of research in theoretical and applied computer science. One natural question that was posed at the very inception of the field is whether there are classes of functions that are hard to learn. Here it is important to make a distinction between proper and improper learning: in proper learning one is required to output a hypothesis that is comparable to the function being learned (e.g. if we are trying to learn k-DNF then the hypothesis should also be a k-DNF), while in improper learning the hypothesis can be more complex (e.g. if we are trying to learn k-DNF then the hypothesis could be a circuit of size n^c).

Computational limitations to proper and improper learning have been extensively studied, starting with the seminal works of Pitt-Valiant (JACM '88) and Kearns-Valiant (JACM '94). However, while the hardness of proper learning is typically based on worst case assumptions on the power of NP (e.g., SAT \notin BPP), all known limitations on improper learning are based on cryptographic assumptions (e.g., the existence of one-way functions). It is natural to ask whether this gap is inherent, specifically: is it possible to base hardness of improper learning on worst case assumptions such as SAT \notin BPP?

We show that, unless the Polynomial Hierarchy collapses, such a statement cannot be proven via a large class of reductions including Karp reductions, truth-table reductions, and a restricted form of non-adaptive Turing reductions. Also, a proof that uses a Turing reduction of constant levels of adaptivity would imply an important consequence in cryptography as it yields a transformation from any average-case hard problem in NP to a (standard) one-way function.

These results are obtained by showing that lower bounds for improper learning are intimately related to the complexity of zero-knowledge arguments and to the existence of weak cryptographic primitives. In particular, we prove that if a language L reduces to the task of improper learning circuits, then, depending on the type of the reduction in use, either (1) L has a statistical zero-knowledge argument system, or (2) the worst-case hardness of L implies the existence of a weak variant of one-way functions (as defined by Ostrovsky-Wigderson). Interestingly, we observe that the converse implication also holds. Namely, if (1) or (2) hold then the intractability of L implies that improper learning is hard. Our results hold even in the stronger model of agnostic learning.

This is joint work with Boaz Barak and David Xiao.
| {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8747759461402893, "perplexity": 513.4838674648311}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948608836.84/warc/CC-MAIN-20171218044514-20171218070514-00303.warc.gz"} |
https://brilliant.org/discussions/thread/has-the-brilliant-staff-given-up/ |
# Has the Brilliant Staff given up?
Note by Patrick Lu
4 years, 7 months ago
Patrick,
In fact we do care an immense amount about what you think of our problems, and are bemused that you feel we have given up. We would be grateful for any constructive criticism you have of the types/genres of problems we offer. As has been pointed out in this thread, computer science is a new topic for us. Everything about the problem set, from the difficulty calibration to the types of problems, is a weekly experiment that informs the evolution of what we offer in future weeks. Our math and physics sections have months of experimentation behind them; the Computer Science section, only a few weeks.
If you have suggestions of what types of problems you would like to see we would be happy to hear them.
Is your criticism of that particular problem that it is too easy to put into wolfram alpha? Or do you feel that even writing your own code, the problem is not interesting to you? We would care a lot about the latter criticism.
Staff - 4 years, 7 months ago
The computer science module isn't even computer science. It's just asking people to write some basic code to bash out numbers. There are no algorithms involved, no need for efficiency.
When one of the problems is this:
Find the first 3 digits after the decimal point of the given integral: http://www.wolframalpha.com/input/?i=integral+of+cos%28x%29%5E100+from+0+to+pi%2F4
You know that the staff just doesn't care.
- 4 years, 7 months ago
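(For instance, that problem reduces to a couple of lines with any numerics library; a sketch with mpmath:)

import mpmath as mp

val = mp.quad(lambda x: mp.cos(x)**100, [0, mp.pi/4])
print(val)   # read off the first three digits after the decimal point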
You've got a point, but you are definitely going about it the wrong way. Your tone is very harsh, and there is no need for that. Think about this: 1) Computer Science is a very new area to brilliant. 2) There is one person doing that section, so it's not the entire staff. 3) Brilliant has made leaps and bounds in the past few weeks. This is only one instance of incompetence. 4) Brilliant is FREE. I apologize if a free service isn't the best! 5) If you haven't noticed, there is a new competition involving computer programming. It didn't look like that volume of work took a few hours to think and write up.
Clearly Brilliant isn't the site for your experienced programming skills. But it's a great site otherwise. So please think twice before cursing the entire brilliant staff.
- 4 years, 7 months ago
I wonder why anyone would care about you.
If you're so smart, why don't you try Project Euler? You probably wouldn't be able to solve a quarter of the problems.
Until then, try gaining a couple of levels in everything. You're not even at level 5 in comp sci and you're criticizing Brilliant? Pathetic.
- 4 years, 7 months ago
Hey, I am not such an expert... but still I have solved various Project Euler problems... Please don't scare anybody with that special tag... I am not supporting anybody else... don't misunderstand...
- 4 years, 7 months ago
You certainly don't have to be an expert in order to solve some project euler problems, but solving more than 25% of them requires some skill.
- 4 years, 7 months ago
You know what isn't computer science? Letting WolframAlpha solve the problem for you. That's knowing how to use the internet. You might want to practice with that some more by the way, as you put the body of your post in a comment.
- 4 years, 7 months ago
I have to agree that the computer science problems can be better. Many of the problems I have seen so far are well-known problems that can be searched in Wikipedia, and if one isn't, it can usually be solved using simple loops. (I only got to level 5 this week, so I haven't seen what level 5 looks like)
- 4 years, 7 months ago | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7953948378562927, "perplexity": 1402.9219891618525}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812327.1/warc/CC-MAIN-20180219032249-20180219052249-00547.warc.gz"} |
https://gitlab.irstea.fr/HYCAR-Hydro/airgr/-/commit/947b9698ad5b83e20c364bdbb47ed20059827e11 | Commit 947b9698 by unknown
### v1.0.3 documentation of plot.OutputsModel() updated (part "details")
parent 4bcd88be
... ...
 @@ -33,11 +33,12 @@
  Function which creates a screen plot giving an overview of the model outputs
 \details{
 Dashboard of results including various graphs (depending on the model):\cr
 (1) time series of total precipitation\cr
-(2) time series of snow pack (plotted only if CemaNeige ise used)\cr
-(3) time series of simulated flows (and observed flows if provided)\cr
-(4) interannual median monthly simulated flow (and observed flows if provided)\cr
-(5) correlation plot between simulated and observed flows (if observed flows provided)\cr
-(6) cumulative frequency plot for simulated flows (and observed flows if provided)
+(2) time series of temperature (plotted only if CemaNeige ise used)\cr
+(3) time series of snow pack (plotted only if CemaNeige ise used)\cr
+(4) time series of simulated flows (and observed flows if provided)\cr
+(5) interannual median monthly simulated flow (and observed flows if provided)\cr
+(6) correlation plot between simulated and observed flows (if observed flows provided)\cr
+(7) cumulative frequency plot for simulated flows (and observed flows if provided)
 }
 \author{
 Laurent Coron (June 2014),
 ... ...
| {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9983775019645691, "perplexity": 24106.719059044754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662658761.95/warc/CC-MAIN-20220527142854-20220527172854-00427.warc.gz"}
https://studywolf.wordpress.com/ | ## Using SymPy’s lambdify for generating transform matrices and Jacobians
I’ve been working in VREP with some of their different robot models and testing out force control, and one of the things that becomes pretty important for efficient workflow is to have a streamlined method for setting up the transform matrices and calculating the Jacobians for different robots. You do not want to be working these all out by hand and then writing them in yourself.
A solution that’s been working well for me (and is fully implemented in Python) is to use SymPy to set up the basic transform matrices, from each joint to the next, and then use it’s derivative function to calculate the Jacobian. Once this is calculated you can then use SymPy’s lambdify function to parameterize this, and off you go! In this post I’m going to work through an example for controlling VREP’s UR5 arm using force control. And if you’re just looking for code examples, you can find it all up on my GitHub.
Edit: Updated the code to be much nicer, added saving of calculated functions, and a null space controller.
Setting up the transform matrices
This is the part of this process that is unique to each arm. The goal is to set up a system so that you can specify your transforms for each joint (and to each centre-of-mass (COM) too, of course) and then be on your way. So here’s the UR5, cute thing that it is:
For the UR5, there are 6 joints, and we’re going to be specifying 9 transform matrices: 6 joints and the COM for three arm segments (the remaining arm segment COMs are centered at their respective joint’s reference frame). The joints are all rotary, with 0 and 4 rotating around the z-axis, and the rest all rotating around x.
So first, we’re going to create a class called robot_config. Then, to create our transform matrices in SymPy first we need to set up the variables we’re going to be using:
# set up our joint angle symbols (6th angle doesn't affect any kinematics)
self.q = [sp.Symbol('q%i'%ii) for ii in range(6)]
# segment lengths associated with each joint
self.L = np.array([0.0935, .13453, .4251, .12, .3921, .0935, .0935, .0935])
where self.q is an array storing all our joint angle symbols, and self.L is an array of all of the offsets for each joint and arm segment lengths.
Using these to create the transform matrices for the joints, we get a setup that looks like this:
# transform matrix from origin to joint 0 reference frame
self.T0org = sp.Matrix([[sp.cos(self.q[0]), -sp.sin(self.q[0]), 0, 0],
[sp.sin(self.q[0]), sp.cos(self.q[0]), 0, 0],
[0, 0, 1, self.L[0]],
[0, 0, 0, 1]])
# transform matrix from joint 0 to joint 1 reference frame
self.T10 = sp.Matrix([[1, 0, 0, -self.L[1]],
                      [0, sp.cos(-self.q[1] + sp.pi/2),
                       -sp.sin(-self.q[1] + sp.pi/2), 0],
                      [0, sp.sin(-self.q[1] + sp.pi/2),
                       sp.cos(-self.q[1] + sp.pi/2), 0],
                      [0, 0, 0, 1]])
# transform matrix from joint 1 to joint 2 reference frame
self.T21 = sp.Matrix([[1, 0, 0, 0],
[0, sp.cos(-self.q[2]),
-sp.sin(-self.q[2]), self.L[2]],
[0, sp.sin(-self.q[2]),
sp.cos(-self.q[2]), 0],
[0, 0, 0, 1]])
# transform matrix from joint 2 to joint 3
self.T32 = sp.Matrix([[1, 0, 0, self.L[3]],
                      [0, sp.cos(-self.q[3] - sp.pi/2),
                       -sp.sin(-self.q[3] - sp.pi/2), self.L[4]],
                      [0, sp.sin(-self.q[3] - sp.pi/2),
                       sp.cos(-self.q[3] - sp.pi/2), 0],
                      [0, 0, 0, 1]])
# transform matrix from joint 3 to joint 4
self.T43 = sp.Matrix([[sp.sin(-self.q[4] - sp.pi/2),
sp.cos(-self.q[4] - sp.pi/2), 0, -self.L[5]],
[sp.cos(-self.q[4] - sp.pi/2),
-sp.sin(-self.q[4] - sp.pi/2), 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1]])
# transform matrix from joint 4 to joint 5
self.T54 = sp.Matrix([[1, 0, 0, 0],
[0, sp.cos(self.q[5]), -sp.sin(self.q[5]), 0],
[0, sp.sin(self.q[5]), sp.cos(self.q[5]), self.L[6]],
[0, 0, 0, 1]])
# transform matrix from joint 5 to end-effector
self.TEE5 = sp.Matrix([[1, 0, 0, self.L[7]],
                       [0, 1, 0, 0],
                       [0, 0, 1, 0],
                       [0, 0, 0, 1]])
You can see a bunch of offsetting of the joint angles by -sp.pi/2 and this is to account for the expected 0 angle (straight along the reference frame’s x-axis) at those joints being different than the 0 angle defined in the VREP simulation (at a 90 degrees from the x-axis). You can find these by either looking at and finding the joints’ 0 position in VREP or by trial-and-error empirical analysis.
Once you have your transforms, then you have to specify how to move from the origin to each reference frame of interest (the joints and COMs). For that, I’ve just set up a simple function with a switch statement:
# point of interest in the reference frame (right at the origin)
self.x = sp.Matrix([0,0,0,1])
def _calc_T(self, name, lambdify=True):
    """ Uses Sympy to generate the transform for a joint or link

    name string: name of the joint or link, or end-effector
    lambdify boolean: if True returns a function to calculate
                      the transform. If False returns the Sympy
                      matrix
    """

    # check to see if we have our transformation saved in file
    if os.path.isfile('%s/%s.T' % (self.config_folder, name)):
        Tx = cloudpickle.load(open('%s/%s.T' % (self.config_folder, name),
                                   'rb'))
    else:
        if name == 'joint0' or name == 'link0':
            T = self.T0org
        elif name == 'joint1' or name == 'link1':
            T = self.T0org * self.T10
        elif name == 'joint2':
            T = self.T0org * self.T10 * self.T21
        elif name == 'link2':
            T = self.T0org * self.T10 * self.Tl21
        elif name == 'joint3':
            T = self.T0org * self.T10 * self.T21 * self.T32
        elif name == 'link3':
            T = self.T0org * self.T10 * self.T21 * self.Tl32
        elif name == 'joint4' or name == 'link4':
            T = self.T0org * self.T10 * self.T21 * self.T32 * self.T43
        elif name == 'joint5' or name == 'link5':
            T = self.T0org * self.T10 * self.T21 * self.T32 * self.T43 * \
                self.T54
        elif name == 'link6' or name == 'EE':
            T = self.T0org * self.T10 * self.T21 * self.T32 * self.T43 * \
                self.T54 * self.TEE5
        Tx = T * self.x  # to convert from transform matrix to (x,y,z)

        # save to file
        cloudpickle.dump(Tx, open('%s/%s.T' % (self.config_folder, name),
                                  'wb'))

    if lambdify is False:
        return Tx
    return sp.lambdify(self.q, Tx)
So the first part is pretty straight forward, create the transform matrix, and then at the end to get the (x,y,z) position we just multiply by a vector we created that represents a point at the origin of the last reference frame. Some of the transform matrices (the ones to the arm segments) I didn’t specify above just to cut down on space.
The second part is where we use this awesome lambdify function, which lets us turn the matrix we've defined into a function, so that we can pass in joint angles and get back out the resulting (x,y,z) position. There's also the option to get the straight up SymPy matrix return, in case you need the symbolic form (which we will!).
NOTE: You can also see that there’s a check built in to look for saved files, and to just load those saved files instead of recalculating things if they’re available. This is because calculating some of these matrices and their derivatives takes a long, long time. I used the cloudpickle module to do this because it’s able to easily handle saving a whole bunch of weird things that makes normal pickle sour.
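To make the lambdify step concrete, here's a toy standalone sketch (a 2-link planar arm with unit-length segments, not the UR5 code):

import numpy as np
import sympy as sp

q0, q1 = sp.symbols('q0 q1')
# end-effector (x, y) position for a 2-link planar arm
Tx = sp.Matrix([sp.cos(q0) + sp.cos(q0 + q1),
                sp.sin(q0) + sp.sin(q0 + q1)])

Tx_func = sp.lambdify([q0, q1], Tx)   # symbolic matrix -> fast numeric function
print(Tx_func(np.pi / 4.0, np.pi / 4.0))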
Calculating the Jacobian
So now that we’re able to quickly generate the transform matrix for each point of interest on the UR5, we simply take the derivative of the equation for each (x,y,z) coordinate with respect to each joint angle to generate our Jacobian.
def _calc_J(self, name, lambdify=True):
""" Uses Sympy to generate the Jacobian for a joint or link
name string: name of the joint or link, or end-effector
lambdify boolean: if True returns a function to calculate
the Jacobian. If False returns the Sympy
matrix
"""
# check to see if we have our Jacobian saved in file
if os.path.isfile('%s/%s.J' % (self.config_folder, name)):
J = cloudpickle.load(open('%s/%s.J' %
(self.config_folder, name), 'rb'))
else:
Tx = self._calc_T(name, lambdify=False)
J = []
# calculate derivative of (x,y,z) wrt to each joint
for ii in range(self.num_joints):
J.append([])
J[ii].append(sp.simplify(Tx[0].diff(self.q[ii]))) # dx/dq[ii]
J[ii].append(sp.simplify(Tx[1].diff(self.q[ii]))) # dy/dq[ii]
J[ii].append(sp.simplify(Tx[2].diff(self.q[ii]))) # dz/dq[ii]
Here we retrieve the Tx vector from our _calc_T function, and then calculate the derivatives. When calculating the Jacobian for the end-effector, this is all we need! Huzzah!
But to calculate the Jacobian for transforming the inertia matrices of each arm segment into joint space, we’re going to need the orientation information added to our Jacobian as well. We know this ahead of time: for each joint it’s a 3D vector with a 1 on the axis being rotated around. So we can predefine this:
# orientation part of the Jacobian (compensating for orientations)
self.J_orientation = [
[0, 0, 1], # joint 0 rotates around z axis
[1, 0, 0], # joint 1 rotates around x axis
[1, 0, 0], # joint 2 rotates around x axis
[1, 0, 0], # joint 3 rotates around x axis
[0, 0, 1], # joint 4 rotates around z axis
[1, 0, 0]] # joint 5 rotates around x axis
And then we just fill in the Jacobians for each reference frame with self.J_orientation up to the last joint, and fill in the rest of the Jacobian with zeros. So e.g. when we’re calculating the Jacobian for the arm segment just past the second joint we’ll use the first two rows of self.J_orientation and the rest of the rows will be 0.
So this leads us to the second half of the _calc_J function:
end_point = name.strip('link').strip('joint')
if end_point != 'EE':
end_point = min(int(end_point) + 1, self.num_joints)
# add on the orientation information up to the last joint
for ii in range(end_point):
J[ii] = J[ii] + self.J_orientation[ii]
# fill in the rest of the joints orientation info with 0
for ii in range(end_point, self.num_joints):
J[ii] = J[ii] + [0, 0, 0]
# save to file
cloudpickle.dump(J, open('%s/%s.J' %
(self.config_folder, name), 'wb'))
J = sp.Matrix(J).T # correct the orientation of J
if lambdify is False:
return J
return sp.lambdify(self.q, J)
The orientation information is added in, we save the result to file, and a function that takes in the joint angles and outputs the Jacobian is created (unless lambdify == False in which case the SymPy symbolic form is returned.)
Then finally, two wrapper functions are added in to make creating / accessing these functions easier. First, define a couple of dictionaries
# create function dictionaries
self._T = {} # for transform calculations
self._J = {} # for Jacobian calculations
and then our wrapper functions look like this
def T(self, name, q):
""" Calculates the transform for a joint or link
name string: name of the joint or link, or end-effector
q np.array: joint angles
"""
# check for function in dictionary
if self._T.get(name, None) is None:
print('Generating transform function for %s'%name)
self._T[name] = self._calc_T(name)
return self._T[name](*q)[:-1].flatten()
def J(self, name, q):
""" Calculates the transform for a joint or link
name string: name of the joint or link, or end-effector
q np.array: joint angles
"""
# check for function in dictionary
if self._J.get(name, None) is None:
print('Generating Jacobian function for %s'%name)
self._J[name] = self._calc_J(name)
return np.array(self._J[name](*q)).T
So the way you use this class (all of this is in a class) is to call these T and J functions with the current joint angles. They’ll check to see if the functions have already been created or stored in file; if they haven’t, then the T and / or J functions will be created, then our wrappers do a bit of formatting to get them into the proper form (i.e. transposing or cropping), and return you your (x,y,z) or Jacobian!
NOTE: It’s a bit of a misnomer to have the function be called T and actually return to you Tx, but hey this is a blog. Lay off.
Calculating the inertia matrix in joint-space and force of gravity
Now, since we’re here we might as well also calculate the functions for our inertia matrix in joint space and the effect of gravity. So, define a couple more placeholders in our robot_config class to help us:
self._M = [] # placeholder for (x,y,z) inertia matrices
self._Mq = None # placeholder for joint space inertia matrix function
self._Mq_g = None # placeholder for joint space gravity term function
and then add in our inertia matrix information (defined in each link’s centre-of-mass (COM) reference frame)
# create the inertia matrices for each link of the ur5
# NOTE: the rotational inertia values below are illustrative
# placeholders, not measured UR5 values
self._M.append(np.diag([1.0, 1.0, 1.0, 0.02, 0.02, 0.02]))  # link0
self._M.append(np.diag([2.5, 2.5, 2.5, 0.04, 0.04, 0.04]))  # link1
self._M.append(np.diag([5.7, 5.7, 5.7, 0.06, 0.06, 0.06]))  # link2
self._M.append(np.diag([3.9, 3.9, 3.9, 0.055, 0.055, 0.055]))  # link3
self._M.append(np.diag([0.7, 0.7, 0.7, 0.01, 0.01, 0.01]))  # link4
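One thing to flag: the gravity function below uses a self.gravity vector that isn’t shown in these snippets. A minimal sketch of what it could look like (a hypothetical 6D vector matching the 6-row Jacobians, with gravity along the world z-axis):

# hypothetical 6D gravity vector (3 linear + 3 angular components)
self.gravity = sp.Matrix([[0, 0, -9.81, 0, 0, 0]]).T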
and then using our equations for calculating the system’s inertia and gravity we create our _calc_Mq and _calc_Mq_g functions
def _calc_Mq(self, lambdify=True):
""" Uses Sympy to generate the inertia matrix in
joint space for the ur5
lambdify boolean: if True returns a function to calculate
the inertia matrix. If False returns the Sympy
matrix
"""
# check to see if we have our inertia matrix saved in file
if os.path.isfile('%s/Mq' % self.config_folder):
Mq = cloudpickle.load(open('%s/Mq' % self.config_folder, 'rb'))
else:
# get the Jacobians for each link's COM
J = [self._calc_J('link%s' % ii, lambdify=False)
for ii in range(self.num_joints)]
# transform each inertia matrix into joint space
# sum together the effects of arm segments' inertia on each motor
Mq = sp.zeros(self.num_joints)
for ii in range(self.num_joints):
Mq += J[ii].T * self._M[ii] * J[ii]
# save to file
cloudpickle.dump(Mq, open('%s/Mq' % self.config_folder, 'wb'))
if lambdify is False:
return Mq
return sp.lambdify(self.q, Mq)
def _calc_Mq_g(self, lambdify=True):
""" Uses Sympy to generate the force of gravity in
joint space for the ur5
lambdify boolean: if True returns a function to calculate
the gravity term. If False returns the Sympy
matrix
"""
# check to see if we have our gravity term saved in file
if os.path.isfile('%s/Mq_g' % self.config_folder):
Mq_g = cloudpickle.load(open('%s/Mq_g' % self.config_folder,
'rb'))
else:
# get the Jacobians for each link's COM
J = [self._calc_J('link%s' % ii, lambdify=False)
for ii in range(self.num_joints)]
# transform each inertia matrix into joint space and
# sum together the effects of arm segments' inertia on each motor
Mq_g = sp.zeros(self.num_joints, 1)
for ii in range(self.num_joints):
Mq_g += J[ii].T * self._M[ii] * self.gravity
# save to file
cloudpickle.dump(Mq_g, open('%s/Mq_g' % self.config_folder,
'wb'))
if lambdify is False:
return Mq_g
return sp.lambdify(self.q, Mq_g)
and wrapper functions
def Mq(self, q):
""" Calculates the joint space inertia matrix for the ur5
q np.array: joint angles
"""
# check for function in dictionary
if self._Mq is None:
print('Generating inertia matrix function')
self._Mq = self._calc_Mq()
return np.array(self._Mq(*q))
def Mq_g(self, q):
""" Calculates the force of gravity in joint space for the ur5
q np.array: joint angles
"""
# check for function in dictionary
if self._Mq_g is None:
print('Generating gravity effects function')
self._Mq_g = self._calc_Mq_g()
return np.array(self._Mq_g(*q)).flatten()
and we’re all set!
Putting it all together
Now we have nice clean code to generate everything we need for our controller. Using the controller developed in this post as a base, we can replace those calculations with the following nice compact code (which also includes a secondary null-space controller to keep the arm near resting joint angles):
# calculate position of the end-effector
# derived in the ur5 calc_TnJ class
xyz = robot_config.T('EE', q)
# calculate the Jacobian for the end effector
JEE = robot_config.J('EE', q)
# calculate the inertia matrix in joint space
Mq = robot_config.Mq(q)
# calculate the effect of gravity in joint space
Mq_g = robot_config.Mq_g(q)
# convert the mass compensation into end effector space
Mx_inv = np.dot(JEE, np.dot(np.linalg.inv(Mq), JEE.T))
svd_u, svd_s, svd_v = np.linalg.svd(Mx_inv)
# cut off any singular values that could cause control problems
singularity_thresh = .00025
for i in range(len(svd_s)):
svd_s[i] = 0 if svd_s[i] < singularity_thresh else \
1./float(svd_s[i])
# numpy returns U,S,V.T, so have to transpose both here
Mx = np.dot(svd_v.T, np.dot(np.diag(svd_s), svd_u.T))
kp = 100
kv = np.sqrt(kp)
# calculate desired force in (x,y,z) space
u_xyz = np.dot(Mx, target_xyz - xyz)
# transform into joint space, add vel and gravity compensation
u = (kp * np.dot(JEE.T, u_xyz) - np.dot(Mq, kv * dq) - Mq_g)
# calculate our secondary control signal
# calculated desired joint angle acceleration
q_des = (((robot_config.rest_angles - q) + np.pi) %
(np.pi*2) - np.pi)
u_null = np.dot(Mq, (kp * q_des - kv * dq))
# calculate the null space filter
Jdyn_inv = np.dot(Mx, np.dot(JEE, np.linalg.inv(Mq)))
null_filter = (np.eye(robot_config.num_joints) -
np.dot(JEE.T, Jdyn_inv))
u_null_filtered = np.dot(null_filter, u_null)
u += u_null_filtered
And there you go!
You can see all of this code up on my GitHub, along with a full example controlling a UR5 VREP model through reaching to a series of targets. It looks pretty much like this (slightly better, because this gif was made before adding in the null space controller):
Overhead of using lambdify instead of hard-coding
This was a big question that I had, because when I’m running simulations the time step is on the order of a few milliseconds, with the controller code called at every time step. So I reaaaally can’t afford a crazy overhead for using lambdify.
To test this, I used the handy Python timeit, which requires a bit of awkward setup but quite nicely calls the function a whole bunch of times (1,000,000 by default) and accounts for various other things going on that could affect the execution time.
I tested two sample functions, one simpler than the other. Here’s the code for setting up and testing the first function:
import timeit
import seaborn
# Test 1 ----------------------------------------------------------------------
print('\nTest function 1: ')
time_sympy1 = timeit.timeit(
stmt = 'f(np.random.random(), np.random.random())',
setup = 'import numpy as np;\
import sympy as sp;\
q0 = sp.Symbol("q0");\
l0 = sp.Symbol("l0");\
a = sp.cos(q0) * l0;\
f = sp.lambdify((q0, l0), a, "numpy")')
print('Sympy lambdify function 1 time: ', time_sympy1)
time_hardcoded1 = timeit.timeit(
stmt = 'np.cos(np.random.random())*np.random.random()',
setup = 'import numpy as np')
print('Hard coded function 1 time: ', time_hardcoded1)
Pretty simple, a bit of a pain in the sympy setup, but other than that not bad at all. The second function I tested was just a random collection of cos and sin calls that resemble what gets computed in a Jacobian:
l1*np.sin(q0 - l0*np.sin(q1)*np.cos(q2) - l2*np.sin(q2) - l0*np.sin(q1) + q0*l0)*np.cos(q0) + l2*np.sin(q0)
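The timing code for this one follows the same pattern as Test 1; here’s a sketch of it (the symbol names q0-q2 and l0-l2 are my assumptions):

print('\nTest function 2: ')
time_sympy2 = timeit.timeit(
    stmt = 'f(*np.random.random(6))',
    setup = 'import numpy as np;\
        import sympy as sp;\
        q0, q1, q2, l0, l1, l2 = sp.symbols("q0 q1 q2 l0 l1 l2");\
        a = l1*sp.sin(q0 - l0*sp.sin(q1)*sp.cos(q2) - l2*sp.sin(q2)\
            - l0*sp.sin(q1) + q0*l0)*sp.cos(q0) + l2*sp.sin(q0);\
        f = sp.lambdify((q0, q1, q2, l0, l1, l2), a, "numpy")')
print('Sympy lambdify function 2 time: ', time_sympy2)

time_hardcoded2 = timeit.timeit(
    stmt = 'l1*np.sin(q0 - l0*np.sin(q1)*np.cos(q2) - l2*np.sin(q2)\
        - l0*np.sin(q1) + q0*l0)*np.cos(q0) + l2*np.sin(q0)',
    setup = 'import numpy as np;\
        q0, q1, q2 = np.random.random(3);\
        l0, l1, l2 = np.random.random(3)')
print('Hard coded function 2 time: ', time_hardcoded2)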
And here’s the results:
So it’s slower for sure, but again this is the difference in time after 1,000,000 function calls, so until some big optimization needs to be done using the SymPy lambdify function is definitely worth the slight gain in execution time for the insane convenience.
The full code for the timing tests here are also up on my GitHub.
## Dynamic movement primitives part 4: Avoiding obstacles – update
Edit: Previously I posted this blog post on incorporating obstacle avoidance, but after a recent comment on the code I started going through it again and found a whole bunch of issues. Enough so that I’ve recoded things and I’m going to repost it now with working examples and a description of the changes I made to get it going. The edited sections will be separated out with these nice horizontal lines. If you’re just looking for the example code, you can find it up on my pydmps repo, here.
Alright. Previously I’d mentioned in one of these posts that DMPs are awesome for generalization and extension, and one of the ways that they can be extended is by incorporating obstacle avoidance dynamics. Recently I wanted to implement these dynamics, and after a bit of finagling I got it working, and so that’s going to be the subject of this post.
There are a few papers that talk about this, but the one we’re going to use is Biologically-inspired dynamical systems for movement generation: automatic real-time goal adaptation and obstacle avoidance by Hoffmann and others from Stefan Schaal’s lab. This is actually the second paper talking about obstacle avoidance and DMPs, and this is a good chance to stress one of the most important rules of implementing algorithms discussed in papers: collect at least 2-3 papers detailing the algorithm (if possible) before attempting to implement it. There are several reasons for this. The first and most important is that there are likely some typos in the equations of any one paper; by comparing across a few papers it’s easier to identify the trickier parts, after which thinking through what the correct form should be is usually straightforward. Secondly, equations are often updated with simplified notation or dynamics in later papers, and you can save yourself a lot of headaches in trying to understand them just by reading a later iteration. I recklessly disregarded this advice, started implementation using a single, earlier paper which had a few key typos in the equations, and spent a lot of time tracking down the problem. This is just a peril inherent in any paper that doesn’t provide tested code, which is almost all, sadface.
OK, now on to the dynamics. Fortunately, I can just reference the previous posts on DMPs here and don’t have to spend any time discussing how we arrive at the DMP dynamics (for a 2D system):
$\ddot{\textbf{y}} = \alpha_y (\beta_y( \textbf{g} - \textbf{y}) - \dot{\textbf{y}}) + \textbf{f},$
where $\alpha_y$ and $\beta_y$ are gain terms, $\textbf{g}$ is the goal state, $\textbf{y}$ is the system state, $\dot{\textbf{y}}$ is the system velocity, and $\textbf{f}$ is the forcing function.
As mentioned, DMPs are awesome because now to add obstacle avoidance all we have to do is add another term
$\ddot{\textbf{y}} = \alpha_y (\beta_y( \textbf{g} - \textbf{y}) - \dot{\textbf{y}}) + \textbf{f} + \textbf{p}(\textbf{y}, \dot{\textbf{y}}),$
where $\textbf{p}(\textbf{y}, \dot{\textbf{y}})$ implements the obstacle avoidance dynamics, and is a function of the DMP state and velocity. Now then, the question is what are these dynamics exactly?
Obstacle avoidance dynamics
It turns out that there is a paper by Fajen and Warren that details an algorithm that mimics human obstacle avoidance. The idea is that you calculate the angle between your current velocity and the direction to the obstacle, and then turn away from the obstacle. The angle between current velocity and direction to the obstacle is referred to as the steering angle, denoted $\varphi$, here’s a picture of it:
So, given some $\varphi$ value, we want to specify how much to change our steering direction, $\dot{\varphi}$, as in the figure below:
If we’re on track to hit the object (i.e. $\varphi$ is near 0) then we steer away hard, and then make your change in direction less and less as the angle between your heading (velocity) and the object is larger and larger. Formally, define $\dot{\varphi}$ as
$\dot{\varphi} = \gamma \;\varphi \;\textrm{exp}(-\beta | \varphi |),$
where $\gamma$ and $\beta$ are constants, which are specified as $1000$ and $\frac{20}{\pi}$ in the paper, respectively.
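To get a feel for these turning dynamics, here’s a quick sketch that plots $\dot{\varphi}$ against $\varphi$ using those constants:

import numpy as np
import matplotlib.pyplot as plt

gamma = 1000.0
beta = 20.0 / np.pi
phi = np.linspace(-np.pi, np.pi, 200)
dphi = gamma * phi * np.exp(-beta * np.abs(phi))

# the commanded turn peaks for small steering angles (i.e. nearly
# head-on) and decays to zero as the steering angle gets larger
plt.plot(phi, dphi)
plt.xlabel('phi (rad)')
plt.ylabel('dphi')
plt.show()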
Edit: OK this all sounds great, but quickly you run into issues here. The first is: how do we calculate $\varphi$? In the paper I was going off of they used a dot product between the $\dot{\textbf{y}}$ vector and the $\textbf{o} - \textbf{y}$ vector to get the cosine of the angle, and then used np.arccos to get the angle from that. There is a big problem with this, however, that’s kind of subtle: you will never get a negative angle when you calculate it this way. Of course, that’s just how np.arccos works, but we need the sign of the angle to be able to tell whether we should be turning left or right to avoid this object!
So we need to use a different way of calculating the angle, one that tells us if the object is in a + or – angle relative to the way we’re headed. To calculate an angle that will give us + or -, we’re going to use the np.arctan2 function.
We want to center things around the way we’re headed, so first we calculate the angle of the direction vector, $\dot{\textbf{y}}$. Then we create a rotation vector, R_dy to rotate the vector describing the direction of the object. This shifts things around so that if we’re moving straight towards the object it’s at 0 degrees, if we’re headed directly away from it, it’s at 180 degrees, etc. Once we have that vector, nooowwww we can find the angle of the direction of the object and that’s what we’re going to use for phi. Huzzah!
# get the angle we're heading in
phi_dy = -np.arctan2(dy[1], dy[0])
R_dy = np.array([[np.cos(phi_dy), -np.sin(phi_dy)],
[np.sin(phi_dy), np.cos(phi_dy)]])
# calculate vector to object relative to body
obj_vec = obstacle - y
# rotate it by the direction we're going
obj_vec = np.dot(R_dy, obj_vec)
# calculate the angle of obj relative to the direction we're going
phi = np.arctan2(obj_vec[1], obj_vec[0])
This $\dot{\varphi}$ term can be thought of as a weighting, telling us how much we need to rotate based on how close we are to running into the object. To calculate how we should rotate we’re going to calculate the angle orthonormal to our current velocity, then weight it by the distance between the object and our current state on each axis. Formally, this is written:
$\textbf{R} \; \dot{\textbf{y}},$
where $\textbf{R}$ is the axis $(\textbf{o} - \textbf{y}) \times \dot{\textbf{y}}$ rotated 90 degrees (the $\times$ denoting outer product here). The way I’ve been thinking about this is basically taking your velocity vector, $\dot{\textbf{y}}$, and rotating it 90 degrees. Then we use this rotated vector as a row vector, and weight the top row by the distance between the object and the system along the $x$ axis, and the bottom row by the distance along the $y$ axis. So in the end we’re calculating the angle that rotates our velocity vector 90 degrees, weighted by distance to the object along each axis.
So that whole thing takes into account absolute distance to object along each axis, but that’s not quite enough. We also need to throw in $\dot{\varphi}$, which looks at the current angle. What this does is basically look at ‘hey are we going to hit this object?’, if you are on course then make a big turn and if you’re not then turn less or not at all. Phew.
OK so all in all this whole term is written out
$\textbf{p}(\textbf{y}, \dot{\textbf{y}}) = \textbf{R} \; \dot{\textbf{y}} \; \dot{\varphi},$
and that’s what we add in to the system acceleration. And now our DMP can avoid obstacles! How cool is that?
Super compact, straight-forward to add, here’s the code:
Edit: OK, so not suuuper compact. I’ve also added in another couple of checks. The big one is “Is this obstacle between us and the target or not?”. So I calculate the Euclidean distance to the goal and the obstacle, and if the obstacle is further away than the goal, don’t bother doing any avoidance! This took care of a few weird errors where you would get a big deviation in the trajectory if the system saw an obstacle past the goal: it was planning on avoiding it, but was pulled in to the goal before the obstacle anyway, so it was a pointless exercise. The other check added in is just to make sure you only add in obstacle avoidance if the system is actually moving. Finally, I also changed $\gamma$ to $100$ instead of $1000$.
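One more note before the code: the snippet below relies on a few module-level values (gamma, beta, the obstacles list, and a 90 degree rotation matrix R_halfpi) that are assumed to be set up ahead of time, something like this sketch:

import numpy as np

beta = 20.0 / np.pi
gamma = 100  # reduced from 1000, per the edit above
# rotation matrix for rotating a 2D vector by 90 degrees
R_halfpi = np.array([[np.cos(np.pi / 2.0), -np.sin(np.pi / 2.0)],
                     [np.sin(np.pi / 2.0), np.cos(np.pi / 2.0)]])
# e.g. a few obstacles scattered around the field
obstacles = np.random.random((4, 2)) * 2 - 1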
def avoid_obstacles(y, dy, goal):
p = np.zeros(2)
for obstacle in obstacles:
# based on (Hoffmann, 2009)
# if we're moving
if np.linalg.norm(dy) > 1e-5:
# get the angle we're heading in
phi_dy = -np.arctan2(dy[1], dy[0])
R_dy = np.array([[np.cos(phi_dy), -np.sin(phi_dy)],
[np.sin(phi_dy), np.cos(phi_dy)]])
# calculate vector to object relative to body
obj_vec = obstacle - y
# rotate it by the direction we're going
obj_vec = np.dot(R_dy, obj_vec)
# calculate the angle of obj relative to the direction we're going
phi = np.arctan2(obj_vec[1], obj_vec[0])
dphi = gamma * phi * np.exp(-beta * abs(phi))
R = np.dot(R_halfpi, np.outer(obstacle - y, dy))
pval = -np.nan_to_num(np.dot(R, dy) * dphi)
# check to see if the distance to the obstacle is further than
# the distance to the target, if it is, ignore the obstacle
if np.linalg.norm(obj_vec) > np.linalg.norm(goal - y):
pval = 0
p += pval
return p
And that’s it! Just add this method in to your DMP system and call avoid_obstacles at every timestep, and add it in to your DMP acceleration.
You hopefully noticed in the code that this is set up for multiple obstacles, and all that entailed was simply adding up the p value generated by each individual obstacle. It’s super easy! Here’s a very basic graphic showing how the DMP system can avoid obstacles:
So here there’s just a basic attractor system (DMP without a forcing function) trying to move from the center position to 8 targets around the unit circle (which are highlighted in red), and there are 4 obstacles that I’ve thrown onto the field (black x’s). As you can see, the system successfully steers way clear of the obstacles while moving towards the target!
We must all use this power wisely.
Edit: Making the power yours is now easier than ever! You can check out this code at my pydmps GitHub repo. Clone the repo and after you python setup.py develop, change directories into the examples folder and run the avoid_obstacles.py file. It will randomly generate 5 targets in the environment and perform 20 movements, giving you something looking like this:
## Using VREP for simulation of force-controlled models
I’ve been playing around a bit with different simulators, and one that we’re a big fan of in the lab is VREP. It’s free for academics and you can talk to them about licences if you’re looking for commercial use. I haven’t actually had much experience with it before myself, so I decided to make a simple force controlled arm model to get experience using it. All in all, there were only a few weird API things that I had to get through, and once you have them down it’s pretty straight forward. This post is going to be about the steps that I needed to take to get things all set up. For a more general start-up on VREP check out All the code in this post and the model I use can be found up on my GitHub.
Getting the right files where you need them
As discussed in the remote API overview, you’ll need three files in whatever folder you want to run your Python script from to be able to hook into VREP remotely:
• remoteApi.dll, remoteApi.dylib or remoteApi.so (depending on what OS you’re using)
• vrep.py
• vrepConstants.py
You can find these files inside your VREP_HOME/programming/remoteApiBindings/python/python and VREP_HOME/programming/remoteApiBindings/lib/lib folders. Make sure that these files are in whatever folder you’re running your Python scripts from.
The model
It’s easy to create a new model to mess around with in VREP, so that’s the route I went, rather than importing one of their pre-made models and having some sneaky parameter setting cause me a bunch of grief. You can just right click->add then go at it. There are a bunch of tutorials so I’m not going to go into detail here. The main things are:
• Make sure joints are in ‘Torque/force’ mode.
• Make sure that joint ‘Motor enabled’ property is checked. The motor enabled property is in the ‘Show dynamic properties dialogue’ menu, which you find when you double click on the joint in the Scene Hierarchy.
• Know the names of the joints as shown in the Scene Hierarchy.
So here’s a picture:
where you can see the names of the objects in the model highlighted in red, the Torque/force selection highlighted in blue, and the Motor enabled option highlighted in green. And of course my beautiful arm model in the background.
Setting up the continuous server
The goal is to connect VREP to Python so that we can send torques to the arm from our Python script and get the feedback necessary to calculate those torques. There are a few ways to set up a remote connection in VREP.
The basic one is they have you add a child script in VREP and attach it to an object in the world that sets up a port and then you hit go on your simulation and can then run your Python script to connect in. This gets old real fast. Fortunately, it’s easy to automate everything from Python so that you can connect in, start the simulation, run it for however long, and then stop the simulation without having to click back and forth.
The first step is to make sure that your remoteApiConnections.txt file in your VREP home directory is set up properly. A continuous server should be set up by default, but it doesn’t hurt to double check. And you can take the chance to turn on debugging, which can be pretty helpful. So open up that file and make sure it looks like this:
portIndex1_port = 19997
portIndex1_debug = true
portIndex1_syncSimTrigger = true
Once that’s set up, when VREP starts we can connect in from Python. In our Python script, first we’ll close any open connections that might be around, and then we’ll set up a new connection in:
import vrep
# close any open connections
vrep.simxFinish(-1)
# Connect to the V-REP continuous server
clientID = vrep.simxStart('127.0.0.1', 19997, True, True, 500, 5)
if clientID != -1: # if we connected successfully
print ('Connected to remote API server')
So once the connection is made, we check the clientID value to make sure that it didn’t fail, and then we carry on with our script.
Synchronizing
By default, VREP will run its simulation in its own thread, and both the simulation and the controller using the remote API will be running simultaneously. This can lead to some weird behaviour as things fall out of synch etc etc, so what we want instead is for the VREP sim to only run one time step for each time step the controller runs. To do that, we need to set VREP to synchronous mode. So the next few lines of our Python script look like:
# --------------------- Setup the simulation
vrep.simxSynchronous(clientID,True)
and then later, once we’ve calculated our control signal, sent it over, and want the simulation to run one time step forward, we do that by calling
# move simulation ahead one time step
vrep.simxSynchronousTrigger(clientID)
Get the handles to objects of interest
OK the next chunk of code in our script uses the names of our objects (as specified in the Scene Hierarchy in VREP) to get an ID for each which to identify which object we want to send a command to or receive feedback from:
joint_names = ['shoulder', 'elbow']
# joint target velocities discussed below
joint_target_velocities = np.ones(len(joint_names)) * 10000.0
# get the handles for each joint and set up streaming
joint_handles = [vrep.simxGetObjectHandle(clientID,
name, vrep.simx_opmode_blocking)[1] for name in joint_names]
# get handle for target and set up streaming
_, target_handle = vrep.simxGetObjectHandle(clientID,
'target', vrep.simx_opmode_blocking)
Set dt and start the simulation
And the final thing that we’re going to do in our simulation set up is specify the timestep that we want to use, and then start the simulation. I found this in a forum post, and I must say whatever VREP lacks in intuitive API their forum moderators are on the ball. NOTE: To use a custom time step you have to also set the dt option in the VREP simulator to ‘custom’. The drop down is to the left of the ‘play’ arrow, and if you don’t have it set to ‘custom’ you won’t be able to set the time step through Python.
dt = .01
vrep.simxSetFloatingParameter(clientID,
vrep.sim_floatparam_simulation_time_step,
dt, # specify a simulation time step
vrep.simx_opmode_oneshot)
# --------------------- Start the simulation
# start our simulation in lockstep with our code
vrep.simxStartSimulation(clientID,
vrep.simx_opmode_blocking)
Main loop
For this next chunk I’m going to cut out everything that’s not VREP, since I have a bunch of posts explaining the control signal derivation and forward transformation matrices.
So, once we’ve started the simulation, I’ve set things up for the arm to be controlled for 1 second and then for the simulation to stop and everything shut down and disconnect.
while count < 1: # run for 1 simulated second
# get the (x,y,z) position of the target
_, target_xyz = vrep.simxGetObjectPosition(clientID,
target_handle,
-1, # retrieve absolute, not relative, position
vrep.simx_opmode_blocking)
if _ !=0 : raise Exception()
track_target.append(np.copy(target_xyz)) # store for plotting
target_xyz = np.asarray(target_xyz)
q = np.zeros(len(joint_handles))
dq = np.zeros(len(joint_handles))
for ii,joint_handle in enumerate(joint_handles):
# get the joint angles
_, q[ii] = vrep.simxGetJointPosition(clientID,
joint_handle,
vrep.simx_opmode_blocking)
if _ !=0 : raise Exception()
# get the joint velocity
_, dq[ii] = vrep.simxGetObjectFloatParameter(clientID,
joint_handle,
2012, # ID for angular velocity of the joint
vrep.simx_opmode_blocking)
if _ !=0 : raise Exception()
In the above chunk of code, I think the big thing to point out is that I’m using vrep.simx_opmode_blocking in each call, instead of vrep.simx_opmode_buffer. The difference is that with blocking you’re guaranteed to get the current values from the simulation, while with buffer you can be a time step behind.
Aside from that, the other notable things are that I raise an exception if the first parameter (which is the return code) is ever not 0, and that I use simxGetObjectFloatParameter to get the joint velocity instead of simxGetObjectVelocity, which has a rotational velocity component. Zero is the return code for ‘everything worked’, and if you don’t check it and have some silly things going on you can be veeerrrrryyy mystified when things don’t work. And what simxGetObjectVelocity returns is the rotational velocity of the joint relative to the world frame, and not the angular velocity of the joint in its own coordinates. That was also briefly confusing.
So the next thing I do is calculate u, which we’ll skip, and then we need to set the forces for the joint. This part of the API is real screwy. You can’t set the force applied to the joint directly. Instead, you have to set the target velocity of the joint to some really high value (hence that array we made before), and then modulate the maximum force that can be applied to that joint. Also important: When you want to apply a force in the other direction, you change the sign on the target velocity, but keep the force sign positive.
# ... calculate u ...
for ii,joint_handle in enumerate(joint_handles):
# get the current joint torque
_, torque = \
vrep.simxGetJointForce(clientID,
joint_handle,
vrep.simx_opmode_blocking)
if _ !=0 : raise Exception()
# if force has changed signs,
# we need to change the target velocity sign
if np.sign(torque) * np.sign(u[ii]) < 0:
joint_target_velocities[ii] = \
joint_target_velocities[ii] * -1
_ = vrep.simxSetJointTargetVelocity(clientID,
joint_handle,
joint_target_velocities[ii], # target velocity
vrep.simx_opmode_blocking)
if _ !=0 : raise Exception()
# and now modulate the force
_ = vrep.simxSetJointForce(clientID,
joint_handle,
abs(u[ii]), # force to apply
vrep.simx_opmode_blocking)
if _ !=0 : raise Exception()
# move simulation ahead one time step
vrep.simxSynchronousTrigger(clientID)
count += dt
So as you can see we check the current torque, see if we need to change the sign on the target velocity, modulate the maximum allowed force, and then finally step the VREP simulation forward.
Conclusions
And there you go! Here’s an animation of it in action (note this is a super low quality gif and it looks way better / smoother when actually running it yourself):
All in all, VREP has been enjoyable to work with so far. It didn’t take long to get things moving and off the ground, the visualization is great, and I haven’t even scratched the surface of what you can do with it. Best of all (so far) you can fully automate everything from Python. Hopefully this is enough to help some people get their own models going and save a few hours and headaches! Again, the full code and the model are up on my GitHub.
Nits
• When you’re applying your control signal, make sure you test each joint in isolation, to make sure your torques push things in the direction you think they do. I had checked the rotation direction in VREP, but the control signal for both joints ended up needing to be multiplied by -1.
• Another nit when you’re building your model, if you use the rotate button from the VREP toolbar on your model, wherever that joint rotates to is now 0 degrees. If you want to set the joint to start at 45 degrees, instead double click and change Pos[deg] option inside ‘Joint’ in Scene Object Properties.
## Deep learning for control using augmented Hessian-free optimization
Traditionally, deep learning is applied to feed-forward tasks, like classification, where the output of the network doesn’t affect the input to the network. It is a decidedly harder problem when the output is recurrently connected such that network output affects the input. Because of this, the application of deep learning methods to control was largely unexplored until a few years ago. Recently, however, there’s been a lot of progress and research in this area. In this post I’m going to talk about an implementation of deep learning for control presented by Dr. Ilya Sutskever in his thesis Training Recurrent Neural Networks.
In his thesis, Dr. Sutskever uses augmented Hessian-free (AHF) optimization for learning. There are a bunch of papers and posts that go into details about AHF, here’s a good one by Andrew Gibiansky up on his blog, that I recommend you check out. I’m not going to really talk much here about what AHF is specifically, or how it differs from other methods, if you’re unfamiliar there are lots of places you can read up on it. Quickly, though, AHF is kind of a bag of tricks you can use with a fast method for estimating the curvature of the loss function with respect to the weights of a neural network, as well as the gradient, which allows it to make larger updates in directions where the loss function doesn’t change quickly. So rather than estimating the gradient and then doing a small update along each dimension, you can make the size of your update large in directions that change slowly and small along dimensions where things change quickly. And now that’s enough about that.
In this post I’m going to walk through using a Hessian-free optimization library (version 0.3.8) written by my compadre Dr. Daniel Rasmussen to train up a neural network to train up a 2-link arm, and talk about the various hellish gauntlets you need run to get something that works. Whooo! The first thing to do is install this Hessian-free library, linked above.
I’ll be working through code edited a bit for readability, to find the code in full you can check out the files up on my GitHub.
Build the network
Dr. Sutskever specified the structure of the network in his thesis to be 4 layers: 1) a linear input layer, 2) 100 Tanh nodes, 3) 100 Tanh nodes, 4) linear output layer. The network is connected up with the standard feedforward connections from 1 to 2 to 3 to 4, plus recurrent connections on 2 and 3 to themselves, plus a ‘skip’ connection from layer 1 to layer 3. Finally, the input to the network is the target state for the plant and the current state of the plant. So, lots of recursion! Here’s a picture:
The output layer connects in to the plant, and, for those unfamiliar with control theory terminology, ‘plant’ just means the system that you’re controlling. In this case an arm simulation.
Before we can go ahead and set up the network that we want to train, we also need to specify the loss function that we’re going to be using during training. The loss function in Ilya’s thesis is a standard one:
$L(\theta) = \sum\limits^{N-1}\limits_{t=0} \ell(\textbf{u}_t) + \ell_f(\textbf{x}_N),$
$\ell(\textbf{u}_t) = \alpha \frac{||\textbf{u}_t||^2}{2},$
$\ell_f(\textbf{x}_N) = \frac{||\textbf{x}^* - \textbf{x}_N||^2}{2}$
where $L(\theta)$ is the total cost of the trajectory generated with $\theta$, the set of network parameters, $\ell(\textbf{u})$ is the immediate state cost, $\ell_f(\textbf{x})$ is the final state cost, $\textbf{x}$ is the state of the arm, $\textbf{x}^*$ is the target state of the arm, $\textbf{u}$ is the control signal (torques) that drives the arm, and $\alpha$ is a gain value.
To code this up using the hessianfree library we do:
from hessianfree import RNNet
from hessianfree.nonlinearities import (Tanh, Linear, Plant)
from hessianfree.loss_funcs import SquaredError, SparseL2
l2gain = 10e-3 * dt # gain on control signal loss
rnn = RNNet(
# specify the number of nodes in each layer
shape=[num_states * 2, 96, 96, num_states, num_states],
# specify the function of the nodes in each layer
layers=[Linear(), Tanh(), Tanh(), Linear(), plant],
# specify the layers that have recurrent connections
rec_layers=[1,2],
# specify the connections between layers
conns={0:[1, 2], 1:[2], 2:[3], 3:[4]},
# specify the loss function
loss_type=[
# squared error between plant output and targets
SquaredError(),
# penalize magnitude of control signal (output of layer 3)
SparseL2(l2gain, layers=[3])],
use_GPU=True)
Note that if you want to run it on your GPU you’ll need PyCuda and sklearn installed. And a GPU.
An important thing to note as well is that in Dr. Sutskever’s thesis, when we’re calculating the squared error of the distance from the arm state to the target, this is measured in joint angles. So it’s kind of a weird set up to be looking at the movement of the hand but have your cost function in joint-space instead of end-effector space, but it definitely simplifies training by making the cost more directly relatable to the control signal. So we need to calculate the joint angles of the arm that will have the hand at different targets around a circle. To do this we’ll take advantage of our inverse kinematics solver from way back when, and use the following code:
def gen_targets(arm, n_targets=8, sig_len=100):
#Generate target angles corresponding to target
#(x,y) coordinates around a circle
import scipy.optimize
x_bias = 0
if arm.DOF == 2:
y_bias = .35
dist = .075
elif arm.DOF == 3:
y_bias = .5
dist = .2
# set up the reaching trajectories around circle
targets_x = [dist * np.cos(theta) + x_bias \
for theta in np.linspace(0, np.pi*2, n_targets+1)][:-1]
targets_y = [dist * np.sin(theta) + y_bias \
for theta in np.linspace(0, np.pi*2, n_targets+1)][:-1]
joint_targets = []
for ii in range(len(targets_x)):
joint_targets.append(arm.inv_kinematics(xy=(targets_x[ii],
targets_y[ii])))
targs = np.asarray(joint_targets)
# repeat the targets over time
for ii in range(targs.shape[1]-1):
targets = np.concatenate(
(np.outer(targs[:, ii], np.ones(sig_len))[:, :, None],
np.outer(targs[:, ii+1], np.ones(sig_len))[:, :, None]), axis=-1)
targets = np.concatenate((targets, np.zeros(targets.shape)), axis=-1)
# only want to penalize the system for not being at the
# target at the final state, set everything before to np.nan
targets[:, :-1] = np.nan
return targets
And you can see in the last couple lines that to implement the distance to target as a final state cost penalty only we just set all of the targets before the final time step equal to np.nan. If we wanted to penalize distance to target throughout the whole trajectory we would just comment that line out.
Create the plant
You’ll notice in the code that defines our RNN I set the last layer of the network to be plant, but that that’s not defined anywhere. Let’s talk. There are a couple of things that we’re going to need to incorporate our plant into this network and be able to use any deep learning method to train it. We need to be able to:
1. Simulate the plant forward; i.e. pass in input and get back the resulting plant state at the next timestep.
2. Calculate the derivative of the plant state with respect to the input; i.e. how do small changes in the input affect the state.
3. Calculate the derivative of the plant state with respect to the previous state; i.e. how do small changes in the plant state affect the state at the next timestep.
4. Calculate the derivative of the plant output with respect to its state; i.e. how do small changes in the current position of the state affect the output of the plant.
So 1 is easy, we have the arm simulations that we want already, they’re up on my GitHub. Number 4 is actually trivial too, because the output of our plant is going to be the state itself, so the derivative of the output with respect to the state is just the identity matrix.
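In code, point 4 amounts to a tiled identity matrix; a sketch with illustrative shapes, matching the trials-first array layout used below:

import numpy as np

num_trials, num_states = 32, 4  # illustrative shapes
# output == state, so d(output)/d(state) is the identity for every trial
d_output_FD = np.tile(np.eye(num_states)[None, :, :], (num_trials, 1, 1))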
For 2 and 3, we’re going to need to calculate some derivatives. If you’ve read the last few posts you’ll note that I’m on a finite differences kick. So let’s get that going! Because no one wants to calculate derivatives!
Important note, the notation in these next couple pieces of code is going to be a bit different from my normal notation because they’re matching with the hessianfree library notation, which is coming from a reinforcement learning literature background instead of a control theory background. So, s is the state of the plant, and x is the input to the plant. I know, I know. All the same, make sure to keep that in mind.
# calculate ds0/dx0 with finite differences
d_input_FD = np.zeros((x.shape[0], # number of trials
x.shape[1], # number of inputs
self.state.shape[1])) # number of states
for ii in range(x.shape[1]):
# calculate state adding eps to x[ii]
self.reset_plant(self.prev_state)
inc_x = x.copy()
inc_x[:, ii] += self.eps
self.activation(inc_x)
state_inc = self.state.copy()
# calculate state subtracting eps from x[ii]
self.reset_plant(self.prev_state)
dec_x = x.copy()
dec_x[:, ii] -= self.eps
self.activation(dec_x)
state_dec = self.state.copy()
d_input_FD[:, :, ii] = \
(state_inc - state_dec) / (2 * self.eps)
d_input_FD = d_input_FD[..., None]
Alrighty. First we create a tensor to store the results. Why is it a tensor? Because we’re going to be doing a bunch of runs at once, so our input dimensions are actually trials x size_input. When we then take the partial derivative, we end up with trials many size_input x size_state matrices. Then we increase each of the parameters of the input slightly, one at a time, and store the results, decrease them all one at a time and store the results, and compute our approximation of the gradient.
Next we’ll do the same for calculating the derivative of the state with respect to the previous state.
# calculate ds1/ds0
d_state_FD = np.zeros((x.shape[0], # number of trials
self.state.shape[1], # number of states
self.state.shape[1])) # number of states
for ii in range(self.state.shape[1]):
# calculate state adding eps to self.state[ii]
state = np.copy(self.prev_state)
state[:, ii] += self.eps
self.reset_plant(state)
self.activation(x)
state_inc = self.state.copy()
# calculate state subtracting eps from self.state[ii]
state = np.copy(self.prev_state)
state[:, ii] -= self.eps
self.reset_plant(state)
self.activation(x)
state_dec = self.state.copy()
d_state_FD[:, :, ii] = \
(state_inc - state_dec) / (2 * self.eps)
d_state_FD = d_state_FD[..., None]
Great! We’re getting closer to having everything we need. Another thing we need is a wrapper for running our arm simulation. It’s going to look like this:
def activation(self, x):
state = []
# iterate through and simulate the plant forward
# for each trial
for ii in range(x.shape[0]):
self.arm.reset(q=self.state[ii, :self.arm.DOF],
dq=self.state[ii, self.arm.DOF:])
self.arm.apply_torque(x[ii])
state.append(np.hstack([self.arm.q, self.arm.dq]))
state = np.asarray(state)
self.state = self.squashing(state)
This is definitely not the fastest code to run. Ideally we would put the state and input into vectors and do a single set of computations for each call to activation, rather than having that for loop in there. Unfortunately, though, we’re not assuming that we have access to the dynamics equations / will be able to pass in vector states and inputs.
Squashing
Looking at the above code, it’s pretty clear what’s going on, except you might notice that last line calling self.squashing. What’s going on there?
The squashing function looks like this:
def squashing(self, x):
index_below = np.where(x < -2*np.pi)
x[index_below] = np.tanh(x[index_below]+2*np.pi) - 2*np.pi
index_above = np.where(x > 2*np.pi)
x[index_above] = np.tanh(x[index_above]-2*np.pi) + 2*np.pi
return x
All that’s happening here is that we’re taking our input, and doing nothing to it as long as it doesn’t start to get too positive or too negative. If it does then we just taper it off and prevent it from going off to infinity. So running a 1D vector through this function we get:
This ends up being a pretty important piece of the code here. Basically it prevents wild changes to the weights during learning to result in the system breaking down. So the state of the plant can’t go off to infinity and cause an error to be thrown, stopping our simulation. But because the target state is well within the bounds of where the squashing function does nothing, post-training we’ll still be able to use the resulting network to control a system that doesn’t have this fail safe built in. Think of this function as training wheels that catch you only if you start falling.
With that, we now have pretty much all of the parts necessary to begin training our network!
Training the network
We’re going to be training this network on the centre-out reaching task, where you start at a centre point and reach to a bunch of target locations around a circle. I’m just going to be re-implementing the task as it was done in Dr. Sutskever’s thesis, so we’ll have 64 targets around the circle, and train using a 2-link arm. Here’s the code that we’ll use to actually run the training:
for ii in range(last_trial+1, num_batches):
# train a bunch of batches using the same input every time
# to allow the network a chance to minimize things with
# stable input (speeds up training)
err = rnn.run_batches(plant, targets=None,
max_epochs=batch_size,
optimizer=HessianFree(CG_iter=96, init_damping=100))
# save the weights to file, track trial and error
# err = rnn.error(inputs)
err = rnn.best_error
name = 'weights/rnn_weights-trial%04i-err%.3f'%(ii, err)
np.savez_compressed(name, rnn.W)
A quick aside: if you want to run this code yourself, get a real good computer, have an arm simulation ready, the hessianfree Python library installed, and download and run this train_hf.py file. (Note: I used version 0.3.8 of the hessianfree library, which you can install using pip install hessianfree==0.3.8) This will start training and save the weights into a weights/ folder, so make sure that that exists in the same folder as train_hf.py. If you want to view the results of the training at any point run the plot_error.py file, which will load in the most recent version of the weights and plot the error so far. If you want to generate an animated plot like I have below run gen_animation_plots.py and then the last command from my post on generating animated gifs.
Another means of seeing the results of your trained up network is to use the controller I’ve implemented in my controls benchmarking suite, which looks for a set of saved weights in the controllers/weights folder, and will load it in and use it to generate command signals for the arm by running it with
python run.py arm2_python ahf reach --dt=1e-2
where you replace arm2_python with whatever arm model you trained your model on. Note the --dt=1e-2 flag, that is important because the model was trained with a .01 timestep and things get a bit weird if you suddenly change the dynamics on the controller.
OK let’s look at some results!
Results
OK, full disclosure: these results are not optimizing the cost function we discussed above. They’re implementing a simpler cost function that only looks at the final state, i.e. it doesn’t penalize the magnitude of the control signal. I did this because Dr. Sutskever says in his thesis he was able to optimize with just the final state cost using much smaller networks. I originally looked at networks with 96 neurons in each layer, and it just took forgoddamnedever to run. So after running for 4 weeks (not joking) and needing to make some more changes, I dropped the number of neurons and simplified the task.
The results below are from running a network with 32 neurons in each layer controlling this 2-link arm, and took another 4-5 weeks to train up.
Hey, that looks good! Not bad, augmented Hessian-free learning, not bad. It had a pretty consistent (if slow) decline in the error rate, with a few crazy bumps from which it quickly recovered. Also take note that each training iteration is actually 32 runs, so it’s not 12,500-ish runs, it’s closer to 400,000 training runs that it took to get here.
One biggish thing that was a pain was that it turns out I only trained the neural network for reaching in the one direction, and when you only train it to reach one way it doesn’t generalize to reaching back to the starting point (which, fair enough). But I didn’t realize this until I took the trained network and ran it in the benchmarking code, at which point I was not keen to redo all of the training it took to get the neural network to its level of accuracy under a more complicated training set. The downside of this is that even though I’ve implemented a controller that takes in the trained network and uses it to control the arm, to do the reaching task I have to just do a hard reset after the arm reaches the target, because it can’t reach back to the center like all the other controllers. All the same, here’s an animation of the trained up AHF controller reaching to 8 targets (it was trained on all 64 above though):
Things don’t always go so smoothly, though. Here’s results from another training run that took around 2-3 weeks, and uses a different 2-link arm model (translated from Matlab code written by Dr. Emo Todorov):
What I found frustrating about this was that if you look at the error over time, this arm is doing as well or better than the previous arm at a lot of points. But the corresponding trajectories look terrible, like something you would see in a horror movie based around getting good machine learning results. This of course comes down to how I specified the cost function: when I looked at the trajectories plotted over time, the velocity of the arm is right at zero at the final time step, which is not quiiiitte the case for the first controller. So this second network has found a workaround to minimize the cost function I specified in a way I did not intend. To prevent this, doing something like weighting the distance to target heavier than non-zero velocity would probably work. Or possibly just rerunning the training from a different random starting point would give you a better controller; I don’t have a great feel for how important the random initialization is, but I’m hoping that it’s not all too important and its effects go to zero with enough training. Also, it should be noted I’ve run the first network for 12,500 iterations and the second for less than 6,000, so I’ll keep letting them run and maybe it will come around. The first one looked pretty messy too until about 4,000 iterations in.
Training regimes
Frustratingly, the way that you train deep networks is very important. So, very much like the naive deep learning network trainer that I am, I tried the first thing that pretty much anyone would try:
• run the network,
• update the weights,
• repeat.
This is what I’ve done in the results above. And it worked well enough in that case.
If you remember back to the iLQR I made a little while ago, I was able to change the cost function to be
$L(\theta) = \sum\limits^{N-1}\limits_{t=0} \ell(\textbf{u}_t) + \ell_f(\textbf{x}_N),$
$\ell(\textbf{u}_t, \textbf{x}_t) = \alpha \frac{||\textbf{u}_t||^2}{2} + \frac{||\textbf{x}^* - \textbf{x}_t||^2}{2},$
$\ell_f(\textbf{x}_N) = \frac{||\textbf{x}^* - \textbf{x}_N||^2}{2}$
(i.e. to include a penalty for distance to target throughout the trajectory and not just at the final time step) which resulted in straighter trajectories when controlling the 2-link arm. So I thought I would try this here as well. Sadly (incredibly sadly), this was fairly fruitless. The network didn’t really learn or improve much at all.
After much consideration and quandary on my part, I talked with Dr. Dan and he suggested that I try another method:
• run the network,
• record the input,
• hold the input constant for a few batches of weight updating,
• repeat.
This method gave much better results. BUT WHY? I hear you ask! Good question. Let me give explaining it a go.
Essentially, it’s because the cost function is more complex now. In the first training method, the output from the plant is fed back into the network as input at every time step. When the cost function was simpler this was OK, but now we’re getting very different input to train on at every iteration. So the system is being pulled in different directions back and forth at every iteration. In the second training regime, the same input is given several times in a row, which let’s the system follow the same gradient for a few training iterations before things change again. In my head I picture this as giving the algorithm a couple seconds to catch its breath dunking it back underwater.
This is a method that’s been used in a bunch of places recently. One of the more high-profile instances is in the results published from DeepMind on deep RL for control and for playing Go. And indeed, it also works well here.
To implement this training regime, we set up the following code:
for ii in range(last_trial+1, num_batches):
# run the plant forward once
rnn.forward(input=plant, params=rnn.W)
# get the input and targets from above rollout
inputs = plant.get_vecs()[0].astype(np.float32)
targets = np.asarray(plant.get_vecs()[1], dtype=np.float32)
# train a bunch of batches using the same input every time
# to allow the network a chance to minimize things with
# stable input (speeds up training)
err = rnn.run_batches(inputs, targets, max_epochs=batch_size,
optimizer=HessianFree(CG_iter=96, init_damping=100))
# save the weights to file, track trial and error
# err = rnn.error(inputs)
err = rnn.best_error
name = 'weights/rnn_weights-trial%04i-err%.3f'%(ii, err)
np.savez_compressed(name, rnn.W)
So you can see that we do one rollout with the weights, then go in and get the inputs and targets that were used in that rollout, and start training the network while holding those constant for batch_size epochs (training sessions). From a little bit of messing around I’ve found batch_size=32 to be a pretty good number. So then it runs 32 training iterations where it’s updating the weights, and then saves those weights (because we want a loooottttt of check-points) and then restarts the loop.
Embarrassingly, I’ve lost my simulation results from this trial, somehow…so I don’t have any nice plots to back up the above, unfortunately. But since this is just a blog post I figured I would at least talk about it a little bit, since people might still find it useful if they’re just getting into the field like me. and just update this post whenever I re-run them. If I rerun them.
What I do have, however, are results where this method doesn’t work! I tried this with the simpler cost function, that only looks at the final state distance from the target, and it did not go so well. Let’s look at that one!
My guess here is basically that the system has gotten to a point where it’s narrowed things down in the parameter space and now when you run 32 batches it’s overshooting. It needs feedback about its updates after every update at this point. That’s my guess, at least. So it could be the case that for more complex cost functions you’d want to train it while holding the input constant for a while, and then when the error starts to plateau switch to updating the input after every parameter update.
Conclusions
All in all, AHF for training neural networks in control is pretty awesome. There are of course still some major hold-backs, mostly related to how long it takes to train up a network, and having to guess at effective training regimes and network structures etc. But! It was able to train up a relatively small neural network to move an arm model from a center point to 64 targets around a circle, with no knowledge of the system under control at all. In Dr. Sutskever’s thesis he goes on to use the same set up under more complicated circumstances, such as when there’s a feedback delay, or a delay on the outgoing control signal, and unexpected noise etc, so it is able to learn under a number of different, fairly complex situations. Which is pretty slick.
Related to the insane training time required, I very easily could be missing some basic thing that would help speed things up. If you, reader, get ambitious and run the code on your own machine and find out useful methods for speeding up the training please let me know! Personally, my plan is to next investigate guided policy search, which seems like it’s found a way around this crazy training time.
## Simultaneous perturbation vs finite differences for linear dynamics estimation and control signal optimization
Recently in my posts I’ve been using finite differences to approximate the gradient of loss functions and dynamical systems, with the intention of creating generalizable controllers that can be run on any system without having to calculate out derivatives beforehand. Finite differences is pretty much the most straight-forward way of approximating a gradient that there is: vary each parameter up and down (assuming we’re doing central differencing), one at a time, run it through your function, and estimate each parameter’s effect on the system by calculating the difference between the resulting function outputs. Doing this requires 2 samples of the function for each parameter.
But there’s always more than one way to peel an avocado, and another approach that’s been used with much success is the Simultaneous Perturbation Stochastic Approximation (SPSA) algorithm, which was developed by Dr. James Spall (link to overview paper). SPSA is a method of gradient approximation, like finite differences, but, critically, the difference is that it varies all of the parameters at once, rather than one at a time. As a result, you can get an approximation of the gradient with far fewer samples from the system, and also when you don’t have explicit control over your samples (i.e. the ability to vary each parameter one at a time).
Given some function $\textbf{f}$ dependent on some set of parameters $\textbf{x}$, we’re used to finding the gradient of $\textbf{f}(\textbf{x})$ using FDSA (finite differences stochastic approximation) written in this form:
$\textbf{f}_x = \frac{\textbf{f}(\textbf{x} + \Delta \textbf{x}) - \textbf{f}(\textbf{x} - \Delta \textbf{x})}{2 \Delta \textbf{x}} = \Delta x^{-1} \frac{\textbf{f}(\textbf{x} + \Delta \textbf{x}) - \textbf{f}(\textbf{x} - \Delta \textbf{x})}{2},$
where $\Delta\textbf{x}$ is a perturbation to the parameter set, and the subscript on the left-hand side denotes the derivative of $\textbf{f}(\textbf{x})$ with respect to $\textbf{x}$.
And that’s how we’ve calculated it before, estimating the gradient of a single parameter at a time. But, we can rewrite this for a set of perturbations $\Delta \textbf{X} = [\Delta\textbf{x}_0, ... , \Delta\textbf{x}_N]^T$:
$\textbf{f}_\textbf{x} = \Delta \textbf{X}^{-1} \textbf{F},$
where
$\textbf{F} = [\frac{\textbf{f}(\textbf{x} + \Delta \textbf{x}_0) - \textbf{f}(\textbf{x} - \Delta \textbf{x}_0)}{2}, ... , \frac{\textbf{f}(\textbf{x} + \Delta \textbf{x}_N) - \textbf{f}(\textbf{x} - \Delta \textbf{x}_N)}{2}]^T$,
which works as long as $\Delta\textbf{X}$ is square. When it’s not square (i.e. we don’t have the same number of samples as we have parameters), we run into problems, because we can’t calculate $\Delta\textbf{X}^{-1}$ directly. To address this, let’s take a step back and then work forward again to get a more general form that works for non-square $\Delta \textbf{X}$ too.
By rewriting the above, and getting rid of the inverse by moving $\Delta\textbf{x}$ back to the other side, we have:
$\Delta\textbf{X} \; \textbf{f}_\textbf{x} = \textbf{F}$
Now, the standard trick for dealing with a matrix that’s not square is to make it square by multiplying it by its transpose, and then multiplying both sides by the inverse of that square matrix:
$\Delta\textbf{X}^T \Delta\textbf{X} \; \textbf{f}_\textbf{x} = \Delta\textbf{X}^T \textbf{F}$
$(\Delta\textbf{X}^T \Delta\textbf{X})^{-1} (\Delta\textbf{X}^T \Delta\textbf{X}) \textbf{f}_\textbf{x} = (\Delta\textbf{X}^T \Delta\textbf{X})^{-1} \Delta\textbf{X}^T \textbf{F}$
$\textbf{f}_\textbf{x} = (\Delta\textbf{X}^T \Delta\textbf{X})^{-1} \Delta\textbf{X}^T \textbf{F}$
Alright! Now we’re comfortable with this characterization of gradient approximation using a form that works with non-square perturbation matrices.
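As a quick sanity check of that form (my own toy example, not from the original post), we can estimate the gradient of a simple function from random simultaneous perturbations:

import numpy as np

def f(x):  # toy function with known gradient 2*x
    return np.sum(x**2)

x = np.array([1.0, -2.0, 3.0])
eps = 1e-4
num_samples = 20

# each row is a full perturbation of all parameters at once
delta_X = np.random.choice([-1, 1], size=(num_samples, len(x))) * eps
F = np.array([(f(x + dx) - f(x - dx)) / 2.0 for dx in delta_X])

# f_x = (dX^T dX)^-1 dX^T F
f_x = np.dot(np.linalg.pinv(np.dot(delta_X.T, delta_X)),
             np.dot(delta_X.T, F))
print(f_x)  # should be close to [2, -4, 6]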
Again, in FDSA, we only vary one parameter at a time. This means that there will only ever be one non-zero entry per row of $\Delta \textbf{X}$. By contrast, in SPSA, we vary multiple parameters at once, so the rows of $\Delta\textbf{X}$ will be chock-full of non-zero entries.
Gradient approximation to estimate $\textbf{f}_\textbf{x}$ and $\textbf{f}_\textbf{u}$ for LQR control
Implementing this was pretty simple. Just had to modify the calc_derivs function, which I use to estimate the derivative of the arm with respect to the state and control signal, in my LQR controller code by changing from standard finite differences to simultaneous perturbation:
def calc_derivs(self, x, u):
    """ Calculate gradient of plant dynamics using Simultaneous
    Perturbation Stochastic Approximation (SPSA). Implemented
    based on (Peters & Schaal, 2008).

    x np.array: the state of the system
    u np.array: the control signal
    """
    # Initialization and coefficient selection
    num_iters = 20
    eps = 1e-4
    delta_K = None
    delta_J = None
    for ii in range(num_iters):
        # Generation of simultaneous perturbation vector
        # choose each component from a Bernoulli +-1 distribution
        # with probability of .5 for each +-1 outcome.
        delta_k = np.random.choice([-1, 1],
                                   size=len(x) + len(u),
                                   p=[.5, .5])
        # Function evaluations
        inc_x = np.copy(x) + eps * delta_k[:len(x)]
        inc_u = np.copy(u) + eps * delta_k[len(x):]
        state_inc = self.plant_dynamics(inc_x, inc_u)
        dec_x = np.copy(x) - eps * delta_k[:len(x)]
        dec_u = np.copy(u) - eps * delta_k[len(x):]
        state_dec = self.plant_dynamics(dec_x, dec_u)

        delta_j = ((state_inc - state_dec) /
                   (2.0 * eps)).reshape(-1)

        # Track delta_k and delta_j
        delta_K = delta_k if delta_K is None else \
            np.vstack([delta_K, delta_k])
        delta_J = delta_j if delta_J is None else \
            np.vstack([delta_J, delta_j])

    f_xu = np.dot(np.linalg.pinv(np.dot(delta_K.T, delta_K)),
                  np.dot(delta_K.T, delta_J))
    f_x = f_xu[:len(x)]
    f_u = f_xu[len(x):]
    return f_x.T, f_u.T
A couple notes about the above code. First, you’ll notice that the f_x and f_u matrices are both calculated at the same time. That’s pretty slick! And that calculation for f_xu is just a straight implementation of the matrix form of gradient approximation, where I’ve arranged things so that f_x is in the top part and f_u is in the lower part.
The second thing is that the perturbation vector delta_k is generated from a Bernoulli distribution. The reason behind this is that we want to have a bunch of different samples that pretty reasonably spread the state space and move all the parameters independently. Making each perturbation some distance times -1 or 1 is an easy way to achieve this.
Thirdly, there’s the num_iters variable. This is a very important variable, as it dictates how many random samples of our system we take before we estimate the gradient. I’ve found that to get this to work for both the 2-link arm and the more complex 3-link arm, it needs to be at least 20. Or else things explode and die horribly. Just…horribly.
OK let’s look at the results:
The first thing to notice is that I’ve finally discovered the Seaborn plotting package. The second is that SPSA does as well as FDSA.
You may ask: Is there any difference? Well, if we time these functions, on my lil’ laptop, for the 2-link arm it takes SPSA approximately 2.0ms, but it takes FDSA only 0.8ms. So for the same performance the SPSA is taking almost 3 times as long to run. Why? This boils down to how many times the system dynamics need to be sampled by each algorithm to get a good approximation of the gradient. For a 2-link arm, FDSA has 6 parameters ($\textbf{q}, \dot{\textbf{q}},$ and $\textbf{u}$) that it needs to sample twice (we’re doing central differencing), for a total of 12 samples. And as I mentioned above, the SPSA algorithm needs 20 samples to be stable.
For the 3-link arm, SPSA took about 3.1ms on average and FDSA (which must now perform 18 samples of the dynamics) still only 2.1ms. So the number of samples isn’t the only cause of the time difference between these two algorithms. SPSA needs to perform a few more matrix operations, including a matrix inverse, which is expensive, while FDSA can calculate the gradient of each parameter individually, which is much less expensive.
OK, so SPSA is not really impressive here. BUT! As I discovered, there are other ways of employing SPSA.
Gradient approximation to optimize the control signal directly
In the previous set up we were using SPSA to estimate the gradient of the system under control, and then we used that gradient to calculate a control signal that minimized the loss function (as specified inside the LQR). This is one way to use gradient approximation methods. Another way is to approximate the gradient of the loss function directly, and use that information to iteratively calculate a control signal that minimizes the loss function. This second application is the primary use of the SPSA algorithm, and is what’s described by Dr. Spall in his overview paper.
In this application, the algorithm works like this:
1. initialize the input to the system
2. perturb the input and simulate the results
3. observe the loss function and calculate the gradient
4. update the input to the system
5. repeat to convergence
Because in this approach we’re iteratively optimizing the input using our gradient estimation, having a noisy estimate is no longer a death sentence, as it was in the LQR. If we update our input to the system with several noisy gradient estimates the noise will essentially just cancel itself out. This means that SPSA now has a powerful advantage over FDSA: Since in SPSA we vary all parameters at once, only 2 samples of the loss function are used to estimate the gradient, regardless of the number of parameters. In contrast, FDSA needs to sample the loss function twice for every input parameter. Here’s a picture from (Spall, 1998) that shows the two running against each other to optimize a 2D problem:
This gets across that even though SPSA bounces around more, they both reach the solution in the same number of steps. And, in general, this is the case, as Dr. Spall talks about in the paper. There’s also a couple more details of the algorithm, so let’s look at it in detail. Here’s the code, which is just a straight translation into Python out of the description in Dr. Spall’s paper:
# Step 1: Initialization and coefficient selection
max_iters = 5
converge_thresh = 1e-5

alpha = 0.602 # from (Spall, 1998)
gamma = 0.101
a = .101 # found empirically using HyperOpt
A = .193
c = .0277

u = np.copy(self.u) if self.u is not None \
    else np.zeros(self.arm.DOF)
for k in range(max_iters):
    ak = a / (A + k + 1)**alpha
    ck = c / (k + 1)**gamma

    # Step 2: Generation of simultaneous perturbation vector
    # choose each component from a Bernoulli +-1 distribution with
    # probability of .5 for each +-1 outcome.
    delta_k = np.random.choice([-1, 1], size=arm.DOF, p=[.5, .5])

    # Step 3: Function evaluations
    inc_u = np.copy(u) + ck * delta_k
    cost_inc = self.cost(np.copy(state), inc_u)
    dec_u = np.copy(u) - ck * delta_k
    cost_dec = self.cost(np.copy(state), dec_u)

    # Step 4: Gradient approximation
    gk = np.dot((cost_inc - cost_dec) / (2.0 * ck), delta_k)

    # Step 5: Update u estimate
    old_u = np.copy(u)
    u -= ak * gk

    # Step 6: Check for convergence
    if np.sum(abs(u - old_u)) < converge_thresh:
        break
The main as-of-yet-unexplained parts of this code are the alpha, gamma, a, A, and c variables. What’s their deal?
Looking inside the loop, we can see that ck controls the magnitude of our perturbations. Looking a little further down, ak is just the learning rate. And all of those other parameters are just involved in shaping the trajectories that ak and ck follow through iterations, which are both paths towards zero. So the first steps and perturbations are the biggest, and each successive one becomes smaller as the iteration count increases.
There are a few heuristics that Dr. Spall goes over, but there aren’t any hard and fast rules for setting a, A, and c. Here, I just used HyperOpt to find some values that worked pretty well for this particular problem.
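To get a feel for what those sequences actually look like, here’s a quick sketch (mine, just reusing the values above) that prints the first few gains:

alpha = 0.602
gamma = 0.101
a, A, c = .101, .193, .0277
for k in range(5):
    ak = a / (A + k + 1)**alpha
    ck = c / (k + 1)**gamma
    print('k=%i: ak=%.4f, ck=%.4f' % (k, ak, ck))
# both ak and ck decay towards zero, so the earliest
# updates and perturbations are the largest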
The FDSA version of this is also very straight-forward:
# Step 1: Initialization and coefficient selection
max_iters = 10
converge_thresh = 1e-5
eps = 1e-4

u = np.copy(self.u) if self.u is not None \
    else np.zeros(self.arm.DOF)
for k in range(max_iters):
    gk = np.zeros(u.shape)
    for ii in range(gk.shape[0]):
        # Step 2: Generate perturbations one parameter at a time
        inc_u = np.copy(u)
        inc_u[ii] += eps
        dec_u = np.copy(u)
        dec_u[ii] -= eps

        # Step 3: Function evaluation
        cost_inc = self.cost(np.copy(state), inc_u)
        cost_dec = self.cost(np.copy(state), dec_u)

        # Step 4: Gradient approximation
        gk[ii] = (cost_inc - cost_dec) / (2.0 * eps)

    old_u = np.copy(u)
    # Step 5: Update u estimate
    u -= 1e-5 * gk

    # Step 6: Check for convergence
    if np.sum(abs(u - old_u)) < converge_thresh:
        break
You’ll notice that in both the SPSA and FDSA code we’re no longer sampling plant_dynamics, we’re instead sampling cost, a loss function I defined. From just my experience playing around with these algorithms a bit, getting the loss function to be appropriate and give the desired behaviour is definitely a bit of an art. It feels like much more of an art than in other controllers I’ve coded, but that could just be me.
The cost function that I’m using is pretty much the first thing you’d think of. It penalizes distance to target and having non-zero velocity. Getting the weighting between distance to target and velocity set up so that the arm moves to the target but also doesn’t overshoot definitely took a bit of trial and error, er, I mean empirical analysis. Here’s the cost function that I found worked pretty well, note that I had to make special cases for the different arms:
def cost(self, x, u):
    dt = .1 if self.arm.DOF == 3 else .01
    next_x = self.plant_dynamics(x, u, dt=dt)
    vel_gain = 100 if self.arm.DOF == 3 else 10
    return (np.sqrt(np.sum((self.arm.x - self.target)**2)) * 1000
            + np.sum((next_x[self.arm.DOF:])**2) * vel_gain)
So that’s all the code, let’s look at the results!
For these results, I used a max of 10 iterations for optimizing the control signal. I was definitely surprised by the quality of the results, especially for the 3-link arm, compared to the results generated by a standard LQR controller. Although I need to note, again, that it was a fair bit of me playing around with the exact cost function to get these results. Lots of empirical analysis.
The two controllers generate results that are identical to visual inspection. However, especially in the 3-link arm, the time required to run the FDSA was significantly longer than the SPSA controller. It took approximately 140ms for the SPSA controller to run a single loop, but took FDSA on average 700ms for a single loop of calculating the control signal. Almost 5 times as long! For the same results! In directly optimizing the control signal, SPSA gets a big win over standard FDSA. So, if you’re looking to directly optimize over a loss function, SPSA is probably the way you want to go.
Conclusions
First off, I thought it was really neat to directly apply gradient approximation methods to optimizing the control signal. It’s something I haven’t tried before, but definitely makes sense, and can generate some really nice results when tuned properly. Automating the tuning is definitely something I’ll be discussing in future posts, because doing it by hand takes a long time and is annoying.
In the LQR, the gradient approximation was best done by FDSA. I think the main reason for this is that in solving for the control signal the LQR algorithm uses matrix inverses, and any errors in the linear approximations to the dynamics are going to be amplified quite a bit. If I did anything less than 10-15 iterations (20 for the 3-link arm) in the SPSA approximation then things exploded. Also, here the SPSA algorithm required a matrix inverse, where the FDSA didn’t. This is because we only varied one parameter at a time in FDSA, so the effects of changing each were isolated. In the SPSA case, we had to consider the changes across all the variables and the resulting effects all at once, essentially noting which variables changed by how much in each case, and averaging. Here, even with the more complex 3-link arm, FDSA was faster, so I’m going to stick with it in my LQR and iLQR implementations.
In the direct control signal optimization SPSA beat the pants off of FDSA. It was almost 5 times faster for control of the 3-link arm. This was, again, because in this case we could use noisy samples of the gradient of the loss function and relied on noise to cancel itself out as we iterated. So we only needed 2 samples of the loss function in SPSA, where in FDSA we needed 2*num_parameters. And although this generated pretty good results I would definitely be hesitant against using this for any more complicated systems, because tuning that cost function to get out a good trajectory was a pain. If you’re interested in playing around with this, you can check out the code for the gradient controllers up on my GitHub.
## The iterative Linear Quadratic Regulator algorithm
A few months ago I posted on Linear Quadratic Regulators (LQRs) for control of non-linear systems using finite differences. The gist of it was: at every time step, linearize the dynamics, quadratize (it could be a word) the cost function around the current point in state space, and compute your feedback gain off of that, as though the dynamics were both linear and consistent (i.e. didn’t change in different states). And that was pretty cool because you didn’t need all the equations of motion and inertia matrices etc to generate a control signal. You could just use the simulation you had, sample it a bunch to estimate the dynamics and value function, and go off of that.
The LQR, however, operates with maverick disregard for the future. Careless of the consequences, it optimizes for the current time step only. It would be really great to have an algorithm that was able to plan out a sequence, mindful of the overall cost of the trajectory and to optimize for that.
This is exactly what the iterative Linear Quadratic Regulator method (iLQR) was designed for. iLQR is an extension of LQR control, and the idea here is basically to optimize a whole control sequence rather than just the control signal for the current point in time. The basic flow of the algorithm is:
1. Initialize with initial state $x_0$ and initial control sequence $\textbf{U} = [u_{t_0}, u_{t_1}, ..., u_{t_N}]$.
2. Do a forward pass, i.e. simulate the system using $(x_0, \textbf{U})$ to get the trajectory through state space, $\textbf{X}$, that results from applying the control sequence $\textbf{U}$ starting in $x_0$.
3. Do a backward pass, estimate the value function and dynamics for each $(\textbf{x}, \textbf{u})$ in the state-space and control signal trajectories.
4. Calculate an updated control signal $\hat{\textbf{U}}$ and evaluate cost of trajectory resulting from $(x_0, \hat{\textbf{U}})$.
1. If $|\textrm{cost}(x_0, \hat{\textbf{U}}) - \textrm{cost}(x_0, \textbf{U})| < \textrm{threshold}$ then we've converged and exit.
2. If $\textrm{cost}(x_0, \hat{\textbf{U}}) < \textrm{cost}(x_0, \textbf{U})$, then set $\textbf{U} = \hat{\textbf{U}}$, and change the update size to be more aggressive. Go back to step 2.
3. If $\textrm{cost}(x_0, \hat{\textbf{U}}) \geq$ $\textrm{cost}(x_0, \textbf{U})$ change the update size to be more modest. Go back to step 3.
There are a bunch of descriptions of iLQR, and it also goes by names like ‘the sequential linear quadratic algorithm’. The paper that I’m going to be working off of is by Yuval Tassa out of Emo Todorov’s lab, called Control-limited differential dynamic programming. And the Python implementation of this can be found up on my github in my Control repo. Also, a big thank you to Dr. Emo Todorov who provided Matlab code for the iLQG algorithm, which was super helpful.
Defining things
So let’s dive in. Formally defining things, we have our system $\textbf{x}$, and dynamics described with the function $\textbf{f}$, such that
$\textbf{x}_{t+1} = \textbf{f}(\textbf{x}_t, \textbf{u}_t),$
where $\textbf{u}$ is the input control signal. The trajectory $\{\textbf{X}, \textbf{U}\}$ is the sequence of states $\textbf{X} = \{\textbf{x}_0, \textbf{x}_1, ..., \textbf{x}_N\}$ that result from applying the control sequence $\textbf{U} = \{\textbf{u}_0, \textbf{u}_1, ..., \textbf{u}_N\}$ starting in the initial state $\textbf{x}_0$.
Now we need to define all of our cost related equations, so we know exactly what we’re dealing with.
Define the total cost function $J$, which is the sum of the immediate cost, $\ell$, from each state in the trajectory plus the final cost, $\ell_f$:
$J(\textbf{x}_0, \textbf{U}) = \sum\limits^{N-1}\limits_{t=0} \ell(\textbf{x}_t, \textbf{u}_t) + \ell_f(\textbf{x}_N).$
Letting $\textbf{U}_t = \{\textbf{u}_t, \textbf{u}_{t+1}, ..., \textbf{u}_N\}$, we define the cost-to-go as the sum of costs from time $t$ to $N$:
$J_t(\textbf{x}, \textbf{U}_t) = \sum\limits^{N-1}\limits_{i=t} \ell(\textbf{x}_i, \textbf{u}_i) + \ell_f(\textbf{x}_N).$
The value function $V$ at time $t$ is the optimal cost-to-go from a given state:
$V_t(\textbf{x}) = \min\limits_{\textbf{U}_t} J_t(\textbf{x}, \textbf{U}_t),$
where the above equation just says that the optimal cost-to-go is found by using the control sequence $\textbf{U}_t$ that minimizes $J_t$.
At the final time step, $N$, the value function is simply
$V(\textbf{x}_N) = \ell_f(\textbf{x}_N).$
For all preceding time steps, we can write the value function as a function of the immediate cost $\ell(\textbf{x}, \textbf{u})$ and the value function at the next time step:
$V(\textbf{x}) = \min\limits_{\textbf{u}} \left[ \ell(\textbf{x}, \textbf{u}) + V(\textbf{f}(\textbf{x}, \textbf{u})) \right].$
NOTE: In the paper, they use the notation $V'(\textbf{f}(\textbf{x}, \textbf{u}))$ to denote the value function at the next time step, which is redundant since $\textbf{x}_{t+1} = \textbf{f}(\textbf{x}_t, \textbf{u}_t)$, but it comes in handy later when they drop the dependencies to simplify notation. So, heads up: $V' = V(\textbf{f}(\textbf{x}, \textbf{u}))$.
Forward rollout
The forward rollout consists of two parts. The first part is simulating the system to generate $(\textbf{X}, \textbf{U})$, from which we can calculate the overall cost of the trajectory and find the path that the arm will take. To improve things, though, we’ll need a lot of information about the partial derivatives of the system; calculating these is the second part of the forward rollout phase.
To calculate all these partial derivatives we’ll use $(\textbf{X}, \textbf{U})$. For each $(\textbf{x}_t, \textbf{u}_t)$ we’ll calculate the derivatives of $\textbf{f}(\textbf{x}_t, \textbf{u}_t)$ with respect to $\textbf{x}_t$ and $\textbf{u}_t$, which will give us what we need for our linear approximation of the system dynamics.
To get the information we need about the value function, we’ll need the first and second derivatives of $\ell(\textbf{x}_t, \textbf{u}_t)$ and $\ell_f(\textbf{x}_N)$ with respect to $\textbf{x}_t$ and $\textbf{u}_t$.
So all in all, we need to calculate $\textbf{f}_\textbf{x}$, $\textbf{f}_\textbf{u}$, $\ell_\textbf{x}$, $\ell_\textbf{u}$, $\ell_\textbf{xx}$, $\ell_\textbf{ux}$, $\ell_\textbf{uu}$, where the subscripts denote a partial derivative, so $\ell_\textbf{x}$ is the partial derivative of $\ell$ with respect to $\textbf{x}$, $\ell_\textbf{xx}$ is the second derivative of $\ell$ with respect to $\textbf{x}$, etc. And to calculate all of these partial derivatives, we’re going to use finite differences! Just like in the LQR with finite differences post. Long story short, load up the simulation for every time step, slightly vary one of the parameters, and measure the resulting change.
Once we have all of these, we’re ready to move on to the backward pass.
Backward pass
Now, we started out with an initial trajectory, but that was just a guess. We want our algorithm to take it and then converge to a local minimum. To do this, we’re going to add some perturbing values and use them to minimize the value function. Specifically, we’re going to compute a local solution to our value function using a quadratic Taylor expansion. So let’s define $Q(\delta \textbf{x}, \delta \textbf{u})$ to be the change in our value function at $(\textbf{x}, \textbf{u})$ as a result of small perturbations $(\delta \textbf{x}, \delta \textbf{u})$:
$Q(\delta \textbf{x}, \delta \textbf{u}) = \ell (\textbf{x} + \delta \textbf{x}, \textbf{u} + \delta \textbf{u}) + V(\textbf{f}(\textbf{x} + \delta\textbf{x}, \textbf{u} + \delta \textbf{u})).$
The second-order expansion of $Q$ is given by:
$Q_\textbf{x} = \ell_\textbf{x} + \textbf{f}_\textbf{x}^T V'_\textbf{x},$
$Q_\textbf{u} = \ell_\textbf{u} + \textbf{f}_\textbf{u}^T V'_\textbf{x},$
$Q_\textbf{xx} = \ell_\textbf{xx} + \textbf{f}_\textbf{x}^T V'_\textbf{xx} \textbf{f}_\textbf{x} + V'_\textbf{x} \cdot \textbf{f}_\textbf{xx},$
$Q_\textbf{ux} = \ell_\textbf{ux} + \textbf{f}_\textbf{u}^T V'_\textbf{xx} \textbf{f}_\textbf{x}+ V'_\textbf{x} \cdot \textbf{f}_\textbf{ux},$
$Q_\textbf{uu} = \ell_\textbf{uu} + \textbf{f}_\textbf{u}^T V'_\textbf{xx} \textbf{f}_\textbf{u}+ V'_\textbf{x} \cdot \textbf{f}_\textbf{uu}.$
Remember that $V' = V(\textbf{f}(\textbf{x}, \textbf{u}))$, which is the value function at the next time step. NOTE: All of the second derivatives of $\textbf{f}$ are zero in the systems we’re controlling here, so when we calculate the second derivatives we don’t need to worry about doing any tensor math, yay!
Given the second-order expansion of $Q$, we can compute the optimal modification to the control signal, $\delta \textbf{u}^*$. This control signal update has two parts, a feedforward term, $\textbf{k}$, and a feedback term $\textbf{K} \delta\textbf{x}$. The optimal update is the $\delta\textbf{u}$ that minimizes the cost of $Q$:
$\delta\textbf{u}^*(\delta \textbf{x}) = \min\limits_{\delta\textbf{u}}Q(\delta\textbf{x}, \delta\textbf{u}) = \textbf{k} + \textbf{K}\delta\textbf{x},$
where $\textbf{k} = -Q^{-1}_\textbf{uu} Q_\textbf{u}$ and $\textbf{K} = -Q^{-1}_\textbf{uu} Q_\textbf{ux}.$
Derivation can be found in this earlier paper by Li and Todorov. By then substituting this policy into the expansion of $Q$ we get a quadratic model of $V$. They do some mathamagics and come out with:
$V_\textbf{x} = Q_\textbf{x} - \textbf{K}^T Q_\textbf{uu} \textbf{k},$
$V_\textbf{xx} = Q_\textbf{xx} - \textbf{K}^T Q_\textbf{uu} \textbf{K}.$
So now we have all of the terms that we need, and they’re defined in terms of the values at the next time step. We know the value of the value function at the final time step $V_N = \ell_f(\textbf{x}_N)$, and so we’ll simply plug this value in and work backwards in time recursively computing the partial derivatives of $Q$ and $V$.
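To make that recursion concrete, here’s a rough sketch of the backward pass (my own pseudocode-ish Python, not the exact repo code; it assumes arrays f_x, f_u, l_x, l_u, l_xx, l_ux, l_uu indexed by time step were computed during the forward rollout, with l_x[N] and l_xx[N] holding the final cost derivatives):

import numpy as np

V_x = l_x[N]    # value function gradient at the final time step
V_xx = l_xx[N]  # value function Hessian at the final time step
k = [None] * N  # feedforward terms
K = [None] * N  # feedback gain terms
for t in range(N - 1, -1, -1):
    Q_x = l_x[t] + np.dot(f_x[t].T, V_x)
    Q_u = l_u[t] + np.dot(f_u[t].T, V_x)
    Q_xx = l_xx[t] + np.dot(f_x[t].T, np.dot(V_xx, f_x[t]))
    Q_ux = l_ux[t] + np.dot(f_u[t].T, np.dot(V_xx, f_x[t]))
    Q_uu = l_uu[t] + np.dot(f_u[t].T, np.dot(V_xx, f_u[t]))
    Q_uu_inv = np.linalg.pinv(Q_uu)  # regularized in the full implementation
    k[t] = -np.dot(Q_uu_inv, Q_u)
    K[t] = -np.dot(Q_uu_inv, Q_ux)
    V_x = Q_x - np.dot(K[t].T, np.dot(Q_uu, k[t]))
    V_xx = Q_xx - np.dot(K[t].T, np.dot(Q_uu, K[t]))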
Calculate control signal update
Once those are all calculated, we can calculate the gain matrices, $\textbf{k}$ and $\textbf{K}$, for our control signal update. Huzzah! Now all that’s left to do is evaluate this new trajectory. So we set up our system
$\hat{\textbf{x}}_0 = \textbf{x}_0,$
$\hat{\textbf{u}}_t = \textbf{u}_t + \textbf{k}_t + \textbf{K}_t (\hat{\textbf{x}}_t - \textbf{x}_t),$
$\hat{\textbf{x}}_{t+1} = \textbf{f}(\hat{\textbf{x}}_t, \hat{\textbf{u}}_t),$
and record the cost. Now if the cost of the new trajectory $(\hat{\textbf{X}}, \hat{\textbf{U}})$ is less than the cost of $(\textbf{X}, \textbf{U})$ then we set $\textbf{U} = \hat{\textbf{U}}$ and go do it all again! And when the cost from an update becomes less than a threshold value, call it done. In code this looks like:
if costnew < cost:
    sim_new_trajectory = True
    if (abs(costnew - cost)/cost) < self.converge_thresh:
        break
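For context, costnew above comes from re-simulating the system with the updated control sequence, roughly like this sketch (hypothetical names; cost and cost_final stand in for $\ell$ and $\ell_f$):

x_hat = np.copy(x0)
U_hat = np.zeros(U.shape)
costnew = 0.0
for t in range(N):
    # apply the feedforward and feedback terms from the backward pass
    U_hat[t] = U[t] + k[t] + np.dot(K[t], x_hat - X[t])
    costnew += cost(x_hat, U_hat[t])
    x_hat = plant_dynamics(x_hat, U_hat[t])
costnew += cost_final(x_hat)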
Of course, another option we need to account for is when costnew > cost. What do we do in this case? Our control update hasn’t worked, do we just exit?
The Levenberg-Marquardt heuristic
No! Phew.
The control signal update in iLQR is calculated in such a way that it can behave like Gauss-Newton optimization (which uses second-order derivative information) or like gradient descent (which only uses first-order derivative information). The idea is that if the updates are going well, then let’s include curvature information in our update to help optimize things faster; if the updates aren’t going well, let’s dial back towards gradient descent, stick to first-order derivative information, and use smaller steps. This wizardry is known as the Levenberg-Marquardt heuristic. So how does it work?
Something we skimmed over in the iLQR description was that we need to calculate $Q^{-1}_\textbf{uu}$ to get the $\textbf{k}$ and $\textbf{K}$ matrices. Instead of using np.linalg.pinv or somesuch, we’re going to calculate the inverse ourselves using singular value decomposition, so that we can regularize it. This will let us do a couple of things. First, we’ll be able to make sure that our estimate of curvature ($Q_\textbf{uu}^{-1}$) stays positive definite, which is important to make sure that we always have a descent direction. Second, we’re going to add a regularization term to the singular values to prevent them from exploding when we take their inverse. Here’s our regularization implemented in Python:
U, S, V = np.linalg.svd(Quu)
S[S < 0] = 0.0 # no negative values
S += lamb # add regularization term
Quu_inv = np.dot(U, np.dot(np.diag(1.0/S), V.T))
Now, what happens when we change lamb? The singular values represent the magnitude of each of the left and right singular vectors, and by taking their reciprocal we flip the contributions of the vectors. So the ones that were contributing the least now have the largest singular values, and the ones that contributed the most now have the smallest singular values. By adding a regularization term we ensure that the inverted singular values can never be larger than 1/lamb. So essentially we throw out information.
In the case where we’ve got a really good approximation of the system dynamics and value function, we don’t want to do this. We want to use all of the information available because it’s accurate, so make lamb small and get a more accurate inverse. In the case where we have a bad approximation of the dynamics we want to be more conservative, which means not having those large singular values. Smaller singular values give a smaller $Q_\textbf{uu}^{-1}$ estimate, which then gives smaller gain matrices and control signal update, which is what we want to do when our control signal updates are going poorly.
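A quick numeric example of that capping (mine, not from the repo):

import numpy as np
S = np.array([10.0, 1.0, 1e-6])  # last singular value is nearly zero
lamb = 0.01
print(1.0 / (S + lamb))  # [ 0.0999  0.9901  99.9999]: nothing exceeds 1/lamb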
How do you know if they’re going poorly or not, you now surely ask! Clever as always, we’re going to use the result of the previous iteration to update lamb. So adding to the code from just above, the end of our control update loop is going to look like:
lamb = 1.0 # initial value of lambda
...
if costnew < cost:
    lamb /= self.lamb_factor
    sim_new_trajectory = True
    if (abs(costnew - cost)/cost) < self.converge_thresh:
        break
else:
    lamb *= self.lamb_factor
    if lamb > self.max_lamb:
        break
And that is pretty much everything! OK let’s see how this runs!
Simulation results
If you want to run this and see for yourself, you can go copy my Control repo, navigate to the main directory, and run
python run.py arm2 reach
or substitute in arm3. If you’re having trouble getting the arm2 simulation to run, try arm2_python, which is a straight Python implementation of the arm dynamics, and should work no sweat for Windows and Mac.
Below you can see results from the iLQR controller controlling the 2 and 3 link arms (click on the figures to see full sized versions, they got distorted a bit in the shrinking to fit on the page), using immediate and final state cost functions defined as:
l = np.sum(u**2)
and
pos_err = np.array([self.arm.x[0] - self.target[0],
                    self.arm.x[1] - self.target[1]])
l = (wp * np.sum(pos_err**2) +  # pos error
     wv * np.sum(x[self.arm.DOF:self.arm.DOF*2]**2))  # vel error
where wp and wv are just gain values, x is the state of the system, and self.arm.x is the $(x,y)$ position of the hand. These read as “during movement, penalize large control signals, and at the final state, have a big penalty on not being at the target.”
So let’s give it up for iLQR, this is awesome! How much of a crazy improvement is that over LQR? And all with knowledge of the system gained only through finite differences, and with the full movements completed in exactly 1 second! (Note: The simulation speeds look different because of my editing to keep the gif sizes small; they both take the same amount of time for each movement.)
Changing cost functions
Something that you may notice is that the control of the 3 link is actually straighter than the 2 link. I thought that this might be just an issue with the gain values, since the scale of movement is smaller for the 2 link arm than the 3 link there might have been less of a penalty for not moving in a straight line, BUT this was wrong. You can crank the gains and still get the same movement. The actual reason is that this is what the cost function specifies, if you look in the code, only $\ell_f(\textbf{x}_N)$ penalizes the distance from the target, and the cost function during movement is strictly to minimize the control signal, i.e. $\ell(\textbf{x}_t, \textbf{u}_t) = \textbf{u}_t^2$.
Well that’s a lot of talk, you say, like the incorrigible antagonist we both know you to be, prove it. Alright, fine! Here’s iLQR running with an updated cost function that includes the end-effector’s distance from the target in the immediate cost:
All that I had to do to get this was change the immediate cost from
l = np.sum(u**2)
to
l = np.sum(u**2)
pos_err = np.array([self.arm.x[0] - self.target[0],
                    self.arm.x[1] - self.target[1]])
l += (wp * np.sum(pos_err**2) +  # pos error
      wv * np.sum(x[self.arm.DOF:self.arm.DOF*2]**2))  # vel error
where all I had to do was include the position penalty term from the final state cost into the immediate state cost.
Changing sequence length
In these simulations the system is simulating at .01 time step, and I gave it 100 time steps to reach the target. What if I give it only 50 time steps?
It looks pretty much the same! It’s just now twice as fast, which is of course achieved by using larger control signals (which we don’t see here), but dang, awesome.
What if we try to make it there in 10 time steps??
OK well that does not look good. So what’s going on in this case? Basically we’ve given the algorithm an impossible task. It can’t make it to the target location in 10 time steps. In the implementation I wrote here, if it hits the end of its control sequence and it hasn’t reached the target yet, the control sequence starts over back at t=0. Remember that part of the target state is also velocity, so basically it moves for 10 time steps to try to minimize $(x,y)$ distance, and then slows down to minimize final state cost in the velocity term.
In conclusion
This algorithm has been used in a ton of things, for controlling robots and simulations, and is an important part of guided policy search, which has been used to very successfully train deep networks in control problems. It’s getting really impressive results for controlling the arm models that I’ve built here, and using finite differences should easily generalize to other systems.
iLQR is very computationally expensive, though, so that’s definitely a downside. It’s definitely less expensive if you have the equations of your system, or at least a decent approximation of them, and you don’t need to use finite differences. But you pay for the efficiency with a loss in generality.
There are also a bunch of parameters to play around with that I haven’t explored at all here, like the weights in the cost function penalizing the magnitude of the control signal and the final state position error. I showed a basic example of changing the cost function, which hopefully gets across just how easy changing these things out can be when you’re using finite differences, and there’s a lot to play around with there too.
Implementation note
In the Tassa and Todorov paper, they talked about using backtracking line search when generating the control signal. So the algorithm they had when generating the new control signal was actually:
$\hat{\textbf{u}}_t = \hat{\textbf{u}}_t + \alpha\textbf{k}_t + \textbf{K}_t(\hat{\textbf{x}}_t - \textbf{x}_t)$
where $\alpha$ was the backtracking search parameter, which gets set to one initially and then reduced. It’s very possible I didn’t implement it as intended, but I found consistently that $\alpha = 1$ always generated the best results, so it was just adding computation time. So I left it out of my implementation. If anyone has insights on an implementation that improves results, please let me know!
And then finally, another thank you to Dr. Emo Todorov for providing Matlab code for the iLQG algorithm, which was very helpful, especially for getting the Levenberg-Marquardt heuristic implemented properly.
## Linear-Quadratic Regulation for non-linear systems using finite differences
One of the standard controllers in basic control theory is the linear-quadratic regulator (LQR). There is a finite-horizon case (where you have a limited amount of time), and an infinite-horizon case (where you don’t); in this post, for simplicity, we’re only going to be dealing with the infinite-horizon case.
The LQR is designed to handle a very specific kind of problem. First, it assumes you are controlling a system with linear dynamics, which means you can express them as
$\dot{\textbf{x}} = \textbf{A}\textbf{x} + \textbf{B}\textbf{u}$,
where $\textbf{x}$ and $\dot{\textbf{x}}$ are the state and its time derivative, $\textbf{u}$ is the input, and $\textbf{A}$ and $\textbf{B}$ capture the effects of the state and input on the derivative. And second, it assumes that the cost function, denoted $J$, is a quadratic of the form
$J = \int_0^{\infty} \left( (\textbf{x} - \textbf{x}^*)^T \textbf{Q} (\textbf{x} - \textbf{x}^*) + \textbf{u}^T \textbf{R} \textbf{u} \right) dt$
where $\textbf{x}^*$ is the target state, and $\textbf{Q} = \textbf{Q}^T \geq 0$ and $\textbf{R} = \textbf{R}^T > 0$ are weights on the cost of not being at the target state and applying a control signal. The higher $\textbf{Q}$ is, the more important it is to get to the target state asap; the higher $\textbf{R}$ is, the more important it is to keep the control signal small as you go to the target state.
The goal of the LQR is to calculate a feedback gain matrix $\textbf{K}$ such that
$\textbf{u} = -\textbf{K} \textbf{x},$
drives the system to the target (with the state $\textbf{x}$ measured relative to the target state, so that driving $\textbf{x}$ to zero drives the system to $\textbf{x}^*$). When the system is a linear system with a quadratic cost function, this can be done optimally. There is lots of discussion elsewhere about LQRs and their derivation, so I’m not going to go into that with this post. Instead, I’m going to talk about applying LQRs to non-linear systems, and using finite differences to do it, which works when you have a readily accessible simulation of the system on hand. The fun part is that by using finite differences you can get this to work without working out the dynamics equations yourself.
Using LQRs on non-linear systems
As you may have noticed, non-linear systems violate the first assumption of a linear quadratic regulator; that the system is linear. That doesn’t mean that we can’t apply it, it just means that it’s not going to be optimal. How poorly the LQR will perform depends on a few things, two important factors being how non-linear the system dynamics actually are, and how often you’re able to update the feedback gain matrix $\textbf{K}$. To apply LQR to non-linear systems we’re just going to close our eyes and pretend that the system dynamics are linear, i.e. they fit the form
$\dot{\textbf{x}} = \textbf{A}\textbf{x} + \textbf{B}\textbf{u}.$
We’ll do this by approximating the actual dynamics of the system linearly. We’ll then solve for our gain value $\textbf{K}$, generate our control signal for this timestep, and then re-approximate the dynamics again at the next time step and solve for $\textbf{K}$ from the new state. The more non-linear the system dynamics are, the less appropriate $\textbf{K}$ will be for generating our control signal $\textbf{u}$ as we move away from the state $\textbf{K}$ was calculated in; this is why update time of the LQR can become an important factor.
Using finite-differences to approximate system dynamics
An important question, then, is how do we find this system approximation? How can we calculate the $\textbf{A}$ and $\textbf{B}$ matrices that we then use to solve for $\textbf{K}$? If we know the dynamics of the system to be
$\dot{\textbf{x}} = f(\textbf{x}, \textbf{u})$,
then we can calculate
$\textbf{A} = \frac{\partial f(\textbf{x}, \textbf{u})}{\partial \textbf{x}}, \;\;\;\; \textbf{B} = \frac{\partial f(\textbf{x}, \textbf{u})}{\partial \textbf{u}}$.
If you’re going to try this for the 3-link arm, though, get out Mathematica. Do not try this by hand. If you disregard my warning and foolhardily attempt such a derivation you will regret, repent, and then appeal to Wolfram Alpha for salvation. These equations quickly become terrible and long even for seemingly not-so-complicated systems.
There are a few ways to skirt this. Here we’re going to assume that the system under control is a simulation, or that we at least have access to an accurate model, and use the finite differences method to compute these values. The idea behind finite differences is to approximate the rate of change of the function $f$ at the point $x$ by sampling $f$ near $x$ and using the difference to calculate $\dot{f}(x)$. Here’s a picture for a 1D system:
So here, our current state $x$ is the blue dot, and the red dots represent the sample points $x + \Delta x$ and $x - \Delta x$. We can then calculate
$\dot{f}(x) \approx \frac{f(x+\Delta x) - f(x-\Delta x)}{2\Delta x},$
and you can see the actual rate of change of $f$ at $x$ plotted in the blue dashed line, and the approximated rate of change calculated using finite differences plotted in the red dashed line. We can also see that the approximated derivative is only accurate near $x$ (the blue dot).
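As a quick sanity check (my own example), take $f(x) = x^2$ at $x = 1$ with $\Delta x = 0.01$:

$\dot{f}(1) \approx \frac{(1.01)^2 - (0.99)^2}{0.02} = \frac{1.0201 - 0.9801}{0.02} = 2.0,$

which matches the true derivative $2x = 2$ exactly here, since central differencing is exact for quadratic functions.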
Back in our multi-dimensional system, to use finite differences to calculate the derivative with respect to the state and the input we’re going to vary each of the dimensions of the state and input by some small amount one at a time, calculating the effects of each one by one. Here’s a chunk of pseudo-code to hopefully clarify this idea:
eps = 1e-5

A = np.zeros((len(current_state), len(current_state)))
for ii in range(len(current_state)):
    x = current_state.copy()
    x[ii] += eps
    x_inc = simulate_system(state=x, input=control_signal)
    x = current_state.copy()
    x[ii] -= eps
    x_dec = simulate_system(state=x, input=control_signal)
    A[:, ii] = (x_inc - x_dec) / (2 * eps)

B = np.zeros((len(current_state), len(control_signal)))
for ii in range(len(control_signal)):
    u = control_signal.copy()
    u[ii] += eps
    x_inc = simulate_system(state=current_state, input=u)
    u = control_signal.copy()
    u[ii] -= eps
    x_dec = simulate_system(state=current_state, input=u)
    B[:, ii] = (x_inc - x_dec) / (2 * eps)
Now that we’re able to generate our $\textbf{A}$ and $\textbf{B}$ matrices, we have everything we need to solve for our feedback gain matrix $\textbf{K}$! Which is great.
Note on using finite differences in continuous vs discrete setup
Something that’s important to straighten out too is what exactly is returned by the simulate_system function in the code above. In the continuous case, your system is captured as
$\dot{\textbf{x}} = \textbf{A}\textbf{x} + \textbf{B}\textbf{u},$
where in the discrete case your system is defined
$\textbf{x}(t+1) = \textbf{A}\textbf{x}(t) + \textbf{B}\textbf{u}(t).$
If you are calculating your feedback gain matrix $\textbf{K}$ using the continuous solution to the algebraic Riccati equation, then you need to be returning $\dot{\textbf{x}}(t)$. If you’re solving for $\textbf{K}$ using the discrete solution to the algebraic Riccati equation you need to return $\textbf{x}(t+1)$. This was just something that I came across as I was coding and so I wanted to mention it here in case anyone else stumbled across it!
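For reference, here’s roughly what the two cases look like using SciPy’s Riccati solvers (a sketch, not code from the post; it assumes you’ve already built the A, B, Q, and R matrices):

import numpy as np
import scipy.linalg

# continuous case: simulate_system returns dx/dt
S = scipy.linalg.solve_continuous_are(A, B, Q, R)
K = np.dot(np.linalg.inv(R), np.dot(B.T, S))

# discrete case: simulate_system returns x(t+1)
S = scipy.linalg.solve_discrete_are(A, B, Q, R)
K = np.dot(np.linalg.inv(R + np.dot(B.T, np.dot(S, B))),
           np.dot(B.T, np.dot(S, A)))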
Applying LQR to 2 and 3 link arm control
Alright! Let’s have a look at how the LQR does controlling non-linear systems. Below we have the control of a 2-link arm compared to a 3-link arm, and you can see the control of the 2-link arm is better. This is a direct result of the dynamics of a 3-link arm being significantly more complex.
Note on controlling at different timesteps
When I was first testing the LQR controller I expected the effects of different control update times to be a lot more significant than it was. As it turns out, for controlling a 3-link arm, there’s not really a visible difference in a controller that is updating every .01 seconds vs every .001 seconds vs every .0001 seconds. Let’s have a look:
Can’t even tell, eh? Fun fact, the simulation took 1 minute 30 seconds at .01 seconds time step and 45 minutes at .0001 seconds time step. The left-most animation is the .01 seconds and the right-most the .0001 seconds. But why is there seemingly so little difference? Well, this boils down to the dynamics of the 3-link arm changing actually pretty slowly. Below I’ve plotted just a few of the elements from the $\textbf{A}$, $\textbf{B}$, and $\textbf{K}$ matrices over .5 seconds of simulation time:
So, there are some obvious points where sampling the dynamics at a .01 time step is noticeably less accurate, but all in all there’s not a huuuggge difference between sampling at .01 and .0001 seconds. If you’re just watching the end-effector path it’s really not very noticeable. You can see how the elements of $\textbf{A}$ and $\textbf{B}$ are changing fairly slowly; this means that $\textbf{K}$ is going to be an effective feedback gain for a fair chunk of time. And the computational savings you get by sampling the dynamics and regenerating $\textbf{K}$ every .01 seconds instead of every .0001 seconds are pretty big. This was just another thing that I came across when playing around with the LQR, the take away being don’t just assume you need to update your system crazy often. You might get very comparable performance for much less computational cost.
Conclusions
All in all, the LQR controller is pretty neat! It’s really simple to set up, and generic. We don’t need any specific information about the system dynamics, like we do for effective operational space control (OSC). When we estimate the dynamics with finite differences, all we need is a decent system model that we can sample. Again, the more non-linear the system, of course, the less effective a LQR will be. If you’re interested in playing around with one, or generating the figures that I show above, the code is all up and running on my Github for you to explore.
## Setting up an arm simulation interface in Nengo 2
I got an email the other day asking about how to set up an arm controller in Nengo, where they had been working from the Spaun code, stripping things away until they had just the motor portion left to play with. I ended up putting together a quick script to get them started and thought I would share it here in case anyone else was interested. It’s kind of fun because it shows off some of the new GUI and node interfacing. Note that you’ll need nengo_gui version .15+ for this to work. In general I recommend getting the dev version installed, as it’s stable and updates are made all the time improving functionality.
Nengo 1.4 core was all written in Java, with Jython and Python scripting thrown in on top, and since then a lot of work has gone into the re-write of the entire code base for Nengo 2. Nengo 2 is now written entirely in Python, all the scripting is in Python, and we have a kickass GUI and support for running neural simulations on CPUs, GPUs, and specialized neuromorphic hardware like SpiNNaker. I super recommend checking it out if you’re at all interested in neural modelling; we’ve got a bunch of tutorials up and a very active support board to help with any questions or problems. You can find the simulator code for installation here: https://github.com/nengo/nengo and the GUI code here: https://github.com/nengo/nengo_gui, where you can also find installation instructions.
And once you have that up and running, to run an arm simulation you can download and run the following code I have up on my GitHub. When you pop it open, at the top is a run_in_GUI boolean, which you can use to open the sim up in the GUI; if you set it to False then it will run in the Nengo simulator and once finished will pop up with some basic graphs. Shout out to Terry Stewart for putting together the arm visualization. It’s a pretty slick little demo of the extensibility of the Nengo GUI; you can see the code for it all in the arm_func in the nengo_arm.py file.
As it’s set up right now, it uses a 2-link arm, but you can simply swap out the Arm.py file with whatever plant you want to control. And as for the neural model, there isn’t one implemented in here, it’s just a simple input node that runs through a neural population to apply torque to the two joints of the arm. But! It should be a good start for anyone looking to experiment with arm control in Nengo. Here’s what it looks like when you pull it up in the GUI (also note that the arm visualization only appears once you hit the play button!):
## Operational space control of 6DOF robot arm with spiking cameras part 3: Tracking a target using spiking cameras
Alright. Previously we got our arm all set up to perform operational space control, accepting commands through Python. In this post we’re going to set it up with a set of spiking cameras for eyes, train it to learn the mapping between camera coordinates and end-effector coordinates, and have it track an LED target.
What is a spiking camera?
Good question! Spiking cameras are awesome, and they come from Dr. Jorg Conradt’s lab. Basically what they do is return you information about movement from the environment. They’re event-driven, instead of clock-driven like most hardware, which means that they have no internal clock that’s dictating when they send information (i.e. they’re asynchronous). They send information out as soon as they receive it. Additionally, they only send out information about the part of the image that has changed. This means that they have super fast response times and their output bandwidth is really low. Dr. Terry Stewart of our lab has written a bunch of code that can be used for interfacing with spiking cameras, which can all be found up on his GitHub.
Let’s use his code to see through a spiking camera’s eye. After cloning his repo and running python setup.py you can plug in a spiking camera through USB, and with the following code have a Matplotlib figure pop-up with the camera output:
import nstbot
import nstbot.connection
import time
eye = nstbot.RetinaBot()
eye.connect(nstbot.connection.Serial('/dev/ttyUSB0', baud=4000000))
time.sleep(1)
eye.retina(True)
eye.show_image()
while True:
    time.sleep(1)
The important parts here are the creation of an instance of the RetinaBot, connecting it to the proper USB port, and calling the show_image function. Pretty easy, right? Here’s some example output, this is me waving my hand and snapping my fingers:
How cool is that? Now, you may be wondering how or why we’re going to use a spiking camera instead of a regular camera. The main reason that I’m using it here is because it makes tracking targets super easy. We just set up an LED that blinks at say 100Hz, and then we look for that frequency in the spiking camera output by recording the rate of change of each of the pixels and averaging over all pixel locations changing at the target frequency. So, to do this with the above code we simply add
eye.track_frequencies(freqs=[100])
And now we can track the location of an LED blinking at 100Hz! The visualization code places a blue dot at the estimated target location, and this all looks like:
Alright! Easily decoded target location complete.
Transforming between camera coordinates and end-effector coordinates
Now that we have a system that can track a target location, we need to transform that position information into end-effector coordinates for the arm to move to. There are a few ways to go about this. One is by very carefully positioning the camera and measuring the distances between the robot’s origin reference frame and working through the trig etc etc. Another, much less pain-in-the-neck way is to instead record some sample points of the robot end-effector at different positions in both end-effector and camera coordinates, and then use a function approximator to generalize over the rest of space.
We’ll do the latter, because it’s exactly the kind of thing that neurons are great for. We have some weird function, and we want to learn to approximate it. Populations of neurons are awesome function approximators. Think of all the crazy mappings your brain learns. To perform function approximation with neurons we’re going to use the Neural Engineering Framework (NEF). If you’re not familiar with the NEF, the basic idea is to use the response curves of neurons as a big set of basis functions to decode some signal in some vector space. So we look at the responses of the neurons in the population as we vary our input signal, and then determine a set of decoders (using least-squares or somesuch) that specify the contribution of each neuron to the different dimensions of the function we want to approximate.
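As a rough illustration of that decoder solve (a hand-rolled sketch of regularized least-squares, not Nengo’s actual internals), assume A is an (n_samples, n_neurons) matrix of neuron activities recorded at each input sample, and targets is an (n_samples, dims) matrix of the desired function outputs:

import numpy as np

reg = 0.1 * np.max(A)  # regularization scaled to the largest activity
G = np.dot(A.T, A) + reg**2 * np.eye(A.shape[1])
decoders = np.linalg.solve(G, np.dot(A.T, targets))
approx = np.dot(A, decoders)  # the population's estimate of the function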
Here’s how this is going to work.
1. we’re going to attach the LED to the head of the robot,
2. we specify a set of $(x,y,z)$ coordinates that we send to the robot’s controller,
3. when the robot moves to each point, we record the LED location from the camera as well as the end-effector’s $(x,y,z)$ coordinate,
4. we create a population of neurons that we train up to learn the mapping from camera locations to end-effector $(x,y,z)$ locations, and
5. we use this information to tell the robot where to move.
A detail that should be mentioned here is that a single camera only provides 2D output. To get a 3D location we’re going to use two separate cameras. One will provide $(x,z)$ information, and the other will provide $(y,z)$ information.
Once we’ve taped (expertly) the LED onto the robot arm, the following script generates the information we need to approximate the function transforming from camera coordinates to end-effector space:
import robot
from eye import Eye # this is just a spiking camera wrapper class

import numpy as np
import time

# connect to the spiking cameras
eye0 = Eye(port='/dev/ttyUSB2')
eye1 = Eye(port='/dev/ttyUSB1')
eyes = [eye0, eye1]

# connect to the robot
rob = robot.robotArm()

# define the range of values to test
min_x = -10.0
max_x = 10.0
x_interval = 5.0
min_y = -15.0
max_y = -5.0
y_interval = 5.0
min_z = 10.0
max_z = 20.0
z_interval = 5.0

x_space = np.arange(min_x, max_x, x_interval)
y_space = np.arange(min_y, max_y, y_interval)
z_space = np.arange(min_z, max_z, z_interval)

num_samples = 10 # how many camera samples to average over

try:
    out_file0 = open('eye_map_0.csv', 'w')
    out_file1 = open('eye_map_1.csv', 'w')

    for i, x_val in enumerate(x_space):
        for j, y_val in enumerate(y_space):
            for k, z_val in enumerate(z_space):
                # build the target position for this grid point
                target = np.array([x_val, y_val, z_val])
                rob.move_to_xyz(target)
                time.sleep(2) # time for the robot to move

                # take a bunch of samples and average the input to get
                # the approximation of the LED in camera coordinates
                eye_data0 = np.zeros(2)
                for ii in range(num_samples):
                    eye_data0 += eye0.position(0)[:2]
                eye_data0 /= num_samples
                out_file0.write('%0.2f, %0.2f, %0.2f, %0.2f\n' %
                                (y_val, z_val, eye_data0[0], eye_data0[1]))

                eye_data1 = np.zeros(2)
                for ii in range(num_samples):
                    eye_data1 += eye1.position(0)[:2]
                eye_data1 /= num_samples
                out_file1.write('%0.2f, %0.2f, %0.2f, %0.2f\n' %
                                (x_val, z_val, eye_data1[0], eye_data1[1]))

    out_file0.close()
    out_file1.close()

except:
    import sys
    import traceback
    print traceback.print_exc(file=sys.stdout)

finally:
    rob.robot.disconnect()
This script connects to the cameras, defines some rectangle in end-effector space to sample, and then works through each of the points writing the data to file. The results of this code can be seen in the animation posted in part 2 of this series.
OK! So now we have all the information we need to train up our neural population. It’s worth noting that we’re only using 36 sample points to train up our neurons, I did this mostly because I didn’t want to wait around. You can of course use more, though, and the more sample points you have the more accurate your function approximation will be.
Implementing a controller using Nengo
The neural simulation software (which implements the NEF) that we’re going to be using to generate and train our neural population is called Nengo. It’s free for non-commercial use, and I highly recommend checking out the introduction and tutorials if you have any interest in neural modeling.
What we need to do now is generate two neural populations, one for each camera, that will receive input from the spiking camera and transform the target’s location information into end-effector coordinates. We will then combine the estimates from the two populations, and send that information out to the robot to tell it where to move. I’ll paste the code in here, and then we’ll step through it below.
from eye import Eye
import nengo
from nengo.utils.connection import target_function
import robot

import numpy as np
import sys
import traceback

# connect to robot
rob = robot.robotArm()

model = nengo.Network()

try:
    def eyeNet(port='/dev/ttyUSB0', filename='eye_map.csv', n_neurons=1000,
               label='eye'):

        # connect to eye
        spiking_cam = Eye(port=port)

        # read in eval points and target output
        eval_points = []
        targets = []

        file_obj = open(filename, 'r')
        for line in file_obj:
            line_data = map(float, line.strip().split(','))
            targets.append(line_data[:2])
            eval_points.append(line_data[2:])
        file_obj.close()

        eval_points = np.array(eval_points)
        targets = np.array(targets)

        # create subnetwork for eye
        net = nengo.Network(label=label)
        with net:
            def eye_input(t):
                return spiking_cam.position(0)[:2]
            net.input = nengo.Node(output=eye_input, size_out=2)
            net.map_ens = nengo.Ensemble(n_neurons, dimensions=2)
            net.output = nengo.Node(size_in=2)

            nengo.Connection(net.input, net.map_ens, synapse=None)
            nengo.Connection(net.map_ens, net.output, synapse=None,
                             **target_function(eval_points, targets))

        return net

    with model:
        # create network for spiking camera 0
        eye0 = eyeNet(port='/dev/ttyUSB2', filename='eye_map_0.csv', label='eye0')
        # create network for spiking camera 1
        eye1 = eyeNet(port='/dev/ttyUSB1', filename='eye_map_1.csv', label='eye1')

        def eyes_func(t, yzxz):
            x = yzxz[2] # x coordinate coded from eye1
            y = yzxz[0] # y coordinate coded from eye0
            z = (yzxz[1] + yzxz[3]) / 2.0 # z coordinate average from eye0 and eye1
            return [x, y, z]
        eyes = nengo.Node(output=eyes_func, size_in=4)
        nengo.Connection(eye0.output, eyes[:2])
        nengo.Connection(eye1.output, eyes[2:])

        # create output node for sending instructions to arm
        def arm_func(t, x):
            if t < .05: return # don't move arm during startup (avoid transients)
            rob.move_to_xyz(np.array(x, dtype='float32'))
        armNode = nengo.Node(output=arm_func, size_in=3, size_out=0)
        nengo.Connection(eyes, armNode)

    sim = nengo.Simulator(model)
    sim.run(10, progress_bar=False)

except:
    print traceback.print_exc(file=sys.stdout)

finally:
    print 'disconnecting'
    rob.robot.disconnect()
The first thing we’re doing is defining a function (eyeNet) to create our neural population that takes input from a spiking camera, and decodes out an end-effector location. In here, we read in from file the information we just recorded about the camera positions that will serve as the input signal to the neurons (eval_points) and the corresponding set of function outputs (targets). We create a Nengo network, net, and then a couple of nodes for connecting the input (net.input) and projecting the output (net.output). The population of neurons that we’ll use to approximate our function is called net.map_ens. To specify the function we want to approximate using the eval_points and targets arrays, we create a connection from net.map_ens to net.output and use **target_function(eval_points, targets). So this is probably a little weird to parse if you haven’t used Nengo before, but hopefully it’s clear enough that you can get the gist of what’s going on.
In the main part of the code, we create another Nengo network. We call this one model because that’s convention for the top-level network in Nengo. We then create two networks using the eyeNet function to hook up to the two cameras. At this point we create a node called eyes, and the role of this node is simply to amalgamate the information from the two cameras from $(x,z)$ and $(y,z)$ into $(x,y,z)$. This node is then hooked up to another node called armNode, and all armNode does is call the robot arm’s move_to_xyz function, which we defined in the last post.
Finally, we create a Simulator from model, which compiles the neural network we just specified above, and we run the simulation. The result of all of this then looks something like the following:
And there we go! Project complete! We have a controller for a 6DOF arm that uses spiking cameras to train up a neural population and track an LED, and that requires almost no set-up time. I gave a demo of this at the end of the summer school, and there’s no real positioning of the cameras relative to the arm required; you just have to tape the cameras up somewhere, run the training script, and go!
Future work
From here there are a bunch of fun ways to go about extending this. We could add another LED blinking at a different frequency that the arm needs to avoid, using an obstacle avoidance algorithm like the one in this post, add in another dimension of the task involving the gripper, implement a null-space controller to keep the arm near resting joint angles as it tracks the target, and on and on!
Another thing that I’ve looked at is including learning on the system to fine tune our function approximation online. As is, the controller is able to extrapolate and move the arm to target locations that are outside of the range of space sampled during training, but it’s not super accurate. It would be much better to be constantly refining the estimate using learning. I was able to implement a basic version that works, but getting the learning and the tracking running at the same time turns out to be a bit trickier, so I haven’t had the chance to get it all running yet. Hopefully there will be some more down-time in the near future, and I’ll be able to finish implementing it.
For now, though, we still have a pretty neat target tracker for our robot arm!
## Operational space control of 6DOF robot arm with spiking cameras part 2: Deriving the Jacobian
In the previous exciting post in this series I outlined the project, which is in the title, and we worked through getting access to the arm through Python. The next step was deriving the Jacobian, and that’s what we’re going to be talking about in this post!
Alright.
This was a time I was very glad to have a previous post talking about generating transformation matrices, because deriving the Jacobian for a 6DOF arm in 3D space comes off as a little daunting when you’re used to 3DOF in 2D space, and I needed a reminder of the derivation process. The first step here was finding out which motors were what, so I went through and found out how each motor moved with something like the following code:
for ii in range(7):
    target_angles = np.zeros(7, dtype='float32')
    target_angles[ii] = np.pi / 4.0
    rob.move(target_angles)
    time.sleep(1)
and I found that the robot is set up as shown in the figures below
This is me trying my hand at making things clearer using Inkscape; hopefully it’s worked. Displayed are the first 6 joints and their angles of rotation, $q_0$ through $q_5$. The 7th joint, $q_6$, opens and closes the gripper, so we’re safe to ignore it in deriving our Jacobian. The arm segment lengths $l_1, l_3,$ and $l_5$ are named based on the nearest joint angles (this makes for easier reading in the Jacobian derivation).
Find the transformation matrix from end-effector to origin
So first things first, let’s find the transformation matrices. Our first joint, $q_0$, rotates around the $z$ axis, so the rotational part of our transformation matrix $T^0_\textrm{origin}$ is
$R^0_\textrm{origin} = \left[ \begin{array}{ccc} \textrm{cos}(q_0) & -\textrm{sin}(q_0) & 0 \\ \textrm{sin}(q_0) & \textrm{cos}(q_0) & 0 \\ 0 & 0 & 1 \end{array} \right],$
and $q_0$ and our origin frame of reference are on top of each other, so we don’t need to account for translation; the translation component of $T^0_\textrm{origin}$ is
$D^0_\textrm{origin} = \left[ \begin{array}{c} 0 \\ 0 \\ 0 \end{array} \right].$
Stacking these together to form our first transformation matrix we have
$T^0_\textrm{origin} = \left[ \begin{array}{cc} R^0_\textrm{origin} & D^0_\textrm{origin} \\ 0 & 1 \end{array} \right] = \left[ \begin{array}{cccc} \textrm{cos}(q_0) & -\textrm{sin}(q_0) & 0 & 0 \\ \textrm{sin}(q_0) & \textrm{cos}(q_0) & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right] .$
So now we are able to convert a position in 3D space from the reference frame of joint $q_0$ back to our origin frame of reference. Let’s keep going.
Joint $q_1$ rotates around the $x$ axis, and there is a translation along the arm segment $l_1$. Our transformation matrix looks like
$T^1_\textrm{0} = \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & \textrm{cos}(q_1) & -\textrm{sin}(q_1) & l_1 * \textrm{cos}(q_1) \\ 0 & \textrm{sin}(q_1) & \textrm{cos}(q_1) & l_1 * \textrm{sin}(q_1) \\ 0 & 0 & 0 & 1 \end{array} \right] .$
Joint $q_2$ also rotates around the $x$ axis, but there is no translation from $q_2$ to $q_3$. So our transformation matrix looks like
$T^2_\textrm{1} = \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & \textrm{cos}(q_2) & -\textrm{sin}(q_2) & 0 \\ 0 & \textrm{sin}(q_2) & \textrm{cos}(q_2) & 0 \\ 0 & 0 & 0 & 1 \end{array} \right] .$
The next transformation matrix is a little tricky, because you might be tempted to say that it’s rotating around the $z$ axis, but actually it’s rotating around the $y$ axis. This is determined by where $q_3$ is mounted relative to $q_2$. If it was mounted at 90 degrees from $q_2$ then it would be rotating around the $z$ axis, but it’s not. For translation, there’s a translation along the $y$ axis up to the next joint, so all in all the transformation matrix looks like:
$T^3_\textrm{2} = \left[ \begin{array}{cccc} \textrm{cos}(q_3) & 0 & \textrm{sin}(q_3) & 0 \\ 0 & 1 & 0 & l_3 \\ -\textrm{sin}(q_3) & 0 & \textrm{cos}(q_3) & 0 \\ 0 & 0 & 0 & 1 \end{array} \right] .$
And then the transformation matrices for coming from $q_4$ to $q_3$ and $q_5$ to $q_4$ are the same as the previous set, so we have
$T^4_\textrm{3} = \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & \textrm{cos}(q_4) & -\textrm{sin}(q_4) & 0 \\ 0 & \textrm{sin}(q_4) & \textrm{cos}(q_4) & 0 \\ 0 & 0 & 0 & 1 \end{array} \right] .$
and
$T^5_\textrm{4} = \left[ \begin{array}{cccc} \textrm{cos}(q_5) & 0 & \textrm{sin}(q_5) & 0 \\ 0 & 1 & 0 & l_5 \\ -\textrm{sin}(q_5) & 0 & \textrm{cos}(q_5) & 0 \\ 0 & 0 & 0 & 1 \end{array} \right] .$
Alright! Now that we have all of the transformation matrices, we can put them together to get the transformation from end-effector coordinates to our reference frame coordinates!
$T^\textrm{ee}_\textrm{origin} = T^0_\textrm{origin} T^1_0 T^2_1 T^3_2 T^4_3 T^5_4.$
At this point I went and tested this with some sample points to make sure that everything seemed to be being transformed properly, but we won’t go through that here.
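For the curious, here’s a rough sketch of what such a spot check can look like (this is not the original test code; the helper functions, joint angles, and segment lengths below are made up for illustration):

import numpy as np

def rot_x(q, d=(0.0, 0.0, 0.0)):
    # 4x4 transform: rotation about x plus translation d
    c, s = np.cos(q), np.sin(q)
    return np.array([[1, 0, 0, d[0]],
                     [0, c, -s, d[1]],
                     [0, s, c, d[2]],
                     [0, 0, 0, 1]])

def rot_y(q, d=(0.0, 0.0, 0.0)):
    # 4x4 transform: rotation about y plus translation d
    c, s = np.cos(q), np.sin(q)
    return np.array([[c, 0, s, d[0]],
                     [0, 1, 0, d[1]],
                     [-s, 0, c, d[2]],
                     [0, 0, 0, 1]])

def rot_z(q):
    # 4x4 transform: rotation about z, no translation
    c, s = np.cos(q), np.sin(q)
    return np.array([[c, -s, 0, 0],
                     [s, c, 0, 0],
                     [0, 0, 1, 0],
                     [0, 0, 0, 1]])

# made-up joint angles and segment lengths, just for the check
q = [0.1, 0.2, -0.3, 0.4, -0.5, 0.6]
l1, l3, l5 = 3.0, 2.0, 1.0

T = (rot_z(q[0])
     .dot(rot_x(q[1], d=(0, l1 * np.cos(q[1]), l1 * np.sin(q[1]))))
     .dot(rot_x(q[2]))
     .dot(rot_y(q[3], d=(0, l3, 0)))
     .dot(rot_x(q[4]))
     .dot(rot_y(q[5], d=(0, l5, 0))))

# where a point at the end-effector origin lands in the origin frame
print(T.dot(np.array([0, 0, 0, 1]))[:3])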
Calculate the derivative of the transform with respect to each joint
The next step in calculating the Jacobian is getting the derivative of $T^\textrm{ee}_\textrm{origin}$. This could be a big ol’ headache to do by hand, OR we could use SymPy, the symbolic computation package for Python. Which is exactly what we’ll do. So after a quick
sudo pip install sympy
I wrote up the following script to perform the derivation for us
import sympy as sp

def calc_transform():
    # set up our joint angle symbols (the 7th joint just opens and closes
    # the gripper, so it doesn't affect the kinematics and is left out)
    q = [sp.Symbol('q0'), sp.Symbol('q1'), sp.Symbol('q2'), sp.Symbol('q3'),
         sp.Symbol('q4'), sp.Symbol('q5')]
    # set up our arm segment length symbols
    l1 = sp.Symbol('l1')
    l3 = sp.Symbol('l3')
    l5 = sp.Symbol('l5')

    T0org = sp.Matrix([[sp.cos(q[0]), -sp.sin(q[0]), 0, 0],
                       [sp.sin(q[0]), sp.cos(q[0]), 0, 0],
                       [0, 0, 1, 0],
                       [0, 0, 0, 1]])

    T10 = sp.Matrix([[1, 0, 0, 0],
                     [0, sp.cos(q[1]), -sp.sin(q[1]), l1*sp.cos(q[1])],
                     [0, sp.sin(q[1]), sp.cos(q[1]), l1*sp.sin(q[1])],
                     [0, 0, 0, 1]])

    T21 = sp.Matrix([[1, 0, 0, 0],
                     [0, sp.cos(q[2]), -sp.sin(q[2]), 0],
                     [0, sp.sin(q[2]), sp.cos(q[2]), 0],
                     [0, 0, 0, 1]])

    T32 = sp.Matrix([[sp.cos(q[3]), 0, sp.sin(q[3]), 0],
                     [0, 1, 0, l3],
                     [-sp.sin(q[3]), 0, sp.cos(q[3]), 0],
                     [0, 0, 0, 1]])

    T43 = sp.Matrix([[1, 0, 0, 0],
                     [0, sp.cos(q[4]), -sp.sin(q[4]), 0],
                     [0, sp.sin(q[4]), sp.cos(q[4]), 0],
                     [0, 0, 0, 1]])

    T54 = sp.Matrix([[sp.cos(q[5]), 0, sp.sin(q[5]), 0],
                     [0, 1, 0, l5],
                     [-sp.sin(q[5]), 0, sp.cos(q[5]), 0],
                     [0, 0, 0, 1]])

    T = T0org * T10 * T21 * T32 * T43 * T54

    # position of the end-effector relative to joint 6's axes (right at the origin)
    x = sp.Matrix([0, 0, 0, 1])
    Tx = T * x

    for ii in range(6):
        print q[ii]
        print sp.simplify(Tx[0].diff(q[ii]))
        print sp.simplify(Tx[1].diff(q[ii]))
        print sp.simplify(Tx[2].diff(q[ii]))
And then I consolidated the output using some variable shorthand to write a function that accepts the joint angles and generates the Jacobian:
def calc_jacobian(self, q):
    # this lives on the arm class: it uses the segment lengths stored on self
    J = np.zeros((3, 7))

    c0 = np.cos(q[0])
    s0 = np.sin(q[0])
    c1 = np.cos(q[1])
    s1 = np.sin(q[1])
    c3 = np.cos(q[3])
    s3 = np.sin(q[3])
    c4 = np.cos(q[4])
    s4 = np.sin(q[4])

    c12 = np.cos(q[1] + q[2])
    s12 = np.sin(q[1] + q[2])

    l1 = self.l1
    l3 = self.l3
    l5 = self.l5

    J[0, 0] = -l1*c0*c1 - l3*c0*c12 - l5*((s0*s3 - s12*c0*c3)*s4 + c0*c4*c12)
    J[1, 0] = -l1*s0*c1 - l3*s0*c12 + l5*((s0*s12*c3 + s3*c0)*s4 - s0*c4*c12)
    J[2, 0] = 0

    J[0, 1] = (l1*s1 + l3*s12 + l5*(s4*c3*c12 + s12*c4))*s0
    J[1, 1] = -(l1*s1 + l3*s12 + l5*s4*c3*c12 + l5*s12*c4)*c0
    J[2, 1] = l1*c1 + l3*c12 - l5*(s4*s12*c3 - c4*c12)

    J[0, 2] = (l3*s12 + l5*(s4*c3*c12 + s12*c4))*s0
    J[1, 2] = -(l3*s12 + l5*s4*c3*c12 + l5*s12*c4)*c0
    J[2, 2] = l3*c12 - l5*(s4*s12*c3 - c4*c12)

    J[0, 3] = -l5*(s0*s3*s12 - c0*c3)*s4
    J[1, 3] = l5*(s0*c3 + s3*s12*c0)*s4
    J[2, 3] = -l5*s3*s4*c12

    J[0, 4] = l5*((s0*s12*c3 + s3*c0)*c4 + s0*s4*c12)
    J[1, 4] = l5*((s0*s3 - s12*c0*c3)*c4 - s4*c0*c12)
    J[2, 4] = -l5*(s4*s12 - c3*c4*c12)

    # the last two columns stay zero: the tracked point sits on joint q5's
    # axis of rotation so its position doesn't change with q5, and q6 is
    # the gripper
    return J
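One quick way to gain confidence in a hand-consolidated Jacobian like this is to compare it against a finite-difference approximation of the forward kinematics. A minimal sketch, assuming a calc_xyz function that returns the end-effector $(x,y,z)$ for a given set of joint angles (i.e. the transform derived above):

import numpy as np

def numeric_jacobian(calc_xyz, q, eps=1e-5):
    # finite-difference approximation of d(xyz)/dq, for sanity checking only
    q = np.asarray(q, dtype=float)
    J = np.zeros((3, len(q)))
    for ii in range(len(q)):
        dq = np.zeros(len(q))
        dq[ii] = eps
        J[:, ii] = (calc_xyz(q + dq) - calc_xyz(q - dq)) / (2.0 * eps)
    return J

# the analytic and numeric Jacobians should then agree to within ~eps, e.g.
# assert np.allclose(J_analytic[:, :6], numeric_jacobian(calc_xyz, q), atol=1e-4)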
Alright! Now we have our Jacobian! Really the only time consuming part here was calculating our end-effector to origin transformation matrix, generating the Jacobian was super easy using SymPy once we had that.
Hack position control using the Jacobian
Great! So now that we have our Jacobian we’ll be able to translate forces that we want to apply to the end-effector into joint torques that we want to apply to the arm motors. Since we can’t control applied force to the motors though, and have to pass in desired angle positions, we’re going to do a hack approximation. Let’s first transform our forces from end-effector space into a set of joint angle torques:
$\textbf{u} = \textbf{J}^T \; \textbf{u}_\textbf{x}.$
To approximate the control then we’re simply going to take the current set of joint angles (which we know because it’s whatever angles we last told the system to move to) and add a scaled down version of $\textbf{u}$ to approximate applying torque that affects acceleration and then velocity.
$\textbf{q}_\textrm{des} = \textbf{q} + \alpha \; \textbf{u},$
where $\alpha$ is the gain term, I used .001 here because it was nice and slow, so no crazy commands that could break the servos would be sent out before I could react and hit the cancel button.
To implement operational space control here, then, we find the current $(x,y,z)$ position of the end-effector, calculate the difference between it and the target end-effector position, use that to generate the end-effector control signal $u_x$, get the Jacobian for the current state of the arm using the function above, find the set of joint torques to apply, approximate this control by generating a set of target joint angles to move to, and then repeat the whole loop until we’re within some threshold of the target position. Whew.
So, a lot of steps, but pretty straightforward to implement. The method I wrote to do it looks something like:
def move_to_xyz(self, xyz_d):
    """
    np.array xyz_d: 3D target (x_d, y_d, z_d)
    """
    count = 0
    while 1:
        count += 1

        # get control signal in 3D space
        xyz = self.calc_xyz()
        delta_xyz = xyz_d - xyz
        ux = self.kp * delta_xyz

        # transform to joint space
        J = self.calc_jacobian(self.q)
        u = np.dot(J.T, ux)

        # target joint angles are current + u (scaled)
        self.q[...] += u * .001
        self.robot.move(np.asarray(self.q.copy(), 'float32'))

        if np.sqrt(np.sum(delta_xyz**2)) < .1 or count > 1e4:
            break
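Usage is then just a single call per target (the coordinates here are made up for illustration):

rob = robot.robotArm()
rob.move_to_xyz(np.array([0., -10., 15.], dtype='float32'))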
And that is it! We have successfully hacked together a system that can perform operational space control of a 6DOF robot arm. Here is a very choppy video of it moving around to some target points in a grid on a cube.
So, granted, I had to drop a lot of frames from the video to bring its size down to something close to reasonable, but still you can see that it moves to target locations super fast!
Alright this is sweet, but we’re not done yet. We don’t want to have to tell the arm where to move ourselves. Instead we’d like the robot to perform target tracking for some target LED we’re moving around, because that’s way more fun and interactive. To do this, we’re going to use spiking cameras! So stay tuned, we’ll talk about what the hell spiking cameras are and how to use them for a super quick-to-setup and foolproof target tracking system in the next exciting post! | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 294, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.527990460395813, "perplexity": 1136.2043005950904}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982947845.70/warc/CC-MAIN-20160823200907-00086-ip-10-153-172-175.ec2.internal.warc.gz"} |
https://mayoverse.github.io/arsenal/ | ## Overview
The goal of library(arsenal) is to make statistical reporting easy. It includes many functions which the useR will find useful to have in his/her “arsenal” of functions. There are, at this time, 6 main functions, documented below. Each of these functions is motivated by a local SAS macro or procedure of similar functionality.
Note that arsenal v3.0.0 is not backwards compatible with previous versions (mainly because compare() got renamed to comparedf()). See the NEWS file for more details.
arsenal now has a pkgdown site: https://mayoverse.github.io/arsenal/
## The tableby() Function
tableby() is a function to easily summarize a set of independent variables by one or more categorical variables. Optionally, an appropriate test is performed to test the distribution of the independent variables across the levels of the categorical variable. Options for this function are easily controlled using tableby.control().
The tableby() output is easily knitted in an Rmarkdown document or displayed in the command line using the summary() function. Other S3 methods are implemented for objects from tableby(), including print(), [, as.data.frame(), sort(), merge(), padjust(), head(), and tail().
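As a quick illustration (this mirrors the package's own examples; mockstudy is a dataset shipped with arsenal):

library(arsenal)
data(mockstudy)
# summarize sex and age by treatment arm, with tests across arms
tab <- tableby(arm ~ sex + age, data = mockstudy)
summary(tab, text = TRUE)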
## The paired() Function
paired() is a function to easily summarize a set of independent variables across two time points. Optionally, an appropriate test is performed to test the distribution of the independent variables across the time points. Options for this function are easily controlled using paired.control().
The paired() output is easily knitted in an Rmarkdown document or displayed in the command line using the summary() function. It has the same S3 methods as tableby(), since it’s a special case of the tableby() object.
## The modelsum() Function
modelsum() is a function to fit and summarize models for each independent variable with one or more response variables, with options to adjust for covariates for each model. Options for this function are easily controlled using modelsum.control().
The modelsum output is easily knitted in an Rmarkdown document or displayed in the command line using the summary() function. Other S3 methods are implemented for objects from modelsum(), including print(), [, as.data.frame(), and merge().
## The freqlist() Function
freqlist() is a function to approximate the output from SAS’s PROC FREQ procedure when using the /list option of the TABLE statement. Options for this function are easily controlled using freq.control().
The freqlist() output is easily knitted in an Rmarkdown document or displayed in the command line using the summary() function. Other S3 methods are implemented for objects from freqlist(), including print(), [, as.data.frame(), sort(), and merge(). Additionally, the summary() output can be used with head() or tail().
## The comparedf() Function
comparedf() compares two data.frames and reports any differences between them, much like SAS's PROC COMPARE procedure.
The comparedf() output is easily knitted in an Rmarkdown document or displayed in the command line using the summary() function. Other S3 methods are implemented for objects of class "comparedf", including print(), n.diffs(), n.diff.obs(), and diffs().
## The write2*() Family of Functions
write2word(), write2pdf(), and write2html() are functions to output a table into a document, much like SAS’s ODS procedure. The S3 method behind them is write2(). There are methods implemented for tableby(), modelsum(), freqlist(), and comparedf(), and also methods for knitr::kable(), xtable::xtable(), and pander::pander_return(). Another option is to coerce an object using verbatim() to print out the results monospaced (as if they were in the terminal)–the default method does this automatically. To output multiple tables into a document, simply make a list of them and call the same function as before. A YAML header can be added using yaml(). Code chunks can be written using code.chunk().
For more information, see vignette("write2").
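A rough sketch of that workflow (the objects and file names here are made up; see the vignette for the authoritative examples):

library(arsenal)
data(mockstudy)
tab <- tableby(arm ~ sex + age, data = mockstudy)
# a single object rendered to an HTML document
write2html(tab, "tab.html")
# multiple tables: pass a list of objects
write2pdf(list(tab, freqlist(table(mockstudy$sex, mockstudy$arm))), "tables.pdf")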
## Other Notable Functions
• keep.labels() keeps the 'label' attribute on an R object when subsetting. loosen.labels() allows the labels to drop again.
• formulize() is a shortcut to collapse variable names into a formula.
• mdy.Date() and Date.mdy() convert numeric dates for month, day, and year to Date object, and vice versa.
• is.Date: tests if an object is a date.
• %nin% tests for “not in”, the negation of %in%.
• allNA() tests for all elements being NA, and includeNA() makes NAs explicit values. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.16015809774398804, "perplexity": 5151.0888199475185}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039626288.96/warc/CC-MAIN-20210423011010-20210423041010-00138.warc.gz"} |
http://math.stackexchange.com/questions/164899/generalized-laplacian-operator/164940 | # Generalized Laplacian operator?
Suppose a surface $S$ is endowed with a metric given by the matrix $$M=\begin{pmatrix} E&F\\F&G\end{pmatrix}$$
And $f,g$ are scalar functions defined on the surface. What then is the (geometric) significance of the scalar function given by $$\frac{1}{\sqrt{\det(M)}}\frac{\partial}{\partial x_i}\left(f\sqrt{\det(M)}\,(M^{-1})_{ij}\frac{\partial}{\partial x_j} g\right),$$ where repeated indices $i,j$ are summed over?
I have been told that if we set $f=1$, we get an operator equivalent to the Laplacian acting on the function $g$. Why does the Laplacian become this form? Is there an intuitive geometric explanation of what is going on?
Thank you.
Yes, there is a more intuitive geometric explanation, though it is kind of difficult to see if you just get to see this formula. In differential (Riemannian) geometry one looks at curved (in contrast to flat, like Euclidean space) surfaces or higher dimensional manifolds. The metric you are looking at (more precisely: the pointwise norm associated with the pointwise scalar product it defines) is kind of an infinitesimal measure for distances in these surfaces.
It turns out that in this context you can set up differential calculus, the basic operation of which, when applied to vector fields, is the so-called covariant differentiation. If you are looking at surfaces embedded in Euclidean 3-space this is basically a differentiation in the surrounding space followed by an orthogonal projection onto the tangent plane to the surface, but one can define this in an abstract manner, too (without an ambient manifold). If you then take a function and its gradient (a concept which also has to be defined and depends on the metric) and take the covariant derivative of this object, the trace of the result (w.r.t. the metric) is the Laplacian of the function (as in Euclidean space, where the Laplacian is the trace of the Hessian). In local coordinates this happens to look like the object you wrote down (when $f=1$). While in this form it looks a bit arbitrary, it turns out to have some interesting properties. In particular it is invariant under coordinate changes, that is, it is well defined as a differential operator on the surface.
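To see concretely why the $f=1$ case is the Laplacian, it helps to evaluate it in two familiar metrics. In general $$\Delta g = \frac{1}{\sqrt{\det M}}\frac{\partial}{\partial x_i}\left(\sqrt{\det M}\,(M^{-1})_{ij}\frac{\partial g}{\partial x_j}\right).$$ For the flat plane, $M = I$ and $\det M = 1$, so this collapses to $\Delta g = \frac{\partial^2 g}{\partial x^2} + \frac{\partial^2 g}{\partial y^2}$. For polar coordinates $(r,\theta)$ the metric is $M = \mathrm{diag}(1, r^2)$, so $\sqrt{\det M} = r$ and $$\Delta g = \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial g}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2 g}{\partial \theta^2},$$ which is exactly the usual Laplacian written in polar coordinates.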
Thomas has already given a good answer. However it might be helpful to mention that the operator in the question(v1) with $f=1$ is in fact the 2D Laplace-Beltrami operator. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9565989971160889, "perplexity": 147.92826901350378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398468971.92/warc/CC-MAIN-20151124205428-00083-ip-10-71-132-137.ec2.internal.warc.gz"} |
https://forum.azimuthproject.org/discussion/comment/15966/ | #### Howdy, Stranger!
It looks like you're new here. If you want to get involved, click one of these buttons!
Options
# Welcome to the Applied Category Theory Course!
edited October 2018
Hi! Welcome to the Applied Category Theory Course. I'm no longer giving lectures, but the resources are still available and you can still discuss the course here!
A good first step is to download a copy of the text:
The most important thing is to read and discuss my Lectures and try to do the Exercises.
On Monday May 7th we'll start Chapter 2. There's also a place for discussing Chapter 1 if you want to catch up.
This course will only be fun if you actively participate. You have to ask questions, answer puzzles, and do exercises from the book, to make progress. If you can answer other students' questions, do so. There are a lot of other smart people around to talk to! Go to Chat and meet some!
If you think you've found a mistake in the book, please report it here:
If it's really a mistake, Fong and Spivak will fix it!
Getting to Know Each Other
Chat is a good place for more free-wheeling discussions. I think we're gonna have tons of fun. Some of us will wind up doing interesting projects together! I want to help save the planet, and this is one way to get started.
Writing math
You can write equations using MathJax, which is a limited version of LaTeX good for the web. For "displayed" equations, centered on the page, use double dollar signs: $$E = \sqrt{m^2 + p^2}$$ produces this: $$E = \sqrt{m^2 + p^2}$$ For "inline" equations, mixed in with your text, use this other method: \$$E = \sqrt{m^2 + p^2}\$$ produces this: $$E = \sqrt{m^2 + p^2}$$.
Questions
If you have questions or comments on the subject of the course that don't quite fit into any of the chapter discussions, you can start a new discussion in the category Applied Category Theory Course. If you have questions about how the Azimuth Forum works, start a discussion in the category Technical.
Be Nice
It's an inevitable feature of any discussion forum that some users become rude, bully others, try to take over conversations, or try to exploit the forum as a venue to promote irrelevant ideas. Anyone who becomes annoying in these or other ways will be blocked. So be polite, be friendly, and let's focus on the course material!
Let's Go!
1.
edited March 2018
Thanks for this great course !
2.
edited March 2018
You're welcome! If everyone asks lots of questions and answers lots of other people's questions, it will be lots of fun. There's a limit to how many minutes a day I can put into this. But we've got about 90 people here, so multiply that by 90 and it becomes a formidable engine for knowledge generation.
3.
I suggest having deadlines for more-or-less (no need to be strict that I can see) finishing each chapter. John, I know you're playing this by ear (which is great IMO), but maybe you could make up the deadlines as we go along, a chapter at a time?
4.
I'm voting against deadlines :)
5.
Is it possible to be a passive student? While I love to do problems, with work and family it is going to be hard to find time but I definitely want to be engaged in the learning. Apologies for the selfishness.
6.
Jason and Igor: I'm not going to make up deadlines to look at each chapter, but I will march ahead, giving lectures on each chapter and then moving on to the next one... and I will focus most of my attention on discussions about what I just discussed, though people are free to continue each topic ad infinitum.
There will be a discussion thread about each chapter, and one about each lecture, and perhaps one about each exercise (or group of related exercises?)
7.
Sri Panyam: it's possible to be a passive student; you may have less fun than the people who engage more. Whenever you have questions or comments, please post them!
8.
edited March 2018
@JohnBaez What should the top level discussions be? I see
• Chapter #
• Lecture # - Chapter #
You mentioned discussions on exercises, so?...
• Exercise #
I also suggest...
• Puzzle # - Chapter # : <name>
I have completed all the exercises for chapter 1. Should I start posting them? I am also really interested in the responses for your puzzle 4 for chapter 1. It would nice if they were collected somewhere, should they be in the wiki?
9.
Frederick - yes, I agree that we should have top-level discussions of the types you mention! I'd love to see some of the sort
• Exercise #
and
• Puzzle # - Chapter #
So, please start them! It would be nice if the first post in such discussions merely stated the exercise and/or puzzle. Then the next post could be your solution (or someone's solution), and then other solutions and discussion thereof. If I get ahold of the LaTeX for Seven Sketches, this may reduce the amount of typing required.
There is already a partial list of responses to puzzle 4 here:
• Responses to Puzzle 4: https://forum.azimuthproject.org/discussion/1822/puzzle-4-responses
Over there, Daniel Cellucci and I have started to discuss using the Azimuth Wiki to store information like answers to puzzles: https://forum.azimuthproject.org/discussion/comment/16242/#Comment_16242
10.
edited June 2018
I deleted my original comment: I proposed a different top-level naming scheme, but having visited the forum, it works out great as it is. Thank you for this inititiative!
11.
Sure, Daniel! | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4525873064994812, "perplexity": 1525.1068755227996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145774.75/warc/CC-MAIN-20200223123852-20200223153852-00402.warc.gz"}
http://www.thestudentroom.co.uk/showthread.php?t=2006099 | You are Here: Home
# C2 24th May 2012 REVISION THREAD
1. All of these I consider to be relatively difficult questions and some I got stuck on; please answer these questions and show working.
Thank you everyone!
5. okay, q.6:
a) complete the square:
(x-3)^2 -9 + (y+2)^2 -4 =12
(x-3)^2 + (y+2)^2 =25
centre at (3,-2)
radius is root25 = 5 (+ve value because it's a length)
b) P at (-1,1) and Q at (7,-5)
midpoint PQ should be (3,-2) if a diameter
midpoint = ((7 - 1)/2, (-5 + 1)/2) = (3, -2)
c) I'm not really sure how you'd do this. There's a right-angled triangle between RPQ and you can draw the whole thing onto a set of axes but although R is (0,y), I'm not sure how you'd find the y coordinate
6. (Original post by jameshewitt)
All of these I consider to be relatively difficult questions and some I got stuck on, please answer these questions and show working.
Thank you everyone!
For the second question, the (ii):
4sinx=3tanx
For this question you have to know the trigonometric identity that tanx is equal to sinx/cosx. So we substitute tanx for sinx/cosx, which transforms the equation into:
4sinx = 3(sinx/cosx)
We then multiply both sides by cosx:
4sinx(cosx) = 3 sinx
There is sinx on both sides so we can cancel them out so the equation becomes:
4cosx = 3
Then we use our calculator to find the value of x so this becomes:
x = 41.4 (1 decimal place)
Then I think there are various ways to do this next step, but I use a CAST graph. I could draw it out if you want me to, but I'm assuming you know what this is/know what to do at this stage.
So the answer is 41.4 and 318.6 degrees.
If you have any queries, quote me.
7. (Original post by jameshewitt)
All of these I consider to be relatively difficult questions and some I got stuck on, please answer these questions and show working.
Thank you everyone!
For the third question, question 8 about logs:
Part a:
You have to know the log rule:
log_a(n) = x is the same as a^x = n
So if we apply this rule to log_2(y) = -3 then this becomes:
2^(-3) = y
Work out 2^(-3) and we get the value for y and hence the answer to this question.
The value of y = 0.125
Part b:
Firstly, I would multiply both sides by log_2(x) to get rid of that fraction so the equation becomes:
log_2(32) + log_2(16) = (log_2(x))^2
For this question you will have to know the log multiplication rule: log_a(x) + log_a(y) = log_a(xy)
So the equation becomes:
log_2(32*16) = (log_2(x))^2
Simplifying it:
log_2(512) = (log_2(x))^2
We then square root both sides. You can find the square root of log_2(512) on your calculator, which equals 3 and -3 so the equation becomes:
3 = log_2(x) and -3 = log_2(x)
We can then apply the rule from part a: log_a(n) = x is the same as a^x = n
So the equation then becomes:
2^3 = x and 2^(-3) = x
x = 8 and x = 1/8
8. (Original post by MrJames16)
For the third question, question 8 about logs:
Part a:
You have to know the log rule:
log_a(n) = x is the same as a^x = n
So if we apply this rule to log_2(y) = -3 then this becomes:
2^(-3) = y
Work out 2^(-3) and we get the value for y and hence the answer to this question.
The value of y = 0.125
Part b:
Firstly, I would multiply both sides by log_2(x) to get rid of that fraction so the equation becomes:
log_2(32) + log_2(16) = (log_2(x))^2
For this question you will have to know the log multiplication rule: log_a(x) + log_a(y) = log_a(xy)
So the equation becomes:
log_2(32*16) = (log_2(x))^2
Simplifying it:
log_2(512) = (log_2(x))^2
We then square root both sides. You can find the square root of log_2(512) on your calculator, which equals 3 so the equation becomes:
3 = log_2(x)
We can then apply the rule from part a: log_a(n) = x is the same as a^x = n
So the equation then becomes:
2^3 = x
x = 8
There are two answers to the 2nd part,
I will do it in this way,
log_2(32) + log_2(16) = 5 + 4 = 9, so (log_2(x))^2 = 9
log_2(x) = 3 gives x = 2^3 = 8
log_2(x) = -3 gives x = 2^(-3) = 1/8
So x = 8 or x = 1/8
9. (Original post by raheem94)
There are two answers to the 2nd part,
I will do it in this way,
log_2(32) + log_2(16) = 5 + 4 = 9, so (log_2(x))^2 = 9
log_2(x) = 3 gives x = 2^3 = 8
log_2(x) = -3 gives x = 2^(-3) = 1/8
So x = 8 or x = 1/8
Damn, I always forget about the negative solution when square rooting lol. Thanks for that
10. (Original post by jameshewitt)
All of these I consider to be relatively difficult questions and some I got stuck on, please answer these questions and show working.
Thank you everyone!
(Original post by ohtasha)
okay, q.6:
a) complete the square:
(x-3)^2 -9 + (y+2)^2 -4 =12
(x-3)^2 + (y+2)^2 =25
centre at (3,-2)
radius is root25 = 5 (+ve value because it's a length)
b) P at (-1,1) and Q at (7,-5)
midpoint PQ should be (3,-2) if a diameter
midpoint= (7-1/2, -5-1/2) = (3,-2)
c) I'm not really sure how you'd do this. There's a right-angled triangle between RPQ and you can draw the whole thing onto a set of axes but although R is (0,y), I'm not sure how you'd find the y coordinate
For the last part, draw a diagram.
Here is the diagram: [circle with diameter PQ and the point R on the positive y-axis]
As it is a right angled triangle, applying Pythagoras theorem gives,
RP^2 + RQ^2 = PQ^2
With R = (0, y), P = (-1, 1), Q = (7, -5) and PQ = 2(radius) = 10, this is
(1 + (y - 1)^2) + (49 + (y + 5)^2) = 100
2y^2 + 8y - 24 = 0, i.e. y^2 + 4y - 12 = 0, so (y + 6)(y - 2) = 0
R is on the positive y-axis, so y = 2 and R = (0, 2)
11. (Original post by MrJames16)
For the second question, the (ii):
4sinx=3tanx
For this question you have to know the trigonometric identity that tanx is equal to sinx/cosx. So we substitute tanx for sinx/cosx, which transforms the equation into:
4sinx = 3(sinx/cosx)
We then multiply both sides by cosx:
4sinx(cosx) = 3 sinx
There is sinx on both sides so we can cancel them out so the equation becomes:
4cosx = 3
Then we use our calculator to find the value of x so this becomes:
x = 41.4 (1 decimal place)
Then I think there are various ways to do this next step, but I use a CAST graph. I could draw it out if you want me to, but I'm assuming you know what this is/know what to do at this stage.
So the answer is 41.4 and 318.6 degrees.
If you have any queries, quote me.
I didn't think to cancel sinx each side, thank you for your help!
12. (Original post by MrJames16)
For the third question, question 8 about logs:
Part a:
You have to know the log rule:
log_a(n) = x is the same as a^x = n
So if we apply this rule to log_2(y) = -3 then this becomes:
2^(-3) = y
Work out 2^(-3) and we get the value for y and hence the answer to this question.
The value of y = 0.125
Part b:
Firstly, I would multiply both sides by log_2(x) to get rid of that fraction so the equation becomes:
log_2(32) + log_2(16) = (log_2(x))^2
For this question you will have to know the log multiplication rule: log_a(x) + log_a(y) = log_a(xy)
So the equation becomes:
log_2(32*16) = (log_2(x))^2
Simplifying it:
log_2(512) = (log_2(x))^2
We then square root both sides. You can find the square root of log_2(512) on your calculator, which equals 3 and -3 so the equation becomes:
3 = log_2(x) and -3 = log_2(x)
We can then apply the rule from part a: log_a(n) = x is the same as a^x = n
So the equation then becomes:
2^3 = x and 2^(-3) = x
x = 8 and x = 1/8
didn't think +/-3 just assumed +3 hence i got 1 value of x. Thank you again sir!
13. (Original post by raheem94)
For the last part, draw a diagram.
Here is the diagram:
As it is a right angled triangle, so applying Pythagoras theorem gives,
part (c) was the bit i couldn't do, "R is on the positive y axis" i thought this meant positive quadrant rather than literally on the axis but i understand now, thank you!
15. (Original post by jameshewitt)
We know the coordinates of P, Q and R.
Sub in P,
Sub in Q,
Sub in R,
(3) - (2) gives,
Sub 'a=2' in (1), you will get,
16. (Original post by raheem94)
We know the coordinates of P, Q and R.
Sub in P,
Sub in Q,
Sub in R,
(3) - (2) gives,
Sub 'a=2' in (1), you will get,
All makes sense now! Thank you, hopefully i'll remember to apply this on Thursday
17. (Original post by jameshewitt)
All makes sense now! Thank you, hopefully i'll remember to apply this on Thursday
Good luck for the exam
18. (Original post by MrJames16)
For the second question, the (ii):
4sinx=3tanx
For this question you have to know the trigonometric identity that tanx is equal to sinx/cosx. So we substitute tanx for sinx/cosx, which transforms the equation into:
4sinx = 3(sinx/cosx)
We then multiply both sides by cosx:
4sinx(cosx) = 3 sinx
There is sinx on both sides so we can cancel them out so the equation becomes:
4cosx = 3
Then we use our calculator to find the value of x so this becomes:
x = 41.4 (1 decimal place)
Then I think there are various ways to do this next step, but I use a CAST graph. I could draw it out if you want me to, but I'm assuming you know what this is/know what to do at this stage.
So the answer is 41.4 and 318.6 degrees.
If you have any queries, quote me.
Hey, this is how I've done it and now seeing your method I'm not sure if mine is right. Can you check mine?
4sin(x) = 3tan(x) which is the same as:
4sin(x) = 3sin(x) / cos(x)
Multiply by cos(x) which gives: 4sin(x)cos(x) = 3sin(x)
Subtract the 3sin(x), this gives: 4sin(x)cos(x) - 3sin(x) = 0
Simplify by collecting the sin's together: sin(x)(4cos(x)-3) = 0
Now use CAST or the graph method to work it out so..
Sin(x) = 0 is 0 so the answers for this are 0 degrees and 180 degrees (its also 360, -360 and -180 but these are outside the range they ask for).
Then solve this part of the equation: 4cos(x)-3 = 0
Cos(x) = 3/4 is 41.4 degrees and the answers that fit the range are 41.4 and 318.6.
So all my answers are: 0, 41.4, 180 and 318.6.
I understand why you divided the sin(x) as that's what I used to do until I looked at my revision book. So does anyone know which is correct?
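A compact way to see why the factoring approach is the safer one (dividing through by sin(x) silently discards its zeros):
4sin(x) = 3tan(x) is equivalent to sin(x)(4cos(x) - 3) = 0 (with cos(x) ≠ 0),
so either sin(x) = 0, giving x = 0 and 180 degrees, or cos(x) = 3/4, giving x = 41.4 and 318.6 degrees, for 0 ≤ x < 360.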
19. Guys you can't discuss the exam yet!!!!!!!
This was posted from The Student Room's iPhone/iPad App
20. (Original post by Bantersaurus Rexx)
Guys you can't discuss the exam yet!!!!!!!
This was posted from The Student Room's iPhone/iPad App
Huh? But they're discussing questions from past papers
This was posted from The Student Room's iPhone/iPad App
| {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8404417634010315, "perplexity": 1602.7532196981178}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
http://chemwiki.ucdavis.edu/Physical_Chemistry/Equilibria/Chemical_Equilibria/Principles_of_Chemical_Equilibria/Principles_of_Chemical_Equilibrium/Characteristics_Of_The_Equilibrium_State | # Characteristics Of The Equilibrium State
The chemical equilibrium state describes the concentrations of reactants and products in a reaction taking place in a closed system, which no longer change with time. In other words, the rate of the forward reaction equals the rate of the reverse reaction, such that the concentrations of reactants and products in the reaction remain stable. Equilibrium is denoted in a chemical equation by the symbol ⇌.
### Introduction
1. Equilibrium may only be obtained in a closed system
2. The rate of the forward reaction is equal to the rate of the reverse reaction.
3. Catalysts have no effect on the equilibrium point. However, changes in the concentrations of either the products or reactants, temperature, volume, or pressure can shift the equilibrium point. This is illustrated by Le Chatelier's Principle.
4. The consistency of observable or physical properties such as concentration, color, pressure, and density can indicate a reaction has reached equilibrium.
### Conditions for Equilibrium
The equilibrium state can only be reached if the chemical reaction takes place in a closed system. Otherwise, some of the products may escape, leading to the absence of a reverse reaction. (Note that in the diagrams under "Characteristics of Chemical Equilibrium," all reactions are in closed systems.)
### Meaning
When the concentrations of reactants and products have become constant, a reaction is said to have reached a point of equilibrium. The consistency of measurable properties such as concentration, color, pressure and density can show a state of equilibrium. The equilibrium state is said to be dynamic, meaning that the reaction is continuously in motion. This consistency, however, does not mean that the reactions have stopped, but rather that the rates of the two opposing reactions have become equal. The amounts of products and reactants remain consistent, and there is no net change.
For example, in this equation, the product, methanol (CH3OH), is being produced at the same rate as the reactants, CO and H2, in the reverse (backwards) reaction.
CO(g) + 2H2(g) ⇌ CH3OH(g)
A graphical representation of chemical equilibrium is shown in the graph below. At the beginning of the reaction, represented by the y-axis (or when time = 0), the system consists of pure reactants, so the rate of the forward reaction is at its highest while the rate of the reverse reaction is zero. As the reaction advances, reactants are converted to products and the forward rate falls; only once a large enough concentration of products is available does the reverse reaction become a factor, and its rate rises. It is at this point that we reach equilibrium, where the forward and reverse rates converge to the same value, forming the equilibrium state.
### Equilibrium Constant
Given a chemical reaction, aA + bB cC + dD, the equilibrium constant equation is expressed by the formula:
K = [products]/[reactants] = [C]^c[D]^d / ([A]^a[B]^b)
Note that the coefficients of the products/reactants appear as exponents on the concentrations.
Although the concentrations of the products and reactants may vary, the ratio of concentrations of products to the concentrations of reactants will remain constant. Therefore the value of K in an equilibrium state remains constant. To predict the direction of a reaction, look to the value of K, the equilibrium constant.
• When the K is large, (K >1), products are favored
• When K =1, neither side is favored
• When K is small, (K<1), reactants are favored.
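As a quick illustrative sketch (not from the original article; the function name and the concentrations are made up), here is how computing K and reading off the favored side might look for the methanol reaction above:

def equilibrium_constant(conc_products, conc_reactants):
    # K = product of [product]^coef divided by product of [reactant]^coef;
    # each argument is a list of (concentration, coefficient) pairs
    K = 1.0
    for conc, coef in conc_products:
        K *= conc ** coef
    for conc, coef in conc_reactants:
        K /= conc ** coef
    return K

# CO(g) + 2 H2(g) <=> CH3OH(g), with made-up equilibrium concentrations (M)
K = equilibrium_constant(conc_products=[(0.15, 1)],              # [CH3OH]^1
                         conc_reactants=[(0.30, 1), (0.20, 2)])  # [CO]^1 [H2]^2
print(K)  # ~12.5 > 1, so products are favored at these concentrations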
### Real Life use of Chemical Equilibrium
Hemoglobin is the protein in an individual's blood responsible for transporting oxygen to other cells. The following equation describes how the hemoglobin protein (Hb) binds four oxygen molecules, which your body then uses.
Hb(aq) + 4O2(g) ⇌ Hb(O2)4
An illustrated example of oxygen interacting with hemoglobin
As long as oxygen is available, a healthy equilibrium is maintained. However, at significant altitudes where air pressure is lower, such as the tops of mountains, there is less oxygen. According to Le Chatelier's principle, the equilibrium then shifts to the left, away from oxygenated hemoglobin. Therefore, someone lacking oxygen in their body's cells and tissues tends to feel light-headed.
### Problems
1. For the balanced chemical reaction below, write an equation for the equilibrium constant, K.
2H2(g) + N2(g) ⇌ N2H4(g)
2. When more products are added to the equation in Problem 1, in what direction will equilibrium shift?
3. True or False: A reaction is in a state of equilibrium when the equilibrium constant K is equal to 0.
4. For the balanced chemical reaction below, write an equation to find the equilibrium constant, K.
N2(g)+2O2(g)⇌2NO2(g)
5. Which of the following does not affect the equilibrium point of a reaction?
b. Increasing the temperature
c. Using a catalyst
d. Decreasing volume
Answers:
1. K = [N2H4] / ([H2]^2[N2])
2. Right - towards the products
3. False - a reaction is at equilibrium when the equilibrium constant remains constant
4. K = [NO2]^2 / ([N2][O2]^2)
5. (c)
### Contributors
• Preya Sheth (UCD) | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8567771315574646, "perplexity": 1354.1177783418857}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500829661.96/warc/CC-MAIN-20140820021349-00048-ip-10-180-136-8.ec2.internal.warc.gz"}
https://socratic.org/questions/what-is-the-derivative-formula-for-f-x-ln-g-x-1 | Calculus
Topics
# What is the derivative of f(x)=ln(g(x)) ?
Sep 24, 2014
The answer would be $f'(x) = \frac{1}{g(x)} \cdot g'(x)$, or it can be written as $f'(x) = \frac{g'(x)}{g(x)}$.
To solve this derivative you will need to follow the chain rule which states:
If $F(x) = f(g(x))$, then $F'(x) = f'(g(x)) \cdot g'(x)$.
Or, in words: it is the derivative of the outside (without changing the inside), times the derivative of the inside.
The derivative of $h(x) = \ln(x)$ is $h'(x) = \frac{1}{x}$.
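For example, with $g(x) = x^2 + 1$: $f(x) = \ln(x^2 + 1)$, so the rule gives $f'(x) = \frac{1}{x^2+1} \cdot 2x = \frac{2x}{x^2+1}$.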
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 6, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9653488397598267, "perplexity": 780.3715305372125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00111.warc.gz"}
https://www.mediatebc.com/micb8t9w/gravitational-unit-of-work-in-cgs-system-is-704cad | (a) Write the conversions of kg and m into g and cm, respectively: 1 kg = 10^3 g and 1 m = 10^2 cm.

The CGS system takes the centimetre, gram and second as its base units. It was formally introduced by the British Association for the Advancement of Science in 1874; Gauss had earlier chosen the millimetre, milligram and second. Its absolute units are the dyne for force (1 dyn = 1 g cm/s^2 = 10^-5 N; the name was first proposed in 1873 by a committee of the British Association), the erg for work (1 erg = 1 dyn cm = 1 g cm^2/s^2, so 1 J = 10^7 erg; the word erg derives from the Greek ergon, 'work' or 'task') and the Gal for acceleration (1 Gal = 1 cm/s^2). The SI unit of force is the newton (N), and the SI unit of work is the newton-metre, i.e. the joule, named for James P. Joule.

Gravitational units of force, in contrast to the absolute units above, are defined through the weight of a unit mass. In CGS the gravitational unit is the gram-force (g-wt or gf), the force that produces an acceleration of 980 cm/s^2 in a body of mass 1 g, so 1 gf = 980 dyn. In MKS it is the kilogram-force: 1 kgf, the force exerted by the earth on a body of mass 1 kg, equals 9.8 N. In the FPS (foot-pound-second) system the gravitational unit is the pound-force (lbf), with the poundal as the corresponding absolute unit.

Q. What is the value of the universal gravitational constant G in CGS units?
G = 6.67 × 10^-11 N m^2/kg^2 = 6.67 × 10^-11 kg^-1 m^3 s^-2. Since 1 kg = 10^3 g and 1 m = 10^2 cm, 1 kg^-1 m^3 s^-2 = (10^3)^-1 × (10^2)^3 g^-1 cm^3 s^-2 = 10^3 g^-1 cm^3 s^-2. Therefore G = 6.67 × 10^-8 g^-1 cm^3 s^-2 (dyn cm^2/g^2) in CGS units.

Q. The density of a material in CGS units is 4 g/cm^3. What is its value in a system whose unit of length is 10 cm and unit of mass is 100 g?
The unit of density in that system is 100 g / (10 cm)^3 = 0.1 g/cm^3, so the same density has the numerical value 4 / 0.1 = 40.

Gravitational potential at a point P is the work done per unit mass in bringing a test mass to P: V = W/m_0, a scalar quantity with SI unit J/kg. Likewise, electric potential is work done per unit charge: the SI unit is the volt (1 V = 1 J/C), and the CGS electrostatic unit is the statvolt (1 statvolt = 1 erg per statcoulomb/esu).
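As an aside not found on the original page, the conversions above can be checked numerically; this is an illustrative Python sketch using the standard factors quoted in the notes:

```python
# Convert G from SI (N m^2/kg^2 = kg^-1 m^3 s^-2) to CGS (g^-1 cm^3 s^-2)
G_SI = 6.67e-11          # N m^2 / kg^2
m_to_cm = 1e2            # 1 m  = 10^2 cm
kg_to_g = 1e3            # 1 kg = 10^3 g

G_CGS = G_SI * m_to_cm**3 / kg_to_g
print(G_CGS)             # 6.67e-08 (dyn cm^2/g^2)

# Gravitational units of force
print(1 * 9.8)           # 1 kgf in newtons
print(9.8e-3 / 1e-5)     # 1 gf in dynes: 9.8e-3 N / (1e-5 N per dyn) = 980

# CGS unit of work: 1 J = 10^7 erg
print(1 / 1e-7)          # ergs per joule

# Density of 4 g/cm^3 in units of 100 g and 10 cm: unit density = 100/10**3 g/cm^3
print(4 / (100 / 10**3)) # 40.0
```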
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9096179604530334, "perplexity": 2542.5157366730905}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585653.49/warc/CC-MAIN-20211023064718-20211023094718-00188.warc.gz"} |
https://www.rdocumentation.org/packages/Luminescence/versions/0.8.6/topics/convert_Activity2Concentration | # convert_Activity2Concentration
##### Convert Nuclide Activities to Concentrations and Vice Versa
The function converts the specific activities of the nuclides U-238, Th-232 and K-40 into concentrations, and vice versa, to harmonise the measurement unit with the input unit required by downstream analytical tools, e.g. dose rate calculation or related functions such as use_DRAC.
Keywords
IO
##### Usage
convert_Activity2Concentration(data, input_unit = "Bq/kg",
verbose = TRUE)
##### Arguments
data
data.frame (required): dose rate data (activity or concentration) in three columns: the first column gives the nuclide, the second the measured value, and the third its error. Allowed nuclides are 'U-238', 'Th-232' and 'K-40'. See the examples below.
input_unit
character (with default): unit of the input data given in the dose rate data frame; choose between 'Bq/kg' and 'ppm/%'. The default is 'Bq/kg'.
verbose
logical (with default): enable or disable verbose mode
##### Details
The conversion from nuclide activity of a sample to nuclide concentration is performed using conversion factors that are based on the mass-related specific activity of the respective nuclides. The factors can be calculated using the equation:
$$A = avogadronumber * N.freq / N.mol.mass * ln(2) / N.half.life$$
$$f = A / 10^6$$
where:
• A - specific activity of the nuclide
• N.freq - natural isotopic abundance of the nuclide
• N.mol.mass - molar mass
• N.half.life - half-life of the nuclide
Example for U-238 (a numerical check follows this list):
• $$avogadronumber = 6.02214199*10^23$$
• $$uran.half.life = 1.41*10^17$$ (in s)
• $$uran.mol.mass = 0.23802891$$ (in kg/mol)
• $$uran.freq = 0.992745$$ (natural isotopic abundance, as a mole fraction)
• $$A.U = avogadronumber * uran.freq / uran.mol.mass * ln(2) / uran.half.life$$ (specific activity in Bq/kg)
• $$f.U = A.U / 10^6$$
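As a numerical check — an illustrative sketch, not part of the package itself — the same calculation in Python, with the values listed above, gives the specific activity of U-238 and the resulting Bq/kg-per-ppm factor:

```python
import math

# Values from the Details section above
avogadronumber = 6.02214199e23
uran_half_life = 1.41e17       # s
uran_mol_mass = 0.23802891     # kg/mol
uran_freq = 0.992745           # natural isotopic abundance of U-238

# Specific activity in Bq/kg, then the conversion factor Bq/kg per ppm
A_U = avogadronumber * uran_freq / uran_mol_mass * math.log(2) / uran_half_life
f_U = A_U / 1e6

print(A_U)   # ~1.235e7 Bq/kg
print(f_U)   # ~12.35 Bq/kg per ppm of U-238
```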
##### Function version
0.1.0 (2018-01-21 17:22:38)
##### How to cite
Fuchs, M.C. (2018). convert_Activity2Concentration(): Convert Nuclide Activities to Concentrations and Vice Versa. Function version 0.1.0. In: Kreutzer, S., Burow, C., Dietze, M., Fuchs, M.C., Schmidt, C., Fischer, M., Friedrich, J. (2018). Luminescence: Comprehensive Luminescence Dating Data Analysis. R package version 0.8.6. https://CRAN.R-project.org/package=Luminescence
##### References
Debertin, K., Helmer, R.G., 1988. Gamma- and X-ray Spectrometry with Semiconductor Detectors, Elsevier Science Publishers, p.283
##### Aliases
• convert_Activity2Concentration
##### Examples
# NOT RUN {
## construct data.frame
data <- data.frame(
  NUCLIDES = c("U-238", "Th-232", "K-40"),
  VALUE = c(40, 80, 100),
  VALUE_ERROR = c(4, 8, 10),
  stringsAsFactors = FALSE)

## perform analysis
convert_Activity2Concentration(data)
# }
Documentation reproduced from package Luminescence, version 0.8.6, License: GPL-3
http://www.dummies.com/how-to/content/how-to-calculate-rotational-work.navId-407029.html | In physics, one major player in the linear-force game is work; in equation form, work equals force times distance, or W = Fs. Work has a rotational analog. To relate a linear force acting for a certain distance with the idea of rotational work, you relate force to torque (its angular equivalent) and distance to angle.
When force moves an object through a distance, work is done on the object. Similarly, when a torque rotates an object through an angle, work is done. In this example, you work out how much work is done when you rotate a wheel by pulling a string attached to the wheel’s outside edge (see the figure).
[Figure: Exerting a force to turn a tire.]
Work is the amount of force applied to an object multiplied by the distance it’s applied. In this case, a force F is applied with the string. Bingo! The string lets you make the handy transition between linear and rotational work. So how much work is done? Use the following equation:
W = Fs
where s is the distance over which the person pulling the string applies the force. In this case, the distance s equals the radius multiplied by the angle through which the wheel turns,

s = rθ

so you get

W = Fs = Frθ

Because the string pulls on the wheel's rim, the force is applied at right angles to the radius, so the product Fr is simply the torque. So you're left with

W = τθ

When the string is pulled, applying a constant torque τ that turns the wheel through an angle θ, the work done equals τθ.
This makes sense, because linear work is Fs, and to convert to rotational work, you convert from force to torque and from distance to angle. The units here are the standard units for work — joules in the MKS (meter-kilogram-second) system.
You have to give the angle in radians for the conversion between linear work and rotational work to come out right.
Say that you have a plane that uses propellers, and you want to determine how much work the plane's engine does on a propeller when applying a constant torque of 600 newton-meters over 100 revolutions. You start with the work equation in terms of torque:

W = τθ

Here θ = 100 revolutions × 2π radians per revolution ≈ 628 radians. Plugging the numbers into the equation gives you the work:

W = (600 N·m)(628 rad) ≈ 3.8 × 10⁵ J
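The same computation as a short, illustrative Python sketch (not from the original article):

```python
import math

torque = 600.0             # constant torque, N·m
theta = 100 * 2 * math.pi  # 100 revolutions expressed in radians

work = torque * theta      # rotational work W = τθ
print(work)                # ~3.77e5 J
```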
http://www.comp.leeds.ac.uk/vision/opencv/install-win.html | Installing OpenCV Under Windows
Windows users need only run the executable to get access to the libraries from within Windows; just make sure you tick the check box that adds the OpenCV bin directory to the system PATH during installation.
OpenCV for Windows assumes that you have Borland C++, Eclipse or MS Visual Studio (either .NET or version 6.0) installed. All of them take the same parameters for the libraries but the option panes to enter these are, naturally, in different places for each program.
There's an excellent wiki explaining how to set these up. However, the necessary settings can take a while to find, so I've put them here for ease of use.
Settings
Assuming you have installed OpenCV into C:/Program Files/OpenCV, the settings you require are:
Additional Include Directories:
"C:\Program Files\OpenCV\cxcore\include"
"C:\Program Files\OpenCV\cv\include"
"C:\Program Files\OpenCV\cvaux\include"
"C:\Program Files\OpenCV\otherlibs\highgui"
"C:\Program Files\OpenCV\otherlibs\cvcam\include"

Additional Library Directories:
"C:\Program Files\OpenCV\lib"

Additional Dependencies (libraries):
cv.lib cxcore.lib highgui.lib
If you get the message "This application has failed to start because cv099.dll was not found" then there are two possible reasons. Either you simply haven't restarted since installing OpenCV (and need to do so before it'll work) or your PATH settings are incorrect. This usually happens if you weren't the user who installed OpenCV.
To change your PATH settings, right-click 'My Computer', click 'Properties' and look at the 'Advanced' tab. There you will see a button marked 'Environment Variables' - click it. In the window that then appears, look for the PATH option in the lower list of the two presented. Find the directory containing cv099.dll (if you don't know it already) and add that folder's location after the last entry in PATH, usually separated using a semicolon (;).
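If you'd rather check programmatically whether the DLL's folder is on PATH, a small script works. This is an illustrative sketch; the directory shown assumes the default install location mentioned above:

```python
import os

def dir_on_path(directory):
    """Return True if `directory` appears in the PATH environment variable."""
    norm = os.path.normcase(os.path.normpath(directory))
    return any(os.path.normcase(os.path.normpath(p)) == norm
               for p in os.environ.get("PATH", "").split(os.pathsep) if p)

print(dir_on_path(r"C:\Program Files\OpenCV\bin"))
```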