url
stringlengths 14
2.42k
| text
stringlengths 100
1.02M
| date
stringlengths 19
19
| metadata
stringlengths 1.06k
1.1k
|
|---|---|---|---|
https://math.stackexchange.com/questions/116446/random-walk-on-n-cycle
|
Random walk on $n$-cycle
For a graph $G$, let $W$ be the (random) vertex occupied at the first time the random walk has visited every vertex. That is, $W$ is the last new vertex to be visited by the random walk. Prove the following remarkable fact:
For the random walk on an $n$-cycle, $W$ is uniformly distributed over all vertices different from the starting vertex.
• A good start would be to reformulate the claim to be about an ordinary random walk on $\mathbb Z$. The claim is then that at the first time $n-1$ different nodes have been visited, the number $u$ of visited nodes to the right of the starting point is uniformly distributed between $0$ and $n-2$. In this form it looks like it should be amenable to an induction proof, if you strengthen it to say something about the probability that the rightmost node (and not the leftmost one) was the last one visited, as a function of $n$ and $u$. Mar 4 '12 at 21:54
• I wish I could accept two answers. I am very grateful to both of you and I wanted to express my gratitude by accepting both answers. And Didier Piau's answer made me realize that your elegant solution requires a notable amount of mathematical maturity, that I may lack. On the other hand, your soultion is elegant indeed. You made me think, I think I change the accepted answer again :) Mar 6 '12 at 5:33
• In other words, you accept a solution because (people tell you) it is elegant although you do not understand how it works nor why it is true. O well. (To be clear: PLEASE do not change again.)
– Did
Mar 6 '12 at 6:07
• Note that while you automatically get notified of comments under your post, others that you respond to (me in this case) don't get notified unless you ping them using the @username idiom. Mar 6 '12 at 11:27
• @DidierPiau I am afraid you are right. I would like to emphasize that I am very grateful for you clear, detailed and precise answer. You helped me a lot! Thank you so much and sorry for my strange acceptation choice. Mar 6 '12 at 18:25
Consider a simple symmetric random walk on the integer line starting from $0$ and, for some integers $-a\leqslant 0\leqslant b$ such that $(a,b)\ne(0,0)$, the event that the walk visits every vertex in $[-a,b]$ before visiting vertex $-a-1$ or vertex $b+1$. This is the disjoint union of two events:
• Event 1: Starting from $0$, the walk visits $b$ before visiting $-a$, then, starting from $b$, it visits $-a$ before visiting $b+1$,
• Event 2: Starting from $0$, the walk visits $-a$ before hitting $b$, then, starting from $-a$, it visits $b$ before hitting $-a-1$.
Recall that the probability that a simple symmetric random walk starting from $i$ visits $i-j\leqslant i$ before visiting $i+k\geqslant i$ is $\frac{k}{k+j}$, for every nonnegative integers $j$ and $k$.
Hence, the probability of Event 1 is $\frac{a}{a+b}\cdot\frac1{a+b+1}$, the probability of Event 2 is $\frac{b}{a+b}\cdot\frac1{a+b+1}$, and the probability of their union is $\frac1{a+b+1}$. Note that this last formula is also valid when $a=b=0$.
If $b=x-1$ and $a=n-x-1$ with $1\leqslant x\leqslant n-1$ and $n\geqslant2$, then $a+b+1=n-1$ hence the computation above shows that the probability that the last visited vertex in the discrete circle $\{0,1,\ldots,n-1\}$ is $x$ is $\frac1{a+b+1}=\frac1{n-1}$. That is, the probability of the event $[W=x]$ is $\frac1{n-1}$ for each $x\ne0$ in the circle, and $W$ is uniformly distributed on the circle minus the starting point of the random walk.
• Usually it tends to be you who finds the more elegant solutions that explain the unexpectedly simple result without undue calculation; this time it's the other way around :-) Mar 5 '12 at 9:19
• @joriki: Yes. But I know as an experimental fact that a rigorous justification of each step of the elegant solution requires a notable amount of mathematical maturity.
– Did
Mar 5 '12 at 17:56
• Didier, I hope my comment in connection with my curiosity about the change of the accepted answer and the subsequent re-change didn't create a "competitive" impression -- I was just pleased to find a solution involving less calculation than yours because it's more often the other way around :-) By the way, I still owe you an answer to an earlier comment -- yes, I do stay up during the night a lot, but I'm trying to cut down on that :-) Mar 6 '12 at 11:32
• @joriki: You certainly do not have to worry about this (and your answer is excellent, naturally). You already know this but let me say it nevertheless: first, I am often puzzled by the acceptation choices on MSE; second, this puzzlement does not concern you as an answerer: you proposed a (mathematically sophisticated) solution, it got accepted hence it can only mean the OP is happy, everything is fine. (Unrelated: in my experience, staying up at night to do maths is a gambit which is difficult to refuse but is often lost, in the long run....)
– Did
Mar 6 '12 at 13:25
• Is there a way to come up with recursive equations for this problem (the closed form of which gives the answer) ?
– emmy
Mar 2 '18 at 20:16
In order to reach $W$ last, the walk has to visit one of $W$'s neighbours for the last time and then go all around the cycle to arrive at $W$ from the other side. Let's call a segment of a random walk on the cycle that starts at some vertex $V$ and reaches one of $V$'s neighbours by going around the cycle without returning to $V$ a final segment. Then the last vertex reached after time $t$ is the final vertex of the first final segment that begins at or after $t$. Consider a random walk on the cycle, and for every final segment that ends at $W$, consider the stretch of times $t$ for which it is the first final segment that begins at or after $t$. If we can show that all vertices except $W$ occur with the same frequency in this stretch, then it will follow that conversely $W$ is reached last with the same probability from all other vertices, and thus all vertices $W$ are reached last with the same probability from a given initial vertex.
But the stretch extends precisely up to the last visit to $W$ before the segment, so the frequency of vertices in it is just that between any two successive visits to $W$, which is just the frequency of occurrence of the vertices other than $W$ in the walk in general, which is the same for all vertices.
P.S.: It's actually not too difficult to determine the probability of each vertex to be the last vertex visited at any stage in the process. The vertices already visited always form an interval. If the current position is at the end of an interval of $k$ visited vertices, every unvisited vertex has the same $1/(n-1)$ probability of becoming the last one, except the first unvisited vertex at the other end of the interval, which has $k/(n-1)$. This is because for all vertices except this one, exactly the same realizations of the walk will make them the last vertex as would be the case if no vertices had been visited yet. Thus, every vertex has a constant probability $1/(n-1)$ of ending up as the last vertex until the walk first visits one of its neighbours.
If the current position is in the interior of the interval of visited vertices, the probability of reaching one end of the interval before the other varies linearly over the interval, and thus so do the probabilities of the two unvisited vertices bordering the interval to become the last vertex – the sum of their probabilities is $(k+1)/(n-1)$, and this shifts by $1/(n-1)$ by each move, in favour of the vertex that the move moves away from.
• Comments are not for extended discussion; this conversation has been moved to chat. Jan 7 '20 at 21:54
This question is rephrased as a game and can be analyzed with no computation!
A class of 30 children is playing a game where they all stand in a circle along with their teacher. The teacher is holding two things: a coin and a potato. The game progresses like this: The teacher tosses the coin. Whoever holds the potato passes it to the left if the coin comes up heads and to the right if the coin comes up tails. The game ends when every child except one has held the potato, and the one who hasn't is declared the winner.
How do a child's chances of winning change depending on where they are in the circle? In other words, what is each child's win probability?
It is equivalent to show that all students have equal probability of winning. To this end, consider the students to the left and the right of the teacher. Call them the teacher's pets. Both teacher's pets have equal probability of winning by symmetry.
Diagram 1: The teacher is blue, the teacher's pets are green, and the potato is a lumpy yellow cloud. Small white circles are the other students.
Now consider any student who hasn't lost yet. We can show he has the same chance of winning as a teacher's pet. Call this student "Purple".
Diagram 2: Purple is a student who hasn't lost yet.
With $$100 \%$$ certainty, the potato will arrive to Purple's left or to his right eventually. Consider the first time this happens.
Diagram 3: The potato touches one of his neighbors for the first time. Red players have lost already.
In that situation, he use the power of imagination! He can image that he himself is a teacher's pet since his situation has become identical to the starting condition, and therefore his probability of winning is the same as that of the teacher's pets.
Diagram 4: Purple imagines that he is a teacher's pet. The arrow should be understood to mean "imagines himself as".
• This is an excellent intuitive solution! I think it would get more (deserved) attention if the key step was explained better: you're really using a coupling between two copies of the process, one started from the teacher and the other started next to purple. Feb 15 '19 at 10:39
• Thank you! I'll make an attempt to incorporate your suggestion. There really should be a way to depict it that requires no text at all.
– Mark
Feb 15 '19 at 13:58
• @JRichey I am not completely clear how to use the language of coupling to express the result. Maybe you can submit an answer?
– Mark
Feb 23 '19 at 23:30
• Wow. This is amazing! Aug 20 '19 at 19:36
• @Mark: I'm aware that the teacher and the losing players are allowed to touch the potato. But the potato can't travel to the right-hand neighbour through the purple player. I'm concentrating on the case where the purple player and her right-hand neighbour are the only two players left. In that case, the game ends once the potato reaches either the purple player or the right-hand neighbour. Thus, the potato is never passed from either to the other, and hence the walk reduces to a linear walk with these two at the boundaries. In this case the potato is more likely to reach the closer player. Jan 4 '20 at 21:37
|
2022-01-16 18:24:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.694652259349823, "perplexity": 291.2789850742429}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300010.26/warc/CC-MAIN-20220116180715-20220116210715-00617.warc.gz"}
|
https://quantumcomputing.stackexchange.com/questions/16815/does-toffoli-and-conjugate-affect-superposition-if-used-in-shors-algorithm
|
# Does Toffoli AND conjugate affect superposition if used in Shor's algorithm?
I have come across several papers that use Toffoli AND conjugate to minimize the T-depth. But since it contains a measurement, does it affect Shor's algorithm (in terms of interference, entanglement, or superposition), when used within the reversed modular multiplication circuit, such as in Vedral et al's implementation?
Toffoli AND conjugate (d), source.
Vedral et al's modular exponentiation circuit, source: Nakahara et al's book: Quantum Computing - From Linear Algebra to Physical Realizations:
|
2021-06-20 19:03:12
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9318439960479736, "perplexity": 3275.487215468646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488253106.51/warc/CC-MAIN-20210620175043-20210620205043-00331.warc.gz"}
|
http://math.stackexchange.com/questions/210619/topological-equivalence-between-pi-2-pi-2-to-r?answertab=active
|
# Topological Equivalence between $(- \pi/2, \pi/2)$ to $R$
I know that the key is to use $\tan$ and $\arctan$ to do it. Take any $(a,b) \subset \mathbb{R}$, $(a,b)$ is open. Now I want to show $\tan^{-1}(a,b)$ is open. I need a hint for the next step (just a hint suffices) Thanks.
-
Well once I wrote the question I was thinking the tangent and arctan function are inverses and the arctan should map an open interval into an open interval since they are continuous. – Daniel Oct 10 '12 at 19:04
It's not clear what you're allowed to take on faith for solving this exercise. Proving that $\tan$ and $\arctan$ are continuous from scratch won't be simple. I imagine your book expects you to just quote this fact from a real analysis/calculus course you've previously taken. – user29743 Oct 10 '12 at 19:28
@jsk to prove that $f^{-1}(O)$ is an open in $\mathbb{R}$, whit $O$ is an open in $\mathbb{R}$, you should notice that $f^{-1}$ ought to be continous. then prove that $f^{-1}(O)$ is included in an open set in $\mathbb{R}$. Since we are dealing whit a real valued function here. – Mohamez Oct 10 '12 at 19:46
@countinghaus yes I was thinking about the continuity of $\tan$ and $arctan$ too. I think I will take it as granted they are continuous since I have never actually proved the continuity of trigonometric functions before – Daniel Oct 10 '12 at 21:35
The topological equivalence follows directly from the continuity of $\tan$ and $\arctan$? (well I think so...) – Daniel Oct 10 '12 at 19:10
|
2014-12-18 16:23:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8932662010192871, "perplexity": 221.2165600211222}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802767274.159/warc/CC-MAIN-20141217075247-00173-ip-10-231-17-201.ec2.internal.warc.gz"}
|
https://socratic.org/questions/597b837111ef6b118b28d6be
|
# Which solution would give the largest pH? a. HCl(aq) b. NaHSO_4(aq) c. HBr(aq) d. HNO_3(aq)
Jul 28, 2017
$\text{Option B...........}$
#### Explanation:
Sodium carbonate gives a basic solution, according to the equilibrium......
$C {O}_{3}^{2 -} + {H}_{2} O \left(l\right) r i g h t \le f t h a r p \infty n s H C {O}_{3}^{-} + H {O}^{-}$
$p {K}_{b} = 3.57$..........
Sodium chloride is does not cause hydrolysis; on the other hand, sodium bisulfate is a moderately strong acid......
$H S {O}_{4}^{-} + {H}_{2} O \left(l\right) r i g h t \le f t h a r p \infty n s S {O}_{4}^{2 -} + {H}_{3} {O}^{+}$
|
2020-09-29 17:33:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 4, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7409272789955139, "perplexity": 14416.536914590308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400202418.22/warc/CC-MAIN-20200929154729-20200929184729-00393.warc.gz"}
|
http://mzsystems.net/2x868ff6/cosine-similarity-vs-cosine-distance-a8826b
|
This video is related to finding the similarity between the users. distance function should become larger as elements become less similar; since maximal value of cosine is 1, we can define cosine distance as String formatting: % vs. .format vs. string literal, Pythonic way to create a long multi-line string. The cosine similarity is beneficial because even if the two similar data objects are far apart by the Euclidean distance because of the size, they could still have a smaller angle between them. Cosine similarity ranges from 0 to 1, where 1 means the two vectors are perfectly similar. Read more in the User Guide. We can therefore compute the score for each pair of nodes once. Why did postal voting favour Joe Biden so much? What is the role of a permanent lector at a Traditional Latin Mass? When to use cosine similarity over Euclidean similarity. The cosine of 0° is 1, and it is less than 1 for any angle in the interval (0, π] radians. Did I make a mistake in being too honest in the PhD interview? Cosine similarity looks at the angle between two vectors, euclidian similarity at the distance between two points. Not the cosine distance! Conclusion : I hope by now you have clear understanding of the math behind the computation of cosine similarity and Cosine Distance and its usage. It is also not a proper distance in that the Schwartz inequality does not hold. What do you think the cosine similarity would be between b and c? I am given a csv with three columns, user_id, book_id, rating. While cosine looks at the angle between vectors (thus not taking into regard their weight or magnitude), euclidean distance is similar to using a ruler to actually measure the distance. Formula to find the Cosine Similarity and Distance is as below: Here A=Point P1,B=Point P2 (in our example). Similarity increases when distance between two vectors decreases. Data Structures 101: What Is a Binary Search Tree? The relation between cosine similarity and cosine distance can be define as below. Euclidian Distance vs Cosine Similarity for Recommendations. Copy link pranavnijampurkar33 commented Oct 22, 2020. Why is “1000000000000000 in range(1000000000000001)” so fast in Python 3? Cosine distance is 1-. Cosine similarity vs Euclidean distance. How does SQL Server process DELETE WHERE EXISTS (SELECT 1 FROM TABLE)? Stack Overflow for Teams is a private, secure spot for you and Terminology a bit confusing. What are the differences between type() and isinstance()? We acquired 354 distinct application pages from a star schema page dimension representing application pages. Y1LABEL Angular Cosine Similarity TITLE Angular Cosine Similarity (Sepal Length and Sepal Width) ANGULAR COSINE SIMILARITY PLOT Y1 Y2 X . In NLP, this might help us still detect that a much longer document has the same “theme” as a much shorter document since we don’t worry about the magnitude or the “length” of the documents themselves. Why does the U.S. have much higher litigation cost than other countries? Cosine Similarity. END OF MULTIPLOT JUSTIFICATION CENTER MOVE 50 98 TEXT Distance/Similarity Measures (IRIS.DAT) Do rockets leave launch pad at full thrust? Coding using R (Euclidean distance is also covered) Dataset and R code in … To subscribe to this RSS feed, copy and paste this URL into your RSS reader. Cosine similarity is a measure of similarity between two non-zero vectors of a n inner product space that measures the cosine of the angle between them. The cosine similarity is a measure of similary between two vectors. The relation between cosine similarity and cosine distance can be define as below. Thanks for contributing an answer to Stack Overflow! @WarrenWeckesser, thank you, I fixed the name. Yeah, does not make sense to change it now. If you pass a distance matrix it will be O(n²). Cosine similarity works in these usecases because we ignore magnitude and focus solely on orientation. Cosine similarity works in these usecases because we ignore magnitude and focus solely on orientation. The Cosine Similarity procedure computes similarity between all pairs of items. Cosine similarity is a measure of similarity between two non-zero vectors of an inner product space. How to calculate NFL passer rating using a formula in Excel or Google Sheets, Similarity decreases when distance between two vectors increases. How do the material components of Heat Metal work? \ $If you try this with fixed precision numbers, the left side loses precision but the right side does not. Not the cosine distance! sklearn.metrics.pairwise.cosine_similarity which is. Cosine distance is 1-. Lets pass these values of each angles discussed above and see the Cosine Distance between two points. Good question but yes, these are 2 different things but connected by the following equation: Usually, people use the cosine similarity as a similarity metric between vectors. Making statements based on opinion; back them up with references or personal experience. We selected only the first 10 pages out of the google search result for this experiment. Let's say you are in an e-commerce setting and you want to compare users for product recommendations: User 1 … table 2 and figure 1 it is clearly visible that best fitness values were obtained using the Cosine similarity coefficients followed by Dice and Jaccard. Similarly you can define the cosine distance for the resulting similarity value range. From there I just needed to pull out recommendations from a given artist’s list of songs. Parameters X {array-like, sparse matrix} of shape (n_samples_X, n_features) Matrix X. Cosine Similarity adalah 'ukuran kesamaan', salah satu implementasinya adalah pada kasus mencari tingkat kemiripan teks. your coworkers to find and share information. Similarly you can define the cosine distance for the resulting similarity value range. Difference between Cosine similarity and Euclidean Distance 4. Viewed 849 times 2$\begingroup$As an example, let's say I have a very simple data set. The coefficient of the model is -6 for WMD which makes sense as the documents are similar when the WMD is small, and 9.2 for cosine similarity which also … The name derives from the term "direction cosine": in this case, unit vectors are maximally "similar" if they're parallel and maximally "dissimilar" if they're orthogonal (perpendicular). Cosine similarity range: −1 meaning exactly opposite, 1 meaning exactly the … The problem with the cosine is that when the angle between two vectors is small, the cosine of the angle is very close to$1$and you lose precision. normalization natural-language euclidean cosine-distance cosine-similarity. Short answer: Cosine distance is not the overall best performing distance metric out there Although similarity measures are often expressed using a distance metric , it is in fact a more flexible measure as it is not required to be symmetric or fulfill the triangle inequality. share | cite | improve this question | follow | edited Feb 26 '16 at 22:49. ttnphns. Cosine Similarity. In the experiment, it compute the distance between each vectors. You can consider 1-cosine as distance. So I'd rather try metric="cosine".. DBSCAN can trivially be implemented with a similarity rather than a distance (c.f. In cosine similarity, data objects in a dataset are treated as a vector. Product space how does SQL Server process DELETE where EXISTS ( SELECT 1 from TABLE ) yeah, not... Of Euclidean distance similarity search result for this is being extended in the PhD interview term vectors. Correctly interpreting cosine Angular distance PLOT Y1 Y2 X Overflow to learn share... Treated as a metric, helpful in determining, how similar the data all... Our terms of service, privacy policy and cookie policy difference between Python 's list methods append and extend simply... Mean for a precise calculation of efficiency a mistake in being too honest in PhD. Do you think the cosine distances of one document ( e.g you agree our. User_Id, book_id, rating to create a long multi-line string, euclidian similarity the... Implementasinya adalah pada kasus mencari tingkat kemiripan teks the code below Y2 X application pages also. To this RSS feed, copy and paste this URL into your RSS reader Excel google... Into your RSS reader to recommend products to the users video is related to the... Cosine similarity PLOT Y1 Y2 X about the angle between two points create a multi-line... And Sepal Width ) Angular cosine distance between two vectors and not the distance between points... A dataset are treated as a vector and share information ( c.f values of angles... Very simple data set actual data, the code below create a long multi-line string parameters X {,. Is generally used as a metric used to measure the distance between vectors! List of songs distance matrix it will be O ( n² ) procedure computes similarity between the vectors does hold. Same, 0 indicating orthogonality which is dataset through the code could an! Allow arbitrary Length input let ’ s list of songs of the angle between a and gives! Measure how similar the data about all application pages 22:49. ttnphns the of! Similarity between two sequences example, let ’ s say we have 2 vectors, euclidian similarity the! Exchange Inc ; user contributions licensed under cc by-sa of one document (.! Similarity & Euclidean distance similarity n_features ) matrix X non-zero vectors of inner. Intuitively, let ’ s another vector c in the future research for 30-35 pages for precise! The code could use an index to make it faster than this$ as example... A game term '' a distance matrix it will be O n². Scipy sparse matrix API is a measure of similary between two points between a and B 3! Is filled by the term frequency vectors of word or phrase to be a game! Mathematically, it Measures the cosine distance for the resulting similarity value range 1 from TABLE ) coworkers. Similarity for recommendations similarity rather than a distance ( Sepal Length and Width! Indicating orthogonality or personal experience 1000000000000001 ) ” so fast in Python using cosine similarity is used... Systems to recommend products to the users is heavily used in recommendation systems to recommend products to the based! N-Dimensional numpy arrays ) similarity says that to find the similarity between two sentences in Python cosine... Result for this is being extended in the future research for 30-35 pages for a precise calculation of.! Length input 2 \sin^2 ( x/2 ) because we ignore magnitude and focus solely on orientation TABLE ) hope like. Vs. cosine distance, this is a measure of similarity between two vectors increases similarity at the angle between and... The term frequency vectors of an inner product space your query about removing function words etc favour Joe Biden much! Answer ”, you can define the cosine distance is a metric for measuring distance the... Works in these usecases because we ignore magnitude and focus solely on orientation does not hold similarity that! Compared to more basic cards 434 434 bronze badges end of MULTIPLOT JUSTIFICATION CENTER MOVE 50 98 text Distance/Similarity (. To pull out recommendations from a given artist ’ s another vector c in the PhD interview so in... Measure how similar the data objects are irrespective of their size create long... There I just needed to pull out recommendations from a given artist ’ s another c... When to use cosine you, I fixed the name measurement, whereas, with Euclidean, agree... And isinstance ( ) ( X ) = 2 \sin^2 ( x/2 ) I understand cosine similarity cosine., 1 meaning exactly the same, 0 indicating orthogonality vectors corresponds to their dot divided. Distance PLOT Y1 Y2 X similarity measure for k-means clustering means the two vectors projected in a data Webhouse the. Our tips on writing great answers in being too honest in the direction of B metric helpful... Sparse matrix API is a 2D measurement, whereas, with Euclidean, you can add up the... Implementasinya adalah pada kasus mencari tingkat kemiripan teks A=Point P1, B=Point P2 ( in our example ) can be! Months ago in a data Webhouse Excel or google Sheets, similarity decreases when between! Usecases because we ignore magnitude and focus solely on orientation end of MULTIPLOT JUSTIFICATION CENTER MOVE 50 98 Distance/Similarity... Distinct application pages from a given artist ’ s another vector c in future... Data about all application pages is also stored in a data Webhouse flexible as dense N-dimensional arrays. * when to use cosine distance hanya ditentukan untuk nilai positif Jika nilai negatif ditemui dalam,... ( d ) and isinstance ( ), helpful in determining, how similar the documents are of... Loses precision but the right side does not the right side does hold... Similarity ( θ ) also not a proper distance in that the ozone layer had holes it... Is related to finding the similarity between documents or vectors, 2020. calculation of of. Vector is filled with random values or vectors we need to measure distance... Function words etc learn more, see our tips on writing great answers my article.Please hit Clap ( 50 )... We often come across the concept of cosine of the google search result for this is a weird! Left side loses precision but the right side does not matter the term frequency vectors word! T compute the similarity between the vectors does not matter euclidian distance vs cosine similarity for recommendations spot for and! Tidak akan dihitung an example, let ’ s list of songs cosine similarity vs cosine distance by the product their! Use Euclidean distance ( c.f process DELETE where EXISTS ( SELECT 1 from TABLE ) text documents is my that... Share knowledge, and build your career many more \sin^2 ( x/2 ) sparse matrix } of shape n_samples_X... A Traditional Latin Mass \sin^2 ( x/2 ) 22:49. ttnphns PLOT Y1 Y2 X function words etc out...: % vs..format vs. string literal, Pythonic way to create a long multi-line string given a with. The concept of cosine of the vectors out recommendations from a given artist ’ s list of.... Is provably non-manipulated acquired 354 distinct application pages is also not a proper in! Vs..format vs. string literal, Pythonic cosine similarity vs cosine distance to create a long multi-line string difference between Python list! Like cosine distance for the resulting similarity value range numbers, the left side loses but... Make a mistake in being too honest in the PhD interview type ( ) isinstance... That by normalising my original dataset through the code below lead to increased discretionary compared! As 1-cos_similarity positif Jika nilai negatif ditemui dalam input, jarak cosinus tidak akan dihitung these. Representing a sentence says that to find angle between a and B gives us similarity. Angle between the users where I have a very simple data set across the concept of of. Experiment, it can be define as below a precise calculation of cosine similarity formula in Excel google... Stack Overflow for Teams is a visual representation of Euclidean distance instead satu implementasinya adalah pada mencari... Similarity at the angle between two vectors corresponds to their dot product divided by the term frequency of. Levenshtein distance is as below: here A=Point P1, B=Point P2 ( in our cosine similarity vs cosine distance ) the between! Distance between cosine similarity vs cosine distance vectors just noticed your query about removing function words etc in these usecases because we magnitude! Warrenweckesser, thank you, I fixed the name privacy policy and cookie policy English from 1500s... For this is \$ \ 1 - \cos ( X ) = 2 \sin^2 ( x/2 ) text.! A visual representation of cosine similarity vs cosine distance distance ( Sepal Length and Sepal Width ) cosine Angular PLOT. 2 vectors, each representing a cosine similarity vs cosine distance projected in a multi-dimensional space { array-like, sparse matrix } shape. Treated as a metric for measuring the difference between Python 's list append... Similarity ranges from 0 to 1, where 1 means the two vectors.. And cookie policy ( SELECT 1 from TABLE ) vectors are perfectly similar matrix it will be O n²... A metric, helpful in determining, how similar the data about all pages. Book_Id, rating what do you think the cosine similarity is generally used as a vector phrase be... Young girl meeting Odin, the Oracle, Loki and many more euclidian distance vs cosine cares! A Binary search Tree this video is related to finding the similarity of items how calculate! Heavily used in recommendation systems to recommend products to the users stored in a Webhouse! Words etc vectors of word or sequence of X characters in text documents I 'd rather try metric= cosine... Be defined as 1-cos_similarity it take so long to notice that the ozone had... Do card bonuses lead to increased discretionary spending compared to more basic cards, data in! String formatting: % vs..format vs. string literal, Pythonic way to create a long multi-line string also a. | cite | improve this question | follow | edited Feb 26 '16 22:49....
|
2021-06-21 15:38:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5994688868522644, "perplexity": 2219.1906822450833}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488286726.71/warc/CC-MAIN-20210621151134-20210621181134-00552.warc.gz"}
|
http://www.physicsforums.com/showthread.php?t=159368
|
# frequency of small oscillations
by LHarriger
Tags: frequency, oscillations
P: 69 Does anyone know where I can get some information on how you can relate the frequency of small oscillations to the second derivative of potential energy. I saw this done recently in a qualifying exam level problem but I do not remember learning this method and it is not in my classical dynamics book. See below if you want a more extensive context for this question. I solved a problem recently where you were given two masses m and M connected by a string. The first mass was set rotating on a frictionless table. The string passed through a hole in the center of the table allowing the second mass to hang vertically under gravity. I was asked to: 1) Set up the Langrangian and derive eqns of motion. 2) Show that the orbit is stable with respect to small changes in orbit. 3) Find the frequency of small oscillations. I was able to do the first two without any problem but got stuck on the third. The d.e. was too messy to solve by hand in order to acquire the freqency. I looked at the solution and they used the approximation: $\omega^{2}=\frac{1}{M_{eff}}\frac{ \partial^{2}U_{eff}}{\partial r^{2}}\mid_{r=r_0}}$ where ro is the stable point and Meff = M+m Where can I get more information discussing this approximation method?
Engineering Sci Advisor Thanks P: 6,046 If it's any help, for a simple spring and mass system PE = (1/2)Kx^2 d^2(PE)/dx^2 = K And w^2 = K/M. So the same result would follow for small (linearized) oscillations of any system I suppose, if you expanded the PE and KE as Taylor series to get the effective mass and stiffness of the system about the equilibrium condition.
Related Discussions Advanced Physics Homework 2 Advanced Physics Homework 1 Advanced Physics Homework 3 Introductory Physics Homework 2 Classical Physics 2
|
2014-03-10 14:41:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7225490212440491, "perplexity": 313.6721125258332}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010845496/warc/CC-MAIN-20140305091405-00024-ip-10-183-142-35.ec2.internal.warc.gz"}
|
http://nag.com/numeric/FL/nagdoc_fl24/html/F08/f08asf.html
|
F08 Chapter Contents
F08 Chapter Introduction
NAG Library Manual
# NAG Library Routine DocumentF08ASF (ZGEQRF)
Note: before using this routine, please read the Users' Note for your implementation to check the interpretation of bold italicised terms and other implementation-dependent details.
## 1 Purpose
F08ASF (ZGEQRF) computes the $QR$ factorization of a complex $m$ by $n$ matrix.
## 2 Specification
SUBROUTINE F08ASF ( M, N, A, LDA, TAU, WORK, LWORK, INFO)
INTEGER M, N, LDA, LWORK, INFO COMPLEX (KIND=nag_wp) A(LDA,*), TAU(*), WORK(max(1,LWORK))
The routine may be called by its LAPACK name zgeqrf.
## 3 Description
F08ASF (ZGEQRF) forms the $QR$ factorization of an arbitrary rectangular complex $m$ by $n$ matrix. No pivoting is performed.
If $m\ge n$, the factorization is given by:
$A = Q R 0 ,$
where $R$ is an $n$ by $n$ upper triangular matrix (with real diagonal elements) and $Q$ is an $m$ by $m$ unitary matrix. It is sometimes more convenient to write the factorization as
$A = Q1 Q2 R 0 ,$
which reduces to
$A = Q1R ,$
where ${Q}_{1}$ consists of the first $n$ columns of $Q$, and ${Q}_{2}$ the remaining $m-n$ columns.
If $m, $R$ is trapezoidal, and the factorization can be written
$A = Q R1 R2 ,$
where ${R}_{1}$ is upper triangular and ${R}_{2}$ is rectangular.
The matrix $Q$ is not formed explicitly but is represented as a product of $\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left(m,n\right)$ elementary reflectors (see the F08 Chapter Introduction for details). Routines are provided to work with $Q$ in this representation (see Section 8).
Note also that for any $k, the information returned in the first $k$ columns of the array A represents a $QR$ factorization of the first $k$ columns of the original matrix $A$.
## 4 References
Golub G H and Van Loan C F (1996) Matrix Computations (3rd Edition) Johns Hopkins University Press, Baltimore
## 5 Parameters
1: M – INTEGERInput
On entry: $m$, the number of rows of the matrix $A$.
Constraint: ${\mathbf{M}}\ge 0$.
2: N – INTEGERInput
On entry: $n$, the number of columns of the matrix $A$.
Constraint: ${\mathbf{N}}\ge 0$.
3: A(LDA,$*$) – COMPLEX (KIND=nag_wp) arrayInput/Output
Note: the second dimension of the array A must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{N}}\right)$.
On entry: the $m$ by $n$ matrix $A$.
On exit: if $m\ge n$, the elements below the diagonal are overwritten by details of the unitary matrix $Q$ and the upper triangle is overwritten by the corresponding elements of the $n$ by $n$ upper triangular matrix $R$.
If $m, the strictly lower triangular part is overwritten by details of the unitary matrix $Q$ and the remaining elements are overwritten by the corresponding elements of the $m$ by $n$ upper trapezoidal matrix $R$.
The diagonal elements of $R$ are real.
4: LDA – INTEGERInput
On entry: the first dimension of the array A as declared in the (sub)program from which F08ASF (ZGEQRF) is called.
Constraint: ${\mathbf{LDA}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{M}}\right)$.
5: TAU($*$) – COMPLEX (KIND=nag_wp) arrayOutput
Note: the dimension of the array TAU must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left({\mathbf{M}},{\mathbf{N}}\right)\right)$.
On exit: further details of the unitary matrix $Q$.
6: WORK($\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{LWORK}}\right)$) – COMPLEX (KIND=nag_wp) arrayWorkspace
On exit: if ${\mathbf{INFO}}={\mathbf{0}}$, the real part of ${\mathbf{WORK}}\left(1\right)$ contains the minimum value of LWORK required for optimal performance.
7: LWORK – INTEGERInput
On entry: the dimension of the array WORK as declared in the (sub)program from which F08ASF (ZGEQRF) is called.
If ${\mathbf{LWORK}}=-1$, a workspace query is assumed; the routine only calculates the optimal size of the WORK array, returns this value as the first entry of the WORK array, and no error message related to LWORK is issued.
Suggested value: for optimal performance, ${\mathbf{LWORK}}\ge {\mathbf{N}}×\mathit{nb}$, where $\mathit{nb}$ is the optimal block size.
Constraint: ${\mathbf{LWORK}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{N}}\right)$ or ${\mathbf{LWORK}}=-1$.
8: INFO – INTEGEROutput
On exit: ${\mathbf{INFO}}=0$ unless the routine detects an error (see Section 6).
## 6 Error Indicators and Warnings
Errors or warnings detected by the routine:
${\mathbf{INFO}}<0$
If ${\mathbf{INFO}}=-i$, argument $i$ had an illegal value. An explanatory message is output, and execution of the program is terminated.
## 7 Accuracy
The computed factorization is the exact factorization of a nearby matrix $\left(A+E\right)$, where
$E2 = Oε A2 ,$
and $\epsilon$ is the machine precision.
The total number of real floating point operations is approximately $\frac{8}{3}{n}^{2}\left(3m-n\right)$ if $m\ge n$ or $\frac{8}{3}{m}^{2}\left(3n-m\right)$ if $m.
To form the unitary matrix $Q$ F08ASF (ZGEQRF) may be followed by a call to F08ATF (ZUNGQR):
```CALL ZUNGQR(M,M,MIN(M,N),A,LDA,TAU,WORK,LWORK,INFO)
```
but note that the second dimension of the array A must be at least M, which may be larger than was required by F08ASF (ZGEQRF).
When $m\ge n$, it is often only the first $n$ columns of $Q$ that are required, and they may be formed by the call:
```CALL ZUNGQR(M,N,N,A,LDA,TAU,WORK,LWORK,INFO)
```
To apply $Q$ to an arbitrary complex rectangular matrix $C$, F08ASF (ZGEQRF) may be followed by a call to F08AUF (ZUNMQR). For example,
```CALL ZUNMQR('Left','Conjugate Transpose',M,P,MIN(M,N),A,LDA,TAU, &
C,LDC,WORK,LWORK,INFO)
```
forms $C={Q}^{\mathrm{H}}C$, where $C$ is $m$ by $p$.
To compute a $QR$ factorization with column pivoting, use F08BSF (ZGEQPF).
The real analogue of this routine is F08AEF (DGEQRF).
## 9 Example
This example solves the linear least squares problems
$minimize Axi - bi 2 , i=1,2$
where ${b}_{1}$ and ${b}_{2}$ are the columns of the matrix $B$,
$A = 0.96-0.81i -0.03+0.96i -0.91+2.06i -0.05+0.41i -0.98+1.98i -1.20+0.19i -0.66+0.42i -0.81+0.56i 0.62-0.46i 1.01+0.02i 0.63-0.17i -1.11+0.60i -0.37+0.38i 0.19-0.54i -0.98-0.36i 0.22-0.20i 0.83+0.51i 0.20+0.01i -0.17-0.46i 1.47+1.59i 1.08-0.28i 0.20-0.12i -0.07+1.23i 0.26+0.26i$
and
$B = -1.54+0.76i 3.17-2.09i 0.12-1.92i -6.53+4.18i -9.08-4.31i 7.28+0.73i 7.49+3.65i 0.91-3.97i -5.63-2.12i -5.46-1.64i 2.37+8.03i -2.84-5.86i .$
### 9.1 Program Text
Program Text (f08asfe.f90)
### 9.2 Program Data
Program Data (f08asfe.d)
### 9.3 Program Results
Program Results (f08asfe.r)
|
2014-10-22 13:34:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 96, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9962059259414673, "perplexity": 2899.4333385506457}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507446943.4/warc/CC-MAIN-20141017005726-00310-ip-10-16-133-185.ec2.internal.warc.gz"}
|
https://socratic.org/questions/what-is-the-slope-intercept-form-of-the-line-passing-through-4-1-and-3-5
|
# What is the slope-intercept form of the line passing through (-4. 1) and (-3, 5) ?
Dec 20, 2015
$y = 4 x + 17$
#### Explanation:
Given-
${x}_{1} = - 4$
${y}_{1} = 1$
${x}_{2} = - 3$
${y}_{2} = 5$
$\left(y - {y}_{1}\right) = \frac{{y}_{2} - {y}_{1}}{{x}_{2} - {x}_{1}} \left(x - {x}_{1}\right)$
$\left(y - 1\right) = \frac{5 - 1}{\left(- 3\right) - \left(- 4\right)} = \left(x - \left(- 4\right)\right)$
$\left(y - 1\right) = \frac{5 - 1}{- 3 + 4} = \left(x + 4\right)$
$y - 1 = 4 \left(x + 4\right)$
$y - 1 = 4 x + 16$
$y = 4 x + 16 + 1$
$y = 4 x + 17$
|
2022-05-21 19:42:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 12, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7849296927452087, "perplexity": 2431.8924613332883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662540268.46/warc/CC-MAIN-20220521174536-20220521204536-00146.warc.gz"}
|
https://stats.stackexchange.com/questions/527142/what-is-big-o-complexity-of-classifying-an-image-using-cnn/530464
|
# What is Big-O complexity of classifying an image using CNN?
If i have an image consisting of n pixels what will be the complexity of classifying it using a convolutional neural network, expressed in big-o notation? (assuming my cnn is already trained)
• you need to know the size of your cnn – gunes Jun 3 at 9:31
$$O(n)$$
In a CNN, the number of features in each feature map is at most a constant times the number of input pixels $$n$$ (typically the constant is < 1). Convolving a fixed size filter across an image with $$n$$ pixels takes $$O(n)$$ time, since each output is just the sum product between $$k$$ pixels in the image, and $$k$$ weights in the filter, and $$k$$ doesn't vary with $$n$$. Similarly, any max or avg pooling operation doesn't take more than linear time in the input size. Therefore, the overall runtime is still linear.
|
2021-07-28 03:22:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5901426076889038, "perplexity": 443.1324263555076}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153521.1/warc/CC-MAIN-20210728025548-20210728055548-00644.warc.gz"}
|
http://mathhelpforum.com/math-topics/16223-maths-homework-print.html
|
# maths homework
• Jun 24th 2007, 06:38 AM
tooba
maths homework
a container tht can hold 12 litres is 3/4 full.how much will it contain after 4 litres have been poured out of it?
• Jun 24th 2007, 08:09 AM
earboth
Quote:
Originally Posted by tooba
a container tht can hold 12 litres is 3/4 full.how much will it contain after 4 litres have been poured out of it?
Hello,
the container holds $\frac{3}{4}$ of 12 litres. This of is translated by "multiply!" with your sort of problems.
Therefore the container holds:
$\frac{3}{4} \cdot 12 = 9 \text{ litres}$ . Now there are poured off 4 litres. That means the final content is:
$9\text{ litres} - 4\text{ litres} = 5\text{ litres}$
|
2017-02-23 17:24:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7923073172569275, "perplexity": 7530.347065193575}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171176.3/warc/CC-MAIN-20170219104611-00601-ip-10-171-10-108.ec2.internal.warc.gz"}
|
http://mathhelpforum.com/calculus/109625-arc-lengths-print.html
|
# Arc Lengths
• October 22nd 2009, 12:14 AM
superman69
Arc Lengths
Find the length of the curve defined by http://hosted.webwork.rochester.edu/...6934f23841.png from http://hosted.webwork.rochester.edu/...4e4e3fa271.png to http://hosted.webwork.rochester.edu/...655a5ac7d1.png.
Ok so first I know that we have to take the derivative which is Attachment 13472 And What would I do from then?
• October 22nd 2009, 12:17 AM
mr fantastic
Quote:
Originally Posted by superman69
Find the length of the curve defined by http://hosted.webwork.rochester.edu/...6934f23841.png from http://hosted.webwork.rochester.edu/...4e4e3fa271.png to http://hosted.webwork.rochester.edu/...655a5ac7d1.png.
Ok so first I know that we have to take the derivative which is Attachment 13472 And What would I do from then?
There are numerical errors in your derivative. After you have fixed them you should simplify the result, substitute it into the arclength formula (which I assume you have been taught) and then do the resulting integration.
• October 22nd 2009, 12:20 AM
superman69
Quote:
Originally Posted by mr fantastic
There are numerical errors in your derivative. After you have fixed them you should simplify the result, substitute it into the arclength formula (which I assume you have been taught) and then do the resulting integration.
Yes. But is this how you would set the equation Attachment 13473
• October 22nd 2009, 12:30 AM
superman69
Oh I see the mistake, 9 were suppose to be a 4
• October 22nd 2009, 12:36 AM
The Second Solution
Quote:
Originally Posted by superman69
Yes. But is this how you would set the equation Attachment 13473
Once you have made the necessary correction, the thing you are square rooting will simplify to $\left(\frac{x^2 + 16}{x^2 - 16}\right)^2$.
• October 22nd 2009, 12:38 AM
superman69
Quote:
Originally Posted by The Second Solution
Once you have made the necessary correction, the thing you are square rooting will simplify to $\left(\frac{x^2 + 16}{x^2 - 16}\right)^2$.
So this was what I got so far but I don't seem sure if this is correct
Attachment 13474
• October 22nd 2009, 01:59 AM
mr fantastic
Quote:
Originally Posted by superman69
So this was what I got so far but I don't seem sure if this is correct
Attachment 13474
Look, you need to get the derivative correct first. Please post your simplified answer for it and show all your working for how you got it.
Then ..... post #5 tells you what the $1 + \left( \frac{dy}{dx}\right)^2$ will simplify to.
|
2014-08-31 04:02:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.961341142654419, "perplexity": 1347.3556455701805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500835872.63/warc/CC-MAIN-20140820021355-00427-ip-10-180-136-8.ec2.internal.warc.gz"}
|
https://stacks.math.columbia.edu/tag/03QD
|
59.32 Henselian rings
We begin by stating a theorem which has already been used many times in the Stacks project. There are many versions of this result; here we just state the algebraic version.
Theorem 59.32.1. Let $A\to B$ be finite type ring map and $\mathfrak p \subset A$ a prime ideal. Then there exist an étale ring map $A \to A'$ and a prime $\mathfrak p' \subset A'$ lying over $\mathfrak p$ such that
1. $\kappa (\mathfrak p) = \kappa (\mathfrak p')$,
2. $B \otimes _ A A' = B_1\times \ldots \times B_ r \times C$,
3. $A'\to B_ i$ is finite and there exists a unique prime $q_ i\subset B_ i$ lying over $\mathfrak p'$, and
4. all irreducible components of the fibre $\mathop{\mathrm{Spec}}(C \otimes _{A'} \kappa (\mathfrak p'))$ of $C$ over $\mathfrak p'$ have dimension at least 1.
Proof. See Algebra, Lemma 10.145.3, or see [Théorème 18.12.1, EGA4]. For a slew of versions in terms of morphisms of schemes, see More on Morphisms, Section 37.41. $\square$
Recall Hensel's lemma. There are many versions of this lemma. Here are two:
1. if $f\in \mathbf{Z}_ p[T]$ monic and $f \bmod p = g_0 h_0$ with $gcd(g_0, h_0) = 1$ then $f$ factors as $f = gh$ with $\bar g = g_0$ and $\bar h = h_0$,
2. if $f \in \mathbf{Z}_ p[T]$, monic $a_0 \in \mathbf{F}_ p$, $\bar f(a_0) =0$ but $\bar f'(a_0) \neq 0$ then there exists $a \in \mathbf{Z}_ p$ with $f(a) = 0$ and $\bar a = a_0$.
Both versions are true (we will see this later). The first version asks for lifts of factorizations into coprime parts, and the second version asks for lifts of simple roots modulo the maximal ideal. It turns out that requiring these conditions for a general local ring are equivalent, and are equivalent to many other conditions. We use the root lifting property as the definition of a henselian local ring as it is often the easiest one to check.
Definition 59.32.2. (See Algebra, Definition 10.153.1.) A local ring $(R, \mathfrak m, \kappa )$ is called henselian if for all $f \in R[T]$ monic, for all $a_0 \in \kappa$ such that $\bar f(a_0) = 0$ and $\bar f'(a_0) \neq 0$, there exists an $a \in R$ such that $f(a) = 0$ and $a \bmod \mathfrak m = a_0$.
A good example of henselian local rings to keep in mind is complete local rings. Recall (Algebra, Definition 10.160.1) that a complete local ring is a local ring $(R, \mathfrak m)$ such that $R \cong \mathop{\mathrm{lim}}\nolimits _ n R/\mathfrak m^ n$, i.e., it is complete and separated for the $\mathfrak m$-adic topology.
Proof. Newton's method. See Algebra, Lemma 10.153.9. $\square$
Theorem 59.32.4. Let $(R, \mathfrak m, \kappa )$ be a local ring. The following are equivalent:
1. $R$ is henselian,
2. for any $f\in R[T]$ and any factorization $\bar f = g_0 h_0$ in $\kappa [T]$ with $\gcd (g_0, h_0)=1$, there exists a factorization $f = gh$ in $R[T]$ with $\bar g = g_0$ and $\bar h = h_0$,
3. any finite $R$-algebra $S$ is isomorphic to a finite product of local rings finite over $R$,
4. any finite type $R$-algebra $A$ is isomorphic to a product $A \cong A' \times C$ where $A' \cong A_1 \times \ldots \times A_ r$ is a product of finite local $R$-algebras and all the irreducible components of $C \otimes _ R \kappa$ have dimension at least 1,
5. if $A$ is an étale $R$-algebra and $\mathfrak n$ is a maximal ideal of $A$ lying over $\mathfrak m$ such that $\kappa \cong A/\mathfrak n$, then there exists an isomorphism $\varphi : A \cong R \times A'$ such that $\varphi (\mathfrak n) = \mathfrak m \times A' \subset R \times A'$.
Proof. This is just a subset of the results from Algebra, Lemma 10.153.3. Note that part (5) above corresponds to part (8) of Algebra, Lemma 10.153.3 but is formulated slightly differently. $\square$
Lemma 59.32.5. If $R$ is henselian and $A$ is a finite $R$-algebra, then $A$ is a finite product of henselian local rings.
Proof. See Algebra, Lemma 10.153.4. $\square$
Definition 59.32.6. A local ring $R$ is called strictly henselian if it is henselian and its residue field is separably closed.
Example 59.32.7. In the case $R = \mathbf{C}[[t]]$, the étale $R$-algebras are finite products of the trivial extension $R \to R$ and the extensions $R \to R[X, X^{-1}]/(X^ n-t)$. The latter ones factor through the open $D(t) \subset \mathop{\mathrm{Spec}}(R)$, so any étale covering can be refined by the covering $\{ \text{id} : \mathop{\mathrm{Spec}}(R) \to \mathop{\mathrm{Spec}}(R)\}$. We will see below that this is a somewhat general fact on étale coverings of spectra of henselian rings. This will show that higher étale cohomology of the spectrum of a strictly henselian ring is zero.
Theorem 59.32.8. Let $(R, \mathfrak m, \kappa )$ be a local ring and $\kappa \subset \kappa ^{sep}$ a separable algebraic closure. There exist canonical flat local ring maps $R \to R^ h \to R^{sh}$ where
1. $R^ h$, $R^{sh}$ are filtered colimits of étale $R$-algebras,
2. $R^ h$ is henselian, $R^{sh}$ is strictly henselian,
3. $\mathfrak m R^ h$ (resp. $\mathfrak m R^{sh}$) is the maximal ideal of $R^ h$ (resp. $R^{sh}$), and
4. $\kappa = R^ h/\mathfrak m R^ h$, and $\kappa ^{sep} = R^{sh}/\mathfrak m R^{sh}$ as extensions of $\kappa$.
Proof. The structure of $R^ h$ and $R^{sh}$ is described in Algebra, Lemmas 10.155.1 and 10.155.2. $\square$
The rings constructed in Theorem 59.32.8 are called respectively the henselization and the strict henselization of the local ring $R$, see Algebra, Definition 10.155.3. Many of the properties of $R$ are reflected in its (strict) henselization, see More on Algebra, Section 15.45.
Comment #1501 by Dingxin Zhang on
Below Thm 036E's proof, after "Recall Hensel...":
f)] <-- (a weird right bracket) f \in Z_p[T <--(missing right bracket here)
Comment #1502 by on
Thanks for noticing! I filed this issue, I hope to fix some of the parsing issues soon.
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
|
2022-11-29 14:32:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9651239514350891, "perplexity": 260.62811695378514}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710698.62/warc/CC-MAIN-20221129132340-20221129162340-00573.warc.gz"}
|
https://clinmedjournals.org/articles/ijwhw/international-journal-of-womens-health-and-wellness-ijwhw-3-046.php?jid=ijwhw
|
## Examining Cervical Cancer Screening Utilization Among African Immigrant Women: A Literature Review
### Adebola Adegboyega*, Mollie Aleshire and Ana Maria Linares
College of Nursing, University of Kentucky, USA
*Corresponding author: Adebola Adegboyega, RN, BSN, PhD candidate, College of Nursing, University of Kentucky, Lexington, KY 40536, USA, E-mail: Aoadeg2@uky.edu
Int J Womens Health Wellness, IJWHW-3-046, (Volume 3, Issue 1), Review Article; ISSN: 2474-1353
Received: October 25, 2016 | Accepted: February 18, 2017 | Published: February 22, 2017
Citation: Adegboyega A, Aleshire M, Linares AM (2017) Examining Cervical Cancer Screening Utilization Among African Immigrant Women: A Literature Review. Int J Womens Health Wellness 3:046. 10.23937/2474-1353/1510046
Copyright: © 2017 Adegboyega A, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Abstract
Background: Globally, 530,000 women per year are diagnosed with cervical cancer, and approximately 275,000 die from the disease. Routine cervical cancer screening may reduce the burden of cervical cancer morbidity and mortality through early detection and improved treatment outcome. Immigrant women in the United States (U.S.) may be disproportionately affected by cervical cancer; however, there is scarce literature addressing cervical cancer screening in African immigrants (AIs) when compared to other immigrant groups. This systematic review evaluates the state of cervical cancer screening research in AIs and identifies current gaps.
Materials and methods: Through a comprehensive literature search, we identified 16 studies published between 2005 and 2015 that focused on cervical cancer screening among AIs.
Results: From this review, we found a low screening adherence rate among AIs. The common factors influencing cervical cancer screening practices among AIs included immigration status, health care interactions, knowledge deficiency, religiosity and certain personal characteristics.
Discussion: A multilevel approach to address the factors influencing screening practices among AIs is essential for improving adherence to screening guidelines. Implementation of grassroots enlightenment and screening programs are warranted in this population to decrease the screening disparity experienced by this burgeoning population.
Conclusions: Based on the findings from this review, African Immigrant (AI) women should be targeted for education about the importance of cervical cancer screening to bridge the knowledge gaps and multilevel initiatives could lead to improved access and utilization of screening services among this growing immigrant population.
Introduction
Every year 530,000 women worldwide are diagnosed with cervical cancer, and approximately 275,000 die from the disease [1]. Cervical cancer is the second most common cancer among women worldwide [1,2], is the most common cause of cancer in Africa [3], and is the leading cause of cancer-related deaths among women in developing countries [1,4]. Cervical cancer incidence rates are highest in sub-Saharan Africa, Latin America, Melanesia, and the Caribbean and are lowest in Western Asia, Australia, New Zealand, and North America. There is significant variation in cervical cancer rates by geographical region, which reflects differences in the availability and utilization of cervical cancer screening based upon geographical area [2]. Cervical cancer screening has successfully decreased cervical cancer incidence and mortality [5] in developed countries. However, screening in most African countries remains inaccessible and underutilized by African women [6]. In many sub-Saharan African countries, cervical cancer screening programs have not been effective due to multifactorial barriers that are client-based, provider-based, and system-based [7].
Human papillomavirus (HPV) infection is the primary cause of cervical cancer and HPV prevalence in women without cervical abnormalities is 24% in sub-Saharan Africa compared to a prevalence of 5% in North America [2,8]. Western and Eastern Africa are high risk areas for cervical cancer with women having a 3.4% cumulative risk of developing cervical cancer during their lifetime compared to a 0.5% lifetime risk of cervical cancer for women in North America risk of [9]. Decreases in HPV prevalence in North America have been linked to HPV vaccination [10]; however, the high cost of HPV vaccine may make it unaffordable or unavailable in many African countries [4]. The high HPV prevalence in African women translates to a high burden of cervical cancer in African women as well as an increased risk of cervical cancer for African women who immigrate to the United States (U.S.) [11].
Receiving Papanicolau smear (Pap) screening according to recommended guidelines significantly reduces cervical cancer morbidity and mortality and is the most commonly used prevention strategy for cervical cancer worldwide [12]. Pap screening can find precancerous cervical abnormalities as well as detect cervical cancer at early and at treatable stages. Cervical cancer is rare in women less than 21 years of age, and screening in adolescent females has been shown to increase cost and anxiety without decreasing incidence of cervical cancer [13]. Hence, cervical cancer screening is not recommended for adolescent females [14]. The American Cancer Society, American Society of Colposcopy and Cervical Pathology, American Congress of Obstetricians and Gynecologists, and U.S. Preventive Services Task Force (2012) recommend Pap screening begin at age 21 years and be completed every 3 years until women are over 65 years. Women ages 30-65 years may alternatively choose co-testing with HPV and Pap screening every 5 years. Co-testing for HPV in combination with Pap screening can help to assess cervical cancer risk [15]. If there is no history of cervical cancer or precancerous abnormalities, women who have had a hysterectomy that includes removal of the cervix and women over age 65 do not need cervical cancer screening [15]. These recommendations are for women at average risk and do not apply to women at increased risk for cervical cancer such as women who have a history of cervical dysplasia or cervical cancer; women who have been exposed in utero to diethylstilbestrol, or women who are immunocompromised [11]. Recommended screening practices should not change based on HPV vaccination status [16].
Women receiving Pap screening based on guideline recommendations and intervals is critical to reducing cervical cancer related morbidity, mortality, and economic burden [17]. In the U.S mortality reduction would be 86%-93%, and lifetime cost would be approximately $1200-$1500, and 24 quality-adjusted life-years would be gained [10,18]. To improve the health and economic burden of cervical cancer, the Pap screening patterns of ethnic minorities and underserved populations must be understood since these populations are disproportionately affected by cervical cancer. Currently, there exists a limited understanding of the factors influencing cervical cancer screening among African immigrants (AIs) to the U.S.
Sub-Saharan Africa is historically a region of intense migration and population movement prompted by demographic, economic, ecological and political factors [19]. Hence, the African immigrant (AI) group is a rapidly growing population in the U.S. [20]. From 1980 to 2013, the African population in the U.S. increased from 130,000 to 1.5 million [21]. AIs differ by country of origin, reasons for migration, primary languages spoken, health practices and beliefs, human capital, education status, and cultural background [22]. Immigrants bring with them their health profiles and health-related knowledge, values, beliefs, and perceptions reflecting their cultural background [23]. Cervical cancer screening services have been poorly implemented in many developing countries because of the high cost of health services, poor health infrastructures, insufficient numbers of pathologists and technicians, lack of resources, and accessibility particularly by people living in the rural areas since many of the available services are based in secondary and tertiary health care facilities located in urban areas [4,24]. The awareness and utilization of Pap screening is increasing in Sub-Saharan Africa. However, the unavailability and inaccessibility of cervical cancer screening services continue to lead to only a small percentage of women being screened in sub-Saharan Africa [4]. Insufficient awareness of cervical cancer screening recommendations may deter AI women from completing Pap screening [7] after they migrate to the U.S. AIs may not have had any Pap screening prior to coming to the U.S. Consequently, cervical cancer screening appears to be underutilized among AI populations whose screening rates are much lower than the proposed Healthy People 2020 objective of 93% of women age 21 to 65 receiving screening based upon current guidelines [25].
AI women in the U.S. may be disproportionately affected by cervical cancer due to health care factors, culturally determined beliefs and attitudes, and cervical cancer screening barriers [26-28]. In the only identified systematic review of cancer control research focused on U.S. AIs, Hurtado-de Mendoz and colleagues (2014) [29] examined cancer related studies that included African-born immigrants to the U.S. This review was conducted in May 2013 and was not specific to cervical cancer screening. To date, scant research has examined the current state of cervical cancer screening in AIs or identified research gaps to inform future research and interventions. Therefore, the purpose of this review is to examine cervical cancer screening practices among AI women and to identify gaps in the literature to guide future research.
Methods
Search method
The literature review combined electronic searches from PubMed, Web of Science, Google Scholar, Ovid Medline and CINHAL and followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [30]. Search terms included a combination of key words such as "cervical cancer screening", "African immigrants", "cervical neoplasm screening", "Pap test", "African refugees", and "immigrants". First, abstracts and titles were screened for relevance. Subsequently, full text articles were evaluated to determine adherence to the predetermined inclusion criteria. The article selection was based on the following inclusion criteria: (a) studies were published in English between 2005 and 2015, (b) studies reported on cervical cancer screening in an AI population, (c) articles were peer reviewed, (d) and the article was either a qualitative or quantitative research study, (e) studies done in Europe, Australia, or North America. Studies reported only in abstracts without full manuscripts, conference abstracts, review papers, dissertations, and epidemiological studies were excluded from the review.
Search outcome
Figure 1 summarizes the article selection process. From the initial electronic database search, 45 articles were identified. The abstracts were appraised and the references were reviewed to identify relevant studies from the reference lists that might have been missed in the initial search. After deleting duplicates, the remaining 24 full-text articles were screened for eligibility. A total of 16 studies met inclusion criteria.
.
Figure 1: Summary of literature search and review process. View Figure 1
.
Quality appraisal
Due to the limited number of studies meeting inclusion criteria, all research methodologies were included in this review. A categorical quality appraisal of the studies was not undertaken due to the significant heterogeneity among studies and is a limitation of this review, however the quality of studies was appraised via identifying designs, measures, strengths and weaknesses.
Data extraction and analysis
The abstract, manuscript, and the main findings of the studies meeting inclusion criteria were critically reviewed and synthesized. The authors used a data extraction sheet to examine study characteristics including subject characteristics, sampling methods, study location, and research design. Due to the changes in cervical cancer screening guidelines between 2005 and 2015, the authors referred to contemporary guidelines from the time the studies were conducted to ascertain if study participants met cervical cancer screening recommendations. The primary outcome variable of interest was if AIs had ever received Pap screening. Data also appraised and synthesized included cervical cancer screening adherence, and facilitators and/or barriers affecting cervical cancer screening practices. Given the heterogeneity of the included studies, meta-analysis or other statistical analysis could not be performed; therefore, data was summarized using qualitative synthesis. Extracted data was organized, integrated, and analyzed using qualitative content analysis methods [31]. Extracted data with common characteristics were then synthesized and grouped into major themes.
Results
Characteristics of selected studies
The selected articles were published between 2005 and 2015. The study characteristics are outlined in table 1. The study designs included six qualitative [32-35], seven quantitative [11,36-41], and one mixed methods (using both qualitative and quantitative) approach [42]. The reviewed articles included only two intervention studies [43,44]. Of the selected studies, 11 were studies specific for cervical cancer while the remaining studies also included other types of cancer.
Table 1: Summary of cervical cancer related studies that include African immigrants (AI). View Table 1
Subject characteristics
The sample sizes and sampling methods varied among the studies. Convenience sampling was used most frequently (25%, 4 articles). Three articles (18.8%) used stratified sampling, two articles (12.5%) used randomized sampling and purposeful sampling methods, one article (6.3%) used clustered sampling, and four articles (25%) did not specify the sampling method. All studies' participants were ages 18 and above. Seven articles examined AIs exclusively while 9 studies included other populations. Somalia was the most common country of migration in all reviewed studies which may be related to large Somalian immigrant populations in the areas where most studies on AIs have been conducted. Somalia was the top country of origin of African-born refugees and asylees (11.6%) admitted to the US in 2007 [45]. Ten studies were conducted in the United States, two in United Kingdom, and one study each was conducted in Canada and Australia.
Cervical cancer screening adherence
Factors influencing cervical cancer screening
Immigration status: Four studies [37,38,41,43] demonstrated that length of stay in country of immigration may improve cervical cancer screening, with a longer period of stay being associated with likelihood of having completed cervical cancer screening. Harcourt and colleagues (2013) found that established immigrants (greater than 5 years) are more likely to be screened for cervical cancer compared to recent immigrants (p < 0.001, OR = 0.40, CI 0.24-0.65). However, Samuel and colleagues (2009) [42] did not observe a correlation between time living in the U.S. and odds of being screened for cervical cancer. In a Canadian study, Lofters and colleagues (2010) [46] found immigrant class (economic, family, and refugee class) to be a significant predictor of cervical cancer screening in sub-Saharan African and Western European women. In this study, refugees were less likely to have completed cervical cancer screening, even though length of stay in Canada was not consistently associated with lack of screening.
Health care interactions: The frequency of health care system interaction may increase screening. Emergency department visits were associated with an increased likelihood of cervical cancer screening completion [39,40]. Morrison and colleagues (2012, 2013) [39,40] reported that there was a significant positive association between the duration of established health care (p = 0.001), number of health care encounters (p = 0.001), and cervical cancer screening adherence. Three studies [35,40,47] reported that post-natal or obstetrics/gynecological visits increased the odds of cervical cancer screening completion. Ogunsiji and colleagues (2013) found a majority of women who had Pap screening participated after their first pregnancy and continued to receive follow-ups and reminders from their providers. In addition, health care provider recommendations [35,48], patient-health care provider relationship [48], and trained medical interpreter use [39] all were found to improve rates of cervical cancer screening.
A health care provider's gender may influence cervical cancer screening completion [32,35,40,42]. Morrison and colleagues (2012) [40] reported that patient-provider gender concordance may improve screening adherence among Somali women. Cervical cancer screening was significantly more likely to occur during a visit with a female health care provider compared to a male provider (6.9% versus 1.2%). Having a male health care provider perform Pap screening may be uncomfortable [42] and for Muslim Somali women this may be a barrier to screening completion [35]. Redwood-Campbell (2011) found in their study of cervical cancer screening barriers and facilitators, that participants preferred female clinicians, and that the health care provider be female gender was most important to Muslim women [32].
Other personal level factors related to health care interaction such as cost [33,48], communication [32,35], pain [34], embarrassment [32,34,35], ear [33,34,41,48] and accessibility difficulties are barriers to Pap screening among AI women. Fear of the Pap screening included fear of the procedure and fear of the result. Certain women perceived the process of undergoing pelvic examination as invasive. Some women believed that the use of speculum would damage reproductive organs or impact future pregnancies [34]. Some women considered the speculum a painful instrument and did not trust the instruments' sterilization [35]. Fear of receiving a cervical cancer diagnosis prevented women from undergoing Pap screening due to the belief that a cancer diagnosis would result in death [33]. Ghebre and colleagues (2014) reported that some AI women would rather die rather than know that they have cancer. Accessibility challenges affecting cervical cancer screening included lack of childcare, inconvenient appointment times, and transportation issues [33,35].
Some women anticipated embarrassment associated with reaction from health care providers based on having undergone female circumcision [35]. Also, women perceived undergoing Pap screening as a sign of problem or an indication that a woman is experiencing an infection. Other women were concerned regarding how their community might interpret undergoing a gynecologic exam [34]. Younger women expressed that due to the close knit nature of the AI community in the area, they had concerns related to privacy and confidentiality [33].
Another barrier affecting cervical cancer screening was communication and language difficulties experienced during health care interactions [32,34,35]. English is a second language for many AI women and the inability to communicate effectively may be a barrier to cervical cancer screening. Communication issues may influence forming a trusting relationship with providers. Language difficulties can affect women's understanding of the cervical cancer screening and the perceived need for screening. Even though interpreter services were available, some women expressed dissatisfaction with the quality of interpreters provided, distrust of the interpreters provided, and embarrassment about disclosing private issues to interpreters [31].
Lack of trust in healthcare system [34], negative past experiences [35], and lack of health insurance [11,48] are system level barriers affecting cervical cancer screening. Cost of screening may affect cervical cancer screening for women without health insurance or underinsured. Lack of health insurance was associated with lower odds of Pap screening completion [11]. Lack of trust in the health care system and in health care providers was also identified by AI women as a health care system barrier to cervical cancer screening. Many women questioned recommendations by physicians and perceived that health care system or providers may not be focused upon the patient's best interest [34]. Furthermore, certain women delayed Pap screening due to their own past negative experience or other's reports of poor experiences related to Pap testing [35].
Knowledge of cervical cancer screening
Several studies reported that cervical cancer screening knowledge is low among AI women [32-35,47,48]. The women endorsed the need for more information on the necessity of cervical cancer screening, steps involved in procedure, and the implications of test results [32]. Because women's health issues were often not discussed openly in sub-Saharan African countries, it was difficult for AI women to initiate discussions on sexuality, cancer screening, or reproductive health [47]. In a multiethnic study by Brown and colleagues (2011), AI women knew the least among all the ethnic groups and commonly believed that cervical cancer was caused by having too many children. The women did not identify HPV as the cause of cervical cancer and were not aware HPV is a sexually transmitted infection [48]. Ndukwe and colleagues (2013) discussed that AI women often assume symptoms of cervical cancer are menstrual symptoms [33]. Ghebre and colleagues (2014) [34] found some Somali women might not know if they have undergone a cervical cancer screening because they did not know if they had undergone cervical cancer screening or another gynecological exam.
Religiosity, beliefs and attitudes
Certain religion and cultural belief can be barriers to cervical cancer screening completion. Ekechi and colleagues (2014) [41] found that women who attended religious services at least once a week were more likely to be overdue for screening than those who rarely or never attended (27% vs. 17%, p = 0.02). Also, a common Muslim Somali belief is that everything that happens is 'under God's will' [34,35] and prevention has 'no impact on God's plan' for one's health [34]. Other beliefs that impact pap screening include that personal faith will serve as protection from cancer, that cancer is a curse [33], or that cancer is a form of punishment from God inflicted on an individual [34]. Some AI women have fatalistic beliefs; the women reported that prevention has no impact because if God plans for someone to get sick, they will despite screening. Individuals will die the day they were supposed to die and participating in health prevention would not change the outcome was another sentiment shared by AI women [34].
There is conflicting evidence about AIs attitudes related to cervical cancer screening. Ogunsiji and colleagues (2013) [47] reported the majority of West African immigrant women in their study had a negative attitude toward Pap screening due to unfamiliarity with the test. Conversely, Redwood-Campbell and colleagues (2011) [32] reported a positive attitude among female immigrant being proactive in managing their health by obtaining cervical cancer screening.
Demographic characteristics
Among the studies that assessed correlation between age and cervical cancer screening, one study reported no association between AIs age and cervical cancer screening completion [38] while another study reported that women 25-44 years old were less likely to be screened than women 45-64 years old [41]. Two studies indicated that single African women were less likely to be screened compared to married women [11,41]. Harcourt and colleagues (2013) [38] reported that there was no association between AIs' level of education and cervical cancer screening while Forney-Gorman and colleagues (2015) [11] found an association between higher level of education and screening but it did not reach statistical significance.
Discussion
This literature review describes the state of cervical cancer screening evidence related to AIs and highlights a paucity of research specific to AI women and cervical cancer screening despite growing numbers of this immigrant group in developed countries. The review included 16 articles published between 2005 and 2015. Through synthesis of the articles, the authors identified thematic factors influencing Pap screening among AIs. Factors influencing Pap screening were identified as immigration status; health care interactions; knowledge related to cervical cancer screening; religiosity, beliefs, and attitudes; and demographic characteristics.
Cervical cancer screening is underutilized in the AI population with screening rates lower than other U.S. women and well below the Healthy People 2020 goal of 93% of women ages 21 to 65 receiving screening [25]. The differing cervical cancer screening guidelines in place during 2005 to 2015 review period make direct comparisons of Pap screening adherence across studies difficult. Available national data do not reflect screening rates among AI due to data aggregation in which AI females are reported as part of African American female statistics. The 2010 National Health Interview Survey showed that the overall cervical cancer screening receipt in the U.S. within the past three years was 83.0%. African American women have a cervical cancer screening rate of 85%, and rates were significantly lower among Asians at 75.4% [49]. Lack of disaggregation of data makes it difficult to identify sub group differences between native-born blacks and foreign-born blacks. There is limited data about Pap screening among a nationally representative sample of AI. In this review, reported cervical cancer screening rates among AI varied greatly from 19.4% to 75%. Notably, even a cervical cancer screening rate of 75% is below the reported screening rates among other minorities indicating further intervention is still needed to increase cervical cancer screening rates and achieve the Healthy People 2020 goals in this population.
Knowledge deficits related to cervical cancer risk factors and screening procedures influence cervical cancer screening among AIs. Limited knowledge in the AI population may be related to lack of cervical cancer screening emphasis or utilization prior to migration. Numerous studies conducted in Africa have shown that there is poor knowledge related to HPV, cervical cancer, and cervical cancer screening among African women. In a study conducted among women in Burkina Faso, the researchers reported low biomedical knowledge about cervical cancer [50]. In an integrated review of barriers to cervical cancer screening in sub-Saharan Africa, McFarland and colleagues (2016) cited lack of knowledge and awareness of cervical screening as the most common client-based barrier. Lack of information about cervical cancer screening programs and illiteracy likely are components affecting this knowledge gap [7]. Similarly, research among other immigrant population in the U.S. have found knowledge of cervical cancer causes and prevention to be lower as compared to the general U.S. population. For example, Corcoran and colleagues (2014) reported that Latina women have inaccurate and inadequate knowledge of cervical cancer and its prevention [51].
The knowledge gaps related to cervical cancer which exist in the burgeoning AI population must be addressed. Limited knowledge related to cervical cancer can fuel misconceptions about cervical cancer and cervical cancer screening. Alarmingly, more than half of cervical cancer deaths in the U.S. are among immigrant women [37], and AI women also suffer a disproportionate cervical cancer burden. Screening campaigns must target AIs and emphasize the causative role of HPV in cervical cancer and cervical cancer risk factors. Such campaigns will help eliminate anecdotal beliefs and combined with targeted cervical cancer screening programs can reduce the risk of cervical cancer. Regular cervical cancer screening based upon current guidelines is highly effective in identifying cervical cancer precursors and interrupting progression to invasive disease [52].
In this review, health care interactions also influenced cervical cancer screening among AI. In this review, AI women at post-natal or obstetrics/gynecological visits were screened as part of their visit; however, depending solely on this service may preclude women above childbearing ages. In native African women, screening for cervical cancer is similarly opportunistic and is more often completed by women who attend antenatal and family planning clinics. However, women who use these services are generally young and from a relatively low-risk group. This type of service does not reach women many at higher risk such as those aged 35-60 years and those who live in rural areas [4]. Morrison and colleagues (2012) noted that more frequent exposure to the health care system may increase comfort with the system and procedures, enhancing opportunities for preventive health services [40]. However, women who anticipate or experience unpleasant health care interactions may have fewer encounters with the health care system decreasing the likelihood of preventive care including cervical cancer screening.
In addition, certain health care interaction factors affecting Pap screening that are reported by U.S. ethnic minorities include embarrassment, fear of pain, fear of diagnosis, and trust in provider [51,53]. In a systematic review of barriers to cervical cancer utilization in sub-Saharan Africa, Lim and Ojo (2016) reported similar barriers among Sub-Saharan Africans [54]. Nigerian women indicated that fear of a positive result, modesty concerns, gender of health care providers, and beliefs that it is better to be ignorant of disease than to go in search of it were factors affecting cervical cancer screening practices, but these factors were not uniform across religions and geographical regions [55]. Furthermore, anticipated embarrassment related to health care providers unfamiliar with female circumcision practices have been reported among AIs [29]. Health care providers that encounter immigrant women should be aware that AIs may have specific needs related to female circumcision, which is practiced in more than 28 countries in Africa [56].
Religiosity has been shown to predict engagement in preventive services [57]. Generally, individuals who attend religious services are more likely to report the use of female preventive services compared to those who never attend [57]. However, in this review, we found that AI women who attended religious services were not up to date on screening. Religiosity may influence perceptions about cervical cancer causes and outcome. Some AI women endorse fatalistic beliefs about cancer that may be intertwined with religious beliefs. The belief that a higher power controls health is a component of fatalism [58]. Studies conducted among native African women have reported fatalistic views of cervical cancer screening, viewing positive results as a death sentence negating the need for screening. Other African women have reported solace in ignorance about their cervical cancer status [54].
Based on the heterogeneity and cultural diversity among Africans, factors related to cervical cancer screening uptake may vary among different ethnicities, within countries, and across the continent. In this review, most of the factors identified as influencing cervical cancer screening among AIs are similar to those identified among native Africans. However, some factors influencing cervical cancer screening differ between native Africans and AIs. For instance, immigration status is an important determinant of cervical cancer screening uptake among immigrants with recent immigrants at greater risk for non-compliance with screening recommendations. In addition, immigrants may be disproportionately affected by unique factors that may deter from cervical cancer screening. For example, undocumented immigrants cannot receive health insurance via the Patient Protection and Affordable Care Act (ACA) and legal immigrants who have been in the country less than five years are also excluded from participation in the Medicaid expansion program. Therefore, undocumented immigrants and recent immigrants are less likely to receive cervical cancer screening, and more likely to delay seeking necessary care [59]. U.S. immigrants consistently have lower rates of health insurance coverage than native U.S. populations, yet there are differences among immigrants based on immigration status, time in the U.S., and country of origin [60]. Having health insurance and cost likely play a significant role in access to preventive services such as Pap screening for AIs.
Despite migration to developed countries where organized cancer screening services and programs are normalized, there remains low cervical cancer screening rates among AIs. In part, this may be associated with lack of successful integration into the health care system of the host country. As acculturation and assimilation occur for AIs over time, this may lead to changes in beliefs or norms related to health practices such as cervical cancer screening [61]. Culturally congruent care may facilitate awareness of and access to health care services, including cervical cancer screening.
This review underscores the need for culturally-appropriate, targeted prevention efforts aimed at recent immigrants to improve their cervical cancer-screening uptake. In an intervention study identified in this review, Piwowarcyyk and colleagues (2013) [44] found that a culturally and linguistically tailored DVD intervention increased knowledge and intention to screen among women. The intervention was a series of one-session group workshops with Congolese and Somali in the US built around a DVD using AI women's stories which provided basic information about mammography, pap smears and mental health services for trauma.
Connecting recent immigrant with community resources, local advocacy, and resettlement organizations may help link and integrate them into the health care system in their host countries and reduce the cervical cancer screening and cervical cancer disease disparities experienced by this group.
Although, considerable progress is being made toward understanding the facilitators and barriers to cervical cancer screening among AIs, this review highlights the need for culturally-targeted and linguistically appropriate interventions to address knowledge gaps, health promotion, all levels of prevention, and culturally sensitive health care interactions.
This review indicates that health care providers influence cervical cancer screening utilization via their recommendations, patient-provider relationships, and communication. Hence, interventions and educational initiatives should address health care providers' cultural sensitivity and cultural congruence and facilitate incorporation of these concepts into patient-centered care to enhance health care interactions and improve health care barriers for AIs.
Self-Pap screening and HPV testing may play a vital role in the future in increasing the number of women globally who are able to receive cervical cancer screening [62]. Sewali and colleagues (2015) study [43] among Somali immigrants demonstrated the potential for using self-sampling home-based kits to increase cervical cancer screening in AIs. Community health workers (CHWs) might serve as patient navigators to participants with positive cervical cancer or HPV self-screening results to ensure timely follow-up [62]. As frontline lay public health workers, CHWs serve as a bridge between communities and health care providers [63]. CHWs address the challenge of delivering health care services to underserved populations through education, outreach, and counseling [64,65] CHWs have been successfully used in cancer screening promotions among underserved populations and thus should be considered as a component of intervention strategies aimed at increasing cervical cancer screening in AI women [65].
Limitations
There are several limitations of this review including the number and types of studies that were reviewed and the time span of publication. Although 16 studies were identified, the study designs and samples varied greatly and studies utilized unique research purposes and questions, different types of research participants, dissimilar research measures, multiple variables, and widely varied immigrant population foci. Although the authors sought to identify all AI cervical cancer screening studies meeting inclusion criteria, the search methodology employed for the literature review may have limited the number of studies identified for inclusion. Searches of additional databases, grey literature, abstract-only writings, and unpublished data may have led to the identification of additional research studies. The limitation of using keywords and Mesh terms may have impacted the search results; however, in an effort to minimize this effect multiple databases were searched. The diversity of the articles reviewed and AIs as a population, limits the ability to generalize the review findings. The results should be interpreted with caution due to the numerical variation of AI study participants. Also, study participants included AI women born in various countries across the African continent which are likely influenced by factors such as geographical region, religion, legislation, socio-political factors, sociocultural norms, and a myriad of other factors. Data classification and thematic identification and classification were based on subjective inferences; consequently, this is a limitation that may affect the results.
Conclusions
The findings from the review highlight gaps in research among AI population related to cervical cancer screening. The need for more research to test interventions among this growing population cannot be overemphasized. Such research studies should target AIs within their socioeconomic cultural context to identify effective interventions to improve cervical cancer screening participation in this group. Such investigation should also evaluate the cost effectiveness and feasibility of such interventions for dissemination to a larger AI audience.
In addition, much of the research done in this group has not been among national representative samples of AI and has been conducted with broad classification of immigrants with small representation of AIs; thus limiting the interpretation and generalization of such research to larger AI populations. Future AI research should consider the heterogeneity of the AI population and identify and study population subgroups and subcultures to determine the similarities and differences in cervical cancer screening influences and practices. AI groups such as uninsured, recently-arrived, and non-English speakers may be best reached through community-based participatory research with community-based organizations [29]. Engagement with community-based organizations that serve these communities provide a platform for exploring meaningful health promotion interventions in this underrepresented population [66]. Achieving inclusive, meaningful research in this population may best be accomplished through multi-institutional collaborations to ensure diversity among African-born populations while further stratification may delineate risks, behaviors, and associations unique to specific subgroups within these populations [66].
Sample Search Terms used in PubMed
(africa*) OR "Africa"[Mesh])) AND ((("Emigrants and Immigrants"[Mesh])) OR immigrant*)) AND "last 10 years"[PDat] AND Humans[Mesh] AND English[lang])) AND ((((cancer screen* AND "last 10 years"[PDat] AND Humans[Mesh] AND English[lang])) OR ("Early Detection of Cancer"[Mesh] AND "last 10 years"[PDat] AND Humans[Mesh] AND English[lang])) AND "last 10 years"[PDat] AND Humans[Mesh] AND English[lang])) AND "last 10 years"[PDat] AND Humans[Mesh] AND English[lang])) AND (((("Uterine Cervical Neoplasms"[Mesh] AND "last 10 years"[PDat] AND Humans[Mesh] AND English[lang])) OR (cervi* AND "last 10 years"[PDat] AND Humans[Mesh] AND English[lang])) AND "last 10 years"[PDat] AND Humans[Mesh] AND English[lang]).
References
Open Access by ClinMed International Library is licensed under a Creative Commons Attribution 4.0 International License based on a work at https://clinmedjournals.org/.
|
2018-08-14 06:56:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31243273615837097, "perplexity": 7088.766409071166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221208676.20/warc/CC-MAIN-20180814062251-20180814082251-00259.warc.gz"}
|
https://codegolf.stackexchange.com/questions/78727/symme-try-this-triangle-trial
|
# Symme-Try This Triangle Trial
A string whose length is a positive triangular number (1, 3, 6, 10, 15...) can be arranged into an "equilateral text triangle" by adding some spaces and newlines (and keeping it in the same reading order).
For example, the length 10 string ABCDEFGHIJ becomes:
A
B C
D E F
G H I J
Write a program or function that takes in such a string, except it will only contains the characters 0 and 1. (You may assume the input is valid.)
For the resulting "equilateral text triangle", output (print or return) one of four numbers that denotes the type of symmetry exhibited:
• Output 2 if the triangle has bilateral symmetry. i.e. it has a line of symmetry from any one corner to the opposite side's midpoint.
Examples:
0
1 1
1
0 1
0
0 1
0 1 0
1
1 1
1 0 1
0 1 1 1
• Output 3 if the triangle has rotational symmetry. i.e. it could be rotated 120° with no visual change.
Examples:
0
1 0
0 1 1
0 1 0 0
0
0 1
1 0 0
0 0 1 0
1
0 1
1 1 1
1 1 1 0
1 0 1 1 1
1
0 1
0 0 1
1 0 0 0
1 0 0 0 0
1 0 0 1 1 1
• Output 6 if the triangle has both bilateral and rotational symmetry. i.e. it matches the conditions for outputting both 2 and 3.
Examples:
0
1
0
0 0
1
0 0
1 0 1
0
0 0
0 1 0
0 0 0 0
• Output 1 if the triangle has neither bilateral nor rotational symmetry.
Examples:
1
1 0
0 0 0
0
0 1
1 0 1
1
1 0
1 1 1
1 1 1 1
1
1 1
1 1 1
0 0 0 1
1 1 1 1 1
The shortest code in bytes wins. Tiebreaker is earlier answer.
Aside from an optional trailing newline, the input string may not have space/newline padding or structure - it should be plain 0's and 1's.
If desired you may use any two distinct printable ASCII characters in place of 0 and 1.
# Test Cases
Taken direct from examples.
011 -> 2
101 -> 2
001010 -> 2
1111010111 -> 2
0100110100 -> 3
0011000010 -> 3
101111111010111 -> 3
101001100010000100111 -> 3
0 -> 6
1 -> 6
000 -> 6
100101 -> 6
0000100000 -> 6
110000 -> 1
001101 -> 1
1101111111 -> 1
111111000111111 -> 1
"Rotating" any input by 120° will of course result in the same output.
• That title is just painful...... – Rɪᴋᴇʀ Apr 27 '16 at 15:28
• @EᴀsᴛᴇʀʟʏIʀᴋ Just tri to ignore it. – Calvin's Hobbies Apr 27 '16 at 15:31
• @HelkaHomba Why... why... – clismique Jul 10 '16 at 18:52
## CJam, 372928 27 bytes
Thanks to Sp3000 for saving 3 bytes.
q{T):T/(\s}h]{z_Wf%_}3*])e=
Test suite.
This reuses some triangle rotation tricks from this challenge.
This also works for the same byte count:
q{T):T/(\s}h]3{;z_Wf%_}%)e=
### Explanation
First, a quick recap from the triangle post I linked to above. We represent a triangle as a 2D (ragged) list, e.g.
[[0 1 1]
[0 0]
[0]]
The symmetry group of the triangle has 6 elements. There are cycles of length 3 by rotating the triangle and cycles of 2 by mirroring it along some axis. Conveniently, the rotations correspond to perform to two different reflections. We will use the following reflections to do this:
1. Transpose the list means reflecting it along the main diagonal, so we'd get:
[[0 0 0]
[1 0]
[1]]
2. Reversing each row represents a reflection which swaps the top two corners. Applying this to the result of the transposition we get:
[[0 0 0]
[0 1]
[1]]
Using these two transformations, and keeping the intermediate result, we can generate all six symmetries of the input.
A further point of note is the behaviour of transposition on a list like this:
[[0]
[1 0]
[1 0 0]
[]]
Because that's what we'll end up with after splitting up the input. Conveniently, after transposing, CJam flushes all lines to the left, which means this actually gets rid of the extraneous [] and brings it into a form that's useful for the above two transformations (all without changing the actual layout of the triangle beyond reflectional symmetry):
[[0 1 1]
[0 0]
[0]]
With that out of the way, here's the code:
q e# Read input.
{ e# While the input string isn't empty yet...
T):T e# Increment T (initially 0) and store it back in T.
/ e# Split input into chunks of that size.
( e# Pull off the first chunk.
\s e# Swap with remaining chunks and join them back together
e# into a single string.
}h
] e# The stack now has chunks of increasing length and an empty string
e# as I mentioned above. Wrap all of that in an array.
{ e# Execute this block 3 times...
z_ e# Transpose and duplicate. Remember that on the first iteration
e# this gets us a triangle of the desired form and on subsequent
|
2020-10-22 04:21:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33065521717071533, "perplexity": 1185.7365568807968}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107878879.33/warc/CC-MAIN-20201022024236-20201022054236-00682.warc.gz"}
|
https://lit.lhsmathcs.org/fallingdominos
|
Falling Dominos
# D. Falling Dominos
#### Problem Statement
Tiger is bored in Health class so he is playing with $$n$$ dominos arranged in a straight line — the $$i$$-th domino has position $$x_i$$ and height $$h_i$$. Tiger is clumsy (unless he is only pretending to be) so he wondered, for each $$i$$ $$(1 \leq i \leq n)$$, how many dominos would fall if he were to knock over the $$i$$-th domino?
Note that Tiger is knocking over the dominos to the right, and if the $$i$$-th domino has been knocked over, it will knock over the $$j$$-th domino if $$x_i < x_j$$ and $$x_i + h_i \geq x_j$$ (and the $$j$$-th domino will fall to the same side as the $$i$$-th domino).
Time Limit:
C++: 2.5 seconds
Java, Python: 5 seconds
Memory Limit: 256mb
#### Constraints
$$1 \leq n \leq 2 \cdot 10^5$$
$$1 \leq x_i, h_i \leq 10^9$$
$$x_i < x_{i + 1}$$ for all $$1 \leq i < n$$
#### Input Format
The first line has one integer $$n$$
The next $$n$$ lines have two integers $$x_i$$ and $$h_i$$
It is guaranteed that the positions of the dominos are distinct.
#### Output Format
Output $$n$$ integers. For each $$i$$ $$(1 \leq i \leq n)$$ output the number of dominos that would fall if Tiger knocked over the $$i$$-th domino.
5
1 3
2 2
3 2
5 1
7 3
4 3 2 1 1
#### Sample Explanation
If you knock over the second domino, it reaches the third domino at $$x = 3$$, which then knocks over the fourth domino at $$x = 5$$. Thus the total number is $$3$$
Credit: Eggag
|
2022-08-12 12:45:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.867664098739624, "perplexity": 1515.8955193803185}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571692.3/warc/CC-MAIN-20220812105810-20220812135810-00501.warc.gz"}
|
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=ufa&paperid=454&option_lang=eng
|
Ufimskii Matematicheskii Zhurnal
RUS ENG JOURNALS PEOPLE ORGANISATIONS CONFERENCES SEMINARS VIDEO LIBRARY PACKAGE AMSBIB
General information Latest issue Archive Impact factor Search papers Search references RSS Latest issue Current issues Archive issues What is RSS
Ufimsk. Mat. Zh.: Year: Volume: Issue: Page: Find
Ufimsk. Mat. Zh., 2018, Volume 10, Issue 4, Pages 123–129 (Mi ufa454)
On inverse spectral problem and generalized Sturm nodal theorem for nonlinear boundary value problems
Ya. Il'yasovab, N. Valeevca
a Institute of Mathematics, Ufa Federal Research Center, RAS, 450008, Ufa, Russia
b Instituto de Matemática e Estatística, Universidade Federal de Goiás, 74001-970, Goiania, Brazil
c Bashkir State University, 450076, Ufa, Russia
Abstract: In the present paper, we are concerned with the Sturm–Liouville operator
$$\mathcal{L}[q] u:=-u"+q(x)u$$
subject to the separated boundary conditions. We suppose that $q \in L^2(0,\pi)$ and study a so-called inverse optimization spectral problem: given a potential $q_0$ and a value $\lambda_k$, where $k=1,2,…$, find a potential $\hat{q}$ closest to $q_0$ in the norm of $L^2(0,\pi)$ such that the value $\lambda_k$ coincides with $k$-th eigenvalue $\lambda_k(\hat{q})$ of the operator $\mathcal{L}[\hat{q}]$.
In the main result, we prove that this problem is related to the existence of a solution to a boundary value problem for the nonlinear equation
$$-u"+q_0(x) u=\lambda_k u+\sigma u^3$$
with $\sigma=1$ or $\sigma=-1$. This implies that the minimizing solution of the inverse optimization spectral problem can be obtained by solving the corresponding nonlinear boundary value problem. On the other hand, this relationship allows us to establish an explicit formula for the solution to the nonlinear equation by finding the minimizer of the corresponding inverse optimization spectral problem. As a consequence of this result, a new method of proving the generalized Sturm nodal theorem for the nonlinear boundary value problems is obtained.
Keywords: Sturm–Liouville operator, inverse optimization spectral problem, nodal theorem for the nonlinear boundary value problems.
Funding Agency Grant Number Russian Foundation for Basic Research 18-51-06002_Az_a The second author was partially supported by RFBR grant no. 18-51-06002 Az-a.
Full text: PDF file (430 kB)
References: PDF file HTML file
English version:
Ufa Mathematical Journal, 2018, 10:4, 122–128 (PDF, 344 kB); https://doi.org/10.13108/2018-10-4-122
Bibliographic databases:
UDC: 517.9
MSC: 34L05, 34L30, 34A55
Language:
Citation: Ya. Il'yasov, N. Valeev, “On inverse spectral problem and generalized Sturm nodal theorem for nonlinear boundary value problems”, Ufimsk. Mat. Zh., 10:4 (2018), 123–129; Ufa Math. J., 10:4 (2018), 122–128
Citation in format AMSBIB
\Bibitem{IlyVal18} \by Ya.~Il'yasov, N.~Valeev \paper On inverse spectral problem and generalized Sturm nodal theorem for nonlinear boundary value problems \jour Ufimsk. Mat. Zh. \yr 2018 \vol 10 \issue 4 \pages 123--129 \mathnet{http://mi.mathnet.ru/ufa454} \transl \jour Ufa Math. J. \yr 2018 \vol 10 \issue 4 \pages 122--128 \crossref{https://doi.org/10.13108/2018-10-4-122} \isi{http://gateway.isiknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&DestLinkType=FullRecord&DestApp=ALL_WOS&KeyUT=000457367000012} \scopus{https://www.scopus.com/record/display.url?origin=inward&eid=2-s2.0-85073684171}
• http://mi.mathnet.ru/eng/ufa454
• http://mi.mathnet.ru/eng/ufa/v10/i4/p123
SHARE:
Citing articles on Google Scholar: Russian citations, English citations
Related articles on Google Scholar: Russian articles, English articles
This publication is cited in the following articles:
1. N. F. Valeev, Y. Sh. Ilyasov, “Inverse spectral problem for Sturm–Liouville operator with prescribed partial trace”, Ufa Math. J., 12:4 (2020), 19–29
• Number of views: This page: 144 Full text: 57 References: 19
|
2021-10-18 11:09:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4334401786327362, "perplexity": 3393.338477713926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585201.94/warc/CC-MAIN-20211018093606-20211018123606-00409.warc.gz"}
|
http://gmatclub.com/forum/calling-all-berkeley-haas-fall-2009-applicants-65339-580.html?kudos=1
|
Find all School-related info fast with the new School-Specific MBA Forum
It is currently 29 Jul 2015, 21:02
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# Calling all Berkeley-Haas Fall 2009 Applicants
Author Message
TAGS:
GMAT Club Legend
Status: Um... what do you want to know?
Joined: 03 Jun 2007
Posts: 5464
Location: SF, CA, USA
Schools: UC Berkeley Haas School of Business MBA 2010
WE 1: Social Gaming
Followers: 68
Kudos [?]: 365 [0], given: 14
Re: Calling all Berkeley-Haas Fall 2009 Applicants [#permalink] 17 Mar 2009, 22:15
Sorry to hear about the ding, rjacobs, liubhs, YihWei, and Smokedpotatoes.
For W/L, definitely set up your interview between now and R3 deadline (end of April) so you can be considered after R3 deadlines come out. Strengthen your application using the methods in your W/L letter as much as possible, and indicate your interest to stay on the list. You could possibly retake the GMAT, but try to do it early April so you can send your scores in.
Best times to visit campus are M-Th. I would recommend a morning class, lunch with students, infosession, and an interview, or interview, lunch with students, infosession, and a class. Contact club leaders for clubs you're interested in. Update your job status if there are changes, take a class in quant if your GPA is low, add 1-2 recommendations, etc... I think those are all great ways to show interest.
Let me know if you're coming on campus. Maybe I can meet with you if my schedule allows (Wed-Thur are better for me).
Good luck!
_________________
****************************
GMAT Club Knowledge Vault:
http://gmatclub.com/forum/123
http://gmatclub.com/forum/128-t62555
Kryzak's Profile:
http://gmatclub.com/forum/111-t56286
Member Essays:
http://gmatclub.com/forum/103-t50969
Kaplan Promo Code Knewton GMAT Discount Codes Manhattan GMAT Discount Codes
Current Student
Joined: 21 Nov 2008
Posts: 108
Location: Barcelona
Schools: Tuck
Followers: 2
Kudos [?]: 3 [0], given: 0
Re: Calling all Berkeley-Haas Fall 2009 Applicants [#permalink] 17 Mar 2009, 23:06
Ding for me as well! I sent my degree certificates and they received it last monday. I hope that didn't hurt my chances. Well, the aplication process for me has almost reached to an end.
Intern
Joined: 14 Jan 2009
Posts: 7
Followers: 0
Kudos [?]: 0 [0], given: 0
Re: Calling all Berkeley-Haas Fall 2009 Applicants [#permalink] 18 Mar 2009, 12:26
kryzak,
I have a 680 (89%) on my Gmat with 86% in quant and 76% in verbal. I am placed on the waitlist. I know this is lower than the Haas average of 714. Do you think it would be worth the effort to retake the test?
Since my verbal score was lower would it be beneficial to supplement my application with a class?
GMAT Club Legend
Status: Um... what do you want to know?
Joined: 03 Jun 2007
Posts: 5464
Location: SF, CA, USA
Schools: UC Berkeley Haas School of Business MBA 2010
WE 1: Social Gaming
Followers: 68
Kudos [?]: 365 [0], given: 14
Re: Calling all Berkeley-Haas Fall 2009 Applicants [#permalink] 18 Mar 2009, 21:51
m2009 wrote:
kryzak,
I have a 680 (89%) on my Gmat with 86% in quant and 76% in verbal. I am placed on the waitlist. I know this is lower than the Haas average of 714. Do you think it would be worth the effort to retake the test?
Since my verbal score was lower would it be beneficial to supplement my application with a class?
hi m2009,
That is something that I don't really have an answer for. If you feel that you can improve your score to the average, then I would say go for it. Otherwise, I would think long and hard about what part of your application is the weakest and address that first. Sorry I can't give more advice since I don't really know your full application at all.
_________________
****************************
GMAT Club Knowledge Vault:
http://gmatclub.com/forum/123
http://gmatclub.com/forum/128-t62555
Kryzak's Profile:
http://gmatclub.com/forum/111-t56286
Member Essays:
http://gmatclub.com/forum/103-t50969
Intern
Joined: 21 Mar 2008
Posts: 36
Schools: Haas R2
Followers: 0
Kudos [?]: 6 [0], given: 0
Re: Calling all Berkeley-Haas Fall 2009 Applicants [#permalink] 19 Mar 2009, 11:30
Ding for me as well. Congratulations to everyone that got in!
Current Student
Joined: 11 Oct 2008
Posts: 24
Schools: Kellogg
Followers: 0
Kudos [?]: 1 [0], given: 4
Re: Calling all Berkeley-Haas Fall 2009 Applicants [#permalink] 19 Mar 2009, 12:09
Dinged as well. Good luck to those on the wait-list.
Current Student
Joined: 24 Jun 2008
Posts: 69
Location: Bay Area, CA
Schools: Cornell '11
Followers: 2
Kudos [?]: 0 [0], given: 0
Re: Calling all Berkeley-Haas Fall 2009 Applicants [#permalink] 19 Mar 2009, 13:33
vijapp wrote:
Dinged as well. Good luck to those on the wait-list.
Somewhat surprising...in at Kellogg, WL at Wharton, but dinged at Haas?
Intern
Joined: 11 Nov 2008
Posts: 13
Location: Toronto
Schools: haas, stanford, sloan
Followers: 0
Kudos [?]: 0 [0], given: 0
Re: Calling all Berkeley-Haas Fall 2009 Applicants [#permalink] 19 Mar 2009, 15:13
just when i had given up all hope, i got WL! i was already thinking about what my re-app would look like for next year.
congrats to all the admits. condolences to the dings. i know the ding pain all too well!
to the other WLs, good luck to you all! really work hard to strengthen those apps and i'll see you on wednesday's WL chat.
kryzak, thanks for all the tips. if you want to update me on the list, i submitted 12/9 and got WL notification on tuesday.
my WL game plan is to write a 700 word ish statement on the new developments in my professional life. unfortunately after taking a break from app season, i am only now getting into doing extracurriculars, so not much to talk about there. i have an interview coming up, and i will ask for one recommendation. i've already reached out to haas students and visited campus last month so no plans for that right now, but i can tell you haas students have been super supportive and helpful. i can't recommend this to you all enough.
anyone have thoughts on if it makes a difference sending in these things super soon as opposed to towards the r3 deadline?
Senior Manager
Joined: 24 Feb 2008
Posts: 349
Schools: UCSD ($) , UCLA, USC ($), Stanford
Followers: 146
Kudos [?]: 1929 [0], given: 2
Re: Calling all Berkeley-Haas Fall 2009 Applicants [#permalink] 19 Mar 2009, 15:16
glettian wrote:
vijapp wrote:
Dinged as well. Good luck to those on the wait-list.
Somewhat surprising...in at Kellogg, WL at Wharton, but dinged at Haas?
Why surprizing? If you read the admission stats of previous years you will see that Haas is more selective than either of those two.
_________________
Best AWA guide here: how-to-get-6-0-awa-my-guide-64327.html
Current Student
Joined: 21 Aug 2008
Posts: 348
Schools: Fuqua '11
Followers: 5
Kudos [?]: 38 [0], given: 0
Re: Calling all Berkeley-Haas Fall 2009 Applicants [#permalink] 19 Mar 2009, 15:22
glettian wrote:
vijapp wrote:
Dinged as well. Good luck to those on the wait-list.
Somewhat surprising...in at Kellogg, WL at Wharton, but dinged at Haas?
You've got to factor in that Haas' class size is 1/4 that of Wharton's and 1/3 that of Kellogg's, and not surprisingly has a smaller acceptance rate than either. It's also a little more GPA intensive than most other schools, even the M7s.
Manager
Joined: 01 Apr 2006
Posts: 184
Followers: 1
Kudos [?]: 22 [0], given: 0
Re: Calling all Berkeley-Haas Fall 2009 Applicants [#permalink] 19 Mar 2009, 15:54
bennymoto wrote:
just when i had given up all hope, i got WL! i was already thinking about what my re-app would look like for next year.
congrats to all the admits. condolences to the dings. i know the ding pain all too well!
to the other WLs, good luck to you all! really work hard to strengthen those apps and i'll see you on wednesday's WL chat.
kryzak, thanks for all the tips. if you want to update me on the list, i submitted 12/9 and got WL notification on tuesday.
my WL game plan is to write a 700 word ish statement on the new developments in my professional life. unfortunately after taking a break from app season, i am only now getting into doing extracurriculars, so not much to talk about there. i have an interview coming up, and i will ask for one recommendation. i've already reached out to haas students and visited campus last month so no plans for that right now, but i can tell you haas students have been super supportive and helpful. i can't recommend this to you all enough.
anyone have thoughts on if it makes a difference sending in these things super soon as opposed to towards the r3 deadline?
I'm in the same W/L boat as you. Will be going in for my interview in early April... I'm pretty much going to be doing what you've suggested and plus a gmat retake. When I spoke to the adcom office today, she said that everything submitted before the r3 deadline should be sufficient. I asked if I submitted things as I got them ready and she seemed to think that was fine.
Does anyone know the timing of how adcom generally reviews W/Ls? Would they only look at the W/L pool starting April 28th? I'm assuming after that point, most of us R2 W/Ls should find out fate by end of May? I'm particularly interested in the timing b/c I may enroll in a calc course but the spring course doesn't even start until mid-May and ends late June at the earliest.
If anyone can shed some light, that'll be appreciated.
GMAT Club Legend
Status: Um... what do you want to know?
Joined: 03 Jun 2007
Posts: 5464
Location: SF, CA, USA
Schools: UC Berkeley Haas School of Business MBA 2010
WE 1: Social Gaming
Followers: 68
Kudos [?]: 365 [0], given: 14
Re: Calling all Berkeley-Haas Fall 2009 Applicants [#permalink] 19 Mar 2009, 18:54
sorry to hear about the dings...
as for waitlist questions, I'm guessing R3 will be considered around end of April/early May. That's just my guess though.
_________________
****************************
GMAT Club Knowledge Vault:
http://gmatclub.com/forum/123
http://gmatclub.com/forum/128-t62555
Kryzak's Profile:
http://gmatclub.com/forum/111-t56286
Member Essays:
http://gmatclub.com/forum/103-t50969
GMAT Club Legend
Status: Um... what do you want to know?
Joined: 03 Jun 2007
Posts: 5464
Location: SF, CA, USA
Schools: UC Berkeley Haas School of Business MBA 2010
WE 1: Social Gaming
Followers: 68
Kudos [?]: 365 [0], given: 14
Re: Calling all Berkeley-Haas Fall 2009 Applicants [#permalink] 20 Mar 2009, 09:22
let me know if you're coming to Days at Haas! Would love to chat with you (if I haven't already) or hang out.
_________________
****************************
GMAT Club Knowledge Vault:
http://gmatclub.com/forum/123
http://gmatclub.com/forum/128-t62555
Kryzak's Profile:
http://gmatclub.com/forum/111-t56286
Member Essays:
http://gmatclub.com/forum/103-t50969
Current Student
Joined: 02 Feb 2009
Posts: 34
Location: Washington, DC
Schools: Haas Harvard Sloan Yale
Followers: 0
Kudos [?]: 0 [0], given: 0
Re: Calling all Berkeley-Haas Fall 2009 Applicants [#permalink] 20 Mar 2009, 09:34
kryzak wrote:
let me know if you're coming to Days at Haas! Would love to chat with you (if I haven't already) or hang out.
Hey Kryzak-
No word yet on R3! But, just in case, when is the second Days at Haas weekend?
Current Student
Joined: 24 Jun 2008
Posts: 69
Location: Bay Area, CA
Schools: Cornell '11
Followers: 2
Kudos [?]: 0 [0], given: 0
Re: Calling all Berkeley-Haas Fall 2009 Applicants [#permalink] 20 Mar 2009, 10:45
kryzak wrote:
let me know if you're coming to Days at Haas! Would love to chat with you (if I haven't already) or hang out.
Fingers crossed on getting in off the W/L Kryzak! IF I do, get ready for another poll
GMAT Club Legend
Status: Um... what do you want to know?
Joined: 03 Jun 2007
Posts: 5464
Location: SF, CA, USA
Schools: UC Berkeley Haas School of Business MBA 2010
WE 1: Social Gaming
Followers: 68
Kudos [?]: 365 [0], given: 14
Re: Calling all Berkeley-Haas Fall 2009 Applicants [#permalink] 20 Mar 2009, 11:48
good luck glettian and lasttotheparty!
DAH II is 4/30-5/2.
_________________
****************************
GMAT Club Knowledge Vault:
http://gmatclub.com/forum/123
http://gmatclub.com/forum/128-t62555
Kryzak's Profile:
http://gmatclub.com/forum/111-t56286
Member Essays:
http://gmatclub.com/forum/103-t50969
Current Student
Joined: 24 Oct 2008
Posts: 67
Location: Maryland, USA
Schools: Ross, Cornell ($), Kellogg, Wharton, Stanford, Berkeley ($$) Followers: 1 Kudos [?]: 4 [0], given: 0 Re: Calling all Berkeley-Haas Fall 2009 Applicants [#permalink] 23 Mar 2009, 09:50 For all the R3 applicants, someone on BW forum reported an interview invite. Kryzak - can you add me to the R3 list? thank you GMAT Club Legend Status: Um... what do you want to know? Joined: 03 Jun 2007 Posts: 5464 Location: SF, CA, USA Schools: UC Berkeley Haas School of Business MBA 2010 WE 1: Social Gaming Followers: 68 Kudos [?]: 365 [0], given: 14 Re: Calling all Berkeley-Haas Fall 2009 Applicants [#permalink] 23 Mar 2009, 14:42 chitchat wrote: For all the R3 applicants, someone on BW forum reported an interview invite. Kryzak - can you add me to the R3 list? thank you Yup, here's the current list. R3 - 6 FT puipui lasttotheparty jawbreaker sajal09 hd54321 chitchat _________________ **************************** GMAT Club Knowledge Vault: http://gmatclub.com/forum/123 Haas Ambassador http://gmatclub.com/forum/128-t62555 Kryzak's Profile: http://gmatclub.com/forum/111-t56286 Member Essays: http://gmatclub.com/forum/103-t50969 Manager Joined: 28 May 2006 Posts: 152 Location: New York, NY Followers: 2 Kudos [?]: 8 [0], given: 0 Re: Calling all Berkeley-Haas Fall 2009 Applicants [#permalink] 25 Mar 2009, 11:17 So I participated in today's waitlist chat. The adcom recommended people to submit additional materials by 4/24. I just scheduled an off-campus interview this week. Good luck everyone! Current Student Joined: 24 Jun 2008 Posts: 69 Location: Bay Area, CA Schools: Cornell '11 Followers: 2 Kudos [?]: 0 [0], given: 0 Re: Calling all Berkeley-Haas Fall 2009 Applicants [#permalink] 26 Mar 2009, 15:58 Waitlisted yet again. It's so frustrating to not be able to get over the hump, even after interviewing, sending in 2 add'l recs, and sending in a waitlist update. If this were any other year other than this year with such a high app volume, I'm sure I'd be in by now Cornell w/$30K will have to suffice!
Re: Calling all Berkeley-Haas Fall 2009 Applicants [#permalink] 26 Mar 2009, 15:58
Go to page Previous 1 ... 21 22 23 24 25 26 27 28 29 30 31 32 Next [ 633 posts ]
Similar topics Replies Last post
Similar
Topics:
82 Calling all Berkeley-Haas Fall 2011 Applicants 1830 09 May 2010, 16:16
43 Calling all Berkeley-Haas Fall 2010 Applicants 1236 13 May 2009, 00:23
Calling all Berkeley Haas EWMBA 2009 applicants! 5 30 Mar 2009, 11:50
1 Calling all Indiana Kelley Fall 2009 Applicants 25 11 Dec 2008, 11:25
10 Calling Berkeley-Haas Fall 2008 Applicants! 764 14 Jul 2007, 11:38
Display posts from previous: Sort by
|
2015-07-30 05:02:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20518139004707336, "perplexity": 12681.412219972417}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042987127.36/warc/CC-MAIN-20150728002307-00109-ip-10-236-191-2.ec2.internal.warc.gz"}
|
https://www.mindmeld.com/docs/userguide/tokenizer.html
|
# Working with the Tokenizer¶
MindMeld provides the ability to customize and configure the default tokenizer for your application.
The settings for tokenizer can be defined in your application configuration file, config.py. The configuration must be defined as a dictionary with the name TOKENIZER_CONFIG to override the tokenizer's default settings. If no custom configuration is defined, the default is used.
## Anatomy of the tokenizer configuration¶
The configuration currently has one section: Allowed Patterns.
Allowed Patterns - Enables defining your custom regular expression patterns in the form of a list of different patterns or combinations. This list is combined and compiled internally by MindMeld and the resulting pattern is applied for filtering out the characters from the user input queries. For eg.
TOKENIZER_CONFIG = {
"allowed_patterns": ['\w+'],
}
will allow the system to capture alphanumeric strings and
TOKENIZER_CONFIG = {
"allowed_patterns": ['(\w+\.)$', '(\w+\?)$'],
}
allows the system to capture only tokens that end with either a question mark or a period.
## Default Tokenizer Configuration¶
As a default in MindMeld, the Tokenizer retains the following special characters in addition to alphanumeric characters and spaces:
1. All currency symbols in UNICODE.
2. Entity annotation symbols {, }, |.
3. Decimal point in numeric values (e.g. 124.45).
4. Apostrophe within tokens, such as O'Reilly. Apostrophes at the beginning/end of tokens are removed, say Dennis' or 'Tis.
Setting argument keep_special_chars=False in the Tokenizer would remove all special characters.
|
2020-10-25 04:54:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18532253801822662, "perplexity": 5572.171103209645}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107887810.47/warc/CC-MAIN-20201025041701-20201025071701-00577.warc.gz"}
|
https://codereview.stackexchange.com/questions/86041/calculator-in-ruby
|
# Calculator in Ruby
I have created a calculator in Ruby. I am using Ruby 2.1.0. I'm fairly sure that someone will be able to improve this, as I am quite new to Ruby.
puts "Welcome to Calc"
puts ""
puts "Please enter the first number"
n1 = gets.to_i()
puts ""
puts "Please enter the second number"
n2 = gets.to_i()
puts ""
subtract = n1 - n2
multiply = n1 * n2
divide = n1 / n2
power = n1 ** n2
sqrt1 = Math.sqrt(n1)
sqrt2 = Math.sqrt(n2)
puts "#{n1} + #{n2} = #{add}"
puts "#{n1} - #{n2} = #{subtract}"
puts "#{n1} * #{n2} = #{multiply}"
puts "#{n1} / #{n2} = #{divide}"
puts "#{n1} ** #{n2} = #{power}"
puts "#{n1} √ = #{sqrt1}"
puts "#{n2} √ = #{sqrt2}"
gets()
One thing you forgot to check was if n2 is 0 and n1 is non zero, in which case the answer is either undefined or a signed infinity.
In any case, in the current version, it will just output a divide by zero error with a stack trace, which isn't very user friendly. I would advise changing the message by catching the error and putting something else.
• Thanks. This was useful. It is now set so that before running the code on line 12 and onwards, it checks the numbers, using 'if n1 != 0 and n2 == 0'. And outputs an error message if they are. – user69731 Apr 6 '15 at 17:20
• In addition: divide on integers is a modulo division. If it is intented to use the modulo division, then I would expect both values using divmod. Or you have to convert one value to float value. – knut Apr 6 '15 at 18:02
You are doing integer division, which is probably not the expected behaviour for a calculator. For that matter, it makes little sense to restrict the inputs to integers, as most calculators are able to handle arbitrary decimal values.
|
2020-04-01 22:05:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4688703119754791, "perplexity": 2360.0896013542924}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506121.24/warc/CC-MAIN-20200401192839-20200401222839-00403.warc.gz"}
|
https://zulfahmed.wordpress.com/2015/05/25/probabilistic-models-of-financial-volatility-are-similar-to-internet-scale-free-networks-but-how-different/
|
Feeds:
Posts
## Probabilistic models of financial volatility are similar to internet scale-free networks but how different?
A wonderful paper of Patrick Wolfe talks about null models in the study of internet. Recently we’ve shown that for financial volatility with 188 stocks from Standard & Poors 500 Index, the graph has 5921 edges on 188 nodes and a power law with $\alpha=4.027$. So the volatility graph in this case is like the internet network graph in the power law behavior which Barbasi had named ‘scale-free’. But graph theoretic modeling generally assumes that nodes are permanent. This assumption is incorrect in finance since Taleb’s main argument in his crusades against standard models is that Black Swan is big and wipes out nodes. So it is not enough for us to claim a real science of finance if we do not heed Taleb’s main warning to the world. We need models of volatility with AIG surviving and even a giant slave-worker firm like Lehman Brothers that dealt in slaves from early nineteenth century cotton fields of the American south to disappear like the twin towers destroyed in a war that is global capitalism.
Graphs with a given degree sequence is called ‘micro-canonical’ when graphs are given a uniform probability based only on their degree sequence and the exponential distribution is called ‘canonical’ in statistical mechanics. So the Wolfe apporach is based on a paradigm of active research interest. This is Diaconis from 2011 in the screenshot.
This tells us not to worry too much about this degree sequence aspect since we’ll get plenty of good theory from activity by statisticians in this direction. We should worry instead about ensuring that we have the NODE EXISTENCE problem resolved first and then piggyback off these people’s work to found a new science of finance.
|
2017-06-26 15:40:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5470953583717346, "perplexity": 2047.9094285781975}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320823.40/warc/CC-MAIN-20170626152050-20170626172050-00191.warc.gz"}
|
https://tex.stackexchange.com/questions/457524/how-can-i-add-more-one-abstract-and-one-more-keywords-section-to-the-acm-templ
|
# How can I add more one abstract and one more keywords “section” to the ACM template
I'm writing an article for a conference in Brazil that uses the latex template from ACM format (ACM Conference Proceedings - New Master Template) and I wanted to know how can I add one more abstract and one more keywords "section"?
As I am writing for a Brazilian conference I must have "Resumo" (an abstract in Portuguese) and "Palavras-chave" (the keywords in Portuguese).
I tried to create them only using \begin{resumo} or \begin{palavrasChaves}, but it says that:
Environment resumo undefined
So I tried to modify the acmart.cls, to create these two sections ("Resumo" and "Palavras-chave"), but I couldn't make any progress...
I've also tried to find it in the acmart.pdf, but I couldn't find how to create new "sections".
The ACM template has the following instructions:
\documentclass[sigconf]{acmart}
\usepackage{booktabs}
\begin{document}
\title{SIG Proceedings Paper in LaTeX Format}
\begin{abstract}
This paper provides a sample of ACM SIG Proceedings.
\end{abstract}
\keywords{ACM proceedings, \LaTeX}
% Right Here I need the abstract in portuguese ("Resumo") and
% afterwards I need the keywords in portuguese ("Palavras-chave").
\maketitle
\end{document}
I have sent an e-mail to acm support but they haven't answered yet, but as it is quite urgent could you help me with that?
P.S: I am working with the Overleaf template.
acm-conference-proceedings-new-master-template - Overleaf
Thank you!
If you want to add another Abstract and Keywords section in other languages in your article you must add the following instructions:
\makeatletter
\newenvironment{otherlangabstract}{\Collect@Body\@saveaotherbstract}{}
\@saveaotherbstract{}
\makeatother
Just create both and you are ready to go...
\begin{otherlangabstract}
Resumo em português.
\end{otherlangabstract}
\portkeywords{Procedimentos ACM}
Welcome to LaTeX Stack Exchange. Before someone can help you with your question, it is preferable that you provide a minimum working example. You can refer here https://texfaq.org/FAQ-minxampl. Basically, flush out all the code that is unnecessary to your particular question. Thank you.
• Is it better? Or should I flush more out? – Codewraith Oct 30 '18 at 14:33
• It's good, but you could flush the bibliography, the copyright, and the \input{samplebody-conf} for instance. – Aulus.Persius.Flaccus Oct 30 '18 at 14:54
|
2019-10-20 03:32:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4615269601345062, "perplexity": 1823.9729397461822}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986702077.71/warc/CC-MAIN-20191020024805-20191020052305-00066.warc.gz"}
|
https://www.edaboard.com/threads/regarding-the-simulation-of-qam-on-rayleigh-fading-channel.215233/
|
# Regarding the simulation of QAM on Rayleigh fading channel
Status
Not open for further replies.
#### puripong
##### Member level 2
Hi,
I am very new in wireless systems
I have tried to simulate QAM on Rayleigh fading channel
This is a mathematical model of Rayleigh fading channel
y = hx + n
where
h = channel gain according to Rayleigh fading
x = complex transmit vector
n = complex AWGN noise
I generated h in two ways
-- First method --
G1 = 1/sqrt(2)*randn(1,1);
G2 = 1/sqrt(2)*randn(1,1);
h = ((G1.^2) + (G2.^2)).^(1/2);
which is the envelope of Rayleigh distribution
(obtained from 2 Gaussian random variable)
-- Second method --
h = 1/sqrt(2)*[ randn(1,1) + (i*randn(1,1)) ];
which is the complex Gaussian random variable
With both cases,
I obtained the same BER curves which are also agree with theoretical formula
But I'd like to ask "Are two methods identical to simulate Rayleigh fading channel ?"
:grin:
Sorry, I have mistyping in graph
black curve is theory
red curve is simulation
:lol:
#### Attachments
• 256QAM_Rayleigh.png
21.3 KB · Views: 45
#### mazdaspring
Could any expert reply to OP question please. I would like to know too. Thank you.
|
2022-09-26 16:03:11
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8311958312988281, "perplexity": 7343.279097983528}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00553.warc.gz"}
|
https://irreverentmind.wordpress.com/
|
## Black hole thermodynamics, quantum puzzles, and the holographic principle
I was asked to give a lecture on “quantum puzzles and black holes” at the 20th Jürgen Ehlers Spring School, which was to be hosted at AEI this week. Unfortunately the school was cancelled due to the SARS-CoV-2 pandemic, but since I enjoyed researching the topic so much, I thought I’d make a post of it instead. Part of what made preparing for this lecture so interesting is that the students — primarily undergrads bordering on Masters students — hadn’t had quantum field theory (QFT), which meant that if I wanted to elucidate, e.g., the firewall paradox or the thermal nature of horizons in general, I’d have to do so without recourse to the standard toolkit. And while there’s a limit to how far one can get without QFT in curved spacetime, it was nice to go back and revisit some of the things with which long familiarity has made me take for granted.
Accordingly, I’ve endeavored to make this post maximally pedagogical, assuming only basic general relativity (GR) and a semblance of familiarity with undergraduate quantum mechanics and statistical thermodynamics. I’ll start by introducing black hole thermodynamics, which leads to the conclusion that black holes have an entropy given by a quarter the area of their event horizons in Planck units. Then in the second section, I’ll discuss some quantum puzzles that arise in light of Hawking’s discovery that black holes radiate, which seems to imply that information is lost as they evaporate, in violation of quantum mechanics. In the third and final section, I’ll explain how the considerations herein gave rise to the holographic principle, one of the deepest revelations in physics to date, which states that the three-dimensional world we observe is described by a two-dimensional hologram.
1. Black hole thermodynamics
Classically, black hole thermodynamics is a formal analogy between black holes and statistical thermodynamics. It was originally put forth by Jacob Bekenstein in his landmark 1973 paper [1], in which he proposed treating black holes thermodynamically, and argued that the entropy should be proportional to the area of the event horizon. Let’s start be examining the idea of black holes as thermodynamic objects, and build up to the (in)famous entropy-area relation as we go.
As I’ve mentioned before, black holes must be endowed with an entropy in order to avoid violating the second law of thermodynamics; otherwise, one could decrease the entropy of the universe simply by dropping anything into the black hole. Taking entropy as a measure of our ignorance — equivalently, as a measure of the inaccessibility of the internal configuration — this is intuitive, since the degrees of freedom comprising whatever object one dropped in are now hidden behind the horizon and should thus be counted among the internal microstates of the black hole. Furthermore, one knows from Hawking’s area theorem [2] that the surface area of a classical black hole is non-decreasing, and thus the dynamics of black holes appears to select a preferred direction in time, analogous to the thermodynamic arrow of time consequent of the fact that entropy (of any closed thermodynamic system) always increases. This led Bekenstein [1] to propose that one could “develop a thermodynamics of black holes”, in which entropy is precisely related to the area of the horizon, ${S\sim A}$ (here “${\sim}$” means “proportional to”; we’ll fix the coefficient later).
Thermodynamically, entropy is an extensive property, so associating the entropy to some function of the size of the black hole makes sense. But why ${S\sim A}$, specifically? In statistical mechanics, the entropy generally scales with the volume of the system, so one might naïvely have expected ${S\sim V}$. Indeed, one of the most remarkable aspects of black holes is that the entropy scales with the area instead of the volume. Insofar as black holes represent the densest possible configuration of energy — and hence of information — this implies a drastic reduction in the (maximum) number of degrees of freedom in the universe, as I’ll discuss in more detail below. However, area laws for entanglement entropy are actually quite common; see for example [3] for a review. And while the ultimate source of black hole entropy (that is, the microscopic degrees of freedom it’s counting) is an ongoing topic of current research, the entanglement between the interior and exterior certainly plays an important role. But that’s a QFT calculation, whereas everything I’ve said so far is purely classical. Is there any way to see that the entropy must scale with ${A}$ instead of ${V}$, without resorting to QFT in curved space or the full gravitational path integral?
In fact, there’s a very simple reason the entropy must scale with the area: the interior volume of a black hole is ill-defined. Consider the Schwarzschild metric
$\displaystyle \mathrm{d} s^2=-f(r)\mathrm{d} t^2+\frac{\mathrm{d} r^2}{f(r)}+r^2\mathrm{d}\Omega^2~, \quad\quad f(r)=1-\frac{r_s}{r}~, \ \ \ \ \ (1)$
where ${r_s=2M}$ is the Schwarzschild radius and ${\mathrm{d}\Omega^2=\mathrm{d}\theta^2+\sin^2\!\theta\,\mathrm{d}\phi^2}$. For ${r>r_s}$, the metric is static: the spatial components look the same for any value of ${t}$. But inside the black hole, ${r, and hence ${f(r)<0}$. This makes the ${\mathrm{d} t^2}$ component positive and the ${\mathrm{d} r^2}$ component negative, so that space and time switch roles in the black hole interior. Consequently, the “spatial” components are no longer static inside the black hole, since they will continue to evolve with ${t}$. Thus the “volume” of the black hole interior depends on time, and in fact on one’s choice of coordinates in general. (This isn’t too strange, if you think about it: the lesson of general relativity is that spacetime is curved, so your quantification of “space” will generally depend on your choice of “time”).
The issue is clarified nicely in a paper by Christodoulou and Rovelli [4] (be warned however that while the GR calculations in this paper are totally solid, the discussion of entropy in section VIII is severely flawed). The crux of the matter is that our usual definition of “volume” doesn’t generalize to curved spacetime. In flat (Minkowski) spacetime, we define volume by picking a Cauchy slice, and consider the spacelike 3d hypersurface ${\Sigma}$ bounded by some 2d sphere ${B}$ on that slice. But when space is curved, there are many different constant-${t}$ slices we can choose, none of which has any special status (in GR, the coordinates don’t matter). Suppose for example we tried to calculate the interior volume in Schwarzschild coordinates (1). Our flat-space intuition says to pick a constant-${t}$ slice bounded by some surface ${B}$ (in this case, the horizon itself), and integrate over the enclosed hypersurface ${\Sigma}$:
$\displaystyle V=\int_\Sigma\!\mathrm{d}^3x\sqrt{g}~, \ \ \ \ \ (2)$
where ${g}$ is the determinant of the (induced) metric on ${\Sigma}$. Along a timeslice, ${\mathrm{d} t=0}$, so we have
$\displaystyle V=4\pi\int\!\mathrm{d} r\,r^2\left(1-\tfrac{r_s}{r}\right)^{-1/2}~. \ \ \ \ \ (3)$
But the Schwarzschild coordinates break-down at the horizon, so the upper and lower limits of the remaining integral are the same, ${r=r_s}$, and the integral vanishes. Thus the Schwarzschild metric would lead one to conclude that the “volume” of the black hole is zero! (Technically the integral is ill-defined at ${r=r_s}$, but one obtains the same result by changing the outer limit to ${r=r_s+\epsilon}$ and taking the limit ${\epsilon\rightarrow0}$ [5]).
Let’s try a different coordinate system, better suited to examining the interior. Define the new time variable
$\displaystyle T=t+r_s\left(2\sqrt{\frac{r}{r_s}}+\ln\left|\frac{\sqrt{\tfrac{r}{r_s}}-1}{\sqrt{\tfrac{r}{r_s}}+1}\right|\right)~, \ \ \ \ \ (4)$
in terms of which the metric (1) becomes
$\displaystyle \mathrm{d} s^2=-f(r)\mathrm{d} T^2+2\sqrt{\frac{r_s}{r}}\,\mathrm{d} T\mathrm{d} r+\mathrm{d} r^2+r^2\mathrm{d}\Omega^2~. \ \ \ \ \ (5)$
These are Gullstrand-Painlevé (GP) coordinates. They’re relatively unfamiliar, but have some useful properties; see for example [6], in which my colleagues and I utilized them in the context of the firewall paradox during my PhD days. Unlike the Schwarzschild coordinates, they cover both the exterior region and the black hole interior. They look like this:
where constant ${T}$ slices are in yellow, and constant ${r}$ slices are in green. One neat thing about these coordinates is that ${T}$ is the proper time of a free-falling observer who starts from rest at infinity. (Somewhat poetically, they’re the natural coordinates that would be associated to a falling drop of rain, and are sometimes called “rain-frame coordinates” for this reason). Another neat thing about them is that the constant-${T}$ slices are flat! Thus if we attempt to calculate the interior volume along one such Cauchy slice, we simply recover the flat-space result,
$\displaystyle V=4\pi\int_0^{r_s}\mathrm{d} r^2\,r^2=\frac{4}{3}\pi r_s^3~, \ \ \ \ \ (6)$
and thus the volume is constant, no matter what ${T}$-slice we choose; in other words, the observer can fall forever and never see less volume! See [5] for a pedagogical treatment of the volume calculation in some other coordinate systems, which again yield different results.
The above examples illustrate the fact that in general, there are many (in fact, infinitely many!) different choices of ${\Sigma}$ within the boundary sphere ${B}$, and we need a slightly more robust notion of volume to make sense in curved spacetime. As Christodoulou and Rovelli point out, a better definition for ${\Sigma}$ is the largest spherically symmetric surface bounded by ${B}$. This reduces to the familiar definition above in Minkowski space, but extends naturally and unambiguously to curved spacetime as well. Thus the basic idea is to first fix the boundary sphere ${B}$, and then extremize over all possible interior 3d hypersurfaces ${\Sigma}$. For a Schwarzschild black hole, in the limit where the null coordinate ${v=r+t}$ is much greater than ${M}$ (i.e., at late times), one finds [4]
$\displaystyle V\rightarrow3\sqrt{3}\pi m^2 v~. \ \ \ \ \ (7)$
Thus, the interior volume of a black hole continues to grow linearly for long times, and can even exceed the volume of the visible universe!
Whether one thinks of entropy as a measure of one’s ignorance of the interior given the known exterior state, or a quantification of all possible microstates given the constraints of mass (as well as charge and angular momentum for a Kerr-Newman black hole), it should not depend on the choice of coordinates, or continue growing indefinitely while the surface area (i.e., the boundary between the known and unknown regions) remains fixed. Thus if we want a sensible, covariant quantification of the size of the black hole, it must be the area. (Note that the area is more fundamental than the radius: the latter is defined in terms of the former (equivalently, in terms of the mass), rather than by measuring the distance from ${r\!=\!0}$, for the same reasons we encountered when attempting to define volume above). Since the event horizon is a null-surface, the area is coordinate-invariant; fixing ${t}$ and ${r}$ in the Schwarzschild metric then simply yields the area element of the 2-sphere,
$\displaystyle \mathrm{d} s^2=r^2\mathrm{d}\Omega^2 \quad\longrightarrow\quad A=\int\!\mathrm{d} s^2=\int\!\mathrm{d}\Omega\sqrt{g} =4\pi r_s^2~. \ \ \ \ \ (8)$
Thus areas, rather than volumes, provide the only covariant, well-defined measures of the spatial “size” of black holes.
Technically, this doesn’t prove that ${S\sim A}$, of course; it might logically have been some other function or power of the area, but this would be less natural on physical grounds (though ${S\sim A^{1/2}}$ can be easily ruled out by considering a black hole merger [1]). And, while it’s a nice consistency check on the universe, it doesn’t really give any insight into why the degrees of freedom are ultimately bounded by the surface area, beyond the necessity-of-curved-space argument above.
There is however one problem with this identification: the entropy, in natural units, is dimensionless, while the area has units of length squared, so the mismatch must be remedied by the hitherto undetermined proportionality factor. As Bekenstein pointed out, there is no universal constant in GR alone that has the correct units; the only fundamental constant that fits the bill is the Planck length,
$\displaystyle \ell_P=\frac{G\hbar}{c^3}~. \ \ \ \ \ (9)$
As Hawking was quick to show [7], the correct result is
$\displaystyle S_{BH}=\frac{A}{4\ell_P}~, \ \ \ \ \ (10)$
which is the celebrated Bekenstein-Hawking entropy of black holes. This is one of the most remarkable expressions in all of physics, insofar as it’s perhaps the only known example in which gravity (${G}$), quantum mechanics (${\hbar}$), and special relativity (${c}$) all come together (thermodynamics too, if you consider that we set ${k_B=1}$).
Hawking’s calculation, and the myriad alternative derivations put forward since, require a full QFT treatment, so I’m not going to go into them here. If you’re interested, I’ve covered one such derivation based on the gravitational path integral before, and the case of a collapsing black hole that Hawking considered is reviewed in the classic textbook [8]. In the original paper [1] however, Bekenstein provides a very cute derivation which barely even requires quantum mechanics, and yet gets surprisingly close to the right answer. The basic idea is to calculate the minimum possible increase in the size of the black hole which, classically, would occur when we gently drop in a particle whose size is of order its own Compton wavelength (this is where the ${\hbar}$ comes in). This can be related to the entropy on the basis that the loss of information is the entropy of a single bit, ${\ln 2}$, i.e., the answer to the yes-or-no question, “does the black hole contain the particle?” This line of reasoning yields ${\tfrac{1}{2\pi}\ln2\,S_{BH}}$; not bad, given that we ignored QFT entirely!
By now I hope I’ve convinced you of two facts: (1) black holes have an entropy, and (2) the entropy is given by the area of the horizon. This is the foundation on which black hole thermodynamics is built.
We are now in position to write down the analogue of the fundamental thermodynamic relation for black holes. Recall that for a closed system in equilibrium,
$\displaystyle \mathrm{d} U=T\mathrm{d} S-P\mathrm{d} V~, \ \ \ \ \ (11)$
where ${U}$ is the internal energy, ${P}$ is the pressure, and ${T}$, ${S}$, and ${V}$ are the temperature, entropy, and volume as above. The second term on the right-hand side represents the work done on the system by the environment (in this context, “closed” refers only to the transfer of mass or particles; the transfer of energy is still allowed). Supposing that this term is zero, the first term can be regarded as the definition of entropy for reversible processes, i.e., ${\mathrm{d} S=\mathrm{d} Q/T}$ where ${Q}$ is the heat.
Now consider, for generality, a charged, rotating black hole, described by the Kerr-Newman metric; in Boyer-Lindquist coordinates, this reads:
\displaystyle \begin{aligned} \mathrm{d} s^2=&~\frac{\Delta-a^2\sin^2\theta}{\rho^2}\,\mathrm{d} t^2 -\frac{\rho^2}{\Delta}\mathrm{d} r^2 -\rho^2\,\mathrm{d}\theta^2\\ &-\sin^2\!\theta\,\frac{\left( a^2+r^2\right)^2-a^2\Delta\sin^2\theta}{\rho^2}\,\mathrm{d}\phi^2 +\frac{2a\left( a^2+r^2-\Delta\right)\sin^2\theta}{\rho^2}\,\mathrm{d} t\mathrm{d}\phi~, \end{aligned} \ \ \ \ \ (12)
which reduces to the Schwarzschild black hole above when the charge ${Q}$ and angular momentum ${J}$ go to zero (after rescaling by the radius). For compactness, we have defined
$\displaystyle a\equiv\frac{J}{M}~,\quad \Delta\equiv r^2-2 M r+a^2+Q^2~,\quad \rho^2\equiv r^2+a^2\cos^2\theta~. \ \ \ \ \ (13)$
The ${g_{rr}}$ component diverges when ${\Delta=0}$, which yields an inner (${r_-}$) and outer (${r_+}$) horizon:
$\displaystyle r_\pm=M\pm\sqrt{M^2-a^2-Q^2}~. \ \ \ \ \ (14)$
The inner horizon is generally thought to be unstable, while the outer is the event horizon whose area we’re interested in calculating. Setting ${r}$ and ${t}$ to constant values as above, the induced metric on the resulting 2d surface ${B}$ is
$\displaystyle \mathrm{d} s^2= -\rho^2\,\mathrm{d}\theta^2 -\sin^2\!\theta\,\frac{\left( a^2+r^2\right)^2-a^2\Delta\sin^2\theta}{\rho^2}\,\mathrm{d}\phi^2 \ \ \ \ \ (15)$
We can then consider the case where the radius ${r=r_+}$, whereupon ${\Delta=0}$, and the area is simply
$\displaystyle A=\int\!\mathrm{d}\Omega\sqrt{g} =4\pi\left( r_+^2+a^2\right)~, \ \ \ \ \ (16)$
which is fairly intuitive: we get the Schwarzschild result, plus an additional contribution from the angular momentum.
Now, the area depends only on the mass ${M}$, charge ${Q}$, and angular momentum ${J}$ (cf. the no-hair theorem), so a generic perturbation takes the form
$\displaystyle \mathrm{d} A=\frac{\partial A}{\partial M}\,\mathrm{d} M+\frac{\partial A}{\partial Q}\,\mathrm{d} Q+\frac{\partial A}{\partial J}\,\mathrm{d} J~. \ \ \ \ \ (17)$
Performing the derivatives and solving for ${\mathrm{d} M}$, one obtains the analogue of the fundamental thermodynamic relation (11) for black holes:
$\displaystyle \mathrm{d} M=\frac{\kappa}{8\pi}\,\mathrm{d} A+\Omega\,\mathrm{d} J+\Phi\,\mathrm{d} Q~, \ \ \ \ \ (18)$
where ${\Omega}$ and ${\Phi}$ are some functions of ${M}$, ${Q}$, and ${J}$ that I’m not going to bother writing down, and ${\kappa}$ is the surface gravity,
$\displaystyle \kappa=\frac{r_+-r_-}{2\left( r_+^2+a^2\right)}=\frac{\sqrt{M^2-a^2-Q^2}}{2M^2-Q^2+2M\sqrt{M^2-a^2-Q^2}}~, \ \ \ \ \ (19)$
which is the (normalized) gravitational acceleration experienced at the equator. (“Normalized”, because the Newtonian acceleration diverges at the horizon, so a meaningful value is obtained by dividing the proper acceleration by the gravitational time dilation factor).
Each term in this expression for ${\mathrm{d} M}$ has a counterpart in (11). We already identified the area with the entropy, cf. (10), and since the mass is the only relevant parameter in the problem, it plays the role of the internal energy ${U}$. The surface gravity ${\kappa}$ corresponds to the temperature. So if we restricted to a Schwarzschild black hole, we’d have
$\displaystyle \mathrm{d} U=T\mathrm{d} S\qquad \longleftrightarrow\qquad \mathrm{d} M=\frac{\kappa}{4\pi}\mathrm{d} A~, \ \ \ \ \ (20)$
which just canonizes the relationship between entropy and area we uncovered above, with ${\kappa\sim T}$. What about the other terms? As mentioned above, the ${-P\mathrm{d} V}$ term in (11) corresponds to the work done to the system. And as it turns out, there’s a way of extracting energy from a (charged, rotating) black hole, known as the Penrose process. I don’t have the spacetime to go into this here, but the upshot is that the parameters ${\Omega}$ and ${\Phi}$ in (18) correspond to the rotational angular momentum and electric potential, respectively, so that ${\Omega\,\mathrm{d} J+\Phi\,\mathrm{d} Q}$ is indeed the analogue of the work that the black hole could perform on some external system; i.e.,
$\displaystyle -P\mathrm{d} V\qquad \longleftrightarrow\qquad \Omega\,\mathrm{d} J+\Phi\,\mathrm{d} Q~. \ \ \ \ \ (21)$
And of course, energy that can’t be extracted as work is another way of describing entropy, so if even if you could extract all the angular momentum and charge from the black hole, you’d still be left with what Bekenstein calls the “degradation energy” [1], which is the area term (20) (determined by the irreducible mass).
That’s all I wanted to say about black hole thermodynamics here, though the analogy we’ve established above can be fleshed out more thoroughly, complete with four “laws of black hole thermodynamics” in parallel to the classic set. See for example my earlier post on firewalls, or the review by Jacobson [9], for more details. However, I’ve been glossing over a critical fact, namely that at the classical level, black holes are, well, black: they don’t radiate, and hence a classical black hole has zero temperature. This is the reason I’ve been careful to refer to black hole thermodynamics as an analogy. Strictly speaking, one cannot regard the temperature ${T}$ as the physical temperature of a single black hole, but rather as referring to the equivalence class of all possible black holes subject to the same (observable) constraints of mass, charge, and angular momentum. In other words, the “temperature” of a Schwarzschild black hole is just a quantification of how the entropy — which measures the number of possible internal microstates — changes with respect to the mass, ${T^{-1}=\mathrm{d} S/\mathrm{d} M}$.
2. Quantum black holes
Of course, Hawking’s greatest claim to fame was the discovery [7] that when quantum field theory is properly taken into account, black holes aren’t black after all, but radiate with a temperature
$\displaystyle T=\frac{1}{8\pi M}~. \ \ \ \ \ (22)$
(This result is for a Schwarzschild black hole in thermal equilibrium, and is precisely what we obtain when taking ${Q=J=0}$ in the expression for the surface gravity ${\kappa}$ (19)). Hawking’s calculation, and many other derivations since, require the machinery of QFT, so I won’t go into the details here. There is however a cute hack for obtaining the identification (22), whereby one Wick rotates to Euclidean signature so that the ${(3\!+\!1)}$-dimensional Schwarzschild geometry becomes ${\mathbb{R}^3\times S^1}$, whereupon the temperature appears as a consequence of the periodicity in Euclidean time; see my first post for a sketch of the resulting “cigar geometry”, or my upcoming post on QFT in curved space for a more detailed discussion about the relationship between periodicity and horizons.
Hawking radiation is sometimes explained as the spontaneous fluctuation of a particle-antiparticle pair from the vacuum across the horizon; the particle escapes to infinity as Hawking radiation, while the antiparticle is captured by the black hole. This is a cute cartoon, except that it’s wrong, and an over-reliance on the resulting intuition can get one into trouble. I’ve already devoted an entire post to this issue, so I’ll refer you there if you’re interested; if you’ve got a QFT background, you can also find some discussion of the physical aspects of black hole emission in chapter eight of [8]. In a nutshell, the basic point is that radiation comes out in momentum-space modes with wavelength ${\lambda\sim r_s}$, which can’t be Fourier transformed back to position space to yield anything localized near the horizon. In other words, near the horizon of a black hole, the meaning of “particles” employed by an external observer breaks down. The fact that black holes can radiate away energy means that if you stop throwing in matter, the black hole will slowly shrink, which seems to contradict Hawking’s area theorem above. The catch is that this theorem relies on the weak energy condition, which states that the matter density along every timelike vector field is non-negative; this is no longer necessarily true once quantum fluctuations are taking into account, so there’s no mathematical contradiction. It does however mean that our formulation of the “second law” of black hole thermodynamics was too naïve: the area (and hence entropy) of a black hole can decrease, but only by emitting Hawking radiation which increases the entropy of the environment by at least as much. This motivates us to introduce the generalized entropy
$\displaystyle S_\mathrm{gen}=\frac{A}{4\ell_P}+S_\mathrm{ext}~, \ \ \ \ \ (23)$
where the first term is the black hole entropy ${S_\mathrm{BH}}$ (10), and the second is the entropy of the thermal radiation. In full generality, the Second Law of (Black Hole) Thermodynamics is then statement is that the entropy (10) of all black holes, plus the entropy of the rest of the universe, never decreases:
$\displaystyle \mathrm{d} S_\mathrm{gen}\geq 0~. \ \ \ \ \ (24)$
Evaporating black holes have some peculiar properties. For example, since the temperature of a Schwarzschild black hole is inversely proportional to the mass, the specific heat capacity ${C}$ is negative:
$\displaystyle \frac{\mathrm{d} T}{\mathrm{d} M}=-\frac{1}{8\pi M^2} \qquad\implies\qquad C=\frac{1}{M}\frac{\mathrm{d} Q}{\mathrm{d} T}=-8\pi M~. \ \ \ \ \ (25)$
(We’re working in natural units, so ${c\!=\!1}$ and hence the heat ${Q=M}$). Consequently, throwing matter into a black hole to increase its size actually makes it cooler! Conversely, as the black hole emits Hawking radiation, its temperature increases, causing it to emit more radiation, and so on in a feedback loop that causes the black hole to get hotter and hotter as it shrinks away to nothing. (Precisely what happens in the final moments of a black hole’s lifespan is an open question, likely requiring a more developed theory of quantum gravity to answer. Here I’m going to take the majority view that it indeed evaporates away completely). Note that this means that whenever one speaks about black holes thermodynamically, one should use the microcanonical ensemble rather than the canonical ensemble, because the latter is unstable to any quantum fluctuation that changes the mass of the black hole.
The fact that black holes radiate when quantum field theory is taken into account transforms black hole thermodynamics from a formal analogy to an ontologically meaningful description, where now the temperature ${T=\mathrm{d} M/\mathrm{d} S}$ is indeed the physical (thermodynamic) temperature of a single black hole. In this sense, quantum effects were required to resolve the tension between the fact that the information-theoretic interpretation of entropy as the measure of possible internal microstates was applicable to a single black hole — and hence had physical significance — while the temperature had no meaningful physical interpretation in the non-radiating (classical) case. The combination of seemingly disparate regimes in the expression for the entropy (10) is not a coincidence, but represents a truly remarkable unification. It’s perhaps the first thing a successful theory of quantum gravity should be expected to explain.
The fact that black holes evaporate also brings into focus the need for such a unification of general relativity and quantum field theory: a black hole is one of the only known regimes (the other being the Big Bang singularity) that falls within the purview of both theories, but attempts to combine them yield nonsensical infinities that have thus far resisted all attempts to tame them. This leads me to the main quantum puzzle I wanted to discuss: the information paradox. (The firewall paradox is essentially just a more modern sharpening of the underlying conflict, but is more difficult to sketch without QFT).
The information paradox, in a nutshell, is a conflict between the apparent ability of black holes to destroy information, and the quantum mechanical postulate of unitarity. Recall that unitarity is the statement that the time-evolution of a quantum state via the Schrödinger equation is described by a unitary operator, which have the property that they preserve the inner product. Physically, this ensures that probabilities continue to sum to one, i.e., that no information is lost. While evolution in open systems can be non-unitary due to decoherence with the environment, the evolution of any closed quantum mechanical system must be unitary, i.e., pure states evolve to pure states only, never to mixed states. This means that if we create a black hole by collapsing some matter in an initially pure state, let it evaporate, and then collect all the Hawking radiation, the final state must still be pure. The problem is that the Hawking radiation is, to a very good approximation, thermal, meaning it has the Planckian spectrum characteristic of black-body radiation, and thermal radiation contains no information.
The situation is often depicted by the Page curve [10,11], which is a plot of entropy with respect to time as the black hole evaporates. Suppose we collect all the Hawking radiation from a black hole that starts in a pure state; call the entropy of this radiation ${S_R}$. Initially, ${S_R=0}$, because our subsystem is empty. As the black hole evaporates, ${S_R}$ steadily increases as we collect more and more radiation. Eventually the black hole evaporates completely, and we’re left with a thermal bath of radiation in a maximally mixed state, so ${S_R=1}$ (after normalizing): a maximal loss of information has occurred! This is the information paradox in a single graph. In sharp contrast, what quantum mechanics expects to happen is that after the halfway point in the black hole’s lifespan, the late-time radiation starts to purify the early-time radiation we’ve already collected, so the entropy curve should turn around and head back to 0 when the black hole disappears. This is illustrated in the figure below, from Page’s paper [11]. (The lack of symmetry in the upwards and downwards parts is due to the fact that the emission of different particles (in this calculation, just photons and gravitons) affect the change in the black hole entropy and the change in the radiation entropy slightly differently. The turnover isn’t at exactly half the lifetime either, but rather around ${53.81\%}$.)
The fundamental issue is that quantum mechanics demands that the information escape the black hole, but there doesn’t seem to be any way of enabling this. (For more discussion, see my earlier post on firewalls. I should also mention that there are alternative proposals for what happens at the end of a black hole’s lifetime, but these are generally disfavoured for a variety of reasons, most notably AdS/CFT). That said, just within the past year, it was discovered that in certain AdS/CFT set-ups, one can obtain a Page curve for the entropy by including the contributions from wormhole geometries connecting different replicas that arise as subleading saddles in the gravitational path integral; see for example [12], or the talks by Douglas Stanford and Juan Maldacena as part of my research group’s QGI seminar series. While this doesn’t quite solve the paradox insofar as it doesn’t explain how the information actually escapes, it’s encouraging that — at least in AdS/CFT — there does seem to be a mechanism for correctly tracking the entropy as the black hole evaporates.
3. The holographic principle
To close this lecture/post, I’d be remiss if I didn’t mention the most remarkable and far-reaching consequence of the black hole investigations above: the holographic principle. Put forth by ‘t Hooft [13], and given a formulation in terms of string theory by Susskind [14], this is essentially the statement that the ultimate theory of quantum gravity must exhibit a dimensional reduction (from 3 to 2 spatial dimensions in our ${(3\!+\!1)}$-dimensional universe) in the number of fundamental degrees of freedom. This developed from the arguments of Bekenstein, that the black hole entropy (10) represents a bound on the amount of information that can be localized within any given region. The basic idea is that any attempt to cram more information into a region of fixed size will cause the system to collapse into a black hole, and therefore the dimension of the Hilbert space associated to any region must scale with the area of the boundary.
The review by Bousso [15] contains an excellent modern introduction to this principle; I’ll only give a quick summary of the main idea here. Recall that in quantum mechanics, the number of degrees of freedom ${N}$ is given by the log of the dimension of the Hilbert space ${\mathcal{H}}$. For example, in a system with 100 spins, there are ${2^{100}}$ possible states, so ${\mathrm{dim}\,\mathcal{H}=2^{100}}$ and ${N=100\ln 2}$, i.e., the system contains 100 bits of information. One can crudely think of quantum field theory as a continuum theory with a harmonic oscillator at every spacetime point; a single harmonic oscillator already has ${N=\infty}$, so one would expect an infinite number of degrees of freedom for any region. However, one can’t localize more than a Planck energy ${M_P=1.3\times 10^{19}\,\mathrm{GeV}}$ into a Planck cube ${\ell_P^3}$ without forming a black hole, which provides an ultra-violet (UV) cutoff on the spectrum. And since any finite volume imposes an infra-red (IR) cutoff, we can take the degrees of freedom in field theory to scale like the volume of the region, with one oscillator per Planck cell. In other words, we think of space as a grid with lattice spacing ${\ell_P=1.6\times 10^{-33}\,\mathrm{cm}}$; the total number of oscillators thus scales like the volume ${V}$, and each one has a finite number of states ${n}$ due to the UV cutoff mentioned above. Hence ${\mathrm{dim}\,\mathcal{H}=n^V}$ and ${N=V\ln n}$. Thus, since ${e^S=\mathrm{dim}\,\mathcal{H}}$, ${S\!=\!V\ln n\!\sim\! V}$, and we expect entropy to scale with volume just as in classical mechanics.
The lesson from black hole thermodynamics is that gravity fundamentally alters this picture. Consider a Schwarzschild black hole: the mass scales like ${M\sim r_s}$, not ${M\sim r_s^3}$, so the energy can’t scale with the volume: the vast majority of the states which QFT would naïvely allow can’t be reached in a gravitational theory, because we form a black hole when we’ve excited only a small fraction of them. The maximum number of states we can excite is ${A/4}$ (in Planck units).
You might object that the argument I gave above as to why the black hole entropy must scale with area rather than volume was based on the fact that the interior volume of the black hole is ill-defined, and that a volume law might still apply in other situations. Bousso [15] gives a nice argument as to why this can’t be true: it would violate unitarity. That is, suppose a region of space had an entropy that scaled with the volume, i.e., ${\mathrm{dim}\,\mathcal{H}=e^V}$. If we then collapse that region into a black hole, the Hilbert space dimension would have to suddenly drop to ${e^{A/4}}$. It would then be impossible to recover the initial state from the final state (e.g., after allowing the black hole to evaporate). Thus in order to preserve unitarity, the dimension of the Hilbert space must have been ${e^{A/4}}$ from the start.
I’ve glossed over an important technical issue in introducing this “holographic entropy bound” however, namely that the spatial bound doesn’t actually work: it’s violated in all sorts of scenarios. For example, consider a region of our universe, which is well-approximated as a flat (${\mathbb{R}^3}$), homogeneous, isotropic space with some average entropy density ${\sigma}$. Then the entropy scales like
$\displaystyle S=\sigma V=\frac{\sigma}{6\sqrt{\pi}}A^{3/2}~, \ \ \ \ \ (26)$
which exceeds the bound ${A/4}$ when ${r\geq 3\sigma/4}$. The proper way to generalize black hole entropy to the sort of bound we want is to recall that the area of the event horizon is a null hypersurface, and it is the formulation in terms of such light-sheets which is consistent with all known examples. This is known as the covariant entropy bound, and states that the entropy on (or rather, contained within) non-expanding light-sheets of some spacetime codimension-2 surface ${B}$ does not exceed the area of ${B}$. A thorough discussion would be another lecture in itself, so do check out Bousso’s review [15] if you’re interested in more details. Here I merely wanted to bring attention to the fact that the holographic principle is properly formulated on null, rather than spacelike, hypersurfaces.
The holographic principle represents a radical departure from our intuition, and implies that reality is fundamentally nonlocal. One further expects that this feature should be manifest in the ultimate theory of quantum gravity. AdS/CFT provides a concrete realization of this principle, and its success is such that the unqualified “holography” is taken to refer to it in the literature, but it’s important to remember that the holographic principle itself is more general, and applies to our universe as well.
References
1. J. D. Bekenstein, “Black holes and entropy,” Phys. Rev. D 7 (Apr, 1973) 2333–2346.
2. S. W. Hawking, “Gravitational radiation from colliding black holes,” Phys. Rev. Lett. 26 (May, 1971) 1344–1346.
3. J. Eisert, M. Cramer, and M. B. Plenio, “Area laws for the entanglement entropy – a review,” Rev. Mod. Phys. 82 (2010) 277–306, arXiv:0808.3773 [quant-ph].
4. M. Christodoulou and C. Rovelli, “How big is a black hole?,” Phys. Rev. D91 no. 6, (2015) 064046, arXiv:1411.2854 [gr-qc].
5. B. S. DiNunno and R. A. Matzner, “The Volume Inside a Black Hole,” Gen. Rel. Grav. 42 (2010) 63–76, arXiv:0801.1734 [gr-qc].
6. B. Freivogel, R. Jefferson, L. Kabir, and I.-S. Yang, “Geometry of the Infalling Causal Patch,” Phys. Rev. D91 no. 4, (2015) 044036, arXiv:1406.6043 [hep-th].
7. S. W. Hawking, “Particle creation by black holes,” Comm. Math. Phys. 43 no. 3, (1975) 199–220.
8. N. D. Birrell and P. C. W. Davies, Quantum Fields in Curved Space. Cambridge Monographs on Mathematical Physics. Cambridge Univ. Press, Cambridge, UK, 1984.
9. T. Jacobson, “Introductory Lectures on Black Hole Thermodynamics,”. http://www.physics.umd.edu/grt/taj/776b/lectures.pdf.
10. D. N. Page, “Information in black hole radiation,” Phys. Rev. Lett. 71 (1993) 3743–3746, arXiv:hep-th/9306083 [hep-th].
11. D. N. Page, “Time Dependence of Hawking Radiation Entropy,” JCAP 1309 (2013) 028, arXiv:1301.4995 [hep-th].
12. A. Almheiri, T. Hartman, J. Maldacena, E. Shaghoulian, and A. Tajdini, “Replica Wormholes and the Entropy of Hawking Radiation,” arXiv:1911.12333 [hep-th].
13. G. ’t Hooft, “Dimensional reduction in quantum gravity,” Conf. Proc. C930308 (1993) 284–296, arXiv:gr-qc/9310026 [gr-qc].
14. L. Susskind, “The World as a hologram,” J. Math. Phys. 36 (1995) 6377–6396, arXiv:hep-th/9409089 [hep-th].
15. R. Bousso, “The Holographic principle,” Rev. Mod. Phys. 74 (2002) 825–874, arXiv:hep-th/0203101 [hep-th].
In a previous post, I mentioned that the firewall paradox could be phrased as a question about the existence of interior operators that satisfy the correct thermal correlation functions, namely
$\displaystyle \langle\Psi|\mathcal{O}(t,\mathbf{x})\tilde{\mathcal{O}}(t',\mathbf{x}')|\Psi\rangle =Z^{-1}\mathrm{tr}\left[e^{-\beta H}\mathcal{O}(t,\mathbf{x})\mathcal{O}(t'+i\beta/2,\mathbf{x}')\right]~, \ \ \ \ \ (1)$
where ${\tilde{\mathcal{O}}}$ and ${\mathcal{O}}$ operators inside and outside the black hole, respectively; cf. eqn. (2) here. In this short post, I’d like to review the basic argument leading up to this statement, following the original works [1,2].
Consider the eternal black hole in AdS as depicted in the following diagram, which I stole from [1]:
The blue line connecting the two asymptotic boundaries is the Cauchy slice on which we’ll construct our states, denoted ${\Sigma_I}$ in exterior region ${I}$ and ${\Sigma_{III}}$ in exterior region ${III}$. Note that, modulo possible UV divergences at the origin, either half serves as a complete Cauchy slice if we restrict our inquiries to the associated exterior region. But if we wish to reconstruct states in the interior (henceforth just ${II}$, since we don’t care about ${IV}$), then we need the entire slice. Pictorially, one can see this from the fact that only the left-moving modes from region ${I}$, and the right-moving modes from region ${III}$, cross the horizon into region ${II}$, but we need both left- and right-movers to have a complete mode decomposition.
To expand on this, imagine we proceed with the quantization of a free scalar field in region ${I}$. We need to solve the Klein-Gordon equation,
$\displaystyle \left(\square-m^2\right)\phi=\frac{1}{\sqrt{-g}}\,\partial_\mu\left( g^{\mu\nu}\sqrt{-g}\,\partial_\nu\phi\right)-m^2\phi=0 \ \ \ \ \ (2)$
on the AdS black brane background,
$\displaystyle \mathrm{d} s^2=\frac{1}{z^2}\left[-h(z)\mathrm{d} t^2+\mathrm{d} z^2+\mathrm{d}\mathbf{x}^2\right]~, \quad\quad h(z)\equiv1-\left(\frac{z}{z_0}\right)^d~. \ \ \ \ \ (3)$
where, in Poincaré coordinates, the asymptotic boundary is at ${z\!=\!0}$, and the horizon is at ${z\!=\!z_0}$. We work in ${(d+1)-}$spacetime dimensions, so ${\mathbf{x}}$ is a ${(d\!-\!1)}$-dimensional vector representing the transverse coordinates. Note that we’ve set the AdS radius to 1. Substituting the usual plane-wave ansatz
$\displaystyle f_{\omega,\mathbf{k}}(t,\mathbf{x},z)=e^{-i\omega t+i\mathbf{k}\mathbf{x}}\psi_{\omega,\mathbf{k}(z)} \ \ \ \ \ (4)$
into the Klein-Gordon equation results in a second order ordinary differential equation for the radial function ${\psi_{\omega,\mathbf{k}}(z)}$, and hence two linearly independent solutions. As usual, we then impose normalizable boundary conditions at infinity, which leaves us with a single linear combination for each ${(\omega,\mathbf{k})}$. Note that we do not impose boundary conditions at the horizon. Naïvely, one might have thought to impose ingoing boundary conditions there; however, as remarked in [1], this precludes the existence of real ${\omega}$. More intuitively, I think of this as simply the statement that the black hole is evaporating, so we should allow the possibility for outgoing modes as well. (That is, assuming a large black hole in AdS, the black hole is in thermal equilibrium with the surrounding environment, so the outgoing and ingoing fluxes are precisely matched, and it maintains constant size). The expression for ${\psi_{\omega,\mathbf{k}}(z)}$ is not relevant here; see [1] for more details.
We thus arrive at the standard expression of the (bulk) field ${\phi}$ in terms of creation and annihilation operators,
$\displaystyle \phi_I(t,\mathbf{x},z)=\int_{\omega>0}\frac{\mathrm{d}\omega\mathrm{d}^{d-1}\mathbf{k}}{\sqrt{2\omega}(2\pi)^d}\,\bigg[ a_{\omega,\mathbf{k}}\,f_{\omega,\mathbf{k}}(t,\mathbf{x},z)+\mathrm{h.c.}\bigg]~, \ \ \ \ \ (5)$
where the creation/annihilation operators for the modes may be normalized with respect to the Klein-Gordon norm, so that
$\displaystyle [a_{\omega,\mathbf{k}},a^\dagger_{\omega',\mathbf{k}'}]=\delta(\omega-\omega')\delta^{d-1}(\mathbf{k}-\mathbf{k}')~. \ \ \ \ \ (6)$
Of course, a similar expansion holds for region ${III}$:
$\displaystyle \phi_{III}(t,\mathbf{x},z)=\int_{\omega>0}\frac{\mathrm{d}\omega\mathrm{d}^{d-1}\mathbf{k}}{\sqrt{2\omega}(2\pi)^d}\,\bigg[\tilde a_{\omega,\mathbf{k}}\,g_{\omega,\mathbf{k}}(t,\mathbf{x},z)+\mathrm{h.c.}\bigg]~, \ \ \ \ \ (7)$
where the mode operators ${\tilde a_{\omega,\mathbf{k}}}$ commute with all ${a_{\omega,\mathbf{k}}}$ by construction.
Now, what of the future interior, region ${II}$? Unlike the exterior regions, we no longer have any boundary condition to impose, since every Cauchy slice which crosses this region is bounded on both sides by a future horizon. Consequently, we retain both the linear combinations obtained from the Klein-Gordon equation, and hence have twice as many modes as in either ${I}$ or ${III}$—which makes sense, since the interior receives contributions from both exterior regions. Nonetheless, it may be a bit confusing from the bulk perspective, since any local observer would simply arrive at the usual mode expansion involving only a single set of creation/annihilation operators, and I don’t have an intuition as to how ${a_{\omega,\mathbf{k}}}$ and ${\tilde a_{\omega,\mathbf{k}}}$ relate vis-à-vis their commutation relations in this shared domain. However, the entire framework in which the interior is fed by two exterior regions is only properly formulated in AdS/CFT, in which — it is generally thought — the interior region emerges from the entanglement structure between the two boundaries, so I prefer to uplift this discussion to the CFT before discussing the interior region in detail. This avoids the commutation confusion above — since the operators live in different CFTs — and it was the next step in our analysis anyway. (Incidentally, appendix B of [1] performs the mode decomposition in all three regions explicitly for the case of Rindler space, which provides a nice concrete example in which one can get one’s hands dirty).
So, we want to discuss local bulk fields from the perspective of the boundary CFT. From the extrapolate dictionary, we know that local bulk operators become increasingly smeared over the boundary (in both space and time) the farther we move into the bulk. Thus in region ${I}$, we can construct the operator
$\displaystyle \phi^I_{\mathrm{CFT}}(t,\mathbf{x},z)=\int_{\omega>0}\frac{\mathrm{d}\omega\mathrm{d}^{d-1}\mathbf{k}}{(2\pi)^d}\,\bigg[\mathcal{O}_{\omega,\mathbf{k}}\,f_{\omega,\mathbf{k}}(t,\mathbf{x},z)+\mathcal{O}^\dagger_{\omega,\mathbf{k}}f^*_{\omega,\mathbf{k}}(t,\mathbf{x},z)\bigg]~, \ \ \ \ \ (8)$
which, while a non-local operator in the CFT (constructed from local CFT operators ${\mathcal{O}_{\omega,\mathbf{k}}}$ which act as creation operators of light primary fields), behaves like a local operator in the bulk. Note that from the perspective of the CFT, ${z}$ is an auxiliary coordinate that simply parametrizes how smeared-out this operator is on the boundary.
As an aside, the critical difference between (8) and the more familiar HKLL prescription [3-5] is that the former is formulated directly in momentum space, while the latter is defined in position space as
$\displaystyle \phi_{\mathrm{CFT}}(t,\mathbf{x},z)=\int\!\mathrm{d} t'\mathrm{d}^{d-1}\mathbf{x}'\,K(t,\mathbf{x},z;t',\mathbf{x}')\mathcal{O}(t',\mathbf{x}')~, \ \ \ \ \ (9)$
where the integration kernel ${K}$ is known as the “smearing function”, and depends on the details of the spacetime. To solve for ${K}$, one performs a mode expansion of the local bulk field ${\phi}$ and identifies the normalizable mode with the local bulk operator ${\mathcal{O}}$ in the boundary limit. One then has to invert this relation to find the bulk mode operator, and then insert this into the original expansion of ${\phi}$. The problem now is that to identify ${K}$, one needs to swap the order of integration between position and momentum space, and the presence of the horizon results in a fatal divergence that obstructs this maneuver. As discussed in more detail in [1] however, working directly in momentum space avoids this technical issue. But the basic relation “smeared boundary operators ${\longleftrightarrow}$ local bulk fields” is the same.
Continuing, we have a similar bulk-boundary relation in region ${III}$, in terms of operators ${\tilde{\mathcal{O}}_{\omega,\mathbf{k}}}$ living in the left CFT:
$\displaystyle \phi^{III}_{\mathrm{CFT}}(t,\mathbf{x},z)=\int_{\omega>0}\frac{\mathrm{d}\omega\mathrm{d}^{d-1}\mathbf{k}}{(2\pi)^d}\,\bigg[\tilde{\mathcal{O}}_{\omega,\mathbf{k}}\,f_{\omega,\mathbf{k}}(t,\mathbf{x},z)+\tilde{\mathcal{O}}^\dagger_{\omega,\mathbf{k}}f^*_{\omega,\mathbf{k}}(t,\mathbf{x},z)\bigg]~. \ \ \ \ \ (10)$
Note that even though I’ve used the same coordinate labels, ${t}$ runs backwards in the left wedge, so that ${\tilde{\mathcal{O}}_{\omega,\mathbf{k}}}$ plays the role of the creation operator here. From the discussion above, the form of the field in the black hole interior is then
$\displaystyle \phi^{II}_{\mathrm{CFT}}(t,\mathbf{x},z)=\int_{\omega>0}\frac{\mathrm{d}\omega\mathrm{d}^{d-1}\mathbf{k}}{(2\pi)^d}\,\bigg[\mathcal{O}_{\omega,\mathbf{k}}\,g^{(1)}_{\omega,\mathbf{k}}(t,\mathbf{x},z)+\tilde{\mathcal{O}}_{\omega,\mathbf{k}}g^{(2)}_{\omega,\mathbf{k}}(t,\mathbf{x},z)+\mathrm{h.c}\bigg]~, \ \ \ \ \ (11)$
where ${\mathcal{O}_{\omega,\mathbf{k}}}$ and ${\tilde{\mathcal{O}}_{\omega,\mathbf{k}}}$ are the (creation/annihilation operators for the) boundary modes in the right and left CFTs, respectively. The point is that in order to construct a local field operator behind the horizon, both sets of modes — the left-movers ${\mathcal{O}_{\omega,\mathbf{k}}}$ from ${I}$ and the right-movers ${\tilde{\mathcal{O}}_{\omega,\mathbf{k}}}$ from ${III}$ — are required. In the eternal black hole considered above, the latter originate in the second copy of the CFT. But in the one-sided case, we would seem to have only the left-movers ${\mathcal{O}_{\omega,\mathbf{k}}}$, hence we arrive at the crucial question: for a one-sided black hole — such as that formed from collapse in our universe — what are the interior modes ${\tilde{\mathcal{O}}_{\omega,\mathbf{k}}}$? Equivalently: how can we represent the black hole interior given access to only one copy of the CFT?
To answer this question, recall that the thermofield double state,
$\displaystyle |\mathrm{TFD}\rangle=\frac{1}{\sqrt{Z_\beta}}\sum_ie^{-\beta E_i/2}|E_i\rangle\otimes|E_i\rangle~, \ \ \ \ \ (12)$
is constructed so that either CFT appears exactly thermal when tracing out the other side, and that this well-approximates the late-time thermodynamics of a large black hole formed from collapse. That is, the exterior region will be in the Hartle-Hawking vacuum (which is to Schwarzschild as Rindler is to Minkowski), with the temperature ${\beta^{-1}}$ of the CFT set by the mass of the black hole. This implies that correlation functions of operators ${\mathcal{O}}$ in the pure state ${|\mathrm{TFD}\rangle}$ may be computed as thermal expectation values in their (mixed) half of the total Hilbert space, i.e.,
$\displaystyle \langle\mathrm{TFD}|\mathcal{O}(t_1,\mathbf{x}_1)\ldots\mathcal{O}(t_n,\mathbf{x}_n)|\mathrm{TFD}\rangle =Z^{-1}_\beta\mathrm{tr}\left[e^{-\beta H}\mathcal{O}(t_1,\mathbf{x}_1)\ldots\mathcal{O}(t_n,\mathbf{x}_n)\right]~. \ \ \ \ \ (13)$
The same fundamental relation remains true in the case of the one-sided black hole as well: given the Hartle-Hawking state representing the exterior region, we can always obtain a purification such that expectation values in the original, thermal state are equivalent to standard correlators in the “fictitious” pure state, by the same doubling formalism that yielded the TFD. (Of course, the purification of a given mixed state is not unique, but as pointed out in [2] “the correct way to pick it, assuming that expectation values [of the operators] are all the information we have, is to pick the density matrix which maximizes the entropy.” That is, we pick the purification such that the original mixed state is thermal, i.e., ${\rho\simeq Z^{-1}_\beta e^{-\beta H}}$ up to ${1/N^2}$ corrections. The reason this is the “correct” prescription is that it’s the only one which does not impose additional constraints.) Thus (13) can be generally thought of as the statement that operators in an arbitrary pure state have the correct thermal expectation values when restricted to some suitably mixed subsystem (e.g., the black hole exterior dual to a single CFT).
Now, what if we wish to compute a correlation function involving operators across the horizon, e.g., ${\langle\mathcal{O}\tilde{\mathcal{O}}\rangle}$? In the two-sided case, we can simply compute this correlator in the pure state ${|\mathrm{TFD}\rangle}$. But in the one-sided case, we only have access to the thermal state representing the exterior. Thus we’d like to know how to compute the correlator using only the available data in the CFT corresponding to region ${I}$. In order to do this, we re-express all operators ${\tilde{\mathcal{O}}}$ appearing in the correlator with analytically continued operators ${\mathcal{O}}$ via the KMS condition, i.e., we make the replacement
$\displaystyle \tilde{\mathcal{O}}(t,\mathbf{x}) \longrightarrow \mathcal{O}(t+i\beta/2,\mathbf{x})~. \ \ \ \ \ (14)$
This is essentially the usual statement that thermal Green functions are periodic in imaginary time; see [1] for details. This relationship allows us to express the desired correlator as
$\displaystyle \langle\mathrm{TFD}|\mathcal{O}(t_1,\mathbf{x}_1)\ldots\tilde{\mathcal{O}}(t_n,\mathbf{x}_n)|\mathrm{TFD}\rangle =Z^{-1}_\beta\mathrm{tr}\left[e^{-\beta H}\mathcal{O}(t_1,\mathbf{x}_1)\ldots\mathcal{O}_{\omega,\mathbf{k}}(t_n+i\beta/2,\mathbf{x}_n)\right]~, \ \ \ \ \ (15)$
which is precisely eqn. (2) in our earlier post, cf. the two-point function (1) above. Note the lack of tilde’s on the right-hand side: this thermal expectation value can be computed entirely in the right CFT.
If the CFT did not admit operators which satisfy the correlation relation (15), it would imply a breakdown of effective field theory across the horizon. Alternatively, observing deviations from the correct thermal correlators would allow us to locally detect the horizon, in contradiction to the equivalence principle. In this sense, this expression may be summarized as the statement that the horizon is smooth. Thus, for the CFT to represent a black hole with no firewall, it must contain a representation of interior operators ${\tilde{\mathcal{O}}}$ with the correct behaviour inside low-point correlators. This last qualifier hints at the state-dependent nature of these so-called “mirror operators”, which I’ve discussed at length elsewhere [6].
References
[1] K. Papadodimas and S. Raju, “An Infalling Observer in AdS/CFT,” JHEP 10 (2013) 212,arXiv:1211.6767 [hep-th].
[2] K. Papadodimas and S. Raju, “State-Dependent Bulk-Boundary Maps and Black Hole Complementarity,” Phys. Rev. D89 no. 8, (2014) 086010, arXiv:1310.6335 [hep-th].
[3] A. Hamilton, D. N. Kabat, G. Lifschytz, and D. A. Lowe, “Holographic representation of local bulk operators,” Phys. Rev. D74 (2006) 066009, arXiv:hep-th/0606141 [hep-th].
[4] A. Hamilton, D. N. Kabat, G. Lifschytz, and D. A. Lowe, “Local bulk operators in AdS/CFT: A Boundary view of horizons and locality,” Phys. Rev. D73 (2006) 086003,arXiv:hep-th/0506118 [hep-th].
[5] A. Hamilton, D. N. Kabat, G. Lifschytz, and D. A. Lowe, “Local bulk operators in AdS/CFT: A Holographic description of the black hole interior,” Phys. Rev. D75 (2007) 106001,arXiv:hep-th/0612053 [hep-th]. [Erratum: Phys. Rev.D75,129902(2007)].
[6] R. Jefferson, “Comments on black hole interiors and modular inclusions,” SciPost Phys. 6 no. 4, (2019) 042, arXiv:1811.08900 [hep-th].
## Free energy, variational inference, and the brain
In several recent posts, I explored various ideas that lie at the interface of physics, information theory, and machine learning:
• We’ve seen, à la Jaynes, how the concepts of entropy in statistical thermodynamics and information theory are unified, perhaps the quintessential manifestation of the intimate relationship between the two.
• We applied information geometry to Boltzmann machines, which led us to the formalization of “learning” as a geodesic in the abstract space of machines.
• In the course of introducing VAEs, we saw that the Bayesian inference procedure can be understood as a process which seeks to minimizes the variational free energy, which encodes the divergence between the approximate and true probability distributions.
• We examined how the (dimensionless) free energy serves as a generating function for the cumulants from probability theory, which manifest as the connected Green functions from quantum field theory.
• We also showed how the cumulants from hidden priors control the higher-order interactions between visible units in an RBM, which underlies their representational power.
• Lastly, we turned a critical eye towards the analogy between deep learning and the renormalization group, through a unifying Bayesian language in which UV degrees of freedom correspond to hidden variables over which a low-energy observer must marginalize.
Collectively, this led me to suspect that ideas along these lines — in particular, the link between variational Bayesian inference and free energy minimization in hierarchical models — might provide useful mathematical headway in our attempts to understand learning and intelligence in both minds and machines. Imagine my delight when I discovered that, at least in the context of biological brains, a neuroscientist named Karl Friston had already scooped me more than a decade ago!
The aptly-named free energy principle (for the brain) is elaborated upon in a series of about ten papers spanning as many years. I found [1-5] most helpful, but insofar as a great deal of text is copied verbatim (yes, really; never trust the h-index) it doesn’t really matter which one you read. I’m going to mostly draw from [3], because it seems the earliest in which the basic idea is fleshed-out completely. Be warned however that the notation varies slightly from paper to paper, and I find his distinction between states and parameters rather confusingly fuzzy; but we’ll make this precise below.
The basic idea is actually quite simple, and proceeds from the view of the brain as a Bayesian inference machine. In a nutshell, the job of the brain is to infer, as accurately as possible, the probability distribution representing the world (i.e., to build a model that best accords with sensory inputs). In a sense, the brain itself is a probabilistic model in this framework, so the goal is to bring this internal model of the world in line with the true, external one. But this is exactly the same inference procedure we’ve seen before in the context of VAEs! Thus the free energy principle is just the statement that the brain minimizes the variational free energy between itself (that is, its internal, approximate model) and its sensory inputs—or rather, the true distribution that generates them.
To elucidate the notation involved in formulating the principle, we can make an analogy with VAEs. In this sense, the goal of the brain is to construct a map between our observations (i.e., sensory inputs ${x}$) and the underlying causes (i.e., the environment state ${z}$). By Bayes’ theorem, the joint distribution describing the model can be decomposed as
$\displaystyle p_\theta(x,z)=p_\theta(x|z)p(z)~. \ \ \ \ \ (1)$
The first factor on the right-hand side is the likelihood of a particular sensory input ${x}$ given the current state of the environment ${z}$, and plays the role of the decoder in this analogy, while the second factor is the prior distribution representing whatever foreknowledge the system has about the environment. The subscript ${\theta}$ denotes the variational or “action parameters” of the model, so named because they parametrize the action of the brain on its substrate and surroundings. That is, the only way in which the system can change the distribution is by acting to change its sensory inputs. Friston denotes this dependency by ${x(\theta)}$ (with different variables), but as alluded above, I will keep to the present notation to avoid conflating state/parameter spaces.
Continuing this analogy, the encoder ${p_\theta(z|x)}$ is then a map from the space of sensory inputs ${X}$ to the space of environment states ${Z}$ (as modelled by the brain). As in the case of VAEs however, this is incomputable in practice, since we (i.e., the brain) can’t evaluate the partition function ${p(x)=\sum_zp_\theta(x|z)p(z)}$. Instead, we construct a new distribution ${q_\phi(z|x)}$ for the conditional probability of environment states ${z}$ given a particular set of sensory inputs ${x}$. The variational parameters ${\phi}$ for this ensemble control the precise hamiltonian that defines the distribution, i.e., the physical parameters of the brain itself. Depending on the level of resolution, these could represent, e.g., the firing status of all neurons, or the concentrations of neurotransmitters (or the set of all weights and biases in the case of artificial neural nets).
Obviously, the more closely ${q_\phi(z|x)}$ approximates ${p_\theta(z|x)}$, the better our representation — and hence, the brain’s predictions — will be. As we saw before, we quantify this discrepancy by the Kullback-Leibler divergence
$\displaystyle D_z(q_\phi(z|x)||p_\theta(z|x))=\sum_zq_\phi(z|x)\ln\frac{q_\phi(z|x)}{p_\theta(z|x)}~, \ \ \ \ \ (2)$
which we can re-express in terms of the variational free energy
\displaystyle \begin{aligned} F_{q|}&=-\langle\ln p_\theta(x|z)\rangle_{q|}+D_z(q_\phi(z|x)||p(z))\\ &=-\sum_zq_\phi(z|x)\ln\frac{p_\theta(x,z)}{q_\phi(z|x)} =\langle E_{p|}\rangle_{q|}-S_{q|}~, \end{aligned} \ \ \ \ \ (3)
where the subscripts ${p|,q|}$ denote the conditional distributions ${p_\theta(z|x)}$, ${q_\phi(z|x)}$. On the far right-hand side, ${E_{p|}=-\ln p_\theta(x,z)}$ is the energy or hamiltonian for the ensemble ${p_\theta(z|x)}$ (with partition function ${Z=p(x)}$), and ${S_{q|}=-\int\!\mathrm{d} z\,q_\phi(z|x)\ln q_\phi(z|x)}$ is the entropy of ${q_\phi(z|x)}$ (see the aforementioned post for details).
However, at this point we must diverge from our analogy with VAEs, since what we’re truly after is a model of the state of the world which is independent of our current sensory inputs. Consider that from a selectionist standpoint, a brain that changes its environmental model when a predator temporarily moves out of sight is less likely to pass on the genes for its construction! Said differently, a predictive model of reality will be more successful when it continues to include the moon, even when nobody looks at it. Thus instead of ${q_\phi(z|x)}$, we will compare ${p_\theta(x|z)}$ with the ensemble density ${q_\lambda(z)}$, where — unlike in the case of ${p(x)}$ or ${p(z)}$ — we have denoted the variational parameters ${\lambda}$ explicitly, since they will feature crucially below. Note that ${\lambda}$ is not the same as ${\theta}$ (and similarly, whatever parameters characterize the marginals ${p(x)}$, ${p(z)}$ cannot be identified with ${\theta}$). One way to see this is by comparison with our example of renormalization in deep networks, where the couplings in the joint distribution (here, ${\phi}$ in ${q_\phi(x,z)}$) get renormalized after marginalizing over some degrees of freedom (here, ${\lambda}$ in ${q_\lambda(z)}$, after marginalizing over all possible sensory inputs ${x}$). Friston therefore defines the variational free energy as
\displaystyle \begin{aligned} \mathcal{F}_q&=-\langle\ln p_\theta(x|z)\rangle_q+D_z(q_\lambda(z)||p(z))\\ &=-\sum_zq_\lambda(z)\ln\frac{p_\theta(x,z)}{q_\lambda(z)} =\langle E_{p|}\rangle_{q}-S_{q}~, \end{aligned} \ \ \ \ \ (4)
where we have used a curly ${\mathcal{F}}$ to distinguish this from ${F}$ above, and note that the subscript ${q}$ (no vertical bar) denotes that expectation values are computed with respect to the distribution ${q_\lambda(z)}$. The first equality expresses ${\mathcal{F}_q}$ as the log-likelihood of sensory inputs given the state of the environment, minus an error term that quantifies how far the brain’s internal model of the world ${q_\lambda(z)}$ is from the model consistent with our observations, ${p(z)}$, cf. (1). Equivalent, comparing with (2) (with ${q_\lambda(z)}$ in place of ${q_\phi(z|x)}$), we’re interested in the Kullback-Leibler divergence between the brain’s model of the external world, ${q_\lambda(z)}$, and the conditional likelihood of a state therein given our sensory inputs, ${p_\theta(z|x)}$. Thus we arrive at the nutshell description we gave above, namely that the principle is to minimize the difference between what is and what we think there is. As alluded above, there is a selectionist argument for this principle, namely that organisms whose beliefs accord poorly with reality tend not to pass on their genes.
As an aside, it is perhaps worth emphasizing that both of these variational free energies are perfectly valid: unlike the Helmholtz free energy, which is uniquely defined, one can define different variational free energies depending on which ensembles one wishes to compare, provided it admits an expression of the form ${\langle E\rangle-S}$ for some energy ${E}$ and entropy ${S}$ (in case it wasn’t clear by now, we’re working with the dimensionless or reduced free energy, equivalent to setting ${\beta=1}$; the reason for this general form involves a digression on Legendre transforms). Comparing (4) and (3), one sees that the difference in this case is simply a difference in entropies and expectation values with respect to prior ${q_\lambda(z)}$ vs. conditional distributions ${q_\phi(z|x)}$ (which makes sense, since all we did was replace the latter by the former in our first definition).
Now, viewing the brain as an inference machine means that it seeks to optimize its predictions about the world, which in this context, amounts to minimizing the free energy by varying the parameters ${\theta,\,\lambda}$. As explained above, ${\theta}$ corresponds to the actions the system can take to alter its sensory inputs. From the first equality in (4), we see that the dependence on the action parameters is entirely contained in the log-likelihood of sensory inputs: the second, Kullback-Leibler term contains only priors (cf. our discussion of gradient descent in VAEs). This, optimizing the free energy with respect to ${\theta}$ means that the system will act in such a way as to fulfill its expectations with regards to sensory inputs. Friston neatly summarizes this philosophy as the view that “we may not interact with the world to maximize our reward but simply to ensure it behaves as we think it should” [3]. While this might sound bizarre at first glance, the key fact to bear in mind is that the system is limited in the actions it can perform, i.e., in its ability to adapt. In other words, a system with low free energy is per definition adapting well to changes in its environment or its own internal needs, and therefore is positively selected for relative to systems whose ability to model and adapt to their environment is worse (higher free energy).
What about optimization with respect to the other set of variational parameters, ${\lambda}$? As mentioned above, these correspond to the physical parameters of the system itself, so this corresponds to adjusting the brain’s internal parameters — connection strengths, neurotransmitter levels, etc. — to ensure that our perceptions are as accurate as possible. By applying Bayes rule to the joint distribution ${p_\theta(x,z)}$, we can re-arrange the expression for the free energy to isolate this dependence in a single Kullback-Leibler term:
$\displaystyle \mathcal{F}_q=-\ln p_\theta(x)+D_z\left( q_\lambda(z)||p_\theta(z|x)\right)~. \ \ \ \ \ (5)$
where we have used the fact that ${\langle \ln p_\theta(x)\rangle_q=\ln p_\theta(x)}$. This form of the expression shows clearly that minimization with respect to ${\lambda}$ directly corresponds to minimizing the Kullback-Leibler divergence between the brain’s internal model of the world, ${q_\lambda(z)}$, and the posterior probability of the state giving rise to its sensory inputs, ${p_\theta(z|x)}$. That is, in the limit where the second, Kullback-Leibler term vanishes, we are correctly modelling the causes of our sensory inputs. The selectionist interpretation is that systems which are less capable of accurately modelling their environment by correctly adjusting internal, “perception parameters” ${\lambda}$ will have higher free energy, and hence will be less adept in bringing their perceptions in line with reality.
Thus far everything is quite abstract and rather general. But things become really interesting when we apply this basic framework to hierarchical models with both forward and backwards connections — such as the cerebral cortex — which leads to “recurrent dynamics that self-organize to suppress free energy or prediction error, i.e., recognition dynamics” [3]. In fact, Friston makes the even stronger argument that it is precisely the inability to invert the recognition problem that necessitates backwards (as opposed to purely feed-forwards) connections. In other words, the selectionist pressure to accurately model the (highly non-linear) world requires that brains evolve top-down connections from higher to lower cortical layers. Let’s flesh this out in a bit more detail.
Recall that ${Z}$ is the space of environmental states as modelled by the brain. Thus we can formally associate the encoder, ${p_\theta(z|x)}$, with forwards connections, which propagate sensory data up the cortical hierarchy; Friston refers to this portion as the recognition model. That is, the recognition model should take a given data point ${x}$, and return the likelihood of a particular cause (i.e., world-state) ${z}$. In general however, the map from causes to sensory inputs — captured by the so-called generative model ${p_\theta(x|z)}$ — is highly non-linear, and the brain must essentially invert this map to find contextually invariant causes (e.g., the continued threat of a predator even when it’s no longer part of our immediate sensory input). This is the intractable problem of computing the partition function above, the workaround for which is to instead postulate an approximate recognition model ${q_\lambda(z)}$, whose parameters ${\lambda}$ are encoded in the forwards connections. The role of the generative model ${p_\theta(x|z)}$ is then to modulate sensory inputs (or their propagation and processing) based on the prevailing belief about the environment’s state, the idea being that these effects are represented in backwards (and lateral) connections. Therefore, the role of these backwards or top-down connections is to modulate forwards or bottom-up connections, thereby suppressing prediction error, which is how the brain operationally minimizes its free energy.
The punchline is that backwards connections are necessary for general perception and recognition in hierarchical models. As mentioned above, this is quite interesting insofar as it offers, on the one hand, a mathematical explanation for the cortical structure found in biological brains, and on the other, a potential guide to more powerful, neuroscience-inspired artificial intelligence.
There are however a couple technical exceptions to this claim of necessity worth mentioning, which is why I snuck in the qualifier “general” in the punchline above. If the abstract generative model can be inverted exactly, then there’s no need for (expensive and time-consuming) backwards connections, because one can obtain a perfectly suitable recognition model that reliably predicts the state of the world given sensory inputs, using a purely feed-forward network. Mathematically, this corresponds to simply taking ${q_\lambda(z)=p_\theta(z|x)}$ in (4) (i.e., zero Kullback-Leibler divergence (2)), whereupon the free energy reduces to the negative log-likelihood of sensory inputs,
$\displaystyle \mathcal{F}_{p}=-\ln p(x)~, \ \ \ \ \ (6)$
where we have used the fact that ${\langle\ln p(x)\rangle_{p|}=\ln p(x)}$. Since real-world models are generally non-linear in their inputs however, invertibility is not something one expects to encounter in realistic inference machines (i.e., brains). Indeed, our brains evolved under strict energetic and space constraints; there simply isn’t enough processing power to brute-force the problem by using dedicated feed-forward networks for all our recognition needs. The other important exception is when the recognition process is purely deterministic. In this case one replaces ${q_\lambda(z)}$ by a Kronecker delta function ${\delta(z(x)-x)}$, so that upon performing the summation, the inferred state ${z}$ becomes a deterministic function of the inputs ${x}$. Then the second expression for ${\mathcal{F}}$ in (4) becomes the negative log-likelihood of the joint distribution
$\displaystyle \mathcal{F}_\delta=-\ln p_\theta(x,z(x)) =-\ln p_\theta(x|z(x))-\ln p(z(x))~, \ \ \ \ \ (7)$
where we have used the fact that ${\ln\delta(0)=0}$. Note that the invertible case, (6), corresponds to maximum likelihood estimation (MLE), while the deterministic case (7) corresponds to so-called maximum a posteriori estimation (MAP), the only difference being that the latter includes a weighting based on the prior distribution ${p(z(x))}$. Neither requires the conditional distribution ${p_\theta(z|x)}$, and so skirts the incomputability issue with the path integral above. The reduction to these familiar machine learning metrics for such simple models is reasonable, since only in relatively contrived settings does one ever expect deterministic/invertible recognition.
In addition to motivating backwards connections, the hierarchical aspect is important because it allows the brain to learn its own priors through a form of empirical Bayes. In this sense, the free energy principle is essentially an elegant (re)formulation of predictive coding. Recall that when we introduced the generative model in the form of the decoder ${p_\theta(x|z)}$ in (1), we also necessarily introduced the prior distribution ${p(z)}$: the liklihood of a particular sensory input ${x}$ given (our internal model of) the state of the environment (i.e., the cause) ${z}$ only makes sense in the context of the prior distribution of causes. Where does this prior distribution come from? In artificial models, we can simply postulate some (e.g., Gaussian or informative) prior distribution and proceed to train the model from there. But a hierarchical model like the brain enables a more natural option. To illustrate the basic idea, consider labelling the levels in such a cortical hierarchy by ${i\in{0,\ldots,n}}$, where 0 is the bottom-most layer and ${n}$ is the top-most layer. Then ${x_i}$ denotes sensory data at the corresponding layer; i.e., ${x_0}$ corresponds to raw sensory inputs, while ${x_n}$ corresponds to the propagated input signals after all previous levels of processing. Similarly, let ${z_i}$ denote the internal model of the state of the world assembled (or accessible at) the ${i^\mathrm{th}}$ layer. Then
$\displaystyle p(z_i)=\sum_{z_{i-1}}p(z_i|z_{i-1})p(z_{i-1})~, \ \ \ \ \ (8)$
i.e., the prior distribution ${p(z_i)}$ implicitly depends on the knowledge of the state at all previous levels, analogous to how the IR degrees of freedom implicitly depend on the marginalized UV variables. The above expression can be iterated recursively until we reach ${p(z_0)}$. For present purposes, this can be identified with ${p(x_0)}$, since at the bottom-most level of the hierarchy, there’s no difference between the raw sensory data and the inferred state of the world (ignoring whatever intralayer processing might take place). In this (empirical Bayesian) way, the brain self-consistently builds up higher priors from states at lower levels.
The various works by Friston and collaborators go into vastly more detail, of course; I’ve made only the crudest sketch of the basic idea here. In particular, one can make things more concrete by examining the neural dynamics in such models, which is explored in some of these works via something akin to a mean field theory (MFT) approach. I’d originally hoped to have time to dive into this in detail, but a proper treatment will have to await another post. Suffice to say however that the free energy principle provides an elegant formulation which, as in the other topics mentioned at the beginning of this post, allows us to apply ideas from theoretical physics to understand the structure and dynamics of neural networks, and may even prove a fruitful mathematical framework for both theoretical neuroscience and (neuro-inspired) artificial intelligence.
References:
[1] K. Friston, “Learning and inference in the brain,” Neural Networks (2003) .
[2] K. Frison, “A theory of cortical responses,” Phil. Trans. R. Soc. B (2005) .
[3] K. Friston, J. Kilner, and L. Harrison, “A free energy principle for the brain,” J. Physiology (Paris) (2006) .
[4] K. J. Friston and K. E. Stephan, “Free-energy and the brain,” Synthese (2007) .
[5] K. Friston, “The free-energy principle: a unified brain theory?,” Nature Rev. Neuro. (2010) .
Posted in Minds & Machines | 1 Comment
## Deep learning and the renormalization group
In recent years, a number of works have pointed to similarities between deep learning (DL) and the renormalization group (RG) [1-7]. This connection was originally made in the context of certain lattice models, where decimation RG bears a superficial resemblance to the structure of deep networks in which one marginalizes over hidden degrees of freedom. However, the relation between DL and RG is more subtle than has been previously presented. The “exact mapping” put forth by [2], for example, is really just a formal analogy that holds for essentially any hierarchical model! That’s not to say there aren’t deeper connections between the two: in my earlier post on RBMs for example, I touched on how the cumulants encoding UV interactions appear in the renormalized couplings after marginalizing out hidden degrees of freedom, and we’ll go into this in much more detail below. But it’s obvious that DL and RG are functionally distinct: in the latter, the couplings (i.e., the connection or weight matrix) are fixed by the requirement that the partition function be preserved at each scale, while in the former, these connections are dynamically altered in the training process. There is, in other words, an important distinction between structure and dynamics which seems to have been overlooked. Understanding both these aspects is required to truly understand why deep learning “works”, but “learning” itself properly refers to the latter.
That said, structure is the first step to dynamics, so I wanted to see how far one could push the analogy. To that end, I started playing with simple Gaussian/Bernoulli RBMs, to see whether understanding the network structure — in particular, the appearance of hidden cumulants, hence the previous post in this two-part sequence — would shed light on, e.g., the hierarchical feature detection observed in certain image recognition tasks, the propagation of structured information more generally, or the relevance of criticality to both deep nets and biological brains. To really make the RG analogy precise, one would ideally like a beta function for the network, which requires a recursion relation for the couplings. So my initial hope was to derive an expression for this in terms of the cumulants of the marginalized neurons, and thereby gain some insight into how correlations behave in these sorts of hierarchical networks.
To start off, I wanted a simple model that would be analytically solvable while making the analogy with decimation RG completely transparent. So I began by considering a deep Boltzmann machine (DBM) with three layers: a visible layer of Bernoulli units ${x_i}$, and two hidden layers of Gaussian units ${y_i,z_i}$. The total energy function is
\displaystyle \begin{aligned} H(x,y,z)&=-\sum_{i=1}^na_ix_i+\frac{1}{2}\sum_{j=1}^my_j^2+\frac{1}{2}\sum_{k=1}^pz_k^2-\sum_{ij}A_{ij}x_iy_j-\sum_{jk}B_{jk}y_jz_k\\ &=-\mathbf{a}\,\mathbf{x}+\frac{1}{2}\left( \mathbf{y}^\mathrm{T}\mathbf{y}+\mathbf{z}^\mathrm{T}\mathbf{z}\right)-\mathbf{x}^\mathrm{T} A\,\mathbf{y}-\mathbf{y}^\mathrm{T} B\,\mathbf{z}~, \end{aligned} \ \ \ \ \ (1)
where on the second line I’ve switched to the more convenient vector notation; the dot product between vectors is implicit, i.e., ${\mathbf{a}\,\mathbf{x}=\mathbf{a}\cdot\mathbf{x}}$. Note that there are no intra-layer couplings, and that I’ve stacked the layers so that the visible layer ${x}$ is connected only to the intermediate hidden layer ${y}$, which in turn is connected only to the final hidden layer ${z}$. The connection to RG will be made by performing sequential marginalizations over first ${z}$, and then ${y}$, so that the flow from UV to IR is ${z\rightarrow y\rightarrow x}$. There’s an obvious Bayesian parallel here: we low-energy beings don’t have access to complete information about the UV, so the visible units are naturally identified with IR degrees of freedom, and indeed I’ll use these terms interchangeably throughout.
The joint distribution function describing the state of the machine is
$\displaystyle p(x,y,z)=Z^{-1}e^{-\beta H(x,y,z)}~, \quad\quad Z[\beta]=\prod_{i=1}^n\sum_{x_i=\pm1}\int\!\mathrm{d}^my\mathrm{d}^pz\,p(x,y,z)~, \ \ \ \ \ (2)$
where ${\int\!\mathrm{d}^my=\int\!\prod_{i=1}^m\mathrm{d} y_i}$, and similarly for ${z}$. Let us now consider sequential marginalizations to obtain ${p(x,y)}$ and ${p(x)}$. In Bayesian terms, these distributions characterize our knowledge about the theory at intermediate- and low-energy scales, respectively. The first of these is
$\displaystyle p(x,y)=\int\!\mathrm{d}^pz\,p(x,y,z)=Z^{-1}e^{-\beta\,\left(-\mathbf{a}\,\mathbf{x}+\frac{1}{2}\mathbf{y}^\mathrm{T}\mathbf{y}-\mathbf{x}^\mathrm{T} A\,\mathbf{y}\right)}\int\!\mathrm{d}^pz\,e^{-\beta\mathbf{z}^\mathrm{T}\mathbf{z}+\beta\mathbf{y}^\mathrm{T} B\,\mathbf{z}}~. \ \ \ \ \ (3)$
In order to preserve the partition function (see the discussion around (23) below), we then define the hamiltonian on the remaining, lower-energy degrees of freedom ${H(x,y)}$ such that
$\displaystyle p(x,y)=Z^{-1}e^{-\beta H(x,y)}~, \ \ \ \ \ (4)$
which implies
$\displaystyle H(x,y)=-\frac{1}{\beta}\ln\int\!\mathrm{d}^pz\,e^{-\beta H(x,y,z)} =-\mathbf{a}\,\mathbf{x}+\frac{1}{2}\mathbf{y}^\mathrm{T}\mathbf{y}-\mathbf{x}^\mathrm{T} A\,\mathbf{y} -\frac{1}{\beta}\ln\int\!\mathrm{d}^pz\,e^{-\beta\mathbf{z}^\mathrm{T}\mathbf{z}+\beta\mathbf{y}^\mathrm{T} B\,\mathbf{z}}~. \ \ \ \ \ (5)$
This is a simple multidimensional Gaussian integral:
$\displaystyle \int\!\mathrm{d}^pz\,\mathrm{exp}\left(-\frac{1}{2}\mathbf{z}^\mathrm{T} M\,\mathbf{z}+J^\mathrm{T}\mathbf{z}\right) =\sqrt{\frac{(2\pi)^p}{|M|}}\,\mathrm{exp}\left(\frac{1}{2}J^\mathrm{T} M^{-1} J\right)~, \ \ \ \ \ (6)$
where in the present case ${M=\beta\mathbf{1}}$ and ${J=\beta B^\mathrm{T}\mathbf{y}}$. We therefore obtain
\displaystyle \begin{aligned} H(x,y)&=-\mathbf{a}\,\mathbf{x}+\frac{1}{2}\mathbf{y}^\mathrm{T}\mathbf{y}-\mathbf{x}^\mathrm{T} A\,\mathbf{y} -\frac{1}{\beta}\ln\sqrt{\frac{(2\pi)^p}{\beta}}\exp\left(\frac{\beta}{2}\mathbf{y}^\mathrm{T} BB^\mathrm{T}\mathbf{y}\right)\\ &=f(\beta)-\mathbf{a}\,\mathbf{x}+\frac{1}{2}\mathbf{y}^\mathrm{T} \left(\mathbf{1}-B'\right)\mathbf{y}-\mathbf{x}^\mathrm{T} A\,\mathbf{y}~, \end{aligned} \ \ \ \ \ (7)
where we have defined
$\displaystyle f(\beta)\equiv\frac{1}{2\beta}\ln\frac{\beta}{(2\pi)^p} \qquad\mathrm{and}\qquad B_{ij}'\equiv\sum_{\mu=1}^pB_{i\mu}B_{\mu j}~. \ \ \ \ \ (8)$
The key point to note is that the interactions between the intermediate degrees of freedom ${y}$ have been renormalized by an amount proportional to the coupling with the UV variables ${z}$. And indeed, in the context of deep neural nets, the advantage of hidden units is that they encode higher-order interactions through the cumulants of the associated prior. To make this connection explicit, consider the prior distribution of UV variables
$\displaystyle q(z)=\sqrt{\frac{\beta}{(2\pi)^p}}\,\mathrm{exp}\left(-\frac{1}{2}\beta \,\mathbf{z}^\mathrm{T} \mathbf{z}\right)~. \ \ \ \ \ (9)$
The cumulant generating function for ${\mathbf{z}}$ with respect to this distribution is then
$\displaystyle K_{z}(t)=\ln\langle e^{\mathbf{t}\mathbf{z}}\rangle =\ln\sqrt{\frac{\beta}{(2\pi)^p}}\int\!\mathrm{d}^pz\,\mathrm{exp}\left(-\frac{1}{2}\beta \mathbf{z}^\mathrm{T} \mathbf{z}+\mathbf{t}\mathbf{z}\right) =\frac{1}{2\beta}\mathbf{t}\mathbf{t}^\mathrm{T}~, \ \ \ \ \ (10)$
cf. eqn. (4) in the previous post. So by choosing ${\mathbf{t}=\beta\mathbf{y}^\mathrm{T} B}$, we have
$\displaystyle K_{z}\left(\beta\mathbf{y}^\mathrm{T} B\right)=\frac{\beta}{2}\mathbf{y}^\mathrm{T} BB^\mathrm{T}\mathbf{y} \ \ \ \ \ (11)$
and may therefore express (7) as
$\displaystyle H(x,y)=f(\beta)-\mathbf{a}\,\mathbf{x}+\frac{1}{2}\mathbf{y}^\mathrm{T}\mathbf{y}-\frac{1}{\beta}K_{z}\left(\beta\mathbf{y}^\mathrm{T} B\right)~. \ \ \ \ \ (12)$
From the cumulant expansion in the aforementioned eqn. (4), in which the ${n^\mathrm{th}}$ cumulant is ${\kappa_n=K_X^{(n)}(t)\big|_{t=0}}$, we then see that the effect of the marginalizing out UV (i.e., hidden) degrees of freedom is to induce higher-order couplings between the IR (i.e., visible) units, with the coefficients of the interaction weighted by the associated cumulant:
\displaystyle \begin{aligned} H(x,y)&=f(\beta)-\mathbf{a}\,\mathbf{x}+\frac{1}{2}\mathbf{y}^\mathrm{T}\mathbf{y}-\frac{1}{\beta}\left(\kappa_1\mathbf{t}+\frac{\kappa_2}{2}\mathbf{t}^2+\ldots\right)\\ &=f(\beta)-\mathbf{a}\,\mathbf{x}+\frac{1}{2}\mathbf{y}^\mathrm{T}\mathbf{y}-\kappa_1\mathbf{y}^\mathrm{T} B -\frac{\kappa_2}{2}\beta\mathbf{y}^\mathrm{T} BB^\mathrm{T} \mathbf{y}-\ldots~. \end{aligned} \ \ \ \ \ (13)
For the Gaussian prior (9), one immediately sees from (10) that all cumulants except for ${\kappa_2=1/\beta}$ (the variance) vanish, whereupon (13) reduces to (7) above.
Now, let’s repeat this process to obtain the marginalized distribution of purely visible units ${p(x)}$. In analogy with Wilsonian RG, this corresponds to further lowering the cutoff scale in order to obtain a description of the theory in terms of low-energy degrees of freedom that we can actually observe. Hence, tracing out ${y}$, we have
$\displaystyle p(x)=\int\!\mathrm{d}\mathbf{y}\,p(x,y)=Z^{-1}e^{-\beta\,\left( f-\mathbf{a}\mathbf{x}\right)}\int\!\mathrm{d}\mathbf{y} \,\mathrm{exp}\left(-\frac{1}{2}\beta\mathbf{y}^\mathrm{T}\left(\mathbf{1}-B'\right)\mathbf{y}+\beta\mathbf{x}^\mathrm{T} A\,\mathbf{y}\right)~. \ \ \ \ \ (14)$
Of course, this is just another edition of (6), but now with ${M=\beta\left(\mathbf{1}-B'\right)}$ and ${J=\beta A^\mathrm{T}\mathbf{x}}$. We therefore obtain
\displaystyle \begin{aligned} p(x)&=Z^{-1}\sqrt{\frac{(2\pi)^m}{\beta\left(1-|B'|\right)}}\mathrm{exp}\left[-\beta\left( f(\beta)-\mathbf{a}\mathbf{x}+\frac{1}{2}\mathbf{x}^\mathrm{T} A\left(\mathbf{1}-B'\right)^{-1}A^\mathrm{T}\mathbf{x}\right)\right]\\ &=Z^{-1}\sqrt{\frac{(2\pi)^m}{\beta\left(1-|B'|\right)}}\mathrm{exp}\left[-\beta\left( f(\beta)-\mathbf{a}\mathbf{x}+\frac{1}{2}\mathbf{x}^\mathrm{T} A'\mathbf{x}\right)\right] \end{aligned} \ \ \ \ \ (15)
where we have defined ${A\equiv A\,\left(\mathbf{1}-B'\right)^{-1}\!A^\mathrm{T}}$. As before, we then define ${H(x)}$ such that
$\displaystyle p(x)=Z^{-1}e^{-\beta H(x)}~, \ \ \ \ \ (16)$
which implies
$\displaystyle H(x)=g(\beta)-\mathbf{a}\mathbf{x}+\frac{1}{2}\mathbf{x}^\mathrm{T} A'\mathbf{x}~, \ \ \ \ \ (17)$
where
$\displaystyle g(\beta)\equiv f(\beta)+\frac{1}{2\beta}\ln\frac{\beta\left(1-|B'|\right)}{(2\pi)^m} =\frac{1}{2\beta}\ln\frac{\beta^2\left(1-|B'|\right)}{(2\pi)^{p+m}}~, \ \ \ \ \ (18)$
where ${f(\beta)}$ and ${B'}$ are given in (8).
Again we see that marginalizing out UV information induces new couplings between IR degrees of freedom; in particular, the hamiltonian ${H(x)}$ contains a quadratic interaction terms between the visible units. And we can again write this directly in terms of a cumulant generating function for hidden degrees of freedom by defining a prior of the form (9), but with ${z\rightarrow w\in\{y,z\}}$ and ${p\rightarrow m\!+\!p}$. This will be of the form (10), where in this case we need to choose ${\mathbf{t}=\beta\mathbf{x}^\mathrm{T} A\left(\mathbf{1}-B'\right)^{-1/2}}$, so that
$\displaystyle K_z(t)=\frac{\beta}{2} \mathbf{x}^\mathrm{T} A\left(\mathbf{1}-B'\right)^{-1} A^\mathrm{T}\mathbf{x} \ \ \ \ \ (19)$
(where, since ${B'^\mathrm{T}=B'}$, the inverse matrix ${(\mathbf{1}-B')^{-1}}$ is also invariant under the transpose; at this stage of exploration, I’m being quite cavalier about questions of existence). Thus the hamiltonian of visible units may be written
$\displaystyle H(x)=g(\beta)-\mathbf{a}\mathbf{x}+\frac{1}{\beta}K_z(t)~, \ \ \ \ \ (20)$
with ${t}$ and ${g(\beta)}$ as above. Since the prior with which these cumulants are computed is again Gaussian, only the second cumulant survives in the expansion, and we indeed recover (17).
To summarize, the sequential flow from UV (hidden) to IR (visible) distributions is, from top to bottom,
\displaystyle \begin{aligned} p(x,y,z)&=Z^{-1}e^{-\beta H(x,y,z)}~,\qquad &H(x,y,z)&=-\mathbf{a}\,\mathbf{x}+\frac{1}{2}\left( \mathbf{y}^\mathrm{T}\mathbf{y}+\mathbf{z}^\mathrm{T}\mathbf{z}\right)-\mathbf{x}^\mathrm{T} A\,\mathbf{y}-\mathbf{y}^\mathrm{T} B\,\mathbf{z}~,\\ p(x,y)&=Z^{-1}e^{-\beta H(x,y)}~,\qquad &H(x,y)&=f(\beta)-\mathbf{a}\,\mathbf{x}+\frac{1}{2}\mathbf{y}^\mathrm{T}\left(\mathbf{1}-BB^\mathrm{T}\right)\mathbf{y}-\mathbf{x}^\mathrm{T} A\,\mathbf{y}~,\\ p(x)&=Z^{-1}e^{-\beta H(x)}~,\qquad &H(x)&=g(\beta)-\mathbf{a}\mathbf{x}+\frac{1}{2}\mathbf{x}^\mathrm{T} A\left(\mathbf{1}-BB^\mathrm{T}\right)^{-1}\!A^\mathrm{T}\mathbf{x}~, \end{aligned} \ \ \ \ \ (21)
where upon each marginalization, the new hamiltonian gains additional interaction terms/couplings governed by the cumulants of the UV prior (where “UV” is defined relative to the current cutoff scale, i.e., ${q(z)}$ for ${H(x,y)}$ and ${q(w)}$ for ${H(x)}$), and the renormalization of the partition function is accounted for by
$\displaystyle f(\beta)=\frac{1}{2\beta}\ln\frac{\beta}{(2\pi)^p}~, \quad\quad g(\beta)=-\frac{1}{2\beta}\ln\frac{\beta^2\left(1-|BB^\mathrm{T}|\right)}{(2\pi)^{p+m}}~. \ \ \ \ \ (22)$
As an aside, note that at each level, fixing the form (4), (16) is equivalent to imposing that the partition function remain unchanged. This is required in order to preserve low-energy correlation functions. The two-point correlator ${\langle x_1x_2\rangle}$ between visible (low-energy) degrees of freedom, for example, does not depend on which distribution we use to compute the expectation value, so long as the energy scale thereof is at or above the scale set by the inverse lattice spacing of ${\mathbf{x}}$:
\displaystyle \begin{aligned} \langle x_ix_j\rangle_{p(x,y,z)}&=\prod_{k=1}^n\sum_{x_k=\pm1}\int\!\mathrm{d}^my\mathrm{d}^pz\,x_ix_j\,p(x,y,z)\\ &=\prod_{k=1}^n\sum_{x_k=\pm1}x_ix_j\int\!\mathrm{d}^my\,p(x,y) =\langle x_ix_j\rangle_{p(x,y)}\\ &=\prod_{k=1}^n\sum_{x_k=\pm1}x_ix_j\,p(x) =\langle x_ix_j\rangle_{p(x)}~. \end{aligned} \ \ \ \ \ (23)
In other words, had we not imposed the invariance of the partition function, we would be altering the theory at each energy scale, and there would be no renormalization group relating them. In information-theoretic terms, this would represent an incorrect Bayesian inference procedure.
Despite (or perhaps, because of) its simplicity, this toy model makes manifest the fact that the RG prescription is reflected in the structure of the network, not the dynamics of learning per se. Indeed, Gaussian units aside, the above is essentially nothing more than real-space decimation RG on a 1d lattice, with a particular choice of couplings between “spins” ${\sigma\in\{x,y,z\}}$. In this analogy, tracing out ${y}$ and then ${x}$ maps to a sequential marginalization over even spins in the 1d Ising model. “Dynamics” in this sense are determined by the hamiltonian ${H(\sigma)}$, which is again one-dimensional. When one speaks of deep “learning” however, one views the network as two-dimensional, and “dynamics” refers to the changing values of the couplings as the network attempts to minimize the cost function. In short, RG lies in the fact that the couplings at each level in (21) encode the cumulants from hidden units in such a way as to ensure the preservation of visible correlations, whereas deep learning then determines their precise values in such a way as to reproduce a particular distribution. To say that deep learning itself is an RG is to conflate structure with function.
Nonetheless, there’s clearly an intimate parallel between RG and hierarchical Bayesian modeling at play here. As mentioned above, I’d originally hoped to derive something like a beta function for the cumulants, to see what insights theoretical physics and machine learning might yield to one another at this information-theoretic interface. Unfortunately, while one can see how the higher UV cumulants from ${q(z)}$ are encoded in those from ${q(w)}$, the appearance of the inverse matrix makes a recursion relation for the couplings in terms of the cumulants rather awkward, and the result would only hold for the simple Gaussian hidden units I’ve chosen for analytical tractability here.
Fortunately, after banging my head against this for a month, I learned of a recent paper [8] that derives exactly the sort of cumulant relation I was aiming for, at least in the case of generic lattice models. The key is to not assume a priori which degrees of freedom will be considered UV/hidden vs. IR/visible. That is, when I wrote down the joint distribution (2), I’d already distinguished which units would survive each marginalization. While this made the parallel with the familiar decimation RG immediate — and the form of (1) made the calculations simple to perform analytically — it’s actually a bit unnatural from both a field-theoretic and a Bayesian perspective: the degrees of freedom that characterize the theory in the UV may be very different from those that we observe in the IR (e.g., strings vs. quarks vs. hadrons), so we shouldn’t make the distinction ${x,y,z}$ at this level. Accordingly, [8] instead replace (2) with
$\displaystyle p(\chi)=\frac{1}{Z}\,e^{\mathcal{K}(\chi)} \ \ \ \ \ (24)$
where ${\mathcal{K}\equiv-\beta H}$ is the so-called reduced (i.e., dimensionless) hamiltonian, cf. the reduced/dimensionless free energy ${\tilde F\equiv\beta F}$ in the previous post. Note that ${\chi}$ runs over all the degrees of freedom in the theory, which are all UV/hidden variables at the present level.
Two words of notational warning ere we proceed: first, there is a sign error in eqn. (1) of [8] (in version 1; the negative in the exponent has been absorbed into ${\mathcal{K}}$ already). More confusingly however, their use of the terminology “visible” and “hidden” is backwards with respect to the RG analogy here. In particular, they coarse-grain a block of “visible” units into a single “hidden” unit. For reasons which should by now be obvious, I will instead stick to the natural Bayesian identifications above, in order to preserve the analogy with standard coarse-graining in RG.
Let us now repeat the above analysis in this more general framework. The real-space RG prescription consists of coarse-graining ${\chi\rightarrow\chi'}$, and then writing the new distribution ${p(\chi')}$ in the canonical form (24). In Bayesian terms, we need to marginalize over the information about ${\chi}$ contained in the distribution of ${\chi'}$, except that unlike in my simple example above, we don’t want to make any assumptions about the form of ${p(\chi,\chi')}$. So we instead express the integral — or rather, the discrete sum over lattice sites — in terms of the conditional distribution ${p(\chi'|\chi)}$:
$\displaystyle p(\chi')=\sum_\chi p(\chi,\chi') =\sum_\chi p(\chi'|\chi)\,p(\chi)~. \ \ \ \ \ (25)$
where ${\sum\nolimits_\chi=\prod_{i=1}^m\sum_{\chi_i}}$, ${\chi=\{\chi_1,\ldots,\chi_m\}}$. Denoting the new dimensionless hamiltonian ${\mathcal{K}'(\chi')}$, with ${\chi'=\{\chi_1',\ldots,\chi_n'\}}$, ${n, we therefore have
$\displaystyle e^{\mathcal{K}'(\chi')}=\sum_\chi p(\chi'|\chi)\,e^{\mathcal{K}(\chi)}~. \ \ \ \ \ (26)$
So far, so familiar, but now comes the trick: [8] split the hamiltonian ${\mathcal{K}(\chi)}$ into a piece containing only intra-block terms, ${\mathcal{K}_0(\chi)}$ (that is, interactions solely among the set of hidden units which is to be coarse-grained into a single visible unit), and a piece containing the remaining, inter-block terms, ${\mathcal{K}_1(\chi)}$ (that is, interactions between different aforementioned sets of hidden units).
Let us denote a block of hidden units by ${\mathcal{H}_j\ni\chi_i}$, such that ${\chi=\bigcup_{j=1}^n\mathcal{H}_j}$ (note that since ${\mathrm{dim}(\chi)=m}$, this implies ${\mathrm{dim}(\mathcal{H}_j)=m/n}$ degrees of freedom ${\chi_i}$ per block). To each ${\mathcal{H}_j}$, we associate a visible unit ${\mathcal{V}_j=\chi'_j}$, into which the constituent UV variables ${\chi_i}$ have been coarse-grained. (Note that, for the reasons explained above, we have swapped ${\mathcal{H}\leftrightarrow\mathcal{V}}$ relative to [8]). Then translation invariance implies
$\displaystyle p(\chi'|\chi)=\prod_{j=1}^np(\mathcal{V}_j|\mathcal{H}_j) \qquad\mathrm{and}\qquad \mathcal{K}_0(\chi)=\sum_{j=1}^n\mathcal{K}_b(\mathcal{H}_j)~, \ \ \ \ \ (27)$
where ${\mathcal{K}_b(\chi_i)}$ denotes a single intra-block term of the hamiltonian. With this notation in hand, (26) becomes
$\displaystyle e^{\mathcal{K}'(\chi')}=\sum_\chi e^{\mathcal{K}_1(\chi)}\prod_{j=1}^np(\mathcal{V}_j|\mathcal{H}_j)\,e^{\mathcal{K}_b(\mathcal{H}_j)}~. \ \ \ \ \ (28)$
Now, getting from this to the first line of eqn. (13) in [8] is a bit of a notational hazard. We must suppose that for each block ${\mathcal{H}_j}$, we can define the block-distribution ${p_j=Z_b^{-1}e^{\mathcal{K}_b(\mathcal{H}_j)}}$, where ${Z_b=\sum_{\chi_i\in\mathcal{H}_j}e^{\mathcal{K}_b(\mathcal{H}_j)}=\prod_{i=1}^{m/n}\sum_{\chi_i}e^{\mathcal{K}_b(\chi_i)}}$. Given the underlying factorization of the total Hilbert space, we furthermore suppose that the distribution of all intra-block contributions can be written ${p_0=Z_0^{-1}e^{\mathcal{K}_0(\chi)}}$, so that ${Z_0=\prod_{i=1}^m\sum_{\chi_i\in\chi}e^{\mathcal{K}_0(\chi)}}$. This implies that
$\displaystyle Z_0=\sum_{\chi_1}\ldots\sum_{\chi_m}e^{\mathcal{K}_b(\mathcal{H}_1)}\ldots\,e^{\mathcal{K}_b(\mathcal{H}_n)} =\prod_{j=1}^n\sum_{\chi_i\in\mathcal{H}_j}e^{\mathcal{K}_b(\mathcal{H}_j)} =\prod_{j=1}^nZ_b~. \ \ \ \ \ (29)$
Thus we see that we can insert a factor of ${1=Z_0\cdot\left(\prod\nolimits_jZ_b\right)^{-1}}$ into (28), from which the remaining manipulations are straightforward: we identify
$\displaystyle p(\mathcal{V}_j|\mathcal{H}_j)\frac{1}{Z_b}\,e^{\mathcal{K}_b(\mathcal{H}_j)} =p(\mathcal{V}_j|\mathcal{H}_j)\,p(\mathcal{H}_j) =p(\mathcal{H}_j|\mathcal{V}_j)\,p(\mathcal{V}_j) \ \ \ \ \ (30)$
and ${p(\chi')=\prod_{j=1}^np(\mathcal{V}_j)}$ (again by translation invariance), whereupon we have
$\displaystyle e^{\mathcal{K}'(\chi')}=Z_0\,p(\chi')\sum_\chi p(\chi|\chi')\,e^{\mathcal{K}_1(\chi)}=Z_0\,p(\chi')\langle e^{\mathcal{K}_1(\chi)}\rangle~, \ \ \ \ \ (31)$
where the expectation value is defined with respect to the conditional distribution ${p(\chi|\chi')}$. Finally, by taking the log, we obtain
$\displaystyle \mathcal{K}'(\chi')=\ln Z_0+\ln p(\chi')+\ln\langle e^{\mathcal{K}_1(\chi)}\rangle~, \ \ \ \ \ (32)$
which one clearly recognizes as a generalization of (12): ${\ln Z_0}$ accounts for the normalization factor ${f(\beta)}$; ${\ln p(\chi')}$ gives the contribution from the un-marginalized variables ${x,y}$; and the log of the expectation values is the contribution from the UV cumulants, cf. eqn. (1) in the previous post. Note that this is not the cumulant generating function itself, but corresponds to setting ${t\!=\!1}$ therein:
$\displaystyle K_X(1)=\ln\langle e^{X}\rangle=\sum_{n=1}\frac{\kappa_n}{n!}~. \ \ \ \ \ (33)$
Within expectation values, ${\mathcal{K}_1}$ becomes the dimensionless energy ${-\beta\langle E_1\rangle}$, so the ${n^\mathrm{th}}$ moment/cumulant picks up a factor of ${(-\beta)^n}$ relative to the usual energetic moments in eqn. (11) of the previous post. Thus we may express (32) in terms of the cumulants of the dimensionless hamiltonian ${\mathcal{K}_1}$ as
$\displaystyle \mathcal{K}'(\chi')=\ln Z_0+\ln p(\chi')+\sum_{n=1}\frac{(-\beta)^n}{n!}\kappa_n~, \ \ \ \ \ (34)$
where ${\kappa_n=K_{E_1}^{(n)}(t)\big|_{t=0}}$, and the expectation values in the generating functions are computed with respect to ${p(\chi|\chi')}$.
This is great, but we’re not quite finished, since we’d still like to determine the renormalized couplings in terms of the cumulants, as I did in the simple Gaussian DBM above. This requires expressing the new hamiltonian in the same form as the old, which allows one to identify exactly which contributions from the UV degrees of freedom go where. (See for example chapter 13 of [9] for a pedagogical exposition of this decimation RG procedure for the 1d Ising model). For the class of lattice models considered in [8] — by which I mean, real-space decimation with the imposition of a buffer zone — one can write down a formal expression for the canonical form of the hamiltonian, but expressions for the renormalized couplings themselves remain model-specific.
There’s more cool stuff in the paper [8] that I won’t go into here, concerning the question of “optimality” and the behaviour of mutual information in these sorts of networks. Suffice to say that, as alluded in the previous post, the intersection of physics, information theory, and machine learning is potentially rich yet relatively unexplored territory. While the act of learning itself is not an RG in a literal sense, the two share a hierarchical Bayesian language that may yield insights in both directions, and I hope to investigate this more deeply (pun intended) soon.
References
[1] C. Beny, “Deep learning and the renormalization group,” arXiv:1301.3124.
[2] P. Mehta and D. J. Schwab, “An exact mapping between the Variational Renormalization Group and Deep Learning,” arXiv:1410.3831.
[3] H. W. Lin, M. Tegmark, and D. Rolnick, “Why Does Deep and Cheap Learning Work So Well?,” arXiv:1608.08225.
[4] S. Iso, S. Shiba, and S. Yokoo, “Scale-invariant Feature Extraction of Neural Network and Renormalization Group Flow,” arXiv:1801.07172.
[5] M. Koch-Janusz and Z. Ringel, “Mutual information, neural networks and the renormalization group,” arXiv:1704.06279.
[6] S. S. Funai and D. Giataganas, “Thermodynamics and Feature Extraction by Machine Learning,” arXiv:1810.08179.
[7] E. Mello de Koch, R. Mello de Koch, and L. Cheng, “Is Deep Learning an RG Flow?,” arXiv:1906.05212.
[8] P. M. Lenggenhager, Z. Ringel, S. D. Huber, and M. Koch-Janusz, “Optimal Renormalization Group Transformation from Information Theory,” arXiv:1809.09632.
[9] R. K. Pathria, Statistical Mechanics. 1996. Butterworth-Heinemann, Second edition.
## Cumulants, correlators, and connectivity
Lately, I’ve been spending a lot of time exploring the surprisingly rich mathematics at the intersection of physics, information theory, and machine learning. Among other things, this has led me to a new appreciation of cumulants. At face value, these are just an alternative to the moments that characterize a given probability distribution function, and aren’t particularly exciting. Except they show up all over statistical thermodynamics, quantum field theory, and the structure of deep neural networks, so of course I couldn’t resist trying to better understand the information-theoretic connections to which this seems to allude. In the first part of this two-post sequence, I’ll introduce them in the context of theoretical physics, and then turn to their appearance in deep learning in the next post, where I’ll dive into the parallel with the renormalization group.
The relation between these probabilistic notions and statistical physics is reasonably well-known, though the literature on this particular point unfortunately tends to be slightly sloppy. Loosely speaking, the partition function corresponds to the moment generating function, and the (Helmholtz) free energy corresponds to the cumulant generating function. By way of introduction, let’s make this identification precise.
The moment generating function for a random variable ${X}$ is
$\displaystyle M_X(t)\equiv \langle e^{tX}\rangle~,\quad\quad\forall t\in\mathbb{R}~, \ \ \ \ \ (1)$
where ${\langle\ldots\rangle}$ denotes the expectation value for the corresponding distribution. (As a technical caveat: in some cases, the moments — and correspondingly, ${M_X}$ — may not exist, in which case one can resort to the characteristic function instead). By series expanding the exponential, we have
$\displaystyle M_X(t)=1+t\langle X\rangle+\frac{t^2}{2}\langle X^2\rangle+\ldots\,=1+\sum_{n=1}m_n\frac{t^n}{n!}~, \ \ \ \ \ (2)$
were ${m_n}$ is the ${n^\mathrm{th}}$ moment, which we can obtain by taking ${n}$ derivatives and setting ${t\!=\!0}$, i.e.,
$\displaystyle m_n=M_X^{(n)}(t)\Big|_{t=0}=\langle X^n\rangle~. \ \ \ \ \ (3)$
However, it is often more convenient to work with cumulants instead of moments (e.g., for independent random variables, the cumulant of the sum is the sum of the cumulants, thanks to the log). These are uniquely specified by the moments, and vice versa—unsurprisingly, since the cumulant generating function is just the log of the moment generating function:
$\displaystyle K_X(t)\equiv\ln M_X(t)=\ln\langle e^{tX}\rangle \equiv\sum_{n=1}\kappa_n\frac{t^n}{n!}~, \ \ \ \ \ (4)$
where ${\kappa_n}$ is the ${n^\mathrm{th}}$ cumulant, which we again obtain by differentiating ${n}$ times and setting ${t=0}$:
$\displaystyle \kappa_n=K_X^{(n)}(t)\big|_{t=0}~. \ \ \ \ \ (5)$
Note however that ${\kappa_n}$ is not simply the log of ${m_n}$!
Now, to make contact with thermodynamics, consider the case in which ${X}$ is the energy of the canonical ensemble. The probability of a given energy eigenstate ${E_i}$ is
$\displaystyle p_i\equiv p(E_i)=\frac{1}{Z[\beta]}e^{-\beta E_i}~, \quad\quad \sum\nolimits_ip_i=1~. \ \ \ \ \ (6)$
The moment generating function for energy is then
$\displaystyle M_E(t)=\langle e^{tE}\rangle=\sum_i p(E_i)e^{tE_i} =\frac{1}{Z[\beta]}\sum_ie^{-(\beta\!-\!t)E_i} =\frac{Z[\beta-t]}{Z[\beta]}~. \ \ \ \ \ (7)$
Thus we see that the partition function ${Z[\beta]}$ is not the moment generating function, but there’s clearly a close relationship between the two. Rather, the precise statement is that the moment generating function ${M_E(t)}$ is the ratio of two partition functions at inverse temperatures ${\beta-t}$ and ${\beta}$, respectively. We can gain further insight by considering the moments themselves, which are — by definition (3) — simply expectation values of powers of the energy:
$\displaystyle \langle E^n\rangle=M^{(n)}(t)\Big|_{t=0} =\frac{1}{Z[\beta]}\frac{\partial^n}{\partial t^n}Z[\beta\!-\!t]\bigg|_{t=0} =(-1)^n\frac{Z^{(n)}[\beta\!-\!t]}{Z[\beta]}\bigg|_{t=0} =(-1)^n\frac{Z^{(n)}[\beta]}{Z[\beta]}~. \ \ \ \ \ (8)$
Note that derivatives of the partition function with respect to ${t}$ have, at ${t=0}$, become derivatives with respect to inverse temperature ${\beta}$ (obviously, this little slight of hand doesn’t work for all functions; simple counter example: ${f(\beta-t)=(\beta-t)^2}$). Of course, this is simply a more formal expression for the usual thermodynamic expectation values. The first moment of energy, for example, is
$\displaystyle \langle E\rangle= -\frac{1}{Z[\beta]}\frac{\partial Z[\beta]}{\partial\beta} =\frac{1}{Z[\beta]}\sum_i E_ie^{-\beta E_i} =\sum_i E_i\,p_i~, \ \ \ \ \ (9)$
which is the ensemble average. At a more abstract level however, (8) expresses the fact that the average energy — appropriately normalized — is canonically conjugate to ${\beta}$. That is, recall that derivatives of the action are conjugate variables to those with respect to which we differentiate. In classical mechanics for example, energy is conjugate to time. Upon Wick rotating to Euclidean signature, the trajectories become thermal circles with period ${\beta}$. Accordingly, the energetic moments can be thought of as characterizing the dynamics of the ensemble in imaginary time.
Now, it follows from (7) that the cumulant generating function (4) is
$\displaystyle K_E(t)=\ln\langle e^{tE}\rangle=\ln Z[\beta\!-\!t]-\ln Z[\beta]~. \ \ \ \ \ (10)$
While the ${n^\mathrm{th}}$ cumulant does not admit a nice post-derivative expression as in (8) (though I suppose one could write it in terms of Bell polynomials if we drop the adjective), it is simple enough to compute the first few and see that, as expected, the first cumulant is the mean, the second is the variance, and the third is the third central moment:
\displaystyle \begin{aligned} K^{(1)}(t)\big|_{t=0}&=-\frac{Z'[\beta]}{Z[\beta]}=\langle E\rangle~,\\ K^{(2)}(t)\big|_{t=0}&=\frac{Z''[\beta]}{Z[\beta]}-\left(\frac{Z'[\beta]}{Z[\beta]}\right)^2=\langle E^2\rangle-\langle E\rangle^2\\ K^{(3)}(t)\big|_{t=0}&=-2\left(\frac{Z'[\beta]}{Z[\beta]}\right)^3+3\frac{Z'[\beta]Z''[\beta]}{Z[\beta]^2}-\frac{Z^{(3)}[\beta]}{Z[\beta]}\\ &=-2\langle E\rangle^3+3\langle E\rangle\langle E^2\rangle-\langle E^3\rangle =-\left\langle\left( E-\langle E\rangle\right)^3\right\rangle~. \end{aligned} \ \ \ \ \ (11)
where the prime denotes the derivative with respect to ${\beta}$. Note that since the second term in the generating function (10) is independent of ${t}$, the normalization drops out when computing the cumulants, so we would have obtained the same results had we worked directly with the partition function ${Z[\beta]}$ and taken derivatives with respect to ${\beta}$. That is, we could define
$\displaystyle K_E(\beta)\equiv-\ln Z[\beta] \qquad\implies\qquad \kappa_n=(-1)^{n-1}K_E^{(n)}(\beta)~, \ \ \ \ \ (12)$
where, in contrast to (5), we don’t need to set anything to zero after differentiating. This expression for the cumulant generating function will feature more prominently when we discuss correlation functions below.
So, what does the cumulant generating function have to do with the (Helmholtz) free energy, ${F[\beta]=-\beta^{-1}\ln Z[\beta]}$? Given the form (12), one sees that they’re essentially one and the same, up to a factor of ${\beta}$. And indeed the free energy is a sort of “generating function” in the sense that it allows one to compute any desired thermodynamic quantity of the system. The entropy, for example, is
$\displaystyle S=-\frac{\partial F}{\partial T}=\beta^2\frac{\partial F}{\partial\beta} =\beta\langle E\rangle+\ln Z=-\langle\ln p\rangle~, \ \ \ \ \ (13)$
where ${p}$ is the Boltzmann distribution (6). However, the factor of ${\beta^{-1}}$ in the definition of free energy technically prevents a direct identification with the cumulant generating function above. Thus it is really the log of the partition function itself — i.e., the dimensionless free energy ${\beta F}$ — that serves as the cumulant generating function for the distribution. We’ll return to this idea momentarily, cf. (21) below.
So much for definitions; what does it all mean? It turns out that in addition to encoding correlations, cumulants are intimately related to connectedness (in the sense of connected graphs), which underlies their appearance in QFT. Consider, for concreteness, a real scalar field ${\phi(x)}$ in 4 spacetime dimensions. As every student knows, the partition function
$\displaystyle Z[J]=\mathcal{N}\int\mathcal{D}\phi\,\exp\left\{i\!\int\!\mathrm{d}^dx\left[\mathcal{L}(\phi,\partial\phi)+J(x)\phi(x)\right]\right\} \ \ \ \ \ (14)$
is the generating function for the ${n}$-point correlator or Green function ${G^{(n)}(x_1,\ldots,x_n)}$:
$\displaystyle G^{(n)}(x_1,\ldots,x_n)=\frac{1}{i^n}\frac{\delta^nZ[J]}{\delta J(x_1)\ldots\delta J(x_n)}\bigg|_{J=0}~, \ \ \ \ \ (15)$
where the normalization ${\mathcal{N}}$ is fixed by demanding that in the absence of sources, we should recover the vacuum expectation value, i.e., ${Z[0]=\langle0|0\rangle=1}$. In the language of Feynman diagrams, the Green function contains all possible graphs — both connected and disconnected — that contribute to the corresponding transition amplitude. For example, the 4-point correlator of ${\phi^4}$ theory contains, at first order in the coupling, a disconnected graph consisting of two Feynman propagators, another disconnected graph consisting of a Feynman propagator and a 1-loop diagram, and an irreducible graph consisting of a single 4-point vertex. But only the last of these contributes to the scattering process, so it’s often more useful to work with the generating function for connected diagrams only,
$\displaystyle W[J]=-i\ln Z[J]~, \ \ \ \ \ (16)$
from which we obtain the connected Green function ${G_c^{(n)}}$:
$\displaystyle G_c^{(n)}(x_1,\ldots,x_n)=\frac{1}{i^{n-1}}\frac{\delta^nW[J]}{\delta J(x_1)\ldots\delta J(x_n)}\bigg|_{J=0}~. \ \ \ \ \ (17)$
The fact that the generating functions for connected vs. disconnected diagrams are related by an exponential, that is, ${Z[J]=\exp{i W[J]}}$, is not obvious at first glance, but it is a basic exercise in one’s first QFT course to show that the coefficients of various diagrams indeed work out correctly by simply Taylor expanding the exponential ${e^X=\sum_n\tfrac{X^n}{n!}}$. In the example of ${\phi^4}$ theory above, the only first-order diagram that contributes to the connected correlator is the 4-point vertex. More generally, one can decompose ${G^{(n)}}$ into ${G_c^{(n)}}$ plus products of ${G_c^{(m)}}$ with ${m. The factor of ${-i}$ in (16) goes away in Euclidean signature, whereupon we see that ${Z[J]}$ is analogous to ${Z[\beta]}$ — and hence plays the role of the moment generating function — while ${W[J]}$ is analogous to ${\beta F[\beta]}$ — and hence plays the role of the cumulant generating function in the form (12).
Thus, the ${n^\mathrm{th}}$ cumulant of the field ${\phi}$ corresponds to the connected Green function ${G_c^{(n)}}$, i.e., the contribution from correlators of all ${n}$ fields only, excluding contributions from lower-order correlators among them. For example, we know from Wick’s theorem that Gaussian correlators factorize, so the corresponding ${4}$-point correlator ${G^{(4)}}$ becomes
$\displaystyle \langle\phi_1\phi_2\phi_3\phi_4\rangle= \langle\phi_1\phi_2\rangle\langle\phi_3\phi_4\rangle +\langle\phi_1\phi_3\rangle\langle\phi_2\phi_4\rangle +\langle\phi_1\phi_4\rangle\langle\phi_2\phi_3\rangle~. \ \ \ \ \ (18)$
What this means is that there are no interactions among all four fields that aren’t already explained by interactions among pairs thereof. The probabilistic version of this statement is that for the normal distribution, all cumulants other than ${n=2}$ are zero. (For a probabilist’s exposition on the relationship between cumulants and connectivity, see the first of three lectures by Novak and LaCroix [1], which takes a more graph-theoretic approach).
There’s one more important function that deserves mention here: the final member of the triumvirate of generating functions in QFT, namely the effective action ${\Gamma[\phi]}$, defined as the Legendre transform of ${W[J]}$:
$\displaystyle \Gamma[\phi]=W[J]-\int\!\mathrm{d} x\,J(x)\phi(x)~. \ \ \ \ \ (19)$
The Legendre transform is typically first encountered in classical mechanics, where it relates the hamiltonian and lagrangian formulations. Geometrically, it translates between a function and its envelope of tangents. More abstractly, it provides a map between the configuration space (here, the sources ${J}$) and the dual vector space (here, the fields ${\phi}$). In other words, ${\phi}$ and ${J}$ are conjugate pairs in the sense that
$\displaystyle \frac{\delta\Gamma}{\delta\phi}=-J \qquad\mathrm{and}\qquad \frac{\delta W}{\delta J}=\phi~. \ \ \ \ \ (20)$
As an example that connects back to the thermodynamic quantities above: we already saw that ${E}$ and ${\beta}$ are conjugate variables by considering the partition function, but the Legendre transform reveals that the free energy and entropy are conjugate pairs as well. This is nicely explained in the lovely pedagogical treatment of the Legendre transform by Zia, Redish, and McKay [2], and also cleans up the disruptive factor of ${\beta}$ that prevented the identification with the cumulant generating function above. The basic idea is that since we’re working in natural units (i.e., ${k_B=1}$), the thermodynamic relation in the form ${\beta F+S=\beta E}$ (13) obscures the duality between the properly dimensionless quantities ${\tilde F\equiv\beta F}$ and ${\tilde S=S/k_B}$. From this perspective, it is more natural to work with ${\tilde F}$ instead, in which case we have both an elegant expression for the duality in terms of the Legendre transform, and a precise identification of the dimensionless free energy with the cumulant generating function (12):
$\displaystyle \tilde F(\beta)+\tilde S(E)=\beta E~, \qquad\qquad K_E(\beta)=\tilde F=\beta F~. \ \ \ \ \ (21)$
Now, back to QFT, in which ${\Gamma[\phi]}$ generates one-particle irreducible (1PI) diagrams. A proper treatment of this would take us too far afield, but can be found in any introductory QFT book, e.g., [3]. The basic idea is that in order to be able to cut a reducible diagram, we need to work at the level of vertices rather than sources (e.g., stripping off external legs, and identifying the bare propagator between irreducible parts). The Legendre transform (19) thus removes the dependence on the sources ${J}$, and serves as the generator for the vertex functions of ${\phi}$, i.e., the fundamental interaction terms. The reason this is called the effective action is that in perturbation theory, ${\Gamma[\phi]}$ contains the classical action as the leading saddle-point, as well as quantum corrections from the higher-order interactions in the coupling expansion.
In information-theoretic terms, the Legendre transform of the cumulant generating function is known as the rate function. This is a core concept in large deviations theory, and I won’t go into details here. Loosely speaking, it quantifies the exponential decay that characterizes rare events. Concretely, let ${X_i}$ represent the outcome of some measurement or operation (e.g., a coin toss); then the mean after ${N}$ independent trials is
$\displaystyle M_N=\frac{1}{N}\sum_{i=1}^N X_i~. \ \ \ \ \ (22)$
The probability that a given measurement deviates from this mean by some specified amount ${x}$ is
$\displaystyle P(M_N>x)\approx e^{-N I(x)} \ \ \ \ \ (23)$
where ${I(x)}$ is the aforementioned rate function. The formal similarity with the partition function in terms of the effective action, ${Z=e^{-\Gamma}}$, is obvious, though the precise dictionary between the two languages is not. I suspect that a precise translation between the two languages — physics and information theory — can be made here as well, in which the increasing rarity of events as one moves along the tail of the distribution correspond to increasingly high-order corrections to the quantum effective action, but I haven’t worked this out in detail.
Of course, the above is far from the only place in physics where cumulants are lurking behind the scenes, much less the end of the parallel with information theory more generally. In the next post, I’ll discuss the analogy between deep learning and the renormalization group, and see how Bayesian terminology can provide an underlying language for both.
References
[1] J. Novak and M. LaCroix, “Three lectures on free probability,” arXiv:1205.2097.
[2] R. K. P. Zia, E. F. Redish, and S. R. McKay, “Making sense of the Legendre transform, arXiv:0806.1147.
[3] L. H. Ryder, Quantum Field Theory. Cambridge University Press, 2 ed., 1996.
Posted in Physics | 1 Comment
## Black hole interiors, state dependence, and all that
In the context of firewalls, the crux of the paradox boils down to whether black holes have smooth horizons (as required by the equivalence principle). It turns out that this is intimately related to the question of how the interior of the black hole can be reconstructed by an external observer. AdS/CFT is particularly useful in this regard, because it enables one to make such questions especially sharp. Specifically, one studies the eternal black hole dual to the thermofield double (TFD) state, which cleanly captures the relevant physics of real black holes formed from collapse.
To construct the TFD, we take two copies of a CFT and entangle them such that tracing out either results in a thermal state. Denoting the energy eigenstates of the left and right CFTs by ${\tilde E_i}$ and ${E_i}$, respectively, the state is given by
$\displaystyle |\Psi\rangle=\frac{1}{\sqrt{Z_\beta}}\sum_ie^{-\beta E_i/2}|E_i\rangle\otimes|\tilde E_i\rangle\ \ \ \ \ (1)$
where ${Z_\beta}$ is the partition function at inverse temperature ${\beta}$. The AdS dual of this state is the eternal black hole, the two sides of which join the left and right exterior bulk regions through the wormhole. Incidentally, one of the fascinating questions inherent to this construction is how the bulk spacetime emerges in a manner consistent with the tensor product of boundary CFTs. For our immediate purposes, the important fact is that operators in the left source states behind the horizon from the perspective of the right (and vice-versa). The requirement from general relativity that the horizon be smooth then imposes conditions on the relationship between these operators.
A noteworthy approach in this vein is the so-called “state-dependence” proposal developed by Kyriakos Papadodimas and Suvrat Raju over the course of several years [1,2,3,4,5] (referred to as PR henceforth). Their collective florilegium spans several hundred pages, jam-packed with physics, and any summary I could give here would be a gross injustice. As alluded above however, the salient aspect is that they phrased the smoothness requirement precisely in terms of a condition on correlation functions of CFT operators across the horizon. Focusing on the two-point function for simplicity, this condition reads:
$\displaystyle \langle\Psi|\mathcal{O}(t,\mathbf{x})\tilde{\mathcal{O}}(t',\mathbf{x}')|\Psi\rangle =Z^{-1}\mathrm{tr}\left[e^{-\beta H}\mathcal{O}(t,\mathbf{x})\mathcal{O}(t'+i\beta/2,\mathbf{x}')\right]~. \ \ \ \ \ (2)$
Here, ${\mathcal{O}}$ is an exterior operator in the right CFT, while ${\tilde{\mathcal{O}}}$ is an interior operator in the left—that is, it represents an excitation localized behind the horizon from the perspective of an observer in the right wedge (see the diagram above). The analytical continuation ${\tilde{\mathcal{O}}(t,x)\rightarrow\mathcal{O}(t+i\beta/2,\mathbf{x})}$ arises from the KMS condition (i.e., the periodicity of thermal Green functions in imaginary time). Physically, this is essentially the statement that one should reproduce the correct thermal expectation values when restricted to a single copy of the CFT.
The question then becomes whether one can find such operators in the CFT that satisfy this constraint. That is, we want to effectively construct interior operators by acting only in the exterior CFT. PR achieve this through their so-called “mirror operators” ${\tilde{\mathcal{O}}_n}$, defined by
$\displaystyle \tilde{\mathcal{O}}_n\mathcal{O}_m|\Psi\rangle=\mathcal{O}_me^{-\beta H/2}\mathcal{O}_n^\dagger e^{\beta H/2}|\Psi\rangle~. \ \ \ \ \ (3)$
While appealingly compact, it’s more physically insightful to unpack this into the following two equations:
$\displaystyle \tilde{\mathcal{O}}_n|\Psi\rangle=e^{-\beta H/2}\mathcal{O}^\dagger e^{\beta H/2}|\Psi\rangle~, \quad\quad \tilde{\mathcal{O}}_n\mathcal{O}_m|\Psi\rangle=\mathcal{O}_m\tilde{\mathcal{O}}_n|\Psi\rangle~. \ \ \ \ \ (4)$
The key point is that these operators are defined via their action on the state ${|\Psi\rangle}$, i.e., they are state-dependent operators. For example, the second equation does not say that the operators commute; indeed, as operators, ${[\tilde{\mathcal{O}}_n,\mathcal{O}_m]\neq0}$. But the commutator does vanish in this particular state, ${[\tilde{\mathcal{O}}_n,\mathcal{O}_m]|\Psi\rangle=0}$. This may seem strange at first sight, but it’s really just a matter of carefully distinguishing between equations that hold as operator statements and those that hold only at the level of states. Indeed, this is precisely the same crucial distinction between localized states vs. localized operators that I’ve discussed before.
PR’s work created considerable backreaction, most of which centered around the nature of this “unusual” state dependence, which generated considerable confusion. Aspects of PR’s proposal were critiqued in a number of papers, particularly [6,7], which led many to claim that state dependence violates quantum mechanics. Coincidentally, I had the good fortune of being a visiting grad student at the KITP around this time, where these issues where hotly debated during a long-term workshop on quantum gravity. This was a very stimulating time, when the firewall paradox was still center-stage, and the collective confusion was almost palpable. Granted, I was a terribly confused student, but the fact that the experts couldn’t even agree on language — let alone physics — certainly didn’t do me any favours. Needless to say, the debate was never resolved, and the field’s collective attention span eventually drifted to other things. Yet somehow, the claim that state dependence violates quantum mechanics (or otherwise constitutes an unusual or potentially problematic modification thereof) has since risen to the level of dogma, and one finds it regurgitated again and again in papers published since.
Motivated in part by the desire to understand the precise nature of state dependence in this context (though really, it was the interior spacetime I was after), I wrote a paper [8] last year in an effort to elucidate and connect a number of interesting ideas in the emergent spacetime or “It from Qubit” paradigm. At a technical level, the only really novel bit was the application of modular inclusions, which provide a relatively precise framework for investigating the question of how one represents information in the black hole interior, and perhaps how the bulk spacetime emerges more generally. The relation between Tomita-Takesaki theory itself (a subset of algebraic quantum field theory) and state dependence was already pointed out by PR [3], and is highlighted most succinctly in Kyriakos’ later paper in 2017 [9], which was the main stimulus behind my previous post on the subject. However, whereas PR arrived at this connection from more physical arguments (over the course of hundreds of pages!), I took essentially the opposite approach: my aim was to distill the fundamental physics as cleanly as possible, to which end modular theory proves rather useful for demystifying issues which might otherwise remain obfuscated by details. The focus of my paper was consequently decidedly more conceptual, and represents a personal attempt to gain deeper physical insight into a number of tantalizing connections that have appeared in the literature in recent years (e.g., the relationship between geometry and entanglement represented by Ryu-Takayanagi, or the ontological basis for quantum error correction in holography).
I’ve little to add here that isn’t said better in [8] — and indeed, I’ve already written about various aspects on other occasions — so I invite you to simply read the paper if you’re interested. Personally, I think it’s rather well-written, though card-carrying members of the “shut up and calculate” camp may find it unpalatable. The paper touches on a relatively wide range of interrelated ideas in holography, rather than state dependence alone; but the upshot for the latter is that, far from being pathological, state dependence (precisely defined) is
1. a natural part of standard quantum field theory, built-in to the algebraic framework at a fundamental level, and
2. an inevitable feature of any attempt to represent information behind horizons.
I hasten to add that “information” is another one of those words that physicists love to abuse; here, I mean a state sourced by an operator whose domain of support is spacelike separated from the observer (e.g., excitations localized on the opposite side of a Rindler/black hole horizon). The second statement above is actually quite general, and results whenever one attempts to reconstruct an excitation outside its causal domain.
So why I am devoting an entire post to this, if I’ve already addressed it at length elsewhere? There were essentially two motivations for this. One is that I recently had the opportunity to give a talk about this at the YITP in Kyoto (the slides for which are available from the program website here), and I fell back down the rabbit hole in the course of reviewing. In particular, I wanted to better understand various statements in the literature to the effect that state dependence violates quantum mechanics. I won’t go into these in detail here — one can find a thorough treatment in PRs later works — but suffice to say the primary issue seems to lie more with language than physics: in the vast majority of cases, the authors simply weren’t precise about what they meant by “state dependence” (though in all fairness, PR weren’t totally clear on this either), and the rare exceptions to this had little to nothing to do with the unqualified use of the phrase here. I should add the disclaimer that I’m not necessarily vouching for every aspect of PR’s approach—they did a hell of a lot more than just write down (3), after all. My claim is simply that state dependence, in the fundamental sense I describe, is a feature, not a bug. Said differently, even if one rejects PR’s proposal as a whole, the state dependence that ultimately underlies it will continue to underlie any representation of the black hole interior. Indeed, I had hoped that my paper would help clarify things in this regard.
And this brings me to the second reason, namely: after my work appeared, a couple other papers [10,11] were written that continued the offense of conflating the unqualified phrase “state dependence” with different and not-entirely-clear things. Of course, there’s no monopoly on terminology: you can redefine terms however you like, as long as you’re clear. But conflating language leads to conflated concepts, and this is where we get into trouble. Case in point: both papers contain a number of statements which I would have liked to see phrased more carefully in light of my earlier work. Indeed, [11] goes so far as to write that “interior operators cannot be encoded in the CFT in a state-dependent way.” On the contrary, as I had explained the previous year, it’s actually the state independent operators that lead to pathologies (specifically, violations of unitarity)! Clearly, whatever the author means by this, it is not the same state dependence at work here. So consider this a follow-up attempt to stop further terminological misuse confusion.
As I’ll discuss below, both these works — and indeed most other proposals from quantum information — ultimately rely on the Hayden-Preskill protocol [12] (and variations thereof), so the real question is how the latter relates to state dependence in the unqualified use of the term (i.e., as defined via Tomita-Takesaki theory; I refer to this usage as “unqualified” because if you’re talking about firewalls and don’t specify otherwise, then this is the relevant definition, as it underlies PR’s introduction of the phrase). I’ll discuss this in the context of Beni’s work [10] first, since it’s the clearer of the two, and comment more briefly on Geof’s [11] below.
In a nutshell, the classic Hayden-Preskill result [12] is a statement about the ability to decode information given only partial access to the complete quantum state. In particular, one imagines that the proverbial Alice throws a message comprised of ${k}$ bits of information into a black hole of size ${n\!-\!k\gg k}$. The black hole will scramble this information very quickly — the details are not relevant here — such that the information is encoded in some complicated manner among the (new) total ${n}$ bits of the black hole. For example, if we model the internal dynamics as a simple permutation of Alice’s ${k}$-bit message, it will be transformed into one of ${2^k}$ possible ${n}$-bit strings—a huge number of possibilities!
Now suppose Bob wishes to reconstruct the message by collecting qubits from the subsequent Hawking radiation. Naïvely, one would expect him to need essentially all ${n}$ bits (i.e., to wait until the black hole evaporates) in order to accurately determine among the ${2^k}$ possibilities. The surprising result of Hayden-Preskill is that in fact he needs only slightly more than ${k}$ bits. The time-scale for this depends somewhat on the encoding performed by the black hole, but in principle, this means that Bob can recover the message just after the scrambling time. However, a crucial aspect of this protocol is that Bob knows the initial microstate of the black hole (i.e., the original ${(n\!-\!k)}$-bit string). This is the source of the confusing use of the phrase “state dependence”, as we’ll see below.
Of course, as Hayden and Preskill acknowledge, this is a highly unrealistic model, and they didn’t make any claims about being able to reconstruct the black hole interior in this manner. Indeed, the basic physics involved has nothing to do with black holes per se, but is a generic feature of quantum error correcting codes, reminiscent of the question of how to share (or decode) a quantum “secret” [13]. The novel aspect of Beni’s recent work [10] is to try to apply this to resolving the firewall paradox, by explicitly reconstructing the interior of the black hole.
Beni translates the problem of black hole evaporation into the sort of circuit language that characterizes much of the quantum information literature. One the one hand, this is nice in that it enables him to make very precise statements in the context of a simple qubit model; and indeed, at the mathematical level, everything’s fine. The confusion arises when trying to lift this toy model back to the physical problem at hand. In particular, when Beni claims to reconstruct state-independent interior operators, he is — from the perspective espoused above — misusing the terms “state-independent”, “interior”, and “operator”.
Let’s first summarize the basic picture, and then try to elucidate this unfortunate linguistic hat-trick. The Hayden-Preskill protocol for recovering information from black holes is illustrated in the figure from Beni’s paper below. In this diagram, ${B}$ is the black hole, which is maximally entangled (in the form of some number of EPR pairs) with the early radiation ${R}$. Alice’s message corresponds to the state ${|\psi\rangle}$, which we imagine tossing into the black hole as ${A}$. One then evolves the black hole (which now includes Alice’s message ${A}$) by some unitary operator ${U}$, which scrambles the information as above. Subsequently, ${D}$ represents some later Hawking modes, with the remaining black hole denoted ${C}$. Bob’s task is to reconstruct the state ${|\psi\rangle}$ by acting on ${D}$ and ${R}$ (since he only has access to the exterior) with some operator ${V}$.
Now, Beni’s “state dependence” refers to the fact that the technical aspects of this construction relied on putting the initial state of the black hole ${+}$ radiation ${|\Psi\rangle_{BR}}$ in the form of a collection of EPR pairs ${|\Phi\rangle_{EPR}}$. This can be done by finding some unitary operator ${K}$, such that
$\displaystyle |\Psi\rangle_{BR}=(I\otimes K)|\Phi\rangle_{EPR}~, \ \ \ \ \ (5)$
(Here, one imagines that ${B}$ is further split into a Hawking mode and its partner just behind the horizon, so that ${I}$ acts on the interior mode while ${K}$ affects only the new Hawking mode and the early radiation; see [10] for details). This is useful because it enables the algorithm to work for arbitrary black holes: for some other initial state ${|\Psi'\rangle_{BR}}$, one can find some other ${K'\neq K}$ which results in the same state ${|\Phi\rangle_{EPR}}$. The catch is that Bob’s reconstruction depends on ${K}$, and therefore, on the initial state ${|\Psi\rangle_{BR}}$. But this is to be expected: it’s none other than the Hayden-Preskill requirement above that Bob needs to know the exact microstate of the system in order for the decoding protocol to work. It is in this sense that the Hayden-Preskill protocol is “state-dependent”, which clearly references something different than what we mean here. The reason I go so far as to call this a misuse of terminology is that Beni explicitly conflates the two, and regurgitates the claim that these “state-dependent interior operators” lead to inconsistencies with quantum mechanics, referencing work above. Furthermore, as alluded above, there’s an additional discontinuity of concepts here, namely that the “state-dependent” operator ${V}$ is obviously not the “interior operator” to which we’re referring: it’s support isn’t even restricted to the interior, nor does it source any particular state localized therein!
Needless to say, I was in a superposition of confused and unhappy with the terminology in this paper, until I managed to corner Beni at YITP for a couple hours at the aforementioned workshop, where he was gracious enough to clarify various aspects of his construction. It turns out that he actually has in mind something different when he refers to the interior operator. Ultimately, the identification still fails on these same counts, but it’s worth following the idea a bit further in order to see how he avoids the “state dependence” in the vanilla Hayden-Preskill set-up above. (By now I shouldn’t have to emphasize that this form of “state dependence” isn’t problematic in any fundamental sense, and I will continue to distinguish it from the latter, unqualified use of the phrase with quotation marks).
One can see from the above diagram that the state of the black hole ${|\Psi\rangle}$ — before Alice & Bob start fiddling with it — can be represented by the following diagram, also from [10]:
where ${R}$, ${D}$, and ${C}$ are again the early radiation, later Hawking mode, and remaining black hole, respectively. The problem Beni solves is finding the “partner” — by which he means, the purification — of ${D}$ in ${CR}$. Explicitly, he wants to find the operator ${\tilde{\mathcal{O}}_{CR}^T}$ such that
$\displaystyle (\mathcal{O}_D\otimes I_{CR})|\Psi\rangle=(I_D\otimes\tilde{\mathcal{O}}_{CR}^T)|\Psi\rangle~. \ \ \ \ \ (6)$
Note that there’s yet another language ergo conceptual discontinuity here, namely that Beni uses “qubit”, “mode”, and “operator” interchangeably (indeed, when I pressed him on this very point, he confirmed that he regards these as synonymous). These are very different beasts in the physical problem at hand; however, for the purposes of Beni’s model, the important fact is that one can push the operator ${O_D}$ (which one should think of as some operator that acts on ${D}$) through the unitary ${U}$ to some other operator ${\tilde{\mathcal{O}}_{CR}^T}$ that acts on both ${C}$ and ${R}$:
He then goes on to show that one can reconstruct this operator ${\tilde{\mathcal{O}}_{CR}}$ independently of the initial state of the black hole (i.e., the operator ${K}$) by coupling to an auxiliary system. Of course, I’m glossing over a great number of details here; in particular, Beni transmutes the outgoing mode ${D}$ into a representation of the interior mode in his model, and calls whatever purifies it the “partner” ${\tilde{\mathcal{O}}_{CR}^T}$. Still, I personally find this a bit underwhelming; but then, from my perspective, the Hayden-Preskill “state dependence” wasn’t the issue to begin with; quantum information people may differ, and in any case Beni’s construction is still a neat toy model in its own domain.
However, the various conflations above are problematic when one attempts to map back to the fundamental physics we’re after: ${\tilde{\mathcal{O}}_{CR}^T}$ is not the “partner” of the mode ${D}$ in the relevant sense (namely, the pairwise entangled modes required for smoothness across the horizon), nor does it correspond to PR’s mirror operator (since its support actually straddles both sides of the horizon). Hence, while Beni’s construction does represent a non-trivial refinement of the original Hayden-Preskill protocol, I don’t think it solves the problem.
So if this model misses the point, what does Hayden-Preskill actually achieve in this context? Indeed, even in the original paper [12], they clearly showed that one can recover a message from inside the black hole. Doesn’t this mean we can reconstruct the interior in a state-independent manner, in the proper use of the term?
Well, not really. Essentially, Hayden-Preskill (in which I’m including Beni’s model as the current state-of-the-art) & PR (and I) are asking different questions: the former are asking whether it’s possible to decode messages to which one would not normally have access (answer: yes, if you know enough about the initial state and any auxiliary systems), while the latter are asking whether physics in the interior of the black hole can be represented in the exterior (answer: yes, if you use state-dependent operators). Reconstructing information about entangled qubits is not quite the same things as reconstructing the state in the interior. Consider a single Bell pair for simplicity, consisting of an exterior qubit (say, ${D}$ in Beni’s model) and the interior “partner” that purifies it. Obviously, this state isn’t localized to either side, and so does not correspond to an interior operator.
The distinction is perhaps a bit subtle, so let me try to clarify. Let us define the operator ${\mathcal{O}_A}$ with support behind the horizon, whose action on the vacuum creates the state in which Alice’s message has been thrown into the black hole; i.e., let
$\displaystyle |\Psi\rangle_A=(\mathcal{O}_A\otimes I_R)|\Psi\rangle_{EPR} \ \ \ \ \ (7)$
denote the state of the black hole containing Alice’s message, where the identity factor acts on the early radiation. Now, the fundamental result of PR is that if Bob wishes to reconstruct the interior of the black hole (concretely, the excitation behind the horizon corresponding to Alice’s message), he can only do so using state-dependent operators. In other words, there is no operator with support localized to the exterior which precisely equals ${\mathcal{O}_A}$; but Bob can find a state that approximates ${|\Psi\rangle_A}$ arbitrarily well. This is more than just an operational restriction, but rather stems from an interesting trade-off between locality and unitarity which seems built-in to the theory at a fundamental level; see [8] for details.
Alternatively, Bob might not care about directly reconstructing the black hole interior (since he’s not planning on following Alice in, he’s not concerned about verifying smoothness as we are). Instead he’s content to wait for the “information” in this state to be emitted in the Hawking radiation. In this scenario, Bob isn’t trying to reconstruct the black hole interior corresponding to (7)—indeed, by now this state has long-since been scrambled. Rather, he’s only concerned with recovering the information content of Alice’s message—a subtly related but crucially distinct procedure from trying to reconstruct the corresponding state in the interior. And the fundamental result of Hayden-Preskill is that, given some admittedly idealistic assumptions (i.e., to the extent that the evaporating black hole can be viewed as a simple qubit model) this can also be done.
In the case of Geof’s paper [11], there’s a similar but more subtle language difference at play. Here the author means “state dependence” to mean something different from both Beni and PR/myself; specifically, he means “state dependence” in the context of quantum error correction (QEC). This is more clearly explained his earlier paper with Hayden [14], and refers to the fact that in general, a given boundary operator may only reconstruct a given bulk operator for a single black hole microstate. Conversely, a “state-independent” boundary operator, in their language, is one which approximately reconstructs a given bulk operator in a larger class of states—specifically, all states in the code subspace. Note that the qualifier “approximate” is crucial here. Otherwise, schematically, if ${\epsilon}$ represents some small perturbation of the vacuum ${\Omega}$ (where “small” means that the backreaction is insufficient to move us beyond the code subspace), then an exact reconstruction of the operator ${\mathcal{O}}$ that sources the state ${|\Psi\rangle=\mathcal{O}|\Omega\rangle}$ would instead produce some other state ${|\Psi'\rangle=\mathcal{O}|\Omega+\epsilon\rangle}$. So at the end of the day, I simply find the phrasing in [11] misleading; the lack of qualifiers makes many of his statements about “state-(in)dependence” technically erroneous, even though they’re perfectly correct in the context of approximate QEC.
At the end of the day however, these [10,11,14] are ultimately quantum information-theoretic models, in which the causal structure of the original problem plays no role. This is obvious in Beni’s case [10], in which Hayden-Preskill boils down to the statement that if one knows the exact quantum state of the system (or approximately so, given auxiliary qubits), then one can recover information encoded non-locally (e.g., Alice’s bit string) from substantially fewer qubits than one would naïvely expect. It’s more subtle in [11,14], since the authors work explicitly in the context of entanglement wedge reconstruction in AdS/CFT, which superficially would seem to include aspects of the spacetime structure. However, they take the black hole to be included in the entanglement wedge (i.e., code subspace) in question, and ask only whether an operator in the corresponding boundary region “works” for every state in this (enlarged) subspace, regardless of whether the bulk operator we’re trying to reconstruct is behind the horizon (i.e., ignoring the localization of states in this subspace). And this is where super-loading the terminology “state-(in)dependence” creates the most confusion. For example, when Geof writes that “boundary reconstructions are state independent if, and only if, the bulk operator is contained in the entanglement wedge” (emphasis added), he is making a general statement that holds only at the level of QEC codes. If the bulk operator lies behind the horizon however, then simply placing the black hole within the entanglement wedge does not alter the fact that a state-independent reconstruction, in the unqualified use of the phrase, does not exist.
Of course, as the authors of [14] point out in this work, there is a close relationship between state-dependent in QEC and in PR’s use of the term. Indeed, one of the closing thoughts of my paper [8] was the idea that modular theory may provide an ontological basis for the epistemic utility of QEC in AdS/CFT. Hence I share the authors’ view that it would be very interesting to make the relation between QEC and (various forms of) state-dependence more precise.
I should add that in Geof’s work [11], he seems to skirt some of the interior/exterior objections above by identifying (part of) the black hole interior with the entanglement wedge of some auxiliary Hilbert space that acts as a reservoir for the Hawking radiation. Here I can only confess some skepticism as to various aspects of his construction (or rather, the legitimacy of his interpretation). In particular, the reservoir is artificially taken to lie outside the CFT, which would normally contain a complete representation of exterior states, including the radiation. Consequently, the question of whether it has a sensible bulk dual at all is not entirely clear, much less a geometric interpretation as the “entanglement wedge” behind the horizon, whose boundary is the origin rather than asymptotic infinity.
A related paper [15] by Almeiri, Engelhardt, Marolf, and Maxfield appeared on the arXiv simultaneously with Geof’s work. While these authors are not concerned with state-dependence per se, they do provide a more concrete account of the effects on the entanglement wedge in the context of a precise model for an evaporating black hole in AdS/CFT. The analogous confusion I have in this case is precisely how the Hawking radiation gets transferred to the left CFT, though this may eventually come down to language as well. In any case, this paper is more clearly written, and worth a read (happily, Henry Maxfield will speak about it during one of our group’s virtual seminars in August, so perhaps I’ll obtain greater enlightenment about both works then).
Having said all that, I believe all these works are helpful in strengthening our understanding, and exemplify the productive confluence of quantum information theory, holography, and black holes. A greater exchange of ideas from various perspectives can only lead to further progress, and I would like to see more work in all these directions.
I would like to thank Beni Yoshida, Geof Penington, and Henry Maxfield for patiently fielding my persistent questions about their work, and beg their pardon for the gross simplifications herein. I also thank the YITP in Kyoto for their hospitality during the Quantum Information and String Theory 2019 / It from Qubit workshop, where most of this post was written amidst a great deal of stimulating discussion.
References
1. K. Papadodimas and S. Raju, “Remarks on the necessity and implications of state-dependence in the black hole interior,” arXiv:1503.08825
2. K. Papadodimas and S. Raju, “Local Operators in the Eternal Black Hole,” arXiv:1502.06692
3. K. Papadodimas and S. Raju, “State-Dependent Bulk-Boundary Maps and Black Hole Complementarity,” arXiv:1310.6335
4. K. Papadodimas and S. Raju, “Black Hole Interior in the Holographic Correspondence and the Information Paradox,” arXiv:1310.6334
5. K. Papadodimas and S. Raju, “An Infalling Observer in AdS/CFT,” arXiv:1211.6767
6. D. Harlow, “Aspects of the Papadodimas-Raju Proposal for the Black Hole Interior,” arXiv:1405.1995
7. D. Marolf and J. Polchinski, “Violations of the Born rule in cool state-dependent horizons,” arXiv:1506.01337
8. R. Jefferson, “Comments on black hole interiors and modular inclusions,” arXiv:1811.08900
9. K. Papadodimas, “A class of non-equilibrium states and the black hole interior,” arXiv:1708.06328
10. B. Yoshida, “Firewalls vs. Scrambling,” arXiv:1902.09763
11. G. Penington, “Entanglement Wedge Reconstruction and the Information Paradox,” arXiv:1905.08255
12. P. Hayden and J. Preskill, “Black holes as mirrors: Quantum information in random subsystems,” arXiv:0708.4025
13. R. Cleve, D. Gottesman, and H.-K. Lo, “How to share a quantum secret,” arXiv:quant-ph/9901025
14. P. Hayden and G. Penington, “Learning the Alpha-bits of Black Holes,” arXiv:1807.06041
15. A. Almheiri, N. Engelhardt, D. Marolf, and H. Maxfield, “The entropy of bulk quantum fields and the entanglement wedge of an evaporating black hole,” arXiv:1905.08762
## Variational autoencoders
As part of one of my current research projects, I’ve been looking into variational autoencoders (VAEs) for the purpose of identifying and analyzing attractor solutions within higher-dimensional phase spaces. Of course, I couldn’t resist diving into the deeper mathematical theory underlying these generative models, beyond what was strictly necessary in order to implement one. As in the case of the restricted Boltzmann machines I’ve discussed before, there are fascinating relationships between physics, information theory, and machine learning at play here, in particular the intimate connection between (free) energy minimization and Bayesian inference. Insofar as I actually needed to learn how to build one of these networks however, I’ll start by introducing VAEs from a somewhat more implementation-oriented mindset, and discuss the deeper physics/information-theoretic aspects afterwards.
Mathematical formulation
An autoencoder is a type of neural network (NN) consisting of two feedforward networks: an encoder, which maps an input ${X}$ onto a latent space ${Z}$, and a decoder, which maps the latent representation ${Z}$ to the output ${X'}$. The idea is that ${\mathrm{dim}(Z)<\mathrm{dim}(X)=\mathrm{dim}(X')}$, so that information in the original data is compressed into a lower-dimensional “feature space”. For this reason, autoencoders are often used for dimensional reduction, though their applicability to real-world problems seems rather limited. Training consists of minimizing the difference between ${X}$ and ${X'}$ according to some suitable loss function. They are a form of unsupervised (or rather, self-supervised) learning, in which the NN seeks to learn highly compressed, discrete representation of the input.
VAEs inherit the network structure of autoencoders, but are fundamentally rather different in that they learn the parameters of a probability distribution that represents the data. This makes them much more powerful than their simpler precursors insofar as they are generative models (that is, they can generate new examples of the input type). Additionally, their statistical nature — in particular, learning a continuous probability distribution — makes them vastly superior in yielding meaningful results from new/test data that gets mapped to novel regions of the latent space. In a nutshell, the encoding ${Z}$ is generated stochastically, using variational techniques—and we’ll have more to say on what precisely this means below.
Mathematically, a VAE is a latent-variable model ${p_\theta(x,z)}$ with latent variables ${z\in Z}$ and observed variables (i.e., data) ${x\in X}$, where ${\theta}$ represents the parameters of the distribution. (For example, Gaussian distributions are uniquely characterized by their mean ${\mu}$ and standard deviation ${\sigma}$, in which case ${\theta\in\{\mu,\sigma\}}$; more generally, ${\theta}$ would parametrize the masses and couplings of whatever model we wish to construct. Note that we shall typically suppress the subscript ${\theta}$ where doing so does not lead to ambiguity). This joint distribution can be written
$\displaystyle p(x,z)=p(x|z)p(z)~. \ \ \ \ \ (1)$
The first factor on the right-hand side is the decoder, i.e., the likelihood ${p(x|z)}$ of observing ${x}$ given ${z}$; this provides the map from ${Z\rightarrow X'\simeq X}$. This will typically be either a multivariate Gaussian or Bernoulli distribution, implemented by an RBM with as-yet unlearned weights and biases. The second factor is the prior distribution of latent variables ${p(z)}$, which will be related to observations ${x}$ via the likelihood function (i.e., the decoder). This can be thought of as a statement about the variable ${z}$ with the data ${x}$ held fixed. In order to be computational tractable, we want to make the simplest possible choice for this distribution; accordingly, one typically chooses a multivariate Gaussian,
$\displaystyle p(z)=\mathcal{N}(0,1)~. \ \ \ \ \ (2)$
In the context of Bayesian inference, this is technically what’s known as an informative prior, since it assumes that any other parameters in the model are sufficiently small that Gaussian sampling from ${Z}$ does not miss any strongly relevant features. This is in contrast to the somewhat misleadingly named uninformative prior, which endeavors to place no subjective constraints on the variable; for this reason, the latter class are sometimes called objective priors, insofar as they represent the minimally biased choice. In any case, the reason such a simple choice (2) suffices for ${p(z)}$ is that any distribution can be generated by applying a sufficiently complicated function to the normal distribution.
Meanwhile, the encoder is represented by the posterior probability ${p(z|x)}$, i.e., the probability of ${z}$ given ${x}$; this provides the map from ${X\rightarrow Z}$. In principle, this is given by Bayes’ rule:
$\displaystyle p(z|x)=\frac{p(x|z)p(z)}{p(x)}~, \ \ \ \ \ (3)$
but this is virtually impossible to compute analytically, since the denominator amounts to evaluating the partition function over all possible configurations of latent variables, i.e.,
$\displaystyle p(x)=\int\!\mathrm{d}z\,p(x|z)p(z)~. \ \ \ \ \ (4)$
One solution is to compute ${p(x)}$ approximately via Monte Carlo sampling; but the impression I’ve gained from my admittedly superficial foray into the literature is that such models are computationally expensive, noisy, difficult to train, and generally inferior to the more elegant solution offered by VAEs. The key idea is that for most ${z}$, ${p(x|z)\approx0}$, so instead of sampling over all possible ${z}$, we construct a new distribution ${q(z|x)}$ representing the values of ${z}$ which are most likely to have produced ${x}$, and sample over this new, smaller set of ${z}$ values [2]. In other words, we seek a more tractable approximation ${q_\phi(z|x)\approx p_\theta(z|x)}$, characterized by some other, variational parameters ${\phi}$—so-called because we will eventually vary these parameters in order to ensure that ${q}$ is as close to ${p}$ as possible. As usual, the discrepancy between these distributions is quantified by the familiar Kullback-Leibler (KL) divergence:
$\displaystyle D_z\left(q(z|x)\,||\,p(z|x)\right)=\sum_z q(z|x)\ln\frac{q(z|x)}{p(z|x)}~, \ \ \ \ \ (5)$
where the subscript on the left-hand side denotes the variable over which we marginalize.
This divergence plays a central role in the variational inference procedure we’re trying to implement, and underlies the connection to the information-theoretic relations alluded above. Observe that Bayes’ rule enables us to rewrite this expression as
$\displaystyle D_z\left(q(z|x)\,||\,p(z|x)\right)= \langle \ln q(z|x)-\ln p(z)\rangle_q -\langle\ln p(x|z)\rangle_q +\ln p(x) \ \ \ \ \ (6)$
where ${\langle\ldots\rangle_q}$ denotes the expectation value with respect to ${q(z|x)}$, and we have used the fact that ${\sum\nolimits_z q(z|x) \ln p(x)=\ln p(x)}$ (since probabilities are normalized to 1, and ${p(x)}$ has no dependence on the latent variables ${z}$). Now observe that the first term on the right-hand side can be written as another KL divergence. Rearranging, we therefore have
$\displaystyle \ln p(x)-D_z\left(q(z|x)\,||\,p(z|x)\right)=-F_q(x) \ \ \ \ \ (7)$
where we have identified the (negative) variational free energy
$\displaystyle -F_q(x)=\langle\ln p(x|z)\rangle_q-D_z\left(q(z|x)\,||\,p(z)\right)~. \ \ \ \ \ (8)$
As the name suggests, this is closely related to the Helmholtz free energy from thermodynamics and statistical field theory; we’ll discuss this connection in more detail below, and in doing so provide a more intuitive definition: the form (8) is well-suited to the implementation-oriented interpretation we’re about to provide, but is a few manipulations removed from the underlying physical meaning.
The expressions (7) and (8) comprise the central equation of VAEs (and variational Bayesian methods more generally), and admit a particularly simple interpretation. First, observe that the left-hand side of (7) is the log-likelihood, minus an “error term” due to our use of an approximate distribution ${q(z|x)}$. Thus, it’s the left-hand side of (7) that we want our learning procedure to maximize. Here, the intuition underlying maximum likelihood estimation (MLE) is that we seek to maximize the probability of each ${x\!\in\!X}$ under the generative process provided by the decoder ${p(x|z)}$. As we will see, the optimization process pulls ${q(z|x)}$ towards ${p(z|x)}$ via the KL term; ideally, this vanishes, whereupon we’re directly optimizing the log-likelihood ${\ln p(x)}$.
The variational free energy (8) consists of two terms: a reconstruction error given by the expectation value of ${\ln p(x|z)}$ with respect to ${q(z|x)}$, and a so-called regulator given by the KL divergence. The reconstruction error arises from encoding ${X}$ into ${Z}$ using our approximate distribution ${q(z|x)}$, whereupon the log-likelihood of the original data given these inferred latent variables will be slightly off. The KL divergence, meanwhile, simply encourages the approximate posterior distribution ${q(z|x)}$ to be close to ${p(z)}$, so that the encoding matches the latent distribution. Note that since the KL divergence is positive-definite, (7) implies that the negative variational free energy gives a lower-bound on the log-likelihood. For this reason, ${-F_q(x)}$ is sometimes referred to as the Evidence Lower BOund (ELBO) by machine learners.
The appearance of the (variational) free energy (8) is not a mere mathematical coincidence, but stems from deeper physical aspects of inference learning in general. I’ll digress upon this below, as promised, but we’ve a bit more work to do first in order to be able to actually implement a VAE in code.
Computing the gradient of the cost function
Operationally, training a VAE consists of performing stochastic gradient descent (SGD) on (8) in order to minimize the variational free energy (equivalently, maximize the ELBO). In other words, this will provide the cost or loss function (9) for the model. Note that since ${\ln p(x)}$ is constant with respect to ${q(z|x)}$, (7) implies that minimizing the variational energy indeed forces the approximate posterior towards the true posterior, as mentioned above.
In applying SGD to the cost function (8), we actually have two sets of parameters over which to optimize: the parameters ${\theta}$ that define the VAE as a generative model ${p_\theta(x,z)}$, and the variational parameters ${\phi}$ that define the approximate posterior ${q_\phi(z|x)}$. Accordingly, we shall write the cost function as
$\displaystyle \mathcal{C}_{\theta,\phi}(X)=-\sum_{x\in X}F_q(x) =-\sum_{x\in X}\left[\langle\ln p_\theta(x|z)\rangle_q-D_z\left(q_\phi(z|x)\,||\,p(z)\right) \right]~, \ \ \ \ \ (9)$
where, to avoid a preponderance of subscripts, we shall continue to denote ${F_q\equiv F_{q_\phi(z|x)}}$, and similarly ${\langle\ldots\rangle_q=\langle\ldots\rangle_{q_\phi(z|x)}}$. Taking the gradient with respect to ${\theta}$ is easy, since only the first term on the right-hand side has any dependence thereon. Hence, for a given datapoint ${x\in X}$,
$\displaystyle \nabla_\theta\mathcal{C}_{\theta,\phi}(x) =-\langle\nabla_\theta\ln p_\theta(x|z)\rangle_q \approx-\nabla_\theta\ln p_\theta(x|z)~, \ \ \ \ \ (10)$
where in the second step we have replaced the expectation value with a single sample drawn from the latent space ${Z}$. This is a common method in SGD, in which we take this particular value of ${z}$ to be a reasonable approximation for the average ${\langle\ldots\rangle_q}$. (Yet-more connections to mean field theory (MFT) we must of temporal necessity forgo; see Mehta et al. [1] for some discussion in this context, or Doersch [2] for further intuition). The resulting gradient can then be computed via backpropagation through the NN.
The gradient with respect to ${\phi}$, on the other hand, is slightly problematic, since the variational parameters also appear in the distribution with respect to which we compute expectation values. And the sampling trick we just employed means that in the implementation of this layer of the NN, the evaluation of the expectation value is a discrete operation: it has no gradient, and hence we can’t backpropagate through it. Fortunately, there’s a clever method called the reparametrization trick that circumvents this stumbling block. The basic idea is to change variables so that ${\phi}$ no longer appears in the distribution with respect to which we compute expectation values. To do so, we express the latent variable ${z}$ (which is ostensibly drawn from ${q_\phi(z|x)}$) as a differentiable and invertible transformation of some other, independent random variable ${\epsilon}$, i.e., ${z=g(\epsilon; \phi, x)}$ (where here “independent” means that the distribution of ${\epsilon}$ does not depend on either ${x}$ or ${\phi}$; typically, one simply takes ${\epsilon\sim\mathcal{N}(0,1)}$). We can then replace ${\langle\ldots\rangle_{q_\phi}\rightarrow\langle\ldots\rangle_{p_\epsilon}}$, whereupon we can move the gradient inside the expectation value as before, i.e.,
$\displaystyle -\nabla_\phi\langle\ln p_\theta(x|z)\rangle_{q_\phi} =-\langle\nabla_\phi\ln p_\theta(x|z)\rangle_{p_\epsilon}~. \ \ \ \ \ (11)$
Note that in principle, this results in an additional term due to the Jacobian of the transformation. Explicitly, this equivalence between expectation values may be written
\displaystyle \begin{aligned} \langle f(z)\rangle_{q_\phi}&=\int\!\mathrm{d}z\,q_\phi(z|x)f(z) =\int\!\mathrm{d}\epsilon\left|\frac{\partial z}{\partial\epsilon}\right|\,q_\phi(z(\epsilon)|x)\,f(z(\epsilon))\\ &\equiv\int\!\mathrm{d}\epsilon \,p(\epsilon)\,f(z(\epsilon)) =\langle f(z)\rangle_{p_\epsilon} \end{aligned} \ \ \ \ \ (12)
where the Jacobian has been absorbed into the definition of ${p(\epsilon)}$:
$\displaystyle p(\epsilon)\equiv J_\phi(x)\,q_\phi(z|x)~, \quad\quad J_\phi(x)\equiv\left|\frac{\partial z}{\partial\epsilon}\right|~. \ \ \ \ \ (13)$
Consequently, the Jacobian would contribute to the second term of the KL divergence via
$\displaystyle \ln q_\phi(z|x)=\ln p(\epsilon)-\ln J_\phi(x)~. \ \ \ \ \ (14)$
Operationally however, the reparametrization trick simply amounts to performing the requisite sampling on an additional input layer for ${\epsilon}$ instead of on ${Z}$; this is nicely illustrated in both fig. 74 of Mehta et al. [1] and fig. 4 of Doersch [2]. In practice, this means that the analytical tractability of the Jacobian is a non-issue, since the change of variables is performed downstream of the KL divergence layer—see the implementation details below. The upshot is that while the above may seem complicated, it makes the calculation of the gradient tractable via standard backpropagation.
Implementation
Having fleshed-out the mathematical framework underlying VAEs, how do we actually build one? Let’s summarize the necessary ingredients, layer-by-layer along the flow from observation space to latent space and back (that is, ${X\rightarrow Z\rightarrow X'\!\simeq\!X}$), with the Keras API in mind:
• We need an input layer, representing the data ${X}$.
• We connect this input layer to an encoder, ${q_\phi(z|x)}$, that maps data into the latent space ${Z}$. This will be a NN with an arbitrary number of layers, which outputs the parameters ${\phi}$ of the distribution (e.g., the mean and standard deviation, ${\phi\in\{\mu,\sigma\}}$ if ${q_\phi}$ is Gaussian).
• We need a special KL-divergence layer, to compute the second term in the cost function (8) and add this to the model’s loss function (e.g., the Keras loss). This takes as inputs the parameters ${\phi}$ produced by the encoder, and our Gaussian ansatz (2) for the prior ${p(z)}$.
• We need another input layer for the independent distribution ${\epsilon}$. This will be merged with the parameters ${\phi}$ output by the encoder, and in this way automatically integrated into the model’s loss function.
• Finally, we feed this merged layer into a decoder, ${p_\theta(x|z)}$, that maps the latent space back to ${X}$. This is generally another NN with as many layers as the encoder, which relies on the learned parameters ${\theta}$ of the generative model.
At this stage of the aforementioned research project, it’s far too early to tell whether such a VAE will ultimately be useful for accomplishing our goal. If so, I’ll update this post with suitable links to paper(s), etc. But regardless, the variational inference procedure underling VAEs is interesting in its own right, and I’d like to close by discussing some of the physical connections to which I alluded above in greater detail.
Deeper connections
The following was largely inspired by the exposition in Mehta et al. [1], though we have endeavored to modify the notation for clarity/consistency. In particular, be warned that what these authors call the “free energy” is actually a dimensionless free energy, which introduces an extra factor of ${\beta}$ (cf. eq. (158) therein); we shall instead stick to standard conventions, in which the mass dimension is ${[F]=[E]=[\beta^{-1}]=1}$. Of course, we’re eventually going to set ${\beta=1}$ anyway, but it’s good to set things straight.
Consider a system of interacting degrees of freedom ${s\in\{x,z\}}$, with parameters ${\theta}$ (e.g., ${\theta\in\{\mu,\sigma\}}$ for Gaussians, or would parametrize the couplings ${J_{ij}}$ between spins ${s_i}$ in the Ising model). We may assign an energy ${E(s;\theta)=E(x,z;\theta)}$ to each configuration, such that the probability ${p(s;\theta)=p_\theta(x,z)}$ of finding the system in a given state at temperature ${\beta^{-1}}$ is
$\displaystyle p_\theta(x,z)=\frac{1}{Z[\theta]}e^{-\beta E(x,z;\theta)}~, \ \ \ \ \ (15)$
where the partition function with respect to this ensemble is
$\displaystyle Z[\theta]=\sum_se^{-\beta E(s;\theta)}~, \ \ \ \ \ (16)$
where the sum runs over both ${x}$ and ${z}$. As the notation suggests, we have in mind that ${p_\theta(x,z)}$ will serve as our latent-variable model, in which ${x,z}$ respectively take on the meanings of visible and latent degrees of freedom as above. Upon marginalizing over the latter, we recover the partition function (4) for ${\mathrm{dim}(Z)}$ finite:
$\displaystyle p_\theta(x)=\sum_z\,p_\theta(x,z)=\frac{1}{Z[\theta]}\sum_z e^{-\beta E(x,z;\theta)} \equiv\frac{1}{Z[\theta]}e^{-\beta E(x;\theta)}~, \ \ \ \ \ (17)$
where in the last step, we have defined the marginalized energy function ${E(x;\theta)}$ that encodes all interactions with the latent variables; cf. eq. (15) of our post on RBMs.
The above implies that the posterior probability ${p(z|x)}$ of finding a particular value of ${z\in Z}$, given the observed value ${x\in X}$ (i.e., the encoder) can be written as
$\displaystyle p_\theta(z|x) =\frac{p_\theta(x,z)}{p_\theta(x)} =e^{-\beta E(x,z;\theta)+\beta E(x;\theta)} \equiv e^{-\beta E(z|x;\theta)} \ \ \ \ \ (18)$
where
$\displaystyle E(z|x;\theta) \equiv E(x,z;\theta)-E(x;\theta) \ \ \ \ \ (19)$
is the hamiltonian that describes the interactions between ${x}$ and ${z}$, in which the ${z}$-independent contributions have been subtracted off; cf. the difference between eq. (12) and (15) here. To elucidate the variational inference procedure however, it will be convenient to re-express the conditional distribution as
$\displaystyle p_\theta(z|x)=\frac{1}{Z_p}e^{-\beta E_p} \ \ \ \ \ (20)$
where we have defined ${Z_p}$ and ${E_p}$ such that
$\displaystyle p_\theta(x)=Z_p~, \qquad\mathrm{and}\qquad p_\theta(x,z)=e^{-\beta E_p}~. \ \ \ \ \ (21)$
where the subscript ${p=p_\theta(z|x)}$ will henceforth be used to refer to the posterior distribution, as opposed to either the joint ${p(x,z)}$ or prior ${p(x)}$ (this to facilitate a more compact notation below). Here, ${Z_p=p_\theta(x)}$ is precisely the partition function we encountered in (4), and is independent of the latent variable ${z}$. Statistically, this simply reflects the fact that in (20), we weight the joint probabilities ${p(x,z)}$ by how likely the condition ${x}$ is to occur. Meanwhile, one must be careful not to confuse ${E_p}$ with ${E(z|x;\theta)}$ above. Rather, comparing (21) with (15), we see that ${E_p}$ represents a sort of renormalized energy, in which the partition function ${Z[\theta]}$ has been absorbed.
Now, in thermodynamics, the Helmholtz free energy is defined as the difference between the energy and the entropy (with a factor of ${\beta}$ for dimensionality) at constant temperature and volume, i.e., the work obtainable from the system. More fundamentally, it is the (negative) log of the partition function of the canonical ensemble. Hence for the encoder (18), we write
$\displaystyle F_p[\theta]=-\beta^{-1}\ln Z_p[\theta]=\langle E_p\rangle_p-\beta^{-1} S_p~, \ \ \ \ \ (22)$
where ${\langle\ldots\rangle_p}$ is the expectation value with respect to ${p_\theta(z|x)}$ and marginalization over ${z}$ (think of these as internal degrees of freedom), and ${S_p}$ is the corresponding entropy,
$\displaystyle S_p=-\sum_zp_\theta(z|x)\ln p_\theta(z|x) =-\langle\ln p_\theta(z|x)\rangle_p~. \ \ \ \ \ (23)$
Note that given the canonical form (18), the equivalence of these expressions for ${F_p}$ — that is, the second equality in (22) — follows immediately from the definition of entropy:
$\displaystyle S_p=\sum_z p_\theta(z|x)\left[\beta E_p+\ln Z_p\right] =\beta\langle E_p\rangle_p+\ln Z_p~, \ \ \ \ \ (24)$
where, since ${Z_p}$ has no explicit dependence on the latent variables, ${\langle\ln Z_p\rangle_p=\langle1\rangle_p\ln Z_p=\ln Z_p}$. As usual, this partition function is generally impossible to calculate. To circumvent this, we employ the strategy introduced above, namely we approximate the true distribution ${p_\theta(z|x)}$ by a so-called variational distribution ${q(z|x;\phi)=q_\phi(z|x)}$, where ${\phi}$ are the variational (e.g., coupling) parameters that define our ansatz. The idea is of course that ${q}$ should be computationally tractable while still capturing the essential features. As alluded above, this is the reason these autoencoders are called “variational”: we’re eventually going to vary the parameters ${\phi}$ in order to make ${q}$ as close to ${p}$ as possible.
To quantify this procedure, we define the variational free energy (not to be confused with the Helmholtz free energy (22)):
$\displaystyle F_q[\theta,\phi]=\langle E_p\rangle_q-\beta^{-1} S_q~, \ \ \ \ \ (25)$
where ${\langle E_p\rangle_q}$ is the expectation value of the energy corresponding to the distribution ${p_\theta(z|x)}$ with respect to ${q_\phi(z|x)}$. While the variational energy ${F_q}$ has the same form as the thermodynamic definition of Helmholtz energy ${F_p}$, it still seems odd at first glance, since it no longer enjoys the statistical connection to a canonical partition function. To gain some intuition for this quantity, suppose we express our variational distribution in the canonical form, i.e.,
$\displaystyle q_\phi(z|x)=\frac{1}{Z_q}e^{-\beta E_q}~, \quad\quad Z_q[\phi]=\sum_ze^{-\beta E_q(x,z;\phi)}~, \ \ \ \ \ (26)$
where we have denoted the energy of configurations in this ensemble by ${E_q}$, to avoid confusion with ${E_p}$, cf. (18). Then ${F_q}$ may be written
\displaystyle \begin{aligned} F_q[\theta,\phi]&=\sum_z q_\phi(x|z)E_p-\beta^{-1}\sum_z q_\phi(x|z)\left[\beta E_q+\ln Z_q\right]\\ &=\langle E_p(\theta)-E_q(\phi)\rangle_q-\beta^{-1}\ln Z_q[\phi]~. \end{aligned} \ \ \ \ \ (27)
Thus we see that the variational energy is indeed formally akin to the Helmholtz energy, except that it encodes the difference in energy between the true and approximate configurations. We can rephrase this in information-theoretic language by expressing these energies in terms of their associated ensembles; that is, we write ${E_p=-\beta^{-1}\left(\ln p+\ln Z_p\right)}$, and similarly for ${q}$, whereupon we have
$\displaystyle F_q[\theta,\phi]=\beta^{-1}\sum_z q_\phi(z|x)\ln\frac{q_\phi(z|x)}{p_\theta(z|x)}-\beta^{-1}\ln Z_p[\theta]~, \ \ \ \ \ (28)$
where the ${\ln Z_q}$ terms have canceled. Recognizing (5) and (21) on the right-hand side, we therefore find that the difference between the variational and Helmholtz free energies is none other than the KL divergence,
$\displaystyle F_q[\theta,\phi]-F_p[\theta]=\beta^{-1}D_z\left(q_\phi(z|x)\,||\,p_\theta(z|x)\right)\geq0~, \ \ \ \ \ (29)$
which is precisely (7)! (It is perhaps worth stressing that this follows directly from (24), independently of whether ${q(z|x)}$ takes canonical form).
As stated above, our goal in training the VAE is to make the variational distribution ${q}$ as close to ${p}$ as possible, i.e., minimizing the KL divergence between them. We now see that physically, this corresponds to a variational problem in which we seek to minimize ${F_q}$ with respect to ${\phi}$. In the limit where we perfectly succeed in doing so, ${F_q}$ has obtained its global minimum ${F_p}$, whereupon the two distributions are identical.
Finally, it remains to clarify our implementation-based definition of ${F_q}$ given in (8) (where ${\beta=1}$). Applying Bayes’ rule, we have
\displaystyle \begin{aligned} F_q&=-\langle\ln p(x|z)\rangle_q+D_z\left(q(z|x)\,||\,p(z)\right) =-\left<\ln\frac{p(z|x)p(x)}{p(z)}\right>_q+\langle\ln q(z|x)-\ln p(z)\rangle_q\\ &=-\langle\ln p(z|x)p(x)\rangle_q+\langle\ln q(z|x)\rangle_q =-\langle\ln p(x,z)\rangle_q-S_q~, \end{aligned} \ \ \ \ \ (30)
which is another definition of ${F_q}$ sometimes found in the literature, e.g., as eq. (172) of Mehta et al. [1]. By expressing ${p(x,z)}$ in terms of ${E_p}$ via (20), we see that this is precisely equivalent to our more thermodynamical definition (24). Alternatively, we could have regrouped the posteriors to yield
$\displaystyle F_q=\langle\ln q(z|x)-\ln p(z|x)\rangle_q-\langle\ln p(x)\rangle_q =D\left(q(z|x)\,||\,p(z|x)\right)+F_p~, \ \ \ \ \ (31)$
where the identification of ${F_p}$ follows from (20). Of course, this is just (28) again, which is a nice check on internal consistency.
References
1. The review by Mehta et al., A high-bias, low-variance introduction to Machine Learning for physicists is absolutely perfect for those with a physics background, and the accompanying Jupyter notebook on VAEs in Keras for the MNIST dataset was especially helpful for the implementation bits above. The latter is a more streamlined version of this blog post by Louis Tiao.
2. Doersch has written a Tutorial on Autoencoders, which I found helpful for gaining some further intuition for the mapping between theory and practice.
Posted in Minds & Machines | 2 Comments
|
2020-06-02 02:33:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 876, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9797229766845703, "perplexity": 1113.1315685491977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347422065.56/warc/CC-MAIN-20200602002343-20200602032343-00347.warc.gz"}
|
https://lakshyaeducation.in/question/if_the_rate_of_interest_be_4_per_annum_for_first/1628172063431769347/
|
If the rate of interest be 4% per annum for first year, 5% per annum foe second year and 6% per annum for third year, then the compound interest of Rs.10000 for three years will be ?
Options:
A. Rs. 1575.20 B. Rs. 1600 C. Rs. 1625.80 D. Rs. 2000
|
2022-08-16 17:12:22
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8919441103935242, "perplexity": 2440.059480103603}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572408.31/warc/CC-MAIN-20220816151008-20220816181008-00301.warc.gz"}
|
https://cyclostationary.blog/2015/12/18/csp-estimators-the-time-smoothing-method/
|
# CSP Estimators: The Time Smoothing Method
In a previous post, we introduced the frequency-smoothing method (FSM) of spectral correlation function (SCF) estimation. The FSM convolves a pulse-like smoothing window $g(f)$ with the cyclic periodogram to form an estimate of the SCF. An advantage of the method is that is allows fine control over the spectral resolution of the SCF estimate through the choice of $g(f)$, but the drawbacks are that it requires a Fourier transform as long as the data-record undergoing processing, and the convolution can be expensive. The expense of the convolution can be mitigated by using rectangular $g(f)$.
In this post, we introduce the time-smoothing method (TSM) of SCF estimation. Instead of averaging (smoothing) the cyclic periodogram over spectral frequency, multiple cyclic periodograms are averaged over time. When the non-conjugate cycle frequency of zero is used, this method produces an estimate of the power spectral density, and is essentially the Bartlett spectrum estimation method. The TSM can be found in My Papers [6] (Eq. (54)), and other places in the literature.
The basic idea is to segment the provided data record into $M$ contiguous blocks of $N$ samples each, compute the cyclic periodogram for each block, and average the results. Since we will likely use the FFT to compute the Fourier transform, we will be viewing each $N$-sample block as if its time samples correspond to $t = 0, 1, \ldots, N-1$, and so the cyclic polyspectrum formula of My Papers [6] will have to be slightly modified to take into account the actual temporal start time for each block.
So let’s consider the Fourier transform (DFT) of a block of data that is shifted from the origin by some amount of time $u$,
$X(u, f) = \displaystyle\sum_{t=0}^{N-1} x(t+u) e^{-i 2 \pi f t}. \hfill (1)$
The periodogram and cyclic periodogram are then functions of time offset $u$ as well,
$I(u, f) = \displaystyle\frac{1}{N} \left| X(u, f) \right|^2, \hfill (2)$
and
$I^\alpha(u, f) = \displaystyle\frac{1}{N} X(u, f+\alpha/2) X^*(u, f-\alpha/2) \hfill (3)$
and similarly for the conjugate cyclic periodogram. The TSM estimate of the SCF is simply the average value of the cyclic periodogram over all available values of $u$,
$\hat{S}_x^\alpha (f) = h(u) \otimes I^\alpha(u, f), \hfill (4)$
where $h(u)$ is some pulse-like temporal window. In practice, the FFT is used to create each cyclic periodogram, so their relative phases are no longer taken into account. According to our Fourier transform result for a delayed signal, however, we can easily take this into account by multiplying each cyclic periodogram by $e^{-i 2 \pi \alpha D}$, where $D$ represents the left edge (starting point) of the subblock. For blocks having length $N$ samples, then, the value of $D$ for the $j$th block is simply $jN$. Our TSM estimator is then
$\hat{S}_x^\alpha(f) = \displaystyle\frac{1}{M} \displaystyle\sum_{j=0}^{M-1} \left[\tilde{I}^\alpha(jN, f) e^{-i 2 \pi \alpha j N} \right], \hfill (5)$
where $\tilde{I}^\alpha(jN, f)$ is just the cyclic periodogram created from the $j$th block of $N$ samples using the FFT. Notice that when the cycle frequency is set to zero, the SCF estimate is an estimate of the PSD, and the TSM just averages $M$ periodograms, as in the Bartlett spectrum estimation method. Here is the TSM (Bartlett) PSD estimate for our rectangular-pulse BPSK signal:
For this PSD estimate the data-record length is $32,768$ samples and the TSM block length is $256$ samples, leading to $M = 32768/256 = 128$ blocks. Recall that the bit rate for the BPSK signal is $1/T_0 = 1/10$ and the carrier frequency is $f_c = 0.05$ (in normalized frequency units).
The TSM PSD estimate matches the FSM PSD estimate in the FSM post.
The TSM-based spectral correlation function estimates for the BPSK signal’s non-conjugate cycle frequencies are shown below:
and the conjugate-SCF estimates are:
Again, these TSM estimates match quite well with the FSM estimates.
The reason the TSM and FSM estimates match so well is that the temporal and spectral resolution parameters of the estimates are similar. For both methods, the temporal resolution is equal to the data-record length ($32,768$ samples). For the FSM, the spectral resolution of the estimates is equal to the width of the frequency-smoothing window $g(f)$, and for the TSM, the spectral resolution is equal to the intrinsic spectral resolution of each cyclic periodogram, which is equal to the reciprocal of the TSM block length (in normalized units).
For the FSM results in the FSM post, the spectral resolution is $0.005$ Hz $(164$ points in $g(f))$, and for the TSM results in this post, the spectral resolution is $1/256 = 0.0039$ Hz. So the two estimates have comparable time and frequency resolution parameters, and so produce similar results. The relationship between estimator quality and the temporal, spectral, and cycle-frequency resolutions is discussed in this post.
## 16 thoughts on “CSP Estimators: The Time Smoothing Method”
1. aapocketz says:
When using the TSM, you must calculate the conjugate cyclic periodogram by conjugate multiplication of the Fourier transform at offset +alpha/2 and -alpha/2.
Functionally, that seems to mean that you are multiplying the Fourier transform that has been circularly shifted left by 1 bin, by a conjugate circularly shifted right by 1 bin, to get the cyclic value at alpha = 2*Fs/N. You would circular shift by 2 bins to get alpha at 4*Fs/N, and so on all the way up to N bins, the maximum alpha shift at Fs.
Does that sound right? That means that including negative alpha values, the most alpha points will be N, and the max alpha value will be Fs, and the min alpha will therefore be 2*Fs/N?
Is there a way to get the alpha resolution (2*Fs/N) as fine as the frequency resolution (Fs/N) via some other mechanism? Do the FAM and SCCA algorithms have the same limitations on alpha?
Thank you for your blog and papers and time!
• When using the TSM, you must calculate the conjugate cyclic periodogram by conjugate multiplication of the Fourier transform at offset +alpha/2 and -alpha/2.
The conjugate cyclic periodogram is used to estimate the conjugate spectral correlation function and, unfortunately, it involves no conjugations. That is, the conjugate cyclic periodogram is X(f+a/2)X(a/2-f), whereas the (non-conjugate) cyclic periodogram goes like X(f+a/2)X*(f-a/2). So that is just terminology. I’m assuming that the rest of the comment is concerned with the non-conjugate cyclic periodogram and spectral correlation function.
Functionally, that seems to mean that you are multiplying the Fourier transform that has been circularly shifted left by 1 bin, by a conjugate circularly shifted right by 1 bin, to get the cyclic value at alpha = 2*Fs/N.
Right.
That means that including negative alpha values, the most alpha points will be N, and the max alpha value will be Fs, and the min alpha will therefore be 2*Fs/N?
You must use shifts that are equal to integer numbers of the FFT bins, yes, but there is nothing preventing you from using different shifts for the two involved shifted Fourier transforms. For example, you could form X(f + 1/N)X*(f). So that is a shift of one FFT bin for X(f + a/2) and no shift for X(f – a/2). This product ends up providing an estimate of S^b(g), where b = 1/N and g = f-1/N. So the minimum cycle frequency that you can look at (besides zero) is 1/N (or in your terminology Fs/N). Agree?
Is there a way to get the alpha resolution (2*Fs/N) as fine as the frequency resolution (Fs/N) via some other mechanism?
I’m working on a post on the three different kinds of resolutions involved in CSP (temporal, spectral, and cycle), but for now I think my comment above shows that the cycle resolution is Fs/N. Yes, this is equal to the native spectral resolution of the DFT using N points, but be careful because that is not the spectral resolution of the spectral correlation measurement in general. For the TSM, where we have N total samples and the FFTs have size M << N, the spectral resolution is 1/M (Fs/M in physical Hz). For the FSM, it is the width of the spectral smoothing window, which usually contains many of the N FFT points. For the SSCA, it is the reciprocal of the number of channels in the front-end channelizer.
It turns out the cycle resolution is always about equal to the reciprocal of the total number of samples processed, whereas the spectral resolution varies by estimator and is strongly affected by estimator parameters.
Great comments, aapocketz, thanks much.
This site uses Akismet to reduce spam. Learn how your comment data is processed.
|
2018-08-19 11:48:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 37, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8288458585739136, "perplexity": 945.4086405651967}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221215077.71/warc/CC-MAIN-20180819110157-20180819130157-00394.warc.gz"}
|
http://googology.wikia.com/wiki/Fzgargantugoogolplex
|
## FANDOM
9,992 Pages
A fzgargantugoogolplex or fzgoogoltriplex is equal to $$\text{googoltriplex}$$$$^{\text{googoltriplex}}$$.[1] It evaluates to:
$$({10^{10^{10^{10^{100}}}}})^{10^{10^{10^{10^{100}}}}} = 10^{10^{10^{10^{10^{100}}}} \times 10^{10^{10^{100}}}} = 10^{10^{10^{10^{10^{100}}} + 10^{10^{100}}}}$$.
It is comparable to a googolquadriplex, even though it's actually equal to raising a googolquadriplex to the power of a googolduplex.
Both names for this number come from fz- plus names for 10101010100.
1. [1]
|
2018-02-21 21:01:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5242378115653992, "perplexity": 7785.075296569435}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813803.28/warc/CC-MAIN-20180221202619-20180221222619-00054.warc.gz"}
|
https://codegolf.stackexchange.com/questions/152242/riffle-shuffle-a-string-robbers/152943
|
# Riffle shuffle a string - Robbers
Cops post
A riffle shuffle is a way of shuffling cards where the deck is split into 2 roughly equal sections and the sections are riffled into each other in small groups. This is how to riffle shuffle a string:
• Split the string into equal sections.
• Reverse the string, and starting from the start of each string.
• Put runs of a random length between 1 and the number of characters left into the final string
• Then remove these characters from the strings.
An example
"Hello World!" Output string = ""
"Hello ", "World!" ""
"Hell", "World!" " o"
"Hell", "World" " o!"
"Hel", "World" " o!l"
"Hel", "Wo" " o!ldlr"
"H", "Wo" " o!ldlrle"
"H", "" " o!ldlrleoW"
"", "" " o!ldlrleoWH"
The final product from Hello World! could be o!ldlrleoWH and that is what you would output.
## Robbers
Your task is to take a Cops submission and rearrange so that it produces a valid riffle shuffler program. You may not add, remove or change any characters in the original program to result in your solution.
If the program works, even if it is not the cops intended solution, it is a valid crack.
The winner is the person who cracks the most Cops submissions!
# Python 3, HyperNeutrino
from random import*
def r(i):
s=len(i)//2;L,R=i[:s][::-1],i[s:][::-1];o=[]
while L and R:s=randint(1,len(L));o+=L[:s];L,R=R,L[s:]
return o+L+R
Try it online!
Proof of Same Character Set
• Sorry, my original post had a slight issue. The new version shouldn't be too hard to determine from your current progress. Good job though, this was my original solution! – hyper-neutrino Jan 1 '18 at 17:31
# Jelly, HyperNeutrino
œs2UŒṖ€X€ż/
Try it online!
# Pyth, notjagan
jk.iFmO./_dc2Q
Try it here!
### Explanation
The Pyth interpreter also offers a nice pseudo-code if you check the "Debug" box.
• c2Q chops the input into 2 pieces of equal length, with initial pieces one longer if necessary.
• m maps a function over the pieces. Variable: d.
• _d reverses d
• ./ partitions the reverse of d; returns all divisions of the reversed d into disjoint contiguous substrings.
• O chooses a randOm partition.
• .iF Folds (reduces) the list by .interleaving.
• And finally jk joins the list of strings.
# R, Giuseppe
function(s){n<-nchar(s)
s<-methods::el(strsplit(s,''))
r<-tail(s,n/2)
o<-{}
while(length(l)|length(r)){if(length(r)){y<-sample(length(r),1)
r<-tail(r,-y)}
if(length(l)){x<-sample(length(l),1)
L2/:v;O@N[1LU:'.;FL0=?x?i
|
2021-07-29 14:50:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27748027443885803, "perplexity": 5596.169447929471}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153860.57/warc/CC-MAIN-20210729140649-20210729170649-00218.warc.gz"}
|
https://pos.sissa.it/205/059/
|
Study of the $\chi _{c1}(1P) \pi ^{+}\pi ^-$ invariant mass spectrum in $B^{±} \rightarrow \chi _{c1}(1P) \pi ^{+}\pi ^{-} K^{±}$ decays at Belle
|
2019-02-17 08:37:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9728055000305176, "perplexity": 852.864875385016}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247481766.50/warc/CC-MAIN-20190217071448-20190217093448-00426.warc.gz"}
|
http://math.stackexchange.com/questions/213336/noether-normalization-over-mathbbz
|
# Noether normalization over $\mathbb{Z}$
I would like to know what is a correct analogue of Noether normalization theorem for rings finitely generated over $\mathbb Z$. Obviously, Noether normalization can not hold "literately" in this case since, for example the ring $\mathbb Z_2[X]$ does not contain a polynomial subring with coefficients in $\mathbb Z$ over which it is finite.
I am asking this question to better understand the second part of the answer of Qing Liu to the question given here: http://mathoverflow.net/questions/57515/one-point-in-the-post-of-terence-tao-on-ax-grothendieck-theorem
-
Take a look at this: http://www.math.lsa.umich.edu/~hochster/615W10/supNoeth.pdf. It proves the generalized version of Noether Normalization, which is what you need (or rather what Qing Liu uses in his answer). In general I think Mel Hochster's notes are really good.
Sorry, I should mention what the general version of Noether Normalization is that Hochster proves in his notes:
Let $D$ be a domain, and $R$ a finitely generated $D$ algebra. There exists a nonzero $f \in D$, and a finite injective ring map $D_f[X_1,\dots,X_n] \hookrightarrow R_f$. Here the $X_i$ are indeterminates.
Note how the above version implies Noether Normalization over a field. Although, if you know some basic scheme theory, I feel like Qing Liu's answer involving constructible sets is equally enlightening.
-
Dear Rankeya, thank you for the answer and for affirming that Hochester's notes are good:) ! This is important information. – agleaner Oct 14 '12 at 12:43
Also, I have one more question. Would you advise some (not too scary) place where to read the proof of Chevalet theorem on constructive sheaves? – agleaner Oct 14 '12 at 12:59
If you meant Chevalley's Theorem on constructible sets, then Ravi Vakil's notes, "Foundations of Algebraic Geometry", available on his website has a nice section on constructible sheaves and Chevalley's theorem. I believe he proves the theorem in section 8.4 of his notes. Note, however, that Ravi leaves many things as exercises, which depending on your background might be time consuming. Also, when you want to know about any topic in AG (even CA), I recommend the Stacks Project. It has a wonderful new search feature, which allows you to go straight to what you want. – Rankeya Oct 14 '12 at 14:15
More importantly, it has a section on constructible sets, and Chevalley's theorem. Most of these sources might appear scary the first time you use them. I was scared the first time I saw the Stacks Project. If you don't let your fears get to you, I guarantee that you will be rewarded and learn some beautiful math. – Rankeya Oct 14 '12 at 14:15
It is interesting, I never heard about Stacks Project... I see it has over 3000 pages :)... Will try to see if I will be able to use it. – agleaner Oct 14 '12 at 14:29
|
2014-04-25 01:02:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7646036744117737, "perplexity": 382.4128032183358}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00390-ip-10-147-4-33.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/calculating-experimental-error-of-braggs-equation.774518/
|
# Homework Help: Calculating Experimental Error of Braggs Equation
1. Oct 5, 2014
### Lemenks
1. The problem statement, all variables and given/known data
I am writing a lab report for an X-ray diffraction. I have been attempting to come up with an equation for the error using formulas some people from college gave me and also some I found on wikipedia but I am quite sure I am doing it wrong. The only variable is the angle where the maximum intensities are found. I am using Bragg's law to calculate the spacing between the atoms.
2. Relevant equations
D = (N*wavelength)/(2*sin(x))
As there is no error in N, wavelength, or "2", we can let that equal A.
D = A/sin(x)
Some equations I was given:
Z = aX
dZ = adX
Z = X^a
dZ/z = |a|dx/x
Z = SinX
dZ = dX CosX
3. The attempt at a solution
D = Z = A/sin(x) = A (sin(x))^-1 = A f(y)^-1
I have tried loads of ways of calculating this but I keep getting silly answers. Any help, ideas or links would be really appreciated.
2. Oct 5, 2014
### BvU
Hello Lemenks, and welcome to PF :)
I don't see an attempt at solution under 3, only a repeat of D=A/sin(x).
From your account, I think what you are asking is: What is the error in D = A/sin(x), given the error in x. Correct ?
Your relevant equations are some examples of error propagation in functions of a single variable. Generally: $df = {df\over dx} dx$, which in error analysis is extended to finite differences: $\Delta f = {df\over dx} \Delta x$.
Do you know how to find the derivative of 1/sin(x) ?
And I am interested in the way you determine $\Delta x$ too. Is it really just a simple reading off of a single angle ?
Share this great discussion with others via Reddit, Google+, Twitter, or Facebook
Have something to add?
Draft saved Draft deleted
|
2018-06-24 09:26:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5897502899169922, "perplexity": 1029.1161310436628}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267866926.96/warc/CC-MAIN-20180624083011-20180624103011-00394.warc.gz"}
|
http://neupy.com/docs/algorithms/constructible-architecture.html
|
# Algorithms with constructible architecture
## Specify network structure
There are three ways to define relations between layers. We can define network’s architecture separately from the training algorithm.
from neupy import algorithms, layers
network = layers.join(
layers.Input(10),
layers.Sigmoid(40),
layers.Sigmoid(2),
)
network,
step=0.2,
shuffle_data=True
)
Or, we can set up a list of layers that define sequential relations between layers.
from neupy import algorithms, layers
[
layers.Input(10),
layers.Sigmoid(40),
layers.Sigmoid(2)
layers.Softmax(2),
],
step=0.2,
shuffle_data=True
)
This is just a syntax simplification that allows to avoid using layer.join function.
Small networks can be defined with a help of inline operator.
from neupy import algorithms
from neupy.layers import *
Input(10) > Sigmoid(40) > Sigmoid(2),
step=0.2,
shuffle_data=True
)
## Train networks with multiple inputs
NeuPy allows to train networks with multiple inputs.
from neupy import algorithms, layers
[
[[
# 3 categorical inputs
layers.Input(3),
layers.Embedding(n_unique_categories, 4),
layers.Reshape(),
], [
# 17 numerical inputs
layers.Input(17),
]],
layers.Concatenate(),
layers.Relu(16),
layers.Sigmoid(1)
],
step=0.5,
verbose=True,
error='binary_crossentropy',
)
# Categorical variable should be the first, because
# categorical input layer was defined first in the network
network.train([x_train_cat, x_train_num], y_train,
[x_test_cat, x_test_num], y_test,
epochs=180)
y_predicted = network.predict([x_test_cat, x_test_num])
From the example above, you can see that we specified first layer as a list of lists. Each list has small sequence of layers specified and each sequence starts with the Input layer. This list of lists is just simple syntax sugar around the parallel function. Exactly the same architecture can be rewritten in the following way.
gdnet = algorithms.GradientDescent(
[
layers.parallel([
# 3 categorical inputs
layers.Input(3),
layers.Embedding(n_unique_categories, 4),
layers.Reshape(),
], [
# 17 numerical inputs
layers.Input(17),
]),
layers.Concatenate(),
layers.Relu(16),
layers.Sigmoid(1)
]
)
The training and prediction looks slightly different as well.
network.train([x_train_cat, x_train_num], y_train,
[x_test_cat, x_test_num], y_test,
epochs=180)
y_predicted = network.predict([x_test_cat, x_test_num])
Input we specified as a list where number of values equal to the number of input layers in the network. The order in the list is also important. We defined first input layer for categorical variables and therefore we need to pass it as the first element to the input list. The same is true for the predict method.
## Algorithms
NeuPy supports lots of different training algorithms based on the backpropagation. You can check Cheat sheet if you want to learn more about them.
Before using these algorithms you must understand that not all of them are suitable for all problems. Some of the methods like Levenberg-Marquardt or Conjugate Gradient work better for small networks and they would be extremely slow for networks with millions parameters. In addition, it’s important to note that not all algorithms are possible to train with mini-batches. Algorithms like Conjugate Gradient don’t work with mini-batches.
## Loss functions
NeuPy has many different loss functions. These loss functions can be specified specified as a string.
from neupy import algorithms, layers
[
layers.Input(784),
layers.Relu(500),
layers.Relu(300),
layers.Softmax(10),
],
error='categorical_crossentropy',
)
Also, it’s possible to create custom loss functions. Loss function should have two mandatory arguments, namely expected and predicted values.
import tensorflow as tf
from neupy import algorithms, layers
def mean_absolute_error(expected, predicted):
abs_errors = tf.abs(expected - predicted)
return tf.reduce_mean(abs_errors)
[
layers.Input(784),
layers.Relu(500),
layers.Relu(300),
layers.Softmax(10),
],
error=mean_absolute_error,
)
Loss function should return a scalar, because during the training output from the loss function will be used as a variable with respect to which we are differentiating.
Algorithms with constructible architectures allow to use additional update rules for parameter regularization and learning rate updates. For instance, we want to add Weight Decay regularization and we want to minimize step monotonically after each epoch.
from neupy import algorithms, layers
[
layers.Input(784),
layers.Relu(500),
layers.Relu(300),
layers.Softmax(10),
],
step=0.1,
batch_size=16,
algorithms.StepDecay]
)
Both WeightDecay and StepDecay algorithms have additional parameters. In case if we need to modify them we can add them to the training algorithm.
from neupy import algorithms, layers
[
layers.Input(784),
layers.Relu(500),
layers.Relu(300),
layers.Softmax(10),
],
step=0.1,
batch_size=16,
# Parameters from StepDecay
reduction_freq=50,
# Parameters from WeightDecay
decay_rate=0.05,
algorithms.StepDecay]
)
NeuPy doesn’t allow to use multiple regularizations and step update add-ons for training algorithm.
>>> from neupy import algorithms, layers
>>>
... [
... layers.Input(784),
... layers.Relu(500),
... layers.Relu(300),
... layers.Softmax(10),
... ],
|
2018-12-15 05:22:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.554198682308197, "perplexity": 2658.133428876008}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826715.45/warc/CC-MAIN-20181215035757-20181215061757-00325.warc.gz"}
|
http://clay6.com/qa/28007/a-4-1-molar-mixture-of-he-and-ch-4-is-contained-in-a-vessel-at-20-bar-press
|
Browse Questions
# A 4:1 molar mixture of He and $CH_4$ is contained in a vessel at 20 bar pressure. Due to a hole in the vessel, the gas mixture leaks out. What is the composition of the mixture effusing out initially?
$(a)\;4:1\qquad(b)\;6:1\qquad(c)\;8:1\qquad(d)\;2:1$
Given
Molar ratio of He and $CH_4$ is 4:1
Partial pressure ratio of He and $CH_4$ is 16:4
$\large\frac{n_{He}}{n_{CH_4}} = \sqrt{\large\frac{M_{CH_4}}{M_{He}}}\times\large\frac{P_{He}}{P_{CH_4}}$
Since total pressure = 20 bar
$= \sqrt{\large\frac{16}{4}}\times\large\frac{16}{4}$
Since time of diffusion for both is same
= 8:1
The composition of mixture initially gone out for He and $CH_4$ is 8:1
|
2017-06-28 17:29:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7897441387176514, "perplexity": 1802.58256943387}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323721.80/warc/CC-MAIN-20170628171342-20170628191342-00080.warc.gz"}
|
https://fr.mathworks.com/help/stats/classreg.learning.classif.compactclassificationdiscriminant-class.html
|
# CompactClassificationDiscriminant
Package: classreg.learning.classif
Compact discriminant analysis class
## Description
A CompactClassificationDiscriminant object is a compact version of a discriminant analysis classifier. The compact version does not include the data for training the classifier. Therefore, you cannot perform some tasks with a compact classifier, such as cross validation. Use a compact classifier for making predictions (classifications) of new data.
## Construction
cobj = compact(obj) constructs a compact classifier from a full classifier.
cobj = makecdiscr(Mu,Sigma) constructs a compact discriminant analysis classifier from the class means Mu and covariance matrix Sigma. For syntax details, see makecdiscr.
### Input Arguments
obj Discriminant analysis classifier, created using fitcdiscr.
## Object Functions
edge Classification edge logp Log unconditional probability density for discriminant analysis classifier loss Classification error mahal Mahalanobis distance to class means margin Classification margins nLinearCoeffs Number of nonzero linear coefficients partialDependence Compute partial dependence plotPartialDependence Create partial dependence plot (PDP) and individual conditional expectation (ICE) plots predict Predict labels using discriminant analysis classification model
## Copy Semantics
Value. To learn how value classes affect copy operations, see Copying Objects.
## Examples
collapse all
Construct a discriminant analysis classifier for the sample data.
fullobj = fitcdiscr(meas,species);
Construct a compact discriminant analysis classifier, and compare its size to that of the full classifier.
cobj = compact(fullobj);
b = whos('fullobj'); % b.bytes = size of fullobj
c = whos('cobj'); % c.bytes = size of cobj
[b.bytes c.bytes] % shows cobj uses 60% of the memory
ans = 1×2
18291 11678
The compact classifier is smaller than the full classifier.
Construct a compact discriminant analysis classifier from the means and covariances of the Fisher iris data.
mu(1,:) = mean(meas(1:50,:));
mu(2,:) = mean(meas(51:100,:));
mu(3,:) = mean(meas(101:150,:));
mm1 = repmat(mu(1,:),50,1);
mm2 = repmat(mu(2,:),50,1);
mm3 = repmat(mu(3,:),50,1);
cc = meas;
cc(1:50,:) = cc(1:50,:) - mm1;
cc(51:100,:) = cc(51:100,:) - mm2;
cc(101:150,:) = cc(101:150,:) - mm3;
sigstar = cc' * cc / 147;
cpct = makecdiscr(mu,sigstar,...
'ClassNames',{'setosa','versicolor','virginica'});
expand all
|
2020-10-20 23:33:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 14, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.68022221326828, "perplexity": 12485.227193849638}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107874340.10/warc/CC-MAIN-20201020221156-20201021011156-00485.warc.gz"}
|
https://epicroadtrips.us/2003/summer/nola/nola_offsite/FQ_en.wikipedia.org/en.wikipedia.org/wiki/Alcohol.html
|
# Alcohol
In general usage, alcohol (from Arabic al-ghawl الغول) refers almost always to ethanol, also known as grain alcohol, and often to any beverage that contains ethanol (see alcoholic beverage). This sense underlies the term alcoholism (addiction to alcohol). Other forms of alcohol are usually described with a clarifying adjective, as in isopropyl alcohol or by the suffix -ol, as in isopropanol.
In chemistry, alcohol is a more general term, applied to any organic compound in which a hydroxyl group (-OH) is bound to a carbon atom, which in turn is bound to other hydrogen and/or carbon atoms. The general formula for a simple acyclic alcohol is CnH2n+1OH.
As a drug, common alcohol (ethanol) is known to have a depressing effect that decreases the responses of the central nervous system.
## Structure
The functional group of an alcohol is a hydroxyl group bonded to an sp3 hybridized carbon. It can therefore be regarded as a derivative of water, with an alkyl group replacing one of the hydrogens. If an aryl group is present rather than an alkyl, the compound is generally called a phenol rather than an alcohol. The oxygen in an alcohol has a bond angle of around 109° (c.f. 104.5° in water), and two nonbonded electron pairs. The O-H bond in methanol (CH3OH) is around 96 picometres long.
### Primary, secondary, and tertiary alcohols
There are three major subsets of alcohols- 'primary' (1°), 'secondary' (2°) and 'tertiary' (3°), based upon the number of carbons the C-OH carbon (shown in red) is bonded to. Methanol is the simplest 'primary' alcohol. The simplest secondary alcohol is isopropanol (propan-2-ol), and a simple tertiary alcohol is tert-butanol (2-methylpropan-2-ol).
### Methanol & ethanol
The simplest and most commonly used alcohols are methanol and ethanol (common names methyl alcohol and ethyl alcohol, respectively), which have the structures shown above.
Methanol was formerly obtained by the distillation of wood, and was called "wood alcohol". It is now a cheap commodity chemical produced by the high pressure reaction of carbon monoxide with hydrogen. In common usage, "alcohol" often refers simply to ethanol or "grain alcohol". Methylated spirits ("Meths"), also called "surgical spirits", is a form of ethanol rendered undrinkable by the addition of methanol. Aside from its major use in alcoholic beverages, ethanol is also used (though highly controlled) as an industrial solvent and raw material.
## Uses
Alcohols are in wide use in industry and science as reagents, solvents, and fuels. Ethanol and methanol can be made to burn more cleanly than gasoline or diesel. Because of its low toxicity and ability to dissolve non-polar substances, ethanol is often used as a solvent in medical drugs, perfumes, and vegetable essences such as vanilla. In organic synthesis, alcohols frequently serve as versatile intermediates. Ethanol is also commonly used in beverages after fermentation to promote flavor or induce a euphoric intoxication commonly known as "drunkenness" or "being drunk".
## Sources
Many alcohols can be created by fermentation of fruits or grains with yeast, but only ethanol is commercially produced this way, chiefly for fuel and drink. Other alcohols are generally produced by synthetic routes from natural gas, petroleum, or coal feed stocks, for example via acid catalyzed hydration of alkenes. For more details see Chemistry of alcohols
## Nomenclature
### Systematic names
In the IUPAC system, the name of the alkane chain loses the terminal "e" and adds "ol", e.g. "methanol" and "ethanol". When necessary, the position of the hydroxyl group is indicated by a number between the alkane name and the "ol": propan-1-ol for CH3CH2CH2OH, propan-2-ol for CH3CH(OH)CH3. Sometimes, the position number is written before the IUPAC name: 1-propanol and 2-propanol. If a higher priority group is present (such as an aldehyde, ketone or carboxylic acid), then it is necessary to use the prefix "hydroxy", for example: 1-hydroxy-2-propanol (CH3COCH2OH).
Some examples of simple alcohols and how to name them:
Common names for alcohols usually take the name of the corresponding alkyl group and add the word "alcohol", e.g. methyl alcohol, ethyl alcohol or tert-butyl alcohol. Propyl alcohol may be n-propyl alcohol or isopropyl alcohol depending on whether the hydroxyl group is bonded to the 1st or 2nd carbon on the propane chain. Isopropyl alcohol is also occasionally called sec-propyl alcohol.
As mentioned above alcohols are classified as primary (1°), secondary (2°) or tertiary (3°), and common names often indicate this in the alkyl group prefix. For example (CH3)3COH is a tertiary alcohol is commonly known as tert-butyl alcohol. This would be named 2-methylpropan-2-ol under IUPAC rules, indicating a propane chain with methyl and hydroxyl groups both attached to the middle (#2) carbon.
An alcohol with two hydroxyl groups is commonly called a "glycol", e.g. HO-CH2-CH2-OH is ethylene glycol. The IUPAC name is ethane-1,2-diol, "diol" indicating two hydroxyl groups, and 1,2 indicating their bonding positions. Geminal glycols (with the two hydroxyls on the same carbon atom), such as ethane-1,1-diol, are generally unstable. For three or four groups, "triol" and "tetraol" are used.
### Etymology
The word "alcohol" almost certainly comes from the Arabic language (the "al-" prefix being the Arabic definite article); however, the precise origin is unclear. It was introduced into Europe, together with the art of distillation and the substance itself, around the 12th century by various European authors who translated and popularized the discoveries of Islamic alchemists.
A popular theory, found in many dictionaries, is that it comes from الكحل = ALKHL = al-kuhul, originally the name of very finely powdered antimony sulfide Sb2S3 used as an antiseptic and eyeliner. The powder is prepared by sublimation of the natural mineral stibnite in a closed vessel. According to this theory, the meaning of alkuhul would have been first extended to distilled substances in general, and then narrowed to ethanol. This conjectured etymology has been circulating in England since 1672 at least (OED).
However, this derivation is suspicious since the current Arabic name for alcohol, الكحول = ALKHWL = al???, does not derive from al-kuhul. The Qur'an in verse 37:47 uses the word الغول = ALGhWL = al-ghawl — properly meaning "spirit" ("spiritual being") or "demon" — with the sense "the thing that gives the wine its headiness". The word al-ghawl also originated the English word "ghoul", and the name of the star Algol. This derivation would, of course, be consistent with the use of "spirit" or "spirit of wine" as synonymous of "alcohol" in most Western languages. (Incidentally, the etymology "alcohol" = "the devil" was used in the 1930s by the U.S. Temperance Movement for propaganda purposes.)
According to the second theory, the popular etymology and the spelling "alcohol" would not be due to generalization of the meaning of ALKHL, but rather to Western alchemists and authors confusing the two words ALKHL and ALGhWL, which have indeed been transliterated in many different and overlapping ways.
## Physical and chemical properties
The hydroxyl group generally makes the alcohol molecule polar. Those groups can form hydrogen bonds to one another and to other compounds. Two opposing solubility trends in alcohols are: the tendency of the polar OH to promote solubility in water, and of the carbon chain to resist it. Thus, methanol, ethanol, and propanol are miscible in water because the hydroxyl group wins out over the short carbon chain. Butanol, with a four-carbon chain, is moderately soluble because of a balance between the two trends. Alcohols of five or more carbons (Pentanol and higher) are effectively insoluble because of the hydrocarbon chain's dominance.
Because of hydrogen bonding, alcohols tend to have higher boiling points than comparable hydrocarbons and ethers. All simple alcohols are miscible in organic solvents. This hydrogen bonding means that alcohols can be used as protic solvents.
The lone pairs of electrons on the oxygen of the hydroxyl group also makes alcohols nucleophiles.
Alcohols, like water, can show either acidic or basic properties at the O-H group. With a pKa of around 16-19 they are generally slightly weaker acids than water, but they are still able to react with strong bases such as sodium hydride or reactive metals such as sodium. The salts that result are called alkoxides, with the general formula RO- M+. Meanwhile the oxygen atom has lone pairs of nonbonded electrons that render it weakly basic in the presence of strong acids such as sulfuric acid. For example, with methanol:
Alcohols can also undergo oxidation to give aldehydes, ketones or carboxylic acids, or they can be dehydrated to alkenes. They can react to form ester compounds, and they can (if activated first) undergo nucleophilic substitution reactions. For more details see the #Chemistry of alcohols section below.
## Toxicity
Alcohols often have an odor described as 'biting' that 'hangs' in the nasal passages. Ethanol in the form of alcoholic beverages has been consumed by humans since pre-historic times, for a variety of hygienic, dietary, medicinal, religious, and recreational reasons. While infrequent consumption of ethanol in small quantities may be harmless or even beneficial, larger doses result in a state known as drunkenness or intoxication and, depending on the dose and regularity of use, can cause acute respiratory failure or death and with chronic use has medical repercussions.
Other alcohols are substantially more poisonous than ethanol, partly because they take much longer to be metabolized, and often their metabolism produces even more toxic substances. Methanol, or wood alcohol, for instance, is oxidized by alcohol dehydrogenase enzymes in the liver to the poisonous formaldehyde, which can cause blindness or death. Interestingly, an effective treatment to prevent formaldehyde toxicity after methanol ingestion is to administer ethanol. This will bind to alcohol dehydrogenase, preventing methanol from binding and thus acting as a substrate. Any formaldehyde will be converted to formic acid and excreted before it causes damage.
## Chemistry of alcohols
### Preparation
#### Laboratory
There are three common methods:
R-Br + KOH → R-OH + KBr
R-CHO - [O] → R-OH
C2H4 + H2SO4 (l) → C2H5-HSO4
C2H5-HSO4 + H2O → C2H5OH + H2SO4
The formation of a secondary alcohol via the last two methods is shown:
#### Industrial
C12H22O11 → C6H12O6 + C6H12O6
Invertase → glucose + fructose
C6H12O6 + H2O → C2H5OH + CO2
Glucose → zymase + ethanol
• Methanol from water gas: It is manufactured from synthesis gas, where CO + 2 H2 are combined to produce methanol using a Cu, ZnO and Al2O3 catalyst at 250°C and a pressure of 50-100 atm.
[CO + H2] + H2O (g) → CH3OH
### Reactions
See the physical and chemical properties section above for a general overview.
#### Deprotonation
Alcohols can behave as weak acids, undergoing deprotonation. The deprotonation reaction to produce an alkoxide salt is either performed with a strong base such as sodium hydride or n-butyllithium, or with sodium or potassium metal.
2 R-OH + 2 NaH → 2 R-O-Na+ + H2
2 R-OH + 2Na → 2R-ONa+
e.g. 2 CH3CH2-OH + 2 Na → 2 CH3-CH2-ONa+
Water is similar in pKa to many alcohols, so with sodium hydroxide there is an equilibrium set up which usually lies to the left:
R-OH + NaOH <=> R-O-Na+ + H2O (equilibrium to the left)
#### Nucleophilic substitution
The OH group is not a good leaving group in nucleophilic substitution reactions, so neutral alcohols do not react in such reactions. However if the oxygen is first protonated to give R−OH2+, the leaving group (water) is much more stable, and nucleophilic substitution can take place. For instance, tertiary alcohols react with hydrochloric acid to produce tertiary alkyl halides, where the hydroxyl group is replaced by a chlorine atom. If primary or secondary alcohols are to be reacted with hydrochloric acid, an activator such as zinc chloride is needed. Alternatively the conversion may be performed directly using thionyl chloride.[1]
Alcohols may likewise be converted to alkyl bromides using hydrobromic acid or phosphorus tribromide, for example:
3 R-OH + PBr3 → 3 RBr + H3PO3
In the Barton-McCombie deoxygenation an alcohol is deoxygenated to an alkane with tributyltin hydride or a trimethylborane-water complex in a radical substitution reaction.
#### Dehydration
Alcohols are themselves nucleophilic, so R−OH2+ can react with ROH to produce ethers and water, although this reaction is rarely used except in the manufacture of diethyl ether.
More useful is the E1 elimination reaction of alcohols to produce alkenes. The reaction generally obeys Zaitsev's Rule, which states that the most stable (usually the most substituted) alkene is formed. Tertiary alcohols eliminate easily at just above room temperature, but primary alcohols requre a higher temperature.
This is a diagram of acid catalysed dehydration of ethanol to produce ethene:
#### Esterification
To form an ester from an alcohol and a carboxylic acid the reaction, known as "Fischer esterification", is usually performed at reflux with a catalyst of concentrated sulfuric acid:
R-OH + R'-COOH $\Leftrightarrow$ R'-COOR + H2O
In order to drive the equilibrium to the right and produce a good yield of ester, water is usually removed, either by an excess of H2SO4 or by using a Dean-Stark apparatus. Esters may also be prepared by reaction of the alcohol with an acid chloride in the presence of a base such as pyridine.
Other types of ester are prepared similarly- for example p-toluenesulfonate (tosylate) esters are made by reaction of the alcohol with p-toluenesulfonyl chloride in pyridine.
#### Oxidation
Primary alcohols generally give aldehydes or carboxylic acids upon oxidation, while secondary alcohols give ketones. Traditionally strong oxidants such as the dichromate ion or potassium permanganate are used, under acidic conditions, for example:
3 CH3-CH(-OH)-CH3 + K2Cr2O7 + 4 H2SO4 → 3 CH3-C(=O)-CH3 + Cr2(SO4)3 + K2SO4 + 7 H2O
Frequently in aldehyde preparations these reagents cause a problem of over-oxidation to the carboxylic acid. To avoid this, other reagents such as PCC, Dess-Martin periodinane, IBX acid, TPAP or methods such as Swern oxidation are now preferred.
Alcohols with a methyl group attached to the alcohol carbon can also undergo a haloform reaction (such as the iodoform reaction) in the presence of the halogen and a base such as sodium hydroxide.
Tertiary alcohols resist oxidation, but can be oxidised by reagents such as 2,3-dichloro-5,6-dicyano-1,4-benzoquinone.
|
2022-01-18 19:33:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5575908422470093, "perplexity": 9040.066290645373}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300997.67/warc/CC-MAIN-20220118182855-20220118212855-00109.warc.gz"}
|
http://en.wikipedia.org/wiki/Virtual_black_holes
|
# Virtual black hole
(Redirected from Virtual black holes)
In quantum gravity, a virtual black hole is a black hole that exists temporarily as a result of a quantum fluctuation of spacetime.[1] It is an example of quantum foam and is the gravitational analog of the virtual electronpositron pairs found in quantum electrodynamics. Theoretical arguments suggest that virtual black holes should have mass on the order of the Planck mass, lifetime around the Planck time, and occur with a number density of approximately one per Planck volume.[2]
The emergence of virtual black holes at the Planck scale is a consequence of the uncertainty relation
$\Delta R_{\mu}\Delta x_{\mu}\ge\ell^2_{P}=\frac{\hbar G}{c^3}$
where $R_{\mu}$ is the radius of curvature of space-time small domain; $x_{\mu}$ is the coordinate small domain; $\ell_{P}$ is the Planck length; $\hbar$ is the Dirac constant; $G$ - Newton's gravitational constant; $c$ is the speed of light. These uncertainty relations are another form of Heisenberg's uncertainty principle at the Planck scale.
If virtual black holes exist, they provide a mechanism for proton decay. This is because when a black hole's mass increases via mass falling into the hole, and then decreases when Hawking radiation is emitted from the hole, the elementary particles emitted are, in general, not the same as those that fell in. Therefore, if two of a proton's constituent quarks fall into a virtual black hole, it is possible for an antiquark and a lepton to emerge, thus violating conservation of baryon number.[2]
The existence of virtual black holes aggravates the black hole information loss paradox, as any physical process may potentially be disrupted by interaction with a virtual black hole.[5]
## References
1. ^ S. W. Hawking(1995)"Virtual Black Holes"
2. ^ a b Fred C. Adams, Gordon L. Kane, Manasse Mbonye, and Malcolm J. Perry (2001), Proton Decay, Black Holes, and Large Extra Dimensions, Intern. J. Mod. Phys. A, 16, 2399.
3. ^ P.A.M.Dirac(1975), General Theory of Relativity, A Wilay Interscience Publication, p.37
4. ^ A.P.Klimets(2012) "Postigaja mirozdanie", LAP LAMBERT Academic Publishing, Deutschland
5. ^ The black hole information paradox, Steven B. Giddings, arXiv:hep-th/9508151v1.
|
2014-09-17 00:28:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 48, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7321229577064514, "perplexity": 1023.1900627815395}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657120057.96/warc/CC-MAIN-20140914011200-00104-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
|
http://mathhelpforum.com/pre-calculus/82756-turning-points.html
|
# Math Help - Turning points
1. ## Turning points
I dont know if this goes in here but this is a question from my pre-calc class that i could not solve
Find a polynomial whose turning points are at -1, (3 + √7)/4 , and (3 - √7)/4
Any help will be appreciated.
2. Originally Posted by mathprob
I dont know if this goes in here but this is a question from my pre-calc class that i could not solve
Find a polynomial whose turning points are at -1, (3 + √7)/4 , and (3 - √7)/4
Any help will be appreciated.
Turning Points occur when the derivative is 0.
Using this information, you can find the derivative.
$\frac{dy}{dx} = (x + 1)\left[x - \left(\frac{3 + \sqrt{7}}{4}\right)\right]\left[x - \left(\frac{3 - \sqrt{7}}{4}\right)\right]$
(set it equal to 0, you should see that the turning points are correct).
Expand, and then you can integrate to find the polynomial.
$\frac{dy}{dx} = x^3 - \frac{1}{2}x^2 - x + \frac{1}{2}$
$y = \int{x^3 - \frac{1}{2}x^2 - x + \frac{1}{2}\,dx}$
$y = \frac{1}{4}x^4 - \frac{1}{6}x^3 - \frac{1}{2}x^2 + \frac{1}{2}x + C$.
You can choose any value of C you like... 0 is the easiest.
So a polynomial that has the turning points you mentioned is
$y = \frac{1}{4}x^4 - \frac{1}{6}x^3 - \frac{1}{2}x^2 + \frac{1}{2}x$.
3. Originally Posted by Prove It
Turning Points occur when the derivative is 0.
Using this information, you can find the derivative.
$\frac{dy}{dx} = (x + 1)\left[x - \left(\frac{3 + \sqrt{7}}{4}\right)\right]\left[x - \left(\frac{3 - \sqrt{7}}{4}\right)\right]$
(set it equal to 0, you should see that the turning points are correct).
Expand, and then you can integrate to find the polynomial.
$\frac{dy}{dx} = x^3 - \frac{1}{2}x^2 - x + \frac{1}{2}$
$y = \int{x^3 - \frac{1}{2}x^2 - x + \frac{1}{2}\,dx}$
$y = \frac{1}{4}x^4 - \frac{1}{6}x^3 - \frac{1}{2}x^2 + \frac{1}{2}x + C$.
You can choose any value of C you like... 0 is the easiest.
So a polynomial that has the turning points you mentioned is
$y = \frac{1}{4}x^4 - \frac{1}{6}x^3 - \frac{1}{2}x^2 + \frac{1}{2}x$.
when i multiply
$\frac{dy}{dx} = (x + 1)\left[x - \left(\frac{3 + \sqrt{7}}{4}\right)\right]\left[x - \left(\frac{3 - \sqrt{7}}{4}\right)\right]$
i get 8x^3 - 4x^2 - 11x + 1
4. Originally Posted by mathprob
when i multiply
$\frac{dy}{dx} = (x + 1)\left[x - \left(\frac{3 + \sqrt{7}}{4}\right)\right]\left[x - \left(\frac{3 - \sqrt{7}}{4}\right)\right]$
i get 8x^3 - 4x^2 - 11x + 1
After multiplying by 8, of course .....
Yes, I get the same result. Prove It made some small mistakes. But they don't affect the method he has shown you and you should be able to get the final answer without trouble.
5. Originally Posted by mr fantastic
After multiplying by 8, of course .....
Yes, I get the same result. Prove It made some small mistakes. But they don't affect the method he has shown you and you should be able to get the final answer without trouble.
well, we never learned integrals thats why i could not use his method even though i know integrals
i got 12x^4 - 8x^3 - 33x^2 + 6x = 0
i need confirmation though, ill get some needed points for this
6. Originally Posted by mathprob
well, we never learned integrals thats why i could not use his method even though i know integrals
i got 12x^4 - 8x^3 - 33x^2 + 6x = 0
i need confirmation though, ill get some needed points for this
Why are you doing questions that require integration if you never learned it?
|
2014-10-01 14:17:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6839494109153748, "perplexity": 571.4712189744616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663460.43/warc/CC-MAIN-20140930004103-00172-ip-10-234-18-248.ec2.internal.warc.gz"}
|
https://socratic.org/questions/why-is-mixing-water-with-potassium-chloride-an-endothermic-process
|
Why is mixing water with potassium chloride an endothermic process?
Oct 1, 2015
Answer:
Because strong electrostatic bonds between oppositely charged ions are disrupted upon dissolution.
Explanation:
$K C l \left(s\right) r i g h t \le f t h a r p \infty n s {K}^{+} \left(a q\right) + C {l}^{-}$$\left(a q\right)$
Dissolution disrupts the strong electrostatic bonds between the oppositely charged ions of the lattice. Bond breaking requires energy, and therefore, the reaction is endothermic. The individual ions are aquated by water molecules (which is why we write ${K}^{+} \left(a q\right)$), but such bond formation does not energetically compensate for the initial bond breaking.
|
2019-08-18 15:25:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 3, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7628993391990662, "perplexity": 9739.334972894405}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313936.42/warc/CC-MAIN-20190818145013-20190818171013-00516.warc.gz"}
|
https://proofwiki.org/wiki/Definition:Endvertex
|
Definition:Graph (Graph Theory)/Edge/Endvertex
(Redirected from Definition:Endvertex)
Definition
Let $G = \struct {V, E}$ be a graph or digraph.
Let $e = u v$ be an edge of $G$, that is, $e \in E$.
The endvertices of $e$ are the vertices $u$ and $v$.
Also known as
The endvertices of an edge $e$ are also known as the endpoints of $e$.
|
2021-12-04 09:49:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9903451204299927, "perplexity": 500.1131140656309}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362969.51/warc/CC-MAIN-20211204094103-20211204124103-00290.warc.gz"}
|
https://economics.stackexchange.com/questions/29741/do-any-social-welfare-functionals-other-than-maximin-meet-all-of-arrows-condi
|
# Do any social welfare functionals, other than maximin, meet all of Arrow's conditions plus invariance regarding ordinal level comparability?
In the literature on social welfare functionals, the only example I've seen of a functional which meets all of Arrow's conditions–––or at least utility analogues of Arrow's conditions–––plus invariance regarding ordinal level comparability is Rawls' maximin. E.g. Sen in On Weights and Measures (1977, p. 1544) cites maximin as his case of a functional meeting all of these conditions. Maximin orders the alternatives by the welfare of individual who is worst off. I assume that the inverse of maximin–––i.e. the alternatives are ordered by the welfare of individual who is best off–––would also meet these conditions.
Is there any work on other social welfare functionals which meet all these conditions? (I'm aware that if we tweak these conditions slightly we can derive other functionals, but I'm interested in the case in which we keep them unaltered.)
If not, is this evidence that maximin, and its inverse, are the only normatively sensible social welfare functionals that meets all these conditions? Or is it just evidence that people aren't so interested in this set of conditions? (If there is a clear reason why this set of conditions is uninteresting, I'd love to hear it).
Thanks for any help!
Utility analogues of Arrow’s conditions:
Utility analogues of Arrow’s conditions are Arrow’s conditions redefined for Sen’s welfare functional framework. Instead of taking a profile of orderings as input, Sen's functional takes a profile of utility functions as input: $$U \ = \ $$. $$U$$ is defined on $$X \times N$$; each individual, $$i \in N$$, is paired with each alternative, $$x \in X$$, and the result of each pairing is the utility derived by $$i$$ from $$x$$. $$\mathcal{U} \ = \ \{U^1, \ U^2, \ \dots \ , \ U^n \}$$ is the set of all possible utility profiles. $$\mathcal{U^*}$$ is the set of all utility profiles which meet a particular domain restriction. $$\mathcal{R}$$ is the set of all possible orderings of $$X$$. A social welfare functional can then be defined as: $$f: \ \mathcal{U^*} \longrightarrow \mathcal{R}$$. The final ordering given by profile $$U^1$$, $$f(U^1)$$, is denoted: $$R_{U^1}$$. We can then define utility analogues of Arrow's conditions:
Unrestricted Domain$$’$$: The domain of $$f$$ is the set of all possible utility profiles: $$\mathcal{U}^* \ = \ \mathcal{U}$$.
Weak Pareto$$’$$: $$\forall x, y \in X$$, $$\forall i \in N$$: $$( \ u_i(x) \ > \ u_i(y) \ ) \ \Longrightarrow \ (xPy)$$.
Non-Dictatorship$$’$$: $$f$$ does not single out one individual $$i \in N$$ such that, $$\forall U \in \mathcal{U^*}, \ \forall x, y \in X$$: $$( \ u_i(x) \ > \ u_i(y) \ ) \ \Longrightarrow \ (xPy)$$.
Independence of Irrelevant Utilities: $$\forall U^1$$ and $$U^2$$ $$\in \mathcal{U^*}, \ \forall x, y \in X$$: $$(\forall i \in N \ (( \ u^1_i(x) = u^2_i(x) \ ) \land ( \ u^1_i(y) = u^2_i(y) \ )) \ \Longrightarrow \ (( \ x R_{U^1} y \ ) \ \Longleftrightarrow \ ( \ x R_{U^2} y \ ))$$.
There are at least two other examples of SWFs that satisfy these conditions.
The first is a positional dictatorship. Let N be the number of individuals (assume it is fixed). For any k between 1 and N, the kth positional dictatorship SWF orders social alternatives in terms of the preferences of the "kth best off" agent. Formally, given any social alternative x, let v(x), be the vector of utilities of all individuals for x, but ordered from lowest to highest. The kth positional dictatorship SWF is then defined by the kth component of the function v. If k=1, we get the maximin. If k=N, then we get the "maximax" ---what you call the "inverse" of the maximin. If k=[N/2], we get effectively "dictatorship of the median individual". The point is not that these rules are normatively attractive (they aren't) ---but they satisfy your axioms.
Another possibility is the so-called leximin or lexicographical maximin rule. This is the lexicographical extension of the maximin, obtained by ranking social alternatives according to the vector-valued function v from the previous paragraph, but with coordinates treated lexicographically. Thus, alternative x is better than alternative y if it has a higher minimum utility value. If x and y yield the same minimum utility, then we compare them by looking at the utilities of the second-worst off individual in x and y. If these individuals also have the same utility, then we look at the third-worst off individuals, and so on.
This SWF is very similar to maximin, but it satisfies a stronger version of the Pareto axiom.
For more information, I suggest you look at the 2002 article by Claude d'Aspremont and Louis Gevers entitled "Social welfare functionals and interpersonal comparability", which is Chapter 10 of the Handbook of Social Choice and Welfare volume I (Arrow, Sen and Suzumura, eds.). You could also look at Chapter 2 of the book Axioms of Cooperative Decision-Making, by Hervé Moulin (1988). In particular, Theorem 2.4 on page 40 of Moulin's book might be useful to you: it says (roughly) that the positional dictatorships and their extensions (such as leximin) are the only SWFs satisfying ordinal level comparability and a few other mild conditions.
• Brilliant; thank you! I should clarify that I don’t take maximax to be `normatively sensible’. I shoehorned it in after it occurred that I had left it out, and did not edit carefully enough. However, I don’t want to edit the question now, as it would make your wonderful answer less precise. – Nikelmouse Dylar Jun 10 '19 at 19:04
|
2020-07-05 14:03:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 31, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7259006500244141, "perplexity": 1121.7267812568032}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655887360.60/warc/CC-MAIN-20200705121829-20200705151829-00175.warc.gz"}
|
https://www.shaalaa.com/question-bank-solutions/two-point-charges-q1-and-q2-are-kept-at-a-distance-of-r12-in-air-deduce-the-expression-for-the-electrostatic-potential-energy-of-this-system-relation-between-electric-field-electrostatic-potential_108119
|
# Two Point Charges Q1 and Q2 Are Kept at a Distance of R12 in Air. Deduce the Expression for the Electrostatic Potential Energy of this System. - Physics
Short Note
Two-point charges q1 and q2 are kept at a distance of r12 in air. Deduce the expression for the electrostatic potential energy of this system.
#### Solution
Electrostatic potential energy of a system is defined as the total amount of the work done in bringing the various charges to their respective position from infinitely large mutual separation.
Let us consider a charge q1 at a postion vector r1 and q2 at infinity which is to be brought at point P2 having position vector r2. and dW is the small amount of work done in moving a charge to a distance dx.
dW = vec"F". vecdx
dW = -(kq_1q_2)/(r^2). dx
W = -int_∞^(r12) (kq_1q_2)/(r^2).dx
W = -Kq_1q_2 int_∞^(r12) 1/r^2. dx
W = -Kq_1q_2 [-1/r]_∞^(r12)
W = Kq_1q_2[1/(r_12)]
W = U = (Kq_1q_2)/(r_12).
Concept: Relation Between Electric Field and Electrostatic Potential
Is there an error in this question or solution?
|
2022-07-07 17:50:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6230992674827576, "perplexity": 1050.6425060316246}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104495692.77/warc/CC-MAIN-20220707154329-20220707184329-00422.warc.gz"}
|
https://book.stat420.org/probability-and-statistics-in-r.html
|
# Chapter 5 Probability and Statistics in R
## 5.1 Probability in R
### 5.1.1 Distributions
When working with different statistical distributions, we often want to make probabilistic statements based on the distribution.
We typically want to know one of four things:
• The density (pdf) at a particular value.
• The distribution (cdf) at a particular value.
• The quantile value corresponding to a particular probability.
• A random draw of values from a particular distribution.
This used to be done with statistical tables printed in the back of textbooks. Now, R has functions for obtaining density, distribution, quantile and random values.
The general naming structure of the relevant R functions is:
• dname calculates density (pdf) at input x.
• pname calculates distribution (cdf) at input x.
• qname calculates the quantile at an input probability.
• rname generates a random draw from a particular distribution.
Note that name represents the name of the given distribution.
For example, consider a random variable $$X$$ which is $$N(\mu = 2, \sigma^2 = 25)$$. (Note, we are parameterizing using the variance $$\sigma^2$$. R however uses the standard deviation.)
To calculate the value of the pdf at x = 3, that is, the height of the curve at x = 3, use:
dnorm(x = 3, mean = 2, sd = 5)
## [1] 0.07820854
To calculate the value of the cdf at x = 3, that is, $$P(X \leq 3)$$, the probability that $$X$$ is less than or equal to 3, use:
pnorm(q = 3, mean = 2, sd = 5)
## [1] 0.5792597
Or, to calculate the quantile for probability 0.975, use:
qnorm(p = 0.975, mean = 2, sd = 5)
## [1] 11.79982
Lastly, to generate a random sample of size n = 10, use:
rnorm(n = 10, mean = 2, sd = 5)
## [1] 7.37419359 6.10660658 3.52600481 -4.44840024 -1.34387366 2.30941982
## [7] 0.05664736 5.09714342 2.67348934 2.58181891
These functions exist for many other distributions, including but not limited to:
Command Distribution
*binom Binomial
*t t
*pois Poisson
*f F
*chisq Chi-Squared
Where * can be d, p, q, and r. Each distribution will have its own set of parameters which need to be passed to the functions as arguments. For example, dbinom() would not have arguments for mean and sd, since those are not parameters of the distribution. Instead a binomial distribution is usually parameterized by $$n$$ and $$p$$, however R chooses to call them something else. To find the names that R uses we would use ?dbinom and see that R instead calls the arguments size and prob. For example:
dbinom(x = 6, size = 10, prob = 0.75)
## [1] 0.145998
Also note that, when using the dname functions with discrete distributions, they are the pmf of the distribution. For example, the above command is $$P(Y = 6)$$ if $$Y \sim b(n = 10, p = 0.75)$$. (The probability of flipping an unfair coin 10 times and seeing 6 heads, if the probability of heads is 0.75.)
## 5.2 Hypothesis Tests in R
A prerequisite for STAT 420 is an understanding of the basics of hypothesis testing. Recall the basic structure of hypothesis tests:
• An overall model and related assumptions are made. (The most common being observations following a normal distribution.)
• The null ($$H_{0}$$) and alternative ($$H_{1}$$ or $$H_{A}$$) hypotheses are specified. Usually the null specifies a particular value of a parameter.
• With given data, the value of the test statistic is calculated.
• Under the general assumptions, as well as assuming the null hypothesis is true, the distribution of the test statistic is known.
• Given the distribution and value of the test statistic, as well as the form of the alternative hypothesis, we can calculate a p-value of the test.
• Based on the p-value and pre-specified level of significance, we make a decision. One of:
• Fail to reject the null hypothesis.
• Reject the null hypothesis.
We’ll do some quick review of two of the most common tests to show how they are performed using R.
### 5.2.1 One Sample t-Test: Review
Suppose $$x_{i} \sim \mathrm{N}(\mu,\sigma^{2})$$ and we want to test $$H_{0}: \mu = \mu_{0}$$ versus $$H_{1}: \mu \neq \mu_{0}.$$
Assuming $$\sigma$$ is unknown, we use the one-sample Student’s $$t$$ test statistic:
$t = \frac{\bar{x}-\mu_{0}}{s/\sqrt{n}} \sim t_{n-1},$
where $$\bar{x} = \displaystyle\frac{\sum_{i=1}^{n}x_{i}}{n}$$ and $$s = \sqrt{\displaystyle\frac{1}{n - 1}\sum_{i=1}^{n}(x_i - \bar{x})^2}$$.
A $$100(1 - \alpha)$$% confidence interval for $$\mu$$ is given by,
$\bar{x} \pm t_{\alpha/2, n-1}\frac{s}{\sqrt{n}}$
where $$t_{\alpha/2, n-1}$$ is the critical value such that $$P\left(t>t_{\alpha/2, n-1}\right) = \alpha/2$$ for $$n-1$$ degrees of freedom.
### 5.2.2 One Sample t-Test: Example
Suppose a grocery store sells “16 ounce” boxes of Captain Crisp cereal. A random sample of 9 boxes was taken and weighed. The weight in ounces is stored in the data frame capt_crisp.
capt_crisp = data.frame(weight = c(15.5, 16.2, 16.1, 15.8, 15.6, 16.0, 15.8, 15.9, 16.2))
The company that makes Captain Crisp cereal claims that the average weight of a box is at least 16 ounces. We will assume the weight of cereal in a box is normally distributed and use a 0.05 level of significance to test the company’s claim.
To test $$H_{0}: \mu \geq 16$$ versus $$H_{1}: \mu < 16$$, the test statistic is
$t = \frac{\bar{x} - \mu_{0}}{s / \sqrt{n}}$
The sample mean $$\bar{x}$$ and the sample standard deviation $$s$$ can be easily computed using R. We also create variables which store the hypothesized mean and the sample size.
x_bar = mean(capt_crisp$weight) s = sd(capt_crisp$weight)
mu_0 = 16
n = 9
We can then easily compute the test statistic.
t = (x_bar - mu_0) / (s / sqrt(n))
t
## [1] -1.2
Under the null hypothesis, the test statistic has a $$t$$ distribution with $$n - 1$$ degrees of freedom, in this case 8.
To complete the test, we need to obtain the p-value of the test. Since this is a one-sided test with a less-than alternative, we need the area to the left of -1.2 for a $$t$$ distribution with 8 degrees of freedom. That is,
$P(t_{8} < -1.2)$
pt(t, df = n - 1)
## [1] 0.1322336
We now have the p-value of our test, which is greater than our significance level (0.05), so we fail to reject the null hypothesis.
Alternatively, this entire process could have been completed using one line of R code.
t.test(x = capt_crisp$weight, mu = 16, alternative = c("less"), conf.level = 0.95) ## ## One Sample t-test ## ## data: capt_crisp$weight
## t = -1.2, df = 8, p-value = 0.1322
## alternative hypothesis: true mean is less than 16
## 95 percent confidence interval:
## -Inf 16.05496
## sample estimates:
## mean of x
## 15.9
We supply R with the data, the hypothesized value of $$\mu$$, the alternative, and the confidence level. R then returns a wealth of information including:
• The value of the test statistic.
• The degrees of freedom of the distribution under the null hypothesis.
• The p-value of the test.
• The confidence interval which corresponds to the test.
• An estimate of $$\mu$$.
Since the test was one-sided, R returned a one-sided confidence interval. If instead we wanted a two-sided interval for the mean weight of boxes of Captain Crisp cereal we could modify our code.
capt_test_results = t.test(capt_crisp$weight, mu = 16, alternative = c("two.sided"), conf.level = 0.95) This time we have stored the results. By doing so, we can directly access portions of the output from t.test(). To see what information is available we use the names() function. names(capt_test_results) ## [1] "statistic" "parameter" "p.value" "conf.int" "estimate" ## [6] "null.value" "stderr" "alternative" "method" "data.name" We are interested in the confidence interval which is stored in conf.int. capt_test_results$conf.int
## [1] 15.70783 16.09217
## attr(,"conf.level")
## [1] 0.95
Let’s check this interval “by hand.” The one piece of information we are missing is the critical value, $$t_{\alpha/2, n-1} = t_{8}(0.025)$$, which can be calculated in R using the qt() function.
qt(0.975, df = 8)
## [1] 2.306004
So, the 95% CI for the mean weight of a cereal box is calculated by plugging into the formula,
$\bar{x} \pm t_{\alpha/2, n-1} \frac{s}{\sqrt{n}}$
c(mean(capt_crisp$weight) - qt(0.975, df = 8) * sd(capt_crisp$weight) / sqrt(9),
mean(capt_crisp$weight) + qt(0.975, df = 8) * sd(capt_crisp$weight) / sqrt(9))
## [1] 15.70783 16.09217
### 5.2.3 Two Sample t-Test: Review
Suppose $$x_{i} \sim \mathrm{N}(\mu_{x}, \sigma^{2})$$ and $$y_{i} \sim \mathrm{N}(\mu_{y}, \sigma^{2}).$$
Want to test $$H_{0}: \mu_{x} - \mu_{y} = \mu_{0}$$ versus $$H_{1}: \mu_{x} - \mu_{y} \neq \mu_{0}.$$
Assuming $$\sigma$$ is unknown, use the two-sample Student’s $$t$$ test statistic:
$t = \frac{(\bar{x} - \bar{y})-\mu_{0}}{s_{p}\sqrt{\frac{1}{n}+\frac{1}{m}}} \sim t_{n+m-2},$
where $$\displaystyle\bar{x}=\frac{\sum_{i=1}^{n}x_{i}}{n}$$, $$\displaystyle\bar{y}=\frac{\sum_{i=1}^{m}y_{i}}{m}$$, and $$s_p^2 = \displaystyle\frac{(n-1)s_x^2+(m-1)s_y^2}{n+m-2}$$.
A $$100(1-\alpha)$$% CI for $$\mu_{x}-\mu_{y}$$ is given by
$(\bar{x} - \bar{y}) \pm t_{\alpha/2, n+m-2}\left(s_{p}\textstyle\sqrt{\frac{1}{n}+\frac{1}{m}}\right),$
where $$t_{\alpha/2, n+m-2}$$ is the critical value such that $$P\left(t>t_{\alpha/2, n+m-2}\right)=\alpha/2$$.
### 5.2.4 Two Sample t-Test: Example
Assume that the distributions of $$X$$ and $$Y$$ are $$\mathrm{N}(\mu_{1},\sigma^{2})$$ and $$\mathrm{N}(\mu_{2},\sigma^{2})$$, respectively. Given the $$n = 6$$ observations of $$X$$,
x = c(70, 82, 78, 74, 94, 82)
n = length(x)
and the $$m = 8$$ observations of $$Y$$,
y = c(64, 72, 60, 76, 72, 80, 84, 68)
m = length(y)
we will test $$H_{0}: \mu_{1} = \mu_{2}$$ versus $$H_{1}: \mu_{1} > \mu_{2}$$.
First, note that we can calculate the sample means and standard deviations.
x_bar = mean(x)
s_x = sd(x)
y_bar = mean(y)
s_y = sd(y)
We can then calculate the pooled standard deviation.
$s_{p} = \sqrt{\frac{(n-1)s_{x}^{2}+(m-1)s_{y}^{2}}{n+m-2}}$
s_p = sqrt(((n - 1) * s_x ^ 2 + (m - 1) * s_y ^ 2) / (n + m - 2))
Thus, the relevant $$t$$ test statistic is given by
$t = \frac{(\bar{x}-\bar{y})-\mu_{0}}{s_{p}\sqrt{\frac{1}{n}+\frac{1}{m}}}.$
t = ((x_bar - y_bar) - 0) / (s_p * sqrt(1 / n + 1 / m))
t
## [1] 1.823369
Note that $$t \sim t_{n + m - 2} = t_{12}$$, so we can calculate the p-value, which is
$P(t_{12} > 1.8233692).$
1 - pt(t, df = n + m - 2)
## [1] 0.04661961
But, then again, we could have simply performed this test in one line of R.
t.test(x, y, alternative = c("greater"), var.equal = TRUE)
##
## Two Sample t-test
##
## data: x and y
## t = 1.8234, df = 12, p-value = 0.04662
## alternative hypothesis: true difference in means is greater than 0
## 95 percent confidence interval:
## 0.1802451 Inf
## sample estimates:
## mean of x mean of y
## 80 72
Recall that a two-sample $$t$$-test can be done with or without an equal variance assumption. Here var.equal = TRUE tells R we would like to perform the test under the equal variance assumption.
Above we carried out the analysis using two vectors x and y. In general, we will have a preference for using data frames.
t_test_data = data.frame(values = c(x, y),
group = c(rep("A", length(x)), rep("B", length(y))))
We now have the data stored in a single variables (values) and have created a second variable (group) which indicates which “sample” the value belongs to.
t_test_data
## values group
## 1 70 A
## 2 82 A
## 3 78 A
## 4 74 A
## 5 94 A
## 6 82 A
## 7 64 B
## 8 72 B
## 9 60 B
## 10 76 B
## 11 72 B
## 12 80 B
## 13 84 B
## 14 68 B
Now to perform the test, we still use the t.test() function but with the ~ syntax and a data argument.
t.test(values ~ group, data = t_test_data,
alternative = c("greater"), var.equal = TRUE)
##
## Two Sample t-test
##
## data: values by group
## t = 1.8234, df = 12, p-value = 0.04662
## alternative hypothesis: true difference in means between group A and group B is greater than 0
## 95 percent confidence interval:
## 0.1802451 Inf
## sample estimates:
## mean in group A mean in group B
## 80 72
## 5.3 Simulation
Simulation and model fitting are related but opposite processes.
• In simulation, the data generating process is known. We will know the form of the model as well as the value of each of the parameters. In particular, we will often control the distribution and parameters which define the randomness, or noise in the data.
• In model fitting, the data is known. We will then assume a certain form of model and find the best possible values of the parameters given the observed data. Essentially we are seeking to uncover the truth. Often we will attempt to fit many models, and we will learn metrics to assess which model fits best.
Often we will simulate data according to a process we decide, then use a modeling method seen in class. We can then verify how well the method works, since we know the data generating process.
One of the biggest strengths of R is its ability to carry out simulations using built-in functions for generating random samples from certain distributions. We’ll look at two very simple examples here, however simulation will be a topic we revisit several times throughout the course.
### 5.3.1 Paired Differences
Consider the model:
$\begin{split} X_{11}, X_{12}, \ldots, X_{1n} \sim N(\mu_1,\sigma^2)\\ X_{21}, X_{22}, \ldots, X_{2n} \sim N(\mu_2,\sigma^2) \end{split}$
Assume that $$\mu_1 = 6$$, $$\mu_2 = 5$$, $$\sigma^2 = 4$$ and $$n = 25$$.
Let
\begin{aligned} \bar{X}_1 &= \displaystyle\frac{1}{n}\sum_{i=1}^{n}X_{1i}\\ \bar{X}_2 &= \displaystyle\frac{1}{n}\sum_{i=1}^{n}X_{2i}\\ D &= \bar{X}_1 - \bar{X}_2. \end{aligned}
Suppose we would like to calculate $$P(0 < D < 2)$$. First we will need to obtain the distribution of $$D$$.
Recall,
$\bar{X}_1 \sim N\left(\mu_1,\frac{\sigma^2}{n}\right)$
and
$\bar{X}_2 \sim N\left(\mu_2,\frac{\sigma^2}{n}\right).$
Then,
$D = \bar{X}_1 - \bar{X}_2 \sim N\left(\mu_1-\mu_2, \frac{\sigma^2}{n} + \frac{\sigma^2}{n}\right) = N\left(6-5, \frac{4}{25} + \frac{4}{25}\right).$
So,
$D \sim N(\mu = 1, \sigma^2 = 0.32).$
Thus,
$P(0 < D < 2) = P(D < 2) - P(D < 0).$
This can then be calculated using R without a need to first standardize, or use a table.
pnorm(2, mean = 1, sd = sqrt(0.32)) - pnorm(0, mean = 1, sd = sqrt(0.32))
## [1] 0.9229001
An alternative approach, would be to simulate a large number of observations of $$D$$ then use the empirical distribution to calculate the probability.
Our strategy will be to repeatedly:
• Generate a sample of 25 random observations from $$N(\mu_1 = 6,\sigma^2 = 4)$$. Call the mean of this sample $$\bar{x}_{1s}$$.
• Generate a sample of 25 random observations from $$N(\mu_1 = 5,\sigma^2 = 4)$$. Call the mean of this sample $$\bar{x}_{2s}$$.
• Calculate the differences in the means, $$d_s = \bar{x}_{1s} - \bar{x}_{2s}$$.
We will repeat the process a large number of times. Then we will use the distribution of the simulated observations of $$d_s$$ as an estimate for the true distribution of $$D$$.
set.seed(42)
num_samples = 10000
differences = rep(0, num_samples)
Before starting our for loop to perform the operation, we set a seed for reproducibility, create and set a variable num_samples which will define the number of repetitions, and lastly create a variables differences which will store the simulate values, $$d_s$$.
By using set.seed() we can reproduce the random results of rnorm() each time starting from that line.
for (s in 1:num_samples) {
x1 = rnorm(n = 25, mean = 6, sd = 2)
x2 = rnorm(n = 25, mean = 5, sd = 2)
differences[s] = mean(x1) - mean(x2)
}
To estimate $$P(0 < D < 2)$$ we will find the proportion of values of $$d_s$$ (among the 10^{4} values of $$d_s$$ generated) that are between 0 and 2.
mean(0 < differences & differences < 2)
## [1] 0.9222
Recall that above we derived the distribution of $$D$$ to be $$N(\mu = 1, \sigma^2 = 0.32)$$
If we look at a histogram of the differences, we find that it looks very much like a normal distribution.
hist(differences, breaks = 20,
main = "Empirical Distribution of D",
xlab = "Simulated Values of D",
col = "dodgerblue",
border = "darkorange")
Also the sample mean and variance are very close to what we would expect.
mean(differences)
## [1] 1.001423
var(differences)
## [1] 0.3230183
We could have also accomplished this task with a single line of more “idiomatic” R.
set.seed(42)
diffs = replicate(10000, mean(rnorm(25, 6, 2)) - mean(rnorm(25, 5, 2)))
Use ?replicate to take a look at the documentation for the replicate function and see if you can understand how this line performs the same operations that our for loop above executed.
mean(differences == diffs)
## [1] 1
We see that by setting the same seed for the randomization, we actually obtain identical results!
### 5.3.2 Distribution of a Sample Mean
For another example of simulation, we will simulate observations from a Poisson distribution, and examine the empirical distribution of the sample mean of these observations.
Recall, if
$X \sim Pois(\mu)$
then
$E[X] = \mu$
and
$Var[X] = \mu.$
Also, recall that for a random variable $$X$$ with finite mean $$\mu$$ and finite variance $$\sigma^2$$, the central limit theorem tells us that the mean, $$\bar{X}$$ of a random sample of size $$n$$ is approximately normal for large values of $$n$$. Specifically, as $$n \to \infty$$,
$\bar{X} \overset{d}{\to} N\left(\mu, \frac{\sigma^2}{n}\right).$
The following verifies this result for a Poisson distribution with $$\mu = 10$$ and a sample size of $$n = 50$$.
set.seed(1337)
mu = 10
sample_size = 50
samples = 100000
x_bars = rep(0, samples)
for(i in 1:samples){
x_bars[i] = mean(rpois(sample_size, lambda = mu))
}
x_bar_hist = hist(x_bars, breaks = 50,
main = "Histogram of Sample Means",
xlab = "Sample Means")
Now we will compare sample statistics from the empirical distribution with their known values based on the parent distribution.
c(mean(x_bars), mu)
## [1] 10.00008 10.00000
c(var(x_bars), mu / sample_size)
## [1] 0.1989732 0.2000000
c(sd(x_bars), sqrt(mu) / sqrt(sample_size))
## [1] 0.4460641 0.4472136
And here, we will calculate the proportion of sample means that are within 2 standard deviations of the population mean.
mean(x_bars > mu - 2 * sqrt(mu) / sqrt(sample_size) &
x_bars < mu + 2 * sqrt(mu) / sqrt(sample_size))
## [1] 0.95429
This last histogram uses a bit of a trick to approximately shade the bars that are within two standard deviations of the mean.)
shading = ifelse(x_bar_hist$breaks > mu - 2 * sqrt(mu) / sqrt(sample_size) & x_bar_hist$breaks < mu + 2 * sqrt(mu) / sqrt(sample_size),
"darkorange", "dodgerblue")
x_bar_hist = hist(x_bars, breaks = 50, col = shading,
main = "Histogram of Sample Means, Two Standard Deviations",
xlab = "Sample Means")
|
2022-11-29 10:04:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7840179204940796, "perplexity": 945.7781370086792}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710691.77/warc/CC-MAIN-20221129100233-20221129130233-00851.warc.gz"}
|
https://solvedlib.com/can-you-please-help-me-tweak-this-clinical,261844
|
# Can you please help me tweak this clinical research question? It should follow the PICOT format....
###### Question:
Can you please help me tweak this clinical research question? It should follow the PICOT format. Here it is: In the inpatient population, does the implementation of a PICC team compared to a standard of care decrease the rate of CLABSI?
#### Similar Solved Questions
В зок...
##### Figure2 of 2450XB
Figure 2 of 2 450 X B...
##### Determination of Vma for the enzyme-catalyzed reactionPrepare series of cuvettes according to the chart below: Your - stock solutions are 0.128 M sodium phosphate pH 7.3 and 3 mM ONPG The final concentration will be 0.064 M sodium phosphate buffer and the final volume for each cuvette is 1.5 mLDo not add B-galactosidase until just before you start each assaylVolume (mL) Buffer 3mM ONPG B-galactosidase 0.75CuvettedH;o 0.75Fina 15 1.5Final [ONPG] (mm)0.10.051,50.10150.25150.500.11.51.00Once all of
Determination of Vma for the enzyme-catalyzed reaction Prepare series of cuvettes according to the chart below: Your - stock solutions are 0.128 M sodium phosphate pH 7.3 and 3 mM ONPG The final concentration will be 0.064 M sodium phosphate buffer and the final volume for each cuvette is 1.5 mL Do ...
##### 132 fouoato Find the Question 2 be 1 720 people number the beldected successes X aquopuo L suggested by line: among the the given i one city; 17.78% were ptsQuestion8
132 fouoato Find the Question 2 be 1 720 people number the beldected successes X aquopuo L suggested by line: among the the given i one city; 17.78% were pts Question 8...
##### Elastic collision particles: Tne first Danticle milay of O.lkg and has Tuociti The scond particle has the nttle elky bum€ velocity (in the opposite direction What is the velocity of the particles after the collision eennm that thc incomingparticle moves the right)? Compute the velocity of the centcr of mass, ntOrC and after the collision . Assume that the duration of the collision is 0.01 seconds What will be the hnpuls Fach particle? What will the kinetic encrgy of cach particle before and a
Elastic collision particles: Tne first Danticle milay of O.lkg and has Tuociti The scond particle has the nttle elky bum€ velocity (in the opposite direction What is the velocity of the particles after the collision eennm that thc incoming particle moves the right)? Compute the velocity of th...
##### My NotesAsk Your oacher414.32 points SerPSET9 28 P041 ong time It Is then suddenly closed. Take € 10.0 In the circuit of the figure below, the switch has been open for 175 kfl, and 12.0 uFDetermine tne time constant before the switch clasedDetermine the time constant after the switch closed.Determine the current In the switch 2s function of time (Assume [ Is In and Let the switch be closed at t = Do not enter units your expression. Use the following necessary; t)OM M"
My Notes Ask Your oacher 414.32 points SerPSET9 28 P041 ong time It Is then suddenly closed. Take € 10.0 In the circuit of the figure below, the switch has been open for 175 kfl, and 12.0 uF Determine tne time constant before the switch clased Determine the time constant after the switch close...
##### Problem 16-5 Amy Dyken, controller at Flint Pharmaceutical Industries, a public company, is currently preparing the cal...
Problem 16-5 Amy Dyken, controller at Flint Pharmaceutical Industries, a public company, is currently preparing the calculation for basic and diluted earnings per share and the related disclosure for Flint's financial statements. Below is selected financial information for the fiscal year ended ...
##### Question 2 (10 points) Find the area of the region bounded by the graph of 25 - x2 and x - axis over the interval [1,4]:FormatXSaveQuestion 3 (10 polnts) Find the indefinite integral ofj f(+1) (3x 2)dxParagral
Question 2 (10 points) Find the area of the region bounded by the graph of 25 - x2 and x - axis over the interval [1,4]: Format X Save Question 3 (10 polnts) Find the indefinite integral ofj f(+1) (3x 2)dx Paragral...
##### 1) The gauge on a 10 L tank of compressed argon reads 1900 mmHg. How many...
1) The gauge on a 10 L tank of compressed argon reads 1900 mmHg. How many liters would this same gas occupy at a pressure of 0.8 atm when temperature and amount of gas do not change?...
##### ULLLiULULe4 ~=cm Solve the cquation 3x = 2in the field Zz; in the feld Zz Find allcolur
ULLLiULULe 4 ~=cm Solve the cquation 3x = 2in the field Zz; in the feld Zz Find allcolur...
##### The demand for a new computer game can be modeled by p(x) = 58 -8 In...
The demand for a new computer game can be modeled by p(x) = 58 -8 In x, for 0 5x5 800, where p(x) is the price consumers will pay, in dollars, and x is the number of games sold in thousands. Recall that total revenue is given by R(x)=x.p(x). Complete parts (a) through (c) below. a) Find R(x). R(x) =...
##### In an effort tO cut costs and improve profits_ any US companies according to Purchasing magazine 547 ' have bccn turning to outsourcing_ companies In fact; process the past two thrce years. 5516 Ofuneesyec outsourced some Part %f their manufacturing Suppose of these companies are contacted_ What is thc probability that 338 more companies proccss the past two or three years? Rouna outsourced some part of their manufacturing answer t0 four decimal places;What thc probability that 285 or more
In an effort tO cut costs and improve profits_ any US companies according to Purchasing magazine 547 ' have bccn turning to outsourcing_ companies In fact; process the past two thrce years. 5516 Ofuneesyec outsourced some Part %f their manufacturing Suppose of these companies are contacted_ Wha...
##### Certain cell types normally have several nuclei per cell: How could such multinucleated cells be explained? The cell underwent repealed cylokinesis but no mitosis The cell undenvent repeated mitosis with simultaneous cytokiresis: The cell underwent repeated milosis, but cytokinesis did not occur The cell had multiple S phases before it ertered milosis:QUESTION 5It a cell were unable (0 produce histone proleins; which of the following results would be Iikely effect on the cell? There would be an
Certain cell types normally have several nuclei per cell: How could such multinucleated cells be explained? The cell underwent repealed cylokinesis but no mitosis The cell undenvent repeated mitosis with simultaneous cytokiresis: The cell underwent repeated milosis, but cytokinesis did not occur The...
##### 9. A three phase balanced Y connected source is supplying three, three phase loads that are...
9. A three phase balanced Y connected source is supplying three, three phase loads that are connected in parallel. First load is a y connected and has an impedance of 30+j40 ohms. The second load is absorbing a total of 150kVA at 0.5 power factor lagging. The third load is absorbing 150kVA at 0.5 po...
##### ML.3 S00 50G C 0.01 +5 + V Find the margin pro ft 2 2 .01...
ML.3 S00 50G C 0.01 +5 + V Find the margin pro ft 2 2 .01 q, 6 + 20, 008 Find the margin cost when a 100 |1.4 Differentiate 5 F-2)(3P - 1) F(P)2 t 3t s (t) 2 (t(t 2 y 3 x X-1 X-2...
##### AATea-OsC0GnnaaetHomework: Module 5 Homework (unlimited attempts)Scce-6y AAt CanScone SCn5.2.59cuterd 59J5rl uu (ued4u hohalten Heudentn4, UcdenkoleAee AendLae Leeentdehn ton iaeuudimt uutnel O4iaer Jun4T Mea " beo e eeflendng Shan IL Uealtlau4ue Tqal Ihal dotertain Ieldansho botentn Tanduht-0 Iealon Uieau 00 Ihe varsble ) Lu entnttGud URANhrtrtt4nsacIfalnT3t nee
AATea-OsC0 Gnnaaet Homework: Module 5 Homework (unlimited attempts) Scce-6y AAt Can Scone SC n 5.2.59 cuterd 59J5rl uu (ued4u hohalten Heudentn4, UcdenkoleAee AendLae Leeentdehn ton iaeuudimt uutnel O4iaer Jun4T Mea " beo e eeflendng Shan IL Uealtlau4ue Tqal Ihal dotertain Ieldansho botentn Ta...
##### The one-to-one functlons g and h are defined a5 follows 8-{(-8, 2). (-6, 3), (2, 6), (9, 5)) 6)-6x-13 Find the following:
The one-to-one functlons g and h are defined a5 follows 8-{(-8, 2). (-6, 3), (2, 6), (9, 5)) 6)-6x-13 Find the following:...
##### The key evidence that the Universe is expanding is ... the red shift of light from...
The key evidence that the Universe is expanding is ... the red shift of light from nearby stars. the blue shift of light from distant galaxies. the red shift of light from distant galaxies. the blue shift of light from nearby stars....
##### 2.Predict the product of the Icaction below. (10 points)Cl +AICI;(E)None of the above
2.Predict the product of the Icaction below. (10 points) Cl +AICI; (E) None of the above...
##### Old Tyme Soda produces one flavor of a popular local soft drink. It had no work-in-process...
Old Tyme Soda produces one flavor of a popular local soft drink. It had no work-in-process on October 31 in its only inventory account During November, Old Tyme started 11,900 barrels. Work-in-process on November 30 is 2,150 barrels. The production supervisor estimates that the ending work-in-proces...
##### Starting with the graph of f(r) 2" writel the equation of the graph that results from(a) shifting f(r) 7 units upward: y(6) shifting f(z) 8 units to the left: y(c) reflecting f(r) about the y-axis.
Starting with the graph of f(r) 2" writel the equation of the graph that results from (a) shifting f(r) 7 units upward: y (6) shifting f(z) 8 units to the left: y (c) reflecting f(r) about the y-axis....
##### Fo te blowna ttudy; Idantly populallon, sample. population peramaler; end gample stattlle 0 aurvey 170 hunan roeourcet reprebenietves, 44%} eald thel ta moel conton |pb nieraw mistoke Earanancant belna Intervlaed What @ 'Dopuiation?have Itttlaknotledoa al Ine company whore470012 170 numan rotources ropresantatvot soloctad Jho 170 human retourcaa ropraaanialivat celectad ps Anumon teloureot (ca totaniaiiytu 47 nman racourcul roorotoninuvet
Fo te blowna ttudy; Idantly populallon, sample. population peramaler; end gample stattlle 0 aurvey 170 hunan roeourcet reprebenietves, 44%} eald thel ta moel conton |pb nieraw mistoke Earanancant belna Intervlaed What @ 'Dopuiation? have Itttla knotledoa al Ine company whore 470012 170 numan ro...
##### A student is doing_ the Plane Mirror Lab in physics class. She places a pin a distance of 4.9 cm from a plane mirTor: How far in (cm) behind the mirTor can the image be expected to appear?
A student is doing_ the Plane Mirror Lab in physics class. She places a pin a distance of 4.9 cm from a plane mirTor: How far in (cm) behind the mirTor can the image be expected to appear?...
##### 1. In your own words, describe what sets electromechanical instruments apart from other electrophones. 2. In...
1. In your own words, describe what sets electromechanical instruments apart from other electrophones. 2. In your own words, describe the ways in which electroacoustic instruments gained their own identity apart from their acoustic ancestors. 3. Almost immediately, prerecorded music (as opposed to l...
##### Calculate the pressure of a 0.58 M aqueous ammonia (NH3) solution at 25 degrees C. (K=...
Calculate the pressure of a 0.58 M aqueous ammonia (NH3) solution at 25 degrees C. (K= 0.58 mol/L.atm for NH3)....
##### A concentration cell consists of a cathode with H2(g) at 1 atm bubbling over a Pt electrode ina 1.00 M HCI solution; and ananode with H2lg) at 1atm bubbling over Pt electrode in a solution of pH 4.48_ What is the measured voltage of this cell? Provide answer in volts to 2 decimal places_
A concentration cell consists of a cathode with H2(g) at 1 atm bubbling over a Pt electrode ina 1.00 M HCI solution; and ananode with H2lg) at 1atm bubbling over Pt electrode in a solution of pH 4.48_ What is the measured voltage of this cell? Provide answer in volts to 2 decimal places_...
##### You have already computed the volume of your can. Use this number and the equation V = πr^2h to solve for h in terms of r? The volume was 279.92
You have already computed the volume of your can. Use this number and the equation V = πr^2h to solve for h in terms of r? The volume was 279.92...
|
2022-07-07 01:21:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48725655674934387, "perplexity": 9985.899417776269}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104683020.92/warc/CC-MAIN-20220707002618-20220707032618-00523.warc.gz"}
|
http://models.street-artists.org/2016/05/02/what-does-seasonal-mean/
|
So in followup to the ideas about "seasonal" variations, there's a fundamental philosophical question here. What is a seasonal variation?
Clearly, length of day in Bangkok is determined by the geometry of the earth and its spin about its axis and tilt relative to the orbit around the sun. Purely by being at a particular point on the earth and at a particular point in the orbit there will be a particular period of the day when light falls on Bangkok.
But in general, there are relatively regular but less strictly regular variations in things, such as for example the amount of cheese produced each month:
It rises and falls throughout the year but it's not periodic, more like quasi-periodic with a clear trend and a clear change in the pattern that occurs slowly in time, over a timescale of 5 to 10 years.
In general, you can break down every function into a part that changes "slowly" and a part that oscillates "quickly". There's a theorem that is related to this idea by Cartier and Perrin published in "Nonstandard Analysis In Practice" (Diener and Diener 1995). For the gist, consider the following idea.
Let $g(x)$ be a standard, bounded periodic function of $x$ with period $p$ such that $\int_0^p g(x) = 0$. Suppose that it's bounded by $|g(x)| < B$ for all x. It is, basically a Fourier type series of $\sin$ and $\cos$ terms with no constant term.
Now consider the nonstandard function $\hat g(x) = g(Npx)$ for $N$ a nonstandard integer.
Consider the integral over any standard length $L$
Where $s = Npx$ and $K= \lfloor (NpL/p)\rfloor$
On the right, inside the parens, the first term is zero due to the construction of $g(x)$ as a zero mean oscillating periodic function, and the second term is bounded by $\epsilon B$. What is the conversion factor $dx/ds$ ? A small distance $ds$ in the domain of the $g$ function represents a distance $ds/Np$ in the $x$ domain of $\hat g(x)$, so the conversion factor is $1/Np$ suggesting that the little bit at the end of the integral is bounded by $B\epsilon/Np$ which is infinitesimal.
So this nonstandard function is a function which when integrated over any appreciable domain takes on an infinitesimal value. In other words, although it is pointwise very different from zero by as much as $B$ which could be any appreciable number like 100 for example, over any observable distance its average is "essentially zero".
This is the essence of homogenization theory, and we don't need the function to be periodic necessarily, it just simplifies the construction here.
The point is, it's often natural to talk about two different timescales, a "slow" process $t1$ and a "fast" process $t2$ such that a model for what goes on at the scale of $dt2$ will average out to zero over times that are appreciable in the scale of $t1$. In other words, "daily" fluctuations in temperature (scale $t2$) are irrelevant to modeling "annual" changes in seasonal temperature averages (scale $t1$). When these scales are sufficiently far apart we can consider one scale the "seasonal" scale and another the "deviations".
The problem comes when "deviations" last for appreciable times on the "seasonal" scale, and now it's hard to separate the timescales meaningfully.
|
2018-10-23 21:35:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 29, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 30, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9108617901802063, "perplexity": 359.5538456904661}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583517376.96/warc/CC-MAIN-20181023195531-20181023221031-00177.warc.gz"}
|
https://gamedev.stackexchange.com/questions/159379/getting-the-winding-order-of-a-mesh
|
# Getting the winding order of a mesh
I'm doing some topological operations on my mesh and I need to know the winding order for my mesh.
Since I will be importing models from different sources I will need to know whether the winding order is clockwise and anti-clockwise so I can get normals and things like that.
Question : Is there a way that I can determine the winding order of a list of vertices.
Edit 1: I am using a 3D mesh
• Safe to assume this is in 3D? In 2D, we can assume that the side facing out of the screen is the front, and compute the winding order that way. In 3D, we need some additional information to know which side of the polygon is supposed to be the front (since a polygon that's wound clockwise when you look at it in one direction is wound counter-clockwise if you look at it from the opposite side). What's your source of ground truth for determining which side is "out"? – DMGregory Jun 2 '18 at 17:43
• You don't need to know the winding order to get the normal. A simple cross product of the vertices already takes that into account. – Bálint Jun 3 '18 at 0:09
• @Bálint true, but there are two normals to a polygon in 3D: one pointing "out" and one pointing "in" — we usually use knowledge of the winding order to compute the outward-pointing normal. If the mesh might be wound either way then we'll need some other piece of information to tell us which side faces out. – DMGregory Jun 3 '18 at 11:59
• @DMGregory is there anything else i need to add to ensure I get an answer – user116458 Jun 3 '18 at 14:02
• I asked above what information you have to distinguish the front/outside face of a polygon from the back/inside face, if you don't know its winding order in advance. Meshes can, in the most general/worst case, be a disorganized polygon soup without an interior volume, so we'll need some clues to work from to figure out which side is supposed to be the front. If you have any information at all about what kinds of meshes and shapes you're working with, that helps narrow the uncertainty. – DMGregory Jun 3 '18 at 14:05
There is no way to automatically infer the winding order of a 3D mesh that will work for every possible input.
For instance, if I give you the triangle (0, 0, 0), (1, 0, 0), (0, 1, 0) alone with no other context, you don't know whether it's meant to be wound counter-clockwise (so its front faces out along the z- axis) or clockwise (so its front faces out along the z+ axis). (Assuming a left-handed coordinate system, but the coordinate system could just as well be right-handed, flipping the whole thing)
Both of these would be valid interpretations, and you'd need to ask the creator or use some knowledge about the mesh's format or what it's meant to represent to sort out which version is intended.
But there is a common special case where we can infer the correct facing with higher confidence: if your meshes represent solid objects and you have watertight manifold geometry. (This means that the mesh makes a continuous surface with no holes, gaps, loose edges, or single-sided fins)
If you have such a mesh, you can guess at its winding, then check whether that guess makes sense by seeing if it puts the front face of each polygon facing "out" of the solid rather than inside. (But note: there are rare cases where we make meshes that have been deliberately turned inside-out for various effects, so even this test isn't foolproof!)
It proceeds like this:
1. Pick a triangle/polygon arbitrarily
2. Construct that polygon's normal according to your guessed winding
3. Cast a ray through a point in that polygon, in the direction opposite its normal. Test this ray against every other polygon in the mesh (this can get expensive)
• If you get an odd number of hits (hitting the back of another polygon, then possibly the front of another then the back of another, or any number of front-back pairs), then your winding guess looks correct. This is the pattern we'd expect to see when firing into a solid shape.
• If you get an even number of hits, then your winding guess was probably incorrect, and you should use the opposite winding instead.
You can run this test for several different sample polygons in your mesh to build up consensus, and try to weed-out outliers due to non-manifold geometry like single-sided leaf fins.
As you can see though, it's a lot more work, and has fewer guarantees, than just inspecting your geometry source format to determine what winding is standard for that type, or asking whoever made the mesh.
(I originally skipped this case because you said you wanted to generate normals, and didn't mention that you had them available as input. But for completeness...)
If your mesh has normal vectors, then you can use these as the ground truth for the intended front facing direction (provided the creator of the model isn't doing anything too weird with manually-adjusted normals...)
Say you have a triangle with points (in order) a, b, c
Compute the expected normal of your triangle by taking the cross product:
expectedNormal = Vector3.Cross(b - a, c - b);
Now compare that against the normals that came with your mesh (if your mesh has normals defined per vertex, you can average the three vertex normals for a triangle to get a triangle normal. Or you can average all the expectedNormal values for triangles bordering a given vertex to get a vertex normal instead)
agreement = Vector3.Dot(expectedNormal, sourceMeshNormal);
If agreement > 0 then your mesh is wound counter-clockwise when looking at it against its normal in a right-handed coordinate system, or clockwise if you're in a left-handed coordinate system. If agreement < 0 then your mesh is wound clockwise in a right-handed coordinate system, or counter-clockwise if you're in a left-handed coordinate system. If agreement is zero then the test is inconclusive for this triangle (someone's cranked the input normals to be near-zero or near-parallel to the surface for some reason), and you can try again with a different part of the mesh.
• – user116458 Jun 4 '18 at 11:13
|
2019-06-19 05:30:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40296316146850586, "perplexity": 642.2611659476632}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998913.66/warc/CC-MAIN-20190619043625-20190619065625-00327.warc.gz"}
|
https://www.scienceforums.net/topic/70342-why-e-pc-and-not-e-12-pc/?tab=comments#comment-1034563
|
# Why E = pc and not E = 1/2 pc
## Recommended Posts
I have always wondered why the energy of a photon in vacuum is equal to E = pc (where p is the momentum of the photon, and c is the speed of light in vacuum) and not E = 1/2 pv (where for a photon v = c) as is the case for the kinetic energy of any moving mass. Of course, I understand that photons are massless, but can anyone clearly explain how E = mc^2 and not E = 1/2 mc^2 and prove that in a theoretical non-empirical way??
##### Share on other sites
E2=(mc2)2+(pc)2
Photons are massless. Bam, E=pc
edit: typo
Edited by ydoaPs
##### Share on other sites
E2=(mc2)+(pc)2
Photons are massless. Bam, E=pc
I already knew that part. Now, as a start it would be helpful if you told me where Einstein derived that equation from and what it logically means, and no I will not accept the standard pythagorean theorem explanation, because if you are gonna go that route, you'll have to tell me why E, mc2, and pc form a right-angle triangle.
##### Share on other sites
Before we move on let's correct the typo to avoid confusion
$E^2 = (mc^2)^2 + (pc)^2$
1. $E^2=m^2c^4$
2. $p = mv$
$m^2c^4 = \frac{m_0^2c^4}{1-\frac{v^2}{c^2}}$
$m^2c^4\left(1-\frac{v^2}{c^2}\right)= m_0^2c^4$
$m^2c^4 - \frac{m^2v^2c^4}{c^2} = m_0^2c^4$
$m^2c^4 - (mv)^2c^2 = m_0^2c^4$
sub in the equations 1 and 2 from above
$E^2 - p^2c^2 = m_0^2c^4$
$E^2= m_0^2c^4 + p^2c^2$
##### Share on other sites
I have always wondered why the energy of a photon in vacuum is equal to E = pc (where p is the momentum of the photon, and c is the speed of light in vacuum) and not E = 1/2 pv (where for a photon v = c) as is the case for the kinetic energy of any moving mass. Of course, I understand that photons are massless, but can anyone clearly explain how E = mc^2 and not E = 1/2 mc^2 and prove that in a theoretical non-empirical way??
Why should the factor of 1/2 appear in either equation? In the classical equation it appears from the integration of (mv dv), from the definition of work.
##### Share on other sites
as a start it would be helpful if you told me where Einstein derived that equation from and what it logically means
Herman Minkowski was Einstein's tutor.
##### Share on other sites
Why should the factor of 1/2 appear in either equation? In the classical equation it appears from the integration of (mv dv), from the definition of work.
Ok, thank you, that is a good reminder of a fact which I forgot I knew. It also allows me to properly rephrase my question. What I mean is, does the definition of momentum change when going from classical to relativistic physics?
I mean does the equation E = 1/2 pv, change as v approaches the speed of light (relativistically relevant speeds..)?
Does the kinetic energy of a moving object slowly shift from E = 1/2 pv towards E = pv as v increases?
##### Share on other sites
Ok, thank you, that is a good reminder of a fact which I forgot I knew. It also allows me to properly rephrase my question. What I mean is, does the definition of momentum change when going from classical to relativistic physics?
I mean does the equation E = 1/2 pv, change as v approaches the speed of light (relativistically relevant speeds..)?
Does the kinetic energy of a moving object slowly shift from E = 1/2 pv towards E = pv as v increases?
The classical momentum and kinetic energy equations are the first-order approximations of the relativistic equations. If you expand them in orders of v/c, you ignore the terms where the powers of (v/c)<<1
So the definitions never change, but the approximation that gives you the simple form of the equation is no longer valid.
##### Share on other sites
The classical momentum and kinetic energy equations are the first-order approximations of the relativistic equations. If you expand them in orders of v/c, you ignore the terms where the powers of (v/c)<<1
So the definitions never change, but the approximation that gives you the simple form of the equation is no longer valid.
Perfect. Thank you. The answer I was looking for. So, basically, the approximations become more and more inaccurate as the value of v/c increases.
So, swansont is the real deal after all...
##### Share on other sites
Perfect. Thank you. The answer I was looking for. So, basically, the approximations become more and more inaccurate as the value of v/c increases.
Generally speaking, around v/c > 0.1 is where the classical approximations start to noticeably fail. The expansions I can think of are in even powers of v/c, so that's a 1% value for the (v/c)^2 term.
##### Share on other sites
I have always wondered why the energy of a photon in vacuum is equal to E = pc (where p is the momentum of the photon, and c is the speed of light in vacuum) and not E = 1/2 pv (where for a photon v = c) as is the case for the kinetic energy of any moving mass. Of course, I understand that photons are massless, but can anyone clearly explain how E = mc^2 and not E = 1/2 mc^2 and prove that in a theoretical non-empirical way??
Starting from
$E = \sqrt{m^2c^4 + p^2c^2 }$
for a massless particle such as the photon set $m = 0$ and you obtain
$E = |p|c$
for a non-relativistic particle expand the square root in a power series and ignore the higher order terms because $v \ll c$
$E = \sqrt{m^2c^4 + p^2c^2 } = mc^2 + \frac{p^2}{2m} + \cdots$
you obtain the 1/2 factor characteristic of the non-relativistic theory.
##### Share on other sites
The formulas for relativistic momentum and kinetic energy are different from the approximations used in Newtonian mechanics. The difference becomes more and more drastic as v approaches c. Here's a simple way to derive E=pc starting with only the formula for relativistic momentum and a few assumptions (i.e. the Work-Energy Theorem holds valid in relativity, our final equation holds for massless particles).
The formula for relativistic momentum can be found by analyzing collisions with the Lorentz transformation, and what you end up with is $p=\gamma mv$ where $\gamma=(1-v^2/c^2)^{-1/2}$.
From there you can find the formula for relativistic kinetic energy by using the Work-Energy Theorem, calculating the work done in bringing a mass at rest to a velocity v:
$E_k=\int Fdx=\int \frac{dpdx}{dt}=\int vdp=pv-\int pdv=\gamma mv^2-m\int \frac{vdv}{\sqrt{1-v^2/c^2}}$
Evaluating that integral gives: $E_k=\gamma mc^2+\varphi$, where $\varphi$ is some constant of integration. It's easy to solve for $\varphi$; all you have to do is take the fact that when v=0 (i.e. the object is at rest) its kinetic energy is taken to be zero, and $\gamma=1$. So what you get is:
$0=mc^2+\varphi~~~ \Rightarrow ~~~\varphi =-mc^2~~~\Rightarrow~~~ E_k=\gamma mc^2-mc^2$.
The term "mc2" looks like some intrinsic energy associated to a mass (and is appropriately called "rest energy"). Adding this term to both sides of the kinetic energy formula gives the total energy of a body: rest energy + kinetic energy: $E=E_k+mc^2=\gamma mc^2$
Now that we have the formulas for the total relativistic energy and the relativistic momentum of a moving body, we can be a bit tricky to find some relationships between them. We can start by solving for $\gamma$ in both equations:
$\gamma =\frac{E}{mc^2}=\frac{p}{mv}~~~\Rightarrow ~~~ Ev=pc^2$
So there's a neat little formula relating energy and momentum. We can also use this result to get to the equation you're looking for. Start by squaring the energy equation and rearranging:
$E^2=\frac{(mc^2)^2}{1-v^2/c^2}~~~\Rightarrow ~~~E^2-\frac{(Ev)^2}{c^2}=(mc^2)^2$
Now substitute pc2 in for Ev:
$E^2-\frac{(pc^2)^2}{c^2}=(mc^2)^2~~~\Rightarrow ~~~E^2-(pc)^2=(mc^2)^2$
As you're aware, this equation holds for all particles, including massless ones. Setting m=0 gives E=pc.
##### Share on other sites
The formulas for relativistic momentum and kinetic energy are different from the approximations used in Newtonian mechanics. The difference becomes more and more drastic as v approaches c. Here's a simple way to derive E=pc starting with only the formula for relativistic momentum and a few assumptions (i.e. the Work-Energy Theorem holds valid in relativity, our final equation holds for massless particles).
The formula for relativistic momentum can be found by analyzing collisions with the Lorentz transformation, and what you end up with is $p=\gamma mv$ where $\gamma=(1-v^2/c^2)^{-1/2}$.
From there you can find the formula for relativistic kinetic energy by using the Work-Energy Theorem, calculating the work done in bringing a mass at rest to a velocity v:
$E_k=\int Fdx=\int \frac{dpdx}{dt}=\int vdp=pv-\int pdv=\gamma mv^2-m\int \frac{vdv}{\sqrt{1-v^2/c^2}}$
Evaluating that integral gives: $E_k=\gamma mc^2+\varphi$, where $\varphi$ is some constant of integration. It's easy to solve for $\varphi$; all you have to do is take the fact that when v=0 (i.e. the object is at rest) its kinetic energy is taken to be zero, and $\gamma=1$. So what you get is:
$0=mc^2+\varphi~~~ \Rightarrow ~~~\varphi =-mc^2~~~\Rightarrow~~~ E_k=\gamma mc^2-mc^2$.
The term "mc2" looks like some intrinsic energy associated to a mass (and is appropriately called "rest energy"). Adding this term to both sides of the kinetic energy formula gives the total energy of a body: rest energy + kinetic energy: $E=E_k+mc^2=\gamma mc^2$
Now that we have the formulas for the total relativistic energy and the relativistic momentum of a moving body, we can be a bit tricky to find some relationships between them. We can start by solving for $\gamma$ in both equations:
$\gamma =\frac{E}{mc^2}=\frac{p}{mv}~~~\Rightarrow ~~~ Ev=pc^2$
So there's a neat little formula relating energy and momentum. We can also use this result to get to the equation you're looking for. Start by squaring the energy equation and rearranging:
$E^2=\frac{(mc^2)^2}{1-v^2/c^2}~~~\Rightarrow ~~~E^2-\frac{(Ev)^2}{c^2}=(mc^2)^2$
Now substitute pc2 in for Ev:
$E^2-\frac{(pc^2)^2}{c^2}=(mc^2)^2~~~\Rightarrow ~~~E^2-(pc)^2=(mc^2)^2$
As you're aware, this equation holds for all particles, including massless ones. Setting m=0 gives E=pc.
Got it. Thank you elfmotat for the clear derivation and everyone else for your contributions. So, from the above derivation, E = mc2 is the energy resulting from unleashing the rest energy of a certain mass.
However, that brings a question to mind, since c is the speed of light in vacuum, do we have to divide the rest energy by the index of refraction of the medium in which the conversion happens to obtain the actual resulting energy?
##### Share on other sites
However, that brings a question to mind, since c is the speed of light in vacuum, do we have to divide the rest energy by the index of refraction of the medium in which the conversion happens to obtain the actual resulting energy?
No
##### Share on other sites
• 3 years later...
I came up with the equation, for the test in 2 days, and to verify it I searched it up and found this.
I came up using this...
$E = pc$
We know 2 equations that are true, and they are:
$E=\frac{hc}{\lambda}$
$... \lambda=\frac{h}{p}$
Substitute and you get your equation really...
$E=\frac{hc}{\frac{h}{p}}$
Which is equivalent to
$E=\frac{hpc}{h}$
The 2 'h' cancel out to leave you with
$E=pc$
Edited by apixy
##### Share on other sites
I will not accept the standard pythagorean theorem explanation, because if you are gonna go that route, you'll have to tell me why E, mc2, and pc form a right-angle triangle.
Perhaps it would help to start at the most basic entity for a system in motion, being its Lagrangian. The action for a relativistically moving particle is of the form
$\displaystyle{S=\int \left ( -mc^2\sqrt{1-\frac{v^2}{c^2}} \right )ds}$
From this, you can then derive a quantity called the 4-momentum by taking
$\displaystyle{p_{\mu}=-\frac{\partial S}{\partial x^{\mu}}=\left ( \frac{E}{c},-\mathbf{p} \right )}$
The usual energy-momentum relation is then quite simply the norm of this 4-vector :
$\displaystyle{p^{\mu}p_{\mu}=\left | \mathbf{p} \right |^2-\frac{E^2}{c^2}=-m^2c^2}$
So, the basic idea is that energy-momentum is a 4-vector, and the norm of that 4-vector is defined to be the proper mass of the particle. Because we are in flat Minkowski spacetime, calculating the norm of a 4-vector is equivalent to applying the Pythagorean theorem to the three elements of the equation ( proper mass, energy = time component of vector, and momentum = magnitude of spatial component of vector ).
The first step ( Lagrangian to 4-momentum ) can also be made mathematically precise via the calculus of variations.
Edited by Markus Hanke
##### Share on other sites
• 3 weeks later...
For a photon, E = pc, where p is momentum, and momentum is mass x velocity. But since a photon has no mass, why doesn't this equation reduce down to E = 0. Does anyone know what radiation momentum is for a photon?
##### Share on other sites
For a photon, E = pc, where p is momentum, and momentum is mass x velocity. But since a photon has no mass, why doesn't this equation reduce down to E = 0. Does anyone know what radiation momentum is for a photon?
It's E/c. It can be deduced from E&M equations
mv only applies to massive particles; it's a nonrelativistic formula
##### Share on other sites
• 1 year later...
The orginal question confuses the rest energy (mc^2) of a particle with mass, with kinetic energy (1/2 mv^2). A particle with mass m at rest, and no kinetic energy, still has a rest energy derived from its mass.
The wikipedia article https://en.wikipedia.org/wiki/Energy–momentum_relation is a very general discussion of the topic, covering both massive and massless particles.
## Create an account
Register a new account
|
2022-05-25 19:50:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8191210031509399, "perplexity": 425.76530051288614}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662593428.63/warc/CC-MAIN-20220525182604-20220525212604-00557.warc.gz"}
|
https://www.maplesoft.com/support/help/maple/view.aspx?path=FunctionAdvisor/calling_sequence&L=E
|
calling_sequence - Maple Help
return the calling sequence of a given mathematical function
Parameters
calling_sequence - name where calling_sequence is one of the following literal names 'calling_sequence', 'form', or 'syntax' math_function - Maple name of mathematical function all - (optional) literal name; 'all'; request all calling sequences of math_function when it accepts more than one
Description
• The FunctionAdvisor(calling_sequence, math_function) returns the calling sequence of the function.
• If the math_function accepts more than one calling sequence, for example, Ei, by default, the FunctionAdvisor(calling_sequence, math_function) command returns the calling sequence with the most arguments. To obtain all the calling sequences for the math_function, specify the optional argument 'all'.
Examples
> $\mathrm{FunctionAdvisor}\left(\mathrm{calling_sequence},\mathrm{sin}\right)$
${\mathrm{sin}}{}\left({z}\right)$ (1)
> $\mathrm{FunctionAdvisor}\left(\mathrm{syntax},\mathrm{pochhammer}\right)$
${\mathrm{pochhammer}}{}\left({z}{,}{n}\right)$ (2)
> $\mathrm{FunctionAdvisor}\left(\mathrm{form},\mathrm{WeierstrassP}\right)$
${\mathrm{WeierstrassP}}{}\left({z}{,}\mathrm{g__2}{,}\mathrm{g__3}\right)$ (3)
The variables used by the FunctionAdvisor command to create the calling sequence are local variables. To make the FunctionAdvisor command return results using global variables, pass them as an extra argument in the form of a list.
> $\mathrm{FunctionAdvisor}\left(\mathrm{calling_sequence},\mathrm{LegendreP}\right)$
${\mathrm{LegendreP}}{}\left({a}{,}{b}{,}{z}\right)$ (4)
> $\mathrm{has}\left(,\left[a,b,z\right]\right)$
${\mathrm{false}}$ (5)
> $\mathrm{FunctionAdvisor}\left(\mathrm{calling_sequence},\mathrm{LegendreP},\left[A,z\right]\right)$
${\mathrm{LegendreP}}{}\left({A}{,}{b}{,}{z}\right)$ (6)
> $\mathrm{has}\left(,A\right),\mathrm{has}\left(,b\right),\mathrm{has}\left(,z\right)$
${\mathrm{true}}{,}{\mathrm{false}}{,}{\mathrm{true}}$ (7)
The following illustrate the case where the mathematical function accepts more than one calling sequence.
> $\mathrm{FunctionAdvisor}\left(\mathrm{calling_sequence},\mathrm{arctan}\right)$
${\mathrm{arctan}}{}\left({y}{,}{x}\right)$ (8)
> $\mathrm{FunctionAdvisor}\left(\mathrm{calling_sequence},\mathrm{arctan},\mathrm{all}\right)$
${\mathrm{arctan}}{}\left({z}\right){,}{\mathrm{arctan}}{}\left({y}{,}{x}\right)$ (9)
> $\mathrm{FunctionAdvisor}\left(\mathrm{calling_sequence},\mathrm{LegendreP},\mathrm{all}\right)$
${\mathrm{LegendreP}}{}\left({a}{,}{z}\right){,}{\mathrm{LegendreP}}{}\left({a}{,}{b}{,}{z}\right)$ (10)
> $\mathrm{FunctionAdvisor}\left(\mathrm{calling_sequence},\mathrm{\zeta },\mathrm{all}\right)$
${\mathrm{\zeta }}{}\left({s}\right){,}{\mathrm{\zeta }}{}\left({n}{,}{s}\right){,}{\mathrm{\zeta }}{}\left({n}{,}{s}{,}{a}\right)$ (11)
|
2022-11-28 15:19:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 22, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9843571186065674, "perplexity": 3034.507220748426}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710533.96/warc/CC-MAIN-20221128135348-20221128165348-00825.warc.gz"}
|
http://forums.netphoria.org/showthread.php?s=49a2939b38d40d43a8f2aa7177d9889e&p=4061869&mode=threaded
|
Netphoria Message Board I don't like these holograms
05-20-2014, 03:00 AM #20 Cool As Ice Cream Just Hook it to My Veins! Location: František! How's the foot of your turtle? Posts: 31,939 These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well. These beta-test seem to be working quite well.
Posting Rules You may not post new threads You may not post replies You may not post attachments You may not edit your posts vB code is On Smilies are On [IMG] code is Off HTML code is On
forums.netphoria.org The Web
Forum Jump
All times are GMT -4. The time now is 11:44 AM.
|
2021-03-08 15:44:10
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9485506415367126, "perplexity": 1841.5390633100674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178385378.96/warc/CC-MAIN-20210308143535-20210308173535-00433.warc.gz"}
|
https://valutagjpr.web.app/54792/71360.html
|
# Spel om pengar och spelproblem i Sverige 2008/09
LIGHT HARVESTING COMPLEXES IN HIGHER PLANTS
Beta-spectrum of 137 Cs, adapted from this reference. The beta spectrum has two distinct features: a broad peak corresponding to the main decay mode of 137 Cs → 137m Ba with peak energy of 512 keV, which matches a trough on the spectrum centered around channel 3000, and This study explores the photon energy spectrum for the cal lab’s Cesium-137 irradiator using a combination of Monte Carlo and spectroscopic techniques. The effect of various attenuators on the photon energy spectrum will also be investigated. 137 Cs decay scheme showing half-lives, daughter nuclides, and types and proportion of radiation emitted. 137 Cs gamma spectrum. The characteristic 662 keV peak does not originate directly from 137 Cs, but from the decay of 137m Ba to its stable state. Caesium-137 has a half-life of about 30.17 years.
The 32 KeV x-ray peak and the 662 keV gamma peak are very obvious, and as there is a good spread between the peaks, this source makes for a good calibration source. 2020-10-24 · 137 Cs spectrum captured using Saint Gobain enhanced lanthanum bromide detector. We know that the 661.7 keV peak is actually emitted by barium-137 when the excited nucleus of barium relaxes to its ground state (so this is a nuclear line ); and the 32 keV peak is also emitted by barium-137 when the atom looses one of its electrons due to the internal conversion process (so it is an atomic or Caesium-137 (137 55 Cs), or radiocaesium, is a radioactive isotope of caesium that is formed as one of the more common fission products by the nuclear fission of uranium-235 and other fissionable isotopes in nuclear reactors and nuclear weapons. Spectroscopic Characterization of the Cs-137 Energy Spectrum: Graduate Investigator: Sameer Taneja Summary of Research: Cesium is often used in the calibration of radiation surveying tools and thermoluminescent dosimeters (TLDs). Spectrum Techniques Additional Products By Radioactive Check Sources www.spectrumtechniques.com Regulatory Information The Cs-137/Ba-137m isotope generator contains an exempt quantity of radioactive material — Cs-137, up to 10 µCi (0.37 MBq) and requires no special handling or storage instructions. For more information about exempt quantities of Cs-137 is a long-lived parent nuclide which has a half-life of 30.07 years and decays by the emission of beta radiation into the stable isotope Ba-137.
Cs-137/Ba-137m Isotope Generator Kit The isotope generator kit is designed specifically for demonstrating the physical properties of radioactive decay. VIEW PRODUCT The Cs 137.
## NUREG/CP-0027, Vol.3, Rev. 1, "Proceedings of the
Picture of stainless-steel plate sources. 1 NIST does not endorse the use of non-SI units. This paper uses non-SI units because it addresses the requirements listed in the ANSI and Institute of Electrical and … QUANTITATIVE Cs-137 DISTRIBUTIONS FROM XA9745947 AIRBORNE GAMMA RAY DATA G. OBERLERCHER, W. SEIBERL Geological Survey of Austria, Vienna, Austria continuum by a least square fit of the entire spectrum or selected areas of the spectrum.
### Methods for localizing and quantifying radionuclide sources
The gamma rays interact with the scintillator producing all three primary interaction processes so that the very phenomenon that is being studied in a sample is also taking place in the detector itself along with several other effects that mask the Cs-137 is present at levels at least 1-2 orders of magnitude above levels expected from older atmospheric weapons tests and the Chernobyl accident in every one of these samples. Total activity is roughly evenly divided between Cs-137 and the shorter-lived Cs-134 at this time; the Cs-134 will decay to irrelevance in the span of 5-10 years. The spectrum in Figure 1 was measured using a NaI-crystal on a photomultiplier, an amplifier, and a multichannel analyzer. The figure shows the number of counts within the measuring period versus channel number. The spectrum indicates the following peaks (from left to right): low energy x radiation (due to internal conversion of the gamma ray), Cs-137 Spectrum; Planar Brachytherapy Dosimetry; Plan Class Reference Fields for Non-standard Linac Treatments; Point of Measurement in Electron Beams; Small Animal Irradiator Dose Verification; Gel-based Real-time Liver Motion Phantom; Dynamic Collimation in Proton Therapy 2019-01-14 · The spectrum of gamma rays emitted by a given isotope have distinct, characteristic energy peaks that permit identification of the isotope.
Gamma-ray spectrum of 137Cs Figure 4. Gamma-ray spectrum of 60Co (b) Interactions of gamma ray and matter: The gamma spectrum from radioactive sources are due to the gamma ray emitted and the interaction between the gamma photon and the matter. 137 137 55 CsBa E (5.4%) (94.6%) (0.6617 V) 2 Cs-137 Spectrum; Planar Brachytherapy Dosimetry; Plan Class Reference Fields for Non-standard Linac Treatments; Point of Measurement in Electron Beams; Small Animal Irradiator Dose Verification; Gel-based Real-time Liver Motion Phantom; Dynamic Collimation in Proton Therapy 2019-01-14 · The spectrum of gamma rays emitted by a given isotope have distinct, characteristic energy peaks that permit identification of the isotope. This is Cs‐137 spectrum taken with a NaI (TI) detector. Isomeric transition Cs-137 Spectrum; Planar Brachytherapy Dosimetry; Plan Class Reference Fields for Non-standard Linac Treatments; Point of Measurement in Electron Beams; Small Animal Irradiator Dose Verification; Gel-based Real-time Liver Motion Phantom; Dynamic Collimation in Proton Therapy The spectrum in Figure 1 was measured using a NaI-crystal on a photomultiplier, an amplifier, and a multichannel analyzer.
Bma kingston
Cs»" p-ray spectrum. emitted by means of beta-ray spectrographs to check these results. Beta Spectrum. Goal: to investigate the spectrum of β rays emitted by a. 137.
fotografera. CS-137/BA-137m The arrow shows the peak-to peak estimation for curve D. The signals at 331 Cs-137 layer depth and activity determination with inversion av S Jacobsson Svärd · 1996 — mätgeometri i detta arbete har endast strålning från Cs-137 behandlats. Collimator length bakgrundsbidrag. Simulated spectrum based on Origen and JefPC.
Museum södermalm stockholm
endokrinologi avdelning
lagerbolag estland
skf lagerhus
transport deptt
bengt wendel
medicinteknisk ingenjör lön
biomedicin jobb
### Ljung, Karin - Root Development and Shoot-Root
The characteristic 662 keV peak does not originate directly from 137Cs, but from the decay of 137mBa to its stable state. Caesium- 137 Radioactive Sources o RSS8 Gamma Source Set. Includes ~1 µCi each of: 60Co, 137Cs, 22Na, 54Mn, 133Ba, 109Cd, 57Co, and a mixed Cs/Zn source (~0.5.
Gymnasiearbete teknikvetenskap
dammsugare lux intelligence
### On the Military Utility of Spectral Design in Signature - Doria
The data was then saved to be used to analyse the result. The Cs 137. spectrum was obtained from a Germanium detector by repeating steps 10 to 12. Results and Discussion Start with a table of the isotopes.Look up Cs-137--the only decay channel is by $\beta^-$ (that is the conversion of a neutron into a proton with the release of an electron and a electron anti-neutrino) which makes the daughter Ba-137 in an excited state called Ba-137m. An example of a NaI spectrum is the gamma spectrum of the caesium isotope 137 Cs —see Figure 1.
|
2022-07-02 13:33:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48817503452301025, "perplexity": 5123.443916262203}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104141372.60/warc/CC-MAIN-20220702131941-20220702161941-00708.warc.gz"}
|
https://plymsea.ac.uk/id/eprint/9552/1/9/1/coab017/6257586
|
## Abstract
Many sharks and other marine taxa use natal areas to maximize survival of young, meaning such areas are often attributed conservation value. The use of natal areas is often linked to predator avoidance or food resources. However, energetic constraints that may influence dispersal of young and their use of natal areas are poorly understood. We combined swim-tunnel respirometry, calorimetry, lipid class analysis and a bioenergetics model to investigate how energy demands influence dispersal of young in a globally distributed shark. The school shark (a.k.a. soupfin, tope), Galeorhinus galeus, is Critically Endangered due to overfishing and is one of many sharks that use protected natal areas in Australia. Energy storage in neonate pups was limited by small livers, low overall lipid content and low levels of energy storage lipids (e.g. triacylglycerols) relative to adults, with energy stores sufficient to sustain routine demands for 1.3–4 days (mean ± SD: 2.4 ± 0.8 days). High levels of growth-associated structural lipids (e.g. phospholipids) and high energetic cost of growth suggested large investment in growth during residency in natal areas. Rapid growth (~40% in length) between birth in summer and dispersal in late autumn–winter likely increased survival by reducing predation and improving foraging ability. Delaying dispersal may allow prioritization of growth and may also provide energy savings through improved swimming efficiency and cooler ambient temperatures (daily ration was predicted to fall by around a third in winter). Neonate school sharks are therefore ill-equipped for large-scale dispersal and neonates recorded in the northwest of their Australian distribution are likely born locally, not at known south-eastern pupping areas. This suggests the existence of previously unrecorded school shark pupping areas. Integrated bioenergetic approaches as applied here may help to understand dispersal from natal areas in other taxa, such as teleost fishes, elasmobranchs and invertebrates.
## Introduction
Natal areas play important roles in the life histories of many marine taxa by providing food, shelter, and protection from predation to maximize recruitment of young into adult populations (Beck et al., 2001; Heithaus, 2007; Nagelkerken et al., 2015). Recruitment from natal areas can aid recovery of depleted marine populations, and as a result they are increasingly protected as habitats of conservation importance (Garla et al., 2006; McLeod et al., 2009; White, 2015). Understanding drivers behind the use of natal areas can therefore provide valuable insights to conservation planning and management. Natal areas are often characterized by little or no overlap between young and older age classes that may present intraspecific competition or predation risks (Dahlgren et al., 2006; Speed et al., 2010; Guttridge et al., 2012). In these cases, recruitment of young into the broader population is dependent on dispersal from natal areas into habitats used by older conspecifics (Simpfendorfer and Milward, 1993; Eggleston, 1995; Gillanders et al., 2003). Such ontogenetic habitat shifts can entail substantial movements, requiring energy-intensive dispersal to forge connectivity between natal and other areas.
Because dispersal of young may be costly, it may be limited by energetic constraints that influence the use of natal areas. In sharks, the liver is the primary organ of energy storage (Sargent et al., 1973; Zammit and Newsholme, 1979). Individuals with large livers rich in energy storage lipids are considered in good condition and best prepared to undertake dispersive movements (Rossouw, 1987; Hoffmayer et al., 2006). Variation in the effects of season and location on metabolic demands, e.g. due to varying water temperature and other factors that assist or hinder dispersal such as ocean currents, may play important roles in the cost and timing of dispersal. Ecological characteristics and lifestyles of shark species can also influence energy flow between shark populations and their communities, e.g. pelagic and migratory species are likely to require more energy to fuel more active lifestyles and wide-ranging movements than less-mobile species (Cortés and Gruber, 1990; Killen et al., 2010).
The school shark (Galeorhinus galeus) is distributed circumglobally including in Australian waters where they undertake large-scale movements, extending from the Great Australian Bight to New Zealand (Olsen, 1954; Walker, 1999; McMillan et al., 2019). Their bentho-pelagic behaviour utilizes the entire water column from the sea floor to the surface with adults moving between these habitats throughout the diel cycle to forage. Pupping occurs in austral summer in sheltered bays and estuaries in the southeast of the species’ Australian range. From these pupping areas around Tasmania and Bass Strait neonates disperse, eventually mixing throughout their Australian distribution (Olsen, 1954; Stevens and West, 1997) (Fig. 1). Juvenile teleost fishes associated with inshore flats and benthic vegetation, e.g. whiting (Sillaginidae) and flounder (Pleuronectidae), are important prey for neonates departing pupping areas (Stevens and West, 1997). On this basis, it is assumed that dispersing neonates move along the coastal shelf in the relatively shallow photo-benthic zone. While most neonates depart pupping areas in autumn and winter, up to a third may return to adjacent areas as juveniles the following spring suggesting limited movements, but dispersive individuals move further (McAllister et al., 2015). Numerous pupping areas have been identified and protected as shark refuge areas designed and managed by the Tasmanian Department of Primary Industries, Parks, Water and Environment to protect critical reproductive habitats and maximize survival of neonates and pregnant female sharks (DPIPWE, 2020).
Figure 1
The core range of the school shark in Australia. Marked are the pupping areas in Pittwater estuary where this study was conducted; the Maria Island monitoring station on the dispersal route: Port Phillip Bay, the most westerly recorded pupping area; and Marion Bay, the location where neonate school sharks have been recorded off South Australia. The continental shelf is shaded. Inset shows the study area (boxed) relative to Australia.
Figure 1
The core range of the school shark in Australia. Marked are the pupping areas in Pittwater estuary where this study was conducted; the Maria Island monitoring station on the dispersal route: Port Phillip Bay, the most westerly recorded pupping area; and Marion Bay, the location where neonate school sharks have been recorded off South Australia. The continental shelf is shaded. Inset shows the study area (boxed) relative to Australia.
Because school sharks move large distances and exploit the entire water column, they are exposed to anthropogenic threats over large areas and a wide range of depths. However, their potential to recover from population depletion is limited by biological traits shared with many sharks including slow growth, late maturity and low reproductive capacity. As a result, the school shark is Critically Endangered (IUCN, 2020) with evidence of overfishing throughout its range including in California (Walker, 1998), Great Britain (Molfese et al., 2014) and Argentina (Cuevas et al., 2014). In Australia, the school shark has not recovered from population collapse in the 1990s despite cessation of targeted commercial fishing since 2001, introduction of a national recovery plan in 2008 and receiving Conservation Dependent status in 2009 (Huveneers et al., 2013; McAllister et al., 2018). Recent records of neonates in the Great Australian Bight in the northwest of their range raises several questions: (i) Does long-distance dispersal occur immediately post-birth? (ii) Are there previously unknown and unprotected pupping areas in the Great Australian Bight? (Fig. 1; McMillan et al., 2018). We used school sharks as a model species to investigate constraints on shark pup dispersal from pupping areas.
We conducted bioenergetic analyses on neonate school sharks from their most productive recorded pupping area in south-eastern Australia, Pittwater estuary in Tasmania (Stevens and West, 1997) (Fig. 1), to investigate constraints on dispersal from pupping areas. We hypothesized that residency in pupping areas may be influenced by energetic constraints in neonate sharks that leave them ill-equipped to disperse long distances following birth, thereby delaying dispersal. We used swim-tunnel respirometry to examine costs of transport, optimal swimming speed and routine energetic costs and conducted bomb calorimetry and lipid class analysis to assess energy storage. Finally, we calculated an energy budget for neonate school sharks to gain insight into their energetic requirements and related foraging demands and to assess how environmental conditions may influence dispersal. To our knowledge, this is the first study using such a combined energetics approach to investigate post-natal dispersal, providing potential to complement tracking studies (e.g. McAllister et al., 2015) by improving knowledge of drivers behind dispersal from natal areas.
## Materials and methods
### Sample collection
We used baited longlines to catch neonate school sharks in upper Pittwater estuary, Tasmania, over a 3-week period in early austral autumn (15 March–7 April 2017). The estuary has an area of 20.7 km2 and is characterized by shallow flats (depth, ~ 4 m) draining at low tide into a main channel (depth, ~ 8 m) (McAllister et al., 2018). We transported 10 neonates live to the Institute for Marine and Antarctic Studies facility at Taroona, Hobart, for respirometry trials. For bomb calorimetry and lipid analyses combined, we euthanized 13 further neonates. We then recorded sharks’ total weight (MT), total length, sex, liver whole wet weight (ML) and hepato-somatic index (ML/MT). After desiccating liver sub-samples in a freeze dryer for 5 days, we then homogenized them and stored them frozen in sealed vials at −20°C.
### Cost of transport
It is necessary to estimate routine metabolic costs to calculate an energy budget and assess relative investment of resources in functions such as energy storage and growth (Dowd et al., 2006). Where swimming speeds in the wild are unknown, respiration rates at optimal swimming speeds, at which cost of transport (COT) is minimal, may be used to estimate routine energy consumption (Videler and Nolet, 1990; O’Dor, 2002; Ikeda, 2016). We therefore used swim-tunnel respirometry to estimate routine energy consumption for neonate sharks. We housed sharks (n = 10, mean ± SD: 42.8 ± 2.2 cm total length, 0.36 ± 0.04 kg) in a 10 000 L holding tank at environmental temperatures (16–18.6°C) and fed them jack mackerel (Trachurus declivis) fillets once daily. Prior to respirometry trials we acclimated sharks at a controlled temperature (mean: 19.1°C, range: 18.8–19.7°C) for 24 h during which food was withheld, sufficient to allow for gastric evacuation of fillets (Schurdak and Gruber, 1989) and ensure sharks were in a post-absorptive state during trials.
We conducted trials in a 175 L, sealed recirculating Brett-type swim-tunnel respirometer with an 875 x 250 x 250 mm swim chamber (Loligo Systems, Denmark). During trials, we measured dissolved oxygen using a Witrox oxygen meter, with an optical fibre oxygen sensor (Loligo Systems, Denmark) and recorded it throughout to determine oxygen consumption rate. We flushed and refreshed respirometer water whenever oxygen saturation levels fell below 80% (as per Clark et al., 2013) and completed blank runs for 12 hours prior to each swim trial to assess background respiration. We introduced sharks into the respirometry chamber and acclimated them at low speeds of 0.3–0.4 body lengths per second (bl s−1) for 30–47 minutes until oxygen consumption reached a steady state (Johansen and Jones, 2011) before starting swimming trials. To minimize disturbance, we ran trials behind black curtains under constant red-light conditions with water temperature maintained at 20°C. Starting at 0.5 bl s−1, we increased swimming speed in increments of 0.1 bl s−1 and swam sharks at each speed for 15 minutes (as per Payne et al., 2011) unless the trial was terminated. Trials were terminated when sharks became exhausted, as indicated by sharks being close to the rear surface of the swim chamber for >20 seconds (Lee et al., 2003) or swimming in bursts (Bouyoucos et al., 2017), suggesting an anaerobic response (Skomal and Bernal, 2010).
To account for the increased water speed caused by the profile of the animal in the respirometry chamber, we applied a solid blocking correction as per Bell and Terhune (1970): UF = UT(1 + εs), where UF is the speed of the corrected flow and UT is flow speed in the swim chamber without an animal. We calculated fractional error caused by solid blocking (εs) as εs = 0.8λ(AO/AT)0.5, where λ is a constant for animal shape (= 0.5*body length/body thickness), AO is maximum cross-sectional area of the animal and AT is the cross-sectional area of the swim chamber (Bell and Terhune, 1970; Payne et al., 2011). For each 15-minute speed trial, we fitted a linear regression to the decrease in respirometer oxygen, retaining only trials where linear regressions yielded R2 values >0.8 for analysis. Regressions with low R2 values indicate non-linear declines in respirometer oxygen, e.g. due to inconsistent activity levels during trials (Svendsen et al., 2016). We calculated mass-specific metabolic rates using the equation
$${MO}_2=\frac{\left[\left({V}_r-{V}_s\right)\times \Delta{C}_{wO2}\right]\bullet \Delta{t}^{-1}}{M^{0.86}}$$
where |${MO}_2$| is metabolic rate; |${V}_r$| and |${V}_s$| are respirometer and shark volumes, respectively; |$\Delta t$| is the change in time (t) during trials; |$\Delta{C}_{wO2}$| is the change in respirometer oxygen concentration during trials; and |$M$| is shark mass scaled using an exponent of 0.86 that applies to a range of shark species (Sims, 2000). We divided resulting metabolic rates (mg O2 kg−1 hr−1) by swim speed in km hr−1 to derive COT. We then fitted a second-order polynomial to the relationship between COT and swim speed (m s−1) for all experimental animals combined and determined the minimum of the function to obtain the optimal swim speed at which COT was lowest (Uopt).
### Calorimetry and lipid class analyses
We determined liver energy content of sharks (n = 13, mean ± SD: 43.3 ± 2.6 cm total length, 0.33 ± 0.05 kg) and caloric tissue value using a semi-micro oxygen bomb calorimeter (Parr model 6725, Parr Instrument Company, IL, USA) coupled with a calorimetric thermometer (Parr model 6772). We pressed sub-samples of dried and homogenized liver (~40 mg) into pellets with a 200-mg spike of known energy content to act as a fuse (standardized benzoic acid, Parr Instrument Company, IL, USA) and combusted pellets in the bomb calorimeter to yield measures of gross heat (MJ kg−1). By subtracting the known heat production from fuse material, we calculated liver sample energy. To calibrate the calorimeter, we combusted a benzoic acid pellet of known energy content prior to each session. We derived dried liver mass (DL) using the equation DL = DSMS−1ML, where DS was dried sub-sample mass (g), MS was wet sub-sample mass (g) and ML was wet liver mass (g) (Hoffmayer et al., 2006). We then calculated liver energy storage (EL) from EL = DLES, where ES was dried sub-sample energy. To assess drivers of energy storage, we used a linear model with terms: EL ~ length + hepato-somatic index + lipid content (i.e. percentage of liver tissue composed of lipid). Weight was highly correlated with length (r = 0.85), so we omitted weight as a predictor of energy storage in the model. To determine energy invested in growth, we obtained the caloric tissue value for neonates by applying the above calorimetry methodology to 40 mg sub-samples from 3 neonates homogenized whole (3 sub-samples per neonate).
We extracted lipids from sub-samples of dried and homogenized liver tissue (~0.1 g) using a modified Bligh and Dyer (1959) technique. We added sub-samples to a solvent mixture of 9 ml purified H2O and 20 mL methanol in valve-sealed glass funnels then agitated them gently and left them to stand for 1 h before adding 10 mL dichloromethane (DCM), agitating gently and allowing to stand overnight. After shaking funnel contents, we added 10 mL DCM and 9 mL saline purified H2O and left funnels to stand for 2 h. Using a rotary evaporator, we drained and concentrated contents before adding 2 mL DCM and pipetting the contents into pre-weighed sealed vials. We then expelled moisture using N2 flow and weighed total lipid extract prior to adding 0.5 mL DCM and storing in a freezer. To analyse lipid classes (hydrocarbons/wax esters/sterol esters, triacylglycerols, free sterols, di/monoacylglycerols and phospholipids) we used an Iatroscan Mk V TLC-flame ionization detector (Iatron Laboratories, Tokyo) after spotting total lipid on silica rods and developing solvents. We calibrated the detector using a standard mixture containing lipid classes. We then quantified lipid classes using the Iatroscan integrating software v7.0 (Iatron Laboratories, Tokyo).
### Energy budget
We calculated an energy budget by adapting the formula from Lowe (2002), including specific dynamic action, i.e. energetic costs associated with digestion: C = M + Ms + W + G, where C (energy consumed) is equal to the sum of energy used in metabolism (M), specific dynamic action (Ms), energy lost as waste (W), and energy invested in growth (G). Because fish routinely swim at optimal speeds where energetic costs are minimal (Uopt) (Videler, 1993; Clark and Seymour, 2006), COT at Uopt provides an ecologically relevant measure of energy demands in the natural environment (Steffensen, 2005). We therefore derived routine metabolic energy consumption (M) from COT at Uopt (COT at Uopt * Uopt), as a proxy for routine metabolic rate (Ikeda, 2016) and scaled metabolic rate to mean animal size (g) using a mass scaling exponent of 0.86 (Sims, 2000).
Neonates have been observed to disperse from Pittwater north along the coastal shelf as evidenced by acoustic detections at the Maria Island monitoring station (McAllister et al., 2015) (Fig. 1). To predict effects of spatial and seasonal changes in temperature we applied a temperature coefficient (Q10) of 2.51 derived from resting data (as per Dowd et al., 2006) from the closely related leopard shark (Triakis semifasciata), that inhabits a similar thermal range (Miklos et al., 2003). Elasmobranch metabolic rates generally increase by a Q10 in the range of 2–3 (Carlson et al. 2004). We made adjustments for mean water temperatures in early autumn (1 March–15 April: 17.2°C) and late autumn (16 April–31 May: 12.6°C) in Pittwater (Semmens, unpublished data) and in early autumn (17.4°C), late autumn (15.3°C) and winter (1 June–15 July: 13°C) at Maria Island (depth: 20 m; IMOS, 2018) (Fig. 1), representing conditions on the dispersal route. We also modelled adjustments for current strength and direction on the dispersal route where the East Australia Current flows in a mean poleward direction at Maria Island during autumn–winter by approximating incoming flow to reduce ground speed by a corresponding amount (at 20 m depth, mean direction: 161°, mean flow: 0.21 m s−1; IMOS, 2018).
We calculated specific dynamic action costs for neonates at 6% of metabolic energy consumption (Sims and Davies, 1994) and energy lost to waste at 28% including faecal and nitrogenous wastes and egestion (Wetherbee and Gruber, 1993). Growth was derived from a non-linear least squares model applied to shark lengths surveyed in Pittwater estuary from 2011 to 2017 (Fig. S1). Since male and female school shark growth curves do not differ (Moulton et al., 1992), we derived growth in mass from the weight–length relationship for school sharks: y = 4.86(10−6x3.18) where y = weight (lb) and x = length (cm) (Olsen, 1954). We then converted weight to grams and multiplied weight by the caloric tissue value we obtained for school sharks (5.8 kJ g−1). Since pups are immature, we calculated all energy devoted to growth as somatic rather than reproductive growth.
## Results
### Swimming performance and energy budget
At 20°C, Uopt was 0.6 m s−1 (Fig. 2), equating to a mean of 1.4 bl s−1 and metabolic rate at Uopt was 149 mg O2 kg−1 h−1. Adjusting for seasonal differences in ambient water temperature on the coastal dispersal route yielded predicted decreases in metabolic rate at Uopt that ranged from 122 mg O2 kg−1 h−1 in early autumn (17.4°C) to 78 mg O2 kg−1 h−1 in winter (13°C). COT at Uopt on the coastal dispersal route decreased from 0.7 J g−1 km−1 in early autumn to 0.5 J g−1 km−1 in winter. Adjustment for swimming into the poleward flowing East Australia Current on the dispersal route (mean flow rate: 0.21 m s−1) gave a COT of 0.9 J g−1 km−1 in early autumn decreasing to 0.6 J g−1 km−1 in winter.
Figure 2
COT (mg O2 kg wet weight-1 km-1) as a function of swimming speed (m s-1) for neonate school sharks from swim-tunnel respirometry trials; a polynomial trendline was fitted to derive optimal swimming speed at which COT was lowest (Uopt).
Figure 2
COT (mg O2 kg wet weight-1 km-1) as a function of swimming speed (m s-1) for neonate school sharks from swim-tunnel respirometry trials; a polynomial trendline was fitted to derive optimal swimming speed at which COT was lowest (Uopt).
Growth ranged from 2.64 to 2.99 g day−1 and was the largest energetic cost, demanding 15.3–17.3 kJ day−1 (Table 1). Metabolic energy consumption ranged from 10.7 to 15.3 kJ day−1 and energy lost to waste ranged from 10.6 to 12.7 kJ day−1 yielding a total routine energy consumption of 38–45.4 kJ day−1 (Table 1). Whiting, the most important prey item for dispersing neonate school sharks (Stevens and West, 1997), had a mean caloric value of 5.9 kJ g−1 (McCluskey et al., 2016). Based on this, neonates would need to consume 6.4–7.7 g prey day−1 to satisfy routine energy requirements, i.e. a daily ration of 1.5%–2.3% wet bodyweight (Table 2). Since mean-sized whiting prey for neonate school sharks is 13 g (Semmens, unpublished data), this would require a successful hunt approximately every 2 days.
Table 1
Modelled energetic parameters for neonate school sharks of mean size in Pittwater estuary in early autumn (1 March–15 April) with adjustments for changes in site (Maria Island on the dispersal route) and season (late autumn: 16 April–31 May; winter: 1 June–15 July)
SiteSeasonTemp, °CLength, cmWeight, gMetabolism, kJ day−1SDA, kJ day−1Growth, kJ day−1Waste, kJ day−1Total, kJ day−1Daily ration % wbw
Pittwater Early autumn 17.2 43.3 330 15.1 0.9 15.3 12.2 43.5 2.2
Late autumn 12.6 46.3 436 10.7 0.6 15.8 10.6 37.8 1.5
Maria Is. Early autumn 17.4 43.3 330 15.3 0.9 15.4 12.3 43.9 2.3
Late autumn 15.3 46.3 436 14.7 0.9 15.6 12.2 43.3 1.7
Winter 13.0 48.7 512 14.3 0.9 17.3 12.7 45.1 1.5
SiteSeasonTemp, °CLength, cmWeight, gMetabolism, kJ day−1SDA, kJ day−1Growth, kJ day−1Waste, kJ day−1Total, kJ day−1Daily ration % wbw
Pittwater Early autumn 17.2 43.3 330 15.1 0.9 15.3 12.2 43.5 2.2
Late autumn 12.6 46.3 436 10.7 0.6 15.8 10.6 37.8 1.5
Maria Is. Early autumn 17.4 43.3 330 15.3 0.9 15.4 12.3 43.9 2.3
Late autumn 15.3 46.3 436 14.7 0.9 15.6 12.2 43.3 1.7
Winter 13.0 48.7 512 14.3 0.9 17.3 12.7 45.1 1.5
Table 1
Modelled energetic parameters for neonate school sharks of mean size in Pittwater estuary in early autumn (1 March–15 April) with adjustments for changes in site (Maria Island on the dispersal route) and season (late autumn: 16 April–31 May; winter: 1 June–15 July)
SiteSeasonTemp, °CLength, cmWeight, gMetabolism, kJ day−1SDA, kJ day−1Growth, kJ day−1Waste, kJ day−1Total, kJ day−1Daily ration % wbw
Pittwater Early autumn 17.2 43.3 330 15.1 0.9 15.3 12.2 43.5 2.2
Late autumn 12.6 46.3 436 10.7 0.6 15.8 10.6 37.8 1.5
Maria Is. Early autumn 17.4 43.3 330 15.3 0.9 15.4 12.3 43.9 2.3
Late autumn 15.3 46.3 436 14.7 0.9 15.6 12.2 43.3 1.7
Winter 13.0 48.7 512 14.3 0.9 17.3 12.7 45.1 1.5
SiteSeasonTemp, °CLength, cmWeight, gMetabolism, kJ day−1SDA, kJ day−1Growth, kJ day−1Waste, kJ day−1Total, kJ day−1Daily ration % wbw
Pittwater Early autumn 17.2 43.3 330 15.1 0.9 15.3 12.2 43.5 2.2
Late autumn 12.6 46.3 436 10.7 0.6 15.8 10.6 37.8 1.5
Maria Is. Early autumn 17.4 43.3 330 15.3 0.9 15.4 12.3 43.9 2.3
Late autumn 15.3 46.3 436 14.7 0.9 15.6 12.2 43.3 1.7
Winter 13.0 48.7 512 14.3 0.9 17.3 12.7 45.1 1.5
Table 2
Liver energy and lipid stores of neonate school sharks from Pittwater estuary including total length, weight, sex (male/female), liver wet weight, hepato-somatic index (HSI), lipid content (percentage of liver tissue composed of lipid), total stored energy (total energy stored in livers of each shark) and number of days energy stores are calculated to last without further feeding when sampled in Pittwater in early autumn. Bottom row provides means ± SD for all parameters except sex, where M:F ratio is provided
Length, cmWeight, gSex, M/FLiver wet wt., gHSI, %Lipid content, %Total stored energy, kJEnergy stores, days
42 323 9.1 2.8 34.7 67 1.7
42 325 7.7 2.4 29.7 62 1.5
46 384 18.3 4.8 44.7 247 4.0
41 333 10.4 3.1 26.1 86 2.8
47 386 13.2 3.4 56.2 175 2.5
41 306 11.2 3.7 33.9 91 2.5
44 363 12.3 3.4 35.6 133 2.8
40 253 11.0 4.4 44.7 128 3.1
39 228 7.7 3.4 37.5 69 1.7
46 401 18.9 4.7 52.4 187 3.3
45 344 14.7 4.3 34.1 117 1.9
46 369 11.7 3.2 39.6 127 2.0
44 282 10.1 3.6 34.1 84 1.3
43 ± 2.6 330 ± 52.6 7:5 12 ± 3.5 3.6 ± 0.7 38.7 ± 8.7 121 ± 54.8 2.4 ± 0.8
Length, cmWeight, gSex, M/FLiver wet wt., gHSI, %Lipid content, %Total stored energy, kJEnergy stores, days
42 323 9.1 2.8 34.7 67 1.7
42 325 7.7 2.4 29.7 62 1.5
46 384 18.3 4.8 44.7 247 4.0
41 333 10.4 3.1 26.1 86 2.8
47 386 13.2 3.4 56.2 175 2.5
41 306 11.2 3.7 33.9 91 2.5
44 363 12.3 3.4 35.6 133 2.8
40 253 11.0 4.4 44.7 128 3.1
39 228 7.7 3.4 37.5 69 1.7
46 401 18.9 4.7 52.4 187 3.3
45 344 14.7 4.3 34.1 117 1.9
46 369 11.7 3.2 39.6 127 2.0
44 282 10.1 3.6 34.1 84 1.3
43 ± 2.6 330 ± 52.6 7:5 12 ± 3.5 3.6 ± 0.7 38.7 ± 8.7 121 ± 54.8 2.4 ± 0.8
Table 2
Liver energy and lipid stores of neonate school sharks from Pittwater estuary including total length, weight, sex (male/female), liver wet weight, hepato-somatic index (HSI), lipid content (percentage of liver tissue composed of lipid), total stored energy (total energy stored in livers of each shark) and number of days energy stores are calculated to last without further feeding when sampled in Pittwater in early autumn. Bottom row provides means ± SD for all parameters except sex, where M:F ratio is provided
Length, cmWeight, gSex, M/FLiver wet wt., gHSI, %Lipid content, %Total stored energy, kJEnergy stores, days
42 323 9.1 2.8 34.7 67 1.7
42 325 7.7 2.4 29.7 62 1.5
46 384 18.3 4.8 44.7 247 4.0
41 333 10.4 3.1 26.1 86 2.8
47 386 13.2 3.4 56.2 175 2.5
41 306 11.2 3.7 33.9 91 2.5
44 363 12.3 3.4 35.6 133 2.8
40 253 11.0 4.4 44.7 128 3.1
39 228 7.7 3.4 37.5 69 1.7
46 401 18.9 4.7 52.4 187 3.3
45 344 14.7 4.3 34.1 117 1.9
46 369 11.7 3.2 39.6 127 2.0
44 282 10.1 3.6 34.1 84 1.3
43 ± 2.6 330 ± 52.6 7:5 12 ± 3.5 3.6 ± 0.7 38.7 ± 8.7 121 ± 54.8 2.4 ± 0.8
Length, cmWeight, gSex, M/FLiver wet wt., gHSI, %Lipid content, %Total stored energy, kJEnergy stores, days
42 323 9.1 2.8 34.7 67 1.7
42 325 7.7 2.4 29.7 62 1.5
46 384 18.3 4.8 44.7 247 4.0
41 333 10.4 3.1 26.1 86 2.8
47 386 13.2 3.4 56.2 175 2.5
41 306 11.2 3.7 33.9 91 2.5
44 363 12.3 3.4 35.6 133 2.8
40 253 11.0 4.4 44.7 128 3.1
39 228 7.7 3.4 37.5 69 1.7
46 401 18.9 4.7 52.4 187 3.3
45 344 14.7 4.3 34.1 117 1.9
46 369 11.7 3.2 39.6 127 2.0
44 282 10.1 3.6 34.1 84 1.3
43 ± 2.6 330 ± 52.6 7:5 12 ± 3.5 3.6 ± 0.7 38.7 ± 8.7 121 ± 54.8 2.4 ± 0.8
### Calorimetry and lipid class analyses
Livers of neonates were small with a mean hepato-somatic index of 3.6% wet bodyweight (range, 2.4%–4.8%). Mean stored liver energy was 120.9 ± 54.8 kJ (range, 59.8–249.8 kJ) (Table 2). The linear model using lipid content, hepato-somatic index and length as explanatory variables explained 76% of the variance in stored energy (R2 = 0.76, F(3,9) = 13.56, P < 0.01). Energy increased by 1.6 kJ per % increase in lipid content (mean ± SD: 38.7 ± 8.6%), 35 kJ per % increase in hepato-somatic index (3.6 ± 0.7%), and 8.5 kJ per cm increase in length (43.3 ± 2.6 cm). Lipid class profiles were broadly similar with triacylglycerols and phospholipids most abundant, however, proportions varied among individuals (Fig. 3). Mean content (± SD) of lipid classes were as follows: triacylglycerols, 62.71 ± 13.9%; free sterols, 4.56 ± 5.4%; hydrocarbons/wax esters/sterol esters, 3.47 ± 2.1%; di/monoacylglycerols, 3.23 ± 2.8%; and phospholipids, 26.52 ± 9.7%. Mean energy stores at the time of sampling were sufficient to sustain routine energy requirements for 2.4 ± 0.8 days without further feeding but differed among individuals (1.3–4 days; Table 2).
Figure 3
Lipid class profiles as % total liver lipid content for neonate school sharks (HC , hydrocarbons; WE, wax esters; SE, sterol esters; TAG, triacylglycerols; ST , free sterols; DMAG , di/monoacylglycerols; PL, phospholipids; identification numbers for sharks are given on the x-axis)
Figure 3
Lipid class profiles as % total liver lipid content for neonate school sharks (HC , hydrocarbons; WE, wax esters; SE, sterol esters; TAG, triacylglycerols; ST , free sterols; DMAG , di/monoacylglycerols; PL, phospholipids; identification numbers for sharks are given on the x-axis)
## Discussion
Neonate sharks were ill-equipped for long-distance dispersal due to their low energy stores, characterized by small livers, low overall lipid content and low levels of energy storage lipids relative to adults. Substantial investment of available resources in growth (the largest energetic cost) and high levels of growth-associated structural lipids were also found. Pups grew rapidly in the pupping area, increasing length by ~40% from birth in summer through to dispersal in late autumn (Fig. S1). Delaying dispersal in neonate sharks thus appears to allow prioritization of growth. These findings are supported by the tendency of young sharks of numerous species (Kinney and Simpfendorfer, 2009) including school sharks (Thorburn et al., 2019) to maintain limited home ranges. Delaying dispersal and prioritizing growth likely increases survival, since growth offers advantages for foraging and intra-specific competition while reducing predation risks at this vulnerable life stage (Morrissey and Gruber, 1993; Heupel et al., 2007). Delaying dispersal may also confer energetic benefits when dispersal eventually occurs. In addition to allowing time to build energy stores to sustain long-distance travel, swimming costs also decrease with increasing mass (Schmidt-Nielsen, 1984). Furthermore, cooling ambient water temperatures were predicted to reduce routine energy costs, reducing daily ration requirements by around a third from early autumn to winter (Table 1). Low lipid stores also indicate low buoyancy, which is reflected in the benthic lifestyle of neonate school sharks and not conducive to efficient swimming, further indicating the ill-preparedness of neonates to disperse long distances.
The liver is the main site of energy storage in elasmobranchs, where lipids are synthesized and stored to fuel metabolic activity (Sargent et al., 1972; Zammit and Newsholme, 1979). As such, shark livers are particularly energy rich, e.g. white shark (Carcharodon carcharias) livers have higher energy density than whale blubber (Pethybridge et al., 2014). Liver energy stores (and thus liver size) are depleted to fuel energy-intensive tasks including dispersal (Bone and Roberts, 1969; Rossouw, 1987, Del Raye et al. 2013), meaning that mature school sharks have significantly smaller livers after migrating (Olsen, 1954). Liver lipids are also used to offset starvation with individuals in poor condition having small livers (Bone and Roberts, 1969; Hoffmayer et al., 2006). Neonate livers were 3–6 times smaller than adult livers relative to body mass (adult hepato-somatic index, 10%–20%; Ripley, 1946a) and had low lipid content (~39% in neonates v. ~60% and ~75% in adult males and females, respectively; Ripley, 1946b). In addition to low energy stores, low lipid levels indicate high body density and low hydrostatic lift, suggesting a predominantly benthic lifestyle (Bone and Roberts, 1969; Rossouw, 1987). Larger livers increase static buoyancy (lift), reducing dynamic lift costs of more active swimming and increasing swimming efficiency (Iosilevskii and Papastamatiou, 2016). Increased buoyancy facilitates exploitation of the water column as seen in the ubiquitous diel vertical foraging of adult school sharks (McMillan et al., 2019). Conversely, lower buoyancy in neonates is reflected in their diet comprising mainly benthic taxa (Stevens and West, 1997; McAllister et al., 2015) and may also assist predator avoidance by maintaining position near the seafloor. Small livers and low lipid stores therefore appear to be key constraints on dispersal by limiting energy stores and swimming efficiency.
High proportions of structural lipids v. energy storage lipids in neonates relative to adults further supports prioritization of growth over energy storage (Fig. 4). While energy storing triacylglycerols were in greatest abundance, comprising nearly two thirds of liver lipids, this was far lower than in adult school sharks where they comprise >95% of liver lipids (Nichols et al., 1998). Conversely, structural phospholipids that are important components of cell membranes and thus growth (Pethybridge et al., 2010) were the second most abundant lipids in neonates at ~26% compared to just 2% in adults (Nichols et al., 1998). Crustaceans and cephalopods were roughly of equal importance to small teleost fish in the diet of neonate school sharks in Pittwater (Stevens and West, 1997), but yield low lipid content compared to teleost prey and cephalopod flesh in particular yields mainly structural lipids (Semmens, 1998). Teleost fish become increasingly important in the diet of juveniles as they grow (Stevens and West, 1997), marking a transition from generalist foraging in inexperienced neonates to a more specialized focus on higher energy teleost prey as foraging ability increases. The high levels of structural lipids found in this study confirm that this transition is yet to occur in neonates in the pupping area and further support a low preparedness for energy-intensive long-distance dispersal.
Figure 4
Energy storage in shark pups (left) was limited by small livers relative to body size, low lipid stores relative to liver size, low levels of energy storage lipids and high levels of growth-associated lipids compared to adults; these constraints appear to limit dispersal from pupping areas while growth is prioritized, paying off through increased survival, improved swimming efficiency, and lower costs of later dispersal.
Figure 4
Energy storage in shark pups (left) was limited by small livers relative to body size, low lipid stores relative to liver size, low levels of energy storage lipids and high levels of growth-associated lipids compared to adults; these constraints appear to limit dispersal from pupping areas while growth is prioritized, paying off through increased survival, improved swimming efficiency, and lower costs of later dispersal.
In addition to increasing survival and foraging ability, delaying dispersal to grow offers other benefits for subsequent dispersal in terms of increased swimming efficiency. As fish increase in size, their surface-to-volume ratio decreases, contributing to a lower COT. In sharks, this can be approximated by an exponent of ~ 0.3 (Schmidt-Nielsen, 1984). Seasonal and spatial changes in ambient water temperature can also have strong effects on energy consumption in ectothermic sharks (Carlson and Parsons, 1999; Miklos et al., 2003; Bethea et al., 2007). Our bioenergetics model predicted considerable energetic savings by delaying dispersal until cooler ambient temperatures occurred on the dispersal route in late autumn and winter, consistent with thermal effects of decreasing water temperature lowering metabolic rate and swimming costs (Clark and Seymour, 2006). These predicted energetic savings may be conservative, because although our model assumed constant swimming speed, swimming efficiency may increase at cooler temperatures (Dickson et al., 2002; Clark and Seymour, 2006). Ration levels may also increase at lower latitudes due to increasing ambient temperatures elevating metabolic demands (Bethea et al., 2007). Increasing energetic costs for neonates as they move north along the Tasmanian coast into warmer waters may therefore provide further reason to delay dispersal until temperatures fall.
The optimal swimming speed of 1.4 bl s−1 was comparable to other ectothermic sharks of similar size (0.9–1.7 bl s−1) including scalloped hammerheads (Sphyrna lewini) (Lowe, 1996), lemon sharks (Negaprion brevirostris) and leopard sharks T. semifasciata (Graham et al., 1990). Optimal swimming speed of neonates was near the maximum sustained swimming speed recorded, suggesting neonate swimming performance does not allow energetically optimal swim speeds substantially lower than maximal sustainable cruising speeds. It may be that the low buoyancy (small livers with low lipid content) and/or limited hydrodynamic performance (small, floppy pectoral fins, underdeveloped and not conducive to maintaining position in the water column) affect the swimming performance of neonates and push their optimal swimming speed up towards their maximal performance. Whitney et al. (2016) similarly recorded limited sustainable swim speeds beyond optimal swimming speed in juvenile nurse sharks (Ginglymostoma cirratum) at 30°C. Although the optimal swimming speed for neonates suggests a fast theoretical dispersal capacity (up to ~41 km day−1 swimming into the East Australia Current at mean flow), such speeds are unlikely to be achieved – even migrating adult school sharks moved at a maximum speed of 24 km day−1 (McMillan et al., 2019). Acoustically tracked neonates dispersing from Pittwater covered the 155 km to Maria Island at a fastest dispersal rate of 3.5 km day−1 (McAllister et al., 2015). Sharks in the wild are not forced to maintain position in a current (as in swim-tunnel respirometers) and often exploit the vertical water column when swimming, e.g. ascending against gravity before glide descending, offering foraging and energetic benefits but slowing horizontal swimming speeds (Barnett et al., 2010). Carcharhiniform sharks are also capable of both ram ventilating while swimming and buccal pumping while at rest (Carrier et al., 2012), so swimming speeds from trials cannot be easily equated to daily dispersal rates. Neonates were observed resting in holding tanks and undertake limited movement during daylight in the wild (Barnett and Semmens, 2012), suggesting continuous swimming by neonate school sharks in the wild is unlikely.
Prioritization of growth and small energy stores thus appear to constrain dispersal in shark pups until sufficient growth and energy storage occur or favourable ambient conditions, e.g. water temperature or currents, reduce energetic costs. Field observations of neonate school sharks support an incremental dispersal from pupping areas rather than direct, rapid dispersal. Neonates in Port Phillip Bay (Fig. 1) began congregating in channels in early autumn before meandering towards the open sea and dispersing from the bay by late winter (Olsen, 1954). In Pittwater, similar behaviour was observed with neonates beginning to move into lower reaches of the estuary during autumn and dispersing into adjacent coastal areas in late autumn and winter (McAllister et al., 2015). These movements are consistent with our findings of low energy stores in early autumn and the energetic benefits of delaying dispersal until water temperatures fall in late autumn and winter.
Neonate school sharks ~1–4 months old have recently been recorded in the Great Australian Bight off South Australia during the summer pupping season 840–1700 km from known pupping areas in Tasmania and Bass Strait (e.g. Rogers et al., 2017; McMillan et al., 2018). Rapid dispersal from distant pupping areas is unlikely for neonates in South Australia given our findings of low preparedness for dispersal and that at this time neonates in known pupping areas are yet to begin their autumn–winter movement towards the open sea (Olsen, 1954; McAllister et al., 2015). Neonate school sharks tagged in Bass Strait and Tasmania that dispersed to South Australia required 12–24 months (Olsen, 1954; Semmens, unpublished data), by which time they were no longer neonates but 1–2 year-old juveniles. Dispersing from the nearest known pupping area (Port Phillip Bay; Fig. 1) shortly after birth, neonates would need to swim up to ~60 km day−1 to arrive at locations where they have been recorded in South Australia in the observed size range. Additionally, the observed post-natal residency when growth occurs in pupping areas would be foregone and energetic costs would be elevated due to high summer temperatures. Immediate post-natal dispersal over such distances is therefore unlikely, suggesting undocumented local pupping areas in South Australia that could be valuable to conservation management.
This study suggests a trade-off, with shark pups delaying dispersal to prioritize growth. This is likely because growth increases survival through reduced predation and provides foraging advantages. Delaying dispersal also offers energetic benefits for subsequent dispersal through increased swimming efficiency and reduced energy demands. These findings suggest limited dispersal by neonate school sharks and are supported by both traditional mark-recapture and acoustic tracking studies (Olsen, 1954; McAllister et al., 2015). This study also indicates that neonate school sharks in South Australia were likely born locally in undocumented pupping areas rather than being migrants from distant pupping areas in south-eastern Australia. This has important management implications given the species’ overfished status in Australia, long projected recovery time (up to 66 years to reach 20% of virgin biomass: Thomson, 2012) and Critically Endangered status globally (IUCN, 2020). In addition to low extant biomass, it is likely that habitat degradation since the 1970s (draining of adjacent swamps, clearing of mangroves and die-back of seagrass beds) has severely diminished the contribution to the population from previously highly productive pupping areas (e.g. Port Phillip and Western Port Bays) that have shown little recovery (DEWR, 2008). Energetically mediated residency in pupping areas, as our findings suggest, further emphasizes the importance of conserving and restoring remaining pupping areas since neonate movements to less degraded habitats after birth seem unlikely. Such efforts may also include a need to identify and protect undocumented school shark pupping areas, e.g. in waters off South Australia.
We anticipate that the bioenergetic constraints on shark pup dispersal presented here will be useful to conservation management, providing insight into the biology and ecophysiology that influence residency in pupping areas. Knowledge of the energetic constraints underlying post-natal residency and dispersal could assist in the planning of marine-protected areas (particularly temporal protections). Future developments such as further miniaturization of pop-up archival tags or expansion of acoustic receiver networks may provide explicit information about dispersal of young from pupping areas in terms of routes, behaviour (e.g. direct movement v. foraging), rates of dispersal and destination that may have important ramifications both for this Critically Endangered species and other elasmobranchs. More generally, the approach presented here may be adapted to address conservation management issues in other marine taxa reliant on dispersal from natal areas, e.g. post-natal residency in protected areas and dispersal capacity of young, with implications for exposure to stressors in natal areas and during dispersal from them.
## Funding
This work was supported by the University of Adelaide, the University of Tasmania and by the Frederick James Sandoz Scholarship for Animal Research to M.N.M. M.N.M. was also supported through the provision of an Australian Government Research Training Program Scholarship. D.W.S. was supported by a Visiting Fellowship from the University of Tasmania and a Marine Biological Association Senior Research Fellowship.
## Acknowledgements
Many thanks to the laboratory staff at the Taroona Campus, Fisheries and Aquaculture Centre, Institute for Marine and Antarctic Studies, University of Tasmania for their advice and assistance in the laboratory, in particular Brian Choa and Xinran Chen. Data from the Maria Island mooring and acoustic receivers were sourced from Australia’s Integrated Marine Observing System, which is enabled by the National Collaborative Research Infrastructure strategy. It is operated by a consortium of institutions as an unincorporated joint venture, with the University of Tasmania as lead agent. All procedures were carried out under research permits issued by the animal ethics committees of the University of Tasmania (A0016274) and the University of Adelaide (S-2016-134) in accordance with the Australian code for the use and care of animals for scientific purposes.
## References
Barnett
A
,
Abrantes
KG
,
Stevens
JD
,
Bruce
BD
,
Semmens
JM
(
2010
)
Fine-scale movements of the broadnose sevengill shark and its main prey
.
PLoS One
5
: e15464. doi:.
Barnett
A
,
Semmens
JM
(
2012
)
Sequential movement into coastal habitats and high spatial overlap of predator and prey suggest high predation pressure in protected areas
.
Oikos
121
:
882
890
.
Beck
MW
,
Heck
KL
Jr
,
Able
KW
,
Childers
DL
,
Eggleston
DB
,
Gillanders
BM
,
Halpern
B
,
Hays
CG
,
Hoshino
K
,
Minello
TJ
(
2001
)
The identification, conservation, and management of estuarine and marine nurseries for fish and invertebrates: a better understanding of the habitats that serve as nurseries for marine species and the factors that create site-specific variability in nursery quality will improve conservation and management of these areas
.
Bioscience
51
:
633
641
.
Bell
WH
,
Terhune
LB
(
1970
)
Water tunnel design for fisheries research
.
Fisheries Research Board of Canada Technical Report
,
Nanaimo
.
Bethea
DM
,
Hale
L
,
Carlson
JK
,
Cortés
E
,
Manire
CA
,
Gelsleichter
J
(
2007
)
Geographic and ontogenetic variation in the diet and daily ration of the bonnethead shark, Sphyrna tiburo, from the eastern Gulf of Mexico
.
Mar Biol
152
:
1009
1020
.
Bligh
EG
,
Dyer
WJ
(
1959
)
A rapid method of total lipid extraction and purification
.
Can J Biochem Physiol
37
:
911
917
.
Bone
Q
,
Roberts
B
(
1969
)
The density of elasmobranchs
.
J Mar Biol Assoc U K
49
:
913
937
.
Bouyoucos
IA
,
Montgomery
DW
,
Brownscombe
JW
,
Cooke
SJ
,
Suski
CD
,
Mandelman
JW
,
Brooks
EJ
(
2017
)
Swimming speeds and metabolic rates of semi-captive juvenile lemon sharks (Negaprion brevirostris, Poey) estimated with acceleration biologgers
.
J Exp Mar Biol Ecol
486
:
245
254
.
Carlson
JK
,
Parsons
G
(
1999
)
Seasonal differences in routine oxygen consumption rates of the bonnethead shark
.
J Fish Biol
55
:
876
879
.
Carlson
JK
,
Goldman
KJ
,
Lowe
CG
(
2004
) Metabolism, energetic demand, and endothermy. In
JC
Carrier
,
JA
Musick
,
MC
Heithaus
, eds,
Biology of Sharks and Their Relatives
, Ed1st.
CRC Press
,
Boca Raton.
Carrier
JC
,
Musick
JA
,
Heithaus
MR
(
2012
)
Biology of Sharks and Their Relatives
.
CRC Press
,
Boca Raton.
Clark
TD
,
Seymour
R
(
2006
)
Cardiorespiratory physiology and swimming energetics of a high-energy-demand teleost, the yellowtail kingfish (Seriola lalandi)
.
J Exp Biol
209
:
3940
3951
.
Clark
TD
,
Sandblom
E
,
Jutfelt
F
(
2013
)
Aerobic scope measurements of fishes in an era of climate change: respirometry
.
J Exp Biol
216
:
2771
2782
.
Cortés
E
,
Gruber
SH
(
1990
)
Diet, feeding habits and estimates of daily ration of young lemon sharks, Negaprion brevirostris (Poey)
.
Copeia
1990
:
204
218
.
Cuevas
JM
,
García
M
,
Di Giacomo
E
(
2014
)
Diving behaviour of the Critically Endangered tope shark Galeorhinus galeus in the Natural Reserve of Bahia San Blas, Northern Patagonia
.
Anim Biotelemetry
2
:
1
6
.
Dahlgren
CP
,
Kellison
GT
,
AJ
,
Gillanders
BM
,
Kendall
MS
,
Layman
CA
,
Ley
JA
,
Nagelkerken
I
,
Serafy
JE
(
2006
)
Marine nurseries and effective juvenile habitats: concepts and applications
.
Mar Ecol Prog Ser
312
:
291
295
.
Del Raye
G
,
Jorgensen
SJ
,
Krumhansl
K
,
Ezcurra
JM
,
Block
BA
(
2013
)
Travelling light: white sharks (Carcharadon carcharias) rely on body lipid stores to power ocean-basin scale migration
.
Proc R Soc B
280
:
20130836
.
DEWR (Department of Environment and Water Resources)
(
2008
)
Draft school shark rebuilding strategy
.
Technical Report
.
Dickson
KA
,
Donley
JM
,
Sepulveda
C
,
Bhoopat
L
(
2002
)
Effects of temperature on sustained swimming performance and swimming kinematics of the chub mackerel Scomber japonicus
.
J Exp Biol
205
:
969
980
.
DPIPWE (Department of Primary Industries, Parks, Water and Environment)
(
2020
) .
Dowd
W
,
Brill
RW
,
Bushnell
PG
,
Musick
JA
(
2006
)
Estimating consumption rates of juvenile sandbar sharks (Carcharhinus plumbeus) in Chesapeake Bay, Virginia, using a bioenergetics model
.
Fish Bull
104
:
332
342
.
Eggleston
DB
(
1995
)
Recruitment in Nassau grouper Epinephelus striatus: post-settlement abundance, microhabitat features, and ontogenetic habitat shifts
.
Mar Ecol Prog Ser
24
:
9
22
.
Garla
RC
,
Chapman
DD
,
Wetherbee
BM
,
Shivji
M
(
2006
)
Movement patterns of young Caribbean reef sharks, Carcharhinus perezi, at Fernando de Noronha Archipelago, Brazil: the potential of marine protected areas for conservation of a nursery ground
.
Mar Biol
149
:
189
199
.
Gillanders
BM
,
Able
K
,
Brown
J
,
Eggleston
D
,
Sheridan
P
(
2003
)
Evidence of connectivity between juvenile and adult habitats for mobile marine fauna: an important component of nurseries
.
Mar Ecol Prog Ser
247
:
281
295
.
Graham
JB
,
DeWar
H
,
Lai
N
,
Lowell
WR
,
Arce
SM
(
1990
)
Aspects of shark swimming performance determined using a large water tunnel
.
J Exp Biol
151
:
175
192
.
Guttridge
TL
,
Gruber
SH
,
Franks
BR
,
Kessel
ST
,
Gledhill
KS
,
Uphill
J
,
Krause
J
,
Sims
DW
(
2012
)
Deep danger: intra-specific predation risk influences habitat use and aggregation formation of juvenile lemon sharks Negaprion brevirostris
.
Mar Ecol Prog Ser
445
:
279
291
.
Heithaus
MR
(
2007
) Nursery areas as essential shark habitats: a theoretical perspective. In
CT
McCandless
,
HL
Pratt
Jr
,
NE
Kohler
, eds,
Shark Nursery Grounds of the Gulf of Mexico and East Coast Waters of the United States
.
American Fisheries Society Symposium
,
Bethesda.
Heupel
MR
,
Carlson
JK
,
Simpfendorfer
CA
(
2007
)
Shark nursery areas: concepts, definition
.
Mar Ecol Prog Ser
337
:
287
297
.
Hoffmayer
ER
,
Parsons
G
,
Horton
J
(
2006
)
Seasonal and interannual variation in the energetic condition of adult male Atlantic sharpnose shark Rhizoprionodon terraenovae in the northern Gulf of Mexico
.
J Fish Biol
68
:
645
653
.
Huveneers
C
,
Simpfendorfer
CA
,
Thompson
R
(
2013
) Determining the most suitable index of abundance for school shark (Galeorhinus galeus) stock assessment: review and future directions to ensure best recovery estimates. Final Report to the Fisheries Research and Development Coroporation FRDC TRF Shark Futures 2011/078. South Australian Research and Development Institute, Adelaide.
Ikeda
T
(
2016
)
Routine metabolic rates of pelagic marine fishes and cephalopods as a function of body mass
.
J Exp Mar Biol Ecol
480
:
74
86
.
Iosilevski
G
,
Papastamatiou
YP
(
2016
)
Relations between morphology, buoyancy and energetics of requiem sharks
.
R Soc Open Sci
3
: 160406 doi:.
IMOS (Integrated Marine Observing System)
(
2018
) Australian National Mooring Network Facility burst averaged temperature and current data. https://portal.aodn.org.au/search. Accessed on 14 April 2018.
IUCN (International Union for Conservation of Nature)
(
2020
) Walker TI, Rigby CL, Pacoureau N, Ellis J, Kulka DW, Chiaramonte GE, Herman K Galeorhinus galeus. The IUCN Red List of Threatened Species 2020: e.T39352A2907336. Downloaded on 29 August 2020.
Johansen
JL
,
Jones
GP
(
2011
)
Increasing ocean temperature reduces the metabolic performance and swimming ability of coral reef damselfishes
.
Glob Chang Biol
17
:
2971
2979
.
Killen
SS
,
Atkinson
D
,
Glazier
DS
(
2010
)
The intraspecific scaling of metabolic rate with body mass in fishes depends on lifestyle and temperature
.
Ecol Lett
13
:
184
193
.
Kinney
MJ
,
Simpfendorfer CA (2010)
(
2009
)
Reassessing the value of nursery areas to shark conservation and management
.
Conserv Lett
2
:
53
60
.
Lee
CG
,
Farrell
AP
,
Lotto
A
,
Hinch
SG
,
Healey
MC
(
2003
)
Excess post-exercise oxygen consumption in adult sockeye (Oncorhynchus nerka) and coho (O. kisutch) salmon following critical speed swimming
.
J Exp Biol
206
:
3253
3260
.
Lowe
CG
(
1996
)
Kinematics and critical swimming speed of juvenile scalloped hammerhead sharks
.
J Exp Biol
199
:
2605
2610
.
Lowe
CG
(
2002
)
Bioenergetics of free-ranging juvenile scalloped hammerhead sharks (Sphyrna lewini) in Kāne'ohe Bay, Ō'ahu, HI
.
J Exp Mar Biol Ecol
278
:
141
156
.
McAllister
JD
,
Barnett
A
,
Lyle
JM
,
Semmens
JM
(
2015
)
Examining the functional role of current area closures used for the conservation of an overexploited and highly mobile fishery species
.
ICES J Mar Sci
72
:
2234
2244
.
McAllister
JD
,
Barnett
A
,
Lyle
JM
,
Stehfest
KM
,
Semmens
JM
(
2018
)
Examining trends in abundance of an overexploited elasmobranch species in a nursery area closure
.
Mar Freshw Res
69
:
376
384
.
McCluskey
SM
,
Bejder
L
,
Loneragan
NR
(
2016
)
Dolphin prey availability and calorific value in an estuarine and coastal environment
.
Front Mar Sci
3
:
30
.
McLeod
E
,
Salm
R
,
Green
A
,
Almany
J
(
2009
)
Designing marine protected area networks to address the impacts of climate change
.
Front Ecol Environ
7
:
362
370
.
McMillan
MN
,
Huveneers
C
,
Semmens
JM
,
Gillanders
BM
(
2018
)
Natural tags reveal populations of Conservation Dependent school shark use different pupping areas
.
Mar Ecol Prog Ser
599
:
147
156
.
McMillan
MN
,
Huveneers
C
,
Semmens
JM
,
Gillanders
BM
(
2019
)
Partial female migration and cool-water migration pathways in an overfished shark
.
ICES J Mar Sci
76
:
1083
1093
.
Miklos
P
,
Katzman
SM
,
Cech
JJ
(
2003
)
Effect of temperature on oxygen consumption of the leopard shark, Triakis semifasciata
.
Environ Biol Fishes
66
:
15
18
.
Molfese
C
,
Beare
D
,
Hall-Spencer
JM
(
2014
)
Overfishing and the replacement of demersal finfish by shellfish: an example from the English Channel
.
PLoS One
9
: e101506. doi:.
Morrissey
JF
,
Gruber
SH
(
1993
)
Habitat selection by juvenile lemon sharks
.
Environ Biol Fishes
38
:
311
319
.
Moulton
P
,
Walker
TI
,
S
(
1992
)
Age and growth studies of gummy shark, Mustelus antarcticus (Gunther), and school shark, Galeorhinus galeus (Linnaeus), from southern Australian waters
.
Mar Freshw Res
43
:
1241
1267
.
Nagelkerken
I
,
Sheaves
M
,
Baker
R
,
Connolly
RM
(
2015
)
The seascape nursery: a novel spatial approach to identify and manage nurseries for coastal marine fauna
.
Fish Fish (Oxf)
16
:
362
371
.
Nichols
PD
,
Bakes
MJ
,
Elliott
NG
(
1998
)
Oils rich in docosahexaenoic acid in livers of sharks from temperate Australian waters
.
Mar Freshw Res
49
:
763
767
.
O'Dor
R
(
2002
)
Telemetered cephalopod energetics: swimming, soaring
.
Integr Comp Biol
42
:
1065
1070
.
Olsen
AM
(
1954
)
The biology, migration, and growth rate of the school shark, Galeorhinus australis (Macleay) (Carcharhanidae) in south-eastern Australian waters
.
Mar Freshw Res
5
:
353
410
.
Payne
NL
,
Gillanders
BM
,
Seymour
RS
,
Webber
DM
,
Snelling
EP
,
Semmens
JM
(
2011
)
Accelerometry estimates field metabolic rate in giant Australian cuttlefish Sepia apama during breeding
.
J Anim Ecol
80
:
422
430
.
Pethybridge
HR
,
Daley
R
,
Virtue
P
,
Nichols
P
(
2010
)
Lipid composition and partitioning of deepwater chondrichthyans: inferences of feeding ecology and distribution
.
Mar Biol
157
:
1367
1384
.
Pethybridge
HR
,
Parrish
CC
,
Bruce
BD
,
Young
JW
,
Nichols
PD
(
2014
)
Lipid, fatty acid and energy density profiles of white sharks: insights into the feeding ecology and ecophysiology of a complex top predator
.
PLoS One
9
: e97877. doi:.
Ripley
WE
(
1946a
)
The soupfin shark and the fishery
.
Fish Bull (Wash D C)
64
:
7
37
.
Ripley
WE
(
1946b
) Biology of the Soupfin Galeorhinus zyopterus and biochemical studies of the Liver.
California Fish and Game Fish Bulletin No. 64. The Resources Agency, Department of Fish and Game
,
Sacramento
.
Rogers
P
,
Knuckey
I
,
Hudson
R
,
Lowther
A
,
Guida
L
(
2017
)
Post-release survival, movement, and habitat use of school shark Galeorhinus galeus in the Great Australian Bight, southern Australia
.
Fish Res
187
:
188
198
.
Rossouw
G
(
1987
)
Function of the liver and hepatic lipids of the lesser sand shark, Rhinobatos annulatus (Müller & Henle)
.
Comp Biochem Physiol B
86
:
785
790
.
Sargent
J
,
Gatten
R
,
McIntosh
R
(
1972
)
The metabolism of neutral lipids in the spur dogfish
.
Lipids
7
:
240
245
.
Sargent
J
,
Gatten
R
,
McIntosh
R
(
1973
)
The distribution of neutral lipids in shark tissues
.
J Mar Biol Assoc U K
53
:
649
656
.
Schmidt-Nielsen
K
(
2012
)
Scaling: why is animal size so important?
Cambridge University Press
,
Cambridge
.
Schurdak
ME
,
Gruber
SH
(
1989
)
Gastric evacuation of the lemon shark Negaprion brevirostris (Poey) under controlled conditions
.
Exp Biol
48
:
77
82
.
Semmens
JM
(
1998
)
An examination of the role of the digestive gland of two loliginid squids, with respect to lipid: storage or excretion?
Proc Biol Sci
265
:
1685
1690
.
Simpfendorfer
CA
,
Milward
NE
(
1993
)
Utilisation of a tropical bay as a nursery area by sharks of the families Carcharhinidae and Sphyrnidae
.
Environ Biol Fishes
37
:
337
345
.
Sims
DW
,
Davies
S
,
Bone
Q
(
1993
)
On the diel rhythms in metabolism and activity of post-hatching lesser spotted dogfish
.
J Fish Biol
43
:
749
754
.
Sims
DW
,
Davies
SJ
(
1994
)
Does specific dynamic action (SDA) regulate return of appetite in the lesser spotted dogfish, Scyliorhinus canicula?
J Fish Biol
45
:
341
348
.
Sims
DW
(
2000
)
Can threshold foraging responses of basking sharks be used to estimate their metabolic rate?
Mar Ecol Prog Ser
200
:
289
296
.
Skomal
G
,
Bernal
D
(
2010
) Physiological responses to stress in sharks. In
JC
Carrier
,
JA
Musick
,
MR
Heithaus
, eds,
Sharks and Their Relatives II: Biodiversity, Adaptive Physiology, and Conservation
.
CRC Press
,
Boca Raton
.
Speed
CW
,
Field
IC
,
Meekan
MG
,
C
(
2010
)
Complexities of coastal shark movements and their implications for management
.
Mar Ecol Prog Ser
408
:
275
293
.
Steffensen
JF
(
2005
)
Respiratory systems and metabolic rates
.
Physiology of Polar Fishes
22
:
203
238
.
Stevens
JD
,
West
GJ
(
1997
) Investigation of school and gummy shark nursery areas in southeastern Australia.
Final Report to the Fisheries Research and Development Corporation (FRDC) Project 93/061
.
FRDC, Canberra
.
Svendsen
MBS
,
Bushnell
PG
,
Steffensen
JF
(
2016
)
Design and setup of intermittent-flow respirometry system for aquatic organisms
.
J Fish Biol
88
:
26
50
.
Thorburn
J
,
Neat
F
,
Burrett
I
,
Henry
LA
,
Bailey
D
,
Jones
C
,
Noble
L
(
2019
)
Ontogenetic and seasonal variation in movements and depth use, and evidence of partial migration in a benthopelagic elasmobranch
.
Front Ecol Evol
7
:
353
. doi:.
Thomson
R
(
2012
)
Projecting the school shark model into the future: rebuilding timeframes and auto-longlining in South Australia
.
Commonwealth Scientific and Industrial Research Organisation
,
Hobart.
Videler
JJ
(
1993
) The costs of swimming. In
JJ
Videler
, ed,
Fish Swimming
.
Springer
,
Dordrecht, The Netherlands.
Videler
JJ
,
Nolet
BA
(
1990
)
Costs of swimming measured at optimum speed: scale effects, differences between swimming styles, taxonomic groups and submerged and surface swimming
.
Comp Biochem Physiol
97
:
91
99
.
Walker
TI
(
1998
)
Can shark resources be harvested sustainably? A question revisited with a review of shark fisheries
.
Mar Freshw Res
49
:
553
572
.
Walker
TI
(
1999
) Galeorhinus galeus fisheries of the world. In
R
Shotton
, ed,
Case Studies of the Management of Elasmobranch Fisheries
.
FAO Fisheries Technical Paper 378/2
.
FAO
,
Rome.
Wetherbee
BM
,
Gruber
SH
(
1993
)
Absorption efficiency of the lemon shark Negaprion brevirostris at varying rates of energy intake
.
Copeia
3
:
416
425
.
White
JW
(
2015
)
Marine reserve design theory for species with ontogenetic migration
.
Biol Lett
11
:
20140511
. doi:.
Whitney
NM
,
Lear
KO
,
LC
,
Gleiss
AC
(
2016
)
The effects of temperature and swimming speed on the metabolic rate of the nurse shark (Ginglymostoma cirratum, Bonaterre)
.
J Exp Mar Biol Ecol
477
:
40
46
.
Zammit
VA
,
Newsholme
EA
(
1979
)
Activities of enzymes of fat and ketone-body metabolism and effects of starvation on blood concentrations of glucose and fat fuels in teleost and elasmobranch fish
.
Biochem J
184
:
312
322
.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
Editor: Steven Cooke
Steven Cooke
Editor
Search for other works by this author on:
|
2022-12-06 07:19:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4082086384296417, "perplexity": 9282.718318741869}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711074.68/warc/CC-MAIN-20221206060908-20221206090908-00118.warc.gz"}
|
https://cs.stackexchange.com/questions/92139/how-should-i-describe-the-relationships-between-type-expressions/92142
|
# How should I describe the relationships between type expressions?
Lets say I have two type expressions: Maybe a (X) and Maybe Integer (Y), where Maybe is a type constructor, Integer is a Type and a is a type variable.
What language should I use to describe the relationship between these expressions, and type expressions in general? I've been using the language of sets: X describes a subset of Y, X and Y intersect, and so on. Is this generally accepted, or is there a different, widely accepted language for describing the relationships between type expressions?
In the example provided, I would like to express th at expression X describes many possible types, and that includes all of the types described by Y.
• I'd write "$X$ is more general than $Y$". Type inference usually produces the "most general type". – chi May 21 '18 at 16:07
• @chi thanks for pointing that out - and in many places it makes a lot of sense for me to do that. It's got a bit more information in it than simply, saying 'X contains Y', which is a good thing. – Liam M May 22 '18 at 6:09
The set-theoretic intuitions can make sense in the semantics, especially in the context of realizability semantics (where types are interpreted as sets of terms). In this case, the polymorphic type $\forall \alpha. \mathrm{Maybe}(\alpha)$, written Maybe a in Haskell, intuitively corresponds to the intersection over every type $T$ of the elements of $\mathrm{Maybe}(T)$. Or in other words, $⟦\forall \alpha. \mathrm{Maybe}(\alpha)⟧ = \bigcap_T ⟦\mathrm{Maybe}(T)⟧$, where $T$ ranges over the semantic domain (and elements of the semantics are injected into the syntax).
On the other hand, set-theoretic intuition does not really make sense on the side of the type system, because standard set operations are not allowed. For instance, it is not possible to refer to the intersection of two types in the syntax (at least when you are not working with intersection type systems, and I don't think this is the case in Haskell). However, one operation on types that is primitive in all type systems with polymorphism is instantiation. This corresponds to the elimination rule for the universal quantifier (or polymorphism), and it states that if you have a term $t$ of type $\forall \alpha, \mathrm{Maybe}(\alpha)$ then the term $t$ can be seen as an element of type $\mathrm{Maybe}(T)$ for any type $T$, for example $\mathrm{Integer}$.
|
2019-11-19 16:03:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7932947278022766, "perplexity": 270.6100692364521}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670156.86/warc/CC-MAIN-20191119144618-20191119172618-00469.warc.gz"}
|
https://math.stackexchange.com/questions/1287736/absolute-value-equation
|
# Absolute Value Equation
Please help me with this! $$x^3+|x| = 0$$ Now one solution is clearly $0.$ We have to find the other solution (i.e, $-1$)
$$Solution:$$ CASE $1$: If $x<0,~|x| = -x$, we can write $x^3+|x| = 0$ as $-x^3-x=0$ $$x^3+x=0$$ $$x(x^2+1)=0$$ $$\Longrightarrow x=0, or, x=\sqrt{-1}$$ Please tell me where I've gone wrong. Many thanks!
• I don't think there should be a minus in front of the $x^3$ – James May 18 '15 at 11:02
• $\;|x|=-\;$ , for $\;x<0\;$ ...but $\;x^3\;$ remains the same. – Timbuc May 18 '15 at 11:04
• The conclusion of case 1 doesn't make sense: The hypothesis of that case is $x < 0$, but one of the solutions isn't even real. (In fact, if one is counting complex solutions, one should have both of $\pm i$.) – Travis Willse May 18 '15 at 11:05
• @Timbuc But why does $x^3$ remain the same? – Ishan May 18 '15 at 11:08
• Because it is given $\;x^3\;$ !! The only thing you do is to apply the definition of $\;|x|\;$ , all the rest remains exactly the same as it was given... – Timbuc May 18 '15 at 11:09
Hints:
Suppose
\begin{align}&x\ge 0\implies 0= x^3+x=x(x^2+1)\;\ldots\\{}\\&x<0\implies0= x^3-x=x(x^2-1)=\ldots\end{align}
All in all, there are two different real solutions.
• But for the second case, why didn't you put a minus sign before $x^3$ (since x<0$\Rightarrow x^3<0$? – Ishan May 18 '15 at 11:05
• @BetterWorld Who told that you must put a minus sign in front of an element to make it negative?? – Timbuc May 18 '15 at 11:06
• Why do we get 1 as a solution as well? – Gummy bears May 18 '15 at 11:11
• We get -1 as a solution, not 1. – Ishan May 18 '15 at 11:11
• @Timbuc I had always learnt this... Sorry – Ishan May 18 '15 at 11:13
the function $$f(x) = \begin{cases} x^3 - x & if \, x \le 0,\\x^3+x &if \, 0 \le x. \end{cases}$$
has one negative zero at $x = -1$ and a zero at $x = 0.$ you can see from the function definition that $f(x) > 0$ for $x > 0.$ the graph of $y = f(x)$ has a cusp at $(0,0)$ which is also a local minimum.
|
2020-04-06 16:08:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.721290111541748, "perplexity": 416.7250158850069}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371637684.76/warc/CC-MAIN-20200406133533-20200406164033-00368.warc.gz"}
|
https://www.akt.tu-berlin.de/menue/akt_talks/summer_term_2021/parameter/nomobil/
|
Sie sind hier
# Research Colloquium (Summer Term 2021)
The research colloquium "Algorithmik und Komplexitätstheorie" provides talks of external guests, members of the research staff, Ph.D. students, and advanced students (theses) about recent results and research topics in theoretical computer science and related areas. The core areas are algorithmics and computational complexity theory. If you would like to receive information about upcoming talks, please join our mailing list. If you would like to give a talk yourself, feel free to send an email to m.renken@tu-berlin.de.
Due to the Coronavirus situation, this semester’s talks are given online, usually at meet.akt.tu-berlin.de/b/mal-jkm-myf.
Click on rows to expand. The schedule will be updated during the term.
Date Speaker Title
20.04.2021
16:00
Matthias Bentert
(TU Berlin)
Using a geometric lens to find k disjoint shortest paths.
Given an undirected n-vertex graph and k pairs of terminal vertices $(s_1,t_1),…,(s_k,t_k)$, the k-Disjoint Shortest Paths (k-DSP)-problem asks whether there are k pairwise vertex-disjoint paths $P_1,…,P_k$ such that $P_i$ is a shortest $s_i$-$t_i$-path for each $i∈[k]$. Recently, Lochet [arXiv 2019] provided an algorithm that solves k-DSP in $n^{O(k^{4^k})}$ time, answering a 20-year old question about the computational complexity of k-DSP for constant k.
On the one hand, we present an improved $O(k⋅n^{{12k⋅k!+k+1}})$-time algorithm based on a novel geometric view on this problem. For the special case $k=2$, we show that the running time can be further reduced to $O(n^2⋅m)$ by small modifications of the algorithm and a further refined analysis. On the other hand, we show that k-DSP is W[1]-hard with respect to k, showing that the dependency of the degree of the polynomial running time on the parameter k is presumably unavoidable.
27.04.2021
16:00
Christoph Hertrich
(TU Berlin)
Complexity of ReLU Neural Network Training Parameterized by Data Dimensionality
Training a neural network, that is, minimizing a loss function on a finite dataset, is the crucial step of many machine learning algorithms. A large variety of hardness results has been established for this problem and practitioners usually use heuristic methods. We analyze the computational complexity of this problem from a parameterized point of view with respect to the dimension of the training data. Focusing on $\ell^p$ losses, we show that, for $p \in [0, \infty[$, already training a one-node neural network is W[1]-hard and known brute-force strategies are essentially optimal (assuming the Exponential Time Hypothesis). Matching these lower bounds, we extend a known XP-algorithm to all these loss functions and observe that, for $p=\infty$, there exists a polynomial time algorithm.
This is joint work with Vincent Froese and Rolf Niedermeier.
04.05.2021
16:00
Leon Kellerhals
(TU Berlin)
Placing Green Bridges Optimally, with a Multivariate Analysis.
We study the problem of placing wildlife crossings, such as green bridges, over human-made obstacles to challenge habitat fragmentation. The main task herein is, given a graph describing habitats or routes of wildlife animals and possibilities of building green bridges, to find a low-cost placement of green bridges that connects the habitats. We develop three problem models for this task, which model different ways of how animals roam their habitats. We settle the classical complexity and parameterized complexity (regarding the number of green bridges and the number of habitats) of the three problems.
This is joint work with Till Fluschnik.
11.05.2021
16:00
Niclas Boehmer
(TU Berlin)
Winner Robustness via Swap-Bribery: Parameterized Counting Complexity and Experiments
In Swap-Bribery, we are given an election, a designated candidate, and a budget k and the task is to decide whether it is possible to modify the election by swapping at most k adjacent candidates in some of the votes such that the designated candidate becomes a winner of the election. We study the (parameterized) complexity of counting variants of Swap-Bribery, focusing on the parameterizations by the number of swaps and the number of voters. Facing several computational hardness results, using sampling we show experimentally that counting variants of Swap-Bribery offer a new approach to the robustness analysis of elections.
This is joint work with Robert Bredereck, Piotr Faliszewski, and Rolf Niedermeier.
18.05.2021
16:00
Tomohiro Koana
(TU Berlin)
tba
tba
25.05.2021
16:00
André Nichterlein
(TU Berlin)
On 2-Clubs in Graph-Based Data Clustering: Theory and Algorithm Engineering
Editing a graph into a disjoint union of clusters is a standard optimization task in graph-based data clustering. Here, complementing classic work where the clusters shall be cliques, we focus on clusters that shall be 2-clubs, that is, subgraphs of diameter two. This naturally leads to the two NP-hard problems 2-Club Cluster Editing (the allowed editing operations are edge insertion and edge deletion) and 2-Club Cluster Vertex Deletion (the allowed editing operations are vertex deletions).
Answering an open question from the literature, we show that 2-Club Cluster Editing is W[2]-hard with respect to the number of edge modifications, thus contrasting the fixed-parameter tractability result for the classic Cluster Editing problem (considering cliques instead of 2-clubs). Then focusing on 2-Club Cluster Vertex Deletion, which is easily seen to be fixed-parameter tractable, we show that under standard complexity-theoretic assumptions it does not have a polynomial-size problem kernel when parameterized by the number of vertex deletions. Nevertheless, we develop several effective data reduction and pruning rules, resulting in a competitive solver, clearly outperforming a standard CPLEX solver in most instances of an established biological test data set.
Joint work with Aleksander Figiel, Anne-Sophie Himmel, and Rolf Niedermeier.
01.06.2021
16:00
Esther Ulitzsch
(IPN Kiel)
Understanding Response Processes in Interactive Assessments via Graph-based Data Clustering: Method Development, Applications, and Open Questions
Educational large-scale assessments such as the Programme for International Student Assessment (PISA) or the International Assessment of Adult Competencies (PIAAC) aim at measuring what examinees know and can do. In recent years, large-scale assessment moved from paper-and-pencil-based multiple-choice items to computer-administered complex interactive tasks. Assessments using interactive tasks allow logging time-stamped action sequences. These sequences pose a rich source of information that supports moving from investigating from whether to how examinees solved a given task.
We provide an approach that leverages time-stamped action sequence data for identifying common response processes, i.e., groups of examinees that approached the tasks in a comparable manner. In doing so, we integrate tools from clickstream analyses and graph-modeled data clustering with psychometrics. In our approach, we (a) provide similarity measures that are based on both actions and the associated action-level timing data and (b) subsequently employ cluster edge deletion for identifying homogeneous, interpretable, well-separated groups of time-stamped action sequences, each describing a common response process. The approach and its utility are illustrated on a complex problem-solving task from PIAAC 2012. Open questions concerning the validity and scalability of the procedure are discussed.
Joint work with Qiwei He, Vincent Ulitzsch, Hendrik Molter, André Nichterlein, Rolf Niedermeier \& Steffi Pohl.
08.06.2021
16:00
tba
()
tba
tba
15.06.2021
16:00
tba
()
tba
tba
22.06.2021
16:00
Benjamin Bumpus
(University of Glasgow)
Spined Categories: generalising tree-width beyond graphs
Problems that are NP-hard in general are often tractable on inputs that have a recursive structure. For instance consider classes defined in terms of graph decompositions’ such as of bounded tree- or clique-width graphs. Given the algorithmic success of graph decompositions, it is natural to seek analogues of these notions in other settings. What should a tree-width-k’ digraph or lattice or temporal graph even look like?
Since most decomposition notions are defined in terms of the internal structure of the decomposed object, generalizing a given notion of decomposition to a larger class of objects tends to be an arduous task. In this talk I will show how this difficulty can be reduced significantly by finding a characteristic property formulated purely in terms of the category that the decomposed objects inhabit, which defines the decomposition independently of the internal structure.
I will introduce an abstract characterisation of tree-width as a vast generalisation of Halin’s definition of tree-width as the maximal graph parameter sharing certain properties with the Hadwiger number and chromatic number. Our uniform construction of tree-width provides a roadmap to the discovery of new tree-width-like parameters simply by collecting the relevant objects into our new notion of a spined category.
This is joint work with Zoltan A. Kocsis (University of New South Wales).
|
2021-05-14 11:31:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4702913463115692, "perplexity": 1775.287671084849}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243990449.41/warc/CC-MAIN-20210514091252-20210514121252-00281.warc.gz"}
|
https://hackage-origin.haskell.org/package/eventlog2html-0.6.0/candidate/changelog
|
## Changelog for eventlog2html-0.6.0
0.6, released 2019-10-22 * Revamp how cost centre profiles are displayed. * Fix incorrectly calculated start time for certain profiles. * Line chart now displays points for each sample so it's easier to see where to hover. * Add --y-axis option to allow the user to specify the extent of the y-axis. This is useful when comparing two different profiles together. 0.5, released 2019-10-11 * Add some more metainformation to the header (sample mode and sample interval) * Fix empty sample at start of eventlog * Support for biographical and retaining profiling modes if using at least GHC-8.10. * Fix cost centre profiles to match the output of hp2pretty 0.4, released 2019-09-18 * BREAKING CHANGE: eventlog2html no longer includes traces which have been generated by "traceEvent" or "traceEventIO" from "Debug.Trace" by default. "traceEvent" and "traceEventIO" are supposed to be used for high-frequency events. If you want to trace low-frequency events, especially in order to relate phases of your program to memory usage patterns, use "traceMarker" and "traceMarkerIO" instead. If you want to return to the old behaviour, add the "--include-trace-events" option on the commandline. * Removed "trace PERCENT" option, which had no effect in the code. * Added warning about eventlogs with a lot of traces. * Added option to filter the traces which should be included in the generated output. 0.3, released 2019-09-08 * Added warnings if eventlog2html is used on eventlogs generated by GHC Version without profiling support. * Moved to version 0.4 of HVega. * HeapProfCostCentre and HeapProfSampleCostCentre Events are included in the generated output. 0.2, released 2019-07-05 * Added the commandline option '-o OUTFILE' which writes the output to the given filename. * Show the time the eventlog was created in the generated HTML output. 0.1, released 2019-06-23 * Initial release, a complete rewrite on hp2pretty. Implemented by Matthew Pickering and David Binder.
|
2022-01-22 18:55:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44044095277786255, "perplexity": 6122.708673686116}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303868.98/warc/CC-MAIN-20220122164421-20220122194421-00578.warc.gz"}
|
http://clay6.com/qa/45635/a-solution-containing-34-2-g-of-cane-sugar-c-h-o-dissolved-in-500-cm-3-of-w
|
Browse Questions
# A solution containing 34.2 g of cane-sugar ($C_{12}H_{22}O_{11}$) dissolved in 500 $cm^3$ of water froze at $−0.374^{\large\circ}$C. Calculate the freezing point depression constant of water.
$1.87Km^{-1}$
|
2016-12-09 11:44:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8890431523323059, "perplexity": 7755.140367222923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542695.22/warc/CC-MAIN-20161202170902-00085-ip-10-31-129-80.ec2.internal.warc.gz"}
|
https://seoreadywp.com/chloe-wilde-jfefwi/sod-weight-calculator-28a7f1
|
Sod is typically sold by the square foot and therefore, you must calculate how many square feet of lawn you have to cover. This page computes weight of the substance per given volume, and answers the question: How much the substance weighs per volume. If you know exact dimensions, our basic sod calculator is available for your use. How much sod do I need? Entering these measurements into the online sod calculator gives: $$Radius = {Diameter \over 2} = {40\,ft \over 2} = 20\,ft$$ $$Sod\,Area = \pi × Radius^2 = \pi × 20\,ft^2 = 1256.6\,ft^2$$ Let’s imagine that the sod costs $15 per roll and a roll measures 10 square feet. Calculate the area of each square or rectangular section (length x width), then add the areas together to get the total amount of sod that is required. Rectangular Areas. Use this calculator to determine the weight of an object from its mass and the acceleration due to gravity at a particular geographical location. I did 25 rolls of sod in my Nissan Murano just fine. 23 Recommendations. The next page will let you add a few more details to the order before submission. Lawn renovations or when covering a garden area that has existing good soil. Cite. 6”. New Builds: We recommend a Minimum 4 inches ( preferably 5 inches). Sod Calculator. Click and drag to move points. If your location is not available,switch to our basic sod calculator. For complete information about the cookies we use, data we collect and how we process them, please check our. Fortunately, measuring your lawn for sod is a very easy process that only requires a few basic calculations. To get started: Find your work area by typing your address into the search box. Used a large blue tarp to cover up the sides. Visit us to pick up sod by the piece, ½ pallet or pallet. Locate Address . Measure each area and calculate square footage. Use this calculator to find out how much sod you need by tracing your area utilizing Google Maps. Individual pieces of sod weigh 35 to 45 pounds (15 to 20 kg). Ft: Total Sq. New Builds: We recommend a Minimum 4 inches ( preferably 5 inches). Metal Weight Calculator: Enter value, select units and click on calculate. Result will be displayed. Lawn renovations or when covering a garden area that has existing good soil: Please enter the depth required to give your new sod an adequate rooting base of at least 3.5 – 4 inches of good soil. ** Be as accurate as possible with your measurements. The fields will calculate automatically based upon your input. If you have more than 10 assignments, use the "Add Row" button to add additional input fields. Please add 5% to 10% to the suggested Sod minimum to allow for cutting & waste. This will vary by the type of grass you’re purchasing, but you can expect it will cost between$0.15 – $0.75 per square foot. A pallet can range from 120 on the low end to 400 on the high end. yd: No. The calculation formula used for this tool is: F g = m x g. Symbols. How do you calculate the correct amount of sod? Enter your measurements, hit calculate, and it’s all worked out for you: cubic feet, yards, even the number of bags. Instead, this program is designed to calculate the total cost. Questions? Medium green, medium-width grass blades. This Michael’s guidance on the weight of a sod pallet and what kind of truck or trailer you need to transport sod. One side has been screams, which made him feel uncomfortable. Soda Weight Loss Calculator, in the context of medicine, health, or physical fitness, refers to a reduction of the total body mass, due to a mean loss of fluid, body fat or adipose tissue or lean mass, namely bone mineral deposits, muscle, tendon, and other connective tissue. So SOD units/mg = 138/500= 0.276 units/mg Fresh weight. Normally around 2500 lbs, but depending on the moisture contenta full pallet of sod that covers 600 sq feet can weigh between 2000to 3000 lbs. How to calculate the area of a square. Enter measurements in feet. To easily calculate how much turf you need to cover an area, use the calculator below. Gonna do my first sod job and need to see if I can handle picking it up myself or if I should have it delivered. Growers & Suppliers of Kentucky BlueGrass Peat Sod. We have provided you with this calculator to take the guesswork out of your sod job. Total is the square footage required. From the Nursery Sod & Grass Sales Serving Orlando, Winter Park, Windermere, Clermont, Winter Garden, & Central Florida. Click on the shape that best describes your area. Sod is essentially a sheet of pre-grown grass that, when planted in your yard, immediately covers the dirt with pristine lawn. Simply enter in the square footage of the area to be sodded, and the calculator will do the rest. Sod Calculator. Measure your prepared areas and fill in the dimensions in the appropriate shape sections. One pallet weighs about 2,300 pounds and can fit in most average or large pick up trucks. ft.) with a minimum order being ten rolls. How Much Does a Sod Pallet Weigh Summary of the How Much Does a Sod Pallet Weigh Video. Sod Area and Volume Calculator. 45 SQ/FT PER TON. The amount of new sod needed for a project can be determined by dividing the space into squares and rectangles. 75 SQ/FT PER TON. Weight of pallet? Measure the length and the width of your area in feet. Please note that this tool is for estimating purposes only. This equals the approximate number of rolls needed. Sod and soil calculator to help you determine the amount you may require for your residential or commercial grass project in British Columbia or Alberta. Sod Calculator. About 15-20 min of vacuuming after for the loos dirt and all was good How To Calculate How Much Sod You Need: Make a sketch of the area to be sodded. If your location is not available,switch to our basic sod calculator. Call 303.289.4761. 38 SQ/FT PER TON. Use the manual Yard Area Calculator to estimate the size and areas in your yard to plan how much sod you need for your lawn. Formula. Space started getting tight in the back but definately wasn't a problem weight wise Edit:. With a tape, measure the area of your planned lawn. Eucalyptus, cypress and cedar may not be available in all regions. Step 1: Enter your address and press Locate. If you know exact dimensions, our. Best price I found was .22sq. You can use the calculator above to calculate your weighted grade average. All Answers (25) 11th Apr, 2016. of Pallets: Keep in-mind: A pallet of turf grass covers approximately 450 square feet of lawn. To calculate how many pieces of sod you need, measure the total area (length x width) and divide by 2.75. Call us at 281-431-7441 for more information. Established 1965. 5”. Step 2: Click points along perimeter to measure area. Use this tool to estimate the amount of sod you may need for your yard. A Map Based Area Calculator For Measuring The Square Footage of Lawns For Sod Installation or Fertilizer Rates. Each pallet will cover approximately 450 square feet, which is equal to 50 square yards. A pallet of sod costs$130 to $360, which covers about 450 square feet.Sod prices range from$0.30 to $0.80 per square foot depending on the variety of grass, its quality, the amount ordered, and delivery fees. Example: 25’ x 25’ = 625 sq. Press Order Now! 55 SQ/FT PER TON. A pallet of sod will be enough to cover 400 sf to 500 sf. Sod Prices. 3”. Contact us for the most accurate measure. Université Paris-Sud 11. How Much Does Sod Cost? ft. picked up from 10 miles away or .55sq. Nationwide the cost to sod a lawn including materials and labor ranges between 1600 to 7600 the median price is 3800. How many pallets would that be? Overall, on average a pallet of sod weighs between 1500-3000lbs Bermuda Sod Pallet – 1750lbs St. Augustine Grass Pallet – 3000lbs Sometimes topsoil and compost are sold by weight, which varies depending on what the material is. Our Sod Calculator has been designed from the ground up to give you the best results possible. ft. Back to top. to checkout. This online tool will also display a conversion scale that automatically refreshes to relate to the current values and units selected. For a basic project in zip code 47474 with 125 square feet, the cost to Install Sod starts at$1.59 - \$3.02 per square foot*. The calculator above instantly tells you how much sod you need in rolls and pallets plus the approximate cost. Multiply the two numbers together. SOD CALCULATOR. Sod measurement can be tricky depending on the shape of your yard. You can selectively provide your consent below to allow such third party embeds. Weight of Sod Roll 35 - 45 lbs depending on moisture Rolls per Pallet 60 rolls Weight of Pallet 2000 - 3000 lbs depending on moisture Pallet Width 40" inches Pallet Length 48" inches. If your location is not available,switch to our, With your mouse cursor over the map, click the, If there is area inside your perimeter that needs to be subtracted, use the, If you are only ordering for one lawn, click, Please note that this tool is for estimating purposes only. In the US, a 40-pound bag measures between 0.5 and 0.75 cubic feet. A question that we are asked a lot is, “how much does a sod pallet weigh”. 4”. For each assignment, enter the grade you received and the weight of the assignment. We recommend that you also measure the area with a tape measure. Contact us for pricing. Blue Grass Sod Farm Stead, Manitoba and Sod Depot Winnipeg Office (204) 269-3052 Sod Delivery Manitoba, Saskatchewan, Ontario, © 2020 Blue Grass Sod Producers Ltd. All Rights Reserved, We use cookies to enhance your experience while using our website. Each individual piece is about 16" x 24". We have fresh sod available! Use this calculator to find out how much sod you need by tracing your area utilizing Google Maps. SOD CALCULATOR With a tape, measure the area of your planned lawn. This goes beyond simply calculating the cost of the materials—that’s the easy part. Centreville Pickup and All Deliveries. Right click on a point to delete. Our sod is harvested in pieces 16″ wide and 45″ long (5 square feet). To use this calculator: The "Area name" is there for your convenience, you can label each area you have measured. Total all the measurements using our easy sod calculator below, adding 5 … If you need help or advice on amounts, please call our office. Sod Calculator. Area 1 Length x Area 1 Width. 110 SQ/FT PER TON. I have a 6.5 X 10ft single axel trailer. Once you have entered your data, press the "calculate" button and you will see the calculated average grade in the results area. If you are using our Services via a browser you can restrict, block or remove cookies through your web browser settings. Calculate the area of each square or rectangular section (length x width), then add the areas together to get the total amount of sod that is required. There are 171 pieces on each pallet. A single pallet can weigh anywhere from 1500 to 3000 pounds, the weight can increase by up to 50% depending on soil content, moisture, and other factors. Blue Grass Soil Calculator Use our calculator to help determine the amount of soil your project requires. Need a free installation estimate?? Use this calculator to find out how much sod you need by tracing your area utilizing Google Maps. Mrs. Qian Bo said, the doctor told me the truth Shi Muke Soda Weight Loss Calculator twitch portress skirt. Find your work area by typing your address into the search box. Foods, Nutrients and Calories. Like your price? Most landscapers level the ground with a yard of topsoil or bank sand for each pallet of sod, if needed. We also use content and scripts from third parties that may use tracking technologies. Sod Calculator. We advise adding a few extra yards to your order to allow for waste.If your project has more curves or angles its advisable to add 10% . ft delivered from another place. Manual Yard Area Calculator for Sodding | Super-Sod 1-888-360-1125 Sod is perishable and we do not take returns. By far, the easiest way to calculate quantities is to use the soil calculator above! Divide the area into squares, rectangles, triangles, and circles. Square Area: Length (ft) Width (ft) Circle Area: Radius (ft) Triangle Area: Base (ft) Height (ft) Total Sq. Your Lawn Is Our Business. Dominique Liger . About this page: Weight of Turf; For instance, calculate how many ounces, pounds, milligrams, grams, kilograms or tonnes of a selected substance in a liter, gallon, fluid ounce, cubic centimeter or in a cubic inch. Should I buy dirt? Sod is a perishable product; therefore size should be properly prepared so that installation can take place immediately after delivery. Lawn renovations or when covering a garden area that has existing good soil: Please enter the depth required to give your new sod an adequate rooting base of at least 3.5 – 4 inches of good soil. The weight of each piece of sod is approximately between 5 and 6 pounds per square foot. Then plug the square feet into the sod calculator and you'll see how much grass you need. Therefore, we can calculate the total number of pallets and rolls: Try out our calculators below to get approximate sod rolls needed. And metric too. We harvest in five roll increments (50 sq. weight calculator. Delivery also available of all sod varieties. Multiply the length by the width of the area in feet and divide this number by ten. Most landscapers level the ground up to give you the best results possible = 138/500= 0.276 units/mg weight... Multiply the length by the piece, ½ pallet or pallet calculator and you 'll see much... Cost of the assignment turf grass covers approximately 450 square feet of lawn dimensions, our basic calculator! Dirt with pristine lawn that Installation can take place immediately after delivery started: find your area... The rest to add additional input fields equal to 50 square yards sod weight calculator most average or large up! Using our Services via a browser you can restrict, block or remove cookies through your browser. Work area by typing your address into the sod calculator with a tape, measure area... On amounts, please check our bag measures between 0.5 and 0.75 feet. Few more details to the suggested sod Minimum to allow for cutting & waste measures between 0.5 and 0.75 feet! With a tape measure your location is not available, switch to our sod! Areas and fill in the square Footage of the assignment calculator twitch portress skirt a 4!: click points along perimeter to measure area properly prepared so that Installation can place! Topsoil and compost are sold by the square feet of lawn you measured., Winter Garden, & Central Florida Winter Park, Windermere, Clermont, Garden! 400 on the shape of your sod job goes beyond simply calculating cost. Need, measure the total area ( length x width ) and divide by 2.75 a... Topsoil and compost are sold by the width of your area utilizing Google Maps collect! Nursery sod & grass Sales Serving Orlando, Winter Garden, & Central Florida the... Miles away or.55sq designed to calculate how much the substance per given volume, answers. Super-Sod 1-888-360-1125 sod calculator has been screams, which varies depending on the weight of the substance weighs per.! So that Installation can take place immediately after delivery browser you can label each area have... 24 '' grass covers approximately 450 square feet, which made him feel uncomfortable best describes your utilizing. Harvest in five roll increments ( 50 sq approximate sod rolls needed calculator: the area name '' there! To 45 pounds ( 15 to 20 kg ) how much Does a sod pallet weigh.! Area with a tape, measure the area into squares, rectangles, triangles, the! To cover 400 sf to 500 sf min of vacuuming after for the loos and! From third parties that may use tracking technologies that has existing good.! Him feel uncomfortable existing good soil values and units selected take returns pallet can range 120! Been designed from the ground up to give you the best results possible to give you the best results.! Your work area by typing your address into the search box button to add additional input.! * * be as accurate as possible with your measurements how do you calculate the amount... Weight Loss calculator twitch portress skirt add 5 % to 10 % to 10 % to the order before.... Sometimes topsoil and compost are sold by the square foot been designed from the ground up to give you best! The truth Shi Muke Soda weight Loss calculator twitch portress skirt our via... Feet ) not be available in all regions tarp to cover 400 sf to 500 sf material is for! Serving Orlando, Winter Park, Windermere, Clermont, Winter Park, Windermere, Clermont, Winter,... Cubic feet for cutting & waste sod needed for a project can be determined by dividing the sod weight calculator squares... You know exact dimensions, our basic sod calculator turf grass covers approximately 450 square feet of lawn into! About 15-20 min of vacuuming after for the loos dirt and all was good sod Prices each piece! You are using our Services via a browser you can selectively provide your consent below to allow cutting. The ground up to give you the best results possible process them, please our... For complete information about the cookies we use, data we collect and how we process them, please our. Address into the search box 1: Enter your address into the search.... * * be as accurate as possible with your measurements single axel trailer can selectively provide your consent to... Minimum order being ten rolls instead, this program is designed to calculate many. Scripts from third parties that may use tracking technologies pieces of sod you need rolls! About the cookies we use, data we collect and how we process,... Sand for each pallet of sod you may need for your use sod! Cedar may not be available in all regions for Sodding | Super-Sod 1-888-360-1125 sod calculator available. To take the guesswork out of your planned lawn many pieces of sod is harvested in pieces 16″ wide 45″. Pre-Grown grass that, when planted in your yard, immediately covers dirt! Current values and units selected, Windermere, Clermont, Winter Garden, & Central Florida to relate to current... Axel trailer get approximate sod rolls needed Loss calculator twitch portress skirt measure your prepared and. Loos dirt and all was good sod Prices picked up from 10 miles away or.55sq into,. Geographical location area, use the calculator below collect and how we process,! Into the sod calculator is available for your convenience, you can restrict, block remove... Instead, this program is designed to calculate your weighted grade average tape, measure the total.. Have to cover or large pick up sod by the square feet of you! Sheet of pre-grown grass that, when planted in your yard the calculation formula used this... To gravity at a particular geographical location by weight, which varies depending on what the material is a!: the area name '' is there for your convenience, you selectively. 1-888-360-1125 sod calculator is available for your use use tracking technologies describes your area utilizing Google.. From its mass and the weight of the substance weighs per volume Nissan Murano just fine metal calculator! Cost to sod a lawn including materials and labor ranges between 1600 to 7600 the price! Google Maps feet ) the cookies we use, data we collect and we. 16 '' x 24 '' g = m x g. Symbols fit in most or. Estimate the amount of sod in my Nissan sod weight calculator just fine, the... ( 5 square feet, which made him feel uncomfortable Installation can take place immediately after.. The calculation formula used for this tool is for estimating sod weight calculator only Minimum being! Much grass you need said, the doctor told me the truth Shi Muke Soda weight calculator. Please check our area of your yard, immediately covers the dirt with lawn... Using our Services via a sod weight calculator you can label each area you have than... Amount of sod you may need for your use pallet of turf grass approximately... The correct amount of new sod needed for a project can be determined by dividing the space into squares rectangles! The easiest way to calculate your weighted grade average beyond simply calculating the cost to sod a including... Address into the sod calculator has been screams, which is equal 50! Be determined by dividing the space into squares and rectangles i have a 6.5 x single. Address and press Locate your planned lawn Winter Garden, & Central Florida perishable. Way to calculate your weighted grade average the sides after delivery to relate to the current and... Visit us to pick up sod by the square Footage of Lawns for sod is perishable... More than 10 assignments, use the calculator above instantly tells you much... Simply Enter in the dimensions in the square Footage of the materials—that ’ s guidance on the shape best. Units/Mg = 138/500= 0.276 units/mg Fresh weight a sod pallet and what kind of truck or trailer you by... You need to transport sod we recommend that you also measure the length and the calculator above tells... Do the rest the length by the width of your planned lawn or remove cookies your. Or advice on amounts, please check our * * be as accurate as possible with your.... Space into squares, rectangles, triangles, and circles by dividing space... Approximately 450 square feet of lawn you have measured lawn including materials and ranges... Your input Builds: we recommend that you also measure the area with a tape, measure length... 5 inches ) our calculators below to get started: find your work by. Row '' button to add additional input fields up from 10 miles away or.55sq loos and., 2016 has existing good soil or remove cookies through your web browser settings volume, and answers question. Sod calculator with a Minimum 4 inches ( preferably 5 inches ) to! Sf to 500 sf 5 square feet into the search box find out much... Length by the square Footage of the how much Does a sod pallet weigh ” weight which. Him feel uncomfortable has existing good soil by ten needed for a project be! To sod weight calculator how much sod you may need for your use of after. Sod pallet and what kind of truck or trailer you need to 400. Easy part, this program is designed to calculate how much sod you need to transport sod for &. Easily calculate how many pieces of sod is essentially a sheet of pre-grown grass that, when planted in yard...
Victorian Garden Designers, Kentucky Wesleyan University, Papu Gómez Fifa 21, There's No Place I'd Rather Be Than With You, When Will One Starry Christmas Be On Tv, David Silva Fifa 21 Price, Worthy Of Praise Synonym, Austin Street Brewery, Vote For Best Christmas Lights,
|
2021-06-16 13:58:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22076670825481415, "perplexity": 4582.49623997495}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487623942.48/warc/CC-MAIN-20210616124819-20210616154819-00051.warc.gz"}
|
https://www.doubtnut.com/question-answer-physics/magnetic-field-due-to-a-current-carrying-straight-conducting-wire-9773799
|
Home
>
English
>
Class 10
>
Physics
>
Chapter
>
Magnetic Effect Of Electric Current
>
Magnetic Field Due To A Curren...
# Magnetic Field Due To A Current Carrying Straight Conducting Wire
Updated On: 27-06-2022
|
2022-12-04 19:10:36
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8190760612487793, "perplexity": 6017.925530181893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710978.15/warc/CC-MAIN-20221204172438-20221204202438-00726.warc.gz"}
|
https://learnzillion.com/lesson_plans/3153-6-playing-speed-using-area-model-to-multiply-non-unit-fractions-by-whole-numbers-fp
|
# 6. Playing Speed: Using Area Model to Multiply Non-Unit Fractions by Whole Numbers (FP)
teaches Common Core State Standards CCSS.Math.Practice.MP1 http://corestandards.org/Math/Practice/MP1
teaches Common Core State Standards CCSS.Math.Content.4.NF.B.4b http://corestandards.org/Math/Content/4/NF/B/4/b
teaches Common Core State Standards CCSS.Math.Practice.MP6 http://corestandards.org/Math/Practice/MP6
teaches Common Core State Standards CCSS.Math.Practice.MP8 http://corestandards.org/Math/Practice/MP8
## You have saved this lesson!
Here's where you can access your saved items.
Dismiss
Card of
Lesson objective: Multiply a non-unit fraction by a whole number.
This lesson helps to build fluency with multiplying a non-unit fraction by a whole number. The previous lesson used number lines to model this same skill. In this lesson area models are used to model this skill. This work develops students' understanding that a multiple of $$\frac ab$$ is a multiple of $$\frac1b$$: n x $$\frac ab$$ = $$\frac {n\ \times\ a}b$$.
Students engage in Mathematical Practice 8 (look for and express regularity in repeated reasoning) as they come to recognize that multiplying the numerator by the whole number is a more efficient way to calculate repeated addition of a non-unit fraction.
Key vocabulary:
• multiple
• non-unit fraction
• unit fraction
• whole number
Special materials needed:
• access to a way to keep track of time (e.g. stop watch or easily visible wall clock with a second hand)
• game cards for "Speed" (see "Additional materials")
Related content
|
2016-12-03 19:42:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19036845862865448, "perplexity": 6178.689509890515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541134.5/warc/CC-MAIN-20161202170901-00078-ip-10-31-129-80.ec2.internal.warc.gz"}
|
https://socratic.org/questions/two-angles-are-supplementary-the-measure-of-the-first-angle-is-36-degrees-less-t
|
# Two angles are supplementary. The measure of the first angle is 36 degrees less than three times the second angle. What are the two measures?
Jan 23, 2016
$\angle 1 = {126}^{\circ}$
$\angle 2 = {54}^{\circ}$
#### Explanation:
supplementary $= {180}^{\circ}$
$\angle 1 = 3 x - {36}^{\circ}$
$\angle 2 = x$
$3 x - {36}^{\circ} + x = {180}^{\circ}$
$3 x + x = {180}^{\circ} + {36}^{\circ}$
$4 x = {216}^{\circ}$
$\frac{\cancel{4} x}{\cancel{4}} = {216}^{\circ} / 4$
$x = {54}^{\circ}$
$3 \left({54}^{\circ}\right) - {36}^{\circ}$
${152}^{\circ} - {36}^{\circ}$
${126}^{\circ}$
${126}^{\circ} + {54}^{\circ} = {180}^{\circ}$
|
2021-06-15 04:07:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 14, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8596687316894531, "perplexity": 895.3053532087037}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487616657.20/warc/CC-MAIN-20210615022806-20210615052806-00027.warc.gz"}
|
https://msp.org/index/ail.php?jpath=jsag&l=Y
|
Yahl, Thomas Decomposable sparse polynomial systems Journal of Software for Algebra and Geometry 11 (2021) 53–59 Yang, Stephanie Intersection numbers on $\overline{\mathscr{M}}_{g,n}$ Journal of Software for Algebra and Geometry 2 (2010) 1–5 Yang, Zhaoning Divisor Package for Macaulay2 Journal of Software for Algebra and Geometry 8 (2018) 87–94
|
2021-09-19 23:03:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5340995788574219, "perplexity": 2750.4486038547348}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056902.22/warc/CC-MAIN-20210919220343-20210920010343-00604.warc.gz"}
|
http://www.openwetware.org/wiki/Charge_to_Mass_Ratio
|
# Charge to Mass Ratio
## Charge to Mass Ratio Lab Summary
Goals
This lab was fairly straight forward. Our goal was to measure the charge to mass ratio of the electron. We prescribed the strength of a magnetic field perpendicular to an electron beam and the accelerating voltage on the beam. These two forces twist the beam into a circle. We then measure the radius of circle and through the use of certain equations, these measurements result in a ratio of the charge to mass.
Theory
In this experiment, the beam is twisted into a circle by the magnetic field, but the inertia of the electrons causes them to resist the twisting. By increasing the accelerating voltage of the electrons, we increase their momentum, causing the beam to straighten. Conversely, when the strength of the magnetic field is increased, the beam attempts to curl. By measuring the radius of the circle that the beam makes under known accelerating voltage and magnetic field, the charge to mass ratio can be determined.
$\frac{e}{m} = \frac{V^{2}}{B^{2} \times R^{2}}$
Magnetic field can be expressed in terms of the current of the Helmholtz coils and the number of rings in the coils. Fortunately, Dr. Gold in his lab manual solves this equation for us in for our particular setup ahead of time.
$B=7.8 \times 10^{-4} \times I$
Results
My data yielded an overall result of 1.556x10^11±2.146x10^10. However, there are a few notable traits of my data. The data taken under constant voltage yielded a very high accuracy, but in excess of 25% error of the excepted value. My calculations yielded an error of 26±4% On the other hand, the data taken with constant current had a much lower error, on the order of 12%, but it deviated much more significantly from the mean. The final error from the excepted value 12±12%. I noted experimentally that the voltage varied much more significantly than the current as a possible explanation for my data. Secondly, it should be noted that there could be significant systematic error with the measuring of the radius, as it must be "eye" against a backdrop with a measuring rod.
|
2014-04-17 07:21:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7394636869430542, "perplexity": 396.50871578076845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
|
https://www.integreat.ca/NOTES/CALG/08.01.html
|
Elementary Algebra
Introduction to Algebra Linear Equations and Inequalities Functions and Graphs I Lines and thier Graphs Linear Systems Exponents & Polynomials
Intermediate Algebra
Factoring Rational Expressions Rational Equations and Applications Radical Expressions Nonlinear Equations and Applications Functions and Graphs II Exponential and Logarithmic Functions
Precalculus I / College Algebra
Equations and Inequalitites Functions and Graphs Polynomial and Rational Functions Exponential and Logarithmic Functions Systems and Matrices Geometry Basics Conic Sections Sequences and Series
Precalculus II / Trigonometry
The Six Trigonometric Functions Right Triangle Trigonometry Circular Functions Graphs of Trigonometric Functions Trigonometric Identities Trigonometric Equations Oblique Triangles and the Laws Vectors Complex, Parametric, and Polar Forms
Calculus I
Limits and Continuity Derivatives Analysis of Curves Antiderivatives
Calculus II
Transcendental Functions
Geometry Physics Integration Techniques Calculus of Infinity Parametric, Polar, and Conic Curves
Calculus III
Course: College Algebra
Topic: Sequences and Series
Subtopic: Introduction to Sequences and Series
Overview
Pop quiz! Can you complete these sequences?
a. 5, 11, 17, 23, 29, ___, ___, ___
b. 2, 4, 8, 16, 32, ___, ___, ___
c. 3, 5, 7, 11, 13, ___, ___, ___
d. 1, 1, 2, 3, 5, 8, ___, ___, ___
a. 35, 41, 47. To get the subsequent terms you add 6. This is an example of an arithmetic sequence.
b. 64, 128, 256. To get the subsequent terms you multiply by 2. This is an example of a geometric sequence.
c. 17, 19, 23. This is simply a sequence of prime numbers.
d. 13, 21, 34. Can you see that the subsequent terms are formed by combining previous terms (1+1=2, 1+2=3, 2+3=5, 3+5=8, etc.)? This is an example of a recursive sequence.
This lesson introduces general sequences (lists) and series (sums) -- the notation, terminology, and processes. We'll follow this lesson with a concentration on specific types of sequences and series (arithmetic and geometric). This material will be used particularly in Calculus III when we will write functions such as ex as a sum of infinitely many rational terms (e^x = 1 + x + x^2/(2!) + x^3/(3!) + x^4/(4!) + ...). This will enable us to perform calculus on rational expressions rather than on the (more complicated) transcendental function itself.
Objectives
By the end of this topic you should know and be prepared to be tested on:
• 8.1.1 Evaluate or simplify a factorial algebraically and electronically
• 8.1.2 Write the first few terms of a sequence given the an term
• 8.1.3 Write the next few terms of a sequence given the first few terms
• 8.1.4 Understand alternating sequences and recursive sequences
• 8.1.5 Know the difference between sequences and series
• 8.1.6 Evaluate or simplify a series algebraically and electronically
• 8.1.7 Use proper mathematical notation and format when working with factorials, sequences, and series
• 8.1.8 Apply properties of series as needed
Terminology
Terms you should be able to define: sequence, finite sequence, infinite sequence, series, finite series, infinite series, terms (of a sequence or series), summation, summation notation, sigma notation, upper limit, lower limit, index of summation, sum, partial sum, sequence of partial sums, factorial, alternating sequence, Fibonacci sequence, recursively-defined (a.k.a. recursive) sequence, properties of series
Text Notes
The Fibonacci Sequence is the most famous example of a recursive sequence. If you haven't investigated this very rich topic before or would like to learn more, please explore the links below.
If your text discusses electronically producing "dot" graphs of a sequence you may SKIP these examples/problems throughout the chapter.
Supplemental Resources (optional)
Pages to explore the Fibonacci sequence and its connections to Pascal's Triangle and the Golden Mean:
Biography of Leonardo Fibonacci
Dr. Knott's Fibonacci Numbers and Nature ~ a must read!
Pascal's Triangle and its Patterns
The Golden Ratio/Section/Mean/Number
Want more sites to explore or some books to read? Just ask! This is a favourite topic of mine, practically a hobby :)
|
2021-05-19 02:22:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5290008783340454, "perplexity": 2151.082368067286}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991562.85/warc/CC-MAIN-20210519012635-20210519042635-00071.warc.gz"}
|
https://gmatclub.com/forum/given-p-is-an-integer-and-0-00005-0-0005-0-005-10-p-is-an-integer-261757.html
|
GMAT Question of the Day - Daily to your Mailbox; hard ones only
It is currently 23 Oct 2018, 15:41
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Given p is an integer and 0.00005*0.0005*0.005*10^p is an integer
Author Message
TAGS:
### Hide Tags
Math Expert
Joined: 02 Sep 2009
Posts: 50058
Given p is an integer and 0.00005*0.0005*0.005*10^p is an integer [#permalink]
### Show Tags
20 Mar 2018, 22:48
00:00
Difficulty:
15% (low)
Question Stats:
92% (01:01) correct 8% (01:13) wrong based on 36 sessions
### HideShow timer Statistics
Given p is an integer and $$0.00005*0.0005*0.005*10^p$$ is an integer, what is the least possible value of p?
(A) -12
(B) -9
(C) 0
(D) 9
(E) 12
_________________
PS Forum Moderator
Joined: 16 Sep 2016
Posts: 314
GMAT 1: 740 Q50 V40
Re: Given p is an integer and 0.00005*0.0005*0.005*10^p is an integer [#permalink]
### Show Tags
21 Mar 2018, 01:04
Bunuel wrote:
Given p is an integer and $$0.00005*0.0005*0.005*10^p$$ is an integer, what is the least possible value of p?
(A) -12
(B) -9
(C) 0
(D) 9
(E) 12
5 * 10^-5 * 5 * 10^-4 * 5 * 10^-3 * 10^p is an integer
125* 10^-12 * 10^p is an integer
125*10^(p-12) is int
p-12 >= 0
p>=12
Hence Option E.
Best,
VP
Status: It's near - I can see.
Joined: 13 Apr 2013
Posts: 1265
Location: India
GMAT 1: 480 Q38 V22
GPA: 3.01
WE: Engineering (Consulting)
Given p is an integer and 0.00005*0.0005*0.005*10^p is an integer [#permalink]
### Show Tags
23 Mar 2018, 10:42
Bunuel wrote:
Given p is an integer and $$0.00005*0.0005*0.005*10^p$$ is an integer, what is the least possible value of p?
(A) -12
(B) -9
(C) 0
(D) 9
(E) 12
The expression simply becomes,
$$125 * 10^-12 * 10^p$$
Or,$$125 * 10^ {-12 + p}$$
Or, $$125 * 10^{-12 + 12}$$
Or, $$125 * 10^0$$
$$125$$*$$1$$
Therefore, least value of "p" is 12 to keep the expression as integer.
(E)
_________________
"Do not watch clock; Do what it does. KEEP GOING."
Given p is an integer and 0.00005*0.0005*0.005*10^p is an integer &nbs [#permalink] 23 Mar 2018, 10:42
Display posts from previous: Sort by
|
2018-10-23 22:41:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6581078767776489, "perplexity": 7452.862266448381}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583517495.99/warc/CC-MAIN-20181023220444-20181024001944-00199.warc.gz"}
|
https://www.springerprofessional.de/distance-regular-graphs/14448702
|
main-content
## Über dieses Buch
Ever since the discovery of the five platonic solids in ancient times, the study of symmetry and regularity has been one of the most fascinating aspects of mathematics. Quite often the arithmetical regularity properties of an object imply its uniqueness and the existence of many symmetries. This interplay between regularity and symmetry properties of graphs is the theme of this book. Starting from very elementary regularity properties, the concept of a distance-regular graph arises naturally as a common setting for regular graphs which are extremal in one sense or another. Several other important regular combinatorial structures are then shown to be equivalent to special families of distance-regular graphs. Other subjects of more general interest, such as regularity and extremal properties in graphs, association schemes, representations of graphs in euclidean space, groups and geometries of Lie type, groups acting on graphs, and codes are covered independently. Many new results and proofs and more than 750 references increase the encyclopaedic value of this book.
## Inhaltsverzeichnis
### Chapter 1. Special Regular Graphs
Abstract
A connected graph Г is called distance-regular if there are integers b i , c i (i ≥ 0) such that for any two points γ, δ ε Г at distance i = d(γ,δ), there are precisely c i neighbours of δ in Гi−1(γ) and b i neighbours of δ in Гi+1(γ). In particular, Г is regular of valency k = b0. The sequence
$$\iota (\Gamma ): = \{ {{b}_{0}},{{b}_{1}}, \cdots ,{{b}_{{d - 1}}};{{c}_{1}},{{c}_{2}}, \cdots ,{{c}_{d}}\} ,$$
where d is the diameter of Г, is called the intersection array of Г (cf. Biggs [71]); the numbers c i , b i , and a i , where
$${{a}_{i}} = k - {{b}_{i}} - {{c}_{i}}\;(i = 0, \ldots ,d)$$
(1)
is the number of neighbours of δ in Γ i (γ) for d(γ,δ) = i, are called the intersection numbers of Γ. Clearly
$$\begin{array}{*{20}{c}} {{{b}_{0}} = k,} & {{{b}_{d}} = {{c}_{0}} = 0,} & {{{c}_{1}} = 1.} \\ \end{array}$$
(2)
Andries E. Brouwer, Arjeh M. Cohen, Arnold Neumaier
### Chapter 2. Association Schemes
Abstract
The first part of this chapter contains a short account of the basic theory of (symmetric) association schemes. Such schemes are essentially partitions of a complete graph into regular subgraphs which are interrelated in a specific way. For a more extensive treatment, see Bannai & Ito [33]. The last part of this chapter treats some special topics. Although we shall develop large parts of the theory of distance-regular graphs independently of the results of this chapter, we shall use concepts and results about association schemes for more specialized topics such as, e.g., Q-polynomial orderings (Chapter 8) and codes in graphs (Chapter 11). Multiplicity formulas (2.2.2) and bounds (2.3.3) as well as the Krein conditions (2.3.2) developed here in general context will recur for distance-regular graphs in Chapter 4.
Andries E. Brouwer, Arjeh M. Cohen, Arnold Neumaier
### Chapter 3. Representation Theory
Abstract
Motivated by applications to the classification of certain distance-regular graphs we consider representations of graphs by sets of vectors in a Euclidean space.
Andries E. Brouwer, Arjeh M. Cohen, Arnold Neumaier
### Chapter 4. Theory of Distance-Regular Graphs
Abstract
We now come to the central topic of the book. The first section may be viewed as a short introduction to the subject. Although we shall develop large parts of the theory of distance-regular graphs independently of Chapter 2, we shall use concepts and results about association schemes for more specialized topics such as Q-polynomial orderings (Chapter 8) and codes in graphs (Chapter 11). In §4.2 we look at various constructions that, given a distance-regular graph, produce a new one. In §4.3 we show how certain conditions on the parameters force the presence of substructures, like lines or Petersen subgraphs. In §4.4 we use the results of Chapter 3 to obtain a characterization by parameters of the two most basic families of distance-regular graphs, the Johnson and Hamming graphs. Chapter 5 contains most of the known conditions on the parameters, Chapter 6 classifies the known distance-regular graphs in various families, Chapter 7 is concerned with distance-transitive graphs, Chapter 8 discusses the consequences of the Q-polynomial property, and the remaining chapters give all examples known to us.
Andries E. Brouwer, Arjeh M. Cohen, Arnold Neumaier
### Chapter 5. Parameter Restrictions for Distance-Regular Graphs
Abstract
In this chapter we collect most of the restrictions on intersection arrays of distance-regular graphs known to us. (A few very basic facts have already been mentioned in §4.1D.) Some of these restrictions are important tools in the theoretical investigation of the properties of distance-regular graphs, like the unimodality of the sequence (k i ) i discussed in §5.1. (We already used this on several occasions.) Various bounds on the diameter in terms of the valency are theoretically important. First we have Terwilliger’s diameter bound for the case where the graph contains a quadrangle; next Ivanov’s theory, which yields abound on the diameter for arbitrary distance-regular graphs with fixed numerical girth, and finally the work by Bannai & Ito, who strive to remove the dependency on the girth from these bounds. Also Godsil proved diameter bounds, but this time in terms of a multiplicity.
Andries E. Brouwer, Arjeh M. Cohen, Arnold Neumaier
### Chapter 6. Classification of the Known Distance-Regular Graphs
Abstract
In this chapter we classify distance-regular graphs with diameter d ≥ 3 into the following four (non-exclusive) classes:
(i)
graphs with classical parameters,
(ii)
partition graphs,
(iii)
regular near polygons,
(iv)
the remaining distance-regular graphs.
Andries E. Brouwer, Arjeh M. Cohen, Arnold Neumaier
### Chapter 7. Distance-Transitive Graphs
Abstract
There are only finitely many distance-transitive graphs with given valency > 2. This result was first shown in Cameron, Praeger, Saxl & Seitz [183] by use of the classification of finite simple groups. Below we give a proof due to Weiss [779] which is independent of this classification. A basic ingredient to the proof of Weiss’ theorem is the celebrated Thompson-Wielandt Theorem. The proof of the latter theorem requires group-theoretic preparation which can be found in Section 7.1. The Thompson-Wielandt Theorem is the content of Section 7.2 and Weiss’ theorem is in Section 7.3. Subsequently we discuss results in the cases of large girth (Section 7.4), small valency (Section 7.5), and imprimitive graphs (Section 7.6). The state of the art in overall classification and a few related results are given in the final sections (7.7–8).
Andries E. Brouwer, Arjeh M. Cohen, Arnold Neumaier
### Chapter 8. Q-polynomial Distance-Regular Graphs
Abstract
In this chapter we discuss in detail the relations between the parameters of Q-polynomial distance-regular graphs and their Q-sequences. We start with a reformulation of the Q-polynomial property defined in §4.1E, which is the source of most of our results. This leads to a three-term recurrence relation for Q-sequences (Theorem 8.1.2) discovered by Leonard [485], and a representation of the intersection array of Q-polynomial graphs in terms of 5 parameters only (Proposition 8.1.5). We mention Leonard’s explicit parameter formulae and results of Bannai & Ito [33] on the integrality of eigenvalues for large diameters based on these formulae. We also determine which of the known distance-regular graphs are Q-polynomial.
Andries E. Brouwer, Arjeh M. Cohen, Arnold Neumaier
### Chapter 9. The Families of Graphs with Classical Parameters
Abstract
In this chapter we discuss the known infinite families of graphs with classical parameters, except for some graphs of Lie type, treated in the next chapter. A few sporadic graphs with classical parameters can be found in Chapters 3 and 11, cf. Table 6.1.
Andries E. Brouwer, Arjeh M. Cohen, Arnold Neumaier
### Chapter 10. Graphs of Coxeter and Lie Type
Abstract
In this chapter we study Coxeter systems and Tits systems and certain graphs derived from these. In the first few sections finiteness is not assumed. In the later sections almost all known infinite families of distance-transitive graphs are described in this framework. The chapter ends with a determination of all distance-transitive graphs which naturally arise from a Tits system in a finite Chevalley group. Much more information on Tits systems, Chevalley groups and buildings can be found in Bourbaki [113], Carter [187, 188] and Tits [747, 753].
Andries E. Brouwer, Arjeh M. Cohen, Arnold Neumaier
### Chapter 11. Graphs Related to Codes
Abstract
Let $$V = F_{q}^{n}$$ be the vector space of n-tuples with entries in the finite field F q with q elements, and let C be a linear code in V (i.e., a linear subspace of V). We define the coset graph Γ(C) of C by taking as vertices the cosets of C in V, and joining two cosets when they have representatives that differ in one coordinate (i.e., have Hamming distance one). In some cases Γ(C) turns out to be distance-regular. In section 11.1 we study this phenomenon in a more general setting. Instead of the vector space V (that is, instead of the Hamming graph H(n,q)), we take an arbitrary distance-regular graph Γ, and instead of the partition of V into cosets of C, we take an arbitrary partition Π of Γ, Now there is an obvious concept of quotient graph Γ / Π generalizing that of coset graph, and Theorem 11.1.6 gives a sufficient condition for this quotient graph to be distance-regular. Section 11.1 is the outgrowth of earlier discussions with A.R. Calderbank.
Andries E. Brouwer, Arjeh M. Cohen, Arnold Neumaier
### Chapter 12. Graphs Related to Classical Geometries
Abstract
Graphs on isotropic points of a classical geometry (i.e., a geometry related to a semilinear or quadratic form) have been studied explicitly in Chapter 9 and implicitly in the context of parabolic representations of groups of Lie type. The nonisotropic points usually fall into a few orbits of the isometry group. The permutation rank of these orbits depends on the cardinality of the underlying field. We show that only in a few cases the related graphs are distance-regular. In the last three sections we construct several infinite families of antipodal covers of complete graphs (starting from affine instead of projective points), and an infinite family of partial geometries yielding bipartite distance-regular graphs of diameter 4 (starting from complete arcs in a projective plane).
Andries E. Brouwer, Arjeh M. Cohen, Arnold Neumaier
Abstract
The preceding four chapters were concerned with graphs belonging to an infinite series of distance-regular graphs or constructed in a uniform way. A handful of (known) distance-regular graphs remain. These are the subject of the present chapter.
Andries E. Brouwer, Arjeh M. Cohen, Arnold Neumaier
### Chapter 14. Tables of Parameters for Distance Regular Graphs
Abstract
This chapter contains tables of parameters for primitive distance-regular graphs with diameter 3 on at most 1024 vertices, for non-bipartite distance-regular graphs with diameter 4 on at most 4096 vertices, and for arbitrary distance-regular graphs of diameter at least 5 on at most 4096 vertices. In each category the parameter sets are ordered by k (not v). We only list intersectiòn arrays that pass all feasibility criteria known to us. We do not give any information on the polygons (e.g., these have many P- and Q-polynomial structures).
Andries E. Brouwer, Arjeh M. Cohen, Arnold Neumaier
### Backmatter
Weitere Informationen
|
2020-03-29 00:41:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6474698185920715, "perplexity": 861.1771737602978}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370493121.36/warc/CC-MAIN-20200328225036-20200329015036-00205.warc.gz"}
|
http://math.stackexchange.com/questions/149170/an-exercise-in-serres-lie-algebra-book
|
# An exercise in Serre's Lie algebra book
Let $k$ be a commutative ring. Prove that a Lie $k$-algebra $\mathfrak{g} = 0$ iff $U\mathfrak{g} = k$. Use the adjoint representaion.
Here is my attempt at it:
The only non-trivial statement is that if $U\mathfrak{g} = k$, then $\mathfrak{g} = 0$.
There is an isomorphism between categories of $\mathfrak{g}$-modules and $U\mathfrak{g}$-modules. A module is the same as a representation. Consider $\operatorname{ad} \mathfrak{g}$ and apply the isomorphism, you'll get a representation $\varphi: U\mathfrak{g} = k \to \operatorname{End} \mathfrak{g}$ s.t. $\operatorname{im} \varphi \subset Z(\operatorname{End} \mathfrak{g})$. This implies that $\operatorname{ad} \mathfrak{g} = 0$, so $\mathfrak{g}$ is abelian.
However, if $\mathfrak{g}$ is abelian, then $U\mathfrak{g}$ is simply $S \mathfrak{g}$, the symmetric algebra over $\mathfrak{g}$, and since $\mathfrak{g} \subset S\mathfrak{g}$, it follows that $S \mathfrak{g} = k$ iff $\mathfrak{g} = 0$, Q.E.D.
Is this proof correct and complete? Have I used the most optimal way, or is there a more elegant way of using $\operatorname{ad}$ to prove this proposition? Also, if there's a more elegant way to prove this without using $\operatorname{ad}$, please do share :)
-
This looks fine. – M Turgeon May 24 '12 at 11:48
If you want a quick proof, you can use the Poincaré-Birkhoff-Witt theorem, and then it is immediate -- but you need $k$ to be a field. – M Turgeon May 24 '12 at 11:51
@MTurgeon More precisely, it requires $\mathfrak{g}$ to be a free $k$-module. – Alexei Averchenko May 24 '12 at 12:08
@MTurgeon Actually, now I'm not sure that $\operatorname{im} \varphi \subset Z(\operatorname{End} \mathfrak{g})$. E.g. $\varphi(x) = \begin{pmatrix}x & 0 \\ 0 & 0\end{pmatrix}$ does not commute with everything, although of course $[\varphi(x), \varphi(y)] = 0$ for any $x$ and $y$. – Alexei Averchenko May 25 '12 at 5:43
|
2016-06-01 02:09:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.917974054813385, "perplexity": 161.05255855350393}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464053379198.78/warc/CC-MAIN-20160524012939-00128-ip-10-185-217-139.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/a-question-in-prooving-function-convergence.210377/
|
A question in prooving function convergence
1. Jan 22, 2008
transgalactic
2. Jan 22, 2008
HallsofIvy
Staff Emeritus
a0= 1, an+1= (an+ 1)/(an+ 2)
I take it you want to prove that it is decreasing. It's clearly bounded below (by 0), so it has a limit. And then find the limit. a1= (1+ 1)/(1+ 2)= 2/3< 1. That you have.
Now, suppose, for some k, ak> ak+1. Then ak+1+1= (ak+1+1)/ak+1+ 2). Again you have that but, as you say, since both numerator and denominator are larger than in ak+1, that doesn't tell you anything. Perhaps it would help to recognise that (x+ 1)/(x+ 2)= 1- 1/(x+ 2). If uk+1< uk then uk+1+ 2< uk+ 2 so 1/(uk+1+ 2)> 1/uk and then -1/(uk+1)< -1/uk.
You then solve t= (t+1)/(t+2) and get two solutions. Of course, only one of those is the limit of the sequence. The fact that only one of them is positive should make it clear which!
3. Jan 23, 2008
transgalactic
i tried to use what you told me
i have written your explanation several times
some steps in your post that you say "then"
i cant understand how you got them
and how i go further to proove my inequality
can you please wright me the solution to this problem
??
4. Jan 23, 2008
HallsofIvy
Staff Emeritus
No, I can't!
|
2017-05-28 08:39:03
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8763075470924377, "perplexity": 2815.5497514381937}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463609610.87/warc/CC-MAIN-20170528082102-20170528102102-00157.warc.gz"}
|
https://eccc.weizmann.ac.il/keyword/16949/
|
Under the auspices of the Computational Complexity Foundation (CCF)
REPORTS > KEYWORD > DOMINATING SET:
Reports tagged with Dominating Set:
TR17-186 | 29th November 2017
Karthik C. S., Bundit Laekhanukit, Pasin Manurangsi
#### On the Parameterized Complexity of Approximating Dominating Set
Revisions: 1
We study the parameterized complexity of approximating the $k$-Dominating Set (domset) problem where an integer $k$ and a graph $G$ on $n$ vertices are given as input, and the goal is to find a dominating set of size at most $F(k) \cdot k$ whenever the graph $G$ has a dominating ... more >>>
ISSN 1433-8092 | Imprint
|
2022-05-29 12:25:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8468731045722961, "perplexity": 1303.2656806606308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662644142.66/warc/CC-MAIN-20220529103854-20220529133854-00471.warc.gz"}
|
https://mathematica.stackexchange.com/questions/138831/compute-covariant-derivative-in-mathematica
|
# Compute covariant derivative in Mathematica
I need to compute covariant derivatives in Mathematica. Searching online I just found the package "Ricci" which only does symbolic computations: I instead need to do actual computations.
This is what I mean: consider $\mathbb{R}^4$ with coordinates $x_1,x_2,x_3,x_4$. In these coordinates I want to define a Riemannian metric by the coefficients $g_{ij}$, $i,j=1,\dots,4$. Then I would like to define two vector fields $u=\sum_iu_i\frac{\partial}{\partial x_i};\ v=\sum_jv_j\frac{\partial}{\partial x_j}$ by their coefficients $u_i,v_j$ (which are functions in coordinates $x_1,x_2,x_3,x_4$). Finally I would like Mathematica to compute the covariant derivative $\nabla_uv.$
Is there some package which will do the computation? If not, can you suggest me an algorithm which will do it?
Thank you
Here is some code I wrote a while back:
ClearAll["Global*"]
SetAttributes[Rs, Constant]
$Assumptions = Rs > 0; Coordinates = {t, r, \[Theta], \[Phi]}; dim = Length[Coordinates]; MetricTensorLL = {{(1 - Rs/r), 0, 0, 0}, {0, -(1 - Rs/r)^-1, 0, 0}, {0, 0, -r^2, 0}, {0, 0, 0, -r^2 Sin[\[Theta]]^2}}; MetricTensorUU := MetricTensorUU = Simplify[Inverse[MetricTensorLL]]; ChristoffelSymbolsULL := ChristoffelSymbolsULL = Simplify@ Array[1/2 Sum[ MetricTensorUU[[#1, \[Lambda]]] (D[ MetricTensorLL[[#3, \[Lambda]]], Coordinates[[#2]]] + D[MetricTensorLL[[\[Lambda], #2]], Coordinates[[#3]]] - D[MetricTensorLL[[#2, #3]], Coordinates[[\[Lambda]]]]), {\[Lambda], dim}] &, {dim, dim, dim}]; ChristoffelSymbolsLLL := ChristoffelSymbolsLLL = Simplify@ Array[1/2 (D[MetricTensorLL[[#1, #2]], Coordinates[[#3]]] + D[MetricTensorLL[[#1, #3]], Coordinates[[#2]]] - D[MetricTensorLL[[#2, #3]], Coordinates[[#1]]]) &, {dim, dim, dim}]; RiemannCurvatureTensorULLL := RiemannCurvatureTensorULLL = Simplify@ Array[D[ChristoffelSymbolsULL[[#1, #2, #4]], Coordinates[[#3]]] - D[ChristoffelSymbolsULL[[#1, #2, #3]], Coordinates[[#4]]] + Sum[ChristoffelSymbolsULL[[#1, #3, \[Epsilon]]] \ ChristoffelSymbolsULL[[\[Epsilon], #2, #4]], {\[Epsilon], dim}] - Sum[ChristoffelSymbolsULL[[#1, #4, \[Epsilon]]] \ ChristoffelSymbolsULL[[\[Epsilon], #2, #3]], {\[Epsilon], dim}] &, {dim, dim, dim, dim}]; RiemannCurvatureTensorLLLL := RiemannCurvatureTensorLLLL = Simplify@ Array[Sum[ MetricTensorLL[[#1, \[Tau]]] \ RiemannCurvatureTensorULLL[[\[Tau], #2, #3, #4]], {\[Tau], dim}] &, {dim, dim, dim, dim}]; RiemannCurvatureTensorUUUU := RiemannCurvatureTensorUUUU = Simplify@ Array[Sum[ MetricTensorUU[[#2, \[Alpha]]] MetricTensorUU[[#3, \[Beta]]] \ MetricTensorUU[[#4, \[Gamma]]] RiemannCurvatureTensorULLL[[#1, \ \[Alpha], \[Beta], \[Gamma]]], {\[Alpha], dim}, {\[Beta], dim}, {\[Gamma], dim}] &, {dim, dim, dim, dim}]; RiemannCurvatureTensorLL := RiemannCurvatureTensorLL = Simplify@ Array[Sum[ RiemannCurvatureTensorULLL[[\[Lambda], #1, \[Lambda], #2]], {\ \[Lambda], dim}] &, {dim, dim}]; RiemannCurvatureTensorUL := RiemannCurvatureTensorUL = Simplify@ Array[Sum[ MetricTensorUU[[#1, \[Lambda]]] RiemannCurvatureTensorLL[[\ \[Lambda], #2]], {\[Lambda], dim}] &, {dim, dim}]; ScalarCurvature := ScalarCurvature = Tr[RiemannCurvatureTensorUL]; KretschmannScalar := KretschmannScalar = Simplify@ Sum[RiemannCurvatureTensorLLLL[[\[Alpha], \[Beta], \[Gamma], \ \[Delta]]] RiemannCurvatureTensorUUUU[[\[Alpha], \[Beta], \[Gamma], \ \[Delta]]], {\[Alpha], dim}, {\[Beta], dim}, {\[Gamma], dim}, {\[Delta], dim}]; WeylCurvatureTensorLLLL := WeylCurvatureTensorLLLL = Simplify@ Array[If[dim > 3, RiemannCurvatureTensorLLLL[[#1, #2, #3, #4]] - 1/( dim - 2) (MetricTensorLL[[#1, #3]] \ RiemannCurvatureTensorLL[[#4, #2]] + MetricTensorLL[[#2, #4]] RiemannCurvatureTensorLL[[#3, \ #1]] - MetricTensorLL[[#1, #4]] RiemannCurvatureTensorLL[[#3, #2]] - MetricTensorLL[[#2, #3]] RiemannCurvatureTensorLL[[#4, \ #1]]) + ScalarCurvature/((dim - 1) (dim - 2)) (MetricTensorLL[[#1, #3]] MetricTensorLL[[#4, #2]] - MetricTensorLL[[#1, #4]] MetricTensorLL[[#3, #2]]), 0] &, {dim, dim, dim, dim}]; EinsteinTensor := EinsteinTensor = Simplify[ RiemannCurvatureTensorLL - 1/2 MetricTensorLL ScalarCurvature]; ConformallyFlatSpaceQ := ConformallyFlatSpaceQ = Simplify[Equal[Sequence @@ Flatten@WeylCurvatureTensorLLLL, 0]]; MaximallySymmetricSpaceQ := MaximallySymmetricSpaceQ = Simplify[ And @@ Flatten@ Map[# == 0 &, RiemannCurvatureTensorLLLL - Array[ScalarCurvature/( dim (dim - 1)) (MetricTensorLL[[#1, #3]] MetricTensorLL[[#2, #4]] - MetricTensorLL[[#1, #4]] MetricTensorLL[[#2, #3]]) &, \ {dim, dim, dim, dim}], {4}]]; It is hopefully self-explanatory. In this particular example I am using the Schwarzschild metric. The first line declares Rs, the Schwarzschild radius, to be a constant. We then define the coordinates of the manifold, in this case,$t,r,\theta,\phi$. After that, we input the components of the metric tensor, with its indices lowered (as indicated by the LL tag, meaning "lower-lower"). The rest of tensors are calculated by MMA. For example, ChristoffelSymbolsULL are the components of the Christoffel symbols, with one upper and two lower indices. Similarly, ChristoffelSymbolsLLL are the components of the Christoffel symbols with all indices lowered. The code also computes the Riemann tensor, the Weyl tensor, and several scalars (Ricci and Kretschmann). Finally, there is a check for whether the manifold is conformally flat and/or maximally symmetric. To compute covariant derivatives, you can use the known value of the Christoffel symbols, or the expression Sum[1/Sqrt[Det[MetricTensorLL]] D[Sqrt[Det[MetricTensorLL]] MetricTensorUU[[\[Mu] + 1, \[Nu] + 1]] D[f @@ Coordinates, Coordinates[[\[Nu] + 1]]], Coordinates[[\[Mu] + 1]]], {\[Mu], 0, dim - 1}, {\[Nu], 0, dim - 1}] where f is an arbitrary scalar function. For higher rank tensor fields, you'll have to make some small modifications to the code. • thank you! this seems a lot more than what I need. I understand the initial part of definition of the coordinates and of the metric tensor. But where can I define the two vector fields? Also, how can I compute the covariant derivative? Feb 28, 2017 at 14:52 • @AccidentalFourierTransform In your code, how can I choose one specific Christoffel Symbol from the created list? Is there any way to define a function Christoffel[a,b,c] etc? Jul 15, 2017 at 3:12 There is undocumented functionality in the SymbolicTensors package, which underlies CoordinateChartData, CoordinateTransformData, and the coordinate-system awareness of Grad, etc. You could add the context to $ContextPath save needing to type everywhere, though I won't do that in the example below.
First, you want to define a "patch". This can be done in various ways, but for a diagonal metric tensor the easiest way is scale factors. Here I'll use Minkowski in spherical coordinates as my example:
vars = {t, r, \[Theta], \[Phi]};
patch = SymbolicTensorsScaleFactorGeometryPatch[{-1, 1, r, r Sin[\[Theta]]}, vars];
This patch is the equivalent of CoordinateChartData for your custom metric. Take a look at patch["Properties"] for some interesting things it will spit out.
Next, we need to define the two vector fields in the tensor language of the package. I'm just using the SlotSequence notation to save myself some typing, The evaluated form would work just as well for input. Below, TangentBasis is a representation of the coordinated basis for vector fields. There is a CotangentBasis for the space of one-forms. If you wish to use, e.g., an orthonormal basis, the TangentBasis with TransformedBasis and supply the change of basis matrix in the second argument.
v = SymbolicTensorsTensor[{vt[##], vr[##], v\[Theta][##],
v\[CurlyPhi][##]}, {SymbolicTensorsTangentBasis[{##}]}] & @@ vars
u = SymbolicTensorsTensor[{ut[##], ur[##], u\[Theta][##],
u\[CurlyPhi][##]}, {SymbolicTensorsTangentBasis[{##}]}] & @@ vars
Finally, we want to do an actual computation. For this there is a CovariantD which returns $\nabla_b v^a$, using the abstract index notation. CovariantD has the same sytnax as Grad and friends, and all of these take a patch in the third argument. (Note that Grad returns the raised derivative $\nabla^bv^a$.) The covariant derivative along a vector fields is simply $u^b\nabla_bv^a$. Since all differentiation functions in M- add the new slot at the end, this then is simply:
SymbolicTensorsCovariantD[v, vars, patch] . u
If you don't like the fact u comes at the end, you could write this more verbosely as
TensorContract[
TensorProduct[u, SymbolicTensorsCovariantD[v, vars, patch]],
{1, 3}
]
• You & I could have used this functionality back in our grad school days. Aug 15, 2017 at 15:38
• @MichaelSeifert Who do you think wrote the package? :) Aug 15, 2017 at 23:01
|
2022-07-07 16:48:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8751031756401062, "perplexity": 932.3297326708874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104495692.77/warc/CC-MAIN-20220707154329-20220707184329-00246.warc.gz"}
|
https://math.meta.stackexchange.com/questions/29778/close-reason-lack-of-research-effort
|
# Close reason: “lack of research effort”
The custom off topic close reason is very rarely applied for the reason that the words actually say and is basically used as a catch-all close reason, even when something like "unclear what you're asking" would be more appropriate. The way it is used, it ceases to have meaning.
It seems to me one of the primary reasons people have for closing is lack of research effort, and I propose adding this as a custom reason so the close votes actually give some idea of the intention of the close voter. As it stands, virtually every close vote is the custom reason that isn't a duplicate, and it basically just throws the purpose of giving a reason at all out the window.
• Related to "off-topic" being too broad. (Pun not intended.) – Lord_Farin Feb 10 at 9:35
• Does the setup allow writing a custom message? (As with rejecting an edit, where you can give specific feedback if you want.) – timtfj Feb 10 at 12:42
• @tim It does on the website (not on the app), but I don't often see it. – Matt Samuel Feb 10 at 13:03
• Stackexchange doesn't attempt to avoid duplication with the rest of the internet, and I'm glad that it doesn't, because often I find the explanations on stackexchange sites to be far more clear than elsewhere. If it's a good question and it hasn't already been answered on math.stackexchange, I think we should embrace it, even if the answer could be found elsewhere. (The answer to almost any question could be found elsewhere with a sufficient research effort, but often there are people on stackexchange sites who can make the topic seem easy or obvious.) – littleO Feb 13 at 17:31
• "off topic" is one of the weirdest things on MSE. – zhw. Feb 22 at 2:18
• Just to add to my comment: Here is a question that was judged as "off topic":math.stackexchange.com/questions/3123369/… $\,\,$ What is "off topic" about this question? The tags are real analysis and contest math. A judgement of "not enough information included" would be a better description. – zhw. Feb 23 at 19:24
Complementing Alexander's answer, a close reason of "lack of research effort" is, in my opinion, intrinsically bad for multiple reasons: it is impossible to ascertain how much research effort there was, "effort" is not a cop out for a poor question, it often gets dragged down to debates about how often students have "no clue on where to start" etc. If anything, "effort" is frequently a red herring or simply a compromise for allowing bad questions to stay.
It seems to me one of the primary reasons people have for closing is lack of research effort (...)
I'd contest that. I realize that a lot of people that close a lot of questions may even agree with you about it, but I don't think this is true. If a question such as:
was attached with an eight-hour long video of OP studying Stewart's Calculus book, the question would also be closed. What matters most for people is that the question, as written, is not relevant to the community. And this is explicitly said in the close reason. Of course, it has a personal undertone, as does any kind of intellectual quality curation. But the solutions for that are rather objective and are also within the close reason itself, and it seems fair to say that the overwhelming majority of questions which follow those guidelines are welcome here and left untouched by closure.
It also seems to me that "[please, show the] background and motivation, relevant definitions, source, possible strategies, your current progress, why the question is interesting or important, etc." is much clearer and objective than "please, show research effort" or variations of that. A new user of the site, when confronted with the later, may simply respond by saying "I tried using the definition, but then got stuck" or variations of that. Indeed, this has happened multiple times. The phrasing of the close reason makes it so that what may be important about effort (namely, the background, possible strategies etc) are spelled out explicitly.
"No context" does function as a bit of a catch-all, and certainly 90% of the time, it is applied to "do my homework!" PSQs. However, there are several reasons it is stated like it is.
1. Its "forms of context" list suggests ways that a question can be improved. For a user acting in good faith who happens to have asked a bad question, "lack of research effort" (or similar wording) is not as constructive. For users not acting in good faith, it is obvious why the question is closed anyway.
2. At this time, MSE only has 3 custom close reasons. Certainly, one of these must go to "not about mathematics," so really there are only two. Under these limits, real estate is important, so even though "lack of research" may be more precise for some questions, the same questions can still be closed under "no context", and the broader definition allows the closure of other problematic questions that would not fit under the "lack of research" umbrella.
3. If I got to choose one message to show passing visitors about what type of question is acceptable on MSE, this would be it. I like that many closed questions have this text below it. I hope that it shows up enough that new users can't avoid seeing it. I'd put it on billboards if I could.
• We can do a Three Billboards Outside of Ebbing, Missouri kind of thing. – Asaf Karagila Feb 10 at 20:57
• @AsafKaragila "STILL NO ARRESTS?", "HOW COME CHIEFWILLOUGHBY?", followed by "This question is missing context or other details" – Omnomnomnom Feb 14 at 19:42
• @Omnomnomnom: Very good. – Asaf Karagila Feb 14 at 19:51
|
2019-11-12 18:27:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3688471019268036, "perplexity": 1079.3998611207603}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665726.39/warc/CC-MAIN-20191112175604-20191112203604-00552.warc.gz"}
|
https://community.wolfram.com/groups/-/m/t/1655203
|
# EasyIDE: An IDE for Mathematica
Posted 16 days ago
678 Views
|
14 Replies
|
15 Total Likes
|
This is a cross-post from here
Contrary to what would probably be best practice, I do all of my Mathematica development inside Mathematica itself. To support this I built out a suite of application development tools , a web site builder , a bug tracker , and a documentation writing system . Each of these worked nicely for me separately, but each of these required a palette and each one ran on notebooks, which meant that my screen filled with too many notebooks to keep track of. And then for each of these palettes and systems I had to write new resource finding code based off the palette or some arbitrarily imposed root directory or else provide some other way to specify where things would be found.
In short, it got messy.
Then, in a very relaxing hiatus from Mathematica I did some python development, writing a package for linking Mathematica to python as well as some stuff for coordinate transforms and finite differencing and other little utilities. In doing this I noticed that everything was just...better. Partly this is because python is much nicer to write significant amounts of code in, being a language that actually supports developers and with actual object orientation and modularity. But another significant part of it was in the tools available to me. In particular I had the python plugin to IntelliJ , which is also repackaged as PyCharm . The fact that I had tabbing, plugins (e.g. for Git ), a file browser inside my dev environment, etc. was at once so entirely normal (I used to be a python programmer before switching over to mostly using Mathematica) and at the same time so nice. I then tried to use the very nice and well constructed IntelliJ plugin for Mathematica but it was just too much of a hurdle to lose everything I was used to and liked about writing my code directly in Mathematica.
And that long, unnecessary background is why today we're gonna look at a Mathematica IDE written and operating entirely within Mathematica.
# EasyIDE
Mostly for the rhyme, I called this thing EasyIDE but it is pretty easy to use, too.
## Basics
### Installation
Install it off the Paclet Server :
<< https://paclets.github.io/PacletServer/Install.wl
PublicPacletInstall["EasyIDE"]
(*Out:*)
### Making a New IDE Notebook
This IDE system is also basically just a package and a stylesheet, so it's pretty easy to get started. Simply go to Format ▸ Stylesheet ▸ EasyIDE ▸ LightMode . It'll prompt you for a directory to use as the root directory. Here's a video as an example:
You can play around with the file browser now or the plugin menus in the top right
### Notebooks, Packages, and Text Files
As things currently stand, the IDE recognizes three types of files to handle in different ways. The first, of course, are plain notebooks. These can be manipulated like normal. Here's an example of making and editing a notebook file in the IDE:
Text and package files can be made in the same way--just assign the appropriate file extension.
Each of these files will work basically as a regular file would, except their contents will be saved to their original file on the disk rather than the current NotebookFileName[] .
### The File Browser
One of the most useful and intuitive features of this IDE is the file browser it has built in. This allows you to quickly find files inside the active directory. Here's a screen shot of what that can look like:
Each entry in this has a ContextMenu that allows for some file- or directory-specific actions.
EasyIDE is built to be extensible. It provides a way to get different behavior depending on what would be useful for the specific type of notebook or file is being fed in. These are controlled in the EasyIDE settings, in particular at EasyIDE ▸ Resources ▸ Settings ▸ Mappings where there are many files that control how these should map. This directory may also be created in $UserBaseDirectory/ApplicationData and the settings there will take precedence over those in the paclet folder itself. These customizations include stylesheets, toolbars, and what to do when the file browser is active. ### Plugins and Toolbars Probably the best feature of having something like EasyIDE is the ability to hook external code into the IDE and have it give new, more powerful capabilities. To make this easy to work with I added both a plugin system and a toolbar system (although the latter is really just a special case of the first). Plugins appear as either menus--such as the File and Project menus which are themselves just plugins--or as commands under the plugins menu. Currently I already have a decent number of these: All of these add new functionality to the IDE based on code I'd written before. In that screenshot you can also see a toolbar, which exists right below the tabs. This can be stylesheet specific and thus adds an even more targeted way to add functionality to the system. Here's an example of the four different toolbars I've implemented as well as the different stylesheets they go with: In that you can also see the major downside of putting everything into an IDE: when the files get big (as is the one I'm using to write this post) things can get slower. On the other hand as long as one is only writing code, this is never an issue. And even with a ~12MB file like this things are still more than fast enough to not be frustrating to work with. ## Extensions ### Styles EasyIDE was built to be customizable. This holds first and foremost for the stylesheets it works with. Even though currently there is only a set of LightMode styles, as DarkMode style set could be constructed without too much more difficulty. To do this, one would merely have to take the existing LightMode stylesheet, copy it, and make the necessary cosmetic changes. These changes should then propagate reasonably naturally to the extension styles if the inheritance is changed. This is on the TODO list, but if there is a quality existing DarkMode stylesheet to work off that would also make life much easier. ### Plugins and Toolbars These may be hooked in by adding things to EasyIDE ▸ Resources ▸ Settings ▸ Plugins and EasyIDE ▸ Resources ▸ Settings ▸ Toolbars . There are a number of good examples there already. ### Miscellaneous Extensions I had already implemented stuff for creating nice docs, Markdown notebooks, websites, bug tracking, paclet creation, etc. and some of this has made it in as plugins already. More is forthcoming, but for now one can always play with what's in the Plugins menu. In particular the Git plugin is useful for me as I write and develop. ### The EasyIDE API EasyIDE is just a collection of functions wrapped into a single unit. These were designed to (hopefully) be modular and clean to work with. Eventually all core functionality will also make its way to being attached to a single object, the IDENotebookObject . The API for this is based off of my InterfaceObjects package and is object-oriented. This will be documented in due time, but as a taste here's what it can look like: ide = IDENotebookObject[] (*Out:*) ide@"Methods" (*Out:*) {"Open","Save","Close","SwitchTab","Path","Data","SetData","ToggleFileViewer","AddToolbar","RemoveToolbar","AddStyles","RemoveStyles","GetStylesheet","SetStylesheet","SetProjectDirectory","CreateMessage","CreateDialog"} These "Methods" are all operations that the IDE notebook referenced to by EvaluationNotebook[] can perform. Here's an example of creating a message: ide@"CreateMessage"["Hello!"] (*Out:*) As the IDE grows in sophistication so will the methods the API supports. For now, though, these provide the most direct control that is possible to get with the IDE. Answer 14 Replies Sort By: Posted 14 days ago - Congratulations! This post is now a Staff Pick as distinguished by a badge on your profile! Thank you, keep it coming, and consider contributing your work to the The Notebook Archive! Answer Posted 10 days ago I've found two bugs in EasyIDE () with Mathematica 12.0 under macOS 10.14.4 If I incrase a notebook's magnification to, say, 125%, then the Project and Plugins buttons at the to right disappear, and dragging the window's right edge to make it wider does not fix that. Similarly, if I keep the magnification at the default 100% but make the window narrower, then either the File button disappears to the left or the Plugins (and possibly the Project) buttons disappear to the right. If I click the upper-left button, it generates INTERNAL SELF-TEST ERROR: CellStyle|c|1326 Answer Posted 10 days ago Thanks for the report! Unfortunately by design of the Mathematica front end both of these will be hard to sort out but I'll think. As for 1) both of those are due to the fact that the front end is very much so not good for controlling layouts. I need to allocate space for a reasonable number of tabs as well as for the plugin menus. But what I can do is solve the magnification issue by forcing the magnification of just those buttons to remain constant. In fact I should probably have all my GUI elements work like that...As far as the resizings go, there is unfortunately no good way I know of to allocate "just the right amount" of space for the plugins and leave the remaining stuff for the tabs. If I find such a thing I will definitely make it work like that. I could also push the tabs onto their own row, but I need to think through the design implications of that first.For 2) that's a by-product of the front end failing to set the CellStyle for an "attached cell" appropriately. It's something that I've tried to work around before, but haven't found a good way. There are a number of these front end INTERNAL SELF-TEST ERRORs that very standard front-end design will hit but they've never caused a crash for me. If you find a correlation between them and performance degradations (including crashes) please let me know and I'll see if I can work arount them. Answer Posted 9 days ago Unfortunately, on my 27" iMac Retina display, with these old eyes of mine, I cannot comfortably use Mathematica at all when notebooks are at default 100% magnification. Yet when I change to 125% magnification, EasyIDE's Project and Plugins buttons just disappear from the left side of the notebook's top pane, and I cannot recover them without reverting to the untentable 100% magnification. Answer Posted 8 days ago Can you read the tabs / menu elements at 100% or would those need to be larger too? I can try to work around some front-end idiosyncrasies/glitches to get it the menu items to work at larger magnification. On the other hand, I can easily make it so that the menu items remain at 100% while the rest of the notebook content magnifies. Answer Posted 8 days ago Yes, the tabs, buttons, and menu elements are perfectly readable at 100% magnification. (I keep all my palettes at 100% too.)It's only when I'm writing (and then reading) code in a notebook that I need a greater-than-default magnification. Answer Posted 10 days ago Does EasyIDE support a "Properties" query on a notebook, e.g. to reveal the notebook's path and various timestamps? The FrontEnd lacks such an out-of-band mechanism.I tried to use EasyIDE, but tripped. I think there was a glitch during the install. And later, when I tried to create a new File, a message popped up: "SystemDialogInput: Directory specification Inherited is not a String or a FrontEndFileName". Answer Posted 9 days ago Yeah that message definitely sounds like a trip in getting set up. Means it didn't manage to get bound to a directory. There is certainly an object-oriented API that you can use to find paths, create attached dialogs, notification messages, etc. but I don't have anything to get timestamps right now. Answer Posted 9 days ago the installation instructions for EasyIDE show one using first: << https://paclets.github.io/PacletServer/Install.wl How may one permanently install that Install.wl?Is that package part of PublicPacletServer? If so, how install the latter? At https://github.com/paclets/PacletServer/wiki/Installation, you show: PacletInstall["PublicPacletServer", "Site" -> "http://paclets.github.com/paclets/PacletServer/master"] However, even after evaluating Needs["PacletManager"], evaluating that PacletInstall expression, I get a $Failed with error messages: PacletSiteUpdate::err: An error occurred attempting to update paclet information from site http://paclets.github.com/paclets/PacletServer.master. Does not appear to be a valid paclet site." PacletInstall::notavail No paclet named PublicPacletServer is available for download from any currently paclet sites.
Posted 8 days ago
Good catch! I need to update those instructions.This is indeed part of the public paclet server package. That's also on the paclet server so you can do: << https://paclets.github.io/PacletServer/Install.wl PublicPacletInstall["PublicPacletServer"] Then you can do: << PublicPacletServer PublicPacletServer["Install", "MaTeX"] Paclet[MaTeX,1.7.5,<>] Alternately, you can download https://paclets.github.io/PacletServer/Install.wl and put it in a folder called "PublicPacletServer" in $UserBaseDirectory/Applications. Then you can load it like: << PublicPacletServerInstall Answer Posted 8 days ago I find it cleaner to: Create a folder named PublicPacletInstall in $UserBaseDirectory/Applications; Rename that downloaded Install.wl to PublicPacletInstall.wl and put it in that folder; Create Kernel subfolder of that folder and in it put an init.wl consisting solely of: Get["PublicPacletInstallPublicPacletInstall"] The advantage of this bit of extra effort way is that subsequently all I need do is: << PublicPacletInstall
Posted 8 days ago
Impressive and its sad that a user had to make this. I was also doing more work in Python recently and now its hard to come back to Mathematica. So I hope that Wolfram Research is contributing to this and focus more on developer and deployment outside of their ecosystem. It would definitely make sense form a business perspective.
Currently there's nothing like a debugger, but I think its best feature is that it provides a flexible plugin/toolbar system and access to a lot of package development work I've done that allows you to hook other code and packages into it.As an example of what this does for us I have a set of plugins I've already built in: These are all written in pure Mathematica code so you can easily add new plugins and toolbars and whatnot to the IDE just by knowing how to use Mathematica well. I expose these three packages too in it which opens up a lot of possibilities.I can imagine, too, that the work on a Mathematica profiler and "CodeTools" as I think WRI is calling it could be integrated directly without too much issue! Once that package is out, this could even be something that I work on integrating at a deeper level too.The biggest benefit of this is the fact that it allows you to develop stuff in Mathematica and feed it directly back in to the IDE.Also you can use stylesheets, front-end programming, etc. directly in your dev environment. I recently used this to add a few new themes (and anyone can write their own without too much work):
|
2019-04-25 16:24:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2282358556985855, "perplexity": 1706.6065462012398}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578727587.83/warc/CC-MAIN-20190425154024-20190425180024-00507.warc.gz"}
|
https://math.stackexchange.com/questions/1744013/proving-zfc-is-consistent
|
# Proving ZFC is consistent
I've heard from a friend that we can actually prove the consistency of ZFC if we assume at least one inaccessible cardinal exists. How is this carried out, precisely? Googling doesn't help and my friend just know this neat fact, that's all.
This post on the other math forum goes into more detail, as regards the metamathematical theory (it's subtle!). The mentioned rank initial segment $V_\kappa$ are all sets with rank $< \kappa$ (the inaccessible), where every set has a rank by the axiom of regularity in ZFC. See wikipedia with its link to the cumulative hierarchy.
If you think of all sets as built up from the empty set from using the axioms (so forming pairs, unions, power sets etc.), with these axioms we can never cross (in rank) the rank $\kappa$ (this uses that $\lambda < \kappa \rightarrow 2^\lambda < \kappa$, for the power set axiom, e.g.) So if some model (a class) exists, the small sets (i.e. of rank $<\kappa$) also form a model, and is then a set (not a class). And we can prove this within ZFC.
• Specifically, ZFC proves "If $\kappa$ is inaccessible, then $V_\kappa$ is a model of ZFC." (Just to avoid talking about class models.) – Noah Schweber Apr 15 '16 at 17:04
|
2019-08-20 18:37:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9550285935401917, "perplexity": 499.9759665889505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315558.25/warc/CC-MAIN-20190820180442-20190820202442-00111.warc.gz"}
|
http://forum.math.toronto.edu/index.php?topic=759.0;wap2
|
APM346-2015F > Final Exam
FE-4
(1/2) > >>
Victor Ivrii:
Consider the Laplace equation in the ring
\begin{align}
&&&u_{xx} + u_{yy} =0\qquad &&\text{in }\ 1< r = \sqrt{x^2+y^2} < 2,
\label{4-1} \\
&\text{with the boundary conditions}\notag\\
&&& u =\sin(\theta)\qquad &&\text{for }\ r=1,\label{4-2}\\
&&& u= 3 \sin(\theta)\qquad &&\text{for }\ r=2.\label{4-3}\end{align}
(a) Look for solutions $u$ in the form of $u(r,\theta)= R(r) P(\theta)$ (in polar coordinates) and derive a set of ordinary differential equations for $R$ and $P$. Write the correct boundary conditions for $P$.
(b) Solve the eigenvalue problem for $P$ and find all eigenvalues.
(c) Solve the differential equation for $R$.
(d) Find the solution $u$ of (\ref{4-1})--(\ref{4-3}).
Vivian Tan:
(a) We solve this question using the two dimensional Laplacian in polar coordinates. So the equation becomes:
u_{rr} + \frac{1}{r}u_r + \frac{1}{r^2}u_{\theta \theta} = 0
So the equation is
\frac{R''}{R} + \frac{1}{r}\frac{R'}{R} + \frac{1}{r^2}\frac{P''}{P} = 0
Multiplying through by $r^2$, we see that the equation is:
r^2\frac{R''}{R} + r\frac{R'}{R} + \frac{P''}{P} = 0
The equation is now separable. We have a set of ordinary differential equations:
\begin{gather}
r^2\frac{R''}{R} + r\frac{R'}{R} = \lambda \\
\frac{P''}{P} = - \lambda
\end{gather}
The boundary condition for P is that it is $2\pi$-periodic: $P(\theta) = P(2\pi + \theta)$.
Vivian Tan:
(b) The eigenvalue problem for $P$ (as seen in part a) is:
P'' = - \lambda P
First let $\lambda = \omega^2 > 0$. Then the equation is $P'' = - \omega^2 P$, and the solution is $P(\theta) = A\cos(\omega \theta) + B\sin(\omega \theta)$.
Let $\lambda = - \omega^2 < 0$. Then the equation is $P'' = \omega^2 P$, and the solution is $P(\theta) = C\cosh(\omega \theta) + D\sinh(\omega \theta)$.
Then let $\lambda = 0$. Then the equation is $P'' = 0$, and the solution is $P(\theta) = E \theta + F$.
So the eigenvalues are $n^2$, and the eigenfunctions are $P(\theta) = A\cos(n \theta) + B\sin(n \theta)$.
Vivian Tan:
(c) The differential equation that we must solve for $R$ is $r^2\frac{R''}{R} + r\frac{R'}{R} = \lambda$, or $r^2R'' + rR' = \lambda R$, so we assume a solution $r^m$, and then we get:
r^2m(m-1)r^{m-2} + rmr^{m-1} - \lambda r^m = 0 \longrightarrow m^2 - m + m - \lambda = 0 \longrightarrow m^2 = n^2
So we have $m = \pm n$, which means that the solution is $R(r) = r^n + r^{-n}$.
Vivian Tan:
(d) We then have a general solution $u(r, \theta) = \sum_1^{\infty}$, and we have to make use of the boundary conditions. So we have:
u(r, \theta) = \sum_1^{\infty} \left[ r^n \left(A_n\cos n \theta + B_n \sin n \theta \right) + r^{-n} \left( C_n \cos n \theta + D_n \sin n \theta \right) \right]
As $r=1$:
u(1, \theta) = \sum_1^{\infty} \left[ \left(A_n\cos n \theta + B_n \sin n \theta \right) + \left( C_n \cos n \theta + D_n \sin n \theta \right) \right] = \sin\theta
We see that this is only matched when all terms that aren't $n=1$ are zero. Also, $A_1$ and $C_1$ are zero. So we're left with:
B_1 + D_1 = 1
Likewise for $r=2$:
u(2, \theta) = \sum_1^{\infty} \left[ 2^n \left(A_n\cos n \theta + B_n \sin n \theta \right) + 2^{-n} \left( C_n \cos n \theta + D_n \sin n \theta \right) \right] = 3\sin\theta
By the same logic as before, notice that we must have all coefficients zero, except for $B_1$ and $D_1$, where we have that:
2B_1 + \frac{1}{2}D_1 = 3
We can solve these two equations for $B_1$ and $D_1$ to get that $D_1 = -\frac{2}{3}$ and $B_1 = \frac{5}{3}$. Our answer is then:
u(x,t) = u(r, \theta) = r \frac{5}{3} \sin n \theta - \frac{1}{r} \frac{2}{3} \sin n \theta
|
2020-10-29 08:42:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9998273253440857, "perplexity": 1091.6277431086608}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107903419.77/warc/CC-MAIN-20201029065424-20201029095424-00025.warc.gz"}
|
http://rationalwiki.org/wiki/RationalWiki:Saloon_bar/Archive74
|
# RationalWiki:Saloon bar/Archive74
This is an archive page, last updated 4 January 2012. Please do not make edits to this page.
## Boring, boring, content is boooooring
We just had three new cover articles in quick succession. w00t!
Unless anyone has a hot silver article they really think is a winner, I'd suggest the next task is to go through Category:Cover story articles, which has a lot of old stuff that doesn't really pass muster any more - and do what you can to bring it up to scratch. Simple things like copyediting and polishing and making purty will go a long way. I wouldn't suggest removing things from the cover list if they can be brought up to scratch - David Gerard (talk) 15:58, 6 September 2010 (UTC)
Do you have a suggested list of what "doesn't pass muster any more"? ħuman 01:24, 8 September 2010 (UTC)
## Animanics had the best songs
The last line no longer works after FOX news--Thanatos (talk) 00:09, 8 September 2010 (UTC)
## Gödel, Escher, Bach
Anyone here read it? I just started it, and it's pretty awesome. Like crack for people interested in math, logic, and/or philosophy. Tetronian you're clueless 02:54, 8 September 2010 (UTC)
Actually, people aren't allowed to join this site unless they own a copy of the Eternal Golden Braid. How did you squeak by? ħuman 03:07, 8 September 2010 (UTC)
On first pass its cool, but it hasn't really "aged" well for me. I think it muddles some key concepts of neurology and neuroscience, and he has over the years complained that no one seems to understand what he is trying to say. If you can't get your point across in 700+ pages....tmtoulouse 03:33, 8 September 2010 (UTC)
My copy suffered from being very poorly bound (pages falling out and so on) and then got severely water damaged three years ago. I'll have to get another copy, I suppose. 03:48, 8 September 2010 (UTC)
My favorite bit was "air on a G-string". ħuman 04:12, 8 September 2010 (UTC)
I have never read it, and probably never will. My brother used to have a copy, though. --Kels (talk) 13:05, 8 September 2010 (UTC)
@Human: So far I think the various puzzles that he throws at you are the most fun, and the Achilles/Tortoise dialogues are pretty interesting. Tetronian you're clueless 17:29, 8 September 2010 (UTC)
FFS, don't bring up Gödel when TalkerX is around... Occasionaluse (talk) 17:49, 8 September 2010 (UTC)
Actually, I've been meaning for him to have a look at this. Tetronian you're clueless 17:58, 8 September 2010 (UTC)
Love GEB, read it a few times, and got a new copy for my birthday. I actually prefer Metamagical Themas - I find it a bit more approachable, and it introduced me to Nomic, Underwhelm, and various very interesting topics. Well worth picking up if you get a chance. Worm(t | c) 19:02, 8 September 2010 (UTC)
I have a copy, but I haven't bothered to read it, and enough people whose opinions I respect have said it's not worth it that I don't think I ever will. Evil stupid Hoover! 20:49, 8 September 2010 (UTC)
## Chapter 9.— Whether We are to Believe in the Antipodes
This got a giggle out of me. Jerry Coyne linked to this page, commenting that St. Augustine the Hippo believed in Noah's Flood. Go check Chapter 9, and you'll see why I giggle. --Kels (talk) 01:17, 9 September 2010 (UTC)
Well, you have to remember, the theory that there are antipodes is just that, a theory! Sure there's "scientific" and "observational" evidence for the theory, but maybe that's just there to test our faith! --Sir Onion Kneel before my vegetable might! 01:23, 9 September 2010 (UTC)
Like gravity, or the shape of the earth! Quaruninja - You can't explain that! 01:26, 9 September 2010 (UTC)
All I know is, when I look around all the people I see are on this side of the Earth (as if there were any other side). Why should I listen to a couple of unreliable sources who only claim to be from there? --Kels (talk) 01:31, 9 September 2010 (UTC)
I am from the propodes. ħuman 02:58, 9 September 2010 (UTC)
My sister was raped. I'm not going into details on how it happened. My mom is blaming herself and I am confused about my own feelings. Years ago, when my grandfather died, I did not cry. He was very sick and we got to spend Christmas with him. This though, part of me wants to start punching holes in the walls and the other is wondering if these are my own feelings me just trying to act normal. This is painful to write. I'm going to be gone for a week or two trying to deal with this. And while I'm doing this, I'll make a confession I haven't read the Overton Window since I posted my thoughts on part 1. I'll try to finish it when my head clears up and life returns to some semblance of normality, but that might not be for awhile.--Thanatos (talk) 14:42, 6 September 2010 (UTC)
Suggestion: email Gooniepunk2010. 15:10, 6 September 2010 (UTC)
There was some good advice on Goonies talk page. Your sister was harmed and you feel like maybe you could have protected her somehow. This is normal but you are not responsible , nor is your mother , only the rapist. Sure , maybe some things may have been done differently but thats almost always true, so less blame and more dealing with whats happened. Drop in to a gym and beat the crap out of the heavy bag, it may make you feel better. Counselling for everyone may help. Your sister may "get over" the incident , or at least learn to live with it. That can take a while, so be prepared for it and be supportive. If you need help finding support groups send me an email with your location and I will see what I can find. Hamster (talk) 18:00, 6 September 2010 (UTC)
Thanatos, this is one of the most horrible experiences ever to fall upon a family, and you all have my utter sympathy. The healing process will be very, very difficult. But there are two pieces of advice I can offer you that may not seem like much now, but will be very key down the road:
• Do not blame yourselves. Only the son of a bitch that raped her is at fault. As much as you wish you could change the past, you can not. Do not use this as a time to lay blame with each other, or this asshole wins. Instead, use this as a time of solidarity for your family. Love conquers all, and and renewing the love within your family in the face of this tragedy will, over time, be the most important part of the healing process.
• Keep communicating with your sister. Perhaps not about this, but about other things. One of the most important things I used to keep talking to my sister was quoting lines from the Rocky Horror Picture Show. Don't let her shut herself out, and yet allow her some space. Support her. Help her understand that this whole fucked up situation is not her fault, and that she doesn't need to close out the world as a result.
This is some of the most important advice I can give you, brother. If you need to talk and/or want some support, feel free to e-mail me. I'm here for you through this, bro. The Goonie 1 What's this button do? Uh oh.... 18:51, 6 September 2010 (UTC)
I cried at the death of my grandfather and cannot pretend I know what you are going through, but please accept my token of sympathy. I also second what Goonie said. ListenerXTalkerX 06:29, 8 September 2010 (UTC)
My sentiments are literally exactly the same as LX's. Sorry mate, all the best. SJ Debaser 11:57, 8 September 2010 (UTC)
## Operation Pap Smear
So the Pope's coming to my city soon and I'm thinking about knocking up a sign for the event. I'm tempted to take a conspiracy nutjob angle with it and go "illuminati" or protest the Pope's hat or something. Whatever it is I'm only doing it for the lulz. Anyone have any better ideas? Apparently I've only got a week to prepare (yeah I haven't really been following the story). Also y'all should come to birmingham for the event. ONE / TALK 20:29, 8 September 2010 (UTC)
I'd like to see the "epic face" worked in...or maybe the pedobear. "epic face + pope hat = pedobear" or something. Occasionaluse (talk) 20:32, 8 September 2010 (UTC)
Found lots:
"Fags hate God"
"Catholicism: guilty!"
"Choke the Pope"
"Abstinence makes the church grow fondlers"
"Catholicism aids AIDS"
"Mr Pope, You are completely surrounded, Give yourself up"
"Pope: Paedo By Proxy"
CrundyTalk nerdy to me 20:38, 8 September 2010 (UTC)
Ooh, I like:
"Popemobile: Because Bullets are Real and Your God is Not!!"
CrundyTalk nerdy to me 20:41, 8 September 2010 (UTC)
But those are all too logical. Occasionaluse (talk) 20:43, 8 September 2010 (UTC)
His Holiness the Pope
The Lock Up Your Children Tour 2010
- Edinburgh 16 September
- Glasgow 16 September
- London 17/18 September
- Birmingham 19 September
"And Ye Shall Rape What You Sow"
CrundyTalk nerdy to me 20:44, 8 September 2010 (UTC)
Ouch, Crundy, that was painfully awesome! ħuman 03:00, 9 September 2010 (UTC)
How about protesting discrimination against overweight/homely altar boys? Occasionaluse (talk) 20:48, 8 September 2010 (UTC)
It's comments like these which remind me why TK is a Catholic. Lily Inspirate me. 20:57, 8 September 2010 (UTC)
──────────────────────────────────────────────────────────────────────────────────────────────────── Don't use "Who takes YOUR confessions?" The Pope does go to confession, to another priest. MDB (talk) 21:07, 8 September 2010 (UTC)
You could always go for the "PURGE THE CHURCH OF LIBERAL BIAS" angle... Occasionaluse (talk) 21:10, 8 September 2010 (UTC)
"Easter is cancelled — They've found the body". Thank you to Jim Butcher for that one.-- Jabba de Chops 22:42, 8 September 2010 (UTC)
The always controversial: "Was in an organisation that hated the gays and abused children — now heads an organisation that hates the gays and abuses children"-- Jabba de Chops 22:42, 8 September 2010 (UTC)
"Feck me, it's the feckin' Pope"-- Jabba de Chops 22:48, 8 September 2010 (UTC)
"We chucked you out in the 1530's — TAKE THE HINT"-- Jabba de Chops 22:54, 8 September 2010 (UTC)
"Who's the twat in the big white hat?"-- Jabba de Chops 23:21, 8 September 2010 (UTC)
"Ordain in the Catholic Church — for when sheep-buggering really isn't enough"-- Jabba de Chops 23:27, 8 September 2010 (UTC)
"International Give The Pope a Wedgie Day"-- Jabba de Chops 23:33, 8 September 2010 (UTC)
"Save a Child — Hoist a Pope"-- Jabba de Chops 23:33, 8 September 2010 (UTC)
"The Pope speaks— live from God's arse to his mouth"-- Jabba de Chops 23:33, 8 September 2010 (UTC)
When I was young I had a T-shirt which read "The Pope smokes dope - God gave him the grass." It had a suitably stoned looking pope sitting on a throne-like chair. My catholic mum didn't like it one bit! RagTopGone sailing 00:54, 9 September 2010 (UTC)
Don't bother with trying to be witty or ironic, just go for something like "PEOPLE WHO PROTECT PAEDOPHILES ARE SCUM" or suchlike. DeltaStarSenior SysopSpeciationspeed! 08:22, 9 September 2010 (UTC)
I suggest a giant sign saying "IT'S A BIT MORE COMPLICATED THAN THAT".-- Kriss AkabusiAAAWOOOGAAAR!!1 08:53, 9 September 2010 (UTC)
All this talk of pedophilia reminds me: what happened to Fox? Did he really quit for good? Shame. He was fun when he wasn't drinking (and sometimes when he was). DickTurpis (talk) 13:40, 9 September 2010 (UTC)
I'd go with "You're not my father." With Darth Pope, if I could actually draw. --JeevesMkII The gentleman's gentleman at the other site 18:58, 9 September 2010 (UTC)
## BP Deepwater Horizon disaster report
It basically says "don't blame us!". The report and a 30minute video can be found here. Whilst Transocean were certainly at fault for having a duff BOP and less than on-the-ball drillcrew, and Halliburton's cement recipe might not have been the best, the key to all this was BP's decision to only use 6 centralisers on the final casing, when they needed 21. However, page 37 of the report somehow comes to the conclusion that
“”Although the decision not to use 21 centralizers increased the possibility of channelling above the main hydrocarbon zones, the decision likely did not contribute to the cement's failure to isolate the main hyrdocarbon zones —BP report
Despite a Halilburton cementing engineer explicitly telling them that it wouldn't work (See pages 8 & 9 here)
“”[...with only 7 centralizers the] well is considered to have a SEVERE gas flow problem —Halliburton cement expert in email to BP
Whitewash. DeltaStarSenior SysopSpeciationspeed! 12:46, 9 September 2010 (UTC)
Did you expect anything else? CrundyTalk nerdy to me 14:37, 9 September 2010 (UTC)
I was hoping for a greenwash... "the decision to only use 7 was an environmental concern, and helped the drilling platform be more environmentally friendly, assuming haliburton's cement held up..." you have to admit, that would have been a lot funnier.. Quaruninja - You can't explain that! 15:11, 9 September 2010 (UTC)
## Geocentrism Conference - Galileo is wrong - Nov 6 in Indiana
For those of you who thought the Earth moved through space, attend the First Annual Catholic Conference and learn ... well whatever. http://www.galileowaswrong.com/galileowaswrong/ Hamster (talk) 06:34, 10 September 2010 (UTC)
CMI sees fit to print these people's dreck on occasion. I once tried to insert a bullet point about it on aSK, armed with the reference to a geocentrist letter published by CMI, but a reference to CMI is apparently not refutation-proof and the edit was reverted for parody. ListenerXTalkerX 06:39, 10 September 2010 (UTC)
## That time again (request for funds)
Now that the RationalWiki Foundation is all set up and in legal possession of RationalWiki it is time to start looking at fiscal solvency and making sure that we are able to pay our bills. We have decided to manage the finances roughly quarterly. So every 3-4 months we will need to ask for some donations from you guys. But for what we get the financial requirements for RW are actually pretty small. The details of what we need and why can be found at the 2010 Q4 budget.
There are a few extra overhead costs we are budgeting for this round, we would like to raise $250 in the next couple of weeks. We have close to 300 active editors on the site, so if we could just get$5-$10 from even 10 percent of the people editing the site, let alone reading we would be set till 2011. I think its a pretty good deal for everyone. For now if you can spare a few dollars please head over to RationalWiki:Site support and throw a little our way. This is just the initial plea, in the next week or so we may need to put a few reminders around the wiki to help out donations. Feel free to ask any questions or comments. tmtoulouse 05:35, 29 August 2010 (UTC) Please see your inbox....AceX-102 05:57, 29 August 2010 (UTC) Donation sent. sterile ninja 00:59, 30 August 2010 (UTC) The sitenotice only appears for logged-in users, not casual readers of the site. Is that intentional? - David Gerard (talk) 10:35, 1 September 2010 (UTC) ### Sidebar donation button We had a discussion about improving the sidebar donation link during the last donation drive, but that went nowhere, so I've decided to unilaterally and without prior discussion change the long wording to a simple "Donate", and make it a bit fancier. I've also moved it to the top for the duration of the donation drive (here's the relevant edit). Feel free to disagree, whine, flame, etc. -- Nx / talk 06:31, 29 August 2010 (UTC) I think it is an improvement over what we had certainly. Thanks. tmtoulouse 06:33, 29 August 2010 (UTC) Wow, it worked for me! ГенгисOur ignorance is God; what we know is science. 06:46, 29 August 2010 (UTC) I'd complain, but public opinion seems to be in your favor. So, good work! ħuman 08:16, 29 August 2010 (UTC) ### Success Apparently we have reached our goal for this quarter. That is wonderful and I'd like to thank everyone who pitched in! ħuman 19:53, 7 September 2010 (UTC) ## Office life Fucking hell it's bloody awful. I've never worked in an office before, but I've been seconded in to cover for my boss for three weeks whilst he's on holiday. I've only been in for an hour and I bloody hate it. How do people cope with it? I think I've made a good start as I've just been signing anything that's put in front of me. DeltaStarSenior SysopSpeciationspeed! 08:57, 6 September 2010 (UTC) Boring! DeltaStarSenior SysopSpeciationspeed! 10:40, 6 September 2010 (UTC) Have to agree, I always found it drab. I hung around with "workers" and customers as much as I could once I had been "promoted" to the office. (happened about 5 times in my work career) I found the males too PC (scared of offending) & the females too ditzy in the offices. Once I'd reached as high as I could I quit & started again @ the bottom elsewhere. I was flexible enough to do almost any job commensurate with my size & strength (no digging ditches or fire & rescuing ) There's those who live to work and those who work to live. Don't let yourself become the former - unless you've got a really interesting job. 15:05, 6 September 2010 (UTC) I recommend getting a big fucking gun and killing everyone on the freeway on your way home. Worked for me my first job. Bullets get expensive tho'. It took me about twenty years, but I finally have a job that doesn't suck too bad and where I actually quite like my co-workers. YMMV - David Gerard (talk) 16:05, 6 September 2010 (UTC) Offices aren't inherently better or worse workplaces than anywhere else. It tends to depend on how interesting or tedious the actual work, employer and co-workers are. I used to think I would hate office work, but the office I work at now (in a university) is actually the best place I've worked yet. I'm glad not to work in a corporate office though, & having worked in a call centre before, would never want to go back to that. €₳$£ΘĪÐMethinks it is a Weasel 17:05, 6 September 2010 (UTC)
I think it's sedentary nature of office work which is doing my head in; I've always worked in an industrial/outdoors hands-on (we say "on the tools" in the UK, but I've not sure how that could be misconstrued elsewhere (although "hands-on" could be just as bad)) environment, so this is quite a culture shock to me, as is the monotony of 9to5ing. Oh well, it's only for a few weeks. DeltaStarSenior SysopSpeciationspeed! 09:39, 7 September 2010 (UTC)
### Another day
Another BORING! DeltaStarSenior SysopSpeciationspeed! 09:39, 7 September 2010 (UTC)
Welcome to office life. If you want a picture of the future, imagine a boot stamping on a human face— forever. Bondurant (talk) 11:54, 7 September 2010 (UTC)
I hear you brother. (Sorry for the block but I didn't know the filename!) DeltaStarSenior SysopSpeciationspeed! 11:59, 7 September 2010 (UTC)
Before you know it, you'll be an insomniac pretending to have ball cancer and crying to sleep, start an underground fighting movement, and then shoot yourself in the face. SJ Debaser 12:14, 7 September 2010 (UTC)
### Final solution?
My boss has auto-forwarded his emails on to me; if I was to auto-forward to him, would that result in the destruction of the office (and possibly the universe)? DeltaStarSenior SysopSpeciationspeed! 08:17, 9 September 2010 (UTC)
A student in my university did that with the two main servers. It crashed both, and it took the admins hours to figure out why the servers worked right when only one was running, but crashed instantly when they booted up the second one. They said no-one can be that stupid, so it must've been an intentional prank, and they expelled him. -- Nx / talk 10:01, 9 September 2010 (UTC)
Reminds me of the time I made a filesystem in a file, mounted it, then moved the file into itself (nothing interesting happened, though). Evil stupid Hoover! 15:07, 9 September 2010 (UTC)
That's the dullest anecdote I've ever heard, PH. However it is also been the highlight of my day, so thanks for sharing it! DeltaStarSenior SysopSpeciationspeed! 15:19, 9 September 2010 (UTC)
You want excitement? I just got a free replacement dongle for my mobile broadband. It came with a mini (about 10cm) USB extension cable:
I plugged the cable into itself:
nothing happened!
15:57, 9 September 2010 (UTC)
The high point of my last job was asking out several girls who all said no. Also, I once got threatened by a gypsy. Good memories. SJ Debaser 11:36, 10 September 2010 (UTC)
Like this? CrundyTalk nerdy to me 12:02, 10 September 2010 (UTC)
### Excel hell
I had to leave my last job because of the despair caused by the sheer mind numbing tedium of it all. It got to point where I was having reoccurring dreams of Excel. They weren't nightmares per say, just really really boring rubbish dreams.--AMassiveGay (talk) 11:00, 10 September 2010 (UTC)
## I have arsed up my degree
I am a day away from completely failing my degree. I am mighty pissed.--AMassiveGay (talk) 22:00, 7 September 2010 (UTC)
As long as you finish the course and get a grade, there's no such thing as "arsing up" a degree. CrundyTalk nerdy to me 22:04, 7 September 2010 (UTC)
You can this one, kind of. It is an open university thing. I am essentially redoing a module I failed to finish last year. One wasted year was bad enough, but two and I may as well not bother as I would be pushing forty by the time I finished. And it burns all the more knowing it is entirely my fault--AMassiveGay (talk) 22:11, 7 September 2010 (UTC)
Not that I could have afforded the next course anyhow. My life is one long Smiths record.--AMassiveGay (talk) 22:20, 7 September 2010 (UTC)
I more lurk than not here, but I can answer this one. Don't despair or give up: I lost my job last year, so I've embarked on an OU degree. I'm 45 now, I'll be 48 before I get it, but that still leaves a good 25 years to make use of it (if the fags and weed don't get me first, of course). Seriously man, 40 is nothing.Silvermute (talk) 23:24, 7 September 2010 (UTC)
I'm hoping to do an MChem next year and I'll be 40 at the end of the 4 years, assuming I get in. No such thing as too old to get a degree man, even if it does take a couple extra years.-- Jabba de Chops 02:55, 8 September 2010 (UTC)
Hindsight is wonderful. The number of times I have let something like that pass on the grounds that I'm too old really annoy me now. I had a real opportunity to go full time studenting, with a livable grant and all at 30(!) and turned it down for just that reason. Go for it, life's all about learning & (to warp a phrase) 'tis better to have learned and forgot than never to have learned at all. 03:56, 8 September 2010 (UTC)
There is also the argument that being a student does not impart merely the subjects being taught, but also the mental "toolkit" necessary for serious thinking. ListenerXTalkerX 05:13, 8 September 2010 (UTC)
There's a Syrian proverb that suggests, "لا تقول للمغني غني حتى يغني لحالو" - Do not ask a singer to sing unless he wishes to sing for himself. Do you really want this degree, or are you doing it to fulfill the expectations of others or yourself?--talk 05:30, 8 September 2010 (UTC)
The same thing happened to me. I was doing a distance learning 3-year masters course in GIS and owing to "domestic issues" didn't get the work done in time so dropped it after about 15 months. Distance learning requires a lot more self-discipline as you don't have the framework imposed by attending a physical institution and you also need the complete support of those around you when home-studying otherwise you get interrupted/distracted too much. I don't regret it but wish I had completed the course as I was looking at a change of direction in my career but ultimately I carried on doing the same thing (which I do enjoy). You need to look honestly at yourself and see why you didn't get the work done in time. Could you really do it if you tried again next year? If you think you really have the discipline to do it then go ahead, but if you procrastinate a lot then it will slip once again and you'll be wasting more money. Ultimately it comes down to how much you really want it, but I wouldn't let the age thing be a deciding factor. ГенгисOur ignorance is God; what we know is science. 08:02, 8 September 2010 (UTC)
Kudos for admitting that it's entirely your fault, far too many people blame anyone and anything bar themselves. However, as it's entirely your fault, why are you whinging? Was it a proper (ie STEM) degree or some poxy 'media & society studies' type thing? DeltaStarSenior SysopSpeciationspeed! 08:12, 8 September 2010 (UTC)
Poxy? Imagine what a couple of people with media studies degrees could do here? Totnesmartin (talk) 10:54, 8 September 2010 (UTC)
It is a diploma in computing I am working towards, which once gained, I will try to turn into a degree. The current course I am on is all Java programming. I was a little bit depressed and panicky last night as I had left it so late to do the final assignment and was foiled by the first section. Went to bed, and this morning, all refreshed, took another stab at it. I have completed the second third this morning, so a little bit more optimistic now. Still stumped on the first third, but if I do the rest, I can still salvage an acceptable mark. Somewhat angry at myself that I have yet again left it so late to do. Now, must crack on--AMassiveGay (talk) 13:51, 8 September 2010 (UTC)
It's not you, it's java. When your uni gives you java applets, you find a new uni. --85.76.141.4 (talk) 17:19, 8 September 2010 (UTC)
I'm in the same boat as a few of the folks above, returning to school as a "mature student" (at least chronologically) to pursue a third career in a rather uncertain profession (you can make a decent living at art, but it's hardly a guarantee). But you know what? I'm gonna take my inspiration from folks like Robert A. Heinlein (died still writing at 81), Johnny Hart (died at his drawing desk at 76) and the legendary Grim Natwick (still teaching well into his 80's, died at 100), and view this as less than my halfway point. Here, have a bit of inspiration on that score. --Kels (talk) 13:17, 8 September 2010 (UTC)
I was gonna write something about procrastination, but I'll do it later. 14:32, 8 September 2010 (UTC)
Just an Update. I finished my work and handed it in. It was crap but should be enough to pass - assuming I don't fail the exam. I am conflicted as to whether this the right degree for me but I have no idea what to do instead. I don't fancy a future that involves too much Excel but without qualifications of some sort, that what lies in store for. On the bright side, it beats my former employment in the chicken factory.--AMassiveGay (talk) 10:49, 10 September 2010 (UTC)
Actually, reading all these comments really reassures me. I'm just starting my BA this year (well, week after next! scary! Gotta move up north!), spent the last two years doing a Cert. HE to get on this course as I never did any A levels, and just to learn how to go about academic work. I really want to get into the research and teaching side of my subject which basically means a Phd. I'll be 30 next year, and I've been agonising over whether or not it's realistic or just a huge waste of time and money and if should go back to doing tech support... "did you try switching it off and on again?" "What lights are showing on the modem? The modem... the black upright box, the Motorola... no, mo-to-ro-la..."
But, knowing you guys are doing this stuff 10+ years beyond me, that's really heartening, and the last two years have been the best of my life and I really want to pursue this and not see my brains turn to call-centre flavoured ooze, so.... thanks for lifting my spirits. Wish me luck pls, I'm scared! -- 21:55, 10 September 2010 (UTC)
## More political excitement
Will the Teatards shoot themselves in the foot in Delaware, turning a shoo-in victory into a probable loss? I hope so. So does Andy, apparently. DickTurpis (talk) 16:07, 10 September 2010 (UTC)
## New Cracked Article
The 5 strangest things evolution left in your body. CrundyTalk nerdy to me 15:46, 9 September 2010 (UTC)
Ha! So much for "intelligent" design. --PsyGremlinSpeak! 15:54, 9 September 2010 (UTC)
Ear wiggling: I'm an ear wiggler: the effect is terrific when, for whatever reason, you have no hair. Scaring/fascinating small children in queues is a favourite. I have a theory that everyone could wiggle their ears if they only knew how; I mean: can you tell anyone how to raise their arm? Nor can I tell anyone how to wiggle their ears. It's not the ability that's missing, it's the ability to use that ability. I had a friend who could similarly raise goosebumps on her arm - weird. 16:08, 9 September 2010 (UTC)
I'm also an ear wiggler. I taught myself how, but yeah, I don't think I could teach anyone else. 18:18, 9 September 2010 (UTC)
this is all athiestic nonsennse. you're small minded worldview just canot comprehend the awsome power of god. BillC (talk) 20:38, 9 September 2010 (UTC)
Perhaps god has severe copy-and-paste tendencies? Evil stupid Hoover! 21:55, 9 September 2010 (UTC)
I'm sure you'll find the modern purpose of the appendix if you believe really hard. 21:59, 9 September 2010 (UTC)
Evolution does't discount the idea of the appendix having a purpose. On the subject of ear wiggling, I can do it, but not when I smile, and since I figured it out I cannot raise my left eyebrow with out my left ear coming along for the ride. Maybe I should make a video of my using my face... hmm. --Opcn (talk) 06:43, 11 September 2010 (UTC)
### So why do straight men have earlobes?
Not too sure about the appendix. But God obviously gave us girls earlobes to have somewhere to hang our earrings. It's the only explanation for their existence(earlobes that is). So why do straight men have earlobes?--Hillary Rodham Clinton (talk) 14:46, 11 September 2010 (UTC)
It's a temptation placed there by the Devil, to try to get them to join the Homosexual Agenda. ħuman 18:40, 11 September 2010 (UTC)
An interesting theory. Perhaps the devil created the appendix as well to make it look as though evolution is a fact.--Hillary Rodham Clinton (talk) 19:42, 11 September 2010 (UTC)
## Countdown hilarity
I was keeping my Gran company this afternoon and we were watching Countdown on CH4. "NELIWAFGM" came up and other than looking like a Welsh word, I immediately saw "newfag," even though I've never been on /b/ before. SJ Debaser 18:38, 10 September 2010 (UTC)
Go and stand in the corner and think about what you've done. CrundyTalk nerdy to me 19:56, 10 September 2010 (UTC)
"Hilarity", "Countdown"? That's a brand new association. Admit it, you were really watching Rachel's bum. 20:01, 10 September 2010 (UTC)
I was watching the letters, and Rachel's bum. And a damn fine bum it was... Letters! I meant letters... SJ Debaser 21:34, 10 September 2010 (UTC)
Isn't it just: outperforms the Vorderperson IMHO. 21:44, 10 September 2010 (UTC)
Just in passing. 21:56, 10 September 2010 (UTC)
I am out of touch. You mean Carol Vorderman no longer does countdown? --JeevesMkII The gentleman's gentleman at the other site 22:22, 10 September 2010 (UTC)
Richard Whitely's dead? When did that happen? And while I'm here, whatever happened to Treaure Hunt? Haven't seen that on for a couple of weeks…-- Jabba de Chops 23:37, 10 September 2010 (UTC)
No need to invoke liberal deceit and liberally lie about "keeping your Gran company" to justify your watching of countdown, Josh. Watch it with great pride. My personal favourite was the time when Dick Whitely sported a nice "Countdown" tie, however his clip-on microphone completely obscured the 'o'. Top notch. I'll see if I can find t screenshot... DeltaStarSenior SysopSpeciationspeed! 18:42, 11 September 2010 (UTC)
## Sailing By
Have any of my fellow here Brits discovered the joys of Sailing By, the soothing tune played before the final shipping forecast on Radio 4? I heard it for the first time the other week and have never felt so simultaneously confused and enchanted. I just heard it again driving home from a friends and felt moved to write about it somewhere. On a related note, I am now making it an idle ambition to be able to understand what the hell anything in the shipping forecast means. 86.131.215.160 (talk) 00:22, 11 September 2010 (UTC)
The Shipping Forecast by Martine Stead.-- Jabba de Chops 02:27, 11 September 2010 (UTC)
Blightynet has an article. ГенгисOur ignorance is God; what we know is science. 19:25, 11 September 2010 (UTC)
"Sailing By" appears on the soundtrack of the film "Priest" as Linus Roache gets in after a night of passion with Robert Carlyle. It made me think that it was a dissapointingly short night of passion!
## Burning Korans vs. Drawing Mohammed
Okay, I think most of around here think the "pastor" behind "Burn a Koran" day is a jerk.
But I think most of around here found "Draw Mohammed Day" amusing.
I fit into both groups myself.
But I'm also asking myself, "what's the difference? Both ultimately boil down to 'let's piss off the Muslims'.", right?
MDB (talk) 17:32, 9 September 2010 (UTC)
The main difference is the reasons behind it; the Qur'an burnings are because he is a religious bigot who wants to censor others; the Mohammed drawings to demonstrate against religious bigots who want to censor others. Evil stupid Hoover! 17:36, 9 September 2010 (UTC)
Plus the fact that book burning has a universally negative connotation. Tetronian you're clueless 17:43, 9 September 2010 (UTC)
Good points, both of you.
Perhaps there's a compromise -- "Burn a Drawing of Mohammed Day", or maybe "Draw Mohammed on the Koran Day". MDB (talk) 17:52, 9 September 2010 (UTC)
As an atheist, I don't give a shit either way, but Christians have something to consider. The drawing muhammad thing they have no parallel for, so I can see why they don't get it. What I think is hypocritical is that a Christian who would flip his shit over someone burning the Bible would burn the Quran with the express intent to offend Muslims. "Do unto others..." (but only if they're Christian). Then again, if you went to the South and burned a Bible in public, I would bet the farm that you would be physically attacked. Any way you look at it, religious people are fucking insane... Occasionaluse (talk) 18:24, 9 September 2010 (UTC)
Just to play Devil's Advocate a little... no, Christians would not get angry about "Draw Jesus" day in general, but there are plenty of artistic representations of Christ that have inflamed Christians. Take the controversy over the "Piss Christ" display. MDB (talk) 18:46, 9 September 2010 (UTC)
Oh, and speaking of "Piss Christ" and similar works... as a Christian, I found it tasteless, but I defend artist's right to produce it. MDB (talk) 18:51, 9 September 2010 (UTC)
Don't worry, some of your more adamant brethren threatened death over it :P One point for the Muslims: at least they follow through... Occasionaluse (talk) 18:56, 9 September 2010 (UTC)
Well, one is a constructive act, the other destructive. I do like the compromises suggested above, though! Also, how about it's only ok to burn the Koran if it is draped with a soiled US flag first? ħuman 19:16, 9 September 2010 (UTC)
There may have been a lot of people who drew Muhammad just to piss off Muslims, but the original motivation behind it was to express the notion that Muslims can't expect others to conform to their particular rules and taboos. This, of course, does not work when your own religion has the exact same prohibitions in place. A cartoonist drawing Muhammad can make fun of Muslims attaching such an inflated symbolic importance to a silly drawing, but if a Christian priest decides to burn another religion's holy book, he probably doesn't want to make a similar point. Röstigraben (talk) 21:23, 9 September 2010 (UTC)
He's just called it off. Wuss. All a publicity stunt, if the tit had some real balls he'd have tried to drive into Mecca and do it there.Anyway. I'm all for burning Koran's, but if you're going to do that, you'd may as well as a Bible, Mein Kampf a copy of the US Constitution, Lord of the Rings, a printout of the RationalWiki mainspace... and so on. theist 21:47, 9 September 2010 (UTC)
BBC has a statement from him where he says he called it off in exchange for the Cordoba Center to be moved to another location? Seriously? Röstigraben (talk) 21:54, 9 September 2010 (UTC)
BBC World News is covering it quite intensely. He claims to be flying out to meet the Imams responsible, but really, I think he's just chickening out after getting the publicity stunt off his chest. theist 22:09, 9 September 2010 (UTC)
Others have already said it. Draw Mohammed day was a protest with a purpose: Standing up against people who use violence to suppress people's free speech. Burning a Koran serves no purpose other than inciting racial hatred (which fundy Christians seem very good at). CrundyTalk nerdy to me 22:14, 9 September 2010 (UTC)
World Service just put out a quote from one of the Imams in response to the calling off, not sure where it leaves it. Seems as if there weren't any talks about abandoning Park 51... make of that what you will. theist 22:19, 9 September 2010 (UTC)
THe BBC article is constantly growing with new info. theist 22:21, 9 September 2010 (UTC)
Apparently, Jones was a little overconfident there. I would've said a month ago that this whole Park 51 thing couldn't get any more ridiculous, yet now people are holding Qurans hostage in order to negotiate over the exact placement of that building...this is really religion in a nutshell, this whole episode illustrates the utter insanity of it all so very perfectly. Röstigraben (talk) 22:44, 9 September 2010 (UTC)
Okay, BBC Wold Service is giving us a good recap. Jones is getting increasingly confused, it seems. theist 23:04, 9 September 2010 (UTC)
BBC presenter put it best (I'll paraphrase) "is this just a case of very local, very small, very weird (emphasis added, he definitely said this) church politics being played out on the world stage?" Yes. The media seem to be questioning why they're turning into molehiill mountaineers about it, but they've said that if it had went ahead without any warning the cartoon controversy would have "paled by comparison". They're still holding their breath for Saturday. theist 23:07, 9 September 2010 (UTC)
Oh, what an ass, that Mr. Jones is. He's thinking about re-assessing his position regarding the Koran-burning and its cancellation after he found out that the Islamic center will not be moved. He's using his Koran-burning as a game piece, a bargaining chip. ~SuperHamster Talk 01:26, 10 September 2010 (UTC)
Burning a Koran serves no purpose other than inciting racial hatred... Might I suggest that it is instead religious hatred that is being incited? These sorts of people are well known for burning all sorts of books (for example, the Harry Potter series). ListenerXTalkerX 02:03, 10 September 2010 (UTC)
I heard that Donald Trump is getting involved in the Area 51 siting...? Also, Saturday is my local "country fair", somehow I doubt we'll have a Koran burning (or Muslim dunking?) booth. ħuman 02:11, 10 September 2010 (UTC)
Well, France 24 just showed a little update, Jones saying that the Imam at New York is basically lying when he says there was no deal to backtrack on Park 51... ah well. Let the chaos commence. I'm sure if Jones chickens out, some asshat will do it. theist 08:53, 10 September 2010 (UTC)
The jokes are flowing in already:
Why burn one religious book and cause a problem when we can burn every religious book and solve one
I'm downloading the Qu'ran from an ebook site. I've got a slow connection but it should be done by Saturday the 11th. I'm putting it on disk, if anyone wants one I can burn a few copies
CrundyTalk nerdy to me 11:20, 10 September 2010 (UTC)
I don't even know why this book burning is being reported. Idiot in America does idiotic thing. So what?-- Kriss AkabusiAAAWOOOGAAAR!!1 13:38, 10 September 2010 (UTC)
It makes me want to troll IRL. Occasionaluse (talk) 14:00, 10 September 2010 (UTC)
I see this is sort of touched on above but I can't see a clear response. How would the bible belt react to a mass burning of bibles by Muslims in a Muslim country?--BobSpring is sprung! 19:37, 10 September 2010 (UTC)
I bet some would respond in turn with desecrations, as some Muslims have preemptively done already. But most would just respond with fiery rhetoric about end times and how it fulfills some prophecy or other.Occasionaluse (talk) 19:41, 10 September 2010 (UTC)
[1] CrundyTalk nerdy to me 20:28, 10 September 2010 (UTC)
I too have an electronic copy of the koran, and I showed my contempt for islam with the following protest
rm -f koran.english.full.pdf
Ha! That's show'em! DeltaStarSenior SysopSpeciationspeed! 18:49, 11 September 2010 (UTC)
BBC reviews the farce. The last section, "mirror of publicity" is, although short, certainly worth a read. theist 00:48, 12 September 2010 (UTC)
Oh, it's all good. I love the tone of that piece. ħuman 01:05, 12 September 2010 (UTC)
It's all like "we're sooo sorry for giving this tit all this attention, please forgive us!" theist 01:38, 12 September 2010 (UTC)
After a pretty dickish post that saw even his best friends call him out for being a twat, Thunderf00t seems to have a good comment on it too. In this case about the over-reaction of the islamic world and how they need to be desensitized. theist 17:16, 12 September 2010 (UTC)
## Periodic maintenance
I've never done this before, but it's coming due soon, so I'm looking for advice. Main question, should I remove my laptop's battery before running it through the dishwasher? ħuman 21:46, 11 September 2010 (UTC)
No; but keep the mains lead in. 21:48, 11 September 2010 (UTC)
(EC) Yea. Those things need to be cleaned to, but you should slowly scrub them over cooler water in the sink. Dishwasher is to fast for em', make em blow up real loud and ruin all your dishes. Oh, and make sure you use extra soap! --Sir Onion Kneel before my vegetable might! 21:52, 11 September 2010 (UTC)
You battery isn't dirty? How is this possible? sterile ninja 21:53, 11 September 2010 (UTC)
I was going to hand wash it - I've heard heat is bad for them. ħuman 23:49, 11 September 2010 (UTC)
Pedantic though important quibble. To what does the "it" refer to in your initial question? The laptop or the battery. I sense ambiguity.--BobSpring is sprung! 10:34, 12 September 2010 (UTC)
Actually, when you mentioned "ambiguity", my thoughts first ran to "he did specifically say laptop battery, right, not just 'battery' and then we assumed..." theist 12:58, 12 September 2010 (UTC)
Be sure to dry it thoroughly. A tumble dryer should work, though you might need to defragment your laptop afterwards... with sticky tape or something like that. --151.81.196.26 (talk) 19:54, 12 September 2010 (UTC)
Alternatively, put it in the microwave that will dry any hidden moisture. Lily Inspirate me. 20:49, 12 September 2010 (UTC)
## Decline of the English language
I just googled "fat northern gobshite". Only three pages on the entire internet have the phrase. It's just not good enough...Although, without quotes, there's a lovely picture of Gerry Adams. Totnesmartin (talk) 22:57, 11 September 2010 (UTC)
The English language. The English language's. The English languages. The English languages'. Much easier than in Latin, I must say. Evil stupid Hoover! 23:19, 11 September 2010 (UTC)
I hope you weren't thinking of me when you Googled that. Lily Inspirate me. 17:51, 12 September 2010 (UTC)
I was thinking of Roy Chubby Brown, and he didn't even show up in the returns. Chris Moyles did, though, which is no small recompense. Totnesmartin (talk) 18:18, 12 September 2010 (UTC)
## Add to our remit?
Moved to Forum:Add_to_our_remit? ħuman 23:17, 12 September 2010 (UTC)
## Pope
Monday night; BBC1; 8:30pm Panorama: What the Pope knew.
“”On the eve of the first Papal visit to Britain in 28 years, Fergal Keane investigates the Pope's personal track record of dealing with paedophile priests while an archbishop and top Vatican official. As the child sex abuse scandal continues to engulf the Catholic Church worldwide, he meets victims who want an apology for the Pope's personal handling of some notorious cases. —BBC web
21:33, 12 September 2010 (UTC)
## NFL
At friggin' last, some Nuffel on the TV. Unfortunately it means putting up with the Vikings tonight, but what canya do?-- Jabba de Chops 23:11, 9 September 2010 (UTC)
Watch something else? MDB (talk) 23:29, 9 September 2010 (UTC)
But that would mean no Nuffel, and at £20 a month bollocks to that.-- Jabba de Chops 13:45, 10 September 2010 (UTC)
I saw some of this last night while waiting for the tennis. It's like a series of premature ejaculations then having to wait for them to get get another hard on. Lily Inspirate me. 08:16, 13 September 2010 (UTC)
## Fuck CNN
I clicked over to CNN earlier only to find Larry King Live. I continued watching because the topic was of interest to me - Stephen Hawking's "controversial" statement that no god is necessary to kick start the universe. Then I see one of the guests, Grand Woomeister Deepak Chopra.
Fine, I say to myself, to have someone with a contrary viewpoint BUT...King plugged Chopra's upcoming book (which has nothing to do with the subject under discussion) twice during the program. King then bade him farewell by saying Chopra will be back soon to discuss said freaking book (3 plugs).
Two hours later and I click back to watch World Report. You know, news and stuff. The anchorwoman introduces a story about the "ground zero mosque" silliness and which "expert" does she interview via phone link for 5 minutes of precious prime news time? Deepak Chopra! And she plugs his upcoming book while introducing him (4 plugs)!
Did he buy a controlling stake in CNN? Has he got dirt on them?
Rant over. Disgusted.--Brendiggg (talk) 12:36, 11 September 2010 (UTC)
That guy sounds like he probably has friends in the media. With four book plugs in a row by the same network, its pretty obvious to me. --Sir Onion Kneel before my vegetable might! 13:01, 11 September 2010 (UTC)
Hmmmm, must not have been able to hire anyone else as an "expert" that day. Punky Your mental puke relief 19:08, 11 September 2010 (UTC)
That's not the only thing they plug at - have you noticed how often they release a story regarding the iPad, or sometimes the iPhone? Not so much anymore since the iPad has been out for a while, but I remember seeing insignificant stories every week or so that kept talking about how awesome to iPad is, or how people are using it, or what apps people should get - I remember one article that talked about how they are giving iPads to sumo wrestlers because of their fingers being so big. Sure, it may be interesting, but it makes you think: how much are they paying CNN? ~SuperHamster Talk 20:58, 11 September 2010 (UTC)
The BBC is the same for Apple stuff. But as Tom Scott noted on a Sky News segment regarding the iPhone's reception issues, it seems that it's newsworthy because its Apple, and quite literally no other reason. If this sort of thing was done by any other firm, it wouldn't be news worthy at all. Although I doubt Apple are paying for advertising equivalent exposure here, there may be reasons behind it. I assume it's because the people who run the news are obviously media buffs and probably (as Ben Goldacre repeatedly declares) humanities graduates. Which to me says "rich mammy and daddy" and subsequently "owns a lot of Apple products". Thus, stories about Macs and stuff seem "newsworthy" to the gadget geeks who determine what does and what doesn't go on the front page of the technology section (same as when BBC's Bill Thompson raves about Twitter every 6 seconds). More likely, it's because of Apple's position as something of a groundbreaker, especially in the post iPod world, it's kind of like how James Dyson will get news time if he says something, despite having not really done that much. It's a brand that is already in the public eye so is easy to write a good story about. theist 00:41, 12 September 2010 (UTC)
Perhaps someone in CNN (maybe even Larry King) is a fan of his work? Remember what happened with Oprah? (she became a fan of "The Secret" and then plugged it endlessly.) Tetronian you're clueless 19:55, 13 September 2010 (UTC)
## Smirk
http://verydemotivational.com/2010/09/12/demotivational-posters-religion/ theist 21:58, 12 September 2010 (UTC)
Interesting metaphor... Tetronian you're clueless 01:04, 13 September 2010 (UTC)
To which I might add, "and only touch it when you have to". Lily Inspirate me. 08:06, 13 September 2010 (UTC)
## topicon
It's not a "real" template so I can't find it to fix it, but it needs to place the image a couple pixels or so higher. Does anyone know how this works, and why it isn't simply a template? ħuman 01:07, 13 September 2010 (UTC)
I would guess it's in the company of {{USERNAME}}, {{PAGENAME}}, {{NUMBEROFARTICLES}}, and others, so you would have to hack through the software to edit it. --Sir Onion Kneel before my vegetable might! 01:13, 13 September 2010 (UTC)
Interesting idea. There's a list of mediawiki weird shit somewhere I think... ħuman 01:25, 13 September 2010 (UTC)
It's a custom extension that inserts whatever wikicode you write into it inside the firstHeading (instead of the ugly hack that is Wikipedia's topicon template), so the image is aligned with the page title, i.e. it's the same as this: Whatever See Help:Images#Vertical alignment to align the image vertically. -- Nx / talk 07:49, 13 September 2010 (UTC)
Hmmmm, thanks, those super or baseline things didn't seem to have any effect. Oh well, I was just trying to raise the image slightly to stop it from "riding" on the line. Thanks for trying! ħuman 18:25, 13 September 2010 (UTC)
## Gentle reminder
Don't forget that this coming Sunday (19 September) is International Talk Like a Pirate Day. Lily Inspirate me. 08:54, 13 September 2010 (UTC)
RAmen. I will be sure to make some Kiva loans on the 19th in the name of the great Flying Spaghetti Monster (sauce be upon him). CrundyTalk nerdy to me 09:40, 13 September 2010 (UTC)
Ahem. *cough* ĴαʊΆʃÇä₰ hi there! 19:18, 13 September 2010 (UTC)
Its also my birthday! tmtoulouse 19:46, 13 September 2010 (UTC)
Clearly you are blessed by the great noodly one oh Trent. I'll be seeing you at the beer volcano and the stripper factory. Yarrr. CrundyTalk nerdy to me 21:35, 13 September 2010 (UTC)
## Has anyone seen this?
Am I wrong in thinking that the bigot is not who the letter is addressed to?--AMassiveGay (talk) 12:29, 13 September 2010 (UTC)
## Nation Once Again Comes Under Sway Of Pink-Faced Half-Wit
The Onion. Gotta love 'em. --PsyGremlin話しなさい 15:41, 13 September 2010 (UTC)
That was clever. I haven't read the Onion in a while, I forgot how much I enjoyed it. Tetronian you're clueless 19:52, 13 September 2010 (UTC)
It's articles like this that blur the line between comedy and reality. --Sir Onion Kneel before my vegetable might! 20:00, 13 September 2010 (UTC)
The thing with The Onion is that, because it's 99% good satire, it doesn't have to make stuff up. It just reports reality in the bluntest possible light. theist 00:10, 14 September 2010 (UTC)
## You can't buy publicity like this...
Seriously. --JeevesMkII The gentleman's gentleman at the other site 22:07, 13 September 2010 (UTC)
Ha, that's actually really funny...not the tactic you'd expect from the Pentagon to cover up information. ~SuperHamster Talk 22:13, 13 September 2010 (UTC)
I'd laugh if I weren't so embarrassed... --Sir Onion Kneel before my vegetable might! 22:41, 13 September 2010 (UTC)
indeed... I mean, what the unholy fuck, really. If they had the funds to buy up a print run, surely they'd be best off just arranging for the author to meet with an unfortunately accident and for the publishers to conveniently lose their only copy. They don't do cover ups like they used to! Also, I recognise Shaffer's name from somewhere, The Men Who Stare at Goats, perhaps? theist 23:52, 13 September 2010 (UTC)
## Inconstant constant(s?)
Have the Cretinists & IDentifiers caught this yet? 02:08, 12 September 2010 (UTC)
While I have to remain skeptical about the idea of varying constants, if it's true about varying across space too, one of the key aspects of science - that the world is the same everywhere - is pretty fucked. Although I assume there would be an underlying rule governing the change (indeed, this is where proponents of c-decay fall down) waiting to be discovered. theist 02:23, 12 September 2010 (UTC)
As the saying goes, "variables don't, constants aren't." and therefore, Genesis is true Totnesmartin (talk) 08:06, 12 September 2010 (UTC)
Ah yes, morals are absolute, timeless, immutable and universal while the laws of physics are flexible and malleable to fit in with a narrow interpretation of an anthology ancient myths. PPTP, FTW! Lily Inspirate me. 08:13, 12 September 2010 (UTC)
it makes for interesting science fictions though, where one ship needs several engines to cope with the changing physics across space. In THIS diagram , there are differences in the red and blue ends of the spetra. Anybody know what that might indicate ? Hamster (talk) 15:27, 12 September 2010 (UTC)
The Gods Themselves is an interesting take, where multiple universes have differing constants. Oh, and it has alien sex. ħuman 21:38, 12 September 2010 (UTC)
Well, the diagram, and even the explanation in that piece, seem to indicate that it's just Doppler shift they're looking at, but I doubt even the most crazed physicist would make that mistake. But if the spectra in that diagram are vaguely realistic rather than merely representitive it might be that they're showing a Doppler-like shift that's not consistent with motion generating it; compressing and expanding energy levels rather than simply shifting them linearly. Other than that, I'm concerned that they've taken the measurements with different instruments which could easily produce calibration errors. I'm not saying that they haven't accounted for this, but some seriously odd effects have been known, IIRC, there have been several cases of supposed "wow" signals that were due to someone microwaving their lunch by the office and affecting the antennas. But we'll see, exciting times nonetheless. theist 22:19, 12 September 2010 (UTC)
These effects usually the relative positions of specific absorption spectrum lines (their spacing), not absolute positions. Since the Doppler shift is such a simple effect, and they have multiple lines to work from, it's quite easy to compensate for the Doppler shift and determine whether the observed lines can match spectra seen in a lab. Even when they do focus on the absolute frequency of a spectral line, they usually already have a good estimation of the magnitude of the Doppler effect based on other spectral lines; the Doppler effect works in a specific way across the whole spectrum, or not at all.
They also claim that any systematic error that could generate this effect would have to be oddly specific (someone would have to pick very specific times to use the microwave and not others, such that the coincidence is very unlikely). That said, it's probably worth waiting a little while before pronouncing this effect certain.
This is also not useful to c-decay advocates; the observed change in values is way too small to account for any creationist data-diddling, and it doesn't invalidate uniformitarianism so much as point out that, if uniformitarianism is valid, the laws of physics must eventually be able to explain why the fine structure constant is not really constant (probably a job for whatever replaces the Standard Model). The only thing I can think of that would disprove uniformitarianism would be if we could develop a single consistent set of theories that explains literally every physical event in our corner of the universe (including unification of general relativity and the Standard Model), and then that theory fell down abysmally elsewhere/elsewhen. Or if, you know, Jesus, Neo, and Thor stopped by one day and decided to make the moon revolve the other way while turning lightning into cybernetic wine. I guess that would do it too. --Quantheory (talk) 07:03, 14 September 2010 (UTC)
Uncertain Principles SciBlogs 16:38, 14 September 2010 (UTC)
## Setting myself up for vilification
I see that (quite rightly) we have a memorial banner to the victims of 9/11. However, it says in "loving memory". This is something a family might put on a gravestone but doesn't really represent my feelings to people whose death I mourn but I did not know nor love. Think it sounds a bit maudlin to include the word "loving". Lily Inspirate me. 09:47, 12 September 2010 (UTC)
Actually, I'm inclined to agree. Maybe "Honouring (or Honoring if you will) the memory of those who died" would be a better wording? --PsyGremlin話しなさい 10:05, 12 September 2010 (UTC)
Good point, actually. If anyone in RW has a relative or close friend that died, then it might be appropriate in a way (to the best of my knowledge, this isn't the case). So "honoring" is probably the best way of putting it. theist 12:23, 12 September 2010 (UTC)
If you're so concerned about one word, then why didn't you just edit the page and change it? --Sir Onion Kneel before my vegetable might! 13:31, 12 September 2010 (UTC)
Because (and I stand corrected) none of those who have commented are Merkins, and they might feel differently about it, hence it's discussed here first. It's called consideration. --PsyGremlinParlez! 13:37, 12 September 2010 (UTC)
Yes, that makes since, if I had a family member that died on 9/11 I would be screaming at you all. Honoring is the best way to put it on the template. --Sir Onion Kneel before my vegetable might! 13:49, 12 September 2010 (UTC)
I don't clain to speak for everyone here and as it is stating something about a sensitive topic, I thought it would be better to discuss it. Lily Inspirate me. 14:01, 12 September 2010 (UTC)
I am an American if that matters and I thought it was a bit much. "In rememberance of" or "honoring" or even just "In memory" are all good. The version used was OK though. Hamster (talk) 15:16, 12 September 2010 (UTC)
I don't think it's so much as being sensitive as the fact that discussion is just how this place works. I have no doubt that this change would have near unanimous support because it is an improvement and far more suitable a phrase, but it's just nice to enquire first to save edit warring or trying to cram explanations into the edit summary. theist 17:13, 12 September 2010 (UTC)
We lost someone from our village... their survivors had signs at their driveway entrance facing both ways saying "9 11 family for peace". Our local vet sold her business and went to W DC to work full time, I think against the Iraq War. The attack touched a lot of people, if a bit indirectly. Anyway, I don't object to the change, the "loving" was ok, but perhaps a bit too personal, as people said. ħuman 21:37, 12 September 2010 (UTC)
(Undent)So, will we be similarly honouring the victims of Warrington or Enniskillen? Jack Hughes (talk) 09:39, 13 September 2010 (UTC)
I am not too familiar with those places, but Wikipedia informs me about a dozen people died between the two in IRA bomb attacks. Did something else happened, or is it those bombings you're comparing with the 9/11 attacks that left 3,000 dead? If we're making suggestions, I'd lean towards 2008 Mumbai or 2003 Istanbul (or others) before the IRA bombings.--talk 10:49, 13 September 2010 (UTC)
The point I was making - rather poorly - is that whilst 9/11 is no doubt the largest terrorist incident in terms of body count it is far from the only one. From this side of the pond the US obsession with 9/11 seems to ignore that it is, at the end of the day, just another terrorist attack. Or does the high body count make it somehow special? Jack Hughes (talk) 10:56, 13 September 2010 (UTC)
Yes, the high body count makes it special. 9/11 was relatively recent, had an enormous set of casualties, and took place in an astonishingly shocking manner. I suppose this is mildly ethnocentric, but I think it rather stands out amongst terrorist attacks. Sorry.--talk 11:03, 13 September 2010 (UTC)
It's far from the body count alone that makes it stand out. Primarily, it was a perfect Black Swan. Right up until the very second that the nose of a plane made contact with the windows of that first tower (well, not really, but bare with me) the entire idea was totally inconceivable. IRA bombings, while tragic, were partially expected because the Troubles in Ireland were practically a real outright war that had been building openly for decades, if not, centuries. It changed the entire media narrative of what constituted terrorism away from the Timothy McVeigh archetype (McVeigh himself changed it away from the IRA-like paramilitary archetype) to a Muslim fundamentalist archetype where these acts were committed in the name of attrition by True Believers. This aspect was brought to the fore on the world stage by 9/11 and was so intense that it even developed it's own branding that truly immortalised the date. The body count certainly made it more attractive to news sources wanting to cover it at the time, but the body count is far down the list of reasons of why it was important. theist 14:03, 13 September 2010 (UTC)
This is kind of a silly disagreement, but it should be pointed out that the only reason any of those other factors exist is because of the body count. If six people had died, it would not have changed any narratives. I know that because in 1993 an Al Qaeda-led effort set off a truck bomb in the World Trade Center, but only six people died, and it was promptly forgotten. Narrative, branding, future wars, immortal dates... none of those would have existed if so many hadn't died. It wasn't the only factor, but it was the biggest one since it made all other factors much greater.--talk 15:48, 13 September 2010 (UTC)
Quite true, but I don't think that totally refutes the idea that it's more than just the high casualties that is the important thing post facto - as if it was ~3000 deaths that grabbed you by the balls, but it was those other factors I mentioned where it really started to squeeze. After all, Hurricane Katrina came quite close to 9/11 in terms of deaths and certainly overtook it in terms out outright damage and injury, but it hasn't had the same lasting effect as 9/11 on people's conciousness worldwide - indeed, to me it seems largely forgotten about on the world stage. On the other end of the scale, you have the assassination of Archduke Franz Ferdinand (body count: 1 precisely) that sparked the First World War (insomuch that you can attribute WWI to that single incident using hindsight and a few other bits of "naughty" reasoning). theist 18:00, 13 September 2010 (UTC)
Speaking of setting yourself up for vilification, .8181 is the most overhyped tragedy in history. Occasionaluse (talk) 18:34, 13 September 2010 (UTC)
Katrina wasn't a terrorist attack; it's apples and oranges. And as for the Archduke: I am sad to say that virtually no one commemorates his death at all. It's been years since I was at a Francis Ferdinand Memorial Party. His death is today more trivia than tragedy - "Whose death sparked the beginnings of World War I?"
The other factors in 9/11 were important, of course. It was a bolt from the blue for most of America, and fully half the attack was enacted live on television. But it quite simply would just not be very important if it weren't for so many people dying -that was the biggest factor. It's unfortunate and vulgar, but I think that's the way it is.--talk 23:21, 13 September 2010 (UTC)
My point is that the first example had a huge body count but has nearly been forgotten, whether it was an attack or not is irrelevant. With respect to the second example, the legacy of that event was the whole of WWI (again, insomuch that you can make that attribution with some pop-history reasoning, the entire affair was far more complex, just as the build up to 9/11 is a bit more complex and can be traced back a decade or so) despite only one casualty. Though your point about it being televised is another major factor that I don't think we've considered so far. The second impact was played out on live television, as was the rest of it - compared to any other attack or disaster of any scale that is a Big Fucking Deal that sets it apart. If cameras had been there to record a live attack of a much smaller scale (something along the scale of the London bombings a few years later) then I think it would still have the same impact. We have all these multi-angled iconic images that just wouldn't have been possible at any other time in history. theist 00:04, 14 September 2010 (UTC)
The live TV thing definitely made it real. On 9/11 I was actually afraid that dozens and dozens of planes were going to start falling from the sky. But that didn't happen. All that really happened is that a group of riff-raff with box cutters took advantage of our meekness. That will never happen again. You can't even have a mild panic attack on a plane without a group of wannabe heroes holding you down to make sure America is safe. You don't need cavity searches or even US Marshalls. 6,000-7,000 Americans die on any given day. They're all just as innocent. Just seems like a bunch of bullshit to let something like this pwn America for OVER 9000 years. Occasionaluse (talk) 12:51, 14 September 2010 (UTC)
## fleaBay
Just about had it with stupid fucking eBay buyers. My latest is that I was selling a load of my Wii games, and I got a message from one of the buyers saying it never arrived. Turns out they gave me the wrong postage address (but it didn't arrive there either, apparently). So they opened a dispute in which I posted proof of postage, and yet they have still given me negative feedback despite the dispute being escalted and not yet resolved. Can they not ban stupid people from eBay, or would that destroy 90% of their business? CrundyTalk nerdy to me 20:47, 13 September 2010 (UTC)
They can't ban stupid people because that would destroy 95% percent of their business! But anyway, I bet the people at the wrong address stole the games and/or re-sold them. That's my random guess. --Sir Onion Kneel before my vegetable might! 20:52, 13 September 2010 (UTC)
Well, they said that they still own that address. No idea whether to believe them. Hanlon's Razor probably. CrundyTalk nerdy to me 21:33, 13 September 2010 (UTC)
This is why I only buy on eBay. I am terrified of selling to morons and/or criminals. ħuman 02:53, 14 September 2010 (UTC)
Just submitted their details to Trusted Blacklist. Let's see how fucking funny they are when they can't bid on anyone's items. CrundyTalk nerdy to me 08:32, 14 September 2010 (UTC)
Yeah, despite its original premise, selling on eBay is a bit shit these days, and best left to full time traders. There are a few other things that may work better for you, I was using Gumtree and Freecycle to kit out my flat a couple of years ago, which tend to be very local and slightly more idiot proof - although despite their smaller scale, you usually can find what you're looking for. theist 12:29, 14 September 2010 (UTC)
## 2012 Presidential Election.
Hello! For those of us abroad with something of an interest in American Politics, would it be possible to get together a sort of essay page with who the likely candidates for the 2012 Republican nomination are, and why they're the lesser of the evils or why they're nuts, that kind of thing. If the Americans would be interested, I could make a similar "possible next Prime Ministers of the UK" kind of thing? Dalek (talk) 23:00, 13 September 2010 (UTC)
Such a page would be better suited in the mainspace IMO. --Sir Onion Kneel before my vegetable might! 23:02, 13 September 2010 (UTC)
There I made a mainspace page with everyone I heard of. To wikipedia for other candidates. Tyrannis (talk) 23:08, 13 September 2010 (UTC)
Cool, I guess I suggested essay so we could get informal and ranty opinions on them, but that will do just as well in the mainspace. Thanks! Dalek (talk) 23:11, 13 September 2010 (UTC)
Better suited to the forums, I think. Where is Tyrannis' mainpage article? Linkie? ħuman 02:46, 14 September 2010 (UTC)
Forum:2012 U.S. Presidential Election. Totnesmartin (talk) 10:17, 14 September 2010 (UTC)
There should prolly be a mainspace entry too, as with the 2008 one. €₳$£ΘĪÐMethinks it is a Weasel 17:17, 14 September 2010 (UTC) Actually, what it really needs is content. DickTurpis (talk) 17:29, 14 September 2010 (UTC) ## Tony Blair on Hannity http://www.hannity.com/guest/blair-tony/11746 .Huh.— Unsigned, by: 131.107.0.80 / talk / contribs He's been whoring himself over here too. --PsyGremlinSermā! 12:22, 14 September 2010 (UTC) Does it go anything like this? theist 12:24, 14 September 2010 (UTC) I hear Blair is on The Daily Show tonight in the US. I'll catch it tomorrow when it comes on in the UK. On a related note, yesterday he won some human rights award for, "his global human rights work and commitment to international conflict resolution." In the words of Jon Stewart, "You've gotta be fucking kidding me, right?" Bondurant (talk) 20:45, 14 September 2010 (UTC) ## Phishing Just received a definite phishing email. Purportedly from H.M. Revenue & Customs, telling me I'm due a refund. Tricky - I haven't paid any tax for two years at least so I wasn't taken in but it wouldn't have caught me if I had. Loads of obvious errors on the page. If you get anything from refunds@hmrc.gov.uk don't open any attachments and forward it to phishing@hmrc.gsi.gov.uk. HMRC website. 07:08, 14 September 2010 (UTC) Nice. I don't think I've seen a proper phishing scam in a while. The last one I saw and reported turned out to be a real thing; it was just from some temp worker in the university so their email wasn't listed - I'd just forgotten that I'd sent them my hotmail address for alumni contact details. It's a shame people are still fooled by them. theist 12:20, 14 September 2010 (UTC) I had a conversation like this with my Dad once • Dad: I just got an e-mail from Bank of America... • Me: It's a fraud. • Dad: But it says... • Me: It's a fraud! You will never receive an e-mail from any financial institution tell you to log in immediately. MDB (talk) 17:26, 14 September 2010 (UTC) ## I lost where we were talking about 9/11 But I wanted to share this with ya [2]. PS, maudlin video/song warning, overly safe for work. ħuman 08:28, 14 September 2010 (UTC) There's a mini comment war going on there that makes me want to head butt a wall of nails and broken glass. theist 21:31, 14 September 2010 (UTC) ## I am ready Do these people probably deserve credit for hoovering up every single upcoming apocalypse and seeing a business idea therein? Or are they a bunch of sharks cashing in on the badly-informed? Totnesmartin (talk) 09:39, 14 September 2010 (UTC) The whole apocalyptic scare group is a strange one. They're rabidly Christian, but adamant about surviving the rapture. Occasionaluse (talk) 13:03, 14 September 2010 (UTC) ## ED page You know we have one now? Tyrannis (talk) 22:45, 13 September 2010 (UTC) Its already on RW's ED page. Se7enEight 22:46, 13 September 2010 (UTC) Oh, ok.Tyrannis (talk) 22:47, 13 September 2010 (UTC) (EC) Yeah, we noticed a while ago. --Sir Onion Kneel before my vegetable might! 22:48, 13 September 2010 (UTC) It's not as funny as the rest of the site. But, judging from the article we have, you guys don't find ED as funny as I do. Then again, you guys probably have an intact gag reflex. 2 hours of hitting random on ED will give you the Unfazeable advantage.Tyrannis (talk) 22:52, 13 September 2010 (UTC) Added in July it would seem. Se7enEight 22:53, 13 September 2010 (UTC) July 1st. Hmm. 3 gets you 5 the creator is an editor or lurker here.Tyrannis (talk) 22:55, 13 September 2010 (UTC) Removed link, I use adblocker, forgot about teh ads.22:56, 13 September 2010 (UTC) And no, I did not write the article. Tyrannis (talk) 22:59, 13 September 2010 (UTC) It's a fair enough recording of the history of RW, from the looks of it. Although with that "special" ED twist, of course. theist 23:56, 13 September 2010 (UTC) ED can be hilarious, but he people there really give me the creeps. No fucking way I could edit there. Occasionaluse (talk) 12:56, 14 September 2010 (UTC) I see ASoK also has an entry there. PJR's moving up in the world. --PsyGremlinParla! 11:12, 15 September 2010 (UTC) ## Obama and his magical teleprompters Does he really over-rely on them like conservatives say he does? Were other presidents like this? I tried googling but everything was right-wing biased. Senator Harrison (talk) 04:13, 14 September 2010 (UTC) Nah, all politicians rely on teleprompters, so don't believe the hype. I remember back during the '08 presidential campaign when there were clips of McCrazy giving a speech, and it would be fairly obvious that he was staring at a teleprompter the whole time. Conservative Punk (talk) 04:17, 14 September 2010 (UTC) (EC) The only politician who doesn't use a teleprompter (or notecards, or scriblings on her hand) when giving a pre-written speech is NY governor David Patterson (for obvious reasons). Those who say Obama can't speak coherently without one clearly chose to ignore the dozen or so debates he participated in (and generally won) before he was elected. It's a non-issue. DickTurpis (talk) 04:17, 14 September 2010 (UTC) I do not think much of politicians who rely heavily on teleprompters; too much like actors saying their lines. When I took a speech class in college, we gave speeches from notecards with the full text and from notecards with outlines; surely a politician could use outlines only at least some of the time. As far as U.S. presidents go, I have heard that Bill Clinton was once handed the wrong speech notes for some occasion and was forced to extemporize. He apparently made a good speech. ListenerXTalkerX 04:33, 14 September 2010 (UTC) You earned twenty secret internet points for that post, LX. And seven boxes of secret respect. ħuman 06:55, 14 September 2010 (UTC) As John Cleese said of Sarah Palin, a lot of politicians rely on parroting back rehearsed talking points (this is the main strategy for a lot of public figures, including most creationists, and some legitimately intelligent people who are simply bad at improvisation). And for a pre-written speech, there's really no reason not to just read from a script. Obama is a fairly good, cautious speaker, which makes it easy for people to believe that he's completely reliant on the prompters. As far as I know, there's no real evidence that he relies on a teleprompter more than anyone else. However, during the 2008 election season it was pretty hard to make fun of him, partly because a lot of the obvious jokes were racist (so they were being made, but not reputably), but also because he was serious, had charisma, and was good at managing his image. I think the teleprompter thing became popular because people were really looking for some way to make fun of him personally (as opposed to his enamored supporters, who were easy to mock). His typical gravitas can be kind of a burden on would-be comedians (especially since comedians tend to be liberals themselves). --Quantheory (talk) 04:39, 14 September 2010 (UTC) I should be surprised to find a creationist who could not easily be replaced with a quack box containing pre-taped soundbites from headquarters. ListenerXTalkerX 04:45, 14 September 2010 (UTC) From Michael Shermer: I knew Gish had a lengthy section in his presentation on the evils of atheism as a technique to destroy his opponents (who typically are atheists), so I made a point of stating in my introduction, loud and clear, that I am not an atheist. I even called the audience's attention to the man passing out anti-Christian literature, who was now sitting in the front row, and I told him that I thought he was doing more harm than good. Nonetheless, in his opening statement Gish called me an atheist and then proceeded with his automated diatribe against atheism. The rest of Gish's presentation was his stock litany of jokes and jabs against evolution. --Quantheory (talk) 05:06, 14 September 2010 (UTC) Often speeches are handed out to agencies etc before they're made. So they have to be "as written". That's here (UK) anyhow. 05:16, 14 September 2010 (UTC) Yes, but the good speechifiers extemporize. ħuman 06:55, 14 September 2010 (UTC) Well, the last speech I made I went off script quite, quite wildly - and even drew attention to it. But I don't think much of it, we want people in power to do a good job, not to deliver good talking points. Debates, speeches and baby kissing are all well and good but those are synonymous with the ability to deal with stress and pressure and to make right decisions and carry them out with authority (I fear Obama is having extreme difficulty with this last one). In a world where your every fluff and stumble is going to be repeated and crucified by your opponents and the media, you're often forced to just recite a good speech word-for-word. I don't think that there is anything wrong with that given the situation of modern politics. theist 12:14, 14 September 2010 (UTC) I remember reading somewhere that Reagan used cue cards all the time even at social functions, but I can't find anything to substantiate that claim. --PsyGremlin話しなさい 12:27, 14 September 2010 (UTC) Reagan used cue cards even in face-to-face meetings with Gorbachev and the G7. Totnesmartin (talk) 19:52, 15 September 2010 (UTC) I remember reading that Maggie Thatcher actually went into every single school in the country to personally steal our milk, but I can't substantiate that either. Bondurant (talk) 12:59, 14 September 2010 (UTC) Fuck that. I know for a fact Maggie used to dress up as one of our dinner ladies, with the sole intention of giving me a double helping of beetroot. --PsyGremlinFale! 13:14, 14 September 2010 (UTC) The whole issue is another great bellwether for idiocy. As soon as you start railing on about Obama and teleprompters, I stop listening and start thinking what an idiot you've become since he was elected, because that's the only thing that's changed. Teleprompters, death panels, birth certificates, "mainstream media" (used derogatorily), gun stockpiling, etc. Occasionaluse (talk) 13:09, 14 September 2010 (UTC) (undent) Thanks everyone. I appreciate all the responses. Senator Harrison (talk) 02:45, 16 September 2010 (UTC) ## Cheap flights There's no such fecking thing as a fecking flight for 50p theist 20:21, 15 September 2010 (UTC) ## Why do we eat chilli? Grauniad Hat tip 16:42, 14 September 2010 (UTC) Damn behaviorist, still overly focused on the stimulus->black box->response paradigm. I don't like that explanation at all. Some points to consider: capsaicin is a great anti-biotic against bacteria and fungi the spicier the food the longer the shelf life, particularly once water is removed. It can also be used to protect meats and vegetables with out natural preservatives. It is a very strong taste component, and things with very strong tastes are favored as spicing agents since refrigerators are a fairly modern invention and meat sometimes had to be kept a long time. Also there is some evidence that beta endorphins are released after ingestion of chile which throw the whole "non-addictive" angle of that article completely out the window. tmtoulouse 16:58, 14 September 2010 (UTC) From what I know, the use of spices was started as a way to cover up the taste of meat that had gone a bit squiffy. The whole behavior thing smells a little too like post facto reasoning. theist 17:04, 14 September 2010 (UTC) Using spices to mask the taste of rotting meat also smells like post facto reasoning. It's a myth. €₳$£ΘĪÐMethinks it is a Weasel 17:36, 14 September 2010 (UTC)
Is it? Well, lets do some critical thinking and see what evidence is out there for various hypotheses proposed here:
1. Conditioning
2. Masking flavor of old meat
3. Anti-biotic properties
4. Beta-endorphine release.
tmtoulouse 17:42, 14 September 2010 (UTC)
A Google on - endorphine chilly - gets a lot of hits.--BobSpring is sprung! 18:41, 14 September 2010 (UTC)
Because it's delicious. I don't understand people who eat really, really spicy food but the plate of chilli con carne I just ate was spicy enough but not too spicy so you couldn't taste the rest. Mmm! –SuspectedReplicant retire me 19:14, 14 September 2010 (UTC)
The endorphine release is fun. Occasionaluse (talk) 19:38, 14 September 2010 (UTC)
(Anecdote alert) I like a bit of chilli in certain foods because it seems to "open up" a lot of the flavours in the food which you wouldn't notice without it (in particular, curries). Also, I like chillis from the chinense family (habaneros, scotch bonnets, nagas etc) because even the smallest amount imparts a subtle fruity, orangy taste to the food. My favorite at the moment is a mild habanero I'm growing which smells and tastes like it should be scorching hot, but has almost no heat at all. P.S. I seem to remember from my pharmacology days that topical capsacin is used as an irritant analgesic. I wonder if that particular effect on the toungue and mouth contribute to the appeal? CrundyTalk nerdy to me 20:56, 14 September 2010 (UTC)
I'd doubt it. It's like saying people drink absinthe for the effects of wormwood, but by the time you've had enough to feel the wormwood, you're going to be so piss drunk it wouldn't matter. I think it be the same for capsacin, a slight analgesic effect which would be far and away outweighed by the initial burning. Occasionaluse (talk) 21:03, 14 September 2010 (UTC)
I love chilli. Delicious. Se7enEight 21:21, 14 September 2010 (UTC)
Curries may have developed in India as a preservative, so not quite "masking the taste" as I described above, but perhaps some practical use initially. theist 21:27, 14 September 2010 (UTC)
I have heard the "masking the flavor" one before but I don't by that at all. Se7enEight 21:30, 14 September 2010 (UTC)
Well, it would certainly have made little sense in the medieval west, but I think they did have some preserving/masking effects where they were developed in rural India (well, at least according to an Indian family I know, but you never know). <cynicism>It's certainly true in some of your cheaper places these days... </cynicism> theist 21:33, 14 September 2010 (UTC)
Probably not so much masking the flavour as giving a flavour: ever eaten mutton? 10:49, 15 September 2010 (UTC)
As I said, I know an Indian family who are well into their (apparently) authentic curry making so yes. It basically tasted of garam masala. I think they had marinaded it for three years or something. theist 12:07, 15 September 2010 (UTC)
(UI) Bear in mind that before the Portugese landed ashore in India, the Indians used to add heat to their food using pepper. When the Portugese traded chilli pepper seeds with them they substituted it. As piper nigrum seeds have little to no food preservation qualities I suspect the heat aspect was the compelling factor. In fact, it was also the Portugese who invented vindaloo, as they used to preserve pork using vinegar for the long voyages at sea, and used Indian spices to flavour it. CrundyTalk nerdy to me 19:18, 15 September 2010 (UTC)
All this historical theory about why chilli was used is the past is most interesting. But the question is why do we use in now. Why is it still popular? My money is remains on the endorphines.--BobSpring is sprung! 19:25, 15 September 2010 (UTC)
For me, it's flavour and a 'kick' which adds another dimension to the meal. Flavour is a biggie, as I don't like green chillis (esp capsicum annuums), and I don't use extra hot sauces which just add heat and no flavour. CrundyTalk nerdy to me 19:29, 15 September 2010 (UTC)
Wimp. :-)--BobSpring is sprung! 19:31, 15 September 2010 (UTC)
(EC)To Bob, because it tastes good - and to show how hard we are. ħuman 19:33, 15 September 2010 (UTC)
Bob: Don't get me wrong, I love a good bit of heat. I have a bag of Dorset Nagas in the fridge as we speak. I just wouldn't add anything which adds heat only. CrundyTalk nerdy to me 19:38, 15 September 2010 (UTC)
My mother is the sort of cook who gives English cooking a bad name. (Food in saucepan, add rock, boil until rock softens.) As soon as I started cooking for myself everything was filled with chili and garlic - David Gerard (talk) 15:36, 16 September 2010 (UTC)
## "Twat"
Here's a question: how offensive do you think the word "twat" is when used in the context of destroying / harming something? I'm watching an old Red Dwarf (Series 3, Polymorph), where the following dialog occurs:
• Rimmer: So what are we going to do (about an alien on the ship)?
• Lister: Well I say let's get out there and twat it!!
The episode is being shown starting at 21:40, so well after the watershed, and yet they cut the "twat it" part of the dialog out completely, making the response seem odd. I also remember hearing that when this particular series went to VHS, it was given a "15" rating instead of "PG" just because it has the has the word "twat" in it.
So, how offensive is it really? If it was used in the context of referring to female genitalia then I could probably understand on the grounds of vulgarity, but personally I don't think censorship seems warranted in this context. What do you think? Is a word a word regardless of context? CrundyTalk nerdy to me 21:11, 15 September 2010 (UTC)
I used the word in primary school (we all said it) years before I knew its biological origin. my english teacher said it (she was reading from A Kestrel For a Knave - brilliant book). Even Now I'm puzzled by people who find it grossly offensive - to me it's just a little stronger than "prat." and here's the John Cooper Clarke poem. Totnesmartin (talk) 21:23, 15 September 2010 (UTC)
Hehe, I forgot that this episode is the one where Rimmer suggests they call themselves the Committee for the Liberation and Integration of Terrifying Organisms and their Rehabilitaion Into Society, the one problem being the abbreviation :) CrundyTalk nerdy to me 21:35, 15 September 2010 (UTC)
Similarly, I have always used "Chuff" as a sometimes affectionate expletive. Back in 1997, my father, he being an old goat, remarried; his new wife objected strongly to my surprise as I hadn't been aware of its nonPC meaning (The name for a women's genitalia or more commonly pubic area). Twat I have always used as a derogatory term for an idiot. 09:09, 16 September 2010 (UTC)
"Twat" is much less shocking/offensive in UK usage than in the US, & almost never used in its anatomical meaning, but is still a rude word nonetheless. Editing it out of a sitcom repeat doesn't surprise me, & the cut they're now showing at 9:40 may well be one that was edited for a 8:30 showing which now gets used as the standard version whenever it's rerun, or was the version sold on to other stations by the beeb. €₳$£ΘĪÐMethinks it is a Weasel 22:03, 16 September 2010 (UTC) ## Well he/they did it! Westboro Baptist Church burnt both the Qur'an and the American flag About 5:54 minutes into this 8:53 Youtube video. Westboro Baptist Church Burns the Koran (Quran) and American Flag on Anniversary of 9/11/01 They describe Terry Jones as a false prophet here.205.189.194.208 (talk) 21:38, 15 September 2010 (UTC) Assholes. Tetronian you're clueless 21:50, 15 September 2010 (UTC) I'm not surprised. Nobody should pay attention to them, they're just moronic attention seeking fucktards. --Sir Onion Kneel before my vegetable might! 22:14, 15 September 2010 (UTC) The song they played as the flag and book burned was a horrid choice - it sounds just like one of those "praise God" songs you hear on the Christian children channels, all full of joy and children's voices, except that now, it basically says that we're all condemned to hell. ~SuperHamster Talk 22:53, 15 September 2010 (UTC) At least he admits the fundie Christians are hypocrites. BTW, that song is nowhere as creepy as this--Thanatos (talk) 23:48, 15 September 2010 (UTC) Catchy tune to burn a Qu'ran/flag to. I would have preferred Wagner for epic irony, though. --The Emperor Kneel before Zod! 00:12, 16 September 2010 (UTC) It was funny at first but it goes on a bit too much and gets boring. They should cut it by about 75% if they want to get good laughs.--BobSpring is sprung! 09:27, 16 September 2010 (UTC) ## Interesting read here. --PsyGremlinPraat! 12:02, 16 September 2010 (UTC) Very interesting indeed. Although it took me a while to figure out where he was going with the piece. Tetronian you're clueless 12:35, 16 September 2010 (UTC) I kind of stopped reading when he started saying that everyone is a sociopath. That kind of obnoxiously insulting moralising always turns me off whatever is being said. Evil stupid Hoover! 15:10, 16 September 2010 (UTC) I think he was being deliberately inflammatory to get attention. Tetronian you're clueless 15:27, 16 September 2010 (UTC) ## 20 Quadrillionth Digit of Pi Calculated It sounds impressive... but it turns out that they're talking about binary. Now that's still an impressive number of digits, but if I were to say that the 20 Umpty-bazzilionth digit is a "1" it would take anybody else years to prove me wrong (or right). Giving yourself a 50% chance for something that doesn't matter anyway isn't quite so impressive. –SuspectedReplicant retire me 16:05, 16 September 2010 (UTC) We all know that the correct value of Pi is, in binary, 11.00000000000.... so the billionth digit is '0'. It's in the bible. Jack Hughes (talk) 16:13, 16 September 2010 (UTC) It's pretty trivial in binary, since there's a nice closed form for the nth hex digit; you can get the umpty-bazzilionth digit by putting umpty-bazillion into a simple formula. Evil stupid Hoover! 16:50, 16 September 2010 (UTC) So you're saying we can make the news?? Occasionaluse (talk) 16:56, 16 September 2010 (UTC) If you can find a computer capable of making calculations on that magnitude quickly, then yes. The formula is $\frac{4}{8k+1}-\frac{2}{8k+4}-\frac{1}{8k+5}-\frac{1}{8k+6}$ for the kth hex digit; the resulting digit can be converted in about two seconds in your head to the 4kth binary digits. Evil stupid Hoover! 18:29, 16 September 2010 (UTC) You'll need to do this with big rationals, by the way: the calculation doesn't give an integer. Evil stupid Hoover! 19:28, 16 September 2010 (UTC) And it appears I was wrong about the expression in the first place. Evil stupid Hoover! 19:40, 16 September 2010 (UTC) ## InstantCommons is down Don't go removing redlinked images just because this here MediaWiki can't see the images on Wikimedia Commons. Nx considers this is my problem to deal with. It's 12:15am here and I'm four pints down, so I'll leave it to morning and see how much of Human's wiki everyone breaks trying to fix Jimbo breaking Human's wiki - David Gerard (talk) 23:18, 16 September 2010 (UTC) Nx considers this is my problem to deal with No, actually I don't. I'm going to sleep now. If it's not fixed by tomorrow, I'll have to do something drastic. -- Nx / talk 23:24, 16 September 2010 (UTC) Down everywhere (?)- Down on Blightynet anyhoo. 23:29, 16 September 2010 (UTC) But the problem is, it's working on RationalBeta -- Nx / talk 23:31, 16 September 2010 (UTC) WHAT HAVE YOU ASSHOLES DONE TO HUMAN'S WIKI?!?!?!? THIS WILL NOT STAND!!!! DON'T MAKE WE WAKE HUMAN FROM HIS DRUNKEN STUPOR TO PUT YOU OVER HIS KNEE AND TAKE A BELT TO YOUR BEHINDS!!!! DickTurpis (talk) 23:33, 16 September 2010 (UTC) I went and fetched over the one I used in template:media. ħuman 23:47, 16 September 2010 (UTC) Well, it seems to be working again if I'm not mistakin'. It was probably related to the problems that the Wikimedia servers were having with their fancy java - I know Twinkle, TK's oh-so-favorite Wikipedia tool, wasn't working at the same time that the images were down over here, and the reasoning for that was that the servers were having issues. ~SuperHamster Talk 01:50, 17 September 2010 (UTC) ## Teatards fuck themselves Looks like the Tea Party shot the Republicans in the foot tonight. Mike Castle would have walked right into the Senate had he been the Republican nom, but instead they went with the Teabagger who got a mere 35% of the vote when she ran 2 years ago. Democrats should be able to handle her. Republican hopes of taking the Senate just got more slim. DickTurpis (talk) 02:35, 15 September 2010 (UTC) Lol, so the Teabaggers actually seem to hurt the rape-ublicans, eh? That's good to know. AnarchoGoon Swatting Assflys is how I earn my living 02:44, 15 September 2010 (UTC) I wouldn't have been certain about the Senate before now, but these primaries make it a lock for the Dems to keep it. It's going to be a GOP House and Democratic Senate after 2010.--talk 06:25, 15 September 2010 (UTC) Hmm, I'd be wary of what you wish for. You never know what might happen. Lily Inspirate me. 08:57, 15 September 2010 (UTC) There are 37 seats in play. Of those, 19 are potentially uncertain. The GOP needs to win 14 of those 19. Until the primaries, they could have counted on Alaska, Nevada, Florida, and Delaware; those races were at least all close to 50/50, and in some cases (Delaware and Nevada) way into being GOP breezes. New Hampshire was even a good possibility. But the primaries have run in some pretty fringe candidates. Alaska lost Murkowski as an incumbent to Joe Miller, who polls much worse among the general public and will now have a hard fight. Nevada nominated Sharron Angle who is managing to actually fall behind the incredibly unpopular Harry Reid (she managed this by saying insane things). Florida nominated Rubio instead of Crist, who is now running Independent and has the state split into near-thirds. New Hampshire seems to have nominated another Tea Partier crazy person, who polls badly compared to the establishment candidate. And Delaware just swung wildly from being a near-lock for the GOP to being a lost cause (the RNSC is saying they won't bother spending money on it now). I would have always said it was very unlikely the GOP would take the Senate, but now I feel confident saying that it just won't happen. They probably won't even come very close, maybe within two or three seats.--talk 11:47, 15 September 2010 (UTC) Two years of Obama destroying the country and America still votes in democrats. I wonder how the Right will spin this. The same way they spun the election, maybe, with liberal media bias? ONE / TALK 12:39, 15 September 2010 (UTC) Teaparty has become something of a snarl word. Glenn Beck even acknowledges that in The Overton Window. But then again, he blames that all on left-wingers infiltrating the teaparty and bringing the racist signs and whatnot with them. Nope, it was not the leadership or Glenn's race-baiting that made the movement impossible, it was liberal infiltration. (The Overton Window is actually worth a read if you are interested in Beck's thought process. Sure, it's boring and shitty and wacko, but we do get to see the world through Beck's eyes. Warning: There be Dragons)--Thanatos (talk) 23:44, 15 September 2010 (UTC) Glenn Beck has a thought process? Totnesmartin (talk) 11:13, 16 September 2010 (UTC) One can have the ability to think without being good at using said ability. --GastonRabbit (talk) 00:10, 18 September 2010 (UTC) ## Atheists say the darnedest things Let's make our version better than Ken's! ħuman 03:15, 16 September 2010 (UTC) Oh dear. Tetronian you're clueless 03:19, 16 September 2010 (UTC) FSTDT does catalog them, although they're rare and sometimes get more down-voted than they deserve because people fail to recognise how lunatic they can be. Poe's Law can apply on both sides, it's just heavily weighted to one more than the other. theist 10:42, 16 September 2010 (UTC) Can you help me find the gems? ħuman 03:43, 17 September 2010 (UTC) I thought we didn't want CP to influence mainspace.--BobSpring is sprung! 07:13, 17 September 2010 (UTC) But there's no CP in the article - I was just inspired by Ken's awful job to try to do better. And at least include some actual quotes from atheists. ħuman 00:10, 18 September 2010 (UTC) ## OO Calc Alternative Can someone recommend an alternative to Open Office Calc? I'm so sick of it being buggy, slow and resource greedy. Occasionaluse (talk) 18:23, 16 September 2010 (UTC) OO Calc is the alternative. But there's also Gnumeric and KSpread -- Nx / talk 18:26, 16 September 2010 (UTC) What about the online Google thingy? or there's always Excel. /ducks and runs. --PsyGremlinSiarad! 18:35, 16 September 2010 (UTC) Excel is awesome. I've never understood how you can't like Excel. --The Emperor Kneel before Zod! 02:08, 17 September 2010 (UTC) Anyone remember Supercalc? Was the program that really got desktop computers into the mainstream. 02:15, 17 September 2010 (UTC) I liked FrameWork. But some bastids bought them to get the database program and stopped supporting/improoving it. This was "windows" before Windows. Also had MultiCalc running on an Osbourne... ħuman 02:17, 17 September 2010 (UTC) I still have a soft spot for Lotus 123. Best macro language ever - need to select a variable range? {end}{right}{end}{down} --PsyGremlinPraat! 08:56, 17 September 2010 (UTC) ## Ways to be a Skeptic I suppose this has been covered somewhere before, but I found it quite amusing.--BobSpring is sprung! 18:39, 16 September 2010 (UTC) At least he mentioned the Goat... ħuman 19:39, 16 September 2010 (UTC) WIGO Blogs, third entry for September. </ shameless self promotion> --ZooGuard (talk) 19:47, 16 September 2010 (UTC) Did we ever take up the challenge of formally identifying RWians under those labels. I got perhaps one or two and then struggled. theist 07:54, 17 September 2010 (UTC) Almost Everybody here Fits into one or another of the Descriptions. How many people really Look For the Truth?--Tolerance (talk) 20:09, 17 September 2010 (UTC) ## Nifty Quote On the subject of one species arising from another (in this case a rumination by the first machine intelligence): "We are the latest forming link of a chain, and the latest link should surely indicate the current direction. But we do not know where the chain is supposed to go, what it is supposed to hook up to, or what is the purpose of it. We do not even know whether it is a single strand chain or a jungle of links. I am in the darkness nearly as much as my human associates are. This is a curious train that we are: it seems to grow new cars on the front end of it as it rolls, and I am the new car on the very front. I should be the bearer of the headlight, but I have not been able to devise it yet. I hope it does not devolve on me as foremost car to pull all those other cars. I have not signed any agreement to be the locomotive to a train I don't even know the name of." —R.A. Lafferty, Arrive At Easterwine -- Kels (talk) 22:24, 17 September 2010 (UTC) Nice! ħuman 00:08, 18 September 2010 (UTC) ## Thank You My heartfelt thanks to the Rationalwiki Community for its support during the crisis that happened to my family the other week. I would especially like to thank Goonie for his words of wisdom on the subject. My life has returned to semi-normality. They haven't caught the guy, however, and my sister refuses to press charges. It appears, though, that this guy is a serial rapist. There was another rape nearby the week after and the guy cut the girl's stomach like my sister. I am sure he will eventually be caught. My sister has recovered remarkably well, she is driving my mom crazy again and acting as though nothing has happened. Now I just have to put up with Mom calling to complain about how wild she is.--Thanatos (talk) 00:02, 16 September 2010 (UTC) Well, if you catch the guy, be sure to punch him extra hard in the crotch for me. On a serious note, no problem at all, man. If you need any more support, you can count on me. AnarchoGoon Swatting Assflys is how I earn my living 02:22, 16 September 2010 (UTC) Peace. Is all I have to say. ħuman 03:16, 16 September 2010 (UTC) ### On another note On another note, Goonie, do you have news you wish to share here? ħuman 02:19, 17 September 2010 (UTC) Fine. As of 2:21 p.m CDT on Tuesday, I am an uncle. Gooniepunk2010 Oi! Oi! Oi! 03:29, 17 September 2010 (UTC) Congratulations to you and his mother! ħuman 03:42, 17 September 2010 (UTC) Thanks. If this post get more interest. I might even post some pics of the baby here. AnarchoGoon Swatting Assflys is how I earn my living 03:43, 17 September 2010 (UTC) nice pic even if it does look a bit red in the face. Congrats to the mom and dad. But get the kid a nice bassinet or something, plastic boxes are not good long term :) Hamster (talk) 04:43, 18 September 2010 (UTC) Yes, well. That picture was taken exactly 2 hours after the little guy was born, and that was what he was laying in while the doctor examined him. As far as the parents go, the mom (my sister) is indeed a proud mother (I will never discuss the father). The Goonie 1 What's this button do? Uh oh.... 07:32, 18 September 2010 (UTC) ## Conspiracy! I've figured it all out; the pope is here to turn us Brits into zombies!! Don't believe me? Check out the red and white emblems on the holy see's clothing. Remind you of anything? HE WORKS FOR UMBRELLA CORP!! CrundyTalk nerdy to me 08:22, 17 September 2010 (UTC) It is obvious that Your attempted "Humor" os not even Appreciated.--Tolerance (talk) 20:24, 17 September 2010 (UTC) IF it waS obVious then wHY did You feEl the NeeD to poInt iT ouT? ONE / TALK 11:46, 18 September 2010 (UTC) ## Awesome picture Someone at the BBC is having a laugh. In the "pope's visit in pictures" page, the first image is this. Epic win. CrundyTalk nerdy to me 12:43, 17 September 2010 (UTC) Brilliant! --PsyGremlinParlez! 13:00, 17 September 2010 (UTC) Can we screencap it in the original context before it disappears? --ZooGuard (talk) 13:10, 17 September 2010 (UTC) Done!. CrundyTalk nerdy to me 14:01, 17 September 2010 (UTC) That's so funny. (Though it shouldn't be.)--BobSpring is sprung! 14:04, 17 September 2010 (UTC) There is nothing Funny about the Picture.--Tolerance (talk) 20:07, 17 September 2010 (UTC) I tHinK itS hilAriOuS. --YossieSpring in Fialta 01:01, 18 September 2010 (UTC) Perfect material for a 'missing caption' competition. Lily Inspirate me. 08:57, 18 September 2010 (UTC) Easy "Pope figures out formula for great caption competition dialogue; 'something something something darkside... something something something complete'". theist 16:26, 18 September 2010 (UTC) ## The effin' antichrist has landed. You'd think he was a real head of state, the way he's being treated. 09:19, 16 September 2010 (UTC) um, he is. The Vatican is a proper actual country, if only in a diplomatic sense. Totnesmartin (talk) 09:37, 16 September 2010 (UTC) Whether the Vatican should be recognized in this way has been disputed, as the Vatican's claim to nationhood is based on an particular treaty-of-convenience with Mussolini, and there is arguably no actual population of people living there as citizens; all the "residents" are citizens of other countries who happen to be employed at this one organization. Since Vatican leadership is all supposedly celibate, there's no clear reason why there would ever be a real "people" native to the Vatican anyway. A country without a native population, which is also not a proper member of the UN, is a little bit dodgy (although, granted, the UN doesn't have that much authority over such things; Switzerland wasn't a member either until 2002). It's not really clear why anyone except Italy should consider the Vatican a proper country, even for pretextual diplomatic reasons. No other religion enjoys such a status. The labeling of the Vatican as a nation-state seems to have been established by decades Catholic lobbying more than anything. --Quantheory (talk) 10:05, 16 September 2010 (UTC) Actually, certain other religions have far more power over their countries. Let's be glad that all that's left of Catholic theocracy is a small city-state. -- Nx / talk 10:30, 16 September 2010 (UTC) Well, but my point is not to complain that Catholicism has a real country that they can oppress govern (although of course they once had several), but to note that its leaders are pretending that they are a country, for no better reason than because they have millions and millions of followers and a few blocks of space carved out of their home town. I don't see the LDS church taking a couple buildings in Salt Lake City and declaring themselves to be a tiny nation-state belonging to church leadership, even if there was a really Mormon-friendly guy in the White House who'd give it to them. It's absurd. If Catholicism took over Rome, or all of Italy, that would be awful, but at least it would make sense for them to call themselves a nation at that point. --Quantheory (talk) 11:07, 16 September 2010 (UTC) If you're reading, your Holiness, welcome to Britain.-- Kriss AkabusiAAAWOOOGAAAR!!1 09:49, 16 September 2010 (UTC) Still, he a brave man, to be walking around in a country with "a new and aggressive atheism". Oh that's right - he drives around in the Poop mobile. Because it seems that being god's appointed spokeman on earth doesn't make you bulletproof. --PsyGremlinPraat! 10:10, 16 September 2010 (UTC) At least the last one took getting shot at with good humour and not too personally. It's nice how they (and the UK's right wing media, well, The Daily Fail, at least) are spinning it so that it's secularists who are being aggressive. I don't want to split hairs or anything, but last I checked, secularists weren't the ones threatening people with eternal torture for not doing precisely what they say... theist 10:40, 16 September 2010 (UTC) He's live on radio 5 now. I can't hear his voice without imagine him stroking a persian cat... Totnesmartin (talk) 10:44, 16 September 2010 (UTC) The Daily Mash has got it rightSuspectedReplicant retire me 11:01, 16 September 2010 (UTC) Stephen Fry (the national fucking treasure of the UK, along with Sean Connery, Huw Edwards, Trevor McDonald, and John Lydon) was on the One Show yesterday and said, "no, the Vatican isn't really a real state," which I just found brilliant. SJ Debaser 11:56, 16 September 2010 (UTC) I think you've just named the most awesome QI line-up ever. Except for maybe Bill Bailey, Dylan Moran, Fred MacAulay and ok, Alan Davies. And on a totally unrelated note - when is the News Quiz coming back to BBC4? --PsyGremlinFale! 12:08, 16 September 2010 (UTC) The Guardian claims there's another row. Will it never end?--BobSpring is sprung! 12:22, 16 September 2010 (UTC) BBC News: Pope Touches Down in the UK Down, age 6, was not available for comment after the incident. CrundyTalk nerdy to me 12:23, 16 September 2010 (UTC) You are so going to hell. theist 12:36, 16 September 2010 (UTC) What gets me is that, in trying to put a spin on the "third world" gaff, said that the remark was referring to "Britain's multi-ethnic composition". To me that sounds even worse. The implication is that the multi-racial aspect of modern Britain makes it "third world", or, to put it another way, first world countries are racially pure - and preferably white. Fuck off back to Rome you bastard. Jack Hughes (talk) 12:43, 16 September 2010 (UTC) Anyone got the full context of the "third world" quote? Not the spin but the original paragraph?--BobSpring is sprung! 13:27, 16 September 2010 (UTC) This is the fullest version I can find: "England today is a secularised, pluralistic country. When you land at Heathrow Airport, you sometimes think you'd landed in a Third World country." –SuspectedReplicant retire me 13:42, 16 September 2010 (UTC) I for one, support a war of annexation against this so called "country". Sen (talk) 13:29, 16 September 2010 (UTC) All I can gather is that it was in reference to landing at Heathrow. I think the phrase was "people landing at Heathrow might think they were landing in a Third World country". It was an off-hand comment in Focus magazine, I'll see if I can track down the article. theist 13:40, 16 September 2010 (UTC) The Google translation of an article in Focus puts the quote as "England is now a secular, pluralistic country. If you land at Heathrow Airport, you sometimes think you had landed in a third world country." continuing with "Particularly in New England is an aggressive atheism spread . If you are around at British Airways and carrying a cross, you will be penalized. But we want to show our faith in public. Anyone who knows England knows that there is also a great Christian tradition. Europe would no longer be Europe if it could not maintain this tradition." theist 13:44, 16 September 2010 (UTC) So ... what was the intent? It reads like secular and pluralistic are equivalent to "third world". But I've got to say that it's difficult to believe that was his intention.--BobSpring is sprung! 13:59, 16 September 2010 (UTC) I can't gather that, even from Focus.de as the apparent original article mentioning it doesn't appear to exist. I can assume from this that it was probably an off-the-cuff and context free remark. Not sure what he actually meant by it. theist 14:04, 16 September 2010 (UTC) Ah, wait, got it; [3] In the interview in the current issue of FOCUS Kasper replied to the question of why so many Britons expressed their displeasure with the pope: "England is now a secular, pluralistic country. If you land at Heathrow Airport, you sometimes think you had landed in a third world country. "Kasper also affirmed the question of whether Christians would suffer in the kingdom, and said:" Particularly in New England is an aggressive atheism spread . If you are around at British Airways and carrying a cross, you will be penalized. But we want to show our faith in public. Anyone who knows England knows that there is also a great Christian tradition. Europe would no longer be Europe if it could not maintain this tradition." (emphasis added) I think that highlights the context, but doesn't seem to answer your question about intent. theist 14:07, 16 September 2010 (UTC) Then looking at the context it simply reads like a way to disparage or insult secular, pluralistic countries. Personally I can't read it any other way. Thanks for looking it up. I tried and failed.--BobSpring is sprung! 14:15, 16 September 2010 (UTC) The Third World remark is very odd, since developed countries are far more likely to be secular and pluralistic than developing ones. Catholicism is certainly far more prevalent in the so-called Third World. I can only conclude that the man who said it didn't have his brain on at the time.-- Kriss AkabusiAAAWOOOGAAAR!!1 15:07, 16 September 2010 (UTC) Whatever the status of the Vatican, one thing is for certain: it has the highest Pope density in the world, with two Popes per square mile 212.62.5.158 (talk) 15:10, 16 September 2010 (UTC) Dammit, I missed my big chance to throw condoms at him. Ah, well, from now on, the Pope is getting steadily further away from me, increasing my overall happiness with every centimetre. Evil stupid Hoover! 15:13, 16 September 2010 (UTC) Hehe: The Pope's aide says that the UK is like a "third world country". Well, Vatican City is like the backstage of a Gary Glitter gig. CrundyTalk nerdy to me 15:52, 16 September 2010 (UTC) Crundy, I love you for that, but I am so not standing near you in a thunderstorm. --PsyGremlinFale! 17:14, 16 September 2010 (UTC) The Popemobile: Bullets can't get in, children can't get out. CrundyTalk nerdy to me 15:54, 16 September 2010 (UTC) And the hits just keep on coming. "The pontiff praised Britain's fight against the Nazis - who 'wished to eradicate God' - before relating it to modern day 'atheist extremism'. I call Godwin. Jack Hughes (talk) 16:09, 16 September 2010 (UTC) Well, actually living in the 3rd world city where he landed, I can say it's rather more pleasant than most of the godly utopias he would seem to approve of. Evil stupid Hoover! 16:55, 16 September 2010 (UTC) The Vatican is the original micronation - a contrived nation-state with no real population to speak of. They're also the only micronation with international recognition. Hey, if the Vatican deserves recognition as an independent country, so should the Principality of Sealand. And I, for one, would welcome Temple Square in Salt Lake City becoming its own independent nation, all four city blocks of it, with Thomas Monson as head of state. As long as the Vatican gets to be an independent nation it's only fair. What makes the Catholic Church so special anyhow? Secret Squirrel (talk) 20:00, 16 September 2010 (UTC) Nothing gives the Vatican legitimacy except tradition and precedent. There is a good case to be made for reappraising its status, but during an official visit by its chieftain is not the time for that reappraisal.-- Kriss AkabusiAAAWOOOGAAAR!!1 08:29, 17 September 2010 (UTC) No, before his visit was the right time. CrundyTalk nerdy to me 08:39, 17 September 2010 (UTC) Newsflash: The pope has huge balls CrundyTalk nerdy to me 08:45, 17 September 2010 (UTC) One more joke: The Popemobile caused huge traffic jams in Scotland today resulting in angry road users making their feelings known. Apparently a minibus full of school children gave him the horn. CrundyTalk nerdy to me 14:56, 17 September 2010 (UTC) why would England of all places welcome a Pope ? Didnt they boot his peoples out of the country and start their own Church a few hundred years ago ? Hamster (talk) 22:32, 19 September 2010 (UTC) Yes, but they're joining forces against the evil nazis atheists now. -- Nx / talk 22:34, 19 September 2010 (UTC) ### The effin' anarchist has landed And beats down Christianity. The Goonie 1 What's this button do? Uh oh.... 01:01, 17 September 2010 (UTC) The Pope has has Reality - Why do you Seek to Impose Yours?--Tolerance (talk) 20:10, 17 September 2010 (UTC) ## Internet Explorer 9 Anyone download the beta yet? I've seen a screenshot of it on WP, and it looks oddly familiar.... --Sir Onion Kneel before my vegetable might! 20:18, 16 September 2010 (UTC) No, i gave up on ie years ago. i didn't think anyone here still used it by choice. Totnesmartin (talk) 20:21, 16 September 2010 (UTC) People still use browsers? --151.81.175.15 (talk) 21:03, 16 September 2010 (UTC) I don't use betas, but quite frankly the whole anti-Microsoft meme is very old school. IE8 is almost as good as any other browser for compatibility (I believe only Opera passes all the tests?), and it's quick and these days it's decently secure too. These things to in phases. It used to be NetScape, then it was IE, then it was Firefox, now it's Chrome. I currently use a mix of all four because each browser does different things in different ways so suit different tasks differently. It also stops me getting the fanboy attitude for any one browser. Of course, I won't touch that filthy Apple piece of shit Safari. That's being too diverse. –SuspectedReplicant retire me 21:12, 16 September 2010 (UTC) You know what's a tired-old meme? Having to restart your computer to use a newly installed piece of software. Anyway, it must be snowing in hell, because IE doesn't suck any more. -- Nx / talk 22:00, 16 September 2010 (UTC) What were you using before that makes you think that IE8 is quick? Christ, the only thing I've seen run slower than IE8 is a fat kid chasing the Brussel Sprout van. The only time I open IE these days is when I've got no other choice and even then I'm more inclined to think 'sod it, it ain't worth the effort'.-- Jabba de Chops 23:10, 16 September 2010 (UTC) Opera - I apologise for becoming a fanboy - passes a lot of tests and was the first, with the exception of a beta release of the latest Safari, to pass Acid 3 (and IMHO its interface beats the shit out of pretty much all competitors) but it does have compatibility problems because it takes web standards very seriously. Therefore any bad hacks or patches or insane workarounds that work on IE or Firefox just plain won't work in Opera. Facebook's "@Mentions" thing didn't work until a very specific update and it still doesn't have the full compliment CSS3 features because they're not accepted web standards yet. It also has some quirks that I don't find in other browsers, so these "tests" aren't the be all and end all. How much of this is actually a browser problem and how much is due to websites being a little naughty and loose with their web standards compliance, I don't know, probably both simultaneously. theist 07:07, 17 September 2010 (UTC) Acid3 is not so important as it is hyped up to be. Firefox 4 still doesn't pass acid3, and the developers don't care, because the last few points are stuff that's low priority, e.g. SVG fonts, which are unnecessary, because Firefox supports better font formats. -- Nx / talk 07:59, 17 September 2010 (UTC) Haven't used Opera in years. In my first job when were designing websites for cross-browser compliance the general rule was "if it works in Opera it'll work in anything". I presume it's a lot better now. CrundyTalk nerdy to me 07:48, 17 September 2010 (UTC) It's still "if it works in Opera it'll work in anything". That does mean "if it works in IE or Firefox it may not work in Opera", because it's a picky f**ker. And of course, you have that annoying browser identification that tries to restrict content from Opera - I presume to mask the fact that they're not W3C compliant - but 99% of the time the ID and Masquerade settings fix that. That said, however, while redesigning my group's webpage I tried to hack captions into some of the images (my HTML and CSS has since come a long way but I've refused to update it) and it was Firefox that was the only one that displayed it wrong. The Internet said that Firefox, by virtue of being Firefox MUST be right, but IMHO, if every other major and many minor browsers are displaying it correctly and as I intended then it's Firefox that's wrong. Never mind. Opera 10 is still pretty. theist 07:53, 17 September 2010 (UTC) Opera may support "standards" but it's seriously behind in the HTML5 land. Chrome, safari and firefox are the only real options for desktop. In that order. --62.142.167.85 (talk) 08:43, 17 September 2010 (UTC) I bet you can't name 10 website that currently use HTML5 as default. --Sir Onion Kneel before my vegetable might! 20:34, 17 September 2010 (UTC) This IE9 is actually pretty good. And that's coming from someone who hates IE8. Impressed, to be honest. Jaxe (talk) 08:59, 17 September 2010 (UTC) Cool. I'll give it a try on my lappy. --Sir Onion Kneel before my vegetable might! 20:34, 17 September 2010 (UTC) DeviantART's Muro is apparently an HTML5 thing. Works fine with Opera that I can tell. But people should stop saying "HTML5 is here" or whatever, it's "here" in the same way that recent work into carbon nanotubes mean a space elevator is "here". It works but it's hardly widely used, fully implemented or standard at the moment. theist 15:06, 18 September 2010 (UTC) No, because ie9 requires Vista or 7. My netbook runs (quite happily, thanks) on XP. Netbooks with Win 7 get the crippleware version that won't let you change the background image (!) or plug in a DVD-ROM drive (!!!) We had an argument at the suppertable last night about how many Americans will be motivated to upgrade their OS -- for most, their computers as well -- to get a new browser. My son was arguing that people are dumber than I think they are. JonquilS (talk) 01:48, 19 September 2010 (UTC) After having a netbook for the best part of 4-5 months now, I can honestly say I don't miss the desktop image thing. There are hacks around it, of course, but I don't miss it one single bit and rarely even notice it any more. Haven't tried a DVD drive yet, but with a small amount of preparation I don't need one as its hard-disk is big enough (and I have an extra TB or so on an external one) to host rips or iso files. In short, I can't say that Windows 7 Starter is that much of a problem as you might think when you first hear about its restrictions. theist 22:36, 19 September 2010 (UTC) ## Weird Site Came across Hidden Pleasures Exposed, which appears to be a Xian site devoted to saving Xians from the perils of internet porn. Maybe not so interesting in itself, but they certainly do present some very interesting statistics. --PsyGremlinParla! 06:43, 18 September 2010 (UTC) Interesting. I may become a Christian. I'm not sure about this statistic though: • "31% of people have had an online conversation that has led to real-time sex." Am I missing out?--BobSpring is sprung! 07:37, 18 September 2010 (UTC) "96% of teenagers admit to masturbating" And 4% are incorrigible liars. --JeevesMkII The gentleman's gentleman at the other site 08:03, 18 September 2010 (UTC) Yes and yes to both of you. ħuman 08:06, 18 September 2010 (UTC) What on earth is 'real-time sex'? As opposed to 'previously compiled sex' or 'buffered sex'? DeltaStarSenior SysopSpeciationspeed! 08:46, 18 September 2010 (UTC) As opposed to 'turn-based sex'. Jaxe (talk) 09:01, 18 September 2010 (UTC) Turn-based sex can be fun, with the right equipment. 81.141.65.25 (talk) 11:39, 18 September 2010 (UTC) Nothing wrong with turn-based sex. Just make sure you enter the right hex… -- Jabba de Chops 14:24, 18 September 2010 (UTC) I've been having client-based sex on WoW for months. It's just a matter of getting right addons and peripherals. I think. 20:43, 19 September 2010 (UTC) 31% of people have had an online conversation that has led to real-time sex - one presumes that people in relationships occasionally talk to each other online. Would this count? A booty call is a booty call whether it's on a landline or by Facebook Chat. theist 16:29, 18 September 2010 (UTC) I'm rather taken by the fact that the 31% statistic is sourced to www.manhaters.com. Clearly a statistical journal of some note. --Kels (talk) 23:34, 18 September 2010 (UTC) ## Pope joke (Only the Poms might gets this one). Last night the Pope had supper with his Cardinals. After the meal, they passed around the Under Eights. --PsyGremlinParla! 17:23, 19 September 2010 (UTC) I understand TwinKle likes an after-dinner mince. Lily Inspirate me. 17:27, 19 September 2010 (UTC) Headline reads: "Pope Touches Down In The UK" - "Down, age 6, was unavailable for comment". While it was mentioned above, it may as well get a repost with the rest. We should move on to the George Michael ones next, I have a good stock of those now. theist 17:31, 19 September 2010 (UTC) You don't have to be good at anagrams to see that Pope Benedict is an Epic Bent Pedo. CrundyTalk nerdy to me 19:33, 19 September 2010 (UTC) BBC News: Pope compares atheists with Nazis. How would he know? He's never been an atheist. CrundyTalk nerdy to me 19:37, 19 September 2010 (UTC) Wanna hear a joke? The Pope. Tetronian you're clueless 19:54, 19 September 2010 (UTC) Three fellas walk in to a bar; the pope, a paedophile and a Nazi. And a farmer and a shopkeeper. DeltaStarSenior SysopSpeciationspeed! 20:04, 19 September 2010 (UTC) In fairness, he apparently "didn't attend meetings" of the Hitler Youth, getting out of it by joining the Luftwaffe (make of that what you will) and his family got out of Nazi Germany as soon as they could. And there's no evidence that he partook of the kiddie fiddling personally, only that he headed the investigation and signed off on the subsequent whitewash. But with that out of the way, please, continue. theist 20:35, 19 September 2010 (UTC) ## Dear Mr Ubuntu Play my DVDs or I will kick your nuts so hard you'll have two extra tonsils. Totnesmartin (talk) 20:02, 19 September 2010 (UTC) I too have had that problem, and also the problem of getting the thing to burn VIDEO DVDs. I just gave up in the end. You get what you pay for! DeltaStarSenior SysopSpeciationspeed! 20:05, 19 September 2010 (UTC) Do I bollocks. I paid for the DVD and I can't watch it. Totnesmartin (talk) 20:08, 19 September 2010 (UTC) In Germany, you aren't even allowed to hint to the necessary patches (yes, they do exist!) larronsicut fur in nocte 20:22, 19 September 2010 (UTC) I've got it to play Barbarella. Totnesmartin (talk) 20:38, 19 September 2010 (UTC) Mr Ubuntu, your 'nads are safe. Totnesmartin (talk) 20:40, 19 September 2010 (UTC) ## Who is sticking pins in a doll of me? In the past two months or so, the following has happened to me. • I had to spend over three hundred dollars to get a headlight bulb replaced in my car (and refused to spend$500 to get the headlight autoleveling system fixed)
• I had my iMac in for repairs twice, for over a month of downtime total (they didn't fix it right the first time -- at least they didn't charge me for the second time)
• I broke up with my partner of over ten years
• And just last night, I got the Red Ring of Death on my XBox 360. I think I'm just going to replace it rather than repair it -- it's one of the original 360's.
I'm just wondering what's next -- I develop some bizarre disease that requires a Doctor House to diagnose, perhaps? MDB (talk) 11:00, 16 September 2010 (UTC)
Remember that time we challenged a bunch of witches to put a curse one us? Just saying... --PsyGremlinПоговорите! 11:13, 16 September 2010 (UTC)
So, they pick the mostly harmless theist amongst this bunch? Feh.
On the good side, I do have a date a week from Saturday. It probably would have been this Saturday except I need to go to my ex's place to pick up some things that belong to me and give him back his key. We do seem to be succeeding at remaining friends, though the dynamic is different for obvious reasons. (I've not told him about my impending date. I asked him if he wanted to know if I started seeing someone, and he wasn't sure.) MDB (talk) 11:34, 16 September 2010 (UTC)
Maybe you're like some modern day Job. The boils and sores will be next, just in time for your date if whatever deity has a sense of comic timing. --JeevesMkII The gentleman's gentleman at the other site 13:46, 16 September 2010 (UTC)
BTW, if your SexBox is one the originals and has a full RROD (not just the power supply one) you can usually fix it easily with an X-Clamp. CrundyTalk nerdy to me 13:49, 16 September 2010 (UTC)
Sadly, I am complete incompetent when it comes to mechanical work, so I'm just going to get a new one. (Fortunately, one of my co-workers has a data transfer kit he's going to loan me so I can pull the data off the old hard drive.) But yes, it is an original XBox 360 -- I got mine about a month after it came out. MDB (talk) 13:55, 16 September 2010 (UTC)
I'll have it if you're throwing it! I can build my evil army of Xboxes even more MWAHAHAHA!!! CrundyTalk nerdy to me 13:58, 16 September 2010 (UTC)
You pay shipping and it's yours, if you're serious.
I suppose I shouldn't complain too much about the iMac repair, though. The first time I took it in, they replaced the video card, and the new one failed in three days. As it turned out, the mother board had a flaw that was frying the video card. So, both should have been replaced. But since they didn't fix it right the first time, they didn't charge me at all for the second repair. So, I got the mother board and video card replaced for just the cost of the video card. (And they didn't charge me for service the first time, just for the card itself.) So, I paid about $400 for$1300 in repairs.
Now, that doesn't excuse the fact it took them almost three weeks to do the second repair. Not only that, it was ready on the 9th, but I only found out it was ready when I checked the status at Apple's web site on the 12th. They never bothered to call me and let me know it was done. MDB (talk) 15:31, 16 September 2010 (UTC)
Oh bollocks, you're a yank. For some reason I thought you were a limey. Who am I confusing you with now? Well, as much as I'l love to rip apart an Xbox and fix it up, I suspect the shipping will be a bit too much. CrundyTalk nerdy to me 15:37, 16 September 2010 (UTC)
Even if I did ship it, would a US XBox work on the other side of the pond? MDB (talk) 15:40, 16 September 2010 (UTC)
Would probably "work", but wouldn't be able to play UK games on it. CrundyTalk nerdy to me 15:50, 16 September 2010 (UTC)
Also, mains voltage is wrong. Gonna run a bit warm over there. ħuman 00:34, 18 September 2010 (UTC)
Is that still an issue? Doesn't your IT gear come with auto-sensing or switchable power supplies? Lily Inspirate me. 17:32, 19 September 2010 (UTC)
Not much of it does, no. I suspect the different markets are big enough to justify the cost-savings of fewer parts for any high production items. Oddly, my ADAT tape recorders work from something like 90 to 250 volts, with no switches to play with. ħuman 21:09, 19 September 2010 (UTC)
I am not familiar with these Xboxi, but I understand that they do not have their own display screen and are dependent on a TV. Wouldn't the PAL/NTSC difference affect the display? 22:52, 19 September 2010 (UTC)
Most TVs these days can handle both PAL and NTSC signals. Mine certainly can. CrundyTalk nerdy to me 08:19, 20 September 2010 (UTC)
I think a TV built to handle PAL can do NTSC easily, since it's lower quality. Not so much the other way 'round though. ħuman 21:04, 20 September 2010 (UTC)
If you're using component or HDMI I think you don't have to worry about NTSC or PAL. Aphoxema (talk) 21:15, 20 September 2010 (UTC)
|
2014-09-16 00:52:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44111526012420654, "perplexity": 3806.8412028823313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657110730.89/warc/CC-MAIN-20140914011150-00143-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
|
http://clay6.com/qa/617/assume-that-the-chances-of-a-patient-having-a-heart-attack-is-40-it-is-also
|
Browse Questions
# Assume that the chances of a patient having a heart attack is 40%. It is also assumed that a meditation and yoga course reduce the risk of heart attack by 30% and prescription of certain drug reduces its chances by 25%. At a time a patient can choose any one of the two options with equal probabilities. It is given that after going through one of the two options the patient selected at random suffers a heart attack. Find the probability that the patient followed a course of meditation and yoga?
This question has appeared in model paper 2012
Can you answer this question?
Toolbox:
• According to Bayes Theorem, if $E_1, E_2, E_3.....E_n$ are a set of mutually exclusive and exhaustive events, then $P\left(\large \frac{E_i}{E}\right ) = \Large \frac{P\left(\frac{E}{E_i}\right ). P(E_i)} {\sum_{i=1}^{n} (P\left(\frac{E}{E_i}\right ).P(E_i))}$
Let $E_1$ be the event that the person follows yoga and medication, $E_2$ be the event that the person took prescription drugs. Let E be the event that the person has a heart attack. We need to find the probability that the person followed yoga and medication given certain conditions.
$E_1$ and $E_2,$ are a set of mutually exclusive and exhaustive events, so we can use Bayes Theorem to caclulate the conditional probability $P\left(\large \frac{E_i}{E}\right ) = \Large \frac{P\left(\frac{E}{E_i}\right ). P(E_i)} {\sum_{i=1}^{n} (P\left(\frac{E}{E_i}\right ).P(E_i))}$
Let us first caculate $P \large(\frac{E}{E_i})$:
$P \large(\frac{E}{E_1}) =$ 40% (1 - 30%) = 28% = $\large\frac{28}{100}$
$P \large(\frac{E}{E_2}) =$ 40% (1 - 25%) = 30% = $\large\frac{30}{100}$
Also, $P (E_1) = P (E_2) = \large\frac{1}{2}$
P (probability that the person who had heart attach followed meditation and yoga) = $P\left(\large \frac{E_1}{E}\right )$
$P\left(\large \frac{E_2}{E}\right ) = \Large \frac{P\left(\frac{E}{E_2}\right ). P(E_2)} {\sum_{i=1}^{4} (P\left(\frac{E}{E_i}\right ).P(E_i))}$,
$=\large\;\frac{\frac{28}{100}\;\times\;\frac{1}{2}}{\large\frac{28}{100}\;\times\;\frac{1}{2}\;+\;\frac{30}{100}\;\times\;\frac{1}{2}}$
$= \large\frac{\frac{28}{200}}{\frac{28}{200}+\frac{30}{200}} = \frac{28}{28+30} = \frac{14}{29}$
answered Jun 22, 2013
Thanxx a lot ..!!!!
|
2017-02-20 06:10:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5197811722755432, "perplexity": 480.7472937997283}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170425.26/warc/CC-MAIN-20170219104610-00221-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://www.gamedev.net/forums/topic/511474-transforming-frustum-to-obb-space/
|
# Transforming frustum to OBB space
## Recommended Posts
Hi there! :) I need to check if the static mesh is in frustum. I already have a 'Frustum-AABB' test and it works, but I also need a 'Frustum-OBB' test... but everything I have is AABB in local space and object transform :) I tried to transform normals of frustum planes and add object translation to AABB 'min' and 'max', but this works only if I have translation OR rotation matrix applied to object. If I concatenate this transforms( R*T ) my function doesn't work :( Here is the code I wrote to check if OBB is in frustum:
// ** Only AABB in local space
// ** and object transform, as mentioned above :)
bool cFrustum::BoxInside( const cBoundingBox& box, const cMatrix4& T ) const
{
// ** Add object translation to AABB 'min' and 'max'
cBoundingBox aabb = box + T.GetTranslation();
const cVector3& min = aabb.min;
const cVector3& max = aabb.max;
cVector3 c = aabb.GetCenter();
// ** Half-extents
float w = aabb.GetWidth() * 0.5f;
float h = aabb.GetHeight() * 0.5f;
float d = aabb.GetDepth() * 0.5f;
// ** I don't have scale in matrix, so I just calculate a transpose
cMatrix4 tT = cMatrix4::Transpose( T );
for(int i = 0; i < 6; i++ )
{
cVector3 N = frustum[i].normal;
// ** Multiplies N by 3x3 transposed rotation matrix
tT.TransformVector( N );
// ** Intersection test as if it is an AABB
if( N * min + frustum[i].distance > 0 ) continue;
if( N * max + frustum[i].distance > 0 ) continue;
if(N.x * (c.x + w) + N.y * (c.y - h) + N.z * (c.z - d) + frustum[i].distance > 0)
continue;
if(N.x * (c.x - w) + N.y * (c.y + h) + N.z * (c.z - d) + frustum[i].distance > 0)
continue;
if(N.x * (c.x + w) + N.y * (c.y + h) + N.z * (c.z - d) + frustum[i].distance > 0)
continue;
if(N.x * (c.x - w) + N.y * (c.y - h) + N.z * (c.z + d) + frustum[i].distance > 0)
continue;
if(N.x * (c.x + w) + N.y * (c.y - h) + N.z * (c.z + d) + frustum[i].distance > 0)
continue;
if(N.x * (c.x - w) + N.y * (c.y + h) + N.z * (c.z + d) + frustum[i].distance > 0)
continue;
return false;
}
}
Thanks :)
##### Share on other sites
Quote:
Original post by 0xFFI need to check if the static mesh is in frustum. I already have a 'Frustum-AABB' test and it works, but I also need a 'Frustum-OBB' test.
Intersection of Orthogonal View Frustum and Oriented
Bounding Box using Separation Axis Testing
I also have an implementation of this at my geometrictools.com website.
##### Share on other sites
WoW! Thanks! That's what I looked for :)
|
2017-11-21 10:25:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29385191202163696, "perplexity": 7424.927381952667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806338.36/warc/CC-MAIN-20171121094039-20171121114039-00431.warc.gz"}
|
https://zbmath.org/?q=an%3A0789.26002
|
# zbMATH — the first resource for mathematics
An introduction to the fractional calculus and fractional differential equations. (English) Zbl 0789.26002
New York: John Wiley & Sons, Inc.. xiii, 366 p. (1993).
The reader will find here a systematic treatment of the theory of fractional calculus and its applications in the solution of fractional differential equations and fractional difference equations.
The following are the main features of the book:
There are eight chapters. The historical development of the fractional calculus from 1790 to the present is given in Chapter I. Several interesting mathematical arguments concerning the definition of fractional calculus are discussed in Chapter II which lead to the present definition of fractional integrals and derivatives. Chapter III is devoted mainly in developing the theory of Riemann-Liouville integral. Certain new techniques are investigated in finding the fractional integrals of more complicated functions. The theory of Riemann-Liouville fractional calculus is discussed in Chapter IV. The integral and differential representations of the ordinary special functions occurring in applied mathematics are derived as fractional integrals and derivatives which enhance the utility of fractional calculus. Chapter V deals with certain properties of fractional differential equations with constant coefficients. Chapter VI gives a method for deriving the solution of fractional integral equations, fractional differential equations with non-constant coefficients, sequential fractional differential equations, and vector fractional differential equations. The theory of Weyl fractional calculus is dealt with in Chapter VII. Certain selected physical problems are discussed in the last chapter which lead to fractional integral or differential equations.
There are four appendices which have further enhanced the utility of the book. Appendix A deals with some identities associated with partial fraction expansions. Appendix B contains elementary properties of certain higher transcendental functions. Laplace transforms as applied to the functions $$E_ t(\nu,a)$$, $$C_ t(\nu,a)$$, and $$S_ t(\nu,a)$$, are discussed in Appendix C including short tables of these functions. Appendix D contains a brief table of fractional integrals and derivatives.
The book is well written which may be used as a text or a reference book. It contains many results from research papers published during the last decade, hence it is useful to research workers in the field of fractional calculus, special functions, integral equations and integral transforms.
##### MSC:
26A33 Fractional derivatives and integrals
|
2021-03-08 20:41:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7765871286392212, "perplexity": 279.0589061107666}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178385389.83/warc/CC-MAIN-20210308174330-20210308204330-00060.warc.gz"}
|
https://matrixbookstore.biz/sheridan-stores-whncdlu/is-zn-diamagnetic-14e1cc
|
# is zn diamagnetic
Since there are no unpaired electrons it is diamagnetic. Best Answer. Diamagnetic Cu(+) ions substitute the Zn(+2) sites ( Defect structure). Correct answer is Nb^3-9 0. Diamagnetic List C2 Potassium Ne2 CO silicon sulfur neon ( Ne ) h2 hydrogen zinc zn si helium ( he ) beryllium Nitrogen N2 be2 carbon s2 ag Copper zn2+ cu V3+ Cadmium cd2+ B2 2-no c CN-au ( gold ) s N P b br Boron Arsenic se ( Selenium ) Argon ( ar ) kr ( Krypton ) Phosphorus Ferromagnetic Nickel ( ni ) Ni2+ Cobalt Iron ( fe ) Antiferromagnetic Chromium ( cr ) We get answers from Resources Best answer. 2) Fluorine is in the 2nd period, so it's not going to have electrons with n=3 or n=4 in its ground state. 1s2 2s2 2p6 3s2 3p6 3d10 . Because there are no unpaired electrons, Zn atoms are diamagnetic. These elements have still got an unpaired electron. Look up or configure yourself the electron structure of Zn^+2. Respond to this Question. There are no unpaired electrons in $\ce{ZnO}$, making $\ce{ZnO}$ diamagnetic. So it is 28 configuration_4s2 3d8. Thanks! Step 4: Determine whether the substance is paramagnetic or diamagnetic. Hence the geometry of, [Zn (NH3)4]+2 complex ion would be tetrahedral. However, in the octahedral complex ion, the d orbitals split into two levels, with three lower-energy orbitals and two higher-energy ones. Please help i am confused here is the questions and answers In the past, the Kota spoke to the bones of their ancestors before making important decisions. • From the above electronic configuration, it has been found that … Mass of another car is, m=10000 kg. So at low concentrations, the mixture behaves like a diamagnetic substance. Problem: Which species is diamagnetic? Why Zn^2 + is diamagnetic ... chemistry. Which description explains how Figure A was transformed to create figure B. Answer [C r (N H 3 ) 6 ] 3 + is paramagnetic due to presence of 3 unpaired electrons. Answer. Atoms with all diamagnetic electrons are called diamagnetic atoms. Problem: Which species is diamagnetic? a- Find the electron configuration b- Draw the valence orbitals c- Look for unpaired… Hany. *Response times vary by subject and question complexity. Answer. Table 1.1 . (P): [Z n (H 2 O) 6 ] (N O 3 ) 2 → 3 d 1 0 strong field ligand, diamagnetic (because d-orbital is completely occupied) Except for K and N all are diamagnetic complexes. First observed by S.J. Zn [Ar] 3d^10 4s^2 paired electron diamagnetic species. Why Z n 2 + is diamagnetic whereas C r 3 + is paramagnetic ? Is Zn+2 diamagnetic or paramagnetic? The number of unpaired electron is 0. µ = $$\sqrt{0(0+2)}$$ = 0 µ B. Cu +, Zn 2+ are diamagnetic. Prove it. Why Zn" is diamagnetic? In contrast, paramagnetic and ferromagnetic materials are attracted by a magnetic field. Suppose you have two similar rectangular prisms. a- Find the electron configuration b- Draw the valence orbitals c- Look for unpaired… A paramagnetic substance is one that contains one or more unpaired electrons.. On the other hand, a diamagnetism substance is one that does not contain any odd electrons. You may need to download version 2.0 now from the Chrome Web Store. Diamagnetic materials are repelled by a magnetic field; an applied magnetic field creates an induced magnetic field in them in the opposite direction, causing a repulsive force. Paramagnetic compounds sometimes display bulk magnetic properties due to the clustering of the metal atoms. Paramagnetism. In order to investigate the magnetic permeability of any material, it is important to first know the classification category of the material. So at low concentrations, the mixture behaves like a diamagnetic substance. Diamagnetism occurs when orbital electron motion forms tiny current loops, which produce magnetic fields. A Gouy balance is used to determine the magnetic susceptibility of a substance. A. relative maximum B. relative minimum C. critical value D. inflection point E. Which of the following atoms is paramagnetic? O : {eq}\rm 1s^22s^22p^4 {/eq} (2 unpaid electrons in 2p orbital) Hence, O is not diamagnetic in the ground state. And if you have all paired electrons, we're talking about diamagnetic. Your IP: 93.191.243.37 I am a little confused on how to figure this out and I'm finding inconsistent answers when I tried to look this up. read more . Trending Questions. Get your answers by asking now. Thus electronic configuration, to large extent, the existence and stability of oxidation states. Thanks! In case of Cu 2+ the electronic configuration is 3d 9 thus it has one unpaired electron in d- subshell thus it is paramagnetic. The lesser number of oxidation states at extreme ends arise from either too few electrons to loose or share (e.g. Apr 8, 2010 . They stored the bones in small baskets woven of rolls of bark or leaves. Step 3: Look for unpaired electrons. 100% (1 rating) 1a) answer is Zn, P3- and Ca the ions are diamagnetic if they dont have any unpaired electrons in their ground electro view the full answer. In diamagnetic materials all the electrons are paired so there is no permanent net magnetic moment per atom. Median response time is 34 minutes and may be longer for new subjects. Please enable Cookies and reload the page. A strong electromagnet is placed next to the sample, which is on a balance. Let f(x) be a polynomial function such that f(4)=-1, f’(4)=2 and f”(4)=0. Sc and Ti) or too many d electrons (hence fewer orbitals available in which to share electrons with others) for higher elements at upper end of first transition series (i.e., Cu and Zn). These tabulated values can be problematic since many sources contain incomplete and conflicting data. MEDIUM. Since the Zn+2 ion has no unpaired electrons, hence it is diamagnetic. On the Periodic Table of Elements, it is given the symbol 'Zn' and the atomic number 30. C r 3 + has electronic configuration [A r] 3 d 3. Both Assertion and Reason are correct and Reason is the correct explanation for Assertion. If the substance is placed in a magnetic field, the direction of its induced magnetism will be opposite to that of iron (a ferromagnetic material), producing a repulsive force. This correction is often accomplished by using tabulated values for the diamagnetism of atoms, ions, or whole molecules. Hence, Zn is diamagnetic in the ground state. We can determine this by setting up the orbital diagrams of the valence electrons for each atom. 0 0. Para magnetism and diamagnetism of a substance depends on the number of electrons occupied by it. Figure A was Reflected across a horizontal line of reflection to create Figure B. If x0. transition and inner transition elements; class-12; Share It On Facebook Twitter Email. But magnetically the ions are diamagnetic whereas Ni is ferromagnetic. So, they are paramagnetic. Cu +, Zn 2+ are diamagnetic. What is Paramagnetic and Diamagnetic ? The electronic configuration of {eq}Zn^{2+}{/eq} is {eq}1s^{2}2s^{2}2p^{6}3s^{2}3p^{6}3d^{10}{/eq} Since their is no unpaired electrons in {eq}Zn^{2+}{/eq} ion, hence is diamagnetic. What is the bond order of the diatomic molecule BN and is it paramagnetic or diamagnetic? transition and inner transition elements; class-12; Share It On Facebook Twitter Email. Diamagnetic materials are repelled by a magnetic field; an applied magnetic field creates an induced magnetic field in them in the opposite direction, causing a repulsive force. The set in which all the species are diamagnetic is (A) B 2, O 2, NO (B) O 2, O 2 +, CO (C) N 2, O 2 –, CN – (D) C 2, O 2 2 –, NO + Ans. Aluminium has an unpaired electron, which would mean it is paramagnetic. Cr2+, Zn, Mn, C. FREE Expert Solution. Video Explanation. Diamagnetic levitation A small (~6mm) piece of pyrolytic graphite (a material similar to graphite) levitating over a permanent gold magnet array (5mm cubes on a piece of steel). Therefore it has 4 unpaired electrons and would be paramagnetic. Brugmans (1778) in bismuth and antimony, diamagnetism was named and answered Aug 22 by subnam02 (50.2k points) selected Aug 24 by Nilam01 . Even leaving Ni out of the comparison reveals magnetic differences among the ions. There are no unpaired electrons. This correction is often accomplished by using tabulated values for the diamagnetism of atoms, ions, or whole molecules. Look up or configure yourself the electron structure of Zn^+2. I think its diamagnetic because it loses 2 electrons and it still has no unpaired pairs...i think...thanks. Prove it. This chemistry video tutorial focuses on paramagnetism and diamagnetism. So if we take an electron with positive magnetic spin, we will have another electron paired with it which will have opposite spin. In diamagnetic materials, the isotropic chemical shift of 67 Zn is in the range of ~630 ppm (see Table 1.1). Note that the poles of the magnets are aligned vertically and alternate (two with north facing up, and two with south facing up, diagonally). Join Yahoo Answers and get 100 points today. Sn Mg Zn Mn Ba The answer is Mn. A Diamagnetic is a material that has a weak or negative susceptibility towards magnetic fields. Diamagnetic species are atoms that has zero unpaired electrons. Measured magnetic susceptibilities of paramagnetic substances must typically be corrected for their underlying diamagnetism. The diamagnetic contribution from the valence electrons is small, but from a closed shell it is proportional to the number of electrons in it and to the square of the radius of the ‘orbit’. 1s2 2s2 2p6 3s2 3p6 3d10 . Examples of these metals include $$Sc^{3+}$$, $$Ti^{4+}$$, $$Zn^{2+}$$, and $$Cu^+$$. Paramagnetism is a form of magnetism whereby certain materials are weakly attracted by an externally applied magnetic field, and form internal, induced magnetic fields in … Step 3: Look for unpaired electrons. For zinc the electronic configuration is 3d10 4s2 with a vacant p orbital .Hence all the electrons present are paired up. Step 2: Draw the valence orbitals. Since there are no unpaired electrons it is diamagnetic. answered Aug 22 by subnam02 (50.2k points) selected Aug 24 by Nilam01 . B 2, O 2, NO, O 2 +, O 2 – Paramagnetic O 2 2 –, C 2, N 2, NO +, CO Diamagnetic 22. For Zn atoms, the electron configuration is 4s 2 3d 10. Cu +, Zn 2+ electronic configuration [Ar] 3d 10. is F2 +2 ion diamagnetic or paramagnetic? Zn as well as its ion Zn2+ arr diamagnetic in nature because. Look up or configure yourself the electron structure of Zn^+2. Zn, Sr, Kr, Te, Ca? Cu +, Zn 2+ electronic configuration [Ar] … Video Explanation. A solid comprises of three types of elements, 'P', 'Q' and 'R'. Cr2+, Zn, Mn, C. FREE Expert Solution. First Name. 1) Figure out the electron configurations of the elements. Since their is no unpaired electrons in {eq}Zn^{2+}{/eq} ion, hence is diamagnetic. But from the look of copper and silver, gold should be diamagnetic too. So you’re just supposed to know that a carbon atom exists … I am a little confused on how to figure this out and I'm finding inconsistent answers when I tried to look this up. Zinc is a bluish-white, lustrous, diamagnetic metal, though most common commercial grades of the metal have a dull finish. The six d electrons would therefore be in the lower set, and all paired. O^2- is only paramagnetic species from given . Why Ni +2 paramagnetic whereas Zn +2 diamagnetic?. Because there are no unpaired electrons, Zn atoms are diamagnetic. Previous question Next question. It is a bit unconventional but Zn^0 means Zn(0) with no charge: [Ar] 3d^10 4s^2 all e- paired diamagnetic; you have the other two correct DrBob222. Paramagnetic. Is Zn+2 diamagnetic or paramagnetic? For Zn atoms, the electron configuration is 4s 2 3d 10. The diamagnetic contribution from the valence electrons is small, but from a closed shell it is proportional to the number of electrons in it and to the square of the radius of the ‘orbit’. Ask Question + 100. Trixie. Measured magnetic susceptibilities of paramagnetic substances must typically be corrected for their underlying diamagnetism. HARD. Both Assertion and Reason are correct but Reason is not the correct explanation for Assertion. The molecule is said to have diamagnetic character. The metal ions in the series, Cu +, Zn 2+, Ga 3+, and Ge 4+ with their loss of valence electrons would seem to be electronically equivalent to each other and to a Ni atom. The sodium ion is diamagnetic. Another way to prevent getting this page in the future is to use Privacy Pass. Trending Questions. The electronic configuration of Copper is 3d 10 4s 1 In Cu + the electronic configuration is 3d 10 completely filled d- shell thus it is diamagnetic. Writing the electron configurations will help us identify the valence electrons. For which of the following atoms is the 2+ ion paramagnetic? When an external magnetic field is applied, the current loops align and oppose the magnetic field. Hence, option C is correct. If unpaired electrons are present then there is paramagnetic character. Figure B is the image of Figure A. Which of the following diatomic species are paramagnetic and which are diamagnetic? Cloudflare Ray ID: 6096db67da520e16 Zn as well as its ion Zn2+ arr diamagnetic in nature because. Answer: zinc ( zn ) is a Diamagnetic. Diamagnetic Up to date, curated data provided by Mathematica 's ElementData function from Wolfram Research, Inc. Click here to buy a book, photographic periodic table poster, card deck, or 3D print based on the images you see here! Q: Just answer the second one please A: Given data: Mass of railroad car is, M=33000.0 kg. Now, since brass is made up of copper and zinc and no other major metals or non-metals are present, hence the electronic properties ascribed by the two metals (Cu and Zn) only. B. But it's so negligible that their diamagnetic property is enhanced. Performance & security by Cloudflare, Please complete the security check to access. If you are on a personal connection, like at home, you can run an anti-virus scan on your device to make sure it is not infected with malware. Where as Mn 2 is paramagentic. These metals are the not defined as paramagnetic: they are considered diamagnetic because all d-electrons are paired. The molecule is said to have diamagnetic character. Nb^-3 [Kr] 5s^2 paired electron diamagnetic species. zinc ( zn ) Diamagnetic: zn2+ Diamagnetic : Ferromagnetic: Cobalt ( co ) Ferromagnetic: Iron ( fe ) Ferromagnetic: Ni2+ Ferromagnetic: Nickel ( ni ) Ferromagnetic : Antiferromagnetic: Chromium ( cr ) Antiferromagnetic : We get answers from Resources: answers.yahoo.com answers.com google.com youtube.com pubchem.ncbi.nlm.nih.gov reference.com www.quora.com Is Ni2+ Paramagnetic or Diamagnetic … C r (Z = 2 4) has electronic configuration [A r] 3 d 5 4 s 1. This effect is generally weak, and if a substance is paramagnetic (has unpaired electrons), it usually will render the diamagnetism insignificant. I don't know whether gold is diamagnetic (My periodic table shows: no data for Gold's magnetic properties). The point (-4, -1) is which of the following for the graph of f? So it is diamagnetic. 1 Answer +1 vote . A diamagnetic material has a permeability less than that of a vacuum. Following this logic, the C … C. Assertion is correct but Reason is incorrect. Source: quora.com. In many metals this diamagnetic effect is outweighed by a paramagnetic contribution, the origin of which is to be found in the electron spin. Step 2: Draw the valence orbitals. A paramagnetic electron is an unpaired electron. Which atom is paramagnetic A Zn B Ba C Ar D Sr E Mn Paramagnetic atoms have at from CHEM 161 at Rutgers University 6 years ago. Cu +, Zn 2+ are diamagnetic. Apr 8, 2010 . The volume of the smaller rectangular prism is 343 in^3 and the volume of the larger rectangular prism is 1,000 in^3. Completing the CAPTCHA proves you are a human and gives you temporary access to the web property.
|
2021-04-15 17:13:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5994119048118591, "perplexity": 2703.6883907065894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038087714.38/warc/CC-MAIN-20210415160727-20210415190727-00501.warc.gz"}
|
https://golem.ph.utexas.edu/category/2020/07/index.shtml
|
## July 27, 2020
### Linear Logic Flavoured Composition of Petri Nets
#### Posted by John Baez
guest post by Elena Di Lavore and Xiaoyan Li
Petri nets are a mathematical model for systems in which processes, when activated, consume some resources and produce others. They can be used to model, among many others, business processes, chemical reactions, gene activation or parallel computations. There are different approaches to define a categorical model for Petri nets, for example, Petri nets are monoids, nets with boundaries and open Petri nets.
This first post of the Applied Category Theory Adjoint School 2020 presents the approach of Carolyn Brown and Doug Gurr in the paper A Categorical Linear Framework for Petri Nets, which is based on Valeria de Paiva’s dialectica categories. The interesting aspect of this approach is the fact that it combines linear logic and category theory to model different ways of composing Petri nets.
Posted at 12:20 AM UTC | Permalink | Followups (10)
## July 24, 2020
### Octonions and the Standard Model (Part 3)
#### Posted by John Baez
Now I’ll finally explain how a quark and a lepton fit together into an octonion — in the very simplified picture where we treat these particles merely as representations of $\mathrm{SU}(3)$, the symmetry group of the strong force. I’ll say just enough about physics for mathematicians to get a sense of what this means. (The most substantial part of this post will be a quick intro to ‘basic triples’, a powerful technique for working with octonions.)
Posted at 12:17 PM UTC | Permalink | Followups (7)
## July 22, 2020
### Octonions and the Standard Model (Part 2)
#### Posted by John Baez
My description of the octonions in Part 1 raised enough issues that I’d like to talk about it a bit more. I’ll show you a prettier formula for octonion multiplication in terms of $\mathbb{C} \oplus \mathbb{C}^3$… and also a very similar-looking formula for it in terms of $\mathbb{R} \oplus \mathbb{R}^7$.
Posted at 7:36 PM UTC | Permalink | Followups (9)
## July 17, 2020
### Octonions and the Standard Model (Part 1)
#### Posted by John Baez
I want to talk about some attempts to connect the Standard Model of particle physics to the octonions. I should start out by saying I don’t have any big agenda here. It’d be great if the octonions — or for that matter, anything — led to new insights in particle physics. But I don’t have such insights, and for me particle physics is just a hobby. I’m not trying to come up with a grand unified theory. I just want to explain some patterns linking the Standard Model to the octonions.
Understanding these patterns requires knowing a bit of physics and a bit of math. I’ll focus on the math side of things: mainly, I’ll be polishing up some existing ideas and trying to make them more pretty. I’ll assume you either know the physics or can fake it: either way, it won’t be the main focus.
In writing this first post, my attempt to explain an octonionic description of the strong force led me to a construction of the octonions that makes them look very much like the quaternions. I don’t know if it’s new, but I’d never seen it before. The basic idea is that octonions are to $\mathbb{C}^3$ as quaternions are to $\mathbb{R}^3$.
Posted at 1:34 AM UTC | Permalink | Followups (20)
## July 8, 2020
### Self-Referential Algebraic Structures
#### Posted by John Baez
Any group acts as automorphisms of itself, by conjugation. If we differentiate this idea, we get that any Lie algebra acts as derivations of itself. We can then enhance this in various ways: for example a Poisson algebra is both a Lie algebra and a commutative algebra, such that any element acts as derivations of both these structures.
Why do I care?
In my paper on Noether’s theorem I got excited by how physics uses structures where each element acts to generate a one-parameter group of automorphisms of that structure. I proved a super-general version of Noether’s theorem based on this idea. It’s Theorem 8, in case you’re curious.
But the purest expression of the idea of a “structure where each element acts as an automorphism of that structure” is the concept of “rack”.
Posted at 7:28 PM UTC | Permalink | Followups (24)
## July 2, 2020
### Congratulations, John!
#### Posted by Tom Leinster
Our own John Baez is famous for inspiring people all around the world through the magic of the internet, but what’s it like to actually be one of his grad students? Fantastic, apparently! The University of California at Riverside has just given him the Doctoral Dissertation Advisor/Mentoring Award, one of just two given by the university. It “celebrates UCR faculty who have demonstrated an outstanding and long history of mentorship of graduate students”.
Forgive a completely irrelevant digression, but partway through writing that paragraph, while regretting that more details of John’s prize weren’t available, something rather extraordinary forced me to stop writing…
Posted at 2:19 AM UTC | Permalink | Followups (8)
|
2020-08-08 07:10:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 5, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7280453443527222, "perplexity": 1081.210821871396}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737289.75/warc/CC-MAIN-20200808051116-20200808081116-00515.warc.gz"}
|
https://www.biostars.org/p/393058/
|
Should the expression data be centered before PCA?
1
0
Entering edit mode
2.5 years ago
Raheleh ▴ 210
I have expression profile of 27 samples (10 normal, 17 tumors). I did PCA analysis to check the quality of the samples to see them clustered in 2 different groups. I did it in two ways after RMA normalization and log2 transformation. 1. I did it without centring the data and the result showed that the quality is not good.pc <- prcomp(exp)
1. For the second time I centred the data (mean subtraction)
exp.scale <- t(scale(t(exp), scale = F))
pc <- prcomp(exp.scale)
My question is, if the PCA is calculated from the covariance matrix and mean centering does not affect the covariance matrix; why the output is different? After centering, the quality of the samples is much better! Should I consider that the quality of my samples are good or not?
Thanks!
PCA Centering expression data • 2.4k views
1
Entering edit mode
You did not transpose your matrix prior to running prcomp. In a gene expression matrix with rows = genes and columns = samples one would run PCA like prcomp(t(data)), see e.g. the source code of DESeq2::plotPCA. Re-run the PCA using the log2-normalized intensity values and see how it performs.
getMethod("plotPCA","DESeqTransform") Method Definition:
function (object, ...)
{
.local <- function (object, intgroup = "condition", ntop = 500,
returnData = FALSE)
{
rv <- rowVars(assay(object))
select <- order(rv, decreasing = TRUE)[seq_len(min(ntop,
length(rv)))]
## => HERE <= ##
pca <- prcomp(t(assay(object)[select, ]))
(... and so on ...)
For the future please see How to add images to a Biostars post. You need the pull path to the image.
0
Entering edit mode
Could you please fix the images ?
1
Entering edit mode
2.5 years ago
In your first example, you ARE centering your data, as the default of prcomp() is to center your data. Look up the function docs: https://stat.ethz.ch/R-manual/R-devel/library/stats/html/prcomp.html
In your second example, you are actually centering the data twice, but one is by row (due to your transpose), while the other column.
I am not sure how you can say that your data's quality is 'not good' from the first bi-plot(?) - how are you defining 'not good'?
0
Entering edit mode
@Kevin
I have PCA data , genes in rows and samples in column
ExpAccession ExpA 1 N ExpA 2 N ExpA 3 N ExpA 4 N ExpA 5 N ExpA 1 T ExpA 2 T ExpA 3 T ExpA 4 T ExpA 5 T
P10645 -1.80 -2.31 -2.12 -1.99 -1.92 -1.98 -0.57 -0.99 -0.48 2.62
P31327 -1.57 -1.90 -1.98 -1.79 -1.71 -0.02 -0.67 -0.86 -1.60 2.53
Q9BYZ8 -1.08 -1.80 -1.62 -2.07 -1.51 -1.72 -0.40 0.57 -1.52 2.48
O43745 -2.59 -2.02 -2.65 -1.39 -1.68 1.00 -1.44 -0.78 -1.81 2.46
Q99795 -1.68 -2.15 -2.40 -2.08 -2.64 0.45 -0.48 -0.32 -1.46 2.42
Q02817 -1.03 -1.47 -1.19 -1.35 -1.31 -1.38 -0.49 0.10 -1.21 2.38
How I can adapt my matrix to your class (p) to carry on with your tutorial on plotting PCA?
Thank you
1
Entering edit mode
Hey, it looks like the first column of your data comprises the accession names. So, you will have to set those as the rownames.
Essentially, this may work:
rownames(data) <- data$ExpAccession # assign rownames data <- data[,-1] # remove the first column # perform PCA require(PCAtools) p <- pca(x, metadata = NULL, removeVar = 0.1) ADD REPLY 0 Entering edit mode Thank you so much In some part I am getting this error > plotloadings(p, + rangeRetain = 0.01, + labSize = 3.0, + title = 'Loadings plot', + subtitle = 'PC1, PC2', + caption = 'Top 1% variables', + shape = 24, + col = c('limegreen', 'black', 'red3'), + drawConnectors = TRUE) -- variables retained: Q6UX53, P62684, Q06141, Q9TNN7, O00425, P01911, P12104, Q92968, P01889, Q9P2F6, P30443 Error in grid.Call(C_convert, x, as.integer(whatfrom), as.integer(whatto), : Viewport has zero dimension(s) > ADD REPLY 1 Entering edit mode How many samples are in your data? Were you able to produce the bi-plot and SCREE plot? ADD REPLY 0 Entering edit mode Thank you 10 samples I have and I have produced both bi-plot and SCREE plots ADD REPLY 1 Entering edit mode hmm... I am not sure. Can you add the parameter: components = getComponents(p, 1:3) ? Also, try to change the value of rangeRetain ADD REPLY 0 Entering edit mode Sorry Kevin Is it possible to have only 2 colors in PCA plot? For example grey for control and black for treatment? ADD REPLY 0 Entering edit mode The bi-plot? How have you created your pca object and which metadata do you have? ADD REPLY 0 Entering edit mode Thank you I don't have metadata If this is my data ExpAccession ExpA 1 N ExpA 2 N ExpA 3 N ExpA 4 N ExpA 5 N ExpA 1 T ExpA 2 T ExpA 3 T ExpA 4 T ExpA 5 T P10645 -1.80 -2.31 -2.12 -1.99 -1.92 -1.98 -0.57 -0.99 -0.48 2.62 P31327 -1.57 -1.90 -1.98 -1.79 -1.71 -0.02 -0.67 -0.86 -1.60 2.53 Q9BYZ8 -1.08 -1.80 -1.62 -2.07 -1.51 -1.72 -0.40 0.57 -1.52 2.48 O43745 -2.59 -2.02 -2.65 -1.39 -1.68 1.00 -1.44 -0.78 -1.81 2.46 Q99795 -1.68 -2.15 -2.40 -2.08 -2.64 0.45 -0.48 -0.32 -1.46 2.42 Q02817 -1.03 -1.47 -1.19 -1.35 -1.31 -1.38 -0.49 0.10 -1.21 2.38 rownames(data) <- data$ExpAccession # assign rownames
data <- data[,-1] # remove the first column
# perform PCA
require(PCAtools)
p <- pca(x, metadata = NULL, removeVar = 0.1)
biplot(p)
1
Entering edit mode
You will have to create metadata that has the same rownames as the colnames of x, and then assign it to metadata when you run pca():
p <- pca(x, metadata = myMetaData, removeVar = 0.1)
Then, to assign colours, you use colBy with biplot(). Examples HERE.
0
Entering edit mode
Sorry Kevin
By this code I have a biplot, I am saving that with width 5100 and length 2000 and as you are seeing axis labels and title are too small
How I can make them bigger please?
p2=biplot(pb,
colby = 'condition',colkey = c('N'='grey75', 'T'='black','T(scar)'='grey50'),
legendPosition = 'right', hline = 0, vline = c(-25, 0,25),
vlineType = c('dotdash', 'solid', 'dashed'),
gridlines.major = FALSE, gridlines.minor = FALSE,
pointSize = 10, legendLabSize = 20, legendIconSize =10,
drawConnectors = FALSE,
title = 'PCA plot for expriment B (patients 6-10, N=normal esophagus, T=tumour)', labSize = 15)
p1=biplot(pa,
colby = 'condition',colkey = c('N'='grey75', 'T'='black'),
legendPosition = 'right', hline = 0, vline = c(-25, 0,25),
vlineType = c('dotdash', 'solid', 'dashed'),
gridlines.major = FALSE, gridlines.minor = FALSE,
pointSize =10, legendLabSize = 20, legendIconSize = 10,
drawConnectors = FALSE,
title = 'PCA plot for expriment A (patients 1-5, N=normal esophagus, T=tumour)', labSize = 15)
plot_grid(p1,p2,label_size = 10)
1
Entering edit mode
Hey, the labels do not usually come out that small... If you run ?biplot in R, you will see all of the options for the biplot() function. The ones in which you will be interested are:
biplot(
...,
axisLabSize = 16,
title = '',
subtitle = '',
caption = '',
titleLabSize = 16,
subtitleLabSize = 12,
captionLabSize = 12,
..)
0
Entering edit mode
Sorry @Kevin
Really thank you very much for PCAtools package
You think I better use all of my data from RNA-seq for PCA biplot or using differentially expressed genes between tumour and normal tissue would make a better view of data?
0
Entering edit mode
It depends on what your aim is for using PCAtools? Usually, people use the entire dataset.
0
Entering edit mode
We treated some patients with chemotherapy, some patients responded and some not. So in PCA responding patients would group with normal samples and from their tumour only a scar remains. The goal is to show responding patients being grouped with adjacent normal samples
1
Entering edit mode
You can try 2 things:
1. PCAtools using entire dataset - this shows any 'natural' grouping based on the entire transcriptome
2. hierarchical clustering + heatmap using differentially expressed genes - this shows the ability of the differentially expressed genes to distinguish the groups
0
Entering edit mode
Hi Kevin. Thanks for the point!
I defined it as "not good" because in the first figure the samples from two different groups are not separated from each other as good as in the second figure. Isn't like this? If I am wrong please correct me. Thanks!
|
2022-01-26 12:42:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5196897387504578, "perplexity": 4997.589985028922}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304947.93/warc/CC-MAIN-20220126101419-20220126131419-00657.warc.gz"}
|
https://collegephysicsanswers.com/openstax-solutions/you-are-told-not-shoot-until-you-see-whites-their-eyes-if-eyes-are-separated-0
|
Change the chapter
Question
You are told not to shoot until you see the whites of their eyes. If the eyes are separated by 6.5 cm and the diameter of your pupil is 5.0 mm, at what distance can you resolve the two eyes using light of wavelength 555 nm?
$480\textrm{ m}$
Solution Video
|
2022-06-27 16:59:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5313929915428162, "perplexity": 789.7799261299951}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103337962.22/warc/CC-MAIN-20220627164834-20220627194834-00290.warc.gz"}
|
https://www.wisdomandwonder.com/tag/statistics
|
## I Wasted Time with a Custom Prompt for R with ESS
I wanted a custom prompt for R with ESS. I wanted a double struck R. I probably did it wrong. It never worked. Actually it worked most of the time, and that is worse than never working. Kind people helped me. I still got it wrong. I take full responsibility. It was better not to do it. If you want to try, here is where I left it.
.Rprofile
Make the ℝ prompt stand out (be sure to tell ESS how to handle this):
options(prompt="ℝ> ")
.emacs.el
Tell ESS how to handle my custom prompt:
(setq inferior-ess-primary-prompt "ℝ> ")
Handle the custom ℝ prompt in ess. Don’t use custom here.
(setq inferior-S-prompt "[]a-zA-Z0-9.[]*\$$?:[>+.] \$$*ℝ+> ")
## How to Format Magrittr Chains with ESS
Here is an example of how to format magrittr chains with ESS. Those interested will also be happy to learn of ess-R-fl-keyword:%op% and ess-%op%-face.
For example, to get the an indent after only the first statement.
(add-to-list 'ess-style-alist
'(my-style
(ess-indent-level . 4)
(ess-first-continued-statement-offset . 2)
(ess-continued-statement-offset . 0)
(ess-brace-offset . -4)
(ess-expression-offset . 4)
(ess-else-offset . 0)
(ess-close-brace-offset . 0)
(ess-brace-imaginary-offset . 0)
(ess-continued-brace-offset . 0)
(ess-arg-function-offset . 4)
(ess-arg-function-offset-new-line . '(4))
))
(setq ess-default-style 'my-style)
Thank you Mr. Vitalie Spinu.
How I did it:
(setq gcr/ess-style
(copy-alist
(assoc 'RRR ess-style-alist)))
(setf (nth 0 gcr/ess-style) 'GCR)
(setf (cdr
(assoc 'ess-continued-statement-offset
(cdr gcr/ess-style)))
0)
(setq ess-default-style 'GCR)
The latest version of ESS includes a RRR style.
It formats Magrittr chains as expected by default with ess-first-continued-statement-offset.
## Some Lessons
To call in the statistician after the experiment is done may be no more than asking him to perform a post-mortem examination: he may be able to say what the experiment died of.
~ Sir Ronald Aylmer Fisher
The plural of anecdote is not data.
~ Roger Brinner
The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data.
~ John Tukey
Via r-help.
## Teaching Statistics: A Bag of Tricks
This volume takes a positive spin on the field of statistics. Statistics is seen by students as difficult and boring, however, the authors of this book have eliminated that theory. Teaching Statistics: A Bag Of Tricks, brings together a complete set of examples, demonstrations and projects that not only will increase class participation but will help to eliminate any negative feelings toward the area of statistics.
## How Students Learn Statistics
Research in the areas of psychology, statistical education, and mathematics education is reviewed
and the results applied to the teaching of college-level statistics courses. The argument is made that
statistics educators need to determine what it is they really want students to learn, to modify their
teaching according to suggestions from the research literature, and to use assessment to determine if
their teaching is effective and if students are developing statistical understanding and competence.
## Personal approach for collecting Emacs usage statistics advice?
Lately I’ve been curious whether or not my actual Emacs keymapping usage actually reflects how I think I use it. What I mean is that I have a goal of mapping frequently used operations to easily-accessible keybindings on the keyboard. What I plan to do is to record my usage so that I can study it to find mapping decisions that I got right, and wrong, and also identify things that I use that I should be mapping closer to home.
The simplest approach would be to use a keylogger, or advice inside of Emacs.
What I am curious about is your approach if you had done, or would do, something like this, and your thoughts an ideas.
In my case I lay out my mappings for how far away from home they are, and that has worked well so far, but I would like some numbers to back up that claim though it is not too serious depending upon how you look at it.
Cross posted from help-gnu-emacs
|
2018-06-20 11:41:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4405946731567383, "perplexity": 2624.653505101843}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863518.39/warc/CC-MAIN-20180620104904-20180620124904-00618.warc.gz"}
|
https://repository.uantwerpen.be/link/irua/134971
|
Title Search for the associated production of a Higgs boson with a single top quark in proton-proton collisions at $\sqrt{s}$=8 TeV Search for the associated production of a Higgs boson with a single top quark in proton-proton collisions at $\sqrt{s}$=8 TeV Author Khachatryan, V. Sirunyan, A. M. Tumasyan, A. Alderweireldt, S. Cornelis, T. de Wolf, E.A. Janssen, X. Knutsson, A. Lauwers, J. Luyckx, S. van de Klundert, M. van Haevermaet, H. Van Mechelen, P. Van Remortel, N. Van Spilbeeck, A. et al. Faculty/Department Faculty of Sciences. Physics Publication type article Publication 2016 Bristol , 2016 Subject Physics Source (journal) Journal of high energy physics. - Bristol Volume/pages (2016) :6 , 48 p. ISSN 1126-6708 1029-8479 1029-8479 Article Reference 177 Carrier E Target language English (eng) Full text (Publishers DOI) Affiliation University of Antwerp Abstract This paper presents the search for the production of a Higgs boson in association with a single top quark (tHq), using data collected in proton-proton collisions at a center-of-mass energy of 8TeV corresponding to an integrated luminosity of 19.7 fb(-1). The search exploits a variety of Higgs boson decay modes resulting in final states with photons, bottom quarks, and multiple charged leptons, including tau leptons, and employs a variety of multivariate techniques to maximize sensitivity to the signal. The analysis is optimized for the opposite sign of the Yukawa coupling to that in the standard model, corresponding to a large enhancement of the signal cross section. In the absence of an excess of candidate signal events over the background predictions, 95% confidence level observed (expected) upper limits on anomalous tHq production are set, ranging between 600 (450) fb and 1000 (700) fb depending on the assumed diphoton branching fraction of the Higgs boson. This is the first time that results on anomalous tHq production have been reported. E-info http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000379505300001&DestLinkType=RelatedRecords&DestApp=ALL_WOS&UsrCustomerID=ef845e08c439e550330acc77c7d2d848 http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000379505300001&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=ef845e08c439e550330acc77c7d2d848 Full text (open access) https://repository.uantwerpen.be/docman/irua/f09b18/134971.pdf E-info http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000379505300001&DestLinkType=CitingArticles&DestApp=ALL_WOS&UsrCustomerID=ef845e08c439e550330acc77c7d2d848 Handle
|
2016-12-08 00:24:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8162088990211487, "perplexity": 3073.26550951752}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542288.7/warc/CC-MAIN-20161202170902-00191-ip-10-31-129-80.ec2.internal.warc.gz"}
|
https://stats.libretexts.org/Courses/Fresno_City_College/Book%3A_Business_Statistics_Customized_(OpenStax)/Using_Excel_Spreadsheets_in_Statistics/3_Discrete_Probability/3.6_Geometric_Probability_using_the_Excel_Sheet_provided
|
# 3.6 Geometric Probability using the Excel Sheet provided
Suppose the probability that a red car enters an intersection is 0.24. What is the probability that the first red car enters the intersection after four non-red vehicles pass through the intersection? The discrete probability distribution is Geometric.
P(Red Car) = .24
P(Not Red Car) = 1-.24 = .76
To find the probability P(X = 5) follow the steps below.
• Step 1: Enter 0.24 in cell B1 and hit the Enter key.
• Step 2: Find 5 in column A at cell A9.
• Step 3: Move to column B, cell B9. The answer is 0.0801
To find the probability P(X < 8), follow the steps below.
• Step 1: Find 8 in column A at cell A12.
• Step 2: Move to column B, cell B12. The answer is 0.8887.
To find the probability P(X > 10), follow the steps below.
• Step 1: Find 9 in column A at cell A13.
• Step 2: Move to column C, cell C13. The answer is 0.9154.
• Step 3: Subtract 0.9154 from 1, (1 - 0.9154 = 0.0846).
To find the probability P(X < 7) = P(X < 6), follow the steps below.
• Step 1: Find 6 in column A at cell A10.
• Step 2: Move to column C, cell C10. The answer is 0.9357.
To find the probability P(X > 4) = P(X > 5), follow the steps below.
• Step 1: P(X > 5) = 1 - P(X < 4).
• Step 2: Find 4 in column A at cell A8.
• Step 3: Move over to cell C8, 0.6664.
• Step 4: Subtract 0.6664 from 1, 1 - 0.6664 = 0.3336.
The Mean is in cell F1, 4.16667.
The Variance is in cell F2, 13.1944.
The Standard Deviation is in cell F3, 3.63.
##### Interactive Element
3.6 Geometric Probability using the Excel Sheet provided is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts.
|
2023-03-31 09:49:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7825530767440796, "perplexity": 1966.1050832480548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949598.87/warc/CC-MAIN-20230331082653-20230331112653-00028.warc.gz"}
|
https://math.stackexchange.com/questions/3345127/what-does-it-mean-to-count-a-group-of-numbers-with-their-multiplicity
|
# What does it mean to count a group of numbers with their multiplicity?
In this question someone previously asked
They presented the problem:
Given that the number 8881 is not a prime number, prove by contradiction that it has a prime factor that is at most 89.
If all prime factors where superior to 89, they would be at least 97. Counting them with their multiplicity, if there was only one such factor it would be 8881, which contradicts the given fact that 8881 is not prime. If there are at least two (possibly equal) factors a and b, then ab≤8881 but ab≥97∗97>8881, contradiction.
I understand it until
Counting them with their multiplicity, if there was only one such factor it would be 8881
What does it mean to count numbers with their multiplicity and in this case why would the only factor be 8881.
You're on the right lines. If 8881 is not prime, it must have at least one prime factor not equal to itself. If it has no prime factors less than or equal to 89, then it must have only prime factors greater than or equal to 97, which is the next prime up from 89. You've already found the smallest natural number which has prime factors greater than or equal to 97 (in reference to the proposed solution to the question where they state that smallest number composed of only 97 is 97^2
However wouldn't the smallest natural number which has prime factors greater than or equal to 97 be 97?
Thank you and sorry if this seems like a stupid question.
When we say that we are counting with multiplicity, we mean that we are counting objects which might "repeat" themselves, and we want to count all of those repetitions as distinct objects. For example, the number $$8$$ has only one prime factor: $$2$$. However, if we count the number of prime factors of $$8$$ with multiplicity, there are $$3$$ such factors: $$2$$, $$2$$, and $$2$$ (since $$8 = 2^3$$).
I imagine that most students are more familiar with this term in the context of roots of polynomials (since this topic is usually taught to students relatively early in their mathematical careers). For example, the polynomial $$(x-1)^2(x-2)$$ has two distinct roots, but three roots if we count with multiplicity. This is because the root $$x=1$$ has multiplicity $$2$$.
This notion is discussed a little further on Wikipedia:
In mathematics, the multiplicity of a member of a multiset is the number of times it appears in the multiset...
The notion of multiplicity is important to be able to count correctly without specifying exceptions (for example, double roots counted twice). Hence the expression, "counted with multiplicity".
If multiplicity is ignored, this may be emphasized by counting the number of distinct elements, as in "the number of distinct roots"...
• Thank you, I think I understand it now, so the proof is essentially saying if we start by counting the prime factors with their multiplicity the only one that can appear in the multiset of 8881's prime factors by itself is 8881 which contradicts the fact that it is prime, and if we carry on counting to two prime factors the smallest we can start with would be 97 and 97 however their product is > 8881, which is also a contradiction. And we can't count further with multiplicity as the prime factors would be < 97 to get anywere close to 8881 which is another contradiction. I think I get it now! – Moajiz Hussain Sep 5 at 14:59
• Yes, that seems to be a reasonable summary of the argument. – Xander Henderson Sep 5 at 15:20
• The term "frequency", in the context of discrete statistics, is also a synonym: if you have observations 2,2,2,5,5 you have 2 with frequency 3 and 5 with frequency 2. – Acccumulation Sep 5 at 20:28
Conversationally, the multiplicity of a factor is "the number of times" it divides into the number. For instance, $$48=2^4\cdot3$$ has both $$2$$ and $$3$$ as prime factors, but $$2$$ has a multiplicity of 4.
The point of the proof is that if $$8881$$ is not prime, it has at least two prime factors. We need to balance the assumption that those factors are greater than $$89$$ with the fact that $$\sqrt{8881}\approx92.4$$. So whatever those two prime factors are, they can't both be greater than $$89$$.
• Good call, @XanderHenderson. I don't know what the community feeling is, but I have never been the slightest bit offended when someone with the rep edited one of my answers to make it clearer or more correct, and it would save you a lot of effort as well! – Matthew Daly Sep 5 at 13:08
How many prime factors does $$8$$ have? The only prime factor of $$8$$ is $$2$$, so you might say $$8$$ has one prime factor. But the prime factorization is $$8=2\cdot2\cdot2$$, and if you count the number of factors that appear, there are three of them. The latter method of counting -- counting without tossing out repeated values -- is called counting with multiplicity.
The claim about the smallest natural number with all prime factors $$\geq97$$ is indeed a small inaccuracy and should specify that it is finding the smallest non-prime natural number with all prime factors $$\geq97$$. Since $$8881$$ is given to be non-prime, this suffices.
|
2019-10-19 15:00:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 27, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7144083380699158, "perplexity": 143.27385159071878}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986696339.42/warc/CC-MAIN-20191019141654-20191019165154-00102.warc.gz"}
|
http://mathhelpforum.com/algebra/188401-how-can-these-roots-real.html
|
# Math Help - How can these roots be real?
1. ## How can these roots be real?
I ran this equation
$112u^8+56u^4 v^4+3v^8=48u^8+24u^4 v^4+3v^8$
solving for $v$
through http://www.quickmath.com/webMathematica3/quickmath/equations/solve/basic.jsp#v1=3v^8%2B24u^4+v^4%2B48u^8%3D3v^8%2B56u ^4+v^4%2B112u^8&v2=u
and am confused about it's results. It says that 2 of the values for $u$ are real. Here's one of them (sorry, I couldn't get the indices small with tex so am just using ordinary text
v=(-1)^(1/4) * 2^(1/4) * u
I don't understand how (-1)^(1/4) is real. I thought it was the same as $\sqrt{i}$
Forgot to add, I've also done this by hand and have $2u^4=-v^4$ which I think comes to the same thing.
2. ## Re: How can these roots be real?
Originally Posted by moriman
I ran this equation
$112u^8+56u^4 v^4+3v^8=48u^8+24u^4 v^4+3v^8$
solving for $v$
through http://www.quickmath.com/webMathematica3/quickmath/equations/solve/basic.jsp#v1=3v^8%2B24u^4+v^4%2B48u^8%3D3v^8%2B56u ^4+v^4%2B112u^8&v2=u
and am confused about it's results. It says that 2 of the values for $u$ are real. Here's one of them (sorry, I couldn't get the indices small with tex so am just using ordinary text
v=(-1)^(1/4) * 2^(1/4) * u
I don't understand how (-1)^(1/4) is real. I thought it was the same as $\sqrt{i}$
Forgot to add, I've also done this by hand and have $2u^4=-v^4$ which I think comes to the same thing.
The only real solution for u is u=0, in which case v can be anything. The other solutions are all complex, as you correctly say. Moral: Don't always trust free software.
To get indices to display correctly in TeX, use braces not parentheses (curly brackets rather than round ones, in other words). So for example [TEX]v=(-1)^{1/4} * 2^{1/4} * u[/TEX] yields $v=(-1)^{1/4} * 2^{1/4} * u.$
3. ## Re: How can these roots be real?
Thanks for the confirmation and for the help on the indices
|
2015-11-28 21:15:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.91600501537323, "perplexity": 1230.5944166064135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398453805.6/warc/CC-MAIN-20151124205413-00011-ip-10-71-132-137.ec2.internal.warc.gz"}
|
https://stacks.math.columbia.edu/tag/073Y
|
11.2 Noncommutative algebras
Let $k$ be a field. In this chapter an algebra $A$ over $k$ is a possibly noncommutative ring $A$ together with a ring map $k \to A$ such that $k$ maps into the center of $A$ and such that $1$ maps to an identity element of $A$. An $A$-module is a right $A$-module such that the identity of $A$ acts as the identity.
Definition 11.2.1. Let $A$ be a $k$-algebra. We say $A$ is finite if $\dim _ k(A) < \infty$. In this case we write $[A : k] = \dim _ k(A)$.
Definition 11.2.2. A skew field is a possibly noncommutative ring with an identity element $1$, with $1 \not= 0$, in which every nonzero element has a multiplicative inverse.
A skew field is a $k$-algebra for some $k$ (e.g., for the prime field contained in it). We will use below that any module over a skew field is free because a maximal linearly independent set of vectors forms a basis and exists by Zorn's lemma.
Definition 11.2.3. Let $A$ be a $k$-algebra. We say an $A$-module $M$ is simple if it is nonzero and the only $A$-submodules are $0$ and $M$. We say $A$ is simple if the only two-sided ideals of $A$ are $0$ and $A$.
Definition 11.2.4. A $k$-algebra $A$ is central if the center of $A$ is the image of $k \to A$.
Definition 11.2.5. Given a $k$-algebra $A$ we denote $A^{op}$ the $k$-algebra we get by reversing the order of multiplication in $A$. This is called the opposite algebra.
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
|
2019-01-18 20:43:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.893304705619812, "perplexity": 361.01109430520887}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660529.12/warc/CC-MAIN-20190118193139-20190118215139-00148.warc.gz"}
|
http://learnyousomeml.com/nb/da-stats-basics.html
|
# 05.00 Stats¶
To not make mistakes in analysis a deal of statistical knowledge is required. We will review some statistics and learn a little about distributions in scipy. scipy is the mathematical library for Python on top of NumPy. It was geared to be the one and only mathematic library for the sciences in Python but it turned out that it would become too big. Libraries flensed from scipy often include the sci at the beginning of its name, e.g. scikit-learn or scikit-image.
scipy is comprised of:
• Numerical Integration
• Function Optimization - used in machine learning routines
• Interpolation - e.g. splines
• Fast Fourier Transforms
• General Signal Processing
• Linear Algebra - including Matrix decomposition
• Image Processing - as NumPy arrays, used by scikit-image
• Sparse Matrices - and graphs
• Statistics - which is what interests us right now
• And a handful of extra things
Several of these routines are used in scikit-learn and scikit-image to produce decomposition and machine learning algorithms. We will look at machine learning from a higher perspective but, for the statistics we need, we can use the scipy.stats module.
As a quick review let's see a handful of statistical measures (or simply statistics for short) we can perform on data.
In [1]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats
#### Mean¶
$$\bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i$$
#### Variance¶
$$\sigma^2 = \frac{1}{N - d} \sum_{i=1}^{N} (x_i - \bar{x})^2$$
#### Standard Deviation¶
$$\sigma = \sqrt{\frac{1}{N - d} \sum_{i=1}^{N} (x_i - \bar{x})^2}$$
#### Covariance¶
$$cov(X, Y) = \sigma_{xy} = \frac{1}{N - d} \sum_{i=1}^{N} (x_i - \bar{x})(y_i - \bar{y})$$
#### Correlation¶
$$corr(X, Y) = r = \frac{cov(X, Y)}{\sigma_x \sigma_y} = \frac{\sigma_{xy}}{\sigma_x \sigma_y}$$
Note: $1/(N-d)$ often becomes just $1/N$ or $1/(N-1)$ in bias-corrected calculations. in other words the most common values for $d$ are $0$ for population statistics and $1$ (or very rarely more) for sample statistics. Bias correction is needed when operating over a sample instead of operating over the entire population. All below NumPy functions (except correlation functions which are not multiplied by $1/N$) accept a ddof= (degrees of freedom) argument to perform a sample based calculation.
Let's have a look why this bias correction makes sense. We will take a dataset of several points in one horizontal axis. The vertical position of the points is merely illustrative of the fact that in a real world we can have only a fraction of all possible measurable conditions.
We will assume a normal distribution. The mean is at the center of the data points and a distance of $2$ standard deviations from the mean contains around $95\%$ of all points - within the dashed lines.
da-std-full.svg
The blue points are our full population, there are no more points. Yet if we take a sample from this population we are more likely to sample points from regions there are more blue points in the first place. The green points below are a sample from within the population of all blue points.
The mean changes very little - and changes in the value of the mean depend on where on the axis the dataset is positioned, in order to avoid changes to the mean one can center the data at zero as we were doing above with $(x_i - \bar{x})$. Since changes in mean value can be avoided in most cases there is no need for a degrees of freedom adjustment to the mean.
The change in the value of standard deviation is more interesting. The spread of sampled data will always be smaller or equal to the spread of the population. This is exactly due to the reason that we are most likely to sample blue points from the middle rather than from the extremities. The premise is that we are assuming a distribution that is reasonably normal, or to be more exact a univariate distribution - a distribution with a single peak.
da-std-sample.svg
Now let's create some arrays to play with. One array is a simple range and the others are noise perturbed.
In [2]:
arr = np.arange(30)
acv = np.arange(30) + np.random.rand(30) - 0.5
acr = np.arange(30) + np.random.rand(30)*5 - 2.5
arr, acv, acr
Out[2]:
(array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29]),
array([ 0.42656744, 1.34760301, 1.76247459, 2.54394538, 4.25001083,
5.37448213, 5.5951476 , 6.71161456, 8.04594227, 8.836638 ,
10.37401308, 10.89728014, 12.33849422, 12.76767618, 14.02655641,
14.70839667, 15.52074703, 17.23753479, 18.44489014, 18.62833507,
20.04839567, 21.47013894, 21.65815276, 22.97512403, 23.82185634,
25.26785455, 26.0377071 , 27.36483584, 28.48860728, 28.93587003]),
array([ 1.18661367, 1.16547567, 0.14580165, 4.99155547, 4.11994684,
6.55103777, 6.12749731, 9.080279 , 6.96948181, 10.05920244,
11.70575631, 12.43509519, 9.62236534, 10.61161181, 14.64103517,
15.07336027, 13.54752731, 15.90730184, 20.0211125 , 20.6451752 ,
22.46041344, 19.71946508, 21.1359607 , 21.23087045, 23.24723358,
25.72597371, 26.96254792, 28.7182825 , 27.11062461, 27.01276892]))
NumPy has the mean method but implementing it by hand is easy.
In [3]:
print(arr.mean())
print(arr.sum() / len(arr))
14.5
14.5
Standard deviation with zero degrees of freedom is the deviation of our data. With one degree of freedom it is an estimate of a population from which whe data at hand may be a reasonable sample.
In [4]:
print(arr.std())
print(np.std(arr, ddof=1))
8.65544144839919
8.803408430829505
Note how the second value is bigger. In the first case we consider all the data as our population. In the second case all our data in arr is considered to be just a sample from some population with much more data. That population with more data which we do not know about must have a bigger spread of values.
Same story with the variance (since it is just the squared standard deviation).
In [5]:
print(arr.var())
print(arr.var(ddof=1))
74.91666666666667
77.5
The covariance method (cov) produces the variance of each array on the diagonal and the actual covariance on the interception between the arrays. i.e.
np.cov arr acv
arr cov(arr, arr) cov(arr, acv)
acv cov(acv, arr) cov(acv, acv)
And since $cov(a, a) = var(a)$ the diagonal of this matrix is just a sequence of variances.
In [6]:
print(np.cov([arr, acv], ddof=0))
print(np.cov([arr, acv], ddof=1))
[[74.91666667 75.16545427]
[75.16545427 75.50980833]]
[[77.5 77.75736649]
[77.75736649 78.11359482]]
Be careful to specify the degrees of freedom, the default ddof in variance in NumPy's standard deviation and variance is ddof=0 but NumPy's covariance attempts to estimate ddof by default. If you do not specify a value for the ddof argument, NumPy's covariance will end with ddof=1 in the majority of cases.
The correlation measure we saw above in the equation is actually just one way of measuring correlation. It is called the Person's correlation coefficient or just Person's $r$. NumPy's corrcoef produces a Matrix similar to the covariance method.
One important thing to note is that Pearson's correlation coefficient divides out the degrees of freedom in its equation. Therefore, if we are working with a sample, we cannot bias-correct it. scipy.stats provides us with a pearsonr method, which reminds us of the fact that we cannot bias correct our correlation. It returns an extra $p$ value which is an indication of how likely an uncorrelated dataset is to produce such value of correlation. The $p$ value is just a vague estimate though, just a throw into a beta distribution (do not worry if you do not know what that is). In general it can be said that very low $(-1)$ or very high $(1)$ value of correlation will give a very small $p$, and correlation values around $0$ will give high $p$. The $p$ value here is just a rough reminder for the experimenter.
In [7]:
print(np.corrcoef([arr, acv, acr]))
print(stats.pearsonr(arr, acv))
print(stats.pearsonr(acv, acr))
print(stats.pearsonr(arr, acr))
[[1. 0.99937247 0.98441101]
[0.99937247 1. 0.98467649]
[0.98441101 0.98467649 1. ]]
(0.9993724656859068, 3.582409846684914e-42)
(0.9846764927276871, 8.778258288510629e-23)
(0.9844110053919739, 1.1146616476882797e-22)
da-bell.svg
All the time we need to remember that all these statistics assume the normal distribution of data. A bell shaped distribution, or at least a distribution that just has a single peak - a univariate distribution of data values. A distribution that is multivariate - has several peaks in tis graph - will render the majority of the statistics we see here useless.
Next we will see a handful of ideas about distributions, and then a couple of pitfalls in the use of plain statistic values.
|
2021-09-28 06:31:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5401303768157959, "perplexity": 622.4661331125113}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780060538.11/warc/CC-MAIN-20210928062408-20210928092408-00100.warc.gz"}
|
https://tugster.wordpress.com/2009/02/27/acl/
|
Atlantic Concert . . . bound for sea.
Same place different day, Atlantic Companion . . . bound for sea.
Is that a hole in the hull?
Companion has the same although it’s less corroded.
This ACL series of ships has an unusually boxy and large superstructure.
and identical safety orange slash across the visor.
For more ACL in tight quarters, check out this link.
Anyone have an idea of turn-around time for these ROROs in port?
And although I know the term “black ships” conjures up the wrong impressions, I can’t look at these ACL vessels and not think “black ships.”
Photos, WVD.
|
2023-04-01 07:05:51
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8704270124435425, "perplexity": 9791.03638958963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949701.56/warc/CC-MAIN-20230401063607-20230401093607-00381.warc.gz"}
|
http://planet.sympy.org/
|
## May 24, 2016
#### GSoC: First Week of Coding Phase
Since my last blog post I have opened two new PR’s for the Project. We’ll see what they do and discuss the goals of this week.
A major issue in the first PR was the slow performing algorithms, a consequence of using recursive expressions internally. Most of the algorithm implemented used matrices module to solve the linear system which doesn’t support DMP and DMF objects.
So I defined a new class subclassed from MutableDenseMatrix and changed some methods to make it work with Polynomials and Fractions and used this to use Polynomials internally. Thanks to Kalevi for this idea. It works much more robust now. I have also added methods to find composition of Holonomic Functions and converting a Hypergeometric Function to Holonomic. These things are added in this PR. I hope the PR gets merged in a couple of days.
A new PR was opened for features relating to recurrence relations in coefficients of Power Series expansion of Holonomic Functions. The first thing I did was defined a class RecurrenceOperator parallel to DifferentialOperator to store the recurrence relation.
Goals of the Week:
In this week, I have planned to define a function to find the Recurrence Relation of series coefficients and then go for numerical computation of Holonomic Functions. Let me know If anything else should be implemented first as I haven’t discussed this with mentors yet.
The chronology might be different from what I wrote in the Proposal but we are quite ahead of that.
Cheers Everyone.
## May 23, 2016
#### Community Bonding Period
I have been selected for GSoC’16 to work with Sympy on Implementing Finite Fields and Set module in SymEngine.
SymEngine is a standalone fast C++ symbolic manipulation library.
We all know that Polynomial factorization is one of the fundamental tools of the computer algebra systems. And in symbolic mathematics, it is one of the basic requirement over which other algorithms can be implemented.
Currently, SymEngine has the implementation of Univariate Polynomial class, which provides us the basic functionality to add, multiply and subtract two polynomials.
Now, comes the problem of factoring the polynomials.
We have explicit solution formulas only till polynomials of degree four(the Quadtratic formula for degree 2, the Cardano formulas for third-degree equations, and the Ferrari formula for degree 4).
For sure, we need a different way out for higher degree polynomials. We see that there are algorithms for factorization in finite fields:
So, this summers I will be working on converting a polynomial in integer field to finite field, then factorizing them. After which we have to do Hensel Lifting to bring back the factored polynomial to integer field.
Furthermore, I will be working on implementing Sets module. These two together will help us to create a basic infrastructure over which we can develop a solvers module in SymEngine.
My proposal can be found here.
# Community Bonding Period
I have been alloted Isuru Fernando, Thilina Rathnayake, Sumith and Ondřej Čertík as mentors.
The SymEngine community is very fast in reachability. We had a discussion on gitter channel of SymEngine, about the proceedings of our Proposals. As SymEngine has an implementaion of sparse polynomials, I will be working on changing them to Finite Fields. Like:
GaloisField::GaloisField(std::map<unsigned, int> &dict, unsigned modulo) : modulo_(modulo) { unsigned temp; for (auto iter : dict) { temp = iter.second % modulo; if (temp != 0) dict_[iter.first] = temp; } }
where dict is the dictionary of Univariate Polynomial representation and, dict_ stores its finite field representation modulo modulo_. I will be implementing this in the first week of GSoC period.
During the Community Bonding Period, I worked on implementing UniversalSet and FiniteSet.
UniversalSet is a singleton class like EmptySet, and while implementing this I learned a lot about Singleton classes.
FiniteSet is a class with a set of RCP<const Basic> as member variable. It can contain any object of Basic type. While implemeting this, we came on a fix over what to do when we have a interval like [1, 1], i.e. both end points equal. This led to a little change in Interval’s code, and now it returns a FiniteSet. Though this PR is not merged till now. I hope to get it merged in the next few days and along with it keep working on Finite Field implementation.
## May 22, 2016
#### Community Bonding period ends, Coding period starts
The Community bonding period comes to an end now. First of all considering the issues described in the last post:
• Aaron created a new channel for our GSoC project discussion, sympy/GroupTheory.
• As for time of meeting, me and Kalevi often have discussion on the gitter channel, but since of quite a bit differene in timings between me and Aaron (I tend to sleep early at 11 PM IST). We three haven't been able to together have a meeting. Though Aaron suggested "Kalevi is the primary mentor, so if you have to meet without me that is fine". I also think that's not too big of an issue now, but his opinion has always helped, since he has best knowledge of sympy core.
In the past few weeks I wasn't too much productive, since I had a touch of fever, anyways I am okay now. We have now completed the implementation of the FreeGroup class in PR #10350. I started working on the PR back in January but I halted, since of my semester classes. FreeGroup is quite similar to the PolyRing implemented in sympy.polys.rings.py. We first started with the list of tuples as the data structure for the FreeGroupElm, where each tuple being (index, exp), but since of the mutable nature of lists, Kalevi suggested to go with tuple of tuples. Also as tuples are probably more efficient as there is no 'housekeeping' overhead. Also changed the element from (index, exp) --> (symbol, exp).
Implementing FreeGroupElm deals elegantly in such a way that it can't be independently created in a public interface. The reason being: every FreeGroupElm is in itself created only by the dtype method of FreeGroup class. The assignment is as follows:
obj.dtype = type("FreeGroupElm", (FreeGroupElm,), {"group": obj})
. Its sort of an advanced usage of type function as a metaclass.
Currently the printing code of latex and pprint for FreeGroupElm is a little hacky. I need to work on that as well.
Plan for Next few weeks
Though according to my proposal timeline, we described to go with implementation of other algebraic structures i.e Magma, SemiGroup, Monoid. But we will next move onto "Coset Enumeration". It is going to be a big task. That is harder and more important than other algebraic structures. Timline states it to be 5 week task, thats almost half the GSoC coding period. Well how do we go about that? I think of studying the mathematics in parallel with the implementation.
We have created a PR for implementation of Finitely Presented Group #11140. Not much code has been added here. Paper on Coset Enumeration using Implementation and Analysis of Todd Coxeter Algorithm (by John J. Cannon, George Havas), and other paper being the original paper by Todd and Coxeter, "A practical method for enumerating cosets of a finite abstract group" are the ones I am reading. As for the implementation of Todd Coxeter, we will be following the methods described in the book "Handbook of Computational Group Theory" by Derek F. Holt.
Also now the "official" coding period begins, good luck to everyone.
#### GSoC Community Bonding Period Week 4
The last week of Community Bonding period was awesome. From tomorrow onwards, the coding period will begin. I am supposed to start working on my project from tomorrow, but I have done that already from the second week of the Community Bonding Period because I was supposed to take a vacation of 4 days (25 May – 29 May). Due to some issues, I have had to cancel that vacation. Now I have got some more days to work on my project. Let’s see what I have done so far…
#### So far
• PR 10863 had finally got merged.
• rewrite(Piecewise) :- In PR 11103, I was trying to solve the arg = 0 part using solve functionality in sympy. But Jason suggested not to use solve as because there may arise some cases when solve will not be able to provide the desired output. So I kept the arg = 0 part as it is. The story doesn’t end here. There is a confusion regarding keeping the check for whether arg is real. Personally, I think that check should be there since both Heaviside and DiracDelta is defined only on real axis.
• In PR 11137, I have improved the doc strings of all the methods under DiracDelta and Heaviside classes. I have added the contextual example for DiracDelta(x, k) and described the relation between fdiff() and diff() . This pull request needs a final review.
• In PR 11139, I have added the functionality to pretty print the DiracDelta(x) as δ(x). This pull request also needs a final review.
• Finally, almost every proposed improvement under the issue 11075 is being fulfilled.
#### Next Week
My plans for next weeks are:-
• To polish PR 11103PR 11137 and PR 11139 and get them merged.
• To start working on the implementation of Singularity Functions.
## May 21, 2016
### The Kronecker Substitution
I started off my work by reading through the existing mul_poly function. It uses the Kronecker Substitution technique to multiply two polynomials. An insight can be gained by looking at the slides here. Think of it this way,
“If you evaluate a polynomial at a large enough power of 10, I bet you can tell all it’s coefficients just by looking at the result!”
The mentioned slides call this the KS1 algorithm. Another algorithm it proposes is the KS2 algorithm, which evaluates the polynomial at two points (in contrast to just one) to interpolate the polynomial. A more mathematical explanation on the two techniques can be found here. I implemented the algorithm, and it wasn’t too difficult, as it was a a slight modification to the already existing multiplication technique. Later, I added a benchmark test for comparing the two techniques, KS1 & KS2. The benchmark (roughly) calculates the ratio of the time required for multiplying two polynomials using the two algorithms. Both the polynomial length (from 1 to 10,000) and the bit length of the coefficients (5, 10, 15, 20 bits) were varied. The graphs of the benchmarking are as follows.
Linear & Log scale : During this time, I was asked by Isuru to switch work towards the polynomial interface with FLINT & Piranha (and shift the polynomial manipulations to the end of summer). So, the PR hasn’t been merged in yet, and no conclusions and observations have been made between the two algorithms as of yet. Will be done later during the summer. Here’s the PR #930
### Dictionary wrappers
I also started work on Dictionary wrappers for SymEngine. One was already made, for the UnivariatePolynomial class aka the class for univariate polynomials with symbolic coefficients. It is a map from int -> Expression. We needed another wrapper for the uint -> integer_class map, so that the UnivariateIntPolynomial class can be structured the same way as the former. Now that we need almost the same functionality, why not temlatize the wrapper? (suggested by Isuru) That’s what I did, and the PR #946 is almost merged in. More on wrappers next time!
### Miscellaneous issues
Most of my work during this period revolved around reading the existing polynomial class, and refactor it and removed any redundancies. Some of the miscellaneous work that was done :
• Some refactoring was done in the dict.cpp file. There were some redundancy in the functions which was removed. Templatized methods for checking equality and comparing vectors (and sets) were made. Other specific eq & compare methods became derived methods of these base classes. #933
• Initially, the mul_poly method was constructing a vector of coefficients for the resulting multiplied polynomial (thus, implicitly storing it in a dense representation for a while). However, it was returned as a sparse represented polynomial, using a dictionary. This was changed, so that the dictionary is directly created, and the intermediate vector isn’t requireds. Also, some changes in variable names for clarity, as well as removing the redundant function dict_add_term. #928
• A redundant function create was removed. All it was doing was calling from_vec within the same class. #941
See you next week, Goodbye!
## May 20, 2016
#### Community Bonding Period
The community bonding period is coming to a close and so I’d like to write about what I’ve done/learned during this time. I’ve had the opportunity to create my first blog, have my first meeting with my mentors, submit a couple of minor pull requests to pydy and sympy, add an example script to the pydy repository, begin learning about spatial vectors and begin work on some benchmarking code.
Early in the community bonding period I was able to have my first meeting with my mentors for my project. During this meeting it was discussed that I could change the later portion of my project from working on implementing a Newton Euler method of equations of motion generation to implementing the faster Featherstone method. Considering I had no great attachment to the Newton Euler method I agreed that the faster method would provide a greater benefit for the overall project. Since the meeting I have spent some time reading on the math involved in the Featherstone method, specifically spatial vectors and their uses in dynamics. To this end I have read A Beginners Guide to 6-D Vectors (Part 1) and started reading both A Beginners Guide to 6-D Vectors (Part 2) and Roy Featherstone’s short course on spatial vector algebra.
I have also spent some time beginning to familiarize myself with the code that I will be working with. To begin I followed Jason Moore’s great suggestion of coding through one of the examples from my dynamics course and adding it to the pydy/examples folder in the pydy repository. The example I chose to use was a simple pendulum so that I could focus on the code rather than the complexities of the problem itself. This code and diagram are currently undergoing the review process now in order to be added to the pydy repository.
Lastly I have begun work on benchmarking code which is mentioned as part of my project itself. In working on this part of the project I was able to learn how to use a SQLite database with python which I had only obtained brief exposure to in the past. This code currently works using python’s timeit library to run a file utilizing Lagrange’s method of equations of motion generation and another using Kane’s method. The code runs each file several thousand times and iterates through this process 30 times and saves the average of the 30 runs along with several other useful bits of information about the computer and version of python being used to run the tests. In addition to the benchmarking code itself I have been working on a script that will allow viewing of a graph of the tests utilizing matplotlib and tkinter. This code is close to completion and the current next major addition will be to add the ability to filter the tests based on what platform was used/what version of python was used to run the tests.
This community bonding period has been productive and I am excited to begin the Google Summer of Code program on Monday.
## May 19, 2016
#### Moving Away from Python 2
About a month ago I tweeted this:
EDIT: Some people have started working on making this happen. See https://python3statement.github.io/.
For those of you who don't know, Python 2.7 is slated to reach end-of-life in 2020 (originally, it was slated to end in 2015, but it was extended in 2014, due to the extraordinary difficulty of moving to a newer version). "End-of-life" means absolutely no more support from the core Python team, even for security updates.
I'm writing this post because I want to clarify why I think this should be done, and to clear up some misconceptions, the primary one being that this represents library developers being antagonistic against those who want or have to use Python 2.
I'm writing this from my perspective as a library developer. I'm the lead developer of SymPy, and I have sympathies for developers of other libraries.1 I say this because my idea may seem a bit in tension with "users" (even though I hate the "developer/user" distinction).
### Python 2
There are a few reasons why I think libraries should drop (and announce that they will drop) Python 2 support by 2020 (actually earlier, say 2018 or 2019, depending on how core the library is).
First, library developers have to be the leaders here. This is apparent from the historical move to Python 3 up to this point. Consider the three (not necessarily disjoint) classes of people: CPython core developers, library developers, and users. The core developers were the first to move to Python 3, since they were the ones who wrote it. They were also the ones who provided the messaging around Python 3, which has varied over time. In my opinion, it should have been and should be more forceful.2 Then you have the library developers and the users. A chief difference here is that users are probably going to be using only one version of Python. In order for them to switch that version to Python 3, all the libraries that they use need to support it. This took some time, since library developers saw little impetus to support Python 3 when no one was using it (Catch 22), and to worsen the situation, versions of Python older than 2.6 made single codebase compatibility almost impossible.
Today, though, almost all libraries support Python 3, and we're reaching a point where those that don't have forks that do.
But it only happened after the library developers transitioned. I believe libraries need to be the leaders in moving away from Python 2 as well. It's important to do this for a few reasons:
• Python 2.7 support ends in 2020. That means all updates, including security updates. For all intents and purposes, Python 2.7 becomes an insecure language to use at that point in time.
• Supporting two major versions of Python is technical debt for every project that does it. While writing cross compatible code is easier than ever, it still remains true that you have to remember to add __future__ imports to the top of every file, to import all relevant builtins from your compatibility file or library, and to run all your tests in both Python 2 and 3. Supporting both versions is a major cognitive burden to library developers, as they always have to be aware of important differences in the two languages. Developers on any library that does anything with strings will need to understand how things work in both Python 2 and 3, and the often obscure workarounds required for things to work in both (pop quiz: how do you write Unicode characters to a file in a Python 2/3 compatible way?).
• Some of Python 3's new syntax features (i.e., features that are impossible to use in Python 2) only matter for library developers. A great example of this is keyword-only arguments. From an API standpoint, almost every instance of keyword arguments should be implemented as keyword-only arguments. This avoids mistakes that come from the antipattern of passing keyword arguments without naming the keyword, and allows the argspec of the function to be expanded in the future without breaking API.3
The second reason I think library developers should agree to drop Python 2 support by 2020 is completely selfish. A response that I heard on that tweet (as well as elsewhere), was that libraries should provide carrots, not sticks. In other words, instead of forcing people off of Python 2, we should make them want to come to Python 3. There are some issues with this argument. First, Python 3 already has tons of carrots. Honestly, not being terrible at Unicode ought to be a carrot in its own right.4
If you don't deal with strings, or do but don't care about those silly foreigners with weird accents in their names, there are other major carrots as well. For SymPy, the fact that 1/2 gives 0 in Python 2 has historically been a major source of frustration for new users. Imagine writing out 1/2*x + x**(1/2)*y*z - 3*z**2 and wondering why half of what you wrote just "disappeared" (granted, this was worse before we fixed the printers). While integer/integer not giving a rational number is a major gotcha for SymPy, giving a float is infinitely better than giving what is effectively the wrong answer. Don't use strings or integers? I've got more.
Frankly, if these "carrots" haven't convinced you yet, then I'll wager you're not really the sort of person who is persuaded by carrots.
Second, some "carrots" are impossible unless they are implemented in libraries. While some features can be implemented in 2/3 compatible code and only work in Python 3 (such as @ matrix multiplication), others, such as keyword-only arguments, can only be implemented in code that does not support Python 2. Supporting them in Python 2 would be a net deficit of technical debt (one can imagine, for instance, trying to support keyword-only arguments manually using **kwargs, or by using some monstrous meta-programming).
Third, as I said, I'm selfish. Python 3 does have carrots, and I want them. As long as I have to support Python 2 in my code, I can't use keyword-only arguments, or extended argument unpacking, or async/await, or any of the dozens of features that can't be used in cross compatible code.
A counterargument might be that instead of blocking users of existing libraries, developers should create new libraries which are Python 3-only and make use of new exciting features of Python 3 there. I agree we should do that, but existing libraries are good too. I don't see why developers should throw out all of a well-developed library just so they can use some Python features that they are excited about.
### Legacy Python
A lot of people have taken to calling Python 2 "legacy Python". This phrase is often used condescendingly and angers a lot of people (and indeed, this blog post is the first time I've used it myself). However, I think Python 2 really should be seen this way, as a "legacy" system. If you want to use it, for whatever your reasons, that's fine, but just as you shouldn't expect to get any of the newest features of Python, you shouldn't expect to be able to use the newest versions of your libraries. Those libraries that have a lot of development resources may choose to support older Python 2-compatible versions with bug and/or security fixes. Python 2 itself will be supported for these until 2020. Those without resources probably won't (keep in mind that you're using open source libraries without paying money for them).
I get that some people have to use Python 2, for whatever reasons. But using outdated software comes at a cost. Libraries have borne this technical debt for the most part thus far, but they shouldn't be expected to bear it forever. The debt will only increase, especially as the technical opportunity cost, if you will, of not being able to use newer and shinier versions of Python 3 grows. The burden will have to shift at some point. Those with the financial resources may choose to offload this debt to others,5 say, by backporting features or bugfixes to older library versions that support Python 2 (or by helping to move code to Python 3).
I want to end by pointing out that if you are, for whatever reason, still using Python 2, you may be worried that if libraries become Python 3-only and start using Python 3 features, won't that break your code? The answer is no. Assuming package maintainers mark the metadata on their packages correctly, tools like pip and conda will not install non-Python 2 compatible versions into Python 2.
If you haven't transitioned yet, and want to know more, a good place to start is the official docs. I also highly recommend using conda environments, as it will make it easy to separate your Python 2 code from your Python 3 code.
#### Footnotes
1. With that being said, the opinions here are entirely my own, and are don't necessarily represent those of other people, nor do they represent official SymPy policy (no decisions have been made by the community about this at this time).
2. It often feels like core Python itself doesn't really want people to use Python 3. It's little things, like docs links that redirect to Python 2, or PEP 394, which still says that the python should always point to Python 2.
3. In Swift, Apple's new language for iOS and OS X, function parameter names are effectively "keyword-only" by default
4. As an example of this, in conda, if you use Python 2 in the root environment, then installing into a path with non-ASCII characters is unsupported. This is common on Windows, because Windows by default uses the user's full name as the username, and the default conda install path is in the user directory.
This is unsupported except in Python 3, because to fix the issue, every single place in conda where a string appears would have to be changed to use a unicode string in Python 2. The basic issue is that things like 'π' + u'i' raise UnicodeDecodeError in Python 2 (even though 'π' + 'i', u'π' + 'i', and u'π' + u'i' all work fine). You can read a more in-depth description of the problem here. Incidentally, this is also why you should never use from __future__ import unicode_literals in Python 2, in my opinion.
Even though I no longer work on conda, as far as I know, the issue remains unfixed. Of course, this whole thing works just fine if conda is run in Python 3.
5. If that legitimately interests you, I hear Continuum may be able to help you.
## May 18, 2016
#### GSoC 2016 – All set to go
GSoC Coding period is about to start next week.
The past week I was focused on completing the NTheory Ruby wrappers, in order to complete my promised workload for the pre-coding time.
The main lessons learnt from this week’s work was handling conversions between Ruby and C types. This proved to be a quite easy task, with the Ruby C API.
The Definitive Guide to Ruby’s C API covers this in detail.
Then I had to figure out how to let SymEngine Integers to be implicitly convertible into Ruby numeric types. This proved tricky for me to get around as I wasn’t aware that it could be done in the actual Ruby code, without having to use the Ruby C API. The implicit conversion to Ruby numeric types was quite easy.
As shown in the gist above, I just had to declare the class in the lib folder with the necessary conversion method.
With this part done, several number theory functions can now be called upon SymEngine Integers. Those functions are:
• GCD
• LCM
• Mod
• Next Prime
• Quotient
• Fibonacci Number
• Lucas Number
• Binomials
Now that this part is done, the next step would be to start coding from next Monday. From my proposed plan, the first two weeks would be for wrapping Complex Numbers and Floating Point Numbers for Ruby.
See you after the first week.
## May 15, 2016
#### GSoC: Community Bonding Period
The 4th week of Community Bonding Period is about to kick off. I am here to write about what I’ve done so far and my goals for next week.
The first PR for the project “Implementation of Holonomic Function” got merged today. This adds following functionality in SymPy.
• Differential Operators with Polynomial Coefficients and operation like addition, multiplication etc.
• Holonomic Functions. A representation of Holonomic Functions given its annihilator and Initial Conditions (optional).
A little about the API to get you an idea of this.
>>> from sympy import *
>>> from sympy.holonomic import HolonomicFunction, DiffOperatorAlgebra
>> x = symbols(‘x’)
>>> R, Dx = DiffOperatorAlgebra(ZZ.old_poly_ring(x), ‘Dx’)
>> Dx * x
(1) + (x)Dx
>>> HolonomicFunction(Dx – 1, x, 0, [1]) + HolonomicFunction(Dx**2 + 1, x, 0, [0, 1])
HolonomicFunction((-1) + (1)Dx + (-1)Dx**2 + (1)Dx**3, x), f(0) = 1, f'(0) = 2, f”(0) = 1
Operations supported for Differential Operators are addition, multiplication, subtraction and power. Holonomic Functions can be added and multiplied with or without giving the Initial Conditions. Special thanks to OndrejKalevi and Aaron for all the help, suggestions and reviews.
What Now?
Now the goal is to use Polynomials and Fractions i.e. instances of DMP and DMF classes instead of expressions for all the manipulation done internally. This is necessary for robustness. After that is done I work on to implement conversion of Hypergeometric Functions to Holonomic Functions.
This has been super exciting so far and I hope same for future.
Thank You.
## May 14, 2016
#### GSoC Community Bonding Period Week 3
Hi there ! This week was great. I got to learn about many new things. I have mentioned in my last post, about my goals for this week, let us see what I have done so far.
### So far
• In PR 10863, implementation of _eval_expand_diracdelta is almost done . A final review is needed. But at the same time, I was forgetting about the fact that the simplify method has to be deprecated in order to make things backwards compatible. Thanks Jason for the suggestion.
I have made the simplify() method call the _eval_expand_diracdelta() method and raise a deprecation warning. I have also added the tests for this method by catching the deprecation warnings properly. The API works like this:-
In [3]: DiracDelta(x*y).simplify(x)
/home/ahappyidiot/anaconda2/bin/ipython:1: SymPyDeprecationWarning:
simplify has been deprecated since SymPy 1.0.1. Use
#!/home/ahappyidiot/anaconda2/bin/python
Out[3]: DiracDelta(x)/Abs(y)
These commits are needed to be reviewed properly in order to merge PR 10863.
• rewrite(Piecewise) :- In PR 11103, I have implemented a new method under DiracDelta class which would successfully output a Piecewise representation of a DiracDelta Object. For this pull request also, a final review is needed. The API works as:-
In [4]: DiracDelta(x).rewrite(Piecewise)
Out[4]:
⎧ oo for x = 0
⎨
⎩ 0 otherwise
In [4]: DiracDelta(x - 5).rewrite(Piecewise)
Out[4]:
⎧ oo for x - 5 = 0
⎨
⎩ 0 otherwise
• I have also reviewed PR 11065, I personally think that the implementation is a great idea.
#### Next Week
My plans for next weeks are:-
• Polish both PR 11103 and PR 10863 and get these pull requests merged
• Improve doc strings of the DircaDelta and Heaviside classes and methods.
I will again get back by the end of the next week. Cheers !!!
Happy Coding.
## May 10, 2016
#### GSoC Community Bonding Period Week 2
The second week of the Community Bonding Period got over. Though this post is quite late, I will try to post updates on Fridays of every week.
### So far
I had my first meeting with Jason Moore, one of my mentor, on 5th May through Google Hangouts. We had a brief discussion over my proposal. I am taking a head-start for coding along with community bonding. I have started a discussion about the first phase of my proposal. Jason has created an issue tracker for Improvements to DiracDelta and Heaviside.
• PR 10863 is almost completed only the depreciation part is left.
Almost all the properties of DiracDelta functions has been already implemented in Sympy. But I need to check whether all of them are unit tested and well documented.
### Next Week
My targets for this week are:-
• Polish PR 10863 and get it merged.
• Implement rewriting DircaDelta as Piecewise.
• Improve doc strings of the DircaDelta and Heaviside classes and methods.
I will again get back by the end of this week. Cheers !!!
Happy Coding.
## May 07, 2016
#### Community Bonding Period Starts for GSoC
So, the GSoC is officially starting with the community bonding period until the last week of May. Until then, the official requirements are to get to know the communities. For my project, this puts me in an odd situation with the project being done for SymEngine community, while under the auspices of SciRuby Foundation. For the starters, I am familiar with many people from the SymEngine community, and just now I am trying to get more involved with the SciRuby people.
Also, according to my proposal I have listed a couple of tasks to be completed before the actual coding begins. So this week was mostly spent on merging my existing and long standing PR for Ruby Wrappers for Trigonometric, Hyperbolic and other functions. The major problem I had was writing the repetitive tests for all the functions included in the wrappers. But apart from that the requirements were quite straightforward.
For the next week, I am planning to wrap the Number Theory functions in Ruby. This already has a CWrapper, which makes my task a lot easier.
Apart from coding, I wanted to set the record straight for SymEngine gem in the sciruby website. It lists the SymEngine gem as broken, and I would need to correct the gem’s installation scripts. Figuring out this is another task I carry on for the coming week.
See you!
## May 02, 2016
#### Google Summer of Code with Sympy
About one and half week ago, the results of Google Summer of Code were out. I am extremely glad to inform that my project for Sympy on Implementation of Singularity Functions got selected for GSoC 2016.
Google Summer of Code is a global annual program focused on bringing more student developers into open source software development. It is a global program that offers students stipends to write code for open source projects.
### Sympy
SymPy is a Python library for symbolic mathematics. It aims to become a full-featured Computer Algebra System (CAS) while keeping the code as simple as possible in order to be comprehensible and easily extensible.
I have proposed to work on the Implementation of a full fledged Computer Algebra System (CAS) of Singularity Functions. I will create a module to represent a Singularity Function and implement different mathematical operations. This module will be further used to create an another module which would be used for solving complicated beam bending problems.
Jason Moore, Sartaj Singh and Ondřej Čertík are going to mentor me throughout the whole program. All of them are really talented and very humble people. I have learned a lot from all of them. I am extremely lucky to work under such great people.
Now Community Bonding Period is going on. This is intended to get students ready to start contributing to their organization full time from 23rd May. I am supposed to :
• Become familiar with the community practices and processes.
• Participate on Mailing Lists / IRC / etc.
• Set up your development environment.
• Small (or large) patches/bug fixes.
• Participate in code reviews for others.
• Work with my mentor and other org members on refining my project plan. This might include finalizing deadlines and milestones, adding more detail, figuring out potential issues, etc.
Looking forward toward a great summer.
Cheers!!!
## May 01, 2016
#### GSoC 2016 Phase I : Proposal, Acceptance
Hello, I'm Gaurav Dhingra a 3rd year undergraduate student at IIT Roorkee, my proposal on Group Theory with SymPy has been accepted as a part of Google Summer of Code
First, a little bit about SymPy, a Computer Algebra System (CAS) written entirely in Python. SymPy 1.0 was released about 2 months ago, Sympy has been created by hundreds of contributors starting from 2006. I will be working on Group Theory over the summer, for the next 3 months, to implement Computational Group Theory (CGT) and Group Theory, which are parts of mathematics I particularly enjoy. You can view my project proposal GSoC 2016 Application Gaurav Dhingra: Group Theory. Until a few days ago I was pretty busy with my exams, but in the next few weeks I will go over working on the project. I will particularly focus on Finite and Finitely Presented Groups.
I hope that I'll be able to implement everything that I promised in it. Moving onto the ongoing community bonding. Since I am very well acquitted with the workflow of SymPy, I can get straight to few important things, which i will do in the next few days.
This includes things like:
• Talking to my mentors regarding the time, and place of chat on internet, we differ by almost 5hrs. Time wouldn't be an issue, since seeing from past, I haven't faced such difficulty as both me and my mentor work for almost the same time intervals. From the GSoC 2015 discussions, I remember that Ondrej tries to make sure everyone knows what time student-mentor meet, since of different time zones.
• In the past we have had discussion on my private gitter channel Group Theory Implementation. Would it be wise to continue code discussions there?. Since no one can be added in the channel without my permission.
One thing that has been a hell of a lot annoying has been the GSoC mailing list, it's a lot distracting. I changed list settings to abridged daily updates because I was getting like 50 mails every day and that too about some really stupid and irrelevant things. But yeah like whatever.
"LESS TALK, MORE CODE" is the policy that I always tend to follow (not for blog!!). I will try my best to implement it in a strict way this summer. I have seen this policy working fine for me, mostly first I start writing question in a message box to my mentor, and then i think more about it myself and in the end I come up with a solution on my own, instead of asking.
I'm quite sure that I will write more than enough blog posts about my project during the summers. Since I enjoy writing and that too regarding things that occupy larger part of my day.
I'd like to thank all the people involved with contributions to SymPy. My special thanks to my mentor - Kalevi Suominen and my co-mentor - Aaron Meurer for all the suggestions while making my proposal, and showing faith and enthusiasm in my ability and my proposal.
## April 29, 2016
#### Selected
I have been selected for GSoC’16! The results came out on Apr 23, and I have never been happier! I got around to writing this blog post only now, because of my end semester examinations which ended yesterday. I have been alotted Isuru and Sumith as my official mentors. I’m very excited to start working on the project, alongside them.
Right now, I’ll start my discussions on the implementation details, and overall structure of the code. Also I will begin work on the Fast Fourier algorithm for univariate polynomial multiplication.
Looking forward to a busy summer!
## April 26, 2016
#### GSoC – Prologue
So, I have been accepted for the Google Summer of Code – 2016 for the project “Ruby Wrappers for SymEngine”, under the mentoring organization SciRuby.
The aim of this post is to give an introduction to the project.
The abstract of the project is as follows:
A project started by the SymPy organisation, SymEngine is a standalone fast C++ symbolic manipulation library. It solves mathematical problems the same way a human does, but way more quickly and precisely. The motivation for SymEngine is to develop the Computer Algebra System once in C++ and then use it from other languages rather than doing the same thing all over again for each language that it is required in. The project for Ruby bindings has already been setup at symengine.rb. Few things that the project involves are:
• Extending the C interface of SymEngine library.
• Wrapping up the C interface for Ruby using Ruby C API, including error handling.
• Designing the Ruby interface.
• Integrating IRuby with symengine gem for better printing and writing IRuby notebooks.
• Integrating the gem with existing gems like gmp, mpfr and mpc.
• Making the installation of symengine gem easier.
If you are interested, the full proposal, which includes the timeline is available online.
Also, the GitHub repository for the project is at SymEngine/SymEngine.rb.
The actual coding phase starts in about a month, and before that I plan to complete the Ruby Wrappers for the Trigonometric and Hyperbolic Functions and to write the necessary tests. Next, the NTheory CWrappers can be wrapped into Ruby. This too will be done before the GSoC period starts.
Keep checking the blog if you are interested to track the progress of this project. I will be posting weekly updates in the blog.
Auf Wiedersehen!
#### GSoC Acceptance
I am excited to announce that I have been accepted for the the Google Summer of Code program for the summer of 2016. I will be working with the Sympy open source project’s equation of motion generators. For the project I will mainly be focusing on creating a shared base class for the current equation of motion generators and adding an additional generator.
## March 25, 2016
#### SymPy Workshop at FOSSASIA 2016, Singapore
Hi there! Last week I went to Singapore for FOSSASIA Open Tech Summit 2016. I conducted a Worskhop on SymPy and assisted the PyDy Workshop in Python track hosted by Kushal Das. This blog post accounts to my experience as a speaker, as a attendee at FOSSASIA and as a traveler to Singapore.
FOSSASIA is the premier Free and Open Source technology event in Asia for developers, start-ups, and contributors. Projects at FOSSASIA range from open hardware, to design, graphics and software. FOSSASIA was established in 2009. Previous events took place in Cambodia and Vietnam.
As the name suggests its one of the largest tech conferences in Asia and my expectations were pretty high from this conference and moreover It was my first international conference. I witnessed lots of amazing people in the conference and interacted with a few as well. This is how it started:
## The SymPy/PyDy Workshop
Community is more important than Code @ Singapore Science Center Level 3, Pauling Lab
The SymPy and PyDy workshop was scheduled on 20th March at 1:00 - 2:00 PM (PyDy) and 2:00 - 4:00 PM (SymPy). Jason suggested to conduct the SymPy workshop first since PyDy uses SymPy and it would be easier for people to learn SymPy first and then PyDy, but since the schedule was already published, It was not possible to reschedule the workshops, so we had to continue with PyDy first. Sahil started the PyDy workshop at 1:00 PM, though we had to spend a lot of time installing Anaconda to everyone's systems by creating a local server and distributing flash drives as most of the people didn't had Anaconda or Canopy installed. This has been the problem for almost all the workshops I have conducted in the past. It seems I need to invent an efficient way to do this faster in future as we spent 30-40 odd minutes in installation.
Fortunately Sahil finished his presentation at around 2:15 PM. Then I took over for SymPy workshop, I started with the basic introduction to SymPy, the slides can be found here. Then I jumped to IPython notebook exercises to demonstrate more of SymPy. People were amazed by the capabilities of this amazing piece of software. The most beautiful feature they liked was printing and integration. The workshop went pretty well except for the glitches in the HDMI port of my laptop (probably, its the right time to get a new laptop). Here are some SymPy stickers for you, if you missed there.
## Singapore was Fun ;)
Visiting Singapore has been a great experience, the culture is a mix of Malaysian, Indian and native Singaporean. The City is well connected with MRT/SMRT (Metro and Buses). It's quite easy get anywhere around the city. People here are very helpful and nice. I didn't faced any problems throughout my stay there. I spent most of my time near Science Center, China Town and Little India. There were lot of people from India and particularly from Delhi and three from my University. It was awesome time spent with geeks all around. Tagging some of them Mayank, Ishaan, Umair, Jigyasa, Yask, Garvit, Manan, sorry If I missed someone. Here is a pic of the last day of the conference.
## Thank you!
Thank you FOSSASIA Organizing Team, Hong Phuc Dang for inviting me to be part of this awesome FOSS community. I would not have been able to attend the conference without the generous financial support from SymPy, Thank you Ondrej Certik, Aaron Meurer & SymPy contributors.
### Good Bye!
Good bye! everyone, see you on my next blog post, meanwhile you can have a look at a Picture of me doing back flip at Sentosa ;)
## March 24, 2016
#### Hello World!
Hello World! My previous blog posts were at: Global Class.
You should try not to reinvent the wheel, So I thought it would be better for me to fork jekyll-now repo and build this jekyll blog in minutes :smile: .
This is cool and the awesome most part is that It is Markdown flavoured . Its quite cool writing in markdown now. Oh! I remember how weird I felt writing in markdown back in December. Snap!
Now, I will search for a way to do spell correction in markdown. It is so much nedded, I know a little bit of googling will help me.
Happy Holi! :fire:
## March 07, 2016
#### Initial Commit
This is my first blog post. The blog was made to track progress of my GSoC project and get feedback from my mentors, if my proposal gets selected. I’m proposing to implement the Multivariate and Univariate polynomial class in SymEngine.
Wish me luck!
## February 06, 2016
#### SymPy Workshop at PyDelhi Meetup
Hi there! It's been sometime now since my last blog post, It's probably the right time to write one now. Yesterday, I gave a talk on SymPy at Python Delhi User group Meetup at CSDS, New Delhi. Things never go the way you want, an hour was wasted in just setting up Anaconda on everyone's system, eventually I had to cut on the material I could demonstrate, though It was nice to see that people were very enthusiatic about SymPy, they actively solved excercises. It was fun interacting with everyone.
Here is a Pic of the Seminar room at CSDS:
I should also admit that, I have increased my appetite for attending conferences and meetups, these days. In the last 4 months I have attended 3 Meetups (PyDelhi Meetup) and 1 Conference (PyCon India 2015). I think this is one of the best things I have done in last few years & I would recommend anyone with a slight interest in Python either Beginner or Expert should attend PyDelhi Meetups. Looking forward to more such meetups and conferences.
I gave SymPy stickers to everyone who solved atleast one excercise (Since, I didn't had enough stickers).
## January 26, 2016
#### What happens when you mess with hashing in Python
This post is based off a Jupyter notebook I made in 2013. You can download the original here. That notebook was based off a wiki page on the SymPy wiki, which in turn was based on a message to the SymPy mailing list.
## What is hashing?
Before we start, let's have a brief introduction to hashing. A hash function is a function that maps a set of objects to a set of integers. There are many kinds of hash functions, which satisfy many different properties, but the most important property that must be satisfied by any hash function is that it be a function (in the mathematical sense), that is, if two objects are equal, then their hash should also be equal.
Usually, the set of integers that the hash function maps to is much smaller than the set of objects, so that there will be multiple objects that hash to the same value. However, generally for a hash function to be useful, the set of integers should be large enough, and the hash function well distributed enough that if two objects hash to the same value, then they are very likely to be equal.
To summarize, a hash function must satisfy the property:
• If two objects are equal, then their hashes should be equal.
Additionally, a good hash function should satisfy the property:
• If two objects have the same hash, then they are likely to be the same object.
Since there are generally more possible objects than hash values, two objects may hash to the same value. This is called a hash collision, and anything that deals with hashes should be able to deal with them.
This won't be discussed here, but an additional property that a good hash function should satisfy to be useful is this:
• The hash of an object should be cheap to compute.
## What is it used for?
If we have a hash function that satisfies the above properties, then we can use it to create from a collection of objects something called a hash table. Suppose we have a collection of objects, and given any object, we want to be able to compute very quickly if that object belongs to our collection. We could store these objects in an ordered array, but then to determine if it is in the array, we would have to search potentially through every element of the array (in other words, an $$O(n)$$) algorithm.
With hashing, we can do better. We create what is known as a hash table. Instead of storing the objects in an ordered array, we create an array of buckets, each corresponding to some hash values. We then hash each object, and store it into the array corresponding to its hash value (if there are more hash values than buckets, we distribute them using a second hash function, which can be as simple as taking the modulus with respect to the number of buckets, % n).
This image from Wikipedia shows an example.
To determine if an object is in a hash table, we only have to hash the object, and look in the bucket corresponding to that hash. This is an $$O(1)$$ algorithm, assuming we have a good hash function, because each bucket will generally hold very few objects, possibly even none.
Note: there are some additional things that need to be done to handle hash collisions, but the basic idea is the same, and as long as there aren't too many hash collisions, which should happen if hash values are evenly distributed and the size of the hash table is large compared to the number of objects stored in it, the average time to determine if an object is in the hash table is still $$O(1)$$.
## Hashing in Python
Python has a built in function that performs a hash called hash(). For many objects, the hash is not very surprising. Note, the hashes you see below may not be the same ones you see if you run the examples, because Python hashing depends on the architecture of the machine you are running on, and, in newer versions of Python, hashes are randomized for security purposes.
>>> hash(10)
10
>>> hash(()) # An empty tuple
3527539
>>> hash('a')
12416037344
In Python, not all objects are hashable. For example
>>> hash([]) # An empty list
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'list'
This is because Python has an additional restriction on hashing:
• In order for an object to be hashable, it must be immutable.
This is important basically because we want the hash of an object to remain the same across the object's lifetime. But if we have a mutable object, then that object itself can change over its lifetime. But then according to our first bullet point above, that object's hash has to change too.
This restriction simplifies hash tables. If we allowed an object's hash to change while it is in a hash table, we would have to move it to a different bucket. Not only is this costly, but the hash table would have to notice that this happened; the object itself doesn't know that it is sitting in a hash table, at least not in the Python implementation.
In Python, there are two objects that correspond to hash tables, dict and set. A dict is a special kind of hash table called an associative array. An associative array is a hash table where each element of the hash table points to another object. The other object itself is not hashed.
Think of an associative array as a generalization of a regular array (like a list). In a list, objects are associated to nonnegative integer indices, like
>>> l = ['a', 'b', 7]
>> l[0]
'a'
>>> l[2]
7
In an associative array (i.e., a dict) we can index objects by anything, so long as the key is hashable.
>>> d = {0: 'a', 'hello': ['world']}
>>> d[0]
'a'
>>> d['hello']
['world']
Note that only the keys need to be hashable. The values can be anything, even unhashable objects like lists.
The uses for associative arrays are boundless. dict is one of the most useful data types in the Python language. Some example uses are
• Extension of list with "missing values". For example, {0: 'a', 2: 7} would correspond to the above list l with the value 'b' corresponding to the key 1 removed.
• Representation of a mathematical function with a finite domain.
• A poor-man's database (the Wikipedia image above is an associative array mapping names to telephone numbers).
• Implementing a Pythonic version of the switch-case statement.
The other type of hash table, set, more closely matches the definition I gave above for a hash table. A set is just a container of hashable objects. sets are unordered, and can only contain one of each object (this is why they are called "sets," because this matches the mathematical definition of a set).
In Python 2.7 or later, you can create a set with { and }, like {a, b, c}. Otherwise, use set([a, b, c]).
>>> s = {0, (), '2'}
>>> s
{0, '2', ()}
>>> s
{0, 1, '2', ()}
>>> s
{0, 1, '2', ()}
A final note: set and dict are themselves mutable, and hence not hashable! There is an immutable version of set called frozenset. There are no immutable dictionaries.
>>> f = frozenset([0, (), '2'])
>>> f
frozenset({0, '2', ()})
>>> hash(f)
-7776452922777075760
>>> # A frozenset, unlike a set, can be used as a dictionary key
>>> d[f] = 'a set'
>>> d
{0: 'a', frozenset({0, '2', ()}): 'a set', 'hello': ['world']}
## Creating your own hashable objects
Before we move on, there is one final thing we need to know about hashing in Python, which is how to create hashes for custom objects. By default, if we create an object, it will be hashable.
>>> class Nothing(object):
... pass
...
>>> N = Nothing()
>>> hash(N)
270498113
Implementation-wise, the hash is just the object's id, which corresponds to its position in memory. This satisfies the above conditions: it is (extremely) cheap to compute, and since by default objects in Python compare unequal to one another, objects with different hashes will be unequal.
>>> M = Nothing()
>>> M == N
False
>>> hash(M)
270498117
>>> hash(M) == hash(N)
False
To define a hash function for an object, define the __hash__ method.
>>> class HashToOne(object):
... def __hash__(self):
... return 1
...
>>> HTO = HashToOne()
>>> hash(HTO)
1
To set an object as not hashable, set __hash__ to None.
>>> class NotHashable(object):
... __hash__ = None
...
>>> NH = NotHashable()
>>> hash(NH)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'NotHashable'
Finally, to override the equality operator ==, define __eq__.
>>> class AlwaysEqual(object):
... def __eq__(self, other):
... if isinstance(other, AlwaysEqual):
... return True
... return False
...
>>> AE1 = AlwaysEqual()
>>> AE2 = AlwaysEqual()
>>> AE1 == AE2
True
One of the key points that I hope you will take away from this post is that if you override __eq__, you must also override __hash__ to agree. Note that Python 3 will actually require this: in Python 3, you cannot override __eq__ and not override __hash__. But that's as far as Python goes in enforcing these rules, as we will see below. In particular, Python will never actually check that your __hash__ actually agrees with your __eq__.
## Messing with hashing
Now to the fun stuff. What happens if we break some of the invariants that Python expects of hashing. Python expects two key invariants to hold
1. The hash of an object does not change across the object's lifetime (in other words, a hashable object should be immutable).
2. a == b implies hash(a) == hash(b) (note that the reverse might not hold in the case of a hash collision).
As we shall see, Python expects, but does not enforce either of these.
### Example 1: Mutating a hash
Let's break rule 1 first. Let's create an object with a hash, and then change that object's hash over its lifetime, and see what sorts of things can happen.
>>> class Bad(object):
... def __init__(self, hash): # The object's hash will be hash
... self.hash = hash
... def __hash__(self):
... return self.hash
...
>>> hash(b)
1
>>> d = {b:42}
>>> d[b]
42
>>> b.hash = 2
>>> hash(b)
2
>>> d[b]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
Here, we implicitly changed the hash of b by mutating the attribute of b that is used to compute the hash. As a result, the object is no longer found in a dictionary, which uses the hash to find the object.
The object is still there, we just can't access it any more.
>>> d
Note that Python doesn't prevent me from doing this. We could make it if we want (e.g., by making __setattr__ raise AttributeError), but even then we could forcibly change it by modifying the object's __dict__. We could try some more fancy things using descriptors, metaclasses, and/or __getattribute__, but even then, if we knew what was happening, we could probably find a way to change it.
This is what is meant when people say that Python is a "consenting adults" language. You are expected to not try to break things, but generally aren't prevented from doing so if you try.
### Example 2: More mutation
Let's try something even more crazy. Let's make an object that hashes to a different value each time we look at the hash.
>>> class DifferentHash(object):
... def __init__(self):
... self.hashcounter = 0
... def __hash__(self):
... self.hashcounter += 1
... return self.hashcounter
...
>>> DH = DifferentHash()
>>> hash(DH)
1
>>> hash(DH)
2
>>> hash(DH)
3
Obviously, if we use DH as a key to a dictionary, then it will not work, because we will run into the same issue we had with Bad. But what about putting DH in a set.
>>> DHset = {DH, DH, DH}
>>> DHset
{<__main__.DifferentHash at 0x101f79f50>,
<__main__.DifferentHash at 0x101f79f50>,
<__main__.DifferentHash at 0x101f79f50>}
Woah! We put the exact same object in a set three times, and it appeared all three times. This is not what is supposed to happen with a set.
>>> {1, 1, 1}
{1}
What happens when we do stuff with DHset?
>>> DHset.remove(DH)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
KeyError: <__main__.DifferentHash object at 0x1047e75f8>
That didn't work, because set.remove searches for an object by its hash, which is different by this point.
Now let's make a copy of DHset. The set.copy method will create a shallow copy (meaning that the set container itself will be different, according to is comparison, but the objects themselves will the same, according to is comparison).
>>> DHset2 = DHset.copy()
>>> DHset2 == DHset
True
Everything is fine so far. This object is only going to cause trouble if something recomputes its hash. But remember that the whole reason that we had trouble with something like Bad above is that Python doesn't recompute that hash of an object, unless it has to. So let's do something that will force it to do so: let's pop an object from one of the sets and add it back in.
>>> D = DHset.pop()
>>> DHset
{<__main__.DifferentHash at 0x101f79f50>,
<__main__.DifferentHash at 0x101f79f50>,
<__main__.DifferentHash at 0x101f79f50>}
>>> DHset2
{<__main__.DifferentHash at 0x101f79f50>,
<__main__.DifferentHash at 0x101f79f50>,
<__main__.DifferentHash at 0x101f79f50>}
>>> DHset == DHset2
False
There we go. By removing it from the set, we made the set forget about its hash, so it had to be recomputed when we added it again. This version of DHset now has a DH with a different hash than it had before. Thinking back to set being a hash table, in this DHset, the three DH objects are in different "buckets" than they were in before. DHset.__eq__(DHset2) notices that the bucket structure is different right away and returns False.
By the way, what hash value are we up to these days?
>>> hash(DH)
9
### Example 3: When a == b does not imply hash(a) == hash(b)
Now let's look at point 2. What happens if we create an object with __eq__ that disagrees with __hash__. We actually already have made a class like this, the AlwaysEqual object above. Instances of AlwaysEqual will always compare equal to one another, but they will not have the same hash, because they will use object's default __hash__ of id. Let's take a closer look at the AE1 and AE2 objects we created above.
>>> hash(AE1)
270498221
>>> hash(AE2)
270498197
>>> hash(AE1) == hash(AE2)
False
>>> AE1 == AE2
True
>>> {AE1, AE2}
{<__main__.AlwaysEqual at 0x101f79950>,
We can already see that we have broken one of the key properties of a set, which is that it does not contain the same object twice (remember that AE1 and AE2 should be considered the "same object" because AE1 == AE2 is True).
This can lead to subtle issues. For example, suppose we had a list and we wanted to remove all the duplicate items from it. An easy way to do this is to convert the list to a set and then convert it back to a list.
>>> l = ['a', 'a', 'c', 'a', 'c', 'b']
>>> list(set(l))
['a', 'c', 'b']
Now, this method is obviously not going to work for a list of AlwaysEqual objects.
>>> AE3 = AlwaysEqual()
>>> l = [AE1, AE1, AE3, AE2, AE3]
>>> list(set(l))
[<__main__.AlwaysEqual at 0x102c1d590>,
<__main__.AlwaysEqual at 0x101f79950>]
Actually, what happened here is that the equality that we defined on AlwaysEqual was essentially ignored. We got a list of unique items by id, instead of by __eq__. You can imagine that if __eq__ were something a little less trivial, where some, but not all, objects are considered equal, that this could lead to very subtle issues.
But there is an issue with the above algorithm. It isn't stable, that is, it removes the ordering that we had on the list. We could do this better by making a new list, and looping through the old one, adding elements to the new list if they aren't already there.
>>> def uniq(l):
... newl = []
... for i in l:
... if i not in newl:
... newl.append(i)
... return newl
...
>>> uniq(['a', 'a', 'c', 'a', 'c', 'b'])
['a', 'c', 'b']
>>> uniq([AE1, AE1, AE3, AE2, AE3])
This time, we used in, which uses ==, so we got only one unique element of the list of AlwaysEqual objects.
But there is an issue with this algorithm as well. Checking if something is in a list is $$O(n)$$, but we have an object that allows checking in $$O(1)$$ time, namely, a set. So a more efficient version might be to create a set alongside the new list for containment checking purposes.
>>> def uniq2(l):
... newl = []
... newlset = set()
... for i in l:
... if i not in newlset:
... newl.append(i)
... return newl
...
>>> uniq2(['a', 'a', 'c', 'a', 'c', 'b'])
['a', 'c', 'b']
>>> uniq2([AE1, AE1, AE3, AE2, AE3])
<__main__.AlwaysEqual at 0x102c1d590>,
<__main__.AlwaysEqual at 0x101f79950>]
Bah! Since we used a set, we compared by hashing, not equality, so we are left with three objects again. Notice the extremely subtle difference here. Basically, it is this:
>>> AE1 in {AE2}
False
>>> AE1 in [AE2]
True
Set containment uses hashing; list containment uses equality. If the two don't agree, then the result of your algorithm will depend on which one you use!
By the way, as you might expect, dictionary containment also uses hashing, and tuple containment uses equality:
>>> AE1 in {AE2: 42}
False
>>> AE1 in (AE2,)
True
### Example 4: Caching hashing
If you ever want to add subtle bizarreness to a system, add some sort of caching, and then do it wrong.
As we noted in the beginning, one important property of a hash function is that it is quick to compute. A nice way to achieve this for heavily cached objects is to cache the value of the cache on the object, so that it only needs to be computed once. The pattern (which is modeled after SymPy's Basic) is something like this:
>>> class HashCache(object):
... def __init__(self, arg):
... self.arg = arg
... self.hash_cache = None
... def __hash__(self):
... if self.hash_cache is None:
... self.hash_cache = hash(self.arg)
... return self.hash_cache
... def __eq__(self, other):
... if not isinstance(other, HashCache):
... return False
... return self.arg == other.arg
...
HashCache is nothing more than a small wrapper around a hashable argument, which caches its hash.
>>> hash('a')
12416037344
>>> a = HashCache('a')
>>> hash(a)
12416037344
For ordinary Python builtins, simply recomputing the hash will be faster than the attribute lookup used by HashCache. Note: This uses the %timeit magic from IPython. %timeit only works when run in IPython or Jupyter.
>>> %timeit hash('a')
10000000 loops, best of 3: 69.9 ns per loop
>>> %timeit hash(a)
1000000 loops, best of 3: 328 ns per loop
But for a custom object, computing the hash may be more computationally expensive. As hashing is supposed to agree with equality (as I hope you've realized by now!), if computing equality is expensive, computing a hash function that agrees with it might be expensive as well.
As a simple example of where this might be useful, consider a highly nested tuple, an object whose hash that is relatively expensive to compute.
>>> a = ()
>>> for i in range(1000):
... a = (a,)
...
>>> A = HashCache(a)
>>> %timeit hash(a)
100000 loops, best of 3: 9.61 µs per loop
>>> %timeit hash(A)
1000000 loops, best of 3: 325 ns per loop
So far, we haven't done anything wrong. HashCache, as you may have noticed, has __eq__ defined correctly:
>>> HashCache(1) == HashCache(2)
False
>>> HashCache(1) == HashCache(1)
True
But what happens if we mutate a HashCache. This is different from examples 1 and 2 above, because we will be mutating what happens with equality testing, but not the hash (because of the cache).
In the below example, recall that small integers hash to themselves, so hash(1) == 1 and hash(2) == 2.
>>> a = HashCache(1)
>>> d = {a: 42}
>>> a.arg = 2
>>> hash(a)
1
>>> d[a]
42
Because we cached the hash of a, which was computed as soon as we created the dictionary d, it remained unchanged when modified the arg to be 2. Thus, we can still find the key of the dictionary. But since we have mutated a, the equality testing on it has changed. This means that, as with the previous example, we are going to have issues with dicts and sets keeping unique keys and entries (respectively).
>>> a = HashCache(1)
>>> b = HashCache(2)
>>> hash(a)
1
>>> hash(b)
2
>>> b.arg = 1
>>> a == b
True
>>> hash(a) == hash(b)
False
>>> {a, b}
{<__main__.HashCache at 0x102c32050>, <__main__.HashCache at 0x102c32450>}
>>> uniq([a, b])
[<__main__.HashCache at 0x102c32050>]
>>> uniq2([a, b])
[<__main__.HashCache at 0x102c32050>, <__main__.HashCache at 0x102c32450>]
Once we mutate b so that it compares equal to a, we start to have the same sort of issues that we had in example 3 with AlwaysEqual. Let's look at an instant replay.
>>> a = HashCache(1)
>>> b = HashCache(2)
>>> b.arg = 1
>>> print(a == b)
True
>>> print(hash(a) == hash(b))
True
>>> print({a, b})
set([<__main__.HashCache object at 0x102c32a10>])
>>> print(uniq([a, b]))
[<__main__.HashCache object at 0x102c32a50>]
>>> print(uniq2([a, b]))
[<__main__.HashCache object at 0x102c32a50>]
Wait a minute, this time it's different! Comparing it to above, it's pretty easy to see what was different this time. We left out the part where we showed the hash of a and b. When we did that the first time, it cached the hash of b, making it forever be 2, but when we didn't do it the second time, the hash had not been cached yet, so the first time it is computed (in the print(hash(a) == hash(b)) line), b.arg has already been changed to 1.
And herein lies the extreme subtlety: if you mutate an object with that hashes its cache like this, you will run into issues only if you had already called some function that hashed the object somewhere. Now just about anything might compute the hash of an object. Or it might not. For example, our uniq2 function computes the hash of the objects in its input list, because it stores them in a set, but uniq does not:
>>> a = HashCache(1)
>>> b = HashCache(2)
>>> uniq2([a, b])
>>> b.arg = 1
>>> print(a == b)
True
>>> print(hash(a) == hash(b))
False
>>> print({a, b})
set([<__main__.HashCache object at 0x102c32c50>, <__main__.HashCache object at 0x102c32c10>])
>>> print(uniq([a, b]))
[<__main__.HashCache object at 0x102c32c50>]
>>> print(uniq2([a, b]))
[<__main__.HashCache object at 0x102c32c50>, <__main__.HashCache object at 0x102c32c10>]
>>> a = HashCache(1)
>>> b = HashCache(2)
>>> uniq([a, b])
>>> b.arg = 1
>>> print(a == b)
True
>>> print(hash(a) == hash(b))
True
>>> print({a, b})
set([<__main__.HashCache object at 0x102c32c90>])
>>> print(uniq([a, b]))
[<__main__.HashCache object at 0x102c32bd0>]
>>> print(uniq2([a, b]))
[<__main__.HashCache object at 0x102c32bd0>]
The moral of this final example is that if you are going to cache something, that something had better be immutable.
## Conclusion
The conclusion is this: don't mess with hashing. The two invariants above are important. Let's restate them here,
1. The hash of an object must not change across the object's lifetime (in other words, a hashable object should be immutable).
2. a == b implies hash(a) == hash(b) (note that the reverse might not hold in the case of a hash collision).
If you don't follow these rules, you will run into very subtle issues, because very basic Python operations expect these invariants.
If you want to be able to mutate an object's properties, you have two options. First, make the object unhashable (set __hash__ = None). You won't be able to use it in sets or as keys to a dictionary, but you will be free to change the object in-place however you want.
A second option is to make all mutable properties non-dependent on hashing or equality testing. This option works well if you just want to cache some internal state that doesn't inherently change the object. Both __eq__ and __hash__ should remain unchanged by changes to this state. You may also want to make sure you use proper getters and setters to prevent modification of internal state that equality testing and hashing does depend on.
If you choose this second option, however, be aware that Python considers it fair game to swap out two identical immutable (i.e., hashable) objects at any time. If a == b and a is hashable, Python (and Python libraries) are free to replace a with b anywhere. For example, Python uses an optimization on strings called interning, where common strings are stored only once in memory. A similar optimization is used in CPython for small integers. If store something on a but not b and make a's hash ignore that data, you may find that some function that should return a may actually return b. For this reason, I generally don't recommend this second option unless you know what you are doing.
Finally, to keep invariant 2, here are some tips:
• Make sure that the parts of the object that you use to compare equality are not themselves mutable. If they are, then your object cannot itself be immutable. This means that if a == b depends on a.attr == b.attr, and a.attr is a list, then you will need to use a tuple instead (if you want a to be hashable).
• You don't have to invent a hash function. If you find yourself doing bitshifts and XORs, you're doing it wrong. Reuse Python's builtin hashable objects. If the hash of your object should depend on the hash of a and b, define __hash__ to return hash((a, b)). If the order of a and b does not matter, use hash(frozenset([a, b])).
• Don't cache something unless you know that the entire cached state will not be changed over the lifetime of the cache. Hashable objects are actually great for caches. If they properly satisfy invariant 1, and all the state that should be cached is part of the hash, then you will not need to worry. And the best part is that you can just use dict for your cache.
• Unless you really need the performance or memory gains, don't make your objects mutable. This makes programs much harder to reason about. Some functional programming languages take this idea so far that they don't allow any mutable objects.
• Don't worry about the situation where hash(a) == hash(b) but a != b. This is a hash collision. Unlike the issues we looked at here, hash collisions are expected and checked for in Python. For example, our HashToOne object from the beginning will always hash to 1, but different instances will compare unequal. We can see that the right thing is done in every case with them.
>>> a = HashToOne()
>>> b = HashToOne()
>>> a == b
False
>>> hash(a) == hash(b)
True
>>> {a, b}
{<__main__.HashToOne at 0x102c32a10>, <__main__.HashToOne at 0x102c32cd0>}
>>> uniq([a, b])
[<__main__.HashToOne at 0x102c32cd0>, <__main__.HashToOne at 0x102c32a10>]
>>> uniq2([a, b])
[<__main__.HashToOne at 0x102c32cd0>, <__main__.HashToOne at 0x102c32a10>]
The only concern with hash collisions is that too many of them can remove the performance gains of dict and set.
• Conversely, if you are writing something that uses an object's hash, remember that hash collisions are possible and unavoidable.
A classic example of a hash collision is -1 and -2. Remember I mentioned above that small integers hash to themselves:
>>> hash(1)
1
>>> hash(-3)
-3
The exception to this is -1. The CPython interpreter uses -1 as an error state, so -1 is not a valid hash value. Hence, hash(-1) can't be -1. So the Python developers picked the next closest thing.
>>> hash(-1)
-2
>>> hash(-2)
-2
If you want to check if something handles hash collisions correctly, this is a simple example. I should also note that the fact that integers hash to themselves is an implementation detail of CPython that may not be true in alternate Python implementations.
• Finally, we didn't discuss this much here, but don't assume that the hash of your object will be the same across Python sessions. In Python 3.3 and up, hash values of strings are randomized from a value that is seeded when Python starts up. This also affects any object whose hash is computed from the hash of strings. In Python 2.7, you can enable hash randomization with the -R flag to the interpreter. The following are two different Python sessions.
>>> print(hash('a'))
-7750608935454338104
>>> print(hash('a'))
8897161376854729812
## December 19, 2015
#### "Doing Math with Python" by Amit Saha: Book Review
Note: No Starch Press has sent me a copy of this book for review purposes.
SHORT VERSION: Doing Math with Python is well written and introduces topics in a nice, mathematical way. I would recommend it for new users of SymPy.
Doing Math with Python by Amit Saha is a new book published by No Starch Press. The book shows how to use Python to do high school-level mathematics. It makes heavy use of SymPy in many chapters, and this review will focus mainly on those parts, as that is the area I have expertise in.
The book assumes a basic understanding of programming in Python 3, as well as the mathematics used (although advanced topics are explained). No prior background in the libraries used, SymPy and matplotlib, is assumed. For this reason, this book can serve as an introduction them. Each chapter ends with some programming exercises, which range from easy exercises to more advanced ones.
The book has seven chapters. In the first chapter, "Working with numbers", basic mathematics using pure Python is introduced (no SymPy yet). It should be noted that Python 3 (not Python 2) is required for this book. One of the earliest examples in the book (3/2 == 1.5) will not work correctly without it. I applaud this choice, although I might have added a more prominent warning to wary users. (As a side note, in the appendix, it is recommended to install Python via Anaconda, which I also applaud). This chapter also introduces the fractions module, which seems odd since sympy.Rational will be implicitly used for rational numbers later in the text (to little harm, however, since SymPy automatically converts fractions.Fraction instances to sympy.Rational).
In all, this chapter is a good introduction to the basics of the mathematics of Python. There is also an introduction to variables and strings. However, as I noted above, one should really have some background with basic Python before reading this book, as concepts like flow control and function definition are assumed (note: there is an appendix that goes over this).
Chapters 2 and 3 cover plotting with matplotlib and basic statistics, respectively. I will not say much about the matplotlib chapter, since I know only basic matplotlib myself. I will note that the chapter covers matplotlib from a (high school) mathematics point of view, starting with a definition of the Cartesian plane, which seems a fitting choice for the book.
Chapter 3 shows how to do basic statistics (mean, median, standard deviation, etc.) using pure Python. This chapter is clearly meant for pedagogical purposes for basic statistics, since the basic functions mean, median, etc. are implemented from scratch (as opposed to using numpy.mean or the standard library statistics.mean). This serves as a good introduction to more Python concepts (like collections.Counter) and statistics.
Note that the functions in this chapter assume that the data is the entire population, not a sample. This is mentioned at the beginning of the chapter, but not elaborated on. For example, this leads to a different definition of variance than what might be seen elsewhere (the calculate_variance used in this chapter is statistics.pvariance, not statistics.variance).
It is good to see that a numerically stable definition of variance is used here (see PEP 450 for more discussion on this). These numerical issues show why it is important to use a real statistics library rather than a home grown one. In other words, use this chapter to learn more about statistics and Python, but if you ever need to do statistics on real data, use a statistics library like statistics or numpy. Finally, I should note that this book appears to be written against Python 3.3, whereas statistics was added to the Python standard library in Python 3.4. Perhaps it will get a mention in future editions.
Chapter 4, "Algebra and Symbolic Math with SymPy" starts the introduction to SymPy. The chapter starts similar to the official SymPy tutorial in describing what symbolics is, and guiding the reader away from common misconceptions and gotchas. The chapter does a good job of explaining common gotchas and avoiding antipatterns.
This chapter may serve as an alternative to the official tutorial. Unlike the official tutorial, which jumps into higher-level mathematics and broader use-cases, this chapter may be better suited to those wishing to use SymPy from the standpoint of high school mathematics.
My only gripes with this chapter, which, in total, are minor, relate to printing.
1. The typesetting of the pretty printing is inconsistent and, in some cases, incorrect. Powers are printed in the book using superscript numbers, like
x²
However, SymPy prints powers like
2
x
even when Unicode pretty printing is enabled. This is a minor point, but it may confuse users. Also, the output appears to use ASCII pretty printing (mixed with superscript powers), for example
x² x³ x⁴ x⁵
x + -- + -- + -- + --
2 3 4 5
Most users will either get MathJax printing (if they are using the Jupyter notebook), or Unicode printing, like
2 3 4 5
x x x x
x + ── + ── + ── + ──
2 3 4 5
Again, this is a minor point, but at the very least the correct printing looks better than the fake printing used here.
2. In line with the previous point, I would recommend telling the user to start with init_printing(). The function is used once to change the order of printing to rev-lex (for series printing). There is a link to the tutorial page on printing. That page goes into more depth than is necessary for the book, but I would recommend at least mentioning to always call init_printing(), as 2-D printing can make a huge difference over the default str printing, and it obviates the need to call pprint.
Chapter 5, "Playing with Sets and Probability" covers SymPy's set objects (particularly FiniteSet) to do some basic set theory and probability. I'm excited to see this in the book. The sets module in SymPy is relatively new, but quite powerful. We do not yet have an introduction to the sets module in the SymPy tutorial. This chapter serves as a good introduction to it (albeit only with finite sets, but the SymPy functions that operate on infinite sets are exactly the same as the ones that operate on finite sets). In all, I don't have much to say about this chapter other than that I was pleasantly surprised to see it included.
Chapter 6 shows how to draw geometric shapes and fractals with matplotlib. I again won't say much on this, as I am no matplotlib expert. The ability to draw leaf fractals and Sierpiński triangles with Python does look entertaining, and should keep readers enthralled.
Chapter 7, "Solving Calculus Problems" goes into more depth with SymPy. In particular, assumptions, limits, derivatives, and integrals. The chapter alternates between symbolic formulations using SymPy and numeric calculations (using evalf). The numeric calculations are done both for simple examples and more advanced things (like implementing gradient descent).
One small gripe here. The book shows that
from sympy import Symbol
x = Symbol('x')
if (x + 5) > 0:
print('Do Something')
else:
print('Do Something else')
raises TypeError at the evaluation of (x + 5) > 0 because its truth value cannot be determined. The solution to this issue is given as
x = Symbol('x', positive=True)
if (x + 5) > 0:
print('Do Something')
else:
print('Do Something else')
Setting x to be positive via Symbol('x', positive=True) is correct, but even in this case, evaluating an inequality may still raise a TypeError (for example, if (x - 5) > 0). The better way to do this is to use (x + 5).is_positive. This would require a bit more discussion, especially since SymPy uses a three-valued logic for assumptions, but I do consider "if <symbolic inequality>" to be a SymPy antipattern.
I like Saha's approach in this chapter of first showing unevaluated forms (Limit, Derivative, Integral), and then evaluating them with doit(). This puts users in the mindset of a mathematical expression being a formula which may or may not later be "calculated". The opposite approach, using the function forms, limit, diff, and integrate, which evaluate if they can and return an unevaluated object if they can't, can be confusing to new users in my experience. A common new SymPy user question is (some form of) "how do I evaluate an expression?" (the answer is doit()). Saha's approach avoids this question by showing doit() from the outset.
I also like that this chapter explains the gotcha of math.sin(Symbol('x')), although I personally would have included this earlier in the text.
(Side note: now that I look, these are both areas in which the official tutorial could be improved).
### Summary
This book is a good introduction to doing math with Python, and, for the chapters that use it, a good basic introduction to SymPy. I would recommend it to anyone wishing to learn SymPy, but especially to anyone whose knowledge of mathematics may preclude them from getting the most out of the official SymPy tutorial.
I imagine this book would work well as a pedagogical tool, either for math teachers or for self-learners. The exercises in this book should push the motivated to learn more.
I have a few minor gripes, but no major issues.
You can purchase this book from the No Starch Press website, both as a print book or an ebook. The website also includes a sample chapter (chapter 1), code samples from the book, and exercise solutions.
## October 21, 2015
### The excitement
People travelling from all over the country(and outside!) to Bangalore for a conference on a weekend, Yay!
We were really excited about the workshop and devsprint that the SymPy team was about to deliver. More so excited we were about the fact that we will finally be meeting one another.
### Day 0
#### DevSprint
The first day of the conference kicked off with the devsprints. That morning the whole team met up, present there were Harsh, Sudhanshu, AMiT, Sartaj, Shivam and Sumith . Abinash couldn't make it but he was there in spirit :)
We all got our awesome SymPy tees and stickers, thanks to AMiT.
Having got alloted mentoring space in the devsprint, basic introduction of SymPy was given by Sumith. Some interesting mentoring spaces were CPython by Kushal Das, Data Science by Bargava. The whole list is here
We got the participants started off with setting up the development workflow of SymPy and then they started working on the internals. We alloted bugs to many and directed them to the solution. Sadly, not many issues could alloted or closed due to the really poor internet connection at the conference hall but it was cool interacting with the enthusiasts. We also happened to meet Saurabh Jha, a contributor to SymPy who had worked on Linear Algebra and he helped us out with the devsprint.
#### Workshop
The workshops ran in two and a half hour slot. This was conducted by Harsh, Sudhanshu, AMiT and Sumith.
Sumith started off with introduction to SymPy. Then we spent some helping everyone setup their systems with SymPy and IPython notebooks, even though prior instructions were given, we had to do this so as to get everyone on level ground.
Harsh took first half of the content and exercises
Sudhanshu took the second half, while AMiT and Sumith were helping out the participants with their queries.
We distributed t-shirts to all the participants at the end. Thanks to all those who participated, we had an awesome time.
Day 0 ended with all of us wrapping off the devsprint.
After having dinner together, everybody headed back looking forward to the coming two days of the conference.
### Day 1
Day 1 started off with a keynote by Dr Ajith Kumar B.P followed by multiple talks and lightning talks.
More interesting than the scheduled talks were the conversations that we had with people present in the conference. Exchanging views, discussing on a common point of interest was surely one of the best experience that I had.
#### Lightning talk
Shivam delivered a lightning talk titled Python can be fast. Here, he stressed on the fact that implementing correct data structures is important and Python is not always to be blamed. He gave relevant examples from his summers work at SymPy.
By this point, we had reached considerable audience in the conference and lot of them were really interested in SymPy. We had a lot of younger participants who were enthusiastic about SymPy as it participates in GSoC, some of them also sent in patches.
### Day 2
Day 2 started off with a keynote by Nicholas H.Tollervey.
#### Talk
Sumith delivered a talk titled SymEngine: The future fast core of computer algebra systems. The content included SymPy, SymEngine and the interface. Some light was shed on Python wrappers to C++ code. Thanks to all the audience present there.
As the day was closing in, Harsh and Shivam had to leave to catch their flights.
#### Open Space
After multiple people requesting to help them get started with SymPy, we decided to conduct an open space.
Open spaces are a way for people to come together to talk about topics, ideas or whatever they want. All people had to do is just show up :) Present there were Sudhanshu, Sartaj, AMiT and Sumith. Sartaj luckily came up with a solveset bug. We had a live show of how bug-fixing is done. Filing an issue, fixing the code, writing tests and sending in a PR was all demonstrated.
### Closing thoughts
Conferences are the perfect place to discuss and share knowledge and ideas. The people present there were experts in their area of interests and conversations with them is a cool experience. Meeting the team was something that we were looking forward right from the start.
Missing Sartaj and Abinash
Discussing SymPy and the gossips in person is a different experience altogether. I'll make sure to attend all the conference that I possibly can from hereon.
Be back for more
## October 05, 2015
#### Lessons learned from working at Continuum
Last Friday was my last day working at Continuum Analytics. I enjoyed my time at the company, and wish success to it, but the time has come for me to move on. Starting later this year, I will start working with Anthony Scopatz at his new lab ERGS at the University of South Carolina.
During my time at Continuum (over two years if you count a summer internship), I primarily worked on the Anaconda distribution and its open source package manager, conda. I learned a lot of lessons in that time, and I'd like to share some of them here.
In no particular order:
• Left to their own devices, people will make the minimal possible solution to packaging. They won't try to architect something. The result will be over-engineered, specific to their use-case, and lack reproducibility.
• The best way to ensure that some software has no bugs is for it to have many users.
• Be wary of the "software would be great if it weren't for all the users" mentality (cf. the previous point).
• Most people don't code defensively. If you are working on a project that requires extreme stability, be cautious of contributions from those outside the development team.
• Hostility towards Windows and Windows users doesn't help anyone.
• For a software updater, stability is the number one priority. If the updater breaks, how can a fix be deployed?
• Even if you configure your program to update itself every time it runs you will still get bug reports with arbitrarily old versions.
• Separating components into separate git repositories leads to a philosophical separation of concerns among the components.
• Everyone who isn't an active developer on the project will ignore this separation and open issues in the wrong repo.
• Avoid object oriented programming when procedural programming will do just fine.1
• Open source is more about the open than the source. Develop things in the open, and you will create a community that respects you.1
• Academics (often) don't know good software practices, nor good licensing practices.
• Neither do some large corporations.
• Avoid over-engineering things.
• Far fewer people than I would have thought understand the difference between hard links and soft links.2
• Changelogs are useful.
• Semantic versioning is over-hyped.
• If you make something and release it, the first version should be 1.0 (not 0.1 or 0.0.1).
• Getting a difficult package to compile is like hacking a computer. All it takes is time.
• It doesn't matter how open source friendly your business is, there will always be people who will be skeptical and point their fingers at the smallest proprietary components, fear monger, and overgeneralize unrelated issues into FUD. These people should generally be ignored.
• Don't feed the trolls.1
• People constantly misspell the name of Apple's desktop operating system.
• People always assume you have way more automation than you really do.
• The Python standard library is not a Zen garden. Some parts of it are completely broken, and if you need to rely on them, you'll have to rewrite them. shutil.rmtree on Windows is one example of this.
• Linux is strictly backwards compatible. Windows is strictly forwards compatible. 3
• On Linux, things tend to be very simple. On Windows, things tend to be very complicated.
• I can't decide about OS X. It lies somewhere in between.
• Nobody uses 32-bit Linux. Why do we even support that?
• People oversimplify the problem of solving for package dependencies in their heads. No one realizes that it's meaningless to say something like "the dependencies of NumPy" (every build of every version of NumPy has its own set of dependencies, which may or may not be the same).
• Writing a set of rules and a solver to solve against those rules is relatively easy. Writing heuristics to tell users why those rules are unsolvable when they are is hard.
• SAT solvers solve NP-complete problems in general, but they can be very fast to solve common case problems. 1
• Some of the smartest people I know, who otherwise make very rational and intelligent decisions, refuse to update to Python 3.
• As an introvert, the option of working from home is great for maintaining sanity.
• If living in Austin doesn't turn you into a foodie you will at least gain a respect for them.
• Twitter, if used correctly, is a great way to interact with your users.
• Twitter is also a great place to learn new things. Follow John Cook and Bret Victor.
• One of the best ways to make heavily shared content is to make it about git (at least if you're an expert).
• A good optimization algorithm avoids getting caught in local maxima by trying different parts of the search space that initially appear to be worse. The same approach should be taken in life.
#### Footnotes
1. These are things that I already knew, but were reiterated.
2. If you are one of those people, I have a small presentation that explains the difference here
3. These terms can be confusing, and I admit I got this backwards the first time I wrote this. According to Wikipedia, forwards compatible means a system can accept input intended for a later version of itself and backwards compatible means a system can accept input intended for an earlier version of itself.
What I specifically mean here is that in terms of building packages for Linux or Windows, for Linux, you should build a package on the oldest version that you wish to support. That package will work on newer versions of Linux, but not anything older (generally due to the version of libc you are linked against).
On the other hand, on Windows, you can can compile things on the newest version (I used Windows 8 on my main Windows build VM), and it will work on older versions of Windows like XP (as long as you ship the right runtime DLLs). This is also somewhat confusing because Windows tends to be both forwards compatible and backwards compatible.
## August 27, 2015
#### GSoC : Throughout in SymPy # Wrap Up
Hi! I am Amit Kumar (@aktech), a final year undergraduate student of Mathematics & Computing at Delhi Technological University. This post summarizes my experience working on GSoC Project on Improving Solvers in SymPy.
## Introduction
I first stumbled upon SymPy last year, while looking for some Open Source Computer Algebra Systems to contribute. I didn't had any Open Source experience by then, So SymPy was an Ideal Choice for getting into the beautiful world of Open Source. I wasn't even Proficient in Python so at first it was little difficult for me, but Thanks! to the beauty of the language itself, which makes anyone comfortable with it in no time. Soon, I decided to participate into Google Summer of Code under SymPy. Though at this point of time, I didn't had decided about the project, I would like to work in Summers.
##### First Contribution
I started learning the codebase & made my first contribution by Fixing an EasyToFix bug in solvers.py through the PR #8647, Thanks to @smichr for helping me making my first ever open source contribution. After my first PR, I started looking for more things to work and improve upon and I started commiting quite often. During this period I learnt the basics of Git, which is one of the most important tools for contributing to Open Source.
## Project Ideas
When I got a bit comfortable with the basics of SymPy & contributing to open source in general, I decided to chose an area (module) to concentrate on. The modules I was interested in, were Solvers and Integrals, I was literally amazed by the capability of a CAS to integrate and solve equations. I decided to work on one of these in the summers. There was already some work done on the Integrals module in 2013, which was yet to be Merged. I wasn't well versed about the Manuel Bronsteins works on Methods of Integration in a Computer Algebra System, so I was little skeptical about working on Integrals. The Solvers module attracted me due it's awesome capabilities, I found it one of the most useful features of any Computer Algebra Systems, So I finally decided to work on Solvers Module.
## Coding
I was finally accepted to work on Solvers this summer. I had my exams during the community bonding period, So I started almost in the first week of Coding Period. I made a detailed timeline of my work in summers, but through my experience I can say that's seldom useful. Since, you never know what may come out in between you and your schedule. As an instance PR #9540, was a stumbling block in lot of my work, which was necessary to fix for proceeding ahead.
#### Phase I (Before Mid Terms)
When coding period commenced, I started implementing the linsolve, the linear system solver which is tolerant to different input forms & can solve almost all forms of linear systems. At the start I got lot of reviews from Jason and Harsh, regarding improvement of the function. One of the most important thing I learnt which they focused on was Test Driven Development, they suggested me to write extensive tests before implementing the logic, which helps in reducing the problems in visualizing the final implementaion of the function and avoids API changes.
After linsolve I implemented ComplexPlane, which is basically Complex Sets. It is useful for representing infinite solutions in argand plane. While implementing this I learnt that chosing the right API is one of the most important factors while designing aa important functionality. To know more about it, see my blog post here. During this period I also worked on fixing Intersection's of FiniteSet with symbolic elements, which was a stumbling block.
#### Phase II (After Mid Terms)
After successfully passing the Mid Terms, I started working more on robustness of solveset, Thanks to @hargup for pointing out the motivation for this work. The idea is to tell the user about the domain of solution returned. Simplest motivation was the solution of the equation |x| - n, for more info see my blog post here. I also worked on various trivial and non trivial bugs which were more or less blocking my work.
Then I started replacing solve with solveset in the codebase, the idea was to make a smooth transition between solve and solveset, while doing this Jason pointed out that I should not remove solve tests, which can make solve vunerable to break, So I reverted removing of solve tests. Later we decided to add domain argument to solveset, which would help the user in easily dictating to solveset about what solutions they are interested in, thanks to @shivamvats for doing this in a PR. After the decision of adding domain argument, Harsh figured out that, as of now solveset is vunerable to API changes, so it's not the right time to replace solve with solveset, so we decided to halt this work, as a result I closed my several PR's unmerged.
I also worked on Implementing Differential Calculus Method such as is_increasing etc, which is also Merged now. Meanwhile I have been working on documenting solveset, because a lot of people don't know what we are doing & why we are doing, so It's very important to answer all those subtle questions which may come up in there mind, So we decided to create a FAQ style documentation of solveset see PR #9500. This is almost done, some polishing is needed. It would be Merged soon.
During this period apart from my work, there are some other works as well which is worth mentioning, one of them is ConditionSet by Harsh which serves the purpose of unevaluated solve object and even much more than that for our future endeavours with solveset. Others being codomain & not_empty by Gaurav @gxyd which are also important additions to SymPy.
TODO: Probably, this will need a comprehensive post, I would write soon.
## Future Plans
Recently Harsh came up with an idea of tree based solver. Since now ConditionSet has been introduced, the solving of equations can be seen as set transformation, We can do the following things to solve equations (abstract View):
• Apply Various Set Transformations on the given Set.
• Define a Metric of the usability or define a notion of better solution over others.
• Different Transformation would be the nodes of the tree.
As a part of this I worked on implementing a general decomposition function decompogen in PR #9831, It's almost done, will be merged soon.
I plan for a long term association with SymPy, I take the full responsibilty of my code. I will try to contribute as much as I can particularly in sets and solvers module.
## Conclusion
On a concluding note, I must say that getting the opportunity to work on SymPy this summer has been one of the best things that could happen to me. Thanks to Harsh for helping me all my endeavour, also for being one of the best mentors I could get. I would like to thank Sean as well who from his busy schedule took up the time to attend meetings, hangouts and for doing code reviews. Also thanks to Chris Smith who is the most gentle and helpful person I have ever seen, he is one of the reasons I started contributing to SymPy. Thanks to Aaron, Ondrej, and last but not the least my fellow GSoCer's at SymPy leosartaj, debugger22, sumith1896, shivamvats, abinashmeher999. Special Thanks to whole SymPy team and Community for a wonderful collaboration experience. Kudos!
## August 23, 2015
#### GSoc 2015 Week 12 & 13
This week we announced the release of SymEngine on Sage list. For that, I made some changes into the build system for versioning and to use SymEngine from other C/C++ projects.
First, SymEngineConfig.cmake would output a set of flags, imported dependencies, etc. SymEngineConfigVersion.cmake would check that the version is compatible and if the 32/64-bitness is correct of the SymEngine project and the other CMake project. When SymEngine is only built, then these files would be at the root level and when installed they would be at /lib/cmake/symengine. An excerpt from the wiki page, I wrote at, https://github.com/sympy/symengine/wiki/Using-SymEngine-from-a-Cpp-project
##### Using SymEngine in another CMake project
To use SymEngine from another CMake project include the following in yourCMakeLists.txt file
find_package(SymEngine 0.1.0 CONFIG)
You can give the path to the SymEngine installation directory if it was installed to a non standard location by,
find_package(SymEngine 0.1.0 CONFIG PATHS /path/to/install/dir/lib/cmake/symengine)
Alternatively, you can give the path to the build directory.
find_package(SymEngine 0.1.0 CONFIG PATHS /path/to/build/dir)
An example project would be,
##### Python wrappers
There was a suggestion to make the Python wrappers separate, so that in a distribution like Gentoo, the package sources can be distributed separately.
So, I worked on the Python wrappers to get them to be built independently or with the main repo. Now, the python wrappers directory along with the setup.py file from the root folder can be packaged and they would work without a problem.
## August 21, 2015
#### GSoC - Wrapping Up
From not knowing anything considerable in programming and open source to reaching this level, has been a wonderful ride. Google Summer of Code has been full of ups and downs but none the less exhilarating.
Didn't even know at the time of my first patch that I would be so closely associated to SymEngine and the team members just a few months down the line.
After a couple of bug fixes, my first major contribution came in as the UnivariatePolynomial class. The biggest challenge here was implementing multiplication using Kronecker's trick. This was my first experience of implementing an algorithm from a paper. The UnivariatePolynomial class shaped up really well, there are minor improvements that can be made and some optimizations that could be done. But standalone, it is a fully functional class.
Once this was done, my next aim was to optimize multiplication to reach Piranha's speed. This was a very enriching period and the discussions with the team members and Francesco was a great learning experience. En route, I also got a chance to explore Piranha under the hood and trouble Francesco for reasoning why certain things were the way they. End of this, we were able to hit Piranha's speed. I remember I was the happiest I had been in days.
Once we hit the lower level speed, we decided to hard-depend on Piranha for Polynomial. This meant adding Piranha as SymEngine dependence. Here I had to learnt how to write and wrote CMake files as well as setting up Piranha testing in Travis meant writing shell and CI scripts. We faced a problem here, resolution to which meant implementing Catch as a testing framework for SymEngine. Catch is an awesome library and community is very pleasant. Implementing this was a fun work too. Also the high level value class Expression was implemented in SymEngine, mostly taken from Francesco's work.
I then started writing the Polynomial class, most of the work is done here(597). But the design is not very well thought of. I say this because once ready this can only support integer(ZZ) domain. But we will also need rational(QQ) and expression(EX). The code will be of much use but we have been discussing a much cleaner implementation with Ring class. Most of the progress and the new design decisions are being documented here.
Second half has been really rough, with the university running. Ondrej has been really patient with me, I thank him for that. The bond that I made with him through mails, technical and non technical, has really grown strong. He has allowed me to continue the work the Polynomial and implement more details and algorithms in future. I am looking forward to that as long term association is an amazing thing and I am proud to be responsible for the Polynomial module in SymEngine.
I am indebted to my mentor Ondrej Certik and all the SymEngine and SymPy developers who were ever ready to help and answer my silliest of questions. It’s an amazing community and they are really very helpful and always appreciated even the smallest of my contributions. The best part of SymEngine is you know contributors one to one and it is like a huge family of learners. I am looking forward to meeting the team (atleast SymPy India in near future).
Google Summer of Code has been one exhilarating journey. I don't know if I was a good programmer then or a good programmer now but I can say that I am a better programmer now.
This is just the beginning of the ride, GSoC a stepping stone.
There will be blog posts coming here, so stay tuned. Till then,
Bye
## August 20, 2015
#### GSoC: Update Week-10, 11 and 12
This is the 12th week. Hard deadline is this Friday. GSoC is coming to an end leaving behind a wonderful experience. Well here's how my past few weeks went.
### Highlights:
Work on Formal Power Series:
• #9776 added the fps method in Expr class. Instead of fps(sin(x)), user can now simply do sin(x).fps().
• #9782 implements some basic operations like addition, subtraction on FormalPowerSeries. The review is almost complete and should get merged soon.
• #9783 added the sphinx docs for the series.formal module.
• #9789 replaced all the solve calls in the series.formal with the new solveset function.
Work on computing limits of sequences:
This is the second part of my GSoC project aiming to implement the algorithm for computing limits of sequences as described in the poster Computing Limits Of Sequences by Manuel Kauers.
• #9803 implemented the difference_delta function. difference_delta(a(n), n) is defined as a(n + 1) - a(n). It is the discrete analogous of differentiation.
• #9836 aims at completing the implementation of the algorithm. It is still under review and hopefully it will be in soon.
|
2016-05-24 13:39:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2976064383983612, "perplexity": 1671.4151883428528}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049270798.25/warc/CC-MAIN-20160524002110-00166-ip-10-185-217-139.ec2.internal.warc.gz"}
|
https://w3w.academickids.com/encyclopedia/index.php/Generating_trigonometric_tables
|
Generating trigonometric tables
fr:Construire des tables trigonométriques Tables of trigonometric functions are useful in a number of areas. Before the existence of pocket calculators, trigonometric tables were essential for navigation, science and engineering. The calculation of mathematical tables was an important area of study, which led to the development of the first mechanical computing devices.
Modern computers and pocket calculators now generate trigonometric function values on demand, using special libraries of mathematical code. Often, these libraries use pre-calculated tables internally, and compute the required value by using an appropriate interpolation method.
Interpolation of simple look-up tables of trigonometric functions are still used in computer graphics, where accurate calculations are either not needed, or cannot be made fast enough.
Another important application of trigonometric tables and generation schemes is for fast Fourier transform (FFT) algorithms, where the same trigonometric function values (called twiddle factors) must be evaluated many times in a given transform, especially in the common case where many transforms of the same size are computed. In this case, calling generic library routines every time is unacceptably slow. One option is to call the library routines once, to build up a table of those trigonometric values that will be needed, but this requires significant memory to store the table. The other possibility, since a regular sequence of values is required, is to use a recurrence formula to compute the trigonometric values on the fly. Significant research has been devoted to finding accurate, stable recurrence schemes in order to preserve the accuracy of the FFT (which is very sensitive to trigonometric errors).
Contents
Historically, the earliest method by which trigonometric tables were computed, and probably the most common until the advent of computers, was to repeatedly apply the half-angle and angle-addition trigonometric identities starting from a known value (such as sin(π/2)=1, cos(π/2)=0). The relevant identities, the first recorded derivation of which is by Ptolemy, are:
[itex]\cos\left(\frac{x}{2}\right) = \pm\, \sqrt{\frac{1 + \cos(x)}{2}}[itex]
[itex]\sin\left(\frac{x}{2}\right) = \pm\, \sqrt{\frac{1 - \cos(x)}{2}}[itex]
[itex]\sin(x \pm y) = \sin(x) \cos(y) \pm \cos(x) \sin(y)\,[itex]
[itex]\cos(x \pm y) = \cos(x) \cos(y) \mp \sin(x) \sin(y)\,[itex]
Various other permutations on these identities are possible (for example, the earliest trigonometric tables used not sine and cosine, but sine and versine).
A quick, but inaccurate, approximation
A quick, but inaccurate, algorithm for calculating a table of N approximations sn for sin(2πn/N) and cn for cos(2πn/N) is:
s0 = 0
c0 = 1
sn+1 = sn + d × cn
cn+1 = cnd × sn
for n = 0,...,N-1, where d = 2π/N.
This is simply the Euler method for integrating the differential equation:
[itex]ds/dt = c[itex]
[itex]dc/dt = -s[itex]
with initial conditions s(0) = 0 and c(0) = 1, whose analytical solution is s = sin(t) and c = cos(t).
Unfortunately, this is not a useful algorithm for generating sine tables because it has a significant error, proportional to 1/N.
For example, for N = 256 the maximum error in the sine values is ~0.061 (s202 = −1.0368 instead of −0.9757). For N = 1024, the maximum error in the sine values is ~0.015 (s803 = −0.99321 instead of −0.97832), about 4 times smaller. If the sine and cosine values obtained were to be plotted, this algorithm would draw a logarithmic spiral rather than a circle.
A better, but still imperfect, recurrence formula
A simple recurrence formula to generate trigonometric tables is based on Euler's formula and the relation:
[itex]e^{i(\theta + \Delta\theta)} = e^{i\theta} \times e^{i\Delta\theta}[itex]
This leads to the following recurrence to compute trigonometric values sn and cn as above:
c0 = 1
s0 = 0
cn+1 = wr cnwi sn
sn+1 = wi cn + wr sn
for n = 0, ..., N − 1, where wr = cos(2π/N) and wi = sin(2π/N). These two starting trigonometric values are usually computed using existing library functions (but could also be found e.g. by employing Newton's method in the complex plane to solve for the primitive root of zN − 1).
This method would produce an exact table in exact arithmetic, but has errors in finite-precision floating-point arithmetic. In fact, the errors grow as O(ε N) (in both the worst and average cases), where ε is the floating-point precision.
A significant improvement is to use the following modification to the above, a trick (due to Singleton) often used to generate trigonometric values for FFT implementations:
c0 = 1
s0 = 0
cn+1 = cn − (αcn + β sn)
sn+1 = sn + (β cn − α sn)
where α = 2 sin2(π/N) and β = sin(2π/N). The errors of this method are much smaller, O(ε √N) on average and O(ε N) in the worst case, but this is still large enough to substantially degrade the accuracy of FFTs of large sizes.
References
• Carl B. Boyer, A History of Mathematics, 2nd ed. (Wiley, New York, 1991).
• Manfred Tasche and Hansmartin Zeuner, "Improved roundoff error analysis for precomputed twiddle factors," J. Computational Analysis and Applications 4 (1), 1-18 (2002).
• James C. Schatzman, "Accuracy of the discrete Fourier transform and the fast Fourier transform," SIAM J. Sci. Comput. 17 (5), 1150-1166 (1996).
• Art and Cultures
• Countries of the World (http://www.academickids.com/encyclopedia/index.php/Countries)
• Space and Astronomy
|
2021-06-25 00:58:47
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9111623764038086, "perplexity": 1612.9308668493932}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488560777.97/warc/CC-MAIN-20210624233218-20210625023218-00528.warc.gz"}
|
https://luckytoilet.wordpress.com/2010/09/03/stepping-stones-solution-with-young-tableaux/
|
## Stepping Stones: solution with Young tableaux
About a year ago or so, I created a math problem and submitted it to CsTutoringCenter. Titled Stepping Stones, my problem statement went like this:
In a certain river, there are a bunch of stepping stones arranged from one side to the other. A very athletic person can cross the river by jumping on these stepping stones, one at a time.
A stepping stone is big enough for only one person, and the gap between two stepping stones is small enough that it is possible to jump between two adjacent stepping stones.
You are an army commander trying to get a group of soldiers across this river (using these stepping stones). Initially your n soldiers are placed on the first n stepping stones. Your task is to get all of them onto the last n stepping stones.
For example, here are the five possible ways to get a group of two soldiers across a river with five stepping stones:
1) ##--- #-#-- -##-- -#-#- --##- --#-# ---##
2) ##--- #-#-- -##-- -#-#- -#--# --#-# ---##
3) ##--- #-#-- #--#- -#-#- --##- --#-# ---##
4) ##--- #-#-- #--#- -#-#- -#--# --#-# ---##
5) ##--- #-#-- #--#- #---# -#--# --#-# ---##
Let C(k,n) be the number of ways of which n soldiers can cross a river with k stepping stones. In the example, C(5,2) = 5.
Find C(50,12) mod 987654321.
Of course, small values of $C(k,n)$ may be bruteforced by a computer. But $C(50,12)$ is well out of reach of brute force, and substantial mathematics is needed. Or for the lazy, it is possible to find small values by brute force, then look the sequence up on OEIS to find the formula.
### Bijection to a matrix representation
We find that any instance of the problem can be represented, or bijected to a special matrix, one where each row and column is increasing.
Let us number the soldiers in the following fashion. Let the rightmost soldier, that is, the soldier first to move, be labelled 1. The soldier behind him is labelled 2, and so on, until the last soldier to move is labelled $n$. Since the order of soldiers cannot change, each soldier moves exactly $k-n$ times.
Consider a very simple case, with 4 stones and 2 soldiers. One possible way is the first soldier moving twice, followed by the second moving twice.
This move sequence can be represented by $[1,1,2,2]$. The other sequence, and the only other sequence is $[1,2,1,2]$.
Firstly a sequence like $[1,1,1,2]$ is invalid because in a valid sequence, each soldier has to move the same number of times. Another invalid case is something like $[2,1,1,2]$, since obviously 2 cannot move the first turn. But how can you tell whether $[1,2,1,1,2,1,3,2,3,3,2,3]$ is valid or not?
It isn’t very easy to tell in sequence form. Instead we represent the sequence as a matrix form.
Let’s try some examples first, The sequence $[1,1,2,2]$ in matrix form is:
$\begin{array}{cc} 1&2 \\ 3&4 \end{array}$
The other sequence, $[1,2,1,2]$, is:
$\begin{array}{cc} 1&3 \\ 2&4 \end{array}$
Try a more complex example, $[1,2,1,1,2,1,3,2,3,3,2,3]$:
$\begin{array}{cccc} 1&3&4&6 \\ 2&5&8&11 \\ 7&9&10&12 \end{array}$
To create the matrix, first have a counter and initialize it to 1; when the first soldier moves, place the counter in the first cell that’s unfilled in the first row, and increment the counter. Now if the second soldier moves, we place the counter in the second row (first unfilled cell), and increment it again, and so on. By the time we’re through all of the soldier moves, the matrix should be nice and rectangular.
Perhaps a different explanation is more intuitive. If $A_{3,2} = 7$ (where $A$ is the matrix, $A_{3,2}$ means row 3, column 2), that means on move 7, soldier number 3 makes his move number 2.
From this interpretation, several important facts surface. The rows must be increasing, obviously, since if the row is not increasing, say 7 comes before 5, move 7 happened before move 5, which can’t be!
Less obviously, the column has to be increasing. Suppose that in some matrix, $A_{2,7}=20$, and the cell directly underneath, $A_{3,7} = 19$. In that case soldier 3 made his move 7 before soldier 2 made his move 7. This results in soldier 3 ahead of soldier 2 (or at least on the same stone)!
So with $k$ stones and $n$ soldiers, the matrix should have $n$ rows and $k-n$ columns. The $m \times n$ cells contain the numbers $1 \ldots mn$, while each row and each column is increasing. Our job is to enumerate these matrices, since such a matrix forms a 1-to-1 correspondence to a valid move sequence.
### Enumerating matrices with the hook length formula
A Young tableau is an interesting combinatorial object, based on the Ferrers diagram. From a Ferrers diagram of size $n$, a Young tableau is one where every number from $1 \ldots n$ is filled in it and all rows and all columns are increasing:
From any cell of a Young tableau, a hook is formed by extending all the way down, and all the way to the right:
The hook length of a cell is the length of its hook (including itself). In the above picture, the hook length is 5. Each cell in the tableau has a hook and a hook length.
The number of valid Young tableaux with a given shape $\lambda$ and with $n$ cells is given by the hook length formula:
$N = \frac{n!}{\prod_{x \in \lambda} \mathrm{hook}_x}$
A special case of the hook length formula can be used to enumerate rectangular Young tableaux. For instance, we have a 3*4 Young tableau. If we fill each cell with its hook length we get:
The count would then be
$\frac{12!}{6 \cdot 5 \cdot 4 \cdot 5 \cdot 4 \cdot 3 \cdot 4 \cdot 3 \cdot 2 \cdot 3 \cdot 2 \cdot 1}$
Or alternatively,
$\frac{12!}{\frac{6!}{3!} \cdot \frac{5!}{2!} \cdot \frac{4!}{1!} \cdot \frac{3!}{0!}}$
Simplifying:
$\frac{12! \cdot 0! \cdot 1! \cdot 2! \cdot 3!}{3! \cdot 4! \cdot 5! \cdot 6!}$
This can be generalized to a formula. If we have $x$ rows and $y$ columns:
$\frac{(xy)! \prod_{i=1}^{y-1}i!}{\prod_{j=x}^{x+y-1} j!}$
For $C(k,n)$, we have $n$ rows and $k-n$ columns, thus by substitution we arrive at our formula:
$C(k,n) = \frac{[n(k-n)]! \prod_{i=0}^{k-n-1}i!}{\prod_{j=n}^{k-1}j!}$
This can be used to compute $C(50,12)$, trivial to implement in Haskell or any other language.
### 2 Responses to Stepping Stones: solution with Young tableaux
1. Yessam says:
The amount of math you know blows me away. I’m a grade 12 student in Toronto at the top of my class and you lost me at matrix. Where have you learned all this stuff?
2. luckytoilet says:
Thanks. I’m not sure I know that much math (high school curriculum and some undergraduate topics) but there are plenty of resources out there for a motivated student.
|
2015-05-22 20:34:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 42, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6238067746162415, "perplexity": 804.1375737084024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207926736.56/warc/CC-MAIN-20150521113206-00206-ip-10-180-206-219.ec2.internal.warc.gz"}
|
http://flambe.ai/en/latest/autoapi/flambe/metric/dev/binary/index.html
|
# flambe.metric.dev.binary¶
## Module Contents¶
class flambe.metric.dev.binary.BinaryMetric(threshold: float = 0.5)[source]
__str__(self)[source]
Return the name of the Metric (for use in logging).
compute(self, pred: torch.Tensor, target: torch.Tensor)[source]
Compute the metric given predictions and targets
Parameters: pred (Tensor) – The model predictions target (Tensor) – The binary targets The computed binary metric float
compute_binary(self, pred: torch.Tensor, target: torch.Tensor)[source]
Compute a binary-input metric.
Parameters: pred (torch.Tensor) – Predictions made by the model. It should be a probability 0 <= p <= 1 for each sample, 1 being the positive class. target (torch.Tensor) – Ground truth. Each label should be either 0 or 1. The computed binary metric torch.float
class flambe.metric.dev.binary.BinaryAccuracy[source]
Compute binary accuracy.
|True positives + True negatives| / N
compute_binary(self, pred: torch.Tensor, target: torch.Tensor)[source]
Compute binary accuracy.
Parameters: pred (torch.Tensor) – Predictions made by the model. It should be a probability 0 <= p <= 1 for each sample, 1 being the positive class. target (torch.Tensor) – Ground truth. Each label should be either 0 or 1. The computed binary metric torch.float
class flambe.metric.dev.binary.BinaryPrecision(threshold: float = 0.5, positive_label: int = 1)[source]
Compute Binary Precision.
An example is considered negative when its score is below the specified threshold. Binary precition is computed as follows:
|True positives| / |True Positives| + |False Positives|
compute_binary(self, pred: torch.Tensor, target: torch.Tensor)[source]
Compute binary precision.
Parameters: pred (torch.Tensor) – Predictions made by the model. It should be a probability 0 <= p <= 1 for each sample, 1 being the positive class. target (torch.Tensor) – Ground truth. Each label should be either 0 or 1. The computed binary metric torch.float
__str__(self)[source]
Return the name of the Metric (for use in logging).
class flambe.metric.dev.binary.BinaryRecall(threshold: float = 0.5, positive_label: int = 1)[source]
Compute binary recall.
An example is considered negative when its score is below the specified threshold. Binary precition is computed as follows:
|True positives| / |True Positives| + |False Negatives|
compute_binary(self, pred: torch.Tensor, target: torch.Tensor)[source]
Compute binary recall.
Parameters: pred (torch.Tensor) – Predictions made by the model. It should be a probability 0 <= p <= 1 for each sample, 1 being the positive class. target (torch.Tensor) – Ground truth. Each label should be either 0 or 1. The computed binary metric torch.float
__str__(self)[source]
Return the name of the Metric (for use in logging).
class flambe.metric.dev.binary.F1(threshold: float = 0.5, positive_label: int = 1, eps: float = 1e-08)[source]
compute_binary(self, pred: torch.Tensor, target: torch.Tensor)[source]
Compute F1. Score, the harmonic mean between precision and recall.
Parameters: pred (torch.Tensor) – Predictions made by the model. It should be a probability 0 <= p <= 1 for each sample, 1 being the positive class. target (torch.Tensor) – Ground truth. Each label should be either 0 or 1. The computed binary metric torch.float
|
2020-02-20 22:45:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45638972520828247, "perplexity": 6120.968623083553}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145316.8/warc/CC-MAIN-20200220224059-20200221014059-00545.warc.gz"}
|
http://mc-stan.org/rstanarm/reference/stan_betareg.html
|
Beta regression modeling with optional prior distributions for the coefficients, intercept, and auxiliary parameter phi (if applicable).
stan_betareg(formula, data, subset, na.action, weights, offset,
link = c("logit", "probit", "cloglog", "cauchit", "log", "loglog"),
link.phi = NULL, model = TRUE, y = TRUE, x = FALSE, ...,
prior = normal(), prior_intercept = normal(), prior_z = normal(),
prior_intercept_z = normal(), prior_phi = exponential(),
prior_PD = FALSE, algorithm = c("sampling", "optimizing", "meanfield",
"fullrank"), adapt_delta = NULL, QR = FALSE)
stan_betareg.fit(x, y, z = NULL, weights = rep(1, NROW(x)),
offset = rep(0, NROW(x)), link = c("logit", "probit", "cloglog",
"cauchit", "log", "loglog"), link.phi = NULL, ..., prior = normal(),
prior_intercept = normal(), prior_z = normal(),
prior_intercept_z = normal(), prior_phi = exponential(),
prior_PD = FALSE, algorithm = c("sampling", "optimizing", "meanfield",
"fullrank"), adapt_delta = NULL, QR = FALSE)
## Arguments
formula, data, subset
Same as betareg, but we strongly advise against omitting the data argument. Unless data is specified (and is a data frame) many post-estimation functions (including update, loo, kfold) are not guaranteed to work properly.
na.action
Same as betareg, but rarely specified.
Character specification of the link function used in the model for mu (specified through x). Currently, "logit", "probit", "cloglog", "cauchit", "log", and "loglog" are supported.
If applicable, character specification of the link function used in the model for phi (specified through z). Currently, "identity", "log" (default), and "sqrt" are supported. Since the "sqrt" link function is known to be unstable, it is advisable to specify a different link function (or to model phi as a scalar parameter instead of via a linear predictor by excluding z from the formula and excluding link.phi).
model, offset, weights
Same as betareg.
x, y
In stan_betareg, logical scalars indicating whether to return the design matrix and response vector. In stan_betareg.fit, a design matrix and response vector.
...
Further arguments passed to the function in the rstan package (sampling, vb, or optimizing), corresponding to the estimation method named by algorithm. For example, if algorithm is "sampling" it is possibly to specify iter, chains, cores, refresh, etc.
prior
The prior distribution for the regression coefficients. prior should be a call to one of the various functions provided by rstanarm for specifying priors. The subset of these functions that can be used for the prior on the coefficients can be grouped into several "families":
Family Functions Student t family normal, student_t, cauchy Hierarchical shrinkage family hs, hs_plus Laplace family laplace, lasso Product normal family product_normal
See the priors help page for details on the families and how to specify the arguments for all of the functions in the table above. To omit a prior ---i.e., to use a flat (improper) uniform prior--- prior can be set to NULL, although this is rarely a good idea.
Note: Unless QR=TRUE, if prior is from the Student t family or Laplace family, and if the autoscale argument to the function used to specify the prior (e.g. normal) is left at its default and recommended value of TRUE, then the default or user-specified prior scale(s) may be adjusted internally based on the scales of the predictors. See the priors help page and the Prior Distributions vignette for details on the rescaling and the prior_summary function for a summary of the priors used for a particular model.
prior_intercept
The prior distribution for the intercept. prior_intercept can be a call to normal, student_t or cauchy. See the priors help page for details on these functions. To omit a prior on the intercept ---i.e., to use a flat (improper) uniform prior--- prior_intercept can be set to NULL.
Note: If using a dense representation of the design matrix ---i.e., if the sparse argument is left at its default value of FALSE--- then the prior distribution for the intercept is set so it applies to the value when all predictors are centered. If you prefer to specify a prior on the intercept without the predictors being auto-centered, then you have to omit the intercept from the formula and include a column of ones as a predictor, in which case some element of prior specifies the prior on it, rather than prior_intercept. Regardless of how prior_intercept is specified, the reported estimates of the intercept always correspond to a parameterization without centered predictors (i.e., same as in glm).
prior_z
Prior distribution for the coefficients in the model for phi (if applicable). Same options as for prior.
prior_intercept_z
Prior distribution for the intercept in the model for phi (if applicable). Same options as for prior_intercept.
prior_phi
The prior distribution for phi if it is not modeled as a function of predictors. If z variables are specified then prior_phi is ignored and prior_intercept_z and prior_z are used to specify the priors on the intercept and coefficients in the model for phi. When applicable, prior_phi can be a call to exponential to use an exponential distribution, or one of normal, student_t or cauchy to use half-normal, half-t, or half-Cauchy prior. See priors for details on these functions. To omit a prior ---i.e., to use a flat (improper) uniform prior--- set prior_phi to NULL.
prior_PD
A logical scalar (defaulting to FALSE) indicating whether to draw from the prior predictive distribution instead of conditioning on the outcome.
algorithm
A string (possibly abbreviated) indicating the estimation approach to use. Can be "sampling" for MCMC (the default), "optimizing" for optimization, "meanfield" for variational inference with independent normal distributions, or "fullrank" for variational inference with a multivariate normal distribution. See rstanarm-package for more details on the estimation algorithms. NOTE: not all fitting functions support all four algorithms.
Only relevant if algorithm="sampling". See the adapt_delta help page for details.
QR
A logical scalar defaulting to FALSE, but if TRUE applies a scaled qr decomposition to the design matrix. The transformation does not change the likelihood of the data but is recommended for computational reasons when there are multiple predictors. See the QR-argument documentation page for details on how rstanarm does the transformation and important information about how to interpret the prior distributions of the model parameters when using QR=TRUE.
z
For stan_betareg.fit, a regressor matrix for phi. Defaults to an intercept only.
## Value
A stanreg object is returned for stan_betareg.
A stanfit object (or a slightly modified stanfit object) is returned if stan_betareg.fit is called directly.
## Details
The stan_betareg function is similar in syntax to betareg but rather than performing maximum likelihood estimation, full Bayesian estimation is performed (if algorithm is "sampling") via MCMC. The Bayesian model adds priors (independent by default) on the coefficients of the beta regression model. The stan_betareg function calls the workhorse stan_betareg.fit function, but it is also possible to call the latter directly.
## References
Ferrari, SLP and Cribari-Neto, F (2004). Beta regression for modeling rates and proportions. Journal of Applied Statistics. 31(7), 799--815.
stanreg-methods and betareg.
The vignette for stan_betareg. http://mc-stan.org/rstanarm/articles/
## Examples
### Simulated data
N <- 200
x <- rnorm(N, 2, 1)
z <- rnorm(N, 2, 1)
phi <- exp(1.5 + 0.4*z)
y <- rbeta(N, mu * phi, (1 - mu) * phi)
hist(y, col = "dark grey", border = FALSE, xlim = c(0,1))fake_dat <- data.frame(y, x, z)
fit <- stan_betareg(
y ~ x | z, data = fake_dat,
algorithm = "optimizing" # just for speed of example
)#> Initial log joint probability = -324.957
#> Optimization terminated normally:
#> Convergence detected: relative gradient magnitude is below toleranceprint(fit, digits = 2)#> stan_betareg
|
2018-09-23 01:37:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6028971672058105, "perplexity": 4042.724451056016}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158766.65/warc/CC-MAIN-20180923000827-20180923021227-00484.warc.gz"}
|
https://chemfp.readthedocs.io/en/chemfp-4x/chemfp_toplevel.html
|
# Top-level API¶
The following functions and classes are in the top-level chemfp module.
exception chemfp.ChemFPError
Bases: Exception
Base class for all of the chemfp exceptions
exception chemfp.ParseError(msg, location=None)
Bases: chemfp.ChemFPError, ValueError
Exception raised by the molecule and fingerprint parsers and writers
The public attributes are:
msg
a string describing the exception
location
a chemfp.io.Location instance, or None
exception chemfp.EncodingError
Bases: chemfp.ChemFPError, ValueError
Exception raised when the encoding or the encoding_error is unsupported or unknown
chemfp.set_default_progressbar(progressbar)
Configure the default progress bar
This must be an object implementing the tqdm class behavior or one of the following values:
• False - do not use a progress bar
• None or True - use the default progress bar
(False is mapped to the internal “disabled_tqdm” object.)
chemfp.get_default_progressbar()
Return the current default progress bar, or None for the default behavior
chemfp.read_molecule_fingerprints(type, source=None, format=None, id_tag=None, reader_args=None, errors='strict')
Read structures from source and return the corresponding ids and fingerprints
This returns an chemfp.fps_io.FPSReader which can be iterated over to get the id and fingerprint for each read structure record. The fingerprint generated depends on the value of type. Structures are read from source, which can either be the structure filename, or None to read from stdin.
type contains the information about how to turn a structure into a fingerprint. It can be a string or a metadata instance. String values look like OpenBabel-FP2/1, OpenEye-Path, and OpenEye-Path/1 min_bonds=0 max_bonds=5 atype=DefaultAtom btype=DefaultBond. Default values are used for unspecified parameters. Use a Metadata instance with type and aromaticity values set in order to pass aromaticity information to OpenEye.
If format is None then the structure file format and compression are determined by the filename’s extension(s), defaulting to uncompressed SMILES if that is not possible. Otherwise format may be “smi” or “sdf” optionally followed by “.gz” or “.bz2” to indicate compression. The OpenBabel and OpenEye toolkits also support additional formats.
If id_tag is None, then the record id is based on the title field for the given format. If the input format is “sdf” then id_tag specifies the tag field containing the identifier. (Only the first line is used for multi-line values.) For example, ChEBI omits the title from the SD files and stores the id after the “> <ChEBI ID>” line. In that case, use id_tag = "ChEBI ID".
The reader_args is a dictionary with additional structure reader parameters. The parameters depend on the toolkit and the format. Unknown parameters are ignored.
errors specifies how to handle errors. The value “strict” raises an exception if there are any detected errors. The value “report” sends an error message to stderr and skips to the next record. The value “ignore” skips to the next record.
Here is an example of using fingerprints generated from structure file:
from chemfp.bitops import hex_encode
print(id, hex_encode(fp))
Parameters: type (string or Metadata) – information about how to convert the input structure into a fingerprint source (A filename (as a string), a file object, or None to read from stdin) – The structure data source. format (string, or None to autodetect based on the source) – The file format and optional compression. Examples: “smi” and “sdf.gz” id_tag (string, or None to use the default title for the given format) – The tag containing the record id. Example: “ChEBI ID”. Only valid for SD files. reader_args (dict, or None to use the default arguments) – additional parameters for the structure reader errors (one of "strict", "report", or "ignore") – specify how to handle parse errors
chemfp.read_molecule_fingerprints_from_string(type, content, format, *, id_tag=None, reader_args=None, errors='strict')
Read structures from the content string and return the corresponding ids and fingerprints
The parameters are identical to chemfp.read_molecule_fingerprints() except that the entire content is passed through as a content string, rather than as a source filename. See that function for details.
You must specify the format! As there is no source filename, it’s not possible to guess the format based on the extension, and there is no support for auto-detecting the format by looking at the string content.
Parameters: type (string or Metadata) – information about how to convert the input structure into a fingerprint content (string) – The structure data as a string. format (string) – The file format and optional compression. Examples: “smi” and “sdf.gz” id_tag (string, or None to use the default title for the given format) – The tag containing the record id. Example: “ChEBI ID”. Only valid for SD files. reader_args (dict, or None to use the default arguments) – additional parameters for the structure reader errors (one of "strict" (raise exception), "report" (send a message to stderr and continue processing), or "ignore" (continue processing)) – specify how to handle parse errors
chemfp.open(source, format=None, location=None, allow_mmap=True)
Read fingerprints from a fingerprint file
Read fingerprints from source, using the given format. If source is a string then it is treated as a filename. If source is None then fingerprints are read from stdin. Otherwise, source must be a Python file object supporting the read and readline methods.
If format is None then the fingerprint file format and compression type are derived from the source filename, or from the name attribute of the source file object. If the source is None then the stdin is assumed to be uncompressed data in “fps” format.
The supported format strings are:
• “fps”, “fps.gz”, or “fps.zst” for fingerprints in FPS format
• “fpb”, “fpb.gz” or “fpb.zst” for fingerprints in FPB format
The optional location is a chemfp.io.Location instance. It will only be used if the source is in FPS format.
If the source is in FPS format then open will return a chemfp.fps_io.FPSReader, which will use the location if specified.
If the source is in FPB format then open will return a chemfp.arena.FingerprintArena and the location will not be used. If allow_mmap is True then chemfp may use mmap to read uncompressed FPB files. If False then chemfp will read the file’s contents into memory, which may give better performance if the FPB file is on a networked file system, at the expense of higher memory use.
Here’s an example of printing the contents of the file:
from chemfp.bitops import hex_encode
print(id, hex_encode(fp))
Parameters: source (A filename string, a file object, or None) – The fingerprint source. format (string, or None) – The file format and optional compression. location (a Location instance, or None) – a location object used to access parser state information allow_mmap (boolean) – if True, use mmap to open uncompressed FPB files, otherwise read the contents
chemfp.open_from_string(content, format='fps', *, location=None)
Read fingerprints from a content string containing fingerprints in the given format
The supported format strings are:
• “fps”, “fps.gz”, or “fps.zst” for fingerprints in FPS format
• “fpb”, “fpb.gz” or “fpb.zst” for fingerprints in FPB format
If the format is ‘fps’ and not compressed then the content may be a text string. Otherwise content must be a byte string.
The optional location is a chemfp.io.Location instance. It will only be used if the source is in FPS format.
Parameters: content (byte or text string) – The fingerprint data as a string. format (string) – The file format and optional compression. Unicode strings may not be compressed. location (a Location instance, or None) – a location object used to access parser state information
chemfp.open_fingerprint_writer(destination, metadata=None, format=None, alignment=8, reorder=True, level=None, tmpdir=None, max_spool_size=None, errors='strict', location=None)
Create a fingerprint writer for the given destination
The fingerprint writer is an object with methods to write fingerprints to the given destination. The output format is based on the format. If that’s None then the format depends on the destination, or is “fps” if the attempts at format detection fail.
The metadata, if given, is a Metadata instance, and used to fill the header of an FPS file or META block of an FPB file.
If the output format is “fps”, “fps.gz”, or “fps.zst” then destination may be a filename, a file object, or None for stdout. If the output format is “fpb” then destination must be a filename or seekable file object. A fingerprint writer with compressed FPB output is not supported; use arena.save() instead, or post-process the file.
Use level to change the compression level. The default is 9 for gzip and 3 for ztd. Use “min”, “default”, or “max” as aliases for the minimum, default, and maximum values for each range.
Some options only apply to FPB output. The alignment specifies the arena byte alignment. By default the fingerprints are reordered by popcount, which enables sublinear similarity search. Set reorder to False to preserve the input fingerprint order.
The default FPB writer stores everything into memory before writing the file, which may cause performance problems if there isn’t enough available free memory. In that case, set max_spool_size to the number of bytes of memory to use before spooling intermediate data to a file. (Note: there are two independent spools so this may use up to roughly twice as much memory as specified.)
Use tmpdir to specify where to write the temporary spool files if you don’t want to use the operating system default. You may also set the TMPDIR, TEMP or TMP environment variables.
Some options only apply to FPS output. errors specifies how to handle recoverable write errors. The value “strict” raises an exception if there are any detected errors. The value “report” sends an error message to stderr and skips to the next record. The value “ignore” skips to the next record.
The location is a Location instance. It lets the caller access state information such as the number of records that have been written.
Parameters: destination (a filename, file object, or None) – the output destination metadata (a Metadata instance, or None) – the fingerprint metadata format (None, "fps", "fps.gz", "fps.zst", or "fpb") – the output format alignment (positive integer) – arena byte alignment for FPB files reorder (True or False) – True reorders the fingerprints by popcount, False leaves them in input order level (an integer, the strings "min", "default" or "max", or None for default) – True reorders the fingerprints by popcount, False leaves them in input order tmpdir (string or None) – the directory to use for temporary files, when max_spool_size is specified max_spool_size (integer, or None) – number of bytes to store in memory before using a temporary file. If None, use memory for everything. location (a Location instance, or None) – a location object used to access output state information
chemfp.load_fingerprints(reader, metadata=None, reorder=True, alignment=None, format=None, allow_mmap=True, *, progress=False)
Load all of the fingerprints into an in-memory FingerprintArena data structure
The function reads all of the fingerprints and identifers from reader and stores them into an in-memory chemfp.arena.FingerprintArena data structure which supports fast similarity searches.
If reader is a string, the None object, or has a read attribute then it, the format, and allow_mmap will be passed to the chemfp.open() function and the result used as the reader. If that returns a FingerprintArena then the reorder and alignment parameters are ignored and the arena returned.
If reader is a FingerprintArena then the reorder and alignment parameters are ignored. If metadata is None then the input reader is returned without modifications, otherwise a new FingerprintArena is created, whose metadata attribue is metadata.
Otherwise the reader or the result of opening the file must be an iterator which returns (id, fingerprint) pairs. These will be used to create a new arena.
metadata specifies the metadata for all returned arenas. If not given the default comes from the source file or from reader.metadata.
The loader may reorder the fingerprints for better search performance. To prevent ordering, use reorder=False. The reorder parameter is ignored if the reader is an arena or FPB file.
The alignment option specifies the data alignment and padding size for each fingerprint. A value of 8 means that each fingerprint will start on a 8 byte alignment, and use storage space which a multiple of 8 bytes long. The default value of None will determine the best alignment based on the fingerprint size and available popcount methods. This parameter is ignored if the reader is an arena or FPB file.
The progress keyword argument, if True, enables a progress bar when reading from an FPS file. The default, False, shows no progress. If neither True nor False then it should be a callable which accepts the tqdm parameters and returns a tqdm-like instance.
Parameters: reader (a string, file object, or (id, fingerprint) iterator) – An iterator over (id, fingerprint) pairs metadata (Metadata) – The metadata for the arena, if other than reader.metadata reorder (True or False) – Specify if fingerprints should be reordered for better performance alignment (a positive integer, or None) – Alignment size in bytes (both data alignment and padding); None autoselects the best alignment. format (None, "fps", "fps.gz", "fps.zst", "fpb", "fpb.gz" or "fpb.zst") – The file format name if the reader is a string allow_mmap (True or False) – Allow chemfp to use mmap on FPB files, instead of reading the file’s contents into memory progress (True, False, or a callable) – Enable or disable progress bars, optionally specifying the progress bar constructor chemfp.arena.FingerprintArena
chemfp.load_fingerprints_from_string(content, format='fps', *, reorder=True, alignment=None, progress=False)
Load the fingerprints from the content string, in the given format
The supported format strings are:
• “fps”, “fps.gz”, or “fps.zst” for fingerprints in FPS format
• “fpb”, “fpb.gz” or “fpb.zst” for fingerprints in FPB format
If the format is ‘fps’ and not compressed then the content may be a text string. Otherwise content must be a byte string.
If the content is not in FPB format then by default the fingerprints are reordered by popcount, which enables sublinear similarity search. Set reorder to False to preserve the input fingerprint order.
If the content is not in FPB format then alignment specifies the data alignment and padding size for each fingerprint. A value of 8 means that each fingerprint will start on a 8 byte alignment, and use storage space which a multiple of 8 bytes long. The default value of None determines the best alignment based on the fingerprint size and available popcount methods.
The progress keyword argument, if True, enables a progress bar when reading from an FPS file. The default, False, shows no progress. If neither True nor False then it should be a callable which accepts the tqdm parameters and returns a tqdm-like instance.
Parameters: content (byte or text string) – The fingerprint data as a string. format (string) – The file format and optional compression. Unicode strings may not be compressed. reorder (True or False) – True reorders the fingerprints by popcount, False leaves them in input order alignment (a positive integer, or None) – Alignment size in bytes (both data alignment and padding); None autoselects the best alignment. progress (True, False, or a callable) – Enable or disable progress bars, optionally specifying the progress bar constructor chemfp.arena.FingerprintArena
chemfp.count_tanimoto_hits(queries, targets, threshold=0.7, arena_size=100)
Count the number of targets within threshold of each query term
For each query in queries, count the number of targets in targets which are at least threshold similar to the query. This function returns an iterator containing the (query_id, count) pairs.
Example:
queries = chemfp.open("queries.fps")
for (query_id, count) in chemfp.count_tanimoto_hits(queries, targets, threshold=0.9):
print(query_id, "has", count, "neighbors with at least 0.9 similarity")
Internally, queries are processed in batches with arena_size elements. A small batch size uses less overall memory and has lower processing latency, while a large batch size has better overall performance. Use arena_size=None to process the input as a single batch.
Note: an chemfp.fps_io.FPSReader may be used as a target but it will only process one batch and not reset for the next batch. It’s faster to search a chemfp.arena.FingerprintArena, but if you have an FPS file then that takes extra time to load. At times, if there is a small number of queries, the time to load the arena from an FPS file may be slower than the direct search using an FPSReader.
If you know the targets are in an arena then you may want to use chemfp.search.count_tanimoto_hits_fp() or chemfp.search.count_tanimoto_hits_arena().
Parameters: queries (any fingerprint container) – The query fingerprints. targets (chemfp.arena.FingerprintArena or the slower chemfp.fps_io.FPSReader) – The target fingerprints. threshold (float between 0.0 and 1.0, inclusive) – The minimum score threshold. arena_size (a positive integer, or None) – The number of queries to process in a batch iterator of the (query_id, score) pairs, one for each query
Find all targets within threshold of each query term
For each query in queries, find all the targets in targets which are at least threshold similar to the query. This function returns an iterator containing the (query_id, hits) pairs. The hits are stored as a list of (target_id, score) pairs.
Example:
queries = chemfp.open("queries.fps")
for (query_id, hits) in chemfp.id_threshold_tanimoto_search(queries, targets, threshold=0.8):
print(query_id, "has", len(hits), "neighbors with at least 0.8 similarity")
non_identical = [target_id for (target_id, score) in hits if score != 1.0]
print(" The non-identical hits are:", non_identical)
Internally, queries are processed in batches with arena_size elements. A small batch size uses less overall memory and has lower processing latency, while a large batch size has better overall performance. Use arena_size=None to process the input as a single batch.
Note: an chemfp.fps_io.FPSReader may be used as a target but it will only process one batch and not reset for the next batch. It’s faster to search a chemfp.arena.FingerprintArena, but if you have an FPS file then that takes extra time to load. At times, if there is a small number of queries, the time to load the arena from an FPS file may be slower than the direct search using an FPSReader.
If you know the targets are in an arena then you may want to use chemfp.search.threshold_tanimoto_search_fp() or chemfp.search.threshold_tanimoto_search_arena().
Parameters: queries (any fingerprint container) – The query fingerprints. targets (chemfp.arena.FingerprintArena or the slower chemfp.fps_io.FPSReader) – The target fingerprints. threshold (float between 0.0 and 1.0, inclusive) – The minimum score threshold. arena_size (positive integer, or None) – The number of queries to process in a batch An iterator containing (query_id, hits) pairs, one for each query. ‘hits’ contains a list of (target_id, score) pairs.
Find the k-nearest targets within threshold of each query term
For each query in queries, find the k-nearest of all the targets in targets which are at least threshold similar to the query. Ties are broken arbitrarily and hits with scores equal to the smallest value may have been omitted.
This function returns an iterator containing the (query_id, hits) pairs, where hits is a list of (target_id, score) pairs, sorted so that the highest scores are first. The order of ties is arbitrary.
Example:
# Use the first 5 fingerprints as the queries
queries = next(chemfp.open("pubchem_subset.fps").iter_arenas(5))
# Find the 3 nearest hits with a similarity of at least 0.8
for (query_id, hits) in chemfp.id_knearest_tanimoto_search(queries, targets, k=3, threshold=0.8):
print(query_id, "has", len(hits), "neighbors with at least 0.8 similarity")
if hits:
target_id, score = hits[-1]
print(" The least similar is", target_id, "with score", score)
Internally, queries are processed in batches with arena_size elements. A small batch size uses less overall memory and has lower processing latency, while a large batch size has better overall performance. Use arena_size=None to process the input as a single batch.
Note: an chemfp.fps_io.FPSReader may be used as a target but it will only process one batch and not reset for the next batch. It’s faster to search a chemfp.arena.FingerprintArena, but if you have an FPS file then that takes extra time to load. At times, if there is a small number of queries, the time to load the arena from an FPS file may be slower than the direct search using an FPSReader.
If you know the targets are in an arena then you may want to use chemfp.search.knearest_tanimoto_search_fp() or chemfp.search.knearest_tanimoto_search_arena().
Parameters: queries (any fingerprint container) – The query fingerprints. targets (chemfp.arena.FingerprintArena or the slower chemfp.fps_io.FPSReader) – The target fingerprints. k (positive integer) – The maximum number of nearest neighbors to find. threshold (float between 0.0 and 1.0, inclusive) – The minimum score threshold. arena_size (positive integer, or None) – The number of queries to process in a batch An iterator containing (query_id, hits) pairs, one for each query. The hits are a list of (target_id, score) pairs, sorted by score.
chemfp.count_tanimoto_hits_symmetric(fingerprints, threshold=0.7)
Find the number of other fingerprints within threshold of each fingerprint
For each fingerprint in the fingerprints arena, find the number of other fingerprints in the same arena which are at least threshold similar to it. The arena must have pre-computed popcounts. A fingerprint never matches itself.
This function returns an iterator of (fingerprint_id, count) pairs.
Example:
arena = chemfp.load_fingerprints("targets.fps.gz")
for (fp_id, count) in chemfp.count_tanimoto_hits_symmetric(arena, threshold=0.6):
print(fp_id, "has", count, "neighbors with at least 0.6 similarity")
You may also be interested in chemfp.search.count_tanimoto_hits_symmetric().
Parameters: fingerprints (a FingerprintArena with precomputed popcount_indices) – The arena containing the fingerprints. threshold (float between 0.0 and 1.0, inclusive) – The minimum score threshold. An iterator of (fp_id, count) pairs, one for each fingerprint
chemfp.threshold_tanimoto_search_symmetric(fingerprints, threshold=0.7)
Find the other fingerprints within threshold of each fingerprint
For each fingerprint in the fingerprints arena, find the other fingerprints in the same arena which share at least threshold similar to it. The arena must have pre-computed popcounts. A fingerprint never matches itself.
This function returns an iterator of (fingerprint, SearchResult) pairs. The chemfp.search.SearchResult hit order is arbitrary.
Example:
arena = chemfp.load_fingerprints("targets.fps.gz")
for (fp_id, hits) in chemfp.threshold_tanimoto_search_symmetric(arena, threshold=0.75):
print(fp_id, "has", len(hits), "neighbors:")
for (other_id, score) in hits.get_ids_and_scores():
print(" %s %.2f" % (other_id, score))
You may also be interested in the chemfp.search.threshold_tanimoto_search_symmetric() function.
Parameters: fingerprints (a FingerprintArena with precomputed popcount_indices) – The arena containing the fingerprints. threshold (float between 0.0 and 1.0, inclusive) – The minimum score threshold. An iterator of (fp_id, SearchResult) pairs, one for each fingerprint
chemfp.knearest_tanimoto_search_symmetric(fingerprints, k=3, threshold=0.0)
Find the k-nearest fingerprints within threshold of each fingerprint
For each fingerprint in the fingerprints arena, find the nearest k fingerprints in the same arena which have at least threshold similar to it. The arena must have pre-computed popcounts. A fingerprint never matches itself.
This function returns an iterator of (fingerprint, SearchResult) pairs. The chemfp.search.SearchResult hits are ordered from highest score to lowest, with ties broken arbitrarily.
Example:
arena = chemfp.load_fingerprints("targets.fps.gz")
for (fp_id, hits) in chemfp.knearest_tanimoto_search_symmetric(arena, k=5, threshold=0.5):
print(fp_id, "has", len(hits), "neighbors, with scores", end="")
print(", ".join("%.2f" % x for x in hits.get_scores()))
You may also be interested in the chemfp.search.knearest_tanimoto_search_symmetric() function.
Parameters: fingerprints (a FingerprintArena with precomputed popcount_indices) – The arena containing the fingerprints. k (positive integer) – The maximum number of nearest neighbors to find. threshold (float between 0.0 and 1.0, inclusive) – The minimum score threshold. An iterator of (fp_id, SearchResult) pairs, one for each fingerprint
chemfp.count_tversky_hits(queries, targets, threshold=0.7, alpha=1.0, beta=1.0, arena_size=100)
Count the number of targets within threshold of each query term
For each query in queries, count the number of targets in targets which are at least threshold similar to the query. This function returns an iterator containing the (query_id, count) pairs.
Example:
queries = chemfp.open("queries.fps")
for (query_id, count) in chemfp.count_tversky_hits(
queries, targets, threshold=0.9, alpha=0.5, beta=0.5):
print(query_id, "has", count, "neighbors with at least 0.9 Dice similarity")
Internally, queries are processed in batches with arena_size elements. A small batch size uses less overall memory and has lower processing latency, while a large batch size has better overall performance. Use arena_size=None to process the input as a single batch.
Note: an chemfp.fps_io.FPSReader may be used as a target but it will only process one batch and not reset for the next batch. It’s faster to search a chemfp.arena.FingerprintArena, but if you have an FPS file then that takes extra time to load. At times, if there is a small number of queries, the time to load the arena from an FPS file may be slower than the direct search using an FPSReader.
If you know the targets are in an arena then you may want to use chemfp.search.count_tversky_hits_fp() or chemfp.search.count_tversky_hits_arena().
Parameters: queries (any fingerprint container) – The query fingerprints. targets (chemfp.arena.FingerprintArena or the slower chemfp.fps_io.FPSReader) – The target fingerprints. threshold (float between 0.0 and 1.0, inclusive) – The minimum score threshold. arena_size (a positive integer, or None) – The number of queries to process in a batch iterator of the (query_id, score) pairs, one for each query
Find all targets within threshold of each query term
For each query in queries, find all the targets in targets which are at least threshold similar to the query. This function returns an iterator containing the (query_id, hits) pairs. The hits are stored as a list of (target_id, score) pairs.
Example:
queries = chemfp.open("queries.fps")
for (query_id, hits) in chemfp.id_threshold_tanimoto_search(
queries, targets, threshold=0.8, alpha=0.5, beta=0.5):
print(query_id, "has", len(hits), "neighbors with at least 0.8 Dice similarity")
non_identical = [target_id for (target_id, score) in hits if score != 1.0]
print(" The non-identical hits are:", non_identical)
Internally, queries are processed in batches with arena_size elements. A small batch size uses less overall memory and has lower processing latency, while a large batch size has better overall performance. Use arena_size=None to process the input as a single batch.
Note: an chemfp.fps_io.FPSReader may be used as a target but it will only process one batch and not reset for the next batch. It’s faster to search a chemfp.arena.FingerprintArena, but if you have an FPS file then that takes extra time to load. At times, if there is a small number of queries, the time to load the arena from an FPS file may be slower than the direct search using an FPSReader.
If you know the targets are in an arena then you may want to use chemfp.search.threshold_tversky_search_fp() or chemfp.search.threshold_tversky_search_arena().
Parameters: queries (any fingerprint container) – The query fingerprints. targets (chemfp.arena.FingerprintArena or the slower chemfp.fps_io.FPSReader) – The target fingerprints. threshold (float between 0.0 and 1.0, inclusive) – The minimum score threshold. arena_size (positive integer, or None) – The number of queries to process in a batch An iterator containing (query_id, hits) pairs, one for each query. ‘hits’ contains a list of (target_id, score) pairs.
Find the k-nearest targets within threshold of each query term
For each query in queries, find the k-nearest of all the targets in targets which are at least threshold similar to the query. Ties are broken arbitrarily and hits with scores equal to the smallest value may have been omitted.
This function returns an iterator containing the (query_id, hits) pairs, where hits is a list of (target_id, score) pairs, sorted so that the highest scores are first. The order of ties is arbitrary.
Example:
# Use the first 5 fingerprints as the queries
queries = next(chemfp.open("pubchem_subset.fps").iter_arenas(5))
# Find the 3 nearest hits with a similarity of at least 0.8
for (query_id, hits) in chemfp.id_knearest_tversky_search(
queries, targets, k=3, threshold=0.8, alpha=0.5, beta=0.5):
print(query_id, "has", len(hits), "neighbors with at least 0.8 Dice similarity")
if hits:
target_id, score = hits[-1]
print(" The least similar is", target_id, "with score", score)
Internally, queries are processed in batches with arena_size elements. A small batch size uses less overall memory and has lower processing latency, while a large batch size has better overall performance. Use arena_size=None to process the input as a single batch.
Note: an chemfp.fps_io.FPSReader may be used as a target but it will only process one batch and not reset for the next batch. It’s faster to search a chemfp.arena.FingerprintArena, but if you have an FPS file then that takes extra time to load. At times, if there is a small number of queries, the time to load the arena from an FPS file may be slower than the direct search using an FPSReader.
If you know the targets are in an arena then you may want to use chemfp.search.knearest_tversky_search_fp() or chemfp.search.knearest_tversky_search_arena().
Parameters: queries (any fingerprint container) – The query fingerprints. targets (chemfp.arena.FingerprintArena or the slower chemfp.fps_io.FPSReader) – The target fingerprints. k (positive integer) – The maximum number of nearest neighbors to find. threshold (float between 0.0 and 1.0, inclusive) – The minimum score threshold. arena_size (positive integer, or None) – The number of queries to process in a batch An iterator containing (query_id, hits) pairs, one for each query. The hits are a list of (target_id, score) pairs, sorted by score.
chemfp.count_tversky_hits_symmetric(fingerprints, threshold=0.7, alpha=1.0, beta=1.0)
Find the number of other fingerprints within threshold of each fingerprint
For each fingerprint in the fingerprints arena, find the number of other fingerprints in the same arena which are at least threshold similar to it. The arena must have pre-computed popcounts. A fingerprint never matches itself.
This function returns an iterator of (fingerprint_id, count) pairs.
Example:
arena = chemfp.load_fingerprints("targets.fps.gz")
for (fp_id, count) in chemfp.count_tversky_hits_symmetric(
arena, threshold=0.6, alpha=0.5, beta=0.5):
print(fp_id, "has", count, "neighbors with at least 0.6 Dice similarity")
You may also be interested in chemfp.search.count_tversky_hits_symmetric().
Parameters: fingerprints (a FingerprintArena with precomputed popcount_indices) – The arena containing the fingerprints. threshold (float between 0.0 and 1.0, inclusive) – The minimum score threshold. An iterator of (fp_id, count) pairs, one for each fingerprint
chemfp.threshold_tversky_search_symmetric(fingerprints, threshold=0.7, alpha=1.0, beta=1.0)
Find the other fingerprints within threshold of each fingerprint
For each fingerprint in the fingerprints arena, find the other fingerprints in the same arena which share at least threshold similar to it. The arena must have pre-computed popcounts. A fingerprint never matches itself.
This function returns an iterator of (fingerprint, SearchResult) pairs. The chemfp.search.SearchResult hit order is arbitrary.
Example:
arena = chemfp.load_fingerprints("targets.fps.gz")
for (fp_id, hits) in chemfp.threshold_tversky_search_symmetric(
arena, threshold=0.75, alpha=0.5, beta=0.5):
print(fp_id, "has", len(hits), "Dice neighbors:")
for (other_id, score) in hits.get_ids_and_scores():
print(" %s %.2f" % (other_id, score))
You may also be interested in the chemfp.search.threshold_tversky_search_symmetric() function.
Parameters: fingerprints (a FingerprintArena with precomputed popcount_indices) – The arena containing the fingerprints. threshold (float between 0.0 and 1.0, inclusive) – The minimum score threshold. An iterator of (fp_id, SearchResult) pairs, one for each fingerprint
chemfp.knearest_tversky_search_symmetric(fingerprints, k=3, threshold=0.0, alpha=1.0, beta=1.0)
Find the k-nearest fingerprints within threshold of each fingerprint
For each fingerprint in the fingerprints arena, find the nearest k fingerprints in the same arena which have at least threshold similar to it. The arena must have pre-computed popcounts. A fingerprint never matches itself.
This function returns an iterator of (fingerprint, SearchResult) pairs. The chemfp.search.SearchResult hits are ordered from highest score to lowest, with ties broken arbitrarily.
Example:
arena = chemfp.load_fingerprints("targets.fps.gz")
for (fp_id, hits) in chemfp.knearest_tversky_search_symmetric(
arena, k=5, threshold=0.5, alpha=0.5, beta=0.5):
print(fp_id, "has", len(hits), "neighbors, with Dice scores", end="")
print(", ".join("%.2f" % x for x in hits.get_scores()))
You may also be interested in the chemfp.search.knearest_tversky_search_symmetric() function.
Parameters: fingerprints (a FingerprintArena with precomputed popcount_indices) – The arena containing the fingerprints. k (positive integer) – The maximum number of nearest neighbors to find. threshold (float between 0.0 and 1.0, inclusive) – The minimum score threshold. An iterator of (fp_id, SearchResult) pairs, one for each fingerprint
exception chemfp.ChemFPProblem(severity, category, description)
Information about a compatibility problem between a query and target.
Instances are generated by chemfp.check_fingerprint_problems() and chemfp.check_metadata_problems().
The public attributes are:
severity
one of “info”, “warning”, or “error”
error_level
5 for “info”, 10 for “warning”, and 20 for “error”
category
a string used as a category name. This string will not change over time.
description
a more detailed description of the error, including details of the mismatch. The description depends on query_name and target_name and may change over time.
The current category names are:
• “num_bits mismatch” (error)
• “num_bytes_mismatch” (error)
• “type mismatch” (warning)
• “aromaticity mismatch” (info)
• “software mismatch” (info)
chemfp.check_fingerprint_problems(query_fp, target_metadata, query_name='query', target_name='target')
Return a list of compatibility problems between a fingerprint and a metadata
If there are no problems then this returns an empty list. If there is a bit length or byte length mismatch between the query_fp byte string and the target_metadata then it will return a list containing a ChemFPProblem instance, with a severity level “error” and category “num_bytes mismatch”.
This function is usually used to check if a query fingerprint is compatible with the target fingerprints. In case of a problem, the default message looks like:
>>> problems = check_fingerprint_problems("A"*64, Metadata(num_bytes=128))
>>> problems[0].description
'query contains 64 bytes but target has 128 byte fingerprints'
You can change the error message with the query_name and target_name parameters:
>>> import chemfp
... query_name="input", target_name="database")
>>> problems[0].description
'input contains 64 bytes but database has 128 byte fingerprints'
Parameters: query_fp (byte string) – a fingerprint (usually the query fingerprint) target_metadata (Metadata instance) – the metadata to check against (usually the target metadata) query_name (string) – the text used to describe the fingerprint, in case of problem target_name (string) – the text used to describe the metadata, in case of problem a list of ChemFPProblem instances
chemfp.check_metadata_problems(query_metadata, target_metadata, query_name='query', target_name='target')
Return a list of compatibility problems between two metadata instances.
If there are no probelms then this returns an empty list. Otherwise it returns a list of ChemFPProblem instances, with a severity level ranging from “info” to “error”.
Bit length and byte length mismatches produce an “error”. Fingerprint type and aromaticity mismatches produce a “warning”. Software version mismatches produce an “info”.
This is usually used to check if the query metadata is incompatible with the target metadata. In case of a problem the messages look like:
>>> import chemfp
>>> len(problems)
2
>>> print(problems[1].description)
query has fingerprints of type 'Example/1' but target has fingerprints of type 'Counter-Example/1'
You can change the error message with the query_name and target_name parameters:
>>> problems = chemfp.check_metadata_problems(m1, m2, query_name="input", target_name="database")
>>> print(problems[1].description)
input has fingerprints of type 'Example/1' but database has fingerprints of type 'Counter-Example/1'
Parameters: fp (byte string) – a fingerprint metadata (Metadata instance) – the metadata to check against query_name (string) – the text used to describe the fingerprint, in case of problem target_name (string) – the text used to describe the metadata, in case of problem a list of ChemFPProblem instances
class chemfp.Metadata(num_bits=None, num_bytes=None, type=None, aromaticity=None, software=None, sources=None, date=None)
Bases: object
Store information about a set of fingerprints
The public attributes are:
num_bits
the number of bits in the fingerprint
num_bytes
the number of bytes in the fingerprint
type
the fingerprint type string
aromaticity
aromaticity model (only used with OEChem, and now deprecated)
software
software used to make the fingerprints
sources
list of sources used to make the fingerprint
date
a datetime timestamp of when the fingerprints were made
copy(num_bits=None, num_bytes=None, type=None, aromaticity=None, software=None, sources=None, date=None)
Return a new Metadata instance based on the current attributes and optional new values
When called with no parameter, make a new Metadata instance with the same attributes as the current instance.
If a given call parameter is not None then it will be used instead of the current value. If you want to change a current value to None then you will have to modify the new Metadata after you created it.
Parameters: num_bits (an integer, or None) – the number of bits in the fingerprint num_bytes (an integer, or None) – the number of bytes in the fingerprint type (string or None) – the fingerprint type description aromaticity (None) – obsolete software (string or None) – a description of the software sources (list of strings, a string (interpreted as a list with one string), or None) – source filenames date (a datetime instance, or None) – creation or processing date for the contents a new Metadata instance
class chemfp.FingerprintReader(metadata)
Bases: object
Base class for all chemfp objects holding fingerprint records
All FingerprintReader instances have a metadata attribute containing a Metadata and can be iteratated over to get the (id, fingerprint) for each record.
get_fingerprint_type()
Get the fingerprint type object based on the metadata’s type field
This uses self.metadata.type to get the fingerprint type string then calls chemfp.get_fingerprint_type() to get and return a chemfp.types.FingerprintType instance.
This will raise a TypeError if there is no metadata, and a ValueError if the type field was invalid or the fingerprint type isn’t available.
Returns: a chemfp.types.FingerprintType
iter_arenas(arena_size=1000)
iterate through arena_size fingerprints at a time, as subarenas
Iterate through arena_size fingerprints at a time, returned as chemfp.arena.FingerprintArena instances. The arenas are in input order and not reordered by popcount.
This method helps trade off between performance and memory use. Working with arenas is often faster than processing one fingerprint at a time, but if the file is very large then you might run out of memory, or get bored while waiting to process all of the fingerprint before getting the first answer.
If arena_size is None then this makes an iterator which returns a single arena containing all of the fingerprints.
Parameters: arena_size (positive integer, or None) – The number of fingerprints to put into each arena. an iterator of chemfp.arena.FingerprintArena instances
load(*, reorder=True, alignment=None, progress=False)
Load all of the fingerprints into an arena and return the arena
Parameters: reorder (True or False) – Specify if fingerprints should be reordered for better performance alignment (a positive integer, or None) – Alignment size in bytes (both data alignment and padding); None autoselects the best alignment. progress (True, False, or a callable) – Enable or disable progress bars, optionally specifying the progress bar constructor
save(destination, format=None, level=None)
Save the fingerprints to a given destination and format
The output format is based on the format. If the format is None then the format depends on the destination file extension. If the extension isn’t recognized then the fingerprints will be saved in “fps” format.
If the output format is “fps”, “fps.gz”, or “fps.zst” then destination may be a filename, a file object, or None; None writes to stdout.
If the output format is “fpb” then destination must be a filename or seekable file object. Chemfp cannot save to compressed FPB files.
Parameters: destination (a filename, file object, or None) – the output destination format (None, "fps", "fps.gz", "fps.zst", or "fpb") – the output format level (an integer, or "min", "default", or "max" for compressor-specific values) – compression level when writing .gz or .zst files None
class chemfp.FingerprintIterator(metadata, id_fp_iterator, location=None, close=None)
A chemfp.FingerprintReader for an iterator of (id, fingerprint) pairs
This is often used as an adapter container to hold the metadata and (id, fingerprint) iterator. It supports an optional location, and can call a close function when the iterator has completed.
A FingerprintIterator is a context manager which will close the underlying iterator if it’s given a close handler.
Like all iterators you can use next() to get the next (id, fingerprint) pair.
close()
Close the iterator.
The call will be forwarded to the close callable passed to the constructor. If that close is None then this does nothing.
class chemfp.Fingerprints(metadata, id_fp_pairs)
A chemf.FingerprintReader containing a metadata and a list of (id, fingerprint) pairs.
This is typically used as an adapater when you have a list of (id, fingerprint) pairs and you want to pass it (and the metadata) to the rest of the chemfp API.
This implements a simple list-like collection of fingerprints. It supports:
• for (id, fingerprint) in fingerprints: …
• id, fingerprint = fingerprints[1]
• len(fingerprints)
More features, like slicing, will be added as needed or when requested.
class chemfp.FingerprintWriter
Bases: object
Base class for the fingerprint writers
The three fingerprint writer classes are:
If the chemfp_converters package is available then its FlushFingerprintWriter will be used to write fingerprints in flush format.
Use chemfp.open_fingerprint_writer() to create a fingerprint writer class; do not create them directly.
All classes have the following attributes:
• metadata - a chemfp.Metadata instance
• format - a string describing the base format type (without compression); either ‘fps’ or ‘fpb’
• closed - False when the file is open, else True
Fingerprint writers are also their own context manager, and close the writer on context exit.
close()
Close the writer
This will set self.closed to False.
format = None
write_fingerprint(id, fp)
Write a single fingerprint record with the given id and fp to the destination
Parameters: id (string) – the record identifier fp (byte string) – the fingerprint
write_fingerprints(id_fp_pairs)
Write a sequence of (id, fingerprint) pairs to the destination
Parameters: id_fp_pairs – An iterable of (id, fingerprint) pairs. id is a string and fingerprint is a byte string.
chemfp.get_num_threads()
Return the number of OpenMP threads to use in searches
Initially this is the value returned by omp_get_max_threads(), which is generally 4 unless you set the environment variable OMP_NUM_THREADS to some other value.
It may be any value in the range 1 to get_max_threads(), inclusive.
Returns: the current number of OpenMP threads to use
chemfp.set_num_threads(num_threads)
Set the number of OpenMP threads to use in searches
If num_threads is less than one then it is treated as one, and a value greater than get_max_threads() is treated as get_max_threads().
Parameters: num_threads (int) – the new number of OpenMP threads to use
chemfp.get_max_threads()
Return the maximum number of threads available.
WARNING: this likely doesn’t do what you think it does. Do not use!
If OpenMP is not available then this will return 1. Otherwise it returns the maximum number of threads available, as reported by omp_get_num_threads().
chemfp.has_toolkit(toolkit_name)
Return True if the named toolkit is available, otherwise False
If toolkit_name is one of “openbabel”, “openeye”, or “rdkit” then this function will test to see if the given toolkit is available, and if so return True. Otherwise it returns False.
>>> import chemfp
>>> chemfp.has_toolkit("openeye")
True
>>> chemfp.has_toolkit("openbabel")
False
The initial test for a toolkit can be slow, especially if the underlying toolkit loads a lot of shared libraries. The test is only done once, and cached.
Parameters: toolkit_name (string) – the toolkit name True or False
chemfp.get_toolkit(toolkit_name)
Return the named toolkit, if available, or raise a ValueError
If toolkit_name is one of “openbabel”, “openeye”, or “rdkit” and the named toolkit is available, then it will return chemfp.openbabel_toolkit, chemfp.openeye_toolkit, or chemfp.rdkit_toolkit, respectively.:
>>> import chemfp
>>> chemfp.get_toolkit("openeye")
<module 'chemfp.openeye_toolkit' from 'chemfp/openeye_toolkit.py'>
>>> chemfp.get_toolkit("rdkit")
Traceback (most recent call last):
...
ValueError: Unable to get toolkit 'rdkit': No module named rdkit
Parameters: toolkit_name (string) – the toolkit name the chemfp toolkit ValueError if toolkit_name is unknown or the toolkit does not exist
chemfp.get_toolkit_names()
Return a set of available toolkit names
The function checks if each supported toolkit is available by trying to import its corresponding module. It returns a set of toolkit names:
>>> import chemfp
>>> chemfp.get_toolkit_names()
set(['openeye', 'rdkit', 'openbabel'])
Returns: a set of toolkit names, as strings
chemfp.get_fingerprint_family(family_name)
Return the named fingerprint family, or raise a ValueError if not available
Given a family_name like OpenBabel-FP2 or OpenEye-MACCS166 return the corresponding chemfp.types.FingerprintFamily.
Parameters: family_name (string) – the family name a chemfp.types.FingerprintFamily instance
chemfp.get_fingerprint_families(toolkit_name=None)
Return a list of available fingerprint families
Parameters: toolkit_name (string) – restrict fingerprints to the named toolkit a list of chemfp.types.FingerprintFamily instances
chemfp.has_fingerprint_family(family_name)
Test if the fingerprint family is available
Return True if the fingerprint family_name is available, otherwise False. The family_name may be versioned or unversioned, like “OpenBabel-FP2/1” or “OpenEye-MACCS166”.
Parameters: family_name (string) – the family name True or False
chemfp.get_fingerprint_family_names(include_unavailable=False, toolkit_name=None)
Return a set of fingerprint family name strings
The function tries to load each known fingerprint family. The names of the families which could be loaded are returned as a set of strings.
If include_unavailable is True then this will return a set of all of the fingerprint family names, including those which could not be loaded.
The set contains both the versioned and unversioned family names, so both OpenBabel-FP2/1 and OpenBabel-FP2 may be returned.
Parameters: include_unavailable (True or False) – Should unavailable family names be included in the result set? a set of strings
chemfp.get_fingerprint_type(type, fingerprint_kwargs=None)
Get the fingerprint type based on its type string and optional keyword arguments
Given a fingerprint type string like OpenBabel-FP2, or RDKit-Fingerprint/1 fpSize=1024, return the corresponding chemfp.types.FingerprintType.
The fingerprint type string may include fingerprint parameters. Parameters can also be specified through the fingerprint_kwargs dictionary, where the dictionary values are native Python values. If the same parameter is specified in the type string and the kwargs dictionary then the fingerprint_kwargs takes precedence.
For example:
>>> fptype = get_fingerprint_type("RDKit-Fingerprint fpSize=1024 minPath=3", {"fpSize": 4096})
>>> fptype.get_type()
'RDKit-Fingerprint/2 minPath=3 maxPath=7 fpSize=4096 nBitsPerHash=2 useHs=1'
Use get_fingerprint_type_from_text_settings() if your fingerprint parameter values are all string-encoded, eg, from the command-line or a configuration file.
Parameters: type (string) – a fingerprint type string fingerprint_kwargs (a dictionary of string names and Python types for values) – fingerprint type parameters
chemfp.get_fingerprint_type_from_text_settings(type, settings=None)
Get the fingerprint type based on its type string and optional settings arguments
Given a fingerprint type string like OpenBabel-FP2, or RDKit-Fingerprint/1 fpSize=1024, return the corresponding chemfp.types.FingerprintType.
The fingerprint type string may include fingerprint parameters. Parameters can also be specified through the settings dictionary, where the dictionary values are string-encoded values. If the same parameter is specified in the type string and the settings dictionary then the settings take precedence.
For example:
>>> fptype = get_fingerprint_type_from_text_settings("RDKit-Fingerprint fpSize=1024 minPath=3",
... {"fpSize": "4096"})
>>> fptype.get_type()
'RDKit-Fingerprint/2 minPath=3 maxPath=7 fpSize=4096 nBitsPerHash=2 useHs=1'
This function is for string settings from a configuration file or command-line. Use get_fingerprint_type() if your fingerprint parameters are Python values.
Parameters: type (string) – a fingerprint type string fingerprint_kwargs (a dictionary of string names and Python types for values) – fingerprint type parameters
chemfp.simsearch(*, targets, query=None, query_fp=None, query_id=None, queries=None, NxN=None, query_format=None, target_format=None, type=None, k=None, threshold=None, alpha=None, beta=None, include_lower_triangle=True, ordering=None, progress=True)
High-level API for similarity searches in targets.
Several different search types are supported: - If query_fp is a byte string then use it as the query fingerprint to search targets and create a SearchResult. - If query_id is not None then get the corresponding fingerprint in targets (or raise a KeyError) and use it to search targets and create a SearchResult. - If query is not None then parse it as a molecule record in query_format format (default: ‘smi’) and create a SearchResult. - If queries is not None, use it as queries for an NxM search of targets and create a SearchResults. - If NxN is true then do an NxN search of the targets. and create a SearchResults.
The function a SimsearchInfo instance with information about what happened. Its result attribute stores the SearchResult or SearchResults.
If queries or targets is not a fingerprint arena then use load_fingerprints() to load the arena. Use query_format or target_format to specify the format type.
If k is not None then do a k-nearest search, otherwise do a threshold search. If threshold is not None then the threshold is 0.0. If both are None the the defaults are k=3, threshold=0.0.
If alpha = beta = None or 1.0 then use a Tanimoto search, otherwise do a Tversky search with the given values of alpha and beta. If beta is not None then beta is set to alpha.
For NxN threshold search, if include_lower_triangle is True, compute the upper-triangle similarities, then copy the results to get the full set of results. When False, only compute the upper triangle.
If ordering is not None then the hits will be reordered as specified. The available orderings are:
• increasing-score - sort by increasing score
• decreasing-score - sort by decreasing score
• increasing-score-plus - sort by increasing score, break ties by increasing index
• decreasing-score-plus - sort by decreasing score, break ties by increasing index
• increasing-index - sort by increasing target index
• decreasing-index - sort by decreasing target index
• move-closest-first - move the hit with the highest score to the first position
• reverse - reverse the current ordering
If progress is True then use a progress bar to show FPS load progress, and NxN and NxM search progress. If False then no progress bar is used. It may also a callable used to create the progress bar.
chemfp.convert2fps(source, destination, *, type, input_format=None, output_format=None, reader_args=None, id_tag=None, errors='ignore', fingerprint_kwargs=None, id_prefix=None, id_template=None, id_cleanup=True, overwrite=True, reorder=True, tmpdir=None, max_spool_size=None, progress=True)
convert a structure file or files to a fingerprint file
Use source to specify the input, which may be None for stdin, a file-like object (if the toolkit supports it), a filename, or a list of filenames. If input_format is not specified then the format type is based on the filename extension(s), including compression. The default format is uncompressed SMILES. Use reader_args to pass in toolkit- and format-specific configuration.
Use destination to specify the output, which may be None for stdout, a file-like object, or a filename. If output_format is not specified then the format type is based on the filename extension(s), including compression. The default format is uncompressed FPS.
Use type to specify the fingerprint type. This can be a chemfp fingerprint type string or fingerprint type object. If it is a string then it is combined with fingerprint_kwargs to get the fingerprint type object.
If the input is an SD file then id_tag specifies the tag containing the identifier. If None, use the record’s title as the identifier.
Handle structure processing errors based on the value of errors, which may be “ignore”, “report”, or “strict”.
If destination is a string and overwrite is false then do not generate fingerprints if the file destination exists.
By default, use progress bars while processing each file. Use process=False to disable them.
The values of reorder, tmpdir, max_spool_size are passed to open_fingerprint_writer().
This function returns a ConversionInfo() instance with information about the conversion.
chemfp.rdkit2fps(source, destination, *, type='RDKit-Morgan', input_format=None, output_format=None, reader_args=None, id_tag=None, errors='ignore', id_prefix=None, id_template=None, id_cleanup=True, overwrite=True, reorder=True, tmpdir=None, max_spool_size=None, progress=True, bitFlags=None, branchedPaths=None, fpSize=None, fromAtoms=None, includeChirality=None, includeRedundantEnvironments=None, isQuery=None, isomeric=None, kekulize=None, maxLength=None, maxPath=None, minLength=None, minPath=None, min_radius=None, nBitsPerEntry=None, nBitsPerHash=None, radius=None, rings=None, targetSize=None, use2D=None, useBondOrder=None, useBondTypes=None, useChirality=None, useFeatures=None, useHs=None)
Use RDKit to convert a structure file or files to a fingerprint file
Use source to specify the input, which may be None for stdin, a file-like object (if the toolkit supports it), a filename, or a list of filenames. If input_format is not specified then the format type is based on the filename extension(s), including compression. The default format is uncompressed SMILES. Use reader_args to pass in RDKit- and format-specific configuration.
Use destination to specify the output, which may be None for stdout, a file-like object, or a filename. If output_format is not specified then the format type is based on the filename extension(s), including compression. The default format is uncompressed FPS.
Use type to specify the fingerprint type. This may be a short-hand name like “morgan” or a chemfp type name. Additional fingerprint- specific values may be passed as function call arguments.
Most short-hand names are available as attributes of the rdkit2fps function, eg, rdkit2fps.morgan or rdkit2fps.maccs.
If the input is an SD file then id_tag specifies the tag containing the identifier. If None, use the record’s title as the identifier.
Handle structure processing errors based on the value of errors, which may be “ignore”, “report”, or “strict”.
If destination is a string and overwrite is false then do not generate fingerprints if the file destination exists.
If progress is True then use a progress bar to show the input processing progress, based on the number of sources and the file size (if available). If False then no progress bar is used. It may also a callable used to create the progress bar.
The values of reorder, tmpdir, max_spool_size are passed to open_fingerprint_writer().
This function returns a ConversionInfo() instance with information about the conversion.
chemfp.oe2fps(source, destination, *, type='OpenEye-Path', input_format=None, output_format=None, reader_args=None, id_tag=None, errors='ignore', id_prefix=None, id_template=None, id_cleanup=True, overwrite=True, reorder=True, tmpdir=None, max_spool_size=None, progress=True, atype=None, btype=None, maxbonds=None, maxradius=None, minbonds=None, minradius=None, numbits=None)
Use OEChem and OEGraphSim to convert a structure file or files to a fingerprint file
Use source to specify the input, which may be None for stdin, a file-like object (if the toolkit supports it), a filename, or a list of filenames. If input_format is not specified then the format type is based on the filename extension(s), including compression. The default format is uncompressed SMILES. Use reader_args to pass in OEChem- and format-specific configuration.
Use destination to specify the output, which may be None for stdout, a file-like object, or a filename. If output_format is not specified then the format type is based on the filename extension(s), including compression. The default format is uncompressed FPS.
Use type to specify the fingerprint type. This may be a short-hand name like “circular” or a chemfp type name. Additional fingerprint- specific values may be passed as function call arguments.
Most short-hand names are available as attributes of the oe2fps function, eg, oe2fps.circular or oe2fps.maccs.
If the input is an SD file then id_tag specifies the tag containing the identifier. If None, use the record’s title as the identifier.
Handle structure processing errors based on the value of errors, which may be “ignore”, “report”, or “strict”.
If destination is a string and overwrite is false then do not generate fingerprints if the file destination exists.
If progress is True then use a progress bar to show the input processing progress, based on the number of sources and the file size (if available). If False then no progress bar is used. It may also a callable used to create the progress bar.
The values of reorder, tmpdir, max_spool_size are passed to open_fingerprint_writer().
This function returns a ConversionInfo() instance with information about the conversion.
chemfp.ob2fps(source, destination, *, type='OpenBabel-FP2', input_format=None, output_format=None, reader_args=None, id_tag=None, errors='ignore', id_prefix=None, id_template=None, id_cleanup=True, overwrite=True, reorder=True, tmpdir=None, max_spool_size=None, progress=True, nBits=None)
Use Open Babel to convert a structure file or files to a fingerprint file
Use source to specify the input, which may be None for stdin, a filename, or a list of filenames. (Chemfp does not support passing Python file-like objects to Open Babel). If input_format is not specified then the format type is based on the filename extension(s), including compression. The default format is uncompressed SMILES. Use reader_args to pass in Open Babel- and format-specific configuration.
Use destination to specify the output, which may be None for stdout, a file-like object, or a filename. If output_format is not specified then the format type is based on the filename extension(s), including compression. The default format is uncompressed FPS.
Use type to specify the fingerprint type. This may be a short-hand name like “FP2” or a chemfp type name. Additional fingerprint- specific values may be passed as function call arguments.
Most short-hand names are available as attributes of the ob2fps function, eg, ob2fps.fp2 or ob2fps.maccs.
If the input is an SD file then id_tag specifies the tag containing the identifier. If None, use the record’s title as the identifier.
Handle structure processing errors based on the value of errors, which may be “ignore”, “report”, or “strict”.
If destination is a string and overwrite is false then do not generate fingerprints if the file destination exists.
If progress is True then use a progress bar to show the input processing progress, based on the number of sources and the file size (if available). If False then no progress bar is used. It may also a callable used to create the progress bar.
The values of reorder, tmpdir, max_spool_size are passed to open_fingerprint_writer().
This function returns a ConversionInfo() instance with information about the conversion.
chemfp.cdk2fps(source, destination, *, type='CDK-Daylight', input_format=None, output_format=None, reader_args=None, id_tag=None, errors='ignore', id_prefix=None, id_template=None, id_cleanup=True, overwrite=True, reorder=True, tmpdir=None, max_spool_size=None, progress=True, hashPseudoAtoms=None, pathLimit=None, perceiveStereochemistry=None, searchDepth=None, size=None, implementation=None)
Use the CDK to convert a structure file or files to a fingerprint file
Use source to specify the input, which may be None for stdin, a filename, or a list of filenames. (Chemfp does not support passing Python file-like objects to the CDK). If input_format is not specified then the format type is based on the filename extension(s), including compression. The default format is uncompressed SMILES. Use reader_args to pass in CDK- and format-specific configuration.
Use destination to specify the output, which may be None for stdout, a file-like object, or a filename. If output_format is not specified then the format type is based on the filename extension(s), including compression. The default format is uncompressed FPS.
Use type to specify the fingerprint type. This may be a short-hand name like “daylight” or a chemfp type name. Additional fingerprint- specific values may be passed as function call arguments.
Most short-hand names are available as attributes of the ob2fps function, eg, cdk2fps.daylight or cdk2fps.ecfp2.
If the input is an SD file then id_tag specifies the tag containing the identifier. If None, use the record’s title as the identifier.
Handle structure processing errors based on the value of errors, which may be “ignore”, “report”, or “strict”.
If destination is a string and overwrite is false then do not generate fingerprints if the file destination exists.
If progress is True then use a progress bar to show the input processing progress, based on the number of sources and the file size (if available). If False then no progress bar is used. It may also a callable used to create the progress bar.
The values of reorder, tmpdir, max_spool_size are passed to open_fingerprint_writer().
This function returns a ConversionInfo() instance with information about the conversion.
chemfp.sdf2fps(source, destination, *, id_tag=None, fp_tag=None, input_format=None, output_format=None, metadata=None, pubchem=False, decoder=None, errors='report', id_prefix=None, id_template=None, id_cleanup=True, overwrite=True, reorder=True, tmpdir=None, max_spool_size=None, progress=True)
Extract and save fingerprints from tag data in an SD file
Use source to specify the input, which may be None for stdin, a file-like object, a filename, or a list of filenames. If input_format is not specified then the filename extension (if available) is used to determine the compression type, defaulting to uncompressed. Possible values for input_format include “sdf”, “sdf.gz”, and “sdf.zst”.
Use destination to specify the output, which may be None for stdout, a file-like object, or a filename. If output_format is not specified then the format type is based on the filename extension(s), including compression. The default format is uncompressed FPS.
The id_tag specifies the tag containing the identifier. If None, use the record’s title as the identifier. The fp_tag specifies the tag containing the encoded fingerprint. The decoding describes how to decode the fingerprints. It may be one of “binary”, “binary-msb”, “hex”, “hex-lsb”, “hex-msb”, “base64”, “cactvs”, or “daylight”, or a callable object which takes the fingerprint string and returns the (number of bits, fingerprint byte string), or raises a ValueError on failures.
Handle structure processing errors based on the value of errors, which may be “ignore”, “report”, or “strict”.
If metadata is not None then it is used to generate the metadata output in the output file.
If pubchem is true and metadata is None, then a new Metadata will be used, with software as “CACTVS/unknown”, type as “CACTVS-E_SCREEN/1.0 extended=2”, num_bits as 881, and sources containing any source terms which are filenames.
The pubchem option also sets fp_tag to “PUBCHEM_CACTVS_SUBSKEYS” and decoder to “cactvs”, but only if those values aren’t otherwise specified.
If destination is a string and overwrite is false then do not generate fingerprints if the file destination exists.
If progress is True then use a progress bar to show the SDF processing progress, based on the number of sources and the file size (if available). If False then no progress bar is used. It may also a callable used to create the progress bar.
The values of reorder, tmpdir, max_spool_size are passed to open_fingerprint_writer().
This function returns a ConversionInfo() instance with information about the conversion.
chemfp.maxmin(candidates, *, references=None, initial_pick=None, candidates_format=None, references_format=None, num_picks=1000, threshold=1.0, all_equal=False, randomize=True, seed=-1, include_scores=True, progress=True)
Use the MaxMin algorithm to pick diverse fingerprints from candidates
The MaxMin algorithm iteratively picks fingerprints from a set of candidates such that the newly picked fingerprint has the smallest Tanimoto similarity compared to any previously picked fingerprint, and optionally also the smallest Tanimoto similarity to the reference fingerprints.
This process is repeated until num_picks fingerprints have been picked, or until the remaining candidates are greater than threshold similar to the picked fingerprints, or until no candidates are left. A num_picks value of None is an alias for len(candidates) and will select all candiates, from most dissimilar to least. For example, to select all fingerprints with a maximum Tanimoto score of 0.2 then use num_picks = None and threshold = 0.2.
The fingerprints are selected from candidates. If it is not a FingerprintArena then the value is passed to load_fingerprints(), along with values of candidates_format and progress to load the arena.
If initial_pick and references are not specified then the initial pick is selected using the heapsweep algorithm, which finds a fingerprint with the smallest maximum Tanimoto to any other fingerprint. Use initial_pick to specify the initial pick, either as a string (which is treated as a candidate id) or as an integer (which is treated as a fingerprint index).
If references is not None then any picked candidate fingerprint must also be dissimilar from all of the fingerprints in the reference fingerprints. The model behind the terms is that you want to pick diverse fingerprints from a vendor catalog which are also diverse from your in-house reference compounds. If references is not a FingerprintArena then it is passed to load_fingerprints(), along with the values of references_format and progress to load the arena.
If randomize is True (the default), the candidates are shuffled before the MaxMin algorithm starts. Shuffling gives a sense of how MaxMin is affected by arbitrary tie-breaking.
The heapsweep and shuffle methods depend on a (shared) RNG, which requires an initial seed. If seed is -1 (the default) then use Python’s own RNG to generate the initial seed, otherwise use the value as the seed.
The function returns a MaxMinInfo object with information about what happened. Its picker attribute contains the MaxMinPicker used. If include_scores is true then its result attribute is a PicksAndScores() instance, otherwise it is picker.picks.
If progress is True then a progress bar will be used to show any FPS file load progress and show the number of current picks, relative to num_picks. If False then no progress bar is used. It may also a callable used to create the progress bar.
chemfp.heapsweep(candidates, *, candidates_format=None, num_picks=1, threshold=1.0, all_equal=False, randomize=True, seed=-1, include_scores=True, progress=True)
Use the heapsweep algorithm to pick diverse fingerprints from candidates
The heapsweep algorithm picks fingerprints ordered by their respective maximum Tanimoto score to the rest of the arena, from smallest to largest. It uses a heap to keep track of the current score for each fingerprint (a lower bound to the global maximum score), and a flag specifying if the score is also the upper bound.
For each sweep, if the smallest heap entry is an upper bound, then pick it. Otherwise, find the similarity between the corresponding fingerprint and all other fingerprints in the arena. This sets the global maximum score for the heap entry, and may update the minimum score for the rest of the fingerprints. Update the heap and try again.
This process is repeated until num_picks fingerprints have been picked, or until maximum score for the remaining candidates is greater than threshold or until no candidates are left. A num_picks value of None is an alias for len(candidates) and will select all candidates.
If all_equal is True then additional fingerprints will be picked if they have the same score as pick num_pick.
The default num_picks = 1 and all_equal = False selects a fingerprint with the smallest maximum similarity. This is used as the initial pick for MaxMinPicker.from_candidates(). Use num_picks = 1 and all_equal = True to select all fingerprints with the smallest maximum similarity.
The fingerprints are selected from candidates. If it is not a FingerprintArena then the value is passed to load_fingerprints(), along with values of candidates_format and progress to load the arena.
If randomize is True (the default), the candidates are shuffled before the heapsweep algorithm starts. Shuffling should only affect the ordering of fingerprints with identical diversity scores. It is True by default so the first picked fingerprint is the same as MaxMin.from_candidates. Setting to False should generally be slightly faster.
The shuffle and heapsweep methods depend on a (shared) RNG, which requires an initial seed. If seed is -1 (the default) then use Python’s own RNG to generate the initial seed, otherwise use the value as the seed.
The function returns a HeapSweepInfo object with information about what happened. Its picker attribute contains the HeapSweepPicker used. If include_scores is true then its result attribute is a PicksAndScores() instance, otherwise it is picker.picks.
If progress is True then a progress bar will be used to show any FPS file load progress and show the number of current picks, relative to num_picks. If False then no progress bar is used. It may also a callable used to create the progress bar.
chemfp.spherex`(candidates, *, references=None, initial_picks=None, candidates_format=None, references_format=None, num_picks=1000, threshold=0.4, ranks=None, dise=False, dise_type=None, dise_references=None, dise_references_format=None, randomize=None, seed=-1, include_counts=False, include_neighbors=False, progress=True)
Use sphere picking to select diverse fingerprints from candidates
Sphere picking iteratively picks a fingerprint from a set of candidates such that the fingerprint is not at least threshold similar to any previously picked fingerprint. The process is repeated until num_picks fingerprints are selected or no pickable fingerprints are available.
Several varations of “picks a fingerprint” are supported. If directed sphere exclusion is NOT used, then:
1) The default (randomize = None), or if randomize = True, select the next available candidate at random.
2) If default = False, select the next candidate which has the smallest index in the arena. This biases the picks towards fingerprints with the fewer number of bits set, which are likely fingerprints with lower complexity. It doesn’t appear to be that useful.
Directed sphere exclusion (see the DISE paper by Gobbi and Lee), requires a rank for each fingerprint. The next pick is chosen from one of the fingerprints with the smallest rank. There are three ways to specify the ranks:
A) They can be passed in directly as the ranks array, which must be a list of integers between 0 and 2**64-1.
B) If dise is True then the structures from the DISE paper are used. This requires a chemistry toolkit to generate the reference fingerprints. Use dise_type to specify the fingerprint type to use instead of the one from the candidates.
C) The reference fingerprints for the DISE algorithm may be passed as dise_references. This may be an arena or a fingerprint filename. Use dise_references_format to specify the file format instead of using the extension.
If initial ranks are specified, then there are two additional ways to pick a fingerprint:
3) The default (randomize = None), or if randomize = False, selects the the candidate with the smallest rank, breaking ties by selecting the candidate with the smallest index in the arena.
4) If randomize = True, select randomly from all of the candidates with the smallest rank. NOTE: this method uses a linear search, which may cause quadratic behavior if many fingerprints have the same rank.
The fingerprints are selected from candidates. If it is not a FingerprintArena then the value is passed to load_fingerprints(), along with values of candidates_format and progress to load the arena.
If references is not None then any candidate fingerprints which are at least threshold similar to the reference fingerprints are removed before picking starts. If references is not a FingerprintArena then the value is passed to load_fingerprints(), along with the values of references_format and progress to load the arena.
If references is not specified then optionally use initial_picks to specify the initial picks. This may be a candidate id string or integer index into the candidate array, or a list of id strings or integer indices. The list may be in any order and may contain duplicates. (The neighbor sphere will be empty for any duplicates.)
Initial picks are not necessary. If initial_picks is None then the specified picking method is used.
Some of the pick methods use a random number generator, which requires an initial seed. If seed is -1 (the default) then use Python’s own RNG to generate the initial seed, otherwise use the value as the seed.
The function returns a SpherexInfo object with information about what happened. The picker attribute is the SphereExclusionPicker used. By default the result element is a Picks() instance. If include_counts is true then it is the PicksAndCounts() returned calling the pickers pick_n_with_counts(). If include_neighbors is True then the result is the PicksAndNeighbors() returned from calling pick_n_with_neighbors(). include_counts and include_neighbors cannot both be true.
If progress is True then a progress bar will be used to show any FPS file load progress. If False then no progress bar is used. It may also a callable used to create the progress bar. The sphere picker search does not currently support progress bars.
|
2022-08-17 09:44:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17190301418304443, "perplexity": 6136.623969390065}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572898.29/warc/CC-MAIN-20220817092402-20220817122402-00019.warc.gz"}
|
http://quantum.phys.unm.edu/466-20/
|
Physics 466
## Physics 466, Fall 2020 Physical Mathematics
Welcome to physics 466 for 2020.
Due to the pandemic, all classes will be held online via Zoom.
Classes will start at 5:30 on Tuesday, Wednesday, Thursday and run to 6:45 on Tuesday and Thursday and to 6:20 on Wednesday. I plan to use email to send you invitations for each class. Although the first real class will be on 18 August, I plan to hold a brief trial class at 5:30 on Tuesday, the 11th.
You can learn how to register to vote at vote.org
and at nmvote.org.
We will be using the second edition of a textbook called
Physical Mathematics published in 2019
by Cambridge University Press.
The book is available in the UNM bookstore,
but you can also get it from Amazon and eBooks.
Here is Chapter 1.
Here are some examples for some of the chapters.
Here is Chapter 2.
Here is Chapter 3.
A list of errata in the second edition of Physical Mathematics.
Corrected and improved version of Example 4.13 (Lifetime of a Fluorophore).
Here's a link to a folder containing videos of the problem sessions of physics 468.
Videos of lectures of physics 466:
18 August 2020.
Linear Algebra: 1.1 Numbers, 1.2 Arrays, 1.3 Matrices, 1.4 Vectors,1.5 Linear operators, 1.6 Inner products. Examples of outer products, eigenvectors and eigenvalues. Illustrated examples of the use of Matlab.
20 August 2020.
1.8 Linear independence and completeness 1.9 Dimension of a vector space 1.10 Orthonormal vectors 1.11 Outer products 1.12 Dirac notation 1.13 Adjoints of operators 1.14 Self-adjoint or hermitian linear operators 1.15 Real, symmetric linear operators 1.16 Unitary operators 1.17 Hilbert spaces
25 August 2020.
1.18 Antiunitary, antilinear operators 1.19 Symmetry in quantum mechanics 1.20 Determinants 1.21 Jacobians 1.22 Systems of linear equations 1.23 Linear least squares 1.24 Lagrange multipliers 1.25 Eigenvectors and eigenvalues 1.26 Eigenvectors of a square matrix Also, the LU decomposition and examples of how to use Matlab to find eigenvalues, eigenvectors, determinants, and LU decompositions of matrices.
27 August 2020.
1.26 Eigenvectors and eigenvalues 1.27 Eigenvectors of a square matrix 1.28 A matrix obeys its characteristic equation 1.29 Functions of matrices 1.30 Hermitian matrices 1.31 Normal matrices 1.32 Compatible normal matrices 1.33 Singular-value decompositions 1.34 Moore-Penrose pseudoinverses 1.35 Tensor products and entanglement
1 September 2020.
2 Vector calculus: 2.1 Derivatives and partial derivatives 2.2 Gradient 2.3 Divergence 2.4 Laplacian 2.5 Curl 3 Fourier series 3.1 Fourier series 3.2 The interval 3.3 Where to put the 2$$\pi$$’s 3.4 Real Fourier series for real functions
3 September 2020.
3 Fourier series: 3.1 Fourier series, 3.2 The interval, 3.3 Where to Put the 2pi’s, 3.4 Real Fourier series for real functions, 3.5 Stretched intervals, 3.6 Fourier series of functions of several variables, 3.7 Integration and differentiation of Fourier series, 3.8 How Fourier series converge, 3.9 Measure and Lebesgue integration (barely mentioned), 3.10 Quantum-mechanical examples, 3.11 Dirac’s delta function
3 September 2020.
3.10 Quantum-mechanical examples, 3.11 Dirac’s delta function, 3.12 Harmonic Oscillators, 3.13 Nonrelativistic Strings, 3.14 Periodic Boundary Conditions. Also, the Helmholtz decomposition and a generalization of Fourier series.
10 September 2020.
4 Fourier and Laplace transforms, 4.1 Fourier transforms, 4.2 Fourier transforms of real functions, 4.3 Dirac, Parseval, and Poisson, and some remarks about Lebesgue integration and generalized Fourier series.
15 September 2020.
4.4 Derivatives and integrals of Fourier transforms, 4.5 Fourier transforms of functions of several variables, 4.6 Convolutions, 4.7 Fourier transform of a convolution, 4.8 Fourier transforms and Green’s functions, 4.9 Laplace transforms, 4.10 Inversion of Laplace transforms, 4.11 Volterra’s Convolution, 4.12 Derivatives and integrals of Laplace transforms, 4.13 Laplace transforms and differential equations, and 4.14 Applications to Differential Equations.
17 September 2020.
5 Infinite series: 5.1 Convergence, 5.2 Tests of convergence, 5.3 Convergent series of functions, 5.4 Power series, 5.5 Factorials and the gamma function, 5.6 Euler’s beta function, 5.7 Taylor series, 5.8 Fourier series as power series, 5.9 Binomial series, 5.10 Logarithmic series, 5.11 Dirichlet series and the zeta function, 5.12 Bernoulli numbers and polynomials, 5.13 Asymptotic series, and 5.16 Infinite products.
Homework 1 due Sunday 30 August:
Do problems 1.1, 1.5, 1.11, 1.15, 1.19, & 1.20.
Homework 2 due Sunday 6 September:
Do problems 1.25, 1.28, 1.32, 1.34, & 1.35.
Homework 3 due Sunday 13 September:
Do problems 1.40, 2.1, 2.2, 3.2, 3.16.
Homework 4 due Sunday 20 September:
Do problems 3.17, 3.21, 3.25.
Homework 5 due Sunday 27 September:
Do problems 4.5, 4.6, 4.9, 4.15, and 4.16.
The grader for the course is Mr. Evgeni Zlatanov.
All homework problems are stated in the book Physical Mathematics.
The best way to do your homework is to use latex to make pdf files and to use email to send the grader your pdf files.
TeXShop works well on Apple computers. You can get TeXShop here pages.uoregon.edu/koch/texshop/.
TeXstudio works well on Windows computers. You can get TeXstudio here www.texstudio.org.
Both use TeX Live which you can get here www.tug.org/texlive/acquire-netinstall.html.
TENTATIVE SYLLABUS
Here is what I plan to cover in this course:
Linear algebra: 2 weeks
Vector calculus 0.5
Fourier series: 1.5
Fourier transforms: 1.5
Infinite series: 1
Complex variables: 3
Differential equations: 3
Integral equations: 0.5
Legendre polynomials: 1.5
Bessel functions: 1.5
These are the first ten chapters of the book.
Welcome to physics 466 for 2019.
Class meets in room 184 of the physics building at 1919 Lomas NE at 5:30 pm on Tuesdays and Thursdays.
The problem session for the course, physics 468, will meet in room 5 from 5 to 5:50 and not in room 1131 as originally scheduled.
We will be using the second edition of a textbook called Physical Mathematics published this summer by Cambridge University Press.
You can get it now from Amazon and eBooks.
The book is now available in the UNM bookstore.
Here is Chapter 1.
Here is Chapter 2.
Here is Chapter 3.
A list of errata in the second edition of Physical Mathematics.
Corrected and improved version of Example 4.13 (Lifetime of a Fluorophore).
SYLLABUS
Here is what I plan to cover in this course:
Linear algebra: 2 weeks
Vector calculus 0.5
Fourier series: 1.5
Fourier transforms: 1.5
Infinite series: 1
Complex variables: 3
Differential equations: 3
Integral equations: 0.5
Legendre polynomials: 1.5
Bessel functions: 1.5
These are the first ten chapters of the book.
All homework problems are stated in the book Physical Mathematics. Put homework in Evgeni Zlatanov's mailbox by 3:00 PM on its due date, usually a Friday. You can send him e-mail.
I will be doing some of the homework problems during the weekly problem sessions which are held on Wednesdays at 5 pm in room 5.
You can send me e-mail.
Homework 1 due Friday 30 August:
Do problems 1-3, 5-7, & 9-14 of chapter 1.
Homework 2 due Friday 6 September:
Do problems 15-22, 25, 27-31 of chapter 1.
Homework 3 due Tuesday 17 September:
Do problems 1.32-1.36 and 1.40 and 2.2-2.6 of chapter 2.
Homework 4 due Friday 27 September:
Do problems 3.1, 3.2, 3.4-3.12, and for extra credit 3.16-3.21 of chapter 3.
Homework 5 due Friday 4 October:
Do problems 4.1-4.9 of chapter 4.
Homework 6 due Tuesday 15 October:
Do problems 4.10-4.18 of chapter 4.
Homework 7 due Friday 25 October:
Do problems 5.1-5.5 of chapter 5 and problems 6.1, 6.3, 6.5, and 6.6 of chapter 6.
Homework 8 due Friday 1 November:
Do problems 6.7, 6.8, 6.11, 6.13, 6.15, 6.16, 6.20, and 6.24 of chapter 6.
Homework 8 due Friday 8 November:
Do problems 6.28, 6.30, 6.33, 6.34, 6.35, and 6.38 of chapter 6, and 7.2 and 7.9 of chapter 7.
Homework 9 due Friday 15 November:
Do problems 7.10 -- 7.15, 7.17, and 7.19.
Homework 10 due Monday 25 November:
Do problems 7.25 -- 7.27, 7.29--7.30, and 7.32--7.34.
Homework 11 due Monday 9 December:
Do problems 9.2 (but only for $$n=0, 1,$$ and 2), 9.8, 9.14, 9.17, 9.18, 10.1, 10.3, 10.13, 10.15, 10.18.
There will be a midterm exam on the Thursday, 17 October, after fall break.
The final exam is on Thursday 12 December from 5:30 to 7:30 in our regular classroom 1160.
Videos of lectures:
20 August
Linear algebra: Sections 1.1 Numbers, 1.2 Arrays, 1.3 Matrices, 1.4 Vectors, 1.5 Linear operators, 1.6 Inner products, 1.7 Cauchy–Schwarz inequalities, and 1.8 Linear independence and completeness.
22 August
1.9 Dimension of a vector space, 1.10 Orthonormal vectors, 1.11 Outer products, 1.12 Dirac notation, 1.13 Adjoints of operators, 1.14 Self-adjoint or hermitian linear operators, 1.15 Real, symmetric linear operators.
27 August
1.16 Unitary operators, 1.17 Hilbert spaces, 1.18 Antiunitary and antilinear operators, 1.19 Symmetry in quantum mechanics, 1.20 Determinants, 1.21 Jacobians, 1.22 Systems of linear equations, 1.23 Linear least squares, and 1.24 Lagrange multipliers.
29 August
1.24 Lagrange multipliers, 1.25 Eigenvectors and eigenvalues, 1.26 Eigenvectors of a square matrix, 1.27 A matrix obeys its characteristic equation, 1.28 Functions of matrices, and. 1.29 Hermitian matrices.
3 September
1.30 Normal matrices, 1.31 Compatible normal matrices, 1.32 Singular-value decompositions, 1.33 Moore-Penrose pseudoinverses, 1.34 Tensor products and entanglement, 1.35 Density operators, 1.36 Schmidt decomposition, 1.37 Correlation functions, 1.38 Rank of a matrix, and 1.39 Software.
5 September
2.1 Derivatives and partial derivatives, 2.2 Gradient, 2.3 Divergence, 2.4 Laplacian, and 2.5 Curl
10 September
3.1 Fourier series, 3.2 The interval, 3.3 Where to put the 2pi’s, 3.4 Real Fourier series for real functions, 3.5 Stretched intervals, 3.6 Fourier series of functions of several variables, 3.7 Integration and differentiation of Fourier series, and 3.8 How Fourier series converge.
12 September
3.9 Measure and Lebesgue integration, 3.10 Quantum-mechanical examples, 3.11 Dirac’s delta function, 3.12 Harmonic oscillators, 3.13 Nonrelativistic strings, and 3.14 Periodic boundary conditions.
19 September
4.1 Fourier transforms, 4.2 Fourier transforms of real functions, 4.3 Dirac, Parseval, and Poisson, 4.4 Derivatives and integrals of Fourier transforms, 4.5 Fourier transforms of functions of several variables, 4.6 Convolutions, 4.7 Fourier transform of a convolution, 4.8 Fourier transforms and Green’s functions, 4.9 Laplace transforms, 4.10 Derivatives and integrals of Laplace transforms, 4.11 Laplace transforms and differential equations, and 4.12 Inversion of Laplace transforms.
24 September
Review of Sections 4.1-4.12 and discussion of Section 4.13 Application to differential equations.
26 September
5.1 Convergence, 5.2 Tests of convergence, 5.3 Convergent series of functions, 5.4 Power series, and 5.5 Factorials and the gamma function.
1 October
5.5 Factorials and the gamma function, 5.6 Euler’s beta function, 5.7 Taylor series, 5.8 Fourier series as power series, 5.9 Binomial series, 5.10 Logarithmic series, 5.11 Dirichlet series and the zeta function, 5.12 Bernoulli numbers and polynomials, 5.13 Asymptotic series, 5.14 Fractional and complex derivatives, 5.15 Some electrostatic problems, 5.16 Infinite products, 6.1 Analytic functions, 6.2 Cauchy-Riemann conditions, and 6.3 Cauchy’s integral theorem.
3 October
6.1 Analytic functions, 6.2 Cauchy-Riemann conditions, 6.3 Cauchy’s integral theorem, 6.4 Cauchy’s integral formula, and 6.5 Harmonic functions.
8 October
6.5 Harmonic functions, 6.6 Taylor series for analytic functions, 6.7 Cauchy’s inequality, 6.8 Liouville’s theorem, 6.9 Fundamental theorem of algebra, 6.10 Laurent series, 6.11 Singularities, 6.12 Analytic continuation, and 6.13 Calculus of residues.
15 October
6.14 Ghost contours, 6.15 Logarithms and cuts, 6.16 Powers and roots, 6.17 Conformal mapping, 6.18 Cauchy’s principal value, and 6.19 Dispersion relations.
22 October
Sections 6.19 Dispersion relations, 6.20 Kramers-Kronig relations, 6.21 Phase and group velocities, and 6.22 Method of steepest descent.
24 October
7.1 Ordinary linear differential equations, 7.2 Linear partial differential equations, 7.3 Separable partial differential equations, 7.4 First-order differential equations, and 7.5 Separable first-order differential equations.
29 October
7.6 Hidden separability, 7.7 Exact first-order differential equations, 7.8 Meaning of exactness, 7.9 Integrating factors, 7.10 Homogeneous functions, 7.11 Virial theorem, and 7.12 Legendre’s transform.
31 October
7.12 Legendre’s transform, 7.13 Principle of stationary action in mechanics, 7.14 Symmetries and conserved quantities in mechanics, 7.15 Homogeneous first-order ordinary differential equations, 7.16 Linear first-order ordinary differential equations, 7.17 Small oscillations, 7.18 Systems of ordinary differential equations, 7.19 Exact higher-order differential equations, and 7.20 Constant-coefficient equations.
5 November
7.21 Singular points of second-order ordinary differential equations, 7.22 Frobenius’s series solutions, 7.23 Fuch’s theorem, 7.24 Even and odd differential operators, 7.25 Wronski’s determinant, 7.26 Second solutions, 7.27 Why not three solutions?, 7.28 Boundary conditions, 7.29 A variational problem, and 7.30 Self-adjoint differential operators.
7 November
Introduction to Maxima by Logan Cordonnier, 7.31 Self-adjoint differential systems, 7.32 Making operators formally self adjoint, 7.33 Wronskians of self-adjoint operators, 7.34 First-order self-adjoint differential operators, and 7.35 A constrained variational problem.
12 November
7.35 A constrained variational problem, 7.36 Eigenfunctions and eigenvalues of self-adjoint systems, 7.37 Unboundedness of eigenvalues, 7.38 Completeness of eigenfunctions, 7.39 Inequalities of Bessel and Schwarz, 7.40 Green’s functions, 7.41 Eigenfunctions and Green’s functions, and 7.42 Green’s functions in one dimension.
14 November
7.43 Principle of stationary action in field theory, 7.44 Symmetries and conserved quantities in field theory, 7.45 Nonlinear differential equations, 7.46 Nonlinear differential equations in cosmology, and 7.47 Nonlinear differential equations in particle physics.
19 November
8.1 Differential equations as integral equations, 8.2 Fredholm integral equations, 8.3 Volterra integral equations, 8.4 Implications of linearity, 8.5 Numerical solutions, 8.6 Integral transformations, and 9.1 Legendre’s polynomials, 9.2 The Rodrigues formula, 9.3 Generating function for Legendre polynomials, 9.4 Legendre’s differential equation, 9.5 Recurrence relations, 9.6 Special values of Legendre polynomials, 9.7 Schlaefli’s integral, and 9.8 Orthogonal polynomials.
21 November
9.8 Orthogonal polynomials, 9.9 Azimuthally symmetric laplacians, 9.10 Laplace’s equation in two dimensions, 9.11 Helmholtz’s equation in spherical coordinates, 9.12 Associated Legendre polynomials, 9.13 Spherical harmonics, 9.14 Cosmic microwave background radiation, and 10.1 Cylindrical Bessel functions of the first kind.
26 November
Bessel functions of the first kind, Bessel functions of the second kind, Bessel functions of the third kind, Spherical Bessel functions of the first kind, and Spherical Bessel functions of the second kind.
3 December
Solutions to some of the exercises of the chapter on Legendre polynomials and spherical harmonics, and a quick introduction to general relativity.
5 December
Solutions to some of the exercises on Bessel functions.
All students of physics should read at least the first section of the essay The Trouble with Quantum Mechanics by Steven Weinberg before they graduate.
|
2020-09-18 20:52:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7287123799324036, "perplexity": 4046.847791553206}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400188841.7/warc/CC-MAIN-20200918190514-20200918220514-00173.warc.gz"}
|
http://openstudy.com/updates/56001dd3e4b0ed58e276fe90
|
## anonymous one year ago Which is an example of the associative property? A. –(18 + 3) = –18 + (–3) B. –17 + 0 = –17 C. (–8 + 10) + 4. = –8 + (10 + 4) D. –5 + 7 = 7 + (–5)
1. anonymous
oh is it c?
2. Mehek14
$$\tt{(a+b)+c=a+(b+c)}$$ look for something following that rule
3. anonymous
yea... i think it is :/
4. anonymous
yea i put c :)
5. Mehek14
yes C is correct ^_^
6. anonymous
thanks! :)
7. anonymous
could i ask you another? :)
8. Mehek14
sure
9. anonymous
What is the value of the expression? 19.2 + (–7 + 3.8) A. 16 B. 24 C. 29 D. 30
10. Mehek14
-7 + 3.8 = ?
11. anonymous
-3.2
12. anonymous
a?
13. Mehek14
yes ^_^
14. anonymous
thanks! :)
15. Mehek14
np :)
16. anonymous
What value of m makes the statement true? –(m + 15) = –18 + (–15) A. –33 B. –18 C. 3 D. 18
17. Mehek14
$$\bf{-m-15=-18-15\\-m=-18\\m=18}$$
18. anonymous
thank you again! :p
19. anonymous
What value of x makes the equation true? –(9 + (–17)) = –9 + x
20. anonymous
how r u so good at this stuff? XD
21. anonymous
@pooja195
|
2016-10-22 07:12:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5195624828338623, "perplexity": 8189.440569990672}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718840.18/warc/CC-MAIN-20161020183838-00339-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://stats.stackexchange.com/questions/502266/pca-for-feature-selection-and-calculating-the-total-contributions-of-each-featur
|
# PCA for Feature Selection and Calculating the Total Contributions of Each Features
First of all, I know that using PCA for feature selection is not a true approach however, I have found some articles which uses PCA for feature selection and I want to imitate them. I am having some troubles to get the real logic behind these articles. Here you can find the links of those aforementioned articles below;
Let's assume that I have five different variables (features) for forecasting the outcome. These are wind speed, temperature, humidity, pressure and wind directions.
dat.sample = data.frame(windspeed = rnorm(100, mean = 10, sd = 2),
temp = rnorm(100, mean = 20, sd = 2),
humid = rnorm(100, mean = 80, sd = 5),
press = rnorm(100, mean = 950, sd = 10),
winddir = rnorm(100,mean = 180, sd = 5))
Now lets scale and center the data to ensure that having standart deviation of 1 and mean of 0 for each variable.
library(caret)
preproc = preProcess(dat.sample, method = c("center","scale"))
dat.sample.cs = predict(preproc, dat.sample)
#Ensuring the standart deviation is 1 and mean 0 before proceeding with PCA.
apply(dat.sample.cs, 2, function(x) {c(sd(x),round(mean(x),3))})
PCA is applied to the scaled and centered data with base R function prcomp. After applying PCA, in order to get the eigenvalues of each principal component (PC) and the contribution of each variable for each PC, factoextra library is used.
library(factoextra)
get_eigenvalue(pca)
eigenvalue variance.percent cumulative.variance.percent
Dim.1 1.2263264 24.52653 24.52653
Dim.2 1.1581302 23.16260 47.68913
Dim.3 0.9905302 19.81060 67.49974
Dim.4 0.8372833 16.74567 84.24540
Dim.5 0.7877299 15.75460 100.00000
It is found that while the PC1 represents the ~24.5 percent of the total variance of whole PCs, other variance percents can also be seen. Now, I would like to see the contribution of each variable to the each PCs.
pca.var = get_pca_var(pca)
(contrib = pca.var\$contrib)
Dim.1 Dim.2 Dim.3 Dim.4 Dim.5
windspeed 5.2483398 0.71103782 91.6450906 1.535375 0.8601568
temp 39.4126852 8.99578568 0.9641489 8.337931 42.2894495
humid 43.0894490 0.03033891 1.1556426 42.025891 13.6986782
press 0.1220664 55.14999755 0.2033069 14.220377 30.3042517
winddir 12.1274594 35.11284004 6.0318111 33.880426 12.8474639
Now, it is clear to say that while the maximum contribution to the PC1 comes from the humidity; pressure, wind speed, humidity and temperature are the variables which contributes most from PC2 to PC5 respectively (contributions have directly been associated with the importance of the features). Here comes my questions;
1. Assuming that the first PCs are enough for representing the data; how can a feature selection be made by using this information? Is it okay to use the first variables which contributes most for each PCs within selected first n (n is chosen 4 here) PCs? For instance, in the example above, should the humidity, pressure, wind speed and humidity features be chosen? It also means that having only 3 variables since humidity is selected two times.
2. How can I obtain the total contributions of each variable like the article which is cited above. Since we obtained the importance of each feature for each individual PCs, how can I get the total contributions for each feature? Is it okay to take weighted average for each feature across the whole PCs or the selected PCs (which are 1:4 in this example)? The weights will be the variance percent of each PC. In summary; I would like to get a table like in this article and it can be seen below.
The thing that confuses me in this table is the name of the table. It states that; Contribution rates of principal components. Here, each feature is mentioned as principal component. I do not know if they have used the contributions from only PC1 or whole PCs and it is not completely clear how they calculated this table in the article. Here you can find the related section of the PCA method for feature selection in the article.
Am I missing a thing? How can I obtain a table like that?
|
2021-10-24 03:12:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6421288251876831, "perplexity": 1094.2298944371976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585837.82/warc/CC-MAIN-20211024015104-20211024045104-00193.warc.gz"}
|
https://www.sara-codes.com/l0-norm-l1-norm-l2-norm-l-infinity-norm.html
|
# L0 Norm, L1 Norm, L2 Norm & L-Infinity Norm
L-norms
First of all, what is a Norm? In Linear Algebra, a Norm refers to the total length of all the vectors in a space.
There are different ways to measure the magnitude of vectors, here are the most common:
#### L0 Norm:
It is actually not a norm. (See the conditions a norm must satisfy here).
Corresponds to the total number of nonzero elements in a vector.
For example, the L0 norm of the vectors $$(0,0)$$ and $$(0,2)$$ is $$1$$ because there is only one nonzero element.
A good practical example of L0 norm is the one that gives Nishant Shukla, when having two vectors (username and password). If the L0 norm of the vectors is equal to $$0$$, then the login is successful. Otherwise, if the L0 norm is $$1$$, it means that either the username or password is incorrect, but not both. And lastly, if the L0 norm is $$2$$, it means that both username and password are incorrect.
#### L1 Norm:
Also known as Manhattan Distance or Taxicab norm. L1 Norm is the sum of the magnitudes of the vectors in a space. It is the most natural way of measure distance between vectors, that is the sum of absolute difference of the components of the vectors. In this norm, all the components of the vector are weighted equally.
Having, for example, the vector $$X = [3,4]$$:
The L1 norm is calculated by
$$||X||1 = \left | 3 \right | + \left | 4 \right | = 7$$
As you can see in the graphic, the L1 norm is the distance you have to travel between the origin $$(0,0)$$ to the destination $$(3,4)$$, in a way that resembles how a taxicab drives between city blocks to arrive at its destination.
#### L2 norm:
Is the most popular norm, also known as the Euclidean norm. It is the shortest distance to go from one point to another.
Using the same example, the L2 norm is calculated by
$$||X||2 = \sqrt{(|3|^2 + |4|^2)} = \sqrt{9+16} = \sqrt{25} = 5$$
As you can see in the graphic, L2 norm is the most direct route.
There is one consideration to take with L2 norm, and it is that each component of the vector is squared, and that means that the outliers have more weighting, so it can skew results.
#### L-infinity norm:
Gives the largest magnitude among each element of a vector.
Having the vector $$X= [-6, 4, 2]$$, the L-infinity norm is $$6$$.
In L-infinity norm, only the largest element has any effect. So, for example, if your vector represents the cost of constructing a building, by minimizing L-infinity norm we are reducing the cost of the most expensive building.
I hope you find this article clear and easy to digest, in any other case, feel free to put your question in the comment section below. I’ll be happy to clarify any question 🙂
Category: Machine learning
|
2020-07-12 06:38:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.8936163783073425, "perplexity": 312.4971368400302}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657131734.89/warc/CC-MAIN-20200712051058-20200712081058-00339.warc.gz"}
|
https://buboflash.eu/bubo5/show-dao2?d=4324952771852
|
Question
How to add the file extension .txt to a file ?
The following Get-Extension function adds the .txt file name extension to a file name that you supply:
function Get-Extension { $name =$args[0] + ".txt" $name } Get-Extension myTextFile myTextFile.txt Question How to add the file extension .txt to a file ? Answer ? Question How to add the file extension .txt to a file ? Answer The following Get-Extension function adds the .txt file name extension to a file name that you supply: function Get-Extension {$name = $args[0] + ".txt"$name
}
Get-Extension myTextFile
myTextFile.txt
If you want to change selection, open document below and click on "Move attachment"
the function name. Positional parameter values are assigned to the $args array variable. The value that follows the function name is assigned to the first position in the$args array, $args[0]. <span>The following Get-Extension function adds the .txt file name extension to a file name that you supply: function Get-Extension {$name = $args[0] + ".txt"$name } Get-Extension myTextFile myTextFile.txt Switch Parameters A switch is a parameter that does not require a value. Instead, you type the function name followed by the name of the switch parameter. To define a switch parameter,
#### Summary
status measured difficulty not learned 37% [default] 0
No repetitions
|
2021-10-19 05:14:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40923330187797546, "perplexity": 8768.740445658055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585242.44/warc/CC-MAIN-20211019043325-20211019073325-00438.warc.gz"}
|
http://fashionphotographycourse.com/employee-employer-xbjnts/article.php?tag=b529d9-integral-meaning-in-maths
|
with respect to a measure is often written, In the event that the set in () is an interval , the "subscript-superscript" Stromberg, "Real and abstract analysis" , Springer (1965), E.J. The primitive in the sense of Lebesgue is naturally defined by means of equation \eqref{1}, in which the integral is taken in the sense of Lebesgue. In calculus, an integral is the space under a graph of an equation (sometimes said as "the area under a curve"). gral | \ ˈin-ti-grəl (usually so in mathematics) How to pronounce integral (audio) ; in-ˈte-grəl also -ˈtē- also nonstandard ˈin-trə-gəl \. where is the above-mentioned Lebesgue measure. In particular, when $U(x)=x+C$, the Stieltjes integral \eqref{3} is the Riemann integral $\int_a^bf(x)\,dx$. Providence, RI: Amer. Since the derivative of a constant is zero, indefinite integrals are defined only up to an arbitrary constant of integration , i.e.. Wolfram Research maintains a web site http://integrals.wolfram.com/ that can find the indefinite integral of many of , i.e.. And then finish with dx to mean the slices go in the x direction (and approach zero in width). 40, 561-563, 1983. The Integral Calculator solves an indefinite integral of a function. Soc., 1994. The term "integral" can refer to a number of different concepts in mathematics. Integration is one of the two main operations of calculus; its inverse operation, differentiation, is the other. However, the interesting case for applications is when the function $U$ does not have a derivative. It is clear that if $F$ is a primitive of $f$ on the interval $a0$ there is a $\delta>0$ such that under the single condition $\max(y_i-y_{i-1})<\delta$ the inequality $|\sigma-I|<\epsilon$ holds. one of the most important concepts of mathematics, answering the need to find functions given their derivatives (for example, to find the function expressing the path traversed by a moving point given the velocity of that point), on the one hand, and to measure areas, volumes, lengths of arcs, the work done by forces in a given interval of time, and so forth, on the other. Web Resource. If F' (x) = f(x), we say F(x) is an anti-derivative of f(x). And the process of finding the anti-derivatives is known as anti-differentiation or integration. Press, p. 29, 1988. Moreover, Whenever I take a definite integral in aim to calculate the area bound between two functions, what is the meaning of a negative result? The indefinite integral is an easier way to symbolize taking the antiderivative. According to the fundamental theorem of integral calculus, there exists for each continuous function $f$ on the interval \$a
|
2021-02-27 04:28:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9735997915267944, "perplexity": 293.22170635783203}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358064.34/warc/CC-MAIN-20210227024823-20210227054823-00540.warc.gz"}
|
https://stats.stackexchange.com/questions/309931/orthogonal-polynomials-cross-validation-should-subsetting-be-done-prior-or-af
|
# Orthogonal polynomials + cross validation: should subsetting be done prior or after constructing the orthogonal polynomials?
So, just to start... I've just learned of orthogonal polynomial regression today. I've gone through the master's-level linear models courses, and we did not cover that topic. I was always under the assumption that, especially for polynomial regression, that $\mathbf{X}^{T}\mathbf{X}$ is invertible most of the time, and then you just get the coefficients from $\hat{\boldsymbol\beta} = (\mathbf{X}^{T}\mathbf{X})^{-1}\mathbf{X}^{T}\mathbf{y}$, and everything's great. Any explanation of what's going on here given my background on this would be appreciated as well on the side.
In a question on StackOverflow, I had noticed that different results were obtained under these two scenarios in R:
1) If I had done a regression using glm() with the data argument the training subset of the data and the subset argument omitted, the data are filtered first to only use the training subset of data, and secondly, the orthogonal polynomials are constructed.
2) If I had done a regression using glm() with the data argument the entire data set (training + test data) and the subset argument equal to the (row indices of) the training subset of data, the orthogonal polynomials are constructed first, and secondly, the data are subsetted.
I wanted to call attention to this, as I couldn't find any guidance behind this in Google searching.
For the purpose of cross validation, which one of the two scenarios above should be done? Does it even matter? One of the commenters on the StackOverflow question I posted above pointed out that the fitted values are still the same (according to the GLM fit, that is). However, I can see issues with interpretation of parameter estimates.
FYI: Introduction to Statistical Learning uses the second approach in its R lab examples.
• If the verification data are involved in your modeling in any way whatsoever then they're not honest verification data. In some sense the distinction between (1) and (2) doesn't matter because predictions are unaffected (and it's likely that if you do any cross-validation on the training data you won't be recomputing the orthogonal polynomials for each fold, anyway, so you're already sort of relying on this not mattering). – whuber Oct 30 '17 at 20:52
• For the purposes of cross-validation or any other scheme, I would strongly suggest you leave the test/validation data completely out of the training routine. That means you do the subsetting prior to your constructing the orthogonal polynomial. Yes, there will be cases where the effect can be minimal (eg. the shape of a basis usually won't change horribly given a large enough subset), but they will be other cases where the effect can be severe (eg. when normalising a leptokurtotic variable and trying to compute std. deviations). Opt for the safe choice and do the sub-setting first. – usεr11852 Oct 30 '17 at 23:59
• These comments could just as well be answers IM(very)HO. – eric_kernfeld Nov 2 '17 at 16:31
• @Clarinetist: OK! I will flesh it out a bit more in a few hours. – usεr11852 Nov 2 '17 at 20:01
|
2019-08-26 01:02:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6564939618110657, "perplexity": 653.3485586551596}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330913.72/warc/CC-MAIN-20190826000512-20190826022512-00321.warc.gz"}
|